Dataset columns:
  source: stringclasses (1 value)
  task_type: stringclasses (1 value)
  in_source_id: stringlengths (1 to 8)
  prompt: stringlengths (209 to 40.4k)
  gold_standard_solution: stringlengths (0 to 56.7k)
  verification_info: stringclasses (1 value)
  metadata: stringlengths (138 to 225)
  problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
13958094
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is wrong with this IF condition?When I am giving EEID value as 123456 it should not come into this condition.But I see that it is coming.Can somebody let me know what am I doing wrong? if ((EEID.value.length != 6) || (EEID.value.length != 11)) { alert(EEID.value.length); //This shows that the value length = 6 alert("Your Member ID must be a 6 digit or 11 digit number."); EEID.focus(); return false; } Now provide the response and nothing else.
The condition is satisfied because EEID.value.length is not 11. The or (||) allows either != 6 or != 11 to satisfy the if condition. You need to change the or (||) to an and (&&) as such:

if ((EEID.value.length != 6) && (EEID.value.length != 11)) {
    alert(EEID.value.length);
    alert("Your Member ID must be a 6 digit or 11 digit number.");
    EEID.focus();
    return false;
}

This way, the if condition is satisfied only when EEID.value.length is not 6 and not 11.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13958094', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1721658/']}
jdg_70741
stackexchange
llm_judgeable_groundtruth_similarity
163915
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to show that $f'(0)=0$ for$$f(x)=\int\limits_0^x\sin\left(\frac{1}{t}\right)dt$$But fundamental theorem of calculus is unapplicable here. What should I do? Now provide the response and nothing else.
Here is one approach: Use integration by parts to show that $f(x)=x^2\cos(1/x)-\int_0^x 2t\cos(1/t)dt$ for $x\neq 0$. Use this to show that $\left|\dfrac{f(x)}{x}\right|\leq 2|x|$.
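For reference, a short sketch filling in both steps of the hint (standard calculus, nothing beyond what is stated above): for $t\neq 0$ one has
$$\frac{d}{dt}\left(t^2\cos\frac{1}{t}\right)=2t\cos\frac{1}{t}+\sin\frac{1}{t},$$
so integrating $\sin(1/t)=\frac{d}{dt}\left(t^2\cos(1/t)\right)-2t\cos(1/t)$ from $0$ to $x$ (the boundary term at $0$ vanishes because $t^2\cos(1/t)\to 0$) gives the stated formula for $f(x)$. Then
$$\left|\frac{f(x)}{x}\right|\leq\frac{x^2+\int_0^{|x|}2t\,dt}{|x|}=\frac{2x^2}{|x|}=2|x|\to 0,$$
and since $f(0)=0$, this limit is exactly $f'(0)=0$.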
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/163915', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_70742
stackexchange
llm_judgeable_groundtruth_similarity
37089052
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I get stuck with something on ggplot2 . I read most of the related posts, tried things but did not find any real solution. I want to include mathematical expressions in the label of my facet_grids with ggplot2 . In the raw file, I cannot write the name µg.L-1 In the titles and axis I can do it, for example : qplot(day, activity, data=a) +xlab(expression("100 µg "*.L^"-1"*"")) : this is working well. How do I do for the facet_labels ? I can set the levels and rename the labels factors but the expression is not taken into account, for example : levels(a$group) <- c("control", expression("100 µg "*.L^"-1"*"")) qplot(…, facets=~group) Results : Label of facet 1 is written on the graph : control Label of facet 2 is written on the graph : "100 µg " .L^"-1" "" … and I don’t want that. I don’t want to use facet_grid(.~group, labeller=label_bquote(…)) because I don’t want all my labels to follow the same expression. I want to edit the labels one by one manually …I tried with bquote(…) instead of expression(…) but the same bad result happens Does someone have any clue with this? An example: I define a dataframe : activity<- as.numeric(c("44","41","48","43","42","45","44","39", "47", "68", "88", "57"))group<-c("first","first","first","first","first","first", "second","second","second","second","second","second")day<- c("0", "0", "0", "20","20", "20","0", "0", "0", "20","20", "20" )a<-data.frame(activity, group, day) I plot : require (ggplot2) qplot(day, activity, facets=.~group, data=a, ylim=c(25,90)) I want to change the name of the facet labels and the y axis : levels(a$group)<- c("control", expression("100 µg "*.L^"-1"*""))qplot(day, activity, facets=.~group, data=a, ylim=c(25,90), ylab=expression("fmol "*.µl^"-1"*"")) It works well with the y-axis, however for the facet label, it does not work...Any clue ? Now provide the response and nothing else.
Proposed Solution:

Prerequisite:

activity <- as.numeric(c("44", "41", "48", "43", "42", "45", "44", "39", "47", "68", "88", "57"))
group <- c("first", "first", "first", "first", "first", "first", "second", "second", "second", "second", "second", "second")
day <- c("0", "0", "0", "20", "20", "20", "0", "0", "0", "20", "20", "20")
a <- data.frame(activity, group, day)
require(ggplot2)
levels(a$group) <- c("control", expression("100 µg " * .L^"-1" * ""))

Then:

p1 <- qplot(day, activity, data = a)
p1 + facet_grid(. ~ group, labeller = label_parsed)

Result: the facet strip labels are now rendered as parsed plotmath expressions instead of raw strings.

Explanation:

We create the label as a string that forms a plotmath formula (noting to use ~ in place of spaces), and then tell facet_grid() to parse the label strings passed to it by setting labeller = label_parsed.

Note: the details of the display are described in ?plotmath, but note that geom_text() and facet_grid() use strings, not expressions.

I hope the above helps.

Reference: see Hadley Wickham's page on labellers: https://github.com/hadley/ggplot2/wiki/labeller
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37089052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5838223/']}
jdg_70743
stackexchange
llm_judgeable_groundtruth_similarity
5638001
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have read that a protected member can be accessed from derived classes, but the following does not work. class A{ protected int Test;}class B:A{ A instanceOfA= new A() public B() { instanceOfA.Test //Not possible }} Now provide the response and nothing else.
I have read that a protected member can be accessed from derived classes. Why does my use of "protected" not work?

This is illegal because you have not provided a guarantee that you are accessing data of an instance of "B". Consider this similar case:

abstract class BankAccount
{
    protected int accountNumber;
}

class SwissBankAccount : BankAccount
{
}

// --- in another assembly, evil-doers write ---

class EvilBankAccount : BankAccount
{
    void DoEvil()
    {
        BankAccount b = GetASwissBankAccount();
        int number = b.accountNumber;
    }
}

EvilBankAccount does not inherit from SwissBankAccount, so SwissBankAccount's protected member is not allowed to be used inside EvilBankAccount. You're allowed to access the protected members of your "parents", but not your "siblings"! EvilBankAccount can only access protected members of EvilBankAccount (or a type derived from EvilBankAccount). SwissBankAccount's protected members are off-limits.

The rule is that the type of the "receiver" expression through which the protected instance member is accessed has to be at least as derived as the type declaration containing the member access. For the precise wording of the rule and some illustrative examples see section 3.5.3 of the C# 4.0 specification. Incidentally, C++ also has this rule.

This rule is frequently misunderstood. For a more in-depth analysis of this rule and some of the other consequences of protected access, see my articles on the subject. The most germane articles I've written on this subject are this one and this one. There are a bunch more articles I've written on related topics here (though some of them wander off the topic of protected access itself and onto the topic of how to use protected access to build a data model with parent references.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5638001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/689930/']}
jdg_70744
stackexchange
llm_judgeable_groundtruth_similarity
26451652
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In many cases I find my classes needing private functions to break down their functionality and reuse code. The typical implementation would be : MyClass.h #include "AnotherClass.h"class MyClass {public: float foo() const;private: float fooPrivate(const AnotherClass& ac) const; } MyClass.cpp #include "MyClass.h"float MyClass::foo() const { return fooPrivate(AnotherClass());}float MyClass::fooPrivate(const AnotherClass& ac) const { return ac.foo();} This is ok, however declaring fooPrivate() in the header file can be problematic in the following cases : we might not want to include AnotherClass in the header file if it is only for internal uses and is not needed outside MyClass. if many private functions are needed we risk polluting the header file with unnecessary private functions that make code less clear, increase compile time and is more difficult to maintain. I am aware of the Pimpl idiom that solves all these problems but my question is if we do not want to use the Pimpl is it ok to do something like this for a few functions ? MyClass.h class MyClass {public: float foo() const;} MyClass.cpp #include "MyClass.h"#include "AnotherClass.h"static float fooPrivate(const AnotherClass& ac) { return ac.foo();}float MyClass::foo() const { return fooPrivate(AnotherClass());} In this case it is not needed to include AnotherClass.h in MyClass.h and fooPrivate() can not be called by anyone except from inside MyClass.cpp and after it has been declared. Am I right? Are there any caveats using this or will I end up with problems when my program gets bigger? Now provide the response and nothing else.
Not only is it okay, I would actually recommend it. Private functions may be used, and sometimes must be (when accessing private elements); however, there is one issue with them: even if it is only a declaration, they clutter the class definition, and a user of the class should not have to care about, or be exposed to, the class internals.

On the other hand, static functions or functions declared in an anonymous namespace within a source file are "free". No matter how many of them you have, they do not clutter the header, and on Itanium ABI compiler tool-chains (for example) they do not lead to an exported symbol, speeding up load time.

If there is one downside, though, it is that on those same Itanium-related tool-chains their absence of a name makes for poor backtraces without debug symbols. It can be seen as a minor inconvenience though.

Note: not having direct access to private members of the class is rarely an issue, because a method of the class can easily pass a reference to those members to them. It does mean they cannot build an instance of the class when the constructors are not public, however.
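For completeness, a minimal sketch of the anonymous-namespace variant recommended above, reusing the illustrative names from the question (the helper is renamed fooHelper just to distinguish it from the member version):

// MyClass.cpp
#include "MyClass.h"
#include "AnotherClass.h"

namespace {  // anonymous namespace: internal linkage, nothing here is
             // visible to other translation units or to the header
float fooHelper(const AnotherClass& ac)
{
    return ac.foo();
}
} // namespace

float MyClass::foo() const
{
    return fooHelper(AnotherClass());
}

Compared with the static keyword used in the question, the anonymous namespace achieves the same internal linkage but also works for type definitions, which is why it is generally preferred in modern C++.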
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26451652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/907362/']}
jdg_70745
stackexchange
llm_judgeable_groundtruth_similarity
11309182
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am busy with a project that needs a lot of pdf files. Because all of them need the design of the company I use a background-image with the logo/watermark. Everything goes fine if I have only 1 page, but when there are multiple pages, the background is only on the first. $pdf->Image('bg/background.jpg', 0, 0, 210, 297, '', '', '', false, 0, '', false, false, 0); $pdf->setImageScale(PDF_IMAGE_SCALE_RATIO);$pdf->setPageMark();$pdf->SetAutoPageBreak(true);$pdf->writeHTML($bodyText, true, true, true, true, '');$pdf->lastPage();$pdf->Output('doc.pdf', 'I'); So my $bodyText is more then 1 page... Is there a solution to have every page a background? Thanks Wouter Now provide the response and nothing else.
You could extend the TCPDF class with a custom header function, and add the image to the header with TCPDF::Image. There is an example of how to do this within the TCPDF examples.

From the example:

// Extend the TCPDF class to create custom Header and Footer
class MYPDF extends TCPDF {

    // Page header
    public function Header() {
        // get the current page break margin
        $bMargin = $this->getBreakMargin();
        // get current auto-page-break mode
        $auto_page_break = $this->AutoPageBreak;
        // disable auto-page-break
        $this->SetAutoPageBreak(false, 0);
        // set background image
        $img_file = K_PATH_IMAGES.'image_demo.jpg';
        $this->Image($img_file, 0, 0, 210, 297, '', '', '', false, 300, '', false, false, 0);
        // restore auto-page-break status
        $this->SetAutoPageBreak($auto_page_break, $bMargin);
        // set the starting point for the page content
        $this->setPageMark();
    }
}

And use MYPDF instead of TCPDF, just as you would use TCPDF. The only thing I don't know is whether the PDF body can overlap with the header, but I think it can if you explicitly specify the margins and the header size. Let me know if this works.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11309182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1022839/']}
jdg_70746
stackexchange
llm_judgeable_groundtruth_similarity
302187
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm designing an application with DDD. I'm moving from flat POCO objects to strong domain models, so my question is: Would I have to call my basic CRUD operations (located in my repository layer) from controllers directly, without passing through the domain layer? I can't see any added value to doing that, but I'm not sure if it's inside the DDD practices make that direct call. Now provide the response and nothing else.
The typical entry point for this in DDD is an Application Service. Application services orchestrate calls to repositories and domain objects. They also know about the current execution state and often control the overarching business transaction through a unit of work that is committed at the end of the service method.

For example:

- Create a new domain object
- Add it to the Repository
- Commit the UoW

or

- Get a domain object from the Repository
- Modify it
- Commit the UoW

etc.

The application service can be called from a Controller. In some implementations it is the controller, when people don't want to bother with an additional abstraction layer. But that can lead to a Fat Controller.

Regarding "my basic CRUD operations (located in my repository layer)": while C, R and D are part of a Repository interface, U doesn't have to be if you have a Unit of Work. Updates of all changed domain entities in the UoW will be done automatically on UoW.Save(). A rough sketch of this shape follows.
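To make the shape concrete, here is a hedged C# sketch of such an application service. All type and member names (Order, IOrderRepository, IUnitOfWork, Rename) are illustrative placeholders, not from any particular framework:

using System;

// Illustrative stubs; in a real project these live in the domain/infrastructure layers.
public interface IUnitOfWork { void Commit(); }
public interface IOrderRepository { Order Get(Guid id); void Add(Order order); }

public class Order
{
    public Guid Id { get; } = Guid.NewGuid();
    public string Name { get; private set; }
    public Order(string name) { Name = name; }
    public void Rename(string newName) { Name = newName; } // domain behaviour
}

// The application service: the entry point a controller would call.
public class OrderApplicationService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _uow;

    public OrderApplicationService(IOrderRepository orders, IUnitOfWork uow)
    {
        _orders = orders;
        _uow = uow;
    }

    public Guid CreateOrder(string name)
    {
        var order = new Order(name);   // C: create the domain object
        _orders.Add(order);            //    add it to the repository
        _uow.Commit();                 //    commit the unit of work
        return order.Id;
    }

    public void RenameOrder(Guid id, string newName)
    {
        var order = _orders.Get(id);   // R: get the domain object
        order.Rename(newName);         //    modify it through domain behaviour
        _uow.Commit();                 // U: the unit of work persists the change
    }
}

A controller would then hold only a reference to OrderApplicationService and translate HTTP input into these calls, keeping CRUD plumbing out of both the controller and the domain model.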
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/302187', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/199038/']}
jdg_70747
stackexchange
llm_judgeable_groundtruth_similarity
9358821
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to add a collection of objects to an arrayList ,only if the particular attribute is not null. I am thinking of extending the ArrayList and implementing the check inside the child class. One alternate way is to check for the the attribute before putting it in a Arraylist, but that would mean , i will have to scatter the if checks every where if i need to add the objects to the arraylist based on the logic. I would like to know your thoughts on it ... on a second thought is it a overkill ? Now provide the response and nothing else.
Decorator pattern

I would actually recommend wrapping ArrayList using the well-documented Decorator pattern. You simply wrap your ArrayList with another List implementation that delegates most of the methods but adds validation logic:

public class ValidatingListDecorator extends AbstractList<MyBusinessObject>
{
    private final List<MyBusinessObject> target;

    public ValidatingListDecorator(List<MyBusinessObject> target) {
        this.target = target;
    }

    @Override
    public MyBusinessObject set(int index, MyBusinessObject element) {
        validate(element);
        return target.set(index, element);
    }

    @Override
    public boolean add(MyBusinessObject o) {
        validate(o);
        return target.add(o);
    }

    // few more to implement
}

Advantages:

- You can still access the raw list without validation if you want (but you can restrict this)
- Easier to stack different validations, turning them on and off selectively
- Promotes composition over inheritance, as noted by @helios
- Improves testability
- Does not tie you to a specific List implementation; you can add validation to LinkedList or Hibernate-backed persistent lists. You can even think about a generic Collection decorator to validate any collection.

Implementation notes

Whatever the implementation, remember there are quite a lot of methods you have to remember to override: add(), addAll(), set(), subList() (?), etc. Also your object must be immutable, otherwise the user can add/set a valid object and modify it afterwards to violate the contract.

Good OO design

Finally, I wrote validate(element), but consider element.validate(), which is a better design.

Stacking validations

As noted before, if you want to stack validations, validating each property/aspect in a single, separate class, consider the following idiom:

public abstract class ValidatingListDecorator extends AbstractList<MyBusinessObject>
{
    private final List<MyBusinessObject> target;

    public ValidatingListDecorator(List<MyBusinessObject> target) {
        this.target = target;
    }

    @Override
    public MyBusinessObject set(int index, MyBusinessObject element) {
        validate(element);
        return target.set(index, element);
    }

    protected abstract void validate(MyBusinessObject element);
}

...and a few implementations:

class FooValidatingDecorator extends ValidatingListDecorator {

    public FooValidatingDecorator(List<MyBusinessObject> target) {
        super(target);
    }

    @Override
    protected void validate(MyBusinessObject element) {
        // throw if "foo" not met
    }
}

class BarValidatingDecorator extends ValidatingListDecorator {

    public BarValidatingDecorator(List<MyBusinessObject> target) {
        super(target);
    }

    @Override
    protected void validate(MyBusinessObject element) {
        // throw if "bar" not met
    }
}

Want to only validate foo?

List<MyBusinessObject> list = new FooValidatingDecorator(rawArrayList);

Want to validate both foo and bar?

List<MyBusinessObject> list = new BarValidatingDecorator(new FooValidatingDecorator(rawArrayList));
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9358821', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/354495/']}
jdg_70748
stackexchange
llm_judgeable_groundtruth_similarity
27164693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using java-libpst.0.7.jar for reading PST messages. I am using the following code to open the PST file to read messages. PSTFile pstFile = new PSTFile("Path of the pst file"); I have to close the PST file once after getting the message details. But there is no option to close the PST file. How can I do this? Now provide the response and nothing else.
By reading the code, it's apparent that libpst indeed does not expose a "close" method. The finalize() method does close the underlying file when the PSTFile is garbage-collected, so I'd recommend using it in the smallest scope possible and disposing of it ASAP, but other than that there's not much you can do (except reporting an issue to the project, or better yet, sending a patch yourself, of course).

EDIT 1: PSTFile has a getFileHandle() method which returns the underlying file, so you could close() that:

PSTFile pstFile = new PSTFile("Path of the pst file");
// use the file
pstFile.getFileHandle().close();

EDIT 2: I've created a pull request to add PSTFile.close(). Let's see how it pans out.

EDIT 3: My pull request has been merged (thanks Richard Johnson!). In the next release (or if you build java-libpst yourself) you'll be able to call close() on a PSTFile directly.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27164693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4298865/']}
jdg_70749
stackexchange
llm_judgeable_groundtruth_similarity
2632833
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Haskell, there is a function "take n list" which returns the first n elements from a list. For example "sum (take 3 xs)" sums up the first three elements in the list xs. Does F# have an equivalent? I expected it to be one of the List-functions, but I can't spot anything that seems to match. Now provide the response and nothing else.
To clarify a few things, the difference between Seq.take and Seq.truncate (as pointed out by @sepp2k) is that the second one will give you a sequence that returns at most the number of elements you specified (if the source sequence is shorter, it will give you fewer elements). The sequence generated by Seq.take will throw an exception if you try to access an element beyond the length of the original list (note that Seq.take doesn't throw the exception immediately, because the result is a lazily generated sequence).

Also, you don't need to convert the list to a sequence explicitly. Under the cover, list<'a> is a .NET class that implements the seq<'a> type, which is an interface. The type seq<'a> is actually just a type alias for IEnumerable<'a>, so it is implemented by all other collections (including arrays, mutable lists, etc.). The following code will work fine:

let list = [ 1 .. 10 ]
let res = list |> Seq.take 5

However, if you want to get a result of type list<int> you'll need to convert the sequence back to a list (because a list is a more specific type than a sequence):

let resList = res |> List.ofSeq

I'm not sure why the F# libraries don't provide List.take or List.truncate. I guess the goal was to avoid reimplementing the whole set of functions for all types of collections, so functions whose sequence-based implementation is good enough even when working with a more specific collection type are available only in the Seq module (but that's only my guess...)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2632833', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/174295/']}
jdg_70750
stackexchange
llm_judgeable_groundtruth_similarity
238260
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have given a location defined by latitude and longitude.Now i want to calculate a bounding box within e.g. 10 kilometers of that point. The bounding box should be defined as latmin, lngmin and latmax, lngmax. I need this stuff in order to use the panoramio API . Does someone know the formula of how to get thos points? Edit: Guys i am looking for a formula/function which takes lat & lng as input and returns a bounding box as latmin & lngmin and latmax & latmin.Mysql, php, c#, javascript is fine but also pseudocode should be okay. Edit: I am not looking for a solution which shows me the distance of 2 points Now provide the response and nothing else.
I suggest approximating the Earth's surface locally as a sphere with radius given by the WGS84 ellipsoid at the given latitude. I suspect that the exact computation of latMin and latMax would require elliptic functions and would not yield an appreciable increase in accuracy (WGS84 is itself an approximation). My implementation follows (it's written in Python; I have not tested it):

import math

# degrees to radians
def deg2rad(degrees):
    return math.pi*degrees/180.0

# radians to degrees
def rad2deg(radians):
    return 180.0*radians/math.pi

# Semi-axes of WGS-84 geoidal reference
WGS84_a = 6378137.0  # Major semiaxis [m]
WGS84_b = 6356752.3  # Minor semiaxis [m]

# Earth radius at a given latitude, according to the WGS-84 ellipsoid [m]
def WGS84EarthRadius(lat):
    # http://en.wikipedia.org/wiki/Earth_radius
    An = WGS84_a*WGS84_a * math.cos(lat)
    Bn = WGS84_b*WGS84_b * math.sin(lat)
    Ad = WGS84_a * math.cos(lat)
    Bd = WGS84_b * math.sin(lat)
    return math.sqrt( (An*An + Bn*Bn)/(Ad*Ad + Bd*Bd) )

# Bounding box surrounding the point at given coordinates,
# assuming local approximation of Earth surface as a sphere
# of radius given by WGS84
def boundingBox(latitudeInDegrees, longitudeInDegrees, halfSideInKm):
    lat = deg2rad(latitudeInDegrees)
    lon = deg2rad(longitudeInDegrees)
    halfSide = 1000*halfSideInKm

    # Radius of Earth at given latitude
    radius = WGS84EarthRadius(lat)
    # Radius of the parallel at given latitude
    pradius = radius*math.cos(lat)

    latMin = lat - halfSide/radius
    latMax = lat + halfSide/radius
    lonMin = lon - halfSide/pradius
    lonMax = lon + halfSide/pradius

    return (rad2deg(latMin), rad2deg(lonMin), rad2deg(latMax), rad2deg(lonMax))

EDIT: The following code converts (degrees, primes, seconds) to degrees + fractions of a degree, and vice versa (not tested):

def dps2deg(degrees, primes, seconds):
    return degrees + primes/60.0 + seconds/3600.0

def deg2dps(degrees):
    intdeg = math.floor(degrees)
    primes = (degrees - intdeg)*60.0
    intpri = math.floor(primes)
    seconds = (primes - intpri)*60.0
    intsec = round(seconds)
    return (int(intdeg), int(intpri), int(intsec))
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/238260', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21672/']}
jdg_70751
stackexchange
llm_judgeable_groundtruth_similarity
9621580
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In backbone it seems that I have to get model attributes via model.get('att_name') I'd prever to get them the way I'd get any public field within an object: model.att_name Can anyone think of a way to get around this? eg: In python world I would override getattr on the model something like this: def getattr(self, att): return self.get(att) Oh, and I'm using CoffeeScript Now provide the response and nothing else.
Model attributes (the ones you read and write with get() and set(), or set up via defaults) are stored in instance.attributes. These are also the attributes that are going to be passed to and returned from sync() as well as toJSON(). So when you fetch(), save(), etc., only what is stored in instance.attributes gets passed along.

Nothing will stop you, of course, from having normal properties like instance.foo on your objects. If you want to treat those like the other attributes and pass them along to fetch() and save(), you can do so by providing a custom parse() on your model, which by default does nothing. That said, you should only do this if you really deem it absolutely necessary, if only to comply with the Backbone conventions.
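A tiny illustration of the distinction (this assumes Backbone and its dependencies are loaded; the model and attribute names are made up):

var Song = Backbone.Model.extend({});

var song = new Song({ title: "Yellow" }); // stored in song.attributes
song.get("title");        // "Yellow": read via the attributes hash
song.attributes.title;    // same underlying storage

song.playCount = 3;       // a plain JS property: not in attributes,
                          // so toJSON(), save() and fetch() ignore it
song.toJSON();            // { title: "Yellow" }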
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9621580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214035/']}
jdg_70752
stackexchange
llm_judgeable_groundtruth_similarity
20664460
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to remove or modify a rotate param from my mp4 file using ffmpeg. Based on some google results, i've ended up with such command (i've tried a few more but none of them worked): $ ffmpeg -i ./source.mp4 -metadata rotate=0 ./output.mp4 But it's not working at all. The rotate=270 metadata param is still there in the output file. Can you please advise me on how it should be done? I'm using ffmpeg version 1.2.4 Copyright (c) 2000-2013 the FFmpeg developers built on Dec 9 2013 20:00:03 with Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) configuration: --prefix=/usr/local/Cellar/ffmpeg/1.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=cc --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid Output: ffmpeg version 1.2.4 Copyright (c) 2000-2013 the FFmpeg developers built on Dec 9 2013 20:00:03 with Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) configuration: --prefix=/usr/local/Cellar/ffmpeg/1.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=cc --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid libavutil 52. 18.100 / 52. 18.100 libavcodec 54. 92.100 / 54. 92.100 libavformat 54. 63.104 / 54. 63.104 libavdevice 54. 3.103 / 54. 3.103 libavfilter 3. 42.103 / 3. 42.103 libswscale 2. 2.100 / 2. 2.100 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Users/michal/Downloads/mp4_c8ffe9a587b126f152ed3d89a146b445.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 encoder : Lavf54.63.104 Duration: 00:00:13.80, start: 0.021333, bitrate: 437 kb/s Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 313 kb/s, 7.50 fps, 7.50 tbr, 15360 tbn, 15 tbc Metadata: rotate : 270 handler_name : VideoHandler Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 134 kb/s Metadata: handler_name : SoundHandlerFile '/Users/michal/Desktop/mp4.mp4' already exists. Overwrite ? 
[y/N] y using SAR=1/1[libx264 @ 0x7fb17b01ec00] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2[libx264 @ 0x7fb17b01ec00] profile High, level 2.2[libx264 @ 0x7fb17b01ec00] 264 - core 125 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=7 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00Output #0, mp4, to '/Users/michal/Desktop/mp4.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 rotate : 0 encoder : Lavf54.63.104 Stream #0:0(eng): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 15360 tbn, 7.50 tbc Metadata: rotate : 270 handler_name : VideoHandler Stream #0:1(eng): Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, s16, 128 kb/s Metadata: handler_name : SoundHandlerStream mapping: Stream #0:0 -> #0:0 (h264 -> libx264) Stream #0:1 -> #0:1 (aac -> libfaac)Press [q] to stop, [?] for helpframe= 99 fps= 39 q=32766.0 Lsize= 730kB time=00:00:13.80 bitrate= 433.1kbits/s video:497kB audio:226kB subtitle:0 global headers:0kB muxing overhead 0.821272%[libx264 @ 0x7fb17b01ec00] frame I:8 Avg QP:14.85 size: 11183[libx264 @ 0x7fb17b01ec00] frame P:68 Avg QP:17.68 size: 5348[libx264 @ 0x7fb17b01ec00] frame B:23 Avg QP:18.26 size: 2419[libx264 @ 0x7fb17b01ec00] consecutive B-frames: 53.5% 46.5% 0.0% 0.0%[libx264 @ 0x7fb17b01ec00] mb I I16..4: 31.9% 55.9% 12.1%[libx264 @ 0x7fb17b01ec00] mb P I16..4: 8.2% 9.1% 0.3% P16..4: 54.3% 13.0% 7.6% 0.0% 0.0% skip: 7.4%[libx264 @ 0x7fb17b01ec00] mb B I16..4: 0.2% 0.2% 0.0% B16..8: 49.9% 4.0% 0.3% direct: 8.6% skip:36.8% L0:45.5% L1:50.9% BI: 3.6%[libx264 @ 0x7fb17b01ec00] 8x8 transform intra:53.3% inter:90.4%[libx264 @ 0x7fb17b01ec00] coded y,uvDC,uvAC intra: 41.7% 90.1% 55.3% inter: 23.9% 56.7% 7.1%[libx264 @ 0x7fb17b01ec00] i16 v,h,dc,p: 12% 20% 10% 59%[libx264 @ 0x7fb17b01ec00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 15% 20% 28% 8% 8% 4% 8% 5% 5%[libx264 @ 0x7fb17b01ec00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 24% 23% 18% 6% 11% 5% 8% 3% 3%[libx264 @ 0x7fb17b01ec00] i8c dc,h,v,p: 57% 22% 18% 4%[libx264 @ 0x7fb17b01ec00] Weighted P-Frames: Y:5.9% UV:0.0%[libx264 @ 0x7fb17b01ec00] ref P L0: 61.2% 2.2% 23.5% 12.5% 0.6%[libx264 @ 0x7fb17b01ec00] ref B L0: 74.2% 25.8%[libx264 @ 0x7fb17b01ec00] kb/s:308.34 Now provide the response and nothing else.
Try something like the following:

-metadata:s:v:0 rotate=0

(see https://stackoverflow.com/a/15336581/954643)

And if you want to actually transpose the bits:

-vf 'transpose=1' -metadata:s:v:0 rotate=0

(see https://stackoverflow.com/a/9570992/954643). This transposes the video properly according to your transpose value and wipes the metadata just for the video stream.

Note that the video filter rotate (e.g. -vf 'rotate=PI/2') will rotate the pixels in the video stream but will not change the video dimensions (https://stackoverflow.com/a/23004567/954643).

Per Brun Duyé's comment, the complete command is:

ffmpeg -i input.mp4 -c copy -metadata:s:v:0 rotate=0 output.mp4
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20664460', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/552936/']}
jdg_70753
stackexchange
llm_judgeable_groundtruth_similarity
1507566
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some code where instances of classes have parent<->child references to each other, e.g.: class Node: def __init__(self): self.parent = None self.children = {} def AddChild(self, name, child): child.parent = self self.children[name] = childdef Run(): root, c1, c2 = Node(), Node(), Node() root.AddChild('first', c1) root.AddChild('second', c2)Run() I think this creates circular references such that root , c1 and c2 won't be freed after Run() is completed, right?. So, how do get them to be freed? I think I can do something like root.children.clear() , or self.parent = None - but what if I don't know when to do that? Is this an appropriate time to use the weakref module? What, exactly, do I weakref'ify? the parent attribute? The children attribute? The whole object? All of the above? I see talk about the WeakKeyDictionary and weakref.proxy, but its not clear to me how they should be used, if at all, in this case. This is also on Python 2.4 (can't upgrade). Update: Example and Summary What objects to weakref-ify depends on which object can live without the other, and what objects depend on each other. The object that lives the longest should contain weakrefs to the shorter-lived objects. Similarly, weakrefs should not be made to dependencies - if they are, the dependency could silently disappear even though it is still needed. If, for example, you have a tree structure, root , that has children, kids , but can exist without children, then the root object should use weakrefs for its kids . This is also the case if the child object depends on the existence of the parent object. Below, the child object requires a parent in order to compute its depth, hence the strong-ref for parent . The members of the kids attribute are optional, though, so weakrefs are used to prevent a circular reference. class Node: def __init__(self): self.parent = None self.kids = weakref.WeakValueDictionary() def GetDepth(self): root, depth = self, 0 while root: depth += 1 root = root.parent return depthroot = Node()root.kids['one'] = Node()root.kids['two'] = Node() To flip the relationship around, we have something like the below. Here, the Facade classes require a Subsystem instance to work, so they use a strong-ref to the subsystem they need. Subsystem s, however, don't require a Facade to work. Subsystem s just provide a way to notify Facade s about each other's actions. class Facade: def __init__(self, subsystem): self.subsystem = subsystem subsystem.Register(self)class Subsystem: def __init__(self): self.notify = [] def Register(self, who): self.notify.append(weakref.proxy(who))sub = Subsystem()cli = Facade(sub) Now provide the response and nothing else.
Yep, weakref's excellent here. Specifically, instead of:

self.children = {}

use:

self.children = weakref.WeakValueDictionary()

Nothing else needs to change in your code. This way, when a child has no other references, it just goes away, and so does the entry in the parent's children map that has that child as the value.

Avoiding reference loops is up there with implementing caches as a motivation for using the weakref module. Ref loops won't kill you, but they may end up clogging your memory, especially if some of the classes whose instances are involved in them define __del__, since that interferes with the gc module's ability to dissolve those loops.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1507566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36805/']}
jdg_70754
stackexchange
llm_judgeable_groundtruth_similarity
23308400
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My layout constraints are fine in Interface Builder but an exception occurs at runtime thanks to some part of the framework applying fixed height and width constraints that I really don't want. Why are they there, and how to turn them off? They're the last two constraints shown in the logged list: 2014-04-26 09:02:58.687 BBCNews[32058:60b] Unable to simultaneously satisfy constraints. Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it. (Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints) ( "<NSLayoutConstraint:0xbf478a0 UIView:0xbf4a3c0.height == 0.28125*UIView:0xbf4a3c0.width>", "<NSLayoutConstraint:0xbf47190 UIView:0xbf4a3c0.leading == BNMyNewsCell_landscape:0xbf48b10.leading>", "<NSLayoutConstraint:0xbf47160 UIView:0xbf4a3c0.trailing == BNMyNewsCell_landscape:0xbf48b10.trailing>", "<NSLayoutConstraint:0xbf47130 BNMyNewsCell_landscape:0xbf48b10.bottom == UIView:0xbf4a3c0.bottom>", "<NSLayoutConstraint:0xbf47100 UIView:0xbf4a3c0.top == BNMyNewsCell_landscape:0xbf48b10.top>", "<NSLayoutConstraint:0xd4c3c40 'UIView-Encapsulated-Layout-Width' H:[BNMyNewsCell_landscape:0xbf48b10(304)]>", "<NSLayoutConstraint:0xd4c38a0 'UIView-Encapsulated-Layout-Height' V:[BNMyNewsCell_landscape:0xbf48b10(290)]>"}Will attempt to recover by breaking constraint <NSLayoutConstraint:0xbf478a0 UIView:0xbf4a3c0.height == 0.28125*UIView:0xbf4a3c0.width> Now provide the response and nothing else.
Based on a ton of observation I believe (but cannot know for certain) that the constraints named UIView-Encapsulated-Layout-Width and UIView-Encapsulated-Layout-Height are created by UICollectionView and friends, and exist to enforce the size returned by the sizeForItemAtIndexPath delegate method. I guess they are there to ensure that the UICollectionViewCell set up by cellForItemAtIndexPath ends up the size that it was told it would be. Which answers my initial question here.

A second question is why the constraints were unsatisfiable. The cell's intrinsic height should have been the same as UIView-Encapsulated-Layout-Height. Again, I don't know for certain, but I suspect it was a rounding error (i.e. the intrinsic height came to 200.1 pixels and UIView-Encapsulated-Layout-Height was rounded to 200).

The fix I came up with was to just lower the priority of the relevant cell constraint to allow UIView-Encapsulated-Layout-Height to have the last word. A sketch of that fix is below.
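As an illustration, here is a minimal modern-Swift sketch of that fix (the original context was Objective-C; the 0.28125 aspect ratio comes from the log above, everything else is an illustrative placeholder):

import UIKit

final class MyCell: UICollectionViewCell {
    private let content = UIView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        content.translatesAutoresizingMaskIntoConstraints = false
        contentView.addSubview(content)

        // Our own height constraint gets priority 999, so the system-added
        // UIView-Encapsulated-Layout-Height (priority 1000) wins when the two
        // disagree by a rounding error, instead of logging an unsatisfiable set.
        let height = content.heightAnchor.constraint(equalTo: content.widthAnchor, multiplier: 0.28125)
        height.priority = UILayoutPriority(rawValue: 999)

        NSLayoutConstraint.activate([
            content.leadingAnchor.constraint(equalTo: contentView.leadingAnchor),
            content.trailingAnchor.constraint(equalTo: contentView.trailingAnchor),
            content.topAnchor.constraint(equalTo: contentView.topAnchor),
            content.bottomAnchor.constraint(equalTo: contentView.bottomAnchor),
            height
        ])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

The same effect can be achieved in Interface Builder by setting the constraint's priority to 999 instead of Required (1000).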
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/23308400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/440602/']}
jdg_70755
stackexchange
llm_judgeable_groundtruth_similarity
33612956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just changed my server and experience these errors below: Fatal error: Call to undefined function mysqli_init() in /home/blacktwitter/public_html/system/database/drivers/mysqli/mysqli_driver.php on line 126A PHP Error was encounteredSeverity: WarningMessage: Cannot modify header information - headers already sent by (output started at /home/blacktwitter/public_html/system/database/drivers/mysqli/mysqli_driver.php:126)Filename: core/Common.phpLine Number: 564Backtrace:A PHP Error was encounteredSeverity: ErrorMessage: Call to undefined function mysqli_init()Filename: mysqli/mysqli_driver.phpLine Number: 126Backtrace: Website is in Codeigniter. It works on one server very well and on the local machine too. But when I upload that website at the new server I have that errors. Of course I changed important parameters like database connection, base_url() etc. I was suspicious about database, but I have created a new database and user etc. and changed connection info. Why does this happen? It will be helpful to know if it is a bug at the server or at the website. Also when I create some index.html with some test code everything is fine. Now provide the response and nothing else.
It is not a bug in your application, it is just a missing driver, so you have a couple of options...

Go to your php.ini and uncomment the following:

extension=php_mysqli.dll

If not, try installing it on your server; the package name varies depending on your distribution. Try installing php5-mysqlnd.

If you cannot do it because of hosting restrictions, then just switch to the mysql driver (you won't need to change other configurations or queries in CodeIgniter or anything else...) like this, in your config file:

$db['default']['dbdriver'] = 'mysql';

(you probably have 'mysqli' there now)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33612956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3004315/']}
jdg_70756
stackexchange
llm_judgeable_groundtruth_similarity
351243
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Looking at some old midterms online I came across this problem and I'm having some difficulties proving it. Let $T$ be a torus of revolution paramterized by,$$x(u,v)=((r\cos u+a)\cos v,(r\cos u+a)\sin v ,r\sin u)$$ where $a,r \in \Bbb{R}$ with $a>r$ and $ 0 < u< 2 \pi$, $0 < v < 2 \pi$. Prove that if a geodesic is tangent to the parallel $u= \pi/2$, then it is entirely contained in the region of $T$ given by $-\frac{\pi}{2}\le u \le \frac{\pi}{2}$. Also show that a geodesic that intersects the parallel $u=0$ under the angle $\theta$ ($0 < \theta < \pi/2$) also intersects the parallel $u=\pi$ if $\cos \theta < \frac{(a-r)}{(a+r)}$. Now provide the response and nothing else.
There are a few types of geodesics on a torus of revolution. There are meridians, which are circles going the short way around. There is an inner equator and an outer equator. After those, there are about three types that pass through a point on the outer equator with some starting angle $\theta_0$ compared with the horizontal.

(A) Small $\theta_0$, in which case Clairaut says that they reach a minimum distance $r$ away from the $z$ axis, where $\cos \theta = 1$ and $\theta = 0.$ By various symmetry properties, these simply return to the outer equator, and pass through it again at angle $-\theta_0,$ thus making a wave forever.

(B) Large $\theta_0$, in which case they reach the inner equator, pass it at a nonzero angle, and wrap around the torus forever.

(C) A critical intermediate value of $\theta_0,$ precisely the one that says that the angle $\theta$ at the inner equator would be $0,$ by Clairaut. In this case, the geodesic never actually reaches the inner equator; it wraps around and around it, getting closer and closer. do Carmo's first book does not picture this on the torus. On pages 262-263, he talks about this type (C) for a hyperboloid of revolution, with Figure 4-22.
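For reference, the bookkeeping behind these cases is Clairaut's relation: along a geodesic on a surface of revolution, $\rho\cos\theta$ is constant, where $\rho$ is the distance to the axis of revolution and $\theta$ is the angle the geodesic makes with the parallel through the point. A sketch of how it applies to the torus in the question, where $\rho=a+r\cos u$:
$$(a+r\cos u)\cos\theta=\text{const}.$$
If the geodesic is tangent to the parallel $u=\pi/2$, the constant equals $a$, so at any point of the geodesic $\cos\theta=\dfrac{a}{a+r\cos u}\leq 1$ forces $\cos u\geq 0$, i.e. $-\pi/2\leq u\leq\pi/2$. If it crosses $u=0$ at angle $\theta$, the constant is $(a+r)\cos\theta$, and reaching $u=\pi$ (where $\rho=a-r$) requires $(a+r)\cos\theta\leq a-r$; the strict inequality $\cos\theta<\dfrac{a-r}{a+r}$ rules out the borderline type (C) behaviour, so the geodesic actually crosses the parallel $u=\pi$.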
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/351243', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/62931/']}
jdg_70757
stackexchange
llm_judgeable_groundtruth_similarity
31768349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried an SSE (Server-Sent-Events) using java on tomcat 8.0. Here are few things I noticed. I click a button that automatically makes a request to the servlet. Servlet's GET method gets executed which returns an event stream. Once the full stream is received, the page again automatically makes another request which receives the same data again!!! I don't have an infinite loop there!!! What is actually happening on the server? In normal scenarios, tomcat creates a thread to handle every request. What is happening now? What is the correct way to ensure that the event stream is sent only once to the same connection/browser session? What is the correct way to ensure that the event stream is closed and no resource overhead incurs on the server? How to differentiate between GET and POST requests. Why did it choose GET? Is it too early to use SSE on Tomcat? Any performance issues? Here is the code for the curious, @WebServlet("/TestServlet")public class TestServlet extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { //content type must be set to text/event-stream response.setContentType("text/event-stream"); //cache must be set to no-cache response.setHeader("Cache-Control", "no-cache"); //encoding is set to UTF-8 response.setCharacterEncoding("UTF-8"); PrintWriter writer = response.getWriter(); for(int i=0; i<10; i++) { System.out.println(i); writer.write("data: "+ i +"\n\n"); writer.flush(); try { Thread.sleep(3000); } catch (InterruptedException e) { e.printStackTrace(); } } writer.close(); }} Javascript on the page (I don't have anything else on the page), <button onclick="start()">Start</button><script type="text/javascript"> function start() { var eventSource = new EventSource("TestServlet"); eventSource.onmessage = function(event) { console.log("data: "+event.data) document.getElementById('foo').innerHTML = event.data; }; }</script> Tried this using CURL. And the response came just once. I'm using chrome, so this must be a issue with chorme?? EDIT: What I have learned and learning is now documented in my blog - Server Sent Events Now provide the response and nothing else.
Change this line

writer.write("data: "+ i +"\n\n");

to

writer.write("data: "+ i +"\r\n");

BTW, your code will have a serious performance issue because it holds a thread until all events are sent. Please use the Asynchronous processing API instead, e.g.:

protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    AsyncContext actx = req.startAsync();
    actx.setTimeout(30*1000);
    // save actx and use it when we need to send data to the client
}

Then we can use the AsyncContext later:

// write some data to the client when a certain event happens
actx.getResponse().getWriter().write("data: " + mydata + "\r\n");
actx.getResponse().getWriter().flush();

If all events have been sent, we can close it:

actx.complete();

UPDATE 1:

We need to close the event source in the browser if we do not want the browser to reconnect to the server when the server completes the response:

eventSource.close();

Another method that may help is to set a very large retry time, but I have not tried it, e.g.:

protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    AsyncContext actx = req.startAsync();
    actx.getResponse().getWriter().write("retry: 36000000000\r\n"); // 10000 hours!
    actx.getResponse().getWriter().flush();
    // save actx and use it when we need to send data to the client
}

UPDATE 2:

I think WebSocket may be better for your case.

UPDATE 3: (answers to the questions)

What is actually happening on the server? In normal scenarios, tomcat creates a thread to handle every request. What is happening now?

If you use the NIO connector, which is the default in Tomcat 8.0.x, the HTTP I/O for a request will not hold a thread for the whole processing cycle. If you use the BIO connector, a thread is held until the whole processing cycle completes. All threads come from a thread pool; tomcat won't create a new thread for each request.

What is the correct way to ensure that the event stream is sent only once to the same connection/browser session?

Calling eventSource.close() on the browser side is the best choice.

What is the correct way to ensure that the event stream is closed and no resource overhead incurs on the server?

Do not forget to invoke AsyncContext.complete() on the server side.

How to differentiate between GET and POST requests. Why did it choose GET?

The EventSource API in a browser only supports GET requests, but on the server side there's no such restriction. SSE is mainly used to receive event data from the server: if an event happens, the browser can receive it in time and there is no need to create a new request to poll for it. If you need full-duplex communication, try WebSocket instead of SSE.

Is it too early to use SSE on Tomcat? Any performance issues?

There should be no performance issues if we use the NIO connector and the Asynchronous processing API. I don't know whether the Tomcat NIO connector is mature or not, but something will never be known unless we try it.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31768349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/548634/']}
jdg_70758
stackexchange
llm_judgeable_groundtruth_similarity
640174
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use nginx to proxy pass to two docker containers.Here is my upstream conf file: upstream api_servers { server http://192.168.49.4:49155; server http://192.168.49.4:49156;} This is what I get trying to load it: nginx: [emerg] invalid host in upstream "http://192.168.49.4:49155" in /etc/nginx/conf.d/api_upstream.conf:3nginx: configuration file /etc/nginx/nginx.conf test failed Once I removed the http:// prefixes the error stopped occuring. Why is that? Now provide the response and nothing else.
The upstream block is a list of servers with optional status pooling and connection restrictions. The protocol used to reach these servers must be specified in the proxy_pass directive.

upstream api_servers {
    server 192.168.49.4:49155;
    server 192.168.49.4:49156;
}

server {
    [ ... ]

    location /foo/ {
        proxy_pass http://api_servers/;
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/640174', 'https://serverfault.com', 'https://serverfault.com/users/170021/']}
jdg_70759
stackexchange
llm_judgeable_groundtruth_similarity
28958327
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For a few days I was re-writing install.sh file for Scrollback project and since I was the only one working on this and doing it locally, I kept commit ammending the same commit, pushing once in a while to my fork's master. (please ignore best practises here, I was working alone). In between I remember emailing someone showing my half done work, the URL https://github.com/sindhus/scrollback/blob/8d8f7f414383499c2ab6fec586b4c9665a41c7aa/install.sh Now by some confusion I lost out on my work locally (think rm -rf), I remember pushing prior to this. So github at some point did see my rebased commit ID of install.sh. As you can see the above URL lets me access this blob by a commit ID.However I can't access it locally because that same repo was force pushed. My question how do I get github to show me all commit IDs for a file EVER? All IDs it possibly knows of for that file regardless of path. If I have to use their API I don't mind but I'd like some ideas to dig deep into this. Thanks! Now provide the response and nothing else.
My question: how do I get github to show me all commit IDs for a file EVER

If you force-pushed (git push --force) your revised commit once in a while, that commit 8d8f7 has been replaced by a more recent commit with a different SHA. That means 8d8f7 is now only referenced in the reflog of the GitHub repo, which only GitHub support can give you access to. Cloning the repo would not include 8d8f7 in the local history of that cloned repo.

GitHub "reflog": push events from the GitHub Events API

Actually, the OP sindhus points out in the comments "Recovering a commit from Github's Reflog" by John Engelman: the GitHub Events API allows you to browse through the last events:

curl https://api.github.com/repos/<user>/<repo>/events

The "PushEvent" is the one to look for. Then one can directly create a branch on GitHub in order to make that commit visible again (because it is no longer dangling, but referenced by an actual object like a branch):

curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d '{"ref":"refs/heads/D-commit", "sha":"384f275933d5b762cdb27175aeff1263a8a7b7f7"}' https://api.github.com/repos/<user>/<repo>/git/refs

# JSON request
{
  "ref": "refs/heads/D-commit",
  "sha": "384f275933d5b762cdb27175aeff1263a8a7b7f7"
}

You might need to use a token for the authentication.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28958327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3393361/']}
jdg_70760
stackexchange
llm_judgeable_groundtruth_similarity
8980044
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that work items are grouped into the work groups, and you cannot synchronize outside of a work group. Does it mean that work items are executed in parallel? If so, is it possible/efficient to make 1 work group with 128 work items? Now provide the response and nothing else.
The work items within a group will be scheduled together, and may run together. It is up to the hardware and/or drivers to choose how parallel the execution actually is. There are different reasons for this, but one very good one is to hide memory latency.

On my AMD card, the 'compute units' are divided into 16 4-wide SIMD units. This means that 16 work items can technically be run at the same time in the group. It is recommended that we use multiples of 64 work items in a group, to hide memory latency. Clearly they cannot all run at the exact same time. This is not a problem, because most kernels are in fact memory bound, so the scheduler (hardware) will swap out the work items waiting on the memory controller while the 'ready' items get their compute time.

The actual number of work items in the group is set by the host program, and limited by CL_DEVICE_MAX_WORK_GROUP_SIZE. You will need to experiment with the optimal work group size for your kernel; a minimal host-side sketch of how this is set is shown below.

The cpu implementation is 'worse' when it comes to simultaneous work items. There are only ever as many work items running as you have cores available to run them on. They behave more sequentially on the cpu.

So do work items run at exactly the same time? Almost never really. This is why we need to use barriers when we want to be sure they pause at a given point.
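For illustration, a minimal host-side sketch in C of how the work-group size is chosen and passed to the runtime (error handling omitted; the device, queue, kernel and sizes are placeholders the caller is assumed to have set up already):

#include <stdio.h>
#include <CL/cl.h>

void launch(cl_device_id device, cl_command_queue queue, cl_kernel kernel)
{
    size_t max_wg_size = 0;

    /* Query the per-group limit mentioned above. */
    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(max_wg_size), &max_wg_size, NULL);
    printf("Device allows at most %zu work items per group\n", max_wg_size);

    size_t global_size = 1024;  /* total work items                        */
    size_t local_size  = 128;   /* work items per group (<= max_wg_size)   */

    /* The local_size argument is what fixes the work-group size. */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                           &global_size, &local_size, 0, NULL, NULL);
}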
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8980044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1165993/']}
jdg_70761
stackexchange
llm_judgeable_groundtruth_similarity
204365
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Problem 1: Find all functions $f:\mathbf{Z}^2 \to \mathbf{R}$ which are harmonic in the sense that $$f(x,y) = \frac{f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)}{4}$$ for all $(x,y)\in\mathbf{Z}^2$ , and which are also Lipschitz in the sense that the gradients $$f(x+1,y)-f(x,y)\\f(x-1,y)-f(x,y)\\f(x,y+1)-f(x,y)\\f(x,y-1)-f(x,y)$$ are all globally bounded. Obvious examples: $f = 1$ , $f=x$ , $f=y$ , and linear combinations of these. Is this all? Problem 2: What further examples do we get if we weaken the Lipschitz condition, say by allowing the gradients to grow at most linearly (with respect to distance from $(0,0)$ )? Problem 3: How much do the character of our examples change if we replace the generating set $S = \lbrace (1,0),(-1,0),(0,1),(0,-1)\rbrace$ by another (symmetric) set $S$ which generates $\mathbf{Z}^2$ ? For example, what if we require that $$f(x) = \frac{1}{|S|}\sum_{s\in S}f(x+s),$$ for, say, $S = \lbrace s: \|s\|_2\leq 100\rbrace$ ? Does the dimension of the space of Lipschitz harmonic functions change? [Background: I'm trying to understand Kleiner's theorem, which states that if a finitely generated group $G$ has polynomial growth then the space of Lipschitz harmonic functions on $G$ has finite dimension. The simplest example $G=\mathbf{Z}$ is pretty simple, but the second simplest example $G=\mathbf{Z}^2$ already seems nonobvious to me.] Now provide the response and nothing else.
Lemma . Let $f$ be a bounded harmonic function on $\mathbb Z^n$ with respect to a random walk that is not restricted to any proper subset of the grid. Then $f$ is constant. Proof (from Principles of Random Walk by F. Spitzer). If $f$ is nonconstant, then the function $g(x)=f(x+a)-f(x)$ has positive supremum $M$ for some $a$. Let $x_n$ be a sequence such that $g(x_n)\to M$. Let $g_n(x)=g(x+x_n)=f(x+a+x_n)-f(x+x_n)$. Using the Cantor diagonal argument, pick a subsequence of $g_n$ that converges at every point of the grid. Let $h$ be the limit of this subsequence. Since $h$ is harmonic and attains its maximum at $0$, we have $h\equiv M$. Due to pointwise convergence, for any positive integer $N$ there exists $n$ such that $g_n>M/2$ at the points $0,a, 2a, \dots, Na$. It follows that $f(x_n+Na)-f(x_n)=\sum_{k=0}^{N-1} g_n(ka)>MN/2$ which contradicts the boundedness of $f$. $\Box$ Another, more probabilistic proof is here , but it uses the recurrence of random walk and therefore works only in two dimensions (not for general dimension as claimed there). Answer 1 : If a harmonic function on $\mathbb Z^2$ is Lipschitz, then it's of the form $f(x,y)=ax+by+c$. Indeed, $g(x,y)=f(x+1,y)-f(x,y)$ is bounded and harmonic, therefore constant by the Lemma. Similarly, $f(x,y+1)-f(x,y)$ is constant and thus $f$ is linear. Partial Answer 2 : Linear growth on derivatives allows for 2nd degree harmonic polynomials $f(x,y)=xy$ and $f(x,y)=x^2-y^2$. I think these are all (i.e., the space is 5-dimensional) but don't have a proof. Higher-order polynomial bounds will allow for higher degree polynomials, which are similar, but not identical to the harmonic polynomials on $\mathbb R^2$: see the expository article Discrete analytic functions by Lovász. Answer 3 : The space will not change, because the proof from Answer 1 applies here as well.
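For the quadratic examples mentioned in the partial answer to Problem 2, a direct computation (using the same 4-neighbour mean as in the question) verifies that they are discrete harmonic: $$\frac{(x+1)y+(x-1)y+x(y+1)+x(y-1)}{4}=\frac{4xy}{4}=xy,$$ $$\frac{\bigl[(x+1)^2-y^2\bigr]+\bigl[(x-1)^2-y^2\bigr]+\bigl[x^2-(y+1)^2\bigr]+\bigl[x^2-(y-1)^2\bigr]}{4}=\frac{4x^2-4y^2}{4}=x^2-y^2,$$ and their partial differences (for instance $f(x+1,y)-f(x,y)=y$ for $f=xy$) indeed grow at most linearly, as Problem 2 allows.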
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/204365', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23805/']}
jdg_70762
stackexchange
llm_judgeable_groundtruth_similarity
16653
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Somebody divided $\frac{cx+d}{ax+b}$ into $$ \frac{c}{a} + \frac{d- \frac{bc}{a}}{ax+b} .$$ For use for integrals. Does anybody knows how was it done? Could you show me how to do something like that, please? Thanks in advance. Now provide the response and nothing else.
It's polynomial long division . In this relatively simple case (first degree polynomials in both numerator and denominator) you can also do it like this, using an "add-and-subtract" trick:$$ \frac{cx+d}{ax+b} = \frac{c(x + d/c)}{a(x+b/a)} = \frac{c}{a} \frac{(x+b/a)+(d/c-b/a)}{x+b/a} = \frac{c}{a} \left( 1 + \frac{d/c-b/a}{x+b/a} \right).$$
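A concrete instance, with numbers chosen purely for illustration (the trick assumes $a\neq 0$ and $c\neq 0$): $$\frac{3x+5}{2x+1}=\frac{3}{2}+\frac{5-\frac{3\cdot 1}{2}}{2x+1}=\frac{3}{2}+\frac{7/2}{2x+1},$$ which is exactly the form that is convenient for integration: $$\int\frac{3x+5}{2x+1}\,dx=\frac{3}{2}x+\frac{7}{4}\ln|2x+1|+C.$$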
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/16653', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1755/']}
jdg_70763
stackexchange
llm_judgeable_groundtruth_similarity
316476
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Overview I am attempting to replicate a proof Hardy provided in his book - A Course of Pure Mathematics, yet am having trouble with one of the steps. I was wondering if someone can explain how he made a step in the proof. Statement If $M$ and $N$ are integers which have no common factor, and neither of which is a perfect square, $\sqrt{M}$ and $\sqrt{N}$ are dissimilar surds. His Proof up to the Point of Confusion Suppose that $\sqrt{M}$ and $\sqrt{N}$ are similar surds. Then we can instead write them as: $\sqrt{M}=\dfrac{p}{q} \sqrt{\dfrac{t}{u}}$ and $\sqrt{N}=\dfrac{r}{s} \sqrt{\dfrac{t}{u}}$ Then $\sqrt{MN}$ is evidently rational, and therefore (from a previous example) integral. The example he is referring to An algebraic equation, $x^n+p_1 x^{n-1} +p_2 x^{n-2}+...+p_n=0$ with integral coefficients, cannot have a rational but non-integral root. My Question How was Hardy able to determine that $\sqrt{MN}$ was integral from that example he was referring to? Now provide the response and nothing else.
Since the balls are indistinguishable, and you need to put an even number of them into each box, you might as well glue them together in pairs. Now you just want to know the number of ways to distribute these glued pairs amongst the four boxes. If you let $x,y,z$, and $w$ be the numbers of glued pairs in the four boxes, then clearly you must have $x+y+z+w=10$, and each of $x,y,z$, and $w$ must be a non-negative integer. Conversely, if you have four non-negative integers $x,y,z$, and $w$ whose sum is $10$, they give you a possible distribution of the glued pairs: $x$ in the first box, $y$ in the second box, and so on. Thus, the number of ways to distribute the glued pairs is the same as the number of solutions of $x+y+z+w=10$ in non-negative integers. This is a standard stars-and-bars problem, with (as you found) the solution $$\binom{10+4-1}{4-1}=\binom{13}3=286\;.$$ The reasoning behind this result is described quite well in the linked article, but I’ll repeat here the special case that you need. Think of laying out the $10$ glued pairs in a row on the table: $$\begin{array}{c}\infty&\infty&\infty&\infty&\infty&\infty&\infty&\infty&\infty&\infty\end{array}$$ They’re completely indistinguishable, so it doesn’t matter which one is which. Now insert three dividers to mark the breaks between the pairs in the first and second boxes, those in the second and third boxes, and those in the third and fourth boxes. For instance, if there are $3$ pairs in the first box, none in the second, $5$ in the third, and $2$ in the fourth, you get this arrangement: $$\begin{array}{c}\infty&\infty&\infty&|&|&\infty&\infty&\infty&\infty&\infty&|&\infty&\infty\end{array}$$ In any such arrangement of pairs and dividers you have a string of $10+3=13$ objects. There are $\binom{13}3$ ways to pick the $3$ positions for the dividers, and each choice of positions for the dividers corresponds to exactly one of the possible distributions of the glued pairs. Conversely, each distribution can be represented by one of these arrangements of $10$ pairs and $3$ dividers. Thus, there are $\binom{13}3$ distributions of the glued pairs. And as we saw at the beginning, each of them corresponds to exactly one of the possible distributions of balls having an even number of balls in each box.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/316476', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/49811/']}
jdg_70764
stackexchange
llm_judgeable_groundtruth_similarity
39401393
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In WooCommerce from the following line code: $order = new WC_Order( $order_id ); How can I get WooCommerce order details from the order ID? Now provide the response and nothing else.
WOOCOMMERCE ORDERS IN VERSION 3.0+ Since Woocommerce mega major Update 3.0+ things have changed quite a lot: For WC_Order Object, properties can't be accessed directly anymore as before and will throw some errors. New WC_Order and WC_Abstract_Order getter and setter methods are now required on the WC_Order object instance. Also, there are some New classes for Order items: WC_Order_Item class , WC_Order_Item_Product class , WC_Order_Item_Tax class , WC_Order_Item_Shipping class , WC_Order_Item_Coupon class , WC_Order_Item_Fee class . Additionally, WC_Data Abstract class allow to access Order and order items data using get_data() , get_meta_data() and get_meta() methods. Related: • How to get Customer details from Order in WooCommerce? • Get Order items and WC_Order_Item_Product in WooCommerce 3 So the Order items properties will not be accessible as before in a foreach loop and you will have to use these specific getter and setter methods instead. Using some WC_Order and WC_Abstract_Order methods (example): // Get an instance of the WC_Order object (same as before)$order = wc_get_order( $order_id );$order_id = $order->get_id(); // Get the order ID$parent_id = $order->get_parent_id(); // Get the parent order ID (for subscriptions…)$user_id = $order->get_user_id(); // Get the costumer ID$user = $order->get_user(); // Get the WP_User object$order_status = $order->get_status(); // Get the order status (see the conditional method has_status() below)$currency = $order->get_currency(); // Get the currency used $payment_method = $order->get_payment_method(); // Get the payment method ID$payment_title = $order->get_payment_method_title(); // Get the payment method title$date_created = $order->get_date_created(); // Get date created (WC_DateTime object)$date_modified = $order->get_date_modified(); // Get date modified (WC_DateTime object)$billing_country = $order->get_billing_country(); // Customer billing country// ... and so on ... 
For order status as a conditional method (where "the_targeted_status" need to be defined and replaced by an order status to target a specific order status) : if ( $order->has_status('completed') ) { // Do something} Get and access to the order data properties (in an array of values): // Get an instance of the WC_Order object$order = wc_get_order( $order_id );$order_data = $order->get_data(); // The Order data$order_id = $order_data['id'];$order_parent_id = $order_data['parent_id'];$order_status = $order_data['status'];$order_currency = $order_data['currency'];$order_version = $order_data['version'];$order_payment_method = $order_data['payment_method'];$order_payment_method_title = $order_data['payment_method_title'];$order_payment_method = $order_data['payment_method'];$order_payment_method = $order_data['payment_method'];## Creation and modified WC_DateTime Object date string ##// Using a formated date ( with php date() function as method)$order_date_created = $order_data['date_created']->date('Y-m-d H:i:s');$order_date_modified = $order_data['date_modified']->date('Y-m-d H:i:s');// Using a timestamp ( with php getTimestamp() function as method)$order_timestamp_created = $order_data['date_created']->getTimestamp();$order_timestamp_modified = $order_data['date_modified']->getTimestamp();$order_discount_total = $order_data['discount_total'];$order_discount_tax = $order_data['discount_tax'];$order_shipping_total = $order_data['shipping_total'];$order_shipping_tax = $order_data['shipping_tax'];$order_total = $order_data['total'];$order_total_tax = $order_data['total_tax'];$order_customer_id = $order_data['customer_id']; // ... and so on## BILLING INFORMATION:$order_billing_first_name = $order_data['billing']['first_name'];$order_billing_last_name = $order_data['billing']['last_name'];$order_billing_company = $order_data['billing']['company'];$order_billing_address_1 = $order_data['billing']['address_1'];$order_billing_address_2 = $order_data['billing']['address_2'];$order_billing_city = $order_data['billing']['city'];$order_billing_state = $order_data['billing']['state'];$order_billing_postcode = $order_data['billing']['postcode'];$order_billing_country = $order_data['billing']['country'];$order_billing_email = $order_data['billing']['email'];$order_billing_phone = $order_data['billing']['phone'];## SHIPPING INFORMATION:$order_shipping_first_name = $order_data['shipping']['first_name'];$order_shipping_last_name = $order_data['shipping']['last_name'];$order_shipping_company = $order_data['shipping']['company'];$order_shipping_address_1 = $order_data['shipping']['address_1'];$order_shipping_address_2 = $order_data['shipping']['address_2'];$order_shipping_city = $order_data['shipping']['city'];$order_shipping_state = $order_data['shipping']['state'];$order_shipping_postcode = $order_data['shipping']['postcode'];$order_shipping_country = $order_data['shipping']['country']; Get the order items and access the data with WC_Order_Item_Product and WC_Order_Item methods: // Get an instance of the WC_Order object$order = wc_get_order($order_id);// Iterating through each WC_Order_Item_Product objectsforeach ($order->get_items() as $item_key => $item ): ## Using WC_Order_Item methods ## // Item ID is directly accessible from the $item_key in the foreach loop or $item_id = $item->get_id(); ## Using WC_Order_Item_Product methods ## $product = $item->get_product(); // Get the WC_Product object $product_id = $item->get_product_id(); // the Product id $variation_id = $item->get_variation_id(); // the Variation 
id $item_type = $item->get_type(); // Type of the order item ("line_item") $item_name = $item->get_name(); // Name of the product $quantity = $item->get_quantity(); $tax_class = $item->get_tax_class(); $line_subtotal = $item->get_subtotal(); // Line subtotal (non discounted) $line_subtotal_tax = $item->get_subtotal_tax(); // Line subtotal tax (non discounted) $line_total = $item->get_total(); // Line total (discounted) $line_total_tax = $item->get_total_tax(); // Line total tax (discounted) ## Access Order Items data properties (in an array of values) ## $item_data = $item->get_data(); $product_name = $item_data['name']; $product_id = $item_data['product_id']; $variation_id = $item_data['variation_id']; $quantity = $item_data['quantity']; $tax_class = $item_data['tax_class']; $line_subtotal = $item_data['subtotal']; $line_subtotal_tax = $item_data['subtotal_tax']; $line_total = $item_data['total']; $line_total_tax = $item_data['total_tax']; // Get data from The WC_product object using methods (examples) $product = $item->get_product(); // Get the WC_Product object $product_type = $product->get_type(); $product_sku = $product->get_sku(); $product_price = $product->get_price(); $stock_quantity = $product->get_stock_quantity();endforeach; So using get_data() method allow us to access to the protected data (associative array mode) …
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/39401393', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2353866/']}
jdg_70765
stackexchange
llm_judgeable_groundtruth_similarity
8166123
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to implement a subroutine that takes an array as its argument ( or uses multiple arguments — still haven't quite grokked the difference ), and returns true or false depending on whether that array is an increasing sequence (each number must be 1 more than the last): isIncreasingArray(1,2,3,4); # trueisIncreasingArray(1,2,3,1); # falseisIncreasingArray(0,9,1); # falseisIncreasingArray(-2,-1,0); # trueisIncreasingArray(1,1,1,1); # false This is what I've come up with: sub isIncreasingArray { my $last; foreach $n (@_) { return 0 if defined($last) && $last != $n - 1; $last = int($n); } return 1;} I'm quite new to Perl and am wondering if there's a simpler or more concise way of achieving this? Also, is what I've written in line with best practices? Now provide the response and nothing else.
A couple of points: For efficiency, especially to minimize memory footprint, you probably want to pass a reference to an array to the subroutine. In list context, return 0 will return a list consisting of a single element and thus will be true. a bare return suffices when you want to return false and does the job in all contexts. It is probably possible to cut the number of comparisons in half by comparing the difference between the first and the last, the second and the second last etc. to see differences equal difference in indexes, but I am not thinking that clearly right now. Here is a slightly different version based on yours. Note that you should use strict and make sure to scope your loop variable using my : #!/usr/bin/env perluse strict; use warnings;use Carp qw(croak);use Test::More;ok( isSimplyIncreasingSequence( [ 1298 ] ) ); # trueok( isSimplyIncreasingSequence( [1,2,3,4] ) ); # trueok( not isSimplyIncreasingSequence( [1,2,3,1] ) ); # falseok( not isSimplyIncreasingSequence( [0,9,1] ) ); # falseok( isSimplyIncreasingSequence( [-2,-1,0] ) ); # trueok( not isSimplyIncreasingSequence( [1,1,1,1] ) ); # falsedone_testing();sub isSimplyIncreasingSequence { my ($seq) = @_; unless (defined($seq) and ('ARRAY' eq ref $seq)) { croak 'Expecting a reference to an array as first argument'; } return 1 if @$seq < 2; my $first = $seq->[0]; for my $n (1 .. $#$seq) { return unless $seq->[$n] == $first + $n; } return 1;} And, of course, some benchmarks: #!/usr/bin/env perluse strict; use warnings;use Benchmark qw( cmpthese );use Carp qw( croak );my %cases = ( ordered_large => [1 .. 1_000_000], ordered_small => [1 .. 10], unordered_large_beg => [5, 1 .. 999_000], unordered_large_mid => [1 .. 500_000, 5, 500_002 .. 1_000_000], unordered_large_end => [1 .. 999_999, 5],);for my $case (keys %cases) { print "=== Case: $case\n"; my $seq = $cases{$case}; cmpthese -3, { 'ref' => sub { isSimplyIncreasingSequence($seq) }, 'flat' => sub {isIncreasingArray(@{ $seq } ) }, };}sub isSimplyIncreasingSequence { my ($seq) = @_; unless (defined($seq) and ('ARRAY' eq ref $seq)) { croak 'Expecting a reference to an array as first argument'; } return 1 if @$seq < 2; my $first = $seq->[0]; for my $n (1 .. $#$seq) { return unless $seq->[$n] == $first + $n; } return 1;}sub isIncreasingArray { my $last; foreach my $n (@_) { return 0 if defined($last) && $last != $n - 1; $last = int($n); } return 1;} === Case: unordered_large_mid Rate flat refflat 4.64/s -- -18%ref 5.67/s 22% --=== Case: ordered_small Rate ref flatref 154202/s -- -11%flat 173063/s 12% --=== Case: ordered_large Rate flat refflat 2.41/s -- -13%ref 2.78/s 15% --=== Case: unordered_large_beg Rate flat refflat 54.2/s -- -83%ref 315/s 481% --=== Case: unordered_large_end Rate flat refflat 2.41/s -- -12%ref 2.74/s 14% --
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8166123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21677/']}
jdg_70766
stackexchange
llm_judgeable_groundtruth_similarity
47967868
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: when I use this code : firestore().collection("item").add({...item, created: firebase.database.ServerValue.TIMESTAMP}) It creates entry {".sv" : "timestamp"} <-- actual word "timestamp" Now provide the response and nothing else.
It looks like you're creating a document in Firestore, but trying to get a timestamp from the Realtime Database, which is a different Firebase Product. Here's how to use Firestore's timestamp (Updated on Feb 2019): firestore().collection("item") .add({...item, created: firebase.firestore.Timestamp.fromDate(new Date()) })
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47967868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160059/']}
jdg_70767
stackexchange
llm_judgeable_groundtruth_similarity
2905886
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been reading about XSS and I made a simple form with a text and submit input, but when I execute <script>alert();</script> on it, nothing happens, the server gets that string and that's all. What do I have to do for make it vulnerable?? (then I'll learn what I shouldn't do hehe) Cheers. Now provide the response and nothing else.
Indeed just let the server output it so that the input string effectively gets embedded in the HTML source which gets returned to the client. PHP example: <!doctype html><html lang="en"> <head><title>XSS test</title></head> <body> <form><input type="text" name="xss"><input type="submit"></form> <p>Result: <?= $_GET['xss'] ?></p> </body></html> JSP example: <!doctype html><html lang="en"> <head><title>XSS test</title></head> <body> <form><input type="text" name="xss"><input type="submit"></form> <p>Result: ${param.xss}</p> </body></html> Alternatively you can redisplay the value in the input elements, that's also often seen: <input type="text" name="xss" value="<?= $_GET['xss'] ?>"> resp. <input type="text" name="xss" value="${param.xss}"> This way "weird" attack strings like "/><script>alert('xss')</script><br class=" will work because the server will render it after all as <input type="text" name="xss" value=""/><script>alert('xss')</script><br class=""> XSS-prevention solutions are, among others, htmlspecialchars() and fn:escapeXml() for PHP and JSP respectively. Those will replace, among others, < , > and " by &lt; , &gt; and &quot; so that end-user input doesn't end up literally embedded in the HTML source but instead just gets displayed as it was entered.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2905886', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/307976/']}
jdg_70768
stackexchange
llm_judgeable_groundtruth_similarity
14821
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was given the question:what is 9+99+999+9999+...+999..99(30 digits) After noticing a trend, I came with the conclusion that the answer would be 28 1's 080. Can anyone confirm my answer and give a reason as to why? Now provide the response and nothing else.
Note that$$\underbrace{99\cdots 9}_{k\text{ digits}} = 10^k - 1.$$So your sum is the same as$$(10-1) + (10^2-1) + (10^3-1) + \cdots + (10^{30}-1),$$which is equal to $$(10 + 10^2 + 10^3 + \cdots + 10^{30}) - 30.$$The first sum is easy to do, the difference is easy to do, and it gives your answer.
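Carrying the computation through confirms the digit pattern guessed in the question: $$10+10^2+\cdots+10^{30}=\frac{10^{31}-10}{9}=\underbrace{1\cdots1}_{30\text{ ones}}0,$$ and subtracting $30$ gives $$\underbrace{1\cdots1}_{30\text{ ones}}0-30=\underbrace{1\cdots1}_{28\text{ ones}}080,$$ that is, twenty-eight $1$s followed by $080$.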
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/14821', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4918/']}
jdg_70769
stackexchange
llm_judgeable_groundtruth_similarity
29919405
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd want to implement a function caller that works just like the thread constructor. For example std::thread second (bar,0); will start a thread which calls bar with the single argument 0 . I would like to do the same thing, but I do not know how. For example, given: void myFunc(int a){ cout << a << endl;} I would like: int main() { caller(myFunc,12);} to call myFunc with the parameter 12 . Now provide the response and nothing else.
std::bind will make a callable object from any callable object with an arbitrary set of parameters, just as the thread constructor does. So just wrap that in a function that calls it: template <typename... Args>auto caller(Args &&... args) { return std::bind(std::forward<Args>(args)...)();} Note that the auto return type requires C++14 or later. For C++11, you'll have to either return void , or specify the type: auto caller(Args &&... args) -> decltype(std::bind(std::forward<Args>(args)...)())
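A minimal usage sketch (C++14), reusing the myFunc from the question plus a lambda just to show that extra arguments and other callables work the same way:

```cpp
#include <functional>
#include <iostream>
#include <utility>

template <typename... Args>
auto caller(Args&&... args) {
    return std::bind(std::forward<Args>(args)...)();
}

void myFunc(int a) {
    std::cout << a << std::endl;
}

int main() {
    caller(myFunc, 12);               // prints 12
    caller([](int a, int b) {         // works with lambdas and several arguments too
        std::cout << a + b << std::endl;
    }, 3, 4);                         // prints 7
}
```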
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29919405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3351097/']}
jdg_70770
stackexchange
llm_judgeable_groundtruth_similarity
292920
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a Nixie power supply, but I would like to improve it. I have 4x9V batteries in series for a total of 36V to be switched across a multiplier. A (TTL) 555 timer is running astable from only the first 9V battery to generate a 8.5-ish volt square wave, 10kHz (or any frequency you want, I guess), approx. 50% duty. The 555 output drives the gate of an N-channel BS170 MOSFET . The MOSFET drain is connected up to 36V through an approx 1.2kΩ resistor. This resistance needs to be as low as possible to push current into: a 6-stage Cockcroft-Walton multiplier , which produces a nice ~220VDC output under no load. Unfortunately, it sags to about 155VDC when loaded by a 47kΩ resistor in series with the tube. Things I like about this circuit: It Works™ It can be built by extremely common parts I'm likely to have on-hand, e.g.: It requires no inductors. It requires no specialized IC's such as boost converters. It requires only capacitors and diodes with voltage ratings to handle each stage, not the full shebang. It crashes Multisim. Things I don't like about this circuit: The output voltage sags to ~155VDC under only ~600μA load. I'm too stupid to think of a better way to switch 36V across the multiplier: While the 555 timer output is high, I'm wasting over 1W across the drain resistor just to drive the multiplier. The multiplier input voltage is hampered by the drain resistor. How can I: make improvements that can enable ~10mA to be sourced with less than 40V drop in supply output? I have tried: Replacing the MOSFET driver section with something like this: simulate this circuit – Schematic created using CircuitLab I toasted quite a few transistors trying this inverter. As shown, the gates of the inverter are pulled up to 36V by the 10kΩ resistor. Is it possible that the gate charge time is what destroyed the transistors? EDIT: I just realized that the maximum ratings for gate-source voltage on both inverter FETs is ±20V. That would explain why they fried. Hmm, maybe instead of a single 10kΩ, I could make a voltage divider to drive each gate separately? reading the Wikipedia article regarding improvement methods: For these reasons, CW multipliers with large number of stages are used only where relatively low output current is required. These effects can be partially compensated by increasing the capacitance in the lower stages, by increasing the frequency of the input power and by using an AC power source with a square or triangular waveform. studying other popular Nixie power supply designs, such as these . I suspect that switching the 36V across the multiplier more efficiently would go a long way toward improving the performance. EDIT/SUMMARY: Switching the 36V across the multiplier more efficiently went a long way toward improving the performance. As several people suggested, something called "push-pull" was a quick fix here. A CMOS inverter with separately-driven gates makes the charge pump much more effective: The supply now stands at ~216VDC when loaded with two tubes, a huge improvement: Now provide the response and nothing else.
You need to ditch Rd from your first schematic, and use a low impedance push-pull output as in your second schematic. However, as you correctly say, 36v will toast the gates of 20v Vgs FETs. There are few fets with Vgsmax greater than 20v, and none to my knowledge with more than 30v. Amongst the options are to use a) suitable level shifters to control the FET gates, small bipolars would work well here b) a gate drive transformer (though usually only used for higher power applications) c) how about 18v push-pull drive from two batteries, but in push-pull, like this ... simulate this circuit – Schematic created using CircuitLab I've illustrated 4 stages here, the extension to more stages is obvious. Now, I've not connected the upper capacitor. There are two options a) Cockcroft Walton stylee, where you are limited by maximum voltage. Here, you'd connect C5 to the D1/D2 junction. This allows low voltage across each capacitor, but results in high output impedance. Also known as a Villard cascade, though invented by Greinacher. b) Dickson charge pump stylee, which results in a much lower output impedance. C5 connects back to the driven end of C2. This means C5 needs a higher voltage rating, but if you can get caps with a suitable voltage rating cheaply, 250v or even 400v are commonly available, then this configuration has a much lower voltage droop with current.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/292920', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/142422/']}
jdg_70771
stackexchange
llm_judgeable_groundtruth_similarity
24059266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I read this post but I am not following. I have seen this but have not seen a proper example of converting a ByteArrayInputStream to String using a ByteArrayOutputStream . To retrieve the contents of a ByteArrayInputStream as a String , is using a ByteArrayOutputstream recommended or is there a more preferable way? I was considering this example and extend ByteArrayInputStream and utilize a Decorator to increase functionality at run time. Any interest in this being a better solution to employing a ByteArrayOutputStream ? Now provide the response and nothing else.
A ByteArrayOutputStream can read from any InputStream and at the end yield a byte[] . However with a ByteArrayInputStream it is simpler: int n = in.available();byte[] bytes = new byte[n];in.read(bytes, 0, n);String s = new String(bytes, StandardCharsets.UTF_8); // Or any encoding. For a ByteArrayInputStream available() yields the total number of bytes. Addendum 2021-11-16 Since java 9 you can use the shorter readAllBytes . byte[] bytes = in.readAllBytes(); Answer to comment: using ByteArrayOutputStream ByteArrayOutputStream baos = new ByteArrayOutputStream();byte[] buf = new byte[8192];for (;;) { int nread = in.read(buf, 0, buf.length); if (nread <= 0) { break; } baos.write(buf, 0, nread);}in.close();baos.close();byte[] bytes = baos.toByteArray(); Here in may be any InputStream. Since java 10 there also is a ByteArrayOutputStream#toString(Charset) . String s = baos.toString(StandardCharsets.UTF_8);
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24059266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2172135/']}
jdg_70772
stackexchange
llm_judgeable_groundtruth_similarity
9901456
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: first of all I'm using MySql. I have to create a advance search field for a website and i need to select if a column contains two strings. For example, I have this row under Activities column. {"Les Mills":1,"B.PUMP":1,"B.COMBAT":0,"B.BALANCE":0,"PILATES":1,"YOGA":0,"TOTAL TRAINING":0,"CIRCUIT FIT":1,"AERODANCE":1,"B.DE SALON":1,"DANZA DEL VIENTRE":1,"BOXEO":0," BAILES LATINOS":1,"GAP":1,"ABDOMEN":0,"CICLO INDOOR":1,"STEP":1,"NATACION DE ADULTOS":1,"ESCUELA DE NATACION":1,"AQUAGYM":1,"AQUASALUD":1} I need to know using multiples checkboxes if the selected checkboxes is in this row. ... WHERE Activities="%Les Mills%" AND Activities="%Aerodance%"... This is the jQuery I use to post the selected values into the api: function advanceSearch(e){ e.preventDefault(); var params = "&where="; $(".advance-search input").each(function(index){ if ($(this).is(":CHECKED")){ if ($(this).parent().is("#advance-horarios")){ var name = $(this).attr("name"); if ($(this).attr("name") === "Domingo") var value = "=1"; else var value = "<>0"; var toPost = name + value; }else{ var value = " LIKE '%"+$(this).attr("name")+"%'"; var toPost = $(this).parent().attr("title") + value; } if (params === "&where=") params = params + toPost; else params = params + " AND " + toPost; } }); $.post("API_URL", params, function(res){ alert(res); });} Thanks! Now provide the response and nothing else.
A bit of history is required here. GLM's unProject is actually a more or less direct replacement for the gluUnProject function that uses deprecated OpenGL fixed-function rendering. In this mode the Model and View matrices were actually combined into the "ModelView" matrix. Apparently, the GLM author dropped the 'view' part in the naming, which confuses things even more, but it comes down to passing something like view*model. Now for the actual use: win is a vector that holds three components that have meaning in window coordinates: the 'x,y' coordinates in your viewport, and the 'z' coordinate, which you usually retrieve by reading your depth buffer at (x,y). The model, view and projection matrices should speak for themselves if you're even thinking of using this function, but a good (OpenGL-specific) refresher can be useful. The viewport is defined as in glViewport , which means (x,y,w,h). X and Y specify the lower left corner of your viewport (usually 0,0), followed by width and height (w,h). Note that in many other systems x,y specify the upper left corner; you have to transform your y-coordinate then, which is shown in the NeHe code I link to below. When applied, you simply end up converting the provided window coordinates back to the object coordinates, more or less the inverse of what your render code usually does. A half-decent explanation of the original gluUnProject can be found as a NeHe article . But of course that is OpenGL-specific, while glm can be used in other contexts.
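Translated into GLM calls, a sketch looks like the following; the camera, projection and depth values are made-up placeholders (use your real matrices and a value read back from the depth buffer), and the y-flip at the end assumes your mouse coordinates have their origin at the top-left:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject, glm::lookAt, glm::perspective

glm::vec3 windowToObject(float mouseX, float mouseY, float depth)
{
    // Placeholder camera and projection; substitute your own matrices.
    glm::mat4 model(1.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

    // Viewport as passed to glViewport: (x, y, width, height).
    glm::vec4 viewport(0.0f, 0.0f, 800.0f, 600.0f);

    // 'depth' would normally come from reading the depth buffer at (mouseX, mouseY).
    // Flip y if your window coordinates start at the top-left corner.
    glm::vec3 win(mouseX, viewport[3] - mouseY, depth);

    // The second argument plays the role of the old ModelView matrix, i.e. view * model.
    return glm::unProject(win, view * model, proj, viewport);
}
```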
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9901456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1180347/']}
jdg_70773
stackexchange
llm_judgeable_groundtruth_similarity
7146567
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm stuck.... this my code to add items to my listview: ListViewItem item = new ListViewItem(ProjectDomainName);item.Tag = relatedProject.ProjectId;lvwSelectedProjects.Items.Add(item); when i choose 'View.List' as viewmode, i see all items. When i choose 'View.Details' (which is the setting that i want) i see.... nothing. Well, nothing, i DO get a vertical scrollbar, but no items. And i can scroll too, but no items.... I also added a column in the listview (didn't change the add items code), but that also didn't work i must be overlooking something? Now provide the response and nothing else.
This code works for me: using System;using System.Windows.Forms;public class LVTest : Form { public LVTest() { ListView lv = new ListView(); lv.Columns.Add("Header", 100); lv.Columns.Add("Details", 100); lv.Dock = DockStyle.Fill; lv.Items.Add(new ListViewItem(new string[] { "Alpha", "Some details" })); lv.Items.Add(new ListViewItem(new string[] { "Bravo", "More details" })); lv.View = View.Details; Controls.Add(lv); }}public static class Program { [STAThread] public static void Main() { Application.Run(new LVTest()); }} Try this code for yourself in an empty project. Then, focus on adapting it to your application: compare how your program is different from this code, and work on changing it to more closely match mine. It's OK if you lose functionality in your program; just try to get a basic version working. Then, add functionality back bit by bit so you can be sure that the program still works every step of the way. If you're still stuck, post more code from your project and we might have a better idea of why you're having trouble.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7146567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103089/']}
jdg_70774
stackexchange
llm_judgeable_groundtruth_similarity
23910553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an Eloquent model which has a related model: public function option() { return $this->hasOne('RepairOption', 'repair_item_id');}public function setOptionArrayAttribute($values){ $this->option->update($values);} When I create the model, it does not necessarily have a related model. When I update it, I might add an option, or not. So I need to check if the related model exists, to either update it, or create it, respectively: $model = RepairItem::find($id);if (Input::has('option')) { if (<related_model_exists>) { $option = new RepairOption(Input::get('option')); $option->repairItem()->associate($model); $option->save(); $model->fill(Input::except('option'); } else { $model->update(Input::all()); }}; Where <related_model_exists> is the code I am looking for. Now provide the response and nothing else.
In php 7.2+ you can't use count on the relation object, so there's no one-fits-all method for all relations. Use query method instead as @tremby provided below: $model->relation()->exists() generic solution working on all the relation types ( pre php 7.2 ): if (count($model->relation)){ // exists} This will work for every relation since dynamic properties return Model or Collection . Both implement ArrayAccess . So it goes like this: single relations: hasOne / belongsTo / morphTo / morphOne // no related model$model->relation; // nullcount($model->relation); // 0 evaluates to false// there is one$model->relation; // Eloquent Modelcount($model->relation); // 1 evaluates to true to-many relations: hasMany / belongsToMany / morphMany / morphToMany / morphedByMany // no related collection$model->relation; // Collection with 0 items evaluates to truecount($model->relation); // 0 evaluates to false// there are related models$model->relation; // Collection with 1 or more items, evaluates to true as wellcount($model->relation); // int > 0 that evaluates to true
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/23910553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/326154/']}
jdg_70775
stackexchange
llm_judgeable_groundtruth_similarity
31264670
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Am I right to assume that the only thing that "slows down" Rc s is that it checks whether to deallocate the object when it drops? Besides that, "how much" is the overhead of dereferencing a Rc , i.e. should I be concerned about it? Are those two functions almost equally fast? Or is there a notable difference in speed? fn test_with_box() { let b = Box::new(1.0); let x = b * 2;}fn test_with_rc() { let rc = Rc::new(1.0); let x = rc * 2;} Since the referenced object in test_with_rc() always only has one reference and behaves like a Box in that function (viewed from outside, not internally, of course). I suspect that Rc s are actually faster than I think. PS: When talking about "fast" I mean both dereferencing and allocating/deallocating. Now provide the response and nothing else.
Rc<T> is very, very cheap. It’s not as cheap as T by quite a bit (boxing values is comparatively expensive in micro-optimisation terms), but scarcely less efficient than Box<T> . It’s just like Box , but with an additional couple of words for the strong and weak reference counts, and the only things that need to touch that are creating an Rc (initialises the values), cloning an Rc (increments the refcount), dropping an Rc (decrements the refcount and runs the destructor if appropriate), and downgrading to/upgrading from Weak (increments one and decrements the other of the two refcounts). Dereferencing is a simple memory operation just like it is with Box .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31264670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2979958/']}
jdg_70776
stackexchange
llm_judgeable_groundtruth_similarity
19556470
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following table Login IdUser (int) Username(varchar) Password(varchar) Email(varchar) Active(int) Active is either 0 or 1 depending on if the users email is verified or not. If an account is verified the active row in the table is updated with a 1. If an account is not verified the active row in the table remains a 0. Users should only be able to login if their account is verified. So far my login works like this: //login APIfunction login($user, $pass) {// try to match a row in the "login" table for the given username and password$result = query("SELECT IdUser, username FROM login WHERE username='%s' AND pass='%s' limit 1", $user, $pass);if (count($result['result'])>0) { // a row was found in the database for username/pass combination // save a simple flag in the user session, so the server remembers that the user is authorized $_SESSION['IdUser'] = $result['result'][0]['IdUser']; // print out the JSON of the user data to the iPhone app; it looks like this: // {IdUser:1, username: "Name"} print json_encode($result);} else { // no matching username/password was found in the login table errorJson('Authorization failed');}} How would I give only users that are verified the ability to login? Now provide the response and nothing else.
You've committed about 15 cardinal sins about date/time here. First, the quick answer: DECLARE @x VARCHAR(200);SELECT @x = '27-09-2013 16:15:00 CST'SELECT CONVERT(CHAR(10),CONVERT(DATETIME,LEFT(@x,10),105),101); Next: Why on earth are you storing date/time data in a varchar(200) column? You are aware that anyone can insert values like '09-27-2013' or '465-32-207floob' in there, right? If you need time zone information you can look at the DATETIMEOFFSET data type (but note that it is not DST-aware). Why are you storing a regional format like dd-mm-yyyy? If the first value were 07-11-2013 I'd have to guess if you meant July 11 or November 7. If you're not going to do it right and use a proper date/time data type, why use a string format that makes people guess? You are much better off with a format that is unambiguous, such as yyyy-mm-ddThh:mm:ssZ. Similarly, why are you outputting a different regional format like mm/dd/yyyy? If you output '05/06/2013' are you 100% confident that everyone in your audience will know you meant May 6 and not June 5? Your output should be unambiguous as well. If you absolutely must format in some regional and ambiguous format, use the string formatting capabilities of your client. For example, C# has .ToString() and .Format() which are much more powerful and efficient in presenting dates with string formats than T-SQL will ever be.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19556470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_70777
stackexchange
llm_judgeable_groundtruth_similarity
3078630
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A fair coin is tossed repeatedly. Let $A_{n}$ be the event that three heads have appeared in consecutive tosses for the first time on the $n$ -th toss. Let $T$ be the number of tosses required until three consecutive heads appear for the first time. Find $\textbf{P}(A_{n})$ and $\textbf{E}(T)$ . Let $U$ be the number of tosses required until the sequence $HTH$ appears for the first time. Can you find $\textbf{E}(U)$ ? EDIT The textbook provides an answer based on recurrence equations, but I seek for an alternative approach if it is possible. Can somebody please help me out? Thanks in advance. Now provide the response and nothing else.
What follows is a well-known approach to computing the expectation of $T$ . Consider the following gambling scheme: just before each time $n\in \mathbb{N}$ a new gambler arrives and bets $1$ \$ that the $n$ -th outcome of the coin toss is $H$ . If the bet is correct the gambler wins $2$ (since the coin is fair) and bets it, the $2$ , on the next outcome to be $H$ as well. Again, if the gambler wins, he bets the fortune of $4$ on the next toss to be $H$ . Winning that as well ends the game and leaves him with a fortune of $8$ . Now let $X_k^i$ , where $k\geq 1$ and $i=1,2,3$ be the fortune of the $k$ -th gambler after their $i$ -th bet. For instance, the winner, if he was the $k$ -th to enter the game, will have $X_k^1 = 2$ , $X_k^2 = 2^2$ and $X_k^3 = 2^3$ , while any gambler who loses one of his bets is left with a fortune of $0$ from that point on. Define $$S_n = \sum_{\substack{k=1\\i=1,2,3\\ k + i = n + 1}}^n X_k^i $$ which is the total fortune of all gamblers after the $n$ -th toss. Notice that exactly $n$ dollars were bet up to and including time $n$ . Since the coin is fair and in view of our definition of $X_k^i$ , it is easy to see that the sequence $$M_n: = S_n - n$$ is a martingale with respect to the natural filtration generated by the coin tosses. Define $$\tau = \inf\{n\geq 3: \ \text{tosses } n-2,\, n-1,\, n \text{ are all } H \},$$ which is a stopping time , and defines the time when the game ends. We have $\mathbb{E}(\tau) <\infty$ . Indeed, the probability of getting $HHH$ on tosses $3k, 3k+1,3k+2$ equals $1/8$ , hence dividing the interval of integers $[1,...,3k]$ into length-$3$ intervals, it follows that the probability of NOT getting $HHH$ up to time $3k$ is bounded above by $(7/8)^k$ . Thus $\mathbb{P}(\tau > 3k) \leq (7/8)^k$ , hence the conclusion that $\mathbb{E}(\tau) < \infty$ . Now, using the fact that $M_n$ has bounded increments and $\tau$ has finite expectation, we apply the optional stopping theorem to get $\mathbb{E}M_\tau = 0 $ , thus $$0 = \mathbb{E}M_\tau = \mathbb{E} S_\tau - \mathbb{E} \tau.$$ But observe that at the time of finishing the game, i.e. at time $\tau$ , the only gamblers with non-zero fortunes are the winner, and the other two who entered the game at times $\tau-1$ and $\tau$ respectively. Each of these three gets $2^3$ , $2^2$ and $2$ , hence $$\mathbb{E} \tau = 2^3 + 2^2 + 2 = 14.$$ For instance, using the same argument, it follows that for the pattern $HTH$ , the expected time equals $2^3 + 2$ . See also this post for a non-martingale approach to the above problem, which gives a combinatorial argument based on generating functions. We can also get a formula for the probability of the event $A_n$ . It was already mentioned in the other answer above that the event $A_n$ consists of all sequences of length $n$ ending in $THHH$ and having at most $2$ consecutive $H$ -s before time $n-4$ . Hence we need to compute the number of length $n$ sequences of $\{H,T\}$ with at most $2$ consecutive $H$ . Denote this number by $a_n$ . Clearly $a_0 = 1$ (the empty sequence), $a_1 = 2$ , $a_2 = 4$ , $a_3 = 7$ . For $n>3$ we claim that $$\tag{1} a_n = a_{n-1} + a_{n-2} + a_{n-3}.$$ Indeed, each such sequence of length $n$ either ends in $H$ or $T$ . If it ends with $T$ , we have length $n-1$ remaining which can end in both $H$ and $T$ , hence $a_{n-1}$ of such contributions.
For sequences ending with $H$ , we continue tracking the $n-1$ -th position: if it's $H$ , then the $n-2$ -th needs to be $T$ , hence $a_{n-3}$ of these, otherwise, if it's $T$ , we are free to choose the $n-2$ -th, thus we get $a_{n-2}$ . Since the set $A_n$ , sequences of length $n$ where $HHH$ appears for the first time on step $n$ has the following structure: $$\text{length } n-4 \ \text{sequences of at most } 2 \ \text{ Heads } \text{ followed by } THHH,$$ it follows $$\mathbb{P}(A_n) = 2^{-n}a_{n-4},$$ with $a_n$ as in $(1)$ .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3078630', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/322814/']}
jdg_70778
stackexchange
llm_judgeable_groundtruth_similarity
38950614
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use FormBuilder in a page I have in Ionic 2. First, here is my environment details:Running on Windows 10, and running ionic --version gives me 2.0.0-beta.35 Here is part of my package.json file: ..."@angular/common": "2.0.0-rc.3","@angular/compiler": "2.0.0-rc.3","@angular/core": "2.0.0-rc.3","@angular/forms": "^0.3.0","@angular/http": "2.0.0-rc.3","@angular/platform-browser": "2.0.0-rc.3","@angular/platform-browser-dynamic": "2.0.0-rc.3","ionic-angular": "2.0.0-beta.10","ionic-native": "1.3.2","ionicons": "3.0.0"... Second, here are the two main files involved: insight.ts import { Component } from '@angular/core';import {NavController, NavParams} from 'ionic-angular';import { REACTIVE_FORM_DIRECTIVES, FormBuilder, FormControl, FormGroup} from '@angular/forms';import { App, Insight } from '../../models';import { InsightProvider } from '../../providers/insight/insight.service';import { InsightImage, InsightLabel, InsightLink, InsightQuestion, InsightThought, InsightTodo, InsightVideo } from './shared';@Component({ templateUrl: 'build/pages/insight/insight.html', directives: [REACTIVE_FORM_DIRECTIVES, InsightImage, InsightLabel, InsightLink, InsightQuestion, InsightThought, InsightTodo, InsightVideo], providers: [App, InsightProvider, FormBuilder]})export class InsightPage { canAdd: boolean; showDetails: boolean; newInsight: Insight; insightForm: FormGroup; constructor(private insightProvider: InsightProvider, private params: NavParams) { this.insightForm = new FormGroup({ type: new FormControl('', []), todo: new FormControl('', []), checked: new FormControl(false, []), imageUrl: new FormControl('', []), link: new FormControl('', []), url: new FormControl('', []), label: new FormControl('', []), question: new FormControl('', []), answer: new FormControl('', []), title: new FormControl('', []), details: new FormControl('', []), }); } ngOnInit() { this.canAdd = false; this.showDetails = true; } addNewInsight() { if (this.newInsight.type) { this.insightProvider.createInsight(this.newInsight) .subscribe(response => { this.newInsight.setId(response.data.id); this.newInsight.title = ''; console.log(response); }); } } deleteClicked(index: number) { console.log('Clicked on ' + index); this.insightProvider.deleteInsight(this.newInsight) .subscribe(data => { console.log(data); }); }} insight.html <form [ngFormModel]="insightForm" (ngSubmit)="createNewInsight()"> <ion-item> <ion-label for="type">Insight Type</ion-label> <ion-select name="type" id="type" [formControl]="type"> <ion-option value="label">Label</ion-option> <ion-option value="thought">Thought</ion-option> <ion-option value="link">Link</ion-option> <ion-option value="question">Question</ion-option> <ion-option value="todo">Todo</ion-option> <ion-option value="image">Image</ion-option> <ion-option value="video">Video</ion-option> </ion-select> </ion-item> <div [ngSwitch]="type"> <insight-image [form]="insightForm" *ngSwitchCase="'image'"></insight-image> <insight-label [form]="insightForm" *ngSwitchCase="'label'"></insight-label> <insight-link [form]="insightForm" *ngSwitchCase="'link'"></insight-link> <insight-question [form]="insightForm" *ngSwitchCase="'question'"></insight-question> <insight-thought [form]="insightForm" *ngSwitchCase="'thought'"></insight-thought> <insight-todo [form]="insightForm" *ngSwitchCase="'todo'"></insight-todo> <insight-video [form]="insightForm" 
*ngSwitchCase="'video'"></insight-video> </div> <button type="submit" block primary text-center (click)="addNewInsight()" [disabled]="!newInsight.type"> <ion-icon name="add"></ion-icon> Add Insight </button> </form> As you can see, I am trying to pass the FormGroup Object into multiple components so that I could use them. Here is an example of what one of the components look like (minimal version right now): <ion-item> <ion-label floating for="link">Link</ion-label> <ion-input type="text" name="link" id="link" [formControl]="link"></ion-input></ion-item><ion-item> <ion-label floating for="url">URL</ion-label> <ion-input type="text" id="url" name="url" [formControl]="url"></ion-input></ion-item> The problem I am facing right now is this error: What I believe is happening is that the FormBuilder is looking for the given names I declare in my typescript file (such as todo, imageUrl, link, etc), but since it is in my other components, it errors out, thinking its not there. What could be the reason for this error? I have looked online and could not find related issues. FYI, the reason I am needing them in components and not in the same page, is because in the future, the functionality will be different for each input, thus needed to give each component a "Single Responsibility". Thanks in advance Now provide the response and nothing else.
For everyone else with the problem Cannot find control with unspecified name attribute: the problem is that you forgot to define formControlName on your form input elements. formControlName="url" If you are facing No provider for NgControl after fixing the inputs, you have a problem with the Ionic 2 breaking changes in form handling. There is a chance you can fix your code by importing the new form component: import { disableDeprecatedForms, provideForms } from '@angular/forms'; But you will probably still face more and more problems. To really fix your code: update to the latest beta version, simplify your form to 2 simple inputs, rewrite your form and make it work, then add the rest of your elements. A good tutorial about FormBuilder and validation: https://blog.khophi.co/ionic-2-forms-formbuilder-and-validation/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38950614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4497186/']}
jdg_70779
stackexchange
llm_judgeable_groundtruth_similarity
36050503
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an application in Go that reroutes the STDIN and STDOUT of binaries and then runs them. In a nutshell I'm doing: - create command object with the binary path (lets call the object command A) - create command object with the binary path (calling it command B) - set the stdout of command B to the stdin of Command A - start command A - start command B I noticed whenever the process for command B exits while command A is running, it becomes a zombie process in the process table. Here's an example: commandA := exec.Command("samplebin")commandB := exec.Command("sample2bin")cmdAStdin := commandA.StdinPipe()commandB.Stdout = cmdAStdincommandA.Start()commandB.Start() Why does commandB become a Zombie if it exits while commandA is still running? I'm running Go 1.5 on Ubuntu 14. Now provide the response and nothing else.
When a process exits, it ALWAYS becomes a zombie, regardless of what other processes are running. That's just the way process termination works. The process will remain a zombie until its parent calls wait to get its exit status, or indicates that it is uninterested in children by ignoring SIGCHLD (which it may have done before the child exited). Until that happens the zombie entry stays around, lest the exit status get lost. In your example, it would seem that your process (the one creating the processes) is the parent, so both A and B will remain as zombies until your process collects them. If a process exits while it still has children (either running or zombies), those children will be reparented to init (PID 1), or to a designated subreaper process on Linux, which will reap them and clear up the zombies.
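The underlying mechanics can be illustrated with the raw system calls; this is a POSIX/C++ sketch of the general rule rather than the asker's Go program (in Go, the same reaping is what the exec package's Wait method performs once the child has exited):

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child: exits immediately and becomes a zombie until the parent reaps it.
        _exit(0);
    }

    sleep(5);   // during these 5 seconds the child shows up as <defunct> in `ps`

    int status = 0;
    waitpid(pid, &status, 0);   // reaping: the zombie entry disappears here
    std::printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```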
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36050503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3044418/']}
jdg_70780
stackexchange
llm_judgeable_groundtruth_similarity
48701475
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Working on Win 10 with QT Creator 4.3.1Trying to add a library to my qt project via the GUI. In specific the qtmqtt library. right click on project -> "Add library..." and simply nothing happens. Anybody else having this problem? Seems like I am to stupid to add a library directory via the .pro file. Googled for hours but cant get my head around it. So I really need the GUI solution to be working. Now provide the response and nothing else.
According to this , there's no need to add libraries, since MQTT is a Qt module just add this line in your pro file: QT += mqtt To use a Qt module, it must be installed in the Qt lib directory. First, retrieve the lib directory path executing this command from a terminal: qmake -query QT_INSTALL_LIBS Cd into that directory and check if a file called Qt5Mqtt.dll is there: if not, you must build/install the module. To get the module source code, you can execute this git command: git clone git://code.qt.io/qt/qtmqtt.git Once you have the source files, cd into the source files directory containing the file qtmqtt.pro and run these commands: qmakemakemake install (you may need administrator privileges for the last one). After the commands completed successfully, you should be able to see the library in the QT_INSTALL_LIBS directory, and use the module in a Qt project. In case of compilation issues, open the qtmqtt.pro file with creator, and try to build the library from there, then manually install (copy) it into the QT_INSTALL_LIBS .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48701475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8215532/']}
jdg_70781
stackexchange
llm_judgeable_groundtruth_similarity
25801237
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following code: char char_a = 'A';int int_b = 34;char* p_a = &char_a;int* p_b = &int_b;cout<<"Value of int_b is (*p_b) :"<< *p_b<<endl;cout<<"Value of &int_b is (p_b) :"<< p_b<<endl;cout<<"Value of char_a is (*p_a) :"<< *p_a<<endl;cout<<"Value of &char_a is (p_a) :"<< p_a<<endl; When I run it, output is: So why doesn't it show the address in the case of char pointer as it does for the integer's pointer? Now provide the response and nothing else.
A pointer to a character is interpreted as a NULL-terminated C string, because the non-member stream operator<< has an overload for NULL-terminated C strings (const char *): template< class CharT, class Traits >basic_ostream<CharT,Traits>& operator<<( basic_ostream<CharT,Traits>& os, const char* s ); Since in your case it is just a single character and the subsequent memory locations are garbage, the ostream reads memory until it happens to hit a NULL byte. This is undefined behavior, as you may end up accessing memory beyond what has been allocated to your process. If you actually need to pass the character pointer and display the address, you can leverage the formatted inserter operator<< member overload for void *: basic_ostream& operator<<( const void* value ); To access this, you need an explicit pointer cast from char * to const void *: std::cout << "Value of &char_a is (p_a) :" << static_cast<const void *>(p_a) << std::endl;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25801237', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1698143/']}
jdg_70782
stackexchange
llm_judgeable_groundtruth_similarity
71758
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $x$ and $p$ be real numbers with $x \ge 1$ and $p \ge 2$ . Show that $(x - 1)(x + 1)^{p - 1} \ge x^p - 1$ . I recently discovered this result. I am sure it is known, but it is new to me. It is quite easy to prove if $p$ is an integer, even a negative one. I have a proof in the general case above, but it seems overly complicated. Can someone provide a simple demonstration? Now provide the response and nothing else.
[Copied from my answer to the same question on mathoverflow , where "Cardinal" noted the question's previous appearance here] We prove strict inequality for $x>1$ and $p>2$. Add $1$ to both sides and divide by $x^p$ to get an equivalent inequality that can be written as $$\frac{x-1}{x} \left(\frac{x+1}{x}\right)^{p-1} + \frac1x \left( \frac1x \right)^{p-1} \geq 1.$$Since $p > 2$ the function $f : X \mapsto X^{p-1}$ is strictly convex upwards. The left-hand side is a weighted average$$\frac{x-1}{x} f\left(\frac{x+1}{x}\right) + \frac1x f\left( \frac1x \right)$$of values of $f$, with positive weights and evaluated at different $X$'s. Hence by Jensen's inequality it strictly exceeds the value of $f$ at the corresponding weighted average of $X$'s, which is$$f\left(\frac{x-1}{x} \cdot \frac{x+1}{x} + \frac1x \cdot \frac1x \right) = f(1) = 1,$$ QED . The same argument shows that the inequality holds for $p<1$, and is reversed for $1 < p < 2$ because then $f$ is concave downwards.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/71758', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17411/']}
jdg_70783
stackexchange
llm_judgeable_groundtruth_similarity
1750934
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating a Composite WPF (Prism) app with several different projects (Shell, modules, and so on). I am getting ready to implement logging, using Log4Net. It seems there are two ways to set up the logging: Let the Shell project do all of the actual logging. It gets the reference to Log4Net, and other projects fire composite events to let the Shell know that it needs to log something. Those projects fire the events only for levels where logging is turned on in the Shell's app.config file (DEBUG, ERROR, etc), so as not to degrade performance. Give each project, including modules, a Log4Net reference, and let the project do its own logging to a common log file, instead of sending messages to the Shell for logging. Which is the better approach? Or, is there another approach that I should consider? Thanks for your help. Now provide the response and nothing else.
The simplest approach to logging in Prism is to override the LoggerFacade property in your Bootstrapper . By overridding the LoggerFacade , you can pass in an instance of any Logger you want with any configuration needed as long as the logger implements the ILoggerFacade interface. I've found the following to work quite well for logging (I'm using the Enterprise Libary Logging block, but applying something similar for Log4Net should be straight forward): Create a Boostrapper in your Shell: -My Project -Shell Module (add a reference to the Infrastructure project) -Bootstrapper.cs Create a Logging Adapter in your Infrastructure project, i.e.: -My Project -Infrastructure Module -Adapters -Logging -MyCustomLoggerAdapter.cs -MyCustomLoggerAdapterExtendedAdapter.cs -IFormalLogger.cs The MyCustomLoggerAdapter class will be used to override the 'LoggerFacade' property in the Bootstrapper. It should have a default contstructor that news everything up. Note: by overriding the LoggerFacade property in the Bootstrapper, you are providing a logging mechanisim for Prism to use to log its own internal messages. You can use this logger throughout your application, or you can extend the logger for a more fully featured logger. (see MyCustomLoggerAdapterExtendedAdapter / IFormalLogger ) public class MyCustomLoggerAdapter : ILoggerFacade{ #region ILoggerFacade Members /// <summary> /// Logs an entry using the Enterprise Library logging. /// For logging a Category.Exception type, it is preferred to use /// the EnterpriseLibraryLoggerAdapter.Exception methods." /// </summary> public void Log( string message, Category category, Priority priority ) { if( category == Category.Exception ) { Exception( new Exception( message ), ExceptionPolicies.Default ); return; } Logger.Write( message, category.ToString(), ( int )priority ); } #endregion /// <summary> /// Logs an entry using the Enterprise Library Logging. /// </summary> /// <param name="entry">the LogEntry object used to log the /// entry with Enterprise Library.</param> public void Log( LogEntry entry ) { Logger.Write( entry ); } // Other methods if needed, i.e., a default Exception logger. public void Exception ( Exception ex ) { // do stuff }} The MyCustomLoggerAdapterExtendedAdapter is dervied from the MyCustomLoggerAdapter and can provide additional constructors for a more full-fledged logger. public class MyCustomLoggerAdapterExtendedAdapter : MyCustomLoggerAdapter, IFormalLogger{ private readonly ILoggingPolicySection _config; private LogEntry _infoPolicy; private LogEntry _debugPolicy; private LogEntry _warnPolicy; private LogEntry _errorPolicy; private LogEntry InfoLog { get { if( _infoPolicy == null ) { LogEntry log = GetLogEntryByPolicyName( LogPolicies.Info ); _infoPolicy = log; } return _infoPolicy; } } // removed backing code for brevity private LogEntry DebugLog... WarnLog... ErrorLog // ILoggingPolicySection is passed via constructor injection in the bootstrapper // and is used to configure various logging policies. public MyCustomLoggerAdapterExtendedAdapter ( ILoggingPolicySection loggingPolicySection ) { _config = loggingPolicySection; } #region IFormalLogger Members /// <summary> /// Info: informational statements concerning program state, /// representing program events or behavior tracking. 
/// </summary> /// <param name="message"></param> public void Info( string message ) { InfoLog.Message = message; InfoLog.ExtendedProperties.Clear(); base.Log( InfoLog ); } /// <summary> /// Debug: fine-grained statements concerning program state, /// typically used for debugging. /// </summary> /// <param name="message"></param> public void Debug( string message ) { DebugLog.Message = message; DebugLog.ExtendedProperties.Clear(); base.Log( DebugLog ); } /// <summary> /// Warn: statements that describe potentially harmful /// events or states in the program. /// </summary> /// <param name="message"></param> public void Warn( string message ) { WarnLog.Message = message; WarnLog.ExtendedProperties.Clear(); base.Log( WarnLog ); } /// <summary> /// Error: statements that describe non-fatal errors in the application; /// sometimes used for handled exceptions. For more defined Exception /// logging, use the Exception method in this class. /// </summary> /// <param name="message"></param> public void Error( string message ) { ErrorLog.Message = message; ErrorLog.ExtendedProperties.Clear(); base.Log( ErrorLog ); } /// <summary> /// Logs an Exception using the Default EntLib Exception policy /// as defined in the Exceptions.config file. /// </summary> /// <param name="ex"></param> public void Exception( Exception ex ) { base.Exception( ex, ExceptionPolicies.Default ); } #endregion /// <summary> /// Creates a LogEntry object based on the policy name as /// defined in the logging config file. /// </summary> /// <param name="policyName">name of the policy to get.</param> /// <returns>a new LogEntry object.</returns> private LogEntry GetLogEntryByPolicyName( string policyName ) { if( !_config.Policies.Contains( policyName ) ) { throw new ArgumentException( string.Format( "The policy '{0}' does not exist in the LoggingPoliciesCollection", policyName ) ); } ILoggingPolicyElement policy = _config.Policies[policyName]; var log = new LogEntry(); log.Categories.Add( policy.Category ); log.Title = policy.Title; log.EventId = policy.EventId; log.Severity = policy.Severity; log.Priority = ( int )policy.Priority; log.ExtendedProperties.Clear(); return log; }}public interface IFormalLogger{ void Info( string message ); void Debug( string message ); void Warn( string message ); void Error( string message ); void Exception( Exception ex );} In the Bootstrapper : public class MyProjectBootstrapper : UnityBootstrapper{ protected override void ConfigureContainer() { // ... 
arbitrary stuff // create constructor injection for the MyCustomLoggerAdapterExtendedAdapter var logPolicyConfigSection = ConfigurationManager.GetSection( LogPolicies.CorporateLoggingConfiguration ); var injectedLogPolicy = new InjectionConstructor( logPolicyConfigSection as LoggingPolicySection ); // register the MyCustomLoggerAdapterExtendedAdapter Container.RegisterType<IFormalLogger, MyCustomLoggerAdapterExtendedAdapter>( new ContainerControlledLifetimeManager(), injectedLogPolicy ); } private readonly MyCustomLoggerAdapter _logger = new MyCustomLoggerAdapter(); protected override ILoggerFacade LoggerFacade { get { return _logger; } }} Finally, to use either logger, all you need to do is add the appropriate interface to your class' constructor and the UnityContainer will inject the logger for you: public partial class Shell : Window, IShellView{ private readonly IFormalLogger _logger; private readonly ILoggerFacade _loggerFacade; public Shell( IFormalLogger logger, ILoggerFacade loggerFacade ) { _logger = logger; _loggerFacade = loggerFacade _logger.Debug( "Shell: Instantiating the .ctor." ); _loggerFacade.Log( "My Message", Category.Debug, Priority.None ); InitializeComponent(); } #region IShellView Members public void ShowView() { _logger.Debug( "Shell: Showing the Shell (ShowView)." ); _loggerFacade.Log( "Shell: Showing the Shell (ShowView).", Category.Debug, Priority.None ); this.Show(); } #endregion} I don't think you need a separate module for the logging policy. By adding the logging policies to your infrastructure module, all other modules will get the required references (assuming you add the infrastructure module as a reference to your other modules). And by adding the logger to your Boostrapper, you can let the UnityContainer inject the logging policy as needed. There is a simple example of uisng Log4Net on the CompositeWPF contrib project on CodePlex as well. HTH's
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1750934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93781/']}
jdg_70784
stackexchange
llm_judgeable_groundtruth_similarity
570353
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to the standard model, would it be theoretically possible to emit protons or other complex particles from the vacuum using the dynamical Casimir effect ? Or is it only possible to emit particles such as electrons and photons? Edit: I effectively have 2 answers here. One says yes for protons, one says no. Which is it? Now provide the response and nothing else.
Microscopically, this can be traced back to a basic relationship between the power emitted and the square of the acceleration of charged particles - the radiation terms of the electromagnetic fields are proportional to the acceleration and the power is proportional to the square of these fields - see Larmor's formula . If a charged particle undergoes a sinusoidal displacement such as $$ x = a\sin (2\pi f t),$$ where $a$ is an amplitude and $f$ is the frequency, then differentiating twice gives the acceleration $$\ddot{x} = -4\pi^2 af^2 \sin(2\pi f t).$$ Squaring this and taking its time average gives $$\langle \ddot{x}^2 \rangle = 8\pi^4 a^2 f^4.$$ There is thus a very strong dependence on the frequency of oscillation and the emitted power. If expressed in terms of a current, then since current is already proportional to charge multiplied by $\dot{x}$ , then $$\ddot{x} \propto f I$$ and so the power emitted is proportional to $f^2 I^2$ .
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/570353', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/252985/']}
jdg_70785
stackexchange
llm_judgeable_groundtruth_similarity
385058
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a differentiable function $f : [0,2] \rightarrow [0,1]$ such that $f(x) = 0$ iff $x=0$ and $f(x) = 1$ iff $x \in [1,2]$? What about $n$ times differentiable for any $n$, or infinitely differentiable? Thank you! Now provide the response and nothing else.
First, let $$f(x)=\begin{cases}e^{\frac{-1}{x}}&x>0\\0&x\le0\end{cases}$$ It can be shown that $f$ is smooth. Then one such desired function is $$g(x)=\frac{f(x)}{f(x)+f(1-x)}$$ Since $f$ is smooth, and $f(x)+f(1-x)$ is never $0$, $g$ is also smooth. $g$ takes values $0$ for $x<0$, and values $1$ for $x>1$. It's called a smooth transition function.
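For the record, here is the direct check (just unwinding the definition of $f$) that $g$ behaves as claimed:
$$x\le 0\ \Rightarrow\ f(x)=0,\ f(1-x)=e^{-1/(1-x)}>0\ \Rightarrow\ g(x)=\frac{0}{0+f(1-x)}=0,$$
$$x\ge 1\ \Rightarrow\ f(1-x)=0,\ f(x)=e^{-1/x}>0\ \Rightarrow\ g(x)=\frac{f(x)}{f(x)+0}=1,$$
and for $0<x<1$ both $f(x)$ and $f(1-x)$ are positive, so $0<g(x)<1$.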
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/385058', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58949/']}
jdg_70786
stackexchange
llm_judgeable_groundtruth_similarity
1145496
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that $\ln(x^k)=k\ln(x)$ for any constant $k$, but why is $\ln(x^x)=x\ln(x)$. The exponent $x$ is not constant. Now provide the response and nothing else.
As $x$ is generally not an integer, $x^x$ is defined as $$x^x = e^{x\ln(x)}.$$ Hence, taking the logarithm gives you $\ln(x^x)=x\ln(x)$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1145496', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/215029/']}
jdg_70787
stackexchange
llm_judgeable_groundtruth_similarity
227542
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a 10kW, 72 V golden motor motor and motor controller. When regeneration occurs, the battery terminal voltage increases to more than 72 V. Is there any way to capture this energy to a separate capacitor bank? Now provide the response and nothing else.
Many small and cheap microcontrollers have CAN built in. Look at some of the PIC 18 parts with "8" in their part number. You only need to add the physical CAN transceiver, like an MCP2551. If you just want a differential signal, then you can use a number of differential line drivers/receivers. There is nothing wrong with using CAN transceivers for this; RS-485 or others are options too. Overall, I'd use CAN all the way. You will need line drivers/receivers one way or another. CAN transceivers are as good as any, but then it's easier to use the rest of CAN. The main difference with CAN is that the lower levels of the protocol have been well thought out and are available at low cost, integrated directly in many microcontrollers. RS-485 throws you an electrical spec, then you're on your own. It's a lot more difficult to get all the details of a multi-drop bus right and robust than most people realize. Use CAN. It's all been worked out and done for you. You send whole packets at the firmware level. The hardware takes care of collision detection, retry, and CRC checksum generation and validation. These are all things you want, and with CAN you get them practically for free.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/227542', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/96773/']}
jdg_70788
stackexchange
llm_judgeable_groundtruth_similarity
30661
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Romanian Parliament is trying to adopt a law to cap the interest rates for all loans: According to the form adopted by the Senate’s committees, the annual effective interest rate, also known as DAE, will be limited to maximum 2.5 times the legal interest rate for mortgage loans, and to 18% per year for consumer loans Representatives of the banking sector who attended the debate on this draft bill in the Senate’s committees opposed the initiative of limiting interest rates Some bank representatives and analysts argued that such a law will encourage illegal borrowing, as non-bank financial institutions currently use much higher interest rates and will be forced to refuse crediting some clients. I could not find the reasons for introducing such a law, just a justification: Zamfir [the initiator] claims that he got his inspiration from the model of 14 other EU countries and especially from Spain. So, I am interested in the reasons behind interest rates capping in Spain. Question: What are the main reasons for capping interest rates in Spain? Now provide the response and nothing else.
This would appear to be part of Spain's usury legislation. Usury is the practice of lending at unreasonable levels of interest. Lenders offer loans at a very high rate of interest when they have a reasonable expectation that the loan won't be repaid. Thus they need to make sufficient profit on those loans that are repaid to cover losses on bad debts. So if you are an employed person with a house you can get a secured loan at a good rate because the lender can expect to be paid in full. But if you don't have a steady income or security you will not be able to get a loan at a good rate. This leads to a cycle of debt, with unpaid loans being financed by more loans, at an ever-increasing rate of interest. And people being trapped in debt with no possibility of escape. People take out loans for short-term benefit, without considering the long-term consequences. Spanish law says, in effect, "If you are so unlikely to be able to repay a loan that the lender feels that they need to charge a very high rate of interest, you should not be allowed to have a loan at all." By capping lending rates, the government is saying to lenders. "Don't loan to people who will have trouble repaying the debt." They are doing it to protect people from taking loans that will be ultimately not in their interest. (about Spanish Usury Law)
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/30661', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/11278/']}
jdg_70789
stackexchange
llm_judgeable_groundtruth_similarity
64790
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have breakpoints set but Xcode appears to ignore them. Now provide the response and nothing else.
First of all, I agree 100% with the earlier folks that said turn OFF Load Symbols Lazily . I have two more things to add. (My first suggestion sounds obvious, but the first time someone suggested it to me, my reaction went along these lines: "come on, please, you really think I wouldn't know better...... oh.") Make sure you haven't accidentally set "Active Build Configuration" to "Release." Under "Targets" in the graphical tree display of your project, right click on your Target and do "Get Info." Look for a property named "Generate Debug Symbols" (or similar) and make sure this is CHECKED (aka ON). Also, you might try finding (also in Target >> Get Info) a property called "Debug Information Format" and setting it to "Dwarf with dsym file." There are a number of other properties under Target >> Get Info that might affect you. Look for things like optimizing or compressing code and turn that stuff OFF (I assume you are working in a debug mode, so that this is not bad advice). Also, look for things like stripping symbols and make sure that is also OFF. For example, "Strip Linked Product" should be set to "No" for the Debug target.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/64790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8761/']}
jdg_70790
stackexchange
llm_judgeable_groundtruth_similarity
17390441
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can a similar functionality be achieved without errors? class A<K> { void f(K x) {}}void foo(A<? extends X> a, X x) { a.f(x); // AN error: The method f(capture#1-of ? extends X) in the // type A<capture#1-of ? extends X> is not applicable for the // arguments (X)} I know that it happens because 'a' can be an instance of A<"non-X">, so its 'f' mustn't accept an instance of X as a parameter, but how can I force the parameters to be of the same type? Here is more code: Test class: class Test { <T> void foo(A<T> a, T x) { a.f(x); // now it works! }} In some class: Container<X> container;public void test() { X x = new X(); new Test().foo(container.get(), x);} Here's the container class: public class Container<K> { A<? extends K> get() { return new A<K>(); }} Now provide the response and nothing else.
You can force the parameters to be of the same type by doing the following: // the first class, A<K>:class A<K> { void f(K x) {}}// the second class, defining the method with generic type parametersclass Test { <T> void foo(A<T> a, T x) { a.f(x); // now it works! }}// a third class, that uses the above two:class Main { public static void main(final String... args) { final Test test = new Test(); final A<String> a = new A<>(); test.foo(a, "bar"); }} What this does is: the method foo defines a generic type parameter T and uses it to enforce that the K type parameter of the class A must match the type of x , the second parameter of foo . You could even impose restrictions on <T> if you wish and if it makes sense for your problem, such as <T extends Bar> void foo(A<T> a, T x) {...} , or with super . You would want this if, as Joni asked on a comment in the question, X is actually a type and not a type parameter: you'd use <T extends X> void foo(...) . After you've shown more code, the problem becomes clear. The method .get() of the container returns an instance of A<? extends K> . Therefore, the type parameter of the instance you obtain from .get() is not fully specified. Usually, it is not very good design to return such an unspecified type. For a video presentation with Joshua Bloch, the author of Effective Java and many APIs and features in Java, showing how to improve such an API, check this: http://www.youtube.com/watch?v=V1vQf4qyMXg&feature=youtu.be&t=22m . At exactly 25'36", Joshua Bloch says "don't try to use them [wildcard types] on return values", and he explains it later. Basically, you don't get any more flexibility by using them, and just makes it painfully hard for users of the API to deal with it (you just felt the effects of doing it...). To fix, you could simply try to change the signature of .get() to A<K> get() , so the container class would be: public class Container<K> { A<K> get() { return new A<K>(); }} Since you do know that get() is returning an instance of A<K> , there's no reason to use the older signature: it simply makes you lose information you already know! And if this still doesn't work, your problem might be somewhere else, and you'd need to show even more code... or better still, ask other questions! :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17390441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/917572/']}
jdg_70791
stackexchange
llm_judgeable_groundtruth_similarity
9068
Below is a question asked on the forum reverseengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here is the code of loop that I'm trying to understand the disassembly of it: #include<stdio.h>#include <iostream>using namespace std;int main() { int i, arr[50], num; printf("\nEnter no of elements :"); cin >> num; //Reading values into Array printf("\nEnter the values :"); for (i = 0; i < num; i++) cin >> arr[i]; return 0;} And this is the disassembly: Can you explain me the highlighted part? what is Var_D8 is used for? Why compiler shifted left the edx ? Now provide the response and nothing else.
var_D8 is your int arr[50]. You can recognize it quickly solely by its name: 50 * sizeof(int) = 200 = 0xC8. The next variable on the stack is numb_of_elements, which is positioned at -0x10 on the stack, thus we have some memory between -0xD8 and -0x10 that corresponds to the int array. Here are some explanations about the following instructions:
lea eax, [ebp+var_D8]  ; Get the address of the first element of the array.
mov edx, [ebp+Counter] ; Get the current element index.
shl edx, 2             ; Since the size of each element of the array is 4, multiply the index by 4.
add eax, edx           ; &arr[i] = the address of the current element.
mov [esp], eax         ; Put it on the stack so it can be written by std::cin.
{}
{'log_upvote_score': 4, 'links': ['https://reverseengineering.stackexchange.com/questions/9068', 'https://reverseengineering.stackexchange.com', 'https://reverseengineering.stackexchange.com/users/12671/']}
jdg_70792
stackexchange
llm_judgeable_groundtruth_similarity
376
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\qquad L = \{a^n \mid \exists_{p \geq n}\ p\,,\ p+2 \text{ are prime}\}.$ Is $L$ regular? This question looked suspicious at the first glance and I've realized that it is connected with the twin prime conjecture . My problem is that the conjecture has not been resolved yet, so I am not sure how can I proceed with deciding that the language is regular. Now provide the response and nothing else.
If the twin prime conjecture is true, then $L = a^{*}$, which is regular. If the twin prime conjecture is not true, then there are finitely many twin primes; indeed, there is a largest pair of twin primes $\{p, p + 2\}$. In this case, $L = \{a^{n} | n < p + 1\}$, a finite language. In either case, you get a regular language, so I think it's safe to conclude that $L$ is a regular language... we just won't know which one it is until the twin prime conjecture is resolved.
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/376', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/147/']}
jdg_70793
stackexchange
llm_judgeable_groundtruth_similarity
33246
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I came across the following statement regarding the realized 10-year maturity bond's return over a year: The realized bond return (H) over a year has two components: the yield income earned over time and the capital gain or loss due to yield changes: $$H_{10} \approx Y_{10}-\text{Duration}_{10} \times \Delta Y_{10}.$$ I am a complete economics rookie and I'm trying to understand what's going on here from the mathematical point of view, which I'm gonna present here, but my calculations don't seem to add up. If we denote the bond's coupon with $C$, and the bond's time $t=0$ yield to maturity with $y_0$, then the bond's value at time $t=0$ equals:$$V_0=\frac{C}{1+y_0}+\frac{C}{(1+y_0)^2}+\dots+\frac{C}{(1+y_0)^9}+\frac{F+C}{(1+y_0)^{10}}.$$ At time $t=1$, we can express the bond's value as the function of the time $t=1$ yield to maturity $y$, so we have$$V_1(y)=\frac{C}{1+y}+\frac{C}{(1+y)^2}+\dots+\frac{C}{(1+y)^8}+\frac{F+C}{(1+y)^{9}}.$$Derivative of $V_1$ with respect to $y$ is equal to:$$\frac{dV_1}{dy}=-1\cdot\frac{C}{(1+y)^2}-2\cdot\frac{C}{(1+y)^3}-\dots-8 \cdot \frac{C}{(1+y)^9}-9 \cdot\frac{F+C}{(1+y)^{10}} .$$Now, we can apply some basic calculus here and state that for $\Delta y$ small "enough", we have that$$ V_1(y_0+\Delta y)\approx V_1(y_0)+\frac{d V_1}{dy}(y_0)\cdot \Delta y. $$So now, if we consider the absolute return on our position (buying this bond at time $t=0$, selling it at $t=1$) from the time $t=0$ perspective, under the assumption that the time $t=1$ bond's yield to maturity is $y_1=y_0+\Delta y$, we have that:$$\text{AbsReturn} \approx-V_0+\frac{C}{1+y_0}+\frac{V_1(y_0)+\frac{d V_1}{dy}(y_0)\cdot \Delta y}{1+y_0}.$$That is - we buy the bond for $V_0$, at the end of the first year we are paid the coupon which discounted value is $\frac{C}{1+y_0}$, and the approximation of the time $t=1$ bond's value taking into the account the YTM change is $V_1(y_0)+\frac{d V_1}{dy}(y_0)\cdot \Delta y$ and we also discount it to time $t=0$. Now, we can simplify the expression for AbsReturn since $-V_0+\frac{C}{1+y_0}+\frac{V_1(y_0)}{1+y_0}=0$ and we get:$$ \text{AbsReturn}= \frac{\frac{d V_1}{dy}(y_0)\cdot \Delta y}{1+y_0} ,$$which I guess we can also divide with our initial investment of $V_0$ to get the rate of return so we get:$$ \text{RateOfReturn}= \frac{\frac{d V_1}{dy}(y_0)\cdot \Delta y}{V_0(1+y_0)} ,$$aaand this is where I completely lose it. I can't seem to understand the connection between the original expression and the thing I end up with. What does the term $\text{Duration}_{10}$ in the original formula even stand for - I guess it is the derivation of bond's value with respect to yield - but bond's value at what time: $t=0$ or $t=1$? Does it even make any difference? If it is at time $t=0$, how can we be using linear approximation of that function for approximating bond's value change at time $t=1$? I'm completely puzzled over this. Am I doing something completely wrong in this derivation? I appreciate any insights on this. Thanks! Now provide the response and nothing else.
Let's go back to basics. In terms of its yield $y$, the price of a bond maturing in $n$ years is $$P_n(y) = \sum_{i=1}^n\frac{c}{(1+y)^i} + \frac{100}{(1+y)^n}$$ One year later, the yield is now $y^*$ and the bond now matures in $(n-1)$ years, and its price is $$P_{n-1}(y^*) = \sum_{i=1}^{n-1}\frac{c}{(1+y^*)^i} + \frac{100}{(1+y^*)^{n-1}}$$ We can write $y^* = y + \Delta y$ and expand in powers of $\Delta y$ to get $$P_{n-1}(y^*) \approx P_{n-1}(y) + \frac{\partial P_{n-1}}{\partial y}\Delta y + O(\Delta y^2)$$ The duration of this bond is defined to be $$-D_{n-1}(y) = \frac{1}{P_{n-1}(y)}\frac{\partial P_{n-1}}{\partial y}$$ The total return of the bond, including the cashflow of $c$ received, is therefore $$R = \frac{c + P_{n-1}(y) - D_{n-1}(y)P_{n-1}(y)\Delta y - P_n(y)}{P_n(y)} + O(\Delta y^2)$$ Noting that $$c + P_{n-1}(y) = (1+y)P_{n}(y)$$ we can write the total return as $$R = y - \frac{P_{n-1}(y)}{P_n(y)}D_{n-1}(y) \Delta y + O(\Delta y)^2$$ Note that the only approximation that has been made is in approximating the price of the bond $P_{n-1}(y + \Delta y)$ as a linear function of $\Delta y$. Other than that, we have used exact formulas throughout. The expression that you often see, and that you quoted in your question, $$R = y - D_n(y) \Delta y + O(\Delta y^2)$$ is incorrect when we consider returns over finite time periods - it is only valid for an instantaneous change in yields. In particular, as you found out, it obscures the choice of which duration to use - the correct answer is neither the duration $D_n(y)$ nor the forward duration $D_{n-1}(y)$, but instead the forward duration adjusted by the ratio of bond prices assuming an unchanged yield.
{}
{'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/33246', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/23864/']}
jdg_70794
stackexchange
llm_judgeable_groundtruth_similarity
2471188
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a matrix foo as follows: foo <- cbind(c(1,2,3), c(15,16,17))> foo [,1] [,2][1,] 1 15[2,] 2 16[3,] 3 17 I'd like to turn it into a list that looks like [[1]][1] 1 15[[2]][1] 2 16[[3]][1] 3 17 You can do it as follows: lapply(apply(foo, 1, function(x) list(c(x[1], x[2]))), function(y) unlist(y)) I'm interested in an alternative method that isn't as complicated. Note, if you just do apply(foo, 1, function(x) list(c(x[1], x[2]))) , it returns a list within a list, which I'm hoping to avoid. Now provide the response and nothing else.
Here's a cleaner solution: as.list(data.frame(t(foo))) That takes advantage of the fact that a data frame is really just a list of equal length vectors (while a matrix is really a vector that is displayed with columns and rows...you can see this by calling foo[5], for instance). You could also do this, although it isn't much of an improvement: lapply(1:nrow(foo), function(i) foo[i,])
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2471188', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142477/']}
jdg_70795
stackexchange
llm_judgeable_groundtruth_similarity
20034476
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Scenario : I am building a UI application that gets notifcations from a backend service every few milliseconds. Once I get a new notification i want to update the UI as soon as possible. As I can get lots of notifications within a short amount of time, and as I always only care about the latest event, I use the Throttle() method of the Reactive Extensions framework. This allows me to ignore notification events that are immediately followed by a new notification and so my UI stays responsive. Problem: Say I throttle the event stream of notification events to 50ms and the backend sends a notification every 10ms, the Thottle() method will never return an event as it keeps resetting its Sliding Window again and again. Here i need some additional behaviour to specify something like a timeout, so that i can retrieve atleast one event per second or so in case of such a high throughput of events. How can i do this with Reactive Extensions? Now provide the response and nothing else.
As James stated, Observable.Sample will give you the latest value yielded. However, it will do so on a timer, and not in accordance to when the first event in the throttle occurred. More importantly, however, is that if your sample time is high (say ten seconds), and your event fires right after a sample is taken, you won't get that new event for almost ten seconds. If you need something a little tighter, you'll need to implement your own function. I've taken the liberty of doing so. This code could definitely use some clean up, but I believe it does what you've asked for. public static class ObservableEx{ public static IObservable<T> ThrottleMax<T>(this IObservable<T> source, TimeSpan dueTime, TimeSpan maxTime) { return source.ThrottleMax(dueTime, maxTime, Scheduler.Default); } public static IObservable<T> ThrottleMax<T>(this IObservable<T> source, TimeSpan dueTime, TimeSpan maxTime, IScheduler scheduler) { return Observable.Create<T>(o => { var hasValue = false; T value = default(T); var maxTimeDisposable = new SerialDisposable(); var dueTimeDisposable = new SerialDisposable(); Action action = () => { if (hasValue) { maxTimeDisposable.Disposable = Disposable.Empty; dueTimeDisposable.Disposable = Disposable.Empty; o.OnNext(value); hasValue = false; } }; return source.Subscribe( x => { if (!hasValue) { maxTimeDisposable.Disposable = scheduler.Schedule(maxTime, action); } hasValue = true; value = x; dueTimeDisposable.Disposable = scheduler.Schedule(dueTime, action); }, o.OnError, o.OnCompleted ); }); }} And a few tests... [TestClass]public class ThrottleMaxTests : ReactiveTest{ [TestMethod] public void CanThrottle() { var scheduler = new TestScheduler(); var results = scheduler.CreateObserver<int>(); var source = scheduler.CreateColdObservable( OnNext(100, 1) ); var dueTime = TimeSpan.FromTicks(100); var maxTime = TimeSpan.FromTicks(250); source.ThrottleMax(dueTime, maxTime, scheduler) .Subscribe(results); scheduler.AdvanceTo(1000); results.Messages.AssertEqual( OnNext(200, 1) ); } [TestMethod] public void CanThrottleWithMaximumInterval() { var scheduler = new TestScheduler(); var results = scheduler.CreateObserver<int>(); var source = scheduler.CreateColdObservable( OnNext(100, 1), OnNext(175, 2), OnNext(250, 3), OnNext(325, 4), OnNext(400, 5) ); var dueTime = TimeSpan.FromTicks(100); var maxTime = TimeSpan.FromTicks(250); source.ThrottleMax(dueTime, maxTime, scheduler) .Subscribe(results); scheduler.AdvanceTo(1000); results.Messages.AssertEqual( OnNext(350, 4), OnNext(500, 5) ); } [TestMethod] public void CanThrottleWithoutMaximumIntervalInterferance() { var scheduler = new TestScheduler(); var results = scheduler.CreateObserver<int>(); var source = scheduler.CreateColdObservable( OnNext(100, 1), OnNext(325, 2) ); var dueTime = TimeSpan.FromTicks(100); var maxTime = TimeSpan.FromTicks(250); source.ThrottleMax(dueTime, maxTime, scheduler) .Subscribe(results); scheduler.AdvanceTo(1000); results.Messages.AssertEqual( OnNext(200, 1), OnNext(425, 2) ); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20034476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3002200/']}
jdg_70796
stackexchange
llm_judgeable_groundtruth_similarity
36233
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know very little about what I am talking about in what follows, so I appreciate any all help in pointing out all of my mistakes -- otherwise I won't be able to learn more and advance in my knowledge of these interesting subjects. I want to learn about lambda calculus because it seems like an interesting way to think about functions in general, and because a generalization called the stochastic $\pi$ calculus studied by Luca Cardelli and others seems to allow many computational interpretations of biology, something which I am really interested in. Also other reasons, like wanting to learn and understand functional programming, in order to have an alternative to C++ which I dislike. However, I just learned that type theory, at least as stated by Lof, is supposed to be a way to formalize intuitionistic logic/constructive mathematics, which rejects the Law of the Excluded Middle and even the Axiom of Choice. My background is more in mathematics than computer science, so although I can see how having to construct something to prove that exists is an acceptable restriction for people interested primarily in computation, I am really uncomfortable with having to give up proof by contradiction and the Law of the Excluded Middle. Perhaps that makes me old-fashioned, and my grandkids will laugh at me for this concern, but proof by contradiction can be very useful and is how I learned a lot of the math which I know. Is there a way to learn about $\lambda-$calculus and type theory while at least being agnostic about the Law of the Excluded Middle and not outright rejecting it? The discussion on p.9 of the Homotopy Type Theory book seems to say something to the effect that Univalent Foundations and Homotopy Type Theory can be compatible with the Law of the Excluded Middle and the Axiom of Choice, even though they contradict the univalent axiom? Needless to say I did not quite understand the argument about (-1) types and thus am still somewhat skeptical and unconvinced. Also, in one of his lectures at the Oregon Programming Language Summer School posted on youtube , Professor Awodey, one of the big names behind the IAS's Homotopy Type Theory/Univalent Foundations Project, seems to identify the Heyting algebra as underlying logic, rather than the Boolean algebra. I read somewhere else (I forgot where unfortunately), that Heyting algebra is what results if one rejects the Law of the Excluded Middle, and thus presumably founding logic upon Boolean algebra means accepting the Law of the Excluded Middle. Ideally I would prefer to not have to choose between $\lambda$-calculus and Boolean algebra, since the latter underlies most of my understanding of logic, and is also the basis for probability theory, which I know very well and would not like to give up or reject as half-baked. Also I thought Boolean functions were a very hot topic of computer science research, so it seems kind of strange that some computer scientists would prefer Heyting algebra. These questions might be relevant: Where is the proof that Coq + Excluded Middle is consistent Heyting algebra in simply typed lambda calculus Can boolean algebra be expressed in simply typed lambda caclulus? If they even answer my question, would you mind explaining to me how (as if I were five, so to speak)? I don't quite understand the discussions on them. Now provide the response and nothing else.
You seem to be confusing several things here. First of all, like Alexis said in her answer, I don't see why you would need to accept/reject the principles of a given logical theory in order to study it and learn about it. The fact that your theory is intuitionistic doesn't mean that your meta-theory has to be! You may freely use proof by contradiction or the axiom of choice in order to prove results about the $\lambda$-calculus (people do that all the time). Second: you seem to be interested in the $\lambda$-calculus as a programming language, which means you can safely ignore (at least on a first approach) its connections to intuitionistic logic. You may have heard about the Curry-Howard correspondence and the fact that the $\lambda$-calculus and type theory are deeply connected, which is great, but it is not necessarily the preferred route from your point of view. Blurring the distinction between functional programming and proof theory is wonderful but, in your case, it seems to have generated more confusion than anything. Third: if the stochastic $\pi$-calculus is what you are really interested in, then perhaps you should first look into standard process calculi, starting from CCS (arguably the simplest) and the related machinery (bisimulations, etc.) and then moving on to the $\pi$-calculus. In fact, I think you may even skip the $\lambda$-calculus entirely; at any rate, there is no need to look into intuitionistic logic at all. (Personally, I know some experts in probabilistic concurrency who only have a superficial understanding of the $\lambda$-calculus). Fourth: if you really are into Curry-Howard, then be reassured that functional programming is not at odds with classical logic: proof by contradiction has a well-known correspondence in programming languages. Googling "computational content of classical logic" should get you started on that. But I wouldn't look into that unless you first make sure you got rid of the confusion you have between the $\lambda$-calculus as a programming language and its relationship with proof theory (that is, until you have understood what Curry-Howard is about), lest you get even more confused.
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/36233', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/40887/']}
jdg_70797
stackexchange
llm_judgeable_groundtruth_similarity
388345
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $ echo $(echo x; echo y)x y$ a='echo x; echo y'$ echo $($a) # expect 'x y'x; echo y Why does command substitution behave in such way? How to perform the command substitution for list of commands stored in a variable without using eval and bash -c ? echo $(eval $a) and echo $(bash -c "$a") actually does what I want, but I heard that using eval often is a wrong way to solve problems, so I want know how to manage without these commands (using bash -c is in fact just the same thing) Now provide the response and nothing else.
Word splitting happens quite late in the evaluation of a command. Most crucially for you, it happens after variable expansion and command substitution. This means that the second line in
s="echo x; echo y"
echo $($s)
will first have $s expanded to echo x; echo y and then have that command executed without splitting it into a compound command consisting of two echos. (Details: $s gets split into four words, echo, x;, echo and y. The encapsulating $(...) executes this as one command, echo with three arguments, x;, echo and y, i.e., you essentially have echo $('echo' 'x;' 'echo' 'y').) What is given to the outermost echo is, therefore, the string x; echo y (actually three words, as $(...) is not in quotes), since this is what was output by the innermost echo. Compare that to
s1="echo x"
s2="echo y"
echo $($s1;$s2)
which results in what you'd expect. Yes, "eval is evil", most of the time, and sh -c can feel clunky and can be just as fragile if you don't know what you're doing. But suppose you have a bit of shell code that you trust in a string in a shell script. In that case, these tools are the only (?) way to get that code to execute correctly, since this often requires explicitly evaluating the text in the string as shell code (with all evaluation phases from start to finish), especially if it's a compound command. I think it's only due to the Unix shell's intimate relation to text that you're lucky enough to have echo $($s) execute something at all. Just think about the steps you'd have to take to get a C program to execute a piece of C code that gets given as a string...
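If the goal is simply to keep eval out of the script, one common workaround (a minimal sketch — the function name cmds is just an illustrative choice) is to store the compound command in a shell function instead of a string, since calling a function runs its body through every evaluation phase:
cmds() {
  echo x
  echo y
}

echo $(cmds)    # prints: x y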
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/388345', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/230365/']}
jdg_70798
stackexchange
llm_judgeable_groundtruth_similarity
88386
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Usually, when starting my car, as soon as I turn the key all the way around, the car instantly starts revving and then springs to life. For some reason, today, when starting my car. I turned the key in the ignition and nothing happened. My dashboard's lights turned on, however, no sputtering or no audio indication that my car was doing anything. I tried this multiple times, but still nothing. On my final attempt, I turned the key all the way -- but held it there. For 5 or so seconds, the car did nothing, but then suddenly it started up like it usually would. I'm aware that this has likely something to do with my battery going bad, but what I'm confused about is the specific nature of this event. In my experience, either the car will sputter for a bit or just wont start up whatsoever. Not this weird delay.Really, I'm just curious if there's anything else I should be concerned about other than my battery. Thank you!My car is a 2010 Camaro with quite a good number of miles on it. Edit: Thank you guys for all your responses. Seriously, you guys are absolute geniuses! Now provide the response and nothing else.
The most likely causes, from my experience of this happening, are: worn/burnt contacts in the starter solenoid; worn starter motor brushes; a bad connection between the high-current cable and the starter solenoid; a bad connection between the starter solenoid and the starter motor; or a bad engine earth strap connection. It is possible to have a high-resistance connection that heats up as current flows through it. The heat causes the metal contacts to expand, which then improves the connection and allows a high current to flow. This would account for the delayed start.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/88386', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/71644/']}
jdg_70799
stackexchange
llm_judgeable_groundtruth_similarity
24048745
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is just a curiosity/annoyance and not really an issue. I have a web forms web site project using VS 2013. Somehow, the project name includes a (1) or a (2) in the name, I have tried editing the solution file, deleting all other sites, rebuilding/cleaning etc, still cannot figure out where it gets the (2) from. Solution File : Microsoft Visual Studio Solution File, Format Version 12.00# Visual Studio Express 2013 for WebVisualStudioVersion = 12.0.30501.0MinimumVisualStudioVersion = 10.0.40219.1Project("{E24C65DC-7377-472B-9ABA-BC803B73C61A}") = "Site", "http://localhost:52806", "{5411FD57-5AFE-4A3A-9474-0DCD64C89003}" ProjectSection(WebsiteProperties) = preProject UseIISExpress = "true" TargetFrameworkMoniker = ".NETFramework,Version%3Dv4.0" Debug.AspNetCompiler.VirtualPath = "/localhost_52806" Debug.AspNetCompiler.PhysicalPath = "..\..\..\..\..\..\Dropbox\Projects\Site\" Debug.AspNetCompiler.TargetPath = "PrecompiledWeb\localhost_52806\" Debug.AspNetCompiler.Updateable = "true" Debug.AspNetCompiler.ForceOverwrite = "true" Debug.AspNetCompiler.FixedNames = "false" Debug.AspNetCompiler.Debug = "True" Release.AspNetCompiler.VirtualPath = "/localhost_52806" Release.AspNetCompiler.PhysicalPath = "..\..\..\..\..\..\Dropbox\Projects\Site\" Release.AspNetCompiler.TargetPath = "PrecompiledWeb\localhost_52806\" Release.AspNetCompiler.Updateable = "true" Release.AspNetCompiler.ForceOverwrite = "true" Release.AspNetCompiler.FixedNames = "false" Release.AspNetCompiler.Debug = "False" SlnRelativePath = "..\..\..\..\..\..\Dropbox\Projects\Site\" EndProjectSectionEndProjectGlobal GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {5411FD57-5AFE-4A3A-9474-0DCD64C89003}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {5411FD57-5AFE-4A3A-9474-0DCD64C89003}.Debug|Any CPU.Build.0 = Debug|Any CPU EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSectionEndGlobal Now provide the response and nothing else.
You might have two sites with the same IIS binding information. This usually happens to me if I create, delete, then re-create a web site. Find your IISExpress folder (mine is in C:\Users\brandon\Documents\IISExpress), then go to config/applicationhost.config Find your site name with the duplicate binding and remove the <site> node. It will look something like this <site name="WEBSITENAME" id="21"> <application path="/" applicationPool="Clr2IntegratedAppPool"> <virtualDirectory path="/" physicalPath="C:\Users\...\YourProject" /> </application> <bindings> <binding protocol="http" bindingInformation="*:SomePortNumber:localhost" /> </bindings></site>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24048745', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/966823/']}
jdg_70800
stackexchange
llm_judgeable_groundtruth_similarity
24496605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I currently have 2 video elements on my html-page. Both embed exactly the same .mp4 video from the same URL. Is there any way to tell the browser to duplicate the rendered video from the first video element instead of letting the browser download both videos? You can cleary see that the two videos are loaded seperated as they have a different buffering time before playback sometimes and the videos dont play synchronized everytime. My Code: <video autoplay id="previewVideo" data-videoid="JYpUXXD4xgc"> <source src="video.php?videoid=JYpUXXD4xgc" type="video/mp4"/></video><video autoplay id="bigVideo" data-videoid="JYpUXXD4xgc"> <source src="video.php?videoid=JYpUXXD4xgc" type="video/mp4"/></video> Now provide the response and nothing else.
This can be done in some very easy steps via Javascript and the Canvas Element: HTML: <video autoplay id="previewVideo" data-videoid="JYpUXXD4xgc"> <source src="video.php?videoid=JYpUXXD4xgc" type="video/mp4"/></video> <canvas id="bigVideo"></canvas> JavaScript: document.addEventListener('DOMContentLoaded', function() { var v = document.getElementById('previewVideo'); var canvas = document.getElementById('bigVideo'); var context = canvas.getContext('2d'); var cw = Math.floor(canvas.clientWidth); var ch = Math.floor(canvas.clientHeight); canvas.width = cw; canvas.height = ch; v.addEventListener('play', function() { updateBigVideo(this, context, cw, ch); }, false);}, false);function updateBigVideo(v, c, w, h) { if (v.paused || v.ended) return false; c.drawImage(v, 0, 0, w, h); setTimeout(updateBigVideo, 20, v, c, w, h);} The canvas fetches the image of the video and displays it again on the BigVideo. The updateBigVideo() function is called every 20ms, resulting in a framerate of about 50 FPS. Read more
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24496605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3582159/']}
jdg_70801
stackexchange
llm_judgeable_groundtruth_similarity
5664136
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried configuring log4r with Rails 3.0.4 based on this article: http://www.dansketcher.com/2007/06/16/integrating-log4r-and-ruby-on-rails/ /Users/toto/.rvm/gems/ruby-1.9.2-p0/gems/log4r-1.1.9/lib/log4r/yamlconfigurator.rb:166:in `sub!': can't convert Pathname into String (TypeError) from /Users/toto/.rvm/gems/ruby-1.9.2-p0/gems/log4r-1.1.9/lib/log4r/yamlconfigurator.rb:166:in `block in paramsub' from /Users/toto/.rvm/gems/ruby-1.9.2-p0/gems/log4r-1.1.9/lib/log4r/yamlconfigurator.rb:165:in `each' from /Users/toto/.rvm/gems/ruby-1.9.2-p0/gems/log4r-1.1.9/lib/log4r/yamlconfigurator.rb:165:in `paramsub' from /Users/toto/.rvm/gems/ruby-1.9.2-p0/gems/log4r-1.1.9/lib/log4r/yamlconfigurator.rb:156:in `block in decode_hash_params' I have Googled for a Rails 3 integration, but have not found a working solution. Can anyone point me to a working code snippet that will allow log configuration using a YAML file, and initialization at runtime? Just as a reference, I placed the sample logger.rb in the config/initializers folder and the log4r.yml in the config directory. Thanks Now provide the response and nothing else.
Hehe ...The idea of Log4r comes from the famous "Log4j", which is my favorite logger in my java programming life.However log4r's doc is really poor, and it's really hard for newbies. Let me show my solution: Step1. create the log4r config file: (file name: config/log4r.yml) log4r_config: # define all loggers ... loggers: - name : production level : WARN trace : 'false' outputters : - datefile - name : development level : DEBUG trace : 'true' outputters : - datefile # define all outputters (incl. formatters) outputters: - type: DateFileOutputter name: datefile dirname: "log" # notice the file extension is needed! # if you want the file is named by the process, just comment it, # then it will automatically get the same name with its process, # e.g. rails_2017-05-03.log filename: "my_app.log" formatter: date_pattern: '%H:%M:%S' pattern : '%d %l: %m ' type : PatternFormatter step2. modify config/application.rb require 'rails/all'# add these line for log4rrequire 'log4r'require 'log4r/yamlconfigurator'require 'log4r/outputter/datefileoutputter'include Log4rBundler.require(:default, Rails.env) if defined?(Bundler)module Zurich class Application < Rails::Application #... # assign log4r's logger as rails' logger. log4r_config= YAML.load_file(File.join(File.dirname(__FILE__),"log4r.yml")) YamlConfigurator.decode_yaml( log4r_config['log4r_config'] ) config.logger = Log4r::Logger[Rails.env] endend step3. add this line to your Gemfile. # which is the latest version and support "datefileoutputter"gem 'log4r', '1.1.9' (if you are using Rails 4+ (including Rails6), there still is step4: add this file to config/initializers folder # config/initializers/log4r_patch_for_rails4.rbclass Log4r::Logger def formatter() # for rails4+ Proc.new{|severity, time, progname, msg| formatted_severity = sprintf("%-5s",severity.to_s) formatted_time = time.strftime("%Y-%m-%d %H:%M:%S") "[#{formatted_severity} #{formatted_time} #{$$}]\n #{msg}\n" } end def formatter= temp # for rails6+ endend ) It's done. Now "cd" into your Rails application folder, run "bundle" to install log4r, then "rails s",you will find the log files in "/log" folder like this: May 9 17:05 rails_2011-05-09.logMay 10 13:42 rails_2011-05-10.log and the log content is( my favorite format ) : $ tail log/rails_2011-05-10.logStarted GET "/????_settings/19/edit" for 127.0.0.1 at ...13:42:11 INFO: Processing by ????SettingsController ...13:42:11 INFO: Parameters: {"id"=>"19"}13:42:12 DEBUG: ????Setting Load (0.0ms) SELECT "d ...13:42:12 INFO: Completed 200 OK in 750ms My environment: OS: cygwin running in XP ruby 1.8.7 (2011-02-18 patchlevel 334) [i386-mingw32] rails: 3.0.5 gem: 1.6.0 Any question please let me know~ :-) refer to: https://stackoverflow.com/a/20154414/445908 ,and rails6 log4r tagged_logging.rb:22:in `call': wrong number of arguments
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5664136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/553620/']}
jdg_70802
stackexchange
llm_judgeable_groundtruth_similarity
1875792
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been asked to find the value of $8^{\log_{2} 5}$ I understand that I could proceed to turn this into $\log_{8} x = \log_{2} 5$ Where do I go from there? I assumed changing both to the same base, but I'm not sure how to do so or what to do after having the same base for both sides. Thank you! Now provide the response and nothing else.
Since $8=2^3$ and $2^{\log_2 5}=5$, we have $$8^{\log_2 5}=\left(2^3\right)^{\log_2 5}=\left(2^{\log_2 5}\right)^3=5^3=125.$$
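As a quick numerical sanity check, here is a minimal Python sketch (added purely for illustration; the tiny deviation from 125 is just floating-point rounding):

```python
import math

# 8^(log_2 5) should be exactly 125; floating point only approximates it.
value = 8 ** math.log2(5)
print(value)                       # ~125.00000000000001
print(math.isclose(value, 125.0))  # True
```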
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1875792', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/287954/']}
jdg_70803
stackexchange
llm_judgeable_groundtruth_similarity
88565
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following: $$x^2\ne x\implies x\ne 1$$ I immediately answered true, but for some reason, everyone (including my classmates and math teacher) is disagreeing with me. According to them, when $x^2$ is not equal to $x$ , $x$ also can't be $0$ and because $0$ isn't excluded as a possible value of $x$ , the sentence is false. After hours, I am still unable to understand this ridiculously simple implication. I can't believe I'm stuck with something so simple. Why I think the logical sentence above is true: My understanding of the implication symbol $\implies$ is the following:If the left part is true, then the right part must be also true. If the left part is false, then nothing is said about the right part. In the right part of this specific implication nothing is said about whether $x$ can be $0$ . Maybe $x$ can't be $-\pi i$ too, but as I see it, it doesn't really matter, as long as $x \ne 1$ holds. And it always holds when $x^2 \ne x$ , therefore the sentence is true. TL;DR: $x^2 \ne x \implies x \ne 1$ : Is this sentence true or false, and why? Sorry for bothering such an amazing community with such a simple question, but I had to ask someone. Now provide the response and nothing else.
First, some general remarks about logical implications/conditional statements. As you know, $P \rightarrow Q$ is true when $P$ is false, or when $Q$ is true. As mentioned in the comments, the contrapositive of the implication $P \rightarrow Q$, written $\lnot Q \rightarrow \lnot P$, is logically equivalent to the implication. It is possible to write implications with merely the "or" operator. Namely, $P \rightarrow Q$ is equivalent to $\lnot P\text{ or }Q$, or in symbols, $\lnot P\lor Q$. Now we can look at your specific case, using the above approaches. If $P$ is false, ie if $x^2 \neq x$ is false (so $x^2 = x$ ), then the statement is true, so we assume that $P$ is true. So, as a statement, $x^2 = x$ is false. Your teacher and classmates are rightly convinced that $x^2 = x$ is equivalent to ($x = 1$ or $x =0\;$), and we will use this here. If $P$ is true, then ($x=1\text{ or }x =0\;$) is false. In other words, ($x=1$) AND ($x=0\;$) are both false. I.e., ($x \neq 1$) and ($x \neq 0\;$) are true. I.e., if $P$, then $Q$. The contrapositive is $x = 1 \rightarrow x^2 = x$. True. We use the "sufficiency of or" to write our conditional as: $$\lnot(x^2 \neq x)\lor x \neq 1\;.$$ That is, $x^2 = x$ or $x \neq 1$,which is $$(x = 1\text{ or }x =0)\text{ or }x \neq 1,$$ which is$$(x = 1\text{ or }x \neq 1)\text{ or }x = 0\;,$$ which is$$(\text{TRUE})\text{ or }x = 0\;,$$ which is true.
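If it helps to see the claim checked mechanically, here is a small illustrative Python sketch (not part of the original argument) that brute-forces the implication over a few sample values of x, using the fact that P → Q is the same as (not P) or Q:

```python
# Check (x^2 != x) -> (x != 1) over sample values, including 0 and 1.
# An implication P -> Q is equivalent to (not P) or Q.
samples = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

for x in samples:
    p = (x * x != x)   # premise: x^2 != x
    q = (x != 1)       # conclusion: x != 1
    assert (not p) or q, f"counterexample found at x = {x}"

print("No counterexample: the implication holds for every sampled x.")
```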
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/88565', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20664/']}
jdg_70804
stackexchange
llm_judgeable_groundtruth_similarity
9824435
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a construct in C# which allows you to create a anonymous class implementing an interface, just like in Java? Now provide the response and nothing else.
As has already been stated, no, this is not possible. However, you can make a class that implements the desired interface and accepts a lambda in its constructor, so that you can turn a lambda into a class that implements the interface. Example: public class LambdaComparer<T> : IEqualityComparer<T>{ private readonly Func<T, T, bool> _lambdaComparer; private readonly Func<T, int> _lambdaHash; public LambdaComparer(Func<T, T, bool> lambdaComparer) : this(lambdaComparer, EqualityComparer<T>.Default.GetHashCode) { } public LambdaComparer(Func<T, T, bool> lambdaComparer, Func<T, int> lambdaHash) { if (lambdaComparer == null) throw new ArgumentNullException("lambdaComparer"); if (lambdaHash == null) throw new ArgumentNullException("lambdaHash"); _lambdaComparer = lambdaComparer; _lambdaHash = lambdaHash; } public bool Equals(T x, T y) { return _lambdaComparer(x, y); } public int GetHashCode(T obj) { return _lambdaHash(obj); }} Usage (obviously doing nothing useful here, but you get the idea): var list = new List<string>() { "a", "c", "a", "F", "A" };list.Distinct(new LambdaComparer<string>((a,b) => a == b));
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9824435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/881846/']}
jdg_70805
stackexchange
llm_judgeable_groundtruth_similarity
129422
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I burned them in a small aluminium tray. While IPA burns orange it produces a smell of soot, but while ethanol burns blue there isn't any smell. Also, ethanol made the tray really cold when I poured it onto the tray before burning it, compared to isopropyl alcohol. How can this be explained? Note: IPA concentration is 99%, ethanol is 97%. Now provide the response and nothing else.
Interesting observation. The blue flame color of all hydrocarbon fuels is due to emission from small diatomic carbon species such as $C_2$ or CH. There is nothing magical about IPA having a yellow flame. The yellow flame originates from incomplete combustion: there is more carbon per mole of IPA than of ethanol. Yellow flames are called reducing flames and blue flames are called oxidizing flames. In older times, when the Bunsen burner was taught in detail, it was shown that the blue flame of methane can be readily converted into a yellow flame by altering the air supply valve. The yellow color, if you view it through a spectroscope, is a continuous spectrum (rainbow-like), which shows that it behaves like a black-body radiator. That black-body radiator is nothing but a glowing soot (carbon) particle, essentially a piece of glowing charcoal, only a very small one. On the other hand, the blue flame shows a band-like structure. I once had a chance to view the blue acetylene-air flame through a diffraction grating. It was an amazing sight; I had never seen such a structure of colored bands before. They are called Swan bands. Unfortunately, I cannot find any color images of Swan bands in Google Images. Here is one example from an 1857 paper by Plücker and Hittorf, "I. On the spectra of ignited gases and vapours, with especial regard to the different spectra of the same elementary gaseous substance". This more than 150-year-old picture does not do justice to what is, in reality, an extremely beautiful spectrum.
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/129422', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/87707/']}
jdg_70806
stackexchange
llm_judgeable_groundtruth_similarity
2435933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a converter that can make any jQuery code/plugin work in noConflict mode, or is it only possible with find and replace? Now provide the response and nothing else.
A general solution would be: function deep_in_array($needle, $haystack) { if(in_array($needle, $haystack)) { return true; } foreach($haystack as $element) { if(is_array($element) && deep_in_array($needle, $element)) return true; } return false;} The reason why I chose to use in_array and a loop is: before I examine deeper levels of the array structure, I make sure that the searched value is not in the current level. This way, I expect the code to be faster than a plain depth-first search. Of course, if your array is always two-dimensional and you only want to search arrays of that kind, then this is faster: function in_2d_array($needle, $haystack) { foreach($haystack as $element) { if(in_array($needle, $element)) return true; } return false;}
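The same recursive idea carries over to other languages; here is a rough Python sketch for comparison (the function name and the sample data are made up for illustration):

```python
def deep_in_list(needle, haystack):
    """Return True if needle occurs anywhere in a (possibly nested) list."""
    if needle in haystack:          # check the current level first
        return True
    return any(
        isinstance(item, list) and deep_in_list(needle, item)
        for item in haystack        # only then recurse into nested lists
    )

# Example usage with made-up data:
data = [1, [2, 3], [[4, "x"], 5]]
print(deep_in_list("x", data))  # True
print(deep_in_list(9, data))    # False
```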
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2435933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84201/']}
jdg_70807
stackexchange
llm_judgeable_groundtruth_similarity
22107
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am editing an init.d script. The init.d script runs a utility script which then runs a process. From either bash script, how would I make it launch the main process as a specific user and group? Now provide the response and nothing else.
The simplest way is to use the su(1) command, it has an option that allows you to run a command via the user's shell, example: su foo -c ls This will switch to the user foo and run the ls command. If the user you want to use does not have a valid shell (ie it's not in /etc/shells, like /bin/false or /sbin/nologin) you will also have to specify a shell on the command line. Example with output: # su nobody -s /bin/bash -c iduid=99(nobody) gid=99(nobody) groups=99(nobody) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/22107', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_70808
stackexchange
llm_judgeable_groundtruth_similarity
314699
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am currently building a "battery box", for camping, and kayaking, etc. It has a few different features on it, such as a 150W inverter, a 48W spotlight, 12V accessory outlet, and strobe lights. The inverter and the spotlight seem to draw quite a bit of power, 7 Amps and 3 Amps respectively. (correct me if that doesn't sound right.) This will all be powered by an ExpertPower 12v 8ah sealed lead acid battery, however I do not know if it can handle the load that I will put on it. I tried looking around, but most people were using this battery for some sort of home security system. I found a datasheet on the bottom of their website that I think would tell me that ( http://www.expertpower.us/exp1280 ), but I cannot interpret it because of its complexity. If someone could also show me the process of calculating this, that would be great, as I am very eager to learn. Now provide the response and nothing else.
I think you will be disappointed with that battery. Lead acid batteries are best at low discharge rates; most these days are rated at the 20-hour rate. That battery is rated 8 Ah, so it will deliver that capacity when discharged over a 20-hour period, at 400 mA. At higher currents, the capacity will be less. Here are a few lines taken from the discharge capacity table in the data sheet, for constant-current discharge down to a cell voltage of 1.75 V (more on that later!): 0.4 A for 20 h gives 8.0 Ah; 4.8 A for 1 h gives 4.8 Ah; 16.5 A for 10 min gives 2.8 Ah. So there's quite a capacity penalty for high discharge rates. A 150 W inverter will take around 15 A (assuming 85% efficiency) to deliver full power; 7 A is only around half maximum load. The lifetime of a lead acid battery, before it wears out, is strongly related to its depth of discharge. That battery is rated for 260 cycles at 100% DOD, i.e. down to 1.75 V. You can double that lifetime if you only discharge to 50%, and multiply it by five if you go to 30%, that is, stop discharging at a higher voltage. Depending on how you want to use it, weight and capacity may matter more to you than cycle lifetime. It's not clear whether you have a vehicle with you, or whether everything is carried in your kayak. If you have the capacity to carry more weight, then a cheap auto battery is by far the best bang for the buck for the sort of currents you are talking about. Both nickel and lithium chemistries stand up to high discharge rates better than lead, and lithium in particular gives much better power and energy per unit weight.
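To put a rough number on that capacity penalty, here is a small Python sketch (my own illustration, not from the original answer) that fits Peukert's law to two of the data-sheet points quoted above (8.0 Ah at 0.4 A, 4.8 Ah at 4.8 A) and estimates runtime at the roughly 15 A a 150 W inverter can draw; treat the result as a ballpark only:

```python
import math

# Two (current, runtime) points taken from the quoted data sheet.
i1, t1 = 0.4, 20.0   # 0.4 A for 20 h  -> 8.0 Ah
i2, t2 = 4.8, 1.0    # 4.8 A for 1 h   -> 4.8 Ah

# Peukert's law: I**k * t = constant, so k follows from the two points.
k = math.log(t1 / t2) / math.log(i2 / i1)

def runtime_hours(current_a):
    """Estimated runtime at a constant current, per Peukert's law."""
    return t1 * (i1 / current_a) ** k

load = 15.0  # rough full-power draw of a 150 W inverter at 12 V
hours = runtime_hours(load)
print(f"Peukert exponent k is about {k:.2f}")
print(f"Estimated runtime at {load:.0f} A: about {hours * 60:.0f} minutes "
      f"({load * hours:.1f} Ah effective capacity)")
```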
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/314699', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/154071/']}
jdg_70809
stackexchange
llm_judgeable_groundtruth_similarity
41056761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Text with long text inside a ScrollView and I want to detect when the user has scrolled to the end of the text so I can enable a button. I've been debugging the event object from the onScroll event but there doesn't seem to be any value I can use. Now provide the response and nothing else.
I did it like this: import React from 'react';import {ScrollView, Text} from 'react-native'; const isCloseToBottom = ({layoutMeasurement, contentOffset, contentSize}) => { const paddingToBottom = 20; return layoutMeasurement.height + contentOffset.y >= contentSize.height - paddingToBottom;}; const MyCoolScrollViewComponent = ({enableSomeButton}) => ( <ScrollView onScroll={({nativeEvent}) => { if (isCloseToBottom(nativeEvent)) { enableSomeButton(); } }} scrollEventThrottle={400} > <Text>Here is very long lorem ipsum or something...</Text> </ScrollView>); export default MyCoolScrollViewComponent; I wanted to add paddingToBottom because usually it is not needed that ScrollView is scrolled to the bottom till last pixel. But if you want that set paddingToBottom to zero.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/41056761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48869/']}
jdg_70810
stackexchange
llm_judgeable_groundtruth_similarity
29692
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Always when the term RAII is used, people are actually talking about deconstruction instead of initialisation. I think I have a basic understanding what it might mean but I'm not quite sure. Also: is C++ the only RAII language? What about Java or C#/.NET? Now provide the response and nothing else.
Resource Acquisition Is Initialization means that objects should look after themselves as a complete package and not expect other code to tell an instance "hey by the way, you're going to be cleaned up soon - please tidy up now." It does usually mean there is something meaningful in the destructor. It also means you write a class specifically to manage resources, knowing that under certain hard-to-predict circumstances, like exceptions being thrown, you can count on destructors executing. Say you want to write some code where you're going to change the windows cursor to a wait (hourglass, donut-of-not-working, etc) cursor, do your stuff, and then change it back. And say also that "do your stuff" might throw an exception. The RAII way of doing that would be to make a class whose ctor set the cursor to wait, whose one "real" method did whatever it was you wanted done, and whose dtor set the cursor back. Resources (in this case the cursor state) are tied to the scope of an object. When you acquire the resource you initialize an object. You can count on the object getting destructed if exceptions are thrown, and that means you can count on getting to clean up the resource. Using RAII well means you don't need finally . Of course, it relies on deterministic destruction, which you can't have in Java. You can get a sort of deterministic destruction in C# and VB.NET with using .
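The wait-cursor example is only described in prose above; as a loose analogy, here is a short Python sketch of the same scope-tied cleanup idea using a context manager (Python's with plays a role similar to C#'s using; the cursor functions below are stand-ins, not a real GUI API):

```python
_cursor = "arrow"

def get_cursor():
    return _cursor

def set_cursor(shape):
    # stand-in for a real GUI call
    global _cursor
    _cursor = shape

class WaitCursor:
    """Switch to a busy cursor on entry and restore the previous cursor on
    exit, even if the work in between raises an exception."""

    def __enter__(self):
        self._old = get_cursor()
        set_cursor("hourglass")
        return self

    def __exit__(self, exc_type, exc, tb):
        set_cursor(self._old)   # cleanup always runs
        return False            # do not swallow exceptions

try:
    with WaitCursor():
        raise RuntimeError("exception thrown while 'doing your stuff'")
except RuntimeError:
    pass

print(get_cursor())  # "arrow": the cursor was restored despite the exception
```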
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/29692', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/10199/']}
jdg_70811
stackexchange
llm_judgeable_groundtruth_similarity
2550387
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to find the real part of the complex number (in Euler's form) $ z = e^{e^{i \theta } } $ ? I got confused on how to proceed. I am a beginner to complex numbers. Now provide the response and nothing else.
$$z = e^{e^{i \theta} }$$ $$\implies z=e^{\cos \theta +i\sin \theta }$$ $$\implies z=e^{\cos \theta} e^{i\sin \theta }$$ $$\implies z=e^{\cos \theta}(\cos (\sin\theta) +i\sin(\sin \theta))$$
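A quick numerical check of this derivation (an illustrative Python sketch; any difference is just floating-point noise):

```python
import cmath
import math

theta = 0.7  # arbitrary test angle
z = cmath.exp(cmath.exp(1j * theta))

re_expected = math.exp(math.cos(theta)) * math.cos(math.sin(theta))
im_expected = math.exp(math.cos(theta)) * math.sin(math.sin(theta))

print(z.real, re_expected)  # real parts agree
print(z.imag, im_expected)  # imaginary parts agree
print(cmath.isclose(z, complex(re_expected, im_expected)))  # True
```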
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2550387', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/458544/']}
jdg_70812
stackexchange
llm_judgeable_groundtruth_similarity
60855860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to launch serverless from launch.json with no luck (despite it working perfectly from command line). How do I launch this? Error: Attribute 'program' does not exist From the VSCode command line in the project root this works perfectly: sls deploy --config slsSite.yml Heres my current launch config: { "type": "node", "request": "launch", "name": "deploy site", "preLaunchTask": "buildsite", //ref tasks.json "program": "${workspaceFolder}\\serverless", "args": ["deploy", "--config slsSite.yml"]} When I change the program to the path to the node module: "program":"${workspaceFolder}\\serverless_site\\node_modules\\serverless\\bin\\serverless I get: This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file Then I tried (in addition to the program line above): "args": ["deploy", "--config ${workspaceFolder}\\slsSite.yml"] and got the same error. Now provide the response and nothing else.
SWIFTUI 2.0 Here is possible alternate solution in Xcode 12 / iOS 14 (SwiftUI 2.0) that can be used in same scenario when controls for scrolling is outside of scrolling area (because SwiftUI2 ScrollViewReader can be used only inside ScrollView ) Note: Row content design is out of consideration scope Tested with Xcode 12b / iOS 14 class ScrollToModel: ObservableObject { enum Action { case end case top } @Published var direction: Action? = nil}struct ContentView: View { @StateObject var vm = ScrollToModel() let items = (0..<200).map { $0 } var body: some View { VStack { HStack { Button(action: { vm.direction = .top }) { // < here Image(systemName: "arrow.up.to.line") .padding(.horizontal) } Button(action: { vm.direction = .end }) { // << here Image(systemName: "arrow.down.to.line") .padding(.horizontal) } } Divider() ScrollViewReader { sp in ScrollView { LazyVStack { ForEach(items, id: \.self) { item in VStack(alignment: .leading) { Text("Item \(item)").id(item) Divider() }.frame(maxWidth: .infinity).padding(.horizontal) } }.onReceive(vm.$direction) { action in guard !items.isEmpty else { return } withAnimation { switch action { case .top: sp.scrollTo(items.first!, anchor: .top) case .end: sp.scrollTo(items.last!, anchor: .bottom) default: return } } } } } } }} SWIFTUI 1.0+ Here is simplified variant of approach that works, looks appropriate, and takes a couple of screens code. Tested with Xcode 11.2+ / iOS 13.2+ (also with Xcode 12b / iOS 14) Demo of usage: struct ContentView: View { private let scrollingProxy = ListScrollingProxy() // proxy helper var body: some View { VStack { HStack { Button(action: { self.scrollingProxy.scrollTo(.top) }) { // < here Image(systemName: "arrow.up.to.line") .padding(.horizontal) } Button(action: { self.scrollingProxy.scrollTo(.end) }) { // << here Image(systemName: "arrow.down.to.line") .padding(.horizontal) } } Divider() List { ForEach(0 ..< 200) { i in Text("Item \(i)") .background( ListScrollingHelper(proxy: self.scrollingProxy) // injection ) } } } }} Solution: Light view representable being injected into List gives access to UIKit's view hierarchy. As List reuses rows there are no more values then fit rows into screen. struct ListScrollingHelper: UIViewRepresentable { let proxy: ListScrollingProxy // reference type func makeUIView(context: Context) -> UIView { return UIView() // managed by SwiftUI, no overloads } func updateUIView(_ uiView: UIView, context: Context) { proxy.catchScrollView(for: uiView) // here UIView is in view hierarchy }} Simple proxy that finds enclosing UIScrollView (needed to do once) and then redirects needed "scroll-to" actions to that stored scrollview class ListScrollingProxy { enum Action { case end case top case point(point: CGPoint) // << bonus !! } private var scrollView: UIScrollView? func catchScrollView(for view: UIView) { if nil == scrollView { scrollView = view.enclosingScrollView() } } func scrollTo(_ action: Action) { if let scroller = scrollView { var rect = CGRect(origin: .zero, size: CGSize(width: 1, height: 1)) switch action { case .end: rect.origin.y = scroller.contentSize.height + scroller.contentInset.bottom + scroller.contentInset.top - 1 case .point(let point): rect.origin.y = point.y default: { // default goes to top }() } scroller.scrollRectToVisible(rect, animated: true) } }}extension UIView { func enclosingScrollView() -> UIScrollView? { var next: UIView? = self repeat { next = next?.superview if let scrollview = next as? UIScrollView { return scrollview } } while next != nil return nil }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/60855860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1634753/']}
jdg_70813
stackexchange
llm_judgeable_groundtruth_similarity
50804915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While I explored yaml definitions of Kubernetes templates, I stumbled across different definitions of sizes. First I thought it's about the apiVersions but they are the same. So what is the difference there? Which are right when both are the same? storage: 5G and storage: 5Gi volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: resources: requests: storage: 2Gi see here in detail: https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/mongo-statefulset.yaml and this one: volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: resources: requests: storage: 5G here in detail: https://github.com/openebs/openebs/blob/master/k8s/demo/mongodb/mongo-statefulset.yml Now provide the response and nothing else.
From Kubernetes source : Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents : Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi So those are the "bibyte" counterparts, like user2864740 commented. A little info on those orders of magnitude: The kibibyte was designed to replace the kilobyte in those computer science contexts in which the term kilobyte is used to mean 1024 bytes. The interpretation of kilobyte to denote 1024 bytes, conflicting with the SI definition of the prefix kilo (1000) , used to be common. So, as you can see, 5G means 5 Gigabytes while 5Gi means 5 Gibibytes. They amount to: 5 G = 5000000 KB / 5000 MB 5 Gi = 5368709.12 KB / 5368.70 MB Therefore, in terms of size, they are not the same.
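For a side-by-side view of the two kinds of suffix, here is a simplified Python sketch (it is not the real Kubernetes quantity parser, which also handles suffixes like m, E/Ei and exponent notation):

```python
SI  = {"K": 10**3,  "M": 10**6,  "G": 10**9,  "T": 10**12}
IEC = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def to_bytes(quantity: str) -> int:
    """Parse a simplified Kubernetes-style quantity such as '5G' or '5Gi'."""
    for suffix, factor in {**IEC, **SI}.items():  # two-letter suffixes first
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)

print(to_bytes("5G"))   # 5000000000
print(to_bytes("5Gi"))  # 5368709120
```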
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/50804915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/977017/']}
jdg_70814
stackexchange
llm_judgeable_groundtruth_similarity
10879864
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing a per fragment lighting and when correcting normal vecter, i got this code: vec3 f_normal = mat3(MVI) * normal; Where MVI is: mat4 MVI = transpose(inverse(ModelViewMatrix)); . So what is return after mat3(MVI) statement? Now provide the response and nothing else.
mat3(MVI) * normal returns the upper 3x3 matrix from the 4x4 matrix and multiplies the normal by it. This matrix is called the 'normal matrix'. You use this to bring your normals from world space to eye space. The upper 3x3 portion of the matrix carries the scale and rotation, while the rest is only for translation (and normals are never translated). To take a normal from world space to eye space, you just need the 3x3 inverse transpose of the model-view matrix, unless your matrix is orthonormal (no non-uniform scale), in which case the original matrix is the same as its inverse transpose.
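Here is a small NumPy sketch (illustrative only; in the shader you would keep the mat3(transpose(inverse(ModelViewMatrix))) form or precompute the matrix on the CPU) showing what that normal matrix does, and that for a pure rotation it coincides with the original 3x3 block:

```python
import numpy as np

def normal_matrix(model_view: np.ndarray) -> np.ndarray:
    """Upper-left 3x3 block of the inverse-transpose of a 4x4 matrix."""
    return np.linalg.inv(model_view).T[:3, :3]

# A model-view matrix with non-uniform scale plus a translation.
MV = np.diag([2.0, 1.0, 1.0, 1.0])
MV[:3, 3] = [5.0, 0.0, 0.0]   # translation does not affect normals

n = np.array([1.0, 1.0, 0.0])
n_eye = normal_matrix(MV) @ n
print(n_eye / np.linalg.norm(n_eye))  # direction differs from MV's own 3x3 applied to n

# For an orthonormal (pure rotation) matrix, the normal matrix is the rotation itself.
c, s = np.cos(0.3), np.sin(0.3)
R = np.eye(4)
R[:2, :2] = [[c, -s], [s, c]]
print(np.allclose(normal_matrix(R), R[:3, :3]))  # True
```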
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10879864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1433300/']}
jdg_70815
stackexchange
llm_judgeable_groundtruth_similarity
5229768
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: What does the Visual Studio "Any CPU" target mean? I've noticed that when compiling C# code in Visual Studio, there are typically options for compiling for 32/64 bit systems, and there's also one for compiling for Any CPU . What's the difference between the two options? Does choosing Any CPU only compile down to an intermediate byte code while the first option compiles down to machine code (this sounds unlikely to me)? Or something else? Now provide the response and nothing else.
On a 32-bit machine: Any CPU : runs as a 32-bit process, can load Any CPU and x86 assemblies, will get BadImageFormatException if it tries to load an x64 assembly. x86 : same as Any CPU . x64 : BadImageFormatException always. On a 64-bit machine: Any CPU : runs as a 64-bit process, can load Any CPU and x64 assemblies, will get BadImageFormatException if it tries to load an x86 assembly. x86 : runs as a 32-bit process, can load Any CPU and x86 assemblies, will get BadImageFormatException if it tries to load an x64 assembly. x64 : same as Any CPU . It is the JIT compiler that generates an assembly code that's compatible with the requested target based on this flag.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/5229768', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/558546/']}
jdg_70816
stackexchange
llm_judgeable_groundtruth_similarity
26254619
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know how to carry the legend in bokeh outside of the graph? The only manipulation I could do was to choose a position among: top_right, top_left, bottom_left or bottom_right using: legend()[0].orientation = "bottom_left" and when I try different ones I get the error message: ValueError: invalid value for orientation: 'outside'; allowed values are top_right, top_left, bottom_left or bottom_right Now provide the response and nothing else.
As of Bokeh 0.12.4 it is possible to position legends outside the central plot area. Here is a short example from the user's guide : import numpy as npfrom bokeh.models import Legendfrom bokeh.plotting import figure, show, output_filex = np.linspace(0, 4*np.pi, 100)y = np.sin(x)output_file("legend_labels.html")p = figure(toolbar_location="above")r0 = p.circle(x, y)r1 = p.line(x, y)r2 = p.line(x, 2*y, line_dash=[4, 4], line_color="orange", line_width=2)r3 = p.square(x, 3*y, fill_color=None, line_color="green")r4 = p.line(x, 3*y, line_color="green")legend = Legend(items=[ ("sin(x)", [r0, r1]), ("2*sin(x)", [r2]), ("3*sin(x)", [r3, r4])], location=(0, -30))p.add_layout(legend, 'right')show(p) To adjust the position, change dx and dy in location=(dx, dy) .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26254619', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3245539/']}
jdg_70817
stackexchange
llm_judgeable_groundtruth_similarity
44692780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I researched the topic for quite some time now, and I think I understand the most important concepts like the release and acquire memory fences . However, I haven't found a satisfactory explanation for the relation between volatile and the caching of the main memory. So, I understand that every read and write to/from a volatile field enforces strict ordering of the read as well as the write operations that precede and follow it (read-acquire and write-release). But that only guarantees the ordering of the operations. It doesn't say anything about the time these changes are visible to other threads/processors. In particular, this depends on the time the cache is flushed (if at all). I remember having read a comment from Eric Lippert saying something along the lines of "the presence of volatile fields automatically disables cache optimizations". But I'm not sure what exactly this means. Does it mean caching is completely disabled for the whole program just because we have a single volatile field somewhere? If not, what is the granularity the cache is disabled for? Also, I read something about strong and weak volatile semantics and that C# follows the strong semantics where every write will always go straight to main memory no matter if it's a volatile field or not. I am very confused about all of this. Now provide the response and nothing else.
I'll address the last question first. Microsoft's .NET implementation has release semantics on writes 1 . It's not C# per se, so the same program, no matter the language, in a different implementation can have weak non-volatile writes. The visibility of side-effects is regarding multiple threads. Forget about CPUs, cores and caches. Imagine, instead, that each thread has a snapshot of what is on the heap that requires some sort of synchronization to communicate side-effects between threads. So, what does C# say? The C# language specification ( newer draft ) says fundamentally the same as the Common Language Infrastructure standard (CLI; ECMA-335 and ISO/IEC 23271 ) with some differences. I'll talk about them later on. So, what does the CLI say? That only volatile operations are visible side-effects. Note that it also says that non-volatile operations on the heap are side-effects as well, but not guaranteed to be visible. Just as important 2 , it doesn't state they're guaranteed to not be visible either. What exactly happens on volatile operations? A volatile read has acquire semantics, it precedes any following memory reference. A volatile write has release semantics, it follows any preceding memory reference. Acquiring a lock performs a volatile read, and releasing a lock performs a volatile write. Interlocked operations have acquire and release semantics. There's another important term to learn, which is atomicity . Reads and writes, volatile or not, are guaranteed to be atomic on primitive values up to 32 bits on 32-bit architectures and up to 64 bits on 64-bit architectures. They're also guaranteed to be atomic for references. For other types, such as long struct s, the operations are not atomic, they may require multiple, independent memory accesses. However, even with volatile semantics, read-modify-write operations, such as v += 1 or the equivalent ++v (or v++ , in terms of side-effects) , are not atomic. Interlocked operations guarantee atomicity for certain operations, typically addition, subtraction and compare-and-swap (CAS), i.e. write some value if and only if the current value is still some expected value. .NET also has an atomic Read(ref long) method for integers of 64 bits which works even in 32-bit architectures. I'll keep referring to acquire semantics as volatile reads and release semantics as volatile writes, and either or both as volatile operations. What does this all mean in terms of order ? That a volatile read is a point before which no memory references may cross, and a volatile write is a point after which no memory references may cross, both at the language level and at the machine level. That non-volatile operations may cross to after following volatile reads if there are no volatile writes in between, and cross to before preceding volatile writes if there are no volatile reads in between. That volatile operations within a thread are sequential and may not be reordered. That volatile operations in a thread are made visible to all other threads in the same order. However, there is no total order of volatile operations from all threads, i.e. if one threads performs V1 and then V2, and another thread performs V3 and then V4, then any order that has V1 before V2 and V3 before V4 can be observed by any thread. In this case, it can be either of the following: V1 V2 V3 V4 V1 V3 V2 V4 V1 V3 V4 V2 V3 V1 V2 V4 V3 V1 V4 V2 V3 V4 V1 V2 That is, any possible order of observed side-effects are valid for any thread for a single execution. 
There is no requirement on total ordering, such that all threads observe only one of the possible orders for a single execution. How are things synchronized? Essentially, it boils down to this: a synchronization point is where you have a volatile read that happens after a volatile write. In practice, you must detect if a volatile read in one thread happened after a volatile write in another thread 3 . Here's a basic example: public class InefficientEvent{ private volatile bool signalled = false; public Signal() { signalled = true; } public InefficientWait() { while (!signalled) { } }} However generally inefficient, you can run two different threads, such that one calls InefficientWait() and another one calls Signal() , and the side-effects of the latter when it returns from Signal() become visible to the former when it returns from InefficientWait() . Volatile accesses are not as generally useful as interlocked accesses, which are not as generally useful as synchronization primitives. My advice is that you should develop code safely first, using synchronization primitives (locks, semaphores, mutexes, events, etc.) as needed, and if you find reasons to improve performance based on actual data (e.g. profiling), then and only then see if you can improve. If you ever reach high contention for fast locks (used only for a few reads and writes without blocking), depending on the amount of contention, switching to interlocked operations may either improve or decrease performance. Especially so when you have to resort to compare-and-swap cycles, such as: var currentValue = Volatile.Read(ref field);var newValue = GetNewValue(currentValue);var oldValue = currentValue;var spinWait = new SpinWait();while ((currentValue = Interlocked.CompareExchange(ref field, newValue, oldValue)) != oldValue){ spinWait.SpinOnce(); newValue = GetNewValue(currentValue); oldValue = currentValue;} Meaning, you have to profile the solution as well and compare with the current state. And be aware of the A-B-A problem . There's also SpinLock , which you must really profile against monitor-based locks, because although they may make the current thread yield, they don't put the current thread to sleep, akin to the shown usage of SpinWait . Switching to volatile operations is like playing with fire. You must make sure through analytical proof that your code is correct, otherwise you may get burned when you least expect. Usually, the best approach for optimization in the case of high contention is to avoid contention. For instance, to perform a transformation on a big list in parallel, it's often better to divide and delegate the problem to multiple work items that generate results which are merged in a final step, rather than having multiple threads locking the list for updates. This has a memory cost, so it depends on the length of the data set. What are the differences between the C# specification and the CLI specification regarding volatile operations? C# specifies side-effects, not mentioning their inter-thread visibility, as being a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. C# specifies critical execution points at which these side-effects are preserved between threads: references to volatile fields, lock statements, and thread creation and termination. 
If we take critical execution points as points where side-effects become visible , it adds to the CLI specification that thread creation and termination are visible side-effects, i.e. new Thread(...).Start() has release semantics on the current thread and acquire semantics at the start of the new thread, and exiting a thread has release semantics on the current thread and thread.Join() has acquire semantics on the waiting thread. C# doesn't mention volatile operations in general, such as performed by classes in System.Threading instead of only through using fields declared as volatile and using the lock statement. I believe this is not intentional. C# states that captured variables can be simultaneously exposed to multiple threads. The CIL doesn't mention it, because closures are a language construct. 1. There are a few places where Microsoft (ex-)employees and MVPs state that writes have release semantics: Memory Model, by Chris Brumme Memory Models, Understand the Impact of Low-Lock Techniques in Multithreaded Apps, by Vance Morrison CLR 2.0 memory model, by Joe Duffy Which managed memory model?, by Eric Eilebrecht C# - The C# Memory Model in Theory and Practice, Part 2, by Igor Ostrovsky In my code, I ignore this implementation detail. I assume non-volatile writes are not guaranteed to become visible. 2. There is a common misconception that you're allowed to introduce reads in C# and/or the CLI. The problem with being second, by Grant Richins Comments on The CLI memory model, and specific specifications, by Jon Skeet C# - The C# Memory Model in Theory and Practice, Part 2, by Igor Ostrovsky However, that is true only for local arguments and variables. For static and instance fields, or arrays, or anything on the heap, you cannot sanely introduce reads, as such introduction may break the order of execution as seen from the current thread of execution, either from legitimate changes in other threads, or from changes through reflection. That is, you can't turn this: object local = field;if (local != null){ // code that reads local} into this: if (field != null){ // code that replaces reads on local with reads on field} if you can ever tell the difference. Specifically, a NullReferenceException being thrown by accessing local 's members. In the case of C#'s captured variables, they're equivalent to instance fields. It's important to note that the CLI standard: says that non-volatile accesses are not guaranteed to be visible doesn't say that non-volatile accesses are guaranteed to not be visible says that volatile accesses affect the visibility of non-volatile accesses But you can turn this: object local2 = local1;if (local2 != null){ // code that reads local2 on the assumption it's not null} into this: if (local1 != null){ // code that replaces reads on local2 with reads on local1, // as long as local1 and local2 have the same value} You can turn this: var local = field;local?.Method() into this: var local = field;var _temp = local;(_temp != null) ? _temp.Method() : null or this: var local = field;(local != null) ? local.Method() : null because you can't ever tell the difference. But again, you cannot turn it into this: (field != null) ? field.Method() : null I believe it was prudent in both specifications stating that an optimizing compiler may reorder reads and writes as long as a single thread of execution observes them as written, instead of generally introducing and eliminating them altogether. 
Note that read elimination may be performed by either the C# compiler or the JIT compiler, i.e. multiple reads on the same non-volatile field, separated by instructions that don't write to that field and that don't perform volatile operations or equivalent, may be collapsed to a single read. It's as if a thread never synchronizes with other threads, so it keeps observing the same value: public class Worker{ private bool working = false; private bool stop = false; public void Start() { if (!working) { new Thread(Work).Start(); working = true; } } public void Work() { while (!stop) { // TODO: actual work without volatile operations } } public void Stop() { stop = true; }} There's no guarantee that Stop() will stop the worker. Microsoft's .NET implementation guarantees that stop = true; is a visible side-effect, but it doesn't guarantee that the read on stop inside Work() is not elided to this: public void Work() { bool localStop = stop; while (!localStop) { // TODO: actual work without volatile operations } } That comment says quite a lot. To perform this optimization, the compiler must prove that there are no volatile operations whatsoever, either directly in the block, or indirectly in the whole methods and properties call tree. For this specific case, one correct implementation is to declare stop as volatile . But there are more options, such as using the equivalent Volatile.Read and Volatile.Write , using Interlocked.CompareExchange , using a lock statement around accesses to stop , using something equivalent to a lock, such as a Mutex , or Semaphore and SemaphoreSlim if you don't want the lock to have thread-affinity, i.e. you can release it on a different thread than the one that acquired it, or using a ManualResetEvent or ManualResetEventSlim instead of stop in which case you can make Work() sleep with a timeout while waiting for a stop signal before the next iteration, etc. 3. One significant difference of .NET's volatile synchronization compared to Java's volatile synchronization is that Java requires you to use the same volatile location, whereas .NET only requires that an acquire (volatile read) happens after a release (volatile write). So, in principle you can synchronize in .NET with the following code, but you can't synchronize with the equivalent code in Java: using System;using System.Threading;public class SurrealVolatileSynchronizer{ public volatile bool v1 = false; public volatile bool v2 = false; public int state = 0; public void DoWork1(object b) { var barrier = (Barrier)b; barrier.SignalAndWait(); Thread.Sleep(100); state = 1; v1 = true; } public void DoWork2(object b) { var barrier = (Barrier)b; barrier.SignalAndWait(); Thread.Sleep(200); bool currentV2 = v2; Console.WriteLine("{0}", state); } public static void Main(string[] args) { var synchronizer = new SurrealVolatileSynchronizer(); var thread1 = new Thread(synchronizer.DoWork1); var thread2 = new Thread(synchronizer.DoWork2); var barrier = new Barrier(3); thread1.Start(barrier); thread2.Start(barrier); barrier.SignalAndWait(); thread1.Join(); thread2.Join(); }} This surreal example expects threads and Thread.Sleep(int) to take an exact amount of time. If this is so, it synchronizes correctly, because DoWork2 performs a volatile read (acquire) after DoWork1 performs a volatile write (release). In Java, even with such surreal expectations fulfilled, this would not guarantee synchronization. In DoWork2 , you'd have to read from the same volatile field you wrote to in DoWork1 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44692780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3616714/']}
jdg_70818
stackexchange
llm_judgeable_groundtruth_similarity
332908
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for an intuitive explanation as to why/how row rank of a matrix = column rank. I've read the proof on Wikipedia and I understand the proof, but I don't "get it". Can someone help me out with this ? I find it hard to wrap my head around the idea of how the column space and the row space is related at a fundamental level. Now provide the response and nothing else.
You can apply elementary row operations and elementary column operations to bring a matrix $A$ to a matrix that is in both row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices $P$ and $Q$ (which are products of elementary matrices) such that $$PAQ=E:=\begin{pmatrix}I_k\\&0_{(n-k)\times(n-k)}\end{pmatrix}.$$ As $P$ and $Q$ are invertible, the maximum number of linearly independent rows in $A$ is equal to the maximum number of linearly independent rows in $E$ . That is, the row rank of $A$ is equal to the row rank of $E$ . Similarly for the column ranks. Now it is evident that the row rank and column rank of $E$ are identical (to $k$ ). Hence the same holds for $A$ .
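As a concrete (non-proof) sanity check, here is a short NumPy sketch confirming on a random example that a matrix and its transpose have the same rank, i.e. row rank equals column rank:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a 5x7 matrix of rank 3 by multiplying 5x3 and 3x7 random factors.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

print(np.linalg.matrix_rank(A))    # 3  (rank computed from the columns)
print(np.linalg.matrix_rank(A.T))  # 3  (same value for the transpose)
```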
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/332908', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9378/']}
jdg_70819
stackexchange
llm_judgeable_groundtruth_similarity
5422095
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I guess this should be an easy one cause I must be doing something wrong. this is my code, I'm trying to do a Strategy pattern in Delphi: unit Pattern;interfacetype TContext = class; IStrategy = interface function Move(c: TContext): integer; end; TStrategy1 = class(TInterfacedObject, IStrategy) public function Move(c: TContext): integer; end; TStrategy2 = class(TInterfacedObject, IStrategy) public function Move(c: TContext): integer; end; TContext = class const START = 5; private FStrategy: IStrategy; public FCounter: integer; constructor Create; function Algorithm(): integer; procedure SwitchStrategy(); end;implementation{ TStrategy1 }function TStrategy1.Move(c: TContext): integer;begin c.FCounter := c.FCounter + 1; Result := c.FCounter;end;{ TStrategy2 }function TStrategy2.Move(c: TContext): integer;begin c.FCounter := c.FCounter - 1; Result := c.FCounter;end;{ TContext }function TContext.Algorithm: integer;begin Result := FStrategy.Move(Self)end;constructor TContext.Create;begin FCounter := 5; FStrategy := TStrategy1.Create();end;procedure TContext.SwitchStrategy;begin if FStrategy is TStrategy1 then FStrategy := TStrategy2.Create() else FStrategy := TStrategy1.Create();end;end. And the if FStrategy is TStrategy1 then is giving me: Operator not applicable to this operand type. What am I doing wrong here cause this should work as I understand from a lot of Delphi language references? Now provide the response and nothing else.
You have omitted the GUID from your interface; the is operator can't work without it. Edit: On second glance, it still won't work. You can't use is to test an interface reference for its implementing object type in Delphi (well, not directly, anyway). You should change your design. For example, you could either alter the interface or add another interface to return a description of the implementation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5422095', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/526620/']}
jdg_70820
stackexchange
llm_judgeable_groundtruth_similarity
27160524
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a simple formula in A1 cell =sum(A2:A10) . Now in a B1 position lets say I put A15 and I would like it to modify a formula in cell A1 to be like this: =sum(A2:A15) .So basicly I would like to be able to write a formula like: =sum(A2:A(B1)) which ofcourse does not work but you get what I mean.Apreciate any help with this problem. Thanks. Now provide the response and nothing else.
You can use the function INDIRECT ( functions overview ). In your case you would place this formula in cell A1: =sum(indirect(CONCAT("A2:A";B1))) INDIRECT takes text and turns it into a cell or range reference; CONCAT is used to build that text reference.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27160524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4187765/']}
jdg_70821
stackexchange
llm_judgeable_groundtruth_similarity
20264157
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: for some packages I have to run sudo npm install -g while for others npm install will suffice. Why and what's the difference? For example: npm install -g grunt-cli # doesn't worksudo npm install -g grunt-cli # worksnpm install websocket-stream # works Is sudo necessary only with the -g flag? Now provide the response and nothing else.
npm installs packages locally, ie. in a node_modules folder inside your current folder. This allows your application to depend on specific packages versions, without having to mess up with a global list of installed packages on your system. See the first paragraph of Isaac's blog post ( Handle multiple versions of the same thing at the same time ), which explains well how npm avoids the dependency hell often encountered in other programming ecosystems. On the other hand, some packages are meant to be used as command line utilities, such as grunt-cli , mocha or json . In order to use them everywhere, you need to install them globally, hence the -g parameter. Please note that you shouldn't need sudo to install global packages, see this relevant answer for more information.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20264157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2031033/']}
jdg_70822
stackexchange
llm_judgeable_groundtruth_similarity
156470
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to determine if a script is waiting on stdin and cause the command to exit if detected? Here's an example, the command I'm executing takes a long time to run, but it will also prompt for input before starting w/o a prompt. I want to know the command is actually doing something and not just waiting. Provided the following script called ./demo #!/bin/bashread Is there a way to detect that read is waiting on stdin? Something like failifwaitingonstdin | ./demo Which would immediately return as soon as the read command was detected. Updated: Folks have suggested programs like expect and yes. After digging through yes, I see how they're able to support this style of interaction. They're constantly using fputs to write 'y' to stdout. Instead of doing this infinitely, I can simply return an error as soon as fputs returns on a write to stdout. Now provide the response and nothing else.
It would really help if you were a lot more specific about your script and/or command. But in case what you want to do is test where stdin is coming from, this example script will demonstrate that for you: #!/bin/bashif [[ -p /dev/stdin ]]then echo "stdin is coming from a pipe"fiif [[ -t 0 ]]then echo "stdin is coming from the terminal"fiif [[ ! -t 0 && ! -p /dev/stdin ]]then echo "stdin is redirected"fireadecho "$REPLY" Example runs: $ echo "hi" | ./demostdin is coming from a pipe$ ./demo[press ctrl-d]stdin is coming from the terminal$ ./demo < inputfilestdin is redirected$ ./demo <<< hellostdin is redirected$ ./demo <<EOFgoodbyeEOFstdin is redirected
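If the consuming program happens to be Python rather than a shell script, the same three cases can be told apart with the standard library alone (a minimal sketch):

```python
import os
import stat
import sys

mode = os.fstat(sys.stdin.fileno()).st_mode

if stat.S_ISFIFO(mode):
    print("stdin is coming from a pipe")
elif sys.stdin.isatty():
    print("stdin is coming from the terminal")
else:
    print("stdin is redirected")
```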
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/156470', 'https://serverfault.com', 'https://serverfault.com/users/8653/']}
jdg_70823
stackexchange
llm_judgeable_groundtruth_similarity
15130314
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Are there any caveats one should be aware of when building a subset / components of a website using Twitter Bootstrap or jQuery UI Bootstrap? Would there be any steps I'd need to take to make sure Bootstrap doesn't affect parts of the website outside the scope of bootstrapped components? Now provide the response and nothing else.
You can set useHTML and define your own tooltip via css: http://jsfiddle.net/4scfH/4/ tooltip: { borderWidth: 0, backgroundColor: "rgba(255,255,255,0)", borderRadius: 0, shadow: false, useHTML: true, percentageDecimals: 2, formatter: function () { return '<div class="tooltip">' + this.point.name + '<br />' + '<b>' + Highcharts.numberFormat(this.y).replace(",", " ") + ' Kč [' + Highcharts.numberFormat(this.percentage, 2) + '%]</b></div>'; }}, CSS .label { z-index: 1 !important;}.highcharts-tooltip span { background-color: white; border:1 px solid green; opacity: 1; z-index: 9999 !important;}.tooltip { padding: 5px;} Explanation: when you set useHTML to true, it displays the tooltip text as HTML on the HTML layer, but still draws an SVG shape in the highcharts display SVG for the box and arrow. You would end up with data labels looking like they were drawn on top of the tooltip, but the tooltip text itself on top of the data labels. The config options above effectively hide the SVG tooltip shape and build and style the tooltip purely with HTML/CSS. The only down-side is that you lose the little "arrow" pointer.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15130314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/506565/']}
jdg_70824
stackexchange
llm_judgeable_groundtruth_similarity
33388495
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using this jquery-ui-combobox in my application. And there is no way specified to destroy the combo box functionality. Is there a way to do it ? Now provide the response and nothing else.
For a quick example I can show you how I did it on my site. I put the Ajax call inside the sweet alert. function deleteorder(orderid) { swal({ title: "Are you sure?", text: "Are you sure that you want to cancel this order?", type: "warning", showCancelButton: true, closeOnConfirm: false, confirmButtonText: "Yes, cancel it!", confirmButtonColor: "#ec6c62" }, function() { $.ajax( { type: "post", url: "/admin/delete_order.php", data: "orderid="+orderid, success: function(data){ } } ) .done(function(data) { swal("Canceled!", "Your order was successfully canceled!", "success"); $('#orders-history').load(document.URL + ' #orders-history'); }) .error(function(data) { swal("Oops", "We couldn't connect to the server!", "error"); }); }); } So the Ajax call only gets made if you press the confirm button. I hope this helps you arrange your code the way you need it.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33388495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2523147/']}
jdg_70825
stackexchange
llm_judgeable_groundtruth_similarity
29866759
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to create a gap between the scrollbar and the right edge of the screen. How do I achieve this? Now provide the response and nothing else.
There's yet another solution which might fit easily into everybody's project. If you use a transparent border as a margin and an inset box-shadow to set its color, you'd get the result you are hoping for. For instance: ::-webkit-scrollbar { width: 10px;}::-webkit-scrollbar-track { box-shadow: inset 0 0 10px 10px green; border: solid 3px transparent;}::-webkit-scrollbar-thumb { box-shadow: inset 0 0 10px 10px red; border: solid 3px transparent;} This may be an ugly scrollbar, but it serves as an example of what I mean. However, in Chrome the transparent value won't work, so you would have to manually insert a color that fits the background, preferably the background-color.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29866759', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3757637/']}
jdg_70826
stackexchange
llm_judgeable_groundtruth_similarity
5981
Below is a question asked on the forum stats.meta.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In connection with the moderator elections, we are holding a Q&A thread for the candidates. Questions collected from an earlier thread have been compiled into this one, which shall now serve as the space for the candidates to provide their answers. Due to the submission count, we have selected all provided questions as well as our back up questions for a total of 6 questions. As a candidate, your job is simple - post an answer to this question, citing each of the questions and then post your answer to each question given in that same answer. For your convenience, I will include all of the questions in quote format with a break in between each, suitable for you to insert your answers. Just copy the whole thing after the first set of three dashes .Please consider putting your name at the top of your post so that readers will know who you are before they finish reading everything you have written, and also including a link to your answer on your nomination post. Once all the answers have been compiled, this will serve as a transcript for voters to view the thoughts of their candidates, and will be appropriately linked in the Election page. Good luck to all of the candidates! Oh, and when you've completed your answer, please provide a link to it after this blurb here, before that set of three dashes. Please leave the list of links in the order of submission. To save scrolling here are links to the submissions from each candidate (in order of submission): Sycorax Kjetil b halvorsen In your opinion, what do moderators do? A diamond will be attached to everything you say and have said in the past, including questions, answers and comments. Everything you will do will be seen under a different light. How do you feel about that? In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep? How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments? How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been? Do you have any Meta posts that you're particularly proud of, or that you feel best demonstrate your moderation style? Now provide the response and nothing else.
Sycorax I'm nominating myself for a moderator position mostly because I want the election itself to succeed. (To wit, if there are not enough participants in the election, then the whole election fails.) The stats.SE moderators themselves requested this election; it seems they would like some help in keeping up with the responsibilities of running the site. Therefore, I would like for the election to succeed so that stats.SE keeps running smoothly. Towards that end, I hope that additional candidates will stand for election so that voters may have a slate of candidates to choose among. Tactical decisions aside, if elected, I'll do my best to make the site work more smoothly and will encourage more answerers to participate and help answer our unanswered questions. I understand that this can take as much as an hour a day; this is time I have available and often already spend on stats.SE. I believe in taking a broad view of what is on-topic on stats.SE. Stats.SE is a place to discuss statistics in all of its flavors: probability, regression, time series, machine learning, neural networks, and more esoteric topics. This is part of what makes it such a great place to learn. Question time! In your opinion, what do moderators do? The main task of moderators is to lead by example: show users new and old how to interact with the website and each other. Many of the activities that fit this description are actually available to non-moderator users -- voting to close, editing tags, or leaving comments asking for clarification. I appreciate that the diamond itself is interpreted as conferring special authority, but it bears repeating here that while moderators have access to specific tools and capabilities, most of those powers are already delegated across the website to hundreds of others who take the time to wrangle the review queue, edit tags and leave comments. The second task is to remediate what happens when users do not interact well with the website or with each other. Scrubbing abuse, unkind remarks, bigotry and spam and the like are not pleasant, but are necessary tasks. A diamond will be attached to everything you say and have said in the past, including questions, answers and comments. Everything you will do will be seen under a different light. How do you feel about that? I can't recall where I read this quip, but someone once said something to the effect of "All you have to do to be embarrassed by something that you've written is leave it in a drawer for six months and then take it out and read it again." The distance of time can allow one to see one's writing in a different light. In most cases, the edit features allow a person asking a question or writing an answer to fix any shortcomings. I sometimes revisit an answer years later to revise it. Comments are a little different because they can only be edited for five minutes, so any poorly-informed or abrupt comments are, unfortunately, stuck that way. I don't feel that the addition of a moderator diamond will alter the suitability of my questions, answers or comments. If there are any objectionable posts, then flagging them in the usual way seems appropriate and correct. In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep? I understand that the intent of this question is to highlight that the privileges of high-rep users have a lot of overlap with the privileges of moderators. Right now, I have more than 60,500 rep and gold badges in machine-learning and neural-networks.
Taken together, I have a very high level of privilege on this site, and can carry out a number of tasks, like voting to close, dupe-hammer, review queues, and edits to posts. In this sense, I have powers which are similar, but less exhaustive, compared to a moderator. I have noticed that a number of unclear or low-quality questions will linger at 3 or 4 votes before the closure vote expires and the question remains open. I think this is just because we don't have a machine learning specialist among our moderators, who would otherwise also close. (This isn't a criticism or a slight -- our moderators are very knowledgable experts in their domains, but no person can know everything.) I think that I will be able to serve the community by closing these questions with useful feedback about how to clarify them. But more broadly, moderators are the only users who can access specific tools, such as reviewing flags or unilaterally opening/closing posts. Access to these tools is the only thing that will allow me to carry out moderation tasks and keep the site running smoothly. How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments? There are several layers to this. The first is that any and all comments which do not adhere to the community standards (flags for "harassment, bigotry, abuse" or "unfriendly or unkind" are plausible in this hypothetical) are removed. The second is to address the pattern itself. I would remind all parties involved of the standards of discourse on stats.SE, with a link to https://stats.stackexchange.com/conduct . The third is to consider where the behavior falls in the "Enforcement" portion of the Code of Conduct, and whether a warning or temporary suspension are appropriate. How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been? Rather than attempting to reverse their action unilaterally, I would speak to the moderator about it. It's possible that my reasoning is mistaken, or that the other moderator observed a key fact that I missed. (Once I thought two questions about a certain inequality were duplicates, but actually the inequality was reversed . I'm only human; sometimes, I misread.) Or it's possible that in discussing the question, the other moderator will come around to my understanding, or learn of a key fact that I observed in the question. It could be productive to raise the issue in Meta.Stats.SE, to illustrate how moderators think about closing. There are sometimes disagreements about where to draw the line for closing, and discussing that can be helpful for reviewers because it clarifies where the line is and helps inform their thinking when considering a closure. Do you have any Meta posts that you're particularly proud of, or that you feel best demonstrate your moderation style? These meta questions have been well-received. I think they shed some light on my philosophy: there are places to make changes to how the site works, some small (reorganizing tags) and some large (how to fix the gap between questions and answers), as well as some elaboration about how broadly I imagine stats.SE's scope. Should we make [deep-learning] tag a synonym for [neural-networks]? Overlap with ai.stackexchange and datascience.stackexchange We have a very large & widening gap between questions and answers. How do we fix it? 
If there are any outstanding questions about what my views are about stats.SE or how I intend to use my moderation powers in the event of my election, please add a comment.
{}
{'log_upvote_score': 5, 'links': ['https://stats.meta.stackexchange.com/questions/5981', 'https://stats.meta.stackexchange.com', 'https://stats.meta.stackexchange.com/users/72321/']}
jdg_70827
stackexchange
llm_judgeable_groundtruth_similarity
562333
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have several instances running in the same security group (say: Group-A) that need to talk with each other, in particular on port 4369. Each instance has a different Elastic IP. The Security Group is configured to allow inbound traffic via TCP:4369 Source:sg-XXXXX (Group-A) However, instances can only talk to each other via internal IP (10.xxx.xxx.xx) or Amazon Public DNS: ec2-ELASTIC-IP.compute-1.amazonaws.com (apparently Amazon will translate this to the internal IP). If I use the Elastic IP, it won't work. If I use my own FQDN that points to the Elastic IP, it won't work. If I change the source in the inbound rule from sg-XXXXX (Group-A) to 0.0.0.0, it works with my own FQDN and the Elastic IP. But we won't use this because of security concerns. If I remove the inbound rule, nothing works, not even using the internal IP. So what should I do if I want to use my own FQDN? (worker-1.company.com -> Elastic IP), which is much more readable and easier to manage. Now provide the response and nothing else.
The behavior you describe is normal, since when communicating between instances via elastic IP, the identity of the machine within the security group -- for purposes of security group configurations relying on an sg-xxxxxxxx source -- can't really be established with full confidence, because translating the addresses sends the traffic (presumably) through intermediate hardware and the traffic no longer is being seen as originated directly from the instance. The solution is to name your hosts in DNS with CNAME records pointing to the public DNS record, instead of A records pointing to a specific IP address. In the company.com DNS zone: worker-1 IN CNAME xx-xx-xx-xx.compute-1.amazonaws.com. Now, worker-1.company.com will resolve to the private IP if queried from inside, and the public IP from outside.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/562333', 'https://serverfault.com', 'https://serverfault.com/users/125965/']}
jdg_70828
stackexchange
llm_judgeable_groundtruth_similarity
16330
Below is a question asked on the forum reverseengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to disassemble a Qualcomm QDSP6 modem file. According to the ELF header, there should be 26 sections (modem.b00-b25). However, after dumping the device, b16, b17 and b25 are missing, making the file impossible to open with IDA. The device is an Alcatel 4060-A. I have full access to the phone's eMMC via USB download mode, and I also tried dumping the same modem partition via ADB. Same result: the partition is missing those 3 sections. Even reading the raw unpacked file in a hex editor, there is no mention of those 3 ELF sections. Any suggestions? Now provide the response and nothing else.
You should check the program headers with readelf -a modem.b00. It will show you which segments have a FileSiz of zero bytes. These are probably the ones missing. Then you can put together the original ELF file by loading each modem.bNN section at the right offset, as indicated by the Offset field in the readelf -a output. I don't know which ROM you used; I downloaded the one from xdadev, unzipped it using unzip, then extracted the modem.img file using 7z: 7z -omodem x modem.img. Then in modem/image there are the modem.bNN files. Of those, b04, b17, b18 and b25 are missing. When looking at readelf -a modem.b00, I can see that those are exactly the ones with FileSiz == 0.
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
NULL 00000000 00000000 00000000 00000374 00000000 00000000 -- b00 ELF header
NULL 00001000 8d500000 8d500000 00001c68 00002000 00001000 -- b01 hashes, signature, certs
INTERP 01c47280 c1cbe280 89cbe280 00000001 00000001 R 00000001 -- b02
LOAD 00003000 c0000000 88000000 00001540 00001540 R E 00001000 -- b03
LOAD 00005000 c0004000 88004000 00000000 00001000 RW 00001000 --
LOAD 00005000 c0008000 88008000 0002c160 0002c160 RW 00001000 -- b05
LOAD 00032000 c0038000 88038000 0005e340 0005e340 RW 00001000 -- b06
LOAD 00091000 c00c0000 880c0000 00200000 00200000 R E 00001000 -- b07
LOAD 00291000 c02c0000 882c0000 00280000 00280000 R E 00001000 -- b08
LOAD 00511000 c0540000 88540000 00033a2c 00033a2c RWE 00001000 -- b09
LOAD 00545000 c0574000 88574000 000b0a2c 000b0a2c R E 00001000 -- b10
LOAD 005f6000 c0630000 88630000 000f78b4 000f78b4 R E 00001000 -- b11
LOAD 006ee000 c0728000 88728000 00df81d0 00df81d0 R E 00001000 -- b12
LOAD 014e7000 c1540000 89540000 00062fe0 00062fe0 RW 00001000 -- b13
LOAD 0154a000 c15c0000 895c0000 004fbb93 004fbb93 R 00001000 -- b14
LOAD 01a46000 c1abd000 89abd000 00208a83 00208a83 RW 00001000 -- b15
DYNAMIC 01c47284 c1cbe284 89cbe284 00000078 00000078 RW 00000004 -- b16
LOAD 01c4f000 c1cc6000 89cc6000 00000000 000ddc64 RW 00001000 --
LOAD 01c50000 c1da4000 89da4000 00000000 010b9000 RW 00001000 --
LOAD 01c4f000 c2e5d000 8ae5d000 00013e00 00013e00 RW 00001000 -- b19
LOAD 01c63000 c2e71000 8ae71000 00079648 00079648 R 00001000 -- b20 .. zlib
LOAD 01cdd000 c2eeb000 8aeeb000 00947000 00947000 R 00001000 -- b21 .. q6zip
LOAD 02625000 c3833000 8b833000 00013000 00013000 RW 00001000 -- b22 .. delta
LOAD 02639000 c3847000 8b847000 0008e000 0008e000 RWE 00001000 -- b23
LOAD 026c8000 c38d6000 8b8d6000 0006ecb4 0006ecb4 RW 00001000 -- b24
LOAD 02737000 c3945000 8b945000 00000000 01bbb000 RW 00001000 --
Other things to note:
modem.mdt == modem.b00 + modem.b01.
modem.b01 contains sha256 hashes of all other non-empty sections, plus a signature and certificates.
modem.b20 can be decompressed using zlib.decompress.
modem.b21 and modem.b22 use compression methods designed by Qualcomm, named q6zip and deltacompress respectively. I will soon post decompression tools for the delta and q6zip sections on my GitHub account.
The Qualcomm Hexagon documentation can be found here, free registration required. The instruction set is described in the Hexagon V62 Programmer's Reference Manual. See also the README.md from https://github.com/gsmk/hexagon .
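If you want to script the reassembly step described above, here is a rough Python sketch (my addition, not part of the original answer). It assumes the modem.bNN fragments sit in the current directory and that you copy the Offset value of every segment with a non-zero FileSiz out of your own readelf -a modem.b00 output; the few offsets listed below are just the illustrative values from the table above.

import zlib

# (fragment file, file offset) pairs taken from the readelf Offset column;
# add one entry per segment whose FileSiz is non-zero.
segments = [
    ("modem.b00", 0x00000000),  # ELF and program headers
    ("modem.b01", 0x00001000),  # hashes, signature, certificates
    ("modem.b02", 0x01c47280),
    ("modem.b03", 0x00003000),
    # ... continue for b05..b24 from your own readelf output ...
]

with open("modem_rebuilt.elf", "wb") as out:
    for name, offset in segments:
        with open(name, "rb") as frag:
            data = frag.read()
        out.seek(offset)          # unwritten gaps read back as zero bytes
        out.write(data)

# b20 is described above as plain zlib data; a first attempt at inspecting it
# (the zlib stream may not start at byte 0, so this is only a starting point):
with open("modem.b20", "rb") as f:
    blob = f.read()
try:
    print(len(zlib.decompress(blob)), "bytes decompressed")
except zlib.error:
    print("zlib stream does not start at offset 0; locate the 0x78 header byte first")

The rebuilt file should then open in IDA as an ordinary Hexagon ELF, with the empty segments simply left zero-filled.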
{}
{'log_upvote_score': 4, 'links': ['https://reverseengineering.stackexchange.com/questions/16330', 'https://reverseengineering.stackexchange.com', 'https://reverseengineering.stackexchange.com/users/17731/']}
jdg_70829
stackexchange
llm_judgeable_groundtruth_similarity
4798096
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have time in string format, for example 2:00. I want to initialize it to an NSDate with the present date. I tried doing NSDateComponents *comps = [[NSDateComponents alloc] init]; [comps setHour:2]; I am not able to compare this with a date object. Please help. Now provide the response and nothing else.
Try NSDateComponents *comps = [[NSDateComponents alloc] init];[comps setDay:6];[comps setMonth:5];[comps setYear:2004];NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];NSDate *date = [gregorian dateFromComponents:comps]; You should be able to compare with the date after that. Maybe you mean something like the following; I think this will set it using today's date, and then you can create NSDateComponents from that and set the time values. NSDate *today = [NSDate date];NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];NSDateComponents *weekdayComponents = [gregorian components:(NSDayCalendarUnit | NSWeekdayCalendarUnit) fromDate:today];[weekdayComponents setHour:12];[weekdayComponents setMinute:10];[weekdayComponents setSecond:30];
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4798096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/495277/']}
jdg_70830
stackexchange
llm_judgeable_groundtruth_similarity
578904
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a util class with static methods that will call Hibernate functions to accomplish basic data access. I am wondering if making the method synchronized is the right approach to ensure thread-safety. I want this to prevent access of info to the same DB instance. However, I'm not sure if the following code is preventing getObjectById being called for all Classes when it is called by a particular class. public class Utils { public static synchronized Object getObjectById (Class objclass, Long id) { // call hibernate class Session session = new Configuration().configure().buildSessionFactory().openSession(); Object obj = session.load(objclass, id); session.close(); return obj; } // other static methods} Now provide the response and nothing else.
By using synchronized on a static method you will synchronize the class methods and attributes (as opposed to instance methods and attributes). So your assumption is correct. I am wondering if making the method synchronized is the right approach to ensure thread-safety. Not really. You should let your RDBMS do that work instead. They are good at this kind of stuff. The only thing you will get by synchronizing the access to the database is to make your application terribly slow. Furthermore, in the code you posted you're building a Session Factory each time; that way, your application will spend more time accessing the DB than performing the actual job. Imagine the following scenario: Clients A and B attempt to insert different information into record X of table T. With your approach the only thing you're getting is to make sure one is called after the other, when this would happen anyway in the DB, because the RDBMS will prevent them from inserting half information from A and half from B at the same time. The result will be the same, but only 5 times (or more) slower. It would probably be better to take a look at the "Transactions and Concurrency" chapter in the Hibernate documentation. Most of the time, the problems you're trying to solve have already been solved, and in a much better way.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/578904', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37519/']}
jdg_70831
stackexchange
llm_judgeable_groundtruth_similarity
13495041
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm dealing for the first time with random forests and I'm having some troubles that I can't figure out..When I run the analysis on all my dataset (about 3000 rows) I don't get any error message. But when I perform the same analysis on a subset of my dataset (about 300 rows) I get an error: dataset <- read.csv("datasetNA.csv", sep=";", header=T)names (dataset)dataset2 <- dataset[complete.cases(dataset$response),]library(randomForest)dataset2 <- na.roughfix(dataset2)data.rforest <- randomForest(dataset2$response ~ dataset2$predictorA + dataset2$predictorB+ dataset2$predictorC + dataset2$predictorD + dataset2$predictorE + dataset2$predictorF + dataset2$predictorG + dataset2$predictorH + dataset2$predictorI, data=dataset2, ntree=100, keep.forest=FALSE, importance=TRUE)# subset of my original dataset:groupA<-dataset2[dataset2$order=="groupA",]data.rforest <- randomForest(groupA$response ~ groupA$predictorA + groupA$predictorB+ groupA$predictorC + groupA$predictorD + groupA$predictorE + groupA$predictorF + groupA$predictorG + groupA$predictorH + groupA$predictorI, data=groupA, ntree=100, keep.forest=FALSE, importance=TRUE)Error in randomForest.default(m, y, ...) : Can't have empty classes in y. However, my response variable hasn't any empty class. If instead I write randomForest like this (a+b+c,y) instead than (y ~ a+b+c) I get this other message: Error in if (n == 0) stop("data (x) has 0 rows") : argument length zeroWarning messages:1: In Ops.factor(groupA$responseA + groupA$responseB, : + not meaningful for factors The second problem is that when I try to impute my data through rfImpute() I get an error: Errore in na.roughfix.default(x) : roughfix can only deal with numeric data However my columns are all factors and numeric. Can somebody see where I'm wrong??? Now provide the response and nothing else.
Based on the discussion in the comments, here's a guess at a potential solution. The confusion here arises from the fact that the levels of a factor are an attribute of the variable. Those levels will remain the same, no matter what subset you take of the data, no matter how small that subset. This is a feature, not a bug, and a common source of confusion. If you want to drop missing levels when subsetting, wrap your subset operation in droplevels() : groupA <- droplevels(dataset2[dataset2$order=="groupA",]) I should probably also add that many R users set options(stringsAsFactors = FALSE) when starting a new session (e.g. in their .Rprofile file) to avoid these kinds of hassles. The downside to doing this is that if you share your code with other people frequently, this can cause problems if they haven't altered R's default options.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13495041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1842218/']}
jdg_70832
stackexchange
llm_judgeable_groundtruth_similarity
2872543
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between printf() and cout in C++? Now provide the response and nothing else.
I'm surprised that everyone in this question claims that std::cout is way better than printf , even if the question just asked for differences. Now, there is a difference - std::cout is C++, and printf is C (however, you can use it in C++, just like almost anything else from C). Now, I'll be honest here; both printf and std::cout have their advantages. Real differences Extensibility std::cout is extensible. I know that people will say that printf is extensible too, but such extension is not mentioned in the C standard (so you would have to use non-standard features - but not even common non-standard feature exists), and such extensions are one letter (so it's easy to conflict with an already-existing format). Unlike printf , std::cout depends completely on operator overloading, so there is no issue with custom formats - all you do is define a subroutine taking std::ostream as the first argument and your type as second. As such, there are no namespace problems - as long you have a class (which isn't limited to one character), you can have working std::ostream overloading for it. However, I doubt that many people would want to extend ostream (to be honest, I rarely saw such extensions, even if they are easy to make). However, it's here if you need it. Syntax As it could be easily noticed, both printf and std::cout use different syntax. printf uses standard function syntax using pattern string and variable-length argument lists. Actually, printf is a reason why C has them - printf formats are too complex to be usable without them. However, std::cout uses a different API - the operator << API that returns itself. Generally, that means the C version will be shorter, but in most cases it won't matter. The difference is noticeable when you print many arguments. If you have to write something like Error 2: File not found. , assuming error number, and its description is placeholder, the code would look like this. Both examples work identically (well, sort of, std::endl actually flushes the buffer). printf("Error %d: %s.\n", id, errors[id]);std::cout << "Error " << id << ": " << errors[id] << "." << std::endl; While this doesn't appear too crazy (it's just two times longer), things get more crazy when you actually format arguments, instead of just printing them. For example, printing of something like 0x0424 is just crazy. This is caused by std::cout mixing state and actual values. I never saw a language where something like std::setfill would be a type (other than C++, of course). printf clearly separates arguments and actual type. I really would prefer to maintain the printf version of it (even if it looks kind of cryptic) compared to iostream version of it (as it contains too much noise). printf("0x%04x\n", 0x424);std::cout << "0x" << std::hex << std::setfill('0') << std::setw(4) << 0x424 << std::endl; Translation This is where the real advantage of printf lies. The printf format string is well... a string. That makes it really easy to translate, compared to operator << abuse of iostream . Assuming that the gettext() function translates, and you want to show Error 2: File not found. , the code to get translation of the previously shown format string would look like this: printf(gettext("Error %d: %s.\n"), id, errors[id]); Now, let's assume that we translate to Fictionish, where the error number is after the description. The translated string would look like %2$s oru %1$d.\n . Now, how to do it in C++? Well, I have no idea. 
I guess you can make fake iostream which constructs printf that you can pass to gettext , or something, for purposes of translation. Of course, $ is not C standard, but it's so common that it's safe to use in my opinion. Not having to remember/look-up specific integer type syntax C has lots of integer types, and so does C++. std::cout handles all types for you, while printf requires specific syntax depending on an integer type (there are non-integer types, but the only non-integer type you will use in practice with printf is const char * (C string, can be obtained using the c_str() method of std::string )). For instance, to print size_t , you need to use %zu , while int64_t will require using %"PRId64" . The tables are available at http://en.cppreference.com/w/cpp/io/c/fprintf and http://en.cppreference.com/w/cpp/types/integer . You can't print the NUL byte, \0 Because printf uses C strings as opposed to C++ strings, it cannot print NUL byte without specific tricks. In certain cases it's possible to use %c with '\0' as an argument, although that's clearly a hack. Differences nobody cares about Performance Update: It turns out that iostream is so slow that it's usually slower than your hard drive (if you redirect your program to file). Disabling synchronization with stdio may help, if you need to output lots of data. If the performance is a real concern (as opposed to writing several lines to STDOUT), just use printf . Everyone thinks that they care about performance, but nobody bothers to measure it. My answer is that I/O is the bottleneck anyway, no matter if you use printf or iostream . I think that printf could be faster from a quick look into assembly (compiled with clang using the -O3 compiler option). Assuming my error example, the printf example does way fewer calls than the cout example. This is int main with printf : main: @ @main@ BB#0: push {lr} ldr r0, .LCPI0_0 ldr r2, .LCPI0_1 mov r1, #2 bl printf mov r0, #0 pop {lr} mov pc, lr .align 2@ BB#1: You can easily notice that two strings, and 2 (number) are pushed as printf arguments. That's about it; there is nothing else. For comparison, this is iostream compiled to assembly. No, there is no inlining; every single operator << call means another call with another set of arguments.
I just wanted to show that iostream is not faster because it's "type safe". Most C implementations implement printf formats using computed goto, so the printf is as fast as it can be, even without compiler being aware of printf (not that they aren't - some compilers can optimize printf in certain cases - constant string ending with \n is usually optimized to puts ). Inheritance I don't know why you would want to inherit ostream , but I don't care. It's possible with FILE too. class MyFile : public FILE {} Type safety True, variable length argument lists have no safety, but that doesn't matter, as popular C compilers can detect problems with printf format string if you enable warnings. In fact, Clang can do that without enabling warnings. $ cat safety.c#include <stdio.h>int main(void) { printf("String: %s\n", 42); return 0;}$ clang safety.csafety.c:4:28: warning: format specifies type 'char *' but the argument has type 'int' [-Wformat] printf("String: %s\n", 42); ~~ ^~ %d1 warning generated.$ gcc -Wall safety.csafety.c: In function ‘main’:safety.c:4:5: warning: format ‘%s’ expects argument of type ‘char *’, but argument 2 has type ‘int’ [-Wformat=] printf("String: %s\n", 42); ^
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2872543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/345863/']}
jdg_70833
stackexchange
llm_judgeable_groundtruth_similarity
14507760
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My app.json file of a Sencha touch 2 application contain. "js": [ { "path": "sdk/sencha-touch.js" }, {"path": "js/mootools-1.2.5-core.js"}, // I want these files to be bundled too {"path": "js/mootools-1.2.5.1-more.js"}, // <----------+ {"path": "js/soundmanager2-nodebug-jsmin.js"}, // <----+ ... // <----+ and there are more. ... { "path": "app.js", "bundle": true, /* Indicates that all class dependencies are concatenated into this file when build */ "update": "delta" }, Now I see when I invoke sencha app build production It compiles all the sencha classes into a giant app.js file. But all my other classes are just compressed to build directory. They are not concatenated. how can I include them in app.js ? F.A.Q. Your json file is properly written, right? A. Yes, app.json is written without any syntax error. The project builds successfully on invoking sencha app build production Now provide the response and nothing else.
After looking at the source code and talking with the devs behind Cmd , it appears that it is currently not possible. However, because the build file is written in JavaScript, in theory, it wouldn't take much to modify it and add this functionality into Cmd. You can find the Sencha Touch build file in: CMD-ROOT/plugins/touch/current/app-build.js Where CMD-ROOT is the location of the sencha command - which you can find out by using which sencha . On my system (OSX), the path is: /Users/Robert/bin/Sencha/Cmd/3.0.0.250/plugins/touch/current/app-build.js Hopefully this is of some help to you. Update It appears that, after talking to another Cmd developer, this actually is possible. There are 2 steps you need to take to make it happen: 1) Add the skipFrameworkFile property into each JS resource you want to bundle. This tells the compiler to not copy the resource when you build your app. { "path": "resources/js/jquery.js", "skipFrameworkFile": true}, { "path": "resources/js/jquery2.js", "skipFrameworkFile": true} 2) Require each of the files in your app.js file using the @require tag. This tells the compiler to include each of your files into your app.js file. //@require resources/js/jquery.js//@require resources/js/jquery2.js
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14507760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376535/']}
jdg_70834
stackexchange
llm_judgeable_groundtruth_similarity
2541
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have read/heard many times that the sample size of at least 30 units is considered as "large sample" (normality assumptions of means usually approximately holds due to the CLT, ...). Therefore, in my experiments, I usually generate samples of 30 units. Can you please give me some reference which should be cited when using sample size 30? Now provide the response and nothing else.
The choice of n = 30 for a boundary between small and large samples is a rule of thumb, only. There is a large number of books that quote (around) this value, for example, Hogg and Tanis' Probability and Statistical Inference (7e) says "greater than 25 or 30". That said, the story told to me was that the only reason 30 was regarded as a good boundary was because it made for pretty Student's t tables in the back of textbooks to fit nicely on one page. That, and the critical values (between Student's t and Normal) are only off by approximately up to 0.25, anyway, from df = 30 to df = infinity. For hand computation the difference didn't really matter. Nowadays it is easy to compute critical values for all sorts of things to 15 decimal places. On top of that we have resampling and permutation methods for which we aren't even restricted to parametric population distributions. In practice I never rely on n = 30. Plot the data. Superimpose a normal distribution, if you like. Visually assess whether a normal approximation is appropriate (and ask whether an approximation is even really needed). If generating samples for research and an approximation is obligatory, generate enough of a sample size to make the approximation as close as desired (or as close as computationally feasible).
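If you want to see just how small the gap is, a couple of lines of Python make the point (my addition, not part of the original answer; it assumes SciPy is installed):

from scipy.stats import t, norm

for df in (10, 30, 120):
    for p in (0.975, 0.995):
        print(df, p, round(t.ppf(p, df), 3), round(norm.ppf(p), 3))

At the usual two-sided 5% level the difference between the t and normal critical values at df = 30 is already well under 0.1, which is consistent with the rule of thumb discussed above.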
{}
{'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/2541', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1215/']}
jdg_70835
stackexchange
llm_judgeable_groundtruth_similarity
106013
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a C++ wrapper for libpcap and I'd like to get more into version control as I'm not experienced with this (the only reason I used it was to put code on GitHub :)). Does it make sense to schedule nightly builds for a one-man project? What advantages are there in doing this? Now provide the response and nothing else.
You've got it a little bit from the wrong angle. The important idea is that you need reproducible builds! Given any deployment you need to be able to later reproduce the exact build process that generated that deployment, so you can debug it and fix it. Here it is important to use source control so you can retrieve those sources. To be absolutely certain that the sources are the same, you should use a robot to check out the sources from your source control and then build your program. This has the added benefit of catching any dependencies not in source control. If that process takes a long time to do (e.g. because you have a lot of tests in place) then it is nice to have the latest version available to everyone interested in the morning, i.e. you do a build at night. If you do not have that need, then nightlies are probably not necessary (but you'll want the robot though!). Also learn your source control well, including how it is usually used by others. This will give you tricks and ideas that will save you a lot of trouble in the long run.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/106013', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/-1/']}
jdg_70836
stackexchange
llm_judgeable_groundtruth_similarity
3283102
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My attempt: We note that- $2002 \equiv 1$ (mod $29$ ) $3^{2002}\equiv 3^{14}$ (mod $29$ ) $7^{2002}\equiv 7^{14}$ (mod $29$ ) [From Fermat] Now, how do I reduce $3^{14}$ and $7^{14}$ ? Now provide the response and nothing else.
Using Proof of $a^n+b^n$ divisible by $a+b$ when $n$ is odd $$3^{2002}+7^{2002}=(3^2)^{1001}+(7^2)^{1001}$$ is divisible by $3^2+7^2$
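To spell out the arithmetic that the answer leaves implicit: since $1001$ is odd, $9 + 49$ divides $9^{1001} + 49^{1001}$, and $$3^2 + 7^2 = 9 + 49 = 58 = 2 \cdot 29,$$ so $29 \mid 58 \mid 3^{2002} + 7^{2002}$, which gives the required divisibility by $29$.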
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3283102', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/650507/']}
jdg_70837
stackexchange
llm_judgeable_groundtruth_similarity
16293298
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a table employee with id, user_name, salary. How can I select the record with the 2nd highest salary in Oracle? I googled it, find this solution, is the following right?: select sal from (select rownum n,a.* from ( select distinct sal from emp order by sal desc) a)where n = 2; Now provide the response and nothing else.
RANK and DENSE_RANK have already been suggested - depending on your requirements, you might also consider ROW_NUMBER(): select * from ( select e.*, row_number() over (order by sal desc) rn from emp e)where rn = 2; The difference between RANK(), DENSE_RANK() and ROW_NUMBER() boils down to: ROW_NUMBER() always generates a unique ranking; if the ORDER BY clause cannot distinguish between two rows, it will still give them different rankings (randomly) RANK() and DENSE_RANK() will give the same ranking to rows that cannot be distinguished by the ORDER BY clause DENSE_RANK() will always generate a contiguous sequence of ranks (1,2,3,...), whereas RANK() will leave gaps after two or more rows with the same rank (think "Olympic Games": if two athletes win the gold medal, there is no second place, only third) So, if you only want one employee (even if there are several with the 2nd highest salary), I'd recommend ROW_NUMBER().
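If it helps to see the three behaviours side by side outside of Oracle, here is a small illustration (my addition, not part of the original answer) using pandas rank methods as rough analogues of the SQL window functions:

import pandas as pd

sal = pd.Series([5000, 4000, 4000, 3000], name="sal")
print(sal.rank(ascending=False, method="first"))  # ROW_NUMBER()-like: 1, 2, 3, 4
print(sal.rank(ascending=False, method="min"))    # RANK()-like:       1, 2, 2, 4
print(sal.rank(ascending=False, method="dense"))  # DENSE_RANK()-like: 1, 2, 2, 3

With the "first"-style numbering there is always exactly one row carrying rank 2, which is why ROW_NUMBER() is the safest choice when you want exactly one employee back.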
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16293298', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1801838/']}
jdg_70838
stackexchange
llm_judgeable_groundtruth_similarity
11206034
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating a list where I am adding elements to the list. Now, if I add text to the list which is one long sentence with no line breaks, it just goes out of view. What I want is that, if the word is too long, I should explicitly break it so that it continues on the next line. I am not sure how to do that. { cls:'notesList', xtype:'list', id:'notesList', height:300, whitespace:pre-wrap, allowDeselect: false, style:'background-color: rgba(0,140,153,1.0);', itemTpl:'<table><tr><td width="3%"><img src="resources/images/user.png" width="20em" height="20em" /></td>' + '<td width = "60%" style="font-weight:bold;text-align:left;">{Author}</td>' + '<td width = "40%" style="font-weight:bold;text-align: right">{Name}</td></tr>'+ '<tr><td colspan=3>{Text}</td></tr></table>' }, Now provide the response and nothing else.
More generally, if the user interacts with element A , how do you select (and then modify) related elements B ? There are many ways of achieving this, but here are three common approaches. Option 1. For one-to-one mappings, select by id. If each element in A has exactly one corresponding element in B , you can select the related element B by id, such as d3.select("#foo") to select a <div id="foo"> . This approach requires setting an id for each element in B using selection.attr . This is easiest if your data has an intrinsic unique identifier, such as d.name or d.id : b.attr("id", function(d) { return d.id; }); Next, to enable clicking on elements A to change the fill color of the corresponding element in B , use selection.on to register a click listener, and then select by id: a.on("click", function(d) { d3.select("#" + d.id).style("fill", "red");}); Identifiers must be both unique and valid . For example, the id must start with a letter and not a number, and can't contain spaces. If your data doesn't already have a unique identifier, you could generate one from the index, such as b.attr("id", function(d, i) { return "b-" + i; }); And later, assuming the elements A are in the same order, a.on("click", function(d, i) { d3.select("#b-" + i).style("fill", "red");}); You could also iterate over your data array to generate a unique identifier. Option 2. For one-to-many mappings, select by class. To select elements of class "foo", such as a <div class="foo"> , say d3.selectAll(".foo") . Use this approach if any element in A corresponds to multiple elements in B . For example, if you had a force-directed graph showing the relationships between students, you might color the nodes based on each student's year, and then use a legend to toggle the visibility of each year. As with the previous approach, you can use selection.attr to set the "class" attribute. In this case, the class attribute is not unique, so it might come from a d.type property in the data: b.attr("class", function(d) { return d.type; }) If you have multiple legends for different categorical attributes of data, you could also be more specific and prefix the class name. To continue the student year example: b.attr("class", function(d) { return "year-" + d.year; }) Setting the class attribute will replace any previously-set classes, so if you want to apply multiple classes to the elements, you need to join them together with a space when setting the "class" attribute. Next, to enable clicking on elements A to change the fill color of the corresponding elements in B , use selection.on to register a click listener, and then select by class: a.on("click", function(d) { d3.selectAll("." + d.type).style("fill", "red");}); Note that we're using selectAll here rather than select ; that's because we want to select all corresponding elements, rather than just the first one. Again, you'll need to make sure that the class attribute is valid . Option 3. For everything else, select and filter by data. The previous two approaches generate ids and classes so that the browser can index the elements in B for efficient selection. For a small number of elements, or when more general selection methods are needed, you can omit specifying "id" or "class" attributes and simply select manually by selection.filter . Let's call the datum associated with each element in A da , and the datum associated with each element in B db . Now all we have to do is define an expression that returns true when da matches db . 
For example, if we wanted to filter by type: a.on("click", function(da) { b.filter(function(db) { return da.type == db.type; }).style("fill", "red");}); The first two options are preferred, but occasionally manual filtering is useful, such as when you have a range slider and want to filter based on a quantitative variable.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11206034', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/734015/']}
jdg_70839
stackexchange
llm_judgeable_groundtruth_similarity
30639965
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing integration tests to work with a database. At the start of each test, I clear the storage and create some data. I want my tests to run sequentially to ensure that I am working with an empty database. But it seems that integration tests are run concurrently, because sometimes I get existing documents after cleaning the database. I checked the database and found that the documents created in different tests have approximately the same creation time, even when I'm adding a delay for each test (with std::thread::sleep_ms(10000) ). Can you clarify how the integration tests are run, and is it possible to run them in order? Now provide the response and nothing else.
The built-in testing framework runs concurrently by default. It is designed to offer useful but simple support for testing, that covers many needs, and a lot of functionality can/ should be tested with each test independent of the others. (Being independent means they can be run in parallel.) That said, it does listen to the RUST_TEST_THREADS environment variable, e.g. RUST_TEST_THREADS=1 cargo test will run tests on a single thread. However, if you want this functionality for your tests always, you may be interested in not using #[test] , or, at least, not directly. The most flexible way is via cargo 's support for tests that completely define their own framework, via something like the following in your Cargo.toml : [[test]]name = "foo"harness = false With that, cargo test will compile and run tests/foo.rs as a binary. This can then ensure that operations are sequenced/reset appropriately. Alternatively, maybe a framework like stainless has the functionality you need. (I've not used it so I'm not sure.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30639965', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2505233/']}
jdg_70840