Dataset columns:
source: string (1 distinct value)
task_type: string (1 distinct value)
in_source_id: string (length 1 to 8)
prompt: string (length 209 to 40.4k)
gold_standard_solution: string (length 0 to 56.7k)
verification_info: string (1 distinct value)
metadata: string (length 138 to 225)
problem_id: string (length 9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
30933
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Basically a beginner type of topology question here, but I am trying to understand something and am a bit stuck on a definition. According to J.P. May, a fibration is a map $p : E \to B$ such that for all spaces $Y$, that embed into $E$ by some map $f$, and have a compatible homotopy onto $B$ by $h: Y \times I \to B$, where $h(y, 0) = p(f(y))$ for all $y\in Y$; there is a unique extension of $h$ to a homotopy $\tilde{h} : Y \times I \to E$ such that, $p(\tilde{h}(y, t)) = h(y,t)$. Now what I am trying to wrap my head around is what that actually means. What restrictions does this place on the space $E$ and the maps $p$? Are there examples of continuous maps $p$ which do not satisfy this property? If so, how is this supposed to generalize the usual notion of a fiber bundle, and what are the fibers constrained to be? Can the fibers be different dimensional spaces or have varying genus? Now provide the response and nothing else.
Fibrations are a generalization of fiber bundles. When fiber bundles occur in homotopy theory, their primary usefulness is that they have the homotopy lifting property (a.k.a. the covering homotopy property). So the definition of "fibration" is purely a way to extend the notion of "fiber bundle" as broadly as possible while remaining useful in homotopy theory. The fibers are only constrained to be homotopy equivalent (assuming $B$ is path-connected). So the fiber over $x$ and the fiber over $y$ can have different dimensions, even one being a single point while the other is any contractible space. There are plenty of maps which are not fibrations. For example, any map from a closed interval of the real numbers onto the circle is not a fibration. A simple example of a fibration which is not a fiber bundle is to take an acute triangle (including its interior) for $E$ and one of its edges for $B$. Then define $\pi:E\rightarrow B$ as the orthogonal projection of $E$ onto $B$. Then the fibers at the two endpoints of that edge are single points, while the fibers everywhere else are closed intervals.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/30933', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/986/']}
jdg_381041
stackexchange
llm_judgeable_groundtruth_similarity
47824372
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My app currently uses a background service to communicate (Bluetooth) with a physical device. (I manufacture and sell the physical device.) The app sends a command to the device every 500ms. The user starts this process and it must run until the user stops it. While the app is active, results of the process are sent to the app. This process needs to run even if the app is not active. (i.e. they take a phone call, search the web.) Once the app becomes active again, the state of the process is synced with the app. The process can run anywhere from a few minutes to many hours. (Yes, the user would need to plug in if they want to run the process for 99hrs.) Most users run it for 2-15min. All is fine for now, but with API 26, it looks like this architecture is no longer allowed. One migration option is to move to a foreground service. However, I find the documentation to be unclear on how foreground services work. Does the foreground service continue to run when the app is not active? (i.e. it has gone through onPause.) If so, how is this any different from a background service? Is there better documentation on how the foreground service works? (My web searches have not turned up anything significant.) Also, the API 26 documentation does not say whether the new limitations still apply if the app is bound to the background service. Do they? Thanks, Stan Now provide the response and nothing else.
A Foreground Service is a Service that you put in the foreground state, which means the system will not kill the process when it needs CPU or when your app is closed. First you have 3 kinds of Services: Started Services (runs in the UI thread) IntentService (runs in its own thread) (See Services vs IntentServices ) Bound Services (runs as long as there's one activity active that bound it) As said above, if you close your app, a Bound Service will be closed too; it is launched by bindService() . IntentServices are a subtype of Service which simplify a "work queue process" for incoming intents, i.e. it handles incoming intents one by one within a queue, as said in the IntentService description . It has a default implementation and is launched by startService() . It is mainly for asynchronous tasks. A Started Service is a Service started by a component, and it continues to live until stopService() is called or your app is closed. Using a Foreground Service makes your Service persistent . You have to call startForeground() inside your service. It will still run until you stop your Service , e.g. with stopSelf() or stopService() . Note that onStartCommand() will be triggered each time you call startService() but onCreate() is triggered only once. Here is a simple implementation of a Foreground Started Service : In your AndroidManifest.xml: <service android:name=".MyService" android:enabled="true"/> In MyService.java: public class MyService extends Service { // Unique notification identifier private final static int NOTIFICATION_ID = 95; private NotificationManager mNotificationManager; public MyService() { super(); } @Override public void onCreate() { // Initialize notification mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE); NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(this); // Build your notification here mBuilder.setLargeIcon(BitmapFactory.decodeResource(getResources(), R.mipmap.ic_launcher)); mBuilder.setSmallIcon(R.mipmap.ic_small_icon); mBuilder.setContentTitle("MyService"); mBuilder.setContentText("The Service is currently running"); // Launch notification startForeground(NOTIFICATION_ID, mBuilder.build()); } @Override public int onStartCommand(Intent intent, int flags, int startId) { // Handle startService() if you need to, // for example if you are passing data in your intent return START_NOT_STICKY; } @Override public IBinder onBind(Intent intent) { // We don't provide binding, so return null return null; } @Override public void onDestroy() { super.onDestroy(); // Remove the notification when the service is stopped mNotificationManager.cancel(NOTIFICATION_ID); }} Finally, just call startService() .
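For completeness, here is a minimal sketch of that last step, assuming the MyService class above is being started from an Activity (the Activity class and its method names here are illustrative, not taken from the question):

import android.app.Activity;
import android.content.Intent;
import android.support.v4.content.ContextCompat; // androidx.core.content.ContextCompat in newer projects

public class MainActivity extends Activity {

    // Starts the communication service. Up to API 25 a plain startService() is enough;
    // from API 26 on, a service that will call startForeground() should be started with
    // startForegroundService(). ContextCompat picks the right variant for the platform.
    private void startDeviceService() {
        ContextCompat.startForegroundService(this, new Intent(this, MyService.class));
    }

    // Stops the service again when the user ends the process.
    private void stopDeviceService() {
        stopService(new Intent(this, MyService.class));
    }
}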
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47824372', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5128775/']}
jdg_381042
stackexchange
llm_judgeable_groundtruth_similarity
48876
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given a number N, how do I determine the first number after N with the same amount of odd and even divisors? For example, if we have N=1, then the next number we are searching for is 2, because its divisors are: odd: 1, even: 2. I figured out that this special number can't be odd and obviously it can't be prime. I can't find any formula for this, so do I just have to compute it one by one and check whether each candidate is this special number? Obviously 1 and the number itself are divisors of this number. Cheers Now provide the response and nothing else.
To get some idea of what's going on, we do like other scientists do, we experiment . Special numbers will be even, so we write down the number of odd divisors, even divisors, for the even numbers, starting with $2$. If a number turns out to be special, we put a $\ast$ in its row. So we make a table, giving the number, how many odd divisors it has, how many even. Calculations are easy, but we must be very careful, since errors could lead us down the wrong path. $2 \qquad 1 \qquad 1\quad\ast$ $4 \qquad 1 \qquad 2$ $6 \qquad 2 \qquad 2\quad\ast$ $8 \qquad 1 \qquad 3$ $10 \qquad 2 \qquad 2\quad\ast$ $12 \qquad 2 \qquad 4$ $14 \qquad 2 \qquad 2\quad\ast$ $16 \qquad 1 \qquad 4$ $18 \qquad 3 \qquad 3\quad\ast$ We could easily go on for a while. It is definitely not a waste of time, since it is useful to be well-acquainted with the structure of the smallish numbers that we bump into often. A pattern seems to jump out: every second even number seems to be special. It looks as if "special" numbers are not all that special! It can be dangerous to jump to conclusions from data about small integers. But in this case, the conclusion turns out to be correct. The special numbers, so far, all have the shape $2a$, where $a$ is an odd number. They are divisible by $2$ but not by $4$. The even numbers in our list that are not special are all divisible by $4$. Now we try to prove that every number that is divisible by $2$ but not by $4$ is special, and that the others are not. Take an odd number $b$, and look at the number $2b$. Think about the divisors of $2b$. If $k$ is an odd divisor of $2b$, then $2k$ is an even divisor of $2b$, and vice-versa. If $k$ is an odd divisor of $2b$, call $2k$ the friend of $k$. Split the divisors of $2b$ into pairs of friends. For example, if $b=45$, we have the following pairs of friends. $$(1,2)\qquad (3,6) \qquad(5,10)\qquad(9,18)\qquad(15,30) \qquad (45,90)$$ We have split the divisors of $2b$ into pairs of friends. Each pair has one odd number and one even number, so $2b$ has exactly as many odd divisors as even divisors. Now let's show that no number divisible by $4$ can be special. The idea is that if a number is divisible by $4$, then it has "too many" even divisors. I will not write out the details, but you should . The idea goes as follows. Take a number $n$ that is divisible by $4$, like $36$ or $80$. Split the divisors of $n$ into teams . If $k$ is an odd divisor of $n$, put into the same team as $k$ the numbers $2k$, $4k$, and so on however far you can go. Here are the teams for $n=36$.$$(1,2,4) \qquad (3,6,12)\qquad (9,18,36)$$ Each team has more even numbers than odd numbers, so $n$ has more even divisors than odd divisors. That means $n$ can't be special. Now let's get to your question : what is the first special number after $N$? If $N$ is divisible by $4$, the first special number after $N$ is $N+2$.If $N$ is divisible by $2$ but not by $4$, the first special number after $N$ is $N+4$. If $N$ has remainder $1$ on division by $4$, the first special after $N$ is $N+1$, and if the remainder is $3$, the first special is $N+3$. These facts follow easily from what we have discovered about special numbers. Formulas : We have been operating without formulas, just straight thinking. 
But I should mention a relevant formula. Let $n$ be an integer greater than $1$, and express $n$ as a product of powers of distinct primes. In symbols, let $$n=p_1^{a_1}p_2^{a_2} \cdots p_k^{a_k}$$ Then the number of divisors of $n$ is given by $$(a_1+1)(a_2+1) \cdots(a_k+1)$$ For example, $720=2^4\cdot 3^2\cdot 5^1$. The number of (positive) divisors of $720$ is $(4+1)(2+1)(1+1)$. The formula that gives the number of divisors of $n$ is not hard to prove. Try to produce a proof! The formula could be adapted to give a count of the odd divisors of $n$, and of the even divisors. Then we could use these formulas to identify the special numbers. But formulas cannot do the thinking for you. So as a first approach, the way we tackled things is much better than trying to use a formula.
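For the curious, here is one way the adaptation mentioned above could be written out (just a sketch, in the same notation). Write $n=2^a m$ with $m$ odd and $m=p_1^{a_1}\cdots p_k^{a_k}$. Every even divisor is $2^j$ times an odd divisor for some $1\le j\le a$, so
$$\#\{\text{odd divisors of } n\}=(a_1+1)\cdots(a_k+1), \qquad \#\{\text{even divisors of } n\}=a\,(a_1+1)\cdots(a_k+1).$$
The two counts are equal exactly when $a=1$, i.e. when $n$ is divisible by $2$ but not by $4$, which matches the description of the special numbers found above.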
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/48876', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10394/']}
jdg_381043
stackexchange
llm_judgeable_groundtruth_similarity
28916061
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The book The C Programming Language talks about "the most restrictive type" in section 8.7, Example — A Storage Allocator : Although machines vary, for each machine there is a most restrictive type: if the most restrictive type can be stored at a particular address, all other types may be also. On some machines, the most restrictive type is a double ; on others, int or long suffices. In their code, the union header is aligned using the type long . What is meant by the most restrictive type? Is it perhaps the largest type (e.g., double ), or is there another approach? Now provide the response and nothing else.
CPUs often require that (or work more efficiently if) certain types of data are stored at addresses that are a multiple of some (power-of-two) value. This value is called the alignment of the data. For example, a CPU might require that four-byte integers are stored at addresses that are a multiple of four (that they have four-byte alignment , or are aligned on four bytes ). By the most restrictive type , they mean the type that has the most restrictive requirements in this area. So if e.g. long double requires eight-byte alignment on some machine, and no other type requires greater alignment than that, then the most restrictive type on that machine would be long double . It makes sense for malloc(3) to return addresses that satisfy the alignment requirements of the most restrictive type, since that means the returned memory can be used to store any type. malloc() doesn't know how the memory will be used, so it can't adapt. It's not necessarily the case that larger data types require greater alignment, though alignment requirements tend to increase with increasing size. (Some types of data might require even greater alignment than malloc() provides. For example, many x86 SSE instructions use data vectors that are aligned on 16 bytes, while e.g. the malloc() in glibc only guarantees eight-byte alignment. posix_memalign(3) can be used to dynamically allocate memory with even greater alignment requirements on POSIX (*nix) systems.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28916061', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_381044
stackexchange
llm_judgeable_groundtruth_similarity
4724765
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on code to make an MS Word to HTML system. After googling for about half a minute, I was able to find code which does exactly what I need. Now.. It works offline on the ASP.NET development server, but it won't work when I upload the files to my server. I read a couple of posts, and the problem seems to be because the server does not have MS Office installed on it. Now, I'm not sure if it has, I'm still awaiting an email from the good people @ hosting (but I assume it's not installed), but my question is... Is there ANY way to make it work without MS Office installed? I'm using Microsoft.Office.Interop.Word ver. 12. / ASP 3.5 / C# and the error I'm getting is Could not load file or assembly 'Microsoft.Office.Interop.Word, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' or one of its dependencies. Thank you for your time! Now provide the response and nothing else.
The Interop library is not a "working" library in itself, it is only a wrapper around winword.exe for .NET programs, so using this library does not make any sense if you don't install or use Microsoft Word. Instead you will need to find a library that allows for manipulating Word Documents. If you can constrain the documents to be in the new format (docx), then it will be quite an easy task, e.g. using the OOXML SDK (as proposed by Stilgar, too). But there are libraries for the old format, too. Update: I have to admit, although I was convinced I searched and found some libraries for the old doc format before, I do not manage to find those anymore, probably because the result lists is "spoiled" by the many offers for docx. To be clear: If you can afford to stick to docx (2007 or later) format, you should do that. Office Open XML is a (more or less) open standard based on ZIP and XML, and many tools already exist and will be developed in the future. The old format is much less supported nowadays. If you have to go for the old format, too, then Aspose (as proposed by Uwe) is the only library I found.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4724765', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/580020/']}
jdg_381045
stackexchange
llm_judgeable_groundtruth_similarity
9674397
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: An User has one Package associated with it. Many users can refer to the same package. User cannot exists without a Package defined. User should own the relation. Relation is bidirectional, so a Package has zero or more users in it. These requirements lead to ManyToOne relation for User and OneToMany relation of Package in Doctrine 2. However package_id in user table (that is foreign-key) allows null values. I've tried setting nullable=false but command: php app/console doctrine:generate:entities DL --path="src" --no-backup Says that there is no attribute nullable for the relation ManyToOne . What i'm missing? class User{ /** * @ORM\Id * @ORM\Column(type="integer") * @ORM\GeneratedValue(strategy="AUTO") */ private $id; /** * @ORM\ManyToOne(targetEntity="Package", inversedBy="users") */ private $package;}class Package{ /** * @ORM\Id * @ORM\Column(type="integer") * @ORM\GeneratedValue(strategy="AUTO") */ private $id; /** * @ORM\OneToMany(targetEntity="User", mappedBy="package") */ private $users;} Now provide the response and nothing else.
Use the JoinColumn annotation on your ManyToOne relation: /** * @ORM\ManyToOne(targetEntity="Package", inversedBy="users") * @ORM\JoinColumn(name="package_id", referencedColumnName="id", nullable=false) */private $package; The ManyToOne itself cannot be nullable, because it doesn't relate to a specific column. The JoinColumn on the other hand identifies the column in the database. Thus, you can use "normal" attributes like nullable or unique!
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9674397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220180/']}
jdg_381046
stackexchange
llm_judgeable_groundtruth_similarity
15037192
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How would I add custom attributes into Zend Framework 2 navigation? I know I can add id or class -> but that's about it.... 1) How would I add data-test='blahblah' attribute for example? 2) Can I add attribute to li elements that contain actual links? $container = new Zend\Navigation\Navigation(array( array( 'label' => 'Page 1', 'id' => 'home-link', 'uri' => '/', ), array( 'label' => 'Zend', 'uri' => 'http://www.zend-project.com/', 'order' => 100, ),); Edit: @Bram Gerritsen: Thanks for your answer. Yes - I can add 'data-test' => 'blahblah' and retrieve it as $page->get('data-test') - but this still doesn't append it as an attribute into <a></a> .... Would I ahve to override htmlify to to that? Now provide the response and nothing else.
Bram's answer helped point me to a solution, here's what I needed and how I solved it (since I was new to ZF2 and namespaces it took me much longer than it should have, so hopefully this will help others) Problem Want to use Zend\Navigation to benefit from its isActive() method and the built in translation, ACL, etc support. Needed to add CSS class name(s) to the <li> element and <a> element. (ZF2's Menu View Helper supports an 'either or' approach currently) Needed to add CSS class name(s) to nested <ul> elements. Needed to add additional attributes to the <a> element such as data-*="..." Needed these changes to support Bootstrap 3 markup Solution Description Create customer View Helper by extending Zend\View\Helper\Navigation\Menu Slightly modify the renderNormalMenu() and htmlify() methods Take advantage of the ability to add custom properties to Zend\Pages to add CSS classes and additional attributes to some elements Solution Step 1 Created custom View Helper under the Application module src\Application\View\Helper\NewMenu.php NewMenu.php <?phpnamespace Application\View\Helper;// I'm extending this class, need to include ituse Zend\View\Helper\Navigation\Menu;// Include namespaces we're using (from Zend\View\Helper\Navigation\Menu)use RecursiveIteratorIterator;use Zend\Navigation\AbstractContainer;use Zend\Navigation\Page\AbstractPage;class NewMenu extends Menu{ // copied fromZend\View\Helper\Navigation\Menu protected function renderNormalMenu(...){} // copied from Zend\View\Helper\Navigation\Menu public function htmlify(...){}} Step 2 Registered new View Helper with the getViewHelperConfig() in \module\Application\Module.php <?php/** * Zend Framework (http://framework.zend.com/) ...*/namespace Application;use Zend\Mvc\ModuleRouteListener;use Zend\Mvc\MvcEvent;class Module{ // ** snip ** public function getViewHelperConfig() { return array( 'invokables' => array( // The 'key' is what is used to call the view helper 'NewMenu' => 'Application\View\Helper\NewMenu', ) ); }} Step 3 In my layout.phtml script, I get my Navigation container and pass it to the NewMenu view helper. I also set some options like adding the parent <ul> class name and not escaping labels so I can add the standard 'dropdown caret' that Bootstrap uses (ie. <b class="caret"></b> ) to a label with a dropdown menu. $container = $this->navigation('navigation')->getContainer();echo $this->NewMenu($container)->setUlClass('nav navbar-nav')->escapeLabels(false); Intermission At this point, we should have more or less just duplicated the Menu View Helper. It should produce a navigation the same way the standard View Helper does. Step 4 In the NewMenu.php class, I remove the $addClassToListItem code to avoid it from placing classes on the wrong element by accident. protected function renderNormalMenu(...) // Add CSS class from page to <li>//if ($addClassToListItem && $page->getClass()) {// $liClasses[] = $page->getClass();//} public function htmlify(...) // Always apply page class to <a> tag. We'll use a diff. method for <li>//if ($addClassToListItem === false) { $attribs['class'] = $page->getClass();//} Step 5 Add a method to apply CSS class name to <li> tags, since we removed the $addClassTolistItem method. We simply use the Page classes ability to have custom properties and do this: protected function renderNormalMenu // Is page active?if ($isActive) { $liClasses[] = 'active';}if($wrapClass = $page->get('wrapClass')){ $liClasses[] = $wrapClass;}... 
Now, in our Navigation config file, we can simply add a property called wrapClass to apply CSS classes to the wrapping element ( <li> ). config\autoload\global.php ...'navigation' => array( 'default' => array( ... array( 'label' => 'Products <b class="caret"></b>', 'route' => 'products', 'wrapClass' => 'dropdown', // class to <li> 'class' => 'dropdown-toggle', // class to <a> like usual 'pages' => array( array( 'label' => 'Cars', 'route' => 'products/type', ... ), ... ), ),... Step 6 Add the ability to have additional attributes on <a> like data-* . For Bootstrap 3 you'll need data-toggle="dropdown" for example. public function htmlify(...) // get attribs for element$attribs = array( 'id' => $page->getId(), 'title' => $title,);// add additional attributes$attr = $page->get('attribs');if(is_array($attr)){ $attribs = $attribs + $attr;} In your config file, you can now add a property with an array of additional attributes: config\autoload\global.php ...'navigation' => array( 'default' => array( ... array( 'label' => 'Products <b class="caret"></b>', 'route' => 'products', 'wrapClass' => 'dropdown', // class to <li> 'class' => 'dropdown-toggle', // class to <a> like usual 'attribs' => array( 'data-toggle' => 'dropdown', // Key = Attr name, Value = Attr Value ), 'pages' => array( array( 'label' => 'Cars', 'route' => 'products/type', ... ), ... ), ),... Step 7 Add the ability to place class names on nested lists container (ie. <ul> ). protected function renderNormalMenu() if ($depth > $prevDepth) { // start new ul tag if ($ulClass && $depth == 0) { $ulClass = ' class="' . $ulClass . '"'; } // Added ElseIf below else if($ulClass = $page->get('pagesContainerClass')){ $ulClass = ' class="' . $ulClass . '"'; } else { $ulClass = ''; } $html .= $myIndent . '<ul' . $ulClass . '>' . self::EOL; The original code basically said "if this is the first <ul> and there's a UL class, add it, else do nothing. So, I added an additional check to say, if a property called pagesContainerClass is available, to apply the class to the <ul> as well. This means we need to add the property on the right Page in our configuration: config\autoload\global.php ...'navigation' => array( 'default' => array( ... array( 'label' => 'Products <b class="caret"></b>', 'route' => 'products', 'wrapClass' => 'dropdown', // class to <li> 'class' => 'dropdown-toggle', // class to <a> like usual 'attribs' => array( 'data-toggle' => 'dropdown', // Key = Attr name, Value = Attr Value ), 'pages' => array( array( 'label' => 'Cars', 'route' => 'products/type', // Give child <ul> a class name 'pagesContainerClass' => 'dropdown-menu', ... ), ... ), ),... Important to note, the UL class needs to be placed on the first child Page of a child because the conditional statements are wrapped in a the following condition: if ($depth > $prevDepth) { // start new ul tag ...} After the first child is called, the $dept = $prevDepth and the nested <ul> will have already been sent to the string buffer. This solution hasn't been rigorously tested but the idea is that is simply takes the current Menu View Helper, and overloads the two necessary methods and only slightly modifies that. I've tried to use setPartial() but that only helped with the <li> generation, it was still using the Menu View Helpers' htmlify() method (all of which was mentioned in Bram's discussion above). 
So with making those small tweeks to the to methods and using the Page class's ability to have custom properties, I could just add some additional logic to get class names on the <li> , <a> and nested <ul> classes as well as add additional properties on the <a> elements, so I could configure my Zend\Navigation from the config to spit out, basically, Bootstrap 3 Navbar markup. The end Layout then just looks like this: <nav class="navbar navbar-default navbar-static-top" role="navigation"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> </div> <div class="collapse navbar-collapse navbar-ex1-collapse"> <?php // Use Zend\Navigation to create the menu $container = $this->navigation('navigation')->getContainer(); echo $this->NewMenu($container)->setUlClass('nav navbar-nav')->escapeLabels(false); ?> </div><!-- /.navbar-collapse --></nav> The troubles I kept running into was a better understanding of PHP Namespaces and having needed to include the appropriate Qualified namespaces in my custom View Helper, even though I was extending it. The other problem, was that the Navigation View Helper can call the Menu View Helper from itself like so: $this->navigation('navigation')->menu(); This won't work: $this->navigation('navigation')->NewMenu(); I'm thinking because of namespace issues with NewMenu not being registered in the Navigation View Helper class and I'm not going to extend it just for that. So, hopefully this (long) answer will help others who are struggling with this need. Cheers!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15037192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2101630/']}
jdg_381047
stackexchange
llm_judgeable_groundtruth_similarity
19118376
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When would it make sense to use int32.Parse(String, IFormatProvider) ? As far as I can tell, this and int32.Parse(String) uses NumberStyles.Integer anyway which only allows a plus, a minus, or digits, optionally surrounded by whitespace, so why does the locale format enter into the equation? I know about thousand separators, but they don't matter because NumberStyles.Integer disallows them no matter your region. Now provide the response and nothing else.
Consider if you have culture where negative sign is M (minus). I am pretty sure it doesn't exist but just consider that you have something like that. Then you can do: string str = "M123";var culture = new CultureInfo("en-US");culture.NumberFormat.NegativeSign = "M";int number = Int32.Parse(str, culture); This would result in -123 as value. This is where you can use int32.Parse(String, IFormatProvider) overload. If you don't specify the culture, then it would use the current culture and would fail for the value M123 . (Old Answer) It is useful with string with thousand separator Consider the following example, string str = "1,234,567";System.Threading.Thread.CurrentThread.CurrentCulture = new CultureInfo("de-DE");int number = Int32.Parse(str, CultureInfo.CurrentCulture); This would result in an exception since . is the thousand separator in German culture. For int number = Int32.Parse("1.234", NumberStyles.AllowThousands); The above would parse successfully, since the German culture uses . as thousand separator. But if you have current culture set as US then it would give an exception. System.Threading.Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");int number = Int32.Parse("1.234", NumberStyles.AllowThousands); See: Int32.Parse Method (String, IFormatProvider) The provider parameter is an IFormatProvider implementation, such as a NumberFormatInfo or CultureInfo object . The provider parameter supplies culture-specific information about the format of s. If provider is null, the NumberFormatInfo object for the current culture is used.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19118376', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/842982/']}
jdg_381048
stackexchange
llm_judgeable_groundtruth_similarity
18858905
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I apologize if this question sounds simple, my intention is to understand in depth how this (these?) particular operator(s) works and I was unable to find a satisfactory description in the perldocs (It probably exists somewhere, I just couldn't find it for the life of me) Particularly, I am interested in knowing if a) <> b) <*> or whatever glob and c) <FH> are fundamentally similar or different, and how they are used internally. I built my own testing functions to gain some insight on this (presented below). I still don't have a full understanding (my understanding might even be wrong) but this is what I've concluded: <> In Scalar Context: Reads the next line of the "current file" being read (provided in @ARGV). Questions : This seems like a very particular scenario, and I wonder why it is the way it is and whether it can be generalized or not. Also what is the "current file" that is being read? Is it in a file handle? What is the counter? In List Context: Reads ALL of the files in @ARGV into an array <list of globs> In Scalar Context: Name of the first file found in current folder that matches the glob. Questions : Why the current folder? How do I change this? Is the only way to change this doing something like < /home/* > ? In List Context: All the files that match the glob in the current folder. <FH> just seems to return undef when assigned to a variable. Questions : Why is it undef? Does it not have a type? Does this behave similarly when the FH is not a bareword filehandle? General Question : What is it that handles the value of <> and the others during execution? In scalar context, is any sort of reference returned, or are the variables that we assign them to, at that point identical to any other non-ref scalar? I also noticed that even though I am assigning them in sequence, the output is reset each time. i.e. I would have assumed that when I do $thing_s = <>;@thing_l = <>; @thing_l would be missing the first item, since it was already received by $thing_s . Why is this not the case? Code used for testing: use strict;use warnings;use Switch;use Data::Dumper;die "Call with a list of files\n" if (@ARGV<1);my @whats = ('<>','<* .*>','<FH>');my $thing_s;my @thing_l;for my $what(@whats){ switch($what){ case('<>'){ $thing_s = <>; @thing_l = <>; } case('<* .*>'){ $thing_s = <* .*>; @thing_l = <* .*>; } case('<FH>'){ open FH, '<', $ARGV[0]; $thing_s = <FH>; @thing_l = <FH>; } } print "$what in scalar context is: \n".Dumper($thing_s)."\n"; print "$what in list context is: \n".Dumper(@thing_l)."\n";} Now provide the response and nothing else.
The <> thingies are all iterators. All of these variants have common behaviour: Used in list context, all remaining elements are returned. Used in scalar context, only the next element is returned. Used in scalar context, it returns undef once the iterator is exhausted. These last two properties make it suitable for use as a condition in while loops. There are two kinds of iterators that can be used with <> : Filehandles. In this case <$fh> is equivalent to readline $fh . Globs, so <* .*> is equivalent to glob '* .*' . The <> is parsed as a readline when it contains either nothing, a bareword, or a simple scalar. More complex expression can be embedded like <{ ... }> . It is parsed as a glob in all other cases. This can be made explicit by using quotes: <"* .*"> but you should really be explicit and use the glob function instead. Some details differ, e.g. where the iterator state is kept: When reading from a file handle, the file handle holds that iterator state. When using the glob form, each glob expression has its own state. Another part is if the iterator can restart: glob restarts after returning one undef . filehandles can only be restarted by seeking – not all FHs support this operation. If no file handle is used in <> , then this defaults to the special ARGV file handle. The behaviour of <ARGV> is as follows: If @ARGV is empty, then ARGV is STDIN . Otherwise, the elements of @ARGV are treated as file names. The following pseudocode is executed: $ARGV = shift @ARGV;open ARGV, $ARGV or die ...; # careful! no open mode is used The $ARGV scalar holds the filename, and the ARGV file handle holds that file handle. When ARGV would be eof , the next file from @ARGV is opened. Only when @ARGV is completely empty can <> return undef . This can actually be used as a trick to read from many files: local @ARGV = qw(foo.txt bar.txt baz.txt);while (<>) { ...;}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18858905', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1440820/']}
jdg_381049
stackexchange
llm_judgeable_groundtruth_similarity
36124536
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have added Reachability into my project and the following things are working fine: it successfully checks the host request and whether an active Wi-Fi or mobile data connection is present. But when I tested the reachability of a Wi-Fi network that has lost its internet connection, it still gives a result like "reachable via Wi-Fi" (for example, you have an active Wi-Fi connection but no internet is actually received from the Wi-Fi). I did add NSTimers and achieved the exact result I want, but I would like to achieve this with Reachability itself, so can anyone help solve my issue? Now provide the response and nothing else.
You can do this pretty nicely with the switchMap operator. Each time the button is pressed it will switch to a new subscription of the interval observable - meaning it will start over again. The previous subscription is dropped automatically so there won't be multiple intervals running. Observable<Long> intervalObservable = Observable.interval(10, TimeUnit.SECONDS);RxView.clicks(refreshButton) .switchMap(ignored -> { return intervalObservable .startWith(0L) // For an immediate refresh .observeOn(AndroidSchedulers.mainThread()) .doOnNext(x -> performRefresh()); }) .subscribe(); The startWith adds an immediate value to the interval (causing the refresh immediately when the button is clicked), and the observeOn makes sure the refresh happens on the main thread (important since the interval will emit on a background thread). Update : vman noticed that this implementation only starts refreshing after the user clicks the button the first time. The following alternative will refresh immediately upon subscription, then every 10 seconds, until the button is clicked – at which point it will refresh immediately again, then continue updating every 10 seconds. Observable<Long> intervalObservable = Observable.interval(10, TimeUnit.SECONDS) .startWith(0L) // For an immediate refresh .observeOn(AndroidSchedulers.mainThread()) .doOnNext(x -> performRefresh());Observable<Long> buttonClickedObservable = RxView.clicks(refreshButton) .map(e -> 0L) // To make the compiler happy .switchMap(ignored -> Observable.error(new RuntimeException("button pressed")));Observable.merge(intervalObservable, buttonClickedObservable) .retry() .subscribe(); In this implementation an observable that does the refreshing gets subscribed to immediately (leading to an immediate refresh, and repeating every 10 seconds), It's merged with an Observable that will error whenever the refresh button is clicked. The retry at the end will cause the whole thing to get resubscribed to on an error (button click) – causing it to all start over again.
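One practical note, sketched in the same RxJava 1.x style as the snippets above (where subscribe() returns a Subscription): keep a reference to that subscription so the periodic refresh actually stops when the screen goes away, otherwise the interval keeps firing in the background. The field and callback names here are illustrative.

import rx.Subscription;

// Field in the Activity/Fragment that owns the refresh button:
private Subscription refreshSubscription;

// Where the chain from the second variant is built:
refreshSubscription = Observable.merge(intervalObservable, buttonClickedObservable)
        .retry()
        .subscribe();

// And in onDestroy() / onDestroyView():
if (refreshSubscription != null && !refreshSubscription.isUnsubscribed()) {
    refreshSubscription.unsubscribe(); // stops both the timer and the click handling
}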
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36124536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3747951/']}
jdg_381050
stackexchange
llm_judgeable_groundtruth_similarity
12721827
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a simple app which should be able to access files from internal storage as well as from external storage (removable cards) like micro SD cards (when a user inserts an SD card). (Not the internal sdcard which comes with the device; I know it can be accessed using Environment.getExternalStorageDirectory().) Is it possible to find out if a user has inserted an SD card into the device? If yes, is it possible to get the path of that SD card? I found that hard-coding the path was not a good option, because different devices have different paths for the SD card inserted by the user. Any help is very much appreciated. Thank you. Now provide the response and nothing else.
(Not the internal sdcard which comes with the device, I know it can be accessed using Environment.getExternalStorageDirectory() ) Android considers both removable storage media (such as an SD card) and internal (non-removable) storage to be "external storage". The following is from developer.android.com: Every Android-compatible device supports a shared "external storage" that you can use to save files. This can be a removable storage media (such as an SD card) or an internal (non-removable) storage. Files saved to the external storage are world-readable and can be modified by the user when they enable USB mass storage to transfer files on a computer. To check SD card availability you can use the following code. private boolean isExternalStorageAvailable() { String state = Environment.getExternalStorageState(); boolean mExternalStorageAvailable = false; boolean mExternalStorageWriteable = false; if (Environment.MEDIA_MOUNTED.equals(state)) { // We can read and write the media mExternalStorageAvailable = mExternalStorageWriteable = true; } else if (Environment.MEDIA_MOUNTED_READ_ONLY.equals(state)) { // We can only read the media mExternalStorageAvailable = true; mExternalStorageWriteable = false; } else { // Something else is wrong. It may be one of many other states, but // all we need // to know is we can neither read nor write mExternalStorageAvailable = mExternalStorageWriteable = false; } if (mExternalStorageAvailable == true && mExternalStorageWriteable == true) { return true; } else { return false; } } Please read http://developer.android.com/guide/topics/data/data-storage.html#filesExternal
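If you also need a hint about where a removable card is mounted (the second part of the question), one option is to ask the system for the app-specific directories on every mounted volume. This is only a sketch, assuming API 19+ or the support library is available; it does not cover every device quirk, and the helper class name is made up for the example:

import java.io.File;
import android.content.Context;
import android.util.Log;
import android.support.v4.content.ContextCompat; // androidx.core.content.ContextCompat in newer projects

public class StorageHelper {
    // Logs the app-specific directory on each mounted storage volume.
    // Index 0 is the primary external storage; any further entries are
    // secondary volumes such as a removable SD card, when one is inserted.
    public static void logExternalDirs(Context context) {
        File[] dirs = ContextCompat.getExternalFilesDirs(context, null);
        for (File dir : dirs) {
            if (dir != null) {
                Log.d("Storage", "External dir: " + dir.getAbsolutePath());
            }
        }
    }
}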
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12721827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1448201/']}
jdg_381051
stackexchange
llm_judgeable_groundtruth_similarity
96289
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In 1995 (if I'm not mistaken) Taylor and Wiles proved that all semistable elliptic curves over $\mathbb{Q}$ are modular. This result was extended to all elliptic curves in 2001 by Breuil, Conrad, Diamond, and Taylor. I'm asking this as a matter of interest. Are there any other fields over which elliptic curves are known to be modular? Are there any known fields for which this is not true for? Also, is much research being conducted on this matter? Now provide the response and nothing else.
Yes, this is a very active area -- one of the major themes of current research in number theory. Much of the recent work has focussed on proving something slightly weaker, but easier to get at, than modularity. An elliptic curve $E$ over a number field $K$ is said to be potentially modular if there is a finite extension $L / K$ such that $E$ becomes modular over $L$. This notion of potential modularity has been much studied by Richard Taylor and his coauthors, and turns out to be almost as good for most purposes as knowing modularity over $K$. It's now known, for instance, that any elliptic curve over a totally real number field $K$ becomes modular over some totally real extension $L / K$; a bit of googling turns up http://www2.math.kyushu-u.ac.jp/~virdol/basechange2.pdf (which shows that one can choose $L$ in a rather specific way, using work of Taylor and Skinner-Wiles to do the heavy lifting). I'm not an expert in the area, but my impression from talking to genuine experts is that current methods are very much limited to the case where the elliptic curve is defined over a field which is either totally real or CM -- outside these situations modularity is much less well understood. (EDIT: I should add that there are some totally real fields for which one can show modularity, rather than just potential modularity; Jarvis and Manoharmayum have shown, for instance, that every semistable elliptic curve over $\mathbb{Q}(\sqrt{2})$ is modular.)
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/96289', 'https://mathoverflow.net', 'https://mathoverflow.net/users/22095/']}
jdg_381052
stackexchange
llm_judgeable_groundtruth_similarity
61348714
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: when in the development environment, my app works just fine.When in the production environment it crashes with the error: Uncaught TypeError: (0 , _react.useEffect) is not a function It happens in a file I created where I import React and useEffect like so: import React, { useEffect } from 'react'const X = () => { useEffect(() => { ... }) ...} adding a console.log just below this line confirms that useEffect is indeed undefined when in production and the expected function when in dev. I checked my package.json, yarn.lock & node_modules for any react or react-dom version that might be under 16.8.0 where useEffect was introduced. But everything is 16.13.1 and they are the main dependency and I did try to clean my yarn cache, delete node_modules & yarn.lock, and re-install. I tried adding and removing it from peerDependencies without success. I put in a check to make sure there are not 2 separate versions of React running, but saving window.React1 = React inside the library and window.React2 = React inside my application and checking window.React1 === window.React2 it was true, so that's not it either. Lastly, I also tried to alias React to the specific one in node_modules, but without any luck. The only solution I've found that works are if I import it like so: import React from 'react';const X = () => { React.useEffect(() => { ... }) ...} But this should be exactly the same as using a destructured import?If I do explicitly use React.useEffect it also forces me to change all of my other useState and useEffect hooks to React.useSate and React.useEffect The next error just becomes: TypeError: (0 , _react.useState) is not a function in another file where I use React hooks. I want to solve the problem not implement a workaround. I use microbundle to bundle my library using React.I use parcel-bundler to import the React-component and render it in a dev environment (directly from src) or prod (the bundled library) The bundled version I use is bundled with .mjs I checked the output of the minified .mjs bundle as well and inside React is imported like this: import ue,{useEffect as pe,useState as fe}from"react"; Which looks fine to me. What I really don't understand is how a restructured import would break it, but just doing React.useEffect would work just fine? 
Here's my package.json { "name": "xxx", "version": "1.1.4", "repository": "[email protected]:xxx/xxx.git", "author": "xxx", "license": "MIT", "source": "src/index.ts", "main": "dist/bundle.js", "umd:main": "dist/bundle.umd.js", "module": "dist/bundle.mjs", "publishConfig": { "registry": "https://npm.pkg.github.com/@xxx" }, "scripts": { "build": "microbundle", "dev": "parcel ./test-app/dev/index.html --port 3000", "start": "parcel ./test-app/serve/index.html --port 3000", "storybook": "start-storybook -s ./public -c .storybook --ci", "prepublishOnly": "yarn build" }, "dependencies": { "@api-platform/admin": "2.1.0", "@api-platform/api-doc-parser": "0.8.2", "@fortawesome/fontawesome-svg-core": "^1.2.28", "@fortawesome/free-solid-svg-icons": "^5.13.0", "@fortawesome/react-fontawesome": "^0.1.9", "@material-ui/core": "^4.9.10", "@material-ui/icons": "^4.9.1", "@react-keycloak/web": "^2.1.1", "@types/pluralize": "^0.0.29", "google-geocoder": "0.2.1", "history": "^4.10.1", "keycloak-js": "^9.0.3", "lodash.debounce": "^4.0.8", "lodash.omit": "^4.5.0", "lodash.set": "4.3.2", "notistack": "0.9.9", "papaparse": "^5.2.0", "parcel-bundler": "1.12.4", "polished": "^3.5.2", "react": "16.13.1", "react-admin": "3.4.1", "react-dom": "16.13.1", "react-is": "16.13.1", "react-redux": "^7.2.0", "recompose": "^0.30.0", "redux": "4.0.5", "styled-components": "5.1.0" }, "devDependencies": { "@babel/core": "7.9.0", "@babel/plugin-syntax-export-default-from": "7.8.3", "@babel/preset-env": "7.9.5", "@babel/preset-react": "7.9.4", "@storybook/addon-a11y": "5.3.18", "@storybook/addon-actions": "5.3.18", "@storybook/addon-info": "5.3.18", "@storybook/addon-knobs": "5.3.18", "@storybook/addon-links": "5.3.18", "@storybook/addon-storyshots": "5.3.18", "@storybook/addon-storysource": "5.3.18", "@storybook/addon-viewport": "5.3.18", "@storybook/react": "5.3.18", "@testing-library/react": "^10.0.3", "@types/jsonld": "1.5.1", "@types/lodash": "4.14.149", "@types/node": "13.11.1", "@types/papaparse": "5.0.3", "@types/react-redux": "7.1.7", "@types/recompose": "^0.30.7", "@types/styled-components": "5.1.0", "@welldone-software/why-did-you-render": "4.0.7", "awesome-typescript-loader": "5.2.1", "babel-loader": "^8.1.0", "babel-plugin-module-resolver": "4.0.0", "babel-plugin-styled-components": "1.10.7", "lodash.get": "4.4.2", "lodash.uniq": "4.5.0", "microbundle": "0.11.0", "openapi-types": "1.3.5", "parcel-plugin-static-files-copy": "2.3.1", "pluralize": "^8.0.0" }, "alias": { "jsonld": "./node_modules/jsonld/dist/jsonld.js" }, "staticFiles": { "staticPath": "public", "watcherGlob": "**" }} Also worth noting, it's only React I'm having this problem with. All my other restructured imports work just fine. Now provide the response and nothing else.
It seems that microbundle does not get along with React: it creates a bundle that attempts to use react from the global scope instead of the React that is really exposed. For the same reason your workaround with React.useEffect works as expected; just imagine that it looks like window.React.useEffect . Here is an example of a primitive application: import ReactDOM from 'react-dom';import React, { useEffect, useState } from 'react';/** * necessary workaround: microbundle uses the `h` pragma by default, * which is undefined when you use React * another option is to build with the option --jsx * @example microbundle --globals react=React --jsx React.createElement * yes, yet another workaround*/window.h = React.createElement;const X = () => { const [A, B] = useState('world'); useEffect(() => { B('MLyck'); }, []) return `Hello ${A}`;}ReactDOM.render(<X />, document.querySelector('react-app')); After bundling with just microbundle it is completely broken, but when you try to bundle with microbundle --globals react=React as @Jee Mok correctly suggests, it will produce a correct bundle. I hope the comments will explain what happened. !function (e, t) { "object" == typeof exports && "undefined" != typeof module ? t(require("react-dom"), require("react")) : "function" == typeof define && define.amd ? define(["react-dom", "react"], t) : t(e.ReactDOM, e.React); /* The string above is the core of the problem: in case you try to bundle without the option `--globals react=React` it will look like: `t(e.ReactDOM, e.react);` Obviously `react` is not defined in `e` (i.e. `this`, i.e. `window`) because react exposes itself as `React` */}(this, function (e, t) { e = e && e.hasOwnProperty("default") ? e.default : e, window.h = ("default" in t ? t.default : t).createElement, e.render(h(function () { var e = t.useState("world"), n = e[0], r = e[1]; return t.useEffect(function () { r("MLyck"); }, []), "Hello " + n; }, null), document.querySelector("react-app"));}); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.13.1/umd/react.development.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.13.1/umd/react-dom.development.js"></script> <react-app></react-app> And, by the way, the destructured import is not at all to blame.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61348714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4010197/']}
jdg_381053
stackexchange
llm_judgeable_groundtruth_similarity
153159
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A friend of mine is running Ubuntu and got GRUB RESCUE. Can they use a Mint ISO to repair their GRUB, as I don't have an Ubuntu ISO? Now provide the response and nothing else.
If the Ubuntu installation is still present (and only GRUB was lost), sure, you can use any distro that has live booting to do so. chroot into the Ubuntu installation and install and update Grub. If /dev/sda5 is the Ubuntu partition: mount /dev/sda5 /mntmount -o bind /dev /mnt/devmount -t proc none /mnt/procmount -t sysfs none /mnt/sysmount -t devpts none /mnt/dev/ptschroot /mnt /bin/bash#Inside the chrootgrub-install /dev/sdaupdate-grubexit# Unmount all those mounts:for m in /mnt/{dev/pts,dev,proc,sys,}; do umount $m; done# reboot If all you need to do is install grub, and updating isn't necessary, then you don't need the chroot : mount /dev/sda5 /mntgrub-install --root-directory=/mnt /dev/sda If you have a separate boot partition, remember to mount it as well, after mounting /mnt .
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/153159', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/77239/']}
jdg_381054
stackexchange
llm_judgeable_groundtruth_similarity
60383178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm searching for a solution to combine freezed and hive packages.For example like that: @freezedabstract class Person extends HiveObject with _$Person { @HiveType(typeId: 0) factory Person({@HiveField(0) String name, @HiveField(1) int age}) = _Person;} I know that this is not possible out of the box, but I think you know what I want to achieve. What would be the best way to implement freezed with hive? The only solution that I can currently think of is to store the json-String which is generated by freezed in hive. But I hope there is a better solution. Now provide the response and nothing else.
yes, it is now possible, make sure your min version is hive_generator: ^0.7.2+1 . as an example you could write: import 'package:freezed_annotation/freezed_annotation.dart';import 'package:hive/hive.dart';part 'immutable_class.freezed.dart';part 'immutable_class.g.dart';@freezedabstract class ImmutableClass with _$ImmutableClass { @HiveType(typeId: 5, adapterName: 'ImmutableClassAdapter') const factory ImmutableClass({ @JsonKey(name: 'id', required: true, disallowNullValue: true) @HiveField(0) int id, @HiveField(1) int someField1, @HiveField(2) String someField2, }) = _ImmutableClass; factory ImmutableClass.fromJson(Map<String, dynamic> json) => _$ImmutableClassFromJson(json);} the only disadvantage is that you should specify the name of your adaptor.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/60383178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11950815/']}
jdg_381055
stackexchange
llm_judgeable_groundtruth_similarity
3923574
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently working on a desktop application using JPA/Hibernate to persist data in a H2 database. I'm curious what my options are if I need to make changes to the database schema in the future for some reason. Maybe I'll have to introduce new entities, remove them or just change the types of properties in an entity. Is there support in JPA/Hibernate to do this? Would I have to manually script a solution? Now provide the response and nothing else.
I usually let Hibernate generate the DDL during development and then create a manual SQL migration script when deploying to the test server (which I later use for UAT and live servers as well). The DDL generation in Hibernate does not offer support for data migration at all: if you do so much as add a non-null field, DDL generation cannot help you. I have yet to find any truly useful migration abstraction to help with this. There are a number of libraries (have a look at this SO question for examples), but when you're doing something like splitting an existing entity into a hierarchy using joined inheritance, you're always back to plain SQL.
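As a small illustration of the first sentence (a sketch using Hibernate's native bootstrap API; the property name is the standard one, everything else here is made up for the example), development-time DDL generation is just a configuration switch:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class DevSessionFactoryBuilder {
    // "update" lets Hibernate alter the schema to match the mapped entities at
    // startup -- convenient during development, but it never migrates data.
    // For the test/UAT/live servers you would leave this property out and run
    // the hand-written SQL migration script instead.
    public static SessionFactory build() {
        return new Configuration()
                .configure() // reads hibernate.cfg.xml from the classpath
                .setProperty("hibernate.hbm2ddl.auto", "update")
                .buildSessionFactory();
    }
}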
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3923574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51382/']}
jdg_381056
stackexchange
llm_judgeable_groundtruth_similarity
15225908
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why is it that in the following, the output is [] and not [1] ? String input="1|2|3";String[] values= input.split("|");System.out.println("[" + values[0] + "]");// Output: [] However, if we change the separator, the output is [1] . String input="1;2;3";String[] values= input.split(";");System.out.println("[" + values[0] + "]");// Output: [1] Now provide the response and nothing else.
String.split() takes a regular expression, and | is the regex alternation operator, so an unescaped "|" is a pattern that matches the empty string; the input is then split at zero-width matches instead of at the literal | characters, which is why you don't get the expected result. Escape that character and it is matched literally: String input="1|2|3";String[] values= input.split("\\|");System.out.println("[" + values[0] + "]"); // prints [1]
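If you would rather not think about which characters are regex metacharacters, Pattern.quote() can do the escaping for you. A small self-contained sketch:

import java.util.regex.Pattern;

public class SplitExample {
    public static void main(String[] args) {
        String input = "1|2|3";
        // Pattern.quote wraps the separator in \Q...\E so it is matched literally,
        // which is handy when the separator is not known at compile time.
        String[] values = input.split(Pattern.quote("|"));
        System.out.println("[" + values[0] + "]"); // prints [1]
    }
}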
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15225908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1156156/']}
jdg_381057
stackexchange
llm_judgeable_groundtruth_similarity
62877
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that there are already tools out there to generate cryptographic random numbers, but I was wondering if using AES could do the same. For example, I have a secret key for AES on my server and a counter in my database. Each time I want to generate a random number, I increment the counter, then encrypt it with AES using the secret key on my server, then transform the resulting bits into the right number. Wouldn't this produce secure random numbers as long as my secret key is not found? N.B.: If you want to say that storing a password on your server is not safe, please explain how it is different from storing your private SSL key on your server. Edit Found some info on Wikipedia: A secure block cipher can be converted into a CSPRNG by running it in counter mode. This is done by choosing a random key and encrypting a 0, then encrypting a 1, then encrypting a 2, etc. The counter can also be started at an arbitrary number other than zero. Obviously, the period will be $2^n$ for an n-bit block cipher; equally obviously, the initial values (i.e., key and "plaintext") must not become known to an attacker, however good this CSPRNG construction might be. Otherwise, all security will be lost. So, it has been used before to construct a CSPRNG. Now provide the response and nothing else.
What you are describing is a custom, AES-based PRNG . Encrypting successive values of a counter with AES and a secret key is a sound method for building a PRNG (indeed, it is equivalent to encrypting a long sequence of zeros with AES/CTR), provided that you take care of the following: You have to use the block cipher "as is": encode the counter as a single 16-byte block, process it with AES (no "block cipher mode"), get the next 16 pseudorandom bytes. Increment the counter, and loop. This won't be stronger than the encryption key is secret. Keep the key safe ! Of course this assumes that you already have a secret key, which implies that some other cryptographically secure PRNG was used at some point. A PRNG does not create randomness, it expands some initial randomness (the "seed", here the AES key) into a long stream of pseudorandom bytes. This PRNG relies on your never reusing a counter value. So you have to make sure that the counter is always increased and cannot be forced to "rewind". This is not as easy as it seems, in particular in embedded devices, where read-write permanent storage is at a premium, and the user (who may be the attacker) can force hardware resets at any time (by removing the battery). AES being a block cipher, it is a permutation of the space of block values. As such, it won't ever output twice the same 16-byte block, which deviates from what a true random source would produce. This may begin to show (statistically) after about 2 68 bytes, which is high enough not to be an issue. However, think twice before trying the same trick with a block cipher with smaller blocks (3DES, Blowfish...). An improvement may be to regenerate a new AES key each time the server starts up, using /dev/urandom . That way, there is no key to store anywhere. Indeed, there is no need to reuse always the same key. (Arguably, it is even simpler to use the strong PRNG which is already there, e.g. /dev/urandom . Switching to a custom PRNG like the one you envision is quite unwarranted; unless you need pseudorandom bytes at more than 10 megabytes per second, which may happen in some cases, but is weird. If you are in need of lots of pseudorandom bytes, you may want to consider using other more specialized algorithms like these stream ciphers .)
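For illustration only, here is a minimal sketch of the counter-mode construction described above, written with the PyCryptodome package (an assumption; any AES implementation would do). It regenerates the key at startup from os.urandom, as suggested, and makes no attempt at counter persistence:

```python
import os
from Crypto.Cipher import AES  # PyCryptodome

class AesCounterPrng:
    def __init__(self):
        self.key = os.urandom(16)                  # fresh key each start-up
        self.cipher = AES.new(self.key, AES.MODE_ECB)
        self.counter = 0

    def next_block(self):
        # Encode the counter as a single 16-byte block and encrypt it "as is".
        block = self.counter.to_bytes(16, "big")
        self.counter += 1
        return self.cipher.encrypt(block)          # 16 pseudorandom bytes

prng = AesCounterPrng()
print(prng.next_block().hex())
```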
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/62877', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/50051/']}
jdg_381058
stackexchange
llm_judgeable_groundtruth_similarity
9347909
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to write a small low level program. For some parts of it I will need to use assembly language, but the rest of the code will be written on C/C++. So, if I will use GCC to mix C/C++ with assembly code, do I need to use AT&T syntax or can I use Intel syntax? Or how do you mix C/C++ and asm (intel syntax) in some other way? I realize that maybe I don't have a choice and must use AT&T syntax, but I want to be sure.. And if there turns out to be no choice, where I can find full/official documentation about the AT&T syntax? Thanks! Now provide the response and nothing else.
If you are using separate assembly files , gas has a directive to support Intel syntax: .intel_syntax noprefix # not recommended for inline asm which uses Intel syntax and doesn't need the % prefix before register names. (You can also run as with -msyntax=intel -mnaked-reg to have that as the default instead of att , in case you don't want to put .intel_syntax noprefix at the top of your files.) Inline asm: compile with -masm=intel For inline assembly, you can compile your C/C++ sources with gcc -masm=intel (See How to set gcc to use intel syntax permanently? for details.) The compiler's own asm output (which the inline asm is inserted into) will use Intel syntax, and it will substitute operands into asm template strings using Intel syntax like [rdi + 8] instead of 8(%rdi) . This works with GCC itself and ICC, but for clang only clang 14 and later . (Not released yet, but the patch is in current trunk.) Using .intel_syntax noprefix at the start of inline asm, and switching back with .att_syntax can work, but will break if you use any m constraints. The memory reference will still be generated in AT&T syntax. It happens to work for registers because GAS accepts %eax as a register name even in intel-noprefix mode. Using .att_syntax at the end of an asm() statement will also break compilation with -masm=intel ; in that case GCC's own asm after (and before) your template will be in Intel syntax. (Clang doesn't have that "problem"; each asm template string is local, unlike GCC where the template string truly becomes part of the text file that GCC sends to as to be assembled separately.) Related: GCC manual: asm dialect alternatives : writing an asm statement with {att | intel} in the template so it works when compiled with -masm=att or -masm=intel . See an example using lock cmpxchg . https://stackoverflow.com/tags/inline-assembly/info for more about inline assembly in general; it's important to make sure you're accurately describing your asm to the compiler, so it knows what registers and memory are read / written. AT&T syntax: https://stackoverflow.com/tags/att/info Intel syntax: https://stackoverflow.com/tags/intel-syntax/info The x86 tag wiki has links to manuals, optimization guides, and tutorials.
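A tiny, hedged example of what the -masm=intel route looks like in practice (the instruction chosen is arbitrary). It assembles only when the file is compiled with gcc -masm=intel, since the template uses Intel operand order and an un-prefixed immediate:

```c
#include <stdio.h>

int main(void) {
    unsigned long x = 41;
    /* Intel syntax: destination first, no $ on immediates, no % on registers.
       Compile with: gcc -masm=intel example.c                                */
    asm("add %0, 1" : "+r"(x));
    printf("%lu\n", x);   /* prints 42 */
    return 0;
}
```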
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9347909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/753418/']}
jdg_381059
stackexchange
llm_judgeable_groundtruth_similarity
4052478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing unit tests to test an MVC application. How can I validate my view models from a unit test? Now provide the response and nothing else.
A trivial solution to this is to have your webserver distinguish between API calls and regular calls, then have two different WSGI instances of your application: one with sessions enabled, the other with sessions disabled. (This is probably much easier with Nginx than with Apache.) An alternative is to inherit SessionMiddleware and then edit the process functions to ignore all requests matching your criteria. Something like: from django.contrib.sessions.middleware import SessionMiddlewareclass MySessionMiddleware(SessionMiddleware): def process_request(self, request): if request.path_info[0:5] == '/api/': return super(MySessionMiddleware, self).process_request(request) def process_response(self, request, response): if request.path_info[0:5] == '/api/': return response return super(MySessionMiddleware, self).process_response(request, response) And then edit your setting's file so that MIDDLEWARE_CLASSES contains the path to "MySessionMiddleware" and not 'django.contrib.sessions.middleware.SessionMiddleware'.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4052478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1127460/']}
jdg_381060
stackexchange
llm_judgeable_groundtruth_similarity
4548464
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use the following code to enable myClass to use foreach. But I am rather new to programming and have some difficulty in understanding the following code. I described my problems in the comments. I would be grateful for providing some information. public class MyClass : IEnumerable<string> { //1) What is IEnumerator for? // Whats the difference between IEnumerator and IEnumerable public IEnumerator<string> GetEnumerator() { yield return "first"; yield return "second"; } //2) What is it for? It just calls above method IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } } //3) Lastly what benefits I have from implementing genetic interface //IEnumerable<string> instead of just IEnumerable Now provide the response and nothing else.
What is the difference between IEnumerator and IEnumerable? Jason's answer is good but I thought I'd just add how I think about this. Imagine you have a sequence: 1, 1, 2, 3, 5, 8, 13, ... Now imagine you have an arrow pointing at some position of that sequence: 1, 1, 2, 3, 5, 8, 13, ... ^ An "arrow" is an object that can do two things. First, it can give you the thing it is pointing at. Second, it can make itself point at the next thing. IEnumerator is an arrow. It has a property, Current, that gives you the thing it is pointing at. It has a method, MoveNext() that makes itself point at the next thing. How do you get an arrow in the first place? You need an arrow factory. You ask the factory for an arrow, and it gives you an arrow that points to the first element in the sequence. IEnumerable is an arrow factory. It has a method, GetEnumerator, that gives you an arrow to the first element of the sequence. A nice property of this scheme is that you can have multiple arrows pointing to different places in the same sequence. what are the benefits of implementing generic interface IEnumerable instead of just IEnumerable? Suppose the sequence is of integers. If you implement IEnumerable then when you say foreach(int x in mysequence) what that will actually do is convert the int in the sequence to object, boxing the integer, and then immediately unbox the object back to integer, adding a completely unnecessary memory allocation to every single operation. If the compiler knows that the sequence is of integers then it can skip the unnecessary boxing operation. Suppose the sequence is of strings. If you implement IEnumerable<string> then you can say: string first = mysequence.First(); If you don't, then you have to say string first = (string)mysequence.First(); which is unnecessary and error-prone. Rather than instruct the compiler via a cast that the type is string, you can simply guarantee that the type is string by using the type system.
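To make the arrow/arrow-factory picture concrete, here is a small sketch that drives the enumerator by hand, using the MyClass from the question (assumed to be in scope):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        IEnumerable<string> factory = new MyClass();                 // the arrow factory
        using (IEnumerator<string> arrow = factory.GetEnumerator())  // one arrow
        {
            while (arrow.MoveNext())              // point at the next thing
                Console.WriteLine(arrow.Current); // the thing pointed at
        }
    }
}
```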
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4548464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/457640/']}
jdg_381061
stackexchange
llm_judgeable_groundtruth_similarity
7676356
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I first saw the alerts in Bootstrap I thought they would behave like the modal window does, dropping down or fading in, and then fading out when closed. But it seems like they are always visible. I guess I could have them sit in a layer above my app and manage showing them but I was wondering if the functionality was built in? thanks! Edit, what I have so far: <div id="saveAlert" class="alert-message success fade in" data-alert="alert" style="top:0"> <a class="close" href="#">×</a> <p><strong>Well done!</strong> You successfully read this alert message.</p></div> Now provide the response and nothing else.
I strongly disagree with most answers previously mentioned. Short answer: Omit the "in" class and add it using jQuery to fade it in. See this jsfiddle for an example that fades in alert after 3 seconds http://jsfiddle.net/QAz2U/3/ Long answer: Although it is true bootstrap doesn't natively support fading in alerts, most answers here use the jQuery fade function, which uses JavaScript to animate (fade) the element. The big advantage of this is cross browser compatibility. The downside is performance (see also: jQuery to call CSS3 fade animation? ). Bootstrap uses CSS3 transitions, which have way better performance. Which is important for mobile devices: Bootstraps CSS to fade the alert: .fade { opacity: 0; -webkit-transition: opacity 0.15s linear; -moz-transition: opacity 0.15s linear; -o-transition: opacity 0.15s linear; transition: opacity 0.15s linear;}.fade.in { opacity: 1;} Why do I think this performance is so important? People using old browsers and hardware will potentially get a choppy transitions with jQuery.fade(). The same goes for old hardware with modern browsers. Using CSS3 transitions people using modern browsers will get a smooth animation even with older hardware, and people using older browsers that don't support CSS transitions will just instantly see the element pop in, which I think is a better user experience than choppy animations. I came here looking for the same answer as the above: to fade in a bootstrap alert. After some digging in the code and CSS of Bootstrap the answer is rather straightforward. Don't add the "in" class to your alert. And add this using jQuery when you want to fade in your alert. HTML (notice there is NO in class!) <div id="myAlert" class="alert success fade" data-alert="alert"> <!-- rest of alert code goes here --></div> Javascript: function showAlert(){ $("#myAlert").addClass("in")} Calling the function above function adds the "in" class and fades in the alert using CSS3 transitions :-) Also see this jsfiddle for an example using a timeout (thanks John Lehmann!): http://jsfiddle.net/QAz2U/3/
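A possible companion to showAlert(), if you also want the alert to fade back out after a delay (the function name and timing are just for illustration); removing the "in" class runs the same CSS transition in reverse:

```javascript
function hideAlertLater(delayMs) {
    setTimeout(function () {
        $("#myAlert").removeClass("in");   // opacity animates back to 0
    }, delayMs || 3000);
}
```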
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7676356', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/627408/']}
jdg_381062
stackexchange
llm_judgeable_groundtruth_similarity
13692015
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems like most LINQ is written with lambda expressions. How do I go about rewriting this linq using lambda, kinda confusion with the style (especially with joins)? var responses = from c in questionRepository.GetReponses() join o in questionRepository.GetQuestions() on c.QuestionID equals o.QuestionID where c.UserID == 9999 orderby o.DisplayOrder select new { o.QuestionText, c.AnswerValue }; Now provide the response and nothing else.
I prefer the "LINQ syntax" for Joins as I think it looks cleaner. In any case, here is how to translate the LINQ-join to the "Lambda Expression"-join. The translation for: from a in AAjoin b in BB ona.Y equals b.Yselect new {a, b} Is: AA.Join( // L BB, // R a => a.Y, b => b.Y, // L -> join value, R -> join value (a, b) => new {a, b}) // L+R result The other LINQ keywords are much simpler to convert (e.g. OrderBy(u => u.DisplayOrder) and are just "chained together" with . . - give it a go!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13692015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1069456/']}
jdg_381063
stackexchange
llm_judgeable_groundtruth_similarity
73640
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand how complexes are coloured. After some reading, I found out this was due to the d-d splitting induced by the coordinate bonds of ligands to the central metal ion. The electrons in the d-orbitals then transition between each other, and absorb light in the process. However, I do not understand what happens to the electrons donated by the ligand. Don't they form a sigma bond with the d-orbitals of the metal ion? In which case, wouldn't they go into the d-orbitals and 'fill it up', thus disallowing transitions? Or is it the case that the d-orbitals of the central metal ion are not involved in the bonding at all? Now provide the response and nothing else.
This is a very good question, I must say. It requires the understanding of the very fundamentals. You're right, if the electrons from the ligand pair up with the electrons of the metal, the electrons cannot undergo $d$-$d$ transitions, and no color would be seen. But do they pair up? Think carefully . Recall one of the most basic assumptions of the Crystal Field Theory: The attraction between the ligand and the metal is assumed to be purely electrostatic in nature . This very important assumption is the sole reason for the explanation of colors of these complexes. As the ligands approach the metal atom, they create a crystal field . These fields tend to repel the electrons of the metal and increases their energy. A perfectly symmetric spherical crystal field would increase the energy level of all the orbitals equally and to the same level. But this isn't the case in actual coordination complexes. An octahedral crystal field would approach along the axes of the $d_{x^2+y^2}$ and $d_{z^2}$ and tend to repel them to a greater extent than the other orbitals ($d_{xy}$, $d_{yz}$, $d_{zx}$). This difference in energies of the orbitals is known as crystal field splitting . We can now group these orbitals into two groups, three low energy $t_{2g}$, and two high energy $e_g$ orbitals. Keep in mind that the ligands do not pair with the electrons of the metal. They simply repel the orbitals electrostatically and increase their potential energy in the process. So, now how do the complexes actually get their color then? When the orbitals split, the difference in their energies is called Crystal Field Stabilization Energy (CFSE) and is denoted by $\Delta_{\text{o}}$. When photons of light are incident on the complex, it absorbs the photons which possess the energy equal to that of the value of $\Delta _{\text{o}}$. From quantum theory of electromagnetic waves, it's known that the energy of a photon is given by: $$U = \frac{hc}{\lambda}$$ Where $h$ is known as the Planck's Constant, with a value of $\pu{6.626×10^{-34}Js}$, and $\lambda$ is the wavelength of light. If the value of the wavelength lies in the visible light spectrum, you can find the color of the light absorbed by the complex. How do you find the color emitted out? There's a very easy and fun way to find that out. Take a look at this color wheel: Find where the color of absorption lies. Then the color of the complex lies on the opposite side of the color of absorption. This is a brief discussion about the colors of complexes. Let me know if you need more clarification.
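As a worked illustration of the last step (the numbers here are invented, not measured): suppose a complex has $\Delta_{\text{o}} = 3.0\times10^{-19}\ \text{J}$. Then $$\lambda = \frac{hc}{\Delta_{\text{o}}} = \frac{(6.626\times10^{-34}\ \text{J s})(3.0\times10^{8}\ \text{m s}^{-1})}{3.0\times10^{-19}\ \text{J}} \approx 6.6\times10^{-7}\ \text{m} \approx 660\ \text{nm},$$ which is red light; reading the color wheel, such a complex would appear green.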
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/73640', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/44608/']}
jdg_381064
stackexchange
llm_judgeable_groundtruth_similarity
1715358
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Being fascinated by the approximation $$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}\qquad (0\leq x\leq\pi)$$ proposed, more than 1400 years ago by Mahabhaskariya of Bhaskara I (a seventh-century Indian mathematician) (see here ), I considered the function $$\sin \left(\frac{1}{2} \left(\pi -\sqrt{\pi ^2-4 y}\right)\right)$$ which I expanded as a Taylor series around $y=0$. This gives $$\sin \left(\frac{1}{2} \left(\pi -\sqrt{\pi ^2-4 y}\right)\right)=\frac{y}{\pi }+\frac{y^2}{\pi ^3}+\left(\frac{2}{\pi ^5}-\frac{1}{6 \pi ^3}\right) y^3+O\left(y^4\right)$$ Now, I made (and may be, this is not allowed) $y=(\pi-x)x$. Replacing, I obtain$$\sin(x)=\frac{(\pi -x) x}{\pi }+\frac{(\pi -x)^2 x^2}{\pi ^3}+\left(\frac{2}{\pi ^5}-\frac{1}{6 \pi ^3}\right) (\pi -x)^3 x^3+\cdots$$ I did not add the $O\left(.\right)$ on purpose since not feeeling very comfortable. What is really beautiful is that the last expansion matches almost exactly the function $\sin(x)$ for the considered range $(0\leq x\leq\pi)$ and it can be very useful for easy and simple approximate evaluations of definite integrals such as$$I_a(x)=\int_0^x \frac{\sin(t)}{t^a}\,dt$$ under the conditions $(0\leq x\leq \pi)$ and $a<2$. I could do the same with the simplest Padé approximant and obtain $$\sin(x)\approx \frac{(\pi -x) x}{\pi \left(1-\frac{(\pi -x) x}{\pi ^2}\right)}=\frac{5\pi(\pi -x) x}{5\pi ^2-5(\pi -x) x}$$ which, for sure, is far to be as good as the magnificent approximation given at the beginning of the post but which is not very very bad (except around $x=\frac \pi 2$). The problem is that I am not sure that I have the right of doing things like that. I would greatly appreciate if you could tell me what I am doing wrong and/or illegal using such an approach. Edit After robjohn's answer and recommendations, I improved the approximation writing as an approximant $$f_n(x)=\sum_{i=1}^n a_i \big(\pi-x)x\big)^i$$ and minimized $$S_n=\int_0^\pi\big(\sin(x)-f_n(x)\big)^2$$ with respect to the $a_i$'s. What is obtained is $$a_1=\frac{60480 \left(4290-484 \pi ^2+5 \pi ^4\right)}{\pi ^9} \approx 0.31838690$$ $$a_2=-\frac{166320 \left(18720-2104 \pi ^2+21 \pi ^4\right)}{\pi ^{11}}\approx 0.03208100$$ $$a_3=\frac{720720 \left(11880-1332 \pi ^2+13 \pi ^4\right)}{\pi ^{13}}\approx 0.00127113$$ These values are not very far from those given by Taylor ($\approx 0.31830989$), ($\approx 0.03225153$), ($\approx 0.00116027$) but, as shown below, they change very drastically the results. The errors oscillate above and below the zero line and, for the considered range, are all smaller than $10^{-5}$. After minimization, $S_3\approx 8.67\times 10^{-11}$ while, for the above Taylor series, it was $\approx 6.36\times 10^{-7}$. Now provide the response and nothing else.
A few approximations When making approximations, there is no legal or illegal. There are things that work better and things that don't. When making approximations that are supposed to work over a large range of values, often the plain Taylor series is not the best way to go. Instead, a polynomial or rational function that matches the function at a number of points is better.$$\frac{\pi(\pi-x)x}{\pi^2-\left(4-\pi\right)(\pi-x)x}\tag{1}$$matches the values and slopes of $\sin(x)$ at $0$, $\frac\pi2$, and $\pi$. However, it is always low. If instead, we match the values at $0$, $\frac\pi6$,$\frac\pi2$, $\frac{5\pi}6$, and $\pi$ we get Mahabhaskariya's approximation$$\frac{16(\pi-x)x}{5\pi^2-4(\pi-x)x}\tag{2}$$which is both high and low, and the maximal error is about $\frac13$ of the one-sided error. A good quadratic polynomial approximation also matches the values at $0$, $\frac\pi6$,$\frac\pi2$, $\frac{5\pi}6$, and $\pi$$$\frac{31}{10\pi^2}(\pi-x)x+\frac{18}{5\pi^4}(\pi-x)^2x^2\tag{3}$$ The maximal error is about $\frac23$ that of Mahabhaskariya's. If we want to extend to a cubic polynomial, we can try to match values at $0$, $\frac\pi6$, $\frac\pi4$, $\frac\pi2$$$\tfrac{9711-6400\sqrt2}{210\pi^2}(\pi-x)x+\tfrac{-7194+5120\sqrt2}{15\pi^4}(\pi-x)^2x^2+\tfrac{43488-30720\sqrt2}{35\pi^6}(\pi-x)^3x^3\tag{4}$$ The maximum error of approximation $(4)$ is about $\frac1{40}$ that of approximation $(3)$. Analysis of the functions in the question The function$$\frac{\pi(\pi-x)x}{\pi^2-(\pi-x)x}\tag{5}$$has a maximum error about $40\times$ as big as $(3)$ The function$$\frac{(\pi-x)x}\pi+\frac{(\pi-x)^2x^2}{\pi^3}+\left(\frac2{\pi^5}-\frac1{6\pi^3}\right)(\pi-x)^3x^3\tag{6}$$has $30\times$ the maximum error of $(4)$. However, the coefficients of $(6)$ are more appealing.
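If you want to sanity-check these error figures numerically, a short sketch (requires NumPy; the printed value is approximate):

```python
import numpy as np

x = np.linspace(0.0, np.pi, 100001)
bhaskara = 16 * (np.pi - x) * x / (5 * np.pi**2 - 4 * (np.pi - x) * x)
print(np.max(np.abs(np.sin(x) - bhaskara)))   # roughly 1.6e-3 for approximation (2)
```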
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1715358', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82404/']}
jdg_381065
stackexchange
llm_judgeable_groundtruth_similarity
7286585
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a zoo object called pp with daily data and 77 columns that looks like this: X02R X03N X04K X04N X04R X06I X06N X08J X08P X09O X11O X12L X14N X15G X16K (...)1961-01-01 8.3 5.2 3.2 0.0 8.7 5.2 15.0 7.2 11.5 13.0 0.0 4.9 0.0 2.9 6.0 1961-01-02 1.1 3.2 10.0 0.0 0.0 3.5 0.0 8.7 0.4 1.2 0.0 0.4 0.0 3.2 0.2 1961-01-03 12.0 4.2 50.5 0.0 9.0 38.5 15.0 31.7 1.7 8.7 9.0 69.2 4.2 22.2 9.2 (...) I want to use apply.monthly to each of the columns, so in the end I will still have 77 columns but with monthly data instead of daily data. I tried apply.monthly(pp, FUN=sum) but the result is a zoo object with just one column (I think is adding all the columns). I also tried a loop: for (i in 1:77){mensal<-apply.monthly(pp[,i], FUN=sum)} but it also results in just one column (the 77th column). I might be able to make the loop work with some trial and error but it takes ages to compute ( I have 17897 rows and 77 columns) and I guess there is a simpler way of doing this without using loops... So if you know how, please help. Thanks! Now provide the response and nothing else.
In order for apply.monthly to return an object with more than one column, you have to use a function that operates by column (or apply a function that doesn't). library(quantmod)getSymbols("SPY")zSPY <- as.zoo(SPY)# sum doesn't operate by column; it sums everything to one valuesum(zSPY)spy.sum <- apply.monthly(zSPY, sum)# colSums operates by columnspy.colSums <- apply.monthly(zSPY, colSums)# use apply to operate by columnspy.apply.sum <- apply.monthly(zSPY, apply, 2, sum)
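Applied to the pp object from the question, that would look something like this sketch (assuming pp carries a proper date index and the xts package is installed):

```r
library(xts)
pp_x <- as.xts(pp)                        # zoo -> xts, keeps the daily index
mensal <- apply.monthly(pp_x, colSums)    # one row per month, all 77 columns
# equivalently:
mensal <- apply.monthly(pp_x, apply, 2, sum)
```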
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7286585', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/747311/']}
jdg_381066
stackexchange
llm_judgeable_groundtruth_similarity
5851966
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a project with a simple local git repo, and I want to move this project (folders etc.) to another computer and work from there from now on. I don't want to have anything left on the old machine (except of course my other git projects). I want it to be as if I have been working from the new machine all along. Can I simply move all the files over to that computer, or will there be a problem with keys? Should I have the same key across two machines? If simply moving all the folders can't be done, what should I do? I want to avoid the hassle of setting up and learning to use a server, since this seems complicated and I don't want to alter my workflow. Now provide the response and nothing else.
For your case, the best way to do it is to copy over the folder (copy, scp, cp, robocopy - whichever) to the new computer and delete the old folder. I completely disagree with @Pablo Santa Cruz that cloning is the paradigm for what you are doing. No it is not. You are moving a repo to a new computer. Why I don't like clone for this purpose: It creates remote-tracking branches for each branch in the cloned repository . You are moving, and the old repo is defunct. Any remote branches and other refs are completely ignored. You don't get your hooks if you had any and you might forget that you had them! You cannot get "lost" commits etc using git reflog or other means. Might not be a huge issue, especially if the repo acted as a server but something to be aware of. If you search for ways to backup a git repo, git clone wouldn't be in the top answers. So it shouldn't be used for moving a repo! I also feel that just a git clone cannot be a proper answer because git clone has the --mirror option, which preserves the repo, meaning that a git clone repo is different from git clone --mirror repo (apart from being bare, the differences are mostly those I mentioned above). I would do a copy because I know what I get with the copied repo - the same repo! When to consider git clone: It is faster as git does some optimization while cloning You might have different git version on the new machine and a copy might make the repo unusable in the other version (not very common nowadays). But actually this can be another pro for copying, since this would inform you that that the new computer has a different git version.
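In practice the move can be as simple as the following sketch (paths and host names are placeholders); verify the copy on the new machine before deleting anything:

```bash
# On the old machine: copy the whole working tree, .git directory included.
scp -r ~/projects/myrepo user@newmachine:~/projects/

# On the new machine: confirm the repository is intact.
cd ~/projects/myrepo && git status && git fsck

# Back on the old machine, only after the check above succeeds:
rm -rf ~/projects/myrepo
```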
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5851966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/154066/']}
jdg_381067
stackexchange
llm_judgeable_groundtruth_similarity
1694529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a simple way of allowing interaction with a button in a UIView that lies under another UIView - where there are no actual objects from the top UIView on top of the button? For instance, at the moment I have a UIView (A) with an object at the top and an object at the bottom of the screen and nothing in the middle. This sits on top of another UIView that has buttons in the middle (B). However, I cannot seem to interact with the buttons in the middle of B. I can see the buttons in B - I've set the background of A to clearColor - but the buttons in B do not seem to receive touches despite the fact that there are no objects from A actually on top of those buttons. EDIT - I still want to be able to interact with the objects in the top UIView Surely there is a simple way of doing this? Now provide the response and nothing else.
You should create a UIView subclass for your top view and override the following method: - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event { // UIView will be "transparent" for touch events if we return NO return (point.y < MIDDLE_Y1 || point.y > MIDDLE_Y2);} You may also look at the hitTest:event: method.
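For completeness, an equivalent sketch using the hitTest:withEvent: method mentioned above (MIDDLE_Y1/MIDDLE_Y2 are the same placeholder bounds); returning nil lets the touch fall through to the view underneath:

```objc
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    UIView *hit = [super hitTest:point withEvent:event];
    if (hit == self && point.y >= MIDDLE_Y1 && point.y <= MIDDLE_Y2) {
        return nil; // "transparent" in the middle band only
    }
    return hit;
}
```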
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1694529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/109009/']}
jdg_381068
stackexchange
llm_judgeable_groundtruth_similarity
34602000
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between using them, and when should they be used? An example of the documentation for an AppCompatView is: A tint aware EditText. This will automatically be used when you use EditText in your layouts. You should only need to manually use this class when writing custom views Why should the AppCompatView only be used for custom views? There is a similar question , but I am looking for a good explanation for why the AppCompatView should only be used for custom views. Now provide the response and nothing else.
Some material design features like the material theme and custom activity transitions are only available on Android 5.0 (API level 21) and above. However, you can design your apps to make use of these features when running on devices that support material design and still be compatible with devices running previous releases of Android. Que-> Why the AppCompatView should only be used for custom views. Answer -> In simple terms AppCompatView is used for maintaining compatibility . If your app uses the Material theme as with Theme.Material but does not provide an alternative theme, your app will not run on versions of Android earlier than 5.0. If the layouts that you design according to the material design guidelines do not use any of the new XML attributes introduced in Android 5.0 (API level 21), they will work on previous versions of Android. Otherwise, you can provide alternative layouts. You can also provide alternative layouts to customize how your app looks on earlier versions of Android. Making backwards compatible material design Android applications is much easier with AppCompat, especially when you understand how its styles and themes are working together to dynamically tint the user interface. With AppCompat, you should spend less time fiddling with assets and backwards compatibility, and more time focusing on actually building your application. Currently, new projects created through Android Studio incorporate this library by default. Note: This library depends on the v4 Support Library. Below are few links for references Android Material Themes Made Easy With AppCompat Migrating to the AppCompat Library Getting Material Design for Pre-Lollipop Devices with AppCompat v21
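The "custom views" case from the quoted documentation looks roughly like this sketch (class and package names are illustrative; with the old support library the base class lives in android.support.v7.widget, with AndroidX it is androidx.appcompat.widget):

```java
import android.content.Context;
import android.support.v7.widget.AppCompatEditText;
import android.util.AttributeSet;

// Only here do you extend the AppCompat variant yourself; in XML layouts a
// plain <EditText> is swapped for AppCompatEditText automatically.
public class PasswordEditText extends AppCompatEditText {
    public PasswordEditText(Context context, AttributeSet attrs) {
        super(context, attrs);
        // custom behaviour goes here; tint support is inherited
    }
}
```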
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34602000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1171130/']}
jdg_381069
stackexchange
llm_judgeable_groundtruth_similarity
5799090
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I remove all whitespace from a string and make all characters lowercase in python? Also, can I add this operation to the string prototype like I could in javascript? Now provide the response and nothing else.
How about an uncomplicated fast answer? No map , no for loops, ... >>> s = "Foo Bar " * 5>>> s'Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar '>>> ''.join(s.split()).lower()'foobarfoobarfoobarfoobarfoobar'>>> [Python 2.7.1] >python -mtimeit -s"s='The quick brown Fox jumped over the lazy dogs'" "''.join(c.lower() for c in s if not c.isspace())"100000 loops, best of 3: 11.7 usec per loop>python27\python -mtimeit -s"s='The quick brown Fox jumped over the lazy dogs'" "''.join( i.lower() for i in s.split() )"100000 loops, best of 3: 3.11 usec per loop>python27\python -mtimeit -s"s='The quick brown Fox jumped over the lazy dogs'" "''.join( map(str.lower, s.split() ) )"100000 loops, best of 3: 2.43 usec per loop>\python27\python -mtimeit -s"s='The quick brown Fox jumped over the lazy dogs'" "''.join(s.split()).lower()"1000000 loops, best of 3: 1 usec per loop
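On the second part of the question: built-in types such as str cannot be patched in place the way a JavaScript prototype can, so the usual substitute is a small helper function (or a str subclass), for example:

```python
def squash(s):
    return ''.join(s.split()).lower()

print(squash("Foo Bar "))   # foobar
```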
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5799090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84131/']}
jdg_381070
stackexchange
llm_judgeable_groundtruth_similarity
50994393
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I run php artisan migrate In Connection.php line 664: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client (SQL: select * from information_schema.tables where table_schema = aviandb and table_name = migrations) In Connector.php line 68: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client In Connector.php line 68: PDO::__construct(): The server requested authentication method unknown to the client [caching_sha2_password] How can I solve? Now provide the response and nothing else.
Your php mysql extension doesn't support the version of MySQL server you are running. I'm assuming you're running MySQL 8.0, which is new at the time of this post. You need to update or rebuild PHP with support for the latest version of MySQL, or downgrade your MySQL Server version. Another solution is to create a user with the mysql_native_password option. CREATE USER 'user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'yourpassword';GRANT ALL PRIVILEGES ON *.* TO 'user'@'localhost' WITH GRANT OPTION;
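If the account already exists, the same idea can be applied with ALTER USER instead of creating a new one (user name, host and password are placeholders):

```sql
ALTER USER 'user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'yourpassword';
```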
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50994393', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9975625/']}
jdg_381071
stackexchange
llm_judgeable_groundtruth_similarity
6932382
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to pass some selected text in vim to a curl command as a parameter value. For example. I need to be able to run curl -sSd css="body { border-radius: 5px; }" http://prefixr.com/api/index.php from vim. Obviously, the "body { border-radius: 5px; }" part will be dynamic. Usually, a visual mode selection in vim. How do I get the selected text and pass it as a parameter to curl? Now provide the response and nothing else.
You can use the :! command to filter selected text through an external program.The text is fed to stdin and substituted with the results from stdout. In this case you'll have to use cat and command substitution to feed the lines as a parameter to curl, like so: :'<,'>!curl -sSd css="`cat`" http://prefixr.com/api/index.php
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6932382', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/504318/']}
jdg_381072
stackexchange
llm_judgeable_groundtruth_similarity
2970
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can I get help on how to make frequency axis going from negative frequency to positive frequency, (in Hertz), that will be the x-axis in an FFT result, but given either an even length FFT, or odd length FFT. I am having some troubles making it in MATLAB. (Assume you know the sampling frequency f_s). Now provide the response and nothing else.
One approach is simply to calculate the frequency vector for the unshifted DFT output (i.e. what you would get directly from MATLAB's fft() function, without doing an fftshift() ), then remap the frequencies that correspond to locations on the negative side of the axis. Example: % assume input signal "x", sampling frequency "fs"% calculate FFTX = fft(x,Nfft);% calculate frequency spacingdf = fs / Nfft;% calculate unshifted frequency vectorf = (0:(Nfft-1))*df;% move all frequencies that are greater than fs/2 to the negative side of the axisf(f >= fs/2) = f(f >= fs/2) - fs;% now, X and f are aligned with one another; if you want frequencies in strictly% increasing order, fftshift() themX_normal_order = fftshift(X);f_normal_order = fftshift(f); The answer provided by learnvst should work also; this is just another way of thinking about it that doesn't require any special casing for even/odd DFT sizes.
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/2970', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/1329/']}
jdg_381073
stackexchange
llm_judgeable_groundtruth_similarity
3681291
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to find all references to an object in an SQL Server database. How can I quickly search? SQL Server Management Studio does not seem to do it. I use http://www.red-gate.com/products/SQL_Search/ but I'd like to find the "official" Microsoft solution to this. Is it in another product? For example, when I do a mass search in visual studio, I would like to be able to also find something in all stored procedures. Or maybe I'm not coding this the right way? Carl Now provide the response and nothing else.
Use: select object_name(m.object_id), m.* from sys.sql_modules m where m.definition like N'%name_of_object%' ...because SYSCOMMENTS and INFORMATION_SCHEMA.routines have nvarchar(4000) columns. So if "name_of_object" is used at position 3998, it won't be found. SYSCOMMENTS does have multiple lines, but INFORMATION_SCHEMA.routines truncates.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3681291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/186746/']}
jdg_381074
stackexchange
llm_judgeable_groundtruth_similarity
12555203
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm scraping a Facebook page with the PhantomJS node module ( https://github.com/sgentle/phantomjs-node ), but when I try evaluating the page, it does not evaluate the function I pass to it. Executing it in a standalone script and running it with the Node interpreter works.. The same code in an Express.js app does not work. This is my code facebookScraper.prototype.scrapeFeed = function (url, cb) { f = ':scrapeFeed:'; var evaluator = function (s) { var posts = []; for (var i = 0; i < FEED_ITEMS; i++) { log.info(__filename+f+' iterating step ' + i); log.info(__filename+f+util.inspect(document, false, null)); } return { news: posts }; } phantom.create(function (ph) { ph.createPage(function (page) { log.fine(__filename+f+' opening url ' + url); page.open(url, function (status) { log.fine(__filename+f+' opened site? ' + status); setTimeout(function() { page.evaluate(evaluator, function (result) { log.info(__filename+f+'Scraped feed: ' + util.inspect(result, false, null)); cb(result, ph); }); }, 5000); }); }); });}; The output I get: {"level":"fine","message":"PATH/fb_regular.js:scrapeFeed: opening url <URL> ","timestamp":"2012-09-23T18:35:10.151Z"}{"level":"fine","message":"PATH/fb_regular.js:scrapeFeed: opened site? success","timestamp":"2012-09-23T18:35:12.682Z"}{"level":"info","message":"PATH/fb_regular.js:scrapeFeed: Scraped feed: null","timestamp":"2012-09-23T18:35:12.687Z"} So, as you see, it calls the phantom callback function (second parameter in the evaluate function) with a null argument, but it doesn't execute the first parameter (my evaluator function, which prints iterating step X). Anyone knows what the problem is? Now provide the response and nothing else.
I'm unsure what version of PhantomJS you are using, but according to the documentation, from version 1.6 onward anything logged inside an evaluated script is logged in the context of the contained page. It will not show up in your console. To get that you have to bind logging to the page's onConsoleMessage event: page.onConsoleMessage = function (msg) { console.log(msg); }; As for the result not being available: the page.evaluate function takes arguments like so - the first one is a function to be executed and the rest are passed as input to that function. The result is returned directly: var title = page.evaluate(function (s) { return document.querySelector(s).innerText; }, 'title'); console.log(title);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12555203', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/602223/']}
jdg_381075
stackexchange
llm_judgeable_groundtruth_similarity
17114386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: void reversefunction( const char *argv2, const char *argv3){ FILE *stream1=NULL;FILE *stream2=NULL;byteone table[HEADERLENGTH];byteone numberofchannels;byteone movebytes;bytefour i;bytefour sizeofdata;bytefour var_towrite_infile;stream1=fopen(argv2,"rb");stream2=fopen(argv3,"wb+");if(stream1==NULL){ printf("\n.xX!- failed - to - open - file -!Xx.\n"); exit(0);}if(stream2==NULL){ printf("\n.xX!- failed - to - create - new - file -!Xx.\n"); exit(0);}printf(".xX!- %s - opened - success -!Xx.\n",argv2);fread(table,1,HEADERLENGTH,stream1); //here starts the problem numberofchannels=little_endian_to_bytefour((table+22),NUMCHANNELS);sizeofdata=little_endian_to_bytefour((table+40),SUBCHUNK2SIZE); //here ends the problem fwrite(table,1,HEADERLENGTH,stream2);movebytes=numberofchannels*2;i=sizeofdata;fseek(stream1,i,SEEK_SET);while(i>=0){ fread(&var_towrite_infile,4,movebytes,stream1); fwrite(&var_towrite_infile,4,movebytes,stream2); i=i-movebytes; fseek(stream1,i,SEEK_SET); printf("%d\n",i); printf("%d\n",sizeofdata); printf("%d\n",little_endian_to_bytefour((table+40),SUBCHUNK2SIZE)); printf("-------------\n");}fclose(stream1);fclose(stream2);return; } So, when i'm trying to pass in variables numberofchannels and sizeofdata the return value of function little_endian_to_bytefour it doesn't pass nothing.And when i print the return value, it prints it correct. So why this happens ? //screen of terminal ...00113920-------------00113920-------------00113920-------------... //end of screen terminal //additional imformation typedef unsigned char byteone;typedef unsigned short int bytetwo;typedef unsigned int bytefour;bytefour little_endian_to_bytefour(byteone *table, byteone bit_length){ bytefour number=0; if(bit_length==2){ number=table[1]; number<<=8; number|=table[0]; } else{ number=table[3]; number<<=8; number|=table[2]; number<<=8; number|=table[1]; number<<=8; number|=table[0]; } return number;} small example/* int myfunction(int var1, int var2){ int var3; var3=var1+var2 return var3;}int main(void){ int zaza1; zaza1=myfunction(2,3); printf("the number is %d",zaza1);return;} //terminal the number is 0 //end of terminal */ Now provide the response and nothing else.
It depends entirely on what you're going to do with it. The tv_nsec members of a struct timespec is of type long . You can set it to any value you like in the range LONG_MIN to LONG_MAX . If you perform a calculation that exceeds LONG_MAX , which is at least 2 31 -1, then you're going to have problems (undefined behavior that will probably show up as the value wrapping around). Whether setting it to a value less than 0, or greater than or equal to one billion, will cause a problem depends on what you do with it. If you just want to print it, or perform some calculations on it, any valid long value should be ok -- but the stored values are probably more useful if you normalize them. clock_gettime() should always give you a tv_nsec value in the range 0..999999999. POSIX requires the clock_settime() , clock_nanosleep() , and nanosleep() functions to fail, and set errno to EINVAL , if "The tp argument specified a nanosecond value less than zero or greater than or equal to 1000 million." References: http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_settime.html http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_nanosleep.html http://pubs.opengroup.org/onlinepubs/9699919799/functions/nanosleep.html
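If a calculation does leave you with an out-of-range intermediate value, a common fix is to normalize the struct before handing it to those functions; a small sketch:

```c
#include <time.h>

/* Bring tv_nsec back into [0, 1000000000) by carrying into tv_sec. */
static void timespec_normalize(struct timespec *t)
{
    while (t->tv_nsec >= 1000000000L) { t->tv_nsec -= 1000000000L; t->tv_sec++; }
    while (t->tv_nsec < 0)            { t->tv_nsec += 1000000000L; t->tv_sec--; }
}
```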
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17114386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2487076/']}
jdg_381076
stackexchange
llm_judgeable_groundtruth_similarity
3040894
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I add a UITableView into my View-Based Application where the user will tap on more than one cell, and it will become selected, exactly like the Clock app's "New Alarm" setting named "Repeat" (Clock>Alarms> + >Repeat), and how can I get all of the selected cells in an array? Now provide the response and nothing else.
For multiple selection, add the line below in viewDidLoad() tableView.allowsMultipleSelection = true Configure each cell after dequeuing (or initializing) it in tableView(_:cellForRowAt:) let selectedIndexPaths = tableView.indexPathsForSelectedRowslet rowIsSelected = selectedIndexPaths != nil && selectedIndexPaths!.contains(indexPath)cell.accessoryType = rowIsSelected ? .checkmark : .none// cell.accessoryView.hidden = !rowIsSelected // if using a custom image Update each cell when it's selected/deselected override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { let cell = tableView.cellForRow(at: indexPath)! cell.accessoryType = .checkmark // cell.accessoryView.hidden = false // if using a custom image}override func tableView(_ tableView: UITableView, didDeselectRowAt indexPath: IndexPath) { let cell = tableView.cellForRow(at: indexPath)! cell.accessoryType = .none // cell.accessoryView.hidden = true // if using a custom image} When you're done, get an array of all the selected rows let selectedRows = tableView.indexPathsForSelectedRows and get the selected data, where dataArray maps to the rows of a table view with only 1 section let selectedData = selectedRows?.map { dataArray[$0.row].ID }
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3040894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296794/']}
jdg_381077
stackexchange
llm_judgeable_groundtruth_similarity
22199429
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to design a report page with a different layout for printing to mobile. I am using bootstrap v3. It seems the grid can't differentiate between the two as the breakpoint for printing is the same as the breakpoint for mobile (xs) For example: In the below test html my printed page (or print preview) shows the xs6 columns side by side but the sm6 columns stacked. There isn't a breakpoint between xs and sm. Surely my printed page is wider than my mobile viewport so shouldn't it use the sm layout? Am I doing something wrong or is this the way it is? Is there a defined viewport width for printing? <!doctype html><html lang="en"><head> <meta charset="UTF-8"> <title>Test</title> <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css"></head><body> <div class="container"> <div class="row"> <div class="col-xs-6"> xs6 </div> <div class="col-xs-6"> xs6 </div> </div> <div class="row"> <div class="col-sm-6"> sm6 </div> <div class="col-sm-6"> sm6 </div> </div> </div></body></html> Now provide the response and nothing else.
What I did was to manually recreate those columns classes in my print css. .col-print-1 {width:8%; float:left;}.col-print-2 {width:16%; float:left;}.col-print-3 {width:25%; float:left;}.col-print-4 {width:33%; float:left;}.col-print-5 {width:42%; float:left;}.col-print-6 {width:50%; float:left;}.col-print-7 {width:58%; float:left;}.col-print-8 {width:66%; float:left;}.col-print-9 {width:75%; float:left;}.col-print-10{width:83%; float:left;}.col-print-11{width:92%; float:left;}.col-print-12{width:100%; float:left;} Then I just use those classes like I use bootstrap classes to make my columns for print only. I also created .visible-print and .hidden-print to hide/show elements only in the print version. It still needs some work, but that quick patch helped me a lot.
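One refinement worth considering (a sketch, not part of the original snippet): wrap the rules in a print media query so they only take effect on paper, alongside the visibility helpers mentioned; the remaining widths follow the same pattern.

```css
@media print {
    .col-print-6   { width: 50%;  float: left; }
    .col-print-12  { width: 100%; float: left; }
    .visible-print { display: block !important; }
    .hidden-print  { display: none  !important; }
}
```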
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22199429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1996768/']}
jdg_381078
stackexchange
llm_judgeable_groundtruth_similarity
361754
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've been thinking about data integrity, I currently backup about 2tb of data and always have one backup of data from a year ago. My concern is if a file became corrupt on our production file server no one would notice because some files aren't accessed for many years and if corruption occurred I'd be backing up a corrupted file. I'm not sure how I should handle this problem, is there a way to detect data corruption? Or is the only solution to store older backups in case something becomes corrupted and isn't noticed? Now provide the response and nothing else.
In my experience, each file type needs its own checks to determine if something is indeed corrupt. Data is just dots and dashes at its heart, and what determines "corruption" is entirely file dependent. You will need to determine what file types are most important, and then determine if it is reasonably possible to create automation that checks for the file type's consistency. That will be a daunting task as file type specifications change over time and as you encounter proprietary formats that have no easy way to programmatically determine corruption. Furthermore, data corruption is only a part of the problem. Sometimes files can be wrong from a human perspective, but consistent from a data structure perspective. If someone mistakenly edits a file - the data is fine from a corruption standpoint. Ultimately you need to sit down with the leadership of the business and determine what the most important data assets are for the company. Then determine how long those need to be retained and with what level of recall. Do they want fine-grained point-in-time recovery to four years in the past? Maybe only for certain files but not for others? Considering that you only have 2TB to back up, a GFS tape backup scheme using LTO4 cartridges can allow you to reach back many years with relatively few tapes. This is, of course, entirely dependent on data churn. If you have a lot of busy bits, then you'll have more tapes. Still, 2TB is a relative speck that LTO4 or even commodity disk storage would yawn at to keep a few years of data around. It's not an easy task to protect digital assets. Keep the Tums handy.
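For plain bit-rot detection (as opposed to the "humanly wrong but structurally valid" case above), a low-tech approach is a checksum manifest that you re-verify before each backup run; a sketch with GNU coreutils, paths being placeholders:

```bash
# Build the manifest once (or refresh it after intentional changes):
find /data -type f -print0 | xargs -0 sha256sum > /var/backups/manifest.sha256

# Before each backup, report any file whose contents changed silently:
sha256sum --quiet --check /var/backups/manifest.sha256
```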
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/361754', 'https://serverfault.com', 'https://serverfault.com/users/93730/']}
jdg_381079
stackexchange
llm_judgeable_groundtruth_similarity
116054
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm not asking about converting a LaTeX document to html. What I'd like to be able to do is have some way to use LaTeX math commands in an html document, and have it appear correctly in a browser. This could be done server or client side. Now provide the response and nothing else.
MediaWiki can do what you are looking for. It uses Texvc ( http://en.wikipedia.org/wiki/Texvc ) which "validates (AMS) LaTeX mathematical expressions and converts them to HTML, MathML, or PNG graphics." Sounds like what you are looking for. Check out Wikipedia's article on how they handle math equations here: http://en.wikipedia.org/wiki/Help:Formula . They also have an extensive reference on LaTeX and pros/cons of the different rendering types (PNG/MathML/HTML). MediaWiki uses a subset of TeX markup, including some extensions from LaTeX and AMS-LaTeX, for mathematical formulae. It generates either PNG images or simple HTML markup, depending on user preferences and the complexity of the expression. In the future, as more browsers are smarter, it will be able to generate enhanced HTML or even MathML in many cases. (See blahtex for information about current work on adding MathML support.) More precisely, MediaWiki filters the markup through Texvc, which in turn passes the commands to TeX for the actual rendering. Thus, only a limited part of the full TeX language is supported; see below for details. ... Pros of HTML In-line HTML formulae always align properly with the rest of the HTML text. The formula's background, font size and face match the rest of HTML contents and the appearance respects CSS and browser settings. Pages using HTML will load faster. Pros of TeX TeX is semantically superior to HTML. In TeX, "x" means "mathematical variable x", whereas in HTML "x" could mean anything. Information has been irrevocably lost. This has multiple benefits: TeX can be transformed into HTML, but not vice-versa. This means that on the server side we can always transform a formula, based on its complexity and location within the text, user preferences, type of browser, etc. Therefore, where possible, all the benefits of HTML can be retained, together with the benefits of TeX. It's true that the current situation is not ideal, but that's not a good reason to drop information/contents. It's more a reason to help improve the situation. TeX can be converted to MathML for browsers which support it, thus keeping its semantics and allowing it to be rendered as a vector. TeX has been specifically designed for typesetting formulae, so input is easier and more natural, and output is more aesthetically pleasing. When writing in TeX, editors need not worry about browser support, since it is rendered into an image by the server. HTML formulae, on the other hand, can end up being rendered inconsistent of editor's intentions (or not at all), by some browsers or older versions of a browser.
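For reference, the wiki-markup side of this is just a math tag around ordinary TeX, for example:

```
<math>\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}</math>
```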
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/116054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16970/']}
jdg_381080
stackexchange
llm_judgeable_groundtruth_similarity
1891857
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to make this image stretch to the maximum size possible without overflowing it's <div> or skewing the image. I can't predict the aspect-ratio of the image, so there's no way to know whether to use: <img src="url" style=" width : 100%;"> or <img src="url" style=" height : 100%;"> I can't use both (i.e. style="width: 100%; height: 100%;") because that will stretch the image to fit the <div> . The <div> has a size set by percentage of the screen, which is also unpredictable. Now provide the response and nothing else.
Update 2016: Modern browser behave much better. All you should need to do is to set the image width to 100% ( demo ) .container img { width: 100%;} Since you don't know the aspect ratio, you'll have to use some scripting. Here is how I would do it with jQuery ( demo ): CSS .container { width: 40%; height: 40%; background: #444; margin: 0 auto;}.container img.wide { max-width: 100%; max-height: 100%; height: auto;}.container img.tall { max-height: 100%; max-width: 100%; width: auto;} HTML <div class="container"> <img src="http://i48.tinypic.com/wrltuc.jpg" /></div><br /><br /><div class="container"> <img src="http://i47.tinypic.com/i1bek8.jpg" /></div> Script $(window).load(function(){ $('.container').find('img').each(function(){ var imgClass = (this.width/this.height > 1) ? 'wide' : 'tall'; $(this).addClass(imgClass); })})
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1891857', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120888/']}
jdg_381081
stackexchange
llm_judgeable_groundtruth_similarity
22965106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am running a large data feed into Mongo that I need to have an external client connect to and tap into the feed from the last available record - not anything older than current. I have a tailable cursor that works just fine, but it starts at the beginning of the table, and I'd like to have it start at the most recently inserted record. I know how to do this, presuming I have the _ID of the last inserted record. My problem is that I can't get findOne working properly in Java to bring back the last inserted record. Put simply, I need the Java equivalent of this Mongo JS command: db.market.findOne( {$query:{}, $orderby:{$natural:-1}} ) There are a couple of posts on here that I've found that seem similar, but they are assuming that the client is also the one inserting the records and already have knowledge of the last available ID. Anyways, what would the proper corresponding Java code be to accomplish the same thing - that is getting the _ID of the last available record? I guess as an alternative I could have my client insert a throwaway record, get that ID, and start from there, but I'd prefer to do this the right way. Thanks Now provide the response and nothing else.
To be clear, natural order is not insertion order, except in the case of capped collections. You will need another criteria to sort by. Assuming you are using the default ObjectID, you can use this as a metric for insertion as the default value starts with the insertion time (to the millisecond) and is always unique. You should also use a find, rather than a findOne. Try the following: db.market.find({}).sort({_id:-1}).limit(1)
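Since the question asked for the Java equivalent, here is a rough sketch using the legacy Java driver of that era (host, port and database name are placeholders, not from the original post):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import org.bson.types.ObjectId;

public class LastInsertedId {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017); // hypothetical host/port
        DB db = client.getDB("mydb");                             // hypothetical database name
        DBCollection market = db.getCollection("market");

        // Sort by _id descending and take one document: the most recently inserted record.
        DBObject last = market.find()
                              .sort(new BasicDBObject("_id", -1))
                              .limit(1)
                              .next();                            // assumes the collection is not empty
        ObjectId lastId = (ObjectId) last.get("_id");
        System.out.println("Start tailing after _id: " + lastId);
        client.close();
    }
}
```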
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22965106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/520829/']}
jdg_381082
stackexchange
llm_judgeable_groundtruth_similarity
24070922
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Speaking as a new Go enthusiast trying to work with the Go way of error handling. To be clear - I like exceptions. I have a server that accepts a connection, processes a set of requests and replies to them. I found that I can do if err != nil{ panic(err)} in the deep down processing code and have defer func() { if err := recover(); err != nil { log.Printf("%s: %s", err, debug.Stack()) // line 20 } }() in the client connection code (each connection is in a goroutine). This nicely wraps everything up, forcefully closes the connection (other defers fire) and my server continues to hum along. But this feels an awful lot like a throw/catch scenario - which golang states it doesn't support. Questions: is this stable, i.e. is recovering from a panic an OK thing to do as an ongoing way of life (not just to slightly defer an immediate shutdown)? I looked for a discussion on this topic and did not find it anywhere - any pointers? I feel that the answer is 'yes it works' and can be used inside your own code, but panic should NOT be used by a library intended for wider use. The standard and polite way for a library to behave is by error returns. Now provide the response and nothing else.
Yes, you can do what you suggest. There are some situations within the standard packages where panic/recover is used for handling errors. The official Go blog states: For a real-world example of panic and recover , see the json package from the Go standard library. It decodes JSON-encoded data with a set of recursive functions. When malformed JSON is encountered, the parser calls panic to unwind the stack to the top-level function call, which recovers from the panic and returns an appropriate error value (see the 'error' and 'unmarshal' methods of the decodeState type in decode.go). Some pointers: Use error for your normal use cases. This should be your default. If your code would get clearer and simpler by using a panic / recover (such as with a recursive call stack), then use it for that particular case. Never let a package leak panics. Panics used within a package should be recovered within the package and returned as an error. Recovering from a panic is stable. Don't worry about continuing execution after a recover. You can see such behavior in standard library such as with the net/http package which recovers from panics within handlers to prevent the entire http server to go crash when panicing on a single request.
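To make the "never let a package leak panics" point concrete, here is a minimal sketch (not from the original answer) of the pattern the json package uses — panic deep inside, recover at the package boundary, return an error to callers:

```go
package parser

import "fmt"

// Parse recovers any internal panic and converts it into an error,
// so callers of the package only ever see normal error returns.
func Parse(input string) (result string, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("parse failed: %v", r)
		}
	}()
	return mustParse(input), nil
}

// mustParse panics on malformed input; it is only ever called inside the package.
func mustParse(input string) string {
	if input == "" {
		panic("empty input")
	}
	return input
}
```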
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24070922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/173397/']}
jdg_381083
stackexchange
llm_judgeable_groundtruth_similarity
3076
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So as opposed to the normal structure using a reference temperature and HDD/CDD, I'm looking at pricing a call option with a structure similar to the following: Daily option on maximum daily temperature over a particular threshold where said temperature maps to an incrementally increasing quantity to use when calculating payout against a price index average during a particular timeframe. There is a "strike price" in that there is no payout unless the average price exceeds a threshold as well. There is a daily and aggregate maximum payout (where this gets complicated for me). So, for example: Day 1: Max temp = 101 Corresponding quantity = 200 Avg price = 700 dollars per unit Payout = 700*200 = 14,000 Day 2: Max temp = 102 Corresponding quantity = 300 Avg price = 800 dollars per unit Payout = 800*300 = 21,000 Day 3: Max temp = 98 (does not exceed temperature strike - would not exercise) Corresponding quantity = 0 Avg price = 50 dollars per unit Payout = 0*50 = 0 Day 4: Max temp = 110 Corresponding quantity = 1000 Avg price = 2000 dollars per unit Payout = 1000*2000 = 2,000,000 -> payout max of 500,000 = 500,000 Also recall that as we proceed through the contract period, there is some aggregate payout max as well per contract. Any thoughts on how to think about this from a pricing perspective? Now provide the response and nothing else.
The common approach to temperature derivatives in their first run of popularity (in the late 1990's) was to use an Ornstein-Uhlenbeck process to describe deviations of temperature from a seasonal average. So far as I know, no major innovations have arisen since then. Calibrating such a model is very simple, and so is valuing certain quantities such as degree day calls. Your payoff is complex enough that you will need to price it using Monte Carlo simulation instead.
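As a rough illustration only (every number below is an invented placeholder, not calibrated to the contract in the question), a Monte Carlo valuation of such a payoff might look like this: simulate the daily max temperature as a seasonal mean plus an Ornstein-Uhlenbeck deviation, map each day's temperature to a quantity, and apply the daily and aggregate caps path by path.

```python
import numpy as np

def simulate_payoff(n_paths=10_000, n_days=30, seed=0):
    rng = np.random.default_rng(seed)
    theta, sigma = 0.3, 2.0            # assumed OU mean-reversion speed and volatility
    seasonal_mean = 95.0               # assumed seasonal average of the daily max temperature
    temp_strike = 100.0                # temperature threshold
    price_strike = 500.0               # average-price threshold
    daily_cap, total_cap = 500_000.0, 5_000_000.0

    payoffs = np.zeros(n_paths)
    for p in range(n_paths):
        dev, total = 0.0, 0.0
        for _ in range(n_days):
            dev += theta * (0.0 - dev) + sigma * rng.standard_normal()  # OU step, dt = 1 day
            temp = seasonal_mean + dev
            price = max(rng.normal(600.0, 150.0), 0.0)  # placeholder price model
            if temp > temp_strike and price > price_strike:
                qty = 200.0 + 100.0 * (temp - temp_strike - 1.0)  # assumed quantity schedule
                total += min(qty * price, daily_cap)
        payoffs[p] = min(total, total_cap)
    return payoffs.mean()  # discounting omitted for brevity

print(simulate_payoff())
```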
{}
{'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/3076', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/2176/']}
jdg_381084
stackexchange
llm_judgeable_groundtruth_similarity
10578246
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I use the viewDidLoad method in a UITableViewCell? Now provide the response and nothing else.
No, you don't write viewDidLoad in a custom cell class subclassing UITableViewCell (it's part of UIViewController). You have a method called -(void)layoutSubviews{ [super layoutSubviews];} wherein you can define frames and the like for the custom cell's controls. Refer to Apple's UITableViewCell reference. Note however that 'viewDidLoad' is called only once in the lifetime of the object; it is rather like an initializer in general OO programming. However, 'layoutSubviews' will be called many times on each cell (depending on issues like scrolling and so on). It's important to realize that for this reason many of the things you "usually do" in viewDidLoad, you cannot do in layoutSubviews. Note that viewDidLoad is called only once; layoutSubviews is called often. It will just be a simple function if you write it. Tutorial for custom cell
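For one-time setup of the kind you would normally put in viewDidLoad, a common place in a nib/storyboard-loaded cell is awakeFromNib — a rough Objective-C sketch, not from the original answer:

```objc
#import <UIKit/UIKit.h>

@interface MyCustomCell : UITableViewCell   // hypothetical subclass name
@end

@implementation MyCustomCell

- (void)awakeFromNib {
    [super awakeFromNib];
    // Runs once when the cell is loaded from the nib/storyboard:
    // good for one-time configuration (fonts, colors, gesture recognizers).
    self.contentView.backgroundColor = [UIColor whiteColor];
}

- (void)layoutSubviews {
    [super layoutSubviews];
    // Runs many times (scrolling, rotation): keep this limited to frame math.
    self.textLabel.frame = CGRectInset(self.contentView.bounds, 15.0, 5.0);
}

@end
```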
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10578246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1355379/']}
jdg_381085
stackexchange
llm_judgeable_groundtruth_similarity
3556687
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have a QCheckBox object, when user checks it or removes check we want to call a function so we connect our function to stateChanged ( int state ) signal. On the other hand, according to some condition we also change the state of QCheckBox object inside code, and this causes the unwanted signal. Is there any way to prevent firing signal under some conditions? Now provide the response and nothing else.
You can use the clicked signal because it is only emitted when the user actually clicked the check box, not when you manually check it using setChecked . If you just don't want the signal to be emitted at one specific time, you can use QObject::blockSignals like this: bool oldState = checkBox->blockSignals(true);checkBox->setChecked(true);checkBox->blockSignals(oldState); The downside of this approach is that all signals will be blocked. But I guess that doesn't really matter in case of a QCheckBox .
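Newer Qt releases (5.3 and later) also ship an RAII helper, QSignalBlocker, which restores the previous blocking state automatically on scope exit — a hedged sketch, assuming checkBox is your QCheckBox pointer:

```cpp
#include <QCheckBox>
#include <QSignalBlocker>

void setCheckedSilently(QCheckBox *checkBox, bool checked)
{
    // Signals are blocked for the lifetime of this object, and the
    // previous blockSignals() state is restored when it goes out of scope.
    const QSignalBlocker blocker(checkBox);
    checkBox->setChecked(checked);
}
```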
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3556687', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/311762/']}
jdg_381086
stackexchange
llm_judgeable_groundtruth_similarity
4954894
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have stack-allocated value types, what cleans them up? The GC only manages the heap, right? Now provide the response and nothing else.
If the values are on the stack, then when the current stack frame is "popped" (by the method returning) the values are effectively cleaned up... there won't be any finalizers called or anything like that, but the memory used up by those values will be reused next time the stack gets that high. Note that value types aren't always allocated on the stack, however. In particular, they're often allocated as part of other objects (e.g. a class with an int member) or they can be boxed.
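A small illustration of those two cases (a value type embedded in a class, and boxing) — a sketch, not from the original answer:

```csharp
struct Point { public int X; public int Y; }

class Shape
{
    public Point Origin;   // lives inside the Shape object on the managed heap,
                           // so it is reclaimed when the Shape is collected
}

class Demo
{
    static void Main()
    {
        Point local = new Point { X = 1, Y = 2 };  // typically on the stack; gone when Main's frame pops
        object boxed = local;                      // boxing copies the value onto the heap; the GC owns that copy
        System.Console.WriteLine(((Point)boxed).X);
    }
}
```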
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4954894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/438168/']}
jdg_381087
stackexchange
llm_judgeable_groundtruth_similarity
49083535
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following array in PHP: [ { "website": "example", "url": "example.com" }, { "website": "example", "url": "example.com" }] Now I would like to convert this to a collection so I sort by the keys website or url . However when I do this: $myArray = collect(websites); I get this instead: { "0": { "website": "example", "url": "example.com" }, "1": { "website": "example", "url": "example.com" } } And the sorting does not work, I would like to know what I am doing wrong and how I can fix it so I have an array collection of objects I can easily sort. Edit: I expect the output to be the same as this: [ { "website": "example", "url": "example.com" }, { "website": "example", "url": "example.com" }] By "sorting does not work" I meant the items are not sorted. Now provide the response and nothing else.
Edit; I understand this question is getting a lot of hits based on the title so the TLDR for those people is to use the collect() helper to create a Collection instance. In answer to the questioner's brief: If you have $collection = collect([ (object) [ 'website' => 'twitter', 'url' => 'twitter.com' ], (object) [ 'website' => 'google', 'url' => 'google.com' ]]); You then have your array wrapped in an instance of the Collection class.That means it does not behave like a typical array (- it will be array-like, but don't treat it like it is one -) until you call all() or toArray() on it. To remove any added indices you need to use values() . $sorted = $collection->sortBy('website');$sorted->values()->all(); The expected output: [ {#769 +"website": "google", +"url": "google.com", }, {#762 +"website": "twitter", +"url": "twitter.com", },] See the docs https://laravel.com/docs/5.1/collections#available-methods The toArray method converts the collection into a plain PHP array. If the collection's values are Eloquent models, the models will also be converted to arrays. The all method returns the underlying array represented by the collection.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/49083535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3718908/']}
jdg_381088
stackexchange
llm_judgeable_groundtruth_similarity
628437
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions. I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.: var families = from family in entities.Family.Include("Person") orderby family.PrimaryLastName, family.Tag select family; Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data, e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes: familyOC = new ObservableCollection<Family>(families.ToList()); I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database. familyCVS.Source = familyOC;familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter);familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending));familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending)); I then bind the various controls and what-not to that CollectionViewSource: <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList" ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}" IsSynchronizedWithCurrentItem="True" ItemTemplate="{StaticResource familyTemplate}" SelectionChanged="familyList_SelectionChanged" /> When I need to add or delete records/objects, I manually do so from both the entity data model, and the ObservableCollection: private void DeletePerson(Person person){ entities.DeleteObject(person); entities.SaveChanges(); personOC.Remove(person);} I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. 
If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. As I've mentioned elsewhere , entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language, that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in WPF page (say, for event-handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently? Now provide the response and nothing else.
You need to implement a repository pattern to separate WPF concerns from EF. Then you can use generics to reduce the complexity of the EF-to-CollectionViewSource handling. A well-designed repository should reduce code levels and enable any ORM to be substituted (required for decent testing). Some ideas for this are here: http://blog.nicktown.info/2008/12/10/using-a-collectionviewsource-to-display-a-sorted-entitycollection.aspx
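A minimal sketch of what such a generic repository might look like (names and details are illustrative, not prescriptive; the EF-backed class uses the EF4-style ObjectContext/ObjectSet API — earlier EF versions would fall back to CreateQuery/AddObject instead):

```csharp
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;

public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Remove(T entity);
    void SaveChanges();
}

// One possible EF-backed implementation; swap in an in-memory fake for unit tests.
public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly ObjectContext _context;
    private readonly ObjectSet<T> _set;

    public EfRepository(ObjectContext context)
    {
        _context = context;
        _set = context.CreateObjectSet<T>();
    }

    public IEnumerable<T> GetAll() { return _set.ToList(); }
    public void Add(T entity)      { _set.AddObject(entity); }
    public void Remove(T entity)   { _set.DeleteObject(entity); }
    public void SaveChanges()      { _context.SaveChanges(); }
}
```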
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/628437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68231/']}
jdg_381089
stackexchange
llm_judgeable_groundtruth_similarity
219306
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have studied some of Einstein's Theory of General Relativity, and I understand that it states that gravity isn't a force but rather the effects of objects curving space-time. If this is true, then why are we instructed in middle school that it is a force? Now provide the response and nothing else.
Because Newtonian gravity , where it indeed is considered a force, is a good enough approximation to the situations you consider in middle school (and beyond). General relativistic effects are very weak at the ordinary scales we humans look at, and it would be overkill to introduce the full-blown machinery of general relativity (which demands a considerably more advanced mathematical treatment than ordinary Newtonian forces) to treat situations where the error incurred by just using the Newtonian version is negligible. Additionally, even in the general relativistic treatment you might still consider the effect on moving particles to be a "force", just like you can consider the centrifugal force to be a fictitious force that appears in rotating coordinate systems, see also the answers to Why do we still need to think of gravity as a force?
{}
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/219306', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/96908/']}
jdg_381090
stackexchange
llm_judgeable_groundtruth_similarity
32563123
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 4 Fonts I need to use on a website and i have there files in my website folder Baskerville.ttcBellGothicstd-Black.otfBellGothicstd-Bold.otfJennaSue.ttf I have tried to Import the using @Import and The fonts still do not work here is what I used: @import url(../fonts/BellGothicStd-Black.otf);@import url(../fonts/BellGothicStd-Bold.otf);@import url(../fonts/Baskerville.ttc);@import url(../fonts/JennaSue.ttf); I also tried to use the @font-face Rule this is what I used: @font-face { font-family: 'BellGothicBlack'; src: url('../fonts/BellGothic-Black.otf') format('OpenType'),}@font-face { font-family: 'BellGothicBold'; src: url('../fonts/BellGothicStd-Bold.otf') format('OpenType'),}@font-face { font-family: 'Baskerville'; src: url('../fonts/Baskerville.ttc') format('OpenType'),}@font-face { font-family: 'JennaSue'; src: url('../fonts/JennaSue.ttf') format('TrueType'),} Could someone tell me what I'm doing wrong? I think I might be missing some code I'm not really sure. Thanks in AdvanceTom Now provide the response and nothing else.
You will need to convert the font into the correct formats for all browsers to display them.. (check rights before you do this) http://www.fontsquirrel.com/tools/webfont-generator Your @font-face rule will also need to include all the font types... Example: @font-face { font-family: 'MyWebFont'; src: url('webfont.eot'); /* IE9 Compat Modes */ src: url('webfont.eot?#iefix') format('embedded-opentype'), /* IE6-IE8 */ url('webfont.woff2') format('woff2'), /* Super Modern Browsers */ url('webfont.woff') format('woff'), /* Pretty Modern Browsers */ url('webfont.ttf') format('truetype'), /* Safari, Android, iOS */ url('webfont.svg#svgFontName') format('svg'); /* Legacy iOS */}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32563123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2993494/']}
jdg_381091
stackexchange
llm_judgeable_groundtruth_similarity
38203352
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have a DataFrame such that: pd.DataFrame( {"name" : "John", "days" : [[1, 3, 5, 7]] }) gives this structure: days name0 [1, 3, 5, 7] John How do I expand it to the following? days name0 1 John1 3 John2 5 John3 7 John Now provide the response and nothing else.
You could use df.itertuples to iterate through each row, and use a list comprehension to reshape the data into the desired form: import pandas as pddf = pd.DataFrame( {"name" : ["John", "Eric"], "days" : [[1, 3, 5, 7], [2,4]]})result = pd.DataFrame([(d, tup.name) for tup in df.itertuples() for d in tup.days])print(result) yields 0 10 1 John1 3 John2 5 John3 7 John4 2 Eric5 4 Eric Divakar's solution , using_repeat , is fastest: In [48]: %timeit using_repeat(df)1000 loops, best of 3: 834 µs per loopIn [5]: %timeit using_itertuples(df)100 loops, best of 3: 3.43 ms per loopIn [7]: %timeit using_apply(df)1 loop, best of 3: 379 ms per loopIn [8]: %timeit using_append(df)1 loop, best of 3: 3.59 s per loop Here is the setup used for the above benchmark: import numpy as npimport pandas as pdN = 10**3df = pd.DataFrame( {"name" : np.random.choice(list('ABCD'), size=N), "days" : [np.random.randint(10, size=np.random.randint(5)) for i in range(N)]})def using_itertuples(df): return pd.DataFrame([(d, tup.name) for tup in df.itertuples() for d in tup.days])def using_repeat(df): lens = [len(item) for item in df['days']] return pd.DataFrame( {"name" : np.repeat(df['name'].values,lens), "days" : np.concatenate(df['days'].values)})def using_apply(df): return (df.apply(lambda x: pd.Series(x.days), axis=1) .stack() .reset_index(level=1, drop=1) .to_frame('day') .join(df['name']))def using_append(df): df2 = pd.DataFrame(columns = df.columns) for i,r in df.iterrows(): for e in r.days: new_r = r.copy() new_r.days = e df2 = df2.append(new_r) return df2
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38203352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/277113/']}
jdg_381092
stackexchange
llm_judgeable_groundtruth_similarity
13757303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to integrate Pusher with my web application that uses backbone.js. I'm following the Pusher with Backbone guide in the Pusher docs. So I'm seeing this error pop up in the web console on application start up Pusher : Error : {"type":"PusherError","data":{"code":4005,"message":"Path not found"}} What is this 'path' that cannot be found? I found nothing in the Pusher docs. Now provide the response and nothing else.
A 4005 error generally means that the WebSocket URL you are using doesn't identify an application to connect to. In terms of usage of the Pusher JavaScript library this means you've likely supplied an empty string as the app_key to the Pusher constructor. var pusher = new Pusher(''); You can see this in action here: http://jsbin.com/evulaj/1/edit Open the JavaScript console to see the error. You can also check the app_key value set by checking pusher.key . Note: I appreciate this error is a bit cryptic. I'll see if we can remedy this
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13757303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1870055/']}
jdg_381093
stackexchange
llm_judgeable_groundtruth_similarity
904693
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find $\dfrac{\mathrm d^9}{\mathrm dx^9}(x^8\ln x)$. I know how to solve this problem by repeatedly using the product rule, but I was wondering if there is a short cut. Thanks. Now provide the response and nothing else.
The derivative of $x^n\log x$ is $x^{n-1}(n\log x+1)$ by the product rule. The first derivative is: $x^7+8x^7\log x$. Edit: For further steps note that we can ignore the $x^n$ terms (they will die out after $9$ derivatives) So the second derivative (with the $7x^6$ ignored) is: $8 \cdot x^6(7\log x +1)$ Ignoring the $x^n$ term here again yields $56x^6 \log x$ Since this will go on $9$ times, we will eventually have to take the derivative of $8!\log x$ which will yield $\frac{8!}{x}$ As noted in the comments, this method would be cumbersome if the $x^n$ terms did not die out.
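Carrying that observation through all nine derivatives (eight peels of the product rule, each contributing one factor, then one final derivative of $\ln x$) gives, as a short LaTeX summary of the argument above:

$$\frac{d^9}{dx^9}\left(x^8\ln x\right)=8\,\frac{d^8}{dx^8}\left(x^7\ln x\right)=8\cdot 7\,\frac{d^7}{dx^7}\left(x^6\ln x\right)=\cdots=8!\,\frac{d}{dx}\ln x=\frac{8!}{x}=\frac{40320}{x}.$$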
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/904693', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/78886/']}
jdg_381094
stackexchange
llm_judgeable_groundtruth_similarity
1360247
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I represent "If A then B" in a diagram? I thought it would be a simple subset like $A ⊂ B$. However, this material says that If $A$ then $B$ $=$ $A^c ∪ B$. Now I am confused. Now provide the response and nothing else.
You want to construct the set $\;\{x\mid x\in A \to x\in B\}\;$. Then by implication equivalence this is $\;\{x\mid x\not\in A \vee x\in B\}\;$. Which is simply $\;A^\complement \cup B\;$. This is the set of all elements that, if they're in A then they're in B The statement $A\subseteq B$ is not a set. It is a relation. It is the statement that $y\in A \implies y\in B$. In the specific case that $A$ is a subset of $B$, then there is no element that is not in $A^\complement \cup B$. So if you wanted to represent the statement "if $A$ then $B$", you could have $A$ as a subset of $B$. But if you wanted to represent all elements that "if in $A$ then in $B$" you would use the union: $A^\complement\cup B$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1360247', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/226536/']}
jdg_381095
stackexchange
llm_judgeable_groundtruth_similarity
33776878
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a script for my oracle DB , which drops tables . If the table does not exist, the script won't exit as fail, just print a text: " does not exists ". The script is the following: BEGIN EXECUTE IMMEDIATE 'DROP TABLE mytable'; DBMS_Output.Put_Line(' table dropped');EXCEPTION WHEN OTHERS THEN IF SQLCODE = -942 THEN DBMS_Output.Put_Line(' table not exists'); ELSE DBMS_Output.Put_Line(' Unknown exception while dropping table'); RAISE; END IF;END; I want to drop a lot of table in one script, and I don't want to write these lines more than once. Is there any way, to write it to a procedure or function which gets a parameter (the name of the table), and call this procedure in that script? Maybe something like this: drop_table_procedure('mytableA');drop_table_procedure('mytableB'); Or maybe a procedure, which gets an undefined size list (like in java: String ... table names ): drop_tables_procedure('mytableA','mytableB'); Please give me some examples.Thanks! Now provide the response and nothing else.
Yes, you can declare a "temporary" procedure in an anonymous PL/SQL block: DECLARE PROCEDURE drop_if_exists(p_tablename VARCHAR) IS BEGIN EXECUTE IMMEDIATE 'DROP TABLE '||p_tablename; DBMS_Output.Put_Line(' table dropped'); EXCEPTION WHEN OTHERS THEN IF SQLCODE = -942 THEN DBMS_Output.Put_Line(' table not exists'); ELSE DBMS_Output.Put_Line(' Unknown exception while dropping table'); RAISE; END IF; END;BEGIN drop_if_exists('TABLE_1'); drop_if_exists('TABLE_2');END;/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33776878', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/644686/']}
jdg_381096
stackexchange
llm_judgeable_groundtruth_similarity
46078151
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a large data set with many columns containing dates in two different formats: "1996-01-04" "1996-01-05" "1996-01-08" "1996-01-09" "1996-01-10" "1996-01-11" and "02/01/1996" "03/01/1996" "04/01/1996" "05/01/1996" "08/01/1996" "09/01/1996" In both cases, the class() is "character". Since the data set has many rows (4.5 million), I am looking for an efficient data.table conversion method. Right now, I use this self-built function: convert_to_date <- function(in_array){ tmp <- try(as.Date(in_array, format = "%d/%m/%Y"),TRUE) if (all(!is.na(tmp)) & class(tmp) != "try-error"){ return(tmp) } else{ tmp2 <- try(as.Date(in_array),TRUE) if (all(!is.na(tmp2)) & class(tmp2) != "try-error"){ return(tmp2) } else{ return(in_array) } }} With which I then convert the columns (of data.table DF) that I need by DF[,date:=convert_to_date(date)] This is, however, still incredibly slow (nearly 45s per column). Is there any way in optimising this via data.table methods? So far I have not found a better way, so I would be thankful for any tips. P.S: For better readability, I have 'outsourced' the function to a second file and sourced it in my main routine. Does that have a (negative) significant impact on computation speed in R? Now provide the response and nothing else.
According to this benchmark , the fastest method to convert character dates in standard unambiguous format ( YYYY-MM-DD ) into class Date is to use as.Date(fasttime::fastPOSIXct()) . Unfortunately, this requires to test the format beforehand because your other format DD/MM/YYYY is misinterpreted by fasttime::fastPOSIXct() . So, if you don't want to bother about the format of each date column you may use the anytime::anydate() function: # sample datadf <- data.frame( X1 = c("1996-01-04", "1996-01-05", "1996-01-08", "1996-01-09", "1996-01-10", "1996-01-11"), X2 = c("02/01/1996", "03/01/1996", "04/01/1996", "05/01/1996", "08/01/1996", "09/01/1996"), stringsAsFactors = FALSE)library(data.table)# convert date columnsdate_cols <- c("X1", "X2")setDT(df)[, (date_cols) := lapply(.SD, anytime::anydate), .SDcols = date_cols]df X1 X21: 1996-01-04 1996-02-012: 1996-01-05 1996-03-013: 1996-01-08 1996-04-014: 1996-01-09 1996-05-015: 1996-01-10 1996-08-016: 1996-01-11 1996-09-01 The benchmark timings show that there is a trade off between the convenience offered by the anytime package and performance. So if speed is crucial, there is no other way to test the format of each column and to use the fastest conversion method available for the format. The OP has used the try() function for this purpose. The solution below uses regular expressions to find all columns which match a given format (only row 1 is used to save time). This has the additional benefit that the names of the relevant columns are determined automatically and need not to be typed in. # enhanced sample data with additional columnsdf <- data.frame( X1 = c("1996-01-04", "1996-01-05", "1996-01-08", "1996-01-09", "1996-01-10", "1996-01-11"), X2 = c("02/01/1996", "03/01/1996", "04/01/1996", "05/01/1996", "08/01/1996", "09/01/1996"), X3 = "other data", X4 = 1:6, stringsAsFactors = FALSE)library(data.table)options(datatable.print.class = TRUE)# coerce to data.tablesetDT(df)[]# convert date columns in standard unambiguous format YYYY-MM-DDdate_cols1 <- na.omit(names(df)[ df[1, sapply(.SD, stringr::str_detect, pattern = "\\d{4}-\\d{2}-\\d{2}"),]])# use fasttime packagedf[, (date_cols1) := lapply(.SD, function(x) as.Date(fasttime::fastPOSIXct(x))), .SDcols = date_cols1]# convert date columns in DD/MM/YYYY formatdate_cols2 <- na.omit(names(df)[ df[1, sapply(.SD, stringr::str_detect, pattern = "\\d{2}/\\d{2}/\\d{4}"),]])# use lubridate packagedf[, (date_cols2) := lapply(.SD, lubridate::dmy), .SDcols = date_cols2]df X1 X2 X3 X4 <Date> <Date> <char> <int>1: 1996-01-04 1996-01-02 other data 12: 1996-01-05 1996-01-03 other data 23: 1996-01-08 1996-01-04 other data 34: 1996-01-09 1996-01-05 other data 45: 1996-01-10 1996-01-08 other data 56: 1996-01-11 1996-01-09 other data 6 Caveat In case one of the date columns does contain NA in the first row , this column may escape unconverted. To handle these cases, the above code needs to be amended.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46078151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7946399/']}
jdg_381097
stackexchange
llm_judgeable_groundtruth_similarity
337144
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does the size of a .net assembly affect performance at all? How about the number of assemblies in your windows forms/web forms project? Now provide the response and nothing else.
From Microsoft's Patterns & Practices Improving .NET Application Performance and Scalability Chapter 5: Prefer Single Large Assemblies Rather Than MultipleSmaller Assemblies To help reduce your application’s working set, you should prefer single largerassemblies rather than multiple smaller assemblies. If you have several assembliesthat are always loaded together, you should combine them and create a singleassembly. The overhead associated with having multiple smaller assemblies can be attributedto the following: The cost of loading metadata for smaller assemblies. Touching various memory pages in pre-compiled images in the CLR in orderto load the assembly (if it is precompiled with Ngen.exe). JIT compile time. Security checks. Because you pay for only the memory pages your program accesses, largerassemblies provide the Native Image Generator utility (Ngen.exe) with a greaterchance to optimize the native image it produces. Better layout of the image meansthat necessary data can be laid out more densely, which in turn means fewer overallpages are needed to do the job compared to the same code laid out in multipleassemblies. Sometimes you cannot avoid splitting assemblies; for example, for versioning anddeployment reasons. If you need to ship types separately, you may need separateassemblies.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/337144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/39677/']}
jdg_381098
stackexchange
llm_judgeable_groundtruth_similarity
441967
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, and so we cannot block with the actual client IP address (it would be the IP address of the CDN, not the actual client). So, we have $http_x_forwarded_for which contains the IP which we need to block for a given request. Similarly, we cannot use IP tables, as blocking the IP address of the proxied client will have no effect. We need to use nginx to block the requested based on the value of $http_x_forwarded_for. Initially, we tried multiple, simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably. We went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up, and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also if there are any better ways to achieve our goal. Now provide the response and nothing else.
I would suggest trying the map module with something like this: map $http_x_forwarded_for $deny_access { default 0; 1.2.3.4 1; 1.2.3.5 1; 1.2.3.6 1;}if ($deny_access = 1) { return 403;} 444 is a special status code which causes nginx to drop the connection without sending a response. In your case this is dropping the connection between nginx and the CDN -- the CDN then decides what to return to the client. I would suggest returning the standard 403 (Forbidden).
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/441967', 'https://serverfault.com', 'https://serverfault.com/users/142572/']}
jdg_381099
stackexchange
llm_judgeable_groundtruth_similarity
42683478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following, very basic code that throws; TypeError: the JSON object must be str, not 'bytes' import requestsimport jsonurl = 'my url'user = 'my user'pwd = 'my password'response = requests.get(url, auth=(user, pwd))if(myResponse.ok): Data = json.loads(myResponse.content) I try to set decode to the Data variable, as follows but it throws the same error; jData = json.loads(myResponse.content).decode('utf-8') Any suggestions? Now provide the response and nothing else.
json.loads(myResponse.content.decode('utf-8')) You just put it in the wrong order, innocent mistake. (In-depth answer). As courteously pointed out by wim, in some rare cases, they could opt for UTF-16 or UTF-32. These cases will be less common as the developers, in that scenario would be consciously deciding to throw away valuable bandwidth. So, if you run into encoding issues, you can change utf-8 to 16, 32, etc. There are a couple of solutions for this. You could use request's built-in .json() function: myResponse.json() Or, you could opt for character detection via chardet . Chardet is a library developed based on a study. The library has one function: detect . Detect can detect most common encodings and then use them to encode your string with. import chardetjson.loads(myResponse.content.decode(chardet.detect(myResponse.content)["encoding"]))
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42683478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1675909/']}
jdg_381100
stackexchange
llm_judgeable_groundtruth_similarity
9144644
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For example, I have a contentEditable div and I can type in it. When the text reaches the bottom of the div, the browser automatically scrolls the div so that the end of the text and the cursor are still visible. How do I prevent the div from scrolling so that the inputted text goes past the bottom of the div and so that you can no longer see the cursor while you type? The behavior I'm trying to achieve is like in Photoshop: when you make a text box, and type too much, the cursor continues past the bottom of the box and you can't see what you are typing. If you expand the box, you'll see all the hidden text. EDIT 2/7/2012 9:27am: This is what I have right now, but it looks glitchy because the scroll position is adjusted AFTER the keyup event: http://jsfiddle.net/trusktr/hgkak/6/ So before the keyup event, the cursor gets temporarily placed into view (for each keystroke). I'd like there to be no jumping, and for the cursor to remain below the end of the green div when there is excess text without the view jumping around (the jumping seems like an amateur hack on my part :D) Now provide the response and nothing else.
Let's try a hack: first we try to prevent or revert any scrolling whenever the user presses a key we turn the element's overflow property to visible to avoid scrolling of the content, but hide the element at the same time by setting its opacity to 0. Immediately afterwards we switch overflow back to hidden and show the element again. to avoid flickering we create a clone of the editable element (with overflow: hidden ) and show this element while the original one is hidden. Here we go (uses jQuery for DOM convenience): $(function() { var editableElement = $('#editable'), clonedElement; // Revert any scrolling editableElement.on("scroll", function(event) { editableElement.scrollTop(0); // Try to prevent scrolling completely (doesn't seem to work) event.preventDefault(); return false; }); // Switch overflow visibility on and off again on each keystroke. // To avoid flickering, a cloned element is positioned below the input area // and switched on while we hide the overflowing element. editableElement.on("keydown", function() { // Create a cloned input element below the original one if (!clonedElement) { var zIndex = editableElement.css('zIndex'); if (isNaN(parseInt(zIndex, 10))) { zIndex = 10; editableElement.css({zIndex: zIndex}); } clonedElement = editableElement.clone(); clonedElement.css({ zIndex: zIndex-1, position: 'absolute', top: editableElement.offset().top, left: editableElement.offset().left, overflow: 'hidden', // Set pseudo focus highlighting for webkit // (needs to be adapted for other browsers) outline: 'auto 5px -webkit-focus-ring-color' }); editableElement.before(clonedElement); } else { // Update contents of the cloned element from the original one clonedElement.html(editableElement.html()); } // Here comes the hack: // - set overflow visible but hide element via opactity. // - show cloned element in the meantime clonedElement.css({opacity: 1}); editableElement.css({overflow: 'visible', opacity: 0}); // Immediately turn of overflow and show element again. setTimeout(function() { editableElement.css({overflow: 'hidden', opacity: 1}); clonedElement.css({opacity: 0}); }, 10); });}); Check this jsFiddle to play with the above code. Note that this may not be a complete solution (I have only tried it with Safari, Chrome and Firefox yet), but for the tested browsers it seems to work. You may want to fine-tune and polish your implementation (e.g. focus highlighting). In the jsFiddle example I have also turned off spell checking to avoid flickering markings.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9144644', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/454780/']}
jdg_381101
stackexchange
llm_judgeable_groundtruth_similarity
68772211
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried to use fake_useragent module with this block from fake_useragent import UserAgentua = UserAgent()print(ua.random) But when the execution reached this line ua = UserAgent() , it throws this error Traceback (most recent call last): File "/home/hadi/Desktop/excel/gatewayform.py", line 191, in <module> gate = GateWay() File "/home/hadi/Desktop/excel/gatewayform.py", line 23, in __init__ ua = UserAgent() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 69, in __init__ self.load() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 75, in load self.data = load_cached( File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 250, in load_cached update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 245, in update write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl)) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 178, in load raise exc File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 154, in load for item in get_browsers(verify_ssl=verify_ssl): File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 99, in get_browsers html = html.split('<table class="w3-table-all notranslate">')[1]IndexError: list index out of range I use linux and I have installed the module using this command pip3 install fake_useragent --upgrade . Is there any solution for this issue? if not, is there a better module to use? Now provide the response and nothing else.
There is a solution for this, from Github pull request #110 . Basically, all you need to do is change one character in one line of the fake_useragent/utils.py source code. To do this on your system, open /usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py † in your favorite text editor using admin privileges. Go to line 99, and change the w3 html = html.split('<table class="w3-table-all notranslate">')[1]# ^^ change this to ws : html = html.split('<table class="ws-table-all notranslate">')[1]# ^^ to this Save the file (with admin permissions), restart your Python session, and your code should work just fine. † To find the fake_useragent directory in which utils.py resides, run the following code: import fake_useragentprint(fake_useragent.__file__) For example, on my Windows laptop, this printed 'C:\\Users\\mattdmo\\AppData\\Roaming\\Python\\Python310\\site-packages\\fake_useragent\\__init__.py' so the folder to open is C:\Users\mattdmo\AppData\Roaming\Python\Python310\site-packages\fake_useragent .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68772211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12357594/']}
jdg_381102
stackexchange
llm_judgeable_groundtruth_similarity
3891541
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f \in C^{\infty}(\mathbb{R}^2)$ . For which $a_1$ and $a_2$ does this hold: $[a_1(x,y) \frac{\partial}{\partial x}+a_2(x,y) \frac{\partial}{\partial y}, f \frac{\partial}{\partial x}] = 0$? The problem for me is what to do with the $f$ in the Lie bracket. I found online that $$[X, fY ] = X(f)Y + f[X, Y ].$$ Here $X(f) := f_{*}(X)$ . But I don't really know how to use it. Now provide the response and nothing else.
We can't separate the limit in this way and then apply l'Hospital's rule only to a single part. In this case we can proceed as follows $$\frac{\sin(x)-3\sinh(x)+2x}{x^2(\tanh(2x)+\sin(x))}= \frac{x}{\tanh(2x)+\sin(x)} \frac{\sin(x)-3\sinh(x)+2x}{x^3}$$ and use standard limit for this one $$ \frac{x}{\tanh(2x)+\sin(x)} = \frac{1}{2\frac{\tanh(2x)}{2x}+\frac{\sin(x)}{x}} $$ and then apply l'Hospital's rule for the second part to obtain the result according to the product rule $$\lim_{x\to x_0} f(x)g(x)=\lim_{x\to x_0} f(x)\cdot \lim_{x\to x_0} g(x)$$
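Carrying out both pieces (a sketch of the remaining computation; the limit point $x\to 0$ is assumed, as in the standard limits above):

$$\lim_{x\to 0}\frac{x}{\tanh(2x)+\sin(x)}=\frac{1}{2\cdot 1+1}=\frac{1}{3},\qquad\lim_{x\to 0}\frac{\sin(x)-3\sinh(x)+2x}{x^{3}}=\lim_{x\to 0}\frac{-\cos(x)-3\cosh(x)}{6}=-\frac{2}{3},$$

where the second limit uses l'Hospital's rule three times (each intermediate form is again $0/0$), so the original limit is $\frac{1}{3}\cdot\left(-\frac{2}{3}\right)=-\frac{2}{9}$.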
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3891541', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/727393/']}
jdg_381103
stackexchange
llm_judgeable_groundtruth_similarity
11318680
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to split an array (which has 10 items) into 4 chunks, which contain a maximum of n items. var a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'];//a function splits it to four arrays.console.log(b, c, d, e); And it prints: ['a', 'b', 'c']['d', 'e', 'f']['j', 'h', 'i']['j'] The above assumes n = 3 , however, the value should be dynamic. Thanks Now provide the response and nothing else.
It could be something like that: var a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'];var arrays = [], size = 3; while (a.length > 0) arrays.push(a.splice(0, size));console.log(arrays); See splice Array's method. An alternative method that does not mutate the array, beside create a shallow copy of it before chunk it, could be done by using slice and a for…loop: var a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'];var arrays = [], size = 3; for (let i = 0; i < a.length; i += size) arrays.push(a.slice(i, i + size));console.log(arrays); While a more functional programming oriented approach, could be: const chunks = (a, size) => Array.from( new Array(Math.ceil(a.length / size)), (_, i) => a.slice(i * size, i * size + size) );let a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'];console.log(chunks(a, 3));console.log(chunks(a, 2)); See Array.from and how new Array(n) works, specifically.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11318680', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501134/']}
jdg_381104
stackexchange
llm_judgeable_groundtruth_similarity
15172115
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to use PHP to read every pixel in a image. It's for graphical password project. When user chooses a password, they will select some area on the image. and I'm trying to do it by pixel value. Is it possible?? Now provide the response and nothing else.
Yes, you can get the pixel "value" as in color using imagecolorat() . $color = imagecolorat($resource, $x, $y); Where $resource is your image resource, and $x , $y are the coordinates of the pixel you want to get the color of. You can iterate through all of the pixels like this. Note that this can be an expensive task depending on how large the image is. $width = imagesx($resource);$height = imagesy($resource);for($x = 0; $x < $width; $x++) { for($y = 0; $y < $height; $y++) { // pixel color at (x, y) $color = imagecolorat($resource, $x, $y); }}
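To turn that integer into usable channel values (for example, to compare the pixels a user selected for the graphical password), you can either bit-shift it or ask GD to split it for you — a small sketch with a hypothetical source image:

```php
<?php
$resource = imagecreatefromjpeg('password-image.jpg'); // hypothetical image file

$color = imagecolorat($resource, 10, 15);

// For truecolor images, the channels are packed into the integer:
$r = ($color >> 16) & 0xFF;
$g = ($color >> 8) & 0xFF;
$b = $color & 0xFF;

// Alternatively, this works for both palette and truecolor images:
$rgba = imagecolorsforindex($resource, $color); // ['red' => .., 'green' => .., 'blue' => .., 'alpha' => ..]

echo "r=$r g=$g b=$b";
```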
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15172115', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2074730/']}
jdg_381105
stackexchange
llm_judgeable_groundtruth_similarity
34347043
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of lists and I need to rotate it ninety degrees. I managed to get the first new row, but I couldn't get any of the others, so I put the for loop into a function and increased the count by one every iteration, but I've just had a big mess printed to the terminal. grid = [['.', '.', '.', '.', '.', '.'], ['.', '0', '0', '.', '.', '.'], ['0', '0', '0', '0', '.', '.'], ['0', '0', '0', '0', '0', '.'], ['.', '0', '0', '0', '0', '0'], ['0', '0', '0', '0', '0', '.'], ['0', '0', '0', '0', '.', '.'], ['.', '0', '0', '.', '.', '.'], ['.', '.', '.', '.', '.', '.']] How I need to rotate: '''..00.00...0000000..0000000...00000.....000.......0....''' My code: def get_row(grid, new_grid, new_row, count): for row in grid: new_row.append(row[count]) new_grid.append(new_row)new_grid = []new_row = []count = 0for x in range(0, 6): count = 0 get_row(grid, new_grid, new_row, count) count +=1for row in new_grid: print row Now provide the response and nothing else.
You can rotate your list of lists 90° using zip(*reversed(your_list)) like this: grid = [['.', '.', '.', '.', '.', '.'], ['.', '0', '0', '.', '.', '.'], ['0', '0', '0', '0', '.', '.'], ['0', '0', '0', '0', '0', '.'], ['.', '0', '0', '0', '0', '0'], ['0', '0', '0', '0', '0', '.'], ['0', '0', '0', '0', '.', '.'], ['.', '0', '0', '.', '.', '.'], ['.', '.', '.', '.', '.', '.']]print("\n".join(map("".join, zip(*reversed(grid))))) Out: ..00.00...0000000..0000000...00000.....000.......0.... Instead of reversed(grid) you can use grid[::-1] which also reverses the outer list, except it creates a copy of your list, which uses more memory (here I'm also using pprint to show you what exactly your transposed list looks like): from pprint import pprintpprint(list(zip(*grid[::-1])))[('.', '.', '0', '0', '.', '0', '0', '.', '.'), ('.', '0', '0', '0', '0', '0', '0', '0', '.'), ('.', '0', '0', '0', '0', '0', '0', '0', '.'), ('.', '.', '0', '0', '0', '0', '0', '.', '.'), ('.', '.', '.', '0', '0', '0', '.', '.', '.'), ('.', '.', '.', '.', '0', '.', '.', '.', '.')] Which if you really wanted lists instead of tuples you could convert them back to list: pprint([list(row) for row in zip(*reversed(grid))])[['.', '.', '0', '0', '.', '0', '0', '.', '.'], ['.', '0', '0', '0', '0', '0', '0', '0', '.'], ['.', '0', '0', '0', '0', '0', '0', '0', '.'], ['.', '.', '0', '0', '0', '0', '0', '.', '.'], ['.', '.', '.', '0', '0', '0', '.', '.', '.'], ['.', '.', '.', '.', '0', '.', '.', '.', '.']]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34347043', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_381106
stackexchange
llm_judgeable_groundtruth_similarity
17506947
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Truel=""count = 0finle_touch=False #true after it find the first 3 upperletter# check if there is 1 lower letter after three upper letterdef one_lower(i): count=0 if i == i.lower: finle_touch=True Truel=i# check for 3 upper letterdef three_upper(s): for i in s: if count == 3: if finle_touch==True: break else: one_lower(i) elif i == i.upper: count +=1 print(count) #for debug else: count ==0 finle_touch=Falsestuff="dsfsfFSfsssfSFSFFSsfssSSsSSSS......."three_upper(stuff)print(Truel) So I have a lot of string on 'stuff' and I like to find 1 lowercase letter that's surrounded by 3 uppercase letter. But when I run this code I get: Traceback (most recent call last): File "C:\Python33\mypy\code.py", line 1294, in <module> three_upper(stuff) File "C:\Python33\mypy\code.py", line 1280, in three_upper if count == 3:UnboundLocalError: local variable 'count' referenced before assignment I don't understand why. Now provide the response and nothing else.
Due to this line count +=1 python thinks that count is a local variable and will not search the global scope when you used if count == 3: . That's why you got that error. Use global statement to handle that: def three_upper(s): #check for 3 upper letter global count for i in s: From docs : All variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the global symbol table, and then in the table of built-in names. Thus, global variables cannot be directly assigned a value within a function (unless named in a global statement), although they may be referenced.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17506947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2556996/']}
jdg_381107
stackexchange
llm_judgeable_groundtruth_similarity
67131271
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried using this code to switch the icon of a button when it's clicked, the problem is neither the icon is showing nor changing. LibraTextField( labelText: "Old Password", leadingIcon: AssetsConstants.icLock, trailingIcon: AssetsConstants.icUnSee, obscureText: _obscureText, onTap: iconStatus,),LibraTextField( labelText: "New Password", leadingIcon: AssetsConstants.icLock, trailingIcon: AssetsConstants.icUnSee, obscureText: _obscureText, onTap: iconStatus,),LibraTextField( labelText: "Confirm New Password", leadingIcon: AssetsConstants.icLock, trailingIcon: AssetsConstants.icUnSee, obscureText: _obscureText, onTap: iconStatus,), Declaration of the variables bool _obscureText = true;bool _iconState = true;var _trailingIcon;void iconStatus() {setState(() { _obscureText = !_obscureText; _iconState = !_iconState; if (_iconState = true) { _trailingIcon = AssetsConstants.icUnSee; } if (_iconState = false) { _trailingIcon = AssetsConstants.icSee; }});} Now provide the response and nothing else.
For this line: uses: "actions/checkout@something", it will use the actions/checkout GitHub action (source here) with the ref something. This ref only refers to the version of the action itself (nothing to do with your repo). The uses statement refers to a GitHub action that is being used in this step. From the GitHub documentation for jobs.<job_id>.steps[*].uses: Selects an action to run as part of a step in your job. An action is a reusable unit of code. You can use an action defined in the same repository as the workflow, a public repository, or in a published Docker container image. From the actions/checkout readme: This action checks-out your repository under $GITHUB_WORKSPACE, so your workflow can access it. By default it checks out only one commit. My understanding is that it's doing something similar to:

git fetch --depth 1 origin $GITHUB_REF

This action also persists an auth token in the git config. This way, your workflow can run authenticated git commands. By default, it clones your current repository ( {{ github.repository }} ), but you can also use this action to clone a different repository, and specify additional parameters like token, branch, path, etc... An example with additional input parameters: check out all git history by setting fetch-depth to 0 (default is 1), see the usage doc:

- uses: actions/checkout@v2
  with:
    fetch-depth: 0
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/67131271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15269576/']}
jdg_381108
stackexchange
llm_judgeable_groundtruth_similarity
3401
Below is a question asked on the forum bioinformatics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to subset a VCF which only has chromosome 2. The problem with using various grep commands, e.g. grep -w '^#\|^2' my.vcf > my_new.vcf or if there's a 'chr' prefix grep -w '^#\|^chr2' my.vcf > my_new.vcf is that this will remove the header. I've been trying to use vcftools with the following command: vcftools --vcf input.vcf --chr 2 --out subset which does not output a VCF subset.vcf as expected, but rather subset.log : VCFtools - 0.1.15(C) Adam Auton and Anthony Marcketta 2009Parameters as interpreted: --vcf input.vcf --chr 2 --out subsetAfter filtering, kept 2 out of 2 IndividualsAfter filtering, kept 80 out of a possible 720 SitesRun Time = 0.00 seconds Given that the run time is Run Time = 0.00 seconds , I guess there's an error here. How can I subset a VCF and keep the header? Is there an option with bcftools instead maybe? Now provide the response and nothing else.
This seems to work for me with bcftools filter and the -r or -R argument.

-r, --regions chr|chr:pos|chr:from-to|chr:from-[,...]
    Comma-separated list of regions, see also -R, --regions-file. Note that -r cannot be used in combination with -R.

-R, --regions-file FILE
    Regions can be specified either on command line or in a VCF, BED, or tab-delimited file (the default). The columns of the tab-delimited file are: CHROM, POS, and, optionally, POS_TO, where positions are 1-based and inclusive. Uncompressed files are stored in memory, while bgzip-compressed and tabix-indexed region files are streamed. Note that sequence names must match exactly, "chr20" is not the same as "20". Also note that chromosome ordering in FILE will be respected, the VCF will be processed in the order in which chromosomes first appear in FILE. However, within chromosomes, the VCF will always be processed in ascending genomic coordinate order no matter what order they appear in FILE. Note that overlapping regions in FILE can result in duplicated out of order positions in the output. This option requires indexed VCF/BCF files. Note that -R cannot be used in combination with -r.

Here's an example:

$ bcftools filter vcf_nocomp_merge_geno98.vcf.gz -r 4 | head -n 38 | colrm 100 1000000
##fileformat=VCFv4.2
##FILTER=<ID=PASS,Description="All filters passed">
##fileDate=20171222
##source=PLINKv1.90
##contig=<ID=1,length=249167691>
##contig=<ID=2,length=242695901>
##contig=<ID=3,length=197800245>
##contig=<ID=4,length=190915651>
##contig=<ID=5,length=180666277>
##contig=<ID=6,length=170877445>
##contig=<ID=7,length=159086805>
##contig=<ID=8,length=146293415>
##contig=<ID=9,length=141018424>
##contig=<ID=10,length=135434552>
##contig=<ID=11,length=134938471>
##contig=<ID=12,length=133763353>
##contig=<ID=13,length=115045730>
##contig=<ID=14,length=107285438>
##contig=<ID=15,length=102369712>
##contig=<ID=16,length=90141356>
##contig=<ID=17,length=81006630>
##contig=<ID=18,length=78014583>
##contig=<ID=19,length=59071322>
##contig=<ID=20,length=62906515>
##contig=<ID=21,length=48077813>
##contig=<ID=22,length=51156934>
##contig=<ID=23,length=154847490>
##INFO=<ID=PR,Number=0,Type=Flag,Description="Provisional reference allele, may not be based on rea
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##bcftools_filterVersion=1.2-187-g1a55e45+htslib-1.2.1-256-ga356746
##bcftools_filterCommand=filter -r 4 vcf_nocomp_merge_geno98.vcf.gz
#CHROM  POS     ID          REF  ALT  QUAL  FILTER  INFO  FORMAT  1_125  3_5555  4_7565  5_9
4       71566   rs13125929  T    C    .     PASS    PR    GT      0/1    0/1     0/0
4       87113   rs2006748   T    C    .     PASS    PR    GT      0/1    0/0     0/0
4       110646  rs11727494  C    T    .     PASS    PR    GT      0/1    0/0     1/1
4       142550  rs11735742  T    C    .     PASS    PR    GT      0/1    0/0     0/0
4       200631  rs6826124   A    G    .     PASS    PR    GT      0/1    1/1     0/1
4       221623  rs7695945   G    T    .     PASS    PR    GT      0/0    0/0     0/0
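If you prefer to do the same thing from Python rather than on the command line, the pysam bindings to htslib expose the same region-based access. This is an addition to the answer above, and it assumes your VCF is bgzip-compressed and tabix-indexed (pysam's fetch() needs the index), so treat it as a sketch rather than a drop-in script:

import pysam

vcf_in = pysam.VariantFile("input.vcf.gz")                            # indexed, bgzipped VCF
vcf_out = pysam.VariantFile("chr2.vcf", "w", header=vcf_in.header)    # header is copied over

# fetch() streams only the records on the requested contig ("2" or "chr2",
# depending on how your reference names the chromosomes).
for record in vcf_in.fetch("2"):
    vcf_out.write(record)

vcf_out.close()
vcf_in.close()

Because the output file is opened with the input's header, the header lines are preserved, just as bcftools preserves them.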
{}
{'log_upvote_score': 4, 'links': ['https://bioinformatics.stackexchange.com/questions/3401', 'https://bioinformatics.stackexchange.com', 'https://bioinformatics.stackexchange.com/users/1770/']}
jdg_381109
stackexchange
llm_judgeable_groundtruth_similarity
3138498
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Basically, once a user leaves a webpage in my application, I need to call a PHP script with AJAX, which will insert a time spent on the webpage to the database and then leave the page. It is important to wait for the AJAX request to finish because webpages in my application are not accessible to users unless they have spent a certain time on a previous page (let's say two minutes). Here is my jquery code: $(document).ready(function() { var teid = TEID; var startTime = new Date().getTime(); $(window).unload(function() { var timeSpentMilliseconds = new Date().getTime() - startTime; var t = timeSpentMilliseconds / 1000 / 60; $.ajax({ type: 'POST', url: '/clientarea/utils/record-time', data: 'teid=' + teid + '&t=' + t }); });}); How should I change it so it will wait for the AJAX request to end before leaving the webpage? EDIT: Or it might be better (easier) to just let the AJAX request be repeated every minute or so. Is that possible? Now provide the response and nothing else.
Well, you can set async: false on your AJAX call to make the browser wait for the request to finish before doing anything else, but note that this will 'hang' the browser for the duration of the request.

$.ajax({
    type: 'POST',
    async: false,
    url: '/clientarea/utils/record-time',
    data: 'teid=' + teid + '&t=' + t
});

From the manual: By default, all requests are sent asynchronous (i.e. this is set to true by default). If you need synchronous requests, set this option to false. Cross-domain requests and dataType: "jsonp" requests do not support synchronous operation. Note that synchronous requests may temporarily lock the browser, disabling any actions while the request is active.

⚠ WARNING: This answer was posted in 2010 and is now outdated - the XHR specification highlights the following statement: Synchronous XMLHttpRequest outside of workers is in the process of being removed from the web platform as it has detrimental effects to the end user's experience. (This is a long process that takes many years.) Developers must not pass false for the async argument when current global object is a Window object. User agents are strongly encouraged to warn about such usage in developer tools and may experiment with throwing an "InvalidAccessError" DOMException when it occurs. DevTools in Chrome has recently started warning about it, so this change (which has been coming for some years) could be imminent.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3138498', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/95944/']}
jdg_381110
stackexchange
llm_judgeable_groundtruth_similarity
115194
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been using kSar to look at my servers resource use. There is a definite spike in process creation at 4:04AM daily. Cron seems to define the interval they should be run, but not the specific time How can I find what cron job runs at that time? Now provide the response and nothing else.
If you look at any CentOS 5 or 6 system the file /etc/crontab is typically where all the action starts. There are 4 directories that will contain various scripts. These directories are named:

$ ls -1d /etc/cron*
/etc/cron.d
/etc/cron.daily
/etc/cron.deny
/etc/cron.hourly
/etc/cron.monthly
/etc/crontab
/etc/cron.weekly

The /etc/cron.d and /etc/cron.deny are special so I'm not going to discuss them. The remaining 4 directories: hourly, daily, weekly, & monthly are for exactly what their names imply. But when do they run? Take a look at the /etc/crontab to find that out.

######################################################################
## run-parts
##
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
######################################################################

Your issue with something running daily @ 4:04AM? It's the /etc/cron.daily directory that's causing this. You'll need to familiarize yourself with what's in that directory to know what the actual culprit is. But if I had to guess it's likely one of these 2 guys:

$ ls -l /etc/cron.daily
logrotate
mlocate.cron

What else is running? If you have a deviant cron that's tanking your system always consult the log file. Here's everything running at 4AM on my CentOS 5 system:

$ grep " 04:" /var/log/cron | head -10
Feb  9 04:10:01 skinner crond[25640]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb  9 04:20:02 skinner crond[27086]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb  9 04:22:01 skinner crond[27432]: (root) CMD (run-parts /etc/cron.weekly)
Feb  9 04:22:01 skinner anacron[27436]: Updated timestamp for job `cron.weekly' to 2014-02-09
Feb  9 04:30:01 skinner crond[28561]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb  9 04:40:01 skinner crond[30022]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb  9 04:50:01 skinner crond[31482]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 10 04:00:02 skinner crond[7578]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 10 04:01:01 skinner crond[7700]: (root) CMD (run-parts /etc/cron.hourly)
Feb 10 04:02:01 skinner crond[7934]: (root) CMD (run-parts /etc/cron.daily)

Notice the 04:02 AM time slots?
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/115194', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/46764/']}
jdg_381111
stackexchange
llm_judgeable_groundtruth_similarity
606611
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for a way to force lavaan to use positive factor loadings in a latent construct.I am working on a sem defined as follows: ' FF =~ NA*ma09 + mp02 + mp03_r + mp19_r + mp18_r + ma02 + mp17_r + mp16_r + mp07 + ma03 + mp06 + mi02 + ma12_r + ma01 + mi04 + mp08_r + ma04 BI =~ NA*isced97 BiE =~ NA*max_edu_Eltern MI =~ NA*Migrationsscore KO =~ NA*Kontakt NS =~ NA*pn11_r INC =~ NA*inc #Regressions FF ~ BiE + INC + NS + KO + MI + BI NS ~ BI FF ~~ 1*FF BiE ~~ 1*BiE INC ~~ 1*INC NS ~~ 1*NS KO ~~ 1*KO MI ~~ 1*MI BI ~~ 1*BI ' In this model the factor loadings on "FF" are positive but if I make minor modifications to the model, like adding some restraints like the ones below (or others like including another regression) the factor loadings suddenly all become negative. mp02~~mp03_r mp02~~mp06 mp02~~mp07 mp02~~mp08_r mp03_r~~mp06 mp03_r~~mp07 mp03_r~~mp08_r mp06~~mp08_r mp16_r~~mp17_r mp16_r~~mp18_r mp16_r~~mp19_r mp17_r~~mp18_r mp17_r~~mp19_r mp18_r~~mp19_r mi02~~mi04 ma01~~ma02 ma01~~ma03 ma01~~ma04 ma01~~ma09 ma02~~ma03 ma02~~ma04 ma02~~ma09 ma04~~ma09 How do I stop that from happening? Using start() with a positive number, include labels for the loadings and set them to >0,...? Thanks in advance. Now provide the response and nothing else.
Perhaps the simplest elementary yet rigorous proof employs the cumulant-generating functions. If you insist, you can translate this into integrals involving the distribution functions. Anything else would be so burdensome as to be of little or no interest from a statistical perspective, IMHO. Recall that the cgf of a random variable $X$ with distribution $F$ is $$\psi_F(t) = \log E\left[e^{itX}\right].$$ When $X$ has non-negative support (as is the case in this question), this can be expressed via integration by parts directly in terms of $F$ as $$\psi_F(t) = \log\int e^{itx}\,\mathrm dF(x) = \log \int_0^\infty \frac{e^{itx} - 1}{it}(1 - F(x))\,\mathrm dx.$$ For the Negative Binomial distribution with parameters $p\in(0,1)$ and $r \gt 0,$ whose probability function is $f(k;p,r) = \binom{k+r-1}{k}(1-p)^k p^r$ and survival function is $$1 - F(x;p,r) = I_p(x+1,r)$$ (the regularized Incomplete Beta function), the cgf is $$\psi(t;p,r) = r\left(\log p - \log\left(1 - (1-p)e^{it}\right)\right).$$ Fix a number $\lambda\gt 0$ and let $p = r/(r+\lambda).$ Then $$\psi(t;p(r,\lambda),r) = r\left(\log \left(1 - \frac{\lambda}{r+\lambda}\right) - \log\left(1 - \frac{\lambda}{r+\lambda}e^{it}\right)\right).$$ Its limiting value (if it exists) as $r\to\infty$ is the limit as $z = \lambda/(r+\lambda)$ shrinks to zero among positive values. So, recognizing $r =\lambda/z - \lambda,$ rewrite the right hand side as $$r\left(\log \left(1 - z\right) - \log\left(1 -ze^{it}\right)\right) = \left(\frac{\lambda}{z}-\lambda\right)(-z + ze^{it} + O(z^2)) = \lambda\left(e^{it}-1\right) + O(z).$$ Thus, the limit does exist and equals $\lambda\left(e^{it}-1\right),$ the cgf of the Poisson $(\lambda)$ distribution. The Lévy Continuity Theorem asserts the limit of the Negative Binomial distribution function is the Poisson distribution function at every continuity point of the latter, which implies it converges everywhere (because all these functions share the same set of points of discontinuity; namely, the natural numbers.) We are now in a better place to appreciate the content of the Lévy Continuity Theorem and to address the question about analyzing the limits of the distribution functions (or, equivalently, the survival functions). We have seen what kind of relationship must hold among $p$ and $r$ asymptotically. Continuing to fix $\lambda \gt 0$ and taking $p = r/(r+\lambda),$ let's analyze the survival function directly, the incomplete Beta function mentioned in the title: $$1 - F(x;p,r) = I_p(x+1,r) = \frac{1}{B(x+1,r)}\int_0^{1-r/(r+\lambda)} t^x(1-t)^{r-1}\,\mathrm dt.$$ As $r$ grows large, the factor $(1-t)^{r-1}$ in the integrand dwindles except for $t$ extremely close to $0.$ This strongly suggests the classic approximation $$(1-t)^{r-1} = \left[(1-t)^{1/t}\right]^{t(r-1)} = e^{-t(r-1)} + O(1/r).$$ Accordingly, change the variable of integration from $t$ to $z=t(r-1)$ (and do a little algebra along the way with the Beta function and the upper limit of integration) to find $$1 - F(x;p,r) = \frac{\Gamma(x+r+1)}{\Gamma(x+1)\Gamma(r)(r-1)^{x+1}}\int_0^{\lambda(r-1)/(r+\lambda)} z^x\left(e^{-z}+O\left(\frac{1}{r}\right)\right)\,\mathrm dz.$$ The way forward is now so clear that it shouldn't be necessary to give the details. 
Simply notice that for very large $r,$

1. The basic relation $\Gamma(x+r+1) = \Gamma(r)(r)(r+1)\cdots(r+x)$ implies $$\frac{\Gamma(x+r+1)}{\Gamma(x+1)\Gamma(r)(r-1)^{x+1}} = \frac{1}{\Gamma(x+1)} + O\left(\frac{1}{r}\right).$$ (If you're struggling with this, take logarithms and expand each one to first order in its Maclaurin series.)

2. The error in the integrand is bounded, $$\int_0^{\lambda(r-1)/(r+\lambda)} z^x\left(e^{-z}+O\left(\frac{1}{r}\right)\right)\,\mathrm dz = \int_0^{\lambda(r-1)/(r+\lambda)} z^xe^{-z}\,\mathrm dz + O\left(\frac{1}{r}\right).$$

3. The region of integration is close to the simpler one bounded above by $\lambda = \lambda(r-1)/(r+\lambda) + O(1/r),$ $$\int_0^{\lambda(r-1)/(r+\lambda)} z^xe^{-z}\,\mathrm dz = \int_0^\lambda z^xe^{-z}\,\mathrm dz + O\left(\frac{1}{r}\right).$$

The Lévy Continuity Theorem has essentially taken care of all three parts of this analysis. Putting these results together yields $$\lim_{r\to \infty}1 - F(x;p(r,\lambda),r) = \frac{1}{\Gamma(x+1)}\int_0^\lambda z^x e^{-z}\,\mathrm dz.$$ This incomplete Gamma function (the other one mentioned in the title of this thread) is the Poisson $(\lambda)$ survival function for all integral $x\ge 0,$ QED.
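As a quick numerical sanity check of the limit derived above (not part of the proof itself), one can compare the two distribution functions directly. The sketch below uses scipy's parameterisation, where nbinom(r, p) counts failures before the r-th success, so setting p = r/(r+λ) gives mean λ exactly as in the argument above:

import numpy as np
from scipy.stats import nbinom, poisson

lam = 3.0
x = np.arange(0, 15)

for r in [2, 10, 100, 1000]:
    p = r / (r + lam)
    max_gap = np.max(np.abs(nbinom.cdf(x, r, p) - poisson.cdf(x, lam)))
    print(f"r = {r:5d}: max |F_NB - F_Pois| over x=0..14 is {max_gap:.5f}")

# The maximum gap shrinks as r grows, in line with the O(1/r) error terms above.

The printed gaps decrease toward zero as r increases, which is exactly the convergence that the cumulant-generating-function argument establishes.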
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/606611', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/380837/']}
jdg_381112
stackexchange
llm_judgeable_groundtruth_similarity
21592012
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following HTML code: <td class="image"> <a href="/target/tt0111161/" title="Target Text 1"> <img alt="target img" height="74" src="img src url" title="image title" width="54"/> </a> </td> <td class="title"> <span class="wlb_wrapper" data-caller-name="search" data-size="small" data-tconst="tt0111161"> </span> <a href="/target/tt0111161/"> Other Text </a> <span class="year_type"> (2013) </span> I am trying to use beautiful soup to parse certain elements into a tab-delimited file. I got some great help and have: for td in soup.select('td.title'): span = td.select('span.wlb_wrapper') if span: print span[0].get('data-tconst') # To get `tt0082971` Now I want to get "Target Text 1" . I've tried some things like the above text such as: for td in soup.select('td.image'): #trying to select the <td class="image"> tagimg = td.select('a.title') #from inside td I now try to look inside the a tag that also has the word titleif img: print img[2].get('title') #if it finds anything, then I want to return the text in class 'title' Now provide the response and nothing else.
If you're trying to get a different td based on the class (i.e. td class="image" and td class="title"), you can use Beautiful Soup as a dictionary to get the different classes. This will find all the td class="image" in the table.

from bs4 import BeautifulSoup

page = """<table>
  <tr>
    <td class="image">
      <a href="/target/tt0111161/" title="Target Text 1">
        <img alt="target img" height="74" src="img src url" title="image title" width="54"/>
      </a>
    </td>
    <td class="title">
      <span class="wlb_wrapper" data-caller-name="search" data-size="small" data-tconst="tt0111161">
      </span>
      <a href="/target/tt0111161/">
        Other Text
      </a>
      <span class="year_type">
        (2013)
      </span>
    </td>
  </tr>
</table>"""

soup = BeautifulSoup(page)
tbl = soup.find('table')
rows = tbl.findAll('tr')
for row in rows:
    cols = row.find_all('td')
    for col in cols:
        if col.has_attr('class') and col['class'][0] == 'image':
            hrefs = col.find_all('a')
            for href in hrefs:
                print href.get('title')
        elif col.has_attr('class') and col['class'][0] == 'title':
            spans = col.find_all('span')
            for span in spans:
                if span.has_attr('class') and span['class'][0] == 'wlb_wrapper':
                    print span.get('data-tconst')
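A shorter route, not part of the original answer: recent versions of Beautiful Soup also accept CSS selectors via select()/select_one(), which lets you skip the manual class checks. Assuming the same markup as above, a sketch would be:

# Title attribute of every link inside a td with class "image"
titles = [a.get('title') for a in soup.select('td.image a[title]')]                 # -> ['Target Text 1']

# data-tconst of the wlb_wrapper span inside a td with class "title"
tconsts = [s.get('data-tconst') for s in soup.select('td.title span.wlb_wrapper')]  # -> ['tt0111161']

Whether the attribute selector a[title] is available depends on the Beautiful Soup / soupsieve version installed, so treat this as an optional refinement rather than a guaranteed drop-in.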
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21592012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3216890/']}
jdg_381113
stackexchange
llm_judgeable_groundtruth_similarity
47780355
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to start CUDA Programming with C++ and installed the toolkit v9.0 from the official Nvidia website. I ran deviceQuery.cpp on VS 2017 and everything worked perfectly: CUDA Device Query (Runtime API) version (CUDART static linking)Detected 1 CUDA Capable device(s)Device 0: "GeForce GTX 960M" CUDA Driver Version / Runtime Version 9.0 / 9.0 CUDA Capability Major/Minor version number: 5.0 Total amount of global memory: 2048 MBytes (2147483648 bytes) ( 5) Multiprocessors, (128) CUDA Cores/MP: 640 CUDA Cores GPU Max Clock rate: 1176 MHz (1.18 GHz) Memory Clock rate: 2505 Mhz Memory Bus Width: 128-bit L2 Cache Size: 2097152 bytes Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096) Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 2048 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 1 copy engine(s) Run time limit on kernels: Yes Integrated GPU sharing Host Memory: No Support host page-locked memory mapping: Yes Alignment requirement for Surfaces: Yes Device has ECC support: Disabled CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model) Device supports Unified Addressing (UVA): Yes Supports Cooperative Kernel Launch: No Supports MultiDevice Co-op Kernel Launch: No Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0 Compute Mode: < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 9.0, NumDevs = 1Result = PASS Then I tried running bandwidthTest.cu and I have some compile errors: Severity Code Description File Line Category Suppression StateError expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 504 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 505 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 506 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 538 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 1043 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 1558 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 2371 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual 
Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\type_traits 2371 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xutility 543 Error class "std::enable_if<<error-constant>, int>" has no member "type" C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xtr1common 58 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xutility 3135 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xutility 3662 Error class "std::enable_if<<error-constant>, void>" has no member "type" C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xtr1common 58 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xmemory0 390 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xmemory0 1002 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xmemory0 1322 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xstring 1718 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xutility 298 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\vector 495 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\algorithm 278 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\memory 1540 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\memory 1547 Error constant value is not known C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\memory 2482 Error expression must have a constant value C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\memory 2582 Error more than one instance of overloaded function "std::_Deallocate_plain" matches the argument list: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xstring 1780 Error more than one instance of overloaded function "std::_Deallocate_plain" matches the argument list: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xstring 1780 Error more than one instance of overloaded function "std::_Deallocate_plain" matches the argument list: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xstring 1780 Error more than one instance of overloaded function "std::_Deallocate_plain" matches the argument list: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\include\xstring 1780 Error MSB3721 The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin\nvcc.exe" 
-gencode=arch=compute_30,code=\"sm_30,compute_30\" -gencode=arch=compute_35,code=\"sm_35,compute_35\" -gencode=arch=compute_37,code=\"sm_37,compute_37\" -gencode=arch=compute_50,code=\"sm_50,compute_50\" -gencode=arch=compute_52,code=\"sm_52,compute_52\" -gencode=arch=compute_60,code=\"sm_60,compute_60\" -gencode=arch=compute_70,code=\"sm_70,compute_70\" --use-local-env --cl-version 2017 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.12.25827\bin\HostX86\x64" -x cu -I./ -I../../common/inc -I./ -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\/include" -I../../common/inc -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include" -G --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -cudart static -Xcompiler "/wd 4819" -g -DWIN32 -DWIN32 -D_MBCS -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /FS /Zi /RTC1 /MTd " -o x64/Debug/bandwidthTest.cu.obj "C:\ProgramData\NVIDIA Corporation\CUDA Samples\v9.0\1_Utilities\bandwidthTest\bandwidthTest.cu"" exited with code 1. C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\VC\VCTargets\BuildCustomizations\CUDA 9.0.targets 707 I am just compiling the existing bandwidth.cu in 1_Utilities after installing the toolkit and I am using VS 2017. I have searched a long time for a solution on the Internet but I can't seem to find anything. Any help is appreciated. EDIT: I have installed a side-by-side MSVC toolset: https://blogs.msdn.microsoft.com/vcblog/2017/11/15/side-by-side-minor-version-msvc-toolsets-in-visual-studio-2017/ Now most of my errors are gone but the last one still remains (the long one). This seems to be an error of a different kind. EDIT 2: It seems that the line -Bv in the command line was creating the error. I removed it and now all my projects compile successfully. This seems to be a bug with the new version of VC++ and will probably be fixed soon. Now provide the response and nothing else.
To summarize what I did:In the Visual Studio 2017 installer select Modify then at the top click Individual Components , scroll down to Compilers, build tools, and runtimes and check VC++ 2017 version 15.4 v14.11 toolset (that is going to check Visual C++ 2017 Redistributable Update as well) and then you modify it. After that you go to a specific location. For me it was here: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\14.11 and copy the file with the .props -ending to your solution. In VS you right click your project, select Unload Project and right click again to select Edit -project name- . Here you look for the line that says <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" /> and above that line you add <Import Project="$(SolutionDir)\Microsoft.VCToolsVersion.14.11.props" /> You save the file and reload the project and you should be good to go.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47780355', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9089988/']}
jdg_381114
stackexchange
llm_judgeable_groundtruth_similarity
155933
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I saw some kind of workflow in vim: Vim had a file open. Then some combination of keys made vim disappear and the user was in the command line working in cli mode, then opened another file and then suddenly returned to the previously opened file exactly at the place/line he was. It reminded me of the way we switch among windows in Windows. Does anyone know how this workflow is done in vim? Now provide the response and nothing else.
There are several options to do so: You can use a terminal multiplexer like screen or tmux. In screen, for example, the shortcut Ctrl + a, then a, has the same functionality as Alt + Tab in graphical environments: switch to the last screen. Or you can use vim's internal function: type :!command in vim's command mode, for example :!ls -l . After the command finishes, press Enter to switch back to vim. There is one more option: job control. Press Ctrl + z to stop the current process (vim). You will find yourself in a terminal. To bring the stopped process back to the foreground, type fg . For me, I prefer screen. I have an unwritten rule for myself: "Always open a screen."
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/155933', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/42132/']}
jdg_381115
stackexchange
llm_judgeable_groundtruth_similarity
21927445
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I register a Custom Post Type, and I don't want it to have its own menu, instead I want to place it as a submenu of an existing admin menu item called my-custom-parent-page . Here's my code: register_post_type('my_custom_post_type', array( 'labels' => array( 'name' => __('Books', 'mcpt'), 'singular_name' => __('Book', 'mcpt'), ), 'supports' => array('title', 'editor'), 'show_ui' => true, 'show_in_nav_menus' => false, 'show_in_menu' => 'my-custom-parent-page', )); It works, meaning that it's properly located under the menu my-custom-parent-page , however now when I click on the parent menu (i.e. my-custom-parent-page ) it points me to the my_custom_post_type page... Any help? Now provide the response and nothing else.
Place a Custom-Post-Type in a submenu of an existing parent page

According to the Codex, this is a known and expected behavior: "Note: When using 'some string' to show as a submenu of a menu page created by a plugin, this item will become the first submenu item, and replace the location of the top level link." Source: https://codex.wordpress.org/Function_Reference/register_post_type#Arguments (see the "show_in_menu" section). Here is the end of the quote, which offers a solution: "If this isn't desired, the plugin that creates the menu page needs to set the add_action priority for admin_menu to 9 or lower." So this is quite simple to solve. However, in my case I couldn't change the priority of the parent page because it is generated by a third-party library. Therefore I came up with this solution:

// Move the "example_cpt" Custom-Post-Type to be a submenu of the "example_parent_page_id" admin page.
add_action('admin_menu', 'fix_admin_menu_submenu', 11);
function fix_admin_menu_submenu() {
    // Add "Example CPT" Custom-Post-Type as submenu of the "Example Parent Page" page
    add_submenu_page('example_parent_page_id', 'Example CPT', 'Example CPT', 'edit_pages', 'edit.php?post_type=example_cpt');
}

Please note the priority 11, and also that when registering the Custom-Post-Type I set the "show_in_menu" parameter to false, so we can add it to the menu manually via add_submenu_page as shown above.

Properly set the Custom-Post-Type submenu entry as "active"

Now, the above solution works fine, however when creating/editing a post of the "example_cpt" Custom-Post-Type, it is not set as active and the submenu is not unfolded. Here is how to make sure that it is set as active, as well as the submenu in which it resides, when creating/editing a post of the "example_cpt" Custom-Post-Type:

// Set the "example_parent_page_id" submenu as active/current when creating/editing a "example_cpt" post
add_filter('parent_file', 'fix_admin_parent_file');
function fix_admin_parent_file($parent_file){
    global $submenu_file, $current_screen;

    // Set correct active/current menu and submenu in the WordPress Admin menu for the "example_cpt" Add-New/Edit/List
    if($current_screen->post_type == 'example_cpt') {
        $submenu_file = 'edit.php?post_type=example_cpt';
        $parent_file = 'example_parent_page_id';
    }
    return $parent_file;
}

Fine-tuning: Rename the first submenu entry

Furthermore, I also wanted the first menu entry of my submenu to be named differently from the parent name. By default, and using the code above, this is what we have:

- Example Parent Page
-- Example Parent Page
-- Example CPT

So as you can see, the first menu entry of the submenu is a duplicate of the parent menu, and this is the default WordPress behavior. I wanted to rename this duplicate entry to something different, much like WordPress does with the default menus (for example "Posts" and the submenu entry "All Posts", which both point to the same page but are named differently). Here is how to rename the first submenu entry:

add_action('admin_menu', 'rename_first_submenu_entry', 11);
function rename_first_submenu_entry() {
    // Rename first submenu entry (duplicate of parent menu) from "Example Parent Page" to "Submenu Text"
    add_submenu_page('example_parent_page_id', 'Example Parent Page', 'Submenu Text', 'edit_pages', 'example_parent_page_id');
}

Please note the priority 11, so it is renamed after it has been created. And now we have:

- Example Parent Page
-- Submenu Text
-- Example CPT

Please note that "Submenu Text" points to the same location as "Example Parent Page".
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21927445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2327283/']}
jdg_381116
stackexchange
llm_judgeable_groundtruth_similarity
22967895
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written a module in angularJS that encapsulates all the backend communications. For greater flexibility I have the api prefix as a constant value on the module (could be value since I am not using it in the config phase).so something like angular.module('myapp.data').constant('apiPrefix', '/api/data'); Now I want to use this module from two different applications. One uses /api1/data and the other one /api2/data and I would like to change this during the config phase of the application.I know how to do that with a provider, but having a provider to hold a value seems like an overkill to me. Can I modify used modules constants or values from the application config phase? something like: angular.module("data", []).value('apiPrefix', '/api/data').factory('display', function(apiPrefix){ return { pref: function(){ console.log(apiPrefix); return apiPrefix; } }});angular.module("myApp",['data']).config(['apiPrefix', function(prefix){ prefix = 'https:/api/data'; }]).controller("Example", function($scope, display) { $scope.prefix = display.pref;}); Now provide the response and nothing else.
To override the module values, you can redefine the Angular value in later modules. I believe it should not be done at module config time.

angular.module("data", [])
  .value('apiPrefix', '/api/data')
  .factory('Display', function(apiPrefix){
    return {
      pref: function(){
        return apiPrefix;
      }
    };
  });

angular.module('myapp', ['data'])
  .value('apiPrefix', '/api2/data')
  .controller('MainCtrl', function($scope, Display) {
    $scope.name = Display.pref();
  });

See the plunker here: http://plnkr.co/edit/k806WE — the same thing is applicable for Angular constants too.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22967895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/465311/']}
jdg_381117
stackexchange
llm_judgeable_groundtruth_similarity
3850
Below is a question asked on the forum reverseengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I have a kernel module without symbols, I'd typically first open it in IDA and give names to some of the subroutines (those I'm interested in). Since I prefer my kernel debugging with plain WinDbg (and not the IDA-integrated WinDbg), I'd like WinDbg to recognize the names IDA (and me) gave to those addresses. That way, a) I could break on those functions by name, change variables by name, and b) WinDbg's output and views would read better (in stack traces etc.). Unfortunately, IDA has no "create PDB" feature, and I don't even see a non-PDB way of importing addresses into WinDbg. Ideas, anyone? Now provide the response and nothing else.
This page contains an IDC script to dump the names and a WinDbg extension to load those names into WinDbg.

Edit: To address the comment by @OzgurH — yes, the IDC script as well as AddSyntheticSymbol are slow. In fact, getting a list of names along with their boundaries from IDC is tedious (it was also done in IDA Free 5, which hasn't been available for some time; only IDA Free 7 is available now, and it is 64-bit only, so I haven't checked it much). So I simply wrote another WinDbg extension and leveraged the WinDbg script execution command line to add the correct names and sizes. Also, by using this method I can have a reusable database of reversed symbols. I have put the source / compile settings / precompiled binary on GitHub here.
{}
{'log_upvote_score': 4, 'links': ['https://reverseengineering.stackexchange.com/questions/3850', 'https://reverseengineering.stackexchange.com', 'https://reverseengineering.stackexchange.com/users/4136/']}
jdg_381118
stackexchange
llm_judgeable_groundtruth_similarity
6149770
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any difference between using new RegExp("regex"); and /same_regex/ to test against a target string? I am asking this question because I got different validating result while use these two approaches. Here is the snippet I used to validate an email field: var email="[email protected]@foo.com";var regex1 = new RegExp("^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$"); var regex2 = /^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$/;//using RegExp objectif(regex1.test(email)) { console.log("email matched regex1"); } else { console.log("email mismatched regex1"); }//using slash notationif(regex2.test(email)) { console.log("email matched regex2"); } else { console.log("email mismatched regex2"); } I got two inconsistent results: email matched regex1email mismatched regex2 I am wondering if there is any difference here or I omitted something in this specific example? For an executable example please refer to here Now provide the response and nothing else.
If you use the constructor to create a new RegExp object instead of the literal syntax, you need to escape the \ properly:

new RegExp("^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$")

This is necessary as in JavaScript any unknown escape sequence \x is interpreted as x. So in this case the \. is interpreted as . .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6149770', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/231010/']}
jdg_381119
stackexchange
llm_judgeable_groundtruth_similarity
13476073
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Oracle database and I'm having problem to get one result from 2 result sets. I have table CASES Create table cases (ID varchar(1), date_entered date, sub_category varchar (5));insert into cases (id, date_entered, sub_category)values('1', to_date('2012/05/03','yyyy/mm/dd'),'Temp1');insert into cases (id, date_entered, sub_category)values('2', to_date('2012/06/01','yyyy/mm/dd'),'Temp2');insert into cases (id, date_entered, sub_category)values('3', to_date('2012/03/15','yyyy/mm/dd'),'Temp3');insert into cases (id, date_entered, sub_category)values('4', to_date('2012/03/01','yyyy/mm/dd'),'Call1');insert into cases (id, date_entered, sub_category)values('5', to_date('2012/03/08','yyyy/mm/dd'),'Call2');insert into cases (id, date_entered, sub_category)values('6', to_date('2012/02/20','yyyy/mm/dd'),'Call2'); and need to count records BY SUB CATEGORIES, BY MONTH, where one count includes sub_category: Temp1, Temp2, Temp3 other count includes sub_category: Call1, Call2, Call3 I have made query1: With skills AS ( Select sub_category, date_entered, extract(MONTH FROM cases.date_entered) as month_entered, count (*) from cases where SUB_CATEGORY IN('Temp1', 'Temp2', 'Temp3') group by cases.sub_category, cases.date_entered order by to_char(cases.date_entered,'MM') ) select s.month_entered, count(*)as skill_count from skills s group by s.month_entered ORDER BY CAST(s.month_entered AS INTEGER) ASC with result: MONTH_ENTERED SKILL_COUNT3 15 16 1 and query 2: With trainingAS ( Select sub_category, date_entered, extract(MONTH FROM cases.date_entered) as month_entered, count (*) from cases where SUB_CATEGORY IN('Call1', 'Call2', 'Call3') group by cases.sub_category, cases.date_entered order by to_char(cases.date_entered,'MM'))select t.month_entered, count(*)as training_count from training tgroup by t.month_enteredORDER BY CAST(t.month_entered AS INTEGER) ASC with result: MONTH_ENTERED TRAINING_COUNT2 13 2 The result that I need from these 2 queries is: MONTH_ENTERED SKILL_COUNT TRAINING_COUNT 2 0 1 3 1 2 5 1 0 6 1 0 Tried union and left join but nothing gives me this result. Here is sqlfiddle example http://sqlfiddle.com/#!4/504cd/31 . Now provide the response and nothing else.
There are a lot of redundancies in this regular expression of yours (and also, the leaning toothpick syndrome). This, though, should produce results:

$rx = '~
  ^(?:https?://)?                           # Optional protocol
   (?:www[.])?                              # Optional sub-domain
   (?:youtube[.]com/watch[?]v=|youtu[.]be/) # Mandatory domain name (w/ query string in .com)
   ([^&]{11})                               # Video id of 11 characters as capture group 1
  ~x';

$has_match = preg_match($rx, $url, $matches);
// if matching succeeded, $matches[1] would contain the video ID

Some notes:

- use the tilde character ~ as delimiter, to avoid LTS
- use [.] instead of \. to improve visual legibility and avoid LTS. ("Special" characters - such as the dot . - have no effect in character classes (within square brackets))
- to make regular expressions more "readable" you can use the x modifier (which has further implications; see the docs on Pattern modifiers), which also allows for comments in regular expressions
- capturing can be suppressed using non-capturing groups: (?: <pattern> ). This makes the expression more efficient.

Optionally, to extract values from a (more or less complete) URL, you might want to make use of parse_url():

$url = 'http://youtube.com/watch?v=VIDEOID';
$parts = parse_url($url);
print_r($parts);

Output:

Array
(
    [scheme] => http
    [host] => youtube.com
    [path] => /watch
    [query] => v=VIDEOID
)

Validating the domain name and extracting the video ID is left as an exercise to the reader. I gave in to the comment war below; thanks to Toni Oriol, the regular expression now works on short (youtu.be) URLs as well.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13476073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1417420/']}
jdg_381120
stackexchange
llm_judgeable_groundtruth_similarity
16110124
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I display an HTML, with an embedded SVG. I want it to detect mouse events, but it isn't working on the mobile (Android Jellybean). It works fine for a desktop browser. Here is a demonstration page: http://artsyenta.org/misc/ss/j.touchtry1.html . If you drag the mouse over the circles you see a log of mouse entries into the elements named "j_xxx". This works in Firefox and Chrome. Open your Android tablet (I've also tried this on somebody's iPhone, with the same results). Drag your finger over the circles and you get a touchenter event only now and then. Nothing else shows. You can see the whole page and code by viewing page source. It isn't long, the longest part is the SVG definition. The important parts are: $(document).ready(function() { makeSomethingHappen("hello");});function makeSomethingHappen(svg) { placeATop(true); $('[class^=j_]') .on("mouseover", function(event) { logAction(event, this); }) .on("mouseout", function(event) { logAction(event, this); }) .on("touchstart", function(event) { logAction(event, this); }) .on("touchend", function(event) { logAction(event, this); }) .on("touchenter", function(event) { logAction(event, this); }) .on("touchleave", function(event) { logAction(event, this); }) .on("touchEnter", function(event) { logAction(event, this); }) .on("touchLeave", function(event) { logAction(event, this); });}var cntAct = 0;function logAction(ev, ele) { cntAct++; var logSpan = $('#logTrace'); logSpan.html("" + cntAct + ": " + ev.type + " '" + $(ele).attr("class") + "'<br/>" + logSpan.html());} Here is part of the SVG: <svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" id="jsvg" x="0px" y="0px" width="376.247px" height="364.318px" viewBox="140 110 130 120" enable-background="new 0 0 376.247 364.318" xml:space="preserve"> <g id="Layer_1"> <path class="j_aa_" opacity="0.75" fill="#FFFFFF" stroke="#0071BC" stroke-width="0.9925" enable-background="new " d="M224.739,6.55l-6.414,23.957c-10.377-2.785-21.304-2.785-31.671,0L180.232,6.55C194.813,2.63,210.155,2.63,224.739,6.55z"/> [snip] </g></svg> Again, I detect mouse events on a desktop browser but no touch or mouse events for a mobile browser. Is there a missing technique, or there something missing with the mobiles? It fails with the iPhone browser, Google Chrome on Jellybean and Firefox mobile. Thanks in advance,Jerome. Now provide the response and nothing else.
After a lot of research into plain SVG events and RaphaelJS events, I have a workable solution for each. Here is a RaphaelJS solution:

window.onload = function(e) {
    document.getElementById("rsr").addEventListener("mousemove", function(event) {
        logAction(event, this, "m");
    }, false);
    document.getElementById("rsr").addEventListener("touchmove", function(event) {
        if(event.preventDefault) event.preventDefault();
        // perhaps event.targetTouches[0]?
        logAction(event.changedTouches[0], this, "t");
    }, false);
};

The code is not airtight, but illustrates the major points. First, the events must be registered through the addEventListener() call. Using the RaphaelJS onmousemove(), etc., handlers doesn't work on the tablet. Second, for touch events you need to dig into the list of touches. My application only cares about a single finger, and so the [0] event of the list is enough. There are a number of lists -- touches, targetTouches, changedTouches -- so choose an appropriate one. Third, determine if the window needs to bubble the events. I get more sensitivity to touches if I call preventDefault(). I tested this on a Google Nexus, iPad 3 and iPad Mini. Good results. I also have a solution for plain SVG. It is based on this site: http://my.opera.com/MacDev_ed/blog/2010/02/01/how-to-get-all-svg-elements-intersected-by-a-given-rectangle The difference between what I use and the Javascript he uses is that, again, for touches the touches list needs accessing. "root" is the svg element ID for this example. "logTrace" is a span that receives comments.

var root = document.getElementById("root");
var evtt = evt.touches[0];
var rpos = root.createSVGRect();
rpos.x = evtt.clientX;
rpos.y = evtt.clientY;
rpos.width = rpos.height = 1;
var list = root.getIntersectionList(rpos, null);
var maxItemId = list.length <= 0 ? "(no match)" : list[list.length - 1].id;
document.getElementById("logTrace").innerHTML = "screen: (" + evtt.clientX + ", " + evtt.clientY + ") ? uu(" + maxItemId + "): (" + uupos.x.toFixed(0) + "," + uupos.y.toFixed(0) + ")";

I've tested this solution on a Nexus and an iPad successfully. However, it behaves badly on an iPad Mini -- why behave differently on two iPad devices? I also noticed that the "plain svg" solution doesn't seem to detect as accurately as the RaphaelJS version. Near the edges of my SVG elements the detection just isn't very good with the plain svg detection. I'm consistently getting good results for the RaphaelJS use. OTOH, the RaphaelJS use is sensitive to the SVG having (fill:none). The plain SVG doesn't care if (fill:none) is set in an element. Choose your poison.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16110124', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1149830/']}
jdg_381121
stackexchange
llm_judgeable_groundtruth_similarity
2913778
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm trying to join two independent forms (login and register) in thesame page. My idea is (just looking at the signin form): Create an action that shows both forms (partials): public function executeLoginAndRegister(sfWebRequest $request){ $this->form_signin = $this->getUser()->getAttribute('form_signin');} Each partial calls to its action: form action="php?> echo url_for('@sf_guard_signin') ?>" method="post"> In the actions i write this code public function executeSignin($request){ //... $this->form = new $MyFormclass(); if ($this->form->isValid()) { //... }else{ // save the form to show the error messages. $this->getUser()->setAttribute('form_signin', $this->form); return $this->forward('sfGuardAuth', 'loginAndRegister'); }} It works, but, for example, if i execute LoginAndRegister and submitincorrectly the signin form and I go to another page and then return toLoginAndRegister, i will find the submiting error messages... If i execute LoginAndRegister and submit incorrectly the signin form andopen another browser tab, i will find the submiting error messages inthe signin form of the second tab... Any idea? any better approach? Now provide the response and nothing else.
getopt_long will do the job, here is an example from http://www.gnu.org/s/libc/manual/html_node/Getopt-Long-Option-Example.html

#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

/* Flag set by ‘--verbose’. */
static int verbose_flag;

int
main (argc, argv)
     int argc;
     char **argv;
{
  int c;

  while (1)
    {
      static struct option long_options[] =
        {
          /* These options set a flag. */
          {"verbose", no_argument, &verbose_flag, 1},
          {"brief",   no_argument, &verbose_flag, 0},
          /* These options don't set a flag.
             We distinguish them by their indices. */
          {"add",     no_argument,       0, 'a'},
          {"append",  no_argument,       0, 'b'},
          {"delete",  required_argument, 0, 'd'},
          {"create",  required_argument, 0, 'c'},
          {"file",    required_argument, 0, 'f'},
          {0, 0, 0, 0}
        };
      /* getopt_long stores the option index here. */
      int option_index = 0;

      c = getopt_long (argc, argv, "abc:d:f:",
                       long_options, &option_index);

      /* Detect the end of the options. */
      if (c == -1)
        break;

      switch (c)
        {
        case 0:
          /* If this option set a flag, do nothing else now. */
          if (long_options[option_index].flag != 0)
            break;
          printf ("option %s", long_options[option_index].name);
          if (optarg)
            printf (" with arg %s", optarg);
          printf ("\n");
          break;

        case 'a':
          puts ("option -a\n");
          break;

        case 'b':
          puts ("option -b\n");
          break;

        case 'c':
          printf ("option -c with value `%s'\n", optarg);
          break;

        case 'd':
          printf ("option -d with value `%s'\n", optarg);
          break;

        case 'f':
          printf ("option -f with value `%s'\n", optarg);
          break;

        case '?':
          /* getopt_long already printed an error message. */
          break;

        default:
          abort ();
        }
    }

  /* Instead of reporting ‘--verbose’ and ‘--brief’ as they are encountered,
     we report the final status resulting from them. */
  if (verbose_flag)
    puts ("verbose flag is set");

  /* Print any remaining command line arguments (not options). */
  if (optind < argc)
    {
      printf ("non-option ARGV-elements: ");
      while (optind < argc)
        printf ("%s ", argv[optind++]);
      putchar ('\n');
    }

  exit (0);
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2913778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248959/']}
jdg_381122
stackexchange
llm_judgeable_groundtruth_similarity
9788962
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I get a dump of all local & session variables when an exception occurs? I was thinking of writing some sort of reflection based function that would interrogate the calling function & create a dump of variables & values. Is there an existing library that I can use? UPDATE After speaking to a colleague, I was pointed to AOP or Aspect Oriented Programming. Here is what I understand ... Using AOP, one would simple decorate the methods & classes with certain attributes. AOP framework then injects code in or around these classes & methods. There are two separate kinds of framework, one that injects code & then compiles the assembly & the second simply uses reflection & traps the call which you have decorated and wraps whatever code around the method at runtime. I hope all that makes sense. I will be doing more research on this & post my approach. Thanks guys ... Now provide the response and nothing else.
I'm not sure if this is what you're looking for. But if you're in a catch-block you can get all fields and properties of this class in the following way:

try
{
    int zero = 0;
    double d = 1 / zero;   // division by a literal 0 would not even compile, so use a variable to force the exception
}
catch (Exception ex)
{
    var trace = new System.Diagnostics.StackTrace();
    var frame = trace.GetFrame(1);
    var methodName = frame.GetMethod().Name;

    var properties = this.GetType().GetProperties();
    var fields = this.GetType().GetFields(); // public fields

    // for example:
    foreach (var prop in properties)
    {
        var value = prop.GetValue(this, null);
    }
    foreach (var field in fields)
    {
        var value = field.GetValue(this);
    }
    foreach (string key in Session)
    {
        var value = Session[key];
    }
}

I've shown how to get the method name where the exception occurred only for the sake of completeness.

Type.GetProperties Method
Type.GetFields Method
PropertyInfo.GetValue Method
FieldInfo.GetValue Method
StackTrace Class

With BindingFlags you can specify constraints, for example that you only want properties of this class and not inherited ones: Using GetProperties() with BindingFlags.DeclaredOnly in .NET Reflection. Of course, the above should give you only a starting point for how to do it manually, and you should encapsulate it all into classes. I've never used it myself so it's untested.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9788962', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/268730/']}
jdg_381123
stackexchange
llm_judgeable_groundtruth_similarity
1871358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The WPF Canvas has a coordinate system starting at (0,0) at the top-left of the control. For example, setting the following will make my control appear on the top-left: <Control Canvas.Left="0" Canvas.Top="0"> How can I change it to the standard cartesian coordinates ? Basically: (0,0) at center flip Y I noticed this post is similar, but it does not talk about translating the coordinate system. I tried adding a TranslateTransform, but I can't make it work. Now provide the response and nothing else.
There is no need to create a custom Panel. Canvas will do just fine. Simply wrap it inside another control (such as a border), center it, give it zero size, and flip it with a RenderTransform: <Border> <Canvas HorizontalAlignment="Center" VerticalAlignment="Center" Width="0" Height="0" RenderTransform="1 0 0 -1 0 0"> ... </Canvas></Border> You can do this and everything in the canvas will still appear, except (0,0) will be at the center of the containing control (in this case, the center of the Border) and +Y will be up instead of down. Again, there is no need to create a custom panel for this.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1871358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12082/']}
jdg_381124
stackexchange
llm_judgeable_groundtruth_similarity
25283797
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code: class UsersViewSet(viewsets.ModelViewSet): model = Users permission_classes = (IsAuthenticated,) def update(self, request, *args, **kwargs): return super(UsersViewSet, self).update(request, *args, **kwargs) The question is: how can I add additional Permission only for update method? (need to get isAuthenticated + Permission) overwrite permissions only for update method? (need to get only Permission without isAuthenticated)other methods in viewset should have IsAuthenticated permission Can I make it with decorator?Or anything else? Wanna get something like that: @permission_classes((IsAuthenticated, AdditionalPermission ))def update: pass But if i write this code the second permission is not checked through request Now provide the response and nothing else.
LATER EDIT As it seems that DRF decorators don't really work (at least not for me), this is the best solution I could come up with: def get_permissions(self): # Your logic should be all here if self.request.method == 'GET': self.permission_classes = [DummyPermission, ] else: self.permission_classes = [IsAuthenticated, ] return super(UsersViewSet, self).get_permissions() This actually works for both cases that you asked, but requires a bit more work. However, I've tested it and it does the job. ORIGINAL ANSWER BELOW There is a small mistake in the docs, you should be sending a list to the decorator (not a tuple). So it should be like this: @permission_classes([IsAuthenticated, AdditionalPermission, ])def update(self, request, *args, **kwargs): pass To answer your questions: how can I add additional Permission only for update method? First of all, you should know that DRF first checks for global permissions (those from the settings file), then for view permissions (declared in permission_classes -- if these exist, they will override global permissions) and only after that for method permissions (declared with the decorator @permission_classes ). So another way to do the above is like this: @permission_classes([AdditionalPermission, ])def update(self, request, *args, **kwargs): pass Since IsAuthenticated is already set on the entire view, it will always be checked BEFORE any other permission. overwrite permissions only for update method? Well, this is hard(er), but not impossible. You can: set the permissions for each method and remove it from the class modify your AdditionalPermission class so that it also checks for user authentication if the method is not update . Good luck.
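For reference, a custom permission such as the AdditionalPermission / DummyPermission referred to above could look roughly like the sketch below (the staff-only rule is an assumption for illustration, not something defined in the question or the answer):

from rest_framework import permissions

class AdditionalPermission(permissions.BasePermission):
    # Hypothetical extra rule layered on top of IsAuthenticated:
    # here, only staff users pass the check.
    def has_permission(self, request, view):
        return bool(request.user and request.user.is_staff)

Any class written this way can be used both in permission_classes and in the @permission_classes([...]) decorator shown above.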
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25283797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1909485/']}
jdg_381125
stackexchange
llm_judgeable_groundtruth_similarity
11422845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This problem is driving me crazy. I'm trying to change the viewController when the user changes the selected "tab" of the segmented control. I've spent a couple hours researching and haven't been able to find an answer that works or is done through storyboard. It really bother me since setting a tab application is so easy, but trying to use the segmented control like the tab application is just not working. I already know how to detect which index is selected in the segmented control. How can I achieve this? Thank you very much. Now provide the response and nothing else.
NOTE: Answer updated with view controller containment code for iOS 5+ including @interface section In an app of mine, I have a view controller with a Segment Control in the Navigation Bar and clicking on the "tabs" switches view controllers. The basic idea is to have an array of view controllers and switch between them using the Segment Index (and the indexDidChangeForSegmentedControl IBAction). Example code (iOS 5 or later) from my app (this is for 2 view controllers but it's trivially extended to multiple view controllers); the code is slightly longer than for iOS 4 but will keep the object graph intact. Also, it uses ARC: @interface MyViewController ()// Segmented control to switch view controllers@property (weak, nonatomic) IBOutlet UISegmentedControl *switchViewControllers;// Array of view controllers to switch between@property (nonatomic, copy) NSArray *allViewControllers;// Currently selected view controller@property (nonatomic, strong) UIViewController *currentViewController;@end@implementation UpdateScoreViewController// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.- (void)viewDidLoad { [super viewDidLoad]; // Create the score view controller ViewControllerA *vcA = [self.storyboard instantiateViewControllerWithIdentifier:@"ViewControllerA"]; // Create the penalty view controller ViewControllerB *vcB = [self.storyboard instantiateViewControllerWithIdentifier:@"ViewControllerB"]; // Add A and B view controllers to the array self.allViewControllers = [[NSArray alloc] initWithObjects:vcA, vcB, nil]; // Ensure a view controller is loaded self.switchViewControllers.selectedSegmentIndex = 0; [self cycleFromViewController:self.currentViewController toViewController:[self.allViewControllers objectAtIndex:self.switchViewControllers.selectedSegmentIndex]];}#pragma mark - View controller switching and saving- (void)cycleFromViewController:(UIViewController*)oldVC toViewController:(UIViewController*)newVC { // Do nothing if we are attempting to swap to the same view controller if (newVC == oldVC) return; // Check the newVC is non-nil otherwise expect a crash: NSInvalidArgumentException if (newVC) { // Set the new view controller frame (in this case to be the size of the available screen bounds) // Calculate any other frame animations here (e.g. for the oldVC) newVC.view.frame = CGRectMake(CGRectGetMinX(self.view.bounds), CGRectGetMinY(self.view.bounds), CGRectGetWidth(self.view.bounds), CGRectGetHeight(self.view.bounds)); // Check the oldVC is non-nil otherwise expect a crash: NSInvalidArgumentException if (oldVC) { // Start both the view controller transitions [oldVC willMoveToParentViewController:nil]; [self addChildViewController:newVC]; // Swap the view controllers // No frame animations in this code but these would go in the animations block [self transitionFromViewController:oldVC toViewController:newVC duration:0.25 options:UIViewAnimationOptionLayoutSubviews animations:^{} completion:^(BOOL finished) { // Finish both the view controller transitions [oldVC removeFromParentViewController]; [newVC didMoveToParentViewController:self]; // Store a reference to the current controller self.currentViewController = newVC; }]; } else { // Otherwise we are adding a view controller for the first time // Start the view controller transition [self addChildViewController:newVC]; // Add the new view controller view to the view hierarchy [self.view addSubview:newVC.view]; // End the view controller transition [newVC didMoveToParentViewController:self]; // Store a reference to the current controller self.currentViewController = newVC; } }}- (IBAction)indexDidChangeForSegmentedControl:(UISegmentedControl *)sender { NSUInteger index = sender.selectedSegmentIndex; if (UISegmentedControlNoSegment != index) { UIViewController *incomingViewController = [self.allViewControllers objectAtIndex:index]; [self cycleFromViewController:self.currentViewController toViewController:incomingViewController]; }}@end Original example (iOS 4 or before): // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.- (void)viewDidLoad { [super viewDidLoad]; // Create the score view controller AddHandScoreViewController *score = [self.storyboard instantiateViewControllerWithIdentifier:@"AddHandScore"]; // Create the penalty view controller AddHandPenaltyViewController *penalty = [self.storyboard instantiateViewControllerWithIdentifier:@"AddHandPenalty"]; // Add Score and Penalty view controllers to the array self.allViewControllers = [[NSArray alloc] initWithObjects:score, penalty, nil]; // Ensure the Score controller is loaded self.switchViewControllers.selectedSegmentIndex = 0; [self switchToController:[self.allViewControllers objectAtIndex:self.switchViewControllers.selectedSegmentIndex]];}#pragma mark - View controller switching and saving- (void)switchToController:(UIViewController *)newVC{ if (newVC) { // Do nothing if we are in the same controller if (newVC == self.currentViewController) return; // Remove the current controller if we are loaded and shown if([self.currentViewController isViewLoaded]) [self.currentViewController.view removeFromSuperview]; // Resize the new view controller newVC.view.frame = CGRectMake(CGRectGetMinX(self.view.bounds), CGRectGetMinY(self.view.bounds), CGRectGetWidth(self.view.bounds), CGRectGetHeight(self.view.bounds)); // Add the new controller [self.view addSubview:newVC.view]; // Store a reference to the current controller self.currentViewController = newVC; }}- (IBAction)indexDidChangeForSegmentedControl:(UISegmentedControl *)sender { NSUInteger index = sender.selectedSegmentIndex; if (UISegmentedControlNoSegment != index) { UIViewController *incomingViewController = [self.allViewControllers objectAtIndex:index]; [self switchToController:incomingViewController]; }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11422845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1301754/']}
jdg_381126
stackexchange
llm_judgeable_groundtruth_similarity
50711530
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in a situation where I need to train a model to predict a scalar value, and it's important to have the predicted value be in the same direction as the true value, while the squared error being minimum. What would be a good choice of loss function for that? For example: Let's say the predicted value is -1 and the true value is 1. The loss between the two should be a lot greater than the loss between 3 and 1, even though the squared error of (3, 1) and (-1, 1) is equal. Thanks a lot! Now provide the response and nothing else.
This turned out to be a really interesting question - thanks for asking it! First, remember that you want your loss functions to be defined entirely in terms of differentiable operations, so that you can back-propagate through it. This means that any old arbitrary logic won't necessarily do. To restate your problem: you want to find a differentiable function of two variables that increases sharply when the two variables take on values of different signs, and more slowly when they share the same sign. Additionally, you want some control over how sharply these values increase, relative to one another. Thus, we want something with two configurable constants. I started constructing a function that met these needs, but then remembered one you can find in any high school geometry text book: the elliptic paraboloid ! The standard formulation doesn't meet the requirement of sign agreement symmetry, so I had to introduce a rotation . The plot above is the result. Note that it increases more sharply when the signs don't agree, and less sharply when they do, and that the input constants controlling this behaviour are configurable. The code below is all that was needed to define and plot the loss function. I don't think I've ever used a geometric form as a loss function before - really neat. import numpy as npimport matplotlib.pyplot as pltfrom mpl_toolkits.mplot3d import Axes3Dfrom matplotlib import cmdef elliptic_paraboloid_loss(x, y, c_diff_sign, c_same_sign): # Compute a rotated elliptic paraboloid. t = np.pi / 4 x_rot = (x * np.cos(t)) + (y * np.sin(t)) y_rot = (x * -np.sin(t)) + (y * np.cos(t)) z = ((x_rot**2) / c_diff_sign) + ((y_rot**2) / c_same_sign) return(z)c_diff_sign = 4c_same_sign = 2a = np.arange(-5, 5, 0.1)b = np.arange(-5, 5, 0.1)loss_map = np.zeros((len(a), len(b)))for i, a_i in enumerate(a): for j, b_j in enumerate(b): loss_map[i, j] = elliptic_paraboloid_loss(a_i, b_j, c_diff_sign, c_same_sign)fig = plt.figure()ax = fig.gca(projection='3d')X, Y = np.meshgrid(a, b)surf = ax.plot_surface(X, Y, loss_map, cmap=cm.coolwarm, linewidth=0, antialiased=False)plt.show()
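If you then want to train against this surface, the same formula can be wrapped as a framework loss. The sketch below assumes a tf.keras setup (the function and model names are placeholders, not part of the original code) and simply re-expresses the paraboloid with tensor operations so gradients flow through the predictions:

import numpy as np
import tensorflow as tf

def make_paraboloid_loss(c_diff_sign=4.0, c_same_sign=2.0):
    t = np.pi / 4
    cos_t, sin_t = float(np.cos(t)), float(np.sin(t))

    def loss(y_true, y_pred):
        # Same rotated elliptic paraboloid as above, with x = y_true, y = y_pred.
        x_rot = (y_true * cos_t) + (y_pred * sin_t)
        y_rot = (y_true * -sin_t) + (y_pred * cos_t)
        z = ((x_rot ** 2) / c_diff_sign) + ((y_rot ** 2) / c_same_sign)
        return tf.reduce_mean(z)

    return loss

# Hypothetical usage:
# model.compile(optimizer="adam", loss=make_paraboloid_loss())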
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50711530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9091684/']}
jdg_381127
stackexchange
llm_judgeable_groundtruth_similarity
21169
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: need to search for something in entire content I am trying: find . | xargs grep word I get error: xargs: unterminated quote How to achieve this? Now provide the response and nothing else.
xargs expects input in a format that no other command produces, so it's hard to use effectively. What's going wrong here is that you have a file whose name must be quoted on input to xargs (probably containing a ' ). If your grep supports the -r or -R option for recursive search, use it. grep -r word . Otherwise, use the -exec primary of find . This is the usual way of achieving the same effect as xargs , except without constraints on file names. Reasonably recent versions of find allow you to group several files in a single call to the auxiliary command. Passing /dev/null to grep ensures that it will show the file name in front of each match, even if it happens to be called on a single file. find . -type f -exec grep word /dev/null {} + Older versions of find (on older systems or OpenBSD, or reduced utilities such as BusyBox) can only call the auxiliary command on one file at a time. find . -type f -exec grep word /dev/null {} \; Some versions of find and xargs have extensions that let them communicate correctly, using null characters to separate file names so that no quoting is required. These days, only OpenBSD has this feature without having -exec … {} + . find . -type f -print0 | xargs -0 grep word /dev/null
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/21169', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/10968/']}
jdg_381128
stackexchange
llm_judgeable_groundtruth_similarity
6058677
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have several files of data that look like this: X code year month day pp 1 4515 1953 6 1 0 2 4515 1953 6 2 0 3 4515 1953 6 3 0 4 4515 1953 6 4 0 5 4515 1953 6 5 3.5 Sometimes there is data missing, but I don't have NAs, the rows simply don't exist. I need to create NAs when the data is missing. I though I could start by identifying when that occurs by converting it to a zoo object and check for strict regularity (I never used zoo before), I used the following code: z.date<-paste(CET$year, CET$month, CET$day, sep="/")z <- read.zoo(CET, order.by= z.date )reg<-is.regular(z, strict = TRUE) But the answer is always true! Can anyone tell me why is not working? Or even better, tell me a way to create NAs when the data is missing (with or without zoo package)? thanks Now provide the response and nothing else.
The seq function has some interesting features that you can use to easily generate a complete sequence of dates. For example, the following code can be used to generate a sequence of dates starting on April 25: Edit: This feature is documented in ?seq.Date start = as.Date("2011/04/25")full <- seq(start, by='1 day', length=15)full [1] "2011-04-25" "2011-04-26" "2011-04-27" "2011-04-28" "2011-04-29" [6] "2011-04-30" "2011-05-01" "2011-05-02" "2011-05-03" "2011-05-04"[11] "2011-05-05" "2011-05-06" "2011-05-07" "2011-05-08" "2011-05-09" Now use the same principle to generate some data with "missing" rows, by generating the sequence for every 2nd day: partial <- data.frame( date=seq(start, by='2 day', length=6), value=1:6)partial date value1 2011-04-25 12 2011-04-27 23 2011-04-29 34 2011-05-01 45 2011-05-03 56 2011-05-05 6 To answer your question, one can use vector subscripting or the match function to create a dataset with NAs: with(partial, value[match(full, date)]) [1] 1 NA 2 NA 3 NA 4 NA 5 NA 6 NA NA NA NA To combine this result with the original full data: data.frame(Date=full, value=with(partial, value[match(full, date)])) Date value1 2011-04-25 12 2011-04-26 NA3 2011-04-27 24 2011-04-28 NA5 2011-04-29 36 2011-04-30 NA7 2011-05-01 48 2011-05-02 NA9 2011-05-03 510 2011-05-04 NA11 2011-05-05 612 2011-05-06 NA13 2011-05-07 NA14 2011-05-08 NA15 2011-05-09 NA
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6058677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/747311/']}
jdg_381129
stackexchange
llm_judgeable_groundtruth_similarity
13128651
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently learning C++ and trying to get used to the standard data structures that come with it, but they all seem very bare. For example, list doesn't have simple accessors like get(index) that I'm used to in Java. Methods like pop_back and pop_front don't return the object in the list either. So you have to do something like: Object blah = myList.back();myList.pop_back(); Instead of something simple like: Object blah = myList.pop_back(); In Java, just about every data structure returns the object back so you don't have to make these extra calls. Why is the STL containers for C++ designed like this? Are common operations like this that I do in Java not so common for C++? edit: Sorry, I guess my question was worded very poorly to get all these downvotes, but surely somebody could have edited it. To clarify, I'm wondering why the STL data structures are created like this in comparison to Java. Or am I using the wrong set of data structures to begin with? My point is that these seem like common operations you might use on (in my example) a list and surely everybody does not want to write their own implementation each time. edit: reworded the question to be more clear. Now provide the response and nothing else.
Quite a few have already answered the specific points you raised, so I'll try to take a look for a second at the larger picture. One of the most fundamental differences between Java and C++ is that C++ works primarily with values, while Java works primarily with references. For example, if I have something like: class X { // ...};// ...X x; In Java, x is only a reference to an object of type X. To have an actual object of type X for it to refer to, I normally have something like: X x = new X(); . In C++, however, X x; , by itself, defines an object of type X , not just a reference to an object. We can use that object directly, not via a reference (i.e., a pointer in disguise). Although this may initially seem like a fairly trivial difference, the effects are substantial and pervasive. One effect (probably the most important in this case) is that in Java, returning a value does not involve copying the object itself at all. It just involves copying a reference to the value. This is normally presumed to be extremely inexpensive and (probably more importantly) completely safe -- it can never throw an exception. In C++, you're dealing directly with values instead. When you return an object, you're not just returning a reference to the existing object, you're returning that object's value, usually in the form of a copy of that object's state. Of course, it's also possible to return a reference (or pointer) if you want, but to make that happen, you have to make it explicit. The standard containers are (if anything) even more heavily oriented toward working with values rather than references. When you add a value to a collection, what gets added is a copy of the value you passed, and when you get something back out, you get a copy of the value that was in the container itself. Among other things, this means that while returning a value might be cheap and safe just like in Java, it can also be expensive and/or throw an exception. If the programmer wants to store pointers, s/he can certainly do so -- but the language doesn't require it like Java does. Since returning an object can be expensive and/or throw, the containers in the standard library are generally built around ensuring they can work reasonably well if copying is expensive, and (more importantly) work correctly, even when/if copying throws an exception. This basic difference in design accounts not only for the differences you've pointed out, but quite a few more as well.
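To make the container point concrete, here is a minimal sketch (mine, not the original author's) showing that a standard container stores copies unless you explicitly opt into pointers:

#include <iostream>
#include <vector>

struct X { int value = 0; };

int main() {
    X x;                          // an actual object, not a reference
    std::vector<X> xs;
    xs.push_back(x);              // the vector stores a copy of x
    xs[0].value = 42;             // modifies the copy held by the vector
    std::cout << x.value << '\n';       // 0  - the original is untouched
    std::cout << xs[0].value << '\n';   // 42

    std::vector<X*> ptrs;         // reference-like behaviour only if you ask for it
    ptrs.push_back(&x);
    ptrs[0]->value = 7;
    std::cout << x.value << '\n';       // 7  - now the original changes
}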
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13128651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/845128/']}
jdg_381130
stackexchange
llm_judgeable_groundtruth_similarity
26734933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to set up a git server with git-over-http (smart http), but the resources available online are a mess, mixing in other apache configuration, missing details or not being explicit enough. I am answering this question myself based on what I found lacking in the available resources. Now provide the response and nothing else.
First it is necessary to understand that there are 2 components to git-over-http: git and apache. These two are connected through a script with the name of git-http-backend. The challenge is to configure the interface between these two components, so that http requests to git are forwarded by apache. Note: Security is outside the scope of this guide. Start out by installing git and apache2 using the package manager of your distribution. Add the modules needed by apache to enable git-over-http. These are cgi, alias and env $ a2enmod cgi alias env Copy the following into /etc/apache2/httpd.conf (without removing whatever else it contains) <VirtualHost *:80> SetEnv GIT_PROJECT_ROOT /data/git SetEnv GIT_HTTP_EXPORT_ALL SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER ScriptAliasMatch \ "(?x)^/(.*/(HEAD | \ info/refs | \ objects/(info/[^/]+ | \ [0-9a-f]{2}/[0-9a-f]{38} | \ pack/pack-[0-9a-f]{40}\.(pack|idx)) | \ git-(upload|receive)-pack))$" \ "/usr/lib/git/git-http-backend/$1" Alias /git /data/git <Directory /usr/lib/git> Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch AllowOverride None Order allow,deny Allow from all </Directory></VirtualHost> Now replace the 2 occurrences of /data/git with the parent directory of your git repos on the server (don't worry if you don't have any repos yet, just use the directory where you intend to place it/them) Also replace /usr/lib/git/git-http-backend with the location of git-http-backend on your system, which can be found using $ find / -name git-http-backend It may be that on your system REDIRECT_REMOTE_USER actually overwrites a valid REMOTE_USER . If this setup doesn't work when finished, try removing that line. According to this source, it may be necessary to replace the last two lines within the Directory tag by Require all granted for apache 2.4 and above. Restart the apache server: $ apache2ctl -k graceful Now the apache server is set up, but we're not done yet, there are some important parts of setting up the repos that will affect whether this setup works or not. Set up the repo: $ mkdir myrepo.git$ cd myrepo.git$ git init --bare --shared$ cp hooks/post-update.sample hooks/post-update$ git update-server-info$ chown -R wwwrun:www Here it is important to understand that the last line changes the owner of the repo to the apache2 user. This user may be different on your system. To find the apache user, execute $ ps aux | egrep '(apache|httpd)' . Then to find the group name of the user, execute $ id user-name . On my system the user is wwwrun and the group www . Replace accordingly. Use the repo In order to use the repo, you need to know the url. For this setup the url is http://server.domain/myrepo.git Note: http s will not work. When accessing the repo from a client, you just add it as a remote: $ git remote add origin http://server.domain/myrepo.git Then you can interact with it like any other git repo.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26734933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/584405/']}
jdg_381131
stackexchange
llm_judgeable_groundtruth_similarity
40384458
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand what the different metrics that Spark Streaming outputs mean and I am slightly confused what is the difference between the Processing Time , Total Delay and Processing Delay of the last batch ? I have looked at the Spark Streaming guide which mentions the Processing Time as a key metric for figuring if the system is falling behind, but other places such as " Pro Spark Streaming: The Zen of Real-Time Analytics Using Apache Spark " speak about using Total Delay and Processing Delay . I have failed to find any documentation that lists all the metrics produced by Spark Streaming with explanation what each one of them means. I would appreciate if someone can outline what each of these three metrics means or point me to any resources that can help me understand that. Now provide the response and nothing else.
Let's break down each metric. For that, let's define a basic streaming application which reads a batch at a given 4 second interval from some arbitrary source, and computes the classic word count: inputDStream.flatMap(line => line.split(" ")) .map(word => (word, 1)) .reduceByKey(_ + _) .saveAsTextFile("hdfs://...") Processing Time : The time it takes to compute a given batch for all its jobs, end to end. In our case this means a single job which starts at flatMap and ends at saveAsTextFile , and assumes as a prerequisite that the job has been submitted. Scheduling Delay : The time taken by Spark Streaming scheduler to submit the jobs of the batch. How is this computed? As we've said, our batch reads from the source every 4 seconds. Now let's assume that a given batch took 8 seconds to compute. This means that we're now 8 - 4 = 4 seconds behind, thus making the scheduling delay 4 seconds long. Total Delay : This is Scheduling Delay + Processing Time . Following the same example, if we're 4 seconds behind, meaning our scheduling delay is 4 seconds, and the next batch took another 8 seconds to compute, this means that the total delay is now 8 + 4 = 12 seconds long. A live example from a working Streaming application: We see that: The bottom job took 11 seconds to process. So now the next batches scheduling delay is 11 - 4 = 7 seconds. If we look at the second row from the bottom, we see that scheduling delay + processing time = total delay, in that case (rounding 0.9 to 1) 7 + 1 = 8 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40384458', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3015560/']}
jdg_381132
stackexchange
llm_judgeable_groundtruth_similarity
21084218
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Recently I am studying operating system..I just wanna know: What’s the difference between a system call (like write()) and a standard library function (like printf())? Now provide the response and nothing else.
A system call is a call to a function that is not part of the application but is inside the kernel. The kernel is a software layer that provides you with some basic functionality to abstract the hardware for you. Roughly, the kernel is something that turns your hardware into software. You always ultimately use write() to write anything on a peripheral, whatever kind of device you write to. write() is designed to only write a sequence of bytes, that's all and nothing more. But as write() is considered too basic (you may want to write an integer in base ten, or a float number in scientific notation, etc), different libraries are provided to you by different kinds of programming environments to make this easier for you. For example, the C programming language gives you printf() that lets you write data in many different formats. So, you can understand printf() as a function that converts your data into a formatted sequence of bytes and that calls write() to write those bytes onto the output. But C++ gives you cout ; Java System.out.println , etc. Each of these functions ends in a call to write() (at least on POSIX systems). One thing to know (important) is that such a system call is costly! It is not a simple function call because you need to call something that is outside of your own code and the system must ensure that you are not trying to do nasty things, etc. So it is very common in higher-level print-like functions that some buffering is built-in; such that write is not always called, but your data are kept in some hidden structure and written only when it is really needed or necessary (buffer is full or you really want to see the result of your print). This is exactly what happens when you manage your money. If many people give you 5 bucks each, you won't go to the bank to deposit each one! You keep them in your wallet (this is the print ) up to the point it is full or you don't want to keep them anymore. Then you go to the bank and make a big deposit (this is the write ). And you know that putting 5 bucks into your wallet is much much faster than going to the bank and making the deposit. The bank is the kernel/OS.
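The buffering point is easy to observe with a tiny experiment (a sketch assuming a typical POSIX/glibc environment; exact flushing behaviour depends on whether stdout is a terminal, a pipe or a file):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* write() is the system call: the bytes reach the kernel immediately. */
    write(STDOUT_FILENO, "via write()\n", 12);

    /* printf() first lands in stdio's user-space buffer... */
    printf("via printf()");
    sleep(5);              /* ...so this text may not appear during the pause... */

    /* ...until the buffer is flushed - explicitly here, or automatically at exit. */
    fflush(stdout);
    return 0;
}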
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21084218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2925372/']}
jdg_381133
stackexchange
llm_judgeable_groundtruth_similarity
88858
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: What conditions would be sufficient for a generalization of Cauchy-Davenport for simple groups? I can see two possible difficulties with a generalization for general groups: The sets could both be part of a subgroup of the group. The sets could both be cosets of a normal subgroup. This is impossible for simple groups. Are these the only ways Cauchy-Davenport can fail, or are there other ways? In particular, would it be possible to generalize the proof of Cauchy-Davenport given in http://arxiv.org/pdf/math/0308286v6.pdf with a more general version of the uncertainty principle used in that paper? I haven't used mathoverflow before, so apologies if this question isn't appropriate for this website. Now provide the response and nothing else.
First, a slightly tangential comment regarding what I assume you mean by 'Cauchy--Davenport fails'; I include it for readers potentially unfamiliar with it and since there is also a somewhat common other way to generalize it; cf below. The Cauchy--Davenport Theorem asserts that for $G$ a prime cyclic group of order $p$ one has for nonempty subsets $A, B$ $$|AB| \ge \min \lbrace |A| + |B| - 1, p \rbrace $$ It seems that you mean with 'Cauchy--Davenport fails' that the inequality does not hold (for certain sets) with $p$ in the displayed formula replaced by the order of the group. This is however not the only thing one could call (the analog of) Cauchy--Davenport. Indeed another usage is somewhat established: there is a paper by J.P. Wheeler called 'The Cauchy--Davenport Theorem for finite groups' that asserts the displayed equation for finite groups with the understanding that $p$ is the smallest order of a nonidentity element. This was also obtained by Gy. Karolyi (independently). Now, to the actual question. Yes, there are somewhat other types of sets. For example, take $A=aH$ and $B=bH$ with $b$ in the normalizer of $H$ or $A=aH$ and $B=Hb$, for some (nonnormal) subgroup $H$. However, there are results classifying sets for which the product set is very small. See the blog post of Tao 'An elementary noncommutative Freiman--Kneser Theorem', containing for example the result (originally due to Freiman, cf Seva's answer for details): if $|A \cdot A|< 3 |A|/2$ then $S = A \cdot A^{-1}$ is a subgroup of order $|A \cdot A|$ and $A \subset a S$ and $aS =Sa$. In a more recent blog post ( 'Hamidoune’s Freiman-Kneser theorem for nonabelian groups' ) Tao also discusses recent work of the late Hamidoune answering a question raised in the above mentioned blog post, which sort of classifies subsets for which 'Cauchy--Davenport fails' (in part, but not always normal subgroups arise, so in the simple case there are some reductions); roughly one gets unions of cosets. In that blog post also work of Sanders (unpublished at that time, yet meanwhile available ) is mentioned related to this problem, which methodologically, using Fourier analytic methods, seems closer than Hamidoune's to the uncertainty principle you mention; yet I cannot assess this in detail.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/88858', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21519/']}
jdg_381134
stackexchange
llm_judgeable_groundtruth_similarity
30759204
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a first Excel File multiple Cells in Column C contains the address and the name of a company; I want to keep only the company name. For that, I have another Excel file (I'll call it "Dictionary"), which has a particular structure like the following: Column B : Name that I want to keep.Column C : Various Patterns of the name, delimited with ";".Example : B1 = "Sony", C1="Sony Entertainement;Sony Pictures;Playstation" I need VBA macro reading Dictionary File, then for each pattern (surrounded with anything) replace it with the word I want to keep. My macro would look like : Sub MacroClear() <For each line of my dictionnary> arrayC = split(<cell C of my line>, ";") <For i in range arrayC> Cells.Replace What:="*"&Trim(arrayC(i))&"*", Replacement:=Trim(<cell B of my line>), LookAt:= _ xlPart, SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _ ReplaceFormat:=FalseEnd Sub EDIT - UPDATE : I made a capture of my first Dictionary, it'll be easier to understand the structure : dictionnary http://img11.hostingpics.net/pics/403257dictionnary.png EDIT - UPDATE 2 : I made a screen cap of a "non-cleaned" file, then the result I want at the end. Not cleaned : noclean http://img11.hostingpics.net/pics/418501notcleaned.png Cleaned : clean http://img11.hostingpics.net/pics/221530cleaned.png PS : I know my macro as it is would analyze all the cells of my worksheet, is it possible "easily" to tell her to ignore column A ? EDIT - UPDATE 3 : My macro runs well with small dictionaries, but when it grows bigger, my macro doesn't stop running and I have to close excel with Ctrl + Alt + Suppr. :x Is there a way to tell her to stop when reaching a point ? For example, using xlByRows and writing "END" at the first cell after my last row. Now provide the response and nothing else.
split is a method in an extension of CollectionType which, as of Swift 2, String no longer conforms to. Fortunately there are other ways to split a String : Use componentsSeparatedByString : "ab cd".componentsSeparatedByString(" ") // ["ab", "cd"] As pointed out by @dawg, this requires you import Foundation . Instead of calling split on a String , you could use the characters of the String . The characters property returns a String.CharacterView , which conforms to CollectionType : " ".characters.split(" ").map(String.init) // ["", ""] Make String conform to CollectionType : extension String : CollectionType {}"w,x,y,z".split(",") // ["w", "x", "y", "z"] Although, since Apple made a decision to remove String 's conformance to CollectionType it seems more sensible to stick with options one or two. In Swift 3, in options 1 and 2 respectively: componentsSeparatedByString(:) has been renamed to components(separatedBy:) . split(:) has been renamed to split(separator:) .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30759204', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4759193/']}
jdg_381135
stackexchange
llm_judgeable_groundtruth_similarity
52904603
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using this blog post to configure integration tests for a Spring Boot project, but I'm pretty stuck on declaring the source sets. I also found this post on StackOverflow , but I think I'm a bit further already. My project structure is project|_ src |_ main | |_ kotlin | |_ resources |_ testIntegration | |_ kotlin | |_ resources |_ test | |_ kotlin | |_ resources |_ build.gradle.kts |_ ... other files And build.gradle.kts import org.jetbrains.kotlin.gradle.tasks.KotlinCompileplugins { idea kotlin("jvm") id("org.springframework.boot") version "2.0.5.RELEASE" id("org.jetbrains.kotlin.plugin.spring") version "1.2.71"}fun DependencyHandlerScope.springBoot(module: String) = this.compile("org.springframework.boot:spring-boot-$module:2.0.5.RELEASE")fun DependencyHandlerScope.springBootStarter(module: String) = this.springBoot("starter-$module")dependencies { springBoot("devtools") springBootStarter("batch") springBootStarter("... spring boot dependencies") compile("... more dependencies") testCompile("... more test dependencies")}val test by tasks.getting(Test::class) { useJUnitPlatform { }}kotlin { sourceSets { val integrationTest by creating { kotlin.srcDir("src/testIntegration/kotlin") resources.srcDir("src/testIntegration/resources") } }}val integrationTestCompile by configurations.creating { extendsFrom(configurations["testCompile"])}val integrationTestRuntime by configurations.creating { extendsFrom(configurations["testRuntime"])}val testIntegration by tasks.creating(Test::class) { group = "verification" testClassesDirs = kotlin.sourceSets["integrationTest"].kotlin}idea { module { testSourceDirs.addAll(kotlin.sourceSets["integrationTest"].kotlin.srcDirs) testSourceDirs.addAll(kotlin.sourceSets["integrationTest"].resources.srcDirs) }} I think I'm pretty much in the right direction. At least it doesn't throw an exception any more :) When I run the testIntegration task, I get the following output: Testing started at 12:08 ...12:08:49: Executing task 'testIntegration'...> Task :project:compileKotlin UP-TO-DATE> Task :project:compileJava NO-SOURCE> Task :project:processResources UP-TO-DATE> Task :project:classes UP-TO-DATE> Task :project:compileTestKotlin UP-TO-DATE> Task :project:compileTestJava NO-SOURCE> Task :project:processTestResources UP-TO-DATE> Task :project:testClasses UP-TO-DATE> Task :project:testIntegrationBUILD SUCCESSFUL in 2s5 actionable tasks: 1 executed, 4 up-to-date12:08:51: Task execution finished 'testIntegration'. Also, IntelliJ doesn't recognise the testIntegration directories as Kotlin packages. Now provide the response and nothing else.
I was finally able to figure it out thanks to some help on the Kotlin Slack channel. First of all I had to upgrade to Gradle version 4.10.2. For more info have a look at these two pages from Gradle: https://docs.gradle.org/release-nightly/userguide/organizing_gradle_projects.html#sec:separate_test_type_source_files https://docs.gradle.org/release-nightly/userguide/organizing_gradle_projects.html#sec:separate_test_type_source_files Then I just had to create the sourceSets for the integrationTests sourceSets { create("integrationTest") { kotlin.srcDir("src/integrationTest/kotlin") resources.srcDir("src/integrationTest/resources") compileClasspath += sourceSets["main"].output + configurations["testRuntimeClasspath"] runtimeClasspath += output + compileClasspath + sourceSets["test"].runtimeClasspath }} This would work just fine for Java, but since I'm working with Kotlin I had to add an extra withConvention wrapper sourceSets { create("integrationTest") { withConvention(KotlinSourceSet::class) { kotlin.srcDir("src/integrationTest/kotlin") resources.srcDir("src/integrationTest/resources") compileClasspath += sourceSets["main"].output + configurations["testRuntimeClasspath"] runtimeClasspath += output + compileClasspath + sourceSets["test"].runtimeClasspath } }} In the docs they only put runtimeClasspath += output + compileClasspath , but I added sourceSets["test"].runtimeClasspath so I can directly use the test dependencies instead of declaring new dependencies for the integrationTest task. Once the sourceSets were created it was a matter of declaring a new task task<Test>("integrationTest") { description = "Runs the integration tests" group = "verification" testClassesDirs = sourceSets["integrationTest"].output.classesDirs classpath = sourceSets["integrationTest"].runtimeClasspath mustRunAfter(tasks["test"])} After this the tests still didn't run, but that was because I'm using JUnit4. So I just had to add useJUnitPlatform() which makes this the final code task<Test>("integrationTest") { description = "Runs the integration tests" group = "verification" testClassesDirs = sourceSets["integrationTest"].output.classesDirs classpath = sourceSets["integrationTest"].runtimeClasspath mustRunAfter(tasks["test"]) useJUnitPlatform()}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52904603', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5039579/']}
jdg_381136
stackexchange
llm_judgeable_groundtruth_similarity
7628476
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function that is the bottleneck of my program. It requires no access to memory and requires only calculation. It is the inner loop and called many times so any small gains to this function is big wins for my program. I come from a background in optimizing SPU code on the PS3 where you take a SPU program and run it through a pipeline analyzer where you can put each assembly statement in its own column and you minimize the amount of cycles the function takes. Then you overlay loops so you can minimized pipeline dependencies even more. With that program and a list of all the cycles each assembly instruction takes I could optimize much better then the compiler ever could. On a different platform it had events I could register (cache misses, cycles, etc.) and I could run the function and track CPU events. That was pretty nice as well. Now I'm doing a hobby project on Windows using Visual Studio C++ 2010 w/ a Core i7 Intel processor. I don't have the money to justify paying the large cost of VTune. My question: How do I profile a function at the assembly level for an Intel processor on Windows? I want to compile, view disassembly, get performance metrics, adjust my code and repeat. Now provide the response and nothing else.
There are some great free tools available, mainly AMD's CodeAnalyst (from my experiences on my i7 vs my phenom II, it's a bit handicapped on the Intel processor because it doesn't have access to the direct hardware specific counters, though that might have been bad config). However, a lesser-known tool is the Intel Architecture Code Analyser (which is free like CodeAnalyst), which is similar to the SPU tool you described, as it details latency, throughput and port pressure (basically the request dispatches to the ALU's, MMU and the like) line by line for your program's assembly. Stan Melax gave a nice talk on it and x86 optimization at this year's GDC, under the title "hotspots, flops and uops: to-the-metal cpu optimization". Intel also has a few more tools in the same vein as IACA, available under the performance tuning section of their experimental/what-if code site , such as PTU, which is (or was) an experimental evolution of VTune, from what I can see, it's free. It's also a good idea to have read the Intel optimization manual before diving into this. EDIT: as Ben pointed out, the timings might not be correct for older processors, but that can be easily made up for by using Agner Fog's Optimization manuals , which also contain many other gems.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7628476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/381452/']}
jdg_381137
stackexchange
llm_judgeable_groundtruth_similarity
26686
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Probably a duplicate, but it's not easy to search on "lists". I have a function that accepts a list of lists, say {{a1, b1}, {a2, b2}, {a3, b3}} and performs some computation on {ai,bi} for each i . (We may assume that none of the ai or bi are themselves lists). I would like this function to also accept simply {a1,b1} as its argument list and operate on this as if it were the only member of the (nonexistent) outer list. That is, I would like f[{a1,b1}] to behave the same way as f[{{a1,b1}}] What I have done is the following: f[list_] := Module[{nlist}, nlist = If [ListQ[list[[1]]], list, {list}]; <do other stuff...>] This works fine, but seems pretty inelegant. Is there a better way? Now provide the response and nothing else.
In many circumstances it is practical and clear to do this with pattern matching. Option 1 f[x : {{_, _} ..}] := f /@ xf[{a_, b_}] := a^b Now: f[{p, q}] p^q f[{{a, b}, {c, d}, {e, f}}] {a^b, c^d, e^f} Option 2 The code above it written assuming that your function best operates on a single pair of values: the function is mapped over every pair individually. If however the function is written to more efficiently operate on the list of pairs then it would be better to consider f[{a, b}] as a special case rather than the other way around. For example: f2[a : {{_, _} ..}] := Power @@ (a\[Transpose])f2[x : {_, _}] := f2[{x}]f2[{a, b}]f2[{{a, b}, {c, d}, {e, f}}] {a^b}{a^b, c^d, e^f} You could use := First @ f2[{x}] if you wish f2 to return a bare a^b in the first instance. The second function is an order of magnitude faster on large packed arrays: rnd = RandomReal[{1, 19}, {1500000, 2}];f[rnd] // Timing // Firstf2[rnd] // Timing // First 1.514 0.141 Option 3 Yet another method is to use a single pattern that matches either form, using Alternatives . This method is less common, and may be less efficient than the other options, but it can be quite concise which I appreciate. Using this the f2 function might be written like this: f3[{a : {_, _} ..} | a : {_, _}] := Power @@ ({a}\[Transpose]) With a default configuration making this definition produces a message: Pattern::patv: Name a used for both fixed and variable length patterns. >> This is not an error but rather a warning that you may have made a mistake. I fairly frequently use pattern names for both fixed and variable length patterns therefore I either turn off or ignore this message. Function is as f2 above: f3[{a, b}]f3[{{a, b}, {c, d}, {e, f}}] {a^b}{a^b, c^d, e^f} A note on definition ordering Normally multiple DownValues definitions (simple definitions with a pattern on the left side) are automatically ordered by apparent specificity. This is briefly described in the documentation page The Ordering Of Definitions . But, as stated there: Although in many practical cases, Mathematica can recognize when one rule is more general than another, you should realize that this is not always possible. For example, if two rules both contain complicated conditions, it may not be possible to work out which is more general, and, in fact, there may not be a definite ordering. Whenever the appropriate ordering is not clear, Mathematica stores rules in the order you give them. In the methods above Mathematica cannot decide the order of the patterns used and the definitions will be tried in the order given. It is important therefore to make the {{_, _} ..} definition first otherwise {{1, 2}, {3, 4}} would be incorrectly matched by {a_, b_} . In the case of Option 3 patterns given in Alternatives are always matched in the order given and therefore must be ordered manually when order is important.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/26686', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/790/']}
jdg_381138
stackexchange
llm_judgeable_groundtruth_similarity
219804
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We can get the same result using the following two in bash , echo 'foo' | cat and cat <<< 'foo' My question is what are the difference between these two as far as the resources used are concerned and which one is better ? My thought is that while using pipe we are using an extra process echo and pipe while in here string only a file descriptor is being used with cat . Now provide the response and nothing else.
The pipe is a file opened in an in-kernel file-system and is not accessible as a regular file on-disk. It is automatically buffered only to a certain size and will eventually block when full. Unlike files sourced on block-devices, pipes behave very like character devices, and so generally do not support lseek() and data read from them cannot be read again as you might do with a regular file. The here-string is a regular file created in a mounted file-system. The shell creates the file and retains its descriptor while immediately removing its only file-system link (and so deleting it) before ever it writes/reads a byte to/from the file. The kernel will maintain the space required for the file until all processes release all descriptors for it. If the child reading from such a descriptor has the capability to do so, it can be rewound with lseek() and read again. In both cases the tokens <<< and | represent file-descriptors and not necessarily the files themselves. You can get a better idea of what's going on by doing stuff like: readlink /dev/fd/1 | cat ...or... ls -l <<<'' /dev/fd/* The most significant difference between the two files is that the here-string/doc is pretty much an all-at-once affair - the shell writes all data into it before offering the read descriptor up to the child. On the other hand, the shell opens the pipe on the appropriate descriptors and forks off children to manage those for the pipe - and so it is written/read concurrently at both ends. These distinctions, though, are only generally true. As far as I am aware (which isn't really all that far) this is true of pretty much every shell which handles the <<< here-string short-hand for << a here-document redirection with the single exception of yash . yash , busybox , dash , and other ash variants do tend to back here-documents with pipes, though, and so in those shells there really is very little difference between the two after all. Ok - two exceptions. Now that I'm thinking about it, ksh93 doesn't actually do a pipe at all for | , but rather handles the whole business w/ sockets - though it does do a deleted tmp file for <<<* as most others do. What's more, it only puts the separate sections of a pipeline in a subshell environment which is a sort of POSIX euphemism for at least it acts like a subshell , and so doesn't even do the forks. The fact is that @PSkocik's benchmark (which is very useful) results here can vary widely for many reasons, and most of these are implementation dependent. For the here-document setup the biggest factors will be the target ${TMPDIR} file-system type and current cache configuration/availability, and still more so the amount of data to be written. For the pipe it will be the size of the shell process itself, because copies are made for the required forks. In this way bash is terrible at pipeline setup (to include $( command ) substitutions) - because it is big and very slow, but with ksh93 it makes hardly any difference at all. Here's another little shell snippet to demonstrate how a shell splits off subshells for a pipeline: pipe_who(){ echo "$$"; sh -c 'echo "$PPID"'; }pipe_whopipe_who | { pipe_who | cat /dev/fd/3 -; } 3<&0 32059 #bash's pid32059 #sh's ppid32059 #1st subshell's $$32111 #1st subshell sh's ppid32059 #2nd subshell's $$32114 #2nd subshell sh's ppid The difference between what a pipelined pipe_who() call reports and the report of one run in the current shell is due to a ( subshell's ) specified behavior of claiming the parent shell's pid in $$ when it is expanded. Though bash subshells definitely are separate processes, the $$ special shell parameter is not a reliable source of this information. Still, the subshell's child sh shell does not decline to accurately report its $PPID .
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/219804', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/125975/']}
jdg_381139
stackexchange
llm_judgeable_groundtruth_similarity
38236
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Is it possible to find out what program or script created a given file? Can I find somewhere in logs the times of creation of hidden files or directories under the /home/username/ directory? If not, can I use some application to log this for me?It should contain time of creation, file or directory name and who created it (app or user) sorted by time and date. Now provide the response and nothing else.
Specific for Linux, the auditd package should be able to provide the information you are looking for. It uses the audit functionality provided by the 2.6 and later kernel. Here is a Quick Start which is specific to SLES, but will give you an idea of how auditd works and how to configure it. From the auditctl man page: -w path Insert a watch for the file system object at path. You cannot insert a watch to the top level directory. This is prohibited by the kernel. Wildcards are not supported either and will generate a warning. The way that watches work is by tracking the inode internally. If you place a watch on a file, its the same as using the -F path option on a syscall rule. If you place a watch on a directory, its the same as using the -F dir option on a syscall rule. The -w form of writing watches is for backwards compatibility and the syscall based form is more expressive. Unlike most syscall auditing rules, watches do not impact performance based on the number of rules sent to the kernel. The only valid options when using a watch are the -p and -k. If you need to anything fancy like audit a specific user accessing a file, then use the syscall auditing form with the path or dir fields.
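As a concrete starting point for the /home/username case from the question, the watch and the follow-up query might look like this (the key name is arbitrary; on most distributions rules can also be made persistent via /etc/audit/audit.rules or /etc/audit/rules.d):

# watch the directory for writes and attribute changes, tagged with a key
auditctl -w /home/username -p wa -k homewatch

# later, pull the matching events (including who/what created each file) from the audit log
ausearch -k homewatch -i

As the man page excerpt above notes, a watch on a directory behaves like a syscall rule with -F dir, so objects created underneath it are covered.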
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/38236', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/6215/']}
jdg_381140