stackexchange
llm_judgeable_groundtruth_similarity
61541455
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have successfully integrated the bottom navigation with the latest Android Architecture Navigation components. The following is my complete code.

Navigation

<?xml version="1.0" encoding="utf-8"?>
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/mobile_navigation"
    app:startDestination="@+id/navigation_home">

    <fragment
        android:id="@+id/navigation_home"
        android:name="in.zedone.bottomsample.ui.home.HomeFragment"
        android:label="@string/title_home"
        tools:layout="@layout/fragment_home" />

    <fragment
        android:id="@+id/navigation_saloons"
        android:name="in.zedone.bottomsample.ui.saloons.SaloonsFragment"
        android:label="@string/title_saloon"
        tools:layout="@layout/fragment_saloons" />

    <fragment
        android:id="@+id/navigation_offers"
        android:name="in.zedone.bottomsample.ui.offers.OffersFragment"
        android:label="@string/title_offer"
        tools:layout="@layout/fragment_offers" />

    <fragment
        android:id="@+id/navigation_account"
        android:name="in.zedone.bottomsample.ui.account.AccountFragment"
        android:label="@string/title_account"
        tools:layout="@layout/fragment_account" />
</navigation>

BottomNavigationView

<com.google.android.material.bottomnavigation.BottomNavigationView
    android:id="@+id/nav_view"
    android:layout_width="0dp"
    android:layout_height="wrap_content"
    android:layout_marginStart="0dp"
    android:layout_marginEnd="0dp"
    android:background="?android:attr/windowBackground"
    app:labelVisibilityMode="labeled"
    app:itemTextAppearanceActive="@style/BottomNavigationView.Active"
    app:itemTextAppearanceInactive="@style/BottomNavigationView"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintRight_toRightOf="parent"
    app:menu="@menu/bottom_nav_menu" />

MainActivity

BottomNavigationView navView = findViewById(R.id.nav_view);
// Passing each menu ID as a set of IDs because each
// menu should be considered a top-level destination.
AppBarConfiguration appBarConfiguration = new AppBarConfiguration.Builder(
        R.id.navigation_home, R.id.navigation_saloons,
        R.id.navigation_offers, R.id.navigation_account)
        .build();
NavController navController = Navigation.findNavController(this, R.id.nav_host_fragment);
NavigationUI.setupActionBarWithNavController(this, navController, appBarConfiguration);
NavigationUI.setupWithNavController(navView, navController);

Now how can I add a transition/animation when each tab/fragment is selected in the bottom navigation? Now provide the response and nothing else.
Instead of using the setupWithNavController function, follow this approach. First, create your NavOptions, which include the animations, as shown below:

val options = NavOptions.Builder()
    .setLaunchSingleTop(true)
    .setEnterAnim(R.anim.enter_from_bottom)
    .setExitAnim(R.anim.exit_to_top)
    .setPopEnterAnim(R.anim.enter_from_top)
    .setPopExitAnim(R.anim.exit_to_bottom)
    .setPopUpTo(navController.graph.startDestination, false)
    .build()

Then use setOnNavigationItemSelectedListener to navigate with the animation, like this:

bottomNavigationView.setOnNavigationItemSelectedListener { item ->
    when (item.itemId) {
        R.id.fragmentFirst -> navController.navigate(R.id.fragmentFirst, null, options)
        R.id.fragmentSecond -> navController.navigate(R.id.fragmentSecond, null, options)
        R.id.fragmentThird -> navController.navigate(R.id.fragmentThird, null, options)
    }
    true
}

Finally, you should handle the same-item-selection case, so install an empty reselect listener to avoid re-navigating to the destination that is already shown:

bottomNavigationView.setOnNavigationItemReselectedListener { }

I used bottom navigation like this in my project to add animation for page transitions. I hope it helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61541455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8470296/']}
jdg_71841
stackexchange
llm_judgeable_groundtruth_similarity
1015639
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: One of the ways to compute the integral $$\int_0^{\infty} \log(\sin^2(x))\left(1-x\operatorname{arccot}(x)\right) \ dx=\frac{\pi}{4}\left(\operatorname{Li_3}(e^{-2})+2\operatorname{Li_2}(e^{-2})-2\log(2)-\zeta(3)\right)$$ is to make use of the series of $\log(\sin(x))$, but the result I got after doing that wasn't that friendly. Is it possible to find a neat way of evaluating the integral? Now provide the response and nothing else.
First notice that $$ \begin{align} &\frac{1}{2} \int_{0}^{\infty} \log (4 \sin^{2} x) \Big(1 - x \, \text{arccot}(x) \Big) \, dx \\ &= \frac{\log(4)}{2} \int_{0}^{\infty} \Big( 1- x \, \text{arccot}(x) \Big) \, dx + \frac{1}{2} \int_{0}^{\infty} \log (\sin^{2} x) \Big(1 - x \, \text{arccot}(x) \Big) \, dx \\ &= \frac{\pi \log(2)}{4} + \frac{1}{2} \int_{0}^{\infty} \log (\sin^{2} x) \Big(1 - x \, \text{arccot}(x) \Big) \, dx . \tag{1} \end{align}$$ Now use the fact $$ \text{Re} \log(1-e^{2ix}) = \frac{1}{2} \log(4 \sin^{2} x) $$ and integrate by parts to get $$ \begin{align} &\frac{1}{2} \int_{0}^{\infty} \log (4\sin^{2} x) \Big(1 - x \, \text{arccot}(x) \Big) \, dx \\ &= \text{Re} \int_{0}^{\infty} \log (1-e^{2ix}) \Big(1- x \, \text{arccot}(x) \Big) \, dx \\ &= \text{Re} \, \Big(1- x \, \text{arccot}(x) \Big) \frac{i \, \text{Li}_{2}(e^{2ix})}{2} \Bigg|^{\infty}_{0}- \text{Re} \, \frac{i}{2} \int_{0}^{\infty} \left(\frac{x}{1+x^{2}} - \text{arccot}(x) \right) \text{Li}_{2} (e^{2ix}) \, dx \\ &= 0 + \frac{1}{2} \int_{0}^{\infty} \Big(\frac{x}{1+x^{2}} - \text{arccot}(x) \Big) \sum_{n=1}^{\infty} \frac{\sin({\color{red}{2}}nx)}{n^{2}} \, dx \\ &= \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{n^{2}} \int_{0}^{\infty} \Big(\frac{x}{1+x^{2}} - \text{arccot}(x) \Big) \sin (2nx) \, dx \\ &= \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{n^{2}} \left(-\frac{\pi}{4n} + \frac{1}{n} \int_{0}^{\infty} \frac{\cos(2nx)}{(1+x^{2})^{2}} \, dx \right) \tag{2} \\ &= \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{n^{2}} \Big(-\frac{\pi}{4n} + \frac{1}{n} \frac{\pi}{4} e^{-2n} (2n+1) \Big) \tag{3} \\ &= - \frac{\pi}{8} \sum_{n=1}^{\infty} \frac{1}{n^{3}} + \frac{\pi}{4} \sum_{n=1}^{\infty} \frac{e^{-2n}}{n^{2}} + \frac{\pi}{8} \sum_{n=1}^{\infty} \frac{e^{-2n}}{n^{3}} \\ &= -\frac{\pi}{8} \zeta(3) + \frac{\pi}{4} \text{Li}_{2}(e^{-2}) + \frac{\pi}{8} \text{Li}_{3} (e^{-2}). 
\end{align}$$

Therefore,

$$ \int_{0}^{\infty} \log (\sin^{2} x) \Big(1 - x \, \text{arccot}(x) \Big) \, dx = \frac{\pi}{4} \Big(\text{Li}_{3} (e^{-2}) + 2 \text{Li}_{2}(e^{-2}) - 2 \log(2) - \zeta(3) \Big).$$

$(1)$ uses the simple integral $\int_0^\infty (1-x\cot^{-1} x)\,dx=\frac{\pi}{4}$.

$(2)$ Integrate by parts again.

$(3)$ There is probably a question on here about evaluating $\int_{0}^{\infty} \frac{\cos(ax)}{(1+x^{2})^{2}} \, dx$, but I can't find it at the moment. The most direct approach is to use the residue theorem. You could also use the fact that $\int_{0}^{\infty} \frac{\cos (ax)}{b^{2}+x^{2}} \, dx = \frac{\pi}{2b} e^{-ab}$ for $a \ge 0,\ b > 0$, and differentiate both sides with respect to $b$.
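The chain of reductions above is easy to mistranscribe, so here is a hedged numerical sanity check (function names and truncation limits are mine): it confirms that the half-sum obtained in step $(3)$ matches the polylogarithm closed form, and that undoing step $(1)$ reproduces the claimed answer.

```python
import math

N = 20000
pi = math.pi
q = math.exp(-2.0)

def zeta3(n_max=N):
    # partial sum of zeta(3); the tail is O(1/n_max^2)
    return sum(1.0 / n**3 for n in range(1, n_max))

def polylog(s, z, n_max=200):
    # partial sum of Li_s(z); converges geometrically for |z| < 1
    return sum(z**n / n**s for n in range(1, n_max))

# Left side: the half-sum obtained in step (3)
lhs = 0.5 * sum(
    (1.0 / n**2) * (-pi / (4 * n) + (pi / (4 * n)) * math.exp(-2 * n) * (2 * n + 1))
    for n in range(1, N)
)

# Right side: -pi/8 zeta(3) + pi/4 Li_2(e^-2) + pi/8 Li_3(e^-2)
rhs = -pi / 8 * zeta3() + pi / 4 * polylog(2, q) + pi / 8 * polylog(3, q)

# Undoing step (1): subtract pi*log(2)/4 and multiply by 2
final = 2 * rhs - pi * math.log(2) / 2
claimed = pi / 4 * (polylog(3, q) + 2 * polylog(2, q) - 2 * math.log(2) - zeta3())
```

Both comparisons agree to well within the truncation error of the partial sums.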
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1015639', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32016/']}
jdg_71842
stackexchange
llm_judgeable_groundtruth_similarity
5360014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I check if a URL has parameters in it? For instance, if the string is like this,

form_page_add.php?parent_id=1

return true. But if it is like this,

form_page_add.php?

return nothing. Thanks. EDIT: Sorry for not being clear; the URL is submitted from a form as a string, and I will store that string in a variable:

if(isset($_POST['cfg_path']))
    $cfg_path = trim($_POST['cfg_path']);

so I need to check whether this variable $cfg_path has parameters in it. Now provide the response and nothing else.
You can use this simple function:

function url_get_param($url, $name) {
    parse_str(parse_url($url, PHP_URL_QUERY), $vars);
    return isset($vars[$name]) ? $vars[$name] : null;
}

It will return the value of the parameter if it exists in the URL, or null if it does not appear at all. You can differentiate between a parameter having no value and not appearing at all with the identical operator (triple equals, ===). This will work with any URL you pass it, not just $_SERVER['REQUEST_URI']. Update: If you just want to know if there is any parameter at all in the URL then you can use some variant of the above (see Phil's suggestion in the comments). Or, you can use the surprisingly simple test

if (strpos($url, '=')) {
    // has at least one param
}

We don't even need to bother to check for false here, as if an equals sign exists it won't be the first character. Update #2: While the method using strpos will work for most URLs, it's not bulletproof and so should not be used if you don't know what kind of URL you are dealing with. As Steve Onzra correctly points out in the comments, URLs like http://example.com/2012/11/report/cGFyYW1fd2l0aF9lcXVhbA== are valid and yet do not contain any parameter.
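The parse-the-query-string idea is portable beyond PHP. As an illustration (helper names are mine), here are the same two checks in Python's standard library; note that the proper parser also handles the base64-in-path URL from Update #2, which defeats the naive '=' test:

```python
from urllib.parse import urlparse, parse_qs

def url_get_param(url: str, name: str):
    """Return the value of a query parameter, or None if it is absent."""
    params = parse_qs(urlparse(url).query)
    return params[name][0] if name in params else None

def has_params(url: str) -> bool:
    """True only if the URL actually carries a non-empty query string."""
    return bool(urlparse(url).query)
```

Because the check looks at the parsed query component rather than at raw characters, a '=' appearing in the path does not produce a false positive.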
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5360014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/413225/']}
jdg_71843
stackexchange
llm_judgeable_groundtruth_similarity
28560744
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given this sample XML:

<xml>
<pb facs="id1" />
  <aa></aa>
  <aa></aa>
  <lot-of-xml></lot-of-xml>
<pb facs="id2" />
  <bb></bb>
  <bb></bb>
  <lot-of-xml></lot-of-xml>
</xml>

I need to parse it and get all the content between the pb elements, saving it into distinct external files. Expected result:

$ cat id1
<aa></aa>
<aa></aa>
<lot-of-xml></lot-of-xml>
$ cat id2
<bb></bb>
<bb></bb>
<lot-of-xml></lot-of-xml>

What is the correct XPath axis to use?

from lxml import etree

xml = etree.parse("sample.xml")
for pb in xml.xpath('//pb'):
    filename = pb.xpath('@facs')[0]
    f = open(filename, 'w')
    content = **{{ HOW TO GET THE CONTENT HERE? }}**
    f.write(content)
    f.close()

Is there any XPath expression to get all descendants and stop when a new pb is reached? Now provide the response and nothing else.
There are at least 2 options to do that:

Following http://www.blog.project13.pl/index.php/coding/1077/runwith-junit4-with-both-springjunit4classrunner-and-parameterized/ , your test needs to look something like this:

@RunWith(Parameterized.class)
@ContextConfiguration(classes = {ApplicationConfigTest.class})
public class ServiceTest {

    private TestContextManager testContextManager;

    @Before
    public void setUpContext() throws Exception {
        // This is where the magic happens: we actually do "by hand" what the
        // Spring runner would do for us. Read the JavaDoc for the class below
        // to know exactly what it does; the method names are quite accurate though.
        this.testContextManager = new TestContextManager(getClass());
        this.testContextManager.prepareTestInstance(this);
    }
    ...
}

There is a GitHub project, https://github.com/mmichaelis/spring-aware-rule , which builds on the previous blog post but adds support in a generalized way:

@SuppressWarnings("InstanceMethodNamingConvention")
@ContextConfiguration(classes = {ServiceTest.class})
public class SpringAwareTest {

    @ClassRule
    public static final SpringAware SPRING_AWARE = SpringAware.forClass(SpringAwareTest.class);

    @Rule
    public TestRule springAwareMethod = SPRING_AWARE.forInstance(this);

    @Rule
    public TestName testName = new TestName();

    ...
}

So you can have a basic class implementing one of the approaches, and all tests inheriting from it.
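Returning to the XML question quoted above: rather than hunting for a single XPath expression that stops at the next pb, it is simpler to walk the siblings in document order and start a new chunk at every pb milestone. A hedged standard-library sketch of that idea follows (lxml users can do the equivalent with itersiblings()):

```python
import xml.etree.ElementTree as ET

sample = """<xml><pb facs="id1" /> <aa></aa> <aa></aa> <lot-of-xml></lot-of-xml>
<pb facs="id2" /> <bb></bb> <bb></bb> <lot-of-xml></lot-of-xml></xml>"""

root = ET.fromstring(sample)
chunks = {}        # maps each pb's facs id to the markup that follows it
current = None
for child in root:  # children are visited in document order
    if child.tag == "pb":
        current = child.get("facs")   # start a new chunk at every milestone
        chunks[current] = []
    elif current is not None:
        chunks[current].append(ET.tostring(child, encoding="unicode").strip())

# each chunks[facs] could now be joined and written to its own file
```

This mirrors the expected output: everything between one pb and the next ends up keyed by the first pb's facs attribute.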
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28560744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1070980/']}
jdg_71844
stackexchange
llm_judgeable_groundtruth_similarity
20040825
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Following is my code. HTML:

<ul id="ul_o"><li>v1</li><li>v2</li></ul>

JS:

console.log(document.getElementById("ul_o").getElementsByClassName("LI").length);

Why does the console show the number 0 instead of 2? Now provide the response and nothing else.
Use getElementsByTagName, since "li" is a tag name, not a class name:

document.getElementById("ul_o").getElementsByTagName("li").length

To give a broader answer that ensures the DOM is ready before being accessed and updated by JS, we can make use of the onreadystatechange event, something like this in HTML5:

<html>
<head><title>Test</title></head>
<body>
<ul id="ul_o"><li>v1</li><li>v2</li><li>v3</li></ul>
<script type='text/javascript'>
document.onreadystatechange = function () {
    if (document.readyState === "interactive") {
        document.body.innerHTML = '<h4><code>ul</code> with <i>ul_o</i> has '
            + document.getElementById("ul_o").getElementsByTagName("li").length
            + ' <code>li</code> Tags</h4>';
    }
}
</script>
</body>
</html>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20040825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2992868/']}
jdg_71845
stackexchange
llm_judgeable_groundtruth_similarity
13641007
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to pull rows out of a Postgres database. I can pull them out with desc, but when I try random I get a syntax error near "rand". Error:

PG::Error: ERROR:  syntax error at or near "rand"
LINE 1: ... "hashtags".* FROM "hashtags" ORDER BY tweet_id rand LIMIT...
                                                           ^
: SELECT "hashtags".* FROM "hashtags" ORDER BY tweet_id rand LIMIT 4

Code to pull it out:

<div id="hashtags">
<% Hashtag.order("tweet_id desc").limit(4).each do |hashtag| %>
  <blockquote><%= hashtag.content %></blockquote>
  <div class="from">&mdash; @<%= hashtag.screen_name %></div>
<% end %>
</div>

Now provide the response and nothing else.
To fetch random entries from your database you have a few options. Here are a couple.

1st approach: take 4 random entries out of your DB using SQL:

Hashtag.order("RANDOM()").limit(4)

2nd approach: use ActiveRecord's sample() method to retrieve 4 random rows:

Hashtag.all.sample(4)

As for speed and efficiency, I made a mini-benchmark and tested the two commands on my own DB (which contains 500 records). The first approach, as expected, was more than twice as fast as the second:

SQL: 1.8ms
Sample method: 4.2ms
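One caveat worth noting: the random-ordering function is spelled RANDOM() on PostgreSQL and SQLite but RAND() on MySQL, and in either case it belongs in the ORDER BY clause by itself, not appended after a column name as in the failing query. A small illustrative script against an in-memory SQLite database (the demo table and data are invented):

```python
import sqlite3

# Invented schema and rows, mirroring the hashtags table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hashtags (tweet_id INTEGER, content TEXT)")
conn.executemany(
    "INSERT INTO hashtags VALUES (?, ?)",
    [(i, "tag%d" % i) for i in range(20)],
)

# ORDER BY RANDOM() shuffles the result set; LIMIT 4 keeps four rows.
rows = conn.execute(
    "SELECT content FROM hashtags ORDER BY RANDOM() LIMIT 4"
).fetchall()
```

Each run returns a different four rows, which is the behavior the first approach relies on.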
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13641007', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1798594/']}
jdg_71846
stackexchange
llm_judgeable_groundtruth_similarity
44305953
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I create the key pairs of public and private keys using SecKeyGeneratePair. ViewController:

import UIKit
import Security

class ViewController: UIViewController {

    @IBOutlet weak var textFld: UITextField!
    @IBOutlet weak var encryptedTextFld: UITextView!
    @IBOutlet weak var decryptedTextFld: UITextView!

    var statusCode: OSStatus?
    var publicKey: SecKey?
    var privateKey: SecKey?

    override func viewDidLoad() {
        super.viewDidLoad()

        let publicKeyAttr: [NSObject: NSObject] = [
            kSecAttrIsPermanent: true as NSObject,
            kSecAttrApplicationTag: "com.xeoscript.app.RsaFromScrach.public".data(using: String.Encoding.utf8)! as NSObject]
        let privateKeyAttr: [NSObject: NSObject] = [
            kSecAttrIsPermanent: true as NSObject,
            kSecAttrApplicationTag: "com.xeoscript.app.RsaFromScrach.private".data(using: String.Encoding.utf8)! as NSObject]

        var keyPairAttr = [NSObject: NSObject]()
        keyPairAttr[kSecAttrKeyType] = kSecAttrKeyTypeRSA
        keyPairAttr[kSecAttrKeySizeInBits] = 2048 as NSObject
        keyPairAttr[kSecPublicKeyAttrs] = publicKeyAttr as NSObject
        keyPairAttr[kSecPrivateKeyAttrs] = privateKeyAttr as NSObject

        statusCode = SecKeyGeneratePair(keyPairAttr as CFDictionary, &publicKey, &privateKey)

        if statusCode == noErr && publicKey != nil && privateKey != nil {
            print("Key pair generated OK")
            print("Public Key: \(publicKey!)")
            print("Private Key: \(privateKey!)")
        } else {
            print("Error generating key pair: \(String(describing: statusCode))")
        }
    }

    @IBAction func encryptPressed(_ sender: Any) {
    }

    @IBAction func decryptPressed(_ sender: Any) {
    }
}

Then I got the keys like below.

Public Key:

<SecKeyRef algorithm id: 1, key type: RSAPublicKey, version: 4, block size: 2048 bits, exponent: {hex: 10001, decimal: 65537}, modulus:
B4854E2DA6BA5EBC96C38BD44078D314E4504D130A03018ACD17D0F6679C3B6C9937B5D932A635AEAC32B9245EC400208C1F79932174EF804468D0DCE40DAF5B544CF9E4BCD7C49BA5D0BF3F8246B89B57A3A910CBB5200DCA6145E3EE216CE9C4A3283F1027AA15F7543BD3BEFF35BE24EE709CF8EB12545970AFFDA38CA11410ECA20A8F428D228ED07BF5399A2F55D93D7C143BAFA59A08E4FF932C3A689FA7F3F166B79A43837028319CB383F716B594F317ED6E20D7A8003190A13BC132D5B13708EDAEA3E2012B16CF37437BB617070D9A6DDFE55884A79BD530E4E654B823A8BBBF0AA777C8E46E94BD83E1C59EC6E1D34E69405640C309515243AA8D, addr: 0x608000420e80>

Private Key:

<SecKeyRef algorithm id: 1, key type: RSAPrivateKey, version: 4, block size: 2048 bits, addr: 0x60000003b960>

But I don't want to generate them like above. I expect the keys like below.

Public Key:

-----BEGIN RSA PUBLIC KEY-----
MIIBCgKCAQEA0bipoOhkkvPxcsyOzcqsIUeVe0+iwe8W7N4EbHZMgujRERu1TPpyUcCO0uuKmm1TU09Kl40rRvDbtgB1YcGV3FPnNp3sOyFVsdyZ5bzxZtyyLrSWtj/nbLnGwaG9xJSwd2R/pTQLzOLV5KldwD2eUb3Z4Z4e9Z8II7eWgGaCLLqbrtEAa05NEqARckxrzJ1S3j+59h4AQovF72KI90/kRPryT2OGDiVlJ6CTjn2ZnTYcx65X6RwfAeJKHZAGhw96j9tXyS+dJcXy4IBUTi3PXw0aEfhHQr/JsSHuMp/8mrhVJEokXb1CgKDZgJXujpGhCBdztHBAJxLBQMlODg7srwIDAQAB
-----END RSA PUBLIC KEY-----

Private Key:

-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAsfxMagVKY5++61Kot0esyhEOesqyQlZNvWbqMBcOoaOAb3pkLvwaGJ2YtD12u4yDEKcY5rpX7B/2t8GBHf+74NG47zAutf4Gf6qgQRUmIx2b7i4kWBt3KIifb/Zfs9KVJLhD4007bg1OtXA4kIhhXiuvhajDjDLOEthogF45CkJe+N67JnH5hVW5CqBxPyRCrWCFbEHcXs5H515JV/Kz1+JVrB2/M03fW751wptO2GdGwsdeofqQzY+WUzqUihXigIjAVLFRemky3HpwuhzXUJn6A0ZD4tkk1JLstpSSJdBpH+L2b9QlOitehxFgRsYmto+idpD1XrS9UyUtmpbTuwIDAQABAoIBAQCYvrAJcJ7lnmtnYtm96LoF89tcT+Xpfk1bFR43xSHeYAXSJdQiamIu69joHbNuwuib+vsoz5Sy5L+D9YHMb/MZvoIaa1w6/VUwbQr4r6C6FCgEoP65ymBZnd5OZL6/ASLTj3tbb6VoDe2VUkiI6TG+cnlAmJOxFsy5aZVNTQ9gmCMS0+AdpTbDsxTPg3y0EKFXeVRyKjq0lO9mp3G5yHkFjzWWY6s5XHx27gDTt8eXg/un72Qsz1rh5iUnAoxrga0Oco3Yk9DMvMwza1I1Lo5fpB6FbTGX3k24heSnLDFEnlBvsBBg0g/n/qgwoZJ81MgG8Q4kAfeScuCIsYVnHEBpAoGBAOpnrKEkyhk1rXG4Md+z1/odhqx89mV7mF1ttW4IhFcwpJSMohsGr27Ic87whkpRxz2Mwj3B5WPGne4UkbvniH46n3jEW7ZIUF+ASVWkjMaGJWtOqSLCI19Snie9WvpREwaCVuvT2l4IeM1WL5gKotBwa3csZgGYH6gcyW5Ipbo/AoGBAMJh/+WXbohF4+A989q0jYjRRhKwUJAYeK8/AePrx8MnAXnRd09TiqeGn0Xig/RNZ0RE96/TC1dTIBIHk5aDMy3vQhhYF0KbwcQWmCOnGo1qNTTaWTa3UitFMWf0hO0HuZtpRyD1YwhHP0W2tiK2GVjCreqIYASCpYKLq5Qq1K+FAoGARk2h8RLfqn/27UyZaMa/2DxS0BkKrZVMNXlaGQ5k4uGr+wHS/NgcddWZJk/tdwzf/Q3ilDM7YZmIdIemzfy7a2CZw9bgyuMVeA85733S2xgQ0QZepBYmFcjptnGMf9chJaqh90krDVjtImjfDXLjMjEFilC+p2vA0uMPZwxS6HECgYAc5dLUQBoHmlRRTwSEvBjagToopxujAHBYpUZTqwbMpWzbvl89ZM8VLrdY/V7en+89P/+OnRJvjgUTiRrQ4npmVs59rgLvPRamXzGJA1u4MFTuoZNnxgMqOaQprzlfv6lBSHpxlOl/HpByfcJAENBd2LtgRZv4r6+JY9hDM8bgvQKBgCDTSCLj5c1CYyuJMdrz9L5+xLFmrmL48djhK460ZcmcZ/gP808CyXx/sDneow+JWt7Jb3p5zyUvvq1aDGNSsn4plB2rg7AqtoHcZYyFFZGI/K/b6JZna1yuFUYOfcanunabxY1wPQxuvR+AEuufBjB0aKg+qkLCCN1HYQtLs+N8
-----END RSA PRIVATE KEY-----

How can I convert the keys generated by SecKeyGeneratePair to

-----BEGIN RSA PUBLIC KEY-----
................
-----END RSA PUBLIC KEY-----

format? Now provide the response and nothing else.
Please try this:

let publicKeyAttr: [NSObject: NSObject] = [
    kSecAttrIsPermanent: true as NSObject,
    kSecAttrApplicationTag: "com.xeoscript.app.RsaFromScrach.public".data(using: String.Encoding.utf8)! as NSObject,
    kSecClass: kSecClassKey,        // added this value
    kSecReturnData: kCFBooleanTrue] // added this value
let privateKeyAttr: [NSObject: NSObject] = [
    kSecAttrIsPermanent: true as NSObject,
    kSecAttrApplicationTag: "com.xeoscript.app.RsaFromScrach.private".data(using: String.Encoding.utf8)! as NSObject,
    kSecClass: kSecClassKey,        // added this value
    kSecReturnData: kCFBooleanTrue] // added this value

var keyPairAttr = [NSObject: NSObject]()
keyPairAttr[kSecAttrKeyType] = kSecAttrKeyTypeRSA
keyPairAttr[kSecAttrKeySizeInBits] = 2048 as NSObject
keyPairAttr[kSecPublicKeyAttrs] = publicKeyAttr as NSObject
keyPairAttr[kSecPrivateKeyAttrs] = privateKeyAttr as NSObject

var publicKey: SecKey?
var privateKey: SecKey?
let statusCode = SecKeyGeneratePair(keyPairAttr as CFDictionary, &publicKey, &privateKey)

if statusCode == noErr && publicKey != nil && privateKey != nil {
    print("Key pair generated OK")

    var resultPublicKey: AnyObject?
    var resultPrivateKey: AnyObject?
    let statusPublicKey = SecItemCopyMatching(publicKeyAttr as CFDictionary, &resultPublicKey)
    let statusPrivateKey = SecItemCopyMatching(privateKeyAttr as CFDictionary, &resultPrivateKey)

    if statusPublicKey == noErr {
        if let publicKey = resultPublicKey as? Data {
            print("Public Key: \(publicKey.base64EncodedString())")
        }
    }
    if statusPrivateKey == noErr {
        if let privateKey = resultPrivateKey as? Data {
            print("Private Key: \(privateKey.base64EncodedString())")
        }
    }
} else {
    print("Error generating key pair: \(String(describing: statusCode))")
}
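The code above prints plain base64 of the key data; to get the -----BEGIN/END----- form asked for, wrap that output in a PEM envelope, which is pure text formatting: base64-encode the DER bytes, break the text into 64-character lines, and add the header and footer. A language-agnostic sketch of that wrapping step (the DER bytes below are dummy placeholders, not a real key):

```python
import base64
import textwrap

def der_to_pem(der_bytes: bytes, label: str) -> str:
    """Wrap DER-encoded key bytes in a PEM envelope (64-char base64 lines)."""
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return "-----BEGIN %s-----\n%s\n-----END %s-----\n" % (label, body, label)

# Dummy stand-in for the DER data that SecItemCopyMatching would return.
pem = der_to_pem(b"\x30\x82" + bytes(120), "RSA PUBLIC KEY")
```

On iOS the Data returned for an RSA key is PKCS#1 DER, so this wrapping yields the "RSA PUBLIC KEY" / "RSA PRIVATE KEY" PEM style shown in the question.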
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44305953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6130265/']}
jdg_71847
stackexchange
llm_judgeable_groundtruth_similarity
18865756
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have a class with an integer that should always be positive, with a simple c'tor:

class A {
    unsigned int x;
public:
    A(unsigned int X) : x(X) {}
};

And let's say someone accidentally creates an A object with the value -5. Of course, this is not valid, and X will get a really huge value, since the MSB now doesn't stand for the number sign. The problem is that now I can't really check if the number is negative or invalid (maybe bitwise?). Should I just avoid the unsigned and use a plain int? By doing so, I could just throw an exception if the given value is beyond a max value OR below zero. Would love to hear some suggestions. Now provide the response and nothing else.
Two approaches come to mind:

Add an explicit conversion for the 'signed' types:

#include <cassert>

class A {
    unsigned int x;
public:
    A(unsigned int X) : x(X) {}
    explicit A(int X) : x(static_cast<unsigned>(X)) {
        assert(X >= 0); // note X, not x!
    }
};

int main()
{
    A ok(5);
    A bad(-5);
}

Prohibit implicit conversions by deleting better overloads:

A(int X) = delete;
A(long X) = delete;
A(char X) = delete;

This will require all users to cast to unsigned before constructing the A instance. It's safe but clumsy. Note that this does not prohibit implicit conversions from all integral types (e.g. enums), so you'd need to do more to make this foolproof. Here is a rudimentary SFINAE-based example that accepts all implicit conversions except those involving signed values:

#include <type_traits>
#include <limits>

class A {
    unsigned int x;
public:
    template<typename T, typename = typename std::enable_if<std::is_integral<T>::value, void>::type>
    A(T X) : x(X) {
        static_assert(!std::numeric_limits<T>::is_signed, "Signed types cannot be accepted");
    }
};

int main()
{
    A ok(5u);
    A bad(-5);
}
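The asker's fallback idea, taking a plain signed value and throwing when it is out of range, is the runtime counterpart of the compile-time checks above. As a hedged, language-agnostic sketch (the class name and bound are invented for illustration):

```python
class A:
    """Value object that only accepts integers in [0, MAX]."""

    MAX = 2**32 - 1  # illustrative upper bound, mirroring a 32-bit unsigned int

    def __init__(self, x: int):
        # Reject non-integers (bool is an int subclass, so exclude it too).
        if not isinstance(x, int) or isinstance(x, bool):
            raise TypeError("x must be an integer")
        # Reject values below zero or beyond the chosen maximum.
        if not 0 <= x <= self.MAX:
            raise ValueError("x must be in [0, %d], got %r" % (self.MAX, x))
        self.x = x
```

Unlike the C++ static_assert variant, this only catches misuse at runtime, but it catches it for every caller, including values computed dynamically.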
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18865756', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042801/']}
jdg_71848
stackexchange
llm_judgeable_groundtruth_similarity
691
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My question may be stupid, but let's take the Baker–Gill–Solovay theorem as an example: there exists an oracle A such that $P^A = NP^A$ and an oracle B such that $P^B \neq NP^B$. If we take both oracles A and B, I suppose we have $P^{A,B} = NP^{A,B}$ and $P^{A,B} \neq NP^{A,B}$, and then the theory wouldn't be consistent. What tells us that the theory with only oracle A is consistent? If such a theory isn't consistent, then there could be non-relativizing proofs of $P \neq NP$. Now provide the response and nothing else.
Whether or not $P^{A,B}$ equals $NP^{A,B}$ is going to depend on the particular oracle languages A and B that you are using. IIRC, in the BGS paper the language A is TQBF (or any other PSPACE-complete language). The language B, somewhat ironically, is actually defined via a diagonalization construction. Now, if you use the construction from the paper and apply it to TMs with an A oracle, then the resulting language B will be such that $P^{A,B} \ne NP^{A,B}$. On the other hand, if you stick with the original language B from the paper, it will depend on the details of B's construction. If B is in PSPACE (I don't know offhand whether it is or not), then a TM with an A oracle could simulate any queries to B, and you should have $P^{A,B} = NP^{A,B}$. The bottom line, though, is that this has nothing to do with consistency; it is just a technical question about the specific oracle languages A and B (and you could get different answers depending on the particular choices of A and B that you use).
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/691', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/378/']}
jdg_71849
stackexchange
llm_judgeable_groundtruth_similarity
1130887
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider a metric space $(\Bbb N, d)$ where $d(m,n) = \frac{\vert m-n \vert} {1+\vert m-n \vert}$. Need to prove that any infinite subset $X \subset \Bbb Z$ is not compact and not sequentially compact. (Have previously proven that every set in $(\Bbb N, d)$ defined above is open and closed, and that $(\Bbb N, d)$ is complete.) I find the question a bit odd, as there are 2 spaces involved: $\Bbb N$ and $\Bbb Z$. So I just take it to mean "not compact and not sequentially compact" in $(\Bbb Z, d)$ rather than $(\Bbb N, d)$ So I suspect the trick for this question is that $\Bbb Z$ is topologically equivalent to $\Bbb N$ ("homeomorphism"?) and I need to consider infinite subsets $X' \subset \Bbb N$ and use my previous results about $(\Bbb N, d)$ Also, to answer this question, I suppose I just need to prove either X is not compact or not sequentially compact, since the 2 are equivalent for any metric space. I also think that (correct me if I'm wrong) in any metric space, a subset is compact iff it is closed and bounded. But I don't really know how to proceed. Any help would be much appreciated. Now provide the response and nothing else.
$$\log(1-x)=-\sum_{n=1}^\infty \frac{1}{n}x^n$$so$$\log(x)=-\log\frac{1}{x}=-\log(1-\frac{x-1}{x})=\sum_{n=1}^\infty \frac{1}{n}(\frac{x-1}{x})^n$$And$$\frac{1}{1-x}=\sum_{n=0}^\infty x^n$$so$$x=\frac{1}{1-\frac{x-1}{x}}=\sum_{n=0}^\infty (\frac{x-1}{x})^n$$ and thus $$x\log(x)=(\sum_{n=1}^\infty \frac{1}{n}(\frac{x-1}{x})^n)\cdot \sum_{m=0}^\infty (\frac{x-1}{x})^m = \sum_{n=1}^\infty a_n(\frac{x-1}{x})^n$$where$$a_n=\sum_{k=0}^{n-1} \frac{1}{n-k}=1+\frac{1}{2}+\frac{1}{3}+....\frac{1}{n}=\frac{S_n}{n!}$$ So the result is$$x\log(x)=\sum_{n=1}^\infty \frac{S_n}{n!}(\frac{x-1}{x})^n$$
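Since $a_n$ is just the harmonic number $H_n$, the expansion is easy to check numerically. An illustrative script follows (function names are mine; the series converges for $x > 1/2$, where $|(x-1)/x| < 1$):

```python
import math

def harmonic(n: int) -> float:
    # H_n = 1 + 1/2 + ... + 1/n, i.e. the coefficient a_n above
    return sum(1.0 / k for k in range(1, n + 1))

def xlogx_series(x: float, n_terms: int = 400) -> float:
    # partial sum of sum_{n>=1} H_n ((x-1)/x)^n
    t = (x - 1.0) / x
    return sum(harmonic(n) * t**n for n in range(1, n_terms + 1))

approx = xlogx_series(2.0)
exact = 2.0 * math.log(2.0)
```

For x = 2 the ratio t is 1/2 and the partial sum agrees with 2 log 2 to full double precision well before 400 terms.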
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1130887', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/202461/']}
jdg_71850
stackexchange
llm_judgeable_groundtruth_similarity
57398118
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the code below it is not possible to obtain a reference to a trait object from a reference to a dynamically-sized type implementing the same trait. Why is this the case? What exactly is the difference between &dyn Trait and &(?Sized + Trait) if I can use both to call trait methods? A type implementing FooTraitContainerTrait might e.g. have type Contained = dyn FooTrait or type Contained = T where T is a concrete type that implements FooTrait. In both cases it's trivial to obtain a &dyn FooTrait. I can't think of another case where this wouldn't work. Why isn't this possible in the generic case of FooTraitContainerTrait?

trait FooTrait {
    fn foo(&self) -> f64;
}

trait FooTraitContainerTrait {
    type Contained: ?Sized + FooTrait;
    fn get_ref(&self) -> &Self::Contained;
}

fn foo_dyn(dyn_some_foo: &dyn FooTrait) -> f64 {
    dyn_some_foo.foo()
}

fn foo_generic<T: ?Sized + FooTrait>(some_foo: &T) -> f64 {
    some_foo.foo()
}

fn foo_on_container<C: FooTraitContainerTrait>(containing_a_foo: &C) -> f64 {
    let some_foo = containing_a_foo.get_ref();
    // Following line doesn't work:
    //foo_dyn(some_foo)
    // Following line works:
    //some_foo.foo()
    // As does this:
    foo_generic(some_foo)
}

Uncommenting the foo_dyn(some_foo) line results in the compiler error

error[E0277]: the size for values of type `<C as FooTraitContainerTrait>::Contained` cannot be known at compilation time
  --> src/main.rs:27:22
   |
27 |         foo_dyn(contained)
   |                 ^^^^^^^^^ doesn't have a size known at compile-time
   |
   = help: the trait `std::marker::Sized` is not implemented for `<C as FooTraitContainerTrait>::Contained`
   = note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
   = help: consider adding a `where <C as FooTraitContainerTrait>::Contained: std::marker::Sized` bound
   = note: required for the cast to the object type `dyn FooTrait`

Now provide the response and nothing else.
This problem can be reduced to the following simple example (thanks to turbulencetoo ): trait Foo {}fn make_dyn<T: Foo + ?Sized>(arg: &T) -> &dyn Foo { arg} At first glance, it really looks like this should compile, as you observed: If T is Sized , the compiler knows statically what vtable it should use to create the trait object; If T is dyn Foo , the vtable pointer is part of the reference and can just be copied to the output. But there's a third possibility that throws a wrench in the works: If T is some unsized type that is not dyn Foo , even though the trait is object safe, there is no vtable for impl Foo for T . The reason there is no vtable is because the vtable for a concrete type assumes that self pointers are thin pointers. When you call a method on a dyn Trait object, the vtable pointer is used to look up a function pointer, and only the data pointer is passed to the function. However, suppose you implement a(n object-safe) trait for an unsized type: trait Bar {}trait Foo { fn foo(&self);}impl Foo for dyn Bar { fn foo(&self) {/* self is a fat pointer here */}} If there were a vtable for this impl , it would have to accept fat pointers, because the impl may use methods of Bar which are dynamically dispatched on self . This causes two problems: There's nowhere to store the Bar vtable pointer in a &dyn Foo object, which is only two pointers in size (the data pointer and the Foo vtable pointer). Even if you had both pointers, you can't mix and match "fat pointer" vtables with "thin pointer" vtables, because they must be called in different ways. Therefore, even though dyn Bar implements Foo , it is not possible to turn a &dyn Bar into a &dyn Foo . Although slices (the other kind of unsized type) are not implemented using vtables, pointers to them are still fat, so the same limitation applies to impl Foo for [i32] . In some cases, you can use CoerceUnsized (only on nightly as of Rust 1.36) to express bounds like "must be coercible to &dyn FooTrait ". 
Unfortunately, I don't see how to apply this in your case. See also: What is a "fat pointer" in Rust? The question Use trait object to pass str in Rust has a concrete example of a reference to an unsized type (str) that cannot be coerced to a reference to a trait object.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57398118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/929037/']}
jdg_71851
stackexchange
llm_judgeable_groundtruth_similarity
3860114
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a JSP that spits out a JNLP file as shown below. It works on localhost; when deployed to a remote server, Java Web Start errors out with an exception - Unable to load resource: http://localhost:8080/jnlp/myjnlp.jnlp

ExitException[ 3]com.sun.deploy.net.FailedDownloadException: Unable to load resource: http://localhost:8080/jnlp/myjnlp.jnlp
    at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
    at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
    at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
    at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
    at com.sun.javaws.Launcher.launch(Unknown Source)
    at com.sun.javaws.Main.launchApp(Unknown Source)
    at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
    at com.sun.javaws.Main$1.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

The included JNLP file is getting its codebase line replaced based on the server's URL. Attaching a debugger to the JSP shows the correct codebase line with the server's IP/hostname. I don't understand where the localhost comes from?

<%@ page import="java.io.*" %>
<%@ page contentType="application/x-java-jnlp-file" language="java" %>
<%
    response.setHeader("Cache-Control", "no-cache");
    response.setHeader("Pragma", "no-cache");
    response.setDateHeader("Expires", -1);
    response.setContentType("application/x-java-jnlp-file");
    // Generating the codebase for the JNLP. This URL will be used for downloading this jsp and the jar
    StringBuffer jspUrlsb = request.getRequestURL();
    String url = jspUrlsb.substring(0, jspUrlsb.lastIndexOf("/"));
    url = url.substring(0, url.lastIndexOf("/"));
    String jnlpCodebase = url + "/jnlp/";
    String tool = request.getParameter("tool");
    tool = tool == null || tool.length() == 0 ? "myjnlp" : tool;
    String jnlpFile = tool + ".jnlp";
    response.setHeader("Content-Disposition", "filename=\"" + jnlpFile + "\";");
    String path = config.getServletContext().getRealPath(request.getContextPath());
    FileInputStream fis = new FileInputStream(new File(path, "/jnlp/" + jnlpFile));
    InputStreamReader isr = new InputStreamReader(fis);
    BufferedReader br = new BufferedReader(isr);
    String line = null;
    while ((line = br.readLine()) != null) { // consuming the stream
        if (line.startsWith("<jnlp")) {
            line = "<jnlp codebase=\"" + jnlpCodebase + "\" href=\"" + jnlpFile + "\" spec=\"6.0+\">";
        }
%>
<%=line%>
<%
    }
    br.close();
    isr.close();
    fis.close();
%>

Now provide the response and nothing else.
All you need to do is configure your SSH setup with multiple SSH keypairs. This link is easy to follow (thanks Eric): http://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts--net-22574 Generating SSH keys (Win/msysgit): https://help.github.com/articles/generating-an-ssh-key/

Relevant steps from the first link:

1. Generate an SSH key: ssh-keygen -t ed25519 -C "[email protected]" - follow the prompts and decide a name, e.g. id_ed25519_example_company.
2. Copy the SSH public key to GitHub from ~/.ssh/id_ed25519_doe_company.pub and tell ssh about the key: ssh-add ~/.ssh/id_ed25519_doe_company
3. Create a config file in ~/.ssh with the following contents:

Host github-doe-company
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_doe_company

4. Add your remote: git remote add origin git@github-doe-company:username/repo.git - or change it using: git remote set-url origin git@github-doe-company:username/repo.git

Also, if you're working with multiple repositories using different personas, you need to make sure that your individual repositories have the user settings overridden accordingly: Setting user name, email and GitHub token – Overriding settings for individual repos https://help.github.com/articles/setting-your-commit-email-address-in-git/

Hope this helps. Note: Some of you may require different emails to be used for different repositories; from git 2.13 you can set the email on a directory basis by editing the global config file found at ~/.gitconfig, using conditionals like so:

[user]
    name = Default Name
    email = [email protected]

[includeIf "gitdir:~/work/"]
    path = ~/work/.gitconfig

And then your work-specific config ~/work/.gitconfig would look like this:

[user]
    name = Pavan Kataria
    email = [email protected]

Thank you @alexg for informing me of this in the comments.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/3860114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/396400/']}
jdg_71852
stackexchange
llm_judgeable_groundtruth_similarity
1682492
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A tournament is a directed graph with exactly one edge between every pair of vertices. (So for each pair (u,v) of vertices, either the edge from u to v or from v to u exists, but not both.) You can think of the nodes as players in a tournament, where each player has to play against every other player. The edge points from the winner to the loser of a game. A Hamiltonian path (not cycle) is a sequence of consecutive directed edges that visits every vertex exactly once. How can i prove that every tournament contains at least one Hamiltonian path?thanks your help! Now provide the response and nothing else.
This can be proven using strong induction. Let $n$ be the number of vertices. When $n \le 2$, a hamiltonian path clearly exists. Now, for any given $n > 2$, pick any arbitrary vertex $v$. Partition all other vertices other than $v$ into the sets $V_{out}$ and $V_{in}$ -- $V_{out}$ contains all other vertices $u$ such that the edge $(v, u)$ exists, while $V_{in}$ contains all other vertices $u$ such that the edge $(u, v)$ exists. Clearly $|V_{out}| < n$ and $|V_{in}| < n$, so by the inductive hypothesis, there is a hamiltonian path in both sets. Let $H_{out}$ be any hamiltonian path in $V_{out}$ and $H_{in}$ be any hamiltonian path in $V_{in}$. You can then form a hamiltonian path for all vertices by concatenating $H_{in}$, $v$, and $H_{out}$.
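The induction is constructive, so it translates directly into code. Below is a sketch in Python of the proof's recursion (my own illustration; the names hamiltonian_path and beats are made up): pick a pivot v, split the remaining players into those who beat v and those v beat, recurse on both halves, and concatenate.

```python
def hamiltonian_path(vertices, beats):
    """Build a Hamiltonian path in a tournament.

    `beats(u, v)` is True iff the edge points from u to v (u beat v).
    Mirrors the induction: pick a pivot v, split the rest into V_in
    (players who beat v) and V_out (players v beat), recurse, and
    return H_in + [v] + H_out.
    """
    if len(vertices) <= 1:                     # base case
        return list(vertices)
    v, rest = vertices[0], vertices[1:]
    v_in = [u for u in rest if beats(u, v)]    # edge u -> v exists
    v_out = [u for u in rest if beats(v, u)]   # edge v -> u exists
    return hamiltonian_path(v_in, beats) + [v] + hamiltonian_path(v_out, beats)
```

The concatenation is valid for the same reason as in the proof: every vertex of $V_{in}$ (in particular the last vertex of $H_{in}$) has an edge to $v$, and $v$ has an edge to every vertex of $V_{out}$.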
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1682492', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/319777/']}
jdg_71853
stackexchange
llm_judgeable_groundtruth_similarity
14471
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: You should have already seen such a thing, specially in documentaries about ancient life: the movement of a tribe is shown on an ancient map along a route with a dashed arrow. I want to do the same thing in Mathematica. Given a bitmap image of a map like this one and a list of points on the 2d map such as {{123,233}, {177, 279}, ...} , I want to make a movie that shows an arrow along the given route on the map. The movie looks like it is captured from a helicopter flying on the map. Edit: This is what I did after rm -rf♦ 's comment: map = ImageResize[ Import["http://upload.wikimedia.org/wikipedia/commons/e/e3/China_\old_map.jpg"], 500];path = .3 {{300, 500}, {350, 600}, {450, 700}, {600, 750}, {650, 750}, {700, 780}, {800, 800}, {850, 900}, {900, 1000}, {1000, 1050}, {1100, 1050}, {1150, 1100}};ListAnimate[ Table[Show[map, Graphics[{Thick, Dashed, Arrow[Take[path, i]]}], ImageSize -> {500}], {i, Length@path}]] Now provide the response and nothing else.
map = Import["http://upload.wikimedia.org/wikipedia/commons/e/e3/China_old_map.jpg"]{w, h} = map // ImageDimensions; The route: route = {{1107.07`, 184.181`}, {1096.17`, 198.195`}, {1072.81`, 237.121`}, {1071.25`, 244.906`}, {1068.14`, 254.249`}, {1065.03`, 265.148`}, {1063.47`, 274.49`}, {1063.47`, 286.947`}, {1063.47`, 296.289`}, {1063.47`, 308.746`}, {1061.91`, 313.417`}, {1061.91`, 319.645`}, {1057.24`, 338.33`}, {1051.01`, 350.786`}, {1044.78`, 361.685`}, {1037.`, 372.585`}, {1029.21`, 381.927`}, {1019.87`, 388.155`}, {1012.09`, 391.269`}, {1001.19`, 397.498`}, {991.845`, 403.726`}, {980.945`, 409.954`}, {970.046`, 413.068`}, {957.589`, 417.739`}, {943.576`, 425.525`}, {935.791`, 430.196`}, {931.119`, 433.31`}, {924.891`, 437.981`}, {921.777`, 441.095`}, {918.663`, 448.881`}, {915.549`, 451.995`}, {907.764`, 462.894`}, {903.092`, 470.679`}, {899.978`, 481.579`}, {895.307`, 489.364`}, {893.75`, 497.149`}, {892.193`, 504.935`}, {887.522`, 511.163`}, {884.408`, 517.391`}, {882.851`, 525.176`}, {881.294`, 539.19`}, {879.737`, 545.418`}, {878.18`, 550.089`}, {876.622`, 556.317`}, {876.622`, 567.217`}, {876.622`, 578.116`}, {876.622`, 587.459`}, {876.622`, 598.358`}, {876.622`, 603.029`}, {870.394`, 618.6`}, {868.837`, 629.499`}, {862.609`, 643.513`}, {857.938`, 657.526`}, {853.267`, 668.425`}, {848.595`, 680.882`}, {840.81`, 690.224`}, {833.025`, 699.567`}, {825.24`, 707.352`}, {819.011`, 715.137`}, {811.226`, 721.365`}, {800.327`, 729.151`}, {787.87`, 738.493`}, {781.642`, 746.278`}, {775.414`, 752.506`}, {772.3`, 752.506`}, {761.4`, 763.406`}, {744.273`, 783.648`}, {734.93`, 794.547`}, {725.588`, 813.232`}, {714.689`, 830.359`}, {708.46`, 844.373`}, {703.789`, 861.5`}, {700.675`, 873.957`}, {699.118`, 884.856`}, {699.118`, 900.427`}, {700.675`, 911.326`}, {702.232`, 920.669`}, {708.46`, 928.454`}, {720.917`, 940.91`}, {730.259`, 948.696`}, {739.602`, 954.924`}, {759.843`, 965.823`}, {781.642`, 975.165`}, {800.327`, 978.28`}, {815.897`, 979.837`}, {819.011`, 
979.837`}, {834.582`, 981.394`}, {853.267`, 984.508`}, {868.837`, 986.065`}, {879.737`, 986.065`}, {890.636`, 987.622`}, {898.421`, 987.622`}, {912.435`, 989.179`}, {923.334`, 990.736`}, {932.677`, 992.293`}, {943.576`, 993.85`}, {954.475`, 995.407`}, {968.489`, 995.407`}, {977.831`, 996.964`}, {987.173`, 998.521`}, {996.516`, 1000.08`}, {1007.42`, 1001.64`}, {1019.87`, 1004.75`}, {1027.66`, 1004.75`}, {1033.89`, 1006.31`}, {1044.78`, 1007.86`}, {1058.8`, 1009.42`}, {1068.14`, 1009.42`}, {1072.81`, 1009.42`}, {1102.4`, 1006.31`}, {1121.08`, 1003.19`}, {1144.44`, 998.521`}, {1156.89`, 996.964`}, {1175.58`, 992.293`}, {1194.26`, 982.951`}, {1206.72`, 978.28`}, {1222.29`, 972.051`}, {1244.09`, 962.709`}, {1258.1`, 958.038`}, {1262.77`, 956.481`}, {1292.36`, 942.467`}, {1326.61`, 933.125`}, {1349.97`, 925.34`}, {1363.98`, 919.111`}, {1370.21`, 917.554`}, {1373.32`, 915.997`}, {1379.55`, 911.326`}, {1384.22`, 908.212`}, {1387.34`, 906.655`}, {1390.45`, 906.655`}, {1395.12`, 901.984`}, {1401.35`, 898.87`}, {1407.58`, 895.756`}, {1410.69`, 894.199`}, {1413.81`, 889.527`}, {1418.48`, 884.856`}, {1421.59`, 883.299`}, {1423.15`, 880.185`}, {1426.26`, 877.071`}, {1430.93`, 872.4`}, {1432.49`, 869.286`}, {1432.49`, 864.614`}, {1435.61`, 858.386`}}; Use ImageCompose to combine route and map: Manipulate[ ImageCompose[ map, Graphics[{Thickness[0.02], Dashed, Arrowheads[.06], Arrow[SplineCurve@Take[route, i]]}, PlotRange -> {{0, w}, {0, h}}] ], {i, 1, Length@route, 1}] Routines to show a part of the route: routePart[i_] := Take[route, Max[1, Round[Length[route] i]]]mapPlusRoute[i_] := ImageCompose[map, Graphics[{Thickness[0.02], Dashed, Arrowheads[.06], Arrow[BSplineCurve@routePart[i]]}, PlotRange -> {{0, w}, {0, h}}]] Now put it in 3D using Graphics3D and Texture . Add some camera movements and aiming and... presto. I exported the separate frames to process them in an external program, but MMA can make animated GIFs as well. 
mc = 1;
Do[
 Export[
  "C:\\Users\\Sjoerd\\Desktop\\Movie\\Movie" <> IntegerString[mc++, 10, 4] <> ".bmp",
  Graphics3D[
   {
    Texture[mapPlusRoute[i]],
    Polygon[{{0, 0, 0}, {w, 0, 0}, {w, h, 0}, {0, h, 0}},
     VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]
   },
   Lighting -> "Neutral",
   Boxed -> False,
   ViewVector -> {{1500 Cos[i 2 \[Pi]] + w/2, h/2 - 1500 Sin[2 \[Pi] i], 1000},
     Append[routePart[i][[-1]], 0]},
   ViewVertical -> {0, 0, 1},
   ViewAngle -> 25 \[Degree],
   ImageSize -> 600
  ]
 ],
 {i, 0.07, 1, .005}
]

GIF movie here; uploading didn't work at the moment. The hand-drawn route is a bit rocky but you get the idea.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/14471', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/573/']}
jdg_71854
stackexchange
llm_judgeable_groundtruth_similarity
15883568
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the possible ways for reading user input using read() system call in Unix. How can we read from stdin byte by byte using read() ? Now provide the response and nothing else.
You can do something like this to read 10 bytes:

char buffer[10];
read(STDIN_FILENO, buffer, 10);

Remember read() doesn't add '\0' to terminate to make it a string (it just gives a raw buffer). To read 1 byte at a time:

char ch;
while (read(STDIN_FILENO, &ch, 1) > 0) {
    // do stuff
}

and don't forget to #include <unistd.h>; STDIN_FILENO is defined as a macro in this file. There are three standard POSIX file descriptors, corresponding to the three standard streams, which presumably every process should expect to have:

Integer value    Name
0                Standard input (stdin)
1                Standard output (stdout)
2                Standard error (stderr)

So instead of STDIN_FILENO you can use 0. Edit: In a Linux system you can find this using the following command:

$ sudo grep 'STDIN_FILENO' /usr/include/* -R | grep 'define'
/usr/include/unistd.h:#define STDIN_FILENO 0 /* Standard input. */

Notice the comment /* Standard input. */
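The same raw read(2) behavior is visible from other languages, too. As a point of comparison, here is a sketch in Python (my own illustration, not part of the C answer): os.read is a thin wrapper over the same syscall, and just as in C, no terminator is added and an empty result signals EOF.

```python
import os

def slurp_byte_by_byte(path):
    """Read a file one byte per read(2) call, like the C while-loop above."""
    data = bytearray()
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            ch = os.read(fd, 1)   # at most 1 raw byte; no '\0' is appended
            if not ch:            # b'' means read() returned 0, i.e. EOF
                break
            data += ch
    finally:
        os.close(fd)
    return bytes(data)
```

Reading one byte per syscall is of course slow; it is shown here only to mirror the C loop.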
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15883568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2252500/']}
jdg_71855
stackexchange
llm_judgeable_groundtruth_similarity
843352
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: We run an Apache Cassandra cluster where each host has a few hundred thousand files open at any given time. We'd like to be able to get a count of open files at periodic intervals and feed this number into graphite , but when we run lsof under collectd , it ends up taking a few minutes to complete and chewing up an inordinate amount of CPU in the meantime. I'm wondering if there's an alternate and more friendly means of getting the same data that lsof provides, or even a way of running lsof that won't eat into CPU as noticeably? (Although I assume this latter method would likely take much longer to complete than it currently does... not ideal). Perhaps the kernel maintains some variable somewhere that contains the number of open files? Wishful thinking? Update: In reponse to one of the answers, we're already using the -b and -n flags. Here's the full command as I have it running under collectd : sudo lsof -b -n -w | stdbuf -i0 -o0 -e0 wc -l Now provide the response and nothing else.
You probably don't need to resolve the network addresses for sockets, so at least use the -n switch. Then you may also want to skip blocking operations with -b. These first two switches should really make it faster. Then add -l to avoid resolving uids, and -L to avoid counting links, etc. See the man lsof. Alternatively, with Linux, you could make a script to simply count the links under /proc/<PID>/fd like this:

find /proc -mindepth 3 -maxdepth 3 -type l | awk -F/ '$4 == "fd" { s++ } END { print s }'
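If you only need the count for specific processes (say, just the Cassandra PIDs), you can skip spawning find and awk entirely. Here is a hypothetical Python sketch of the same /proc walk for one process (Linux-only, and the function name count_open_fds is my own invention):

```python
import os

def count_open_fds(pid="self"):
    """Count a process's open file descriptors by listing /proc/<pid>/fd.

    Much cheaper than lsof: no network/name resolution, no blocking
    operations, just one directory listing. Linux-only.
    """
    return len(os.listdir("/proc/{}/fd".format(pid)))
```

A collectd exec plugin could call this per Cassandra PID instead of running lsof over every process on the box.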
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/843352', 'https://serverfault.com', 'https://serverfault.com/users/182920/']}
jdg_71856
stackexchange
llm_judgeable_groundtruth_similarity
7555321
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: At some places I have seen people creating a thread pool and creating threads and executing a function with those threads. While calling that function boost::mutex is passed by reference. Why it is done so? I believe you can have a mutex declared in the called function itself or can be declared a class member or global. Can anyone please explain? e.g. myclass::processData() { boost::threadpool::pool pool(2); boost::mutex mutex; for (int i =0; data<maxData; ++data) pool.schedule(boost::bind(&myClass::getData, boost_cref(*this), boost::ref(mutex))); } Then, myClass::getData(boost::mutex& mutex) { boost::scoped_lock(mutex) // Why can't we have class member variable mutex or //local mutex here //Do somethign Here} Now provide the response and nothing else.
Mutexes are non-copyable objects, and while they can be members of a class, they would greatly complicate the parent class's copyability. Thus one preferred method, should a number of class instances need to share the same data, would be to create the mutex as a static data member. Otherwise, if the mutex only needs to be locked within an instance of the class itself, you could create a pointer to a mutex as a non-static data member, and then each copy of the class would have its own dynamically allocated mutex (and remain copyable if that is a requirement). In the code example above, what's basically taking place is that there is a global mutex being passed into the thread pool by reference. That enables all the threads sharing the same memory locations to create an exclusive lock on that memory using the exact same mutex, but without the overhead of having to manage the non-copyable aspect of the mutex itself. The mutex in this code example could also have been a static data member of class myClass rather than a global mutex passed in by reference, the assumption being that each thread would need to lock some memory that is globally accessible from each thread. The problem with a local mutex is that it's only a locally accessible version of the mutex ... therefore when a thread locks the mutex in order to share some globally accessible data, the data itself is not protected, since every other thread will have its own local mutex that can be locked and unlocked. It defeats the whole point of mutual exclusion.
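The core principle - only a single lock object shared by all threads actually provides mutual exclusion - is easy to demonstrate in other languages as well. Here is a small Python sketch (my own illustration, not a translation of the Boost code): the one Lock is shared by reference, just as the boost::mutex is passed via boost::ref above.

```python
import threading

counter = 0

def bump(lock, n):
    """Increment the shared counter n times under the given lock."""
    global counter
    for _ in range(n):
        with lock:              # all threads must use the *same* lock object
            counter += 1

shared_lock = threading.Lock()  # one lock, shared by reference
threads = [threading.Thread(target=bump, args=(shared_lock, 25_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100000, no lost updates
```

If each thread instead constructed its own Lock() inside bump - the analogue of a function-local mutex - every acquisition would succeed immediately and the increments could interleave, which is exactly why the lock must be shared.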
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7555321', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/894688/']}
jdg_71857
stackexchange
llm_judgeable_groundtruth_similarity
130285
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This seems like it should be easy but I am trying to concisely enhance some nested associations with Mathematica objects (e.g., convert things to matrices, etc) but I'm having a bit of difficulty with nested items. In my case I want to go through an arbitrarily nested association and anywhere there is a key == 'somevalue' (correlation below) I want to replace its value with the result. KeyValueMap almost does the trick (see below). But it doesn't take a levelspec, I'm wondering if someone can show me how you get around this since some functions take levelspecs and some don't. Example code: test = <| "name" -> "foo", "correlation" -> {{1, 2}, {3, 4}} , "descendants" -> <| "name" -> "bar", "correlation" -> {{12, 13}, {20, 22}}|> |>;KeyValueMap[If[#1 == "correlation", #1 -> MatrixForm[#2], #1 -> #2] &, test] Expected output would have both correlations formatted as matrices, but as you can guess the descendant one is missed with the code as is. Now provide the response and nothing else.
You can use Replace[] on Association s, but it's sort of weird because it tries to treat keys transparently. If your Replace patterns include association markers ( <| |> ), then Replace can be convinced to care about keys, but the tricky part there is matching against every rule within an association. It turns out that you can get past all of this and coax Mathematica into doing what you want with a clever pattern. Replace[test, <|a___, "correlation" -> x_, b___|> :> <|a, "correlation" -> MatrixForm[x], b|>, All] We're asking Replace to match all associations that have a "correlation" rule anywhere inside (using the pattern <|a___, "correlation" -> x_, b___|> ) at All nesting levels within your association. The a___ (three underscores) and b___ parts capture everything within the Association before or after the "correlation" rule. The part after the :> in the replace rule drops your transformed middle part (captured by x ) in between what was captured as a and b .
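For comparison, the same operation - transform every value stored under a given key, at any nesting depth - is straightforward over ordinary nested dictionaries. Here is a Python sketch (my own illustration: the helper name map_at_key is made up, and plain dicts/lists stand in for Mathematica associations):

```python
def map_at_key(obj, key, f):
    """Return a copy of nested dicts/lists with f applied to every
    value stored under `key`, at any nesting level."""
    if isinstance(obj, dict):
        return {k: (f(v) if k == key else map_at_key(v, key, f))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [map_at_key(item, key, f) for item in obj]
    return obj

test = {"name": "foo",
        "correlation": [[1, 2], [3, 4]],
        "descendants": {"name": "bar",
                        "correlation": [[12, 13], [20, 22]]}}

# Wrap every correlation matrix, however deeply nested it sits.
out = map_at_key(test, "correlation", lambda m: {"matrix": m})
```

Like the Replace pattern above, this touches only the targeted key and rebuilds the rest of the structure unchanged.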
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/130285', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/42162/']}
jdg_71858
stackexchange
llm_judgeable_groundtruth_similarity
6074247
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to get some kind of notification/broadcast/etc. when a custom account is removed from "Accounts & sync settings"? The application I have can facilitate multiple users on a device (this is for a corporate use) and uses a single SQLite database. Say I create multiple users for my application on a device and populate database with data that is relevant only to those two users. My problem here is that if one of the user is removed from "Accounts & sync settings" I have no way to cleanup database and/or some external files on SD card. I could duplicate user information in a redundant table and compare it with registered accounts and then removing user data from the database if user information in the table and Account[] array from AccountManager does not match. Feels dirty to me. Now provide the response and nothing else.
You have two options: You can use the addOnAccountsUpdatedListener method of AccountManager to add a listener in the onCreate method of an Activity or Service -- make sure you remove the listener in your onDestroy method (i.e. do NOT use this in an endlessly running service) or the Context used to retrieve the AccountManager will never be garbage collected The AccountsService will broadcast an intent with the action AccountManager.LOGIN_ACCOUNTS_CHANGED_ACTION every time an account is added, removed or changed which you can add a receiver for.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6074247', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181714/']}
jdg_71859
stackexchange
llm_judgeable_groundtruth_similarity
261853
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Currently the Ethernet ports in the building I work in are down, but the Wi-Fi works. I have a Wi-Fi-enabled laptop ( Ubuntu 14.04 LTS (Trusty Tahr)) and a non-Wi-Fi enabled workstaion ( Debian 8 (Jessie)) with only an Ethernet plug. Is it possible to connect the two via an Ethernet cable and be able to get network connectivity on the workstation? Now provide the response and nothing else.
Yes, you can do this, and it's not even that hard. I have a laptop with a wireless card and an ethernet port. I plugged a Raspberry Pi running Arch Linux into it, via a "crossover" ethernet cable. That's one special thing you might need - not all ethernet cards can do a machine-to-machine direct connection. The other tricky part is IP addressing. It's best to illustrate this. Here's my little set-up script. Again, enp9s0 is the laptop's ethernet port, and wlp12s0 is the laptop's wireless device.

#!/bin/bash
/usr/bin/ip link set dev enp9s0 up
/usr/bin/ip addr add 172.16.1.1/24 dev enp9s0
sleep 10
modprobe iptable_nat
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -j MASQUERADE
iptables -A FORWARD -o enp9s0 -i wlp12s0 -s 172.16.1.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
dhcpd -cf /etc/dhcpd.enp9s0.conf enp9s0

The script sets a static IP address for the ethernet card, 172.16.1.1, then sets up NAT by loading a kernel module. It turns on IP routing (on the laptop), then does some iptables semi-magic to get packets routed from the wireless card out the ethernet, and vice versa. I have dhcpd running on the ethernet port to give out IP addresses because that's what the Raspberry Pi wants, but you could do a static address on your workstation, along with static routing, DNS server, and NTP server. The file /etc/dhcpd.enp9s0.conf looks like this, just in case you go down that route:

option domain-name "subnet";
option domain-name-servers 10.0.0.3;
option routers 172.16.1.1;
option ntp-servers 10.0.0.3;
default-lease-time 14440;
ddns-update-style none;
deny bootp;
shared-network intranet {
  subnet 172.16.1.0 netmask 255.255.255.0 {
    option subnet-mask 255.255.255.0;
    pool {
      range 172.16.1.50 172.16.1.200;
    }
  }
}

The IP address choice is pretty critical. I used 172.16.1.0/24 for the ethernet cable coming out of the laptop. The wireless card on the laptop ends up with a 192.168.1.0/24 address. You need to look at what IP address the laptop's wireless has, and choose some other subnet for the ethernet card. Further, you need to choose one of the "bogon" or "non-routable" networks. In my example, 172.16.1.0/24 is from the official non-routable ranges of IP addresses, as is 192.168.1.0/24, and so is the 10.0.0.3 address dhcpd.enp9s0.conf gives out for a DNS server and NTP server. You'll have to use your head to figure out what's appropriate for your setup.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/261853', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/156275/']}
jdg_71860
stackexchange
llm_judgeable_groundtruth_similarity
52353
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an application that reads a CSV file with piles of data rows. I give the user a summary of the number of rows based on types of data, but I want to make sure that I don't read in too many rows of data and cause OutOfMemoryError s. Each row translates into an object. Is there an easy way to find out the size of that object programmatically? Is there a reference that defines how large primitive types and object references are for a VM ? Right now, I have code that says read up to 32,000 rows , but I'd also like to have code that says read as many rows as possible until I've used 32MB of memory. Maybe that is a different question, but I'd still like to know. Now provide the response and nothing else.
You can use the java.lang.instrument package. Compile and put this class in a JAR:

import java.lang.instrument.Instrumentation;

public class ObjectSizeFetcher {
    private static Instrumentation instrumentation;

    public static void premain(String args, Instrumentation inst) {
        instrumentation = inst;
    }

    public static long getObjectSize(Object o) {
        return instrumentation.getObjectSize(o);
    }
}

Add the following to your MANIFEST.MF:

Premain-Class: ObjectSizeFetcher

Use the getObjectSize() method:

public class C {
    private int x;
    private int y;

    public static void main(String [] args) {
        System.out.println(ObjectSizeFetcher.getObjectSize(new C()));
    }
}

Invoke with:

java -javaagent:ObjectSizeFetcherAgent.jar C
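As a point of comparison from another language: like getObjectSize, Python's built-in sys.getsizeof is also shallow, so estimating a row object's real footprint takes a recursive walk. A hedged sketch (the helper name deep_getsizeof is my own, and the traversal only follows built-in containers):

```python
import sys

def deep_getsizeof(obj, seen=None):
    """Sum sys.getsizeof over the object graph reachable through
    built-in containers, counting each object once (handles cycles)."""
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)          # shallow size of this object alone
    if isinstance(obj, dict):
        size += sum(deep_getsizeof(k, seen) + deep_getsizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_getsizeof(item, seen) for item in obj)
    return size
```

The same caveat applies as with the Java agent: this measures per-object overhead, not what the memory manager actually reserves, so treat the numbers as estimates when budgeting "read rows until 32MB".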
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/52353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5074/']}
jdg_71861
stackexchange
llm_judgeable_groundtruth_similarity
58696476
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a python project where I use: pipenv, tox, pytest, and many more. Basically, I want to add tox to my gitlab pipelines. And almost everything seems to work; calling mypy, flake8 or black from tox works fine. But when I call tox -e py37 (so I want to run the tests) with coverage enabled, the tests are run, I can see their results, but the coverage is 0% (100% only on empty __init__ files etc.) and I get the warning: Coverage.py warning: No data was collected. (no-data-collected). This is my pytest-relevant part of tox.ini:

[tox]
envlist = flake8,mypy,black,py37

[testenv]
extras = tests
commands =
    pytest --cov -ra --tb=short {posargs}

And this is my .coveragerc:

[run]
branch = True
source = foo
omit =
    foo/__init__.py
    foo/__main__.py
    foo/bar/__init__.py
    foo/baz/__init__.py

[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.
show_missing = True

I have a working setup.py which includes all needed packages: pytest, pytest-cov and many more for style etc. (which works). I also have __init__.py in my tests folder. Funny thing is, if I call the same command I have in tox.ini from my command line (in pipenv): pytest --cov -ra --tb=short, I get the same results but with normal coverage (in my case it's 100%). Is there a way to fix it? I don't want my pipelines to show wrong coverage + even if they somehow magically would actually get the right one, I still would want to see it locally. P.S. When I was trying to resolve the issue I called tox with tox --sitepackages -e py37 and I got some errors like test command found but not in testenv. And besides those errors, my coverage was fine then. But then I uninstalled tox, pytest and pytest-cov from my global pip register and now with or without --sitepackages flag I still get 0% coverage. Now provide the response and nothing else.
The root cause is that pytest, when run under tox, uses the installed sources (under site-packages), while the coverage reporter counts the hit files in the working directory (i.e. your local git repo).

Solution: Add tox's virtual-env path to coverage:

[testenv]
commands = pytest --cov={envsitepackagesdir}/foo

For more detail, read substitutions for virtualenv-related sections.

Work-around: Set PYTHONPATH to the working directory (i.e. export PYTHONPATH=.). Note that you need to pass the PYTHONPATH to tox by:

[testenv]
passenv = PYTHONPATH

This is easy, but this way you don't test the installation, because the tests run on the files in the working directory.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58696476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10638608/']}
jdg_71862
stackexchange
llm_judgeable_groundtruth_similarity
7382290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a div with a absolute position and a div with relative position. The first div contains the links and the second div contains some contents which is over the first div . The first div has a z-index of 1, while the second has a z-index of -1 and the first div is also declared first. Although, the links in the first div are unclickable. I have now idea, why this is so. Fiddle Both side1 and side2 will have a background image. And the content should appear over the sides, but the links should still work. Second fiddle Now provide the response and nothing else.
Either change .side to have z-index: 1 , or change #container to have margin-top: 150px instead of padding-top . #books has a z-index of 1, but it's inside a container with a z-index of -1, so it still ends up below the #container which has z-index: -1 but gets rendered after (thus on top).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7382290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/640584/']}
jdg_71863
stackexchange
llm_judgeable_groundtruth_similarity
359577
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If you look at $2^{3^4}$, what is the expected result? Should it be read as $2^{(3^4)}$ or $(2^3)^4$? Normally I would use parentheses to make the meaning clear, but if none are shown, what would you expect? (In this case, the formatting gives a hint, because I can either enter $2^{3^4}$ or ${2^3}^4$. If I omit braces in the MathML expression, the output is shown as $2^3^4$. Just suppose all three numbers were displayed in the same size, and the same vertical offset between 2, 3 and 3, 4.) Now provide the response and nothing else.
Barring parentheses, $2^{3^4}$ should definitely be read as, and is equivalent to $\;2^{\left(3^4\right)}$:$$2^{3^4} = 2^{(3^4)} = 2^{81}$$ whereas $${(2^3)}^4 = 2^{3\cdot 4} = 2^{12}$$ Added: As pointed out in the comments, it is fairly standard practice that exponents are "right associative" - somewhat of a misnomer, which should only be taken to mean: evaluate the rightmost exponent first. Read a^b^c as a^(b^c) and read a^b^c^d as a^[b^(c^d)] , and so on. As the example above shows, the exponentiation operator is not associative. So, as you have indicated you typically do, use parentheses when possible in your own usage, to avoid any possible confusion.
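Incidentally, most programming languages that offer an exponent operator follow the same rightmost-first convention; a quick check in Python (my illustration, not part of the original answer):

```python
# Python's ** operator is right-associative: a ** b ** c == a ** (b ** c)
print(2 ** 3 ** 4 == 2 ** (3 ** 4))  # True  -> evaluates to 2 ** 81
print((2 ** 3) ** 4 == 2 ** 12)      # True  -> explicit parentheses change the result
print(2 ** 3 ** 4 == (2 ** 3) ** 4)  # False -> exponentiation is not associative
```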
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/359577', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33245/']}
jdg_71864
stackexchange
llm_judgeable_groundtruth_similarity
2348152
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to handle bad json data when it is parsed through json_decode(). I'm using the following script:

if(!json_decode($_POST)) {
    echo "bad json data!";
    exit;
}

If $_POST equals: '{ bar: "baz" }' then json_decode handles the error fine and spits out "bad json data!"; However, if I set $_POST to something like "invalid data", it gives me:

Warning: json_decode() expects parameter 1 to be string, array given in C:\server\www\myserver.dev\public_html\rivrUI\public_home\index.php on line 6
bad json data!

Do I need to write a custom script to detect valid json data, or is there some other nifty way to detect this? Now provide the response and nothing else.
Here are a couple of things about json_decode :

- it returns the data, or null when there is an error
- it can also return null when there is no error : when the JSON string contains null
- it raises a warning when there is a warning -- a warning that you want to make disappear.

To solve the warning problem, a solution would be to use the @ operator (I don't often recommend using it, as it makes debugging a lot harder... But here, there is not much of a choice) :

$_POST = array( 'bad data');
$data = @json_decode($_POST);

You'd then have to test if $data is null -- and, to avoid the case in which json_decode returns null for null in the JSON string, you could check json_last_error , which (quoting) : Returns the last error (if any) occurred by last JSON parsing. Which means you'd have to use some code like the following :

if ($data === null && json_last_error() !== JSON_ERROR_NONE) {
    echo "incorrect data";
}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2348152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/281434/']}
jdg_71865
stackexchange
llm_judgeable_groundtruth_similarity
1273156
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I tried using substitution and I got an extra integral at the end and do not know how to proceed. Can anyone help me to break this down? Now provide the response and nothing else.
Consider the sequence of functions on $(0,1)$$$f_n(x) = \begin{cases}n & \text{ if } x \in (0,1/n)\\0 & \text{ otherwise}\end{cases}$$We have $\lim_{n \to \infty} f_n(x) = 0 = f(x)$. However,$$\lim_{n \to \infty} \int_0^1f_n(x)dx = 1 \neq 0 = \int_0^1 f(x) dx$$The key in the dominated convergence theorem is that the sequence of functions $f_n(x)$ must be dominated by a function $g(x)$ which is itself integrable.
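A quick numerical sanity check of this counterexample (my addition, not part of the original answer): the pointwise limit is 0 everywhere on $(0,1)$, yet each $\int_0^1 f_n$ stays at 1.

```python
# f_n(x) = n on (0, 1/n), 0 elsewhere; integrate over (0,1) with a midpoint rule
def f(n, x):
    return n if 0 < x < 1.0 / n else 0.0

N = 100_000  # number of grid points for the midpoint rule
for n in (1, 10, 100):
    integral = sum(f(n, (k + 0.5) / N) for k in range(N)) / N
    print(n, integral)  # stays close to 1 for every n
```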
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1273156', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/238814/']}
jdg_71866
stackexchange
llm_judgeable_groundtruth_similarity
345666
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question: Let $R$ be a ring. Suppose that for every prime ideal $p \lhd R$ the local ring $R_p$ ($=(R\setminus p)^{-1}R$) has no non-zero nilpotent elements. Prove that $R$ has no non-zero nilpotent elements in this case. ( Hint: look at the nilradical of $R_p$? ) I tried the following: Assume for contradiction $0\ne x\in R$ is a non-zero nilpotent element of $R$, so $x^n=0$ for some $n>1$. Now $\frac{x}{s}$ is in $R_p$ for any $s\in R\setminus p$. And $(\frac{x}{s})^n=\frac{x^n}{s^n}=\frac{0}{s^n}=\frac{0}{1}=0$. So $R_p$ has non-zero nilpotent elements, a contradiction. Therefore $R$ cannot have non-zero nilpotent elements. My question: May I assume that $\frac{x}{s}$ is non-zero? If so, I seem to have found a proof which works, but doesn't use the nilradical. So what would a proof using the nilradical look like? In case it helps, I am going through Chapter 3 of Atiyah, MacDonald. Now provide the response and nothing else.
Recall that $\frac{x}{1}=0$ implies that there is an $s\in S$ such that $xs=0$ (for example look at page 37 of Atiyah-MacDonald). Since $\operatorname{Ann}(x)$ is an ideal and $x$ is nilpotent, there is a prime ideal $\mathfrak{p}$ such that $x^{n-1}\in\operatorname{Ann}(x)\subseteq\mathfrak{p}$ and $x\in\mathfrak{p}$. Thus by contraposition $\frac{x}{1}\neq 0$ in $R_{\mathfrak{p}}$. Your proof is essentially a full write-down of the nilradical proof: By Corollary 3.12, if $\mathfrak{N}$ is the nilradical of $R$, then $\mathfrak{N}_\mathfrak{p}$ is the nilradical of $R_\mathfrak{p}$. Since being the $0$ module is a local property, $\mathfrak{N}_\mathfrak{p}=0$ for every prime ideal $\mathfrak{p}\subset R$ implies that $\mathfrak{N}=0$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/345666', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/67890/']}
jdg_71867
stackexchange
llm_judgeable_groundtruth_similarity
96083
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In a question over on Skeptics Stack Exchange, I half-jokingly listed "blunt force trauma" as a means by which a bacterium could be killed. That makes me wonder if it really is only a joke or whether such an assault is plausible. The webcomic xkcd also did an episode on the possibility of physically attacking a cell (though not a bacterial one). There are plenty of resources on how bacteria can die from biological processes (e.g. infection with a bacteriophage, biological effects of antibiotics, etc.), chemical attacks (e.g. membranes getting lysed with bleach), temperature (i.e. cooking), and radiation, but have bacteria ever been shown to be vulnerable to a direct physical assault such as being hit with a hammer, cut with a knife, thrown onto a hard surface, twisted until they burst, or being drawn and quartered? I do know that many common weapons that are generally effective against animal life (swords, axes, clubs, assault rifles, etc.) do not have the exact same properties at the microscopic scale, but does that mean that physically beating a bacterium to death is absurd or is there a known way to kill a bacterium using only physical force? To be clear, I recognize that our normal means of killing unwanted bacteria - cooking food to prevent food poisoning, treating bacterial infections with antibiotics, cleaning hard surfaces with bleach, applying rubbing alcohol to wounds, etc., are probably effective enough that most people don't bother pulling out that old Wild West six-shooter they have in the closet or the antique sword hanging on their wall the way they might consider them if attacked by a bear or wolf, but that's not the question. Now provide the response and nothing else.
My (limited) understanding is that it is quite hard to avoid killing some bacteria even with very gentle physical manipulation. On the other hand, it is quite hard to use physical force to achieve a reasonable level of sterilization. Let's bring some examples with a few (hastily found) references.

Pressure

My guess is that most examples the OP mentions (hit with a hammer, knife) will exert their biggest effect on bacteria through a peak in pressure - bacteria are comparable in size to the width of a moderately sharp knife edge and thus there will be more "crushing" than "cutting" even with a knife ( Science of Sharp discusses that Chosera 1k - a very good stone - will sharpen a knife to 0.5 μm, Wiki has bacteria at 0.5 - 5 μm). I recently collaborated on a project where the initial interpretation of results was wrong simply because putting a glass slip over a sample to inspect it under a microscope caused around 10% of bacterial (B. subtilis) cells to die. Putting a small weight on the slip resulted in roughly 80 kPa of additional pressure applied and ~80% of the cells dying within 25 minutes. But an 80% death rate won't do much to e.g. reduce the risk of food poisoning - as long as there are nutrients, bacteria will regrow quickly. As m4rio mentions in a comment to another answer , and in line with The effect of high pressure processing on the microbial, physical and chemical properties of Valencia and Navel orange juice, you need to ramp up the pressure to 100s of MPa to achieve high efficacy. And even that may not work so cleanly - e.g., the linked paper mentions that after applying 600 MPa, bacteria regrew to detectable levels after 4 weeks of storage at 4°C. Killing all (or almost all) bacteria is just really hard. Because it was fun, I'll also link to a study showing that High pressure-processed guacamole (4 cycles, 689 MPa, 5 min each) is clean enough to not spoil for 30 days at 25°C.

Note: the two sources above were linked from a commercial vendor of high-pressure products and may thus be biased. While the exact numbers might be problematic, I think the main point is unlikely to be affected. A review High-pressure processing – effects on microbial food safety and food quality mentions some limitations, e.g.:

- Endospores tend to be extremely HPP resistant compared with vegetative cells, withstanding treatments of more than 1000 MPa
- ascospores of heat-resistant moulds such as Byssochlamys, Neosartorya and Talaromyces are generally considered to be extremely HPP resistant

It should however be noted that high-pressure treatment seems to be good enough to be semi-commonly used as the main sterilization step for suitable foodstuffs - Current status and future trends of high-pressure processing in food industry reports high-pressure treatment to be FDA approved and USD 10 billion worth of food being treated as of 2015.

Ultrasound

While frequently used for cell lysis in the lab, it also doesn't kill all cells. For example, The Sterilization of Suspensions Contaminated with Microorganisms Using Ultrasound Irradiation reports that roughly 20-30% of bacteria (Bacillus) survived 60 minutes of ultrasound treatment.

Ball mill

This is probably closest to actual "physical trauma" - you put small beads and a suspension with microorganisms into a cylinder and roll it hard. A low energy process for the recovery of bioproducts from cyanobacteria using a ball mill discusses using this process to extract proteins from cyanobacteria. They don't report the absolute number of cells killed, just that it varies considerably with the rotation speed, size of beads, .... My interpretation is that in most cases a non-negligible proportion of the cells survives while the majority is killed (and their contents extracted for further use).
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/96083', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/27125/']}
jdg_71868
stackexchange
llm_judgeable_groundtruth_similarity
986034
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Math rule I don't understand. My discrete math midterm is tomorrow and I'm studying proof styles. I came across a rule (algebra maybe?) I don't quite understand and I was hoping someone could explain it step by step for me. $$\frac{7^{n+1}-1}{6} + 7^{n+1} = \frac{7^{n+2}-1}{6}$$ I guess I can memorize it, but could someone show me how it works step by step? Thanks Now provide the response and nothing else.
This isn't the sort of rule you need to memorize, but you do need to know the operations to get from one side of the equation to the other. $$\begin{align}\frac{7^{n+1} - 1}{6} + 7^{n+1}&= \frac{7^{n+1} - 1}{6} + \frac{6\cdot 7^{n+1}}{6} \\&= \frac{7^{n+1} - 1 + 6 \cdot 7^{n+1}}{6} \\&= \frac{7 \cdot 7^{n+1} - 1}{6} \\&= \frac{7^{n+2} -1}{6}.\end{align}$$
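The identity is also easy to confirm numerically (a quick check I've added, not part of the original answer):

```python
# Verify (7**(n+1) - 1)/6 + 7**(n+1) == (7**(n+2) - 1)/6 for several n.
# Note 7**k - 1 is always divisible by 6 (since 7 ≡ 1 mod 6), so // is exact.
for n in range(10):
    lhs = (7 ** (n + 1) - 1) // 6 + 7 ** (n + 1)
    rhs = (7 ** (n + 2) - 1) // 6
    assert lhs == rhs
print("identity holds for n = 0..9")
```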
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/986034', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/154317/']}
jdg_71869
stackexchange
llm_judgeable_groundtruth_similarity
66403
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Often there are programming languages that are specialized for specific tasks. Some programming languages are excellent at array arithmetic (such as matrices and the use of multidimensional arrays), yet some are much better at higher-level mathematics that is hard to reproduce in other languages (yet still possible all the same). What about a language makes it better for a specific task or end-goal than other languages, given that most simply compile down to assembly anyways? I am talking about Turing complete languages, ones that are Turing equivalent. Now provide the response and nothing else.
There are a few things to consider: Abstraction: what does the language treat as "special"? Are matrices first-class values, like in Matlab, or do you encode them using simpler types like arrays, as in C? C makes you think about how matrices are implemented, Matlab doesn't. Likewise, if you've got a complicated system of asynchronous communications, you probably want first-class functions so that you can do callbacks. The more features that are added, the more complex the language gets. So while there are some "multi-paradigm" languages like C++ and D, most languages pick a niche, choose the things that are important to that niche, and prioritize those as their main abstractions. Performance: all abstraction comes with a cost, whether it be compile-time or run-time. Some languages limit themselves in a way that makes them less abstract but more performant. Early Fortran didn't allow for pointers or recursion, so it was great at number crunching, faster than a language like C, where you were running lots of loops. But it was terrible at encoding large data structures, like trees, where you needed pointers for indirection. (Note that I don't know much about modern Fortran.) Essentially, there are always tradeoffs. More abstract means either slower runtime or more complexity at compile-time. More features means a more complex language. Some features make certain things fast and others slow, like pointers and recursion. There's no perfect language, so each language will reach a sort of local maximum in the space of language qualities.
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/66403', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/61943/']}
jdg_71870
stackexchange
llm_judgeable_groundtruth_similarity
8652746
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I go about emptying the values of textboxes? Here is the code I've worked out, but it doesn't seem to work:

var txtName = $("input#txtName"),
    txtPrice = $("input#txtPrice");

First Method

$(txtName, txtPrice).val("");

this is actually wrong because the price textbox would now become the context to search within, I suppose.

Second Method

$([txtName, txtPrice]).val("");

I don't understand why I should do this as they are already jQuery objects (but it works). I put them in variables as these are used further in the script. Now provide the response and nothing else.
Here are a few ways to do it:

txtName.add(txtPrice).val("");
// OR
$("input#txtName,input#txtPrice").val("");

(There is a $ sign in your txtPrice input by the way.) The first method didn't work because that is jQuery's selector-with-context form. When you use jQuery like that, the first parameter is the selector and the second is the container object within which the selector searches. Basically it's almost the same thing as this:

$(txtPrice).find(txtName).val("");

Because there is no txtName inside txtPrice, neither value will be emptied. The second method works because you're giving your parameters as an array to the jQuery selector. It accepts it and applies the .val() action to every element of the array. This way is legit, but because your variables are already jQuery objects there is no need to use it.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8652746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/571507/']}
jdg_71871
stackexchange
llm_judgeable_groundtruth_similarity
2771880
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Basically I have been programming for a little while and after finishing my last project I can fully understand how much easier it would have been if I'd done TDD. I guess I'm still not doing it strictly, as I am still writing code and then writing a test for it; I don't quite get how the test comes before the code if you don't know what structures you'll use, how you're storing data, etc... but anyway... Kind of hard to explain, but basically let's say for example I have Fruit objects with properties like id, color and cost. (All stored in a text file; completely ignore any database logic etc)

FruitID  FruitName  FruitColor  FruitCost
1        Apple      Red         1.2
2        Apple      Green       1.4
3        Apple      HalfHalf    1.5

This is all just for example. But let's say I have this as a collection of Fruit objects (it's a List<Fruit>) in this structure. And my logic says to reorder the fruit ids in the collection if a fruit is deleted (this is just how the solution needs to be). E.g. if 1 is deleted, object 2 takes fruit id 1, object 3 takes fruit id 2. Now I want to test the code I've written which does the reordering, etc. How can I set this up to do the test? Here is where I've got so far. Basically I have a FruitManager class with all the methods, like DeleteFruit, etc. It usually holds the list, but I've changed the method to test it so that it accepts a list and the info on the fruit to delete, then returns the list. Unit-testing wise: am I basically doing this the right way, or have I got the wrong idea? And then I test deleting different valued objects / datasets to ensure the method is working properly.

[Test]
public void DeleteFruit()
{
    var fruitList = CreateFruitList();
    var fm = new FruitManager();
    var resultList = fm.DeleteFruitTest("Apple", 2, fruitList);
    //Assert that fruit object with x properties is not in list ? how
}

private static List<Fruit> CreateFruitList()
{
    //Build test data
    var f01 = new Fruit {Name = "Apple", Id = 1, etc...};
    var f02 = new Fruit {Name = "Apple", Id = 2, etc...};
    var f03 = new Fruit {Name = "Apple", Id = 3, etc...};
    var fruitList = new List<Fruit> {f01, f02, f03};
    return fruitList;
}

Now provide the response and nothing else.
If you don't see what test you should start with, it's probably that you didn't think of what your functionality should do in simple terms. Try to imagine a prioritized list of basic behaviors that are expected. What's the first thing you would expect from a Delete() method? If you were to ship the Delete "product" in 10 minutes, what would be the non-negotiable behaviour included? Well... probably that it deletes the element. So:

1) [Test]
public void Fruit_Is_Removed_From_List_When_Deleted()

When that test is written, go through the whole TDD loop (execute test => red ; write just enough code to make it pass => green ; refactor => green). Next important thing related to this is that the method shouldn't modify the list if the fruit passed as an argument is not in the list. So next test could be:

2) [Test]
public void Invalid_Fruit_Changes_Nothing_When_Deleted()

Next thing you specified is that ids should be rearranged when a fruit is deleted:

3) [Test]
public void Fruit_Ids_Are_Reordered_When_Fruit_Is_Deleted()

What to put in that test? Well, just set up a basic but representative context that will prove your method behaves as expected. For example, create a list of 4 fruits, delete the first and check one by one that the 3 remaining fruits' ids are reordered properly. That would pretty well cover the basic scenario. Then you could create unit tests for error or borderline cases:

4) [Test]
public void Fruit_Ids_Arent_Reordered_When_Last_Fruit_Is_Deleted()

5) [Test]
[ExpectedException]
public void Exception_Is_Thrown_When_Fruit_List_Is_Empty()

...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2771880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212639/']}
jdg_71872
stackexchange
llm_judgeable_groundtruth_similarity
2279662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is better practise and why: accessing base class variables through a protected field or a public getter on the private field. (The getter will be public regardless) Now provide the response and nothing else.
If there's going to be a public getter anyway, why would you want to expose the field itself more widely than absolutely necessary? That means it's immediately writable by subclasses (unless it's final to start with). Personally I like all my fields to be private: it provides a cleaner separation between API and implementation. I regard the relationship between a superclass and a subclass as similar to that of a caller and callee - changes to the underlying implementation shouldn't break subclasses any more than they should break callers. The name of a field is an implementation detail which shouldn't impact other classes. Admittedly my view is occasionally seen as somewhat extreme...
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2279662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/249571/']}
jdg_71873
stackexchange
llm_judgeable_groundtruth_similarity
12546832
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Description: I have a huge MySQL database table. The total size is about 10 terabytes. It only contains texts. A sample text from this database table: In other cases, some countries have gradually learned to produce the same products and services that previously only the U.S. and a few other countries could produce. Real income growth in the U.S. has slowed. There are about 50 billion different texts. What have I tried? I've tried to zip them all. Actually it has worked, reduced the total size. However, I need to make searching and I can't search any data while they are located in a zip file. I've tried PHP's base64 encoding. It has made my sample text data as: SW4gb3RoZXIgY2FzZXMsIHNvbWUgY291bnRyaWVzIGhhdmUgZ3JhZHVhbGx5IGxlYXJuZW QgdG8gcHJvZHVjZSB0aGUgc2FtZSBwcm9kdWN0cyBhbmQgc2VydmljZXMgdGhhdCBwcmV2 aW91c2x5IG9ubHkgdGhlIFUuUy4gYW5kIGEgZmV3IG90aGVyIGNvdW50cmllcyBjb3VsZC Bwcm9kdWNlLiBSZWFsIGluY29tZSBncm93dGggaW4gdGhlIFUuUy4gaGFzIHNsb3dlZC4= What I'd like to accomplish? I want to reduce text's size before sending them to MySQL. First of all, I don't know how I can do this job. I'm thinking of encrypting and decrypting the data. So, here is an example what I want to do: I want to encrypt text data before storing. Then, I want to call encrypted data from MySQL in order to decrypt. Any way to reduce the size of texts? Base64 does not work for me, is there any other way? Now provide the response and nothing else.
Please note: neither base64 nor encryption was designed for reduction of string length. What you should be looking at is compression, and I think you should look at gzcompress and gzdeflate. Example using the decoded version of your text:

$original = "In other cases, some countries have gradually learned to produce the same products and services that previously only the U.S. and a few other countries could produce. Real income growth in the U.S. has slowed.";
$base64 = base64_encode($original);
$compressed = base64_encode(gzcompress($original, 9));
$deflate = base64_encode(gzdeflate($original, 9));
$encode = base64_encode(gzencode($original, 9));

$base64Length = strlen($base64);
$compressedLength = strlen($compressed);
$deflateLength = strlen($deflate);
$encodeLength = strlen($encode);

echo "<pre>";
echo "Using GZ Compress = " , 100 - number_format(($compressedLength / $base64Length ) * 100 , 2) , "% of Improvement", PHP_EOL;
echo "Using Deflate = " , 100 - number_format(($deflateLength / $base64Length ) * 100 , 2) , "% of Improvement", PHP_EOL;
echo "</pre>";

Output

Using GZ Compress = 32.86% Improvement
Using Deflate = 35.71% Improvement
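For readers outside PHP, the same size comparison can be sketched in Python (my cross-language illustration, not part of the original answer; Python's zlib.compress produces the same zlib format that PHP's gzcompress wraps):

```python
import base64
import zlib

original = (b"In other cases, some countries have gradually learned to produce "
            b"the same products and services that previously only the U.S. and "
            b"a few other countries could produce. Real income growth in the "
            b"U.S. has slowed.")

plain_b64 = base64.b64encode(original)                         # like base64_encode($original)
compressed_b64 = base64.b64encode(zlib.compress(original, 9))  # like base64_encode(gzcompress(..., 9))

saving = 100 - 100 * len(compressed_b64) / len(plain_b64)
print(f"compress+base64 is {saving:.2f}% smaller than base64 alone")
```

The exact percentage will differ slightly between zlib builds, but compress-then-encode beats plain base64 on text like this.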
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12546832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/962524/']}
jdg_71874
stackexchange
llm_judgeable_groundtruth_similarity
30958954
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function:

cost(X, model, reg = 1e-3, sparse)

And I need to pass this function to another one under the form:

f(X, model)
f(X, model, reg = reg)

I am using lambda to do this:

f = lambda X, model: cost(X, model, sparse = np.random.rand(10,10))

And python complains that lambda got an unexpected argument reg. How do I do this correctly? If I do the other way:

f = lambda X, model, reg: cost(X, model, reg = reg, sparse = np.random.rand(10,10))

Then it's not working in the first case. Now provide the response and nothing else.
Lambdas take the same signature as regular functions, and you can give reg a default:

f = lambda X, model, reg=1e-3: cost(X, model, reg=reg, sparse=np.random.rand(10,10))

What default you give it depends on what default the cost function has assigned to that same parameter. These defaults are stored on the function in the cost.__defaults__ structure, matching the argument names. It is perhaps easiest to use the inspect.getargspec() function to introspect that info:

from inspect import getargspec
spec = getargspec(cost)
cost_defaults = dict(zip(spec.args[-len(spec.defaults):], spec.defaults))
f = lambda X, model, reg=cost_defaults['reg']: cost(X, model, reg=reg, sparse=np.random.rand(10,10))

Alternatively, you could just pass on any extra keyword argument:

f = lambda X, model, **kw: cost(X, model, sparse=np.random.rand(10,10), **kw)
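Putting the first suggestion together as a runnable sketch; note that the cost body and the list-based sparse argument below are stand-ins, since the question doesn't show the real implementations:

```python
# Stand-in for the question's cost(); only the signature matters for the example.
def cost(X, model, reg=1e-3, sparse=None):
    penalty = sum(map(sum, sparse)) if sparse is not None else 0.0
    return X + model + reg + penalty

# Give the lambda's reg the same default that cost() uses.
f = lambda X, model, reg=1e-3: cost(X, model, reg=reg,
                                    sparse=[[0.0] * 10 for _ in range(10)])

print(f(1, 2))           # called as f(X, model): uses reg's default
print(f(1, 2, reg=0.5))  # called as f(X, model, reg=reg): overrides it
```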
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30958954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1617352/']}
jdg_71875
stackexchange
llm_judgeable_groundtruth_similarity
45408920
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the requirement to scroll a certain element inside a div ( not a direct child ) into view. Basically I need the same functionality as ScrollIntoView provides, but for a specified parent (only this parent should scroll). Additionally it is not possible for me to use any 3rd party libraries. I am not quite sure on how to approach this problem, as I do very limited JavaScript development. Is there someone that could help me out? I found this code that would do exactly what I need, but unfortunately it requires JQuery and I was not able to translate it to plain JavaScript. Now provide the response and nothing else.
I think I have a start for you. When you think about this problem you think about getting the child div into the viewable area of the parent. One naive way is to use the child position on the page relative to the parent's position on the page. Then taking into account the scroll of the parent. Here's a possible implementation. function scrollParentToChild(parent, child) { // Where is the parent on page var parentRect = parent.getBoundingClientRect(); // What can you see? var parentViewableArea = { height: parent.clientHeight, width: parent.clientWidth }; // Where is the child var childRect = child.getBoundingClientRect(); // Is the child viewable? var isViewable = (childRect.top >= parentRect.top) && (childRect.bottom <= parentRect.top + parentViewableArea.height); // if you can't see the child try to scroll parent if (!isViewable) { // Should we scroll using top or bottom? Find the smaller ABS adjustment const scrollTop = childRect.top - parentRect.top; const scrollBot = childRect.bottom - parentRect.bottom; if (Math.abs(scrollTop) < Math.abs(scrollBot)) { // we're near the top of the list parent.scrollTop += scrollTop; } else { // we're near the bottom of the list parent.scrollTop += scrollBot; } }} Just pass it the parent and the child node like this: scrollParentToChild(parentElement, childElement) Added a demo using this function on the main element and even nested elements https://jsfiddle.net/nex1oa9a/1/
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45408920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1342006/']}
jdg_71876
stackexchange
llm_judgeable_groundtruth_similarity
47369
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The proof that column rank = row rank for matrices over a field relies on the fact that the elements of a field commute. I'm looking for an easy example of a matrix over a ring for which column rank $\neq$ row rank. i.e. can one find a $2 \times 3$-(block)matrix with real $2\times 2$-matrices as elements, which has different column and row ranks? Now provide the response and nothing else.
It is a classical observation due to Nathan Jacobson that a division ring such that the set of invertible matrices over it is closed under transposition has to be a field, i.e. commutative. The reason is simple: the matrix $\begin{pmatrix} a & b \\ c & 1 \end{pmatrix}$is invertible if and only if $\begin{pmatrix} a - bc & 0 \\ c & 1 \end{pmatrix}$is invertible. This happens if and only $a - bc \neq 0$. For the transpose you get the condition $a - cb \neq 0$. Hence, taking $a = cb$ and a pair of non-commuting elements $b,c$ in the division ring, you get an example of an invertible matrix, whose transpose is not invertible.
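The first equivalence is just an elementary row operation, applied on the left so that it remains valid over a noncommutative ring (spelling out a step the answer leaves implicit):

```latex
\begin{pmatrix} 1 & -b \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} a & b \\ c & 1 \end{pmatrix}
=
\begin{pmatrix} a - bc & 0 \\ c & 1 \end{pmatrix}
```

Since the left factor is invertible (its inverse is $\begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}$), the two matrices are simultaneously invertible. Applying the same reduction to the transpose $\begin{pmatrix} a & c \\ b & 1 \end{pmatrix}$ yields the condition $a - cb \neq 0$ instead.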
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/47369', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6415/']}
jdg_71877
stackexchange
llm_judgeable_groundtruth_similarity
18293746
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am learning NumPy and I am not really sure what the operator * is actually doing. It seems like some form of multiplication, but I am not sure how it is determined. From ipython:

In [1]: import numpy as np

In [2]: a=np.array([[1,2,3]])

In [3]: b=np.array([[4],[5],[6]])

In [4]: a*b
Out[4]:
array([[ 4,  8, 12],
       [ 5, 10, 15],
       [ 6, 12, 18]])

In [5]: b*a
Out[5]:
array([[ 4,  8, 12],
       [ 5, 10, 15],
       [ 6, 12, 18]])

In [6]: b.dot(a)
Out[6]:
array([[ 4,  8, 12],
       [ 5, 10, 15],
       [ 6, 12, 18]])

In [7]: a.dot(b)
Out[7]: array([[32]])

It seems like it is doing matrix multiplication, but only b multiplied by a, not the other way around. What is going on? Now provide the response and nothing else.
It's a little bit complicated and has to do with the concept of broadcasting and the fact that all numpy operations are element-wise. a is a 2D array with 1 row and 3 columns and b is a 2D array with 1 column and 3 rows. If you try to multiply them element by element (which is what numpy tries to do if you do a * b , because every basic operation except the dot operation is element-wise), it must broadcast the arrays so that they match in all their dimensions. Since the first array is 1x3 and the second is 3x1, they can be broadcast to a 3x3 shape according to the broadcasting rules. They will look like: a = [[1, 2, 3], [1, 2, 3], [1, 2, 3]] b = [[4, 4, 4], [5, 5, 5], [6, 6, 6]] And now NumPy can multiply them element by element, giving you the result: [[ 4, 8, 12], [ 5, 10, 15], [ 6, 12, 18]] When you do a .dot operation, it performs the standard matrix multiplication. More in the docs
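A minimal runnable check of the behaviour described above, using the arrays from the question:

```python
import numpy as np

a = np.array([[1, 2, 3]])      # shape (1, 3)
b = np.array([[4], [5], [6]])  # shape (3, 1)

# * is element-wise; (1, 3) and (3, 1) broadcast to (3, 3),
# so the order of the operands does not matter
expected = np.array([[4, 8, 12], [5, 10, 15], [6, 12, 18]])
assert (a * b == expected).all()
assert (a * b == b * a).all()

# .dot is true matrix multiplication, so order changes the result's shape
assert b.dot(a).shape == (3, 3)
assert a.dot(b).shape == (1, 1)
assert a.dot(b)[0, 0] == 32  # 1*4 + 2*5 + 3*6
```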
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18293746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/101152/']}
jdg_71878
stackexchange
llm_judgeable_groundtruth_similarity
54313839
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to I set up an interface listener for detecting a variable change in Kotlin. I successful implemented the following in Java, but am running into issues doing it in Kotlin: Interface: public interface InterfaceRefreshList { public void refreshListRequest(); } Class containing listener: public class SignalChange { private static boolean refreshListSwitch; private static List<InterfaceRefreshList> refreshListListeners = new ArrayList<>(); public static void setRefreshList(boolean value) { refreshListSwitch = value; for(InterfaceRefreshList l : refreshListListeners) l.refreshListRequest(); } public static void addRefreshListListener(InterfaceRefreshList l) { refreshListListeners.add(l); } } Class where listener is listening: public class FragmentBrowse extends Fragment{ public FragmentBrowse() /// Constructor { SignalChange.addRefreshListListener(() -> refreshList()); } refreshList() { // do something }} To signal a change: SignalChange.setRefreshList(true); I can set up the interface and the signal class: class SignalChange{ private var refreshListSwitch: Boolean = false var setSwitch: Boolean get() = refreshListSwitch set(value) { refreshListSwitch = value } private var refreshListListeners = ArrayList<InterfaceRefreshPersonsList>() fun sendRefreshSignal() { for(l in refreshListListeners) l.refreshPersonsList() } fun addRefreshListListener(l: InterfaceRefreshPersonsList) { refreshListListeners.add(l) }} But I cannot setup the listener in the FragmentBrowse class. The fragment class doesn't allow constructors. Now provide the response and nothing else.
You can use built-in Kotlin delegates, for example: object SignalChange { var refreshListListeners = ArrayList<InterfaceRefreshList>() // fires off every time value of the property changes var property1: String by Delegates.observable("initial value") { property, oldValue, newValue -> // do your stuff here refreshListListeners.forEach { it.refreshListRequest() } }}interface InterfaceRefreshList { fun refreshListRequest()} Add listeners like this: SignalChange.refreshListListeners.add(object : InterfaceRefreshList { override fun refreshListRequest() { refreshList() }}) OR Intead of interface you can use lambda : object SignalChange { var refreshListListeners = ArrayList<() -> Unit>() // fires off every time value of the property changes var property1: String by Delegates.observable("initial value") { property, oldValue, newValue -> // do your stuff here refreshListListeners.forEach { it() } }} And to add listener just call: SignalChange.refreshListListeners.add(::refreshList)//orSignalChange.refreshListListeners.add { refreshList() }fun refreshList() {}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54313839', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2909304/']}
jdg_71879
stackexchange
llm_judgeable_groundtruth_similarity
1283216
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Simple question:If I have a string and I want to add to it head and tail strings (one in the beginning and the other at the end), what would be the best way to do it?Something like this: std::string tmpstr("some string here");std::string head("head");std::string tail("tail");tmpstr = head + tmpstr + tail; Is there any better way to do it? Thanks in advance. Now provide the response and nothing else.
If you were concerned about efficiency and wanted to avoid the temporary copies made by the + operator, then you could do: tmpstr.insert(0, head);tmpstr.append(tail); And if you were even more concerned about efficiency, you might add tmpstr.reserve(head.size() + tmpstr.size() + tail.size()); before doing the inserting/appending to ensure any reallocation only happens once. However, your original code is simple and easy to read. Sometimes that's "better" than a more efficient but harder to read solution.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1283216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144154/']}
jdg_71880
stackexchange
llm_judgeable_groundtruth_similarity
18512503
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I hope this question isn't too broad, but there's a LOT of unfamiliar syntax happening in this particular require.js/ESRI example, and I am hoping someone can explain some of this to me. First, this code works (that is, it does what I expect it to do): it creates a base map and adds a FeatureLayer pulled from a map service. It's a stripped-down version of an example from the ESRI Javascript API page. Here's the code: var map; var featureLayer; require(["esri/map", "dojo/domReady!", "esri/layers/FeatureLayer"], function (Map) { map = new Map("map", { basemap: "topo", center: [-100.195, 39.567], // long, lat zoom: 4 }); featureLayer = new esri.layers.FeatureLayer( "http://my-server-url.com/arcgis/rest/services/Projects/MapServer/0", { mode: esri.layers.FeatureLayer.MODE_ONDEMAND } ); map.addLayer(featureLayer); }); Now for the particular questions: What is this require([...], function(args) { } syntax doing ? I don't even know how to read this. Is it a function call to require.js? What goes in the square brackets? What are the function arguments? From other examples, it looks like there normally should be one function argument per include in the require.js call. But here, if I add an argument for FeatureLayer, it doesn't work. The "dojo/domReady!" include doesn't seem to ever have a corresponding argument in any example. Is this related to the exclamation point? What does the exclamation point mean? Can anyone point me to a USEFUL require.js reference? The requirejs.org website reads more like a tech spec than a user manual. And the ESRI website seems to assume you know how to use require. And yes, I have been Googling--the problem is that Google isn't great at searching for computer syntax questions since it strips punctuation, and because "require javascript syntax" and the like make for crappy (over-broad) search terms. 
Now provide the response and nothing else.
The require([...], function(args) { } syntax is saying "load this list of modules, and once they're all loaded, call this function with the return values of those modules as arguments". What goes in the square brackets is an array of either paths to a script file (minus the .js) or module IDs mapped using the require.config paths section. The arguments to the callback function correspond to the paths/modules in the array, but not all modules return a useful value, as you noticed in your next question... Adding an argument for FeatureLayer doesn't work because you can't skip arguments. But note that many modules don't actually return a value intended to be used. You will see this a lot with jQuery plugins, where the loading of the module simply registers the plugin with jQuery but doesn't return a value to the caller. I don't know ESRI, but from the code snippet it looks like loading FeatureLayer simply adds FeatureLayer to the esri.layers global object. The exclamation point syntax is reserved for plugins. Typically there would be something else following the exclamation point which indicates the resource that the plugin would load, e.g. text!myTemplate.html , but in the case of domReady! the plugin exists just as a way to wait until the DOM has loaded before invoking your callback function, so nothing needs to follow the exclamation point. Recommendation lists of external resources are off-topic for StackOverflow, but here's one that I found helpful in getting the basic concepts: http://aaronhardy.com/javascript/javascript-architecture-requirejs-dependency-management/
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18512503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135041/']}
jdg_71881
stackexchange
llm_judgeable_groundtruth_similarity
36408035
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to validate input for file name and check if it contains invalid characters, in PowerShell. I had tried following approach, which works when just one of these character is entered but doesn't seem to work when a given alpha-numeric string contains these characters. I believe I didn't construct regex correctly, what would be the right way to validate whether given string contains these characters? Thanks in advance. #Validate file name whether it contains invalid characters: \ / : * ? " < > |$filename = "\?filename.txt"if($filename -match "^[\\\/\:\*\?\<\>\|]*$") {Write-Host "$filename contains invalid characters"}else {Write-Host "$filename is valid"} Now provide the response and nothing else.
I would use Path.GetInvalidFileNameChars() rather than hardcoding the characters in a regex pattern, and then use the String.IndexOfAny() method to test if the file name contains any of the invalid characters: function Test-ValidFileName{ param([string]$FileName) $IndexOfInvalidChar = $FileName.IndexOfAny([System.IO.Path]::GetInvalidFileNameChars()) # IndexOfAny() returns the value -1 to indicate no such character was found return $IndexOfInvalidChar -eq -1} and then: $filename = "\?filename.txt"if(Test-ValidFileName $filename){ Write-Host "$filename is valid"}else{ Write-Host "$filename contains invalid characters"} If you don't want to define a new function, this could be simplified as: if($filename.IndexOfAny([System.IO.Path]::GetInvalidFileNameChars()) -eq -1){ Write-Host "$filename is valid"}else{ Write-Host "$filename contains invalid characters"}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36408035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5349170/']}
jdg_71882
stackexchange
llm_judgeable_groundtruth_similarity
1834260
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\left(M, g\right)$ be a complete Riemannian manifold. Let us fix a point $p \in M$ and consider the distance function$$r(x) := \operatorname{dist}(x, p).$$I would like to characterize the point where $r$ is not smooth. Is it true that they must be critical points of the function $\exp_p$? And is it true that a critical point of $\exp_p$ is a point where $r$ is not smooth? Any help will be really appreciated. Thanks! Now provide the response and nothing else.
Let $C_p$ be the cut locus of $p$ . $C_p$ contains two type of points: (i) points $q$ such that there exist at least two minimizing geodesics from $p$ to $q$ ; (ii) points $q$ that are conjugate to $p$ . It can be proven that $C_p$ is closed and also a null set (i.e. if $(h,U)$ is a chart, then $h (U \cap C_p)$ is a null set with respect to the usual Lebesgue measure). There is an alternative description of $C_p$ . Let $I_p$ be the connected component of $\{ v \in T_p M \mid \Bbb d _v \exp_p \text{ is an isomorphism} \}$ containing $0 \in T_p M$ . Let $c_p = \partial I_p$ , the boundary of $I_p$ . Then $C_p = \exp _p (c_p)$ . Finally, to answer your question: it is well known that $d(p, \cdot)$ is smooth on $M \setminus (C_p \cup \{p\})$ (that $d(p, \cdot)$ cannot be smooth in $p$ is obvious: choose normal coordinates around $p$ and think of what happens to the Euclidean norm around $0$ ). The square $d(p, \cdot)^2$ is smooth on $M \setminus C_p$ . For further details on the cut locus, see for instance sub-chapter III.2 of "Riemannian Geometry" by Isaac Chavel (I have a feeling, though, that you will not enjoy his style). A clearer exposition can be found in sub-chapter VIII.7 of vol. 2 of "Foundations of Differential Geometry" by Kobayashi & Nomizu (conjugate points are introduced in VIII.3). A brief exposition of these concepts, together with a proof of the differentiability of $d(p, \cdot)$ can be found in theorem 3.1 of the lecture notes of a course given by Zuoqin Wang at the University of Science and Technology of China.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1834260', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/271255/']}
jdg_71883
stackexchange
llm_judgeable_groundtruth_similarity
38293
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So, I am trying to understand the effect of sawtooth waveform on a mechanical system model. For example, a system of a plate and a box on it. And then we try to oscillate the plate. In order to do that, I need to calculate the inertial force caused by the sawtooth waveform. And to find the inertial force, I need to calculate second derivative of sawtooth waveform. But because, you know, sawtooth waveform is not differentiable everywhere, I think make an approximation by using a smooth sawtooth waveform is enough. 'Smooth' here, I mean substitute upper tip and bottom tip of the sawtooth waveform into a circular curve of a certain radius. So in short, my question is: How to make sawtooth wavefrom with a dull tip with Mathematica?Here, although I said sawtooth, it would be better if applicable to any triangle waveform. EDIT: What if my triangle waveform is made from a completely arbitrary piecewise function such as below f = Piecewise[{{10 Mod[x, 1], 0 <= Mod[x, 1] < 0.1}, {-(10/9) Mod[x, 1] + (10/9), 0.1 <= Mod[x, 1] < 1}}];Plot[Evaluate[f], {x, -3, 3}] Is it still possible? Now provide the response and nothing else.
There are nice trigonometric formulas δ = 0.01;trg[x_] := 1 - 2 ArcCos[(1 - δ) Sin[2 π x]]/π;sqr[x_] := 2 ArcTan[Sin[2 π x]/δ]/π;swt[x_] := (1 + trg[(2 x - 1)/4] sqr[x/2])/2;Plot[{TriangleWave[x], trg[x]}, {x, -2, 2}, PlotRange -> All]Plot[{SquareWave[x], sqr[x]}, {x, -2, 2}, PlotRange -> All, Exclusions -> None]Plot[{SawtoothWave[x], swt[x]}, {x, -2, 2}, PlotRange -> All, Exclusions -> None] Previous answer You can use Gaussian smoothing and interpolation: n = 1000;δ = 0.02;f = Interpolation[Prepend[#, {0.0, #[[-1, 2]]}], PeriodicInterpolation -> True] &@ Transpose@{#, GaussianFilter[SawtoothWave[#], δ n {5, 1}, Padding -> "Periodic"]} &@Range[1/n, 1.0, 1/n];Plot[{SawtoothWave[x], f[x]}, {x, -1.2, 1.2}] Here n is the number of interpolation points and δ is the standard deviation of the smoothing. The second derivative: Plot[f''[x], {x, -0.1, 0.1}]
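The same trigonometric formulas translate directly to other languages; as a sketch, here is a Python version with the same conventions as the Mathematica code above (period 1, with δ controlling how rounded the tips are):

```python
import math

DELTA = 0.01  # smaller values give sharper (less rounded) corners

def trg(x):
    """Smooth triangle wave, period 1, range roughly [-1, 1]."""
    return 1 - 2 * math.acos((1 - DELTA) * math.sin(2 * math.pi * x)) / math.pi

def sqr(x):
    """Smooth square wave, period 1, range roughly [-1, 1]."""
    return 2 * math.atan(math.sin(2 * math.pi * x) / DELTA) / math.pi

def swt(x):
    """Smooth sawtooth, period 1, range roughly [0, 1]."""
    return (1 + trg((2 * x - 1) / 4) * sqr(x / 2)) / 2

# Away from the rounded corners, the smooth version tracks SawtoothWave closely
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert abs(swt(x) - (x % 1)) < 0.03
```

Because the formulas are built from acos/atan/sin, the result is infinitely differentiable, so the second derivative needed for the inertial force is well defined everywhere.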
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/38293', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/10497/']}
jdg_71884
stackexchange
llm_judgeable_groundtruth_similarity
14950321
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: All the solutions I was able to find suggests to use .live() method. But as of today it is deprecated. .hover() works perfectly on "li" elements not created dynamically. But once I append new "li" .hover() is not triggered at all. Anybody has figured this one out? Now provide the response and nothing else.
The "hover" event has been deprecated with delegated event handling such as .on() per the .on() jQuery doc pages . Instead, you need to use .on() delegated event handling with mouseenter and mouseleave and an event handler for each. For example: $(document).on("mouseenter", "li", function() { // hover starts code here});$(document).on("mouseleave", "li", function() { // hover ends code here}); In your real code, you would select a static parent object that is much closer to the dynamic li tags than the document object for better performance.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14950321', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1751163/']}
jdg_71885
stackexchange
llm_judgeable_groundtruth_similarity
28819327
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to mostly understand the reasoning behind the Big O and Omega of inserting a new element in a heap. I know I can find answers online but I really like having a thorough understanding rather than just finding answers online and just memorizing blindly. So for instance if we have the following heap (represented in array format) [8,6,7,3,5,3,4,1,2] If we decide to insert a new element "4" our array will look like this now [8,6,7,3,5,3,4,1,2,4] It would be placed in index 9 and since this is a 0th index based array its parent would be index 4 which is element 5. In this case we would not need to do anything because 4 is < 5 and it does not violate the binary heap property. So best case is OMEGA(1). However if the new element we insert is 100 then we would have to call the max-heapify function which has running time of O(log n) and therefore in the worst case inserting a new element in the heap takes O(log n). Can someone correct me if I am wrong because I am not sure if my understanding or reasoning is 100%? Now provide the response and nothing else.
Yes, you are right about the best-case running time. And for the worst-case running time, you are also right that it is Theta(lg n). The reason is that your heap is always assumed to be BALANCED, i.e. every level of nodes is full except possibly the bottom level. So when you insert an element at the bottom level and swap it up from one level to the next, the number of nodes at each level is roughly half that of the level below, and so you can only do this swap log_2(n) = O(lg n) times before you reach the root node (i.e. the single node at the top of the heap). And if you insert a value that belongs at the top of the heap (so it starts at the bottom), you will indeed have to do roughly log_2(n) swaps to get the element to the top of the heap where it belongs. So the number of swaps in the worst case is Theta(lg n).
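The argument can be made concrete with a toy max-heap insert in Python that counts swaps (a sketch using the question's 0-indexed array layout):

```python
def sift_up(heap, i):
    """Bubble heap[i] up in a 0-indexed max-heap; return the number of swaps."""
    swaps = 0
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] <= heap[parent]:
            break  # heap property already holds: best case, zero swaps
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent
        swaps += 1
    return swaps

def insert(heap, value):
    heap.append(value)
    return sift_up(heap, len(heap) - 1)

heap = [8, 6, 7, 3, 5, 3, 4, 1, 2]  # the heap from the question
assert insert(heap[:], 4) == 0       # 4 <= its parent 5: O(1) best case
assert insert(heap[:], 100) == 3     # climbs every level: floor(log2(10)) swaps
```

Inserting 4 matches the question's Omega(1) case, while inserting 100 climbs the full height, matching the O(lg n) worst case.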
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28819327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2733436/']}
jdg_71886
stackexchange
llm_judgeable_groundtruth_similarity
563570
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I'm trying to find a bijection from $\mathbb N^\mathbb N$, i.e., the set of all functions from $\mathbb N$ to $\mathbb N$, to some other set, say an open interval $(a,b)∈R$. What do I need to establish, to say there can exist a bijection? I do understand that I need to prove it's both injective and surjective, but how do you do it with a set of functions as the domain? Let's say I'm trying to see if the injective part is true. So, I need to find a function that takes every possible sequence of natural numbers as input, and associates each of them with one real number in the interval $(a,b)$. But what does it mean to have a sequence of natural numbers as the input of a function? And even if that makes sense, what sort of functions can satisfy such a purpose? I hope I sound sane. Now provide the response and nothing else.
A function from $\mathbb N$ to $\mathbb N$ would be of the form $$f(1)=n_1, f(2)=n_2, f(3)=n_3, f(4)=n_4, f(5)=n_5, \ldots$$ for particular values of $n_1,n_2,n_3,n_4,n_5,\ldots$, so producing a sequence of natural numbers. You can then regard the function and the sequence as equivalent. A function from these to the real numbers is much as you might expect, so for example the continued fraction: $$g(n_1,n_2,n_3,n_4,n_5,\ldots)= n_1 + \cfrac{1}{n_2 + \cfrac{1}{n_3 + \cfrac{1}{n_4 + \cfrac{1}{n_5+\cdots} } } }.$$ To show this is injective (which it is in this case) you need to show that each different sequence produces a different real number To show this is surjective (which it is not quite in this case - try finding a rational) you need to show that each real number is produced by some sequence
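For finite prefixes, this continued-fraction map and its inverse are easy to experiment with. A small Python sketch using exact rationals (with the standard non-uniqueness caveat for finite expansions: a finite expansion ending in 1, such as [3,7,15,1], has the same value as one ending in a larger last term, here [3,7,16]):

```python
from fractions import Fraction

def cf_value(terms):
    """Value of n1 + 1/(n2 + 1/(n3 + ...)) for a finite list of terms."""
    x = Fraction(terms[-1])
    for n in reversed(terms[:-1]):
        x = n + 1 / x
    return x

def cf_expand(x, max_terms=50):
    """Recover the continued-fraction terms of a positive rational."""
    terms = []
    for _ in range(max_terms):
        a = int(x)          # floor, since x > 0
        terms.append(a)
        if x == a:
            break
        x = 1 / (x - a)
    return terms

v = cf_value([1, 2, 3, 4])
assert v == Fraction(43, 30)
assert cf_expand(v) == [1, 2, 3, 4]   # the expansion round-trips
assert cf_value([1, 2, 4, 3]) != v    # different sequences, different values
```

For infinite sequences of naturals the values are irrational, which is why no rational is hit, illustrating the failure of surjectivity noted above.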
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/563570', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/51731/']}
jdg_71887
stackexchange
llm_judgeable_groundtruth_similarity
43238722
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Context: I try to write a simple Microsoft Teams tab that displays a feed list.I'm able to side load my tab to Teams and to select it from the Channel "+" menu. Problem: If I want to save my settings, an error text appears that says "We couldn't save your tab settings. Please try again." There is no error in my browser's javascript error console. My config.html tab javascript code: <script type="text/javascript"> microsoftTeams.initialize(); microsoftTeams.settings.registerOnSaveHandler(function (saveEvent) { microsoftTeams.settings.setSettings({ entityId: "example", contentUrl: "https://example.com/tab.html", suggestedDisplayName: "example", websiteUrl: "https://example.com", removeUrl: "https://example.com/remove.html" }); saveEvent.notifySuccess(); }); function onClick() { microsoftTeams.settings.setValidityState(true); }</script> My manifest.json { "$schema": "https://statics.teams.microsoft.com/sdk/v0.4/manifest/MicrosoftTeams.schema.json", "manifestVersion": "0.4", "id": "ee90834a-d649-458d-a4e2-0b0f8d425c11", "version": "1.0", "name": "WINSider Community Deutschland", "developer": { "name": "WINSider Community Deutschland", "websiteUrl": "https://windowscommunity.de", "privacyUrl": "https://windowscommunity.de/de/impressum/", "termsOfUseUrl": "http://windowscommunity.de/de/impressum/" }, "tabs" : [{ "id": "ee90834a-d649-458d-a4e2-0b0f8d425c11", "name": "WINSider Community", "description" : { "short": "WINsider article list as a tab", "full": "Summarizes the windowscommunity.de blog posts as a clickable list." 
}, "icons": { "44": "icon44.png", "88": "icon88.png" }, "accentColor" : "#37A3CF", "configUrl": "https://tscholze.github.io/public/teamstab/config.html", "canUpdateConfig": true }], "needsIdentity": false, "validDomains": [ "*.github.io", "*.github.com", "*.googleapis.com", "*.microsoft.com", "*.rss2json.com", "*windowscommunity.de" ]} Now provide the response and nothing else.
The answer is to use a RowEncoder and the schema of the dataset using StructType . Below is a working example of a flatmap operation with Datasets: StructType structType = new StructType(); structType = structType.add("id1", DataTypes.LongType, false); structType = structType.add("id2", DataTypes.LongType, false); ExpressionEncoder<Row> encoder = RowEncoder.apply(structType); Dataset<Row> output = join.flatMap(new FlatMapFunction<Row, Row>() { @Override public Iterator<Row> call(Row row) throws Exception { // a static map operation to demonstrate List<Object> data = new ArrayList<>(); data.add(1l); data.add(2l); ArrayList<Row> list = new ArrayList<>(); list.add(RowFactory.create(data.toArray())); return list.iterator(); } }, encoder);
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43238722', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1613740/']}
jdg_71888
stackexchange
llm_judgeable_groundtruth_similarity
5376559
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is Perl compiled or interpreted? Now provide the response and nothing else.
Well, that depends on what you mean by a compiled language. Maybe this is why googling did not bring forth a clear answer to your question. One viewpoint is that compilation means compiling from a source code description to another, i.e. code generation . If we accept these premises, then Perl 6 can be compiled and Perl 5 and older are interpreted languages. Perl 6 is specifically compiled to Parrot bytecode . Perl 6 is therefore a properly compiled language, in the same way say, Java is . Perl 5 and older parses the Perl source code to an internal list or tree, but I don't think it should be called a proper compiler, except maybe in a theoretical sense. It does not output any bytecode, assembly or real machine code usually associated with compilers. The parsing stage of Perl to check Perl syntax used to be called "compiling" the source. It is used to check the syntactical validity of a Perl source file without running it. It is invoked as: perl -c myprog.pl But if you look at the help for Perl options, -c actually stands for "check". -c check syntax only (runs BEGIN and CHECK blocks) (To further complicate things, Perl 5 had support for writing out internal bytecode but it was removed in version 5.10 . Presumably because it was buggy, I don't know.) On the other hand , if you argue that compilation is the act of parsing a source tree into any other kind of representation, well, that parsing makes Perl a compiled language. Perl must completely parse a source file before it can start executing it. By this definition, any language which can start executing a source file immediately before parsing would be an interpreted language. A third way to look at this is from how these words, "interpreted" and "compiled" are most often used by professionals in the field. I would bet good money that if a random subset of programmers were asked to choose "compiled" or "interpreted" when thinking of Perl, most would choose "interpreted". 
Not because of some theoretical argument over the nature of Perl, but because "compiled" usually invokes thoughts of "compiling", "linking", "object code" etc, while "interpreted" is taken to mean "write the code, try it". Right or wrong, that may be good to know when trying to determine if Perl is, truly , interpreted or in fact, compiled. You are going to run into many arguments on your quest.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5376559', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134246/']}
jdg_71889
stackexchange
llm_judgeable_groundtruth_similarity
38579273
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am learning angular 2 and I have written a ts definition for a truncate method I want to use in one of my services. truncate.ts interface String { truncate(max: number, decorator: string): string;}String.prototype.truncate = function(max, decorator){ decorator = decorator || '...'; return (this.length > max ? this.substring(0,max)+decorator : this);}; How do I import this into another typescript module or at least make it available to use globally. Now provide the response and nothing else.
Using TypeScript 2.3.4 in an Ionic 3 / Angular 4 app, I create a file called stringExtensions.ts and put this in it: export { } // to make it a moduledeclare global { // to access the global type String interface String { truncate(max: number, decorator: string): string; }}// then the actual codeString.prototype.truncate = function(max, decorator){ decorator = decorator || '...'; return (this.length > max ? this.substring(0,max)+decorator : this);};
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38579273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/402941/']}
jdg_71890
stackexchange
llm_judgeable_groundtruth_similarity
32539415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've searched everywhere for a solution but nothing. I have two view controllers and I want to pass data from viewController.swift to resultViewController.swift (the container view), i've succeeded to send data when I run it. but when i clicked increase button I can't send data again. ViewController.swift: class ViewController: UIViewController { var result: Int = 1 override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. // performSegueWithIdentifier("sendResult", sender: <#AnyObject?#>) } @IBAction func increas(sender: AnyObject) { result++ performSegueWithIdentifier("sendResult", sender: self) } override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) { if segue.identifier == "sendResult"{ var resultVC = segue.destinationViewController as! ResultViewController resultVC.result = self.result } }} ResultViewController.swift: class ResultViewController: UIViewController { var result: Int! @IBOutlet weak var resultLabel: UILabel! override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. resultLabel.text = "\(result)" }} Now provide the response and nothing else.
List is a reference type, so when you pass myPassedList as an argument to doSomething, any changes made inside the method modify the original list. You have two options: either call ToList() or create a new list. As an example: public void doSomething (List<int> myPassedList) { List<int> newList = myPassedList.ToList(); int A = 5; newList.Add(A); //... And then some other cool code with this modified list} The original list myList will then only return 1 and 2.
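The same aliasing behaviour appears in most languages whose collections are reference types; as a cross-language illustration (my analogy, not C#), here is the Python equivalent:

```python
def do_something(passed):
    passed.append(5)          # mutates the caller's list: lists are references

def do_something_copied(passed):
    local = list(passed)      # shallow copy, analogous to ToList() in C#
    local.append(5)
    return local

my_list = [1, 2]
do_something(my_list)
assert my_list == [1, 2, 5]   # original was modified through the reference

my_list = [1, 2]
result = do_something_copied(my_list)
assert my_list == [1, 2]      # original untouched
assert result == [1, 2, 5]
```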
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32539415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5328477/']}
jdg_71891
stackexchange
llm_judgeable_groundtruth_similarity
35204
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Michael W. Ferguson writes in The Inappropriately Excluded : The probability of entering and remaining in an intellectually elite profession such as Physician, Judge, Professor, Scientist, Corporate Executive, etc. increases with IQ to about 133. It then falls about 1/3 by 140. By 150 IQ the probability has fallen by 97%! Is that claim backed by trustworthy data? Now provide the response and nothing else.
Executive summary of what follows: No, Ferguson does not support the claim in the question title, nor the more quantitative one in the question body, with trustworthy data. The claim is the result of a mathematical calculation based on assuming something (IQ among "elite professions") to be normally distributed that we have no reason to think is normally distributed. Ferguson's references do not appear to give any evidence that it is normally distributed. One reference does show that one particular subpopulation of the "elite professions" has an IQ distribution not obviously inconsistent with a normal distribution.

Incidentally: Even if the claim were correct, Ferguson's article also doesn't do a great job of justifying the "inappropriately" and "excluded" parts of the title.

What's the actual question?

In order to figure out what would count as trustworthy data to support the article's assertion, let's begin by looking at its argument. It goes like this:

1. In the population at large, IQ is normally distributed with mean 100 and standard deviation 15. (This isn't stated explicitly but it is assumed; see below.)
2. In the population of people in "elite professions", IQ is normally distributed with mean ~125 and standard deviation ~6.5.
3. If you look at the ratio of these two normal distributions -- telling you the relative likelihood that someone with a given IQ is in an "elite profession" -- you will see that it increases up to IQ 133 and then decreases, becoming very small by (say) an IQ of 150.
4. This indicates that while being quite intelligent helps you get into the "elite professions", being extraordinarily intelligent actually makes it less likely that you do.
5. This is a bad thing because we want smarter people in the "elite professions", for their sake and for society's.

The first claim is (at least approximately) true by definition; IQ tests are designed to give normally distributed results with that mean and standard deviation.

The second claim is the main thing that might or might not be backed by good evidence.

The third claim is simply a matter of calculation, and I believe the calculation is correct. (This is the claim in the body of the question.)

The fourth claim is a reasonable deduction from the previous ones, but we should distinguish between "makes it less likely" and "makes it more difficult" and be aware that correlation is not causation. (This is the claim that might justify "excluded" in Ferguson's title.)

The fifth claim is reasonable if the previous ones are correct, but again there are some caveats. (This is the claim that might justify "inappropriately" in Ferguson's title.)

The distribution of IQ in "elite professions"

The claim I'm going to focus on here is the second: that "elite profession" IQs are normally distributed with mean about 125 and s.d. about 6.5. More specifically, the most contentious part of this claim -- which Ferguson assumes but never actually states -- is that IQs within the "elite professions" are something like normally distributed. This is the assumption behind the graph labelled "Excluded Hi IQ people", and the assumption behind the claim that within the "elite professions" "99.98% have IQs between 99 and 151".

I do not see any evidence that any of the references cited by Ferguson support this claim, nor does it seem likely on its face to be true. If this claim is not true, then everything quoted in the question is unsupported: the peak at 133, the 3x falloff to 140, the 30x falloff to 150. All of these figures come from looking at the ratio of two normal distributions.

It's not, on the face of it, a claim we should expect to be correct. IQ in the population at large is approximately normally distributed, but that says nothing about the distribution in a highly-selected population like "elite professions" (whatever exactly that is taken to mean).
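Before moving on: the third claim above ("simply a matter of calculation") really is just arithmetic on the two assumed normal curves, and it's easy to reproduce. Here's a minimal Python sketch (mine, not Ferguson's) using the mean-125 / s.d.-6.5 figures quoted above; note that the exact location of the peak shifts by a couple of IQ points depending on precisely which mean and s.d. you plug in, which is presumably why these particular parameters land slightly below Ferguson's 133:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a Normal(mu, sigma) distribution at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def ratio(iq):
    """Relative likelihood that someone with the given IQ is in an
    'elite profession', under the assumption (Ferguson's, not mine)
    that both populations are normally distributed."""
    return normal_pdf(iq, 125, 6.5) / normal_pdf(iq, 100, 15)

peak_iq = max(range(90, 161), key=ratio)
print(peak_iq)                      # 131 with these exact parameters
print(ratio(150) / ratio(peak_iq))  # about 0.03, i.e. a ~97% falloff by IQ 150
```

So the quantitative shape of the claim in the question really does follow mechanically from the two assumed normal curves, which is exactly why everything hinges on the second claim.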
For instance, consider the toy model where membership in those professions is selected at random from people with IQ at least 120; the resulting distribution will be very far from normal. (It will also, as it happens, have roughly the mean and standard deviation cited by Ferguson, though I don't think it's plausible that the actual elite-profession IQ distribution looks much like this.)

(Ferguson's conclusions are quite sensitive to the exact shape of the distribution at the tails. For instance, people with IQ>150 are about 0.04% of the general population; if Ferguson's claimed normal distribution is right then they are about 0.006% of the "elite professions"; if the actual relationship between the general population and the "elite professions" followed Ferguson's curve up to its peak at an IQ of 133 and then stayed there (meaning no falloff at all, no "exclusion", but the same advantage for someone with IQ 160 as for someone with IQ 133), the figure would be about 0.5% -- which is much bigger than 0.006%, but still very small in absolute terms.)

So, this claim is key to the argument, Ferguson never states it explicitly, and on the face of it it doesn't seem likely to be true. It could be true, even so. What's the evidence?

Evidence from Ferguson's references

Let's go through the references looking to see what they have to say about that claim. In the body of the article, Ferguson cites two studies.

Gibson & Light, Intelligence among university students, Nature 213. The first thing to say is that the title is actually Intelligence among university scientists. Ferguson summarizes their finding thus: "Gibson and Light found that 148 members of the Cambridge University faculty had a mean IQ of 126 with a standard deviation of 6.3." This is definitely misleading in that the study was specifically of scientists.
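That toy model is easy to check by simulation. A minimal Python sketch (the hard cutoff at IQ 120 is purely illustrative, as said above, not a claim about how selection actually works):

```python
import random
import statistics

random.seed(0)

# Draw IQs from the general population, Normal(100, 15), and keep only
# the (hypothetical) selection of people with IQ of at least 120.
selected = [iq for iq in (random.gauss(100, 15) for _ in range(500_000)) if iq >= 120]

print(round(statistics.mean(selected), 1))   # roughly 127: near the ~125 Ferguson cites
print(round(statistics.stdev(selected), 1))  # roughly 6: near the ~6.5 Ferguson cites
print(round(min(selected), 2))               # hard floor at 120: nothing like a normal left tail
```

Matching the mean and standard deviation is therefore no evidence at all for normality: this obviously non-normal truncated distribution matches them about as well.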
Unfortunately, I don't have a copy of that article and can't find it available online, but nothing I have seen suggests that Gibson & Light look at the shape of the distribution of IQs and find it normal. [EDITED to add:] See below for more about this; another answerer found a copy of this paper and there is actually some information about the distribution in here. Matarazzo & Goldstein, The intellectual caliber of medical students , Journal of Medical Education v47i2. PDF on Gwern's website . Note that this is about students rather than actual professionals . This article says nothing at all about the shape of the distribution, unsurprisingly since it's mostly aggregating the findings of other studies. It does remark that its first author has found no correlation between IQ and placement in medical examinations, which weakly suggests that if people with truly exceptional IQs are unable to become doctors this may not do much harm to the medical profession. Other references in Ferguson's article, in order of appearance: A nation deceived: how schools hold back America's brightest students , ed. Colangelo, Assouline & Gross. This is all about schools, and so far as I can tell it doesn't say anything about the IQ distribution in elite professions. Understanding and being understood: the impact of intelligence and dispositional valuations on social relationships , Denissen. (Not explicitly cited in the body of Ferguson's article.) This PhD dissertation is about what its title says it's about. It doesn't say anything about the IQ distribution in elite professions. Gibson & Light: discussed above. Exceptionally gifted children , Gross. This looks at a small number (15 selected from a group of 40) of exceptionally-smart children, what was done with them in school, and (in the second edition) how they were doing in their twenties. It doesn't say anything about the IQ distribution in elite professions. 
It does purport to find (I don't have the book so can't judge how well this claim is supported) that such children have far better outcomes if they are substantially "accelerated" at school rather than staying with their equal-age peers.

Exceptionally and profoundly gifted students: an underserved population, Gross. Online. Same themes as Gross's book discussed above. Doesn't say anything about the IQ distribution in elite professions.

Exceptionally gifted children: long-term outcomes of academic acceleration and nonacceleration, Gross. Online. Same themes as the two previous references. Doesn't say anything about the IQ distribution in elite professions.

Matarazzo & Goldstein: discussed above.

Meritocracy, cognitive ability, and the sources of occupational success, Hauser. Link given unfortunately no longer works. Copy on Gwern's website. This one does talk about IQ distributions a bit, but has little to say about "elite professions". (There are a few graphs from which we can read off approximate 10th centile, lower quartile, mean, upper quartile and 90th centile IQ for poorly-specified groups of "college professors" and "medical occupations -- MD or equiv.".) We can't tell very much from that about the shape of the distribution.

Children above 180 IQ Stanford-Binet, Hollingworth. Online. As the title suggests, this is all about children and tells us nothing about the distribution of IQ in elite professions.

The making of a scientist, Roe. Copy on Gwern's website. Specifically about scientists rather than "elite professions" generally. Does have a chapter about intelligence of scientists, indicating unsurprisingly that scientists are very smart. Doesn't say anything about the shape of the distribution. If we trust Roe's rough conversions between her test results and IQs, the figures are high enough to be hard to square with Ferguson's claim that exceptionally smart people are being "excluded".
Intelligence and personal influence in groups: four nonlinear models , Simonton, Psychological Review 92. First page online . I don't have this paper and haven't found a copy online, but it seems to be purely theoretical: Simonton made up some models and described their implications. (On the other hand, Ferguson's article seems to cite an empirical result from it. Maybe I'm wrong about its being purely theoretical, or maybe Ferguson is misusing it.) The outsiders , Towers. Provided link no longer works. Working link . Claims that very intelligent people often have difficulty in society. Doesn't say anything about the distribution of IQ in elite professions. The empty promise , Towers. Described as "currently not available" in Ferguson's bibliography. I think this is it . Does give some information about distributions of IQ in various groups (though it's not clear that anything there quite corresponds to "elite professions"). They don't look very normally-distributed to me. A very general difficulty in this sort of study Investigations of the sort done by, say, Hauser are very likely to miss very-high-IQ people altogether, unless such people are an explicit focus of the investigation. So, for instance, Hauser looks at the population at large and breaks them down into maybe 10-30 different professional groups. In any of those groups, the extraordinarily-high-IQ people are going to be a small minority; there just aren't many of them in the population to begin with. People with IQ > 150 are about 0.04% of the population. Hauser plots the 90th percentile of large groups like "college professor", and we shouldn't expect that to tell us anything about whether people with an IQ of 150 or more can get jobs as college professors. In the United States there are about 120k people with an IQ of 150+ and about 2M college professors, so even if all the high-IQ people were college professors you still wouldn't see them at the 90th percentile of college professors. 
If, more realistically, 10% of the high-IQ people were college professors, then they would be about 0.6% of the college professors, and their presence would make rather little difference to the 90th percentile. Or, of course, to the mean and standard deviation. Finer-grained data from Gibson & Light A reference I didn't find a copy of, the Nature paper by Gibson & Light, turns out to have finer-grained information about IQ distribution than the others discussed above. See the answer by Taw -- the paper apparently has only a chart, but Taw has estimated the actual numbers by measuring the chart. For a specific group of people (scientists on the faculty at the University of Cambridge), there are figures at a resolution of 5 IQ points; they do in fact look fairly normally distributed, with roughly the mean and standard deviation used by Ferguson. As Taw says, Gibson & Light's numbers fit a normal distribution well enough that they wouldn't entitle us to reject the hypothesis of normal distribution at the 5% (or even the 15%) level. There is, of course, quite a gap between that and citing them as evidence that the distribution actually is normal, especially at the tails (Gibson & Light have a sample size of 148, and they found no IQs above 145 or below 110). For Gibson & Light's numbers to be strong evidence of normal distribution, we would need that distribution to fit much better than any distribution differing substantially from normal, and we just don't have enough data for that to be the case. For instance, we get an almost exactly equally good fit (measured by log-likelihood of the observations estimated by Taw from the chart in Gibson & Light) for a pdf that's parabolic but clipped where it crosses 0, with peak at 126.6 and dropping to 0 at a distance of +-15. 
For the avoidance of doubt, I don't think it's at all likely that the real distribution looks much like that, and if it did then it would indicate an "inappropriate exclusion" of very high IQs; the point is just that it's quite different from a normal distribution but the fit is just as good. More to the point, we can get a pretty decent fit from a probability distribution designed to describe selection for higher IQ applied to the general population. Consider the "logistic function" a(x) = 1/2 (1 + tanh (x-t)/s). This is 0 for very small x and increases smoothly to 1 for very large x; most of the increase takes place near x=t, on a scale defined by the parameter s; for instance, we have a(t-s)~=0.12 and a(t+s)~=0.88. Suppose that your chance of getting an academic post at Cambridge doing science is a(your IQ), with parameters t=124.7 and s=6.1; so e.g. with an IQ of 100 your chance of success would be about 0.03% and with an IQ of 140 it would be about 99.3%. And suppose that we take the population at large -- IQ distributed normally, mean 100, standard deviation 15 -- and give everyone a probability of ending up a Cambridge scientist that's proportional to a(x). Note that there is definitely no "exclusion" of very high IQs here; higher is always better. How well does this fit the numbers in Gibson & Light? Worse than the best-fit normal distribution but still well enough to pass Taw's chi-squared test at the 5% level. (The p-value is somewhere around 0.07, versus 0.17 for the best-fit normal distribution.) Evidence from the references: summary That's all the references. So far as I can tell, Gibson & Light is the only one that offers anything like evidence that "elite professions"' IQ distribution is close to normal. The evidence it offers (1) is for one specific subset of one specific "elite profession" only and (2) from a sample too small to give strong evidence about what happens at the tails of the distribution. 
The numbers Gibson & Light report do look fairly consistent with normal distribution (with roughly the parameters claimed by Ferguson), but they are not strong evidence for that particular distribution over others that would lead to very different conclusions from Ferguson's. The other references offer no further support for the normal-distribution hypothesis. No plots (other than one very crude one in the last reference, which looks extremely not-normal to me and in any case is based on a very small sample), no Kolmogorov-Smirnov or similar statistical tests. We have some means and standard deviations, some 10/25/50/75/90 percentiles, but nothing more detailed and in particular nothing that looks at the shape of the tails (which Ferguson would need to justify his claim about what happens at an IQ of 150, for instance). So, I say that Ferguson's article does not offer any "trustworthy data" to speak of for the key claim that IQs of people in "elite professions" are close to normally distributed with mean ~125 and s.d. ~6.5. And that normal distribution is essential for his conclusion. Some other dubious things in Ferguson's argument This isn't directly relevant to the specific claim in the question here, but since "inappropriately excluded" is in Ferguson's title I think it's worth a bit of a look at whether that language would be justified if the claim were true. Readers interested only in the specific claim under discussion can safely stop reading here. Let's suppose arguendo that in fact Pr(in elite profession | IQ) does drop off as one looks at extremely high IQs. Ferguson calls the very high-IQ people inappropriately excluded . For this to be reasonable, there would need to be actual exclusion and it would need to be inappropriate . Is there exclusion ? 
(Some other possible hypotheses: very-high-IQ people may find the "elite professions" boring, or very cleverly analyse the costs and benefits of getting into them and decide to do something that saddles them with less student debt, or something of the sort.) Ferguson offers neither evidence nor argument for this. He tells a handwavy just-so story about how "it is an artifact of a culture that fails to provide them with audience or followers ... the leaders are not persuaded and often won't even understand the advice", but telling stories is easy and giving good evidence that the stories match reality is hard, and Ferguson has done only the former. Is it inappropriate ? (Some other possible hypotheses: very high-IQ people don't outperform in most "elite professions" but expect better treatment or higher pay; very high-IQ people tend to have personality quirks that make them not work so effectively with others; very high-IQ people tend to have outright intellectual deficits in particular areas that make them ineffective.) Ferguson mentions that Towers and Sternberg (none of whose work is in Ferguson's bibliography) propose alternative hypotheses of this kind. Ferguson does offer a little bit of evidence that they're wrong: he quotes Roe's book mentioned above as showing that top scientists have very high intelligence. But "top scientists" is a very different group of people from "people in elite professions". It could well be (and anecdotally it seems likely to be true) that science is exceptional among professions in how valuable intelligence (in the what-IQ-tests-measure sense) is for its practice. 
The nearest thing to an explanation for Ferguson's "exclusion", if it should happen to be real, that I find in his references is the one suggested by Gross's work on exceptionally intelligent children: if such a child is expected to stay with their equal-age peers up to age 18, then there's a danger that they find the educational process boring and demoralizing, which may mean that they drop out of education, or resent being with the children they have to spend time with and never learn to relate to them in healthy ways, either of which will be bad for their prospects of entering "elite professions". Ferguson does discuss this and what he says about it seems plausible; but none of it seems like a good fit for the term "exclusion".
{}
{'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/35204', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/196/']}
jdg_71892
stackexchange
llm_judgeable_groundtruth_similarity
33702490
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's the best way for embedding a video into a UITableViewCell? I'm trying to build something sorta like Vine/Instagram. I'm able to handle asynch image loading really well with SD_WebImage..but unfortunately they don't support video. I also tried embedding with an MPMoviePlayer but it just appears as a black screen. This is what I tried:

override func viewDidLoad() {
    super.viewDidLoad()
    tableView.frame = CGRectMake(0, 0, view.bounds.width, view.bounds.height);
    tableView.delegate = self
    tableView.dataSource = self
    tableView.registerClass(UITableViewCell.self, forCellReuseIdentifier: "cell")
    self.view.addSubview(tableView)
}

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cell:UITableViewCell = tableView.dequeueReusableCellWithIdentifier("cell") as! UITableViewCell
    var moviePlayer : MPMoviePlayerController?
    let url = NSURL(string: "http://jplayer.org/video/m4v/Big_Buck_Bunny_Trailer.m4v")
    moviePlayer = MPMoviePlayerController(contentURL: url)
    if let player = moviePlayer {
        player.view.frame = CGRectMake(0, 100, view.bounds.size.width, 180)
        player.prepareToPlay()
        player.controlStyle = .None
        player.repeatMode = .One
        player.scalingMode = .AspectFit
        cell.addSubview(player.view)
    }
    return cell
}

Now provide the response and nothing else.
I have tested a demo for videos only. Here is how you can achieve it - create a custom cell class to hold the video player view, and handle the play and pause methods for the AVPlayer there. This is my custom cell class -

import UIKit
import AVFoundation

class VideoCellTableViewCell: UITableViewCell {

    // I have put the avplayer layer on this view
    @IBOutlet weak var videoPlayerSuperView: UIView!

    var avPlayer: AVPlayer?
    var avPlayerLayer: AVPlayerLayer?
    var paused: Bool = false

    // This will be called everytime a new value is set on the videoPlayerItem
    var videoPlayerItem: AVPlayerItem? = nil {
        didSet {
            /*
             * If needed, configure player item here before associating it with a player.
             * (example: adding outputs, setting text style rules, selecting media options)
             */
            avPlayer?.replaceCurrentItem(with: self.videoPlayerItem)
        }
    }

    override func awakeFromNib() {
        super.awakeFromNib()
        // Set up your avplayer while the cell is created
        self.setupMoviePlayer()
    }

    func setupMoviePlayer() {
        self.avPlayer = AVPlayer.init(playerItem: self.videoPlayerItem)
        avPlayerLayer = AVPlayerLayer(player: avPlayer)
        avPlayerLayer?.videoGravity = AVLayerVideoGravityResizeAspect
        avPlayer?.volume = 3
        avPlayer?.actionAtItemEnd = .none

        // You need to have different variations
        // according to the device so that the avplayer fits well
        if UIScreen.main.bounds.width == 375 {
            let widthRequired = self.frame.size.width - 20
            avPlayerLayer?.frame = CGRect.init(x: 0, y: 0, width: widthRequired, height: widthRequired / 1.78)
        } else if UIScreen.main.bounds.width == 320 {
            avPlayerLayer?.frame = CGRect.init(x: 0, y: 0, width: (self.frame.size.height - 120) * 1.78, height: self.frame.size.height - 120)
        } else {
            let widthRequired = self.frame.size.width
            avPlayerLayer?.frame = CGRect.init(x: 0, y: 0, width: widthRequired, height: widthRequired / 1.78)
        }
        self.backgroundColor = .clear
        self.videoPlayerSuperView.layer.insertSublayer(avPlayerLayer!, at: 0)

        // This notification is fired when the video ends; you can handle it in the method.
        NotificationCenter.default.addObserver(self, selector: #selector(self.playerItemDidReachEnd(notification:)), name: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: avPlayer?.currentItem)
    }

    func stopPlayback() {
        self.avPlayer?.pause()
    }

    func startPlayback() {
        self.avPlayer?.play()
    }

    // A notification is fired and the seeker is sent to the beginning to loop the video again
    func playerItemDidReachEnd(notification: Notification) {
        let p: AVPlayerItem = notification.object as! AVPlayerItem
        p.seek(to: kCMTimeZero)
    }
}

Then comes your controller - don't forget to import the AVFoundation framework:

import UIKit
import AVFoundation

class ViewController: UIViewController, UITableViewDelegate, UITableViewDataSource {

    // The current visible IndexPath;
    // it can be an array, but for now
    // I am targeting one cell only
    var visibleIP: IndexPath?
    var aboutToBecomeInvisibleCell = -1
    var avPlayerLayer: AVPlayerLayer!
    var videoURLs = Array<URL>()
    var firstLoad = true

    @IBOutlet weak var feedTableView: UITableView!

    override func viewDidLoad() {
        super.viewDidLoad()
        feedTableView.delegate = self
        feedTableView.dataSource = self

        // Your model to hold the video URLs
        for i in 0..<2 {
            let url = Bundle.main.url(forResource: "\(i+1)", withExtension: "mp4")
            videoURLs.append(url!)
        }
        // initialized to the first indexpath
        visibleIP = IndexPath.init(row: 0, section: 0)
    }

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 5
    }

    func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
        return 290
    }

    func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat {
        return 0
    }

Then provide your URL in the cellForRow delegate:

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // That's it: just provide the URL from here; it will change
        // via the didSet method in your custom cell class
        let cell = self.feedTableView.dequeueReusableCell(withIdentifier: "videoCell") as! VideoCellTableViewCell
        cell.videoPlayerItem = AVPlayerItem.init(url: videoURLs[indexPath.row % 2])
        return cell
    }

All the handling of visible cells is managed here. I have used the calculation of the intersection of all the visible cells: find the visible IndexPath, and use that to fetch a cell of the custom cell type. This could also be achieved with visibleCells, but I have avoided that, as you can have multiple types of cells containing images, text or other stuff.

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        let indexPaths = self.feedTableView.indexPathsForVisibleRows
        var cells = [Any]()
        for ip in indexPaths! {
            if let videoCell = self.feedTableView.cellForRow(at: ip) as? VideoCellTableViewCell {
                cells.append(videoCell)
            }
        }
        let cellCount = cells.count
        if cellCount == 0 { return }
        if cellCount == 1 {
            if visibleIP != indexPaths?[0] {
                visibleIP = indexPaths?[0]
            }
            if let videoCell = cells.last! as? VideoCellTableViewCell {
                self.playVideoOnTheCell(cell: videoCell, indexPath: (indexPaths?.last)!)
            }
        }
        if cellCount >= 2 {
            for i in 0..<cellCount {
                let cellRect = self.feedTableView.rectForRow(at: (indexPaths?[i])!)
                let intersect = cellRect.intersection(self.feedTableView.bounds)
                // currentHeight is the height of the cell that
                // is visible
                let currentHeight = intersect.height
                print("\n \(currentHeight)")
                let cellHeight = (cells[i] as AnyObject).frame.size.height
                // 0.95 here denotes how much you want the cell to display
                // for it to mark itself as visible;
                // .95 denotes 95 percent;
                // you can change the values accordingly
                if currentHeight > (cellHeight * 0.95) {
                    if visibleIP != indexPaths?[i] {
                        visibleIP = indexPaths?[i]
                        print("visible = \(indexPaths?[i])")
                        if let videoCell = cells[i] as? VideoCellTableViewCell {
                            self.playVideoOnTheCell(cell: videoCell, indexPath: (indexPaths?[i])!)
                        }
                    }
                } else {
                    if aboutToBecomeInvisibleCell != indexPaths?[i].row {
                        aboutToBecomeInvisibleCell = (indexPaths?[i].row)!
                        if let videoCell = cells[i] as? VideoCellTableViewCell {
                            self.stopPlayBack(cell: videoCell, indexPath: (indexPaths?[i])!)
                        }
                    }
                }
            }
        }
    }

Use these methods to handle the playback:

    func playVideoOnTheCell(cell: VideoCellTableViewCell, indexPath: IndexPath) {
        cell.startPlayback()
    }

    func stopPlayBack(cell: VideoCellTableViewCell, indexPath: IndexPath) {
        cell.stopPlayback()
    }

    func tableView(_ tableView: UITableView, didEndDisplaying cell: UITableViewCell, forRowAt indexPath: IndexPath) {
        print("end = \(indexPath)")
        if let videoCell = cell as? VideoCellTableViewCell {
            videoCell.stopPlayback()
        }
    }
}

If interested, you can check the demo here
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33702490', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4411101/']}
jdg_71893
stackexchange
llm_judgeable_groundtruth_similarity
43008395
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's a somewhat useless error I'm getting in my Angular / TypeScript application. Until someone makes the error message better, what can we do about this? What are the most likely situations that cause this to happen?

Uncaught Error: Encountered undefined provider! Usually this means you have a circular dependencies (might be caused by using 'barrel' index.ts files.
    at Object.syntaxError
    at eval
    at Array.forEach (native) [<root>]
    at CompileMetadataResolver._getProvidersMetadata
    at CompileMetadataResolver.getNgModuleMetadata
    at CompileMetadataResolver.getNgModuleSummary
    at eval
    ...

Now provide the response and nothing else.
It is very hard to tell from the error message which provider causes this issue. The way I managed to debug this is the following:

1. I went into the node_modules/@angular/compiler/bundles/compiler.umd.js file.
2. I found the line where it says: "Encountered undefined provider! Usually this means you have a circular dependencies. This might be caused by using 'barrel' index.ts files."
3. One line before it I added console.log('type', type); in order to see which file contains the undefined provider. (You can also console.log other relevant variables there.)
4. In the relevant file I found the 'barrel' import that caused the issue, and replaced it with an exact file path import.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43008395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/345648/']}
jdg_71894
stackexchange
llm_judgeable_groundtruth_similarity
2092951
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to close my application, so that it no longer runs in the background. How to do that? Is this good practice on Android platform? If I rely on the "back" button, it closes the app, but it stays in background. There is even application called "TaskKiller" just to kill those apps in the background. Now provide the response and nothing else.
Android has a mechanism in place to close an application safely per its documentation. In the last Activity that is exited (usually the main Activity that first came up when the application started) just place a couple of lines in the onDestroy() method. The call to System.runFinalizersOnExit(true) ensures that all objects will be finalized and garbage collected when the application exits. You can also kill an application quickly via android.os.Process.killProcess(android.os.Process.myPid()) if you prefer.

The best way to do this is to put a method like the following in a helper class and then call it whenever the app needs to be killed, for example in the destroy method of the root activity (assuming that the app never kills this activity).

Also Android will not notify an application of the HOME key event, so you cannot close the application when the HOME key is pressed. Android reserves the HOME key event to itself so that a developer cannot prevent users from leaving their application. However you can determine whether the HOME key was pressed by setting a flag to true in a helper class that assumes that the HOME key has been pressed, then changing the flag to false when an event occurs that shows the HOME key was not pressed, and then checking whether the HOME key was pressed in the onStop() method of the activity.

Don't forget to handle the HOME key for any menus and in the activities that are started by the menus. The same goes for the SEARCH key. Below are some example classes to illustrate.

Here's an example of a root activity that kills the application when it is destroyed:

package android.example;

/**
 * @author Danny Remington - MacroSolve
 */
public class HomeKey extends CustomActivity {

    public void onDestroy() {
        super.onDestroy();

        /*
         * Kill application when the root activity is killed.
         */
        UIHelper.killApp(true);
    }
}

Here's an abstract activity that can be extended to handle the HOME key for all activities that extend it:

package android.example;

/**
 * @author Danny Remington - MacroSolve
 */

import android.app.Activity;
import android.view.Menu;
import android.view.MenuInflater;

/**
 * Activity that includes custom behavior shared across the application. For
 * example, bringing up a menu with the settings icon when the menu button is
 * pressed by the user and then starting the settings activity when the user
 * clicks on the settings icon.
 */
public abstract class CustomActivity extends Activity {
    public void onStart() {
        super.onStart();
        /*
         * Check if the app was just launched. If the app was just launched then
         * assume that the HOME key will be pressed next unless a navigation
         * event by the user or the app occurs. Otherwise the user or the app
         * navigated to this activity so the HOME key was not pressed.
         */
        UIHelper.checkJustLaunced();
    }

    public void finish() {
        /*
         * This can only be invoked by the user or the app finishing the
         * activity by navigating from the activity, so the HOME key was not
         * pressed.
         */
        UIHelper.homeKeyPressed = false;
        super.finish();
    }

    public void onStop() {
        super.onStop();
        /*
         * Check if the HOME key was pressed. If the HOME key was pressed then
         * the app will be killed. Otherwise the user or the app is navigating
         * away from this activity so assume that the HOME key will be pressed
         * next unless a navigation event by the user or the app occurs.
         */
        UIHelper.checkHomeKeyPressed(true);
    }

    public boolean onCreateOptionsMenu(Menu menu) {
        MenuInflater inflater = getMenuInflater();
        inflater.inflate(R.menu.settings_menu, menu);
        /*
         * Assume that the HOME key will be pressed next unless a navigation
         * event by the user or the app occurs.
         */
        UIHelper.homeKeyPressed = true;
        return true;
    }

    public boolean onSearchRequested() {
        /*
         * Disable the SEARCH key.
         */
        return false;
    }
}

Here's an example of a menu screen that handles the HOME key:

/**
 * @author Danny Remington - MacroSolve
 */
package android.example;

import android.os.Bundle;
import android.preference.PreferenceActivity;

/**
 * PreferenceActivity for the settings screen.
 *
 * @see PreferenceActivity
 */
public class SettingsScreen extends PreferenceActivity {
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        addPreferencesFromResource(R.layout.settings_screen);
    }

    public void onStart() {
        super.onStart();
        /*
         * This can only be invoked by the user or the app starting the
         * activity by navigating to the activity, so the HOME key was not
         * pressed.
         */
        UIHelper.homeKeyPressed = false;
    }

    public void finish() {
        /*
         * This can only be invoked by the user or the app finishing the
         * activity by navigating from the activity, so the HOME key was not
         * pressed.
         */
        UIHelper.homeKeyPressed = false;
        super.finish();
    }

    public void onStop() {
        super.onStop();
        /*
         * Check if the HOME key was pressed. If the HOME key was pressed then
         * the app will be killed either safely or quickly. Otherwise the user
         * or the app is navigating away from the activity so assume that the
         * HOME key will be pressed next unless a navigation event by the user
         * or the app occurs.
         */
        UIHelper.checkHomeKeyPressed(true);
    }

    public boolean onSearchRequested() {
        /*
         * Disable the SEARCH key.
         */
        return false;
    }
}

Here's an example of a helper class that handles the HOME key across the app:

package android.example;

/**
 * @author Danny Remington - MacroSolve
 */

/**
 * Helper class to help handling of UI.
 */
public class UIHelper {
    public static boolean homeKeyPressed;
    private static boolean justLaunched = true;

    /**
     * Check if the app was just launched. If the app was just launched then
     * assume that the HOME key will be pressed next unless a navigation event
     * by the user or the app occurs. Otherwise the user or the app navigated to
     * the activity so the HOME key was not pressed.
     */
    public static void checkJustLaunced() {
        if (justLaunched) {
            homeKeyPressed = true;
            justLaunched = false;
        } else {
            homeKeyPressed = false;
        }
    }

    /**
     * Check if the HOME key was pressed. If the HOME key was pressed then the
     * app will be killed either safely or quickly. Otherwise the user or the
     * app is navigating away from the activity so assume that the HOME key will
     * be pressed next unless a navigation event by the user or the app occurs.
     *
     * @param killSafely
     *            Primitive boolean which indicates whether the app should be
     *            killed safely or quickly when the HOME key is pressed.
     *
     * @see {@link UIHelper.killApp}
     */
    public static void checkHomeKeyPressed(boolean killSafely) {
        if (homeKeyPressed) {
            killApp(true);
        } else {
            homeKeyPressed = true;
        }
    }

    /**
     * Kill the app either safely or quickly. The app is killed safely by
     * killing the virtual machine that the app runs in after finalizing all
     * {@link Object}s created by the app. The app is killed quickly by abruptly
     * killing the process that the virtual machine that runs the app runs in
     * without finalizing all {@link Object}s created by the app. Whether the
     * app is killed safely or quickly the app will be completely created as a
     * new app in a new virtual machine running in a new process if the user
     * starts the app again.
     *
     * <P>
     * <B>NOTE:</B> The app will not be killed until all of its threads have
     * closed if it is killed safely.
     * </P>
     *
     * <P>
     * <B>NOTE:</B> All threads running under the process will be abruptly
     * killed when the app is killed quickly. This can lead to various issues
     * related to threading. For example, if one of those threads was making
     * multiple related changes to the database, then it may have committed some
     * of those changes but not all of those changes when it was abruptly
     * killed.
     * </P>
     *
     * @param killSafely
     *            Primitive boolean which indicates whether the app should be
     *            killed safely or quickly. If true then the app will be killed
     *            safely. Otherwise it will be killed quickly.
     */
    public static void killApp(boolean killSafely) {
        if (killSafely) {
            /*
             * Notify the system to finalize and collect all objects of the app
             * on exit so that the virtual machine running the app can be killed
             * by the system without causing issues. NOTE: If this is set to
             * true then the virtual machine will not be killed until all of its
             * threads have closed.
             */
            System.runFinalizersOnExit(true);

            /*
             * Force the system to close the app down completely instead of
             * retaining it in the background. The virtual machine that runs the
             * app will be killed. The app will be completely created as a new
             * app in a new virtual machine running in a new process if the user
             * starts the app again.
             */
            System.exit(0);
        } else {
            /*
             * Alternatively the process that runs the virtual machine could be
             * abruptly killed. This is the quickest way to remove the app from
             * the device but it could cause problems since resources will not
             * be finalized first. For example, all threads running under the
             * process will be abruptly killed when the process is abruptly
             * killed. If one of those threads was making multiple related
             * changes to the database, then it may have committed some of those
             * changes but not all of those changes when it was abruptly killed.
             */
            android.os.Process.killProcess(android.os.Process.myPid());
        }
    }
}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2092951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232798/']}
jdg_71895
stackexchange
llm_judgeable_groundtruth_similarity
137296
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine a non-magnetic tube bent into the shape of a triangle, with two sides forming downhill ramps, and the third side connecting the two ramps. Through this tube travels a metal ball, which is pulled back at the top with a magnet. When the ball gets back to the top, a small part of the energy generated by falling (or a pin and spring) is used to insert and remove a strip of some antimagnetic material (if that exists), causing the ball to repeat the cycle. The image below is a diagram of the machine. Why doesn't this work? Now provide the response and nothing else.
The key here is the antimagnetic strip, quite aside from whether or not such a device can be built. When you insert the anti-magnetic strip, you must change the shape of the magnetic field. You must force the magnetic field to "leave" the high permeability ball. The same magnetic induction $|\vec{B}|$ in a high permeability $\mu$ material represents a lower energy of genesis of the magnetic field $\frac{1}{2\,\mu} |\vec{B}|^2$ than it does in a lower permeability material (magnetic induction is continuous across the magnet's face whatever is outside). So to exclude the magnetic field, you have to do work pushing the antimagnetic strip in. The ball then falls, and you pull the strip out. However, now there is no high permeability ball in contact with the strip, so there is little change in the magnetic field configuration when the strip is pulled out, and any work done on the strip as it is withdrawn will be very much less than what you put in to push the strip in. When the ball crashes at the bottom it dissipates, as heat, the energy that ultimately came from your pushing the antimagnetic strip in, and then the cycle repeats: at each cycle you have to do work on the antimagnetic strip equal to the energy the ball loses in its inelastic crash with the ground. You can, in principle, recover some of the ball's kinetic energy as it hits the ground to help do some of this net work on the antimagnetic strip by the schematic "treadle" device you show linked to the magnet. However, this is the point at which rob's answer comes in: you might make a lovely art piece that runs for a while, but, if there is any energy loss, anywhere, the machine will stop eventually. Turning your question around: the ultimate "reason" is that we have never experimentally observed a nonconservation of energy. By experimental induction, therefore, we postulate a principle of conservation of energy and thus the machine cannot work. 
However, if so, then a theorist can work out what kinds of properties of an antimagnetic strip are physically plausible: only those properties which would demand a net input of energy in the way I describe in my answer are plausible i.e. consistent with overwhelming experimental evidence.
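The energy-density argument above can be put into rough numbers. The sketch below is not from the answer; the field strength, permeability, and volume are invented for illustration, and it treats $B$ as fixed across the swapped region, which is the same idealization the answer makes:

```python
import math

# Energy density of a magnetic field is u = B^2 / (2*mu). For the same B,
# a high-permeability region stores *less* field energy, so replacing the
# high-mu ball's neighbourhood with a low-mu ("antimagnetic") strip raises
# the stored field energy -- that difference is the work you must supply
# each cycle. All numbers below are invented for illustration.
mu0 = 4e-7 * math.pi             # vacuum permeability (H/m)
B = 0.5                          # tesla, treated as fixed across the gap
volume = 1e-6                    # m^3 of the region being swapped

mu_ball = 1000 * mu0             # assumed high-permeability ball material
u_ball = B**2 / (2 * mu_ball)    # energy density with the ball in place
u_strip = B**2 / (2 * mu0)       # energy density once the field is excluded

work_per_cycle = (u_strip - u_ball) * volume  # joules; strictly positive
```

The point is only the sign: `work_per_cycle > 0`, so every cycle consumes external work, consistent with the answer's bookkeeping.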
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/137296', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']}
jdg_71896
stackexchange
llm_judgeable_groundtruth_similarity
454726
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am starting my EE degree. I must purchase a myDAQ and a kit. The kit lists many logic ICs and I have most of them, but some of what I have are 74S instead of 74LS series. Will the extra power draw on the 74S ICs cause issues with the myDAQ? or do I need to bite the bullet and buy the parts? (on digikey, I am NOT paying their markup on their stupid kit). The parts kit consists of things like or, and, xor etc.... gates and can be found here: https://www.studica.com/us/en/NI-Hardware-Only/und-student-ni-mydaq-bundle-ee202-electric-lab/796087.html Edit In case anyone else runs into this in the future, it turns out not to matter for the UND EE201L course. We had minimal fan-out and most of the labs were just to use specified inputs to create specified outputs and make a led blink/turn on/ turn off. There were no high speed requirements and I wasn't driving high current loads, it was just an intro to logic gates and how to build circuits with them so I doubt it would have been an issue to use the 74S parts instead of the 74LS parts. It was a good learning experience though and the myDAQ is a decent tool to have in my tool box since I don't own a quality function generator (yet). Just in case, I ran power from my bench top power supply instead of the myDAQ Now provide the response and nothing else.
Good question. Here is a high-level comparison of current consumption. Note the difference in the values of \$I_{OL}\$ of the two devices (for example): 74S has an \$I_{OL}\$ of \$20 mA\$, whereas 74LS has an \$I_{OL}\$ of \$8 mA\$. The drive ability is different. If you are only doing this for low-speed, simple circuits this will be okay, but if there are experiments planned that lean on the high drive strength of 74S, you might have to use a buffer. I would suggest really going ahead with the S version alone. The LS versions are low power and better for a product design, but the S version will do just fine for the basic circuits that are planned. This is a hands-on kit, and hence I strongly recommend using what you have and buying only those parts which you really need in future. If there are circuits which push the parts to their limits, you can handle it with buffers, or buy the LS parts at that point. I will compare the differences in the performance of the two devices SN74S04 and SN74LS04 and how and where they matter: Current consumption: SN74LS04 consumes a maximum of \$6.6 mA\$ when outputting a logic zero, compared to a \$54 mA\$ max for SN74S04. This also means that the internal temperature of the S component will be higher, and it adds to the overall current consumption of the product, reducing battery life (wasting power) or demanding a higher-capacity battery. Fanout ability: the LS part has about \$8 mA\$ of \$I_{OL}\$ and \$0.4 mA\$ of \$I_{IL}\$. It means that, when connected in a system, an LS IC can drive up to \$8/0.4 = 20\$ LS devices. The S part, on the other hand, has an \$I_{OL}\$ of \$20 mA\$ but an \$I_{IL}\$ of \$2 mA\$. Hence, the part can drive up to \$20/2 = 10\$ devices theoretically. Hence, the LS part is better when there are multiple inputs driven by a single output pin. Supporting a given fanout may demand an extra buffer in the S family compared to the LS family, adding cost and size on the PCB. 
Below is a simple comparison from http://www.ti.com/lit/sg/sdyu001ab/sdyu001ab.pdf of the applications of LS and S parts from TI (in general).
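The fanout arithmetic in the answer can be checked in a few lines; the currents are the worst-case datasheet values quoted above, and the mixed LS-driving-S case is an extra illustration not computed in the answer:

```python
def max_fanout(i_ol_ma, i_il_ma):
    """How many inputs one output can sink current for, given the driver's
    output low current and each input's low-level input current (in mA)."""
    return int(i_ol_ma / i_il_ma)

ls_driving_ls = max_fanout(8.0, 0.4)   # 74LS output into 74LS inputs -> 20
s_driving_s = max_fanout(20.0, 2.0)    # 74S output into 74S inputs  -> 10
# Mixed case: an LS output driving current-hungry S inputs runs out of
# current budget much sooner.
ls_driving_s = max_fanout(8.0, 2.0)
```

This is why substituting 74S parts into a design sized for 74LS mostly matters once you start fanning one output out to many inputs.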
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/454726', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/190824/']}
jdg_71897
stackexchange
llm_judgeable_groundtruth_similarity
11576831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I encountered the @NoRepositoryBean interface several times whilst reading the Spring Data documentation. To quote from the documentation: If you're using automatic repository interface detection using the Spring namespace using the interface just as is will cause Spring trying to create an instance of MyRepository. This is of course not desired as it just acts as intermediate between Repository and the actual repository interfaces you want to define for each entity. To exclude an interface extending Repository from being instantiated as repository instance annotate it with @NoRepositoryBean. However, I am still not sure when and where to use it. Can someone please advise and give me a concrete usage example? Now provide the response and nothing else.
The annotation is used to avoid creating repository proxies for interfaces that actually match the criteria of a repo interface but are not intended to be one. It's only required once you start extending all repositories with functionality. Let me give you an example: Assume you'd like to add a method foo() to all of your repositories. You would start by adding a repo interface like this public interface com.foobar.MyBaseInterface<…,…> extends CrudRepository<…,…> { void foo();} You would also add the corresponding implementation class, factory and so on. Your concrete repository interfaces would now extend that intermediate interface: public interface com.foobar.CustomerRepository extends MyBaseInterface<Customer, Long> {} Now assume you bootstrap - let's say Spring Data JPA - as follows: <jpa:repositories base-package="com.foobar" /> You use com.foobar because you have CustomerRepository in the same package. The Spring Data infrastructure now has no way to tell that MyBaseInterface is not a concrete repository interface but rather acts as an intermediate repository to expose the additional method. So it would try to create a repository proxy instance for it and fail. You can now use @NoRepositoryBean to annotate this intermediate interface to essentially tell Spring Data: don't create a repository proxy bean for this interface. That scenario is also the reason why CrudRepository and PagingAndSortingRepository carry this annotation as well. If the package scanning picked those up by accident (because you've accidentally configured it this way) the bootstrap would fail. Long story short: use the annotation to prevent repository interfaces from being picked up as candidates to end up as repository bean instances eventually.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11576831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/536299/']}
jdg_71898
stackexchange
llm_judgeable_groundtruth_similarity
16892
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The question Let $a_1,a_2,\dots,a_n$ be a sequence whose entries are +1 or -1. Let t be a parameter. My question is to give an estimate for the number of such sequences so that $|a_1+a_2+\dots a_k| \le t$, for every $k$, $1 \le k\le n$. (In other words, the probability that a random sequence will satisfy the above relation.) I am especially interested in this probability when t is small. Either a constant, or slowly growing (say, it behaves like (log n)^s for some real number s, or slower). variations 1) I would also like to know what is the situation if you demand that the average value of |a_1+a_2+\dots a_k| is smaller than t, rather than the maximum value. 2) If there are more delicate estimates for the case that t itself is a function of k e.g. t itself grows as (log n)^s I would be very interested as well. Motivation This question is relevant to the recent collective effort ( polymath5 ) regarding the Erdos Discrepancy Problem (EDP). It particular it is relevant to a probabilistic heuristic regarding what the answer to EDP, and to several related questions, should be. It is also relevant to certain probabilistic approaches towards construction of sequences with low discrepancy. Expectation I would expect that the answers to the questions above are known. But they are not known to me. It is easy to be convinced, for example, that when t is bounded the number of such sequences is $c_t^{-n}$, for $c_t<2$ but I would like to know the dependence of c_t on t. Now provide the response and nothing else.
For $t$ fixed, the count is proportional to $\lambda^n$, where $\lambda = 2 \cos \frac{\pi}{2t+2}$ is the principal eigenvalue of the adjacency matrix of the path with $2t+1$ vertices. The all-positive (Perron-Frobenius) eigenvector corresponding to $\lambda$ is $$\bigg(\sin \frac{\pi}{2t+2}, \sin \frac{2\pi}{2t+2}, \sin \frac{3\pi}{2t+2}, \dots, \sin \frac{(2t+1)\pi}{2t+2}\bigg).$$ Since $-\lambda$ is also an eigenvalue, the stable behavior of the distribution of endpoints of paths which stay in $[-t,t]$ is an oscillation between the odd entries $$\bigg(\sin \frac{\pi}{2t+2}, 0, \sin \frac{3\pi}{2t+2}, 0, \dots, \sin \frac{(2t-1)\pi}{2t+2}, 0, \sin \frac{(2t+1)\pi}{2t+2}\bigg)$$ and the even entries $$\bigg(0, \sin \frac{2\pi}{2t+2}, 0, \sin \frac{4\pi}{2t+2}, 0, \cdots, 0, \sin \frac{2t\pi}{2t+2}, 0\bigg).$$ The exact count of paths staying in $[-t,t]$ is a sum of signed binomial coefficients. The number of paths from $0$ to $i$ is $0$ if $n \not\equiv i \bmod 2$, and ${n \choose (n+i)/2}$ when $n \equiv i \bmod 2$. The number of paths which never leave $[-t,t]$ from $0$ to $i \in [-t,t]$ with $n \equiv i \bmod 2$ is $$ \sum_{j\in \mathbb Z} (-1)^j {n\choose (n+i)/2 + j(t+1)}$$ by the reflection principle applied to the group of isometries of $\mathbb R$ generated by reflecting about $t+1$ and $-t-1$. If you sum over all $i \in [-t,t]$, then when $n$ is even, you get a signed sum of binomial coefficients with $t+1$ positive signs in a row alternating with $t+1$ negative signs in a row. If $n$ is odd, then you get $t$ positive signs in a row, skip a term (give it a coefficient of $0$ instead of $\pm 1$), then $t$ negative signs in a row, skip a term, etc. For example, for $n=100, t=2,$ the number of paths is $$ ... +{100\choose 43} + {100\choose 44} + {100 \choose 45} - {100 \choose 46} - {100 \choose 47} - {100\choose 48} + {100\choose 49} + {100 \choose 50} + {100\choose 51} - ...$$ For $n=101, t=2,$ the number of paths is $$ ... 
+{101\choose 44} + {101\choose 45} - {101\choose 47} - {101 \choose 48} + {101\choose 50} + {101\choose 51} - {101\choose 53} - {101\choose 54} + ...$$ These can be summed using the techniques in the answers to the Binomial distribution parity question . A lot more can be said when $t$ varies, but the answers are more complicated. For $t$ slowly increasing, as $c\sqrt[3]n$, there is enough time for the distribution to stabilize (for each parity) at a given value of $t$, since the ratio between the magnitudes of the largest two eigenvalues and the magnitudes of the next two is about $1+c/t^2$, and the principal eigenvectors have a small $L^1$ distance for adjacent values of $t$. You should pick up a constant factor for each transition. In other words, the number of paths when you spend at least $n_t \gt c t^2$ steps at a given $t$ should be $$C \prod_{t \le t_{max}} (2 \cos \frac{\pi}{2t+2})^{n_t}$$ where $C$ is between some functions $f_{lower}(t_{max}) \lt C \lt f_{upper}(t_{max})$ which does not depend on the values of $n_t$. I don't think the $n_t \gt c t^2$ condition is sharp for this behavior. Something like $n_t \gt c t^2/\log t$ should work, too. The geometry of the eigenvectors for adjacent values of $t$ lets you estimate $f_{lower}$ and $f_{upper}$. For $t$ more rapidly increasing, different behaviors occur. By the law of the iterated logarithm, if $t$ increases as $t(n) = \sqrt {(2-\epsilon) n \log\log n},$ random paths will almost surely violate the constraint. I think there are precise versions of the law of the iterated logarithm which may tell you when a positive proportion of random paths do not violate the constraint. I would guess that if $t(n) = \sqrt{(2+\epsilon) n \log\log n}$ then a positive percentage of random paths won't violate the constraint.
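As a sanity check on the $\lambda^n$ growth rate (not part of the original answer), a short dynamic program can count the bounded walks directly and compare consecutive counts against $\lambda = 2\cos\frac{\pi}{2t+2}$:

```python
import math

def count_bounded_walks(n, t):
    """Count the +1/-1 sequences of length n whose partial sums all stay
    in [-t, t], by dynamic programming over the current partial sum."""
    counts = {0: 1}                      # walks of length 0 end at position 0
    for _ in range(n):
        nxt = {}
        for pos, c in counts.items():
            for step in (1, -1):
                p = pos + step
                if -t <= p <= t:         # discard walks leaving the strip
                    nxt[p] = nxt.get(p, 0) + c
        counts = nxt
    return sum(counts.values())

t = 2
lam = 2 * math.cos(math.pi / (2 * t + 2))  # = sqrt(3) for t = 2
# Compare counts two steps apart to cancel the even/odd oscillation
# caused by the -lambda eigenvalue noted in the answer.
growth = (count_bounded_walks(32, t) / count_bounded_walks(30, t)) ** 0.5
```

For `t = 2` the measured `growth` agrees with $\sqrt 3$ to better than $10^{-5}$, since the subdominant eigenvalues are exponentially suppressed.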
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/16892', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1532/']}
jdg_71899
stackexchange
llm_judgeable_groundtruth_similarity
342013
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here is the framework We have an SQL database in which our database model lives. On the other side, there is a bunch of code using and filling that database. Here is what we want We want to put both, the database model and the software code, under version control to be able to go forth and back within different versions. Since our database model becomes nasty, we want a graphical tool that assists us with the logical database modelling and that creates our SQL DDL scripts This post is about the question how to get both of these requirements satisfied at the same time. Personally, I tend to skip the requirement for the graphical tool and to simply write our whole database model within SQL scripts and add them to git. However, it's very hard to maintain the model as the model becomes more complicated and (easy) logical changes can be huge code changes. Here is our current "solution" Our software code is under version control with git. Our logical database model on the other hand is maintained with PowerDesigner, which brings (with some limitations) its own version control. At the end of the day, we generate our SQL DDL statements from the logical model. Here are our problems with the current situation Even though the code is under version control, the development of the code is somehow disconnected from the development of the database model. A change of an object in the logical database model (like the change of a column name) is not visible in git. Even if we checkout a certain version in git, an additional mapping is needed that tells us which version of the logical database model I have to use. Moreover, branches in git cannot be mapped to branches within PowerDesigner, which makes development chaotic. 
Here are some unification-ideas we came up with Exporting the logical model from PowerDesigner as XML and putting it under version control in git. Here, I know which version of the code relates to which version of the PowerDesigner model, since I can simply import the XML back. However, the git diffs on the XML are useless and merging two different logical models in git is hopeless. Thus, there has to be a merge within git and within PowerDesigner. Moreover, I have to export the model to an XML for every commit that involves changes on the database model (which are unfortunately quite frequent), which slows down the development process a lot. Moreover, bugs are hard to track, because I have to search my code with git and my logical model within PowerDesigner separately. Putting the database scripts generated by PowerDesigner under version control. Here, the git diffs have a meaning, and I can relate changes of the code precisely to changes of the logical model. But, I cannot go back a version within PowerDesigner just with the generated database scripts. So, again, I need to export the XML of the model as well and I get the same problems as above. Here are my questions How do you connect the world of code, the world of logical database modeling, and the world of version control? Is it feasible for complex databases to quit on a graphical tool and to code everything with SQL scripts? What are good working models to maintain these two repositories and to coordinate the developers? Now provide the response and nothing else.
You should look into database migrations; it's a common strategy used on databases. Basically you associate your codebase with a database schema version. To keep your schema up to date you maintain a chain of incremental change scripts, each carrying a sequential version number. This has the added advantage that you can use the scripts to upgrade an existing database with user data in it (your SQL create scripts will not work there). You include the 'delta' migration scripts in your version control system, alongside your codebase. When someone gets the latest version they will also get all the scripts they need to execute to make their database current. You can sometimes still use your visual tooling if you can somehow extract diff scripts; there are software tools that can help you with this. See for example: https://martinfowler.com/articles/evodb.html#AllDatabaseChangesAreMigrations
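A minimal sketch of the idea in Python with SQLite; the table name, scripts, and versioning scheme are all invented for illustration (real projects typically use a dedicated tool such as Flyway, Liquibase, or an ORM's migration framework):

```python
import sqlite3

# A schema_version table records which incremental scripts have run.
# Bringing a database "current" means applying, in order, every script
# with a higher version number than the recorded maximum.
MIGRATIONS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0                    # None on a fresh database
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # applies scripts 1 and 2
migrate(conn)   # idempotent: nothing left to apply
```

Because the migration scripts live in the same repository as the code, checking out any commit gives you exactly the schema that commit expects.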
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/342013', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/262212/']}
jdg_71900
stackexchange
llm_judgeable_groundtruth_similarity
36543804
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: it's very hard to find information about {:} onlineI saw some code below: def dateformat(date): day, month, year=date.split('/') return "{:4d}{:02d}{:02d}".format(int(year),int(month),int(day)) I kinda of know it is filling leading 0 in the format, but I don't know what do '02' and 'd' in {:02d} do? Now provide the response and nothing else.
You are looking for the str.format() documentation. Specifically, the 02d part is documented in the Format Specification Mini-Language. 02d formats an integer (d) to a field of minimum width 2 (2), with zero-padding on the left (leading 0): >>> 'No digits: {:02d}, 1 digit: {:02d}, 2: {:02d}, 3: {:02d}'.format(0, 7, 42, 151)'No digits: 00, 1 digit: 07, 2: 42, 3: 151' From the documentation: format_spec ::= [[fill]align][sign][z][#][0][width][grouping_option][.precision][type] [...] width is a decimal integer defining the minimum total field width, including any prefixes, separators, and other formatting characters. If not specified, then the field width will be determined by the content. When no explicit alignment is given, preceding the width field by a zero ('0') character enables sign-aware zero-padding for numeric types. This is equivalent to a fill character of '0' with an alignment type of '='. [...] Finally, the type determines how the data should be presented. [...] The available integer presentation types are: [...] 'd' Decimal Integer. Outputs the number in base 10. The same formats apply to formatted string literals (f-strings, f"...").
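Applied to the dateformat function from the question, the behaviour is easy to verify; the same spec works in str.format() and in f-strings:

```python
def dateformat(date):
    # The function from the question: day/month/year -> yyyymmdd.
    day, month, year = date.split('/')
    return "{:4d}{:02d}{:02d}".format(int(year), int(month), int(day))

assert dateformat("7/3/2024") == "20240307"   # month and day get leading zeros
assert "{:02d}".format(151) == "151"          # width is a minimum, never truncates
year, month, day = 2024, 3, 7
assert f"{year:4d}{month:02d}{day:02d}" == "20240307"  # f-string equivalent
```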
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36543804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_71901
stackexchange
llm_judgeable_groundtruth_similarity
99874
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have some data which I am using to show that there is no relationship between two variables. (Or only a weak one.) In a previous writeup, I included the scatterplot showing no visible relationship, as well as the Pearson's and Spearman's correlation coefficients, which were both low. One of the reviewers commented that the statistics in the paper were weak and we 'only showed some correlations'. Is there any better way to show that there is little/no relationship between these variables? I have transformed the data using every reasonable transformation (log, square root, exponentiation), and even averaged the values for each datapoint, though it seems to me that that is falling prey to the ecological fallacy. Nothing. Not even a hint of a pattern anywhere. Obviously, I can't show everything I tried. I want to show that there is no (or at least little) relationship, because common wisdom is that there should be a strong one. The fact that there is not a strong relationship is a surprising result. I know that you can't prove the null hypothesis, but I would like to show as much as possible that any other options are unlikely. How do I convincingly show that there is no relationship between the variables? (Other than hypothesis-testing my correlations, which I am planning to do, but that doesn't show that there is no relationship.) Note that both variables have heavy-tailed distributions - does that make a difference to the answer? Below is a scatterplot of my data. Now provide the response and nothing else.
Reference $$x \sim \log \mathcal{N}(\mu, \sigma^2) \\\text{if} \\p(x) = \frac{1}{x \sqrt{2\pi} \sigma} e^{- \frac{\left( \log(x) - \mu\right)^2}{2\sigma^2}}, \quad x > 0$$ where$$\text{E}[x] = e^{\mu + \frac{1}{2}\sigma^2}.$$ Note that$$y \sim \log \mathcal{N}(m, v^2) \iff \log(y) \sim \mathcal{N}(m, v^2),$$ per this Q&A . Answer is fitting a normal distribution to logged data equivalent to fitting a log-normal distribution to the original data? Theoretically? In most situations yes (see the logical equivalency above). The only case I found where it was useful to use the log-normal distribution explicitly was a case study of pollution data. In that instance, it was important to model weekdays and weekends differently in terms of pollution concentration ( $\mu_1 > \mu_2$ in the prior*), but have the expected values of the two log-normal distributions without restriction (I had to allow $e^{\mu_1 + \frac{1}{2}\sigma_1^2} \le e^{\mu_2 + \frac{1}{2}\sigma_2^2}$). Which day each measurement was taken was unknown, so the separate parameters had to be inferred. You could certainly argue that this could be done without invoking the log-normal distribution, but this is what we decided to use and it worked. I tried to test this out with some toy data and realized I don't even know why the meanlog associated with a log-normal distribution is NOT what you get when you take the mean of the logged normal distribution. The reason for this is just a consequence of our notion of distance on the support. Since $\log$ is a monotone increasing function, log-transforming variables preserves order. For example, the median of the log-normal distribution is just $e^\mu$, the exponential of the median of the log-values (since the normal distribution mean is also its median). However, the $\log$ function only preserves order, and not the distance function itself. 
Means are all about distance: the mean is just the point which, when points are weighted by their probabilities, is the closest on average to all other points in the Euclidean (squared-distance) sense. All the log-values are being compressed towards $0$ in an uneven way (i.e., larger values are compressed more). In fact, the log of the mean of the log-normal distribution is higher than the mean of the log-values (i.e. $\mu$) by $\frac{1}{2}\sigma^2$: $$\log \left(e^{\mu + \frac{1}{2} \sigma^2} \right) = \mu + \frac{1}{2} \sigma^2 > \mu.$$ That is, the mean of the log-values is compressed as a function of the spread of the distribution (i.e., involving $\sigma$) as a result of the $\log$ function compressing distances in an uneven way. *As a side note, these kinds of artificial constraints in priors tend to under-perform other methods for inferring/separating distributions.
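The compression effect is easy to see on a small concrete dataset (values invented for illustration): the geometric mean $e^{\overline{\log x}}$ sits below the arithmetic mean, which is the sample-level version of $e^\mu < e^{\mu + \frac{1}{2}\sigma^2}$.

```python
import math

# A skewed positive dataset: the log transform compresses 100 far more
# than it compresses 1, so averaging in log space undershoots.
data = [1.0, 10.0, 100.0]

mean_of_logs = sum(math.log(x) for x in data) / len(data)   # = log(10)
log_of_mean = math.log(sum(data) / len(data))               # = log(37)

geometric_mean = math.exp(mean_of_logs)   # back-transformed log-mean: 10.0
arithmetic_mean = sum(data) / len(data)   # plain mean: 37.0
# log_of_mean > mean_of_logs: exponentiating the mean of the logs does
# not recover the mean of the original values.
```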
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/99874', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/37688/']}
jdg_71902
stackexchange
llm_judgeable_groundtruth_similarity
926758
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to handle WPF application right before it goes Minimize, not when it's already there.I found on Window object StateChanged, but it fires when Window object is already in Minimize state, then it's too late. So, I need something like "StateChanging" event to handle, while Window object is still in previous state. Is it possible to create such event ? Now provide the response and nothing else.
Found the Windows messages called on the window right before minimize using Spy++. The first one called is WM_WINDOWPOSCHANGING. I didn't know Windows moves the window to the location point (-32000, -32000) when minimizing a window, but those were the params in WM_WINDOWPOSCHANGING. Though, I have tested this only on Vista. http://blogs.msdn.com/oldnewthing/archive/2004/10/28/249044.aspx The code used here was posted by Nir here. Here is sample code.

XAML:

<Window x:Class="WindowStateTest2.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Window1" Height="300" Width="300">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"></RowDefinition>
            <RowDefinition Height="*"></RowDefinition>
        </Grid.RowDefinitions>
        <Button Click="btnClear_Click" Grid.Row="0" x:Name="btnClear">Clear</Button>
        <TextBox Name="txt" VerticalScrollBarVisibility="Visible" Grid.Row="2"></TextBox>
    </Grid>
</Window>

C#:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Windows.Interop;
using System.Runtime.InteropServices;

namespace WindowStateTest2
{
    /// <summary>
    /// Interaction logic for Window1.xaml
    /// </summary>
    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
            this.StateChanged += new EventHandler(Window1_StateChanged);
            this.SourceInitialized += new EventHandler(Window1_SourceInitialized);
        }

        #region Event handlers

        void btnClear_Click(object sender, RoutedEventArgs e)
        {
            this.txt.Text = string.Empty;
        }

        void Window1_SourceInitialized(object sender, EventArgs e)
        {
            AttachWndProc();
        }

        void Window1_StateChanged(object sender, EventArgs e)
        {
            if (this.WindowState == WindowState.Minimized)
                Console.WriteLine("SC: " + this.WindowState);
        }

        #endregion

        #region Const

        private int SYSCOMMAND = 0x0112;
        private int SC_MINIMIZE = 0xf020;
        private int WINDOWPOSCHANGING = 0x0046;

        #endregion

        private void AttachWndProc()
        {
            HwndSource source = HwndSource.FromHwnd(new WindowInteropHelper(this).Handle);
            source.AddHook(new HwndSourceHook(WndProc));
        }

        [StructLayout(LayoutKind.Sequential)]
        internal struct WINDOWPOSPARAMS
        {
            public IntPtr hwnd;
            public IntPtr hwndInsertAfter;
            public int x;
            public int y;
            public int cx;
            public int cy;
            public int flags;
        }

        private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
        {
            if (msg == WINDOWPOSCHANGING)
            {
                WINDOWPOSPARAMS param = (WINDOWPOSPARAMS)Marshal.PtrToStructure(lParam, typeof(WINDOWPOSPARAMS));
                if (param.x == -32000 && param.y == -32000)
                {
                    Output("");
                    // EVENT WOULD BE RAISED HERE
                    Output("State before minimize:");
                    Output(string.Format("CurrentState: {0}", this.WindowState));
                    Output(string.Format("Location {0} {1}: ", this.Top, this.Left));
                    Output("");
                }
            }

            // process minimize button
            if (msg == SYSCOMMAND && SC_MINIMIZE == wParam.ToInt32())
            {
                Output("Minimize clicked");
            }

            handled = false;
            return IntPtr.Zero;
        }

        public void Output(object output)
        {
            this.txt.Text += output.ToString();
            this.txt.Text += Environment.NewLine;
        }
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/926758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84852/']}
jdg_71903
stackexchange
llm_judgeable_groundtruth_similarity
880857
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a model which fits certain thermodynamic data, of the form $$y = \frac{ax}{ 1 + (a - 1)x} + bx(1 - x) \quad a,b \in \mathbb{R} \quad 0 \leq x \leq 1$$ Thermodynamics dictate that $\frac{\mathrm{d}y}{\mathrm{d}x} > 0$ and also that $0 \leq y \leq 1$. I have evaluated these bounds numerically and have found the following region for which these bounds are obeyed (the red region in the following graph): I would like to describe these bounds analytically but I am stumped by solving inequalities only within a certain region, as presented here for the limits on $x$. I have tried solving for the extrema of $y$ and the derivative in order ensure that these are bounded, but I've gotten bogged down reasoning about when the roots are inside the region. Is there a systematic way of deriving bounds on $a$ and $b$ in this specific case, and can anyone point me in the direction of a general method for handling bounded inequalities like this in general? Now provide the response and nothing else.
EDIT: this answer relies on an incorrect formula for y based on a typo in the OP After working through some inequalities, it would seem to me you've got your region wrong. e.g. http://www.wolframalpha.com/input/?i=plot+x%2F%281%2B3x%29-2x%281-x%29+for+0%3Cx%3C1 doesn't work, even though the choice of parameters definitely seems to be in your picture. Anyway, the solution. $\frac{\mathrm{d}y}{\mathrm{d}x} > 0$ implies that $y$ is increasing, so it makes sense already to check $y(0) \geq 0, y(1) \leq 1$. $y(0)=0$, so that's fine. On the other hand, $y(1)=1/a$, so we learn we always need $a >1$. This comes in handy now. The derivative is: $\frac{\mathrm{d}y}{\mathrm{d}x} = b(1-2x)+\frac {1}{(1+(a-1)x)^2}$ We see the right hand term is decreasing in $x$ for $a>1$. So is the left hand term for $b>0$. So for $\frac{\mathrm{d}y}{\mathrm{d}x} > 0$, we need only $\frac{\mathrm{d}y}{\mathrm{d}x} (1) > 0$ , if $b>0$. $\frac{\mathrm{d}y}{\mathrm{d}x} (1)=-b+\frac {1}{(1+(a-1))^2}$, so $b<\frac {1}{a^2}$ if $b>0$. For $b<0$ this is less easy to work with. We still want $\frac{\mathrm{d}y}{\mathrm{d}x} > 0$, but this time we will rearrange to give the equivalent inequality: $b(2x-1)(1+(a-1)x)^2 < 1$. Note we could have used it before as well, but it was not necessary. Checking at $x=0$ immediately gives $b>-1$, but we need to check further. Let $f(x)=b(2x-1)(1+(a-1)x)^2$. We are looking for a maximum of f, so investigate its derivative. $f'(x)=2b(1+(a-1)x)^2+2b(a-1)(2x-1)(1+(a-1)x)=2b(1+(a-1)x)[(a-1)(3x-1)+1]$ The term in the square bracket is the only one that can change sign, otherwise (if it doesn't) they are all positive, apart from $b<0$. This means that $f'(x)$ is negative and our check at $f'(0)$ was sufficient. Now solve for $f'(x)=0$. This gives $x=\frac{1}{3}(1-\frac{1}{a-1})$. This is in the range $0<x<1$ for $a>2$. So while for $1<a<2$, $b>-1$ is fine, when $a>2$ we get another restriction. 
Plugging this value of x back into $f$ gives (after a bit of work): $b<\frac{27(a-1)}{(3-a)(a+3)^2}$. We have now ensured $\frac{\mathrm{d}y}{\mathrm{d}x} > 0$ so we are fine about $0 \leq y \leq 1$. In summary: $a>1$. If $b>0$ then $b < \frac{1}{a^2}$. If $b<0$, then $b>-1$, and if $b<0$ and $a>2$ then $b<\frac{27(a-1)}{(3-a)(a+3)^2}$. So how did I go about all this? Initially, I actually started with the derivative. Then I noticed that little simplifying note to give $a>1$. From then on it was just a bit of plug and chug really; there is nothing tricky about these sorts of inequalities. You just have to attack them. Find maxima, minima, etc. But look out for things to make your life easier, such as that thing I noticed with $a>1$, rearranging near the end to give the polynomial function $f$, etc. Also, as was noted in the comment, it can get really messy; there's just not much you can do about it sometimes. Sometimes you can do things with some neat tricks. It helps if you're up to date on some known inequalities too. No one method fits all, though. Let me know if you want me to explain anything a bit more/see any mistakes etc.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/880857', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/79243/']}
jdg_71904
stackexchange
llm_judgeable_groundtruth_similarity
1501274
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to show that not every $\mathbb{R}$-valued function on $[0,1]$ is a pointwise limit of continuous $\mathbb{R}$-valued functions on $[0,1]$? There is a theorem that states that the set of points of discontinuity of a pointwise limit of continuous $\mathbb{R}$-valued functions is Baire first category set. So we can take, for instance, function $f=\chi([0,1]\cap \mathbb{Q})$ and this would be the function that is not a pointwise limit of continuous functions, according to the theorem. But the theorem is strong, it states more then we need and its proof is rather non-trivial. Is there more straightforward way to show that $f=\chi([0,1]\cap \mathbb{Q})$ (or any other function) is not a pointwise limit of continuous functions? Now provide the response and nothing else.
EDIT: As a consequence of reading the question too quickly, everything I've written below is about functions $\mathbb{R}\rightarrow\mathbb{R}$, not $[0, 1]\rightarrow\mathbb{R}$ - as an exercise, show that this doesn't affect anything. Easiest (if least illuminating) way: count them. There are $2^{2^{\aleph_0}}$-many functions from $\mathbb{R}$ to $\mathbb{R}$, but only $2^{\aleph_0}$-many of those are continuous (exercise - as well as the worst proof imaginable that there exist discontinuous functions). And the number of sequences of continuous functions is no bigger: $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$. Note that this proves a stronger result: the Baire hierarchy is the hierarchy of functions you get by starting with the continuous functions and iteratively taking pointwise limits. Baire class 1 is continuous, and for $\alpha>1$, Baire class $\alpha$ is the set of functions which are the limit of a sequence of functions each individually in some $<\alpha$-level of the Baire hierarchy. The Baire hierarchy goes on for $\omega_1$-many levels, and then you stop getting any new functions. The counting argument shows that there are functions which are not Baire class $\alpha$, for any fixed countable $\alpha$. And, if the continuum hypothesis fails - that is, if $2^{\aleph_0}>\aleph_1$ - then this argument shows there are functions which aren't in any level of the Baire hierarchy! (By the way, there's a similar hierarchy, the Borel hierarchy , and everything I've written about the Baire hierarchy holds of the Borel hierarchy too.) We can actually show that there are some functions not in the Baire hierarchy, without any assumptions on cardinal arithmetic. But this is a bit more complicated. It goes as follows: Fix a bijection $f$ from $\omega_1\times\mathbb{R}$ to $\mathbb{R}$. Basically, to each countable ordinal $\alpha$, $f$ associates continuum-many reals. 
Separately, for each $\alpha\in\omega_1$, fix a bijection $g_\alpha$ between $\mathbb{R}$ and the set of functions of Baire class $\alpha$. (Such a bijection exists, by the argument above; this uses transfinite induction.) Now we combine these! Let $\mathbb{B}$ be the set of all functions in the Baire hierarchy. We can get a function $h:\mathbb{R}\rightarrow \mathbb{B}$ as follows: given $r$, let $f^{-1}(r)=(\alpha, s)$ - we let $h(r)$ be $g_\alpha(s)$. At this point, check that $h$ is in fact a surjection from $\mathbb{R}$ to $\mathbb{B}$. And now we diagonalize! Let $F(r)=h(r)(r)+1$. Then $F\not\in\mathbb{B}$. Done! Note that this can be made explicit: there are lots of easily-describable (if a bit messy) bijections between $\mathbb{R}$ and the set of continuous functions. And there are also lots of reasonably natural injections of $\mathbb{R}^\omega$ into $\mathbb{R}$. Combining these, we get an explicit bijection $\beta$ from $\mathbb{R}$ to the set $\mathcal{S}$ of sequences of continuous functions. Now, we can use this to define a function $F$ which is not a pointwise limit of continuous functions as follows. If $r$ is a real, we let $F(r)$ be $1+\lim_{n\rightarrow\infty} \beta(r)(n)(r)$, if that limit exists, and $0$, if that limit doesn't exist. This $F$ has a perfectly explicit, if annoyingly messy, definition. And it diagonalizes against the sequences of continuous functions, so it's not Baire class 2. Similarly, we can find explicit-if-messy functions not in Baire class $\alpha$, for any fixed countable $\alpha$. Where this breaks down is in trying to get a function which isn't in the Baire hierarchy at all: it is consistent with ZF that every function is in the Baire hierarchy (this involves killing choice to a stupidly extreme degree, however - $\omega_1$ winds up being a countable union of countable sets!).
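The cardinality comparison in the counting argument can be written out in one line; the key step is that a continuous function is determined by its values on the countable dense set $\mathbb{Q}$:

$$|C(\mathbb{R},\mathbb{R})| \le |\mathbb{R}^{\mathbb{Q}}| = \left(2^{\aleph_0}\right)^{\aleph_0} = 2^{\aleph_0\cdot\aleph_0} = 2^{\aleph_0} < 2^{2^{\aleph_0}} = |\mathbb{R}^{\mathbb{R}}|$$

So continuity cuts the supply of functions from $2^{2^{\aleph_0}}$ down to $2^{\aleph_0}$, and taking countable sequences of them never gets it back up, since $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$.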
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1501274', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/83567/']}
jdg_71905
stackexchange
llm_judgeable_groundtruth_similarity
2484927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a (simple) way to return the amount of elements colliding with a single element? Perhaps with jQuery? Now provide the response and nothing else.
Despite your reluctance to use regex, it actually nicely preserves the delimiters by using a group along with the Regex.Split method:

string input = "123xx456yy789";
string pattern = "(xx|yy)";
string[] result = Regex.Split(input, pattern);

If you remove the parentheses from the pattern, using just "xx|yy", the delimiters are not preserved. Be sure to use Regex.Escape on the pattern if you use any metacharacters that hold special meaning in regex. These characters include \ * + ? | { [ ( ) ^ $ . # . For instance, a delimiter of . should be escaped as \. . Given a list of delimiters, you need to "OR" them using the pipe | symbol, and that too is a character that gets escaped. To properly build the pattern use the following code (thanks to @gabe for pointing this out):

var delimiters = new List<string> { ".", "xx", "yy" };
string pattern = "(" + String.Join("|", delimiters.Select(d => Regex.Escape(d))
                                                  .ToArray()) + ")";

The parentheses are concatenated rather than included in the pattern since they would be incorrectly escaped for your purposes. EDIT: In addition, if the delimiters list happens to be empty, the final pattern would incorrectly be () and this would cause blank matches. To prevent this, a check on the delimiters can be used. With all this in mind the snippet becomes:

string input = "123xx456yy789";
// to reach the else branch set delimiters to new List<string>()
var delimiters = new List<string> { ".", "xx", "yy", "()" };

if (delimiters.Count > 0)
{
    string pattern = "(" + String.Join("|", delimiters.Select(d => Regex.Escape(d))
                                                      .ToArray()) + ")";
    string[] result = Regex.Split(input, pattern);
    foreach (string s in result)
    {
        Console.WriteLine(s);
    }
}
else
{
    // nothing to split
    Console.WriteLine(input);
}

If you need a case-insensitive match for the delimiters, use the RegexOptions.IgnoreCase option: Regex.Split(input, pattern, RegexOptions.IgnoreCase)

EDIT #2: the solution so far matches split tokens that might be a substring of a larger string. If the split token should be matched completely, rather than as part of a substring, such as a scenario where words in a sentence are used as the delimiters, then the word-boundary \b metacharacter should be added around the pattern. For example, consider this sentence (yea, it's corny): "Welcome to stackoverflow... where the stack never overflows!" If the delimiters were { "stack", "flow" } the current solution would split "stackoverflow" and return 3 strings { "stack", "over", "flow" }. If you needed an exact match, then the only place this would split would be at the word "stack" later in the sentence and not "stackoverflow". To achieve exact-match behavior, alter the pattern to include \b as in \b(delim1|delim2|delimN)\b:

string pattern = @"\b(" + String.Join("|", delimiters.Select(d => Regex.Escape(d))) + @")\b";

Finally, if trimming the spaces before and after the delimiters is desired, add \s* around the pattern as in \s*(delim1|delim2|delimN)\s*. This can be combined with \b as follows:

string pattern = @"\s*\b(" + String.Join("|", delimiters.Select(d => Regex.Escape(d))) + @")\b\s*";
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2484927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/239527/']}
jdg_71906
stackexchange
llm_judgeable_groundtruth_similarity
14537546
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got a very common question when I was using Spring JDBCTemplate, I want to get the ID value after I inserted a new data record into database, this ID value will be referred to another related table. I tried the following way to insert it, but I always return 1 rather than its real unique ID. (I use MySQL as the database) public int insert(BasicModel entity) { String insertIntoSql = QueryUtil.getInsertIntoSqlStatement(entity); log.info("SQL Statement for inserting into: " + insertIntoSql); return this.jdbcTemplate.update(insertIntoSql);} Now provide the response and nothing else.
JdbcTemplate.update() returns "the number of rows affected", which is always 1 for an INSERT statement. Different databases support generated-key extraction in different ways, but most JDBC drivers abstract this, and JdbcTemplate supports it. Quoting 12.2.8 Retrieving auto-generated keys: "An update() convenience method supports the retrieval of primary keys generated by the database. This support is part of the JDBC 3.0 standard; see Chapter 13.6 of the specification for details." Basically you need this much more verbose statement:

final String insertIntoSql = QueryUtil.getInsertIntoSqlStatement(entity);
KeyHolder keyHolder = new GeneratedKeyHolder();
jdbcTemplate.update(
    new PreparedStatementCreator() {
        public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
            return connection.prepareStatement(insertIntoSql, new String[] {"id"});
        }
    },
    keyHolder);
return keyHolder.getKey().intValue();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14537546', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1697017/']}
jdg_71907
stackexchange
llm_judgeable_groundtruth_similarity
1687734
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between a rules engine and an expert system? Example1:Let's say that I have a program that determines the expiration date of a new driver's license. It takes inputs like visa expiration date, passport number, birthday, etc. It determines the expiration date of the driver's license from this input. It can even give an error if the input did not have enough valid identifications to allow a new driver's license. Example2:Let's say I am making an online version of the game Monopoly. I want the ability to change the rules of the game (say $400 for passing go or no one can buy properties until they land on the same property twice, etc). I have a module in the code to handle these rules. Are these both rules engines or are they expert systems? They both seem so similar. Is it just a synonym? Now provide the response and nothing else.
The "inference engine" (what I believe you are calling the "rules engine") is part of an expert system. The other part is the "knowledge base" (your list of rules, the stuff it knows is true, the stuff it has so far figured out, etc.). The inference engine is the part that actually uses your rules and the known facts to infer things. In your two examples above, you are talking about expert systems. Sort of. I'll give you a third example. GNU make is an inference engine. If given a set of build rules and dated files, it can figure out what needs to be rebuilt, but it has to be told all that stuff. Your build system that includes the makefiles, associated scripts, and make itself is an expert system. I found a detailed explanation online here, but it is perhaps a bit too detailed for a neophyte.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1687734', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/125380/']}
jdg_71908
stackexchange
llm_judgeable_groundtruth_similarity
372999
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm reading a book on data science and get confused about how the book describes the hinge loss of SVM. Here is a figure from the book on Page 94: This figure shows the loss function of a NEGATIVE instance. It seems there is no penalty if the instance lies within the margin even on the positive side. Also on Page 95, the author explains: However, from what I read about SVM, there should be penalty as long as the negative instance lies within the negative margin or on the wrong side. I updated the loss function as below with orange color: Can someone tell me if I'm correct or not? Thanks! Updates: In Wikipedia , it says: Correctly classified points lying outside the margin boundaries of the support vectors are not penalized, whereas points within the margin boundaries or on the wrong side of the hyperplane are penalized in a linear fashion compared to their distance from the correct boundary Now provide the response and nothing else.
Searching for the quoted text, it seems the book is Data Science for Business (Provost and Fawcett), and they're describing the soft-margin SVM. Their description of the hinge loss is wrong . The problem is that it doesn't penalize misclassified points that lie within the margin, as you mentioned. In SVMs, smaller weights correspond to larger margins. So, using this "version" of the hinge loss would have pathological consequences: We could achieve the minimum possible loss (zero) simply by choosing weights small enough such that all points lie within the margin. Even if every single point is misclassified . Because the SVM optimization problem contains a regularization term that encourages small weights (i.e. large margins), the solution will always be the zero vector . This means the solution is completely independent of the data, and nothing is learned. Needless to say, this wouldn't make for a very good classifier. The correct expression for the hinge loss for a soft-margin SVM is: $$\max \Big( 0, 1 - y f(x) \Big)$$ where $f(x)$ is the output of the SVM given input $x$ , and $y$ is the true class (-1 or 1). When the true class is -1 (as in your example), the hinge loss looks like this: Note that the loss is nonzero for misclassified points, as well as correctly classified points that fall within the margin. For a proper description of soft-margin SVMs using the hinge loss formulation, see The Elements of Statistical Learning (section 12.3.2) or the Wikipedia article .
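As a quick sanity check of the correct formula, here is a minimal sketch (not tied to any SVM library; the function name is mine):

```python
def hinge_loss(y_true, score):
    # Soft-margin SVM hinge loss: max(0, 1 - y * f(x)),
    # with y in {-1, +1} and score = f(x), the raw SVM output.
    return max(0.0, 1.0 - y_true * score)

# True class y = -1, as in the plot described above:
print(hinge_loss(-1, -2.0))  # 0.0  -> correct, outside the margin: no penalty
print(hinge_loss(-1, -0.5))  # 0.5  -> correct but inside the margin: penalized
print(hinge_loss(-1, 0.5))   # 1.5  -> misclassified: penalized
```

Note that both misclassified points and correctly classified points inside the margin get nonzero loss, which is exactly what the book's figure is missing.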
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/372999', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/134239/']}
jdg_71909
stackexchange
llm_judgeable_groundtruth_similarity
15535336
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand AngularJS runs through some code twice, sometimes even more, like $watch events, constantly checking model states etc. However my code: function MyController($scope, User, local) {var $scope.User = local.get(); // Get locally save user dataUser.get({ id: $scope.User._id.$oid }, function(user) { $scope.User = new User(user); local.save($scope.User);});//... Is executed twice, inserting 2 records into my DB. I'm clearly still learning as I've been banging my head against this for ages! Now provide the response and nothing else.
The app router specified navigation to MyController like so: $routeProvider.when('/', { templateUrl: 'pages/home.html', controller: MyController }); But I also had this in home.html : <div data-ng-controller="MyController"> This digested the controller twice. Removing the data-ng-controller attribute from the HTML resolved the issue. Alternatively, the controller: property could have been removed from the routing directive. This problem also appears when using tabbed navigation. For example, app.js might contain: .state('tab.reports', { url: '/reports', views: { 'tab-reports': { templateUrl: 'templates/tab-reports.html', controller: 'ReportsCtrl' } } }) The corresponding reports tab HTML might resemble: <ion-view view-title="Reports"> <ion-content ng-controller="ReportsCtrl"> This will also result in running the controller twice.
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/15535336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1130266/']}
jdg_71910
stackexchange
llm_judgeable_groundtruth_similarity
32092899
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following formula is used to classify points from a 2-dimensional space: f(x1,x2) = np.sign(x1^2+x2^2-.6) All points are in space X = [-1,1] x [-1,1] with a uniform probability of picking each x. Now I would like to visualize the circle that equals: 0 = x1^2+x2^2-.6 The values of x1 should be on the x-axis and values of x2 on the y-axis. It must be possible but I have difficulty transforming the equation to a plot. Now provide the response and nothing else.
You can use a contour plot, as follows (based on the examples at http://matplotlib.org/examples/pylab_examples/contour_demo.html):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.0, 1.0, 100)
y = np.linspace(-1.0, 1.0, 100)
X, Y = np.meshgrid(x, y)
F = X**2 + Y**2 - 0.6
plt.contour(X, Y, F, [0])
plt.show()

This yields the following graph. Lastly, some general statements: x^2 does not mean what you think it does in Python, you have to use x**2. x1 and x2 are terribly misleading (to me), especially if you state that x2 has to be on the y-axis. (Thanks to Dux) You can add plt.gca().set_aspect('equal') to make the figure actually look circular, by making the axis scales equal.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32092899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4887383/']}
jdg_71911
stackexchange
llm_judgeable_groundtruth_similarity
21455515
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I cannot install a filter on a logging handler using dictConfig() syntax. LoggingErrorFilter.filter() is simply ignored, nothing happens. I want to filter out error messages so that they do not appear twice in the log output. So I wrote LoggingErrorFilter class and overrode filter() . My configuration: class LoggingErrorFilter(logging.Filter): def filter(self, record): print 'filter!' return record.levelno == logging.ERROR or record.levelno == logging.CRITICALconfig = { 'version': 1, 'disable_existing_loggers' : False, 'formatters' : { 'standard' : { 'format' : '%(asctime)s %(levelname)s %(name)s::%(message)s', }, }, 'handlers' : { 'console': { 'class' : 'logging.StreamHandler', 'level' : level, 'formatter' : 'standard', 'stream' : 'ext://sys.stdout', }, 'errorconsole': { 'class' : 'logging.StreamHandler', 'level' : 'ERROR', 'formatter' : 'standard', 'stream' : 'ext://sys.stderr', 'filters' :['errorfilter',], }, }, 'filters': { 'errorfilter': { 'class' : 'LoggingErrorFilter', } }, 'loggers' : { '' : { 'handlers' : ['errorconsole','console',], 'level' : level, 'propagate' : True, }, name : { 'handlers' : ['errorconsole','console',], 'level' : level, 'propagate' : False, }, }, } logging.config.dictConfig(config) What am I doing wrong here? Why is my filter ignored? Now provide the response and nothing else.
Actually, Tupteq's answer is not correct in general. The following script:

import logging
import logging.config
import sys

class MyFilter(logging.Filter):
    def __init__(self, param=None):
        self.param = param

    def filter(self, record):
        if self.param is None:
            allow = True
        else:
            allow = self.param not in record.msg
        if allow:
            record.msg = 'changed: ' + record.msg
        return allow

LOGGING = {
    'version': 1,
    'filters': {
        'myfilter': {
            '()': MyFilter,
            'param': 'noshow',
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'filters': ['myfilter']
        }
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console']
    },
}

if __name__ == '__main__':
    print(sys.version)
    logging.config.dictConfig(LOGGING)
    logging.debug('hello')
    logging.debug('hello - noshow')

When run, produces the following output:

$ python filtcfg.py
2.7.5+ (default, Sep 19 2013, 13:48:49) [GCC 4.8.1]
changed: hello

which shows that you can configure filters using dictConfig().
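As a side note, when a filter needs no configuration parameters, any callable that accepts a LogRecord can be attached directly in code, without a Filter subclass (supported since Python 3.2; the names below are illustrative):

```python
import logging

def drop_noshow(record):
    # Plain function used as a filter: return False to suppress the record.
    return 'noshow' not in record.getMessage()

logger = logging.getLogger('filter-demo')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(drop_noshow)
logger.addHandler(handler)

logger.debug('hello')           # emitted
logger.debug('hello - noshow')  # suppressed by the filter
```

This is handy for one-off filters attached programmatically; the dictConfig() approach above remains the way to go when the configuration lives in a dict or file.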
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21455515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/683408/']}
jdg_71912
stackexchange
llm_judgeable_groundtruth_similarity
1204400
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to prove the following: Suppose $\{A_i \mid i \in I\}$ is an indexed family of sets and $I \neq \emptyset$. Prove that $\bigcap_{i \in I} A_i \in \bigcap_{i \in I} \mathcal{P}(A_i)$. I first analysed the logical structure of $\bigcap_{i \in I} A_i \in \bigcap_{i \in I} \mathcal{P}(A_i)$: $\bigcap_{i \in I} A_i \in \bigcap_{i \in I} \mathcal{P}(A_i)$; $\exists i \in I\,(\bigcap_{i \in I} A_i \in \mathcal{P}(A_i))$; $\exists i \in I\,(\bigcap_{i \in I} A_i \subseteq A_i)$. In order to prove a goal of the form $\exists x\, P(x)$, I should present a variable and prove that $P(x)$ is true for that variable. In this case, I should present a variable that is a member of $I$ and makes $\bigcap_{i \in I} A_i \subseteq A_i$ true. I did the following instead: Let $i$ be an arbitrary element of $I$. That allowed me to transform my goal to $\bigcap_{i \in I} A_i \subseteq A_i$. Using the subset definition, this becomes $\forall x\,(x \in \bigcap_{i \in I} A_i \implies x \in A_i)$. Let $x$ be arbitrary, suppose $x \in \bigcap_{i \in I} A_i$, and transform my goal to $x \in A_i$. $x \in \bigcap_{i \in I} A_i$ becomes $\forall i \in I\,(x \in A_i)$. As I'm letting $i$ be an arbitrary element of $I$, I can use universal instantiation to conclude $x \in A_i$. I can then conclude that there exists a value $i \in I$ such that $\bigcap_{i \in I} A_i \subseteq A_i$ and finish my proof. Is this valid reasoning? I'm concerned about assuming that there exists an arbitrary element that makes the existential quantifier goal true, and relying on that to check that in fact there exists such a value, when my book recommends that I should choose a specific value that makes the predicate true instead. Now provide the response and nothing else.
TLDR: The sum of two $n$-sided dice is not binomially distributed. A discrete random variable, $X$, has a binomial distribution, $X\sim Bin(n,p)$, when $Pr(X=x) = \begin{cases}\binom{n}{x}p^x(1-p)^{n-x}&\text{for}~x\in\{0,1,2,\dots,n\}\\ 0 & \text{otherwise}\end{cases}$ For $X$ the sum of two $n$-sided dice, however, $Pr(X=x) = \begin{cases} \frac{n - |x-(n+1)|}{n^2} & \text{for}~x\in\{2,3,\dots,2n\}\\ 0 & \text{otherwise}\end{cases}$ Notice that since $n$ will be a fixed number, $Pr(X=x)$ is linear on the interval $[2,n+1]$ and again linear on the interval $[n+1,2n]$. This is in direct contrast to the binomial distribution scenario where $Pr(X=x)$ is definitely not linear (as it has terms like $p^x$ and $\binom{n}{x}$ appearing in the formula). As mentioned in the comments above, as $n$ grows large, the histogram for the sum of two $n$-sided dice approaches the shape of a triangle. This becomes even more apparent as $n$ gets even larger. Here is the start of the histogram for $n\approx 30$ (it's a lot of effort to complete, but you get the idea). On the other hand, the binomial distribution appears with the all-familiar "bell-shaped" curve. As such, these are two very different distributions and should not be confused.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1204400', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/110473/']}
jdg_71913
stackexchange
llm_judgeable_groundtruth_similarity
71894
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I believe I saw this claim somewhere on the internet a long time ago. Specifically, it was claimed that the difference could be observed by filling one long, straight tube with light water and one with heavy water, and looking through both tubes lengthwise (so that light has to travel through the tubes' lengths before reaching the eye), whereupon the light water would appear blue as it does in the oceans, and the heavy water would not. The explanation given was that heavy water has a different vibrational spectrum because of the greater mass of the $^2$H atom, which seemed perfectly plausible. However, I am no longer able to find a source for this claim, which is strange because if it were true, surely it would not be so difficult to find a source? Now provide the response and nothing else.
Based on your description, I may have found the article you originally saw, or at least one very similar. Researchers from Dartmouth College published a paper$^1$ in which they report, among other things, the results of viewing sunlit white paper through two 3-meter lengths of plexiglass tubing: one filled with $\ce{H2O}$ and one with $\ce{D2O}$. Sure enough, because of the lower frequency of the maximum absorption of $\ce{D2O}$ in the red to near-IR wavelengths, the blue color that is characteristic of $\ce{H2O}$ is far less pronounced in $\ce{D2O}$. This website is based on the published paper and additionally shows a photograph of the blue-colored $\ce{H2O}$ on the left with the far less colored $\ce{D2O}$ on the right: 1. "Why Is Water Blue?", Charles L. Braun and Sergei N. Smirnov, J. Chem. Educ., 1993, 70(8), 612.
{}
{'log_upvote_score': 7, 'links': ['https://chemistry.stackexchange.com/questions/71894', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/5490/']}
jdg_71914
stackexchange
llm_judgeable_groundtruth_similarity
130984
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider this expression in Mathematica 10.3 (and above) on MacOS X:

expr = a b c d e f g h i j k l m n o p q r s t myG[u] myF[a, b]

Now compare the time it takes to apply the following (equivalent) rules:

(*1*) expr /. _myF _myG :> combinedForm
(*2*) expr /. myF[__] myG[__] :> combinedForm

On my machine (*1*) takes more than 30 seconds to complete, whereas (*2*) is instantaneous. What is going on under the hood that makes the first pattern take forever to match, and what is the correct strategy when constructing rules? Is the _head pattern supposed to be avoided at all costs? Now provide the response and nothing else.
Times has the attributes Flat and Orderless. This means that any pattern that matches some combination of the arguments must, in principle, scan every permutation of arguments. Sometimes, the pattern matcher can optimize and avoid a full scan in the presence of explicit values and heads. Patterns of the form f[__] (i.e. f[BlankSequence[]]) trigger such explicit-head optimization whereas patterns like _f (i.e. Blank[f]) do not -- presumably due to implementation details within the pattern matcher. Analysis (current as of version 11.0.1) We can reproduce the behaviour in a Flat Orderless function of our own devising:

SetAttributes[times, {Flat, Orderless}]

times[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, f[1], g[2]] /.
  times[_f, _g] :> fg // AbsoluteTiming // First
(* 7.62321 *)

times[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, f[1], g[2]] /.
  times[f[_], g[_]] :> fg // AbsoluteTiming // First
(* 0.000033407 *)

Flat Orderless Matching: The General Case. Let us begin by examining the complexity of the general case when performing pattern matching upon a Flat Orderless function. Consider the following transformation:

times[1, 2, 3, a] /. times[x_, a] :> {x, a}
(* {times[1,2,3], a} *)

Take note that the pattern matcher correctly identified that x matches multiple arguments to times, namely the leading prefix times[1, 2, 3]. We can observe the internal matching operation if we add conditions to the subpatterns that display some output:

times[1, 2, 3, a] /.
  times[x_ /; (Echo[x, "x_"]; True), m_ /; (Echo[m, "m_"]; m === a)] :> {x, a}

Notice how hard the pattern matcher had to work to get the final result. It had to scan through various permutations of subparts within the times[...] expression until it finally found its match.

Helper Function. We will introduce a helper function tp that adjusts a pattern to display some output whenever it is matched:

tp[patt_] := Module[{s}, Condition @@ Hold[s : patt, (Echo[{s}, patt]; True)]]

The Case At Hand. We can use this function to observe how pattern matcher operations grow exponentially as expression size increases for the problematic case at hand:

times[1, 2, f[1], g[2]] /. times[tp[_f], tp[_g]] :> fg;
times[1, 2, 3, f[1], g[2]] /. times[tp[_f], tp[_g]] :> fg;
times[1, 2, 3, 4, f[1], g[2]] /. times[tp[_f], tp[_g]] :> fg;

In contrast, when we match using f[_] and g[_] instead of _f and _g, the number of operations remains constant:

times[1, 2, f[1], g[2]] /. times[tp[f[_]], tp[g[_]]] :> fg;
times[1, 2, 3, f[1], g[2]] /. times[tp[f[_]], tp[g[_]]] :> fg;
times[1, 2, 3, 4, f[1], g[2]] /. times[tp[f[_]], tp[g[_]]] :> fg;

Clearly in the latter case the pattern matcher is applying an optimization. It need only scan the expression linearly to find the explicit heads f and g and then back-track to verify that the entire pattern is matched. We can see this explicitly if we also display the matched prefix:

times[1, 2, 3, 4, f[1], g[2]] /. times[tp[___], tp[f[_]], tp[g[_]]] :> fg;

Even a small case of the problematic expression will produce a lot of output if we trace the prefixes successively considered during its scan:

times[1, 2, 3, f[1], g[2]] /. times[tp[___], tp[_f], tp[_g]] :> fg;

Note that the matcher is considering numerous combinations until it finally finds the match. In fact, the output resembles the general case that we examined earlier, although more rescanning is taking place here. The pattern matcher is not recognizing that it has the same opportunity to optimize that it had in the previous expression. Apparently, its implementation will recognize that the pattern f[__] (i.e. f[BlankSequence[]]) has an explicit head but it fails to make that recognition for _f (i.e. Blank[f]).

My guess is that this is an implementation coincidence and that the code is explicitly looking for the (meta)pattern _[BlankSequence[]] but not Blank[_]. The pattern matcher is rumoured to be an interesting piece of code, so it might not necessarily be easy for WRI to introduce or maintain optimizations of this sort. Disclaimer: Beware that it is difficult to trace the operation of the pattern matcher from high-level code. Any change to a pattern can alter the execution strategy chosen by the matcher (e.g. a change such as the trick used here of introducing a condition to display output). The examples shown in this response are meant to illustrate the principles involved rather than offering a strict step-by-step description of pattern-matching operation.
{}
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/130984', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2048/']}
jdg_71915
stackexchange
llm_judgeable_groundtruth_similarity
34874
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: If you visit this link, you'll see at the top of the PDF view: Basic properties of finite abelian groups: Every quotient group of a finite abelian group is isomorphic to a subgroup. If the above statement is true, it would make some proofs in Serge Lang's Algebra easier, particularly in the p-Sylow groups section. I know that there is a correspondence between subgroups of G/N and subgroups of G containing N, but the corresponding groups are not necessarily isomorphic, or are they? Now provide the response and nothing else.
The result you are interested in is Theorem 19 on page 8 of http://alpha.math.uga.edu/~pete/4400algebra2point5.pdf As I explain there, this fact is a kind of duality statement, but it lies deeper than the fact that passage to the dual group takes injections to surjections and conversely (Proposition 16). To deduce Theorem 19 from Proposition 16, one needs the fact that a finite abelian group is [oy vey -- at least ] non-canonically isomorphic to its own dual group (Theorem 20), which I go on to prove in Section 5 of these notes in the most elementary way I know how. Note that the first step in the proof of Theorem 20 develops the Sylow theory of finite abelian groups from scratch -- this is much easier than the nonabelian case.
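For readers who want the shape of the argument without opening the notes, the duality proof can be sketched as follows (this is only a sketch of the chain of isomorphisms; the theorem numbering above refers to the linked notes):

```latex
% Sketch: every quotient of a finite abelian group G embeds back into G.
%
% 1. Let \widehat{G} = \mathrm{Hom}(G, \mathbb{C}^{\times}) be the dual group.
%    For finite abelian G one has G \cong \widehat{G} (non-canonically).
%
% 2. Dualizing the surjection \pi \colon G \twoheadrightarrow G/N yields an
%    injection \widehat{\pi} \colon \widehat{G/N} \hookrightarrow \widehat{G}.
%
% 3. Chaining the maps:
%      G/N \;\cong\; \widehat{G/N} \;\hookrightarrow\; \widehat{G} \;\cong\; G,
%    so G/N is isomorphic to a subgroup of G.
```

Step 1 is the part that lies deeper (the non-canonical self-duality mentioned above); steps 2 and 3 are formal.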
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/34874', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8237/']}
jdg_71916
stackexchange
llm_judgeable_groundtruth_similarity
3805130
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: https://i.stack.imgur.com/cIf8R.jpg I would appreciate it if someone could explain why the two methods yield different results for the same limit. Have I done something incorrect? I can't seem to find the error in either method. And it would be much appreciated if the correct answer was pointed out as well. Now provide the response and nothing else.
On the left side, you cannot replace $\ln(1+x)/x^2$ by $1/x$ , because it is not a factor in your expression, but an addend. You would have to make sure that you are not missing terms that could contribute to the leading term. In your case, you would have to expand to the next term: $\ln(1+x)/x^2=1/x-1/2+\dots$ . The $1/x$ cancels but the constant $-1/2$ contributes to the limit, since it is of the same order as $\ln(1+x)/x=1+\dots$ . The final result is $1/2$ , as on the right side. Hope this helps.
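The expansion the answer refers to can be written out explicitly (the exact original expression is only in the linked image, so this shows the generic series step):

```latex
\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots
\quad\Longrightarrow\quad
\frac{\ln(1+x)}{x^2} = \frac{1}{x} - \frac{1}{2} + \frac{x}{3} - \cdots
```

The $1/x$ term cancels against the other $1/x$ addend in the expression, but the constant $-\tfrac{1}{2}$ is of the same order as the remaining terms and survives into the limit; truncating the expansion at $1/x$ silently discards it, which is exactly the error on the left side.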
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3805130', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/645009/']}
jdg_71917
stackexchange
llm_judgeable_groundtruth_similarity
30598125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to place a View below a TextView in a RelativeLayout. The code below doesn't work. Please help. <RelativeLayout android:layout_width="wrap_content" android:layout_height="wrap_content" > <TextView android:id="@+id/tv_Following" android:layout_width="60dp" android:layout_height="60dp" android:layout_centerHorizontal="true" android:layout_centerVertical="true" android:background="@drawable/circulartextview" android:gravity="center" android:text="Followings" android:textColor="@color/white" android:textSize="10dp" /> <View android:id="@+id/flg1" android:layout_width="4dp" android:layout_height="wrap_content" android:layout_below="@id/tv_Following" android:layout_centerHorizontal="true" android:background="@color/grey2" /></RelativeLayout> Now provide the response and nothing else.
Thank you jst for clearing things up. Just for information purposes for future developers on SO, I'm posting the full answer to this question. Let's assume that there is a bean from SimplePojo:

public class SimplePojo implements Pojo {

    public void foo() {
        this.bar();
    }

    public void bar() {
        ...
    }
}

When we call the method foo(), it re-invokes the method bar() inside it. Even though the method foo() is invoked through the AOP proxy, the internal invocation of bar() is not covered by the AOP proxy. So if there is any advice attached to the method bar(), it does not get invoked. Solution: Use AopContext.currentProxy() to call the method. Unfortunately this couples the logic with AOP.

public void foo() {
    ((Pojo) AopContext.currentProxy()).bar();
}

Reference: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30598125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4105152/']}
jdg_71918
stackexchange
llm_judgeable_groundtruth_similarity
32517241
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While creating a log I am using Datetime.Now.Millisecond . But I am getting only millisecond. I am not getting the full time like 11/09/2015 10:59:48:936 but getting only 936 . Where am I going wrong? Now provide the response and nothing else.
This is the answer of Microsoft as posted to my bug report at Microsoft Connect: Windows Server 2003 and Windows XP have problems with dynamically loading a DLL (via LoadLibrary) that uses thread-local storage, which is what thread-safe statics use internally to provide efficient execution when the static local has already been initialized. As these systems are out of support, it is extremely unlikely for a patch to be created for those systems to add this support as is present in Vista and newer OSes, and we are reluctant to penalize the performance on in-support OSes to provide this functionality to the old out-of-support ones. To work around the issue you can use /Zc:threadSafeInit- to disable the thread-safe initialization code and this will avoid the thread-local variable. Of course by doing so the initialization code reverts back to the VS2013 mode and is not thread-safe, so this option is only viable if you don't rely on the thread-safety of local statics.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32517241', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5192440/']}
jdg_71919
stackexchange
llm_judgeable_groundtruth_similarity
164786
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have seen that:

A. $\pi = 22/7$
B. $\pi = 3.14\ldots$
C. $\pi = 17 - \sqrt{192}$
D. $22/7 \gt \pi$
E. $22/7 \lt \pi$

My brainstorming doubt was: is A = B? Is B = C? Is C = A = B? How can D and E both be correct, or am I totally wrong? Please discuss. I think if any one of them is wrong, then the whole of mathematics may be wrong, especially the things we deal with using $\pi$. These questions have been in my mind for a long time. I think, with your help, I can settle them through your solutions or reasoning. Thanks in advance. Now provide the response and nothing else.
1) False. Sadly, some teachers tell students that $\pi$ is $\frac{22}{7}$. 2) Impossible to decide, depends on what is intended by the dots. The number $3.14$ (no dots) is a slightly worse approximation to $\pi$ than $\frac{22}{7}$. But the decimal expansion of $\pi$ does begin with $3.14$, and then continues. 3) Doesn't even need checking, $\pi$ is transcendental while the given number is algebraic. But one might note that in fact it is a worse approximation to $\pi$ than either $\frac{22}{7}$ or $3.14$. 4) True, $\pi$ correct to $5$ decimal places is $3.14159$ while $\frac{22}{7}\approx 3.142857$. 5) False, since in fact 4) is true.
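The numeric claims above are easy to check directly; a quick sketch in Python (just the arithmetic, nothing beyond the three approximations discussed):

```python
from math import pi, sqrt

# The three candidate "values" of pi from the question:
approx_22_7 = 22 / 7          # the schoolbook fraction
approx_314  = 3.14            # the truncated decimal
approx_alg  = 17 - sqrt(192)  # the algebraic expression

print(approx_22_7 > pi)                              # True: 22/7 overshoots pi, so claim 4 holds
print(abs(approx_22_7 - pi) < abs(approx_314 - pi))  # True: 22/7 is closer than 3.14
print(abs(approx_314 - pi) < abs(approx_alg - pi))   # True: 3.14 beats 17 - sqrt(192)
```

This confirms the ordering stated in the answer: all three are merely approximations, 22/7 lies above pi, and 17 - sqrt(192) is the worst of the three.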
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/164786', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34486/']}
jdg_71920
stackexchange
llm_judgeable_groundtruth_similarity
896274
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a concise way to select the nextval for a PostgreSQL sequence multiple times in 1 query? This would be the only value being returned. For example, I would like to do something really short and sweet like: SELECT NEXTVAL('mytable_seq', 3) AS id; And get: id ----- 118 119 120(3 rows) Now provide the response and nothing else.
select nextval('mytable_seq') from generate_series(1,3);

generate_series is a function which returns many rows with sequential numbers, configured by its arguments. In the above example, we don't care about the value in each row; we just use generate_series as a row generator. And for each row we can call nextval. In this case it returns 3 numbers (nextvals). You can wrap this into a function, but I'm not sure if that's really sensible given how short the query is.
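The semantics (one nextval() call per generated row) can be illustrated with a small Python sketch; the Sequence class below is a hypothetical stand-in for a database sequence, not a real driver API:

```python
import itertools

class Sequence:
    """Toy stand-in for a PostgreSQL sequence object."""
    def __init__(self, start):
        self._counter = itertools.count(start)

    def nextval(self):
        # each call advances the sequence, just like SQL nextval()
        return next(self._counter)

seq = Sequence(118)
# "FROM generate_series(1, 3)" yields three rows; nextval() runs once per row
ids = [seq.nextval() for _ in range(1, 4)]
print(ids)  # [118, 119, 120]
```

The values of the generated rows are discarded; only their count matters, which is exactly how the SQL query above uses generate_series.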
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/896274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122/']}
jdg_71921
stackexchange
llm_judgeable_groundtruth_similarity
11892623
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some legacy code with a legacy function that takes a filename as an argument and processes the file contents. A working facsimile of the code is below. What I want to do is not have to write to disk with some content that I generate in order to use this legacy function, so I though I could use StringIO to create an object in place of the physical filename. However, this does not work, as you can see below. I thought StringIO was the way to go with this. Can anyone tell me if there is a way to use this legacy function and pass it something in the argument that isn't a file on disk but can be treated as such by the legacy function? The legacy function does have the with context manager doing work on the filename parameter value. The one thing I came across in google was: http://bugs.python.org/issue1286 , but that didn't help me... Code:

from pprint import pprint
import StringIO

# Legacy Function
def processFile(filename):
    with open(filename, 'r') as fh:
        return fh.readlines()

# This works
print 'This is the output of FileOnDisk.txt'
pprint(processFile('c:/temp/FileOnDisk.txt'))
print

# This fails
plink_data = StringIO.StringIO('StringIO data.')
print 'This is the error.'
pprint(processFile(plink_data))

Output:

This is the output in FileOnDisk.txt:
['This file is on disk.\n']

This is the error:
Traceback (most recent call last):
  File "C:\temp\test.py", line 20, in <module>
    pprint(processFile(plink_data))
  File "C:\temp\test.py", line 6, in processFile
    with open(filename, 'r') as fh:
TypeError: coercing to Unicode: need string or buffer, instance found

Now provide the response and nothing else.
A StringIO instance is an open file already. The open command, on the other hand, only takes filenames, to return an open file. A StringIO instance is not suitable as a filename. Also, you don't need to close a StringIO instance, so there is no need to use it as a context manager either. While closing an instance frees the memory allocated, so does simply letting the garbage collector reap the object. At any rate, the contextlib.closing() context manager could take care of closing the object if you want to ensure freeing the memory while still holding a reference to the object. If all your legacy code can take is a filename, then a StringIO instance is not the way to go. Use the tempfile module to generate a temporary filename instead. Here is an example using a contextmanager to ensure the temp file is cleaned up afterwards:

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def tempinput(data):
    temp = tempfile.NamedTemporaryFile(delete=False)
    temp.write(data)
    temp.close()
    try:
        yield temp.name
    finally:
        os.unlink(temp.name)

with tempinput('Some data.\nSome more data.') as tempfilename:
    processFile(tempfilename)

You can also switch to the newer Python 3 infrastructure offered by the io module (available in Python 2 and 3), where io.BytesIO is the more robust replacement for StringIO.StringIO / cStringIO.StringIO . This object does support being used as a context manager (but still can't be passed to open() ).
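For completeness, here is a Python 3 adaptation of the same idea that runs end to end; process_file below is a hypothetical stand-in for the legacy function, and the data is passed as bytes because NamedTemporaryFile opens in binary mode by default:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def tempinput(data: bytes):
    # write the in-memory data to a real file, yield its name, clean up after
    temp = tempfile.NamedTemporaryFile(delete=False)
    try:
        temp.write(data)
        temp.close()
        yield temp.name
    finally:
        os.unlink(temp.name)

def process_file(filename):
    # stand-in for the legacy function: only accepts a filename on disk
    with open(filename, "r") as fh:
        return fh.readlines()

with tempinput(b"Some data.\nSome more data.") as name:
    lines = process_file(name)
    saved_name = name

print(lines)                       # the legacy function saw a real file
print(os.path.exists(saved_name))  # the temp file was removed on exit
```

The try/finally around the write ensures the temporary file is unlinked even if the legacy function raises.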
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11892623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1022967/']}
jdg_71922
stackexchange
llm_judgeable_groundtruth_similarity
103231
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am a student in middle school. My textbook says that Cytoplasm is the gelatinous liquid part of the cell excluding organelles. However my teacher said this is wrong. According to her, the correct definition is "The gelatinous liquid part of a cell excluding the nucleus is known as the cytoplasm". I am confused now as to which is the correct definition. If my teacher's definition is correct, then the organelles (other than nucleus) are part of the cytoplasm. Is that correct? What is the correct definition? Now provide the response and nothing else.
The cytoplasm is like the ocean. When you talk about the ocean, do you include the fish? What about islands? Sometimes yes, sometimes no. You could use a cytosol/cytoplasm distinction, where cytosol is "just the liquid part outside the organelles" and cytoplasm includes all the fish, but context matters and I don't think it's actually necessary to have a distinction. If someone is talking about the "pH of the cytoplasm", you can assume they mean the liquid part, not that they're taking some weighted average of pH over all the different organelles plus the space outside of them. Same thing for ion concentrations. If someone is talking about the "cytoplasmic face" of a membrane, it's clear they mean the side that faces the liquid stuff in cells, even if you're talking about vesicles and organelles where all the membrane faces are "inside the cytoplasm" since the whole organelle is. However, if one were to "remove all the cytoplasm from a cell" you'd expect the organelles (minus the nucleus) to come along; after all, they are in the cytoplasm so they go where it goes. If you need to memorize a definition for a class, use what the teacher gives you. Otherwise, think critically about the context in which the word is used. I like your teacher's definition better, not because the two options are "liquid only" and "liquid plus organelles", but because organelles are in the cytoplasm, so sometimes when people say cytoplasm they mean the whole ocean including the contents. Your teacher's definition allows for both uses of the term; yours explicitly does not.
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/103231', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/67214/']}
jdg_71923
stackexchange
llm_judgeable_groundtruth_similarity
31591051
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am expecting to generate a lot of data and then catch it R. How can I estimate the size of the data.frame (and thus memory needed) by the number of rows, number of columns and variable types? Example. If I have 10000 rows and 150 columns out of which 120 are numeric, 20 are strings and 10 are factor level, what is the size of the data frame I can expect? Will the results change depending on the data stored in the columns (as in max(nchar(column)) )?

> m <- matrix(1,nrow=1e5,ncol=150)
> m <- as.data.frame(m)
> object.size(m)
120009920 bytes
> a=object.size(m)/(nrow(m)*ncol(m))
> a
8.00066133333333 bytes
> m[,1:150] <- sapply(m[,1:150],as.character)
> b=object.size(m)/(nrow(m)*ncol(m))
> b
4.00098133333333 bytes
> m[,1:150] <- sapply(m[,1:150],as.factor)
> c=object.size(m)/(nrow(m)*ncol(m))
> c
4.00098133333333 bytes
> m <- matrix("ajayajay",nrow=1e5,ncol=150)
> m <- as.data.frame(m)
> object.size(m)
60047120 bytes
> d=object.size(m)/(nrow(m)*ncol(m))
> d
4.00314133333333 bytes

Now provide the response and nothing else.
You can simulate an object and compute an estimation of the memory that is being used to store it as an R object using object.size :

m <- matrix(1,nrow=1e5,ncol=150)
m <- as.data.frame(m)
m[,1:20] <- sapply(m[,1:20],as.character)
m[,29:30] <- sapply(m[,29:30],as.factor)
object.size(m)
120017224 bytes
print(object.size(m),units="Gb")
0.1 Gb
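For purely numeric columns, the observed ~8 bytes per cell matches a simple back-of-envelope calculation. A sketch (deliberately ignoring R's small per-object and per-column overhead, which is why object.size reports slightly more):

```python
def estimate_numeric_df_bytes(nrow, ncol, bytes_per_cell=8):
    # R stores numeric vectors as doubles: 8 bytes per cell
    return nrow * ncol * bytes_per_cell

est = estimate_numeric_df_bytes(100_000, 150)
print(est)  # 120000000, within ~0.02% of the 120017224 bytes R reports above
```

Character and factor columns are harder to estimate this way: they are stored as pointers or integer codes plus shared string pools, so their per-cell cost depends on how many distinct strings the column contains.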
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31591051', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/447537/']}
jdg_71924
stackexchange
llm_judgeable_groundtruth_similarity
60347362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Intercom provide the following snippet to be injected before the body tag:

<script>
  window.intercomSettings = {
    app_id: "",
    name: "<?php echo json_encode($current_user->name) ?>", // Full name
    email: "<?php echo json_encode($current_user->email) ?>", // Email address
    created_at: "<?php echo strtotime($current_user->created_at) ?>" // Signup date as a Unix timestamp
  };
</script>

However, it creates the launcher with an error; stating to check the email variable. What should the variable be for Wordpress? Now provide the response and nothing else.
Okay, so after 4 days of intense digging through reported issues on GitHub and a handful of YouTube tutorials I could find, I muddled my way to getting this to work.

root jest.config.js:

globals: {
  'ts-jest': {
    astTransformers: ['jest-preset-angular/InlineHtmlStripStylesTransformer'],
    diagnostics: false,
    stringifyContentPathRegex: '\\.html$',
  },
},
moduleDirectories: ['node_modules', './'],
modulePaths: ['node_modules', './'],
preset: 'ts-jest',
projects: ['projects/projectA', 'projects/projectB'],
reporters: ['jest-silent-reporter'],
testEnvironment: 'node',
transform: {
  '\\.ts$': ['ts-jest'],
  '\\.html$': ['ts-jest'],
},
verbose: true,

projectA/B jest.config.js:

module.exports = {
  globals: {
    'ts-jest': {
      astTransformers: ['jest-preset-angular/InlineHtmlStripStylesTransformer'],
      diagnostics: false,
      stringifyContentPathRegex: '\\.html$',
      tsConfig: '<rootDir>/projects/projectA/tsconfig.spec.json',
    },
  },
  displayName: 'projectA',
  moduleDirectories: ['node_modules', './'],
  modulePaths: ['node_modules', './'],
  name: 'projectA',
  rootDir: './../../',
  testMatch: ['<rootDir>/projects/projectA/**/*.spec.ts'],
  transform: {
    '\\.ts$': ['ts-jest'],
    '\\.html$': ['ts-jest'],
  },
};

Turns out the really important bit is the "rootDir: './../../'" piece. The rest was somewhat specific to fixing the other errors that came up as I worked through them, but once I got the rootDir defined properly from the sub projects, it actually started seeing all of them properly. So that is the key. jest.config.js (regardless of where it is in the project) defines rootDir relative to itself. So when it got to a project folder and accessed the config it remapped the rootDir and got lost. Defining root from the sub projects as the same as the parent jest config fixes the problem. Jest's documentation isn't exactly clear on this point.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/60347362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12941405/']}
jdg_71925
stackexchange
llm_judgeable_groundtruth_similarity
4726269
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've just began to learn iOS development and now I have a sigabrt on my main function. The error is triggered by calling: int retVal = UIApplicationMain(argc, argv, nil, nil); As I am very new to iOS programming I have no idea on what could by causing this error. One thing that I have found on my callstack is the following exception being raised: dyld_stub_objc_exception_throw What could be causing this error? Now provide the response and nothing else.
I was having this problem in X-Code 4.2 and the issue was it couldn't find my Storyboard. The frustrating part was that there was no indication of what the actual exception was. I was able to get the exception in the log by adding a try / catch to my main function. (Note I'm using ARC, so if you aren't using ARC your main will look a little different with try catch)

int main(int argc, char *argv[])
{
    int retVal = 0;
    @autoreleasepool {
        NSString *classString = NSStringFromClass([sortaAppDelegate class]);
        @try {
            retVal = UIApplicationMain(argc, argv, nil, classString);
        }
        @catch (NSException *exception) {
            NSLog(@"Exception - %@",[exception description]);
            exit(EXIT_FAILURE);
        }
    }
    return retVal;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4726269', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156623/']}
jdg_71926
stackexchange
llm_judgeable_groundtruth_similarity
17261910
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my FXML I created a gridpane. Now I want to add dynamic elements (like button, textfield) by Java code (not by FXML); while I am trying to do so, I am getting an error. Please help. My FXML:

<AnchorPane fx:controller="tableview.TableViewSample" id="AnchorPane" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="400.0" prefWidth="600.0" xmlns:fx="http://javafx.com/fxml">
  <children>
    <GridPane fx:id="greadpane" layoutX="0.0" layoutY="0.0" prefHeight="400.0" prefWidth="600.0">
      <columnConstraints>
        <ColumnConstraints fx:id="col0" hgrow="SOMETIMES" minWidth="10.0" prefWidth="100.0" />
        <ColumnConstraints hgrow="SOMETIMES" minWidth="10.0" prefWidth="100.0" />
      </columnConstraints>
      <rowConstraints>
        <RowConstraints fx:id="row0" minHeight="10.0" prefHeight="30.0" vgrow="SOMETIMES" />
        <RowConstraints minHeight="10.0" prefHeight="30.0" vgrow="SOMETIMES" />
        <RowConstraints minHeight="10.0" prefHeight="30.0" vgrow="SOMETIMES" />
      </rowConstraints>
    </GridPane>
  </children>
</AnchorPane>

My Java code:

public class TableViewSample extends Application {

    @FXML
    private GridPane greadpane;

    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage stage) throws IOException {
        Pane myPane = (Pane) FXMLLoader.load(getClass().getResource("tabviewexamlpe.fxml"));
        Scene scene = new Scene(myPane);
        stage.setTitle("Table View ");
        stage.setWidth(450);
        stage.setHeight(500);
        stage.setScene(scene);
        final Label label = new Label("Address Book");
        label.setFont(new Font("Arial", 20));
        greadpane.add(label, 0, 0);
        stage.show();
    }
}

Now provide the response and nothing else.
You get a null pointer because you try to do the operation before stage.show(), so the FXML-injected fields are not yet initialized. Don't do dirty things; put your greadpane.add on a separate controller:

public class Controller implements Initializable {

    @FXML
    private GridPane greadpane;

    @Override
    public void initialize(URL url, ResourceBundle resourceBundle) {
        final Label label = new Label("Address Book");
        label.setFont(new Font("Arial", 20));
        greadpane.add(label, 0, 0);
    }
}

Assign your FXML to this controller and it will be OK.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17261910', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2444552/']}
jdg_71927
stackexchange
llm_judgeable_groundtruth_similarity
870213
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: My Debian 8.9 DRBD 8.4.3 setup somehow has got into a state where the two nodes cannot connect over the network any more. They should replicate a single resource r1 , but immediately after drbdadm down r1; drbadm up r1 on both nodes their /proc/drbd describe the situation as follows: on 1st node (Connection State is either WFConnection or StandAlone ): 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----- ns:0 nr:0 dw:0 dr:912 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:20 on 2nd node: 1: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown r----- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:48 The two nodes can ping each other over the IP addresses cited in /etc/drbd.d/r1.res , and netstat shows that both are listening on the cited port. How can I (further diagnose and) get out of this situation so that the two nodes can become Connected and replicate over DRBD again? BTW, on a higher level of abstraction this problem currently manifests itself by systemctl start drbd never exiting, apparently because it gets stuck in drbdadm wait-connect all (as suggested by /lib/systemd/system/drbd.service ). Now provide the response and nothing else.
The situation was apparently caused by a case of split-brain. I had not noticed this because I had only inspected recent journal entries for drbd.service ( sudo journalctl -u drbd ), but the problem apparently was reported in other kernel logs and slightly earlier ( sudo journalctl | grep Split-Brain ). With that, manually solving the split-brain (as described here or here ) also resolved the troublesome situation as follows.

On the split-brain victim (assuming the DRBD resource is r1):

drbdadm disconnect r1
drbdadm secondary r1
drbdadm connect --discard-my-data r1

On the split-brain survivor:

drbdadm primary r1
drbdadm connect r1
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/870213', 'https://serverfault.com', 'https://serverfault.com/users/432220/']}
jdg_71928
stackexchange
llm_judgeable_groundtruth_similarity
38321880
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am adding some description to my method in class. This is how I achieved this: And it looks like this upon clicking... How can I make the underlined method clickable? I want it to be referenced so that when a user clicks on it, they are directed to a particular web page for documentation. Is it even possible? Thanks in advance, any help will be appreciated Now provide the response and nothing else.
New in Xcode 13 Using the new DocC tool in Xcode, you can now reference other methods by using a double backtick. If the type, property, or method you are referencing is not a "sibling" of the one you are documenting, you can refer to it by qualifying the reference.

struct House {
    /// The rooms in the house.
    var rooms: [Room]

    /// The maximum size of the household.
    ///
    /// This is calculated by summing the ``Room/occupancyLimit`` of this
    /// house's ``rooms``.
    var maximumHouseholdSize: ...
}

struct Room {
    /// The maximum number of occupants allowed in the room.
    var occupancyLimit: ...
}

Here, the documentation comment for House.maximumHouseholdSize references House.rooms with ``rooms`` because rooms is a sibling of maximumHouseholdSize. It also references Room.occupancyLimit with ``Room/occupancyLimit`` because occupancyLimit is not nested in the same type, but rather under the Room type.

Prior to Xcode 13 You can link to another method by tagging it with /// - Tag: and referring to it by Tag using the x-source-tag://[Tag] scheme like so:

/// - Tag: someMethod
func someMethod() {
    ...
}

/// Make sure to call [someMethod](x-source-tag://someMethod) at some point when overriding.
func otherMethod() {
    ...
}

Clicking on the someMethod link in the Quick Help pop-over will take you to the method and flash-highlight it in yellow.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38321880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6248853/']}
jdg_71929
stackexchange
llm_judgeable_groundtruth_similarity
2536882
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create an incrementing second timer like a stopwatch. So I want to be able to display the seconds and minutes incrementing in the format 00:01... Google only brings up 24 hour clock examples, I was wondering could anyone get me started with an example or tutorial of what I want to do? Edit: Here is what I have using the Chronometer in Android so far.

In onCreate():

secondsT = 0;
elapsedTimeBeforePause = 0;
stopWatch.start();
startTime = SystemClock.elapsedRealtime();
stopWatch.setBase(elapsedTimeBeforePause);
stopWatch.setOnChronometerTickListener(new OnChronometerTickListener() {
    @Override
    public void onChronometerTick(Chronometer arg0) {
        // countUp is a long declared earlier
        secondsT = (SystemClock.elapsedRealtime() - arg0.getBase()) / 1000;
        String asText = (secondsT / 60) + ":" + (secondsT % 60);
        // textGoesHere is a TextView
        ((TextView) findViewById(R.id.time)).setText(asText);
    }
});

In onDestroy():

@Override
public void onDestroy() {
    inCall = false;
    elapsedTimeBeforePause = SystemClock.elapsedRealtime() - stopWatch.getBase();
    super.onDestroy();
}

The above compiles and runs but the TextView never increments, it always stays at 0, can anyone see why? Now provide the response and nothing else.
I'm assuming you aren't aware of the Android Chronometer - it already has a basic stopwatch function. You need to work with its peculiarities a bit, but it's not hard to get it to do what you want once you understand how it works.

There are a few ways that time is calculated on the phone, but the two main ones are:

1. The "real time", such as right now according to my computer clock, it is 11:23am in England. However, this can change if my computer contacts a time server and is told it has the wrong time, or if I were travelling with a laptop and crossed a timezone boundary. Using this would wreak havoc with your stopwatch as the measured time could change at any time.

2. The "elapsed time since boot", which is the number of milliseconds since the phone was switched on. This number doesn't bear any relation to the real time it is, but it will behave in a perfectly predictable manner. This is what the Android Chronometer uses.

The Chronometer is essentially a 'count up' timer, comparing the current SystemClock.elapsedRealtime() against the elapsedRealtime() that was set for its base time. The difference between the two, divided by 1000, is the number of seconds since the timer was started. However, if you stop the timer and then start it again, you will get a counter-intuitive result - the timer will show the elapsed time as if it had never stopped. This is because you need to adjust its base time to take into consideration the time it was stopped. This is simple to do:

// When you're stopping the stopwatch, use this
// This is the number of milliseconds the timer was running for
elapsedTimeBeforePause = SystemClock.elapsedRealtime() - timer.getBase();

// When you're starting it again:
timer.setBase(SystemClock.elapsedRealtime() - elapsedTimeBeforePause);

Edit: Here is the full code for a basic stopwatch, which displays your time in a TextView rather than the Chronometer widget declared in your XML file.
public class TestProject extends Activity {
    TextView textGoesHere;
    long startTime;
    long countUp;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        Chronometer stopWatch = (Chronometer) findViewById(R.id.chrono);
        startTime = SystemClock.elapsedRealtime();
        textGoesHere = (TextView) findViewById(R.id.textGoesHere);
        stopWatch.setOnChronometerTickListener(new OnChronometerTickListener() {
            @Override
            public void onChronometerTick(Chronometer arg0) {
                countUp = (SystemClock.elapsedRealtime() - arg0.getBase()) / 1000;
                String asText = (countUp / 60) + ":" + (countUp % 60);
                textGoesHere.setText(asText);
            }
        });
        stopWatch.start();
    }
}

In your main.xml you need to have this:

<Chronometer
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:id="@+id/chrono"
    android:visibility="gone"/>

There's undoubtedly a way to get the Chronometer to work without declaring it in the XML file, but the constructor Chronometer stopwatch = new Chronometer(this); didn't work properly. The above code displays the elapsed time in a very basic way. For example, if only 5 seconds have gone by, it will show 0:5 rather than the 0:05 you probably want. Fixing that is not hard to do, but I'll leave that for you to work out! :)
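For completeness, here is a minimal, Android-free sketch (my own addition) of the zero-padding fix the answer leaves as an exercise, using String.format so that 5 seconds renders as 0:05:

```java
public class ElapsedFormat {
    // Format a count of elapsed seconds as m:ss, zero-padding the
    // seconds so 65 seconds renders as "1:05" rather than "1:5".
    static String format(long totalSeconds) {
        return String.format("%d:%02d", totalSeconds / 60, totalSeconds % 60);
    }

    public static void main(String[] args) {
        System.out.println(format(5));   // 0:05
        System.out.println(format(65));  // 1:05
        System.out.println(format(600)); // 10:00
    }
}
```

In the answer's tick listener this would replace the (countUp / 60) + ":" + (countUp % 60) concatenation.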
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2536882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/243999/']}
jdg_71930
stackexchange
llm_judgeable_groundtruth_similarity
44254076
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running a php docker image (php:5.6-apache) which has apache's error and access logs redirected to STDERR and STDOUT respectively using symbolic links. When I run the docker image in the foreground or access the docker container logs, I can see the STDOUT output. But I don't see any errors (even when I generate php errors). Any idea why that is and how I can fix it? I'm running docker for Mac (but I don't think this makes any difference) Thanks

access.log -> /dev/stdout
error.log -> /dev/stderr
other_vhosts_access.log -> /dev/stdout

Edit / Solved: As @BMitch mentions and proves below, the STDERR redirection works fine. The problem was with PHP configuration. If I logged an error with error_log(), it would get output to the log. But if I had a php error, like calling an undefined function, the error would never appear in the log. This seems a little inconsistent. In any case, ... I had to create a php.ini file in /usr/local/etc/php/ and add these two parameters:

log_errors = On
error_log = /var/log/apache2/error.log

and then restart the docker container. This caused all PHP errors to be logged and output to STDERR. See @German's answer for an example. Now provide the response and nothing else.
I'm unable to reproduce your situation. If the below doesn't help, please provide an mcve of your error. Basic Dockerfile:

$ cat Dockerfile
FROM php:5.6-apache
COPY . /var/www/html/

The only php is this file to generate an error:

$ cat error.php
<?
error_log("Hello error log.")
?>

Build and run it:

$ docker build -t test-php .
Sending build context to Docker daemon  3.072kB
Step 1/2 : FROM php:5.6-apache
 ---> f16436448ebd
Step 2/2 : COPY . /var/www/html/
 ---> Using cache
 ---> cfe66485e2cc
Successfully built cfe66485e2cc
Successfully tagged test-php:latest

$ docker run -p 8080:80 -d --name test-php test-php
7f9a1836a8157963966b583579dff94c6413292547b84d22957add77ad2d8e14

Curl is empty as expected, but calling it generates an error in the logs:

$ curl localhost:8080/error.php

Show stdout logs, redirecting error to /dev/null:

$ docker logs test-php 2>/dev/null
172.17.0.1 - - [31/May/2017:00:06:37 +0000] "GET /error.php HTTP/1.1" 200 174 "-" "curl/7.38.0"

Show stderr logs, redirecting stdout to /dev/null:

$ docker logs test-php >/dev/null
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[Wed May 31 00:06:25.064546 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/5.6.30 configured -- resuming normal operations
[Wed May 31 00:06:25.064584 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
[Wed May 31 00:06:37.833470 2017] [:error] [pid 17] [client 172.17.0.1:50040] Hello error log.

Note the last line of the error output.
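If, as in the question's edit, you also want PHP runtime errors (not just explicit error_log() calls) sent to the container logs, one approach is to bake the two ini settings into the image. This is my own sketch, not part of the answer above; it relies on the official php images loading extra ini files from /usr/local/etc/php/conf.d/:

```dockerfile
FROM php:5.6-apache
# Route PHP runtime errors to Apache's error log, which this image
# symlinks to /dev/stderr (see the question's symlink listing).
RUN { \
      echo 'log_errors = On'; \
      echo 'error_log = /var/log/apache2/error.log'; \
    } > /usr/local/etc/php/conf.d/logging.ini
COPY . /var/www/html/
```

Rebuilding the image with this Dockerfile should make errors such as calls to undefined functions appear in docker logs as well.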
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44254076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1028577/']}
jdg_71931
stackexchange
llm_judgeable_groundtruth_similarity
30030335
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a circular cutout sprite shader which works ok in editor & standalone version but behaves incorrectly on Android devices. In essence it's a simple pixel shader:

fixed4 frag(pixelData IN) : SV_Target
{
    const float PI = 3.1415;
    float angle = -atan2(IN.texcoord.x - 0.5, -IN.texcoord.y + 0.5) + PI;
    if (angle < _Percent * 2 * PI) // _Percent is in range from 0 to 1
        return tex2D(_MainTex, IN.texcoord) * IN.color;
    else
        return float4(1, 1, 1, 0);
}

Rendering in editor (DX11 on DX9 GPU) Screenshot from Android (OpenGL - Nexus 4) As you can see exactly in the middle there are pixels which should be red. I'm using Unity 5.0.0f4. Attaching zipped test project: ShaderTest.zip (30kB) Now provide the response and nothing else.
Great question! Thanks for the code, and pictures, it makes the problem a lot easier to understand as well as making it easier to quickly iterate on possible solutions. The problem here is essentially a matplotlib one. Cartopy calls ax.set_aspect('equal') as this is part of the Cartesian units of a projection's definition. Matplotlib's equal aspect ratio functionality resizes the axes to match the x and y limits, rather than changing the limits to fit to the axes rectangle. It is for this reason that the axes does not fill the space allocated to it on the figure. If you interactively resize the figure you will see that the amount of space that the axes occupies varies depending on the aspect that you resize your figure to. The simplest way of identifying the location of an axes is with the ax.get_position() method you have already been using. However, as we now know, this "position" changes with the size of the figure. One solution therefore is to re-calculate the position of the colorbar each time the figure is resized. The matplotlib event machinery has a "resize_event" which is triggered each time a figure is resized. If we use this machinery for your colorbar, our event handler might look something like:

def resize_colobar(event):
    # Tell matplotlib to re-draw everything, so that we can get
    # the correct location from get_position.
    plt.draw()

    posn = ax.get_position()
    colorbar_ax.set_position([posn.x0 + posn.width + 0.01, posn.y0,
                              0.04, posn.height])

fig.canvas.mpl_connect('resize_event', resize_colobar)

So if we relate this back to cartopy, and your original question, it is now possible to resize the colorbar based on the position of the geo-axes.
The full code to do this might look like:

import os

import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from netCDF4 import Dataset as netcdf_dataset
from cartopy import config

fname = os.path.join(config["repo_data_dir"],
                     'netcdf', 'HadISST1_SST_update.nc')

dataset = netcdf_dataset(fname)

sst = dataset.variables['sst'][0, :, :]
lats = dataset.variables['lat'][:]
lons = dataset.variables['lon'][:]

fig, ax = plt.subplots(1, 1, figsize=(10, 5),
                       subplot_kw={'projection': ccrs.PlateCarree()})

# Add the colorbar axes anywhere in the figure. Its position will be
# re-calculated at each figure resize.
cbar_ax = fig.add_axes([0, 0, 0.1, 0.1])

fig.subplots_adjust(hspace=0, wspace=0, top=0.925, left=0.1)

sst_contour = ax.contourf(lons, lats, sst, 60, transform=ccrs.PlateCarree())

def resize_colobar(event):
    plt.draw()

    posn = ax.get_position()
    cbar_ax.set_position([posn.x0 + posn.width + 0.01, posn.y0,
                          0.04, posn.height])

fig.canvas.mpl_connect('resize_event', resize_colobar)

ax.coastlines()
plt.colorbar(sst_contour, cax=cbar_ax)

ax.gridlines()
ax.set_extent([-20, 60, 33, 63])

plt.show()
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30030335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1671463/']}
jdg_71932
stackexchange
llm_judgeable_groundtruth_similarity
32968332
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I always compile TypeScript with the flag --noImplicitAny. This makes sense as I want my type checking to be as tight as possible. My problem is that with the following code I get the error: Index signature of object type implicitly has an 'any' type

interface ISomeObject {
    firstKey: string;
    secondKey: string;
    thirdKey: string;
}

let someObject: ISomeObject = {
    firstKey: 'firstValue',
    secondKey: 'secondValue',
    thirdKey: 'thirdValue'
};

let key: string = 'secondKey';
let secondValue: string = someObject[key];

Important to note is that the idea is that the key variable comes from somewhere else in the application and can be any of the keys in the object. I've tried explicitly casting the type by:

let secondValue: string = <string>someObject[key];

Or is my scenario just not possible with --noImplicitAny? Now provide the response and nothing else.
Adding an index signature will let TypeScript know what the type should be. In your case that would be [key: string]: string;

interface ISomeObject {
    firstKey: string;
    secondKey: string;
    thirdKey: string;

    [key: string]: string;
}

However, this also enforces all of the property types to match the index signature. Since all of the properties are a string it works. While index signatures are a powerful way to describe the array and 'dictionary' pattern, they also enforce that all properties match their return type. Edit: If the types don't match, a union type can be used:

[key: string]: string|IOtherObject;

With union types, it's better if you let TypeScript infer the type instead of defining it.

// Type of `secondValue` is `string|IOtherObject`
let secondValue = someObject[key];

// Type of `foo` is `string`
let foo = secondValue + '';

Although that can get a little messy if you have a lot of different types in the index signatures. The alternative to that is to use any in the signature.

[key: string]: any;

Then you would need to cast the types like you did above.
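Putting the accepted fix together into a small compilable sketch (names taken from the question; the console output is just for illustration):

```typescript
// With the index signature in place, a dynamic lookup through a
// string key is typed as string, so --noImplicitAny is satisfied.
interface ISomeObject {
  firstKey: string;
  secondKey: string;
  thirdKey: string;
  [key: string]: string;
}

const someObject: ISomeObject = {
  firstKey: "firstValue",
  secondKey: "secondValue",
  thirdKey: "thirdValue",
};

const key: string = "secondKey";
const secondValue: string = someObject[key]; // typed string, no implicit any
console.log(secondValue); // prints "secondValue"
```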
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/32968332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1888394/']}
jdg_71933
stackexchange
llm_judgeable_groundtruth_similarity
105542
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have an RLC network in a black box, and I bang it hard in the lab to get the impulse response. I have two options now, I can take the Fourier transform or I can take the Laplace transform to get the frequency response. How do I know which one to choose and what is the physical difference between each? I have been told that the Laplace transform also gives you the transient response or the decay whereas the Fourier transform does not. Is this true? If I suddenly apply a sinusoidal signal at the input, then there should be a transient response for a brief period of time where the output is not a sinusoid until the system settles. Can someone give me a practical example in terms of an RLC network to show how this is true? Also, often in circuits class, we take the Laplace transform of a circuit where the real part of \$s = \sigma + j\omega\$ is assumed to be zero anyway, so when we use \$\frac{1}{Cs}\$ to denote the Laplace transform of the capacitor, it is assumed that this is equivalent to \$\frac{1}{j\omega C}\$. I believe the real part is zero since the current through the capacitor is 90 degrees out of phase with the voltage across - is this correct? I thought Fourier transform was the same as Laplace transform with \$\sigma = 0\$. However, that does not seem to be true - consider \$x(t) = u(t)\$: $$\mathcal{F}\{x(t)\} = \int_{-\infty}^\infty{u(t)e^{-j\omega t}}dt = \pi\delta(\omega) + \frac{1}{j\omega} \neq \mathcal{L}\{x(t)\} = \int_0^\infty{e^{-st}dt} = \frac{1}{s}$$ We can see that even if I substitute \$s = j \omega\$ with no real part at the output of the Laplace transform, they are still not equal. How come the Fourier transform has an extra impulse component but Laplace does not? When can I substitute \$s = j\omega\$ and expect the Fourier transform to equal the Laplace transform? 
Edit: the latter part of my question has answers here and here . Now provide the response and nothing else.
The Fourier and the Laplace transform are not the same. First of all, note that when we talk about the Laplace transform, we very often mean the unilateral Laplace transform, where the transformation integral starts at \$t=0\$ (and not at \$t=-\infty\$), i.e. with the Laplace transform we usually analyze causal signals and systems. With the Fourier transform this is not always the case.

In order to understand the differences between the two, it is important to look at the region of convergence (ROC) of the Laplace transform. For causal signals, the ROC is always a right-half plane, i.e. there are no poles (of a rational function in \$s\$) to the right of some value \$\sigma_0\$ (where \$\sigma\$ denotes the real part of the complex variable \$s\$). Now if \$\sigma_0<0\$, i.e. if the \$j\omega\$ axis is inside the ROC, then you simply obtain the Fourier transform by setting \$s=j\omega\$. If \$\sigma_0>0\$ then the Fourier transform does not exist (because the corresponding system is unstable). The third case (\$\sigma_0=0\$) is interesting because here the Fourier transform does exist but it cannot be obtained from the Laplace transform by setting \$s=j\omega\$. Your example is of this type. The Laplace transform of the step function has a pole at \$s=0\$, which lies on the \$j\omega\$ axis. In all such cases the Fourier transform has additional \$\delta\$ impulses at the pole locations on the \$j\omega\$ axis.

Note that it is not true that the Fourier transform cannot deal with transients. This is just a misunderstanding which probably comes from the fact that we often use the Fourier transform to analyze the steady-state behavior of systems by applying sinusoidal input signals that are defined for \$-\infty<t<\infty\$. Please also see this answer to a similar question.
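To make the three ROC cases concrete, consider the standard example \$x(t)=e^{-at}u(t)\$ (this example is my addition, not part of the original answer):

```latex
X(s) = \int_0^\infty e^{-at} e^{-st}\, dt = \frac{1}{s+a},
\qquad \operatorname{Re}\{s\} > -a
```

For \$a>0\$ the pole at \$s=-a\$ lies to the left of the \$j\omega\$ axis, so the axis is inside the ROC and \$X(j\omega)=\frac{1}{j\omega+a}\$ is also the Fourier transform. For \$a<0\$ the Fourier transform does not exist. Letting \$a\to 0\$ recovers the unit step, whose pole sits exactly on the axis, which is why its Fourier transform needs the extra \$\pi\delta(\omega)\$ term.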
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/105542', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/17245/']}
jdg_71934
stackexchange
llm_judgeable_groundtruth_similarity
5089802
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for something like this:

raise Exception rescue nil

But the shortest way I've found is this:

begin
  raise Exception
rescue Exception
end

Now provide the response and nothing else.
def ignore_exception
  begin
    yield
  rescue Exception
  end
end

Now write your code as:

ignore_exception { puts "Ignoring Exception"; raise Exception; puts "This is Ignored" }
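A small variation on this helper (my own sketch, not part of the original answer): rescuing StandardError instead of Exception leaves things like Interrupt and SystemExit alone, and returning a fallback value is often handy:

```ruby
# Run the block, returning `default` if any StandardError is raised.
# Rescuing StandardError (not Exception) avoids swallowing signals
# such as Interrupt and SystemExit.
def ignore_exception(default = nil)
  yield
rescue StandardError
  default
end

puts ignore_exception(0) { Integer("not a number") }  # prints 0
puts ignore_exception { 21 * 2 }                      # prints 42
```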
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5089802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316700/']}
jdg_71935
stackexchange
llm_judgeable_groundtruth_similarity
10045424
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So, I have a script called "engine", and after much headbashing and (futile) debugging, I've found out that GC simply isn't reloading it! This is how I include it in the webpage (inside the <head> element):

<script type="text/javascript" src="engine.js"></script>

When I put 10 console.log("asdf"); 's at the start of the script, it's like they aren't there. When I went to the "resources" tab in the GC console, I saw that no changes are being applied whatsoever to that script! Help? Would putting a + "?" + new Date() at the end help? Now provide the response and nothing else.
The universal solution that works in Chrome, Firefox and IE is cleaning the cache via Ctrl + Shift + Del (on Mac ⌘ + Shift + ⌫).

Chrome solution #1
Open Developer Tools (F12 or ⌘ + ⌥ + i, or right-click → Inspect). Select the Network tab and tick the Disable cache checkbox. Reload the page. ❗️ Note: The cache will be disabled only when the devtools window is open.

Chrome solution #2
This only makes sense if #1 is not used. Open Developer Tools. Click the Settings cogwheel icon in the bottom right corner. In the dialog that appears, select under the Network subsection the Disable cache checkbox: from now on the cache will be skipped when the devtools window is open. When the devtools window is closed caching will work as usual.

Chrome solution #3: empty cache + hard reload
Open Developer Tools (otherwise the menu in the next step won't show). Click and hold down the Refresh button, and then select from the dropdown Empty Cache and Hard Reload.

Modifying javascript code
A browser-agnostic solution which could be used for debugging is to append in your server-side code a randomly-generated version string as a query parameter, i.e. call your script as:

<script type="text/javascript" src="myscript.js?ver=12345"></script>

This trick will force the browser to reload the script when the value of the ver parameter changes. If you make ajax requests then you can append "?ver=" + new Date().getTime() to your URL.

NOTE: Don't forget to remove the parameter when you are finished debugging because in production you most likely do want the scripts to be cached. It is a common practice though to generate a new timestamp on each new build — this can be used in production, and will ensure that after a new deployment the clients will always get the updated scripts. Unlike all the above solutions this one will work even when you have some sort of caching (e.g. redis, memcached, varnish) or CDN (e.g. akamai, cloudflare, cloudfront, etc) between the client and the server.
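The "?ver=" idea can be wrapped in a tiny helper; this is my own sketch of the technique, taking care to use "&" when the URL already carries a query string:

```javascript
// Append a cache-busting version parameter to a URL. Defaults to the
// current timestamp, as in the "?ver=" + new Date().getTime() trick.
function cacheBust(url, ver = Date.now()) {
  const sep = url.includes("?") ? "&" : "?";
  return `${url}${sep}ver=${ver}`;
}

console.log(cacheBust("engine.js", 12345));     // engine.js?ver=12345
console.log(cacheBust("api/data?id=7", 12345)); // api/data?id=7&ver=12345
```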
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10045424', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/924313/']}
jdg_71936
stackexchange
llm_judgeable_groundtruth_similarity
67893323
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to use something as below to create/manage common tags for all resources in projects. For the common_var_tags, I'd like it to be applied only when there are any other changes, so the resources are tagged with last-modified by who and when. Is there any way to do it? Thanks in advance!

locals {
  common_var_tags = {
    ChangedBy = data.aws_caller_identity.current.arn
    ChangedAt = timestamp()
  }
  common_fix_tags = {
    Project   = "Project"
    Owner     = "Tiger Peng"
    Team      = "DevOps"
    CreatedAt = "2021-06-08"
  }
}

For example, right now, I have to comment out the "local.common_var_tags" as each time when I run terraform plan or terraform apply without changing any attribute, the resource nginx is marked/changed due to ChangedAt = timestamp(). I'd like to find the way that only when other attributes changed, this tag changing will be applied.

resource "aws_instance" "nginx" {
  count                  = 1
  ami                    = var.nginx-ami
  instance_type          = var.nginx-instance-type
  subnet_id              = var.frontend-subnets[count.index]
  key_name               = aws_key_pair.key-pair.key_name
  vpc_security_group_ids = [aws_security_group.nginx-sg.id]

  root_block_device {
    delete_on_termination = false
    encrypted             = true
    volume_size           = var.nginx-root-volume-size
    volume_type           = var.default-ebs-type

    tags = merge(
      local.common_fix_tags,
      #local.common_var_tags,
      map(
        "Name", "${var.project}-${var.env}-nginx-${var.zones[count.index]}"
      )
    )
  }

  tags = merge(
    local.common_fix_tags,
    #local.common_var_tags,
    map(
      "Name", "${var.project}-${var.env}-nginx-${var.zones[count.index]}",
      "Role", "Nginx"
    )
  )
}

Now provide the response and nothing else.
While the answer from @Burt looks correct (didn't test it though), there is a much more efficient way to solve the same task:

function toHex16 (bytes16 data) internal pure returns (bytes32 result) {
    result = bytes32 (data) & 0xFFFFFFFFFFFFFFFF000000000000000000000000000000000000000000000000 |
          (bytes32 (data) & 0x0000000000000000FFFFFFFFFFFFFFFF00000000000000000000000000000000) >> 64;
    result = result & 0xFFFFFFFF000000000000000000000000FFFFFFFF000000000000000000000000 |
          (result & 0x00000000FFFFFFFF000000000000000000000000FFFFFFFF0000000000000000) >> 32;
    result = result & 0xFFFF000000000000FFFF000000000000FFFF000000000000FFFF000000000000 |
          (result & 0x0000FFFF000000000000FFFF000000000000FFFF000000000000FFFF00000000) >> 16;
    result = result & 0xFF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000 |
          (result & 0x00FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF0000) >> 8;
    result = (result & 0xF000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000) >> 4 |
          (result & 0x0F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F00) >> 8;
    result = bytes32 (0x3030303030303030303030303030303030303030303030303030303030303030 +
          uint256 (result) +
          (uint256 (result) + 0x0606060606060606060606060606060606060606060606060606060606060606 >> 4 &
           0x0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F) * 7);
}

function toHex (bytes32 data) public pure returns (string memory) {
    return string (abi.encodePacked ("0x", toHex16 (bytes16 (data)), toHex16 (bytes16 (data << 128))));
}

This code produces upper case output. For lower case output, just change 7 to 39 in the code.

Explanation

The idea is to process 16 bytes at once using binary operations. The toHex16 function converts a sequence of 16 bytes represented as a bytes16 value into a sequence of 32 hexadecimal digits represented as a bytes32 value. The toHex function splits a bytes32 value into two bytes16 chunks, converts each chunk to hexadecimal representation via the toHex16 function, and finally concatenates the 0x prefix with the converted chunks using the abi.encodePacked function.

The most sophisticated part is how the toHex16 function works. Let's explain it sentence by sentence.

The first sentence:

result = bytes32 (data) & 0xFFFFFFFFFFFFFFFF000000000000000000000000000000000000000000000000 |
      (bytes32 (data) & 0x0000000000000000FFFFFFFFFFFFFFFF00000000000000000000000000000000) >> 64;

Here we shift the last 64 bits of the input to the right by 64 bits, basically doing:

0123456789abcdeffedcba9876543210
\______________/\______________/
        |               |
        |               +---------------+
 ______V_______                  ______V_______
/              \                /              \
0123456789abcdef0000000000000000fedcba9876543210

The second sentence:

result = result & 0xFFFFFFFF000000000000000000000000FFFFFFFF000000000000000000000000 |
      (result & 0x00000000FFFFFFFF000000000000000000000000FFFFFFFF0000000000000000) >> 32;

Here we shift the last 32 bits of both 64-bit chunks to the right by 32 bits:

0123456789abcdef0000000000000000fedcba9876543210
\______/\______/                \______/\______/
    |       |                       |       |
    |       +-------+               |       +-------+
 __V___        __V___            __V___        __V___
/      \      /      \          /      \      /      \
012345670000000089abcdef00000000fedcba980000000076543210

The next sentence:

result = result & 0xFFFF000000000000FFFF000000000000FFFF000000000000FFFF000000000000 |
      (result & 0x0000FFFF000000000000FFFF000000000000FFFF000000000000FFFF00000000) >> 16;

does:

012345670000000089abcdef00000000fedcba980000000076543210
\__/\__/        \__/\__/        \__/\__/        \__/\__/
  |   |           |   |           |   |           |   |
  |   +---+       |   +---+       |   +---+       |   +---+
 _V_    _V_      _V_    _V_      _V_    _V_      _V_    _V_
/   \  /   \    /   \  /   \    /   \  /   \    /   \  /   \
012300004567000089ab0000cdef0000fedc0000ba980000765400003210

And the next one:

result = result & 0xFF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000 |
      (result & 0x00FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF0000) >> 8;

does:

012300004567000089ab0000cdef0000fedc0000ba980000765400003210
\/\/    \/\/    \/\/    \/\/    \/\/    \/\/    \/\/    \/\/
| |     | |     | |     | |     | |     | |     | |     | |
| +-+   | +-+   | +-+   | +-+   | +-+   | +-+   | +-+   | +-+
V   V   V   V   V   V   V   V   V   V   V   V   V   V   V   V
/\  /\  /\  /\  /\  /\  /\  /\  /\  /\  /\  /\  /\  /\  /\  /\
01002300450067008900ab00cd00ef00fe00dc00ba00980076005400320010

The final sentence in this series is a bit different:

result = (result & 0xF000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000) >> 4 |
      (result & 0x0F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F00) >> 8;

It shifts odd nibbles to the right by 4 bits, and even nibbles by 8 bits:

01002300450067008900ab00cd00ef00fe00dc00ba00980076005400320010
|\  |\  |\  |\  |\  |\  |\  |\  |\  |\  |\  |\  |\  |\  |\  |\
| \ | \ | \ | \ | \ | \ | \ | \ | \ | \ | \ | \ | \ | \ | \ | \
V  VV  VV  VV  VV  VV  VV  VV  VV  VV  VV  VV  VV  VV  VV  VV  V
000102030405060708090a0b0c0d0e0f0f0e0d0c0b0a09080706050403020100

So all the nibbles of the initial data are distributed one per byte. Now with every byte x we need to do the following transformation:

x` = x < 10 ? '0' + x : 'A' + (x - 10)

Let's rewrite this formula a bit:

x` = ('0' + x) + (x < 10 ? 0 : 'A' - '0' - 10)
x` = ('0' + x) + (x < 10 ? 0 : 1) * ('A' - '0' - 10)

Note, that (x < 10 ? 0 : 1) could be calculated as ((x + 6) >> 4), thus we have:

x` = ('0' + x) + ((x + 6) >> 4) * ('A' - '0' - 10)
x` = (0x30 + x) + ((x + 0x06) >> 4) * 7

The final statement:

result = bytes32 (0x3030303030303030303030303030303030303030303030303030303030303030 +
      uint256 (result) +
      (uint256 (result) + 0x0606060606060606060606060606060606060606060606060606060606060606 >> 4 &
       0x0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F) * 7);

Basically performs the above calculation for every byte. The 0x0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F mask after the right shift is needed to zero out the bits "dropped" by the right shift in the original formula.

BTW, it would be better to ask questions like this one at https://ethereum.stackexchange.com/
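The per-byte formula (0x30 + x) + ((x + 0x06) >> 4) * 7 can be sanity-checked outside of Solidity. Here is a quick Python sketch (my own addition) confirming that it maps every nibble 0..15 to the ASCII codes '0'..'9', 'A'..'F', and that swapping 7 for 39 gives the lower-case variant:

```python
# Verify the nibble-to-ASCII-hex formula from the final Solidity step.
# For x in 0..9 the ((x + 6) >> 4) term is 0, giving '0' + x;
# for x in 10..15 it is 1, adding the offset to jump past ':' up to 'A' (or 'a').
def nibble_to_ascii(x, offset=7):
    return (0x30 + x) + (((x + 6) >> 4) * offset)

upper = "".join(chr(nibble_to_ascii(x)) for x in range(16))
lower = "".join(chr(nibble_to_ascii(x, 39)) for x in range(16))
print(upper)  # 0123456789ABCDEF
print(lower)  # 0123456789abcdef
```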
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/67893323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1614414/']}
jdg_71937
stackexchange
llm_judgeable_groundtruth_similarity
1397803
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we have a box (parallelepiped) A completely contained within another box B. Is the surface area of A necessarily less than the surface area of B? Edit: note that the sides of A are not necessarily parallel to the sides of B. I happen to know that the answer is yes but the only solution I know of is very hand-wavy. Now provide the response and nothing else.
One form of Crofton's formula states that the area of a surface in $\mathbb R^3$ is proportional to the integral, over all planes, of the length of the intersection between the plane and the surface. (Reference: *Stereology for Statisticians*, equation 4.13.) Now, for any plane, its intersection with the inner box is a convex curve which lies entirely inside the intersection with the outer box. By another application of Crofton's formula, this time in $\mathbb R^2$, the inner curve is shorter. So the inner box's integral is smaller, and so the inner box's area is smaller! As achille hui noted in the comments, we don't even need the fact that the bodies are boxes. We just need them to be convex.
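For reference, the two integral-geometry facts being combined can be written as follows (the constants $c$ and $c'$ depend on how the measures on planes and lines are normalized; the reference cited above fixes one particular convention):

```latex
% Crofton-type formula in R^3: surface area from plane sections,
% integrating over the space of planes \mathcal{P}.
\operatorname{Area}(S) = c \int_{\mathcal{P}} \operatorname{length}(S \cap P)\, d\mu(P)

% Planar analogue: the length of a curve \gamma from line sections,
% integrating over the space of lines \mathcal{L}.
\operatorname{length}(\gamma) = c' \int_{\mathcal{L}} \#(\gamma \cap \ell)\, d\mu(\ell)
```

The planar comparison step works because any line that meets the inner convex curve must also cross the outer one, so the integrand for the inner curve is pointwise at most the integrand for the outer curve.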
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1397803', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/189736/']}
jdg_71938
stackexchange
llm_judgeable_groundtruth_similarity
42726350
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been having trouble making a request to my 64-bit ASP.NET Core API running on an Azure App Service. The error I get back is:

```
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly '***.dll'. An attempt was made to load a program with an incorrect format.
```

I understand that this means there is a mismatch between the platform of the app (64-bit) and that of the environment it runs on. I just can't figure out how to change the App Service so it runs on a 64-bit platform. In the Application Settings in the Azure portal I have set Platform to 64-bit. However, when I check in Kudu, the runtime environment indicates that it's operating under win8-x86.

project.json:

```
"buildOptions": {
  "emitEntryPoint": true,
  "preserveCompilationContext": true,
  "platform": "x64"
},
"runtimes": {
  "win10-x64": {}
}
```

Some questions: How do I change the App Service to ensure it's running on a 64-bit platform? Does it matter that the RID is win8... when my runtimes configuration in project.json specifies win10...? Presumably x86 vs. x64 matters, but does it need to be the same version of Windows too, i.e. win8 vs. win10? Now provide the response and nothing else.
This is now available in Azure App Service. Steps to deploy:

1. Set the platform to 64-bit in the portal.
2. Ensure the project targets 64-bit by including the following in the csproj:

```
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```

3. When publishing the application, ensure the target runtime is set to win-x64. (If running `dotnet publish`, just add `-r win-x64`.)

The documentation is here, but (at present) it is acknowledged to be a little sparse. This GitHub issue response suggests we should be able to do a framework-dependent deployment and have it "just work". YMMV, but that wasn't my own experience, hence the runtime flag suggestion above.
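As an alternative to passing `-r win-x64` on every publish, the RID can be pinned in the project file itself; `RuntimeIdentifier` is the standard MSBuild property for this (the `TargetFramework` value below is only an example, so keep whatever your project already targets):

```xml
<PropertyGroup>
  <!-- Example framework moniker; keep your project's actual value. -->
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <PlatformTarget>x64</PlatformTarget>
  <!-- Pins the runtime so a plain `dotnet publish` targets win-x64 by default. -->
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
```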
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42726350', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/161735/']}
jdg_71939
stackexchange
llm_judgeable_groundtruth_similarity
745506
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have the domain example.com with the subdomains www.example.com and a.example.com. Let's suppose that I can't buy a wildcard certificate, but I bought the certificates for example.com and www.example.com. Now I bought the certificate for a.example.com. The site and its subdomains are in the same folder, using one Apache vhost file, and the subdomains are defined as aliases. Can I merge the certificates into one file? What if I used the same key and CSR files to generate the certificates? Is the ONLY solution to declare the sites in two different vhosts? Now provide the response and nothing else.
Can I merge the certificates into one file?

No, only one leaf certificate is supported. Although from version 2.4.8 the SSLCertificateFile directive allows multiple certificates in that file, that is only to include intermediate CA certificates, sorted from leaf to root, as a replacement for the SSLCertificateChainFile directive. The SSLCertificateFile directive can be used multiple times (referencing different filenames) when an RSA, a DSA, and an ECC based server certificate are used in parallel, but you can't use multiple SSLCertificateFile directives to load one certificate for the ServerName and an extra one for the ServerAlias.

What if I used the same key and CSR files to generate the certificates?

Still no.

Is the ONLY solution to declare the sites in two different vhosts?

If you have two certificates, you need two SSL VirtualHosts.
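A sketch of the two-vhost layout (directive names are standard Apache httpd/mod_ssl ones; the domains and file paths are placeholders to adapt). Both hosts can point at the same DocumentRoot, which preserves the "one folder" setup from the question, and selecting the right certificate per hostname relies on SNI:

```apache
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/example
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName a.example.com
    DocumentRoot /var/www/example
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/a.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/a.example.com.key
</VirtualHost>
```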
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/745506', 'https://serverfault.com', 'https://serverfault.com/users/295341/']}
jdg_71940