source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
9922539
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I use emacs string-insert-rectangle operation to add a vector of numbers to a series of lines? For example, I've got this shortened version of a bunch of text entries in my emacs buffer:

element01 8 111111111011010000100000001100101000001111101111011111111000
element01 8 111101101010101001111111111000111110111111011110100101010111
element01 8 111111011001001110111010111111100111010110101011111010110011
element01 8 111111111111111111111111010111101101011101011111000001100000
element01 8 111100111111011111100110110000001011110101000011111011111101
element01 8 111001001011000000011100000101001001100101101011101101011011
element01 8 111011111101101111111111111101101010111110111011111101011011
element01 8 101101111101101111000110111101111010111011101111001101001011
element02 6 110101110101100101100101000111010101110111001001101111111011
element02 6 111001011001001011101110111100111101101011111111111011111101
element02 6 101111100111010111111010010101111101111111101101111011111011
element02 6 111101111111111100111110110111101011111001001101101100111111
element02 6 111111010111101111010011110111001100001000101010111111111101
element02 6 111110111001101111111100111011110000011011100100100111111010
element03 13 110011011111111111101011100111111110011111110100111010011111
element03 13 100011101000111110101101000000000001110110110011110110111101
element03 13 101100011100011111110111110110101101111111110110110100101111
element03 13 111111011110101110101011010111110000010111111011100100011111
element03 13 011100110110110111100101110101111110111100101110010111110011
element03 13 100111111111100100111110110110111111111101011101110110110111
element03 13 101111111111110101110110111011111110111101110110111111111111
element03 13 111110010111110110101111110110111111111110101111111101110011
element03 13 100111111101110110110110111110111010111110110011111111110111
element03 13 110100110111110110110100111010110100110110110110110101111111
element03 13 011111011010111101101001011100111110010111111011111101011010
element03 13 011101111110010000111000000101101010111110100010110110110110
element03 13 110100110110110010101010100011100011000000110011011100110100
element03 13 010101101010110010111100101001001010111001100111110000011011[...]

And I want to add a column between the second and third column that will look like this:

element01 8 id1 111111111011010000100000001100101000001111101111011111111000
element01 8 id2 111101101010101001111111111000111110111111011110100101010111
element01 8 id3 111111011001001110111010111111100111010110101011111010110011
element01 8 id4 111111111111111111111111010111101101011101011111000001100000
element01 8 id5 111100111111011111100110110000001011110101000011111011111101
element01 8 id6 111001001011000000011100000101001001100101101011101101011011
element01 8 id7 111011111101101111111111111101101010111110111011111101011011
element01 8 id8 101101111101101111000110111101111010111011101111001101001011
element02 6 id9 110101110101100101100101000111010101110111001001101111111011
element02 6 id10 111001011001001011101110111100111101101011111111111011111101
element02 6 id11 101111100111010111111010010101111101111111101101111011111011
element02 6 id12 111101111111111100111110110111101011111001001101101100111111
element02 6 id13 111111010111101111010011110111001100001000101010111111111101
element02 6 id14 111110111001101111111100111011110000011011100100100111111010
element03 13 id15 110011011111111111101011100111111110011111110100111010011111
element03 13 id16 100011101000111110101101000000000001110110110011110110111101
element03 13 id17 101100011100011111110111110110101101111111110110110100101111
element03 13 id18 111111011110101110101011010111110000010111111011100100011111
element03 13 id19 011100110110110111100101110101111110111100101110010111110011
element03 13 id20 100111111111100100111110110110111111111101011101110110110111
element03 13 id21 101111111111110101110110111011111110111101110110111111111111
element03 13 id22 111110010111110110101111110110111111111110101111111101110011
element03 13 id23 100111111101110110110110111110111010111110110011111111110111
element03 13 id24 110100110111110110110100111010110100110110110110110101111111
element03 13 id25 011111011010111101101001011100111110010111111011111101011010
element03 13 id26 011101111110010000111000000101101010111110100010110110110110
element03 13 id27 110100110110110010101010100011100011000000110011011100110100
element03 13 id28 010101101010110010111100101001001010111001100111110000011011[...]

How can I use something like string-insert-rectangle in emacs to add this new third column with increasing number count? PS: I know I could do this with a bash/perl/python/etc script, in this question I am asking if this can be easily done with emacs. Now provide the response and nothing else.
I think the simplest solution is to mark the first character of the original third column in the first line, move point to the same character of the last line, and then type:

C-u C-x r N RET id%d RET

rectangle-number-lines is an interactive compiled Lisp function in `rect.el'. It is bound to C-x r N.

(rectangle-number-lines START END START-AT &optional FORMAT)

Insert numbers in front of the region-rectangle. START-AT, if non-nil, should be a number from which to begin counting. FORMAT, if non-nil, should be a format string to pass to `format' along with the line count. When called interactively with a prefix argument, prompt for START-AT and FORMAT.

The regexp-replace and macro techniques are both superb general-purpose tools to know, but rectangle-number-lines is pretty much custom-built for this very question.

Edit: I hadn't noticed at the time, but it turns out that this is a new feature in Emacs 24. Earlier versions of Emacs will translate that sequence to C-x r n (lower-case n), which runs an entirely different function.
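For comparison only (the asker explicitly mentions scripts as the fallback), what rectangle-number-lines does here can be sketched as a few lines of Python; this is just the scripted equivalent of the transformation, not part of the emacs answer:

```python
def number_column(lines, fmt="id%d"):
    """Insert an increasing id between the second and third column of each line."""
    out = []
    for i, line in enumerate(lines, start=1):
        name, count, bits = line.split(maxsplit=2)
        out.append(f"{name} {count} {fmt % i} {bits}")
    return out

sample = [
    "element01 8 111111111011010000100000001100101000001111101111011111111000",
    "element01 8 111101101010101001111111111000111110111111011110100101010111",
]
for row in number_column(sample):
    print(row)
```

The `fmt` parameter mirrors the `id%d` format string passed to the emacs command.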
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9922539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/719016/']}
jdg_380041
stackexchange
llm_judgeable_groundtruth_similarity
47167790
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My question clearly states it in the heading, anyway a more elaborate question is how can I randomly show a variable (eg. name) from a mysql database using php with a dbo connection in xampp environment. Here is what I've come up with from scratch:

<?php
$db = new PDO('mysql:dbname=lab1;host=localhost;charset=utf8', 'root', '');
$stmt = $db->prepare('SELECT * FROM countries');
?>
<head>
<meta charset="UTF-8"/>
<p>RANDOM!</p>
</head>
<body>
<?php
$stmt->execute();
foreach ($stmt as $row) {
    $something = array($row['name'] . $row['name']);
}
$rand_keys = array_rand($something, 1);
echo $something[$rand_keys[0]] . "\n";
?>
</body>

Seems to not work when I try opening it (from localhost). Now provide the response and nothing else.
Yes, you can easily do it ALL from the command line (NO IDE involved, I promise). This uses the old faithful Apache Ant. It does not use Gradle; that takes more work.

To Summarize

What you type is (just 2 lines to produce an apk):

android create project --target "android-16" --path basj --activity TestActivity --package com.android.basj

(This produces an Apache Ant build file called build.xml, which is like the build.gradle file. Now write some code, but TestActivity.java is there already and will compile.)

ant debug

Setup

(Note: The "android.bat" command is deprecated since Build Tools v26, so use an old one (see link below); deprecated in this case means TOTALLY removed! {naughty Google}).

Install the Java JDK if not installed already (you can use jdk-8u151-windows-x64.exe for example), and make sure the JAVA_HOME environment variable is defined, e.g.:

JAVA_HOME=C:\Program Files\Java\jdk1.8.0_112
JAVA_PATH=C:\Program Files\Java\jre1.8.0_112\bin

JDK is the Java Development Kit. JRE is the Java Run-time Environment.

Install Android SDK Tools (e.g. installer_r24.4.1-windows.exe, see this answer) if not already done, and then in the SDK Manager GUI, deselect everything and choose "Android SDK Build-Tools" (e.g. Android SDK Build-Tools 19.1) + one (or many) platforms (e.g. Android 4.1.2 (API 16) JELLY_BEAN). To prove you don't need Android Studio, we're not going to download it! (only the SDK).
Download Apache Ant (for example apache-ant-1.9.9-bin.zip).

Detail

To create a project from the command line using the Android SDK, decide on a place to put your project:

cd c:\android
mkdir antTest
cd antTest

Run the command:

C:\Android\sdk1\tools\android create project --target "android-16" --path basj --activity TestActivity --package com.android.basj
          ^
          (here's where I keep an old version of tools, version 25 in my case)

Here is the directory structure created (and all the files you need to build):

C:.
+---basj
    +---bin
    +---libs
    +---res
    ¦   +---drawable-hdpi
    ¦   +---drawable-ldpi
    ¦   +---drawable-mdpi
    ¦   +---drawable-xhdpi
    ¦   +---layout
    ¦   +---values
    +---src
        +---com
            +---android
                +---basj

Detailed output of create project:

Created project directory: C:\Android\antTest\basj
Created directory C:\Android\antTest\basj\src\com\android\basj
Added file C:\Android\antTest\basj\src\com\android\basj\TestActivity.java
Created directory C:\Android\antTest\basj\res
Created directory C:\Android\antTest\basj\bin
Created directory C:\Android\antTest\basj\libs
Created directory C:\Android\antTest\basj\res\values
Added file C:\Android\antTest\basj\res\values\strings.xml
Created directory C:\Android\antTest\basj\res\layout
Added file C:\Android\antTest\basj\res\layout\main.xml
Created directory C:\Android\antTest\basj\res\drawable-xhdpi
Created directory C:\Android\antTest\basj\res\drawable-hdpi
Created directory C:\Android\antTest\basj\res\drawable-mdpi
Created directory C:\Android\antTest\basj\res\drawable-ldpi
Added file C:\Android\antTest\basj\AndroidManifest.xml
Added file C:\Android\antTest\basj\build.xml
Added file C:\Android\antTest\basj\proguard-project.txt

Download Apache Ant from http://ant.apache.org/. See this tutorial for setup: http://www.vogella.com/tutorials/ApacheAnt/article.html
Also see this tutorial: http://blog.vogella.com/2011/03/16/creating-android-applications-via-the-command-line-ant/

Write your code (Hello world).
Run this command and you get an Android apk out the other side (called TestActivity-debug.apk):

ant debug

Hey presto, you've got an android apk! With new structure added:

C:.
├───bin
│   ├───classes
│   │   └───com
│   │       └───android
│   │           └───basj
│   ├───dexedLibs
│   └───res
│       ├───drawable-hdpi
│       ├───drawable-ldpi
│       ├───drawable-mdpi
│       └───drawable-xhdpi
├───gen
│   └───com
│       └───android
│           └───basj

For a final build:

ant release

If you're interested in a more extensive example of Ant build.xml, or DEX files, and the deeper workings of Android, look here.

How to sign an already compiled apk

See how to sign an already compiled apk and also this. From an answer by @for3st, here's a relevant piece of that post:

Manual Process:

Step 1: Generate Keystore (only once)

You need to generate a keystore once and use it to sign your unsigned apk. Use the keytool provided by the JDK found in %JAVA_HOME%/bin/:

keytool -genkey -v -keystore my.keystore -keyalg RSA -keysize 2048 -validity 10000 -alias app

Step 2 or 4: Zipalign

zipalign, a tool provided by the Android SDK found in e.g. %ANDROID_HOME%/sdk/build-tools/24.0.2/, is a mandatory optimization step if you want to upload the apk to the Play Store:

zipalign -p 4 my.apk my-aligned.apk

Note: when using the old jarsigner you need to zipalign AFTER signing. When using the new apksigner method you do it BEFORE signing (confusing, I know). Invoking zipalign before apksigner works fine because apksigner preserves APK alignment and compression (unlike jarsigner).
You can verify the alignment with:

zipalign -c 4 my-aligned.apk

Step 3: Sign & Verify

Using build-tools 24.0.2 and older

Use jarsigner which, like the keytool, comes with the JDK distribution found in %JAVA_HOME%/bin/, and use it like so:

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my.keystore my-app.apk my_alias_name

and it can be verified with:

jarsigner -verify -verbose my_application.apk

Using build-tools 24.0.3 and newer

Android 7.0 introduces APK Signature Scheme v2, a new app-signing scheme that offers faster app install times and more protection against unauthorized alterations to APK files (see here and here for more details). Therefore, Google implemented their own apk signer called apksigner (duh!). The script file can be found in %ANDROID_HOME%/sdk/build-tools/24.0.3/ (the .jar is in the /lib subfolder). Use it like this:

apksigner sign --ks my.keystore my-app.apk --ks-key-alias alias_name

and it can be verified with:

apksigner verify my-app.apk
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47167790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8902759/']}
jdg_380042
stackexchange
llm_judgeable_groundtruth_similarity
25316399
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I had to make a php form for a class assignment. We then had to alter the form using another language to make it more functional. I decided to add JavaScript to it so the user could add multiple parts. After I added the JavaScript I now get this error:

Symfony \ Component \ HttpKernel \ Exception \ MethodNotAllowedHttpException

Here is the form:

{{ Form::open() }}
@for ($i=0; $i < 10; $i++)
<input type="number" name="part_number" placeholder="Part Number" />
<input type="number" name="quantity" placeholder="Quantity" />
<input type="number" name="annual_usage" placeholder="Annual Usage" />
<input type="submit" value="Add Part" />
@endfor
<input type="textarea" name="comment" placeholder="Comment" />
<input type="text" name="shippingaddress" placeholder="Shipping Address" />
<input type="text" name="project_id" placeholder="Project Id" />
<input type="text" name="user_id" placeholder="User Id" />
<input type="submit" value="Send Requests" />
{{ Form::close() }}

Now provide the response and nothing else.
Your server's response allows the request to include three specific non-simple headers:

Access-Control-Allow-Headers: origin, x-requested-with, content-type

but your request has a header not allowed by the server's response:

Access-Control-Request-Headers: access-control-allow-origin, content-type

All non-simple headers sent in a CORS request must be explicitly allowed by the Access-Control-Allow-Headers response header. The unnecessary Access-Control-Allow-Origin header sent in your request is not allowed by the server's CORS response. This is exactly what the "...not allowed by Access-Control-Allow-Headers" error message was trying to tell you. There is no reason for the request to have this header: it does nothing, because Access-Control-Allow-Origin is a response header, not a request header.

Solution: Remove the setRequestHeader call that adds an Access-Control-Allow-Origin header to your request.
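The preflight rule the answer describes boils down to a subset check: every header the client asks to send must appear in the server's allow list. A plain-Python sketch of that check (an illustration of the logic, not any browser's or library's actual API):

```python
def preflight_allows(request_headers: str, allow_headers: str) -> bool:
    """Model of the CORS preflight check: the headers listed in
    Access-Control-Request-Headers must all appear in the server's
    Access-Control-Allow-Headers response header (case-insensitive)."""
    allowed = {h.strip().lower() for h in allow_headers.split(",")}
    requested = {h.strip().lower() for h in request_headers.split(",")}
    return requested <= allowed

# The failing request from the question:
print(preflight_allows(
    "access-control-allow-origin, content-type",  # what the client asked to send
    "origin, x-requested-with, content-type",     # what the server allows
))  # False: 'access-control-allow-origin' is not in the allow list

# After removing the bogus setRequestHeader call:
print(preflight_allows("content-type",
                       "origin, x-requested-with, content-type"))  # True
```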
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25316399', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3900321/']}
jdg_380043
stackexchange
llm_judgeable_groundtruth_similarity
3460484
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we have a finite field extension $K = \mathbb{Q}(\alpha)$ with basis $1,\alpha,\dots,\alpha^{n-1}$ where all $\alpha^i$ are integral elements. Do they form an integral basis of the ring of integers $\mathcal{O}_K$ of $K$? Now provide the response and nothing else.
No. Take for instance $\alpha = \sqrt 5$. Even worse, there may not exist a suitable $\alpha$. This is the case for the cubic field generated by a root of the polynomial $X^{3}-X^{2}-2X-8$, according to Wikipedia. For a discussion and a proof, see Rings of integers without a power basis by Keith Conrad.
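A quick sanity check of the $\sqrt 5$ example, using exact rational arithmetic: the golden ratio $\varphi = (1+\sqrt 5)/2$ satisfies the monic integer polynomial $x^2 - x - 1$, so it is an algebraic integer in $\mathbb{Q}(\sqrt 5)$, yet its coordinates over the basis $\{1, \sqrt 5\}$ are not integers. (Elements $a + b\sqrt 5$ are represented below as pairs of Fractions.)

```python
from fractions import Fraction as F

def mul(x, y):
    """(a + b√5)(c + d√5) = (ac + 5bd) + (ad + bc)√5"""
    a, b = x
    c, d = y
    return (a * c + 5 * b * d, a * d + b * c)

def sub(x, y):
    return (x[0] - y[0], x[1] - y[1])

phi = (F(1, 2), F(1, 2))  # (1 + √5)/2
one = (F(1), F(0))

# φ² − φ − 1 = 0: monic with integer coefficients, so φ is an algebraic integer...
print(sub(sub(mul(phi, phi), phi), one))  # (Fraction(0, 1), Fraction(0, 1))

# ...but its coordinates over {1, √5} are half-integers, so 1, √5 is not
# an integral basis of the ring of integers of ℚ(√5):
print(phi)  # (Fraction(1, 2), Fraction(1, 2))
```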
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3460484', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/622755/']}
jdg_380044
stackexchange
llm_judgeable_groundtruth_similarity
76306
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm confused about the risks of 2-factor authentication. I'm a developer, who has to do some ops, and considering turning it on for my DigitalOcean and GitHub accounts. Both offer TOTP(?) and indicate Google Authenticator. Great. Then I read some FUD about Google Authenticator and looked for an alternative. There's Authy and a bunch of others. Ok, now I'm wondering what level of Trust I'm giving my 2-factor provider. I've never heard of these alternative companies. 1) I think any authenticator can't access my accounts because they don't have my username/password. Is that true? If they are compromised or a bad actor, what is at risk? 2) What are some simple industry best practice TOTP apps? Is Google Authenticator in the top 3? Now provide the response and nothing else.
A TOTP (Time-based One-Time Password) authenticator app will not have your username and password. What it will have is a secret token that is used to generate a specific pseudo-random sequence of numbers that changes based on time (typically every 30 seconds). The app need not even have any network connectivity itself - that is just for backup purposes in case you lose your phone. Its job is simply to compute, for the current time, what the pseudo-random number should be. The site you are logging into will have the same token stored and associated with your account, and will compute the same pseudo-random value at any given time. Thus, when you log in, it asks for the current value being displayed by the app.

Who are you trusting? In general, the answer is 'no one', which is good:

Edit: Of course, you are always trusting that the app you have installed on your phone does what it claims to do, doesn't have nasty security vulnerabilities, etc... You can't ever really trust "no one" when it comes to computing unless you start from scratch (as in sand) and make your own computer hardware yourself ;-)

Google Authenticator only stores the tokens locally on the phone. If your phone got hacked, somebody would have your TOTP tokens, but other than that, you're fine.

Authy apparently encrypts the tokens locally on your phone using a password you enter, so that they are just storing an encrypted blob for you (see their blog posting on this for details). That means that even if Authy's servers got hacked, nobody would be able to determine your TOTP tokens unless they guess the password used to encrypt them.

Even if somebody got your TOTP tokens, that only lets them defeat the second factor: it doesn't give them any information about your username/password.
**Best Practice Apps**

As to best practices, here are some thoughts: This posting favors Authy and seems to give a clear explanation of the problems with Google Authenticator, as well as some info about features that you would want. In screening a potential candidate, it seems that the list of desirable features would include:

A way to de-authorize a particular device if it gets lost.

A way to back up your authentication tokens, in case your phone gets wiped. This backup method should encrypt the tokens locally on the phone with a key/password that the entity doing the backup never gets: otherwise you are trusting them not to give your TOTP tokens to a bad guy. Again, this is about TOTP tokens and is completely separate from your username and password.

The ability to de-authorize one device using another, in case the device is lost/stolen. This would mean that each device has its own TOTP tokens (not the same token shared across multiple devices).
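To make the "shared secret plus clock" point concrete, here is a minimal RFC 6238 TOTP computation using only the Python standard library. This is an illustration of the algorithm, not the code of any particular authenticator app; note that it needs nothing but the secret and the current time:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed by the number of 30-second steps elapsed."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, T = 59 seconds):
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

In a real app the secret comes from the QR code you scan at enrollment, and `unix_time` is simply `int(time.time())`.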
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/76306', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/63751/']}
jdg_380045
stackexchange
llm_judgeable_groundtruth_similarity
6246907
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This happened when working with git on Ruby on Rails. Several commits and branches were done and created. git version 1.7.3.4. I accidentally deleted the .git folder using rm -rf .git, but I have all the files and updates done on the online GitHub repository. I want the .git folder restored. Questions: Am I totally screwed up? OR, if there's a way to restore .git from my online GitHub repository, how can that be done? Now provide the response and nothing else.
The only way you can restore a deleted .git folder is by:

Cloning it again from somewhere

Checking your recycling bin or backup (if no remote repository exists)

Since your files are from the online github repository, then it's simple. Just clone it again from the directory where you deleted the .git folder:

git init
git remote add origin <repo_address>
git pull origin master

where repo_address can be git@github.com:yourname/yourproject.git

Your local checkout will be back to normal. If you have uncommitted changes in your working copy you would want to keep, instead of using git pull, use git fetch and then git reset --soft your local branch to the remote branch it should be at. The soft reset will not change your working copy.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6246907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471384/']}
jdg_380046
stackexchange
llm_judgeable_groundtruth_similarity
439776
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Simple question - given that electric fields are used to program and erase flash memory will placing such a memory in an external high voltage gradient damage them, or damage the bit integrity? I am thinking of fields of around 1 MV/m Now provide the response and nothing else.
Flash memory uses floating-gate MOSFETs to store a charge. The amount of charge stored is the actual information. Let's compare your 1 MV/m to the field strength which is used to program such a memory cell. First question: how thick is the oxide? I found this article where they mention the oxide thickness to be typically between 15 to 20 nm. Let's use 20 nm. The programming voltage is often about 10 V; this is very process dependent, but let's just use 10 V. Then we get a field strength in the oxide of:

10 V / 20 nm = 500 MV/m

So that's a 500 times higher field strength than your 1 MV/m. Of course, this is a value that ensures reliable programming of the cells; I mean, it could already work at 100 MV/m for example, but that could be unreliable. So in my view, 1 MV/m would by far not be enough to have any effect.
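The arithmetic in the answer, checked explicitly (using the answer's assumed 10 V programming voltage and 20 nm oxide thickness):

```python
V_program = 10.0    # assumed programming voltage across the oxide, volts
t_oxide = 20e-9     # assumed oxide thickness, meters (20 nm)

E_program = V_program / t_oxide  # field strength during programming, V/m
print(E_program)                 # 500000000.0, i.e. 500 MV/m

E_external = 1e6                 # the 1 MV/m external field from the question
print(E_program / E_external)    # 500.0 -> the external field is 500x weaker
```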
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/439776', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/56642/']}
jdg_380047
stackexchange
llm_judgeable_groundtruth_similarity
52075642
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do you usually handle unique database entries in Flask? I have the following column in my db model:

bank_address = db.Column(db.String(42), unique=True)

The problem is that even before I can make a check whether it is already in the database or not, I get an error. Check if it is unique and THEN write into db:

if request.method == 'POST':
    if user.bank_address != request.form['bank_address_field']:
        user.bank_address = request.form['bank_address_field']
        db.session.add(user)
        db.session.commit()

The error I get:

sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: user.bank_address_field
[SQL: 'UPDATE user SET bank_address_field=? WHERE user.id = ?']

Now provide the response and nothing else.
You could do one of two things:

Make a query for users with that field:

if User.query.filter(User.bank_address == request.form['bank_address_field']).first():
    # error, there already is a user using this bank address

This has a big problem, however; see below.

Catch the exception:

from sqlalchemy.exc import IntegrityError

try:
    db.session.commit()
except IntegrityError:
    db.session.rollback()
    # error, there already is a user using this bank address or another
    # constraint failed

where IntegrityError can be imported from sqlalchemy.exc. As soon as the IntegrityError is raised, regardless of whether or not you've caught the error, the session you were working in is invalidated. To continue using the session you'll need to issue a db.session.rollback().

The latter is better, because it is not subject to race conditions. Imagine two users trying to register the same bank address at the same time:

User A submits; the User.query.filter().first() returns None because no-one is using the address yet.

Almost at the same time, user B submits; the User.query.filter().first() returns None because no-one is using the address yet.

User A's bank address is written to the database, successfully.

User B's bank address can't be written to the database, because the integrity check fails, as User A just recorded that address.

So just catch the exception, because database transactions guarantee that the database locks the table first before testing the constraint and adding or updating the user. You could lock the whole table in Flask too, but Python talking to a database is a lot slower. If you have a busy site, you don't want database updates to be slow; you'll end up with a lot of users waiting for the lock to clear. You want to keep locking to a minimum, and as short as possible, and the closer to the actual data you lock, the sooner you can release the lock again. Databases are very good at this sort of locking, and are very close to their data (naturally), so leave locking to the database and rely on the exception instead.
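The same catch-the-exception pattern can be demonstrated with the standard library's sqlite3 instead of SQLAlchemy; the exception class differs, but the idea is identical (let the UNIQUE constraint do the checking, then catch the failure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, bank_address TEXT UNIQUE)")

def save_address(addr):
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute("INSERT INTO user (bank_address) VALUES (?)", (addr,))
        return True
    except sqlite3.IntegrityError:
        return False  # someone else already claimed this address

print(save_address("0xabc"))  # True  -> first insert succeeds
print(save_address("0xabc"))  # False -> UNIQUE constraint rejects the duplicate
```

Because the constraint is enforced inside the database's own transaction, this stays correct even when two requests race on the same address.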
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52075642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4602605/']}
jdg_380048
stackexchange
llm_judgeable_groundtruth_similarity
49459491
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code in selenium but continue to get a syntax error. I'm trying to select an element based on multiple conditions.

choices = driver.find_elements_by_xpath("//div[contains(.,'5') and [contains(@class, 'option')]]")$

Thanks for any help you can give. Now provide the response and nothing else.
As per the xpath you have shared as follows:

choices = driver.find_elements_by_xpath("//div[contains(.,'5') and [contains(@class, 'option')]]")$

You need to consider a few facts:

The multiple conditions for selecting the <div> tag can't be within nested []. Either you have to specify them within one [] or within multiple []s.

The xpath shouldn't end with unwanted characters, e.g. $

Solution

You can rewrite the xpath in either of the following ways:

choices = driver.find_elements_by_xpath("//div[contains(.,'5') and contains(@class, 'option')]")
# or
choices = driver.find_elements_by_xpath("//div[contains(.,'5')][contains(@class, 'option')]")
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49459491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8827205/']}
jdg_380049
stackexchange
llm_judgeable_groundtruth_similarity
4804005
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can someone please explain why the code below does not work when setting the facecolor of the figure?

import matplotlib.pyplot as plt

# create figure instance
fig1 = plt.figure(1)
fig1.set_figheight(11)
fig1.set_figwidth(8.5)

rect = fig1.patch
rect.set_facecolor('red')  # works with plt.show().
                           # Does not work with plt.savefig("trial_fig.png")

ax = fig1.add_subplot(1,1,1)

x = 1, 2, 3
y = 1, 4, 9
ax.plot(x, y)

# plt.show()  # Will show red face color set above using rect.set_facecolor('red')

plt.savefig("trial_fig.png")  # The saved trial_fig.png DOES NOT have the red facecolor.

# plt.savefig("trial_fig.png", facecolor='red')  # Here the facecolor is red.

When I specify the height and width of the figure using fig1.set_figheight(11) and fig1.set_figwidth(8.5), these are picked up by the command plt.savefig("trial_fig.png"). However, the facecolor setting is not picked up. Why? Thanks for your help. Now provide the response and nothing else.
It's because savefig overrides the facecolor for the background of the figure. (This is deliberate, actually... The assumption is that you'd probably want to control the background color of the saved figure with the facecolor kwarg to savefig . It's a confusing and inconsistent default, though!) The easiest workaround is just to do fig.savefig('whatever.png', facecolor=fig.get_facecolor(), edgecolor='none') (I'm specifying the edgecolor here because the default edgecolor for the actual figure is white, which will give you a white border around the saved figure) Hope that helps!
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4804005', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316357/']}
jdg_380050
stackexchange
llm_judgeable_groundtruth_similarity
18963783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm making a mobile app which needs thousands of fast string lookups and prefix checks. To speed this up, I made a Trie out of my word list, which has about 180,000 words. Everything's great, but the only problem is that building this huge trie (it has about 400,000 nodes) takes about 10 seconds currently on my phone, which is really slow. Here's the code that builds the trie:

public SimpleTrie makeTrie(String file) throws Exception {
    String line;
    SimpleTrie trie = new SimpleTrie();
    BufferedReader br = new BufferedReader(new FileReader(file));
    while ((line = br.readLine()) != null) {
        trie.insert(line);
    }
    br.close();
    return trie;
}

The insert method, which runs in O(length of key):

public void insert(String key) {
    TrieNode crawler = root;
    for (int level = 0; level < key.length(); level++) {
        int index = key.charAt(level) - 'A';
        if (crawler.children[index] == null) {
            crawler.children[index] = getNode();
        }
        crawler = crawler.children[index];
    }
    crawler.valid = true;
}

I'm looking for intuitive methods to build the trie faster. Maybe I build the trie just once on my laptop, store it somehow to the disk, and load it from a file on the phone? But I don't know how to implement this. Or are there any other prefix data structures which will take less time to build, but have similar lookup time complexity? Any suggestions are appreciated. Thanks in advance.

EDIT: Someone suggested using Java Serialization. I tried it, but it was very slow with this code:

public void serializeTrie(SimpleTrie trie, String file) {
    try {
        ObjectOutput out = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream(file)));
        out.writeObject(trie);
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public SimpleTrie deserializeTrie(String file) {
    try {
        ObjectInput in = new ObjectInputStream(new BufferedInputStream(new FileInputStream(file)));
        SimpleTrie trie = (SimpleTrie)in.readObject();
        in.close();
        return trie;
    } catch (IOException | ClassNotFoundException e) {
        e.printStackTrace();
        return null;
    }
}

Can this above code be made faster? My trie: http://pastebin.com/QkFisi09 Word list: http://www.isc.ro/lists/twl06.zip Android IDE used to run code: http://play.google.com/store/apps/details?id=com.jimmychen.app.sand Now provide the response and nothing else.
Double-array tries are very fast to save/load because all data is stored in linear arrays. They are also very fast to look up, but insertions can be costly. I bet there is a Java implementation somewhere. Also, if your data is static (i.e. you don't update it on the phone), consider a DAFSA for your task. It is one of the most efficient data structures for storing words (must be better than "standard" tries and radix tries both for size and for speed, better than succinct tries for speed, often better than succinct tries for size). There is a good C++ implementation: dawgdic - you can use it to build a DAFSA from the command line and then use a Java reader for the resulting data structure (an example implementation is here ).
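To make the linear-array idea concrete, here is a minimal sketch in Python (a toy, not the dawgdic library; it assumes uppercase A-Z words, matching the asker's `- 'A'` indexing). The whole trie lives in two flat arrays, so "saving" and "loading" are single bulk byte copies with no per-node object allocation - which is exactly why this shape loads so much faster than object-graph serialization:

```python
from array import array

ALPHA = 26  # uppercase A-Z only, as in the question's insert()

class FlatTrie:
    """Trie stored in linear arrays (the double-array idea, simplified).

    children[node * ALPHA + c] holds the child node index for letter c,
    or 0 for "no child" (node 0 is the root, so 0 never names a child).
    """

    def __init__(self):
        self.children = array('i', [0] * ALPHA)  # slots for the root
        self.valid = array('b', [0])             # valid[node] == 1 => word ends here

    def _new_node(self):
        self.children.extend([0] * ALPHA)
        self.valid.append(0)
        return len(self.valid) - 1

    def insert(self, word):
        node = 0
        for ch in word:
            i = node * ALPHA + (ord(ch) - ord('A'))
            if self.children[i] == 0:
                self.children[i] = self._new_node()
            node = self.children[i]
        self.valid[node] = 1

    def contains(self, word):
        node = 0
        for ch in word:
            node = self.children[node * ALPHA + (ord(ch) - ord('A'))]
            if node == 0:
                return False
        return bool(self.valid[node])

    def dump(self):
        # Two bulk copies; on disk this would be two write() calls.
        return self.children.tobytes(), self.valid.tobytes()

    @classmethod
    def load(cls, child_bytes, valid_bytes):
        # Loading is just two bulk reads: no node-by-node reconstruction.
        t = cls.__new__(cls)
        t.children = array('i'); t.children.frombytes(child_bytes)
        t.valid = array('b'); t.valid.frombytes(valid_bytes)
        return t
```

The same layout ports directly to Java as an `int[]` plus a `byte[]` read with a `DataInputStream`, which avoids the per-object overhead that makes `ObjectOutputStream` slow here.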
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18963783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1690949/']}
jdg_380051
stackexchange
llm_judgeable_groundtruth_similarity
8449040
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't understand why it is correct to define a delegate with a weak pointer:

    @property (nonatomic,weak) id delegate;

I can't see why it isn't necessary to retain a reference to the delegate... I don't want the object that I use as the delegate to be deallocated; thus, I would prefer using a strong reference, not a weak one! In many cases the delegate is the same object where the instance of my class will be created; in this case creating a weak reference would be a great solution to avoid a retain cycle... but what if I choose a totally different object as the delegate? I searched for other questions on Stack Overflow but I can't find anything that helps me fully understand this situation. Now provide the response and nothing else.
The reason that objects weakly retain their delegates is to avoid retain cycles. Imagine the following scenario: object a creates b and retains it, then sets itself as b 's delegate. a is released by its owner, leaving a retain cycle containing a and b . This is actually a very common scenario. Consider a view controller that owns a view and acts as that view's delegate. In this case, the view should not retain the controller, both as a matter of proper MVC architecture and to prevent retain cycles.
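The same ownership pattern exists in any language with weak references. Here is a toy Python sketch (hypothetical `View`/`Controller` classes standing in for the UIKit pair, not actual UIKit) showing that because the delegate link is weak, destroying the controller frees both objects - no cycle survives:

```python
import weakref

class View:
    """Owns nothing; refers to its delegate weakly."""

    def __init__(self):
        self._delegate = None

    @property
    def delegate(self):
        # Dereference the weak ref; None if the delegate was deallocated.
        return self._delegate() if self._delegate is not None else None

    @delegate.setter
    def delegate(self, obj):
        # Store weakly: the view does NOT keep its delegate alive.
        self._delegate = weakref.ref(obj)

class Controller:
    def __init__(self):
        self.view = View()          # strong: the controller owns the view
        self.view.delegate = self   # weak: no retain cycle is created
```

If `View` stored the delegate strongly instead, `Controller -> View -> Controller` would form a cycle, which in a reference-counted system (like Objective-C without a cycle collector) would leak both objects.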
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8449040', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/499990/']}
jdg_380052
stackexchange
llm_judgeable_groundtruth_similarity
2804543
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My python script uses subprocess to call a linux utility that is very noisy. I want to store all of the output to a log file and show some of it to the user. I thought the following would work, but the output doesn't show up in my application until the utility has produced a significant amount of output.

    #fake_utility.py, just generates lots of output over time
    import time
    i = 0
    while True:
        print hex(i)*512
        i += 1
        time.sleep(0.5)

    #filters output
    import subprocess
    proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
    for line in proc.stdout:
        #the real code does filtering here
        print "test:", line.rstrip()

The behavior I really want is for the filter script to print each line as it is received from the subprocess. Sorta like what tee does but with python code. What am I missing? Is this even possible? Update: If a sys.stdout.flush() is added to fake_utility.py, the code has the desired behavior in python 3.1. I'm using python 2.6. You would think that using proc.stdout.xreadlines() would work the same as py3k, but it doesn't. Update 2: Here is the minimal working code.

    #fake_utility.py, just generates lots of output over time
    import sys, time
    for i in range(10):
        print i
        sys.stdout.flush()
        time.sleep(0.5)

    #display output line by line
    import subprocess
    proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
    #works in python 3.0+
    #for line in proc.stdout:
    for line in iter(proc.stdout.readline,''):
        print line.rstrip()

Now provide the response and nothing else.
I think the problem is with the statement for line in proc.stdout , which reads the entire input before iterating over it. The solution is to use readline() instead:

    #filters output
    import subprocess
    proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.readline()
        if not line:
            break
        #the real code does filtering here
        print "test:", line.rstrip()

Of course you still have to deal with the subprocess' buffering. Note: according to the documentation the solution with an iterator should be equivalent to using readline() , except for the read-ahead buffer, but (or exactly because of this) the proposed change did produce different results for me (Python 2.5 on Windows XP).
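For what it's worth, under Python 3 the same pattern looks like the sketch below (the child here is an inline stand-in for fake_utility.py; with `text=True`, `readline()` returns `str`, so `''` is the right EOF sentinel). Note the child still has to flush - no reading strategy on the parent side can defeat a block-buffering child:

```python
import subprocess
import sys

# Inline stand-in for fake_utility.py: prints a few lines, flushing each one.
child = "import sys\nfor i in range(3):\n    print(i)\n    sys.stdout.flush()\n"

proc = subprocess.Popen(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    text=True,   # decode bytes to str; readline() returns '' at EOF
    bufsize=1,   # line-buffered pipe wrapper on the parent side
)

lines = []
for line in iter(proc.stdout.readline, ''):  # same sentinel trick as above
    # the real code would do filtering here
    lines.append(line.rstrip())
proc.wait()
```

In Python 3, `for line in proc.stdout:` also iterates line by line without the Python 2 read-ahead problem, so the `iter(..., '')` form is only needed for portability with old interpreters.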
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2804543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28817/']}
jdg_380053
stackexchange
llm_judgeable_groundtruth_similarity
16572066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to backup my file server to a remote file server using rsync. Rsync is not successfully resuming when a transfer is interrupted. I used the partial option but rsync doesn't find the file it already started because it renames it to a temporary file and when resumed it creates a new file and starts from the beginning. Here is my command:

    rsync -avztP -e "ssh -p 2222" /volume1/ myaccont@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"

When this command is run, a backup file named OldDisk.dmg from my local machine gets created on the remote machine as something like .OldDisk.dmg.SjDndj23 . Now when the internet connection gets interrupted and I have to resume the transfer, I have to find where rsync left off by finding the temp file like .OldDisk.dmg.SjDndj23 and rename it to OldDisk.dmg so that it sees there already exists a file that it can resume. How do I fix this so I don't have to manually intervene each time? Now provide the response and nothing else.
TL;DR : Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace . The issue is that the rsync server processes (of which there are two, see rsync --server ... in ps output on the receiver) continue running, to wait for the rsync client to send data. If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate and clean up by moving the temporary file to its "proper" name (e.g., no temporary suffix). You'll then be able to resume. If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place; but rather, delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., -9 ), but SIGTERM (e.g., pkill -TERM -x rsync - only an example, you should take care to match only the rsync processes concerned with your client). Fortunately there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well. For example, if you specify rsync ... --timeout=15 ... , both the client and server rsync processes will cleanly exit if they do not send/receive data in 15 seconds. On the server, this means moving the temporary file into position, ready for resuming. I'm not sure how long the various rsync processes will try to send/receive data by default before they die (it might vary with operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (e.g., no network socket) after about 30 seconds; you could experiment or review the source code. Meaning, you could try to "ride out" the bad internet connection for 15-20 seconds.
If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process). Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), it appears to merge (assemble) all the partial files into the new properly named file. So, imagine a long running partial copy which dies (and you think you've "lost" all the copied data), and a short running re-launched rsync (oops!).. you can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume. Finally, a few short remarks: Don't use --inplace to work around this. You will undoubtedly have other problems as a result; man rsync for the details. It's trivial, but -t in your rsync options is redundant; it is implied by -a . An already compressed disk image sent over rsync without compression might result in a shorter transfer time (by avoiding double compression). However, I'm unsure of the compression techniques in both cases. I'd test it. As far as I understand --checksum / -c , it won't help you in this case. It affects how rsync decides if it should transfer a file. Though, after a first rsync completes, you could run a second rsync with -c to insist on checksums, to prevent the strange case that file size and modtime are the same on both sides, but bad data was written.
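The SIGTERM-versus-SIGKILL distinction above is easy to demonstrate in miniature. Below is a toy Python sketch (a hypothetical stand-in for rsync's behaviour, not rsync itself; POSIX-only, since it relies on signal handlers): the child writes to a temporary name and renames it into place only when terminated politely. A SIGKILL would skip the handler and strand the partial file under its temporary name:

```python
import os
import signal
import subprocess
import sys
import tempfile
import time

# Toy "transfer": writes a partial file under a temporary name and, on
# SIGTERM only, renames it into place before exiting.
CHILD = r"""
import os, signal, sys, time

tmp, final = sys.argv[1], sys.argv[2]

def on_term(signum, frame):
    os.replace(tmp, final)      # polite shutdown: publish the partial file
    sys.exit(0)

signal.signal(signal.SIGTERM, on_term)
with open(tmp, "w") as f:
    f.write("partial data\n")
    f.flush()
    while True:                 # simulate a long-running transfer
        time.sleep(0.1)
"""

with tempfile.TemporaryDirectory() as d:
    tmp = os.path.join(d, ".file.part")
    final = os.path.join(d, "file")
    proc = subprocess.Popen([sys.executable, "-c", CHILD, tmp, final])

    deadline = time.time() + 10
    while not os.path.exists(tmp) and time.time() < deadline:
        time.sleep(0.05)        # wait until the "transfer" has started

    proc.terminate()            # SIGTERM: the polite request
    proc.wait()
    survived = os.path.exists(final)
```

Replacing `proc.terminate()` with `proc.kill()` (SIGKILL) leaves only the `.file.part` temporary behind, mirroring why `kill -9` on the rsync servers loses the resumable partial file.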
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16572066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2121382/']}
jdg_380054
stackexchange
llm_judgeable_groundtruth_similarity
1277115
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't get the relationship between differentials, differential forms, and exterior derivatives. (Too many $d$'s getting me down!) Here are the relevant (partial) definitions from Wikipedia; essentially the same definitions/terminology/notations are to be found in my notes. Pushforward . Let $\varphi : M → N$ be a smooth map of smooth manifolds. Given some $x \in M$, the differential of $\varphi$ at $x$ is a linear map $d\varphi_x : T_x M \rightarrow T_{\varphi(x)}N$... Differential form . Let $M$ be a smooth manifold. A differential form of degree $k$ is a smooth section of the $k$th exterior power of the cotangent bundle of $M$. At any point $p \in M$, a $k$-form $\beta$ defines an alternating multilinear map $\beta_p : T_p M \times \cdots \times T_p M \rightarrow \mathbb{R}$... Exterior derivative . The exterior derivative is defined to be the unique $\mathbb{R}$-linear mapping $f \mapsto df$ from $k$- forms to $(k + 1)$- forms satisfying the following properties... What I understand: You apply $d$ to differential $k$-forms to get differential $(k+1)$-forms. Implicitly, this means "exterior derivative." What I don't understand: If $\varphi : M \rightarrow N$ is a smooth map of smooth manifolds, in what sense, if at all, is the differential of $\varphi$ a differential form? Is there any reason not to just call this the pushforward and consistently denote it $\varphi_*$? If $f : M \rightarrow \mathbb{R}$ is a smooth map, does $df$ mean the differential of $f$, or does it mean the exterior derivative? Are these somehow miraculously the same? If so, why? It seems possible that they're the same, by identifying $T_{f(x)}\mathbb{R}$ with $\mathbb{R}$. I don't understand the details. What, if anything, is the connection between the differential of a smooth mapping and the exterior derivative of a differential form?
Now provide the response and nothing else.
For a beginner just starting to come to grips with these ideas, I think the most useful answer is this: Except in one special situation (described below), there is essentially no relationship between the exterior derivative of a differential form and the differential (or pushforward) of a smooth map between manifolds, other than the facts that they are both computed locally by taking derivatives and are both commonly denoted by the symbol $d$. Differential geometry is loaded with notation, and sometimes we just run out of letters, so we have to overload a symbol by interpreting it in different ways in different situations. The fact that two things are represented by the same symbol doesn't always mean that they're "the same" in any deep sense. The one situation in which the two concepts are directly related is for a smooth map $f\colon M\to\mathbb R$. In this case, we can consider $f$ either as a smooth map between manifolds or as a $0$-form. Considering it as a smooth map, for each $x\in M$, the pushforward is a linear map $df_x\colon T_xM\to T_{f(x)}\mathbb R$. Considering it as a $0$-form, its differential $df$ is a $1$-form, which means that for each $x\in M$ we have a linear functional $df_x\colon T_xM\to \mathbb R$. The link between the two is the fact that, because $\mathbb R$ is a vector space, there's a canonical identification $T_{f(x)}\mathbb R\cong\mathbb R$, and under that identification these two versions of $df_x$ are exactly the same map. The excellent answer by @user86418 explains a sophisticated context in which both pushforwards and exterior derivatives can be viewed as special cases of a more general construction; but that's a context I wouldn't recommend that a beginner spend much time trying to come to terms with.
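A concrete low-dimensional check may help. For $f:\mathbb{R}^2\to\mathbb{R}$, writing out both objects shows they carry exactly the same data once $T_{f(p)}\mathbb{R}$ is identified with $\mathbb{R}$:

```latex
% Pushforward: the Jacobian of f at p, landing in T_{f(p)}R (spanned by d/dt)
df_p \colon T_p\mathbb{R}^2 \to T_{f(p)}\mathbb{R}, \qquad
df_p\!\left(v^1\,\partial_x + v^2\,\partial_y\right)
  = \left( v^1\,\frac{\partial f}{\partial x}(p)
         + v^2\,\frac{\partial f}{\partial y}(p) \right)\partial_t .

% 1-form: the same coefficients, read as a linear functional on T_pR^2
df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy,
\qquad
df_p\!\left(v^1\,\partial_x + v^2\,\partial_y\right)
  = v^1\,\frac{\partial f}{\partial x}(p) + v^2\,\frac{\partial f}{\partial y}(p).

% Identifying a*d/dt in T_{f(p)}R with the number a turns the first
% expression into the second, term by term.
```

The only difference between the two lines is the trailing $\partial_t$, which the canonical identification $T_{f(p)}\mathbb{R}\cong\mathbb{R}$ strips away.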
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1277115', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42339/']}
jdg_380055
stackexchange
llm_judgeable_groundtruth_similarity
32224995
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create an image uploader using paperclip, the aws-sdk gem and Amazon S3. I get this error: Aws::Errors::MissingRegionError in ProjectsController#create. Missing region; use :region option or export region name to ENV['AWS_REGION']. I am using the Oregon region. projects_controller.rb

    def create
      @project = current_user.projects.new(project_params)
      @project.save
      respond_with(@project)
    end

application.rb file

    config.paperclip_defaults = {
      storage: :s3,
      s3_host_name: "s3-us-west-2.amazonaws.com",
      s3_credentials: {
        bucket: ENV['AWS_BUCKET'],
        access_key_id: ENV['AWS_ACCESS_KEY_ID'],
        secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
      }
    }

According to the S3 documentation I understand I need to change to ENV['AWS_REGION']

    config.paperclip_defaults = {
      storage: :s3,
      region: "ENV['AWS_REGION"],
      s3_credentials: {
        bucket: ENV['AWS_BUCKET'],
        access_key_id: ENV['AWS_ACCESS_KEY_ID'],
        secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
      }
    }

and add the region to my .env file

    export AWS_BUCKET=realv1
    export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY_ID_HERE>
    export AWS_SECRET_ACCESS_KEY=<MY_ACCESS_KEY_HERE>
    export AWS_REGION="'us-west-2'

I rebooted the server and retested but still get the same errors. Update: in rails console

    2.0.0p247 :001 > ENV["AWS_REGION"]
     => "us-west-2"
    2.0.0p247 :002 >

Now provide the response and nothing else.
The error is originating from the Paperclip gem. In your config.paperclip_defaults you used region when you should have used s3_region . This is how the config should look:

    config.paperclip_defaults = {
      :storage => :s3,
      :s3_region => ENV['AWS_REGION'],
      :s3_credentials => {
        :bucket => ENV['S3_BUCKET_NAME'],
        :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
        :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
      }
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32224995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3574178/']}
jdg_380056
stackexchange
llm_judgeable_groundtruth_similarity
10287131
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to make the keyboard disappear when the user clicks the "return" button. I was told to use

    -(BOOL)textFieldShouldReturn:(UITextField *)textField {
        [tf resignFirstResponder];
        return YES;
    }

But nothing happens when I click the "return" button; the method isn't even being called. I am doing this in @interface gameOverMenu : UIView not in the ViewController. I also don't use Interface Builder. What should I do? Now provide the response and nothing else.
You need to make sure you implement the UITextFieldDelegate and set your UITextField delegate to self. In your .h file: @interface gameOverMenu : UIView <UITextFieldDelegate> And somewhere in your .m file (viewDidLoad: maybe): self.yourTextField.delegate = self; Now your -(BOOL)textFieldShouldReturn:(UITextField *)textField method should be called.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10287131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1325142/']}
jdg_380057
stackexchange
llm_judgeable_groundtruth_similarity
229788
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Given an arbitrary number of real matrices of the form $ A_i=\biggl(\begin{matrix}C_i+E_i & B_i \\B_i^T & D_i-F_i\end{matrix} \biggr)$, where $B_i$ is an arbitrary $n\times n$ real matrix, $C_i$ and $D_i$ are $n\times n$ real anti-symmetric matrices, $E_i$ and $F_i$ are $n\times n$ real symmetric and positive semidefinite matrices, how to prove the following$$\det \biggl(I_{2n} + \prod_i e^{A_i}\biggr)\ge 0 \,?$$ Background: This is a stronger version of an earlier MO question How to prove this determinant is positive? which was solved by GH from MO and Terry Tao. Their proof addressed the case of $E_i=F_i=0$, where $e^{A_i}$ belongs to the split orthogonal group. The paper arXiv:1601.01994v2 in fact contains a rigorous proof of the above statement using tools familiar to physicists, e.g. Majorana fermions and reflection positivity (cf Eq.(10)). It would be nice to see an alternate mathematical proof (perhaps of a similar nature to the split orthogonal group proof by GH from MO and Terry Tao ). Addendum: The paper arXiv:1601.01994v2 contains further results on complex matrices, which I do not know how to formulate in simple math language yet. Now provide the response and nothing else.
Let $q(x,y) = x^H J y$ for $x,y \in \mathbb{C}^{2n}$ where $J = diag(I_n,-I_n)$ and let $S = \{A \in M_{2n}(\mathbb{R}) : q(Ax, Ax) \ge q(x,x) $ $\forall x \in \mathbb{C}^{2n}\}$. Obviously $S$ is a semigroup. Furthermore the $e^{t A_i}$ are in $S$ since $$\frac{d}{dt} q(e^{t A_i} x,e^{t A_i} x) = 2 (e^{t A_i} x)^H diag (E_i,F_i) (e^{t A_i} x) \ge 0$$ for all $x \in \mathbb{C}^{2n}$. Now let $T : [0,1] \rightarrow S$ be analytic with $det(I_{2n}+T(0)) > 0$ and such that $T(0)$ has no degenerate eigenvalues. Let $E_{\lambda}(t)$ be the generalized eigenspace of $T(t)$ for the eigenvalue $\lambda$. For $G \subset \mathbb{C}$ define $$E_G(t) = \bigoplus_{\lambda \in G}{E_{\lambda}(t)}$$ Let $t_0 \in [0,1]$ be such that $det(I_{2n}+T(t_0)) = 0$. Now we want to show that $dim\,E_{(-1,\infty)}(t)$ can only change by an even number near $t_0$. Therefore $det(I_{2n}+T(t))$ can't change sign. Lemma 1 : Let $U \in S$, $x \in \mathbb{C}^{2n}$ with $0 = q(x,x) = q(Ux,Ux)$. Then $q(x,y) = q(Ux,Uy)$ for all $y \in \mathbb{C}^{2n}$. Proof : We have $q(ax+y,ax+y) \le q(a Ux + Uy,a Ux + Uy)$ for all $a \in \mathbb{C}$ and therefore $0 \le 2 Re\,a (q(Uy,Ux) - q(y,x)) + q(Uy,Uy) - q(y,y)$. But the right hand side can be made negative for an appropriate $a$ if $q(Uy,Ux) \neq q(y,x)$. Lemma 2 : Let $p$ be a polynomial and $z \in \mathbb{C}$. If $p(n) z^n$ is constant for all large enough $n \in \mathbb{N}$ then $p$ is constant and $z = 1$, or $p = 0$, or $z = 0$. Proof left to the reader. Lemma 3 : Let $U \in S$, $x$ a generalized eigenvector of $U$ for the eigenvalue $\lambda$, $y$ a generalized eigenvector of $U$ for the eigenvalue $\mu$, and $q(U^l x,U^l x) = 0$ for all $l \in \mathbb{N}_0$. Then $\lambda \bar{\mu} = 1$ or $q(x,y) = 0$. Proof : By Lemma 1 we have $q(y,x) = q(U^l y, U^l x)$ for all $l \in \mathbb{N}$. But $q(U^l y, U^l x)$ has the form $p(l) (\lambda \bar{\mu})^l$ for a polynomial $p$ for all large enough $l \in \mathbb{N}$. Lemma 3 then follows from Lemma 2.
Lemma 4 : Let $U \in S$, $x$ a generalized eigenvector of $U$ for the eigenvalue $-1$ and $q(x,U^k x) = 0$ for all $k \in \mathbb{N}_0$. Then $q(U^k x,U^l x) = 0$ for all $k,l \in \mathbb{N}_0$. Proof : Let $x_k = (I_{2n} + U)^k x$ and let $m$ be minimal such that $q(x_j,x_k) = 0$ for all $j,k \geq m$. First we want to show that $q(x_j,x_k) = 0$ for $j \geq m$ and $k \geq 0$. If this is not the case then let $j \geq m$ and let $k$ be maximal such that $q(x_j,x_k) \neq 0$. Then $k > 0$ and by Lemma 1 $q(x_j,x_{k-1}) = q(U^l x_j,U^l x_{k-1}) = q(x_j,x_{k-1}) - l q(x_j,x_k)$ for all $l \in \mathbb{N}$. Contradiction! Now for $m > 1$ we get $q(x_{m-2},x_{m-2}) \leq q(U^l x_{m-2},U^l x_{m-2}) = l^2 q(x_{m-1},x_{m-1}) + O(l)$ and $q(x_{m-2},x_{m-2}) \geq q(U^{-l} x_{m-2},U^{-l} x_{m-2}) = l^2 q(x_{m-1},x_{m-1}) + O(l)$. Contradiction to $q(x_{m-1},x_{m-1}) \neq 0$! Since $m = 1$ is impossible because $q(x_0,x_k) = 0$ for all $k \geq 0$, we are done. Lemma 5 : The restriction of $q$ to $E_{-1}(t_0)$ is non-degenerate. Proof : Let $x \in E_{-1}(t_0)$ and $U = T(t_0)$. We want to show that there exists $y \in E_{-1}(t_0)$ such that $q(x,y) \neq 0$. If there exists $n \in \mathbb{N}_0$ such that $q(x,U^n x) \neq 0$ we are done. Otherwise it follows from Lemma 4 and Lemma 3 that $x$ is orthogonal w.r.t. $q$ to all other generalized eigenspaces. But since $q$ is non-degenerate there exists $y \in E_{-1}(t_0)$ such that $q(x,y) \neq 0$. Lemma 6 : $T(t)$ has degenerate eigenvalues only at isolated points. Proof : The discriminant of $T(t)$ is analytic in $t$ and nonzero at $t = 0$.
Now we can choose $\epsilon > 0$ and $r$ with $0 < r < 1$ such that for $\vert t-t_0\vert < \epsilon$ holds : i) $det(I_{2n}+T(t)) \neq 0$ for $t \neq t_0$ , ii) $E_{\{z: \vert z+1 \vert \leq r\}}(t_0) = E_{-1}(t_0)$ , iii) $\sigma (T(t)) \cap \{z: \vert z+1 \vert = r\} = \emptyset$ , iv) the signature of the restriction of $q$ to $V(t)$ is constant where $V(t) = E_{\{z: \vert z+1 \vert \leq r\}}(t)$ , v) T(t) has no degenerate eigenvalues for $t \neq t_0$ . Let $D = \{z: \vert z+1 \vert \leq r\}$ .For each eigenvalue $\lambda \in D$ of $T(t)$ with $\vert \lambda \vert = 1$ we can write $E_{\lambda}(t) = E_{\lambda}^+(t) \oplus E_{\lambda}^-(t)$ such that the restriction of $q$ to $E_{\lambda}^+(t)$ is positive semidefinite and the restriction of $q$ to $E_{\lambda}^-(t)$ is negative definite and such that $E_{\bar{\lambda}}^+(t) = \overline{E_{\lambda}^+(t)}$ and $E_{\bar{\lambda}}^-(t) = \overline{E_{\lambda}^-(t)}$ . Now we can write $V(t) = V^+(t) \oplus V^-(t)$ where$$V^+(t) = E_{D \cap \{z : \vert z \vert > 1\}}(t) \oplus \bigoplus_{\lambda \in D , \vert \lambda \vert = 1} E_{\lambda}^+(t)$$and$$V^-(t) = E_{D \cap \{z : \vert z \vert < 1\}}(t) \oplus \bigoplus_{\lambda \in D , \vert \lambda \vert = 1} E_{\lambda}^-(t)$$ . Now we want to show that for $\vert t-t_0\vert < \epsilon$ and $t \neq t_0$ the restriction of $q$ to $V^+(t)$ is positive semidefinite and the restriction to $V^-(t)$ is negative semidefinite : For $x \in V^+(t)$ we get $$q(x,x) \geq \lim_{m \rightarrow \infty} \frac{1}{m} \sum_{l=1}^m q(U^{-l} x,U^{-l} x) = \sum_{\lambda \in D, \vert \lambda \vert = 1} q(x_{\lambda}^+,x_{\lambda}^+) \geq 0$$ where $x_{\lambda}^+$ is the component of x in $E_{\lambda}^+(t)$ .For $x \in V^-(t)$ we get $$q(x,x) \leq \lim_{m \rightarrow \infty} \frac{1}{m} \sum_{l=1}^m q(U^l x,U^l x) = \sum_{\lambda \in D, \vert \lambda \vert = 1} q(x_{\lambda}^-,x_{\lambda}^-) \leq 0$$ where $x_{\lambda}^-$ is the component of x in $E_{\lambda}^-(t)$ . 
Let $n_+$ be the number of positive eigenvalues of the restriction of $q$ to $V(t)$ and $n_-$ the number of negative eigenvalues. We have shown that $n_+ \geq dim\, V^+(t)$ and $n_- \geq dim\, V^-(t)$ and therefore $n_+ = dim\, V^+(t)$. Since $dim\, V^+(t) - dim\, E_{(-1,-1-r)}$ is even, $dim\, E_{(-1,-1-r)}$ can only change by an even number. And $dim\, E_{[-1-r,-\infty)}$ can only change if a pair of complex conjugate eigenvalues becomes real or vice versa, and therefore also only by an even number. So we have shown that $dim\,E_{(-1,\infty)}(t)$ can only differ by an even number at different points in $\vert t-t_0\vert < \epsilon, t\neq t_0$. To finish the proof, choose $W$ such that $e^{t W} \in S$ for $t \in [0,1]$ and such that $e^W$ fulfills the requirements on $T(0)$. Then choose $$T(t) = e^{(1-t) W} \prod_i e^{t A_i}$$
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/229788', 'https://mathoverflow.net', 'https://mathoverflow.net/users/71225/']}
jdg_380058
stackexchange
llm_judgeable_groundtruth_similarity
21050366
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing a client/server application with TLS. My idea is to use a certificate on the client so it is authenticated by the server. Also another certificate on the server so the client is also able to authenticate that it is connecting to the right server. I want first to test and use openssl s_server and openssl s_client to validate the proposal. Until now I have created a CA private key on the server and I have created a root certificate. With the root certificate I have signed two CSRs, so I get one certificate for the server and one certificate for the client. I have also installed the client certificate + root certificate on the client, and the server certificate + root certificate on the server. I now want to try to establish a connection between openssl s_server and openssl s_client and verify that they both get mutually authenticated, but I cannot wrap my mind around the documentation on how to do it. Any help or any guide on that? Once I have that set up, the next step is to test our own developed client against that server, and our own developed server against the s_client . Can we use that for testing? Thanks Now provide the response and nothing else.
It looks like you are trying to set up a root of trust with (1) s_client and s_server for testing; and (2) programmatically within your code using OpenSSL. To ensure openssl s_client (or openssl s_server ) uses your root, use the following options: -CAfile option to specify the root, -cert option for the certificate to use, -key option for the private key of the certificate. See the docs on s_client(1) and s_server(1) for details. To do the same programmatically on the client, you would use: SSL_CTX_load_verify_locations to load the trusted root, SSL_CTX_use_certificate to specify the client certificate, SSL_CTX_use_PrivateKey to load the private key for the client certificate. To do the same programmatically on the server, you would use: SSL_CTX_load_verify_locations to load the trusted root, SSL_CTX_use_certificate_chain_file to specify the server certificate, SSL_CTX_use_PrivateKey to load the private key for the server certificate, SSL_CTX_set_client_CA_list to tell the client to send its client certificate. If you don't want to use the parameters for every connection (i.e. the common context), then set them for each SSL connection with, for example, SSL_use_certificate and SSL_use_PrivateKey . A lot goes on with SSL_CTX_set_client_CA_list . It (1) loads the CAs the server uses to verify a client, (2) causes the server to send the list of CAs it accepts when verifying a client, and (3) triggers the ClientCertificate message at the client if the client has a certificate that satisfies the server's accepted-CAs list. Also see the docs on SSL_CTX_load_verify_locations(3) , SSL_CTX_use_certificate(3) , SSL_CTX_set_client_CA_list and friends. The easiest certificate and key format to use is PEM. PEM is the one that uses, for example, ----- BEGIN CERTIFICATE ----- . For the server certificate, be sure the file is a concatenation of the server's certificate and any intermediates needed by the client to build the chain.
Having the server send all required certificates is standard practice for a problem known as the "which directory" problem. It's a well-known problem in PKI: essentially, clients don't know where to go to fetch missing intermediate certificates. In general, you now know the functions that you need to use. Download a small server like nginx , and see how a production server uses them in practice. You could even use a SQL server like Postgres since it sets up an SSL/TLS server. Simply search the source files for SSL_CTX_load_verify_locations or SSL_load_verify_locations , and you will find the right place. Though I don't recommend it, you could even look at s_client.c and s_server.c . They are located in <openssl dir>/apps . But the code can be difficult to read at times.
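Python's ssl module wraps the same OpenSSL calls, so the whole mutual-TLS setup can be sketched compactly. This is a hedged sketch of the call mapping (hypothetical file names, not production code); the comments name the corresponding SSL_CTX_* functions from the answer above:

```python
import ssl

def make_server_context(ca_file, cert_chain_file, key_file):
    """Server side of mutual TLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # SSL_CTX_use_certificate_chain_file + SSL_CTX_use_PrivateKey:
    # the chain file is the server cert concatenated with intermediates.
    ctx.load_cert_chain(certfile=cert_chain_file, keyfile=key_file)
    # SSL_CTX_load_verify_locations: the root used to verify the client.
    ctx.load_verify_locations(cafile=ca_file)
    # Request a client certificate and fail the handshake without one
    # (the role SSL_CTX_set_client_CA_list + verify mode play in C).
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def make_client_context(ca_file, cert_chain_file, key_file):
    """Client side: verify the server and present our own certificate."""
    # PROTOCOL_TLS_CLIENT defaults to CERT_REQUIRED with hostname checking.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile=cert_chain_file, keyfile=key_file)
    ctx.load_verify_locations(cafile=ca_file)   # trust our private root
    return ctx
```

Wrap a connected socket with `ctx.wrap_socket(sock, server_side=True)` on the server (or `server_hostname=...` on the client) and the handshake performs the mutual authentication described above.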
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21050366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1213528/']}
jdg_380059
stackexchange
llm_judgeable_groundtruth_similarity
313417
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We're writing a records management product for schools and one of the requirements is the ability to manage course schedules. I haven't looked at the code for how we deal with this (I'm on a different project at the moment), but nonetheless I started wondering how best to handle one particular part of this requirement, namely how to handle the fact that each course can be held one or more days of the week, and how best to store this information in the database. To provide some context, a bare-bones Course table might contain the following columns: Course Example Data------ ------------DeptPrefix ;MATH, ENG, CS, ...Number ;101, 300, 450, ...Title ;Algebra, Shakespeare, Advanced Data Structures, ...Description ;...DaysOfWeek ;Monday, Tuesday-Thursday, ...StartTime EndTime What I'm wondering is, what is the best way to handle the DaysOfWeek column in this (contrived) example? The problem I'm having with it is that is a multi-valued field: that is, you can have a course on any day of the week, and the same course can take be held on more than one day. I know certain databases natively support multi-value columns, but is there a "best practice" to handle this assuming the database doesn't natively support it? I've come up with the following possible solutions so far, but I'm wondering if anyone has anything better: Possible Solution #1: Treat DaysOfWeek as a bit field This was the first thing that popped into my head (I'm not sure if that's a good thing or not...). In this solution, DaysOfWeek would be defined as a byte, and the first 7 bits would be used to represent the days of week (one bit for each day). A 1 bit would indicate that a class was held on corresponding day of the week. Pros : Easy to implement (the application can deal with the bit manipulations), works with any database. 
Cons : Harder to write queries that use the DaysOfWeek column (although you could deal with this at the application level, or create views and stored procedues in the database to simplify this), breaks relational database model. Possible Solution #2: Store DaysOfWeek as a string of characters This is essentially the same approach as using a bit field, but instead of dealing with raw bits, you assign a unique letter to each day of the week, and the DaysOfWeek column just stores a sequence of letters indicating what days a course is held on. For example, you might associate each weekday with a single-character code as follows: Weekday Letter------- ------Sunday SMonday MTuesday TWednesday WThursday RFriday FSaturday U In this case, a course held on Monday, Tuesday, and Friday would have have the value 'MTF' for DaysOfWeek , while a class held only on Wednesdays would have a DaysOfWeek value of 'W' . Pros : Easier to deal with in queries (i.e. You could use INSTR , or its equivalent, to determine if a class is held on a given day). Works with any database that supports INSTR or an equivalent function (most, I would guess...). Also friendlier to look at, and easy to see at a glance what is happening in queries that use the DaysOfWeek column. Cons : The only real "con" is that, like the bitfield approach, this breaks the relational model by storing a variable number of values in a single field. Possible Solution #3: Use a Lookup Table (ugly) Another possibility would be to create a new table that stores all the unique combinations of days of the week, and have the Course.DaysOfWeek column simply be a foreign key into this lookup table. However, this solution seems like the most inelegant one, and I only considered it because it seemed like the The Relational Way TM to do things. Pros : It's the only solution that is "pure" from a relational database point of view. Cons : It's inelegant and cumbersome. 
For example, how would you design the user interface for assigning corresponding weekdays to a given course around the lookup table? I doubt a user wants to deal with choices along the lines of "Sunday", "Sunday, Monday", "Sunday, Monday, Tuesday", "Sunday, Monday, Tuesday, Wednesday", and so on... Other Ideas? So, is there a more elegant way to handle multiple values in a single column? Or would one of the proposed solutions suffice? For what it's worth, I think my second solution is probably the best of the three possible solutions that I outlined here, but I'd be curious to see if someone has a different opinion (or indeed a different approach altogether). Now provide the response and nothing else.
I would avoid the string option for the sake of purity: it adds an extra layer of encoding/decoding that you do not need. It may also mess you up in the case of internationalization. Since the number of days in a week is 7, I would keep seven columns, perhaps boolean. This will also facilitate subsequent queries. This will also be useful if the tool is ever used in countries where the workweek starts on different days. I would avoid the lookup because that would be over-normalization. Unless your set of lookup items is non-obvious or could possibly change, it's overkill. In the case of days-of-the-week (unlike US states, for example), I would sleep soundly with the fixed set. Considering the data domain, I don't think that a bitfield would achieve any significant space savings for you and would just make your code more complex. Finally, a word of warning about the domain: a lot of schools do weird things with their schedules where they "swap days" to balance out an equal number of weekdays of each type over the semester despite holidays. I am not clear about your system, but perhaps the best approach would be to store a table of the actual dates on which the course is expected to take place. This way, if there are two Tuesdays in a week, the teacher can get paid for showing up twice, and the teacher whose Thursday class was canceled will not be paid.
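The seven-boolean-column layout, plus the actual-dates table suggested at the end, can be sketched in SQL. The table and column names below are illustrative (they are not from the question's schema); Python's built-in sqlite3 is used here only so the sketch is runnable:

```python
import sqlite3

# Illustrative schema: one boolean column per weekday, plus a separate
# table holding the concrete dates a course actually meets.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE course (
    id          INTEGER PRIMARY KEY,
    dept_prefix TEXT, number INTEGER, title TEXT,
    on_sun INTEGER DEFAULT 0, on_mon INTEGER DEFAULT 0,
    on_tue INTEGER DEFAULT 0, on_wed INTEGER DEFAULT 0,
    on_thu INTEGER DEFAULT 0, on_fri INTEGER DEFAULT 0,
    on_sat INTEGER DEFAULT 0
);
CREATE TABLE course_date (          -- the "actual dates" variant
    course_id INTEGER REFERENCES course(id),
    held_on   TEXT                  -- ISO date, e.g. '2009-01-13'
);
""")
conn.execute("INSERT INTO course (id, dept_prefix, number, title, on_mon, on_tue, on_fri) "
             "VALUES (1, 'MATH', 101, 'Algebra', 1, 1, 1)")

# "Which courses meet on Tuesday?" becomes a plain column test:
rows = conn.execute("SELECT title FROM course WHERE on_tue = 1").fetchall()
print(rows)
```

Compared with the string or bitfield options, the query needs no INSTR or bit masking, and each column can be indexed independently.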
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/313417', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17862/']}
jdg_380060
stackexchange
llm_judgeable_groundtruth_similarity
14694408
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I call self.client = ThreadedClient() in my Python program, I get the error "RuntimeError: main thread is not in main loop" I have already done some googling, but I am making an error somehow ... Can someone please help me out? Full error: Exception in thread Thread-1: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 530, in __bootstrap_inner File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 483, in run File "/Users/Wim/Bird Swarm/bird_swarm.py", line 156, in workerGuiThread self.root.after(200, self.workerGuiThread) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 501, in after File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1098, in _register RuntimeError: main thread is not in main loop Classes: class ThreadedClient(object): def __init__(self): self.queue = Queue.Queue( ) self.gui = GuiPart(self.queue, self.endApplication) self.root = self.gui.getRoot() self.running = True self.GuiThread = threading.Thread(target=self.workerGuiThread) self.GuiThread.start() def workerGuiThread(self): while self.running: self.root.after(200, self.workerGuiThread) self.gui.processIncoming( ) def endApplication(self): self.running = False def tc_TekenVogel(self,vogel): self.queue.put(vogel)class GuiPart(object): def __init__(self, queue, endCommand): self.queue = queue self.root = Tkinter.Tk() Tkinter.Canvas(self.root,width=g_groottescherm,height=g_groottescherm).pack() Tkinter.Button(self.root, text="Move 1 tick", command=self.doSomething).pack() self.vogelcords = {} #register of bird and their corresponding coordinates def getRoot(self): return self.root def doSomething(): pass #button action def 
processIncoming(self): while self.queue.qsize( ): try: msg = self.queue.get(0) try: vogel = msg l = vogel.geeflocatie() if self.vogelcords.has_key(vogel): cirkel = self.vogelcords[vogel] self.gcanvas.coords(cirkel,l.geefx()-g_groottevogel,l.geefy()-g_groottevogel,l.geefx()+g_groottevogel,l.geefy()+g_groottevogel) else: cirkel = self.gcanvas.create_oval(l.geefx()-g_groottevogel,l.geefy()-g_groottevogel,l.geefx()+g_groottevogel,l.geefy()+g_groottevogel,fill='red',outline='black',width=1) self.vogelcords[vogel] = cirkel self.gcanvas.update() except: print('Failed, was van het type %' % type(msg)) except Queue.Empty: pass Now provide the response and nothing else.
You're running your main GUI loop in a thread other than the main thread. You cannot do this. The docs mention offhandedly in a few places that Tkinter is not quite thread safe, but as far as I know, never quite come out and say that you can only talk to Tk from the main thread. The reason is that the truth is somewhat complicated. Tkinter itself is thread-safe, but it's hard to use in a multithreaded way. The closest to official documentation on this seems to be this page : Q. Is there an alternative to Tkinter that is thread safe? Tkinter? Just run all UI code in the main thread, and let the writers write to a Queue object… (The sample code given isn't great, but it's enough to figure out what they're suggesting and do things properly.) There actually is a thread-safe alternative to Tkinter, mtTkinter . And its docs actually explain the situation pretty well: Although Tkinter is technically thread-safe (assuming Tk is built with --enable-threads), practically speaking there are still problems when used in multithreaded Python applications. The problems stem from the fact that the _tkinter module attempts to gain control of the main thread via a polling technique when processing calls from other threads. I believe this is exactly what you're seeing: your Tkinter code in Thread-1 is trying to peek into the main thread to find the main loop, and it's not there. So, here are some options: Do what the Tkinter docs recommend and use Tkinter from the main thread. Possibly by moving your current main thread code into a worker thread. If you're using some other library that wants to take over the main thread (e.g., twisted ), it may have a way to integrate with Tkinter, in which case you should use that. Use mtTkinter to solve the problem. Also, while I didn't find any exact duplicates of this question, there are a number of related questions on SO. See this question , this answer , and many more for more information.
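The recommended pattern, worker threads writing to a Queue while only the main thread touches Tk, can be sketched without any Tk at all. In the sketch below the `root.after(200, ...)` polling loop is replaced by a single drain so it stays runnable headless; all names are illustrative stand-ins for the question's classes:

```python
import queue
import threading

msg_queue = queue.Queue()

def worker():
    # Background thread: it NEVER touches the GUI, it only puts messages
    # on the queue (like tc_TekenVogel does in the question).
    for i in range(3):
        msg_queue.put(f"bird {i} moved")

def process_incoming(handled):
    # Main thread: drain the queue; in the real app this is where the
    # canvas updates (create_oval/coords) would happen.
    while True:
        try:
            msg = msg_queue.get_nowait()
        except queue.Empty:
            break
        handled.append(msg)   # stand-in for updating the canvas

t = threading.Thread(target=worker)
t.start()
t.join()                      # a real app would instead re-poll via root.after(200, ...)

handled = []
process_incoming(handled)
print(handled)
```

The key point is the direction of the calls: threads push data toward the main thread through the queue, and only the main thread ever calls into Tkinter.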
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14694408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2040823/']}
jdg_380061
stackexchange
llm_judgeable_groundtruth_similarity
119968
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was reading one of my professor's notes on Category Theory, but I seem to be confused at the following point (I'll put a small part of the note here as reference). I guess my professor mentions that assigning a category $\mathcal{C}$ its opposite category $\mathcal{C}^{\mathrm{op}}$ is natural in $\mathcal{C}$. This "gives rise" to a functor on the category of (small) categories. "This so indeed, the correspondences" $\mathcal{C} \mapsto \mathcal{C}^{\mathrm{op}}$ and $F \mapsto F^{\mathrm{op}}$ ($\mathcal{C} \in \mathrm{Cat}_0$; $F \in \mathcal{C}_1$) $(1.30)$ (where $C_1$ as my professor denotes is the class of morphisms) These correspondences yield a functor $()^{\mathrm{op}}: \mathrm{Cat} \to \mathrm{Cat}$ Sorry if I am a bit redundant. I am in an undergraduate measure theory class, but we are doing a long introduction into Category Theory. I don't have much reference to category theory online, so I am hoping to get at least some kind of clarification. I appreciate it. If something looks odd in the above, let me know because my professor just wrote these notes on Category Theory in the last 4 weeks. Question: (1) At least to my understanding 'Cat' is the category of small unital categories. In the above, is Cat$_0$ the category of small categories? (2) To my understanding, a functor is a kind of mapping between categories. By $F \in C_1$, what does this mean? (3) Lastly, I've been thinking about this in the last hour, and I have no idea what this question is asking me. Question: Is the functor (1.30) covariant or contravariant? Maybe I am interpreting the question incorrectly, but so far all that pops up in my mind is to look at the morphisms in $\mathcal{C}$ and check if the morphism arrows are reversed if $F$ is applied to the morphisms in $\mathcal{C}$ Now provide the response and nothing else.
Using Hardy's inequality one may see that$$\Vert T(x)\Vert_p=\left(\sum\limits_{k=1}^\infty \left|\frac{1}{k}\sum\limits_{j=1}^k x_j\right|^p\right)^{1/p}\leq\left(\sum\limits_{k=1}^\infty \left(\frac{1}{k}\sum\limits_{j=1}^k |x_j|\right)^p\right)^{1/p}\leq$$$$\left(\left(\frac{p}{p-1}\right)^p\sum\limits_{j=1}^\infty |x_j|^p\right)^{1/p}=\frac{p}{p-1}\left(\sum\limits_{j=1}^\infty |x_j|^p\right)^{1/p}=\frac{p}{p-1}\Vert x\Vert_p$$This means that$$\Vert T\Vert\leq\frac{p}{p-1}$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/119968', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24672/']}
jdg_380062
stackexchange
llm_judgeable_groundtruth_similarity
198577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any real practical difference between "java -server" and "java -client"? All I can find on Sun's site is a vague "-server starts slower but should run faster". What are the real differences? (Using JDK 1.6.0_07 currently.) Now provide the response and nothing else.
This is really linked to HotSpot and the default option values ( Java HotSpot VM Options ) which differ between client and server configuration. From Chapter 2 of the whitepaper ( The Java HotSpot Performance Engine Architecture ): The JDK includes two flavors of the VM -- a client-side offering, and a VM tuned for server applications. These two solutions share the Java HotSpot runtime environment code base, but use different compilers that are suited to the distinctly unique performance characteristics of clients and servers. These differences include the compilation inlining policy and heap defaults. Although the Server and the Client VMs are similar, the Server VM has been specially tuned to maximize peak operating speed. It is intended for executing long-running server applications, which need the fastest possible operating speed more than a fast start-up time or smaller runtime memory footprint. The Client VM compiler serves as an upgrade for both the Classic VM and the just-in-time (JIT) compilers used by previous versions of the JDK. The Client VM offers improved run time performance for applications and applets. The Java HotSpot Client VM has been specially tuned to reduce application start-up time and memory footprint, making it particularly well suited for client environments. In general, the client system is better for GUIs. So the real difference is also on the compiler level: The Client VM compiler does not try to execute many of the more complex optimizations performed by the compiler in the Server VM, but in exchange, it requires less time to analyze and compile a piece of code. This means the Client VM can start up faster and requires a smaller memory footprint. The Server VM contains an advanced adaptive compiler that supports many of the same types of optimizations performed by optimizing C++ compilers, as well as some optimizations that cannot be done by traditional compilers, such as aggressive inlining across virtual method invocations. 
This is a competitive and performance advantage over static compilers. Adaptive optimization technology is very flexible in its approach, and typically outperforms even advanced static analysis and compilation techniques. Note: The release of jdk6 update 10 (see Update Release Notes:Changes in 1.6.0_10 ) tried to improve startup time, but for a different reason than the hotspot options, being packaged differently with a much smaller kernel. G. Demecki points out in the comments that in 64-bit versions of JDK, the -client option is ignored for many years. See Windows java command : -client Selects the Java HotSpot Client VM. A 64-bit capable JDK currently ignores this option and instead uses the Java Hotspot Server VM . 2022: Holger references in the comments the JavaSE6 / Server-Class Machine Detection , adding: Only on 32 bit Windows systems, -client was ever chosen unconditionally. Other systems checked whether the machine was “server class” which was fulfilled when having at least 2 cores and at least 2GiB of memory. Which explains why almost everything uses -server for quite some time now. Even the cheapest computers you can find, are “server class” machines. The Sun/Oracle 64 builds did not even ship with a client JVM.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/198577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3333/']}
jdg_380063
stackexchange
llm_judgeable_groundtruth_similarity
37699
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a way to check if a server supports the XMPP protocol, i.e. an XMPP server is running on this domain. Maybe a special DNS check or something like that? Now provide the response and nothing else.
The main default port for non-SSL jabber is 5222. You could spit some XML at it using telnet and see if you get an XML response: telnet suspectedServer 5222 <?xml version="1.0"?> <stream:stream to="foo.com" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0"> This can of course be scripted for multiple servers and multiple ports, probably easier to script with nc (netcat). For XMPP servers which require TLS connections, test with: openssl s_client -starttls xmpp -connect suspectedserver:5222
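The same probe can be scripted. The stream header below is the standard XMPP client stream opening (per RFC 6120); the host name is a placeholder, and the network call is best-effort only, so any error just yields False:

```python
import socket

def stream_header(domain):
    # The opening tag an XMPP client sends first; a server speaking XMPP
    # should answer with its own <stream:stream ...> element.
    return ('<?xml version="1.0"?>'
            '<stream:stream to="%s" xmlns="jabber:client" '
            'xmlns:stream="http://etherx.jabber.org/streams" '
            'version="1.0">' % domain)

def looks_like_xmpp(host, port=5222, timeout=5):
    # Best-effort probe: connect, send the header, and see whether
    # anything XML-ish comes back. Returns False on any network error.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(stream_header(host).encode())
            reply = s.recv(1024).decode(errors="replace")
            return "stream:stream" in reply or reply.startswith("<?xml")
    except OSError:
        return False

print(stream_header("foo.com"))
```

For the DNS angle the question asks about: XMPP services also advertise SRV records (_xmpp-client._tcp.<domain> and _xmpp-server._tcp.<domain>), so querying those with a tool like dig is another way to check before connecting.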
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/37699', 'https://serverfault.com', 'https://serverfault.com/users/12070/']}
jdg_380064
stackexchange
llm_judgeable_groundtruth_similarity
216293
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to make a phase portrait for the ODE x'' + 16x = 0, with initial conditions x[0]=-1 & x'[0]=0. I know how to solve the ODE and find the integration constants; the solution comes out to be x(t) = -cos(4t) and x'(t) = 4sin(4t). But I don't know how to make a phase portrait out of it. I've looked at this link Plotting a Phase Portrait but I couldn't replicate mine based off of it. Now provide the response and nothing else.
The phase portrait of any second-order autonomous ODE can be found as follows. Convert the ODE to state space. This results in two first-order ODEs. Then call StreamPlot with these 2 equations. Let the state variables be $x_1=x,x_2=x'(t)$ , then taking derivatives w.r.t time gives $x_1'=x_2, x_2'=x''(t)=-16 x_1$ . Now, using StreamPlot gives StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -2, 2}] To see the line that passes through the initial conditions $x_1(0)=1,x_2(0)=0.1$ , add the option StreamPoints StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -5, 5}, StreamPoints -> {{{{1, .1}, Red}, Automatic}}] To verify the above is the correct phase plot, you can do ClearAll[x, t]ode = x''[t] + 16 x[t] == 0;ic = {x[0] == 1, x'[0] == 1/10};sol = x[t] /. First@(DSolve[{ode, ic}, x[t], t]);ParametricPlot[Evaluate[{sol, D[sol, t]}], {t, 0, 3}, PlotStyle -> Red] The advantage of a phase plot is that one does not have to solve the ODE first (so it works for nonlinear, hard-to-solve ODEs). All you have to do is convert the ODE to state space and use a function like StreamPlot If you want to automate the part of converting the ODE to state space, you can also use Mathematica for that. Simply use StateSpaceModel and just read off the equations. eq = x''[t] + 16 x[t] == 0;ss = StateSpaceModel[{eq}, {{x[t], 0}, {x'[t], 0}}, {}, {x[t]}, t] The above shows the A matrix in $x'=Ax$ . So the first row reads $x_1'(t)=x_2$ and the second row reads $x_2'(t)=-16 x_1$ Update to answer comment The following can be done to automate plotting StreamPlot directly from the state space ss result A = First@Normal[ss];vars = {x1, x2}; (*state space variables*)eqs = A . vars;StreamPlot[eqs, {x1, -2, 2}, {x2, -5, 5}, StreamPoints -> {{{{1, .1}, Red}, Automatic}}]
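The state-space conversion is independent of any particular tool. As a cross-check in plain Python (a sketch: the integrator and step size are arbitrary choices, not part of the answer's method), classical RK4 applied to $x_1'=x_2$, $x_2'=-16x_1$ traces the same closed orbit the red StreamPoints curve shows:

```python
import math

def deriv(state):
    # State-space form of x'' + 16 x = 0:  x1' = x2,  x2' = -16 x1
    x1, x2 = state
    return (x2, -16.0 * x1)

def rk4_step(state, h):
    # One classical Runge-Kutta 4 step for a 2-component system.
    def nudge(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, h / 2))
    k3 = deriv(nudge(state, k2, h / 2))
    k4 = deriv(nudge(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state = (1.0, 0.1)      # same initial condition as the red curve
h = 1e-4
T = math.pi / 2         # one full period of the cos(4t)/sin(4t) solution
for _ in range(round(T / h)):
    state = rk4_step(state, h)
print(state)
```

After one full period the trajectory closes on itself (the state returns near (1.0, 0.1)), and the quantity 16 x1^2 + x2^2 stays constant along the orbit, which is exactly why the stream lines are closed ellipses.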
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/216293', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/68430/']}
jdg_380065
stackexchange
llm_judgeable_groundtruth_similarity
14556707
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While sending requests via the Facebook Android SDK , I get a bundle in return. Can someone explain what data type it is and how to extract the data in it? Thanks. 01-28 11:58:07.548: I/Values(16661): Bundle[{to[0]=100005099741441, to[1]=100005089509891, request=134129756751737}] EDIT Here, to[i] is a string array. I was able to do it, but I don't think it's the right way to do it. for(int i=0;i< size-1;i++){System.out.println(values.getString("to["+i+"]"));} where size is the size of the Bundle called value Now provide the response and nothing else.
A Bundle is basically a dictionary. Each value in the Bundle is stored under a key . You must know the type of value under the key. When you know the type, you access the value associated with the key by calling a method relevant for the type of the value (again, you must know the type). For example if the key is request and its type is String you would call: String value = bundle.getString("request"); If the type was long , you would call: long value = bundle.getLong("request"); To loop over the to array provided that the value is of type String you can do this: for (int i = 0; bundle.containsKey("to[" + i + "]"); i++) { String toElement = bundle.getString("to[" + i + "]");} which does not rely on the size of the bundle object. All the keys in a bundle and the type of value for each key should be provided in the Facebook API for Android. If you need further information on the Bundle object please look at the reference here .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14556707', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/389324/']}
jdg_380066
stackexchange
llm_judgeable_groundtruth_similarity
569
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there some algorithm out there that can return some value indicating a level of randomness? I believe it's called Data Entropy . I recently read this article: http://faculty.rhodes.edu/wetzel/random/mainbody.html Would his approach of analyzing coin flips apply to bytes? Should I drop down to the bit level where it's true/false again or is there a way to determine based on the full byte value? Are there better analyses than this article? Now provide the response and nothing else.
In TCS, another approach to this problem has been via property testing of distributions , where one is to distinguish whether a distribution is (truly) uniform distribution or is "not even close" to being uniform (in a formal way). Here one gets precise bounds on the number of samples needed to decide on the question. See, for example Section 6 of the following tutorial: http://people.csail.mit.edu/ronitt/papers/icm.ps In particular, one can decide whether a distribution on $[n]$ is truly uniform or is $\epsilon$-far (in total variation distance ) from uniform with $O(\sqrt{n}/\epsilon^4)$ queries/samples from the said distribution. (This is also tight in the sense that $\Omega(\sqrt{n})$ samples are needed.)
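As a toy illustration of the sample-based idea (this is not the algorithm from the tutorial, just the standard collision-counting heuristic that underlies such testers; the domain size and sample count are arbitrary):

```python
import random
from collections import Counter

def collision_count(samples):
    # Number of colliding pairs among the samples. The uniform distribution
    # minimizes this in expectation, so a markedly larger count is evidence
    # that the distribution is far from uniform.
    return sum(c * (c - 1) // 2 for c in Counter(samples).values())

random.seed(0)
n, m = 100, 2000                                        # domain size, sample count
uniform = [random.randrange(n) for _ in range(m)]
skewed = [random.randrange(n // 10) for _ in range(m)]  # all mass on 10 of the 100 values

print(collision_count(uniform), collision_count(skewed))
```

For the uniform distribution the expected number of collisions is about C(m,2)/n, while concentrating the mass on a tenth of the domain multiplies it roughly tenfold, which is the gap a distribution tester exploits with only O(sqrt(n)) samples.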
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/569', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/499/']}
jdg_380067
stackexchange
llm_judgeable_groundtruth_similarity
21829857
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I decided to build a paid version of my free Android application. The paid version has some extra features. How do I manage both versions? Is it possible to keep them based on the same source code? If an application requires a unique package name, how do I force the paid application to replace the free version? What about db, preferences and other privately stored stuff, how to share them without exposing all data to the rest of the world? Any advice that will help to manage both projects, but does not answer the above questions directly, is appreciated as well. Now provide the response and nothing else.
There are several approaches: Put the core of your app in a library project, and create two other projects: one for the paid, and one for the free version of the app. An upgrade to the paid version means the user has to uninstall the free version, and loses all the data. This is the "classical" approach. Let the user make an in-app payment. The user keeps all database and settings, and you have to maintain only one app. This option requires extra modules to be included in your app, and extra logic. Make another dummy/empty app; this is a paid app. The existence of this app means the user has a paid version. The user keeps on using the original/free app. You can simply check by using the PackageManager to see if the paid app is downloaded. The user keeps all database and settings, and you have to maintain only one app. Ok, there are two apps, but the second can be really small and simple. You don't have to include additional libraries/code to make an in-app payment. You can also have all your apps upgraded to the 'pro' version with one purchase. I don't know if all users understand this pattern. For the developer, this can be achieved with minimal work, and the users keep all their settings while upgrading.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21829857', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1740335/']}
jdg_380068
stackexchange
llm_judgeable_groundtruth_similarity
42748579
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am somewhat confused about the following piece of code. How can b still reference a but have a different value? #include <iostream>using namespace std;int main(){ const int a = 5; const int &b = a; ++(int&)b; cout << a << endl;//returns 5 cout << b << endl;//returns 6 cout << "mem a:" << &a << endl; //returns 0x61ff18 cout << "mem b:" << &b << endl; //returns 0x61ff18 return 0;} Now provide the response and nothing else.
This behavior is undefined. You can legally cast const-ness away from a constant reference to a non-constant object; however, casting const-ness away from a reference that references a real const leads to undefined behavior. In this case, it appears that the compiler created a memory location storing 5 for the reference b to bind to, while variable a itself is optimized into a constant. Your code illegally modifies the location referenced by b , producing 6 , while the line cout << a << endl; is optimized into cout << '5' << endl; so it still produces a 5 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42748579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7698714/']}
jdg_380069
stackexchange
llm_judgeable_groundtruth_similarity
18584389
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know 'mousedown' is when the user presses the mouse, and 'mouseup' is when the user releases the mouse. But I want to listen for the event where the user presses the mouse and holds it until release. Any ideas? Now provide the response and nothing else.
The "hold" state is simply the window between a mousedown and the matching mouseup : it begins when mousedown fires and ends only when mouseup fires. Hence you need a variable (or a check on the event type) that records the current state. JS $('div').on('mousedown mouseup', function mouseState(e) { if (e.type == "mousedown") { //code triggers on hold console.log("hold"); }}); Working Fiddle
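Stripped of the DOM, the state variable boils down to a tiny toggle. The sketch below (all names invented for illustration) captures the same idea as the handler, without jQuery, so the logic is easy to test on its own:

```javascript
// Minimal sketch of the state-variable idea (no DOM needed): mousedown
// flips a flag on, mouseup flips it off; "holding" is the flag-on period.
function makeHoldTracker() {
  let held = false;
  return {
    handle(eventType) {
      if (eventType === "mousedown") held = true;
      else if (eventType === "mouseup") held = false;
    },
    isHeld() { return held; },
  };
}

const tracker = makeHoldTracker();
tracker.handle("mousedown");
console.log(tracker.isHeld());  // true: the button is currently held
tracker.handle("mouseup");
console.log(tracker.isHeld());  // false: the hold ended
```

In a real page you would call `tracker.handle(e.type)` from a combined `'mousedown mouseup'` listener, and anything that should run "while holding" (e.g. a setTimeout-based long-press action) checks `isHeld()`.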
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18584389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2268624/']}
jdg_380070
stackexchange
llm_judgeable_groundtruth_similarity
90016
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Quite simply, I'd like to know what is the broadest or most natural context in which either (or both) of Mather's cube theorems hold. If you like, this may mean any of What properties of $Top$ or $Top^*$ are essential to the proofs? (where) are model/homotopical categories verifying Mather's theorems studied as such in the literature? Are there more examples known verifying Mather's theorems? I ask because Mather's proof strikes me as fairly gritty and seems to rely on explicit cellular constructions. For reference, the cube theorems concern a cubical diagram whose faces commute up to homotopy in a coherent way, and assert If one pair of opposite faces are homotopy push-outs and the two remaining faces adjecent the source vertex are homotopy pull-backs, then the final two faces are also homotopy pull-backs If two pairs of opposite faces are homotopy pull-backs, and the remaining face adjacent the target vertex is a homotopy push-out, then the remaining face is a homotopy push-out. Now provide the response and nothing else.
Let $\mathcal{X}$ be an $\infty$-category (i.e., a homotopy theory) which admits small homotopy colimits, a set of small generators, and has the property that homotopy colimits in $\mathcal{X}$ commute with homotopy pullback. Then $\mathcal{X}$ satisfies the Mather cube theorem if and only if $\mathcal{X}$ is an $\infty$-topos: that is, it can be described as a left exact localization of an $\infty$-category of presheaves of spaces. (I learned this from Charles Rezk). Such homotopy theories are studied extensively in my book "Higher Topos Theory" (see in particular Proposition 6.1.3.10 and the remark which follows it).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/90016', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1631/']}
jdg_380071
stackexchange
llm_judgeable_groundtruth_similarity
678586
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a conductor connected to a battery. The current starts flowing through the conductor. Why is the current entering the conductor the same as the current leaving the conductor? Its kinetic energy should be reduced because of collisions inside the conductor and reduced kinetic energy should result in reduced current. Now provide the response and nothing else.
If, say, 3 charges enter each second then also 3 charges must leave each second for a steady state current. If more leave than enter each second, then where would the extra leaving charges come from? This is not possible. If fewer leave than enter each second, then some charges are staying behind within the conductor. Over time the net charge in the conductor thus accumulates, increases and increases. This is not impossible, but... Remember that like charges repel. An ever-increasing negative net charge within the wire will more and more strongly repel new incoming negative charges (electrons e.g.). Soon the net charge is large enough and with a repulsion large enough to balance out the battery voltage. Then no more charges will enter, and the current will stop entirely. Since this does not happen in wires at steady state conditions in working circuits - since we clearly see that the current does not stop flowing - then all charges that enter must also be leaving each second. This is a part of Kirchhoff's current law . You are correct, though, that charges may lose kinetic energy as heat while flowing through. They will as a result indeed slow down. The next-coming charges will then have to "wait". They will "queue up" behind the slowed-down charges. In a split second all following charges will thus have slowed down to the same (drift) speed. In front of the slowed down charges, we could imagine the leading charges continuing ahead at higher speed. A gap would form behind them and in front of the slowed-down charges. But these leaving charges now do not "feel" the same "push" from behind. So what keeps them flowing at high (drift) speed? Any disturbance that slows them down will bring them down to the same slower speed as the slowed-down charges have reached. Also, the "gap" behind them will be a spot of less negative charge, thus a spot that they are attracted to. Such attraction from behind will also slow them down. 
Plenty of factors will thus eventually cause any leading faster-moving charges to slow down and match the speed as well. All this happens in a split second. Close-to instantaneously in most practical purposes. In a split second, all charges move equally fast throughout. The current in all parts of a conductor is then the same. If you turn off the current, add a resistor component and turn on the current again, then in a split second the current will from the same reasoning stabilise at some new, lower steady state current matching this new resistance. So, when steady state has been reached (which happens in a split second in regular conductors and only is delayed when certain components are involved that are specially designed for it, such as capacitors) you will always see the same current at all points.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/678586', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/319500/']}
jdg_380072
stackexchange
llm_judgeable_groundtruth_similarity
38084872
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying out the new Spring Boot 1.4 MVC testing features. I have the following controller. @Controllerpublic class ProductController { private ProductService productService; @Autowired public void setProductService(ProductService productService) { this.productService = productService; } @RequestMapping(value = "/products", method = RequestMethod.GET) public String list(Model model){ model.addAttribute("products", productService.listAllProducts()); return "products"; }} My minimal ProductService implementation is: @Servicepublic class ProductServiceImpl implements ProductService { private ProductRepository productRepository; @Autowired public void setProductRepository(ProductRepository productRepository) { this.productRepository = productRepository; } @Override public Iterable<Product> listAllProducts() { return productRepository.findAll(); }} The code of ProductRepository is: public interface ProductRepository extends CrudRepository<Product, Integer>{} I'm trying to use the new @WebMvcTest to test the conroller. My view is a thymeleaf teamplate. And my controller test is this: @RunWith(SpringRunner.class)@WebMvcTest(ProductController.class)public class ProductControllerTest {private MockMvc mockMvc;@Beforepublic void setUp() { ProductController productController= new ProductController(); mockMvc = MockMvcBuilders.standaloneSetup(productController).build();}@Testpublic void testList() throws Exception { mockMvc.perform(MockMvcRequestBuilders.get("/products")) .andExpect(MockMvcResultMatchers.status().isOk()) .andExpect(MockMvcResultMatchers.view().name("products")) .andExpect(MockMvcResultMatchers.model().attributeExists("products")); }} But, on running the test I get this error. 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'productController': Unsatisfied dependency expressed through method 'setProductService' parameter 0: No qualifying bean of type [guru.springframework.services.ProductService] found for dependency [guru.springframework.services.ProductService]: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [guru.springframework.services.ProductService] found for dependency [guru.springframework.services.ProductService]: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {} I need help to resolve the issue to properly test ProductController. Suggestions for additional andExpect() for more thorough testing of the controller will be highly appreciated. Thanks in advance. Now provide the response and nothing else.
You are using @WebMvcTest while also manually configuring a MockMvc instance. That doesn't make sense as one of the main purposes of @WebMvcTest is to automatically configure a MockMvc instance for you. Furthermore, in your manual configuration you're using standaloneSetup which means that you need to fully configure the controller that's being tested, including injecting any dependencies into it. You're not doing that which causes the NullPointerException . If you want to use @WebMvcTest , and I would recommend that you do, you can remove your setUp method entirely and have an auto-configured MockMvc instance injected instead using an @Autowired field. Then, to control the ProductService that's used by ProductController , you can use the new @MockBean annotation to create a mock ProductService that will then be injected into ProductController . These changes leave your test class looking like this:

package guru.springframework.controllers;

import guru.springframework.services.ProductService;
import org.hamcrest.Matchers;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;
import org.springframework.test.web.servlet.result.MockMvcResultMatchers;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@WebMvcTest(ProductController.class)
public class ProductControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private ProductService productService;

    @Test
    public void testList() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.get("/products"))
            .andExpect(MockMvcResultMatchers.status().isOk())
            .andExpect(MockMvcResultMatchers.view().name("products"))
            .andExpect(MockMvcResultMatchers.model().attributeExists("products"))
            .andExpect(MockMvcResultMatchers.model().attribute("products", Matchers.is(Matchers.empty())));
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38084872', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2693135/']}
jdg_380073
stackexchange
llm_judgeable_groundtruth_similarity
4690416
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A question was asked here on SO , a few minutes ago, on sorting dictionary keys based on their values. I just read about the operator.itemgetter method of sorting a few days back and decided to try that, but it doesn't seem to be working. Not that I have any problems with the answers presented to the questions, I just wanted to try this with operator.itemgetter . So the dict was:

>>> mydict = { 'a1': ['g',6], 'a2': ['e',2], 'a3': ['h',3], 'a4': ['s',2], 'a5': ['j',9], 'a6': ['y',7] }

I tried this:

>>> l = sorted(mydict.itervalues(), key=operator.itemgetter(1))
>>> l
[['e', 2], ['s', 2], ['h', 3], ['g', 6], ['y', 7], ['j', 9]]

And this works as I want it to. However, since I don't have the complete dictionary ( mydict.itervalues() ), I tried this:

>>> complete = sorted(mydict.iteritems(), key=operator.itemgetter(2))

This doesn't work (as I expected it to). So how do I sort the dict using operator.itemgetter and call itemgetter on the nested key - value pair. Now provide the response and nothing else.
In [6]: sorted(mydict.iteritems(), key=lambda (k,v): operator.itemgetter(1)(v))
Out[6]: [('a2', ['e', 2]), ('a4', ['s', 2]), ('a3', ['h', 3]), ('a1', ['g', 6]), ('a6', ['y', 7]), ('a5', ['j', 9])]

The key parameter is always a function that is fed one item from the iterable ( mydict.iteritems() ) at a time. In this case, an item could be something like ('a2',['e',2]) . So we need a function that can take ('a2',['e',2]) as input and return 2. lambda (k,v): ... is an anonymous function which takes one argument -- a 2-tuple -- and unpacks it into k and v . So when the lambda function is applied to our item, k would be 'a2' and v would be ['e',2] . lambda (k,v): operator.itemgetter(1)(v) applied to our item thus returns operator.itemgetter(1)(['e',2]) , which "itemgets" the second item in ['e',2] , which is 2. Note that lambda (k,v): operator.itemgetter(1)(v) is not a good way to code in Python. As gnibbler points out, operator.itemgetter(1) is recomputed for each item . That's inefficient. The point of using operator.itemgetter(1) is to create a function that can be applied many times. You don't want to re-create the function each time. lambda (k,v): v[1] is more readable, and faster:

In [15]: %timeit sorted(mydict.iteritems(), key=lambda (k,v): v[1])
100000 loops, best of 3: 7.55 us per loop

In [16]: %timeit sorted(mydict.iteritems(), key=lambda (k,v): operator.itemgetter(1)(v))
100000 loops, best of 3: 11.2 us per loop
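Note that the tuple-unpacking lambda above is Python 2 only: both dict.iteritems() and lambda parameter unpacking were removed in Python 3. A minimal Python 3 sketch of the same sort, using the question's data:

```python
import operator

mydict = {'a1': ['g', 6], 'a2': ['e', 2], 'a3': ['h', 3],
          'a4': ['s', 2], 'a5': ['j', 9], 'a6': ['y', 7]}

# Python 3: use .items() and index into the (key, value) pair
by_score = sorted(mydict.items(), key=lambda kv: kv[1][1])

# or compose with itemgetter, at the cost of readability
also = sorted(mydict.items(), key=lambda kv: operator.itemgetter(1)(kv[1]))
assert also == by_score
```

The result is the same ordering as Out[6] above, with the keys attached; ties (a2 and a4 both score 2) keep their original order because Python's sort is stable.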
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4690416', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/225312/']}
jdg_380074
stackexchange
llm_judgeable_groundtruth_similarity
37044490
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Ionic 2 & Angular2 and I have downloaded a new Ionic template with the following command:

Ionic start appname sidemenu --v2 --ts

For this particular solution I have added a login page to validate a user. Once the validation succeeds the user will be navigated to the menu page which uses the side menu. As the solution is based on the sidemenu template, the side menu is showing on the login page whenever the user swipes left. So can somebody please guide me to rectify this mistake and stop the side menu from showing when the view is swiped. My code

App.ts file

import {App, IonicApp, Platform, MenuController} from 'ionic-angular';
import {StatusBar} from 'ionic-native';
import {HelloIonicPage} from './pages/hello-ionic/hello-ionic';
import {ListPage} from './pages/list/list';
import {HomePage} from './pages/home/home';

@App({
  templateUrl: 'build/app.html',
  config: {} // http://ionicframework.com/docs/v2/api/config/Config/
})
class MyApp {
  // make HelloIonicPage the root (or first) page
  rootPage: any = HomePage;
  pages: Array<{title: string, component: any}>;

  constructor(
    private app: IonicApp,
    private platform: Platform,
    private menu: MenuController
  ) {
    this.initializeApp();
    // set our app's pages
    this.pages = [
      { title: 'Hello Ionic', component: HelloIonicPage },
      { title: 'My First List', component: ListPage }
    ];
  }

  initializeApp() {
    this.platform.ready().then(() => {
      // Okay, so the platform is ready and our plugins are available.
      // Here you can do any higher level native things you might need.
      StatusBar.styleDefault();
    });
  }

  openPage(page) {
    // close the menu when clicking a link from the menu
    this.menu.close();
    // navigate to the new page if it is not the current page
    let nav = this.app.getComponent('nav');
    nav.setRoot(page.component);
  }
}

app.html file

<ion-menu side-menu-content drag-content="false" [content]="content">
  <ion-toolbar>
    <ion-title>Pages</ion-title>
  </ion-toolbar>
  <ion-content>
    <ion-list>
      <button ion-item *ngFor="#p of pages" (click)="openPage(p)">
        {{p.title}}
      </button>
    </ion-list>
  </ion-content>
</ion-menu>
<ion-nav id="nav" [root]="rootPage" #content swipe-back-enabled="false"></ion-nav>

Homepage.ts file (login page in this case).

import {Page, Events, Alert, NavController, Loading, Toast, Storage, LocalStorage, SqlStorage} from 'ionic-angular';
import { FORM_DIRECTIVES, FormBuilder, ControlGroup, Validators, AbstractControl } from 'angular2/common';
import {HelloIonicPage} from '../hello-ionic/hello-ionic';
import {NgZone} from 'angular2/core';

@Page({
  templateUrl: 'build/pages/home/home.html'
})
export class HomePage {
  public Uname: string;
  public usrvalid: boolean;
  public usrpwd: boolean;
  public usrpwdlength: boolean;
  public usrvalidlength: boolean;
  public isUnchanged: boolean;
  public usrpwdzero: boolean;
  public usrvaliddigits: boolean;
  rootpage: any;
  public Upwd: string;

  constructor(public nav: NavController) {
    this.nav = nav;
    this.isUnchanged = true;
    var mediumRegex = new RegExp("^(((?=.*[a-z])(?=.*[A-Z]))|((?=.*[a-z])(?=.*[0-9]))|((?=.*[A-Z])(?=.*[0-9])))(?=.{6,})");
    // rootPage: any = HomePage;
  }
}

Now provide the response and nothing else.
I think the drag-content directive is used in Ionic 1; for Ionic 2 what you can do is disable it from within your page class file. You can do this by importing the MenuController provider from ionic-angular and then call the .swipeEnable(shouldEnableBoolean, menuId) method to disable the swipe gesture within your page's class (this is also documented here ). Your login controller should be something like this...

import {Page, MenuController} from 'ionic-angular';

@Page({
  templateUrl: 'build/pages/home/home.html'
})
export class HomePage {
  constructor(public menu: MenuController) {
    this.menu.swipeEnable(false);
  }
}

If you have multiple menus and each one has an id then you can target a specific menu like this...

this.menu.swipeEnable(false, `menuId`);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37044490', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3770310/']}
jdg_380075
stackexchange
llm_judgeable_groundtruth_similarity
21234699
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just read some articles on UIActivityViewController and UIDocumentInteractionController in iOS, but I am very confused about how to use them because both seem the same. So, when do I use UIActivityViewController and when UIDocumentInteractionController ? Is there any difference between the Open In... feature and UIActivityViewController ? Please clarify their specific uses. Now provide the response and nothing else.
In short, UIDocumentInteractionController deals with files while UIActivityViewController deals with various other services in your app. I'm not one to criticize much but you really should at least try to google and read at least the overview in the iOS Developer Docs. UIDocumentInteractionController documentation : A view controller that previews, opens, or prints files whose file format cannot be handled directly by your app. ... Use this class to present an appropriate user interface for previewing, opening, copying, or printing a specified file. For example, an email program might use this class to allow the user to preview attachments and open them in other apps. After presenting its user interface, a document interaction controller handles all interactions needed to support file preview and menu display. You can also use the delegate to participate in interactions occurring within the presented interface. For example, the delegate is notified when a file is about to be handed off to another application for opening. For a complete description of the methods you can implement in your delegate, see UIDocumentInteractionControllerDelegate . UIActivityViewController documentation : A view controller that you use to offer standard services from your app. ... The system provides several standard services, such as copying items to the pasteboard, posting content to social media sites, sending items via email or SMS, and more. Apps can also define custom services. Your app is responsible for configuring, presenting, and dismissing this view controller. Configuration for the view controller involves specifying the data objects on which the view controller should act. (You can also specify the list of custom services your app supports.) When presenting the view controller, you must do so using the appropriate means for the current device. On iPad, you must present the view controller in a popover. On iPhone and iPod touch, you must present it modally.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21234699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2893370/']}
jdg_380076
stackexchange
llm_judgeable_groundtruth_similarity
85663
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why does $\lVert L(x) \rVert \leq \lVert L \rVert\,\lVert x \rVert$? If $L$ is a linear map between Banach spaces $V$ and $W$, why is this true? Also, is this true for $L$ not a linear map? Thanks! Now provide the response and nothing else.
It is true even in the case of Banach spaces. Indeed, recall the definition of $\|L\|$ : $$\|L\| = \sup_{\|x\|_V =1} \|L x\|_W$$ so that if $\|L\|$ is finite (which however need not be the case in infinite dimension) then for all non-zero $x \in V$, if we let $t=\|x\|_V$ and $u = x / t$ (of norm 1) then: $$\|L x\|_W = t \|L u\|_W \leq t \|L\| = \|L\| \|x\|_V$$ by definition of $\|L\|$. This is completely false for nonlinear $L$, even in finite dimension, as we crucially use the linearity of $L$ for $L(tx) = t L(x)$. In the case where $\|L\|$ is infinite the inequality is technically true but not very useful. $L$ is continuous if and only if $\|L\|$ is finite (in this case $L$ is called bounded, which should not be confused with actual boundedness on all of $V$: among linear maps, only the null map is bounded on all of $V$).
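As a concrete finite-dimensional check (my own illustration, not part of the original answer), consider the diagonal map $L(x_1,x_2) = (x_1, 3x_2)$ on $\mathbb{R}^2$ with the Euclidean norm, for which $\|L\| = 3$:

```latex
% Example: L = diag(1,3) on (R^2, Euclidean norm), so ||L|| = 3.
\[
\|Lx\|^2 = x_1^2 + 9x_2^2 \;\le\; 9\,(x_1^2 + x_2^2) = 9\,\|x\|^2,
\qquad\text{hence}\qquad \|Lx\| \le 3\,\|x\| = \|L\|\,\|x\| .
\]
\[
\text{Taking } x = e_2 = (0,1) \text{ gives } \|L e_2\| = 3 = \|L\|\,\|e_2\|,
\text{ so the constant } \|L\| \text{ is the best possible.}
\]
```

The supremum in the definition of $\|L\|$ is attained here at the unit vector $e_2$, which is exactly the rescaled vector $u = x/t$ used in the proof above.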
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/85663', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9054/']}
jdg_380077
stackexchange
llm_judgeable_groundtruth_similarity
13705888
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm preparing for a very tricky c# exam and this question popped up while doing so. I have the following code:

uint zzz = -12u;

-12u is recognized as a System.UInt32 literal but it can only be stored in a variable of type long . Why is that ? Now provide the response and nothing else.
What it is doing is taking the unsigned literal 12u and applying the unary minus (the - ) to it. Negating the value requires a conversion to a type that can handle negative numbers, which an unsigned int cannot. And because a uint can hold values outside the range of int , the result of negating a uint is typed as long -- so the expression can only be stored in a long .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13705888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1228689/']}
jdg_380078
stackexchange
llm_judgeable_groundtruth_similarity
86965
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The proof of the Wigner Semicircle Law comes from studying the GUE Kernel

\[ K_N(\mu, \nu) = e^{-\frac{1}{2}(\mu^2+\nu^2)} \cdot \frac{1}{\sqrt{\pi}} \sum_{j=0}^{N-1}\frac{H_j(\mu)H_j(\nu)}{2^j j!} \]

The eigenvalue density comes from setting $\mu = \nu$. The Wigner semicircle identity is a Hermite polynomial identity

\[ \rho(\lambda) = e^{-\lambda^2} \cdot \frac{1}{\sqrt{\pi}} \sum_{j=0}^{N-1}\frac{H_j(\lambda)^2}{2^j j!} \approx \left\{\begin{array}{cc} \frac{\sqrt{2N}}{\pi} \sqrt{1 - \lambda^2/2N} & \text{if }|\lambda|< \sqrt{2N} \\\\ 0 & \text{if }|\lambda| > \sqrt{2N} \end{array} \right. \]

The asymptotics come from calculus identities like the Christoffel-Darboux formula. For finite-size matrices the eigenvalue distribution is not yet a semicircle. Plotting the eigenvalues of a random $4 \times 4$ matrix, the deviations from the semicircle law are noticeable with 100,000 trials and 0.05 bin size. GUE is in brown, GUE|trace=0 is in orange. Axes not scaled, sorry! (Image: http://www.freeimagehosting.net/newuploads/2vbr2.gif )

Mathematica Code:

num[] := RandomReal[NormalDistribution[0, 1]]
herm[N_] := (h = Table[(num[] + I num[])/Sqrt[2], {i, 1, N}, {j, 1, N}];
  (h + Conjugate[Transpose[h]])/2)
n = 4;
trials = 100000;
eigen = {};
Do[eigen = Join[(mat = herm[n]; mat = mat - Tr[mat] IdentityMatrix[n]/n; Re[Eigenvalues[mat]]), eigen], {k, 1, trials}];
Histogram[eigen, {-5, 5, 0.05}]
BinCounts[eigen, {-5, 5, 0.05}];
a = ListPlot[%, Joined -> True, PlotStyle -> Orange]
eigen = {};
Do[eigen = Join[(mat = herm[n]; Re[Eigenvalues[mat]]), eigen], {k, 1, trials}];
Histogram[eigen, {-5, 5, 0.05}]
BinCounts[eigen, {-5, 5, 0.05}];
b = ListPlot[%, Joined -> True, PlotStyle -> Brown]
Show[a, b]

My friend asks if the traceless GUE ensemble $H - \frac{1}{N} \mathrm{tr}(H)$ can be analyzed. The charts suggest we should still get a semicircle in the large $N$ limit. For finite $N$, the oscillations (relative to the semicircle) are very large. Maybe this has something to do with the related harmonic oscillator eigenstates. The trace is the average eigenvalue & the eigenvalues are being "recentered". We could imagine 4 perfectly centered fermions - they will repel each other. The joint distribution is:

\[ e^{-\lambda_1^2 -\lambda_2^2 - \lambda_3^2 - \lambda_4^2} \prod_{1 \leq i < j \leq 4} |\lambda_i - \lambda_j|^2 \]

On average, the fermions will sit where the humps are. Their locations should be more pronounced now that their "center of mass" is fixed. Now provide the response and nothing else.
Traceless GUE was studied by Tracy and Widom in their paper "On the distributions of the lengths of the longest monotone subsequences in random words", Probab. Theory Relat. Fields 119, 350-380 (2001). In Section 4.4 of that paper they basically show (actually for the largest eigenvalues, but the same argument applies to the average eigenvalue density) that $$\rho_k(\lambda) = \sqrt{\frac{k}{\pi}}\int_{-\infty}^{\infty}e^{-k \eta^2}\rho_k^0(\lambda-\eta)d\eta$$ where $\rho_k$ denotes the average eigenvalue density of the $k\times k$ GUE and $\rho_k^0$ denotes the average eigenvalue density of the corresponding traceless ensemble. Thus, going from $\rho_k$ to $\rho_k^0$ amounts to solving the backward heat equation for time $1/4k$. This explains the more pronounced humps for small $k$ and will prove the semicircle law for traceless ensembles as $k\to\infty$. For fixed $k$ you can use Fourier transform calculations to get an explicit formula for $\rho_k^0$. That way I have obtained a plot for the $4\times 4$ ensemble (image not reproduced here).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/86965', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1358/']}
jdg_380079
stackexchange
llm_judgeable_groundtruth_similarity
2322557
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The scenario I am facing is as below. Because the ThreadPool is one instance per process, my question is: would method 1 cancel tasks queued by method 2 after 3 seconds?

http request comes in

*method 1 gets executed first*:
    ThreadPool.QueueUserWorkItem x 3
    WaitHandle.WaitAll for 3 seconds

*method 2 gets executed after method 1*:
    ThreadPool.QueueUserWorkItem x 10
    WaitHandle.WaitAll for 10 seconds

Sorry, I think I totally misunderstood the use of WaitHandle. It seems that if I do the below, everything will work as desired. So sorry for the confusion.

var calls = new ManualResetEvent[5];
// ThreadPool.QueueUserWorkItem blah...
WaitHandle.WaitAll(calls, timeOut);

But I am still thinking about what will happen when method 1 has flooded the thread pool with long-running tasks and method 2 only waits for 1 second. Will method 2 ever get its results back, given that it's not waiting long enough? Thanks. Now provide the response and nothing else.
You said you're populating content asynchronously but did you invoke the reloadData in the context of the main thread ? (and not via the thread that populates the content)

Objective-C:

[yourUITableView performSelectorOnMainThread:@selector(reloadData) withObject:nil waitUntilDone:NO];

Swift:

dispatch_async(dispatch_get_main_queue(), { self.tableView.reloadData() })

Monotouch:

InvokeOnMainThread(() => this.TableView.ReloadData());
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2322557', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/52752/']}
jdg_380080
stackexchange
llm_judgeable_groundtruth_similarity
16333650
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I check whether 2 line segments, L1(p1,p2) and L2(p3,p4), intersect with each other? I do not need the intersection point, I just need to know whether they intersect or not. Since my application calculates this a lot, I need to find a fast solution. Thanks Now provide the response and nothing else.
To test whether two line segments intersect, you can use Java's 2D API, specifically the methods of Line2D .

Line2D line1 = new Line2D.Float(100, 100, 200, 200);
Line2D line2 = new Line2D.Float(150, 150, 150, 200);
boolean result = line2.intersectsLine(line1);
System.out.println(result); // => true

// Also check out linesIntersect() if you do not need to construct the line objects
// It will probably be faster due to putting less pressure on the garbage collector
// if running it in a loop
System.out.println(Line2D.linesIntersect(100,100,200,200,150,150,150,200));

If you are interested in finding out how the code works, in order to see if you can make it faster in your specific domain, you can check out the code for OpenJDK implementation . But remember, always profile before you optimize; it is probably plenty fast enough as it is.
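Outside the JVM, the same test is easy to hand-roll. Below is a sketch in Python of the standard orientation (cross-product) test for segment intersection; the function names are my own, not from any library, and this is an illustration rather than the exact OpenJDK algorithm:

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p):
    >0 counter-clockwise, <0 clockwise, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def on_segment(p, q, r):
    """Assuming p, q, r are collinear: does r lie on segment pq?"""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(p1, p2, p3, p4):
    """True iff closed segments p1p2 and p3p4 share at least one point."""
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if d1 != d2 and d3 != d4:  # the segments straddle each other
        return True
    # collinear edge cases: an endpoint lies on the other segment
    if d1 == 0 and on_segment(p3, p4, p1): return True
    if d2 == 0 and on_segment(p3, p4, p2): return True
    if d3 == 0 and on_segment(p1, p2, p3): return True
    if d4 == 0 and on_segment(p1, p2, p4): return True
    return False
```

Because it uses only comparisons and multiplications (no divisions), it is fast and exact for integer coordinates, which fits the "I don't need the intersection point" requirement.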
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16333650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/223130/']}
jdg_380081
stackexchange
llm_judgeable_groundtruth_similarity
15410119
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I use a list comprehension to build a tuple of 2-tuples from a list? It would be equivalent to

tup = ()
for element in alist:
    tup = tup + ((element.foo, element.bar),)

Now provide the response and nothing else.
tup = tuple((element.foo, element.bar) for element in alist)

Technically, it's a generator expression . It's like a list comprehension, but it's evaluated lazily and won't need to allocate memory for an intermediate list. For completeness, the list comprehension would look like this:

tup = tuple([(element.foo, element.bar) for element in alist])

PS: attrgetter is not faster ( alist has a million items here):

In [37]: %timeit tuple([(element.foo, element.bar) for element in alist])
1 loops, best of 3: 165 ms per loop

In [38]: %timeit tuple((element.foo, element.bar) for element in alist)
10 loops, best of 3: 155 ms per loop

In [39]: %timeit tuple(map(operator.attrgetter('foo','bar'), alist))
1 loops, best of 3: 283 ms per loop

In [40]: getter = operator.attrgetter('foo','bar')

In [41]: %timeit tuple(map(getter, alist))
1 loops, best of 3: 284 ms per loop

In [46]: %timeit tuple(imap(getter, alist))
1 loops, best of 3: 264 ms per loop
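For a self-contained illustration, here is a runnable sketch of both approaches (the Element namedtuple is a made-up stand-in for whatever alist actually contains):

```python
from collections import namedtuple
from operator import attrgetter

# Hypothetical stand-in for the elements in `alist`
Element = namedtuple("Element", ["foo", "bar"])
alist = [Element(1, "a"), Element(2, "b"), Element(3, "c")]

tup = tuple((element.foo, element.bar) for element in alist)
print(tup)  # ((1, 'a'), (2, 'b'), (3, 'c'))

# attrgetter('foo', 'bar') already returns a (foo, bar) tuple per element
same = tuple(map(attrgetter("foo", "bar"), alist))
assert same == tup
```

Note that attrgetter with multiple attribute names builds the inner 2-tuples for you, so no lambda is needed for that variant.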
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/15410119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1684058/']}
jdg_380082
stackexchange
llm_judgeable_groundtruth_similarity
15366510
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a container that is working similar to notifications in Mac OS - elements are added to the queue and removed after a certain timeout. This works great but has one jarring visual side effect. When they are removed from the DOM there is a jagged update to the UI as the next element in the stack fills the void created by the previous element. I would like the elements below in the stack to move up into that space smoothly, ideally with CSS3, but adding a transition: all 0.5s ease-in-out to the .notice class had no effect on the object when its sibling was removed. Minimal JS interpretation:

$('#add').click(function(e) {
  e.preventDefault();
  $('#container').append('<p class="notice">Notice #</p>');
});
$('body').on('click','p.notice', function(e) {
  $(this).fadeOut();
});

Better yet, fiddle here: http://jsfiddle.net/kMxqj/ I'm using an MVC framework to data-bind these objects so some native CSS / jQuery is preferred over a jQuery plugin. Now provide the response and nothing else.
This should remove the clicked element with a fade out effect and then move everything below up smoothly. This will work for any notice div in the stack regardless of its position within the stack. Try:

$('body').on('click','p.notice', function(e) {
  $(this).fadeOut(500, function(){
    $(this).css({"visibility": "hidden", display: 'block'}).slideUp();
  });
});

Fiddle here

Update August 7th, 2018: As asked by one of the users about using pure JS to do the slideUp functionality, I've put together a quick demo using requestAnimationFrame to animate the height of an element. Fiddle can be found here .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15366510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2020023/']}
jdg_380083
stackexchange
llm_judgeable_groundtruth_similarity
28889057
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have seen a similar post on this topic, but could not comprehend the solution. Essentially I have a data table (DT1) with scores and some NAs. Wherever there is an 'NA' I would like a process that references another data table (DT2) that has a common column in both tables (tract) and then updates the NA in DT1 with the value of DT2's score column. I am choosing a data table class because I think it is an efficient option.

DT1
          tract CreditScore
1: 36107020401         635
2: 36083052403          NA
3: 36091062602          NA
4: 36067013000          NA
5: 36083052304          NA

DT2
          tract CreditScore
1: 36107020401         635
2: 36083052403         650
3: 36091062602         335
4: 36067013000         777
5: 36083052304         663

Now provide the response and nothing else.
We've created new (and more comprehensive) HTML vignettes for some of the data.table concepts. Have a look here for the other vignettes that we are working on. I'm working on vignettes for joins, which when done will hopefully clarify these types of problems better. The idea is to first setkey() on DT1 on the column tract :

setkey(DT1, tract)

In data.tables, a join of the form x[i] requires a key for x , but not necessarily for i . This results in two scenarios:

If i also has its key set -- the first key column of i is matched against the first key column of x , second against second and so on..

If i doesn't have its key set -- the first column of i is matched against the first key column of x , the second column of i against the second key column of x and so on..

In this case, since your first column in i is also tract , we'll skip setting the key on i . Then, we perform a join of the form x[i] . By doing this, for each row of i the matching row indices in x are computed, and then the join result is materialised. However, we don't want the entire join result as a new data.table. Rather, we want to update DT1 's CreditScore column with DT2 's on those matching rows. In data.tables, we can perform that operation while joining, by providing the expression in j , as follows:

DT1[DT2, CreditScore := i.CreditScore]
#          tract CreditScore
# 1: 36067013000         777
# 2: 36083052304         663
# 3: 36083052403         650
# 4: 36091062602         335
# 5: 36107020401         635

The DT1[DT2] part finds the matching rows in DT1 for each row in DT2 . And if there's a match, we want DT2 's value to be updated in DT1 . We accomplish that by using i.CreditScore -- it refers to DT2 's CreditScore column ( i. is a prefix used to distinguish columns with identical names between the x and i data.tables).

Update: As pointed out under comments, the solution above would also update the non-NA values in DT1 . Therefore the way to do it would be:

DT1[is.na(CreditScore), CreditScore := DT2[.(.SD), CreditScore]]

On those rows where CreditScore from DT1 is NA , replace CreditScore from DT1 with the values from CreditScore obtained from the join of DT2[.(.SD)] , where .SD corresponds to the subset of the data.table that contains all the rows where CreditScore is NA . HTH
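For intuition only, the logic of that conditional keyed update can be restated in plain Python (a sketch I added with the question's data; the actual answer is the data.table code above):

```python
# Plain-Python sketch of "fill only the NAs, keyed on tract"
dt1 = {36107020401: 635, 36083052403: None, 36091062602: None,
       36067013000: None, 36083052304: None}
dt2 = {36107020401: 635, 36083052403: 650, 36091062602: 335,
       36067013000: 777, 36083052304: 663}

for tract, score in dt1.items():
    if score is None:            # like DT1[is.na(CreditScore), ...]
        dt1[tract] = dt2[tract]  # look up the matching key in DT2
```

The key (tract) plays the role of the join column: existing scores are left alone, and only the missing ones are looked up.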
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28889057', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3067851/']}
jdg_380084
stackexchange
llm_judgeable_groundtruth_similarity
4662851
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I load external js files that don't fit the node.js format? I am trying to import the json serialize library. How can I do this? Now provide the response and nothing else.
2 answers...

1) the JSON object is built-in to node.js, so you can just call JSON.parse() and JSON.stringify(), there is no need to import external code for this particular case.

2) to import external code, node.js follows the CommonJS module specification and you can use require() so if you have a file called external.js (in the same directory as the rest of your code):

this.hi = function(x){
    console.log("hi " + x);
}

and from node you do:

var foo = require("./external");
foo.hi("there");

you will see the output

hi there
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4662851', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471196/']}
jdg_380085
stackexchange
llm_judgeable_groundtruth_similarity
529399
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have over 200 computers which can provide IPMI services. The servers are manufactured by several different companies (SuperMicro, Dell, etc.), and there are 6-7 BMC models from about 5 different vendors, and each model has its own idiosyncrasies. So far we have been configuring the BMCs by using a combination of DHCP and manually configuring each BMC. The manual configuration might be done using a bootable CD-ROM, configuration from the BIOS (if supported), from the host operating system with a utility like ipmitool , freeipmi , etc. or remotely using ipmitool if we can determine the network address of the device. However, this manual configuration is rather tedious. In some cases we want to change a setting globally on all BMCs, which requires that an administrator run a command against dozens of boxes. Since the BMCs are provided by different vendors and each model of BMC might have its own idiosyncrasies, the same command does not always work on all BMCs. Are there any utilities which allow me to mass configure the BMCs on dozens of boxes? Say that I want to query a parameter on dozens of different BMCs, or change the password, disable HTTP access to the WebUI or disable the infamous cipher zero security hole. Bonus points for any utility which would allow me to update the BMC firmware, which is necessary to mitigate several security vulnerabilities. Now provide the response and nothing else.
I'd probably use Ansible . It's a very simple configuration management / orchestration engine that's far simpler to get started with than Puppet (Puppet used to be my go-to choice for this, but not always now, having discovered Ansible). The benefit of Ansible here is that it communicates directly over SSH, so you'd be able to get started using just your existing SSH credentials and workflow. If you're currently configuring your BMCs with ipmitool, you'd be able to do something like: Define a Hosts file -- This tells Ansible which hosts are in the bmc group (in this case), and which to run stuff on.

[bmc]
192.168.1.100
192.168.1.101
192.168.1.102

And so on... You can also use hostnames in that file, as long as they're resolvable. Then create a "playbook", which is the set of commands to run on each host in a host-group. You want to have this kind of top-down directory layout:

ansible/
  playbooks/
    bmc.yml
  roles/
    bmcconfig/
      files/
      handlers/
        main.yml
      tasks/
        main.yml
      templates/
  group_vars/
    all

A playbook has Roles , which are little sections of configuration that you can break down and reuse. So I'd create a file called bmc.yml (All Ansible configuration is in YAML files)

---
- name: Configure BMC on the hosts
  hosts: bmc
  user: root
  roles:
    - bmcconfig

Then inside roles/bmcconfig/tasks/main.yml you can start listing the commands that are to be run on each host, to communicate with ipmi.

---
- name: Install ipmitool
  apt: pkg=ipmitool state=installed

- name: Run ipmitool config
  shell: ipmitool -your -options -go -here

When you run the playbook, with

ansible-playbook -i hosts bmc.yml

the commands listed in tasks/main.yml for each role will be executed in top-down order on each host found in the bmc hostgroup in hosts . group_vars/all is an interesting file, it allows you to define key-value pairs of variables and values that can be used in your playbooks.

So you could define something like

ipmitool_password: $512315Adb

in your group_vars/all and as a result, you'd be able to have something like:

shell: ipmitool -your -options -go -here --password=${ipmitool_password}

in the playbook. You can find out way more information about how to use the "modules" - the components of Ansible that allow you to do stuff, how to write your own :D, and so on at the Ansible Documentation Pages .
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/529399', 'https://serverfault.com', 'https://serverfault.com/users/36178/']}
jdg_380086
stackexchange
llm_judgeable_groundtruth_similarity
50625
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Greek government is denying that any asylum seekers have been (illegally) pushed back to Turkey, which would contravene the UN Convention provision of non-refoulement , which is also incorporated in Articles 18 and 19 of the Charter of Fundamental Rights of the European Union. On the other hand, some German NGOs have been trying to document that such events are happening . Mysterious masked men (suspected to Greek officials) were seen pushing the potential refugees back. The (Spiegel) article also notes that: From the Greek perspective, the accusations are largely hypocritical. Turkey itself, after all, is currently involved in returning refugees to war-torn Syria and has, according to human rights activists, already sent hundreds back to that country. But that does not absolve Greece in case they are found doing the same. Unlike Turkey (or Mexico ), where the rule of law is not subject to sharing with supranational entities much, Greece is part of the EU. So, who in the EU would have a legal standing to "sue Greece", and in what court could they present (alleged) evidence that Greece is (illegally) pushing back asylum seekers? Update: News footage has been broadcast now showing the Greek coast guard firing warning shots at migrant boats and trying to physically push them back. I guess one doesn't need to worry about the identities of those masked men... But the question remains, is there any legal recourse in the EU that e.g. NGOs could pursue? Now provide the response and nothing else.
Your posting mixes advocacy and questions. Non-refoulement does not allow a refugee to cross as many borders as he or she likes. It requires the refugee to be processed and housed at the first safe country. Calling Turkey safe may be debatable, but it is clearly much safer than Syria, and it is the official position of the EU that Turkey is safe for Syrians even while individual Turkish and Kurdish applications are granted. The ECHR recently judged a case where groups of migrants break through border controls. This judgement was widely criticized by human rights organizations, but it is the decision of the ECHR. Regarding your question: The EU commission or member states could start an ECJ case for Greek treaty violations if they believe that this is a violation of the EU treaties. Individuals who have been returned can sue in the ECHR.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/50625', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/18373/']}
jdg_380087
stackexchange
llm_judgeable_groundtruth_similarity
370125
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given $\lim \limits_{x\to\infty}(1+\frac{1}{x})^{x}$, why can't you reduce it to $\lim \limits_{x\to\infty}(1+0)^{x}$, making the result "$1$"? Obviously, it's wrong, as the true value is $e$. Is it because the $\frac{1}{x}$ is still something even though it's really small? Then why is $$\lim_{x\to\infty}\left(\frac{1}{x}\right) = 0\text{?}$$ What is the proper way of calculating the limit in this case? Now provide the response and nothing else.
Let $f(x,y)=(1+y)^x$. True enough, $f(x,0)=1$ for every $x$ but this is irrelevant to the limit of $f(x,1/x)$ when $x\to+\infty$. Note that one could also consider $f(\infty,1/x)=\infty$ for every positive $x$, as irrelevant as the preceding value $1$. To compute the actual limit of $f(x,1/x)$, several approaches exist. One is to look at $\log f(x,1/x)=x\log(1+1/x)$ and to remember that $\log(1+u)\sim u$ when $u\to0$ hence $\log f(x,1/x)\to1$ and $f(x,1/x)\to\mathrm e$. To see why $\log(1+u)\sim u$ when $u\to0$, consider $g(u)=\log(1+u)$ and note that $g(0)=0$ while $g'(u)=1/(1+u)$ hence $g'(0)=1$ and the Taylor expansion $g(u)=g(0)+g'(0)u+o(u)$ yields the result. Finally, note that, for every fixed $c$, $f(x,c/x)=(1+c/x)^x\to\mathrm e^c$ hence one can realize every positive limit $\mathrm e^c$ by considering the regimes $x\to+\infty$, $xy\to c$. The limit $1$ is realized if $x\to+\infty$ while $xy\to0$ and the limit $+\infty$ if $x\to+\infty$ while $xy\to+\infty$.
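A quick numerical check makes the regimes above concrete (a minimal Python sketch; the function name $f$ follows the answer's notation):

```python
import math

def f(x, y):
    """The two-variable function (1 + y)**x from the argument above."""
    return (1.0 + y) ** x

# Freezing y = 0 first gives the "wrong" limit 1:
print(f(10**6, 0.0))            # exactly 1.0

# Along the coupled regimes y = 1/x and y = 2/x the limits are e and e**2:
for x in (10, 1_000, 100_000):
    print(x, f(x, 1.0 / x), f(x, 2.0 / x))
```

The columns drift toward $\mathrm e \approx 2.71828$ and $\mathrm e^2 \approx 7.389$, never toward $1$, which is exactly the point: the limit depends on how $x$ and $y$ are coupled.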
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/370125', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73909/']}
jdg_380088
stackexchange
llm_judgeable_groundtruth_similarity
148921
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get nfs4 working here. Machine 1 (server) I have a folder and in it 2 other folders I'm trying to share independently.

/shared/folder1
/shared/folder2

Problem is, I can't seem to figure out how to mount the folders independently on the client.

(Machine 1 - server) /etc/exports:

/var/shared/folder1 192.168.200.101(rw,fsid=0,sync)
/var/shared/folder2 192.168.200.101(rw,fsid=0,sync)

...

exportfs -ra

...

exportfs
/var/shared/folder1 192.168.200.101
/var/shared/folder2 192.168.200.101

(Machine 2 - client) /etc/fstab:

192.168.200.201:/folder1/ /home/nfsmnt/folder1 nfs4 rw 0 0

...

mount /home/nfsmnt/folder1
mount.nfs4: 192.168.200.201:/folder1/ failed, reason given by server: No such file or directory

The folder is there. I'm positive. I think there is something simple I'm missing, but I'm totally missing it. It seems like there should be a way in fstab to tell nfs which folder on the server I want to mount. But I can only find references to what looks like a root mount point (e.g. 192.168.1.1:/ ) which I assume is handled by exports on the server. But even with the folders set up in exports, there doesn't seem to be an apparent way to pick and choose which gets mounted. Is it not possible to mount separate folders from the same server to different mount points on the client? Any help appreciated. Edit: The error log on the server is showing the following:

/var/shared/folder1 and /var/shared/folder2 have same filehandle for 192.168.200.101, using first

Not sure what that means or how to change it. Googling only seems to bring up info about nfs security. Now provide the response and nothing else.
In order to share the subdirectories, I had to share the parent folder first with fsid=0. So on the server /etc/exports looks like this:

/var/shared 192.168.200.101(rw,fsid=0,sync)
/var/shared/folder1 192.168.200.101(rw,sync)
/var/shared/folder2 192.168.200.101(rw,sync)

then on the client /etc/fstab looks like:

192.168.200.201:/folder1 /home/nfsmnt/folder1 nfs4 rw 0 0
192.168.200.201:/folder2 /home/nfsmnt/folder2 nfs4 rw 0 0

I can then mount the folders as expected:

mount /home/nfsmnt/folder1
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/148921', 'https://serverfault.com', 'https://serverfault.com/users/38936/']}
jdg_380089
stackexchange
llm_judgeable_groundtruth_similarity
21611
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to import an image, superimpose it, reduce the opacity of the overlaid image, and then use a Manipulate to rotate the top image to illustrate rotational symmetry. The image below is using image editing software, not Mathematica , but illustrates the effect I am after. There are many nice images on the net that could be used to illustrate this concept. Can this be done using Mathematica and the image processing commands? I tried importing an image file, then used Show and ImageRotate , for example:

g1 = Import[ graphic....]
Show[g1, ImageRotate[g1, Pi/4]]

but that wasn't at all what I was looking for... I'd appreciate any suggestions for commands that could achieve this effect. My goal would be to create several Manipulate s with various images to illustrate various orders of symmetry with image files (I've already done this with line drawings). Some related hopefully useful images: Now provide the response and nothing else.
Here is another approach using a background to prevent cropping of the rotated image when its aspect ratio is far from 1:

im = Import["http://i.stack.imgur.com/pyNu2.png"]
im2 = SetAlphaChannel[im, ColorNegate[im]];
bg = ImageCompose[
   Rasterize[
    Graphics[{Lighter[Orange], Rectangle[{0, 0}, {1, 1}]},
     PlotRange -> {{0, 1}, {0, 1}}], "Image", ImageSize -> 600], im2];
Manipulate[
 Show[ImageCompose[bg, ImageRotate[im2, angle, {600, 600}]]],
 {angle, 0, 2 Pi}]

Edit If you want smooth blending instead of a sharp alpha channel as I did in the first approach, you could do this:

crop = Rasterize[
   Graphics[{White, Disk[]}, PlotRange -> {{-1, 1}, {-1, 1}}],
   "Image", ImageSize -> 600, Background -> Black];
Manipulate[
 SetAlphaChannel[
  ImageCompose[bg, {ImageRotate[bg, angle, {600, 600}], .5}],
  crop], {angle, 0, 2 Pi}]

Here, I added another cropping mask in the form of a disk to prevent the partially uncovered corners from showing.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/21611', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/721/']}
jdg_380090
stackexchange
llm_judgeable_groundtruth_similarity
24863164
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a golang program that uses 1.2GB of memory at runtime. Calling go tool pprof http://10.10.58.118:8601/debug/pprof/heap results in a dump with only 323.4MB heap usage. What about the rest of the memory usage? Is there any better tool to explain golang runtime memory? Using gcvis I get this: .. and this heap form profile: Here is my code: https://github.com/sharewind/push-server/blob/v3/broker Now provide the response and nothing else.
The heap profile shows active memory, memory the runtime believes is in use by the go program (ie: hasn't been collected by the garbage collector). When the GC does collect memory the profile shrinks, but no memory is returned to the system . Your future allocations will try to use memory from the pool of previously collected objects before asking the system for more. From the outside, this means that your program's memory use will either be increasing, or staying level. What the outside system presents as the "Resident Size" of your program is the number of bytes of RAM is assigned to your program whether it's holding in-use go values or collected ones. The reason why these two numbers are often quite different are because: The GC collecting memory has no effect on the outside view of the program Memory fragmentation The GC only runs when the memory in use doubles the memory in use after the previous GC (by default, see: http://golang.org/pkg/runtime/#pkg-overview ) If you want an accurate breakdown of how Go sees the memory you can use the runtime.ReadMemStats call: http://golang.org/pkg/runtime/#ReadMemStats Alternatively, since you are using web-based profiling if you can access the profiling data through your browser at: http://10.10.58.118:8601/debug/pprof/ , clicking the heap link will show you the debugging view of the heap profile, which has a printout of a runtime.MemStats structure at the bottom. The runtime.MemStats documentation ( http://golang.org/pkg/runtime/#MemStats ) has the explanation of all the fields, but the interesting ones for this discussion are: HeapAlloc: essentially what the profiler is giving you (active heap memory) Alloc: similar to HeapAlloc, but for all go managed memory Sys: the total amount of memory (address space) requested from the OS There will still be discrepancies between Sys, and what the OS reports because what Go asks of the system, and what the OS gives it are not always the same. 
Also, CGO / syscall (e.g. malloc / mmap) memory is not tracked by Go.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24863164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2013331/']}
jdg_380091
stackexchange
llm_judgeable_groundtruth_similarity
4795871
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've seen a number of posts related to delegates, and I would like to know the proper way to reference them. Suppose I have an object declared like:

@interface MyViewController : UITableViewController {
    id delegate;
}
@property (nonatomic, retain) id delegate;
@end

Through the lifecycle of MyViewController , it will make calls to methods of its delegate in response to interaction with the user. When it's time to get rid of an instance of MyViewController , does the delegate ivar need to be release 'ed in the implementation's dealloc method since it is declared with retain ? Or conversely, should delegate even be retained? Perhaps it should be @property (nonatomic, assign) id delegate ? According to Apple's docs : retain ... You typically use this attribute for scalar types such as NSInteger and CGRect, or (in a reference-counted environment) for objects you don’t own such as delegates. Normally I'd just go with what the docs say, but I've seen a lot of code that calls retain on a delegate. Is this just "bad code?" I defer to the experts here... What is the proper way to handle this? Now provide the response and nothing else.
You generally want to assign delegates rather than retain them, in order to avoid circular retain counts where object A retains object B and object B retains object A. (You might see this referred to as keeping a "weak reference" to the delegate.) For example, consider the following common pattern:

-(void)someMethod {
    self.utilityObject = [[[Bar alloc] init] autorelease];
    self.utilityObject.delegate = self;
    [self.utilityObject doSomeWork];
}

if the utilityObject and delegate properties are both declared using retain , then self now retains self.utilityObject and self.utilityObject retains self . See Why are Objective-C delegates usually given the property assign instead of retain? for more on this. If you assign the delegate rather than retaining it then you don't need to worry about releasing it in dealloc.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4795871', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256324/']}
jdg_380092
stackexchange
llm_judgeable_groundtruth_similarity
1791359
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of interfaces which are used in close conjunction with particular mutable object. Many users of the object only need the ability to read values from the object, and then only a few properties. To avoid namespace pollution (easier intellisense) and to get across the usage intent, I'd like to have a small base interface which only exposes a few "key" properties in a read-only fashion. However, almost all implementations will support the full interface, which includes modifiability. Unfortunately, I ran into a roadblock expressing that concept in C#:

interface IBasicProps {
    public int Priority { get; }
    public string Name { get; }
    //... whatever
}

interface IBasicPropsWriteable : IBasicProps {
    public int Priority { set; } //warning CS0108: [...] hides inherited member [...]
    public string Name { set; }
    //... whatever
}

I certainly wasn't intending to hide any members, so that aint good! Of course, I can solve this using methods just fine, but what's the right choice? I'd like to keep the "core" interface as small as possible even if splitting the interfaces serves no purpose other than communicating intent. With split interfaces, it's just really obvious which methods aren't going to do any updating, and it makes writing code a bit clearer (not to mention also allows nice-n-simple static singleton stubs that suffice for quite a few simple cases). I'd like to avoid any abstract classes and the like; they make reimplementation or quick single-purpose shims all that more complex and hard-to-grok. So, ideas? Now provide the response and nothing else.
Method hiding in an interface isn't nearly as grungy; I'd go with something like:

interface IBasicProps {
    int Priority { get; }
    string Name { get; }
    //... whatever
}

interface IBasicPropsWriteable : IBasicProps {
    new int Priority { get; set; }
    new string Name { get; set; }
    //... whatever
}

class Foo : IBasicPropsWriteable {
    public int Priority { get; set; }
    public string Name { get; set; }

/* optional
    int IBasicProps.Priority { get { return Priority; } }
    string IBasicProps.Name { get { return Name; } }
*/
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1791359', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/42921/']}
jdg_380093
stackexchange
llm_judgeable_groundtruth_similarity
14722978
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to return the Jsonobject to main thread. But when i tried to run the code it returns the following error.

02-06 06:14:36.490: E/AndroidRuntime(769): FATAL EXCEPTION: main
02-06 06:14:36.490: E/AndroidRuntime(769): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.dataread/com.example.dataread.MainActivity}: java.lang.NullPointerException
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2180)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2230)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.ActivityThread.access$600(ActivityThread.java:141)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1234)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.os.Handler.dispatchMessage(Handler.java:99)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.os.Looper.loop(Looper.java:137)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.ActivityThread.main(ActivityThread.java:5039)
02-06 06:14:36.490: E/AndroidRuntime(769): at java.lang.reflect.Method.invokeNative(Native Method)
02-06 06:14:36.490: E/AndroidRuntime(769): at java.lang.reflect.Method.invoke(Method.java:511)
02-06 06:14:36.490: E/AndroidRuntime(769): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
02-06 06:14:36.490: E/AndroidRuntime(769): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
02-06 06:14:36.490: E/AndroidRuntime(769): at dalvik.system.NativeStart.main(Native Method)
02-06 06:14:36.490: E/AndroidRuntime(769): Caused by: java.lang.NullPointerException
02-06 06:14:36.490: E/AndroidRuntime(769): at com.example.dataread.MainActivity.onCreate(MainActivity.java:37)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.Activity.performCreate(Activity.java:5104)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1080)
02-06 06:14:36.490: E/AndroidRuntime(769): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2144)
02-06 06:14:36.490: E/AndroidRuntime(769): ... 11 more

This is my code:

public interface Asynchtask {
    void processFinish(JSONObject result);
}

public class MainActivity extends ListActivity implements Asynchtask {

    JSONfunctions js = new JSONfunctions();
    JSONObject retunfromAsyncTask;
    //public static JSONObject dataFromAsyncTask;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        js.delegate = this;
        new JSONfunctions().execute("http://192.168.6.43/employees.php");
        setContentView(R.layout.listplaceholder);
        ArrayList<HashMap<String, String>> mylist = new ArrayList<HashMap<String, String>>();
        try {
            JSONArray employees = retunfromAsyncTask.getJSONArray("Employees");
            for (int i = 0; i < employees.length(); i++) {
                JSONArray e = employees.getJSONArray(i);
                HashMap<String, String> map = new HashMap<String, String>();
                map.put("name", "emp name:" + e.getString(0) + " " + e.getString(1) + " " + e.getString(2));
                map.put("email id", "email id: " + e.getString(3));
                map.put("phone no", "phone no: " + e.getString(4));
                mylist.add(map);
            }
        } catch (JSONException e) {
            Log.e("log_tag", "Error parsing data " + e.toString());
        }
        ListAdapter adapter = new SimpleAdapter(this, mylist, R.layout.activity_main,
                new String[] { "name", "email id", "phone no" },
                new int[] { R.id.item_title, R.id.item_emailid, R.id.item_phoneno });
        setListAdapter(adapter);
        final ListView lv = getListView();
        lv.setTextFilterEnabled(true);
    }

    @Override
    public void processFinish(JSONObject result) {
        // TODO Auto-generated method stub
        retunfromAsyncTask = result;
    }
}

public class JSONfunctions extends AsyncTask<String, Void, JSONObject> {

    public Asynchtask delegate = null;
    InputStream is;
    String result;
    JSONObject jArray;

    @Override
    protected JSONObject doInBackground(String... params) {
        // TODO Auto-generated method stub
        // http post
        try {
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost(params[0]);
            HttpResponse response = httpclient.execute(httppost);
            HttpEntity entity = response.getEntity();
            is = entity.getContent();
        } catch (Exception e) {
            Log.e("log_tag", "Error in http connection " + e.toString());
        }
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
            StringBuilder sb = new StringBuilder();
            String line = null;
            while ((line = reader.readLine()) != null) {
                sb.append(line + "\n");
            }
            is.close();
            result = sb.toString();
        } catch (Exception e) {
            Log.e("log_tag", "Error converting result " + e.toString());
        }
        try {
            jArray = new JSONObject(result);
            // MainActivity.dataFromAsyncTask=jArray;
        } catch (JSONException e) {
            Log.e("log_tag", "Error parsing data " + e.toString());
        }
        return jArray; //convert response to string
    }

    @Override
    protected void onPostExecute(JSONObject result) {
        delegate.processFinish(result);
    }
}

Manifest file:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.dataread"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="4"
        android:targetSdkVersion="17" />

    <uses-permission android:name="android.permission.INTERNET"/>

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.example.dataread.MainActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Now provide the response and nothing else.
To get the result back in the main thread you will need to use the AsyncTask.get() method, which makes the UI thread wait until the execution of doInBackground is complete. But a get() call freezes the main UI thread until the doInBackground computation finishes. Start your AsyncTask using get() as:

String str_result = new JSONfunctions().execute("http://192.168.6.43/employees.php").get();

Move this line inside a Thread to avoid freezing the UI thread. The second and right way to utilize AsyncTask is to move the code which you want to update with the result of the doInBackground computation inside onPostExecute, as:

public class JSONfunctions extends AsyncTask<String, Void, JSONObject> {

    public Asynchtask delegate = null;
    InputStream is;
    String result;
    JSONObject jArray;

    @Override
    protected JSONObject doInBackground(String... params) {
        // your code here....
        return jArray; //convert response to string
    }

    @Override
    protected void onPostExecute(JSONObject result) {
        JSONArray employees = result.getJSONArray("Employees");
        // your code here...
        ListAdapter adapter = new SimpleAdapter(MainActivity.this, mylist, R.layout.activity_main,
                new String[] { "name", "email id", "phone no" },
                new int[] { R.id.item_title, R.id.item_emailid, R.id.item_phoneno });
        MainActivity.this.setListAdapter(adapter);
        final ListView lv = MainActivity.this.getListView();
        //....
    }
}
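The pattern recommended above -- compute on a background thread and deliver the result to a completion callback instead of blocking on get() -- is language-neutral. Here is a minimal Python sketch (helper names are hypothetical, purely to show the flow):

```python
import queue
import threading

def run_async(work, on_done):
    """Minimal AsyncTask analogue: run `work` on a background thread and
    hand its result to `on_done` (the onPostExecute step), so the caller
    never blocks the way a get() call would."""
    def target():
        on_done(work())
    t = threading.Thread(target=target)
    t.start()
    return t

# The "delegate" here is just a queue that the completion callback feeds.
results = queue.Queue()
worker = run_async(lambda: {"Employees": [["John", "Doe"]]}, results.put)
worker.join()  # a real UI loop would not join; the callback updates the view
print(results.get_nowait())
```

The key design point matches the answer: the consumer of the result lives in the callback, so the "main thread" stays responsive while the work runs.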
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14722978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1817840/']}
jdg_380094
stackexchange
llm_judgeable_groundtruth_similarity
14246
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently analyzed an experiment that manipulated 2 categorical variables and one continuous variable using ANCOVA. However, a reviewer suggested that multiple regression with the categorical variable coded as dummy variables is a more appropriate test for experiments with both categorical and continuous variables. When is it appropriate to use ANCOVA vs. multiple regression with dummy variables and what factors should I consider in selecting between the two tests? Thank you. Now provide the response and nothing else.
ttnphns is correct. However, given your additional comments I would suggest that the reviewer wanted the change merely for interpretation. If you want to stick with ANOVA style results just call it ANOVA. ANCOVA and ANOVA are the same, as ttnphns pointed out. The difference is that with ANCOVA you don't treat the covariates as predictors and you definitely appear to want to do just that. What the reviewer was getting at was that, while you can perform an ANOVA on continuous predictors, it's typical that one perform a regression. One feature of this is that you get estimates of the effects of the continuous variable and you can even look at interactions between it and the categorical (which aren't included in an ANCOVA but could be in an ANOVA). You may need some help with interpretation of regression results because funny things happen on the way to interactions if you're going to use the beta values to determine the significance of your effects.
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/14246', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/4048/']}
jdg_380095
stackexchange
llm_judgeable_groundtruth_similarity
180421
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This might sound like a naive question but when are database table indexes in mysql required? How do such indexes affect performance and why? Now provide the response and nothing else.
The purpose of database indexes is to increase the performance of searches in data tables. As you might already know, for unsorted data structures the asymptotic complexity of a search is O(n) . Meaning that, in the worst case, all rows of the table are going to be examined in order to find the rows matching the WHERE clause (and the same goes for JOIN columns). It might not be that bad in a small table, but it can become really slow on a bigger table. Since a data table cannot physically contain sorted data for multiple columns (generally speaking, it may only be physically sorted by a single column), indexes become handy. In summary, an index is usually a B-tree (or a variant): a sorted map containing a specific column's data as key and the row location as the entry value. This way, the search cost can be reduced from O(n) to O(log(n)) . Are indexes required? No. But your queries will show a huge performance boost if you use indexes in the right way. Yet the performance gain has its drawbacks: in order to increase search performance, insert and delete operations become slower (as the indexes must be updated). Here are some asymptotic amortized worst-case analyses for different data structures. Remember this is just an introduction on the matter. You should definitely take a deeper look at database indexes .
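To make the O(n)-versus-O(log n) point concrete, here is a minimal Python sketch of a toy "table" queried with and without a sorted index (illustrative only; a real B-tree index is more involved than a sorted list):

```python
import bisect

# A toy "table" of (key, value) rows; the keys are the even numbers.
rows = [(k, f"name-{k}") for k in range(0, 200_000, 2)]

def full_scan(rows, key):
    """Without an index the worst case touches every row: O(n)."""
    for k, v in rows:
        if k == key:
            return v
    return None

# A sorted structure over the key column plays the role of the index.
index_keys = [k for k, _ in rows]

def indexed_lookup(rows, index_keys, key):
    """With the index, a binary search finds the row in O(log n)."""
    i = bisect.bisect_left(index_keys, key)
    if i < len(index_keys) and index_keys[i] == key:
        return rows[i][1]
    return None

print(full_scan(rows, 199_998), indexed_lookup(rows, index_keys, 199_998))
```

Both lookups return the same row, but the scan inspects up to 100,000 rows while the indexed path needs about 17 comparisons; maintaining index_keys on every insert or delete is the write-side cost the answer mentions.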
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/180421', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/55045/']}
jdg_380096
stackexchange
llm_judgeable_groundtruth_similarity
951388
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I declare just the 2 varargs methods as follows: public void foo(String... strings) { System.out.println("Foo with Strings");} and public void foo(int... ints) { System.out.println("Foo with ints");} and then have the code: foo(); this is a compiler error due to the ambiguity as expected. However if I have just the following 2 versions of foo: public void foo(Object... objects) { System.out.println("Foo with Objects");} and public void foo(int... ints) { System.out.println("Foo with ints");} then the code foo(); calls the ints version of the method. Can anyone explain why the second example isn't similarly ambiguous and why it resolves to the int method over the Object method. Thanks. Now provide the response and nothing else.
If I recall properly from when I was preparing for the SCJP: in the first case you have 2 argument types with no relation between them, so the compiler can't choose one. In the second, with autoboxing enabled (1.5+), int can be boxed to Integer, which is a subtype of Object, and the compiler, in case of conflict, will always use the most specific definition. So Integer (int) is prioritized.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/951388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63034/']}
jdg_380097
stackexchange
llm_judgeable_groundtruth_similarity
362031
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to filter a part of a file that holds 2 digital certificates. Basically, I want the first part (let's say Cert1) and not the second part (Cert2). Content of the file is: -----BEGIN CERTIFICATE-----AAAAAAAAETC-----END CERTIFICATE----------BEGIN CERTIFICATE-----AAAAAAAAETC-----END CERTIFICATE----- I was under the impression that this would give me the content of Cert1 (the first part between the first BEGIN and the first END) : cat /etc/nginx/cert.pem | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' For some reason, though, it still presents me all the content between the second BEGIN and the second END (basically, nothing changes; all content is the same) . Any pointers? Now provide the response and nothing else.
You can use the following sed command for this task:

sed '/-----END CERTIFICATE-----/q' /etc/nginx/cert.pem

q is a sed command that instructs sed to quit. Therefore sed will print from the beginning of the file and quit when the pattern '-----END CERTIFICATE-----' is encountered. This causes it to stop at the end of the first certificate. Also, there is no need to use a pipe to redirect the output of cat to sed; simply specify the filename in the sed command. Source - http://www.theunixschool.com/2011/09/sed-selective-printing.html
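A quick way to convince yourself (a throwaway shell sketch with placeholder certificate bodies, not real certificates):

```shell
# Build a sample file with two "certificates", then keep only the first
# one using sed's q command.
cat > certs.pem <<'EOF'
-----BEGIN CERTIFICATE-----
AAAA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
BBBB
-----END CERTIFICATE-----
EOF

sed '/-----END CERTIFICATE-----/q' certs.pem > first.pem

cat first.pem
# -----BEGIN CERTIFICATE-----
# AAAA
# -----END CERTIFICATE-----
```

Because sed quits at the first match of the END marker, the second certificate never reaches the output.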
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/362031', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/204855/']}
jdg_380098
stackexchange
llm_judgeable_groundtruth_similarity
24576706
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some problems with adding data to my array. My first data has some data like the following code: array1{projectId: 1, task: 1, hoursEstimated: 345},{projectId: 1, task: 4, hoursEstimated: 274},{projectId: 2, task: 1, hoursEstimated: 432},{projectId: 2, task: 2, hoursEstimated: 347},{projectId: 2, task: 4, hoursEstimated: 815},array2{projectId: 1, task: 1, totalHoursWorked: 145},{projectId: 1, task: 2, totalHoursWorked: 234},{projectId: 1, task: 3, totalHoursWorked: 283},{projectId: 1, task: 4, totalHoursWorked: 174},{projectId: 2, task: 1, totalHoursWorked: 205},{projectId: 2, task: 2, totalHoursWorked: 153},{projectId: 2, task: 3, totalHoursWorked: 634},{projectId: 2, task: 4, totalHoursWorked: 815}, So what i want to do is: array2 has to do is match both the taskId and projectId of array 1 and append the data. So for the first element in de array it would have become this: Array3{projectId: 1, task: 1, hoursEstimated: 345, totalHoursWorked: 145}etc.. Now provide the response and nothing else.
Since attributes are part of the class definition, available to you at runtime (this is also called "metadata" in geekspeak), the CLR does not instantiate them unless some part of your program asks for them. This makes sense: why bother spending CPU cycles for something that nobody wants to access? Because of this, the execution of the constructor will never happen unless you ask for that attribute. Here is one way to ask for an attribute that would make your program fail:

var attr = Attribute.GetCustomAttribute(typeof(Failer).GetProperty("Prop"), typeof(FailerAttr));

This code makes the CLR instantiate the FailerAttr, which triggers the exception. Demo on ideone. If you do not know the type of the attribute, you can retrieve all attributes at once with this call:

var allAttributes = Attribute.GetCustomAttributes(typeof(Failer).GetProperty("Prop"));

This causes an exception as well ( demo ).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24576706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3531957/']}
jdg_380099
stackexchange
llm_judgeable_groundtruth_similarity
3838062
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use Delphi's 2009 TGridPanel, but I don't understand how to use it correctly. For example, I want to have a GridPanel with 4 columns which should have each 25% percent of the GripPanels width. So I put a GripPanel on the form and add 4 ColumnItems.I give the column items SizeStyle ssPercent, and now I try to give each ColummItem a Value of 25 (ober the object inspector).But for some reasons the object inspector doesn't accept the value 25. It's automatically changed to something like '23,8095238095238'.Could somebody explain me in short how to use this components?Thanks! Now provide the response and nothing else.
When you set the Value of a column with SizeStyle ssPercent, the component redistributes the new sum of all ssPercent-styled columns over the available total percentage. For instance, if you've got four 25% columns and you set one to be 50%, you now have a total of 125%. The component recalculates the Values again so the total is distributed proportionally, i.e. 50% -> 40% (100*50/125). To set the value of more than one column at once, at run time you'd surround your code with GridPanel.ColumnCollection.BeginUpdate and GridPanel.ColumnCollection.EndUpdate. At design time, edit the .dfm directly: press 'Alt+F12' when you're viewing your form in the design editor, find your GridPanel in the text editor, edit the value of the columns there, and then press 'Alt+F12' again to return to the design view.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3838062', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/463730/']}
jdg_380100
stackexchange
llm_judgeable_groundtruth_similarity
9869150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I get a strange PHP error after updating my php version to 5.4.0-3. I have this array: Array( [host] => 127.0.0.1 [port] => 11211) When I try to access it like this I get strange warnings print $memcachedConfig['host']; print $memcachedConfig['port']; Warning: Illegal string offset 'host' in .... Warning: Illegal string offset 'port' in ... I really don't want to just edit my php.ini and re-set the error level. Now provide the response and nothing else.
The error Illegal string offset 'whatever' in... generally means: you're trying to use a string as a full array. That is actually possible, since strings can be treated as arrays of single characters in PHP. So you're thinking $var is an array with a key, but it's just a string with standard numeric keys, for example:

$fruit_counts = array('apples'=>2, 'oranges'=>5, 'pears'=>0);
echo $fruit_counts['oranges']; // echoes 5

$fruit_counts = "an unexpected string assignment";
echo $fruit_counts['oranges']; // causes illegal string offset error

You can see this in action here: http://ideone.com/fMhmkR This is for those who come to this question trying to translate the vagueness of the error into something to do about it, as I was.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9869150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1129996/']}
jdg_380101
stackexchange
llm_judgeable_groundtruth_similarity
10854347
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: my pagination problem still continue ; I just making pagination in simple stuff : $config['base_url'] = site_url('admin/index/page/'); $this->load->database(); $config['total_rows'] = $this->db->count_all('sms'); $config['per_page'] = 20; $offset = $this->uri->segment(4, 0); $this->pagination->initialize($config) And my view page has this command for view pagination : <?php echo $this->pagination->create_links(); ?> And after more than 20 rows , pagination starts to paging the list, but html output shows like this : 1 2 3 As in view , First Page Number 1 does not have link either jumping next page , there is no link on page number 1. Just its on strong . My second problem is : I have just 30 record but CI pagination creates 3rd page which is coming with empty rows ! I am not sure why some Class ( specially pagination makes so much trouble to users ? ) If i need to pay something ( maybe hidden licence? ) for get away from trouble instead of using simple pagination class without loosing so much time for searching issue on internet , I am ready for it ! Now provide the response and nothing else.
I was very mad because of this pagination problem, and while studying the source code of the pagination library I saw this --

var $uri_segment = 3;

By default the pagination library uses the 3rd URI segment; in your case and in my case we wanted to use the 4th URI segment. To suit our needs, change this code:

$config['base_url'] = site_url('admin/index/page/');
$this->load->database();
$config['total_rows'] = $this->db->count_all('sms');
$config['per_page'] = 20;
$offset = $this->uri->segment(4, 0);
$this->pagination->initialize($config);

TO

$config['base_url'] = site_url('admin/index/page/');
$this->load->database();
$config['total_rows'] = $this->db->count_all('sms');
$config['per_page'] = 20;
$offset = $this->uri->segment(4, 0);
$config['uri_segment'] = 4; // add this line to override the default
$this->pagination->initialize($config);

Kindly post back here if this solves your problem or if the problem still exists so I can help :) By the way, this is not in the pagination class documentation. I hope this will be added to the docs, because I'm seeing many developers having a hard time with the pagination class.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10854347', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1405703/']}
jdg_380102
stackexchange
llm_judgeable_groundtruth_similarity
2574636
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: a = ['123', '2', 4]b = a[4] or 'sss'print b I want to get a default value when the list index is out of range (here: 'sss' ). How can I do this? Now provide the response and nothing else.
In the Python spirit of "ask for forgiveness, not permission", here's one way:

try:
    b = a[4]
except IndexError:
    b = 'sss'
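If you need this in several places, the same try/except can be wrapped in a small helper; `get_at` is a hypothetical name invented for this sketch, not a standard function:

```python
def get_at(seq, index, default=None):
    """Return seq[index], or default if the index is out of range."""
    try:
        return seq[index]
    except IndexError:
        return default

a = ['123', '2', 4]
b = get_at(a, 4, 'sss')
print(b)  # sss
```

Unlike `a[4] or 'sss'`, this never raises, and it also doesn't misfire on falsy values that really are in the list (such as 0 or '').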
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2574636', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/234322/']}
jdg_380103
stackexchange
llm_judgeable_groundtruth_similarity
38989691
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to read data from one table and insert into multiple rows in another table in a MySQL database. Table 1 looks like : ID, name, e-mail, phone, city, ..., .... In Table 2 I need to insert data like: (row1) ID, "name", name (row2) ID, "e-mail, e-mail (row3) ID, "phone", phone ... ... Table 1 has about 3000 rows I guess I need to use some kind of foreach or do..while but can't find anything that works. Can anyone give me a clue how to do this? Now provide the response and nothing else.
If I understand your question correctly, you are wanting to do a query on table1 that returns multiple rows, and then insert those into table2 without looping row by row. That's the INSERT INTO ... SELECT statement:

INSERT INTO table2 (name, email, phone)
SELECT name, email, phone FROM table1;

It can be modified to grab specific results as well:

INSERT INTO table2 (name, email, phone)
SELECT name, email, phone FROM table1 WHERE name = 'target person';

More information can be found at http://dev.mysql.com/doc/refman/5.7/en/insert-select.html and http://www.w3schools.com/sql/sql_insert_into_select.asp . EDIT: Based on your comment, it sounds like you're trying to do this: SQL split values to multiple rows. I can't think of a situation where you'd actually want to do that, as you can access all of the data in your existing table as is, and it seems to be bad practice to split data in the way you're requesting. However, the solutions in the above thread should be applicable to what you're trying to do. Ultimately, you may want to look at how you're actually retrieving the data. Modifying that code would be a better idea :)
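The same pattern can be tried out without a MySQL server; here is a sketch using Python's sqlite3 in-memory database (table and column names are made up for the demo, and the INSERT ... SELECT syntax carries over):

```python
import sqlite3

# In-memory database standing in for MySQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (name TEXT, email TEXT, phone TEXT)")
con.execute("CREATE TABLE table2 (name TEXT, email TEXT, phone TEXT)")
con.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?)",
    [("ann", "a@x.com", "111"), ("bob", "b@x.com", "222")],
)

# One statement copies every matching row; no application-side loop needed.
con.execute(
    "INSERT INTO table2 (name, email, phone) "
    "SELECT name, email, phone FROM table1 WHERE name = 'ann'"
)

rows = con.execute("SELECT name, email, phone FROM table2").fetchall()
print(rows)  # [('ann', 'a@x.com', '111')]
```

The WHERE clause is optional; drop it to copy all 3000 rows in one statement.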
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38989691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6724863/']}
jdg_380104
stackexchange
llm_judgeable_groundtruth_similarity
15382807
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The problem is:Having a list of names, and a list of lists, how to create a list, in which each item is an ordered dictionary with names as keys, and items from list of lists as values? It might be more clear from code below: from collections import OrderedDictlist_of_lists = [ ['20010103', '0.9507', '0.9569', '0.9262', '0.9271'], ['20010104', '0.9271', '0.9515', '0.9269', '0.9507'], ['20010105', '0.9507', '0.9591', '0.9464', '0.9575'], ]names = ['date', 'open', 'high', 'low', 'close'] I would like to get: ordered_dictionary = [ OrderedDict([('date', '20010103'), ('open', '0.9507'), ('high', '0.9569'), ('low', '0.9262'), ('close', '0.9271')]), OrderedDict([('date', '20010104'), ('open', '0.9271'), ('high', '0.9515'), ('low', '0.9269'), ('close', '0.9507')]), OrderedDict([('date', '20010105'), ('open', '0.9507'), ('high', '0.9591'), ('low', '0.9464'), ('close', '0.9575')]), ] Now provide the response and nothing else.
Use zip() to combine the names and the values, with a list comprehension:

import itertools
from collections import OrderedDict

ordered_dictionary = [OrderedDict(zip(names, subl)) for subl in list_of_lists]

which gives:

>>> from pprint import pprint
>>> pprint([OrderedDict(zip(names, subl)) for subl in list_of_lists])
[OrderedDict([('date', '20010103'), ('open', '0.9507'), ('high', '0.9569'), ('low', '0.9262'), ('close', '0.9271')]),
 OrderedDict([('date', '20010104'), ('open', '0.9271'), ('high', '0.9515'), ('low', '0.9269'), ('close', '0.9507')]),
 OrderedDict([('date', '20010105'), ('open', '0.9507'), ('high', '0.9591'), ('low', '0.9464'), ('close', '0.9575')])]
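Side note: on Python 3.7+, plain dicts preserve insertion order, so unless you need OrderedDict's extra methods the same comprehension works with dict directly:

```python
list_of_lists = [
    ['20010103', '0.9507', '0.9569', '0.9262', '0.9271'],
    ['20010104', '0.9271', '0.9515', '0.9269', '0.9507'],
]
names = ['date', 'open', 'high', 'low', 'close']

# Plain dicts keep insertion order on 3.7+, so the keys stay in
# the order given by `names`.
records = [dict(zip(names, row)) for row in list_of_lists]
print(records[0]['close'])  # 0.9271
```

OrderedDict is still the right choice if the code must run on older interpreters or relies on order-sensitive equality.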
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15382807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1841829/']}
jdg_380105
stackexchange
llm_judgeable_groundtruth_similarity
50014062
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am making an audio app, and the user can download files locally stored to the documentDirectory using FileManager . Next, I'd like to allow the user to delete all files using a button. In the documentation, there is a method to remove items . Here's my code: @IBAction func deleteDirectoryButton(_ sender: Any) { let documentsUrl = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first! do { try FileManager.default.removeItem(at: documentsUrl, includingPropertiesForKeys: nil, options: []) } catch let error { print(error) } } Unfortunately, this won't build with an error Ambiguous reference to member 'removeItem(atPath:)' . Is there a better approach to access the documentDirectory and remove all files from the directory in one swoop? Now provide the response and nothing else.
First of all, the error occurs because the signature of the API is wrong. It's just removeItem(at:), without the other parameters. A second issue is that you are going to delete the Documents directory itself rather than the files in the directory, which you are discouraged from doing. You have to get the contents of the directory and add a check, for example to delete only MP3 files. A better solution would be to use a subfolder.

let documentsUrl = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
do {
    let fileURLs = try FileManager.default.contentsOfDirectory(at: documentsUrl, includingPropertiesForKeys: nil, options: .skipsHiddenFiles)
    for fileURL in fileURLs where fileURL.pathExtension == "mp3" {
        try FileManager.default.removeItem(at: fileURL)
    }
} catch {
    print(error)
}

Side note: It is highly recommended to always use the URL-based API of FileManager.
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50014062', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1927604/']}
jdg_380106
stackexchange
llm_judgeable_groundtruth_similarity
10866134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I create two lists and zip them a=[1,2,3]b=[7,8,9]z=zip(a,b) Then I typecast z into two lists l1=list(z)l2=list(z) Then the contents of l1 turn out to be fine [(1,7),(2,8),(3,9)], but the contents of l2 is just []. I guess this is the general behavior of python with regards to iterables. But as a novice programmer migrating from the C family, this doesn't make sense to me. Why does it behave in such a way? And is there a way to get past this problem? I mean, yeah in this particular example, I can just copy l1 into l2, but in general is there a way to 'reset' whatever Python uses to iterate 'z' after I iterate it once? Now provide the response and nothing else.
There's no way to "reset" a generator. However, you can use itertools.tee to "copy" an iterator.

>>> import itertools
>>> z = zip(a, b)
>>> zip1, zip2 = itertools.tee(z)
>>> list(zip1)
[(1, 7), (2, 8), (3, 9)]
>>> list(zip2)
[(1, 7), (2, 8), (3, 9)]

This involves caching values, so it only makes sense if you're iterating through both iterables at about the same rate. (In other words, don't use it the way I have here!) Another approach is to pass around the generator function, and call it whenever you want to iterate it.

def gen(x):
    for i in range(x):
        yield i ** 2

def make_two_lists(gen):
    return list(gen()), list(gen())

But now you have to bind the arguments to the generator function when you pass it. You can use lambda for that, but a lot of people find lambda ugly. (Not me though! YMMV.)

>>> make_two_lists(lambda: gen(10))
([0, 1, 4, 9, 16, 25, 36, 49, 64, 81], [0, 1, 4, 9, 16, 25, 36, 49, 64, 81])

I hope it goes without saying that under most circumstances, it's better just to make a list and copy it. Also, as a more general way of explaining this behavior, consider this. The point of a generator is to produce a series of values, while maintaining some state between iterations. Now, at times, instead of simply iterating over a generator, you might want to do something like this:

z = zip(a, b)
while some_condition():
    fst = next(z, None)
    snd = next(z, None)
    do_some_things(fst, snd)
    if fst is None and snd is None:
        do_some_other_things()

Let's say this loop may or may not exhaust z. Now we have a generator in an indeterminate state! So it's important, at this point, that the behavior of a generator is restrained in a well-defined way. Although we don't know where the generator is in its output, we know that a) all subsequent accesses will produce later values in the series, and b) once it's "empty", we've gotten all the items in the series exactly once.
The more ability we have to manipulate the state of z , the harder it is to reason about it, so it's best that we avoid situations that break those two promises. Of course, as Joel Cornett points out below, it is possible to write a generator that accepts messages via the send method; and it would be possible to write a generator that could be reset using send . But note that in that case, all we can do is send a message . We can't directly manipulate the generator's state, and so all changes to the state of the generator are well-defined (by the generator itself -- assuming it was written correctly!). send is really for implementing coroutines , so I wouldn't use it for this purpose. Everyday generators almost never do anything with values sent to them -- I think for the very reasons I give above.
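For completeness, here is a sketch of the kind of send-based "reset" mentioned above. The 'reset' message is an arbitrary convention invented for this example, and as noted, coroutine-style tricks like this are rarely worth it for everyday iteration:

```python
def resettable(xs):
    """Yield items from xs; rewind to the start when sent 'reset'."""
    i = 0
    while i < len(xs):
        msg = yield xs[i]
        if msg == 'reset':
            i = 0   # next resume starts over from the first item
        else:
            i += 1

g = resettable([1, 2, 3])
print(next(g), next(g))  # 1 2
print(g.send('reset'))   # 1  (send resumes the generator, yielding the first item)
print(next(g))           # 2
```

Note that the state change still happens entirely inside the generator's own code; the caller can only pass a message, which keeps the generator's behavior well-defined in exactly the sense described above.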
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10866134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1265125/']}
jdg_380107
stackexchange
llm_judgeable_groundtruth_similarity
3389143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a possibility to generate an java.security.cert.X509Certificate from an byte[]? Now provide the response and nothing else.
Sure. The certificate objects can be created by an instance of CertificateFactory - in particular, one configured to create X509 certificates. This can be created like so: CertificateFactory certFactory = CertificateFactory.getInstance("X.509"); Then you need to pass it an InputStream containing the bytes of the certificate. This can be achieved by wrapping your byte array in a ByteArrayInputStream : InputStream in = new ByteArrayInputStream(bytes);X509Certificate cert = (X509Certificate)certFactory.generateCertificate(in);
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3389143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/275837/']}
jdg_380108
stackexchange
llm_judgeable_groundtruth_similarity
30402235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am quite new to ember and don't really get the difference between two types of syntax. Where and in which situations should i use one or another. I.e. Which one is more suitable for usage in Routes and which one for Controllers. this.get('model') As opposed to this.modelFor('artists/show') Now provide the response and nothing else.
this.get('model')           // controller call
this.modelFor('someRoute')  // route call

In Ember, a route's setupController hook by default performs this one line of code:

setupController: function(controller, model) {
    controller.set('model', model);
}

This takes whatever is returned from the model hook and sets the controller's model property to that value. From within the controller, this.get('model') is the proper way to access this model. Also, a developer can override this hook and do something different, like setting model to some child property of what is returned from the model hook (controller.set('model', model.prop)). This is worth noting because when you call this.modelFor from another route, you DO NOT get that route's associated controller's model as set by setupController; you get whatever is returned from the model hook, which under the covers is the route's currentModel property, if I remember correctly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30402235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3042109/']}
jdg_380109
stackexchange
llm_judgeable_groundtruth_similarity
18955987
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a datagrid which contains all the records with then same name. I have this table: ShopID name adress city-----------------------------------------1 name1 adress 1 city12 name 2 adress2 city23 name 2 4 name 2 city25 name 3 6 name 4 adress4 city47 name 4 adress4 city4 and my datagrid must contain: 2 name 2 adress2 city23 name 2 4 name 2 city26 name 4 adress4 city47 name 4 adress4 city4 but I have no idea how to create this query Now provide the response and nothing else.
If you use Entity Framework I assume you use LINQ as well. In which case, try it this way:

var duplicates = Shop.GroupBy(i => i.Name)
                     .Where(x => x.Count() > 1)
                     .Select(val => val.Key);

foreach (var item in duplicates)
{
    // process
}

In a simple example the output would look like this: //EDIT: if you want to group by multiple columns you can use this syntax:

var query = (from sh in Shop
             group sh by new { sh.Name, sh.Address, sh.City } into grp
             select new
             {
                 name = grp.Key.Name,
                 address = grp.Key.Address,
                 city = grp.Key.City
             }).ToList()
               .GroupBy(q => q.name)
               .Where(q => q.Count() > 1)
               .Dump();

This will result in the following: //EDIT2: sometimes I am borderline stupid. Following the KISS principle:

var query = Shop.GroupBy(s => s.Name).Where(s => s.Count() > 1).Dump();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18955987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1856013/']}
jdg_380110
stackexchange
llm_judgeable_groundtruth_similarity
18409551
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Structure: .parent (has if/else to toggle on click) -> .child (has nothing) <div class="parent">Parent <div class="child">Child</div></div> The parent element is styled to hide overflowing content and toggle its height on click. When the user clicks, the parent element will expand to show the child element. I want users to be able to click on the child element without the parent element toggling back to its original size and hiding the child element. I want the toggle to only happen on the parent. I realize the child element is still contained within the parent element's clickable area, but is there a way to exclude it? Now provide the response and nothing else.
Solution 1: Compare target with currentTarget:

$("#parentEle").click(function(e) {
    if (e.target == e.currentTarget) {
        alert('parent ele clicked');
    } else {
        // you could exclude this else block to have it do nothing within this listener
        alert('child ele clicked');
    }
});

Fiddle

e.target will be the element that started the event. e.currentTarget will be where it currently is (bubbling up), which will be parentEle in this click event, as that's what this listener is attached to. If they are the same, you know the click was directly on the parent.

Solution 2: Stop the propagation before the event hits the parentEle:

The other option is to prevent the event from bubbling up in the first place if there is a click on a child element. That can be done like this:

$("#parentEle").click(function(e) {
    alert('parent ele clicked');
});
$("#parentEle").children().click(function(e) {
    // this prevents the event from bubbling to any event higher than the direct children
    e.stopPropagation();
});

Fiddle

The main difference between the two is that the first solution will just ignore the event in this listener and allow it to keep bubbling up. This may be necessary if you have a parent of this parentEle that needs to get the event. The second solution stops any click events from bubbling past parentEle's direct children. So if there was a click listener on a parent of parentEle, it would never see these events either.
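The target/currentTarget comparison from Solution 1 can be exercised without a browser; this DOM-free sketch fakes event objects as plain objects (the real DOM event interface has many more fields, so this only shows the decision logic):

```javascript
// The listener only reacts when the click started on the very element
// the handler is attached to, i.e. target === currentTarget.
function classify(e) {
  return e.target === e.currentTarget ? 'parent' : 'child';
}

const parent = { id: 'parentEle' };
const child = { id: 'childEle' };

// Click directly on the parent: both fields point at the same node.
console.log(classify({ target: parent, currentTarget: parent })); // parent

// Click on the child that bubbled up to the parent's listener.
console.log(classify({ target: child, currentTarget: parent }));  // child
```

In a real handler, the 'child' branch is where you simply return without toggling, which is exactly what the question asks for.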
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18409551', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2712051/']}
jdg_380111
stackexchange
llm_judgeable_groundtruth_similarity
15777399
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I primarily come from an Embedded Software background and hence I have very limited knowledge about hardware in general. I always use to think Ethernet as that little physical connector on your computer into which you attach your Ethernet cable. And from a Software perspective all you need to do is to install the driver (in Windows) or configure the Linux kernel to include the driver for your Ethernet. Questions: But as I have started going down one level (towards the hardware) and looking at various datasheet and schematics, I have started to come across terms like PHY, MII, SGMII, RGMII, etc. And now I am little confused as to what constitutes an Ethernet? For example, when I say Intel 82574L 1.0 Gbps Ethernet port, where do all these terms fit in? Now provide the response and nothing else.
Let me try to explain: The MII, SGMII, and RGMII are three kinds of interface between the MAC block and the PHY chip. The Intel 82574L is one MAC chip. Look at the following figure:

 _________          __________                     _________
|   CPU   |  PCI-E |          |  MII/SGMII/RGMII  |         |
|   or    |<======>|   MAC    |<=================>|   PHY   |<====> physical interface
|  board  |        |          |                   |         |
| or else |        |__________|                   |_________|
|_________|

For details about MII (100 Mbps), SGMII (1 Gbps, serial), and RGMII (1 Gbps, reduced) definitions, you can google them. Basically speaking, a NIC (Network Interface Card) consists of one MAC block, a related PHY chip, and other peripheral modules, and an Ethernet device driver works with the NIC hardware. The MAC block has one interface to the control CPU or PC main-board, such as a PCIe bus or something else.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15777399', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1521550/']}
jdg_380112
stackexchange
llm_judgeable_groundtruth_similarity
20680957
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am newbie to C++, and want to understand what is the difference between saying typedef enum stateUpdateReasonCode{ a=1, b=2, c=3} StateUpdateReasonCode; and struct StateUpdateReasonCode{ a=1, b=2, c=3}; What is difference between them ? Wy would we use one over another ? Kind Regards Now provide the response and nothing else.
An enum and a struct are totally different concepts, fulfilling different purposes. An enum lets you declare a series of identifiers for use in your code. The compiler replaces them with numbers for you. It's often useful for making your code more readable and maintainable, because you can use descriptive names without the performance penalty of string comparisons. It can also make the code less bug-prone, because you don't have to keep writing specific numbers everywhere, which could go wrong if a number changes. A struct is a data structure. At its simplest, it contains zero or more pieces of data (variables or objects), grouped together so they can be stored, processed, or passed as a single unit. You can usually have multiple copies (or instances) of it. A struct can be a lot more complex, though. It's actually exactly the same as a class, except that members are public by default instead of private. Like a class, a struct can have member functions and template parameters and so on. One of the vital differences between structs and enums is that an enum doesn't exist at run-time. It's only for your benefit when you're reading/writing the code. However, instances of structs (and classes) certainly can exist in memory at runtime. From a coding standpoint, each identifier in an enum doesn't have its own type. Every member within a struct must have a type.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20680957', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365019/']}
jdg_380113
stackexchange
llm_judgeable_groundtruth_similarity
14904046
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd prefer not to destroy all the users on my site. But I want to take advantage of Django 1.5's custom pluggable user model. Here's my new user model: class SiteUser(AbstractUser): site = models.ForeignKey(Site, null=True) Everything works with my new model on a new install (I've got other code, along with a good reason for doing this--all of which are irrelevant here). But if I put this on my live site and syncdb & migrate, I'll lose all my users or at least they'll be in a different, orphaned table than the new table created for my new model. I'm familiar with South, but based on this post and some trials on my part, it seems its data migrations are not currently a fit for this specific migration. So I'm looking for some way to either make South work for this or for some non-South migration (raw SQL, dumpdata/loaddata, or otherwise) that I can run on each of my servers (Postgres 9.2) to migrate the users once the new table has been created while the old auth.User table is still in the database. Now provide the response and nothing else.
South is more than able to do this migration for you, but you need to be smart and do it in stages. Here's the step-by-step guide: (This guide presupposes you subclass AbstractUser, not AbstractBaseUser)

Before making the switch, make sure that south support is enabled in the application that contains your custom user model (for the sake of the guide, we'll call it accounts and the model User). At this point you should not yet have a custom user model.

$ ./manage.py schemamigration accounts --initial
Creating migrations directory at 'accounts/migrations'...
Creating __init__.py in 'accounts/migrations'...
Created 0001_initial.py.

$ ./manage.py migrate accounts [--fake if you've already syncdb'd this app]
Running migrations for accounts:
 - Migrating forwards to 0001_initial.
 > accounts:0001_initial
 - Loading initial data for accounts.

Create a new, blank user migration in the accounts app.

$ ./manage.py schemamigration accounts --empty switch_to_custom_user
Created 0002_switch_to_custom_user.py.

Create your custom User model in the accounts app, but make sure it is defined as:

class SiteUser(AbstractUser): pass

Fill in the blank migration with the following code.

# encoding: utf-8
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Fill in the destination name with the table name of your model
        db.rename_table('auth_user', 'accounts_user')
        db.rename_table('auth_user_groups', 'accounts_user_groups')
        db.rename_table('auth_user_user_permissions', 'accounts_user_user_permissions')

    def backwards(self, orm):
        db.rename_table('accounts_user', 'auth_user')
        db.rename_table('accounts_user_groups', 'auth_user_groups')
        db.rename_table('accounts_user_user_permissions', 'auth_user_user_permissions')

    models = { ....... }  # Leave this alone

Run the migration

$ ./manage.py migrate accounts
 - Migrating forwards to 0002_switch_to_custom_user.
 > accounts:0002_switch_to_custom_user
 - Loading initial data for accounts.
Make any changes to your user model now.

# settings.py
AUTH_USER_MODEL = 'accounts.User'

# accounts/models.py
class SiteUser(AbstractUser):
    site = models.ForeignKey(Site, null=True)

create and run migrations for this change

$ ./manage.py schemamigration accounts --auto
 + Added field site on accounts.User
Created 0003_auto__add_field_user_site.py.

$ ./manage.py migrate accounts
 - Migrating forwards to 0003_auto__add_field_user_site.
 > accounts:0003_auto__add_field_user_site
 - Loading initial data for accounts.

Honestly, if you already have good knowledge of your setup and already use South, it should be as simple as adding the following migration to your accounts module.

# encoding: utf-8
from south.db import db
from south.v2 import SchemaMigration
from django.db import models

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Fill in the destination name with the table name of your model
        db.rename_table('auth_user', 'accounts_user')
        db.rename_table('auth_user_groups', 'accounts_user_groups')
        db.rename_table('auth_user_user_permissions', 'accounts_user_user_permissions')
        # == YOUR CUSTOM COLUMNS ==
        db.add_column('accounts_user', 'site_id',
                      models.ForeignKey(orm['sites.Site'], null=True, blank=False))

    def backwards(self, orm):
        db.rename_table('accounts_user', 'auth_user')
        db.rename_table('accounts_user_groups', 'auth_user_groups')
        db.rename_table('accounts_user_user_permissions', 'auth_user_user_permissions')
        # == YOUR CUSTOM COLUMNS ==
        db.remove_column('accounts_user', 'site_id')

    models = { ....... }  # Leave this alone

EDIT 2/5/13: added rename for the auth_user_groups table. FKs will auto-update to point at the correct table due to db constraints, but M2M fields' table names are generated from the names of the 2 end tables and will need manual updating in this manner.

EDIT 2: Thanks to @Tuttle & @pix0r for the corrections.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14904046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/652693/']}
jdg_380114
stackexchange
llm_judgeable_groundtruth_similarity
9682024
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On windows, using Powershell, what are the equivalent commands to linux's head , tail , more , less and sed ? Now provide the response and nothing else.
Get-Content (alias: gc) is your usual option for reading a text file. You can then filter further:

gc log.txt | select -first 10                  # head
gc -TotalCount 10 log.txt                      # also head
gc log.txt | select -last 10                   # tail
gc -Tail 10 log.txt                            # also tail (since PSv3), also much faster than above option
gc log.txt | more                              # or less if you have it installed
gc log.txt | %{ $_ -replace '\d+', '($0)' }    # sed

This works well enough for small files; larger ones (more than a few MiB) are probably a bit slow.

The PowerShell Community Extensions include some cmdlets for specialised file stuff (e.g. Get-FileTail).
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9682024', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1265569/']}
jdg_380115
stackexchange
llm_judgeable_groundtruth_similarity
10813704
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a record that would hold the information about a) what kind of elements are present and b) the number of elements of each kind present in a node of a tree. I would explicitly store this information only for the leaf nodes, while the information for the parent node can be obtained through combining the information of all of its children (e.g. child 1 has 3 objects of A, 1 object of B, child 2 has 1 object of A, 2 objects of C -- parent has 4 objects of A, 1 object of B and 2 of C). I will be careful when requesting this information from the parent nodes not to first request, use and discard information for a child node and then for its parent node, but the upward construction will be a common operation. Two other common operations are directly derived from what I store: is an object of kind X present? and how many objects of kind X are present? and also how many kinds of objects are present? Object kinds are represented as integers, and the object numbers are always integer values. What is the better choice (and arguments for the selected choice): use std::multiset<int> , and operate with std::multiset::count() and std::multiset::find() operations (easier union but duplication of elements, total distinct element count hard to obtain); use std::map<int, std::size_t> with the kind as a key and number of objects as a value (no duplicate elements, std::map::find() function present, size gives the correct number of object kinds stored, but accessing a non-existent element increases the size unintentionally). Thank you for your suggestions! Now provide the response and nothing else.
To store a total of n items with k distinct values per your comparison predicate, an std::multiset allocates n binary search tree nodes(*). An std::map allocates only k (slightly larger) nodes. You'd use std::multiset when two items can be considered equal by your comparison predicate, but must still be explicitly stored, because they differ in some aspect that the comparison predicate does not check. Also, iterating over a multiset generates each of the n items, whereas a map would generate each of the k distinct items with the count for each. In the case where the items are just integers, go with std::map . Your "how many distinct items" query would then just be a call to size , which runs in constant time. Your claim that "accessing a non-existent element increases the size unintentionally" is only true if you use operator[] to access nodes. find does not exhibit this behavior. (*) The C++ standard does not guarantee that these containers are implemented as (balanced) BSTs, but in all implementations that I've seen, they are.
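Since the kinds are plain integers, the recommended std::map<int, std::size_t> option is just a kind-to-count table, and the behaviour described above is easy to see in any language with such a table. A hedged Python sketch (collections.Counter plays the role of the map; the node values are invented for the example):

```python
from collections import Counter

def combine(children):
    """Parent node info = sum of the children's kind->count tables."""
    total = Counter()
    for child in children:
        total += child
    return total

leaf1 = Counter({1: 3, 2: 1})   # child 1: 3 objects of kind 1, 1 of kind 2
leaf2 = Counter({1: 1, 3: 2})   # child 2: 1 object of kind 1, 2 of kind 3
parent = combine([leaf1, leaf2])

print(parent[1])     # how many objects of kind 1?
print(2 in parent)   # is kind 2 present?
print(len(parent))   # how many distinct kinds?

# Unlike std::map::operator[], reading a missing key from a Counter
# returns 0 *without* inserting it, so the size stays correct.
print(parent[7], len(parent))
```

The same discipline applies in C++: query with find() (or count()) rather than operator[], and the map's size() stays an accurate distinct-kind count.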
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10813704', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/884412/']}
jdg_380116
stackexchange
llm_judgeable_groundtruth_similarity
8804893
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a follow-up question from here, specifically concerning its answer. From a python module I am calling a Hello World executable that simply prints Hello World to the stdout. I am interested in redirecting that output to a python StringIO and ran into this answer which almost brings me all the way to the solution. The critical part of this answer is this code segment:

1. def redirect_stdout():
2.     print "Redirecting stdout"
3.     sys.stdout.flush() # <--- important when redirecting to files
4.     newstdout = os.dup(1)
5.     devnull = os.open('/dev/null', os.O_WRONLY)
6.     os.dup2(devnull, 1)
7.     os.close(devnull)
8.     sys.stdout = os.fdopen(newstdout, 'w')

Also I would like to restore the stdout as it was before the redirection.

Questions:

What exactly is going on in the function above? What is dup and dup2 doing? What is /dev/null? What is line 8 doing? ( sys.stdout = os.fdopen(newstdout, 'w') )
How can I store the stdout in a StringIO object?
How can I restore the stdout after the call to my Hello World program?

I am pretty sure that once I have the answer for my question 1 that the answers of questions 2 and 3 will be easy. I decided to post them anyway to maybe push the answer of question 1 into the direction where I want to go. Now provide the response and nothing else.
I've written below a few additional comments that should make clearer what is going on inside the redirect_stdout function:

def redirect_stdout():
    print "Redirecting stdout"
    sys.stdout.flush() # <--- important when redirecting to files

    # Duplicate stdout (file descriptor 1)
    # to a different file descriptor number
    newstdout = os.dup(1)

    # /dev/null is used just to discard what is being printed
    devnull = os.open('/dev/null', os.O_WRONLY)

    # Duplicate the file descriptor for /dev/null
    # and overwrite the value for stdout (file descriptor 1)
    os.dup2(devnull, 1)

    # Close devnull after duplication (no longer needed)
    os.close(devnull)

    # Use the original stdout to still be able
    # to print to stdout within python
    sys.stdout = os.fdopen(newstdout, 'w')

One important thing to note is that a process gets three different file descriptors from the OS when launched:

stdin: 0
stdout: 1
stderr: 2

As explained in the comments, the code above takes advantage of the file descriptor for stdout and the file descriptor duplication functions to trick the C code into using a different stdout while still keeping a reference to the original stdout in the python code to be able to print.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8804893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/572616/']}
jdg_380117
stackexchange
llm_judgeable_groundtruth_similarity
35365689
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking to inspect HTTP traffic sent and received by a WFC client using Fiddler. To do this I've added the Fiddler Root Certificate to the Windows certificate store. My question: is there any risk of leaving this certificate in the Windows store ready for when I may need to test again? Could an attacker take advantage of the fact that it's there? Should I remove it once I've finished testing? Now provide the response and nothing else.
As the certificate is uniquely generated by Fiddler for my system, even if an adversary knew I had such a certificate installed, there isn't a way for them to take advantage of this. If they knew the unique private key of the certificate they could potentially use this against me, for example, via a man-in-the-middle attack, but to do that they'd need to infiltrate my system to get the certificate, at which point there would be no need for the man-in-the-middle attack. Having said that, just to be extra safe, I've installed the certificate in a separate Firefox profile specifically for use with Fiddler, so that I don't have the certificate in my system when doing general web surfing.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35365689', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/660896/']}
jdg_380118
stackexchange
llm_judgeable_groundtruth_similarity
5611292
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to make multiline NSTextField? UPDATE: I've found in IB special type of NSTextField called "Wrapped Text Field". It is multiline but when I want get a newline I have to press Ctrl+Enter. But I want to press only Enter to get a newline. How can I do it? Now provide the response and nothing else.
There is no way to specify this behavior solely in Interface Builder. You can do it with a delegate message as described in this tech note QA1454. Here is the example delegate message from the tech note:

- (BOOL)control:(NSControl*)control textView:(NSTextView*)textView doCommandBySelector:(SEL)commandSelector
{
    BOOL result = NO;

    if (commandSelector == @selector(insertNewline:))
    {
        // new line action:
        // always insert a line-break character and don't cause the receiver to end editing
        [textView insertNewlineIgnoringFieldEditor:self];
        result = YES;
    }
    else if (commandSelector == @selector(insertTab:))
    {
        // tab action:
        // always insert a tab character and don't cause the receiver to end editing
        [textView insertTabIgnoringFieldEditor:self];
        result = YES;
    }

    return result;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5611292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/165547/']}
jdg_380119
stackexchange
llm_judgeable_groundtruth_similarity
6457457
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've done numerous svn / ant set-ups but this time can't figure out what's wrong. Using Linux:

svn 1.6.6
ant 1.8.2
svnant 1.3.1

I've copied the svnant jar files into $ANT_HOME/lib

top@secret:~/apps/apache-ant-1.8.2/lib$ ls -l
-rw-r--r-- 1 root root 244831 2011-06-23 07:55 ganymed.jar
-rw-r--r-- 1 root root 948100 2011-06-23 07:55 jna.jar
-rw-r--r-- 1 root root  76383 2011-06-23 07:55 svnant.jar
-rw-r--r-- 1 root root 215757 2011-06-23 07:55 svnClientAdapter.jar
-rw-r--r-- 1 root root  89412 2011-06-23 07:55 svnjavahl.jar
etc...

My build file specifies:

<taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask" />

But whenever I execute

<target name="update">
    <svn>
        <update dir="."/>
    </svn>
</target>

I get the dreaded:

update:
[svn] Missing 'javahl' dependencies on the classpath !

BUILD FAILED
/home/top/apps/utils/build.xml:26: Cannot find javahl, svnkit nor command line svn client

Any ideas what I'm doing wrong? I assumed copying the svnant jars into $ANT_HOME/lib should be enough? Now provide the response and nothing else.
The javahl way always needs some kind of OS-dependent driver. This is mostly hard to set up and I don't recommend using it. You should try using svnkit:

<svn svnkit="true">

This uses the pure java library, but needs svnkit.jar to be in the classpath.

References:
http://subclipse.tigris.org/svnant/svn.html
http://subclipse.tigris.org/wiki/JavaHL

I only use the fallback/commandline version, where the svn executable is in the PATH. This works in most cases without problems with the following settings:

<svn svnkit="false" javahl="false">

Note: These settings are for svnant 1.3.1 and prior. Since version 1.3.2 you can use <svnSetting />, which is described in the other answer.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6457457', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60956/']}
jdg_380120
stackexchange
llm_judgeable_groundtruth_similarity
31628
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Openfiler 2.3 on an HP ML370 G5, Smart Array P400, SAS disks combined using RAID 1+0. I set up an NFS share from ext3 partition using Openfiler's web based configuration, and I succeeded to mount the share from another host. Both host are connected using dedicated gigabit link. Simple benchmark using dd : $ dd if=/dev/zero of=outfile bs=1000 count=2000000 2000000+0 records in 2000000+0 records out 2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s I see it can achieve moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files ( .php and .jpg , around 1-4 kB per file) of total size ~300 MB, the cp process ends in about 10 minutes. Is NFS not suitable for small file transfer like above case? Or is there some parameters that must be adjusted? Now provide the response and nothing else.
There are many reasons why transferring many small files will always be slower than transferring a single large file. For a read, the files are more likely to be scattered around the disk, requiring seeks all over the place to get them. As Evan mentioned, there's also metadata involved in the case of NFS (or any other file system for that matter!) which also complicates things. You can try increasing your rsize and wsize parameters to the NFS mount and see if that will help performance a bit. Also check out this question on tuning NFS for minimum latency as it has a lot of helpful advice that will help in the case of many small file transfers.
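The per-file overhead is easy to demonstrate even on a local disk (the effect is magnified over NFS, where every open/close involves extra round trips and metadata traffic). A rough, illustrative Python sketch — the absolute timings will vary widely by machine and filesystem:

```python
import os
import tempfile
import time

def write_one_big(dirpath, total, chunk=1000):
    """Write `total` bytes into a single file, chunk at a time."""
    with open(os.path.join(dirpath, "big.bin"), "wb") as f:
        for _ in range(total // chunk):
            f.write(b"x" * chunk)

def write_many_small(dirpath, count, size=1000):
    """Write the same data as `count` separate small files."""
    for i in range(count):
        with open(os.path.join(dirpath, "f%d.bin" % i), "wb") as f:
            f.write(b"x" * size)

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter(); write_one_big(d, 1000 * 1000); t1 = time.perf_counter()
    t2 = time.perf_counter(); write_many_small(d, 1000);     t3 = time.perf_counter()
    print("one 1 MB file:     %.4fs" % (t1 - t0))
    print("1000 x 1 kB files: %.4fs (per-file open/close/metadata overhead)" % (t3 - t2))
```

Both variants move the same 1 MB of data; the many-small-files version pays the open/close and metadata cost a thousand times instead of once, which is exactly the cost that dd's single-file benchmark never sees.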
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/31628', 'https://serverfault.com', 'https://serverfault.com/users/10236/']}
jdg_380121
stackexchange
llm_judgeable_groundtruth_similarity
4139806
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: One of the solutions tried to write the integral as $$\int e^{x^2+x}((2x+1)p(x) + p'(x))dx$$ where $p(x)=2x^2+bx+c$. I have no idea why they chose the lead coefficient to be $2$. Going with this, $b=1$ and $c=0$, so $$\int e^{x^2+x} ((2x+1)(2x^2+x)+4x+1)dx$$ Now even after substituting $x^2+x=t$, the integration still seems a mess, not in the desired $\int e^t(f(t)+f'(t))dt$ form. Is there a better solution or a better explanation for this one? Now provide the response and nothing else.
$$\begin{align}\int e^{x^2+x}(4x^3+4x^2+5x+1)dx &= \int e^{x^2+x}(4x^3 + 2x^2 + 2x^2 + x + 4x + 1)dx\\&= \int e^{x^2+x}(2x+1)(2x^2 + x) + e^{x^2+x}(4x+1)dx\\&= \int e^{f(x)}(f'(x)g(x) + g'(x)) dx \end{align}$$ where $f(x) = x^2 + x$ and $g(x) = 2x^2 + x$ . You should be able to see (integration by parts is one way) that $$\int e^{f(x)}(f'(x)g(x) + g'(x)) dx = e^{f(x)} g(x) + C$$ To summarize, $$\int e^{x^2+x}(4x^3+4x^2+5x+1)dx = e^{x^2 + x} (2x^2 + x) + C$$ Comments on your solution: Great work, you almost completed it! It's worth noting that instead of $\int e^{t} (f(t) + f'(t)) dt$ , you really want a polynomial in the exponent. Take $f(x) = x^2 + x$ (the obvious choice) and see where you can get with differentiating $e^{f(x)}g(x)$ ? That's how I would reverse engineer the solution. Once you know $g(x)$ , the game is over.
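As a quick numerical sanity check (not part of the solution itself), you can differentiate the result $e^{x^2+x}(2x^2+x)$ with a central difference and compare it against the integrand at a few sample points; a small Python sketch:

```python
import math

def F(x):
    """Candidate antiderivative: e^(x^2+x) * (2x^2 + x)."""
    return math.exp(x**2 + x) * (2 * x**2 + x)

def integrand(x):
    """The original integrand: e^(x^2+x) * (4x^3 + 4x^2 + 5x + 1)."""
    return math.exp(x**2 + x) * (4 * x**3 + 4 * x**2 + 5 * x + 1)

def dF(x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(dF(x) - integrand(x)) < 1e-4 * (1 + abs(integrand(x)))
print("F'(x) matches the integrand at the sample points")
```
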
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4139806', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/690228/']}
jdg_380122
stackexchange
llm_judgeable_groundtruth_similarity
1124340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Unlike std::map and std::hash_map, corresponding versions in Qt do not bother to return a reference. Isn't it quite inefficient, if I build a hash for quite bulky class? EDIT especially since there is a separate method value(), which could then return it by value. Now provide the response and nothing else.
const subscript operators of STL containers can return a reference-to-const because they flat out deny calls to it with indexes that do not exist in the container. Behaviour in this case is undefined. Consequently, as a wise design choice, std::map doesn't even provide a const subscript operator overload. QMap tries to be a bit more accommodating, provides a const subscript operator overload as syntactic sugar, runs into the problem with non-existing keys, again tries to be more accomodating, and returns a default-constructed value instead. If you wanted to keep STL's return-by-const-reference convention, you'd need to allocate a static value and return a reference to that . That, however, would be quite at odds with the reentrancy guarantees that QMap provides, so the only option is to return by value. The const there is just sugar coating to prevent some stupid mistakes like constmap["foo"]++ from compiling. That said, returning by reference is not always the most efficient way. If you return a fundamental type, or, with more aggressive optimisation, when sizeof(T)<=sizeof(void*) , return-by-value often makes the compiler return the result in a register directly instead of indirectly (address to result in register) or—heaven forbid—on the stack. The other reason (besides premature pessimisation) to prefer pass-by-const-reference, slicing, doesn't apply here, since both std::map and QMap are value-based, and therefore homogeneous. For a heterogeneous container, you'd need to hold pointers, and pointers are fundamental types (except smart ones, of course). That all said, I almost never use the const subscript operator in Qt. Yes, it has nicer syntax than find() + *it , but invariably, you'll end up with count() / contains() calls right in front of the const subscript operator, which means you're doing the binary search twice . 
And then you won't notice the minuscule differences in return value performance anyway :) For value() const, though, I agree that it should return reference-to-const, defaulting to the reference-to-default-value being passed in as the second argument, but I guess the Qt developers felt that was too much magic.
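The "binary search twice" point at the end is language-independent. Here's a tiny illustration in Python rather than Qt (the instrumented dict below is invented for the example): a contains-check followed by a subscript performs two lookups, while a single find()/get()-style access performs one:

```python
class CountingDict(dict):
    """dict that counts how many key lookups it performs."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.lookups = 0
    def __contains__(self, key):
        self.lookups += 1
        return super().__contains__(key)
    def __getitem__(self, key):
        self.lookups += 1
        return super().__getitem__(key)
    def get(self, key, default=None):
        self.lookups += 1
        return super().get(key, default)

d = CountingDict(foo=1)

# Pattern 1: contains() check followed by a subscript -- two searches.
if "foo" in d:
    _ = d["foo"]
print(d.lookups)

d.lookups = 0
# Pattern 2: a single find()/get()-style access -- one search.
_ = d.get("foo")
print(d.lookups)
```

In the QMap case the wasted lookup is a full binary search over the tree, which is the real cost to avoid; the return-by-value of the result is noise by comparison.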
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1124340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124944/']}
jdg_380123
stackexchange
llm_judgeable_groundtruth_similarity
20088743
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use the NAudio.Lame library in an MVC4 application and am getting the error: Unable to load DLL 'libmp3lame.32.dll': The specified module could not be found. I added the library via NuGet. I was able to get the library to work fine with a Windows Forms application, so I believe the problem is specific to MVC4. I tried the advice from the library author here: https://stackoverflow.com/a/20065606/910348 Now provide the response and nothing else.
The problem turns out to be that the native DLLs ( libmp3lame.32.dll and libmp3lame.64.dll ) cannot be found because the current directory that the web server process is executing from is not the website's bin folder (where the DLLs reside) and the search path does not include the bin folder.

What you need is to add the bin folder to the PATH environment variable, which will enable the LoadLibrary API call to locate the DLLs. Here's a method you can call that will do this for you:

public static void CheckAddBinPath()
{
    // find path to 'bin' folder
    var binPath = Path.Combine(new string[] { AppDomain.CurrentDomain.BaseDirectory, "bin" });
    // get current search path from environment
    var path = Environment.GetEnvironmentVariable("PATH") ?? "";
    // add 'bin' folder to search path if not already present
    if (!path.Split(Path.PathSeparator).Contains(binPath, StringComparer.CurrentCultureIgnoreCase))
    {
        path = string.Join(Path.PathSeparator.ToString(), new string[] { path, binPath });
        Environment.SetEnvironmentVariable("PATH", path);
    }
}

Place that in your controller and call it right before you create the LameMP3FileWriter instance. It might work if you put it in Global.asax.cs and call it from Application_Start(). Try it and let me know if it works there.

I've put a Wiki article about this on the project site here.
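The core of CheckAddBinPath is just "append the bin directory to PATH unless an equivalent entry is already there". For illustration, the same logic in a few lines of Python (the function name and paths here are made up for the sketch, not part of NAudio.Lame):

```python
import os

def ensure_on_path(directory, environ):
    """Append `directory` to environ['PATH'] unless an equivalent entry exists."""
    path = environ.get("PATH", "")
    entries = path.split(os.pathsep)
    # case-insensitive comparison, mirroring StringComparer.CurrentCultureIgnoreCase
    if not any(entry.lower() == directory.lower() for entry in entries):
        environ["PATH"] = path + os.pathsep + directory
    return environ["PATH"]

env = {"PATH": "/usr/bin"}
print(ensure_on_path("/srv/site/bin", env))   # bin dir appended once
print(ensure_on_path("/SRV/SITE/BIN", env))   # unchanged: already present, case-insensitively
```

The duplicate check matters because the method may be called on every request; without it, PATH would grow without bound.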
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20088743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/910348/']}
jdg_380124
stackexchange
llm_judgeable_groundtruth_similarity
3070274
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a way to do something like this:

// style.css
@def borderSize '2px';

.style {
    width: borderSize + 2;
    height: borderSize + 2;
}

where the width and height attributes would end up having values of 4px. Now provide the response and nothing else.
Sometimes I use the following:

@eval BORDER_SIZE_PLUS_2 2+2+"px"; /* GWT evaluates this at compile time! */

Oddly, this only works if you don't put any spaces between the + operator and the operands. Also, in @eval you can't use constants that were previously defined by @def. You can however use constants that are defined as static fields in one of your Java classes:

@eval BORDER_SIZE_PLUS_2 com.example.MyCssConstants.BORDER_SIZE+2+"px";

Or you could let the calculation be performed completely by Java:

@eval WIDTH com.example.MyCssCalculations.width();   /* static function, no parameters! */
@eval HEIGHT com.example.MyCssCalculations.height();

.style {
    width: WIDTH;
    height: HEIGHT;
}

But what I would actually like to do is very similar to your suggestion:

@def BORDER_SIZE 2;

.style {
    width: value(BORDER_SIZE + 2, 'px');  /* not possible */
    height: value(BORDER_SIZE + 3, 'px');
}

I don't think that's possible in GWT 2.0. Maybe you find a better solution - here's the Dev Guide page on this topic.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3070274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317404/']}
jdg_380125
stackexchange
llm_judgeable_groundtruth_similarity
441758
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have various spreadsheets that employ deliciously complicated macros and third party extensions to produce complicated models. I'm working on a project that involves slightly tweaking various inputs and seeing the results. Rather than doing this by hand or writing VBA, I'd like to see if I can write a python script to drive this. In other words, the python script will start up, load the excel sheet, and then interact with the sheet by making minor changes in some cells and seeing how they affect other cells. So, my question is twofold: What is the best library to use to drive excel from python in such fashion? Where's the best documentation/examples on using said library? Cheers,/YGA Now provide the response and nothing else.
For controlling Excel, use pywin32, like @igowen suggests. Note that it is possible to use static dispatch. Use makepy.py from the pywin32 project to create a python module with the python wrappers. Using the generated wrappers simplifies development, since for instance ipython gives you tab completion and help during development.

Static dispatch example:

x:> makepy.py "Microsoft Excel 11.0 Object Library"
...
Generating...
Importing module
x:> ipython
> from win32com.client import Dispatch
> excel = Dispatch("Excel.Application")
> wb = excel.Workbooks.Append()
> range = wb.Sheets[0].Range("A1")
> range.[Press Tab]
range.Activate            range.Merge
range.AddComment          range.NavigateArrow
range.AdvancedFilter      range.NoteText
...
range.GetOffset           range.__repr__
range.GetResize           range.__setattr__
range.GetValue            range.__str__
range.Get_Default         range.__unicode__
range.GoalSeek            range._get_good_object_
range.Group               range._get_good_single_object_
range.Insert              range._oleobj_
range.InsertIndent        range._prop_map_get_
range.Item                range._prop_map_put_
range.Justify             range.coclass_clsid
range.ListNames           range.__class__
> range.Value = 32
...

Documentation links:

The O'Reilly book Python Programming on Win32 has an Integrating with Excel chapter.
Same book, free sample chapter Advanced Python and COM covers makepy in detail.
Tutorials
win32com documentation, I suggest you read this first.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/441758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/54873/']}
jdg_380126
stackexchange
llm_judgeable_groundtruth_similarity
31656480
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have read all the similar questions and did everything that was described in them, but it did not help.

Google Maps API v2 is Enabled and the API key is correct.

I just chose GoogleMapsActivity in "New project", then created a key.jks, created a SHA1 by keytool, created a public API access key, and put my API key in the manifest. I tried: clean-rebuild-uninstall app-install; updated the API key many times; created a new project with a new key.jks (and all over again); deleted and recreated the API key.

Here is my manifest:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.dandewine.user.thinkmobiletest" >

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

    <!-- The ACCESS_COARSE/FINE_LOCATION permissions are not required to use
         Google Maps Android API v2, but are recommended. -->

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <meta-data
            android:name="com.google.android.gms.version"
            android:value="@integer/google_play_services_version" />
        <meta-data
            android:name="com.google.android.maps.v2.API_KEY"
            android:value="AIza**************************" />
        <activity
            android:name=".ActivityMain"
            android:label="@string/title_activity_activity_main" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Here is my activity:

public class ActivityMain extends FragmentActivity implements OnMapReadyCallback {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.fragment_map);
        SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map);
        mapFragment.getMapAsync(this);
    }

    @Override
    public void onMapReady(GoogleMap googleMap) {
        googleMap.addMarker(new MarkerOptions()
                .position(new LatLng(0, 0))
                .title("Marker"));
    }
}

Logcat:

E/Google Maps Android API﹕ Authorization failure. Please see https://developers.google.com/maps/documentation/android/start for how to correctly set up the map.
07-27 14:52:37.551 25002-25035/com.dandewine.user.thinkmobiletest E/Google Maps Android API﹕ In the Google Developer Console (https://console.developers.google.com)
Ensure that the "Google Maps Android API v2" is enabled.
Ensure that the following Android Key exists:
API Key: AIza****************************
Android Application (<cert_fingerprint>;<package_name>): 8C:2B:4C:F7:CF:FB:EC:D5:DC:D7:D0:5D:6E:30:49:74:97:18:57:88;com.dandewine.user.thinkmobiletest

UPDATE: I have different SHA1 fingerprints in the Google dev. console and in the logs — how do I deal with that? Can anyone help with advice? Now provide the response and nothing else.
It sounds like you're using the SHA1 fingerprint from the keystore that you will be using to generate a signed apk. For debugging/running from Android Studio, you need to use the SHA1 fingerprint that Android Studio uses to sign the apk.

Note that you can get this SHA1 fingerprint by using the command line:

For Mac or Linux:

keytool -list -v -keystore ~/.android/debug.keystore

For Windows:

keytool -list -v -keystore C:\User\YourUser\.android\debug.keystore

with password "android".

However, since you already have the correct value in your logs, just copy this from your logs (I modified it here, don't copy from here):

8C:2B:4C:F7:CF:FB:EC:D5:DC:D7:D0:5D:6E:30:49:xx:xx:xx:xx:xx;com.dandewine.user.thinkmobiletest

And paste that into your API Key in the developer console. You can add multiple fingerprint/package values to each API key, one per line (you can also see that in the instructions when you are editing an API key).

You can also configure a different API key for debug and release; if you do that, take a look at this answer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31656480', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4755017/']}
jdg_380127
stackexchange
llm_judgeable_groundtruth_similarity
23932125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a form in which the user inputs various information. The input chosen name allows the user to enter a username of choice but a HIDDEN INPUT needs to be integrated so that a system username is created. The system username is generated on page submit by a javascript function, and it consists of the first alphabetic characters found in the Family name, street address, Given name; the numerical day of the month; and the numerical seconds field of the time of submission. E.g.: A user registers with name Bernardo O’Higgins, address 213 Liberator St, at 12:31:16 on 25 April 2014. His system username is OLB2516. Just so I can see if it works, at the moment the system username is not hidden but just a normal text box. I am totally lost as I do not know how to go about this and hoping somebody can help me? Here is my php file with form integrated.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<link rel="stylesheet" type="text/css" href="stylesheet.css" />
<title>Registration</title>
</head>
<body>
<?php
$conn = mysql_connect("localhost", "....", ".....");
mysql_select_db("tipping291", $conn)
    or die ('Database not found ' . mysql_error() );
mysql_close($conn);
?>
<div id="container">
<div id="header"><h1>Registration</h1></div>
<div id="menu">
<a href="home.php"><h2>Homepage</h2></a><br />
<a href="rego.php"><h2>Registration</h2></a><br />
<a href="userlogin.php"><h2>User Login</h2></a><br />
<a href="adminlogin.php"><h2>Administrator Login</h2></a><br />
<a href="tipping.php"><h2>Tipping</h2></a><br />
<a href="termsnconditions.php"><h2>Terms & Conditions</h2></a><br />
</div>
<form id="rego" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>" method="post" onSubmit="return validateForm()">
<label>Given Name:</label> <input type="text" name="gname"><br /><br />
<label>Middle Name: </label><input type="text" name="mname"><br /><br />
<label>Family Name:</label> <input type="text" name="surname"><br /><br />
<label>Chosen Username:</label> <input type="text" name="username"><br /><br />
<label>Address:</label> <input type="text" name="address"><br /><br />
<label>Postcode: </label><input type="text" name="postcode"><br /><br />
<label>State:</label> <input type="text" name="state"><br /><br />
<label>Tel number: </label><input type="text" name="tel"><br /><br />
<label>Password:</label> <input type="password" name="password"><br /><br />
<label>Password confirmation:</label> <input type="password" name="passconfirm"><br /><br />
<label>System username</label> <input type="text" name="susername" >
<input type="submit" value="submit">
</form>
</div>
</body>
</html>

CAN SOMEBODY PLEASE HELP ME!!!!! I HAVEN'T HAD ANY SUCCESS Now provide the response and nothing else.
Include this line: ROWS IDENTIFIED BY '<plugin>'. With that your query should look like

LOAD XML LOCAL INFILE "D:\\test.xml"
INTO TABLE mytable
ROWS IDENTIFIED BY '<plugin>';

Looks like your XML file formation is not correct, and so even though 1 row gets inserted, not all the values get extracted (they remain NULL). Make small changes as below.

Create the table structure:

CREATE TABLE mytable (
  plugin_name varchar(255),
  title varchar(255),
  description varchar(255),
  `file` varchar(255),
  `install` varchar(255),
  hook varchar(255),
  phrase varchar(255)
);

Change your XML file to:

<?xml version="1.0" encoding="utf-8" ?>
<plugin plugin_name="tree">
  <title>Test</title>
  <description>some description</description>
  <file>test.tmp</file>
  <install>![CDATA[ global $test; ]]</install>
  <hook name="hookname">![CDATA[ global $local; ]]</hook>
  <phrase key="category">![CDATA[Show categories]]</phrase>
</plugin>

Now if you use

LOAD XML LOCAL INFILE "D:\\test.xml"
INTO TABLE mytable
ROWS IDENTIFIED BY '<plugin>';

all data gets extracted fine.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23932125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3641114/']}
jdg_380128
stackexchange
llm_judgeable_groundtruth_similarity
597813
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a piece of code which deals with math variables and indices, and I'd need to print subscripts and superscripts on a CLI, is there a (possibly cross-platform) way to do that? I'm working in vanilla C++. Note: I'd like this to be cross-platform, but since from the first answers this doesn't seem to be possible I'm working under MacOS and Ubuntu Linux (so bash). Thank you Now provide the response and nothing else.
Since most CLIs are really only terminals (pretty dumb ones mostly, but sometimes with color), the only cross-platform way I've ever done this is by allocating multiple physical lines per virtual line, such as:

        2
f(x) = x  + log x
                2

It's not ideal but it's probably the best you're going to get without a GUI.

Following your extra information as to what platforms you're mainly interested in:

With Ubuntu at least, gnome-terminal runs in UTF-8 mode by default, so the following code shows how to generate the superscripts and subscripts:

#include <stdio.h>

static char *super[] = {"\xe2\x81\xb0", "\xc2\xb9", "\xc2\xb2",
    "\xc2\xb3", "\xe2\x81\xb4", "\xe2\x81\xb5", "\xe2\x81\xb6",
    "\xe2\x81\xb7", "\xe2\x81\xb8", "\xe2\x81\xb9"};

static char *sub[] = {"\xe2\x82\x80", "\xe2\x82\x81", "\xe2\x82\x82",
    "\xe2\x82\x83", "\xe2\x82\x84", "\xe2\x82\x85", "\xe2\x82\x86",
    "\xe2\x82\x87", "\xe2\x82\x88", "\xe2\x82\x89"};

int main(void) {
    int i;

    printf("f(x) = x%s + log%sx\n", super[2], sub[2]);
    for (i = 0; i < 10; i++) {
        printf("x%s x%s ", super[i], sub[i]);
    }
    printf("y%s%s%s z%s%s\n", super[9], super[9], super[9], sub[7], sub[5]);

    return 0;
}

The super and sub char* arrays are the UTF-8 encodings for the Unicode code points for numeric superscripts and subscripts (see here ). The given program will output my formula from above (on one line instead of three), then another test line for all the choices and a y-super-999 and z-sub-75 so you can see what they look like.

MacOS doesn't appear to use gnome-terminal as a terminal program but references here and here seem to indicate the standard terminal understands UTF-8 (or you could download and install gnome-terminal as a last resort).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/597813', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/25418/']}
jdg_380129
stackexchange
llm_judgeable_groundtruth_similarity
644678
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Maybe this is why I can't seem to understand the first thing about QFT. In the article on second quantization , they say that the name shouldn't really be "second quantization", because: One is not quantizing "again", as the term "second" might suggest; the field that is being quantized is not a Schrödinger wave function that was produced as the result of quantizing a particle, but is a classical field (such as the electromagnetic field or Dirac spinor field), essentially an assembly of coupled oscillators, that was not previously quantized. WHAT??? I thought the Dirac equation was just the relativistic Schrodinger equation, albeit with the added benefit of spin and relativistic corrections. It still gives discrete energy states, involves complex numbers, etc. etc. Whereas, or so I thought, a classical field is something where you can directly measure the value at any point, like you can measure the electric/magnetic field by putting a stationary/moving charge there. So this must be why the QFT wavefunction is now a functional of Dirac+EM field configurations, right? But I'm still missing that key conceptual link: how does a given Dirac configuration correspond to a single physical reality (such that it is amenable to quantization!), and not a probability distribution like the Schrodinger wavefunction? How do you measure the Dirac field? Or if you can't, then why doesn't that matter, and how do the structures of QFT connect to experiment? Now provide the response and nothing else.
Just to put it in order: the fields are called classical not because they are directly measurable (the electromagnetic vector potential is classical but not measurable, either) but because they are just (c-number) fields, like $$ \psi: \mathbb R^n \rightarrow \mathbb C \qquad \text{or equivalently} \qquad \psi(x) \in \mathbb C \quad \text{for} \quad x \in \mathbb R^n,$$ as opposed to quantum fields, which are operator valued, $$ \hat \psi(x): \mathcal F \rightarrow \mathcal F$$ for every point $x \in \mathbb R^n$ in space, where $\mathcal F$ is the Fock space in which they act. In other words, a classical (Schrödinger, Dirac) wavefunction $\psi$ is an element of a Hilbert space $\mathcal H$ itself, $\psi \in \mathcal H$, whereas a quantum (Schrödinger, Dirac) field $\hat \psi(x)$ is an operator in Fock space $\mathcal F$ (which is mathematically also a Hilbert space). Regarding first and second quantization : in Hamiltonian mechanics one postulates the Poisson brackets for the position and momentum to become commutators of the position and momentum operators, $$ \{q^i, p_j\} = \delta^i_j \qquad \rightarrow \qquad [Q^i, P_j] = i\, \hbar\, \delta^i_j$$ while in field theory one postulates the Poisson brackets of the field and its canonical momentum to become the commutators of the field and its momentum operators, $$ \{\phi(x), \pi(y)\} = \delta(x-y) \qquad \rightarrow \qquad [\Phi(x), \Pi(y)] = i\, \hbar\, \delta(x-y).$$ In both cases one gets an algebra of operators and looks for its representations. In the first case it is the Hilbert space $\mathcal H$, in the second the Fock space $\mathcal F$. Elements of $\mathcal H$ are (Schrödinger, Dirac) first-quantized wavefunctions $\psi$ which, treated as classical fields, are second-quantized to become operators in Fock space $\mathcal F$. I personally prefer the names quantum mechanics and quantum field theory as quantized versions of classical mechanics and classical field theory, respectively.
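As a concrete, standard-textbook sketch (not part of the original answer) of how the postulated field commutator is realized on Fock space, one can expand the field operator in mode operators (with $\hbar = 1$, for a free real scalar field):

```latex
\hat\Phi(x) = \int \frac{d^3k}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_k}}
  \left( \hat a_{\mathbf k}\, e^{i\mathbf k\cdot\mathbf x}
       + \hat a_{\mathbf k}^\dagger\, e^{-i\mathbf k\cdot\mathbf x} \right),
\qquad
[\hat a_{\mathbf k}, \hat a_{\mathbf k'}^\dagger] = (2\pi)^3\,\delta^3(\mathbf k - \mathbf k')
```

from which $[\hat\Phi(x), \hat\Pi(y)] = i\,\delta^3(\mathbf x - \mathbf y)$ follows, and repeated application of the $\hat a^\dagger_{\mathbf k}$ to the vacuum builds up the Fock space $\mathcal F$.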
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/644678', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/230823/']}
jdg_380130
stackexchange
llm_judgeable_groundtruth_similarity
37491626
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote this code to teach myself about JavaScript closures: function1 = function(){ var variable = "foo" var function2 = function(argument){ console.log(variable + argument); } return function2}function3 = function1();function3("bar"); This prints "foobar" as expected. But where does the variable live? Does it become a property of function3, or stored somewhere else in function3? Does JavaScript traverse some kind of closure chain, similarly to how it traverses the prototype chain? Is it stored in memory somewhere else? I am trying to understand this more deeply. Now provide the response and nothing else.
tl;dr: where does the variable live? In the environment it was defined in. Does it become a property of function3, or stored somewhere else in function3? No. Does JavaScript traverse some kind of closure chain, similarly to how it traverses the prototype chain? Yes. Is it stored in memory somewhere else? Yes.

tl;dr 2: Functions keep a reference to the environment they are created in. When a function is called it creates a new environment whose parent is the environment the function kept the reference to.

Longer explanation:

Whenever a function is executed a new lexical environment is created. The environment has two "fields": an environment record where all the variables are being tracked, and an outer lexical environment that refers, as the name suggests, to the "parent" lexical environment. So when your code example is evaluated, the initial state of the memory (before executing anything) might look like this (simplified):

+-(Global) lexical environment-+     +-Environment Record-+
+-------------+----------------+     +---------+----------+
| Environment |       *--------+---> |function1|undefined |
| Record      |                |     +---------+----------+
+-------------+----------------+     |function3|undefined |
| Outer       |                |     +---------+----------+
| lexical     |    (empty)     |
| environment |                |
+-------------+----------------+

The global environment doesn't have any outer environment because it is at the top. function1 and function3 are two bindings that haven't been initialized yet (the assignment wasn't evaluated yet).

After creating the function (evaluating function1 = function() { ... }), the memory looks like this:

 +------------------------------------------------------------------------+
 |                                                                        |
 v                                                                        |
+-(Global) lexical environment-+   +-Environment Record-+   +-----Function Object-+---+
+-------------+----------------+   +---------+----------+   +---------------+-----+---+
| Environment |       *--------+-->|function1|    *-----+-->|[[Environment]]|  *      |
| Record      |                |   +---------+----------+   +---------------+---------+
+-------------+----------------+   |function3|undefined |   | name          |function1|
| Outer       |                |   +---------+----------+   +---------------+---------+
| lexical     |    (empty)     |
| environment |                |
+-------------+----------------+

Now function1 has a value, a function object. Function objects have multiple internal (e.g. [[Environment]] ) and external (e.g. name ) properties. As the name implies, internal properties cannot be accessed from user code. The [[Environment]] property is very important. Notice how it refers back to the lexical environment the function was created in!

The next step is executing function3 = function1() , i.e. calling function1. As I said at the very beginning, whenever a function is executed a new lexical environment is created.
Let's look at the memory just after entering the function:

    +------------------------------------------------------------------------+
    |                                                                        |
    v                                                                        |
   +-(Global) lexical environment-+   +-Environment Record-+   +-----Function Object-+---+
   +-------------+----------------+   +---------+----------+   +---------------+-----+---+
   | Environment |       *--------+-->|function1|    *-----+-->|[[Environment]]|  *      |
   | Record      |                |   +---------+----------+   +---------------+---------+
+> +-------------+----------------+   |function3|undefined |   | name          |function1|
|  | Outer       |                |   +---------+----------+   +---------------+---------+
|  | lexical     |    (empty)     |
|  | environment |                |
|  +-------------+----------------+
|
|
|  +-----lexical environment------+   +-Environment Record-+
|  +-------------+----------------+   +---------+----------+
|  | Environment |       *--------+-->|variable |undefined |
|  | Record      |                |   +---------+----------+
|  +-------------+----------------+   |function2|undefined |
|  | Outer       |                |   +---------+----------+
|  | lexical     |       *        |
|  | environment |       |        |
|  +-------------+-------+--------+
|                        |
+------------------------+

This looks very similar to the structure of the global environment! We have a lexical environment that has an environment record with two uninitialized bindings. But the big difference now is that the "outer lexical environment" points to the global lexical environment. How is that possible? When calling function1 and creating a new lexical environment, we set the value of the new environment's "outer lexical environment" field to the value of function1 's [[Environment]] field. This is where the scope chain is created.
Now, after executing function1 , the memory has this structure:

    +------------------------------------------------------------------------+
    |                                                                        |
    v                                                                        |
   +-(Global) lexical environment-+   +-Environment Record-+   +-----Function Object-+---+
   +-------------+----------------+   +---------+----------+   +---------------+-----+---+
   | Environment |       *--------+-->|function1|    *-----+-->|[[Environment]]|  *      |
   | Record      |                |   +---------+----------+   +---------------+---------+
+> +-------------+----------------+   |function3|    *     |   | name          |function1|
|  | Outer       |                |   +---------+----+-----+   +---------------+---------+
|  | lexical     |    (empty)     |                  |
|  | environment |                |                  |
|  +-------------+----------------+                  +--------------------------+
|                                                                               |
|      +----------------------------------------------------------------+      |
|      v                                                                |      |
|  +-----lexical environment------+   +-Environment Record-+            |      v
|  +-------------+----------------+   +---------+----------+            |
|  | Environment |       *--------+-->|variable |  'foo'   |        +-----Function Object-+---+
|  | Record      |                |   +---------+----------+        +---------------+-----+---+
|  +-------------+----------------+   |function2|    *-----+------->|[[Environment]]|  *      |
|  | Outer       |                |   +---------+----------+        +---------------+---------+
|  | lexical     |       *        |                                 | name          |function2|
|  | environment |       |        |                                 +---------------+---------+
|  +-------------+-------+--------+
|                        |
+------------------------+

Similarly to function1 , function2 has a reference to the environment created by calling function1 . In addition, function3 refers to the function we created because we return it from function1 .
Last step: calling function3('bar') :

    +------------------------------------------------------------------------+
    |                                                                        |
    v                                                                        |
   +-(Global) lexical environment-+   +-Environment Record-+   +-----Function Object-+---+
   +-------------+----------------+   +---------+----------+   +---------------+-----+---+
   | Environment |       *--------+-->|function1|    *-----+-->|[[Environment]]|  *      |
   | Record      |                |   +---------+----------+   +---------------+---------+
+> +-------------+----------------+   |function3|    *     |   | name          |function1|
|  | Outer       |                |   +---------+----+-----+   +---------------+---------+
|  | lexical     |    (empty)     |                  |
|  | environment |                |                  |
|  +-------------+----------------+                  +--------------------------+
|                                                                               |
|      +----------------------------------------------------------------+      |
|      v                                                                |      |
|  +-----lexical environment------+   +-Environment Record-+            |      v
|  +-------------+----------------+   +---------+----------+            |
|  | Environment |       *--------+-->|variable |  'foo'   |        +-----Function Object-+---+
|  | Record      |                |   +---------+----------+        +---------------+-----+---+
|+>+-------------+----------------+   |function2|    *-----+------->|[[Environment]]|  *      |
|| | Outer       |                |   +---------+----------+        +---------------+---------+
|| | lexical     |       *        |                                 | name          |function2|
|| | environment |       |        |                                 +---------------+---------+
|| +-------------+-------+--------+
+|-----------------------+
 |
 |  +-----lexical environment------+   +-Environment Record-+
 |  +-------------+----------------+   +---------+----------+
 |  | Environment |       *--------+-->|argument |  'bar'   |
 |  | Record      |                |   +---------+----------+
 |  +-------------+----------------+
 |  | Outer       |                |
 |  | lexical     |       *        |
 |  | environment |       |        |
 |  +-------------+-------+--------+
 +-----------------------+

Similarly here, a new environment is created and its "outer lexical environment" field points to the environment created when function1 was called. Now, looking up the value of argument is straightforward, because it exists in the environment's own record.
But when looking up variable , the following happens: Since it doesn't exist in the environment's own record, it looks at its "outer lexical environment"'s record. It can do that because it has a reference to it.
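To make the lookup concrete, here is the example from the question again, adapted to return instead of log so the result can be checked directly (a sketch, not the original code verbatim):

```javascript
// `variable` lives in the environment created by calling function1;
// function3 reaches it through the scope chain, not through a
// property stored on the function object itself.
const function1 = function () {
  var variable = "foo"; // tracked in this call's environment record
  var function2 = function (argument) {
    // found via the outer lexical environment reference
    return variable + argument;
  };
  return function2;
};

const function3 = function1();
console.log(function3("bar"));        // "foobar"
console.log("variable" in function3); // false: not a property of function3
```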
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37491626', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5800592/']}
jdg_380131
stackexchange
llm_judgeable_groundtruth_similarity
43894619
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider: struct Point { int x, y; };int main(){ const auto [x, y] = Point{};} This code compiles fine with gcc 7.1 in C++17 mode, however this one: #include <utility>struct Point { int x, y; };int main(){ const auto [x, y] = Point{};} gives an error: bug.cpp: In function 'int main()':bug.cpp:7:16: error: 'std::tuple_size<const Point>::value' is not an integral constant expression const auto [x, y] = Point{}; ^~~~~~ What's going on here? A compiler bug, or is this how structured bindings are supposed to work? Now provide the response and nothing else.
This is compiler bug 78939 . Although it's a bit more complicated than that - there were a few issues between the core language and the library that were mutually contradictory ( GB 20 , LWG 2770 , and LWG 2446 ), which led to the kind of behavior that gcc/libstdc++ exhibit here. It is certainly intended that the code work with or without #include <utility> , it's just a matter of the standard wording having gotten there properly. Yes, classes whose non-static data members are all public, and which have no anonymous union members, should be usable in structured binding declarations per [dcl.struct.bind]/4 : Otherwise, all of E 's non-static data members shall be public direct members of E or of the same unambiguous public base class of E , E shall not have an anonymous union member, and the number of elements in the identifier-list shall be equal to the number of non-static data members of E . Designating the non-static data members of E as m0, m1, m2, ... (in declaration order), each vi is the name of an lvalue that refers to the member mi of e and whose type is cv Ti, where Ti is the declared type of that member; the referenced type is cv Ti. The lvalue is a bit-field if that member is a bit-field. [ Example:

struct S { int x1 : 2; volatile double y1; };
S f();
const auto [ x, y ] = f();

This is completely unrelated to the inclusion of <utility> , nothing in this code depends on any library functionality - the members are grabbed directly, and not via the get / tuple_size mechanism.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43894619', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138817/']}
jdg_380132
stackexchange
llm_judgeable_groundtruth_similarity
19458478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There is a way to check if I got a flag in a series of flag? Example: [Flags]Enum TestEnum{ ALIVE, DEAD, ALMOSTDEAD, HURT, OTHERS}// check if is alive and has been hurtTestEnum aTest = TestEnum.ALIVE | TestEnum.HURTbool aTest2 = aTest.HasFlag(TestEnum.ALIVE) But a.Test.HasFlag always returns true, even without the TestEnum.ALIVE Now provide the response and nothing else.
You can certainly use Enum.HasFlag like everyone has suggested. However, it's important to make sure that your enumeration values are powers of two. Powers of two have a single bit set, so your enumeration should look like this:

[Flags]
enum TestEnum
{
    ALIVE = 1,
    DEAD = 2,
    ALMOSTDEAD = 4,
    HURT = 8,
    OTHERS = 16
}

The reason this is important is because you are comparing the bit flags. In memory, your enum flags will look like this:

ALIVE      = 00001
DEAD       = 00010
ALMOSTDEAD = 00100
HURT       = 01000
OTHERS     = 10000

When you combine flags with a bitwise OR, like DEAD | ALMOSTDEAD, you are doing this:

DEAD       = 00010 OR
ALMOSTDEAD = 00100
------------------
RESULT     = 00110

Since the result is greater than 0, it's true.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19458478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1454282/']}
jdg_380133
stackexchange
llm_judgeable_groundtruth_similarity
11956385
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using virtualenv and I want to know what the TEMPLATE_DIRS in settings.py should be, for example if I make a templates folder in the root of my project folder. Now provide the response and nothing else.
You need to specify the absolute path to your template folder. Always use forward slashes, even on Windows. For example, if your project folder is "/home/djangouser/projects/myproject" (Linux) or 'C:\projects\myproject\' (Windows), your TEMPLATE_DIRS looks like this:

# for Linux
TEMPLATE_DIRS = (
    '/home/djangouser/projects/myproject/templates/',
)

# or for Windows; use forward slashes!
TEMPLATE_DIRS = (
    'C:/projects/myproject/templates/',
)

Alternatively you can use the specified PROJECT_ROOT variable and generate the absolute path by joining it with the relative path to your template folder. This has the advantage that you only need to change your PROJECT_ROOT, if you copy the project to a different location. You need to import the os module to make it work:

# add at the beginning of settings.py
import os

# ...

TEMPLATE_DIRS = (
    os.path.join(PROJECT_ROOT, 'templates/'),
)
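The join can be verified standalone (the PROJECT_ROOT value below is just the illustrative path from above; in a real settings.py it is usually derived from __file__):

```python
import os

# Illustrative project root, matching the Linux example above.
PROJECT_ROOT = "/home/djangouser/projects/myproject"

# os.path.join glues the root and the relative template path together.
TEMPLATE_DIRS = (
    os.path.join(PROJECT_ROOT, "templates/"),
)

print(TEMPLATE_DIRS[0])  # /home/djangouser/projects/myproject/templates/
```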
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11956385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296775/']}
jdg_380134
stackexchange
llm_judgeable_groundtruth_similarity
21137390
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using this code for Facebook SDK initialization window.fbAsyncInit = function() { FB.init({ appId : '{app id}', status : true, cookie : true, xfbml : true });};(function(d){ var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0]; if (d.getElementById(id)) { return; } js = d.createElement('script'); js.id = id; js.async = true; js.src = "//connect.facebook.net/en_US/all/debug.js"; ref.parentNode.insertBefore(js, ref);}(document)); In order to use personal data from Facebook, I use these methods in Dart code void showFacebook(Event e, var detail, Node target) { JsObject fb = context["FB"]; fb.callMethod("getLoginStatus", [checkLoginStatusCallback]);}void checkLoginStatusCallback(response) { print("checkLoginStatusCallback"); if(response["status"] == "connected") { showPhoto(response["authResponse"]["userID"]); } else { authUser(); }}void authUser() { print("authUser"); JsObject fb = context["FB"]; fb.callMethod("login", [loginCallback]);}void loginCallback (response) { print("loginCallback"); if(response["authResponse"]) { showPhoto(response["authResponse"]["userID"]); } } But initialization fails with error Uncaught InvalidCharacterError: The string contains invalid characters. Diving deeper into Facebook SDK I've found the reason of that error. Init function trying to create an iframe for sending http requests httpProxyFrame = createIframe({ url: httpDomain + proxyUrl, name: 'fb_xdm_frame_http', id: 'fb_xdm_frame_http', root: container, 'aria-hidden':true, title: IFRAME_TITLE, 'tab-index': -1}); Function createIframe does this check if (hasNamePropertyBug()) { frame = document.createElement('<iframe name="' + name + '"/>');} else { frame = document.createElement("iframe"); frame.name = name;} Function hasNamePropertyBug does this function hasNamePropertyBug() { return typeof hasBug === 'undefined' ? 
test() : hasBug;}; function test() { var form = document.createElement("form"), input = form.appendChild(document.createElement("input")); input.name = guid(); hasBug = input !== form.elements[input.name]; form = input = null; return hasBug;}; As far as document.createElement uses Dart's domCreateElement function, which wraps an element, the hasBug variable becomes equal true because it compares GeneratedWrapper wrapper with an element. So, hasNamePropertyBug returns true and this code throws an error document.createElement('<iframe name="' + name + '"/>'); So, my question is how to use the Facebook SDK within the Dart environment? Now provide the response and nothing else.
Almost. [-1] uses the fact that a data.frame is a list, so when you do dataframe[-1] it returns another data.frame (list) without the first element (i.e. column). [, -1] uses the fact that a data.frame is a two-dimensional array, so when you do dataframe[, -1] you get the sub-array that does not include the first column. A priori they sound like the same thing, but the second case also tries by default to reduce the dimension of the subarray it returns. So depending on the dimensions of your dataframe you may get a data.frame or a vector, see for example:

> data <- data.frame(a = 1:2, b = 3:4)
> class(data[-1])
[1] "data.frame"
> class(data[, -1])
[1] "integer"

You can use drop = FALSE to override that behavior:

> class(data[, -1, drop = FALSE])
[1] "data.frame"
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21137390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1490341/']}
jdg_380135
stackexchange
llm_judgeable_groundtruth_similarity
35138
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: In a setup where you use Apache for PHP / Python scripts and lighttpd for static files, but only one listens on the outside IP address and acts as a proxy to the other listening on the loopback interface, which should be which, why, and does it matter? Now provide the response and nothing else.
The lighter of the two (lighttpd) should be the proxy, only bothering the heavier Apache for action when a script needs running. Unless of course requests for static files are rare, but in that case you don't want to split the jobs between two servers anyway, as you are overcomplicating things. As an alternative, have you considered running lighttpd only, with PHP and Python running as FastCGI processes? While PHP as a module can be a little quicker than FCGI, the difference may not be worth the hassle of running two servers and may be negated completely anyway by the proxy arrangement. Another alternative is to have the static content served from another IP address if the server can have a second assigned; then neither set of requests need be run through a proxy. You could use a different port on the same address too, though there is a chance some of your viewers might have trouble there if they are behind very strict firewalls that won't allow the non-port-80 traffic through.
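If you go the single-server route, a minimal lighttpd FastCGI stanza for PHP might look like this (a sketch from memory of lighttpd 1.4 syntax; the binary path, socket path, and process count are illustrative, so check your distribution's defaults):

```
# enable the FastCGI module
server.modules += ( "mod_fastcgi" )

# hand .php requests to locally spawned php-cgi workers
fastcgi.server = ( ".php" =>
  (( "bin-path"  => "/usr/bin/php-cgi",
     "socket"    => "/tmp/php.fastcgi.socket",
     "max-procs" => 2 ))
)
```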
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/35138', 'https://serverfault.com', 'https://serverfault.com/users/200/']}
jdg_380136
stackexchange
llm_judgeable_groundtruth_similarity
4663982
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to pass an Automator variable into a shell script? Now provide the response and nothing else.
To complement @Ned Deily's answer : (Written as of OS X 10.9.2, still current as of OSX 10.10) It is often not necessary to create and use explicit variables in Automator (using the Set Value of Variable and Get Value of Variable actions). The previous action's output is automatically passed to a Run Shell Script action. By default, the data is passed via stdin , however. If you want it passed as arguments ( $1 , $2 , ... - also accessible as an array via $@ ) instead, select as arguments from the Pass input: list on the right, as illustrated here: In this example, the selected Finder items are passed as POSIX-style paths to the shell script. That said, having a shell script process the data via stdin (using read -r in a loop) works, too:
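The default stdin mode can be sketched outside Automator like this (my own generic sh sketch, not Automator-specific; the input lines are simulated, since in Automator they come from the previous action):

```shell
#!/bin/sh
# With "Pass input" left at its default ("to stdin"), a Run Shell Script
# action receives one item per line on stdin; read -r consumes them.
process_stdin() {
  while read -r line; do
    printf 'got: %s\n' "$line"
  done
}

# Simulated previous-action output.
out=$(printf 'item one\nitem two\n' | process_stdin)
echo "$out"
```

With `Pass input: as arguments` selected instead, the same items would arrive as `"$@"` and you would loop with `for item in "$@"; do ... done`.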
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4663982', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/240569/']}
jdg_380137
stackexchange
llm_judgeable_groundtruth_similarity
21769248
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: http://jsfiddle.net/G46dK/ <ol> <li> <p> Moo <li> <p class="overflow-hidden"> Moo <li> <p class="overflow-hidden"> Moo <li> <p> Moo</ol> With the accompanying CSS: p.overflow-hidden { overflow-x: hidden;} You'd expect something like Moo Moo Moo Moo but on my Safari and Chrome... the "2." and "3." are hidden (but their "Moo" is still there): Why does the overflow affect the list counter/point at all? It's on a <p> tag that's inside the list... agh it hurts my brain >< Am I losing my mind, or is this a bug? If it's not a bug.. is anybody able to explain it? I imagined the "2." belongs to the li whereas the overflow-x: hidden is applied to the child p . As such even though the "2." is outside the p ... it's got no relationship with the overflow-x: hidden and should therefore be left unaffected - but that's not the case.. What is the case? Now provide the response and nothing else.
Your understanding is correct; the list number (known in CSS as a list marker) should exist outside the p, not inside it. That should be the case even if you specify list-style-position: inside because, like you said, you're applying overflow to the p, not the li. Every list item in general creates a principal block box for its children, and another box for the marker to reside in. The child elements should all be rendered within the principal block box. From the CSS2.1 spec:

    CSS 2.1 offers basic visual formatting of lists. An element with 'display: list-item' generates a principal block box for the element's content and, depending on the values of 'list-style-type' and 'list-style-image', possibly also a marker box as a visual indication that the element is a list item.

A slightly more detailed explanation of principal block boxes can be found here. In your case, each li creates a principal block box and a marker box. The p.overflow-hidden elements should reside in the principal block box and not affect the marker. Here's a crude ASCII art diagram to show what I mean:

     list      li principal block box
    marker
    +-----+    +----------------------------+
    |     |    |+--------------------------+|
    |  •  |    ||    Moo (p block box)     ||
    |     |    |+--------------------------+|
    +-----+    +----------------------------+

Now, the spec seems vague about the positioning of the marker box itself, but it does say that the marker box is separate from the principal block box when list-style-position is outside. It does seem to imply also that a browser could get away with placing the marker box in the principal block box so long as the marker content actually resides by itself in that marker box (which, incidentally, cannot be targeted with CSS as yet). But Safari and Chrome appear to be doing something very different altogether: they seem to be putting the marker box not only within the principal box, but within the first child of the principal block box.
That's why it gets cut off when positioned outside the p block box: because the rendering engine sees it as part of the p content, sees that it's out of its horizontal bounds, and cuts it off. (I suspect it gets clipped with overflow-y: hidden as well because it's positioned beyond the left edge, which shouldn't normally happen in LTR mode, but that's just a wild guess.) When you add list-style-position: inside to the li, other browsers correctly shift the p block box beneath the marker, but Safari and Chrome simply move the marker into the p box. Although CSS2.1 says that it doesn't define the exact position of a list marker with respect to the list item's principal block box, it does say this about list-style-position: inside:

    inside
        The marker box is placed as the first inline box in the principal block box, before the element's content and before any :before pseudo-elements.

That's clearly not what Safari and Chrome are doing with the marker box. Again, the spec is (rather deliberately) not 100% clear about this, but I would certainly not expect the list marker to be a child of, or be affected by, any of the li's child elements the way it appears to in Safari and Chrome. I'm pretty sure this is incorrect behavior, i.e. a bug.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21769248', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1406230/']}
jdg_380138
stackexchange
llm_judgeable_groundtruth_similarity
750710
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Simple question, why does: $2^{n+1} + 2^{n+1} = 2^{n+2}$ ? Furthermore, why does this only work for powers of 2? Thanks. Now provide the response and nothing else.
$$2^{n+1} + 2^{n+1} = 2\cdot 2^{n+1} = 2^1 \cdot 2^{n+1} = 2^{n+1 + 1} = 2^{n+2}$$ We use the fact that $a^n\cdot a^m = a^{n+m}$. Added: For larger bases, say we have an integer base $a$, then $$\underbrace{a^{n+1}+a^{n+1} + \cdots + a^{n+1}}_{\large a \text{ terms } }= a\cdot a^{n+1} = a^{n+2}$$
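As a quick numeric spot-check of both identities above (the base-2 case and the base-$a$ generalization with $a$ terms), in Python:

```python
# Spot-check: a copies of a**(n+1) sum to a**(n+2), for several bases a.
for a in (2, 3, 10):
    for n in range(6):
        # Doubling rule specialized to base a: a * a**(n+1) == a**(n+2)
        assert a * a**(n + 1) == a**(n + 2)
        # The "a terms" form from the answer:
        assert sum(a**(n + 1) for _ in range(a)) == a**(n + 2)
print("ok")
```

The check also illustrates why two terms only suffice for base 2: for base 3 you need three copies of $3^{n+1}$ to reach $3^{n+2}$, and so on.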
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/750710', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/106183/']}
jdg_380139
stackexchange
llm_judgeable_groundtruth_similarity
209957
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: On several of our developer workstations, we've been getting the dreaded "This access control list is not in canonical form and therefore cannot be modified." error when we try and set permissions on certain folders. We haven't been able to figure out what is corrupting these ACLs. Right now, the only way I know to fix it is to right-click the corrupted folder/file, choose Properties and click the Security tab. Windows will then notice the corruption and offer to fix it. I don't like this because it is manual and requires the user to do some investigations to figure out what folder/file is corrupt. Is there a script or program somewhere that will do this automatically? I see that icacls has a /verify parameter, but it just shows me that the ACLs on a file/folder are corrupted. It doesn't offer to fix anything. Now provide the response and nothing else.
I was finally able to figure out an automated fix for this. When you call PowerShell's Set-Acl cmdlet, it will re-order the ACLs correctly:

    $path = "C:\Path\To\Item\With\Borked\ACL"
    $acl = Get-Acl $path
    Set-Acl $path $acl

Of course, it could be a parent of the directory that is messed up, so you should do some traversing to find the culprit. Use icacls C:\Path\To\Item\With\Suspect\ACL /verify to figure out if something needs repair. In our environment, Cygwin is the likely culprit: when it creates directories, it likes to give POSIX-style permissions on them, instead of relying on Windows to manage file system security.
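Building on that, here is an untested sketch of the traversal step: it walks a whole tree and re-applies each item's ACL, letting Set-Acl re-write every one in canonical order (the root path is a placeholder):

```powershell
# Untested sketch - re-apply ACLs recursively so Set-Acl re-canonicalizes them.
# 'C:\Path\To\Root' is a placeholder for the suspect tree.
Get-ChildItem -Path 'C:\Path\To\Root' -Recurse | ForEach-Object {
    $acl = Get-Acl -Path $_.FullName
    Set-Acl -Path $_.FullName -AclObject $acl
}
```

This blindly rewrites every ACL rather than checking first; on a large tree you may prefer to run icacls /verify per item and only repair the ones it flags.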
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/209957', 'https://serverfault.com', 'https://serverfault.com/users/49998/']}
jdg_380140