Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In our application we want the filter on a date column to prompt the user for a start date and an end date, with the filter returning rows where the field in question falls between (or on) those two dates.

Initial Approach

Our initial approach was to restrict date types to use gte and lte operators, and add the "extra : true" filterable option on the column. This came close, but presented the following problems: A) each date input could use either the gte (Start) or lte (End) operator, providing undesired flexibility and the option for the user to create a filter that would never return results, and B) it presented a logical comparison (And / Or) that we don't want.

Better Approach

This question has an answer by Matthew Erwin that gets us very close: it allows us to completely re-style the filter entirely, so we can present simply a Start Date input and an End Date input. However, what I can't get working is associating the right filter operation with the right input (gte for the Start date, lte for the End date).
My custom filter is as follows:

```js
$scope.dateFilter = {
    extra: true,
    operators: {},
    ui: function (element) {
        var parent = element.parent();
        while (parent.children().length > 1)
            $(parent.children()[0]).remove();
        parent.prepend(
            "Start Date:<br/><span class=\"k-widget k-datepicker k-header\">" +
            "<span class=\"k-picker-wrap k-state-default\">" +
            "<input data-bind=\"value: filters[0].value\" class=\"k-input\" type=\"text\" data-role=\"datepicker\"" +
            " style=\"width: 100%\" role=\"textbox\" aria-haspopup=\"true\" aria-expanded=\"false\" aria-disabled=\"false\" " +
            " aria-readonly=\"false\" aria-label=\"Choose a date\">" +
            "<span unselectable=\"on\" class=\"k-select\" role=\"button\">" +
            "<span unselectable=\"on\" class=\"k-icon k-i-calendar\">select</span></span></span></span>" +
            "<br/>End Date:<br/>" +
            "<span class=\"k-widget k-datepicker k-header\"><span class=\"k-picker-wrap k-state-default\">" +
            "<input data-bind=\"value: filters[1].value\" class=\"k-input\" type=\"text\" data-role=\"datepicker\"" +
            " style=\"width: 100%\" role=\"textbox\" aria-haspopup=\"true\" aria-expanded=\"false\" " +
            " aria-disabled=\"false\" aria-readonly=\"false\" aria-label=\"Choose a date\">" +
            "<span unselectable=\"on\" class=\"k-select\" role=\"button\">" +
            "<span unselectable=\"on\" class=\"k-icon k-i-calendar\">select</span></span></span></span>"
        );
    }
};
```

With this approach, the OData filter option is generated for each of the dates; however, it uses the eq (Equal To) operator, so no values are ever returned. We aren't building filters specifically on the data source. Is there a simple way I can associate each of those date inputs with a specific filter operator? Is there a better way to approach this subject? It seems like filtering dates based on a Start - End range would be commonly desired.

Other Details: We are using AngularJS, and WebAPI with OData.
Now provide the response and nothing else.
After working with Telerik, I came to an answer. The thread that I opened can be found here, but I'll also summarize in this answer. The ultimate solution was to:

- Use the "messages" option of the column "filterable" option to customize the filter display message.
- Use the "extra" option of the column "filterable" option to get the extra date selector in the filter menu.
- Configure the "operators" option in the grid "filterable" option to set what operators can be used for dates (gte, lte) and what text is displayed for each (Begin Date, End Date).
- Use the filterMenuInit event to configure the filter controls.

End Result

Column Filterable

The following filterable options were used:

```js
filterable: {
    "extra": "true",
    "messages": {
        "info": "Show items between dates:"
    }
}
```

"extra" gives us the second date selector, and the "info" message customizes the text displayed at the top of the filter menu.

Grid Filterable

I used the "operators" option in the grid-level "filterable" option to make date filters only provide the gte and lte operators, and to customize the text for those operators. This is what the operators configuration object wound up looking like:

```js
"date": {
    "gte": "Begin Date",
    "lte": "End Date"
}
```

Because we want this to apply to all dates, we put that in a factory and reuse it in each Angular controller / view.

filterMenuInit Event

By providing a handler for the filterMenuInit event, you can access and configure the individual controls in the filter menu as it is created.
The handler function that I created looks like this:

```js
function (e) {
    if (e.sender.dataSource.options.schema.model.fields[e.field].type == "date") {
        var beginOperator = e.container.find("[data-role=dropdownlist]:eq(0)").data("kendoDropDownList");
        beginOperator.value("gte");
        beginOperator.trigger("change");
        beginOperator.readonly();
        var logicOperator = e.container.find("[data-role=dropdownlist]:eq(1)").data("kendoDropDownList");
        logicOperator.readonly();
        var endOperator = e.container.find("[data-role=dropdownlist]:eq(2)").data("kendoDropDownList");
        endOperator.value("lte");
        endOperator.trigger("change");
        endOperator.readonly();
    }
}
```

Specifically, for any date field, this function sets the first and last dropdown operators to "gte" and "lte" respectively (those are the dropdowns for the first date operator and the second date operator), and sets all of the dropdowns to read-only so the user can't change them (the only other dropdown, which is at index 1, is the logical comparison - only And makes sense, so we don't let users change it).

This function applies this configuration for any fields that are of "date" type. I did it this way so that I could create this function once, put it in an Angular factory, and then reuse it for any grid that I needed. If you don't want to apply this as a blanket configuration across all of your date columns, you can change the conditional to check for fields by name. Example: `if (e.field == "fieldName")`

Hopefully this will be helpful to someone else. This doesn't give you ultimate customization of the UI in the filter menu, but it does let you simply set up a filter between two dates. I'm sure someone clever could merge this with my original strategy (replacing the markup for the filter menu entirely) to come up with something completely customized.
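For readers assembling this from scratch, the pieces above plug into the grid declaration roughly as follows. This is a sketch, not code from the original answer: the grid selector, column field name, and data source are hypothetical placeholders, and `configureDateFilterMenu` stands for the filterMenuInit handler shown above.

```js
// Sketch only: "#grid", "orderDate", and ordersDataSource are hypothetical.
$("#grid").kendoGrid({
    dataSource: ordersDataSource,
    filterable: {
        // Grid-level operators: date filters only offer gte / lte, relabeled.
        operators: {
            date: {
                gte: "Begin Date",
                lte: "End Date"
            }
        }
    },
    filterMenuInit: configureDateFilterMenu, // the handler shown above
    columns: [{
        field: "orderDate",
        title: "Order Date",
        filterable: {
            extra: true, // second date selector
            messages: { info: "Show items between dates:" }
        }
    }]
});
```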
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This tag has been burninated. Please do not recreate it. If you need advice on which tag to use, see the answer below. If you see this tag reappearing, it may need to be blacklisted.

I don't understand the goal behind the goal. The description of the tag says "The goal tag has no usage guidance, can you help us create it?" So the tag itself is ambiguous and has no guidelines for when it should be used. Most of the time, the tag is used in scenarios where the OP thinks "I have this and my goal is ..." and adds the tag to the question, which is not necessary. Every question has a goal that the OP wants to achieve.

At the moment, there are 66 questions (45 when this was first counted), out of which 43 (originally 30) are used in conjunction with maven. If we still need it with maven, I'd suggest a new maven-goal tag should be created with all the description, and the questions should be retagged. As for the other questions, a few can be closed, and for the others, goal can be removed.

Does this tag even need to be burninated?

- Does it describe the contents of the questions to which it is applied, and is it unambiguous? - No
- Is the concept described even on-topic for the site? - No
- Does the tag add any meaningful information to the post? - No
- Does it mean the same thing in all common contexts? - No

The above 4 No = Yes to burninate. Can we burninate it?
Now provide the response and nothing else.
It is true that the notion of goal is widely used in Maven. In fact, it is so widely used that such a tag, maven-goal, would be completely unnecessary and redundant. For those interested, goals are tasks that Maven plugins execute during the build. Basically, the whole Maven build revolves around declaring plugins, so the tag adds no value.

There are currently 23 goal questions (out of the 66) that are not tagged with a Maven-related tag (query); when this was first written, it was 8 out of 45. Of those original 8:

- 1 is about a programming language called GOAL, or "Game Oriented Assembly Lisp". This tag could actually make sense for that, but it is bound to be misused. A better name has to be found if there are other questions about this language, like lisp-goal. (How to stream program code?)
- 1 is about the Excel function "GoalSeek". Can be safely removed there. (VBA GoalSeek Loop Issue)
- 3 are also tagged prolog. The tag adds no value. (Finding the highest possible evaluation in prolog, How do I work with equations that are passed on in the goal/query of prolog?, bagof/3 giving weird results)
- 3 are also tagged piwik. The tag adds no value; the word "goal" is already in the title or in the body, and that is probably why they were tagged this way. (Piwik goals(checkout steps), How to get Goal Conversion history from PIWIK, Tracking direct file downloads)

Conclusion:
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to run unit tests for my C# project with .NET Core. I am using a Docker container for the runtime.

Dockerfile:

```dockerfile
FROM microsoft/dotnet:0.0.1-alpha
RUN mkdir /src
WORKDIR /src
ADD . /src
RUN dotnet restore
```

"NUnit" and "NUnit.Runners" have been added into project.json:

```json
{
    "version": "1.0.0-*",
    "compilationOptions": {
        "emitEntryPoint": true
    },
    "dependencies": {
        "NETStandard.Library": "1.0.0-rc2-23811",
        "NUnit": "3.2.0",
        "NUnit.Runners": "3.2.0"
    },
    "frameworks": {
        "dnxcore50": { }
    }
}
```

Run dotnet restore successfully with the following output:

```
...
log  : Installing NUnit.ConsoleRunner 3.2.0.
log  : Installing NUnit.Extension.NUnitV2ResultWriter 3.2.0.
log  : Installing NUnit.Extension.NUnitV2Driver 3.2.0.
log  : Installing NUnit.Extension.VSProjectLoader 3.2.0.
log  : Installing NUnit.Extension.NUnitProjectLoader 3.2.0.
log  : Installing NUnit.Runners 3.2.0.
info : Committing restore...
log  : Restore completed in 4352ms.
```

I tried to run the tests with:

- dotnet nunit
- dotnet nunit-console

But it doesn't work. How am I going to call the runner? Or is there another unit testing framework that works with the current version of .NET Core?
Now provide the response and nothing else.
Update 4: The NUnit3TestAdapter v3.8 has been released, so it is no longer alpha.

Update 3: With NUnit3TestAdapter v3.8.0-alpha1 it is now possible to run the tests using the dotnet test command. You just need to have these dependencies in your test project:

```xml
<PackageReference Include="nunit" Version="3.7.0" />
<PackageReference Include="NUnit3TestAdapter" Version="3.8.0-*" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.*" />
```

You can try it out!

Update 2: Visual Studio 2017 and the move from project.json to csproj made the dotnet-test-nunit test adapter obsolete, so we needed to release another updated adapter to run .NET Core tests. Please see Testing .NET Core with NUnit in Visual Studio 2017 if you are using VS2017 and the new .NET Core tooling. See the update below if you are using project.json.

Update: NUnit now has support for dotnet test, so you no longer have to use NUnitLite. See testing .NET Core RC2 and ASP.NET Core RC2 using NUnit 3.

NUnit console (and the underlying NUnit Engine) do not support running unit tests against .NET Core yet. Hopefully we will get that support in NUnit 3.4. In the meantime, you can use NUnitLite to switch your tests to a self-executing test runner. I wrote a blog post on the process at Testing .NET Core using NUnit 3. A quick summary is:

- Create a .NET Core console application for your test project.
- Reference NUnit and NUnitLite from your test project. You do not need the runner.
- Modify main() to execute the unit tests.

It should look like this:

```csharp
using NUnitLite;
using System;
using System.Reflection;

namespace MyDnxProject.Test
{
    public class Program
    {
        public int Main(string[] args)
        {
            var writer = new ExtendedTextWrapper(Console.Out);
            return new AutoRun(typeof(Program).GetTypeInfo().Assembly).Execute(args, writer, Console.In);
        }
    }
}
```

For more complete information, see my blog post.
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I validate and get info from a JWT received from Amazon Cognito? I have set up Google authentication in Cognito, and set the redirect URI to hit API Gateway. I then receive a code which I POST to this endpoint: https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html to receive the JWT token, in RS256 format. I am now struggling to validate and parse the token in Golang. I've tried to parse it using jwt-go, but it appears to support HMAC by default, and I read somewhere that they recommend using frontend validation instead. I tried a few other packages and had similar problems. I came across this answer here: Go Language and Verify JWT, but assume the code is outdated as that just says panic: unable to find key. jwt.io can easily decode the key, and probably verify too. I'm not sure where the public/secret keys are as Amazon generated the token, but from what I understand I need to use a JWK URL to validate too? I've found a few AWS-specific solutions, but they all seem to be hundreds of lines long. Surely it isn't that complicated in Golang, is it?
Now provide the response and nothing else.
Public keys for Amazon Cognito

As you already guessed, you'll need the public key in order to verify the JWT token.

https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-verifying-a-jwt.html#amazon-cognito-user-pools-using-tokens-step-2

"Download and store the corresponding public JSON Web Key (JWK) for your user pool. It is available as part of a JSON Web Key Set (JWKS). You can locate it at https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json"

Parse keys and verify token

That JSON file structure is documented on the web, so you could potentially parse it manually, generate the public keys, etc. But it'd probably be easier to just use a library, for example this one: https://github.com/lestrrat-go/jwx

And then jwt-go to deal with the JWT part: https://github.com/dgrijalva/jwt-go

You can then:

Download and parse the public keys JSON using the first library:

```go
keySet, err := jwk.Fetch(THE_COGNITO_URL_DESCRIBED_ABOVE)
```

When parsing the token with jwt-go, use the "kid" field from the JWT header to find the right key to use:

```go
token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
    if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {
        return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
    }
    kid, ok := token.Header["kid"].(string)
    if !ok {
        return nil, errors.New("kid header not found")
    }
    keys, ok := keySet.LookupKeyID(kid)
    if !ok {
        return nil, fmt.Errorf("key with specified kid is not present in jwks")
    }
    var publickey interface{}
    if err := keys.Raw(&publickey); err != nil {
        return nil, fmt.Errorf("could not parse pubkey")
    }
    return publickey, nil
})
```
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to fill out the fields on a form through JavaScript. The problem is I only know how to execute JavaScript on the current page, so I cannot redirect to the form and execute code from there. I'm hesitant to use this term, but the only phrase that comes to mind is cross-site script. The code I am attempting to execute is below.

```html
<script language="javascript">
window.location = "http://www.pagewithaform.com";
loaded(); // checks to see if page is loaded. if not, checks after timeout.

function loaded() {
    if (window.onLoad) {
        // never executes on new page. the problem
        setTitle();
    } else {
        setTimeout("loaded()", 1000);
        alert("new alert");
    }
}

// sets field's value
function setTitle() {
    var title = prompt("Field Info", "Default Value");
    var form = document.forms[0];
    form.elements["fieldName"].value = title;
}
</script>
```

I'm not truly sure if this is possible. I'm also open to other ideas, such as PHP. Thanks.

EDIT: The second page is a SharePoint form. I cannot edit any of the code on the form. The goal is to write a script that pre-fills most of the fields because 90% of them are static.
Now provide the response and nothing else.
You're trying to maintain state between pages. Conventionally there are two ways to maintain state:

- Store state in cookies
- Store state in the query string

Either way your first page has to persist state (to either cookies or the query string) and the other page has to - separately - restore the state. You can't use the same script across both pages.

Example: Using Cookies

Using cookies, the first page would have to write all the form data you'll need on the next page to cookies:

```html
<!DOCTYPE html>
<html>
    <head>
        <title>Maintaining State With Cookies</title>
    </head>
    <body>
        <div>
            Setting cookies and redirecting...
        </div>
        <script>
            // document.cookie is not a real string
            document.cookie = 'form/title=My Name is Richard; expires=Tue, 29 Aug 2017 12:00:01 UTC';
            document.cookie = 'form/text=I am demoing how to use cookies in JavaScript; expires=Tue, 29 Aug 2017 12:00:01 UTC';
            setTimeout(function(){
                window.location = "./form-cookies.html";
            }, 1000);
        </script>
    </body>
</html>
```

... and the second page would then read those cookies and populate the form fields with them:

```html
<!DOCTYPE html>
<html>
    <head>
        <title>Maintaining State With Cookies</title>
    </head>
    <body>
        <form id="myForm" action="submit.mumps.cgi" method="POST">
            <input type="text" name="title" />
            <textarea name="text"></textarea>
        </form>
        <script>
            var COOKIES = {};
            var cookieStr = document.cookie;
            // not necessarily the best way to parse cookies
            cookieStr.split(/; /).forEach(function(keyValuePair) {
                var cookieName = keyValuePair.replace(/=.*$/, ""); // some decoding is probably necessary
                var cookieValue = keyValuePair.replace(/^[^=]*\=/, ""); // some decoding is probably necessary
                COOKIES[cookieName] = cookieValue;
            });
            document.getElementById("myForm").getElementsByTagName("input")[0].value = COOKIES["form/title"];
            document.getElementById("myForm").getElementsByTagName("textarea")[0].value = COOKIES["form/text"];
        </script>
    </body>
</html>
```

Example: Using the Query String

In the case of using the query string, the first page would just include the query string in the redirect URL, like so:

```html
<!DOCTYPE html>
<html>
    <head>
        <title>Maintaining State With The Query String</title>
    </head>
    <body>
        <div>
            Redirecting...
        </div>
        <script>
            setTimeout(function(){
                window.location = "./form-querystring.html?form/title=My Name is Richard&form/text=I am demoing how to use the query string in JavaScript";
            }, 1000);
        </script>
    </body>
</html>
```

...while the form page would then parse the query string (available in JavaScript via window.location.search - prepended with a ?):

```html
<!DOCTYPE html>
<html>
    <head>
        <title>Maintaining State With The Query String</title>
    </head>
    <body>
        <form id="myForm" action="submit.mumps.cgi" method="POST">
            <input type="text" name="title" />
            <textarea name="text"></textarea>
        </form>
        <script>
            var GET = {};
            var queryString = window.location.search.replace(/^\?/, '');
            queryString.split(/\&/).forEach(function(keyValuePair) {
                var paramName = keyValuePair.replace(/=.*$/, ""); // some decoding is probably necessary
                var paramValue = keyValuePair.replace(/^[^=]*\=/, ""); // some decoding is probably necessary
                GET[paramName] = paramValue;
            });
            document.getElementById("myForm").getElementsByTagName("input")[0].value = GET["form/title"];
            document.getElementById("myForm").getElementsByTagName("textarea")[0].value = GET["form/text"];
        </script>
    </body>
</html>
```

Example: With a Fragment Identifier

There's one more option: since state is being maintained strictly on the client side (not on the server side) you could put the information in a fragment identifier (the "hash" part of a URL).

The first script is very similar to the query string example above: the redirect URL just includes the fragment identifier. I'm going to re-use query string formatting for convenience, but notice the # in the place where a ? used to be:

```html
<!DOCTYPE html>
<html>
    <head>
        <title>Maintaining State With The Fragment Identifier</title>
    </head>
    <body>
        <div>
            Redirecting...
        </div>
        <script>
            setTimeout(function(){
                window.location = "./form-fragmentidentifier.html#form/title=My Name is Richard&form/text=I am demoing how to use the fragment identifier in JavaScript";
            }, 1000);
        </script>
    </body>
</html>
```

... and then the form has to parse the fragment identifier etc:

```html
<!DOCTYPE html>
<html>
    <head>
        <title>Maintaining State With The Fragment Identifier</title>
    </head>
    <body>
        <form id="myForm" action="submit.mumps.cgi" method="POST">
            <input type="text" name="title" />
            <textarea name="text"></textarea>
        </form>
        <script>
            var HASH = {};
            var hashString = window.location.hash.replace(/^#/, '');
            hashString.split(/\&/).forEach(function(keyValuePair) {
                var paramName = keyValuePair.replace(/=.*$/, ""); // some decoding is probably necessary
                var paramValue = keyValuePair.replace(/^[^=]*\=/, ""); // some decoding is probably necessary
                HASH[paramName] = paramValue;
            });
            document.getElementById("myForm").getElementsByTagName("input")[0].value = HASH["form/title"];
            document.getElementById("myForm").getElementsByTagName("textarea")[0].value = HASH["form/text"];
        </script>
    </body>
</html>
```

And if you can't edit the code for the form page: try a Greasemonkey script.
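Worth noting as an aside (this is an addition, not part of the original answer): in modern browsers and Node, the manual split/replace parsing shown above can be done with the built-in URLSearchParams API, which also handles percent-decoding:

```javascript
// Modern alternative to the manual query-string parsing above.
// URLSearchParams splits key=value pairs and percent-decodes them for us.
var queryString = "form/title=My%20Name%20is%20Richard&form/text=Hello";
var params = new URLSearchParams(queryString);

console.log(params.get("form/title")); // "My Name is Richard"
console.log(params.get("form/text"));  // "Hello"

// In a real page you would pass window.location.search directly:
// var params = new URLSearchParams(window.location.search);
```

The same approach works for the fragment identifier by stripping the leading `#` before constructing the object.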
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The question: I call RoleManager.CreateAsync() and RoleManager.AddClaimAsync() to create roles and associated role claims. Then I call UserManager.AddToRoleAsync() to add users to those roles. But when the user logs in, neither the roles nor the associated claims show up in the ClaimsPrincipal (i.e. the Controller's User object). The upshot of this is that User.IsInRole() always returns false, the collection of claims returned by User.Claims doesn't contain the role claims, and the [Authorize(policy: xxx)] annotations don't work.

I should also add that one solution is to revert from using the new services.AddDefaultIdentity() (which is provided by the templated code) back to calling services.AddIdentity().AddSomething().AddSomethingElse(). I don't want to go there, because I've seen too many conflicting stories online about what I need to do to configure AddIdentity for various use cases. AddDefaultIdentity seems to do most things correctly without a lot of added fluent configuration.

BTW, I'm asking this question with the intention of answering it... unless someone else gives me a better answer than the one I'm prepared to post. I'm also asking this question because after several weeks of searching I have yet to find a good end-to-end example of creating and using Roles and Claims in ASP.NET Core Identity 2. Hopefully, the code example in this question might help someone else who stumbles upon it...

The setup: I created a new ASP.NET Core Web Application, selected Web Application (Model-View-Controller), and changed the Authentication to Individual User Accounts. In the resultant project, I do the following:

In Package Manager Console, update the database to match the scaffolded migration:

```
update-database
```

Add an ApplicationUser class that extends IdentityUser. This involves adding the class, adding a line of code to the ApplicationDbContext, and replacing every instance of <IdentityUser> with <ApplicationUser> everywhere in the project.
The new ApplicationUser class:

```csharp
public class ApplicationUser : IdentityUser
{
    public string FullName { get; set; }
}
```

The updated ApplicationDbContext class:

```csharp
public class ApplicationDbContext : IdentityDbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }

    // Add this line of code
    public DbSet<ApplicationUser> ApplicationUsers { get; set; }
}
```

In Package Manager Console, create a new migration and update the database to incorporate the ApplicationUsers entity:

```
add-migration m_001
update-database
```

Add the following line of code in Startup.cs to enable RoleManager:

```csharp
services.AddDefaultIdentity<ApplicationUser>()
    .AddRoles<IdentityRole>() // <-- Add this line
    .AddEntityFrameworkStores<ApplicationDbContext>();
```

Add some code to seed roles, claims, and users. The basic concept for this sample code is that I have two claims: can_report allows the holder to create reports, and can_test allows the holder to run tests. I have two roles, Admin and Tester. The Tester role can run tests, but can't create reports. The Admin role can do both. So, I add the claims to the roles, and create one Admin test user and one Tester test user.

First, I add a class whose sole purpose in life is to contain constants used elsewhere in this example:

```csharp
// Contains constant strings used throughout this example
public class MyApp
{
    // Claims
    public const string CanTestClaim = "can_test";
    public const string CanReportClaim = "can_report";

    // Role names
    public const string AdminRole = "admin";
    public const string TesterRole = "tester";

    // Authorization policy names
    public const string CanTestPolicy = "can_test";
    public const string CanReportPolicy = "can_report";
}
```

Next, I seed my roles, claims, and users. I put this code in the main landing page controller just for expedience; it really belongs in the "startup" Configure method, but that's an extra half-dozen lines of code...

```csharp
public class HomeController : Controller
{
    const string Password = "QwertyA1?";
    const string AdminEmail = "[email protected]";
    const string TesterEmail = "[email protected]";

    private readonly RoleManager<IdentityRole> _roleManager;
    private readonly UserManager<ApplicationUser> _userManager;

    // Constructor (DI claptrap)
    public HomeController(RoleManager<IdentityRole> roleManager, UserManager<ApplicationUser> userManager)
    {
        _roleManager = roleManager;
        _userManager = userManager;
    }

    public async Task<IActionResult> Index()
    {
        // Initialize roles
        if (!await _roleManager.RoleExistsAsync(MyApp.AdminRole))
        {
            var role = new IdentityRole(MyApp.AdminRole);
            await _roleManager.CreateAsync(role);
            await _roleManager.AddClaimAsync(role, new Claim(MyApp.CanTestClaim, ""));
            await _roleManager.AddClaimAsync(role, new Claim(MyApp.CanReportClaim, ""));
        }
        if (!await _roleManager.RoleExistsAsync(MyApp.TesterRole))
        {
            var role = new IdentityRole(MyApp.TesterRole);
            await _roleManager.CreateAsync(role);
            await _roleManager.AddClaimAsync(role, new Claim(MyApp.CanTestClaim, ""));
        }

        // Initialize users
        var qry = _userManager.Users;
        IdentityResult result;
        if (await qry.Where(x => x.UserName == AdminEmail).FirstOrDefaultAsync() == null)
        {
            var user = new ApplicationUser { UserName = AdminEmail, Email = AdminEmail, FullName = "Administrator" };
            result = await _userManager.CreateAsync(user, Password);
            if (!result.Succeeded)
                throw new InvalidOperationException(string.Join(" | ", result.Errors.Select(x => x.Description)));
            result = await _userManager.AddToRoleAsync(user, MyApp.AdminRole);
            if (!result.Succeeded)
                throw new InvalidOperationException(string.Join(" | ", result.Errors.Select(x => x.Description)));
        }
        if (await qry.Where(x => x.UserName == TesterEmail).FirstOrDefaultAsync() == null)
        {
            var user = new ApplicationUser { UserName = TesterEmail, Email = TesterEmail, FullName = "Tester" };
            result = await _userManager.CreateAsync(user, Password);
            if (!result.Succeeded)
                throw new InvalidOperationException(string.Join(" | ", result.Errors.Select(x => x.Description)));
            result = await _userManager.AddToRoleAsync(user, MyApp.TesterRole);
            if (!result.Succeeded)
                throw new InvalidOperationException(string.Join(" | ", result.Errors.Select(x => x.Description)));
        }

        // Roles and Claims are in a cookie. Don't expect to see them in
        // the same request that creates them (i.e., the request that
        // executes the above code to create them). You need to refresh
        // the page to create a round-trip that includes the cookie.
        var admin = User.IsInRole(MyApp.AdminRole);
        var claims = User.Claims.ToList();

        return View();
    }

    [Authorize(policy: MyApp.CanTestPolicy)]
    public IActionResult Test()
    {
        return View();
    }

    [Authorize(policy: MyApp.CanReportPolicy)]
    public IActionResult Report()
    {
        return View();
    }

    [ResponseCache(Duration = 0, Location = ResponseCacheLocation.None, NoStore = true)]
    public IActionResult Error()
    {
        return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
    }
}
```

and I register my authorization policies in the "Startup" ConfigureServices routine, just after the call to services.AddMvc:

```csharp
// Register authorization policies
services.AddAuthorization(options =>
{
    options.AddPolicy(MyApp.CanTestPolicy, policy => policy.RequireClaim(MyApp.CanTestClaim));
    options.AddPolicy(MyApp.CanReportPolicy, policy => policy.RequireClaim(MyApp.CanReportClaim));
});
```

Whew. Now (assuming I've noted all of the applicable code I've added to the project, above), when I run the app, I notice that neither of my "built-in" test users can access either the /home/Test or /home/Report page. Moreover, if I set a breakpoint in the Index method, I see that my roles and claims do not exist in the User object. But I can look at the database and see all of the roles and claims are there.
Now provide the response and nothing else.
So, to recap, the question asks why the code provided by the ASP.NET Core Web Application template doesn't load roles or role claims into the cookie when a user logs in. After much Googling and experimenting, there appear to be two modifications that must be made to the templated code in order to get roles and role claims to work:

First, you must add the following line of code in Startup.cs to enable RoleManager. (This bit of magic was mentioned in the OP.)

```csharp
services.AddDefaultIdentity<ApplicationUser>()
    .AddRoles<IdentityRole>() // <-- Add this line
    .AddEntityFrameworkStores<ApplicationDbContext>();
```

But wait, there's more! According to this discussion on GitHub, getting the roles and claims to show up in the cookie involves either reverting to the service.AddIdentity initialization code, or sticking with service.AddDefaultIdentity and adding this line of code to ConfigureServices:

```csharp
// Add Role claims to the User object
// See: https://github.com/aspnet/Identity/issues/1813#issuecomment-420066501
services.AddScoped<IUserClaimsPrincipalFactory<ApplicationUser>, UserClaimsPrincipalFactory<ApplicationUser, IdentityRole>>();
```

If you read the discussion referenced above, you'll see that roles and role claims are apparently kind-of-deprecated, or at least not eagerly supported. Personally, I find it really useful to assign claims to roles, assign roles to users, and then make authorization decisions based on the claims (which are granted to the users based on their roles). This gives me an easy, declarative way to allow, for example, one function to be accessed by multiple roles (i.e. all of the roles that contain the claim used to enable that function).

But you DO want to pay attention to the amount of role and claim data being carried in the auth cookie. More data means more bytes sent to the server with each request, and I have no clue what happens when you bump up against some sort of limit to the cookie size.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53271496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1637105/']} | jdg_380546 |
stackexchange | llm_judgeable_groundtruth_similarity | 271238 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm just concerned about Windows, so there's no need to go into esoterica about Mono compatibility or anything like that. I should also add that the app that I'm writing is WPF, and I'd prefer to avoid taking a dependency on System.Windows.Forms if at all possible.
Now provide the response and nothing else.
| Give this a shot... using System;using System.Collections.Generic;using System.Text;using System.Management;namespace WMITestConsolApplication{ class Program { static void Main(string[] args) { AddInsertUSBHandler(); AddRemoveUSBHandler(); while (true) { } } static ManagementEventWatcher w = null; static void AddRemoveUSBHandler() { WqlEventQuery q; ManagementScope scope = new ManagementScope("root\\CIMV2"); scope.Options.EnablePrivileges = true; try { q = new WqlEventQuery(); q.EventClassName = "__InstanceDeletionEvent"; q.WithinInterval = new TimeSpan(0, 0, 3); q.Condition = "TargetInstance ISA 'Win32_USBControllerdevice'"; w = new ManagementEventWatcher(scope, q); w.EventArrived += USBRemoved; w.Start(); } catch (Exception e) { Console.WriteLine(e.Message); if (w != null) { w.Stop(); } } } static void AddInsertUSBHandler() { WqlEventQuery q; ManagementScope scope = new ManagementScope("root\\CIMV2"); scope.Options.EnablePrivileges = true; try { q = new WqlEventQuery(); q.EventClassName = "__InstanceCreationEvent"; q.WithinInterval = new TimeSpan(0, 0, 3); q.Condition = "TargetInstance ISA 'Win32_USBControllerdevice'"; w = new ManagementEventWatcher(scope, q); w.EventArrived += USBInserted; w.Start(); } catch (Exception e) { Console.WriteLine(e.Message); if (w != null) { w.Stop(); } } } static void USBInserted(object sender, EventArgs e) { Console.WriteLine("A USB device inserted"); } static void USBRemoved(object sender, EventArgs e) { Console.WriteLine("A USB device removed"); } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/271238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26628/']} | jdg_380547 |
stackexchange | llm_judgeable_groundtruth_similarity | 5918224 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there anyway to see if an object specifically implements ->__toString? This doesn't seem to work: method_exists($object, '__toString');
Now provide the response and nothing else.
| There are two ways to check it. Let's assume you have classes: class Foo{ public function __toString() { return 'foobar'; }}class Bar{} Then you can do either: $rc = new ReflectionClass('Foo'); var_dump($rc->hasMethod('__toString'));$rc = new ReflectionClass('Bar'); var_dump($rc->hasMethod('__toString')); or use: $fo = new Foo;var_dump( method_exists($fo , '__toString'));$ba = new Bar;var_dump( method_exists($ba , '__toString')); The difference is that in the first case the class is not actually instantiated. You can look at a demo here: http://codepad.viper-7.com/B0EjOK | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5918224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102635/']} | jdg_380548
stackexchange | llm_judgeable_groundtruth_similarity | 2393479 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find the last two digits of the number $N=299^{33}$ Is there any trick of finding the last two digits of such a big number. If yes, then kindly share it with me This is a GMAT exam question.
Now provide the response and nothing else.
| Since your question is tagged with GMAT, I am going to assume you are not a mathematician... I am not either, and all this talk of Euler's theorem and modulus will give me a headache ;) So, simply put... the trick is that only the rightmost two digits of each number being multiplied can have any effect on the rightmost two digits of the answer. So you can just drop the "$2$" from $299$ because $299$ squared ends with same $2$ digits as $99$ squared. $2$nd power: $99 \times 99 = 9801, \ldots$ now you can drop the "$98$" because it has no effect on the rightmost two digits of the answer. $3$rd power: $01 \times 99 = 99$ $4$th power: $99 \times 99 = 9801, \ldots$ drop the "$98$" again $5$th power: $01 \times 99 = 99$ etc... See the pattern? Even powers will end with "$01$" and odd powers will end with "$99$" | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2393479', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/470543/']} | jdg_380549 |
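The parity pattern described above (odd powers of 299 end in 99, even powers end in 01) can be checked numerically. A minimal sketch in Python, reducing modulo 100 at every step so only the last two digits are ever kept:

```python
def last_two_digits(base, exp):
    # (a * b) mod 100 depends only on a mod 100 and b mod 100,
    # so we can discard everything but the last two digits each step.
    result = 1
    for _ in range(exp):
        result = (result * base) % 100
    return result

# Odd powers of 299 end in 99; even powers end in 01.
print(last_two_digits(299, 33))  # -> 99
print(last_two_digits(299, 2))   # -> 1 (i.e. "...01")
```

The built-in three-argument `pow(299, 33, 100)` computes the same thing far more efficiently via modular exponentiation.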
stackexchange | llm_judgeable_groundtruth_similarity | 6773474 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need help with parsing json string in Java Android Appl. Text of JSON file: {"data":{"columns":["location_id","name","description","latitude","longitude","error","type","type_id","icon_media_id","item_qty","hidden","force_view"],"rows":[[2,"Editor","",43.076014654537,-89.399642451567,25,"Npc",1,0,1,"0","0"],[3,"Dow Recruiter","",43.07550842555,-89.399381822662,25,"Npc",2,0,1,"0","0"] [4,"Protestor","",43.074933,-89.400438,25,"Npc",3,0,1,"0","0"],[5,"State Legislator","",43.074868061524,-89.402136196317,25,"Npc",4,0,1,"0","0"],[6,"Marchers Bascom","",43.075296413877,-89.403374183615,25,"Node",22,0,1,"0","0"] [7,"Mary","",43.074997865584,-89.404967573966,25,"Npc",7,0,1,"0","0"]]},"returnCode":0,"returnCodeDescription":null} How can get values: location_id, name, latitude, longitude.Thanks, Michal.
Now provide the response and nothing else.
| Using manual parsing you can implement it like this: JSONArray pages = new JSONArray(jsonString); for (int i = 0; i < pages.length(); ++i) { JSONObject rec = pages.getJSONObject(i); JSONObject jsonPage =rec.getJSONObject("page"); String address = jsonPage.getString("url"); String name = jsonPage.getString("name"); String status = jsonPage.getString("status");} in your case note that your outer elemnt data is type of JSONObject and then you have a JSONArray mine json file: [{"page":{"created_at":"2011-07-04T12:01:00Z","id":1,"name":"Unknown Page","ping_at":"2011-07-04T12:06:00Z","status":"up","updated_at":"2011-07-04T12:01:00Z","url":"http://www.iana.org/domains/example/","user_id":2}},{"page":{"created_at":"2011-07-04T12:01:03Z","id":3,"name":"Down Page","ping_at":"2011-07-04T12:06:03Z","status":"up","updated_at":"2011-07-04T12:01:03Z","url":"http://www.iana.org/domains/example/","user_id":2}}] note that mine starts from [, which means an array, but yours from { and then you have [ array inside. If you run it with a debugger, you can see exactly what´s inside your json objects. There are also better approaches like: Jackson Jackson-JR (light-weight Jackson) GSON All of them can be used to convert Java Objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6773474', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/855517/']} | jdg_380550 |
stackexchange | llm_judgeable_groundtruth_similarity | 62910479 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently exploring functional_widgets and flutter_hooks. Having same idea with reactjs I'm fetching data with the following code. @hwidgetWidget homeScreen(BuildContext context) { TodoListProvider model = Provider.of<TodoListProvider>(context); useEffect(() { print('effect'); model.fetchList(); return () => {}; }, []); return Scaffold( appBar: _buildAppbar(context, model), bottomNavigationBar: _buildBottomNav(context, model), floatingActionButton: _buildFloatingAction(context), body: PageTransitionSwitcher( duration: const Duration(milliseconds: 300), reverse: model.reverse, transitionBuilder: ( Widget child, Animation<double> animation, Animation<double> secondaryAnimation, ) { return SharedAxisTransition( child: child, animation: animation, secondaryAnimation: secondaryAnimation, transitionType: SharedAxisTransitionType.horizontal, ); }, child: _getCurrentTab(model.currentIndex), ), );} I don't think this is the right way since it's throwing an error.
Now provide the response and nothing else.
| The issue with: useEffect(() { model.fetchList(); }, []); is that fetchList is called synchronously inside build and modifies an ancestor widget, which is not good. You can wrap the fetchList call in a microtask: useEffect(() { Future.microtask(() => model.fetchList()); }, []); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62910479', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4980092/']} | jdg_380551
stackexchange | llm_judgeable_groundtruth_similarity | 649588 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that If $\{f_n\}$ is an equicontinuous sequence, defined on a compact metric space $K$, and for all $x$, $f_n(x)\rightarrow f(x)$, then $f_n\rightarrow f$ uniformly. I'm having trouble proving this. I see the same problem here but am having trouble following the proof, particularly with part (3). Can someone guide me through a proof of this result?
Now provide the response and nothing else.
| Let $\varepsilon>0$ , we shall show that there exists an $n_0=n_0(\varepsilon)$ , such that, $$n\ge n_0\quad\Longrightarrow\quad \lvert\, f_n(x)-f(x)\rvert<\varepsilon,$$ for all $x\in K$ . As $\{f_n\}$ is equicontinuous, there exists a $\delta>0$ , such that for all $x,y\in K$ : $$d(x,y)<\delta\quad\Longrightarrow\quad \lvert\, f_n(x)-f_n(y)\rvert<\frac{\varepsilon}{3}, \tag{1}$$ for all $n\in\mathbb N$ . If we let $n\to\infty$ , then $(1)$ implies that $$d(x,y)<\delta\quad\Longrightarrow\quad \lvert\, f(x)-f(y)\rvert\le \frac{\varepsilon}{3},$$ Since $K$ is compact, it can be covered by finitely many balls of radius $\delta$ , i.e., there exist $k\in\mathbb N$ and $z_1,\ldots,z_k\in K$ , such that $$K\subset B(z_1,\delta)\cup\cdots\cup B(z_k,\delta).$$ As $f_n(z_j)\to f(z_j)$ , for $j=1,\ldots,k$ , we can find $n_0$ , such that $$n\ge n_0\quad\Longrightarrow\quad \lvert\, f_n(z_j)-f(z_j)\rvert<\frac\varepsilon 3,$$ for all $j=1,\ldots,k$ . Finally, if $x\in K$ and $n\ge n_0$ , then there exists a $j\in\{1,\ldots,k\}$ , for which $x\in B(z_j,\delta)$ , and hence $$\lvert\, f_n(x)-f(x)\rvert\le \lvert\, f_n(x)-f_n(z_j)\rvert+\lvert\, f_n(z_j)-f(z_j)\rvert+\lvert\, f(z_j)-f(x)\rvert<\frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{\varepsilon}{3}={\varepsilon}.$$ Ὅπερ ἔδει δεῖξαι . | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/649588', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71844/']} | jdg_380552 |
stackexchange | llm_judgeable_groundtruth_similarity | 159628 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having trouble understanding the nature of the metric tensor field on spacetime manifolds. In particular, a Riemannian manifold $(M,g)$ is defined as a real smooth manifold $M$ equipped with an inner product $g_p$ on the tangent space $T_pM$ at each point $p$ that varies smoothly from point to point in the sense that if $X$ and $Y$ are vector fields on $M$, then $p \mapsto g_p(X(p),Y(p))$ is a smooth function. The family $g_p$ of inner products is called a Riemannian metric tensor. But in my physics classes, I often hear the equation $ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$ referred to as a "metric." Is it a Riemannian metric? Can $ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$ be written as a tensor field?
Now provide the response and nothing else.
| Really this should be written $g = -dt \otimes dt+dx \otimes dx + dy \otimes dy + dz \otimes dz$. Here $g$ is a bilinear form, aka a (0,2) tensor. It eats two vectors and spits out a real number in a way which is linear in each slot separately. If I have a pair of vectors $v_1 = (t_1,x_1,y_1,z_1)$ and $v_2 = (t_2,x_2,y_2,z_2)$, then applying this tensor to the pair of vectors gives $g(v_1,v_2) = -t_1t_2+x_1x_2+y_1y_2+z_1z_2$ This looks a lot like the usual inner product between two vectors, except it is not positive definite, since the inner product of a "time vector" with itself is negative (so time is kind of "imaginary" if you want to think of it that way). This is not quite a Riemannian metric since it is not positive definite, but it is what is called a pseudo-Riemannian metric. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/159628', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/66165/']} | jdg_380553
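As a quick sketch (assuming the (t, x, y, z) coordinate ordering used above), the bilinear form can be written as a plain function, which makes the failure of positive definiteness concrete:

```python
def g(v1, v2):
    # Minkowski inner product with signature (-, +, +, +):
    # g(v1, v2) = -t1*t2 + x1*x2 + y1*y2 + z1*z2
    t1, x1, y1, z1 = v1
    t2, x2, y2, z2 = v2
    return -t1 * t2 + x1 * x2 + y1 * y2 + z1 * z2

# A "time vector" has negative self-inner-product, so g is not positive definite:
print(g((1, 0, 0, 0), (1, 0, 0, 0)))  # -> -1
print(g((0, 1, 0, 0), (0, 1, 0, 0)))  # -> 1
```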
stackexchange | llm_judgeable_groundtruth_similarity | 3015009 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(a_n)_{n\in \mathbb{N}}$ be a sequence of real numbers. I was wondering if the follwing implication is true: $$\lim\limits_{n \to \infty} \frac{a_n}{a_{n+1}} = 1 \Rightarrow \exists c \in \mathbb{R}: \lim\limits_{n \to \infty} {a_n} = c$$ Put into words: If $\lim\limits_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$ then $\lim\limits_{n \to \infty} a_{n}$ converges. My intuition behind $\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$ is that at some point $a_n$ and $a_{n+1}$ are alsmost equal. If this is the case $(a_n)_{n\in \mathbb{N}}$ is a Cauchy sequence and so converges. However I wasn't able to formally prove the statement. So I wondered: Is the statement really true? (If so, how could you prove it?) Cheers, Pascal
Now provide the response and nothing else.
| As noticed by MisterRiemann $a_n=n$ is a first counterexample but also $a_n=n^2$ works or $a_n=\log n$ and so on. Therefore unfortunately your guess is definitely not true! As a remark, other common myths on limits are: 1) $a_n \to \infty \implies a_{n+1}\ge a_n$ 2) $a_n \to L \implies a_n \to L^+ \quad \lor \quad a_n \to L^-$ 3) $a_n \to 0^+ \implies a_{n+1}\le a_n$ 4) $a_n$ bounded $\implies a_n \to L$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3015009', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/620004/']} | jdg_380554 |
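The counterexamples above are easy to check numerically: for each divergent sequence, the ratio $a_n/a_{n+1}$ still tends to 1. A small illustration:

```python
import math

# Counterexamples: a_n / a_{n+1} -> 1 while a_n -> infinity.
n = 10**6
for a in (lambda k: k, lambda k: k**2, lambda k: math.log(k)):
    print(a(n) / a(n + 1))  # each ratio is very close to 1
```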
stackexchange | llm_judgeable_groundtruth_similarity | 59836984 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using this code: Intent emailIntent = new Intent(Intent.ACTION_SENDTO, Uri.fromParts("mailto",email, null));emailIntent.putExtra(Intent.EXTRA_SUBJECT, subject);emailIntent.putExtra(Intent.EXTRA_TEXT, text);activity.startActivity(Intent.createChooser(emailIntent, "Send feedback to xyz")); for 2 years. And until now everything worked fine. User can select message client and send feedback with prefilled data inside. It worked fine for all mail clients.Recently noticed that if I select gmail client - body of message remains empty, but on other mail clients body is filled with text. Any ideas?
Now provide the response and nothing else.
| Thanks for the help. I made tests with lots of the suggested answers. Adding "text/plain" or "message/rfc822" made my app stop offering mail clients. Found this answer that fixed my issue: https://stackoverflow.com/a/59365539/973233 The most interesting part for me is having 2 intents: Intent selectorIntent = new Intent(Intent.ACTION_SENDTO);selectorIntent.setData(Uri.parse("mailto:"));final Intent emailIntent = new Intent(Intent.ACTION_SEND);emailIntent.putExtra(Intent.EXTRA_EMAIL, new String[]{email});emailIntent.putExtra(Intent.EXTRA_SUBJECT, subject);emailIntent.putExtra(Intent.EXTRA_TEXT, text);emailIntent.setSelector( selectorIntent );activity.startActivity(Intent.createChooser(emailIntent, "Send feedback to XYZ")); This solved the problem. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59836984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/973233/']} | jdg_380555
stackexchange | llm_judgeable_groundtruth_similarity | 111830 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
My understanding is Atiyah-Patodi-Singer solved the index theorem for manifold with boundary only for certain types of Dirac operators, correct?There is still no (or no hope to get) uniform theorem for the Dirac operator associated with any Dirac bundle (in the sense of Gromov-Lawson)?
Now provide the response and nothing else.
| The APS theorem works for any Dirac-type operator; see e.g. the excellent monograph by Booss-Wojchiecowski on this topic. More than four decades ago, Boutet de Monvel has described a general set-up for dealing with boundary value problems that mimicks the K-theoretic approach to the index theorem on closed manifolds. For a modern presentation of this point of view I recommend this paper by Melo-Shrohe-Schick arXiv: 1203.5649 and the references therein. It involves some noncommutative geometry because the symbols in the Boutet-de-Monvel calculus of elliptic boundary value problems define elements in the $K$ -theory of a noncommutative $C^*$ -algebra. In the case of closed manifolds symbols of elliptic operators lead to elements in the $K$ -theory of a commutative $C^*$ -algebra. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/111830', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1190/']} | jdg_380556 |
stackexchange | llm_judgeable_groundtruth_similarity | 105307 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been measuring my heart rate recovery after exercise and I see that it can be fit reasonably well using a single exponential: $HeartRate(t) = HR_{max} \times e^{-t/\tau} + HR_{resting}$ This observation is further supported by this paper: https://pubmed.ncbi.nlm.nih.gov/24494748/ From Wikipedia: "A quantity is subject to exponential decay if it decreases at a rate proportional to its current value" So, the exponential decay suggests that the ability to slow down the heart depends on the heart rate itself. A faster beating heart somehow produces more of the signal to slow down the heart. Is this the case? What is the mechanism that allows for such a feedback loop of the heart rate? Is the signal sent through the nerves as the heart muscles contract, or is the blood flow that allows a signal to travel? Is there another way to explain the mono-exponential decay? Here is an example from yesterday's run. I sprinted and then walked very slowly a few times. My heart recovery lifetime ( $\tau$ ) including only the first three decays is 53 +/- 4 seconds. The last decay is after I ran up the stairs and then sat down, and its lifetime is of only 22 seconds.
Now provide the response and nothing else.
| I think most people would take exponential decay in a circumstance like this to be the null hypothesis. That is, if you had something not exponential decay, that would be curious and interesting and worth understanding better. Exponential decay is a pretty fundamental concept in the universe in general and biology in particular. So, the exponential decay suggests that the ability to slow down the heart depends on the heart rate itself. A faster beating heart somehow produces more of the signal to slow down the heart. I would not think about it quite this way, and maybe that's why you're thinking there is something profound here about "ability to slow". Rather, I would think in homeostatic terms and consider the "set point". If your "set point" for heart rate is high during exercise and low during rest, you'd expect exponential decay if the physiological signal being acted on is a difference between the current rate and the set point. When the rate is much higher than the set point, you'd generate a bigger error signal and this would result in a faster decay. More concretely, you could think in terms of regulators of heart rate like CO 2 and catecholamines, as both of these are likely to follow an exponential pattern, CO 2 because transfer out of tissues into the blood and out of the blood into the lungs are both going to be faster when CO 2 concentrations are higher. Catecholamines are going to decrease by exponential decay through enzymatic degradation : when there are more molecules around to react, the rate of degradation will be faster. | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/105307', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/68380/']} | jdg_380557 |
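The set-point idea sketched above can be made concrete with a minimal simulation (hypothetical parameter values, not fitted to any data): if the rate of change of heart rate is proportional to the error HR − HR_rest, simple Euler integration reproduces a mono-exponential decay with time constant tau:

```python
import math

def simulate(hr0, hr_rest, tau, dt=0.01, t_end=300.0):
    # dHR/dt = -(HR - HR_rest) / tau
    # closed form: HR(t) = HR_rest + (hr0 - hr_rest) * exp(-t / tau)
    hr = hr0
    t = 0.0
    while t < t_end:
        hr += -(hr - hr_rest) / tau * dt
        t += dt
    return hr

hr0, hr_rest, tau = 180.0, 60.0, 53.0   # tau ~ the 53 s fitted in the question
numeric = simulate(hr0, hr_rest, tau, t_end=tau)      # after one time constant
exact = hr_rest + (hr0 - hr_rest) * math.exp(-1.0)
print(round(numeric, 1), round(exact, 1))
```

After one time constant (~53 s) the simulated rate has dropped to within 1/e of the way to the resting set point, matching the closed-form expression.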
stackexchange | llm_judgeable_groundtruth_similarity | 60148137 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a query. I have seen examples where developers write something like the code as follows: import threadingdef do_something(): return truet = threading.Thread(target=do_something)t.start()t.join() I know that join() signals the interpreter to wait till the thread is completely executed. But what if I do not write t.join() ? Will the thread get closed automatically and will it be reused later? Please let me know the answer. It's my first attempt at creating a multi-threaded application in Python 3.5.0.
Now provide the response and nothing else.
| A Python thread is just a regular OS thread. If you don't join it, it still keeps running concurrently with the current thread. It will eventually die when the target function completes or raises an exception. No such thing as "thread reuse" exists; once it's dead it rests in peace. Unless the thread is a "daemon thread" (via a constructor argument daemon or assigning the daemon property) it will be implicitly joined before the program exits; otherwise, it is killed abruptly. One thing to remember when writing multithreading programs in Python is that they only have limited use due to the infamous Global Interpreter Lock. In short, using threads won't make your CPU-intensive program any faster. They can be useful only when you perform something involving waiting (e.g. you wait for a certain file system event to happen in a thread). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60148137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4948889/']} | jdg_380558
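A small sketch of the join behavior described above — the worker runs concurrently either way, and join() only makes the main thread wait for it; once the target returns, the thread is dead and cannot be restarted:

```python
import threading
import time

results = []

def worker():
    time.sleep(0.1)
    results.append("done")

t = threading.Thread(target=worker)
t.start()
# Without join(), the main thread would race ahead here; the worker
# would still run concurrently and be implicitly joined at interpreter
# exit (unless it were a daemon thread). Joining explicitly waits now:
t.join()
print(results)       # -> ['done']
print(t.is_alive())  # -> False: a finished thread is never reused
```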
stackexchange | llm_judgeable_groundtruth_similarity | 3174763 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to declare a pointer to private data member of a class? If so, how do you do it?
Now provide the response and nothing else.
| Yes, and the same way you would create any other pointer. The catch is, of course, that since the member is private, you can only create the pointer inside the class, where you can see the member. class A { public: int* getFooPtr() { return &foo; // OK; Inside the class foo is visible } private: int foo;};int main(){ A a; int* p_foo1 = &a.foo; // Illegal; Outside the class, foo is private int* p_foo2 = a.getFooPtr(); // OK; getFooPtr() is a public member function} So it's possible to create pointers to private members, but only inside the class' member functions, and it is possible to return those created pointers from member functions. Whether or not it's a good idea to return pointers to private members is another question entirely (usually it's not a good idea). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3174763', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382499/']} | jdg_380559 |
stackexchange | llm_judgeable_groundtruth_similarity | 45015 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for any guidance around testing a service I've found running on a target server. I'm doing a 'black-box' pen-test and the company is one of those 'I-don't-want-to-tell-you-anything' types so they won't tell us what service is running. NMap picked up an open port and suggested it was a SIP service, however after testing a number of different SIP attacks/clients it almost certainly is not (the box i'm testing is also supposed to be just a webserver). I've not had much experience with fuzzing TCP ports before and from what I've found it seems you need to have some information about the protocol first before you can start fuzzing (a template of sorts). Additionally, a 'dictionary' of different things to throw at the service is required as well (which is usually relevant to the type of application your testing). What can I do if I don't have either of these things? Since I know nothing about the application and what it's expecting it seems like I can't run anything against it. Is that correct? Am I missing something obvious here that will help? Edit : To be clear, there is Also an Apache+Tomcat web server running on 443 and this is a linux box. That part is not an issue as i've already tested it. Its just this other 'random' port which I have no idea about.
Now provide the response and nothing else.
| You are correct: technically, fuzzing is usually regarded as sending invalid or random requests/data, it's implied that you know what you're testing in order to "break" the input. In some terminology (PDF) white-box fuzzing is the close to former (generated input) and black-box fuzzing (random input) is the latter. What you're attempting is better described as just " black box testing ". The general problem here is that while some protocols (SMTP, IMAP) freely offer details with banners, or some (HTTP) are overly chatty about protocol transgressions, there are many that need a magic protocol handshake (LDAP, RPC, and many more). Try nmap again, but with the version detection turned up to 11 (actually only to 9, but no matter): nmap -sV --version-all --all-ports -p $port $hostnmap -vvv -A --reason --script="+(safe or default) and not broadcast" -p $port $host Note in the second example the script prefix of + -- this means run scripts even though they would not ordinarily run. Many plugins will fail to run anyway, you'll need to read the output carefully. Hopefully this will give you some extra info (make sure to use a recent nmap , scripts often hang in old versions). You haven't given the port or the nmap reason, so I cannot explain why it concluded it was SIP, my best guess is it either responded to a GET or OPTIONS request or it is port 5060 or 5061. More general advice: it should be easy to confirm or deny the existence a public web server, tune down the nmap rate with -T1 or -T0 in case an IPS is blocking you. make sure to scan SSL with a recent tool in order to properly support contemporary TLS versions and options Finally, there are other application scanners out there, amap isn't nearly as comprehensive an nmap , but it's worth a shot. | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/45015', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/18541/']} | jdg_380560 |
stackexchange | llm_judgeable_groundtruth_similarity | 4274322 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
XAML: <TextBox Name="textboxMin"> <TextBox.Text> <Binding Path="Max"> <Binding.ValidationRules> <local:IntValidator/> </Binding.ValidationRules> </Binding> </TextBox.Text> </TextBox> Code: void buttonOK_Click(object sender, RoutedEventArgs e){ // I need to know here whether textboxMin validation is OK // textboxMin. ??? // I need to write something like: // if ( textboxMin.Validation.HasErrors ) // return;} It would be nice also to know, how to disable OK button, if at least one of dialog controls doesn't pass validation - in XAML, using binding. Having this way, I don't need to check validation state in the code.
Now provide the response and nothing else.
| Validation.HasError is an attached property so you can check it for textboxMin like this void buttonOK_Click(object sender, RoutedEventArgs e){ if (Validation.GetHasError(textboxMin) == true) return;} To run all ValidationRules for the TextProperty in code behind you can get the BindingExpression and call UpdateSource BindingExpression be = textboxMin.GetBindingExpression(TextBox.TextProperty);be.UpdateSource(); Update It will take some steps to achieve the binding to disable the button if any validation occurs. First, make sure all bindings add NotifyOnValidationError="True". Example <TextBox Name="textboxMin"> <TextBox.Text> <Binding Path="Max" NotifyOnValidationError="True"> <Binding.ValidationRules> <local:IntValidator/> </Binding.ValidationRules> </Binding> </TextBox.Text></TextBox> Then we hook up an EventHandler to the Validation.Error event in the Window. <Window ... Validation.Error="Window_Error"> And in code behind we add and remove the validation errors in an observablecollection as they come and go public ObservableCollection<ValidationError> ValidationErrors { get; private set; } private void Window_Error(object sender, ValidationErrorEventArgs e){ if (e.Action == ValidationErrorEventAction.Added) { ValidationErrors.Add(e.Error); } else { ValidationErrors.Remove(e.Error); }} And then we can bind IsEnabled of the Button to ValidationErrors.Count like this <Button ...> <Button.Style> <Style TargetType="Button"> <Setter Property="IsEnabled" Value="False"/> <Style.Triggers> <DataTrigger Binding="{Binding ValidationErrors.Count}" Value="0"> <Setter Property="IsEnabled" Value="True"/> </DataTrigger> </Style.Triggers> </Style> </Button.Style></Button> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4274322', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/279313/']} | jdg_380561 |
stackexchange | llm_judgeable_groundtruth_similarity | 25714670 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on my python script to work out the duration times between start date and end date format like 20140520160000 and 20140520170000 so I can get the hour. I'm having a trouble with this code: if epgDuration >= 0.10 and epgDuration <= 0.30: epgwidth = "250" I get an error when I'm trying to compare the range of the times between 0.10 mins and 0.30 mins. The error I get is: TypeError: can't compare datetime.timedelta to float. The error are jumping on this line: if epgDuration >= 0.10 and epgDuration <= 0.30: Here is the results: 14:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 2:30:0014:44:55 T:1580 NOTICE: 3:00:0014:44:55 T:1580 NOTICE: 1:00:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 1:00:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:0014:44:55 T:1580 NOTICE: 0:30:00 Here is the code when I use to duration the times: for row in programs: program_startdate = str(row[2]) program_endDate = str(row[3]) try: start_date = datetime.datetime.strptime(program_startdate, "%Y%m%d%H%M%S") end_date = datetime.datetime.strptime(program_endDate, "%Y%m%d%H%M%S") except TypeError: start_date = datetime.datetime.fromtimestamp(time.mktime(time.strptime(program_startdate, "%Y%m%d%H%M%S"))) end_date = datetime.datetime.fromtimestamp(time.mktime(time.strptime(program_endDate, "%Y%m%d%H%M%S"))) #workout the duration times to get the program time epgDuration = end_date - start_date if epgDuration >= 0.10 and epgDuration <= 0.30: epgwidth = "250" elif epgDuration >= 1.00 and epgDuration <= 1.29: epgwidth = "500" print epgwidth
Now provide the response and nothing else.
| Indeed, you cannot compare a timedelta to a float value. You can convert the object to seconds: if 600 <= epgDuration.total_seconds() <= 1800: where 10 minutes is 600 seconds, and 30 minutes is 1800. Or create new timedelta() objects to compare against: epgwidth = "0"if timedelta(minutes=10) <= epgDuration <= timedelta(minutes=30): epgwidth = "250"elif timedelta(hours=1) <= epgDuration <= timedelta(hours=1.5): epgwidth = "500" I've given epgwidth a default value before the if statements for the case where the time difference is not falling in the 10-30 minutes or 1-1.5 hour ranges. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25714670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380562 |
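A quick, self-contained check of the fix above (the timestamps are made up, in the question's %Y%m%d%H%M%S format):

```python
import datetime

def width_for(start_str, end_str):
    """Map a programme's duration to a display width, comparing
    timedelta against timedelta (never against a float)."""
    fmt = "%Y%m%d%H%M%S"
    start = datetime.datetime.strptime(start_str, fmt)
    end = datetime.datetime.strptime(end_str, fmt)
    duration = end - start
    if datetime.timedelta(minutes=10) <= duration <= datetime.timedelta(minutes=30):
        return "250"
    elif datetime.timedelta(hours=1) <= duration <= datetime.timedelta(hours=1.5):
        return "500"
    return "0"

print(width_for("20140520160000", "20140520163000"))  # 30 minutes -> "250"
```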
stackexchange | llm_judgeable_groundtruth_similarity | 419433 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
My brain is wrapped around the axle on public and private keys. When you create a cloud server (instance) on Amazon's EC2 service and then want to connect to it via SSH, Amazon requires you to download a private key to make the connection. Doesn't the idea behind public/private key suggest that Amazon should require you to download a public one? Further, if I set up an SFTP server for a customer to use, should I be installing their key on the server or giving them a key from the server? In either case, should it be a public or private key?
Now provide the response and nothing else.
| Thinking more deeply about the authentication process, what needs to be kept secret? Amazon knows the public half of the key, and anybody can know the public half. The public half of the keypair, when matched with the private half, denotes that the private half was used to authenticate. You private key that is provided to you when Amazon generates a keypair for you is only useful if you're the only one that has it. If it's not a secret, then anybody else who knows it can also authenticate to anybody who holds the public half of the keypair. Whoever is being authenticated must hold the private half. It's ok if everybody in the world can authenticate you by holding the public half of the key, but only you should be in control of the private half. | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/419433', 'https://serverfault.com', 'https://serverfault.com/users/101904/']} | jdg_380563 |
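To make "the private half authenticates, the public half verifies" concrete, here is a toy RSA sketch. The numbers are textbook-sized and the function names are mine (far too weak for real use, where you would let ssh-keygen do all of this), but the round trip shows why only the private-key holder can produce a signature that the public half accepts:

```python
# Toy RSA keypair: n = 61 * 53, and e * d == 1 (mod lcm(60, 52))
n, e, d = 3233, 17, 2753   # (n, e) is the public half, d is the private half

def sign(message, private_exponent):
    # Only the holder of the private half can produce this value.
    return pow(message, private_exponent, n)

def verify(message, signature, public_exponent):
    # Anyone holding the public half can check it.
    return pow(signature, public_exponent, n) == message

m = 65                     # a message, already reduced mod n
sig = sign(m, d)
print(verify(m, sig, e))   # True: possession of the private half is proven
```

Tampering with the signature (e.g. `sig + 1`) makes verification fail, which is exactly why the private half must stay secret while the public half can be handed to anyone, Amazon included.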
stackexchange | llm_judgeable_groundtruth_similarity | 3966301 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is related to microcontrollers, but I thought to post it here because it is a problem with algorithms and data types, not with any hardware stuff. I'll explain the problem so that someone who doesn't have any hardware knowledge can still participate :) In the microcontroller there is an analog-to-digital converter with 10-bit resolution. (It will output a value between 0 and 1023.) I need to send this value to a PC using the serial port, but you can only write 8 bits at once (you need to write bytes); it is a limitation of the microcontroller. So in the above case I need to send at least 2 bytes. My PC application just reads a sequence of numbers for plotting, so it should capture two consecutive bytes and build the number back. But then we will also need a delimiter character, and since the delimiter character itself has an ASCII value between 0 and 255, it will mix up the process. So what is the simplest way to do this? Should I send the values as a sequence of chars? Ex: 1023 = "1""0""2""3" vs "Char(255)Char(4)". In summary, I need to send a sequence of 10-bit numbers over serial in the fastest way. :)
Now provide the response and nothing else.
| You need to send 10 bits, and because you send a byte at a time, you have to send 16 bits. The big question is how much is speed a priority, and how synchronised are the sender and receiver? I can think of 3 answers, depending on these conditions. Regular sampling, unknown join point If the device is running all the time, you aren't sure when you are going to connect (you could join at any time in the sequence) but sampling rate is slower than communication speed so you don't care about size I think I'd probably do it as following. Suppose you are trying to send the ten bits abcdefghij (each letter one bit). I'd send pq0abcde then pq1fghij , where p and q are error checking bits . This way: no delimiter is needed (you can tell which byte you are reading by the 0 or 1) you can definitely spot any 1 bit error, so you know about bad data I'm struggling to find a good two bit error correcting code, so I guess I'd just make p a parity bit for bits 2,3 and 4 (0, a b above) and q a parity bit for 5 6 and 7 (c,d,e above). This might be clearer with an example. Suppose I want to send 714 = 1011001010. Split in 2 10110 , 01010 Add bits to indicate first and second byte 010110, 101010 calculate parity for each half: p0=par(010)=1, q0=par(110)=0, p1=par(101)=0, q1=par(010)=1 bytes are then 10010110, 01101010 You then can detect a lot of different error conditions, quickly check which byte you are being sent if you lose synchronisation, and none of the operations take very long in a microcontroller (I'd do the parity with an 8 entry lookup table). Dense data, known join point If you know that the reader starts at the same time as the writer, just send the 4 ten bit values as 5 bytes. If you always read 5 bytes at a time then no problems. If you want even more space saving, and have good sample data already, I'd compress using a huffman coding . Dense data, unknown join point In 7 bytes you can send 5 ten bit values with 6 spare bits. 
Send 5 values like this: byte 0: 0 (7 bits) byte 1: 1 (7 bits) byte 2: 1 (7 bits) byte 3: 1 (7 bits) byte 4: 0 (7 bits) byte 5: 0 (7 bits) byte 6: (8 bits) Then whenever you see 3 1's in a row for the most significant bit, you know you have bytes 1, 2 and 3. This idea wastes 1 bit in 56, so could be made even more efficient, but you'd have to send more data at a time. Eg (5 consecutive ones, 120 bits sent in 16 bytes): byte 0: 0 (7 bits) 7 byte 1: 1 (7 bits) 14 byte 2: 1 (7 bits) 21 byte 3: 1 (7 bits) 28 byte 4: 1 (7 bits) 35 byte 5: 1 (7 bits) 42 byte 6: 0 (7 bits) 49 byte 7: (8 bits) 57 byte 8: (8 bits) 65 byte 9: (8 bits) 73 byte 10: (8 bits) 81 byte 11: 0 (7 bits) 88 byte 12: (8 bits) 96 byte 13: (8 bits) 104 byte 14: (8 bits) 112 byte 15: (8 bits) 120 This is quite a fun problem! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3966301', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76465/']} | jdg_380564 |
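Here is a sketch of the first scheme (pq0abcde / pq1fghij) in Python, reproducing the answer's worked example for 714. This is my reading of the answer's bit layout, not production protocol code:

```python
def parity(bits):
    """Parity bit: XOR of the given bits."""
    p = 0
    for b in bits:
        p ^= b
    return p

def encode(value):
    """Pack a 10-bit sample into two bytes laid out as pq0abcde, pq1fghij."""
    assert 0 <= value < 1024
    bits = [(value >> (9 - i)) & 1 for i in range(10)]   # a..j, MSB first
    out = []
    for marker, half in ((0, bits[:5]), (1, bits[5:])):
        payload = [marker] + half                        # bits 2..7 of the byte
        p = parity(payload[:3])                          # covers bits 2, 3, 4
        q = parity(payload[3:])                          # covers bits 5, 6, 7
        byte = (p << 7) | (q << 6)
        for i, b in enumerate(payload):
            byte |= b << (5 - i)
        out.append(byte)
    return out

def decode(pair):
    """Recover the sample, raising if a parity check fails."""
    halves = {}
    for byte in pair:
        payload = [(byte >> (5 - i)) & 1 for i in range(6)]
        if parity(payload[:3]) != (byte >> 7) & 1 or parity(payload[3:]) != (byte >> 6) & 1:
            raise ValueError("parity error")
        halves[payload[0]] = payload[1:]                 # keyed by the 0/1 marker
    value = 0
    for b in halves[0] + halves[1]:
        value = (value << 1) | b
    return value

print([format(b, "08b") for b in encode(714)])   # ['10010110', '01101010'], as in the answer
```

The marker bit lets the receiver resynchronise on either half, and the two parity bits catch any single-bit error in a byte.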
stackexchange | llm_judgeable_groundtruth_similarity | 14965566 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing queryfrom two different tables. Table A and Table B Here is Query. select A.OUT_NUM, A.TIMESTAMP, A.LAST_name, A.event_type, A.comments, B.name from TABLEA A left outer join TABLEB B ON A.feed_id = B.id where A.OUT_NUM = '12345' and A.event_type In ('cause','status') B.NAME is not null when event_type = xyz else it will be null I only want to see when event_type in ('CAUSE','STATUS') but also want to see name field but not empty. second table is what I am trying to achieve. Thanks
Now provide the response and nothing else.
| With -O1 , gcc-4.7.1 calls unpredictableIfs only once and resuses the result, since it recognizes that it's a pure function, so the result will be the same every time it's called. (Mine did, verified by looking at the generated assembly.) With higher optimisation level, the functions are inlined, and the compiler doesn't recognize that it's the same code anymore, so it is run each time a function call appears in the source. Apart from that, my gcc-4.7.1 deals best with unpredictableIfs when using -O1 or -O2 (apart from the reuse issue, both produce the same code), while noIfs is treated much better with -O3 . The timings between the different runs of the same code are however consistent here - equal or differing by 10 milliseconds (granularity of clock ), so I have no idea what could cause the substantially different times for unpredictableIfs you reported for -O3 . With -O2 , the loop for unpredictableIfs is identical to the code generated with -O1 (except for register swapping): .L12: movl %eax, %ecx andl $1073741826, %ecx cmpl $1, %ecx adcl $0, %edx addl $1, %eax cmpl $1000000000, %eax jne .L12 and for noIfs it's similar: .L15: xorl %ecx, %ecx testl $1073741826, %eax sete %cl addl $1, %eax addl %ecx, %edx cmpl $1000000000, %eax jne .L15 where it was .L7: testl $1073741826, %edx sete %cl movzbl %cl, %ecx addl %ecx, %eax addl $1, %edx cmpl $1000000000, %edx jne .L7 with -O1 . Both loops run in similar time, with unpredictableIfs a bit faster. 
With -O3 , the loop for unpredictableIfs becomes worse, .L14: leal 1(%rdx), %ecx testl $1073741826, %eax cmove %ecx, %edx addl $1, %eax cmpl $1000000000, %eax jne .L14 and for noIfs (including the setup-code here), it becomes better: pxor %xmm2, %xmm2 movq %rax, 32(%rsp) movdqa .LC3(%rip), %xmm6 xorl %eax, %eax movdqa .LC2(%rip), %xmm1 movdqa %xmm2, %xmm3 movdqa .LC4(%rip), %xmm5 movdqa .LC5(%rip), %xmm4 .p2align 4,,10 .p2align 3.L18: movdqa %xmm1, %xmm0 addl $1, %eax paddd %xmm6, %xmm1 cmpl $250000000, %eax pand %xmm5, %xmm0 pcmpeqd %xmm3, %xmm0 pand %xmm4, %xmm0 paddd %xmm0, %xmm2 jne .L18.LC2: .long 0 .long 1 .long 2 .long 3 .align 16.LC3: .long 4 .long 4 .long 4 .long 4 .align 16.LC4: .long 1073741826 .long 1073741826 .long 1073741826 .long 1073741826 .align 16.LC5: .long 1 .long 1 .long 1 .long 1 it computes four iterations at once, and accordingly, noIfs runs almost four times as fast then. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14965566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1512440/']} | jdg_380565 |
stackexchange | llm_judgeable_groundtruth_similarity | 46897859 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to figure out the best approach to invoke an AWS Lambda function 5 hours later. I have another Lambda function that issues multiple retrieval jobs to grab items out of AWS Glacier, and I need a solution to run another Lambda function on each of the items once they are retrieved, which takes about 5 hours. I was thinking about using SNS but was wondering if there are other approaches to this. Any input is appreciated.
Now provide the response and nothing else.
| Update 14th November 2022: AWS has released a new feature of EventBridge called EventBridge Scheduler which lets you schedule one-time or recurring actions like invoking a Lambda Function. In the context of the question above, scheduling to invoke a Lambda Function with hours delay, a CLI call might look like this: aws scheduler create-schedule --name lambda-templated --expression 'at(2022-11-20T13:00:00)' \--target '{"RoleArn": "ROLE_ARN", "Arn":"FUNCTION_ARN", "Input": "{ "Payload": "TEST_PAYLOAD" }" }' \--flexible-time-window '{ "Mode": "OFF"}' In this case, expression is a one-time schedule to invoke the Lambda Function on 20th November 2022 at 1pm UTC+0. The target argument expects a certain format depending on the action you want to perform. The above format is described in the Template target docs for Lambda Invoke . Old answer (still relevant for some cases): Besides using CloudWatch, another interesting approach in your case would be to use AWS Step Functions : Either use a wait state by setting a fixed period (or even a dynamic one if you provide input data to the state machine): { "Comment": "An example of the Amazon States Language using wait states", "StartAt": "WaitState", "States": { "WaitState": { "Type": "Wait", "Seconds": 10, "Next": "MyLambda" }, "MyLambda": { "Type": "Task", "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME", "End": true } } } Or you can use a separate Lambda function in a task state combined with a choice state which checks in a loop if the other function should run: { "Comment": "A state machine that submits a Job to AWS Batch and monitors the Job until it completes.", "StartAt": "Wait X Seconds", "States": { "Wait X Seconds": { "Type": "Wait", "SecondsPath": "$.wait_time", "Next": "Get Job Status" }, "Get Job Status": { "Type": "Task", "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:CheckJob", "Next": "Job Complete?" 
}, "Job Complete?": { "Type": "Choice", "Choices": [ { "Variable": "$.status", "StringEquals": "RUNNING", "Next": "Wait X Seconds" }, { "Variable": "$.status", "StringEquals": "SUCCEEDED", "Next": "Do Job" } ], "Default": "Wait X Seconds" }, "Do Job": { "Type": "Task", "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:DoJob", "End": true } } } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46897859', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4305311/']} | jdg_380566 |
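For the EventBridge Scheduler route, the one-time expression shown in the answer is easy to build programmatically. A sketch (the helper name is mine; the resulting string is what you would pass as the schedule expression to create-schedule, e.g. via boto3's scheduler client):

```python
import datetime

def one_time_expression(run_at):
    """EventBridge Scheduler one-time expression: at(yyyy-mm-ddThh:mm:ss)."""
    return "at({})".format(run_at.strftime("%Y-%m-%dT%H:%M:%S"))

# e.g. invoke the follow-up Lambda five hours from "now" (fixed here for clarity)
now = datetime.datetime(2022, 11, 20, 8, 0, 0)
print(one_time_expression(now + datetime.timedelta(hours=5)))  # at(2022-11-20T13:00:00)
```

Note the expression takes a wall-clock timestamp, so for the "5 hours after each retrieval job" case you would compute it per item when the retrieval job is issued.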
stackexchange | llm_judgeable_groundtruth_similarity | 15259551 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on an EAV database implemented in MySQL, so when I say entity, you can read that as table. Since it's a non-relational database I cannot provide any SQL for tables etc., but I'm hoping to get the conceptual answer for a relational database and I will translate it to EAV SQL myself. I'm building a mini stock market system. There is an "asset" entity that can have many "demand" and "offer" entities. The asset entity also may have many "deal" entities. Each deal entity has a "share_price" attribute. Not all assets have demand, offer or deal entities. I want to return a list of offer and demand entities, grouped by asset, i.e. if an asset has 2 offers and 3 demands only 1 result will show. This must be sorted by the highest share_price of deals attached to assets of the demand or offer. Then, the highest share_price for each demand or offer is sorted overall. If an asset has demands or offers but no deals, it will be returned with NULL for share_price. So say the data is like this: Asset 1 has 1 offer, 1 demand and 2 deals with share_price 7.50 and 12.00. Asset 2 has 1 offer and 1 deal with share_price 8.00. Asset 3 has 3 offers and 3 demands and no deals. Asset 4 has no offers and no demand and 1 deal with share_price 13.00. I want the results: Asset share_price: Asset 1 12.00, Asset 2 8.00, Asset 3 null. Note: Asset 4 is not in the result set because it has no offers or demands. I know this is a complex one. I really don't want to have to go to the database more than once or do any array re-ordering in PHP. Any help greatly appreciated. Some users want to see the SQL I have. Here it is, but it won't make much sense as it's a specialised EAV database.
SELECT DISTINCT data.asset_guid, r.guid_two, data.share_price FROM ( select rr.guid_one as asset_guid, max(msv.string) as share_price from market_entities ee join market_entity_relationships rr on ee.guid = rr.guid_two JOIN market_metadata as mt on ee.guid = mt.entity_guid JOIN market_metastrings as msn on mt.name_id = msn.id JOIN market_metastrings as msv on mt.value_id = msv.id where subtype = 6 and msn.string = 'share_price' and rr.relationship = 'asset_deal' group by rr.guid_one ) data left outer JOIN market_entities e on e.guid = data.asset_guid left outer JOIN market_entity_relationships r on r.guid_one = e.guid WHERE r.relationship = 'trade_share' GROUP BY data.asset_guid
Now provide the response and nothing else.
| To complement unutbu's answer, here's an approach using apply on the groupby object. >>> df.groupby('A_id').apply(lambda x: pd.Series(dict( sum_up=(x.B == 'up').sum(), sum_down=(x.B == 'down').sum(), over_200_up=((x.B == 'up') & (x.C > 200)).sum()))) over_200_up sum_down sum_upA_id a1 0 0 1a2 0 1 0a3 1 0 2a4 0 0 0a5 0 0 0 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15259551', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1716672/']} | jdg_380567 |
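If you just want the counting logic without pandas, the same aggregation can be done with the standard library alone. The sample rows below are invented to reproduce the table in the answer:

```python
from collections import defaultdict

rows = [  # (A_id, B, C) -- hypothetical sample data
    ("a1", "up", 150),
    ("a2", "down", 50),
    ("a3", "up", 250), ("a3", "up", 100),
    ("a4", "flat", 300),
    ("a5", "flat", 10),
]

counts = defaultdict(lambda: {"sum_up": 0, "sum_down": 0, "over_200_up": 0})
for a_id, b, c in rows:
    g = counts[a_id]
    g["sum_up"] += b == "up"                   # booleans add as 0/1
    g["sum_down"] += b == "down"
    g["over_200_up"] += b == "up" and c > 200

print(counts["a3"])  # {'sum_up': 2, 'sum_down': 0, 'over_200_up': 1}
```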
stackexchange | llm_judgeable_groundtruth_similarity | 23115013 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to install Kafka message queue on Windows for testing purposes (not for production). I found this article on how to install Apache Kafka 0.8 on Windows: http://janschulte.wordpress.com/2013/10/13/apache-kafka-0-8-on-windows/ It's a good article, but it's outdated, unfortunately. Does someone know a way to achieve that?
Now provide the response and nothing else.
| Ok, it's finally not complicated :) The only steps are: Edit (2021/10) Latest versions of Kafka end up with an AccessDeniedException upon startup, and even when successfully started, throw the same error during log rotation. Kafka utilizes Java methods designed for Unix filesystem libraries, so installing Kafka in WSL2 is recommended (link to Confluent blog, but steps also applicable to Apache Kafka direct installation). Old answer Download Kafka and uncompress it somewhere nice (let's say C:/Kafka ) Install Cygwin If using Cygwin, edit \bin\kafka-run-class.sh and at the end of the file, change `exec $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@"` to ``exec java $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp `cygpath -wp $CLASSPATH` $KAFKA_OPTS "$@"`` In Environment Variables , Add java to your Path System Variable: That's it.. you can now run ZooKeeper and Kafka servers and start playing with topics and stuff .. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23115013', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1480391/']} | jdg_380568 |
stackexchange | llm_judgeable_groundtruth_similarity | 39285588 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created a view where, when one of the text fields is clicked, it pops up a child (alert dialog) with a list of products, but when I click one item in the list I can't display it in the text field once the alert is dismissed. This is the parent view: import Foundationimport UIKitclass ViewAward: UIViewController{@IBOutlet var tfMCN: UITextField!@IBOutlet var tfAmount: UITextField!@IBOutlet var tfProduct: UITextField!@IBOutlet var tfTotal: UITextField!override func viewDidLoad() { super.viewDidLoad() let rightAddBarButtonItem:UIBarButtonItem = UIBarButtonItem(title: "Send", style: UIBarButtonItemStyle.Plain, target: self, action: #selector(ViewAward.searchTapped)) self.navigationItem.setRightBarButtonItems([rightAddBarButtonItem], animated: true) let state = String(ViewPopUpProduct.Product.ProductDescription) print("My view state:"+state) self.tfProduct.text = state tfProduct.addTarget(self, action: #selector(ViewAward.productTapped), forControlEvents: UIControlEvents.TouchDown)}func searchTapped(sender:UIButton) { let alertController = UIAlertController( title: "Award", message:"Award successfully posted!", preferredStyle: UIAlertControllerStyle.Alert) alertController.addAction(UIAlertAction(title: "Ok", style: UIAlertActionStyle.Default,handler: nil)) self.presentViewController(alertController, animated: true, completion: nil)}func productTapped(textfield: UITextField){ //tfProduct.endEditing(true) tfProduct.resignFirstResponder() let popOverVC = UIStoryboard(name:"Main",bundle:nil).instantiateViewControllerWithIdentifier("sbPopUpID") as!
ViewPopUpProduct self.addChildViewController(popOverVC) popOverVC.view.frame = self.view.frame self.view.addSubview(popOverVC.view) popOverVC.didMoveToParentViewController(self)}} and this when the user clicked on of the items import UIKitclass ViewPopUpProduct: UIViewController {@IBOutlet var tableView: UITableView!var productDescription = ["Product 1","Product 2","Product 3"]var productID = ["prdct1","prdct2","prdct3"]// Global Variables struct Product { static var ProductID = String() static var ProductDescription = String()}override func viewDidLoad() { super.viewDidLoad() self.showAnimate() self.view.backgroundColor = UIColor.blackColor().colorWithAlphaComponent(0.4) // Do any additional setup after loading the view.}override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated.}@IBAction func cancelPopUp(sender: AnyObject) { self.removeAnimate()}func showAnimate(){ self.view.transform = CGAffineTransformMakeScale(1.3, 1.3) self.view.alpha = 0.0; UIView.animateWithDuration(0.25, animations: { self.view.alpha = 1.0 self.view.transform = CGAffineTransformMakeScale(1.0, 1.0) });}func removeAnimate(){ UIView.animateWithDuration(0.25, animations: { self.view.transform = CGAffineTransformMakeScale(1.3, 1.3) self.view.alpha = 0.0; }, completion:{(finished : Bool) in if (finished) { self.view.removeFromSuperview() } });}//Mark - Table Viewfunc tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return self.productID.count}func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell = self.tableView.dequeueReusableCellWithIdentifier("cell",forIndexPath: indexPath) as! 
ProductViewCell cell.productLabel.text = productDescription[indexPath.row] return cell}func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) { tableView.deselectRowAtIndexPath(indexPath, animated: true) Product.ProductID = String(productID[indexPath.row]) Product.ProductDescription = String(productDescription[indexPath.row]) self.removeAnimate()}}
Now provide the response and nothing else.
| You can use protocols/delegate Here is a very very straightforward explanation, no bs: https://www.youtube.com/watch?v=guSYMPaXLaw Or in your situation you can also use NSNotificationCenter You can do something like this: The "sender" view controller would do let nc = NSNotificationCenter.defaultCenter()nc.postNotificationName("printValue", object: nil, userInfo: ["value" : "Pass Me this string"]) The receiver view controller then can listen to the notification. let nc = NSNotificationCenter.defaultCenter()nc.addObserver(self, selector: #selector(printValue), name: "printValue", object: nil)func printValue(notification:NSNotification) { let userInfo:Dictionary<String,String> = notification.userInfo as! Dictionary<String,String> let item = userInfo["value"]! as String print(item,self)} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39285588', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5102168/']} | jdg_380569 |
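The NotificationCenter approach above is plain publish/subscribe, so the flow is easy to see in any language. A minimal Python stand-in (class and method names are mine, loosely mirroring addObserver/postNotificationName):

```python
class NotificationCenter:
    """Toy stand-in for NSNotificationCenter: named channels, userInfo dicts."""
    def __init__(self):
        self._observers = {}

    def add_observer(self, name, callback):
        # Receiver side: register interest in a named notification.
        self._observers.setdefault(name, []).append(callback)

    def post(self, name, user_info=None):
        # Sender side: deliver user_info to every registered observer.
        for callback in self._observers.get(name, []):
            callback(user_info or {})

received = []
nc = NotificationCenter()
nc.add_observer("printValue", lambda info: received.append(info["value"]))
nc.post("printValue", {"value": "Pass Me this string"})
print(received)  # ['Pass Me this string']
```

The key point carries over directly: sender and receiver only share the notification name and the shape of the userInfo dictionary, not references to each other.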
stackexchange | llm_judgeable_groundtruth_similarity | 1103363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
And what kind of alternative strategies do you use for avoiding LazyLoadExceptions? I do understand that open session in view has issues with: layered applications running in different JVMs; transactions that are committed only at the end, when most probably you would like the results before. But, if you know that your application is running on a single VM, why not ease your pain by using an open session in view strategy?
Now provide the response and nothing else.
| Because sending possibly uninitialised Proxies, especially collections, in the view layer and triggering hibernate loading from there can be troubling from both a performance and understanding point of view. Understanding : Using OSIV 'pollutes' the view layer with concerns related to the data access layer. The view layer is not prepare to handle a HibernateException which may happen when lazy loading, but presumably the data access layer is. Performance : OSIV tends to tug proper entity loading under the carpet - you tend not to notice that your collections or entities are lazily initialised ( perhaps N+1 ). More convenience, less control. Update: see The OpenSessionInView antipattern for a larger discussion regarding this subject. The author lists three important points: each lazy initialization will get you a query meaning each entity will need N + 1 queries, where N is the number of lazy associations. If your screen presents tabular data, reading Hibernate’s log is a big hint that you do not do as you should this completely defeats layered architecture, since you sully your nails with DB in the presentation layer. This is a conceptual con, so I could live with it but there is a corollary last but not least, if an exception occurs while fetching the session, it will occur during the writing of the page: you cannot present a clean error page to the user and the only thing you can do is write an error message in the body | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1103363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11466/']} | jdg_380570 |
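The N+1 point is easy to demonstrate with a toy query counter (no Hibernate involved; it just illustrates why lazily initializing a collection per entity in the view multiplies round-trips):

```python
class FakeSession:
    """Counts round-trips so the N+1 effect is visible."""
    def __init__(self):
        self.queries = 0

    def load_orders(self):                    # one query for the parent rows
        self.queries += 1
        return [{"id": i} for i in range(10)]

    def load_lines_lazily(self, order):       # one extra query per entity
        self.queries += 1
        return ["line for %d" % order["id"]]

session = FakeSession()
orders = session.load_orders()
for order in orders:                          # the view layer iterating, OSIV-style
    session.load_lines_lazily(order)
print(session.queries)  # 11 == N + 1 for N = 10 parent rows
```

An eager fetch join would have done the same work in a single query, which is exactly the kind of decision OSIV hides from the data access layer.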
stackexchange | llm_judgeable_groundtruth_similarity | 11910143 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Many articles about Haskell say that it allows you to perform some checks at compile time instead of run time. So, I want to implement the simplest check possible: allow a function to be called only on integers greater than zero. How can I do it?
Now provide the response and nothing else.
| module Positive (toPositive, getPositive, Positive) wherenewtype Positive = Positive { unPositive :: Int }toPositive :: Int -> Maybe PositivetoPositive n = if (n <= 0) then Nothing else Just (Positive n)-- We can't export unPositive, because unPositive can be used-- to update the field. Trivially renaming it to getPositive-- ensures that getPositive can only be used to access the fieldgetPositive :: Positive -> IntgetPositive = unPositive The above module doesn't export the constructor, so the only way to build a value of type Positive is to supply toPositive with a positive integer, which you can then unwrap using getPositive to access the actual value. You can then write a function that only accepts positive integers using: positiveInputsOnly :: Positive -> ... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11910143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/483876/']} | jdg_380571 |
stackexchange | llm_judgeable_groundtruth_similarity | 433752 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a test framework in Perl. As part of the tests, I may need to add precondition or postcondition checks for any given test, but not necessarily for all of them. What I've got so far is something like: eval "&verify_precondition_TEST$n";print $@ if $@; Unfortunately, this outputs "Undefined subroutine &verify_precondition_TEST1 called at ..." if the function does not exist. How can I determine ahead of time whether the function exists, before trying to call it?
Now provide the response and nothing else.
| Package::Name->can('function') or *Package::Name::function{CODE}# or no strict; *{ "Package::Name::$function" }{CODE} or just live with the exception. If you call the function in an eval and $@ is set, then you can't call the function. Finally, it sounds like you may want Test::Class instead of writing this yourself. Edit: defined &function_name (or the no strict; defined &{ $function_name } variant), as mentioned in the other answers, looks to be the best way. UNIVERSAL::can is best for something you're going to call as a method (stylistically), and why bother messing around with the symbol table when Perl gives you syntax to do what you want. Learning++ :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/433752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/893/']} | jdg_380572 |
stackexchange | llm_judgeable_groundtruth_similarity | 217736 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to electronics. Could anyone explain to me what a bootstrap capacitor is and what it is used for? I am doing research about a buck converter using the MP1482 chip and I do not understand the bootstrap section.
Now provide the response and nothing else.
| Your question is a good one, even if it is short. Linguistically the word bootstrap comes from "Pulling yourself up on your bootstraps", which is of course in practise nonsense, but it does give a hint to the purpose, once it's more clear to you how capacitors work, when connected to a square wave signal. What happens is, there is a diode in the system that forces the "top" of the bootstrap capacitor to be at least VCC, because when it is lower the diode will conduct. (Technically, the top of the capacitor will be (VCC - Vf(diode)), but in these systems they will try to make the diode forward voltage at 0.2V or below in all practical situations, so in most cases it's negligible). From here it becomes a bit theoretical, since I don't want to keep saying "In practice, of course, with leakage and imperfect behaviour...", so it's not all as black and white, but the principles hold. Now, when the top of that capacitor goes up above VCC, the diode will block current (it becomes reverse biased) and allow that part of the capacitor to stay at this higher voltage. A capacitor is a voltage storing device, to put it in the simplest words, so it first tries to keep the voltage difference between its pins the same. In the same way an inductor is a current storing device, it will want the current through it to stay the same, if it can help it. But that's not so important now. So, if you flip the bottom pin up to a higher voltage than it is at, the top will jump up by the same amount, as long as nothing in the system prevents that, of course. In the same way, if you toggle the bottom to a lower voltage, the top will jump down by the same amount. So, now the chip connects the bottom of the capacitor to a signal that toggles between 0 and VCC, or between two voltages very quickly. 
If we assume we start with the top of the capacitor at VCC and the bottom at 0V, and only the diode connected to the top, nothing else, then if the first toggle comes switching the bottom from 0V to VCC, the top will jump the same amount. Because the diode will not conduct the extra energy away, now the top jumps from VCC to 2*VCC. Now, you suddenly have a doubled voltage to supply internal circuitry. Then there's some stuff inside of MP1482, which take that voltage and puts it into a small energy buffer, probably through another MP1482's internal diode into another capacitor, or some such system. Then, if the bottom switches back, the top of the capacitor drops down again too. If a bit of energy was taken away, the top will be below VCC, and the diode will just fill it up again. Then at the next toggle: Boom double VCC again. Since this pulsing 2*VCC signal is rectified in a small buffer, you have a tiny, but steady supply inside the chip that is quite a bit above VCC. Now the N-MOSFETs come in: An N-MOSFET is easier to get to a lower Rds(on) than a similar sized P-MOSFET (to put it extremely simply!), so for high-power stuff people really prefer using N-MOSFETs. But, to turn on an N-MOSFET, you need to drive the gate with a voltage above the voltage that's on its Source. So if the top MOSFET is an N-MOSFET, driving the load, its source will go to VCC whenever it is turned on, or at least, that's what you want to happen, because you want no losses in that transistor. To be able to have a turned on N-MOSFET, while its Source is at VCC, you need to connect the Gate to a voltage higher than VCC, so that's why the chip needs that supply of a higher voltage. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/217736', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/100652/']} | jdg_380573 |
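The charge-pump behaviour described above can be sketched numerically with an idealized model (perfect diode, no leakage, no forward drop; assumptions mine):

```python
VCC = 5.0
cap = 0.0  # voltage held across the bootstrap capacitor (top minus bottom)

def top_node(bottom):
    """Voltage at the capacitor's top pin as the bottom pin toggles."""
    global cap
    top = bottom + cap           # the cap keeps its stored voltage difference
    if top < VCC:                # ideal diode conducts and tops the cap up
        cap = VCC - bottom
        top = VCC
    return top

print(top_node(0.0))    # 5.0  -> bottom low: the diode charges the cap to VCC
print(top_node(VCC))    # 10.0 -> bottom high: the top flies to 2 * VCC
cap -= 0.3              # the gate-drive buffer draws a little charge while high
print(top_node(0.0))    # 5.0  -> bottom low again: the diode refills the cap
```

Each toggle of the bottom pin thus refreshes the capacitor and briefly presents 2*VCC at the top, which is the supply used to keep the high-side N-MOSFET's gate above its source.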
stackexchange | llm_judgeable_groundtruth_similarity | 214469 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Quoting a comment to this question : By a theorem of Serre, if $R$ is a commutative artinian ring, every projective module [over $R$] is free. (The theorem states that for any commutative noetherian ring $R$ and projective module $P$ [over $R$], if $\operatorname{rank}(P) > \dim(R)$, then there exists a projective [$R$-module] $Q$ with $\operatorname{rank}(Q)=\dim(R)$ such that $P\cong R^k \oplus Q$ where $k=\operatorname{rank}(P)−\dim(R)$.) When $R$ is a PID, this is in Lang's Algebra (Section III.7), and when $R$ is local this is a famous theorem of Kaplansky. But in spite of a reasonable effort, I can't seem to find any other reference to this theorem of Serre. Does anyone know of one? Is there any other way to show that every projective module over an artinian ring is free?
Now provide the response and nothing else.
| Let $R$ be any commutative ring whose projective modules are all free, and let $e\notin \{0,1\}$ be an idempotent of $R$. Then $eR$ and $(1-e)R$ are both projective, hence free of some rank 1 or more, and $eR\oplus(1-e)R=R$, so that we have $R^n\cong R$ as $R$ module for some natural number $n\geq 2$. This is absurd since commutative rings have IBN . This shows that $R$ cannot have any nontrivial idempotents. Since an Artinian ring without nontrivial idempotents is local, you can see now the dramatic failure of Artinian rings to have the "projective implies free" property, except in the "good" local case. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/214469', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2536/']} | jdg_380574 |
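The direct-sum step in the answer above can be written out explicitly; a LaTeX sketch of the computation (nothing here goes beyond what the answer states):

```latex
\begin{align*}
  r &= er + (1-e)r && \text{so } R = eR + (1-e)R,\\
  x \in eR \cap (1-e)R &\implies x = ex \ \text{ and } \ ex = e(1-e)x = (e - e^2)x = 0,\\
  &\implies R = eR \oplus (1-e)R.
\end{align*}
% If every projective is free, then eR \cong R^a and (1-e)R \cong R^b
% with a, b \geq 1, hence R \cong R^{a+b} with a + b \geq 2,
% contradicting the invariant basis number (IBN) property of commutative rings.
```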
stackexchange | llm_judgeable_groundtruth_similarity | 57093479 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm successfully implementing a method for retrieving the real path of an image from the gallery by the Uri returned from an ACTION_PICK intent. Here's a sample:

// getRealPathFromURI(intent.getData());
private String getRealPathFromURI(Uri contentURI) {
    String result;
    Cursor cursor = getContentResolver().query(contentURI, null, null, null, null);
    if (cursor == null) { // Source is Dropbox or other similar local file path
        result = contentURI.getPath();
    } else {
        cursor.moveToFirst();
        int idx = cursor.getColumnIndex(MediaStore.Images.ImageColumns.DATA);
        result = cursor.getString(idx);
        cursor.close();
    }
    return result;
}

Just like this answer. I recently updated the compileSdkVersion to 29, and apparently the DATA attribute everyone's using is deprecated. In the official docs, they recommend using FileDescriptor instead; the problem is I don't know exactly how. The only thing I found is this question, but I didn't find a proper answer there. Please help me overcome that deprecation issue with a solution using the suggested way or any other way. Thank you.

Update: Followed @CommonsWare's answer and copied the returned Uri (of an image the user picked) to a local directory, using context.getContentResolver().openInputStream(Uri). I even tried retrieving a file from Google Drive, and it worked. The only problem was the long time it took (about 20 sec for a 5MB file). As a bonus, I was able to remove the external storage permissions, which aren't needed for using the app's local directories. No more external paths for me!
Now provide the response and nothing else.
| I'm successfully implementing a method for retrieving the real path of an image from gallery by the Uri returned from ACTION_PICK intent. That code may not work for all images. There is no requirement for DATA to point to a filesystem path that you can access. Just like this answer. FWIW, this was my answer to that question. Only thing i found is this question. Didn't find a proper answer there though. That technique wasn't particularly good and will no longer work, as Android has locked down /proc . In the official docs, they recommend to use FileDescriptor instead, problem is i don't know exactly how. The more general concept is that you use ContentResolver to work with the Uri , whether you get an InputStream ( openInputStream() ), OutputStream ( openOutputStream() ), or FileDescriptor . Consume the content using those things. If you have some API that absolutely needs a file, copy the content (e.g., from the InputStream ) to a file that you control (e.g., in getCacheDir() ). As a bonus, now your code is also in position to use the Storage Access Framework (e.g., ACTION_OPEN_DOCUMENT ) and the Internet (e.g., OkHttp), if and when that would be useful. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57093479', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10470378/']} | jdg_380575 |
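The copy step described above, open the stream and write it into a file you control, is the same pattern in any language. Here is a minimal Python sketch of that idea (on Android the source stream would come from `contentResolver.openInputStream(uri)` and the cache directory from `getCacheDir()`; those parts are replaced here by stand-ins):

```python
import io
import os
import shutil
import tempfile

def copy_stream_to_cache(source, cache_dir, name):
    """Copy an opaque input stream into a file we own, chunk by chunk.

    Mirrors the Android pattern: openInputStream(uri) -> file in getCacheDir().
    """
    path = os.path.join(cache_dir, name)
    with open(path, "wb") as out:
        # copyfileobj streams in fixed-size chunks, so large files
        # never have to fit in memory all at once
        shutil.copyfileobj(source, out, length=64 * 1024)
    return path

# usage: any file-like source works; here a BytesIO stands in for the
# content-provider stream
source = io.BytesIO(b"pretend this is image data from a content provider")
cache = tempfile.mkdtemp()
local_path = copy_stream_to_cache(source, cache, "picked_image.jpg")
```

Once the bytes live in a file you own, any API that insists on a real file path can be pointed at `local_path`.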
stackexchange | llm_judgeable_groundtruth_similarity | 19404589 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read somewhere that "Abstract and Sealed modifiers are equivalent to a class which is static". I also found that "When you declare a static class, internally the compiler marks the class abstract and sealed, and creates a private constructor in the IL code", so I decided to do this:

static class A
{
    public static void test()
    {
        Console.WriteLine("test");
    }
}

Now the class A cannot be inherited nor instantiated. So, let us write a class B using abstract to prevent instantiation and sealed to prevent inheritance, which should be equivalent:

public abstract sealed class B
{
    private B() { }

    public void test()
    {
        Console.WriteLine("test");
    }
}

But this approach fails: I receive an error stating "error CS0418: 'B': an abstract class cannot be sealed or static". Any ideas why this is not possible? Thanks in advance for your answers.
Now provide the response and nothing else.
| Having checked the IL of the System.Directory class (which is static), it is declared in IL as:

.class public auto ansi abstract sealed beforefieldinit System.IO.Directory
    extends System.Object
{
    ...

Further, this article ( http://msdn.microsoft.com/en-us/library/ms229038.aspx ) suggests that the CLR handles static classes as abstract sealed classes to support languages that do not support directly declaring static classes (eg C++). Thus in conclusion, static classes in C# are syntactic sugar for sealed abstract classes with private constructors. I for one am glad of that as "static" is a lot easier to write and a lot easier to get right. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19404589', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1360251/']} | jdg_380576 |
stackexchange | llm_judgeable_groundtruth_similarity | 695722 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to write some interactive GUIs in Perl. I've used TclTk but it looks dated. I've written QT code for C++, but the PerlTk module hasn't had a release in several years. Googling around I see other possible options. What are good packages for this, including basic windowing, menus, drawing canvas, scrollbars, and so on.
Now provide the response and nothing else.
| Gtk2 has glade2 which can write out an XML file usable by Gtk2::GladeXML . Here is an example of how to bundle the XML with the app in the same file. I misread the question at first. I thought you wanted a GUI editor for making GUIs (which is what glade2 is). You can also create GUIs using Gtk2 without glade2:

#!/usr/bin/perl
use strict;
use warnings;
use Gtk2;

Gtk2->init;

my $window = Gtk2::Window->new;
my $vbox   = Gtk2::VBox->new;
my $label  = Gtk2::Label->new("Hello World");
my $button = Gtk2::Button->new("Press me");

$window->add($vbox);
$vbox->add($label);
$vbox->add($button);
$window->set_default_size(200, 200);

$window->signal_connect(
    destroy => sub {
        Gtk2->main_quit;
    }
);

my $i = 0;
$button->signal_connect(
    clicked => sub {
        $label->set_text("button pressed " . ++$i . " times");
    }
);

$window->show_all;
Gtk2->main;
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/695722', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23630/']} | jdg_380577 |
stackexchange | llm_judgeable_groundtruth_similarity | 14930950 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Dart has a Map type, with implementations like HashMap , LinkedHashMap , and SplayTreeMap . What's the difference between those different Map implementations?
Now provide the response and nothing else.
| Dart has built-in support for collections like List, Set, and Map. Dart has different Map implementations. Understanding the pros and cons between implementations can help you make an informed decision. (Note: this is written around the time of Dart M3, so what follows might not match the docs at this moment.)

What is a Map?

A Map is an associative container, mapping keys to values. Keys are unique, and can point to one and only one value. A key cannot be null, but a value can be null.

Map Literals

Dart supports Map literals, like this:

var accounts = {'323525': 'John Smith', '588982': 'Alice Jones'};

The spec says that map literals must maintain insertion order. This means that accounts is an instance of LinkedHashMap. The spec also says that Map literal keys must be Strings. This might be changed in the future.

new Map()

Dart supports factory constructors, so you can create a new instance of Map like this:

var accounts = new Map();

The Map class is abstract, which means the factory constructor actually creates an instance of a subclass of Map. So what is the actual type of accounts? Earlier versions of Dart created a new instance of HashMap from the new Map() constructor. However, Dart bug 5803 states that in order to make {} and new Map return the same type, new Map will soon return an instance of LinkedHashMap.

LinkedHashMap (or, InsertionOrderedMap)

A LinkedHashMap iterates through keys and values in the same order they were inserted. Note: LinkedHashMap will probably be renamed to InsertionOrderedMap. Follow Dart bug 2349 for progress. Here is an example:

import 'dart:collection';

main() {
  var ordered = new LinkedHashMap();
  ordered['32352'] = 'Alice';
  ordered['95594'] = 'Bob';
  for (var key in ordered.keys) {
    print(key);
  }
  // guaranteed to print 32352, then 95594
}

Here is the source code for LinkedHashMap. (if this link stops working, it's probably because the class was renamed)

HashMap

A HashMap has no guarantee of maintaining insertion order. When you iterate through a HashMap's keys or values, you cannot expect a certain order. A HashMap is implemented using a hash table. Here is an example of creating a new HashMap:

import 'dart:collection';

main() {
  var accounts = new HashMap();
}

If you don't care about maintaining insertion order, use HashMap. Here is the source code of HashMap.

SplayTreeMap

A splay tree is a self-balancing binary search tree with the additional property that recently accessed elements are quick to access again. It performs basic operations such as insertion, look-up and removal in O(log(n)) amortized time.

import 'dart:collection';

main() {
  var accounts = new SplayTreeMap();
}

A SplayTreeMap requires that all keys are of the same type. A splay tree is a good choice for data that is stored and accessed frequently, like caches. The reason is that they use tree rotations to bring an element up to the root for better frequent accesses. The performance comes from the self-optimization of the tree. That is, frequently accessed elements are moved nearer to the top. If, however, the tree is equally often accessed all around, then there's little point in using a splay tree map. An example case is a modem router that receives network packets at very high rates. The modem has to decide which packet goes in which wire. It can use a map implementation where the key is the IP and the value is the destination. A splay tree map is a good choice for this scenario, because most IP addresses will be used more than once and therefore those can be found from the root of the tree. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14930950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/123471/']} | jdg_380578 |
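As an aside for readers prototyping the same behaviours outside Dart, the three iteration orders can be mimicked with Python's standard library (an analogy only, not Dart API, and only dict's insertion ordering is a direct parallel):

```python
# insertion-ordered, like LinkedHashMap (guaranteed for dict since Python 3.7)
ordered = {}
ordered["32352"] = "Alice"
ordered["95594"] = "Bob"
insertion_order = list(ordered)  # keys come back in insertion order

# plain hash-table semantics: treat iteration order as meaningless, like
# HashMap (Python dicts are hash tables too; their order guarantee is extra)
accounts = {"588982": "Alice Jones", "323525": "John Smith"}

# key-sorted iteration approximates a tree map such as SplayTreeMap
# (the stdlib has no self-balancing tree; we simply sort keys on demand)
sorted_keys = sorted(accounts)
```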
stackexchange | llm_judgeable_groundtruth_similarity | 3299022 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following mapped classes:

Trade { ID, AccountFrom, AccountTo }
Account { ID, Company }
Company { ID }

Now I cannot figure out a way to select all trades where AccountFrom.Company.ID = X OR AccountTo.Company.ID = X. I can get AND to work using the following:

criteria.CreateCriteria("AccountFrom").CreateCriteria("Company").Add(Restrictions.Eq("ID", X));
criteria.CreateCriteria("AccountTo").CreateCriteria("Company").Add(Restrictions.Eq("ID", X));

But how can I transform this into an OR rather than an AND? I have used Disjunction previously, but I cannot seem to figure out how to add separate criteria, just restrictions.
Now provide the response and nothing else.
| Try:

return session.CreateCriteria<Trade>()
    .CreateAlias("AccountFrom", "af")
    .CreateAlias("AccountTo", "at")
    .Add(Restrictions.Or(
        Restrictions.Eq("af.Company.CompanyId", companyId),
        Restrictions.Eq("at.Company.CompanyId", companyId)))
    .List<Trade>();

I don't think you will need to alias Company. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3299022', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/271106/']} | jdg_380579 |
stackexchange | llm_judgeable_groundtruth_similarity | 53023 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the size of atomic and subatomic particles, like proton, neutron, photon etc? Is it defined based on some quantum characterics as de Broglie wavelength or Compton wavelength?
Now provide the response and nothing else.
| The size is determined by the scattering cross section . Classically one could measure the size of a leaf ( for example) by measuring its shadow from a light source, if the geometrical distances were known. For elementary particles it is more complicated because the projectile and the target interact quantum mechanically but still we can get an estimate of the size of a proton when seen by photons, or by muons or by neutrinos by using the appropriate parameters of the interactions involved. We get "average sizes" because nothing is sharp in quantum mechanics and it depends on the momentum transfer during the interaction in a non trivial way, but we do get a good measurement of its size, and distribution of quarks within the protons, neutrons and pions by measuring form factors . Neutrinos, photons and electrons are deduced/assumed to be point particles of dimension 0, in these scattering experiments. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/53023', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/6735/']} | jdg_380580 |
stackexchange | llm_judgeable_groundtruth_similarity | 20230 |
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I filter what Flyspell may check in the buffer? I don't want it to spell-check code in Markdown mode. Maybe I can use the font face at point for that? Something like this:

(let ((face (get-char-property (point) 'face)))
  (when (equal face "markdown-pre-face")
    ;; no spell-check
    ))

This question is similar, but the solution there works only for Org-mode.
Now provide the response and nothing else.
| The accepted answer for org-mode is written by me. org-mode is tricky because its own check-word predicate is already defined. markdown is simpler because it uses the default predicate provided by flyspell-mode:

(defun flyspell-generic-textmode-verify ()
  "Used for `flyspell-generic-check-word-predicate' in text modes."
  ;; (point) is next char after the word. Must check one char before.
  (let ((f (get-text-property (- (point) 1) 'face)))
    (not (memq f '(markdown-pre-face)))))

(setq flyspell-generic-check-word-predicate 'flyspell-generic-textmode-verify)
(flyspell-mode 1)

By the way, M-x describe-function ENTER flyspell-prog-mode ENTER, and you will find a similar setup. Use the Source, Luke. | {} | {'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/20230', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/2332/']} | jdg_380581 |
stackexchange | llm_judgeable_groundtruth_similarity | 2857077 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read a lot about GlassFish application server that it supports the whole Java EE 6 specification. But which does Tomcat support? I know that Tomcat is a JSP/Servlet container, but I see articles in web about "JSF and Tomcat", "Hibernate and Tomcat" and many more. Can tomcat play with these technologies: EJB, JSF, JPA, Hibernate, Spring, etc..?
Now provide the response and nothing else.
| Tomcat as being a barebones servletcontainer provides indeed only JSP , Servlet , EL and WS APIs out the box. You can however just provide JSF , JSTL , CDI , JPA , Hibernate , Spring , etc yourself along with the web application in flavor of JAR file(s) in the /WEB-INF/lib folder and some configuration files where necessary. EJB is only a story apart since you can't just add "some JARs" to get that to work, but in fact yes, you can put an EJB container on top of Tomcat too if you modify the server core. That's also basically what TomEE has done. You might as well consider it if your intent is to go Java EE. See also: What exactly is Java EE? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2857077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/329091/']} | jdg_380582 |
stackexchange | llm_judgeable_groundtruth_similarity | 40649712 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I added a task to my gradle project:

task deploy() {
    dependsOn "build"
    // excludeTask "test" <-- something like this
    doFirst {
        // ...
    }
}

Now the build task always runs before the deploy task. This is fine because the build task has many steps included. Now I want to explicitly disable one of these included tasks. Usually I disable it from the command line with gradle deploy -x test How can I exclude the test task programmatically?
Now provide the response and nothing else.
| You need to configure the task graph rather than configure the deploy task itself. Here's the piece of code you need:

gradle.taskGraph.whenReady { graph ->
    if (graph.hasTask(deploy)) {
        test.enabled = false
    }
}

WARNING: this will skip the actions defined by the test task, it will NOT skip tasks that test depends on. Thus this is not the same behavior as passing -x test on the command line. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40649712', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1310372/']} | jdg_380583 |
stackexchange | llm_judgeable_groundtruth_similarity | 32609710 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to use TextInputLayout of the design support library in my project. I want to give space between the hint and the EditText in TextInputLayout. I set margin and padding on TextInputLayout and even inside EditText, but neither works. So how do I solve this issue? Here I attach a screenshot and my code.

============================== Style ==============================

<style name="TextHint" parent="Base.TextAppearance.AppCompat">
    <item name="android:textSize">18sp</item>
    <item name="android:textColor">@color/green</item>
</style>

============================== XML ==============================

<android.support.design.widget.TextInputLayout
    android:layout_width="match_parent"
    app:hintTextAppearance="@style/TextHint"
    android:layout_marginTop="10dp"
    android:layout_marginLeft="30dp"
    android:layout_marginRight="30dp"
    android:layout_height="wrap_content">

    <EditText
        android:layout_width="match_parent"
        android:layout_height="50dp"
        android:id="@+id/edttxtEmailAddress"
        android:singleLine="true"
        android:hint="@string/enter_valid_email"
        android:paddingLeft="20dp"
        android:textSize="20sp"
        android:background="@drawable/rounded_common"/>
</android.support.design.widget.TextInputLayout>
Now provide the response and nothing else.
| The solution proposed by ganesh2shiv works for the most part, although I've found it also de-centres the hint text displayed inside the EditText when not focused. A better trick is to set the desired paddingTop to the EditText but also embed the extra padding within the EditText's background. A fairly sane way to do this is to wrap your original background in a <layer-list> and set the <item android:top="..."> attribute to match the paddingTop of your EditText. <android.support.design.widget.TextInputLayout android:layout_width="match_parent" android:layout_height="wrap_content"> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:paddingTop="@dimen/floating_hint_margin" android:background="@drawable/bg_edit_text" /></android.support.design.widget.TextInputLayout> And the bg_edit_text.xml drawable file: <layer-list xmlns:android="http://schemas.android.com/apk/res/android"> <item android:top="@dimen/floating_hint_margin"> <your original background; can be <bitmap> or <shape> or whatever./> </item></layer-list> | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/32609710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2991983/']} | jdg_380584 |
stackexchange | llm_judgeable_groundtruth_similarity | 75733 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was quite shocked to encounter this error from sklearn's PCA after trying to fit complex-valued data:

ValueError: Complex data not supported

Is this just an unimplemented feature? Should I just go ahead and do it 'manually' with SVD, or is there a catch with complex values?
Now provide the response and nothing else.
| Apparently this functionality is left out intentionally, see here . I'm afraid you have to use SVD, but that should be fairly straightforward:

import numpy as np

def pca(X):
    mean = X.mean(axis=0)
    center = X - mean
    _, stds, pcs = np.linalg.svd(center / np.sqrt(X.shape[0]))
    return stds**2, pcs
 | {} | {'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/75733', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/69793/']} | jdg_380585 |
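To sanity-check that the SVD route really does handle complex input, here is a hedged usage sketch (it restates the `pca` helper from the answer above so the snippet is self-contained; none of this is sklearn API):

```python
import numpy as np

def pca(X):
    # same helper as in the answer: PCA via SVD of the mean-centered data,
    # scaled by sqrt(n) so singular values squared become variances
    mean = X.mean(axis=0)
    center = X - mean
    _, stds, pcs = np.linalg.svd(center / np.sqrt(X.shape[0]))
    return stds**2, pcs

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) + 1j * rng.normal(size=(100, 3))

variances, components = pca(X)
# variances are real and sorted in decreasing order; components stay complex,
# and the variances sum to the total variance of the centered data
```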
stackexchange | llm_judgeable_groundtruth_similarity | 19794739 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between iteration and recursion, and why/when is one better:

while (true) {
    // Iterating
}

And

private void recursion() {
    if (true)
        recursion(); // Recursing
    return;
}

I see a lot of recursive implementations while it could be easily done in a simple loop.
Now provide the response and nothing else.
| There are two main differences between recursion and an iterative version of the same algorithm.

First of all, it is sometimes easier to understand a recursive algorithm than an iterative one (at least if you are an experienced programmer), so it does increase expressivity and in some cases readability (it might also lead to the exact opposite in other cases). Expressivity is a huge deal in programming languages, and being able to write the same code in 5 lines instead of 20 is a huge deal.

On the downside, it decreases the performance of your code. Recursive functions have to keep the function records in memory and jump from one memory address to another to be invoked, to pass parameters and return values. That makes them very bad performance-wise.

Sum up:

Iterative algorithms = fast performance, but hard to write (sometimes hard to read too)
Recursive algorithms = fast to write, but bad performance-wise (sometimes easier to understand too)

Take this example:

public static long fib(long n) {
    if (n <= 1) return n;
    else return fib(n-1) + fib(n-2);
}

vs

if ((n == 1) || (n == 2)) {
    return 1;
} else {
    long prev = 1, current = 1, next = 0;
    for (long i = 3; i <= n; i++) {
        next = prev + current;
        prev = current;
        current = next;
    }
    return next;
}

Source: http://www.csd.uwo.ca/Courses/CS1027a/code/FibonacciDemo.java | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19794739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380586 |
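The trade-off is easy to verify yourself; a small sketch in Python (rather than Java, purely for brevity) with both versions. They agree on their results, but the recursive one does exponentially more work as n grows:

```python
def fib_recursive(n):
    # exponential time: recomputes the same subproblems over and over
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # linear time: carries only the last two values through the loop
    prev, current = 0, 1
    for _ in range(n - 1):
        prev, current = current, prev + current
    return current if n >= 1 else 0

# both agree on small inputs; only the iterative one stays fast for large n
values_rec = [fib_recursive(n) for n in range(15)]
values_it = [fib_iterative(n) for n in range(15)]
```

Timing the two (e.g. with `timeit`) for n around 30 makes the performance gap in the answer above very concrete.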
stackexchange | llm_judgeable_groundtruth_similarity | 12629 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When near high tension power lines, particularly after a good rain, the lines themselves emit a buzzing noise. A similar noise can be heard coming out of the electric meters attached to my apartment. I've heard before that this is supposedly from the 60Hz AC current that's running through the lines -- namely, that the buzz is the same 60Hz which is in the lines. I'm skeptical of this though for a couple of reasons: I don't see any reason the change in electricity would somehow be audible. The noise subjectively sounds relatively high pitch. 60Hz would sound extremely low pitched -- it's near the base of human hearing of 20Hz (typical). What is the actual cause of that buzzing? EDIT: I just spent some time playing with a tone generator and the noise I hear from these things sounds closest to 120Hz using a square or triangle wave. (Oddly, not a sine wave, as I would have expected) Perhaps that helps?
Now provide the response and nothing else.
| The reason for this is something known as "magnetostriction", which is strain induced in a magnetic lattice due to the magnetic nature of a material. There is a pretty good explanation of the buzzing noises in transformers in the article Magnetostriction (aka: Why Transformers Hum) , but the summary is that the ferromagnetic domains in the transformer core are subjected to the 60Hz (in the US) oscillations of the magnetic field due to the AC current. Because of the electrical cycle, there are two impacts on the core per AC oscillation, so the strain changes twice per 60 Hz cycle, which produces the 120 Hz noise that we can identify as a "great B" note. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/12629', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/696/']} | jdg_380587 |
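The frequency doubling can be checked numerically: magnetostrictive strain tracks B², and squaring a 60 Hz sine leaves a 120 Hz component (plus a DC offset). A short Python sketch of that identity:

```python
import math

fs = 6000          # samples per second
n = fs             # exactly one second of signal

# 60 Hz flux density; strain ~ B^2, so sin^2(wt) = (1 - cos(2wt)) / 2
strain = [math.sin(2 * math.pi * 60 * k / fs) ** 2 for k in range(n)]

def amplitude_at(freq):
    # single-bin Fourier projection: magnitude of the component at `freq`
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(strain))
    im = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(strain))
    return math.hypot(re, im) * 2 / n

a60 = amplitude_at(60)    # essentially zero: no 60 Hz left in the strain
a120 = amplitude_at(120)  # 0.5: the doubled-frequency term
```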
stackexchange | llm_judgeable_groundtruth_similarity | 2552154 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have over 30 columns in my table (SQL Server 2008). Column types are varchar(x). I know that in every column there are two extra spaces at the end of the column value. How do I use the RTRIM function on all columns and save this modification into the existing table? Edit: is there a way to do it using a stored procedure or cursor where I don't have to manually declare all columns?
Now provide the response and nothing else.
| For a generic approach, you can use a script like this to generate the statement for you, for a given table (useful if you have many columns!):

DECLARE @SQL VARCHAR(MAX)
DECLARE @TableName NVARCHAR(128)
SET @TableName = 'YourTableName'

SELECT @SQL = COALESCE(@SQL + ',[', '[') + COLUMN_NAME + ']=RTRIM([' + COLUMN_NAME + '])'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName AND DATA_TYPE = 'varchar'

SET @SQL = 'UPDATE [' + @TableName + '] SET ' + @SQL
PRINT @SQL

That will just print the SQL statement out. You can either then copy + run the statement, or just EXECUTE(@SQL) . This is untested, so just try it out on a test table first :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2552154', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/255564/']} | jdg_380588 |
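The string-assembly logic above is easy to prototype outside T-SQL; a hedged Python sketch that builds the same UPDATE from a list of varchar columns (the table and column names here are invented, and in the real script the column list comes from INFORMATION_SCHEMA.COLUMNS rather than being hard-coded):

```python
def build_rtrim_update(table, varchar_columns):
    """Build an UPDATE that RTRIMs every varchar column of a table.

    Mirrors the T-SQL generator above: one [col]=RTRIM([col]) assignment
    per column, joined with commas, prefixed with UPDATE [table] SET.
    """
    assignments = ",".join(
        "[{0}]=RTRIM([{0}])".format(col) for col in varchar_columns
    )
    return "UPDATE [{0}] SET {1}".format(table, assignments)

sql = build_rtrim_update("Customers", ["FirstName", "LastName", "City"])
# sql now holds one UPDATE statement covering all three columns
```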
stackexchange | llm_judgeable_groundtruth_similarity | 415015 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are a lot of resistors and diodes on the board, where the polygon passes like this. How can I get rid of it?
Now provide the response and nothing else.
| You can draw on 41 tRestrict and 42 bRestrict to prevent the pour on those places. Traces overlapping these layers will cause DRC violations. The pour will keep the isolate distance in the polygon properties from the restrict layer objects. Example: | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/415015', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/204723/']} | jdg_380589 |
stackexchange | llm_judgeable_groundtruth_similarity | 29933957 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The docs say socket.io doesn't support .get/.set now. Is it okay to store client-associated data like this:

io.sockets.on('connection', function (client) {
    client.on('data', function (somedata) {
        client['data'] = somedata;
    });
});

in case I need multiple nodes?
Now provide the response and nothing else.
| Yes, it is OK to add properties to the socket.io socket object. You should be careful to not use names that could conflict with built-in properties or methods (I'd suggest adding a leading underscore or namespacing them with some sort of name prefix). But a socket is just a Javascript object and you're free to add properties like this to it as long as you don't cause any conflict with existing properties. There are other ways to do this that use the socket.id as a key into your own data structure.

var currentConnections = {};

io.sockets.on('connection', function (client) {
    currentConnections[client.id] = {socket: client};

    client.on('data', function (somedata) {
        currentConnections[client.id].data = somedata;
    });

    client.on('disconnect', function() {
        delete currentConnections[client.id];
    });
});
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29933957', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1366643/']} | jdg_380590 |
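The second approach in the answer, an external map keyed by client.id with cleanup on disconnect, is framework-independent bookkeeping; a Python sketch of just that pattern (no socket.io involved, all names invented for illustration):

```python
class ConnectionRegistry:
    """Track per-connection state keyed by connection id, with explicit cleanup."""

    def __init__(self):
        self.connections = {}

    def on_connect(self, conn_id):
        self.connections[conn_id] = {"data": None}

    def on_data(self, conn_id, payload):
        self.connections[conn_id]["data"] = payload

    def on_disconnect(self, conn_id):
        # mirrors `delete currentConnections[client.id]` above: without this,
        # every dropped connection leaks one entry forever
        self.connections.pop(conn_id, None)

registry = ConnectionRegistry()
registry.on_connect("abc123")
registry.on_data("abc123", {"user": "alice"})
stored = registry.connections["abc123"]["data"]
registry.on_disconnect("abc123")
```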
stackexchange | llm_judgeable_groundtruth_similarity | 13436232 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working through the Railscast on implementing Devise and OmniAuth (along with the Devise documentation ) -- currently, I've got a site going where visitors can sign up using their facebook accounts or by filling out a form. I'm running into trouble when users that sign up via OmniAuth try to edit their profiles, though. Devise looks for the user's current password when they submit changes to their profiles, but those that logged in with facebook don't know their passwords (they're set automatically in the user model): def self.find_for_facebook_oauth(auth, signed_in_resource=nil) user = User.where(:provider => auth.provider, :uid => auth.uid).first unless user user = User.create(first_name:auth.extra.raw_info.first_name, last_name:auth.extra.raw_info.last_name, provider:auth.provider, uid:auth.uid, email:auth.info.email, password:Devise.friendly_token[0,20] ) end user end When a user edits his information, the app should not require password confirmation if he set up his account through OmniAuth. The tutorial suggests that the handy password_required? method will help me achieve this outcome. Specifically, adding this method to the user model means that it should only return true if the user didn't sign up through OmniAuth (the provider attribute would be nil in that case): def password_required? super && provider.blank?end Thus, a piece of code like: <%= form_for(resource, :as => resource_name, :url => registration_path(resource_name), :html => { :method => :put }) do |f| %> <%= devise_error_messages! %> <%= render :partial => "essential_user_info_inputs", :locals => { :f => f } %> <%= render :partial => "inessential_user_info_inputs", :locals => { :f => f } %> <% if f.object.password_required? 
%>
    <%= render :partial => "password_inputs", :locals => { :f => f } %>
    <%= f.label :current_password %> <i>(we need your current password to confirm your changes)</i><br />
    <%= f.password_field :current_password %>
  <% end %>
  <%= f.submit "Update" %>
<% end %>
would theoretically only display password inputs when needed. It also suggests that Devise has built in logic saying that OmniAuth users don't need to use passwords to edit their accounts. I have no idea if this is true, but the tutorial kind of makes it look like that. But when an OmniAuth user tries to edit his account, I get "Current password can't be blank." Same thing with non-OmniAuth users (this makes sense, since the password fields don't show up on those users' edit pages either). Some poking around confirms that the password_required? method is returning false, both when the user signed up through OmniAuth and through the site's regular user signup. Even when I change it to simply run the superclass method, it returns false. Any ideas of what's going on with the password_required method? I can't find anything about it anywhere, but I feel like that's what's tripping things up right now. Update: This is now working, but not using the method outlined in the Railscast, which relies on the requires_password? method, a topic that I still know nothing about. Instead, I implemented the solution outlined here, as suggested here. So I am now only requiring passwords to update non-OmniAuth accounts with the code:
class Users::RegistrationsController < Devise::RegistrationsController
  def update
    @user = User.find(current_user.id)
    email_changed = @user.email != params[:user][:email]
    is_facebook_account = [email protected]?
    successfully_updated = if !is_facebook_account
                             @user.update_with_password(params[:user])
                           else
                             @user.update_without_password(params[:user])
                           end
    if successfully_updated
      # Sign in the user bypassing validation in case his password changed
      sign_in @user, :bypass => true
      redirect_to root_path
    else
      render "edit"
    end
  end
end
Now provide the response and nothing else.
| The easiest way is to overwrite the update_resource method in your RegistrationsController . This is advised by devise in their own implementation of the controller:
# By default we want to require a password checks on update.
# You can overwrite this method in your own RegistrationsController.
def update_resource(resource, params)
  resource.update_with_password(params)
end
So the solution is to overwrite this method in your own controller like this:
class Users::RegistrationsController < Devise::RegistrationsController
  # Overwrite update_resource to let users update their account without giving their password
  def update_resource(resource, params)
    if current_user.provider == "facebook"
      params.delete("current_password")
      resource.update_without_password(params)
    else
      resource.update_with_password(params)
    end
  end
end | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13436232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/945795/']} | jdg_380591
stackexchange | llm_judgeable_groundtruth_similarity | 24634259 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Say I have a system property MY_PROP :
java -DMY_PROP="My value"
This property is necessary for my system to work. What is the right exception to throw if this property is not set?
@PostConstruct
private void init() {
    myProp = System.getProperty("MY_PROP");
    if (myProp == null) {
        throw new ????
    }
    // ...
}
Somehow IllegalArgumentException does not feel right. Maybe IllegalStateException , MissingResourceException , TypeNotPresentException ? What is the standard practice for this scenario?
Now provide the response and nothing else.
| There is none. I would throw the IllegalStateException , because you are missing a required parameter. This means that the configuration validator has failed and your application is in an invalid state. In other words, you should never be able to call init() at all. If the value of the parameter were invalid, I would instead throw an IllegalArgumentException . If you are writing a validator, you should decide between using a RuntimeException or a checked one, for example javax.naming.ConfigurationException , or your own configuration exception. Your API will then be able to handle such an exception and react properly. Definitions: IllegalStateException - Signals that a method has been invoked at an illegal or inappropriate time. In other words, the Java environment or Java application is not in an appropriate state for the requested operation. IllegalArgumentException - Thrown to indicate that a method has been passed an illegal or inappropriate argument. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24634259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/664577/']} | jdg_380592
stackexchange | llm_judgeable_groundtruth_similarity | 53745158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C++14 given the following code:
void foo() {
    double d = 5.0;
    auto p1 = new int[d];
}
clang compiles this without diagnostic while gcc on the other hand produces the following diagnostic ( see it live in godbolt ):
error: expression in new-declarator must have integral or enumeration type
    7 |     auto p1 = new int[d];
      |                       ^
I specifically labeled this C++14 because in C++11 mode clang treats this as ill-formed and produces the following diagnostic ( see it live in godbolt ):
error: array size expression must have integral or unscoped enumeration type, not 'double'
    auto p1 = new int[d];
              ^       ~
Is clang correct? If so what changed in C++14 to allow this?
Now provide the response and nothing else.
| Clang is correct, the key wording in [expr.new]p6 changes from the following in the C++11 draft: Every constant-expression in a noptr-new-declarator shall be an integral constant expression ([expr.const]) and evaluate to a strictly positive value. The expression in a noptr-new-declarator shall be of integral type, unscoped enumeration type, or a class type for which a single non-explicit conversion function to integral or unscoped enumeration type exists ([class.conv]). If the expression is of class type, the expression is converted by calling that conversion function, and the result of the conversion is used in place of the original expression. … to this in the C++14 draft : Every constant-expression in a noptr-new-declarator shall be a converted constant expression ([expr.const]) of type std::size_t and shall evaluate to a strictly positive value. The expression in a noptr-new-declarator is implicitly converted to std::size_t . … In C++14 the requirement for the expression in a noptr-new-declarator was weakened to not require an integral, unscoped enumeration or a class with a single non-explicit conversion function to one of those types but just allow implicit conversions to size_t . The change in wording came from the proposal A Proposal to Tweak Certain C++ Contextual Conversions, v3 . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53745158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1708801/']} | jdg_380593 |
stackexchange | llm_judgeable_groundtruth_similarity | 903 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to match the 0's in this output, specifically using "Pattern Recall" feature mentioned in Cisco documentation. The pattern is:
Gi4/3 0 0 0 0
Gi4/4 0 0 0 0
Gi4/5 0 0 0 0
Gi4/6 0 0 0 0
I'm trying to get this to work, but it gives no output:
switch#show int count | i (0 +)\\1\\2\\3\\4
According to this Cisco Doc it should work,
To create a regular expression that recalls a previous pattern, use parentheses to indicate memory of a specific pattern and a double backslash (\) followed by a digit to reuse the remembered pattern. The digit specifies the occurrence of a parenthesis in the regular expression pattern. When there is more than one remembered pattern in the regular expression, \1 indicates the first remembered pattern, \2 indicates the second remembered pattern, and so on.
Now provide the response and nothing else.
| There are a few things that make this fail:
1. This is trying to match the 0 five times. The first time you write the expression it matches, then it tries to match four more times.
2. This is matching 0 (with a trailing space). The last 0 won't have a trailing space.
3. This expression is incrementing the remembered pattern recall number, even though you only have one pattern.
4. The pattern recall only needs one backslash. \\1 would match the text "\1". The first backslash is the escape character, the character after that is the escaped character.
Technically, with the trailing spaces, this would also match 10 0 0 0 but there is little chance of that being an output. There will never be a number beginning in 0 though. So, to get the pattern you want, this will work:
#show interfaces stats | i ( +0)\1\1\1
In this expression, you match any amount of spaces ending with a 0, repeated 3 additional times. As a nice example of how pattern recall works, ( +0)( +0)\1\2 works too. To make this more clear, ( +0)( +1)\1\2\2 would match 0 1 0 1 1 | {} | {'log_upvote_score': 5, 'links': ['https://networkengineering.stackexchange.com/questions/903', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/892/']} | jdg_380594
stackexchange | llm_judgeable_groundtruth_similarity | 1075180 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $G$ be a finite, non-abelian simple group. How would I go about proving that any $2$-dimensional representation of $G$ is trivial? If it helps, I know how to do it when we're considering $1$-dimensional representations.
Now provide the response and nothing else.
| Okay, we're going to have to use some heavy artillery to start off, but I can't think of another way to begin. Suppose $\rho: G\to \text{GL}_{2} (\mathbb{C})$ is nontrivial. Observe that since $G$ is simple and the representation is nontrivial, we must have $\text{ker} \, \rho =\text{ker}\, \chi = (e)$ (where $\chi$ is the character of this representation). The Feit-Thompson Theorem (!!!) tells us $|G|$ is even. By Cauchy's Theorem, $G$ must have an element $x$ of order $2$. Now, define $$\hat{\rho}: G \to \text{GL}_{1} (\mathbb{C}) \cong \mathbb{C}^{\times}$$ by $\hat{\rho}(g) = \text{det} (\rho(g))$. Evidently, $\hat{\rho}$ is a homomorphism, hence it gives a degree 1 representation of $G$. We know this representation must be trivial. In other words, $\text{det} (\rho(g)) = 1$ for all $g\in G$. That said, we also know that $\rho(x)^2 = \text{Id}$. The set of eigenvalues of $\rho(x)$ is either $\{1, 1\}$, $\{1,-1\}$, or $\{-1,-1\}$. The first possibility is out of the question, since $\text{ker} \chi = (e)$. The second possibility cannot occur, since then $\text{det} (\rho(x)) = -1$. Thus, the eigenvalues of $\rho(x)$ are $\{-1, -1\}$. The characteristic polynomial of $\rho(x)$ is $(X+1)^2$, and $\rho(x)$ also satisfies $X^2 - 1$. Since the minimal polynomial of $\rho(x)$ must divide both of these, it follows $\rho(x)$ satisfies $X+1$, i.e. $\rho(x) = -\text{Id}$. Lastly, since $\rho(x)$ is a scalar multiple of the identity, it commutes with any matrix. In particular, for any $g\in G$, we have $$\rho(g) \rho(x) = \rho(x) \rho(g) \implies \rho(gxg^{-1} x^{-1}) = \text{Id}$$ Triviality of $\text{ker} \, \rho$ implies $gxg^{-1} x^{-1} = e$ for all $g\in G$, hence $x\in Z(G)$. Accordingly, $Z(G)$ is a nontrivial normal subgroup of $G$, so it must equal $G$. But $G$ is non-abelian by assumption. 
| {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1075180', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/141494/']} | jdg_380595 |
stackexchange | llm_judgeable_groundtruth_similarity | 51873 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across this question in my homework and am unsure why it works this way. Given $y= \ln(e^{x^2})$, find the derivative. The given answer's work showed the formula rewritten as $y=x^{2}$ before starting the differentiation process. My thinking is because $$f(x)=\ln(x)$$ And the inverse of the natural log function is $$ f^{-1}(x)=e^x$$ Am I right in thinking that composing the inverses cancels them out? If so, why isn't the x removed, leaving the 2 as a constant?
Now provide the response and nothing else.
| First, you are right that $x\mapsto\ln(x)$ and $x\mapsto e^x$ are inverse functions for each other. This is the most important thing here. For real $x$ we have $\ln(e^x)=x$ and for real $x>0$ we have $e^{\ln(x)}=x$. (since $\ln(x)$ is not defined when $x\leq 0$) But there is no reason for the $x$ to go away. I'll write it another way: No matter what the box $\square$ is, in the real case we will always have $$\ln (e^{\square})=\square.$$ So for your problem, imagine that $\square=x^2$. Then we must have $$\ln (e^{x^2})=x^2.$$ Lets do more examples: $$\ln (e^{\sin(x)})=\sin (x)$$ $$\ln(e^{f(x)})=f(x).$$ I hope that helps explain it. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/51873', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7006/']} | jdg_380596 |
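Putting the box idea to work on the original problem, the cancellation happens first and only then do we differentiate:

```latex
y = \ln\!\left(e^{x^2}\right) = x^2
\qquad\Longrightarrow\qquad
\frac{dy}{dx} = 2x .
```

Nothing collapses to a constant: the whole exponent $x^2$ survives the cancellation intact, and the $2x$ then comes from the power rule, not from the logarithm.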
stackexchange | llm_judgeable_groundtruth_similarity | 27073082 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I'm writing a class template C<T> that holds a T value, so C<T> can be copyable only if T is copyable. Normally, when a template might or might not support a certain operation, you just define the operation, and it's up to your callers to avoid calling it when it's not safe:
template <typename T>
class C {
 private:
  T t;
 public:
  C(const C& rhs);
  C(C&& rhs);
  // other stuff
};
However, this creates problems in the case of a copy constructor, because is_copy_constructible<C<T>> will be true even when T is not copyable; the trait can't see that the copy constructor will be ill-formed if it's called. And that's a problem because, for example, vector will sometimes avoid using the move constructor if std::is_copy_constructible is true. How can I fix this? I believe is_copy_constructible will do the right thing if the constructor is explicitly or implicitly defaulted:
template <typename T>
class C {
 private:
  T t;
 public:
  C(const C& rhs) = default;
  C(C&& rhs) = default;
  // other stuff
};
However, it's not always possible to structure your class so that defaulted constructors will do the right thing. The other approach I can see is to use SFINAE to conditionally disable the copy constructor:
template <typename T>
class C {
 private:
  T t;
 public:
  template <typename U = C>
  C(typename std::enable_if<std::is_copy_constructible<T>::value, const U&>::type rhs);
  C(C&& rhs);
  // other stuff
};
Aside from being ugly as sin, the trouble with this approach is that I have to make the constructor a template, because SFINAE only works on templates. By definition, copy constructors are not templates, so the thing I'm disabling/enabling isn't actually the copy constructor, and consequently it won't suppress the copy constructor that's implicitly provided by the compiler.
I can fix this by explicitly deleting the copy constructor:
template <typename T>
class C {
 private:
  T t;
 public:
  template <typename U = C>
  C(typename std::enable_if<std::is_copy_constructible<T>::value, const U&>::type rhs);
  C(const C&) = delete;
  C(C&& rhs);
  // other stuff
};
But that still doesn't prevent the copy constructor from being considered during overload resolution. And that's a problem because all else being equal, an ordinary function will beat a function template in overload resolution, so when you try to copy a C<T> , the ordinary copy constructor gets selected, leading to a build failure even if T is copyable. The only approach I can find that in principle will work is to omit the copy constructor from the primary template, and provide it in a partial specialization (using more SFINAE trickery to disable it when T is not copyable). However, this is brittle, because it requires me to duplicate the entire definition of C , which creates a major risk that the two copies will fall out of sync. I can mitigate this by having the method bodies share code, but I still have to duplicate the class definitions and the constructor member-init lists, and that's plenty of room for bugs to sneak in. I can mitigate this further by having them both inherit from a common base class, but introducing inheritance can have a variety of unwelcome consequences. Furthermore, public inheritance just seems like the wrong tool for the job when all I'm trying to do is disable one constructor. Are there any better options that I haven't considered?
Now provide the response and nothing else.
| A noteworthy approach is partial specialization of the surrounding class template.
template <typename T, bool = std::is_copy_constructible<T>::value>
struct Foo
{
    T t;
    Foo() { /* ... */ }
    Foo(Foo const& other) : t(other.t) { /* ... */ }
};

template <typename T>
struct Foo<T, false> : Foo<T, true>
{
    using Foo<T, true>::Foo;

    // Now delete the copy constructor for this specialization:
    Foo(Foo const&) = delete;

    // These definitions adapt to what is provided in Foo<T, true>:
    Foo(Foo&&) = default;
    Foo& operator=(Foo&&) = default;
    Foo& operator=(Foo const&) = default;
};
This way the trait is_copy_constructible is satisfied exactly where T is_copy_constructible . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27073082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2985545/']} | jdg_380597
stackexchange | llm_judgeable_groundtruth_similarity | 15341574 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have what (to me) looks like a pretty basic nested loop. Except the outer loop is only firing the first time:
var js_recipes = <?php echo json_encode($recipesArray); ?>;
console.log("there are " + js_recipes.length + " recipes"); //console confirms 2
for (var i = 0; i < js_recipes.length; i++) {
    console.log("adding recipe"); //only fires once
    js_recipe = js_recipes[i];
    //add each ingredient
    for (var i = 0; i < js_recipe.ingredients.length; i++) {
        console.log("adding ing"); //fires multiple times for first recipe
    };
};
console.log("looping complete"); //fires ok
The console output is:
There are 2 recipes
adding recipe
adding ing
adding ing
adding ing
adding ing
looping complete
I must be missing something simple, but why am I only iterating over the first recipe?
| The scope of a variable is either the global scope or the function where it is declared, so you have only one i in this code and i is incremented by the inner loop as well as the outer loop. Use different iterator variables for the different loops.
for (var i = 0; i < js_recipes.length; i++) {
    console.log("adding recipe");
    js_recipe = js_recipes[i];
    //add each ingredient
    for (var j = 0; j < js_recipe.ingredients.length; j++) {
        console.log("adding ing");
    };
}; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15341574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/225253/']} | jdg_380598
stackexchange | llm_judgeable_groundtruth_similarity | 30641500 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Functors can be covariant and contravariant. Can this covariant/contravariant duality also be applied to monads? Something like:
class Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b

class ContraMonad m where
    return     :: a -> m a
    contrabind :: m a -> (b -> m a) -> m b
Does ContraMonad class make sense? Any examples?
Now provide the response and nothing else.
| Well, of course, it's possible to define it, but I doubt it would be of any use. There is a popular saying that "monad is just a monoid in a category of endofunctors". What it means is, first of all, that we have a category of endofunctors (meaning, (covariant) functors from some category to itself), and what's more, we have some multiplication on this endofunctors (in this case — composition). And then monad fits into some general framework that we don't have to worry about right now. The point is, there is no "multiplication" of contravariant functors. Composition of two covariant functors is again a covariant functor; but composition of two contravariant functors is not a contravariant functor (rather it's a covariant functor, so, a totally different beast). So, "contravariant monads" do not really make sense. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30641500', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/706317/']} | jdg_380599 |
stackexchange | llm_judgeable_groundtruth_similarity | 15368673 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What we know about std::advance is the following:
template <class InputIterator, class Distance>
void advance (InputIterator& i, Distance n);
Purpose: Advances the iterator i by n elements. If i is a Random Access Iterator, the function uses operator+ or operator- once; otherwise, the function uses the increase or decrease operator ( operator++ or operator-- ) repeatedly until n elements have been advanced.
My question is the following: How is std::advance implemented such that it recognizes if it is a Random Access Iterator or not? How does it know it can use operator+ instead of operator++ ?
Now provide the response and nothing else.
| Through iterator_traits and tag dispatch :
template<class InputIterator, class Distance>
void advance_impl(InputIterator& i, Distance n, std::random_access_iterator_tag) {
    i += n;
}

template<class InputIterator, class Distance>
void advance_impl(InputIterator& i, Distance n, std::bidirectional_iterator_tag) {
    if (n < 0) {
        while (n++) --i;
    } else {
        while (n--) ++i;
    }
}

template<class InputIterator, class Distance>
void advance_impl(InputIterator& i, Distance n, std::input_iterator_tag) {
    assert(n >= 0);
    while (n--) ++i;
}

template<class InputIterator, class Distance>
void advance (InputIterator& i, Distance n) {
    advance_impl(i, n, typename std::iterator_traits<InputIterator>::iterator_category());
}
Note that iterator_category is a type (one of std::input_iterator_tag etc.), so iterator_category() is not a function call; it's an expression that constructs a temporary prvalue of that type. The appropriate overload of advance_impl is then selected by normal overload resolution. This is called tag dispatch . Equivalently one could write:
template<class InputIterator, class Distance>
void advance (InputIterator& i, Distance n) {
    typename std::iterator_traits<InputIterator>::iterator_category the_tag;
    advance_impl(i, n, the_tag);
}
The overloads of advance_impl are receiving as their third argument an unnamed argument that is an instance of their chosen tag type. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15368673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/163394/']} | jdg_380600
stackexchange | llm_judgeable_groundtruth_similarity | 1363650 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to know what makes linear approximation so important (or useful). What I am aware of in my current state of limited understanding is that linear approximation is one of the applications of a derivative and that it is used to approximate the value of a function at a point. Please forgive my naivete. Here I go. Linear approximation for a function f(x) is given by $$f(x) \approx f(x_0) + f'(x_0)(x-x_0)$$ For example, the function near x = 0 $$\ln(1+x) \approx x$$ Using the definition of linear approximation above, the value of the function at x = 0 is equal to 0. I hope I don't sound really stupid, but I can just plug in the value x = 0 into the original function ln (1 + x) and get the same answer without even having to know what the linear approximation is (well, that's just what ln(1 + x) ≈ x means). But if one can just evaluate the value of a function at a point and get an answer that's more or less the same as the answer found by using linear approximation, is it even necessary to know what the linear approximation is? I can see that linear approximation can be used to simplify a complicated function into a tremendously simple one. For example, the function g(x) is given by the equation $$ g(x) = \frac{e^{-3x}}{\sqrt{1+x}} $$ and its linear approximation near x = 0 is $$ g(x) = \frac{e^{-3x}}{\sqrt{1+x}} \approx 1 - \frac72x $$ The linear approximation looks tremendously simple as compared to the ugly-looking g(x). Besides simplification, are there other applications of linear approximation? I've read about some applications on Wikipedia, but I would like to hear from the users. Does using linear approximation in a computer program make computations more efficient? Can the same thing be said for quadratic approximations, too? Thank you so much for answering!
Now provide the response and nothing else.
| It very often happens in applications that a model produces equations that are extremely difficult or impossible to solve.However, some of the factors are more important than others. There is often a parameter, let's say $p$, whose values are typically small, corresponding to one of these less important factors. If you set$p$ to $0$, thus ignoring that factor completely, it simplifies the situation so much that the solution becomes easy. Thus if you're looking for $F(p)$, you compute $F(0)$ instead. But you don't want to ignore the factor completely, so the next thing to try is a linear approximation. Even though you can't compute $F(p)$ when $p \ne 0$, you may be able to find $F'(0)$, and thus you can use the linear approximation $F(0) + p F'(0)$, which should produce a good approximation to $F(p)$ when $p$ is small. If you want even better approximations, you can try quadratic and higher-order approximations. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1363650', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/254998/']} | jdg_380601 |
stackexchange | llm_judgeable_groundtruth_similarity | 33715499 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Whenever I execute
docker-compose start
docker-compose ps
I see my containers with the state "UP". If I do
docker-compose up -d
I will see more verbose output, but the containers end up in the same state. Is there any difference between the two commands?
Now provide the response and nothing else.
| docker-compose start ( https://docs.docker.com/compose/reference/start/ )
Starts existing containers for a service.
docker-compose up ( https://docs.docker.com/compose/reference/up/ )
Builds, (re)creates, starts, and attaches to containers for a service. Unless they are already running, this command also starts any linked services. The docker-compose up command aggregates the output of each container (essentially running docker-compose logs -f ). When the command exits, all containers are stopped. Running docker-compose up -d starts the containers in the background and leaves them running. If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
For the complete CLI reference: https://docs.docker.com/compose/reference/ | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33715499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1858845/']} | jdg_380602
stackexchange | llm_judgeable_groundtruth_similarity | 10856353 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to crop a layer with another, but instead of creating a mask (B) and cropping the layer (A) to get a cropped layer A with the shape of B, I want to get a layer with shape A and a 'hole' created by cropping with layer B.
*************        *************
*  Layer A  *        *  Layer A  *
*   *****   *        *   *****   *
*   * B *   *   ->   *   *   *   *   Layer A without shape B
*   *****   *        *   *****   *
*           *        *           *
*************        *************
How can I get the cropped layer A?
Now provide the response and nothing else.
| You have to create a mask that covers the area that you want to keep . This can be done by using an even-odd fill rule and creating a path for a shape layer with both rectangles. You can create the shape like this (where the two rectangles would be your two frames). Then you set this as the mask to get the result that you are after.
CAShapeLayer *maskWithHole = [CAShapeLayer layer];

// Both frames are defined in the same coordinate system
CGRect biggerRect  = CGRectMake(30, 50, 120, 200);
CGRect smallerRect = CGRectMake(80, 100, 50, 80);

UIBezierPath *maskPath = [UIBezierPath bezierPath];
[maskPath moveToPoint:CGPointMake(CGRectGetMinX(biggerRect), CGRectGetMinY(biggerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMinX(biggerRect), CGRectGetMaxY(biggerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMaxX(biggerRect), CGRectGetMaxY(biggerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMaxX(biggerRect), CGRectGetMinY(biggerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMinX(biggerRect), CGRectGetMinY(biggerRect))];

[maskPath moveToPoint:CGPointMake(CGRectGetMinX(smallerRect), CGRectGetMinY(smallerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMinX(smallerRect), CGRectGetMaxY(smallerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMaxX(smallerRect), CGRectGetMaxY(smallerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMaxX(smallerRect), CGRectGetMinY(smallerRect))];
[maskPath addLineToPoint:CGPointMake(CGRectGetMinX(smallerRect), CGRectGetMinY(smallerRect))];

[maskWithHole setPath:[maskPath CGPath]];
[maskWithHole setFillRule:kCAFillRuleEvenOdd];
[maskWithHole setFillColor:[[UIColor orangeColor] CGColor]]; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10856353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1431468/']} | jdg_380603
stackexchange | llm_judgeable_groundtruth_similarity | 22721 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's assume that there is a perfectly spherical planet and there is a moon which is also perfectly spherical. Let's assume that there is no atmospheric drag and no other gravitational pull. If the moon is put in a perfectly circular orbit around the planet somehow, would the moon eventually "fall" towards the planet and form an elliptical orbit or would it continue to follow the perfectly circular orbit? Edit: What i actually wanted to ask is that would the gravity of the planet cause the moon to "fall" towards the planet or would the gravity allow the moon to continue its orbit without bending its path any further towards the planet. I know that no planet can be a true sphere or a cube due to the shapes of the particles.
Now provide the response and nothing else.
| "Perfectly" is a funny word. Perfect circles are a mathematical abstraction. Real objects are not "perfect". So supposing a "perfectly spherical planet" is to suppose something that does not and could not exist. All real planets are made of atoms and anything made of little clumps of matter cannot be perfectly spherical. Even if you built a planet that was as spherical as possible, it would be distorted by its rotation and the tides. So there are no perfectly spherical planets. Now you say "put in a perfectly circular orbit". This is like drawing a line that is exactly $\pi$cm long. Again you are supposing something that does not, and could not exist. What we can do is consider a mathematical model of gravity. If you model the sun and the planet as "particles" (ie point masses) and you model gravity with Newton's law of universal gravity, and if you give the model the system with the exact amount of energy to give a perfect circle, then the system will remain in a perfect circle, it will never become elliptical. If you use general relativity to model gravity, then the release of gravitational radiation will mean that no circular orbits are possible, all orbits will spiral inwards, however it would not become elliptical. Something similar will happen with quantum models of gravity. So your question can only be answered in the context of a mathematical model of gravity. | {} | {'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/22721', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/18598/']} | jdg_380604 |
stackexchange | llm_judgeable_groundtruth_similarity | 35044227 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Sorry for having this program in my native language, but I really can't seem to find why it doesn't work. So, I tested and the values of the a array are all correctly read, but when I try to look at the .dat file there is only the first word read in the for function ( a[0].marca ). Here is the input I also tested to see if it reads correct Here is the .dat file It only writes the first #include <stdio.h>#include <stdlib.h>struct data{ int anul; int luna;};typedef struct data DATA;struct automobil{ char marca[20]; char carburant; char model[5]; DATA fabricatie;};typedef struct automobil AUTOMOBIL;int main(){ AUTOMOBIL a[100]; int n; FILE *f; int i; if((f=fopen("evidenta.dat","wb"))==NULL) { exit(1); } printf("Cate automobile sunt ?"); scanf("%d",&n); // The number of cars registered for(i=0;i<n;i++) // getting the details about every car { printf("\nMarca ? : "); fflush(stdin); gets(a[i].marca); printf("\nCarburant ? : "); fflush(stdin); getch(a[i].carburant); printf("\nModelul? :"); fflush(stdin); gets(a[i].model); printf("\nLuna fabricatie ? :"); scanf("%d",&a[i].fabricatie.luna); printf("\nAn fabricatie ? : "); scanf("%d",&a[i].fabricatie.anul); // After getting a line it has to write it in the binary file fwrite(&(a[i]),sizeof(AUTOMOBIL),1,f); //It writes only a[0].marca } for(i=0;i<n;i++){ printf("\n %s",a[i].marca); printf("\n %c",a[i].carburant); printf("\n %s",a[i].model); printf("\n %d",a[i].fabricatie.luna); printf("\n %d",a[i].fabricatie.anul);} return 0;}
Now provide the response and nothing else.
| If you don't have a unique key on col_b, col_c, col_d of table_b , this will result in a violation when copying over. You can identify problematic rows with a query like this: SELECT col_b, col_c, col_d FROM table_b GROUP BY col_b, col_c, col_d HAVING COUNT(*) > 1 A similar query can be run on table_a joined to table_b , but the specific queries to run will depend on which columns will be updated in table_a . For the insert case, a useful technique might be to use a MINUS between table_a and the proposed inserted row. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35044227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4864953/']} | jdg_380605 |
stackexchange | llm_judgeable_groundtruth_similarity | 9317981 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to build my indexes in Lucene with multiple threads. So, I started my coding and wrote the following code. First I find the files and for each file, I create a thread to index it. After that I join the threads and optimize the indexes. It works but I'm not sure... can I trust it in large scale? Is there any way to improve it? import java.io.File;import java.io.FileFilter;import java.io.FileReader;import java.io.IOException;import java.io.File;import java.io.FileReader;import java.io.BufferedReader;import org.apache.lucene.index.IndexWriter;import org.apache.lucene.document.Field;import org.apache.lucene.document.Document;import org.apache.lucene.store.RAMDirectory;import org.apache.lucene.analysis.standard.StandardAnalyzer;import org.apache.lucene.analysis.StopAnalyzer;import org.apache.lucene.index.IndexReader;import org.apache.lucene.store.Directory;import org.apache.lucene.store.FSDirectory;import org.apache.lucene.util.Version;import org.apache.lucene.index.TermFreqVector;public class mIndexer extends Thread { private File ifile; private static IndexWriter writer; public mIndexer(File f) { ifile = f.getAbsoluteFile(); } public static void main(String args[]) throws Exception { System.out.println("here..."); String indexDir; String dataDir; if (args.length != 2) { dataDir = new String("/home/omid/Ranking/docs/"); indexDir = new String("/home/omid/Ranking/indexes/"); } else { dataDir = args[0]; indexDir = args[1]; } long start = System.currentTimeMillis(); Directory dir = FSDirectory.open(new File(indexDir)); writer = new IndexWriter(dir, new StopAnalyzer(Version.LUCENE_34, new File("/home/omid/Desktop/stopwords.txt")), true, IndexWriter.MaxFieldLength.UNLIMITED); int numIndexed = 0; try { numIndexed = index(dataDir, new TextFilesFilter()); } finally { long end = System.currentTimeMillis(); System.out.println("Indexing " + numIndexed + " files took " + (end - start) + " milliseconds"); writer.optimize(); System.out.println("Optimization took place in 
" + (System.currentTimeMillis() - end) + " milliseconds"); writer.close(); } System.out.println("Enjoy your day/night"); } public static int index(String dataDir, FileFilter filter) throws Exception { File[] dires = new File(dataDir).listFiles(); for (File d: dires) { if (d.isDirectory()) { File[] files = new File(d.getAbsolutePath()).listFiles(); for (File f: files) { if (!f.isDirectory() && !f.isHidden() && f.exists() && f.canRead() && (filter == null || filter.accept(f))) { Thread t = new mIndexer(f); t.start(); t.join(); } } } } return writer.numDocs(); } private static class TextFilesFilter implements FileFilter { public boolean accept(File path) { return path.getName().toLowerCase().endsWith(".txt"); } } protected Document getDocument() throws Exception { Document doc = new Document(); if (ifile.exists()) { doc.add(new Field("contents", new FileReader(ifile), Field.TermVector.YES)); doc.add(new Field("path", ifile.getAbsolutePath(), Field.Store.YES, Field.Index.NOT_ANALYZED)); String cat = "WIR"; cat = ifile.getAbsolutePath().substring(0, ifile.getAbsolutePath().length()-ifile.getName().length()-1); cat = cat.substring(cat.lastIndexOf('/')+1, cat.length()); //doc.add(new Field("category", cat.subSequence(0, cat.length()), Field.Store.YES)); //System.out.println(cat.subSequence(0, cat.length())); } return doc; } public void run() { try { System.out.println("Indexing " + ifile.getAbsolutePath()); Document doc = getDocument(); writer.addDocument(doc); } catch (Exception e) { System.out.println(e.toString()); } }} Any hep is regarded.
Now provide the response and nothing else.
| If you want to parallelize indexing, there are two things you can do: parallelizing calls to addDocument, increasing the maximum thread count of your merge scheduler. You are on the right path to parallelize calls to addDocuments, but spawning one thread per document will not scale as the number of documents you need to index will grow. You should rather use a fixed-size ThreadPoolExecutor . Since this task is mainly CPU-intensive (depending on your analyzer and the way you retrieve your data), setting the number of CPUs of your computer as the maximum number of threads might be a good start. Regarding the merge scheduler, you can increase the maximum number of threads which can be used with the setMaxThreadCount method of ConcurrentMergeScheduler . Beware that disks are much better at sequential reads/writes than random read/writes, as a consequence setting a too high maximum number of threads to your merge scheduler is more likely to slow indexing down than to speed it up. But before trying to parallelizing your indexing process, you should probably try to find where the bottleneck is. If your disk is too slow, the bottleneck is likely to be the flush and the merge steps, as a consequence parallelizing calls to addDocument (which essentially consists in analyzing a document and buffering the result of the analysis in memory) will not improve indexing speed at all. Some side notes: There is some ongoing work in the development version of Lucene in order to improve indexing parallelism (the flushing part especially, this blog entry explains how it works). Lucene has a nice wiki page on how to improve indexing speed where you will find other ways to improve indexing speed. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9317981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/585874/']} | jdg_380606 |
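The fixed-size ThreadPoolExecutor approach recommended in the answer above can be sketched in plain Java. This is a minimal sketch only: the `indexAll` method and the atomic counter are hypothetical stand-ins for the analyze-and-`writer.addDocument` work, so that the pooling pattern can be shown without depending on Lucene itself.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolIndexSketch {

    // Submits one task per document to a fixed-size pool sized to the CPU count,
    // then waits for all tasks to finish. The counter stands in for addDocument().
    static int indexAll(int numDocs) {
        AtomicInteger indexed = new AtomicInteger();
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < numDocs; i++) {
            // In the real indexer: analyze the file and call writer.addDocument(doc) here.
            pool.submit(indexed::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return indexed.get();
    }

    public static void main(String[] args) {
        System.out.println(indexAll(100)); // prints 100
    }
}
```

Unlike spawning (and immediately joining) one thread per file, the pool bounds the number of concurrent workers regardless of how many documents are queued.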
stackexchange | llm_judgeable_groundtruth_similarity | 30805786 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a C program. I calculated the sum of each row of array and then compared them to find which row is minimum sum and which is maximum sum. But my program sometimes give correct output but sometime wrong. Where is the issue? please help me. Thanks

#include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;

///////Prototypes of functions
void displayMaxMinElement(int array[10][10]);
void fillBoard(int array[10][10]);
void rowSum(int array[10][10]);
void displayBoard(int array[10][10]);

/////Starting main function
main() {
    int board[10][10] = {0};
    system("cls"); ///To clear previous data on screen
    fillBoard(board);
    displayBoard(board);
    displayMaxMinElement(board);
    rowSum(board);
    system("pause>nul"); ///Pausing the program without printing "Press any key to continue"
}

void displayMaxMinElement(int array[10][10]) {
    int max = 0, min = 100;
    for(int i=0;i<10;i++) {
        for(int j=0; j<10; j++) {
            if(array[i][j] <= min) min = array[i][j];
            if(array[i][j] >= max) max = array[i][j];
        }
    }
    cout<<"\n\nArray element with maximum value: "<<max<<endl;
    cout<<"Array element with minimum value: "<<min<<endl<<endl;
}

void fillBoard(int array[10][10]) {
    srand(time(0));
    for(int i = 0; i < 10; i++) {
        for(int j = 0; j < 10; j++) {
            array[i][j] = rand()%100+1;
        }
    }
}

void rowSum(int array[10][10]){
    int maxRow=0, minRow=10000;
    int minRowNo, maxRowNo;
    int sum[10]={0};
    for(int i=0; i<10; i++)
        for(int j=0; j<10; j++) {
            sum[i]=sum[i]+array[i][j];
            /// cout<<endl<<sum[i]<<"="<<sum[i]<<"+"<<array[i][j];
        }
    for(int i=0; i<10; i++) {
        if(sum[i]<=minRow){
            minRow=sum[i];
            minRowNo=i;
        }
        else if(sum[i]>=maxRow){
            maxRow=sum[i];
            maxRowNo=i;
        }
        /// cout<<endl<<"*********"<<sum[i];
    }
    cout<<endl<<"Row"<<minRowNo<<"is minimum and having sum of"<<minRow;
    cout<<endl<<"Row"<<maxRowNo<<"is maximum and having sum of "<<maxRow;
}

void displayBoard(int array[10][10]){
    int *ptr = *array;
    for(int i = 1; i <= 100; i++) {
        cout<<*ptr<<"\t";
        ptr++;
        if(i%10 == 0) cout<<endl;
    }
    cout<<endl;
    ptr = NULL;
}
Now provide the response and nothing else.
| Do not add the row to the table markup directly, instead add it to DataTable instance and then use the .draw() method. Adding to the DataTable instance will internally add it as a tbody anyway. Something like this should do var mytable = $('#tblItems').DataTable({ "paging": true, "lengthChange": false, "searching": false, "ordering": true, "info": true, "autoWidth": false, "sDom": 'lfrtip'});mytable.row.add(['asdsa34id', 'asdsa34id', 'asdsa34id', 'asdsa34id', 'asdsa34id', 'asdsa34id', 'asdsa34id']);mytable.draw(); Here is a demo https://jsfiddle.net/dhirajbodicherla/vwpsxhaL/1/ Also reading how to add rows to DataTable from their documentation for further reference Update You can use rows.add() (plural) and do something like this var jsonObject = JSON.parse(response.d);var result = jsonObject.map(function(item){ var result = []; result.push(item.Id); // .... add all the values required return result;});myTable.rows.add(result); // add to DataTable instancemyTable.draw(); // always redraw var myTable;$(document).ready(function() { myTable = $("#tblItems").DataTable({ "deferRender": true, "paging": true, "lengthChange": false, "searching": false, "ordering": true, "info": true, "autoWidth": false, "sDom": 'lfrtip' }); PopulateItemsTable();});function PopulateItemsTable() { $.ajax({ type: "POST", url: "Item.aspx/Search", contentType: "application/json; charset=utf-8", dataType: "json", success: function (response) { var jsonObject = JSON.parse(response.d); var result = jsonObject.map(function(item){ var result = []; result.push(item.Id); // .... add all the values required return result; }); myTable.rows.add(result); // add to DataTable instance myTable.draw(); // always redraw } });} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30805786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4838540/']} | jdg_380607 |
stackexchange | llm_judgeable_groundtruth_similarity | 4679746 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the time complexity of the String#substring() method in Java?
Now provide the response and nothing else.
| New answer As of update 6 within Java 7's lifetime, the behaviour of substring changed to create a copy - so every String refers to a char[] which is not shared with any other object, as far as I'm aware. So at that point, substring() became an O(n) operation where n is the numbers in the substring. Old answer: pre-Java 7 Undocumented - but in practice O(1) if you assume no garbage collection is required, etc. It simply builds a new String object referring to the same underlying char[] but with different offset and count values. So the cost is the time taken to perform validation and construct a single new (reasonably small) object. That's O(1) as far as it's sensible to talk about the complexity of operations which can vary in time based on garbage collection, CPU caches etc. In particular, it doesn't directly depend on the length of the original string or the substring. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4679746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/547198/']} | jdg_380608 |
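The behaviour described in the answer above is easy to exercise at the call site (the asymptotic cost itself is not observable from return values, only the results are). A small sketch — the class name is made up for illustration:

```java
public class SubstringDemo {
    public static void main(String[] args) {
        String s = "hello world";
        // Since Java 7u6 each substring copies its characters into a fresh
        // char[], so this costs O(n) in the length of the substring.
        String tail = s.substring(6);
        String head = s.substring(0, 5);
        System.out.println(tail); // world
        System.out.println(head); // hello
    }
}
```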
stackexchange | llm_judgeable_groundtruth_similarity | 6379259 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want different connection limits for some URLs in my application. Some URLs accept file uploads and need to have a large Connection Timeout. All other URLs need a much smaller timeout to prevent denial of service and not waste resources. Currently I have the Connection Timeout property in IIS set for the entire site to 60 minutes. Then I did this in the web.config: <system.web> <httpRuntime executionTimeout="480" maxRequestLength="1024" /></system.web><location path="FileUpload/WriteFile.rails"> <system.web> <httpRuntime executionTimeout="3600" maxRequestLength="512000" /> </system.web></location> So i was hoping this would set all URLs to an 8 minute timeout and allow the WriteFile.rails URL to run for 60 minutes. Instead ALL URLs are allowed to run for 60 minutes. How do I get IIS to do what I want?
Now provide the response and nothing else.
| The question asked specifically about timeouts but also implied setting maxRequestLength as well. I'm going to try and give a really comprehensive answer to both issues (now that I have spent most of a day working it out). Lets say we have a single URL on our website that we want to process file uploads. We want to take in up to a Gigabyte of data on that URL and we will allow clients to be connected for, at most, 1 hour. All other URLs we want to only allow 90 seconds of connection time and a maximum of 4MB in the POST body. Global Settings First you have to globally raise the limits on time and size for the entire site. First you want to set the "Connection Timeout" for the entire site. This acts as an absolute upper bound and it cannot be set from within the web.config. The IIS7 website has good instructions here . You can also do it programatically with the Microsoft.Web.Administration library that shipped with IIS7/7.5: var serverManager = ServerManager.OpenRemote("\\web-server-name");var site = serverManager.Sites["Your-Site-Name"];site.Limits.ConnectionTimeout = new TimeSpan(1, 0, 0); Next you need to set the max size request that the site will allow. This is in a totally different place, in the Request Fitlering module. This module may not be installed by default on IIS7. Again Microsoft has good instructions for how to set the maxAllowedContentLength through the GUI. This is something you can set from within the Web.config: <system.webServer> <security> <requestFiltering> <!-- Allow 1GB uploads --> <requestLimits maxAllowedContentLength="1073741824"></requestLimits> </requestFiltering> </security></system.webServer> This setting is evaluated against the Content-Length header and requests larger than this will immediately result in a 404.13. The setting is in bytes and what comes next is in Kilobytes, very consistent this IIS7. ASP.NET Settings Next we want to cap all of the ASP.NET requests at 90 seconds/4MB. 
This can be done in the web.config: <location> <system.web> <httpRuntime executionTimeout="90" maxRequestLength="4096" /> </system.web></location> To make the settings global the system.web tag is wrapped in a location tag that has no path attribute. (In the original question I did not wrap the system.web tag in the location tag which was probably the source of my problem.) maxRequestLength is in kilobytes this time. Finally we want to allow our special upload URL to accept huge uploads. Setting these values higher than the ones you set globally wont work. The global values override these settings. <location path="Uploads/PostFile.rails"> <system.web> <httpRuntime executionTimeout="3600" maxRequestLength="1048576" /> </system.web></location> If everything else is set up right, that should do it. As Peter Bromberg suggested, you can add as many of these blocks as needed to raise the limits for specific URLs. One last note: in debug mode IIS does not enforce the Connection Timeout or executionTimeout settings, to allow you more time for debugging. So to test your setting on a developer machine you should do a release build and you should set the 'Enable Server-Side Debugging' setting to false . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6379259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2021/']} | jdg_380609 |
stackexchange | llm_judgeable_groundtruth_similarity | 19263390 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to load jQuery and other scripts into the header (or should it be footer), and I have the jQuery working sort of I can get an alert box running. The thing is, jquery-2.0.3.min.js isn't loading, and I don't know if I'm doing the enqueue correctly. jquery-1.10.2 is loaded though. And also, the other script isn't loading either. For both scripts (2.0.3 and other script), this is at the end: ?ver=3.6.1 Also I was reading that it might be better load both into one function? So, any help would be appreciated! function load_jquery() { wp_register_script( 'jquery_script', get_template_directory_uri() . 'js/jquery-2.0.3.min.js', array( 'jquery' ) ); wp_enqueue_script( 'jquery_script' );}add_action( 'init', 'load_jquery' ); // end jQueryfunction another() { wp_register_script( 'another_script', get_template_directory_uri() . 'js/another.js', array( 'jquery' ) ); wp_enqueue_script( 'another_script' );}add_action( 'init', 'another' );
Now provide the response and nothing else.
| First thing jquery in there by default in wordpress so you dont have to register it , just enqueue it most of the jquery ui libs and core jquery files are already registered with wordpress so you only need to enqueue with right handle look here enqueue script wp_enqueue_script is used to enqueue script and wp_enqueue_style is used to enqueue style for calling custom js, its better to register script or style first before using wp_register_script // to register script wp_register_style // To register style then enqueue using wp_enqueue_script , wp_enqueue_style here is a sample code snippet for whole process from my site function pr_scripts_styles() { wp_enqueue_script('jquery'); // just enqueue as its already registered if ( is_singular() && comments_open() && get_option( 'thread_comments' ) ) wp_enqueue_script( 'comment-reply' ); /* REGISTER ALL JS FOR SITE */ wp_register_script('pr_cycle_all',get_stylesheet_directory_uri().'/js/pr-slider.js'); wp_register_script('pr_slider',get_stylesheet_directory_uri().'/js/jquery.cycle.all.min.js'); wp_register_script('pr_validation_engine',get_stylesheet_directory_uri().'/js/jquery.validationEngine-en.js'); wp_register_script('pr_validation_locale',get_stylesheet_directory_uri().'/js/jquery.validationEngine.js'); wp_register_script('stylethemes',get_stylesheet_directory_uri().'/js/stylethemes.js'); wp_register_script('pr-jquery-ui',get_stylesheet_directory_uri().'/js/jquery-ui.js'); wp_register_script('main-js',get_stylesheet_directory_uri().'/js/main.js'); wp_register_script('pr-galleriffic',get_stylesheet_directory_uri().'/js/jquery.galleriffic.js'); wp_register_script('pr-rollover',get_stylesheet_directory_uri().'/js/jquery.opacityrollover.js'); wp_register_script('pr_colorbox',get_stylesheet_directory_uri().'/js/jquery.colorbox.js'); wp_register_script('pr_jcarousel_js',get_stylesheet_directory_uri().'/js/jquery.jcarousel.min.js'); 
//wp_register_script('google-map-api','https://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false'); /* REGISTER ALL CSS FOR SITE */ wp_register_style('pr_woocommerce',get_stylesheet_directory_uri().'/css/_woocommerce.css'); wp_register_style('pr_mobile',get_stylesheet_directory_uri().'/css/mobile.css'); wp_register_style('pr_sec_teal_grey',get_stylesheet_directory_uri().'/css/secondary-teal-grey.css'); wp_register_style('pr_site_options',get_stylesheet_directory_uri().'/css/site-options.css'); wp_register_style('pr_teal_grey',get_stylesheet_directory_uri().'/css/teal-grey.css'); wp_register_style('validation_css',get_stylesheet_directory_uri().'/css/validationEngine.jquery.css'); wp_register_style('galleriffic_css',get_stylesheet_directory_uri().'/css/galleriffic.css'); wp_register_style('pr_colorbox_style',get_stylesheet_directory_uri().'/css/colorbox.css'); wp_register_style('pr_jcarousel_css',get_stylesheet_directory_uri().'/css/jcarouselskin.css'); /* CALL ALL CSS AND SCRIPTS FOR SITE */ wp_enqueue_script('pr-jquery-ui'); wp_enqueue_script('stylethemes'); wp_enqueue_script('pr_cycle_all'); wp_enqueue_script('pr_slider','','','',true); wp_enqueue_script('pr_validation_engine'); wp_enqueue_script('pr_validation_locale'); wp_enqueue_script('google-map-api'); wp_enqueue_script('main-js'); wp_enqueue_script('pr-galleriffic'); wp_enqueue_script('pr-rollover'); wp_enqueue_script('pr_colorbox'); wp_enqueue_style( 'pr-style', get_stylesheet_uri(), array(), '2013-07-18' ); wp_enqueue_style('pr_site_options'); wp_enqueue_style('pr_woocommerce'); wp_enqueue_style('pr_mobile'); wp_enqueue_style('pr_sec_teal_grey'); wp_enqueue_style('pr_teal_grey'); wp_enqueue_style('validation_css'); wp_enqueue_style('galleriffic_css'); wp_enqueue_style('pr_colorbox_style'); if(is_single()){ wp_enqueue_script('pr_jcarousel_js'); wp_enqueue_style('pr_jcarousel_css'); }}add_action( 'wp_enqueue_scripts', 'pr_scripts_styles' ); also remember to hook your function with wp_enqueue_scripts so 
that scripts & style load correctly add_action( 'wp_enqueue_scripts', 'pr_scripts_styles' ); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19263390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1843781/']} | jdg_380610 |
stackexchange | llm_judgeable_groundtruth_similarity | 6290729 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to ExtJS 4 and need some help understanding how the Ext.define works, please. In fact what I want to do is something similar to the portlets in the portal example, in my application I will need so many objects to add in my different tabs, so in order to organize my code and not have just one very big script, I want to define each component I need in a separate file and then call it in the main script when I need it (I will mainly use the examples so this is why I want to know how Ext.define works so I can adapt those examples and make them work the way I want). I hope I was clear. And thank you in advance for your help.
Now provide the response and nothing else.
| Ext.define( String className, Object data, Function createdFn ) : Ext.Base

Ext.define is used to define a class. Example:

// creates the My.computer.NoteBook class
Ext.define('My.computer.NoteBook', {
    extend: 'Ext.panel.Panel',
    config: {
        hardware: 'Dell',
        os: 'Linux',
        price: 500
    },
    constructor: function(config) {
        this.initConfig(config);
        return this;
    }
});

// creates an instance of the My.computer.NoteBook class
var myComputer = Ext.create('My.computer.NoteBook', {
    hardware: 'MacBook Pro',
    os: 'Mac OS X',
    price: 1800
});

So, with Ext.define you make a mold, which you can use later in many cases. You can define width, height, id, css, so later you just call that mold/class. In your case you can define a class for every tab, and then when you make a function to open/create that tab you can say:

if(existingTab){
    mainPanel.setActiveTab(existingTab);
}else{
    mainPanel.add(Ext.create('My.computer.NoteBook', {id:tabId})).show();
}...

You can put every class in its own .js file; later, for production, you can make a class.js with all classes in one minified .js file! You can define a class and then say:

items: Ext.create("My.computer.NoteBook", { ... }) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6290729', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/745378/']} | jdg_380611
stackexchange | llm_judgeable_groundtruth_similarity | 105764 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to compute the one loop correction to the Higgs mass, which requires the evaluation of a scattering amplitude, namely $$\require{cancel} \mathcal{M} = (-)N_f \int \frac{\mathrm{d}^4 k}{(2\pi)^4} \mathrm{Tr} \, \left[ \left( \frac{i\lambda_f}{\sqrt{2}}\right) \frac{i}{\cancel{k}-m_f} \left( \frac{i\lambda_f}{\sqrt{2}} \right) \frac{i}{\cancel{k} + \cancel{p}-m_f}\right]$$ which corresponds to the Feynman diagram: After combining constants, and rationalizing the denominators, I obtain, $$-\frac{N_f \lambda_f^2}{2} \int \frac{\mathrm{d}^4 k}{(2\pi)^4} \frac{\mathrm{Tr}\left[ \cancel{k}\cancel{k} + \cancel{k}\cancel{p} +2m_f \cancel{k} + m_f \cancel{p} + m_f^2\right]}{\left(k^2-m_f^2\right)\left((k+p)^2 -m_f^2 \right)}$$ Computing traces, via the relation $\mathrm{Tr}[\cancel{a}\cancel{b}] = 4(a\cdot b)$ yields, $$-2N_f \lambda_f^2 \int \frac{\mathrm{d}^4 k}{(2\pi)^4} \frac{k^2 +k\cdot p + m_f^2}{\left(k^2-m_f^2\right)\left((k+p)^2 -m_f^2 \right)}$$ At this point, I employed dimensional regularization, followed by Feynman reparametrization to combine the denominators, and then completed the square, yielding $$-\frac{2^{2-d}\pi^{-d/2}}{\Gamma (d/2)}N_f \lambda_f^2 \int_{0}^1 \mathrm{d}x \int_0^\infty \mathrm{d}k \frac{k^{d-1}(k^2 +kp + m_f^2)}{\left[ \left(k-(x-1)p\right)^2 +p^2(x-x^2 -1)\right]^2}$$ Additional Calculations (Edit) I attempted to further simplify the integrand using a substitution in only the first integral, namely $\ell = k-(1-x)p$ which implies $\mathrm{d}\ell = \mathrm{d}k$, yielding (after several manipulations), $$-\frac{2^{2-d}\pi^{-d/2}}{\Gamma(d/2)}N_f \lambda_f^2 \int_0^1 \mathrm{d}x \, \int_{(x-1)p}^{\infty} \mathrm{d}\ell \frac{(\ell + (1-x)p)^{d-1}[(\ell + \frac{1}{2}p(3-2x))^2 - \frac{1}{4}p^2 + m_f^2]}{[\ell^2 + p^2(x-x^2-1)]^2}$$ N.B. 
Mathematica evaluated the original integral over $k$, and outputted a combination of the first Appell hypergeometric series, which possess the integral representation, $$F_1(a,b_1,b_2,c;x,y) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)} \int_0^1 \mathrm{d}t \, t^{a-1}(1-t)^{c-a-1}(1-xt)^{-b_1}(1-yt)^{-b_2}$$ with $\Re c >\Re a >0$, which has a structure similar to the beta function. If I can express the loop integral in a similar form, I may be able to express it in terms of these functions. At the end of the calculation, I will take $d \to 4-\epsilon$ to obtain poles in $\epsilon$, using the usual expansion $$\Gamma(x) = \frac{1}{x} -\gamma + \mathcal{O}(x)$$ and a similar expansion should the final answer indeed contain the Appell hypergeometric series. Passarino-Veltmann Reduction (Edit): Based on my understanding of Veltmann-Passarino reduction, it is not applicable as the numerator contains an arbitrary power of loop momentum. I could plug in $d=4$, and impose a high momentum cut off, but this has already been done in many texts. As aforementioned, I would like a dimensionally regularized amplitude. I am stuck at this point, can anyone give some details as to how to proceed? In addition, I have a query regarding the hierarchy problem. If using a simple cut-off regularization, the one loop correction can be shown to be quadratically divergent. But why is this an issue that needs to be remedied, by for example, the minimally supersymmetric standard model? Can't the divergence be eliminated by a regular renormalization procedure?
Now provide the response and nothing else.
| I go through the calculation below. However, I won't calculate the integral myself since its very impractical and not what you want to do in practice. You need a quick formula to simplify your integrals. Thanksfully, such a formula is provided in any standard textbook in QFT. You should derive this formula once and then move on. I will do the calculation using this formula and if you would like to see the derivation its done in Peskin and Schroeder, when they introduce dim-reg. I dropped the $N_f$ factor because its not quite right due to the sum over the masses of flavor states. As you mentioned the diagram is given by (I kept your other conventions for the couplings, I presume they are correct)\begin{equation} {\cal M} = - \int \frac{ d ^4 k }{ (2\pi)^4 } \left( \frac{ i \lambda _f }{ \sqrt{ 2}} \right) ^2 ( i ) ^2 \mbox{Tr} \left[ \frac{ \cancel{k} + m _f }{ k ^2 - m ^2 _f } \frac{ \cancel{k} +\cancel{p} + m _f }{ (k+p) ^2 - m ^2 _f } \right] \end{equation} You can combine the denomenators using Feynman parameters (this is the first of two formulas you may want to write down and refer to in the future, but I'll do it explicitly here):\begin{align} \frac{1}{ D} & = \frac{1}{ ( k ^2 - m ^2 ) \left( ( k + p ) ^2 - m ^2 \right) } \\ & = \int d x \frac{1}{ \left[ x ( ( k + p ) ^2 - m ^2 ) + ( 1 - x ) ( k ^2 - m ^2 ) \right] ^2 } \\ & = \int d x \frac{1}{ \left[ k ^2 + 2 k p x + p ^2 x ^2 - p ^2 x ^2 + p ^2 x - m ^2 x - m ^2 + x m ^2 \right] ^2 } \\ & = \int d x \frac{1}{ \left[ ( k + p x ) ^2 - ( p ^2 x ^2 - p ^2 x + m ^2 ) \right] ^2 } \\ & = \int d x \frac{1}{ \left[ ( k + p x ) ^2 - \Delta \right] ^2 } \end{align} where $ \Delta \equiv p ^2 x ^2 - p ^2 x + m ^2 $. To get rid of the $ k + p x $ factor we shift $ k: k \rightarrow k - p x $. Then the denomenator is even in $k$. The trace is given by:\begin{align} \mbox{Tr} \left[ ... 
\right] & \rightarrow \mbox{Tr} \left[ ( \cancel{k}-\cancel{p}x + m _f ) ( \cancel{k} + \cancel{p} ( 1-x ) + m _f ) \right] \\ & = 4 \left[ ( k - p x ) ( k + p ( 1-x ) ) + m ^2 _f \right] \end{align} All linear terms are zero since the denominator is even. Thus the trace becomes:\begin{equation} \mbox{Tr} \left[ ... \right] \rightarrow 4 \left[ k ^2 - p ^2 x ( 1 - x ) + m ^2 _f \right] \end{equation} The amplitude now takes the form,\begin{equation} - \left( 2\lambda _f ^2 \right) \mu ^\epsilon \int \,dx \frac{ \,d^dk }{ (2\pi)^4 }\frac{ k ^2 - p ^2 x ( 1 - x ) + m _f ^2 }{\left[ k ^2 - \Delta \right] ^2 }\end{equation}where I moved to $ d $ dimensions and introduce a renormalization scale, $ \mu $, to keep the coupling dimensionless. I now use two formula out of Peskin and Schroeder, Eq A.44 and A.46, and simplify the final result,\begin{align} & \int \frac{ \,d^4k }{ (2\pi)^4 } \frac{ k ^2 }{ ( k ^2 - \Delta ) ^2 } = \frac{ i \Delta }{ 16 \pi ^2 } \left( \frac{ 2 }{ \epsilon } + \log \frac{ \mu ^2 }{ \Delta } + \log 4\pi + 2 \gamma + 1 \right) \\ & \int \frac{ \,d^4k }{ (2\pi)^4 } \frac{ 1 }{ ( k ^2 - \Delta ) } = \frac{ i }{ 16 \pi ^2 } \left( \frac{ 2 }{ \epsilon } + \log \frac{ \mu ^2 }{ \Delta } + \log 4\pi - \gamma \right) \end{align} where I used $ d = 4 - \epsilon $. For simplicity lets only focus on the most divergent part (of course to calculate the physical cross-sections you'll need the full amplitude). Its easy, but more cumbersome, to include all the finite pieces. In that case we have,\begin{align} {\cal M} &= - \frac{ 2 i \lambda _f ^2 }{ 16 \pi ^2 \epsilon } \int d x \left[ \Delta - p ^2 x ( 1 - x ) + m ^2 _f \right] \\ & = - \frac{ 2 i \lambda _f ^2 }{ 16 \pi ^2 \epsilon } \left[ -\frac{ p ^2}{3} + 2m ^2 _f \right]\end{align} Now with regards to your question about the hierarchy problem. Yes, the divergence can and is cancelled by a counterterm. 
But the modern view of QFT says that renormalization is not an artificial procedure, but instead a physical consequence of quantum corrections. That being said, if the Higgs mass is at the TeV scale but the amplitude is at the Planck scale, the counterterms must be huge. This means that while the physical mass is still at the TeV scale, very precise cancellations need to occur for this to happen, which is very unnatural. Such cancellations don't happen anywhere else in Nature! | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/105764', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/34382/']} | jdg_380612 |
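The last step of the answer above evaluates the Feynman-parameter integral $\int_0^1 dx \left[ \Delta - p^2 x(1-x) + m_f^2 \right]$ with $\Delta = p^2 x^2 - p^2 x + m_f^2$ to $-p^2/3 + 2m_f^2$. That $x$-integral can be spot-checked numerically; this is an illustrative sketch only, with arbitrary test values for $p^2$ and $m_f^2$:

```python
# Numeric spot-check of the x-integral from the answer above:
#   integral_0^1 dx [Delta - p^2 x (1 - x) + m^2],  Delta = p^2 x^2 - p^2 x + m^2
# which the answer evaluates in closed form to  -p^2/3 + 2 m^2.
# p2 and m2 below are arbitrary test values, not physical inputs.

def integrand(x, p2, m2):
    delta = p2 * x * x - p2 * x + m2
    return delta - p2 * x * (1.0 - x) + m2

def midpoint_integral(f, a, b, n=10_000):
    # Composite midpoint rule; more than accurate enough for a quadratic.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

p2, m2 = 1.3 ** 2, 0.7 ** 2  # arbitrary p^2 and m_f^2
numeric = midpoint_integral(lambda x: integrand(x, p2, m2), 0.0, 1.0)
closed_form = -p2 / 3.0 + 2.0 * m2
print(abs(numeric - closed_form) < 1e-6)  # → True
```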
stackexchange | llm_judgeable_groundtruth_similarity | 53093515 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dilemma where I need to have a MySQL database create a customer number based on the first 3 characters of a new client name entered and to append an incremenenting value. For example: Table clients: +----+----------------------------------------------+--------------+| id | clientName | clientNumber |+----+----------------------------------------------+--------------+| 1 | Accelerated Learning | ACC00 || 2 | Acceleration Systems | ACC01 || 3 | Acme,Inc. | ACM00 | Normally I would do this in the submission form on the front end, but for reasons unexplained, the spec wants this handled by the DB. I'm trying to write something like this, but can't get it right: Trigger: CREATE DEFINER = CURRENT_USER TRIGGER `crmdev`.`clients_BEFORE_INSERT` BEFORE INSERT ON `clients` FOR EACH ROWBEGINDECLARE prefix varchar(3);DECLARE suffix INT(3);SET prefix = SUBSTRING(NEW.clientName,1,3);SET suffix = ( SELECT COUNT(*),SUBSTRING(clientName,1,3) = prefix + 1;SET NEW.clientNumber = CONCAT(prefix,suffix);END Thanks in advance for help/guidance!
Now provide the response and nothing else.
| As mentioned in the comments, a multi-stage build involves a single Dockerfile to perform multiple stages. What you have is a common base image. You could convert these to a non-traditional multi-stage build with a syntax like (I say non-traditional because you do not perform any copying between the layers and instead use just the from line to pick from a prior stage): FROM python:3.6 as baseRUN apt-get update && apt-get upgrade -yRUN pip install pipenv pipCOPY Pipfile ./# some more common configuration...FROM base as devRUN pipenv install --system --skip-lock --devENV FLASK_ENV developmentENV FLASK_DEBUG 1FROM base as prodRUN pipenv install --system --skip-lockENV FLASK_ENV production Then you can build one stage or another using the --target syntax to build, or a compose file like: # docker-compose.ymlversion: '3.4'services: webapp: build: context: ./dir dockerfile: Dockerfile target: prod The biggest downside is the current build engine will go through every stage until it reaches the target. Build caching can mean that's only a sub-second process. And BuildKit which is coming out of experimental in 18.09 and will need upstream support from docker-compose will be more intelligent about only running the needed commands to get your desired target built. All that said, I believe this is trying to fit a square peg in a round hole. The docker-compose developer is encouraging users to move away from doing the build within the compose file itself since it's not supported in swarm mode. Instead, the recommended solution is to perform builds with a CI/CD build server, and push those images to a registry. Then you can run the same compose file with docker-compose or docker stack deploy or even some k8s equivalents, without needing to redesign your workflow. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/53093515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1454073/']} | jdg_380613 |
stackexchange | llm_judgeable_groundtruth_similarity | 12763548 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Python 2.6 it seems that the end-of-string markers $ and \Z are not compatible with group expressions. For example,

import re
re.findall("\w+[\s$]", "green pears")

returns ['green '] (so $ effectively does not work). And using

re.findall("\w+[\s\Z]", "green pears")

results in an error:

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/re.pyc in findall(pattern, string, flags)
    175
    176     Empty matches are included in the result."""
--> 177     return _compile(pattern, flags).findall(string)
    178
    179 if sys.hexversion >= 0x02020000:

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/re.pyc in _compile(*key)
    243         p = sre_compile.compile(pattern, flags)
    244     except error, v:
--> 245         raise error, v # invalid expression
    246     if len(_cache) >= _MAXCACHE:
    247         _cache.clear()

error: internal: unsupported set operator

Why does it work that way, and how do I work around it?
Now provide the response and nothing else.
| A [..] expression is a character group , meaning it'll match any one character contained therein. You are thus matching a literal $ character. A character group always applies to one input character, and thus can never contain an anchor. If you wanted to match either a whitespace character or the end of the string, use a non-capturing group instead, combined with the | or selector: r"\w+(?:\s|$)" Alternatively, look at the \b word boundary anchor. It'll match anywhere a \w group start or ends (so it anchors to points in the text where a \w character is preceded or followed by a \W character, or is at the start or end of the string). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12763548', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/907575/']} | jdg_380614 |
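The difference the answer describes is easy to see in a short Python sketch (Python 3 shown here; in Python 2.6 the same patterns apply, and raw strings are recommended either way):

```python
import re

text = "green pears"

# Inside [...] the $ is a literal dollar-sign character, so only "green "
# matches: there is no "$" or whitespace character after "pears" to consume.
print(re.findall(r"\w+[\s$]", text))     # ['green ']

# A non-capturing group with alternation matches whitespace OR end-of-string:
print(re.findall(r"\w+(?:\s|$)", text))  # ['green ', 'pears']

# The word-boundary anchor \b is simpler still and consumes no character:
print(re.findall(r"\w+\b", text))        # ['green', 'pears']
```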
stackexchange | llm_judgeable_groundtruth_similarity | 30130934 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Ubuntu and Linux in general. I want to code in Java on my computer, but I'm having problems installing IntelliJ IDEA on Ubuntu. I have downloaded and extracted the file and for some reason renamed the folder to idea. I tried moving the folder to /usr/share/applications or something but I didn't have permission. I used sudo -i in terminal to gain permission but didn't manage to get out of root folder. Can anyone help me with a step by step way to move the folder, create a shortcut in the search bar or whatever it's called and install it properly?
Now provide the response and nothing else.
| Note: This answer covers the installation of IntelliJ IDEA. For an extended script, that covers more JetBrains IDEs, as well as help for font rendering issues, please see this link provided by brendan. Furthermore, a manual Desktop Entry creation is optional, as newer versions of IntelliJ offer to create it on first startup. I have my intellij int /opt folder. So what I do is: Download Intellij Extract intellij to /opt-folder: sudo tar -xvf <intellij.tar> -C /opt/ (the -C option extracts the tar to the folder /opt/) Create a Desktop Entry File called idea.desktop (see example file below) and store it anywhere you want (let's assume in your home directory) Move the idea.desktop from your home directory to /usr/share/applications: sudo mv ~/idea.desktop /usr/share/applications/ Now (in a lot) Ubuntu versions you can start the application after the GUI is restarted. If you don't know how to do that, you can restart your PC.. idea.desktop (this is for community edition version 14.1.2, you have to change the paths in Exec= and Icon= lines if the path is different for you): [Desktop Entry] Encoding=UTF-8Name=IntelliJ IDEAComment=IntelliJ IDEAExec=/opt/ideaIC-14.1.2/bin/idea.shIcon=/opt/ideaIC-14.1.2/bin/idea.pngTerminal=falseStartupNotify=trueType=Application Edit I also found a shell script that does this for you, here . The given script in the link installs Oracle Java 7 for you and gives you the choice between Community and Ultimate Edition. It then automatically downloads the newest version for you, extracts it and creates a desktop entry. I have modified the scripts to fulfill my needs. It does not install java 8 and it does not ask you for the version you want to install (but the version is kept in a variable to easily change that). You can also update Intellij with it. But then you have to (so far) manually remove the old folder! This is what i got: Edit2 Here is the new version of the script. 
As mentioned in the comments, breandan has updated the script to be more stable (the jetbrains website changed its behavior). Thanks for the update, breandan. #!/bin/shecho "Installing IntelliJ IDEA..."# We need root to install[ $(id -u) != "0" ] && exec sudo "$0" "$@"# Attempt to install a JDK# apt-get install openjdk-8-jdk# add-apt-repository ppa:webupd8team/java && apt-get update && apt-get install oracle-java8-installer# Prompt for edition#while true; do# read -p "Enter 'U' for Ultimate or 'C' for Community: " ed # case $ed in# [Uu]* ) ed=U; break;;# [Cc]* ) ed=C; break;;# esac#doneed=C# Fetch the most recent versionVERSION=$(wget "https://www.jetbrains.com/intellij-repository/releases" -qO- | grep -P -o -m 1 "(?<=https://www.jetbrains.com/intellij-repository/releases/com/jetbrains/intellij/idea/BUILD/)[^/]+(?=/)")# Prepend base URL for downloadURL="https://download.jetbrains.com/idea/ideaI$ed-$VERSION.tar.gz"echo $URL# Truncate filenameFILE=$(basename ${URL})# Set download directoryDEST=~/Downloads/$FILEecho "Downloading idea-I$ed-$VERSION to $DEST..."# Download binarywget -cO ${DEST} ${URL} --read-timeout=5 --tries=0echo "Download complete!"# Set directory nameDIR="/opt/idea-I$ed-$VERSION"echo "Installing to $DIR"# Untar fileif mkdir ${DIR}; then tar -xzf ${DEST} -C ${DIR} --strip-components=1fi# Grab executable folderBIN="$DIR/bin"# Add permissions to install directorychmod -R +rwx ${DIR}# Set desktop shortcut pathDESK=/usr/share/applications/IDEA.desktop# Add desktop shortcutecho -e "[Desktop Entry]\nEncoding=UTF-8\nName=IntelliJ IDEA\nComment=IntelliJ IDEA\nExec=${BIN}/idea.sh\nIcon=${BIN}/idea.png\nTerminal=false\nStartupNotify=true\nType=Application" -e > ${DESK}# Create symlink entryln -s ${BIN}/idea.sh /usr/local/bin/ideaecho "Done." Old Version #!/bin/sh echo "Installing IntelliJ IDEA..."# We need root to install[ $(id -u) != "0" ] && exec sudo "$0" "$@"# define version (ultimate. 
change to 'C' for Community)ed='U'# Fetch the most recent community edition URLURL=$(wget "https://www.jetbrains.com/idea/download/download_thanks.jsp?edition=I${ed}&os=linux" -qO- | grep -o -m 1 "https://download.jetbrains.com/idea/.*gz")echo "URL: ${URL}"echo "basename(url): $(basename ${URL})"# Truncate filenameFILE=$(basename ${URL})echo "File: ${FILE}"# Download binarywget -cO /tmp/${FILE} ${URL} --read-timeout=5 --tries=0# Set directory nameDIR="${FILE%\.tar\.gz}"# Untar fileif mkdir /opt/${DIR}; then tar -xvzf /tmp/${FILE} -C /opt/${DIR} --strip-components=1fi# Grab executable folderBIN="/opt/$DIR/bin"# Add permissions to install directorychmod 755 ${BIN}/idea.sh# Set desktop shortcut pathDESK=/usr/share/applications/IDEA.desktop# Add desktop shortcut echo -e "[Desktop Entry]\nEncoding=UTF-8\nName=IntelliJ IDEA\nComment=IntelliJ IDEA\nExec=${BIN}/idea.sh\nIcon=${BIN}/idea.png\nTerminal=false\nStartupNotify=true\nType=Application" > ${DESK}echo "Done." | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30130934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4149162/']} | jdg_380615 |
stackexchange | llm_judgeable_groundtruth_similarity | 34832531 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My app uses multiple currencies, and these currencies uses different formats, for example: Price for Ruble shows as: 1,101 Руб. Same amount for US Dollar shows as: US $1 101 How would I change the grouping separator, currency symbol and position of currency symbol, by defining a set of different formats for different currencies. This is how my short code stands var formatter = NSNumberFormatter()formatter.numberStyle = .CurrencyStyleformatter.locale = NSLocale.currentLocale()formatter.stringFromNumber(4500000)//Output : $4,500,000.00//Expected : 4,500,000 Руб.
Now provide the response and nothing else.
| Swift 4 or later extension Formatter { static let belarusianRuble: NumberFormatter = { let formatter = NumberFormatter() // set the numberStyle to .CurrencyStyle formatter.numberStyle = .currency // set the desired negative and positive formats grouping, and currency symbol position formatter.positiveFormat = "#,##0 ¤" formatter.negativeFormat = "-#,##0 ¤" // set your custom currency symbol formatter.currencySymbol = "Руб" return formatter }()} let stringToDisplay = Formatter.belarusianRuble.string(for: 4500000) // "4,500,000 Руб" extension Formatter { static let currencyBYR: NumberFormatter = { let formatter = NumberFormatter() formatter.numberStyle = .currency formatter.positiveFormat = "#,##0 ¤" formatter.negativeFormat = "-#,##0 ¤" formatter.currencySymbol = "Руб" return formatter }() static let currencyEUR: NumberFormatter = { let formatter = NumberFormatter() formatter.locale = Locale(identifier: "pt_PT") formatter.numberStyle = .currency return formatter }() static let currencyUSD: NumberFormatter = { let formatter = NumberFormatter() formatter.locale = Locale(identifier: "en_US") formatter.numberStyle = .currency return formatter }() static let currencyBRL: NumberFormatter = { let formatter = NumberFormatter() formatter.locale = Locale(identifier: "pt_BR") formatter.numberStyle = .currency return formatter }() static let currencyRUB: NumberFormatter = { let formatter = NumberFormatter() formatter.locale = Locale(identifier: "ru_RU") formatter.numberStyle = .currency formatter.maximumFractionDigits = 0 return formatter }() static let currencyLocale: NumberFormatter = { let formatter = NumberFormatter() formatter.locale = .current formatter.numberStyle = .currency return formatter }()} extension Numeric { var currencyLocale: String { return Formatter.currencyLocale.string(for: self) ?? "" } var currencyUSD: String { return Formatter.currencyUSD.string(for: self) ?? "" } var currencyEUR: String { return Formatter.currencyEUR.string(for: self) ?? 
"" } var currencyBYR: String { return Formatter.currencyBYR.string(for: self) ?? "" } var currencyBRL: String { return Formatter.currencyBRL.string(for: self) ?? "" } var currencyRUB: String { return Formatter.currencyRUB.string(for: self) ?? "" }} Usage let amount = 4500000.0let stringLocale = amount.currencyLocale // "$4,500,000.00"let stringUSD = amount.currencyUSD // "$4,500,000.00"let stringEUR = amount.currencyEUR // "4 500 000,00 €"let stringBRL = amount.currencyBRL // "R$ 4.500.000,00"let stringBYR = amount.currencyBYR // "4,500,000 Руб"let stringRUB = amount.currencyRUB // "4 500 000 ₽" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34832531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1244597/']} | jdg_380616 |
stackexchange | llm_judgeable_groundtruth_similarity | 64969644 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use gsutil to copy a file from GCS into a Run container during the build step. The steps I have tried: RUN pip install gsutilRUN gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts The error: ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.CommandException: 1 file/object could not be transferred.The command '/bin/sh -c gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts' returned a non-zero code: 1ERRORERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1 The service account (default compute & cloudbuild) does have access to GCS, and I have also tried to gsutil config -a and with various other flags with no success! I am not sure on exactly how I should authenticate to successfully access the bucket.
Now provide the response and nothing else.
| Here my github action job jobs: build: name: Build image runs-on: ubuntu-latest env: BRANCH: ${GITHUB_REF##*/} SERVICE_NAME: ${{ secrets.SERVICE_NAME }} PROJECT_ID: ${{ secrets.PROJECT_ID }} steps: - name: Checkout uses: actions/checkout@v2 # Setup gcloud CLI - uses: google-github-actions/setup-gcloud@master with: service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }} project_id: ${{ secrets.PROJECT_ID }} export_default_credentials: true # Download the file locally - name: Get_file run: |- gsutil cp gs://BUCKET_NAME/path/to/file . # Build docker image - name: Image_build run: |- docker build -t gcr.io/$PROJECT_ID/$SERVICE_NAME . # Configure docker to use the gcloud command-line tool as a credential helper - run: | gcloud auth configure-docker -q # Push image to Google Container Registry - name: Image_push run: |- docker push gcr.io/$PROJECT_ID/$SERVICE_NAME You have to set 3 secrets: SERVICE_ACCOUNT_KEY: which is your service account key file SERVICE_NAME: the name of your container PROJECT_ID: the project where to deploy your image Because you download the file locally, the file is locally present in the Docker build. Then, simply COPY it in the docker file and do what you want with it. UPDATE If you want to do this in docker, you can achieve this like that Dockerfile FROM google/cloud-sdk:alpine as gcloudWORKDIR /appARG KEY_FILE_CONTENTRUN echo $KEY_FILE_CONTENT | gcloud auth activate-service-account --key-file=- \ && gsutil cp gs://BUCKET_NAME/path/to/file .....FROM <FINAL LAYER>COPY --from=gcloud /app/<myFile> ..... The Docker build command docker build --build-arg KEY_FILE_CONTENT="YOUR_KEY_FILE_CONTENT" \ -t gcr.io/$PROJECT_ID/$SERVICE_NAME . YOUR_KEY_FILE_CONTENT depends on your environment. 
Here some solution to inject it: On Github Action: ${{ secrets.SERVICE_ACCOUNT_KEY }} On your local environment: $(cat my_key.json) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64969644', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4896449/']} | jdg_380617 |
stackexchange | llm_judgeable_groundtruth_similarity | 2145330 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The question. Can every $n\in \mathbb N$ be written: $$n=a^2\pm b^2\pm c^2$$ where the $\pm$ are signs of your choice? We know from Lagrange's four-square theorem that every integer can be written as the sum of four squares. Plus, we have Legendre's three-square theorem, which states that an integer cannot be written as the sum of three squares if, and only if, it is of the form: $$4^k(8n+7).$$ So we just have to prove (or disprove) it for every number of this form. I have checked it up to $55$, and it seems to work so far. So the numbers we have to check are these ones. For instance: $$31=6^2-2^2-1^2$$ and $$39=6^2+2^2-1^2.$$ The issue here is that $a$, $b$ and $c$ can be arbitrarily large. For instance: $$183=14542^2-14541^2-170^2.$$ So I don't really know how to prove or disprove this result, and I think it could go either way.
Now provide the response and nothing else.
| Hang on, it's actually quite simple! So suppose that we have a number $l$ . Suppose that $l=pq$ , with $p,q$ having the same parity. That is, both $p$ and $q$ are even, or both $p$ and $q$ are odd. If this is the case, consider $a= \frac{p+q}{2}, b= \frac{p-q}{2}$ . Then, note that $a^2 - b^2 = pq = l$ ! For example, $183 = 61 \times 3$ , so $a=32$ and $b = 29$ , and $32^2-29^2 = 1024 - 841 = 183$ . Now, when can $l$ be written in this form? At least when $l$ is odd, because then you can split it into two odd factors (even if one of those factors is $1$ : for example $7=7 \times 1 = 4^2-3^2$ ) and carry out the above procedure. Finally, given an even number, just subtract (or add!) $1^2=1$ to make it an odd number,which can be expressed as a difference of squares. For example: given $39$ , we can write $39=13 \times 3 = 8^2 - 5^2$ . Given $78$ , we can write $78 = 77 + 1 = 11 \times 7 +1 = 9^2-2^2+1^2$ . What is the reason for so much flexibility? Simple : $(a^2-b^2)$ has a non-trivial factorization, while $a^2+b^2$ does not. This is what makes the whole additive theory of squares (and the Waring problem) so interesting and difficult. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2145330', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/288138/']} | jdg_380618 |
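The construction in the answer above is easy to mechanise. The sketch below (function name is mine, not from the answer) uses the trivial factorisation $m = m \cdot 1$ for the odd part, so it always returns a representation $n = a^2 - b^2 + c^2$ with $c \in \{0, 1\}$:

```python
def three_square_rep(n):
    """Return (a, b, c) with n == a*a - b*b + c*c, following the answer's
    construction: an odd m = p*q (here trivially m*1) equals
    ((p+q)//2)**2 - ((p-q)//2)**2, and an even n is (n-1) plus 1**2."""
    if n % 2 == 1:            # odd: n itself is a difference of squares
        m, c = n, 0
    else:                     # even: make it odd, then add back 1^2
        m, c = n - 1, 1
    a, b = (m + 1) // 2, (m - 1) // 2
    return a, b, c

for n in (31, 39, 78, 183):
    a, b, c = three_square_rep(n)
    print(f"{n} = {a}^2 - {b}^2 + {c}^2")
# 31 = 16^2 - 15^2 + 0^2
# 39 = 20^2 - 19^2 + 0^2
# 78 = 39^2 - 38^2 + 1^2
# 183 = 92^2 - 91^2 + 0^2
```

These differ from the representations in the question (e.g. $31 = 6^2 - 2^2 - 1^2$) because the trivial factorisation is used; any factorisation of the odd part into two factors of the same parity works.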
stackexchange | llm_judgeable_groundtruth_similarity | 17485788 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
trying to implement the RegisterClientScriptBlock to my vb.net code, using the highchart javascript code. so far i have this. Dim script As String script = _ "var chart = $('#container1').highcharts();" & _ "chart.redraw();" ScriptManager.RegisterClientScriptBlock( _ Me, _ GetType(Page), _ "container1", _ script, _ True) i need to refresh the data. am trying to use the above method to re fresh data using my arrays: hidden.Value = arrayJson hidden1.Value = arrayJson1 hidden2.Value = arrayJson2 hidden3.Value = arrayJson3 hidden4.Value = arrayJson4 hidden5.Value = arrayJson5 hidden6.Value = arrayJson6 hidden7.Value = arrayJson7 not sure how to link it up
Now provide the response and nothing else.
| Yes. You can do something similar to a normal level order traversal, but with two stacks: the first stack for printing from left to right, and the second stack for printing from right to left. Start from the root node and store its children in one stack. In every iteration, you have the nodes of one level in one of the stacks. Print the nodes, and push the nodes of the next level onto the other stack. Repeat until you reach the final level. Time complexity is O(n) and space complexity is O(n). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17485788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2519466/']} | jdg_380619 |
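A minimal Python sketch of the two-stack approach described in the answer above (the `Node` class is just for illustration):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def zigzag_level_order(root):
    """Spiral/zigzag level order using two stacks, O(n) time and space."""
    if root is None:
        return []
    result, current, nxt = [], [root], []
    left_to_right = True
    while current:
        node = current.pop()
        result.append(node.val)
        # Push children in an order that reverses on the next level.
        children = (node.left, node.right) if left_to_right else (node.right, node.left)
        for child in children:
            if child is not None:
                nxt.append(child)
        if not current:                 # finished this level: swap stacks
            current, nxt = nxt, []
            left_to_right = not left_to_right
    return result

#        1
#      /   \
#     2     3
#    / \   / \
#   4   5 6   7
root = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6), Node(7)))
print(zigzag_level_order(root))   # [1, 3, 2, 4, 5, 6, 7]
```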
stackexchange | llm_judgeable_groundtruth_similarity | 17320 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for the method by which the partial derangement formula $D_{n,k}$ was derived. I can determine the values for small values of $n$ empirically, but how the general case formula arose still eludes me. Any links/books or an explanation will be appreciated. The formula is: $D_{n,k} = {n \choose k}\,!(n-k)$ Links: Mathworld
Now provide the response and nothing else.
| Here is a very general solution. There is a fundamental formula in combinatorics called the exponential formula, and one statement of it is as follows. Given a finite group $G$ acting on a set $X$, its cycle index polynomial is given by $$Z_G = \frac{1}{|G|} \sum_{g \in G} z_1^{c_1(g)} z_2^{c_2(g)} ... $$ where $c_i(g)$ is the number of cycles of length $i$ in the action of $g$ on $X$. In particular, the notation $Z_{S_n}$ will denote the cycle index polynomial of $S_n$ acting on an $n$-element set in the usual way; it is a generating function encoding the relative proportions of different cycle types of permutations. The exponential formula then states that $$\sum_{n \ge 0} Z_{S_n} t^n = \exp \left( z_1 t + \frac{z_2 t^2}{2} + \frac{z_3 t^3}{3} + ... \right).$$ In my opinion this is one of the most beautiful formulas in mathematics and a major reason I became interested in combinatorics was because I stumbled upon this formula while solving a Putnam problem (which is described in the blog post I linked to above). How does it apply to this problem? Set $z_2 = z_3 = ... = 1$ and $z_1 = z$. Then the LHS of the exponential formula is a generating function which counts permutations according to how many fixed points ($1$-cycles) they have. In other words, $$Z_{S_n}(z, 1, 1, ...) = \frac{1}{n!} \sum_{g \in S_n} z^{c_1(g)} = \frac{1}{n!} \sum_{k=0}^n D_{n,k} z^k.$$ The RHS of the exponential formula, on the other hand, is $$\exp \left( zt + \log \frac{1}{1-t} - t \right) = \frac{e^{-t}}{1 - t} e^{zt}.$$ So we obtain the beautifully concise formula $$\sum_{n \ge 0} \frac{t^n}{n!} \sum_{k=0}^n D_{n,k} z^k = \frac{e^{-t}}{1 - t} e^{zt}.$$ The coefficients of $\frac{e^{-t}}{1 - t}$ are obtained by setting $z = 0$; they give the usual derangement numbers, e.g. 
the number of permutations of $n$ elements with no fixed points, and this can also be seen directly from the generating function since $$\frac{e^{-t}}{1 - t} = \sum_{n \ge 0} \left( \sum_{k=0}^n \frac{(-1)^k}{k!} \right) t^n$$ which is equivalent to the formula $D_{n,0} = n! \sum_{k=0}^n \frac{(-1)^k}{k!} \sim \frac{n!}{e}$. (In fact you can read this asymptotic directly from the generating function.) The above then gives $$D_{n,k} = {n \choose n-k} D_{n-k,0} = \frac{n!}{k!} \sum_{i=0}^{n-k} \frac{(-1)^i}{i!}.$$ Of course, there is a much more direct proof of this: observe that specifying a permutation of $n$ elements with $k$ fixed points is equivalent to specifying the $n-k$ elements which are not fixed points, then specifying a fixed-point-free permutation of these. This immediately gives $D_{n,k} = {n \choose n-k} D_{n-k,0}$, so it suffices to compute $D_{n,0}$, and this can be done by the standard inclusion-exclusion argument. (In the interest of completeness, the standard inclusion-exclusion argument is as follows: first we start with all $n!$ permutations. Then we subtract the ones which fix $1$, and the ones which fix $2$, etc., so we subtract $n \cdot (n-1)!$. But this is overcounting: we need to add back the ones which fix both $1$ and $2$, or more generally both $i$ and $j$ for distinct $i, j$, so we add back ${n \choose 2} \cdot (n-2)!$. But this is overcounting: we need to subtract the ones which fix any triplet... and so forth. This gives each term of the formula $n! \sum_{k=0}^n \frac{(-1)^k}{k!}$ one-by-one.) 
My point in presenting the generating function argument is not that it is any easier in this case but that it generalizes to far more complicated problems in a way which minimizes mental effort: for example you can use it to count permutations by how many $2$-cycles they have, or by $c_3(g) + 17 c_5(g)$, or whatever, and the generating function is also a convenient way to organize the computation of the expected value and variance of various permutation statistics; see, for example, this math.SE answer . | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/17320', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5760/']} | jdg_380620 |
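The closed form $D_{n,k} = \binom{n}{n-k} D_{n-k,0}$ derived above can be checked against brute force for small $n$. An illustrative sketch (helper names are mine); the subfactorial is computed with the recurrence $!m = (m-1)\left(!(m-1) + !(m-2)\right)$:

```python
from itertools import permutations
from math import comb  # Python 3.8+

def subfactorial(m):
    """!m via !m = (m-1)(!(m-1) + !(m-2)), with !0 = 1 and !1 = 0."""
    d0, d1 = 1, 0
    for i in range(2, m + 1):
        d0, d1 = d1, (i - 1) * (d1 + d0)
    return d0 if m == 0 else d1

def D(n, k):
    """Number of permutations of n elements with exactly k fixed points."""
    return comb(n, k) * subfactorial(n - k)

def D_brute(n, k):
    return sum(1 for p in permutations(range(n))
               if sum(p[i] == i for i in range(n)) == k)

print(D(5, 2))        # 20 = C(5,2) * !3 = 10 * 2
print(D_brute(5, 2))  # 20
```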
stackexchange | llm_judgeable_groundtruth_similarity | 13568917 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi I am getting the data from Sql Server like for about 1000 stores. I need to put data in to excel file like as and so on until now i have managed to code like as console application if (reader.HasRows) { while (reader.Read()) { SqlDataAdapter da = new SqlDataAdapter("StoresWithProduct", connection); da.SelectCommand.CommandType = CommandType.StoredProcedure; da.SelectCommand.Parameters.AddWithValue("@StoreID", reader.GetInt32(0)); DataSet ds = new DataSet(); da.Fill(ds, "table1"); foreach (DataRow dr1 in ds.Tables["table1"].Rows) { Console.Write("Store Code:" + dr1["StoreCode"] + "\t"); foreach (DataRow dr in ds.Tables["table1"].Rows) { Console.Write(dr["itemNumber"]+"-" + dr["quantity"]+"\t\n"); } break; } }} else { Console.WriteLine("No rows found.");}Console.ReadLine();reader.Close();connection.Close(); but not able to put into the excel file the way i want, any pointer will be appriciated. Thanks
Now provide the response and nothing else.
| Depending on your database, this can be trivially simple. HQL supports built-in vendor-specific features and functions, it also supports the ability to extend the dialect by registering new functions if they're not already supported by HQL. Let's say you're using SQLServer (or Sybase). SQLServer has a function called 'DATEADD' that can do what you like very easily. The format is: DATEADD (datepart, number, date) You can use this function directly in HQL by first registering the function in your own Hibernate Dialect. To do this, you just have to extend the Dialect you're currently using. This is a very simple process. First, create your own dialect class (replace 'SQLServer2008Dialect' with your own DB vendor): public class MySQLServerDialect extends SQLServer2008Dialect { public MySQLServerDialect() { registerFunction("addminutes", new VarArgsSQLFunction(TimestampType.INSTANCE, "dateadd(minute,", ",", ")")); }} Next, modify your hibernate configuration to use this new class: <?xml version='1.0' encoding='utf-8'?><!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd"><hibernate-configuration> <session-factory> ... <property name="hibernate.dialect">com.mycompany.MySQLServerDialect</property> ...</hibernate-configuration> Now simply use the function: select x from MyEntity x where addminutes(x.creationDate, 10) < current_time() (This assumes your entity is called MyEntity and the creation_date field is mapped to a property called creationDate). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13568917', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1056987/']} | jdg_380621 |
stackexchange | llm_judgeable_groundtruth_similarity | 15292175 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to send a specific key (e.g. k) to another program named notepad, and below is the code that I used:

private void SendKey()
{
    [DllImport ("User32.dll")]
    static extern int SetForegroundWindow(IntPtr point);

    var p = Process.GetProcessesByName("notepad")[0];
    var pointer = p.Handle;
    SetForegroundWindow(pointer);
    SendKeys.Send("k");
}

But the code doesn't work, what's wrong with the code? Is it possible that I send the "K" to the notepad without notepad to be the active window? (e.g. active window = "Google chrome", notepad is in the background, which means sending a key to a background application)?
Now provide the response and nothing else.
| If notepad is already started, you should write:

// import the function in your class
[DllImport ("User32.dll")]
static extern int SetForegroundWindow(IntPtr point);
//...
Process p = Process.GetProcessesByName("notepad").FirstOrDefault();
if (p != null)
{
    IntPtr h = p.MainWindowHandle;
    SetForegroundWindow(h);
    SendKeys.SendWait("k");
}

GetProcessesByName returns an array of processes, so you should get the first one (or find the one you want). If you want to start notepad and send the key, you should write:

Process p = Process.Start("notepad.exe");
p.WaitForInputIdle();
IntPtr h = p.MainWindowHandle;
SetForegroundWindow(h);
SendKeys.SendWait("k");

The only situation in which the code may not work is when notepad is started as Administrator and your application is not. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15292175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1212812/']} | jdg_380622
stackexchange | llm_judgeable_groundtruth_similarity | 5403099 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I create a .war-file from my gwt-project in eclipse?
Now provide the response and nothing else.
| I always use an Ant build file, so the project gets compiled and packaged as a war with one click. Add an xml-file to your project with the following content:

<project name="test" basedir="." default="default">
  <property name="src.dir" value="src" />
  <property name="build.dir" value="war" />
  <path id="compile.classpath">
    <fileset dir="${build.dir}/WEB-INF/lib">
      <include name="**/*.jar" />
      <include name="**/*.xml" />
    </fileset>
  </path>
  <target name="default" depends="gwtc, buildwar,deploy">
  </target>
  <target name="gwtc" description="GWT compile to JavaScript">
    <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
      <classpath>
        <pathelement location="${src.dir}" />
        <path refid="compile.classpath" />
      </classpath>
      <arg line="-logLevel INFO" />
      <jvmarg value="-Xmx1024M" />
      <arg value="YourProject.EntryPointClass" />
    </java>
  </target>
  <target name="buildwar">
    <war basedir="war" destfile="YourProject.war" webxml="war/WEB-INF/web.xml">
      <exclude name="WEB-INF/**" />
      <webinf dir="war/WEB-INF/">
        <include name="**/gwt-servlet.jar" />
        <include name="**/classes/**" />
      </webinf>
    </war>
  </target>
  <target name="deploy">
    <copy file="YourProject.war" todir="." />
  </target>
</project>

(Edit 'YourProject.EntryPointClass' to the path to your EntryPoint-class.) You would need to add gwt-user.jar and gwt-dev.jar to your project's build path (right click on your project -> Build Path -> Add External Archives). If you now look at your "Problems"-view you get a warning that the two files are not available on the server's class path. You can use the QuickFix to either copy it to WEB-INF/lib or hide the warning. The build file will not include those two files in the war-file. All you need to do to compile and create the file is to right click the xml-file and select Run as Ant Build. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5403099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/658545/']} | jdg_380623
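As an editorial aside on what the <war> task actually produces: a .war is just a zip archive with a WEB-INF/ layout. A minimal Python sketch (function name illustrative) that packs files into such an archive:

```python
import io
import zipfile

def build_war(files):
    # Pack a dict of {archive_path: bytes} into an in-memory .war.
    # A .war is a plain zip whose web.xml lives under WEB-INF/,
    # much like the Ant <war> task's output layout.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, data in files.items():
            zf.writestr(path, data)
    buf.seek(0)
    return buf

war = build_war({
    "WEB-INF/web.xml": b"<web-app/>",
    "WEB-INF/lib/gwt-servlet.jar": b"",  # hypothetical placeholder content
    "index.html": b"<html></html>",
})
print(zipfile.ZipFile(war).namelist())  # lists the archive entries
```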
stackexchange | llm_judgeable_groundtruth_similarity | 8615 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was always told that boiling cold water in a pot is better for you then boiling warm or hot water (from the sink). Is there any truth to this statement?
Now provide the response and nothing else.
| Yes, but not for any reason related to nutrition or ease-of-boiling. Older home plumbing systems may use lead solder, and lead is a neurotoxin which is particularly harmful to children. Hot water dissolves lead more readily and stays in your home's plumbing system far longer, so it may end up with much more dissolved lead than cold water. The EPA states: How can I reduce lead in drinking water at home? Flush your pipes before drinking, and only use cold water for consumption. The more time water has been sitting in your home's pipes, the more lead it may contain. Anytime the water in a particular faucet has not been used for six hours or longer, "flush" your cold-water pipes by running the water until it becomes as cold as it will get. This could take as little as five to thirty seconds if there has been recent heavy water use such as showering or toilet flushing. Otherwise, it could take two minutes or longer. Your water utility will inform you if longer flushing times are needed to respond to local conditions. Use only water from the cold-water tap for drinking, cooking, and especially for making baby formula. Hot water is likely to contain higher levels of lead. The two actions recommended above are very important to the health of your family. They will probably be effective in reducing lead levels because most of the lead in household water usually comes from the plumbing in your house, not from the local water supply. | {} | {'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/8615', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/651/']} | jdg_380624 |
stackexchange | llm_judgeable_groundtruth_similarity | 360817 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I replace all of the spaces at the beginning of each line with a tab? I would prefer to use sed for this.
Now provide the response and nothing else.
| Portably:

TAB=$(printf '\t')
sed "s/^ */$TAB/" < file.in > file.out

Some shells (ksh93, zsh, bash, mksh and FreeBSD sh at least) also support a special form of quotes ($'...') where things like \t are expanded:

sed $'s/^ */\t/' < file.in > file.out

The fish shell expands those outside of quotes:

sed 's/^ */'\t/ < file.in > file.out

Some sed implementations like GNU sed also recognise \t as meaning TAB by themselves. So with those, this would also work:

sed 's/^ */\t/' < file.in > file.out

Portably, awk does expand \t inside its double quotes. It also uses extended regular expressions, so one can use x+ in place of xx*:

awk '{sub(/^ +/, "\t"); print}' < file.in > file.out

| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/360817', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/221298/']} | jdg_380625
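The same substitution can be done in Python with `re.MULTILINE`, which makes `^` match at the start of every line. This sketch uses `+` (one or more spaces), like the awk variant, so lines with no leading spaces are left untouched:

```python
import re

def leading_spaces_to_tab(text):
    # Replace each run of leading spaces at the start of a line with one tab,
    # equivalent in spirit to: sed 's/^ */\t/' (with + instead of *).
    return re.sub(r"^ +", "\t", text, flags=re.MULTILINE)

print(leading_spaces_to_tab("  a\n    b\nc"))
```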
stackexchange | llm_judgeable_groundtruth_similarity | 13353663 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having problems creating a regex validator that checks to make sure the input has uppercase or lowercase alphabetical characters, spaces, periods, underscores, and dashes only. Couldn't find this example online via searches. For example: These are ok: Dr. Marshallsam smith.george con-stanza .greatpeter.josh_stinsonsmith _.gorne Anything containing other characters is not okay. That is numbers, or any other symbols.
Now provide the response and nothing else.
| The regex you're looking for is ^[A-Za-z.\s_-]+$

^ asserts that the regular expression must match at the beginning of the subject
[] is a character class - any character that matches inside this expression is allowed
A-Z allows a range of uppercase characters
a-z allows a range of lowercase characters
. matches a literal period here, rather than any character, because it is inside a character class
\s matches whitespace (spaces and tabs)
_ matches an underscore
- matches a dash (hyphen); we have it as the last character in the character class so it doesn't get interpreted as being part of a character range. We could also escape it ( \- ) instead and put it anywhere in the character class, but that's less clear
+ asserts that the preceding expression (in our case, the character class) must match one or more times
$ Finally, this asserts that we're now at the end of the subject

When you're testing regular expressions, you'll likely find a tool like regexpal helpful. This allows you to see your regular expression match (or fail to match) your sample data in real time as you write it. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13353663', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1678772/']} | jdg_380626
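A quick Python check of the pattern against the question's examples (note that in Python, `\s` inside the class also matches tabs and newlines, which may be broader than intended):

```python
import re

NAME_RE = re.compile(r"^[A-Za-z.\s_-]+$")

def is_valid(s):
    return bool(NAME_RE.match(s))

for ok in ["Dr. Marshall", "sam smith", "con-stanza .great",
           "peter.josh_stinson", "smith _.gorne"]:
    assert is_valid(ok)

for bad in ["abc123", "foo!", ""]:
    assert not is_valid(bad)

print("all checks passed")
```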
stackexchange | llm_judgeable_groundtruth_similarity | 59797720 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In flutter, we can declare a function as a variable and call it like this:

MyWidget((){print('HI');});

class MyWidget extends StatelessWidget {
  final Function sayHi;
  MyWidget(this.sayHi);

  @override
  Widget build(BuildContext context) {
    sayHi();
    return ...
  }
}

But what if sayHi() is an async function? How to declare an async function as a variable? There seems to be no class like AsyncFunction. So how to achieve that?
Now provide the response and nothing else.
| Async functions are normal functions with some sugar on top. Here, the function variable type just needs to specify that it returns a Future: class Example { Future<void> Function() asyncFuncVar; Future<void> asyncFunc() async => print('Do async stuff...'); Example() { asyncFuncVar = asyncFunc; asyncFuncVar().then((_) => print('Hello')); }}void main() => Example(); Hope this helps. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59797720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12735564/']} | jdg_380627 |
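The same idea carries over to other languages: an async function is an ordinary first-class value. An editorial Python sketch of storing a coroutine function in a variable and awaiting it:

```python
import asyncio

async def say_hi():
    return "HI"

# A coroutine function is an ordinary object: store it in a variable...
say_hi_var = say_hi

async def main():
    # ...and await the call through the variable like any async function.
    return await say_hi_var()

print(asyncio.run(main()))  # HI
```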
stackexchange | llm_judgeable_groundtruth_similarity | 441182 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have an IP Address 5x.2x.2xx.1xx I want to map to localhost. In my hosts file I have: cat /etc/hosts127.0.1.1 test test127.0.0.1 localhost# The following lines are desirable for IPv6 capable hosts::1 ip6-localhost ip6-loopbackfe00::0 ip6-localnetff00::0 ip6-mcastprefixff02::1 ip6-allnodesff02::2 ip6-allroutersff02::3 ip6-allhosts5x.2x.2xx.1xx 127.0.0.1 What I want to accomplish is that when I connect in this machine to 5x.2x.2xx.1xx, I go to localhost. What I really want is to connect to MySQL using mysql -uroot 5x.2x.2xx.1xx -p and instead of pointing to that IP address I want to use the local MySQL server At the time it isn't working since it stills redirect to the server's IP (5x.2x.2xx.1xx) I've also tried: sudo service nscd restart with no luck
Now provide the response and nothing else.
| /etc/hosts can be used if you want to map a specific DNS name to a different IP address than it really has, but if the IP address is already specified by the application, that and any other techniques based on manipulating hostname resolution will be useless: the application already has a perfectly good IP address to connect to, so it does not need any hostname resolution services. If you want to redirect traffic that is going out to a specified IP address back to your local system, you'll need iptables for that. sudo iptables -t nat -I OUTPUT --dst 5x.2x.2xx.1xx -p tcp --dport 3306 -j REDIRECT --to-ports 3306 This will redirect any outgoing connections from your system to the default MySQL port 3306 of 5x.2x.2xx.1xx back to port 3306 of your own system. Replace the 5x.2x.2xx.1xx and 3306 with the real IP address and port numbers, obviously. The above command will be effective immediately, but will not persist over a reboot unless you do something else to make the settings persistent, but perhaps you don't even need that? | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/441182', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/95422/']} | jdg_380628 |
stackexchange | llm_judgeable_groundtruth_similarity | 2323869 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $f : M \to N$ is a smooth mapping between manifolds of the same dimension $m$.Milnor says the following Observe that if $M$ is compact and $y \in N$ is a regular value, then $f^{-1}(y)$ is a finite set (possibly empty) For $f^{-1}(y)$ is in any case compact, being a closed subset of a compact space $M$; and $f^{-1}(y)$ is discrete, since $f$ is one-to-one in a neighbourhood of each $x \in f^{-1}(y)$ The fact that $f^{-1}(y)$ is closed in $M$ follows from the fact that $\{y\}$ is closed in $N$ and since $f$ is continuous, we have $f^{-1}(y)$ to also be closed But I can't see how the fact that $f$ is one-to-one in a neighbourhood of each $x \in f^{-1}(y)$ implies that $f^{-1}(y)$ is finite. The fact that $f$ is one-to-one follows from the inverse function theorem
Now provide the response and nothing else.
| If the set were not finite it would have an accumulation point, since $M$ is compact. Now in that point you also have $f(p) = y$ by continuity, but the injectivity conclusion of the inverse function theorem for a neighbourhood of that point would fail (by construction). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2323869', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/266135/']} | jdg_380629 |
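The accumulation-point argument can be spelled out a little more fully (an editorial sketch in the notation of the question):

```latex
Suppose $f^{-1}(y)$ were infinite. As an infinite subset of the compact
manifold $M$, it has an accumulation point $p$: there exist distinct
points $x_n \in f^{-1}(y)$ with $x_n \to p$. By continuity,
$f(p) = \lim_{n\to\infty} f(x_n) = y$, so $p \in f^{-1}(y)$. Since $y$
is a regular value, $df_p$ is invertible, and the inverse function
theorem gives a neighbourhood $U$ of $p$ on which $f$ is injective.
But $U$ contains infinitely many of the $x_n$, all mapped to the same
value $y$, contradicting injectivity. Hence $f^{-1}(y)$ is finite.
```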
stackexchange | llm_judgeable_groundtruth_similarity | 36502401 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to drop the database I'm currently connected to like so, but I'm getting this error:

pq: cannot drop the currently open database

I don't really understand how I'm expected to drop the database if I have to close my connection, because then I don't think I will be able to use dbConn.Exec to execute my DROP DATABASE statement?

dbConn *sql.DB

func stuff() error {
    _, err := dbConn.Exec(fmt.Sprintf(`DROP DATABASE %s;`, dbName))
    if err != nil {
        return err
    }
    return dbConn.Close()
}

I guess I could connect to a different database and then execute it on that connection, but I'm not even sure if that'd work, and it seems really weird to have to connect to a new database just to drop a different database. Any ideas? Thanks.
Now provide the response and nothing else.
| Because you are trying to execute the dropDb command on a database to which you have an open connection. According to the postgres documentation:

You cannot be connected to the database you are about to remove. Instead, connect to template1 or any other database and run this command again.

This makes sense, because when you drop the entire database, all the open connections referencing that database become invalid, so the recommended approach is to connect to a different database and execute this command again. If you are facing a situation where a different client is connected to the database, and you really want to drop the database, you can forcibly disconnect all the clients from that particular database. For example, to forcibly disconnect all clients from database mydb:

If PostgreSQL < 9.2:

SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE datname = 'mydb';

Else:

SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'mydb';

Note: This command requires superuser privileges. Then, you can connect to a different database, and run the dropDb command again. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36502401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/779404/']} | jdg_380630
stackexchange | llm_judgeable_groundtruth_similarity | 19713228 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to run a mixed effect model on some data but struggling with one of the fixed effects, I think primarily due to it a factor?! Sample data: data4<-structure(list(code = structure(1:10, .Label = c("10888", "10889", "10890", "10891", "10892", "10893", "10894", "10896", "10897", "10898", "10899", "10900", "10901", "10902", "10903", "10904", "10905", "10906", "10907", "10908", "10909", "10910", "10914", "10916", "10917", "10919", "10920", "10922", "10923", "10924", "10925", "10927"), class = "factor"), speed = c(0.0296315046039244, 0.0366986630049636, 0.0294297725505692, 0.048316183511095, 0.0294275666501456, 0.199924957584131, 0.0798850288176711, 0.0445886457047146, 0.0285993712316451, 0.0715158276875623), meanflow = c(0.657410742496051, 0.608271363339857, 0.663241108786611, 0.538259450171821, 0.666299529534762, 0.507156583629893, 0.762448863636364, 37.6559178370787, 50.8557196935557, 31.6601587837838), length = c(136, 157, 132, 140, 135, 134, 144, 149, 139, 165), river = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L), .Label = c("c", "f"), class = "factor")), .Names = c("code", "speed", "meanflow", "length", "river"), row.names = c(2L, 4L, 6L, 8L, 10L, 12L, 14L, 16L, 18L, 20L), class = "data.frame") My model is as such: model1<-lmer(speed ~ river + length +(1|meanflow)+(1|code), data4) and when run returns error message: Error in checkNlevels(reTrms$flist, n = n, control) : number of levels of each grouping factor must be < number of observations Having trawled the internet I have found one response but for the life of me do not understand the responses to the question!
Now provide the response and nothing else.
| You have two problems here:

It looks like you have one observation for every value of code. That means that you can't estimate both a residual variance (which is built in to lmer, and linear mixed models more generally) and an among-code variance -- both of these parameters will be trying to estimate the same variance component, and any combination of var(residual) and var(code) that adds up to the same value will represent an equally good fit to the data.

You also have one observation for every value of meanflow; this is because meanflow is a continuous variable, which is not usually something you want to use as a grouping variable in the model. I'm not sure what you're trying to capture with this term.

You can actually fit these models if you insist by using lmerControl to bypass the checks, but you won't necessarily get a sensible result!

model2 <- lmer(speed ~ river + length + (1|meanflow) + (1|code), data4,
               control=lmerControl(check.nobs.vs.nlev = "ignore",
                                   check.nobs.vs.rankZ = "ignore",
                                   check.nobs.vs.nRE="ignore"))

Here the variance has been divided approximately in equal thirds:

VarCorr(model2)
## Groups   Name        Std.Dev.
## meanflow (Intercept) 0.035354
## code     (Intercept) 0.032898
## Residual             0.033590

If we use only one (still inappropriate) random effect,

model0 <- lmer(speed ~ river + length + (1|meanflow), data4,
               control=lmerControl(check.nobs.vs.nlev = "ignore",
                                   check.nobs.vs.rankZ = "ignore",
                                   check.nobs.vs.nRE="ignore"))

Now the variance is divided exactly in halves:

VarCorr(model0)
## Groups   Name        Std.Dev.
## meanflow (Intercept) 0.041596
## Residual             0.041596

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19713228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1751379/']} | jdg_380631
stackexchange | llm_judgeable_groundtruth_similarity | 177973 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a direct way to determine how many digits a power of 2 will contain without actually performing the multiplication? An estimation would help as well if there is no absolute solution. EDIT :In both decimal and binary bases.
Now provide the response and nothing else.
| If you solve for $x$ the equation $$2^{n}=10^{x}$$ you get the exponent of $10$ $$x=\frac{n\ln 2}{\ln 10}\approx 0.30103n\qquad \text{(see comment)}$$ Answer to the edit . In binary base since $$2^{n}=1\cdot 2^{n}+0\cdot 2^{n-1}+\cdots +0\cdot 2^{2}+0\cdot 2^{1}+0\cdot2^{0},$$ we have $n+1$ bits $$\left( 2^{n}\right) _{2}=\underset{n+1\text{ bits}}{\underbrace{1\overset{n\text{ 0's}}{\overbrace{0\ldots 000}}}}.$$ Comment . The number $x$ is never an integer because $2^{n}$ can only terminate in $2,4,6$ or $8$. So, as commented by Random832, the number of digits in decimal base is $$\left\lfloor 1+\frac{n\ln 2}{\ln 10}\right\rfloor =1+\left\lfloor n\,\log_{10}2\right\rfloor ,$$ which is the sequence A034887 in OEIS (Gost's comment). | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/177973', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31055/']} | jdg_380632 |
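The decimal-digit formula above is easy to verify numerically (for very large n, floating-point rounding of n*log10(2) could in principle misfire, so treat this as a sketch):

```python
import math

def digits_of_pow2(n):
    # Number of decimal digits of 2**n without computing the power:
    # 1 + floor(n * log10(2)).
    return 1 + math.floor(n * math.log10(2))

for n in [1, 10, 100, 1000]:
    assert digits_of_pow2(n) == len(str(2 ** n))

print(digits_of_pow2(1000))  # 302
```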
stackexchange | llm_judgeable_groundtruth_similarity | 34809845 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a playbook that is running in different ways in Ansible 1.9.x and 2.0. I would like to check the currently running ansible version in my playbook to avoid someone running it with the old one. I don't think that this is the best solution:

- local_action: command ansible --version
  register: version

What would you suggest?
Now provide the response and nothing else.
| Ansible provides a global dict called ansible_version. The dict contains the following:

"ansible_version": {
    "full": "2.7.4",
    "major": 2,
    "minor": 7,
    "revision": 4,
    "string": "2.7.4"
}

you can use any of the following: ansible_version.full, ansible_version.major, or any other combination in creating conditional statements to check the version of ansible that's installed. example playbook using this dict and a when statement:

---
- hosts: localhost
  tasks:
    - name: Print message if ansible version is greater than 2.7.0
      debug:
        msg: "Ansible version is {{ ansible_version.full }}"
      when: ansible_version.full >= "2.7.4"

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34809845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3846577/']} | jdg_380633
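One caveat worth noting (editorial): `when: ansible_version.full >= "2.7.4"` compares strings lexicographically, which misorders multi-digit components such as "2.10.0". Comparing the numeric parts (which ansible_version.major / minor / revision expose) is safer. A Python sketch of the difference:

```python
def as_tuple(v):
    # Split "2.10.0" into (2, 10, 0) so components compare numerically.
    return tuple(int(part) for part in v.split("."))

# Lexicographic string comparison gets multi-digit components wrong:
print("2.10.0" >= "2.7.4")                      # False (wrong answer)
print(as_tuple("2.10.0") >= as_tuple("2.7.4"))  # True (right answer)
```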
stackexchange | llm_judgeable_groundtruth_similarity | 1458 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I find it hard to understand what really is the issue with multiple comparisons . With a simple analogy, it is said that a person who will make many decisions will make many mistakes. So very conservative precaution is applied, like Bonferroni correction, so as to make the probability that, this person will make any mistake at all, as low as possible. But why do we care about whether the person has made any mistake at all among all decisions he/she has made, rather than the percentage of the wrong decisions? Let me try to explain what confuses me with another analogy. Suppose there are two judges, one is 60 years old, and the other is 20 years old. Then Bonferroni correction tells the one which is 20 years old to be as conservative as possible, in deciding for execution, because he will work for many more years as a judge, will make many more decisions, so he has to be careful. But the one at 60 years old will possibly retire soon, will make fewer decisions, so he can be more careless compared to the other. But actually, both judges should be equally careful or conservative, regardless of the total number of decisions they will make. I think this analogy more or less translates to the real problems where Bonferroni correction is applied, which I find counterintuitive.
Now provide the response and nothing else.
| You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication is why some people do not believe in Bonferroni style corrections at all. Sometimes the kind of data one deals with in their career is such that this is not an issue. For judges who make one, or very few decisions on each new piece of evidence this is a very valid argument. But what about the judge with 20 defendants and who is basing their judgment on a single large set of data (e.g. war tribunals)? You're ignoring the kicks at the can part of the argument. Generally scientists are looking for something — a p-value less than alpha. Every attempt to find one is another kick at the can. One will eventually find one if one takes enough shots at it. Therefore, they should be penalized for doing that. The way you harmonize these two arguments is to realize they are both true. The simplest solution is to consider testing of differences within a single dataset as a kicks at the can kind of problem but that expanding the scope of correction outside that would be a slippery slope. This is a genuinely difficult problem in a number of fields, notably FMRI where there are thousands of data points being compared and there are bound to be some come up as significant by chance. Given that the field has been historically very exploratory one has to do something to correct for the fact that hundreds of areas of the brain will look significant purely by chance. Therefore, many methods of adjustment of criterion have been developed in that field. On the other hand, in some fields one might at most be looking at 3 to 5 levels of a variable and always just test every combination if a significant ANOVA occurs. This is known to have some problems (type 1 errors) but it's not particularly terrible. It depends on your point of view. 
The FMRI researcher recognizes a real need for a criterion shift. The person looking at a small ANOVA may feel that there's clearly something there from the test. The proper conservative point of view on the multiple comparisons is to always do something about them but only based on a single dataset. Any new data resets the criterion... unless you're a Bayesian... | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/1458', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/148/']} | jdg_380634 |
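For concreteness, the per-dataset correction the answer recommends can be sketched in a few lines of Python (editorial example; Bonferroni simply divides the family-wise alpha by the number of comparisons in the dataset):

```python
def bonferroni(alpha, m):
    # Per-test significance threshold after Bonferroni correction
    # for m comparisons within one dataset.
    return alpha / m

p_values = [0.001, 0.02, 0.04]
threshold = bonferroni(0.05, len(p_values))
print(threshold)                              # ~0.0167
print([p <= threshold for p in p_values])     # [True, False, False]
```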
stackexchange | llm_judgeable_groundtruth_similarity | 43300381 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
BACKGROUND: I have a desktop browser app that uses mapquest with the leaflet js plugin. I am using the divIcon class for the markers, which allows me to use custom HTML and styling for the markers. Each divIcon marker also contains a hidden div that displays when the marker is hovered over (using the :hover class). I am using neither the default marker nor the default popup built into leaflet, because custom html gives much better control over styling.

PROBLEM: When the popup is showing, any other markers on the map show on top of the popup, instead of underneath. I have tried setting the z-index of the popup div to a really high number, but it does not help.

WHAT IS EXPECTED: When you hover the mouse over an icon, the markers should be behind the popup, not in front.

THIS IS NOT A DUPLICATE QUESTION: This question is not the same as this one. That question was about having the default leaflet popups stay on top of a custom div that is outside of the map z-index context. This question is about a custom mouseover popup (not the default popup) staying above other markers. Plus, my solution is completely different from the admittedly javascript "hack" put forward as a workaround.

WORKING EXAMPLE OF PROBLEM: https://jsfiddle.net/mrrost/py2bqw7j/

Here is what the divIcon code with custom marker/popup looks like:

var pin = L.divIcon({
    html: `
        <div class='marker'>
            Pin
            <div class='popup'>
                Marker info. Other markers WILL BE on top of this div.
                This is BAD and a PROBLEM.
            </div>
        </div>
    `,
});

The most important css definitions:

#map {
    position: fixed;
}

/* hide default leaflet pin; div.popup is put inside here */
.leaflet-marker-icon {
    border: 0;
    margin: 0;
    padding: 0;
}

div.popup {
    display: none;
    position: absolute;
}

div.marker:hover div.popup {
    display: block;
}
Now provide the response and nothing else.
| The solution was to better understand how z-indexing works. The popup is set inside the marker (to make is work with just css :hover), so it is the child of the parent marker element. Z-index is inherited from parent elements, and child elements cannot have a z-index higher than its parent. That's just how z-indexing works. So, setting z-index on the popup element is ignored by the browser, because leaflet has set a z-index for the marker. The fix was to use a css rule that tells the browser to lower the z-index of all other markers, when a marker is hovered over: .leaflet-marker-icon:not(:hover) { z-index: 0 !important;} See here for a full working example: https://jsfiddle.net/mrrost/tdr45764/ | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43300381', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4634069/']} | jdg_380635 |
stackexchange | llm_judgeable_groundtruth_similarity | 5667444 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Anyone has been using Behat with Zend Framework? Any examples on how to use both?
Now provide the response and nothing else.
| I got it working. It works with PHPUnit and Zend_Test so you can use all those nifty assertXYZ() methods. First, make sure you've got behat installed and available in your system $PATH. I did the following:

sudo pear channel-discover pear.symfony.com
sudo pear channel-discover pear.behat.org
sudo pear install behat/behat

Now, create a directory structure like so:

features
  application
    ControllerTestCase.php
  bootstrap
    FeatureContext.php
  homepage.feature

The features/application/ControllerTestCase.php class is typical of a Zend_Test testing implementation:

<?php
require_once 'Zend/Application.php';
require_once 'Zend/Test/PHPUnit/ControllerTestCase.php';

class ControllerTestCase extends Zend_Test_PHPUnit_ControllerTestCase
{
    public $application;

    public function setUp()
    {
        $this->application = new Zend_Application(APPLICATION_ENV, APPLICATION_PATH . '/configs/application.ini');
        $this->bootstrap = array($this, 'appBootstrap');
        parent::setUp();
    }

    public function appBootstrap()
    {
        $this->application->bootstrap();
    }
}

The features/bootstrap/FeatureContext.php class is what Behat needs to bootstrap itself:

<?php
use Behat\Behat\Context\ClosuredContextInterface,
    Behat\Behat\Context\TranslatedContextInterface,
    Behat\Behat\Context\BehatContext,
    Behat\Behat\Exception\PendingException;
use Behat\Gherkin\Node\PyStringNode,
    Behat\Gherkin\Node\TableNode;

require_once 'PHPUnit/Autoload.php';
require_once 'PHPUnit/Framework/Assert/Functions.php';

define('APPLICATION_ENV', 'testing');
define('APPLICATION_PATH', dirname(__FILE__) . '/../path/to/your/zf/application');
set_include_path('.' . PATH_SEPARATOR . APPLICATION_PATH . '/../library' . PATH_SEPARATOR . get_include_path());

require_once dirname(__FILE__) . '/../application/ControllerTestCase.php';

class FeatureContext extends BehatContext
{
    protected $app;

    /**
     * Initializes context.
     * Every scenario gets its own context object.
     *
     * @param array $parameters context parameters (set up via behat.yml)
     */
    public function __construct(array $parameters)
    {
        $this->app = new ControllerTestCase();
        $this->app->setUp();
    }

    /**
     * @When /^I load the URL "([^"]*)"$/
     */
    public function iLoadTheURL($url)
    {
        $this->app->dispatch($url);
    }

    /**
     * @Then /^the module should be "([^"]*)"$/
     */
    public function theModuleShouldBe($desiredModule)
    {
        $this->app->assertModule($desiredModule);
    }

    /**
     * @Given /^the controller should be "([^"]*)"$/
     */
    public function theControllerShouldBe($desiredController)
    {
        $this->app->assertController($desiredController);
    }

    /**
     * @Given /^the action should be "([^"]*)"$/
     */
    public function theActionShouldBe($desiredAction)
    {
        $this->app->assertAction($desiredAction);
    }

    /**
     * @Given /^the page should contain a "([^"]*)" tag that contains "([^"]*)"$/
     */
    public function thePageShouldContainATagThatContains($tag, $content)
    {
        $this->app->assertQueryContentContains($tag, $content);
    }

    /**
     * @Given /^the action should not redirect$/
     */
    public function theActionShouldNotRedirect()
    {
        $this->app->assertNotRedirect();
    }
}

And now you can write features like features/homepage.feature:

Feature: Homepage
  In order to know ZF works with Behat
  I need to see that the page loads.

  Scenario: Check the homepage
    Given I load the URL "/index"
    Then the module should be "default"
    And the controller should be "index"
    And the action should be "index"
    And the action should not redirect
    And the page should contain a "title" tag that contains "My Nifty ZF App"

To run the tests, cd to the directory that contains the features folder, and type behat. Good luck! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5667444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151174/']} | jdg_380636
stackexchange | llm_judgeable_groundtruth_similarity | 47507733 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm programming a React Native game app for blind kids to help them with maths. There is a game in which they have to count how many animals are on the screen; when an animal is pressed it emits a sound. In React Native there is the onPress property of <TouchableWithoutFeedback />, which allows me to play the sound, but when it comes to visually impaired users I have to announce that there is an animal instead of just playing the sound. How can I know when a certain View is focused by the screen reader, so I can call a function to do that?
Now provide the response and nothing else.
| You need to pass the event object to the handleDelete function when you use an arrow function, as done in your implementation. You can think of an arrow function as a function that calls another function, to which you need to pass the arguments. The event object is a parameter of the arrow function, and you indeed need to pass it on to the handleDelete function:

```jsx
onClick={(e) => this.handleDelete(e, i)}
```

However, after this change you still need to bind the deleteTodos function in the parent, since the context of this inside the function won't otherwise be that of the React class component. You can do it like

```jsx
deleteTodos = (i) => {
    var lists = this.state.listArr;
    lists.splice(i, 1);
    this.setState({listArr: lists})
}
```

or

```jsx
constructor(props){
    super(props);
    this.state = {
        listArr: [],
    }
    this.deleteTodos = this.deleteTodos.bind(this);
}
```

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47507733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4374166/']} | jdg_380637
stackexchange | llm_judgeable_groundtruth_similarity | 602069 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This circuit is going to be in a production environment. I want to turn a relay on and off using a P-channel MOSFET. This action of turning on and off is not a periodic signal at a high frequency, hence a MOSFET driver is not needed. When the controller is set high (activated), it pulls the output to ground. R3 is added to limit the current into the gate. R2 is added to pull up the gate to 12V, so it is not floating. The added diode is a flyback diode. The internal circuitry contains a 3V voltage that is pulled through R1. Is this circuit sufficient for a production design? What standards do you guys follow when designing a similar circuit? simulate this circuit – Schematic created using CircuitLab Also, by the looks of it, the IRF9530 FET will not work because the Vgs is too high; are my calculations correct? simulate this circuit Edit 1: simulate this circuit Edit 2: Edit 3: A redundant pull-up is added so that even if the control internal circuit is not plugged in, the load is still OFF.
Now provide the response and nothing else.
| If the circuit is as shown it will not work at all (or it will pull the 3V supply up to 12V). There will be about 9V on the coil with the switch open and about 11V on the coil with the switch closed. The MOSFET will never turn off or on completely. Remember that Vgs is what controls the MOSFET. You should probably be using an N-channel MOSFET (or two MOSFETs), depending on what you are trying to do. DC relays do not have an inrush current, so I'm not sure where your numbers are coming from; they sound more like an AC relay specification. Powering an AC relay with DC at its nominal AC voltage will likely lead to the acrid smell of disappointment. Generically, you probably want to do something like this: simulate this circuit – Schematic created using CircuitLab | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/602069', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/159693/']} | jdg_380638
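The arithmetic behind this answer's warning is easy to check. Below is a rough sketch; the resistor values are hypothetical (the actual schematics are not reproduced here), and it only models the ideal case where the gate divider runs between the 12 V rail and true ground. The gate of a high-side P-FET sits on a divider between the rail (pull-up) and whatever the controller pulls it toward, and the FET only switches when Vgs = Vgate - Vsource swings far enough below zero:

```python
def pfet_vgs(v_rail, r_pullup, r_pulldown=None):
    """Gate-source voltage of a high-side P-FET whose source sits at v_rail.

    r_pulldown=None models the controller switch open (gate pulled to the
    rail); otherwise the gate sits on a divider between v_rail and ground.
    """
    if r_pulldown is None:
        v_gate = v_rail  # pull-up wins: Vgs = 0, FET fully off
    else:
        v_gate = v_rail * r_pulldown / (r_pullup + r_pulldown)
    return v_gate - v_rail

# Hypothetical values: 12 V rail, 10 k pull-up, 1 k path to ground when on.
print(pfet_vgs(12.0, 10e3))        # 0.0 -> FET off
print(pfet_vgs(12.0, 10e3, 1e3))   # about -10.9 V -> FET on
```

If instead the low side of the divider connects to the 3 V internal circuit rather than ground, the gate can never swing low enough to turn the FET fully on, nor high enough relative to the rail to matter when off, which is the failure mode the answer describes.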
stackexchange | llm_judgeable_groundtruth_similarity | 386958 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a directory of files with filenames of the form <num1>v<num2>.txt. I'd like to find all files for which <num1> is a duplicate. When duplicates are found, we should delete the ones with the smaller <num2>. Is this possible? I could easily write a Python script to handle this, but thought it might be a nice application of built-in zsh features. Example: In the following list of files, the first three have duplicate <num1> parts. As well, the fourth and fifth are duplicates.

012345v1.txt
012345v2.txt
012345v3.txt
3333v4.txt
3333v7.txt
11111v11.txt

I would like to end up with a directory containing

012345v3.txt
3333v7.txt
11111v11.txt
Now provide the response and nothing else.
| You could do something like:

```zsh
files=(<->v<->.txt(n))
typeset -A h
for f ($files) h[${f%%v*}]=$f
keep=($h)
echo rm ${files:|keep}
```

(remove echo if happy)

- <-> : any sequence of digits (<x-y> glob operator with no bound specified)
- (n) : numeric sort
- ${f%%v*} : standard/ksh greedy pattern stripping from the end
- ${files:|keep} : array subtraction

| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/386958', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/247135/']} | jdg_380639
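For readers outside zsh, the same keep-the-largest-<num2> logic can be sketched in Python. This is a standalone illustration, not part of the original answer; it selects one winner per <num1> key explicitly rather than relying on a numeric sort order as the zsh version does:

```python
import re

def files_to_remove(names):
    """Return the files to delete: for each <num1>, keep only the largest <num2>."""
    pattern = re.compile(r'^(\d+)v(\d+)\.txt$')
    best = {}  # num1 -> (num2, filename) with the largest num2 seen so far
    for name in names:
        m = pattern.match(name)
        if not m:
            continue  # ignore files that don't fit the <num1>v<num2>.txt scheme
        num1, num2 = m.group(1), int(m.group(2))
        if num1 not in best or num2 > best[num1][0]:
            best[num1] = (num2, name)
    keep = {name for _, name in best.values()}
    return [n for n in names if pattern.match(n) and n not in keep]

names = ["012345v1.txt", "012345v2.txt", "012345v3.txt",
         "3333v4.txt", "3333v7.txt", "11111v11.txt"]
print(files_to_remove(names))
# ['012345v1.txt', '012345v2.txt', '3333v4.txt']
```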
stackexchange | llm_judgeable_groundtruth_similarity | 9361303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to input a character and get the Unicode value back? For example, I can put &#12103; in HTML to output "⽇"; is it possible to give that character as an argument to a function and get the number as an output without building a Unicode table?

$val = someFunction("⽇"); //returns 12103

or the reverse?

$val2 = someOtherFunction(12103); //returns "⽇"

I would like to be able to output the actual characters to the page, not the codes, and I would also like to be able to get the code from the character if possible. The closest I got to what I want is php.net/manual/en/function.mb-decode-numericentity.php but I can't get it working; is this the code I need, or am I on the wrong track?
Now provide the response and nothing else.
|

```php
function _uniord($c) {
    if (ord($c[0]) >= 0 && ord($c[0]) <= 127)
        return ord($c[0]);
    if (ord($c[0]) >= 192 && ord($c[0]) <= 223)
        return (ord($c[0]) - 192) * 64 + (ord($c[1]) - 128);
    if (ord($c[0]) >= 224 && ord($c[0]) <= 239)
        return (ord($c[0]) - 224) * 4096 + (ord($c[1]) - 128) * 64 + (ord($c[2]) - 128);
    if (ord($c[0]) >= 240 && ord($c[0]) <= 247)
        return (ord($c[0]) - 240) * 262144 + (ord($c[1]) - 128) * 4096
            + (ord($c[2]) - 128) * 64 + (ord($c[3]) - 128);
    if (ord($c[0]) >= 248 && ord($c[0]) <= 251)
        return (ord($c[0]) - 248) * 16777216 + (ord($c[1]) - 128) * 262144
            + (ord($c[2]) - 128) * 4096 + (ord($c[3]) - 128) * 64 + (ord($c[4]) - 128);
    if (ord($c[0]) >= 252 && ord($c[0]) <= 253)
        return (ord($c[0]) - 252) * 1073741824 + (ord($c[1]) - 128) * 16777216
            + (ord($c[2]) - 128) * 262144 + (ord($c[3]) - 128) * 4096
            + (ord($c[4]) - 128) * 64 + (ord($c[5]) - 128);
    if (ord($c[0]) >= 254 && ord($c[0]) <= 255) // error
        return FALSE;
    return 0;
} // function _uniord()
```

and

```php
function _unichr($o) {
    if (function_exists('mb_convert_encoding')) {
        return mb_convert_encoding('&#' . intval($o) . ';', 'UTF-8', 'HTML-ENTITIES');
    } else {
        return chr(intval($o));
    }
} // function _unichr()
```

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9361303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/401898/']} | jdg_380640
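The byte arithmetic _uniord performs is just manual UTF-8 decoding: the lead byte tells you the sequence length, and each continuation byte contributes 6 more payload bits (hence the multipliers 64, 4096, and 262144). The 5- and 6-byte branches in the PHP version come from the original UTF-8 definition; modern UTF-8 (RFC 3629) stops at 4 bytes. A Python sketch of the same math, for illustration only (in practice Python's built-in ord/chr already do this):

```python
def uniord(ch):
    """Decode one UTF-8 encoded character to its code point by hand."""
    b = ch.encode('utf-8')
    if b[0] <= 0x7F:                       # 1-byte sequence (ASCII)
        return b[0]
    if 0xC0 <= b[0] <= 0xDF:               # 2-byte sequence
        return (b[0] - 0xC0) * 64 + (b[1] - 0x80)
    if 0xE0 <= b[0] <= 0xEF:               # 3-byte sequence
        return (b[0] - 0xE0) * 4096 + (b[1] - 0x80) * 64 + (b[2] - 0x80)
    if 0xF0 <= b[0] <= 0xF7:               # 4-byte sequence
        return ((b[0] - 0xF0) * 262144 + (b[1] - 0x80) * 4096
                + (b[2] - 0x80) * 64 + (b[3] - 0x80))
    raise ValueError("invalid UTF-8 lead byte")

print(uniord("A"))   # 65
print(uniord("⽇"))  # 12103, matching the question's example
```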