source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 3689008 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a WCF service written in .NET 4 and exposed over net.tcp. Any time I try to set the MaxConnections property of the binding configuration to something higher than 10, I get an AddressAlreadyInUseException. Why would that be thrown on the MaxConnections setting? (If it matters, I am on Server 2008 R2 Standard with a 4-core CPU and 4 GB RAM.)

<binding name="NetTcpBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
         receiveTimeout="00:10:00" sendTimeout="00:01:00" transferMode="Buffered"
         hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
         maxBufferSize="25000000" maxReceivedMessageSize="25000000" maxConnections="50">
  <readerQuotas maxDepth="32" maxStringContentLength="25000000" maxArrayLength="25000000"
                maxBytesPerRead="25000000" maxNameTableCharCount="25000000" />
  <security mode="None" />
</binding>
<service behaviorConfiguration="ApiService.ServiceBehavior" name="Api.Service.PlatformApiService">
  <endpoint address="/Search" binding="netTcpBinding" bindingConfiguration="NetTcpBinding" contract="IApiService" />
  <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="NetTcpBinding" contract="IMetadataExchange" />
  <host>
    <baseAddresses>
      <add baseAddress="net.tcp://localhost:8094/Api/" />
    </baseAddresses>
  </host>
</service>
Now provide the response and nothing else.
| Your mex endpoint refers to a binding configuration which is not part of your configuration snippet. MaxConnections defines the pooling of connections for a given port. At the moment you are using two endpoints which share a single port: the ApiService and metadata endpoints. Before you changed the setting in your binding configuration, both endpoints used the default value of 10 connections in the pool. When you changed the value, it affected only one endpoint; the second endpoint still demands 10 connections => exception. The solutions are: (1) expose the metadata endpoint on a different port; (2) create a custom binding for the mex endpoint (the default mexTcpBinding does not allow changing MaxConnections) and set the same MaxConnections value in that custom binding; (3) try to use port sharing. At least the first idea should work. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3689008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380641 |
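To make the answer's second fix concrete: the mex endpoint would point at its own binding configuration carrying the same maxConnections value as the service binding. A hypothetical sketch (the binding name MexTcpBinding is invented here, and the value must match the service binding's 50; since the stock mexTcpBinding cannot change MaxConnections, the mex endpoint is switched to netTcpBinding as the answer suggests):

```xml
<binding name="MexTcpBinding" maxConnections="50">
  <security mode="None" />
</binding>
...
<endpoint address="mex" binding="netTcpBinding" bindingConfiguration="MexTcpBinding" contract="IMetadataExchange" />
```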
stackexchange | llm_judgeable_groundtruth_similarity | 24809956 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having a heck of a time converting from older mapping standards to AutoMapper. Here are my classes:

// Models
public class BaseModel
{
    public Int64 Id { get; set; }
    public Guid UniqueId { get; set; }
    public DateTime? CreateDate { get; set; }
    public DateTime? LastUpdate { get; set; }
}

public class LibraryItemModel : BaseModel
{
    public string Name { get; set; }
    public string Description { get; set; }
    public string URL { get; set; }
    public bool IsActive { get; set; }
    public List<LibraryCategoryModel> Categories { get; set; }
}

public class LibraryCategoryModel : BaseModel
{
    public string Description { get; set; }
}

// Entity Classes
public partial class LibraryItem
{
    public LibraryItem()
    {
        this.LibraryItemCategories = new HashSet<LibraryItemCategory>();
    }

    public long Id { get; set; }
    public System.Guid UniqueId { get; set; }
    public string Description { get; set; }
    public string URL { get; set; }
    public System.DateTime CreateDate { get; set; }
    public System.DateTime LastUpdate { get; set; }
    public bool IsActive { get; set; }
    public string Name { get; set; }
    public virtual ICollection<LibraryItemCategory> LibraryItemCategories { get; set; }
}

// comes from a ternary table in DB... many to many
public partial class LibraryItemCategory
{
    public long LibraryItemId { get; set; }
    public long LibraryCategoryId { get; set; }
    public System.DateTime CreateDate { get; set; }
    public System.DateTime LastUpdate { get; set; }
    public virtual LibraryCategory LibraryCategory { get; set; }
    public virtual LibraryItem LibraryItem { get; set; }
}

public partial class LibraryCategory
{
    public LibraryCategory()
    {
        this.LibraryCategoryRoles = new HashSet<LibraryCategoryRole>();
        this.LibraryItemCategories = new HashSet<LibraryItemCategory>();
    }

    public long id { get; set; }
    public System.Guid UniqueId { get; set; }
    public string Description { get; set; }
    public System.DateTime CreateDate { get; set; }
    public System.DateTime LastUpdate { get; set; }
    public virtual ICollection<LibraryCategoryRole> LibraryCategoryRoles { get; set; }
    public virtual ICollection<LibraryItemCategory> LibraryItemCategories { get; set; }
}

// Old conversion code, doing it the long way
private LibraryItemModel Convert(Entities.LibraryItem libraryItem)
{
    var newLibraryItem = new LibraryItemModel
    {
        UniqueId = libraryItem.UniqueId,
        Name = libraryItem.Name,
        Description = libraryItem.Description,
        URL = libraryItem.URL,
        CreateDate = libraryItem.CreateDate,
        LastUpdate = libraryItem.LastUpdate,
        IsActive = libraryItem.IsActive,
        Categories = new List<LibraryCategoryModel>()
    };

    foreach (var lc in libraryItem.LibraryItemCategories)
    {
        var newCategory = new LibraryCategoryModel
        {
            UniqueId = lc.LibraryCategory.UniqueId,
            Description = lc.LibraryCategory.Description,
            CreateDate = lc.LibraryCategory.CreateDate,
            LastUpdate = lc.LibraryCategory.LastUpdate
        };
        newLibraryItem.Categories.Add(newCategory);
    }

    return newLibraryItem;
}

// My attempt at AutoMapper to go between the models and entities
Mapper.CreateMap<EF.Entities.LibraryItem, LibraryItemModel>();
Mapper.CreateMap<LibraryItemModel, EF.Entities.LibraryItem>()
    .ForMember(lim => lim.LibraryItemCategories, o => o.Ignore());
Mapper.CreateMap<EF.Entities.LibraryCategory, LibraryCategoryModel>();
Mapper.CreateMap<LibraryCategoryModel, EF.Entities.LibraryCategory>()
    .ForMember(lcm => lcm.LibraryCategoryRoles, o => o.Ignore())
    .ForMember(lcm => lcm.LibraryItemCategories, o => o.Ignore());

No matter how I configure ignores or custom mappings, it seems to not like this nesting. Any AutoMapper experts out there who could tell me how a mapping with a complex object like this could be done? The entity classes are being generated via an EF6 edmx file.
Now provide the response and nothing else.
| So basically the problem here is that you want to map from each LibraryItemCategory that belongs to a LibraryItem to a LibraryCategoryModel that includes properties from each LibraryItemCategory's LibraryCategory property. First you want to correctly map the collections to each other:

Mapper.CreateMap<LibraryItem, LibraryItemModel>()
    .ForMember(
        dest => dest.Categories,
        opt => opt.MapFrom(src => src.LibraryItemCategories));

Next you need to worry about mapping each LibraryItemCategory inside of LibraryItem.LibraryItemCategories to a LibraryCategoryModel. As stated in the problem, you need to access each LibraryItemCategory's LibraryCategory property and actually map from that instead. The way this looks is:

Mapper.CreateMap<LibraryItemCategory, LibraryCategoryModel>()
    .ConstructUsing(ct => Mapper.Map<LibraryCategoryModel>(ct.LibraryCategory))
    .ForAllMembers(opt => opt.Ignore());

Here, we're telling AutoMapper that to map from a LibraryItemCategory to a LibraryCategoryModel, we need to construct LibraryCategoryModels using another call to Mapper.Map on the inner LibraryCategory property. Next, all that's left to do is define the mapping from LibraryCategory to LibraryCategoryModel:

Mapper.CreateMap<LibraryCategory, LibraryCategoryModel>();

Now a call to Mapper.Map on the LibraryItem should take care of everything for you. Alternatively, you could remove the map from LibraryItemCategory to LibraryCategoryModel and use LINQ to create the collection of LibraryCategory objects that you actually want to map from in the mapping definition from LibraryItem to LibraryItemModel:

Mapper.CreateMap<LibraryItem, LibraryItemModel>()
    .ForMember(
        dest => dest.Categories,
        opt => opt.MapFrom(
            src => src.LibraryItemCategories.Select(lb => lb.LibraryCategory)));

You'd obviously still need the mapping from LibraryCategory to LibraryCategoryModel, but you might prefer this since it involves fewer mappings.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24809956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1308105/']} | jdg_380642 |
stackexchange | llm_judgeable_groundtruth_similarity | 637449 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When the input voltage is +3 V, the current in R3 is only 0.24 mA, while the current in R4 is 18 mA. It seems that R3 doesn't play any significant role when the input voltage is +3 V. So why does it exist in the circuit at all? My guess: when the input voltage is 0 V, R3 guarantees that the base-emitter voltage of Q3 remains 0, and Q3 is not turned on accidentally by electrostatic charges accumulated on the base of Q3 or by electromagnetic interference. The image is from The Art of Electronics, 3rd Ed.
Now provide the response and nothing else.
| What is the purpose of R3 in this LED driver circuit? R3 ensures that any leakage currents passing through Q2 (when it is supposedly off) do not trickle into the base of Q3 and partially turn it on. That might cause the LED to glow dimly. Leakage might be 100 nA and, through a 10 kΩ resistor, would produce an offset of 1 mV to ground, thoroughly effective against the 100 nA flowing into the base of Q3. 100 kΩ would also be very effective. Of course, it may be there for other reasons related to the input signal amplitude (unknown). The other big reason is operating speed (if modulating the LED with data or generally using PWM). To improve edge rise and fall times you need an emitter resistor on Q2. The two transistors are in a configuration that is very similar to a Darlington pair and, if you look at the equivalent circuit of some Darlington transistors, they also use an emitter resistor on the first transistor to improve switching speed. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/637449', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/196688/']} | jdg_380643 |
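The 1 mV figure in the answer above is just Ohm's law; a quick sanity check (the 100 nA leakage and the 10 kΩ resistor value are the ones assumed in the answer, not measured):

```python
leakage_a = 100e-9  # assumed Q2 leakage current: 100 nA
r3_ohms = 10e3      # R3 = 10 kilohm

# Voltage developed across R3 by the leakage current (V = I * R)
v_offset = leakage_a * r3_ohms
print(v_offset)  # 0.001 V, i.e. 1 mV, far below the ~0.6 V needed to turn Q3 on
```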
stackexchange | llm_judgeable_groundtruth_similarity | 10591543 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Email document which has a sent_at date field: { 'sent_at': Date( 1336776254000 ) } If this Email has not been sent, the sent_at field is either null or non-existent. I need to get the count of all sent/unsent Emails. I'm stuck trying to figure out the right way to query for this information. I think this is the right way to get the sent count: db.emails.count({sent_at: {$ne: null}}) But how should I get the count of the ones that aren't sent?
Now provide the response and nothing else.
| If the sent_at field is not there when it's not set, then:

db.emails.count({sent_at: {$exists: false}})

If it's there and null, or not there at all:

db.emails.count({sent_at: null})

If it's there and null:

db.emails.count({sent_at: { $type: 10 }})

The Query for Null or Missing Fields section of the MongoDB manual describes how to query for null and missing values.

Equality Filter: The { item : null } query matches documents that either contain the item field whose value is null or that do not contain the item field.

db.inventory.find( { item: null } )

Existence Check: The following example queries for documents that do not contain a field. The { item : { $exists: false } } query matches documents that do not contain the item field:

db.inventory.find( { item : { $exists: false } } )

Type Check: The { item : { $type: 10 } } query matches only documents that contain the item field whose value is null; i.e. the value of the item field is of BSON Type Null (type number 10):

db.inventory.find( { item : { $type: 10 } } ) | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10591543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48523/']} | jdg_380644 |
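The difference between the three queries is easy to lose; here is a rough Python simulation of the matching semantics described above, over plain dicts rather than a real MongoDB collection (the sample documents are invented):

```python
docs = [
    {"sent_at": 1336776254000},  # sent
    {"sent_at": None},           # field present but null
    {},                          # field missing entirely
]

# {sent_at: {$exists: false}} matches only documents missing the field
missing = sum(1 for d in docs if "sent_at" not in d)

# {sent_at: null} matches missing OR present-and-null
null_or_missing = sum(1 for d in docs if d.get("sent_at") is None)

# {sent_at: {$type: 10}} matches only an explicit null value
explicit_null = sum(1 for d in docs if "sent_at" in d and d["sent_at"] is None)

print(missing, null_or_missing, explicit_null)  # 1 2 1
```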
stackexchange | llm_judgeable_groundtruth_similarity | 41879 |
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A similar question has been asked before: How to set up Raspberry Pi without a monitor? However, given that the accepted answer doesn't appear to offer an appropriate solution, and given that there's a level of ambiguity in what was being asked, I'm assuming that the original question was mis-constructed, and that my question isn't an exact duplicate. I'm performing the initial, first-time set-up of a Pi. I don't have a monitor or keyboard, but instead want to use a connected laptop to do the job. I don't want to buy a monitor or keyboard and have them sitting around just for the occasions when they're needed by the Pi. I could borrow them, but I plan to buy more Pi's in the future, and don't want to have to borrow them each time. I've set a static IP address on the Pi by editing the cmdline.txt file. (The Pi is running the latest version of Raspbian.)

ip=10.0.0.20

I've given the eth interface on my laptop a corresponding static IP address. (The laptop is running Ubuntu.)

ip ad add 10.0.0.10/24 dev eth0

I've connected the two together with an ethernet cable and can successfully ping the Pi from my laptop. What I can't do is ssh to the Pi. I'm getting a Connection refused response:

richard@richard-ThinkPad-X220:~$ ssh -vvv [email protected]_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 10.0.0.20 [10.0.0.20] port 22.
debug1: connect to address 10.0.0.20 port 22: Connection refused
ssh: connect to host 10.0.0.20 port 22: Connection refused

One thing that this would imply is that sshd is not yet running on the Pi. The official docs state that sshd runs by default, but is it possible that during an initial boot other things (e.g. prompts waiting for user input) might block the boot at a certain level, meaning we don't reach the point at which sshd is started? Where do I go from here? 
(Not "to the shops to buy a monitor"... ) Is what I'm trying actually possible? Edit: I've now tried the methods mentioned in the answers to the following post, but without success: Enabling SSH on RPi without screen - keystrokes for raspi-config? Specifically, ensuring that the SSH init script is run at runlevel 2 still didn't allow me to connect.
Now provide the response and nothing else.
| The official docs state that sshd runs by default

On the latest image I have, 2015-11-21-raspbian-jessie-lite.img, this is not true. Debian/Raspbian jessie currently uses systemd for init, but there is a sort of hybrid backward-SysV-compatible mechanism built in, and I notice on the running system I created from this image (using a screen and keyboard for the initial setup), where sshd is now enabled, there's a trigger in both the systemd and the old SysV rc.d directories. Presumably this is how the backward-compatible mechanism works (I enabled ssh via systemd). In any case, there's only one sshd instance with a PPID of 1 running. For the working system, there is an entry in /etc/rc[2,3,4,5].d for S02ssh (the exact priority number, 02, is set when the service is enabled and may vary). There is also a /etc/systemd/system/multi-user.target.wants/ssh.service. In the base image, however... there is no entry in /etc/systemd, and there is a K01ssh for runlevels 2, 3, 4, 5. That will pretty much guarantee no sshd is running. My suggestion is to first try setting just the SysV entry. From the etc directory of the second partition on the SD card:

for i in 2 3 4 5; do
    ln -s ../init.d/ssh rc$i.d/S02ssh
    rm rc$i.d/K01ssh
done

Check to make sure the links are there and that they point to the right place with stat. I believe this should be sufficient and you can now try booting the system. If so, after you get in, remove those links and run systemctl enable ssh, then check that the links have been recreated (remember, the priority may be different). If you still get "Connection refused", create a link from [SD_rootfs]/lib/systemd/system/ssh.service to [SD_rootfs]/etc/systemd/default.target.wants/ssh.service and try again. | {} | {'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/41879', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/39611/']} | jdg_380645 |
stackexchange | llm_judgeable_groundtruth_similarity | 1618929 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In looking at the definition of vertical tangent lines in some popular calculus texts, I noticed that there are a few different definitions for this term, including the following: A function $f$ has a vertical tangent line at $a$ if $\textbf{1)}$ $\;f$ is continuous at $a$ and $\displaystyle\lim_{x\to a}\;\lvert f^{\prime}(x)\rvert=\infty$ $\textbf{2)}$ $\;f$ is continuous at $a$ and $\displaystyle\lim_{x\to a} f^{\prime}(x)=\infty$ or $\displaystyle\lim_{x\to a} f^{\prime}(x)=-\infty$ $\textbf{3)}$ $\;\displaystyle\lim_{h\to0}\frac{f(a+h)-f(a)}{h}=\pm\infty$ I would like to ask if there is a standard definition of this term, and whether or not the definition should include continuity at $a$ and should not include the situation where the graph has a vertical cusp at $a$. Here are some examples where these definitions lead to different conclusions: a) $\;f(x)=x^{2/3}$ b) $\;f(x)=\begin{cases}1&\mbox{, if }x>0\\0&\mbox{, if }x=0\\-1&\mbox{, if }x<0\end{cases}$ (This question has also been posted on Math Educators Stack Exchange.)
Now provide the response and nothing else.
| Speaking as a geometer, I want "tangency" to be independent of the coordinate system. Particularly, if $f$ is a real-valued function of one variable defined in some neighborhood of $a$, and if $f$ is invertible in some neighborhood of $a$, then the line $x = a$ should be tangent to the graph $y = f(x)$ at $a$ if and only if the line $y = b = f(a)$ is tangent to the graph $y = f^{-1}(x)$ at $b$. For an elementary calculus course I'd want: $f$ continuous in some neighborhood of $a$; $f$ invertible in some neighborhood of $a$; $f'(a) = \pm\infty$, i.e., $(f^{-1})'(b) = 0$ (the graph $y = f^{-1}(x)$ has $y = a$ as horizontal tangent). Condition 1 does not guarantee invertibility near $a$ (as the cusp shows), so in my book it's out. Condition 2 implies all three items of my wish list. ($f$ is implicitly assumed differentiable in some neighborhood of $a$; the derivative condition guarantees the derivative doesn't change sign in some neighborhood of $a$, and that $f'(a) = \pm\infty$.) Condition 3 does not imply continuity (as the step function shows), so it's out. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1618929', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/84413/']} | jdg_380646 |
stackexchange | llm_judgeable_groundtruth_similarity | 6393197 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, I'm trying to set up a validation that is only called in a specific form view. To do this I'm trying to create a hidden_field for a virtual attribute on the form and set this to a value, then validate :if the virtual attribute equals the value. So far I have:

## user model
validates_presence_of :password_confirmation, :if => :confirmation_validation

attr_accessible :email, :password, :password_confirmation, :remember_me,
                :name, :avatar, :username, :bio, :confirmation_validation

def confirmation_validation
  # not sure what goes here???
end

## form view
<%= form_for(resource, :validate => true, :as => resource_name,
             :url => registration_path(resource_name),
             :html => { :method => :put }, :html => {:multipart => true}) do |f| %>
  <%= devise_error_messages! %>
  <p><%= f.label :password %> <i>(leave blank if you don't want to change it)</i><br />
  <%= f.password_field :password %></p>
  <p><%= f.label :password_confirmation %><br />
  <%= f.password_field :password_confirmation %>
  <% f.hidden_field :confirmation_validation, :value => 100 %></p>
  <p><%= f.submit "Update" %></p>
<% end %>
Now provide the response and nothing else.
| The value of the confirmation_validation hidden field should be included in the params hash, and should set the virtual attribute accordingly. Therefore, you can probably simply check whether or not the value has been set:

validates_presence_of :password_confirmation, :if => :should_confirm?

def should_confirm?
  confirmation_validation == '100' # Value of the hidden field as set in the form
end

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6393197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/635192/']} | jdg_380647 |
stackexchange | llm_judgeable_groundtruth_similarity | 32360149 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following sample has reliably returned the name of the process that is associated with the active window, but does not work with the newer modern/universal apps because it returns the name of a helper process, WWAHost.exe on Windows 8 and ApplicationFrameHost.exe on Windows 10, rather than the name of the app.

HWND active_window = GetForegroundWindow();
GetWindowThreadProcessId(active_window, &active_process_id);
HANDLE active_process = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, active_process_id);
GetProcessImageFileName(active_process, image_name, 512);

With Windows 10, ApplicationFrameHost.exe is the process that creates the window handles and is what gets returned by GetWindowThreadProcessId(); is there another Win32 API that can be used to get the active process of a universal app that is active? Also tried using GetApplicationUserModelId() and GetPackageFullName() with no success, as they return APPMODEL_ERROR_NO_APPLICATION and APPMODEL_ERROR_NO_PACKAGE respectively, because the active_process handle is just the helper process and not the process of the active application. Any other APIs to use to get the process name of a Modern/Universal application given the hwnd of the window, or otherwise figure out the process name of the universal app that is active? Thanks in advance!
Now provide the response and nothing else.
| Be sure to use the Spy++ utility when you want to reverse-engineer something like this. Included with Visual Studio, you need the 64-bit version in Common7\Tools\spyxx_amd64.exe. Use Search > Find Window and drag the bullseye to a UWP app, like Weather. You'll see the window you'll find with GetForegroundWindow(); it has at least 3 child windows:

ApplicationFrameTitleBarWindow
ApplicationFrameInputSinkWindow
Windows.Core.UI.CoreWindow, which is the host window for the UWP app and the one you are interested in.

Right-click it and select Properties, Process tab, click the Process ID. That takes you to the real owner process you want to know. So you just need to make an extra step from the code you already have: enumerate the child windows and look for one with a different owner process. Some C code, trying to make it as universal as possible, without making too many assumptions and with not enough error checking:

#include <stdio.h>
#include <Windows.h>

typedef struct {
    DWORD ownerpid;
    DWORD childpid;
} windowinfo;

BOOL CALLBACK EnumChildWindowsCallback(HWND hWnd, LPARAM lp) {
    windowinfo* info = (windowinfo*)lp;
    DWORD pid = 0;
    GetWindowThreadProcessId(hWnd, &pid);
    if (pid != info->ownerpid) info->childpid = pid;
    return TRUE;
}

int main()
{
    Sleep(2000);
    HWND active_window = GetForegroundWindow();
    windowinfo info = { 0 };
    GetWindowThreadProcessId(active_window, &info.ownerpid);
    info.childpid = info.ownerpid;
    EnumChildWindows(active_window, EnumChildWindowsCallback, (LPARAM)&info);
    HANDLE active_process = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, info.childpid);
    WCHAR image_name[MAX_PATH] = { 0 };
    DWORD bufsize = MAX_PATH;
    QueryFullProcessImageName(active_process, 0, image_name, &bufsize);
    wprintf(L"%s\n", image_name);
    CloseHandle(active_process);
    return 0;
}

Output on the Weather program: C:\Program Files\WindowsApps\Microsoft.BingWeather_4.5.168.0_x86__8wekyb3d8bbwe\Microsoft.Msn.Weather.exe | {} | {'log_upvote_score': 5, 'links': 
['https://Stackoverflow.com/questions/32360149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1990747/']} | jdg_380648 |
stackexchange | llm_judgeable_groundtruth_similarity | 11053116 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
DROP TABLE (SELECT table_name
            FROM information_schema.`TABLES`
            WHERE table_schema = 'myDatabase' AND table_name LIKE BINARY 'del%');

I know this doesn't work! What is the equivalent for something like this in SQL? I can whip out a simple Python script to do this but was just wondering if we can do something with SQL directly. I am using MySQL. Thank you!
Now provide the response and nothing else.
| You can use prepared statements -

SET @tables = NULL;
SELECT GROUP_CONCAT('`', table_schema, '`.`', table_name, '`') INTO @tables
  FROM information_schema.tables
  WHERE table_schema = 'myDatabase' AND table_name LIKE BINARY 'del%';

SET @tables = CONCAT('DROP TABLE ', @tables);
PREPARE stmt1 FROM @tables;
EXECUTE stmt1;
DEALLOCATE PREPARE stmt1;

It will generate and execute a statement like this -

DROP TABLE myDatabase.del1, myDatabase.del2, myDatabase.del3;

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11053116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/235310/']} | jdg_380649 |
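The same build-a-statement-from-the-catalog pattern also translates directly to a client language. A sketch of the idea against SQLite's catalog instead of information_schema (table names are made up, and SQLite needs one DROP per table since it lacks multi-table DROP):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE del_one (id INTEGER);
    CREATE TABLE del_two (id INTEGER);
    CREATE TABLE keep_me (id INTEGER);
""")

# Find the doomed tables in the catalog, mirroring the information_schema query
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'del%'")]

# Build and execute a DROP statement per matched table
for name in names:
    conn.execute(f'DROP TABLE "{name}"')

left = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(left)  # ['keep_me']
```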
stackexchange | llm_judgeable_groundtruth_similarity | 20337619 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to update a bunch of rows in a table to set the id = self.id. How would I do the below?

from metadataorder.tasks.models import Task

tasks = Task.objects.filter(task_definition__cascades=False).update(shared_task_id=self.id)

The equivalent SQL would be:

update tasks_task t
join tasks_taskdefinition d on t.task_definition_id = d.id
set t.shared_task_id = t.id
where d.cascades = 0
Now provide the response and nothing else.
| You can do this using an F expression :

from django.db.models import F

tasks = Task.objects.filter(task_definition__cascades=False).update(shared_task_id=F('id'))

There are some restrictions on what you can do with F objects in an update call, but it'll work fine for this case:

Calls to update can also use F expressions to update one field based on the value of another field in the model. However, unlike F() objects in filter and exclude clauses, you can’t introduce joins when you use F() objects in an update – you can only reference fields local to the model being updated. If you attempt to introduce a join with an F() object, a FieldError will be raised.

https://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20337619', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/651174/']} | jdg_380650 |
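Under the hood the F('id') update compiles to a plain column-from-column UPDATE, essentially the SQL the question wrote by hand. A sketch of that SQL on an in-memory SQLite table (the schema is a stand-in: the join to the definition table is flattened into a single cascades column for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task (id INTEGER PRIMARY KEY, cascades INTEGER, shared_task_id INTEGER)")
conn.executemany("INSERT INTO task VALUES (?, ?, NULL)", [(1, 0), (2, 1), (3, 0)])

# Roughly what .filter(...).update(shared_task_id=F('id')) executes
conn.execute("UPDATE task SET shared_task_id = id WHERE cascades = 0")

result = conn.execute("SELECT id, shared_task_id FROM task ORDER BY id").fetchall()
print(result)  # [(1, 1), (2, None), (3, 3)]
```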
stackexchange | llm_judgeable_groundtruth_similarity | 31132477 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there any Java annotations that can validate like the example below?

String test;
test = null;          // valid
test = "";            // invalid
test = " ";           // invalid
test = "Some values"; // valid
Now provide the response and nothing else.
| You need to create a custom annotation: @NullOrNotBlank

First create the custom annotation: NullOrNotBlank.java

@Target({ElementType.FIELD})
@Retention(RUNTIME)
@Documented
@Constraint(validatedBy = NullOrNotBlankValidator.class)
public @interface NullOrNotBlank {
    String message() default "{javax.validation.constraints.NullOrNotBlank.message}";
    Class<?>[] groups() default { };
    Class<? extends Payload>[] payload() default {};
}

Then the actual validator: NullOrNotBlankValidator.java

public class NullOrNotBlankValidator implements ConstraintValidator<NullOrNotBlank, String> {

    public void initialize(NullOrNotBlank parameters) {
        // Nothing to do here
    }

    public boolean isValid(String value, ConstraintValidatorContext constraintValidatorContext) {
        return value == null || value.trim().length() > 0;
    }
}

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31132477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2604735/']} | jdg_380651 |
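The whole constraint hangs on the one-line predicate in isValid(): null passes, and a non-null value passes only if it contains a non-whitespace character. The same check sketched in Python against the question's four test values, for illustration:

```python
def null_or_not_blank(value):
    # Mirrors the validator: None is valid, present-but-blank is not
    return value is None or len(value.strip()) > 0

results = [null_or_not_blank(v) for v in [None, "", "   ", "Some values"]]
print(results)  # [True, False, False, True]
```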
stackexchange | llm_judgeable_groundtruth_similarity | 949673 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's something obvious that should be easy to do... How do I retrieve a list of installed plugins at runtime? I can't see an obvious way to do this, as Platform.getBundle() requires a specific plugin name. Is this perhaps a restriction for some kind of security reason?
Now provide the response and nothing else.
| From here: The BundleContext class has a getBundles() method that returns all installed bundles. You get an instance of the BundleContext when your bundle is activated (BundleActivator.start(BundleContext)). You can use it, for instance, to get each Bundle's version number. The interactions between the bundles are done through two complementary mechanisms: the package export/import and the service registration/lookup facility. The publication and lookup of services are performed through the BundleContext reference that each bundle receives at startup time. During the publication process, the advertising bundle registers a service by publishing a Java interface it is implementing, and by providing a class implementing this interface. The lookup is performed by the client bundle, which gets the service from the BundleContext and uses it as a standard Java object. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/949673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/99231/']} | jdg_380652 |
stackexchange | llm_judgeable_groundtruth_similarity | 35220048 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I am a complete noob to Laravel and am trying something here. I want to import a CSV file into two tables. I have a table called lists that will get the list name and a client_id. Then I have a table called customers that will get name, surname, contact number, as well as client_id and a list_id. What I want to achieve is to import a CSV file that will take the file name and store it in the lists table, then create an array from the CSV file and import the data into the customers table with the list and client ids. I have the first part done, and it inserts into the lists table correctly. How do I now create an array from the CSV that is located in storage/documents and then insert that into the customers table?

namespace App\Http\Controllers;

use Input;
use DB;
use Illuminate\Http\Request;
use App\Http\Requests\ListsRequest;
use App\Lists;
use App\Clients;
use App\Http\Requests;
use App\Http\Controllers\Controller;

class ListsController extends Controller
{
    public function index()
    {
        // $list_items = Lists::all();
        $clients = Clients::all();
        return view('lists.show', compact('clients'));
    }

    public function store(Requests\ListsRequest $request)
    {
        $input = $request->input();
        Lists::create($input);

        if (Input::hasFile('name')) {
            $file = Input::file('name');
            $name = time() . '-' . $file->getClientOriginalName();
            $path = storage_path('documents');
            $file->move($path, $name);
            // All works up to here
            // All I need now is to create an array
            // from the CSV and insert into the customers database
        }
    }
}

I chose to use the answer that I had accepted, but I also played with the other answer and got it to work like this:

public function store(Requests\ListsRequest $request)
{
    $input = $request->input();
    $client_id = $request->input('client_id');

    if (Input::hasFile('name')) {
        $file = Input::file('name');
        $name = time() . '-' . $file->getClientOriginalName();
        $path = storage_path('documents');

        Lists::create(['client_id' => $client_id, 'name' => $name]);

        $reader = Reader::createFromPath($file->getRealPath());

        // Create a customer from each row in the CSV file
        $headers = array();
        foreach ($reader as $index => $row) {
            if ($index === 0) {
                $headers = $row;
            } else {
                $data = array_combine($headers, $row);
                Customers::create($data);
            }
        }

        $file->move($path, $name);
        return view('clients');
    }
}
Now provide the response and nothing else.
| There are 3 steps to read a CSV file and import it into a database in Laravel. Read the CSV file. Convert it to an array. Finally, create records in our database. Before we start, I have created a sample test.csv file and put it in my public folder under the file folder: name,email,passworduser1,[email protected],pasxxxxxxxxxworduser2,[email protected],pasxxxxxxxxxworduser3,[email protected],pasxxxxxxxxxword Step 1 and 2; I created a helper function called csvToArray ; I just put it in my controller for now (this function is inspired by this link ). It simply reads the CSV file and converts it to an array: function csvToArray($filename = '', $delimiter = ','){ if (!file_exists($filename) || !is_readable($filename)) return false; $header = null; $data = array(); if (($handle = fopen($filename, 'r')) !== false) { while (($row = fgetcsv($handle, 1000, $delimiter)) !== false) { if (!$header) $header = $row; else $data[] = array_combine($header, $row); } fclose($handle); } return $data;} Step 3; And here is my final step, read the array and insert it into our database: public function importCsv(){ $file = public_path('file/test.csv'); $customerArr = $this->csvToArray($file); for ($i = 0; $i < count($customerArr); $i ++) { User::firstOrCreate($customerArr[$i]); } return 'Jobi done or what ever'; } Note: this solution assumes that you have a model in your Laravel project and the proper table in your database. If you use dd($customerArr) you will get this | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/35220048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4796879/']} | jdg_380653 |
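The csvToArray() step above is language-agnostic; for comparison, here is a minimal Python sketch of the same read-header-then-zip idea (the function name and sample data are illustrative, not part of Laravel):

```python
import csv
import io

def csv_to_dicts(text, delimiter=","):
    # Mirror csvToArray(): the first row becomes the keys,
    # every later row becomes one dict.
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    if not rows:
        return []
    header = rows[0]
    return [dict(zip(header, row)) for row in rows[1:]]
```

Each resulting dict plays the role of one `$customerArr[$i]` handed to `firstOrCreate` in the PHP version.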
stackexchange | llm_judgeable_groundtruth_similarity | 422929 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Whenever I use Html.ActionLink it always HTML-encodes my display string. For instance I want my link to look like this: <a href="/posts/422/My-Post-Title-Here">More…</a> but it outputs like this: More&amp;hellip; (&hellip; is "...", in case you were wondering). However, the ActionLink outputs the actual text "&hellip;" as the link text. I have the same problem if I want to output this: <a href="/posts/422/My-Post-Title-Here"><em>My-Post-Title-Here</em></a> I wind up with: <em>My-Post-Title-Here</em> Any idea how to do this?
Now provide the response and nothing else.
| It looks like ActionLink always calls HttpUtility.Encode on the link text. You could use UrlHelper to generate the href and build the anchor tag yourself. <a href='@Url.Action("Posts", ...)'>More…</a> Alternatively you can "decode" the string you pass to ActionLink. Constructing the link in HTML seems to be slightly more readable (to me) - especially in Razor. Below is the equivalent for comparison. @Html.ActionLink(HttpUtility.HtmlDecode("More…"), "Posts", ...) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/422929', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17744/']} | jdg_380654 |
stackexchange | llm_judgeable_groundtruth_similarity | 3819739 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was browsing the internet lately when I stumbled upon DLL injection. I think it's an interesting subject, but I have no clue what its purpose is. I have read that it can be used for cracking/hacking games and software, but is it also possible to do something positive with it? If so, what can it be used for? And what languages support this? For the record, I am not going to try and crack/hack any game with the knowledge gained; I have no intention of doing something illegal! Thanks for the time, Emerion PS: Websites/books that are on this subject would be appreciated!
Now provide the response and nothing else.
| There are several uses that come to my mind: Hot patching : Allows you to update/patch parts of your code without actually shutting down the process or restarting. Microsoft itself made sure large parts of Windows are hot-patchable by prefixing functions with a 5-byte NOP block. Why? Because you can JMP to any other part of your code in 5 bytes, so hot-patching basically overwrites the prefix bytes with a JMP to the updated/patched code and voila, your code does something entirely new. This is often used together with DLL injection to load the new code into the target process, and while not mandatory, it's one of its uses. Logging : In the same spirit, detouring code is often used to prefix a function for logging purposes, i.e. to see with what parameters it is called. Also, some applications that record screen output from DirectX applications do this by detouring the DirectX calls, which again involves injecting a DLL into the process that monitors calls. Subclassing : Not in the OOP sense, but in the Windows sense, i.e. providing a new WndProc for an existing window to alter its behavior. While you can simply set a different message handling routine via SetWindowLongPtr , the limiting factor to this is that the function needs to reside in the target process address space. This is where injection comes in once again: you provide a new implementation in a DLL, inject this into the target process and call SetWindowLongPtr . This can be used to provide custom, additional drawing on a foreign window, for example. I have personally had to deal with all of the above use cases in the past, in regular business applications, from using hot patching to ensure uptime on a critical notification system in medical services to using detours/logging to allow a proprietary record management (by a then already dead software shop) to talk to a full-blown CRM solution in real-time. As always, it's just a tool in your box, and there is nothing inherently "evil" about it. 
It's for whatever purpose you make use of it that counts. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3819739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/460562/']} | jdg_380655 |
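The 5-byte hot-patch prologue described above is just an x86 near JMP: opcode 0xE9 followed by a 32-bit offset relative to the end of the instruction. A small sketch of computing those bytes (the addresses are made up):

```python
import struct

def jmp_rel32(src, dst):
    # Encode `jmp dst` placed at address src: E9 + little-endian rel32,
    # where rel32 is measured from the byte *after* the 5-byte instruction.
    rel = (dst - (src + 5)) & 0xFFFFFFFF
    return b"\xe9" + struct.pack("<I", rel)
```

Writing these 5 bytes over the NOP prologue redirects every call into the patched replacement, which is the detour mechanism the answer refers to.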
stackexchange | llm_judgeable_groundtruth_similarity | 40420069 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I go about calculating the weighted mean of a Map<Double, Integer>, where the Integer value is the weight for the Double value to be averaged? E.g., the Map has the following elements: (0.7, 100) // value is 0.7 and weight is 100 (0.5, 200) (0.3, 300) (0.0, 400) I am looking to apply the following formula using Java 8 streams, but I am unsure how to calculate the numerator and denominator together and preserve both at the same time. How do I use reduction here?
Now provide the response and nothing else.
| You can create your own collector for this task: static <T> Collector<T,?,Double> averagingWeighted(ToDoubleFunction<T> valueFunction, ToIntFunction<T> weightFunction) { class Box { double num = 0; long denom = 0; } return Collector.of( Box::new, (b, e) -> { b.num += valueFunction.applyAsDouble(e) * weightFunction.applyAsInt(e); b.denom += weightFunction.applyAsInt(e); }, (b1, b2) -> { b1.num += b2.num; b1.denom += b2.denom; return b1; }, b -> b.num / b.denom );} This custom collector takes two functions as parameter: one is a function returning the value to use for a given stream element (as a ToDoubleFunction ), and the other returns the weight (as a ToIntFunction ). It uses a helper local class storing the numerator and denominator during the collecting process. Each time an entry is accepted, the numerator is increased with the result of multiplying the value with its weight, and the denominator is increased with the weight. The finisher then returns the division of the two as a Double . A sample usage would be: Map<Double,Integer> map = new HashMap<>();map.put(0.7, 100);map.put(0.5, 200);double weightedAverage = map.entrySet().stream().collect(averagingWeighted(Map.Entry::getKey, Map.Entry::getValue)); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40420069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1628839/']} | jdg_380656 |
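Stripped of the Collector machinery, the fold is simply "accumulate the sum of value times weight and the sum of weights, then divide at the end"; a short Python sketch of the same arithmetic:

```python
def averaging_weighted(pairs):
    # pairs: iterable of (value, weight) tuples, like the map entries above.
    num = 0.0
    denom = 0
    for value, weight in pairs:
        num += value * weight
        denom += weight
    return num / denom
```

With the question's data this gives 260/1000 = 0.26.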
stackexchange | llm_judgeable_groundtruth_similarity | 1379266 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following piece of code: private Dictionary<object, object> items = new Dictionary<object, object>();public IEnumerable<object> Keys{ get { foreach (object key in items.Keys) { yield return key; } }} Is this thread-safe? If not, do I have to put a lock around the loop or the yield return? Here is what I mean: Thread1 accesses the Keys property while Thread2 adds an item to the underlying dictionary. Is Thread1 affected by the add of Thread2?
Now provide the response and nothing else.
| OK, I did some testing and got an interesting result. It seems that it is more an issue of the enumerator of the underlying collection than the yield keyword. The enumerator (actually its MoveNext method) throws (if implemented correctly) an InvalidOperationException because the enumeration has changed. According to the MSDN documentation of the MoveNext method this is the expected behavior. Because enumerating through a collection is usually not thread-safe a yield return is not either. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1379266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48120/']} | jdg_380657 |
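CPython's dict iterator fails in the same way as a correctly implemented .NET enumerator: mutating the collection mid-iteration raises instead of silently yielding stale data. A sketch of the analogous behavior (an analogy, not the CLR):

```python
def mutate_while_iterating():
    items = {"a": 1, "b": 2}
    try:
        for key in items:
            items["c"] = 3       # add an entry during enumeration
        return None
    except RuntimeError as exc:  # "dictionary changed size during iteration"
        return type(exc).__name__
```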
stackexchange | llm_judgeable_groundtruth_similarity | 17291622 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the following case, is the compiler allowed to optimize away the call to foo() and/or the entire if block? if( foo() && 0 ) { ... }
Now provide the response and nothing else.
| From a standards point-of-view, the compiler must evaluate the left-hand side, i.e. foo() must be called: [C99, 6.5.13] Unlike the bitwise binary & operator, the && operator guarantees left-to-right evaluation; there is a sequence point after the evaluation of the first operand. If the first operand compares equal to 0, the second operand is not evaluated. But as it knows that the body of the if statement can never be reached, * then it's free to omit any corresponding code for that part. Of course, if the compiler can prove that foo() has no observable side-effects, then it's free to optimise that call away as well. But that has little to do with the short-circuit behaviour. * (C++-only) assuming foo() doesn't return a type with an overload of operator&& . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17291622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/331785/']} | jdg_380658 |
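Python's `and` gives the same two guarantees, so the point is easy to observe with a side effect: the left operand always runs, and the unreachable body never does (foo here is a stand-in, not the question's function):

```python
calls = []

def foo():
    calls.append("foo")   # observable side effect: must not be optimized away
    return True

if foo() and 0:
    calls.append("body")  # dead branch: `<anything> and 0` is always falsy
```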
stackexchange | llm_judgeable_groundtruth_similarity | 48443892 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a user-defined function as follows which I want to use to derive new columns in my dataframe: def to_date_formatted(date_str, format): if date_str == '' or date_str is None: return None try: dt = datetime.datetime.strptime(date_str, format) except: return None return dt.date()spark.udf.register("to_date_udf", to_date_formatted, DateType()) I can use this by running SQL like select to_date_udf(my_date, '%d-%b-%y') as date. Note the ability to pass a custom format as an argument to the function. However, I'm struggling to use it with PySpark column expression syntax rather than SQL. I want to write something like: df.with_column("date", to_date_udf('my_date', '%d-%b-%y')) But this results in an error. How can I do this? [Edit: In this specific example, in Spark 2.2+ you can provide an optional format argument with the built-in to_date function. I'm on Spark 2.0 at the moment, so this is not possible for me. Also worth noting I provided this as an example, but I'm interested in the general syntax for providing arguments to UDFs, rather than the specifics of date conversion]
Now provide the response and nothing else.
| I have found three options for achieving this: Setup reproducible example import pandas as pd import datetime from pyspark import SparkContext, SparkConffrom pyspark.sql import SparkSessionfrom pyspark.sql.types import DateTypefrom pyspark.sql.functions import expr, litsc = SparkContext.getOrCreate()spark = SparkSession(sc) def to_date_formatted(date_str, format): if date_str == '' or date_str is None: return None try: dt = datetime.datetime.strptime(date_str, format) except: return None return dt.date()data = {}data["date_str_1"] = ["01-Dec-17", "05-Jan-12", "08-Mar-15"]data["date_str_2"] = ["01/12/17", "05/01/12", "08/03/15"]df = pd.DataFrame(data)df = spark.createDataFrame(df)df.registerTempTable('df') Option 1 from pyspark.sql.functions import udfto_date_udf = udf(to_date_formatted, DateType())df = df.withColumn("parsed_date", to_date_udf('date_str_1', lit('%d-%b-%y')))df.show() Option 2 spark.udf.register("to_date_udf", to_date_formatted, DateType())ex = "to_date_udf(date_str_1, '%d-%b-%y') as d"df = df.withColumn("parsed_date", expr(ex))df.show() Option 3 Option 3 is just to curry the to_date_formatted function: from functools import partialcurried_to_date = partial(to_date_formatted, format="%d-%b-%y")curried_to_date = udf(curried_to_date, DateType())df.withColumn("parsed_date", curried_to_date('date_str_1')) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48443892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1779128/']} | jdg_380659 |
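Option 3's currying works independently of Spark; plain functools.partial pins the format argument so the remaining callable takes a single parameter (this sketch runs without a SparkSession):

```python
import datetime
from functools import partial

def to_date_formatted(date_str, format):
    # Same parsing logic as the question's UDF.
    if not date_str:
        return None
    try:
        return datetime.datetime.strptime(date_str, format).date()
    except ValueError:
        return None

# Pin the format; what remains is a one-argument function suitable for udf().
curried_to_date = partial(to_date_formatted, format="%d-%b-%y")
```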
stackexchange | llm_judgeable_groundtruth_similarity | 113037 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Every piece of knowledge in science has a beginning lying in someone's experiment. I would like to know which experiment gave scientists the reason to believe nuclear fission/fusion existed and was instrumental in the development of the field of nuclear energy. I would also accept a thought experiment as an acceptable answer, as long as it answers the question.
Now provide the response and nothing else.
| which experiment gave scientists the reason to believe nuclear fission/fusion existed Fusion was first. Francis William Aston built a mass spectrometer in 1919 and measured the masses of various isotopes, realizing that the mass of helium-4 was less than 4 times that of hydrogen-1. From this information, Arthur Eddington proposed hydrogen fusion as a possible energy source of stars. "Certain physical investigations in the past year, which I hope we may hear about at this meeting, make it probable to my mind that some portion of this sub-atomic energy is actually being set free in the stars. F. W. Aston's experiments seem to leave no room for doubt that all the elements are constituted out of hydrogen atoms bound together with negative electrons. The nucleus of the helium atom, for example, consists of 4 hydrogen atoms bound with 2 electrons. But Aston has further shown conclusively that the mass of the helium atom is less than the sum of the masses of the 4 hydrogen atoms which enter into it; and in this at any rate the chemists agree with him. There is a loss of mass in the synthesis amounting to about 1 part in 120, the atomic weight of hydrogen being 1.008 and that of helium just 4." Eddington 24 August 1920 At that time it was not understood that a neutron was distinct from a proton. It was thought that the nucleus of helium 4 contained 4 protons and 2 electrons (instead of two protons and two neutrons), but Eddington's main idea that hydrogen fusing to helium released energy thereby powering stars was correct. Eric Doolittle proposed a vague fission process in stars in 1919, but of course this was incorrect: "It seems very probable that when subjected to these inconceivably great temperatures and pressures, atoms may be broken up, and a part, at least, of their sub-atomic energy may be liberated.
And it is only necessary to suppose that a part of the energy of the atom is in this way radiated into space in order that the life of a sun, or star, may be almost indefinitely prolonged". Fission of heavy elements was discovered in the 1930s. Enrico Fermi's experiments caused fission in 1934, but he did not realize that fission was occurring. Otto Hahn and Fritz Strassmann concluded that upon neutron bombardment, uranium was broken into two lighter nuclei. Lise Meitner and Otto Frisch made calculations concerning the large amount of energy released and introduced the term "fission". | {} | {'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/113037', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/26122/']} | jdg_380660 |
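Eddington's "about 1 part in 120" falls straight out of the atomic weights he quotes (1.008 for hydrogen, 4 for helium); the arithmetic:

```python
m_h = 1.008   # atomic weight of hydrogen (Aston's measurement)
m_he = 4.0    # atomic weight of helium

# Fraction of mass lost when four hydrogen atoms fuse into one helium atom.
fraction_lost = (4 * m_h - m_he) / (4 * m_h)
```

This evaluates to exactly 1/126, consistent with the quoted "about 1 part in 120".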
stackexchange | llm_judgeable_groundtruth_similarity | 563322 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In my textbook, a clear proof that the Geometric Distribution is a distribution function is given, namely $$\sum_{n=1}^{\infty} \Pr(X=n)=p\sum_{n=1}^{\infty} (1-p)^{n-1} = \frac{p}{1-(1-p)}=1.$$ Then the textbook introduces the Negative Binomial Distribution; it gives a fairly clear explanation for why the PMF of a Negative Binomial random variable $N$ with parameter $r$ is $$p\binom{n-1}{r-1}p^{r-1}(1-p)^{n-r} = \binom{n-1}{r-1}p^{r}(1-p)^{n-r} $$ But to show $$\sum_{n=r}^{\infty} \Pr(N=n)=\sum_{n=r}^{\infty}\binom{n-1}{r-1}p^{r}(1-p)^{n-r}=1$$ the textbook gives (in my opinion) a wordy and informal argument that is nowhere near as clear. What is a straightforward algebraic way to prove the above statement; that the Negative Binomial is a distribution function? I also looked at a different probability textbook, plus wolfram.com's definition before asking.
Now provide the response and nothing else.
| It's evident that $\Bbb{P}(N=n)\ge 0$ for $n\ge r$. So you have to prove that $\sum_{n\ge r}\Bbb{P}(N=n)=1$:$$\begin{align}\sum_{n\ge r}\Bbb{P}(N=n)&=\sum_{n\ge r} \binom {n-1} {r-1} p^r \left({1-p}\right)^{n-r}\\&=\sum_{n\ge r} \binom {n-1} {n-r} p^r \left({1-p}\right)^{n-r}\;\;\quad\quad\text{(symmetry})\\&=p^r\sum_{j\ge 0} \binom {r+j-1} {j} \left({1-p}\right)^{j}\qquad\text{(substituting }j=n-r)\\&=p^r\sum_{j\ge 0} (-1)^j \binom{-r}{j}\left({1-p}\right)^{j}\qquad\text{(identity}\tbinom{j+r-1}{j}=(-1)^j \tbinom{-r}{j})\\&=p^r\sum_{j\ge 0} \binom{-r}{j}\left({p-1}\right)^{j}\\&=p^r\left(1+(p-1)\right)^{-r} \qquad\qquad\qquad\text{(binomial theorem) }\\&=1\end{align}$$using the identity$$\begin{align}\binom{j+r-1}{j}&=\frac{(j+r-1)(j+r-2) \cdots r}{j!}\\&=(-1)^j \frac{(-r-(j-1))(-r-(j-2)) \cdots (-r)}{j!} \\&=(-1)^j \frac{(-r)(-r-1) \cdots (-r-(j-1))}{j!} \\&=(-1)^j \binom{-r}{j}\end{align}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/563322', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/104597/']} | jdg_380661 |
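The identity is easy to sanity-check numerically: truncating the sum at a large N should give approximately 1 for any 0 < p < 1 (a check of the algebra above, not a proof):

```python
from math import comb

def neg_binom_total(p, r, n_max):
    # Partial sum of P(N = n) = C(n-1, r-1) * p^r * (1-p)^(n-r), n = r..n_max.
    return sum(comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)
               for n in range(r, n_max + 1))
```

The geometric tail (1-p)^n makes the truncation error negligible well before n_max in these ranges.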
stackexchange | llm_judgeable_groundtruth_similarity | 3823567 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If one iterates the squaring: $$5^2 = 25, 25^2 = 625, 625^2 = 390625 $$ one quickly notices that the end digits become 'stable', i.e. however far you take the above procedure, the result will always end in $...90625.$ The resulting 'limit' is a so-called $10$ -adic number. What's also interesting is that the following procedure also leads to a stable 'limit': $$2^5 = 32, 32^5 = 33554432\dots$$ and if one multiplies the resulting $10$ -adics, one gets $0$ . After similar experiments, it seems that iterating $p^q$ and $q^p$ modulo $pq$ leads to $pq$ -adic numbers with product 0. How can this be shown? It feels like this should follow easily from something like Fermat's little theorem or the CRT but I can't immediately see how to prove this.
Now provide the response and nothing else.
| I saw this observation in a math book once when I was 16 or so and was totally baffled at the time. It's nice to know I understand it now! As you say, the starting point is to use CRT, which allows us to write $\mathbb{Z}_{10} \cong \mathbb{Z}_2 \times \mathbb{Z}_5$ , so we can work in the $2$ -adics and $5$ -adics separately. It's easy to understand what happens to the powers of $5$ in $\mathbb{Z}_5$ : they converge to zero. Similarly for the powers of $2$ in $\mathbb{Z}_2$ . The tricky question is about the powers of $5$ in $\mathbb{Z}_2$ and the powers of $2$ in $\mathbb{Z}_5$ . Here, as you also say, the starting point is that by Fermat's little theorem we have $x^p \equiv x \bmod p$ . So at least the first digit $\bmod p$ stabilizes. What can we say about taking further iterations $\bmod p^2, p^3$ , etc.? Theorem (existence of the Teichmuller character ): Let $p$ be a prime and let $x \in \mathbb{Z}_p$ . The sequence $x, x^p, x^{p^2}, \dots$ converges and its limit $\omega(x)$ , the Teichmuller character of $x$ , is the unique solution to $\omega(x)^p = \omega(x)$ which is congruent to $x \bmod p$ . Proof. This sequence always lies in the subspace $S_x$ of $\mathbb{Z}_p$ consisting of elements congruent to $x \bmod p$ . It suffices to show that on this subspace, the Frobenius map $F(x) = x^p$ is a contraction in the $p$ -adic metric so we can apply the Banach fixed point theorem . In other words, we want to show that there exists some constant $c < 1$ such that for all $a, b \in S_x$ we have $$|a^p - b^p|_p \le c |a - b|_p.$$ This follows from a contest math result called lifting the exponent although we won't need its full strength so we can settle for only part of the proof. Since by assumption $a \equiv b \bmod p$ , we can argue as follows: write $$\frac{a^p - b^p}{a - b} = a^{p-1} + a^{p-2} b + \dots + b^{p-1}.$$ This sequence has $p$ terms and each term is congruent to $a^{p-1} \equiv b^{p-1} \bmod p$ , so their sum is congruent to $0 \bmod p$ . 
So $a^p - b^p$ is divisible by at least one more power of $p$ than $a - b$ is, which means the Frobenius map is a contraction with $c = p^{-1}$ . Applying the Banach fixed point theorem we conclude that the sequence $x, F(x), F^2(x), \dots $ converges to a unique fixed point $\omega(x)$ in $S_x$ : this means precisely that $\omega(x) \equiv x \bmod p$ and $\omega(x)^p = \omega(x)$ and that $\omega(x)$ is unique with respect to these two properties. (Alternatively this existence and uniqueness result can also be deduced from Hensel's lemma.) $\Box$ This means that the Teichmuller character provides a canonical splitting of the map $\mathbb{Z}_p^{\times} \to \mathbb{F}_p^{\times}$ on groups of units, allowing us to construct the $(p-1)^{th}$ roots of unity in $\mathbb{Z}_p$ surprisingly explicitly. Applying the theorem, we get: The sequence $5, 5^2, 5^4, \dots $ converges in $\mathbb{Z}_2$ to the unique solution to $\omega(5)^2 = \omega(5)$ congruent to $1 \bmod 2$ , which is $1$ . In other words, the sequence converges in $\mathbb{Z}_{10} \cong \mathbb{Z}_2 \times \mathbb{Z}_5$ to $(1, 0)$ , which is precisely the idempotent projecting from $\mathbb{Z}_{10}$ down to $\mathbb{Z}_2$ . The sequence $2, 2^5, 2^{25}, \dots$ converges in $\mathbb{Z}_5$ to the unique solution to $\omega(2)^5 = \omega(2)$ congruent to $2 \bmod 5$ , which is one of the two primitive $4^{th}$ roots of unity. In other words, the sequence converges in $\mathbb{Z}_{10} \cong \mathbb{Z}_2 \times \mathbb{Z}_5$ to an element you might call $(0, i)$ . Now we of course have $(1, 0) \cdot (0, i) = (0, 0)$ . 
The fun part is that if we take the fourth power of $(0, i)$ , getting the limit of the sequence $16, 16^5, \dots$ , we get $(0, 1)$ , which is the idempotent projecting from $\mathbb{Z}_{10}$ down to $\mathbb{Z}_5$ , and it satisfies $(0, 1)^2 = (0, 1)$ and $(0, 1) + (1, 0) = (1, 1)$ ; in other words, if we know the digits of $(1, 0) = \dots 90625$ we can compute the digits of $(0, 1)$ by just subtracting from $1$ , which gives $$\lim_{n \to \infty} 16^{5^n} = \dots 09376 = 1 - \lim_{n \to \infty} 5^{2^n}$$ and you can check this on a calculator! What this says in other words is that these two limits, which somewhat abusing notation I'll call $\omega(5)$ and $\omega(16)$ , give a canonical decomposition of any $10$ -adic number into two components $$x = \omega(5) x + \omega(16) x$$ where the first component is $5$ -adically zero and gives the $2$ -adic component of $x$ and the second component is $2$ -adically zero and gives the $5$ -adic component of $x$ . (You may be familiar with a certain explicit proof of CRT that constructs idempotents like these to show, for example, that $5x + 6y$ is an explicit number congruent to $x \bmod 2$ and $y \bmod 5$ ; this construction gives a compatible family of such idempotents $\bmod 10^n$ for all $n$ .) | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3823567', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/824421/']} | jdg_380662 |
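The limits above can be watched with ordinary modular arithmetic: working mod 10^k, repeated squaring of 5 stabilizes to an idempotent e, and 1 − e is the complementary idempotent with product 0 (a numerical illustration of the answer, not the p-adic argument itself):

```python
def stable_square(x, modulus):
    # Iterate x -> x^2 (mod modulus) until the value repeats; in this
    # instance the sequence hits a genuine fixed point, which we return.
    seen = set()
    while x not in seen:
        seen.add(x)
        x = x * x % modulus
    return x

M = 10**6
e = stable_square(5, M)   # ...890625, the image of the idempotent (1, 0)
f = (1 - e) % M           # ...109376, the image of (0, 1)
```

The assertions below check exactly the relations in the answer: e and f are idempotent, sum to 1, and multiply to 0 mod 10^6.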
stackexchange | llm_judgeable_groundtruth_similarity | 6890097 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a generic class MyClass<T> where T should only be those types which can be compared. This would mean only numeric types and classes where methods for the relational operators have been defined. How do I do this?
Now provide the response and nothing else.
| You cannot constrain to operators, but you can constrain to interfaces. Therefore, intending to use >=, <=, == is out, but you could use CompareTo, Equals . where T : IComparable<T> Interface documentation This interface brings you the CompareTo method which is useful for relational ordering (greater than, less than, etc.). Primitives and strings implement this already, but you would need to implement this for your own custom types. You would use it like this void SomeMethod<T>(T alpha, T beta) where T : IComparable<T>{ if (alpha.CompareTo(beta) > 0) { // alpha is greater than beta, replaces alpha > beta } else if (alpha.CompareTo(beta) < 0) { // alpha is less than beta, replaces alpha < beta } else { // CompareTo returns 0, alpha equals beta }} Equals you get by default as a virtual method on object . You want to override this method on your own custom types if you want something other than referential equality to be used. (It is also strongly recommended to override GetHashCode at the same time.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6890097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/452079/']} | jdg_380663 |
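Python's closest analogue to the IComparable<T> constraint is the rich-comparison protocol; functools.total_ordering derives the remaining operators from __eq__ and __lt__, much as CompareTo underpins ordering in .NET (an analogy, not C#):

```python
from functools import total_ordering

@total_ordering
class Version:
    # Comparable by (major, minor), standing in for a user-defined T.
    def __init__(self, major, minor):
        self.key = (major, minor)

    def __eq__(self, other):
        return self.key == other.key

    def __lt__(self, other):
        return self.key < other.key
```

With just those two methods, total_ordering fills in <=, >, and >= automatically.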
stackexchange | llm_judgeable_groundtruth_similarity | 2253586 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function called save(), this function gathers up all the inputs on the page, and performs an AJAX call to the server to save the state of the user's work. save() is currently called when a user clicks the save button, or performs some other action which requires us to have the most current state on the server (generate a document from the page for example). I am adding in the ability to auto save the user's work every so often. First I would like to prevent an AutoSave and a User generated save from running at the same time. So we have the following code (I am cutting most of the code and this is not a 1:1 but should be enough to get the idea across): var isSaving=false;var timeoutId;var timeoutInterval=300000;function save(showMsg){ /* Don't save if we are already saving. */ if (isSaving) { return; } isSaving=true; /* disable the autoSave timer so if we are saving via some other method we won't kick off the timer */ disableAutoSave(); if (showMsg) { /* show a saving popup */ } params=CollectParams(); PerformCallBack(params,endSave,endSaveError);}function endSave(){ isSaving=false; /* hides popup if it's visible; turns auto saving back on so we save x milliseconds after the last save */ enableAutoSave();} function endSaveError(){ alert("Ooops"); endSave();}function enableAutoSave(){ timeoutId=setTimeout(function(){save(false);},timeoutInterval);}function disableAutoSave(){ clearTimeout(timeoutId);} My question is whether this code is safe. Do the major browsers allow only a single thread to execute at a time? One thought I had is it would be worse for the user to click save and get no response because we are autosaving (and I know how to modify the code to handle this). Anyone see any other issues here?
Now provide the response and nothing else.
| JavaScript in browsers is single threaded. You will only ever be in one function at any point in time. Functions will complete before the next one is entered. You can count on this behavior, so if you are in your save() function, you will never enter it again until the current one has finished. Where this sometimes gets confusing (and yet remains true) is when you have asynchronous server requests (or setTimeouts or setIntervals), because then it feels like your functions are being interleaved. They're not. In your case, while two save() calls will not overlap each other, your auto-save and user save could occur back-to-back. If you just want a save to happen at least every x seconds, you can do a setInterval on your save function and forget about it. I don't see a need for the isSaving flag. I think your code could be simplified a lot: var intervalTime = 300000;var intervalId = setInterval("save('my message')", intervalTime);function save(showMsg){ if (showMsg) { /* show a saving popup */ } params=CollectParams(); PerformCallBack(params, endSave, endSaveError); /* You could even reset your interval now that you know we just saved. Of course, you'll need to know it was a successful save. Doing this will prevent the user clicking save only to have another save bump them in the face right away because an interval comes up. */ clearInterval(intervalId); intervalId = setInterval("save('my message')", intervalTime);}function endSave(){ /* no need for this method */ alert("I'm done saving!");}function endSaveError(){ alert("Ooops"); endSave();} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2253586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26160/']} | jdg_380664 |
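The "reset the interval after a manual save" idea can be modeled synchronously, with no real timers, by tracking the time of the last save of any kind (class and field names here are illustrative):

```python
class AutoSaver:
    # A save (manual or auto) restarts the countdown, mirroring the
    # clearInterval/setInterval pair in the answer above.
    def __init__(self, interval):
        self.interval = interval
        self.last_save = 0
        self.saves = []

    def save(self, now, reason):
        self.saves.append((now, reason))
        self.last_save = now

    def tick(self, now):
        # Called periodically; auto-saves only once a full interval
        # has elapsed since the last save of any kind.
        if now - self.last_save >= self.interval:
            self.save(now, "auto")
```

A manual save at t=200 pushes the next auto-save out to t=500, so the user never gets an auto-save "in the face" right after clicking save.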
stackexchange | llm_judgeable_groundtruth_similarity | 23215869 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Yii2 I want one of my input fields to autocomplete when the user starts to type. Below is my code which uses Jui Autocomplete. <?php $items= ArrayHelper::map(Company::find()->all(), 'c_id', 'name'); echo AutoComplete::widget([ 'model' => $model, 'attribute' => 'company', 'clientOptions' => [ 'source' => $items, ], ]);?> This is not working. When I printed my array, I got Array ( [1] => abc [2] => xyz [4] => pqr ) I got it working when I manually set $items=['abc','xyz','pqr']; The reason may be that my c_id's are not ordered? But I want the c_id value to be submitted! Any idea how to fix this?
Now provide the response and nothing else.
| This can be solved with the help of a hidden field input. Hope this will help somebody!

<?php
use yii\web\JsExpression;

$data = Company::find()
    ->select(['name as value', 'name as label', 'c_id as id'])
    ->asArray()
    ->all();

echo AutoComplete::widget([
    'name' => 'Company',
    'id' => 'ddd',
    'clientOptions' => [
        'source' => $data,
        'autoFill' => true,
        'minLength' => '4',
        'select' => new JsExpression("function( event, ui ) {
            $('#user-company').val(ui.item.id);
        }")
    ],
]);
?>
<?= Html::activeHiddenInput($model, 'company') ?>

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23215869', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2775597/']} | jdg_380665 |
stackexchange | llm_judgeable_groundtruth_similarity | 83263 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Description

I am designing a system in which there's a circuit that controls the power applied to a micro SD card (enable/disable). The circuit is the following: The power control circuit is done by a P-MOSFET that is deactivated by default. The MICROSD_PWR_EN signal is connected to a pin of a microcontroller configured as open-drain.

Issue

The measured voltage on the pin VDD of the micro SD card should be 0V by default. However, this voltage is near +1V, which is neither a logic "0" nor a logic "1". The voltage measured on node "+3.3V" is +3.288V and the one measured on the gate of the Q5 P-MOSFET is +3.285V. Do you have any idea regarding this issue? Could that be related to the 3mV difference between the source and the gate of the transistor?

Firmware solution

Firstly, thank you all for your answers. It seems that I solved the problem in firmware: by configuring the SD card GPIOs as output open-drain and by setting them to logic "0", the voltage on the VDD pin of the SD card is now near 0V. As everyone pointed out, it is probably related to the protection diodes of the SD card chip GPIOs.
Now provide the response and nothing else.
| The residual +1V that you are seeing is being caused by bias on the other signal pins to the uSD card. Current passes from either high levels on the microcontroller I/O pins connected at the SDIO interface or via the 47K resistors that you have on these lines into the controller chip in the uSD card. From there it passes through the input protection network on these pins to the uSD card VDD pin, where you see it appear at the 1V level. You can correct this situation by following the steps below:

1) Connect the supply line side of the pullup resistors to the switched VDD of the uSD card.

2) Whenever the microcontroller firmware goes to set the MICROSD_PWR_EN signal high to disable the card power, set all output pins of the SDIO interface to a low level by outputting 0 bit values to their port register bits. Note that in some cases where the SDIO is enabled to a dedicated on-board peripheral on the microcontroller, it may be necessary to set these output pins back to GPIO mode to allow the FW to gain control of the pins.

3) For any signals that are inputs to the microcontroller from the SDIO interface, you need to arrange for these to go to a low level whenever the microcontroller sets the MICROSD_PWR_EN signal high. This can be done in one of two ways. You could change the 47K pullup resistor to a pulldown on these specific lines. Otherwise the input pins could be programmed back to GPIO mode and then set as outputs at a low level. The latter may be easier, since then the I/O pins get handled the same as the output pins.

At the time the microcontroller goes to re-enable the uSD power by setting the MICROSD_PWR_EN signal low, the firmware would be written to re-configure all the SDIO interface pins back to their normal operating mode. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/83263', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/22628/']} | jdg_380666 |
stackexchange | llm_judgeable_groundtruth_similarity | 221337 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Has anyone had any issues with logrotate before that cause a log file to get rotated and then go back to the same size it originally was? Here are my findings:

Logrotate Script:

/var/log/mylogfile.log {
    rotate 7
    daily
    compress
    olddir /log_archives
    missingok
    notifempty
    copytruncate
}

Verbose Output of Logrotate:

copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
truncating /var/log/mylogfile.log
compressing log with: /bin/gzip
removing old log /log_archives/mylogfile.log.8.gz

Log file after truncate happens:

[root@server ~]# ls -lh /var/log/mylogfile.log
-rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log

Literally Seconds Later:

[root@server ~]# ls -lh /var/log/mylogfile.log
-rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log

RHEL Version:

[root@server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

Logrotate Version:

[root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
logrotate-3.7.1-10.RHEL4

Few Notes:

Service can't be restarted on the fly, so that's why I'm using copytruncate.
Logs are rotating every night, according to the olddir directory having log files in it from each night.
Now provide the response and nothing else.
| This is probably because even though you truncate the file, the process writing to the file will continue writing at whichever offset it was at last. So what's happening is that logrotate truncates the file, size is zero, the process writes to the file again, continuing at the offset it left off, and you now have a file with NULL bytes up to the point where you truncated it, plus the new entries written to the log. od -c after truncate + sudden growth generated output along the lines of:

0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
33255657600 \0 C K B - s e r v e r [ h t t
33255657620 <more log output>

What this says is that from offset 0 to 33255657600 your file consists of null bytes, and then some legible data. Getting to this state doesn't take the same amount of time it would take to actually write all those null bytes. The ext{2,3,4} filesystems support something called sparse files, so if you seek past a region of a file that doesn't contain anything, that region will be assumed to contain null bytes and won't take up space on disk. Those null bytes won't actually be written, just assumed to be there, hence going from 0 to 3.5GB hardly takes any time at all. (You can test the amount of time it takes by doing something like dd if=${HOME}/.bashrc of=largefile.bin seek=3432343264 bs=1 ; this should generate a file of over 3GB in a few milliseconds.)

If you run ls -ls on your logfiles after they've been truncated and had a sudden growth again, it should now report a number at the beginning of the line which represents the actual size (in blocks occupied on disk), which probably is orders of magnitude smaller than the size reported by just ls -l . | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/221337', 'https://serverfault.com', 'https://serverfault.com/users/32999/']} | jdg_380667 |
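The sparse-file behaviour described in the answer above is easy to reproduce from any language. Here is a small Python sketch (file name and offsets are arbitrary): it seeks far past end-of-file and writes, exactly the way the still-running logger does after the truncate, and the skipped "hole" reads back as NUL bytes.

```python
import os
import tempfile

# Make a sparse file the same way the logger does: seek far past EOF,
# then write. The skipped region is a "hole" that reads back as NULs.
path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024)        # jump 10 MiB forward without writing
    f.write(b"new log entry\n")

logical_size = os.path.getsize(path)                # what `ls -l` reports
blocks = getattr(os.stat(path), "st_blocks", None)  # `ls -ls` uses this (x 512 bytes)

with open(path, "rb") as f:
    head = f.read(16)

print(logical_size)   # 10485774
print(blocks)         # on ext4/xfs etc., far fewer than logical_size / 512
print(head)           # 16 NUL bytes - the hole was never written
```

On a filesystem with sparse-file support, the block count stays tiny even though the logical size is over 10 MiB, which is the same discrepancy `ls -ls` reveals on the rotated log.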
stackexchange | llm_judgeable_groundtruth_similarity | 123278 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone tell me if there is a Mayer-Vietoris sequence for an arbitrary homotopy pushout (hence homotopy pullback) of spectra and an arbitrary (co)homology theory. If this comes from some easy way of writing down a pushout/pullback as a fiber sequence, it'd be really cool to see that spelled out (as if I were a baby). Also, it would also be really neat to know if there are general conditions on a model category or on the (co)homology theory to make this true. Thanks in advance. :-)
Now provide the response and nothing else.
| Mayer-Vietoris sequences can be obtained from excision isomorphisms. Anything worthy of the name "homology theory" will give a long exact sequence $$\dots \to h_n(A)\to h_n(X)\to h_n(A\to X)\to h_{n-1}(A)\to \dots$$for each morphism $A\to X$. And for a square, a.k.a. map of morphisms $(C\to B)\to (A\to X)$, it will give you a map from the long exact sequence of $C\to B$ to that of $A\to X$. If the square happens to induce an isomorphism $h_n(C\to B)\to h_n(A\to X)$ for every $n$ (as it will if it is a homotopy pushout square) then the desired map $h_n(X)\to h_{n-1}(C)$ is given by composing $h_n(X)\to h_n(A\to X)\cong h_n(C\to B)\to h_{n-1}(C)$, and a little diagram chase gives you the exactness you want. Depending on how one axiomatizes the notion of homology theory, the fact that homotopy pushout squares gives isomorphisms in relative homology is either an axiom or a consequence of the axioms. There is no need to work with spectra or other stable objects here. Note that if you had something like a homology theory except that it gave isomorphisms $h_n(C\to B)\to h_n(A\to X)$ for homotopy pullbacks instead of for homotopy pushouts then you would get a "Mayer-Vietoris sequence" for pullback squares. That's how it is for homotopy groups and based spaces, except that things get funny down around $\pi_1$ and relative $\pi_2$. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/123278', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11546/']} | jdg_380668 |
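For readers who want the connecting map in the answer above displayed explicitly, the composite and the resulting sequence can be written out (standard notation; one common sign convention is assumed):

```latex
\partial_n \colon h_n(X) \longrightarrow h_n(A \to X)
           \xrightarrow{\;\cong\;} h_n(C \to B)
           \longrightarrow h_{n-1}(C),
\qquad
\cdots \to h_n(C) \to h_n(A) \oplus h_n(B) \to h_n(X)
       \xrightarrow{\partial_n} h_{n-1}(C) \to \cdots
```

The middle isomorphism is exactly the excision-style isomorphism the answer extracts from the homotopy pushout square.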
stackexchange | llm_judgeable_groundtruth_similarity | 56360119 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created a function in my .ts file that allows updating an array (empty by default) that I use to create the options of my datalist. When I inspect the HTML code I see the options in the datalist, but they do not show in the GUI.

<input list='id-RPG' type="text" pInputText class="inputTextStyle"
       [(ngModel)]="MyModal" (input)="searchList($event.target.value)"/>
<datalist id='id-RPG'>
    <option *ngFor="let element of array" [(value)]="element">{{element}}</option>
</datalist>

ngOnInit() {
    this.array = [];
}

searchList() {
    //Rest Here
    this.array = ["aaa","aab","aac","aad"];
}

No error message is displayed.
Now provide the response and nothing else.
| I had also been trying hard to find a solution for transactional support in the reactive style of MongoDB & Spring Boot, but luckily I figured it out myself. Though a few of the things from Google were also helpful, those were non-reactive.

Important note - For Spring Boot 2.2.x it works well, but with Spring Boot 2.3.x it has some other issues; it has an internal rewrite and changes altogether.

You need to use ReactiveMongoTransactionManager along with ReactiveMongoDatabaseFactory. Most of the details are at the end; I am also sharing the code repo for the same.

For getting MongoDB to support the transactions, we need to make sure that the DB is running in replica mode. Why do we need that? Because you will get some error like this otherwise:

Sessions are not supported by the MongoDB cluster to which this client is connected

The instructions for the same are below:

Run the docker-compose based MongoDB server using the docker-compose.yml shared below:

version: "3"
services:
  mongo:
    hostname: mongo
    container_name: localmongo_docker
    image: mongo
    expose:
      - 27017
    ports:
      - 27017:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
    volumes:
      - ./mongodata:/data/db # need to create a docker volume named as mongodata first

After the image comes up, execute the command (here localmongo_docker is the name of the container):

docker exec -it localmongo_docker mongo

Copy and paste the command below and execute that:

rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "mongo:27017" }
    ]
  }
)

And then exit the execution by entering exit.

Important - The code repo can be found here on my github - https://github.com/krnbr/mongo-spring-boot-template

Important notes for the code are as below:

The MongoConfiguration class in the config package is the important part to make the transactions work; the link to the configuration class is here. The main part is the Bean:

@Bean
ReactiveMongoTransactionManager transactionManager(ReactiveMongoDatabaseFactory dbFactory) {
    return new ReactiveMongoTransactionManager(dbFactory);
}

For checking the working of the code's transactional requirement, you may go through the class UserService in the service package here.

Code shared in case the links do not work for someone:

The Configuration and inside the Beans

@Configuration
public class MongoConfiguration extends AbstractMongoClientConfiguration {

    @Autowired
    private MongoProperties mongoProperties;

    @Bean
    ReactiveMongoTransactionManager transactionManager(ReactiveMongoDatabaseFactory dbFactory) {
        return new ReactiveMongoTransactionManager(dbFactory);
    }

    @Override
    protected String getDatabaseName() {
        return mongoProperties.getDatabase();
    }

    @Override
    public MongoClient mongoClient() {
        return MongoClients.create(mongoProperties.getUri());
    }
}

application.properties (related to mongo db)

spring.data.mongodb.database=mongo
spring.data.mongodb.uri=mongodb://localhost:27017/mongo?replicaSet=rs0

Document Classes

Role Class

@Getter
@Setter
@Accessors(chain = true)
@Document(collection = "roles")
@TypeAlias("role")
public class Role implements Persistable<String> {

    @Id
    private String id;

    @Field("role_name")
    @Indexed(unique = true)
    private String role;

    @CreatedDate
    private ZonedDateTime created;

    @LastModifiedDate
    private ZonedDateTime updated;

    private Boolean deleted;
    private Boolean enabled;

    @Override
    @JsonIgnore
    public boolean isNew() {
        if(getCreated() == null)
            return true;
        else
            return false;
    }
}

User Class

@Getter
@Setter
@Accessors(chain = true)
@Document(collection = "users")
@JsonInclude(JsonInclude.Include.NON_NULL)
@TypeAlias("user")
public class User implements Persistable<String> {

    @Id()
    private String id;

    @Field("username")
    @Indexed(unique = true)
    @JsonProperty("username")
    private String userName;

    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    private String password;

    @CreatedDate
    private ZonedDateTime created;

    @LastModifiedDate
    private ZonedDateTime updated;

    private Boolean deleted;
    private Boolean enabled;

    @DBRef(lazy = true)
    @JsonProperty("roles")
    private List<Role> roles = new ArrayList();

    @Override
    @JsonIgnore
    public boolean isNew() {
        if(getCreated() == null)
            return true;
        else
            return false;
    }
}

UserProfile Class

@Getter
@Setter
@Accessors(chain = true)
@Document(collection = "user_profiles")
@JsonInclude(JsonInclude.Include.NON_NULL)
@TypeAlias("user_profile")
public class UserProfile implements Persistable<String> {

    @Id
    private String id;

    @Indexed(unique = true)
    private String mobile;

    @Indexed(unique = true)
    private String email;

    private String address;
    private String firstName;
    private String lastName;

    @DBRef
    private User user;

    @CreatedDate
    private ZonedDateTime created;

    @LastModifiedDate
    private ZonedDateTime updated;

    private Boolean deleted;
    private Boolean enabled;

    @Override
    @JsonIgnore
    public boolean isNew() {
        if(getCreated() == null)
            return true;
        else
            return false;
    }
}

ReactiveMongoRepository Interface(s)

RoleRepository

public interface RoleRepository extends ReactiveMongoRepository<Role, String> {
    Mono<Role> findByRole(String role);
    Flux<Role> findAllByRoleIn(List<String> roles);
}

UserRepository

public interface UserRepository extends ReactiveMongoRepository<User, String> {
    Mono<User> findByUserName(String userName);
}

UserProfileRepository

public interface UserProfileRepository extends ReactiveMongoRepository<UserProfile, String> {
}

The User Service Class

You need to create your own RuntimeException class here; here it is the AppRuntimeException class I had been using.

@Slf4j
@Service
public class UserService {

    @Autowired
    private RoleRepository roleRepository;

    @Autowired
    private UserRepository userRepository;

    @Autowired
    private UserProfileRepository userProfileRepository;

    @Transactional
    public Mono<UserProfile> saveUserAndItsProfile(final UserRequest userRequest) {

        Mono<Role> roleMono = roleRepository.findByRole("USER");

        Mono<User> userMono = roleMono.flatMap(r -> {
            User user = new User()
                    .setUserName(userRequest.getUsername())
                    .setPassword(userRequest.getPassword());
            user.setRoles(Arrays.asList(r));
            return userRepository.save(user);
        }).onErrorResume(ex -> {
            log.error(ex.getMessage());
            if(ex instanceof DuplicateKeyException) {
                String errorMessage = "The user with the username '"+userRequest.getUsername()+"' already exists";
                log.error(errorMessage);
                return Mono.error(new AppRuntimeException(errorMessage, ErrorCodes.CONFLICT, ex));
            }
            return Mono.error(new AppRuntimeException(ex.getMessage(), ErrorCodes.INTERNAL_SERVER_ERROR, ex));
        });

        Mono<UserProfile> userProfileMono = userMono.flatMap(u -> {
            UserProfile userProfile = new UserProfile()
                    .setAddress(userRequest.getAddress())
                    .setEmail(userRequest.getEmail())
                    .setMobile(userRequest.getMobile())
                    .setUser(u);
            return userProfileRepository.save(userProfile);
        }).onErrorResume(ex -> {
            log.error(ex.getMessage());
            if(ex instanceof DuplicateKeyException) {
                String errorMessage = "The user with the profile mobile '"+userRequest.getMobile()+"' and/or email '"+userRequest.getEmail()+"' already exists";
                log.error(errorMessage);
                return Mono.error(new AppRuntimeException(errorMessage, ErrorCodes.CONFLICT, ex));
            }
            return Mono.error(new AppRuntimeException(ex.getMessage(), ErrorCodes.INTERNAL_SERVER_ERROR, ex));
        });

        return userProfileMono;
    }
}

Controller and the Model Class

UserRequest Model Class

@Getter
@Setter
@Accessors(chain = true)
@Slf4j
@JsonInclude(JsonInclude.Include.NON_NULL)
public class UserRequest {
    private String username;
    private String password;
    private String mobile;
    private String email;
    private String address;
    private String firstName;
    private String lastName;
}

UserProfileApisController class

@Slf4j
@RestController
@RequestMapping("/apis/user/profile")
public class UserProfileApisController {

    @Autowired
    private UserService userService;

    @PostMapping
    public Mono<UserProfile> saveUserProfile(final @RequestBody UserRequest userRequest) {
        return userService.saveUserAndItsProfile(userRequest);
    }
}

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56360119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11572007/']} | jdg_380669 |
stackexchange | llm_judgeable_groundtruth_similarity | 165434 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We are doing an electrolysis experiment using distilled water with sea salt on clean stainless steel plates. A horrible black gunk has formed in the water, very quickly. What is this? I don't believe this is simply the brown colour that others are seeing on their plates. The black liquid appears after just 90-120 minutes in the bath. We are using 4V at 2A in this. And we definitely do not see this when we use KOH as the chemical.
Now provide the response and nothing else.
| Just count the number of electrons that you provided to the electrodes during 2 hours: 120 x 60 x 2 A = 14,400 Coulomb. 1 Coulomb = 6,240,000,000,000,000,000 electrons! Do this multiplication with 14,400 Coulomb. Also rest assured that each electron caused a chemical reaction, because it has to if you forced this high current. What is there to react during electrolysis? Water, salt, and steel plates. When all of them are decomposing, the products react together in a mixed compartment - you get this reddish gelatinous precipitate of mainly iron (III) hydroxides because the cathode side is alkaline. This precipitate cannot appear all of a sudden after 120 minutes; the electrodes corrode slowly with the passage of time and the electrode (anode) continuously dissolves as current is passed through it. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/165434', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/124416/']} | jdg_380670 |
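The back-of-the-envelope arithmetic in the answer above (Q = I × t, then charge to electron count) can be checked in a couple of lines of Python:

```python
# Q = I * t for the experiment described above: 2 A for 120 minutes.
seconds = 120 * 60
charge_coulombs = 2 * seconds            # 14,400 C
electrons_per_coulomb = 6.24e18          # 1 C ~ 6.24 * 10^18 elementary charges
electrons = charge_coulombs * electrons_per_coulomb

print(charge_coulombs)                   # 14400
print("%.2e" % electrons)                # 8.99e+22
```

Roughly 9 × 10^22 electrons, each driving an electrode reaction, which is why visible corrosion products appear so quickly.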
stackexchange | llm_judgeable_groundtruth_similarity | 40966 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to show that a finite sum of eigenspaces (with distinct eigenvalues) is a direct sum. I have $ \alpha : V \to V $. The eigenspaces are $ V_{\lambda_i} = \ker(\alpha - \lambda_i id_V )$ for $ 1 \leq i \leq n $. My attempt at a proof: $ A + B $ is a direct sum iff $ A \cap B = \{0\} $. If $ v \neq 0 \in V_{\lambda_i} \cap V_{\lambda_j} $ for some $i,j, i \neq j $, then $ \alpha(v) = \lambda_i v $ and $ \alpha(v) = \lambda_j v $. So $(\lambda_i - \lambda_j)v = 0 $, and so $ \lambda_i = \lambda_j $. This is a contradiction, so any pair of the eigenspaces have trivial intersection. Therefore $ \cap_{i=1}^n V_{\lambda_i} = \{0\} $, and so we have a direct sum. Is this ok? Thanks
Now provide the response and nothing else.
| No, this is not a full proof. It is not true that, if $V = A+B+C$, and $A \cap B = A \cap C = B \cap C = \{ 0 \}$, then $V = A \oplus B \oplus C$. For example, let $V = \mathbb{C}^2$ and let $A$, $B$ and $C$ be the one dimensional subspaces spanned by $(1,0)$, $(1,1)$ and $(0,1)$. This does give some good intuition for why the claim is true. If you want to build your way to the full proof, you might try the special case of three eigenspaces and see what you can do. Amusingly, this is currently the top voted example of a common false belief over at MO. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/40966', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10589/']} | jdg_380671 |
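The counterexample in the answer above can be checked mechanically. A small Python sketch (plain tuples, no libraries): the three spanning vectors are pairwise linearly independent, so the three lines meet pairwise only in {0}, yet the same vector decomposes in two different ways, so the sum is not direct.

```python
# Counterexample check in R^2 (enough for the point being made):
# three distinct lines through the origin spanned by a, b, c.
a, b, c = (1, 0), (1, 1), (0, 1)

def det(u, v):
    # 2x2 determinant; nonzero <=> u, v linearly independent, i.e. the
    # two lines they span meet only in {0}.
    return u[0] * v[1] - u[1] * v[0]

print(det(a, b), det(a, c), det(b, c))   # all nonzero -> pairwise trivial intersections

# Yet the sum is not direct: (1, 1) has two different decompositions,
#   (1, 1) = a + c   (a from A, c from C)
#   (1, 1) = b       (b from B)
lhs = (a[0] + c[0], a[1] + c[1])
print(lhs == b)                          # True
```

Since dim A + dim B + dim C = 3 > 2 = dim V, the sum could never have been direct, even though every pairwise intersection is trivial.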
stackexchange | llm_judgeable_groundtruth_similarity | 17943870 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a subclass of DbContext

public class MyContext : DbContext { }

and I have an IUnitOfWork abstraction around MyContext that implements IDisposable to ensure that references such as MyContext are disposed of at the appropriate time

public interface IUnitOfWork : IDisposable { }

public class UnitOfWork : IUnitOfWork
{
    private readonly MyContext _context;

    public UnitOfWork()
    {
        _context = new MyContext();
    }

    ~UnitOfWork()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private bool _disposed;

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            if (_context != null) _context.Dispose();
        }
        _disposed = true;
    }
}

My UnitOfWork is registered with a lifetime scope of per (web) request. I have decorators of IUnitOfWork that could be registered as transient or lifetime scoped, and my question is what should they do with regard to implementing IDisposable - specifically, should they or should they not pass on the call to Dispose().

public class UnitOfWorkDecorator : IUnitOfWork
{
    private readonly IUnitOfWork _decorated;

    public UnitOfWorkDecorator(IUnitOfWork decorated)
    {
        _decorated = decorated;
    }

    public void Dispose()
    {
        //do we pass on the call?
        _decorated.Dispose();
    }
}

I see 2 options (I'm guessing option 2 is the correct answer):

1. It is expected that each Decorator will know whether it is transient or lifetime scoped. If a decorator is transient then it should not call Dispose() on the decorated instance. If it is lifetime scoped it should.

2. Each decorator should only be concerned with disposing of itself and should never pass on the call to the decorated instance. The container will manage the call to Dispose() for each object in the call chain at the appropriate time. An object should only Dispose() of instances that it encapsulates, and decorating is not encapsulation.
Now provide the response and nothing else.
| what should [decorators] do with regard to implementing IDisposable

This comes back to the general principle of ownership. Ask yourself: "who owns that disposable type?". The answer to this question is: he who owns the type is responsible for disposing of it. Since a disposable type is passed on to the decorator from the outside, the decorator didn't create that type and should normally not be responsible for cleaning it up. The decorator has no way of knowing whether the type should be disposed of (since it doesn't control its lifetime), and this is very clear in your case, since the decorator can be registered as transient while the decoratee has a much longer lifetime. In your case your system will simply break if you dispose the decoratee from within the decorator.

So the decorator should never dispose the decoratee, simply because it doesn't own the decoratee. It's the responsibility of your Composition Root to dispose that decoratee. It doesn't matter that we're talking about decorators in this case; it still comes down to the general principle of ownership.

Each decorator should only be concerned with disposing of itself and should never pass on the call to the decorated instance.

Correct. The decorator should dispose everything it owns, though, but since you're using dependency injection, it typically doesn't create much stuff itself and therefore doesn't own that stuff. Your UnitOfWork, on the other hand, creates a new MyContext class and therefore has ownership of that instance, so it should dispose of it.

There are exceptions to this rule, but it still comes down to ownership. Sometimes you do pass on ownership of a type to others. When using a factory method, for instance, by convention the factory method passes on the ownership of the created object to the caller. Sometimes ownership is passed on to a created object, as .NET's StreamReader class does. The API documentation is clear about this, but since the design is so unintuitive, developers keep tripping over this behavior. Most of the types in the .NET framework don't work this way. For instance, the SqlCommand class doesn't dispose the SqlConnection, and it would be very annoying if it did dispose of the connection.

A different way of looking at this issue is from the perspective of the SOLID principles. By letting the IUnitOfWork implement IDisposable you are violating the Dependency Inversion Principle, because "Abstractions should not depend on details; details should depend on abstractions". By implementing IDisposable you are leaking implementation details into the IUnitOfWork interface. Implementing IDisposable means that the class has unmanaged resources that need disposal, such as file handles and connection strings. These are implementation details, because it can hardly ever be the case that every implementation of such an interface actually needs disposal at all. You just have to create one fake or mock implementation for your unit tests and you have proof of an implementation that doesn't need disposal.

So when you fix this DIP violation by removing the IDisposable interface from IUnitOfWork - and moving it to the implementation - it becomes impossible for the decorator to dispose the decoratee, because it has no way of knowing whether or not the decoratee implements IDisposable. And this is good, because according to the DIP the decorator shouldn't know - and we already established that the decorator should not dispose the decoratee. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17943870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1515209/']} | jdg_380672 |
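The ownership rule from the answer above is language-agnostic. Here is an illustrative Python sketch (class and method names are invented for the example): the decorator leaves disposal alone, and the composition root - the code that created the decoratee - ends its lifetime.

```python
class Resource:
    """Stand-in for a disposable decoratee (e.g. a unit of work)."""
    def __init__(self):
        self.disposed = False

    def do_work(self):
        return "work"

    def dispose(self):
        self.disposed = True


class LoggingDecorator:
    """Decorates the resource but does NOT own it, so it never disposes it."""
    def __init__(self, inner):
        self._inner = inner

    def do_work(self):
        result = self._inner.do_work()
        print("did:", result)
        return result

    def dispose(self):
        # Dispose only what this object owns -- which is nothing here.
        pass


# Composition root: it created the decoratee, so it owns and disposes it.
resource = Resource()
decorated = LoggingDecorator(resource)
decorated.do_work()

decorated.dispose()        # transient decorator torn down first...
print(resource.disposed)   # False -- the decoratee is still alive
resource.dispose()         # ...the owner ends the real lifetime
print(resource.disposed)   # True
```

The transient decorator can come and go without affecting the longer-lived decoratee, which is exactly the mixed-lifestyle scenario in the question.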
stackexchange | llm_judgeable_groundtruth_similarity | 136555 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have really been hitting the example code hard this week in an attempt to better understand some HDL design basics, specifically FPGAs with VHDL. The book I am using (if anyone is interested) is "FPGA PROTOTYPING BY VHDL EXAMPLES" by Pong P. Chu. After a few examples, I am starting to wonder. How does someone initially design a digital system for HDL? (Flowchart/Block diagram? Signal list? etc) For example, I love to use Logisim to flesh out simple digital circuits. The graphical interface is easy to follow and I can get on-the-fly simulations without all the synthesis. But when I am satisfied with my Logisim design, I find it difficult to transfer that design into HDL. Is there a way to understand how you should structure your HDL design, or does it just come with practice?
Now provide the response and nothing else.
| I generally take a top-down design approach, and I start by drawing a block diagram that shows the interfaces among the top-level blocks. I then draw additional diagrams that represent the implementations of the top-level blocks in terms of lower-level blocks. This hierarchy of block diagrams translates pretty much directly to the hierarchy of the HDL modules. Once I get to a low enough level of detail on the block diagrams, I start coding and stop drawing diagrams. The block diagrams also function as dataflow diagrams, since they show at every stage how the data flows from one module to another. When it comes to specific interfaces between modules, I also draw timing diagrams that show the details of the interface protocol. I also use timing diagrams to keep track of the flow of data through the pipeline stages inside a module. In both cases, these diagrams serve as a reference when looking at waveforms in the simulator during verification. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/136555', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/5666/']} | jdg_380673 |
stackexchange | llm_judgeable_groundtruth_similarity | 42091674 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am practicing for Java fresher interview coding examples. I am trying to write a program to find duplicate numbers between 1 and N, where N is given by the user along with the numbers themselves. Here is the code:

import java.io.DataInputStream;
import java.io.IOException;

public class DuplicateNumbers {

    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(System.in);
        System.out.println(" Enter the number of numbers ");
        int a = in.readInt();
        int[] num = new int[a];
        System.out.println(" Enter the ints one by one ");
        for (int b = 0; b < a; b++) {
            System.out.println(" Enter no "+(b+1));
            num[b] = in.readInt();
        }
        int c = 0;
        for (int d = 0; d < a; d++) {
            int f = 0;
            c = num[d];
            for (int e = 0; e < a; e++) {
                if (c == num[e]) {
                    f++;
                }
            }
            if (f > 1)
                System.out.println(" Duplicate number "+c);
        }
    }
}

But I am getting the following error in Eclipse Neon:

Enter the number of numbers
5
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at DuplicateNumbers.main(DuplicateNumbers.java:14)

What is wrong? Why the JVM heap space error? The code compiles and runs fine.
Now provide the response and nothing else.
| DataInputStream is for binary, not text. When you type 4 bytes, they are turned into one 32-bit int value: e.g. 5, \n, \n, \n is about 900 million, which is why it complains about memory when you create the array. You can check this by stepping through the code in your debugger. What you need is text input; try using:

Scanner in = new Scanner(System.in);
System.out.println("Enter the number of numbers");
int a = in.nextInt();
in.nextLine(); // discard the rest of the line.

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42091674', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380674 |
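The "about 900 million" figure in the answer above can be reproduced by interpreting the typed characters the way readInt() does: four raw bytes read as one big-endian 32-bit integer (assuming the console delivered '5' followed by three newline bytes, as the answer suggests):

```python
# What DataInputStream.readInt() actually consumed: four raw bytes,
# interpreted as one big-endian 32-bit signed integer.
typed = b"5\n\n\n"
value = int.from_bytes(typed, byteorder="big", signed=True)

print(value)        # 889850378 -- "about 900 million"
print(hex(value))   # 0x350a0a0a
```

So `new int[a]` tries to allocate an array of roughly 890 million ints (about 3.3 GB), and the JVM throws OutOfMemoryError.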
stackexchange | llm_judgeable_groundtruth_similarity | 11334710 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
An application I'm working on needs to get HQ YouTube thumbnails given a URL. Using this answer, I tried using maxresdefault.jpg to get the maximum resolution of the thumbnail, but on some videos (seemingly random), the request 404s. Here are some examples:

http://img.youtube.com/vi/zIez5QVE3NI/maxresdefault.jpg works
http://img.youtube.com/vi/DaAfe5xTm8w/maxresdefault.jpg doesn't work

Using the same video keys, other thumbnail requests do work:

http://img.youtube.com/vi/zIez5QVE3NI/default.jpg works
http://img.youtube.com/vi/DaAfe5xTm8w/default.jpg works

Why doesn't maxresdefault.jpg reliably return the image? Both of those videos are public and can be embedded. I'd use 0.jpg, but it has black bars on the top and bottom of the image.
Now provide the response and nothing else.
| As I understand it, YouTube only generates high-res stills for high-res videos. The first one you link was uploaded at a resolution of at least 1080p, while the second was less than 720p. Looks like the only thing you can do is pull the video feed, check that to see if a high-res still has been generated and up-scale the smaller one yourself if not. All¹ YouTube stills are presented in the 4:3 aspect ratio, so will always have black bars at the top and bottom if the video is 16:9. In HTML/CSS, this is easily combated by setting the image as the background of a div and sizing the div to the correct aspect. ¹ Except the mqdefault.jpg stills, which are all in 16:9 aspect (320x180) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11334710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/823435/']} | jdg_380675 |
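A minimal sketch of the fallback idea from the answer above (Python; the file names are the publicly known thumbnail variants — an actual implementation would request each URL in order and keep the first that does not return 404):

```python
def thumbnail_candidates(video_id):
    """Ordered thumbnail URLs to try, best resolution first."""
    names = ["maxresdefault.jpg", "hqdefault.jpg", "mqdefault.jpg", "default.jpg"]
    return ["http://img.youtube.com/vi/%s/%s" % (video_id, n) for n in names]

for url in thumbnail_candidates("DaAfe5xTm8w"):
    print(url)  # try each in turn; use the first that does not 404
```

For videos uploaded below 720p, maxresdefault.jpg will 404 and the loop falls through to the lower-resolution stills.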
stackexchange | llm_judgeable_groundtruth_similarity | 24484272 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to convert my photos from JPG, GIF, and PNG to WebP format. When I try to use the cwebp command from CMD, I get this error message:

'cwebp' is not recognized as an internal or external command, operable program or batch file.

What should I do? I've downloaded all the files needed, such as libwebp-0.4.0-windows-x86.zip and WebpCodecSetup.exe. I've even installed Visual Studio to use its command prompt, but that didn't work! Is there anyone who can help me? One more question: does anyone know of a tool to reduce image size without losing quality?
Now provide the response and nothing else.
Download cwebp binaries (.exe) and run it with PowerShell:

# tip: on windows explorer shift + right-click a directory and copy its path
$dir = "path/to/photos/directory"

# get all files in the directory
$images = Get-ChildItem $dir

foreach ($img in $images) {
    # output file will be written in the same directory
    # but with .webp extension instead of old extension
    $outputName = $img.DirectoryName + "\" + $img.BaseName + ".webp"
    C:\webp-converter\libwebp-0.6.1-windows-x64\bin\cwebp.exe $img.FullName -o $outputName
}

See also cwebp options.
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24484272', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3382152/']} | jdg_380676
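The same output-name logic as the PowerShell loop, sketched in Python for anyone batch-converting from a script (the cwebp path below is the one from the answer — adjust it to wherever you unpacked the binaries):

```python
import os
import subprocess

CWEBP = r"C:\webp-converter\libwebp-0.6.1-windows-x64\bin\cwebp.exe"  # adjust to your install

def webp_output_path(image_path):
    """Same directory, same base name, .webp extension."""
    root, _ext = os.path.splitext(image_path)
    return root + ".webp"

def convert(image_path):
    # equivalent of: cwebp.exe input -o output
    subprocess.run([CWEBP, image_path, "-o", webp_output_path(image_path)], check=True)

print(webp_output_path(r"C:\photos\cat.png"))  # C:\photos\cat.webp
```

Call convert() for each file returned by os.listdir() (or glob) to mirror the Get-ChildItem loop.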
stackexchange | llm_judgeable_groundtruth_similarity | 2774343 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
On the subject of associating your iPhone app with file types. In this informative question I learned that apps could be associated with custom URL protocols. That was almost one year ago and since then Apple introduced 'Document Support' which goes a step further and allows apps to associate with file types. There is a lot of talk in the documentation about how to set up your app to launch other appropriate apps when it encounters an unknown file type. This means the association doesn't work out of the box for any app, like the URL protocol registering did. This leads me to the question: have system apps like Safari or Mail implemented this system for choosing associated applications, or will they do nothing, as before?
Now provide the response and nothing else.
File type handling is new with iPhone OS 3.2, and is different than the already-existing custom URL schemes. You can register your application to handle particular document types, and any application that uses a document controller can hand off processing of these documents to your own application. For example, my application Molecules (for which the source code is available) handles the .pdb and .pdb.gz file types, if received via email or in another supported application. To register support, you will need to have something like the following in your Info.plist:

<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeIconFiles</key>
        <array>
            <string>Document-molecules-320.png</string>
            <string>Document-molecules-64.png</string>
        </array>
        <key>CFBundleTypeName</key>
        <string>Molecules Structure File</string>
        <key>CFBundleTypeRole</key>
        <string>Viewer</string>
        <key>LSHandlerRank</key>
        <string>Owner</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>com.sunsetlakesoftware.molecules.pdb</string>
            <string>org.gnu.gnu-zip-archive</string>
        </array>
    </dict>
</array>

Two images are provided that will be used as icons for the supported types in Mail and other applications capable of showing documents. The LSItemContentTypes key lets you provide an array of Uniform Type Identifiers (UTIs) that your application can open. For a list of system-defined UTIs, see Apple's Uniform Type Identifiers Reference. Even more detail on UTIs can be found in Apple's Uniform Type Identifiers Overview. Those guides reside in the Mac developer center, because this capability has been ported across from the Mac. One of the UTIs used in the above example was system-defined, but the other was an application-specific UTI. The application-specific UTI will need to be exported so that other applications on the system can be made aware of it.
To do this, you would add a section to your Info.plist like the following:

<key>UTExportedTypeDeclarations</key>
<array>
    <dict>
        <key>UTTypeConformsTo</key>
        <array>
            <string>public.plain-text</string>
            <string>public.text</string>
        </array>
        <key>UTTypeDescription</key>
        <string>Molecules Structure File</string>
        <key>UTTypeIdentifier</key>
        <string>com.sunsetlakesoftware.molecules.pdb</string>
        <key>UTTypeTagSpecification</key>
        <dict>
            <key>public.filename-extension</key>
            <string>pdb</string>
            <key>public.mime-type</key>
            <string>chemical/x-pdb</string>
        </dict>
    </dict>
</array>

This particular example exports the com.sunsetlakesoftware.molecules.pdb UTI with the .pdb file extension, corresponding to the MIME type chemical/x-pdb. With this in place, your application will be able to handle documents attached to emails or from other applications on the system. In Mail, you can tap-and-hold to bring up a list of applications that can open a particular attachment. When the attachment is opened, your application will be started and you will need to handle the processing of this file in your -application:didFinishLaunchingWithOptions: application delegate method. It appears that files loaded in this manner from Mail are copied into your application's Documents directory under a subdirectory corresponding to what email box they arrived in. You can get the URL for this file within the application delegate method using code like the following:

NSURL *url = (NSURL *)[launchOptions valueForKey:UIApplicationLaunchOptionsURLKey];

Note that this is the same approach we used for handling custom URL schemes. You can separate the file URLs from others by using code like the following:

if ([url isFileURL])
{
    // Handle file being passed in
}
else
{
    // Handle custom URL scheme
}
 | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2774343', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64167/']} | jdg_380677
stackexchange | llm_judgeable_groundtruth_similarity | 16497788 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a mobile site for a local company using jQuery Mobile. Here is what I have so far. So far, it has turned out well, but I'm running into a few problems.

1. I do not know how to change the header color. I have tried different data-themes. I have tried to use a custom CSS style sheet. But nothing I do works.

Edit - OK, so apparently the head tag doesn't get a data-role like the other parts of the page, so I removed that. But I still need to figure out how to change the color. The CSS I write for it seems to get overwritten. Here is the actual header:

<div data-role="header" data-theme="c">

It seems like data roles for headers don't do anything.

2. The "call us" button has an href that lets you dial a phone number. The problem is that ever since I put it in there, it creates a link style around the box that is pretty noticeable. Here is a screen shot. How do I stop that style from being made? I have already tried CSS to stop it:

a:link {color:#FF0000;}    /* unvisited link */
a:visited {color:#00FF00;} /* visited link */

These work, but only on the expandable list at the bottom of the page. Why do they not work for all buttons?
Now provide the response and nothing else.
Header background color

I made you a working example: http://jsfiddle.net/Gajotres/5VWuy/

.ui-page .ui-header {
    background: #112233 !important;
}

If you want to change it only on a specific page, then replace .ui-page with a page id, like this:

#index .ui-header {
    background: #112233 !important;
}

Button problem

In this case don't wrap your a tag with a button. An a tag with data-role="button" is a button, so you can do it like this:

<a href="tel:8149413000" data-role="button" rel="external" data-theme="c" data-icon="custom-phone" data-iconpos="top">Call Us</a>

You can find this example in my previous jsFiddle.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16497788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2070478/']} | jdg_380678
stackexchange | llm_judgeable_groundtruth_similarity | 41400395 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I ran the Laravel command for the authentication system, php artisan make:auth. It made the authentication system for my app, and almost everything is working. Now, when I use "forgot password" and it sends a token to my mail ID, I see that the template contains "Laravel" and some other things that I might want to edit or omit; to be precise, I want my custom template to be used there. I looked at the controllers and their source files, but I can't find the template or the code that is displaying the HTML in the mail. How do I do it? How do I change it? This is the default template that comes from Laravel to the mail.
Now provide the response and nothing else.
Just a heads up: in addition to the previous answer, there are additional steps if you want to modify the notification lines like "You are receiving this...", etc. Below is a step-by-step guide.

You'll need to override the default sendPasswordResetNotification method on your User model. Why? Because the lines are pulled from Illuminate\Auth\Notifications\ResetPassword.php. Modifying it in the core will mean your changes are lost during an update of Laravel.

To do this, add the following to your User model:

use App\Notifications\PasswordReset; // Or the location where you store your notifications (this is the default).

/**
 * Send the password reset notification.
 *
 * @param string $token
 * @return void
 */
public function sendPasswordResetNotification($token)
{
    $this->notify(new PasswordReset($token));
}

Lastly, create that notification:

php artisan make:notification PasswordReset

An example of this notification's content:

/**
 * The password reset token.
 *
 * @var string
 */
public $token;

/**
 * Create a new notification instance.
 *
 * @return void
 */
public function __construct($token)
{
    $this->token = $token;
}

/**
 * Get the notification's delivery channels.
 *
 * @param mixed $notifiable
 * @return array
 */
public function via($notifiable)
{
    return ['mail'];
}

/**
 * Build the mail representation of the notification.
 *
 * @param mixed $notifiable
 * @return \Illuminate\Notifications\Messages\MailMessage
 */
public function toMail($notifiable)
{
    return (new MailMessage)
        ->line('You are receiving this email because we received a password reset request for your account.') // Here are the lines you can safely override
        ->action('Reset Password', url('password/reset', $this->token))
        ->line('If you did not request a password reset, no further action is required.');
}
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41400395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5871514/']} | jdg_380679
stackexchange | llm_judgeable_groundtruth_similarity | 14024798 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have the following command:

$> MYENVVAR=myfolder echo $MYENVVAR && MYENVVAR=myfolder ls $MYENVVAR

I mean that MYENVVAR=myfolder repeats. Is it possible to set it once for both "&&"-separated commands while keeping the command on one line?
Now provide the response and nothing else.
Assuming you actually need it as an environment variable (even though the example code does not really need an environment variable; some shell variables are not environment variables):

(export MYENVVAR=myfolder; echo $MYENVVAR && ls $MYENVVAR)

If you don't need it as an environment variable, then:

(MYENVVAR=myfolder; echo $MYENVVAR && ls $MYENVVAR)

The parentheses create a sub-shell; environment variables (and plain variables) set in the sub-shell do not affect the parent shell. In both commands shown, the variable is set once and then used twice, once by each of the two commands.
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14024798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131618/']} | jdg_380680
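The same set-once-in-a-subshell idea can be demonstrated from a script. A minimal Python sketch (it uses echo twice instead of ls so it runs without creating a directory, and assumes a POSIX sh is available):

```python
import os
import subprocess

# Set the variable once for a subshell that runs both commands,
# mirroring: (export MYENVVAR=myfolder; echo $MYENVVAR && ls $MYENVVAR)
env = {**os.environ, "MYENVVAR": "myfolder"}
result = subprocess.run(
    ["sh", "-c", 'echo "$MYENVVAR" && echo "$MYENVVAR"'],
    env=env, capture_output=True, text=True,
)
print(result.stdout, end="")  # "myfolder" printed twice, one assignment
```

The parent process environment is untouched, just as the parent shell is untouched by the sub-shell assignment.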
stackexchange | llm_judgeable_groundtruth_similarity | 421079 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am developing code mainly using Bash, C, Python and Fortran, and recently also HTML/CSS+JavaScript. My OS is Ubuntu. Maybe I am exaggerating, but I figured that I kind of spend more time getting software (Debian and Python packages mainly, sometimes also from source) to be installed properly than actually developing code. And I am not talking about coding vs. debugging; debugging is part of coding for me. It happens so often that I update my Linux packages and then my Python packages, and my software does not work anymore because some .so files have another name now and Python does not find them anymore. Or I set up a totally clean Ubuntu VM, install a package with pip, and get two screens of error messages because some Debian package was not installed. I am not a system administrator; I enjoy developing software. But this just annoys me. I do not want to inform myself about all 157 Python packages and thousands of Debian packages I have on my system and know what their dependencies are. I want to write code and implement new functionality in my code. What am I doing wrong?
Now provide the response and nothing else.
What am I doing wrong? You're trying to develop in an environment where you're also the sysadmin, devops and the local technical product owner for every pip package you use - and you're assuming that the sysadmin, devops and TPO roles should be no effort just because they're not what you're interested in. Those are paid full-time jobs (ok, maybe not TPO) that people have because they are not trivial. Maintaining up-to-date development environments can be a lot of work.

The usual approaches are:

1. to work for a large enough organization that it's someone else's job
2. to somehow automate it (which is why things like conda and docker exist - although this is still a non-trivial amount of work you'd prefer the person from #1 to do instead)
3. to just update infrequently

Specifically, you have two different package managers (apt and pip) that don't know much about each other and aren't co-ordinated. I'd recommend you:

- get a working initial development environment
- choose some way to be able to clone that environment when you want a new VM (or docker or other) container starting at a working baseline
- don't update it at all unless there's a specific feature or security update you want
- don't update it when you actually want to be developing, because you'll get frustrated whenever it doesn't work instantly
- ideally perform updates in a clone, so you can give up and get back to developing in a working environment if it is more broken than you can face fixing right away
 | {} | {'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/421079', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/383168/']} | jdg_380681
stackexchange | llm_judgeable_groundtruth_similarity | 77334 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Convolve[Sinc[x], Exp[-x^2], x, X]
(* E^-X^2 π *)

is obviously false, but why? Any suggestions?
Now provide the response and nothing else.
$Version
(*"8.0 for Microsoft Windows (64-bit) (October 7, 2011)"*)

Direct attack fails:

Timing[Convolve[Sinc[x], Exp[-x^2], x, y]]
(*Out[218]= {59.296, Convolve[Sinc[x], E^-x^2, x, y]}*)

or, equivalently,

Timing[Integrate[Sinc[x] Exp[-(x - y)^2], {x, -∞, ∞}]]

$\left\{49.92,\int_{-\infty }^{\infty } e^{-(x-y)^2} \text{Sinc}[x] \, dx\right\}$

Let us now solve the problem using the Fourier transform. We have

FourierTransform[Sinc[x], x, t]
(*Out[206]= 1/2 Sqrt[π/2] (Sign[1 - t] + Sign[1 + t])*)

Therefore we can write

InverseFourierTransform[1/2 Sqrt[π/2] (Sign[1 - t] + Sign[1 + t]), t, x]
(*Out[207]= Sin[x]/x*)

or, explicitly,

Integrate[Exp[-I t x] (1/4 (Sign[1 - t] + Sign[1 + t])), {t, -∞, ∞}]
(*Out[211]= Sin[x]/x*)

Therefore, doing the x integration first, we have

Integrate[Exp[-I t x] (1/4 (Sign[1 - t] + Sign[1 + t])) Exp[-(x - y)^2], {x, -∞, ∞}]
(*Out[212]= 1/2 E^(-(1/4) t (t + 4 I y)) Sqrt[π] (Sign[1 - t] + Sign[1 + t])*)

and the t integration finalizes the calculation, giving for the convolution the following result:

Integrate[1/4 E^(-(1/4) t (t + 4 I y)) Sqrt[π] (Sign[1 - t] + Sign[1 + t]), {t, -∞, ∞}]
(*Out[215]= 1/2 E^-y^2 π (Erf[1/2 - I y] + Erf[1/2 + I y])*)

EDIT #1 17.03.15 Comparison and analysis

Let us first compare the answers of Wolfgang and Jens:

fWolfgang[y_] := 1/2 E^-y^2 π (Erf[1/2 - I y] + Erf[1/2 + I y])
fJens[X_] := -(1/2) E^-X^2 Pi Erfc[1/2 - I X] - 1/2 E^-X^2 Pi Erfc[1/2 + I X]

Because $erfc(z) = 1 - erf(z)$ this can be written

f1Jens[y_] := -(1/2) E^-y^2 π (1 - Erf[1/2 - I y] + 1 - Erf[1/2 + I y])

The difference is just

fWolfgang[x] - f1Jens[x] // Simplify
(*Out[9]= E^-x^2 π*)

Now, in order to see the "true" result, let's define the numeric integral

fNum[y_] := NIntegrate[Sin[x]/x Exp[-(y - x)^2], {x, -∞, ∞}]

Comparing the results graphically (ignoring the error messages of the integration) gives

Plot[{fWolfgang[y] + 0.1, fNum[y]}, {y, -6, 6}]
(* 150317_Plot _fW _fN *)

which "proves" that fWolfgang is correct.
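Outside Mathematica, the same numerical comparison can be reproduced with a few lines of plain Python (a simple trapezoidal rule; the integration window is chosen wide enough that the Gaussian tail is negligible):

```python
import math

def convolution(y, lo=-20.0, hi=22.0, n=200001):
    """Trapezoidal estimate of Integrate[Sinc[x] Exp[-(x - y)^2], {x, -oo, oo}]."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        sinc = 1.0 if x == 0.0 else math.sin(x) / x
        f = sinc * math.exp(-(x - y) ** 2)
        total += f / 2 if i in (0, n - 1) else f
    return total * h

print(convolution(0.0))  # ~1.6352, i.e. Pi Erf[1/2], matching fWolfgang[0.]
print(convolution(1.0))  # ~1.39248, matching fWolfgang[1.]
```

The values agree with the closed form 1/2 E^-y^2 π (Erf[1/2 - I y] + Erf[1/2 + I y]) to well below the plotting accuracy.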
Do you want still another incorrect result from correct input? Here we go. Writing Sin[x] = 1/(2 I) (Exp[I x] - Exp[-I x]), our integral becomes

fSplit[y_] = Integrate[(Exp[I x] - Exp[-I x])/(2 I x) Exp[-(y - x)^2], {x, -∞, ∞}]
(*Out[16]= 1/2 I E^-y^2 (Log[-I - 2 y] - Log[I - 2 y] + Log[-I + 2 y] - Log[I + 2 y])*)

Here there is not even an error function. And the result is obviously wrong:

fSplit[0.]
(*Out[19]= 3.14159 + 0. I*)

fWolfgang[0.]
(*Out[20]= 1.6352 + 0. I*)

The same (wrong) result is obtained using the option PrincipalValue -> True in order to tell Mathematica how to deal with the false pole at x = 0. But let's look at the ostensible pole in more detail. This integral is obviously divergent at x = 0:

Integrate[Exp[I x]/(2 I x) Exp[-(y - x)^2], {x, -∞, ∞}]

During evaluation of In[21]:= Integrate::idiv: Integral of E^(I x-(-x+y)^2)/x does not converge on {-∞,∞}. >>

$\int_{-\infty }^{\infty } -\frac{i e^{i x-(-x+y)^2}}{2 x} \, dx$

But taking the principal value, the result is finite:

Integrate[Exp[I x]/(2 I x) Exp[-(y - x)^2], {x, -∞, ∞}, PrincipalValue -> True]
(*Out[22]= 1/2 I E^-y^2 (Log[-I - 2 y] - Log[I + 2 y])*)

and it is part of the wrong result fSplit[] above. The integral can also be written as a Fourier transform:

Sqrt[2 π] FourierTransform[1/(2 I x) Exp[-(y - x)^2], x, t] /. t -> 1
(*Out[27]= -(1/2) I E^-y^2 (-Log[-I - 2 y] + Log[I + 2 y])*)

But it still leads to the same wrong result. Summarizing, we find that when splitting Sin[] into a sum of complex exponentials, the resulting integral leads rather consistently to a wrong result. Concluding: (i) it is not only Convolve which produces wrong results, but related integrals do as well; (ii) I realize that I was just lucky to have found the correct result by attacking the problem using the Fourier transformation.

EDIT #2

In order to mitigate the pessimistic outlook, here's a positive message: we can replace the lengthy Fourier-transform approach by this one. A pole 1/x can be produced by an auxiliary integration.
Indeed, we can write

Integrate[Cos[t x], {t, 0, 1}]
(* Out[45]= Sin[x]/x *)

Changing the order of integration, doing the x-integral first, we get

Integrate[Cos[t x] Exp[-(x - y)^2], {x, -∞, ∞}]
(* Out[46]= 1/2 E^(-(1/4) t (t + 4 I y)) (1 + E^(2 I t y)) Sqrt[π] *)

and doing the t-integral subsequently we have

Integrate[%, {t, 0, 1}]
(*Out[47]= 1/2 E^-y^2 π (Erf[1/2 - I y] + Erf[1/2 + I y])*)

which is the correct result. Check:

% /. y -> 1.
(*Out[44]= 1.39248 + 0. I*)

We can also "save" Convolve. We have to consider (before the t-integration)

Convolve[Cos[t x], Exp[-x^2], x, y]
(*Out[57]= 1/2 E^(-(1/4) t (t + 4 I y)) (1 + E^(2 I t y)) Sqrt[π]*)

and the t-integral

Integrate[%, {t, 0, 1}]
(*Out[55]= 1/2 E^-y^2 π (Erf[1/2 - I y] + Erf[1/2 + I y])*)

% /. y -> 1.
(*Out[56]= 1.39248 + 0. I*)

gives the correct result.

EDIT #3

I have found a transparent way to generate the result of Jens. This shows where the problem lies. Let us again consider the integral

h = 1/(2 I) Integrate[Exp[I x]/x Exp[-(x - y)^2], {x, -∞, ∞}]

and let us shift the integration variable thus:

Exp[I x]/x Exp[-(x - y)^2] /. x -> u + y
(*Out[66]= E^(-u^2 + I (u + y))/(u + y)*)

Completing the square,

Expand[-(u - I/2)^2]
(*Out[67]= 1/4 + I u - u^2*)

we can write

$\text{Exp}[i y-1/4]\int_{-\infty }^{\infty } \frac{e^{-(u-i/2)^2}}{y+u} \, du$

Now shifting again, this time into the complex plane,

E^-(u - I/2)^2/(y + u) /. 
u -> v + I/2
(*Out[68]= E^-v^2/(I/2 + v + y)*)

giving

$\text{Exp}[i y-1/4]\int_{-\infty -i/2}^{\infty -i/2} \frac{e^{-v^2}}{y+v+i/2} \, dv$

Now the trick from the good old university days when calculating the Fourier transform of Exp[-x^2]: we shift the integration path in the u-plane, which lies 1/2 units below the real axis and parallel to it, to the real axis. This gives

$\frac{1}{2i}\text{Exp}[i y-1/4]\int_{-\infty }^{\infty } \frac{e^{-w^2}}{y+w+i/2} \, dw$

(*Out[69]= ConditionalExpression[-(1/2) I E^(-(1/4) + I y - 1/4 (I + 2 y)^2) (I π Erf[1/2 - I y] + Log[-I - 2 y] - Log[I + 2 y]), Im[y] != -(1/2)]*)

Taking the input format, we can add the condition that y > 0 (for instance):

(1/(2*I))*Exp[I*y - 1/4]*
 Integrate[1/(E^w^2*(y + w + I/2)), {w, -Infinity, Infinity},
  Assumptions -> y ∈ Reals] // Simplify
(*Out[71]= -(1/2) E^-y^2 π Erfc[1/2 - I y]*)

For the complete (sinc) integral we need to add the complex conjugate h* of it, giving

fShift[y_] = -(1/2) E^-y^2 π Erfc[1/2 - I y] + -(1/2) E^-y^2 π Erfc[1/2 + I y]
(*Out[72]= -(1/2) E^-y^2 π Erfc[1/2 - I y] - 1/2 E^-y^2 π Erfc[1/2 + I y]*)

fShift[1.]
(* Out[73]= 0.236748 + 0. I *)

We can easily verify that this function is exactly the result fJens[] of Jens. This means, however, that the clue lies in the shifting of the integration path. This shifting blurs the required exact treatment of the pole.

EDIT #4: Miscellaneous results

1) Proof by series expansion

Just to fill a small gap: in the "proof" of correctness of fWolfgang we resorted to numerical integration.
Now we shall do it by expansion into power series.

fWolfgang[y]
(*Out[142]= 1/2 E^-y^2 π (Erf[1/2 - I y] + Erf[1/2 + I y])*)

Series[fWolfgang[y], {y, 0, 6}] // Normal
(*Out[154]= π Erf[1/2] + y^2 (Sqrt[π]/E^(1/4) - π Erf[1/2]) + y^6 ((71 Sqrt[π])/(360 E^(1/4)) - 1/6 π Erf[1/2]) + y^4 (-((7 Sqrt[π])/(12 E^(1/4))) + 1/2 π Erf[1/2])*)

Expanding the expression Exp[-(x-y)^2] in the integrand with respect to y, and integrating term by term, gives up to the order y^6:

Collect[Integrate[Sin[x]/x Series[Exp[-(x - y)^2], {y, 0, 6}] // Normal, {x, -∞, ∞}] // Expand, y]
(*Out[163]= π Erf[1/2] + y^2 (Sqrt[π]/E^(1/4) - π Erf[1/2]) + y^6 ((71 Sqrt[π])/(360 E^(1/4)) - 1/6 π Erf[1/2]) + y^4 (-((7 Sqrt[π])/(12 E^(1/4))) + 1/2 π Erf[1/2])*)

which agrees with the expansion of fWolfgang. This is not a strict proof, of course, as we have considered only a finite number of terms. But I promise a bottle of fine German beer to the first one who finds a term which does not agree.

2) The innocent "pole"

The "pole" at x = 0 alone is not the cause of trouble. Look at this example, where I have replaced the Gaussian by a Cauchy weight:

Convolve[(Sin[x]/x), 1/(1 + x^2), x, y, Assumptions -> y > 0]
(*Out[197]= (π (E - Cos[y] + y Sin[y]))/(E (1 + y^2))*)

or, in explicit form,

Integrate[(Sin[x]/x) 1/(1 + (x - y)^2), {x, -∞, ∞}, Assumptions -> y > 0]
(*Out[200]= (π (E - Cos[y] + y Sin[y]))/(E (1 + y^2))*)

Both operations are performed by Mathematica without problems. I conclude that it is the combination of the "pole" and the essential singularity of Exp[-x^2] at infinity which gives rise to the observed difficulties.
 | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/77334', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/27019/']} | jdg_380682
stackexchange | llm_judgeable_groundtruth_similarity | 553887 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The famous $\Lambda_{\textrm{QCD}}$ parameter enters through the one-loop running of the QCD coupling, through a relation similar to the following: $$\alpha_S(Q^2)=\frac{\alpha_S(Q^2_0)}{1+b\ln(Q^2/Q^2_0)}\equiv\frac{\alpha_S(Q^2_0)}{b\ln(Q^2/\Lambda^2_\textrm{QCD})}$$ My question is simple: how does this equation, and thus any definition of $\Lambda_{\textrm{QCD}}$, have anything to do with QCD in the non-perturbative regime, where $\alpha_S>1$ and thus these equations break down? Here $Q$ is strictly just an arbitrary renormalization scale, but it could also be an energy scale in a particular process we are considering. I know the conformal/trace anomaly in QCD is given by $T^\mu_\mu\sim \beta(\alpha_S)F^2$, where $\beta(\alpha_S)$ is the beta function. But is this an all-orders result? (i.e. are all the orders of $\alpha_S$ resummed correctly in $\beta(\alpha_S)$?) Also, I know that the mass of a hadronic state is given by the trace anomaly $\langle P|T|P\rangle\sim M^2$, but surely this cannot mean that $M^2\sim\beta(\alpha_S)$, because the beta function is scheme dependent, whereas the mass of a hadron is totally physical. But neither of the two previous paragraphs says anything about what happens when $Q\sim\Lambda_{\textrm{QCD}}$ and $\alpha_S\sim 1$. If this truly is how the $\Lambda_{\textrm{QCD}}$ parameter enters into life, I don't see how it isn't just a mere artifact of the logarithms which appear in the perturbative regime. It would then seem possible for its relevance to disappear if we somehow calculated the beta function to all loops.
Now provide the response and nothing else.
| You are quite correct when you say that $\Lambda_{QCD}$ may be an artifact of perturbation theory. This actually is the current interpretation of $\Lambda_{QCD}$ , based on a few observations. $$$$ Why we believe that $\boldsymbol{\Lambda_{QCD}}$ is a perturbative artifact First of all, observe that the actual value of $\Lambda_{QCD}$ depends on the order in perturbation theory (and renormalization scheme) in which you are computing the running coupling. By adding higher-order terms to the beta function $\beta(\alpha_{s})$ you change the shape of the running coupling $\alpha_{s}(Q^2)$ which solves the Callan-Symanzik equation for the coupling; therefore the position of the Landau pole is shifted order by order and you have one $\Lambda_{QCD}$ for each non-trivial order in perturbation theory (each of which, except the first and second, depend on the renormalization scheme). You may expect adding orders in perturbation theory to shift the Landau pole to a lower position in momentum space, so as to enable the access to physics at lower and lower momenta. However, this is not what happens: at least to five loops (and for a sufficiently small number of fermions) the higher-order coefficients of the beta function are negative as much as the one-loop order coefficient $\beta_{0}$ , so that the derivative of the running coupling is more and more negative and $\alpha_{s}(Q^{2})$ diverges earlier in momentum space ( $\Lambda_{QCD}$ is shifted to higher momenta). Of course, there may be an order at which the coefficients change sign and the coupling is allowed to decrease. I must say that currently the evidence is not in favor of this behavior. Setting aside this issue, the point I want to make here is that $\Lambda_{QCD}$ is an intrinsically perturbative scale: it is defined in the context of perturbation theory and has different values at different perturbative orders (and in different renormalization schemes). 
The second reason to believe that $\Lambda_{QCD}$ is an artifact of perturbation theory is that QCD is expected to describe the physics of the strong interactions down to zero momentum. Therefore, if the $\alpha_{s}$ that appears in the QCD action is to have any meaning at low momenta, it simply cannot have a Landau pole. Today we know that QCD describes the strong interactions also in the non-perturbative regime thanks to lattice QCD, which was able (for instance) to predict the masses of the meson octet, baryon decuplet and more to an astonishing degree of accuracy. Since lattice QCD exploits an intrinsically non-perturbative approach to computations in QCD, $\Lambda_{QCD}$ is not part of the definition of the theory. Indeed, in lattice QCD a running coupling is not even required to exist. Nonetheless, many definitions can be given of $\alpha_{s}(Q^{2})$ in the lattice framework, all of which must reduce to the standard one in the UV. $$$$ An enlightening definition of $\boldsymbol{\alpha_{s}(Q^{2})}$ The definition of $\alpha_{s}(Q^{2})$ which, in my opinion, sheds more light on the interpretation of $\Lambda_{QCD}$ is the one given in the Landau gauge and Taylor scheme, namely $$\alpha_{s}(Q^{2})=\alpha_{s}(Q^{2}_{0})\,J(Q^{2};Q^{2}_{0})\,\chi^{2}(Q^{2};Q^{2}_{0}),$$ where $J(Q^{2};Q^{2}_{0})$ and $\chi(Q^{2};Q^{2}_{0})$ are the gluon and ghost dressing functions renormalized at the scale $Q^{2}_{0}$ : $$J(Q^{2};Q^{2}_{0})=Q^{2}D(Q^{2};Q^{2}_{0}),\\ \chi(Q^{2};Q^{2}_{0})=Q^{2}G(Q^{2};Q^{2}_{0}),$$ with $D(Q^{2};Q^{2}_{0})$ and $G(Q^{2};Q^{2}_{0})$ the transverse-gluon and ghost propagators renormalized at the scale $Q^{2}_{0}$ (observe that $J(Q^{2}_{0};Q^{2}_{0})=\chi(Q^{2}_{0};Q^{2}_{0})=1$ by definition). This definition is suitable both in a perturbative and in a non-perturbative setting, since the propagators can be computed in both. In the Landau gauge, it is equivalent to the standard definition of $\alpha_{s}(Q^{2})$ up to two loops. 
For instance, to one loop and in the Landau gauge, one can compute that $$J(Q^{2};Q^{2}_{0})=\left[\frac{\alpha_{s}(Q^{2})}{\alpha_{s}(Q^{2}_{0})}\right]^{13/22}\ ,\qquad \chi(Q^{2};Q^{2}_{0})=\left[\frac{\alpha_{s}(Q^{2})}{\alpha_{s}(Q^{2}_{0})}\right]^{9/44}\ ,$$ where $\alpha_{s}(Q^{2})$ is the ordinary one-loop running coupling. On the lattice, one can compute the gluon and ghost propagators and take the product of their dressing functions to obtain a non-perturbative version of $\alpha_{s}(Q^{2})$ . The result is contained for instance in Fig. 4 of this article (the computations are made without quarks, but the conclusions are the same). As you can see, on the lattice the Taylor-scheme $\alpha_{s}(Q^{2})$ has no Landau pole: somewhat below 1 GeV (in the absence of quarks), the pole is replaced by a maximum. Moreover, at lower momenta the running coupling decreases until it goes to zero at zero momentum (don't be fooled by this, at zero momentum there can be something else which blows up, giving rise to finite effects!). This is an example of a running coupling, computed non-perturbatively, which is finite in the IR. What role does $\Lambda_{QCD}$ play in this setting? By itself, none at all. Nonetheless, at high energies the Taylor-scheme coupling computed on the lattice reduces to the standard running coupling. Therefore the high-energy behavior of the Taylor-scheme coupling can indeed be parametrized by the curve (approximating to one loop) $$\alpha_{s}(Q^{2})=\frac{4\pi}{\beta_{0}\ln(Q^{2}/\Lambda^{2}_{QCD})}.$$ Here however $\Lambda_{QCD}$ is a fitting parameter, rather than the position of a pole. $$$$ What's going on with the Taylor coupling? Mass effects on the running of the strong coupling At this point you might be wondering why and how does the Landau pole disappear from the non-perturbative running coupling. 
In the context of the Taylor scheme and Landau gauge, this question admits a fairly straightforward answer: the finiteness of the coupling can be viewed as being caused by mass effects on its running. In order to illustrate this point I will take QED as an example. In high-but-not-too-high-energy QED the running coupling can be expressed as $$\alpha(Q^{2})=\frac{4\pi}{\beta_{0}\ln(\Lambda^{2}/Q^{2})}\qquad(\beta_{0}>0),$$ where $\Lambda\sim 10^{286}$ eV can be defined in analogy to $\Lambda_{QCD}$ . In the $Q^{2}\to 0$ limit, this expression would imply $\alpha(Q^{2})\to0$ , which however is not the correct result. This is because the expression given above does not take into account the mass effects on the running of the coupling due to the electron mass $m_{e}$ being non-zero (recall that most of the elementary derivations of the beta functions go like "Let us suppose that all the masses can be set to zero, then ..."). The correct result instead is $$\alpha(Q^{2})=\alpha(Q^{2}_{0})J(Q^{2};Q_{0}^{2})=\frac{\alpha(Q_{0}^{2})}{1-\Pi(Q^{2};Q_{0}^{2})}$$ where $\Pi(Q^{2};Q_{0}^{2})$ is the photon polarization renormalized at $Q_{0}^{2}$ (notice the similarity of the above equation with the definition of the strong coupling in the Taylor scheme). This expression yields a finite, non-zero result in the limit $Q^{2}\to 0$ , and more generally an IR behavior for $\alpha(Q^{2})$ which is not simply logarithmic. More generally, at low momenta, one must take into account the mass effects. You might expect that I'm referring to the quarks' masses, as I did above for the electron's mass. However, I'm not. What I'm talking about is the gluon mass. Indeed, it has now been established that at low energies, due to non-perturbative effects, the gluons acquire a dynamically generated mass.
This mass is not expected to explicitly break gauge invariance (although it might be caused by some form of spontaneous symmetry breaking), so it is somewhat a "safe" mass, unlike an explicit mass term in the QCD Lagrangian. At high energies the gluon mass, which is a function of momentum, decreases, until it becomes negligible and the ordinary massless gluons are recovered. The dynamical generation of a mass for the gluons affects the form of the transverse gluon propagator: instead of growing to infinity as $p\to 0$ as would happen for a massless propagator, the gluon propagator saturates to a finite value (see e.g. Fig. 1 in the article I've already cited). In the context of the Taylor scheme, the existence of a non-perturbative gluon mass scale modifies the form of the beta function with respect to the naive expectations: if there exists an intrinsic mass scale in the theory, then the beta function coefficients are allowed to depend on momentum, rather than being constants. The specific form of these coefficients is framework-dependent, but the general idea is that the gluon mass screens the coupling from becoming infinite by reducing the value of the beta function at small scales: smaller beta implies slower running, hence the possibility of avoiding the Landau pole. The results I'm describing cannot be obtained in ordinary perturbation theory: dynamical mass generation for the gluons cannot be described in an ordinary perturbative setting due to perturbative constraints imposed by gauge invariance. Nonetheless, they are currently accepted results which come from lattice studies and other numerical approaches such as those which use the Schwinger-Dyson Equations. Some analytic approaches also managed to obtain similar results. $$$$ Conclusions In non-perturbative formulations of QCD $\Lambda_{QCD}$ does not play a prominent role (if any) in the definition of the running coupling.
At best it has the role of a fitting parameter for the high-energy behavior of the coupling. In renormalization schemes such as the Taylor scheme the running coupling can actually be computed in the non-perturbative regime and shown to remain finite. The mass effects caused by the dynamical generation of a mass for the gluons may be responsible for the finiteness of the coupling (this is most certainly true in the Taylor scheme, whereas in other schemes the issue is still open). $$$$ Sidenotes Yes, $T^{\mu}_{\mu}\sim \beta F^{2}$ is valid to all orders. Observe that $T^{\mu}_{\mu}$ is RG-invariant and scheme-independent, so $M^{2}$ also is. The product $\beta F^{2}$ is RG-invariant and scheme-independent, unlike the two factors taken separately. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/553887', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/62755/']} | jdg_380683 |
stackexchange | llm_judgeable_groundtruth_similarity | 713311 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
When creating a new forest in Active Directory on my domain controller running Windows Server 2012 R2, I was prompted to specify a root domain name. Must the domain name be registered and owned by me? What would happen if I entered a domain registered and owned by other people, like microsoft.com? Later on, when I try to add a Windows computer to this domain, will it go out onto the internet and search for microsoft.com, or would it search only in its subnet (my domain controller)? Would it be safe/preferable to just enter a domain that is already owned, like microsoft.com?
Now provide the response and nothing else.
| The name of an Active Directory domain is only for internal usage, thus you could name it anything you want; however, in an Active Directory environment, the domain name also acts as the DNS suffix for all computers in the domain, and the domain controllers act as internal DNS servers which are (or at least behave as if they were) authoritative for that DNS domain. What this means is, if the AD domain name conflicts with an actual domain name that exists on the Internet, all DNS queries for that domain would be answered by your DCs, and not by the actual Internet DNS servers which manage it. In your case, if you name your domain "microsoft.com", then you would have all sorts of problems when trying to connect to Microsoft sites or services, because you wouldn't be able to query the public DNS servers for that domain (as your internal DNS servers would believe they rightfully own it). Incidentally, the same is true if you use your real public DNS domain as your Active Directory domain: things are of course a lot simpler because you actually own them both, but this still requires you to maintain two distinct DNS setups for the same domain, one for the Internet and one for your internal network. As a best practice, you should use a subdomain of your public DNS domain as your AD domain name; if, e.g., your public domain is "domain.com", you could use "internal.domain.com" or "ad.domain.com" or whatever, as long as it's a valid subdomain; this will ensure no conflicts and a lot fewer headaches. You should, anyway, not use any domain name you don't actually own, even if it's not currently active (because it could still be registered later by someone other than you, and headaches would ensue). | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/713311', 'https://serverfault.com', 'https://serverfault.com/users/304260/']} | jdg_380684
stackexchange | llm_judgeable_groundtruth_similarity | 7553 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Some human tissue can survive without oxygen for a couple of minutes, sometimes even hours. Why are neurons so "weak" that they depend so much on oxygen and other nutrients, and cannot live without them for more than a few seconds or 1 or 2 minutes? Are they missing some parts of their cells which could store nutrients for worse times, in favor of their function, or what is the case?
Now provide the response and nothing else.
| Neurons use a lot of energy to maintain their polarized state; this is not required of other cells [1,2]. When O2 or blood flow (which carries the nutrients) is reduced, neuronal ATP levels break down very fast, with 90% of ATP depleted in less than 5 minutes. Without ATP, the neuron cannot maintain the correct ion flux, so depolarization occurs, causing glutamate excitotoxicity, cell swelling and finally cell death. [1] http://www.acnp.org/g4/gn401000064/ch064.html [2] http://www.scientificamerican.com/article.cfm?id=why-does-the-brain-need-s | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/7553', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/3118/']} | jdg_380685
stackexchange | llm_judgeable_groundtruth_similarity | 8049612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Please shed some light on this situation. Right now I have two arrays holding the latitude and longitude of nearby places, and I also have the user's location latitude and longitude. Now I want to calculate the distance between the user location and the nearby places and show them in a ListView. I know that there is a method for calculating distance: public static void distanceBetween (double startLatitude, double startLongitude, double endLatitude, double endLongitude, float[] results); Now the problem is how to pass these two arrays of nearby latitudes and longitudes to this method and get back an array of distances.
Now provide the response and nothing else.
| http://developer.android.com/reference/android/location/Location.html Look into distanceTo : Returns the approximate distance in meters between this location and the given location. Distance is defined using the WGS84 ellipsoid. or distanceBetween : Computes the approximate distance in meters between two locations, and optionally the initial and final bearings of the shortest path between them. Distance and bearing are defined using the WGS84 ellipsoid. You can create a Location object from a latitude and longitude:

Location locationA = new Location("point A");
locationA.setLatitude(latA);
locationA.setLongitude(lngA);

Location locationB = new Location("point B");
locationB.setLatitude(latB);
locationB.setLongitude(lngB);

float distance = locationA.distanceTo(locationB);

or

private double meterDistanceBetweenPoints(float lat_a, float lng_a, float lat_b, float lng_b) {
    // convert degrees to radians
    float pk = (float) (180.f/Math.PI);
    float a1 = lat_a / pk;
    float a2 = lng_a / pk;
    float b1 = lat_b / pk;
    float b2 = lng_b / pk;

    // spherical law of cosines, with an Earth radius of 6366 km
    double t1 = Math.cos(a1) * Math.cos(a2) * Math.cos(b1) * Math.cos(b2);
    double t2 = Math.cos(a1) * Math.sin(a2) * Math.cos(b1) * Math.sin(b2);
    double t3 = Math.sin(a1) * Math.sin(b1);
    double tt = Math.acos(t1 + t2 + t3);

    return 6366000 * tt;
}

| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8049612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/909317/']} | jdg_380686
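As a follow-up on the question's request for an array of distances: the iteration itself is trivial, you call the distance function once per place and collect the results. Here is a minimal, language-agnostic sketch of that pairwise loop (shown in Python rather than Java so it can run standalone; the helper names are illustrative, not Android APIs, and the formula is the same spherical law of cosines as the snippet above):

```python
import math

EARTH_RADIUS_M = 6366000  # same radius the Java snippet above uses

def meter_distance(lat_a, lng_a, lat_b, lng_b):
    # Spherical law of cosines, mirroring the Java helper above.
    a1, a2 = math.radians(lat_a), math.radians(lng_a)
    b1, b2 = math.radians(lat_b), math.radians(lng_b)
    t = (math.cos(a1) * math.cos(a2) * math.cos(b1) * math.cos(b2)
         + math.cos(a1) * math.sin(a2) * math.cos(b1) * math.sin(b2)
         + math.sin(a1) * math.sin(b1))
    # clamp to [-1, 1] to guard against floating-point overshoot
    return EARTH_RADIUS_M * math.acos(min(1.0, max(-1.0, t)))

def distances_to_places(user_lat, user_lng, place_lats, place_lngs):
    # One distance per nearby place, in the same order as the input arrays.
    return [meter_distance(user_lat, user_lng, lat, lng)
            for lat, lng in zip(place_lats, place_lngs)]

# Example: user at (0, 0), two nearby places one degree away; both are ~111 km.
dists = distances_to_places(0.0, 0.0, [0.0, 1.0], [1.0, 0.0])
```

In the Android code the same loop would call Location.distanceBetween(...) once per place and store results[0] from each call into your output array.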
stackexchange | llm_judgeable_groundtruth_similarity | 24826 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Why is it that a secured HTTP connection (HTTPS) is slower than typical HTTP? Just interested in knowing. Thanks.
Now provide the response and nothing else.
| To open a TCP/IP connection requires 3 packets to be exchanged between the client and the server. The connection is then considered established, and the client can submit its request. When SSL is added on top of the TCP/IP connection, there are several additional interactions that have to happen before the connection is considered established. Unless the latency between the client and the server is negligible (read: they are on the same network), it is likely that this additional latency, due to the additional round trips to exchange SSL handshaking data rather than the CPU overhead of calculating the key material, is the major factor in the delay in establishing an SSL connection. More information about TCP/IP connection establishment More information about SSL/TLS handshaking | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/24826', 'https://serverfault.com', 'https://serverfault.com/users/8834/']} | jdg_380687
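To put rough numbers on the round-trip argument above (back-of-the-envelope arithmetic only; real figures depend on TLS version, session resumption, ciphers, and so on), a plain HTTP request can go out right after the TCP handshake, while a classic full TLS handshake adds roughly two extra round trips before the first request:

```python
# Back-of-the-envelope cost of extra handshake round trips.
# Assumed numbers: 100 ms round-trip time, TCP needs ~1 RTT before the
# client can send its request, a full TLS-1.2-style handshake adds ~2 RTTs.
RTT_MS = 100

def time_to_first_request(extra_handshake_rtts):
    tcp_rtts = 1  # SYN / SYN-ACK; the request can ride along with the ACK
    return (tcp_rtts + extra_handshake_rtts) * RTT_MS

http_ms = time_to_first_request(0)   # plain HTTP
https_ms = time_to_first_request(2)  # HTTP over a full TLS handshake

print(http_ms, https_ms)  # 100 300
```

On a LAN with sub-millisecond round trips the same two extra round trips cost almost nothing, which is why the answer singles out latency rather than CPU as the dominant factor.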
stackexchange | llm_judgeable_groundtruth_similarity | 45415901 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My project uses both Objective-C and Swift code. When a user logs in, it calls a set of APIs for user preferences. I have a DataCoordinator.swift class which schedules the API operations, and I make these calls from the UserDetailViewController.m class to load user preferences. This used to work fine before I migrated my code to Swift 4 using Xcode 9 beta 4. Now when I log in, it crashes by giving me this error in my DataCoordinator class. Below is a sample of my DataCoordinator and view controller classes.

DataCoordinator.swift

import UIKit

@objcMembers
class DataCoordinator: NSObject {

    //MARK:- Private
    fileprivate var user = myDataStore.sharedInstance().user
    fileprivate var preferenceFetchOperations = [FetchOperation]()

    fileprivate func scheduleFetchOperation(_ operation:FetchOperation, inFetchOperations operations:inout [FetchOperation]) {
        guard operations.index(of: operation) == nil else { return }
        operations.append(operation)
    }

    fileprivate func completeFetchOperation(_ fetchOperation:FetchOperation, withError error:Error?, andCompletionHandler handler:@escaping FetchCompletionHandler) {
        func removeOperation(_ operation:FetchOperation, fromOperations operations:inout [FetchOperation]) {
            if operations.count > 0 {
                operations.remove(at: operations.index(of: fetchOperation)!)
                handler(error)
            }
        }
        if preferenceFetchOperations.contains(fetchOperation) {
            removeOperation(fetchOperation, fromOperations: &preferenceFetchOperations)
        }
    }

    fileprivate func schedulePreferencesFetchOperation(_ serviceName:String, fetch:@escaping FetchOperationBlock){
        let operation = FetchOperation(name: serviceName, fetch: fetch);
        scheduleFetchOperation(operation, inFetchOperations: &preferenceFetchOperations)
    }

    fileprivate func runOperationsIn(_ fetchOperations:inout [FetchOperation]) {
        for var operation in fetchOperations {
            guard operation.isActivated == false else { continue }
            operation.isActivated = true
            operation.execute()
        }
    }

    //MARK:- Non-Private
    typealias FetchCompletionHandler = (_ error:Error?)->Void

    var numberOfPreferencesFetchCalls:Int {
        get { return preferenceFetchOperations.count }
    }

    // MARK: -
    func fetchPreferences(_ completionHandler:@escaping FetchCompletionHandler) -> Void {
        defer {
            runOperationsIn(&preferenceFetchOperations)
        }
        schedulePreferencesFetchOperation("com.fetchPreferences.type1") {[unowned self] (operation:FetchOperation) in
            WebServiceManager.getType1Detail(for: user) {[unowned self] (error) in
                self.completeFetchOperation(operation, withError: error, andCompletionHandler: completionHandler)
            }
        }
        schedulePreferencesFetchOperation("com.fetchPreferences.type2") {[unowned self] (operation:FetchOperation) in
            WebServiceManager.getType2Detail(for: user) {[unowned self] (error) in
                self.completeFetchOperation(operation, withError: error, andCompletionHandler: completionHandler)
            }
        }
        schedulePreferencesFetchOperation("com.fetchPreferences.type3") {[unowned self] (operation:FetchOperation) in
            WebServiceManager.getType3Detail(for: user) {[unowned self] (error) in
                self.completeFetchOperation(operation, withError: error, andCompletionHandler: completionHandler)
            }
        }
        schedulePreferencesFetchOperation("com.fetchPreferences.type4") {[unowned self] (operation:FetchOperation) in
            WebServiceManager.getType4Detail(for: user) {[unowned self] (error) in
                self.completeFetchOperation(operation, withError: error, andCompletionHandler: completionHandler)
            }
        }
    }
}

// MARK:- Fetch Operation Struct
private typealias FetchOperationBlock = (_ operation:FetchOperation)->Void

private struct FetchOperation:Hashable {
    fileprivate var runToken = 0
    fileprivate let fetchBlock:FetchOperationBlock

    let name:String!

    var isActivated:Bool {
        get {
            return runToken == 0 ? false : true
        }
        mutating set {
            if runToken == 0 && newValue == true {
                runToken = 1
            }
        }
    }

    fileprivate var hashValue: Int {
        get { return name.hashValue }
    }

    func execute() -> Void {
        fetchBlock(self)
    }

    init (name:String, fetch:@escaping FetchOperationBlock) {
        self.name = name
        self.fetchBlock = fetch
    }
}

private func ==(lhs: FetchOperation, rhs: FetchOperation) -> Bool {
    return lhs.hashValue == rhs.hashValue
}

This is how I call it in my view controller's viewDidLoad method:

__weak UserDetailViewController *weakSelf = self;
[self.dataCoordinator fetchPreferences:^(NSError * _Nullable error) {
    if (error == nil) {
        [weakSelf didFetchPrefrences];
    }
    else {
        // handle error
    }
}];

//completion response
- (void)didFetchPrefrences {
    //when api calls complete load data
    if (self.dataCoordinator.numberOfPreferencesFetchCalls == 0) {
        //Load details
    }
}

I'm not sure how to proceed on this. I saw a bug report at https://bugs.swift.org/browse/SR-5119, but it seems to be fixed in Xcode 9 beta 3. Any help is appreciated.
Now provide the response and nothing else.
| I think this 'bug' may be a Swift 4 "feature", specifically something they call "Exclusive access to Memory". Check out this WWDC video. Around the 50-minute mark, the long-haired speaker explains it. https://developer.apple.com/videos/play/wwdc2017/402/?time=233 You could try turning the thread sanitizer off in your scheme settings if you're happy to ignore it. However, the debugger is trying to tell you about a subtle threading issue, so it's probably a better use of your time to figure out why you've got something writing to your array at the same time as it's being read from. UPDATE 2022: The link is broken, you can watch the video directly here: (Around the 53-minute mark) https://devstreaming-cdn.apple.com/videos/wwdc/2017/402ynph39nk5sn4222/402/402_hd_whats_new_in_swift.mp4 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45415901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2258784/']} | jdg_380688 |
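A side note to make the flagged hazard concrete: "exclusive access to memory" is Swift's way of turning an overlapping read and write of the same variable into a hard error, and the same class of bug exists in most languages. As a loose analogy only (this is Python, not Swift semantics), mutating a container while iterating over it is exactly such an overlapping read/write, and the usual fix, finishing the read first by taking a snapshot, mirrors the advice above to find out why something writes to your array while it is being read:

```python
# Analogy only: mutating a container while iterating over it.
# Swift 4's exclusive-access checking rejects the overlapping read/write;
# CPython raises a RuntimeError for the equivalent dict mutation.
ops = {"type1": "pending", "type2": "pending"}

try:
    for name in ops:   # read access in progress...
        ops.pop(name)  # ...overlapping write: not allowed
except RuntimeError as e:
    print("caught:", e)  # dictionary changed size during iteration

# The fix: finish the read first by iterating over a snapshot, then mutate.
for name in list(ops):
    ops.pop(name)
assert ops == {}
```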
stackexchange | llm_judgeable_groundtruth_similarity | 645867 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If we're thinking about a case where a bar magnet is placed on a flat surface, with its N-pole on the right, and a compass is placed just above the middle of the magnet, in which direction will the compass needle point? Will it go in circles? Here's a diagram:
Now provide the response and nothing else.
| Let's find out! Here, I have a compass. The red end points to the Earth's North Pole, so that is the north side of the magnet by definition. Ignore the dial, since I've rotated it to make the needle easier to see. Now, I don't have a bar magnet, but I do have a bunch of neodymium disc magnets that I can stack into a bar. I have written an "S" on one end of the magnet stack to indicate it is the south pole. We can tell this because the north pole of the compass is attracted towards it. (I've used tape to create a writing surface and to stop the magnets from rolling away. The magnets are very well stuck together.) Just to confirm that we understand how both of these magnets work, if I flip the magnet stack, the compass needle flips. In preparation for placing the compass on top of the magnet, I'll show how the two will be arranged in the picture below. The north pole of the magnet stack will be pointing to the left. Notice which way the compass points. Now, I'll pick up the compass and move it directly over the magnet. My fingers are visible holding the compass in place because the magnetic force between the two would cause one or the other to shift to the side. Notice that the compass needle has flipped to pointing the opposite direction from when it was to the side of the magnet. In case you meant "above" in a different direction, here's a slightly different arrangement with the same result. The north pole of the magnet stack is still pointing to the left, which causes the north pole of the compass needle to point to the right. The picture is slightly tilted so the "N" on the magnet stack is visible. The way magnetic field arrows are drawn is that they point away from north poles and towards south poles. With this convention, you can see in the previous pictures that the north end of the compass wants to align with the local magnetic field direction. What does the last picture tell you about how the magnetic field points near the middle of a bar magnet? 
So, apparently, some commenters think I need to demonstrate that opposite poles of a magnet attract without assuming it. It's like people here don't trust me. Since we're defining the north end of a magnet as one that seeks Earth's North Pole, we need to find where north is. Now, in order to let the stack of magnets find its own north, I'll float it in a bowl of water (laboratory? Hah! I've got a kitchen!). The magnets' boat is a plastic lid from a breadcrumb container. Now, we float the magnetic boat in the bowl of water. We now label the ends of the magnet to follow the rule that the north end of a magnet points to the north. Finally, we check that the new labels on the magnet match the labels used in the first part of this answer. As you can see, this arrangement is the same as the third picture in this answer, meaning the magnet was correctly labeled all this time. This also means that the Earth actually does have a magnetic south pole near the geographic North Pole. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/645867', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/304581/']} | jdg_380689 |
stackexchange | llm_judgeable_groundtruth_similarity | 40391566 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a web application that gets dynamic data from Flask when a select element in the HTML is changed. Of course that is done via jQuery AJAX; no problems there, I've got that. The problem is that the dynamic data, which is sent by Flask, is a list of objects from the database (Flask-SQLAlchemy). Of course the data is sent as JSON from Flask. I'd like to iterate through those objects to display their info using Jinja.

HTML

<select id="mySelect">
  <option value="option1" id="1">Option 1</option>
  <option value="option2" id="2">Option 2</option>
  <option value="option3" id="3">Option 3</option>
</select>

jQuery

$('body').on('change','#mySelect',function(){
    var option_id = $('#mySelect').find(':selected').attr('id');
    $.ajax({
        url: "{{ url_for('_get_content') }}",
        type: "POST",
        dataType: "json",
        data: {'option_id':option_id},
        success: function(data){
            data = data.data;
            /* HERE I WANT TO ITERATE THROUGH THE data LIST OF OBJECTS */
        }
    });
});

Flask

@app.route('/_get_content/', methods=['POST'])
def _get_content():
    option_id = request.form['option_id']
    all_options = models.Content.query.filter_by(id=option_id)
    return jsonify({'data': all_options})

PS: I know that Jinja gets rendered first, so there is no way to assign jQuery variables to Jinja. So how exactly am I going to iterate through the data list if I can't use it in Jinja?
Now provide the response and nothing else.
| Okay, I got it. Simply, I made an external HTML file and added the required Jinja template to it:

{% for object in object_list %}
    {{ object.name }}
{% endfor %}

Then in my Flask file I literally returned the render_template response to the jQuery call (which contained the HTML I wanted to append):

objects_from_db = getAllObjects()
return jsonify({'data': render_template('the_temp.html', object_list=objects_from_db)})

And then simply append the HTML from the response to the required div to be updated. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40391566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7091942/']} | jdg_380690
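To sketch the whole round trip of this pattern in one runnable snippet (stdlib only; the Jinja for-loop is simulated with a plain Python loop, and the function names are illustrative stand-ins for the Flask view and render_template, not real Flask APIs):

```python
import json

# Stand-in for render_template('the_temp.html', object_list=...): the real
# template is just a Jinja for-loop over object names, so we simulate it.
def render_list_fragment(object_list):
    return "".join("<li>{}</li>".format(name) for name in object_list)

# Stand-in for the Flask view: return the pre-rendered HTML inside JSON,
# exactly as in jsonify({'data': render_template(...)}) above.
def get_content(option_id, db):
    names = db.get(option_id, [])
    return json.dumps({"data": render_list_fragment(names)})

db = {"1": ["apple", "banana"], "2": ["carrot"]}
payload = json.loads(get_content("1", db))
print(payload["data"])  # <li>apple</li><li>banana</li>
```

The client-side success handler then only has to append data.data to the target element; no Jinja is needed in the browser.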
stackexchange | llm_judgeable_groundtruth_similarity | 382510 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to decide which is the best architecture for a turn-based multiplayer game where the players can be either humans or AI and the UI is optional, for instance because the game can just be used to make the AIs fight against each other. Let's take the simplest possible game out there, tic-tac-toe, and say I used a class like this:

class TicTacToeGame {
    mark(cell) {
        //make something happen
    }
}

In the most simple implementation of my game I may have a UI with a click handler:

function onClick(cell) {
    ticTacToeGame.mark(cell);
    refreshUI();
}

This code may work fine when there are only human players, but if we had AI players and "headless" games it becomes insufficient. What are some ideas to expand this code for the other use cases (AI, headless game)? A first solution would be to use the classical observer pattern. With this idea, multiple players would subscribe to the game and would be notified when it's their turn. In the same way, the interface could subscribe and be notified when new configurations need to be displayed. So in that case the game class would change to become like this:

class TicTacToeGame {
    constructor() {
        this.observers = [];
    }
    subscribe(observer) {
        this.observers.push(observer);
    }
    mark(cell) {
        //make something happen
        this.observers.forEach(o => o.notify(this));
    }
}

where the observers would be the players and the UI:

...
ticTacToeGame.subscribe(AI);
ticTacToeGame.subscribe(UI);
...

but this solution looks a little bit too generic and I'm not entirely sure about the best way to describe the fact that the AIs may represent (for instance) the first and third players in a game.
A more advanced solution would be to use the observer pattern for the UI but keep a dedicated system for the players:

class TicTacToeGame {
    constructor() {
        this.observers = [];
        this.players = [];
    }
    subscribe(observer) {
        this.observers.push(observer);
    }
    addPlayer(player) {
        this.players.push(player);
    }
    mark(cell) {
        //make something happen
        this.players[this.currentPlayerIndex].notify(this);
        this.observers.forEach(o => o.notify(this));
    }
}

But things start to get more complex, and I'm not sure if modelling a human player would make that much sense now. I've never written a game in my life, so I'm not entirely sure if there are patterns that I should know or if the solution is more context dependent. What are your opinions about my initial design? It may also be important to add that the context where I would like to write the game is the web, and the UI framework would be React.
Now provide the response and nothing else.
| I would try to keep the TicTacToeGame completely UI agnostic. No observer, no publisher-subscriber inside that class. Only "business logic" (or call it "game logic") inside that class, no mixed responsibilities which could lead to the complexity you sketched in your question. Instead, you could implement the turn logic by utilizing your own event queue. I give an example in pseudo-code using polling for the sake of simplicity; depending on your environment you can implement it without polling instead:

MainLoop() {
    while(queue.IsEmpty())
        WaitSomeMilliseconds(); // or use some queue.WaitForEvent() command, if available
    var nextEvent=queue.getNextEvent();
    if(nextEvent==Event.MoveCompleted) {
        Display(ticTacToeGame);
        if(ticTacToeGame.GameOver())
            break;
        nextPlayer=PickNextPlayer();
        if(nextPlayer.Type()==PlayerType.Human) {
            AllowMoveByUI(); // enable UI controls for entering moves by human
        }
        else {
            LetAIMakeMove(ticTacToeGame);
            queue.Insert(Event.MoveCompleted);
        }
    }
}

And the event handlers of the UI (driven by the UI event loop, not yours) then should have some logic to mark a cell by the user and insert an Event.MoveCompleted into the queue as well:

HandleUserInputEvent(CellType cell) {
    if(ticTacToeGame.IsMarkingValid(cell)) {
        ticTacToeGame.Mark(cell);
        DisableMoveByUI();
        queue.Insert(Event.MoveCompleted);
    }
}

Of course, using a queue is a little bit overengineered in the example above, since there is currently only one type of event, so a simple global boolean flag would do the trick as well. But in your real system, I assume there will be different types of events, so I tried to give a rough outline of how the system may look. I hope you get the idea. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/382510', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/93229/']} | jdg_380691
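The pseudo-code above translates almost line for line into a runnable sketch. Here is a minimal Python version (assumptions: a bare-bones game stub with no win detection, a trivial "take the first free cell" AI, and queue.Queue standing in for the event queue; the human branch is only sketched in a comment):

```python
import queue

MOVE_COMPLETED = "MoveCompleted"

class TicTacToeGame:
    # Minimal stub: a 3x3 board, no win detection, game over when full.
    def __init__(self):
        self.board = [None] * 9
        self.current = 0  # index of the player whose turn it is

    def is_marking_valid(self, cell):
        return self.board[cell] is None

    def mark(self, cell, player_id):
        self.board[cell] = player_id

    def game_over(self):
        return all(c is not None for c in self.board)

def ai_move(game):
    # Trivial "AI": take the first free cell.
    return next(i for i, c in enumerate(game.board) if c is None)

def main_loop(game, players):
    events = queue.Queue()
    events.put(MOVE_COMPLETED)  # kick things off
    while True:
        event = events.get()  # blocks instead of polling
        if event == MOVE_COMPLETED:
            if game.game_over():
                break
            player = players[game.current]
            if player == "AI":
                game.mark(ai_move(game), game.current)
                game.current = (game.current + 1) % len(players)
                events.put(MOVE_COMPLETED)
            else:
                # "Human": enable UI input here and return to waiting; the UI
                # event handler would mark the cell, advance the turn, and
                # put MOVE_COMPLETED into the queue, as in the answer above.
                pass
    return game

game = main_loop(TicTacToeGame(), ["AI", "AI"])
print(game.board)  # [0, 1, 0, 1, 0, 1, 0, 1, 0]
```

A human player is handled exactly as in the answer: the loop enables UI input and goes back to waiting on the queue, while the UI event handler performs the mark and inserts the MOVE_COMPLETED event.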
stackexchange | llm_judgeable_groundtruth_similarity | 5223325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In answering another question* on SO, and the subsequent comment discussion, I ran into a wall on a point that I'm not clear on. Correct me on any point where I'm astray... When the Garbage Collector collects an object, it calls that object's finalizer, on a separate thread (unless the finalizer has been suppressed, e.g. through a Dispose() method). While collecting, the GC suspends all threads except the thread that triggered the collection (background collection aside). What isn't clear:

Does the Garbage Collector wait for the finalizer to execute on that object before collecting it?

If not, does it un-suspend threads while the finalizer is still executing?

If it does wait, what happens if the finalizer runs into a lock being held by one of the suspended threads? Does the finalizer thread deadlock? (In my answer, I argue that this is bad design, but I could possibly see cases where this could happen)

* Link to the original question: .NET GC Accessing a synchronised object from a finalizer
Now provide the response and nothing else.
| Does the Garbage Collector wait for the finalizer to execute on that object before collecting it? Your question is a bit ambiguous. When the GC encounters a "dead" object that needs finalization, it abandons its attempt to reclaim the dead object's storage. Instead, it puts the object on a queue of "objects that I know need finalization" and treats that object as alive until the finalizer thread is done with it. So, yes, the GC does "wait" until the finalizer is executed before reclaiming the storage. But it does not wait synchronously . It sounds like you're asking "does the GC synchronously call the finalizer right there?" No, it queues up the object to be finalized later and keeps on truckin'. The GC wants to quickly get through the task of releasing garbage and compacting memory so that the program proper can resume running ASAP. It's not going to stop to deal with some whiny object that is demanding attention before it gets cleaned up. It puts that object on a queue and says "be quiet and the finalizer thread will deal with you later." Later on the GC will check the object again and say "are you still dead? And has your finalizer run?" If the answer is "yes" then the object gets reclaimed. (Remember, a finalizer might make a dead object back into a live one; try to never do that. Nothing pleasant happens as a result.) Does it un-suspend threads while the finalizer is still executing? I believe that the GC thaws out the threads that it froze, and signals the finalizer thread "hey, you've got work to do". So when the finalizer thread starts running, the threads that were frozen by the GC are starting up again. There might have to be unfrozen threads because the finalizer might require a call to be marshalled to a user thread in order to release a thread-affinitized resource. Of course some of those user threads might be blocked or frozen; threads can always be blocked by something. 
what happens if the finalizer runs into a lock being held by one of the suspended threads? Does the finalizer thread deadlock? You betcha. There's nothing magic about the finalizer thread that prevents it from deadlocking. If a user thread is waiting on a lock taken out by the finalizer thread, and the finalizer thread is waiting on a lock taken out by the user thread, then you've got a deadlock. Examples of finalizer thread deadlocks abound. Here's a good article on one such scenario, with a bunch of links to other scenarios: http://blogs.microsoft.co.il/blogs/sasha/archive/2010/06/30/sta-objects-and-the-finalizer-thread-tale-of-a-deadlock.aspx As the article states: finalizers are an extremely complex and dangerous cleanup mechanism and you should avoid them if you possibly can . It is incredibly easy to get a finalizer wrong and very hard to get it right. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5223325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/329969/']} | jdg_380692 |
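The resurrection hazard mentioned above ("a finalizer might make a dead object back into a live one") has a direct analogue in CPython, where `__del__` plays the finalizer role. This is a sketch of that analogue, not of the .NET semantics, and it leans on a CPython implementation detail (refcount-triggered, immediate finalization):

```python
import gc

resurrected = []

class Phoenix:
    def __del__(self):
        # The finalizer "resurrects" the object by creating a new strong
        # reference to it.  Since PEP 442 (Python 3.4), __del__ will not
        # run a second time if the object later dies again.
        resurrected.append(self)

obj = Phoenix()
del obj       # refcount hits zero; CPython runs __del__ right here
gc.collect()  # other implementations may defer finalization to a GC pass
```

As in .NET, resurrection is legal but "nothing pleasant happens as a result" — the object comes back in a half-torn-down state the rest of the program no longer expects.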
stackexchange | llm_judgeable_groundtruth_similarity | 20312217 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I got a bit of a problem. I want to display a route between two points and give custom markers to both points. It is working. I set suppressMarkers = true I call directionsService.route to display the route. The map's center is correct, the whole route fits into my map's canvas beautifully. Then I add custom markers to both origin and destination. Then comes the problem: When I add the custom markers, the map's center jumps to the last added marker. I want to keep the map's center on the route, that was originally generated by the directionsService.route function. PS: Weirdly if I add the markers before displaying the route, the map still centers on the last added marker. Thanks, Pál Terebess
Now provide the response and nothing else.
| The MATLAB jordan function is from the Symbolic Math Toolbox, so it does not seem unreasonable to get its Python replacement from the SymPy library. Specifically, the Matrix class has the method jordan_form . You can pass a numpy array as an argument when you create a sympy Matrix. For example, the following is from the wikipedia article on the Jordan normal form : In [1]: import numpy as npIn [2]: from sympy import MatrixIn [3]: a = np.array([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])In [4]: m = Matrix(a)In [5]: mOut[5]: Matrix([[ 5, 4, 2, 1],[ 0, 1, -1, -1],[-1, -1, 3, 0],[ 1, 1, -1, 2]])In [6]: P, J = m.jordan_form()In [7]: JOut[7]: Matrix([[1, 0, 0, 0],[0, 2, 0, 0],[0, 0, 4, 1],[0, 0, 0, 4]]) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20312217', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3054368/']} | jdg_380693 |
stackexchange | llm_judgeable_groundtruth_similarity | 43051606 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to customize and display more information for test suites or tests, such as test run times, e.g. adding more information to the output displayed below: ===============================================Demo-SuiteTotal tests run: 19, Failures: 1, Skips: 0=============================================== Any suggestions on how to add more to the above info, such as the average test-suite run time?
Now provide the response and nothing else.
| The ngForm being sent to your event handler does not have a .submit() method, so you'll need to get the underlying form from either ngForm or the event so you can use its .submit() method. For this example, I used the event . Markup <!-- Note that I added the $event to the handler --> <form #cartForm="ngForm" (ngSubmit)="onSubmit(cartForm, $event)" method="post" [action]="cart.paymentUrl" > TS onSubmit(form: any, e: any): void { //Note that I added 'e' and calling the event target's .submit() if(this.loggedIn){ e.target.submit(); } else{ this.cartService.updateUserEmail(this.cart.id, this.email) .subscribe(() => { e.target.submit(); }) }} I stripped it down a bit, but here is a Working Plunker ("Working" as in, you'll notice the failed POST -- after 1 second -- in console to www.google.com due to cross-domain issues.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43051606', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5977018/']} | jdg_380694 |
stackexchange | llm_judgeable_groundtruth_similarity | 405597 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In this video , the electrical engineer and YouTuber Mehdi Sadaghdar (ElectroBOOM) disagrees with another video from professor Walter Lewin. Basically, professor Lewin shows in an experiment that if we have two different resistances connected in a closed loop, and if we generate a changing magnetic field using a coil, the voltage at the endpoints of the two resistances will be different, contrary to the expectations from Kirchhoff's Voltage Law (KVL). simulate this circuit – Schematic created using CircuitLab According to the experiment, the left voltmeter VM1 shows a voltage different from the second voltmeter VM2. Lewin then concludes that KVL does not hold when there's a changing magnetic field. The mathematical reason that he gives is that the magnetic field is non-conservative, and KVL can be derived from Maxwell's equations only when the field is conservative. He then says that this experiment is a proof of his claims. Mehdi, on the other hand, points out two things: first, that the way the probing was done is incorrect. The changing magnetic field has an effect on the probe wires, and that's one of the reasons why the voltmeters change value depending on the position. Second, he says that because there's a loop, the loop behaves like an inductor, and together with the coil it forms a mutual inductor: simulate this circuit I understand Lewin's derivation of KVL, so I understand that there's an issue with the non-conservative magnetic field, but at the same time I think Mehdi is right: that loop is an inductor, and the way Lewin is probing the circuit looks wrong to me. So where is the mistake here? Does KVL hold in the circuit above? Is the probing being done right? Does the circuit have a mutual inductor that should not be ignored?
Now provide the response and nothing else.
| The lumped component models to which KVL is applied are just that--models. Like all models, they are only accurate to the extent that they represent the relevant characteristics of the system they reflect. The simple loop of two resistors model does not represent the susceptibility of the conductive path that constitutes the circuit to induced EMF, therefore this simple model will not reflect the behavior of the real circuit in the real world where induced EMF is a thing that happens. The simple model can be made more accurate by including inductors between the resistors and an additional inductor that represents the solenoid that provides the changing magnetic field. By considering the coupling of these inductors it is possible to incorporate the induced EMF into the model and thus achieve results that better reflect reality. A reasonably complete model of the situation in Lewin's demonstration would look something like the following ( source ), which is also what Mehdi Sadaghdar shows. Note that the results of simulating this lumped element model closely resemble those of Lewin's demonstration. This idea of refining a theoretical circuit model by adding lumped elements to represent parasitic terms (that is, inherent characteristics of a system that are not intentional but are relevant to the system's behavior) is not exclusive to situations where there is a changing magnetic field, and is in fact a common and useful practice in electrical engineering. For example, the behavior of a MOSFET switch can be more accurately modeled by including elements to represent C GS and C GD . In this case, the inductors represent an electrical phenomenon that is governed by the physical relationship between the elements of the real world circuit. As such, if the circuit is physically rearranged, the inductors in the model must be adjusted to reflect the electrical characteristics of this new physical relationship. 
This is also a well-understood aspect of electrical engineering, where, for instance, the physical proximity of two tracks on a PCB must be understood as affecting the way the signals in those two tracks interact. At a certain point, when the rates of change in the circuit state become fast with respect to the physical size of the components of the circuit (including wires/PCB tracks!), the lumped element becomes unwieldy at best and inaccurate at worst, at which point things like transmission line models come into play, but the lumped model remains quite useful in dynamic systems operating well into the MHz range. So on the whole, Lewin's claim that KVL does not work for the situation he demonstrates is basically correct, but only because the circuit model used does not represent elements that are crucial to understanding its real world behavior. As a side note, it may look as if Lewin doesn't understand what's happening in this circuit, however he clearly does when you examine the specific language he uses in the lecture and in other materials. From this supplement: Suppose you put the probes of a voltmeter across the terminals of an inductor (with very small resistance) in a circuit. What will you measure? What you will measure on the meter of the voltmeter is a "voltage drop" of Ldi/dt. But that is not because there is an electric field in the inductor! It is because putting the voltmeter in the circuit will result in a time changing magnetic flux through the voltmeter circuit, consisting of the inductor, the voltmeter leads, and the large internal resistor in the voltmeter This makes it clear that Lewin considers the voltmeter and its leads part of the circuit, and as he has stated, the path taken through the changing field affects the integral and therefore the voltage indicated by the meter. 
This is precisely the effect that Mehdi Sadaghdar describes in his video, just observed from a physics perspective (Faraday et al.) instead of an EE perspective (parasitic inductances). I'm not sure why Lewin has not chosen to acknowledge this equivalence, other than that he considers the latter a 'right answer for the wrong reasons'. Edit to add: In this video , Lewin more clearly expresses his objection to formulating the problem in a way that reflects KVL. For this circuit: simulate this circuit – Schematic created using CircuitLab Lewin shows that, starting at the bottom left corner and moving clockwise, the closed loop integral of \$\overrightarrow{E}.\overrightarrow{dl}\$ is as follows (note that no term is shown for the inductor because it is assumed to be ideal, i.e., superconducting): \$ \oint \overrightarrow{E}.\overrightarrow{dl} = -V_{0} + IR + \frac{Q}{C}\$ Because of these two identities: \$\oint \overrightarrow{E}.\overrightarrow{dl} = -\frac{d\Phi_{B} }{dt}\$ \$-\frac{d\Phi_{B} }{dt} = -L\frac{dI}{dt}\$ We can describe the circuit using this equation: \$-V_{0} + IR + \frac{Q}{C} = -L\frac{dI}{dt} \$ If we want something that resembles KVL, we can simply move the term that describes \$V_{L}\$ to the other side of the equation: \$-V_{0} + IR + \frac{Q}{C} + L\frac{dI}{dt} = 0\$ Of this latter form, Lewin says moving the inductance term to the left "doesn't make the equation wrong, but the physics stinks!" because now neither side of the equation wholly represents \$ \oint \overrightarrow{E}.\overrightarrow{dl}\$ .
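As a numeric aside: dropping the capacitor so the loop is just V0, R and L in series (component values below are arbitrary, made up for the check), the closed-form current i(t) = (V0/R)(1 - e^{-Rt/L}) satisfies -V0 + iR + L di/dt = 0 at every instant, which a few lines of Python confirm:

```python
import math

def current(t, V0, R, L):
    # Closed-form solution of -V0 + i*R + L*di/dt = 0 with i(0) = 0
    return (V0 / R) * (1.0 - math.exp(-R * t / L))

def kvl_residual(t, V0, R, L, dt=1e-7):
    # Evaluate -V0 + i*R + L*di/dt with a central-difference derivative;
    # it should be ~0 at every t if the solution above is right.
    i = current(t, V0, R, L)
    di_dt = (current(t + dt, V0, R, L) - current(t - dt, V0, R, L)) / (2 * dt)
    return -V0 + i * R + L * di_dt
```

Either way of writing the loop equation is, of course, numerically identical — the disagreement in the lecture is about which arrangement reflects the physics, not about the arithmetic.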
stackexchange | llm_judgeable_groundtruth_similarity | 2459026 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Storing credit card information securely and legally is very difficult and should not be attempted . I have no intention of storing credit card data but I'm dying to figure out the following: My credit card info is being stored on a server some where in the world. This data is (hopefully) not being stored on a merchant's server, but at some point it needs to be stored to verify and charge the account identified by merchant submitted data. My question is this: if you were tasked with storing credit card data what encryption strategy would you use to secure the data on-disk? From what I can tell submitted credit card info is being checked more or less in real time. I doubt that any encryption key used to secure the data is being entered manually, so decryption is being done on the fly, which implies that the keys themselves are being stored on-disk. How would you secure your data and your keys in an automated system like this?
Now provide the response and nothing else.
| If I was storing the number, I would be a giant service provider with a massive database. That database is spread across a highly-redundant storage array consisting of multiple cabinets, in separate rooms or ideally in separate geographical locations, connected by a SAN. My biggest insider threat is the distributed physical plant, the constant stream of worn-out drives, and several daily shifts of technicians, administrators, and engineers. It's a huge threat. Therefore I would encrypt the data on a physically-isolated computer that connects to the mass storage over a network. The software would be as simple as possible: encryption and number verification. The public interfaces and business logic goes elsewhere. Accesses would be logged to a separate SAN. Encrypt with something like AES. The raw AES key is only ever stored in RAM. The key is wrapped in a PGP file for each administrator, who has their own passphrase to enable the server. Less-trusted personnel can be given partial passphrases to use in disaster recovery, or passphrases can be stored in a vault somewhere. For encryption, pick a unique initialization vector (IV) for each card number, AES-encrypt the number using that IV, and store the IV and encrypted number to the SAN. Decryption only occurs using a privileged client interface; normal client connections used for purchases can never get a decryption. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2459026', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212700/']} | jdg_380696 |
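The "unique IV per card number, stored next to the ciphertext" layout can be sketched in stdlib-only Python. The keystream below is a SHA-256-in-counter-mode stand-in so the example runs anywhere without third-party packages — a real system would use AES (e.g. AES-GCM from the `cryptography` package), and this is an illustration of the storage layout, not vetted crypto:

```python
import hashlib
import secrets

def _keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Stand-in keystream, NOT a substitute for AES in production.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    iv = secrets.token_bytes(16)          # unique IV per card number
    ks = _keystream(key, iv, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, ks))
    return iv + ciphertext                # store the IV alongside the ciphertext

def decrypt_record(key: bytes, blob: bytes) -> bytes:
    iv, ciphertext = blob[:16], blob[16:]
    ks = _keystream(key, iv, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))
```

The point of the random per-record IV is visible immediately: encrypting the same card number twice yields two different blobs, so the database leaks nothing about which records hold the same number.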
stackexchange | llm_judgeable_groundtruth_similarity | 1700081 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
class D { public static void main(String args[]) { Integer b2=128; Integer b3=128; System.out.println(b2==b3); }} Output: false class D { public static void main(String args[]) { Integer b2=127; Integer b3=127; System.out.println(b2==b3); }} Output: true Note: Numbers between -128 and 127 are true.
Now provide the response and nothing else.
| When you compile a number literal in Java and assign it to an Integer (capital I ) the compiler emits: Integer b2 = Integer.valueOf(127); This line of code is also generated when you use autoboxing. valueOf is implemented such that certain numbers are "pooled", and it returns the same instance for values smaller than 128. From the Java 1.6 source code, line 621: public static Integer valueOf(int i) { if(i >= -128 && i <= IntegerCache.high) return IntegerCache.cache[i + 128]; else return new Integer(i);} The value of high can be configured to another value with a system property: -Djava.lang.Integer.IntegerCache.high=999 If you run your program with that system property, it will output true! The obvious conclusion: never rely on two references being identical; always compare them with the .equals() method. So b2.equals(b3) will print true for all logically equal values of b2 and b3. Note that the Integer cache is not there for performance reasons, but rather to conform to the JLS, section 5.1.7 ; object identity must be given for values -128 to 127 inclusive. Integer#valueOf(int) also documents this behavior: this method is likely to yield significantly better space and time performance by caching frequently requested values. This method will always cache values in the range -128 to 127, inclusive, and may cache other values outside of this range.
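CPython has an analogous small-integer cache (for -5 through 256 — an implementation detail, not a language guarantee), so the same value-vs-identity trap exists there too. Building the ints at runtime from strings keeps the compiler from folding equal literals into one shared constant, which would otherwise hide the effect:

```python
def same_object(s: str) -> bool:
    # int(s) is evaluated twice at runtime, so `is` reflects only
    # whether CPython handed back a cached object both times.
    return int(s) is int(s)
```

As in Java, compare values with `==` (the analogue of `.equals()`), never with identity.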
stackexchange | llm_judgeable_groundtruth_similarity | 23663459 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been working with Scrapy but run into a bit of a problem. DjangoItem has a save method to persist items using the Django ORM. This is great, except that if I run a scraper multiple times, new items will be created in the database even though I may just want to update a previous value. After looking at the documentation and source code, I don't see any means to update existing items. I know that I could call out to the ORM to see if an item exists and update it, but it would mean calling out to the database for every single object and then again to save the item. How can I update items if they already exist?
Now provide the response and nothing else.
| Unfortunately, the best way that I found to accomplish this is to do exactly what was stated: Check if the item exists in the database using django_model.objects.get , then update it if it does. In my settings file, I added the new pipeline: ITEM_PIPELINES = { # ... # Last pipeline, because further changes won't be saved. 'apps.scrapy.pipelines.ItemPersistencePipeline': 999} I created some helper methods to handle the work of creating the item model, and creating a new one if necessary: def item_to_model(item): model_class = getattr(item, 'django_model') if not model_class: raise TypeError("Item is not a `DjangoItem` or is misconfigured") return item.instancedef get_or_create(model): model_class = type(model) created = False # Normally, we would use `get_or_create`. However, `get_or_create` would # match all properties of an object (i.e. create a new object # anytime it changed) rather than update an existing object. # # Instead, we do the two steps separately try: # We have no unique identifier at the moment; use the name for now. obj = model_class.objects.get(name=model.name) except model_class.DoesNotExist: created = True obj = model # DjangoItem created a model for us. return (obj, created)def update_model(destination, source, commit=True): pk = destination.pk source_dict = model_to_dict(source) for (key, value) in source_dict.items(): setattr(destination, key, value) setattr(destination, 'pk', pk) if commit: destination.save() return destination Then, the final pipeline is fairly straightforward: class ItemPersistencePipeline(object): def process_item(self, item, spider): try: item_model = item_to_model(item) except TypeError: return item model, created = get_or_create(item_model) update_model(model, item_model) return item | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23663459', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/165988/']} | jdg_380698 |
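The get-then-update dance the pipeline performs is just an upsert. Stripped of Django and Scrapy, the control flow looks like this (the dict stands in for the database table; field semantics here are a simplification of `update_model` above):

```python
def upsert(db: dict, key, fields: dict):
    """Return (record, created), updating the record in place if it exists."""
    created = key not in db
    record = db.setdefault(key, {})
    record.update(fields)
    return record, created
```

A second call with the same key mutates the existing record instead of creating a duplicate — which is exactly the behavior the stock DjangoItem `save()` lacks.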
stackexchange | llm_judgeable_groundtruth_similarity | 10604571 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of meetings: std::list<meeting*> meetings; I want to iterate the list and return a reference to a specific meeting: meeting& day::findMeeting( float StartHour ){std::list<meeting*>::iterator it;for(it = meetings.begin(); it != meetings.end(); it++){ if (it->getStartHour() == StartHour) { return *it; }}throw no_such_meeting_error;} I get the following errors: 'getStartHour' : is not a member of 'std::_List_iterator<_Mylist>' 'return' : cannot convert from 'meeting *' to 'meeting &' invalid return type 'meeting **' for overloaded 'operator ->' I'm still learning C++, so I would be happy to understand what I'm doing wrong. Also, is it good practice to return a reference to an object from a function? Is there something better to do? Is it likely that the reference will be invalidated at some point because of changes to the items in the list? Thanks
Now provide the response and nothing else.
| I have a list of meetings. No, you don't. You have a list of pointers to meeting. From that one misunderstanding, all of your further errors flow. if (it->getStartHour() == StartHour) This code would be correct if you had a list of meetings. It is wrong if you have a list of pointers to meetings. Try: if ((*it)->getStartHour() == StartHour) Next: return *it; Try: return **it; In the alternative, perhaps you really do want a "list of meetings". In that case, you would declare your list thus: std::list<meeting> meetings; I don't know which you want -- a list of meetings or a list of pointers to meetings. That has to do with the design of the rest of your program. I almost never keep a container full of pointers. You might want a list of pointers, for example, if you need multiple list entries to refer to the same meeting. ("I have a meeting at 10 with Abe, at 11 with Bob and Chuck, and then again a meeting at 10 with Abe"?) You also might want a list of pointers if copying a meeting is impossible or prohibitively expensive. In that case, I suggest you use a smart pointer rather than a naked pointer. To answer your other question, yes, returning a reference to an object is a fine thing to do. You do need to be aware of the lifetime of that object; never access the object through its reference after it is destroyed. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10604571', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/599912/']} | jdg_380699 |
stackexchange | llm_judgeable_groundtruth_similarity | 3709298 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$R(s;n)= \int^{\infty}_0 \frac{x^n}{x^s+1}dx$$ From a previously asked question, I know: $$R(s;0)=\frac{1}{s} \varGamma\left(\frac{1}{s}\right) \varGamma\left(1-\frac{1}{s}\right)$$ The obvious approach is to do integration by parts but I did not manage to find it using that approach, can any of you provide hints or solutions?
Now provide the response and nothing else.
| Set $y = x^{n+1}$ , then $dy = (n+1)x^n dx$ , and $ x^s = y^{ s/({n+1})}$ , so $$R(s;n)=\frac1{n+1}R\left(\frac{s}{n+1};0\right)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3709298', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/789052/']} | jdg_380700 |
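Writing the substitution out in full, and combining it with the known value of $R(s;0)$ from the question (valid for $s > n+1$, so the integral converges):

```latex
y = x^{n+1}, \qquad dy = (n+1)\,x^{n}\,dx, \qquad x^{s} = y^{s/(n+1)},
\\[4pt]
R(s;n)=\int_{0}^{\infty}\frac{x^{n}\,dx}{x^{s}+1}
      =\frac{1}{n+1}\int_{0}^{\infty}\frac{dy}{y^{s/(n+1)}+1}
      =\frac{1}{n+1}\,R\!\left(\frac{s}{n+1};0\right)
      =\frac{1}{s}\,\Gamma\!\left(\frac{n+1}{s}\right)\Gamma\!\left(1-\frac{n+1}{s}\right),
```

which by Euler's reflection formula equals $\dfrac{\pi}{s\,\sin\!\big(\pi(n+1)/s\big)}$.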
stackexchange | llm_judgeable_groundtruth_similarity | 17124719 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently heard about 9-patch images. I know it's nine-tiled and stretchable. I'd like to know more about it. How can I create a 9-patch image? Is there any tool? Can I create it from the Android SDK or code? What are the main advantages of 9-patch over a regular PNG? (Is it stretched dynamically/automatically according to the screen?)
Now provide the response and nothing else.
| The SDK and Android Studio both ship with the "Draw 9-patch" tool ("draw9patch" in the SDK tools folder) which is a simple editor. Here is a nicer one which is also open source. It has a simple but clever default image. The official documentation has improved over the years. In summary, nine patch images' most important advantage is that they can specify (non-contiguous) areas to scale: A NinePatch graphic is a standard PNG image that includes an extra1-pixel border. It must be saved with the 9.png extension in theres/drawable/ directory of your project. Use the border to define the stretchable and static areas of theimage. You indicate a stretchable section by drawing one (or more)1-pixel wide black line(s) in the left and top part of the border (theother border pixels should be fully transparent or white). You canhave as many stretchable sections as you want. The relative size ofthe stretchable sections stays the same, so the largest section alwaysremains the largest. You can also define an optional drawable section of the image(effectively, the padding lines) by drawing a line on the right and aline on the bottom. If a View object sets the NinePatch graphic as itsbackground and then specifies the view's text, it stretches itself sothat all the text occupies only the area designated by the right andbottom lines (if included). If the padding lines aren't included,Android uses the left and top lines to define this drawable area. But the docs lack good examples. This tutorial has some great examples at the end that answer the second part of your question, explaining how the scaling works - not just for buttons - but also frames, and it has a complete example project that you can download and play with. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17124719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2124004/']} | jdg_380701 |
stackexchange | llm_judgeable_groundtruth_similarity | 15671679 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make a working example of jQuery 1.9.1 AJAX + icanhaz/mustache. This is my template: <script id="user" type="text/html"> {{#users}}<li>Username: {{ username }}, fullname: {{ fullname }}</li>{{/users}}</script> and this is my JavaScript: $(document).ready( function() { $("#user-btn").click(function() { $.ajax({ type: "GET", url: "../php/client/json.php", data: { type: "users" } }).done(function( response ) { var element = $('#dialog-message'); element.html("<ul>"); element.append(ich.user(response)); element.append("</ul>"); });}); The AJAX response from this address looks something like: {"users":[{"username":"jd","fullname":"John Doe"},{"username":"jl","fullname":"John Lennon"}]}; With the following code, icanhaz cannot render anything for me. I spent some time in the JavaScript console and found out that the typeof response is string while I expected object . icanhaz also expects an object - that's why it didn't manage to render the correct response. Am I doing something wrong, or am I just a poor newbie who didn't know that jquery.ajax always returns string responses? If so, how should I handle them?
Now provide the response and nothing else.
| If you are getting a string returned from your AJAX call, you need to add dataType: "json" . This will make jQuery parse the response as JSON, if possible. $(document).ready( function() { $("#user-btn").click(function() { $.ajax({ type: "GET", url: "../php/client/json.php", data: { type: "users" }, dataType: "json" }).done(function( response ) { ... });}); Are you sure your ich.user method expects an array of users and not just a single user object? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15671679', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/769384/']} | jdg_380702 |
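`dataType: "json"` just makes jQuery run the response body through `JSON.parse` before handing it to `.done()`. The same string-to-object step, shown in Python for clarity (the payload is the one from the question):

```python
import json

response_text = (
    '{"users":[{"username":"jd","fullname":"John Doe"},'
    '{"username":"jl","fullname":"John Lennon"}]}'
)
response = json.loads(response_text)  # str in, structured object out
```

Until that parse happens, the "JSON" is just an opaque string — which is exactly why the template engine had nothing to iterate over.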
stackexchange | llm_judgeable_groundtruth_similarity | 281494 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
This is kind of a continuation of a recent (closed) question . Is there an order-preserving surjective function $f:{\mathbb N}^{\mathbb N}\to [0,\infty)$ (where for $a,b\in {\mathbb N}^{\mathbb N}$ we have $a\leq b$ if $a(n) \le b(n)$ for all $n\in {\mathbb N}$)? Thanks to Jeremy Rickard who made me aware that a previous version of this question was trivial and therefore uninteresting.
Now provide the response and nothing else.
| Yes. Let us isomorphically identify the poset of functions $\omega \to \omega$ (under the pointwise order) with the set of functions $\omega \to \mathbb{N}_2 = \{n \in \mathbb{N}: n \geq 2\}$, again ordered pointwise. Now in fact there is an isomorphism of posets $\mathbb{N}_2^\omega \to [1, \infty)$ given by continued fractions $$(a_1, a_2, \ldots) \mapsto a_1 - \frac1{a_2 - \frac1{a_3 - \ldots}}$$ provided we endow the domain with dictionary order. Then finish by observing that the identity function $$(\mathbb{N}_2^\omega)_{\text{pointwise}} \to (\mathbb{N}_2^\omega)_{\text{dict}}$$ is order-preserving. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/281494', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8628/']} | jdg_380703 |
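The order-preservation is easy to spot-check numerically on finite truncations of the continued fraction (a sketch to build intuition, not a proof):

```python
def cf(terms):
    """Evaluate a1 - 1/(a2 - 1/(a3 - ...)) for a finite prefix, all terms >= 2."""
    val = float(terms[-1])
    for a in reversed(terms[:-1]):
        val = a - 1.0 / val
    return val
```

For instance, `cf([2] * n)` works out to (n + 1)/n, so the all-2s sequence (the dictionary-order minimum) maps down toward 1, the left endpoint of $[1,\infty)$.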
stackexchange | llm_judgeable_groundtruth_similarity | 615704 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is your preferred method/datatype for storing passwords in a database (preferably SQL Server 2005). The way I have been doing it in several of our applications is to first use the .NET encryption libraries and then store them in the database as binary(16). Is this the preferred method or should I be using a different datatype or allocating more space than 16?
Now provide the response and nothing else.
| I store the salted hash equivalent of the password in the database and never the password itself, then always compare the hash to the generated one of what the user passed in. It's too dangerous to ever store the literal password data anywhere. This makes recovery impossible, but when someone forgets or loses a password you can run through some checks and create a new password. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/615704', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/22164/']} | jdg_380704 |
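A minimal stdlib sketch of that salted-hash-and-compare flow (PBKDF2 here for portability; in production prefer a memory-hard scheme such as argon2 or bcrypt via a vetted library, and tune the iteration count to your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune for ~100ms on your hardware

def hash_password(password: str) -> bytes:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt + digest          # store the salt alongside the hash

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

Note that the literal password never touches disk: only the salt and the derived digest are stored, and login is a recompute-and-compare.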
stackexchange | llm_judgeable_groundtruth_similarity | 1437457 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I, in a batch script, find the full path to application XYZ if it is installed? Clarifications: The application is not in the PATH. All I have is its name, in this case "ISTool.exe", and I would like to get C:\Program\ISTool\ISTool.exe
Now provide the response and nothing else.
| You can locate an executable on the path (or other path-like string if necessary):

c:\> for %i in (cmd.exe) do @echo. %~$PATH:i
 C:\WINDOWS\system32\cmd.exe
c:\> for %i in (python.exe) do @echo. %~$PATH:i
 C:\Python25\python.exe

Details can be found at the end of the help text for the "for" command, "for /?" but the summary is:

%~i - expands %i removing any surrounding quotes.
%~fi - expands %i to a fully qualified path name.
%~di - expands %i to a drive letter only.
%~pi - expands %i to a path only.
%~ni - expands %i to a file name only.
%~xi - expands %i to a file extension only.
%~si - expanded path contains short names only.
%~ai - expands %i to file attributes of file.
%~ti - expands %i to date/time of file.
%~zi - expands %i to size of file.
%~$P:i - searches the directories listed in the P environment variable and expands %i to the fully qualified name of the first one found. If the environment variable name is not defined or the file is not found by the search, then this modifier expands to the empty string.

The modifiers can be combined to get compound results:

%~dpi - expands %i to a drive letter and path only.
%~nxi - expands %i to a file name and extension only.
%~fsi - expands %i to a full path name with short names only.
%~dp$P:i - searches the directories listed in the P environment variable for %i and expands to the drive letter and path of the first one found.
%~ftzai - expands %i to a DIR like output line.

If your executable isn't on the path (as per your edit), your best bet is to use the bare/subdirectory format of dir which will do it for you. From the root directory:

dir /b /s ISTool.exe

will get you all of the files on that drive with that name. You then just have to parse the output. My own preference would be to use Cygwin's "find /cygdrive -name ISTool.exe" but that's because I already have it installed. You may not want that (or even have that option). Update: That dir /b /s command will take a while since it's basically searching the whole disk.
If that's a problem you may want to consider periodically creating a cached record of all files on all disks with a cmd file like:

@echo off
setlocal enableextensions enabledelayedexpansion
del c:\files.cache.tmp >nul: 2>nul:
for %%d in (c d e) do (
    cd /d %%d:\
    dir /b /s >>c:\files.cache.tmp
)
del c:\files.cache >nul: 2>nul:
move c:\files.cache.tmp c:\files.cache
endlocal

You could do this with scheduled tasks either nightly (for an always-on server) or on boot (for a desktop). You could even make the script more intelligent to do it only every couple of days (I have an automated backup script that does a similar thing on the family machines I support). This creates the list in a temporary cache file then overwrites the original one to ensure the time when the file doesn't exist is minimized. Then you can just use:

findstr \\ISTool.exe c:\files.cache

to locate all your files. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1437457', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17413/']} | jdg_380705 |
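The same cache-then-search idea can be sketched cross-platform; this Python version is my own illustration of the pattern (the directory layout and file names are hypothetical):

```python
import os
import tempfile

def build_cache(roots, cache_path):
    """Walk each root and record every file path, like `dir /b /s >> cache`."""
    with open(cache_path, "w", encoding="utf-8") as cache:
        for root in roots:
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    cache.write(os.path.join(dirpath, name) + "\n")

def find_in_cache(cache_path, filename):
    """Scan the cache for entries with the wanted base name, like findstr."""
    with open(cache_path, encoding="utf-8") as cache:
        return [line.strip() for line in cache
                if os.path.basename(line.strip()).lower() == filename.lower()]

# Demo on a throwaway directory tree.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "ISTool"))
open(os.path.join(base, "ISTool", "ISTool.exe"), "w").close()
cache = os.path.join(base, "files.cache")
build_cache([base], cache)
hits = find_in_cache(cache, "ISTool.exe")
assert len(hits) == 1 and hits[0].endswith("ISTool.exe")
```

As with the batch version, the expensive walk happens once on a schedule and every subsequent lookup is just a linear scan of the cache file.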
stackexchange | llm_judgeable_groundtruth_similarity | 34229 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I just bought and installed a new fuse for an analog multimeter I scored for free and now it works like a charm. The new fuse is the same rating as the blown one I found inside, which is 0.5A 250V. The meter has a warning symbol consisting of a little triangle with an exclamation point inside it and says 500V. I'm completely new to electrical engineering, because I am a Computer Science student. My question is, besides accidentally passing the Ω or kΩ settings while hooked up to a power source, what other things should I avoid doing to prevent the fuse blowing or any other damage to the meter or myself?
Now provide the response and nothing else.
| Multimeter: ... besides accidentally passing the Ω or kΩ settings while hooked up to a power source, what other things should I avoid doing to prevent the fuse blowing or any other damage to the meter or myself? Stating meter brand and model will probably allow us to provide some model specific advice. AC mains use is important and is covered at length below. Do not use on 110 or 230 VAC mains before considering material in main section below. Make every effort to not overload the mA ranges past fuse capability. Low current range (500 mA and less) will usually be fused and share a common fuse. If overloaded by say 1A this fuse will go open circuit essentially instantaneously. This can be very annoying [tm]. Multiple probe sockets: Chris mentioned this. It's worth repeating. some meters have two hard wired leads. Some meters have two probe sockets - +ve & -ve.BUT many meters have 3 probe sockets - 1 x common, 1 x high current +ve, 1 x other +ve. Sometimes other functions may share the high current positive but usually not. Having a high current positive input allows use of a VERY low resistance unfused unswitched shunt for current measurements of 10A or higher. NO MATTER WHAT RANGE THE METER IS SET TO, the shunt will be present from common to high current +ve. If you set the meter to AC volts and attempt to measure mains voltages the shunt will be placed across the high voltage source. This can be VERY exciting. As there is usually no fuse in this arrangement it's something you REALLY want to avoid doing. I've done it and lived. If the shunt has more hair on its chest than your mains fuse then it may live too, but YCMV. (your calibration may vary) subsequently. Some meters have very low voltage ranges - some have 200 mV and a few have 20 mV = 10 uV resolution. These low current ranges may use the basic digital meter IC with little or no external attenuator. Try not to apply stupidly high voltages. 
If you have a low ohms range (often 200 Ohms) be sure to short the probes together before measuring to determine the zero point - this is at least the lead resistance plus any contact resistance in the probe plugs and sockets. Twisting the plugs to and fro and ensuring they are fully seated can substantially reduce and stabilise zero resistance. Be aware that the apparent resolution and repeatability of most meters is substantially higher than the accuracy. DC accuracies may be 1% or 2% and AC may be 5% and sometimes worse (+/- in each case). Some meters have a low battery indicator. Some don't. Some meters get very inaccurate when the battery voltage is too low. Some stay stable. If your meter has no low battery indicator and is badly affected by low battery state then you need to be either properly aware of effects and state of battery etc. or to check its accuracy every time you use it - a fairly impractical choice except, perhaps, if the meter is an especially useful one. This is such an insidious trap and hard enough to remember in practice that it may be worth not using a meter that has no warning and/or that fails under low battery. Current range resistance: When measuring current the resistance of the meter results in an Imeasured x Rmeter voltage drop. On eg the 200 mA range most meters have a resistance of 1 Ohm or less so that at 200 mA the meter drops 0.2 V (0.2 A x 1 Ohm = 0.2 V). This is usually (but not always) low enough to not be too important and can usually be allowed for. However, some meters have much higher resistances - I have one with 17 Ohms resistance on the 200 mA range, meaning that it drops a massive 3.4 Volts across the meter at 200 mA. This is entirely unacceptable and the "designer" of the meter needs a severe talking to (before or after tar & feathering as desired). In my case I clearly labelled the meter as not to be used for current measurements but in a situation where many people may use it, it may be best to discard it.
Mains use: DEATH IS POSSIBLE Regardless of whether AC mains are 110 VAC or 230 VAC a meter MUST be CAT II or better rated if you value your life. METERS USED ON ANY AC MAINS SHOULD BE CAT II OR HIGHER RATED. Many lower cost meters are CAT I rated (or not rated). Digikey - What are Multimeter CAT (Category) Safety Ratings? Fluke ABCs of multimeter safety Fluke Guide to digital multimeter safety Understanding CAT Safety Ratings for Multimeters and other Test Instruments Any use on AC mains should be with due caution. More so on 230 VAC rather than 110 VAC, BUT both are lethal. If mains is 230 VAC, peak voltage is ~= 230 x 1.414 (sine wave RMS to peak factor) = 325 V peak. 500 V (meter rating in this case) is more than 50% above this BUT spikes, interesting waveforms and Murphy can make meters not specifically rated for 230 VAC operation erupt in colourful smoke and flames. Worst case, and, thankfully, rare, is that people also die when the meter does. This can happen if the meter draws an arc from a high energy source and the meter fuse and mains fuse or breaker both do not interrupt it. Household mains supplies are capable of perhaps 100A (say 25 kW at 230 VAC) and in some cases much more if fuses etc do not get in the way. Ideally, test equipment designed for use with high voltage high energy sources will use HRC (High rupture capacity) fuses which are able to blow and stay blown when the initial current surge vastly exceeds their rated fusing capacity. Non HRC fuses may sustain an arc of hundreds of amps after the eg 500 mA fuse vanishes. BUT, using an HRC fuse is pointless if the equipment is not also designed to break such an arc. Most cheap meters are unlikely to have this capability. In many cases all that will happen if a meter fails under high voltage breakdown will be a pleasant arcing sound from within, a curl of smoke from various orifices and a bad and lingering smell. In most cases the user will not be directly affected.
If using such meters on AC mains pray for high values of "most". HOWEVER - if the source is able to sustain a very high current, a meter and fuse not rated to break such a current may allow an arc to form and sustain across the meter. People have died from such occurrences when using an unsuitable meter for AC mains use. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/34229', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10314/']} | jdg_380706 |
stackexchange | llm_judgeable_groundtruth_similarity | 2997689 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that $2n^2+11$ and $2n^2+29$ generate primes for all non-negative integers $n<11$ and $n<29$ respectively. This question is from Elementary Number Theory by Kenneth Rosen. The book's solution has used a brute force method to generate all numbers and manually check if they are prime or not. I wish to ask instead if there is a better method. I have looked around on the internet and couldn't find anything. I instead wrote a C program to test the theory about primes of the form $2n^2+x$ $(0\leq n<x)$ , and it seems only $x=3,5,11,29$ satisfy it under $10^6$ . So, is there no better, more number theoretical way to verify the given question?
Now provide the response and nothing else.
| Context: here is a proof for the better-known $n^2 + n + 41$ Is the notorious $n^2 + n + 41$ prime generator the last of its type? Took me a while and some numerical experiments, but I got this one. For foundational material on positive binary quadratic forms, see https://en.wikipedia.org/wiki/Binary_quadratic_form and https://en.wikipedia.org/wiki/Binary_quadratic_form#References My favorite book is Buell; people seem to like Cox. We are given a prime $p \equiv \pm 3 \pmod 8$ such that the form class number $h(-8p) = 2.$ In particular, the only classes are those of $$ x^2 + 2py^2 $$ and $$ 2 x^2 + p y^2. $$ The shorthand for these two forms is $\langle 1,0,2p \rangle$ and $\langle 2,0,p \rangle \; . \;$ Note that the product $2p$ must be one of Euler's Idoneal Numbers. We will ASSUME that there is an integer $n$ with $1 \leq n < p$ such that $2n^2 + p$ is composite. We will show that this leads to a contradiction by constructing a third primitive quadratic form of the same discriminant. Let $q$ be the smallest prime that divides any $2n^2 + p$ with $n < p.$ Usually there will be several small $n$ giving the same $q.$ In turn, take the smallest $n$ that works, meaning $2n^2 + p \equiv 0 \pmod q \; . \;$ We get $q$ prime, $$ 2n^2 + p = q t \; , \; $$ $$ t > q > 2n \; . \; $$ There are just two cases. IF $q > 4 n,$ then $$ \langle q,4n,2t \rangle $$ is a reduced primitive form of the same discriminant $(-8p).$ It is not equivalent to either of the original forms listed, that is what we get for reduced forms. IF $q < 4 n,$ then $$ \langle q,4n-2q, 2t - 4n + q \rangle $$ is a reduced primitive form of the same discriminant $(-8p).$ In either case, the presence of composite numbers represented as $2n^2 + p$ has resulted in $h(-8p) \geq 3,$ contradicting the hypothesis of class number 2.
Examples for the first case: If $p \equiv 3 \pmod 5,$ we can take $n=1$ and get third form $$ \langle 5,4, \frac{2p+4}{5} \rangle $$ If $p \equiv 3 \pmod {11},$ we can take $n=2$ and get third form $$ \langle 11,8, \frac{2p+16}{11} \rangle $$ Examples for the second case: If $p \equiv 1 \pmod 3,$ we can take $n=1$ and get third form $$ \langle 3,2, \frac{2p+1}{3} \rangle $$ If $p \equiv 2 \pmod 5,$ we can take $n=2$ and get third form $$ \langle 5,2, \frac{2p+1}{5} \rangle $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2997689', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/89548/']} | jdg_380707 |
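The original claim itself is also easy to confirm by brute force, independently of the class-number argument above:

```python
def is_prime(m):
    """Trial-division primality test; fine for numbers this small."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# The two claims from the question.
assert all(is_prime(2 * n * n + 11) for n in range(11))
assert all(is_prime(2 * n * n + 29) for n in range(29))

# The bound is sharp: n = p gives 2*p*p + p = p*(2*p + 1), which is composite.
assert not is_prime(2 * 11 * 11 + 11)
assert not is_prime(2 * 29 * 29 + 29)
```

This is the same check the book performs by hand; the proof above explains *why* it must succeed whenever the class number condition holds.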
stackexchange | llm_judgeable_groundtruth_similarity | 18458934 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
| I noticed that apps like Instagram use UICollectionViews to display the feed of photos. I also noticed that the cells for these photos are somehow placed on screen before the actual photos are downloaded completely. Then when a download completes, that photo is nicely displayed in the correct cell. I would like to copy that functionality, but I do not know how to go about it. I am currently downloading a large JSON object which I transform to an array of NSDictionaries. Each NSDictionary contains information about each photo, and among that information, a URL is presented. At this URL, I can find the corresponding image that I need to download and display in my UICollectionViewCells. As for now, I iterate this list and initiate a download for each URL I see. When that download is complete, I reload the collection view using [self.collectionView reloadData]. But, as you can imagine, if I have 30 cells that all want an image, there are a lot of reloadData calls. I am using AFNetworking to handle the download; here is the method, which I call based on the URL I mentioned before:

-(void) downloadFeedImages:(NSString *) photoURL imageDescs:(NSDictionary*)imageDescs photoId:(NSString *)photoID{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    NSString *directory = [paths objectAtIndex:0];
    NSString* foofile = [directory stringByAppendingPathComponent:photoID];
    if([[NSFileManager defaultManager] fileExistsAtPath:foofile]){
        // IF IMAGE IS CACHED
        [self.collectionView reloadData];
        return;
    }
    NSLog(@"photoURL: %@", photoURL);
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:photoURL]];
    AFImageRequestOperation *operation = [AFImageRequestOperation imageRequestOperationWithRequest:request imageProcessingBlock:nil success:^(NSURLRequest *request, NSHTTPURLResponse *response, UIImage *image) {
        // Save Image
        NSLog(@"URL-RESPONSE:%@", image);
        NSString *myFile = [directory stringByAppendingPathComponent:photoID];
        NSData *imageData = UIImagePNGRepresentation(image);
        [imageData writeToFile:myFile atomically: YES];
        [self.collectionView reloadData];
    } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error) {
        NSLog(@"ERROR: %@", [error localizedDescription]);
    }];
    [[WebAPI sharedInstance] enqueueHTTPRequestOperation:operation];
}

So basically, I wonder how I can achieve the functionality that Instagram and similar applications have when it comes to displaying a feed with images. In addition, I would like to know a good way to initiate a download for each cell, and when that download is finished, update that cell, not redraw the entire view using [reloadData]. Thanks
Now provide the response and nothing else.
| The technique you want to implement is called lazy loading. Since you are using AFNetworking it will be easier to implement this in your case. Each of your collection view cells needs to have a UIImageView to display the image. Use the UIImageView+AFNetworking.h category and set the correct image URL inside this method:

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    // ....
    [cell.imageView setImageWithURL:imageURL placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
    // ...
    return cell;
}

Placeholder is the image which will be displayed until the required image is downloaded. This will simply do the required job for you. Note: By default, URL requests have a cache policy of NSURLCacheStorageAllowed and a timeout interval of 30 seconds, and are set not to handle cookies. To configure URL requests differently, use setImageWithURLRequest:placeholderImage:success:failure: . Also, for your reference, if you want to implement lazy loading of images yourself, follow this Apple sample code . This is for UITableView but the same technique can be used for UICollectionView as well. Hope that helps! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18458934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2040431/']} | jdg_380708 |
stackexchange | llm_judgeable_groundtruth_similarity | 14379586 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am able to draw multiple polyline in google map and style them, but I want to color each polyline with a different color. Currently, I have this code: var DrivePath = [ new google.maps.LatLng(37.772323, -122.214897), new google.maps.LatLng(21.291982, -157.821856), new google.maps.LatLng(-18.142599, 178.431), new google.maps.LatLng(-27.46758, 153.027892), new google.maps.LatLng(12.97918167, 77.6449), new google.maps.LatLng(12.97918667, 77.64487167), new google.maps.LatLng(12.979185, 77.64479167), new google.maps.LatLng(12.97918333, 77.64476)];var PathStyle = new google.maps.Polyline({ path: DrivePath, strokeColor: "#FF0000", strokeOpacity: 1.0, strokeWeight: 2});PathStyle.setMap(map); Is there any way I can add a separate style to each polyline that I am creating?
Now provide the response and nothing else.
| Certainly. For instance suppose you know what colours you want to go with each line, let's assume you therefore have an array of colours which has a length equal to DrivePath.length - 1. var Colors = [ "#FF0000", "#00FF00", "#0000FF", "#FFFFFF", "#000000", "#FFFF00", "#00FFFF", "#FF00FF"]; Now, instead of drawing one polyline, draw a separate polyline for each coordinate. for (var i = 0; i < DrivePath.length-1; i++) { var PathStyle = new google.maps.Polyline({ path: [DrivePath[i], DrivePath[i+1]], strokeColor: Colors[i], strokeOpacity: 1.0, strokeWeight: 2, map: map });} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14379586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1238575/']} | jdg_380709 |
stackexchange | llm_judgeable_groundtruth_similarity | 2189053 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a usercontrol that has a scrollviewer, then a bunch of child controls like text boxes, radio buttons, and listboxes, etc. inside of it. I can use the mouse wheel to scroll the parent scrollviewer until my mouse lands inside a listbox; then the mouse wheel events start going to the listbox. Is there any way to have the listbox send those events back up to the parent control? Removing the listbox from inside the parent control like this question suggests ( Mouse wheel not working when over ScrollViewer's child controls ) isn't a solution. I have tried private void ListBox_PreviewMouseWheel(object sender, MouseWheelEventArgs e){ e.Handled = true;} but that didn't work either. Thanks
Now provide the response and nothing else.
| The answer you have referenced is exactly what is causing your problem, the ListBox (which is composed of among other things a ScrollViewer) inside your ScrollViewer catches the MouseWheel event and handles it, preventing it from bubbling and thus the ScrollViewer has no idea the event ever occurred. Use the following extremely simple ControlTemplate for your ListBox to demonstrate (note it does not have a ScrollViewer in it and so the MouseWheel event will not be caught). The ScrollViewer will still scroll with the mouse over the ListBox.

<UserControl.Resources>
    <ControlTemplate x:Key="NoScroll">
        <ItemsPresenter></ItemsPresenter>
    </ControlTemplate>
</UserControl.Resources>
<ScrollViewer>
    <SomeContainerControl>
        <.... whatever other controls are inside your ScrollViewer>
        <ListBox Template="{StaticResource NoScroll}"></ListBox>
    <SomeContainerControl>
</ScrollViewer>

You do have the option of capturing the mouse when it enters the ScrollViewer though so it continues to receive all mouse events until the mouse is released, however this option would require you to delegate any further mouse events to the controls contained within the ScrollViewer if you want a response...the following MouseEnter MouseLeave event handlers will be sufficient.

private void ScrollViewerMouseEnter(object sender, MouseEventArgs e)
{
    ((ScrollViewer)sender).CaptureMouse();
}

private void ScrollViewerMouseLeave(object sender, MouseEventArgs e)
{
    ((ScrollViewer)sender).ReleaseMouseCapture();
}

Neither of the workarounds I have provided are really preferred however and I would suggest rethinking what you are actually trying to do. If you explain what you are trying to achieve in your question I'm sure you will get some more suggestions... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2189053', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/86524/']} | jdg_380710 |
stackexchange | llm_judgeable_groundtruth_similarity | 158319 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A number of times I have encountered in text-books and articles that neutrinos might contribute only a small fraction to dark matter. The reason has to do with the fact that if all of the dark matter consisted of neutrinos, then small-scale structures in the Universe could not have formed yet, because, as they say, neutrinos "wash out" small fluctuations. However, none of these texts provided a reference to any specific sources explaining in detail what is meant by "washing out". After all, neutrinos are notorious in their weak interaction with baryonic matter, so if there is a small-scale fluctuation of baryons, then how background neutrinos can prevent it from growing further if they practically do not interact with baryons? I guess the question boils down to calculating cross-sections of interactions at specific temperatures. I would appreciate comments and references to sources addressing this particular issue.
Now provide the response and nothing else.
| The dark matter energy density of the universe is, at present, thought to be about five times that of the baryonic matter energy density. Meanwhile, the radiation energy density is almost negligible. Matter energy is about 4.5% of the total energy density of the universe. Dark matter makes up about 23%, and radiation is very small at about 0.009%. The number for radiation was calculated including all relativistic particles, including neutrinos. In fact, if you go through and read this link , it details the calculation for the total neutrino energy density and shows that it is thought to be about 68% of the photon energy density. So the 0.009% of the universe that is relativistic particles is not even mostly neutrinos. My point? There truly are simply not enough neutrinos out there to explain away dark matter as neutrinos. Not only that, but we have clearly already included them in the calculation. Dark matter makes up 22.7% (give or take) of the energy density of the universe. And that is on top of the less than 0.0036% that neutrinos account for. So there's no way that neutrinos could be a major, let alone sole, component of dark matter. For an overview of the energy densities, see Wikipedia and links therein. To answer your question on "washing out", the Wikipedia article on Dark Matter does a very good job at explaining this. For small scale structure to form, dark matter is required to help gravitationally bind baryonic matter. However, the free streaming length of any candidate particle that accomplishes this must be small. The free streaming length is the distance that the particles move in the early universe from random motions before the expansion slows them down.
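The "68% of the photon energy density" figure quoted above follows from the standard cosmological relation for three massless neutrino species (this formula is textbook cosmology, not derived in this answer):

```python
# Energy density of three massless neutrino species relative to photons:
#   rho_nu / rho_gamma = N_nu * (7/8) * (4/11)**(4/3)
# 7/8: fermion vs boson statistics; (4/11)^(4/3): neutrinos are colder than
# photons because electron-positron annihilation heats only the photon bath.
N_nu = 3
ratio = N_nu * (7 / 8) * (4 / 11) ** (4 / 3)
print(f"rho_nu / rho_gamma = {ratio:.3f}")
assert abs(ratio - 0.68) < 0.01  # matches the ~68% quoted above
```

Multiplying this ratio by the tiny photon energy fraction gives the "less than 0.0036%" neutrino share mentioned in the answer, which is why they cannot account for the ~23% in dark matter.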
Primordial density fluctuations provide the seeds for small scale structure to form, but if the free streaming length of the dark matter candidate particle is larger than the scale of the small primordial perturbations, then these perturbations become homogenized (or "washed out") as the particles communicate and equilibrate. Without the perturbations, there is no seed for the small scale structure and, thus, it does not form. Now you may be wondering why dark matter is needed in the first place for small scale structure to form. After the Big Bang, ordinary baryonic matter had too much temperature and pressure to collapse into structure on its own. It requires a gravitational seed (like giving it a kick-start to get the gravitational collapse going), which means there has to be a perturbation in the density of a colder, less interacting form of matter to provide this seed; that is, a local density of this cold dark matter that is higher than the background value. These perturbations would be formed because of the primordial density perturbations left over from inflation. However, neutrinos are known to have a high free streaming length, thus they would smooth out these perturbations in their own density and you wouldn't get a local high density region that could act as a seed. No seed means no collapse. No collapse means no small scale structure (until it's much too late). Neutrinos are actually the primary candidate for hot dark matter, but they are not a viable consideration for cold dark matter, which is what is necessary to generate sufficient small scale structure formation. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/158319', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/41682/']} | jdg_380711 |
stackexchange | llm_judgeable_groundtruth_similarity | 2553 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is the situation: I am uploading a large file from client A to a server using sftp. I also need to download this file from the server to client B over ssh. What I would like to do is start the transfer from the server to client B when the upload is still happening from client A. What is the best method/tool to get this done? UPDATE : The answers so far are interesting--I'll be sure to read and test them all. Bonus points for answers that don't depend on controlling how Client A is uploading the file. (ie. the only thing we know from client A is that the file is being written to a known filename.)
Now provide the response and nothing else.
| For a single file instead of using SFTP you could pipe the file over ssh using cat or pv at the sending side and using tee on the middle server to both send the data to a file there and send a copy over another ssh link the other side of which just writes the data to a file. The exact voodoo required I'll leave as an exercise for the reader, as I've not got time to play right now (sorry). This method would only work if the second destination is publicly accessible via SSH which may not be the case as you describe it as a client machine. Another approach, which is less "run and wait" but may otherwise be easier, is to use rsync between the server and client B. The first time you run this it may get a partial copy of the data, but you can just re-run it to get more data afterwards (with one final run once the Client1->Server transfer is complete). This will only work if the server puts the data directly into the right file-name during the SFTP transfer (sometimes you will see the data going into a temporary file which is then renamed once the file is completely transferred - this is done to make the file update more atomic but will render the rsync idea unusable). You could also use rsync for the C1->S transfer instead of scp (if you use the --inplace option to avoid the problem mentioned above) - using rsync would also give you protection against needing to resend everything if the C1->Server connection experiences problems during a large transfer (I tend to use rsync --inplace -a --progress <source> <dest> instead of scp/sftp when rsync is available, for this "transfer resume" behaviour). To summarise the above, running: rsync --inplace -a --progress <source> user@server:/<destination_file_or_folder> on client1 then running rsync --inplace -a --progress user@server:/<destination_file_or_folder> <destination_on_cli2> on client2 repeatedly until the first transfer is complete (then running once more to make sure you've got everything).
rsync is very good at only transferring the absolute minimum it needs to update a location instead of transferring the whole lot each time. For paranoia you might want to add the --checksum option to the rsync commands (which will take much more CPU time for large files but won't result in significantly more data being transferred unless it is needed) and for speed the --compress option will help if the data you are transferring is not already in a compressed format. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/2553', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1207/']} | jdg_380712 |
stackexchange | llm_judgeable_groundtruth_similarity | 176599 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When explaining LASSO regression, the diagram of a diamond and circle is often used. It is said that because the shape of the constraint in LASSO is a diamond, the least squares solution obtained might touch the corner of the diamond such that it leads to a shrinkage of some variable. However, in ridge regression, because it is a circle, it will often not touch the axis. I could not understand why it cannot touch the axis, or why it has a lower probability than LASSO of shrinking certain parameters to zero. On top of that, why do LASSO and ridge have lower variance than ordinary least squares? The above is my understanding of ridge and LASSO and I might be wrong. Can someone help me understand why these two regression methods have lower variance?
Now provide the response and nothing else.
| This is regarding the variance. OLS provides what is called the Best Linear Unbiased Estimator (BLUE). That means that if you take any other unbiased estimator, it is bound to have a higher variance than the OLS solution. So why on earth should we consider anything else than that? Now the trick with regularization, such as the lasso or ridge, is to add some bias in turn to try to reduce the variance. Because when you estimate your prediction error, it is a combination of three things:$$\text{E}[(y-\hat{f}(x))^2]=\text{Bias}[\hat{f}(x)]^2+\text{Var}[\hat{f}(x)]+\sigma^2$$The last part is the irreducible error, so we have no control over that. Using the OLS solution the bias term is zero. But it might be that the second term is large. It might be a good idea (if we want good predictions) to add in some bias and hopefully reduce the variance. So what is this $\text{Var}[\hat{f}(x)]$? It is the variance introduced in the estimates for the parameters in your model. The linear model has the form$$\mathbf{y}=\mathbf{X}\beta + \epsilon,\qquad \epsilon\sim\mathcal{N}(0,\sigma^2I)$$To obtain the OLS solution we solve the minimization problem$$\arg \min_\beta ||\mathbf{y}-\mathbf{X}\beta||^2$$This provides the solution$$\hat{\beta}_{\text{OLS}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$The minimization problem for ridge regression is similar:$$\arg \min_\beta ||\mathbf{y}-\mathbf{X}\beta||^2+\lambda||\beta||^2\qquad \lambda>0$$Now the solution becomes$$\hat{\beta}_{\text{Ridge}} = (\mathbf{X}^T\mathbf{X}+\lambda I)^{-1}\mathbf{X}^T\mathbf{y}$$So we are adding this $\lambda I$ (called the ridge) on the diagonal of the matrix that we invert. The effect this has on the matrix $\mathbf{X}^T\mathbf{X}$ is that it "pulls" the determinant of the matrix away from zero. Thus when you invert it, you do not get huge eigenvalues. But that leads to another interesting fact, namely that the variance of the parameter estimates becomes lower.
I am not sure if I can provide a clearer answer than this. What this all boils down to is the covariance matrix for the parameters in the model and the magnitude of the values in that covariance matrix. I took ridge regression as an example, because that is much easier to treat. The lasso is much harder and there is still active ongoing research on that topic. These slides provide some more information and this blog also has some relevant information. EDIT: What do I mean by saying that adding the ridge "pulls" the determinant away from zero? Note that the matrix $\mathbf{X}^T\mathbf{X}$ is a positive definite symmetric matrix. Note that all symmetric matrices with real values have real eigenvalues. Also since it is positive definite, the eigenvalues are all greater than zero. Ok so how do we calculate the eigenvalues? We solve the characteristic equation:$$\text{det}(\mathbf{X}^T\mathbf{X}-tI)=0$$This is a polynomial in $t$, and as stated above, the eigenvalues are real and positive. Now let's take a look at the equation for the ridge matrix we need to invert:$$\text{det}(\mathbf{X}^T\mathbf{X}+\lambda I-tI)=0$$We can change this a little bit and see:$$\text{det}(\mathbf{X}^T\mathbf{X}-(t-\lambda)I)=0$$So we can solve this for $(t-\lambda)$ and get the same eigenvalues as for the first problem. Let's assume that one eigenvalue is $t_i$. So the eigenvalue for the ridge problem becomes $t_i+\lambda$. It gets shifted by $\lambda$. This happens to all the eigenvalues, so they all move away from zero. Here is some R code to illustrate this:

    # Create random matrix
    A <- matrix(sample(10, 9, T), nrow = 3, ncol = 3)
    # Make a symmetric matrix
    B <- A + t(A)
    # Calculate eigenvalues
    eigen(B)
    # Calculate eigenvalues of B with ridge
    eigen(B + 3 * diag(3))

Which gives the results:

    > eigen(B)$values
    [1] 37.368634  6.952718 -8.321352
    > eigen(B + 3 * diag(3))$values
    [1] 40.368634  9.952718 -5.321352

So all the eigenvalues get shifted up by exactly 3.
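The variance reduction itself is easy to check numerically. Here is a small simulation (illustrative, not from the original answer) in one dimension, where the ridge estimate is an exact shrinkage of the OLS estimate, so its variance is provably smaller:

```python
import random
import statistics

random.seed(0)

# Fixed one-dimensional design: y = beta * x + noise
xs = [-2 + 4 * i / 19 for i in range(20)]
Sxx = sum(x * x for x in xs)
beta, sigma, lam = 1.0, 1.0, 5.0

ols_estimates, ridge_estimates = [], []
for _ in range(2000):
    ys = [beta * x + random.gauss(0, sigma) for x in xs]
    Sxy = sum(x * y for x, y in zip(xs, ys))
    ols_estimates.append(Sxy / Sxx)            # OLS: unbiased
    ridge_estimates.append(Sxy / (Sxx + lam))  # ridge: biased but shrunk

v_ols = statistics.pvariance(ols_estimates)
v_ridge = statistics.pvariance(ridge_estimates)
# In 1-D the ridge estimate is exactly k * OLS with k = Sxx / (Sxx + lam),
# so Var(ridge) = k^2 * Var(OLS) < Var(OLS).
```

The same trade-off appears in the matrix case: the shrinkage buys a smaller variance at the price of a little bias.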
You can also prove this in general by using the Gershgorin circle theorem. There the centers of the circles containing the eigenvalues are the diagonal elements. You can always add "enough" to the diagonal element to make all the circles in the positive real half-plane. That result is more general and not needed for this. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/176599', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/79151/']} | jdg_380713 |
stackexchange | llm_judgeable_groundtruth_similarity | 16056462 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a for loop like

    int c = 0;
    for (int i = 0; i < 100; i++)
        c++;

where c is a global variable. If I run this code with 2 threads, what are the minimum and maximum final values of c if I don't lock the threads in the c++ section, and how?
Now provide the response and nothing else.
| If you translate the increment code to assembly, its pseudocode will be like:

    1- mov ax, mem[c]
    2- inc ax
    3- mov mem[c], ax

If we have 2 threads, consider this scenario:

    thread 1: line 1
    thread 2: lines (1-2-3) for 99 times
    thread 1: lines (2-3)
    thread 2: line 1
    thread 1: lines (1-2-3) for the remaining 99 times
    thread 2: lines (2-3) for the last time

Now the value of c is 2, so the minimum is 2. (The maximum is 200: with no destructive interleaving, each thread's 100 increments all take effect.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16056462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1550482/']} | jdg_380714 |
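The adversarial schedule in the answer above can be replayed deterministically. Here is an illustrative Python simulation in which each thread's load/increment/store steps are interleaved by hand exactly as described:

```python
c = 0  # the shared global

# Thread 1 starts its first iteration: load only.
t1 = c                      # t1 = 0

# Thread 2 runs 99 complete iterations.
for _ in range(99):
    t2 = c
    t2 += 1
    c = t2                  # c climbs to 99

# Thread 1 finishes its first iteration: increment + store.
t1 += 1
c = t1                      # c = 1, wiping out thread 2's work

# Thread 2 starts its last iteration: load only.
t2 = c                      # t2 = 1

# Thread 1 runs its remaining 99 iterations.
for _ in range(99):
    t1 = c
    t1 += 1
    c = t1                  # c climbs to 100

# Thread 2 finishes its last iteration: increment + store.
t2 += 1
c = t2                      # c = 2, the minimum possible final value
```

Because each iteration is three separate steps (load, increment, store), a stale loaded value can overwrite arbitrarily many increments made in between.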
stackexchange | llm_judgeable_groundtruth_similarity | 79 |
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to use the Terraform MySQL Provider to keep a list of mysql users and grants handy for creating new test environments. The .tf and .tfstate files both seem to want to store the MySQL passwords in plaintext. Concerning .tf: It is my understanding that .tf files live in revision control and are maintained by a team. How does that practice differ when secrets are in the .tf? Is it possible to encrypt these values at all? Concerning .tfstate: I can store the .tfstate securely somewhere after running Terraform apply, but for this use case, would it be possible to not store it at all?
Now provide the response and nothing else.
| Terraform supports adding an additional file with variables during invocation. Documentation: https://www.terraform.io/intro/getting-started/variables.html#from-a-file We are using that feature to provide a secrets.tfvars file on each invocation of Terraform. We also use a script to wrap the command so that its invocation is consistent, and all team members avoid having to make the same mistakes. The wrapper synchronizes .tfstate with S3 before an execution, and pushes .tfstate back to S3 at the end. I also hear of people doing the same thing with state stored in Consul, even adding a kind of semaphore in Consul to prevent two people from starting Terraform at the same time. When you avoid setting a default value in a variables.tf file, it forces the user to input the value. It can be either entered manually or using the -var-file command option like described above. Not setting a default on your secrets is a good way to enforce changes that require a change in secrets. The secrets.tfvars file is a symbolic link to one of the files with secrets which are not stored in version control. We have several, one per environment, like so: secrets-prod.tfvars, secrets-dev.tfvars, secrets-stg.tfvars, etc. An even better practice would be to generate these secrets files during the wrapper script based on data in Vault or some other way to share secrets. Currently, when the format of the secrets changes, or the secrets themselves, we need to communicate it to the team outside the version control channel - and this doesn't always work well, to be honest. But secrets do change infrequently. | {} | {'log_upvote_score': 6, 'links': ['https://devops.stackexchange.com/questions/79', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/36/']} | jdg_380715 |
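A minimal sketch of such a wrapper step in Python, for illustration only: the variable names and the way secrets are obtained (here, environment variables with fallback defaults) are assumptions rather than part of the answer above, and the terraform invocation is shown as a comment:

```python
import os

def render_tfvars(secrets):
    """Render a dict of secrets as a minimal tfvars body (string values only)."""
    return "".join('{} = "{}"\n'.format(k, v) for k, v in sorted(secrets.items()))

# Hypothetical secrets, e.g. pulled from the environment or from Vault.
secrets = {
    "mysql_user": os.environ.get("MYSQL_USER", "admin"),
    "mysql_password": os.environ.get("MYSQL_PASSWORD", "s3cret"),
}

body = render_tfvars(secrets)
# The wrapper would write `body` to secrets.tfvars (kept out of version
# control) and then run something like:
#   terraform apply -var-file=secrets.tfvars
```

Generating the file at invocation time avoids keeping per-environment secret files lying around and keeps the team in sync when the secret format changes.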
stackexchange | llm_judgeable_groundtruth_similarity | 12776021 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way to make a cell non-editable dynamically in a JTable? Whenever the user gives an input like false, I want to make the cell non-editable... I have seen the isCellEditable method in DefaultTableModel, but if I want to use that I have to create a new object each time. So I want to change a cell to non-editable dynamically. Can anyone please help me? Thanks.
Now provide the response and nothing else.
| public class MyDefaultTableModel extends DefaultTableModel {

        private boolean[][] editable_cells; // 2d array to represent rows and columns

        public MyDefaultTableModel(int rows, int cols) { // constructor
            super(rows, cols);
            this.editable_cells = new boolean[rows][cols];
        }

        @Override
        public boolean isCellEditable(int row, int column) { // custom isCellEditable function
            return this.editable_cells[row][column];
        }

        public void setCellEditable(int row, int col, boolean value) {
            this.editable_cells[row][col] = value; // set cell true/false
            this.fireTableCellUpdated(row, col); // notify the table of the change
        }
    }

In the other class:

    // ... stuff
    MyDefaultTableModel myModel = new MyDefaultTableModel(x, y);
    table.setModel(myModel);
    // ... stuff

You can then set the values dynamically by using the myModel variable you have stored and calling the setCellEditable() function on it... in theory. I have not tested this code but it should work. You may still have to fire some sort of event to trigger the table to notice the changes. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12776021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/708502/']} | jdg_380716 |
stackexchange | llm_judgeable_groundtruth_similarity | 26657932 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working with Xamarin.Forms and XAML, and I'm trying to create an application that stores a list of products. I put my list of products in a ListView. This works fine. Here is my XAML:

    <ListView x:Name="listSushi" ItemsSource="{x:Static local:myListSushi.All}" SelectedItem="{Binding SelectedItem, Mode=TwoWay}" RowHeight="{StaticResource rowHeight}" >
      <ListView.ItemTemplate>
        <DataTemplate>
          <ViewCell>
            <ViewCell.View>
              <StackLayout Padding="5, 5, 0, 5" Orientation="Horizontal" Spacing="15">
                <StackLayout>
                  <Image Source="{Binding ImageSource}" />
                </StackLayout>
                <StackLayout Padding="0, 0, 0, 0" VerticalOptions="Center" HorizontalOptions="FillAndExpand">
                  <Label Text="{Binding Name}" Font="Bold, Medium" />
                  <Label Text="{Binding Description}" Font="Small"/>
                </StackLayout>
                <StackLayout Orientation="Horizontal" Padding="0, 0, 10, 0">
                  <Button Text=" - " HorizontalOptions="EndAndExpand" VerticalOptions="FillAndExpand" Command="{Binding DeleteSushiCommand}" CommandParameter="{Binding Name}" />
                  <Label VerticalOptions="Center" Text="{Binding Number,StringFormat='{0}'}" TextColor="Black"/>
                  <Button Text=" + " HorizontalOptions="EndAndExpand" VerticalOptions="FillAndExpand" Command="{Binding AddSushiCommand}" CommandParameter="{Binding Name}" />
                </StackLayout>
              </StackLayout>
            </ViewCell.View>
          </ViewCell>
        </DataTemplate>
      </ListView.ItemTemplate>
    </ListView>

My only problem is that if I click on a cell of my ListView, the cell is highlighted and stays highlighted. I tried to disable that with this code in the .xaml.cs:

    listSushi.ItemSelected += (object sender, SelectedItemChangedEventArgs e) =>
    {
        // don't do anything if we just de-selected the row
        if (e.SelectedItem == null)
            return;
        // do something with e.SelectedItem
        ((ListView)sender).SelectedItem = null; // de-select the row
    };

But when I touch a cell, now my list scrolls automatically. It's very strange. Does anyone know if this is a bug, or know a fix, like a property where I can disable the highlight?
Now provide the response and nothing else.
| You might try using the ItemTapped event instead, i.e.:

    listSushi.ItemTapped += async (object sender, ItemTappedEventArgs e) =>
    {
        // don't do anything if we just de-selected the row.
        if (e.Item == null)
            return;
        // Optionally pause a bit to allow the preselect hint.
        await Task.Delay(500); // note: without await, Task.Delay would not actually pause
        // Deselect the item.
        if (sender is ListView lv)
            lv.SelectedItem = null;
        // Do something with the selection.
        ...
    };

I have tested this on a ListView (on an Android device) that has enough items to bring scrolling into the mix. I see no auto-scroll behavior, and your idea to set SelectedItem null to defeat the highlight works great. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26657932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2307737/']} | jdg_380717 |
stackexchange | llm_judgeable_groundtruth_similarity | 7071106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am sub-classing QCompleter to give it some special functionality. I want activated() to be fired when there is only one completion in the model with the given prefix, but that's not where I'm having a problem. I have created a virtual setCompletionPrefix() in my sub-class but the compiler doesn't seem to notice it. Instead the base QCompleter::setCompletionPrefix() is called when the user enters a prefix. Am I doing something wrong? Here is my class:

    #ifndef INSTANTCOMPLETER_H
    #define INSTANTCOMPLETER_H

    #include <QCompleter>

    namespace Reliant
    {
    class InstantCompleter : public QCompleter
    {
        Q_OBJECT
    public:
        explicit InstantCompleter(QObject* parent = 0);

    private:

    signals:

    public slots:
        virtual void setCompletionPrefix(const QString &prefix);
    };
    }
    #endif // INSTANTCOMPLETER_H

Definition:

    #include "instantcompleter.h"

    using Reliant::InstantCompleter;

    InstantCompleter::InstantCompleter(QObject* parent) :
        QCompleter(parent)
    {
    }

    void InstantCompleter::setCompletionPrefix(const QString &prefix)
    {
        int completionCount = this->completionCount();
        if(completionCount == 1 && setCurrentRow(0))
            emit activated(currentCompletion());
        else
            QCompleter::setCompletionPrefix(prefix);
    }
Now provide the response and nothing else.
| Suppose the EnterWriteLock() fails. For whatever reason. Then the one thing you shouldn't do is to Exit a lock you never Entered. It's a very basic pattern that also holds for example for streams, but not seen as often thanks to the using() {} statement.

    var s = File.Create(...);
    // (only) if the previous line succeeded,
    // we gain the responsibility to close s, no matter what
    try
    {
        // do some I/O
    }
    finally
    {
        s.Dispose();
    }

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7071106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/394317/']} | jdg_380718 |
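The same acquire-then-try/finally pattern, sketched in Python for illustration (threading.Lock standing in for the lock type above): the release sits in a finally block that is only reached once the acquire has succeeded, so we never release a lock we never acquired:

```python
import threading

lock = threading.Lock()

def with_lock(work):
    lock.acquire()       # only after this succeeds do we owe a release
    try:
        return work()
    finally:
        lock.release()   # matched to the acquire above, no matter what

result = with_lock(lambda: 21 * 2)   # normal path

failed = False
try:
    with_lock(lambda: 1 / 0)         # failure inside the guarded region
except ZeroDivisionError:
    failed = True
```

If the acquire itself had thrown, control would never enter the try block, so no spurious release happens; that is the whole point of placing the acquire outside the try.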
stackexchange | llm_judgeable_groundtruth_similarity | 492809 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In what situations is it more appropriate to use an HTML IMG tag to display an image, as opposed to a CSS background-image , and vice-versa? Factors may include accessibility, browser support, dynamic content, or any kind of technical limits or usability principles.
Now provide the response and nothing else.
| Proper uses of IMG

1. Use IMG if you intend to have people print your page and you want the image to be included by default. — JayTee
2. Use IMG (with alt text) when the image has an important semantic meaning, such as a warning icon. This ensures that the meaning of the image can be communicated in all user-agents, including screen readers.

Pragmatic uses of IMG

1. Use IMG plus alt attribute if the image is part of the content such as a logo or diagram or person (real person, not stock photo people). — sanchothefat
2. Use IMG if you rely on browser scaling to render an image in proportion to text size.
3. Use IMG for multiple overlay images in IE6.
4. Use IMG with a z-index in order to stretch a background image to fill its entire window. Note, this is no longer true with CSS3 background-size; see #6 below.
5. Using img instead of background-image can dramatically improve performance of animations over a background.

When to use CSS background-image

1. Use CSS background images if the image is not part of the content. — sanchothefat
2. Use CSS background images when doing image-replacement of text eg. paragraphs/headers. — sanchothefat
3. Use background-image if you intend to have people print your page and you do not want the image to be included by default. — JayTee
4. Use background-image if you need to improve download times, as with CSS sprites.
5. Use background-image if you need for only a portion of the image to be visible, as with CSS sprites.
6. Use background-image with background-size:cover in order to stretch a background image to fill its entire window.

| {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/492809', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/52963/']} | jdg_380719 |
stackexchange | llm_judgeable_groundtruth_similarity | 334186 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
PROBLEM. Let $\theta(t)$ and $\phi(t)$ be two real analytic non-constant functions $[0,2\pi]\rightarrow \mathbb{R}$. I am trying to prove the following claim: If the integral $$\int_0^{2\pi} e^{i\theta(t)} (\phi(t))^n dt=0$$ for all $n\in\mathbb{N}_0$, then the first derivative $\theta'$ and $\phi$ are periodic of common period $2\pi/l$ with $1\neq l\in\mathbb{N}$. Note that this is equivalent to $F(\lambda):=\int_0^{2\pi} e^{i(\theta(t)+\lambda\phi(t))} dt=0$ for all $\lambda \in \mathbb{R}$. In fact, $F(\lambda)$ is analytic in $\lambda$ and its being constantly equal to 0 is equivalent to the vanishing of all its derivatives $F^{(n)}(0)=\int_0^{2\pi} e^{i\theta(t)} (\phi(t))^n dt$. Geometrically this means that the curve obtained by integrating the (tangent) vector function $(\cos(\theta+\lambda\phi),\sin(\theta+\lambda\phi))$ over $[0,2\pi]$ is closed $\forall \lambda$. Just in case, a back-up less general claim for which I would like to see a clean solution is: If, in the hypothesis above, $\phi$ is a polynomial, then $\phi$ is constantly $0$. OBSERVATION. If $\theta'$ and $\phi$ are periodic of period $\frac{2\pi}{l}$ with $1\neq l \in \mathbb{N}$ and $\int_0^{\frac{2\pi}{l}} e^{i\theta}\neq 0$ then the converse implication is true. In fact, in this setting $\theta=c\cdot t+\theta_p(t)$ with $c=\frac{2\pi}{l}(\theta(\frac{2\pi}{l})-\theta(0))$ and $\theta_p$ periodic of period $\frac{2\pi}{l}$. Then $$\begin{align}\int_0^{2\pi} e^{i(\theta(t)+\lambda\phi(t))} dt &= \sum_{j=0}^{l-1} \int_{j \frac{2\pi}{l}}^{(j+1) \frac{2\pi}{l}} e^{i(c\cdot t+\theta_p(t)+\lambda\phi(t))} dt \\&= \sum_{j=0}^{l-1} e^{i\cdot j \cdot \frac{2\pi}{l}} \int_{0}^{\frac{2\pi}{l}} e^{i(c\cdot t+\theta_p(t)+\lambda\phi(t))} dt,\end{align}$$ where the last equality is obtained by repeatedly applying the substitution $t'=t-\frac{2\pi}{l}$. In the following picture, the curve associated to $\theta(t)=t + \cos( 12 t)$ is deformed in the direction $\cos(3 t)$; in this case $l=3$ and the curve is closed $\forall \lambda$.
Since we know $\sum_{j=0}^{l-1} e^{i\cdot j \cdot \frac{2\pi}{l}} \int_{0}^{\frac{2\pi}{l}} e^{i\theta(t)}dt=\int_0^{2\pi} e^{i\theta(t)} dt=0$ then also the integral above must be $0$ . In the following picture the curve associated to $\theta(t)=t + \cos( 12 t)$ deformed in the direction $\cos(3 t)$ . In this case $l=3$ and the curve is closed $\forall \lambda$ . $\theta(t)=t + \cos( 12 t)$ deformed in the direction $\cos( 3 t)$ . In this case $l=3$ and the curve is closed $\forall \lambda$ ." /> IDEA. If $\theta$ monotone one can substitute $s=\theta(t)$ in the integral and get $$\int_{\theta(0)}^{\theta(2\pi)} e^{i s} \frac{(\phi(\theta^{-1}(s)))^n}{\theta'(\theta^{-1}(s))} ds=0.$$ In this case the idea behind the hypotesis becomes apperent: $\phi(\theta^{-1}(s))$ is periodic of non-trivial period iff $\phi$ and $\theta'$ have the common period property. It seems here that looking at the Fourier expansion of our functions on $[\theta(0),\theta(2\pi)]$ could be a good idea: the condition we have means indeed that, $\forall n$ , the first harmonic of the function $\frac{(\phi(\theta^{-1}(s)))^n}{\theta'(\theta^{-1}(s))}$ is $0$ . Fourier coefficients of a product are obtained by convolutions and therefore the condition above becomes, $\forall n$ : $$\sum_{k_n=-\infty}^{+\infty} \sum_{k_{n-1}} ... \sum_{k_{2}}\sum_{k_{1}} \widehat{\frac{1}{\theta'}}(1-\sum_{i=1}^{n} k_i) \prod_{i=1}^{n} \widehat{\phi}(k_i)=0.$$ Is this approach viable? Can one from here exploit the fact that a function is periodic of non-trivial period iff there exists $k$ such that only harmonics multiple of $k$ are different from 0? Other way round, do non-zero harmonics of coprime orders imply a contradiction with our constraints? As for a toy example, if $\theta(t)=t$ , $\theta'(s)=1$ and $\phi(s)=\cos(2s)+\cos(3s)$ already $\widehat{f^2}(1)= 2 \widehat{f}(3)\widehat{f}(-2) \neq 0$ ; in the general setting interaction of coefficients is not straightforward. 
NOTE: This question originated from Orthogonality relation in $L^2$ implying periodicity . As suggested in the comments to the previous post, since the target of the question changed over time and edits were major, here I hope I gave a clearer and more consistent presentation of my problem. Thank you for your time.
Now provide the response and nothing else.
| I missed the real analyticity condition (my comment makes perfect sense for $C^\infty$ though), so let's move points in a fancy way to satisfy it. First, observe that if $a_0,a_1,a_2$ are positive reals close to $1$ , then there exist unique $\theta_1\approx \frac {2\pi} 3$ and $\theta_2\approx \frac {4\pi}3$ such that $a_0+a_1e^{i\theta_1}+a_2e^{i\theta_2}=0$ . Moreover, $\theta_{1,2}$ are real analytic functions of $a_{0,1,2}$ in some neighborhood of $1$ . This is just the implicit function theorem. Now choose your favorite $2\pi$ -periodic real analytic function $F(\tau)$ with uniformly small derivative that is not periodic with any smaller period (say, $\varepsilon\cos\tau$ ) and put $t(\tau)=\tau+F(\tau)$ . Then $\tau$ is uniquely determined by $t$ and the dependence is real analytic as well. Next define $\theta_{1,2}(\tau)$ by $\theta_j(\tau)\approx \frac {2\pi j}3$ such that $$t'(\tau)+t'(\tau+\tfrac{2\pi}3)e^{i\theta_1(\tau)}+t'(\tau+\tfrac{4\pi}3)e^{i\theta_2(\tau)}=0$$ Everything is real analytic so far. By uniqueness, we must have the relations $\theta_1(\tau+\frac{2\pi}{3})=\theta_2(\tau)-\theta_1(\tau)$ and $\theta_1(\tau+\frac{4\pi}{3})=2\pi -\theta_2(\tau)$ . Thus $\theta_1(\tau)+\theta_1(\tau+\frac{2\pi}{3})+\theta_1(\tau+\frac{4\pi}{3})=2\pi$ . This implies that there exists a real analytic $\Theta(\tau)$ such that $\Theta(\tau+\frac{2\pi}3)=\Theta(\tau)+\theta_1(\tau)$ (just divide the Fourier coefficients by appropriate numbers to get the periodic part and add $\tau$ ; note that the identity for $\theta_1$ implies that $\widehat\theta_1(3k)=0$ for $k\ne 0$ , so no division by $0$ will be encountered). 
Then, automatically, $\Theta(\tau+2\pi)=\Theta(\tau)+2\pi$ and $$\Theta(\tau+\frac{4\pi}3)=\Theta(\tau+\frac{2\pi}3)+\theta_1(\tau+\frac{2\pi}3)\\=\Theta(\tau)+\theta_1(\tau)+\theta_1(\tau+\frac{2\pi}3)=\Theta(\tau)+\theta_2(\tau)$$ Hence, we have the identity $$\sum_{j=0}^2 t'(\tau+\tfrac{2\pi j}3)e^{i\Theta(\tau+\frac{2\pi j}3)}=0$$ We can now pick up any $\frac{2\pi}3$ -periodic real analytic function $\Psi(\tau)$ , multiply the terms by the corresponding values of $\Psi^n$ (they are equal), integrate in $\tau$ from $0$ to $\frac{2\pi}3$ , and use the standard change of variable formula to get $$\int_0^{2\pi} e^{i\Theta(\tau(t))}\Psi(\tau(t))^n\,dt=0$$ but $\psi(t)=\Psi(\tau(t))$ is no longer $\frac{2\pi}3$ periodic in $t$ because the composition kills periodicity. As I said from the beginning, "there are many fancy ways to move six (well, even three) points around the circle and keep the sum balanced". | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/334186', 'https://mathoverflow.net', 'https://mathoverflow.net/users/132140/']} | jdg_380720 |
stackexchange | llm_judgeable_groundtruth_similarity | 15138483 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am learning how to use VBOs and, as the book says, "...you can free up CPU memory by moving vertex data to the OpenGL memory on the GPU." Well, just exactly what can a GPU handle in this regard? Is it acceptable to assume that the "OpenGL memory" can store the vertex data for millions of polygons? What about the GPU in a mobile device? While developers are used to having a frame of reference for memory restrictions on a CPU, learning OpenGL is partly challenging because I don't know much about GPUs and what to expect from their hardware. So when I read a vague statement like the above, it makes me nervous.
Now provide the response and nothing else.
| OpenGL has an abstract device and memory model. Technically, in the world of OpenGL there is no CPU and GPU memory, but client and server memory. OpenGL buffer objects live on the server side. "Server" simply means everything the OpenGL driver abstracts away. And the OpenGL driver is perfectly allowed to swap out data from the GPU to the CPU if the GPU memory, which acts like a cache, is not sufficient. Hence what your book states: "...you can free up CPU memory by moving vertex data to the OpenGL memory on the GPU." is not entirely correct, as the data in an OpenGL buffer object may very well reside in CPU memory. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15138483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3758484/']} | jdg_380721 |
stackexchange | llm_judgeable_groundtruth_similarity | 25875635 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have this code to respawn my player when he hits his enemy. Everything works fine, but when the player spawns, he's uncontrollable. For some reason the Player clone has the controller script unchecked in the inspector. Anyone have any idea as to why that happens & how to solve it?

    using UnityEngine;
    using System.Collections;

    public class RedEnemy : MonoBehaviour {

        GameObject spawnPoint;
        GameObject Player;

        // Use this for initialization
        void Start () {
            spawnPoint = GameObject.Find ("spawnPoint");
            Player = GameObject.Find ("Player");
        }

        // collider
        void OnTriggerEnter ( Collider other ){
            if (other.tag == "Player") {
                Destroy (other.gameObject);
                GameObject Clone;
                Clone = Instantiate(Player, spawnPoint.transform.position, Quaternion.identity) as GameObject;
            }
        }
    }
Now provide the response and nothing else.
| Oh man, this was causing me a headache for a few days and could not figure out how to do this. The worst part was that creating a new Xcode iOS project with the master-detail template worked just fine. Fortunately, in the end, that little fact was how I found the solution. There are some posts I've found that suggest that the solution is to implement the new primaryViewControllerForCollapsingSplitViewController: method on UISplitViewControllerDelegate . I tried that to no avail. What Apple does in the master-detail template that seems to work is implement the new (take a deep breath to say all of this one) splitViewController:collapseSecondaryViewController:ontoPrimaryViewController: delegate method (again on UISplitViewControllerDelegate ). According to the docs , this method: Asks the delegate to adjust the primary view controller and to incorporate the secondary view controller into the collapsed interface. Make sure to read up on the discussion part of that method for more specific details. The way that Apple handles this is: - (BOOL)splitViewController:(UISplitViewController *)splitViewControllercollapseSecondaryViewController:(UIViewController *)secondaryViewController ontoPrimaryViewController:(UIViewController *)primaryViewController { if ([secondaryViewController isKindOfClass:[UINavigationController class]] && [[(UINavigationController *)secondaryViewController topViewController] isKindOfClass:[DetailViewController class]] && ([(DetailViewController *)[(UINavigationController *)secondaryViewController topViewController] detailItem] == nil)) { // Return YES to indicate that we have handled the collapse by doing nothing; the secondary controller will be discarded. 
return YES; } else { return NO; }} This implementation basically does the following: If secondaryViewController is what we're expecting (a UINavigationController ), and it's showing what we're expecting (a DetailViewController -- your view controller), but has no model ( detailItem ), then " Return YES to indicate that we have handled the collapse by doing nothing; the secondary controller will be discarded. " Otherwise, return " NO to let the split view controller try and incorporate the secondary view controller’s content into the collapsed interface" The results are the following for the iPhone in portrait (either starting in portrait or rotating to portrait -- or more accurately compact size class): If your view is correct and has a model, show the detail view controller but has no model, show the master view controller If your view is not correct show the master view controller Clear as mud. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25875635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4020396/']} | jdg_380722 |
stackexchange | llm_judgeable_groundtruth_similarity | 7502073 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following method, which basically calls a request to load an array of NSData images on a background thread:

    [query findObjectsInBackgroundWithBlock:^(NSArray * objects, NSError * error){}];

In this case objects is an array of the NSData. The issue is that I may have 100 images to load (100 elements in the array), which means the user will have to wait for quite some time to see any image showing up in a UITableView. What I want to do is for them to see an image once it is available/loaded. Do I have to change the code so that it runs 100 background threads to load the images?
Now provide the response and nothing else.
| You could implement something like this in your cellForRowAtIndexPath:. That way you load each image in the background, and as soon as it's loaded the corresponding cell is updated on the main thread.

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^(void) {
        NSData *data0 = [NSData dataWithContentsOfURL:someURL];
        UIImage *image = [UIImage imageWithData:data0];
        dispatch_sync(dispatch_get_main_queue(), ^(void) {
            UIImageView* imageView = (UIImageView*)[cell viewWithTag:100];
            imageView.image = image;
        });
    });

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7502073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/957225/']} | jdg_380723 |
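The shape of that pattern (do the slow load on a background thread, hand each finished result to the main thread as it arrives) is not specific to GCD. Here is an illustrative Python sketch of the same idea, with a queue standing in for the main-queue dispatch:

```python
import queue
import threading

results = queue.Queue()

def load_image(url):
    # Stand-in for the slow network fetch; just fabricates some bytes.
    return b"bytes-for-" + url.encode()

def worker(url):
    data = load_image(url)    # runs on a background thread
    results.put((url, data))  # hand the finished result to the "main thread"

urls = ["row1.png", "row2.png", "row3.png"]
threads = [threading.Thread(target=worker, args=(u,)) for u in urls]
for t in threads:
    t.start()

loaded = {}
for _ in urls:                # the "main thread" updates one cell per result
    url, data = results.get()
    loaded[url] = data
```

Each image becomes visible as soon as its own load completes, instead of waiting for all 100 to finish; no fixed pool of 100 threads is required.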
stackexchange | llm_judgeable_groundtruth_similarity | 236542 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
'4800483343' is a directory, and 'file1' & 'file2' are two files in it. Why is the following happening?

    $ ls 4800483343
    file1  file2
    $ md5sum 4800483343/*
    36468e77d55ee160477dc9772a99be4b  4800483343/file1
    29b098f7d374d080eb006140fb01bbfe  4800483343/file2
    $ mv 4800483343 4800[48]3343
    $ md5sum 4800[48]3343/*
    md5sum: 4800[48]3343/*: No such file or directory
    $ md5sum '4800[48]3343'/*
    36468e77d55ee160477dc9772a99be4b  4800[48]3343/file1
    29b098f7d374d080eb006140fb01bbfe  4800[48]3343/file2

What other characters cause this?
Now provide the response and nothing else.
| Answer for the original question: why do square brackets appear to prevent shell expansion?

Square brackets do not prevent shell expansion, but quotes do. I suspect that the commands you actually ran were as follows. This runs md5sum on the files in dir/ :

    $ md5sum d[i]r/*
    02fdd7309cef4d392383569bffabf24c  dir/file1
    db69ce7c59b11f752c33d70813ab5df6  dir/file2

This moves dir to d[i]r, with the quotes preventing the expansion of the square brackets:

    $ mv dir 'd[i]r'

This looks for directory dir, which no longer exists:

    $ md5sum d[i]r/*
    d[i]r/*: No such file or directory

Because of the quotes, the following looks in the new directory named d[i]r :

    $ md5sum 'd[i]r'/*
    02fdd7309cef4d392383569bffabf24c  d[i]r/file1
    db69ce7c59b11f752c33d70813ab5df6  d[i]r/file2

Answer for the revised question

In the revised question, the directory 4800483343 exists and the following command was run:

    mv 4800483343 4800[48]3343

What happens when this command is run depends on whether the glob 4800[48]3343 matches any existing directory. If no directory matches, then 4800[48]3343 expands to itself, 4800[48]3343, and the directory 4800483343 is moved to the directory 4800[48]3343 . Consequently: The command md5sum 4800[48]3343/* will return the error "No such file or directory" because no directory exists which matches the glob 4800[48]3343 . The command md5sum '4800[48]3343'/* will correctly find the files because the quotes prevent expansion of the glob.

Examples of globs

Let's create two files:

    $ touch a1b a2b

Now, observe these globs:

    $ echo a[123]b
    a1b a2b
    $ echo a?b
    a1b a2b
    $ echo *b
    a1b a2b
| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/236542', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/85900/']} | jdg_380724
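The quoting behaviour above can be reproduced end to end in a scratch directory. This is a minimal sketch, assuming a POSIX-ish shell; the directory and file names are chosen to mirror the question.

```shell
# Reproduce the behaviour above in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp"
mkdir 4800483343
touch 4800483343/file1 4800483343/file2

# Quoted target: creates a directory literally named 4800[48]3343
mv 4800483343 '4800[48]3343'

# Unquoted: the glob 4800[48]3343 matches 4800483343 or 4800883343,
# neither of which exists now, so the pattern is passed through
# literally and ls fails with "No such file or directory".
ls 4800[48]3343/* 2>/dev/null || echo "unquoted glob: no match"

# Quoted directory name, glob applied only to the *: lists both files.
ls '4800[48]3343'/*
```

The same mechanics apply to md5sum or any other command: the shell expands (or fails to expand) the arguments before the command ever runs.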
stackexchange | llm_judgeable_groundtruth_similarity | 288774 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why do some GNU Coreutils commands have the -T/--no-target-directory option? It seems like everything that it does can be achieved using the semantics of the . (self dot) in a traditional Unix directory hierarchy. Considering:

    cp -rT /this/source dir

The -T option prevents the copy from creating a dir/source subdirectory. Rather, /this/source is identified with dir and the contents are mapped between the trees accordingly. So for instance /this/source/foo.c goes to dir/foo.c and so on, rather than to dir/source/foo.c . But this can be easily accomplished without the -T option using:

    cp -r /this/source/. dir  # Probably worked fine since dawn of Unix?

Semantically, the trailing dot component is copied as a child of dir, but of course that "child" already exists (so doesn't have to be created) and is actually dir itself, so the effect is that /this/source is identified with dir . It works fine if the current directory is the target:

    cp -r /this/tree/node/. .  # node's children go to current dir

Is there something you can do only with -T that can rationalize its existence? (Besides support for operating systems that don't implement the dot directory, a rationale not mentioned in the documentation.) Does the above dot trick not solve the same race conditions that are mentioned in the GNU Info documentation about -T ?
Now provide the response and nothing else.
| Your . trick can only be used when you're copying a directory, not a file. The -T option works with both directories and files. If you do:

    cp srcfile destfile

and there's already a directory named destfile, it will copy to destfile/srcfile , which may not be intended. So you use:

    cp -T srcfile destfile

and you correctly get the error:

    cp: cannot overwrite directory `destfile' with non-directory

If you tried using the . method, the copy would never work:

    cp: cannot stat `srcfile/.`: Not a directory
| {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/288774', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/16369/']} | jdg_380725
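The file-vs-directory case is easy to demonstrate in a throwaway directory. This sketch assumes GNU cp (BSD cp lacks -T); the file names are hypothetical.

```shell
# Sketch of the file-vs-directory case above.
tmp=$(mktemp -d)
cd "$tmp"
echo hello > srcfile
mkdir destfile

# Plain cp: silently copies *into* the existing directory.
cp srcfile destfile
ls destfile   # shows srcfile

# cp -T: refuses to treat the existing directory as a target directory.
cp -T srcfile destfile 2>&1 || echo "cp -T failed as expected"
```

The first copy succeeds but puts the file somewhere you may not have intended; the second fails loudly, which is exactly the safety -T buys you for single files.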
stackexchange | llm_judgeable_groundtruth_similarity | 8302657 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I'm working with an old data model, and I kind of have to work within what I've been handed. When I perform a database query, the model returns data as a List<Dictionary<string, object>> where for each dictionary, the key is the column name and the value is the column value. As you can imagine, working with this is a nightmare of foreach loops and type casting. I'm hoping to define some POCO viewmodels and then make something that uses LINQ/reflection, and an "assignment binding map", to go from hideous return value to my nice clean POCO. So I could define "maps" with the column names and lambdas to the properties on my POCO, similar to this...

    var Map; // type???
    Map.Add("Id", p => p.Id);
    Map.Add("Code", p => p.Code);
    Map.Add("Description", p => p.Description);
    Map.Add("Active", p => p.Active);

Then convert like this...

    List<Dictionary<string, object>> Results = MyModel.Query(...);
    List<ProductViewModel> POCOs = new List<ProductViewModel>();
    foreach (var Result in Results) // Foreach row
    {
        ProductViewModel POCO = new ProductViewModel();
        foreach (var i in Result) // Foreach column in this row
        {
            // This is where I need help.
            // i.Key is the string name of my column.
            // I can get the lambda for this property from my map using this column name.
            // For example, need to assign to POCO.Id using the lambda expression p => p.Id
            // Or, assign to POCO.Code using the lambda expression p => p.Code
        }
        POCOs.Add(POCO);
    }
    return POCOs;

Can this be done using some sort of reflection, and if so, how?
Now provide the response and nothing else.
| Here is an approach using expression trees . First, define the API of the map:

    public class PropertyMap<T> where T : new()
    {
        public void Add(string sourceName, Expression<Func<T, object>> getProperty);
        public T CreateObject(IDictionary<string, object> values);
    }

You would use it like this:

    var map = new PropertyMap<ProductViewModel>();
    map.Add("Id", p => p.Id);
    map.Add("Code", p => p.Code);
    map.Add("Description", p => p.Description);
    map.Add("Active", p => p.Active);
    var productViewModel = map.CreateObject(values);

To implement it, first you would declare a dictionary to associate names from the data source to properties:

    private readonly IDictionary<string, PropertyInfo> _properties = new Dictionary<string, PropertyInfo>();

Next, you would implement the Add method in terms of that dictionary (all error handling left as an exercise for the reader):

    public void Add(string sourceName, Expression<Func<T, object>> getProperty)
    {
        _properties[sourceName] = (PropertyInfo)((MemberExpression)getProperty.Body).Member;
    }

Then, you would dynamically compile a method, using expression trees, which does the assignments (it sounds scarier than it is). The easiest way to visualize this process is to look at an example of what we're building. What we want is some code which does this:

    new ProductViewModel
    {
        Id = ...,
        Code = ...,
        Description = ...,
        Active = ...
    }

But we can't know that at compile-time because of the dynamic mappings. So, we'll build a function which is that exact code, but compiled at runtime. Expression trees are just runtime data that represents the same code you could write at compile-time.

First, we need to get a set of bindings (assignments) for the properties:

    private IEnumerable<MemberBinding> GetPropertyBindings(IDictionary<string, object> values)
    {
        return from sourceName in _properties.Keys
               select Expression.Bind(_properties[sourceName], Expression.Constant(values[sourceName]));
    }

What we're saying here is: for each property in the mapped properties, look up the value and make it a constant (for Id, this might be the value 7) and bind the corresponding property to it. This gives us the expression Id = 7 . We repeat this for all of the properties, giving us all of the assignments. Once we have those bindings, we can create the full member initialization, which includes the constructor call:

    private MemberInitExpression GetMemberInit(IDictionary<string, object> values)
    {
        return Expression.MemberInit(Expression.New(typeof(T)), GetPropertyBindings(values));
    }

Because we specified where T : new() in the class declaration, we are guaranteed to have a parameterless constructor to call here. We pass in the property bindings we created before, giving us a data structure that represents the initialization expression we wanted to build. So what do we do now? We have this data structure, but how do we call the code? To do that, we have to wrap that expression in a function that we can call, because the only thing you can actually invoke is a method. This means we are really building code that looks like this:

    () => new ProductViewModel
    {
        Id = ...,
        Code = ...,
        Description = ...,
        Active = ...
    }

That is a parameterless function which, when invoked, will return the initialized object. This is also called a lambda expression.

We can get the data structure for this like so:

    private Func<T> GetInitializationFunction(IDictionary<string, object> values)
    {
        var initializationLambda = Expression.Lambda<Func<T>>(GetMemberInit(values));
        return initializationLambda.Compile();
    }

We create a lambda expression whose body is the member initialization, which is exactly the code we wrote above. We specify the delegate type Func<T> because it takes no parameters and returns an object of the mapped type. Then, we compile it. This call generates a method with the signature Func<T> that we can call, and which has as its body the code we created as a data structure. This is a neat way of doing reflection without using reflection directly. Finally, we implement the CreateObject method we defined earlier by creating the function and invoking it, giving us an instance of T ( ProductViewModel here):

    public T CreateObject(IDictionary<string, object> values)
    {
        var initializationFunction = GetInitializationFunction(values);
        return initializationFunction();
    }
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8302657', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/517764/']} | jdg_380726
stackexchange | llm_judgeable_groundtruth_similarity | 8374950 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why is the output in this example 1 ?

    public static void main(String[] args)
    {
        int[] a = { 1, 2, 3, 4 };
        int[] b = { 2, 3, 1, 0 };
        System.out.println( a[ (a = b)[3] ] );
    }

I thought it would be 2 . i.e., the expression is evaluated as:

    a[(a=b)[3]]
    a[b[3]]   // because a is now pointing to b
    a[0]

Shouldn't a[0] be 2 because a is pointing to b ? Thanks in advance.
Now provide the response and nothing else.
| The arguments to each operator are evaluated left-to-right. I.e., the a in front of the [...] is evaluated before its contents, at which point it still refers to the first array. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8374950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/273657/']} | jdg_380727 |
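A self-contained sketch (mirroring the question's arrays; the class and method names are mine) that makes the left-to-right evaluation order visible:

```java
public class EvalOrder {
    // Returns a[(a = b)[3]] to show which array the outer a refers to.
    static int compute() {
        int[] a = { 1, 2, 3, 4 };
        int[] b = { 2, 3, 1, 0 };
        // The a in front of [...] is evaluated BEFORE the index expression,
        // so it still refers to {1, 2, 3, 4}. The index is b[3] == 0,
        // hence the old a[0] == 1 is returned, not the new a[0] == 2.
        return a[(a = b)[3]];
    }

    public static void main(String[] args) {
        System.out.println(compute()); // prints 1, not 2
    }
}
```

If you swap the two steps mentally (index first, then the array reference), you get the 2 the asker expected; the Java Language Specification fixes the order the other way around.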
stackexchange | llm_judgeable_groundtruth_similarity | 53166 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading up on the relationship between latency and ISP interconnectivity (that higher ISP interconnectivity results in lower latency, which makes sense to me). My understanding is that IXPs provide the primary means of ISPs to connect with each other (taken from this article on edge servers by cloudflare ). But why not, as an ISP, connect directly to another ISP? Does this happen? And, in terms of terminology, would the connection then be referred to as an IXP?
Now provide the response and nothing else.
| Yes, this does happen quite a lot, and it is called private peering . It has some benefits over peering over an IXP: dedicated bandwidth , you can be sure you can use the full capacity of the interconnecting link for traffic to and from the other ISP no dependency on the IXP , an IXP connects two ISPs on their switch(es), you're not suffering from any outages of the IXP. Also, you're in direct contact with the other ISP when solving problems. possibly lower costs , if an ISP does a lot of traffic with one specific other ISP, it can be cost efficient not to pay an IXP to provide the connectivity, but instead just use a direct connection However, there can be downsides too: cost and availability router ports , routers often have a very limited number of ports, and port can be very costly (especially for high speed connections). By connecting to an IXP, you can reduced the number of private peering connections and thus lowering costs. localisation , not every ISP is present in every datacenter. IXPs often provide a peering LAN which stretches over multiple datacenters spanning a city (or sometimes a country or even a continent). Buying fiber paths to every other ISP can become very, very costly, especially if the distances are longer. operational costs , having many interconnections means more configurations, outages, links and ports to monitor, etc. Doing this for every single ISP can be very cost inefficient. connectivity between inequal peers , not every ISP wants to do private peering with all other peers, especially if there's a large difference in size. IXPs may enable them to peer with smaller peers, because the operational costs are much lower. Also, IXPs often offer route servers, which can function as an intermediate between ISPs, so the do not have to setup peering sessions with each other peer on the IXP. 
| {} | {'log_upvote_score': 6, 'links': ['https://networkengineering.stackexchange.com/questions/53166', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/36904/']} | jdg_380728 |
stackexchange | llm_judgeable_groundtruth_similarity | 1256246 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C, we cannot use & to find out the address of a register variable but in C++ we can do the same. Why is it legal in C++ but not in C? Can someone please explain this concept in-depth.
Now provide the response and nothing else.
| Here's an excerpt from Section 6.7.1 (footnote 101) of the C99 standard (pdf):

    The implementation may treat any register declaration simply as an auto declaration. However, whether or not addressable storage is actually used, the address of any part of an object declared with storage-class specifier register cannot be computed, either explicitly (by use of the unary & operator as discussed in 6.5.3.2) or implicitly (by converting an array name to a pointer as discussed in 6.3.2.1). Thus, the only operator that can be applied to an array declared with storage-class specifier register is sizeof.

And from Section 7.1.1, Paragraph 3 of the C++ standard (pdf):

    A register specifier has the same semantics as an auto specifier together with a hint to the implementation that the object so declared will be heavily used. [Note: the hint can be ignored and in most implementations it will be ignored if the address of the object is taken. —end note]

Fun tidbits about register

The C++ group (WG21) wants to deprecate register:

    The register keyword serves very little function, offering no more than a hint that a note says is typically ignored. It should be deprecated in this version of the standard, freeing the reserved name up for use in a future standard, much like auto has been re-used this time around for being similarly useless. Notes from the March, 2009 meeting: The consensus of the CWG was in favor of deprecating register.

Look what the C99 group (WG14) said about register (pdf) at a meeting:

    General agreement to deprecate the "auto" keyword. Should we ask WG21 to go back to the previous use of "register" (no address)? No, this will not fly with WG21.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1256246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122959/']} | jdg_380729
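To see the rule in practice, here is a minimal C sketch (the function name is mine). The forbidden operation is left as a comment, because including it would be a compile-time constraint violation:

```c
/* Sketch: a register-qualified local. The register keyword is only a
   hint; what C forbids is taking the object's address. */
int sum_with_register(void) {
    register int counter = 0;
    int i;
    for (i = 0; i < 10; i++)
        counter += i;
    /* &counter would be rejected by a conforming C compiler,
       but sizeof counter is still allowed. */
    return counter;
}
```

In C++, by contrast, `&counter` would compile, and the register hint would simply be ignored for that object.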
stackexchange | llm_judgeable_groundtruth_similarity | 1424738 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I detect when a JRadioButton is changed from 'unselected' to 'selected' when clicked with the mouse? I've tried using an ActionListener on the button, but that gets fired every time the radiobutton is clicked, not just when it's changing state to 'selected'. I've thought of maintaining a boolean variable that remembers the state of the button, and test it inside the ActionListener to see whether to change its state but I'm wondering if there's a much better or cleaner solution.
Now provide the response and nothing else.
| Look at JRadioButton.addItemListener() . EDIT: It is unlikely you want to use a ChangeListener, as it fires multiple times per click. An ItemListener fires only once per click. See here EDIT2: Just to expand on this, an ActionListener on a JRadioButton will fire every time a user clicks on it, even if it is already selected. If that's what you want, fine, but I find it annoying. I only want to be notified if it is selected or deselected. A ChangeListener will fire for all sorts of things, meaning your listener will receive 5 or more events per click. Not good. An ItemListener will fire only if the selected or deselected state changes. This means that a user can click on it multiple times and it will not fire if the state doesn't change. In your handler method you will have to have an if block checking for SELECTED or DESELECTED status and do whatever there:

    @Override
    public void itemStateChanged(ItemEvent e) {
        if (e.getStateChange() == ItemEvent.SELECTED) {
            // Your selected code here.
        } else if (e.getStateChange() == ItemEvent.DESELECTED) {
            // Your deselected code here.
        }
    }

It just works better because you know that if you are in the method then the radio button has either just been selected or deselected, not that the user is just banging on the interface for some unknown reason. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1424738', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23325/']} | jdg_380730
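The once-per-state-change behaviour can be shown without a GUI by driving a button model directly. This sketch (class name mine) uses javax.swing.DefaultButtonModel, the model type JRadioButton builds on; toggling it programmatically stands in for user clicks:

```java
import java.awt.event.ItemEvent;
import javax.swing.DefaultButtonModel;

public class ItemListenerDemo {
    // Counts ItemEvents while toggling a model the way clicks would.
    static int run() {
        final int[] events = {0};
        DefaultButtonModel model = new DefaultButtonModel();
        model.addItemListener(e -> {
            events[0]++;
            if (e.getStateChange() == ItemEvent.SELECTED) {
                System.out.println("selected");
            } else if (e.getStateChange() == ItemEvent.DESELECTED) {
                System.out.println("deselected");
            }
        });
        model.setSelected(true);   // fires SELECTED
        model.setSelected(true);   // already selected -> no event
        model.setSelected(false);  // fires DESELECTED
        return events[0];
    }

    public static void main(String[] args) {
        System.out.println("events fired: " + run());
    }
}
```

Note the middle call fires nothing: the model short-circuits when the new state equals the old one, which is exactly why an ItemListener is quieter than an ActionListener here.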
stackexchange | llm_judgeable_groundtruth_similarity | 40426843 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use websocket to connect to a TLS server using react-native. Below is my code (running on windows + android):

    var ws = new WebSocket('wss://hub.fingi-staging.com:20020', {
      rejectUnauthorized: false
    });
    ws.onopen = () => {
      // connection opened
      ws.send('something'); // send a message
    };
    ws.onmessage = (e) => {
      // a message was received
      console.log('message : ' + e.data);
    };
    ws.onerror = (e) => {
      // an error occurred
      console.log('error:' + e.message);
    };
    ws.onclose = (e) => {
      // connection closed
      console.log('close:' + e.code, e.reason);
    };

However, it fails with: error:java.security.cert.CertPathValidatorException: Trust anchor for certification path not found . This is because the server uses a self signed certificate. Is there any way to fix this?
Now provide the response and nothing else.
| In Excel, dates are stored as numbers. If you want to get a formatted date, you'll need to check whether the cell contains a date (there's a utility method for that), then get the date value of the cell, get the data format, and finally convert the date to a string using the format. You should not force the CellType to string, or else you will no longer be able to tell that the cell originally held a date. I would recommend making an extension method like this to get the formatted cell value based on its type:

    using NPOI.SS.UserModel;

    public static class NpoiExtensions
    {
        public static string GetFormattedCellValue(this ICell cell, IFormulaEvaluator eval = null)
        {
            if (cell != null)
            {
                switch (cell.CellType)
                {
                    case CellType.String:
                        return cell.StringCellValue;
                    case CellType.Numeric:
                        if (DateUtil.IsCellDateFormatted(cell))
                        {
                            DateTime date = cell.DateCellValue;
                            ICellStyle style = cell.CellStyle;
                            // Excel uses lowercase m for month whereas .Net uses uppercase
                            string format = style.GetDataFormatString().Replace('m', 'M');
                            return date.ToString(format);
                        }
                        else
                        {
                            return cell.NumericCellValue.ToString();
                        }
                    case CellType.Boolean:
                        return cell.BooleanCellValue ? "TRUE" : "FALSE";
                    case CellType.Formula:
                        if (eval != null)
                            return GetFormattedCellValue(eval.EvaluateInCell(cell));
                        else
                            return cell.CellFormula;
                    case CellType.Error:
                        return FormulaError.ForInt(cell.ErrorCellValue).String;
                }
            }
            // null or blank cell, or unknown cell type
            return string.Empty;
        }
    }

Then, use it like this:

    ICell cell = row.GetCell(i);
    string fieldString = cell.GetFormattedCellValue();
    result = result + ";" + fieldString;

Optional: If you have any formulas in your cells and you want those formulas to be evaluated, then create an IFormulaEvaluator based on your workbook type and pass the evaluator to the GetFormattedCellValue() method.

For example:

    IFormulaEvaluator eval;
    if (workbook is XSSFWorkbook)
        eval = new XSSFFormulaEvaluator(workbook);
    else
        eval = new HSSFFormulaEvaluator(workbook);
    ...
    ICell cell = row.GetCell(i);
    string fieldString = cell.GetFormattedCellValue(eval);
    result = result + ";" + fieldString;
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40426843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/533631/']} | jdg_380731
stackexchange | llm_judgeable_groundtruth_similarity | 21552654 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a <ui:composition> that contains a few elements with explicit ids and some ajax events which reference these ids for partial processing/updating. I encapsulated this fragment of xhtml inside the composition simply so I could use it in a few different places without having to duplicate the code. However, when I use the composition (with <ui:include> ) more than once inside a page, I get duplicate id exceptions. It seems JSF is not wrapping each composition inside its own naming container (like <ui:component> does). Is there a simple way to wrap my composition inside its own naming container?Or do I have to use a composite component every time I want to reuse xhtml fragments inside a common naming container?
Now provide the response and nothing else.
| Depending on the purpose of the <ui:include> template, you've several options:

1. Use <f:subview> . It creates another NamingContainer context (as <h:form> , <h:dataTable> , and friends all do):

    <f:subview id="top">
        <ui:include src="/WEB-INF/includes/some.xhtml" />
    </f:subview>
    ...
    <f:subview id="bottom">
        <ui:include src="/WEB-INF/includes/some.xhtml" />
    </f:subview>

The components defined in some.xhtml will end up getting respectively the top: and bottom: prefix in their ID.

2. Turn it into a tagfile which requires an id attribute.

    <my:some id="top" />
    ...
    <my:some id="bottom" />

And use that ID to prefix the ID of the components in the composition.

    <ui:composition xmlns="http://www.w3.org/1999/xhtml"
        xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:ui="http://java.sun.com/jsf/facelets">
        ...
        <h:someComponent id="#{id}_some" />
        <h:otherComponent id="#{id}_other" />
        ...
    </ui:composition>

3. Turn it into a composite component . Composite components are inherently already a NamingContainer , so their id attribute is optional. Basically, replace

    <ui:composition xmlns="http://www.w3.org/1999/xhtml"
        xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:ui="http://java.sun.com/jsf/facelets">
        ...
    </ui:composition>

by

    <ui:component xmlns="http://www.w3.org/1999/xhtml"
        xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:ui="http://java.sun.com/jsf/facelets"
        xmlns:cc="http://java.sun.com/jsf/composite">
        <cc:interface>
            <!-- This is optional. -->
        </cc:interface>
        <cc:implementation>
            ...
            <h:someComponent id="some" />
            <h:otherComponent id="other" />
            ...
        </cc:implementation>
    </ui:component>

This way you can use it as follows:

    <my:some id="top" />
    ...
    <my:some id="bottom" />

The components defined in <cc:implementation> will end up getting respectively the top: and bottom: prefix in their ID (note again, the composite component's id attribute is optional; JSF will otherwise autogenerate one).
See also: When to use <ui:include>, tag files, composite components and/or custom components? | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21552654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1918059/']} | jdg_380732 |
stackexchange | llm_judgeable_groundtruth_similarity | 241582 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I made a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw. Hence, it is probably better to address my current design rather than trying to simulate multiple inheritance. Before tackling the actual problem, some background info about a particular mechanic in this framework: It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something ala Manual-Reference-Counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain() , and decreased on release() . Thus the Resource abstract class was created. Any subclass of this will obtain the retain() and release() implementations for free. When its count hits 0 (nobody is using this class), it will call the destroy() method. The subclass needs only to implement destroy() . This is because I don't want to rely on the Garbage Collector to get rid of unused pixel data. Game objects are all subclasses of the Node class - which is the main construction block, as it provides info such as position, size, rotation, etc. See, two classes are used often in my game. Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource . But this, of course, can't be done. Sprites ARE nodes, hence they must subclass Node . But heck, they are resources too. Why not making Resource an interface? Because I'd have to re-implement retain() and release() . I am avoiding this in virtue of not writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? 
Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource . I'd still be writing the same code over and over! What is your advice in this situation, then?
Now provide the response and nothing else.
| You wrote "Sprites contain pixel data", which is a clear sign of a "has a" relationship, not an "is a". So bite the bullet and make Sprite -> Resource a composition. "Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource" — yes, you will, but only trivial delegation methods. You won't have to repeat any functional code. That's actually acceptable, and it is the standard way composition/delegation works. Here you can find a detailed example of how to replace inheritance with delegation. This approach is so common that there is even an Eclipse refactoring exactly for this purpose. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/241582', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/13833/']} | jdg_380733
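A minimal Java sketch of the suggested composition. The class names follow the question; the reference-counting internals are a hypothetical stand-in for the framework's actual Resource, and the destroy callback is simplified to a flag:

```java
// Reference-counted helper, used via composition instead of inheritance.
class Resource {
    private int count = 0;
    private final Runnable onDestroy;

    Resource(Runnable onDestroy) { this.onDestroy = onDestroy; }

    void retain() { count++; }

    void release() {
        if (--count <= 0) onDestroy.run();   // count hit 0: free the resource
    }
}

class Node { /* position, size, rotation, ... */ }

// Sprite IS a Node, but HAS a Resource; retain/release are trivial delegates.
class Sprite extends Node {
    private boolean destroyed = false;
    private final Resource resource = new Resource(this::destroy);

    void retain()  { resource.retain(); }
    void release() { resource.release(); }

    private void destroy() { destroyed = true; /* free pixel data here */ }

    boolean isDestroyed() { return destroyed; }
}
```

The two one-line delegates are the entire price of composition here; the counting logic itself lives in exactly one place and is shared by every class that holds a Resource.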
stackexchange | llm_judgeable_groundtruth_similarity | 33124699 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the picture below from the Github(c) Desktop App there is this button "Update from master" (in this case). I was wondering if anyone had insight into what exact git function it triggers. Primarily I'm interested in whether merge or rebase is used. (I could not find any sort of log console.)
Now provide the response and nothing else.
| The git commands underlying the buttons in GitHub Desktop are not well-documented, so I investigated a while back. I concluded that the "Update from..." button dispatched git merge --no-ff -m "Merge <auto_text> <branch_name>" <branch_name> or something nearly identical with the "Compare" branch set to <branch_name> in the GitHub Desktop GUI. I reached the conclusion in the following way: First, I forked a repository that I control to my GitHub account. Then, I cloned the repository from my GitHub account to my local machine. Next, I committed a small change to the (original) main remote repository. Finally, I used git fetch <remote_name_assigned_to_main_repo> ( <remote_name> , hereafter) to bring the single commit to my local machine. After this fetch , the "Update from..." button lit up. This set up a scenario in which the branch checked out, master in my local repository, was one commit behind master in the main remote repository. By default, git merge <remote_name> would have produced a fast-forward merge (without a merge commit). Using the "Update from..." button, however, resulted in the following reflog entry: HEAD@{0}: merge <remote_name>/master: Merge made by the 'recursive' strategy. And a merge commit in the log : Merge remote-tracking branch '<remote_name>/master' (The 'recursive' strategy "...is the default merge strategy when pulling or merging one branch." per the manual.) I also set up a scenario in which git rebase might have been an option, but saw the same merge behavior. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33124699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2321201/']} | jdg_380734 |
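The observed `--no-ff` behaviour is easy to reproduce in a throwaway repository. This sketch assumes git is installed (it exits early if not); the repo and branch names are arbitrary:

```shell
# Reproduce the observed behaviour in a throwaway repo.
command -v git >/dev/null || exit 0   # skip gracefully if git is absent
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name dev
echo a > f
git add f
git commit -qm "base"
base=$(git symbolic-ref --short HEAD)  # master or main, depending on config

git checkout -qb feature
echo b >> f
git commit -qam "feature work"
git checkout -q "$base"

# A fast-forward is possible here, but --no-ff forces a merge commit,
# mirroring what the "Update from..." button was observed to do.
git merge --no-ff -m "Merge branch feature" feature
git log --merges --oneline   # shows exactly one merge commit
```

Dropping `--no-ff` from the merge line makes the merge commit disappear, which is how the fast-forward scenario in the investigation above distinguished the two behaviours.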
stackexchange | llm_judgeable_groundtruth_similarity | 29719289 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with an AngularJS powered page, and I need to display a running clock inside a read-only input text field (two way bound with data-ng-model). To simulate a running clock, I am using a JavaScript scheduler with setTimeout to call a function every 1000 milliseconds, which updates the $scope'd property value which in turn is bound to that input text field. Somehow the value in the input field is not getting updated. So I placed a <pre /> tag and updated its content using a jQuery selector. That is working fine, so I need help getting the input text field value to also get updated every second. I have set up a jsFiddle for this example. The HTML is below:

    <body data-ng-app="formApp">
        <div data-ng-controller="FormCtrl">
            Current Date and Time
            <input type="text" data-ng-model="formData.currentDateTime" readonly="readonly" size="60" />
        </div>
        <pre id="currentDateTime" style="font-size:1.5em;"> </pre>
    </body>

The AngularJS app module and controller are declared as follows:

    (function() {
        var formApp = angular.module("formApp", []);
        formApp.controller("FormCtrl", function ($scope) {
            $scope.formData = {};
            $scope.formData.currentDateTime = new Date().toString();
            (function updateCDT() {
                $scope.formData.currentDateTime = new Date().toString();
                document.getElementById("currentDateTime").innerHTML = $scope.formData.currentDateTime;
                setTimeout(updateCDT, 1000);
            })();
        });
    })();
Now provide the response and nothing else.
| You need to use $scope.$apply() or Angular's $timeout to reflect the changes, since setTimeout callbacks run outside AngularJS's digest cycle. Using $scope.$apply() : call $scope.$apply() inside the anonymous function passed to setTimeout(function(){}, 1000) and then call the actual function, like below: (function updateCDT() { $scope.formData.currentDateTime = new Date().toString(); document.getElementById("currentDateTime").innerHTML = $scope.formData.currentDateTime; setTimeout(function(){ $scope.$apply(); updateCDT(); }, 1000); })(); fiddle for $scope.$apply() Using $timeout (don't forget to inject it into the controller): (function updateCDT() { $scope.formData.currentDateTime = new Date().toString(); document.getElementById("currentDateTime").innerHTML = $scope.formData.currentDateTime; $timeout(updateCDT, 1000); })(); fiddle for $timeout | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29719289', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/371392/']} | jdg_380735 |
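The underlying issue in the answer above is framework-agnostic: state mutated inside a plain setTimeout callback is invisible to a framework until the framework is notified. A minimal stand-in sketch of that idea (the makeScope/$apply names here are illustrative only — this is not Angular's real digest implementation):

```javascript
// Minimal stand-in for the pattern above: mutations made outside $apply()
// are not rendered until the scope is told to sync. Names are illustrative
// only -- this is not AngularJS's actual digest implementation.
function makeScope(render) {
  return {
    data: {},
    $apply(fn) {
      if (fn) fn();        // run the mutation, if one was passed in
      render(this.data);   // then re-render the view from current state
    },
  };
}

let view = '';
const scope = makeScope((data) => { view = data.time || ''; });

scope.data.time = '12:00';     // mutation outside $apply()...
const viewBeforeApply = view;  // ...so the view still shows ''

scope.$apply();                // notifying the scope syncs the view
const viewAfterApply = view;   // now '12:00'
```

The same split explains the two fixes in the answer: calling $scope.$apply() manually is the explicit notification, while $timeout wraps the notification for you.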
stackexchange | llm_judgeable_groundtruth_similarity | 2347642 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to change my code from: string path = @"c:\Directory\test.xml";XmlSerializer s = new XmlSerializer(typeof(Car));TextReader r = new StreamReader(path);Car car = (Car)s.Deserialize(r);r.Close(); into code that converts the XML to a string first, and then deserializes that string into a Car object. Is this possible?
Now provide the response and nothing else.
| public static string XmlSerializeToString(this object objectInstance){ var serializer = new XmlSerializer(objectInstance.GetType()); var sb = new StringBuilder(); using (TextWriter writer = new StringWriter(sb)) { serializer.Serialize(writer, objectInstance); } return sb.ToString();}public static T XmlDeserializeFromString<T>(this string objectData){ return (T)XmlDeserializeFromString(objectData, typeof(T));}public static object XmlDeserializeFromString(this string objectData, Type type){ var serializer = new XmlSerializer(type); object result; using (TextReader reader = new StringReader(objectData)) { result = serializer.Deserialize(reader); } return result;} To use it: /* Make XML */ var settings = new ObjectCustomerSettings();var xmlString = settings.XmlSerializeToString();/* Make Object */ var settings = xmlString.XmlDeserializeFromString<ObjectCustomerSettings>(); | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2347642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/278618/']} | jdg_380736 |
stackexchange | llm_judgeable_groundtruth_similarity | 1911536 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm not very good with Ant, but we're using it as a build tool. Right now, we can run "ant test" and it'll run through all the unit tests. However, I'd love to be able to do something like ant test some_module and have it accept some_module as a parameter, and only test that. I haven't been able to find how to pass command line args to Ant - any ideas?
Now provide the response and nothing else.
| One solution might be as follows. (I have a project that does this.) Have a separate target similar to test with a fileset that restricts the test to one class only. Then pass the name of that class using -D at the ant command line: ant -Dtest.module=MyClassUnderTest single_test In the build.xml (highly reduced): <target name="single_test" depends="compile" description="Run one unit test"> <junit> <batchtest> <fileset dir="${test.dir}" includes="**/${test.module}.class" /> </batchtest> </junit></target> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1911536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232560/']} | jdg_380737 |
stackexchange | llm_judgeable_groundtruth_similarity | 202321 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When reading articles on ISP, there seem to be two contradicting definitions of ISP: According to the first definition ( see 1 , 2 , 3 ), ISP states that classes implementing the interface shouldn't be forced to implement functionality which they don't need. Thus, the fat interface IFat interface IFat{ void A(); void B(); void C(); void D();}class MyClass: IFat{ ... } should be split into smaller interfaces ISmall_1 and ISmall_2 interface ISmall_1{ void A(); void B();}interface ISmall_2{ void C(); void D();}class MyClass:ISmall_2{ ... } since this way MyClass is able to implement only the methods it needs ( D() and C() ), without being forced to also provide dummy implementations for A() and B() : But according to the second definition ( see 1 , 2 , answer by Nazar Merza ), ISP states that MyClient calling methods on MyService shouldn't be aware of methods on MyService that it doesn't need. In other words, if MyClient only needs the functionality of C() and D() , then instead of class MyService { public void A(); public void B(); public void C(); public void D();}/*client code*/ MyService service = ...;service.C(); service.D(); we should segregate MyService's methods into client-specific interfaces: public interface ISmall_1{ void A(); void B();}public interface ISmall_2{ void C(); void D();}class MyService:ISmall_1, ISmall_2 { ... }/*client code*/ISmall_2 service = ...;service.C(); service.D(); Thus with the former definition, the goal of ISP is to " make the life of classes implementing IFat interface easier ", while with the latter the goal of ISP is to " make the life of clients calling methods of MyService easier ". Which of the two different definitions of ISP is actually correct? @MARJAN VENEMA 1. So when you are going to split IFat into smaller interfaces, which methods end up in which ISmall interface should be decided based on how cohesive the members are. 
While it makes sense to put cohesive methods within the same interface, I thought that with the ISP pattern the needs of the client take precedence over the "cohesiveness" of an interface. In other words, I thought that with ISP we should lump within the same interface those methods needed by particular clients, even if that means leaving out of that interface methods that should, for the sake of cohesiveness, also be put inside that same interface? Thus, if there were lots of clients that would only ever need to call CutGreens , but not also GrillMeat , then to adhere to the ISP pattern we should only put CutGreens inside ICook , but not also GrillMeat , even though the two methods are highly cohesive?! 2. I think that your confusion stems from a hidden assumption in the first definition: that the implementing classes are already following the single responsibility principle.
Now provide the response and nothing else.
| You are confusing the word "client" as used in the SOLID literature with a "client" as in consumer of a service. A "client", as intended by those definitions, is a class that implements an interface. If class A implements interface B, then they say A is a client of B. Otherwise the phrase "clients should not be forced to implement interfaces they don't use" wouldn't make sense, since "clients" ( as in consumers ) don't implement anything. The phrase only makes sense when you see "client" as "implementor". If "client" meant a class that "consumes" (calls) the methods of another class implementing the big interface, then calling the two methods you care about and ignoring the rest would be enough to keep you decoupled from the methods you don't use. The spirit of the principle is to avoid the "client" ( the class implementing the interface ) having to implement dummy methods in order to comply with the whole interface when it only cares about a set of related methods. It also aims at having as little coupling as possible, so that changes made in one place cause the least impact. By segregating the interfaces you reduce the coupling. The problem appears when the interface does too much and has methods that should be divided into several interfaces instead of just one. Both of your code examples are OK. It's only that in the second one you assume "client" means "a class that consumes/calls the services/methods offered by another class". I find no contradictions in the concepts explained in the three links you gave. Just keep clear that "client" means implementor, in SOLID talk. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/202321', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/89942/']} | jdg_380738 |
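The segregation idea in the question above can also be sketched in plain JavaScript. JS has no interface keyword, so narrow "role" objects stand in for ISmall_1/ISmall_2; the asRole helper below is hypothetical, not part of any framework:

```javascript
// Sketch of interface segregation in plain JavaScript: the service
// implements everything, but each consumer is handed only the narrow
// "role" it needs. asRole() is a hypothetical illustrative helper.
class MyService {
  a() { return 'A'; }
  b() { return 'B'; }
  c() { return 'C'; }
  d() { return 'D'; }
}

// Build an object exposing only the listed methods -- the moral
// equivalent of the small, client-specific interface ISmall_2.
function asRole(service, methodNames) {
  const role = {};
  for (const name of methodNames) {
    role[name] = (...args) => service[name](...args);
  }
  return role;
}

const small2 = asRole(new MyService(), ['c', 'd']); // plays ISmall_2
const result = small2.c() + small2.d();             // 'CD'
const leaks = 'a' in small2;                        // false: A/B not visible
```

The consumer holding small2 cannot even see a() or b(), so a change to those methods cannot ripple into this client — the coupling-reduction the answer describes.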
stackexchange | llm_judgeable_groundtruth_similarity | 6259515 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As the title says, I've got a string and I want to split into segments of n characters. For example: var str = 'abcdefghijkl'; after some magic with n=3 , it will become var arr = ['abc','def','ghi','jkl']; Is there a way to do this?
Now provide the response and nothing else.
| var str = 'abcdefghijkl';console.log(str.match(/.{1,3}/g)); Note: Use {1,3} instead of just {3} to include the remainder for string lengths that aren't a multiple of 3, e.g: console.log("abcd".match(/.{1,3}/g)); // ["abc", "d"] A couple more subtleties: If your string may contain newlines ( which you want to count as a character rather than splitting the string ), then the . won't capture those. Use /[\s\S]{1,3}/ instead. (Thanks @Mike). If your string is empty, then match() will return null when you may be expecting an empty array. Protect against this by appending || [] . So you may end up with: var str = 'abcdef \t\r\nghijkl';var parts = str.match(/[\s\S]{1,3}/g) || [];console.log(parts);console.log(''.match(/[\s\S]{1,3}/g) || []); | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/6259515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/385273/']} | jdg_380739 |
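Wrapping the regex from the answer above in a reusable helper, with both noted edge cases handled — [\s\S] also matches newlines, and || [] covers the empty-string case where match() returns null. The chunkString name is just illustrative:

```javascript
// Split a string into chunks of at most n characters.
// [\s\S] matches any character including newlines; `|| []` handles the
// empty-string case, where String.prototype.match() returns null.
function chunkString(str, n) {
  return str.match(new RegExp(`[\\s\\S]{1,${n}}`, 'g')) || [];
}

const parts = chunkString('abcdefghijkl', 3); // ['abc', 'def', 'ghi', 'jkl']
const rest  = chunkString('abcd', 3);         // ['abc', 'd'] - remainder kept
const none  = chunkString('', 3);             // [] rather than null
```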
stackexchange | llm_judgeable_groundtruth_similarity | 48218950 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've found that for some graphs I get double values from Prometheus where there should be just ones: The query I use: increase(signups_count[4m]) The scrape interval is set to the recommended maximum of 2 minutes. If I query the actual data stored: curl -gs 'localhost:9090/api/v1/query?query=(signups_count[1h])'"values":[ [1515721365.194, "579"], [1515721485.194, "579"], [1515721605.194, "580"], [1515721725.194, "580"], [1515721845.194, "580"], [1515721965.194, "580"], [1515722085.194, "580"], [1515722205.194, "581"], [1515722325.194, "581"], [1515722445.194, "581"], [1515722565.194, "581"]], I see that there were just two increases. And indeed, if I query for these times I see the expected result: curl -gs 'localhost:9090/api/v1/query_range?step=4m&query=increase(signups_count[4m])&start=1515721965.194&end=1515722565.194'"values": [ [1515721965.194, "0"], [1515722205.194, "1"], [1515722445.194, "0"]], But Grafana (and Prometheus in the GUI) tends to set a different step in queries, which produces a very unexpected result for a person unfamiliar with the internal workings of Prometheus. curl -gs 'localhost:9090/api/v1/query_range?step=15&query=increase(signups_count[4m])&start=1515721965.194&end=1515722565.194'... skip ... [1515722190.194, "0"], [1515722205.194, "1"], [1515722220.194, "2"], [1515722235.194, "2"],... skip ... Knowing that increase() is just syntactic sugar for a specific use-case of the rate() function , I guess this is how it is supposed to work given the circumstances. How do I avoid such situations? How do I make Prometheus/Grafana show me ones for ones, and twos for twos, most of the time? Other than by increasing the scrape interval (that will be my last resort). I understand that Prometheus isn't an exact sort of tool, so it is fine with me if the numbers are good most of the time, rather than all of the time. What else am I missing here?
Now provide the response and nothing else.
| This is known as aliasing and is a fundamental problem in signal processing. You can improve this a bit by increasing your sample rate; a 4m range is also a bit short with a 2m scrape interval — try a 10m range. Here, for example, the query executed at 1515722220 only sees the 580@1515722085.194 and 581@1515722205.194 samples. That's an increase of 1 over 2 minutes, which extrapolated over 4 minutes is an increase of 2 - which is as expected. Any metrics-based monitoring system will have similar artifacts; if you want 100% accuracy you need logs. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48218950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93540/']} | jdg_380740 |
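The extrapolation arithmetic behind the surprising "2" can be sketched in a few lines. This is a simplified model: real Prometheus also handles counter resets and dampens extrapolation near the range boundaries, so treat increaseEstimate as illustrative only:

```javascript
// Simplified sketch of how increase() estimates growth: the delta between
// the first and last samples inside the range is scaled up from the interval
// the samples actually cover to the full requested range. Real Prometheus
// additionally handles counter resets and limits boundary extrapolation.
function increaseEstimate(samples, rangeSeconds) {
  const first = samples[0];
  const last = samples[samples.length - 1];
  const delta = last.value - first.value;   // raw counter growth observed
  const covered = last.t - first.t;         // seconds the samples span
  return delta * (rangeSeconds / covered);  // scale up to the full range
}

// The situation from the question: a query with a 4m (240s) range that sees
// only two samples, 580 at t=1515722085.194 and 581 at t=1515722205.194.
const estimate = increaseEstimate(
  [{ t: 1515722085.194, value: 580 }, { t: 1515722205.194, value: 581 }],
  240
);
// delta of 1 over ~120s, extrapolated to 240s, gives an estimate of ~2
```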
Subsets and Splits