How to create a ButtonBar in JavaFX?

A ButtonBar is essentially an HBox on which you can arrange buttons. Typically, the placement of the buttons on a ButtonBar is operating-system specific. You can create a button bar by instantiating the javafx.scene.control.ButtonBar class.
The following example demonstrates the creation of a ButtonBar.
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.ButtonBar;
import javafx.scene.control.ButtonBar.ButtonData;
import javafx.scene.control.ToggleButton;
import javafx.scene.control.ToggleGroup;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;
public class ButtonBarExample extends Application {
   @Override
   public void start(Stage stage) {
      //Creating toggle buttons
      ToggleButton button1 = new ToggleButton("Java");
      button1.setPrefSize(60, 40);
      ToggleButton button2 = new ToggleButton("Python");
      button2.setPrefSize(60, 40);
      ToggleButton button3 = new ToggleButton("C++");
      button3.setPrefSize(60, 40);
      //Adding the buttons to a toggle group
      ToggleGroup group = new ToggleGroup();
      button1.setToggleGroup(group);
      button2.setToggleGroup(group);
      button3.setToggleGroup(group);
      //Creating a ButtonBar
      ButtonBar buttonBar = new ButtonBar();
      //Setting the button data and adding the buttons to the button bar
      ButtonBar.setButtonData(button1, ButtonData.APPLY);
      ButtonBar.setButtonData(button2, ButtonData.APPLY);
      ButtonBar.setButtonData(button3, ButtonData.APPLY);
      buttonBar.getButtons().addAll(button1, button2, button3);
      //Adding the button bar to the pane
      HBox box = new HBox(5);
      box.setPadding(new Insets(50, 50, 50, 150));
      box.getChildren().addAll(buttonBar);
      box.setStyle("-fx-background-color: BEIGE");
      //Setting the stage
      Scene scene = new Scene(box, 595, 150);
      stage.setTitle("Button Bar");
      stage.setScene(scene);
      stage.show();
   }
   public static void main(String[] args) {
      launch(args);
   }
}
JavaScript Object Properties

Properties in JavaScript are the values associated with an object. Following is the code implementing object properties in JavaScript −
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
body {
font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
}
.sample {
font-size: 18px;
font-weight: 500;
color: red;
}
</style>
</head>
<body>
<h1>JavaScript Object Properties</h1>
<div class="sample"></div>
<button class="Btn">CLICK HERE</button>
<h3>
Click on the above button to display name and age property from testObj
object
</h3>
<script>
   let sampleEle = document.querySelector(".sample");
   document.querySelector(".Btn").addEventListener("click", () => {
      let testObj = { name: "Rohan", age: 23 };
      sampleEle.innerHTML += "testObj.name = " + testObj.name + "<br>";
      sampleEle.innerHTML += "testObj.age = " + testObj.age + "<br>";
   });
</script>
</body>
</html>
On clicking the ‘CLICK HERE’ button −
rmmod command in Linux with Examples - GeeksforGeeks | 24 May, 2019
The rmmod command in Linux is used to remove a module from the kernel. Most users still use modprobe with the -r option instead of rmmod.
Syntax:
rmmod [-f] [-s] [-v] [modulename]
Example:
rmmod bluetooth
Options:
rmmod --help: Prints the general syntax of rmmod along with the various options that can be used with it, and gives a brief description of each option.
rmmod -v: Prints messages about what the program is doing. Usually rmmod prints messages only if something went wrong.
Example:
rmmod -v bluetooth
rmmod -f: This option can be extremely dangerous. It has no effect unless CONFIG_MODULE_FORCE_UNLOAD was set when the kernel was compiled. With this option, you can remove modules which are in use, which were not designed to be removed, or which have been marked as unsafe.
Example:
sudo rmmod -f bluetooth
rmmod -s: Sends errors to syslog instead of standard error.
Example:
rmmod -s bluetooth
rmmod -V: Shows the version of the program and then exits.
Example:
rmmod -V
Analyzing time series data in Pandas | by Ehi Aigiomawu | Towards Data Science

In my previous tutorials, we have considered data preparation and visualization tools such as Numpy, Pandas, Matplotlib and Seaborn. In this tutorial, we are going to learn about Time Series, why it is important, situations where we will need to apply Time Series, and more specifically, how to analyze Time Series data using Pandas.
Time Series is a set of data points or observations taken at specified times, usually at equal intervals (e.g. hourly, daily, weekly, quarterly, yearly, etc). Time Series is usually used to predict future occurrences based on previously observed values. Predicting what will happen in the stock market tomorrow, the volume of goods that will be sold in the coming week, whether or not the price of an item will skyrocket in December, the number of Uber rides over a period of time, etc., are some of the things we can do with Time Series Analysis.
Time series helps us understand past trends so we can forecast and plan for the future. For example, if you own a coffee shop, you would likely track how much coffee you sell every day or month, and when you want to see how your shop has performed over the past six months, you would probably add up all six months of sales. Now, what if you want to forecast sales for the next six months or year? In this kind of scenario, the only variable known to you is time (whether in seconds, minutes, days, months or years), hence you need Time Series Analysis to predict the other unknown variables such as trends and seasonality.
Hence, it is important to note that in Time Series Analysis, the only known variable is — Time.
Pandas has proven very successful as a tool for working with Time Series data. This is because Pandas has some in-built datetime functions which make it easy to work with Time Series Analysis, and since time is the most important variable we work with here, Pandas is a very suitable tool to perform such analysis.
Generally, including those outside of the financial world, Time Series often contain the following features:
Trends: This refers to the movement of a series to relatively higher or lower values over a long period of time. For example, when the Time Series Analysis shows a pattern that is upward, we call it an uptrend; when the pattern is downward, we call it a downtrend; and if there is no trend at all, we call it a horizontal or stationary trend. One key thing to note is that a trend usually happens for some time and then disappears.
Seasonality: This refers to a repeating pattern within a fixed time period. Although these patterns can also swing upward or downward, this is quite different from a trend, because a trend happens for a period of time and then disappears, whereas seasonality keeps happening within a fixed time period. For example, when it’s Christmas you discover that more candies and chocolates are sold, and this keeps happening every year.
Irregularity: This is also called noise. Irregularity happens for a short duration and is non-repeating. A very good example is the case of Ebola: during that period there was a massive demand for hand sanitizers, which happened erratically, in a way no one could have predicted; hence one could not tell how many sales would be made, or when the next outbreak would occur.
Cyclic: This is when a series repeats upward and downward movements. It usually does not have a fixed pattern: it could happen in 6 months, then two years later, then 4 years, then 1 year later. These kinds of patterns are much harder to predict.
Remember how we stated that the main variable here is Time? In the same way, it is important to mention that we cannot apply Time Series analysis to a dataset when:
1. The variables/values are constant. For example, 5000 boxes of candies were sold last Christmas, and the Christmas before that. Since both values are the same, we cannot apply time series to predict sales for this year’s Christmas.
2. The values are in the form of functions: there’s no point applying Time Series Analysis to a dataset when you can calculate the values by simply using a formula or function.
Now that we have basic understanding of what Time Series is, let’s go ahead and work on an example to fully grasp how we can analyze a Time Series Data.
In this example, we are asked to build a model to forecast the demand for flight tickets of a particular airline. We will be using the International Airline Passengers dataset, which can also be downloaded from Kaggle.
Importing Packages and Data
To begin, the first thing we need to do is import the packages we will use to perform our analysis: in this case, we’ll make use of pandas to prepare our data and access the datetime functions, and matplotlib to create our visualizations:
Now, let’s read our dataset to see what kind of data we have. As we see, the dataset has been classified into two columns; Month and Passengers traveling per month.
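The reading step can be sketched as follows. The column names Month and #Passengers come from the dataset description; a small in-memory sample stands in for the CSV file here so the snippet is self-contained, but with the real file you would simply call pd.read_csv("AirPassengers.csv") (the exact file name may differ in your download):

```python
import io
import pandas as pd

# A few rows shaped like the International Airline Passengers CSV; with the
# real file you would call pd.read_csv("AirPassengers.csv") instead.
csv_data = io.StringIO(
    "Month,#Passengers\n"
    "1949-01,112\n"
    "1949-02,118\n"
    "1949-03,132\n"
)
data_set = pd.read_csv(csv_data)
print(data_set.head())
print(data_set.columns.tolist())  # ['Month', '#Passengers']
```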
I usually like getting a summary of the dataset in case there’s a row with an empty value. Let’s go ahead and check by doing this:
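A quick check for empty values might look like the sketch below (a small stand-in dataframe is built inline; with the real data you would call the same methods on the loaded dataframe):

```python
import pandas as pd

# Stand-in for the loaded dataset
df = pd.DataFrame({
    "Month": ["1949-01", "1949-02", "1949-03"],
    "#Passengers": [112, 118, 132],
})
df.info()                  # per-column non-null counts and dtypes
print(df.isnull().sum())   # number of missing values in each column
```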
As we can see, we do not have any empty value in our dataset, so we’re free to continue our analysis. Now, what we will do is confirm that the Month column is in datetime format and not a string. Pandas’ .dtypes attribute makes this possible:
We can see that the Month column is of a generic object type, which could be a string. Since we want to perform time-related actions on this data, we need to convert it to a datetime format before it can be useful to us. Let’s go ahead and do this: using the to_datetime() helper function, we cast the Month column to a datetime object instead of a generic object:
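The conversion can be sketched as follows (stand-in data again; the call on the real dataframe is identical):

```python
import pandas as pd

df = pd.DataFrame({
    "Month": ["1949-01", "1949-02"],
    "#Passengers": [112, 118],
})
print(df.dtypes["Month"])   # object (a generic/string column)
df["Month"] = pd.to_datetime(df["Month"])
print(df.dtypes["Month"])   # datetime64[ns]
print(df["Month"].iloc[0])  # 1949-01-01 00:00:00 (day defaults to the 1st)
```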
Notice how we now have a date field generated for us as part of the Month column. By default, the date field assumes the first day of the month to fill in the values of the days that were not supplied. Now, if we go back and confirm the type, we can see that it’s now of type datetime:
Now, we need to set the datetime object as the index of the dataframe to allow us really explore our data. Let’s do this using the .set_index() method:
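A sketch of the indexing step:

```python
import pandas as pd

df = pd.DataFrame({
    "Month": pd.to_datetime(["1949-01", "1949-02", "1949-03"]),
    "#Passengers": [112, 118, 132],
})
df = df.set_index("Month")  # the datetime column becomes the index
print(df.index.name)        # Month
print(df.index.dtype)       # datetime64[ns]
```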
We can see now that the Month column is the index of our dataframe. Let’s go ahead and create our plot to see what our data looks like:
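The plotting step might look like this (synthetic values stand in for the real series; once the datetime column is the index, df.plot() is enough to put time on the x-axis):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the snippet runs headless
import matplotlib.pyplot as plt
import pandas as pd

# Twelve months of synthetic passenger counts indexed by month-start dates
idx = pd.date_range("1949-01-01", periods=12, freq="MS")
df = pd.DataFrame({"#Passengers": range(112, 124)}, index=idx)

ax = df.plot()  # time on the x-axis, magnitude on the y-axis
ax.set_ylabel("#Passengers")
```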
Note that in Time Series plots, time is usually plotted on the x-axis while the y-axis is usually the magnitude of the data.
Notice how the Month column was used as our x-axis, and because we had previously cast our Month column to datetime, the year was specifically used to plot the graph.
By now, you should notice an upward trend, indicating that the airline would have more passengers over time. Although there are ups and downs at every point in time, generally we can observe that the trend increases. We can also notice that the ups and downs seem to be fairly regular, which means we might be observing a seasonal pattern here too. Let’s take a closer look by observing a single year’s data:
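Selecting one year of data relies on label-based slicing on a DatetimeIndex; a sketch on a synthetic two-year series:

```python
import pandas as pd

# Two years of monthly data ("MS" = month-start frequency)
idx = pd.date_range("1949-01-01", periods=24, freq="MS")
df = pd.DataFrame({"#Passengers": range(100, 124)}, index=idx)

one_year = df.loc["1949"]  # partial string indexing selects the whole year
print(len(one_year))       # 12 rows, one per month of 1949
```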
As we can see in the plot, there’s usually a spike between July and September which begins to drop by October, which implies that more people travel between July and September and probably travel less from October.
Remember we mentioned that there’s an upward trend and a seasonal pattern in our observation? There are usually a number of components [scroll up to see the explanation of Time Series components] in most Time Series analysis. Hence, what we need to do now is use decomposition techniques to deconstruct our observation into several components, each representing one of the underlying categories of patterns.
Decomposition of Time Series
There are a couple of models to consider during the Decomposition of Time Series data.
1. Additive Model: This model is used when the variations around the trend do not vary with the level of the time series. Here the components of a time series are simply added together using the formula: y(t) = Level(t) + Trend(t) + Seasonality(t) + Noise(t)
2. Multiplicative Model: This model is used if the trend is proportional to the level of the time series. Here the components of a time series are simply multiplied together using the formula: y(t) = Level(t) * Trend(t) * Seasonality(t) * Noise(t)
For the sake of this tutorial, we will use the additive model because it is quick to develop, fast to train, and provide interpretable patterns. We also need to import statsmodels which has a tsa (time series analysis) package as well as the seasonal_decompose() function we need:
Now we have a much clearer plot showing us that the trend is going up, and the seasonality following a regular pattern.
One last thing we will do is plot the trend alongside the observed time series. To do this, we will use Matplotlib’s YearLocator() function to set each year to begin from January (month=1), and a MonthLocator as the minor locator showing ticks every 3 months (interval=3). Then we plot our dataset (in blue) using the index of the dataframe as the x-axis and the number of passengers as the y-axis. We did the same for the trend observations, which we plotted in red.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
fig, ax = plt.subplots()
ax.grid(True)
year = mdates.YearLocator(month=1)
month = mdates.MonthLocator(interval=3)
year_format = mdates.DateFormatter('%Y')
month_format = mdates.DateFormatter('%m')
ax.xaxis.set_minor_locator(month)
ax.xaxis.grid(True, which='minor')
ax.xaxis.set_major_locator(year)
ax.xaxis.set_major_formatter(year_format)
plt.plot(data_set.index, data_set['#Passengers'], c='blue')
plt.plot(decomposition.trend.index, decomposition.trend, c='red')
Again, we can see the trend is going up against the individual observations.
I hope this tutorial has helped you in understanding what Time Series is and how to get started with analyzing Time Series data.
{
"code": null,
"e": 512,
"s": 171,
"text": "In my previous tutorials, we have considered data preparation and visualization tools such as Numpy, Pandas, Matplotlib and Seaborn. In this tutorial, we are going to learn about Time Series, why it’s important, situations we will need to apply Time Series, and more specifically, we will learn how to analyze Time Series data using Pandas."
},
{
"code": null,
"e": 1060,
"s": 512,
"text": "Time Series is a set of data points or observations taken at specified times usually at equal intervals (e.g hourly, daily, weekly, quarterly, yearly, etc). Time Series is usually used to predict future occurrences based on previous observed occurrence or values. Predicting what would happen in the stock market tomorrow, volume of goods that would be sold in the coming week, whether or not price of an item would skyrocket in December, number of Uber rides over a period of time, etc; are some of the things we can do with Time Series Analysis."
},
{
"code": null,
"e": 1696,
"s": 1060,
"text": "Time series helps us understand past trends so we can forecast and plan for the future. For example, you own a coffee shop, what you’d likely see is how many coffee you sell every day or month and when you want to see how your shop has performed over the past six months, you’re likely going to add all the six month sales. Now, what if you want to be able to forecast sales for the next six months or year. In this kind of scenario, the only variable known to you is time (either in seconds, minutes, days, months, years, etc) — hence you need Time Series Analysis to predict the other unknown variables like trends, seasonality, etc."
},
{
"code": null,
"e": 1792,
"s": 1696,
"text": "Hence, it is important to note that in Time Series Analysis, the only known variable is — Time."
},
{
"code": null,
"e": 2116,
"s": 1792,
"text": "Pandas has proven very successful as a tool for working with Time Series data. This is because Pandas has some in-built datetime functions which makes it easy to work with a Time Series Analysis, and since time is the most important variable we work with here, it makes Pandas a very suitable tool to perform such analysis."
},
{
"code": null,
"e": 2225,
"s": 2116,
"text": "Generally, including those outside of the financial world, Time Series often contain the following features:"
},
{
"code": null,
"e": 3775,
"s": 2225,
"text": "Trends: This refers to the movement of a series to relatively higher or lower values over a long period of time. For example, when the Time Series Analysis shows a pattern that is upward, we call it an Uptrend, and when the pattern is downward, we call it a Down trend, and if there was no trend at all, we call it a horizontal or stationary trend. One key thing to note is that trend usually happens for sometime and then disappears.Seasonality: This refers to is a repeating pattern within a fixed time period. Although these patterns can also swing upward or downward, however, this is quite different from that of a trend because trend happens for a period of time and then disappears. However Seasonality keeps happening within a fixed time period. For example, when it’s Christmas, you discover more candies and chocolates are sold and this keeps happening every year.Irregularity: This is also called noise. Irregularity happens for a short duration and it’s non depleting. A very good example is the case of Ebola. During that period, there was a massive demand for hand sanitizers which happened erratically/systematically in a way no one could have predicted, hence one could not tell how much number of sales could have been made or tell the next time there’s going to be another outbreak.Cyclic: This is when a series is repeating upward and downward movement. It usually does not have a fixed pattern. It could happen in 6months, then two years later, then 4 years, then 1 year later. These kinds of patterns are much harder to predict."
},
{
"code": null,
"e": 4210,
"s": 3775,
"text": "Trends: This refers to the movement of a series to relatively higher or lower values over a long period of time. For example, when the Time Series Analysis shows a pattern that is upward, we call it an Uptrend, and when the pattern is downward, we call it a Down trend, and if there was no trend at all, we call it a horizontal or stationary trend. One key thing to note is that trend usually happens for sometime and then disappears."
},
{
"code": null,
"e": 4651,
"s": 4210,
"text": "Seasonality: This refers to is a repeating pattern within a fixed time period. Although these patterns can also swing upward or downward, however, this is quite different from that of a trend because trend happens for a period of time and then disappears. However Seasonality keeps happening within a fixed time period. For example, when it’s Christmas, you discover more candies and chocolates are sold and this keeps happening every year."
},
{
"code": null,
"e": 5078,
"s": 4651,
"text": "Irregularity: This is also called noise. Irregularity happens for a short duration and it’s non depleting. A very good example is the case of Ebola. During that period, there was a massive demand for hand sanitizers which happened erratically/systematically in a way no one could have predicted, hence one could not tell how much number of sales could have been made or tell the next time there’s going to be another outbreak."
},
{
"code": null,
"e": 5328,
"s": 5078,
"text": "Cyclic: This is when a series is repeating upward and downward movement. It usually does not have a fixed pattern. It could happen in 6months, then two years later, then 4 years, then 1 year later. These kinds of patterns are much harder to predict."
},
{
"code": null,
"e": 5486,
"s": 5328,
"text": "Remember how we stated that the main variable here is Time? Same way, it is important to mention that we cannot apply Time Series analysis to a dataset when:"
},
{
"code": null,
"e": 5718,
"s": 5486,
"text": "The variables/values are constant. For example, 5000 boxes of candies where sold last Christmas, and the Christmas before that. Since both values are the same, we cannot apply time series to predict sales for this year’s Christmas."
},
{
"code": null,
"e": 5950,
"s": 5718,
"text": "The variables/values are constant. For example, 5000 boxes of candies where sold last Christmas, and the Christmas before that. Since both values are the same, we cannot apply time series to predict sales for this year’s Christmas."
},
{
"code": null,
"e": 6115,
"s": 5950,
"text": "2. Values in the form of functions: There’s no point applying Time Series Analysis to a dataset when you can calculate values by simply using a formula or function."
},
{
"code": null,
"e": 6268,
"s": 6115,
"text": "Now that we have basic understanding of what Time Series is, let’s go ahead and work on an example to fully grasp how we can analyze a Time Series Data."
},
{
"code": null,
"e": 6489,
"s": 6268,
"text": "In this example, we are asked to build a model to forecast the demand for flight tickets of a particular airline. We will be using the International Airline Passengers dataset . You can also download it from kaggle here."
},
{
"code": null,
"e": 6517,
"s": 6489,
"text": "Importing Packages and Data"
},
{
"code": null,
"e": 6755,
"s": 6517,
"text": "To begin, first thing we need to do is to import the packages we will use to perform our analysis: in this case, we’ll make use of pandas, to prepare our data and access the datetime functions and matplotlib to create our visualizations:"
},
{
"code": null,
"e": 6920,
"s": 6755,
"text": "Now, let’s read our dataset to see what kind of data we have. As we see, the dataset has been classified into two columns; Month and Passengers traveling per month."
},
{
"code": null,
"e": 7051,
"s": 6920,
"text": "I usually like getting a summary of the dataset in case there’s a row with an empty value. Let’s go ahead and check by doing this:"
},
{
"code": null,
"e": 7293,
"s": 7051,
"text": "As we can see, we do not have any empty value in our dataset, so we’re free to continue our analysis. Now, what we will do is to confirm that the Month column is in datetime format and not string. Pandas .dtypes function makes this possible:"
},
{
"code": null,
"e": 7651,
"s": 7293,
"text": "We can see that Month column is of a generic object type which could be a string. Since we want to perform time related actions on this data, we need to convert it to a datetime format before it can be useful to us. Let’s go ahead and do this using to_datetime() helper function, let’s cast the Month column to a datetime object instead of a generic object:"
},
{
"code": null,
"e": 7936,
"s": 7651,
"text": "Notice how we now have date field generated for us as part of the Month column. By default, the date field assumes the first day of the month to fill in the values of the days that were not supplied. Now, if we go back and confirm the type, we can see that it’s now of type datetime :"
},
{
"code": null,
"e": 8088,
"s": 7936,
"text": "Now, we need to set the datetime object as the index of the dataframe to allow us really explore our data. Let’s do this using the .set_index() method:"
},
{
"code": null,
"e": 8224,
"s": 8088,
"text": "We can see now that the Month column is the index of our dataframe. Let’s go ahead and create our plot to see what our data looks like:"
},
{
"code": null,
"e": 8349,
"s": 8224,
"text": "Note that in Time Series plots, time is usually plotted on the x-axis while the y-axis is usually the magnitude of the data."
},
{
"code": null,
"e": 8517,
"s": 8349,
"text": "Notice how the Month column was used as our x-axis and because we had previously casted our Month column to datetime, the year was specifically used to plot the graph."
},
{
"code": null,
"e": 8915,
"s": 8517,
"text": "By now, you should notice an upward trend indicating that the airline would have more passengers over time. Although there are ups and downs at every point in time, generally we can observe that the trend increases. We can also notice how the ups and downs seem to be a bit regular, which means we might be observing a seasonal pattern here too. Let's take a closer look by observing some years' data:"
},
{
"code": null,
"e": 9130,
"s": 8915,
"text": "As we can see in the plot, there’s usually a spike between July and September which begins to drop by October, which implies that more people travel between July and September and probably travel less from October."
},
{
"code": null,
"e": 9537,
"s": 9130,
"text": "Remember we mentioned that there's an upward trend and a seasonal pattern in our observation? There are usually a number of components [Scroll up to see explanation of Time Series components] in most Time Series analysis. Hence, what we need to do now is use Decomposition techniques to deconstruct our observation into several components, each representing one of the underlying categories of patterns."
},
{
"code": null,
"e": 9566,
"s": 9537,
"text": "Decomposition of Time Series"
},
{
"code": null,
"e": 10148,
"s": 9566,
"text": "There are a couple of models to consider during the Decomposition of Time Series data.\n1. Additive Model: This model is used when the variations around the trend do not vary with the level of the time series. Here the components of a time series are simply added together using the formula: y(t) = Level(t) + Trend(t) + Seasonality(t) + Noise(t)\n2. Multiplicative Model: Used if the trend is proportional to the level of the time series. Here the components of a time series are simply multiplied together using the formula: y(t) = Level(t) * Trend(t) * Seasonality(t) * Noise(t)"
},
{
"code": null,
"e": 10429,
"s": 10148,
"text": "For the sake of this tutorial, we will use the additive model because it is quick to develop, fast to train, and provides interpretable patterns. We also need to import statsmodels, which has a tsa (time series analysis) package, as well as the seasonal_decompose() function we need:"
},
{
"code": null,
"e": 10549,
"s": 10429,
"text": "Now we have a much clearer plot showing us that the trend is going up and that the seasonality follows a regular pattern."
},
{
"code": null,
"e": 11042,
"s": 10549,
"text": "One last thing we will do is plot the trend alongside the observed time series. To do this, we will use Matplotlib's .YearLocator() function to set each year to begin from the month of January (month=1), and a month locator as the minor locator showing ticks for every 3 months (interval=3). Then we plot our dataset (in blue) using the index of the dataframe as the x-axis and the number of Passengers for the y-axis. We did the same for the trend observations, which we plotted in red."
},
{
"code": null,
"e": 11564,
"s": 11042,
"text": "import matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nfig, ax = plt.subplots()\nax.grid(True)\n\nyear = mdates.YearLocator(month=1)\nmonth = mdates.MonthLocator(interval=3)\nyear_format = mdates.DateFormatter('%Y')\nmonth_format = mdates.DateFormatter('%m')\n\nax.xaxis.set_minor_locator(month)\nax.xaxis.grid(True, which = 'minor')\nax.xaxis.set_major_locator(year)\nax.xaxis.set_major_formatter(year_format)\n\nplt.plot(data_set.index, data_set['#Passengers'], c='blue')\nplt.plot(decomposition.trend.index, decomposition.trend, c='red')"
},
{
"code": null,
"e": 11641,
"s": 11564,
"text": "Again, we can see the trend is going up against the individual observations."
}
] |
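The additive decomposition described in the passage above (observed = trend + seasonality + noise) can be sketched without statsmodels, using a centered rolling mean for the trend and per-month averages for the seasonal part. This is a minimal illustration on synthetic data — all the numbers and names here are made up for the example, not taken from the article:

```python
import numpy as np
import pandas as pd

# Synthetic monthly series: linear trend + 12-month seasonality + noise
rng = np.random.default_rng(0)
n = 120
t = np.arange(n)
y = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n))

# Trend: centered 12-point rolling mean (one full seasonal period)
trend = y.rolling(window=12, center=True).mean()

# Seasonal component: average detrended value for each month position
detrended = y - trend
seasonal = detrended.groupby(t % 12).transform("mean")

# What is left over should look like noise
residual = y - trend - seasonal
print(residual.dropna().abs().mean())  # small value => components explain the series
```

Libraries like statsmodels do essentially this (plus more careful edge handling) inside seasonal_decompose().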
Python - Merge Pandas DataFrame with Inner Join | To merge Pandas DataFrame, use the merge() function. The inner join is implemented on both the DataFrames by setting under the “how” parameter of the merge() function i.e. −
how = “inner”
At first, let us import the pandas library with an alias −
import pandas as pd
Create DataFrame1 −
dataFrame1 = pd.DataFrame(
{
"Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],
"Units": [100, 150, 110, 80, 110, 90]
}
)
Now, create DataFrame2 −
dataFrame2 = pd.DataFrame(
{
"Car": ['BMW', 'Lexus', 'Tesla', 'Mustang', 'Mercedes', 'Jaguar'],
"Reg_Price": [7000, 1500, 5000, 8000, 9000, 6000]
}
)
Merge DataFrames with a common column Car and "inner" in "how" parameter implements Inner Join −
mergedRes = pd.merge(dataFrame1, dataFrame2, on ='Car', how ="inner")
Following is the code −
#
# Merge Pandas DataFrame with Inner Join
#
import pandas as pd
# Create DataFrame1
dataFrame1 = pd.DataFrame(
{
"Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],
"Units": [100, 150, 110, 80, 110, 90]
}
)
print("DataFrame1 ...\n", dataFrame1)
# Create DataFrame2
dataFrame2 = pd.DataFrame(
{
"Car": ['BMW', 'Lexus', 'Tesla', 'Mustang', 'Mercedes', 'Jaguar'],
"Reg_Price": [7000, 1500, 5000, 8000, 9000, 6000]
}
)
print("\nDataFrame2 ...\n", dataFrame2)
# merge DataFrames with common column Car and "inner" in "how" parameter implements Inner Join
mergedRes = pd.merge(dataFrame1, dataFrame2, on ='Car', how ="inner")
print("\nMerged dataframe with inner join...\n", mergedRes)
This will produce the following output −
DataFrame1 ...
Car Units
0 BMW 100
1 Lexus 150
2 Audi 110
3 Mustang 80
4 Bentley 110
5 Jaguar 90
DataFrame2 ...
Car Reg_Price
0 BMW 7000
1 Lexus 1500
2 Tesla 5000
3 Mustang 8000
4 Mercedes 9000
5 Jaguar 6000
Merged dataframe with inner join...
Car Units Reg_Price
0 BMW 100 7000
1 Lexus 150 1500
2 Mustang 80 8000
3 Jaguar 90 6000 | [
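The key behavior shown above — an inner join keeps only the Car values present in both frames — can be checked with a minimal, self-contained sketch (tiny made-up frames, not the article's data):

```python
import pandas as pd

df1 = pd.DataFrame({"Car": ["BMW", "Audi"], "Units": [100, 110]})
df2 = pd.DataFrame({"Car": ["BMW", "Tesla"], "Reg_Price": [7000, 5000]})

# how="inner" drops Audi and Tesla, since each appears in only one frame
res = pd.merge(df1, df2, on="Car", how="inner")
print(res)  # one row: BMW, 100, 7000
```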
{
"code": null,
"e": 1236,
"s": 1062,
"text": "To merge Pandas DataFrame, use the merge() function. The inner join is implemented on both the DataFrames by setting under the “how” parameter of the merge() function i.e. −"
},
{
"code": null,
"e": 1250,
"s": 1236,
"text": "how = “inner”"
},
{
"code": null,
"e": 1309,
"s": 1250,
"text": "At first, let us import the pandas library with an alias −"
},
{
"code": null,
"e": 1330,
"s": 1309,
"text": "import pandas as pd\n"
},
{
"code": null,
"e": 1350,
"s": 1330,
"text": "Create DataFrame1 −"
},
{
"code": null,
"e": 1506,
"s": 1350,
"text": "dataFrame1 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],\n \"Units\": [100, 150, 110, 80, 110, 90]\n }\n)\n\n"
},
{
"code": null,
"e": 1531,
"s": 1506,
"text": "Now, create DataFrame2 −"
},
{
"code": null,
"e": 1702,
"s": 1531,
"text": "dataFrame2 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Tesla', 'Mustang', 'Mercedes', 'Jaguar'],\n \"Reg_Price\": [7000, 1500, 5000, 8000, 9000, 6000]\n\n }\n)\n\n"
},
{
"code": null,
"e": 1799,
"s": 1702,
"text": "Merge DataFrames with a common column Car and \"inner\" in \"how\" parameter implements Inner Join −"
},
{
"code": null,
"e": 1869,
"s": 1799,
"text": "mergedRes = pd.merge(dataFrame1, dataFrame2, on ='Car', how =\"inner\")"
},
{
"code": null,
"e": 1893,
"s": 1869,
"text": "Following is the code −"
},
{
"code": null,
"e": 2624,
"s": 1893,
"text": "#\n# Merge Pandas DataFrame with Inner Join\n#\n\nimport pandas as pd\n\n# Create DataFrame1\ndataFrame1 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],\n \"Units\": [100, 150, 110, 80, 110, 90]\n }\n)\n\nprint\"DataFrame1 ...\\n\",dataFrame1\n\n# Create DataFrame2\ndataFrame2 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Tesla', 'Mustang', 'Mercedes', 'Jaguar'],\n \"Reg_Price\": [7000, 1500, 5000, 8000, 9000, 6000]\n\n }\n)\n\nprint\"\\nDataFrame2 ...\\n\",dataFrame2\n\n# merge DataFrames with common column Car and \"inner\" in \"how\" parameter implements Inner Join\nmergedRes = pd.merge(dataFrame1, dataFrame2, on ='Car', how =\"inner\")\nprint\"\\nMerged dataframe with inner join...\\n\", mergedRes\n\n"
},
{
"code": null,
"e": 2665,
"s": 2624,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 3159,
"s": 2665,
"text": "DataFrame1 ...\n Car Units\n0 BMW 100\n1 Lexus 150\n2 Audi 110\n3 Mustang 80\n4 Bentley 110\n5 Jaguar 90\n\nDataFrame2 ...\n Car Reg_Price\n0 BMW 7000\n1 Lexus 1500\n2 Tesla 5000\n3 Mustang 8000\n4 Mercedes 9000\n5 Jaguar 6000\n\nMerged dataframe with inner join...\n Car Units Reg_Price\n0 BMW 100 7000\n1 Lexus 150 1500\n2 Mustang 80 8000\n3 Jaguar 90 6000"
}
] |
Finding distance between two latitudes and longitudes in Python | by Zolzaya Luvsandorj | Towards Data Science | When preparing data for a model, there may be a time where it’s useful to find distances between two locations. This post shows how to find the shortest spherical and travel distance between two locations from their latitude and longitude in Python.
We can locate any place on earth from its geographic coordinates. Geographic coordinates of a location consist of its latitude and longitude position.
Latitude is a measurement of the vertical position between North Pole and South Pole. Imaginary horizontal latitude lines are called parallels. Equator is a special parallel that is at 0° latitude and lies halfway between and North Pole and South Pole.
Longitude is a measurement of the horizontal position. Imaginary vertical longitude lines are called meridians. The Prime Meridian is a special meridian that is at 0° longitude. Longitudes are also important when it comes to time zones.
Parallels are like a ring whereas meridians are like half a ring.
We will import the libraries and set two sample location coordinates in Melbourne, Australia:
import numpy as np
import pandas as pd
from math import radians, cos, sin, asin, acos, sqrt, pi
from geopy import distance
from geopy.geocoders import Nominatim
import osmnx as ox
import networkx as nx

lat1, lon1 = -37.82120, 144.96441 # location 1
lat2, lon2 = -37.88465, 145.08727 # location 2
Installing osmnx can be fiddly. One easy way to setup the environment to follow this tutorial would be to use Google Colaboratory: First, create a new notebook; second, install the library with !pip install osmnx; third, restart: go to Runtime from menu on top > Restart Runtime > Environment is ready!
Earth’s equatorial radius is 6378 km and polar radius is 6356 km so earth is not a perfect sphere. However, assuming spherical earth enables us to easily find approximate distances which is satisfactory in some applications. In this section, we will use the haversine formula to find the spherical distance between two locations from their geographic coordinates. Let’s first familiarise with the haversine function.
The haversine function is as follows:
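The formula referenced here (it appeared as an image in the original article) is, written out:

```latex
\operatorname{hav}(\theta) = \sin^2\!\left(\frac{\theta}{2}\right) = \frac{1 - \cos\theta}{2}
```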
Haversine of a central angle, which equals the spherical distance divided by the radius of the sphere, can be calculated using the haversine formula:
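Written out, the formula referenced here is:

```latex
\operatorname{hav}\!\left(\frac{d}{r}\right) = \operatorname{hav}(\varphi_2 - \varphi_1) + \cos\varphi_1 \cos\varphi_2 \, \operatorname{hav}(\lambda_2 - \lambda_1)
```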
We can transform this formula using the first definition of the haversine function and rearrange it such that d is on the left side:
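With d on the left side, this gives:

```latex
d = 2r \arcsin\!\left( \sqrt{ \sin^2\!\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\!\left(\frac{\lambda_2 - \lambda_1}{2}\right) } \right)
```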
Now, it’s time to translate this into Python code. There are two things to highlight: First, the latitudes and longitudes are in degrees so we will have to convert them to radians before we plug them into the formula. Second, we will use globally average value of 6371 km as the radius of the spherical earth.
def calculate_spherical_distance(lat1, lon1, lat2, lon2, r=6371):
    # Convert degrees to radians
    coordinates = lat1, lon1, lat2, lon2
    # radians(c) is same as c*pi/180
    phi1, lambda1, phi2, lambda2 = [
        radians(c) for c in coordinates
    ]
    # Apply the haversine formula
    a = (np.square(sin((phi2-phi1)/2)) + cos(phi1) * cos(phi2) *
         np.square(sin((lambda2-lambda1)/2)))
    d = 2*r*asin(np.sqrt(a))
    return d

print(f"{calculate_spherical_distance(lat1, lon1, lat2, lon2):.4f} km")
Alternatively, we can use the second definition of the haversine function with cosine and rearrange the equation to express d:
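Rearranged for d, the cosine form is:

```latex
d = r \arccos\bigl( \cos(\varphi_2 - \varphi_1) - \cos\varphi_1 \cos\varphi_2 \,(1 - \cos(\lambda_2 - \lambda_1)) \bigr)
```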
This can be expressed in Python as follows:
def calculate_spherical_distance(lat1, lon1, lat2, lon2, r=6371):
    # Convert degrees to radians
    coordinates = lat1, lon1, lat2, lon2
    phi1, lambda1, phi2, lambda2 = [
        radians(c) for c in coordinates
    ]
    # Apply the haversine formula
    d = r*acos(cos(phi2-phi1) - cos(phi1) * cos(phi2) *
               (1-cos(lambda2-lambda1)))
    return d

print(f"{calculate_spherical_distance(lat1, lon1, lat2, lon2):.4f} km")
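As a quick sanity check (not part of the original article), the two formulations should agree on the Melbourne coordinates used above:

```python
from math import radians, sin, cos, asin, acos, sqrt

lat1, lon1 = -37.82120, 144.96441
lat2, lon2 = -37.88465, 145.08727
r = 6371  # mean Earth radius, km

p1, l1, p2, l2 = map(radians, (lat1, lon1, lat2, lon2))

# Form 1: haversine / arcsine
a = sin((p2 - p1) / 2) ** 2 + cos(p1) * cos(p2) * sin((l2 - l1) / 2) ** 2
d_asin = 2 * r * asin(sqrt(a))

# Form 2: rearranged with cosine
d_acos = r * acos(cos(p2 - p1) - cos(p1) * cos(p2) * (1 - cos(l2 - l1)))

print(round(d_asin, 4), round(d_acos, 4))  # both give the same ~13 km distance
```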
More practically, we can use geopy package to get the spherical distance in a single line of code:
print(f"{distance.great_circle((lat1, lon1), (lat2, lon2)).km:.4f} km")
In addition, it’s easy to find other distances with geopy package. For instance, we can get distance based on ellipsoid earth assumption like this: distance.distance((lat1, lon1), (lat2, lon2)).km. There are different ellipsoid models available, the previous function uses the WGS-84 model and here’s an alternative syntax: distance.geodesic((lat1, lon1), (lat2, lon2), ellipsoid=’WGS-84').km. If you want to learn more about the library, check out its resource on calculating distance.
In this section, we will look at how to find shortest travel distance using OpenStreetMap with OSMnx package. We will start by pulling the graph of the network of the city:
mel_graph = ox.graph_from_place(
    'Melbourne, Australia',
    network_type='drive', simplify=True)
ox.plot_graph(mel_graph)
This code is likely to take a while to run. We used network_type='drive' to get driving distance. Other network types are also available. For instance, if we are after walking distance, then we tweak the code to network_type='walk'.
Now, we can find the driving distance using the graph:
orig_node = ox.distance.nearest_nodes(mel_graph, lon1, lat1)
target_node = ox.distance.nearest_nodes(mel_graph, lon2, lat2)
nx.shortest_path_length(G=mel_graph, source=orig_node,
                        target=target_node, weight='length')
The shortest driving distance from location 1 to location 2 is 15,086.094 meters. It’s worth noting that distance from location 2 to location 1 may not necessarily be the same as the distance from location 1 to location 2.
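The asymmetry comes from one-way streets: in a directed graph the shortest path from A to B need not equal the one from B to A. A tiny pure-Python illustration with made-up weights (not real map data, and not OSMnx's implementation):

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path length in a weighted DIRECTED graph (adjacency dict)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# A->B is a direct one-way street; going back must detour via C
roads = {
    "A": {"B": 1},
    "B": {"C": 2},
    "C": {"A": 2},
}
print(dijkstra(roads, "A", "B"))  # 1
print(dijkstra(roads, "B", "A"))  # 4 (B -> C -> A)
```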
We can create a function that calculates the distance:
def calculate_driving_distance(lat1, lon1, lat2, lon2):
    # Get city and country name
    geolocator = Nominatim(user_agent="geoapiExercises")
    location = geolocator.reverse(f"{lat1}, {lon1}")
    address = location.raw['address']
    area = f"{address['city']}, {address['country']}"
    # Get graph for the city
    graph = ox.graph_from_place(area, network_type='drive', simplify=True)
    # Find shortest driving distance
    orig_node = ox.distance.nearest_nodes(graph, lon1, lat1)
    target_node = ox.distance.nearest_nodes(graph, lon2, lat2)
    length = nx.shortest_path_length(G=graph, source=orig_node,
                                     target=target_node, weight='length')
    return length / 1000 # convert from m to kms

print(f"{calculate_driving_distance(lat1, lon1, lat2, lon2):.2f} km")
That was it! If you want to learn more about the library, check out OSMnx user reference and OSMnx examples.
Would you like to access more content like this? Medium members get unlimited access to any articles on Medium. If you become a member using my referral link, a portion of your membership fee will directly go to support me.
Thank you for reading my post. If you are interested, here are links to some of my posts:◼️ Enrich your Jupyter Notebook with these tips◼️ Organise your Jupyter Notebook with these tips◼️ Useful IPython magic commands◼️ Introduction to Python Virtual Environment for Data Science◼️ Introduction to Git for Data Science◼️ Simple data visualisations in Python that you will find useful◼️ 6 simple tips for prettier and customised plots in Seaborn (Python)◼️️ 5 tips for pandas users◼️️ Writing 5 common SQL queries in pandas
Bye for now 🏃💨 | [
{
"code": null,
"e": 415,
"s": 165,
"text": "When preparing data for a model, there may be a time where it’s useful to find distances between two locations. This post shows how to find the shortest spherical and travel distance between two locations from their latitude and longitude in Python."
},
{
"code": null,
"e": 566,
"s": 415,
"text": "We can locate any place on earth from its geographic coordinates. Geographic coordinates of a location consist of its latitude and longitude position."
},
{
"code": null,
"e": 819,
"s": 566,
"text": "Latitude is a measurement of the vertical position between North Pole and South Pole. Imaginary horizontal latitude lines are called parallels. Equator is a special parallel that is at 0° latitude and lies halfway between and North Pole and South Pole."
},
{
"code": null,
"e": 1056,
"s": 819,
"text": "Longitude is a measurement of the horizontal position. Imaginary vertical longitude lines are called meridians. The Prime Meridian is a special meridian that is at 0° longitude. Longitudes are also important when it comes to time zones."
},
{
"code": null,
"e": 1124,
"s": 1056,
"text": "Parallels are like a ring whereas meridians are like a half a ring."
},
{
"code": null,
"e": 1218,
"s": 1124,
"text": "We will import the libraries and set two sample location coordinates in Melbourne, Australia:"
},
{
"code": null,
"e": 1507,
"s": 1218,
"text": "import numpy as npimport pandas as pdfrom math import radians, cos, sin, asin, acos, sqrt, pifrom geopy import distancefrom geopy.geocoders import Nominatimimport osmnx as oximport networkx as nxlat1, lon1 = -37.82120, 144.96441 # location 1lat2, lon2 = -37.88465, 145.08727 # location 2"
},
{
"code": null,
"e": 1810,
"s": 1507,
"text": "Installing osmnx can be fiddly. One easy way to setup the environment to follow this tutorial would be to use Google Colaboratory: First, create a new notebook; second, install the library with !pip install osmnx; third, restart: go to Runtime from menu on top > Restart Runtime > Environment is ready!"
},
{
"code": null,
"e": 2227,
"s": 1810,
"text": "Earth’s equatorial radius is 6378 km and polar radius is 6356 km so earth is not a perfect sphere. However, assuming spherical earth enables us to easily find approximate distances which is satisfactory in some applications. In this section, we will use the haversine formula to find the spherical distance between two locations from their geographic coordinates. Let’s first familiarise with the haversine function."
},
{
"code": null,
"e": 2265,
"s": 2227,
"text": "The haversine function is as follows:"
},
{
"code": null,
"e": 2415,
"s": 2265,
"text": "Haversine of a central angle, which equals the spherical distance divided by the radius of the sphere, can be calculated using the haversine formula:"
},
{
"code": null,
"e": 2548,
"s": 2415,
"text": "We can transform this formula using the first definition of the haversine function and rearrange it such that d is on the left side:"
},
{
"code": null,
"e": 2858,
"s": 2548,
"text": "Now, it’s time to translate this into Python code. There are two things to highlight: First, the latitudes and longitudes are in degrees so we will have to convert them to radians before we plug them into the formula. Second, we will use globally average value of 6371 km as the radius of the spherical earth."
},
{
"code": null,
"e": 3372,
"s": 2858,
"text": "def calculate_spherical_distance(lat1, lon1, lat2, lon2, r=6371): # Convert degrees to radians coordinates = lat1, lon1, lat2, lon2 # radians(c) is same as c*pi/180 phi1, lambda1, phi2, lambda2 = [ radians(c) for c in coordinates ] # Apply the haversine formula a = (np.square(sin((phi2-phi1)/2)) + cos(phi1) * cos(phi2) * np.square(sin((lambda2-lambda1)/2))) d = 2*r*asin(np.sqrt(a)) return dprint(f\"{calculate_spherical_distance(lat1, lon1, lat2, lon2):.4f} km\")"
},
{
"code": null,
"e": 3499,
"s": 3372,
"text": "Alternatively, we can use the second definition of the haversine function with cosine and rearrange the equation to express d:"
},
{
"code": null,
"e": 3543,
"s": 3499,
"text": "This can be expressed in Python as follows:"
},
{
"code": null,
"e": 3975,
"s": 3543,
"text": "def calculate_spherical_distance(lat1, lon1, lat2, lon2, r=6371): # Convert degrees to radians coordinates = lat1, lon1, lat2, lon2 phi1, lambda1, phi2, lambda2 = [ radians(c) for c in coordinates ] # Apply the haversine formula d = r*acos(cos(phi2-phi1) - cos(phi1) * cos(phi2) * (1-cos(lambda2-lambda1))) return dprint(f\"{calculate_spherical_distance(lat1, lon1, lat2, lon2):.4f} km\")"
},
{
"code": null,
"e": 4074,
"s": 3975,
"text": "More practically, we can use geopy package to get the spherical distance in a single line of code:"
},
{
"code": null,
"e": 4146,
"s": 4074,
"text": "print(f\"{distance.great_circle((lat1, lon1), (lat2, lon2)).km:.4f} km\")"
},
{
"code": null,
"e": 4633,
"s": 4146,
"text": "In addition, it’s easy to find other distances with geopy package. For instance, we can get distance based on ellipsoid earth assumption like this: distance.distance((lat1, lon1), (lat2, lon2)).km. There are different ellipsoid models available, the previous function uses the WGS-84 model and here’s an alternative syntax: distance.geodesic((lat1, lon1), (lat2, lon2), ellipsoid=’WGS-84').km. If you want to learn more about the library, check out its resource on calculating distance."
},
{
"code": null,
"e": 4806,
"s": 4633,
"text": "In this section, we will look at how to find shortest travel distance using OpenStreetMap with OSMnx package. We will start by pulling the graph of the network of the city:"
},
{
"code": null,
"e": 4927,
"s": 4806,
"text": "mel_graph = ox.graph_from_place( 'Melbourne, Australia', network_type='drive', simplify=True)ox.plot_graph(mel_graph)"
},
{
"code": null,
"e": 5160,
"s": 4927,
"text": "This code is likely to take a while to run. We used network_type='drive' to get driving distance. Other network types are also available. For instance, if we are after walking distance, then we tweak the code to network_type='walk'."
},
{
"code": null,
"e": 5215,
"s": 5160,
"text": "Now, we can find the driving distance using the graph:"
},
{
"code": null,
"e": 5429,
"s": 5215,
"text": "orig_node = ox.distance.nearest_nodes(mel_graph, lon1, lat1)target_node = ox.distance.nearest_nodes(mel_graph, lon2, lat2)nx.shortest_path_length(G=mel_graph, source=orig_node, target=target_node, weight='length')"
},
{
"code": null,
"e": 5652,
"s": 5429,
"text": "The shortest driving distance from location 1 to location 2 is 15,086.094 meters. It’s worth noting that distance from location 2 to location 1 may not necessarily be the same as the distance from location 1 to location 2."
},
{
"code": null,
"e": 5707,
"s": 5652,
"text": "We can create a function that calculates the distance:"
},
{
"code": null,
"e": 6538,
"s": 5707,
"text": "def calculate_driving_distance(lat1, lon1, lat2, lon2): # Get city and country name geolocator = Nominatim(user_agent=\"geoapiExercises\") location = geolocator.reverse(f\"{lat1}, {lon1}\") address = location.raw['address'] area = f\"{address['city']}, {address['country']}\" # Get graph for the city graph = ox.graph_from_place(area, network_type='drive', simplify=True) # Find shortest driving distance orig_node = ox.distance.nearest_nodes(graph, lon1, lat1) target_node = ox.distance.nearest_nodes(graph, lon2, lat2) length = nx.shortest_path_length(G=graph, source=orig_node, target=target_node, weight='length') return length / 1000 # convert from m to kmsprint(f\"{calculate_driving_distance(lat1, lon1, lat2, lon2):.2f} km\")"
},
{
"code": null,
"e": 6647,
"s": 6538,
"text": "That was it! If you want to learn more about the library, check out OSMnx user reference and OSMnx examples."
},
{
"code": null,
"e": 6871,
"s": 6647,
"text": "Would you like to access more content like this? Medium members get unlimited access to any articles on Medium. If you become a member using my referral link, a portion of your membership fee will directly go to support me."
},
{
"code": null,
"e": 7394,
"s": 6871,
"text": "Thank you for reading my post. If you are interested, here are links to some of my posts:◼️ Enrich your Jupyter Notebook with these tips◼️ Organise your Jupyter Notebook with these tips◼️ Useful IPython magic commands◼️ Introduction to Python Virtual Environment for Data Science◼️ Introduction to Git for Data Science◼️ Simple data visualisations in Python that you will find useful◼️ 6 simple tips for prettier and customised plots in Seaborn (Python)◼️️ 5 tips for pandas users◼️️ Writing 5 common SQL queries in pandas"
}
] |
Execution of ‘DAD rp’ instruction | In 8085 Instruction set, for 16-bit addition, there is one instruction available that is DAD rp instruction. It is a 1-Byte instruction. With this instruction, with the content of the HL register pair, the contents of the mentioned register pair will get added and the result thus produced will be stored on the HL register pair.
As an example, let us consider the execution of the DAD B instruction. Let us suppose that the initial content of the HL register pair is 5050H and the content of the BC register pair is 4050H. So now if we execute the instruction DAD B, then the following 16-bit addition will take place –
(BC) = 4050H = 0100 0000 0101 0000
(HL) = 5050H = 0101 0000 0101 0000
               -------------------
(HL) = 90A0H = 1001 0000 1010 0000
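The same 16-bit addition (and the carry flag that DAD sets on overflow past FFFFH) can be mimicked in a few lines of Python — a hypothetical helper for illustration, not 8085 code:

```python
def dad(hl, rp):
    """Mimic 8085 DAD: HL = HL + rp (16-bit); returns (new_hl, carry_flag)."""
    total = hl + rp
    return total & 0xFFFF, 1 if total > 0xFFFF else 0

hl, cy = dad(0x5050, 0x4050)
print(hex(hl), cy)  # 0x90a0 0 -- matches the addition above, no carry
```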
In the first machine cycle M1, the opcode 09H for the DAD B instruction is fetched from the memory into the IR register of the 8085. This instruction will then be decoded by the 8085 to interpret it as the opcode for the DAD B instruction. This Opcode Fetch machine cycle takes a total of 3 + 1 = 4 clock cycles. Now it is time to add the contents of the HL and BC register pairs and store the result in the HL register pair. In the 8085 we have only an 8-bit ALU, so to perform this 16-bit addition, we take support from temp registers to hold intermediate results.
In the second machine cycle M2, the following actions take place.
Accumulator is temporarily stored in the W register;
L register contents are moved to the Accumulator;
C register contents are moved to the temp register;
Addition is performed, and ALU output is moved to the L register.
This machine cycle uses up three clock cycles. It is a Bus Idle (BI) machine cycle because:
No address is sent out by 8085;
No data is sent out or received from outside;
No external control signals are generated by 8085.
In the third machine cycle M3, the following actions take place.
H register contents are moved to the Accumulator;
B register contents are moved to the temp register;
Addition with Cy is performed, and the result stored is in H;
Accumulator gets the original value from the W register.
This machine cycle uses up three clock cycles. This is also a Bus Idle (BI) machine cycle because:
No address is sent out by 8085;
No data is sent out or received from outside;
No external control signals are generated by 8085.
Thus, DAD B instruction needs a total of ten clock cycles. It consists of Opcode Fetch machine cycle (four clock cycles), followed by two BI machine cycles (each of three clock cycles).
The timing diagram for the execution of this DAD B instruction is as follows –

Summary − So this DAD B instruction requires 1 Byte, 3 Machine Cycles (Opcode Fetch, Bus Idle, Bus Idle) and 10 T-States for execution, as shown in the timing diagram.
{
"code": null,
"e": 1392,
"s": 1062,
"text": "In 8085 Instruction set, for 16-bit addition, there is one instruction available that is DAD rp instruction. It is a 1-Byte instruction. With this instruction, with the content of the HL register pair, the contents of the mentioned register pair will get added and the result thus produced will be stored on the HL register pair."
},
{
"code": null,
"e": 1667,
"s": 1392,
"text": "As an example, let us consider the execution of the DADB instruction. Let us suppose, that the initial content of HL register pair is 5050H and content of BC register pair is 4050H. So now if we execute instruction DAD B then the following 16-bit addition will take place – "
},
{
"code": null,
"e": 1763,
"s": 1667,
"text": "(BC)= 4050H = 0100 0101\n(HL)= 5050H = 0101 0101\n ----- ---------\n(HL) 90A0H = 1001 1010"
},
{
"code": null,
"e": 2314,
"s": 1763,
"text": "In the first machine cycle M1, the opcode 09H for DAD B instruction is fetched from the memory into the IR register of 8085. Then this instruction will be decoded by the 8085 to interpret it as the opcode for DAD B instruction. This Opcode Fetch machine cycle takes a total of 3 + 1 = 4 clock cycles. Now it is the time to add the contents of HL and BC register pairs and storing the result on toHL register pair. In 8085 we have only an 8-bit ALU. So to perform this 16-bit addition, we take support from temp registers to hold intermediate results."
},
{
"code": null,
"e": 2381,
"s": 2314,
"text": " In the second machine cycle M2, the following actions take place."
},
{
"code": null,
"e": 2434,
"s": 2381,
"text": "Accumulator is temporarily stored in the W register;"
},
{
"code": null,
"e": 2487,
"s": 2434,
"text": "Accumulator is temporarily stored in the W register;"
},
{
"code": null,
"e": 2537,
"s": 2487,
"text": "L register contents are moved to the Accumulator;"
},
{
"code": null,
"e": 2587,
"s": 2537,
"text": "L register contents are moved to the Accumulator;"
},
{
"code": null,
"e": 2639,
"s": 2587,
"text": "C register contents are moved to the temp register;"
},
{
"code": null,
"e": 2691,
"s": 2639,
"text": "C register contents are moved to the temp register;"
},
{
"code": null,
"e": 2757,
"s": 2691,
"text": "Addition is performed, and ALU output is moved to the L register."
},
{
"code": null,
"e": 2823,
"s": 2757,
"text": "Addition is performed, and ALU output is moved to the L register."
},
{
"code": null,
"e": 2914,
"s": 2823,
"text": "This machine cycle uses up three clock cycles. It is aBus Idle (BI) machine cycle because:"
},
{
"code": null,
"e": 2946,
"s": 2914,
"text": "No address is sent out by 8085;"
},
{
"code": null,
"e": 2978,
"s": 2946,
"text": "No address is sent out by 8085;"
},
{
"code": null,
"e": 3024,
"s": 2978,
"text": "No data is sent out or received from outside;"
},
{
"code": null,
"e": 3070,
"s": 3024,
"text": "No data is sent out or received from outside;"
},
{
"code": null,
"e": 3121,
"s": 3070,
"text": "No external control signals are generated by 8085."
},
{
"code": null,
"e": 3172,
"s": 3121,
"text": "No external control signals are generated by 8085."
},
{
"code": null,
"e": 3237,
"s": 3172,
"text": "In the third machine cycle M3, the following actions take place."
},
{
"code": null,
"e": 3287,
"s": 3237,
"text": "H register contents are moved to the Accumulator;"
},
{
"code": null,
"e": 3337,
"s": 3287,
"text": "H register contents are moved to the Accumulator;"
},
{
"code": null,
"e": 3389,
"s": 3337,
"text": "B register contents are moved to the temp register;"
},
{
"code": null,
"e": 3441,
"s": 3389,
"text": "B register contents are moved to the temp register;"
},
{
"code": null,
"e": 3503,
"s": 3441,
"text": "Addition with Cy is performed, and the result stored is in H;"
},
{
"code": null,
"e": 3565,
"s": 3503,
"text": "Addition with Cy is performed, and the result stored is in H;"
},
{
"code": null,
"e": 3622,
"s": 3565,
"text": "Accumulator gets the original value from the W register."
},
{
"code": null,
"e": 3679,
"s": 3622,
"text": "Accumulator gets the original value from the W register."
},
{
"code": null,
"e": 3778,
"s": 3679,
"text": "This machine cycle uses up three clock cycles. This is also a Bus Idle (BI) machine cycle because:"
},
{
"code": null,
"e": 3810,
"s": 3778,
"text": "No address is sent out by 8085;"
},
{
"code": null,
"e": 3842,
"s": 3810,
"text": "No address is sent out by 8085;"
},
{
"code": null,
"e": 3888,
"s": 3842,
"text": "No data is sent out or received from outside;"
},
{
"code": null,
"e": 3934,
"s": 3888,
"text": "No data is sent out or received from outside;"
},
{
"code": null,
"e": 3985,
"s": 3934,
"text": "No external control signals are generated by 8085."
},
{
"code": null,
"e": 4036,
"s": 3985,
"text": "No external control signals are generated by 8085."
},
{
"code": null,
"e": 4222,
"s": 4036,
"text": "Thus, DAD B instruction needs a total of ten clock cycles. It consists of Opcode Fetch machine cycle (four clock cycles), followed by two BI machine cycles (each of three clock cycles)."
},
{
"code": null,
"e": 4300,
"s": 4222,
"text": "The timing diagram against this instruction OUT F0H execution is as follows –"
},
{
"code": null,
"e": 4477,
"s": 4300,
"text": "Summary − So this instruction OUT requires 1-Byte1, 3-Machine Cycles (Opcode Fetch, Bus Idle Cycle, BusIdle Cycle) and 10 T-States for execution as shown in the timing diagram."
}
] |
select() function in Lua programming | The select function in Lua operates on the list of arguments passed to it. It can be used in two forms: when the first argument is a number, select returns all the arguments from that position onwards; when the first argument is the length operator "#", it simply returns a count of the arguments that follow it.
Let’s explore both the cases in the examples shown below.
Live Demo
print(select(1, "a", "b", "c")) --> a b c
print(select(2, "a", "b", "c")) --> b c
print(select(3, "a", "b", "c")) --> c
In the above example, we have passed an index, and we can see that the select function returns the arguments from the given index onwards.
a b c
b c
c
Live Demo
print(select("#")) --> 0
print(select("#", {1, 2, 3}))
print(select("#", 1, 2, 3))
print(select("#", {1,2,3}, 4, 5, {6,7,8}))
In the above example, instead of passing an index, we passed the length operator "#", and hence the output is simply the number of arguments passed after it.
0
1
3
4 | [
{
"code": null,
"e": 1518,
"s": 1062,
"text": "The select function in Lua is used to return the number of arguments that are passed into it as an argument. It can be used in two forms, the first one includes passing an index and then it will return the numbers that are passed after that number into the function as an argument in a list format, the other pattern is if we pass the length operator as a first argument, in that case it simply returns a count of the multiple arguments that are provided."
},
{
"code": null,
"e": 1576,
"s": 1518,
"text": "Let’s explore both the cases in the examples shown below."
},
{
"code": null,
"e": 1587,
"s": 1576,
"text": " Live Demo"
},
{
"code": null,
"e": 1707,
"s": 1587,
"text": "print(select(1, \"a\", \"b\", \"c\")) --> a b c\nprint(select(2, \"a\", \"b\", \"c\")) --> b c\nprint(select(3, \"a\", \"b\", \"c\")) --> c"
},
{
"code": null,
"e": 1855,
"s": 1707,
"text": "In the above example, we have passed an index, and we can see that the output from the select function will be the arguments after the given index."
},
{
"code": null,
"e": 1873,
"s": 1855,
"text": "a b c\nb c\nc"
},
{
"code": null,
"e": 1884,
"s": 1873,
"text": " Live Demo"
},
{
"code": null,
"e": 2010,
"s": 1884,
"text": "print(select(\"#\")) --> 0\nprint(select(\"#\", {1, 2, 3}))\nprint(select(\"#\", 1, 2, 3))\nprint(select(\"#\", {1,2,3}, 4, 5, {6,7,8}))"
},
{
"code": null,
"e": 2168,
"s": 2010,
"text": "In the above example, instead of passing an index, I passed the length operator, and hence the output will simply be the number of arguments passed after it."
},
{
"code": null,
"e": 2176,
"s": 2168,
"text": "0\n1\n3\n4"
}
] |
Textwrap - Text wrapping and filling in Python | The textwrap module provides the TextWrapper class that performs wrapping or filling. It also has convenience functions for the same purpose.
textwrap.wrap(text, width=70) − Wraps the single paragraph in text (a string) so every line is at most width characters long. Returns a list of output lines, without final newlines.
textwrap.fill(text, width=70) − Wraps the single paragraph in text, and returns a single string containing the wrapped paragraph.
>>> sample_text = '''
The textwrap module provides some convenience functions, as well as TextWrapper class
that does all the work. If you’re just wrapping or filling one or two text strings,
the convenience functions should be good enough; otherwise, you should use an instance
of TextWrapper for efficiency.
'''
>>> import textwrap
>>> for line in (textwrap.wrap(sample_text, width = 50)):
print (line)
The textwrap module provides some convenience
functions, as well as TextWrapper class that
does all the work. If you’re just wrapping or
filling one or two text strings, the
convenience functions should be good enough;
otherwise, you should use an instance of
TextWrapper for efficiency.
>>> textwrap.fill(sample_text, width = 50)
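For repeated wrapping with the same settings, the TextWrapper class mentioned above can be instantiated once and reused. A minimal sketch (the sample string and indent values here are illustrative, not from the original):

```python
import textwrap

sample = ("The textwrap module provides some convenience "
          "functions, as well as the TextWrapper class that does all the work.")

# One reusable wrapper: every call shares the same width and indent settings.
wrapper = textwrap.TextWrapper(width=40,
                               initial_indent="* ",
                               subsequent_indent="  ")

for line in wrapper.wrap(sample):
    print(line)
```

Each produced line is at most 40 characters long; the first line starts with "* " and every continuation line is indented by two spaces.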
' The textwrap module provides some convenience\nfunctions, as well as TextWrapper class that\ndoes all the work. If you’re just wrapping or\nfilling one or two text strings, the\nconvenience functions should be good enough;\notherwise, you should use an instance of\nTextWrapper for efficiency.' | [
{
"code": null,
"e": 1195,
"s": 1062,
"text": "The textwrap module provides TextWrapper class that performs wrapping or filling. It has convenience functions for the same purpose."
},
{
"code": null,
"e": 1345,
"s": 1195,
"text": "Wraps the single paragraph in text (a string) so every line is at most width characters long. Returns a list of output lines, without final newlines."
},
{
"code": null,
"e": 1443,
"s": 1345,
"text": "Wraps the single paragraph in text, and returns a single string containing the wrapped paragraph."
},
{
"code": null,
"e": 2137,
"s": 1443,
"text": ">>> sample_text = '''\nThe textwrap module provides some convenience functions, as well as TextWrapper class\nthat does all the work. If you’re just wrapping or filling one or two text strings,\nthe convenience functions should be good enough; otherwise, you should use an instance\nof TextWrapper for efficiency.\n'''\n>>> import textwrap\n>>> for line in (textwrap.wrap(sample_text, width = 50)):\nprint (line)\n\nThe textwrap module provides some convenience\nfunctions, as well as TextWrapper class that\ndoes all the work. If you’re just wrapping or\nfilling one or two text strings, the\nconvenience functions should be good enough;\notherwise, you should use an instance of\nTextWrapper for efficiency."
},
{
"code": null,
"e": 2477,
"s": 2137,
"text": ">>> textwrap.fill(sample_text, width = 50)\n' The textwrap module provides some convenience\\nfunctions, as well as TextWrapper class that\\ndoes all the work. If you’re just wrapping or\\nfilling one or two text strings, the\\nconvenience functions should be good enough;\\notherwise, you should use an instance of\\nTextWrapper for efficiency.'"
}
] |
XHTML - Quick Guide | XHTML stands for EXtensible HyperText Markup Language. It is the next step in the evolution of the internet. The XHTML 1.0 is the first document type in the XHTML family.
XHTML is almost identical to HTML 4.01 with only few differences. This is a cleaner and stricter version of HTML 4.01. If you already know HTML, then you need to give little attention to learn this latest version of HTML.
XHTML was developed by World Wide Web Consortium (W3C) to help web developers make the transition from HTML to XML. By migrating to XHTML today, web developers can enter the XML world with all of its benefits, while still remaining confident in the backward and future compatibility of the content.
Developers who migrate their content to XHTML 1.0 get the following benefits −
XHTML documents are XML conforming as they are readily viewed, edited, and validated with standard XML tools.
XHTML documents can be written to operate better than they did before in existing browsers as well as in new browsers.
XHTML documents can utilize applications such as scripts and applets that rely upon either the HTML Document Object Model or the XML Document Object Model.
XHTML gives you a more consistent, well-structured format so that your webpages can be easily parsed and processed by present and future web browsers.
You can easily maintain, edit, convert and format your document in the long run.
Since XHTML is an official standard of the W3C, your website becomes more compatible with many browsers and it is rendered more accurately.
XHTML combines strength of HTML and XML. Also, XHTML pages can be rendered by all XML enabled browsers.
XHTML defines quality standard for your webpages and if you follow that, then your web pages are counted as quality web pages. The W3C certifies those pages with their quality stamp.
Web developers and web browser designers are constantly discovering new ways to express their ideas through new markup languages. In XML, it is relatively easy to introduce new elements or additional element attributes. The XHTML family is designed to accommodate these extensions through XHTML modules and techniques for developing new XHTML-conforming modules. These modules permit the combination of existing and new features at the time of developing content and designing new user agents.
Before we proceed further, let us have a quick view on what are HTML, XML, and SGML.
This is Standard Generalized Markup Language (SGML) application conforming to International Standard ISO 8879. HTML is widely regarded as the standard publishing language of the World Wide Web.
This is a language for describing markup languages, particularly those used in electronic document exchange, document management, and document publishing. HTML is an example of a language defined in SGML.
XML stands for EXtensible Markup Language. XML is a markup language much like HTML and it was designed to describe data. XML tags are not predefined. You must define your own tags according to your needs.
XHTML syntax is very similar to HTML syntax and almost all the valid HTML elements are valid in XHTML as well. But when you write an XHTML document, you need to pay a bit extra attention to make your HTML document compliant to XHTML.
Here are the important points to remember while writing a new XHTML document or converting existing HTML document into XHTML document −
Write a DOCTYPE declaration at the start of the XHTML document.
Write all XHTML tags and attributes in lower case only.
Close all XHTML tags properly.
Nest all the tags properly.
Quote all the attribute values.
Forbid Attribute minimization.
Replace the name attribute with the id attribute.
Deprecate the language attribute of the script tag.
Here is the detail explanation of the above XHTML rules −
All XHTML documents must have a DOCTYPE declaration at the start. There are three types of DOCTYPE declarations, which are discussed in detail in XHTML Doctypes chapter. Here is an example of using DOCTYPE −
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
XHTML is case sensitive markup language. All the XHTML tags and attributes need to be written in lower case only.
<!-- This is invalid in XHTML -->
<A Href="/xhtml/xhtml_tutorial.html">XHTML Tutorial</A>
<!-- Correct XHTML way of writing this is as follows -->
<a href="/xhtml/xhtml_tutorial.html">XHTML Tutorial</a>
In the example, Href and anchor tag A are not in lower case, so it is incorrect.
Each and every XHTML tag should have an equivalent closing tag, even empty elements should also have closing tags. Here is an example showing valid and invalid ways of using tags −
<!-- This is invalid in XHTML -->
<p>This paragraph is not written according to XHTML syntax.
<!-- This is also invalid in XHTML -->
<img src="/images/xhtml.gif" >
The following syntax shows the correct way of writing above tags in XHTML. Difference is that, here we have closed both the tags properly.
<!-- This is valid in XHTML -->
<p>This paragraph is written according to XHTML syntax.</p>
<!-- This is also valid now -->
<img src="/images/xhtml.gif" />
All the values of XHTML attributes must be quoted. Otherwise, your XHTML document is assumed as an invalid document. Here is the example showing syntax −
<!-- This is invalid in XHTML -->
<img src="/images/xhtml.gif" width=250 height=50 />
<!-- Correct XHTML way of writing this is as follows -->
<img src="/images/xhtml.gif" width="250" height="50" />
XHTML does not allow attribute minimization. It means you need to explicitly state the attribute and its value. The following example shows the difference −
<!-- This is invalid in XHTML -->
<option selected>
<!-- Correct XHTML way of writing this is as follows -->
<option selected="selected">
Here is a list of the minimized attributes in HTML and the way you need to write them in XHTML −
compact="compact", checked="checked", declare="declare", readonly="readonly", disabled="disabled", selected="selected", defer="defer", ismap="ismap", nohref="nohref", noshade="noshade", nowrap="nowrap", multiple="multiple", and noresize="noresize".
The id attribute replaces the name attribute. Instead of using name = "name", XHTML prefers to use id = "id". The following example shows how −
<!-- This is invalid in XHTML -->
<img src="/images/xhtml.gif" name="xhtml_logo" />
<!-- Correct XHTML way of writing this is as follows -->
<img src="/images/xhtml.gif" id="xhtml_logo" />
The language attribute of the script tag is deprecated. The following example shows this difference −
<!-- This is invalid in XHTML -->
<script language="JavaScript" type="text/JavaScript">
document.write("Hello XHTML!");
</script>
<!-- Correct XHTML way of writing this is as follows -->
<script type="text/JavaScript">
document.write("Hello XHTML!");
</script>
You must nest all the XHTML tags properly. Otherwise your document is assumed as an incorrect XHTML document. The following example shows the syntax −
<!-- This is invalid in XHTML -->
<b><i> This text is bold and italic</b></i>
<!-- Correct XHTML way of writing this is as follows -->
<b><i> This text is bold and italic</i></b>
Certain elements are not allowed to contain specific other elements, and this prohibition applies to all depths of nesting, i.e., it includes all the descending elements. According to the XHTML 1.0 specification − a must not contain other a elements; pre must not contain img, object, big, small, sub, or sup; button must not contain input, select, textarea, label, button, form, fieldset, iframe, or isindex; label must not contain other label elements; and form must not contain other form elements.
The following example shows you a minimum content of an XHTML 1.0 document −
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/TR/xhtml1" xml:lang="en" lang="en">
<head>
<title>Every document must have a title</title>
</head>
<body>
...your content goes here...
</body>
</html>
Due to the fact that XHTML is an XML application, certain practices that were perfectly legal in SGML-based HTML 4 must be changed. You already have seen XHTML syntax in previous chapter, so differences between XHTML and HTML are very obvious. Following is the comparison between XHTML and HTML.
Well-formedness is a new concept introduced by XML. Essentially, this means all the elements must have closing tags and you must nest them properly.
CORRECT: Nested Elements
<p>Here is an emphasized <em>paragraph</em>.</p>
INCORRECT: Overlapping Elements
<p>Here is an emphasized <em>paragraph.</p></em>
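The difference can be verified mechanically: any strict XML parser accepts the correctly nested version and rejects the overlapping one. A small Python sketch using the standard library (the helper name is ours, not part of the tutorial):

```python
import xml.etree.ElementTree as ET

def is_well_formed(markup):
    """Return True if the snippet parses as well-formed XML."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

# Properly nested elements parse...
print(is_well_formed("<p>Here is an emphasized <em>paragraph</em>.</p>"))  # True
# ...while overlapping elements are rejected.
print(is_well_formed("<p>Here is an emphasized <em>paragraph.</p></em>"))  # False
```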
XHTML documents must use lower case for all HTML elements and attribute names. This difference is necessary because XHTML document is assumed to be an XML document and XML is case-sensitive. For example, <li> and <LI> are different tags.
In HTML, certain elements are permitted to omit the end tag. But XML does not allow end tags to be omitted.
CORRECT: Terminated Elements
<p>Here is a paragraph.</p><p>here is another paragraph.</p>
<br><hr/>
INCORRECT: Unterminated Elements
<p>Here is a paragraph.<p>here is another paragraph.
<br><hr>
All attribute values including numeric values, must be quoted.
CORRECT: Quoted Attribute Values
<td rowspan="3">
INCORRECT: Unquoted Attribute Values
<td rowspan=3>
XML does not support attribute minimization. Attribute-value pairs must be written in full. Attribute names such as compact and checked cannot occur in elements without their value being specified.
CORRECT: Non Minimized Attributes
<dl compact="compact">
INCORRECT: Minimized Attributes
<dl compact>
When a browser processes attributes, it does the following −
Strips leading and trailing whitespace.
Maps sequences of one or more white space characters (including line breaks) to a single inter-word space.
In XHTML, the script and style elements should not contain "<" and "&" characters directly; if they occur, they are treated as the start of markup. They must instead be written as the entity references "&lt;" and "&amp;", which the XML processor expands to display the "<" and "&" characters respectively.
Wrapping the content of the script or style element within a CDATA marked section avoids the expansion of these entities.
<script type="text/JavaScript">
<![CDATA[
... unescaped VB or Java Script here... ...
]]>
</script>
An alternative is to use external script and style documents.
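The effect of a CDATA section can be observed from an XML parser's point of view: characters inside it reach the application verbatim, with no entity expansion. A Python illustration (the script body is an invented example):

```python
import xml.etree.ElementTree as ET

snippet = "<script><![CDATA[ if (a < b && c) { go(); } ]]></script>"

# The markup-significant characters "<" and "&" survive untouched
# because they sit inside the CDATA marked section.
print(ET.fromstring(snippet).text)
```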
XHTML recommends the replacement of the name attribute with the id attribute. Note that in XHTML 1.0, the name attribute of these elements is formally deprecated, and it will be removed in subsequent versions of XHTML.
HTML and XHTML both have some attributes that have pre-defined and limited sets of values. For example, type attribute of the input element. In HTML and XML, these are called enumerated attributes. Under HTML 4, the interpretation of these values was case-insensitive, so a value of TEXT was equivalent to a value of text.
Under XHTML, the interpretation of these values is case-sensitive so all of these values are defined in lower-case.
HTML and XML both permit references to characters by using hexadecimal values. In HTML these references could be made using either &#Xnn; or &#xnn; and both are valid, but in XHTML documents you must use the lower-case version only, such as &#xnn;.
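An XML parser enforces this case rule, which can be demonstrated with Python's standard library (a sketch; the character U+0041 "A" is just an example):

```python
import xml.etree.ElementTree as ET

# The lower-case x in a hexadecimal character reference is well-formed XML.
print(ET.fromstring("<p>&#x41;</p>").text)   # A

# The upper-case X is tolerated by lenient HTML parsers but not by XML.
try:
    ET.fromstring("<p>&#X41;</p>")
except ET.ParseError:
    print("upper-case X rejected")
```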
All XHTML elements must be nested within the <html> root element. All other elements can have sub elements which must be in pairs and correctly nested within their parent element. The basic document structure is −
<!DOCTYPE html....>
<html>
<head> ... </head>
<body> ... </body>
</html>
The XHTML standard defines three Document Type Definitions (DTDs). The most commonly used and easy one is the XHTML Transitional document.
XHTML 1.0 document type definitions correspond to three DTDs −
Strict
Transitional
Frameset
There are few XHTML elements and attributes, which are available in one DTD but not available in another DTD. Therefore, while writing your XHTML document, you must select your XHTML elements or attributes carefully. However, XHTML validator helps you to identify valid and invalid elements and attributes.
Please check XHTML Validations for more detail on this.
If you are planning to use Cascading Style Sheet (CSS) strictly and avoiding to write most of the XHTML attributes, then it is recommended to use this DTD. A document conforming to this DTD is of the best quality.
If you want to use XHTML 1.0 Strict DTD then you need to include the following line at the top of your XHTML document.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
If you are planning to use many XHTML attributes as well as few Cascading Style Sheet properties, then you should adopt this DTD and you should write your XHTML document accordingly.
If you want to use XHTML 1.0 Transitional DTD, then you need to include the following line at the top of your XHTML document.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
You can use this when you want to use HTML Frames to partition the browser window into two or more frames.
If you want to use XHTML 1.0 Frameset DTD, then you need to include following line at the top of your XHTML document.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
Note − No matter what DTD you use to write your XHTML document, if it is a valid XHTML document, then it is considered a good quality document.
There are a few XHTML/HTML attributes which are standard and associated to all the XHTML/HTML tags. These attributes are listed here with brief description −
Core attributes − id, class, style, and title. Not valid in base, head, html, meta, param, script, style, and title elements.
Language attributes − The lang attribute indicates the language being used for the enclosed content. The language is identified using the ISO standard language abbreviations, such as fr for French, en for English, and so on. More codes and their formats are described at www.ietf.org. Not valid in base, br, frame, frameset, hr, iframe, param, and script elements.
Microsoft introduced a number of new proprietary attributes with the Internet Explorer 4 and higher versions.
When users visit a website, they do things such as click on text, images and hyperlinks, hover-over things, etc. These are examples of what JavaScript calls events.
We can write our event handlers in JavaScript or VBScript and can specify these event handlers as a value of event tag attribute. The XHTML 1.0 has a similar set of events which is available in HTML 4.01 specification.
There are only two attributes, onload and onunload, which can be used to trigger any JavaScript or VBScript code when any event occurs at document level.
Note − Here, the script refers to any function or piece of code of VBScript or JavaScript.
There are following six attributes which can be used to trigger any JavaScript or VBScript code when any event occurs at form level − onchange, onsubmit, onreset, onselect, onblur, and onfocus.
The following three events are generated by keyboard − onkeydown, onkeypress, and onkeyup. These events are not valid in base, bdo, br, frame, frameset, head, html, iframe, meta, param, script, style, and title elements.
The following seven events are generated by mouse when it comes in contact with any HTML tag − onclick, ondblclick, onmousedown, onmousemove, onmouseout, onmouseover, and onmouseup. These events are not valid in base, bdo, br, frame, frameset, head, html, iframe, meta, param, script, style, and title elements.
The W3C has helped move the internet content-development community from the days of malformed, non-standard mark-up into the well-formed, valid world of XML. In XHTML 1.0, this move was moderated by the goal of providing easy migration of existing HTML 4 (or earlier) based content to XHTML and XML.
The W3C has removed support for deprecated elements and attributes from the XHTML family. These elements and attributes had largely presentation-oriented functionality that is better handled via style sheets or client-specific default behavior.
Now the W3C's HTML Working Group has defined an initial document type based solely upon modules − XHTML 1.1. This document type is designed to be portable to a broad collection of client devices, and applicable to the majority of internet content.
The XHTML 1.1 provides a definition of strictly conforming XHTML documents which MUST meet all the following criteria −
The document MUST conform to the constraints expressed in XHTML 1.1 Document Type Definition.
The root element of the document MUST be <html>.
The root element of the document MUST designate the XHTML namespace using the xmlns attribute.
The root element MAY also contain a schema location attribute as defined in the XML Schema.
There MUST be a DOCTYPE declaration in the document prior to the root element. If present, the public identifier included in the DOCTYPE declaration MUST refer to the DTD found in the XHTML 1.1 Document Type Definition.
Here is an example of an XHTML 1.1 document −
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/MarkUp/SCHEMA/xhtml11.xsd" xml:lang="en">
<head>
<title>This is the document title</title>
</head>
<body>
<p>Moved to <a href="http://example.org/">example.org</a>.</p>
</body>
</html>
Note − In this example, the XML declaration is included. An XML declaration such as the one above is not required in all XML documents. XHTML document authors are strongly encouraged to use XML declarations in all their documents. Such a declaration is required when the character encoding of the document is other than the default UTF-8 or UTF-16.
The XHTML 1.1 document type is made up of the following XHTML modules.
Structure Module − The Structure Module defines the major structural elements for XHTML. These elements effectively act as the basis for the content model of many XHTML family document types. The elements and attributes included in this module are − body, head, html, and title.
Text Module − This module defines all of the basic text container elements, attributes, and their content model − abbr, acronym, address, blockquote, br, cite, code, dfn, div, em, h1, h2, h3, h4, h5, h6, kbd, p, pre, q, samp, span, strong, and var.
Hypertext Module − The Hypertext Module provides the element that is used to define hypertext links to other resources. This module supports element a.
List Module − As its name suggests, the List Module provides list-oriented elements. Specifically, the List Module supports the following elements and attributes − dl, dt, dd, ol, ul, and li.
Object Module − The Object Module provides elements for general-purpose object inclusion. Specifically, the Object Module supports − object and param.
Presentation Module − This module defines elements, attributes, and a minimal content model for simple presentation-related markup − b, big, hr, i, small, sub, sup, and tt.
Edit Module − This module defines elements and attributes for use in editing-related markup − del and ins.
Bidirectional Text Module − The Bi-directional Text module defines an element that can be used to declare the bi-directional rules for the element's content − bdo.
Forms Module − It provides all the form features found in HTML 4.0. Specifically, it supports − button, fieldset, form, input, label, legend, select, optgroup, option, and textarea.
Table Module − It supports the following elements, attributes, and content model − caption, col, colgroup, table, tbody, td, tfoot, th, thead, and tr.
Image Module − It provides basic image embedding and may be used in some implementations of client side image maps independently. It supports the element − img.
Client-side Image Map Module − It provides elements for client side image maps − area and map.
Server-side Image Map Module − It provides support for image-selection and transmission of selection coordinates. The Server-side Image Map Module supports − attribute ismap on img.
Intrinsic Events Module − It supports all the events discussed in XHTML Events.
Meta information Module − The Meta information Module defines an element that describes information within the declarative portion of a document. It includes element meta.
Scripting Module − It defines the elements used to contain information pertaining to executable scripts or the lack of support for executable scripts. Elements and attributes included in this module are − noscript and script.
Style Sheet Module − It defines an element to be used when declaring internal style sheets. The element and attribute defined by this module is − style.
Style Attribute Module (Deprecated) − It defines the style attribute.
Link Module − It defines an element that can be used to define links to external resources. It supports link element.
Base Module − It defines an element that can be used to define a base URI against which relative URIs in the document are resolved. The element and attribute included in this module is − base.
Ruby Annotation Module − XHTML also uses the Ruby Annotation module as defined in RUBY and supports − ruby, rbc, rtc, rb, rt, and rp.
This section describes the differences between XHTML 1.1 and XHTML 1.0 Strict. XHTML 1.1 represents a departure from both HTML 4 and XHTML 1.0.
The most significant is the removal of features that were deprecated.
The changes can be summarized as follows −
On every element, the lang attribute has been removed in favor of the xml:lang attribute.
On the <a> and <map> elements, the name attribute has been removed in favor of the id attribute.
The ruby collection of elements has been added.
This chapter lists out various tips and tricks which you should be aware of while writing an XHTML document. These tips and tricks can help you create effective documents.
Here are some basic guidelines for designing XHTML documents −
To satisfy what your audience wants, you need to design effective and catchy documents that serve the purpose. Your document should make it easy to find the required information and should offer a familiar environment.
For example, Academicians or medical practitioners are comfortable with journal-like document with long sentences, complex diagrams, specific terminologies, etc., whereas the document accessed by school-going children must be simple and informative.
Reuse your previously created successful documents instead of starting from scratch each time you take on a new project.
Here are some tips regarding elements inside the XHTML document −
An XML declaration is not required in all XHTML documents but XHTML document authors are strongly encouraged to use XML declarations in all their documents. Such a declaration is required when the character encoding of the document is other than the default UTF-8 or UTF-16.
Include a space before the trailing / and > of empty elements. For example, <br />, <hr />, and <img src="/html/xhtml.gif" alt="xhtml" />.
Use external style sheets if your style sheet uses “<”, “&”, “]]>”, or “—”.
Use external scripts if your script uses “<”, “&”, “]]>”, or “—”.
Avoid line breaks and multiple whitespace characters within attribute values. These are handled inconsistently by different browsers.
Do not include more than one isindex element in the document head. The isindex element is deprecated in favor of the input element.
Use both the lang and xml:lang attributes while specifying the language of an element. The value of the xml:lang attribute takes precedence.
XHTML 1.0 has deprecated the name attributes of a, applet, form, frame, iframe, img, and map elements. They will be removed from XHTML in subsequent versions. Therefore, start using id element for element identification.
The ampersand character ("&") should be presented as the entity reference &amp;.
<!-- This is invalid in XHTML -->
http://my.site.dom/cgi-bin/myscript.pl?class=guest&name=user

<!-- Correct XHTML way of writing this is as follows -->
http://my.site.dom/cgi-bin/myscript.pl?class=guest&amp;name=user
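Rather than writing such entity references by hand, a bare ampersand can be escaped programmatically. A minimal sketch using Python's standard library (the URL is the hypothetical one from the example above):

```python
from xml.sax.saxutils import escape

# escape() replaces the XML-significant characters &, < and >
url = "http://my.site.dom/cgi-bin/myscript.pl?class=guest&name=user"
print(escape(url))
# http://my.site.dom/cgi-bin/myscript.pl?class=guest&amp;name=user
```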
Some characters that are legal in HTML documents are illegal in XML documents. For example, in HTML the form-feed character (U+000C) is treated as white space; in XHTML, due to XML's definition of characters, it is illegal.
The named character reference &apos; (the apostrophe, U+0027) was introduced in XML 1.0 but does not appear in HTML. Web developers should therefore use &#39; instead of &apos; to work as expected in HTML 4 web browsers.
Every XHTML document is validated against a Document Type Definition. Before validating an XHTML file properly, a correct DTD must be added as the first or second line of the file.
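A quick way to catch malformed markup before submitting a page to a validator is to run it through any XML parser. The sketch below uses Python's standard library; note that this checks well-formedness only, not conformance to the DTD:

```python
import xml.etree.ElementTree as ET

# Minimal XHTML page (DOCTYPE omitted for brevity; stdlib parsers
# check well-formedness only -- full DTD validation needs an
# external tool such as the online W3C Validator).
doc = """<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Minimal page</title></head>
  <body><p>Hello</p></body>
</html>"""

# fromstring() needs bytes when an encoding declaration is present
root = ET.fromstring(doc.encode("utf-8"))  # ET.ParseError if malformed
print("well-formed, root element:", root.tag)
```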
Once you are ready to validate your XHTML document, you can use the W3C Validator
to validate it. This tool is very handy, helps you fix problems with your
document, and does not require any expertise to use. You need to give the
complete URL of the page you want to validate and then click the Validate
Page button.
This validator checks the markup validity of web documents with various formats especially in HTML, XHTML, SMIL, MathML, etc.
There are other tools to perform different other validations.
RSS/Atom feeds Validator
CSS stylesheets Validator
Find Broken Links
Other validators and tools
We assume you have understood all the concepts related to XHTML. Therefore, you should be able to write your HTML document into a well-formed XHTML document and get a cleaner version of your website.
You can convert your existing HTML website into XHTML website.
Let us go through some important steps. To convert your existing document, you must first decide which DTD you are going to adhere to, and include the document type declaration at the top of the document.
Make sure you have all other required elements. These include a root element <html> that indicates an XML namespace, a <head> element, a <title> element contained within the <head> element, and a <body> element.
Convert all element keywords and attribute names to lowercase.
Ensure that all attributes are in a name="value" format.
Make sure that all container elements have closing tags.
Place a forward slash inside all standalone elements. For example, rewrite all <br> elements as <br />.
Designate client-side script code and style sheet code as CDATA sections.
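Two of the mechanical conversion steps above (lower-casing element names and closing standalone elements) can be sketched with simple text substitution. This is only an illustration; a real converter such as HTML Tidy uses a full parser and also handles attributes, nesting, and edge cases:

```python
import re

def to_xhtml(html: str) -> str:
    """Very rough sketch of two HTML-to-XHTML conversion steps."""
    # Lower-case element names in start and end tags
    html = re.sub(r'</?([A-Za-z][A-Za-z0-9]*)',
                  lambda m: m.group(0).lower(), html)
    # Give standalone elements a trailing slash, e.g. <br> -> <br />
    html = re.sub(r'<(br|hr)\s*>', r'<\1 />', html)
    return html

print(to_xhtml('<P>Hello<BR></P>'))  # <p>Hello<br /></p>
```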
XHTML is still being improved, and its next version, XHTML 1.1, has been drafted. We have discussed this in detail in the XHTML Version 1.1 chapter.
XHTML tags, characters, and entities are same as HTML, so if you already know HTML then you do not need to put extra effort to learn these subjects, especially for XHTML. We have listed out all HTML stuff along with XHTML tutorial also, because they are applicable to XHTML as well.
We have listed out various resources for XHTML and HTML, so if you are interested and have time in hand, we recommend you go through these resources to enhance your understanding of XHTML. Even otherwise, this tutorial should have given you enough knowledge to write your web pages using XHTML.
Your feedback on this tutorial is welcome at [email protected].
[
{
"code": null,
"e": 1920,
"s": 1749,
"text": "XHTML stands for EXtensible HyperText Markup Language. It is the next step in the evolution of the internet. The XHTML 1.0 is the first document type in the XHTML family."
},
{
"code": null,
"e": 2142,
"s": 1920,
"text": "XHTML is almost identical to HTML 4.01 with only few differences. This is a cleaner and stricter version of HTML 4.01. If you already know HTML, then you need to give little attention to learn this latest version of HTML."
},
{
"code": null,
"e": 2441,
"s": 2142,
"text": "XHTML was developed by World Wide Web Consortium (W3C) to help web developers make the transition from HTML to XML. By migrating to XHTML today, web developers can enter the XML world with all of its benefits, while still remaining confident in the backward and future compatibility of the content."
},
{
"code": null,
"e": 2520,
"s": 2441,
"text": "Developers who migrate their content to XHTML 1.0 get the following benefits −"
},
{
"code": null,
"e": 2630,
"s": 2520,
"text": "XHTML documents are XML conforming as they are readily viewed, edited, and validated with standard XML tools."
},
{
"code": null,
"e": 2740,
"s": 2630,
"text": "XHTML documents are XML conforming as they are readily viewed, edited, and validated with standard XML tools."
},
{
"code": null,
"e": 2859,
"s": 2740,
"text": "XHTML documents can be written to operate better than they did before in existing browsers as well as in new browsers."
},
{
"code": null,
"e": 2978,
"s": 2859,
"text": "XHTML documents can be written to operate better than they did before in existing browsers as well as in new browsers."
},
{
"code": null,
"e": 3134,
"s": 2978,
"text": "XHTML documents can utilize applications such as scripts and applets that rely upon either the HTML Document Object Model or the XML Document Object Model."
},
{
"code": null,
"e": 3290,
"s": 3134,
"text": "XHTML documents can utilize applications such as scripts and applets that rely upon either the HTML Document Object Model or the XML Document Object Model."
},
{
"code": null,
"e": 3441,
"s": 3290,
"text": "XHTML gives you a more consistent, well-structured format so that your webpages can be easily parsed and processed by present and future web browsers."
},
{
"code": null,
"e": 3592,
"s": 3441,
"text": "XHTML gives you a more consistent, well-structured format so that your webpages can be easily parsed and processed by present and future web browsers."
},
{
"code": null,
"e": 3673,
"s": 3592,
"text": "You can easily maintain, edit, convert and format your document in the long run."
},
{
"code": null,
"e": 3754,
"s": 3673,
"text": "You can easily maintain, edit, convert and format your document in the long run."
},
{
"code": null,
"e": 3894,
"s": 3754,
"text": "Since XHTML is an official standard of the W3C, your website becomes more compatible with many browsers and it is rendered more accurately."
},
{
"code": null,
"e": 4034,
"s": 3894,
"text": "Since XHTML is an official standard of the W3C, your website becomes more compatible with many browsers and it is rendered more accurately."
},
{
"code": null,
"e": 4138,
"s": 4034,
"text": "XHTML combines strength of HTML and XML. Also, XHTML pages can be rendered by all XML enabled browsers."
},
{
"code": null,
"e": 4242,
"s": 4138,
"text": "XHTML combines strength of HTML and XML. Also, XHTML pages can be rendered by all XML enabled browsers."
},
{
"code": null,
"e": 4425,
"s": 4242,
"text": "XHTML defines quality standard for your webpages and if you follow that, then your web pages are counted as quality web pages. The W3C certifies those pages with their quality stamp."
},
{
"code": null,
"e": 4608,
"s": 4425,
"text": "XHTML defines quality standard for your webpages and if you follow that, then your web pages are counted as quality web pages. The W3C certifies those pages with their quality stamp."
},
{
"code": null,
"e": 5102,
"s": 4608,
"text": "Web developers and web browser designers are constantly discovering new ways to express their ideas through new markup languages. In XML, it is relatively easy to introduce new elements or additional element attributes. The XHTML family is designed to accommodate these extensions through XHTML modules and techniques for developing new XHTML-conforming modules. These modules permit the combination of existing and new features at the time of developing content and designing new user agents."
},
{
"code": null,
"e": 5187,
"s": 5102,
"text": "Before we proceed further, let us have a quick view on what are HTML, XML, and SGML."
},
{
"code": null,
"e": 5381,
"s": 5187,
"text": "This is Standard Generalized Markup Language (SGML) application conforming to International Standard ISO 8879. HTML is widely regarded as the standard publishing language of the World Wide Web."
},
{
"code": null,
"e": 5586,
"s": 5381,
"text": "This is a language for describing markup languages, particularly those used in electronic document exchange, document management, and document publishing. HTML is an example of a language defined in SGML."
},
{
"code": null,
"e": 5791,
"s": 5586,
"text": "XML stands for EXtensible Markup Language. XML is a markup language much like HTML and it was designed to describe data. XML tags are not predefined. You must define your own tags according to your needs."
},
{
"code": null,
"e": 6025,
"s": 5791,
"text": "XHTML syntax is very similar to HTML syntax and almost all the valid HTML elements are valid in XHTML as well. But when you write an XHTML document, you need to pay a bit extra attention to make your HTML document compliant to XHTML."
},
{
"code": null,
"e": 6161,
"s": 6025,
"text": "Here are the important points to remember while writing a new XHTML document or converting existing HTML document into XHTML document −"
},
{
"code": null,
"e": 6225,
"s": 6161,
"text": "Write a DOCTYPE declaration at the start of the XHTML document."
},
{
"code": null,
"e": 6289,
"s": 6225,
"text": "Write a DOCTYPE declaration at the start of the XHTML document."
},
{
"code": null,
"e": 6345,
"s": 6289,
"text": "Write all XHTML tags and attributes in lower case only."
},
{
"code": null,
"e": 6401,
"s": 6345,
"text": "Write all XHTML tags and attributes in lower case only."
},
{
"code": null,
"e": 6432,
"s": 6401,
"text": "Close all XHTML tags properly."
},
{
"code": null,
"e": 6463,
"s": 6432,
"text": "Close all XHTML tags properly."
},
{
"code": null,
"e": 6491,
"s": 6463,
"text": "Nest all the tags properly."
},
{
"code": null,
"e": 6519,
"s": 6491,
"text": "Nest all the tags properly."
},
{
"code": null,
"e": 6551,
"s": 6519,
"text": "Quote all the attribute values."
},
{
"code": null,
"e": 6583,
"s": 6551,
"text": "Quote all the attribute values."
},
{
"code": null,
"e": 6614,
"s": 6583,
"text": "Forbid Attribute minimization."
},
{
"code": null,
"e": 6645,
"s": 6614,
"text": "Forbid Attribute minimization."
},
{
"code": null,
"e": 6695,
"s": 6645,
"text": "Replace the name attribute with the id attribute."
},
{
"code": null,
"e": 6745,
"s": 6695,
"text": "Replace the name attribute with the id attribute."
},
{
"code": null,
"e": 6797,
"s": 6745,
"text": "Deprecate the language attribute of the script tag."
},
{
"code": null,
"e": 6849,
"s": 6797,
"text": "Deprecate the language attribute of the script tag."
},
{
"code": null,
"e": 6907,
"s": 6849,
"text": "Here is the detail explanation of the above XHTML rules −"
},
{
"code": null,
"e": 7115,
"s": 6907,
"text": "All XHTML documents must have a DOCTYPE declaration at the start. There are three types of DOCTYPE declarations, which are discussed in detail in XHTML Doctypes chapter. Here is an example of using DOCTYPE −"
},
{
"code": null,
"e": 7238,
"s": 7115,
"text": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\"\n\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n"
},
{
"code": null,
"e": 7352,
"s": 7238,
"text": "XHTML is case sensitive markup language. All the XHTML tags and attributes need to be written in lower case only."
},
{
"code": null,
"e": 7556,
"s": 7352,
"text": "<!-- This is invalid in XHTML -->\n<A Href=\"/xhtml/xhtml_tutorial.html\">XHTML Tutorial</A>\n\n<!-- Correct XHTML way of writing this is as follows -->\n<a href=\"/xhtml/xhtml_tutorial.html\">XHTML Tutorial</a>"
},
{
"code": null,
"e": 7637,
"s": 7556,
"text": "In the example, Href and anchor tag A are not in lower case, so it is incorrect."
},
{
"code": null,
"e": 7818,
"s": 7637,
"text": "Each and every XHTML tag should have an equivalent closing tag, even empty elements should also have closing tags. Here is an example showing valid and invalid ways of using tags −"
},
{
"code": null,
"e": 7983,
"s": 7818,
"text": "<!-- This is invalid in XHTML -->\n<p>This paragraph is not written according to XHTML syntax.\n\n<!-- This is also invalid in XHTML -->\n<img src=\"/images/xhtml.gif\" >"
},
{
"code": null,
"e": 8122,
"s": 7983,
"text": "The following syntax shows the correct way of writing above tags in XHTML. Difference is that, here we have closed both the tags properly."
},
{
"code": null,
"e": 8283,
"s": 8122,
"text": "<!-- This is valid in XHTML -->\n<p>This paragraph is not written according to XHTML syntax.</p>\n\n<!-- This is also valid now -->\n<img src=\"/images/xhtml.gif\" />"
},
{
"code": null,
"e": 8437,
"s": 8283,
"text": "All the values of XHTML attributes must be quoted. Otherwise, your XHTML document is assumed as an invalid document. Here is the example showing syntax −"
},
{
"code": null,
"e": 8637,
"s": 8437,
"text": "<!-- This is invalid in XHTML -->\n<img src=\"/images/xhtml.gif\" width=250 height=50 />\n\n<!-- Correct XHTML way of writing this is as follows -->\n<img src=\"/images/xhtml.gif\" width=\"250\" height=\"50\" />"
},
{
"code": null,
"e": 8794,
"s": 8637,
"text": "XHTML does not allow attribute minimization. It means you need to explicitly state the attribute and its value. The following example shows the difference −"
},
{
"code": null,
"e": 8933,
"s": 8794,
"text": "<!-- This is invalid in XHTML -->\n<option selected>\n\n<!-- Correct XHTML way of writing this is as follows -->\n<option selected=\"selected\">"
},
{
"code": null,
"e": 9030,
"s": 8933,
"text": "Here is a list of the minimized attributes in HTML and the way you need to write them in XHTML −"
},
{
"code": null,
"e": 9174,
"s": 9030,
"text": "The id attribute replaces the name attribute. Instead of using name = \"name\", XHTML prefers to use id = \"id\". The following example shows how −"
},
{
"code": null,
"e": 9364,
"s": 9174,
"text": "<!-- This is invalid in XHTML -->\n<img src=\"/images/xhtml.gif\" name=\"xhtml_logo\" />\n\n<!-- Correct XHTML way of writing this is as follows -->\n<img src=\"/images/xhtml.gif\" id=\"xhtml_logo\" />"
},
{
"code": null,
"e": 9466,
"s": 9364,
"text": "The language attribute of the script tag is deprecated. The following example shows this difference −"
},
{
"code": null,
"e": 9736,
"s": 9466,
"text": "<!-- This is invalid in XHTML -->\n\n<script language=\"JavaScript\" type=\"text/JavaScript\">\n document.write(\"Hello XHTML!\");\n</script>\n\n<!-- Correct XHTML way of writing this is as follows -->\n\n<script type=\"text/JavaScript\">\n document.write(\"Hello XHTML!\");\n</script>"
},
{
"code": null,
"e": 9887,
"s": 9736,
"text": "You must nest all the XHTML tags properly. Otherwise your document is assumed as an incorrect XHTML document. The following example shows the syntax −"
},
{
"code": null,
"e": 10067,
"s": 9887,
"text": "<!-- This is invalid in XHTML -->\n<b><i> This text is bold and italic</b></i>\n\n<!-- Correct XHTML way of writing this is as follows -->\n<b><i> This text is bold and italic</i></b>"
},
{
"code": null,
"e": 10244,
"s": 10067,
"text": "The following elements are not allowed to have any other element inside them. This prohibition applies to all depths of nesting. Means, it includes all the descending elements."
},
{
"code": null,
"e": 10321,
"s": 10244,
"text": "The following example shows you a minimum content of an XHTML 1.0 document −"
},
{
"code": null,
"e": 10692,
"s": 10321,
"text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\"\n\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n\n<html xmlns=\"http://www.w3.org/1999/xhtml\" xml:lang=\"en\" lang=\"en\">\n <head>\n <title>Every document must have a title</title>\n </head>\n\t\n <body>\n ...your content goes here...\n </body>\n</html>"
},
{
"code": null,
"e": 10988,
"s": 10692,
"text": "Due to the fact that XHTML is an XML application, certain practices that were perfectly legal in SGML-based HTML 4 must be changed. You already have seen XHTML syntax in previous chapter, so differences between XHTML and HTML are very obvious. Following is the comparison between XHTML and HTML."
},
{
"code": null,
"e": 11137,
"s": 10988,
"text": "Well-formedness is a new concept introduced by XML. Essentially, this means all the elements must have closing tags and you must nest them properly."
},
{
"code": null,
"e": 11162,
"s": 11137,
"text": "CORRECT: Nested Elements"
},
{
"code": null,
"e": 11211,
"s": 11162,
"text": "<p>Here is an emphasized <em>paragraph</em>.</p>"
},
{
"code": null,
"e": 11243,
"s": 11211,
"text": "INCORRECT: Overlapping Elements"
},
{
"code": null,
"e": 11292,
"s": 11243,
"text": "<p>Here is an emphasized <em>paragraph.</p></em>"
},
{
"code": null,
"e": 11530,
"s": 11292,
"text": "XHTML documents must use lower case for all HTML elements and attribute names. This difference is necessary because XHTML document is assumed to be an XML document and XML is case-sensitive. For example, <li> and <LI> are different tags."
},
{
"code": null,
"e": 11638,
"s": 11530,
"text": "In HTML, certain elements are permitted to omit the end tag. But XML does not allow end tags to be omitted."
},
{
"code": null,
"e": 11667,
"s": 11638,
"text": "CORRECT: Terminated Elements"
},
{
"code": null,
"e": 11738,
"s": 11667,
"text": "<p>Here is a paragraph.</p><p>here is another paragraph.</p>\n<br /><hr />"
},
{
"code": null,
"e": 11771,
"s": 11738,
"text": "INCORRECT: Unterminated Elements"
},
{
"code": null,
"e": 11833,
"s": 11771,
"text": "<p>Here is a paragraph.<p>here is another paragraph.\n<br><hr>"
},
{
"code": null,
"e": 11896,
"s": 11833,
"text": "All attribute values including numeric values, must be quoted."
},
{
"code": null,
"e": 11929,
"s": 11896,
"text": "CORRECT: Quoted Attribute Values"
},
{
"code": null,
"e": 11946,
"s": 11929,
"text": "<td rowspan=\"3\">"
},
{
"code": null,
"e": 11983,
"s": 11946,
"text": "INCORRECT: Unquoted Attribute Values"
},
{
"code": null,
"e": 11998,
"s": 11983,
"text": "<td rowspan=3>"
},
{
"code": null,
"e": 12196,
"s": 11998,
"text": "XML does not support attribute minimization. Attribute-value pairs must be written in full. Attribute names such as compact and checked cannot occur in elements without their value being specified."
},
{
"code": null,
"e": 12230,
"s": 12196,
"text": "CORRECT: Non Minimized Attributes"
},
{
"code": null,
"e": 12253,
"s": 12230,
"text": "<dl compact=\"compact\">"
},
{
"code": null,
"e": 12285,
"s": 12253,
"text": "INCORRECT: Minimized Attributes"
},
{
"code": null,
"e": 12298,
"s": 12285,
"text": "<dl compact>"
},
{
"code": null,
"e": 12359,
"s": 12298,
"text": "When a browser processes attributes, it does the following −"
},
{
"code": null,
"e": 12399,
"s": 12359,
"text": "Strips leading and trailing whitespace."
},
{
"code": null,
"e": 12439,
"s": 12399,
"text": "Strips leading and trailing whitespace."
},
{
"code": null,
"e": 12546,
"s": 12439,
"text": "Maps sequences of one or more white space characters (including line breaks) to a single inter-word space."
},
{
"code": null,
"e": 12653,
"s": 12546,
"text": "Maps sequences of one or more white space characters (including line breaks) to a single inter-word space."
},
{
"code": null,
"e": 12945,
"s": 12653,
"text": "In XHTML, the script and style elements should not have “<” and “&” characters directly, if they exist; then they are treated as the start of markup. The entities such as “<” and “&” are recognized as entity references by the XML processor for displaying “<” and “&” characters respectively."
},
{
"code": null,
"e": 13067,
"s": 12945,
"text": "Wrapping the content of the script or style element within a CDATA marked section avoids the expansion of these entities."
},
{
"code": null,
"e": 13179,
"s": 13067,
"text": "<script type=\"text/JavaScript\">\n <![CDATA[\n ... unescaped VB or Java Script here... ...\n ]]>\n</script>"
},
{
"code": null,
"e": 13241,
"s": 13179,
"text": "An alternative is to use external script and style documents."
},
{
"code": null,
"e": 13454,
"s": 13241,
"text": "XHTML recommends the replacement of name attribute with id attribute. Note that in XHTML 1.0, the name attribute of these elements is formally deprecated, and it will be removed in a subsequent versions of XHTML."
},
{
"code": null,
"e": 13777,
"s": 13454,
"text": "HTML and XHTML both have some attributes that have pre-defined and limited sets of values. For example, type attribute of the input element. In HTML and XML, these are called enumerated attributes. Under HTML 4, the interpretation of these values was case-insensitive, so a value of TEXT was equivalent to a value of text."
},
{
"code": null,
"e": 13893,
"s": 13777,
"text": "Under XHTML, the interpretation of these values is case-sensitive so all of these values are defined in lower-case."
},
{
"code": null,
"e": 14140,
"s": 13893,
"text": "HTML and XML both permit references to characters by using hexadecimal value. In HTML these references could be made using either &#Xnn; or &#xnn; and they are valid but in XHTML documents, you must use the lower-case version only such as &#xnn;."
},
{
"code": null,
"e": 14354,
"s": 14140,
"text": "All XHTML elements must be nested within the <html> root element. All other elements can have sub elements which must be in pairs and correctly nested within their parent element. The basic document structure is −"
},
{
"code": null,
"e": 14434,
"s": 14354,
"text": "<!DOCTYPE html....>\n\n<html>\n <head> ... </head>\n <body> ... </body>\n</html>"
},
{
"code": null,
"e": 14573,
"s": 14434,
"text": "The XHTML standard defines three Document Type Definitions (DTDs). The most commonly used and easy one is the XHTML Transitional document."
},
{
"code": null,
"e": 14636,
"s": 14573,
"text": "XHTML 1.0 document type definitions correspond to three DTDs −"
},
{
"code": null,
"e": 14643,
"s": 14636,
"text": "Strict"
},
{
"code": null,
"e": 14656,
"s": 14643,
"text": "Transitional"
},
{
"code": null,
"e": 14665,
"s": 14656,
"text": "Frameset"
},
{
"code": null,
"e": 14972,
"s": 14665,
"text": "There are few XHTML elements and attributes, which are available in one DTD but not available in another DTD. Therefore, while writing your XHTML document, you must select your XHTML elements or attributes carefully. However, XHTML validator helps you to identify valid and invalid elements and attributes."
},
{
"code": null,
"e": 15028,
"s": 14972,
"text": "Please check XHTML Validations for more detail on this."
},
{
"code": null,
"e": 15242,
"s": 15028,
"text": "If you are planning to use Cascading Style Sheet (CSS) strictly and avoiding to write most of the XHTML attributes, then it is recommended to use this DTD. A document conforming to this DTD is of the best quality."
},
{
"code": null,
"e": 15361,
"s": 15242,
"text": "If you want to use XHTML 1.0 Strict DTD then you need to include the following line at the top of your XHTML document."
},
{
"code": null,
"e": 15471,
"s": 15361,
"text": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"
},
{
"code": null,
"e": 15654,
"s": 15471,
"text": "If you are planning to use many XHTML attributes as well as few Cascading Style Sheet properties, then you should adopt this DTD and you should write your XHTML document accordingly."
},
{
"code": null,
"e": 15780,
"s": 15654,
"text": "If you want to use XHTML 1.0 Transitional DTD, then you need to include the following line at the top of your XHTML document."
},
{
"code": null,
"e": 15902,
"s": 15780,
"text": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">"
},
{
"code": null,
"e": 16009,
"s": 15902,
"text": "You can use this when you want to use HTML Frames to partition the browser window into two or more frames."
},
{
"code": null,
"e": 16127,
"s": 16009,
"text": "If you want to use XHTML 1.0 Frameset DTD, then you need to include following line at the top of your XHTML document."
},
{
"code": null,
"e": 16241,
"s": 16127,
"text": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Frameset//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd\">"
},
{
"code": null,
"e": 16405,
"s": 16241,
"text": "Note − No matter what DTD you are using to write your XHTML document; if it is a valid XHTML document, then your document is considered as a good quality document."
},
{
"code": null,
"e": 16563,
"s": 16405,
"text": "There are a few XHTML/HTML attributes which are standard and associated to all the XHTML/HTML tags. These attributes are listed here with brief description −"
},
{
"code": null,
"e": 16642,
"s": 16563,
"text": "Not valid in base, head, html, meta, param, script, style, and title elements."
},
{
"code": null,
"e": 16905,
"s": 16642,
"text": "The lang attribute indicates the language being used for the enclosed content. The language is identified using the ISO standard language abbreviations, such as fr for French, en for English, and so on. More codes and their formats are described at www.ietf.org."
},
{
"code": null,
"e": 16985,
"s": 16905,
"text": "Not valid in base, br, frame, frameset, hr, iframe, param, and script elements."
},
{
"code": null,
"e": 17095,
"s": 16985,
"text": "Microsoft introduced a number of new proprietary attributes with the Internet Explorer 4 and higher versions."
},
{
"code": null,
"e": 17260,
"s": 17095,
"text": "When users visit a website, they do things such as click on text, images and hyperlinks, hover-over things, etc. These are examples of what JavaScript calls events."
},
{
"code": null,
"e": 17479,
"s": 17260,
"text": "We can write our event handlers in JavaScript or VBScript and can specify these event handlers as a value of event tag attribute. The XHTML 1.0 has a similar set of events which is available in HTML 4.01 specification."
},
{
"code": null,
"e": 17612,
"s": 17479,
"text": "There are only two attributes which can be used to trigger any JavaScript or VBScript code, when any event occurs at document level."
},
{
"code": null,
"e": 17703,
"s": 17612,
"text": "Note − Here, the script refers to any function or piece of code of VBScript or JavaScript."
},
{
"code": null,
"e": 17836,
"s": 17703,
"text": "There are following six attributes which can be used to trigger any JavaScript or VBScript code when any event occurs at form level."
},
{
"code": null,
"e": 18020,
"s": 17836,
"text": "The following three events are generated by keyboard. These events are not valid in base, bdo, br, frame, frameset, head, html, iframe, meta, param, script, style, and title elements."
},
{
"code": null,
"e": 18244,
"s": 18020,
"text": "The following seven events are generated by mouse when it comes in contact with any HTML tag. These events are not valid in base, bdo, br, frame, frameset, head, html, iframe, meta, param, script, style, and title elements."
},
{
"code": null,
"e": 18544,
"s": 18244,
"text": "The W3C has helped move the internet content-development community from the days of malformed, non-standard mark-up into the well-formed, valid world of XML. In XHTML 1.0, this move was moderated by the goal of providing easy migration of existing HTML 4 (or earlier) based content to XHTML and XML."
},
{
"code": null,
"e": 18789,
"s": 18544,
"text": "The W3C has removed support for deprecated elements and attributes from the XHTML family. These elements and attributes had largely presentation-oriented functionality that is better handled via style sheets or client-specific default behavior."
},
{
"code": null,
"e": 19045,
"s": 18789,
"text": "Now the W3C's HTML Working Group has defined an initial document type based solely upon modules which are XHTML 1.1. This document type is designed to be portable to a broad collection of client devices, and applicable to the majority of internet content."
},
{
"code": null,
"e": 19165,
"s": 19045,
"text": "The XHTML 1.1 provides a definition of strictly conforming XHTML documents which MUST meet all the following criteria −"
},
{
"code": null,
"e": 19259,
"s": 19165,
"text": "The document MUST conform to the constraints expressed in XHTML 1.1 Document Type Definition."
},
{
"code": null,
"e": 19353,
"s": 19259,
"text": "The document MUST conform to the constraints expressed in XHTML 1.1 Document Type Definition."
},
{
"code": null,
"e": 19402,
"s": 19353,
"text": "The root element of the document MUST be <html>."
},
{
"code": null,
"e": 19451,
"s": 19402,
"text": "The root element of the document MUST be <html>."
},
{
"code": null,
"e": 19546,
"s": 19451,
"text": "The root element of the document MUST designate the XHTML namespace using the xmlns attribute."
},
{
"code": null,
"e": 19641,
"s": 19546,
"text": "The root element of the document MUST designate the XHTML namespace using the xmlns attribute."
},
{
"code": null,
"e": 19733,
"s": 19641,
"text": "The root element MAY also contain a schema location attribute as defined in the XML Schema."
},
{
"code": null,
"e": 19825,
"s": 19733,
"text": "The root element MAY also contain a schema location attribute as defined in the XML Schema."
},
{
"code": null,
"e": 20044,
"s": 19825,
"text": "There MUST be a DOCTYPE declaration in the document prior to the root element. If it is present, the public identifier included in the DOCTYPE declaration MUST refer the DTD found in XHTML 1.1 Document Type Definition."
},
{
"code": null,
"e": 20090,
"s": 20044,
"text": "Here is an example of an XHTML 1.1 document −"
},
{
"code": null,
"e": 20582,
"s": 20090,
"text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.1//EN\" \"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd\">\n\n<html xmlns=\"http://www.w3.org/1999/xhtml\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.w3.org/MarkUp/SCHEMA/xhtml11.xsd\" xml:lang=\"en\">\n\t\n <head>\n <title>This is the document title</title>\n </head>\n\t\n <body>\n <p>Moved to <a href=\"http://example.org/\">example.org</a>.</p>\n </body>\n\t\n</html>"
},
{
"code": null,
"e": 20931,
"s": 20582,
"text": "Note − In this example, the XML declaration is included. An XML declaration such as the one above is not required in all XML documents. XHTML document authors are strongly encouraged to use XML declarations in all their documents. Such a declaration is required when the character encoding of the document is other than the default UTF-8 or UTF-16."
},
{
"code": null,
"e": 21002,
"s": 20931,
"text": "The XHTML 1.1 document type is made up of the following XHTML modules."
},
{
"code": null,
"e": 21281,
"s": 21002,
"text": "Structure Module − The Structure Module defines the major structural elements for XHTML. These elements effectively act as the basis for the content model of many XHTML family document types. The elements and attributes included in this module are − body, head, html, and title."
},
{
"code": null,
"e": 21530,
"s": 21281,
"text": "Text Module − This module defines all of the basic text container elements, attributes, and their content model − abbr, acronym, address, blockquote, br, cite, code, dfn, div, em, h1, h2, h3, h4, h5, h6, kbd, p, pre, q, samp, span, strong, and var."
},
{
"code": null,
"e": 21682,
"s": 21530,
"text": "Hypertext Module − The Hypertext Module provides the element that is used to define hypertext links to other resources. This module supports element a."
},
{
"code": null,
"e": 21874,
"s": 21682,
"text": "List Module − As its name suggests, the List Module provides list-oriented elements. Specifically, the List Module supports the following elements and attributes − dl, dt, dd, ol, ul, and li."
},
{
"code": null,
"e": 22025,
"s": 21874,
"text": "Object Module − The Object Module provides elements for general-purpose object inclusion. Specifically, the Object Module supports − object and param."
},
{
"code": null,
"e": 22198,
"s": 22025,
"text": "Presentation Module − This module defines elements, attributes, and a minimal content model for simple presentation-related markup − b, big, hr, i, small, sub, sup, and tt."
},
{
"code": null,
"e": 22305,
"s": 22198,
"text": "Edit Module − This module defines elements and attributes for use in editing-related markup − del and ins."
},
{
"code": null,
"e": 22469,
"s": 22305,
"text": "Bidirectional Text Module − The Bi-directional Text module defines an element that can be used to declare the bi-directional rules for the element's content − bdo."
},
{
"code": null,
"e": 22651,
"s": 22469,
"text": "Forms Module − It provides all the form features found in HTML 4.0. Specifically, it supports − button, fieldset, form, input, label, legend, select, optgroup, option, and textarea."
},
{
"code": null,
"e": 22802,
"s": 22651,
"text": "Table Module − It supports the following elements, attributes, and content model − caption, col, colgroup, table, tbody, td, tfoot, th, thead, and tr."
},
{
"code": null,
"e": 22963,
"s": 22802,
"text": "Image Module − It provides basic image embedding and may be used in some implementations of client side image maps independently. It supports the element − img."
},
{
"code": null,
"e": 23058,
"s": 22963,
"text": "Client-side Image Map Module − It provides elements for client side image maps − area and map."
},
{
"code": null,
"e": 23240,
"s": 23058,
"text": "Server-side Image Map Module − It provides support for image-selection and transmission of selection coordinates. The Server-side Image Map Module supports − attribute ismap on img."
},
{
"code": null,
"e": 23320,
"s": 23240,
"text": "Intrinsic Events Module − It supports all the events discussed in XHTML Events."
},
{
"code": null,
"e": 23492,
"s": 23320,
"text": "Meta information Module − The Meta information Module defines an element that describes information within the declarative portion of a document. It includes element meta."
},
{
"code": null,
"e": 23718,
"s": 23492,
"text": "Scripting Module − It defines the elements used to contain information pertaining to executable scripts or the lack of support for executable scripts. Elements and attributes included in this module are − noscript and script."
},
{
"code": null,
"e": 23871,
"s": 23718,
"text": "Style Sheet Module − It defines an element to be used when declaring internal style sheets. The element and attribute defined by this module is − style."
},
{
"code": null,
"e": 23941,
"s": 23871,
"text": "Style Attribute Module (Deprecated) − It defines the style attribute."
},
{
"code": null,
"e": 24059,
"s": 23941,
"text": "Link Module − It defines an element that can be used to define links to external resources. It supports link element."
},
{
"code": null,
"e": 24252,
"s": 24059,
"text": "Base Module − It defines an element that can be used to define a base URI against which relative URIs in the document are resolved. The element and attribute included in this module is − base."
},
{
"code": null,
"e": 24386,
"s": 24252,
"text": "Ruby Annotation Module − XHTML also uses the Ruby Annotation module as defined in RUBY and supports − ruby, rbc, rtc, rb, rt, and rp."
},
{
"code": null,
"e": 24530,
"s": 24386,
"text": "This section describes the differences between XHTML 1.1 and XHTML 1.0 Strict. XHTML 1.1 represents a departure from both HTML 4 and XHTML 1.0."
},
{
"code": null,
"e": 24600,
"s": 24530,
"text": "The most significant is the removal of features that were deprecated."
},
{
"code": null,
"e": 24713,
"s": 24670,
"text": "The changes can be summarized as follows −"
},
{
"code": null,
"e": 24846,
"s": 24756,
"text": "On every element, the lang attribute has been removed in favor of the xml:lang attribute."
},
{
"code": null,
"e": 25033,
"s": 24936,
"text": "On the <a> and <map> elements, the name attribute has been removed in favor of the id attribute."
},
{
"code": null,
"e": 25178,
"s": 25130,
"text": "The ruby collection of elements has been added."
},
{
"code": null,
"e": 25398,
"s": 25226,
"text": "This chapter lists out various tips and tricks which you should be aware of while writing an XHTML document. These tips and tricks can help you create effective documents."
},
{
"code": null,
"e": 25461,
"s": 25398,
"text": "Here are some basic guidelines for designing XHTML documents −"
},
{
"code": null,
"e": 25685,
"s": 25461,
"text": "When you think about satisfying what your audience wants, you need to design effective and catchy documents to serve that purpose. Your document should make it easy to find the required information and should offer a familiar environment."
},
{
"code": null,
"e": 25935,
"s": 25685,
"text": "For example, Academicians or medical practitioners are comfortable with journal-like document with long sentences, complex diagrams, specific terminologies, etc., whereas the document accessed by school-going children must be simple and informative."
},
{
"code": null,
"e": 26052,
"s": 25935,
"text": "Reuse your previously created successful documents instead of starting from scratch each time you bag a new project."
},
{
"code": null,
"e": 26118,
"s": 26052,
"text": "Here are some tips regarding elements inside the XHTML document −"
},
{
"code": null,
"e": 26393,
"s": 26118,
"text": "An XML declaration is not required in all XHTML documents but XHTML document authors are strongly encouraged to use XML declarations in all their documents. Such a declaration is required when the character encoding of the document is other than the default UTF-8 or UTF-16."
},
{
"code": null,
"e": 26537,
"s": 26393,
"text": "Include a space before the trailing / and > of empty elements. For example, <br />, <hr />, and <img src=\"/html/xhtml.gif\" alt=\"xhtml\" />."
},
{
"code": null,
"e": 26613,
"s": 26537,
"text": "Use external style sheets if your style sheet uses “<”, “&”, “]]>”, or “—”."
},
{
"code": null,
"e": 26682,
"s": 26613,
"text": "Use external scripts if your script uses “<”, “&”, “]]>”, or “—”."
},
{
"code": null,
"e": 26816,
"s": 26682,
"text": "Avoid line breaks and multiple whitespace characters within attribute values. These are handled inconsistently by different browsers."
},
{
"code": null,
"e": 26948,
"s": 26816,
"text": "Do not include more than one isindex element in the document head. The isindex element is deprecated in favor of the input element."
},
{
"code": null,
"e": 27089,
"s": 26948,
"text": "Use both the lang and xml:lang attributes while specifying the language of an element. The value of the xml:lang attribute takes precedence."
},
{
"code": null,
"e": 27310,
"s": 27089,
"text": "XHTML 1.0 has deprecated the name attributes of a, applet, form, frame, iframe, img, and map elements. They will be removed from XHTML in subsequent versions. Therefore, start using id element for element identification."
},
{
"code": null,
"e": 27386,
"s": 27310,
"text": "The ampersand character (\"&\") should be presented as the entity reference &amp;."
},
{
"code": null,
"e": 27601,
"s": 27386,
"text": "<!-- This is invalid in XHTML -->\nhttp://my.site.dom/cgi-bin/myscript.pl?class=guest&name=user\n\n<!-- Correct XHTML way of writing this is as follows -->\nhttp://my.site.dom/cgi-bin/myscript.pl?class=guest&amp;name=user"
},
{
"code": null,
"e": 27825,
"s": 27601,
"text": "Some characters that are legal in HTML documents are illegal in XML documents. For example, in HTML the form-feed character (U+000C) is treated as white space; in XHTML, due to XML's definition of characters, it is illegal."
},
{
"code": null,
"e": 28036,
"s": 27825,
"text": "The named character reference &apos; (the apostrophe, U+0027) was introduced in XML 1.0 but does not appear in HTML. Web developers should therefore use &#39; instead of &apos; to work as expected in HTML 4 Web Browsers."
},
{
"code": null,
"e": 28217,
"s": 28036,
"text": "Every XHTML document is validated against a Document Type Definition. Before validating an XHTML file properly, a correct DTD must be added as the first or second line of the file."
},
{
"code": null,
"e": 28464,
"s": 28217,
"text": "Once you are ready to validate your XHTML document, you can use the W3C Validator to validate it. This tool is very handy and helps you fix problems with your document. It does not require any expertise to perform the validation."
},
{
"code": null,
"e": 28630,
"s": 28464,
"text": "You need to give the complete URL of the page that you want to validate and then click the Validate Page button."
},
{
"code": null,
"e": 28799,
"s": 28673,
"text": "This validator checks the markup validity of web documents with various formats especially in HTML, XHTML, SMIL, MathML, etc."
},
{
"code": null,
"e": 28861,
"s": 28799,
"text": "There are other tools to perform various other validations."
},
{
"code": null,
"e": 28886,
"s": 28861,
"text": "RSS/Atom feeds Validator"
},
{
"code": null,
"e": 28937,
"s": 28911,
"text": "CSS stylesheets Validator"
},
{
"code": null,
"e": 28981,
"s": 28963,
"text": "Find Broken Links"
},
{
"code": null,
"e": 29026,
"s": 28999,
"text": "Other validators and tools"
},
{
"code": null,
"e": 29253,
"s": 29053,
"text": "We assume you have understood all the concepts related to XHTML. Therefore, you should be able to write your HTML document into a well-formed XHTML document and get a cleaner version of your website."
},
{
"code": null,
"e": 29316,
"s": 29253,
"text": "You can convert your existing HTML website into XHTML website."
},
{
"code": null,
"e": 29516,
"s": 29316,
"text": "Let us go through some important steps. To convert your existing document, you must first decide which DTD you are going to adhere to, and include document type definition at the top of the document."
},
{
"code": null,
"e": 29728,
"s": 29516,
"text": "Make sure you have all other required elements. These include a root element <html> that indicates an XML namespace, a <head> element, a <title> element contained within the <head> element, and a <body> element."
},
{
"code": null,
"e": 30003,
"s": 29940,
"text": "Convert all element keywords and attribute names to lowercase."
},
{
"code": null,
"e": 30123,
"s": 30066,
"text": "Ensure that all attributes are in a name=\"value\" format."
},
{
"code": null,
"e": 30237,
"s": 30180,
"text": "Make sure that all container elements have closing tags."
},
{
"code": null,
"e": 30398,
"s": 30294,
"text": "Place a forward slash inside all standalone elements. For example, rewrite all <br> elements as <br />."
},
{
"code": null,
"e": 30576,
"s": 30502,
"text": "Designate client-side script code and style sheet code as CDATA sections."
},
{
"code": null,
"e": 30792,
"s": 30650,
"text": "XHTML is still being improved, and its next version, XHTML 1.1, has been drafted. We have discussed this in detail in the XHTML Version 1.1 chapter."
},
{
"code": null,
"e": 31075,
"s": 30792,
"text": "XHTML tags, characters, and entities are same as HTML, so if you already know HTML then you do not need to put extra effort to learn these subjects, especially for XHTML. We have listed out all HTML stuff along with XHTML tutorial also, because they are applicable to XHTML as well."
},
{
"code": null,
"e": 31373,
"s": 31075,
"text": "We have listed out various resources for XHTML and HTML so if you are interested and you have time in hand, then we recommend you to go through these resources to enhance your understanding on XHTML. Otherwise this tutorial must have given you enough knowledge to write your web pages using XHTML."
},
{
"code": null,
"e": 31446,
"s": 31373,
"text": "Your feedback on this tutorial is welcome at [email protected]."
}
] |
Apache Airflow Tips and Best Practices | Towards Data Science | When I first started building ETL pipelines with Airflow, I had so many memorable “aha” moments after figuring out why my pipelines didn’t run. As the tech documentation never covers everything, I tend to learn much more about a new tool from making mistakes and reading source code from good developers. In the blog post, I will share many tips and best practices for Airflow along with behind-the-scenes mechanisms to help you build more reliable and scalable data pipelines.
(New to Airflow? Check out the beginner’s guide to Airflow first.)
(Interested in ways to efficiently learn a tech stack? Check out the Systematic Learning Method.)
In Airflow, a DAG is triggered by the Airflow scheduler periodically based on the start_date and schedule_interval parameters specified in the DAG file. It is very common for beginners to get confused by Airflow’s job scheduling mechanism because it is unintuitive at first that the Airflow scheduler triggers a DAG run at the end of its schedule period, rather than at the beginning of it.
When a new DAG is created and picked up by Airflow, the Airflow scheduler materializes many DAG run entries along with corresponding schedule periods based on start_date and schedule_interval of the DAG, and each DAG run is triggered when its time dependency is met. For example, consider this sample DAG that runs daily at 7 am UTC:
default_args = {
    'owner': 'xinran.waibel',
    'start_date': datetime(2019, 12, 5),
}
dag = DAG('sample_dag', default_args=default_args, schedule_interval='0 7 * * *')
This DAG will have the following DAG runs created by the Airflow scheduler:
The first DAG run would be triggered after 7 am on 2019–12–06, at the end of its schedule period, instead of on the start date. Similarly, the rest of DAG runs would be executed every day at 7 am after that.
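The way schedule periods map to trigger times can be sketched with plain datetime arithmetic. This is a simplified illustration of the rule, not the scheduler's actual code:

```python
from datetime import datetime, timedelta

# Mirror of the sample DAG: start_date combined with the 7 am cron schedule.
start = datetime(2019, 12, 5, 7, 0)
interval = timedelta(days=1)  # schedule_interval='0 7 * * *'

# Each DAG run covers [execution_date, execution_date + interval)
# and is triggered at the END of that period.
runs = [(start + i * interval, start + (i + 1) * interval) for i in range(3)]

for execution_date, triggered_after in runs:
    print(f"execution_date={execution_date:%Y-%m-%d %H:%M}  "
          f"triggered after {triggered_after:%Y-%m-%d %H:%M}")
```

The first tuple is (2019-12-05 07:00, 2019-12-06 07:00): the run stamped with the start date only fires a full day later.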
The execution time in Airflow is not the actual run time, but rather the start timestamp of its schedule period. For example, the execution time of the first DAG run is 2019–12–05 7:00:00, though it is executed on 2019–12–06. However, if a DAG run is manually started by users, the execution time of this manual DAG run would be exactly when it was triggered. (To tell whether a DAG run is scheduled or manually triggered, you can look at the prefix of its DAG run ID: scheduled__ or manual__).
Based on Airflow’s scheduling mechanism described above, you should always use a static start_date for your DAGs to make sure DAG runs are populated as expected. Keep in mind the start_date is not necessarily when the first DAG run would be triggered.
One important concept in Airflow’s job scheduling is catchup. After materializing DAG run entries of a DAG, the Airflow scheduler will “backfill” all past DAG runs whose time dependency has been met if catchup is enabled. If catchup is turned off, then only the latest DAG run will be executed and those before it will not even show up in the DAG history. For example, assuming the sample DAG is picked up by Airflow at 8 am on 2019–12–08, three DAG runs will run if catchup is enabled. However, if catchup is turned off, only scheduled__2019–12–07T07:00:00+00:00 will be triggered.
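Which past runs get backfilled can be reasoned about the same way. A minimal sketch, assuming the sample DAG is first parsed at 8 am on 2019-12-08:

```python
from datetime import datetime, timedelta

start = datetime(2019, 12, 5, 7, 0)   # start_date plus the 7 am schedule
interval = timedelta(days=1)
now = datetime(2019, 12, 8, 8, 0)     # when Airflow first picks up the DAG

# Collect every schedule period that has fully elapsed by 'now'.
due = []
execution_date = start
while execution_date + interval <= now:
    due.append(execution_date)
    execution_date += interval

catchup = True
to_run = due if catchup else due[-1:]   # catchup=False keeps only the latest
print([d.strftime("%Y-%m-%d") for d in to_run])
```

With catchup enabled, all three periods (12-05 through 12-07) execute; with catchup disabled, only the 2019-12-07 run is triggered.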
There are 2 ways to configure the catchup setting in Airflow:
Airflow cluster level: Set catchup_by_default = True (by default) or False under thescheduler section in the Airflow configuration file airflow.cfg. This setting is applied to all DAGs unless a DAG-level catchup setting is specified.
DAG level: Set dag.catchup = True or False in the DAG file:
dag = DAG('sample_dag', catchup=False, default_args=default_args)
Because Airflow can backfill past DAG runs when catchup is enabled and each DAG run can be re-run manually at any time, it is important to make sure DAGs are idempotent and each DAG run is independent of each other and the actual run date. DAG idempotence means the result of running the same DAG run multiple times should be the same as the result of running it once.
Now let’s consider this primitive DAG that runs a Python function every day to retrieve daily marketing ads’ performance data from an API and load the data to a database:
The export_api_data Python function uses the datetime library to dynamically get yesterday’s date, download yesterday’s ads performance data from the API, and then insert the downloaded data into the destination database. However, this DAG will only produce correct results if no past DAG runs are backfilled and all DAG runs are executed exactly once. This is because there are two big issues in the DAG design:
If start_date is set to 2019–12–01 and the DAG is uploaded to Airflow bucket on 2019–12–08, then seven past DAG runs would run on 2019–12–08. Since yesterday’s date is obtained dynamically in export_api_data function, all the backfilled DAG runs will have yesterday = 2019–12–07 and therefore download and upload the same day’s data into the database.
When a DAG run is executed more than once, multiple copies of the same day’s ads data will be inserted into the database, causing unwanted duplicates in the database.
We can make a few improvements to the DAG file to solve these issues:
Instead of using datetime library, now we use {{ ds }} , one of Airflow’s built-in template variables, to get the execution date of the DAG run, which is independent of its actual DAG run date.
Before the previous day’s ads data is inserted to the database, delete the corresponding partition in the database, if any, to avoid duplicates.
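The delete-then-insert step can be sketched with sqlite3 standing in for the destination database. Here fetch_ads_data is a hypothetical stub for the ads API, and in a real DAG the ds argument would be filled in by Airflow’s {{ ds }} template variable rather than computed at runtime:

```python
import sqlite3

def fetch_ads_data(ds):
    # Hypothetical stand-in for the ads API call.
    return [(ds, "campaign_a", 120), (ds, "campaign_b", 80)]

def export_api_data(ds, conn):
    rows = fetch_ads_data(ds)
    # Delete the day's partition first so re-runs never create duplicates.
    conn.execute("DELETE FROM ads_performance WHERE ds = ?", (ds,))
    conn.executemany("INSERT INTO ads_performance VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ads_performance (ds TEXT, campaign TEXT, clicks INT)")

export_api_data("2019-12-05", conn)
export_api_data("2019-12-05", conn)  # re-running the same DAG run

count = conn.execute("SELECT COUNT(*) FROM ads_performance").fetchone()[0]
print(count)  # 2, not 4: the load is idempotent
```

Running the task twice leaves the table in exactly the same state as running it once, which is the idempotence property the DAG needs.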
Airflow is powered by two key components:
Metadata database: maintains information on DAG and task states.
Scheduler: processes DAG files and utilizes information stored in the metadata database to decide when tasks should be executed.
The scheduler will scan and compile all qualified DAG files in the Airflow bucket every few seconds to detect DAG changes and check whether a task can be triggered. It is critical to keep DAG files very light (like a configuration file) so that it takes less time and fewer resources for the Airflow scheduler to process them at each heartbeat. No actual data processing should happen in DAG files.
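One easy way to break this rule is doing heavy work at module import time, because the scheduler re-imports every DAG file at each heartbeat. A small sketch of the anti-pattern and the fix (expensive_query is a hypothetical placeholder):

```python
CALLS = {"n": 0}

def expensive_query():
    # Hypothetical stand-in for hitting a database or an API.
    CALLS["n"] += 1
    return ["table_a", "table_b"]

# Anti-pattern: a top-level call like `tables = expensive_query()` here would
# run on every scheduler heartbeat, just by the file being imported.

# Better: defer the work into the task callable so it runs only when the
# task itself executes, not whenever the DAG file is parsed.
def my_task_callable(**context):
    return expensive_query()

print(CALLS["n"])        # 0: nothing ran at "parse" time
tables = my_task_callable()
print(CALLS["n"])        # 1: the work happened only at task execution
```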
Changing the DAG ID of an existing DAG is equivalent to creating a brand new DAG since Airflow will actually add a new entry in the metadata database without deleting the old one. This might cause extra trouble because you will lose all the DAG run history and Airflow will attempt to backfill all the past DAG runs again if catchup is turned on. Do not rename DAGs unless it is totally necessary.
Deleting a DAG file from the Airflow bucket does not erase its DAG run history and other metadata. You need to use either the Delete button in Airflow UI or airflow delete_dag to explicitly delete the metadata. If you upload the same DAG again after all previous metadata is deleted, it will be treated as a brand new DAG again (which comes in very handy if you want to rerun all the past DAG runs at once).
(TL;DR) Here is a summary of the main takeaways:
The start_date is not necessarily when the first DAG run would be executed, as a DAG run is triggered at the end of its schedule period.
Always use a static start_date for your DAGs to make sure DAG runs are populated as expected.
Utilize Airflow’s template variable and macros to ensure your DAG runs are independent of each other and actual run time.
Make sure your DAGs are idempotent so that running the same DAG run multiple times produces the same result as running it once.
Keep DAG files light and quick-to-process like a configuration file, as the Airflow scheduler processes all DAG files at each heartbeat.
Renaming an existing DAG will introduce a brand new DAG.
In order to completely erase a DAG, you need to remove DAG files from the Airflow bucket and explicitly delete DAG metadata.
Want to learn more about Data Engineering? Check out my Data Engineering 101 column on Towards Data Science: | [
{
"code": null,
"e": 649,
"s": 171,
"text": "When I first started building ETL pipelines with Airflow, I had so many memorable “aha” moments after figuring out why my pipelines didn’t run. As the tech documentation never covers everything, I tend to learn much more about a new tool from making mistakes and reading source code from good developers. In the blog post, I will share many tips and best practices for Airflow along with behind-the-scenes mechanisms to help you build more reliable and scalable data pipelines."
},
{
"code": null,
"e": 716,
"s": 649,
"text": "(New to Airflow? Check out the beginner’s guide to Airflow first.)"
},
{
"code": null,
"e": 814,
"s": 716,
"text": "(Interested in ways to efficiently learn a tech stack? Check out the Systematic Learning Method.)"
},
{
"code": null,
"e": 1205,
"s": 814,
"text": "In Airflow, a DAG is triggered by the Airflow scheduler periodically based on the start_date and schedule_interval parameters specified in the DAG file. It is very common for beginners to get confused by Airflow’s job scheduling mechanism because it is unintuitive at first that the Airflow scheduler triggers a DAG run at the end of its schedule period, rather than at the beginning of it."
},
{
"code": null,
"e": 1539,
"s": 1205,
"text": "When a new DAG is created and picked up by Airflow, the Airflow scheduler materializes many DAG run entries along with corresponding schedule periods based on start_date and schedule_interval of the DAG, and each DAG run is triggered when its time dependency is met. For example, consider this sample DAG that runs daily at 7 am UTC:"
},
{
"code": null,
"e": 1703,
"s": 1539,
"text": "default_args = {\n    'owner': 'xinran.waibel',\n    'start_date': datetime(2019, 12, 5),\n}\ndag = DAG('sample_dag', default_args=default_args, schedule_interval='0 7 * * *')"
},
{
"code": null,
"e": 1779,
"s": 1703,
"text": "This DAG will have the following DAG runs created by the Airflow scheduler:"
},
{
"code": null,
"e": 1987,
"s": 1779,
"text": "The first DAG run would be triggered after 7 am on 2019–12–06, at the end of its schedule period, instead of on the start date. Similarly, the rest of DAG runs would be executed every day at 7 am after that."
},
{
"code": null,
"e": 2482,
"s": 1987,
"text": "The execution time in Airflow is not the actual run time, but rather the start timestamp of its schedule period. For example, the execution time of the first DAG run is 2019–12–05 7:00:00, though it is executed on 2019–12–06. However, if a DAG run is manually started by users, the execution time of this manual DAG run would be exactly when it was triggered. (To tell whether a DAG run is scheduled or manually triggered, you can look at the prefix of its DAG run ID: scheduled__ or manual__)."
},
{
"code": null,
"e": 2734,
"s": 2482,
"text": "Based on Airflow’s scheduling mechanism described above, you should always use a static start_date for your DAGs to make sure DAG runs are populated as expected. Keep in mind the start_date is not necessarily when the first DAG run would be triggered."
},
{
"code": null,
"e": 3318,
"s": 2734,
"text": "One important concept in Airflow’s job scheduling is catchup. After materializing DAG run entries of a DAG, the Airflow scheduler will “backfill” all past DAG runs whose time dependency has been met if catchup is enabled. If catchup is turned off, then only the latest DAG run will be executed and those before it will not even show up in the DAG history. For example, assuming the sample DAG is picked up by Airflow at 8 am on 2019–12–08, three DAG runs will run if catchup is enabled. However, if catchup is turned off, only scheduled__2019–12–07T07:00:00+00:00 will be triggered."
},
{
"code": null,
"e": 3380,
"s": 3318,
"text": "There are 2 ways to configure the catchup setting in Airflow:"
},
{
"code": null,
"e": 3614,
"s": 3380,
"text": "Airflow cluster level: Set catchup_by_default = True (by default) or False under thescheduler section in the Airflow configuration file airflow.cfg. This setting is applied to all DAGs unless a DAG-level catchup setting is specified."
},
{
"code": null,
"e": 3674,
"s": 3614,
"text": "DAG level: Set dag.catchup = True or False in the DAG file:"
},
{
"code": null,
"e": 3740,
"s": 3674,
"text": "dag = DAG('sample_dag', catchup=False, default_args=default_args)"
},
{
"code": null,
"e": 4109,
"s": 3740,
"text": "Because Airflow can backfill past DAG runs when catchup is enabled and each DAG run can be re-run manually at any time, it is important to make sure DAGs are idempotent and each DAG run is independent of each other and the actual run date. DAG idempotence means the result of running the same DAG run multiple times should be the same as the result of running it once."
},
{
"code": null,
"e": 4280,
"s": 4109,
"text": "Now let’s consider this primitive DAG that runs a Python function every day to retrieve daily marketing ads’ performance data from an API and load the data to a database:"
},
{
"code": null,
"e": 4673,
"s": 4280,
"text": "The export_api_data Python function uses the datetime library to dynamically get yesterday’s date, download yesterday’s ads performance data from the API, and then insert the downloaded data into the destination database. However, this DAG will only produce correct results if no past DAG runs are backfilled and all DAG runs are executed exactly once. This is because there are two big issues in the DAG design:"
},
{
"code": null,
"e": 5543,
"s": 5191,
"text": "If start_date is set to 2019–12–01 and the DAG is uploaded to Airflow bucket on 2019–12–08, then seven past DAG runs would run on 2019–12–08. Since yesterday’s date is obtained dynamically in export_api_data function, all the backfilled DAG runs will have yesterday = 2019–12–07 and therefore download and upload the same day’s data into the database."
},
{
"code": null,
"e": 5710,
"s": 5543,
"text": "When a DAG run is executed more than once, multiple copies of the same day’s ads data will be inserted into the database, causing unwanted duplicates in the database."
},
{
"code": null,
"e": 5780,
"s": 5710,
"text": "We can make a few improvements to the DAG file to solve these issues:"
},
{
"code": null,
"e": 6312,
"s": 6118,
"text": "Instead of using datetime library, now we use {{ ds }} , one of Airflow’s built-in template variables, to get the execution date of the DAG run, which is independent of its actual DAG run date."
},
{
"code": null,
"e": 6457,
"s": 6312,
"text": "Before the previous day’s ads data is inserted to the database, delete the corresponding partition in the database, if any, to avoid duplicates."
},
{
"code": null,
"e": 6499,
"s": 6457,
"text": "Airflow is powered by two key components:"
},
{
"code": null,
"e": 6564,
"s": 6499,
"text": "Metadata database: maintains information on DAG and task states."
},
{
"code": null,
"e": 6693,
"s": 6564,
"text": "Scheduler: processes DAG files and utilizes information stored in the metadata database to decide when tasks should be executed."
},
{
"code": null,
"e": 7094,
"s": 6693,
"text": "The scheduler will scan and compile all qualified DAG files in the Airflow bucket every a couple of seconds to detect DAG changes and check whether a task can be triggered. It is critical to keep DAG files very light (like a configuration file) so that it takes less time and resources for the Airflow scheduler to process them at each heartbeat. No actual data processing should happen in DAG files."
},
{
"code": null,
"e": 7492,
"s": 7094,
"text": "Changing the DAG ID of an existing DAG is equivalent to creating a brand new DAG since Airflow will actually add a new entry in the metadata database without deleting the old one. This might cause extra trouble because you will lose all the DAG run history and Airflow will attempt to backfill all the past DAG runs again if catchup is turned on. Do not rename DAGs unless it is totally necessary."
},
{
"code": null,
"e": 7897,
"s": 7492,
"text": "Deleting a DAG file from the Airflow bucket does not erase its DAG run history and other metadata. You need to use either the Delete button in Airflow UI or airflow delete_dag to explicitly delete the metadata. If you upload the same DAG again after all previous metadata is deleted, it will be treated as a brand new DAG again (which comes very handy if you want to rerun all the past DAG runs at once)."
},
{
"code": null,
"e": 7946,
"s": 7897,
"text": "(TL;DR) Here is a summary of the main takeaways:"
},
{
"code": null,
"e": 8083,
"s": 7946,
"text": "The start_date is not necessarily when the first DAG run would be executed, as a DAG run is triggered at the end of its schedule period."
},
{
"code": null,
"e": 8177,
"s": 8083,
"text": "Always use a static start_date for your DAGs to make sure DAG runs are populated as expected."
},
{
"code": null,
"e": 8299,
"s": 8177,
"text": "Utilize Airflow’s template variable and macros to ensure your DAG runs are independent of each other and actual run time."
},
{
"code": null,
"e": 8430,
"s": 8299,
"text": "Make sure your DAGs are idempotent to ensure running the same DAG run multiple times is the same as the result of running it once."
},
{
"code": null,
"e": 8567,
"s": 8430,
"text": "Keep DAG files light and quick-to-process like a configuration file, as the Airflow scheduler processes all DAG files at each heartbeat."
},
{
"code": null,
"e": 8624,
"s": 8567,
"text": "Renaming an existing DAG will introduce a brand new DAG."
},
{
"code": null,
"e": 8749,
"s": 8624,
"text": "In order to completely erase a DAG, you need to remove DAG files from the Airflow bucket and explicitly delete DAG metadata."
}
] |
Python MySQL - Create Database | You can create a database in MySQL using the CREATE DATABASE query.
Following is the syntax of the CREATE DATABASE query −
CREATE DATABASE name_of_the_database
Following statement creates a database with name mydb in MySQL −
mysql> CREATE DATABASE mydb;
Query OK, 1 row affected (0.04 sec)
If you observe the list of databases using the SHOW DATABASES statement, you can observe the newly created database in it as shown below −
mysql> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| logging |
| mydatabase |
| mydb |
| performance_schema |
| students |
| sys |
+--------------------+
26 rows in set (0.15 sec)
After establishing a connection with MySQL, you need to connect to a database in order to manipulate the data in it. You can connect to an existing database or create your own.
You need special privileges to create or delete a MySQL database, so if you have access to the root user, you can create any database.
The following example establishes a connection with MySQL and creates a database in it.
import mysql.connector
#establishing the connection
conn = mysql.connector.connect(user='root', password='password', host='127.0.0.1')
#Creating a cursor object using the cursor() method
cursor = conn.cursor()
#Dropping database MYDATABASE if it already exists
cursor.execute("DROP database IF EXISTS MYDATABASE")
#Preparing query to create a database
sql = "CREATE database MYDATABASE"
#Creating a database
cursor.execute(sql)
#Retrieving the list of databases
print("List of databases: ")
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())
#Closing the connection
conn.close()
List of databases:
[('information_schema',), ('dbbug61332',), ('details',), ('exampledatabase',), ('mydatabase',), ('mydb',), ('mysql',), ('performance_schema',)]
[
{
"code": null,
"e": 3273,
"s": 3205,
"text": "You can create a database in MYSQL using the CREATE DATABASE query."
},
{
"code": null,
"e": 3328,
"s": 3273,
"text": "Following is the syntax of the CREATE DATABASE query −"
},
{
"code": null,
"e": 3366,
"s": 3328,
"text": "CREATE DATABASE name_of_the_database\n"
},
{
"code": null,
"e": 3431,
"s": 3366,
"text": "Following statement creates a database with name mydb in MySQL −"
},
{
"code": null,
"e": 3497,
"s": 3431,
"text": "mysql> CREATE DATABASE mydb;\nQuery OK, 1 row affected (0.04 sec)\n"
},
{
"code": null,
"e": 3636,
"s": 3497,
"text": "If you observe the list of databases using the SHOW DATABASES statement, you can observe the newly created database in it as shown below −"
},
{
"code": null,
"e": 3939,
"s": 3636,
"text": "mysql> SHOW DATABASES;\n+--------------------+\n| Database |\n+--------------------+\n| information_schema |\n| logging |\n| mydatabase |\n| mydb |\n| performance_schema |\n| students |\n| sys |\n+--------------------+\n26 rows in set (0.15 sec)\n"
},
{
"code": null,
"e": 4102,
"s": 3939,
"text": "After establishing connection with MySQL, to manipulate data in it you need to connect to a database. You can connect to an existing database or, create your own."
},
{
"code": null,
"e": 4246,
"s": 4102,
"text": "You would need special privileges to create or to delete a MySQL database. So if you have access to the root user, you can create any database."
},
{
"code": null,
"e": 4328,
"s": 4246,
"text": "Following example establishes connection with MYSQL and creates a database in it."
},
{
"code": null,
"e": 4918,
"s": 4328,
"text": "import mysql.connector\n\n#establishing the connection\nconn = mysql.connector.connect(user='root', password='password', host='127.0.0.1')\n\n#Creating a cursor object using the cursor() method\ncursor = conn.cursor()\n\n#Doping database MYDATABASE if already exists.\ncursor.execute(\"DROP database IF EXISTS MyDatabase\")\n\n#Preparing query to create a database\nsql = \"CREATE database MYDATABASE\";\n\n#Creating a database\ncursor.execute(sql)\n\n#Retrieving the list of databases\nprint(\"List of databases: \")\ncursor.execute(\"SHOW DATABASES\")\nprint(cursor.fetchall())\n\n#Closing the connection\nconn.close()"
},
{
"code": null,
"e": 5081,
"s": 4918,
"text": "List of databases:\n[('information_schema',), ('dbbug61332',), ('details',), ('exampledatabase',), ('mydatabase',), ('mydb',), ('mysql',), ('performance_schema',)]"
},
{
"code": null,
"e": 5118,
"s": 5081,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 5134,
"s": 5118,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 5167,
"s": 5134,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 5186,
"s": 5167,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 5221,
"s": 5186,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 5243,
"s": 5221,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 5277,
"s": 5243,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 5305,
"s": 5277,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 5340,
"s": 5305,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 5354,
"s": 5340,
"text": " Lets Kode It"
},
{
"code": null,
"e": 5387,
"s": 5354,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 5404,
"s": 5387,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 5411,
"s": 5404,
"text": " Print"
},
{
"code": null,
"e": 5422,
"s": 5411,
"text": " Add Notes"
}
] |
What is the process of compilation and linking in python? | Compilation: The source code in Python is saved as a .py file, which is then compiled into a format known as bytecode; the bytecode is then executed by the Python virtual machine rather than being converted directly to machine code. After compilation, the code is stored in .pyc files and is regenerated when the source is updated. This process is known as compilation.
Linking: Linking is the final phase where all the functions are linked with their definitions as the linker knows where all these functions are implemented. This process is known as linking.
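As a small illustration of the compilation step (the file name example_module.py below is hypothetical), the standard-library py_compile module can compile a source file explicitly and report where the resulting .pyc file is written:

```python
import pathlib
import py_compile

# Create a tiny source file to compile (hypothetical name).
pathlib.Path("example_module.py").write_text("print('hello')\n")

# Compile it explicitly; the generated .pyc is placed in a
# __pycache__ directory next to the source file.
pyc_path = py_compile.compile("example_module.py")
print(pyc_path)
```

Normally this step happens automatically the first time a module is imported; CPython regenerates the .pyc whenever the source file changes.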
import dis
def recursive_sum(n):
   """Function to return the sum of recursive numbers"""
   if n <= 1:
      return n
   else:
      return n + recursive_sum(n-1)

# change this value for a different result
number = 16
if number > 0:
   print("The sum is", recursive_sum(number))
# the dis module disassembles the bytecode so that each instruction
# it contains can be read and inspected
dis.dis(recursive_sum)
The sum is 136
4 0 LOAD_FAST 0 (n)
2 LOAD_CONST 1 (1)
4 COMPARE_OP 1 (<=)
6 POP_JUMP_IF_FALSE 12
5 8 LOAD_FAST 0 (n)
10 RETURN_VALUE
7 >> 12 LOAD_FAST 0 (n)
14 LOAD_GLOBAL 0 (recursive_sum)
16 LOAD_FAST 0 (n)
18 LOAD_CONST 1 (1)
20 BINARY_SUBTRACT
22 CALL_FUNCTION 1
24 BINARY_ADD
26 RETURN_VALUE
28 LOAD_CONST 2 (None)
30 RETURN_VALUE | [
{
"code": null,
"e": 1367,
"s": 1062,
"text": "Compilation: The source code in python is saved as a .py file which is then compiled into a format known as byte code, byte code is then converted to machine code. After the compilation, the code is stored in .pyc files and is regenerated when the source is updated. This process is known as compilation."
},
{
"code": null,
"e": 1558,
"s": 1367,
"text": "Linking: Linking is the final phase where all the functions are linked with their definitions as the linker knows where all these functions are implemented. This process is known as linking."
},
{
"code": null,
"e": 2012,
"s": 1558,
"text": "import dis\ndef recursive_sum(n):\n\"\"\"Function to return the sum of recursive numbers\"\"\"\nif n <= 1:\nreturn n\nelse:\nreturn n + recursive_sum(n-1)\n\n# change this value for a different result\nnumber = 16\nif number < 0:\nprint(\"The sum is\",recursive_sum(number))\n# by using dis module ,the bytecode is loaded into machine code ,and a piece of code that reads each instruction in the bytecode and executes whatever operation is indicated.\ndis.dis(recursive_sum)"
},
{
"code": null,
"e": 2350,
"s": 2012,
"text": "The sum is 136\n4 0 LOAD_FAST 0 (n)\n2 LOAD_CONST 1 (1)\n4 COMPARE_OP 1 (<=)\n6 POP_JUMP_IF_FALSE 12\n\n5 8 LOAD_FAST 0 (n)\n10 RETURN_VALUE\n\n7 >> 12 LOAD_FAST 0 (n)\n14 LOAD_GLOBAL 0 (recursive_sum)\n16 LOAD_FAST 0 (n)\n18 LOAD_CONST 1 (1)\n20 BINARY_SUBTRACT\n22 CALL_FUNCTION 1\n24 BINARY_ADD\n26 RETURN_VALUE\n28 LOAD_CONST 2 (None)\n30 RETURN_VALUE"
}
] |
Swift - Fallthrough Statement | A switch statement in Swift 4 completes its execution as soon as the first matching case is completed, instead of falling through to the bottom of subsequent cases as happens in the C and C++ programming languages.
The generic syntax of a switch statement in C and C++ is as follows −
switch(expression){
case constant-expression :
statement(s);
break; /* optional */
case constant-expression :
statement(s);
break; /* optional */
/* you can have any number of case statements */
default : /* Optional */
statement(s);
}
Here we need to use a break statement to come out of a case statement; otherwise, the execution control will fall through to the subsequent case statements below the matching case statement.
The generic syntax of a switch statement in Swift 4 is as follows −
switch expression {
case expression1 :
statement(s)
fallthrough /* optional */
case expression2, expression3 :
statement(s)
fallthrough /* optional */
default : /* Optional */
statement(s);
}
If we do not use a fallthrough statement, then the program will come out of the switch statement after executing the matching case statement. We will take the following two examples to make its functionality clear.
The following example shows how to use a switch statement in Swift 4 programming without fallthrough −
var index = 10
switch index {
case 100 :
print( "Value of index is 100")
case 10,15 :
print( "Value of index is either 10 or 15")
case 5 :
print( "Value of index is 5")
default :
print( "default case")
}
When the above code is compiled and executed, it produces the following result −
Value of index is either 10 or 15
The following example shows how to use a switch statement in Swift 4 programming with fallthrough −
var index = 10
switch index {
case 100 :
print( "Value of index is 100")
fallthrough
case 10,15 :
print( "Value of index is either 10 or 15")
fallthrough
case 5 :
print( "Value of index is 5")
default :
print( "default case")
}
When the above code is compiled and executed, it produces the following result −
Value of index is either 10 or 15
Value of index is 5
[
{
"code": null,
"e": 2463,
"s": 2253,
"text": "A switch statement in Swift 4 completes its execution as soon as the first matching case is completed instead of falling through the bottom of subsequent cases as it happens in C and C++ programming languages."
},
{
"code": null,
"e": 2533,
"s": 2463,
"text": "The generic syntax of a switch statement in C and C++ is as follows −"
},
{
"code": null,
"e": 2813,
"s": 2533,
"text": "switch(expression){\n case constant-expression :\n statement(s);\n break; /* optional */\n case constant-expression :\n statement(s);\n break; /* optional */\n\n /* you can have any number of case statements */\n default : /* Optional */\n statement(s);\n}\n"
},
{
"code": null,
"e": 3010,
"s": 2813,
"text": "Here we need to use a break statement to come out of a case statement, otherwise the execution control will fall through the subsequent case statements available below the matching case statement."
},
{
"code": null,
"e": 3078,
"s": 3010,
"text": "The generic syntax of a switch statement in Swift 4 is as follows −"
},
{
"code": null,
"e": 3311,
"s": 3078,
"text": "switch expression {\n case expression1 :\n statement(s)\n fallthrough /* optional */\n case expression2, expression3 :\n statement(s)\n fallthrough /* optional */\n\n default : /* Optional */\n statement(s);\n}\n"
},
{
"code": null,
"e": 3524,
"s": 3311,
"text": "If we do not use fallthrough statement, then the program will come out of the switch statement after executing the matching case statement. We will take the following two examples to make its functionality clear."
},
{
"code": null,
"e": 3627,
"s": 3524,
"text": "The following example shows how to use a switch statement in Swift 4 programming without fallthrough −"
},
{
"code": null,
"e": 3868,
"s": 3627,
"text": "var index = 10\n\nswitch index {\n case 100 :\n print( \"Value of index is 100\")\n case 10,15 :\n print( \"Value of index is either 10 or 15\")\n case 5 :\n print( \"Value of index is 5\")\n default :\n print( \"default case\")\n}"
},
{
"code": null,
"e": 3949,
"s": 3868,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 3984,
"s": 3949,
"text": "Value of index is either 10 or 15\n"
},
{
"code": null,
"e": 4084,
"s": 3984,
"text": "The following example shows how to use a switch statement in Swift 4 programming with fallthrough −"
},
{
"code": null,
"e": 4361,
"s": 4084,
"text": "var index = 10\n\nswitch index {\n case 100 :\n print( \"Value of index is 100\")\n fallthrough\n case 10,15 :\n print( \"Value of index is either 10 or 15\")\n fallthrough\n case 5 :\n print( \"Value of index is 5\")\n default :\n print( \"default case\")\n}"
},
{
"code": null,
"e": 4442,
"s": 4361,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 4497,
"s": 4442,
"text": "Value of index is either 10 or 15\nValue of index is 5\n"
},
{
"code": null,
"e": 4530,
"s": 4497,
"text": "\n 38 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 4545,
"s": 4530,
"text": " Ashish Sharma"
},
{
"code": null,
"e": 4578,
"s": 4545,
"text": "\n 13 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 4597,
"s": 4578,
"text": " Three Millennials"
},
{
"code": null,
"e": 4629,
"s": 4597,
"text": "\n 7 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 4648,
"s": 4629,
"text": " Three Millennials"
},
{
"code": null,
"e": 4681,
"s": 4648,
"text": "\n 22 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 4698,
"s": 4681,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 4730,
"s": 4698,
"text": "\n 12 Lectures \n 39 mins\n"
},
{
"code": null,
"e": 4750,
"s": 4730,
"text": " Devasena Rajendran"
},
{
"code": null,
"e": 4785,
"s": 4750,
"text": "\n 40 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 4802,
"s": 4785,
"text": " Grant Klimaytys"
},
{
"code": null,
"e": 4809,
"s": 4802,
"text": " Print"
},
{
"code": null,
"e": 4820,
"s": 4809,
"text": " Add Notes"
}
] |
JavaScript | Blob - GeeksforGeeks | 27 May, 2020
A blob object is simply a group of bytes that holds the data stored in a file. It may seem that a blob is a reference to the actual file, but actually it is not. A blob has its own size and MIME type, just like a simple file. The blob data is stored in the memory or filesystem of a user depending on the browser features and the size of the blob. A simple blob can be used anywhere we wish, just like files. The content of a blob can easily be read as an ArrayBuffer, which makes blobs very convenient for storing binary data.
Syntax for creating a Blob:
var abc = new Blob(["Blob Content"],
{type: Blob Property containing MIME property})
Apart from inserting data directly into Blob, we can also read data from this Blob using the FileReader class:
var abc = new Blob(["GeeksForGeeks"], {type : "text/plain"});
var def = new FileReader();
def.addEventListener("loadend", function(e) {
    document.getElementById("para").innerHTML = e.srcElement.result;
});

def.readAsText(abc);
In the HTML file, we just create a simple <p> element with id="para":
<p id="para"></p>
And you will get the below output:
GeeksForGeeks
Blob URLs: Just like we have file URLs that refer to real files in the local filesystem, we also have Blob URLs that refer to a Blob. Blob URLs are quite similar to regular URLs and hence can be used almost anywhere that we can use general URLs. A Blob can easily be used as a URL for <a>, <img>, or other tags to display its contents. The Blob URL pointing to a blob can be obtained using the createObjectURL method:
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript Blob
    </title>
</head>

<body>
    <a download="gfg.txt" href='#' id="link">Download</a>
    <script>
        let abc = new Blob(["Geeks For Geeks"],
            { type: 'text/plain' });
        link.href = URL.createObjectURL(abc);
    </script>
</body>

</html>
Output: You will get a downloaded, dynamically generated Blob with Geeks For Geeks as its content:
Blob To ArrayBuffer: The Blob constructor can be used to create blobs from anything including any type of BufferSource. For low-level processing, we can use the lowest level ArrayBuffer from the blob using FileReader:
let def = new FileReader();

def.readAsArrayBuffer(abc);

def.onload = function(event) {
    let res = def.result;
};
Positive points for using Blobs:
Blobs are a good option for adding large binary data files to a database and can be easily referenced.
It is easy to set access rights using rights management while using Blobs.
Database backups of Blobs contain all the data.
Negative points for using Blobs:
Not all databases permit the use of Blobs.
Blobs are inefficient due to the amount of disk space required and access time.
Creating backups is highly time consuming due to the file size of Blobs.
JavaScript-Misc
JavaScript
Web Technologies
[
{
"code": null,
"e": 24909,
"s": 24881,
"text": "\n27 May, 2020"
},
{
"code": null,
"e": 25432,
"s": 24909,
"text": "A blob object is simply a group of bytes that holds the data stored in a file. It may seem like that a blob is a reference to the actual file but actually it is not. A blob has its size and MIME just like that of a simple file. The blob data is stored in the memory or filesystem of a user depending on the browser features and size of the blob. A simple blob can be used anywhere we wish just like files.The content of the blob can easily be read as ArrayBuffer which makes blobs very convenient to store the binary data."
},
{
"code": null,
"e": 25460,
"s": 25432,
"text": "Syntax for creating a Blob:"
},
{
"code": null,
"e": 25551,
"s": 25460,
"text": "var abc = new Blob([\"Blob Content\"], \n {type: Blob Property containing MIME property})\n"
},
{
"code": null,
"e": 25662,
"s": 25551,
"text": "Apart from inserting data directly into Blob, we can also read data from this Blob using the FileReader class:"
},
{
"code": "var abc = new Blob([\"GeeksForGeeks\"], {type : \"text/plain\"});var def = new FileReader();def.addEventListener(\"loadend\", function(e) { document.getElementById(\"para\").innerHTML = e.srcElement.result;}); def.readAsText(abc);",
"e": 25925,
"s": 25662,
"text": null
},
{
"code": null,
"e": 25991,
"s": 25925,
"text": "In HTML file, we just create a simple <p> element with id=”para”:"
},
{
"code": "<p id=\"para\"></p>",
"e": 26009,
"s": 25991,
"text": null
},
{
"code": null,
"e": 26044,
"s": 26009,
"text": "And you will get the below output:"
},
{
"code": null,
"e": 26058,
"s": 26044,
"text": "GeeksForGeeks"
},
{
"code": null,
"e": 26501,
"s": 26058,
"text": "Blob URL’s: Just like we have file URLs that refer to some real files in the local filesystem, we also have Blob URLs that refer to the Blob. Blob URL’s are quite similar to any regular URL’s and hence can be used almost anywhere that we can use the general URL’s. A Blob can be easily used as an URL for <a>, <img> or other tags, to display its contents. The blob URL pointing towards a blob can be obtained using the createObjectURL object:"
},
{
"code": "<!DOCTYPE html><html> <head> <title> JavaScript Blob </title></head> <body> <a download=\"gfg.txt\" href='#' id=\"link\">Download</a> <script> let abc = new Blob([\"Geeks For Geeks\"], { type: 'text/plain' }); link.href = URL.createObjectURL(abc); </script></body> </html>",
"e": 26833,
"s": 26501,
"text": null
},
{
"code": null,
"e": 26937,
"s": 26833,
"text": "Output:You will be getting a downloaded dynamically generated Blob with Geeks For Geeks as its content:"
},
{
"code": null,
"e": 27155,
"s": 26937,
"text": "Blob To ArrayBuffer: The Blob constructor can be used to create blobs from anything including any type of BufferSource. For low-level processing, we can use the lowest level ArrayBuffer from the blob using FileReader:"
},
{
"code": "let def = new FileReader(); def.readAsArrayBuffer(abc); def.onload = function(event) { let res = def.result;};",
"e": 27271,
"s": 27155,
"text": null
},
{
"code": null,
"e": 27304,
"s": 27271,
"text": "Positive points for using Blobs:"
},
{
"code": null,
"e": 27407,
"s": 27304,
"text": "Blobs are a good option for adding large binary data files to a database and can be easily referenced."
},
{
"code": null,
"e": 27482,
"s": 27407,
"text": "It is easy to set access rights using rights management while using Blobs."
},
{
"code": null,
"e": 27530,
"s": 27482,
"text": "Database backups of Blobs contain all the data."
},
{
"code": null,
"e": 27563,
"s": 27530,
"text": "Negative points for using Blobs:"
},
{
"code": null,
"e": 27606,
"s": 27563,
"text": "Not all databases permit the use of Blobs."
},
{
"code": null,
"e": 27686,
"s": 27606,
"text": "Blobs are inefficient due to the amount of disk space required and access time."
},
{
"code": null,
"e": 27759,
"s": 27686,
"text": "Creating backups is highly time consuming due to the file size of Blobs."
},
{
"code": null,
"e": 27775,
"s": 27759,
"text": "JavaScript-Misc"
},
{
"code": null,
"e": 27786,
"s": 27775,
"text": "JavaScript"
},
{
"code": null,
"e": 27803,
"s": 27786,
"text": "Web Technologies"
},
{
"code": null,
"e": 27901,
"s": 27803,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27910,
"s": 27901,
"text": "Comments"
},
{
"code": null,
"e": 27923,
"s": 27910,
"text": "Old Comments"
},
{
"code": null,
"e": 27984,
"s": 27923,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28025,
"s": 27984,
"text": "Difference Between PUT and PATCH Request"
},
{
"code": null,
"e": 28079,
"s": 28025,
"text": "How to get character array from string in JavaScript?"
},
{
"code": null,
"e": 28119,
"s": 28079,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 28181,
"s": 28119,
"text": "How to get selected value in dropdown list using JavaScript ?"
},
{
"code": null,
"e": 28237,
"s": 28181,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 28270,
"s": 28237,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 28332,
"s": 28270,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 28375,
"s": 28332,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Modern Gaussian Process Regression | by Ryan Sander | Towards Data Science | Ever wonder how you can create non-parametric supervised learning models with unlimited expressive power? Look no further than Gaussian Process Regression (GPR), an algorithm that learns to make predictions almost entirely from the data itself (with a little help from hyperparameters). Combining this algorithm with recent advances in computing, such as automatic differentiation, allows for applying GPRs to solve a variety of supervised machine learning problems in near-real-time.
In this article, we’ll discuss:
A brief overview/recap of the theory behind GPR
The types of problems we can use GPR to solve, and some examples
How GPR compares to other supervised learning algorithms
Modern programming packages and tools we can use to implement GPR
This is the second article in my GPR series. For a rigorous, Ab initio introduction to Gaussian Process Regression, please check out my previous article here.
Before we dive into how we can implement and use GPR, let’s quickly review the mechanics and theory behind this supervised machine learning algorithm. For more detailed derivations/discussion of the following concepts, please check out my previous article on GPR here. GPR:
i. Predicts the conditional posterior distribution of test points conditioned on observed training points:
ii. Computes the mean of predicted test point targets as linear combinations of observed target values, with the weights of these linear combinations determined by the kernel distance from the training inputs to the test points:
iii. Uses covariance functions to measure the kernel distance between inputs:
iv. Interpolates novel points from existing points by treating each novel point as part of a Gaussian Process, i.e. parameterizing the novel point as a Gaussian distribution:
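For reference, the posterior equations referenced in points (i)–(iii) take the standard form below (this is the usual GPR presentation, with training inputs X, targets y, test inputs X_*, kernel matrices K, and observation-noise variance σ_n²):

```latex
% Posterior predictive mean and covariance at test inputs X_*:
\mu_* = K(X_*, X)\left[K(X, X) + \sigma_n^2 I\right]^{-1} y

\Sigma_* = K(X_*, X_*) - K(X_*, X)\left[K(X, X) + \sigma_n^2 I\right]^{-1} K(X, X_*)
```

Note that the mean is linear in the observed targets y, while the covariance does not depend on y at all — only on the kernel distances between inputs.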
GPR can be applied to a variety of supervised machine learning problems (and in some cases, can be used as a subroutine in unsupervised machine learning). Here are just a few classes of problems that can be solved with this machine learning technique:
Interpolation is a key task in a variety of fields, such as signal processing, spatial statistics, and control. This application is particularly common in fields that leverage spatial statistics, such as geostatistics. As a concrete example, consider the problem of generating a surface corresponding to the mountain below, given only a limited number of defined points on the mountain. If you’re interested in seeing a specific implementation of this, please check out my article here.
This class of problems looks at projecting a time series into the future using historical data. Like kriging, time series forecasting allows for predicting unseen values. Rather than predicting unseen values at different locations, however, this problem applies GPR for predicting the mean and variance of unseen points in the future. This is highly applicable for tasks such as predicting electricity demand, stock prices, or the state-space evolution of a linear dynamical system.
Furthermore, not only does GPR predict the mean of a future point, but it also outputs a predicted variance, enabling decision-making systems to factor uncertainty into their decisions.
More generally, because GPR allows for predicting variance at test points, GPR can be used for a variety of uncertainty quantification tasks — i.e. any task for which it is relevant to estimate both an expected value, and the uncertainty, or variance, associated with this expected value.
You may be wondering: Why is uncertainty important? To motivate this answer, consider predicting the trajectory of a pedestrian for an autonomous navigation safety system. If the predicted trajectory of a pedestrian has high predicted uncertainty, an autonomous vehicle should exercise increased caution to account for having low confidence in the pedestrian’s intention. If, on the other hand, the autonomous vehicle has low predicted variance of the pedestrian’s trajectory, then the autonomous car will be better able to predict the pedestrian’s intentions, and can more easily proceed along with its current driving plan.
In a sense, by predicting uncertainty, decision-making systems can “weight” the expected values they estimate according to how uncertain they predict these expected values to be.
You may be wondering — why should I consider using GPR instead of a different supervised learning model? Below, I enumerate a few comparative reasons.
GPR is non-parametric. This means it learns largely from the data itself, rather than by learning an extensive set of parameters. This is especially advantageous because this results in GPR models not being as data-hungry as highly parametric models, such as neural networks, i.e. they don’t need as many samples to achieve strong generalizability.
For interpolation and prediction tasks, GPR estimates both expected values and uncertainty. This is especially beneficial for decision-making systems that take this uncertainty into account when making decisions.
GPR is a linear smoother [5] — from a supervised learning lens, this can be conceptualized as a regularization technique. From a Bayesian lens, this is equivalent to imposing a prior on your model that all targets on test points must be linear combinations of existing training targets. This attribute helps GPR to generalize to unseen data, so long as the true unseen targets can be represented as linear combinations of training targets.
With automatic differentiation backend frameworks such as torch and tensorflow, which are integrated through GPR packages such as gpytorch and gpflow, GPR is lightning fast and scalable. This is particularly true for batched models. For an example case study of this, please see my previous article on batched, multi-dimensional GPR here!
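The linear-smoother property can be checked numerically from the GP posterior-mean formula μ* = K(X*, X)[K(X, X) + σ²I]⁻¹y: the prediction at each test point is literally a weighted sum of the training targets. A minimal numpy sketch (squared-exponential kernel, hyperparameters hard-coded for illustration):

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential kernel matrix between two 1-D input arrays."""
    sq_dist = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * sq_dist / length_scale**2)

x_train = np.linspace(0, 2 * np.pi, 15)
y_train = np.sin(x_train)
x_test = np.array([1.0, 2.5, 4.0])

noise_var = 1e-4  # sigma^2, small jitter for numerical stability
K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))

# Each row of `weights` is the linear combination of training targets
# that produces the posterior mean at one test point.
weights = rbf(x_test, x_train) @ np.linalg.inv(K)
posterior_mean = weights @ y_train
```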
Below, we introduce several Python machine learning packages for scalable, efficient, and modular implementations of Gaussian Process Regression. Let’s walk through each of them!
This is a great package for getting started with GPR. It allows for some model flexibility, and is able to carry out hyperparameter optimization and define likelihoods under the hood. To use sklearn with your datasets, please make sure your datasets can be represented numerically with np.array objects. The main steps for using GPR with sklearn:
Preprocess your data. Training data (np.array) can be represented as a (x_train, y_train) tuple with x_train shape (N, D) and y_train shape (N, 1), where N is the number of samples, and D is the dimension of the features. Your test points (np.array) can be represented as x_test with shape (N, D).
Define your covariance function. In the code segment below, we use a Radial Basis Function (RBF) kernel RBF along with additive noise using a WhiteKernel.
Define your GaussianProcessRegressor object using your covariance function, and a random state that seeds your GPR. This random_state is important for ensuring reproducibility.
Fit your gpr object using the method gpr.fit(x_train, y_train). This “trains your model”, and optimizes the hyperparameters of your gpr object using gradient methods such as lbfgs, a second-order Hessian-based optimization routine.
Predict the mean and covariance of the targets on your test points x_test using the method gpr.predict(x_test, return_std=True). This gives you both a predicted value, as well as a measure of the uncertainty for this predicted point.
To install dependencies for the example below using pip:
pip install scikit-learn numpy matplotlib
Here is an example that fits and predicts a one-dimensional sinusoid using sklearn:
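The gist embedded at this point in the original article isn't reproduced here; following the five steps above, a minimal sketch (kernel choices and constants are illustrative) might look like:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Step 1: preprocess -- x_train has shape (N, D), y_train has shape (N,)
rng = np.random.default_rng(0)
x_train = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y_train = np.sin(x_train).ravel() + 0.1 * rng.standard_normal(40)

# Step 2: covariance function -- RBF kernel plus additive white noise
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

# Step 3: define the regressor, seeding it for reproducibility
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)

# Step 4: fit -- optimizes the kernel hyperparameters (L-BFGS-B by default)
gpr.fit(x_train, y_train)

# Step 5: predict mean and per-point standard deviation on test inputs
x_test = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
mean, std = gpr.predict(x_test, return_std=True)
```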
This package is great for creating fully customizable, advanced, and accelerated GPR models that scale. This package supports everything from GPR model optimization via auto-differentiation to hardware acceleration via CUDA and PyKeOps.
It’s recommended you have some familiarity with PyTorch and/or auto-differentiation packages in Python before working with GPyTorch, but the tutorials make this framework easy to learn and use. Data for GPRs in GPyTorch are represented as torch.tensor objects. Here are the steps for fitting a GPR model in GPyTorch:
Preprocess your data. Training data can be represented as a (x_train, y_train) tuple with x_train shape (B, N, D) and y_train shape (B, N, 1), where B is the batch size, N is the number of samples, and D is the dimension of the features. Your test points can be represented as x_test with shape (B, N, D).
Define your ExactGPModel by subclassing the gpytorch.models.ExactGP class. To subclass this model, you’ll need to define: (i) The constructor method, which specifies the mean and covariance functions of the model, (ii) The forward method, which describes how the GPR model makes predictions. To use batching, check out this tutorial here. To use prior distributions on your hyperparameters, check out this tutorial here.
Specify your likelihood function, which your model uses to relate latent variables f to observed targets y.
Instantiate your model using your likelihood and training data (x_train, y_train).
Perform hyperparameter optimization (“training”) of your model using pytorch auto-differentiation. Once finished, ensure your model and likelihood are placed in posterior mode with model.eval() and likelihood.eval().
Compute mean and variance predictions on your test points using your model by calling likelihood(model(x_test)). The inner function predicts latent test values f* from test inputs x*, and the outer function predicts mean and variance from latent test values f*.
To install dependencies for the example below using pip:
pip install gpytorch torch matplotlib numpy
# (Optional) - Installs pykeops
pip install pykeops
Here is an example to fit a noisy one-dimensional sinusoid using gpytorch:
Another GPR package that supports automatic differentiation (this time in tensorflow), GPFlow has extensive functionality built-in for creating fully-customizable models, likelihood functions, kernels, and optimization and inference routines. In addition to GPR, GPFlow has built-in functionality for a variety of other state-of-the-art problems in Bayesian Optimization, such as Variational Fourier Features and Convolutional Gaussian Processes.
It’s recommended you have some familiarity with TensorFlow and/or auto-differentiation packages in Python before working with GPFlow. Data for GPRs in GPFlow are represented as tf.tensor objects. To get started with GPFlow, please check out this examples link.
This package has Python implementations for a multitude of GPR models, likelihood functions, and inference procedures. Though this package doesn’t have the same auto-differentiation backends that power gpytorch and gpflow, this package’s versatility, modularity, and customizability make it a valuable resource for implementing GPR.
Pyro is a probabilistic programming package that can be integrated with Python that also supports Gaussian Process Regression, as well as advanced applications such as Deep Kernel Learning.
Gen is another probabilistic programming package, built on top of Julia. Gen offers several advantages for Gaussian Process Regression: (i) It builds in proposal distributions, which can help narrow down a search space by effectively imposing a prior on the set of possible solutions, (ii) It has an easy API for sampling traces from fit GPR models, (iii) As is the goal for many probabilistic programming languages, it makes it easy to create hierarchical models for tuning the priors of GPR hyperparameters.
Stan is another probabilistic programming package that can be integrated with Python, but also supports other languages such as R, MATLAB, Julia, and Stata. In addition to having functionality built-in for Gaussian Process Regression, Stan also supports a variety of other Bayesian inference and sampling functionality.
Built by the creators of GPyTorch, BoTorch is a Bayesian Optimization library that supports many of the same GPR techniques, as well as advanced Bayesian Optimization techniques and analytic test suites, as GPyTorch.
In this article, we reviewed the theory behind Gaussian Process Regression (GPR), introduced and discussed the types of problems GPR can be used to solve, discussed how GPR compares to other supervised learning algorithms, and walked through how we can implement GPR using sklearn, gpytorch, or gpflow.
To see more articles in reinforcement learning, machine learning, computer vision, robotics, and teaching, please follow me! Thank you for reading!
Thank you to CODECOGS for their inline equation rendering tool, Carl Edward Rasmussen for open-sourcing the textbook Gaussian Processes for Machine Learning [5], and for Scikit-Learn, GPyTorch, GPFlow, and GPy for open-sourcing their Gaussian Process Regression Python libraries.
[1] Pedregosa, Fabian, et al. “Scikit-learn: Machine learning in Python.” the Journal of machine Learning research 12 (2011): 2825–2830.
[2] Gardner, Jacob R., et al. “Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration.” arXiv preprint arXiv:1809.11165 (2018).
[3] Matthews, Alexander G. de G., et al. “GPflow: A Gaussian Process Library using TensorFlow.” J. Mach. Learn. Res. 18.40 (2017): 1–6.
[4] GPy, “GPy.” http://github.com/SheffieldML/GPy.
[5] Carl Edward Rasmussen and Christopher K. I. Williams. 2005. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press.
[6] Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. 2019. Pyro: deep universal probabilistic programming. J. Mach. Learn. Res. 20, 1 (January 2019), 973–978.
[7] Gen: A General-Purpose Probabilistic Programming System with Programmable Inference. Cusumano-Towner, M. F.; Saad, F. A.; Lew, A.; and Mansinghka, V. K. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ‘19).
[8] Stan Development Team. 2021. Stan Modeling Language Users Guide and Reference Manual, VERSION. https://mc-stan.org.
[9] Balandat, Maximilian, et al. “BoTorch: A framework for efficient Monte-Carlo Bayesian optimization.” Advances in Neural Information Processing Systems 33 (2020). | [
{
"code": null,
"e": 657,
"s": 172,
"text": "Ever wonder how you can create non-parametric supervised learning models with unlimited expressive power? Look no further than Gaussian Process Regression (GPR), an algorithm that learns to make predictions almost entirely from the data itself (with a little help from hyperparameters). Combining this algorithm with recent advances in computing, such as automatic differentiation, allows for applying GPRs to solve a variety of supervised machine learning problems in near-real-time."
},
{
"code": null,
"e": 689,
"s": 657,
"text": "In this article, we’ll discuss:"
},
{
"code": null,
"e": 922,
"s": 689,
"text": "A brief overview/recap of the theory behind GPRThe types of problems we can use GPR to solve, and some examplesHow GPR compares to other supervised learning algorithmsModern programming packages and tools we can use to implement GPR"
},
{
"code": null,
"e": 970,
"s": 922,
"text": "A brief overview/recap of the theory behind GPR"
},
{
"code": null,
"e": 1035,
"s": 970,
"text": "The types of problems we can use GPR to solve, and some examples"
},
{
"code": null,
"e": 1092,
"s": 1035,
"text": "How GPR compares to other supervised learning algorithms"
},
{
"code": null,
"e": 1158,
"s": 1092,
"text": "Modern programming packages and tools we can use to implement GPR"
},
{
"code": null,
"e": 1317,
"s": 1158,
"text": "This is the second article in my GPR series. For a rigorous, Ab initio introduction to Gaussian Process Regression, please check out my previous article here."
},
{
"code": null,
"e": 1591,
"s": 1317,
"text": "Before we dive into how we can implement and use GPR, let’s quickly review the mechanics and theory behind this supervised machine learning algorithm. For more detailed derivations/discussion of the following concepts, please check out my previous article on GPR here. GPR:"
},
{
"code": null,
"e": 1698,
"s": 1591,
"text": "i. Predicts the conditional posterior distribution of test points conditioned on observed training points:"
},
{
"code": null,
"e": 1927,
"s": 1698,
"text": "ii. Computes the mean of predicted test point targets as linear combinations of observed target values, with the weights of these linear combinations determined by the kernel distance from the training inputs to the test points:"
},
{
"code": null,
"e": 2005,
"s": 1927,
"text": "iii. Uses covariance functions to measure the kernel distance between inputs:"
},
{
"code": null,
"e": 2180,
"s": 2005,
"text": "iv. Interpolates novel points from existing points by treating each novel point as part of a Gaussian Process, i.e. parameterizing the novel point as a Gaussian distribution:"
},
{
"code": null,
"e": 2432,
"s": 2180,
"text": "GPR can be applied to a variety of supervised machine learning problems (and in some cases, can be used as a subroutine in unsupervised machine learning). Here are just a few classes of problems that can be solved with this machine learning technique:"
},
{
"code": null,
"e": 2919,
"s": 2432,
"text": "Interpolation is a key task in a variety of fields, such as signal processing, spatial statistics, and control. This application is particularly common in fields that leverage spatial statistics, such as geostatistics. As a concrete example, consider the problem of generating a surface corresponding to the mountain below, given only a limited number of defined points on the mountain. If you’re interested in seeing a specific implementation of this, please check out my article here."
},
{
"code": null,
"e": 3402,
"s": 2919,
"text": "This class of problems looks at projecting a time series into the future using historical data. Like kriging, time series forecasting allows for predicting unseen values. Rather than predicting unseen values at different locations, however, this problem applies GPR for predicting the mean and variance of unseen points in the future. This is highly applicable for tasks such as predicting electricity demand, stock prices, or the state-space evolution of a linear dynamical system."
},
{
"code": null,
"e": 3588,
"s": 3402,
"text": "Furthermore, not only does GPR predict the mean of a future point, but it also outputs a predicted variance, enabling decision-making systems to factor uncertainty into their decisions."
},
{
"code": null,
"e": 3877,
"s": 3588,
"text": "More generally, because GPR allows for predicting variance at test points, GPR can be used for a variety of uncertainty quantification tasks — i.e. any task for which it is relevant to estimate both an expected value, and the uncertainty, or variance, associated with this expected value."
},
{
"code": null,
"e": 4503,
"s": 3877,
"text": "You may be wondering: Why is uncertainty important? To motivate this answer, consider predicting the trajectory of a pedestrian for an autonomous navigation safety system. If the predicted trajectory of a pedestrian has high predicted uncertainty, an autonomous vehicle should exercise increased caution to account for having low confidence in the pedestrian’s intention. If, on the other hand, the autonomous vehicle has low predicted variance of the pedestrian’s trajectory, then the autonomous car will be better able to predict the pedestrian’s intentions, and can more easily proceed along with its current driving plan."
},
{
"code": null,
"e": 4682,
"s": 4503,
"text": "In a sense, by predicting uncertainty, decision-making systems can “weight” the expected values they estimate according to how uncertain they predict these expected values to be."
},
{
"code": null,
"e": 4833,
"s": 4682,
"text": "You may be wondering — why should I consider using GPR instead of a different supervised learning model? Below, I enumerate a few comparative reasons."
},
{
"code": null,
"e": 6171,
"s": 4833,
"text": "GPR is non-parametric. This means it learns largely from the data itself, rather than by learning an extensive set of parameters. This is especially advantageous because this results in GPR models not being as data-hungry as highly parametric models, such as neural networks, i.e. they don’t need as many samples to achieve strong generalizability.For interpolation and prediction tasks, GPR estimates both expected values and uncertainty. This is especially beneficial for decision-making systems that take this uncertainty into account when making decisions.GPR is a linear smoother [5] — from a supervised learning lens, this can be conceptualized as a regularization technique. From a Bayesian lens, this is equivalent to imposing a prior on your model that all targets on test points must be linear combinations of existing training targets. This attribute helps GPR to generalize to unseen data, so long as the true unseen targets can be represented as linear combinations of training targets.With automatic differentiation backend frameworks such as torch and tensorflow, which are integrated through GPR packages such as gpytorch and gpflow, GPR is lightning fast and scalable. This is particularly true for batched models. For an example case study of this, please see my previous article on batched, multi-dimensional GPR here!"
},
{
"code": null,
"e": 6520,
"s": 6171,
"text": "GPR is non-parametric. This means it learns largely from the data itself, rather than by learning an extensive set of parameters. This is especially advantageous because this results in GPR models not being as data-hungry as highly parametric models, such as neural networks, i.e. they don’t need as many samples to achieve strong generalizability."
},
{
"code": null,
"e": 6733,
"s": 6520,
"text": "For interpolation and prediction tasks, GPR estimates both expected values and uncertainty. This is especially beneficial for decision-making systems that take this uncertainty into account when making decisions."
},
{
"code": null,
"e": 7173,
"s": 6733,
"text": "GPR is a linear smoother [5] — from a supervised learning lens, this can be conceptualized as a regularization technique. From a Bayesian lens, this is equivalent to imposing a prior on your model that all targets on test points must be linear combinations of existing training targets. This attribute helps GPR to generalize to unseen data, so long as the true unseen targets can be represented as linear combinations of training targets."
},
{
"code": null,
"e": 7512,
"s": 7173,
"text": "With automatic differentiation backend frameworks such as torch and tensorflow, which are integrated through GPR packages such as gpytorch and gpflow, GPR is lightning fast and scalable. This is particularly true for batched models. For an example case study of this, please see my previous article on batched, multi-dimensional GPR here!"
},
{
"code": null,
"e": 7691,
"s": 7512,
"text": "Below, we introduce several Python machine learning packages for scalable, efficient, and modular implementations of Gaussian Process Regression. Let’s walk through each of them!"
},
{
"code": null,
"e": 8040,
"s": 7691,
"text": "This is a great package for getting started with GPR. It allows for some model flexibility, and is able to carry out hyperparameter optimization and defining likelihoods under the hood. To use sklearn with your datasets, please make sure your datasets can be represented numerically with np.array objects. The main steps for using GPR with sklearn:"
},
{
"code": null,
"e": 9132,
"s": 8040,
"text": "Preprocess your data. Training data (np.array) can be represented as a (x_train, y_train) tuple with x_train shape (N, D) and y_train shape (N, 1), where N is the number of samples, and D is the dimension of the features. Your test points (np.array) can be represented as x_test with shape (N, D).Define your covariance function. In the code segment below, we use a Radial Basis Function (RBF) kernel RBF along with additive noise using a WhiteKernel.Define your GaussianProcessRegressor object using your covariance function, and a random state that seeds your GPR. This random_state is important for ensuring reproducibility.Fit your gpr object using the method gpr.fit(x_train, y_train). This “trains your model”, and optimizes the hyperparameters of your gpr object using gradient methods such as lbfgs, a second-order Hessian-based optimization routine.Predict the mean and covariance of the targets on your test points x_test using the method gpr.predict(x_test, return_std=True). This gives you both a predicted value, as well as a measure of the uncertainty for this predicted point."
},
{
"code": null,
"e": 9430,
"s": 9132,
"text": "Preprocess your data. Training data (np.array) can be represented as a (x_train, y_train) tuple with x_train shape (N, D) and y_train shape (N, 1), where N is the number of samples, and D is the dimension of the features. Your test points (np.array) can be represented as x_test with shape (N, D)."
},
{
"code": null,
"e": 9585,
"s": 9430,
"text": "Define your covariance function. In the code segment below, we use a Radial Basis Function (RBF) kernel RBF along with additive noise using a WhiteKernel."
},
{
"code": null,
"e": 9762,
"s": 9585,
"text": "Define your GaussianProcessRegressor object using your covariance function, and a random state that seeds your GPR. This random_state is important for ensuring reproducibility."
},
{
"code": null,
"e": 9994,
"s": 9762,
"text": "Fit your gpr object using the method gpr.fit(x_train, y_train). This “trains your model”, and optimizes the hyperparameters of your gpr object using gradient methods such as lbfgs, a second-order Hessian-based optimization routine."
},
{
"code": null,
"e": 10228,
"s": 9994,
"text": "Predict the mean and covariance of the targets on your test points x_test using the method gpr.predict(x_test, return_std=True). This gives you both a predicted value, as well as a measure of the uncertainty for this predicted point."
},
{
"code": null,
"e": 10285,
"s": 10228,
"text": "To install dependencies for the example below using pip:"
},
{
"code": null,
"e": 10327,
"s": 10285,
"text": "pip install scikit-learn numpy matplotlib"
},
{
"code": null,
"e": 10411,
"s": 10327,
"text": "Here is an example that fits and predicts a one-dimensional sinusoid using sklearn:"
},
{
"code": null,
"e": 10648,
"s": 10411,
"text": "This package is great for creating fully customizable, advanced, and accelerated GPR models that scale. This package supports everything from GPR model optimization via auto-differentiation to hardware acceleration via CUDA and PyKeOps."
},
{
"code": null,
"e": 10965,
"s": 10648,
"text": "It’s recommended you have some familiarity with PyTorch and/or auto-differentiation packages in python before working with GPyTorch, but the tutorials make this framework easy to learn and use. Data for GPRs in GPyTorch are represented as torch.tensor objects. Here are the steps for fitting a GPR model in GPyTorch:"
},
{
"code": null,
"e": 12357,
"s": 10965,
"text": "Preprocess your data. Training data can be represented as a (x_train, y_train) tuple with x_train shape (B, N, D) and y_train shape (B, N, 1), where B is the batch size, N is the number of samples, and D is the dimension of the features. Your test points can be represented as x_test with shape (B, N, D).Define your ExactGPModel by subclassing the gpytorch.models.ExactGP class. To subclass this model, you’ll need to define: (i) The constructor method, which specifies the mean and covariance functions of the model, (ii) The forward method, which describes how the GPR model makes predictions. To use batching, check out this tutorial here. To use prior distributions on your hyperparameters, check out this tutorial here.Specify your likelihood function, which your model uses to relate latent variables f to observed targets y.Instantiate your model using your likelihood and training data (x_train, y_train).Perform hyperparameter optimization (“training”) of your model using pytorch auto-differentiation. Once finished, ensure your model and likelihood are placed in posterior mode with model.eval() and likelihood.eval().Compute mean and variance predictions on your test points using your model by calling likelihood(model(x_test)). The inner function predicts latent test values f* from test inputs x*, and the outer function predicts mean and variance from latent test values f*."
},
{
"code": null,
"e": 12663,
"s": 12357,
"text": "Preprocess your data. Training data can be represented as a (x_train, y_train) tuple with x_train shape (B, N, D) and y_train shape (B, N, 1), where B is the batch size, N is the number of samples, and D is the dimension of the features. Your test points can be represented as x_test with shape (B, N, D)."
},
{
"code": null,
"e": 13084,
"s": 12663,
"text": "Define your ExactGPModel by subclassing the gpytorch.models.ExactGP class. To subclass this model, you’ll need to define: (i) The constructor method, which specifies the mean and covariance functions of the model, (ii) The forward method, which describes how the GPR model makes predictions. To use batching, check out this tutorial here. To use prior distributions on your hyperparameters, check out this tutorial here."
},
{
"code": null,
"e": 13192,
"s": 13084,
"text": "Specify your likelihood function, which your model uses to relate latent variables f to observed targets y."
},
{
"code": null,
"e": 13275,
"s": 13192,
"text": "Instantiate your model using your likelihood and training data (x_train, y_train)."
},
{
"code": null,
"e": 13492,
"s": 13275,
"text": "Perform hyperparameter optimization (“training”) of your model using pytorch auto-differentiation. Once finished, ensure your model and likelihood are placed in posterior mode with model.eval() and likelihood.eval()."
},
{
"code": null,
"e": 13754,
"s": 13492,
"text": "Compute mean and variance predictions on your test points using your model by calling likelihood(model(x_test)). The inner function predicts latent test values f* from test inputs x*, and the outer function predicts mean and variance from latent test values f*."
},
{
"code": null,
"e": 13811,
"s": 13754,
"text": "To install dependencies for the example below using pip:"
},
{
"code": null,
"e": 13905,
"s": 13811,
"text": "pip install gpytorch torch matplotlib numpy# (Optional) - Installs pykeopspip install pykeops"
},
{
"code": null,
"e": 13980,
"s": 13905,
"text": "Here is an example to fit a noisy one-dimensional sinusoid using gpytorch:"
},
{
"code": null,
"e": 14427,
"s": 13980,
"text": "Another GPR package that supports automatic differentiation (this time in tensorflow), GPFlow has extensive functionality built-in for creating fully-customizable models, likelihood functions, kernels, and optimization and inference routines. In addition to GPR, GPFlow has built-in functionality for a variety of other state-of-the-art problems in Bayesian Optimization, such as Variational Fourier Features and Convolutional Gaussian Processes."
},
{
"code": null,
"e": 14688,
"s": 14427,
"text": "It’s recommended you have some familiarity with TensorFlow and/or auto-differentiation packages in Python before working with GPFlow. Data for GPRs in GPFlow are represented as tf.tensor objects. To get started with GPFlow, please check out this examples link."
},
{
"code": null,
"e": 15021,
"s": 14688,
"text": "This package has Python implementations for a multitude of GPR models, likelihood functions, and inference procedures. Though this package doesn’t have the same auto-differentiation backends that power gpytorch and gpflow, this package’s versatility, modularity, and customizability make it a valuable resource for implementing GPR."
},
{
"code": null,
"e": 15211,
"s": 15021,
"text": "Pyro is a probabilistic programming package that can be integrated with Python that also supports Gaussian Process Regression, as well as advanced applications such as Deep Kernel Learning."
},
{
"code": null,
"e": 15728,
"s": 15211,
"text": "Gen is another probabilistic programming package built on top of Julia. Gen offers several advantages with Gaussian Process Regression: (i) It builds in proposal distributions, which can help to narrow down a search space by effectively imposing a prior on the set of possible solutions, (ii) It has an easy API for sampling traces from fit GPR models, (iii) As is the goal for many probabilistic programming languages, it enables for easily creating hierarchical models for tuning the priors of GPR hyperparameters."
},
{
"code": null,
"e": 16048,
"s": 15728,
"text": "Stan is another probabilistic programming package that can be integrated with Python, but also supports other languages such as R, MATLAB, Julia, and Stata. In addition to having functionality built-in for Gaussian Process Regression, Stan also supports a variety of other Bayesian inference and sampling functionality."
},
{
"code": null,
"e": 16265,
"s": 16048,
"text": "Built by the creators of GPyTorch, BoTorch is a Bayesian Optimization library that supports many of the same GPR techniques, as well as advanced Bayesian Optimization techniques and analytic test suites, as GPyTorch."
},
{
"code": null,
"e": 16568,
"s": 16265,
"text": "In this article, we reviewed the theory behind Gaussian Process Regression (GPR), introduced and discussed the types of problems GPR can be used to solve, discussed how GPR compares to other supervised learning algorithms, and walked through how we can implement GPR using sklearn, gpytorch, or gpflow."
},
{
"code": null,
"e": 16716,
"s": 16568,
"text": "To see more articles in reinforcement learning, machine learning, computer vision, robotics, and teaching, please follow me! Thank you for reading!"
},
{
"code": null,
"e": 16996,
"s": 16716,
"text": "Thank you to CODECOGS for their inline equation rendering tool, Carl Edward Rasmussen for open-sourcing the textbook Gaussian Processes for Machine Learning [5], and for Scikit-Learn, GPyTorch, GPFlow, and GPy for open-sourcing their Gaussian Process Regression Python libraries."
},
{
"code": null,
"e": 17133,
"s": 16996,
"text": "[1] Pedregosa, Fabian, et al. “Scikit-learn: Machine learning in Python.” the Journal of machine Learning research 12 (2011): 2825–2830."
},
{
"code": null,
"e": 17288,
"s": 17133,
"text": "[3] Gardner, Jacob R., et al. “Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration.” arXiv preprint arXiv:1809.11165 (2018)."
},
{
"code": null,
"e": 17424,
"s": 17288,
"text": "[3] Matthews, Alexander G. de G., et al. “GPflow: A Gaussian Process Library using TensorFlow.” J. Mach. Learn. Res. 18.40 (2017): 1–6."
},
{
"code": null,
"e": 17475,
"s": 17424,
"text": "[4] GPy, “GPy.” http://github.com/SheffieldML/GPy."
},
{
"code": null,
"e": 17639,
"s": 17475,
"text": "[5] Carl Edward Rasmussen and Christopher K. I. Williams. 2005. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press."
},
{
"code": null,
"e": 17916,
"s": 17639,
"text": "[6] Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. 2019. Pyro: deep universal probabilistic programming. J. Mach. Learn. Res. 20, 1 (January 2019), 973–978."
},
{
"code": null,
"e": 18185,
"s": 17916,
"text": "[7] Gen: A General-Purpose Probabilistic Programming System with Programmable Inference. Cusumano-Towner, M. F.; Saad, F. A.; Lew, A.; and Mansinghka, V. K. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ‘19)."
},
{
"code": null,
"e": 18305,
"s": 18185,
"text": "[8] Stan Development Team. 2021. Stan Modeling Language Users Guide and Reference Manual, VERSION. https://mc-stan.org."
}
] |
Higher order components in React.js | A higher-order component (HOC for short) is a pattern in which a function receives a component and returns a new component with added features.
//hoc is the name of a custom JavaScript function
const AddOnComponent= hoc(SimpleComponent);
We use components with state and/or props to build a UI. In a similar way, an HOC builds a new component from the provided component.
HOCs are used to handle cross-cutting concerns in React: the components take care of the individual responsibility of single tasks, while the HOC functions take care of the cross-cutting concerns.
Connect function from redux is an example of hoc.
Display welcome message to customer or admin based on user type.
class App extends React.Component{
   render(){
      return (
         <UserWelcome message={this.props.message} userType={this.props.userType} />
      );
   }
}
class UserWelcome extends React.Component{
   render(){
      return(
         <div>Welcome {this.props.message}</div>
      );
   }
}
const userSpecificMessage=(WrappedComponent)=>{
   return class extends React.Component{
      render(){
         if(this.props.userType==='customer'){
            return(
               <WrappedComponent {...this.props}/>
            );
         } else {
            return(
               <div> Welcome Admin </div>
            );
         }
      }
   }
}
In UserWelcome, we simply display the message passed down by the parent component App.
The UserWelcome component is wrapped by the HOC userSpecificMessage, which receives the props intended for the wrapped component.
The HOC userSpecificMessage decides which message to display based on the type of user.
If the type of the user is customer, it renders the wrapped component with its message as-is; otherwise it renders a 'Welcome Admin' message by default.
In this way, we can move the common functionality required by components into an HOC and use it wherever required.
This allows code reuse and keeps each component focused on its individual task.
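The HOC pattern is the component-level analogue of a function decorator: a wrapper takes an existing unit of behaviour and returns a new one with a cross-cutting concern added. As a rough sketch of the same wrap-and-extend idea outside React (a hypothetical Python illustration — the names user_specific_message and user_welcome mirror the article's components but are our own):

```python
def user_specific_message(wrapped):
    """Wrap a render function, adding the user-type check as a cross-cutting concern."""
    def wrapper(props):
        if props.get("userType") == "customer":
            return wrapped(props)   # delegate to the wrapped unit, props passed through
        return "Welcome Admin"      # default message for non-customers
    return wrapper

@user_specific_message
def user_welcome(props):
    # The "wrapped component": only knows how to render its own message.
    return "Welcome " + props["message"]

print(user_welcome({"userType": "customer", "message": "John"}))  # Welcome John
print(user_welcome({"userType": "admin", "message": "John"}))     # Welcome Admin
```

As with an HOC, user_welcome keeps its single responsibility, while the wrapper owns the user-type decision and can be reused around any render function.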
{
"code": null,
"e": 1207,
"s": 1062,
"text": "Higher order component in short called as hoc. It’s a pattern which receives a component and returns a new component with add-on features to it."
},
{
"code": null,
"e": 1257,
"s": 1207,
"text": "//hoc is the name of a custom JavaScript function"
},
{
"code": null,
"e": 1301,
"s": 1257,
"text": "const AddOnComponent= hoc(SimpleComponent);"
},
{
"code": null,
"e": 1428,
"s": 1301,
"text": "We use component with state and/or props to build an UI. Similar way a hoc builds a new component from the provided component."
},
{
"code": null,
"e": 1621,
"s": 1428,
"text": "Use of hoc is making a cross cutting concerns in React. The components will take care of individual responsibility of single tasks while hoc functions will take care of cross cutting concerns."
},
{
"code": null,
"e": 1671,
"s": 1621,
"text": "Connect function from redux is an example of hoc."
},
{
"code": null,
"e": 1736,
"s": 1671,
"text": "Display welcome message to customer or admin based on user type."
},
{
"code": null,
"e": 2346,
"s": 1736,
"text": "Class App extends React.Component{\n render(){\n return (\n <UserWelcome message={this.props.message} userType={this.props.userType} />\n );\n }\n}\nClass UserWelcome extends React.Component{\n render(){\n return(\n <div>Welcome {this.props.message}</div>\n );\n }\n}\nconst userSpecificMessage=(WrappedComponent)=>{\n return class extends React.Component{\n render(){\n if(this.props.userType==’customer’){\n return(\n <WrappedComponent {...this.props}/>\n );\n } else {\n <div> Welcome Admin </div>\n }\n }\n }\n}"
},
{
"code": null,
"e": 2440,
"s": 2346,
"text": "In the UserWelcome, we are just displaying message to user passed by parent component App.js."
},
{
"code": null,
"e": 2563,
"s": 2440,
"text": "The UserComponent is wrapped by hoc userSpecificMessage which received the props from wrapped component i.e. UserComponent"
},
{
"code": null,
"e": 2651,
"s": 2563,
"text": "The hoc userSpecificMessage decides which message to display based on the type of user."
},
{
"code": null,
"e": 2806,
"s": 2651,
"text": "If the type of the user is customer it displays the message as it is. But if the user is not customer then it displays a Welcome Admin message by default."
},
{
"code": null,
"e": 2916,
"s": 2806,
"text": "With this way we can add the common functionality required by components in hoc and use it whenever required."
},
{
"code": null,
"e": 3000,
"s": 2916,
"text": "It allows code reuse and keeps the components clean with the individual tasks only."
}
] |
Create a 3D Text Effect using HTML and CSS - GeeksforGeeks | 03 Aug, 2021
The 3D text effect is one of the most widely used text effects in the web design world. As a designer or front-end developer, one should know how to create a 3D text effect. Today we will look at one of the simplest methods to give our text a 3D look. Approach: The 3D text effect is designed with the text-shadow property. The reason for applying multiple text-shadows is that a single shadow produces only one flat offset; stacking many shadows with steadily increasing vertical offsets (varying the X and Y coordinates and the blur radius), plus a final blurred drop shadow, builds up the extruded 3D look. Now let's look at the implementation of the above approach. HTML Code: In this section, we have used an <h1> tag with the word to which we want to apply the 3D effect.
html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>3D Text Effect</title>
</head>
<body>
    <h1>GeeksforGeeks</h1>
</body>
</html>
CSS Code:
Step 1: The first thing that we have done is to align the <h1> element to center and provide the body background.
Step 2: Now, apply a transition to h1 element. Duration can be adjusted according to your need.
Step 3: Now apply text shadow on h1 element. The concept of applying multiple text-shadow has already been explained in the approach at the starting of the article.
Tip: Here we have chosen to apply the effect only on mouse hover. If you want the effect to be visible all the time, remove the :hover selector and apply the text-shadow directly to the h1 rule.
CSS
<style>
    body {
        background: green;
    }

    h1 {
        margin: 300px auto;
        text-align: center;
        color: white;
        font-size: 8em;
        transition: 0.5s;
        font-family: Arial, Helvetica, sans-serif;
    }

    h1:hover {
        text-shadow: 0 1px 0 #ccc,
            0 2px 0 #ccc,
            0 3px 0 #ccc,
            0 4px 0 #ccc,
            0 5px 0 #ccc,
            0 6px 0 #ccc,
            0 7px 0 #ccc,
            0 8px 0 #ccc,
            0 9px 0 #ccc,
            0 10px 0 #ccc,
            0 11px 0 #ccc,
            0 12px 0 #ccc,
            0 20px 30px rgba(0, 0, 0, 0.5);
    }
</style>
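The shadow stack above follows a simple pattern — the same colour offset by 1px, 2px, …, 12px vertically, plus one final blurred drop shadow — so rather than typing each layer by hand, you could generate the value programmatically. A hypothetical Python helper (the function name and defaults are our own, not part of the article):

```python
def shadow_stack(layers=12, color="#ccc",
                 drop="0 20px 30px rgba(0, 0, 0, 0.5)"):
    """Build a text-shadow value: one flat layer per pixel of depth, then a blurred drop shadow."""
    parts = ["0 %dpx 0 %s" % (i, color) for i in range(1, layers + 1)]
    parts.append(drop)
    return "text-shadow: " + ", ".join(parts) + ";"

css = shadow_stack()
print(css)
```

Varying the layers argument changes the apparent depth of the extrusion; the generated declaration can be pasted straight into the h1:hover rule.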
Complete Code: In this section, we will combine the above two sections to create a 3D text animation effect on mouse hover.
HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>3D Text Effect</title>
    <style>
        body {
            background: green;
        }

        h1 {
            margin: 300px auto;
            text-align: center;
            color: white;
            font-size: 8em;
            transition: 0.5s;
            font-family: Arial, Helvetica, sans-serif;
        }

        h1:hover {
            text-shadow: 0 1px 0 #ccc,
                0 2px 0 #ccc,
                0 3px 0 #ccc,
                0 4px 0 #ccc,
                0 5px 0 #ccc,
                0 6px 0 #ccc,
                0 7px 0 #ccc,
                0 8px 0 #ccc,
                0 9px 0 #ccc,
                0 10px 0 #ccc,
                0 11px 0 #ccc,
                0 12px 0 #ccc,
                0 20px 30px rgba(0, 0, 0, 0.5);
        }
    </style>
</head>
<body>
    <h1>GeeksforGeeks</h1>
</body>
</html>
Output:
sumitgumber28
CSS-Basics
HTML-Misc
CSS
HTML
Web Technologies
Web technologies Questions
HTML
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Create a Responsive Navbar using ReactJS
Design a web page using HTML and CSS
Form validation using jQuery
How to apply style to parent if it has child with CSS?
How to auto-resize an image to fit a div container using CSS?
How to set the default value for an HTML <select> element ?
How to set input type date in dd-mm-yyyy format using HTML ?
Hide or show elements in HTML using display property
How to Insert Form Data into Database using PHP ?
REST API (Introduction) | [
{
"code": null,
"e": 24812,
"s": 24784,
"text": "\n03 Aug, 2021"
},
{
"code": null,
"e": 25645,
"s": 24812,
"text": "The 3D text effect is one of the most used text effects in the web design world. As a designer or front-end developer one should know how to create a 3D text effect. Today we will be looking at one of the simplest and easy methods to create our text in a 3D look.Approach: The 3D text animation effect is designed by text-shadow property. The reason to apply multiple text-shadow is to give a 3D look as if we apply only single (or unitary) text-shadow it will be the same for all the alphabets present in the word. But for the 3D effect, we want a different thickness of shadow for each alphabet and at each angle(basically X and Y coordinates and radius of blur). Now let’s look at the implementation of the above approach.HTML Code: In this section, we have used a <h1> tag with the word to which we want to apply the 3D effect. "
},
{
"code": null,
"e": 25650,
"s": 25645,
"text": "html"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"> <head> <meta charset=\"UTF-8\" /> <meta name=\"viewport\" content= \"width=device-width, initial-scale=1.0\" /> <title>3D Text Effect</title></head> <body> <h1>GeeksforGeeks</h1></body> </html>",
"e": 25889,
"s": 25650,
"text": null
},
{
"code": null,
"e": 25901,
"s": 25889,
"text": "CSS Code: "
},
{
"code": null,
"e": 26015,
"s": 25901,
"text": "Step 1: The first thing that we have done is to align the <h1> element to center and provide the body background."
},
{
"code": null,
"e": 26111,
"s": 26015,
"text": "Step 2: Now, apply a transition to h1 element. Duration can be adjusted according to your need."
},
{
"code": null,
"e": 26276,
"s": 26111,
"text": "Step 3: Now apply text shadow on h1 element. The concept of applying multiple text-shadow has already been explained in the approach at the starting of the article."
},
{
"code": null,
"e": 26439,
"s": 26276,
"text": "Tip: We have to choose to apply an effect to be visible only on mouse hover but if you want the effect to be visible all the time then remove the hover selector. "
},
{
"code": null,
"e": 26443,
"s": 26439,
"text": "CSS"
},
{
"code": "<style> body { background: green; } h1 { margin: 300px auto; text-align: center; color: white; font-size: 8em; transition: 0.5s; font-family: Arial, Helvetica, sans-serif; } h1:hover { text-shadow: 0 1px 0 #ccc, 0 2px 0 #ccc, 0 3px 0 #ccc, 0 4px 0 #ccc, 0 5px 0 #ccc, 0 6px 0 #ccc, 0 7px 0 #ccc, 0 8px 0 #ccc, 0 9px 0 #ccc, 0 10px 0 #ccc, 0 11px 0 #ccc, 0 12px 0 #ccc, 0 20px 30px rgba(0, 0, 0, 0.5); }</style>",
"e": 27050,
"s": 26443,
"text": null
},
{
"code": null,
"e": 27180,
"s": 27050,
"text": "Complete Code: Tn this section, we will be combining the above two sections to create a 3D text animation effect on mouse hover. "
},
{
"code": null,
"e": 27185,
"s": 27180,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"> <head> <meta charset=\"UTF-8\" /> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" /> <title>3D Text Effect</title> <style> body { background: green; } h1 { margin: 300px auto; text-align: center; color: white; font-size: 8em; transition: 0.5s; font-family: Arial, Helvetica, sans-serif; } h1:hover { text-shadow: 0 1px 0 #ccc, 0 2px 0 #ccc, 0 3px 0 #ccc, 0 4px 0 #ccc, 0 5px 0 #ccc, 0 6px 0 #ccc, 0 7px 0 #ccc, 0 8px 0 #ccc, 0 9px 0 #ccc, 0 10px 0 #ccc, 0 11px 0 #ccc, 0 12px 0 #ccc, 0 20px 30px rgba(0, 0, 0, 0.5); } </style></head> <body> <h1>GeeksforGeeks</h1></body> </html>",
"e": 28057,
"s": 27185,
"text": null
},
{
"code": null,
"e": 28067,
"s": 28057,
"text": "Output: "
},
{
"code": null,
"e": 28206,
"s": 28069,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 28220,
"s": 28206,
"text": "sumitgumber28"
},
{
"code": null,
"e": 28231,
"s": 28220,
"text": "CSS-Basics"
},
{
"code": null,
"e": 28241,
"s": 28231,
"text": "HTML-Misc"
},
{
"code": null,
"e": 28245,
"s": 28241,
"text": "CSS"
},
{
"code": null,
"e": 28250,
"s": 28245,
"text": "HTML"
},
{
"code": null,
"e": 28267,
"s": 28250,
"text": "Web Technologies"
},
{
"code": null,
"e": 28294,
"s": 28267,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 28299,
"s": 28294,
"text": "HTML"
},
{
"code": null,
"e": 28397,
"s": 28299,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28438,
"s": 28397,
"text": "Create a Responsive Navbar using ReactJS"
},
{
"code": null,
"e": 28475,
"s": 28438,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 28504,
"s": 28475,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 28559,
"s": 28504,
"text": "How to apply style to parent if it has child with CSS?"
},
{
"code": null,
"e": 28621,
"s": 28559,
"text": "How to auto-resize an image to fit a div container using CSS?"
},
{
"code": null,
"e": 28681,
"s": 28621,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 28742,
"s": 28681,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 28795,
"s": 28742,
"text": "Hide or show elements in HTML using display property"
},
{
"code": null,
"e": 28845,
"s": 28795,
"text": "How to Insert Form Data into Database using PHP ?"
}
] |
Chain processes vs Fan of processes using fork() function in C - GeeksforGeeks | 07 Jul, 2021
Fork System Call: The fork system call is used for creating a new process, which is called the child process, which runs concurrently with the process that makes the fork() call (parent process). After a new child process is created, both processes will execute the next instruction following the fork() system call. A child process uses the same program counter, same CPU registers, the same open files which are used in the parent process.
To create a fan or chain of processes, first include the header file “unistd.h” to use the fork() function for creating processes. To use the exit() function, include “stdlib.h”; there are 3 ways its status argument can be used in the program:
exit(0): For normal termination
exit(1): For abnormal termination
exit(status): Any other status value can also be passed; by convention, zero indicates success and non-zero indicates abnormal termination.
Suppose there are three processes, and the first process is the parent process, and it is creating the child process, then this second process creates another process (third process), then the first process becomes the parent of the second one and the second process becomes the parent of the third process. So, a chain of processes is obtained, and that’s called a chain of processes.
Chain of 3 processes
Below is the implementation to create a chain of processes using the fork():
C
// C program to create a chain of process
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Driver Code
int main()
{
    int pid;

    // Iterate in the range [0, 2]
    for (int i = 0; i < 3; i++) {
        pid = fork();
        if (pid > 0) {
            // Print the parent process
            printf("Parent process ID is %d\n", getpid());
            break;
        }
        else {
            // Print the child process
            printf("Child process ID is %d\n", getpid());
        }
    }
    return 0;
}
Parent process ID is 1359
Child process ID is 1360
Parent process ID is 1360
Child process ID is 1360
Child process ID is 1361
Parent process ID is 1361
Child process ID is 1360
Child process ID is 1361
Child process ID is 1362
Explanation: In the above program, getpid() is used to get the process ID; alternatively, getppid() can be used to get the parent process ID. The output differs each time the program is run, because the operating system does not allocate the same process IDs to the processes on every run, but within a run the PIDs (process IDs) are handed out in non-decreasing order.
Suppose there are three processes, and the first process is the parent process, and it is creating two child processes. If both of the processes have the same parent, then it is a fan of processes.
Fan of 3 processes
Below is the implementation to create a fan of processes using fork():
C
// C program to create a fan of processes
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

// Driver Code
int main()
{
    // Iterate in the range [0, 2]
    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {
            // getpid gives process id
            // getppid gives parent process id
            printf("child pid %d from the"
                   " parent pid %d\n",
                   getpid(), getppid());

            // Set Normal termination of
            // the program
            exit(0);
        }
    }
    for (int i = 0; i < 3; i++)
        wait(NULL);
}
child pid 29831 from the parent pid 29830
child pid 29832 from the parent pid 29830
child pid 29833 from the parent pid 29830
Explanation: In the above program as well, the operating system does not allocate the same process IDs on every run, so the output differs each time the program is run; however, all the child processes created have the same parent process ID.
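The same fan structure can be reproduced in any language that exposes fork(). For instance, a hypothetical Python sketch using os.fork (POSIX-only; the helper name make_fan is our own): each child exits immediately, so all children share the one original parent.

```python
import os

def make_fan(n):
    """Fork n children of the calling process and wait for them; return the child PIDs."""
    children = []
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            # Child: it has the same parent as its siblings; exit before forking again.
            os._exit(0)
        children.append(pid)       # Parent: remember each child's PID
    for pid in children:
        os.waitpid(pid, 0)         # Reap all children, as wait(NULL) does in the C version
    return children

pids = make_fan(3)
print(pids)
```

Because only the parent ever runs the loop to completion, the result is a fan of three siblings rather than a chain; moving the fork into the child instead would produce the chain shape from the first example.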
Operating Systems-Process Management
system-programming
C Language
C Programs
Operating Systems
Operating Systems
TCP Server-Client implementation in C
Exception Handling in C++
Multithreading in C
'this' pointer in C++
Arrow operator -> in C/C++ with Examples
Strings in C
Arrow operator -> in C/C++ with Examples
UDP Server-Client implementation in C
C Program to read contents of Whole File
Header files in C/C++ and its uses | [
{
"code": null,
"e": 24234,
"s": 24206,
"text": "\n07 Jul, 2021"
},
{
"code": null,
"e": 24676,
"s": 24234,
"text": "Fork System Call: The fork system call is used for creating a new process, which is called the child process, which runs concurrently with the process that makes the fork() call (parent process). After a new child process is created, both processes will execute the next instruction following the fork() system call. A child process uses the same program counter, same CPU registers, the same open files which are used in the parent process."
},
{
"code": null,
"e": 24924,
"s": 24676,
"text": "For creating a fan or chain of processes, first, insert the header file “unistd.h” to use the fork() function for creating the process. For using the exit() method include “stdlib.h” and there are 3 ways exit statement can be used in the program: "
},
{
"code": null,
"e": 24956,
"s": 24924,
"text": "exit(0): For normal termination"
},
{
"code": null,
"e": 24990,
"s": 24956,
"text": "exit(1): For abnormal termination"
},
{
"code": null,
"e": 25029,
"s": 24990,
"text": "exit(): It can be normal and abnormal."
},
{
"code": null,
"e": 25415,
"s": 25029,
"text": "Suppose there are three processes, and the first process is the parent process, and it is creating the child process, then this second process creates another process (third process), then the first process becomes the parent of the second one and the second process becomes the parent of the third process. So, a chain of processes is obtained, and that’s called a chain of processes."
},
{
"code": null,
"e": 25436,
"s": 25415,
"text": "Chain of 3 processes"
},
{
"code": null,
"e": 25513,
"s": 25436,
"text": "Below is the implementation to create a chain of processes using the fork():"
},
{
"code": null,
"e": 25515,
"s": 25513,
"text": "C"
},
{
"code": "// C program to create a chain of process#include <stdio.h>#include <stdlib.h>#include <sys/types.h>#include <sys/wait.h>#include <unistd.h> // Driver Codeint main(){ int pid; // Iterate in the range [0, 2] for (int i = 0; i < 3; i++) { pid = fork(); if (pid > 0) { // Print the parent process printf(\"Parent process ID is %d\\n\", getpid()); break; } else { // Print the child process printf(\"Child process ID is %d\\n\", getpid()); } } return 0;}",
"e": 26108,
"s": 25515,
"text": null
},
{
"code": null,
"e": 26336,
"s": 26108,
"text": "Parent process ID is 1359\nChild process ID is 1360\nParent process ID is 1360\nChild process ID is 1360\nChild process ID is 1361\nParent process ID is 1361\nChild process ID is 1360\nChild process ID is 1361\nChild process ID is 1362"
},
{
"code": null,
"e": 26739,
"s": 26336,
"text": "Explanation: In the above program, getpid() is used to get the process ID. Alternately, getppid() can also be used which will get a parent process ID. Each time when the program is run, the output is different, because not every time the same process id is allocated to the process by the operating system, but the PID(process id) is always obtained in non-decreasing order whenever the program is run."
},
{
"code": null,
"e": 26937,
"s": 26739,
"text": "Suppose there are three processes, and the first process is the parent process, and it is creating two child processes. If both of the processes have the same parent, then it is a fan of processes."
},
{
"code": null,
"e": 26956,
"s": 26937,
"text": "Fan of 3 processes"
},
{
"code": null,
"e": 27027,
"s": 26956,
"text": "Below is the implementation to create a fan of processes using fork():"
},
{
"code": null,
"e": 27029,
"s": 27027,
"text": "C"
},
{
"code": "// C program to create a fan of processes #include <stdio.h>#include <stdlib.h>#include <sys/wait.h>#include <unistd.h> // Driver Codeint main(){ // Iterate in the range [0, 2] for (int i = 0; i < 3; i++) { if (fork() == 0) { // getpid gives process id // getppid gives parent process id printf(\"child pid %d from the\" \" parent pid %d\\n\", getpid(), getppid()); // Set Normal termination of // the program exit(0); } } for (int i = 0; i < 3; i++) wait(NULL);}",
"e": 27629,
"s": 27029,
"text": null
},
{
"code": null,
"e": 27743,
"s": 27629,
"text": "child pid 29831 from parent pid 29830\nchild pid 29832 from parent pid 29830\nchild pid 29833 from parent pid 29830"
},
{
"code": null,
"e": 28027,
"s": 27743,
"text": "Explanation: In the above program, every time the same process ID is not allocated every time to process by the operating system. That means in this case, also the output is different each time the program is run, but the child processes created will have the same parent process ID."
},
{
"code": null,
"e": 28064,
"s": 28027,
"text": "Operating Systems-Process Management"
},
{
"code": null,
"e": 28083,
"s": 28064,
"text": "system-programming"
},
{
"code": null,
"e": 28094,
"s": 28083,
"text": "C Language"
},
{
"code": null,
"e": 28105,
"s": 28094,
"text": "C Programs"
},
{
"code": null,
"e": 28123,
"s": 28105,
"text": "Operating Systems"
},
{
"code": null,
"e": 28141,
"s": 28123,
"text": "Operating Systems"
},
{
"code": null,
"e": 28239,
"s": 28141,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28277,
"s": 28239,
"text": "TCP Server-Client implementation in C"
},
{
"code": null,
"e": 28303,
"s": 28277,
"text": "Exception Handling in C++"
},
{
"code": null,
"e": 28323,
"s": 28303,
"text": "Multithreading in C"
},
{
"code": null,
"e": 28345,
"s": 28323,
"text": "'this' pointer in C++"
},
{
"code": null,
"e": 28386,
"s": 28345,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 28399,
"s": 28386,
"text": "Strings in C"
},
{
"code": null,
"e": 28440,
"s": 28399,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 28478,
"s": 28440,
"text": "UDP Server-Client implementation in C"
},
{
"code": null,
"e": 28519,
"s": 28478,
"text": "C Program to read contents of Whole File"
}
] |
How to list all functions in a Python module? | You can use the dir(module) to get all the attributes/methods of a module. For example,
>>> import math
>>> dir(math)
['__doc__', '__name__', '__package__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'hypot', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']
But here as you can see attributes of the module(__name__, __doc__, etc) are also listed. You can create a simple function that filters these out using the isfunction predicate and getmembers(module, predicate) to get the members of a module. For example,
>>> from inspect import getmembers, isfunction
>>> import helloworld
>>> print [o[0] for o in getmembers(helloworld) if isfunction(o[1])]
['hello_world']
Note that this doesn't work for built-in modules, because the type of the functions in those modules is not function but builtin function, so isfunction returns False for them; use the isbuiltin predicate for those instead.
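Putting that together, a small helper that works for both pure-Python and built-in modules might look like this (the function name list_functions is our own):

```python
from inspect import getmembers, isfunction, isbuiltin

def list_functions(module):
    """Return the names of a module's functions, covering plain and built-in functions."""
    return [name for name, obj in getmembers(module)
            if isfunction(obj) or isbuiltin(obj)]

# Works on a C-implemented module, where isfunction alone would return nothing:
import math
print(list_functions(math))
```

Non-function attributes such as math.pi are filtered out, since they match neither predicate.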
{
"code": null,
"e": 1150,
"s": 1062,
"text": "You can use the dir(module) to get all the attributes/methods of a module. For example,"
},
{
"code": null,
"e": 1575,
"s": 1150,
"text": ">>> import math\n>>> dir(math)\n['__doc__', '__name__', '__package__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'hypot', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']"
},
{
"code": null,
"e": 1831,
"s": 1575,
"text": "But here as you can see attributes of the module(__name__, __doc__, etc) are also listed. You can create a simple function that filters these out using the isfunction predicate and getmembers(module, predicate) to get the members of a module. For example,"
},
{
"code": null,
"e": 1982,
"s": 1831,
"text": ">>> from inspect import getmembers, isfunction\n>>> import helloworld\n>>> print [o for o in getmembers(helloworld) if isfunction(o[1])]\n['hello_world']"
},
{
"code": null,
"e": 2113,
"s": 1982,
"text": "Note that this doesn't work for built in modules as the type of functions for those modules is not function but built in function."
}
] |
Extract day, hour, minute, etc. from a datetime column in PostgreSQL? | Let us create a new table containing a single timestamp column −
CREATE TABLE timestamp_test(
ts timestamp
);
Now let us populate it with some data −
INSERT INTO timestamp_test(ts)
VALUES(current_timestamp),
(current_timestamp+interval '5 days'),
(current_timestamp-interval '18 hours'),
(current_timestamp+interval '1 year'),
(current_timestamp+interval '3 minutes'),
(current_timestamp-interval '6 years');
If you query the table (SELECT * from timestamp_test), you will see the following output −
Now, in order to extract hour, minute, etc. from the timestamp column, we use the EXTRACT function. Some examples are shown below −
SELECT EXTRACT(HOUR from ts) as hour from timestamp_test
Output −
Similarly −
SELECT EXTRACT(MONTH from ts) as month from timestamp_test
You can also extract not-so-obvious values like the ISO week, or the century −
SELECT EXTRACT(CENTURY from ts) as century, EXTRACT(WEEK from ts) as week from timestamp_test
To get a complete list of values you can extract from a timestamp column, see https://www.postgresql.org/docs/9.1/functions-datetime.html | [
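If you are consuming these timestamps client-side, the same fields can also be pulled out of a Python datetime object — a rough equivalent of EXTRACT for a few of the fields used above (a hypothetical helper, not a PostgreSQL feature):

```python
from datetime import datetime

def extract(field, ts):
    """Mimic a few EXTRACT(field FROM ts) fields on a Python datetime."""
    field = field.lower()
    if field == "hour":
        return ts.hour
    if field == "month":
        return ts.month
    if field == "week":                 # ISO week number, as PostgreSQL's WEEK field
        return ts.isocalendar()[1]
    if field == "century":              # PostgreSQL counts 2001-2100 as the 21st century
        return (ts.year - 1) // 100 + 1
    raise ValueError("unsupported field: %s" % field)

ts = datetime(2021, 3, 15, 14, 30)
print(extract("hour", ts), extract("month", ts),
      extract("week", ts), extract("century", ts))
```

For anything not covered here, extracting in SQL with EXTRACT keeps the logic next to the data and supports the full field list from the linked documentation.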
{
"code": null,
"e": 1127,
"s": 1062,
"text": "Let us create a new table containing a single timestamp column −"
},
{
"code": null,
"e": 1175,
"s": 1127,
"text": "CREATE TABLE timestamp_test(\n ts timestamp\n);"
},
{
"code": null,
"e": 1215,
"s": 1175,
"text": "Now let us populate it with some data −"
},
{
"code": null,
"e": 1474,
"s": 1215,
"text": "INSERT INTO timestamp_test(ts)\nVALUES(current_timestamp),\n(current_timestamp+interval '5 days'),\n(current_timestamp-interval '18 hours'),\n(current_timestamp+interval '1 year'),\n(current_timestamp+interval '3 minutes'),\n(current_timestamp-interval '6 years');"
},
{
"code": null,
"e": 1565,
"s": 1474,
"text": "If you query the table (SELECT * from timestamp_test), you will see the following output −"
},
{
"code": null,
"e": 1697,
"s": 1565,
"text": "Now, in order to extract hour, minute, etc. from the timestamp column, we use the EXTRACT function. Some examples are shown below −"
},
{
"code": null,
"e": 1754,
"s": 1697,
"text": "SELECT EXTRACT(HOUR from ts) as hour from timestamp_test"
},
{
"code": null,
"e": 1763,
"s": 1754,
"text": "Output −"
},
{
"code": null,
"e": 1775,
"s": 1763,
"text": "Similarly −"
},
{
"code": null,
"e": 1834,
"s": 1775,
"text": "SELECT EXTRACT(MONTH from ts) as month from timestamp_test"
},
{
"code": null,
"e": 1913,
"s": 1834,
"text": "You can also extract not-so-obvious values like the ISO week, or the century −"
},
{
"code": null,
"e": 2007,
"s": 1913,
"text": "SELECT EXTRACT(CENTURY from ts) as century, EXTRACT(WEEK from ts) as week from timestamp_test"
},
{
"code": null,
"e": 2146,
"s": 2007,
"text": "To get a complete list of values you can extract from a timestamp column, see https://www.postgresql.org/docs/9.1/functions-datetime.html "
}
] |
PL/SQL - Strings | The string in PL/SQL is actually a sequence of characters with an optional size specification. The characters could be numeric, letters, blank, special characters or a combination of all. PL/SQL offers three kinds of strings −
Fixed-length strings − In such strings, programmers specify the length while declaring the string. The string is right-padded with spaces to the length so specified.
Variable-length strings − In such strings, a maximum length up to 32,767, for the string is specified and no padding takes place.
Character large objects (CLOBs) − These are variable-length strings that can be up to 128 terabytes.
PL/SQL strings could be either variables or literals. A string literal is enclosed within quotation marks. For example,
'This is a string literal.' Or 'hello world'
To include a single quote inside a string literal, you need to type two single quotes next to one another. For example,
'this isn''t what it looks like'
Oracle database provides numerous string datatypes, such as CHAR, NCHAR, VARCHAR2, NVARCHAR2, CLOB, and NCLOB. The datatypes prefixed with an 'N' are 'national character set' datatypes, that store Unicode character data.
If you need to declare a variable-length string, you must provide the maximum length of that string. For example, the VARCHAR2 data type. The following example illustrates declaring and using some string variables −
DECLARE
name varchar2(20);
company varchar2(30);
introduction clob;
choice char(1);
BEGIN
name := 'John Smith';
company := 'Infotech';
introduction := ' Hello! I''m John Smith from Infotech.';
choice := 'y';
IF choice = 'y' THEN
dbms_output.put_line(name);
dbms_output.put_line(company);
dbms_output.put_line(introduction);
END IF;
END;
/
When the above code is executed at the SQL prompt, it produces the following result −
John Smith
Infotech
Hello! I'm John Smith from Infotech.
PL/SQL procedure successfully completed
To declare a fixed-length string, use the CHAR datatype. Here you do not have to specify a maximum length for a fixed-length variable. If you leave off the length constraint, Oracle Database automatically uses a maximum length required. The following two declarations are identical −
red_flag CHAR(1) := 'Y';
red_flag CHAR := 'Y';
PL/SQL offers the concatenation operator (||) for joining two strings. The following table provides the string functions provided by PL/SQL −
ASCII(x);
Returns the ASCII value of the character x.
CHR(x);
Returns the character with the ASCII value of x.
CONCAT(x, y);
Concatenates the strings x and y and returns the appended string.
INITCAP(x);
Converts the initial letter of each word in x to uppercase and returns that string.
INSTR(x, find_string [, start] [, occurrence]);
Searches for find_string in x and returns the position at which it occurs.
INSTRB(x);
Returns the location of a string within another string, but returns the value in bytes.
LENGTH(x);
Returns the number of characters in x.
LENGTHB(x);
Returns the length of a character string in bytes for single byte character set.
LOWER(x);
Converts the letters in x to lowercase and returns that string.
LPAD(x, width [, pad_string]) ;
Pads x with spaces to the left, to bring the total length of the string up to width characters.
LTRIM(x [, trim_string]);
Trims characters from the left of x.
NANVL(x, value);
Returns value if x matches the NaN special value (not a number), otherwise x is returned.
NLS_INITCAP(x);
Same as the INITCAP function except that it can use a different sort method as specified by NLSSORT.
NLS_LOWER(x) ;
Same as the LOWER function except that it can use a different sort method as specified by NLSSORT.
NLS_UPPER(x);
Same as the UPPER function except that it can use a different sort method as specified by NLSSORT.
NLSSORT(x);
Changes the method of sorting the characters. Must be specified before any NLS function; otherwise, the default sort will be used.
NVL(x, value);
Returns value if x is null; otherwise, x is returned.
NVL2(x, value1, value2);
Returns value1 if x is not null; if x is null, value2 is returned.
REPLACE(x, search_string, replace_string);
Searches x for search_string and replaces it with replace_string.
RPAD(x, width [, pad_string]);
Pads x to the right.
RTRIM(x [, trim_string]);
Trims x from the right.
SOUNDEX(x) ;
Returns a string containing the phonetic representation of x.
SUBSTR(x, start [, length]);
Returns a substring of x that begins at the position specified by start. An optional length for the substring may be supplied.
SUBSTRB(x);
Same as SUBSTR except that the parameters are expressed in bytes instead of characters for the single-byte character systems.
TRIM([trim_char FROM) x);
Trims characters from the left and right of x.
UPPER(x);
Converts the letters in x to uppercase and returns that string.
Let us now work out on a few examples to understand the concept −
DECLARE
greetings varchar2(11) := 'hello world';
BEGIN
dbms_output.put_line(UPPER(greetings));
dbms_output.put_line(LOWER(greetings));
dbms_output.put_line(INITCAP(greetings));
/* retrieve the first character in the string */
dbms_output.put_line ( SUBSTR (greetings, 1, 1));
/* retrieve the last character in the string */
dbms_output.put_line ( SUBSTR (greetings, -1, 1));
/* retrieve five characters,
starting from the seventh position. */
dbms_output.put_line ( SUBSTR (greetings, 7, 5));
/* retrieve the remainder of the string,
starting from the second position. */
dbms_output.put_line ( SUBSTR (greetings, 2));
/* find the location of the first "e" */
dbms_output.put_line ( INSTR (greetings, 'e'));
END;
/
When the above code is executed at the SQL prompt, it produces the following result −
HELLO WORLD
hello world
Hello World
h
d
World
ello World
2
PL/SQL procedure successfully completed.
DECLARE
greetings varchar2(30) := '......Hello World.....';
BEGIN
dbms_output.put_line(RTRIM(greetings,'.'));
dbms_output.put_line(LTRIM(greetings, '.'));
dbms_output.put_line(TRIM( '.' from greetings));
END;
/
When the above code is executed at the SQL prompt, it produces the following result −
......Hello World
Hello World.....
Hello World
PL/SQL procedure successfully completed.
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2292,
"s": 2065,
"text": "The string in PL/SQL is actually a sequence of characters with an optional size specification. The characters could be numeric, letters, blank, special characters or a combination of all. PL/SQL offers three kinds of strings −"
},
{
"code": null,
"e": 2458,
"s": 2292,
"text": "Fixed-length strings − In such strings, programmers specify the length while declaring the string. The string is right-padded with spaces to the length so specified."
},
{
"code": null,
"e": 2624,
"s": 2458,
"text": "Fixed-length strings − In such strings, programmers specify the length while declaring the string. The string is right-padded with spaces to the length so specified."
},
{
"code": null,
"e": 2754,
"s": 2624,
"text": "Variable-length strings − In such strings, a maximum length up to 32,767, for the string is specified and no padding takes place."
},
{
"code": null,
"e": 2884,
"s": 2754,
"text": "Variable-length strings − In such strings, a maximum length up to 32,767, for the string is specified and no padding takes place."
},
{
"code": null,
"e": 2985,
"s": 2884,
"text": "Character large objects (CLOBs) − These are variable-length strings that can be up to 128 terabytes."
},
{
"code": null,
"e": 3086,
"s": 2985,
"text": "Character large objects (CLOBs) − These are variable-length strings that can be up to 128 terabytes."
},
{
"code": null,
"e": 3206,
"s": 3086,
"text": "PL/SQL strings could be either variables or literals. A string literal is enclosed within quotation marks. For example,"
},
{
"code": null,
"e": 3252,
"s": 3206,
"text": "'This is a string literal.' Or 'hello world'\n"
},
{
"code": null,
"e": 3372,
"s": 3252,
"text": "To include a single quote inside a string literal, you need to type two single quotes next to one another. For example,"
},
{
"code": null,
"e": 3406,
"s": 3372,
"text": "'this isn''t what it looks like'\n"
},
{
"code": null,
"e": 3627,
"s": 3406,
"text": "Oracle database provides numerous string datatypes, such as CHAR, NCHAR, VARCHAR2, NVARCHAR2, CLOB, and NCLOB. The datatypes prefixed with an 'N' are 'national character set' datatypes, that store Unicode character data."
},
{
"code": null,
"e": 3843,
"s": 3627,
"text": "If you need to declare a variable-length string, you must provide the maximum length of that string. For example, the VARCHAR2 data type. The following example illustrates declaring and using some string variables −"
},
{
"code": null,
"e": 4246,
"s": 3843,
"text": "DECLARE \n name varchar2(20); \n company varchar2(30); \n introduction clob; \n choice char(1); \nBEGIN \n name := 'John Smith'; \n company := 'Infotech'; \n introduction := ' Hello! I''m John Smith from Infotech.'; \n choice := 'y'; \n IF choice = 'y' THEN \n dbms_output.put_line(name); \n dbms_output.put_line(company); \n dbms_output.put_line(introduction); \n END IF; \nEND; \n/"
},
{
"code": null,
"e": 4332,
"s": 4246,
"text": "When the above code is executed at the SQL prompt, it produces the following result −"
},
{
"code": null,
"e": 4434,
"s": 4332,
"text": "John Smith \nInfotech\nHello! I'm John Smith from Infotech. \n\nPL/SQL procedure successfully completed\n"
},
{
"code": null,
"e": 4718,
"s": 4434,
"text": "To declare a fixed-length string, use the CHAR datatype. Here you do not have to specify a maximum length for a fixed-length variable. If you leave off the length constraint, Oracle Database automatically uses a maximum length required. The following two declarations are identical −"
},
{
"code": null,
"e": 4770,
"s": 4718,
"text": "red_flag CHAR(1) := 'Y'; \n red_flag CHAR := 'Y';\n"
},
{
"code": null,
"e": 4912,
"s": 4770,
"text": "PL/SQL offers the concatenation operator (||) for joining two strings. The following table provides the string functions provided by PL/SQL −"
},
{
"code": null,
"e": 4922,
"s": 4912,
"text": "ASCII(x);"
},
{
"code": null,
"e": 4966,
"s": 4922,
"text": "Returns the ASCII value of the character x."
},
{
"code": null,
"e": 4974,
"s": 4966,
"text": "CHR(x);"
},
{
"code": null,
"e": 5023,
"s": 4974,
"text": "Returns the character with the ASCII value of x."
},
{
"code": null,
"e": 5037,
"s": 5023,
"text": "CONCAT(x, y);"
},
{
"code": null,
"e": 5103,
"s": 5037,
"text": "Concatenates the strings x and y and returns the appended string."
},
{
"code": null,
"e": 5115,
"s": 5103,
"text": "INITCAP(x);"
},
{
"code": null,
"e": 5199,
"s": 5115,
"text": "Converts the initial letter of each word in x to uppercase and returns that string."
},
{
"code": null,
"e": 5247,
"s": 5199,
"text": "INSTR(x, find_string [, start] [, occurrence]);"
},
{
"code": null,
"e": 5322,
"s": 5247,
"text": "Searches for find_string in x and returns the position at which it occurs."
},
{
"code": null,
"e": 5333,
"s": 5322,
"text": "INSTRB(x);"
},
{
"code": null,
"e": 5421,
"s": 5333,
"text": "Returns the location of a string within another string, but returns the value in bytes."
},
{
"code": null,
"e": 5432,
"s": 5421,
"text": "LENGTH(x);"
},
{
"code": null,
"e": 5471,
"s": 5432,
"text": "Returns the number of characters in x."
},
{
"code": null,
"e": 5483,
"s": 5471,
"text": "LENGTHB(x);"
},
{
"code": null,
"e": 5564,
"s": 5483,
"text": "Returns the length of a character string in bytes for single byte character set."
},
{
"code": null,
"e": 5574,
"s": 5564,
"text": "LOWER(x);"
},
{
"code": null,
"e": 5638,
"s": 5574,
"text": "Converts the letters in x to lowercase and returns that string."
},
{
"code": null,
"e": 5670,
"s": 5638,
"text": "LPAD(x, width [, pad_string]) ;"
},
{
"code": null,
"e": 5766,
"s": 5670,
"text": "Pads x with spaces to the left, to bring the total length of the string up to width characters."
},
{
"code": null,
"e": 5792,
"s": 5766,
"text": "LTRIM(x [, trim_string]);"
},
{
"code": null,
"e": 5829,
"s": 5792,
"text": "Trims characters from the left of x."
},
{
"code": null,
"e": 5846,
"s": 5829,
"text": "NANVL(x, value);"
},
{
"code": null,
"e": 5936,
"s": 5846,
"text": "Returns value if x matches the NaN special value (not a number), otherwise x is returned."
},
{
"code": null,
"e": 5952,
"s": 5936,
"text": "NLS_INITCAP(x);"
},
{
"code": null,
"e": 6053,
"s": 5952,
"text": "Same as the INITCAP function except that it can use a different sort method as specified by NLSSORT."
},
{
"code": null,
"e": 6068,
"s": 6053,
"text": "NLS_LOWER(x) ;"
},
{
"code": null,
"e": 6167,
"s": 6068,
"text": "Same as the LOWER function except that it can use a different sort method as specified by NLSSORT."
},
{
"code": null,
"e": 6181,
"s": 6167,
"text": "NLS_UPPER(x);"
},
{
"code": null,
"e": 6280,
"s": 6181,
"text": "Same as the UPPER function except that it can use a different sort method as specified by NLSSORT."
},
{
"code": null,
"e": 6292,
"s": 6280,
"text": "NLSSORT(x);"
},
{
"code": null,
"e": 6423,
"s": 6292,
"text": "Changes the method of sorting the characters. Must be specified before any NLS function; otherwise, the default sort will be used."
},
{
"code": null,
"e": 6438,
"s": 6423,
"text": "NVL(x, value);"
},
{
"code": null,
"e": 6492,
"s": 6438,
"text": "Returns value if x is null; otherwise, x is returned."
},
{
"code": null,
"e": 6517,
"s": 6492,
"text": "NVL2(x, value1, value2);"
},
{
"code": null,
"e": 6584,
"s": 6517,
"text": "Returns value1 if x is not null; if x is null, value2 is returned."
},
{
"code": null,
"e": 6627,
"s": 6584,
"text": "REPLACE(x, search_string, replace_string);"
},
{
"code": null,
"e": 6693,
"s": 6627,
"text": "Searches x for search_string and replaces it with replace_string."
},
{
"code": null,
"e": 6724,
"s": 6693,
"text": "RPAD(x, width [, pad_string]);"
},
{
"code": null,
"e": 6745,
"s": 6724,
"text": "Pads x to the right."
},
{
"code": null,
"e": 6771,
"s": 6745,
"text": "RTRIM(x [, trim_string]);"
},
{
"code": null,
"e": 6795,
"s": 6771,
"text": "Trims x from the right."
},
{
"code": null,
"e": 6808,
"s": 6795,
"text": "SOUNDEX(x) ;"
},
{
"code": null,
"e": 6870,
"s": 6808,
"text": "Returns a string containing the phonetic representation of x."
},
{
"code": null,
"e": 6899,
"s": 6870,
"text": "SUBSTR(x, start [, length]);"
},
{
"code": null,
"e": 7026,
"s": 6899,
"text": "Returns a substring of x that begins at the position specified by start. An optional length for the substring may be supplied."
},
{
"code": null,
"e": 7038,
"s": 7026,
"text": "SUBSTRB(x);"
},
{
"code": null,
"e": 7164,
"s": 7038,
"text": "Same as SUBSTR except that the parameters are expressed in bytes instead of characters for the single-byte character systems."
},
{
"code": null,
"e": 7190,
"s": 7164,
"text": "TRIM([trim_char FROM) x);"
},
{
"code": null,
"e": 7237,
"s": 7190,
"text": "Trims characters from the left and right of x."
},
{
"code": null,
"e": 7247,
"s": 7237,
"text": "UPPER(x);"
},
{
"code": null,
"e": 7311,
"s": 7247,
"text": "Converts the letters in x to uppercase and returns that string."
},
{
"code": null,
"e": 7377,
"s": 7311,
"text": "Let us now work out on a few examples to understand the concept −"
},
{
"code": null,
"e": 8203,
"s": 7377,
"text": "DECLARE \n greetings varchar2(11) := 'hello world'; \nBEGIN \n dbms_output.put_line(UPPER(greetings)); \n \n dbms_output.put_line(LOWER(greetings)); \n \n dbms_output.put_line(INITCAP(greetings)); \n \n /* retrieve the first character in the string */ \n dbms_output.put_line ( SUBSTR (greetings, 1, 1)); \n \n /* retrieve the last character in the string */ \n dbms_output.put_line ( SUBSTR (greetings, -1, 1)); \n \n /* retrieve five characters, \n starting from the seventh position. */ \n dbms_output.put_line ( SUBSTR (greetings, 7, 5)); \n \n /* retrieve the remainder of the string, \n starting from the second position. */ \n dbms_output.put_line ( SUBSTR (greetings, 2)); \n \n /* find the location of the first \"e\" */ \n dbms_output.put_line ( INSTR (greetings, 'e')); \nEND; \n/ "
},
{
"code": null,
"e": 8289,
"s": 8203,
"text": "When the above code is executed at the SQL prompt, it produces the following result −"
},
{
"code": null,
"e": 8400,
"s": 8289,
"text": "HELLO WORLD \nhello world \nHello World \nh \nd \nWorld \nello World \n2 \n\nPL/SQL procedure successfully completed.\n"
},
{
"code": null,
"e": 8630,
"s": 8400,
"text": "DECLARE \n greetings varchar2(30) := '......Hello World.....'; \nBEGIN \n dbms_output.put_line(RTRIM(greetings,'.')); \n dbms_output.put_line(LTRIM(greetings, '.')); \n dbms_output.put_line(TRIM( '.' from greetings)); \nEND; \n/"
},
{
"code": null,
"e": 8716,
"s": 8630,
"text": "When the above code is executed at the SQL prompt, it produces the following result −"
},
{
"code": null,
"e": 8812,
"s": 8716,
"text": "......Hello World \nHello World..... \nHello World \n\nPL/SQL procedure successfully completed. \n"
},
{
"code": null,
"e": 8819,
"s": 8812,
"text": " Print"
},
{
"code": null,
"e": 8830,
"s": 8819,
"text": " Add Notes"
}
] |
] | Scala Collections - Fold Method | The fold() method is a member of the TraversableOnce trait; it is used to collapse the elements of a collection.
The following is the syntax of fold method.
def fold[A1 >: A](z: A1)(op: (A1, A1) => A1): A1
Here, the fold method takes an associative binary operator function as a parameter and returns the result as a value. It treats its first argument as the initial value and its second as a function (which takes the accumulated value and the current item as input).
Below is an example program showing how to use the fold method −
object Demo {
def main(args: Array[String]) = {
val list = List(1, 2, 3 ,4)
//apply operation to get sum of all elements of the list
val result = list.fold(0)(_ + _)
//print result
println(result)
}
}
Here we've passed 0 as the initial value to the fold function, and then all the values are added. Save the above program in Demo.scala. The following commands are used to compile and execute this program.
\>scalac Demo.scala
\>scala Demo
10
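For readers coming from Python, functools.reduce expresses the same fold-with-an-initial-value idea — this is only a rough analogue for comparison, not part of the Scala example above:

```python
from functools import reduce

lst = [1, 2, 3, 4]

# reduce(op, iterable, initial) folds the list left-to-right,
# with 0 as the initial value, like list.fold(0)(_ + _) in Scala
result = reduce(lambda acc, x: acc + x, lst, 0)
print(result)  # 10
```

Note that Scala's fold only guarantees a defined result for associative operators, while Python's reduce is strictly a left fold.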
Bookmark this page | [
{
"code": null,
"e": 2982,
"s": 2882,
"text": "fold() method is a member of TraversableOnce trait, it is used to collapse elements of collections."
},
{
"code": null,
"e": 3026,
"s": 2982,
"text": "The following is the syntax of fold method."
},
{
"code": null,
"e": 3075,
"s": 3026,
"text": "def fold[A1 >: A](z: A1)(op: (A1, A1) ? A1): A1\n"
},
{
"code": null,
"e": 3325,
"s": 3075,
"text": "Here, fold method takes associative binary operator function as a parameter. This method returns the result as value. It considers first input as initial value and second input as a function (which takes accumulated value and current item as input)."
},
{
"code": null,
"e": 3389,
"s": 3325,
"text": "Below is an example program of showing how to use fold method −"
},
{
"code": null,
"e": 3632,
"s": 3389,
"text": "object Demo {\n def main(args: Array[String]) = {\n val list = List(1, 2, 3 ,4)\n //apply operation to get sum of all elements of the list\n val result = list.fold(0)(_ + _)\n //print result\n println(result) \n }\n}"
},
{
"code": null,
"e": 3824,
"s": 3632,
"text": "Here we've passed 0 as initial value to fold function and then all values are added. Save the above program in Demo.scala. The following commands are used to compile and execute this program."
},
{
"code": null,
"e": 3858,
"s": 3824,
"text": "\\>scalac Demo.scala\n\\>scala Demo\n"
},
{
"code": null,
"e": 3862,
"s": 3858,
"text": "10\n"
},
{
"code": null,
"e": 3895,
"s": 3862,
"text": "\n 82 Lectures \n 7 hours \n"
},
{
"code": null,
"e": 3914,
"s": 3895,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3949,
"s": 3914,
"text": "\n 23 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 3970,
"s": 3949,
"text": " Mukund Kumar Mishra"
},
{
"code": null,
"e": 4005,
"s": 3970,
"text": "\n 52 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 4023,
"s": 4005,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 4058,
"s": 4023,
"text": "\n 76 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 4076,
"s": 4058,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 4111,
"s": 4076,
"text": "\n 69 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 4129,
"s": 4111,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 4164,
"s": 4129,
"text": "\n 46 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 4187,
"s": 4164,
"text": " Stone River ELearning"
},
{
"code": null,
"e": 4194,
"s": 4187,
"text": " Print"
},
{
"code": null,
"e": 4205,
"s": 4194,
"text": " Add Notes"
}
] |
How to find and extract a number from a string in C#? | A regular expression is a pattern that could be matched against an input text.
The .Net framework provides a regular expression engine that allows such matching.
A pattern consists of one or more character literals, operators, or constructs.
Here are basic pattern metacharacters used by RegEx −
* = zero or more
? = zero or one
^ = not
[] = range
The ^ symbol is used to specify a 'not' condition.
The [] brackets are used to give range values such as 0-9, a-z or A-Z.
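As a quick illustration of these metacharacters — shown here in Python's re module, since this character-class syntax is shared across regex engines:

```python
import re

text = "abc123XYZ"

# [] = range: match any character in the 0-9 range
print(re.findall(r"[0-9]", text))    # ['1', '2', '3']

# ^ inside [] = not: match characters that are NOT digits
print(re.findall(r"[^0-9]", text))   # ['a', 'b', 'c', 'X', 'Y', 'Z']

# ? = zero or one occurrence of the preceding token
print(re.findall(r"ab?c?", "a ab abc"))  # ['a', 'ab', 'abc']
```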
using System;
namespace DemoApplication{
public class Program{
static void Main(string[] args){
string str1 = "123string456";
string str2 = string.Empty;
int val = 0;
Console.WriteLine($"String with number: {str1}");
for (int i = 0; i < str1.Length; i++){
if (Char.IsDigit(str1[i]))
str2 += str1[i];
}
if (str2.Length > 0)
val = int.Parse(str2);
Console.WriteLine($"Extracted Number: {val}");
Console.ReadLine();
}
}
}
String with number: 123string456
Extracted Number: 123456
In the above example we loop over all the characters of the string str1. Char.IsDigit() validates whether a particular character is a digit; if it is, the character is appended to a new string, which is later parsed to a number.
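The same two ideas — filtering digit characters and pulling digit runs with a regex — look like this in Python (shown only for comparison; the article's code is C#):

```python
import re

s = "123string456"

# approach 1: keep only the digit characters, then parse
digits = "".join(ch for ch in s if ch.isdigit())
print(int(digits))  # 123456

# approach 2: find digit runs with a regex and join them
print(int("".join(re.findall(r"\d+", s))))  # 123456
```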
using System;
using System.Text.RegularExpressions;
namespace DemoApplication{
public class Program{
static void Main(string[] args){
string str1 = "123string456";
string str2 = string.Empty;
int val = 0;
Console.WriteLine($"String with number: {str1}");
var matches = Regex.Matches(str1, @"\d+");
foreach(var match in matches){
str2 += match;
}
val = int.Parse(str2);
Console.WriteLine($"Extracted Number: {val}");
Console.ReadLine();
}
}
}
String with number: 123string456
Extracted Number: 123456
In the above example, we use the regular expression (\d+) to extract only the numbers
from the string str1. | [
{
"code": null,
"e": 1304,
"s": 1062,
"text": "A regular expression is a pattern that could be matched against an input text.\nThe .Net framework provides a regular expression engine that allows such matching.\nA pattern consists of one or more character literals, operators, or constructs."
},
{
"code": null,
"e": 1358,
"s": 1304,
"text": "Here are basic pattern metacharacters used by RegEx −"
},
{
"code": null,
"e": 1410,
"s": 1358,
"text": "* = zero or more\n? = zero or one\n^ = not\n[] = range"
},
{
"code": null,
"e": 1457,
"s": 1410,
"text": "The ^ symbol is used to specify not condition."
},
{
"code": null,
"e": 1532,
"s": 1457,
"text": "the [] brackets if we are to give range values such as 0 - 9 or a-z or A-Z"
},
{
"code": null,
"e": 1543,
"s": 1532,
"text": " Live Demo"
},
{
"code": null,
"e": 2103,
"s": 1543,
"text": "using System;\nnamespace DemoApplication{\n public class Program{\n static void Main(string[] args){\n string str1 = \"123string456\";\n string str2 = string.Empty;\n int val = 0;\n Console.WriteLine($\"String with number: {str1}\");\n for (int i = 0; i < str1.Length; i++){\n if (Char.IsDigit(str1[i]))\n str2 += str1[i];\n }\n if (str2.Length > 0)\n val = int.Parse(str2);\n Console.WriteLine($\"Extracted Number: {val}\");\n Console.ReadLine();\n }\n }\n}"
},
{
"code": null,
"e": 2161,
"s": 2103,
"text": "String with number: 123string456\nExtracted Number: 123456"
},
{
"code": null,
"e": 2379,
"s": 2161,
"text": "In the above example we are looping all the characters of the string str1. The\nChar.IsDigit() validates whether the particular character is a number or not and adds it\nto a new string which is later parsed to a numer."
},
{
"code": null,
"e": 2390,
"s": 2379,
"text": " Live Demo"
},
{
"code": null,
"e": 2952,
"s": 2390,
"text": "using System;\nusing System.Text.RegularExpressions;\nnamespace DemoApplication{\n public class Program{\n static void Main(string[] args){\n string str1 = \"123string456\";\n string str2 = string.Empty;\n int val = 0;\n Console.WriteLine($\"String with number: {str1}\");\n var matches = Regex.Matches(str1, @\"\\d+\");\n foreach(var match in matches){\n str2 += match;\n }\n val = int.Parse(str2);\n Console.WriteLine($\"Extracted Number: {val}\");\n Console.ReadLine();\n }\n }\n}"
},
{
"code": null,
"e": 3010,
"s": 2952,
"text": "String with number: 123string456\nExtracted Number: 123456"
},
{
"code": null,
"e": 3118,
"s": 3010,
"text": "In the above example, we use the regular expression (\\d+) to extract only the numbers\nfrom the string str1."
}
] |
Format Year in yyyy format in Java | Formatting a year in yyyy format means displaying the entire year. For example, 2018.
Use the yyyy format like this −
SimpleDateFormat("yyyy");
Let us see an example −
// year in yyyy format
SimpleDateFormat simpleformat = new SimpleDateFormat("yyyy");
String strYear = simpleformat.format(new Date());
System.out.println("Current Year = "+strYear);
Above, we have used the SimpleDateFormat class, therefore the following package is imported −
import java.text.SimpleDateFormat;
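For comparison only — this is not part of the Java example — the same four-digit year comes from the %Y directive in Python's datetime.strftime:

```python
from datetime import datetime

# %Y is the strftime equivalent of SimpleDateFormat's "yyyy"
print(datetime(2018, 11, 26).strftime("%Y"))  # 2018
print(datetime.now().strftime("%Y"))          # current year
```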
The following is an example −
import java.text.Format;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Calendar;
public class Demo {
public static void main(String[] args) throws Exception {
// displaying current date and time
Calendar cal = Calendar.getInstance();
SimpleDateFormat simpleformat = new SimpleDateFormat("E, dd MMM yyyy HH:mm:ss");
System.out.println("Date and time = "+simpleformat.format(cal.getTime()));
// displaying date
simpleformat = new SimpleDateFormat("dd/MMMM/yyyy");
String str = simpleformat.format(new Date());
System.out.println("Current Date = "+str);
// year in yyyy format
simpleformat = new SimpleDateFormat("yyyy");
String strYear = simpleformat.format(new Date());
System.out.println("Current Year = "+strYear);
}
}
Date and time = Mon, 26 Nov 2018 11:12:39
Current Date = 26/November/2018
Current Year = 2018 | [
{
"code": null,
"e": 1147,
"s": 1062,
"text": "To format year in yyyy format is like displaying the entire year. For example, 2018."
},
{
"code": null,
"e": 1179,
"s": 1147,
"text": "Use the yyyy format like this −"
},
{
"code": null,
"e": 1205,
"s": 1179,
"text": "SimpleDateFormat(\"yyyy\");"
},
{
"code": null,
"e": 1229,
"s": 1205,
"text": "Let us see an example −"
},
{
"code": null,
"e": 1411,
"s": 1229,
"text": "// year in yyyy format\nSimpleDateFormat simpleformat = new SimpleDateFormat(\"yyyy\");\nString strYear = simpleformat.format(new Date());\nSystem.out.println(\"Current Year = \"+strYear);"
},
{
"code": null,
"e": 1505,
"s": 1411,
"text": "Above, we have used the SimpleDateFormat class, therefore the following package is imported −"
},
{
"code": null,
"e": 1540,
"s": 1505,
"text": "import java.text.SimpleDateFormat;"
},
{
"code": null,
"e": 1570,
"s": 1540,
"text": "The following is an example −"
},
{
"code": null,
"e": 1581,
"s": 1570,
"text": " Live Demo"
},
{
"code": null,
"e": 2408,
"s": 1581,
"text": "import java.text.Format;\nimport java.text.SimpleDateFormat;\nimport java.util.Date;\nimport java.util.Calendar;\npublic class Demo {\n public static void main(String[] args) throws Exception {\n // displaying current date and time\n Calendar cal = Calendar.getInstance();\n SimpleDateFormat simpleformat = new SimpleDateFormat(\"E, dd MMM yyyy HH:mm:ss\");\n System.out.println(\"Date and time = \"+simpleformat.format(cal.getTime()));\n // displaying date\n simpleformat = new SimpleDateFormat(\"dd/MMMM/yyyy\");\n String str = simpleformat.format(new Date());\n System.out.println(\"Current Date = \"+str);\n // year in yyyy format\n simpleformat = new SimpleDateFormat(\"yyyy\");\n String strYear = simpleformat.format(new Date());\n System.out.println(\"Current Year = \"+strYear);\n }\n}"
},
{
"code": null,
"e": 2502,
"s": 2408,
"text": "Date and time = Mon, 26 Nov 2018 11:12:39\nCurrent Date = 26/November/2018\nCurrent Year = 2018"
}
] |
jQuery UI dialog close() Method - GeeksforGeeks | 13 Jan, 2021
The close() method is used to close the dialog. This method does not accept any arguments.
Syntax:
$( ".selector" ).dialog("close");
Approach: First, add jQuery UI scripts needed for your project.
<link href = "https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel = "stylesheet"><script src = "https://code.jquery.com/jquery-1.10.2.js"></script><script src = "https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
Example:
HTML
<!doctype html><html lang="en"> <head> <meta charset="utf-8"> <link href="https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet"> <script src="https://code.jquery.com/jquery-1.10.2.js"></script> <script src="https://code.jquery.com/ui/1.10.4/jquery-ui.js"> </script> <script> $(function () { $("#gfg").dialog({ autoOpen: false, }); $("#geeks").click(function () { $("#gfg").dialog("close"); }); }); </script></head> <body> <div id="gfg"> Jquery UI| close dialog method </div> <button id="geeks">Open Dialog</button></body> </html>
Output:
How to add options to a select element using jQuery? | [
{
"code": null,
"e": 50960,
"s": 50932,
"text": "\n13 Jan, 2021"
},
{
"code": null,
"e": 51047,
"s": 50960,
"text": "close() method is used to disable the dialog. This method does not accept any argument"
},
{
"code": null,
"e": 51055,
"s": 51047,
"text": "Syntax:"
},
{
"code": null,
"e": 51089,
"s": 51055,
"text": "$( \".selector\" ).dialog(\"close\");"
},
{
"code": null,
"e": 51153,
"s": 51089,
"text": "Approach: First, add jQuery UI scripts needed for your project."
},
{
"code": null,
"e": 51394,
"s": 51153,
"text": "<link href = “https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css” rel = “stylesheet”><script src = “https://code.jquery.com/jquery-1.10.2.js”></script><script src = “https://code.jquery.com/ui/1.10.4/jquery-ui.js”></script>"
},
{
"code": null,
"e": 51403,
"s": 51394,
"text": "Example:"
},
{
"code": null,
"e": 51408,
"s": 51403,
"text": "HTML"
},
{
"code": "<!doctype html><html lang=\"en\"> <head> <meta charset=\"utf-8\"> <link href=\"https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css\" rel=\"stylesheet\"> <script src=\"https://code.jquery.com/jquery-1.10.2.js\"></script> <script src=\"https://code.jquery.com/ui/1.10.4/jquery-ui.js\"> </script> <script> $(function () { $(\"#gfg\").dialog({ autoOpen: false, }); $(\"#geeks\").click(function () { $(\"#gfg\").dialog(\"close\"); }); }); </script></head> <body> <div id=\"gfg\"> Jquery UI| close dialog method </div> <button id=\"geeks\">Open Dialog</button></body> </html>",
"e": 52113,
"s": 51408,
"text": null
},
{
"code": null,
"e": 52121,
"s": 52113,
"text": "Output:"
},
{
"code": null,
"e": 52258,
"s": 52121,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 52268,
"s": 52258,
"text": "jQuery-UI"
},
{
"code": null,
"e": 52273,
"s": 52268,
"text": "HTML"
},
{
"code": null,
"e": 52280,
"s": 52273,
"text": "JQuery"
},
{
"code": null,
"e": 52297,
"s": 52280,
"text": "Web Technologies"
},
{
"code": null,
"e": 52302,
"s": 52297,
"text": "HTML"
},
{
"code": null,
"e": 52400,
"s": 52302,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 52409,
"s": 52400,
"text": "Comments"
},
{
"code": null,
"e": 52422,
"s": 52409,
"text": "Old Comments"
},
{
"code": null,
"e": 52472,
"s": 52422,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 52534,
"s": 52472,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 52563,
"s": 52534,
"text": "HTML | <img> align Attribute"
},
{
"code": null,
"e": 52600,
"s": 52563,
"text": "Types of CSS (Cascading Style Sheet)"
},
{
"code": null,
"e": 52660,
"s": 52600,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 52706,
"s": 52660,
"text": "JQuery | Set the value of an input text field"
},
{
"code": null,
"e": 52735,
"s": 52706,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 52798,
"s": 52735,
"text": "How to change selected value of a drop-down list using jQuery?"
},
{
"code": null,
"e": 52875,
"s": 52798,
"text": "How to change the background color after clicking the button in JavaScript ?"
}
] |
Time Series Hierarchical Clustering using Dynamic Time Warping in Python | by Andrew Vladimirovich | Towards Data Science

Let us consider the following task: we have a bunch of evenly distributed time series of different lengths. The goal is to cluster time series by defining general patterns that are presented in the data.
Here I’d like to present one approach to solving this task. We will use hierarchical clustering and DTW algorithm as a comparison metric to the time series. The solution worked well on HR data (employee historical scores). For other types of time series, DTW function may work worse than other metrics like CID (Complexity Invariant Distance), MAE or correlation.
I will skip the theoretical explanations of hierarchical clustering and DTW algorithms and focus on why I selected this combination:
Hierarchical Clustering is simple, flexible, tunable (linkage criteria) and allows us not to cluster all trajectories
DTW method allows us to compare time series of different length and, by my experience, works perfectly with infrequent
Ok, here we go! Our imports:
import random
from copy import deepcopy

from scipy import interpolate
import numpy as np
from dtaidistance import dtw

import matplotlib.pyplot as plt
from _plotly_future_ import v4_subplots
import plotly.graph_objects as go
from plotly.subplots import make_subplots
Some parameters for time series generation and our threshold:
NUM_OF_TRAJECTORIES number of trajectories that we have to cluster
MIN_LEN_OF_TRAJECTORY and MAX_LEN_OF_TRAJECTORY lower and upper length bounds for any trajectory
THRESHOLD our threshold for DTW
NUM_OF_TRAJECTORIES = 200
MIN_LEN_OF_TRAJECTORY = 16
MAX_LEN_OF_TRAJECTORY = 40
THRESHOLD = 0.50
For simplicity, all our trajectories will lie between -1 and 1. Also, I added some smoothing.
trajectoriesSet = {}

for item in range(NUM_OF_TRAJECTORIES):
    length = random.choice(list(range(MIN_LEN_OF_TRAJECTORY, MAX_LEN_OF_TRAJECTORY + 1)))
    tempTrajectory = np.random.randint(low=-100, high=100,
                                       size=int(length / 4)).astype(float) / 100

    oldScale = np.arange(0, int(length / 4))
    interpolationFunction = interpolate.interp1d(oldScale, tempTrajectory)

    newScale = np.linspace(0, int(length / 4) - 1, length)
    tempTrajectory = interpolationFunction(newScale)

    trajectoriesSet[(str(item),)] = [tempTrajectory]
Notice that all trajectories are stored as dictionary values of list type (for convenience, when we start to union them into groups). For the same reason, the names of trajectories are stored as tuples.
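To make the storage scheme concrete, here is a minimal sketch (toy data, names chosen by me) showing why tuples and lists make merging trivial:

```python
import numpy as np

# Keys are tuples of trajectory names, values are lists of arrays,
# so a merged cluster is just the concatenation of both.
trajectoriesSet = {
    ('0',): [np.array([0.1, -0.3, 0.5])],
    ('1',): [np.array([0.2, 0.0, -0.4, 0.9])],
}

unionKey = ('0',) + ('1',)
unionValue = trajectoriesSet[('0',)] + trajectoriesSet[('1',)]

print(unionKey)         # ('0', '1')
print(len(unionValue))  # 2
```

No special cluster object is needed: a cluster of k trajectories is just a k-tuple key mapped to a k-element list.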
Our algorithm is the following:
We find a pair of closest entities (trajectory-trajectory or trajectory-cluster or cluster-trajectory or cluster-cluster)
Group them into a single cluster if their distance is lower than the THRESHOLD
Repeat step 1
We stop our algorithm if we fail at step 2 or we get one big cluster (so all our trajectories get into it — it means that our THRESHOLD is very big)
The first part of the algorithm:
trajectories = deepcopy(trajectoriesSet)
distanceMatrixDictionary = {}

iteration = 1
while True:
    distanceMatrix = np.empty((len(trajectories), len(trajectories),))
    distanceMatrix[:] = np.nan

    for index1, (filter1, trajectory1) in enumerate(trajectories.items()):
        tempArray = []

        for index2, (filter2, trajectory2) in enumerate(trajectories.items()):
            if index1 > index2:
                continue
            elif index1 == index2:
                continue
            else:
                unionFilter = filter1 + filter2
                sorted(unionFilter)

                if unionFilter in distanceMatrixDictionary.keys():
                    distanceMatrix[index1][index2] = distanceMatrixDictionary.get(unionFilter)
                    continue

                metric = []
                for subItem1 in trajectory1:
                    for subItem2 in trajectory2:
                        metric.append(dtw.distance(subItem1, subItem2, psi=1))

                metric = max(metric)
                distanceMatrix[index1][index2] = metric
                distanceMatrixDictionary[unionFilter] = metric
Dictionary distanceMatrixDictionary helps us to keep already calculated distances.
Numpy array distanceMatrix is filled with np.nan at the beginning of each step. It is needed only to keep representations between pairs of indexes and calculated distances. May be removed after adding the same functionality to the distanceMatrixDictionary.
This part of the code allows us to compare all possible options — trajectory-trajectory, trajectory-cluster, cluster-trajectory or cluster-cluster:
metric = []
for subItem1 in trajectory1:
    for subItem2 in trajectory2:
        metric.append(dtw.distance(subItem1, subItem2))

metric = max(metric)
The last line above — metric = max(metric) — is the linkage criterion called ‘complete linkage’. It worked better for me, but you can try other criteria or even make a customized one.
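For comparison (a throwaway sketch with toy numbers, not from the original code), the linkage criterion is just the reduction applied to the list of pairwise distances — swapping the last line gives other standard criteria:

```python
# Toy pairwise DTW distances between the members of two clusters.
pairwise = [0.2, 0.7, 0.4]

complete = max(pairwise)                 # complete linkage: worst pair decides
single = min(pairwise)                   # single linkage: best pair decides
average = sum(pairwise) / len(pairwise)  # average linkage

print(complete, single)  # 0.7 0.2
```

With complete linkage, two clusters merge only if every pair of their trajectories is close, which tends to produce compact, homogeneous groups.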
Okay, distances are calculated, let us proceed with the grouping process.
We find the lowest distance and a pair of indexes that provide us this distance.
Here, for simplicity, we will work with only one pair (the first one). Even if we have two, three or more pairs for the same distance — the rest will be processed step-by-step during the next iterations.
minValue = np.min(list(distanceMatrixDictionary.values()))
if minValue > THRESHOLD:
    break

minIndices = np.where(distanceMatrix == minValue)
minIndices = list(zip(minIndices[0], minIndices[1]))

minIndex = minIndices[0]
After getting the pair of indexes, we simply need to define the entity names and values, combine them, put the combination into the dictionary and remove these single entities from it:
filter1 = list(trajectories.keys())[minIndex[0]]
filter2 = list(trajectories.keys())[minIndex[1]]

trajectory1 = trajectories.get(filter1)
trajectory2 = trajectories.get(filter2)

unionFilter = filter1 + filter2
sorted(unionFilter)

trajectoryGroup = trajectory1 + trajectory2

trajectories = {key: value for key, value in trajectories.items()
                if all(value not in unionFilter for value in key)}
distanceMatrixDictionary = {key: value for key, value in distanceMatrixDictionary.items()
                            if all(value not in unionFilter for value in key)}

trajectories[unionFilter] = trajectoryGroup
After that, we repeat the previous step until we have nothing to cluster.
I have described the general approach, but this algorithm can be simplified, sped up and modified to avoid any recalculations.
As a result, we get groups like this:
In this cluster, we see 3 time series of different lengths. All of them have the same general pattern: local minimum in the first third, then global peak in the second half and a global minimum in the end.
Some more results (here for each cluster the left subplot presents original trajectories lengths, right one — rescaled to the MAX_LEN_OF_TRAJECTORY for comparison):
Depending on THRESHOLD value we can make our clusters bigger (and more generalized) or smaller (more detailed).
What can we improve if the current approach does not perform well on another dataset?
We can try to replace DTW with another distance metric
We can try to work additionally with time series: scale/rescale, smooth or remove outliers
We can try to use different thresholds
We can try to change linkage criteria
After finding the optimal hyperparameters it is possible to refactor the code and speed-up the calculations.
The code is available here: https://github.com/avchauzov/_articles/blob/master/1.1.trajectoriesClustering.ipynb
{
"code": null,
"e": 376,
"s": 172,
"text": "Let us consider the following task: we have a bunch of evenly distributed time series of different lengths. The goal is to cluster time series by defining general patterns that are presented in the data."
},
{
"code": null,
"e": 740,
"s": 376,
"text": "Here I’d like to present one approach to solving this task. We will use hierarchical clustering and DTW algorithm as a comparison metric to the time series. The solution worked well on HR data (employee historical scores). For other types of time series, DTW function may work worse than other metrics like CID (Complexity Invariant Distance), MAE or correlation."
},
{
"code": null,
"e": 875,
"s": 740,
"text": "I will skip the theoretical explanations of hierarchical clustering and DTW algorithms and focus on why did I select such combination:"
},
{
"code": null,
"e": 1111,
"s": 875,
"text": "Hierarchical Clustering is simple, flexible, tunable (linkage criteria) and allows us not to cluster all trajectoriesDTW method allows us to compare time series of different length and, by my experience, works perfectly with infrequent"
},
{
"code": null,
"e": 1229,
"s": 1111,
"text": "Hierarchical Clustering is simple, flexible, tunable (linkage criteria) and allows us not to cluster all trajectories"
},
{
"code": null,
"e": 1348,
"s": 1229,
"text": "DTW method allows us to compare time series of different length and, by my experience, works perfectly with infrequent"
},
{
"code": null,
"e": 1377,
"s": 1348,
"text": "Ok, here we go! Our imports:"
},
{
"code": null,
"e": 1635,
"s": 1377,
"text": "import randomfrom copy import deepcopyfrom scipy import interpolateimport numpy as npfrom dtaidistance import dtwimport matplotlib.pyplot as pltfrom _plotly_future_ import v4_subplotsimport plotly.graph_objects as gofrom plotly.subplots import make_subplots"
},
{
"code": null,
"e": 1697,
"s": 1635,
"text": "Some parameters for time series generation and our threshold:"
},
{
"code": null,
"e": 1764,
"s": 1697,
"text": "NUM_OF_TRAJECTORIES number of trajectories that we have to cluster"
},
{
"code": null,
"e": 1861,
"s": 1764,
"text": "MIN_LEN_OF_TRAJECTORY and MAX_LEN_OF_TRAJECTORY lower and upper length bounds for any trajectory"
},
{
"code": null,
"e": 1893,
"s": 1861,
"text": "THRESHOLD our threshold for DTW"
},
{
"code": null,
"e": 1987,
"s": 1893,
"text": "NUM_OF_TRAJECTORIES = 200MIN_LEN_OF_TRAJECTORY = 16MAX_LEN_OF_TRAJECTORY = 40THRESHOLD = 0.50"
},
{
"code": null,
"e": 2081,
"s": 1987,
"text": "For simplicity, all our trajectories will lie between -1 and 1. Also, I added some smoothing."
},
{
"code": null,
"e": 2592,
"s": 2081,
"text": "for item in range(NUM_OF_TRAJECTORIES): length = random.choice(list(range(MIN_LEN_OF_TRAJECTORY, MAX_LEN_OF_TRAJECTORY + 1))) tempTrajectory = np.random.randint(low=-100, high=100, size=int(length / 4)).astype(float) / 100 oldScale = np.arange(0, int(length / 4)) interpolationFunction = interpolate.interp1d(oldScale, tempTrajectory) newScale = np.linspace(0, int(length / 4) - 1, length) tempTrajectory = interpolationFunction(newScale) trajectoriesSet[(str(item),)] = [tempTrajectory]"
},
{
"code": null,
"e": 2801,
"s": 2592,
"text": "Notice, that all trajectories are stored as dictionary values of list type (for convenience, when we will start to union them into groups). For the same reason, the names of trajectories are stored as tuples."
},
{
"code": null,
"e": 2833,
"s": 2801,
"text": "Our algorithm is the following:"
},
{
"code": null,
"e": 3194,
"s": 2833,
"text": "We find a pair of closest entities (trajectory-trajectory or trajectory-cluster or cluster-trajectory or cluster-cluster)Group them into a single cluster if their distance is lower than the THRESHOLDRepeat step 1We stop our algorithm if we fail at step 2 or we get one big cluster (so all our trajectories get into it — it means that our THRESHOLD is very big)"
},
{
"code": null,
"e": 3316,
"s": 3194,
"text": "We find a pair of closest entities (trajectory-trajectory or trajectory-cluster or cluster-trajectory or cluster-cluster)"
},
{
"code": null,
"e": 3395,
"s": 3316,
"text": "Group them into a single cluster if their distance is lower than the THRESHOLD"
},
{
"code": null,
"e": 3409,
"s": 3395,
"text": "Repeat step 1"
},
{
"code": null,
"e": 3558,
"s": 3409,
"text": "We stop our algorithm if we fail at step 2 or we get one big cluster (so all our trajectories get into it — it means that our THRESHOLD is very big)"
},
{
"code": null,
"e": 3591,
"s": 3558,
"text": "The first part of the algorithm:"
},
{
"code": null,
"e": 4746,
"s": 3591,
"text": "trajectories = deepcopy(trajectoriesSet)distanceMatrixDictionary = {}iteration = 1while True: distanceMatrix = np.empty((len(trajectories), len(trajectories),)) distanceMatrix[:] = np.nan for index1, (filter1, trajectory1) in enumerate(trajectories.items()): tempArray = [] for index2, (filter2, trajectory2) in enumerate(trajectories.items()): if index1 > index2: continue elif index1 == index2: continue else: unionFilter = filter1 + filter2 sorted(unionFilter) if unionFilter in distanceMatrixDictionary.keys(): distanceMatrix[index1][index2] = distanceMatrixDictionary.get(unionFilter) continue metric = [] for subItem1 in trajectory1: for subItem2 in trajectory2: metric.append(dtw.distance(subItem1, subItem2, psi=1)) metric = max(metric) distanceMatrix[index1][index2] = metric distanceMatrixDictionary[unionFilter] = metric"
},
{
"code": null,
"e": 4829,
"s": 4746,
"text": "Dictionary distanceMatrixDictionary helps us to keep already calculated distances."
},
{
"code": null,
"e": 5086,
"s": 4829,
"text": "Numpy array distanceMatrix is filled with np.nan at the beginning of each step. It is needed only to keep representations between pairs of indexes and calculated distances. May be removed after adding the same functionality to the distanceMatrixDictionary."
},
{
"code": null,
"e": 5235,
"s": 5086,
"text": "This part of code allows us to compare all possible options — trajectory-trajectory or trajectory-cluster or cluster-trajectory or cluster-cluster :"
},
{
"code": null,
"e": 5382,
"s": 5235,
"text": "metric = []for subItem1 in trajectory1: for subItem2 in trajectory2: metric.append(dtw.distance(subItem1, subItem2))metric = max(metric)"
},
{
"code": null,
"e": 5557,
"s": 5382,
"text": "The last line above — metric = max(metric) — is linkage criteria called ‘complete linkage’. It worked better for me but you can try other criteria or even make it customized."
},
{
"code": null,
"e": 5631,
"s": 5557,
"text": "Okay, distances are calculated, let us proceed with the grouping process."
},
{
"code": null,
"e": 5712,
"s": 5631,
"text": "We find the lowest distance and a pair of indexes that provide us this distance."
},
{
"code": null,
"e": 5916,
"s": 5712,
"text": "Here for simplicity, we will work with only one pair (the first one). Even, if we have two, three or more pairs for the same distance — the rest will be processed step-by-step during the next iterations."
},
{
"code": null,
"e": 6132,
"s": 5916,
"text": "minValue = np.min(list(distanceMatrixDictionary.values()))if minValue > THRESHOLD: breakminIndices = np.where(distanceMatrix == minValue)minIndices = list(zip(minIndices[0], minIndices[1]))minIndex = minIndices[0]"
},
{
"code": null,
"e": 6311,
"s": 6132,
"text": "After getting the pair of indexes we need simply define entities names and values, combine them, put the combination into the dictionary and remove these single entities from it:"
},
{
"code": null,
"e": 6920,
"s": 6311,
"text": "filter1 = list(trajectories.keys())[minIndex[0]]filter2 = list(trajectories.keys())[minIndex[1]]trajectory1 = trajectories.get(filter1)trajectory2 = trajectories.get(filter2)unionFilter = filter1 + filter2sorted(unionFilter)trajectoryGroup = trajectory1 + trajectory2trajectories = {key: value for key, value in trajectories.items() if all(value not in unionFilter for value in key)}distanceMatrixDictionary = {key: value for key, value in distanceMatrixDictionary.items() if all(value not in unionFilter for value in key)}trajectories[unionFilter] = trajectoryGroup"
},
{
"code": null,
"e": 6994,
"s": 6920,
"text": "After that, we repeat the previous step until we have nothing to cluster."
},
{
"code": null,
"e": 7120,
"s": 6994,
"text": "I have described the general approach but this algorithm can be simplified, boosted and modified to avoid any recalculations."
},
{
"code": null,
"e": 7158,
"s": 7120,
"text": "As a result, we get groups like this:"
},
{
"code": null,
"e": 7364,
"s": 7158,
"text": "In this cluster, we see 3 time series of different lengths. All of them have the same general pattern: local minimum in the first third, then global peak in the second half and a global minimum in the end."
},
{
"code": null,
"e": 7529,
"s": 7364,
"text": "Some more results (here for each cluster the left subplot presents original trajectories lengths, right one — rescaled to the MAX_LEN_OF_TRAJECTORY for comparison):"
},
{
"code": null,
"e": 7641,
"s": 7529,
"text": "Depending on THRESHOLD value we can make our clusters bigger (and more generalized) or smaller (more detailed)."
},
{
"code": null,
"e": 7727,
"s": 7641,
"text": "What can we improve if the current approach does not perform well on another dataset?"
},
{
"code": null,
"e": 7947,
"s": 7727,
"text": "We can try to replace DWT with another distance metricWe can try to work additionally with time series: scale/rescale, smooth or remove outliersWe can try to use different thresholdsWe can try to change linkage criteria"
},
{
"code": null,
"e": 8002,
"s": 7947,
"text": "We can try to replace DWT with another distance metric"
},
{
"code": null,
"e": 8093,
"s": 8002,
"text": "We can try to work additionally with time series: scale/rescale, smooth or remove outliers"
},
{
"code": null,
"e": 8132,
"s": 8093,
"text": "We can try to use different thresholds"
},
{
"code": null,
"e": 8170,
"s": 8132,
"text": "We can try to change linkage criteria"
},
{
"code": null,
"e": 8279,
"s": 8170,
"text": "After finding the optimal hyperparameters it is possible to refactor the code and speed-up the calculations."
}
] |
Traverse a collection of objects using the Enumeration Interface in Java

All the elements in a collection of objects can be traversed using the Enumeration interface. The method hasMoreElements( ) returns true if there are more elements to be enumerated and false if there are no more elements to be enumerated. The method nextElement( ) returns the next object in the enumeration.
A program that demonstrates this is given as follows −
import java.util.Enumeration;
import java.util.Vector;
public class Demo {
public static void main(String args[]) throws Exception {
Vector vec = new Vector();
vec.add("John");
vec.add("Gary");
vec.add("Susan");
vec.add("Mike");
vec.add("Angela");
Enumeration enumeration = vec.elements();
System.out.println("The vector elements are:");
while (enumeration.hasMoreElements()) {
Object obj = enumeration.nextElement();
System.out.println(obj);
}
}
}
The vector elements are:
John
Gary
Susan
Mike
Angela
Now let us understand the above program.
The Vector is created and Vector.add() is used to add the elements to the Vector. Then the vector elements are displayed using the enumeration interface. A code snippet which demonstrates this is as follows −
Vector vec = new Vector();
vec.add("John");
vec.add("Gary");
vec.add("Susan");
vec.add("Mike");
vec.add("Angela");
Enumeration enumeration = vec.elements();
System.out.println("The vector elements are:");
while (enumeration.hasMoreElements()) {
Object obj = enumeration.nextElement();
System.out.println(obj);
} | [
{
"code": null,
"e": 1371,
"s": 1062,
"text": "All the elements in a collection of objects can be traversed using the Enumeration interface. The method hasMoreElements( ) returns true if there are more elements to be enumerated and false if there are no more elements to be enumerated. The method nextElement( ) returns the next object in the enumeration."
},
{
"code": null,
"e": 1426,
"s": 1371,
"text": "A program that demonstrates this is given as follows −"
},
{
"code": null,
"e": 1437,
"s": 1426,
"text": " Live Demo"
},
{
"code": null,
"e": 1970,
"s": 1437,
"text": "import java.util.Enumeration;\nimport java.util.Vector;\npublic class Demo {\n public static void main(String args[]) throws Exception {\n Vector vec = new Vector();\n vec.add(\"John\");\n vec.add(\"Gary\");\n vec.add(\"Susan\");\n vec.add(\"Mike\");\n vec.add(\"Angela\");\n Enumeration enumeration = vec.elements();\n System.out.println(\"The vector elements are:\");\n while (enumeration.hasMoreElements()) {\n Object obj = enumeration.nextElement();\n System.out.println(obj);\n }\n }\n}"
},
{
"code": null,
"e": 2023,
"s": 1970,
"text": "The vector elements are:\nJohn\nGary\nSusan\nMike\nAngela"
},
{
"code": null,
"e": 2064,
"s": 2023,
"text": "Now let us understand the above program."
},
{
"code": null,
"e": 2273,
"s": 2064,
"text": "The Vector is created and Vector.add() is used to add the elements to the Vector. Then the vector elements are displayed using the enumeration interface. A code snippet which demonstrates this is as follows −"
},
{
"code": null,
"e": 2591,
"s": 2273,
"text": "Vector vec = new Vector();\nvec.add(\"John\");\nvec.add(\"Gary\");\nvec.add(\"Susan\");\nvec.add(\"Mike\");\nvec.add(\"Angela\");\nEnumeration enumeration = vec.elements();\nSystem.out.println(\"The vector elements are:\");\nwhile (enumeration.hasMoreElements()) {\n Object obj = enumeration.nextElement();\n System.out.println(obj);\n}"
}
] |
Python Program for Longest Common Subsequence - GeeksforGeeks | 18 Apr, 2020
LCS Problem Statement: Given two sequences, find the length of longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order, but not necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, ‘”acefg”, .. etc are subsequences of “abcdefg”. So a string of length n has 2^n different possible subsequences.
It is a classic computer science problem, the basis of diff (a file comparison program that outputs the differences between two files), and has applications in bioinformatics.
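The 2^n count is easy to verify for a small string — a throwaway sketch (not part of the original article) that enumerates all subsequences with itertools:

```python
from itertools import combinations

def all_subsequences(s):
    # Every subset of index positions, kept in order, is a subsequence.
    return [''.join(c) for r in range(len(s) + 1)
                       for c in combinations(s, r)]

subs = all_subsequences("abc")
print(len(subs))  # 8 == 2 ** 3
print(subs)
```

Each character is either kept or dropped independently, which is exactly where the 2^n comes from.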
Examples:
LCS for input sequences “ABCDGH” and “AEDFHR” is “ADH” of length 3.
LCS for input sequences “AGGTAB” and “GXTXAYB” is “GTAB” of length 4.
Let the input sequences be X[0..m-1] and Y[0..n-1] of lengths m and n respectively. And let L(X[0..m-1], Y[0..n-1]) be the length of LCS of the two sequences X and Y. Following is the recursive definition of L(X[0..m-1], Y[0..n-1]).
If the last characters of both sequences match (or X[m-1] == Y[n-1]), then
L(X[0..m-1], Y[0..n-1]) = 1 + L(X[0..m-2], Y[0..n-2])

If the last characters of both sequences do not match (or X[m-1] != Y[n-1]), then
L(X[0..m-1], Y[0..n-1]) = MAX( L(X[0..m-2], Y[0..n-1]), L(X[0..m-1], Y[0..n-2]) )
# A Naive recursive Python implementation of LCS problem

def lcs(X, Y, m, n):
    if m == 0 or n == 0:
        return 0
    elif X[m-1] == Y[n-1]:
        return 1 + lcs(X, Y, m-1, n-1)
    else:
        return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n))

# Driver program to test the above function
X = "AGGTAB"
Y = "GXTXAYB"
print("Length of LCS is ", lcs(X, Y, len(X), len(Y)))
Length of LCS is 4
Following is a tabulated implementation for the LCS problem.
# Dynamic Programming implementation of LCS problem

def lcs(X, Y):
    # find the length of the strings
    m = len(X)
    n = len(Y)

    # declaring the array for storing the dp values
    L = [[None] * (n + 1) for i in range(m + 1)]

    """Following steps build L[m + 1][n + 1] in bottom up fashion
    Note: L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1]"""
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                L[i][j] = 0
            elif X[i-1] == Y[j-1]:
                L[i][j] = L[i-1][j-1] + 1
            else:
                L[i][j] = max(L[i-1][j], L[i][j-1])

    # L[m][n] contains the length of LCS of X[0..n-1] & Y[0..m-1]
    return L[m][n]
# end of function lcs

# Driver program to test the above function
X = "AGGTAB"
Y = "GXTXAYB"
print("Length of LCS is ", lcs(X, Y))

# This code is contributed by Nikhil Kumar Singh(nickzuck_007)
Length of LCS is 4
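Since each row of the table depends only on the previous row, the tabulation can be reduced to two rows — a sketch of a common space optimization (the function name is mine, not from the original article):

```python
def lcs_two_rows(X, Y):
    # Same recurrence as the full table, but only the previous
    # and current rows are kept, giving O(n) extra space.
    m, n = len(X), len(Y)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

print("Length of LCS is", lcs_two_rows("AGGTAB", "GXTXAYB"))  # 4
```

The time complexity stays O(m*n); only the memory drops from O(m*n) to O(n).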
Please refer complete article on Dynamic Programming | Set 4 (Longest Common Subsequence) for more details!
Akshay Ashok
LCS
Dynamic Programming
Python Programs
{
"code": null,
"e": 26932,
"s": 26904,
"text": "\n18 Apr, 2020"
},
{
"code": null,
"e": 27295,
"s": 26932,
"text": "LCS Problem Statement: Given two sequences, find the length of longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order, but not necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, ‘”acefg”, .. etc are subsequences of “abcdefg”. So a string of length n has 2^n different possible subsequences."
},
{
"code": null,
"e": 27471,
"s": 27295,
"text": "It is a classic computer science problem, the basis of diff (a file comparison program that outputs the differences between two files), and has applications in bioinformatics."
},
{
"code": null,
"e": 27617,
"s": 27471,
"text": "Examples:LCS for input Sequences “ABCDGH” and “AEDFHR” is “ADH” of length 3.LCS for input Sequences “AGGTAB” and “GXTXAYB” is “GTAB” of length 4."
},
{
"code": null,
"e": 27850,
"s": 27617,
"text": "Let the input sequences be X[0..m-1] and Y[0..n-1] of lengths m and n respectively. And let L(X[0..m-1], Y[0..n-1]) be the length of LCS of the two sequences X and Y. Following is the recursive definition of L(X[0..m-1], Y[0..n-1])."
},
{
"code": null,
"e": 27973,
"s": 27850,
"text": "If last characters of both sequences match (or X[m-1] == Y[n-1]) thenL(X[0..m-1], Y[0..n-1]) = 1 + L(X[0..m-2], Y[0..n-2])"
},
{
"code": null,
"e": 28130,
"s": 27973,
"text": "If last characters of both sequences do not match (or X[m-1] != Y[n-1]) thenL(X[0..m-1], Y[0..n-1]) = MAX ( L(X[0..m-2], Y[0..n-1]), L(X[0..m-1], Y[0..n-2])"
},
{
"code": "# A Naive recursive Python implementation of LCS problem def lcs(X, Y, m, n): if m == 0 or n == 0: return 0; elif X[m-1] == Y[n-1]: return 1 + lcs(X, Y, m-1, n-1); else: return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n)); # Driver program to test the above functionX = \"AGGTAB\"Y = \"GXTXAYB\"print (\"Length of LCS is \", lcs(X, Y, len(X), len(Y)))",
"e": 28506,
"s": 28130,
"text": null
},
{
"code": null,
"e": 28527,
"s": 28506,
"text": "Length of LCS is 4\n"
},
{
"code": null,
"e": 28588,
"s": 28527,
"text": "Following is a tabulated implementation for the LCS problem."
},
{
"code": "# Dynamic Programming implementation of LCS problem def lcs(X, Y): # find the length of the strings m = len(X) n = len(Y) # declaring the array for storing the dp values L = [[None]*(n + 1) for i in range(m + 1)] \"\"\"Following steps build L[m + 1][n + 1] in bottom up fashion Note: L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1]\"\"\" for i in range(m + 1): for j in range(n + 1): if i == 0 or j == 0 : L[i][j] = 0 elif X[i-1] == Y[j-1]: L[i][j] = L[i-1][j-1]+1 else: L[i][j] = max(L[i-1][j], L[i][j-1]) # L[m][n] contains the length of LCS of X[0..n-1] & Y[0..m-1] return L[m][n]# end of function lcs # Driver program to test the above functionX = \"AGGTAB\"Y = \"GXTXAYB\"print(\"Length of LCS is \", lcs(X, Y)) # This code is contributed by Nikhil Kumar Singh(nickzuck_007)",
"e": 29495,
"s": 28588,
"text": null
},
{
"code": null,
"e": 29516,
"s": 29495,
"text": "Length of LCS is 4\n"
},
{
"code": null,
"e": 29624,
"s": 29516,
"text": "Please refer complete article on Dynamic Programming | Set 4 (Longest Common Subsequence) for more details!"
},
{
"code": null,
"e": 29637,
"s": 29624,
"text": "Akshay Ashok"
},
{
"code": null,
"e": 29641,
"s": 29637,
"text": "LCS"
},
{
"code": null,
"e": 29661,
"s": 29641,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 29677,
"s": 29661,
"text": "Python Programs"
},
{
"code": null,
"e": 29697,
"s": 29677,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 29701,
"s": 29697,
"text": "LCS"
},
{
"code": null,
"e": 29799,
"s": 29701,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29830,
"s": 29799,
"text": "Bellman–Ford Algorithm | DP-23"
},
{
"code": null,
"e": 29863,
"s": 29830,
"text": "Floyd Warshall Algorithm | DP-16"
},
{
"code": null,
"e": 29890,
"s": 29863,
"text": "Subset Sum Problem | DP-25"
},
{
"code": null,
"e": 29925,
"s": 29890,
"text": "Matrix Chain Multiplication | DP-8"
},
{
"code": null,
"e": 29944,
"s": 29925,
"text": "Coin Change | DP-7"
},
{
"code": null,
"e": 29966,
"s": 29944,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 30009,
"s": 29966,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 30048,
"s": 30009,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 30094,
"s": 30048,
"text": "Python | Split string into list of characters"
}
] |
Count of longest possible subarrays with sum not divisible by K - GeeksforGeeks | 12 May, 2021
Given an array of integers arr[] and a positive integer K, the task is to find the count of the longest possible subarrays with sum of its elements not divisible by K.
Examples:
Input: arr[] = {2, 3, 4, 6}, K = 3 Output: 1 Explanation: There is only one longest possible subarray of size 3 i.e. {3, 4, 6} having a sum 13, which is not divisible by K = 3.
Input: arr[] = {2, 4, 3, 5, 1}, K = 3 Output: 2 Explanation: There are 2 longest possible subarrays of size 4 i.e. {2, 4, 3, 5} and {4, 3, 5, 1} having a sum 14 and 13 respectively, which is not divisible by K = 3.
Approach:
Check if the sum of all the elements of the array is divisible by K
If the sum is not divisible by K, return 1 as the longest subarray would be of size N.
Else, perform the following steps:
Find the index of the first number not divisible by K. Let that be L.
Find the index of the last number not divisible by K. Let that be R.
Remove the elements all the way up to index L and find the size of the subarray. Remove the elements beyond R and find the size of this subarray as well. Whichever length is greater, that will be the size of the longest subarray not divisible by K.
Using this length as the window size, apply the sliding window technique on the arr[] to find out the count of sub-arrays of the size obtained above which are not divisible by K.
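The steps above can be condensed into a short Python sketch (identifier names are my own, not from the article):

```python
def count_longest_subarrays(arr, k):
    n = len(arr)

    # If the total sum is already not divisible by k,
    # the whole array (size n) is the single longest answer.
    if sum(arr) % k != 0:
        return 1

    # L: first index not divisible by k; R: last such index.
    ini = 0
    while ini < n and arr[ini] % k == 0:
        ini += 1
    if ini == n:               # every element divisible -> no valid subarray
        return -1
    fin = n - 1
    while arr[fin] % k == 0:
        fin -= 1

    # Drop the prefix through L (n - 1 - ini elements remain) or the
    # suffix from R onward (fin elements remain); keep the longer one.
    length = max(n - 1 - ini, fin)

    # Slide a window of that length, counting windows whose
    # sum is not divisible by k.
    window = sum(arr[:length])
    count = 1 if window % k != 0 else 0
    for i in range(length, n):
        window += arr[i] - arr[i - length]
        if window % k != 0:
            count += 1
    return count

print(count_longest_subarrays([2, 4, 3, 5, 1], 3))  # 2
print(count_longest_subarrays([3, 2, 2, 2, 3], 3))  # 2
```

On the first example above, {2, 3, 4, 6} with K = 3, it returns 1.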
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above problem #include <bits/stdc++.h>using namespace std; // Function to find the count of// longest subarrays with sum not// divisible by Kint CountLongestSubarrays( int arr[], int n, int k){ // Sum of all elements in // an array int i, s = 0; for (i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if (s % k) { return 1; } else { int ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } int final = n - 1; // Index of the last number // not divisible by K while (final >= 0 && arr[final] % k == 0) { --final; } int len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = max(n - 1 - ini, final); } // Sum of the window for (i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for (i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codeint main(){ int arr[] = { 3, 2, 2, 2, 3 }; int n = sizeof(arr) / sizeof(arr[0]); int k = 3; cout << CountLongestSubarrays(arr, n, k); return 0;}
// Java program for the above problemimport java.util.*; class GFG{ // Function to find the count of// longest subarrays with sum not// divisible by Kstatic int CountLongestSubarrays(int arr[], int n, int k){ // Sum of all elements in // an array int i, s = 0; for(i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if ((s % k) != 0) { return 1; } else { int ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } int fin = n - 1; // Index of the last number // not divisible by K while (fin >= 0 && arr[fin] % k == 0) { --fin; } int len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = Math.max(n - 1 - ini, fin); } // Sum of the window for(i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for(i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codepublic static void main (String []args){ int arr[] = { 3, 2, 2, 2, 3 }; int n = arr.length; int k = 3; System.out.print(CountLongestSubarrays( arr, n, k));}} // This code is contributed by chitranayal
# Python3 program for the above problem # Function to find the count of# longest subarrays with sum not# divisible by Kdef CountLongestSubarrays(arr, n, k):     # Sum of all elements in    # an array    s = 0     for i in range(n):        s += arr[i]     # If overall sum is not    # divisible then return    # 1, as only one subarray    # of size n is possible    if(s % k):        return 1     else:        ini = 0         # Index of the first number        # not divisible by K        while (ini < n and                arr[ini] % k == 0):            ini += 1         final = n - 1         # Index of the last number        # not divisible by K        while (final >= 0 and               arr[final] % k == 0):            final -= 1         sum, count = 0, 0         # Subarray doesn't exist        if(ini == n):            return -1         else:            length = max(n - 1 - ini, final)             # Sum of the window            for i in range(length):                sum += arr[i]             if(sum % k != 0):                count += 1             # Calculate the sum of rest of            # the windows of size len            for i in range(length, n):                sum = sum + arr[i]                sum = sum - arr[i - length]                 if (sum % k != 0):                    count += 1         return count # Driver Codeif __name__ == '__main__':     arr = [ 3, 2, 2, 2, 3 ]    n = len(arr)     k = 3     print(CountLongestSubarrays(arr, n, k)) # This code is contributed by Shivam Singh
// C# program for the above problemusing System; class GFG{ // Function to find the count of// longest subarrays with sum not// divisible by Kstatic int CountLongestSubarrays(int[] arr, int n, int k){ // Sum of all elements in // an array int i, s = 0; for(i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if ((s % k) != 0) { return 1; } else { int ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } int fin = n - 1; // Index of the last number // not divisible by K while (fin >= 0 && arr[fin] % k == 0) { --fin; } int len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = Math.Max(n - 1 - ini, fin); } // Sum of the window for(i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for(i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codepublic static void Main(String[] args){ int[] arr = { 3, 2, 2, 2, 3 }; int n = arr.Length; int k = 3; Console.WriteLine(CountLongestSubarrays( arr, n, k));}} // This code is contributed by jrishabh99
<script> // JavaScript program for the above problem // Function to find the count of// longest subarrays with sum not// divisible by Kfunction CountLongestSubarrays(arr, n, k){ // Sum of all elements in // an array let i, s = 0; for(i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if ((s % k) != 0) { return 1; } else { let ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } let fin = n - 1; // Index of the last number // not divisible by K while (fin >= 0 && arr[fin] % k == 0) { --fin; } let len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = Math.max(n - 1 - ini, fin); } // Sum of the window for(i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for(i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codelet arr = [ 3, 2, 2, 2, 3 ];let n = arr.length;let k = 3; document.write(CountLongestSubarrays( arr, n, k)); // This code is contributed by sanjoy_62 </script>
2
Time Complexity: O(N)
Auxiliary Space Complexity: O(1)
SHIVAMSINGH67
ukasp
jrishabh99
sanjoy_62
khushboogoyal499
divisibility
Number Divisibility
sliding-window
subarray
Arrays
Competitive Programming
Greedy
Mathematical
sliding-window
Arrays
Greedy
Mathematical
Maximum and minimum of an array using minimum number of comparisons
Top 50 Array Coding Problems for Interviews
Introduction to Arrays
Multidimensional Arrays in Java
Linear Search
Competitive Programming - A Complete Guide
Practice for cracking any coding interview
Arrow operator -> in C/C++ with Examples
Prefix Sum Array - Implementation and Applications in Competitive Programming
Top 10 Algorithms and Data Structures for Competitive Programming | [
{
"code": null,
"e": 26455,
"s": 26427,
"text": "\n12 May, 2021"
},
{
"code": null,
"e": 26623,
"s": 26455,
"text": "Given an array of integers arr[] and a positive integer K, the task is to find the count of the longest possible subarrays with sum of its elements not divisible by K."
},
{
"code": null,
"e": 26634,
"s": 26623,
"text": "Examples: "
},
{
"code": null,
"e": 26812,
"s": 26634,
"text": "Input: arr[] = {2, 3, 4, 6}, K = 3 Output: 1 Explanation: There is only one longest possible subarray of size 3 i.e. {3, 4, 6} having a sum 13, which is not divisible by K = 3. "
},
{
"code": null,
"e": 27029,
"s": 26812,
"text": "Input: arr[] = {2, 4, 3, 5, 1}, K = 3 Output: 2 Explanation: There are 2 longest possible subarrays of size 4 i.e. {2, 4, 3, 5} and {4, 3, 5, 1} having a sum 14 and 13 respectively, which is not divisible by K = 3. "
},
{
"code": null,
"e": 27041,
"s": 27029,
"text": "Approach: "
},
{
"code": null,
"e": 27763,
"s": 27041,
"text": "Check if the sum of all the elements of the array is divisible by KIf the sum is not divisible by K, return 1 as the longest subarray would be of size N.Else Find the index of the first number not divisible by K. Let that be L.Find the index of the last number not divisible by K. Let that be R.Remove the elements all the way up to index L and find the size of the subarray. Remove the elements beyond R and find the size of this subarray as well. Whichever length is greater, that will be the size of the longest subarray not divisible by K.Using this length as the window size, apply the sliding window technique on the arr[] to find out the count of sub-arrays of the size obtained above which are not divisible by K."
},
{
"code": null,
"e": 27831,
"s": 27763,
"text": "Check if the sum of all the elements of the array is divisible by K"
},
{
"code": null,
"e": 27918,
"s": 27831,
"text": "If the sum is not divisible by K, return 1 as the longest subarray would be of size N."
},
{
"code": null,
"e": 28487,
"s": 27918,
"text": "Else Find the index of the first number not divisible by K. Let that be L.Find the index of the last number not divisible by K. Let that be R.Remove the elements all the way up to index L and find the size of the subarray. Remove the elements beyond R and find the size of this subarray as well. Whichever length is greater, that will be the size of the longest subarray not divisible by K.Using this length as the window size, apply the sliding window technique on the arr[] to find out the count of sub-arrays of the size obtained above which are not divisible by K."
},
{
"code": null,
"e": 28557,
"s": 28487,
"text": "Find the index of the first number not divisible by K. Let that be L."
},
{
"code": null,
"e": 28626,
"s": 28557,
"text": "Find the index of the last number not divisible by K. Let that be R."
},
{
"code": null,
"e": 28875,
"s": 28626,
"text": "Remove the elements all the way up to index L and find the size of the subarray. Remove the elements beyond R and find the size of this subarray as well. Whichever length is greater, that will be the size of the longest subarray not divisible by K."
},
{
"code": null,
"e": 29054,
"s": 28875,
"text": "Using this length as the window size, apply the sliding window technique on the arr[] to find out the count of sub-arrays of the size obtained above which are not divisible by K."
},
{
"code": null,
"e": 29107,
"s": 29054,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 29111,
"s": 29107,
"text": "C++"
},
{
"code": null,
"e": 29116,
"s": 29111,
"text": "Java"
},
{
"code": null,
"e": 29124,
"s": 29116,
"text": "Python3"
},
{
"code": null,
"e": 29127,
"s": 29124,
"text": "C#"
},
{
"code": null,
"e": 29138,
"s": 29127,
"text": "Javascript"
},
{
"code": "// C++ program for the above problem #include <bits/stdc++.h>using namespace std; // Function to find the count of// longest subarrays with sum not// divisible by Kint CountLongestSubarrays( int arr[], int n, int k){ // Sum of all elements in // an array int i, s = 0; for (i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if (s % k) { return 1; } else { int ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } int final = n - 1; // Index of the last number // not divisible by K while (final >= 0 && arr[final] % k == 0) { --final; } int len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = max(n - 1 - ini, final); } // Sum of the window for (i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for (i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codeint main(){ int arr[] = { 3, 2, 2, 2, 3 }; int n = sizeof(arr) / sizeof(arr[0]); int k = 3; cout << CountLongestSubarrays(arr, n, k); return 0;}",
"e": 30827,
"s": 29138,
"text": null
},
{
"code": "// Java program for the above problemimport java.util.*; class GFG{ // Function to find the count of// longest subarrays with sum not// divisible by Kstatic int CountLongestSubarrays(int arr[], int n, int k){ // Sum of all elements in // an array int i, s = 0; for(i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if ((s % k) != 0) { return 1; } else { int ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } int fin = n - 1; // Index of the last number // not divisible by K while (fin >= 0 && arr[fin] % k == 0) { --fin; } int len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = Math.max(n - 1 - ini, fin); } // Sum of the window for(i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for(i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codepublic static void main (String []args){ int arr[] = { 3, 2, 2, 2, 3 }; int n = arr.length; int k = 3; System.out.print(CountLongestSubarrays( arr, n, k));}} // This code is contributed by chitranayal",
"e": 32611,
"s": 30827,
"text": null
},
{
"code": "# Python3 program for the above problem # Function to find the count of# longest subarrays with sum not# divisible by Kdef CountLongestSubarrays(arr, n, k):     # Sum of all elements in    # an array    s = 0     for i in range(n):        s += arr[i]     # If overall sum is not    # divisible then return    # 1, as only one subarray    # of size n is possible    if(s % k):        return 1     else:        ini = 0         # Index of the first number        # not divisible by K        while (ini < n and                arr[ini] % k == 0):            ini += 1         final = n - 1         # Index of the last number        # not divisible by K        while (final >= 0 and               arr[final] % k == 0):            final -= 1         sum, count = 0, 0         # Subarray doesn't exist        if(ini == n):            return -1         else:            length = max(n - 1 - ini, final)             # Sum of the window            for i in range(length):                sum += arr[i]             if(sum % k != 0):                count += 1             # Calculate the sum of rest of            # the windows of size len            for i in range(length, n):                sum = sum + arr[i]                sum = sum - arr[i - length]                 if (sum % k != 0):                    count += 1         return count # Driver Codeif __name__ == '__main__':     arr = [ 3, 2, 2, 2, 3 ]    n = len(arr)     k = 3     print(CountLongestSubarrays(arr, n, k)) # This code is contributed by Shivam Singh",
"e": 34059,
"s": 32611,
"text": null
},
{
"code": "// C# program for the above problemusing System; class GFG{ // Function to find the count of// longest subarrays with sum not// divisible by Kstatic int CountLongestSubarrays(int[] arr, int n, int k){ // Sum of all elements in // an array int i, s = 0; for(i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if ((s % k) != 0) { return 1; } else { int ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } int fin = n - 1; // Index of the last number // not divisible by K while (fin >= 0 && arr[fin] % k == 0) { --fin; } int len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = Math.Max(n - 1 - ini, fin); } // Sum of the window for(i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for(i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codepublic static void Main(String[] args){ int[] arr = { 3, 2, 2, 2, 3 }; int n = arr.Length; int k = 3; Console.WriteLine(CountLongestSubarrays( arr, n, k));}} // This code is contributed by jrishabh99",
"e": 35841,
"s": 34059,
"text": null
},
{
"code": "<script> // JavaScript program for the above problem // Function to find the count of// longest subarrays with sum not// divisible by Kfunction CountLongestSubarrays(arr, n, k){ // Sum of all elements in // an array let i, s = 0; for(i = 0; i < n; ++i) { s += arr[i]; } // If overall sum is not // divisible then return // 1, as only one subarray // of size n is possible if ((s % k) != 0) { return 1; } else { let ini = 0; // Index of the first number // not divisible by K while (ini < n && arr[ini] % k == 0) { ++ini; } let fin = n - 1; // Index of the last number // not divisible by K while (fin >= 0 && arr[fin] % k == 0) { --fin; } let len, sum = 0, count = 0; // Subarray doesn't exist if (ini == n) { return -1; } else { len = Math.max(n - 1 - ini, fin); } // Sum of the window for(i = 0; i < len; i++) { sum += arr[i]; } if (sum % k != 0) { count++; } // Calculate the sum of rest of // the windows of size len for(i = len; i < n; i++) { sum = sum + arr[i]; sum = sum - arr[i - len]; if (sum % k != 0) { count++; } } return count; }} // Driver Codelet arr = [ 3, 2, 2, 2, 3 ];let n = arr.length;let k = 3; document.write(CountLongestSubarrays( arr, n, k)); // This code is contributed by sanjoy_62 </script>",
"e": 37572,
"s": 35841,
"text": null
},
{
"code": null,
"e": 37574,
"s": 37572,
"text": "2"
},
{
"code": null,
"e": 37632,
"s": 37576,
"text": "Time Complexity: O(N) Auxiliary Space Complexity: O(1) "
},
{
"code": null,
"e": 37646,
"s": 37632,
"text": "SHIVAMSINGH67"
},
{
"code": null,
"e": 37652,
"s": 37646,
"text": "ukasp"
},
{
"code": null,
"e": 37663,
"s": 37652,
"text": "jrishabh99"
},
{
"code": null,
"e": 37673,
"s": 37663,
"text": "sanjoy_62"
},
{
"code": null,
"e": 37690,
"s": 37673,
"text": "khushboogoyal499"
},
{
"code": null,
"e": 37703,
"s": 37690,
"text": "divisibility"
},
{
"code": null,
"e": 37723,
"s": 37703,
"text": "Number Divisibility"
},
{
"code": null,
"e": 37738,
"s": 37723,
"text": "sliding-window"
},
{
"code": null,
"e": 37747,
"s": 37738,
"text": "subarray"
},
{
"code": null,
"e": 37754,
"s": 37747,
"text": "Arrays"
},
{
"code": null,
"e": 37778,
"s": 37754,
"text": "Competitive Programming"
},
{
"code": null,
"e": 37785,
"s": 37778,
"text": "Greedy"
},
{
"code": null,
"e": 37798,
"s": 37785,
"text": "Mathematical"
},
{
"code": null,
"e": 37813,
"s": 37798,
"text": "sliding-window"
},
{
"code": null,
"e": 37820,
"s": 37813,
"text": "Arrays"
},
{
"code": null,
"e": 37827,
"s": 37820,
"text": "Greedy"
},
{
"code": null,
"e": 37840,
"s": 37827,
"text": "Mathematical"
},
{
"code": null,
"e": 37938,
"s": 37840,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 38006,
"s": 37938,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 38050,
"s": 38006,
"text": "Top 50 Array Coding Problems for Interviews"
},
{
"code": null,
"e": 38073,
"s": 38050,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 38105,
"s": 38073,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 38119,
"s": 38105,
"text": "Linear Search"
},
{
"code": null,
"e": 38162,
"s": 38119,
"text": "Competitive Programming - A Complete Guide"
},
{
"code": null,
"e": 38205,
"s": 38162,
"text": "Practice for cracking any coding interview"
},
{
"code": null,
"e": 38246,
"s": 38205,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 38324,
"s": 38246,
"text": "Prefix Sum Array - Implementation and Applications in Competitive Programming"
}
] |
How to create a Breadcrumb Navigation ? - GeeksforGeeks | 23 Apr, 2021
In this article, we will learn how to create breadcrumb navigation. Breadcrumbs are a secondary navigation aid that helps users easily navigate through a website. Breadcrumbs provide orientation and show you exactly where you are within the website’s hierarchy.
Approach 1: We will follow the below steps for creating breadcrumbs using only CSS. This method lets you customize exactly how the breadcrumbs look.
Step 1: Create an HTML list of the navigation links.
<ul class="breadcrumb-navigation">
<li><a href="home">Home</a></li>
<li><a href="webdev">Web Development</a></li>
<li><a href="frontenddev">Frontend Development</a></li>
<li>JavaScript</li>
</ul>
Step 2: Set the CSS display: inline in order to show the list in the same line.
.breadcrumb-navigation > li {
display: inline;
}
Step 3: Add a separator after every list element.
.breadcrumb-navigation li + li:before {
padding: 4px;
content: "/";
}
Example:
HTML
<!DOCTYPE html><html> <head> <style> .breadcrumb-navigation { padding: 10px 18px; background-color: rgb(238, 238, 238); } .breadcrumb-navigation>li { display: inline; } .breadcrumb-navigation>li>a { color: #026ece; text-decoration: none; } .breadcrumb-navigation>li>a:hover { color: #6fc302; text-decoration: underline; } .breadcrumb-navigation li+li:before { padding: 4px; content: "/"; } </style></head> <body> <h1 style="color: green">GeeksforGeeks</h1> <ul class="breadcrumb-navigation"> <li> <a href="home"> Home </a> </li> <li> <a href="webdev"> Web Development </a> </li> <li> <a href="frontenddev"> Frontend Development </a> </li> <li>JavaScript</li> </ul></body> </html>
Output:
Approach 2: We will follow the below steps for creating breadcrumbs using the Bootstrap library. This allows one to quickly create good-looking breadcrumbs.
Step 1: We simply add aria-label=”breadcrumb” to the nav element.
<nav aria-label="breadcrumb">
Step 2: We next add class=”breadcrumb-item” in the list elements.
<li class="breadcrumb-item"><a href="#">
Home
</a></li>
Step 3: Add class=”breadcrumb-item active” in the current list element.
<li class="breadcrumb-item active" aria-current="page">
JavaScript
</li>
Example:
HTML
<!DOCTYPE html><html> <head> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css" /></head> <body> <h1 style="color: green"> GeeksforGeeks </h1> <nav aria-label="breadcrumb"> <ol class="breadcrumb"> <li class="breadcrumb-item"> <a href="home">Home</a> </li> <li class="breadcrumb-item"> <a href="webdev">Web Development</a> </li> <li class="breadcrumb-item"> <a href="frontenddev"> Frontend Development </a> </li> <li class="breadcrumb-item active" aria-current="page"> JavaScript </li> </ol> </nav></body> </html>
Output:
Bootstrap-Questions
CSS-Properties
CSS-Questions
HTML-Questions
Picked
Bootstrap
CSS
HTML
Web Technologies
HTML
How to Show Images on Click using HTML ?
How to set Bootstrap Timepicker using datetimepicker library ?
How to Use Bootstrap with React?
How to keep gap between columns using Bootstrap?
Tailwind CSS vs Bootstrap
How to insert spaces/tabs in text using HTML/CSS?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
How to update Node.js and NPM to next version ?
How to create footer to stay at the bottom of a Web page?
CSS to put icon inside an input element in a form | [
{
"code": null,
"e": 26951,
"s": 26923,
"text": "\n23 Apr, 2021"
},
{
"code": null,
"e": 27221,
"s": 26951,
"text": "In this article, we will learn how to create breadcrumbs navigation. Breadcrumbs are a secondary navigation aid that helps users easily navigate through a website. Breadcrumbs provide you an orientation and show you exactly where you are within the website’s hierarchy."
},
{
"code": null,
"e": 27382,
"s": 27221,
"text": "Approach 1: We will follow the below steps for creating breadcrumbs using only CSS. This method allows to exactly customize how the breadcrumbs would look like."
},
{
"code": null,
"e": 27435,
"s": 27382,
"text": "Step 1: Create an HTML list of the navigation links."
},
{
"code": null,
"e": 27647,
"s": 27435,
"text": "<ul class=\"breadcrumb-navigation\">\n <li><a href=\"home\">Home</a></li>\n <li><a href=\"webdev\">Web Development</a></li>\n <li><a href=\"frontenddev\">Frontend Development</a></li>\n <li>JavaScript</li>\n</ul>"
},
{
"code": null,
"e": 27727,
"s": 27647,
"text": "Step 2: Set the CSS display: inline in order to show the list in the same line."
},
{
"code": null,
"e": 27778,
"s": 27727,
"text": ".breadcrumb-navigation > li {\n display: inline;\n}"
},
{
"code": null,
"e": 27830,
"s": 27780,
"text": "Step 3: Add a separator after every list element."
},
{
"code": null,
"e": 27904,
"s": 27830,
"text": ".breadcrumb-navigation li + li:before {\n padding: 4px;\n content: \"/\";\n}"
},
{
"code": null,
"e": 27913,
"s": 27904,
"text": "Example:"
},
{
"code": null,
"e": 27918,
"s": 27913,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <style> .breadcrumb-navigation { padding: 10px 18px; background-color: rgb(238, 238, 238); } .breadcrumb-navigation>li { display: inline; } .breadcrumb-navigation>li>a { color: #026ece; text-decoration: none; } .breadcrumb-navigation>li>a:hover { color: #6fc302; text-decoration: underline; } .breadcrumb-navigation li+li:before { padding: 4px; content: \"/\"; } </style></head> <body> <h1 style=\"color: green\">GeeksforGeeks</h1> <ul class=\"breadcrumb-navigation\"> <li> <a href=\"home\"> Home </a> </li> <li> <a href=\"webdev\"> Web Development </a> </li> <li> <a href=\"frontenddev\"> Frontend Development </a> </li> <li>JavaScript</li> </ul></body> </html>",
"e": 28953,
"s": 27918,
"text": null
},
{
"code": null,
"e": 28961,
"s": 28953,
"text": "Output:"
},
{
"code": null,
"e": 29119,
"s": 28961,
"text": "Approach 2: We will follow the below steps for creating breadcrumbs using the Bootstrap library. This allows one to quickly create good-looking breadcrumbs. "
},
{
"code": null,
"e": 29185,
"s": 29119,
"text": "Step 1: We simply add aria-label=”breadcrumb” to the nav element."
},
{
"code": null,
"e": 29215,
"s": 29185,
"text": "<nav aria-label=\"breadcrumb\">"
},
{
"code": null,
"e": 29281,
"s": 29215,
"text": "Step 2: We next add class=”breadcrumb-item” in the list elements."
},
{
"code": null,
"e": 29341,
"s": 29281,
"text": "<li class=\"breadcrumb-item\"><a href=\"#\">\n Home\n</a></li>"
},
{
"code": null,
"e": 29413,
"s": 29341,
"text": "Step 3: Add class=”breadcrumb-item active” in the current list element."
},
{
"code": null,
"e": 29490,
"s": 29413,
"text": "<li class=\"breadcrumb-item active\" aria-current=\"page\">\n JavaScript\n</li>"
},
{
"code": null,
"e": 29499,
"s": 29490,
"text": "Example:"
},
{
"code": null,
"e": 29504,
"s": 29499,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css\" /></head> <body> <h1 style=\"color: green\"> GeeksforGeeks </h1> <nav aria-label=\"breadcrumb\"> <ol class=\"breadcrumb\"> <li class=\"breadcrumb-item\"> <a href=\"home\">Home</a> </li> <li class=\"breadcrumb-item\"> <a href=\"webdev\">Web Development</a> </li> <li class=\"breadcrumb-item\"> <a href=\"frontenddev\"> Frontend Development </a> </li> <li class=\"breadcrumb-item active\" aria-current=\"page\"> JavaScript </li> </ol> </nav></body> </html>",
"e": 30303,
"s": 29504,
"text": null
},
{
"code": null,
"e": 30311,
"s": 30303,
"text": "Output:"
},
{
"code": null,
"e": 30448,
"s": 30311,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 30468,
"s": 30448,
"text": "Bootstrap-Questions"
},
{
"code": null,
"e": 30483,
"s": 30468,
"text": "CSS-Properties"
},
{
"code": null,
"e": 30497,
"s": 30483,
"text": "CSS-Questions"
},
{
"code": null,
"e": 30512,
"s": 30497,
"text": "HTML-Questions"
},
{
"code": null,
"e": 30519,
"s": 30512,
"text": "Picked"
},
{
"code": null,
"e": 30529,
"s": 30519,
"text": "Bootstrap"
},
{
"code": null,
"e": 30533,
"s": 30529,
"text": "CSS"
},
{
"code": null,
"e": 30538,
"s": 30533,
"text": "HTML"
},
{
"code": null,
"e": 30555,
"s": 30538,
"text": "Web Technologies"
},
{
"code": null,
"e": 30560,
"s": 30555,
"text": "HTML"
},
{
"code": null,
"e": 30658,
"s": 30560,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30699,
"s": 30658,
"text": "How to Show Images on Click using HTML ?"
},
{
"code": null,
"e": 30762,
"s": 30699,
"text": "How to set Bootstrap Timepicker using datetimepicker library ?"
},
{
"code": null,
"e": 30795,
"s": 30762,
"text": "How to Use Bootstrap with React?"
},
{
"code": null,
"e": 30844,
"s": 30795,
"text": "How to keep gap between columns using Bootstrap?"
},
{
"code": null,
"e": 30870,
"s": 30844,
"text": "Tailwind CSS vs Bootstrap"
},
{
"code": null,
"e": 30920,
"s": 30870,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 30982,
"s": 30920,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 31030,
"s": 30982,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 31088,
"s": 31030,
"text": "How to create footer to stay at the bottom of a Web page?"
}
] |
PyQt5 – How to open window in maximized format? - GeeksforGeeks | 22 Apr, 2020
In this article, we will see how to make a window open in maximized format, i.e., as a full-screen window. We can do this by using the showMaximized method, which belongs to the QWidget class.
Syntax : self.showMaximized()
Argument : It takes no argument.
Action performed: It will open the window in full-screen format.
Code :
# importing librariesfrom PyQt5.QtWidgets import * from PyQt5.QtGui import * from PyQt5.QtCore import * import sys class Window(QMainWindow): def __init__(self): super().__init__() # setting title self.setWindowTitle("Python ") # setting geometry self.setGeometry(100, 100, 600, 400) # calling method self.UiComponents() # showing all the widgets self.show() # method for widgets def UiComponents(self): # creating label label = QLabel("Label", self) # setting geometry to label label.setGeometry(100, 100, 120, 40) # adding border to label label.setStyleSheet("border : 2px solid black") # opening window in maximized size self.showMaximized() # create pyqt5 appApp = QApplication(sys.argv) # create the instance of our Windowwindow = Window() # start the appsys.exit(App.exec())
Output :
Python-gui
Python-PyQt
Python
Python Dictionary
How to Install PIP on Windows ?
Enumerate() in Python
Different ways to create Pandas Dataframe
Create a Pandas DataFrame from Lists
Python String | replace()
Reading and Writing to text files in Python
*args and **kwargs in Python
How to drop one or multiple columns in Pandas Dataframe
sum() function in Python | [
{
"code": null,
"e": 24934,
"s": 24906,
"text": "\n22 Apr, 2020"
},
{
"code": null,
"e": 25132,
"s": 24934,
"text": "In this article we will see how to make a window to open in maximized format, which refer to full screen display window. We can do this by using showMaximized method which belongs to QWidget class."
},
{
"code": null,
"e": 25162,
"s": 25132,
"text": "Syntax : self.showMaximized()"
},
{
"code": null,
"e": 25195,
"s": 25162,
"text": "Argument : It takes no argument."
},
{
"code": null,
"e": 25259,
"s": 25195,
"text": "Action performed It will open the window in full screen format."
},
{
"code": null,
"e": 25266,
"s": 25259,
"text": "Code :"
},
{
"code": "# importing librariesfrom PyQt5.QtWidgets import * from PyQt5.QtGui import * from PyQt5.QtCore import * import sys class Window(QMainWindow): def __init__(self): super().__init__() # setting title self.setWindowTitle(\"Python \") # setting geometry self.setGeometry(100, 100, 600, 400) # calling method self.UiComponents() # showing all the widgets self.show() # method for widgets def UiComponents(self): # creating label label = QLabel(\"Label\", self) # setting geometry to label label.setGeometry(100, 100, 120, 40) # adding border to label label.setStyleSheet(\"border : 2px solid black\") # opening window in maximized size self.showMaximized() # create pyqt5 appApp = QApplication(sys.argv) # create the instance of our Windowwindow = Window() # start the appsys.exit(App.exec())",
"e": 26201,
"s": 25266,
"text": null
},
{
"code": null,
"e": 26210,
"s": 26201,
"text": "Output :"
},
{
"code": null,
"e": 26221,
"s": 26210,
"text": "Python-gui"
},
{
"code": null,
"e": 26233,
"s": 26221,
"text": "Python-PyQt"
},
{
"code": null,
"e": 26240,
"s": 26233,
"text": "Python"
},
{
"code": null,
"e": 26338,
"s": 26240,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26347,
"s": 26338,
"text": "Comments"
},
{
"code": null,
"e": 26360,
"s": 26347,
"text": "Old Comments"
},
{
"code": null,
"e": 26378,
"s": 26360,
"text": "Python Dictionary"
},
{
"code": null,
"e": 26410,
"s": 26378,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26432,
"s": 26410,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 26474,
"s": 26432,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 26511,
"s": 26474,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 26537,
"s": 26511,
"text": "Python String | replace()"
},
{
"code": null,
"e": 26581,
"s": 26537,
"text": "Reading and Writing to text files in Python"
},
{
"code": null,
"e": 26610,
"s": 26581,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 26666,
"s": 26610,
"text": "How to drop one or multiple columns in Pandas Dataframe"
}
] |
How to Combine Lists in Dart? - GeeksforGeeks | 20 Jul, 2020
In Dart programming, the List data type is similar to arrays in other programming languages. A list is used to represent a collection of objects; it is an ordered group of objects. Dart's core libraries provide the List class along with the means to create and manipulate lists. There are five ways to combine two or more lists:
Using the addAll() method to add all the elements of another list to the existing list.
Creating a new list by adding two or more lists using the addAll() method of the list.
Creating a new list by adding two or more lists using the expand() method of the list.
Using the + operator to combine lists.
Using the spread operator to combine lists.
We can add all the elements of another list to the existing list by using the addAll() method. To learn more about this method, you can follow this article.
Example:
Dart
// Main function
main() {
  // Creating lists
  List gfg1 = ['Welcome', 'to'];
  List gfg2 = ['GeeksForGeeks'];

  // Combining lists
  gfg1.addAll(gfg2);

  // Printing combined list
  print(gfg1);
}
Output:
[Welcome, to, GeeksForGeeks]
We can add all the elements of the lists, one after another, to a new list by using the addAll() method in Dart. To learn more about this method, you can follow this article.
Example:
Dart
// Main function
main() {
  // Creating lists
  List gfg1 = ['Welcome', 'to'];
  List gfg2 = ['GeeksForGeeks'];

  // Combining lists
  var newgfgList = new List.from(gfg1)..addAll(gfg2);

  // Printing combined list
  print(newgfgList);
}
Output:
[Welcome, to, GeeksForGeeks]
We can add all the elements of the lists, one after another, to a new list by using the expand() method in Dart. This is generally used to add more than two lists together.
Example:
Dart
// Main function
main() {
  // Creating lists
  List gfg1 = ['Welcome'];
  List gfg2 = ['to'];
  List gfg3 = ['GeeksForGeeks'];

  // Combining lists
  var newgfgList = [gfg1, gfg2, gfg3].expand((x) => x).toList();

  // Printing combined list
  print(newgfgList);
}
Output:
[Welcome, to, GeeksForGeeks]
We can also add lists together by using the + operator in Dart. This method was introduced in the Dart 2.0 update.
Example:
Dart
// Main function
main() {
  // Creating lists
  List gfg1 = ['Welcome'];
  List gfg2 = ['to'];
  List gfg3 = ['GeeksForGeeks'];

  // Combining lists
  var newgfgList = gfg1 + gfg2 + gfg3;

  // Printing combined list
  print(newgfgList);
}
Output:
[Welcome, to, GeeksForGeeks]
As of the Dart 2.3 update, one can also use the spread operator to combine lists in Dart.
Example:
Dart
// Main function
main() {
  // Creating lists
  List gfg1 = ['Welcome'];
  List gfg2 = ['to'];
  List gfg3 = ['GeeksForGeeks'];

  // Combining lists
  var newgfgList = [...gfg1, ...gfg2, ...gfg3];

  // Printing combined list
  print(newgfgList);
}
Output:
[Welcome, to, GeeksForGeeks]
Dart-List
Dart
[
{
"code": null,
"e": 24018,
"s": 23990,
"text": "\n20 Jul, 2020"
},
{
"code": null,
"e": 24353,
"s": 24018,
"text": "In Dart programming, the List data type is similar to arrays in other programming languages. A list is used to represent a collection of objects. It is an ordered group of objects. The core libraries in Dart are responsible for the existence of List class, its creation, and manipulation. There are 5 ways to combine two or more list:"
},
{
"code": null,
"e": 24671,
"s": 24353,
"text": "Using addAll() method to add all the elements of another list to the existing list.Creating a new list by adding two or more lists using addAll() method of the list.Creating a new list by adding two or more list using expand() method of the list.Using + operator to combine list.Using spread operator to combine list."
},
{
"code": null,
"e": 24755,
"s": 24671,
"text": "Using addAll() method to add all the elements of another list to the existing list."
},
{
"code": null,
"e": 24838,
"s": 24755,
"text": "Creating a new list by adding two or more lists using addAll() method of the list."
},
{
"code": null,
"e": 24920,
"s": 24838,
"text": "Creating a new list by adding two or more list using expand() method of the list."
},
{
"code": null,
"e": 24954,
"s": 24920,
"text": "Using + operator to combine list."
},
{
"code": null,
"e": 24993,
"s": 24954,
"text": "Using spread operator to combine list."
},
{
"code": null,
"e": 25147,
"s": 24993,
"text": "We can add all the elements of the other list to the existing list by the use of addAll() method. To learn about this method you can follow this article."
},
{
"code": null,
"e": 25157,
"s": 25147,
"text": "Example: "
},
{
"code": null,
"e": 25162,
"s": 25157,
"text": "Dart"
},
{
"code": "// Main functionmain() { // Creating lists List gfg1 = ['Welcome','to']; List gfg2 = ['GeeksForGeeks']; // Combining lists gfg1.addAll(gfg2); // Printing combined list print(gfg1);}",
"e": 25363,
"s": 25162,
"text": null
},
{
"code": null,
"e": 25373,
"s": 25363,
"text": " Output: "
},
{
"code": null,
"e": 25403,
"s": 25373,
"text": "[Welcome, to, GeeksForGeeks]\n"
},
{
"code": null,
"e": 25570,
"s": 25403,
"text": "We can add all the elements of the list one after another to a new list by the use of addAll() method in Dart. To learn about this method you can follow this article."
},
{
"code": null,
"e": 25580,
"s": 25570,
"text": "Example: "
},
{
"code": null,
"e": 25585,
"s": 25580,
"text": "Dart"
},
{
"code": "// Main functionmain() { // Creating lists List gfg1 = ['Welcome','to']; List gfg2 = ['GeeksForGeeks']; // Combining lists var newgfgList = new List.from(gfg1)..addAll(gfg2); // Printing combined list print(newgfgList);}",
"e": 25825,
"s": 25585,
"text": null
},
{
"code": null,
"e": 25835,
"s": 25825,
"text": " Output: "
},
{
"code": null,
"e": 25865,
"s": 25835,
"text": "[Welcome, to, GeeksForGeeks]\n"
},
{
"code": null,
"e": 26036,
"s": 25865,
"text": "We can add all the elements of the list one after another to a new list by the use of expand() method in Dart. This is generally used to add more than two lists together."
},
{
"code": null,
"e": 26046,
"s": 26036,
"text": "Example: "
},
{
"code": null,
"e": 26051,
"s": 26046,
"text": "Dart"
},
{
"code": "// Main functionmain() { // Creating lists List gfg1 = ['Welcome']; List gfg2 = ['to']; List gfg3 = ['GeeksForGeeks']; // Combining lists var newgfgList = [gfg1, gfg2, gfg3].expand((x) => x).toList(); // Printing combined list print(newgfgList);}",
"e": 26318,
"s": 26051,
"text": null
},
{
"code": null,
"e": 26328,
"s": 26318,
"text": " Output: "
},
{
"code": null,
"e": 26358,
"s": 26328,
"text": "[Welcome, to, GeeksForGeeks]\n"
},
{
"code": null,
"e": 26474,
"s": 26358,
"text": "We can also add lists together by the use of + operator in Dart. This method was introduced in the Dart 2.0 update."
},
{
"code": null,
"e": 26484,
"s": 26474,
"text": "Example: "
},
{
"code": null,
"e": 26489,
"s": 26484,
"text": "Dart"
},
{
"code": "// Main functionmain() { // Creating lists List gfg1 = ['Welcome']; List gfg2 = ['to']; List gfg3 = ['GeeksForGeeks']; // Combining lists var newgfgList = gfg1 + gfg2 + gfg3; // Printing combined list print(newgfgList);}",
"e": 26730,
"s": 26489,
"text": null
},
{
"code": null,
"e": 26740,
"s": 26730,
"text": " Output: "
},
{
"code": null,
"e": 26770,
"s": 26740,
"text": "[Welcome, to, GeeksForGeeks]\n"
},
{
"code": null,
"e": 26859,
"s": 26770,
"text": "As of Dart 2.3 update, one can also use the spread operator to combine the list in Dart."
},
{
"code": null,
"e": 26869,
"s": 26859,
"text": "Example: "
},
{
"code": null,
"e": 26874,
"s": 26869,
"text": "Dart"
},
{
"code": "// Main functionmain() { // Creating lists List gfg1 = ['Welcome']; List gfg2 = ['to']; List gfg3 = ['GeeksForGeeks']; // Combining lists var newgfgList = [...gfg1, ...gfg2, ...gfg3]; // Printing combined list print(newgfgList);}",
"e": 27124,
"s": 26874,
"text": null
},
{
"code": null,
"e": 27134,
"s": 27124,
"text": " Output: "
},
{
"code": null,
"e": 27164,
"s": 27134,
"text": "[Welcome, to, GeeksForGeeks]\n"
},
{
"code": null,
"e": 27174,
"s": 27164,
"text": "Dart-List"
},
{
"code": null,
"e": 27179,
"s": 27174,
"text": "Dart"
},
{
"code": null,
"e": 27277,
"s": 27179,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27286,
"s": 27277,
"text": "Comments"
},
{
"code": null,
"e": 27299,
"s": 27286,
"text": "Old Comments"
},
{
"code": null,
"e": 27331,
"s": 27299,
"text": "Flutter - DropDownButton Widget"
},
{
"code": null,
"e": 27370,
"s": 27331,
"text": "Flutter - Custom Bottom Navigation Bar"
},
{
"code": null,
"e": 27396,
"s": 27370,
"text": "Flutter - Checkbox Widget"
},
{
"code": null,
"e": 27423,
"s": 27396,
"text": "Flutter - BoxShadow Widget"
},
{
"code": null,
"e": 27437,
"s": 27423,
"text": "Dart Tutorial"
},
{
"code": null,
"e": 27463,
"s": 27437,
"text": "ListView Class in Flutter"
},
{
"code": null,
"e": 27493,
"s": 27463,
"text": "Flutter - BorderRadius Widget"
},
{
"code": null,
"e": 27511,
"s": 27493,
"text": "Operators in Dart"
},
{
"code": null,
"e": 27537,
"s": 27511,
"text": "Flutter - Carousel Slider"
}
] |
A limitation of Random Forest Regression | by Ben Thompson | Towards Data Science

Random Forest is a popular machine learning model that is commonly used for classification tasks as can be seen in many academic papers, Kaggle competitions, and blog posts. In addition to classification, Random Forests can also be used for regression tasks. A Random Forest's nonlinear nature can give it a leg up over linear algorithms, making it a great option. However, it is important to know your data and keep in mind that a Random Forest can't extrapolate. It can only make a prediction that is an average of previously observed labels. In this sense it is very similar to KNN. In other words, in a regression problem, the range of predictions a Random Forest can make is bound by the highest and lowest labels in the training data. This behavior becomes problematic in situations where the training and prediction inputs differ in their range and/or distributions. This is called covariate shift and it is difficult for most models to handle but especially for Random Forest, because it can't extrapolate.
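To make the boundedness concrete, here is a minimal, language-agnostic sketch (written in plain JavaScript for portability; the data and the 2-nearest-neighbour averaging rule are illustrative stand-ins for a Random Forest leaf average, not the article's actual model): any predictor that averages training labels can never predict above the largest label it has seen.

```javascript
// Toy neighbourhood predictor: predictions are averages of training labels,
// so, like a Random Forest leaf, they are bounded by the labels seen in training.
function meanOfNearest(trainX, trainY, x, k = 2) {
  return trainX
    .map((v, i) => ({ d: Math.abs(v - x), y: trainY[i] }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k)
    .reduce((sum, p) => sum + p.y, 0) / k;
}

// Linear trend y = 2x, trained on x = 0..10 (so the largest label is 20)
const trainX = [...Array(11).keys()];   // 0, 1, ..., 10
const trainY = trainX.map(x => 2 * x);

const pred = meanOfNearest(trainX, trainY, 20); // the true value would be 40
// pred === 19: the average of the two nearest labels (18 and 20),
// far below the extrapolated truth and never above the max label.
```

The same bound applies to a Random Forest, whose leaves return averages of training labels.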
For example, let's say that you're working with data that has an underlying trend over time, such as stock prices, home values, or sales. If your training data is missing any time periods, your Random Forest model will under- or over-predict (depending on the trend) examples outside of the time frames in your training data. This will be very noticeable if you plot your model's predictions against their true values. Let's take a look at this by creating some data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
%matplotlib inline

# make fake data with a time trend
X = np.random.rand(1000, 10)

# add time feature simulating years 2000-2010
time = np.random.randint(2000, 2011, size=1000)

# add time to X
X = np.hstack((X, time.reshape(-1, 1)))

# create target via a linear relationship to X
weights = np.random.rand(11)
y = X.dot(weights)

# create test data that includes years
# not in training data 2000 - 2019
X_test = np.random.rand(1000, 10)
time_test = np.random.randint(2000, 2020, size=1000)
X_test = np.hstack((X_test, time_test.reshape(-1, 1)))
y_test = X_test.dot(weights)
Let’s see how well a Random Forest can predict the test data.
# fit and score the data using RF
RF = RandomForestRegressor(n_estimators=100)
RF.fit(X, y)
RF.score(X_test, y_test)
>> 0.5872576516824577
That’s not very good. Let’s plot our predictions against their known values to see what is going on.
# plot RF as trees increase
# set starting point for subplots
index = 1

# set the size of the subplot to something large
plt.figure(figsize=(20, 20))

# iterate through number of trees in model
# and plot predictions v actual
for i in [1, 5, 10, 100]:
    plt.subplot(2, 2, index)
    RF_plot = RandomForestRegressor(n_estimators=i)
    RF_plot.fit(X, y)

    # split data btw vals RF can interpolate vs. data
    # it needs to extrapolate
    interpolate_index = X_test[:, 10] <= 2010
    extrapolate_index = X_test[:, 10] > 2010
    X_interpolate = X_test[interpolate_index]
    X_extrapolate = X_test[extrapolate_index]
    y_interpolate = y_test[interpolate_index]
    y_extrapolate = y_test[extrapolate_index]

    # plot predictions vs. actual
    plt.scatter(RF_plot.predict(X_interpolate), y_interpolate,
                color="g", label="interpolate")
    plt.scatter(RF_plot.predict(X_extrapolate), y_extrapolate,
                color="b", label="extrapolate")
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    plt.title('Random Forest with {} trees'.format(i))
    plt.subplots_adjust(wspace=.4, hspace=.4)
    plt.legend(loc="best")
    index += 1
This plot makes it clear that the highest value the model can predict is around 961, while the underlying trend in the data pushes more recent values as high as 966. Unfortunately, the Random Forest can’t extrapolate the linear trend and accurately predict new examples that have a time value higher than that seen in the training data (2000–2010). Even adjusting the number of trees doesn’t fix the problem. In this situation, since we forced a perfectly linear relationship on the data, a model like Linear Regression would be a better choice and will have no problem detecting the trend in the data and making accurate predictions for data outside of the time ranges in the training data.
# fit the data using Linear Regression
LR = LinearRegression()
LR.fit(X, y)
LR.score(X_test, y_test)
>> 1.0

# plot predictions of Linear Regression against actual
plt.figure(figsize=(7, 7))
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Linear Regression - Test Data')
_ = plt.scatter(LR.predict(X_interpolate), y_interpolate,
                color="g", label="interpolate")
_ = plt.scatter(LR.predict(X_extrapolate), y_extrapolate,
                color="b", label="extrapolate")
plt.legend(loc="best")
While Random Forest is often an excellent choice of model, it is still important to know how it works and whether it might have any limitations given your data. In this case, because it is a neighborhood-based model, it prevented us from making accurate predictions for time frames outside of our training data. If you find yourself in such a situation, it would be best to test other models, such as Linear Regression or Cubist, and/or consider using the Random Forest in an ensemble of models. Happy predicting!
{
"code": null,
"e": 1187,
"s": 172,
"text": "Random Forest is a popular machine learning model that is commonly used for classification tasks as can be seen in many academic papers, Kaggle competitions, and blog posts. In addition to classification, Random Forests can also be used for regression tasks. A Random Forest’s nonlinear nature can give it a leg up over linear algorithms, making it a great option. However, it is important to know your data and keep in mind that a Random Forest can’t extrapolate. It can only make a prediction that is an average of previously observed labels. In this sense it is very similar to KNN. In other words, in a regression problem, the range of predictions a Random Forest can make is bound by the highest and lowest labels in the training data. This behavior becomes problematic in situations where the training and prediction inputs differ in their range and/or distributions. This is called covariate shift and it is difficult for most models to handle but especially for Random Forest, because it can’t extrapolate."
},
{
"code": null,
"e": 1654,
"s": 1187,
"text": "For example, let’s say that you’re working with data that has an underlying trend over time such as stock prices, home values or sales. If you’re training data is missing any time periods, your Random Forest model will under or over predict, depending on the trend, examples outside of the time frames in your training data. This will be very noticeable if you plot your model’s predictions against their true values. Let’s take a look at this by creating some data."
},
{
"code": null,
"e": 2344,
"s": 1654,
"text": "import numpy as npimport matplotlib.pyplot as pltfrom sklearn.linear_model import LinearRegressionfrom sklearn.ensemble import RandomForestRegressor%matplotlib inline#make fake data with a time trendX = np.random.rand(1000,10)#add time feature simulating years 2000-2010time = np.random.randint(2000,2011,size=1000)#add time to XX = np.hstack((X,time.reshape(-1,1)))#create target via a linear relationship to Xweights = np.random.rand(11)y = X.dot(weights)#create test data that includes years#not in training data 2000 - 2019X_test = np.random.rand(1000,10)time_test = np.random.randint(2000,2020,size=1000)X_test = np.hstack((X_test,time_test.reshape(-1,1)))y_test = X_test.dot(weights)"
},
{
"code": null,
"e": 2406,
"s": 2344,
"text": "Let’s see how well a Random Forest can predict the test data."
},
{
"code": null,
"e": 2537,
"s": 2406,
"text": "#fit and score the data using RFRF = RandomForestRegressor(n_estimators=100)RF.fit(X,y)RF.score(X_test,y_test)>>0.5872576516824577"
},
{
"code": null,
"e": 2638,
"s": 2537,
"text": "That’s not very good. Let’s plot our predictions against their known values to see what is going on."
},
{
"code": null,
"e": 3782,
"s": 2638,
"text": "#plot RF as trees increase#set starting point for subplotsindex = 1#set the size of the subplot to something largeplt.figure(figsize=(20,20))#iterate through number of trees in model#and plot predictions v actualfor i in [1,5,10,100]: plt.subplot(2, 2, index) RF_plot = RandomForestRegressor(n_estimators=i) RF_plot.fit(X,y) #split data btw vals RF can interploate vs. data #it needs to exptrapolate interpolate_index = X_test[:,10]<=2010 extrapolate_index = X_test[:,10]>2010 X_interpolate = X_test[interpolate_index] X_extrapolate = X_test[extrapolate_index] y_interpolate = y_test[interpolate_index] y_extrapolate = y_test[extrapolate_index] #plot predictions vs. actual plt.scatter(RFplot.predict(X_interpolate), y_interpolate, color=\"g\",label=\"interpolate\") plt.scatter(RFplot.predict(X_extrapolate), y_extrapolate, color=\"b\",label=\"extrapolate\") plt.xlabel('Predicted') plt.ylabel('Actual') plt.title('Random Forest with {} trees'.format(i)) plt.subplots_adjust(wspace=.4, hspace=.4) plt.legend(loc=\"best\") index += 1"
},
{
"code": null,
"e": 4474,
"s": 3782,
"text": "This plot makes it clear that the highest value the model can predict is around 961, while the underlying trend in the data pushes more recent values as high as 966. Unfortunately, the Random Forest can’t extrapolate the linear trend and accurately predict new examples that have a time value higher than that seen in the training data (2000–2010). Even adjusting the number of trees doesn’t fix the problem. In this situation, since we forced a perfectly linear relationship on the data, a model like Linear Regression would be a better choice and will have no problem detecting the trend in the data and making accurate predictions for data outside of the time ranges in the training data."
},
{
"code": null,
"e": 4961,
"s": 4474,
"text": "#fit the data using Linear RegressionLR = LinearRegression()LR.fit(X,y)LR.score(X_test,y_test)>>1.0#plot predictions of Linear Regression against actualplt.figure(figsize=(7,7))plt.xlabel('Predicted')plt.ylabel('Actual')plt.title('Linear Regression - Test Data')_ = plt.scatter(LR.predict(X_interpolate),y_interpolate, color=\"g\",label=\"interpolate\")_ = plt.scatter(LR.predict(X_extrapolate),y_extrapolate, color=\"b\",label=\"extrapolate\")plt.legend(loc=\"best\")"
}
] |
7 React Best Practices Every Web Developer Should Follow - GeeksforGeeks | 26 May, 2020
React... the most popular JavaScript library for building user interfaces. For developers, this library is a favorite for building all kinds of beautiful applications. Learning React might be easy for you. You start using React and you start developing an application. You create one component to build some feature and then another for some other feature. When your application starts growing, you either add a few lines to an existing component or just create one more component. This goes on, and if you don't pay attention to these components or the code you have written, you may end up with a lot of messy code in your application. You will find that some code is redundant, some components are not reusable, a few components have too many lines of code, and many other issues. Later, it will be difficult to maintain the project.
Well, React is easy to learn, but if you don't follow some best practices then you will end up in a scenario like the one given above. It will also be tough for other developers to work on the same application. In this blog, let's discuss some tips and best practices for writing better React code in your application.
Many beginners make the mistake of not organizing files properly in a React application. A proper structure of folders and files is important not just in a React app but also in other applications. It helps in understanding the flow of the project and in adding other features to the application. The file structure of create-react-app is one possible way of organizing the files, but when the application grows rapidly, it becomes a difficult task.
Create an assets folder to keep your top-level CSS, images, and font files. You can create a helpers folder for files that provide shared functionality. Maintain one folder to keep all the components of your React project, and also maintain subfolders for minor components used by any large component. It will be easier to understand the file hierarchy if you keep large components in their own folders and the small components that they use in subfolders.
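For illustration only (all folder and file names here are hypothetical, not prescribed by React), such a layout might look like this:

```text
src/
  assets/          # top-level CSS, images, and font files
  helpers/         # shared utility/functionality files
  components/
    Header/        # large component in its own folder
      Header.js
      NavLinks.js  # small component used only by Header
    Footer/
      Footer.js
```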
In React, index.js is the main entry point used by developers, but it becomes difficult to navigate once you have several files all named index.js. In this situation, you can add a package.json file to each of your component folders and set the main entry point for the corresponding folder. It's not good practice to add a package.json file to each folder, but it does make the files easier to handle.
Example: A component Tags.js can be declared as the entry point with the code given below:

{
  "main": "Tags.js"
}
React works on the components’ reusability principle. Try to maintain and write smaller components instead of putting everything into a single massive component. Small size components are easy to read, easy to update, easy to debug, maintain, and reuse. Now the question is how big a component should be? Take a look at your render method. If it has more than 10 lines your component is probably too big. Try to refactor the code and split the components into smaller ones. In React, a component should only be responsible for one functionality (single responsibility principle). You can create smaller and reusable components when you follow this principle. This way everyone can work easily on your application.
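As a runnable sketch of the idea (this is not from the original article; JSX is replaced here by plain functions returning markup strings so the snippet can run anywhere), a Comment built from small single-responsibility pieces looks like this:

```javascript
// Small "components" as plain functions that return markup strings.
// In a real React app each of these would be a small functional component.
const Avatar = user => `<img src="${user.avatarUrl}" alt="${user.name}">`;
const UserName = user => `<b>${user.name}</b>`;
const UserInfo = user => `<div>${Avatar(user)}${UserName(user)}</div>`;

// The top-level component stays tiny because it only composes the pieces
const Comment = props =>
  `<div>${UserInfo(props.author)}<p>${props.text}</p></div>`;

const html = Comment({
  author: { name: 'Ada', avatarUrl: 'ada.png' },
  text: 'Nice article!',
});
// html contains '<b>Ada</b>' and '<p>Nice article!</p>'
```

Each piece can now be read, tested, and reused on its own, which is exactly what the single responsibility principle buys you.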
A lot of beginners get confused about whether they should create a class component or a functional component. If you aren't using lifecycle methods or component state, then it's more efficient to write functional components. Functional components are much easier to read and test because they are plain JavaScript functions without state or lifecycle hooks. Some of the advantages are as follows:
Fewer lines of code and better performance
Easier to read, easy to understand and easy to test.
No need to use ‘this’ binding.
Easier to extract smaller components.
Class Component
javascript
import React, { Component } from 'react';

class Button extends Component {
  render() {
    const { children, color, onClick } = this.props;
    return (
      <button onClick={onClick} className={`Btn ${color}`}>
        {children}
      </button>
    );
  }
}

export default Button;
Functional Component
javascript
import React from 'react';

export default function Button({ children, color, onClick }) {
  return (
    <button onClick={onClick} className={`Btn ${color}`}>
      {children}
    </button>
  );
}
There is one problem with functional components, i.e., developers have no control over the re-rendering process. When something changes, React will re-render the functional component even if its own inputs haven't changed. In former versions, the solution was PureComponent. PureComponent allows shallow props and state comparison, which means React "tests" whether the content of the component, the props, or the component itself has changed. The component will re-render when the props, the content of the component, or the component itself has changed. Otherwise, it will skip re-rendering and reuse the last rendered result instead.
The above problem was solved when a new feature, memo, was introduced with the release of React v16.6.0. Memo performs a shallow prop comparison for the functional component. It checks whether the content of the component, the props, or the component itself has changed. Based on the comparison, React will either reuse the last rendered result or re-render. Memo allowed developers to create "pure" functional components and eliminated the need for stateful components or pure components.
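To see what the shallow comparison buys you, here is a toy model of memo in plain JavaScript. This is only an illustration of the observable behavior, not React's actual implementation:

```javascript
// Toy model of React.memo's shallow prop comparison (illustration only)
function shallowEqual(a, b) {
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every(k => a[k] === b[k]);
}

function memo(renderFn) {
  let lastProps = null, lastResult = null;
  return props => {
    if (lastProps !== null && shallowEqual(lastProps, props)) {
      return lastResult;             // props unchanged: reuse last result
    }
    lastProps = props;
    lastResult = renderFn(props);    // props changed: re-render
    return lastResult;
  };
}

let renders = 0;
const Greeting = memo(({ name }) => { renders += 1; return `Hello, ${name}!`; });

Greeting({ name: 'Ada' });   // renders (first call)
Greeting({ name: 'Ada' });   // same props, render skipped
Greeting({ name: 'Alan' });  // changed props, renders again
// renders === 2
```

The second call with identical props is served from the cached result, which is exactly the re-render React.memo lets you skip.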
A lot of developers use the index as the value for the key prop. Adding a key prop to each element is required when you create an array of JSX elements. This is commonly done when you use map() or some other iterator or loop. Using the index as the key is another bad practice in React. React uses the key property to track each element in the array, and due to the collapsing nature of an array, this can easily result in the wrong information being rendered in the wrong place. This is especially apparent when looping through class components with state. The key props are used for identification, and they determine what should be rendered or re-rendered. React does not spend time rendering duplicates, so when you have two elements with the same key, React sees them as the same, and this can cause some elements to be eliminated.
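The index-shift problem can be seen without React at all. In this plain-JavaScript sketch (the item data is made up), removing the first item changes which data each index-based key points at, while id-based keys stay stable:

```javascript
const items = [
  { id: 'a', text: 'Apple' },
  { id: 'b', text: 'Banana' },
  { id: 'c', text: 'Cherry' },
];

const keyedByIndex = list => list.map((item, i) => ({ key: i, text: item.text }));
const keyedById = list => list.map(item => ({ key: item.id, text: item.text }));

const after = items.slice(1); // 'Apple' was removed from the front

// Index keys: key 0 meant 'Apple' before, but means 'Banana' after.
// React would see the "same key" and may wrongly reuse the old element/state.
const before0 = keyedByIndex(items)[0];  // { key: 0, text: 'Apple' }
const after0 = keyedByIndex(after)[0];   // { key: 0, text: 'Banana' }

// Stable id keys: each key still identifies the same piece of data.
const afterById0 = keyedById(after)[0];  // { key: 'b', text: 'Banana' }
```

This is why a stable, data-derived id is the safer choice for keys.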
Using props in the initial state is another bad practice in React. Why? Because the constructor is called only once, at the time the component is created. The next time you make changes to the props, the component state will remain the same as the previous value, and it won't be updated. This problem can be fixed by using the React lifecycle method componentDidUpdate. This method allows you to update the component when the props change. Keep in mind that this method won't be invoked on the initial render, so make sure you initialize the component state with the necessary values, probably fetched from props. After that, use this method to update those values, and the component, as you need.
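Here is a plain-JavaScript analogy of the problem (these are not real React classes; setProps stands in for React delivering new props, and the sync branch plays the role of componentDidUpdate): a component that copies a prop into state only in its constructor never sees later prop changes, while one that syncs on update stays correct.

```javascript
// Constructor-only copy: state goes stale when props change later
class StaleComponent {
  constructor(props) { this.props = props; this.state = { count: props.count }; }
  setProps(next) { this.props = next; }  // state is never updated
}

// Sync on update, analogous to componentDidUpdate(prevProps)
class SyncedComponent {
  constructor(props) { this.props = props; this.state = { count: props.count }; }
  setProps(next) {
    const prev = this.props;
    this.props = next;
    if (next.count !== prev.count) this.state = { count: next.count };
  }
}

const stale = new StaleComponent({ count: 1 });
const synced = new SyncedComponent({ count: 1 });
stale.setProps({ count: 5 });
synced.setProps({ count: 5 });
// stale.state.count === 1  (out of date)
// synced.state.count === 5 (kept in sync)
```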
Most developers initialize the component state in the class constructor, which is very common in React. It's not that bad a practice, but it increases the redundancy in your code and can create some performance issues. When you initialize state in the constructor, you need to remember the props and you need to call super with props. It also increases the number of lines in your code. You can initialize state with class fields instead of initializing state in the constructor. This practice in React helps you reduce noise in your code. Take a look at the code given below and compare the two approaches.
State Initialize in Constructor
javascript
// Import React library
import React from 'react'

// Create React component
class MyComponent extends React.Component {
  constructor(props) {
    super(props)

    // Initialize component State
    this.state = {
      count: 0
    }
  }
  ...
}
State Initialize with Class Field
javascript
// Import React library
import React from 'react'

// Create React component
class MyComponent extends React.Component {
  // Initialize component State
  state = {
    count: 0
  }
  ...
}
As the name suggests, stateful components store the component's state information and provide the necessary context. On the other hand, stateless components have no memory and don't provide any context. Stateless components require less code to be executed than stateful components, which increases the performance of the application. So reducing the use of stateful components in React is one of the best practices to follow.
With the release of React 16.8.0, a new feature, 'React Hooks', was introduced. This feature helps in writing stateful functional components, and it removes the need for class components. It is really helpful when the project grows. Earlier, we had just one option in React for using state and lifecycle methods, i.e., writing stateful class components. Hooks changed this, and now developers are no longer bound to class components just because they need to use state.
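As a rough sketch of the idea behind hooks (a teaching toy only; createHookHost and its call-order bookkeeping are invented here for illustration and are not React's real implementation), state can live outside the function and be re-attached by hook call order on every render:

```javascript
// Minimal toy of useState: state lives outside the component and is matched
// back up by hook call order on each render (illustration only).
function createHookHost(component) {
  const cells = [];
  let cursor = 0;
  const useState = initial => {
    const i = cursor++;
    if (!(i in cells)) cells[i] = initial;  // first render: store initial value
    const setState = value => { cells[i] = value; };
    return [cells[i], setState];
  };
  return () => {
    cursor = 0;  // hooks rely on a stable call order between renders
    return component(useState);
  };
}

let latestSetCount;
const Counter = useState => {
  const [count, setCount] = useState(0);
  latestSetCount = setCount;
  return `clicked ${count} times`;
};

const render = createHookHost(Counter);
const first = render();   // "clicked 0 times"
latestSetCount(1);        // simulate a click updating state
const second = render();  // "clicked 1 times"
```

The function stays a plain function, yet its state survives between renders, which is the essence of what useState gives functional components.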
react-js
GBlog
[
{
"code": null,
"e": 28409,
"s": 28381,
"text": "\n26 May, 2020"
},
{
"code": null,
"e": 29267,
"s": 28409,
"text": "React...the most popular library of Javascript for building user interfaces. For developers, this library is one of the favorite libraries to build any kind of beautiful applications. Learning React might be easy for you. You start using React and you start developing an application. You create one component to build some features and then another for some other feature. When your application starts growing either you add a few lines in the existing component or you just create one more component. This goes on and if you won’t pay attention to these components or the codes you have written then you may end up with a lot of messy code in your application. You will find some code is redundant, some components are not reusable, few components have too many lines of code and a lot of issues there. Later it will be difficult to maintain the project. "
},
{
"code": null,
"e": 29576,
"s": 29267,
"text": "Well, React is easy to learn but if you won’t follow some best practices then you will and up like a scenario given above. It will be tough for another developer as well to work on the same application. In this blog let’s discuss some tips and best practices to write better React code in your application. "
},
{
"code": null,
"e": 30037,
"s": 29576,
"text": "Most of the beginners make mistake in organizing the file properly in React application. A proper structure of folders and files is not just important in the React app but also in other applications. It helps in understanding the flow of the project and adding other features in the application. The file structure of create-react-app is one possible way of organizing the files, but when the applications grow rapidly, it becomes a little bit difficult task. "
},
{
"code": null,
"e": 30527,
"s": 30037,
"text": "Create an asset folder to keep your top-level CSS, images, and font files. You can create a helper folder to put other files for any kind of file for functionalities. Maintain one folder to keep all the components of your React project. Also, maintain the subfolder for minor components used by any large component. It will be easier to understand the file hierarchy if you keep large components in their own folder and the small components that are used by the components in a subfolder. "
},
{
"code": null,
"e": 30939,
"s": 30527,
"text": "In React, index.js is the main entry point used by developers but it becomes difficult to navigate once you have several files, all named index.js. In this situation, you can add a package.json file to each of your components folders and you can set the main entry point for this corresponding folder. It’s not the good practice to add pacjkage.json file in each folder but it will be easy to handle the files. "
},
{
"code": null,
"e": 31032,
"s": 30939,
"text": "Example: A component Tags.js can be declared as an entry point as the code given below... "
},
{
"code": null,
"e": 31055,
"s": 31032,
"text": "{\n\"main\": 'Tags.js'\n}\n"
},
{
"code": null,
"e": 31773,
"s": 31057,
"text": "React works on the components’ reusability principle. Try to maintain and write smaller components instead of putting everything into a single massive component. Small size components are easy to read, easy to update, easy to debug, maintain, and reuse. Now the question is how big a component should be? Take a look at your render method. If it has more than 10 lines your component is probably too big. Try to refactor the code and split the components into smaller ones. In React, a component should only be responsible for one functionality (single responsibility principle). You can create smaller and reusable components when you follow this principle. This way everyone can work easily on your application. "
},
{
"code": null,
"e": 32167,
"s": 31773,
"text": "A lot of beginners get confused about whether they should create a Class component or functional component. If you aren’t using the life cycle method or component state then it’s efficient to write functional components. Functional components are much easier to read and test because they are plain JavaScript functions without state or life cycle-hooks. Some of the advantages are as follows:"
},
{
"code": null,
"e": 32210,
"s": 32167,
"text": "Fewer lines of code and better performance"
},
{
"code": null,
"e": 32263,
"s": 32210,
"text": "Easier to read, easy to understand and easy to test."
},
{
"code": null,
"e": 32294,
"s": 32263,
"text": "No need to use ‘this’ binding."
},
{
"code": null,
"e": 32332,
"s": 32294,
"text": "Easier to extract smaller components."
},
{
"code": null,
"e": 32350,
"s": 32332,
"text": "Class Component "
},
{
"code": null,
"e": 32361,
"s": 32350,
"text": "javascript"
},
{
"code": "import React, { Component } from 'react'; class Button extends Component { render() { const { children, color, onClick } = this.props; return ( <button onClick={onClick} className={`Btn ${color}`}> {children} </button> ); }} export default Button;",
"e": 32639,
"s": 32361,
"text": null
},
{
"code": null,
"e": 32664,
"s": 32641,
"text": "Functional Component "
},
{
"code": null,
"e": 32675,
"s": 32664,
"text": "javascript"
},
{
"code": "import React from 'react'; export default function Button({ children, color, onClick }) { return ( <button onClick={onClick} className={`Btn ${color}`}> {children} </button> );}",
"e": 32867,
"s": 32675,
"text": null
},
{
"code": null,
"e": 33478,
"s": 32869,
"text": "There is one problem with functional component i.e developers have no control over the re-rendering process. When something changes React will re-render the functional component even if the component changes itself. In the former version, the solution was Pure component. PureComponent allows shallow props and state comparison which means React “test” if the content of the component, props, or the component itself has changed. The Component will re-render when props or content of the component or component itself changed. Otherwise, it will skip re-rendering and reuse the last rendered result instead. "
},
{
"code": null,
"e": 33948,
"s": 33478,
"text": "The above problem was solved when a new feature memo was introduced with the release of React v16.6.0. Memo performs shallow prop comparison to the functional component. It checks if the content of the component, props, or the component itself has changed. Based on the comparison react will either reuse last rendered result or re-render. Memo allowed developers to create “pure” functional components and eliminated the use of stateful components or pure components. "
},
{
"code": null,
"e": 34761,
"s": 33948,
"text": "A lot of developers use the index as a value for a key prop. Adding a key prop to the element is required when you create an array of JSX elements. This is commonly done when you use a map() or some other iterator or loop. This is another bad practice in React. React uses the key property to track each element in the array and due to the collapsing nature of an array. This can easily result in the wrong information being rendered in the wrong place. This is especially apparent when looping through class components with the state. The key props are used for identification and it determines what should be rendered or re-rendered. React does not spend time rendering duplicates. So when you have two elements with the same keys React sees them as the same and this can cause some elements to be eliminated. "
},
{
"code": null,
"e": 35451,
"s": 34761,
"text": "Using props in the initial state is another bad practice in React. Why? because the constructor is called only once, at the time when the component is created. Next time if you make any changes to the props, the component state will remain the same as the previous value and it won’t be updated. This problems can be fixed by using react life cycle method componentDidUpdate. This method allows you to update the component when props change. Keep in mind that this method won’t be invoked on the initial render so make sure you initialize component state with necessary values probably fetched from props. After that use this method to update those values, and the component, as you need. "
},
{
"code": null,
"e": 36100,
"s": 35451,
"text": "Most of the developer initializing the component state with with the class constructor which is very common in React. It’s not that much bad practice but it increases the redundancy in your code and creates some performance issue. When you initialize state in the constructor you need to remember about props and you need to call super with props. It also increases the number of lines in your code and creates a performance issue. You can initialize state with class fields instead of initializing state in the constructor. This practice in React helps you reduce noise in your code. Take a look at the code given below and compare both of them. "
},
{
"code": null,
"e": 36134,
"s": 36100,
"text": "State Initialize in Constructor "
},
{
"code": null,
"e": 36145,
"s": 36134,
"text": "javascript"
},
{
"code": "// Import React libraryimport React from 'react'// Create React componentclass MyComponent extends React.Component { constructor(props) { super(props) // Initialize component State this.state = { count: 0 } } ...}",
"e": 36379,
"s": 36145,
"text": null
},
{
"code": null,
"e": 36417,
"s": 36381,
"text": "State Initialize with Class Field "
},
{
"code": null,
"e": 36428,
"s": 36417,
"text": "javascript"
},
{
"code": "// Import React libraryimport React from 'react'// Create React componentclass MyComponent extends React.Component { // Initialize component State state = { count: 0 } ...}",
"e": 36608,
"s": 36428,
"text": null
},
{
"code": null,
"e": 37029,
"s": 36608,
"text": "As the name suggest stateful component store component’s state information and provide necessary context. On the other hand, stateless components have no memory and it doesn’t provide any context. Stateless components require less code to be executed than stateful components. This increases the performance of the application. So reducing the use of stateful components in React is one of the best practices to follow. "
},
{
"code": null,
"e": 37502,
"s": 37029,
"text": "With the release of React 16.8.0 a new feature ‘React Hooks‘ was introduced. This feature helps in writing stateful functional components and it obliterates the use of class components. This new feature is really helpful when the project grows. Earlier we just had one option in React to use state and life cycle method i.e. writing stateful components. Hooks changed this and now developers are no longer bounded to stateful components because they needed to use state. "
},
{
"code": null,
"e": 37511,
"s": 37502,
"text": "react-js"
},
{
"code": null,
"e": 37517,
"s": 37511,
"text": "GBlog"
},
{
"code": null,
"e": 37615,
"s": 37517,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 37689,
"s": 37615,
"text": "Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ..."
},
{
"code": null,
"e": 37717,
"s": 37689,
"text": "Socket Programming in C/C++"
},
{
"code": null,
"e": 37742,
"s": 37717,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 37777,
"s": 37742,
"text": "GET and POST requests using Python"
},
{
"code": null,
"e": 37830,
"s": 37777,
"text": "Must Do Coding Questions for Product Based Companies"
},
{
"code": null,
"e": 37873,
"s": 37830,
"text": "Practice for cracking any coding interview"
},
{
"code": null,
"e": 37935,
"s": 37873,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 37961,
"s": 37935,
"text": "Types of Software Testing"
},
{
"code": null,
"e": 37994,
"s": 37961,
"text": "Working with csv files in Python"
}
] |
Python - Evaluate Expression given in String - GeeksforGeeks | 27 Jul, 2021
Sometimes, while working with Python strings, we may receive an arithmetic computation in string format and need to compute its result. This can occur in domains related to mathematics and data processing. Let’s discuss certain ways in which this task can be performed.
Method #1 : Using regex + map() + sum(). The combination of the above functions can be used to solve this problem. In this approach, we extract each signed number with a regular expression, convert the matches with map(), and perform the computation with sum(). This method can be used only if the string contains just + or -; Method #2 can be used for other operations as well.
Python3
# Python3 code to demonstrate working of# Expression evaluation in String# Using regex + map() + sum()import re # initializing stringtest_str = "45 + 98-10" # printing original stringprint("The original string is : " + test_str) # Expression evaluation in String# Using regex + map() + sum()res = sum(map(int, re.findall(r'[+-]?\d+', test_str))) # printing resultprint("The evaluated result is : " + str(res))
The original string is : 45 + 98-10
The evaluated result is : 133
Method #2 : Using eval(). This is one of the ways in which this task can be performed. Here, eval() evaluates the string directly as a Python expression and returns the result.
Python3
# Python3 code to demonstrate working of# Expression evaluation in String# Using eval() # initializing stringtest_str = "45 + 98-10" # printing original stringprint("The original string is : " + test_str) # Expression evaluation in String# Using eval()res = eval(test_str) # printing resultprint("The evaluated result is : " + str(res))
The original string is : 45 + 98-10
The evaluated result is : 133
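One caveat worth adding: eval() executes arbitrary Python, so it should never be applied to untrusted input. Below is a hedged sketch of a safer evaluator, restricted to numeric +/- expressions via the standard ast module (the function name safe_eval is illustrative, not from the article):

```python
# Safer alternative to eval() for simple +/- expressions.
# Parses the string into an AST and only permits numeric
# literals combined with addition and subtraction.
import ast

def safe_eval(expr):
    node = ast.parse(expr, mode='eval').body

    def walk(n):
        if isinstance(n, ast.Constant) and isinstance(n.value, (int, float)):
            return n.value
        if isinstance(n, ast.BinOp) and isinstance(n.op, (ast.Add, ast.Sub)):
            left, right = walk(n.left), walk(n.right)
            return left + right if isinstance(n.op, ast.Add) else left - right
        if isinstance(n, ast.UnaryOp) and isinstance(n.op, ast.USub):
            return -walk(n.operand)
        raise ValueError("unsupported expression")

    return walk(node)

print(safe_eval("45 + 98-10"))   # 133
```

Anything outside that whitelist (function calls, attribute access, names) raises ValueError instead of being executed.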
asbrakavi
sagar0719kumar
[
{
"code": null,
"e": 25935,
"s": 25907,
"text": "\n27 Jul, 2021"
},
{
"code": null,
"e": 26189,
"s": 25935,
"text": "Sometimes, while working with Python Strings, we can have certain computations in string format and we need to formulate its result. This can occur in domains related to Mathematics and data. Let’s discuss certain ways in which we can perform this task."
},
{
"code": null,
"e": 26513,
"s": 26189,
"text": "Method #1 : Using regex + map() + sum() The combination of above functions can be used to solve this problem. In this, we perform the task of computation using sum() and mapping of operator and operation using map(). This method can be used if the string has only + or -. Method #2 can be used for other operations as well."
},
{
"code": null,
"e": 26521,
"s": 26513,
"text": "Python3"
},
{
"code": "# Python3 code to demonstrate working of# Expression evaluation in String# Using regex + map() + sum()import re # initializing stringtest_str = \"45 + 98-10\" # printing original stringprint(\"The original string is : \" + test_str) # Expression evaluation in String# Using regex + map() + sum()res = sum(map(int, re.findall(r'[+-]?\\d+', test_str))) # printing resultprint(\"The evaluated result is : \" + str(res))",
"e": 26931,
"s": 26521,
"text": null
},
{
"code": null,
"e": 26995,
"s": 26931,
"text": "The original string is : 45+98-10\nThe evaluated result is : 133"
},
{
"code": null,
"e": 27140,
"s": 26997,
"text": "Method #2 : Using eval() This is one of the way in which this task can be performed. In this, we perform computation internally using eval(). "
},
{
"code": null,
"e": 27148,
"s": 27140,
"text": "Python3"
},
{
"code": "# Python3 code to demonstrate working of# Expression evaluation in String# Using eval() # initializing stringtest_str = \"45 + 98-10\" # printing original stringprint(\"The original string is : \" + test_str) # Expression evaluation in String# Using eval()res = eval(test_str) # printing resultprint(\"The evaluated result is : \" + str(res))",
"e": 27485,
"s": 27148,
"text": null
},
{
"code": null,
"e": 27549,
"s": 27485,
"text": "The original string is : 45+98-10\nThe evaluated result is : 133"
},
{
"code": null,
"e": 27561,
"s": 27551,
"text": "asbrakavi"
},
{
"code": null,
"e": 27576,
"s": 27561,
"text": "sagar0719kumar"
},
{
"code": null,
"e": 27599,
"s": 27576,
"text": "Python string-programs"
},
{
"code": null,
"e": 27606,
"s": 27599,
"text": "Python"
},
{
"code": null,
"e": 27622,
"s": 27606,
"text": "Python Programs"
},
{
"code": null,
"e": 27720,
"s": 27622,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27738,
"s": 27720,
"text": "Python Dictionary"
},
{
"code": null,
"e": 27770,
"s": 27738,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27792,
"s": 27770,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 27834,
"s": 27792,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 27860,
"s": 27834,
"text": "Python String | replace()"
},
{
"code": null,
"e": 27903,
"s": 27860,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 27925,
"s": 27903,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 27964,
"s": 27925,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 28010,
"s": 27964,
"text": "Python | Split string into list of characters"
}
] |
C++ Program to Implement Sorted Singly Linked List | In data structures, a linked list is a linear collection of data elements. Each element, or node, of a list comprises two items - the data and a reference to the next node. The last node holds a reference to null. The entry point into a linked list is called the head of the list.
In a singly linked list, each node stores its contents and a pointer or reference to the next node in the list. A singly linked list does not store any pointer or reference to the previous node.
Below, we develop a C++ program to implement a sorted singly linked list.
Begin
function createnode() to insert node in the list:
It checks whether the list is empty or not.
If the list is empty put the node as first element and update head.
Initialize the next pointer with NULL.
If list is not empty,
It creates a newnode and inserts the number in the data field of the newnode.
Now the newnode will be inserted in such a way that linked list will remain sorted.
If it gets inserted at the last, then the newnode points to NULL.
If the newnode inserted at the first, then the linked list starts from there.
End
Begin
function display() to print the list content having n number of nodes:
Initialize c = 0.
Initialize pointer variable with the start address
while (c <= n)
Print the node info
Update pointer variable
Increment c.
End
#include<iostream>
using namespace std;
struct nod {
int d;
nod *n;
}
*p = NULL, *head = NULL, *q = NULL, *np = NULL;
int c = 0;
void createnode(int n) {
np = new nod;
np->d = n;
np->n = NULL;
if (c == 0) {
head = np;
p = head;
p->n = head;
c++;
} else if (c == 1) {
p = head;
q = p;
if (np->d < p->d) {
np->n = p;
head = np;
p->n = np;
} else if (np->d > p->d) {
p->n = np;
np->n = head;
}
c++;
} else {
p = head;
q = p;
if (np->d < p->d) {
np->n = p;
head = np;
do {
p = p->n;
}
while (p->n != q);
p->n = head;
} else if (np->d > p->d) {
while (p->n != head && q->d < np->d) {
q = p;
p = p->n;
if (p->n == head) {
p->n = np;
np->n = head;
} else if (np->d< p->d) {
q->n = np;
np->n = p;
break;
}
}
}
}
}
void display(int i) {
nod *t = head;
int c = 0;
while (c <= i ) {
cout<<t->d<<"\t";
t = t->n;
c++;
}
}
int main() {
int i = 0, n, a;
cout<<"enter the no of nodes\n";
cin>>n;
while (i < n) {
cout<<"\nenter value of node\n";
cin>>a;
createnode(a);
i++;
}
cout<<"sorted singly link list"<<endl;
display(n);
}
enter the no of nodes
5
enter value of node
6
enter value of node
4
enter value of node
7
enter value of node
3
enter value of node
2
sorted singly link list
2 3 4 6 7 2 | [
{
"code": null,
"e": 1343,
"s": 1062,
"text": "In data structure, Linked List is a linear collection of data elements. Each element or node of a list is comprising of two items - the data and a reference to the next node. The last node has a reference to null. Into a linked list the entry point is called the head of the list."
},
{
"code": null,
"e": 1548,
"s": 1343,
"text": "Each node in the list stores the contents and a pointer or reference to the next node in the list, in a singly linked list. Singly linked list does not store any pointer or reference to the previous node."
},
{
"code": null,
"e": 1613,
"s": 1548,
"text": "Developing a C++ program to implement sorted singly linked list."
},
{
"code": null,
"e": 2473,
"s": 1613,
"text": "Begin\n function createnode() to insert node in the list:\n It checks whether the list is empty or not.\n If the list is empty put the node as first element and update head.\n Initialize the next pointer with NULL.\n If list is not empty,\n It creates a newnode and inserts the number in the data field of the newnode.\n Now the newnode will be inserted in such a way that linked list will remain sorted.\n If it gets inserted at the last, then the newnode points to NULL.\n If the newnode inserted at the first, then the linked list starts from there.\nEnd\nBegin\n function display() to print the list content having n number of nodes:\n Initialize c = 0.\n Initialize pointer variable with the start address\n while (c <= n)\n Print the node info\n Update pointer variable\n Increment c.\nEnd"
},
{
"code": null,
"e": 3944,
"s": 2473,
"text": "#include<iostream>\nusing namespace std;\nstruct nod {\n int d;\n nod *n;\n}\n*p = NULL, *head = NULL, *q = NULL, *np = NULL;\nint c = 0;\nvoid createnode(int n) {\n np = new nod;\n np->d = n;\n np->n = NULL;\n if (c == 0) {\n head = np;\n p = head;\n p->n = head;\n c++;\n } else if (c == 1) {\n p = head;\n q = p;\n if (np->d < p->d) {\n np->n = p;\n head = np;\n p->n = np;\n } else if (np->d > p->d) {\n p->n = np;\n np->n = head;\n }\n c++;\n } else {\n p = head;\n q = p;\n if (np->d < p->d) {\n np->n = p;\n head = np;\n do {\n p = p->n;\n }\n while (p->n != q);\n p->n = head;\n } else if (np->d > p->d) {\n while (p->n != head && q->d < np->d) {\n q = p;\n p = p->n;\n if (p->n == head) {\n p->n = np;\n np->n = head;\n } else if (np->d< p->d) {\n q->n = np;\n np->n = p;\n break;\n }\n }\n }\n }\n}\nvoid display(int i) {\n nod *t = head;\n int c = 0;\n while (c <= i ) {\n cout<<t->d<<\"\\t\";\n t = t->n;\n c++;\n }\n}\nint main() {\n int i = 0, n, a;\n cout<<\"enter the no of nodes\\n\";\n cin>>n;\n while (i < n) {\n cout<<\"\\nenter value of node\\n\";\n cin>>a;\n createnode(a);\n i++;\n }\n cout<<\"sorted singly link list\"<<endl;\n display(n);\n}"
},
{
"code": null,
"e": 4114,
"s": 3944,
"text": "enter the no of nodes\n5\nenter value of node\n6\nenter value of node\n4\nenter value of node\n7\nenter value of node\n3\nenter value of node\n2\nsorted singly link list\n2 3 4 6 7 2"
}
] |
How to Design Homepage like Facebook using HTML and CSS ? - GeeksforGeeks | 07 Oct, 2020
HTML: HTML stands for HyperText Markup Language. It is used to design web pages using a markup language. HTML is the combination of Hypertext and Markup language. Hypertext defines the links between web pages. A markup language is used to define the text document within tags that define the structure of web pages.
CSS: Cascading Style Sheets, fondly referred to as CSS, is a simply designed language intended to simplify the process of making web pages presentable. CSS allows you to apply styles to web pages. More importantly, CSS enables you to do this independent of the HTML that makes up each web page.
Below is the source code for constructing a Facebook-like homepage:
You can see the Github link to download the complete code of this article.
HTML section: File name is homepage1.html
HTML
<!DOCTYPE html><html> <head> <meta charset="UTF-8"> <meta name="viewport" content= "width=device-width, initial-scale=1.0"> <link rel="stylesheet" type="text/css" href="new.css" media="screen" /></head> <body> <div class=" header1"> <div id="name" class="header1"> OLD MASTER </div> <div id="searcharea" class="header1"> <input placeholder="search here...." type="text" id="searchbox" /> </div> <div id="profilearea" class="header1">Profile</div> <div id="profilearea1" class="header1">|</div> <div id="profilearea2" class="header1">Home</div> </div> <div class="sidenav"> <div class="bodyn"> <div id="side1" class="bodyn">Profile</div> <div id="side2" class="bodyn">edit profile</div> <div id="side3" class="bodyn">News feed</div> <div id="side4" class="bodyn">Messages</div> <div id="side5" class="bodyn">Events</div> <div id="side6" class="bodyn">PAGES</div> <div id="side7" class="bodyn">Pages feed</div> <div id="side8" class="bodyn">Like pages</div> <div id="side9" class="bodyn">Create page</div> <div id="side10" class="bodyn">Create ad</div> <div id="side11" class="bodyn">GROUPS</div> <div id="side12" class="bodyn">New groups</div> <div id="side13" class="bodyn">Create group</div> <div id="side14" class="bodyn">APPS</div> <div id="side15" class="bodyn">Games</div> <div id="side16" class="bodyn">On this day</div> <div id="side17" class="bodyn">Games feed</div> <div id="side18" class="bodyn">FRIENDS</div> <div id="side19" class="bodyn">Close friends</div> <div id="side20" class="bodyn">Family</div> <div id="side21" class="bodyn">INTERESTS</div> <div id="side22" class="bodyn">Pages and public</div> <div id="side23" class="bodyn">EVENTS</div> <div id="side24" class="bodyn">Create event</div> </div> </div> <div class="post00"></div> <div class="post10"></div> <div class="header0001"></div> <div class="sideboxxx"></div> <div class="sideboxxxx2"></div> <div class="post"> <div id="column-1" class="post"> update status | add photos/ videos | create photo album 
<hr><br><br><br><br><br><br> <hr> </div> <div id="postpos" class="post"> <input type="submit" id="buttonpost" value="post" /> </div> <div id="postboxpos" class="post"> <textarea placeholder="What's in your mind" id="postbox"> </textarea> </div> </div> <div class="post1"> <img src="mini1.png" alt="image is here" height="40" width="40" /><br> <img src="mini......png" alt="image is here" height="400" width="575" /><br><br> <p6>Like Comment Share</p6><br> <hr> <p1>Amit Deb</p1> <p2> and</p2> <p1> 5 others</p1> <p2> like this</p2> <div id="post2text" class="post1"> <p3>Rani Mukharji </p3> <p2>with </p2> <p1> Arup Pegu</p1> <p2> and</p2> <p1> 15 others.</p1><br> <p4>4 hrs.</p4> </div> <div id="commentprof2" class="post1"> <img src="mini1.png" alt="image is here" height="25" width="25" id="profpic" /> </div> <div id="commentboxpos2" class="post1"> <input type="text" placeholder="comment" id="commentbox" /> </div> </div> <div class="sidebox"> <div id="sidebox1" class="sidebox"> <div id="sideboxx1">YOUR PAGES</div> <hr><br><br>See all <hr> <div id="sideboxx2"> This Week </div><br><br>See more <hr> <div id="sideboxx3"> Recent Posts </div><br><br>See more <hr> <div id="sideboxx4"> You haven't posted in this days </div><br><br><br>See all </div> <div id="post1pos" class="sidebox"> <input type="submit" id="buttonpost1" value="write a post" /> </div> </div> <div class="sideboxxx2"> <div id="sidebox2" class="sideboxxx2"> <hr> <div id="sideboxx21">Trending</div> <br><br><br>See more <hr> <div id="sideboxx22"> Suggested Pages </div><br><br><br>See all <hr> <div id="sideboxx23"> People you may know </div><br><br><br><br>See all </div> </div></body> </html>
CSS section: File name is new.css
Download the HTML, CSS and images files from Github and save all files in a folder and run homepage1.html file. It will display the result.
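The article does not inline the contents of new.css (it is downloaded from GitHub, as noted above). As a rough illustration only, the fixed header and side navigation could be styled along the following lines; the selectors match class and id names in the HTML above, but every property value here is an assumption rather than the actual stylesheet:

```css
/* Illustrative fragment only, not the actual new.css.
   Selectors follow the HTML above; all values are assumptions. */
.header1 {
  position: fixed;           /* keep the bar pinned at the top */
  top: 0;
  left: 0;
  width: 100%;
  height: 45px;
  background-color: #3b5998; /* classic Facebook blue */
  color: #fff;
}

#searchbox {
  width: 400px;
  border: none;
  border-radius: 4px;
  padding: 4px 8px;
}

.sidenav {
  position: fixed;
  top: 45px;                 /* sits just below the fixed header */
  left: 0;
  width: 180px;
}
```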
Output:
[
{
"code": null,
"e": 24644,
"s": 24616,
"text": "\n07 Oct, 2020"
},
{
"code": null,
"e": 24966,
"s": 24644,
"text": "HTML: HTML stands for Hyper Text Markup Language. It is used to design web pages using a markup language. HTML is the combination of Hypertext and Markup language. Hypertext defines the link between the web pages. A markup language is used to define the text document within tag which defines the structure of web pages."
},
{
"code": null,
"e": 25261,
"s": 24966,
"text": "CSS: Cascading Style Sheets, fondly referred to as CSS, is a simply designed language intended to simplify the process of making web pages presentable. CSS allows you to apply styles to web pages. More importantly, CSS enables you to do this independent of the HTML that makes up each web page."
},
{
"code": null,
"e": 25331,
"s": 25261,
"text": "Below is the source code for construction of Facebook like Homepage: "
},
{
"code": null,
"e": 25408,
"s": 25331,
"text": "You can see the Github link to download the complete code of this article. "
},
{
"code": null,
"e": 25451,
"s": 25408,
"text": "HTML section: File name is homepage1.html "
},
{
"code": null,
"e": 25456,
"s": 25451,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <meta charset=\"UTF-8\"> <meta name=\"viewport\" content= \"width=device-width, initial-scale=1.0\"> <link rel=\"stylesheet\" type=\"text/css\" href=\"new.css\" media=\"screen\" /></head> <body> <div class=\" header1\"> <div id=\"name\" class=\"header1\"> OLD MASTER </div> <div id=\"searcharea\" class=\"header1\"> <input placeholder=\"search here....\" type=\"text\" id=\"searchbox\" /> </div> <div id=\"profilearea\" class=\"header1\">Profile</div> <div id=\"profilearea1\" class=\"header1\">|</div> <div id=\"profilearea2\" class=\"header1\">Home</div> </div> <div class=\"sidenav\"> <div class=\"bodyn\"> <div id=\"side1\" class=\"bodyn\">Profile</div> <div id=\"side2\" class=\"bodyn\">edit profile</div> <div id=\"side3\" class=\"bodyn\">News feed</div> <div id=\"side4\" class=\"bodyn\">Messages</div> <div id=\"side5\" class=\"bodyn\">Events</div> <div id=\"side6\" class=\"bodyn\">PAGES</div> <div id=\"side7\" class=\"bodyn\">Pages feed</div> <div id=\"side8\" class=\"bodyn\">Like pages</div> <div id=\"side9\" class=\"bodyn\">Create page</div> <div id=\"side10\" class=\"bodyn\">Create ad</div> <div id=\"side11\" class=\"bodyn\">GROUPS</div> <div id=\"side12\" class=\"bodyn\">New groups</div> <div id=\"side13\" class=\"bodyn\">Create group</div> <div id=\"side14\" class=\"bodyn\">APPS</div> <div id=\"side15\" class=\"bodyn\">Games</div> <div id=\"side16\" class=\"bodyn\">On this day</div> <div id=\"side17\" class=\"bodyn\">Games feed</div> <div id=\"side18\" class=\"bodyn\">FRIENDS</div> <div id=\"side19\" class=\"bodyn\">Close friends</div> <div id=\"side20\" class=\"bodyn\">Family</div> <div id=\"side21\" class=\"bodyn\">INTERESTS</div> <div id=\"side22\" class=\"bodyn\">Pages and public</div> <div id=\"side23\" class=\"bodyn\">EVENTS</div> <div id=\"side24\" class=\"bodyn\">Create event</div> </div> </div> <div class=\"post00\"></div> <div class=\"post10\"></div> <div class=\"header0001\"></div> <div class=\"sideboxxx\"></div> 
<div class=\"sideboxxxx2\"></div> <div class=\"post\"> <div id=\"column-1\" class=\"post\"> update status | add photos/ videos | create photo album <hr><br><br><br><br><br><br> <hr> </div> <div id=\"postpos\" class=\"post\"> <input type=\"submit\" id=\"buttonpost\" value=\"post\" /> </div> <div id=\"postboxpos\" class=\"post\"> <textarea placeholder=\"What's in your mind\" id=\"postbox\"> </textarea> </div> </div> <div class=\"post1\"> <img src=\"mini1.png\" alt=\"image is here\" height=\"40\" width=\"40\" /><br> <img src=\"mini......png\" alt=\"image is here\" height=\"400\" width=\"575\" /><br><br> <p6>Like Comment Share</p6><br> <hr> <p1>Amit Deb</p1> <p2> and</p2> <p1> 5 others</p1> <p2> like this</p2> <div id=\"post2text\" class=\"post1\"> <p3>Rani Mukharji </p3> <p2>with </p2> <p1> Arup Pegu</p1> <p2> and</p2> <p1> 15 others.</p1><br> <p4>4 hrs.</p4> </div> <div id=\"commentprof2\" class=\"post1\"> <img src=\"mini1.png\" alt=\"image is here\" height=\"25\" width=\"25\" id=\"profpic\" /> </div> <div id=\"commentboxpos2\" class=\"post1\"> <input type=\"text\" placeholder=\"comment\" id=\"commentbox\" /> </div> </div> <div class=\"sidebox\"> <div id=\"sidebox1\" class=\"sidebox\"> <div id=\"sideboxx1\">YOUR PAGES</div> <hr><br><br>See all <hr> <div id=\"sideboxx2\"> This Week </div><br><br>See more <hr> <div id=\"sideboxx3\"> Recent Posts </div><br><br>See more <hr> <div id=\"sideboxx4\"> You haven't posted in this days </div><br><br><br>See all </div> <div id=\"post1pos\" class=\"sidebox\"> <input type=\"submit\" id=\"buttonpost1\" value=\"write a post\" /> </div> </div> <div class=\"sideboxxx2\"> <div id=\"sidebox2\" class=\"sideboxxx2\"> <hr> <div id=\"sideboxx21\">Trending</div> <br><br><br>See more <hr> <div id=\"sideboxx22\"> Suggested Pages </div><br><br><br>See all <hr> <div id=\"sideboxx23\"> People you may know </div><br><br><br><br>See all </div> </div></body> </html>",
"e": 30290,
"s": 25456,
"text": null
},
{
"code": null,
"e": 30328,
"s": 30294,
"text": "CSS section: File name is new.css"
},
{
"code": null,
"e": 30468,
"s": 30328,
"text": "Download the HTML, CSS and images files from Github and save all files in a folder and run homepage1.html file. It will display the result."
},
{
"code": null,
"e": 30476,
"s": 30468,
"text": "Output:"
},
{
"code": null,
"e": 30613,
"s": 30476,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 30622,
"s": 30613,
"text": "CSS-Misc"
},
{
"code": null,
"e": 30632,
"s": 30622,
"text": "HTML-Misc"
},
{
"code": null,
"e": 30636,
"s": 30632,
"text": "CSS"
},
{
"code": null,
"e": 30641,
"s": 30636,
"text": "HTML"
},
{
"code": null,
"e": 30658,
"s": 30641,
"text": "Web Technologies"
},
{
"code": null,
"e": 30685,
"s": 30658,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 30690,
"s": 30685,
"text": "HTML"
},
{
"code": null,
"e": 30788,
"s": 30690,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30797,
"s": 30788,
"text": "Comments"
},
{
"code": null,
"e": 30810,
"s": 30797,
"text": "Old Comments"
},
{
"code": null,
"e": 30851,
"s": 30810,
"text": "Create a Responsive Navbar using ReactJS"
},
{
"code": null,
"e": 30888,
"s": 30851,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 30936,
"s": 30888,
"text": "How to set div width to fit content using CSS ?"
},
{
"code": null,
"e": 30981,
"s": 30936,
"text": "How to set fixed width for <td> in a table ?"
},
{
"code": null,
"e": 31036,
"s": 30981,
"text": "How to apply style to parent if it has child with CSS?"
},
{
"code": null,
"e": 31096,
"s": 31036,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 31157,
"s": 31096,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 31207,
"s": 31157,
"text": "How to Insert Form Data into Database using PHP ?"
},
{
"code": null,
"e": 31260,
"s": 31207,
"text": "Hide or show elements in HTML using display property"
}
] |
Automation of Invoice Processing using RPA - GeeksforGeeks | 13 Apr, 2021
In this article, we will learn how to make a simple project on Automation of Invoice Processing using RPA in UiPath Studio. It is a simple application of Robotic Process Automation: invoices are downloaded as PDFs from the desired email address; from each invoice, specific information such as email, name, due date, and balance is extracted and stored in an Excel sheet; the data is then laid out on a template in a specific format and sent to the email addresses mentioned in the invoices.
Take a note of the following points:
For this, we are using 5 invoices of a specific format as shown below. You can create these invoices for free using www.invoicely.com. Send these invoices to the email id from which mails will be sent to customers.
Create an Excel file “invoicedata” with entries in the first row as shown below:
Create an Excel file “template” with entries as shown below:
Adobe Acrobat Reader free version (pdf viewer) is used for opening invoices.
To start implementing the Automation of Invoice Processing using RPA in UiPath Studio follow the below steps:
Step 1: Open UiPath Studio and create a new process by clicking on the Process tab.
Step 2: Set the name of the process and give a short description then click on Create.
The UiPath Studio will automatically load and add all the dependencies of the project. The design page will open; click on OPEN MAIN WORKFLOW.
Step 3: Now in the activities panel search for Flowchart activity. Drag and drop it in the designer window.
Step 4: Now in the activities panel search for Sequence activity. Drag and drop it in the designer window.
Change the name of the sequence to ‘Email Attachments’ in the properties section.
Double-click on it to add more activities.
Step 5: Now in the activities panel search for Input Dialog Box activity. Drag and drop it in the Email Attachments sequence.
Double-click on it to enter details. In the Value entered field you need to create a variable ‘Email’ to store the email id entered by the user. Make sure to change the scope of this variable to that of the flowchart.
Step 6: Now again in the activities panel search for Input Dialog Box activity. Drag and drop it in the Email Attachments sequence.
Double-click on it to enter details. In the Value entered field you need to create a variable ‘Password’ to store the password entered by the user. Make sure to change the scope of this variable to that of the flowchart and tick mark the ‘IsPassword‘ field in the properties section.
Step 7: Now in the activities panel search for the “Get IMAP Mail Messages” activity. Drag and drop it in the Email Attachments sequence.
Click on it and enter details in the properties section as shown in the image below. You need to create a variable ‘MailMessages‘ in the output field.
Step 8: Now in the activities panel search for ‘For Each’ activity. Drag and drop it in the Email Attachments sequence.
Write mail in place of the item and in the next field pass the MailMessages variable as shown in the image below. Make sure to change the TypeArgument in the properties panel to ‘System.Net.Mail.MailMessage‘.
Step 9: Now in the activities panel search for ‘if’ activity. Drag and drop it in the Body.
Create a variable ‘invoice’ and set the variable type to ‘string’. Specify the if condition as shown below.
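The if-condition itself is only shown in a screenshot; typically it checks whether the mail's subject mentions an invoice (the keyword here is an assumption). Outside UiPath, the same retrieve-and-filter step could be sketched in Python with the standard imaplib module — a rough sketch, with server name and search criteria chosen for illustration:

```python
import imaplib

def is_invoice(subject: str) -> bool:
    # Mirrors the tutorial's 'if' check: keep only mails whose
    # subject mentions an invoice (keyword is an assumption).
    return "invoice" in subject.lower()

def fetch_invoice_mail_ids(email: str, password: str):
    # Hedged outline of the "Get IMAP Mail Messages" step;
    # requires network access and valid credentials to run.
    conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    conn.login(email, password)
    conn.select("INBOX")
    _, data = conn.search(None, '(SUBJECT "invoice")')
    return data[0].split()
```

The filtering function is the part that corresponds to this step's if-condition; the IMAP outline corresponds to step 7.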
Step 10: Add a sequence in the Then section. Search for the Save attachments activity and drop it in this sequence in the Then section.
Add details as shown below.
Step 11: Now add a sequence in the else section and inside that sequence add ‘create folder’ activity.
Now pass the location for creating the folder as shown below.
Step 12: Now add save attachments activity below create folder activity and enter details as shown below.
If you follow the same steps then your body section will look like this.
And your Email Attachments sequence will look like this.
Step 13: Add a sequence to the flowchart and change the display name to PDF Extraction.
Double-click on it to add activities inside it.
Step 14: Now in the activities panel search for ‘assign’ activity. Drag and drop it in the PDF Extraction sequence.
Create a variable ‘PDFfiles‘ and change the variable type to an array of strings.
Assign the location of the invoice folder that we have created in the Email Attachments sequence to PDFfiles variable. Example: Directory.GetFiles(“C:\Users\Ravi Yadav\Desktop\invoice”)
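The Directory.GetFiles expression above is a VB.NET call; an equivalent file listing can be sketched in Python (shown with a temporary directory so the sketch is self-contained, and restricted to *.pdf for clarity — the original call returns all files in the folder):

```python
import glob
import os
import tempfile

# Stand-in for the 'invoice' folder the workflow created earlier.
folder = tempfile.mkdtemp()
for name in ("inv1.pdf", "inv2.pdf", "notes.txt"):
    open(os.path.join(folder, name), "w").close()

# Equivalent of Directory.GetFiles(...), restricted to PDFs.
pdf_files = sorted(glob.glob(os.path.join(folder, "*.pdf")))
print([os.path.basename(p) for p in pdf_files])  # ['inv1.pdf', 'inv2.pdf']
```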
Step 15: Now in the activities panel search for the ‘read range‘ activity of the workbook. Drag and drop it in the PDF Extraction sequence.
Pass the location of the ‘invoicedata‘ Excel file in the read range activity.
Under the output section in properties, the panel creates a variable ‘invoicedata‘.
Step 16: Add for each activity and enter values as ‘ForEach file in PDFfiles’ as shown below.
Step 17: Now in the activities panel search for ‘start process’ activity. Drag and drop it in the body.
Enter value – file.ToString in it.
Step 18: Now in the activities panel search for ‘attach window’ activity. Drag and drop it in the body.
Make sure to open any one of the invoices on the background screen so that you can capture the window when you click on ‘indicate window on screen’.
Step 19: Now in the activities panel search for ‘send hotkey’ activity. Drag and drop it in the do section.
Select the ‘ctrl‘ option and write the value ‘num1’ as shown below.
Step 20: Now in the activities panel search for ‘anchor base’ activity. Drag and drop it in the do section.
Step 21: Now in the activities panel search for ‘find element’ activity. Drag and drop it in the ‘anchor‘ part of anchor base section.
Step 22: Now in the activities panel search for ‘get text’ activity. Drag and drop it in the ‘drop action activity here’ part of anchor base section.
Your AnchorBase will look like this.
Now in the Find Element part, indicate the keyword E-mail id written in the invoice by selecting it and in the Get Text part, indicate the email address corresponding to the email id. In the properties panel of the Get Text part, create a variable ‘emailId’ with the type ‘generic value’ under the output tag.
Add another anchorBase activity with find element and get the text as explained above.
Now in the Find Element part, indicate the keyword Bill To written in the invoice by selecting it, and in Get Text part, indicate the Bill to name corresponding to the Bill to. In the properties panel of the Get text part, create a variable ‘BillTo‘ with the type ‘generic value’ under the output tag.
Add another anchorBase activity with find element and the Get text as explained above.
Now in the Find Element part, indicate the keyword Due date written in the invoice by selecting it and in the Get Text part, indicate the due date corresponding to it. In the properties panel of the Get Text part, create a variable ‘DueDate’ with the type ‘generic value’ under the output tag.
Add another anchorBase activity with find element and get text as explained above.
Now in the Find Element part, indicate the keyword Balance written in the invoice by selecting it and in the Get Text part, indicate the balance corresponding to it. In the properties panel of the Get Text part, create a variable ‘Balance’ with the type ‘generic value’ under the output tag.
Step 23: Now in the activities panel search for ‘close application’ activity. Drag and drop it in the do section.
Indicate the invoice window in the close application activity.
Step 24: Now in the activities panel search for ‘add data row’ activity. Drag and drop it in the do section.
Click on this activity and then in the properties panel, supply the data table ‘invoicedata‘ and array row ‘{EmailId,BillTo,DueDate,Balance}’.
Step 25: Now in the activities panel search for ‘write range’ activity of workbook. Drag and drop it in the do section.
Pass the location of ‘invoicedata‘ Excel file in the workbook path and in the data table write ‘invoicedata‘ as shown below.
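Steps 24–25 append the four extracted values as a row and write the table back to the workbook. The same bookkeeping can be sketched in plain Python, using the csv module as a stand-in for the Excel file (column names follow the ‘invoicedata’ sheet used later; the cell values are illustrative):

```python
import csv
import io

# Header row of the 'invoicedata' sheet.
fieldnames = ["EMAIL", "BILL TO", "DUE DATE", "BALANCE DUE"]

# Values the Get Text activities would have produced (illustrative).
email_id, bill_to, due_date, balance = (
    "customer@example.com", "Arup Pegu", "25/04/2021", "$120.00",
)

# Equivalent of Add Data Row followed by Write Range.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow(dict(zip(fieldnames, (email_id, bill_to, due_date, balance))))
print(buf.getvalue().strip())
```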
Step 26: Now add assign activity in the PDFExtraction sequence (outside the for each block) and in the ‘To’ field create a variable ‘template’ with the variable type as shown below.
And in the expression field write – new dictionary(of string, object)
Step 27: Now add a read range activity and pass the location of the ‘template’ Excel file in the path as shown below.
Step 28: Now in the activities panel search for the ‘for each row’ activity. Drag and drop it in the do section.
Enter the values as for each ‘row’ in templateDT (create this variable for the ‘in’ field). In the body, add the ‘add to dictionary’ activity. For using this activity you need to install the Microsoft.Activities.Extensions package.
When you are adding this activity in the body a dialog box will appear asking for you to choose the types, choose a string, and object.
Enter the values as shown below.
Step 29: Now add read range activity and pass the location of the ‘invoicedata‘ Excel file in the path.
Step 30: Now add for each row activity and write the values as:
Step 31: In the body part, add the SMTP mail message activity. Enter values as:
To: row("EMAIL").ToString
Subject: convert.ToDateTime(DateTime.Now).ToShortDateString+"_"+template("Subject").ToString
Body: String.Format(template("Body").ToString,row("BILL TO").ToString,row("DUE DATE").ToString,
row("BALANCE DUE").ToString)
In the properties panel, enter values as:
Port: 587
Server: "smtp.gmail.com"
Email: Email
Password: Password
From: Email
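The Add To Dictionary loop (step 28) turns the two-column ‘template’ sheet into a key→value map, and the SMTP Subject and Body above fill that template with each row's values. The same assembly can be sketched in Python — the template text itself is illustrative, since the original sheet's contents are only shown in a screenshot:

```python
from datetime import date

# Rows of the 'template' sheet as (key, value) pairs -> dictionary,
# mirroring the Add To Dictionary loop in step 28 (contents assumed).
template_rows = [
    ("Subject", "Invoice reminder"),
    ("Body", "Dear {0}, your invoice is due on {1}. Balance due: {2}."),
]
template = {key: value for key, value in template_rows}

# One row of the 'invoicedata' sheet (illustrative values).
row = {"BILL TO": "Arup Pegu", "DUE DATE": "25/04/2021", "BALANCE DUE": "$120.00"}

# Equivalent of DateTime.Now.ToShortDateString + "_" + template("Subject");
# the exact format of ToShortDateString is locale-dependent.
subject = date.today().strftime("%d-%m-%Y") + "_" + template["Subject"]

# Equivalent of String.Format(template("Body"), row(...), ...).
body = template["Body"].format(row["BILL TO"], row["DUE DATE"], row["BALANCE DUE"])
print(subject)
print(body)

# Sending itself would use smtplib.SMTP("smtp.gmail.com", 587) with
# starttls() and login(Email, Password), matching the values above.
```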
Save the process using the Save button in the design panel and then click on Run. Your bot is ready!
Note: If you face any error in sending mail go to your Gmail account security settings and turn on LESS SECURE APP ACCESS.
You can check out the UiPath documentation on the official doc site.
Advanced Computer Subject
Project
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Copying Files to and from Docker Containers
Fuzzy Logic | Introduction
Basics of API Testing Using Postman
Q-Learning in Python
Markov Decision Process
SDE SHEET - A Complete Guide for SDE Preparation
Working with zip files in Python
XML parsing in Python
Python | Simple GUI calculator using Tkinter
Implementing Web Scraping in Python with BeautifulSoup | [
{
"code": null,
"e": 25583,
"s": 25555,
"text": "\n13 Apr, 2021"
},
{
"code": null,
"e": 26126,
"s": 25583,
"text": "In this article, we will learn how to make a simple project on Automation of Invoice Processing using RPA in UiPath Studio. This is a simple application of Robotic Process Automation in which invoices get downloaded in pdf formats from the desired email address, then from those invoices, specific information like email, name, due date, and balance is extracted and stored in an Excel sheet and then from that sheet the data is formatted in a specific way written on a template and then sent to the email addresses mentioned in the invoices."
},
{
"code": null,
"e": 26163,
"s": 26126,
"text": "Take a note of the following points:"
},
{
"code": null,
"e": 26388,
"s": 26163,
"text": "For this, we are using 5 invoices of a specific format as shown below. You can create these invoices by using www.invoicely.com for free. Send these invoices to your email id through which mails have to be sent to customers."
},
{
"code": null,
"e": 26470,
"s": 26388,
"text": "Create an Excel file “invoicedata” with entries in the first row as shown below :"
},
{
"code": null,
"e": 26531,
"s": 26470,
"text": "Create an Excel file “template” with entries as shown below:"
},
{
"code": null,
"e": 26608,
"s": 26531,
"text": "Adobe Acrobat Reader free version (pdf viewer) is used for opening invoices."
},
{
"code": null,
"e": 26718,
"s": 26608,
"text": "To start implementing the Automation of Invoice Processing using RPA in UiPath Studio follow the below steps:"
},
{
"code": null,
"e": 26806,
"s": 26718,
"text": "Step 1: Open the Uipath Studio and create a new process by clicking on the Process tab."
},
{
"code": null,
"e": 26893,
"s": 26806,
"text": "Step 2: Set the name of the process and give a short description then click on Create."
},
{
"code": null,
"e": 27043,
"s": 26893,
"text": " The Uipath studio will automatically load and add all the dependencies of the project. The design page will get opened, click on OPEN MAIN WORKFLOW."
},
{
"code": null,
"e": 27151,
"s": 27043,
"text": "Step 3: Now in the activities panel search for Flowchart activity. Drag and drop it in the designer window."
},
{
"code": null,
"e": 27258,
"s": 27151,
"text": "Step 4: Now in the activities panel search for Sequence activity. Drag and drop it in the designer window."
},
{
"code": null,
"e": 27340,
"s": 27258,
"text": "Change the name of the sequence to ‘Email Attachments’ in the properties section."
},
{
"code": null,
"e": 27383,
"s": 27340,
"text": "Double-click on it to add more activities."
},
{
"code": null,
"e": 27509,
"s": 27383,
"text": "Step 5: Now in the activities panel search for Input Dialog Box activity. Drag and drop it in the Email Attachments sequence."
},
{
"code": null,
"e": 27727,
"s": 27509,
"text": "Double-click on it to enter details. In the Value entered field you need to create a variable ‘Email’ to store the email id entered by the user. Make sure to change the scope of this variable to that of the flowchart."
},
{
"code": null,
"e": 27859,
"s": 27727,
"text": "Step 6: Now again in the activities panel search for Input Dialog Box activity. Drag and drop it in the Email Attachments sequence."
},
{
"code": null,
"e": 28143,
"s": 27859,
"text": "Double-click on it to enter details. In the Value entered field you need to create a variable ‘Password’ to store the password entered by the user. Make sure to change the scope of this variable to that of the flowchart and tick mark the ‘IsPassword‘ field in the properties section."
},
{
"code": null,
"e": 28281,
"s": 28143,
"text": "Step 7: Now in the activities panel search for the “Get IMAP Mail Messages” activity. Drag and drop it in the Email Attachments sequence."
},
{
"code": null,
"e": 28432,
"s": 28281,
"text": "Click on it and enter details in the properties section as shown in the image below. You need to create a variable ‘MailMessages‘ in the output field."
},
{
"code": null,
"e": 28552,
"s": 28432,
"text": "Step 8: Now in the activities panel search for ‘For Each’ activity. Drag and drop it in the Email Attachments sequence."
},
{
"code": null,
"e": 28761,
"s": 28552,
"text": "Write mail in place of the item and in the next field pass the MailMessages variable as shown in the image below. Make sure to change the TypeArgument in the properties panel to ‘System.Net.Mail.MailMessage‘."
},
{
"code": null,
"e": 28853,
"s": 28761,
"text": "Step 9: Now in the activities panel search for ‘if’ activity. Drag and drop it in the Body."
},
{
"code": null,
"e": 28961,
"s": 28853,
"text": "Create a variable ‘invoice’ and set the variable type to ‘string’. Specify the if condition as shown below."
},
{
"code": null,
"e": 29097,
"s": 28961,
"text": "Step 10: Add a sequence in the Then section. Search for the Save attachments activity and drop it in this sequence in the Then section."
},
{
"code": null,
"e": 29125,
"s": 29097,
"text": "Add details as shown below."
},
{
"code": null,
"e": 29228,
"s": 29125,
"text": "Step 11: Now add a sequence in the else section and inside that sequence add ‘create folder’ activity."
},
{
"code": null,
"e": 29290,
"s": 29228,
"text": "Now pass the location for creating the folder as shown below."
},
{
"code": null,
"e": 29396,
"s": 29290,
"text": "Step 12: Now add save attachments activity below create folder activity and enter details as shown below."
},
{
"code": null,
"e": 29469,
"s": 29396,
"text": "If you follow the same steps then your body section will look like this."
},
{
"code": null,
"e": 29526,
"s": 29469,
"text": "And your Email Attachments sequence will look like this."
},
{
"code": null,
"e": 29614,
"s": 29526,
"text": "Step 13: Add a sequence to the flowchart and change the display name to PDF Extraction."
},
{
"code": null,
"e": 29662,
"s": 29614,
"text": "Double-click on it to add activities inside it."
},
{
"code": null,
"e": 29778,
"s": 29662,
"text": "Step 14: Now in the activities panel search for ‘assign’ activity. Drag and drop it in the PDF Extraction sequence."
},
{
"code": null,
"e": 29860,
"s": 29778,
"text": "Create a variable ‘PDFfiles‘ and change the variable type to an array of strings."
},
{
"code": null,
"e": 30046,
"s": 29860,
"text": "Assign the location of the invoice folder that we have created in the Email Attachments sequence to PDFfiles variable. Example: Directory.GetFiles(“C:\\Users\\Ravi Yadav\\Desktop\\invoice”)"
},
{
"code": null,
"e": 30186,
"s": 30046,
"text": "Step 15: Now in the activities panel search for the ‘read range‘ activity of the workbook. Drag and drop it in the PDF Extraction sequence."
},
{
"code": null,
"e": 30264,
"s": 30186,
"text": "Pass the location of the ‘invoicedata‘ Excel file in the read range activity."
},
{
"code": null,
"e": 30348,
"s": 30264,
"text": "Under the output section in properties, the panel creates a variable ‘invoicedata‘."
},
{
"code": null,
"e": 30442,
"s": 30348,
"text": "Step 16: Add for each activity and enter values as ‘ForEach file in PDFfiles’ as shown below."
},
{
"code": null,
"e": 30547,
"s": 30442,
"text": "Step 17: Now in the activities panel search for ‘start process’ activity. Drag and drop it in the body."
},
{
"code": null,
"e": 30583,
"s": 30547,
"text": "Enter value – file.ToString in it."
},
{
"code": null,
"e": 30687,
"s": 30583,
"text": "Step 18: Now in the activities panel search for ‘attach window’ activity. Drag and drop it in the body."
},
{
"code": null,
"e": 30837,
"s": 30687,
"text": "Make sure to open any one of the invoices on the background screen so that you can capture the window when clicked upon ‘indicate window on screen’. "
},
{
"code": null,
"e": 30945,
"s": 30837,
"text": "Step 19: Now in the activities panel search for ‘send hotkey’ activity. Drag and drop it in the do section."
},
{
"code": null,
"e": 31013,
"s": 30945,
"text": "Select the ‘ctrl‘ option and write the value ‘num1’ as shown below."
},
{
"code": null,
"e": 31121,
"s": 31013,
"text": "Step 20: Now in the activities panel search for ‘anchor base’ activity. Drag and drop it in the do section."
},
{
"code": null,
"e": 31256,
"s": 31121,
"text": "Step 21: Now in the activities panel search for ‘find element’ activity. Drag and drop it in the ‘anchor‘ part of anchor base section."
},
{
"code": null,
"e": 31406,
"s": 31256,
"text": "Step 22: Now in the activities panel search for ‘get text’ activity. Drag and drop it in the ‘drop action activity here’ part of anchor base section."
},
{
"code": null,
"e": 31443,
"s": 31406,
"text": "Your AnchorBase will look like this."
},
{
"code": null,
"e": 31750,
"s": 31443,
"text": "Now in the Find Element part, indicate the keyword E-mail id written in the invoice by selecting it and in the Get Text part, indicate the email address corresponding to the email id. In the properties panel of get text part , create a variable ‘emailId‘ with the type ‘generic value’ under the output tag."
},
{
"code": null,
"e": 31837,
"s": 31750,
"text": "Add another anchorBase activity with find element and get the text as explained above."
},
{
"code": null,
"e": 32139,
"s": 31837,
"text": "Now in the Find Element part, indicate the keyword Bill To written in the invoice by selecting it, and in Get Text part, indicate the Bill to name corresponding to the Bill to. In the properties panel of the Get text part, create a variable ‘BillTo‘ with the type ‘generic value’ under the output tag."
},
{
"code": null,
"e": 32226,
"s": 32139,
"text": "Add another anchorBase activity with find element and the Get text as explained above."
},
{
"code": null,
"e": 32519,
"s": 32226,
"text": "Now in the Find Element part, indicate the keyword Due date written in the invoice by selecting it and in Get Text part, indicate the Due date corresponding to the due date. In the properties panel of get text part , create a variable ‘DueDate’ with the type ‘generic value’ under output tag."
},
{
"code": null,
"e": 32602,
"s": 32519,
"text": "Add another anchorBase activity with find element and get text as explained above."
},
{
"code": null,
"e": 32893,
"s": 32602,
"text": "Now in Find Element part, indicate the keyword Balance written in the invoice by selecting it and in Get Text part, indicate the balance corresponding to the balance. In the properties panel of get text part , create a variable ‘Balance’ with the type ‘generic value’ under the output tag."
},
{
"code": null,
"e": 33007,
"s": 32893,
"text": "Step 23: Now in the activities panel search for ‘close application’ activity. Drag and drop it in the do section."
},
{
"code": null,
"e": 33070,
"s": 33007,
"text": "Indicate the invoice window in the close application activity."
},
{
"code": null,
"e": 33179,
"s": 33070,
"text": "Step 24: Now in the activities panel search for ‘add data row’ activity. Drag and drop it in the do section."
},
{
"code": null,
"e": 33322,
"s": 33179,
"text": "Click on this activity and then in the properties panel, supply the data table ‘invoicedata‘ and array row ‘{EmailId,BillTo,DueDate,Balance}’."
},
{
"code": null,
"e": 33442,
"s": 33322,
"text": "Step 25: Now in the activities panel search for ‘write range’ activity of workbook. Drag and drop it in the do section."
},
{
"code": null,
"e": 33567,
"s": 33442,
"text": "Pass the location of ‘invoicedata‘ Excel file in the workbook path and in the data table write ‘invoicedata‘ as shown below."
},
{
"code": null,
"e": 33745,
"s": 33567,
"text": "Step 25: Now add assign activity in the PDFExtraction sequence (outside the for each block) and in the ‘To’ field create a variable ‘template’ with variable type as shown below."
},
{
"code": null,
"e": 33815,
"s": 33745,
"text": "And in the expression field write – new dictionary(of string, object)"
},
{
"code": null,
"e": 33933,
"s": 33815,
"text": "Step 27: Now add a read range activity and pass the location of the ‘template’ Excel file in the path as shown below."
},
{
"code": null,
"e": 34043,
"s": 33933,
"text": "Step 28: Now in the activities panel search for ‘for each row’ activity . Drag and drop it in the do section."
},
{
"code": null,
"e": 34256,
"s": 34043,
"text": "Enter values as for each ‘row’ in templateDT(create this variable for ‘in’ field).In the body add ‘add to dictionary’ activity. For using this activity you need to install Microsoft.Activities.Extensions package."
},
{
"code": null,
"e": 34392,
"s": 34256,
"text": "When you are adding this activity in the body a dialog box will appear asking for you to choose the types, choose a string, and object."
},
{
"code": null,
"e": 34425,
"s": 34392,
"text": "Enter the values as shown below."
},
{
"code": null,
"e": 34529,
"s": 34425,
"text": "Step 29: Now add read range activity and pass the location of the ‘invoicedata‘ Excel file in the path."
},
{
"code": null,
"e": 34590,
"s": 34529,
"text": "Step 30: Now add for each row activity and write values as :"
},
{
"code": null,
"e": 34665,
"s": 34590,
"text": "Step 31: In the body part, add SMTP mail message activity.Enter values as:"
},
{
"code": null,
"e": 34918,
"s": 34665,
"text": "To: row(\"EMAIL\").ToString\n\nSubject: convert.ToDateTime(DateTime.Now).ToShortDateString+\"_\"+template(\"Subject\").ToString\n \nBody: String.Format(template(\"Body\").ToString,row(\"BILL TO\").ToString,row(\"DUE DATE\").ToString,\n row(\"BALANCE DUE\").ToString)"
},
{
"code": null,
"e": 34960,
"s": 34918,
"text": "In the properties panel, enter values as:"
},
{
"code": null,
"e": 35039,
"s": 34960,
"text": "Port: 587\nServer: \"smtp.gmail.com\"\nEmail: Email\nPassword: Password\nFrom: Email"
},
{
"code": null,
"e": 35143,
"s": 35039,
"text": "Save the process using the Save button in the design panel and then click on Run. Your bot is ready!!!!"
},
{
"code": null,
"e": 35266,
"s": 35143,
"text": "Note: If you face any error in sending mail go to your Gmail account security settings and turn on LESS SECURE APP ACCESS."
},
{
"code": null,
"e": 35339,
"s": 35266,
"text": "You can check out the documentation of uipath on the official doc site. "
},
{
"code": null,
"e": 35365,
"s": 35339,
"text": "Advanced Computer Subject"
},
{
"code": null,
"e": 35373,
"s": 35365,
"text": "Project"
},
{
"code": null,
"e": 35471,
"s": 35373,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35515,
"s": 35471,
"text": "Copying Files to and from Docker Containers"
},
{
"code": null,
"e": 35542,
"s": 35515,
"text": "Fuzzy Logic | Introduction"
},
{
"code": null,
"e": 35578,
"s": 35542,
"text": "Basics of API Testing Using Postman"
},
{
"code": null,
"e": 35599,
"s": 35578,
"text": "Q-Learning in Python"
},
{
"code": null,
"e": 35623,
"s": 35599,
"text": "Markov Decision Process"
},
{
"code": null,
"e": 35672,
"s": 35623,
"text": "SDE SHEET - A Complete Guide for SDE Preparation"
},
{
"code": null,
"e": 35705,
"s": 35672,
"text": "Working with zip files in Python"
},
{
"code": null,
"e": 35727,
"s": 35705,
"text": "XML parsing in Python"
},
{
"code": null,
"e": 35772,
"s": 35727,
"text": "Python | Simple GUI calculator using Tkinter"
}
] |
Freeware v/s Shareware - GeeksforGeeks | 18 Apr, 2019
Freeware: It is software that is provided to the user free of cost, is fully functional, and has no expiry date.
Anyone can download it from the internet and use it for free.
Here the author retains the copyright to the software. Shareware: It is commercial software that is initially distributed free of charge but later requires payment to keep the functionality and continue access.
It allows people to “try before they buy”.
References:
https://www.computerscience.gcse.guru/theory/free-software-freeware-shareware
https://www.diffen.com/difference/Freeware_vs_Shareware
Akanksha_Rai
Difference Between
GBlog
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Difference between var, let and const keywords in JavaScript
Difference Between Method Overloading and Method Overriding in Java
Difference Between Spark DataFrame and Pandas DataFrame
Difference between Internal and External fragmentation
Difference between Top down parsing and Bottom up parsing
Roadmap to Become a Web Developer in 2022
Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ...
Socket Programming in C/C++
DSA Sheet by Love Babbar
Must Do Coding Questions for Product Based Companies | [
{
"code": null,
"e": 24804,
"s": 24776,
"text": "\n18 Apr, 2019"
},
{
"code": null,
"e": 24931,
"s": 24804,
"text": "Freeware:It is a software that is provided to the user free of cost with the fully functional mechanism having no expiry date."
},
{
"code": null,
"e": 24993,
"s": 24931,
"text": "Anyone can download it from the internet and use it for free."
},
{
"code": null,
"e": 25208,
"s": 24993,
"text": "Here the author retains the copyright to the software.Shareware:It is a commercial software that is initially distributed free of charge but later charge payment to keep the functionality on to continue the access."
},
{
"code": null,
"e": 25257,
"s": 25208,
"text": " It allows the people to “try before they buy”."
},
{
"code": null,
"e": 25400,
"s": 25257,
"text": "Referenceshttps://www.computerscience.gcse.guru/theory/free-software-freeware-sharewarehttps://www.diffen.com/difference/Freeware_vs_Shareware"
},
{
"code": null,
"e": 25413,
"s": 25400,
"text": "Akanksha_Rai"
},
{
"code": null,
"e": 25432,
"s": 25413,
"text": "Difference Between"
},
{
"code": null,
"e": 25438,
"s": 25432,
"text": "GBlog"
},
{
"code": null,
"e": 25536,
"s": 25438,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25545,
"s": 25536,
"text": "Comments"
},
{
"code": null,
"e": 25558,
"s": 25545,
"text": "Old Comments"
},
{
"code": null,
"e": 25619,
"s": 25558,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 25687,
"s": 25619,
"text": "Difference Between Method Overloading and Method Overriding in Java"
},
{
"code": null,
"e": 25743,
"s": 25687,
"text": "Difference Between Spark DataFrame and Pandas DataFrame"
},
{
"code": null,
"e": 25798,
"s": 25743,
"text": "Difference between Internal and External fragmentation"
},
{
"code": null,
"e": 25856,
"s": 25798,
"text": "Difference between Top down parsing and Bottom up parsing"
},
{
"code": null,
"e": 25898,
"s": 25856,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 25972,
"s": 25898,
"text": "Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ..."
},
{
"code": null,
"e": 26000,
"s": 25972,
"text": "Socket Programming in C/C++"
},
{
"code": null,
"e": 26025,
"s": 26000,
"text": "DSA Sheet by Love Babbar"
}
] |
How can I get days since epoch in JavaScript? | To get days since epoch, you can use the Math.abs() JavaScript method. Then use the Math.floor() method to convert the difference between the epoch date and the current date into whole days −
Live Demo
<html>
<head>
<title>JavaScript Clone Date</title>
</head>
<body>
<script>
var current_date, epocDate;
current_date = new Date();
document.write("Current Date: "+current_date);
var epocDate = new Date(new Date().getTime() / 1000);
document.write("<br>Since epoch: "+epocDate);
var res = Math.abs(current_date - epocDate) / 1000;
// get total days between two dates
var days = Math.floor(res / 86400);
document.write("<br>Difference (Days): "+days);
</script>
</body>
</html>
Current Date: Fri May 25 2018 15:42:43 GMT+0530 (India Standard Time)
Since epoch: Sun Jan 18 1970 21:44:03 GMT+0530 (India Standard Time)
Difference (Days): 17658 | [
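The same computation can be sketched in Python (an illustrative translation, not part of the original article). One detail worth noting: the article's epocDate is built as Date(getTime()/1000), which lands roughly 18 days after the true epoch, so its result of 17658 is about 18 days short of the true days-since-1970-01-01 count.

```python
import math
import time

# Seconds since the Unix epoch divided by 86400 seconds per day,
# floored to whole days (mirrors the Math.floor(res / 86400) step above).
def days_since_epoch(now_seconds=None):
    if now_seconds is None:
        now_seconds = time.time()
    return math.floor(now_seconds / 86400)

print(days_since_epoch())
```

Passing a fixed timestamp (e.g. one for 25 May 2018) makes the function deterministic for testing.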
{
"code": null,
"e": 1233,
"s": 1062,
"text": "To get days since epoch, you need to use Math.abs() JavaScript method. Then use the Math.Floor() method to get the difference between dates since epoch and current date −"
},
{
"code": null,
"e": 1243,
"s": 1233,
"text": "Live Demo"
},
{
"code": null,
"e": 1832,
"s": 1243,
"text": "<html>\n <head>\n <title>JavaScript Clone Date</title>\n </head>\n <body>\n <script>\n var current_date, epocDate;\n\n current_date = new Date();\n document.write(\"Current Date: \"+current_date);\n\n var epocDate = new Date(new Date().getTime() / 1000);\n document.write(\"<br>Since epoch: \"+epocDate);\n var res = Math.abs(current_date - epocDate) / 1000;\n\n // get total days between two dates\n var days = Math.floor(res / 86400);\n document.write(\"<br>Difference (Days): \"+days);\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 1996,
"s": 1832,
"text": "Current Date: Fri May 25 2018 15:42:43 GMT+0530 (India Standard Time)\nSince epoch: Sun Jan 18 1970 21:44:03 GMT+0530 (India Standard Time)\nDifference (Days): 17658"
}
] |
Implementation of Multi-Variate Linear Regression in Python using Gradient Descent Optimization from scratch | by Navoneel Chakrabarty | Towards Data Science | Most Practical Applications of Machine Learning involve Multiple Features on which the Target Outcome depends upon. Similarly in Regression Analysis Problems, there are instances where the Target Outcome depends on numerous features. Multi-Variate Linear Regression is a possible solution to tackle such problems. In this article, I will be discussing the Multi-Variate (multiple features) Linear Regression, its Python Implementation from Scratch, Application on a Practical Problem and Performance Analysis.
As it is a “linear” Regression Technique, only linear term of each feature will be taken in the framing of the hypothesis. Let, x_1, x_2, ... x_n, be the features on which the Target Outcome depends upon. Then, the hypothesis for Multi-Variate Linear Regression:
Also, the above hypothesis can be re-framed in terms of Vector Algebra too:
There is also a cost function (or loss function) associated with the hypothesis dependent upon parameters, theta_0, theta_1, theta_2, ... ,theta_n.
The cost function here is the same as in the case of Polynomial Regression [1].
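Since the original post rendered the hypothesis and cost function as images, here are the standard forms as a reconstruction, using the same symbols as the surrounding text and matching the `(1/m) * 0.5 * sum(...)` expression in the code below:

```latex
% Hypothesis for multi-variate linear regression
h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n = \theta^{T} x

% Cost function over m training samples
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
```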
So, these parameters, theta_0, theta_1, theta_2, ..., theta_n have to assume such values for which the cost function (or simply cost) reaches to its minimum value possible. In other words, the minima of the Cost Function have to be found out.
Batch Gradient Descent can be used as the Optimization Strategy in this case.
Implementation of Multi-Variate Linear Regression using Batch Gradient Descent:
The implementation is done by creating 3 modules each used for performing different operations in the Training Process.
=> hypothesis(): It is the function that calculates and outputs the hypothesis value of the Target Variable, given theta (theta_0, theta_1, theta_2, theta_3, ...., theta_n), Features in a matrix, X of dimension [m X (n+1)] where m is the number of samples and n is the number of features. The implementation of hypothesis() is given below:
def hypothesis(theta, X, n):
    h = np.ones((X.shape[0],1))
    theta = theta.reshape(1,n+1)
    for i in range(0,X.shape[0]):
        h[i] = float(np.matmul(theta, X[i]))
    h = h.reshape(X.shape[0])
    return h
=>BGD(): It is the function that performs the Batch Gradient Descent Algorithm taking current value of theta (theta_0, theta_1,..., theta_n), learning rate (alpha), number of iterations (num_iters), list of hypothesis values of all samples (h), feature set (X), Target Variable set (y) and Number of Features (n) as input and outputs the optimized theta (theta_0, theta_1, theta_2, theta_3, ..., theta_n) and the cost history or cost which contains the value of the cost function over all the iterations. The implementation of BGD() is given below:
def BGD(theta, alpha, num_iters, h, X, y, n):
    cost = np.ones(num_iters)
    for i in range(0,num_iters):
        theta[0] = theta[0] - (alpha/X.shape[0]) * sum(h - y)
        for j in range(1,n+1):
            theta[j] = theta[j] - (alpha/X.shape[0]) * sum((h-y) * X.transpose()[j])
        h = hypothesis(theta, X, n)
        cost[i] = (1/X.shape[0]) * 0.5 * sum(np.square(h - y))
    theta = theta.reshape(1,n+1)
    return theta, cost
=>linear_regression(): It is the principal function that takes the features matrix (X), Target Variable Vector (y), learning rate (alpha) and number of iterations (num_iters) as input and outputs the final optimized theta i.e., the values of [theta_0, theta_1, theta_2, theta_3,....,theta_n] for which the cost function almost achieves minima following Batch Gradient Descent, and cost which stores the value of cost for every iteration.
def linear_regression(X, y, alpha, num_iters):
    n = X.shape[1]
    one_column = np.ones((X.shape[0],1))
    X = np.concatenate((one_column, X), axis = 1)
    # initializing the parameter vector...
    theta = np.zeros(n+1)
    # hypothesis calculation....
    h = hypothesis(theta, X, n)
    # returning the optimized parameters by Gradient Descent...
    theta, cost = BGD(theta,alpha,num_iters,h,X,y,n)
    return theta, cost
Now, let’s move on to the Application of the Multi-Variate Linear Regression on a practical Data-Set.
Let us consider a Housing Price Data-Set of Portland, Oregon. It contains size of the house (in square feet) and number of bedrooms as features and price of the house as the Target Variable. The Data-Set is available at,
github.com
Problem Statement: “Given the size of the house and number of bedrooms, analyze and predict the possible price of the house”
Data Reading into Numpy Arrays :
data = np.loadtxt('data2.txt', delimiter=',')
X_train = data[:,[0,1]] # feature set
y_train = data[:,2]     # label set
Feature Normalization or Feature Scaling:
This involves scaling the features for fast and efficient computation.
where u is the Mean and sigma is the Standard Deviation:
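In symbols (reconstructing the standardization formula that appeared as an image in the original), each feature value is shifted by its mean and divided by its standard deviation:

```latex
x_i' = \frac{x_i - \mu_i}{\sigma_i}
```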
Implementation of feature scaling:
mean = np.ones(X_train.shape[1])
std = np.ones(X_train.shape[1])
for i in range(0, X_train.shape[1]):
    mean[i] = np.mean(X_train.transpose()[i])
    std[i] = np.std(X_train.transpose()[i])
    for j in range(0, X_train.shape[0]):
        X_train[j][i] = (X_train[j][i] - mean[i])/std[i]
Here,
Mean of the feature “size of the house (in sq. feet)” or F1: 2000.6808
Mean of the feature “number of bed-rooms” or F2: 3.1702
Standard Deviation of F1: 7.86202619e+02
Standard Deviation of F2: 7.52842809e-01
# calling the principal function with learning_rate = 0.0001 and
# num_iters = 300000
theta, cost = linear_regression(X_train, y_train, 0.0001, 300000)
The cost has been reduced in the course of Batch Gradient Descent iteration-by-iteration. The reduction in the cost is shown with the help of Line Curve.
import matplotlib.pyplot as plt
cost = list(cost)
n_iterations = [x for x in range(1,300001)]
plt.plot(n_iterations, cost)
plt.xlabel('No. of iterations')
plt.ylabel('Cost')
Side-by-Side Visualization of Features and Target Variable Actual and Prediction using 3-D Scatter Plots :
=>Actual Target Variable Visualization:
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
sequence_containing_x_vals = list(X_train.transpose()[0])
sequence_containing_y_vals = list(X_train.transpose()[1])
sequence_containing_z_vals = list(y_train)
fig = pyplot.figure()
ax = Axes3D(fig)
ax.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals)
ax.set_xlabel('Living Room Area', fontsize=10)
ax.set_ylabel('Number of Bed Rooms', fontsize=10)
ax.set_zlabel('Actual Housing Price', fontsize=10)
=>Prediction Target Variable Visualization:
# Getting the predictions...
X_train = np.concatenate((np.ones((X_train.shape[0],1)), X_train), axis = 1)
predictions = hypothesis(theta, X_train, X_train.shape[1] - 1)

from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
sequence_containing_x_vals = list(X_train.transpose()[1])
sequence_containing_y_vals = list(X_train.transpose()[2])
sequence_containing_z_vals = list(predictions)
fig = pyplot.figure()
ax = Axes3D(fig)
ax.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals)
ax.set_xlabel('Living Room Area', fontsize=10)
ax.set_ylabel('Number of Bed Rooms', fontsize=10)
ax.set_zlabel('Housing Price Predictions', fontsize=10)
Performance Analysis:
Mean Absolute Error: 51502.7803 (in dollars)
Mean Square Error: 4086560101.2158 (in dollars square)
Root Mean Square Error: 63926.2082 (in dollars)
R-Square Score: 0.7329
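These four metrics are each a few NumPy lines; the sketch below computes them on hypothetical toy values (not the article's housing data) so the formulas are explicit:

```python
import numpy as np

# Hypothetical actual vs. predicted target values
y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 330.0])

mae  = np.mean(np.abs(y_true - y_pred))   # Mean Absolute Error
mse  = np.mean((y_true - y_pred) ** 2)    # Mean Square Error
rmse = np.sqrt(mse)                       # Root Mean Square Error
# R-Square: 1 - residual sum of squares / total sum of squares
r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```

Running the same formulas on the model's predictions and y_train reproduces the figures listed above.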
One thing to be noted is that the Mean Absolute Error, Mean Square Error and Root Mean Square Error are not unit-free. To make them unit-free, before training the model, the Target Label can be scaled in the same way the features were scaled. Other than that, a decent R-Square Score of 0.7329 is also obtained.
That’s all about the Implementation of Multi-Variate Linear Regression in Python using Gradient Descent from scratch. | [
{
"code": null,
"e": 682,
"s": 172,
"text": "Most Practical Applications of Machine Learning involve Multiple Features on which the Target Outcome depends upon. Similarly in Regression Analysis Problems, there are instances where the Target Outcome depends on numerous features. Multi-Variate Linear Regression is a possible solution to tackle such problems. In this article, I will be discussing the Multi-Variate (multiple features) Linear Regression, its Python Implementation from Scratch, Application on a Practical Problem and Performance Analysis."
},
{
"code": null,
"e": 945,
"s": 682,
"text": "As it is a “linear” Regression Technique, only linear term of each feature will be taken in the framing of the hypothesis. Let, x_1, x_2, ... x_n, be the features on which the Target Outcome depends upon. Then, the hypothesis for Multi-Variate Linear Regression:"
},
{
"code": null,
"e": 1021,
"s": 945,
"text": "Also, the above hypothesis can be re-framed in terms of Vector Algebra too:"
},
{
"code": null,
"e": 1169,
"s": 1021,
"text": "There is also a cost function (or loss function) associated with the hypothesis dependent upon parameters, theta_0, theta_1, theta_2, ... ,theta_n."
},
{
"code": null,
"e": 1249,
"s": 1169,
"text": "The cost function here is the same as in the case of Polynomial Regression [1]."
},
{
"code": null,
"e": 1492,
"s": 1249,
"text": "So, these parameters, theta_0, theta_1, theta_2, ..., theta_n have to assume such values for which the cost function (or simply cost) reaches to its minimum value possible. In other words, the minima of the Cost Function have to be found out."
},
{
"code": null,
"e": 1570,
"s": 1492,
"text": "Batch Gradient Descent can be used as the Optimization Strategy in this case."
},
{
"code": null,
"e": 1650,
"s": 1570,
"text": "Implementation of Multi-Variate Linear Regression using Batch Gradient Descent:"
},
{
"code": null,
"e": 1770,
"s": 1650,
"text": "The implementation is done by creating 3 modules each used for performing different operations in the Training Process."
},
{
"code": null,
"e": 2110,
"s": 1770,
"text": "=> hypothesis(): It is the function that calculates and outputs the hypothesis value of the Target Variable, given theta (theta_0, theta_1, theta_2, theta_3, ...., theta_n), Features in a matrix, X of dimension [m X (n+1)] where m is the number of samples and n is the number of features. The implementation of hypothesis() is given below:"
},
{
"code": null,
"e": 2320,
"s": 2110,
"text": "def hypothesis(theta, X, n): h = np.ones((X.shape[0],1)) theta = theta.reshape(1,n+1) for i in range(0,X.shape[0]): h[i] = float(np.matmul(theta, X[i])) h = h.reshape(X.shape[0]) return h"
},
{
"code": null,
"e": 2869,
"s": 2320,
"text": "=>BGD(): It is the function that performs the Batch Gradient Descent Algorithm taking current value of theta (theta_0, theta_1,..., theta_n), learning rate (alpha), number of iterations (num_iters), list of hypothesis values of all samples (h), feature set (X), Target Variable set (y) and Number of Features (n) as input and outputs the optimized theta (theta_0, theta_1, theta_2, theta_3, ..., theta_n) and the cost history or cost which contains the value of the cost function over all the iterations. The implementation of BGD() is given below:"
},
{
"code": null,
"e": 3337,
"s": 2869,
"text": "def BGD(theta, alpha, num_iters, h, X, y, n): cost = np.ones(num_iters) for i in range(0,num_iters): theta[0] = theta[0] - (alpha/X.shape[0]) * sum(h - y) for j in range(1,n+1): theta[j] = theta[j] - (alpha/X.shape[0]) * sum((h-y) * X.transpose()[j]) h = hypothesis(theta, X, n) cost[i] = (1/X.shape[0]) * 0.5 * sum(np.square(h - y)) theta = theta.reshape(1,n+1) return theta, cost"
},
{
"code": null,
"e": 3775,
"s": 3337,
"text": "=>linear_regression(): It is the principal function that takes the features matrix (X), Target Variable Vector (y), learning rate (alpha) and number of iterations (num_iters) as input and outputs the final optimized theta i.e., the values of [theta_0, theta_1, theta_2, theta_3,....,theta_n] for which the cost function almost achieves minima following Batch Gradient Descent, and cost which stores the value of cost for every iteration."
},
{
"code": null,
"e": 4196,
"s": 3775,
"text": "def linear_regression(X, y, alpha, num_iters): n = X.shape[1] one_column = np.ones((X.shape[0],1)) X = np.concatenate((one_column, X), axis = 1) # initializing the parameter vector... theta = np.zeros(n+1) # hypothesis calculation.... h = hypothesis(theta, X, n) # returning the optimized parameters by Gradient Descent... theta, cost = BGD(theta,alpha,num_iters,h,X,y,n) return theta, cost"
},
{
"code": null,
"e": 4307,
"s": 4196,
"text": "Now, let’s move on to the Application of the Multi-Variate Linear Regression on a Practical Practice Data-Set."
},
{
"code": null,
"e": 4528,
"s": 4307,
"text": "Let us consider a Housing Price Data-Set of Portland, Oregon. It contains size of the house (in square feet) and number of bedrooms as features and price of the house as the Target Variable. The Data-Set is available at,"
},
{
"code": null,
"e": 4539,
"s": 4528,
"text": "github.com"
},
{
"code": null,
"e": 4664,
"s": 4539,
"text": "Problem Statement: “Given the size of the house and number of bedrooms, analyze and predict the possible price of the house”"
},
{
"code": null,
"e": 4697,
"s": 4664,
"text": "Data Reading into Numpy Arrays :"
},
{
"code": null,
"e": 4809,
"s": 4697,
"text": "data = np.loadtxt('data2.txt', delimiter=',')X_train = data[:,[0,1]] #feature sety_train = data[:,2] #label set"
},
{
"code": null,
"e": 4851,
"s": 4809,
"text": "Feature Normalization or Feature Scaling:"
},
{
"code": null,
"e": 4922,
"s": 4851,
"text": "This involves scaling the features for fast and efficient computation."
},
{
"code": null,
"e": 4979,
"s": 4922,
"text": "where u is the Mean and sigma is the Standard Deviation:"
},
{
"code": null,
"e": 5014,
"s": 4979,
"text": "Implementation of feature scaling:"
},
{
"code": null,
"e": 5298,
"s": 5014,
"text": "mean = np.ones(X_train.shape[1])std = np.ones(X_train.shape[1])for i in range(0, X_train.shape[1]): mean[i] = np.mean(X_train.transpose()[i]) std[i] = np.std(X_train.transpose()[i]) for j in range(0, X_train.shape[0]): X_train[j][i] = (X_train[j][i] - mean[i])/std[i]"
},
{
"code": null,
"e": 5304,
"s": 5298,
"text": "Here,"
},
{
"code": null,
"e": 5510,
"s": 5304,
"text": "Mean of the feature “size of the house (in sq. feet)” or F1: 2000.6808Mean of the feature “number of bed-rooms” or F2: 3.1702Standard Deviation of F1: 7.86202619e+02Standard Deviation of F2: 7.52842809e-01"
},
{
"code": null,
"e": 5581,
"s": 5510,
"text": "Mean of the feature “size of the house (in sq. feet)” or F1: 2000.6808"
},
{
"code": null,
"e": 5637,
"s": 5581,
"text": "Mean of the feature “number of bed-rooms” or F2: 3.1702"
},
{
"code": null,
"e": 5678,
"s": 5637,
"text": "Standard Deviation of F1: 7.86202619e+02"
},
{
"code": null,
"e": 5719,
"s": 5678,
"text": "Standard Deviation of F2: 7.52842809e-01"
},
{
"code": null,
"e": 5916,
"s": 5719,
"text": "# calling the principal function with learning_rate = 0.0001 and # num_iters = 300000theta, cost = linear_regression(X_train, y_train, 0.0001, 300000)"
},
{
"code": null,
"e": 6070,
"s": 5916,
"text": "The cost has been reduced in the course of Batch Gradient Descent iteration-by-iteration. The reduction in the cost is shown with the help of Line Curve."
},
{
"code": null,
"e": 6239,
"s": 6070,
"text": "import matplotlib.pyplot as pltcost = list(cost)n_iterations = [x for x in range(1,300001)]plt.plot(n_iterations, cost)plt.xlabel('No. of iterations')plt.ylabel('Cost')"
},
{
"code": null,
"e": 6346,
"s": 6239,
"text": "Side-by-Side Visualization of Features and Target Variable Actual and Prediction using 3-D Scatter Plots :"
},
{
"code": null,
"e": 6386,
"s": 6346,
"text": "=>Actual Target Variable Visualization:"
},
{
"code": null,
"e": 6897,
"s": 6386,
"text": "from matplotlib import pyplotfrom mpl_toolkits.mplot3d import Axes3Dsequence_containing_x_vals = list(X_train.transpose()[0])sequence_containing_y_vals = list(X_train.transpose()[1])sequence_containing_z_vals = list(y_train)fig = pyplot.figure()ax = Axes3D(fig)ax.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals)ax.set_xlabel('Living Room Area', fontsize=10)ax.set_ylabel('Number of Bed Rooms', fontsize=10)ax.set_zlabel('Actual Housing Price', fontsize=10)"
},
{
"code": null,
"e": 6941,
"s": 6897,
"text": "=>Prediction Target Variable Visualization:"
},
{
"code": null,
"e": 7651,
"s": 6941,
"text": "# Getting the predictions...X_train = np.concatenate((np.ones((X_train.shape[0],1)), X_train) ,axis = 1)predictions = hypothesis(theta, X_train, X_train.shape[1] - 1)from matplotlib import pyplotfrom mpl_toolkits.mplot3d import Axes3Dsequence_containing_x_vals = list(X_train.transpose()[1])sequence_containing_y_vals = list(X_train.transpose()[2])sequence_containing_z_vals = list(predictions)fig = pyplot.figure()ax = Axes3D(fig)ax.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals)ax.set_xlabel('Living Room Area', fontsize=10)ax.set_ylabel('Number of Bed Rooms', fontsize=10)ax.set_zlabel('Housing Price Predictions', fontsize=10)"
},
{
"code": null,
"e": 7673,
"s": 7651,
"text": "Performance Analysis:"
},
{
"code": null,
"e": 7841,
"s": 7673,
"text": "Mean Absolute Error: 51502.7803 (in dollars)Mean Square Error: 4086560101.2158 (in dollars square)Root Mean Square Error: 63926.2082 (in dollars)R-Square Score: 0.7329"
},
{
"code": null,
"e": 7886,
"s": 7841,
"text": "Mean Absolute Error: 51502.7803 (in dollars)"
},
{
"code": null,
"e": 7941,
"s": 7886,
"text": "Mean Square Error: 4086560101.2158 (in dollars square)"
},
{
"code": null,
"e": 7989,
"s": 7941,
"text": "Root Mean Square Error: 63926.2082 (in dollars)"
},
{
"code": null,
"e": 8012,
"s": 7989,
"text": "R-Square Score: 0.7329"
},
{
"code": null,
"e": 8326,
"s": 8012,
"text": "One thing to be noted, is that the Mean Absolute Error, Mean Square Error and Root Mean Square Error is not unit free. To make them unit-free, before Training the Model, the Target Label can be scaled in the same way, the features were scaled. Other than that, a descent R-Square-Score of 0.7329 is also obtained."
}
] |
Groovy - remove() | Removes the element at the specified position in this List.
Object remove(int index)
Index – Index at which the value needs to be removed.
The removed value.
Following is an example of the usage of this method −
class Example {
static void main(String[] args) {
def lst = [11, 12, 13, 14];
println(lst.remove(2));
println(lst);
}
}
When we run the above program, we will get the following result −
13
[11, 12, 14]
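As an aside not from the original tutorial, Python's list.pop(index) has the same remove-and-return semantics as Groovy's remove(int index), which can be handy when porting scripts:

```python
lst = [11, 12, 13, 14]
removed = lst.pop(2)   # removes and returns the element at index 2
print(removed)  # 13
print(lst)      # [11, 12, 14]
```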
{
"code": null,
"e": 2298,
"s": 2238,
"text": "Removes the element at the specified position in this List."
},
{
"code": null,
"e": 2324,
"s": 2298,
"text": "Object remove(int index)\n"
},
{
"code": null,
"e": 2378,
"s": 2324,
"text": "Index – Index at which the value needs to be removed."
},
{
"code": null,
"e": 2397,
"s": 2378,
"text": "The removed value."
},
{
"code": null,
"e": 2451,
"s": 2397,
"text": "Following is an example of the usage of this method −"
},
{
"code": null,
"e": 2596,
"s": 2451,
"text": "class Example {\n static void main(String[] args) {\n def lst = [11, 12, 13, 14];\n\n println(lst.remove(2));\n println(lst);\n }\n}"
},
{
"code": null,
"e": 2662,
"s": 2596,
"text": "When we run the above program, we will get the following result −"
},
{
"code": null,
"e": 2680,
"s": 2662,
"text": "13 \n[11, 12, 14]\n"
},
{
"code": null,
"e": 2713,
"s": 2680,
"text": "\n 52 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 2731,
"s": 2713,
"text": " Krishna Sakinala"
},
{
"code": null,
"e": 2766,
"s": 2731,
"text": "\n 49 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 2784,
"s": 2766,
"text": " Packt Publishing"
},
{
"code": null,
"e": 2791,
"s": 2784,
"text": " Print"
},
{
"code": null,
"e": 2802,
"s": 2791,
"text": " Add Notes"
}
] |
Partition problem | For this problem, a given set can be partitioned in such a way that the sum of each subset is equal.
At first, we have to find the sum of the given set. If it is even, then there is a chance to divide it into two sets. Otherwise, it cannot be divided.
For an even value of the sum, we will create a table named partTable and use the following condition to solve the problem.
partTable[i, j] is true, when subset of array[0] to array[j-1] has sum equal to i, otherwise it is false.
Input:
A set of integers. {3, 1, 1, 2, 2, 1}
Output:
True if the set can be partitioned into two parts with equal sum.
Here the answer is true. One pair of the partitions are: {3, 1, 1}, {2, 2, 1}
checkPartition(set, n)
Input − The given set, the number of elements in the set.
Output − True when partitioning is possible to make two subsets of equal sum.
Begin
sum := sum of all elements in the set
if sum is odd, then
return
define partTable of order (sum/2 + 1 x n+1)
set all elements in the 0th row to true
set all elements in the 0th column to false
for i in range 1 to sum/2, do
for j in range 1 to n, do
partTab[i, j] := partTab[i, j-1]
if i >= set[j-1], then
partTab[i, j] := partTab[i, j] or with
partTab[i – set[j-1], j-1]
done
done
return partTab[sum/2, n]
End
#include <iostream>
using namespace std;
bool checkPartition (int set[], int n) {
int sum = 0;
for (int i = 0; i < n; i++) //find the sum of all elements of set
sum += set[i];
if (sum%2 != 0) //when sum is odd, it is not divisible into two set
return false;
bool partTab[sum/2+1][n+1]; //create partition table
for (int i = 0; i <= n; i++)
partTab[0][i] = true; //for set of zero element, all values are true
for (int i = 1; i <= sum/2; i++)
partTab[i][0] = false; //as first column holds empty set, it is false
// Fill the partition table in botton up manner
for (int i = 1; i <= sum/2; i++) {
for (int j = 1; j <= n; j++) {
partTab[i][j] = partTab[i][j-1];
if (i >= set[j-1])
partTab[i][j] = partTab[i][j] || partTab[i - set[j-1]][j-1];
}
}
return partTab[sum/2][n];
}
int main() {
int set[] = {3, 1, 1, 2, 2, 1};
int n = 6;
if (checkPartition(set, n))
cout << "Given Set can be divided into two subsets of equal sum.";
else
cout << "Given Set can not be divided into two subsets of equal sum.";
}
Given Set can be divided into two subsets of equal sum. | [
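The same bottom-up idea can be sketched compactly in Python using a set of reachable subset sums — an illustrative equivalent of the partTab table above, not part of the original article:

```python
def can_partition(nums):
    total = sum(nums)
    if total % 2 != 0:      # an odd total can never split into two equal halves
        return False
    target = total // 2
    reachable = {0}         # subset sums achievable with the elements seen so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(can_partition([3, 1, 1, 2, 2, 1]))  # True, e.g. {3, 1, 1} and {2, 2, 1}
```

Each element either extends an existing subset sum or is skipped, mirroring the `partTab[i][j-1] || partTab[i - set[j-1]][j-1]` recurrence.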
{
"code": null,
"e": 1160,
"s": 1062,
"text": "For this problem, a given set can be partitioned in such a way, that sum of each subset is equal."
},
{
"code": null,
"e": 1311,
"s": 1160,
"text": "At first, we have to find the sum of the given set. If it is even, then there is a chance to divide it into two sets. Otherwise, it cannot be divided."
},
{
"code": null,
"e": 1437,
"s": 1311,
"text": "For even value of the sum, then we will create a table named partTable, now use the following condition to solve the problem."
},
{
"code": null,
"e": 1543,
"s": 1437,
"text": "partTable[i, j] is true, when subset of array[0] to array[j-1] has sum equal to i, otherwise it is false."
},
{
"code": null,
"e": 1740,
"s": 1543,
"text": "Input:\nA set of integers. {3, 1, 1, 2, 2, 1}\nOutput:\nTrue if the set can be partitioned into two parts with equal sum.\nHere the answer is true. One pair of the partitions are: {3, 1, 1}, {2, 2, 1}"
},
{
"code": null,
"e": 1763,
"s": 1740,
"text": "checkPartition(set, n)"
},
{
"code": null,
"e": 1821,
"s": 1763,
"text": "Input − The given set, the number of elements in the set."
},
{
"code": null,
"e": 1903,
"s": 1821,
"text": "Output − True when partitioning is possible to make two subsets of the equal sum."
},
{
"code": null,
"e": 2406,
"s": 1903,
"text": "Begin\n sum := sum of all elements in the set\n if sum is odd, then\n return\n\n define partTable of order (sum/2 + 1 x n+1)\n set all elements in the 0th row to true\n set all elements in the 0th column to false\n\n for i in range 1 to sum/2, do\n for j in range 1 to n, do\n partTab[i, j] := partTab[i, j-1]\n if i >= set[j-1], then\n partTab[i, j] := partTab[i, j] or with\n partTab[i – set[j-1], j-1]\n done\n done\n\n return partTab[sum/2, n]\nEnd"
},
{
"code": null,
"e": 3564,
"s": 2406,
"text": "#include <iostream>\nusing namespace std;\n\nbool checkPartition (int set[], int n) {\n int sum = 0;\n\n for (int i = 0; i < n; i++) //find the sum of all elements of set\n sum += set[i];\n\n if (sum%2 != 0) //when sum is odd, it is not divisible into two set\n return false;\n\n bool partTab[sum/2+1][n+1]; //create partition table\n for (int i = 0; i <= n; i++)\n partTab[0][i] = true; //for set of zero element, all values are true\n\n for (int i = 1; i <= sum/2; i++)\n partTab[i][0] = false; //as first column holds empty set, it is false\n\n // Fill the partition table in botton up manner\n for (int i = 1; i <= sum/2; i++) {\n for (int j = 1; j <= n; j++) {\n partTab[i][j] = partTab[i][j-1];\n if (i >= set[j-1])\n partTab[i][j] = partTab[i][j] || partTab[i - set[j-1]][j-1];\n } \n } \n return partTab[sum/2][n];\n}\n \nint main() {\n int set[] = {3, 1, 1, 2, 2, 1};\n int n = 6;\n\n if (checkPartition(set, n))\n cout << \"Given Set can be divided into two subsets of equal sum.\";\n else\n cout << \"Given Set can not be divided into two subsets of equal sum.\";\n} "
},
{
"code": null,
"e": 3620,
"s": 3564,
"text": "Given Set can be divided into two subsets of equal sum."
}
] |
C Program to Swap two Numbers - GeeksforGeeks | 07 May, 2020
Given two numbers, write a C program to swap the given numbers.
Input : x = 10, y = 20;
Output : x = 20, y = 10
Input : x = 200, y = 100
Output : x = 100, y = 200
The idea is simple
Assign x to a temp variable : temp = x
Assign y to x : x = y
Assign temp to y : y = temp
Let us understand with an example.
x = 100, y = 200
After line 1: temp = xtemp = 100
After line 2: x = yx = 200
After line 3 : y = tempy = 100
// C program to swap two variables
#include <stdio.h>

int main()
{
    int x, y;
    printf("Enter Value of x ");
    scanf("%d", &x);
    printf("\nEnter Value of y ");
    scanf("%d", &y);

    int temp = x;
    x = y;
    y = temp;

    printf("\nAfter Swapping: x = %d, y = %d", x, y);
    return 0;
}
Output:
Enter Value of x 12
Enter Value of y 14
After Swapping: x = 14, y = 12
How to write a function to swap?
Since we want the local variables of main to be modified by the swap function, we must pass them using pointers in C.
// C program to swap two variables using a
// user defined swap()
#include <stdio.h>

// This function swaps values pointed by xp and yp
void swap(int *xp, int *yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

int main()
{
    int x, y;
    printf("Enter Value of x ");
    scanf("%d", &x);
    printf("\nEnter Value of y ");
    scanf("%d", &y);

    swap(&x, &y);

    printf("\nAfter Swapping: x = %d, y = %d", x, y);
    return 0;
}
Output:
Enter Value of x 12
Enter Value of y 14
After Swapping: x = 14, y = 12
How to do it in C++?
In C++, we can use references also.
// C++ program to swap two variables using a
// user defined swap()
#include <stdio.h>

// This function swaps values referred by
// x and y
void swap(int &x, int &y)
{
    int temp = x;
    x = y;
    y = temp;
}

int main()
{
    int x, y;
    printf("Enter Value of x ");
    scanf("%d", &x);
    printf("\nEnter Value of y ");
    scanf("%d", &y);

    swap(x, y);

    printf("\nAfter Swapping: x = %d, y = %d", x, y);
    return 0;
}
Output:
Enter Value of x 12
Enter Value of y 14
After Swapping: x = 14, y = 12
Is there a library function?
We can use the C++ standard library swap() function also.
// C++ program to swap two variables using
// the library swap()
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int x, y;
    printf("Enter Value of x ");
    scanf("%d", &x);
    printf("\nEnter Value of y ");
    scanf("%d", &y);

    swap(x, y);

    printf("\nAfter Swapping: x = %d, y = %d", x, y);
    return 0;
}
Output:
Enter Value of x 12
Enter Value of y 14
After Swapping: x = 14, y = 12
How to swap without using a temporary variable?
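As a pointer for the question above, the two classic no-temporary tricks are sketched below in Python; the same arithmetic and XOR operations apply to C int variables, with the caveat that the XOR swap breaks when both operands are the same variable:

```python
# Arithmetic swap (no temporary variable)
x, y = 10, 20
x = x + y    # x now holds the sum
y = x - y    # y becomes the old x
x = x - y    # x becomes the old y

# XOR swap (integers only)
a, b = 12, 14
a ^= b
b ^= a
a ^= b
print(x, y, a, b)  # 20 10 14 12
```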
Samsung
Swap-Program
C Programs
C++ Programs
Mathematical
School Programming
Samsung
Mathematical
{
"code": null,
"e": 24579,
"s": 24551,
"text": "\n07 May, 2020"
},
{
"code": null,
"e": 24643,
"s": 24579,
"text": "Given two numbers, write a C program to swap the given numbers."
},
{
"code": null,
"e": 24744,
"s": 24643,
"text": "Input : x = 10, y = 20;\nOutput : x = 20, y = 10\n\nInput : x = 200, y = 100\nOutput : x = 100, y = 200\n"
},
{
"code": null,
"e": 24763,
"s": 24744,
"text": "The idea is simple"
},
{
"code": null,
"e": 24850,
"s": 24763,
"text": "Assign x to a temp variable : temp = xAssign y to x : x = yAssign temp to y : y = temp"
},
{
"code": null,
"e": 24889,
"s": 24850,
"text": "Assign x to a temp variable : temp = x"
},
{
"code": null,
"e": 24911,
"s": 24889,
"text": "Assign y to x : x = y"
},
{
"code": null,
"e": 24939,
"s": 24911,
"text": "Assign temp to y : y = temp"
},
{
"code": null,
"e": 24974,
"s": 24939,
"text": "Let us understand with an example."
},
{
"code": null,
"e": 24991,
"s": 24974,
"text": "x = 100, y = 200"
},
{
"code": null,
"e": 25024,
"s": 24991,
"text": "After line 1: temp = xtemp = 100"
},
{
"code": null,
"e": 25051,
"s": 25024,
"text": "After line 2: x = yx = 200"
},
{
"code": null,
"e": 25082,
"s": 25051,
"text": "After line 3 : y = tempy = 100"
},
{
"code": "// C program to swap two variables#include <stdio.h> int main(){ int x, y; printf(\"Enter Value of x \"); scanf(\"%d\", &x); printf(\"\\nEnter Value of y \"); scanf(\"%d\", &y); int temp = x; x = y; y = temp; printf(\"\\nAfter Swapping: x = %d, y = %d\", x, y); return 0;}",
"e": 25378,
"s": 25082,
"text": null
},
{
"code": null,
"e": 25386,
"s": 25378,
"text": "Output:"
},
{
"code": null,
"e": 25460,
"s": 25386,
"text": "Enter Value of x 12\n\nEnter Value of y 14\n\nAfter Swapping: x = 14, y = 12 "
},
{
"code": null,
"e": 25598,
"s": 25460,
"text": "How to write a function to swap?Since we want the local variables of main to modified by swap function, we must them using pointers in C."
},
{
"code": "// C program to swap two variables using a // user defined swap()#include <stdio.h> // This function swaps values pointed by xp and ypvoid swap(int *xp, int *yp){ int temp = *xp; *xp = *yp; *yp = temp;} int main(){ int x, y; printf(\"Enter Value of x \"); scanf(\"%d\", &x); printf(\"\\nEnter Value of y \"); scanf(\"%d\", &y); swap(&x, &y); printf(\"\\nAfter Swapping: x = %d, y = %d\", x, y); return 0;}",
"e": 26027,
"s": 25598,
"text": null
},
{
"code": null,
"e": 26035,
"s": 26027,
"text": "Output:"
},
{
"code": null,
"e": 26109,
"s": 26035,
"text": "Enter Value of x 12\n\nEnter Value of y 14\n\nAfter Swapping: x = 14, y = 12 "
},
{
"code": null,
"e": 26162,
"s": 26109,
"text": "How to do in C++?In C++, we can use references also."
},
{
"code": "// C++ program to swap two variables using a // user defined swap()#include <stdio.h> // This function swaps values referred by // x and y,void swap(int &x, int &y){ int temp = x; x = y; y = temp;} int main(){ int x, y; printf(\"Enter Value of x \"); scanf(\"%d\", &x); printf(\"\\nEnter Value of y \"); scanf(\"%d\", &y); swap(x, y); printf(\"\\nAfter Swapping: x = %d, y = %d\", x, y); return 0;}",
"e": 26584,
"s": 26162,
"text": null
},
{
"code": null,
"e": 26592,
"s": 26584,
"text": "Output:"
},
{
"code": null,
"e": 26666,
"s": 26592,
"text": "Enter Value of x 12\n\nEnter Value of y 14\n\nAfter Swapping: x = 14, y = 12 "
},
{
"code": null,
"e": 26737,
"s": 26666,
"text": "Is there a library function?We can use C++ library swap function also."
},
{
"code": "// C++ program to swap two variables using a // user defined swap()#include <bits/stdc++.h>using namespace std; int main(){ int x, y; printf(\"Enter Value of x \"); scanf(\"%d\", &x); printf(\"\\nEnter Value of y \"); scanf(\"%d\", &y); swap(x, y); printf(\"\\nAfter Swapping: x = %d, y = %d\", x, y); return 0;}",
"e": 27063,
"s": 26737,
"text": null
},
{
"code": null,
"e": 27071,
"s": 27063,
"text": "Output:"
},
{
"code": null,
"e": 27145,
"s": 27071,
"text": "Enter Value of x 12\n\nEnter Value of y 14\n\nAfter Swapping: x = 14, y = 12 "
},
{
"code": null,
"e": 27193,
"s": 27145,
"text": "How to swap without using a temporary variable?"
},
{
"code": null,
"e": 27201,
"s": 27193,
"text": "Samsung"
},
{
"code": null,
"e": 27214,
"s": 27201,
"text": "Swap-Program"
},
{
"code": null,
"e": 27225,
"s": 27214,
"text": "C Programs"
},
{
"code": null,
"e": 27238,
"s": 27225,
"text": "C++ Programs"
},
{
"code": null,
"e": 27251,
"s": 27238,
"text": "Mathematical"
},
{
"code": null,
"e": 27270,
"s": 27251,
"text": "School Programming"
},
{
"code": null,
"e": 27278,
"s": 27270,
"text": "Samsung"
},
{
"code": null,
"e": 27291,
"s": 27278,
"text": "Mathematical"
},
{
"code": null,
"e": 27389,
"s": 27291,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27398,
"s": 27389,
"text": "Comments"
},
{
"code": null,
"e": 27411,
"s": 27398,
"text": "Old Comments"
},
{
"code": null,
"e": 27452,
"s": 27411,
"text": "C Program to read contents of Whole File"
},
{
"code": null,
"e": 27483,
"s": 27452,
"text": "Producer Consumer Problem in C"
},
{
"code": null,
"e": 27554,
"s": 27483,
"text": "C / C++ Program for Dijkstra's shortest path algorithm | Greedy Algo-7"
},
{
"code": null,
"e": 27614,
"s": 27554,
"text": "Program to calculate First and Follow sets of given grammar"
},
{
"code": null,
"e": 27635,
"s": 27614,
"text": "time() function in C"
},
{
"code": null,
"e": 27661,
"s": 27635,
"text": "C++ Program for QuickSort"
},
{
"code": null,
"e": 27699,
"s": 27661,
"text": "C++ program for hashing with chaining"
},
{
"code": null,
"e": 27721,
"s": 27699,
"text": "delete keyword in C++"
},
{
"code": null,
"e": 27732,
"s": 27721,
"text": "cin in C++"
}
] |
Java - How to Check if a Path Exists ? - onlinetutorialspoint
This article shows you how to check if a Path exists in Java.
Java NIO package helps us to get this done.
The Files.exists(Path) method takes a Path as a parameter and returns true if the given path exists; otherwise, it returns false.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class IsPathExists {
public static void main(String[] args) throws Exception {
Path path = Paths.get("/Users/chandra/sample.txt");
System.out.println("isPathExists: " + Files.exists(path));
}
}
Output:
isPathExists: true
Files.notExists() is exactly the reverse of Files.exists(): it returns true if the provided path does not exist; otherwise, it returns false.
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;
public class isPathExists {
public static void main(String[] args) throws Exception {
Path path = Paths.get("/Users/chandra/sample.txt");
System.out.println("is Path notExists: " + Files.notExists(path));
}
}
Output:
is Path notExists: false
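Both checks can be exercised together with a small self-contained sketch. Unlike the hard-coded `/Users/chandra/sample.txt` path above, this version creates a temporary file on the fly, so its output does not depend on your machine (the sibling file name used for the negative check is an arbitrary, assumed-unused name):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class PathCheckDemo {

    public static void main(String[] args) throws Exception {
        // Create a temporary file so Files.exists() is guaranteed to be true
        Path existing = Files.createTempFile("demo", ".txt");
        // Derive a sibling path that was never created
        Path missing = existing.resolveSibling("no-such-file-12345.txt");

        System.out.println("exists: " + Files.exists(existing));
        System.out.println("notExists: " + Files.notExists(missing));

        Files.deleteIfExists(existing); // clean up the temporary file
    }
}
```

Running this prints `exists: true` and `notExists: true` regardless of which user or directory you run it from.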
Java Files Doc
Java reading all files in a directory
Happy Learning 🙂
How to check whether a file exists python ?
Java – How to create directory in Java
Java 8 walk How to Read all files in a folder
Difference between Path vs Classpath in Java
Java – How to get present working directory in Java
Java 8 Read File Line By Line Example
Java How to create Jar File ?
How To Change Spring Boot Context Path
How to Delete a File or Directory in Python
php readfile Example Tutorials
Java Program for Check Octal Number
Java Program To check a number is prime or not ?
Java Program to Check a Number is Palindrome or not ?
Java Program to Check the Number is Perfect or not ?
How to check whether a String is a Balanced String or not ?
[
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 460,
"s": 398,
"text": "This article shows you how to check if a Path exists in Java."
},
{
"code": null,
"e": 504,
"s": 460,
"text": "Java NIO package helps us to get this done."
},
{
"code": null,
"e": 629,
"s": 504,
"text": "Files.exists(Path) method takes Path as a parameter and returns True if the given path exists, otherwise, its returns False."
},
{
"code": null,
"e": 956,
"s": 629,
"text": "import java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\n\npublic class IsPathExists {\n\n public static void main(String[] args) throws Exception {\n Path path = Paths.get(\"/Users/chandra/sample.txt\");\n System.out.println(\"isPathExists: \" + Files.exists(path));\n }\n}\n"
},
{
"code": null,
"e": 964,
"s": 956,
"text": "Output:"
},
{
"code": null,
"e": 984,
"s": 964,
"text": "isPathExists: true\n"
},
{
"code": null,
"e": 1128,
"s": 984,
"text": "Files.notExists() is exactly reverse functionality of Files.exists(), it returns True if the provided path does not exist otherwise it’s False."
},
{
"code": null,
"e": 1496,
"s": 1128,
"text": "import java.nio.file.Files;\nimport java.nio.file.LinkOption;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\n\npublic class isPathExists {\n\n public static void main(String[] args) throws Exception {\n Path path = Paths.get(\"/Users/chandra/sample.txt\");\n System.out.println(\"is Path notExists: \" + Files.notExists(path));\n }\n}\n"
},
{
"code": null,
"e": 1504,
"s": 1496,
"text": "Output:"
},
{
"code": null,
"e": 1530,
"s": 1504,
"text": "is Path notExists: false\n"
},
{
"code": null,
"e": 1545,
"s": 1530,
"text": "Java Files Doc"
},
{
"code": null,
"e": 1583,
"s": 1545,
"text": "Java reading all files in a directory"
},
{
"code": null,
"e": 1600,
"s": 1583,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 2262,
"s": 1600,
"text": "\nHow to check whether a file exists python ?\nJava – How to create directory in Java\nJava 8 walk How to Read all files in a folder\nDifference between Path vs Classpath in Java\nJava – How to get present working directory in Java\nJava 8 Read File Line By Line Example\nJava How to create Jar File ?\nHow To Change Spring Boot Context Path\nHow to Delete a File or Directory in Python\nphp readfile Example Tutorials\nJava Program for Check Octal Number\nJava Program To check a number is prime or not ?\nJava Program to Check a Number is Palindrome or not ?\nJava Program to Check the Number is Perfect or not ?\nHow to check whether a String is a Balanced String or not ?\n"
},
{
"code": null,
"e": 2306,
"s": 2262,
"text": "How to check whether a file exists python ?"
},
{
"code": null,
"e": 2345,
"s": 2306,
"text": "Java – How to create directory in Java"
},
{
"code": null,
"e": 2391,
"s": 2345,
"text": "Java 8 walk How to Read all files in a folder"
},
{
"code": null,
"e": 2436,
"s": 2391,
"text": "Difference between Path vs Classpath in Java"
},
{
"code": null,
"e": 2488,
"s": 2436,
"text": "Java – How to get present working directory in Java"
},
{
"code": null,
"e": 2526,
"s": 2488,
"text": "Java 8 Read File Line By Line Example"
},
{
"code": null,
"e": 2556,
"s": 2526,
"text": "Java How to create Jar File ?"
},
{
"code": null,
"e": 2595,
"s": 2556,
"text": "How To Change Spring Boot Context Path"
},
{
"code": null,
"e": 2639,
"s": 2595,
"text": "How to Delete a File or Directory in Python"
},
{
"code": null,
"e": 2670,
"s": 2639,
"text": "php readfile Example Tutorials"
},
{
"code": null,
"e": 2706,
"s": 2670,
"text": "Java Program for Check Octal Number"
},
{
"code": null,
"e": 2755,
"s": 2706,
"text": "Java Program To check a number is prime or not ?"
},
{
"code": null,
"e": 2809,
"s": 2755,
"text": "Java Program to Check a Number is Palindrome or not ?"
},
{
"code": null,
"e": 2862,
"s": 2809,
"text": "Java Program to Check the Number is Perfect or not ?"
},
{
"code": null,
"e": 2922,
"s": 2862,
"text": "How to check whether a String is a Balanced String or not ?"
},
{
"code": null,
"e": 2928,
"s": 2926,
"text": "Δ"
},
{
"code": null,
"e": 2952,
"s": 2928,
"text": " Install Java on Mac OS"
},
{
"code": null,
"e": 2980,
"s": 2952,
"text": " Install AWS CLI on Windows"
},
{
"code": null,
"e": 3009,
"s": 2980,
"text": " Install Minikube on Windows"
},
{
"code": null,
"e": 3044,
"s": 3009,
"text": " Install Docker Toolbox on Windows"
},
{
"code": null,
"e": 3071,
"s": 3044,
"text": " Install SOAPUI on Windows"
},
{
"code": null,
"e": 3098,
"s": 3071,
"text": " Install Gradle on Windows"
},
{
"code": null,
"e": 3127,
"s": 3098,
"text": " Install RabbitMQ on Windows"
},
{
"code": null,
"e": 3153,
"s": 3127,
"text": " Install PuTTY on windows"
},
{
"code": null,
"e": 3179,
"s": 3153,
"text": " Install Mysql on Windows"
},
{
"code": null,
"e": 3215,
"s": 3179,
"text": " Install Hibernate Tools in Eclipse"
},
{
"code": null,
"e": 3249,
"s": 3215,
"text": " Install Elasticsearch on Windows"
},
{
"code": null,
"e": 3275,
"s": 3249,
"text": " Install Maven on Windows"
},
{
"code": null,
"e": 3300,
"s": 3275,
"text": " Install Maven on Ubuntu"
},
{
"code": null,
"e": 3334,
"s": 3300,
"text": " Install Maven on Windows Command"
},
{
"code": null,
"e": 3369,
"s": 3334,
"text": " Add OJDBC jar to Maven Repository"
},
{
"code": null,
"e": 3393,
"s": 3369,
"text": " Install Ant on Windows"
},
{
"code": null,
"e": 3422,
"s": 3393,
"text": " Install RabbitMQ on Windows"
},
{
"code": null,
"e": 3454,
"s": 3422,
"text": " Install Apache Kafka on Ubuntu"
},
{
"code": null,
"e": 3487,
"s": 3454,
"text": " Install Apache Kafka on Windows"
},
{
"code": null,
"e": 3512,
"s": 3487,
"text": " Java8 – Install Windows"
},
{
"code": null,
"e": 3529,
"s": 3512,
"text": " Java8 – foreach"
},
{
"code": null,
"e": 3557,
"s": 3529,
"text": " Java8 – forEach with index"
},
{
"code": null,
"e": 3588,
"s": 3557,
"text": " Java8 – Stream Filter Objects"
},
{
"code": null,
"e": 3620,
"s": 3588,
"text": " Java8 – Comparator Userdefined"
},
{
"code": null,
"e": 3640,
"s": 3620,
"text": " Java8 – GroupingBy"
},
{
"code": null,
"e": 3660,
"s": 3640,
"text": " Java8 – SummingInt"
},
{
"code": null,
"e": 3684,
"s": 3660,
"text": " Java8 – walk ReadFiles"
},
{
"code": null,
"e": 3714,
"s": 3684,
"text": " Java8 – JAVA_HOME on Windows"
},
{
"code": null,
"e": 3746,
"s": 3714,
"text": " Howto – Install Java on Mac OS"
},
{
"code": null,
"e": 3782,
"s": 3746,
"text": " Howto – Convert Iterable to Stream"
},
{
"code": null,
"e": 3826,
"s": 3782,
"text": " Howto – Get common elements from two Lists"
},
{
"code": null,
"e": 3858,
"s": 3826,
"text": " Howto – Convert List to String"
},
{
"code": null,
"e": 3899,
"s": 3858,
"text": " Howto – Concatenate Arrays using Stream"
},
{
"code": null,
"e": 3936,
"s": 3899,
"text": " Howto – Remove duplicates from List"
},
{
"code": null,
"e": 3976,
"s": 3936,
"text": " Howto – Filter null values from Stream"
},
{
"code": null,
"e": 4005,
"s": 3976,
"text": " Howto – Convert List to Map"
},
{
"code": null,
"e": 4037,
"s": 4005,
"text": " Howto – Convert Stream to List"
},
{
"code": null,
"e": 4057,
"s": 4037,
"text": " Howto – Sort a Map"
},
{
"code": null,
"e": 4079,
"s": 4057,
"text": " Howto – Filter a Map"
},
{
"code": null,
"e": 4109,
"s": 4079,
"text": " Howto – Get Current UTC Time"
},
{
"code": null,
"e": 4160,
"s": 4109,
"text": " Howto – Verify an Array contains a specific value"
},
{
"code": null,
"e": 4196,
"s": 4160,
"text": " Howto – Convert ArrayList to Array"
},
{
"code": null,
"e": 4228,
"s": 4196,
"text": " Howto – Read File Line By Line"
},
{
"code": null,
"e": 4263,
"s": 4228,
"text": " Howto – Convert Date to LocalDate"
},
{
"code": null,
"e": 4286,
"s": 4263,
"text": " Howto – Merge Streams"
},
{
"code": null,
"e": 4333,
"s": 4286,
"text": " Howto – Resolve NullPointerException in toMap"
},
{
"code": null,
"e": 4358,
"s": 4333,
"text": " Howto -Get Stream count"
},
{
"code": null,
"e": 4402,
"s": 4358,
"text": " Howto – Get Min and Max values in a Stream"
}
] |
Ruby on Rails - Migrations

Rails Migration allows you to use Ruby to define changes to your database schema, making it possible to use a version control system to keep things synchronized with the actual code.
This has many uses, including −
Teams of developers − If one person makes a schema change, the other developers just need to update, and run "rake migrate".

Production servers − Run "rake migrate" when you roll out a new release to bring the database up to date as well.

Multiple machines − If you develop on both a desktop and a laptop, or in more than one location, migrations can help you keep them all synchronized.
create_table(name, options)
drop_table(name)
rename_table(old_name, new_name)
add_column(table_name, column_name, type, options)
rename_column(table_name, column_name, new_column_name)
change_column(table_name, column_name, type, options)
remove_column(table_name, column_name)
add_index(table_name, column_name, index_type)
remove_index(table_name, column_name)
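As a sketch of how several of these schema methods combine in practice — the table and column names below are illustrative, not from this tutorial, and the class only runs inside a Rails application:

```ruby
class TweakBooks < ActiveRecord::Migration
  def self.up
    add_column :books, :isbn, :string           # add a new column to an existing table
    rename_column :books, :price, :list_price   # renaming keeps the existing data
    add_index :books, :isbn                     # speed up lookups by ISBN
  end

  def self.down
    remove_index :books, :isbn                  # undo the changes in reverse order
    rename_column :books, :list_price, :price
    remove_column :books, :isbn
  end
end
```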
Migrations support all the basic data types − The following is the list of data types that migration supports −
string − for small data types such as a title.

text − for longer pieces of textual data, such as the description.

integer − for whole numbers.

float − for decimals.

datetime and timestamp − store the date and time into a column.

date and time − store either the date only or time only.

binary − for storing data such as images, audio, or movies.

Boolean − for storing true or false values.
Valid column options are − The following is the list of valid column options.
limit ( :limit => “50” )

default (:default => “blah” )

null (:null => false implies NOT NULL)
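Combined with a column definition, these options read as follows (the table and column names here are hypothetical):

```ruby
create_table :books do |t|
  # a string column capped at 50 characters, non-nullable, with a fallback value
  t.column :title, :string, :limit => 50, :default => "Untitled", :null => false
end
```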
NOTE − Everything a Rails Migration does can also be done through a front-end GUI or directly at the SQL prompt, but Rails Migration makes all of these activities much easier.
See the Rails API for details on these.
Here is the generic syntax for creating a migration −
application_dir> rails generate migration table_name
This will create the file db/migrate/001_table_name.rb. A migration file contains the basic Ruby syntax that describes the data structure of a database table.
NOTE − Before running the migration generator, it is recommended to clean the existing migrations generated by model generators.
We will create two migrations corresponding to our two tables − books and subjects.
Books migration should be as follows −
tp> cd library
library> rails generate migration books
Above command generates the following code.
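The generator's output is not reproduced in the source; on Rails versions of this era it is typically an empty skeleton along these lines, ready to be filled in (the exact file name carries a numeric prefix):

```ruby
class Books < ActiveRecord::Migration
  def self.up
  end

  def self.down
  end
end
```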
subject migration should be as follows −
tp> cd library
library> rails generate migration subjects
Above command generates the following code.
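Again the generated code is not shown in the source; it is typically the same kind of empty skeleton:

```ruby
class Subjects < ActiveRecord::Migration
  def self.up
  end

  def self.down
  end
end
```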
Notice that you are using lower case for book and subject and plural form while creating migrations. This is a Rails paradigm that you should follow each time you create a Migration.
Go to db/migrate subdirectory of your application and edit each file one by one using any simple text editor.
Modify 001_books.rb as follows −
The ID column will be created automatically, so you don't need to define it here.
class Books < ActiveRecord::Migration
def self.up
create_table :books do |t|
t.column :title, :string, :limit => 32, :null => false
t.column :price, :float
t.column :subject_id, :integer
t.column :description, :text
t.column :created_at, :timestamp
end
end
def self.down
drop_table :books
end
end
The method self.up is used when migrating to a new version; self.down is used to roll back any changes if needed. At this moment, the above script will be used to create the books table.
Modify 002_subjects.rb as follows −
class Subjects < ActiveRecord::Migration
def self.up
create_table :subjects do |t|
t.column :name, :string
end
Subject.create :name => "Physics"
Subject.create :name => "Mathematics"
Subject.create :name => "Chemistry"
Subject.create :name => "Psychology"
Subject.create :name => "Geography"
end
def self.down
drop_table :subjects
end
end
The above script will be used to create subjects table and will create five records in the subjects table.
Now that you have created all the required migration files, it is time to execute them against the database. To do this, go to a command prompt, change to the library directory in which the application is located, and then run rake db:migrate as follows −
library> rake db:migrate
This will create a "schema_info" table if it doesn't exist, which tracks the current version of the database - each new migration will be a new version, and any new migrations will be run until your database is at the current version.
Rake is a Ruby build program similar to Unix make program that Rails takes advantage of, to simplify the execution of complex tasks such as updating a database's structure etc.
If you would like to specify what Rails environment to use for the migration, use the RAILS_ENV shell variable.
For example −
library> export RAILS_ENV = production
library> rake db:migrate
library> export RAILS_ENV = test
library> rake db:migrate
library> export RAILS_ENV = development
library> rake db:migrate
NOTE − In Windows, use "set RAILS_ENV = production" instead of export command.
Now we have our database and the required tables available. In the two subsequent chapters, we will explore two important components called Controller (ActionController) and View (ActionView).
Creating Controllers (Action Controller).
Creating Views (Action View).
[
{
"code": null,
"e": 2286,
"s": 2103,
"text": "Rails Migration allows you to use Ruby to define changes to your database schema, making it possible to use a version control system to keep things synchronized with the actual code."
},
{
"code": null,
"e": 2318,
"s": 2286,
"text": "This has many uses, including −"
},
{
"code": null,
"e": 2443,
"s": 2318,
"text": "Teams of developers − If one person makes a schema change, the other developers just need to update, and run \"rake migrate\"."
},
{
"code": null,
"e": 2568,
"s": 2443,
"text": "Teams of developers − If one person makes a schema change, the other developers just need to update, and run \"rake migrate\"."
},
{
"code": null,
"e": 2682,
"s": 2568,
"text": "Production servers − Run \"rake migrate\" when you roll out a new release to bring the database up to date as well."
},
{
"code": null,
"e": 2796,
"s": 2682,
"text": "Production servers − Run \"rake migrate\" when you roll out a new release to bring the database up to date as well."
},
{
"code": null,
"e": 2945,
"s": 2796,
"text": "Multiple machines − If you develop on both a desktop and a laptop, or in more than one location, migrations can help you keep them all synchronized."
},
{
"code": null,
"e": 3094,
"s": 2945,
"text": "Multiple machines − If you develop on both a desktop and a laptop, or in more than one location, migrations can help you keep them all synchronized."
},
{
"code": null,
"e": 3122,
"s": 3094,
"text": "create_table(name, options)"
},
{
"code": null,
"e": 3139,
"s": 3122,
"text": "drop_table(name)"
},
{
"code": null,
"e": 3172,
"s": 3139,
"text": "rename_table(old_name, new_name)"
},
{
"code": null,
"e": 3223,
"s": 3172,
"text": "add_column(table_name, column_name, type, options)"
},
{
"code": null,
"e": 3279,
"s": 3223,
"text": "rename_column(table_name, column_name, new_column_name)"
},
{
"code": null,
"e": 3333,
"s": 3279,
"text": "change_column(table_name, column_name, type, options)"
},
{
"code": null,
"e": 3372,
"s": 3333,
"text": "remove_column(table_name, column_name)"
},
{
"code": null,
"e": 3419,
"s": 3372,
"text": "add_index(table_name, column_name, index_type)"
},
{
"code": null,
"e": 3457,
"s": 3419,
"text": "remove_index(table_name, column_name)"
},
{
"code": null,
"e": 3569,
"s": 3457,
"text": "Migrations support all the basic data types − The following is the list of data types that migration supports −"
},
{
"code": null,
"e": 3616,
"s": 3569,
"text": "string − for small data types such as a title."
},
{
"code": null,
"e": 3663,
"s": 3616,
"text": "string − for small data types such as a title."
},
{
"code": null,
"e": 3730,
"s": 3663,
"text": "text − for longer pieces of textual data, such as the description."
},
{
"code": null,
"e": 3797,
"s": 3730,
"text": "text − for longer pieces of textual data, such as the description."
},
{
"code": null,
"e": 3826,
"s": 3797,
"text": "integer − for whole numbers."
},
{
"code": null,
"e": 3855,
"s": 3826,
"text": "integer − for whole numbers."
},
{
"code": null,
"e": 3877,
"s": 3855,
"text": "float − for decimals."
},
{
"code": null,
"e": 3899,
"s": 3877,
"text": "float − for decimals."
},
{
"code": null,
"e": 3963,
"s": 3899,
"text": "datetime and timestamp − store the date and time into a column."
},
{
"code": null,
"e": 4027,
"s": 3963,
"text": "datetime and timestamp − store the date and time into a column."
},
{
"code": null,
"e": 4084,
"s": 4027,
"text": "date and time − store either the date only or time only."
},
{
"code": null,
"e": 4141,
"s": 4084,
"text": "date and time − store either the date only or time only."
},
{
"code": null,
"e": 4201,
"s": 4141,
"text": "binary − for storing data such as images, audio, or movies."
},
{
"code": null,
"e": 4261,
"s": 4201,
"text": "binary − for storing data such as images, audio, or movies."
},
{
"code": null,
"e": 4305,
"s": 4261,
"text": "Boolean − for storing true or false values."
},
{
"code": null,
"e": 4349,
"s": 4305,
"text": "Boolean − for storing true or false values."
},
{
"code": null,
"e": 4427,
"s": 4349,
"text": "Valid column options are − The following is the list of valid column options."
},
{
"code": null,
"e": 4452,
"s": 4427,
"text": "limit ( :limit => “50” )"
},
{
"code": null,
"e": 4477,
"s": 4452,
"text": "limit ( :limit => “50” )"
},
{
"code": null,
"e": 4507,
"s": 4477,
"text": "default (:default => “blah” )"
},
{
"code": null,
"e": 4537,
"s": 4507,
"text": "default (:default => “blah” )"
},
{
"code": null,
"e": 4576,
"s": 4537,
"text": "null (:null => false implies NOT NULL)"
},
{
"code": null,
"e": 4615,
"s": 4576,
"text": "null (:null => false implies NOT NULL)"
},
{
"code": null,
"e": 4782,
"s": 4615,
"text": "NOTE − The activities done by Rails Migration can be done using any front-end GUI or directly on SQL prompt, but Rails Migration makes all those activities very easy."
},
{
"code": null,
"e": 4822,
"s": 4782,
"text": "See the Rails API for details on these."
},
{
"code": null,
"e": 4876,
"s": 4822,
"text": "Here is the generic syntax for creating a migration −"
},
{
"code": null,
"e": 4930,
"s": 4876,
"text": "application_dir> rails generate migration table_name\n"
},
{
"code": null,
"e": 5089,
"s": 4930,
"text": "This will create the file db/migrate/001_table_name.rb. A migration file contains the basic Ruby syntax that describes the data structure of a database table."
},
{
"code": null,
"e": 5218,
"s": 5089,
"text": "NOTE − Before running the migration generator, it is recommended to clean the existing migrations generated by model generators."
},
{
"code": null,
"e": 5304,
"s": 5218,
"text": "We will create two migrations corresponding to our three tables − books and subjects."
},
{
"code": null,
"e": 5343,
"s": 5304,
"text": "Books migration should be as follows −"
},
{
"code": null,
"e": 5399,
"s": 5343,
"text": "tp> cd library\nlibrary> rails generate migration books\n"
},
{
"code": null,
"e": 5443,
"s": 5399,
"text": "Above command generates the following code."
},
{
"code": null,
"e": 5484,
"s": 5443,
"text": "subject migration should be as follows −"
},
{
"code": null,
"e": 5543,
"s": 5484,
"text": "tp> cd library\nlibrary> rails generate migration subjects\n"
},
{
"code": null,
"e": 5587,
"s": 5543,
"text": "Above command generates the following code."
},
{
"code": null,
"e": 5770,
"s": 5587,
"text": "Notice that you are using lower case for book and subject and plural form while creating migrations. This is a Rails paradigm that you should follow each time you create a Migration."
},
{
"code": null,
"e": 5880,
"s": 5770,
"text": "Go to db/migrate subdirectory of your application and edit each file one by one using any simple text editor."
},
{
"code": null,
"e": 5913,
"s": 5880,
"text": "Modify 001_books.rb as follows −"
},
{
"code": null,
"e": 5987,
"s": 5913,
"text": "The ID column will be created automatically, so don't do it here as well."
},
{
"code": null,
"e": 6365,
"s": 5987,
"text": "class Books < ActiveRecord::Migration\n \n def self.up\n create_table :books do |t|\n t.column :title, :string, :limit => 32, :null => false\n t.column :price, :float\n t.column :subject_id, :integer\n t.column :description, :text\n t.column :created_at, :timestamp\n end\n end\n\n def self.down\n drop_table :books\n end\nend\n"
},
{
"code": null,
"e": 6548,
"s": 6365,
"text": "The method self.up is used when migrating to a new version, self.down is used to roll back any changes if needed. At this moment, the above script will be used to create books table."
},
{
"code": null,
"e": 6584,
"s": 6548,
"text": "Modify 002_subjects.rb as follows −"
},
{
"code": null,
"e": 7003,
"s": 6584,
"text": "class Subjects < ActiveRecord::Migration\n def self.up\n \n create_table :subjects do |t|\n t.column :name, :string\n end\n\t\n Subject.create :name => \"Physics\"\n Subject.create :name => \"Mathematics\"\n Subject.create :name => \"Chemistry\"\n Subject.create :name => \"Psychology\"\n Subject.create :name => \"Geography\"\n end\n\n def self.down\n drop_table :subjects\n end\nend\n"
},
{
"code": null,
"e": 7110,
"s": 7003,
"text": "The above script will be used to create subjects table and will create five records in the subjects table."
},
{
"code": null,
"e": 7363,
"s": 7110,
"text": "Now that you have created all the required migration files. It is time to execute them against the database. To do this, go to a command prompt and go to the library directory in which the application is located, and then type rake migrate as follows −"
},
{
"code": null,
"e": 7389,
"s": 7363,
"text": "library> rake db:migrate\n"
},
{
"code": null,
"e": 7624,
"s": 7389,
"text": "This will create a \"schema_info\" table if it doesn't exist, which tracks the current version of the database - each new migration will be a new version, and any new migrations will be run until your database is at the current version."
},
{
"code": null,
"e": 7801,
"s": 7624,
"text": "Rake is a Ruby build program similar to Unix make program that Rails takes advantage of, to simplify the execution of complex tasks such as updating a database's structure etc."
},
{
"code": null,
"e": 7913,
"s": 7801,
"text": "If you would like to specify what Rails environment to use for the migration, use the RAILS_ENV shell variable."
},
{
"code": null,
"e": 7927,
"s": 7913,
"text": "For example −"
},
{
"code": null,
"e": 8115,
"s": 7927,
"text": "library> export RAILS_ENV = production\nlibrary> rake db:migrate\nlibrary> export RAILS_ENV = test\nlibrary> rake db:migrate\nlibrary> export RAILS_ENV = development\nlibrary> rake db:migrate\n"
},
{
"code": null,
"e": 8194,
"s": 8115,
"text": "NOTE − In Windows, use \"set RAILS_ENV = production\" instead of export command."
},
{
"code": null,
"e": 8387,
"s": 8194,
"text": "Now we have our database and the required tables available. In the two subsequent chapters, we will explore two important components called Controller (ActionController) and View (ActionView)."
},
{
"code": null,
"e": 8429,
"s": 8387,
"text": "Creating Controllers (Action Controller)."
},
{
"code": null,
"e": 8459,
"s": 8429,
"text": "Creating Views (Action View)."
},
{
"code": null,
"e": 8466,
"s": 8459,
"text": " Print"
},
{
"code": null,
"e": 8477,
"s": 8466,
"text": " Add Notes"
}
] |
Arduino - Humidity Sensor | In this section, we will learn how to interface our Arduino board with different sensors. We will discuss the following sensors −
Humidity sensor (DHT22)
Temperature sensor (LM35)
Water detector sensor (Simple Water Trigger)
PIR SENSOR
ULTRASONIC SENSOR
GPS
The DHT-22 (also known as the AM2302) is a digital-output relative humidity and temperature sensor. It uses a capacitive humidity sensor and a thermistor to measure the surrounding air, and sends a digital signal on the data pin.
In this example, you will learn how to use this sensor with Arduino UNO. The room temperature and humidity will be printed to the serial monitor.
The connections are simple. The first pin on the left to 3-5V power, the second pin to the data input pin and the right-most pin to the ground.
Power − 3-5V
Max Current − 2.5mA
Humidity − 0-100%, 2-5% accuracy
Temperature − −40 to 80°C, ±0.5°C accuracy
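The library reports temperature in Celsius by default and in Fahrenheit when asked (`readTemperature(true)` in the sketch below). The two scales are related by a simple linear conversion, sketched here in TypeScript purely for illustration — the Arduino library does this internally:

```typescript
// Celsius <-> Fahrenheit, the arithmetic behind readTemperature(true).
function cToF(c: number): number {
  return c * 9 / 5 + 32;
}

function fToC(f: number): number {
  return (f - 32) * 5 / 9;
}

// The DHT22's rated -40 to 80 degC span, expressed in degF:
console.log(cToF(-40), cToF(80)); // -40 176
```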
You will need the following components −
1 × Breadboard
1 × Arduino Uno R3
1 × DHT22
1 × 10K ohm resistor
Follow the circuit diagram and hook up the components on the breadboard as shown in the image below.
Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking New.
// Example testing sketch for various DHT humidity/temperature sensors
#include "DHT.h"
#define DHTPIN 2 // what digital pin we're connected to
// Uncomment whatever type you're using!
//#define DHTTYPE DHT11 // DHT 11
#define DHTTYPE DHT22 // DHT 22 (AM2302), AM2321
//#define DHTTYPE DHT21 // DHT 21 (AM2301)
// Connect pin 1 (on the left) of the sensor to +5V
// NOTE: If using a board with 3.3V logic like an Arduino Due connect pin 1
// to 3.3V instead of 5V!
// Connect pin 2 of the sensor to whatever your DHTPIN is
// Connect pin 4 (on the right) of the sensor to GROUND
// Connect a 10K resistor from pin 2 (data) to pin 1 (power) of the sensor
// Initialize DHT sensor.
// Note that older versions of this library took an optional third parameter to
// tweak the timings for faster processors. This parameter is no longer needed
// as the current DHT reading algorithm adjusts itself to work on faster procs.
DHT dht(DHTPIN, DHTTYPE);
void setup() {
Serial.begin(9600);
Serial.println("DHTxx test!");
dht.begin();
}
void loop() {
delay(2000); // Wait a few seconds between measurements
float h = dht.readHumidity();
// Reading temperature or humidity takes about 250 milliseconds!
float t = dht.readTemperature();
// Read temperature as Celsius (the default)
float f = dht.readTemperature(true);
// Read temperature as Fahrenheit (isFahrenheit = true)
// Check if any reads failed and exit early (to try again).
if (isnan(h) || isnan(t) || isnan(f)) {
Serial.println("Failed to read from DHT sensor!");
return;
}
// Compute heat index in Fahrenheit (the default)
float hif = dht.computeHeatIndex(f, h);
// Compute heat index in Celsius (isFahreheit = false)
float hic = dht.computeHeatIndex(t, h, false);
Serial.print ("Humidity: ");
Serial.print (h);
Serial.print (" %\t");
Serial.print ("Temperature: ");
Serial.print (t);
Serial.print (" *C ");
Serial.print (f);
Serial.print (" *F\t");
Serial.print ("Heat index: ");
Serial.print (hic);
Serial.print (" *C ");
Serial.print (hif);
Serial.println (" *F");
}
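The `computeHeatIndex()` calls in the sketch above are library functions; at their core is the NOAA Rothfusz regression. A simplified stand-alone sketch of that formula follows (written in TypeScript purely for illustration — the actual Adafruit library also applies corrections for low humidity and low temperature that are omitted here):

```typescript
// Rothfusz regression: approximate heat index in degF from air
// temperature t (degF) and relative humidity rh (%).
// Reasonable only for roughly t >= 80 degF.
function heatIndexF(t: number, rh: number): number {
  return -42.379
    + 2.04901523 * t
    + 10.14333127 * rh
    - 0.22475541 * t * rh
    - 0.00683783 * t * t
    - 0.05481717 * rh * rh
    + 0.00122874 * t * t * rh
    + 0.00085282 * t * rh * rh
    - 0.00000199 * t * t * rh * rh;
}

// 90 degF at 70% relative humidity "feels like" roughly 106 degF.
console.log(heatIndexF(90, 70).toFixed(1));
```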
DHT22 sensor has four terminals (Vcc, DATA, NC, GND), which are connected to the board as follows −
DATA pin to Arduino pin number 2
Vcc pin to 5 volt of Arduino board
GND pin to the ground of Arduino board
We need to connect a 10K ohm resistor (pull-up resistor) between the DATA pin and the Vcc pin.
Once hardware connections are done, you need to add DHT22 library to your Arduino library file as described earlier.
You will see the temperature and humidity display on serial port monitor which is updated every 2 seconds.
Bookmark this page | [
{
"code": null,
"e": 3000,
"s": 2870,
"text": "In this section, we will learn how to interface our Arduino board with different sensors. We will discuss the following sensors −"
},
{
"code": null,
"e": 3024,
"s": 3000,
"text": "Humidity sensor (DHT22)"
},
{
"code": null,
"e": 3050,
"s": 3024,
"text": "Temperature sensor (LM35)"
},
{
"code": null,
"e": 3095,
"s": 3050,
"text": "Water detector sensor (Simple Water Trigger)"
},
{
"code": null,
"e": 3106,
"s": 3095,
"text": "PIR SENSOR"
},
{
"code": null,
"e": 3124,
"s": 3106,
"text": "ULTRASONIC SENSOR"
},
{
"code": null,
"e": 3128,
"s": 3124,
"text": "GPS"
},
{
"code": null,
"e": 3356,
"s": 3128,
"text": "The DHT-22 (also named as AM2302) is a digital-output, relative humidity, and temperature sensor. It uses a capacitive humidity sensor and a thermistor to measure the surrounding air, and sends a digital signal on the data pin."
},
{
"code": null,
"e": 3502,
"s": 3356,
"text": "In this example, you will learn how to use this sensor with Arduino UNO. The room temperature and humidity will be printed to the serial monitor."
},
{
"code": null,
"e": 3646,
"s": 3502,
"text": "The connections are simple. The first pin on the left to 3-5V power, the second pin to the data input pin and the right-most pin to the ground."
},
{
"code": null,
"e": 3659,
"s": 3646,
"text": "Power − 3-5V"
},
{
"code": null,
"e": 3672,
"s": 3659,
"text": "Power − 3-5V"
},
{
"code": null,
"e": 3692,
"s": 3672,
"text": "Max Current − 2.5mA"
},
{
"code": null,
"e": 3712,
"s": 3692,
"text": "Max Current − 2.5mA"
},
{
"code": null,
"e": 3745,
"s": 3712,
"text": "Humidity − 0-100%, 2-5% accuracy"
},
{
"code": null,
"e": 3778,
"s": 3745,
"text": "Humidity − 0-100%, 2-5% accuracy"
},
{
"code": null,
"e": 3820,
"s": 3778,
      "text": "Temperature − −40 to 80°C, ±0.5°C accuracy"
},
{
"code": null,
"e": 3862,
"s": 3820,
      "text": "Temperature − −40 to 80°C, ±0.5°C accuracy"
},
{
"code": null,
"e": 3903,
"s": 3862,
"text": "You will need the following components −"
},
{
"code": null,
"e": 3918,
"s": 3903,
"text": "1 × Breadboard"
},
{
"code": null,
"e": 3937,
"s": 3918,
"text": "1 × Arduino Uno R3"
},
{
"code": null,
"e": 3947,
"s": 3937,
"text": "1 × DHT22"
},
{
"code": null,
"e": 3968,
"s": 3947,
"text": "1 × 10K ohm resistor"
},
{
"code": null,
"e": 4069,
"s": 3968,
"text": "Follow the circuit diagram and hook up the components on the breadboard as shown in the image below."
},
{
"code": null,
"e": 4215,
"s": 4069,
"text": "Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking New."
},
{
"code": null,
"e": 6344,
"s": 4215,
      "text": "// Example testing sketch for various DHT humidity/temperature sensors\n\n#include \"DHT.h\"\n#define DHTPIN 2 // what digital pin we're connected to\n// Uncomment whatever type you're using!\n//#define DHTTYPE DHT11 // DHT 11\n#define DHTTYPE DHT22 // DHT 22 (AM2302), AM2321\n//#define DHTTYPE DHT21 // DHT 21 (AM2301)\n// Connect pin 1 (on the left) of the sensor to +5V\n// NOTE: If using a board with 3.3V logic like an Arduino Due connect pin 1\n// to 3.3V instead of 5V!\n// Connect pin 2 of the sensor to whatever your DHTPIN is\n// Connect pin 4 (on the right) of the sensor to GROUND\n// Connect a 10K resistor from pin 2 (data) to pin 1 (power) of the sensor\n// Initialize DHT sensor.\n// Note that older versions of this library took an optional third parameter to\n// tweak the timings for faster processors. This parameter is no longer needed\n// as the current DHT reading algorithm adjusts itself to work on faster procs.\nDHT dht(DHTPIN, DHTTYPE);\n\nvoid setup() {\n   Serial.begin(9600);\n   Serial.println(\"DHTxx test!\");\n   dht.begin();\n}\n\nvoid loop() {\n   delay(2000); // Wait a few seconds between measurements\n   float h = dht.readHumidity();\n   // Reading temperature or humidity takes about 250 milliseconds!\n   float t = dht.readTemperature();\n   // Read temperature as Celsius (the default)\n   float f = dht.readTemperature(true);\n   // Read temperature as Fahrenheit (isFahrenheit = true)\n   // Check if any reads failed and exit early (to try again).\n   if (isnan(h) || isnan(t) || isnan(f)) {\n      Serial.println(\"Failed to read from DHT sensor!\");\n      return;\n   }\n   \n   // Compute heat index in Fahrenheit (the default)\n   float hif = dht.computeHeatIndex(f, h);\n   // Compute heat index in Celsius (isFahreheit = false)\n   float hic = dht.computeHeatIndex(t, h, false);\n   Serial.print (\"Humidity: \");\n   Serial.print (h);\n   Serial.print (\" %\\t\");\n   Serial.print (\"Temperature: \");\n   Serial.print (t);\n   Serial.print (\" *C \");\n   Serial.print (f);\n   Serial.print (\" *F\\t\");\n   Serial.print (\"Heat index: \");\n   Serial.print (hic);\n   Serial.print (\" *C \");\n   Serial.print (hif);\n   Serial.println (\" *F\");\n}"
},
{
"code": null,
"e": 6444,
"s": 6344,
"text": "DHT22 sensor has four terminals (Vcc, DATA, NC, GND), which are connected to the board as follows −"
},
{
"code": null,
"e": 6477,
"s": 6444,
"text": "DATA pin to Arduino pin number 2"
},
{
"code": null,
"e": 6512,
"s": 6477,
"text": "Vcc pin to 5 volt of Arduino board"
},
{
"code": null,
"e": 6551,
"s": 6512,
"text": "GND pin to the ground of Arduino board"
},
{
"code": null,
"e": 6639,
"s": 6551,
"text": "We need to connect 10k ohm resistor (pull up resistor) between the DATA and the Vcc pin"
},
{
"code": null,
"e": 6756,
"s": 6639,
"text": "Once hardware connections are done, you need to add DHT22 library to your Arduino library file as described earlier."
},
{
"code": null,
"e": 6863,
"s": 6756,
"text": "You will see the temperature and humidity display on serial port monitor which is updated every 2 seconds."
},
{
"code": null,
"e": 6898,
"s": 6863,
"text": "\n 65 Lectures \n 6.5 hours \n"
},
{
"code": null,
"e": 6909,
"s": 6898,
"text": " Amit Rana"
},
{
"code": null,
"e": 6942,
"s": 6909,
"text": "\n 43 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 6953,
"s": 6942,
"text": " Amit Rana"
},
{
"code": null,
"e": 6986,
"s": 6953,
"text": "\n 20 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 6999,
"s": 6986,
"text": " Ashraf Said"
},
{
"code": null,
"e": 7034,
"s": 6999,
"text": "\n 19 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 7047,
"s": 7034,
"text": " Ashraf Said"
},
{
"code": null,
"e": 7079,
"s": 7047,
"text": "\n 11 Lectures \n 47 mins\n"
},
{
"code": null,
"e": 7092,
"s": 7079,
"text": " Ashraf Said"
},
{
"code": null,
"e": 7123,
"s": 7092,
"text": "\n 9 Lectures \n 41 mins\n"
},
{
"code": null,
"e": 7136,
"s": 7123,
"text": " Ashraf Said"
},
{
"code": null,
"e": 7143,
"s": 7136,
"text": " Print"
},
{
"code": null,
"e": 7154,
"s": 7143,
"text": " Add Notes"
}
] |
Angular 4 - Components | Major part of the development with Angular 4 is done in the components. Components are basically classes that interact with the .html file of the component, which gets displayed on the browser. We have seen the file structure in one of our previous chapters. The file structure has the app component and it consists of the following files −
app.component.css
app.component.html
app.component.spec.ts
app.component.ts
app.module.ts
The above files were created by default when we created new project using the angular-cli command.
If you open up the app.module.ts file, it has some libraries which are imported and also a declarative which is assigned the appcomponent as follows −
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
The declarations include the AppComponent variable, which we have already imported. This becomes the parent component.
Now, angular-cli has a command to create your own component. However, the app component which is created by default will always remain the parent and the next components created will form the child components.
Let us now run the command to create the component.
ng g component new-cmp
When you run the above command in the command line, you will receive the following output −
C:\projectA4\Angular 4-app>ng g component new-cmp
installing component
create src\app\new-cmp\new-cmp.component.css
create src\app\new-cmp\new-cmp.component.html
create src\app\new-cmp\new-cmp.component.spec.ts
create src\app\new-cmp\new-cmp.component.ts
update src\app\app.module.ts
Now, if we go and check the file structure, we will see a new folder new-cmp created under the src/app folder.
The following files are created in the new-cmp folder −
new-cmp.component.css − css file for the new component is created.
new-cmp.component.html − html file is created.
new-cmp.component.spec.ts − this can be used for unit testing.
new-cmp.component.ts − here, we can define the module, properties, etc.
Changes are added to the app.module.ts file as follows −
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { NewCmpComponent } from './new-cmp/new-cmp.component';
// includes the new-cmp component we created
@NgModule({
declarations: [
AppComponent,
NewCmpComponent // here it is added in declarations and will behave as a child component
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent] //for bootstrap the AppComponent the main app component is given.
})
export class AppModule { }
The new-cmp.component.ts file is generated as follows −
import { Component, OnInit } from '@angular/core'; // here angular/core is imported .
@Component({
   // this is a decorator, which starts with the @ sign. The Component name must match the import above.
selector: 'app-new-cmp', //
templateUrl: './new-cmp.component.html',
// reference to the html file created in the new component.
styleUrls: ['./new-cmp.component.css'] // reference to the style file.
})
export class NewCmpComponent implements OnInit {
constructor() { }
ngOnInit() {}
}
If you look at the above new-cmp.component.ts file, it creates a new class called NewCmpComponent, which implements OnInit. It has a constructor and a method called ngOnInit(); ngOnInit() is called by default when the class is executed.
Let us check how the flow works. The app component, which is created by default, becomes the parent component. Any component added later becomes a child component.
When we hit the URL http://localhost:4200/ in the browser, it first executes the index.html file, which is shown below −
<!doctype html>
<html lang = "en">
<head>
<meta charset = "utf-8">
<title>Angular 4App</title>
<base href = "/">
<meta name="viewport" content="width = device-width, initial-scale = 1">
<link rel = "icon" type = "image/x-icon" href = "favicon.ico">
</head>
<body>
<app-root></app-root>
</body>
</html>
The above is a normal html file, and nothing in it is displayed in the browser directly. Take a look at the tag in the body section.
<app-root></app-root>
This is the root tag created by Angular by default. This tag has its reference in the main.ts file.
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { environment } from './environments/environment';
if (environment.production) {
enableProdMode();
}
platformBrowserDynamic().bootstrapModule(AppModule);
AppModule is imported from the app folder of the main parent module, and it is passed to bootstrapModule, which makes the AppModule load.
Let us now see the app.module.ts file −
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { NewCmpComponent } from './new-cmp/new-cmp.component';
@NgModule({
declarations: [
AppComponent,
NewCmpComponent
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
Here, AppComponent is the name given, i.e., the variable that stores the reference of app.component.ts, and the same is given to the bootstrap. Let us now see the app.component.ts file.
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'Angular 4 Project!';
}
Component is imported from the Angular core and is used in the decorator as −
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
In the decorator, references to the selector, templateUrl and styleUrls are given. The selector here is nothing but the tag which is placed in the index.html file that we saw above.
The class AppComponent has a variable called title, which is displayed in the browser.
The @Component uses the templateUrl called app.component.html which is as follows −
<!--The content below is only a placeholder and can be replaced.-->
<div style="text-align:center">
<h1>
Welcome to {{title}}.
</h1>
</div>
It has just the html code and the variable title in curly brackets. It gets replaced with the value, which is present in the app.component.ts file. This is called binding. We will discuss the concept of binding in a subsequent chapter.
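As a toy illustration of what that {{title}} substitution amounts to, here is a naive string-replacement sketch. This is not Angular's real implementation — the actual compiler also performs change detection, expression evaluation and HTML escaping:

```typescript
// Naive template interpolation: replace each {{name}} occurrence with
// the component property of the same name.
function interpolate(template: string, component: Record<string, unknown>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_m: string, name: string) =>
    String(component[name]));
}

const template = "Welcome to {{title}}.";
const appComponent = { title: "Angular 4 Project!" };
console.log(interpolate(template, appComponent)); // Welcome to Angular 4 Project!.
```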
We have now created a new component called new-cmp. It gets included in the app.module.ts file when the command for creating a new component is run.
app.module.ts has a reference to the new component created.
Let us now check the new files created in new-cmp.
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-new-cmp',
templateUrl: './new-cmp.component.html',
styleUrls: ['./new-cmp.component.css']
})
export class NewCmpComponent implements OnInit {
constructor() { }
ngOnInit() {}
}
Here, we have to import the core too. The reference of the component is used in the decorator.
The decorator has the selector called app-new-cmp, the templateUrl and the styleUrls.
The .html called new-cmp.component.html is as follows −
<p>
new-cmp works!
</p>
As seen above, we have the html code, i.e., the p tag. The style file is empty as we do not need any styling at present. But when we run the project, we do not see anything related to the new component getting displayed in the browser. Let us now add something and the same can be seen in the browser later.
The selector, i.e., app-new-cmp, needs to be added in the app.component.html file as follows −
<!--The content below is only a placeholder and can be replaced.-->
<div style="text-align:center">
<h1>
Welcome to {{title}}.
</h1>
</div>
<app-new-cmp></app-new-cmp>
When the <app-new-cmp></app-new-cmp> tag is added, all that is present in the .html file of the new component created will get displayed on the browser along with the parent component data.
Let us see the new component .html file and the new-cmp.component.ts file.
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-new-cmp',
templateUrl: './new-cmp.component.html',
styleUrls: ['./new-cmp.component.css']
})
export class NewCmpComponent implements OnInit {
newcomponent = "Entered in new component created";
constructor() {}
ngOnInit() { }
}
In the class, we have added one variable called newcomponent, and its value is “Entered in new component created”.
The above variable is bound in the new-cmp.component.html file as follows −
<p>
{{newcomponent}}
</p>
<p>
new-cmp works!
</p>
Now since we have included the <app-new-cmp></app-new-cmp> selector in app.component.html, which is the .html of the parent component, the content present in the new component's .html file (new-cmp.component.html) gets displayed on the browser as follows −
Similarly, we can create components and link the same using the selector in the app.component.html file as per our requirements.
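Stripped of its decorator metadata, such a component is just a TypeScript class whose properties the template binds to. A minimal sketch mirroring the new-cmp component above, runnable without Angular (the @Component decorator is deliberately omitted here):

```typescript
// The data side of new-cmp.component.ts, without the @Component decorator.
class NewCmpComponent {
  newcomponent = "Entered in new component created";

  // Lifecycle hook Angular would call once after constructing the class.
  ngOnInit(): void {}
}

const cmp = new NewCmpComponent();
cmp.ngOnInit();
console.log(cmp.newcomponent); // Entered in new component created
```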
Bookmark this page | [
{
"code": null,
"e": 2333,
"s": 1992,
"text": "Major part of the development with Angular 4 is done in the components. Components are basically classes that interact with the .html file of the component, which gets displayed on the browser. We have seen the file structure in one of our previous chapters. The file structure has the app component and it consists of the following files −"
},
{
"code": null,
"e": 2351,
"s": 2333,
"text": "app.component.css"
},
{
"code": null,
"e": 2369,
"s": 2351,
"text": "app.component.css"
},
{
"code": null,
"e": 2388,
"s": 2369,
"text": "app.component.html"
},
{
"code": null,
"e": 2407,
"s": 2388,
"text": "app.component.html"
},
{
"code": null,
"e": 2429,
"s": 2407,
"text": "app.component.spec.ts"
},
{
"code": null,
"e": 2451,
"s": 2429,
"text": "app.component.spec.ts"
},
{
"code": null,
"e": 2468,
"s": 2451,
"text": "app.component.ts"
},
{
"code": null,
"e": 2485,
"s": 2468,
"text": "app.component.ts"
},
{
"code": null,
"e": 2499,
"s": 2485,
"text": "app.module.ts"
},
{
"code": null,
"e": 2513,
"s": 2499,
"text": "app.module.ts"
},
{
"code": null,
"e": 2612,
"s": 2513,
"text": "The above files were created by default when we created new project using the angular-cli command."
},
{
"code": null,
"e": 2763,
"s": 2612,
"text": "If you open up the app.module.ts file, it has some libraries which are imported and also a declarative which is assigned the appcomponent as follows −"
},
{
"code": null,
"e": 3088,
"s": 2763,
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { AppComponent } from './app.component';\n\n@NgModule({\n declarations: [\n AppComponent\n ],\n imports: [\n BrowserModule\n ],\n providers: [],\n bootstrap: [AppComponent]\n})\n\nexport class AppModule { }\n"
},
{
"code": null,
"e": 3207,
"s": 3088,
"text": "The declarations include the AppComponent variable, which we have already imported. This becomes the parent component."
},
{
"code": null,
"e": 3417,
"s": 3207,
"text": "Now, angular-cli has a command to create your own component. However, the app component which is created by default will always remain the parent and the next components created will form the child components."
},
{
"code": null,
"e": 3469,
"s": 3417,
"text": "Let us now run the command to create the component."
},
{
"code": null,
"e": 3493,
"s": 3469,
"text": "ng g component new-cmp\n"
},
{
"code": null,
"e": 3585,
"s": 3493,
"text": "When you run the above command in the command line, you will receive the following output −"
},
{
"code": null,
"e": 3885,
"s": 3585,
"text": "C:\\projectA4\\Angular 4-app>ng g component new-cmp\ninstalling component\n create src\\app\\new-cmp\\new-cmp.component.css\n create src\\app\\new-cmp\\new-cmp.component.html\n create src\\app\\new-cmp\\new-cmp.component.spec.ts\n create src\\app\\new-cmp\\new-cmp.component.ts\n update src\\app\\app.module.ts\n"
},
{
"code": null,
"e": 3998,
"s": 3885,
"text": "Now, if we go and check the file structure, we will get the new-cmp new folder created under the src/app folder."
},
{
"code": null,
"e": 4054,
"s": 3998,
"text": "The following files are created in the new-cmp folder −"
},
{
"code": null,
"e": 4121,
"s": 4054,
"text": "new-cmp.component.css − css file for the new component is created."
},
{
"code": null,
"e": 4188,
"s": 4121,
"text": "new-cmp.component.css − css file for the new component is created."
},
{
"code": null,
"e": 4235,
"s": 4188,
"text": "new-cmp.component.html − html file is created."
},
{
"code": null,
"e": 4282,
"s": 4235,
"text": "new-cmp.component.html − html file is created."
},
{
"code": null,
"e": 4345,
"s": 4282,
"text": "new-cmp.component.spec.ts − this can be used for unit testing."
},
{
"code": null,
"e": 4408,
"s": 4345,
"text": "new-cmp.component.spec.ts − this can be used for unit testing."
},
{
"code": null,
"e": 4480,
"s": 4408,
"text": "new-cmp.component.ts − here, we can define the module, properties, etc."
},
{
"code": null,
"e": 4552,
"s": 4480,
"text": "new-cmp.component.ts − here, we can define the module, properties, etc."
},
{
"code": null,
"e": 4609,
"s": 4552,
"text": "Changes are added to the app.module.ts file as follows −"
},
{
"code": null,
"e": 5203,
"s": 4609,
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { AppComponent } from './app.component';\nimport { NewCmpComponent } from './new-cmp/new-cmp.component';\n// includes the new-cmp component we created\n\n@NgModule({\n declarations: [\n AppComponent,\n NewCmpComponent // here it is added in declarations and will behave as a child component\n ],\n imports: [\n BrowserModule\n ],\n providers: [],\n bootstrap: [AppComponent] //for bootstrap the AppComponent the main app component is given.\n})\n\nexport class AppModule { }"
},
{
"code": null,
"e": 5259,
"s": 5203,
"text": "The new-cmp.component.ts file is generated as follows −"
},
{
"code": null,
"e": 5774,
"s": 5259,
"text": "import { Component, OnInit } from '@angular/core'; // here angular/core is imported .\n\n@Component({\n // this is a declarator which starts with @ sign. The component word marked in bold needs to be the same.\n selector: 'app-new-cmp', //\n templateUrl: './new-cmp.component.html', \n // reference to the html file created in the new component.\n styleUrls: ['./new-cmp.component.css'] // reference to the style file.\n})\n\nexport class NewCmpComponent implements OnInit {\n constructor() { }\n ngOnInit() {}\n}"
},
{
"code": null,
"e": 6011,
"s": 5774,
"text": "If you see the above new-cmp.component.ts file, it creates a new class called NewCmpComponent, which implements OnInit.In, which has a constructor and a method called ngOnInit(). ngOnInit is called by default when the class is executed."
},
{
"code": null,
"e": 6181,
"s": 6011,
"text": "Let us check how the flow works. Now, the app component, which is created by default becomes the parent component. Any component added later becomes the child component."
},
{
"code": null,
"e": 6301,
"s": 6181,
"text": "When we hit the url in the http://localhost:4200/ browser, it first executes the index.html file which is shown below −"
},
{
"code": null,
"e": 6655,
"s": 6301,
"text": "<!doctype html>\n<html lang = \"en\">\n <head>\n <meta charset = \"utf-8\">\n <title>Angular 4App</title>\n <base href = \"/\">\n <meta name=\"viewport\" content=\"width = device-width, initial-scale = 1\">\n <link rel = \"icon\" type = \"image/x-icon\" href = \"favicon.ico\">\n </head>\n \n <body>\n <app-root></app-root>\n </body>\n</html>"
},
{
"code": null,
"e": 6792,
"s": 6655,
"text": "The above is the normal html file and we do not see anything that is printed in the browser. Take a look at the tag in the body section."
},
{
"code": null,
"e": 6815,
"s": 6792,
"text": "<app-root></app-root>\n"
},
{
"code": null,
"e": 6919,
"s": 6815,
"text": "This is the root tag created by the Angular by default. This tag has the reference in the main.ts file."
},
{
"code": null,
"e": 7255,
"s": 6919,
"text": "import { enableProdMode } from '@angular/core';\nimport { platformBrowserDynamic } from '@angular/platform-browser-dynamic';\nimport { AppModule } from './app/app.module';\nimport { environment } from './environments/environment';\n\nif (environment.production) {\n enableProdMode();\n}\n\nplatformBrowserDynamic().bootstrapModule(AppModule);"
},
{
"code": null,
"e": 7396,
"s": 7255,
"text": "AppModule is imported from the app of the main parent module, and the same is given to the bootstrap Module, which makes the appmodule load."
},
{
"code": null,
"e": 7436,
"s": 7396,
"text": "Let us now see the app.module.ts file −"
},
{
"code": null,
"e": 7846,
"s": 7436,
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { AppComponent } from './app.component';\nimport { NewCmpComponent } from './new-cmp/new-cmp.component';\n\n@NgModule({\n declarations: [\n AppComponent,\n NewCmpComponent\n ],\n imports: [\n BrowserModule\n ],\n providers: [],\n bootstrap: [AppComponent]\n})\n\nexport class AppModule { }"
},
{
"code": null,
"e": 8037,
"s": 7846,
"text": "Here, the AppComponent is the name given, i.e., the variable to store the reference of the app. Component.ts and the same is given to the bootstrap. Let us now see the app.component.ts file."
},
{
"code": null,
"e": 8264,
"s": 8037,
"text": "import { Component } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\n\nexport class AppComponent {\n title = 'Angular 4 Project!';\n}"
},
{
"code": null,
"e": 8363,
"s": 8264,
"text": "Angular core is imported and referred as the Component and the same is used in the Declarator as −"
},
{
"code": null,
"e": 8483,
"s": 8363,
"text": "@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\n"
},
{
"code": null,
"e": 8663,
"s": 8483,
"text": "In the declarator reference to the selector, templateUrl and styleUrl are given. The selector here is nothing but the tag which is placed in the index.html file that we saw above."
},
{
"code": null,
"e": 8750,
"s": 8663,
"text": "The class AppComponent has a variable called title, which is displayed in the browser."
},
{
"code": null,
"e": 8834,
"s": 8750,
"text": "The @Component uses the templateUrl called app.component.html which is as follows −"
},
{
"code": null,
"e": 8987,
"s": 8834,
"text": "<!--The content below is only a placeholder and can be replaced.-->\n<div style=\"text-align:center\">\n <h1>\n Welcome to {{title}}.\n </h1>\n</div>\n"
},
{
"code": null,
"e": 9223,
"s": 8987,
"text": "It has just the html code and the variable title in curly brackets. It gets replaced with the value, which is present in the app.component.ts file. This is called binding. We will discuss the concept of binding in a subsequent chapter."
},
{
"code": null,
"e": 9384,
"s": 9223,
"text": "Now that we have created a new component called new-cmp. The same gets included in the app.module.ts file, when the command is run for creating a new component."
},
{
"code": null,
"e": 9444,
"s": 9384,
"text": "app.module.ts has a reference to the new component created."
},
{
"code": null,
"e": 9495,
"s": 9444,
"text": "Let us now check the new files created in new-cmp."
},
{
"code": null,
"e": 9767,
"s": 9495,
"text": "import { Component, OnInit } from '@angular/core';\n@Component({\n selector: 'app-new-cmp',\n templateUrl: './new-cmp.component.html',\n styleUrls: ['./new-cmp.component.css']\n})\n\nexport class NewCmpComponent implements OnInit {\n constructor() { }\n ngOnInit() {}\n}\n"
},
{
"code": null,
"e": 9863,
"s": 9767,
"text": "Here, we have to import the core too. The reference of the component is used in the declarator."
},
{
"code": null,
"e": 9948,
"s": 9863,
"text": "The declarator has the selector called app-new-cmp and the templateUrl and styleUrl."
},
{
"code": null,
"e": 10004,
"s": 9948,
"text": "The .html called new-cmp.component.html is as follows −"
},
{
"code": null,
"e": 10032,
"s": 10004,
"text": "<p>\n new-cmp works!\n</p>\n"
},
{
"code": null,
"e": 10340,
"s": 10032,
"text": "As seen above, we have the html code, i.e., the p tag. The style file is empty as we do not need any styling at present. But when we run the project, we do not see anything related to the new component getting displayed in the browser. Let us now add something and the same can be seen in the browser later."
},
{
"code": null,
"e": 10435,
"s": 10340,
"text": "The selector, i.e., app-new-cmp needs to be added in the app.component .html file as follows −"
},
{
"code": null,
"e": 10617,
"s": 10435,
"text": "<!--The content below is only a placeholder and can be replaced.-->\n<div style=\"text-align:center\">\n <h1>\n Welcome to {{title}}.\n </h1>\n</div>\n\n<app-new-cmp></app-new-cmp>\n"
},
{
"code": null,
"e": 10807,
"s": 10617,
"text": "When the <app-new-cmp></app-new-cmp> tag is added, all that is present in the .html file of the new component created will get displayed on the browser along with the parent component data."
},
{
"code": null,
"e": 10882,
"s": 10807,
"text": "Let us see the new component .html file and the new-cmp.component.ts file."
},
{
"code": null,
"e": 11209,
"s": 10882,
"text": "import { Component, OnInit } from '@angular/core';\n\n@Component({\n selector: 'app-new-cmp',\n templateUrl: './new-cmp.component.html',\n styleUrls: ['./new-cmp.component.css']\n})\n\nexport class NewCmpComponent implements OnInit {\n newcomponent = \"Entered in new component created\";\n constructor() {}\n ngOnInit() { }\n}\n"
},
{
"code": null,
"e": 11324,
"s": 11209,
"text": "In the class, we have added one variable called new component and the value is “Entered in new component created”."
},
{
"code": null,
"e": 11401,
"s": 11324,
"text": "The above variable is bound in the .new-cmp.component.html file as follows −"
},
{
"code": null,
"e": 11459,
"s": 11401,
"text": "<p>\n {{newcomponent}}\n</p>\n\n<p>\n new-cmp works!\n</p>\n"
},
{
"code": null,
"e": 11719,
"s": 11459,
"text": "Now since we have included the <app-new-cmp></app-new-cmp> selector in the app. component .html which is the .html of the parent component, the content present in the new component .html file (new-cmp.component.html) gets displayed on the browser as follows −"
},
{
"code": null,
"e": 11848,
"s": 11719,
"text": "Similarly, we can create components and link the same using the selector in the app.component.html file as per our requirements."
},
{
"code": null,
"e": 11883,
"s": 11848,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 11897,
"s": 11883,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 11932,
"s": 11897,
"text": "\n 28 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 11946,
"s": 11932,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 11981,
"s": 11946,
"text": "\n 11 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 12001,
"s": 11981,
"text": " SHIVPRASAD KOIRALA"
},
{
"code": null,
"e": 12036,
"s": 12001,
"text": "\n 16 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 12053,
"s": 12036,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 12086,
"s": 12053,
"text": "\n 69 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 12098,
"s": 12086,
"text": " Senol Atac"
},
{
"code": null,
"e": 12133,
"s": 12098,
"text": "\n 53 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 12145,
"s": 12133,
"text": " Senol Atac"
},
{
"code": null,
"e": 12152,
"s": 12145,
"text": " Print"
},
{
"code": null,
"e": 12163,
"s": 12152,
"text": " Add Notes"
}
] |
Magic Triplets | Practice | GeeksforGeeks | Given an array of size n, a triplet (a[i], a[j], a[k]) is called a Magic Triplet if a[i] < a[j] < a[k] and i < j < k. Count the number of magic triplets in a given array.
Example 1:
Input: arr = [3, 2, 1]
Output: 0
Explanation: There is no magic triplet.
Example 2:
Input: arr = [1, 2, 3, 4]
Output: 4
Explanation: Four magic triplets are
(1, 2, 3), (1, 2, 4), (1, 3, 4) and
(2, 3, 4).
Your Task:
You don't need to read or print anything. Your task is to complete the function countTriplets() which takes the array nums[] as input parameter and returns the number of magic triplets in the array.
Expected Time Complexity: O(N2)
Expected Space Complexity: O(N)
Constraints:
1 <= length of array <= 1000
1 <= arr[i] <= 100000
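Before the community solutions below, here is a minimal Python sketch of the intended O(N^2) approach (function and variable names are my own, not part of the problem's driver code): for every candidate middle element, count the smaller elements to its left and the greater elements to its right, and multiply the two counts.

```python
def count_triplets(nums):
    # For each middle index j, count smaller elements to the left and
    # greater elements to the right; their product is the number of
    # magic triplets that have nums[j] in the middle position.
    n, total = len(nums), 0
    for j in range(1, n - 1):
        left = sum(1 for i in range(j) if nums[i] < nums[j])
        right = sum(1 for k in range(j + 1, n) if nums[k] > nums[j])
        total += left * right
    return total

print(count_triplets([3, 2, 1]))     # 0
print(count_triplets([1, 2, 3, 4]))  # 4
```

Summing over all middle positions counts every increasing triplet exactly once, matching the expected O(N^2) time bound.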
0
nguyentronghuy061 month ago
class Solution{
public:
int countTriplets(vector<int>nums){
int count = 0;
int length = nums.size();
for(int i=1;i<length-1;i++) {
int temp = 0;
for(int j=i+1;j<length;j++)
if(nums[j]>nums[i])
temp++;
int temp1 = 0;
for(int j=i-1;j>=0;j--)
if(nums[j]<nums[i])
temp1++;
count += temp*temp1;
}
return count;
}
};
0
manishbit232 months ago
//C++ solution
class Solution{
public:
int countTriplets(vector<int>nums){
int count = 0;
int length = nums.size();
for(int i=1;i<length-1;i++) {
int temp = 0;
for(int j=i+1;j<length;j++)
if(nums[j]>nums[i])
temp++;
int temp1 = 0;
for(int j=i-1;j>=0;j--)
if(nums[j]<nums[i])
temp1++;
count += temp*temp1;
}
return count;
}
};
0
insanelion4 months ago
int count = 0;
int len = nums.length;
// outer for loop helps us keep track of middle element in triplets
for(int i=1;i<len-1;i++) {
   // right
   int temp = 0;
   for(int j=i+1;j<len;j++)
      if(nums[j]>nums[i])
         temp++;
   int temp1 = 0;
   // left
   for(int j=i-1;j>=0;j--)
      if(nums[j]<nums[i])
         temp1++;
   count += temp*temp1;
}
return count;
0
shobhitsingh11046 months ago
Short and simple Solution .
int countTriplets(vector<int> nums){
   int cnt = 0, n = nums.size();
   for(int i=1;i<n-1;i++) {
      int l = i-1, r = i+1, c1 = 0, c2 = 0;
      while(l>=0) c1 += (nums[l]<nums[i]), l--;
      while(r<n) c2 += (nums[r]>nums[i]), r++;
      cnt += c1*c2;
   }
   return cnt;
}
0
tauruas08057 months ago
vector<int> b(nums.size(), 0);
int n = nums.size();
for(int i=0;i<n;i++){
   for(int j=i+1;j<n;j++){
      if(nums[j]>nums[i]) b[i]++;
   }
}
int c = 0;
for(int i=0;i<n;i++){
   for(int j=i+1;j<n;j++){
      if(nums[j]>nums[i]){
         c += b[j];
      }
   }
}
return c;
0
reaper277 months ago
Easy Solution Python
class Solution:
def countTriplets(self, nums):
n=len(nums)
ans=0
count=[None]*n
for i in range(n-1,-1,-1):
cnt=[]
for j in range(i+1,n):
if nums[i]<nums[j]:cnt.append(j)
count[i]=cnt
for i in range(n-2):
tmp=count[i]
for j in tmp:ans+=len(count[j])
return ans
0
GeeksforGeeks ✔8 months ago
GeeksforGeeks ✔
public int countTriplets(int[] nums){
   int n = nums.length;
   int res = 0;
   for(int i = 1; i < n - 1; i++){
      int temp1 = 0; // Left Smaller
      for(int j = i - 1; j >= 0; j--){
         if(nums[i] > nums[j]) temp1++;
      }
      if(temp1 == 0) continue; // No left smaller
      int temp2 = 0; // right greater
      for(int j = i + 1; j < n; j++){
         if(nums[i] < nums[j]) temp2++;
      }
      res += (temp1 * temp2);
   }
   return res;
}
0
wallflower9 months ago
wallflower
Java O(n^2) solution
https://ide.geeksforgeeks.o...
0
DIPU DEEPAK
This comment was deleted.
+1
Debarshi Maitra1 year ago
Debarshi Maitra
O(N LogN) Solution using Binary Index Tree : https://ide.geeksforgeeks.o...Auxiliary Space : O(N)
{
"code": null,
"e": 400,
"s": 226,
"text": "Given an array of size n, a triplet (a[i], a[j], a[k]) is called a Magic Triplet if a[i] < a[j] < a[k] and i < j < k. Count the number of magic triplets in a given array.\n "
},
{
"code": null,
"e": 411,
"s": 400,
"text": "Example 1:"
},
{
"code": null,
"e": 485,
"s": 411,
"text": "Input: arr = [3, 2, 1]\nOutput: 0\nExplanation: There is no magic triplet.\n"
},
{
"code": null,
"e": 497,
"s": 485,
"text": "\nExample 2:"
},
{
"code": null,
"e": 621,
"s": 497,
"text": "Input: arr = [1, 2, 3, 4]\nOutput: 4\nExplanation: Fours magic triplets are \n(1, 2, 3), (1, 2, 4), (1, 3, 4) and \n(2, 3, 4).\n"
},
{
"code": null,
"e": 833,
"s": 623,
"text": "Your Task:\nYou don't need to read or print anything. Your task is to complete the function countTriplets() which takes the array nums[] as input parameter and returns the number of magic triplets in the array."
},
{
"code": null,
"e": 902,
"s": 835,
"text": "Expected Time Complexity: O(N2) \nExpected Space Complexity: O(N)\n "
},
{
"code": null,
"e": 966,
"s": 902,
"text": "Constraints:\n1 <= length of array <= 1000\n1 <= arr[i] <= 100000"
},
{
"code": null,
"e": 968,
"s": 966,
"text": "0"
},
{
"code": null,
"e": 996,
"s": 968,
"text": "nguyentronghuy061 month ago"
},
{
"code": null,
"e": 1441,
"s": 996,
"text": "class Solution{\n\tpublic:\n\tint countTriplets(vector<int>nums){\n\t int count = 0;\n int length = nums.size();\n for(int i=1;i<length-1;i++) {\n int temp = 0;\n for(int j=i+1;j<length;j++)\n if(nums[j]>nums[i])\n temp++;\n int temp1 = 0;\n for(int j=i-1;j>=0;j--)\n if(nums[j]<nums[i])\n temp1++;\n count += temp*temp1;\n }\n return count;\n\t}\n};"
},
{
"code": null,
"e": 1443,
"s": 1441,
"text": "0"
},
{
"code": null,
"e": 1467,
"s": 1443,
"text": "manishbit232 months ago"
},
{
"code": null,
"e": 1927,
"s": 1467,
"text": "//C++ solution\nclass Solution{\n\tpublic:\n\tint countTriplets(vector<int>nums){\n\t int count = 0;\n int length = nums.size();\n for(int i=1;i<length-1;i++) {\n int temp = 0;\n for(int j=i+1;j<length;j++)\n if(nums[j]>nums[i])\n temp++;\n int temp1 = 0;\n for(int j=i-1;j>=0;j--)\n if(nums[j]<nums[i])\n temp1++;\n count += temp*temp1;\n }\n return count;\n\t}\n};"
},
{
"code": null,
"e": 1929,
"s": 1927,
"text": "0"
},
{
"code": null,
"e": 1952,
"s": 1929,
"text": "insanelion4 months ago"
},
{
"code": null,
"e": 2411,
"s": 1952,
"text": "int count = 0; int len = nums.length; //outer for loop helps us keep track of middle element in triplets for(int i=1;i<len-1;i++) { //right int temp = 0; for(int j=i+1;j<len;j++) if(nums[j]>nums[i]) temp++; int temp1 = 0; //left for(int j=i-1;j>=0;j--) if(nums[j]<nums[i]) temp1++; count += temp*temp1; } return count;"
},
{
"code": null,
"e": 2413,
"s": 2411,
"text": "0"
},
{
"code": null,
"e": 2442,
"s": 2413,
"text": "shobhitsingh11046 months ago"
},
{
"code": null,
"e": 2471,
"s": 2442,
"text": "Short and simple Solution . "
},
{
"code": null,
"e": 2751,
"s": 2473,
"text": "int countTriplets(vector<int>nums){ int cnt = 0 , n = nums.size(); for(int i=1;i<n-1;i++) { int l = i-1 , r = i+1 , c1 = 0 , c2 = 0; while(l>=0) c1+=(nums[l]<nums[i]) , l--; while(r<n) c2+=(nums[r]>nums[i]) , r++; cnt+=c1*c2; } return cnt;}"
},
{
"code": null,
"e": 2753,
"s": 2751,
"text": "0"
},
{
"code": null,
"e": 2777,
"s": 2753,
"text": "tauruas08057 months ago"
},
{
"code": null,
"e": 3068,
"s": 2777,
"text": "vector<int>b(nums.size(),0);int n=nums.size(); for(int i=0;i<n;i++){ for(int j=i+1;j<n;j++){ if(nums[j]>nums[i])b[i]++; } } int c=0; for(int i=0;i<n;i++){ for(int j=i+1;j<n;j++){ if(nums[j]>nums[i]){ c+=b[j]; } } } return c;"
},
{
"code": null,
"e": 3070,
"s": 3068,
"text": "0"
},
{
"code": null,
"e": 3091,
"s": 3070,
"text": "reaper277 months ago"
},
{
"code": null,
"e": 3112,
"s": 3091,
"text": "Easy Solution Python"
},
{
"code": null,
"e": 3431,
"s": 3112,
"text": "class Solution:\n\tdef countTriplets(self, nums):\n\t n=len(nums)\n\t ans=0\n\t\tcount=[None]*n\n\t\tfor i in range(n-1,-1,-1):\n\t\t cnt=[]\n\t\t for j in range(i+1,n):\n\t\t if nums[i]<nums[j]:cnt.append(j)\n\t\t count[i]=cnt\n\t\tfor i in range(n-2):\n\t\t tmp=count[i]\n\t\t for j in tmp:ans+=len(count[j])\n\t\treturn ans"
},
{
"code": null,
"e": 3433,
"s": 3431,
"text": "0"
},
{
"code": null,
"e": 3461,
"s": 3433,
"text": "GeeksforGeeks ✔8 months ago"
},
{
"code": null,
"e": 3477,
"s": 3461,
"text": "GeeksforGeeks ✔"
},
{
"code": null,
"e": 4096,
"s": 3477,
"text": "public int countTriplets(int[] nums){ int n = nums.length; int res = 0; for(int i = 1; i < n - 1; i++){ int temp1 = 0; // Left Smaller for(int j = i - 1; j >= 0; j--){ if(nums[i] > nums[j]) temp1++; } if(temp1 == 0) continue; // No left smaller int temp2 = 0; // right greater for(int j = i + 1; j < n; j++){ if(nums[i] < nums[j]) temp2++; } res+= (temp1 * temp2); } return res; }"
},
{
"code": null,
"e": 4098,
"s": 4096,
"text": "0"
},
{
"code": null,
"e": 4121,
"s": 4098,
"text": "wallflower9 months ago"
},
{
"code": null,
"e": 4132,
"s": 4121,
"text": "wallflower"
},
{
"code": null,
"e": 4153,
"s": 4132,
"text": "Java O(n^2) solution"
},
{
"code": null,
"e": 4184,
"s": 4153,
"text": "https://ide.geeksforgeeks.o..."
},
{
"code": null,
"e": 4186,
"s": 4184,
"text": "0"
},
{
"code": null,
"e": 4198,
"s": 4186,
"text": "DIPU DEEPAK"
},
{
"code": null,
"e": 4224,
"s": 4198,
"text": "This comment was deleted."
},
{
"code": null,
"e": 4227,
"s": 4224,
"text": "+1"
},
{
"code": null,
"e": 4253,
"s": 4227,
"text": "Debarshi Maitra1 year ago"
},
{
"code": null,
"e": 4269,
"s": 4253,
"text": "Debarshi Maitra"
},
{
"code": null,
"e": 4367,
"s": 4269,
"text": "O(N LogN) Solution using Binary Index Tree : https://ide.geeksforgeeks.o...Auxiliary Space : O(N)"
},
{
"code": null,
"e": 4513,
"s": 4367,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 4549,
"s": 4513,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 4559,
"s": 4549,
"text": "\nProblem\n"
},
{
"code": null,
"e": 4569,
"s": 4559,
"text": "\nContest\n"
},
{
"code": null,
"e": 4632,
"s": 4569,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 4780,
"s": 4632,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 4988,
"s": 4780,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 5094,
"s": 4988,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
How to add dividers and spaces between items in RecyclerView | This example demonstrates how to add dividers and spaces between items in a RecyclerView.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<android.support.design.widget.CoordinatorLayout android:layout_width = "match_parent"
android:layout_height = "match_parent"
xmlns:android = "http://schemas.android.com/apk/res/android"
xmlns:app = "http://schemas.android.com/apk/res-auto">
<android.support.design.widget.AppBarLayout
android:layout_width = "match_parent"
android:layout_height = "wrap_content">
<android.support.v7.widget.Toolbar
android:id = "@+id/appbarlayout_tool_bar"
android:background = "@color/colorPrimary"
android:layout_width = "match_parent"
android:layout_height = "?attr/actionBarSize"
app:layout_scrollFlags = "scroll|snap|enterAlways"
app:theme = "@style/ThemeOverlay.AppCompat.Dark.ActionBar"
app:popupTheme = "@style/ThemeOverlay.AppCompat.Light" />
</android.support.design.widget.AppBarLayout>
<android.support.v7.widget.RecyclerView
android:id = "@+id/recycler_view"
android:layout_width = "match_parent"
android:layout_height = "match_parent"
app:layout_behavior = "@string/appbar_scrolling_view_behavior"/>
</android.support.design.widget.CoordinatorLayout>
In the above code, we have taken a RecyclerView.
Step 3 − Add the following code to src/MainActivity.java
package com.example.myapplication;
import android.annotation.TargetApi;
import android.os.Build;
import android.os.Bundle;
import android.support.v4.content.ContextCompat;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.DefaultItemAnimator;
import android.support.v7.widget.DividerItemDecoration;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.support.v7.widget.Toolbar;
import android.widget.TextView;
import android.widget.Toast;
import java.util.ArrayList;
public class MainActivity extends AppCompatActivity {
TextView text;
ArrayList<String> list = new ArrayList<>();
private RecyclerView recyclerView;
private customAdapter mAdapter;
private onClickInterface onclickInterface;
@TargetApi(Build.VERSION_CODES.LOLLIPOP)
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Toolbar toolbar = (android.support.v7.widget.Toolbar) findViewById(R.id.appbarlayout_tool_bar);
toolbar.setTitle("This is toolbar.");
setSupportActionBar(toolbar);
onclickInterface = new onClickInterface() {
@Override
public void setClick(int abc) {
list.remove(abc);
Toast.makeText(MainActivity.this,"Position is"+abc,Toast.LENGTH_LONG).show();
mAdapter.notifyDataSetChanged();
}
};
recyclerView = (RecyclerView) findViewById(R.id.recycler_view);
RecyclerView.LayoutManager mLayoutManager = new LinearLayoutManager(getApplicationContext());
recyclerView.setLayoutManager(mLayoutManager);
recyclerView.setItemAnimator(new DefaultItemAnimator());
mAdapter = new customAdapter(this, list, onclickInterface);
recyclerView.setAdapter(mAdapter);
DividerItemDecoration dividerItemDecoration = new DividerItemDecoration(recyclerView.getContext(), DividerItemDecoration.VERTICAL);
dividerItemDecoration.setDrawable(ContextCompat.getDrawable(MainActivity.this, R.drawable.divider));
recyclerView.addItemDecoration(dividerItemDecoration);
list.add("sairamm");
list.add("Krishna");
list.add("prasad");
list.add("sairamm");
list.add("Krishna");
list.add("prasad");
list.add("sairamm");
list.add("Krishna");
list.add("prasad");
list.add("sairamm");
list.add("Krishna");
list.add("prasad");
list.add("Krishna");
list.add("prasad");
list.add("sairamm");
list.add("Krishna");
list.add("prasad");
list.add("sairamm");
list.add("Krishna");
list.add("prasad");
}
}
Step 4 − Add the following code to src/customAdapter.java
package com.example.myapplication;
import android.content.Context;
import android.support.annotation.NonNull;
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;
import java.util.ArrayList;
public class customAdapter extends RecyclerView.Adapter<customAdapter.MyViewHolder> {
Context context;
ArrayList<String> list;
onClickInterface onClickInterface;
public class MyViewHolder extends RecyclerView.ViewHolder {
public TextView title;
public MyViewHolder(View view) {
super(view);
title = (TextView) view.findViewById(R.id.title);
}
}
public customAdapter(Context context, ArrayList<String> list, onClickInterface onClickInterface) {
this.context = context;
this.list = list;
this.onClickInterface = onClickInterface;
}
@NonNull
@Override
public MyViewHolder onCreateViewHolder(@NonNull ViewGroup viewGroup, int i) {
View itemView = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.list_row, viewGroup, false);
return new MyViewHolder(itemView);
}
@Override
public void onBindViewHolder(@NonNull MyViewHolder myViewHolder, final int i) {
myViewHolder.title.setText(list.get(i));
myViewHolder.title.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
onClickInterface.setClick(i);
}
});
}
@Override
public int getItemCount() {
return list.size();
}
}
Step 5 − Add the following code to res/layout/list_row.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<android.support.v7.widget.CardView xmlns:android = "http://schemas.android.com/apk/res/android"
xmlns:app = "http://schemas.android.com/apk/res-auto"
xmlns:tools = "http://schemas.android.com/tools"
android:layout_width = "match_parent"
android:layout_height = "wrap_content"
app:cardElevation = "10dp"
app:cardCornerRadius = "20dp"
tools:context = ".MainActivity">
<LinearLayout
android:layout_width = "match_parent"
android:layout_height = "wrap_content"
android:gravity = "center"
android:orientation = "vertical">
<ImageView
android:id = "@+id/imageView2"
android:layout_width = "wrap_content"
android:layout_height = "wrap_content"
android:src = "@drawable/logo" />
<TextView
android:id = "@+id/title"
android:layout_width = "match_parent"
android:layout_height = "wrap_content"
android:gravity = "center"
android:textSize = "30sp" />
<TextView
android:id = "@+id/textview2"
android:layout_width = "match_parent"
android:layout_height = "wrap_content"
android:gravity = "center"
android:text = "Sairamkrishan"
android:textSize = "30sp" />
</LinearLayout>
</android.support.v7.widget.CardView>
Step 6 − Add the following code to src/onClickInterface.java.
package com.example.myapplication;
public interface onClickInterface {
void setClick(int abc);
}
Step 7 − Add the following code to res/drawable/divider.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<shape xmlns:android = "http://schemas.android.com/apk/res/android"
android:shape = "rectangle">
<solid android:color = "@color/colorPrimary"/>
<size android:height = "2dp"/>
</shape>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen – | [
{
"code": null,
"e": 1154,
"s": 1062,
"text": "This example demonstrate about How to add dividers and spaces between items in RecyclerView"
},
{
"code": null,
"e": 1283,
"s": 1154,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1348,
"s": 1283,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2563,
"s": 1348,
"text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<android.support.design.widget.CoordinatorLayout android:layout_width = \"match_parent\"\n android:layout_height = \"match_parent\"\n xmlns:android = \"http://schemas.android.com/apk/res/android\"\n xmlns:app = \"http://schemas.android.com/apk/res-auto\">\n <android.support.design.widget.AppBarLayout\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\">\n <android.support.v7.widget.Toolbar\n android:id = \"@+id/appbarlayout_tool_bar\"\n android:background = \"@color/colorPrimary\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"?attr/actionBarSize\"\n app:layout_scrollFlags = \"scroll|snap|enterAlways\"\n app:theme = \"@style/ThemeOverlay.AppCompat.Dark.ActionBar\"\n app:popupTheme = \"@style/ThemeOverlay.AppCompat.Light\" />\n </android.support.design.widget.AppBarLayout>\n <android.support.v7.widget.RecyclerView\n android:id = \"@+id/recycler_view\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"match_parent\"\n app:layout_behavior = \"@string/appbar_scrolling_view_behavior\"/>\n</android.support.design.widget.CoordinatorLayout>"
},
{
"code": null,
"e": 2609,
"s": 2563,
"text": "In the above code, we have taken recycerview."
},
{
"code": null,
"e": 2666,
"s": 2609,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 5382,
"s": 2666,
"text": "package com.example.myapplication;\nimport android.annotation.TargetApi;\nimport android.os.Build;\nimport android.os.Bundle;\nimport android.support.v4.content.ContextCompat;\nimport android.support.v7.app.AppCompatActivity;\nimport android.support.v7.widget.DefaultItemAnimator;\nimport android.support.v7.widget.DividerItemDecoration;\nimport android.support.v7.widget.LinearLayoutManager;\nimport android.support.v7.widget.RecyclerView;\nimport android.support.v7.widget.Toolbar;\nimport android.widget.TextView;\nimport android.widget.Toast;\nimport java.util.ArrayList;\n\npublic class MainActivity extends AppCompatActivity {\n TextView text;\n ArrayList<String> list = new ArrayList<>();\n private RecyclerView recyclerView;\n private customAdapter mAdapter;\n private onClickInterface onclickInterface;\n @TargetApi(Build.VERSION_CODES.LOLLIPOP)\n @Override\n public void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n Toolbar toolbar = (android.support.v7.widget.Toolbar) findViewById(R.id.appbarlayout_tool_bar);\n toolbar.setTitle(\"This is toolbar.\");\n setSupportActionBar(toolbar);\n onclickInterface = new onClickInterface() {\n @Override\n public void setClick(int abc) {\n list.remove(abc);\n Toast.makeText(MainActivity.this,\"Position is\"+abc,Toast.LENGTH_LONG).show();\n mAdapter.notifyDataSetChanged();\n }\n };\n recyclerView = (RecyclerView) findViewById(R.id.recycler_view);\n RecyclerView.LayoutManager mLayoutManager = new LinearLayoutManager(getApplicationContext());\n recyclerView.setLayoutManager(mLayoutManager);\n recyclerView.setItemAnimator(new DefaultItemAnimator());\n mAdapter = new customAdapter(this, list, onclickInterface);\n recyclerView.setAdapter(mAdapter);\n DividerItemDecoration dividerItemDecoration = new DividerItemDecoration(recyclerView.getContext(), DividerItemDecoration.VERTICAL);\n 
dividerItemDecoration.setDrawable(ContextCompat.getDrawable(MainActivity.this, R.drawable.divider));\n recyclerView.addItemDecoration(dividerItemDecoration);\n list.add(\"sairamm\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n list.add(\"sairamm\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n list.add(\"sairamm\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n list.add(\"sairamm\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n list.add(\"sairamm\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n list.add(\"sairamm\");\n list.add(\"Krishna\");\n list.add(\"prasad\");\n }\n}"
},
{
"code": null,
"e": 5441,
"s": 5382,
"text": "Step 4 − Add the following code to src/ customAdapter.java"
},
{
"code": null,
"e": 7040,
"s": 5441,
"text": "package com.example.myapplication;\nimport android.content.Context;\nimport android.support.annotation.NonNull;\nimport android.support.v7.widget.RecyclerView;\nimport android.view.LayoutInflater;\nimport android.view.View;\nimport android.view.ViewGroup;\nimport android.widget.TextView;\nimport java.util.ArrayList;\n\npublic class customAdapter extends RecyclerView.Adapter<customAdapter.MyViewHolder> {\n Context context;\n ArrayList<String> list;\n onClickInterface onClickInterface;\n public class MyViewHolder extends RecyclerView.ViewHolder {\n public TextView title;\n public MyViewHolder(View view) {\n super(view);\n title = (TextView) view.findViewById(R.id.title);\n }\n }\n public customAdapter(Context context, ArrayList<String> list, onClickInterface onClickInterface) {\n this.context = context;\n this.list = list;\n this.onClickInterface = onClickInterface;\n }\n @NonNull\n @Override\n public MyViewHolder onCreateViewHolder(@NonNull ViewGroup viewGroup, int i) {\n View itemView = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.list_row, viewGroup, false);\n return new MyViewHolder(itemView);\n }\n @Override\n public void onBindViewHolder(@NonNull MyViewHolder myViewHolder, final int i) {\n myViewHolder.title.setText(list.get(i));\n myViewHolder.title.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n onClickInterface.setClick(i);\n }\n });\n }\n @Override\n public int getItemCount() {\n return list.size();\n }\n}"
},
{
"code": null,
"e": 7101,
"s": 7040,
"text": "Step 5 − Add the following code to res/layout/ list_row.xml."
},
{
"code": null,
"e": 8447,
"s": 7101,
"text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<android.support.v7.widget.CardView xmlns:android = \"http://schemas.android.com/apk/res/android\"\n xmlns:app = \"http://schemas.android.com/apk/res-auto\"\n xmlns:tools = \"http://schemas.android.com/tools\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\"\n app:cardElevation = \"10dp\"\n app:cardCornerRadius = \"20dp\"\n tools:context = \".MainActivity\">\n <LinearLayout\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\"\n android:gravity = \"center\"\n android:orientation = \"vertical\">\n <ImageView\n android:id = \"@+id/imageView2\"\n android:layout_width = \"wrap_content\"\n android:layout_height = \"wrap_content\"\n android:src = \"@drawable/logo\" />\n <TextView\n android:id = \"@+id/title\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\"\n android:gravity = \"center\"\n android:textSize = \"30sp\" />\n <TextView\n android:id = \"@+id/textview2\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\"\n android:gravity = \"center\"\n android:text = \"Sairamkrishan\"\n android:textSize = \"30sp\" />\n </LinearLayout>\n</android.support.v7.widget.CardView>"
},
{
"code": null,
"e": 8505,
"s": 8447,
"text": "Step 6 − Add the following code to src/ onClickInterface."
},
{
"code": null,
"e": 8605,
"s": 8505,
"text": "package com.example.myapplication;\npublic interface onClickInterface {\n void setClick(int abc);\n}"
},
{
"code": null,
"e": 8666,
"s": 8605,
"text": "Step 7 − Add the following code to res/drawable/ dividerxml."
},
{
"code": null,
"e": 8902,
"s": 8666,
"text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<shape xmlns:android = \"http://schemas.android.com/apk/res/android\"\n android:shape = \"rectangle\">\n <solid android:color = \"@color/colorPrimary\"/>\n <size android:height = \"2dp\"/>\n</shape>"
},
{
"code": null,
"e": 9249,
"s": 8902,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –"
}
] |
A Guide on How to Build a Fuzzy Search Algorithm with FuzzyWuzzy and HMNI | by Cheng | Towards Data Science | In this article I will guide you through my thoughts on how to build a fuzzy search algorithm. A very practical use case of this algorithm is that we can use it to find alternative names for a brand, say 'Amazon', and have it return strings such as 'AMZ', 'AMZN' or 'AMZN MKTP'.
The article follows an outline as the following:
Fuzzy search with the FuzzyWuzzy
Fuzzy search with the HMNI
Fuzzy search with an integrated algorithm
Return an alternative names table
github.com
FuzzyWuzzy is a great Python library that can be used to complete a fuzzy search job. Essentially, it uses Levenshtein Distance to calculate the difference / distance between sequences.
According to the Wikipedia, the Levenshtein distance is a metric of evaluating the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. This means that the evaluation metric within the FuzzyWuzzy can be very good at performing fuzzy search for the misspelled words and capturing the longest common subsequence between inputs.
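As a concrete illustration of the metric, here is a minimal pure-Python sketch of the Levenshtein distance (the function name and the test strings are illustrative and not part of FuzzyWuzzy):

```python
# Minimal dynamic-programming computation of the Levenshtein distance.
def levenshtein(s1, s2):
    # previous[j] holds the edit distance between the processed prefix of s1 and s2[:j]
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        current = [i]
        for j, c2 in enumerate(s2, start=1):
            insertion = current[j - 1] + 1
            deletion = previous[j] + 1
            substitution = previous[j - 1] + (c1 != c2)
            current.append(min(insertion, deletion, substitution))
        previous = current
    return previous[-1]

print(levenshtein("AMAZON", "AMZ"))      # 3 single-character deletions
print(levenshtein("kitten", "sitting"))  # the classic example: 3 edits
```

FuzzyWuzzy's fuzz.ratio() builds its 0–100 similarity score on top of this kind of edit-distance computation.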
But for some cases, for instance, the abbreviation of brand names, knowing only the character-level difference is probably not enough. It would also make sense to know the phonetic and semantic difference before returning the most similar name matches.
Therefore, I would like to introduce another library called HMNI which can help us check the phonetic similarity between inputs but first let me build a sample dataset for a more proper test.
Starting with the FuzzyWuzzy library, to install it, we could run the following commands:
# Using PIP via PyPIpip install fuzzywuzzy# Or the following to install python-Levenshtein toopip install fuzzywuzzy[speedup]
Then we will continue to create a sample dataset for our test.
# Sample Datasetdf = pd.DataFrame(index =['AMAZON', 'Netflix', 'PayPal', 'Apple', 'Spotify', 'Apple', 'Facebook', 'Google', 'Starbucks'], columns = ['AMZ', 'PP', 'FACEBK', 'SPTF*', 'APPL', 'STARBK', 'GG', 'Starbucks TORONTO', 'NFLIX'])# Print df
We have the row indexes set as the full brand names and column names set as the potential abbreviations of these brands.
Now, we can define a function that takes two strings as input and returns a similarity score as output. In FuzzyWuzzy, we can use the function fuzz.ratio() to compute the similarity score between two inputs.
# Customized similarity function with FuzzyWuzzydef similarity_fuzzy(word1, word2): score = fuzz.ratio(word1, word2) d = score/100 return d
Now, we just need to implement the similarity_fuzzy function on each pair of the row index and column name to replace these NaN values with similarity scores.
from tqdm import tqdmfor i in tqdm(range(9)): # range in number of rows for j in range(9): # range in number of columns df.loc[df.index[i], df.columns[j]] = similarity_fuzzy(str(df.index[i]), str(df.columns[j])) df
As we can see, FuzzyWuzzy doesn’t perform very well at finding the correct abbreviations for the input brand names. The main reason, I think, is that the abbreviations have lost a lot of characters, so a metric based on Levenshtein distance is not the optimal solution for this case.
And this is the place I think a new perspective of computing phonetic similarity should be a great help!
github.com
Generally, HMNI is a library that follows the cognitive process of applying soft-logic to approximate the spelling and phonetic (sound) characteristics.
A great article to explore:
towardsdatascience.com
To test the fuzzy name matching performance with the HMNI, we may follow the same steps as for the FuzzyWuzzy.
To install the HMNI:
# Using PIP via PyPIpip install hmni
To Initialize a Matcher Object:
import hmnimatcher = hmni.Matcher(model='latin')
To customize our similarity function:
def similarity_hmni(word1, word2): d = matcher.similarity(word1, word2) return d
Then, we can test the similarity_hmni function on the same sample dataset to compare the performance.
The difference is very obvious between the FuzzyWuzzy and HMNI. HMNI seems to be better at finding the abbreviations for the brand inputs based on the potential phonetic characteristics.
But it still doesn’t mean there is no disadvantage of using HMNI. For instance, looking at the ‘PayPal’ and ‘Apple’, we find that HMNI tends to be bad at separating these two brands since the similarity scores are 0.71 & 0.41 and 0.66 & 0.94 respectively. This may cause some confusion if we add more inputs into the dataset. Also, for the exact match between ‘Starbucks’ and ‘Starbucks Toronto’, the HMNI should be more confident about its prediction but now it only returns a value at 0.5.
This probably means we should consider integrating both dimensions, the phonetic similarity and the Levenshtein distance, to achieve an optimal balance.
My solution to this is simple. Just add two functions together into one new function and we will adjust the weights to determine the final output scores.
def similarity_calculator(word1, word2): score_1 = fuzz.ratio(word1, word2) # score from fuzzywuzzy score_2 = matcher.similarity(word1, word2) # score from hmni score_1 = score_1/100 score = 0.2*score_1 + 0.8*score_2 # customize your own weights return score
By integrating these two functions together, we seem to reach a balance where we can set 60% as a threshold to separate matches and non-matches. As for exact matches such as ‘Starbucks’, we should probably search for these big-brand matches directly using the .find() string method in Python.
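A hypothetical helper for that exact-containment shortcut could look like the following (is_direct_match is my own illustrative name, not part of either library):

```python
# Check direct containment with str.find() before falling back to fuzzy scoring.
def is_direct_match(brand, candidate):
    # str.find() returns -1 when the substring is absent
    return candidate.lower().find(brand.lower()) != -1

print(is_direct_match("Starbucks", "Starbucks TORONTO"))  # True
print(is_direct_match("AMAZON", "AMZN MKTP"))             # False
```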
Now, the remaining work is to create a table of top alternative name matches for the brand inputs. For the sample dataset, I choose to return only the top match with the highest similarity score. The code looks like the following:
# Return a new column 'Max' that contains the alternative names for each brand inputdf['Max'] = df.astype(float).idxmax(axis=1)# Create a new dataframe 'result' to display only the input & output columnsresult = pd.DataFrame(list(df.index), columns=['Input'])result['Alternative Names'] = list(df.Max)result
Your final output will look like a dataset above.
Thanks for your reading. And I hope this article could be helpful for anyone who is looking for some guide on how to build a fuzzy search algorithm with machine learning.
I think this article could be a great start for you to develop your own fuzzy search algorithm :)
Thanks Again!
See you in the next article ~ | [
{
"code": null,
"e": 457,
"s": 171,
"text": "In this article I will guide you through my thoughts on how to build a fuzzy search algorithm. A very practical use case of this algorithm is that we can use it to find alternative names for a brand saying ‘Amazon’ and we want it to return strings such as ‘AMZ’, ‘AMZN’ or ‘AMZN MKTP’."
},
{
"code": null,
"e": 506,
"s": 457,
"text": "The article follows an outline as the following:"
},
{
"code": null,
"e": 539,
"s": 506,
"text": "Fuzzy search with the FuzzyWuzzy"
},
{
"code": null,
"e": 566,
"s": 539,
"text": "Fuzzy search with the HMNI"
},
{
"code": null,
"e": 608,
"s": 566,
"text": "Fuzzy search with an integrated algorithm"
},
{
"code": null,
"e": 642,
"s": 608,
"text": "Return an alternative names table"
},
{
"code": null,
"e": 653,
"s": 642,
"text": "github.com"
},
{
"code": null,
"e": 833,
"s": 653,
"text": "FuzzyWuzzy is a great python library can be used to complete a fuzzy search job. Essentially it uses Levenshtein Distance to calculate the difference / distance between sequences."
},
{
"code": null,
"e": 1232,
"s": 833,
"text": "According to the Wikipedia, the Levenshtein distance is a metric of evaluating the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. This means that the evaluation metric within the FuzzyWuzzy can be very good at performing fuzzy search for the misspelled words and capturing the longest common subsequence between inputs."
},
{
"code": null,
"e": 1487,
"s": 1232,
"text": "But for some cases, for instance, the abbreviation of brand names, only knowing the difference on a character-level is probably not enough. It would also make sense to know the phonetic and semantic difference before return the most similar name matches."
},
{
"code": null,
"e": 1679,
"s": 1487,
"text": "Therefore, I would like to introduce another library called HMNI which can help us check the phonetic similarity between inputs but first let me build a sample dataset for a more proper test."
},
{
"code": null,
"e": 1769,
"s": 1679,
"text": "Starting with the FuzzyWuzzy library, to install it, we could run the following commands:"
},
{
"code": null,
"e": 1895,
"s": 1769,
"text": "# Using PIP via PyPIpip install fuzzywuzzy# Or the following to install python-Levenshtein toopip install fuzzywuzzy[speedup]"
},
{
"code": null,
"e": 1958,
"s": 1895,
"text": "Then we will continue to create a sample dataset for our test."
},
{
"code": null,
"e": 2221,
"s": 1958,
"text": "# Sample Datasetdf = pd.DataFrame(index =['AMAZON', 'Netflix', 'PayPal', 'Apple', 'Spotify', 'Apple', 'Facebook', 'Google', 'Starbucks'], columns = ['AMZ', 'PP', 'FACEBK', 'SPTF*', 'APPL', 'STARBK', 'GG', 'Starbucks TORONTO', 'NFLIX'])# Print df"
},
{
"code": null,
"e": 2342,
"s": 2221,
"text": "We have the row indexes set as the full brand names and column names set as the potential abbreviations of these brands."
},
{
"code": null,
"e": 2550,
"s": 2342,
"text": "Now, we can define a function that takes two strings as input and returns a similarity score as output. In FuzzyWuzzy, we can use the function fuzz.ratio() to compute the similarity score between two inputs."
},
{
"code": null,
"e": 2711,
"s": 2550,
"text": "# Customized similarity function with FuzzyWuzzydef similarity_fuzzy(word1, word2): score = fuzz.ratio(word1, word2) d = score/100 return d"
},
{
"code": null,
"e": 2870,
"s": 2711,
"text": "Now, we just need to implement the similarity_fuzzy function on each pair of the row index and column name to replace these NaN values with similarity scores."
},
{
"code": null,
"e": 3116,
"s": 2870,
"text": "from tqdm import tqdmfor i in tqdm(range(8)): # range in number of rows for j in range(9): # range in number of columns df.loc[df.index[i], df.columns[j]] = similarity_fuzzy(str(df.index[i]), str(df.columns[j])) df"
},
{
"code": null,
"e": 3406,
"s": 3116,
"text": "As we can see, the FuzzyWuzzy doesn’t perform very well to find the correct abbreviations for the input brand names. The main reason I think it’s because the abbreviations have lost a lot of characters so the metrics of using Levenshtein distance is not the optimal solution for this case."
},
{
"code": null,
"e": 3511,
"s": 3406,
"text": "And this is the place I think a new perspective of computing phonetic similarity should be a great help!"
},
{
"code": null,
"e": 3522,
"s": 3511,
"text": "github.com"
},
{
"code": null,
"e": 3675,
"s": 3522,
"text": "Generally, HMNI is a library that follows the cognitive process of applying soft-logic to approximate the spelling and phonetic (sound) characteristics."
},
{
"code": null,
"e": 3703,
"s": 3675,
"text": "A great article to explore:"
},
{
"code": null,
"e": 3726,
"s": 3703,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 3837,
"s": 3726,
"text": "To test the fuzzy name matching performance with the HMNI, we may follow the same steps as for the FuzzyWuzzy."
},
{
"code": null,
"e": 3858,
"s": 3837,
"text": "To install the HMNI:"
},
{
"code": null,
"e": 3895,
"s": 3858,
"text": "# Using PIP via PyPIpip install hmni"
},
{
"code": null,
"e": 3927,
"s": 3895,
"text": "To Initialize a Matcher Object:"
},
{
"code": null,
"e": 3976,
"s": 3927,
"text": "import hmnimatcher = hmni.Matcher(model='latin')"
},
{
"code": null,
"e": 4014,
"s": 3976,
"text": "To customize our similarity function:"
},
{
"code": null,
"e": 4109,
"s": 4014,
"text": "def similarity_hmni(word1, word2): d = matcher.similarity(word1, word2) return d"
},
{
"code": null,
"e": 4211,
"s": 4109,
"text": "Then, we can test the similarity_hmni function on the same sample dataset to compare the performance."
},
{
"code": null,
"e": 4398,
"s": 4211,
"text": "The difference is very obvious between the FuzzyWuzzy and HMNI. HMNI seems to be better at finding the abbreviations for the brand inputs based on the potential phonetic characteristics."
},
{
"code": null,
"e": 4890,
"s": 4398,
"text": "But it still doesn’t mean there is no disadvantage of using HMNI. For instance, looking at the ‘PayPal’ and ‘Apple’, we find that HMNI tends to be bad at separating these two brands since the similarity scores are 0.71 & 0.41 and 0.66 & 0.94 respectively. This may cause some confusion if we add more inputs into the dataset. Also, for the exact match between ‘Starbucks’ and ‘Starbucks Toronto’, the HMNI should be more confident about its prediction but now it only returns a value at 0.5."
},
{
"code": null,
"e": 5051,
"s": 4890,
"text": "This probably means we should consider integrate both of the two dimensions, the phonetic similarity and the Levenshtein distance to achieve an optimal balance."
},
{
"code": null,
"e": 5205,
"s": 5051,
"text": "My solution to this is simple. Just add two functions together into one new function and we will adjust the weights to determine the final output scores."
},
{
"code": null,
"e": 5500,
"s": 5205,
"text": "def similarity_calculator(word1, word2): score_1 = fuzz.ratio(word1, word2) # score from fuzzywuzzy score_2 = matcher.similarity(word1, word2) # score from hmni score_1 = score_1/100 score = 0.2*score_1 + 0.8*score_2 # customize your own weights return score"
},
{
"code": null,
"e": 5774,
"s": 5500,
"text": "By integrating these two functions together, we seem to reach a balance that we can set 60% as a threshold to separate matches and non-matches. Except for the Starbucks, we probably should search for these big brand matches directly by using the .find() function in python."
},
{
"code": null,
"e": 6005,
"s": 5774,
"text": "Now, the remaining work is to create a table of top alternative name matches for the brand inputs. For the sample dataset, I choose to return only the top match with the highest similarity score. The code looks like the following:"
},
{
"code": null,
"e": 6313,
"s": 6005,
"text": "# Return a new column 'Max' that contains the alternative names for each brand inputdf['Max'] = df.astype(float).idxmax(axis=1)# Create a new dataframe 'result' to display only the input & output columnsresult = pd.DataFrame(list(df.index), columns=['Input'])result['Alternative Names'] = list(df.Max)result"
},
{
"code": null,
"e": 6363,
"s": 6313,
"text": "Your final output will look like a dataset above."
},
{
"code": null,
"e": 6534,
"s": 6363,
"text": "Thanks for your reading. And I hope this article could be helpful for anyone who is looking for some guide on how to build a fuzzy search algorithm with machine learning."
},
{
"code": null,
"e": 6632,
"s": 6534,
"text": "I think this article could be a great start for you to develop your own fuzzy search algorithm :)"
},
{
"code": null,
"e": 6646,
"s": 6632,
"text": "Thanks Again!"
}
] |
How to close an opened file in Python? | To close an opened file in python, just call the close function on the file's object.
>>> f = open('hello.txt', 'r')
>>> # Do stuff with file
>>> f.close()
Try not to open files this way, though, as it is not safe: the file stays open if an exception occurs before close() is called. Use with ... open instead.
with open('hello.txt', 'r') as f:
print(f.read())
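To verify the automatic close, you can check the file object's closed attribute after the block (the temporary file below is only there to make the example self-contained):

```python
import os
import tempfile

# Create a throwaway file so the example can run anywhere.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as tmp:
    tmp.write('hello')

with open(path, 'r') as f:
    content = f.read()

print(content)    # hello
print(f.closed)   # True: closed automatically on leaving the block
os.remove(path)
```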
The file closes automatically as soon as you exit the with block. | [
{
"code": null,
"e": 1148,
"s": 1062,
"text": "To close an opened file in python, just call the close function on the file's object."
},
{
"code": null,
"e": 1218,
"s": 1148,
"text": ">>> f = open('hello.txt', 'r')\n>>> # Do stuff with file\n>>> f.close()"
},
{
"code": null,
"e": 1305,
"s": 1218,
"text": "Try not to open files in this way though as it is not safe. Use with ... open instead."
},
{
"code": null,
"e": 1359,
"s": 1305,
"text": "with open('hello.txt', 'r') as f:\n print(f.read())"
},
{
"code": null,
"e": 1418,
"s": 1359,
"text": "The file auto closes as soon as you escape the with block."
}
] |
Find the location with specified latitude and longitude using Python | 06 Aug, 2021
In this article, we are going to write a python script to find the address of a specified latitude and longitude using the geopy module. The geopy module makes it easier to locate the coordinates of addresses, cities, countries, landmarks, and zipcode.
Installation:
To install GeoPy module, run the following command in your terminal.
pip install geopy
Step-by-step Approach:
Import the geopy module.
Initialize Nominatim API to get location from the input string.
Get location with geolocator.geocode() method.
Below is the program based on the above approach:
Python3
# Import modulefrom geopy.geocoders import Nominatim # Initialize Nominatim APIgeolocator = Nominatim(user_agent="geoapiExercises") # Assign Latitude & LongitudeLatitude = "25.594095"Longitude = "85.137566" # Displaying Latitude and Longitudeprint("Latitude: ", Latitude)print("Longitude: ", Longitude) # Get location with geocodelocation = geolocator.geocode(Latitude+","+Longitude) # Display locationprint("\nLocation of the given Latitude and Longitude:")print(location)
Output:
gulshankumarar231
adnanirshad158
python-utility
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n06 Aug, 2021"
},
{
"code": null,
"e": 281,
"s": 28,
"text": "In this article, we are going to write a python script to find the address of a specified latitude and longitude using the geopy module. The geopy module makes it easier to locate the coordinates of addresses, cities, countries, landmarks, and zipcode."
},
{
"code": null,
"e": 295,
"s": 281,
"text": "Installation:"
},
{
"code": null,
"e": 364,
"s": 295,
"text": "To install GeoPy module, run the following command in your terminal."
},
{
"code": null,
"e": 382,
"s": 364,
"text": "pip install geopy"
},
{
"code": null,
"e": 405,
"s": 382,
"text": "Step-by-step Approach:"
},
{
"code": null,
"e": 430,
"s": 405,
"text": "Import the geopy module."
},
{
"code": null,
"e": 494,
"s": 430,
"text": "Initialize Nominatim API to get location from the input string."
},
{
"code": null,
"e": 541,
"s": 494,
"text": "Get location with geolocator.geocode() method."
},
{
"code": null,
"e": 591,
"s": 541,
"text": "Below is the program based on the above approach:"
},
{
"code": null,
"e": 599,
"s": 591,
"text": "Python3"
},
{
"code": "# Import modulefrom geopy.geocoders import Nominatim # Initialize Nominatim APIgeolocator = Nominatim(user_agent=\"geoapiExercises\") # Assign Latitude & LongitudeLatitude = \"25.594095\"Longitude = \"85.137566\" # Displaying Latitude and Longitudeprint(\"Latitude: \", Latitude)print(\"Longitude: \", Longitude) # Get location with geocodelocation = geolocator.geocode(Latitude+\",\"+Longitude) # Display locationprint(\"\\nLocation of the given Latitude and Longitude:\")print(location)",
"e": 1073,
"s": 599,
"text": null
},
{
"code": null,
"e": 1081,
"s": 1073,
"text": "Output:"
},
{
"code": null,
"e": 1101,
"s": 1083,
"text": "gulshankumarar231"
},
{
"code": null,
"e": 1116,
"s": 1101,
"text": "adnanirshad158"
},
{
"code": null,
"e": 1131,
"s": 1116,
"text": "python-utility"
},
{
"code": null,
"e": 1138,
"s": 1131,
"text": "Python"
}
] |
How to insert an item into array at specific index in JavaScript? | 14 May, 2019
There is no inbuilt method in JavaScript which directly allows for insertion of an element at any arbitrary index of an array. This can be solved using 2 approaches:
Using array.splice(): The array.splice() method is usually used to add or remove items from an array. This method takes in 3 parameters: the index where the element is to be inserted or removed, the number of items to be deleted, and the new items which are to be inserted.
Insertion alone can be done by specifying the number of elements to be deleted as 0. This allows the specified item to be inserted at a particular index with no deletion.
Syntax:
array.splice(index, no_of_items_to_remove, item1 ... itemX)
Example:
<!DOCTYPE html><html> <head> <title> How to insert an item into array at specific index in JavaScript? </title></head> <body> <h1 style="color: green"> GeeksforGeeks </h1> <b>How to insert an item into array at specific index in JavaScript?</b> <p>The original array is: 1, 2, 3, 4, 5</p> <p>Click on the button to insert -99 at index 2</p> <p>The new array is: <span class="outputArray"></span> </p> <button onclick="insertElement()">Insert element</button> <script type="text/javascript"> function insertElement() { let arr = [1, 2, 3, 4, 5]; let index = 2; let element = -99; arr.splice(index, 0, element); document.querySelector('.outputArray').textContent = arr; } </script></body> </html>
Output:
Before clicking the button:
After clicking the button:
Using the traditional for-loop:The for loop can be used to move all the elements from the index (where the new element is to be inserted) to the end of the array, one place after from their current place. The required element can then be placed at the index.
Code:
// shift all elements one place to the back until indexfor (i = arr.length; i > index; i--) { arr[i] = arr[i - 1];} // insert the element at the indexarr[index] = element;
Example:
<!DOCTYPE html><html> <head> <title>How to insert an item into array at specific index in JavaScript?</title></head> <body> <h1 style="color: green"> GeeksforGeeks </h1> <b>How to insert an item into array at specific index in JavaScript? </b> <p>The original array is: 1, 2, 3, 4, 5 </p> <p>Click on the button to insert -99 at index 2 </p> <p>The new array is: <span class="outputArray"></span> </p> <button onclick="insertElement()"> Insert element </button> <script type="text/javascript"> function insertElement() { let arr = [1, 2, 3, 4, 5]; let index = 2; let element = -99; // shift all elements one // place to the back until index for (i = arr.length; i > index; i--) { arr[i] = arr[i - 1]; } // insert the element at the index arr[index] = element; document.querySelector( '.outputArray').textContent = arr; } </script></body> </html>
Output:
Before clicking the button:
After clicking the button:
javascript-array
Picked
JavaScript
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n14 May, 2019"
},
{
"code": null,
"e": 194,
"s": 28,
"text": "There is no inbuilt method in JavaScript which directly allows for insertion of an element at any arbitrary index of an array. This can be solved using 2 approaches:"
},
{
"code": null,
"e": 469,
"s": 194,
"text": "Using array.splice():The array.splice() method is usually used to add or remove items from an array. This method takes in 3 parameters, the index where the element id is to be inserted or removed, the number of items to be deleted and the new items which are to be inserted."
},
{
"code": null,
"e": 643,
"s": 469,
"text": "The only insertion can be done by specifying the number of elements to be deleted to 0. This allows to only insert the specified item at a particular index with no deletion."
},
{
"code": null,
"e": 651,
"s": 643,
"text": "Syntax:"
},
{
"code": null,
"e": 711,
"s": 651,
"text": "array.splice(index, no_of_items_to_remove, item1 ... itemX)"
},
{
"code": null,
"e": 720,
"s": 711,
"text": "Example:"
},
{
"code": "<!DOCTYPE html><html> <head> <title> How to insert an item into array at specific index in JavaScript? </title></head> <body> <h1 style=\"color: green\"> GeeksforGeeks </h1> <b>How to insert an item into array at specific index in JavaScript?</b> <p>The original array is: 1, 2, 3, 4, 5</p> <p>Click on the button to insert -99 at index 2</p> <p>The new array is: <span class=\"outputArray\"></span> </p> <button onclick=\"insertElement()\">Insert element</button> <script type=\"text/javascript\"> function insertElement() { let arr = [1, 2, 3, 4, 5]; let index = 2; let element = -99; arr.splice(index, 0, element); document.querySelector('.outputArray').textContent = arr; } </script></body> </html>",
"e": 1555,
"s": 720,
"text": null
},
{
"code": null,
"e": 1563,
"s": 1555,
"text": "Output:"
},
{
"code": null,
"e": 1591,
"s": 1563,
"text": "Before clicking the button:"
},
{
"code": null,
"e": 1618,
"s": 1591,
"text": "After clicking the button:"
},
{
"code": null,
"e": 1877,
"s": 1618,
"text": "Using the traditional for-loop:The for loop can be used to move all the elements from the index (where the new element is to be inserted) to the end of the array, one place after from their current place. The required element can then be placed at the index."
},
{
"code": null,
"e": 1883,
"s": 1877,
"text": "Code:"
},
{
"code": "// shift all elements one place to the back until indexfor (i = arr.length; i > index; i--) { arr[i] = arr[i - 1];} // insert the element at the indexarr[index] = element;",
"e": 2058,
"s": 1883,
"text": null
},
{
"code": null,
"e": 2067,
"s": 2058,
"text": "Example:"
},
{
"code": "<!DOCTYPE html><html> <head> <title>How to insert an item into array at specific index in JavaScript?</title></head> <body> <h1 style=\"color: green\"> GeeksforGeeks </h1> <b>How to insert an item into array at specific index in JavaScript? </b> <p>The original array is: 1, 2, 3, 4, 5 </p> <p>Click on the button to insert -99 at index 2 </p> <p>The new array is: <span class=\"outputArray\"></span> </p> <button onclick=\"insertElement()\"> Insert element </button> <script type=\"text/javascript\"> function insertElement() { let arr = [1, 2, 3, 4, 5]; let index = 2; let element = -99; // shift all elements one // place to the back until index for (i = arr.length; i > index; i--) { arr[i] = arr[i - 1]; } // insert the element at the index arr[index] = element; document.querySelector( '.outputArray').textContent = arr; } </script></body> </html>",
"e": 3135,
"s": 2067,
"text": null
},
{
"code": null,
"e": 3143,
"s": 3135,
"text": "Output:"
},
{
"code": null,
"e": 3171,
"s": 3143,
"text": "Before clicking the button:"
},
{
"code": null,
"e": 3198,
"s": 3171,
"text": "After clicking the button:"
},
{
"code": null,
"e": 3215,
"s": 3198,
"text": "javascript-array"
},
{
"code": null,
"e": 3222,
"s": 3215,
"text": "Picked"
},
{
"code": null,
"e": 3233,
"s": 3222,
"text": "JavaScript"
},
{
"code": null,
"e": 3250,
"s": 3233,
"text": "Web Technologies"
}
] |
How to sort an Array of Strings in Java | 29 Nov, 2021
Array Of Strings
To sort an array of strings in Java, we can use the Arrays.sort() function.
Java
// A sample Java program to// sort an array of strings// in ascending and descending// orders using Arrays.sort(). import java.util.Arrays;import java.util.Collections; public class SortExample { public static void main(String[] args) { String arr[] = { "practice.geeksforgeeks.org", "quiz.geeksforgeeks.org", "code.geeksforgeeks.org" }; // Sorts arr[] in ascending order Arrays.sort(arr); System.out.printf("Modified arr[] : \n%s\n\n", Arrays.toString(arr)); // Sorts arr[] in descending order Arrays.sort(arr, Collections.reverseOrder()); System.out.printf("Modified arr[] : \n%s\n\n", Arrays.toString(arr)); }}
Modified arr[] : [code.geeksforgeeks.org, practice.geeksforgeeks.org, quiz.geeksforgeeks.org]
Modified arr[] : [quiz.geeksforgeeks.org, practice.geeksforgeeks.org, code.geeksforgeeks.org]
ArrayList Of Strings
If we have an ArrayList to sort, we can use Collections.sort()
Java
// A sample Java program to sort// an arrayList of strings// in ascending and descending// orders using Collections.sort(). import java.util.ArrayList;import java.util.Collections; public class SortExample { public static void main(String[] args) { ArrayList<String> al = new ArrayList<String>(); al.add("practice.geeksforgeeks.org"); al.add("quiz.geeksforgeeks.org"); al.add("code.geeksforgeeks.org"); // Sorts ArrayList in ascending order Collections.sort(al); System.out.println( "Modified ArrayList : \n" + al); // Sorts arr[] in descending order Collections.sort(al, Collections.reverseOrder()); System.out.println( "Modified ArrayList : \n" + al); }}
Modified ArrayList : [code.geeksforgeeks.org, practice.geeksforgeeks.org, quiz.geeksforgeeks.org] Modified ArrayList : [quiz.geeksforgeeks.org, practice.geeksforgeeks.org, code.geeksforgeeks.org]
surinderdawra388
Java-ArrayList
Java-Arrays
Arrays
Java
Sorting
Strings
Arrays
Strings
Sorting
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Nov, 2021"
},
{
"code": null,
"e": 45,
"s": 28,
"text": "Array Of Strings"
},
{
"code": null,
"e": 118,
"s": 45,
"text": "To sort an array of strings in Java, we can use Arrays.sort() function. "
},
{
"code": null,
"e": 123,
"s": 118,
"text": "Java"
},
{
"code": "// A sample Java program to// sort an array of strings// in ascending and descending// orders using Arrays.sort(). import java.util.Arrays;import java.util.Collections; public class SortExample { public static void main(String[] args) { String arr[] = { \"practice.geeksforgeeks.org\", \"quiz.geeksforgeeks.org\", \"code.geeksforgeeks.org\" }; // Sorts arr[] in ascending order Arrays.sort(arr); System.out.printf(\"Modified arr[] : \\n%s\\n\\n\", Arrays.toString(arr)); // Sorts arr[] in descending order Arrays.sort(arr, Collections.reverseOrder()); System.out.printf(\"Modified arr[] : \\n%s\\n\\n\", Arrays.toString(arr)); }}",
"e": 896,
"s": 123,
"text": null
},
{
"code": null,
"e": 914,
"s": 896,
"text": "Modified arr[] : "
},
{
"code": null,
"e": 1009,
"s": 914,
"text": "Modified arr[] : [quiz.geeksforgeeks.org, practice.geeksforgeeks.org, code.geeksforgeeks.org] "
},
{
"code": null,
"e": 1032,
"s": 1011,
"text": "ArrayList Of Strings"
},
{
"code": null,
"e": 1096,
"s": 1032,
"text": "If we have an ArrayList to sort, we can use Collections.sort() "
},
{
"code": null,
"e": 1101,
"s": 1096,
"text": "Java"
},
{
"code": "// A sample Java program to sort// an arrayList of strings// in ascending and descending// orders using Collections.sort(). import java.util.ArrayList;import java.util.Collections; public class SortExample { public static void main(String[] args) { ArrayList<String> al = new ArrayList<String>(); al.add(\"practice.geeksforgeeks.org\"); al.add(\"quiz.geeksforgeeks.org\"); al.add(\"code.geeksforgeeks.org\"); // Sorts ArrayList in ascending order Collections.sort(al); System.out.println( \"Modified ArrayList : \\n\" + al); // Sorts arr[] in descending order Collections.sort(al, Collections.reverseOrder()); System.out.println( \"Modified ArrayList : \\n\" + al); }}",
"e": 1886,
"s": 1101,
"text": null
},
{
"code": null,
"e": 2007,
"s": 1886,
"text": "Modified ArrayList : Modified ArrayList : [quiz.geeksforgeeks.org, practice.geeksforgeeks.org, code.geeksforgeeks.org] "
},
{
"code": null,
"e": 2026,
"s": 2009,
"text": "surinderdawra388"
},
{
"code": null,
"e": 2041,
"s": 2026,
"text": "Java-ArrayList"
},
{
"code": null,
"e": 2053,
"s": 2041,
"text": "Java-Arrays"
},
{
"code": null,
"e": 2060,
"s": 2053,
"text": "Arrays"
},
{
"code": null,
"e": 2065,
"s": 2060,
"text": "Java"
},
{
"code": null,
"e": 2073,
"s": 2065,
"text": "Sorting"
},
{
"code": null,
"e": 2081,
"s": 2073,
"text": "Strings"
},
{
"code": null,
"e": 2088,
"s": 2081,
"text": "Arrays"
},
{
"code": null,
"e": 2096,
"s": 2088,
"text": "Strings"
},
{
"code": null,
"e": 2104,
"s": 2096,
"text": "Sorting"
},
{
"code": null,
"e": 2109,
"s": 2104,
"text": "Java"
}
] |
Get a specific row in a given Pandas DataFrame | 20 Aug, 2020
In a Pandas DataFrame, we can find the values of a specified row using the function iloc(). We pass the row number to this function as a parameter.
Syntax : pandas.DataFrame.iloc[]Parameters :
Index Position : Index position of rows in integer or list of integer.
Return type : Data frame or Series depending on parameters
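As a quick, hypothetical illustration of these return types (the column names below are made up for the sketch):

```python
import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30], 'B': [40, 50, 60]})

row = df.iloc[1]         # a single integer index returns a Series
print(type(row).__name__, row['A'], row['B'])

rows = df.iloc[[0, 2]]   # a list of integers returns a DataFrame
print(type(rows).__name__, len(rows))
```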
Example 1 :
# importing the moduleimport pandas as pd # creating a DataFramedata = {'1' : ['g', 'e', 'e'], '2' : ['k', 's', 'f'], '3' : ['o', 'r', 'g'], '4' : ['e', 'e', 'k']}df = pd.DataFrame(data)print("Original DataFrame")display(df) print("Value of row 1")display(df.iloc[1])
Output :
Example 2:
# importing the moduleimport pandas as pd # creating a DataFramedata = {'Name' : ['Simon', 'Marsh', 'Gaurav', 'Alex', 'Selena'], 'Maths' : [8, 5, 6, 9, 7], 'Science' : [7, 9, 5, 4, 7], 'English' : [7, 4, 7, 6, 8]} df = pd.DataFrame(data)print("Original DataFrame")display(df) print("Value of row 3 (Alex)")display(df.iloc[3])
Output :
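As a supplementary sketch (an addition, not part of the original article): besides a single integer position, iloc[] also accepts a list of positions, and that choice decides whether a Series or a DataFrame comes back.

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Simon', 'Marsh', 'Gaurav'],
                   'Maths': [8, 5, 6]})

# A single position returns one row as a Series
row = df.iloc[1]
print(type(row).__name__)    # Series

# A list of positions returns several rows as a DataFrame
rows = df.iloc[[0, 2]]
print(type(rows).__name__)   # DataFrame
print(list(rows['Name']))    # ['Simon', 'Gaurav']
```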
Python pandas-dataFrame
Python Pandas-exercise
Python-pandas
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Super Keyword in Java | 16 Mar, 2020
The super keyword in Java is a reference variable that is used to refer to parent-class objects. The keyword “super” came into the picture with the concept of inheritance. It is majorly used in the following contexts:
1. Use of super with variables: This scenario occurs when a derived class and its base class have data members with the same name. In that case there is a possibility of ambiguity for the JVM. We can understand it more clearly using this code snippet:
/* Base class vehicle */
class Vehicle
{
    int maxSpeed = 120;
}

/* sub class Car extending vehicle */
class Car extends Vehicle
{
    int maxSpeed = 180;

    void display()
    {
        /* print maxSpeed of base class (vehicle) */
        System.out.println("Maximum Speed: " + super.maxSpeed);
    }
}

/* Driver program to test */
class Test
{
    public static void main(String[] args)
    {
        Car small = new Car();
        small.display();
    }
}
Output:
Maximum Speed: 120
In the above example, both the base class and the subclass have a member maxSpeed. We could access the base class's maxSpeed in the subclass using the super keyword.
2. Use of super with methods: This is used when we want to call a parent-class method. So whenever a parent and a child class have methods with the same name, we use the super keyword to resolve the ambiguity. This code snippet helps to understand the said usage of super:
/* Base class Person */
class Person
{
    void message()
    {
        System.out.println("This is person class");
    }
}

/* Subclass Student */
class Student extends Person
{
    void message()
    {
        System.out.println("This is student class");
    }

    // Note that display() is only in Student class
    void display()
    {
        // will invoke or call current class message() method
        message();

        // will invoke or call parent class message() method
        super.message();
    }
}

/* Driver program to test */
class Test
{
    public static void main(String args[])
    {
        Student s = new Student();

        // calling display() of Student
        s.display();
    }
}
Output:
This is student class
This is person class
In the above example, we have seen that if we only call the method message(), the current class's message() is invoked; with the use of the super keyword, the superclass's message() can also be invoked.
3. Use of super with constructors: The super keyword can also be used to access the parent-class constructor. One more important thing is that ‘super’ can call both parameterized and parameterless constructors, depending upon the situation. Following is the code snippet to explain the above concept:
/* superclass Person */
class Person
{
    Person()
    {
        System.out.println("Person class Constructor");
    }
}

/* subclass Student extending the Person class */
class Student extends Person
{
    Student()
    {
        // invoke or call parent class constructor
        super();
        System.out.println("Student class Constructor");
    }
}

/* Driver program to test*/
class Test
{
    public static void main(String[] args)
    {
        Student s = new Student();
    }
}
Output:
Person class Constructor
Student class Constructor
In the above example, we have called the superclass constructor using the keyword ‘super’ from the subclass constructor.
Other Important points:
Call to super() must be first statement in Derived(Student) Class constructor.
If a constructor does not explicitly invoke a superclass constructor, the Java compiler automatically inserts a call to the no-argument constructor of the superclass. If the superclass does not have a no-argument constructor, you will get a compile-time error. Object does have such a constructor, so if Object is the only superclass, there is no problem.
If a subclass constructor invokes a constructor of its superclass, either explicitly or implicitly, a whole chain of constructors is called, all the way back to the constructor of Object. This is called constructor chaining.
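The chaining described above can be seen in a minimal sketch (the classes A, B and C below are hypothetical, not from the article; a StringBuilder log records the order in which the constructors run):

```java
// Each constructor implicitly calls super() first, so construction
// runs from the top of the hierarchy down to the most derived class.
class A {
    static StringBuilder log = new StringBuilder();
    A() { log.append("A"); }
}

class B extends A {
    B() { log.append("B"); }
}

class C extends B {
    C() { log.append("C"); }
}

public class ChainingDemo {
    public static void main(String[] args) {
        new C();
        System.out.println(A.log);   // ABC
    }
}
```

Constructing a single C appends "A", then "B", then "C", showing that the superclass constructors always finish before the subclass body runs.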
This article is contributed by Vishwajeet Srivastava. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
vinaydevs00
Java-keyword
Java
School Programming
Java
Search in a matrix | Practice | GeeksforGeeks | Given a matrix mat[][] of size N x M, where every row and column is sorted in increasing order, and a number X is given. The task is to find whether element X is present in the matrix or not.
Example 1:
Input:
N = 3, M = 3
mat[][] = 3 30 38
44 52 54
57 60 69
X = 62
Output:
0
Explanation:
62 is not present in the
matrix, so output is 0
Example 2:
Input:
N = 1, M = 6
mat[][] = 18 21 27 38 55 67
X = 55
Output:
1
Explanation:
55 is present in the
matrix at 5th cell.
Your Task:
You don't need to read input or print anything. You just have to complete the function matSearch() which takes a 2D matrix mat[][], its dimensions N and M and integer X as inputs and returns 1 if the element X is present in the matrix and 0 otherwise.
Expected Time Complexity: O(N+M).
Expected Auxiliary Space: O(1).
Constraints:
1 <= N, M <= 1005
1 <= mat[][] <= 10000000
1<= X <= 10000000
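The expected O(N+M) complexity points at the standard "staircase" search: start at the top-right corner, step left when the current element is too large and step down when it is too small. A sketch in Python (the function name and 1/0 return values follow the task statement):

```python
def matSearch(mat, N, M, X):
    # Start at the top-right corner of the matrix
    i, j = 0, M - 1
    while i < N and j >= 0:
        if mat[i][j] == X:
            return 1          # found
        elif mat[i][j] > X:
            j -= 1            # everything below in this column is even larger
        else:
            i += 1            # everything to the left in this row is smaller
    return 0                  # not present

mat = [[3, 30, 38],
       [44, 52, 54],
       [57, 60, 69]]
print(matSearch(mat, 3, 3, 62))   # 0
print(matSearch(mat, 3, 3, 54))   # 1
```

Each step discards either one row or one column, so at most N + M steps are taken.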
0
manojyadavblack1 day ago
//I have explained 3 methods to solve this problem
// Brute ---> Better ---> Optimal
//This function is associated with method -2
int performBinarySearch(int arr[], int target, int l , int h)
{
if(l<=h)
{
int mid=(l+h)/2;
if(target==arr[mid]) return mid;
else if(target>arr[mid]) return performBinarySearch(arr,target,mid+1,h);
else return performBinarySearch(arr,target,l,mid-1);
}
return -1;
}
int matSearch (int N, int M, int matrix[][M], int target)
{
int row=N;
int col=M;
//Method -1
//Bruteforce approach
// for(int i=0;i<N;i++)
// for(int j=0;j<col;j++)
// if(target==matrix[i][j]) return 1;
// return 0;
//Method -2
// Performing Binary search row wise
// for(int i=0;i<row;i++)
// {
// int index=performBinarySearch( matrix[i], target, 0,col-1);
// if(index!=-1) return 1;
// }
// return 0;
//Method-3
//Performing binary search using the given property that the matrix is row- and column-wise sorted
int i=0;
int j=col-1;
while(i<row && j>=0)
{
if(matrix[i][j]==target) return 1;
else if(target>matrix[i][j]) i++;
else j--;
}
return 0;
}
0
imjunior4712 days ago
for(int r = 0; r < N; r++){
    for(int c = M-1; c >= 0; c--){
        if(X > mat[r][c])
            break;
        if(X == mat[r][c])
            return 1;
    }
}
return 0;
0
snehabaipalli2 days ago
Python Solution:
class Solution:
    def matSearch(self, mat, N, M, X):
        # Complete this function
        count = 0
        for i in range(N):
            for j in range(M):
                if(mat[i][j] == X):
                    count = 1
                    break
        if(count == 0):
            return 0
        elif(count == 1):
            return 1
-3
aamitprasad6185 days ago
CPP
vector<int>arr;
for(int i=0;i<N;i++){
for(int j=0;j<M;j++){
arr.push_back(mat[i][j]);
}
}
for(int i=0;i<arr.size();i++){
if(arr[i]==X){
return 1;
}
}
return 0;
0
abhishek0908022 weeks ago
int matSearch (vector <vector <int>> &mat, int N, int M, int X)
{
int i=0; int j=M-1;
while(i<N and j>=0)
{
int temp=mat[i][j];
if(temp<X)
i++;
else if(temp>X)
j--;
else if(temp==X)
return 1;
}
return 0;
}
Python | Pandas DatetimeIndex.freq | 24 Dec, 2018
Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric python packages. Pandas is one of those packages and makes importing and analyzing data much easier.
Pandas DatetimeIndex.freq attribute returns the frequency object if it is set in the DatetimeIndex object. If the frequency is not set then it returns None.
Syntax: DatetimeIndex.freq
Return: frequency object
Example #1: Use DatetimeIndex.freq attribute to find the frequency for the given DatetimeIndex object.
# importing pandas as pd
import pandas as pd

# Create the DatetimeIndex
# Here 'BQ' represents Business quarter frequency
didx = pd.DatetimeIndex(start ='2014-08-01 10:05:45', freq ='BQ',
                        periods = 5, tz ='Asia/Calcutta')

# Print the DatetimeIndex
print(didx)
Output :
Now we want to find the value of frequency for the given DatetimeIndex object.
# find the value of frequency
didx.freq
Output :
As we can see in the output, the attribute has returned a frequency object for the given DatetimeIndex object.

Example #2: Use the DatetimeIndex.freq attribute to find the frequency of the given DatetimeIndex object.
# importing pandas as pd
import pandas as pd

# Create the DatetimeIndex
# Here 'CBMS' represents custom business month start frequency
didx = pd.DatetimeIndex(start ='2000-01-10 06:30', freq ='CBMS',
                        periods = 5, tz ='Asia/Calcutta')

# Print the DatetimeIndex
print(didx)
Output :
Now we want to find the value of frequency for the given DatetimeIndex object.
# find the value of frequency
didx.freq
Output :
As we can see in the output, the attribute has returned a frequency object for the given DatetimeIndex object. The didx DatetimeIndex object has the custom business month start frequency.
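Note: in recent pandas releases the start/freq/periods keyword form of the DatetimeIndex constructor used above has been removed; pd.date_range builds an equivalent index. A small sketch of how freq behaves, assuming a current pandas version:

```python
import pandas as pd

# date_range returns a DatetimeIndex with its frequency attribute set
didx = pd.date_range(start='2014-08-01', periods=5, freq='D')
print(didx.freq)       # <Day>
print(didx.freqstr)    # D

# An index built from arbitrary timestamps has no frequency set
didx2 = pd.DatetimeIndex(['2014-08-01', '2014-08-05'])
print(didx2.freq)      # None
```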
Python pandas-datetimeIndex
Python-pandas
Python
Android - Gestures | Android provides special types of touch screen events such as pinch, double tap, scroll, long press and fling. These are all known as gestures.
Android provides the GestureDetector class to receive motion events and tell us whether these events correspond to gestures or not. To use it, you need to create an object of GestureDetector and then extend another class with GestureDetector.SimpleOnGestureListener to act as a listener and override some methods. Its syntax is given below −
GestureDetector myG;
myG = new GestureDetector(this,new Gesture());
class Gesture extends GestureDetector.SimpleOnGestureListener{
public boolean onSingleTapUp(MotionEvent ev) {
}
public void onLongPress(MotionEvent ev) {
}
public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX,
float distanceY) {
}
public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX,
float velocityY) {
}
}
Android provides the ScaleGestureDetector class to handle gestures such as pinch. In order to use it, you need to instantiate an object of this class. Its syntax is as follows −
ScaleGestureDetector SGD;
SGD = new ScaleGestureDetector(this,new ScaleListener());
The first parameter is the context and the second parameter is the event listener. We have to define the event listener and override the onTouchEvent function to make it work. Its syntax is given below −
public boolean onTouchEvent(MotionEvent ev) {
SGD.onTouchEvent(ev);
return true;
}
private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
@Override
public boolean onScale(ScaleGestureDetector detector) {
float scale = detector.getScaleFactor();
return true;
}
}
Apart from pinch gestures, there are other methods available that notify more about touch events. They are listed below −

getEventTime()
This method gets the event time of the current event being processed.

getFocusX()
This method gets the X coordinate of the current gesture's focal point.

getFocusY()
This method gets the Y coordinate of the current gesture's focal point.

getTimeDelta()
This method returns the time difference in milliseconds between the previous accepted scaling event and the current scaling event.

isInProgress()
This method returns true if a scale gesture is in progress.

onTouchEvent(MotionEvent event)
This method accepts MotionEvents and dispatches events when appropriate.
Here is an example demonstrating the use of ScaleGestureDetector class. It creates a basic application that allows you to zoom in and out through pinch.
To experiment with this example, you can run it on an actual device or in an emulator with touch screen enabled.
Following is the content of the modified main activity file src/MainActivity.java.
package com.example.sairamkrishna.myapplication;

import android.app.Activity;
import android.graphics.Matrix;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;
import android.widget.ImageView;

public class MainActivity extends Activity {
   private ImageView iv;
   private Matrix matrix = new Matrix();
   private float scale = 1f;
   private ScaleGestureDetector SGD;

   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);

      iv = (ImageView) findViewById(R.id.imageView);
      SGD = new ScaleGestureDetector(this, new ScaleListener());
   }

   public boolean onTouchEvent(MotionEvent ev) {
      SGD.onTouchEvent(ev);
      return true;
   }

   private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
      @Override
      public boolean onScale(ScaleGestureDetector detector) {
         scale *= detector.getScaleFactor();
         scale = Math.max(0.1f, Math.min(scale, 5.0f));
         matrix.setScale(scale, scale);
         iv.setImageMatrix(matrix);
         return true;
      }
   }
}
Following is the modified content of the xml res/layout/activity_main.xml.
<RelativeLayout
   xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:paddingLeft="@dimen/activity_horizontal_margin"
   android:paddingRight="@dimen/activity_horizontal_margin"
   android:paddingTop="@dimen/activity_vertical_margin"
   android:paddingBottom="@dimen/activity_vertical_margin"
   tools:context=".MainActivity" >

   <TextView
      android:text="Gestures Example"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:id="@+id/textview"
      android:textSize="35dp"
      android:layout_alignParentTop="true"
      android:layout_centerHorizontal="true" />

   <TextView
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Tutorials point"
      android:id="@+id/textView"
      android:layout_below="@+id/textview"
      android:layout_centerHorizontal="true"
      android:textColor="#ff7aff24"
      android:textSize="35dp" />

   <ImageView
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:id="@+id/imageView"
      android:src="@drawable/abc"
      android:scaleType="matrix"
      android:layout_below="@+id/textView"
      android:layout_alignParentLeft="true"
      android:layout_alignParentStart="true"
      android:layout_alignParentBottom="true"
      android:layout_alignParentRight="true"
      android:layout_alignParentEnd="true" />

</RelativeLayout>
Following is the content of res/values/string.xml.
<resources>
   <string name="app_name">My Application</string>
</resources>
Following is the content of the AndroidManifest.xml file.
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="com.example.sairamkrishna.myapplication" >

   <application
      android:allowBackup="true"
      android:icon="@drawable/ic_launcher"
      android:label="@string/app_name"
      android:theme="@style/AppTheme" >

      <activity
         android:name="com.example.sairamkrishna.myapplication.MainActivity"
         android:label="@string/app_name" >

         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>

      </activity>

   </application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. The sample output should be like this −
Now place two fingers on the Android screen and move them apart, and you will see the Android image zoom in. It is shown in the image below −
Now place two fingers on the Android screen again and pinch them together, and you will see the Android image shrink. It is shown in the image below −
{
"code": null,
"e": 3757,
"s": 3607,
"text": "Android provides special types of touch screen events such as pinch , double tap, scrolls , long presses and flinch. These are all known as gestures."
},
{
"code": null,
"e": 4093,
"s": 3757,
"text": "Android provides GestureDetector class to receive motion events and tell us that these events correspond to gestures or not. To use it , you need to create an object of GestureDetector and then extend another class with GestureDetector.SimpleOnGestureListener to act as a listener and override some methods. Its syntax is given below −"
},
{
"code": null,
"e": 4552,
"s": 4093,
"text": "GestureDetector myG;\nmyG = new GestureDetector(this,new Gesture());\n \nclass Gesture extends GestureDetector.SimpleOnGestureListener{\n public boolean onSingleTapUp(MotionEvent ev) {\n }\n \n public void onLongPress(MotionEvent ev) {\n }\n \n public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX,\n float distanceY) {\n }\n \n public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX,\n float velocityY) {\n }\n}"
},
{
"code": null,
"e": 4728,
"s": 4552,
"text": "Android provides ScaleGestureDetector class to handle gestures like pinch e.t.c. In order to use it, you need to instantiate an object of this class. Its syntax is as follow −"
},
{
"code": null,
"e": 4812,
"s": 4728,
"text": "ScaleGestureDetector SGD;\nSGD = new ScaleGestureDetector(this,new ScaleListener());"
},
{
"code": null,
"e": 5017,
"s": 4812,
"text": "The first parameter is the context and the second parameter is the event listener. We have to define the event listener and override a function OnTouchEvent to make it working. Its syntax is given below −"
},
{
"code": null,
"e": 5340,
"s": 5017,
"text": "public boolean onTouchEvent(MotionEvent ev) {\n SGD.onTouchEvent(ev);\n return true;\n}\n\nprivate class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {\n @Override\n public boolean onScale(ScaleGestureDetector detector) {\n float scale = detector.getScaleFactor();\n return true;\n }\n}"
},
{
"code": null,
"e": 5467,
"s": 5340,
"text": "Apart from the pinch gestures , there are other methods available that notify more about touch events. They are listed below −"
},
{
"code": null,
"e": 5482,
"s": 5467,
"text": "getEventTime()"
},
{
"code": null,
"e": 5552,
"s": 5482,
"text": "This method get the event time of the current event being processed.."
},
{
"code": null,
"e": 5564,
"s": 5552,
"text": "getFocusX()"
},
{
"code": null,
"e": 5635,
"s": 5564,
"text": "This method get the X coordinate of the current gesture's focal point."
},
{
"code": null,
"e": 5647,
"s": 5635,
"text": "getFocusY()"
},
{
"code": null,
"e": 5718,
"s": 5647,
"text": "This method get the Y coordinate of the current gesture's focal point."
},
{
"code": null,
"e": 5733,
"s": 5718,
"text": "getTimeDelta()"
},
{
"code": null,
"e": 5863,
"s": 5733,
"text": "This method return the time difference in milliseconds between the previous accepted scaling event and the current scaling event."
},
{
"code": null,
"e": 5878,
"s": 5863,
"text": "isInProgress()"
},
{
"code": null,
"e": 5939,
"s": 5878,
"text": "This method returns true if a scale gesture is in progress.."
},
{
"code": null,
"e": 5971,
"s": 5939,
"text": "onTouchEvent(MotionEvent event)"
},
{
"code": null,
"e": 6044,
"s": 5971,
"text": "This method accepts MotionEvents and dispatches events when appropriate."
},
{
"code": null,
"e": 6197,
"s": 6044,
"text": "Here is an example demonstrating the use of ScaleGestureDetector class. It creates a basic application that allows you to zoom in and out through pinch."
},
{
"code": null,
"e": 6313,
"s": 6197,
"text": "To experiment with this example , you can run this on an actual device or in an emulator with touch screen enabled."
},
{
"code": null,
"e": 6396,
"s": 6313,
"text": "Following is the content of the modified main activity file src/MainActivity.java."
},
{
"code": null,
"e": 7598,
"s": 6396,
"text": "package com.example.sairamkrishna.myapplication;\n\nimport android.app.Activity;\nimport android.graphics.Matrix;\nimport android.os.Bundle;\n\nimport android.view.MotionEvent;\nimport android.view.ScaleGestureDetector;\nimport android.widget.ImageView;\n\npublic class MainActivity extends Activity {\n private ImageView iv;\n private Matrix matrix = new Matrix();\n private float scale = 1f;\n private ScaleGestureDetector SGD;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n\n iv=(ImageView)findViewById(R.id.imageView);\n SGD = new ScaleGestureDetector(this,new ScaleListener());\n }\n\n public boolean onTouchEvent(MotionEvent ev) {\n SGD.onTouchEvent(ev);\n return true;\n }\n\n private class ScaleListener extends ScaleGestureDetector.\n SimpleOnScaleGestureListener {\n \n @Override\n public boolean onScale(ScaleGestureDetector detector) {\n scale *= detector.getScaleFactor();\n scale = Math.max(0.1f, Math.min(scale, 5.0f));\n matrix.setScale(scale, scale);\n iv.setImageMatrix(matrix);\n return true;\n }\n }\n}"
},
{
"code": null,
"e": 7673,
"s": 7598,
"text": "Following is the modified content of the xml res/layout/activity_main.xml."
},
{
"code": null,
"e": 9276,
"s": 7673,
"text": "<RelativeLayout \n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\" \n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\" \n android:paddingLeft=\"@dimen/activity_horizontal_margin\"\n android:paddingRight=\"@dimen/activity_horizontal_margin\"\n android:paddingTop=\"@dimen/activity_vertical_margin\"\n android:paddingBottom=\"@dimen/activity_vertical_margin\" \n tools:context=\".MainActivity\" >\n \n <TextView android:text=\"Gestures \n Example\" android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/textview\"\n android:textSize=\"35dp\"\n android:layout_alignParentTop=\"true\"\n android:layout_centerHorizontal=\"true\" />\n \n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Tutorials point\"\n android:id=\"@+id/textView\"\n android:layout_below=\"@+id/textview\"\n android:layout_centerHorizontal=\"true\"\n android:textColor=\"#ff7aff24\"\n android:textSize=\"35dp\" />\n \n <ImageView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/imageView\"\n android:src=\"@drawable/abc\"\n android:scaleType=\"matrix\"\n android:layout_below=\"@+id/textView\"\n android:layout_alignParentLeft=\"true\"\n android:layout_alignParentStart=\"true\"\n android:layout_alignParentBottom=\"true\"\n android:layout_alignParentRight=\"true\"\n android:layout_alignParentEnd=\"true\" />\n \n</RelativeLayout>"
},
{
"code": null,
"e": 9331,
"s": 9276,
"text": "Following is the content of the res/values/string.xml."
},
{
"code": null,
"e": 9406,
"s": 9331,
"text": "<resources>\n <string name=\"app_name>My Application</string>\n</resources>"
},
{
"code": null,
"e": 9460,
"s": 9406,
"text": "Following is the content of AndroidManifest.xml file."
},
{
"code": null,
"e": 10199,
"s": 9460,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.example.sairamkrishna.myapplication\" >\n\n <application\n android:allowBackup=\"true\"\n android:icon=\"@drawable/ic_launcher\"\n android:label=\"@string/app_name\"\n android:theme=\"@style/AppTheme\" >\n \n <activity\n android:name=\"com.example.sairamkrishna.myapplicationMainActivity\"\n android:label=\"@string/app_name\" >\n \n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n \n </activity>\n \n </application>\n</manifest>"
},
{
"code": null,
"e": 10473,
"s": 10199,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android studio, open one of your project's activity files and click Run icon from the toolbar.The sample output should be like this − "
},
{
"code": null,
"e": 10632,
"s": 10473,
"text": "Now just place two fingers over android screen , and separate them a part and you will see that the android image is zooming. It is shown in the image below −"
},
{
"code": null,
"e": 10794,
"s": 10632,
"text": "Now again place two fingers over android screen, and try to close them and you will see that the android image is now shrinking. It is shown in the image below −"
},
{
"code": null,
"e": 10829,
"s": 10794,
"text": "\n 46 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 10841,
"s": 10829,
"text": " Aditya Dua"
},
{
"code": null,
"e": 10876,
"s": 10841,
"text": "\n 32 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 10890,
"s": 10876,
"text": " Sharad Kumar"
},
{
"code": null,
"e": 10922,
"s": 10890,
"text": "\n 9 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 10939,
"s": 10922,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 10974,
"s": 10939,
"text": "\n 14 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 10991,
"s": 10974,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 11026,
"s": 10991,
"text": "\n 15 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 11043,
"s": 11026,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 11076,
"s": 11043,
"text": "\n 10 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 11093,
"s": 11076,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 11100,
"s": 11093,
"text": " Print"
},
{
"code": null,
"e": 11111,
"s": 11100,
"text": " Add Notes"
}
] |
MLflow Part 2: Deploying a Tracking Server to Minikube! | by David Hundley | Towards Data Science | 10/15/20 Update: In writing my next post in this series, I found several bugs that prevented me from appropriately deploying to Minikube. To that end, I’ve updated a number of things to get you up and going with a WORKING instance! 😃
Welcome back, friends! We’re back with our continued mini-series on MLflow. In case you missed part one, be sure to check it out here. The first post was a super basic introduction to logging basic parameters, metrics, and artifacts with MLflow. That post just had us log those items to a spot on our local machine, which is not an ideal practice. In a company context, you ideally want to have all those things logged to a central, reusable location. That’s what we’ll be tackling in today’s post! And of course, you can find all my code on GitHub at this link.
So to be clear, we’re going to be covering some advanced topics that require a bit of foreknowledge about Docker and Kubernetes. I personally plan to write posts on those at a later date, but for now, I’d recommend the following resources if you want to get a quick start on working with Docker and Kubernetes:
Docker 101 Tutorial
Learn Kubernetes Basics
Now if you know Kubernetes, chances are that you are familiar with Minikube, but in case you aren’t, Minikube is basically a small VM you can run on your local machine to start a sandbox environment to test out Kubernetes concepts. Once Minikube is up and running, it’ll look very familiar to those of you who have worked in legit Kubernetes environments. The instructions to set up Minikube are nicely documented in this page, BUT in order to get Minikube working, we need to get a couple additional things added later on down this post.
Before going further, I think a picture is worth a thousand words, so below is a tiny picture of the architecture we’ll be building here.
Alrighty, so on the right there we have our Minikube environment. Again, Minikube is highly representative of a legit Kubernetes environment, so the pieces inside Minikube are all things we’d see in any Kubernetes workspace. As such, we can see that MLflow’s tracking server is deployed inside a Deployment. That Deployment interacts with the outside world by connecting a service to an ingress (which is why the ingress spans both the inside and outside in our picture), and then we can view the tracking server interface inside our web browser. Simple enough, right?
Okay, so step 1 is going to be to create a Docker image that runs the MLflow tracking server. This is really simple, and I personally have uploaded my public image in case you want to skip this first step. (Here is that image in my personal Docker Hub.) The Dockerfile simply builds on top of a basic Python image, installs MLflow, and sets the proper entrypoint command. That looks like this:
# Defining base image
FROM python:3.8.2-slim

# Installing packages from PyPi
RUN pip install mlflow[extras]==1.9.1 && \
    pip install psycopg2-binary==2.8.5 && \
    pip install boto3==1.15.16

# Defining start up command
EXPOSE 5000
ENTRYPOINT ["mlflow", "server"]
You know the drill from here: build and push out to Docker Hub! (Or just use mine.)
Where our 10–15–20 update begins!
Okay, in the previous iteration of this post, I attempted to use a simple PVC for storage of the metadata and artifacts. Turns out that it is not so easy. Instead, we’re going to have to do a little extra legwork to get this going on Minikube. To that end, we’re going to configure a Postgres store for the backend metadata and an object store called Minio for our artifacts. (More on Minio below in case you haven’t heard of it.) If both of these things sound daunting to you, that’s okay! You can simply use my code to get up and going.
Alrighty, so let’s tackle the Postgres deployment. Here is the K8s manifest code for that:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mlflow-postgres-config
  labels:
    app: mlflow-postgres
data:
  POSTGRES_DB: mlflow_db
  POSTGRES_USER: mlflow_user
  POSTGRES_PASSWORD: mlflow_pwd
  PGDATA: /var/lib/postgresql/mlflow/data
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mlflow-postgres
  labels:
    app: mlflow-postgres
spec:
  selector:
    matchLabels:
      app: mlflow-postgres
  serviceName: "mlflow-postgres-service"
  replicas: 1
  template:
    metadata:
      labels:
        app: mlflow-postgres
    spec:
      containers:
      - name: mlflow-postgres
        image: postgres:11
        ports:
        - containerPort: 5432
          protocol: TCP
        envFrom:
        - configMapRef:
            name: mlflow-postgres-config
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
        volumeMounts:
        - name: mlflow-pvc
          mountPath: /var/lib/postgresql/mlflow
  volumeClaimTemplates:
  - metadata:
      name: mlflow-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-postgres-service
  labels:
    svc: mlflow-postgres-service
spec:
  type: NodePort
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    app: mlflow-postgres
So I’m not going to go line by line with everything, but at a thousand foot level, this is going to spin up a Postgres instance with 100Mi of storage and the appropriate config information defined at the top of the code. You can change those variables if you’d like. Remember, we’re just learning here, so these variables are obviously exposed. In the real world, this is a HUGE security concern, so don’t follow my lead here if you’re going to use this for a legit deployment.
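As an aside, those same ConfigMap values are exactly what later get assembled into the `--backend-store-uri` flag handed to the MLflow server. Here is a minimal sketch of that assembly in plain Python — note the host IP below is purely a placeholder; you would substitute the CLUSTER-IP of your own `mlflow-postgres-service`:

```python
def postgres_uri(user, password, host, port, db):
    """Assemble the SQLAlchemy-style URI MLflow's --backend-store-uri flag expects."""
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

# The host here is a PLACEHOLDER -- swap in your own
# mlflow-postgres-service CLUSTER-IP from `kubectl get services`.
uri = postgres_uri("mlflow_user", "mlflow_pwd", "10.0.0.1", 5432, "mlflow_db")
print(uri)  # -> postgresql://mlflow_user:mlflow_pwd@10.0.0.1:5432/mlflow_db
```

Nothing magic here — it just makes explicit how the user, password, database, and service address from the config feed the connection string.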
Alright, with that deployed, we’re ready to tackle our object store: Minio. Now if you’re totally new to Minio like me, it is an object store you can deploy to K8s that emulates Amazon Web Services’ (AWS’s) S3 service. The deployment syntax for that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-minio
spec:
  selector:
    matchLabels:
      app: mlflow-minio
  template:
    metadata:
      labels:
        app: mlflow-minio
    spec:
      volumes:
      - name: mlflow-pvc
        persistentVolumeClaim:
          claimName: mlflow-pvc
      containers:
      - name: mlflow-minio
        image: minio/minio:latest
        args:
        - server
        - /data
        volumeMounts:
        - name: mlflow-pvc
          mountPath: '/data'
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-minio-service
spec:
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: mlflow-minio
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mlflow-minio-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: mlflow-minio.local
    http:
      paths:
      - backend:
          serviceName: mlflow-minio-service
          servicePort: 9000
        path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mlflow-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
So again at a high level, we’re deploying a Minio object store backed by a PVC with 100Mi of data. You can also see in the environment variables of the deployment that we define the access key ID and secret access key. These very much correlate to AWS’s AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Now before moving onto the next step, you’ll have to properly configure your machine’s ingress, and in my original post, I share how to do that down below. (I’m too lazy to re-type that here, so do a CTRL+F for ingress and I’m sure you’ll find it!)
Alrighty, if you configured your ingress properly and navigate on over to mlflow-minio.local in a browser, you should be greeted with this splash screen.
In the respective fields, type in the access key and secret key we defined in the Minio deployment. (If you kept it the same as me, those respectively are “minio” and “minio123”.) Hit enter to be greeted with this next screen.
Okay, so in my case, I already created the “bucket” we’ll be using to store our artifacts. For you to do it, it’s as simple as clicking the orangish plus sign in the bottom right of the UI, selecting “Create New Bucket”, and naming your new bucket “mlflow”.
Phew! Alright, we have our backend stuff set up! Time to get the actual server itself rolling!
I’m primarily going to stick to the Deployment manifest here. Most of this syntax will look pretty familiar to you. The only thing to be mindful of here is the arguments we’ll pass to the Docker image we built. Let me show you what my Deployment manifest looks like first.
# Creating MLflow deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mlflow-deployment
  template:
    metadata:
      labels:
        app: mlflow-deployment
    spec:
      containers:
      - name: mlflow-deployment
        image: dkhundley/mlflow-server:1.0.3
        imagePullPolicy: Always
        args:
        - --host=0.0.0.0
        - --port=5000
        - --backend-store-uri=postgresql://mlflow_user:mlflow_pwd@<your-postgres-service-ip>:5432/mlflow_db
        - --default-artifact-root=s3://mlflow/
        - --workers=2
        env:
        - name: MLFLOW_S3_ENDPOINT_URL
          value: http://mlflow-minio.local/
        - name: AWS_ACCESS_KEY_ID
          value: "minio"
        - name: AWS_SECRET_ACCESS_KEY
          value: "minio123"
        ports:
        - name: http
          containerPort: 5000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-service
spec:
  type: NodePort
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
    name: http
  selector:
    app: mlflow-deployment
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mlflow-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: mlflow-server.local
    http:
      paths:
      - backend:
          serviceName: mlflow-service
          servicePort: 5000
        path: /
A few key things to point out in here. First, the Postgres instance is referenced in the “args” with its IP address. I obtained that IP address by running the following command:
kubectl get services
That will give you this screen:
You’ll notice that the CLUSTER-IP for the mlflow-postgres-service directly correlates to what is in my server deployment manifest. You’ll need to update your IP with whatever your service shows, as it will likely not be the same as mine. (And truth be told... I feel like there’s a programmatic way to do this, but I honestly don’t know how to do that.) Notice also how we reference Minio for the artifact store. It might look weird to you that we’re indeed using AWS-like environment variables, but hey, that’s just how it works!
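On that "programmatic way" the author mentions: one option is `kubectl get service mlflow-postgres-service -o jsonpath='{.spec.clusterIP}'`. If all you have is the plain tabular output, a small parser works too — a sketch, with illustrative (made-up) sample output rather than real cluster data:

```python
def cluster_ip(kubectl_output, service_name):
    """Pull the CLUSTER-IP column for one service out of plain
    `kubectl get services` output (a whitespace-delimited table)."""
    lines = kubectl_output.strip().splitlines()
    # Locate the CLUSTER-IP column from the header row.
    ip_col = lines[0].split().index("CLUSTER-IP")
    for line in lines[1:]:
        fields = line.split()
        if fields and fields[0] == service_name:
            return fields[ip_col]
    raise ValueError(f"service {service_name!r} not found")

# Illustrative sample only -- your names and IPs will differ.
sample = """\
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes                ClusterIP  10.96.0.1       <none>        443/TCP          3d
mlflow-postgres-service   NodePort   10.106.55.115   <none>        5432:31061/TCP   1h
"""
print(cluster_ip(sample, "mlflow-postgres-service"))  # -> 10.106.55.115
```

You could then template that IP straight into the manifest instead of hand-editing it.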
Okay, so now that we have everything successfully deployed, it’s time to get our Minikube ingress working. You probably won’t have this issue if you’re working in a legit Kubernetes environment, but Minikube can be a bit tricky here. Honestly, this last part took me several days to figure out, so I’m glad to finally pass this knowledge along to you!
Let’s take a glance at the ingress YAML again:
# Creating the Minikube ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mlflow-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: mlflow-server.local
    http:
      paths:
      - backend:
          serviceName: mlflow-service
          servicePort: 5000
        path: /
Most of this should be familiar to you. In our example here, we’ll be serving out the MLflow tracking server’s UI at mlflow-server.local. One thing that might be new to you is those annotations, and they are absolutely necessary. Without them, your ingress will not work properly. I specifically posted the image below to Twitter to try getting folks to help me out with my blank screen issue. It was quite frustrating.
Bleh, talk about a mess! After much trial and error, I finally figured out that the specific annotation configuration provided above worked. I honestly can’t tell you why though. ¯\_(ツ)_/¯
But wait, there’s more! By default, Minikube isn’t set up to handle ingress right out of the box. In order to do that, you’ll need to do a few things. First up, after your Minikube server is running, run the following command:
minikube addons enable ingress
Easy enough. Now, you need to set up your computer to reference the Minikube cluster’s IP through the mlflow-server.local host we’ve set up in the ingress. To get your Minikube’s IP address, simply run this command:
minikube ip
Copy that to your clipboard. Now, this next part might be totally new to you. (At least, it was to me!) Just like you can create alias commands in Linux, you can apparently also create aliases that map hostnames to IP addresses. It’s very interesting because I learned that this is the place where your browser translates “localhost” to your local IP address.
To navigate to where you need to do that, run the following command:
sudo nano /etc/hosts
You should be greeted with a screen that looks like this:
So you can see here at the top what I was just referencing with the localhost thing. With this interface open, paste in your Minikube’s IP address (which is 192.168.64.4 in my case) followed by BOTH host names for the MLflow server and Minio artifact store, which are respectively mlflow-server.local and mlflow-minio.local in our case.
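For reference, the lines you append to /etc/hosts are just `<ip> <hostname>` pairs. A tiny helper to render them — the hostnames are the two from this post, and the IP is whatever your own `minikube ip` returned:

```python
def hosts_entries(minikube_ip, hostnames):
    """Render /etc/hosts lines mapping each hostname to the Minikube IP."""
    return "\n".join(f"{minikube_ip}\t{name}" for name in hostnames)

# Example with the IP from this post -- substitute your own `minikube ip`.
print(hosts_entries("192.168.64.4", ["mlflow-server.local", "mlflow-minio.local"]))
# 192.168.64.4	mlflow-server.local
# 192.168.64.4	mlflow-minio.local
```

Either hostname missing from /etc/hosts means that service’s UI simply won’t resolve in your browser, so it’s worth double-checking both lines are there.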
Alright, if you did everything properly, you should be pretty much all set! Navigate on over to your browser of choice and open up http://mlflow-server.local. If all goes well, you should see a familiar looking screen.
That’s it for this post, folks! I don’t want to overload you all with too much, so in our next post, we’ll take off from here by logging a practice model or two to this shared tracking server just to see that it’s working. And in two posts from now, we’ll keep the ball rolling even further by showing how to deploy models for usage out from this tracking server. So to be honest, the content of this post might not have been that glamorous, but we’re laying down the train tracks that are going to make everything really fly in the next couple posts.
Until then, thanks for reading this post! Be sure to check out my former ones on other data science-related topics, and we’ll see you next week for more MLflow content! | [
{
"code": null,
"e": 405,
"s": 171,
"text": "10/15/20 Update: In writing my next post in this series, I found several bugs that prevented me from appropriately deploying to Minikube. To that end, I’ve updated a number of things to get you up and going with a WORKING instance! 😃"
},
{
"code": null,
"e": 965,
"s": 405,
"text": "Welcome back, friends! We’re back with our continued mini-series on MLflow. In case you missed out part one, be sure to check it out here. The first post was a super basic introduction to log basic parameters, metrics, and artifacts with MLflow. That was just having us log those items to a spot on our local machine, which is not an ideal practice. In a company context, you ideally want to have all those things logged to a central, reusable location. That’s we’ll be tackling in today’s post! And of course, you can find all my code on GitHub at this link."
},
{
"code": null,
"e": 1276,
"s": 965,
"text": "So to be clear, we’re going to be covering some advanced topics that require a bit of foreknowledge about Docker and Kubernetes. I personally plan to write posts on those at a later date, but for now, I’d recommend the following resources if you want to get a quick start on working with Docker and Kubernetes:"
},
{
"code": null,
"e": 1296,
"s": 1276,
"text": "Docker 101 Tutorial"
},
{
"code": null,
"e": 1320,
"s": 1296,
"text": "Learn Kubernetes Basics"
},
{
"code": null,
"e": 1859,
"s": 1320,
"text": "Now if you know Kubernetes, chances are that you are familiar with Minikube, but in case you aren’t, Minikube is basically a small VM you can run on your local machine to start a sandbox environment to test out Kubernetes concepts. Once Minikube is up and running, it’ll look very familiar to those of you who have worked in legit Kubernetes environments. The instructions to set up Minikube are nicely documented in this page, BUT in order to get Minikube working, we need to get a couple additional things added later on down this post."
},
{
"code": null,
"e": 1997,
"s": 1859,
"text": "Before going further, I think a picture is worth a thousand words, so below is a tiny picture of the architecture we’ll be building here."
},
{
"code": null,
"e": 2566,
"s": 1997,
"text": "Alrighty, so on the right there we have our Minikube environment. Again, Minikube is highly representative of a legit Kubernetes environment, so the pieces inside Minikube are all things we’d see in any Kubernetes workspace. As such, we can see that MLflow’s tracking server is deployed inside a Deployment. That Deployment interacts with the outside world by connecting a service to an ingress (which is why the ingress spans both the inside and outside in our picture), and then we can view the tracking server interface inside our web browser. Simple enough, right?"
},
{
"code": null,
"e": 2971,
"s": 2566,
"text": "Okay, so step 1 is going to be to create a Docker image that builds the MLflow tracking server. This is really simple, and I personally have uploaded my public image in case you want to skip this first step. (Here is that image in my personal Docker Hub.) The Dockerfile is simply going to build on top of a basic Python image, install MLflow, and set the proper entrypoint command. That looks like this:"
},
{
"code": null,
"e": 3230,
"s": 2971,
"text": "# Defining base imageFROM python:3.8.2-slim# Installing packages from PyPiRUN pip install mlflow[extras]==1.9.1 && \\ pip install psycopg2-binary==2.8.5 && \\ pip install boto3==1.15.16# Defining start up commandEXPOSE 5000ENTRYPOINT [\"mlflow\", \"server\"]"
},
{
"code": null,
"e": 3314,
"s": 3230,
"text": "You know the drill from here: build and push out to Docker Hub! (Or just use mine.)"
},
{
"code": null,
"e": 3348,
"s": 3314,
"text": "Where our 10–15–20 update begins!"
},
{
"code": null,
"e": 3887,
"s": 3348,
"text": "Okay, in the previous iteration of this post, I attempted to use a simple PVC for storage of the metadata and artifacts. Turns out that it is not so easy. Instead, we’re going to have to do a little extra legwork to get this going on Minikube. To that end, we’re going to configure a Postgres store for the backend metadata and an object store called Minio for our artifacts. (More on Minio below in case you haven’t heard of it.) If both of these things sound daunting to you, that’s okay! You can simply use my code to get up and going."
},
{
"code": null,
"e": 3978,
"s": 3887,
"text": "Alrighty, so let’s tackle the Postgres deployment. Here is the K8s manifest code for that:"
},
{
"code": null,
"e": 5282,
"s": 3978,
"text": "apiVersion: v1kind: ConfigMapmetadata: name: mlflow-postgres-config labels: app: mlflow-postgresdata: POSTGRES_DB: mlflow_db POSTGRES_USER: mlflow_user POSTGRES_PASSWORD: mlflow_pwd PGDATA: /var/lib/postgresql/mlflow/data---apiVersion: apps/v1kind: StatefulSetmetadata: name: mlflow-postgres labels: app: mlflow-postgresspec: selector: matchLabels: app: mlflow-postgres serviceName: \"mlflow-postgres-service\" replicas: 1 template: metadata: labels: app: mlflow-postgres spec: containers: - name: mlflow-postgres image: postgres:11 ports: - containerPort: 5432 protocol: TCP envFrom: - configMapRef: name: mlflow-postgres-config resources: requests: memory: \"1Gi\" cpu: \"500m\" volumeMounts: - name: mlflow-pvc mountPath: /var/lib/postgresql/mlflow volumeClaimTemplates: - metadata: name: mlflow-pvc spec: accessModes: [ \"ReadWriteOnce\" ] resources: requests: storage: 100Mi---apiVersion: v1kind: Servicemetadata: name: mlflow-postgres-service labels: svc: mlflow-postgres-servicespec: type: NodePort ports: - port: 5432 targetPort: 5432 protocol: TCP selector: app: mlflow-postgres"
},
{
"code": null,
"e": 5760,
"s": 5282,
"text": "So I’m not going to go line by line with everything, but at a thousand foot level, this is going to spin up a Postgres instance with 100Mi of storage and the appropriate config information defined at the top of the code. You can change those variables if you’d like. Remember, we’re just learning here, so these variables are obviously exposed. In the real world, this is a HUGE security concern, so don’t follow my lead here if you’re going to use this for a legit deployment."
},
{
"code": null,
"e": 6047,
"s": 5760,
"text": "Alright, with that deployed, we’re ready to tackle our object store: Minio. Now if you’re totally new to Minio like me, it basically is an object store you can deploy to K8s that basically emulates Amazon Web Service’s (AWS’s) S3 service. The deployment syntax for that looks like this:"
},
{
"code": null,
"e": 7453,
"s": 6047,
"text": "apiVersion: apps/v1kind: Deploymentmetadata: name: mlflow-miniospec: selector: matchLabels: app: mlflow-minio template: metadata: labels: app: mlflow-minio spec: volumes: - name: mlflow-pvc persistentVolumeClaim: claimName: mlflow-pvc containers: - name: mlflow-minio image: minio/minio:latest args: - server - /data volumeMounts: - name: mlflow-pvc mountPath: '/data' env: - name: MINIO_ACCESS_KEY value: \"minio\" - name: MINIO_SECRET_KEY value: \"minio123\" ports: - containerPort: 9000---apiVersion: v1kind: Servicemetadata: name: mlflow-minio-servicespec: type: NodePort ports: - port: 9000 targetPort: 9000 protocol: TCP selector: app: mlflow-minio---apiVersion: networking.k8s.io/v1beta1kind: Ingressmetadata: name: mlflow-minio-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.il/add-base-url: \"true\" nginx.ingress.kubernetes.io/ssl-redirect: \"false\"spec: rules: - host: mlflow-minio.local http: paths: - backend: serviceName: mlflow-minio-service servicePort: 9000 path: /---apiVersion: v1kind: PersistentVolumeClaimmetadata: name: mlflow-pvcspec: accessModes: - ReadWriteMany resources: requests: storage: 100Mi"
},
{
"code": null,
"e": 8001,
"s": 7453,
"text": "So again at a high level, we’re deploying a Minio object store backed by a PVC with 100Mi of data. You can also see in the environment variables of the deployment that we define the access key ID and secret access key. These very much correlate to AWS’s AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Now before moving onto the next step, you’ll have to properly configure your machine’s ingress, and in my original post, I share how to do that down below. (I’m too lazy to re-type that here, so do a CTRL+F for ingress and I’m sure you’ll find it!)"
},
{
"code": null,
"e": 8155,
"s": 8001,
"text": "Alrighty, if you configured your ingress properly and navigate on over to mlflow-minio.local in a browser, you should be greeted with this splash screen."
},
{
"code": null,
"e": 8382,
"s": 8155,
"text": "In the respective fields, type in the access key and secret key we defined in the Minio deployment. (If you kept it the same as me, those respectively are “minio” and “minio123”.) Hit enter to be greeted with this next screen."
},
{
"code": null,
"e": 8640,
"s": 8382,
"text": "Okay, so in my case, I already created the “bucket” we’ll be using to store our artifacts. For you to do it, it’s as simple as clicking the orangish plus sign in the bottom right of the UI, selecting “Create New Bucket”, and naming your new bucket “mlflow”."
},
{
"code": null,
"e": 8735,
"s": 8640,
"text": "Phew! Alright, we have our backend stuff set up! Time to get the actual server itself rolling!"
},
{
"code": null,
"e": 9009,
"s": 8735,
"text": "I’m primarily going to stick to the Deployment manifest here. Most of this syntax will look pretty familiar to you. The only thing to be mindful of here are the arguments we’ll pass to our building Docker image. Let me show you what my Deployment manifest looks like first."
},
{
"code": null,
"e": 10452,
"s": 9009,
"text": "# Creating MLflow deploymentapiVersion: apps/v1kind: Deploymentmetadata: name: mlflow-deploymentspec: replicas: 1 selector: matchLabels: app: mlflow-deployment template: metadata: labels: app: mlflow-deployment spec: containers: - name: mlflow-deployment image: dkhundley/mlflow-server:1.0.3 imagePullPolicy: Always args: - --host=0.0.0.0 - --port=5000 - --backend-store-uri=postgresql://mlflow_user:[email protected]:5432/mlflow_db - --default-artifact-root=s3://mlflow/ - --workers=2 env: - name: MLFLOW_S3_ENDPOINT_URL value: http://mlflow-minio.local/ - name: AWS_ACCESS_KEY_ID value: \"minio\" - name: AWS_SECRET_ACCESS_KEY value: \"minio123\" ports: - name: http containerPort: 5000 protocol: TCP---apiVersion: v1kind: Servicemetadata: name: mlflow-servicespec: type: NodePort ports: - port: 5000 targetPort: 5000 protocol: TCP name: http selector: app: mlflow-deployment---apiVersion: networking.k8s.io/v1beta1kind: Ingressmetadata: name: mlflow-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.il/add-base-url: \"true\"spec: rules: - host: mlflow-server.local http: paths: - backend: serviceName: mlflow-service servicePort: 5000 path: /"
},
{
"code": null,
"e": 10630,
"s": 10452,
"text": "A few key things to point out in here. First, the Postgres instance is referenced in the “args” with its IP address. I obtained that IP address by running the following command:"
},
{
"code": null,
"e": 10651,
"s": 10630,
"text": "kubectl get services"
},
{
"code": null,
"e": 10683,
"s": 10651,
"text": "That will give you this screen:"
},
{
"code": null,
"e": 11209,
"s": 10683,
"text": "You’ll notice that the CLUSTER-IP for the mlflow-postgres-service directly correlates to what is in my server deployment manifest. You’ll need to update your IP with whatever your service shows as it will likely not be the same as me. (And truth be told... I feel like there’s a programatic way to do this, but I honestly don’t know how to do that.) Notice also how we reference Minio as the backend server. It might look weird to you that we’re indeed using AWS-like environment variables, but hey, that’s just how it works!"
},
{
"code": null,
"e": 11561,
"s": 11209,
"text": "Okay, so now that we have everything successfully deployed, it’s time to get our Minikube ingress working. You probably won’t have this issue if you’re working in a legit Kubernetes environment, but Minikube can be a bit tricky here. Honestly, this last part took me several days to figure out, so I’m glad to finally pass this knowledge along to you!"
},
{
"code": null,
"e": 11608,
"s": 11561,
"text": "Let’s take a glance at the ingress YAML again:"
},
{
"code": null,
"e": 11991,
"s": 11608,
"text": "# Creating the Minikube ingressapiVersion: networking.k8s.io/v1beta1kind: Ingressmetadata: name: mlflow-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.il/add-base-url: \"true\"spec: rules: - host: mlflow-server.local http: paths: - backend: serviceName: mlflow-service servicePort: 5000 path: /"
},
{
"code": null,
"e": 12412,
"s": 11991,
"text": "Most of this should be familiar to you. In our example here, we’ll be serving out the MLflow tracking server’s UI at mlflow-server.local. One thing that might be new to you are those annotations, and they are absolutely necessary. Without them, your ingress will not work properly. I specifically posted the image below to Twitter to try getting folks to help me out with my blank screen issue. It was quite frustrating."
},
{
"code": null,
"e": 12603,
"s": 12412,
"text": "Bleh, talk about a mess! After much trial and error, I finally figured out that the specific annotation configuration provided above worked. I honestly can’t tell you why though. ̄\\_(ツ)_/ ̄"
},
{
"code": null,
"e": 12830,
"s": 12603,
"text": "But wait, there’s more! By default, Minikube isn’t set up to handle ingress right out of the box. In order to do that, you’ll need to do a few things. First up, after your Minikube server is running, run the following command:"
},
{
"code": null,
"e": 12861,
"s": 12830,
"text": "minikube addons enable ingress"
},
{
"code": null,
"e": 13077,
"s": 12861,
"text": "Easy enough. Now, you need to set up your computer to reference the Minikube cluster’s IP through the mlflow-server.local host we’ve set up in the ingress. To get your Minikube’s IP address, simply run this command:"
},
{
"code": null,
"e": 13089,
"s": 13077,
"text": "minikube ip"
},
{
"code": null,
"e": 13453,
"s": 13089,
"text": "Copy that to your clipboard. Now, this next part might be totally new to you. (At least, it was to me!) Just like you can create alias commands for Linux, you can also apparently create alias ties from IP addresses to web addresses. It’s very interesting because I learned that this is the place where your browser translates “localhost” to your local IP address."
},
{
"code": null,
"e": 13522,
"s": 13453,
"text": "To navigate to where you need to do that, run the following command:"
},
{
"code": null,
"e": 13543,
"s": 13522,
"text": "sudo nano /etc/hosts"
},
{
"code": null,
"e": 13601,
"s": 13543,
"text": "You should be greeted with a screen that looks like this:"
},
{
"code": null,
"e": 13938,
"s": 13601,
"text": "So you can see here at the top what I was just referencing with the localhost thing. With this interface open, paste in your Minikube’s IP address (which is 192.168.64.4 in my case) followed by BOTH host names for the MLflow server and Minio artifact store, which are respectively mlflow-server.local and mlflow-minio.local in our case."
},
{
"code": null,
"e": 14157,
"s": 13938,
"text": "Alright, if you did everything properly, you should be pretty much all set! Navigate on over to your browser of choice and open up http://mlflow-server.local. If all goes well, you should see a familiar looking screen."
},
{
"code": null,
"e": 14709,
"s": 14157,
"text": "That’s it for this post, folks! I don’t want to overload you all with too much, so in our next post, we’ll take off from here by logging a practice model or two to this shared tracking server just to see that it’s working. And in two posts from now, we’ll keep the ball rolling even further by showing how to deploy models for usage out from this tracking server. So to be honest, the content of this post might not have been that glamorous, but we’re laying down the train tracks that are going to make everything really fly in the next couple posts."
}
] |
How to remove rows containing missing value based on a particular column in an R data frame? | If we want to remove rows containing missing values based on a particular column, then we should select that column while ignoring the missing values. This can be done by using the is.na function. For example, if we have a data frame df that contains columns x, y, and z, each with some missing values, then the rows where x is not missing can be selected as df[!is.na(df$x),].
Consider the below data frame −
x1<-sample(c(NA,1,2,3,4),20,replace=TRUE)
x2<-sample(c(NA,5,10),20,replace=TRUE)
x3<-sample(c(NA,3,12,21,30),20,replace=TRUE)
x4<-sample(c(NA,54,65),20,replace=TRUE)
x5<-sample(c(NA,101,125,111),20,replace=TRUE)
x6<-sample(c(NA,500),20,replace=TRUE)
df<-data.frame(x1,x2,x3,x4,x5,x6)
df
x1 x2 x3 x4 x5 x6
1 4 10 21 54 NA NA
2 4 NA 21 65 NA 500
3 NA 5 NA NA 101 NA
4 3 5 NA NA NA NA
5 1 5 21 65 101 NA
6 NA 10 NA 65 111 500
7 2 NA NA NA NA NA
8 NA 5 NA NA 125 500
9 4 10 NA 54 NA NA
10 1 NA 12 NA 101 NA
11 4 NA 12 NA 101 NA
12 3 5 NA 65 111 NA
13 4 10 30 54 101 500
14 4 5 30 54 111 NA
15 3 5 NA 65 111 NA
16 1 NA 30 65 125 NA
17 1 5 3 65 125 500
18 3 5 NA NA 125 NA
19 NA NA 12 65 101 500
20 2 NA 21 54 111 NA
Selecting rows of x1 that does not contain missing values −
df[!is.na(df$x1),]
x1 x2 x3 x4 x5 x6
1 4 10 21 54 NA NA
2 4 NA 21 65 NA 500
4 3 5 NA NA NA NA
5 1 5 21 65 101 NA
7 2 NA NA NA NA NA
9 4 10 NA 54 NA NA
10 1 NA 12 NA 101 NA
11 4 NA 12 NA 101 NA
12 3 5 NA 65 111 NA
13 4 10 30 54 101 500
14 4 5 30 54 111 NA
15 3 5 NA 65 111 NA
16 1 NA 30 65 125 NA
17 1 5 3 65 125 500
18 3 5 NA NA 125 NA
20 2 NA 21 54 111 NA
Selecting rows of x2 that does not contain missing values −
df[!is.na(df$x2),]
x1 x2 x3 x4 x5 x6
1 4 10 21 54 NA NA
3 NA 5 NA NA 101 NA
4 3 5 NA NA NA NA
5 1 5 21 65 101 NA
6 NA 10 NA 65 111 500
8 NA 5 NA NA 125 500
9 4 10 NA 54 NA NA
12 3 5 NA 65 111 NA
13 4 10 30 54 101 500
14 4 5 30 54 111 NA
15 3 5 NA 65 111 NA
17 1 5 3 65 125 500
18 3 5 NA NA 125 NA
Selecting rows of x3 that does not contain missing values −
df[!is.na(df$x3),]
x1 x2 x3 x4 x5 x6
1 4 10 21 54 NA NA
2 4 NA 21 65 NA 500
5 1 5 21 65 101 NA
10 1 NA 12 NA 101 NA
11 4 NA 12 NA 101 NA
13 4 10 30 54 101 500
14 4 5 30 54 111 NA
16 1 NA 30 65 125 NA
17 1 5 3 65 125 500
19 NA NA 12 65 101 500
20 2 NA 21 54 111 NA
Selecting rows of x4 that does not contain missing values −
df[!is.na(df$x4),]
x1 x2 x3 x4 x5 x6
1 4 10 21 54 NA NA
2 4 NA 21 65 NA 500
5 1 5 21 65 101 NA
6 NA 10 NA 65 111 500
9 4 10 NA 54 NA NA
12 3 5 NA 65 111 NA
13 4 10 30 54 101 500
14 4 5 30 54 111 NA
15 3 5 NA 65 111 NA
16 1 NA 30 65 125 NA
17 1 5 3 65 125 500
19 NA NA 12 65 101 500
20 2 NA 21 54 111 NA
Selecting rows of x5 that does not contain missing values −
df[!is.na(df$x5),]
x1 x2 x3 x4 x5 x6
3 NA 5 NA NA 101 NA
5 1 5 21 65 101 NA
6 NA 10 NA 65 111 500
8 NA 5 NA NA 125 500
10 1 NA 12 NA 101 NA
11 4 NA 12 NA 101 NA
12 3 5 NA 65 111 NA
13 4 10 30 54 101 500
14 4 5 30 54 111 NA
15 3 5 NA 65 111 NA
16 1 NA 30 65 125 NA
17 1 5 3 65 125 500
18 3 5 NA NA 125 NA
19 NA NA 12 65 101 500
20 2 NA 21 54 111 NA
Selecting rows of x6 that does not contain missing values −
df[!is.na(df$x6),]
x1 x2 x3 x4 x5 x6
2 4 NA 21 65 NA 500
6 NA 10 NA 65 111 500
8 NA 5 NA NA 125 500
13 4 10 30 54 101 500
17 1 5 3 65 125 500
19 NA NA 12 65 101 500 | [
{
"code": null,
"e": 1445,
"s": 1062,
"text": "If we want to remove rows containing missing values based on a particular column then we should select that column by ignoring the missing values. This can be done by using is.na function. For example, if we have a data frame df that contains column x, y, z and each of the columns have some missing values then rows of x without missing values can be selected as df[!is.na(df$x),]."
},
{
"code": null,
"e": 1477,
"s": 1445,
"text": "Consider the below data frame −"
},
{
"code": null,
"e": 1488,
"s": 1477,
"text": " Live Demo"
},
{
"code": null,
"e": 1775,
"s": 1488,
"text": "x1<−sample(c(NA,1,2,3,4),20,replace=TRUE)\nx2<−sample(c(NA,5,10),20,replace=TRUE)\nx3<−sample(c(NA,3,12,21,30),20,replace=TRUE)\nx4<−sample(c(NA,54,65),20,replace=TRUE)\nx5<−sample(c(NA,101,125,111),20,replace=TRUE)\nx6<−sample(c(NA,500),20,replace=TRUE)\ndf<−data.frame(x1,x2,x3,x4,x5,x6)\ndf"
},
{
"code": null,
"e": 2217,
"s": 1775,
"text": " x1 x2 x3 x4 x5 x6\n1 4 10 21 54 NA NA\n2 4 NA 21 65 NA 500\n3 NA 5 NA NA 101 NA\n4 3 5 NA NA NA NA\n5 1 5 21 65 101 NA\n6 NA 10 NA 65 111 500\n7 2 NA NA NA NA NA\n8 NA 5 NA NA 125 500\n9 4 10 NA 54 NA NA\n10 1 NA 12 NA 101 NA\n11 4 NA 12 NA 101 NA\n12 3 5 NA 65 111 NA\n13 4 10 30 54 101 500\n14 4 5 30 54 111 NA\n15 3 5 NA 65 111 NA\n16 1 NA 30 65 125 NA\n17 1 5 3 65 125 500\n18 3 5 NA NA 125 NA\n19 NA NA 12 65 101 500\n20 2 NA 21 54 111 NA"
},
{
"code": null,
"e": 2277,
"s": 2217,
"text": "Selecting rows of x1 that does not contain missing values −"
},
{
"code": null,
"e": 2296,
"s": 2277,
"text": "df[!is.na(df$x1),]"
},
{
"code": null,
"e": 2634,
"s": 2296,
"text": "x1 x2 x3 x4 x5 x6\n1 4 10 21 54 NA NA\n2 4 NA 21 65 NA 500\n4 3 5 NA NA NA NA\n5 1 5 21 65 101 NA\n7 2 NA NA NA NA NA\n9 4 10 NA 54 NA NA\n10 1 NA 12 NA 101 NA\n11 4 NA 12 NA 101 NA\n12 3 5 NA 65 111 NA\n13 4 10 30 54 101 500\n14 4 5 30 54 111 NA\n15 3 5 NA 65 111 NA\n16 1 NA 30 65 125 NA\n17 1 5 3 65 125 500\n18 3 5 NA NA 125 NA\n20 2 NA 21 54 111 NA"
},
{
"code": null,
"e": 2694,
"s": 2634,
"text": "Selecting rows of x2 that does not contain missing values −"
},
{
"code": null,
"e": 2713,
"s": 2694,
"text": "df[!is.na(df$x2),]"
},
{
"code": null,
"e": 2991,
"s": 2713,
"text": "x1 x2 x3 x4 x5 x6\n1 4 10 21 54 NA NA\n3 NA 5 NA NA 101 NA\n4 3 5 NA NA NA NA\n5 1 5 21 65 101 NA\n6 NA 10 NA 65 111 500\n8 NA 5 NA NA 125 500\n9 4 10 NA 54 NA NA\n12 3 5 NA 65 111 NA\n13 4 10 30 54 101 500\n14 4 5 30 54 111 NA\n15 3 5 NA 65 111 NA\n17 1 5 3 65 125 500\n18 3 5 NA NA 125 NA"
},
{
"code": null,
"e": 3051,
"s": 2991,
"text": "Selecting rows of x3 that does not contain missing values −"
},
{
"code": null,
"e": 3070,
"s": 3051,
"text": "df[!is.na(df$x3),]"
},
{
"code": null,
"e": 3315,
"s": 3070,
"text": "x1 x2 x3 x4 x5 x6\n1 4 10 21 54 NA NA\n2 4 NA 21 65 NA 500\n5 1 5 21 65 101 NA\n10 1 NA 12 NA 101 NA\n11 4 NA 12 NA 101 NA\n13 4 10 30 54 101 500\n14 4 5 30 54 111 NA\n16 1 NA 30 65 125 NA\n17 1 5 3 65 125 500\n19 NA NA 12 65 101 500\n20 2 NA 21 54 111 NA"
},
{
"code": null,
"e": 3375,
"s": 3315,
"text": "Selecting rows of x4 that does not contain missing values −"
},
{
"code": null,
"e": 3394,
"s": 3375,
"text": "df[!is.na(df$x4),]"
},
{
"code": null,
"e": 3678,
"s": 3394,
"text": "x1 x2 x3 x4 x5 x6\n1 4 10 21 54 NA NA\n2 4 NA 21 65 NA 500\n5 1 5 21 65 101 NA\n6 NA 10 NA 65 111 500\n9 4 10 NA 54 NA NA\n12 3 5 NA 65 111 NA\n13 4 10 30 54 101 500\n14 4 5 30 54 111 NA\n15 3 5 NA 65 111 NA\n16 1 NA 30 65 125 NA\n17 1 5 3 65 125 500\n19 NA NA 12 65 101 500\n20 2 NA 21 54 111 NA"
},
{
"code": null,
"e": 3738,
"s": 3678,
"text": "Selecting rows of x5 that does not contain missing values −"
},
{
"code": null,
"e": 3757,
"s": 3738,
"text": "df[!is.na(df$x5),]"
},
{
"code": null,
"e": 4086,
"s": 3757,
"text": "x1 x2 x3 x4 x5 x6\n3 NA 5 NA NA 101 NA\n5 1 5 21 65 101 NA\n6 NA 10 NA 65 111 500\n8 NA 5 NA NA 125 500\n10 1 NA 12 NA 101 NA\n11 4 NA 12 NA 101 NA\n12 3 5 NA 65 111 NA\n13 4 10 30 54 101 500\n14 4 5 30 54 111 NA\n15 3 5 NA 65 111 NA\n16 1 NA 30 65 125 NA\n17 1 5 3 65 125 500\n18 3 5 NA NA 125 NA\n19 NA NA 12 65 101 500\n20 2 NA 21 54 111 NA"
},
{
"code": null,
"e": 4146,
"s": 4086,
"text": "Selecting rows of x6 that does not contain missing values −"
},
{
"code": null,
"e": 4165,
"s": 4146,
"text": "df[!is.na(df$x6),]"
},
{
"code": null,
"e": 4312,
"s": 4165,
"text": " x1 x2 x3 x4 x5 x6\n2 4 NA 21 65 NA 500\n6 NA 10 NA 65 111 500\n8 NA 5 NA NA 125 500\n13 4 10 30 54 101 500\n17 1 5 3 65 125 500\n19 NA NA 12 65 101 500"
}
] |
5 Powerful Tricks to Visualize Your Data with Matplotlib | by Rizky Maulana Nurhidayat | Towards Data Science | Data visualization is used to show data in a more straightforward representation that is easier to understand. It can take the form of histograms, scatter plots, line plots, pie charts, etc. Many people are still using Matplotlib as their back-end module to visualize their plots. In this story, I will give you 5 powerful tricks for using Matplotlib to create excellent plots.
Using LaTeX font
By default, we can use some nice fonts that are provided by Matplotlib. But some symbols are not rendered well by Matplotlib, for example the symbol phi (φ), as shown in Figure 1.
As you see in the y-label, it is still the symbol of phi (φ), but it is not good enough for a plot label for some people. To make it prettier, you can use LaTeX font. How to use it? Here is the answer.
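The snippet itself did not survive in this copy of the post. A minimal sketch of what that two-line setup typically looks like, assuming the rcParams route (the size value 18 follows the text; the author's exact gist may differ):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch; not part of the trick
import matplotlib.pyplot as plt

plt.rcParams['text.usetex'] = True  # line 1: hand all text rendering to LaTeX
plt.rcParams['font.size'] = 18      # larger than the default size
```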
You can add the code above at the beginning of your Python code. Line 1 defines the LaTeX font used in your plot. You also need to set the font size to something larger than the default; if you do not change it, the labels will come out small. I chose 18 for it. The result after applying the code above is shown in Figure 2.
You need to write a dollar sign ($ ... $) at the beginning and at the end of your symbol, like this
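The inline example is missing from this copy; a hypothetical label call in that style, where `\phi` is the standard LaTeX command for the phi symbol shown in Figure 2:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt

# the LaTeX command goes between the opening and closing dollar signs
plt.ylabel(r'$\phi$')
```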
If you have some errors or have not installed the required libraries for using LaTeX font, you need to install them by running the following code in your Jupyter Notebook cell.
!apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipng
If you want to install them via terminal, you can remove !, so
apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipng
Of course, you can use some different font families, like serif, sans-serif (the example above), etc. To change the font family, you can use this code.
plt.rcParams['font.family'] = "serif"
If you add the code above in your code, it will give you a plot shown in Figure 3.
Can you spot the difference between Figure 3 and Figure 2? Yups, if you analyze it carefully, the difference is the tail of the font. The latter figure is using serif, whereas the former one is sans-serif. Put simply, serif means tail and sans means without. If you want to learn more about font families or typefaces, I recommend this link.
en.wikipedia.org
You can also set the font family/typeface using the Jupyterthemes library. I have made the tutorial on using it. Just click the following link. Jupyterthemes also can change your Jupyter themes, dark mode themes, for example.
medium.com
Next, we want to show you some complex text inserted in Matplotlib, as shown in the title of Figure 4.
If you want to create Figure 4, you can use this full code
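The gist with the full code is not embedded in this copy. A self-contained sketch of a title mixing plain text and LaTeX math; the title string, data, and filename are stand-ins, and built-in mathtext is used here so the sketch runs without a LaTeX install (flip `text.usetex` on for real LaTeX rendering):

```python
import matplotlib
matplotlib.use('Agg')                      # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

plt.rcParams['text.usetex'] = False        # mathtext; set True to use LaTeX

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x))
plt.title(r'Plot of $y = \sin(x)$, where $0 \leq x \leq 2\pi$')
plt.savefig('figure4.png', bbox_inches='tight')
```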
If you have some questions about the code, please write it in the comment.
2. Creating zoom-in effect
In this trick, I will give you a code to generate a plot, as shown in Figure 5.
Firstly, you need to understand the difference between plt.axes() and plt.figure(). You can review it in the following link. Code plt.figure() covers all the objects in a single container, including axes, graphics, text, and labels. Code plt.axes() just covers the specific subplot. Figure 6 can give you a simple understanding, I think.
The black box is under plt.figure() and the red and blue boxes are under plt.axes(). In Figure 6, there are two axes, red and blue. You can check this link for the basic reference.
medium.com
After you understand it, you can analyze how to create Figure 5. Yups, in a simple, there are two axes in Figure 5. The first axes is a big plot, zoomed-in version from 580 to 650 and the second one is the zoomed-out version. Here is the code to create Figure 5.
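The code for Figure 5 is not embedded in this copy. A minimal sketch of the same two-axes idea, where the data and the inset position are stand-ins and only the 580–650 window comes from the text:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 1000)
y = 0.01 * x + np.sin(x * np.pi / 50)            # stand-in data

fig = plt.figure(figsize=(8, 5))
main_ax = fig.add_axes([0.1, 0.1, 0.85, 0.85])   # large zoomed-in view
inset_ax = fig.add_axes([0.2, 0.65, 0.25, 0.2])  # small zoomed-out overview

main_ax.plot(x, y)
main_ax.set_xlim(580, 650)                       # the zoom window from the text
inset_ax.plot(x, y)                              # full range, no limits applied
```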
If you need the basic explanation for the code, you can visit this link.
medium.com
I also give another version of the zoom effect you can create with Matplotlib. It is shown in Figure 7.
To create Figure 7, you need to create three axes in Matplotlib using add_subplot or another syntax (subplot). Here, I just use add_subplot and avoid using looping to make it easier. To create them, you can use the following code.
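Reconstructed from the description that follows, the layout boils down to three add_subplot calls (the figure size is a stand-in):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 6))
sub1 = fig.add_subplot(2, 2, 1)        # first row, first column
sub2 = fig.add_subplot(2, 2, 2)        # first row, second column
sub3 = fig.add_subplot(2, 2, (3, 4))   # second row, both columns merged
```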
The code will generate a figure, as shown in Figure 8. It tells us that it will generate 2 rows and 2 columns. Axes sub1 (2, 2, 1) is the first axes in the subplots (first row, first column). The sequence is started from the left-top side to the right. The second axes sub2 (2, 2, 2) are placed in the first row, the second column. The last axes, sub3 (2, 2, (3, 4)), are merged axes between the second-row first column and second-row second columns.
Of course, we need to define a mock data to be visualized in your plots. Here, I define a simple combination of linear and sinusoidal functions, as shown in the code below.
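The data-definition snippet is missing from this copy; a stand-in combination of a linear term and a sinusoid (the exact coefficients are assumptions):

```python
import numpy as np

x = np.linspace(0, 1000, 1000)
y = 0.01 * x + np.sin(x * np.pi / 50)   # linear trend plus a sinusoid
```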
If you apply the code into the previous code, you will get a figure, as shown in Figure 9.
The next step is limiting the x-axis and y-axis in the first and second axes (sub1 and sub2), creating blocked areas for both axes in sub3, and create ConnectionPatch(s) that are the representatives of the zoom effect. It can be done using this full code (remember, I did not use looping for the simplicity).
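The full gist is not embedded here. A self-contained sketch of those steps with ConnectionPatch, where the zoom windows, colors, and data are stand-ins:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import ConnectionPatch

x = np.linspace(0, 1000, 1000)
y = 0.01 * x + np.sin(x * np.pi / 50)            # stand-in data

fig = plt.figure(figsize=(8, 6))
sub1 = fig.add_subplot(2, 2, 1)
sub2 = fig.add_subplot(2, 2, 2)
sub3 = fig.add_subplot(2, 2, (3, 4))
for ax in (sub1, sub2, sub3):
    ax.plot(x, y)

sub1.set_xlim(100, 200)   # first zoom window (stand-in values)
sub2.set_xlim(700, 800)   # second zoom window (stand-in values)

# shade the zoomed regions on the full view
sub3.axvspan(100, 200, color='orange', alpha=0.3)
sub3.axvspan(700, 800, color='green', alpha=0.3)

# connect each shaded block to its zoomed-in axes
for x0, target in ((150, sub1), (750, sub2)):
    con = ConnectionPatch(
        xyA=(x0, sub3.get_ylim()[1]), coordsA=sub3.transData,
        xyB=(x0, target.get_ylim()[0]), coordsB=target.transData)
    fig.add_artist(con)
```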
The code will give you an excellent zoom effect plot, as shown in Figure 7.
3. Creating outbox legend
Does your plot have many legend entries to show, like Figure 10? If so, you need to place the legend outside the main axes.
To place the legends outside of the main container, you need to adjust the position using this code
plt.legend(bbox_to_anchor=(1.05, 1.04)) # position of the legend
The value of 1.05 and 1.04 is in the coordinate x and y-axis toward the main container. You can vary it. Now, applying the code above to our code,
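The surrounding plotting code is not embedded in this copy; a stand-in example with a few labelled curves to exercise the legend placement:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
for n in range(1, 5):                     # a few labelled curves (stand-ins)
    plt.plot(x, np.sin(n * x), label=rf'$\sin({n}x)$')

plt.legend(bbox_to_anchor=(1.05, 1.04))   # place the legend outside the axes
plt.savefig('figure11.png', bbox_inches='tight')
```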
After running the code, you will get a plot, as shown in Figure 11.
If you want to make the legend box more beautiful, you can add a shadow effect using the following code. It will show a plot, as shown in Figure 12.
plt.legend(bbox_to_anchor=(1.05, 1.04), shadow=True)
4. Creating continuous error plots
In the last decade, the styles in data visualization have moved toward clean plot themes. We can see the shift by reading new papers in international journals or on web pages. One of the most popular styles is visualizing the data with continuous errors instead of error bars. You can see it in Figure 13.
Figure 13 is generated by using fill_between. In fill_between syntax, you need to define the upper limit and lower limit, as shown in Figure 14.
To apply it, you can use the following code.
plt.fill_between(x, upper_limit, lower_limit)
Arguments upper_limit and lower_limit are interchangeable. Here is the full code.
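The full gist is missing from this copy; a minimal continuous-error sketch where the data and the constant error band are stand-ins:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
y = np.sin(x)
err = 0.2                                   # constant error band (stand-in)

plt.plot(x, y, color='tab:blue')
plt.fill_between(x, y - err, y + err, color='tab:blue', alpha=0.3)
plt.savefig('figure13.png', bbox_inches='tight')
```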
5. Adjusting box pad margin
If you analyze each snippet above, you will see that plt.savefig() is followed by two extra arguments: bbox_inches and pad_inches. They come in handy when you are preparing a figure for a journal or article. If you leave them out, your plot will have a larger margin after you save it. Figure 15 presents the same plot saved with bbox_inches and pad_inches and without them.
You may not be able to see the difference between the two plots in Figure 15 clearly, so I will present them with different background colors, as shown in Figure 16.
Again, this trick helps you when you are inserting your plots into a paper or an article. You won't need to crop them to save space.
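For reference, the save call with both arguments looks like this (the filename and the pad value are stand-ins):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1])
# trim the surrounding white margin when saving
plt.savefig('plot_tight.png', bbox_inches='tight', pad_inches=0.05)
```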
Conclusion
Matplotlib is a multi-platform library that works across many operating systems. It is one of the older libraries for visualizing data, but it is still powerful because the developers keep updating it to follow the trends in data visualization. The tricks mentioned above are examples of those updates. I hope this story helps you visualize your data in a more interesting way.
That’s all. Thanks for reading this story. Comment and share if you like it. I also recommend you follow my account to get a notification when I post my new story. | [
{
"code": null,
"e": 573,
"s": 172,
"text": "Data visualization is used to shows the data in a more straightforward representation and more comfortable to be understood. It can be formed in histograms, scatter plots, line plots, pie chart, etc. Many people are still using Matplotlib as their back-end module to visualize their plots. In this story, I will give you some tricks, 5 powerful tricks in using Matplotlib to create an excellent plot."
},
{
"code": null,
"e": 590,
"s": 573,
"text": "Using LaTeX font"
},
{
"code": null,
"e": 607,
"s": 590,
"text": "Using LaTeX font"
},
{
"code": null,
"e": 801,
"s": 607,
"text": "In default, we can use some nice fonts that are provided by Matplotlib. But, some symbols are not good enough to be created by Matplotlib. For example, the symbol phi (φ), as shown in Figure 1."
},
{
"code": null,
"e": 1003,
"s": 801,
"text": "As you see in the y-label, it is still the symbol of phi (φ), but it is not good enough for a plot label for some people. To make it prettier, you can use LaTeX font. How to use it? Here is the answer."
},
{
"code": null,
"e": 1338,
"s": 1003,
"text": "You can add the code above at the beginning of your python code. Line 1 is defining the LaTeX font used in your plot. You also need to define the font size, larger than the default size. If you did not change it, I think it will give you a small label. I choose 18 for it. The result after I apply the code above is shown in Figure 2."
},
{
"code": null,
"e": 1438,
"s": 1338,
"text": "You need to write double dollar ($ ... $) in the beginning and at the end of your symbol, like this"
},
{
"code": null,
"e": 1615,
"s": 1438,
"text": "If you have some errors or have not installed the required libraries for using LaTeX font, you need to install them by running the following code in your Jupyter Notebook cell."
},
{
"code": null,
"e": 1690,
"s": 1615,
"text": "!apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipng"
},
{
"code": null,
"e": 1753,
"s": 1690,
"text": "If you want to install them via terminal, you can remove !, so"
},
{
"code": null,
"e": 1827,
"s": 1753,
"text": "apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipng"
},
{
"code": null,
"e": 1979,
"s": 1827,
"text": "Of course, you can use some different font families, like serif, sans-serif (the example above), etc. To change the font family, you can use this code."
},
{
"code": null,
"e": 2017,
"s": 1979,
"text": "plt.rcParams['font.family'] = \"serif\""
},
{
"code": null,
"e": 2100,
"s": 2017,
"text": "If you add the code above in your code, it will give you a plot shown in Figure 3."
},
{
"code": null,
"e": 2437,
"s": 2100,
"text": "Can you realize the difference between Figure 3 and Figure 2? Yups, if you analyze it carefully, the difference is the tail of the font. The latter figure is using serif, whereas the former one is sans-serif. In simply, serif means tail, sans means no. If you want to learn more about the font family or typeface, I recommend this link."
},
{
"code": null,
"e": 2454,
"s": 2437,
"text": "en.wikipedia.org"
},
{
"code": null,
"e": 2680,
"s": 2454,
"text": "You can also set the font family/typeface using the Jupyterthemes library. I have made the tutorial on using it. Just click the following link. Jupyterthemes also can change your Jupyter themes, dark mode themes, for example."
},
{
"code": null,
"e": 2691,
"s": 2680,
"text": "medium.com"
},
{
"code": null,
"e": 2785,
"s": 2691,
"text": "We want to give you a complex text inserted in Matplotlib, as shown in the title of Figure 4."
},
{
"code": null,
"e": 2844,
"s": 2785,
"text": "If you want to create Figure 4, you can use this full code"
},
{
"code": null,
"e": 2919,
"s": 2844,
"text": "If you have some questions about the code, please write it in the comment."
},
{
"code": null,
"e": 2946,
"s": 2919,
"text": "2. Creating zoom-in effect"
},
{
"code": null,
"e": 3026,
"s": 2946,
"text": "In this trick, I will give you a code to generate a plot, as shown in Figure 5."
},
{
"code": null,
"e": 3364,
"s": 3026,
"text": "Firstly, you need to understand the difference between plt.axes() and plt.figure(). You can review it in the following link. Code plt.figure() covers all the objects in a single container, including axes, graphics, text, and labels. Code plt.axes() just covers the specific subplot. Figure 6 can give you a simple understanding, I think."
},
{
"code": null,
"e": 3545,
"s": 3364,
"text": "The black box is under plt.figure() and the red and blue boxes are under plt.axes(). In Figure 6, there are two axes, red and blue. You can check this link for the basic reference."
},
{
"code": null,
"e": 3556,
"s": 3545,
"text": "medium.com"
},
{
"code": null,
"e": 3819,
"s": 3556,
"text": "After you understand it, you can analyze how to create Figure 5. Yups, in a simple, there are two axes in Figure 5. The first axes is a big plot, zoomed-in version from 580 to 650 and the second one is the zoomed-out version. Here is the code to create Figure 5."
},
{
"code": null,
"e": 3892,
"s": 3819,
"text": "If you need the basic explanation for the code, you can visit this link."
},
{
"code": null,
"e": 3903,
"s": 3892,
"text": "medium.com"
},
{
"code": null,
"e": 4005,
"s": 3903,
"text": "I also give another version of the zoom effect you can use using Matplotlib. It is shown in Figure 7."
},
{
"code": null,
"e": 4236,
"s": 4005,
"text": "To create Figure 7, you need to create three axes in Matplotlib using add_subplot or another syntax (subplot). Here, I just use add_subplot and avoid using looping to make it easier. To create them, you can use the following code."
},
{
"code": null,
"e": 4687,
"s": 4236,
"text": "The code will generate a figure, as shown in Figure 8. It tells us that it will generate 2 rows and 2 columns. Axes sub1 (2, 2, 1) is the first axes in the subplots (first row, first column). The sequence is started from the left-top side to the right. The second axes sub2 (2, 2, 2) are placed in the first row, the second column. The last axes, sub3 (2, 2, (3, 4)), are merged axes between the second-row first column and second-row second columns."
},
{
"code": null,
"e": 4860,
"s": 4687,
"text": "Of course, we need to define a mock data to be visualized in your plots. Here, I define a simple combination of linear and sinusoidal functions, as shown in the code below."
},
{
"code": null,
"e": 4951,
"s": 4860,
"text": "If you apply the code into the previous code, you will get a figure, as shown in Figure 9."
},
{
"code": null,
"e": 5260,
"s": 4951,
"text": "The next step is limiting the x-axis and y-axis in the first and second axes (sub1 and sub2), creating blocked areas for both axes in sub3, and create ConnectionPatch(s) that are the representatives of the zoom effect. It can be done using this full code (remember, I did not use looping for the simplicity)."
},
{
"code": null,
"e": 5336,
"s": 5260,
"text": "The code will give you an excellent zoom effect plot, as shown in Figure 7."
},
{
"code": null,
"e": 5362,
"s": 5336,
"text": "3. Creating outbox legend"
},
{
"code": null,
"e": 5488,
"s": 5362,
"text": "Did your plot have many legends to be shown in a plot, like a Figure 10? If yes, you need to place them out of the main axes."
},
{
"code": null,
"e": 5588,
"s": 5488,
"text": "To place the legends outside of the main container, you need to adjust the position using this code"
},
{
"code": null,
"e": 5653,
"s": 5588,
"text": "plt.legend(bbox_to_anchor=(1.05, 1.04)) # position of the legend"
},
{
"code": null,
"e": 5800,
"s": 5653,
"text": "The value of 1.05 and 1.04 is in the coordinate x and y-axis toward the main container. You can vary it. Now, applying the code above to our code,"
},
{
"code": null,
"e": 5864,
"s": 5800,
"text": "After run the code, it will give a plot, as shown in Figure 11."
},
{
"code": null,
"e": 6013,
"s": 5864,
"text": "If you want to make the legend box more beautiful, you can add a shadow effect using the following code. It will show a plot, as shown in Figure 12."
},
{
"code": null,
"e": 6066,
"s": 6013,
"text": "plt.legend(bbox_to_anchor=(1.05, 1.04), shadow=True)"
},
{
"code": null,
"e": 6101,
"s": 6066,
"text": "4. Creating continuous error plots"
},
{
"code": null,
"e": 6398,
"s": 6101,
"text": "In the last decade, the styles in data visualization are moved to a clean plot theme. We can see the shift by reading some new papers in international journals or web pages. One of the most popular is visualizing the data with continuous errors, not using error bars. You can see it in Figure 13."
},
{
"code": null,
"e": 6543,
"s": 6398,
"text": "Figure 13 is generated by using fill_between. In fill_between syntax, you need to define the upper limit and lower limit, as shown in Figure 14."
},
{
"code": null,
"e": 6588,
"s": 6543,
"text": "To apply it, you can use the following code."
},
{
"code": null,
"e": 6634,
"s": 6588,
"text": "plt.fill_between(x, upper_limit, lower_limit)"
},
{
"code": null,
"e": 6716,
"s": 6634,
"text": "Arguments upper_limit and lower_limit are interchangeable. Here is the full code."
},
{
"code": null,
"e": 6744,
"s": 6716,
"text": "5. Adjusting box pad margin"
},
{
"code": null,
"e": 7130,
"s": 6744,
"text": "If you analyze each code above, you will get a syntax plt.savefig() followed by a complex argument: bbox_inches and pad_inches. They are accommodating for you when you are constructing a journal or article. If you did not include them, your plot would have a larger margin after you save it. Figure 15 is presenting the different plots with bbox_inches and pad_inches and without them."
},
{
"code": null,
"e": 7290,
"s": 7130,
"text": "I think you can not see the difference between the two plots in Figure 15 well. I will try to present it in different background colors, as shown in Figure 16."
},
{
"code": null,
"e": 7427,
"s": 7290,
"text": "Again, this trick helps you when you are inserting your plots into a paper or an article. You did not need to crop it to save the space."
},
{
"code": null,
"e": 7438,
"s": 7427,
"text": "Conclusion"
},
{
"code": null,
"e": 7819,
"s": 7438,
"text": "Matplotlib is a multi-platform library that can play with many operating systems. It is one of the old libraries to visualize your data, but it is still powerful. Because the developers always make some updates following the trends in data visualization. Some tricks mentioned above are examples of the updates. I hope this story can help you visualize your data more interesting."
},
{
"code": null,
"e": 7842,
"s": 7819,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7865,
"s": 7842,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7888,
"s": 7865,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7911,
"s": 7888,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7934,
"s": 7911,
"text": "towardsdatascience.com"
}
] |
Multi-dimensional Arrays in C | C programming language allows multidimensional arrays. Here is the general form of a multidimensional array declaration −
type name[size1][size2]...[sizeN];
For example, the following declaration creates a three dimensional integer array −
int threedim[5][10][4];
The simplest form of multidimensional array is the two-dimensional array. A two-dimensional array is, in essence, a list of one-dimensional arrays. To declare a two-dimensional integer array of size [x][y], you would write something as follows −
type arrayName [ x ][ y ];
Where type can be any valid C data type and arrayName will be a valid C identifier. A two-dimensional array can be considered as a table which will have x number of rows and y number of columns. A two-dimensional array a, which contains three rows and four columns can be shown as follows −
Thus, every element in the array a is identified by an element name of the form a[ i ][ j ], where 'a' is the name of the array, and 'i' and 'j' are the subscripts that uniquely identify each element in 'a'.
Multidimensional arrays may be initialized by specifying bracketed values for each row. Following is an array with 3 rows and each row has 4 columns.
int a[3][4] = {
{0, 1, 2, 3} , /* initializers for row indexed by 0 */
{4, 5, 6, 7} , /* initializers for row indexed by 1 */
{8, 9, 10, 11} /* initializers for row indexed by 2 */
};
The nested braces, which indicate the intended row, are optional. The following initialization is equivalent to the previous example −
int a[3][4] = {0,1,2,3,4,5,6,7,8,9,10,11};
An element in a two-dimensional array is accessed by using the subscripts, i.e., row index and column index of the array. For example −
int val = a[2][3];
The above statement will take the 4th element from the 3rd row of the array. You can verify it in the above figure. Let us check the following program where we have used a nested loop to handle a two-dimensional array −
#include <stdio.h>
int main () {
/* an array with 5 rows and 2 columns*/
int a[5][2] = { {0,0}, {1,2}, {2,4}, {3,6},{4,8}};
int i, j;
/* output each array element's value */
for ( i = 0; i < 5; i++ ) {
for ( j = 0; j < 2; j++ ) {
printf("a[%d][%d] = %d\n", i,j, a[i][j] );
}
}
return 0;
}
When the above code is compiled and executed, it produces the following result −
a[0][0] = 0
a[0][1] = 0
a[1][0] = 1
a[1][1] = 2
a[2][0] = 2
a[2][1] = 4
a[3][0] = 3
a[3][1] = 6
a[4][0] = 4
a[4][1] = 8
As explained above, you can have arrays with any number of dimensions, although it is likely that most of the arrays you create will be of one or two dimensions.
{
"code": null,
"e": 2206,
"s": 2084,
"text": "C programming language allows multidimensional arrays. Here is the general form of a multidimensional array declaration −"
},
{
"code": null,
"e": 2242,
"s": 2206,
"text": "type name[size1][size2]...[sizeN];\n"
},
{
"code": null,
"e": 2325,
"s": 2242,
"text": "For example, the following declaration creates a three dimensional integer array −"
},
{
"code": null,
"e": 2350,
"s": 2325,
"text": "int threedim[5][10][4];\n"
},
{
"code": null,
"e": 2596,
"s": 2350,
"text": "The simplest form of multidimensional array is the two-dimensional array. A two-dimensional array is, in essence, a list of one-dimensional arrays. To declare a two-dimensional integer array of size [x][y], you would write something as follows −"
},
{
"code": null,
"e": 2624,
"s": 2596,
"text": "type arrayName [ x ][ y ];\n"
},
{
"code": null,
"e": 2915,
"s": 2624,
"text": "Where type can be any valid C data type and arrayName will be a valid C identifier. A two-dimensional array can be considered as a table which will have x number of rows and y number of columns. A two-dimensional array a, which contains three rows and four columns can be shown as follows −"
},
{
"code": null,
"e": 3123,
"s": 2915,
"text": "Thus, every element in the array a is identified by an element name of the form a[ i ][ j ], where 'a' is the name of the array, and 'i' and 'j' are the subscripts that uniquely identify each element in 'a'."
},
{
"code": null,
"e": 3273,
"s": 3123,
"text": "Multidimensional arrays may be initialized by specifying bracketed values for each row. Following is an array with 3 rows and each row has 4 columns."
},
{
"code": null,
"e": 3477,
"s": 3273,
"text": "int a[3][4] = { \n {0, 1, 2, 3} , /* initializers for row indexed by 0 */\n {4, 5, 6, 7} , /* initializers for row indexed by 1 */\n {8, 9, 10, 11} /* initializers for row indexed by 2 */\n};"
},
{
"code": null,
"e": 3612,
"s": 3477,
"text": "The nested braces, which indicate the intended row, are optional. The following initialization is equivalent to the previous example −"
},
{
"code": null,
"e": 3656,
"s": 3612,
"text": "int a[3][4] = {0,1,2,3,4,5,6,7,8,9,10,11};\n"
},
{
"code": null,
"e": 3792,
"s": 3656,
"text": "An element in a two-dimensional array is accessed by using the subscripts, i.e., row index and column index of the array. For example −"
},
{
"code": null,
"e": 3812,
"s": 3792,
"text": "int val = a[2][3];\n"
},
{
"code": null,
"e": 4032,
"s": 3812,
"text": "The above statement will take the 4th element from the 3rd row of the array. You can verify it in the above figure. Let us check the following program where we have used a nested loop to handle a two-dimensional array −"
},
{
"code": null,
"e": 4373,
"s": 4032,
"text": "#include <stdio.h>\n \nint main () {\n\n /* an array with 5 rows and 2 columns*/\n int a[5][2] = { {0,0}, {1,2}, {2,4}, {3,6},{4,8}};\n int i, j;\n \n /* output each array element's value */\n for ( i = 0; i < 5; i++ ) {\n\n for ( j = 0; j < 2; j++ ) {\n printf(\"a[%d][%d] = %d\\n\", i,j, a[i][j] );\n }\n }\n \n return 0;\n}"
},
{
"code": null,
"e": 4454,
"s": 4373,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 4565,
"s": 4454,
"text": "a[0][0]: 0\na[0][1]: 0\na[1][0]: 1\na[1][1]: 2\na[2][0]: 2\na[2][1]: 4\na[3][0]: 3\na[3][1]: 6\na[4][0]: 4\na[4][1]: 8\n"
},
{
"code": null,
"e": 4727,
"s": 4565,
"text": "As explained above, you can have arrays with any number of dimensions, although it is likely that most of the arrays you create will be of one or two dimensions."
},
{
"code": null,
"e": 4734,
"s": 4727,
"text": " Print"
},
{
"code": null,
"e": 4745,
"s": 4734,
"text": " Add Notes"
}
] |
15 Tips and Tricks for Jupyter Notebook that will ease your Coding Experience | by Satyam Kumar | Towards Data Science | Jupyter Notebook is a browser bases REPL (read eval print loop) built on IPython and other open-source libraries, it allows us to run interactive python code on the browser.
It not only runs python code but also has many interesting plugins and magic commands which enhances the python coding experience greatly.
One can calculate the execution time of a jupyter notebook cell using the %%time magic command at the beginning of the cell. It reports the wall time, i.e., the total time required to execute that cell.
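Outside a notebook cell, the same wall-time measurement can be sketched with the standard library (the workload below is just a stand-in for a cell body):

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(100_000))  # stand-in for the cell's work
elapsed = time.perf_counter() - start
print(f"Wall time: {elapsed:.4f} s")
```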
One can use an external Python library to create a progress bar that gives live updates on the progress of running code. It keeps the user informed about the status of a running code script. You can find the GitHub repository of the library here.
First, you need to install the tqdm library:
pip3 install tqdm
Or you can also install it in a jupyter notebook cell using ! .
The tqdm function can be used by importing its package and the usage and implementation can be observed below:
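A minimal sketch of that usage (with a graceful fallback in case tqdm is not installed):

```python
try:
    from tqdm import tqdm
except ImportError:
    # fallback: iterate without a progress bar if tqdm is missing
    def tqdm(iterable, **kwargs):
        return iterable

total = 0
for i in tqdm(range(100), desc="processing"):
    total += i  # simulate some per-item work
print(total)    # 4950
```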
Using nb_black library, one can format a code snippet in a cell to a proper format. Sometimes the code snippet in a jupyter notebook cell is not well-formatted, this library helps to attain proper formatting of the code snippet.
nb_black is a simple extension for Jupyter Notebook and Jupyter Lab to beautify Python code automatically.
Installation of the library:
pip3 install nb_black
Usage for Jupyter Notebook:
%load_ext nb_black
Jupyter Notebook can install any python package in the notebook itself. To install any python package using the pip command in jupyter notebook cell enter a ! before the command.
For installing the pandas package: Enter ! pip install pandas and run the cell.
Jupyter Notebook can show the documentation of the function you are calling. Press Shift+Tab to view the documentation. This is very helpful as you don't need to open the documentation website every single time. This feature also works for local custom functions.
Usage:
Write the name of the function you want to implement
Press Shift+Tab to view the documentation.
Click on ^ on the top right corner of documentation to view it in a pager.
Click on + to grow the docstring vertically.
Click on x to close the docstring.
Jupyter Notebook can show suggestions for any function name or variable. To view suggestions while typing code, press Tab on your keyboard and the suggestions will appear in a top-down menu. Press the arrow-up or arrow-down key to scroll up or down the menu. You can also scroll using your mouse. Click on a keyword or hit Enter on the selected keyword to confirm your suggestion.
You will also get suggestions for custom functions and variables.
Jupyter Notebook can print the output of each cell just below the cell. When you have a lot of output you can reduce the amount of space it takes up by clicking on the left side panel of the output. This will turn the output into a scrolling window. Double click on the left side of the output to completely collapse the output panel.
You can repeat the process of a single click or double click to change the format of viewing the output panel.
Jupyter Notebook has certain cell execution features that ease the programmer’s performance.
Shift+Enter will run the current cell and highlight the next cell; if no cell is present, it will create a new cell.
Alt+Enter will run the current cell and insert a new cell and highlight it.
Jupyter notebook cells can not only run code snippets but also be used to write text. Markdown cells can be used to write text descriptions. It is a better way to express than using comments.
Usage:
Click on the cell to convert it to markdown.
Choose the Markdown option from the drop-down menu
Jupyter Notebook cells can also be used to compile and run code from different languages using IPython magic commands. Use IPython Magics with the name of your kernel at the start of each cell that you want to use that cell for:
%%bash
%%HTML
%%python2
%%python3
%%ruby
%%perl
Jupyter Notebook supports editing code using multiple cursors at once. To select the code to edit at once press Alt key and select the code snippet using your mouse. After selection, you can now edit the code using multiple cursors at once.
Jupyter Notebook can be used to create a PowerPoint-style presentation. Here each cell or group of cells of the notebook can be treated as each slide.
Firstly, install RISE using conda: conda install -c damianavila82 rise
Enter/Exit RISE Slideshow button appears in the notebook toolbar. A slideshow option will also appear under View>Cell Toolbar>Slideshow
To prepare Slideshow click on View>Cell Toolbar>Slideshow and select the jupyter notebook cells for each slide.
After selecting each slide click on the RISE Slideshow button in the notebook toolbar.
Visit here for detailed video guide usage.
After code completion, you have several options to share your jupyter notebook.
Download your jupyter notebook as HTML, pdf, ipynb, py file, etc.
You can use JupyterHub that can create a multi-user Hub which spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server.
You can publish to medium directly from the jupyter notebook. Read this to know the steps.
Jupyter Notebook is the best tool used for data analysis and visualization. It can be used to generate different types of plots using different python or R libraries. Some of the python libraries used to generate plots are:
Matplotlib
Seaborn
bokeh
plot.ly
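As a quick illustration, a line plot with Matplotlib might look like this (guarded so it degrades to a no-op where Matplotlib is unavailable; the output file name is arbitrary):

```python
import math

try:
    import matplotlib
    matplotlib.use("Agg")            # headless backend: render to file only
    import matplotlib.pyplot as plt
    HAVE_MPL = True
except ImportError:
    HAVE_MPL = False

xs = [i / 10 for i in range(101)]
ys = [math.sin(x) for x in xs]

if HAVE_MPL:
    fig, ax = plt.subplots()
    ax.plot(xs, ys, label="sin(x)")
    ax.legend()
    fig.savefig("sine.png")          # arbitrary output file name
```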
Shortcuts save a lot of a programmer's time and ease the coding experience. Jupyter notebook has plenty of inbuilt keyboard shortcuts that you can find under the Help menu bar: Help>Keyboard Shortcuts .
Jupyter Notebook also provides functionality to edit the keyboard shortcuts as per the programmer’s convenience. You can edit keyboard shortcuts: Help>Edit Keyboard Shortcuts .
Jupyter Notebook is one of the best tools, extensively used by folks working in the data science domain thanks to its interactive UI. The above-discussed 15 tips and tricks will help you ease your jupyter notebook coding experience. It has a lot more built-in magic commands that are not discussed in this article; you can have a read here. Let me know your favorite tips, and comment if you know more tricks.
The images used in the article are either cited or generated by the author
Thank You for Reading | [
{
"code": null,
"e": 345,
"s": 171,
"text": "Jupyter Notebook is a browser bases REPL (read eval print loop) built on IPython and other open-source libraries, it allows us to run interactive python code on the browser."
},
{
"code": null,
"e": 484,
"s": 345,
"text": "It not only runs python code but also has many interesting plugins and magic commands which enhances the python coding experience greatly."
},
{
"code": null,
"e": 702,
"s": 484,
"text": "One can calculate the time of execution of a jupyter notebook cell using magic command at the beginning of the cell. It calculates the wall time that can be referred to as the total time required to execute that cell."
},
{
"code": null,
"e": 939,
"s": 702,
"text": "One can use a python external library to create a progress bar, that can give live updates of the progress of code. It keeps the user informed about the status of a running code script. You can get the Github repository of library here."
},
{
"code": null,
"e": 980,
"s": 939,
"text": "First, you need to install tqdm library,"
},
{
"code": null,
"e": 998,
"s": 980,
"text": "pip3 install tqdm"
},
{
"code": null,
"e": 1062,
"s": 998,
"text": "Or you can also install it in a jupyter notebook cell using ! ."
},
{
"code": null,
"e": 1173,
"s": 1062,
"text": "The tqdm function can be used by importing its package and the usage and implementation can be observed below:"
},
{
"code": null,
"e": 1402,
"s": 1173,
"text": "Using nb_black library, one can format a code snippet in a cell to a proper format. Sometimes the code snippet in a jupyter notebook cell is not well-formatted, this library helps to attain proper formatting of the code snippet."
},
{
"code": null,
"e": 1509,
"s": 1402,
"text": "nb_black is a simple extension for Jupyter Notebook and Jupyter Lab to beautify Python code automatically."
},
{
"code": null,
"e": 1538,
"s": 1509,
"text": "Installation of the library:"
},
{
"code": null,
"e": 1560,
"s": 1538,
"text": "pip3 install nb_black"
},
{
"code": null,
"e": 1588,
"s": 1560,
"text": "Usage for Jupyter Notebook:"
},
{
"code": null,
"e": 1607,
"s": 1588,
"text": "%load_ext nb_black"
},
{
"code": null,
"e": 1786,
"s": 1607,
"text": "Jupyter Notebook can install any python package in the notebook itself. To install any python package using the pip command in jupyter notebook cell enter a ! before the command."
},
{
"code": null,
"e": 1866,
"s": 1786,
"text": "For installing the pandas package: Enter ! pip install pandas and run the cell."
},
{
"code": null,
"e": 2135,
"s": 1866,
"text": "Jupyter Notebook can show that documentation of the function you are calling. Press Shift+Tab to view the documentation. This is very helpful as you don’t need to open the documentation website every single time. This feature also works for the local custom functions."
},
{
"code": null,
"e": 2142,
"s": 2135,
"text": "Usage:"
},
{
"code": null,
"e": 2195,
"s": 2142,
"text": "Write the name of the function you want to implement"
},
{
"code": null,
"e": 2238,
"s": 2195,
"text": "Press Shift+Tab to view the documentation."
},
{
"code": null,
"e": 2313,
"s": 2238,
"text": "Click on ^ on the top right corner of documentation to view it in a pager."
},
{
"code": null,
"e": 2358,
"s": 2313,
"text": "Click on + to grow the docstring vertically."
},
{
"code": null,
"e": 2393,
"s": 2358,
"text": "Click on x to close the docstring."
},
{
"code": null,
"e": 2776,
"s": 2393,
"text": "Jupyter Notebook can show suggestions for any function name or variable. To view suggestions writing typing the code press Tab in your keyboard and the suggestion will appear in a top-down menu. Press arrow-up or arrow-down key to scroll up or down the menu. You can also scroll using your mouse. Click on the keyword or hit enter on the selected keyword to confirm your suggestion."
},
{
"code": null,
"e": 2842,
"s": 2776,
"text": "You will also get suggestions for custom functions and variables."
},
{
"code": null,
"e": 3177,
"s": 2842,
"text": "Jupyter Notebook can print the output of each cell just below the cell. When you have a lot of output you can reduce the amount of space it takes up by clicking on the left side panel of the output. This will turn the output into a scrolling window. Double click on the left side of the output to completely collapse the output panel."
},
{
"code": null,
"e": 3288,
"s": 3177,
"text": "You can repeat the process of a single click or double click to change the format of viewing the output panel."
},
{
"code": null,
"e": 3381,
"s": 3288,
"text": "Jupyter Notebook has certain cell execution features that ease the programmer’s performance."
},
{
"code": null,
"e": 3496,
"s": 3381,
"text": "Shit+Enter will run the current cell and highlight the next cell, if no cell is present it will create a new cell."
},
{
"code": null,
"e": 3572,
"s": 3496,
"text": "Alt+Enter will run the current cell and insert a new cell and highlight it."
},
{
"code": null,
"e": 3764,
"s": 3572,
"text": "Jupyter notebook cells can not only run code snippets but also be used to write text. Markdown cells can be used to write text descriptions. It is a better way to express than using comments."
},
{
"code": null,
"e": 3771,
"s": 3764,
"text": "Usage:"
},
{
"code": null,
"e": 3816,
"s": 3771,
"text": "Click on the cell to convert it to markdown."
},
{
"code": null,
"e": 3867,
"s": 3816,
"text": "Choose the Markdown option from the drop-down menu"
},
{
"code": null,
"e": 4096,
"s": 3867,
"text": "Jupyter Notebook cells can also be used to compile and run code from different languages using IPython magic commands. Use IPython Magics with the name of your kernel at the start of each cell that you want to use that cell for:"
},
{
"code": null,
"e": 4103,
"s": 4096,
"text": "%%bash"
},
{
"code": null,
"e": 4110,
"s": 4103,
"text": "%%HTML"
},
{
"code": null,
"e": 4120,
"s": 4110,
"text": "%%python2"
},
{
"code": null,
"e": 4130,
"s": 4120,
"text": "%%python3"
},
{
"code": null,
"e": 4137,
"s": 4130,
"text": "%%ruby"
},
{
"code": null,
"e": 4144,
"s": 4137,
"text": "%%perl"
},
{
"code": null,
"e": 4385,
"s": 4144,
"text": "Jupyter Notebook supports editing code using multiple cursors at once. To select the code to edit at once press Alt key and select the code snippet using your mouse. After selection, you can now edit the code using multiple cursors at once."
},
{
"code": null,
"e": 4536,
"s": 4385,
"text": "Jupyter Notebook can be used to create a PowerPoint-style presentation. Here each cell or group of cells of the notebook can be treated as each slide."
},
{
"code": null,
"e": 4607,
"s": 4536,
"text": "Firstly, install RISE using conda: conda install -c damianavila82 rise"
},
{
"code": null,
"e": 4743,
"s": 4607,
"text": "Enter/Exit RISE Slideshow button appears in the notebook toolbar. A slideshow option will also appear under View>Cell Toolbar>Slideshow"
},
{
"code": null,
"e": 4855,
"s": 4743,
"text": "To prepare Slideshow click on View>Cell Toolbar>Slideshow and select the jupyter notebook cells for each slide."
},
{
"code": null,
"e": 4942,
"s": 4855,
"text": "After selecting each slide click on the RISE Slideshow button in the notebook toolbar."
},
{
"code": null,
"e": 4985,
"s": 4942,
"text": "Visit here for detailed video guide usage."
},
{
"code": null,
"e": 5065,
"s": 4985,
"text": "After code completion, you have several options to share your jupyter notebook."
},
{
"code": null,
"e": 5131,
"s": 5065,
"text": "Download your jupyter notebook as HTML, pdf, ipynb, py file, etc."
},
{
"code": null,
"e": 5285,
"s": 5131,
"text": "You can use JupyterHub that can create a multi-user Hub which spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server."
},
{
"code": null,
"e": 5376,
"s": 5285,
"text": "You can publish to medium directly from the jupyter notebook. Read this to know the steps."
},
{
"code": null,
"e": 5600,
"s": 5376,
"text": "Jupyter Notebook is the best tool used for data analysis and visualization. It can be used to generate different types of plots using different python or R libraries. Some of the python libraries used to generate plots are:"
},
{
"code": null,
"e": 5611,
"s": 5600,
"text": "Matplotlib"
},
{
"code": null,
"e": 5619,
"s": 5611,
"text": "Seaborn"
},
{
"code": null,
"e": 5625,
"s": 5619,
"text": "bokeh"
},
{
"code": null,
"e": 5633,
"s": 5625,
"text": "plot.ly"
},
{
"code": null,
"e": 5842,
"s": 5633,
"text": "Shortcuts are used to save a lot of programmer’s time and ease the coding experience. Jupyter notebook has plenty of inbuilt keyboard shortcuts that you find under the Help menu bar: Help>Keyboard Shortcuts ."
},
{
"code": null,
"e": 6019,
"s": 5842,
"text": "Jupyter Notebook also provides functionality to edit the keyboard shortcuts as per the programmer’s convenience. You can edit keyboard shortcuts: Help>Edit Keyboard Shortcuts ."
},
{
"code": null,
"e": 6418,
"s": 6019,
"text": "Jupyter Notebook is one of the best tools extensible used by folks working in the data science domain due to interactive UI. The above-discussed 15 tips and tricks will help you to ease your jupyter notebook coding experience. It has a lot more built-in magic commands that are not discussed in this article, you can have a read here. Let me know your favorite tips and comment if know more tricks."
},
{
"code": null,
"e": 6493,
"s": 6418,
"text": "The images used in the article are either cited or generated by the author"
}
] |
Count the Reversals | Practice | GeeksforGeeks | Given a string S consisting of only opening and closing curly brackets '{' and '}', find out the minimum number of reversals required to convert the string into a balanced expression.
A reversal means changing '{' to '}' or vice-versa.
Example 1:
Input:
S = "}{{}}{{{"
Output: 3
Explanation: One way to balance is:
"{{{}}{}}". There is no balanced sequence
that can be formed in lesser reversals.
Example 2:
Input:
S = "{{}{{{}{{}}{{"
Output: -1
Explanation: There's no way we can balance
this sequence of braces.
Your Task:
You don't need to read input or print anything. Your task is to complete the function countRev() which takes the string S as input parameter and returns the minimum number of reversals required to balance the bracket sequence. If balancing is not possible, return -1.
Expected Time Complexity: O(|S|).
Expected Auxiliary Space: O(1).
Constraints:
1 ≤ |S| ≤ 105
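For reference, one O(|S|) counting approach, sketched here in Python (submissions on the site are usually C++/Java, so this is illustrative only): cancel matched pairs in one pass, then pair up the leftovers.

```python
import math

def count_rev(s: str) -> int:
    if len(s) % 2:                 # odd length can never be balanced
        return -1
    open_cnt = close_cnt = 0
    for ch in s:
        if ch == '{':
            open_cnt += 1
        elif open_cnt:             # '}' matching an earlier '{'
            open_cnt -= 1
        else:                      # unmatched '}'
            close_cnt += 1
    # two identical leftovers ("{{" or "}}") cost 1 reversal,
    # a mixed leftover pair ("}{") costs 2; ceiling division covers both
    return math.ceil(open_cnt / 2) + math.ceil(close_cnt / 2)

print(count_rev("}{{}}{{{"))       # 3, matching Example 1
```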
0
himanshukug19cs2 weeks ago
java solution
Stack<Character> st = new Stack<>();
int count = 0;
int ans = 0;
for (int i = 0; i < s.length(); i++) {
    if (st.empty()) {
        if (s.charAt(i) == '}') {
            ans++;
        }
        st.push('{');
    } else {
        if (s.charAt(i) == '}')
            st.pop();
        else {
            st.push('{');
        }
    }
}
if (st.size() % 2 == 0) {
    ans += st.size() / 2;
    return ans;
}
return -1;
0
hanumanmanyam8374 weeks ago
class Sol
{
int countRev (String s)
{
// your code here
Stack<Character>st=new Stack<>();
int ans=0;
for(int i=0;i<s.length();i++)
{
if(s.charAt(i)=='{')
{
st.push(s.charAt(i));
}
else
{
if(st.isEmpty())
{
st.push('{');
ans++;
}
else
{
st.pop();
}
}
}
if(st.size()%2==0)
{
ans=(st.size()/2)+ans;
return ans;
}
return -1;
}
}
0
amarrajsmart1971 month ago
C++ CODE time taken=0.1 sec out of assigned 2.0 sec
int countRev (string s) {
    stack<char> st;
    int ans = 0;
    for (int i = 0; i < s.length(); i++) {
        if (s[i] == '{') {
            st.push(s[i]);
        } else {
            if (st.empty()) {
                st.push(s[i]);
                ans++;
            } else {
                st.pop();
            }
        }
    }
    if (st.size() % 2 == 0) {
        ans += st.size() / 2;
        return ans;
    } else
        return -1;
}
0
ksandeep27071 month ago
C++ CODE:-
int countRev (string s)
{
stack<char> st;
int ans=0;
for(int i=0;i<s.size();i++)
{
if(s[i]=='{')
st.push(s[i]);
else
{
if(st.empty())
{ st.push('{');ans++;}
else
st.pop();
}
}
if(st.size()%2==0)
return st.size()/2+ans;
else
return -1;
}
0
officialshivaji0071 month ago
int countRev (string s) {
    // your code here
    int n = s.size();
    if (n & 1) return -1;
    stack<char> st;
    int rev = 0;
    for (auto ch : s) {
        if (ch == '{') {
            st.push('{');
        } else {
            if (st.size() == 0) {
                rev++;
                st.push('{');
            } else {
                st.pop();
            }
        }
    }
    rev += st.size() / 2;
    return rev;
}
0
shubham55jha11 month ago
JAVA
// your code here
int n = s.length();
if(n % 2 != 0) return -1;
ArrayDeque<Character> q = new ArrayDeque<>();
for(int i = 0; i < n; i++){
char ch = s.charAt(i);
if(q.isEmpty() || ch == '{') q.offer(ch);
else if(q.peekLast() == '{') q.pollLast();
else q.offer(ch);
}
int res = 0;
while(!q.isEmpty()){
if(q.pop() == q.pop()) res++;
else res += 2;
}
return res;
+1
praveenprakash4392 months ago
int countRev (string s)
{
stack<char>st;
int ans=0;
for(int i=0;i<s.size();++i)
{
if(st.empty())
st.push(s[i]);
else if(st.top()=='{' and s[i]=='}')
st.pop();
else
st.push(s[i]);
}
if(st.size()%2!=0)
return -1;
while(!st.empty()){
char temp1=st.top();st.pop();
char temp2=st.top();st.pop();
if(temp1==temp2)
ans++;
else
ans+=2;
}
return ans;
}
0
gujjulassr2 months ago
Total Time Taken:
0.6/1.7
class Sol {
    int countRev(String str) {
        // your code here
        Stack<Character> s = new Stack<Character>();
        for (int i = 0; i < str.length(); i++) {
            if (str.charAt(i) == '}') {
                if (s.isEmpty()) {
                    s.push(str.charAt(i));
                } else {
                    if (s.peek() != str.charAt(i)) {
                        s.pop();
                    } else {
                        s.push(str.charAt(i));
                    }
                }
            } else {
                s.push(str.charAt(i));
            }
        }
        if (s.size() % 2 != 0) {
            return -1;
        }
        int co = 0;
        int cc = 0;
        while (!s.isEmpty()) {
            if (s.peek() == '{') {
                co++;
            } else {
                cc++;
            }
            s.pop();
        }
        return (co % 2) + (cc % 2) + (co / 2) + (cc / 2);
    }
}
0
uphaarkamal20032 months ago
int countRev (string s)
{
// your code here
int n = s.size();
if(n%2==1)
return -1;
stack<int> st;
int count=0;
for(int i=0 ; i<s.size() ; i++){
if(st.empty() || s[i]=='{'){
st.push(s[i]);
continue;
}
char ch = st.top();
if(ch=='{'){
st.pop();
}
else
st.push(s[i]);
}
while(!st.empty()){
char ch1=st.top();
st.pop();
char ch2=st.top();
st.pop();
if((ch1=='{' && ch2=='{') || (ch1=='}' && ch2=='}'))
count++;
else
count += 2;
}
return count;
}
+1
manishkumar250319992 months ago
int countRev (string s) {
    if (s.length() % 2 != 0)
        return -1;
    int l = 0, r = 0;
    for (int i = 0; i < s.length(); i++) {
        if (s[i] == '{')
            l++;
        else {
            if (l == 0)
                r++;
            else
                l--;
        }
    }
    return ceil(l / 2.0) + ceil(r / 2.0);
}
{
"code": null,
"e": 474,
"s": 238,
"text": "Given a string S consisting of only opening and closing curly brackets '{' and '}', find out the minimum number of reversals required to convert the string into a balanced expression.\nA reversal means changing '{' to '}' or vice-versa."
},
{
"code": null,
"e": 485,
"s": 474,
"text": "Example 1:"
},
{
"code": null,
"e": 636,
"s": 485,
"text": "Input:\nS = \"}{{}}{{{\"\nOutput: 3\nExplanation: One way to balance is:\n\"{{{}}{}}\". There is no balanced sequence\nthat can be formed in lesser reversals.\n"
},
{
"code": null,
"e": 651,
"s": 636,
"text": "​Example 2:"
},
{
"code": null,
"e": 759,
"s": 651,
"text": "Input: \nS = \"{{}{{{}{{}}{{\"\nOutput: -1\nExplanation: There's no way we can balance\nthis sequence of braces.\n"
},
{
"code": null,
"e": 1039,
"s": 759,
"text": "Your Task:\nYou don't need to read input or print anything. Your task is to complete the function countRev() which takes the string S as input parameter and returns the minimum number of reversals required to balance the bracket sequence. If balancing is not possible, return -1. "
},
{
"code": null,
"e": 1105,
"s": 1039,
"text": "Expected Time Complexity: O(|S|).\nExpected Auxiliary Space: O(1)."
},
{
"code": null,
"e": 1132,
"s": 1105,
"text": "Constraints:\n1 ≤ |S| ≤ 105"
},
{
"code": null,
"e": 1136,
"s": 1134,
"text": "0"
},
{
"code": null,
"e": 1163,
"s": 1136,
"text": "himanshukug19cs2 weeks ago"
},
{
"code": null,
"e": 1177,
"s": 1163,
"text": "java solution"
},
{
"code": null,
"e": 1702,
"s": 1177,
"text": " Stack<Character> st = new Stack<>(); int count=0; int ans=0; for(int i=0;i<s.length();i++){ if(st.empty()){ if(s.charAt(i)=='}'){ ans++;} st.push('{'); }else{ if(s.charAt(i)=='}') st.pop(); else{ st.push('{'); } } } if(st.size()%2==0){ ans+=st.size()/2; return ans; } return -1; "
},
{
"code": null,
"e": 1704,
"s": 1702,
"text": "0"
},
{
"code": null,
"e": 1732,
"s": 1704,
"text": "hanumanmanyam8374 weeks ago"
},
{
"code": null,
"e": 2427,
"s": 1732,
"text": "class Sol\n{\n int countRev (String s)\n {\n // your code here \n Stack<Character>st=new Stack<>();\n int ans=0;\n for(int i=0;i<s.length();i++)\n {\n if(s.charAt(i)=='{')\n {\n st.push(s.charAt(i));\n }\n else\n {\n if(st.isEmpty())\n {\n st.push('{');\n ans++;\n }\n else\n {\n st.pop();\n }\n }\n }\n if(st.size()%2==0)\n {\n ans=(st.size()/2)+ans;\n return ans;\n }\n return -1;\n \n }\n}"
},
{
"code": null,
"e": 2429,
"s": 2427,
"text": "0"
},
{
"code": null,
"e": 2456,
"s": 2429,
"text": "amarrajsmart1971 month ago"
},
{
"code": null,
"e": 2511,
"s": 2456,
"text": "C++ CODE time taken=0.1 sec out of assigned 2.0 sec "
},
{
"code": null,
"e": 2904,
"s": 2513,
"text": " stack<char>st; int ans=0; for(int i=0;i<s.length();i++) { if(s[i]=='{') { st.push(s[i]); } else { if(st.empty()) { st.push(s[i]); ans++; } else { st.pop(); } } } if(st.size()%2==0){ ans+=st.size()/2; return ans;}elsereturn -1;}"
},
{
"code": null,
"e": 2906,
"s": 2904,
"text": "0"
},
{
"code": null,
"e": 2930,
"s": 2906,
"text": "ksandeep27071 month ago"
},
{
"code": null,
"e": 2941,
"s": 2930,
"text": "C++ CODE:-"
},
{
"code": null,
"e": 3311,
"s": 2941,
"text": "int countRev (string s)\n{\n stack<char> st;\n int ans=0;\n for(int i=0;i<s.size();i++)\n {\n if(s[i]=='{')\n st.push(s[i]);\n else\n {\n if(st.empty())\n { st.push('{');ans++;}\n else\n st.pop();\n }\n }\n if(st.size()%2==0)\n return st.size()/2+ans;\n else\n return -1;\n}"
},
{
"code": null,
"e": 3313,
"s": 3311,
"text": "0"
},
{
"code": null,
"e": 3343,
"s": 3313,
"text": "officialshivaji0071 month ago"
},
{
"code": null,
"e": 3745,
"s": 3343,
"text": "int countRev (string s){ // your code here int n = s.size(); if(n&1) return -1; stack<char> st; int rev = 0; for(auto ch : s){ if(ch == '{'){ st.push('{'); }else{ if(st.size()==0) { rev++; st.push('{'); } else{ st.pop(); } } } rev += st.size()/2; return rev;}"
},
{
"code": null,
"e": 3747,
"s": 3745,
"text": "0"
},
{
"code": null,
"e": 3772,
"s": 3747,
"text": "shubham55jha11 month ago"
},
{
"code": null,
"e": 3777,
"s": 3772,
"text": "JAVA"
},
{
"code": null,
"e": 4285,
"s": 3777,
"text": "\t\t// your code here \n int n = s.length();\n if(n % 2 != 0) return -1;\n ArrayDeque<Character> q = new ArrayDeque<>();\n for(int i = 0; i < n; i++){\n char ch = s.charAt(i);\n if(q.isEmpty() || ch == '{') q.offer(ch);\n else if(q.peekLast() == '{') q.pollLast();\n else q.offer(ch);\n }\n int res = 0;\n while(!q.isEmpty()){\n if(q.pop() == q.pop()) res++;\n else res += 2;\n }\n return res;"
},
{
"code": null,
"e": 4288,
"s": 4285,
"text": "+1"
},
{
"code": null,
"e": 4318,
"s": 4288,
"text": "praveenprakash4392 months ago"
},
{
"code": null,
"e": 4829,
"s": 4318,
"text": "int countRev (string s)\n{\n stack<char>st;\n int ans=0;\n for(int i=0;i<s.size();++i)\n {\n if(st.empty())\n st.push(s[i]);\n else if(st.top()=='{' and s[i]=='}')\n st.pop();\n else\n st.push(s[i]);\n }\n if(st.size()%2!=0)\n return -1;\n while(!st.empty()){\n char temp1=st.top();st.pop();\n char temp2=st.top();st.pop();\n \n if(temp1==temp2)\n ans++;\n else\n ans+=2;\n }\n \n return ans;\n \n}"
},
{
"code": null,
"e": 4831,
"s": 4829,
"text": "0"
},
{
"code": null,
"e": 4854,
"s": 4831,
"text": "gujjulassr2 months ago"
},
{
"code": null,
"e": 4872,
"s": 4854,
"text": "Total Time Taken:"
},
{
"code": null,
"e": 4880,
"s": 4872,
"text": "0.6/1.7"
},
{
"code": null,
"e": 5696,
"s": 4882,
"text": "class Sol{ int countRev (String str) { // your code here Stack<Character> s=new Stack<Character>(); for(int i=0;i<str.length();i++){ if(str.charAt(i)=='}'){ if(s.isEmpty()){ s.push(str.charAt(i)); }else{ if(s.peek()!=str.charAt(i)){ s.pop(); }else{ s.push(str.charAt(i)); } } }else{ s.push(str.charAt(i)); } } if(s.size()%2!=0){ return -1; } int co=0; int cc=0; while(!s.isEmpty()){ if(s.peek()=='{'){ co++; }else{ cc++; } s.pop(); } return (co%2)+(cc%2)+(co/2)+(cc/2); }}"
},
{
"code": null,
"e": 5698,
"s": 5696,
"text": "0"
},
{
"code": null,
"e": 5726,
"s": 5698,
"text": "uphaarkamal20032 months ago"
},
{
"code": null,
"e": 6371,
"s": 5726,
"text": "\nint countRev (string s)\n{\n // your code here\n int n = s.size();\n if(n%2==1)\n return -1;\n stack<int> st;\n int count=0;\n for(int i=0 ; i<s.size() ; i++){\n if(st.empty() || s[i]=='{'){\n st.push(s[i]);\n continue;\n }\n char ch = st.top();\n if(ch=='{'){\n st.pop();\n }\n else\n st.push(s[i]);\n }\n while(!st.empty()){\n char ch1=st.top();\n st.pop();\n char ch2=st.top();\n st.pop();\n if((ch1=='{' && ch2=='{') || (ch1=='}' && ch2=='}'))\n count++;\n else\n count += 2;\n }\n return count;\n}\n"
},
{
"code": null,
"e": 6374,
"s": 6371,
"text": "+1"
},
{
"code": null,
"e": 6406,
"s": 6374,
"text": "manishkumar250319992 months ago"
},
{
"code": null,
"e": 6688,
"s": 6406,
"text": "int countRev (string s){ if(s.length()%2!=0) return -1; int l=0,r=0; for(int i=0;i<s.length();i++) { if(s[i]=='{') l++; else { if(l==0) r++; else l--; } } return ceil(l/2.0)+ceil(r/2.0);"
},
{
"code": null,
"e": 6834,
"s": 6688,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 6870,
"s": 6834,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 6880,
"s": 6870,
"text": "\nProblem\n"
},
{
"code": null,
"e": 6890,
"s": 6880,
"text": "\nContest\n"
},
{
"code": null,
"e": 6953,
"s": 6890,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 7101,
"s": 6953,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 7309,
"s": 7101,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 7415,
"s": 7309,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
Apache HttpClient - Custom SSL Context | Using Secure Socket Layer, you can establish a secured connection between the client and
server. It helps to safeguard sensitive information such as credit card numbers, usernames, passwords, pins, etc.
You can make connections more secure by creating your own SSL context using the HttpClient library.
Follow the steps given below to customize SSLContext using HttpClient library −
SSLContextBuilder is the builder for the SSLContext objects. Create its object using the custom() method of the SSLContexts class.
//Creating SSLContextBuilder object
SSLContextBuilder SSLBuilder = SSLContexts.custom();
In the path Java_home_directory/jre/lib/security/, you can find a file named cacerts. Save this as your key store file (with extension .jks). Load the keystore file and its password (which is changeit by default) using the loadTrustMaterial() method of the SSLContextBuilder class.
//Loading the Keystore file
File file = new File("mykeystore.jks");
SSLBuilder = SSLBuilder.loadTrustMaterial(file, "changeit".toCharArray());
An SSLContext object represents a secure socket protocol implementation. Build an SSLContext using the build() method.
//Building the SSLContext
SSLContext sslContext = SSLBuilder.build();
SSLConnectionSocketFactory is a layered socket factory for TLS and SSL connections. Using this, you can verify the Https server using a list of trusted certificates and authenticate the given Https server.
You can create this in many ways. Depending on the way you create an SSLConnectionSocketFactory object, you can allow all hosts, allow only self-signed
certificates, allow only particular protocols, etc.
To allow only particular protocols, create an SSLConnectionSocketFactory object by passing an SSLContext object, a string array of the protocols to be supported, a string array of the cipher suites to be supported, and a HostnameVerifier object to its constructor.
new SSLConnectionSocketFactory(sslcontext, new String[]{"TLSv1"}, null,
SSLConnectionSocketFactory.getDefaultHostnameVerifier());
To allow all hosts, create SSLConnectionSocketFactory object by passing a SSLContext object and a NoopHostnameVerifier object.
//Creating SSLConnectionSocketFactory SSLConnectionSocketFactory object
SSLConnectionSocketFactory sslConSocFactory = new SSLConnectionSocketFactory(sslcontext, new NoopHostnameVerifier());
Create an HttpClientBuilder object using the custom() method of the HttpClients class.
//Creating HttpClientBuilder
HttpClientBuilder clientbuilder = HttpClients.custom();
Set the SSLConnectionSocketFactory object to the HttpClientBuilder using the setSSLSocketFactory() method.
//Setting the SSLConnectionSocketFactory
clientbuilder = clientbuilder.setSSLSocketFactory(sslConSocFactory);
Build the CloseableHttpClient object by calling the build() method.
//Building the CloseableHttpClient
CloseableHttpClient httpclient = clientbuilder.build();
The HttpGet class represents the HTTP GET request which retrieves the information of
the given server using a URI.
Create a HTTP GET request by instantiating the HttpGet class by passing a string representing the URI.
//Creating the HttpGet request
HttpGet httpget = new HttpGet("https://example.com/");
Execute the request using the execute() method.
//Executing the request
HttpResponse httpresponse = httpclient.execute(httpget);
The following example demonstrates the customization of the SSLContext −
import java.io.File;
import javax.net.ssl.SSLContext;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;
import org.apache.http.ssl.SSLContexts;
import org.apache.http.util.EntityUtils;
public class ClientCustomSSL {
public final static void main(String[] args) throws Exception {
//Creating SSLContextBuilder object
SSLContextBuilder SSLBuilder = SSLContexts.custom();
//Loading the Keystore file
File file = new File("mykeystore.jks");
SSLBuilder = SSLBuilder.loadTrustMaterial(file,
"changeit".toCharArray());
      //Building the SSLContext using the build() method
SSLContext sslcontext = SSLBuilder.build();
//Creating SSLConnectionSocketFactory object
SSLConnectionSocketFactory sslConSocFactory = new SSLConnectionSocketFactory(sslcontext, new NoopHostnameVerifier());
//Creating HttpClientBuilder
HttpClientBuilder clientbuilder = HttpClients.custom();
//Setting the SSLConnectionSocketFactory
clientbuilder = clientbuilder.setSSLSocketFactory(sslConSocFactory);
//Building the CloseableHttpClient
CloseableHttpClient httpclient = clientbuilder.build();
//Creating the HttpGet request
HttpGet httpget = new HttpGet("https://example.com/");
//Executing the request
HttpResponse httpresponse = httpclient.execute(httpget);
//printing the status line
System.out.println(httpresponse.getStatusLine());
//Retrieving the HttpEntity and displaying the no.of bytes read
HttpEntity entity = httpresponse.getEntity();
if (entity != null) {
System.out.println(EntityUtils.toByteArray(entity).length);
}
}
}
On executing, the above program generates the following output.
HTTP/1.1 200 OK
1270
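For comparison only, Python's standard ssl module offers an analogous customization path: build a context, load trust material, and restrict protocol versions. This is a hedged sketch, not part of the HttpClient API, and the commented-out trust-store path is hypothetical:

```python
import ssl

# Build a context with certificate verification enabled by default,
# loosely analogous to SSLContexts.custom() followed by build().
context = ssl.create_default_context()

# Loading custom trust material, analogous to loadTrustMaterial():
# context.load_verify_locations("my-ca-bundle.pem")  # hypothetical path

# Restricting the allowed protocol versions, analogous to passing a
# protocol array to the SSLConnectionSocketFactory constructor.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```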
| [
{
"code": null,
"e": 2030,
"s": 1827,
"text": "Using Secure Socket Layer, you can establish a secured connection between the client and\nserver. It helps to safeguard sensitive information such as credit card numbers, usernames, passwords, pins, etc."
},
{
"code": null,
"e": 2130,
"s": 2030,
"text": "You can make connections more secure by creating your own SSL context using the HttpClient library."
},
{
"code": null,
"e": 2210,
"s": 2130,
"text": "Follow the steps given below to customize SSLContext using HttpClient library −"
},
{
"code": null,
"e": 2341,
"s": 2210,
"text": "SSLContextBuilder is the builder for the SSLContext objects. Create its object using the custom() method of the SSLContexts class."
},
{
"code": null,
"e": 2431,
"s": 2341,
"text": "//Creating SSLContextBuilder object\nSSLContextBuilder SSLBuilder = SSLContexts.custom();\n"
},
{
"code": null,
"e": 2714,
"s": 2431,
"text": "In the path Java_home_directory/jre/lib/security/, you can find a file named cacerts. Save this as your key store file (with extension .jks). Load the keystore file and, its password (which is changeit by default) using the loadTrustMaterial() method of the SSLContextBuilder class."
},
{
"code": null,
"e": 2858,
"s": 2714,
"text": "//Loading the Keystore file\nFile file = new File(\"mykeystore.jks\");\nSSLBuilder = SSLBuilder.loadTrustMaterial(file, \"changeit\".toCharArray());\n"
},
{
"code": null,
"e": 2977,
"s": 2858,
"text": "An SSLContext object represents a secure socket protocol implementation. Build an SSLContext using the build() method."
},
{
"code": null,
"e": 3048,
"s": 2977,
"text": "//Building the SSLContext\nSSLContext sslContext = SSLBuilder.build();\n"
},
{
"code": null,
"e": 3254,
"s": 3048,
"text": "SSLConnectionSocketFactory is a layered socket factory for TSL and SSL connections. Using this, you can verify the Https server using a list of trusted certificates and authenticate the given Https server."
},
{
"code": null,
"e": 3458,
"s": 3254,
"text": "You can create this in many ways. Depending on the way you create an SSLConnectionSocketFactory object, you can allow all hosts, allow only self-signed\ncertificates, allow only particular protocols, etc."
},
{
"code": null,
"e": 3744,
"s": 3458,
"text": "To allow only particular protocols, create SSLConnectionSocketFactory object by passing an SSLContext object, string array representing the protocols need to be supported, string array representing the cipher suits need to be supported and a HostnameVerifier object to its constructor."
},
{
"code": null,
"e": 3882,
"s": 3744,
"text": "new SSLConnectionSocketFactory(sslcontext, new String[]{\"TLSv1\"}, null, \n SSLConnectionSocketFactory.getDefaultHostnameVerifier());\n"
},
{
"code": null,
"e": 4009,
"s": 3882,
"text": "To allow all hosts, create SSLConnectionSocketFactory object by passing a SSLContext object and a NoopHostnameVerifier object."
},
{
"code": null,
"e": 4200,
"s": 4009,
"text": "//Creating SSLConnectionSocketFactory SSLConnectionSocketFactory object\nSSLConnectionSocketFactory sslConSocFactory = new SSLConnectionSocketFactory(sslcontext, new NoopHostnameVerifier());\n"
},
{
"code": null,
"e": 4287,
"s": 4200,
"text": "Create an HttpClientBuilder object using the custom() method of the HttpClients class."
},
{
"code": null,
"e": 4373,
"s": 4287,
"text": "//Creating HttpClientBuilder\nHttpClientBuilder clientbuilder = HttpClients.custom();\n"
},
{
"code": null,
"e": 4480,
"s": 4373,
"text": "Set the SSLConnectionSocketFactory object to the HttpClientBuilder using the setSSLSocketFactory() method."
},
{
"code": null,
"e": 4591,
"s": 4480,
"text": "//Setting the SSLConnectionSocketFactory\nclientbuilder = clientbuilder.setSSLSocketFactory(sslConSocFactory);\n"
},
{
"code": null,
"e": 4659,
"s": 4591,
"text": "Build the CloseableHttpClient object by calling the build() method."
},
{
"code": null,
"e": 4751,
"s": 4659,
"text": "//Building the CloseableHttpClient\nCloseableHttpClient httpclient = clientbuilder.build();\n"
},
{
"code": null,
"e": 4866,
"s": 4751,
"text": "The HttpGet class represents the HTTP GET request which retrieves the information of\nthe given server using a URI."
},
{
"code": null,
"e": 4969,
"s": 4866,
"text": "Create a HTTP GET request by instantiating the HttpGet class by passing a string representing the URI."
},
{
"code": null,
"e": 5056,
"s": 4969,
"text": "//Creating the HttpGet request\nHttpGet httpget = new HttpGet(\"https://example.com/\");\n"
},
{
"code": null,
"e": 5104,
"s": 5056,
"text": "Execute the request using the execute() method."
},
{
"code": null,
"e": 5186,
"s": 5104,
"text": "//Executing the request\nHttpResponse httpresponse = httpclient.execute(httpget);\n"
},
{
"code": null,
"e": 5256,
"s": 5186,
"text": "Following example demonstrates the customization of the SSLContrext −"
},
{
"code": null,
"e": 7344,
"s": 5256,
"text": "import java.io.File;\nimport javax.net.ssl.SSLContext;\nimport org.apache.http.HttpEntity;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.conn.ssl.NoopHostnameVerifier;\nimport org.apache.http.conn.ssl.SSLConnectionSocketFactory;\nimport org.apache.http.impl.client.CloseableHttpClient;\nimport org.apache.http.impl.client.HttpClientBuilder;\nimport org.apache.http.impl.client.HttpClients;\nimport org.apache.http.ssl.SSLContextBuilder;\nimport org.apache.http.ssl.SSLContexts;\nimport org.apache.http.util.EntityUtils;\n\npublic class ClientCustomSSL {\n \n public final static void main(String[] args) throws Exception {\n\n //Creating SSLContextBuilder object\n SSLContextBuilder SSLBuilder = SSLContexts.custom();\n \n //Loading the Keystore file\n File file = new File(\"mykeystore.jks\");\n SSLBuilder = SSLBuilder.loadTrustMaterial(file,\n \"changeit\".toCharArray());\n\n //Building the SSLContext usiong the build() method\n SSLContext sslcontext = SSLBuilder.build();\n \n //Creating SSLConnectionSocketFactory object\n SSLConnectionSocketFactory sslConSocFactory = new SSLConnectionSocketFactory(sslcontext, new NoopHostnameVerifier());\n \n //Creating HttpClientBuilder\n HttpClientBuilder clientbuilder = HttpClients.custom();\n\n //Setting the SSLConnectionSocketFactory\n clientbuilder = clientbuilder.setSSLSocketFactory(sslConSocFactory);\n\n //Building the CloseableHttpClient\n CloseableHttpClient httpclient = clientbuilder.build();\n \n //Creating the HttpGet request\n HttpGet httpget = new HttpGet(\"https://example.com/\");\n \n //Executing the request\n HttpResponse httpresponse = httpclient.execute(httpget);\n\n //printing the status line\n System.out.println(httpresponse.getStatusLine());\n\n //Retrieving the HttpEntity and displaying the no.of bytes read\n HttpEntity entity = httpresponse.getEntity();\n if (entity != null) {\n System.out.println(EntityUtils.toByteArray(entity).length);\n } 
\n }\n}"
},
{
"code": null,
"e": 7408,
"s": 7344,
"text": "On executing, the above program generates the following output."
},
{
"code": null,
"e": 7430,
"s": 7408,
"text": "HTTP/1.1 200 OK\n1270\n"
},
{
"code": null,
"e": 7465,
"s": 7430,
"text": "\n 46 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 7484,
"s": 7465,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 7519,
"s": 7484,
"text": "\n 23 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 7540,
"s": 7519,
"text": " Mukund Kumar Mishra"
},
{
"code": null,
"e": 7573,
"s": 7540,
"text": "\n 16 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 7586,
"s": 7573,
"text": " Nilay Mehta"
},
{
"code": null,
"e": 7621,
"s": 7586,
"text": "\n 52 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 7639,
"s": 7621,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 7672,
"s": 7639,
"text": "\n 14 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 7690,
"s": 7672,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 7723,
"s": 7690,
"text": "\n 23 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 7741,
"s": 7723,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 7748,
"s": 7741,
"text": " Print"
},
{
"code": null,
"e": 7759,
"s": 7748,
"text": " Add Notes"
}
] |
Python Program to Find all Numbers in a Range which are Perfect Squares and Sum of all Digits in the Number is Less than 10 | When it is required to find all numbers in a range that are perfect squares and whose digit sum is less than 10, a list comprehension can be used.
Below is the demonstration of the same −
lower_limit = int(input("Enter the lower range: "))
upper_limit = int(input("Enter the upper range: "))
my_list = []
my_list = [x for x in range(lower_limit, upper_limit + 1)
           if (int(x**0.5))**2 == x and sum(list(map(int, str(x)))) < 10]
print("The result is : ")
print(my_list)
Enter the lower range: 5
Enter the upper range: 12
The result is :
[9]
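The same logic reads more cleanly as a function that avoids the interactive input() calls; this is a sketch with names of my own choosing:

```python
def squares_with_small_digit_sum(lower, upper):
    """Perfect squares in [lower, upper] whose digit sum is below 10."""
    return [x for x in range(lower, upper + 1)
            if int(x ** 0.5) ** 2 == x           # perfect-square test
            and sum(int(d) for d in str(x)) < 10]  # digit-sum test

print(squares_with_small_digit_sum(5, 12))   # [9]
print(squares_with_small_digit_sum(1, 100))  # [1, 4, 9, 16, 25, 36, 81, 100]
```

Note that 49 and 64 are excluded because their digit sums (13 and 10) are not below 10.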
The lower range and upper range are taken from the user.
An empty list is defined.
A list comprehension is used to iterate from the lower to the upper limit.
The square root of each element is checked to test for a perfect square.
The digits of each element are summed up.
The qualifying elements are collected into a list, which is assigned to a variable.
The output is displayed on the console. | [
{
"code": null,
"e": 1222,
"s": 1062,
"text": "When it is required to find all numbers in a range where there are perfect square, and sum of digits in the number is less than 10, list comprehension is used."
},
{
"code": null,
"e": 1263,
"s": 1222,
"text": "Below is the demonstration of the same −"
},
{
"code": null,
"e": 1274,
"s": 1263,
"text": " Live Demo"
},
{
"code": null,
"e": 1545,
"s": 1274,
"text": "lower_limit = int(input(“Enter the lower range: “))\nupper_limit = int(input(“Enter the upper range: “))\nmy_list = []\nmy_list = [x for x in range(lower_limit,upper_limit+1) if (int(x**0.5))**2==x and\nsum(list(map(int,str(x))))<10]\nprint(“The result is : “)\nprint(my_list)"
},
{
"code": null,
"e": 1616,
"s": 1545,
"text": "Enter the lower range: 5\nEnter the upper range: 12\nThe result is :\n[9]"
},
{
"code": null,
"e": 1671,
"s": 1616,
"text": "The lower range and upper range are taken by the user."
},
{
"code": null,
"e": 1726,
"s": 1671,
"text": "The lower range and upper range are taken by the user."
},
{
"code": null,
"e": 1752,
"s": 1726,
"text": "An empty list is defined."
},
{
"code": null,
"e": 1778,
"s": 1752,
"text": "An empty list is defined."
},
{
"code": null,
"e": 1854,
"s": 1778,
"text": "The list comprehension is used, to iterate over the lower and upper limits."
},
{
"code": null,
"e": 1930,
"s": 1854,
"text": "The list comprehension is used, to iterate over the lower and upper limits."
},
{
"code": null,
"e": 1973,
"s": 1930,
"text": "The square root of the elements are found."
},
{
"code": null,
"e": 2016,
"s": 1973,
"text": "The square root of the elements are found."
},
{
"code": null,
"e": 2044,
"s": 2016,
"text": "The elements are summed up."
},
{
"code": null,
"e": 2072,
"s": 2044,
"text": "The elements are summed up."
},
{
"code": null,
"e": 2099,
"s": 2072,
"text": "It is converted to a list."
},
{
"code": null,
"e": 2126,
"s": 2099,
"text": "It is converted to a list."
},
{
"code": null,
"e": 2158,
"s": 2126,
"text": "This is assigned to a variable."
},
{
"code": null,
"e": 2190,
"s": 2158,
"text": "This is assigned to a variable."
},
{
"code": null,
"e": 2230,
"s": 2190,
"text": "The output is displayed on the console."
},
{
"code": null,
"e": 2270,
"s": 2230,
"text": "The output is displayed on the console."
}
] |
Django - File Uploading | It is generally useful for a web app to be able to upload files (profile pictures, songs, PDFs, Word documents, and so on). Let's discuss how to upload files in this chapter.
Before starting to play with an image, make sure you have the Python Imaging Library (PIL) installed. Now to illustrate uploading an image, let's create a profile form, in our myapp/forms.py −
#-*- coding: utf-8 -*-
from django import forms
class ProfileForm(forms.Form):
name = forms.CharField(max_length = 100)
   picture = forms.ImageField()
As you can see, the main difference here is just the forms.ImageField. ImageField will make sure the uploaded file is an image. If not, the form validation will fail.
Now let's create a "Profile" model to save our uploaded profile. This is done in myapp/models.py −
from django.db import models
class Profile(models.Model):
name = models.CharField(max_length = 50)
picture = models.ImageField(upload_to = 'pictures')
class Meta:
db_table = "profile"
As you can see for the model, the ImageField takes a compulsory argument: upload_to. This represents the place on the hard drive where your images will be saved. Note that the parameter will be added to the MEDIA_ROOT option defined in your settings.py file.
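As a concrete illustration, here is a hedged settings.py fragment (the directory layout is a placeholder, not taken from this tutorial) showing how MEDIA_ROOT and upload_to combine:

```python
# settings.py (fragment). Real projects usually derive BASE_DIR from
# __file__; the stand-in below just keeps this snippet self-contained.
import os

BASE_DIR = os.getcwd()  # placeholder for the project's base directory

# Uploaded pictures end up in MEDIA_ROOT/pictures/<filename>,
# because the model declared upload_to = 'pictures'.
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

# Public URL prefix used when serving the uploaded files.
MEDIA_URL = '/media/'
```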
Now that we have the Form and the Model, let's create the view, in myapp/views.py −
#-*- coding: utf-8 -*-
from myapp.forms import ProfileForm
from myapp.models import Profile
def SaveProfile(request):
saved = False
if request.method == "POST":
#Get the posted form
MyProfileForm = ProfileForm(request.POST, request.FILES)
if MyProfileForm.is_valid():
profile = Profile()
profile.name = MyProfileForm.cleaned_data["name"]
profile.picture = MyProfileForm.cleaned_data["picture"]
profile.save()
saved = True
else:
      MyProfileForm = ProfileForm()
return render(request, 'saved.html', locals())
The part not to miss is the change when creating the ProfileForm: we added a second parameter, request.FILES. If it is not passed, the form validation will fail, giving a message that says the picture is empty.
Now, we just need the saved.html template and the profile.html template, for the form and the redirection page −
myapp/templates/saved.html −
<html>
<body>
{% if saved %}
<strong>Your profile was saved.</strong>
{% endif %}
{% if not saved %}
<strong>Your profile was not saved.</strong>
{% endif %}
</body>
</html>
myapp/templates/profile.html −
<html>
<body>
<form name = "form" enctype = "multipart/form-data"
action = "{% url "myapp.views.SaveProfile" %}" method = "POST" >{% csrf_token %}
<div style = "max-width:470px;">
<center>
<input type = "text" style = "margin-left:20%;"
placeholder = "Name" name = "name" />
</center>
</div>
<br>
<div style = "max-width:470px;">
<center>
<input type = "file" style = "margin-left:20%;"
placeholder = "Picture" name = "picture" />
</center>
</div>
<br>
<div style = "max-width:470px;">
<center>
<button style = "border:0px;background-color:#4285F4; margin-top:8%;
height:35px; width:80%; margin-left:19%;" type = "submit" value = "Login" >
<strong>Login</strong>
</button>
</center>
</div>
</form>
</body>
</html>
Next, we need our pair of URLs to get started: myapp/urls.py
from django.conf.urls import patterns, url
from django.views.generic import TemplateView
urlpatterns = patterns(
'myapp.views', url(r'^profile/',TemplateView.as_view(
template_name = 'profile.html')), url(r'^saved/', 'SaveProfile', name = 'saved')
)
When accessing "/myapp/profile", we will get the following profile.html template rendered −
And on form post, the saved template will be rendered −
We have a sample for image, but if you want to upload another type of file, not just image, just replace the ImageField in both Model and Form with FileField.
| [
{
"code": null,
"e": 2203,
"s": 2045,
"text": "It is generally useful for a web app to be able to upload files (profile picture, songs, pdf, words.....). Let's discuss how to upload files in this chapter."
},
{
"code": null,
"e": 2394,
"s": 2203,
"text": "Before starting to play with an image, make sure you have the Python Image Library (PIL) installed. Now to illustrate uploading an image, let's create a profile form, in our myapp/forms.py −"
},
{
"code": null,
"e": 2551,
"s": 2394,
"text": "#-*- coding: utf-8 -*-\nfrom django import forms\n\nclass ProfileForm(forms.Form):\n name = forms.CharField(max_length = 100)\n picture = forms.ImageFields()"
},
{
"code": null,
"e": 2718,
"s": 2551,
"text": "As you can see, the main difference here is just the forms.ImageField. ImageField will make sure the uploaded file is an image. If not, the form validation will fail."
},
{
"code": null,
"e": 2817,
"s": 2718,
"text": "Now let's create a \"Profile\" model to save our uploaded profile. This is done in myapp/models.py −"
},
{
"code": null,
"e": 3018,
"s": 2817,
"text": "from django.db import models\n\nclass Profile(models.Model):\n name = models.CharField(max_length = 50)\n picture = models.ImageField(upload_to = 'pictures')\n\n class Meta:\n db_table = \"profile\""
},
{
"code": null,
"e": 3277,
"s": 3018,
"text": "As you can see for the model, the ImageField takes a compulsory argument: upload_to. This represents the place on the hard drive where your images will be saved. Note that the parameter will be added to the MEDIA_ROOT option defined in your settings.py file."
},
{
"code": null,
"e": 3361,
"s": 3277,
"text": "Now that we have the Form and the Model, let's create the view, in myapp/views.py −"
},
{
"code": null,
"e": 3962,
"s": 3361,
"text": "#-*- coding: utf-8 -*-\nfrom myapp.forms import ProfileForm\nfrom myapp.models import Profile\n\ndef SaveProfile(request):\n saved = False\n \n if request.method == \"POST\":\n #Get the posted form\n MyProfileForm = ProfileForm(request.POST, request.FILES)\n \n if MyProfileForm.is_valid():\n profile = Profile()\n profile.name = MyProfileForm.cleaned_data[\"name\"]\n profile.picture = MyProfileForm.cleaned_data[\"picture\"]\n profile.save()\n saved = True\n else:\n MyProfileForm = Profileform()\n\t\t\n return render(request, 'saved.html', locals())"
},
{
"code": null,
"e": 4173,
"s": 3962,
"text": "The part not to miss is, there is a change when creating a ProfileForm, we added a second parameters: request.FILES. If not passed the form validation will fail, giving a message that says the picture is empty."
},
{
"code": null,
"e": 4286,
"s": 4173,
"text": "Now, we just need the saved.html template and the profile.html template, for the form and the redirection page −"
},
{
"code": null,
"e": 4315,
"s": 4286,
"text": "myapp/templates/saved.html −"
},
{
"code": null,
"e": 4555,
"s": 4315,
"text": "<html>\n <body>\n \n {% if saved %}\n <strong>Your profile was saved.</strong>\n {% endif %}\n \n {% if not saved %}\n <strong>Your profile was not saved.</strong>\n {% endif %}\n \n </body>\n</html>"
},
{
"code": null,
"e": 4586,
"s": 4555,
"text": "myapp/templates/profile.html −"
},
{
"code": null,
"e": 5696,
"s": 4586,
"text": "<html>\n <body>\n \n <form name = \"form\" enctype = \"multipart/form-data\" \n action = \"{% url \"myapp.views.SaveProfile\" %}\" method = \"POST\" >{% csrf_token %}\n \n <div style = \"max-width:470px;\">\n <center> \n <input type = \"text\" style = \"margin-left:20%;\" \n placeholder = \"Name\" name = \"name\" />\n </center>\n </div>\n\t\t\t\n <br>\n \n <div style = \"max-width:470px;\">\n <center> \n <input type = \"file\" style = \"margin-left:20%;\" \n placeholder = \"Picture\" name = \"picture\" />\n </center>\n </div>\n\t\t\t\n <br>\n \n <div style = \"max-width:470px;\">\n <center> \n \n <button style = \"border:0px;background-color:#4285F4; margin-top:8%; \n height:35px; width:80%; margin-left:19%;\" type = \"submit\" value = \"Login\" >\n <strong>Login</strong>\n </button>\n \n </center>\n </div>\n \n </form>\n \n </body>\n</html>"
},
{
"code": null,
"e": 5757,
"s": 5696,
"text": "Next, we need our pair of URLs to get started: myapp/urls.py"
},
{
"code": null,
"e": 6017,
"s": 5757,
"text": "from django.conf.urls import patterns, url\nfrom django.views.generic import TemplateView\n\nurlpatterns = patterns(\n 'myapp.views', url(r'^profile/',TemplateView.as_view(\n template_name = 'profile.html')), url(r'^saved/', 'SaveProfile', name = 'saved')\n)"
},
{
"code": null,
"e": 6109,
"s": 6017,
"text": "When accessing \"/myapp/profile\", we will get the following profile.html template rendered −"
},
{
"code": null,
"e": 6165,
"s": 6109,
"text": "And on form post, the saved template will be rendered −"
},
{
"code": null,
"e": 6324,
"s": 6165,
"text": "We have a sample for image, but if you want to upload another type of file, not just image, just replace the ImageField in both Model and Form with FileField."
}
] |
Exponential Search | Exponential search is also known as doubling or galloping search. This technique is used to find the range in which the search key may be present. If L and U are the lower and upper bounds of such a range, both L and U are powers of 2; for the last range, U is simply the last position of the list. Because the range bounds grow as powers of 2, the search is called exponential.
After finding the specific range, it uses the binary search technique to find the exact location of the search key.
Time Complexity: O(1) for the best case; O(log2 i) for the average or worst case, where i is the location at which the search key is present.
Space Complexity: O(1)
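The approach can also be sketched in plain Python. This is an illustrative version (the function names are my own, and clamping the upper bound with min is my own addition so the final binary search never runs past the end of the list):

```python
def binary_search(arr, lo, hi, key):
    # Standard iterative binary search restricted to arr[lo..hi].
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        if arr[mid] > key:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1

def exponential_search(arr, key):
    # Double the probe index until arr[i] >= key, then binary-search
    # the range [i // 2, min(i, len(arr) - 1)].
    if not arr:
        return -1
    if arr[0] == key:
        return 0
    i = 1
    while i < len(arr) and arr[i] < key:
        i *= 2
    return binary_search(arr, i // 2, min(i, len(arr) - 1), key)
```

With the sample list used in the example below, searching for 780 lands on index 16.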
Input:
A sorted list of data:
10 13 15 26 28 50 56 88 94 127 159 356 480 567 689 699 780 850 956 995
The search key 780
Output:
Item found at location: 16
binarySearch(array, start, end, key)
Input: A sorted array, the start and end locations, and the search key
Output: the location of the key (if found); otherwise, an invalid location
Begin
if start <= end then
mid := start + (end - start) /2
if array[mid] = key then
return mid location
      if array[mid] > key then
         call binarySearch(array, start, mid-1, key)
      else when array[mid] < key then
         call binarySearch(array, mid+1, end, key)
else
return invalid location
End
exponentialSearch(array, start, end, key)
Input: A sorted array, the start and end locations, and the search key
Output: the location of the key (if found); otherwise, an invalid location
Begin
   if (end - start) <= 0 then
return invalid location
i := 1
while i < (end - start) do
if array[i] < key then
i := i * 2 //increase i as power of 2
else
terminate the loop
done
   call binarySearch(array, i/2, min(i, end-1), key)
End
#include<iostream>
using namespace std;
int binarySearch(int array[], int start, int end, int key) {
if(start <= end) {
int mid = (start + (end - start) /2); //mid location of the list
if(array[mid] == key)
return mid;
if(array[mid] > key)
return binarySearch(array, start, mid-1, key);
return binarySearch(array, mid+1, end, key);
}
return -1;
}
int exponentialSearch(int array[], int start, int end, int key){
if((end - start) <= 0)
return -1;
int i = 1; // as 2^0 = 1
while(i < (end - start)){
if(array[i] < key)
i *= 2; //i will increase as power of 2
else
         break; //when array[i] crosses the key element
}
   return binarySearch(array, i/2, (i < end ? i : end - 1), key); //search item in the smaller range
}
int main() {
int n, searchKey, loc;
cout << "Enter number of items: ";
cin >> n;
int arr[n]; //create an array of size n
cout << "Enter items: " << endl;
for(int i = 0; i< n; i++) {
cin >> arr[i];
}
cout << "Enter search key to search in the list: ";
cin >> searchKey;
if((loc = exponentialSearch(arr, 0, n, searchKey)) >= 0)
cout << "Item found at location: " << loc << endl;
else
cout << "Item is not found in the list." << endl;
}
Enter number of items: 20
Enter items:
10 13 15 26 28 50 56 88 94 127 159 356 480 567 689 699 780 850 956 995
Enter search key to search in the list: 780
Item found at location: 16 | [
{
"code": null,
"e": 1402,
"s": 1062,
"text": "Exponential search is also known as doubling or galloping search. This mechanism is used to find the range where the search key may present. If L and U are the upper and lower bound of the list, then L and U both are the power of 2. For the last section, the U is the last position of the list. For that reason, it is known as exponential."
},
{
"code": null,
"e": 1518,
"s": 1402,
"text": "After finding the specific range, it uses the binary search technique to find the exact location of the search key."
},
{
"code": null,
"e": 1649,
"s": 1518,
"text": "Time Complexity: O(1) for the best case. O(log2 i) for average or worst case. Where i is the location where search key is present."
},
{
"code": null,
"e": 1672,
"s": 1649,
"text": "Space Complexity: O(1)"
},
{
"code": null,
"e": 1827,
"s": 1672,
"text": "Input:\nA sorted list of data:\n10 13 15 26 28 50 56 88 94 127 159 356 480 567 689 699 780 850 956 995\nThe search key 780\nOutput:\nItem found at location: 16"
},
{
"code": null,
"e": 1864,
"s": 1827,
"text": "binarySearch(array, start, end, key)"
},
{
"code": null,
"e": 1931,
"s": 1864,
"text": "Input: An sorted array, start and end location, and the search key"
},
{
"code": null,
"e": 1998,
"s": 1931,
"text": "Output − location of the key (if found), otherwise wrong location."
},
{
"code": null,
"e": 2341,
"s": 1998,
"text": "Begin\n   if start <= end then\n      mid := start + (end - start) /2\n      if array[mid] = key then\n         return mid location\n      if array[mid] > key then\n         call binarySearch(array, start, mid-1, key)\n      else when array[mid] < key then\n         call binarySearch(array, mid+1, end, key)\n      else\n         return invalid location\nEnd"
},
{
"code": null,
"e": 2383,
"s": 2341,
"text": "exponentialSearch(array, start, end, key)"
},
{
"code": null,
"e": 2450,
"s": 2383,
"text": "Input: An sorted array, start and end location, and the search key"
},
{
"code": null,
"e": 2516,
"s": 2450,
"text": "Output: location of the key (if found), otherwise wrong location."
},
{
"code": null,
"e": 2790,
"s": 2516,
"text": "Begin\n if (end – start) <= 0 then\n return invalid location\n i := 1\n while i < (end - start) do\n if array[i] < key then\n i := i * 2 //increase i as power of 2\n else\n terminate the loop\n done\n call binarySearch(array, i/2, i, key)\nEnd"
},
{
"code": null,
"e": 4089,
"s": 2790,
"text": "#include<iostream>\nusing namespace std;\n\nint binarySearch(int array[], int start, int end, int key) {\n if(start <= end) {\n int mid = (start + (end - start) /2); //mid location of the list\n if(array[mid] == key)\n return mid;\n if(array[mid] > key)\n return binarySearch(array, start, mid-1, key);\n return binarySearch(array, mid+1, end, key);\n }\n return -1;\n}\n\nint exponentialSearch(int array[], int start, int end, int key){\n if((end - start) <= 0)\n return -1;\n int i = 1; // as 2^0 = 1\n while(i < (end - start)){\n if(array[i] < key)\n i *= 2; //i will increase as power of 2\n else\n break; //when array[i] corsses the key element\n }\n return binarySearch(array, i/2, i, key); //search item in the smaller range\n}\n\nint main() {\n int n, searchKey, loc;\n cout << \"Enter number of items: \";\n cin >> n;\n int arr[n]; //create an array of size n\n cout << \"Enter items: \" << endl;\n for(int i = 0; i< n; i++) {\n cin >> arr[i];\n }\n cout << \"Enter search key to search in the list: \";\n cin >> searchKey;\n if((loc = exponentialSearch(arr, 0, n, searchKey)) >= 0)\n cout << \"Item found at location: \" << loc << endl;\n else\n cout << \"Item is not found in the list.\" << endl;\n}"
},
{
"code": null,
"e": 4270,
"s": 4089,
"text": "Enter number of items: 20\nEnter items:\n10 13 15 26 28 50 56 88 94 127 159 356 480 567 689 699 780 850 956 995\nEnter search key to search in the list: 780\nItem found at location: 16"
}
] |
Distributed Processing with PyArrow-Powered New Pandas UDFs in PySpark 3.0 | by Pınar Ersoy | Towards Data Science | Data processing time is so valuable as each minute spent costs back to users in financial terms. This article is mainly for data scientists and data engineers looking to use the newest enhancements of Apache Spark since, in a noticeably short amount of time, Apache Spark has emerged as the next generation big data processing engine, and is highly being practiced throughout the industry faster than ever.
Spark’s consolidated structure supports both compatible and constructible APIs that are formed to empower high performance by optimizing across the various libraries and functions built together in programs enabling users to build applications beyond existing libraries. It gives the opportunity for users to write their own analytical libraries on top as well.
Data is costly to move, so Spark concentrates on performing computations over the data, regardless of where it resides. In its user-facing APIs, Spark strives to make these storage systems look largely alike, so that applications do not need to worry about where their data is.
When the data is too big to fit on a single machine with a long time to execute that computation on one machine drives it to place the data on more than one server or computer. This logic requires processing the data in a distributed manner. Spark DataFrame is the ultimate Structured API that serves a table of data with rows and columns. With its column-and-column-type schema, it can span large numbers of data sources.
The purpose of this article is to introduce the benefits of one of the currently released features of Spark 3.0 that is related to Pandas with Apache Arrow usage with PySpark in order to be able to execute a pandas-like UDFs in a parallel manner. In the following headings, PyArrow’s crucial usage with PySpark session configurations, PySpark enabled Pandas UDFs will be explained in a detailed way by providing code snippets for corresponding topics. At the end of the article, references and additional resources are added for further research.
In previous versions of Spark, converting a DataFrame to Pandas in PySpark involved inefficient steps: collecting all rows to the Spark driver, serializing each row into Python's pickle format (row by row), and sending them to a Python worker process, which at the end of this procedure unpickled each row into a massive list of tuples. To overcome these ineffective operations, Apache Arrow, which is integrated with Apache Spark, can be used to provide faster columnar data transfer and conversion.
Apache Arrow helps accelerate the conversion from traditional columnar memory to pandas objects by providing high-performance in-memory columnar data structures.
Previously, Spark exposed a row-based interface for interpreting and running user-defined functions (UDFs). This introduced high overhead in serialization and deserialization and made it difficult to work with Python libraries such as NumPy and Pandas, whose performance-critical parts are compiled to fast machine code.
The newly proposed UDFs introduce new APIs to support vectorized UDFs in Python, in which blocks of data are transferred to Python in a columnar format and serialized block by block instead of row by row.
Pandas package is recognized by machine learning and data science specialists since it has coherent integrations with plenty of Python libraries and packages including scikit-learn, matplotlib, and NumPy.
Also, Pandas UDFs allow users both to distribute their data workloads and to use the Pandas APIs in Apache Spark.
More details on user-defined functions can be found on the official site:
Apache Arrow enables data to be transferred precisely between the Java Virtual Machine and Python executors with zero serialization cost by leveraging the Arrow columnar memory layout, which speeds up the processing of string data.
Pandas libraries to work with its instances and APIs.
For better performance while executing jobs, the following configurations should be set.
To benefit from PyArrow optimizations, enable the following configuration, which is disabled by default, by setting it to true: spark.sql.execution.arrow.pyspark.enabled
The optimization enabled above may need to fall back to the non-Arrow implementation if an error occurs before the actual computation within Spark. To allow this fallback, set the following to true: spark.sql.execution.arrow.pyspark.fallback.enabled
Parquet summary metadata brings no benefit in either of the cases below, so it is worth disabling it with the following configurations:
mergeSchema = false: It is assumed that the schema of all Parquet part-files is identical, for this reason, the footer can be read from any part-files.
mergeSchema = true: The footers are required to be read for all files to actualize the merge process.
spark.sql.parquet.mergeSchema false
spark.hadoop.parquet.enable.summary-metadata false
To sum up, the final recommended list of Arrow optimized configurations are as follows:
"spark.sql.execution.arrow.pyspark.enabled", "true"
"spark.sql.execution.arrow.pyspark.fallback.enabled", "true"
"spark.sql.parquet.mergeSchema", "false"
"spark.hadoop.parquet.enable.summary-metadata", "false"
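Put together, these settings could be applied when building the session. The snippet below is a minimal configuration sketch, not a complete job; the application name is an arbitrary placeholder:

```python
from pyspark.sql import SparkSession

# Sketch: build a session with the Arrow-related settings recommended above.
spark = (
    SparkSession.builder
    .appName("arrow-optimized-app")  # placeholder name
    .config("spark.sql.execution.arrow.pyspark.enabled", "true")
    .config("spark.sql.execution.arrow.pyspark.fallback.enabled", "true")
    .config("spark.sql.parquet.mergeSchema", "false")
    .config("spark.hadoop.parquet.enable.summary-metadata", "false")
    .getOrCreate()
)
```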
Proper usage of PyArrow and PandasUDF requires some packages to be upgraded in the PySpark development platform.
The following packages need to be updated in order to use the latest version of PandasUDF with Spark 3.0 properly.
# Install with Conda
conda install -c conda-forge pyarrow

# Install PyArrow with Python
pip install pyarrow==0.15.0

# Install Py4j with Python
pip install py4j==0.10.9

# Install pyspark with Python
pip install pyspark==3.0.0
Also, you may need to assign a new environment variable in order not to face any issues with the PyArrow upgrade of 0.15.1 when running Pandas UDFs.
# Environment Variable Setting for PyArrow Version Upgrade
import os
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
PyArrow shows its largest performance gains when reading Parquet files rather than other file formats. In this blog, you can find a benchmark study of reads across different file formats.
It can be used with different kinds of packages with varying processing times with Python:
Parquet to Arrow : pyarrow.parquet
# Importing PyArrow
import pyarrow.parquet as pq

path = "dataset/dimension"
data_frame = pq.read_table(path).to_pandas()
Parquet to Arrow with Pandas Dataframe : pyarrow.parquet then convert to pandas.DataFrame
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

pandas_df = pd.DataFrame(data={'column_1': [1, 2],
                               'column_2': [3, 4],
                               'column_3': [5, 6]})
table = pa.Table.from_pandas(pandas_df, preserve_index=True)
pq.write_table(table, 'pandas_dataframe.parquet')
As long as we are concerned with the performance and processing speed of written scripts, it is beneficial to be aware of how to measure their processing times.
There are two kinds of elapsed-time measurement when a Python script is executed.
Processor Time: It measures how long a specific process is actively executing on the CPU. Time spent sleeping or waiting for a web request is not included. time.process_time()
Wall-Clock Time: It measures how much time has passed “on a clock hanging on the wall”, i.e., real elapsed time. time.perf_counter()
There are additional ways to compute the amount of time spent on a running script.
The time.time() function also measures wall-clock time; however, the system clock it reads can be adjusted, so it may jump backward in time rather than only moving forward.
The time.monotonic() function is guaranteed to only move forward; however, it typically has lower resolution than time.perf_counter().
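To make the distinction concrete, the helper below is my own sketch (not from the article): it times a single call with both clocks. A sleep shows up fully in the wall-clock figure but contributes almost nothing to the processor-time figure.

```python
import time

def measure(fn, *args):
    """Return (result, cpu_seconds, wall_seconds) for one call to fn."""
    cpu_start, wall_start = time.process_time(), time.perf_counter()
    result = fn(*args)
    cpu_end, wall_end = time.process_time(), time.perf_counter()
    return result, cpu_end - cpu_start, wall_end - wall_start

# Sleeping consumes wall-clock time but essentially no processor time.
_, cpu_s, wall_s = measure(time.sleep, 0.1)
print(f"cpu={cpu_s:.4f}s wall={wall_s:.4f}s")
```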
Pandas User-Defined Functions are vectorized UDFs: powered by Apache Arrow, they permit vectorized operations that deliver much higher performance than row-at-a-time Python UDFs. They can be counted among the most impactful improvements in Apache Spark for the distributed processing of customized functions, bringing countless benefits, including empowering users to use Pandas APIs and improving performance.
Registering these customized functions in Python also exposes them to SQL users, who can call them directly without the extra scripting effort otherwise needed to wire their functionality together.
Functions can be executed by means of Row, Group, and Window while data formats can be used as Series for column and DataFrame for table structures.
Scalar type of Pandas UDF can be described as the conversion of one or more Pandas Series into one Pandas Series. The final returning data series size is expected to be the same as the input data series.
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql import Window

dataframe = spark.createDataFrame(
    [(1, 5), (2, 7), (2, 8), (2, 10), (3, 18), (3, 22), (4, 36)],
    ("index", "weight"))

# The function definition and the UDF creation
@pandas_udf("int")
def weight_avg_udf(weight: pd.Series) -> float:
    return weight.mean()

dataframe.select(weight_avg_udf(dataframe['weight'])).show()
Grouped Agg of Pandas UDF can be defined as the conversion of one or more Pandas Series into one Scalar. The final returned data value type is required to be primitive (boolean, byte, char, short, int, long, float, and double) data type.
# Aggregation Process on Pandas UDF
dataframe.groupby("index").agg(weight_avg_udf(dataframe['weight'])).show()

w = Window \
    .partitionBy('index') \
    .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
# Print the windowed results
dataframe.withColumn('avg_weight', weight_avg_udf(dataframe['weight']).over(w)).show()
Grouped Map of Pandas UDF can be identified as the conversion of one or more Pandas DataFrame into one Pandas DataFrame. The final returned data size can be arbitrary.
import numpy as np

# Pandas DataFrame generation
pandas_dataframe = pd.DataFrame(np.random.rand(200, 4))

def weight_map_udf(pandas_dataframe):
    weight = pandas_dataframe.weight
    return pandas_dataframe.assign(weight=weight - weight.mean())

dataframe.groupby("index").applyInPandas(weight_map_udf, schema="index int, weight int").show()
According to the specifications of your input and output data, you can switch between these vectorized UDFs by adding more complex functions to them.
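To see the Series-in, Series-out contract of a scalar UDF without a running cluster, here is a plain-pandas sketch of the batch-level function such a UDF wraps: Spark hands each Arrow batch of the column in as a pd.Series and expects a Series of the same length back. The function name and sample data are my own illustration, not from the article.

```python
import pandas as pd

def center_weights(weight: pd.Series) -> pd.Series:
    # Scalar (Series -> Series) semantics: output length equals input length.
    return weight - weight.mean()

batch = pd.Series([5, 7, 8, 10, 18, 22, 36])
centered = center_weights(batch)
```

In Spark, the same function would be decorated with @pandas_udf and applied to each column batch in parallel.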
The full implementation code and Jupyter Notebook are available on my GitHub.
Questions and comments are highly appreciated!
Python User Defined Functions
Pandas API
Apache Spark
Apache Arrow
Vectorized UDF: Scalable Analysis with Python and PySpark
Demo for Apache Arrow Tokyo Meetup 2018
Spark: The Definitive Guide | [
{
"code": null,
"e": 579,
"s": 172,
"text": "Data processing time is so valuable as each minute spent costs back to users in financial terms. This article is mainly for data scientists and data engineers looking to use the newest enhancements of Apache Spark since, in a noticeably short amount of time, Apache Spark has emerged as the next generation big data processing engine, and is highly being practiced throughout the industry faster than ever."
},
{
"code": null,
"e": 941,
"s": 579,
"text": "Spark’s consolidated structure supports both compatible and constructible APIs that are formed to empower high performance by optimizing across the various libraries and functions built together in programs enabling users to build applications beyond existing libraries. It gives the opportunity for users to write their own analytical libraries on top as well."
},
{
"code": null,
"e": 1232,
"s": 941,
"text": "Data is costly to migrate so Spark concentrates on performing computations over the data, regardless of where it locates. In user-interacting APIs, Spark strives to manage these storage systems that seem broadly related in case applications do not require concern about where their data is."
},
{
"code": null,
"e": 1655,
"s": 1232,
"text": "When the data is too big to fit on a single machine with a long time to execute that computation on one machine drives it to place the data on more than one server or computer. This logic requires processing the data in a distributed manner. Spark DataFrame is the ultimate Structured API that serves a table of data with rows and columns. With its column-and-column-type schema, it can span large numbers of data sources."
},
{
"code": null,
"e": 2202,
"s": 1655,
"text": "The purpose of this article is to introduce the benefits of one of the currently released features of Spark 3.0 that is related to Pandas with Apache Arrow usage with PySpark in order to be able to execute a pandas-like UDFs in a parallel manner. In the following headings, PyArrow’s crucial usage with PySpark session configurations, PySpark enabled Pandas UDFs will be explained in a detailed way by providing code snippets for corresponding topics. At the end of the article, references and additional resources are added for further research."
},
{
"code": null,
"e": 2739,
"s": 2202,
"text": "In the previous versions of Spark, there were inefficient steps for converting DataFrame to Pandas in PySpark as collecting all rows to the Spark driver, serializing each row into Python’s pickle format (row by row), and sending them to a Python worker process. At the end of this converting procedure, it unpickles each row into a massive list of tuples. In order to be able to overcome these ineffective operations, Apache Arrow that is integrated with Apache Spark can be used to empower faster columnar data transfer and conversion."
},
{
"code": null,
"e": 2901,
"s": 2739,
"text": "Apache Arrow helps to accelerate converting to pandas objects from traditional columnar memory providing the high-performance in-memory columnar data structures."
},
{
"code": null,
"e": 3237,
"s": 2901,
"text": "Previously, Spark reveals a row-based interface for interpreting and running user-defined functions (UDFs). This introduces high overhead in serialization and deserialization and makes it difficult to work with Python libraries such as NumPy, Pandas which are coded in native Python that enables them to compile faster to machine code."
},
{
"code": null,
"e": 3485,
"s": 3237,
"text": "With the newly proposed UDFs, it advocates introducing new APIs to support vectorized UDFs in Python, in which a block of data is transferred over to Python in some columnar format for execution by serializing block by block instead of row by row."
},
{
"code": null,
"e": 3690,
"s": 3485,
"text": "Pandas package is recognized by machine learning and data science specialists since it has coherent integrations with plenty of Python libraries and packages including scikit-learn, matplotlib, and NumPy."
},
{
"code": null,
"e": 3802,
"s": 3690,
"text": "Also, Pandas UDFs support users both to distribute their data loads and to use the Pandas APIs in Apache Spark."
},
{
"code": null,
"e": 3880,
"s": 3802,
"text": "The user-defined functions can be executed by referring to the official site:"
},
{
"code": null,
"e": 4104,
"s": 3880,
"text": "Apache Arrow enables to transfer of data precisely between Java Virtual Machine and executors of Python with zero serialization cost by leveraging the Arrow columnar memory layout to fasten up the processing of string data."
},
{
"code": null,
"e": 4158,
"s": 4104,
"text": "Pandas libraries to work with its instances and APIs."
},
{
"code": null,
"e": 4258,
"s": 4158,
"text": "For better performance, while executing jobs, the following configurations shall be set as follows."
},
{
"code": null,
"e": 4455,
"s": 4258,
"text": "To be able to benefit from PyArrow optimizations, the following configuration can be enabled by setting this config to true which is disabled by default : spark.sql.execution.arrow.pyspark.enabled"
},
{
"code": null,
"e": 4739,
"s": 4455,
"text": "The upper enabled optimization may fall back to the non-Arrow optimization implementation situation in case of an error. To cope with this issue that occurs the actual computation within Spark,fallback.enabled shall be set to true : spark.sql.execution.arrow.pyspark.fallback.enabled"
},
{
"code": null,
"e": 4843,
"s": 4739,
"text": "Parquet-summary-metadata is not efficient to enable the following configurations for the below reasons:"
},
{
"code": null,
"e": 5096,
"s": 4843,
"text": "mergeSchema = false: It is assumed that the schema of all Parquet part-files is identical, for this reason, the footer can be read from any part-files.mergeSchema = true: The footers are required to be read for all files to actualize the merge process."
},
{
"code": null,
"e": 5248,
"s": 5096,
"text": "mergeSchema = false: It is assumed that the schema of all Parquet part-files is identical, for this reason, the footer can be read from any part-files."
},
{
"code": null,
"e": 5350,
"s": 5248,
"text": "mergeSchema = true: The footers are required to be read for all files to actualize the merge process."
},
{
"code": null,
"e": 5436,
"s": 5350,
"text": "spark.sql.parquet.mergeSchema falsespark.hadoop.parquet.enable.summary-metadata false"
},
{
"code": null,
"e": 5524,
"s": 5436,
"text": "To sum up, the final recommended list of Arrow optimized configurations are as follows:"
},
{
"code": null,
"e": 5731,
"s": 5524,
"text": "\"spark.sql.execution.arrow.pyspark.enabled\", \"true\"\"spark.sql.execution.arrow.pyspark.fallback.enabled\", \"true\"\"spark.sql.parquet.mergeSchema\", \"false\"\"spark.hadoop.parquet.enable.summary-metadata\", \"false\""
},
{
"code": null,
"e": 5844,
"s": 5731,
"text": "Proper usage of PyArrow and PandasUDF requires some packages to be upgraded in the PySpark development platform."
},
{
"code": null,
"e": 5990,
"s": 5844,
"text": "The following list of packages is needed to be updated in order to be able to use the latest version of PandasUDF with Spark 3.0 in a proper way."
},
{
"code": null,
"e": 6208,
"s": 5990,
"text": "# Install with Condaconda install -c conda-forge pyarrow# Install PyArrow with Pythonpip install pyarrow==0.15.0# Install Py4j with Pythonpip install py4j==0.10.9# Install pyspark with Pythonpip install pyspark==3.0.0"
},
{
"code": null,
"e": 6357,
"s": 6208,
"text": "Also, you may need to assign a new environment variable in order not to face any issues with the PyArrow upgrade of 0.15.1 when running Pandas UDFs."
},
{
"code": null,
"e": 6470,
"s": 6357,
"text": "# Environment Variable Setting for PyArrow Version Upgradeimport osos.environ[\"ARROW_PRE_0_15_IPC_FORMAT\"] = \"1\""
},
{
"code": null,
"e": 6651,
"s": 6470,
"text": "PyArrow has a greater performance gap when it reads parquet files instead of other file formats. In this blog, you can find a benchmark study regarding different file format reads."
},
{
"code": null,
"e": 6742,
"s": 6651,
"text": "It can be used with different kinds of packages with varying processing times with Python:"
},
{
"code": null,
"e": 6777,
"s": 6742,
"text": "Parquet to Arrow : pyarrow.parquet"
},
{
"code": null,
"e": 6896,
"s": 6777,
"text": "# Importing PyArrow import pyarrow.parquet as pqpath = \"dataset/dimension\"data_frame = pq.read_table(path).to_pandas()"
},
{
"code": null,
"e": 6986,
"s": 6896,
"text": "Parquet to Arrow with Pandas Dataframe : pyarrow.parquet then convert to pandas.DataFrame"
},
{
"code": null,
"e": 7254,
"s": 6986,
"text": "import pandas as pdimport pyarrow as paimport pyarrow.parquet as pqpandas_df = pd.DataFrame(data={'column_1': [1, 2], 'column_2': [3, 4], 'column_3': [5, 6]})table = pa.Table.from_pandas(pandas_df, preserve_index=True)pq.write_table(table, 'pandas_dataframe.parquet')"
},
{
"code": null,
"e": 7415,
"s": 7254,
"text": "As long as we are concerned with the performance and processing speed of written scripts, it is beneficial to be aware of how to measure their processing times."
},
{
"code": null,
"e": 7509,
"s": 7415,
"text": "There exist two types of time-passed processing calculation when a Python script is executed."
},
{
"code": null,
"e": 7681,
"s": 7509,
"text": "Processor Time: It measures how long a specific process actively being executed on the CPU. Sleep, waiting for a web request, or time are not included. time.process_time()"
},
{
"code": null,
"e": 7814,
"s": 7681,
"text": "Wall-Clock Time: It calculates how much time has passed “on a clock hanging on the wall”, i.e. outside real time.time.perf_counter()"
},
{
"code": null,
"e": 7897,
"s": 7814,
"text": "There are additional ways to compute the amount of time spent on a running script."
},
{
"code": null,
"e": 8062,
"s": 7897,
"text": "time.time() function is also quantifes time-passed as a wall-clock time; however it can be calibrated. For this reason, it is needed to go back in time to reset it."
},
{
"code": null,
"e": 8197,
"s": 8062,
"text": "time.monotonic() function is monotonic that simply goes forward; however it has reduced precision performance than time.perf_counter()"
},
{
"code": null,
"e": 8635,
"s": 8197,
"text": "Pandas User-Defined Functions can be identified as vectorized UDF that is powered by Apache Arrow permits vectorized operations that serve much higher performance compared to row-at-a-time Python UDFs. They can be accepted as the most impactful improvements in Apache Spark by means of distributed processing of customized functions. They bring countless benefits, including empowering users to use Pandas APIs and improving performance."
},
{
"code": null,
"e": 8862,
"s": 8635,
"text": "Ingesting Spark customized function structures in Python reveals its advanced functionality to SQL users by allowing them to call in the functions without generating the extra scripting effort to connect their functionalities."
},
{
"code": null,
"e": 9011,
"s": 8862,
"text": "Functions can be executed by means of Row, Group, and Window while data formats can be used as Series for column and DataFrame for table structures."
},
{
"code": null,
"e": 9215,
"s": 9011,
"text": "Scalar type of Pandas UDF can be described as the conversion of one or more Pandas Series into one Pandas Series. The final returning data series size is expected to be the same as the input data series."
},
{
"code": null,
"e": 9618,
"s": 9215,
"text": "import pandas as pdfrom pyspark.sql.functions import pandas_udffrom pyspark.sql import Windowdataframe = spark.createDataFrame( [(1, 5), (2, 7), (2, 8), (2, 10), (3, 18), (3, 22), (4, 36)], (“index”, “weight”))# The function definition and the UDF creation@pandas_udf(“int”)def weight_avg_udf(weight: pd.Series) -> float: return weight.mean()dataframe.select(weight_avg_udf(dataframe[‘weight’])).show()"
},
{
"code": null,
"e": 9856,
"s": 9618,
"text": "Grouped Agg of Pandas UDF can be defined as the conversion of one or more Pandas Series into one Scalar. The final returned data value type is required to be primitive (boolean, byte, char, short, int, long, float, and double) data type."
},
{
"code": null,
"e": 10075,
"s": 9856,
"text": "# Aggregation Process on Pandas UDFdataframe.groupby(\"index\").agg(weight_avg_udf(dataframe['weight'])).show()w = Window \\ .partitionBy('index') \\ .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)"
},
{
"code": null,
"e": 10190,
"s": 10075,
"text": "# Print the windowed resultsdataframe.withColumn('avg_weight', weight_avg_udf(dataframe['weight']).over(w)).show()"
},
{
"code": null,
"e": 10358,
"s": 10190,
"text": "Grouped Map of Pandas UDF can be identified as the conversion of one or more Pandas DataFrame into one Pandas DataFrame. The final returned data size can be arbitrary."
},
{
"code": null,
"e": 10694,
"s": 10358,
"text": "import numpy as np# Pandas DataFrame generationpandas_dataframe = pd.DataFrame(np.random.rand(200, 4))def weight_map_udf(pandas_dataframe): weight = pandas_dataframe.weight return pandas_dataframe.assign(weight=weight - weight.mean())dataframe.groupby(\"index\").applyInPandas(weight_map_udf, schema=\"index int, weight int\").show()"
},
{
"code": null,
"e": 10844,
"s": 10694,
"text": "According to the specifications of your input and output data, you can switch between these vectorized UDFs by adding more complex functions to them."
},
{
"code": null,
"e": 10922,
"s": 10844,
"text": "The full implementation code and Jupyter Notebook are available on my GitHub."
},
{
"code": null,
"e": 10969,
"s": 10922,
"text": "Questions and comments are highly appreciated!"
},
{
"code": null,
"e": 11033,
"s": 10969,
"text": "Python User Defined FunctionsPandas APIApache SparkApache Arrow"
},
{
"code": null,
"e": 11063,
"s": 11033,
"text": "Python User Defined Functions"
},
{
"code": null,
"e": 11074,
"s": 11063,
"text": "Pandas API"
},
{
"code": null,
"e": 11087,
"s": 11074,
"text": "Apache Spark"
},
{
"code": null,
"e": 11100,
"s": 11087,
"text": "Apache Arrow"
},
{
"code": null,
"e": 11224,
"s": 11100,
"text": "Vectorized UDF: Scalable Analysis with Python and PySparkDemo for Apache Arrow Tokyo Meetup 2018Spark: The Definitive Guide"
},
{
"code": null,
"e": 11282,
"s": 11224,
"text": "Vectorized UDF: Scalable Analysis with Python and PySpark"
},
{
"code": null,
"e": 11322,
"s": 11282,
"text": "Demo for Apache Arrow Tokyo Meetup 2018"
}
] |
How to read and parse CSV files in C++? | You should really be using a library to parse CSV files in C++, as there are many edge cases that you can miss if you read the files on your own. The boost library for C++ provides a really nice set of tools for reading CSV files. For example,
#include<iostream>
vector<string> parseCSVLine(string line){
using namespace boost;
std::vector<std::string> vec;
// Tokenizes the input string
tokenizer<escaped_list_separator<char> > tk(line, escaped_list_separator<char>
('\\', ',', '\"'));
for (auto i = tk.begin(); i!=tk.end(); ++i)
vec.push_back(*i);
return vec;
}
int main() {
std::string line = "hello,from,here";
auto words = parseCSVLine(line);
for(auto it = words.begin(); it != words.end(); it++) {
std::cout << *it << std::endl;
}
}
This will give the output −
hello
from
here
Another way is to split the line on a delimiter and collect the pieces into an array; we can do this by providing a custom delimiter to the getline function −
#include <vector>
#include <string>
#include <sstream>
using namespace std;
int main() {
std::stringstream str_strm("hello,from,here");
std::string tmp;
vector<string> words;
   char delim = ','; // Define the delimiter to split by
while (std::getline(str_strm, tmp, delim)) {
// Provide proper checks here for tmp like if empty
// Also strip down symbols like !, ., ?, etc.
// Finally push it.
words.push_back(tmp);
}
for(auto it = words.begin(); it != words.end(); it++) {
std::cout << *it << std::endl;
}
}
This will give the output −
hello
from
here | [
{
"code": null,
"e": 1298,
"s": 1062,
"text": "You should really be using a library to parsing CSV files in C++ as there are many cases that you can miss if you read files on your own. The boost library for C++ provides a really nice set of tools for reading CSV files. For example,"
},
{
"code": null,
"e": 1841,
"s": 1298,
"text": "#include<iostream>\nvector<string> parseCSVLine(string line){\n using namespace boost;\n\n std::vector<std::string> vec;\n\n // Tokenizes the input string\n tokenizer<escaped_list_separator<char> > tk(line, escaped_list_separator<char>\n ('\\\\', ',', '\\\"'));\n for (auto i = tk.begin(); i!=tk.end(); ++i)\n vec.push_back(*i);\n\n return vec;\n}\n\nint main() {\n std::string line = \"hello,from,here\";\n auto words = parseCSVLine(line);\n for(auto it = words.begin(); it != words.end(); it++) {\n std::cout << *it << std::endl;\n }\n}"
},
{
"code": null,
"e": 1869,
"s": 1841,
"text": "This will give the output −"
},
{
"code": null,
"e": 1885,
"s": 1869,
"text": "hello\nfrom\nhere"
},
{
"code": null,
"e": 1961,
"s": 1885,
"text": "Another way is to use a delimiter to split a line and take it in an array −"
},
{
"code": null,
"e": 2058,
"s": 1961,
"text": "Another way is to provide a custom delimiter to split the string by using the getline function −"
},
{
"code": null,
"e": 2624,
"s": 2058,
"text": "#include <vector>\n#include <string>\n#include <sstream>\n\nusing namespace std;\n\nint main() {\n std::stringstream str_strm(\"hello,from,here\");\n std::string tmp;\n vector<string> words;\n char delim = ','; // Ddefine the delimiter to split by\n\n while (std::getline(str_strm, tmp, delim)) {\n // Provide proper checks here for tmp like if empty\n // Also strip down symbols like !, ., ?, etc.\n // Finally push it.\n words.push_back(tmp);\n }\n\n for(auto it = words.begin(); it != words.end(); it++) {\n std::cout << *it << std::endl;\n }\n}"
},
{
"code": null,
"e": 2652,
"s": 2624,
"text": "This will give the output −"
},
{
"code": null,
"e": 2668,
"s": 2652,
"text": "hello\nfrom\nhere"
}
] |
Tcl - Commands | As you know, Tcl is a tool command language, and commands are the most vital part of the language. Tcl commands are built into the language, each having its own predefined function. These commands form the reserved words of the language and cannot be used as other variable names. The advantage of these Tcl commands is that you can define your own implementation for any of them to replace the original built-in functionality.
Each of the Tcl commands validates the input and it reduces the work of the interpreter.
A Tcl command is actually a list of words, with the first word representing the command to be executed. The next words represent the arguments. In order to group multiple words into a single argument, we enclose them in "" or {}.
The syntax of Tcl command is as follows −
commandName argument1 argument2 ... argumentN
Let's see a simple example of Tcl command −
#!/usr/bin/tclsh
puts "Hello, world!"
When the above code is executed, it produces the following result −
Hello, world!
In the above code, ‘puts’ is the Tcl command and "Hello World" is the argument1. As said before, we have used "" to group two words.
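Curly braces also group words into one argument, but unlike double quotes they prevent substitution inside the group. A small sketch of the difference (the variable name is illustrative):

```tcl
#!/usr/bin/tclsh

set name World
puts "Hello, $name!"   ;# double quotes allow variable substitution
puts {Hello, $name!}   ;# curly braces keep the text literal
```

Here the first line prints Hello, World! while the second prints the literal text Hello, $name!.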
Let's see another example of Tcl command with two arguments −
#!/usr/bin/tclsh
puts stdout "Hello, world!"
When the above code is executed, it produces the following result −
Hello, world!
In the above code, ‘puts’ is the Tcl command, ‘stdout’ is argument1, and "Hello World" is argument2. Here, stdout makes the program to print in the standard output device.
In command substitutions, square brackets are used to evaluate the scripts inside the square brackets. A simple example to add two numbers is shown below −
#!/usr/bin/tclsh
puts [expr 1 + 6 + 9]
When the above code is executed, it produces following result −
16
In variable substitutions, $ is used before the variable name and this returns the contents of the variable. A simple example to set a value to a variable and print it is shown below.
#!/usr/bin/tclsh
set a 3
puts $a
When the above code is executed, it produces the following result −
3
These are commonly called escape sequences; with each backslash, followed by a letter having its own meaning. A simple example for newline substitution is shown below −
#!/usr/bin/tclsh
puts "Hello\nWorld"
When the above code is executed, it produces the following result −
Hello
World
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2645,
"s": 2201,
"text": "As you know, Tcl is a Tool command language, commands are the most vital part of the language. Tcl commands are built in-to the language with each having its own predefined function. These commands form the reserved words of the language and cannot be used for other variable naming. The advantage with these Tcl commands is that, you can define your own implementation for any of these commands to replace the original built-in functionality."
},
{
"code": null,
"e": 2734,
"s": 2645,
"text": "Each of the Tcl commands validates the input and it reduces the work of the interpreter."
},
{
"code": null,
"e": 2969,
"s": 2734,
"text": "Tcl command is actually a list of words, with the first word representing the command to be executed. The next words represent the arguments. In order to group the words into a single argument, we enclose multiple words with \"\" or {}."
},
{
"code": null,
"e": 3011,
"s": 2969,
"text": "The syntax of Tcl command is as follows −"
},
{
"code": null,
"e": 3057,
"s": 3011,
"text": "commandName argument1 argument2 ... argumentN"
},
{
"code": null,
"e": 3101,
"s": 3057,
"text": "Let's see a simple example of Tcl command −"
},
{
"code": null,
"e": 3140,
"s": 3101,
"text": "#!/usr/bin/tclsh\n\nputs \"Hello, world!\""
},
{
"code": null,
"e": 3208,
"s": 3140,
"text": "When the above code is executed, it produces the following result −"
},
{
"code": null,
"e": 3223,
"s": 3208,
"text": "Hello, world!\n"
},
{
"code": null,
"e": 3356,
"s": 3223,
"text": "In the above code, ‘puts’ is the Tcl command and \"Hello World\" is the argument1. As said before, we have used \"\" to group two words."
},
{
"code": null,
"e": 3418,
"s": 3356,
"text": "Let's see another example of Tcl command with two arguments −"
},
{
"code": null,
"e": 3464,
"s": 3418,
"text": "#!/usr/bin/tclsh\n\nputs stdout \"Hello, world!\""
},
{
"code": null,
"e": 3532,
"s": 3464,
"text": "When the above code is executed, it produces the following result −"
},
{
"code": null,
"e": 3547,
"s": 3532,
"text": "Hello, world!\n"
},
{
"code": null,
"e": 3719,
"s": 3547,
"text": "In the above code, ‘puts’ is the Tcl command, ‘stdout’ is argument1, and \"Hello World\" is argument2. Here, stdout makes the program to print in the standard output device."
},
{
"code": null,
"e": 3875,
"s": 3719,
"text": "In command substitutions, square brackets are used to evaluate the scripts inside the square brackets. A simple example to add two numbers is shown below −"
},
{
"code": null,
"e": 3915,
"s": 3875,
"text": "#!/usr/bin/tclsh\n\nputs [expr 1 + 6 + 9]"
},
{
"code": null,
"e": 3979,
"s": 3915,
"text": "When the above code is executed, it produces following result −"
},
{
"code": null,
"e": 3983,
"s": 3979,
"text": "16\n"
},
{
"code": null,
"e": 4167,
"s": 3983,
"text": "In variable substitutions, $ is used before the variable name and this returns the contents of the variable. A simple example to set a value to a variable and print it is shown below."
},
{
"code": null,
"e": 4201,
"s": 4167,
"text": "#!/usr/bin/tclsh\n\nset a 3\nputs $a"
},
{
"code": null,
"e": 4269,
"s": 4201,
"text": "When the above code is executed, it produces the following result −"
},
{
"code": null,
"e": 4272,
"s": 4269,
"text": "3\n"
},
{
"code": null,
"e": 4441,
"s": 4272,
"text": "These are commonly called escape sequences; with each backslash, followed by a letter having its own meaning. A simple example for newline substitution is shown below −"
},
{
"code": null,
"e": 4479,
"s": 4441,
"text": "#!/usr/bin/tclsh\n\nputs \"Hello\\nWorld\""
},
{
"code": null,
"e": 4547,
"s": 4479,
"text": "When the above code is executed, it produces the following result −"
},
{
"code": null,
"e": 4560,
"s": 4547,
"text": "Hello\nWorld\n"
},
{
"code": null,
"e": 4567,
"s": 4560,
"text": " Print"
},
{
"code": null,
"e": 4578,
"s": 4567,
"text": " Add Notes"
}
] |
What are the differences between a static block and a constructor in Java? | The static blocks are executed at the time of class loading.
The static blocks are executed before running the main () method.
The static blocks don't have any name in its prototype.
If we want any logic to be executed at the time of class loading, that logic needs to be placed inside a static block.
static {
//some statements
}
Live Demo
public class StaticBlockTest {
static {
System.out.println("Static Block!");
}
public static void main(String args[]) {
System.out.println("Welcome to Tutorials Point!");
}
}
Static Block!
Welcome to Tutorials Point!
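A class may contain several static blocks; they execute exactly once, in the order they appear in the source, when the class is loaded, whereas a constructor body runs on every instantiation. A small sketch of that ordering (the class and field names are illustrative):

```java
public class InitOrder {
    static StringBuilder log = new StringBuilder();

    // Static blocks run once, in textual order, at class loading time
    static { log.append("first "); }
    static { log.append("second "); }

    // The constructor runs each time an object is created
    InitOrder() { log.append("ctor "); }

    public static void main(String args[]) {
        new InitOrder();
        new InitOrder();
        System.out.println(log.toString().trim()); // first second ctor ctor
    }
}
```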
A Constructor will be executed while creating an object in Java.
A Constructor is called while creating an object of a class.
The name of a constructor must be always the same name as a class.
A constructor is called only once per object, but it runs as many times as we create objects, i.e., the constructor gets executed automatically whenever an object is created.
public class MyClass {
//This is the constructor
MyClass() {
// some statements
}
}
Live Demo
public class ConstructorTest {
static {
//static block
System.out.println("In Static Block!");
}
public ConstructorTest() {
System.out.println("In a first constructor!");
}
public ConstructorTest(int c) {
System.out.println("In a second constructor!");
}
public static void main(String args[]) {
ConstructorTest ct1 = new ConstructorTest();
ConstructorTest ct2 = new ConstructorTest(10);
}
}
In Static Block!
In a first constructor!
In a second constructor! | [
{
"code": null,
"e": 1123,
"s": 1062,
"text": "The static blocks are executed at the time of class loading."
},
{
"code": null,
"e": 1189,
"s": 1123,
"text": "The static blocks are executed before running the main () method."
},
{
"code": null,
"e": 1245,
"s": 1189,
"text": "The static blocks don't have any name in its prototype."
},
{
"code": null,
"e": 1430,
"s": 1245,
"text": "If we want any logic that needs to be executed at the time of class loading that logic needs to placed inside the static block so that it will be executed at the time of class loading."
},
{
"code": null,
"e": 1462,
"s": 1430,
"text": "static {\n //some statements\n}"
},
{
"code": null,
"e": 1472,
"s": 1462,
"text": "Live Demo"
},
{
"code": null,
"e": 1671,
"s": 1472,
"text": "public class StaticBlockTest {\n static {\n System.out.println(\"Static Block!\");\n }\n public static void main(String args[]) {\n System.out.println(\"Welcome to Tutorials Point!\");\n }\n}"
},
{
"code": null,
"e": 1713,
"s": 1671,
"text": "Static Block!\nWelcome to Tutorials Point!"
},
{
"code": null,
"e": 1778,
"s": 1713,
"text": "A Constructor will be executed while creating an object in Java."
},
{
"code": null,
"e": 1839,
"s": 1778,
"text": "A Constructor is called while creating an object of a class."
},
{
"code": null,
"e": 1906,
"s": 1839,
"text": "The name of a constructor must be always the same name as a class."
},
{
"code": null,
"e": 2089,
"s": 1906,
"text": "A Constructor is called only once for an object and it is called as many times as we can create an object. i.e The constructor gets executed automatically when the object is created."
},
{
"code": null,
"e": 2188,
"s": 2089,
"text": "public class MyClass {\n //This is the constructor\n MyClass() {\n // some statements\n }\n}"
},
{
"code": null,
"e": 2198,
"s": 2188,
"text": "Live Demo"
},
{
"code": null,
"e": 2650,
"s": 2198,
"text": "public class ConstructorTest {\n static {\n //static block\n System.out.println(\"In Static Block!\");\n }\n public ConstructorTest() {\n System.out.println(\"In a first constructor!\");\n }\n public ConstructorTest(int c) {\n System.out.println(\"In a second constructor!\");\n }\n public static void main(String args[]) {\n ConstructorTest ct1 = new ConstructorTest();\n ConstructorTest ct2 = new ConstructorTest(10);\n }\n}"
},
{
"code": null,
"e": 2716,
"s": 2650,
"text": "In Static Block!\nIn a first constructor!\nIn a second constructor!"
}
] |
ES6 | Array forEach() Method - GeeksforGeeks | 13 Dec, 2021
When working with arrays, it is very common to iterate through the elements and manipulate them. Traditionally this can be done using for, while, or do-while loops. The forEach() method instead calls a given function once for each element in the array.
Syntax:
array.forEach( callback, thisObject )
Parameter: This method accept only two parameter mentioned above and described below:
callback: The function to execute for each and every element in the array.
thisObject: The value to use as this when the callback function is executed.
Return: It returns undefined; unlike map(), forEach() does not produce a new array. Without forEach() Loop: The first line declares and initializes a number array called array_1 with values [2, 3, 4, 5, 6]. To double every element of this array, a for loop is used which runs from zero (because the index of an array starts from zero) to one less than the length of the array. On the third line, each element is extracted from the array and is multiplied by 2, hence doubling the values. Program 1:
javascript
<script>var array_1 = [2, 3, 4, 5, 6];for(var i = 0; i < array_1.length; i++) { array_1[i] *= 2;}document.write(array_1);</script>
Output:
4, 6, 8, 10, 12
With forEach() Loop: ES6 introduced an easier-to-understand method for arrays, forEach(). Let's see how it works in the same situation as above. Program 2:
javascript
<script>var array_1 = [2, 3, 4, 5, 6];array_1.forEach(function(number, i) { array_1[i] *= 2;}); document.write(array_1);</script>
Output:
4, 6, 8, 10, 12
Same as previous, the first line declares and initializes a number array called array_1 with values [2, 3, 4, 5, 6]. Then the forEach method is called on array_1; it iterates over the array and takes a callback function as an argument. The callback function accepts three arguments,
currentValue – Which is a required argument, this corresponds to the value of the current element.
index – It is an optional argument, this is the corresponding index value of the current element.
array – It is also an optional argument, this is the original array, here array_1.
So, we make use of the second argument that is index and follow the exact same algorithm as before to double the value. Program 3: Here we just use the currentValue argument using which we printout each of the values from names array to the console.
javascript
<script>var names = ['Arpan', 'Abhishek', 'GeeksforGeeks']; names.forEach(function(name){ document.write(name + "<br/>");});</script>
Output:
Arpan
Abhishek
GeeksforGeeks
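One caveat worth remembering: forEach() always returns undefined, and there is no way to break out of it early. When an early exit is needed, some() (which stops as soon as the callback returns true) or a plain for...of loop is the usual choice. A minimal sketch:

```javascript
const numbers = [2, 3, 4, 5, 6];

// forEach() is used purely for side effects; its return value is undefined
const result = numbers.forEach(function (n) { return n * 2; });
console.log(result); // undefined

// some() stops iterating once the callback returns true
let visited = 0;
const hasBig = numbers.some(function (n) {
    visited++;
    return n > 4;
});
console.log(hasBig, visited); // true 4
```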
surinderdawra388
anikakapoor
JavaScript-ES
Picked
JavaScript
Technical Scripter
Web Technologies
Web technologies Questions
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Convert a string to an integer in JavaScript
Difference between var, let and const keywords in JavaScript
Differences between Functional Components and Class Components in React
Difference Between PUT and PATCH Request
Node.js | fs.writeFileSync() Method
Roadmap to Become a Web Developer in 2022
Installation of Node.js on Linux
How to fetch data from an API in ReactJS ?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
How to insert spaces/tabs in text using HTML/CSS? | [
{
"code": null,
"e": 24598,
"s": 24570,
"text": "\n13 Dec, 2021"
},
{
"code": null,
"e": 24827,
"s": 24598,
"text": "When working with arrays, it’s widespread to iterate through its elements and manipulate them. Traditionally this can be done using for, while or do-while loops. The forEach will call the function for each element in the array. "
},
{
"code": null,
"e": 24837,
"s": 24827,
"text": "Syntax: "
},
{
"code": null,
"e": 24875,
"s": 24837,
"text": "array.forEach( callback, thisObject )"
},
{
"code": null,
"e": 24963,
"s": 24875,
"text": "Parameter: This method accept only two parameter mentioned above and described below: "
},
{
"code": null,
"e": 25046,
"s": 24963,
"text": "callback: This allow the function to test the each and every element in the array."
},
{
"code": null,
"e": 25118,
"s": 25046,
"text": "thisObject: This will be called when the callback function is executed."
},
{
"code": null,
"e": 25579,
"s": 25118,
"text": "Return: It returns the newly created array.Without forEach() Loop: The first line declares and initialize a number array called array_1 with values [2, 3, 4, 5, 6]. To double every element of this array a for loop is used which runs from zero (because the index of an array starts from zero) to one less than the length of the array. Now on the third line, each element is extracted from the array and is multiplied by 2, hence doubling the values.Program 1: "
},
{
"code": null,
"e": 25590,
"s": 25579,
"text": "javascript"
},
{
"code": "<script>var array_1 = [2, 3, 4, 5, 6];for(var i = 0; i < array_1.length; i++) { array_1[i] *= 2;}document.write(array_1);</script>",
"e": 25724,
"s": 25590,
"text": null
},
{
"code": null,
"e": 25733,
"s": 25724,
"text": "Output: "
},
{
"code": null,
"e": 25749,
"s": 25733,
"text": "4, 6, 8, 10, 12"
},
{
"code": null,
"e": 25929,
"s": 25749,
"text": "With forEach() Loop: In ES6 a more easy to understand and use method is introduced for Arrays which is forEach(). Let’s see how it works for the same above situation. Program 2: "
},
{
"code": null,
"e": 25940,
"s": 25929,
"text": "javascript"
},
{
"code": "<script>var array_1 = [2, 3, 4, 5, 6];array_1.forEach(function(number, i) { array_1[i] *= 2;}); document.write(array_1);</script>",
"e": 26073,
"s": 25940,
"text": null
},
{
"code": null,
"e": 26082,
"s": 26073,
"text": "Output: "
},
{
"code": null,
"e": 26098,
"s": 26082,
"text": "4, 6, 8, 10, 12"
},
{
"code": null,
"e": 26385,
"s": 26098,
"text": "Same as previous, the first line declares and initialize a numbers array called array_1 with values [2, 3, 4, 5, 6]. Then on array_1 forEach method is called which iterates over the array and takes a callback function as an argument. Now the callback function accepts three arguments, "
},
{
"code": null,
"e": 26484,
"s": 26385,
"text": "currentValue – Which is a required argument, this corresponds to the value of the current element."
},
{
"code": null,
"e": 26582,
"s": 26484,
"text": "index – It is an optional argument, this is the corresponding index value of the current element."
},
{
"code": null,
"e": 26665,
"s": 26582,
"text": "array – It is also an optional argument, this is the original array, here array_1."
},
{
"code": null,
"e": 26917,
"s": 26665,
"text": "So, we make use of the second argument that is index and follow the exact same algorithm as before to double the value. Program 3: Here we just use the currentValue argument using which we printout each of the values from names array to the console. "
},
{
"code": null,
"e": 26928,
"s": 26917,
"text": "javascript"
},
{
"code": "<script>var names = ['Arpan', 'Abhishek', 'GeeksforGeeks']; names.forEach(function(name){ document.write(name + \"<br/>\");});</script>",
"e": 27065,
"s": 26928,
"text": null
},
{
"code": null,
"e": 27074,
"s": 27065,
"text": "Output: "
},
{
"code": null,
"e": 27103,
"s": 27074,
"text": "Arpan\nAbhishek\nGeeksforGeeks"
},
{
"code": null,
"e": 27120,
"s": 27103,
"text": "surinderdawra388"
},
{
"code": null,
"e": 27132,
"s": 27120,
"text": "anikakapoor"
},
{
"code": null,
"e": 27146,
"s": 27132,
"text": "JavaScript-ES"
},
{
"code": null,
"e": 27153,
"s": 27146,
"text": "Picked"
},
{
"code": null,
"e": 27164,
"s": 27153,
"text": "JavaScript"
},
{
"code": null,
"e": 27183,
"s": 27164,
"text": "Technical Scripter"
},
{
"code": null,
"e": 27200,
"s": 27183,
"text": "Web Technologies"
},
{
"code": null,
"e": 27227,
"s": 27200,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 27325,
"s": 27227,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27370,
"s": 27325,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 27431,
"s": 27370,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 27503,
"s": 27431,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 27544,
"s": 27503,
"text": "Difference Between PUT and PATCH Request"
},
{
"code": null,
"e": 27580,
"s": 27544,
"text": "Node.js | fs.writeFileSync() Method"
},
{
"code": null,
"e": 27622,
"s": 27580,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27655,
"s": 27622,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 27698,
"s": 27655,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 27760,
"s": 27698,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
}
] |
Logic Programming — Rethinking The Way We Program | by Louis de Benoist | Towards Data Science | If you’ve coded before, chances are you’re familiar with an imperative language, such as Python, Java, or C++. In this paradigm, a program is a list of instructions that modify its state when executed. Although this is the most common way of programming, it isn’t the focus of this article.
Instead, we’re going to introduce a different programming paradigm, logic programming, wherein a program is a database of relations. We will lay out the main concepts as we try to solve some simple questions in one of the most popular logic languages, prolog.
One of the most fundamental concepts in logic programming is a relation. But what exactly is a relation?
To see this more clearly, let’s look at a very simple piece of code in prolog (you can download prolog here)
mortal(X) :- man(X).
In general, relations given by “A :- B” are read as “if B then A”. The above example can be read as,“if you are man, then you are a mortal”, AKA “men are mortal”. Note that the period is just there to end the relation. Now, let’s suppose we add the following:
man(socrates).
This just states, “socrates is a man”. Now, you might think, “this is great, but what’s the point?” The reason that we define relations is that we’re then able to perform queries. In prolog, we could perform the simple query:
?- mortal(socrates).
true.
The result of the above query is that “socrates is mortal”. It used the relational database that we constructed to see that socrates is a man and, since men are mortal, then socrates must also be mortal. If we, instead, wanted to iterate through all mortal, we could have done:
?- mortal(X).
X = socrates.
In prolog, capital letters are used to represent variables.
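If the database contained several matching facts, the same query would enumerate all of them through backtracking; typing ; at the prompt asks for the next answer. For instance, assuming we had also added the fact man(plato).:

```prolog
?- mortal(X).
X = socrates ;
X = plato.
```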
To illustrate this with a more thorough example, let’s suppose we are given the following directed graph and are interested in modeling and making inferences from it.
The first, natural thing that we want to do is to somehow find a representation of the graph. The most obvious way is to represent it in terms of its connections.
arrow(1,2).arrow(1,3).arrow(3,2).arrow(3,4).arrow(4,3).
This should be quite intuitive. For example, arrow(1,2) just means that there is an arrow from vertex 1 to vertex 2.
Suppose, now, that we wish to determine whether there is a path from a given node A to node B. How could we model this using logic programming? Let’s think intuitively about the problem. There are two possible cases: either A and B are neighbors, in which case we need to check if there is an arrow from A to B. Otherwise, there is a path from A to B if there is an arrow from A to some other vertex C and a path from C to B.
The first relation can easily be written as:
is_path(A,B) :- arrow(A, B).
The second one is also fairly straightforward:
is_path(A,B) :- arrow(A, C), is_path(C, B).
In the second case, the comma is used to represent the logical “and”. For example, there is a path from 1 to 4 because there is an arrow from 1 to 3 and a path from 3 to 4.
We can go even further and figure out the path itself. We can define is_path/3 as follows (here, the /3 just indicates that there are 3 arguments):
is_path(A, B, P).
We now want to give conditions to determine when P is a path from A to B. To be clear, we want to define is_path/3 in such a way that we obtain the following result to the query:
?- is_path(1,4,[1,3,4]).
true
Let’s write is_path/3, one step at a time. The base case is quite simple: we just check whether P is [A, B] and if there is an arrow from A to B.
is_path(A, B, P) :- P = [A, B], arrow(A,B).
The other case can be done as follows:
is_path(A, B, P) :- P = [A|Tail], arrow(A, C), is_path(C, B, Tail).
Now let’s carefully look at the above code. The first thing that we do is check that P is of the form: A plus a list containing the elements after A. Then, all we do is check if there is an arrow from A to some other vertex C and recursively try to see if there the tail of P is a path from C to B.
If we apply this to the previous example, is_path(1,4,[1,3,4]), we need to check whether [1,3,4] is a path from 1 to 4. We see that P can be written [1| Tail] with Tail being [3,4]. Hence, [1,3,4] is a path from 1 to 4.
Up to now, we’ve been working on the Herbrand domain, but where prolog (and logic programming, in general) really shines is when working over finite domains, CLP(FD), or the reals, CLP(R).
Constraint logic programming over R allows you to reduce and solve systems of equations over the real numbers. In prolog, we first need to import it as follows:
:- use_module(library(clpr)).
To give a simple, straightforward, example, let’s look at the following system of equations:
Representing this in prolog is very easy:
?- {X + Y = 0, X < 3, X >= 2}.
The result of this query will be the most simplified version of the above system, in this case:
{X>=2.0, X<3.0, Y= -X}.
This can be used to reduce (and solve) equations with many different constraints.
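When the constraints pin the variables down completely, the same mechanism solves the system outright. A small sketch (the particular equations are illustrative):

```prolog
?- {X + 2*Y = 7, X - Y = 1}.
X = 3.0,
Y = 2.0.
```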
Constraint programming over finite domains is much more applicable to everyday problems, such as task scheduling, optimization, or solving puzzles (such as the n-queens problem).
Let’s introduce CLP(FD) through a simple example. We will consider the following puzzle: we need to assign integers to each letter such that
In prolog, we can write this simple program to solve the above puzzle:
:- use_module(library(clpfd)).

puzzle([V,E,R,Y] + [N,I,C,E] = [M,E,M,E,S]) :-
    Variables = [V,E,R,Y,N,I,C,M,S],
    Variables ins 0..9,
    all_different(Variables),
    (1000*V + 100*E + 10*R + Y) + (1000*N + 100*I + 10*C + E) #= (10000*M + 1000*E + 100*M + 10*E + S),
    V #\= 0, N #\= 0, M #\= 0,
    label(Variables).
We first group all of the Variables for which we need to assign numerical values. The next thing we do is specify their domain (in between 0 and 9). Then, we force them all to be different. The main part consists of placing the main constraint given by the puzzle (in CLP(FD), the syntax for the constraint “=” is “#=”). Furthermore, we make sure that V, N, and M are different than 0. The last thing that we do is label, which forces prolog to spit out individual solutions, rather than printing out the final propagated constraint.
Now, we can make the following query to obtain a solution:
?- puzzle(X).X = ([7, 6, 2, 3]+[8, 5, 4, 6]=[1, 6, 1, 6, 9])
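The reported answer is easy to verify by plugging the digits back into VERY + NICE = MEMES. The Python sketch below does exactly that (it only checks the answer above; it does not search for one):

```python
# Plugging the reported digits back into VERY + NICE = MEMES.
vals = {'V': 7, 'E': 6, 'R': 2, 'Y': 3,
        'N': 8, 'I': 5, 'C': 4, 'M': 1, 'S': 9}

def word(w):
    return int(''.join(str(vals[ch]) for ch in w))

assert len(set(vals.values())) == len(vals)          # all digits distinct
assert word('VERY') + word('NICE') == word('MEMES')  # 7623 + 8546 == 16169
print(word('VERY'), '+', word('NICE'), '=', word('MEMES'))
```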
In this tutorial, we have only touched the surface of what can be done with logic programming. There are more advanced applications in group theory and artificial intelligence (especially natural language processing), to name a few. Hopefully, you now have a better idea as to what the general idea of logic programming is and how it can be applied to a variety of problems. | [
{
"code": null,
"e": 337,
"s": 46,
"text": "If you’ve coded before, chances are you’re familiar with an imperative language, such as Python, Java, or C++. In this paradigm, a program is a list of instructions that modify its state when executed. Although this is the most common way of programming, it isn’t the focus of this article."
},
{
"code": null,
"e": 597,
"s": 337,
"text": "Instead, we’re going to introduce a different programming paradigm, logic programming, wherein a program is a database of relations. We will lay out the main concepts as we try to solve some simple questions in one of the most popular logic languages, prolog."
},
{
"code": null,
"e": 702,
"s": 597,
"text": "One of the most fundamental concepts in logic programming is a relation. But what exactly is a relation?"
},
{
"code": null,
"e": 811,
"s": 702,
"text": "To see this more clearly, let’s look at a very simple piece of code in prolog (you can download prolog here)"
},
{
"code": null,
"e": 832,
"s": 811,
"text": "mortal(X) :- man(X)."
},
{
"code": null,
"e": 1092,
"s": 832,
"text": "In general, relations given by “A :- B” are read as “if B then A”. The above example can be read as,“if you are man, then you are a mortal”, AKA “men are mortal”. Note that the period is just there to end the relation. Now, let’s suppose we add the following:"
},
{
"code": null,
"e": 1107,
"s": 1092,
"text": "man(socrates)."
},
{
"code": null,
"e": 1333,
"s": 1107,
"text": "This just states, “socrates is a man”. Now, you might think, “this is great, but what’s the point?” The reason that we define relations is that we’re then able to perform queries. In prolog, we could perform the simple query:"
},
{
"code": null,
"e": 1359,
"s": 1333,
"text": "?- mortal(socrates).true."
},
{
"code": null,
"e": 1637,
"s": 1359,
"text": "The result of the above query is that “socrates is mortal”. It used the relational database that we constructed to see that socrates is a man and, since men are mortal, then socrates must also be mortal. If we, instead, wanted to iterate through all mortal, we could have done:"
},
{
"code": null,
"e": 1664,
"s": 1637,
"text": "?- mortal(X).X = socrates."
},
{
"code": null,
"e": 1724,
"s": 1664,
"text": "In prolog, capital letters are used to represent variables."
},
{
"code": null,
"e": 1891,
"s": 1724,
"text": "To illustrate this with a more thorough example, let’s suppose we are given the following directed graph and are interested in modeling and making inferences from it."
},
{
"code": null,
"e": 2054,
"s": 1891,
"text": "The first, natural thing that we want to do is to somehow find a representation of the graph. The most obvious way is to represent it in terms of its connections."
},
{
"code": null,
"e": 2110,
"s": 2054,
"text": "arrow(1,2).arrow(1,3).arrow(3,2).arrow(3,4).arrow(4,3)."
},
{
"code": null,
"e": 2227,
"s": 2110,
"text": "This should be quite intuitive. For example, arrow(1,2) just means that there is an arrow from vertex 1 to vertex 2."
},
{
"code": null,
"e": 2653,
"s": 2227,
"text": "Suppose, now, that we wish to determine whether there is a path from a given node A to node B. How could we model this using logic programming? Let’s think intuitively about the problem. There are two possible cases: either A and B are neighbors, in which case we need to check if there is an arrow from A to B. Otherwise, there is a path from A to B if there is an arrow from A to some other vertex C and a path from C to B."
},
{
"code": null,
"e": 2698,
"s": 2653,
"text": "The first relation can easily be written as:"
},
{
"code": null,
"e": 2728,
"s": 2698,
"text": "is_path(A,B) :- arrow(A, B). "
},
{
"code": null,
"e": 2775,
"s": 2728,
"text": "The second one is also fairly straightforward:"
},
{
"code": null,
"e": 2828,
"s": 2775,
"text": "is_path(A,B) :- arrow(A, C), is_path(C, B). "
},
{
"code": null,
"e": 3001,
"s": 2828,
"text": "In the second case, the comma is used to represent the logical “and”. For example, there is a path from 1 to 4 because there is an arrow from 1 to 3 and a path from 3 to 4."
},
{
"code": null,
"e": 3149,
"s": 3001,
"text": "We can go even further and figure out the path itself. We can define is_path/3 as follows (here, the /3 just indicates that there are 3 arguments):"
},
{
"code": null,
"e": 3168,
"s": 3149,
"text": "is_path(A, B, P). "
},
{
"code": null,
"e": 3347,
"s": 3168,
"text": "We now want to give conditions to determine when P is a path from A to B. To be clear, we want to define is_path/3 in such a way that we obtain the following result to the query:"
},
{
"code": null,
"e": 3376,
"s": 3347,
"text": "?- is_path(1,4,[1,3,4]).true"
},
{
"code": null,
"e": 3522,
"s": 3376,
"text": "Let’s write is_path/3, one step at a time. The base case is quite simple: we just check whether P is [A, B] and if there is an arrow from A to B."
},
{
"code": null,
"e": 3576,
"s": 3522,
"text": "is_path(A, B, P) :- P = [A, B], arrow(A,B). "
},
{
"code": null,
"e": 3615,
"s": 3576,
"text": "The other case can be done as follows:"
},
{
"code": null,
"e": 3696,
"s": 3615,
"text": "is_path(A, B, P) :- P = [A|Tail], arrow(A, C), is_path(C, B, Tail)."
},
{
"code": null,
"e": 3995,
"s": 3696,
"text": "Now let’s carefully look at the above code. The first thing that we do is check that P is of the form: A plus a list containing the elements after A. Then, all we do is check if there is an arrow from A to some other vertex C and recursively try to see if there the tail of P is a path from C to B."
},
{
"code": null,
"e": 4215,
"s": 3995,
"text": "If we apply this to the previous example, is_path(1,4,[1,3,4]), we need to check whether [1,3,4] is a path from 1 to 4. We see that P can be written [1| Tail] with Tail being [3,4]. Hence, [1,3,4] is a path from 1 to 4."
},
{
"code": null,
"e": 4404,
"s": 4215,
"text": "Up to now, we’ve been working on the Herbrand domain, but where prolog (and logic programming, in general) really shines is when working over finite domains, CLP(FD), or the reals, CLP(R)."
},
{
"code": null,
"e": 4565,
"s": 4404,
"text": "Constraint logic programming over R allows you to reduce and solve systems of equations over the real numbers. In prolog, we first need to import it as follows:"
},
{
"code": null,
"e": 4595,
"s": 4565,
"text": ":- use_module(library(clpr))."
},
{
"code": null,
"e": 4688,
"s": 4595,
"text": "To give a simple, straightforward, example, let’s look at the following system of equations:"
},
{
"code": null,
"e": 4730,
"s": 4688,
"text": "Representing this in prolog is very easy:"
},
{
"code": null,
"e": 4761,
"s": 4730,
"text": "?- {X + Y = 0, X < 3, X >= 2}."
},
{
"code": null,
"e": 4857,
"s": 4761,
"text": "The result of this query will be the most simplified version of the above system, in this case:"
},
{
"code": null,
"e": 4881,
"s": 4857,
"text": "{X>=2.0, X<3.0, Y= -X}."
},
{
"code": null,
"e": 4963,
"s": 4881,
"text": "This can be used to reduce (and solve) equations with many different constraints."
},
{
"code": null,
"e": 5142,
"s": 4963,
"text": "Constraint programming over finite domains is much more applicable to everyday problems, such as task scheduling, optimization, or solving puzzles (such as the n-queens problem)."
},
{
"code": null,
"e": 5283,
"s": 5142,
"text": "Let’s introduce CLP(FD) through a simple example. We will consider the following puzzle: we need to assign integers to each letter such that"
},
{
"code": null,
"e": 5354,
"s": 5283,
"text": "In prolog, we can write this simple program to solve the above puzzle:"
},
{
"code": null,
"e": 5686,
"s": 5354,
"text": ":- use_module(library(clpfd)).puzzle([V,E,R,Y] + [N,I,C,E] = [M,E,M,E,S]) :- Variables = [V,E,R,Y,N,I,C,M,S], Variables ins 0..9, all_different(Variables), (1000*V + 100*E + 10*R + Y) + (1000*N + 100*I + 10*C + E) #= (10000*M + 1000*E + 100*M + 10*E + S), V #\\= 0, N #\\=0, M#\\=0, label(Variables)."
},
{
"code": null,
"e": 6220,
"s": 5686,
"text": "We first group all of the Variables for which we need to assign numerical values. The next thing we do is specify their domain (in between 0 and 9). Then, we force them all to be different. The main part consists of placing the main constraint given by the puzzle (in CLP(FD), the syntax for the constraint “=” is “#=”). Furthermore, we make sure that V, N, and M are different than 0. The last thing that we do is label, which forces prolog to spit out individual solutions, rather than printing out the final propagated constraint."
},
{
"code": null,
"e": 6279,
"s": 6220,
"text": "Now, we can make the following query to obtain a solution:"
},
{
"code": null,
"e": 6340,
"s": 6279,
"text": "?- puzzle(X).X = ([7, 6, 2, 3]+[8, 5, 4, 6]=[1, 6, 1, 6, 9])"
}
] |
Control Structure Testing - GeeksforGeeks | 07 Apr, 2020
Control structure testing is used to increase the coverage area by testing various control structures present in the program. The different types of testing performed under control structure testing are as follows-
1. Condition Testing
2. Data Flow Testing
3. Loop Testing
1. Condition Testing :Condition testing is a test case design method which ensures that the logical conditions and decision statements are free from errors. The errors present in logical conditions can be incorrect Boolean operators, missing parentheses in a Boolean expression, errors in relational operators or arithmetic expressions, and so on.
The common types of logical conditions that are tested using condition testing are-
A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic expressions and ‘op’ is a relational operator.

A simple condition, like any relational expression preceded by a NOT (~) operator. For example, (~E1), where ‘E1’ is an arithmetic expression and ‘~’ denotes the NOT operator.

A compound condition, consisting of two or more simple conditions, Boolean operators, and parentheses. For example, (E1 & E2)|(E2 & E3), where E1, E2, E3 denote arithmetic expressions and ‘&’ and ‘|’ denote the AND and OR operators.

A Boolean expression, consisting of operands and a Boolean operator like AND, OR, NOT. For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands and ‘|’ denotes the OR operator.
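One simple way to apply condition testing to a compound condition is to exercise every combination of truth values of its simple conditions. The Python sketch below does this for the compound condition (E1 & E2)|(E2 & E3) from the text; the predicate is a stand-in, since in a real program E1..E3 would be relational expressions over program variables:

```python
# Exhaustively exercising the compound condition (E1 & E2) | (E2 & E3).
from itertools import product

def compound(e1, e2, e3):
    return (e1 and e2) or (e2 and e3)

for e1, e2, e3 in product((False, True), repeat=3):
    print(e1, e2, e3, '->', compound(e1, e2, e3))
```

Eight test cases cover the full truth table here; for conditions with many operands, condition-testing strategies pick a smaller subset that still exposes operator errors.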
2. Data Flow Testing :The data flow test method chooses the test paths of a program based on the locations of the definitions and uses of the variables in the program.
The data flow test approach can be illustrated as follows: suppose each statement in a program is assigned a unique statement number, and that no function can modify its parameters or global variables. Then, for a statement with S as its statement number,
DEF (S) = {X | Statement S has a definition of X}
USE (S) = {X | Statement S has a use of X}
If statement S is an if or loop statement, then its DEF set is empty and its USE set depends on the condition of statement S. The definition of a variable X at statement S is said to be live at statement S’ if there is a path from S to statement S’ that contains no other definition of X.
A definition use (DU) chain of variable X has the form [X, S, S’], where S and S’ denote statement numbers, X is in DEF(S) and USE(S’), and the definition of X in statement S is live at statement S’.
A simple data flow test approach requires that each DU chain be covered at least once. This approach is known as the DU test approach. The DU testing does not ensure coverage of all branches of a program.
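To make DEF sets, USE sets, and DU chains concrete, the following Python sketch computes them for a toy straight-line program. The statements, variable names, and the (target, operands) representation are all hypothetical, chosen only for illustration:

```python
# A toy DEF/USE computation for a hypothetical straight-line program:
#   S1: x = a + b     S2: y = x * 2     S3: z = x + y
program = {
    1: ('x', ['a', 'b']),
    2: ('y', ['x']),
    3: ('z', ['x', 'y']),
}

def DEF(s):
    return {program[s][0]}

def USE(s):
    return set(program[s][1])

# [X, S, S'] is a DU chain when X is in DEF(S) and USE(S') and no
# statement between S and S' redefines X.
chains = [(v, s, s2)
          for s in program for v in DEF(s)
          for s2 in program if s2 > s and v in USE(s2)
          and all(v not in DEF(m) for m in program if s < m < s2)]
print(chains)  # [('x', 1, 2), ('x', 1, 3), ('y', 2, 3)]
```

Covering each of these DU chains at least once is exactly the DU test approach described above.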
However, a branch fails to be covered by DU testing only in rare cases, such as an if-then-else construct in which the then part contains no definition of any variable and the else part does not exist. Data flow testing strategies are appropriate for choosing test paths of a program containing nested if and loop statements.
3. Loop Testing :Loop testing is actually a white box testing technique. It specifically focuses on the validity of loop construction. Following are the types of loops.
Simple Loops – The following set of tests can be applied to simple loops, where n is the maximum allowable number of passes through the loop:

1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n, and n+1 times.

Concatenated Loops – If the loops are not dependent on each other, concatenated loops can be tested using the approach used for simple loops. If the loops are interdependent, they are handled using the steps for nested loops.

Nested Loops – Loops within loops are called nested loops. When testing nested loops, the number of tests increases as the level of nesting increases. The steps for testing nested loops are as follows:

1. Start with the inner loop. Set all other loops to minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops have been tested.

Unstructured Loops – This type of loop should be redesigned, whenever possible, to reflect the use of structured programming constructs.
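The simple-loop test set can be sketched as a small harness. In the Python snippet below, `loop_sum` is a made-up stand-in for the loop under test, and n is an assumed maximum pass count; the point is the choice of iteration counts, not the function itself:

```python
# Sketch of the simple-loop test set: run the loop 0, 1, 2, p (< n),
# and n-1, n, n+1 times.
def loop_sum(values):
    total = 0
    for v in values:          # the loop being exercised
        total += v
    return total

n = 5
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    data = list(range(passes))
    assert loop_sum(data) == sum(data)
    print(passes, 'passes ->', loop_sum(data))
```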
{
"code": null,
"e": 25014,
"s": 24986,
"text": "\n07 Apr, 2020"
},
{
"code": null,
"e": 25229,
"s": 25014,
"text": "Control structure testing is used to increase the coverage area by testing various control structures present in the program. The different types of testing performed under control structure testing are as follows-"
},
{
"code": null,
"e": 25289,
"s": 25229,
"text": "1. Condition Testing \n2. Data Flow Testing\n3. Loop Testing "
},
{
"code": null,
"e": 25636,
"s": 25289,
"text": "1. Condition Testing :Condition testing is a test cased design method, which ensures that the logical condition and decision statements are free from errors. The errors present in logical conditions can be incorrect boolean operators, missing parenthesis in a booleans expression, error in relational operators, arithmetic expressions, and so on."
},
{
"code": null,
"e": 25720,
"s": 25636,
"text": "The common types of logical conditions that are tested using condition testing are-"
},
{
"code": null,
"e": 26405,
"s": 25720,
"text": "A relation expression, like E1 op E2 where ‘E1’ and ‘E2’ are arithmetic expressions and ‘OP’ is an operator.A simple condition like any relational expression preceded by a NOT (~) operator.For example, (~E1) where ‘E1’ is an arithmetic expression and ‘a’ denotes NOT operator.A compound condition consists of two or more simple conditions, Boolean operator, and parenthesis.For example, (E1 & E2)|(E2 & E3) where E1, E2, E3 denote arithmetic expression and ‘&’ and ‘|’ denote AND or OR operators.A Boolean expression consists of operands and a Boolean operator like ‘AND’, OR, NOT.For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands and | denotes OR operator."
},
{
"code": null,
"e": 26514,
"s": 26405,
"text": "A relation expression, like E1 op E2 where ‘E1’ and ‘E2’ are arithmetic expressions and ‘OP’ is an operator."
},
{
"code": null,
"e": 26683,
"s": 26514,
"text": "A simple condition like any relational expression preceded by a NOT (~) operator.For example, (~E1) where ‘E1’ is an arithmetic expression and ‘a’ denotes NOT operator."
},
{
"code": null,
"e": 26904,
"s": 26683,
"text": "A compound condition consists of two or more simple conditions, Boolean operator, and parenthesis.For example, (E1 & E2)|(E2 & E3) where E1, E2, E3 denote arithmetic expression and ‘&’ and ‘|’ denote AND or OR operators."
},
{
"code": null,
"e": 27093,
"s": 26904,
"text": "A Boolean expression consists of operands and a Boolean operator like ‘AND’, OR, NOT.For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands and | denotes OR operator."
},
{
"code": null,
"e": 27261,
"s": 27093,
"text": "2. Data Flow Testing :The data flow test method chooses the test path of a program based on the locations of the definitions and uses all the variables in the program."
},
{
"code": null,
"e": 27504,
"s": 27261,
"text": "The data flow test approach is depicted as follows suppose each statement in a program is assigned a unique statement number and that theme function cannot modify its parameters or global variables.For example, with S as its statement number."
},
{
"code": null,
"e": 27598,
"s": 27504,
"text": "DEF (S) = {X | Statement S has a definition of X}\nUSE (S) = {X | Statement S has a use of X} "
},
{
"code": null,
"e": 27891,
"s": 27598,
"text": "If statement S is an if loop statement, them its DEF set is empty and its USE set depends on the state of statement S. The definition of the variable X at statement S is called the line of statement S’ if the statement is any way from S to statement S’ then there is no other definition of X."
},
{
"code": null,
"e": 28091,
"s": 27891,
"text": "A definition use (DU) chain of variable X has the form [X, S, S’], where S and S’ denote statement numbers, X is in DEF(S) and USE(S’), and the definition of X in statement S is line at statement S’."
},
{
"code": null,
"e": 28296,
"s": 28091,
"text": "A simple data flow test approach requires that each DU chain be covered at least once. This approach is known as the DU test approach. The DU testing does not ensure coverage of all branches of a program."
},
{
"code": null,
"e": 28641,
"s": 28296,
"text": "However, a branch is not guaranteed to be covered by DU testing only in rar cases such as then in which the other construct does not have any certainty of any variable in its later part and the other part is not present. Data flow testing strategies are appropriate for choosing test paths of a program containing nested if and loop statements."
},
{
"code": null,
"e": 28809,
"s": 28641,
"text": "3. Loop Testing :Loop testing is actually a white box testing technique. It specifically focuses on the validity of loop construction.Following are the types of loops."
},
{
"code": null,
"e": 29807,
"s": 28809,
"text": "Simple Loop – The following set of test can be applied to simple loops, where the maximum allowable number through the loop is n.Skip the entire loop.Traverse the loop only once.Traverse the loop two times.Make p passes through the loop where p<n.Traverse the loop n-1, n, n+1 times.Concatenated Loops – If loops are not dependent on each other, contact loops can be tested using the approach used in simple loops. if the loops are interdependent, the steps are followed in nested loops.Nested Loops – Loops within loops are called as nested loops. when testing nested loops, the number of tested increases as level nesting increases.The following steps for testing nested loops are as follows-Start with inner loop. set all other loops to minimum values.Conduct simple loop testing on inner loop.Work outwards.Continue until all loops tested.Unstructured loops – This type of loops should be redesigned, whenever possible, to reflect the use of unstructured the structured programming constructs."
},
{
"code": null,
"e": 30091,
"s": 29807,
"text": "Simple Loop – The following set of test can be applied to simple loops, where the maximum allowable number through the loop is n.Skip the entire loop.Traverse the loop only once.Traverse the loop two times.Make p passes through the loop where p<n.Traverse the loop n-1, n, n+1 times."
},
{
"code": null,
"e": 30246,
"s": 30091,
"text": "Skip the entire loop.Traverse the loop only once.Traverse the loop two times.Make p passes through the loop where p<n.Traverse the loop n-1, n, n+1 times."
},
{
"code": null,
"e": 30268,
"s": 30246,
"text": "Skip the entire loop."
},
{
"code": null,
"e": 30297,
"s": 30268,
"text": "Traverse the loop only once."
},
{
"code": null,
"e": 30326,
"s": 30297,
"text": "Traverse the loop two times."
},
{
"code": null,
"e": 30368,
"s": 30326,
"text": "Make p passes through the loop where p<n."
},
{
"code": null,
"e": 30405,
"s": 30368,
"text": "Traverse the loop n-1, n, n+1 times."
},
{
"code": null,
"e": 30610,
"s": 30405,
"text": "Concatenated Loops – If loops are not dependent on each other, contact loops can be tested using the approach used in simple loops. if the loops are interdependent, the steps are followed in nested loops."
},
{
"code": null,
"e": 30967,
"s": 30610,
"text": "Nested Loops – Loops within loops are called as nested loops. when testing nested loops, the number of tested increases as level nesting increases.The following steps for testing nested loops are as follows-Start with inner loop. set all other loops to minimum values.Conduct simple loop testing on inner loop.Work outwards.Continue until all loops tested."
},
{
"code": null,
"e": 31117,
"s": 30967,
"text": "Start with inner loop. set all other loops to minimum values.Conduct simple loop testing on inner loop.Work outwards.Continue until all loops tested."
},
{
"code": null,
"e": 31179,
"s": 31117,
"text": "Start with inner loop. set all other loops to minimum values."
},
{
"code": null,
"e": 31222,
"s": 31179,
"text": "Conduct simple loop testing on inner loop."
},
{
"code": null,
"e": 31237,
"s": 31222,
"text": "Work outwards."
},
{
"code": null,
"e": 31270,
"s": 31237,
"text": "Continue until all loops tested."
},
{
"code": null,
"e": 31425,
"s": 31270,
"text": "Unstructured loops – This type of loops should be redesigned, whenever possible, to reflect the use of unstructured the structured programming constructs."
},
{
"code": null,
"e": 31446,
"s": 31425,
"text": "Software Engineering"
},
{
"code": null,
"e": 31462,
"s": 31446,
"text": "Write From Home"
},
{
"code": null,
"e": 31560,
"s": 31462,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31592,
"s": 31560,
"text": "What is DFD(Data Flow Diagram)?"
},
{
"code": null,
"e": 31633,
"s": 31592,
"text": "Software Engineering | Black box testing"
},
{
"code": null,
"e": 31667,
"s": 31633,
"text": "DFD for Library Management System"
},
{
"code": null,
"e": 31714,
"s": 31667,
"text": "Software Engineering | Software Design Process"
},
{
"code": null,
"e": 31729,
"s": 31714,
"text": "System Testing"
},
{
"code": null,
"e": 31765,
"s": 31729,
"text": "Convert integer to string in Python"
},
{
"code": null,
"e": 31801,
"s": 31765,
"text": "Convert string to integer in Python"
},
{
"code": null,
"e": 31817,
"s": 31801,
"text": "Python infinity"
},
{
"code": null,
"e": 31878,
"s": 31817,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
}
] |
Rexx - Decision Making | Decision making structures require that the programmer specify one or more conditions to be evaluated or tested by the program.
The following diagram shows the general form of a typical decision-making structure found in most of the programming languages.
There is a statement or statements to be executed if the condition is determined to be true, and optionally, other statements to be executed if the condition is determined to be false.
Let’s look at the various decision-making statements available in Rexx.
The first decision-making statement is the if statement. An if statement consists of a Boolean expression followed by one or more statements.
The next decision-making statement is the if-else statement. An if statement can be followed by an optional else statement, which executes when the Boolean expression is false.
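Although no standalone example is shown for the plain if-else form at this point, a minimal sketch in the same style as the nested-if example later in this chapter could look like this (the value of i and the messages are invented for illustration):

```
/* Main program */
i = 3
if (i < 10) then
   do
      say "i is less than 10"
   end
else
   do
      say "i is not less than 10"
   end
```

Since i is 3, the condition is true, so the then branch runs and prints "i is less than 10".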
Sometimes there is a requirement to have multiple if statements embedded inside each other, as is possible in other programming languages. In Rexx also this is possible.
if (condition1) then
do
#statement1
end
else
if (condition2) then
do
#statement2
end
The flow diagram of nested if statements is as follows −
Let’s take an example of nested if statement −
/* Main program */
i = 50
if (i < 10) then
do
say "i is less than 10"
end
else
if (i < 7) then
do
say "i is less than 7"
end
else
do
say "i is greater than 10"
end
The output of the above program will be −
i is greater than 10
Rexx offers the select statement which can be used to execute expressions based on the output of the select statement.
The general form of this statement is −
select
when (condition#1) then
statement#1
when (condition#2) then
statement#2
otherwise
defaultstatement
end
The general working of this statement is as follows −
The select statement has a range of when statements to evaluate different conditions.

Each when clause has a different condition which needs to be evaluated and the subsequent statement is executed.

The otherwise statement is used to run any default statement if the previous when conditions do not evaluate to true.
The flow diagram of the select statement is as follows
The following program is an example of the case statement in Rexx.
/* Main program */
i = 50
select
when(i <= 5) then
say "i is less than 5"
when(i <= 10) then
say "i is less than 10"
otherwise
say "i is greater than 10"
end
The output of the above program would be −
i is greater than 10
{
"code": null,
"e": 2467,
"s": 2339,
"text": "Decision making structures require that the programmer specify one or more conditions to be evaluated or tested by the program."
},
{
"code": null,
"e": 2595,
"s": 2467,
"text": "The following diagram shows the general form of a typical decision-making structure found in most of the programming languages."
},
{
"code": null,
"e": 2780,
"s": 2595,
"text": "There is a statement or statements to be executed if the condition is determined to be true, and optionally, other statements to be executed if the condition is determined to be false."
},
{
"code": null,
"e": 2852,
"s": 2780,
"text": "Let’s look at the various decision-making statements available in Rexx."
},
{
"code": null,
"e": 2994,
"s": 2852,
"text": "The first decision-making statement is the if statement. An if statement consists of a Boolean expression followed by one or more statements."
},
{
"code": null,
"e": 3171,
"s": 2994,
"text": "The next decision-making statement is the if-else statement. An if statement can be followed by an optional else statement, which executes when the Boolean expression is false."
},
{
"code": null,
"e": 3341,
"s": 3171,
"text": "Sometimes there is a requirement to have multiple if statements embedded inside each other, as is possible in other programming languages. In Rexx also this is possible."
},
{
"code": null,
"e": 3465,
"s": 3341,
"text": "if (condition1) then \n do \n #statement1 \n end \nelse \n if (condition2) then \n do \n #statement2 \n end\n"
},
{
"code": null,
"e": 3522,
"s": 3465,
"text": "The flow diagram of nested if statements is as follows −"
},
{
"code": null,
"e": 3569,
"s": 3522,
"text": "Let’s take an example of nested if statement −"
},
{
"code": null,
"e": 3784,
"s": 3569,
"text": "/* Main program */ \ni = 50 \nif (i < 10) then \n do \n say \"i is less than 10\" \n end \nelse \nif (i < 7) then \n do \n say \"i is less than 7\" \n end \nelse \n do \n say \"i is greater than 10\" \n end "
},
{
"code": null,
"e": 3826,
"s": 3784,
"text": "The output of the above program will be −"
},
{
"code": null,
"e": 3849,
"s": 3826,
"text": "i is greater than 10 \n"
},
{
"code": null,
"e": 3968,
"s": 3849,
"text": "Rexx offers the select statement which can be used to execute expressions based on the output of the select statement."
},
{
"code": null,
"e": 4008,
"s": 3968,
"text": "The general form of this statement is −"
},
{
"code": null,
"e": 4129,
"s": 4008,
"text": "select \nwhen (condition#1) then \nstatement#1 \n\nwhen (condition#2) then \nstatement#2 \notherwise \n\ndefaultstatement \nend \n"
},
{
"code": null,
"e": 4183,
"s": 4129,
"text": "The general working of this statement is as follows −"
},
{
"code": null,
"e": 4269,
"s": 4183,
"text": "The select statement has a range of when statements to evaluate different conditions."
},
{
"code": null,
"e": 4355,
"s": 4269,
"text": "The select statement has a range of when statements to evaluate different conditions."
},
{
"code": null,
"e": 4468,
"s": 4355,
"text": "Each when clause has a different condition which needs to be evaluated and the subsequent statement is executed."
},
{
"code": null,
"e": 4581,
"s": 4468,
"text": "Each when clause has a different condition which needs to be evaluated and the subsequent statement is executed."
},
{
"code": null,
"e": 4699,
"s": 4581,
"text": "The otherwise statement is used to run any default statement if the previous when conditions do not evaluate to true."
},
{
"code": null,
"e": 4817,
"s": 4699,
"text": "The otherwise statement is used to run any default statement if the previous when conditions do not evaluate to true."
},
{
"code": null,
"e": 4872,
"s": 4817,
"text": "The flow diagram of the select statement is as follows"
},
{
"code": null,
"e": 4939,
"s": 4872,
"text": "The following program is an example of the case statement in Rexx."
},
{
"code": null,
"e": 5108,
"s": 4939,
"text": "/* Main program */ \ni = 50 \nselect \nwhen(i <= 5) then \nsay \"i is less than 5\" \n\nwhen(i <= 10) then \nsay \"i is less than 10\" \n\notherwise \nsay \"i is greater than 10\" \nend"
},
{
"code": null,
"e": 5151,
"s": 5108,
"text": "The output of the above program would be −"
},
{
"code": null,
"e": 5174,
"s": 5151,
"text": "i is greater than 10 \n"
},
{
"code": null,
"e": 5181,
"s": 5174,
"text": " Print"
},
{
"code": null,
"e": 5192,
"s": 5181,
"text": " Add Notes"
}
] |
Spring Boot Actuator Database Health Check - onlinetutorialspoint |
Here I am going to show how to check database health using the Spring Boot Actuator health endpoint.
Include the spring boot actuator dependency in pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
We can do this check in two different ways.
Enabling the management config property in application.properties/yml file.
management.endpoints.health.sensitive=false
management.health.db.enabled=true
management.health.defaults.enabled=true
management.endpoint.health.show-details=always
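If the project uses application.yml instead of application.properties, the same health settings can be expressed in YAML. This is a sketch of the equivalent keys for the last three properties above (assuming a Spring Boot 2.x property layout):

```yaml
management:
  health:
    db:
      enabled: true
    defaults:
      enabled: true
  endpoint:
    health:
      show-details: always
```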
Run the Application and access actuator health endpoint.
http://localhost:8080/actuator/health
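The endpoint returns a JSON document. The exact fields depend on the Spring Boot version and the registered indicators (older versions nest them under "details" rather than "components"), but with show-details enabled the response looks roughly like this illustrative sketch:

```json
{
  "status": "UP",
  "components": {
    "db": {
      "status": "UP",
      "details": {
        "database": "MySQL"
      }
    }
  }
}
```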
Creating the custom Actuator service.
Create a DbHealthCheck class that implements HealthIndicator and overrides its health() method.
Using JdbcTemplate, execute a sample SQL query to check whether the database is connected or not.
package com.onlinetutorialspoint.actuator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.SingleColumnRowMapper;
import org.springframework.stereotype.Component;
import java.util.List;
@Component
public class DbHealthCheck implements HealthIndicator {
@Autowired
JdbcTemplate template;
@Override
public Health health() {
int errorCode = check(); // perform some specific health check
if (errorCode != 1) {
return Health.down().withDetail("Error Code", 500).build();
}
return Health.up().build();
}
public int check(){
List<Object> results = template.query("select 1 from dual",
new SingleColumnRowMapper<>());
return results.size();
}
}
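The JdbcTemplate used above is auto-configured by Spring Boot from the datasource settings in application.properties. A minimal sketch of those settings follows; the URL, username and password are placeholders for illustration, not values from this article:

```properties
spring.datasource.url=jdbc:mysql://localhost:3306/testdb
spring.datasource.username=root
spring.datasource.password=secret
```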
Run the application.
http://localhost:8080/actuator/health
References:
Spring Boot Production ready endpoints
Spring Boot Actuator
Happy Learning 🙂
Spring Boot EhCache Example
How to Get All Spring Beans Details Loaded in ICO
Spring Boot Hazelcast Cache Example
Spring Boot JNDI Configuration – External Tomcat
Simple Spring Boot Example
Spring Boot How to change the Tomcat to Jetty Server
Spring Boot Hibernate Integration Example
How to Send Mail Spring Boot Example
Spring Boot Apache ActiveMq In Memory Example
Spring Boot JPA Integration Example
Spring Boot Actuator Example
Spring Boot Batch Example Csv to Database
Spring Boot H2 Database + JDBC Template Example
Spring Boot Security MySQL Database Integration Example
Spring Boot MongoDB + Spring Data Example
ravi
May 25, 2020 at 1:43 pm - Reply
where is the controller , datapase details of properties file
| [
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 495,
"s": 398,
"text": "Here I am going to show how to check database health using Spring boot actuator health endpoint."
},
{
"code": null,
"e": 550,
"s": 495,
"text": "Include the spring boot actuator dependency in pom.xml"
},
{
"code": null,
"e": 685,
"s": 550,
"text": "<dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-actuator</artifactId>\n</dependency>"
},
{
"code": null,
"e": 734,
"s": 685,
"text": "We can do this check in two different solutions."
},
{
"code": null,
"e": 810,
"s": 734,
"text": "Enabling the management config property in application.properties/yml file."
},
{
"code": null,
"e": 975,
"s": 810,
"text": "management.endpoints.health.sensitive=false\nmanagement.health.db.enabled=true\nmanagement.health.defaults.enabled=true\nmanagement.endpoint.health.show-details=always"
},
{
"code": null,
"e": 1034,
"s": 977,
"text": "Run the Application and access actuator health endpoint."
},
{
"code": null,
"e": 1072,
"s": 1034,
"text": "http://localhost:8080/actuator/health"
},
{
"code": null,
"e": 1110,
"s": 1072,
"text": "Creating the custom Actuator service."
},
{
"code": null,
"e": 1198,
"s": 1110,
"text": "Creating DbHealthCheck class implementing HealthIndicator and override health() method."
},
{
"code": null,
"e": 1295,
"s": 1198,
"text": "Using JdbcTemplate, execute the sample SQL query to check whether the database connected or not."
},
{
"code": null,
"e": 2264,
"s": 1295,
"text": "package com.onlinetutorialspoint.actuator;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.actuate.health.Health;\nimport org.springframework.boot.actuate.health.HealthIndicator;\nimport org.springframework.jdbc.core.JdbcTemplate;\nimport org.springframework.jdbc.core.SingleColumnRowMapper;\nimport org.springframework.stereotype.Component;\n\nimport java.util.List;\n\n@Component\npublic class DbHealthCheck implements HealthIndicator {\n @Autowired\n JdbcTemplate template;\n @Override\n public Health health() {\n int errorCode = check(); // perform some specific health check\n if (errorCode != 1) {\n return Health.down().withDetail(\"Error Code\", 500).build();\n }\n return Health.up().build();\n }\n\n public int check(){\n List<Object> results = template.query(\"select 1 from dual\",\n new SingleColumnRowMapper<>());\n return results.size();\n }\n}\n"
},
{
"code": null,
"e": 2285,
"s": 2264,
"text": "Run the application."
},
{
"code": null,
"e": 2323,
"s": 2285,
"text": "http://localhost:8080/actuator/health"
},
{
"code": null,
"e": 2335,
"s": 2323,
"text": "References:"
},
{
"code": null,
"e": 2374,
"s": 2335,
"text": "Spring Boot Production ready endpoints"
},
{
"code": null,
"e": 2395,
"s": 2374,
"text": "Spring Boot Actuator"
},
{
"code": null,
"e": 2412,
"s": 2395,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 3035,
"s": 2412,
"text": "\nSpring Boot EhCache Example\nHow to Get All Spring Beans Details Loaded in ICO\nSpring Boot Hazelcast Cache Example\nSpring Boot JNDI Configuration – External Tomcat\nSimple Spring Boot Example\nSpring Boot How to change the Tomcat to Jetty Server\nSpring Boot Hibernate Integration Example\nHow to Send Mail Spring Boot Example\nSpring Boot Apache ActiveMq In Memory Example\nSpring Boot JPA Integration Example\nSpring Boot Actuator Example\nSpring Boot Batch Example Csv to Database\nSpring Boot H2 Database + JDBC Template Example\nSpring Boot Security MySQL Database Integration Example\nSpring Boot MongoDB + Spring Data Example\n"
},
{
"code": null,
"e": 3063,
"s": 3035,
"text": "Spring Boot EhCache Example"
},
{
"code": null,
"e": 3113,
"s": 3063,
"text": "How to Get All Spring Beans Details Loaded in ICO"
},
{
"code": null,
"e": 3149,
"s": 3113,
"text": "Spring Boot Hazelcast Cache Example"
},
{
"code": null,
"e": 3198,
"s": 3149,
"text": "Spring Boot JNDI Configuration – External Tomcat"
},
{
"code": null,
"e": 3225,
"s": 3198,
"text": "Simple Spring Boot Example"
},
{
"code": null,
"e": 3278,
"s": 3225,
"text": "Spring Boot How to change the Tomcat to Jetty Server"
},
{
"code": null,
"e": 3320,
"s": 3278,
"text": "Spring Boot Hibernate Integration Example"
},
{
"code": null,
"e": 3357,
"s": 3320,
"text": "How to Send Mail Spring Boot Example"
},
{
"code": null,
"e": 3403,
"s": 3357,
"text": "Spring Boot Apache ActiveMq In Memory Example"
},
{
"code": null,
"e": 3439,
"s": 3403,
"text": "Spring Boot JPA Integration Example"
},
{
"code": null,
"e": 3468,
"s": 3439,
"text": "Spring Boot Actuator Example"
},
{
"code": null,
"e": 3510,
"s": 3468,
"text": "Spring Boot Batch Example Csv to Database"
},
{
"code": null,
"e": 3558,
"s": 3510,
"text": "Spring Boot H2 Database + JDBC Template Example"
},
{
"code": null,
"e": 3614,
"s": 3558,
"text": "Spring Boot Security MySQL Database Integration Example"
},
{
"code": null,
"e": 3656,
"s": 3614,
"text": "Spring Boot MongoDB + Spring Data Example"
},
{
"code": null,
"e": 3768,
"s": 3656,
"text": "\n\n\n\n\n\nravi\nMay 25, 2020 at 1:43 pm - Reply \n\nwhere is the controller , datapase details of properties file\n\n\n\n\n"
},
{
"code": null,
"e": 3878,
"s": 3768,
"text": "\n\n\n\n\nravi\nMay 25, 2020 at 1:43 pm - Reply \n\nwhere is the controller , datapase details of properties file\n\n\n\n"
},
{
"code": null,
"e": 3940,
"s": 3878,
"text": "where is the controller , datapase details of properties file"
},
{
"code": null,
"e": 3946,
"s": 3944,
"text": "Δ"
},
{
"code": null,
"e": 3973,
"s": 3946,
"text": " Spring Boot – Hello World"
},
{
"code": null,
"e": 4000,
"s": 3973,
"text": " Spring Boot – MVC Example"
},
{
"code": null,
"e": 4034,
"s": 4000,
"text": " Spring Boot- Change Context Path"
},
{
"code": null,
"e": 4075,
"s": 4034,
"text": " Spring Boot – Change Tomcat Port Number"
},
{
"code": null,
"e": 4120,
"s": 4075,
"text": " Spring Boot – Change Tomcat to Jetty Server"
},
{
"code": null,
"e": 4158,
"s": 4120,
"text": " Spring Boot – Tomcat session timeout"
},
{
"code": null,
"e": 4192,
"s": 4158,
"text": " Spring Boot – Enable Random Port"
},
{
"code": null,
"e": 4223,
"s": 4192,
"text": " Spring Boot – Properties File"
},
{
"code": null,
"e": 4257,
"s": 4223,
"text": " Spring Boot – Beans Lazy Loading"
},
{
"code": null,
"e": 4290,
"s": 4257,
"text": " Spring Boot – Set Favicon image"
},
{
"code": null,
"e": 4323,
"s": 4290,
"text": " Spring Boot – Set Custom Banner"
},
{
"code": null,
"e": 4363,
"s": 4323,
"text": " Spring Boot – Set Application TimeZone"
},
{
"code": null,
"e": 4388,
"s": 4363,
"text": " Spring Boot – Send Mail"
},
{
"code": null,
"e": 4419,
"s": 4388,
"text": " Spring Boot – FileUpload Ajax"
},
{
"code": null,
"e": 4443,
"s": 4419,
"text": " Spring Boot – Actuator"
},
{
"code": null,
"e": 4489,
"s": 4443,
"text": " Spring Boot – Actuator Database Health Check"
},
{
"code": null,
"e": 4512,
"s": 4489,
"text": " Spring Boot – Swagger"
},
{
"code": null,
"e": 4539,
"s": 4512,
"text": " Spring Boot – Enable CORS"
},
{
"code": null,
"e": 4585,
"s": 4539,
"text": " Spring Boot – External Apache ActiveMQ Setup"
},
{
"code": null,
"e": 4625,
"s": 4585,
"text": " Spring Boot – Inmemory Apache ActiveMq"
},
{
"code": null,
"e": 4654,
"s": 4625,
"text": " Spring Boot – Scheduler Job"
},
{
"code": null,
"e": 4688,
"s": 4654,
"text": " Spring Boot – Exception Handling"
},
{
"code": null,
"e": 4718,
"s": 4688,
"text": " Spring Boot – Hibernate CRUD"
},
{
"code": null,
"e": 4754,
"s": 4718,
"text": " Spring Boot – JPA Integration CRUD"
},
{
"code": null,
"e": 4787,
"s": 4754,
"text": " Spring Boot – JPA DataRest CRUD"
},
{
"code": null,
"e": 4820,
"s": 4787,
"text": " Spring Boot – JdbcTemplate CRUD"
},
{
"code": null,
"e": 4864,
"s": 4820,
"text": " Spring Boot – Multiple Data Sources Config"
},
{
"code": null,
"e": 4898,
"s": 4864,
"text": " Spring Boot – JNDI Configuration"
},
{
"code": null,
"e": 4930,
"s": 4898,
"text": " Spring Boot – H2 Database CRUD"
},
{
"code": null,
"e": 4958,
"s": 4930,
"text": " Spring Boot – MongoDB CRUD"
},
{
"code": null,
"e": 4989,
"s": 4958,
"text": " Spring Boot – Redis Data CRUD"
},
{
"code": null,
"e": 5030,
"s": 4989,
"text": " Spring Boot – MVC Login Form Validation"
},
{
"code": null,
"e": 5064,
"s": 5030,
"text": " Spring Boot – Custom Error Pages"
},
{
"code": null,
"e": 5089,
"s": 5064,
"text": " Spring Boot – iText PDF"
},
{
"code": null,
"e": 5123,
"s": 5089,
"text": " Spring Boot – Enable SSL (HTTPs)"
},
{
"code": null,
"e": 5159,
"s": 5123,
"text": " Spring Boot – Basic Authentication"
},
{
"code": null,
"e": 5205,
"s": 5159,
"text": " Spring Boot – In Memory Basic Authentication"
},
{
"code": null,
"e": 5256,
"s": 5205,
"text": " Spring Boot – Security MySQL Database Integration"
},
{
"code": null,
"e": 5298,
"s": 5256,
"text": " Spring Boot – Redis Cache – Redis Server"
},
{
"code": null,
"e": 5329,
"s": 5298,
"text": " Spring Boot – Hazelcast Cache"
},
{
"code": null,
"e": 5352,
"s": 5329,
"text": " Spring Boot – EhCache"
},
{
"code": null,
"e": 5382,
"s": 5352,
"text": " Spring Boot – Kafka Producer"
},
{
"code": null,
"e": 5412,
"s": 5382,
"text": " Spring Boot – Kafka Consumer"
},
{
"code": null,
"e": 5461,
"s": 5412,
"text": " Spring Boot – Kafka JSON Message to Kafka Topic"
},
{
"code": null,
"e": 5495,
"s": 5461,
"text": " Spring Boot – RabbitMQ Publisher"
},
{
"code": null,
"e": 5528,
"s": 5495,
"text": " Spring Boot – RabbitMQ Consumer"
},
{
"code": null,
"e": 5557,
"s": 5528,
"text": " Spring Boot – SOAP Consumer"
},
{
"code": null,
"e": 5589,
"s": 5557,
"text": " Spring Boot – Soap WebServices"
},
{
"code": null,
"e": 5626,
"s": 5589,
"text": " Spring Boot – Batch Csv to Database"
},
{
"code": null,
"e": 5655,
"s": 5626,
"text": " Spring Boot – Eureka Server"
},
{
"code": null,
"e": 5684,
"s": 5655,
"text": " Spring Boot – MockMvc JUnit"
}
] |
Detect Specific Color From Image using Python OpenCV |
In a previous article, we showed how to detect a specific color (say, blue) in a captured video (webcam or video file); in this article, you will learn to detect a specific color in an image using Python OpenCV. Color detection is important for recognizing objects, and it is also used as a tool in various image-editing and drawing applications.
The OpenCV module is being used for a very wide range of image processing and analysis, like Object Identification, color detection, optical character recognition, photo editing, and so on. It provides lots of functions for image processing.
Color detection is in high demand in computer vision. A color detection algorithm identifies the pixels in an image that match a specified color or color range. The color of the detected pixels can then be changed to distinguish them from the rest of the image. This process can be done easily with OpenCV.
In this article, we will import two modules - cv2 and numpy. After this, we will load the image using imread and convert the color space from BGR to HSV using cv2.cvtColor(), like the following -
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
Next, we will define the lower and upper HSV bounds for the color (blue) in two variables, i.e.-
light_blue = np.array([110,50,50])
dark_blue = np.array([130,255,255])
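If you need bounds for a different color, one way to derive them is to convert a reference BGR color to OpenCV's HSV scale (hue in 0-179, saturation and value in 0-255) and widen the hue by a margin. The following is a small sketch using only the Python standard library; the ±10 hue margin is a common heuristic, not something from this article:

```python
import colorsys

def bgr_to_opencv_hsv(b, g, r):
    # colorsys works with RGB floats in [0, 1]
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # OpenCV scales hue to [0, 179] and saturation/value to [0, 255]
    return int(h * 179), int(s * 255), int(v * 255)

h, s, v = bgr_to_opencv_hsv(255, 0, 0)  # pure blue in BGR order
lower = (max(h - 10, 0), 50, 50)
upper = (min(h + 10, 179), 255, 255)
print(lower, upper)  # close to the (110,50,50)-(130,255,255) range used here
```

The tuples printed for pure blue land close to the hard-coded range above, which is why that range captures most blue shades in the example image.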
OpenCV provides the cv2.inRange() method to set the color range. It accepts three parameters: the source image, and the lower and upper color boundaries of the threshold region. This function returns a binary mask, which we then pass to the bitwise AND operator.
mask = cv2.inRange(hsv, light_blue, dark_blue)
output = cv2.bitwise_and(image,image, mask= mask)
Above, we have explained the code flow of color detection. Here is the complete code -
import cv2
import numpy as np
image = cv2.imread("blue_flowers.jpg")
# Convert BGR to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# define blue color range
light_blue = np.array([110,50,50])
dark_blue = np.array([130,255,255])
# Threshold the HSV image to get only blue colors
mask = cv2.inRange(hsv, light_blue, dark_blue)
# Bitwise-AND mask and original image
output = cv2.bitwise_and(image,image, mask= mask)
cv2.imshow("Color Detected", np.hstack((image,output)))
cv2.waitKey(0)
cv2.destroyAllWindows()
The above code returns the following output -
| [
eTutorialsPoint©Copyright 2016-2022. All Rights Reserved. | [
{
"code": null,
"e": 112,
"s": 90,
"text": "Theory of Computation"
},
{
"code": null,
"e": 473,
"s": 112,
"text": "In this previous article, we have mentioned to detect a specific color (say blue) from a captured video (webcam or video file) and in this article, you will learn to detect a specific color from an image using the Python OpenCV. Color detection is important to recognize objects, it is also utilized as a tool in various image editing and drawing applications."
},
{
"code": null,
"e": 715,
"s": 473,
"text": "The OpenCV module is being used for a very wide range of image processing and analysis, like Object Identification, color detection, optical character recognition, photo editing, and so on. It provides lots of functions for image processing."
},
{
"code": null,
"e": 1029,
"s": 715,
"text": "The color detection process is mostly in demand in computer vision. A color detection algorithm identifies pixels in an image that match a specified color or color range. The color of detected pixels can then be changed to distinguish them from the rest of the image. This process can be easily done using OpenCV."
},
{
"code": null,
"e": 1230,
"s": 1029,
"text": "In this article, we will import two modules - cv2 and numpy. After this, we will load the image using imread, we will convert the color-space from BGR to HSV using cv2.cvtColor(), like the following -"
},
{
"code": null,
"e": 1275,
"s": 1230,
"text": "hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)"
},
{
"code": null,
"e": 1362,
"s": 1275,
"text": "Next, we will define the lower and upper color(blue) rgb value in two variables, i.e.-"
},
{
"code": null,
"e": 1433,
"s": 1362,
"text": "light_blue = np.array([110,50,50])\ndark_blue = np.array([130,255,255])"
},
{
"code": null,
"e": 1712,
"s": 1433,
"text": "The OpenCV provides cv2.inRange() method to set the color range. It accepts three parameters, the source image in the first parameter and lower and upper color boundary of the threshold region. This function returns a binary mask, which we will pass in the bitwise AND operator."
},
{
"code": null,
"e": 1810,
"s": 1712,
"text": "mask = cv2.inRange(hsv, light_blue, dark_blue)\n\noutput = cv2.bitwise_and(image,image, mask= mask)"
},
{
"code": null,
"e": 1897,
"s": 1810,
"text": "Above, we have explained the code flow of color detection. Here is the complete code -"
},
{
"code": null,
"e": 2435,
"s": 1897,
"text": "import cv2\nimport numpy as np\n\n\nimage = cv2.imread(\"blue_flowers.jpg\") \n# Convert BGR to HSV\nhsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n# define blue color range\nlight_blue = np.array([110,50,50])\ndark_blue = np.array([130,255,255])\n\n# Threshold the HSV image to get only blue colors\nmask = cv2.inRange(hsv, light_blue, dark_blue)\n\n# Bitwise-AND mask and original image\noutput = cv2.bitwise_and(image,image, mask= mask)\n \ncv2.imshow(\"Color Detected\", np.hstack((image,output)))\ncv2.waitKey(0)\ncv2.destroyAllWindows()"
},
{
"code": null,
"e": 2481,
"s": 2435,
"text": "The above code returns the following output -"
},
{
"code": null,
"e": 2627,
"s": 2481,
"text": "\nJan 3\nStateful vs Stateless\nA Stateful application recalls explicit subtleties of a client like profile, inclinations, and client activities...\n"
},
{
"code": null,
"e": 2743,
"s": 2627,
"text": "A Stateful application recalls explicit subtleties of a client like profile, inclinations, and client activities..."
},
{
"code": null,
"e": 2896,
"s": 2743,
"text": "\nDec 29\nBest programming language to learn in 2021\nIn this article, we have mentioned the analyzed results of the best programming language for 2021...\n"
},
{
"code": null,
"e": 2997,
"s": 2896,
"text": "In this article, we have mentioned the analyzed results of the best programming language for 2021..."
},
{
"code": null,
"e": 3136,
"s": 2997,
"text": "\nDec 20\nHow is Python best for mobile app development?\nPython has a set of useful Libraries and Packages that minimize the use of code...\n"
},
{
"code": null,
"e": 3219,
"s": 3136,
"text": "Python has a set of useful Libraries and Packages that minimize the use of code..."
},
{
"code": null,
"e": 3385,
"s": 3219,
"text": "\nJuly 18\nLearn all about Emoji\nIn this article, we have mentioned all about emojis. It's invention, world emoji day, emojicode programming language and much more...\n"
},
{
"code": null,
"e": 3519,
"s": 3385,
"text": "In this article, we have mentioned all about emojis. It's invention, world emoji day, emojicode programming language and much more..."
},
{
"code": null,
"e": 3686,
"s": 3519,
"text": "\nJan 10\nData Science Recruitment of Freshers\nIn this article, we have mentioned about the recruitment of data science. Data Science is a buzz for every technician...\n"
},
{
"code": null,
"e": 3807,
"s": 3686,
"text": "In this article, we have mentioned about the recruitment of data science. Data Science is a buzz for every technician..."
}
] |
How to add column using alter in MySQL?
| Following is the syntax to add column using alter in MySQL:
alter table yourTableName add column yourColumnName yourDataType default yourValue;
Let us first create a table:
mysql> create table alterTableDemo
-> (
-> Id int,
-> Name varchar(10)
-> );
Query OK, 0 rows affected (0.69 sec)
Let us check the description of the table using DESC command. This displays Field, Type, Key, etc. of the table:
mysql> desc alterTableDemo;
This will produce the following output
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| Id | int(11) | YES | | NULL | |
| Name | varchar(10) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)
Now, add the column Age with the default value 18. If the user does not supply a value for the Age column, MySQL will use the default value instead. Following is the query to add a column using the alter command.
mysql> alter table alterTableDemo add column Age int default 18;
Query OK, 0 rows affected (0.67 sec)
Records: 0 Duplicates: 0 Warnings: 0
Let us check the table description once again:
mysql> desc alterTableDemo;
This will produce the following output
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| Id | int(11) | YES | | NULL | |
| Name | varchar(10) | YES | | NULL | |
| Age | int(11) | YES | | 18 | |
+-------+-------------+------+-----+---------+-------+
3 rows in set (0.00 sec)
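As the description shows, the Age column was appended as the last column of the table. MySQL's ALTER TABLE also lets you position a new column explicitly with FIRST or AFTER. The following is an illustrative sketch; the City column is hypothetical and not part of the example in this article:

```sql
-- place the new column immediately after Name instead of at the end
ALTER TABLE alterTableDemo ADD COLUMN City varchar(20) AFTER Name;

-- or make it the very first column
-- ALTER TABLE alterTableDemo ADD COLUMN RowId int FIRST;
```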
Let us insert records in the table using the insert command.
Following is the query
mysql> insert into alterTableDemo(Id,Name,Age) values(100,'Chris',24);
Query OK, 1 row affected (0.16 sec)
mysql> insert into alterTableDemo(Id,Name) values(101,'Robert');
Query OK, 1 row affected (0.25 sec)
Following is the query to display all records from the table using select statement:
mysql> select *from alterTableDemo;
Following is the output. Since we haven’t set age for ‘Robert’, therefore the default 18 would be set for Age:
+------+--------+------+
| Id | Name | Age |
+------+--------+------+
| 100 | Chris | 24 |
| 101 | Robert | 18 |
+------+--------+------+
2 rows in set (0.00 sec) | [
{
"code": null,
"e": 1122,
"s": 1062,
"text": "Following is the syntax to add column using alter in MySQL:"
},
{
"code": null,
"e": 1206,
"s": 1122,
"text": "alter table yourTableName add column yourColumnName yourDataType default yourValue;"
},
{
"code": null,
"e": 1235,
"s": 1206,
"text": "Let us first create a table:"
},
{
"code": null,
"e": 1361,
"s": 1235,
"text": "mysql> create table alterTableDemo\n -> (\n -> Id int,\n -> Name varchar(10)\n -> );\nQuery OK, 0 rows affected (0.69 sec)"
},
{
"code": null,
"e": 1474,
"s": 1361,
"text": "Let us check the description of the table using DESC command. This displays Field, Type, Key, etc. of the table:"
},
{
"code": null,
"e": 1502,
"s": 1474,
"text": "mysql> desc alterTableDemo;"
},
{
"code": null,
"e": 1541,
"s": 1502,
"text": "This will produce the following output"
},
{
"code": null,
"e": 1896,
"s": 1541,
"text": "+-------+-------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-------+-------------+------+-----+---------+-------+\n| Id | int(11) | YES | | NULL | |\n| Name | varchar(10) | YES | | NULL | |\n+-------+-------------+------+-----+---------+-------+\n2 rows in set (0.01 sec)"
},
{
"code": null,
"e": 2093,
"s": 1896,
"text": "Now, add column Age with default value 18. If user won’t supply value for column Age then MySQL will use the default value for Age column. Following is the query to add column using alter command."
},
{
"code": null,
"e": 2232,
"s": 2093,
"text": "mysql> alter table alterTableDemo add column Age int default 18;\nQuery OK, 0 rows affected (0.67 sec)\nRecords: 0 Duplicates: 0 Warnings: 0"
},
{
"code": null,
"e": 2279,
"s": 2232,
"text": "Let us check the table description once again:"
},
{
"code": null,
"e": 2307,
"s": 2279,
"text": "mysql> desc alterTableDemo;"
},
{
"code": null,
"e": 2346,
"s": 2307,
"text": "This will produce the following output"
},
{
"code": null,
"e": 2756,
"s": 2346,
"text": "+-------+-------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-------+-------------+------+-----+---------+-------+\n| Id | int(11) | YES | | NULL | |\n| Name | varchar(10) | YES | | NULL | |\n| Age | int(11) | YES | | 18 | |\n+-------+-------------+------+-----+---------+-------+\n3 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2812,
"s": 2756,
"text": "Let us insert record in the table using insert command."
},
{
"code": null,
"e": 2835,
"s": 2812,
"text": "Following is the query"
},
{
"code": null,
"e": 3044,
"s": 2835,
"text": "mysql> insert into alterTableDemo(Id,Name,Age) values(100,'Chris',24);\nQuery OK, 1 row affected (0.16 sec)\n\nmysql> insert into alterTableDemo(Id,Name) values(101,'Robert');\nQuery OK, 1 row affected (0.25 sec)"
},
{
"code": null,
"e": 3129,
"s": 3044,
"text": "Following is the query to display all records from the table using select statement:"
},
{
"code": null,
"e": 3165,
"s": 3129,
"text": "mysql> select *from alterTableDemo;"
},
{
"code": null,
"e": 3276,
"s": 3165,
"text": "Following is the output. Since we haven’t set age for ‘Robert’, therefore the default 18 would be set for Age:"
},
{
"code": null,
"e": 3453,
"s": 3276,
"text": "+------+--------+------+\n| Id | Name | Age |\n+------+--------+------+\n| 100 | Chris | 24 |\n| 101 | Robert | 18 |\n+------+--------+------+\n2 rows in set (0.00 sec)\n\n"
}
] |
Creating a MySQL table using Node.js | Generally, NoSQL databases (like MongoDB) are more popular among the Node developers. However, it totally depends upon your usecase and choice to choose any DBMS from different database options present. The type of databse you choose mainly depends upon one's project's requirements.
For example, if you need table creation or real-time inserts and want to deal with loads of data, then a NoSQL database is the way to go, whereas if your project deals with more complex queries and transactions, an SQL database will make much more sense.
In this article, we will explain how to connect to a MySQL database and then create a new table in it.
Following are the steps to check your application connection with the MySQL database.
Create a new project with a name of your choice, and then navigate to that project.
>> mkdir mysql-test
>> cd mysql-test
Create a package.json file using the following command
>> npm init -y
You will get the following output −
Wrote to /home/abc/mysql-test/package.json:
{
"name": "mysql-test",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Installing the MySQL module −
>> npm install mysql
+ [email protected]
added 11 packages from 15 contributors and audited 11 packages in 3.264s
found 0 vulnerabilities
Create a JS file with the following name – app.js
Copy and Paste the code snippet given below
Run the file using the following command −
>> node app.js
// Checking the MySQL dependency in NPM
var mysql = require('mysql');
// Creating a mysql connection
var con = mysql.createConnection({
host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});
con.connect(function(err) {
if (err) throw err;
console.log("Database connected!");
var sql = "CREATE TABLE students (name VARCHAR(255), address VARCHAR(255))";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("Table created");
});
});
The following output will be printed on the console −
Database connected!
Table created | [
{
"code": null,
"e": 1346,
"s": 1062,
"text": "Generally, NoSQL databases (like MongoDB) are more popular among the Node developers. However, it totally depends upon your usecase and choice to choose any DBMS from different database options present. The type of databse you choose mainly depends upon one's project's requirements."
},
{
"code": null,
"e": 1601,
"s": 1346,
"text": "For example, if you need table creation or real-time inserts and want to deal with loads of data, then a NoSQL database is the way to go, whereas if your project deals with more complex queries and transactions, an SQL database will make much more sense."
},
{
"code": null,
"e": 1695,
"s": 1601,
"text": "In this article, we will explain how to connect to a MySQL and then create a new table in it."
},
{
"code": null,
"e": 1781,
"s": 1695,
"text": "Following are the steps to check your application connection with the MySQL database."
},
{
"code": null,
"e": 1865,
"s": 1781,
"text": "Create a new project with a name of your choice, and then navigate to that project."
},
{
"code": null,
"e": 1949,
"s": 1865,
"text": "Create a new project with a name of your choice, and then navigate to that project."
},
{
"code": null,
"e": 1986,
"s": 1949,
"text": ">> mkdir mysql-test\n>> cd mysql-test"
},
{
"code": null,
"e": 2041,
"s": 1986,
"text": "Create a package.json file using the following command"
},
{
"code": null,
"e": 2096,
"s": 2041,
"text": "Create a package.json file using the following command"
},
{
"code": null,
"e": 2111,
"s": 2096,
"text": ">> npm init -y"
},
{
"code": null,
"e": 2147,
"s": 2111,
"text": "You will get the following output −"
},
{
"code": null,
"e": 2426,
"s": 2147,
"text": "Wrote to /home/abc/mysql-test/package.json:\n{\n \"name\": \"mysql-test\",\n \"version\": \"1.0.0\",\n \"description\": \"\",\n \"main\": \"index.js\",\n \"scripts\": {\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\n },\n \"keywords\": [],\n \"author\": \"\",\n \"license\": \"ISC\"\n}"
},
{
"code": null,
"e": 2456,
"s": 2426,
"text": "Installing the MySQL module −"
},
{
"code": null,
"e": 2486,
"s": 2456,
"text": "Installing the MySQL module −"
},
{
"code": null,
"e": 2510,
"s": 2486,
"text": " >> npm install mysql"
},
{
"code": null,
"e": 2622,
"s": 2510,
"text": "+ [email protected]\nadded 11 packages from 15 contributors and audited 11 packages in 3.264s\nfound 0 vulnerabilities"
},
{
"code": null,
"e": 2672,
"s": 2622,
"text": "Create a JS file with the following name – app.js"
},
{
"code": null,
"e": 2722,
"s": 2672,
"text": "Create a JS file with the following name – app.js"
},
{
"code": null,
"e": 2766,
"s": 2722,
"text": "Copy and Paste the code snippet given below"
},
{
"code": null,
"e": 2810,
"s": 2766,
"text": "Copy and Paste the code snippet given below"
},
{
"code": null,
"e": 2853,
"s": 2810,
"text": "Run the file using the following command −"
},
{
"code": null,
"e": 2896,
"s": 2853,
"text": "Run the file using the following command −"
},
{
"code": null,
"e": 2914,
"s": 2896,
"text": " >> node app.js"
},
{
"code": null,
"e": 3444,
"s": 2914,
"text": "// Checking the MySQL dependency in NPM\nvar mysql = require('mysql');\n\n// Creating a mysql connection\nvar con = mysql.createConnection({\n host: \"localhost\",\n user: \"yourusername\",\n password: \"yourpassword\",\n database: \"mydb\"\n});\n\ncon.connect(function(err) {\n if (err) throw err;\n console.log(\"Database connected!\");\n var sql = \"CREATE TABLE students (name VARCHAR(255), address VARCHAR(255))\";\n con.query(sql, function (err, result) {\n if (err) throw err;\n console.log(\"Table created\");\n });\n});"
},
{
"code": null,
"e": 3498,
"s": 3444,
"text": "The following output will be printed on the console −"
},
{
"code": null,
"e": 3532,
"s": 3498,
"text": "Database connected!\nTable created"
}
] |
string.find() function in Lua | string.find() is one of the most powerful functions in Lua's string library.
Lua doesn't use POSIX regular expressions for pattern matching, as an implementation of those takes about 4,000 lines of code, which is actually bigger than all the Lua standard libraries together. In place of POSIX pattern matching, Lua's own pattern-matching implementation takes less than 500 lines.
The string.find() function is used to find a specific pattern in a given string. It normally takes two arguments: the first argument is the string in which we are searching for the pattern, and the second argument is the pattern that we are trying to find.
There is also a third argument: an index that tells where in the subject string to start the search. This parameter is useful when we want to process all the indices at which a given pattern appears, mainly when the same pattern can occur multiple times in the string.
indexstart, indexend = string.find(s, "pattern")
or
indexstart, indexend = string.find(s, "pattern", startindex)
The above shows both forms of the string.find() function that we can make use of.
Let’s consider a very simple example of the string.find() function where we will try to find a simple pattern in a given string.
Consider the example shown below −
s = "hello world"
i, j = string.find(s, "hello")
print(i, j)
Notice that in the above code, the identifier i is the starting index at which the searched pattern was found, and the identifier j is the ending index of that pattern.
1 5
There might be scenarios where we would want to use only one of these indexes and in those cases, we can simply write the code as follows.
Consider the example shown below −
_, y = string.find(s,"world")
print(y)
x, _ = string.find(s,"world")
print(x)
11
7
Let’s explore one more example where we will make use of the third argument.
Consider the example shown below −
s = "hello n hello y hello z"
index = string.find(s,"hello",i+1)
print(index)
9 | [
{
"code": null,
"e": 1165,
"s": 1062,
"text": "string.find() is one of the most powerful library functions that is present inside the string library."
},
{
"code": null,
"e": 1476,
"s": 1165,
"text": "Lua doesn’t use the POSIX regular expression for pattern matching, as the implementation of the same takes 4,000 lines of code, which is actually bigger than all the Lua standard libraries together. In place of the POSIX pattern matching, the Lua’s implementation of pattern matching takes less than 500 lines."
},
{
"code": null,
"e": 1741,
"s": 1476,
"text": "The string.find() function is used to find a specific pattern in a given string, it normally takes two arguments, the first argument being the string in which the pattern we are trying to search, and the second argument is the pattern that we are trying to search."
},
{
"code": null,
"e": 2067,
"s": 1741,
"text": "There’s a third argument also, the third argument is an index that tells where in the subject string to start the search. This parameter is useful when we want to process all the indices where a given pattern appears. It is mainly used where there are chances that multiple times a same pattern will occur in the same string."
},
{
"code": null,
"e": 2181,
"s": 2067,
"text": "indexstart, indexend = string.find(s,”pattern”)\nor\nindexstart, indexend = string.find(s,”pattern”,indexstart + z)"
},
{
"code": null,
"e": 2284,
"s": 2181,
"text": "In the above syntax, I mentioned both the types of the string.find() function that we can make use of."
},
{
"code": null,
"e": 2413,
"s": 2284,
"text": "Let’s consider a very simple example of the string.find() function where we will try to find a simple pattern in a given string."
},
{
"code": null,
"e": 2448,
"s": 2413,
"text": "Consider the example shown below −"
},
{
"code": null,
"e": 2459,
"s": 2448,
"text": " Live Demo"
},
{
"code": null,
"e": 2520,
"s": 2459,
"text": "s = \"hello world\"\ni, j = string.find(s, \"hello\")\nprint(i, j)"
},
{
"code": null,
"e": 2688,
"s": 2520,
"text": "Notice that in the above code, the i identifier is the starting index where the pattern that we search was found, the j identifier is the ending index of that pattern."
},
{
"code": null,
"e": 2692,
"s": 2688,
"text": "1 5"
},
{
"code": null,
"e": 2831,
"s": 2692,
"text": "There might be scenarios where we would want to use only one of these indexes and in those cases, we can simply write the code as follows."
},
{
"code": null,
"e": 2866,
"s": 2831,
"text": "Consider the example shown below −"
},
{
"code": null,
"e": 2944,
"s": 2866,
"text": "_, y = string.find(s,\"world\")\nprint(y)\nx, _ = string.find(s,\"world\")\nprint(x)"
},
{
"code": null,
"e": 2949,
"s": 2944,
"text": "11\n7"
},
{
"code": null,
"e": 3026,
"s": 2949,
"text": "Let’s explore one more example where we will make use of the third argument."
},
{
"code": null,
"e": 3061,
"s": 3026,
"text": "Consider the example shown below −"
},
{
"code": null,
"e": 3139,
"s": 3061,
"text": "s = \"hello n hello y hello z\"\nindex = string.find(s,\"hello\",i+1)\nprint(index)"
},
{
"code": null,
"e": 3141,
"s": 3139,
"text": "9"
}
] |
C++ Program to Find the Number of Permutations of a Given String | We can arrange the characters of a string in different order. Here we will see how we can count the number of permutations can be formed from a given string.
We know that if a string is ‘abc’, it has three characters and we can arrange them in 3! = 6 different ways. So for a string with n distinct characters, we can arrange them in n! different ways. But if some character is present multiple times, as in ‘aab’, there will not be 6 distinct permutations.
aba
aab
baa
baa
aab
aba
Here the (1,6), (2,5), (3,4) pairs are the same. So here the number of permutations is 3. This is basically n! divided by the product of the factorials of the counts of all characters that occur more than once. For ‘aab’: 3!/2! = 3.
To solve this problem, at first we have to calculate the frequency of each character. Then compute the factorial of n and divide it by the factorial of every frequency value that is greater than 1.
#include<iostream>
using namespace std;
long fact(long n) {
if(n == 0 || n == 1 )
return 1;
return n*fact(n-1);
}
int countPermutation(string str) {
int freq[26] = {0};
for(int i = 0; i<str.size(); i++) {
freq[str[i] - 'a']++; //get the frequency of each characters individually
}
int res = fact(str.size()); //n! for string of length n
for(int i = 0; i<26; i++) {
if(freq[i] > 1)
res /= fact(freq[i]); //divide n! by (number of occurrences of each characters)!
}
return res;
}
main(){
string n;
cout << "Enter a number to count number of permutations can be possible: ";
cin >> n;
cout << "\nThe number of permutations: " << countPermutation(n);
}
Enter a number to count number of permutations can be possible: abbc
The number of permutations: 12 | [
{
"code": null,
"e": 1220,
"s": 1062,
"text": "We can arrange the characters of a string in different order. Here we will see how we can count the number of permutations can be formed from a given string."
},
{
"code": null,
"e": 1523,
"s": 1220,
"text": "We know that if one string is ‘abc’. It has three characters; we can arrange them into 3! = 6 different ways. So a string with n characters, we can arrange them into n! different ways. But now if there are same characters are present for multiple times, like aab, then there will not be 6 permutations."
},
{
"code": null,
"e": 1527,
"s": 1523,
"text": "aba"
},
{
"code": null,
"e": 1531,
"s": 1527,
"text": "aab"
},
{
"code": null,
"e": 1535,
"s": 1531,
"text": "baa"
},
{
"code": null,
"e": 1539,
"s": 1535,
"text": "baa"
},
{
"code": null,
"e": 1543,
"s": 1539,
"text": "aab"
},
{
"code": null,
"e": 1547,
"s": 1543,
"text": "aba"
},
{
"code": null,
"e": 1733,
"s": 1547,
"text": "Here the (1,6), (2, 5), (3,4) are same. So here the number of permutations is 3. This is basically (n!)/(sum of the factorials of all characters which is occurring more than one times)."
},
{
"code": null,
"e": 1935,
"s": 1733,
"text": "To solve this problem, at first we have to calculate the frequency of all of the characters. Then count the factorial of n, then divide it by doing sum of all frequency values which are greater than 1."
},
{
"code": null,
"e": 2650,
"s": 1935,
"text": "#include<iostream>\nusing namespace std;\nlong fact(long n) {\n if(n == 0 || n == 1 )\n return 1;\n return n*fact(n-1);\n}\nint countPermutation(string str) {\n int freq[26] = {0};\n for(int i = 0; i<str.size(); i++) {\n freq[str[i] - 'a']++; //get the frequency of each characters individually\n }\n int res = fact(str.size()); //n! for string of length n\n for(int i = 0; i<26; i++) {\n if(freq[i] > 1)\n res /= fact(freq[i]); //divide n! by (number of occurrences of each characters)!\n }\n return res;\n}\nmain(){\n string n;\n cout << \"Enter a number to count number of permutations can be possible: \";\n cin >> n;\n cout << \"\\nThe number of permutations: \" << countPermutation(n);\n}"
},
{
"code": null,
"e": 2750,
"s": 2650,
"text": "Enter a number to count number of permutations can be possible: abbc\nThe number of permutations: 12"
}
] |
Reduce the string to minimum length with the given operation - GeeksforGeeks | 13 Aug, 2021
Given a string str consisting of lowercase and uppercase characters, the task is to find the minimum possible length the string can be reduced to after performing the given operation any number of times. In a single operation, any two consecutive characters can be removed if they represent the same character in different cases i.e. “aA” and “Cc” can be removed but “cc” and “EE” cannot be removed.Examples:
Input: str = “ASbBsd” Output: 2 Operations 1: “ASbBsd” -> “ASsd” Operations 2: “ASsd” -> “Ad” The string cannot be reduced further.Input: str = “AsSaDda” Output: 1 Operations 1: “AsSaDda” -> “AaDda” Operations 2: “AaDda” -> “Dda” Operations 3: “Dda” -> “a”
Approach:
Create a stack to store the characters of the string.
For every character of the string starting from the first character, if the stack is empty then push the current character in the stack.
Else match the current character with the top of the stack, if they only differ in the case then pop the element from the stack and continue.
If they are not equal then push the current element to the stack and repeat the above steps for the rest of the string.
The size of the stack in the end is the required answer.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function to return the minimum// possible length str can be reduced// to with the given operationint minLength(string str, int len){ // Stack to store the characters // of the given string stack<char> s; // For every character of the string for (int i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.empty()) { s.push(str[i]); } else { // Get the top character char c = s.top(); // If the top element is not equal // to the current element and it // only differs in the case if (c != str[i] && toupper(c) == toupper(str[i])) { // Pop the top element from stack s.pop(); } // Else push the current element else { s.push(str[i]); } } } return s.size();} // Driver codeint main(){ string str = "ASbBsd"; int len = str.length(); cout << minLength(str, len); return 0;}
// Java implementation of the approachimport java.util.*; class GFG{ // Function to return the minimum// possible length str can be reduced// to with the given operationstatic int minLength(String str, int len){ // Stack to store the characters // of the given string Stack<Character> s = new Stack<Character>(); // For every character of the string for (int i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.empty()) { s.push(str.charAt(i)); } else { // Get the top character char c = s.peek(); // If the top element is not equal // to the current element and it // only differs in the case if (c != str.charAt(i) && Character.toUpperCase(c) == Character.toUpperCase((str.charAt(i)))) { // Pop the top element from stack s.pop(); } // Else push the current element else { s.push(str.charAt(i)); } } } return s.size();} // Driver codepublic static void main(String []args){ String str = "ASbBsd"; int len = str.length(); System.out.println(minLength(str, len));}} // This code is contributed by Rajput-Ji
# Python3 implementation of the approach # Function to return the minimum# possible length str can be reduced# to with the given operationdef minLength(string, l) : # Stack to store the characters # of the given string s = []; # For every character of the string for i in range(l) : # If the stack is empty then push the # current character in the stack if (len(s) == 0) : s.append(string[i]); else : # Get the top character c = s[-1]; # If the top element is not equal # to the current element and it # only differs in the case if (c != string[i] and c.upper() == string[i].upper()) : # Pop the top element from stack s.pop(); # Else push the current element else : s.append(string[i]); return len(s); # Driver codeif __name__ == "__main__" : string = "ASbBsd"; l = len(string); print(minLength(string, l)); # This code is contributed by AnkitRai01
// C# implementation of the approachusing System;using System.Collections.Generic; class GFG{ // Function to return the minimum// possible length str can be reduced// to with the given operationstatic int minLength(String str, int len){ // Stack to store the characters // of the given string Stack<char> s = new Stack<char>(); // For every character of the string for (int i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.Count==0) { s.Push(str[i]); } else { // Get the top character char c = s.Peek(); // If the top element is not equal // to the current element and it // only differs in the case if (c != str[i] && char.ToUpper(c) == char.ToUpper((str[i]))) { // Pop the top element from stack s.Pop(); } // Else push the current element else { s.Push(str[i]); } } } return s.Count;} // Driver codepublic static void Main(String []args){ String str = "ASbBsd"; int len = str.Length; Console.WriteLine(minLength(str, len));}} // This code is contributed by PrinciRaj1992
<script> // Javascript implementation of the approach // Function to return the minimum // possible length str can be reduced // to with the given operation function minLength(str, len) { // Stack to store the characters // of the given string let s = []; // For every character of the string for (let i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.length==0) { s.push(str[i]); } else { // Get the top character let c = s[s.length - 1]; // If the top element is not equal // to the current element and it // only differs in the case if (c != str[i] && c.toUpperCase() == str[i].toUpperCase()) { // Pop the top element from stack s.pop(); } // Else push the current element else { s.push(str[i]); } } } return s.length; } let str = "ASbBsd"; let len = str.length; document.write(minLength(str, len)); </script>
2
Time Complexity: O(N). Auxiliary Space: O(N).
Rajput-Ji
princiraj1992
ankthon
decode2207
pankajsharmagfg
Data Structures
Stack
Strings
| [
{
"code": null,
"e": 25159,
"s": 25131,
"text": "\n13 Aug, 2021"
},
{
"code": null,
"e": 25570,
"s": 25159,
"text": "Given a string str consisting of lowercase and uppercase characters, the task is to find the minimum possible length the string can be reduced to after performing the given operation any number of times. In a single operation, any two consecutive characters can be removed if they represent the same character in different cases i.e. “aA” and “Cc” can be removed but “cc” and “EE” cannot be removed.Examples: "
},
{
"code": null,
"e": 25829,
"s": 25570,
"text": "Input: str = “ASbBsd” Output: 2 Operations 1: “ASbBsd” -> “ASsd” Operations 2: “ASsd” -> “Ad” The string cannot be reduced further.Input: str = “AsSaDda” Output: 1 Operations 1: “AsSaDda” -> “AaDda” Operations 2: “AaDda” -> “Dda” Operations 3: “Dda” -> “a” "
},
{
"code": null,
"e": 25843,
"s": 25831,
"text": "Approach: "
},
{
"code": null,
"e": 25897,
"s": 25843,
"text": "Create a stack to store the characters of the string."
},
{
"code": null,
"e": 26034,
"s": 25897,
"text": "For every character of the string starting from the first character, if the stack is empty then push the current character in the stack."
},
{
"code": null,
"e": 26176,
"s": 26034,
"text": "Else match the current character with the top of the stack, if they only differ in the case then pop the element from the stack and continue."
},
{
"code": null,
"e": 26296,
"s": 26176,
"text": "If they are not equal then push the current element to the stack and repeat the above steps for the rest of the string."
},
{
"code": null,
"e": 26353,
"s": 26296,
"text": "The size of the stack in the end is the required answer."
},
{
"code": null,
"e": 26406,
"s": 26353,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 26410,
"s": 26406,
"text": "C++"
},
{
"code": null,
"e": 26415,
"s": 26410,
"text": "Java"
},
{
"code": null,
"e": 26423,
"s": 26415,
"text": "Python3"
},
{
"code": null,
"e": 26426,
"s": 26423,
"text": "C#"
},
{
"code": null,
"e": 26437,
"s": 26426,
"text": "Javascript"
},
{
"code": "// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function to return the minimum// possible length str can be reduced// to with the given operationint minLength(string str, int len){ // Stack to store the characters // of the given string stack<char> s; // For every character of the string for (int i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.empty()) { s.push(str[i]); } else { // Get the top character char c = s.top(); // If the top element is not equal // to the current element and it // only differs in the case if (c != str[i] && toupper(c) == toupper(str[i])) { // Pop the top element from stack s.pop(); } // Else push the current element else { s.push(str[i]); } } } return s.size();} // Driver codeint main(){ string str = \"ASbBsd\"; int len = str.length(); cout << minLength(str, len); return 0;}",
"e": 27605,
"s": 26437,
"text": null
},
{
"code": "// Java implementation of the approachimport java.util.*; class GFG{ // Function to return the minimum// possible length str can be reduced// to with the given operationstatic int minLength(String str, int len){ // Stack to store the characters // of the given string Stack<Character> s = new Stack<Character>(); // For every character of the string for (int i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.empty()) { s.push(str.charAt(i)); } else { // Get the top character char c = s.peek(); // If the top element is not equal // to the current element and it // only differs in the case if (c != str.charAt(i) && Character.toUpperCase(c) == Character.toUpperCase((str.charAt(i)))) { // Pop the top element from stack s.pop(); } // Else push the current element else { s.push(str.charAt(i)); } } } return s.size();} // Driver codepublic static void main(String []args){ String str = \"ASbBsd\"; int len = str.length(); System.out.println(minLength(str, len));}} // This code is contributed by Rajput-Ji",
"e": 28977,
"s": 27605,
"text": null
},
{
"code": "# Python3 implementation of the approach # Function to return the minimum# possible length str can be reduced# to with the given operationdef minLength(string, l) : # Stack to store the characters # of the given string s = []; # For every character of the string for i in range(l) : # If the stack is empty then push the # current character in the stack if (len(s) == 0) : s.append(string[i]); else : # Get the top character c = s[-1]; # If the top element is not equal # to the current element and it # only differs in the case if (c != string[i] and c.upper() == string[i].upper()) : # Pop the top element from stack s.pop(); # Else push the current element else : s.append(string[i]); return len(s); # Driver codeif __name__ == \"__main__\" : string = \"ASbBsd\"; l = len(string); print(minLength(string, l)); # This code is contributed by AnkitRai01",
"e": 30076,
"s": 28977,
"text": null
},
{
"code": "// C# implementation of the approachusing System;using System.Collections.Generic; class GFG{ // Function to return the minimum// possible length str can be reduced// to with the given operationstatic int minLength(String str, int len){ // Stack to store the characters // of the given string Stack<char> s = new Stack<char>(); // For every character of the string for (int i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.Count==0) { s.Push(str[i]); } else { // Get the top character char c = s.Peek(); // If the top element is not equal // to the current element and it // only differs in the case if (c != str[i] && char.ToUpper(c) == char.ToUpper((str[i]))) { // Pop the top element from stack s.Pop(); } // Else push the current element else { s.Push(str[i]); } } } return s.Count;} // Driver codepublic static void Main(String []args){ String str = \"ASbBsd\"; int len = str.Length; Console.WriteLine(minLength(str, len));}} // This code is contributed by PrinciRaj1992",
"e": 31418,
"s": 30076,
"text": null
},
{
"code": "<script> // Javascript implementation of the approach // Function to return the minimum // possible length str can be reduced // to with the given operation function minLength(str, len) { // Stack to store the characters // of the given string let s = []; // For every character of the string for (let i = 0; i < len; i++) { // If the stack is empty then push the // current character in the stack if (s.length==0) { s.push(str[i]); } else { // Get the top character let c = s[s.length - 1]; // If the top element is not equal // to the current element and it // only differs in the case if (c != str[i] && c.toUpperCase() == str[i].toUpperCase()) { // Pop the top element from stack s.pop(); } // Else push the current element else { s.push(str[i]); } } } return s.length; } let str = \"ASbBsd\"; let len = str.length; document.write(minLength(str, len)); </script>",
"e": 32767,
"s": 31418,
"text": null
},
{
"code": null,
"e": 32769,
"s": 32767,
"text": "2"
},
{
"code": null,
"e": 32817,
"s": 32771,
"text": "Time Complexity: O(N).Auxiliary Space: O(N). "
},
{
"code": null,
"e": 32827,
"s": 32817,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 32841,
"s": 32827,
"text": "princiraj1992"
},
{
"code": null,
"e": 32849,
"s": 32841,
"text": "ankthon"
},
{
"code": null,
"e": 32860,
"s": 32849,
"text": "decode2207"
},
{
"code": null,
"e": 32876,
"s": 32860,
"text": "pankajsharmagfg"
},
{
"code": null,
"e": 32892,
"s": 32876,
"text": "Data Structures"
},
{
"code": null,
"e": 32898,
"s": 32892,
"text": "Stack"
},
{
"code": null,
"e": 32906,
"s": 32898,
"text": "Strings"
},
{
"code": null,
"e": 32922,
"s": 32906,
"text": "Data Structures"
},
{
"code": null,
"e": 32930,
"s": 32922,
"text": "Strings"
},
{
"code": null,
"e": 32936,
"s": 32930,
"text": "Stack"
},
{
"code": null,
"e": 33034,
"s": 32936,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33043,
"s": 33034,
"text": "Comments"
},
{
"code": null,
"e": 33056,
"s": 33043,
"text": "Old Comments"
},
{
"code": null,
"e": 33105,
"s": 33056,
"text": "SDE SHEET - A Complete Guide for SDE Preparation"
},
{
"code": null,
"e": 33130,
"s": 33105,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 33157,
"s": 33130,
"text": "Introduction to Algorithms"
},
{
"code": null,
"e": 33193,
"s": 33157,
"text": "Introduction to Tree Data Structure"
},
{
"code": null,
"e": 33267,
"s": 33193,
"text": "Differences and Applications of List, Tuple, Set and Dictionary in Python"
},
{
"code": null,
"e": 33315,
"s": 33267,
"text": "Stack Data Structure (Introduction and Program)"
},
{
"code": null,
"e": 33331,
"s": 33315,
"text": "Stack in Python"
},
{
"code": null,
"e": 33351,
"s": 33331,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 33426,
"s": 33351,
"text": "Check for Balanced Brackets in an expression (well-formedness) using Stack"
}
] |
Coding a custom imputer in scikit-learn | by Eryk Lewinson | Towards Data Science | Working with missing data is an inherent part of the majority of the machine learning projects. A typical approach would be to use scikit-learn’s SimpleImputer (or another imputer from the sklearn.impute module). However, often the simplest approach might not be the best one and we could gain some extra performance by using a more sophisticated approach.
That is why in this article I wanted to demonstrate how to code a custom scikit-learn based imputer. To make the case more interesting, the imputer will fill in the missing values based on the groups’ averages/medians.
Before jumping straight into coding I wanted to elaborate on a few potential reasons why writing a custom imputer class (inheriting from scikit-learn) might be worth your time:
It can help you with developing your programming skills — while writing imputers inheriting from scikit-learn you learn about some best practices already used by the contributors. Additionally, via inheritance you can use some of the already prepared methods. This way, your code will be better/cleaner and potentially more robust to some unforeseen issues.
Your custom classes can be further developed over time and potentially shared with other users (or maybe even integrated into scikit-learn!)
More on the practical side, by creating imputers using the scikit-learn framework you make them compatible with scikit-learn’s Pipelines, which make the project’s flow much cleaner and easier to reproduce/productionize. Another practical matter is the clear distinction between the fit and transform methods, so you will not accidentally introduce data leakage — including the test data in the process of determining the values to be used for imputing.
In this section, we will implement the custom imputer in Python.
First, we load all the required libraries:
For writing this article, I used scikit-learn version 0.22.2.
For this article we will use a toy dataset. We assume the case of collecting the height of people coming from two different populations (samples A and B), hence some variability in the data. Additionally, the first sample also has a distinguishing feature called variant (with values of a and b). What is behind this naming structure is of no importance, the goal was to have two different levels of possible aggregation. Then, we sample the heights from the Normal distribution (using numpy.random.normal) with different values of the scale and location parameters per sample_name.
By using sample(frac=1) we basically reshuffled the DataFrame, so our dataset does not look so artificial. Below you can see the preview of the created DataFrame.
Then, we replace 10 random heights with NaN values using the following code:
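The original gists are not reproduced in this dump, so below is a minimal sketch of how such a toy dataset could be generated — the distribution parameters, the random seed and the exact variant assignment are my assumptions, not the article's original values:

```python
import numpy as np
import pandas as pd

np.random.seed(42)

# two samples (A and B) drawn from Normal distributions with different
# location/scale parameters; each row also gets a variant (a or b)
df = pd.DataFrame({
    'sample_name': ['A'] * 50 + ['B'] * 50,
    'variant': list(np.random.choice(['a', 'b'], size=100)),
    'height': np.concatenate([
        np.random.normal(loc=175, scale=5, size=50),
        np.random.normal(loc=165, scale=10, size=50),
    ]),
})

# reshuffle the rows so the dataset does not look so artificial
df = df.sample(frac=1, random_state=42).reset_index(drop=True)

# replace 10 random heights with NaN
nan_idx = np.random.choice(df.index, size=10, replace=False)
df.loc[nan_idx, 'height'] = np.nan
```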
Now, the DataFrame is ready for imputation.
It is time to code the imputer. You can find the definition of the class below:
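The original class definition (a Medium gist) is missing from this dump; what follows is a reconstruction sketched from the description in the surrounding paragraphs — the inheritance from BaseEstimator/TransformerMixin, the three stored parameters with assertions, the impute_map_ computed in fit, and the group-wise fill in transform. The exact implementation details are assumptions:

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils.validation import check_is_fitted


class GroupImputer(BaseEstimator, TransformerMixin):
    """Impute missing values of `target` with the group-wise mean/median."""

    def __init__(self, group_cols, target, metric='mean'):
        # basic input validation
        assert metric in ['mean', 'median'], 'metric should be mean/median'
        assert isinstance(group_cols, list), 'group_cols should be a list'
        assert isinstance(target, str), 'target should be a string'
        self.group_cols = group_cols
        self.target = target
        self.metric = metric

    def fit(self, X, y=None):
        # the grouping columns must not contain missing values themselves
        assert not pd.isnull(X[self.group_cols]).any(axis=None), \
            'There are missing values in group_cols'
        # aggregated metric per group, used later for imputing
        self.impute_map_ = (X.groupby(self.group_cols)[self.target]
                             .agg(self.metric)
                             .reset_index(drop=False))
        return self  # fit should always return self

    def transform(self, X, y=None):
        # make sure the imputer was fitted before transforming
        check_is_fitted(self, 'impute_map_')
        X = X.copy()  # do not modify the original source data
        for _, row in self.impute_map_.iterrows():
            ind = (X[self.group_cols] == row[self.group_cols]).all(axis=1)
            X.loc[ind, self.target] = X.loc[ind, self.target].fillna(row[self.target])
        # like other scikit-learn transformers, return a numpy array
        return X.values
```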
As described before, by inheriting from the sklearn.base classes (BaseEstimator, TransformerMixin) we get a lot of work done for us, and at the same time the custom imputer class is compatible with scikit-learn’s Pipelines.
So what actually happens in the background? By inheriting from BaseEstimator we automatically get the get_params and set_params methods (all scikit-learn estimators require those). Then, inheriting from TransformerMixin provides the fit_transform method.
Note: There are also other kinds of Mixin classes available for inheritance. Whether we need to do so depends on the type of estimator we want to code. For example, ClassifierMixin and RegressorMixin give us access to the score method used for evaluating the performance of the estimators.
In the __init__ method, we stored the input parameters:
group_cols — the list of columns to aggregate over,
target — the target column for imputation (the column in which the missing values are located),
metric — the metric we want to use for imputation, it can be either the mean or the median of the group.
Additionally, we included a set of assertions to make sure we pass in the correct input.
In the fit method, we calculate the impute_map_, which is a DataFrame with the aggregated metric used for imputing. We also check that there are no missing values in the columns we used for aggregation. It is also very important to know that the fit method should always return self!
Lastly, in the transform method we replace the missing values in each group (indicated by the rows of the impute_map_) with the appropriate values. As an extra precaution, we use check_is_fitted to make sure that we have already fitted the imputer object before using the transform method. Before actually transforming the data, we make a copy of it using the copy method to make sure we do not modify the original source data. For more on the topic, you can refer to one of my previous articles.
In both the fit and transform methods, we have also specified y=None in the method definition, even though the GroupImputer class will not be using the y value of the dataset (also known as the target, not to be confused with the target parameter, which indicates the imputation target). The reason for including it is to ensure compatibility with other scikit-learn classes.
It is time to see the custom imputer in action!
Running the code prints out the following:
df contains 10 missing values.df_imp contains 0 missing values.
As with all imputers in scikit-learn, we first create the instance of the object and specify the parameters. Then, we use the fit_transform method to create the new object, with the missing values in the height column replaced by averages calculated over the sample_name and variant.
To create df_imp, we actually need to manually convert the output of the transformation into a pd.DataFrame, as the original output is a numpy array. That is the case with all imputers/transformers in scikit-learn.
We can see that the imputer worked as expected and replaced all the missing values in our toy DataFrame.
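As a sanity check, the same group-wise logic can be expressed directly in plain pandas (a toy sketch with made-up numbers, not the article's code):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'sample_name': ['A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'variant':     ['a', 'a', 'b', 'a', 'a', 'b', 'b'],
    'height':      [180.0, np.nan, 176.0, 160.0, 162.0, 158.0, np.nan],
})

# group-wise mean imputation -- the logic GroupImputer wraps in a
# scikit-learn compatible interface
df['height'] = df.groupby(['sample_name', 'variant'])['height'] \
                 .transform(lambda s: s.fillna(s.mean()))

print(df['height'].isna().sum())  # 0
```

The one-liner is handy for quick exploration, but it offers no fit/transform split, so unlike the custom class it cannot be dropped into a Pipeline or guard against data leakage.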
In this article, I showed how to quickly create a custom imputer by inheriting from some base classes in scikit-learn. This way, the coding is much faster and we also ensure that the imputer is compatible with the entire scikit-learn framework.
Creating custom imputers/transformers can definitely come in handy while working on machine learning projects. Additionally, we can always reuse the created classes for other projects, as we tried to make it as flexible as possible in the first place.
You can find the code used for this article on my GitHub. As always, any constructive feedback is welcome. You can reach out to me on Twitter or in the comments. | [
] |
Building a Social Network from the News using Graph Theory | by Marcell Ferencz | Towards Data Science | Who are the most influential individuals on the news? What does the sprawling web of politicians, companies and celebrities really look like? How is Meghan Markle related to Argos?
If you’ve ever found yourself lying in bed sleeplessly, wondering any of the above, you’ve clicked on the right article. For we will explore a novel way of representing the vast information fed to us every minute of every hour of every day through news channels and opinion pieces: we are going to build a social network of people on the news.
We live in an age of media and information, and the importance of understanding the intricacies of the News cannot be overstated. In a previous article I have outlined the value of utilising social media to the benefit of any organisation, and demonstrated one approach using data from Twitter. Well, if Twitter is the mouthpiece of the consumer, its earpiece is listening to the news. In fact, many Tweeters these days simply share news articles in an effort to signal their agreement with (or outrage over) it.
Knowing if your organisation, product or boss was on the news is not difficult, but knowing in whose company they were while on the news can give you valuable insight into the kind of crowd they’re being banded with by those pesky journalists.
On a more technical note, building graph representations (i.e. social networks) of data opens up a whole host of possibilities for data science applications, such as finding the most central nodes (influential people) or identifying clusters of nodes (cliques of people) based on their edges (connections).
On a far less technical note, network graphs look really cool.
The fundamental premise behind building our social network will be two-fold and quite simple:
If two people are mentioned in the same article, they are friends.
The more articles mention the same two people, the closer they are as friends.
Let’s take an example: if an article mentions Donald and Boris, and two other, separate articles mention Donald and Mike, we’ll say that Donald is friends with Boris, and Donald is also friends with Mike, only twice as much. We’ll therefore construct a social network like the following:
We not only get a pictorial representation of the friendship group, we can also start seeing hidden relationships: although Boris wasn’t mentioned in the same article as Mike, we can guess with some certainty that the two are related (and that they are related via their mutual friend, Donald). We can also tell that Donald is the alpha male in the group, having influence over both Mike and Boris.
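The toy friendship group above can be sketched in a few lines of networkx (the node names and weights follow the example; the centrality call is my illustration, not part of the original article):

```python
import networkx as nx

# one shared article links Donald and Boris, two link Donald and Mike
G = nx.Graph()
G.add_edge('Donald', 'Boris', weight=1)
G.add_edge('Donald', 'Mike', weight=2)

# Donald sits between the other two, so he scores highest on centrality
print(nx.degree_centrality(G))  # {'Donald': 1.0, 'Boris': 0.5, 'Mike': 0.5}
```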
This is just a bare-bones example of three (very short) articles on the news. You can imagine the kinds of insights we can draw from hundreds of documents writing about thousands of people. To achieve this, we’ll use two techniques, which I’ll briefly touch upon now.
Our first order of business is identifying individuals of interest within articles, which is not so easily done, if we don’t know whom we’re looking for ahead of time (otherwise we could use a simple string search). Luckily for us, Named Entity Recognition exists.
Named Entity Recognition is a Natural Language Processing task for extracting information from text; as the name suggests, it recognises entities, such as proper nouns within unstructured data. It achieves this using statistical models trained on large corpora of documents; the model learns to recognise and categorise entities based on the context in which they appear as words. We’ll use one of these models to list, for each article, words tagged as persons and organisations by the model.
Once we have our list of entities for each article, we’ll organise them into a graph structure. Graphs are, by definition, a set of vertices and edges:
G = (V, E)
Where G is our graph, made up of a set of vertices V (or nodes) and a set of edges E (or links).
This article isn’t meant as a primer on Graph Theory, but I do want to highlight a few important properties of graphs, which we’ll find useful further along the exercise:
Graphs can be directed or undirected — a directed graph’s nodes are linked with a direction (surprisingly), whereas the direction of links are irrelevant for an undirected graph. In our case, we will set up our network as an undirected graph, because Mike and Don being mentioned in the same article alone does not give any indication of direction of information between the two entities.
Graphs can be connected or unconnected — a graph is connected if you can trace a path from any node to any other node. Our graph from above is connected, because all nodes are somehow connected to all others; if I were to introduce a fourth node, Jeremy, who wasn’t mentioned in any article together with Don, Mike or Boris, he would have no connection to any of the existing nodes, and our graph would become unconnected.
Nodes have centrality measures — these are metrics to describe how important a node is within a network. One of these measures (which we’ll use in the upcoming analysis) is eigenvector centrality, which assigns a score to each node based on how many other important nodes to which it’s connected. A famous use case of eigenvector centrality is Google’s PageRank algorithm.
Graphs can have cliques — cliques are a subset of graphs (or sub-graphs) in which all pairs of nodes are connected. These are the mathematical representations of friendship groups from your school days. In our example, if Mike also had a connection to Boris, they would form a clique.
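The last two properties can be sketched quickly in networkx — the graph below is an illustrative assumption extending the toy example with a fourth node:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([('Donald', 'Boris'), ('Donald', 'Mike'),
                  ('Boris', 'Mike'), ('Mike', 'Jeremy')])

# eigenvector centrality: a node scores highly if it is connected to
# other high-scoring nodes
centrality = nx.eigenvector_centrality(G)
best = max(centrality, key=centrality.get)

# maximal cliques: sub-graphs in which every pair of nodes is connected
cliques = list(nx.find_cliques(G))
```

In this toy graph, Mike ends up with the highest eigenvector centrality, since he belongs to the Donald–Boris–Mike clique and also bridges to Jeremy.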
I’ve rambled on for long enough about entities and graphs — let’s get to the nitty gritty now.
So far, I’ve not touched on where I actually intend to get all these articles I was bragging about. There are, of course, many News APIs you can use to extract information from the news. The source I used was from the GDELT Project, which is a free, open platform of all world events, monitored in real time from across the globe.
Supported by Google Jigsaw, the GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages...
They carry out a number of very interesting activities and analyses (seriously, check them out), but we’ll be interested in their raw data, which they also make available completely free of charge (thank you GDELT). They publish daily CSVs with thousands of events which occurred during that day, but more importantly, they include the URL of the news source which reported on the event.
I used the file containing the events that occurred on the 26th of March, and filtered on news published by some of the UK’s most read news providers. I limited my search in such a way to somewhat narrow down the volume of entities we’ll be dealing with and also to focus on what’s important for the British mainstream reader. Doing so still netted me a whopping 805 articles from the following sources:
Once I had my 805 URLs, I used Goose to extract the content from each web page. Again, this is not the focus of this article, so I won’t go into the detail. There are countless resources out there which will tell you how to do that (including the Goose documentation), if you’re interested in following along.
Extracting the title and the content of each article allowed me to organise my data into a nice pandas DataFrame:
If you’ve read my previous article (that I already shamelessly linked to earlier), you’ll know of my love affair with the Flair library, providing state-of-the-art NLP solutions in a few lines of code. Flair uses a neural language model (Akbik et al., 2018) to assign tags to text data, beating most previous models’ accuracy in the process. Flair is more than good enough for us. Flair is a blessing. Let’s take it for a spin.
We’ll tell Flair’s sentence tagger to predict entities within the sentence:
Boris went to Seattle with Donald to meet Microsoft.
!pip install flair

import torch
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load('ner')

sentence = Sentence('Boris went to Seattle with Donald to meet Microsoft')
tagger.predict(sentence)

for entity in sentence.get_spans('ner'):
    print(entity)
Which yields four entities (as expected):
PER-span [1]: "Boris"
LOC-span [4]: "Seattle"
PER-span [6]: "Donald"
ORG-span [10]: "Microsoft"
It successfully identified all four entities, and was able to recognise the two people (PER), one location (LOC) and one organisation (ORG).
Flair’s Sentence objects have a useful .to_dict() method which allows us to extract the entities in a more robust way:
sentence.to_dict(tag_type='ner')
Giving us a dictionary with all the juicy properties of each entity in the sentence:
{'entities': [{'confidence': 0.999889612197876,
               'end_pos': 5,
               'start_pos': 0,
               'text': 'Boris',
               'type': 'PER'},
              {'confidence': 0.9988295435905457,
               'end_pos': 21,
               'start_pos': 14,
               'text': 'Seattle',
               'type': 'LOC'},
              {'confidence': 0.9999843835830688,
               'end_pos': 33,
               'start_pos': 27,
               'text': 'Donald',
               'type': 'PER'},
              {'confidence': 0.8911125063896179,
               'end_pos': 56,
               'start_pos': 47,
               'text': 'Microsoft',
               'type': 'ORG'}],
 'labels': [],
 'text': 'Boris went to Seattle with Donald to meet with Microsoft'}
We stated above that we only care about people and organisations, so we’ll organise the dict into a nice DataFrame and filter for those two:
import pandas as pd

sent_dict = sentence.to_dict(tag_type='ner')

df_ner = pd.DataFrame(data={
    'entity': [entity['text'] for entity in sent_dict['entities']],
    'type': [entity['type'] for entity in sent_dict['entities']]
})

df_ner[df_ner['type'].isin(['PER', 'ORG'])]
To build the actual social network, we’ll use the tried and trusted NetworkX package. Its Graph() class needs (at least) a list of edges for the graph, so we’ll massage our list of entities into a list of paired connections.
We’ll use the combinations functionality from itertools to, well, find all possible combinations given a list of items. First, we’ll sort our entities alphabetically — this is to ensure that in every pair we find (now and thereafter), the alphabetically superior entity appears on the left hand side, and we don’t duplicate pairs of A-B and B-A, for instance:
from itertools import combinations

df_ner = df_ner.sort_values('entity')
combs = list(combinations(df_ner['entity'], 2))

df_links = pd.DataFrame(data=combs, columns=['from', 'to'])
df_links
We are ready to create our graph and plot it!
import networkx as nx

G = nx.Graph()
for link in df_links.index:
    G.add_edge(df_links.iloc[link]['from'], df_links.iloc[link]['to'])

nx.draw(G, with_labels=True)
All of the above can be wrapped in a nice function for us to apply to each of our articles. You’ll notice a couple of additions — first, I am doing some basic cleaning in the process, removing newlines and the like, and I’m also removing pronouns, which sometimes get recognised as named entities. Another key step here is that given an entity (say, Boris Johnson), I’m taking their last name only, as often individuals are referred to by their last name only. This step ensures that Boris Johnson and Johnson won’t be counted as two different entities.
import re
import string
from nltk import tokenize

def get_ner_data(paragraph):
    # remove newlines and odd characters
    paragraph = re.sub('\r', '', paragraph)
    paragraph = re.sub('\n', ' ', paragraph)
    paragraph = re.sub("’s", '', paragraph)
    paragraph = re.sub("“", '', paragraph)
    paragraph = re.sub("”", '', paragraph)

    # tokenise sentences
    sentences = tokenize.sent_tokenize(paragraph)
    sentences = [Sentence(sent) for sent in sentences]

    # predict named entities
    for sent in sentences:
        tagger.predict(sent)

    # collect sentence NER's to list of dictionaries
    sent_dicts = [sentence.to_dict(tag_type='ner') for sentence in sentences]

    # collect entities and types
    entities = []
    types = []
    for sent_dict in sent_dicts:
        entities.extend([entity['text'] for entity in sent_dict['entities']])
        types.extend([entity['type'] for entity in sent_dict['entities']])

    # create dataframe of entities (nodes)
    df_ner = pd.DataFrame(data={'entity': entities, 'type': types})
    df_ner = df_ner[df_ner['type'].isin(['PER', 'ORG'])]
    df_ner = df_ner[df_ner['entity'].map(lambda x: isinstance(x, str))]
    # df_contraptions (defined elsewhere) holds the pronouns to filter out
    df_ner = df_ner[~df_ner['entity'].isin(df_contraptions['contraption'].values)]
    df_ner['entity'] = df_ner['entity'].map(lambda x: x.translate(str.maketrans('', '', string.punctuation)))
    # keep only the last name for people, so e.g. Johnson == Boris Johnson
    df_ner['entity'] = df_ner.apply(lambda x: x['entity'].split(' ')[len(x['entity'].split(' '))-1]
                                    if x['type'] == 'PER' else x['entity'], axis=1)
    df_ner = df_ner.drop_duplicates().sort_values('entity')

    # get entity combinations
    combs = list(combinations(df_ner['entity'], 2))

    # create dataframe of relationships (edges)
    df_links = pd.DataFrame(data=combs, columns=['from', 'to'])

    return df_ner, df_links
We’re ready to apply our function to each article’s content. Note that this took me about 40–50 minutes to run on Google Colab for my 800 articles:
df_ner = pd.DataFrame()
df_links = pd.DataFrame()

for content in tqdm(df_day['content']):
    try:
        df_ner_temp, df_links_temp = get_ner_data(content)
        df_ner = df_ner.append(df_ner_temp)
        df_links = df_links.append(df_links_temp)
    except:
        continue
I put the try-catch in there just in case no entities are recognised in an article and the DataFrame.append() method fails. I know it’s not good practice. I did it anyway.
Armed with our massive list of entity links, we can start building our graph. To make things more manageable, I’ll only keep links which have appeared in at least two separate articles:
df_links = df_links.groupby(['from', 'to']).size().reset_index()
df_links.rename(columns={0: 'weight'}, inplace=True)
df_links = df_links[df_links['weight'] > 1]
df_links.reset_index(drop=True, inplace=True)

df_links.sort_values('weight', ascending=False).head(10)
By far the most connections found were between Boris Johnson and the NHS, followed by Matt Hancock and the NHS. Further honourable mentions go to Prince Harry and Meghan and Andrew Cuomo and Donald Trump.
Let’s build a more comprehensive picture, though. We’ll take the links which have appeared in over 6 articles (trust me, it’s the lowest number where you can still make something out of a plot) and draw the graph as before:
df_plot = df_links[df_links['weight'] > 6]
df_plot.reset_index(inplace=True, drop=True)

G_plot = nx.Graph()
for link in tqdm(df_plot.index):
    G_plot.add_edge(df_plot.iloc[link]['from'],
                    df_plot.iloc[link]['to'],
                    weight=df_plot.iloc[link]['weight'])

pos = nx.kamada_kawai_layout(G_plot)
nodes = G_plot.nodes()

fig, axs = plt.subplots(1, 1, figsize=(15, 20))
el = nx.draw_networkx_edges(G_plot, pos, alpha=0.1, ax=axs)
nl = nx.draw_networkx_nodes(G_plot, pos, nodelist=nodes, node_color='#FAA6FF',
                            node_size=50, ax=axs)
ll = nx.draw_networkx_labels(G_plot, pos, font_size=10, font_family='sans-serif')
Each dot represents an entity — a person or organisation — and a link between two dots mean they appeared in at least 2 separate articles together.
We can immediately see some groups of interest — the NHS and Johnson are indeed very much in the centre of things, but we can spot members of the Royal Family to the upper right corner as well as US politicians on the bottom half of the map.
And there you have it — we have built a social network from real news data. Let’s extract some insight from it!
Now that we’ve done the heavy lifting, the fun part can begin. We will ask our graph some questions.
Let’s start simple — we’ll use the nodes() and edges() methods to find the number of entities and connections in our social network:
n_nodes = len(G.nodes())
n_edges = len(G.edges())

print(f'There were {n_nodes} entities and {n_edges} connections found in the network.')
Which tells us that there were 2287 entities and 8276 connections found in the network.
In other words, is our graph connected? If not, how big is each subgraph?
nx.is_connected(G)
This command returns False, telling us that not all nodes are connected to each other in the graph.
subgraphs = [G.subgraph(c) for c in nx.connected_components(G)]
subgraph_nodes = [sg.number_of_nodes() for sg in subgraphs]

df_subgraphs = pd.DataFrame(data={
    'id': range(len(subgraph_nodes)),
    'nodes': subgraph_nodes
})
df_subgraphs['percentage'] = df_subgraphs['nodes'].map(lambda x: 100*x/sum(df_subgraphs['nodes']))
df_subgraphs = df_subgraphs.sort_values('nodes', ascending=False).reset_index(drop=True)

df_subgraphs
Over 95% of our nodes belong to one big connected cluster. Interestingly, there don’t seem to be separate, equally large graphs in our network; the vast majority of entities are connected to one another in some way.
We will use a shortest-path algorithm to find this out, but first we’ll need to do some housekeeping. We’ll recreate our graph using only the nodes which belong in the 95% who all know each other (we want to make sure our graph is connected) and we’ll also add a new attribute, inverse_weight to our edges. This will be the reciprocal of our original weight (i.e. the number of articles mentioning the two entities), which will help our shortest path algorithm prioritise high weights (more common connections) over low ones.
sg = subgraphs[np.argmax(np.array(subgraph_nodes))]
df_links_sg = nx.to_pandas_edgelist(sg)
df_links_sg['inverse_weight'] = df_links_sg['weight'].map(lambda x: 1/x)

G = nx.Graph()
for link in tqdm(df_links_sg.index):
    G.add_edge(df_links_sg.iloc[link]['source'],
               df_links_sg.iloc[link]['target'],
               weight=df_links_sg.iloc[link]['weight'],
               inverse_weight=df_links_sg.iloc[link]['inverse_weight'])
We can now compute the shortest path between any two entities, finally — finally — giving us the long-awaited answer to the question of how Meghan Markle is related to the retail store Argos.
source = 'Markle'
target = 'Argos'
path = nx.shortest_path(G, source=source, target=target, weight='inverse_weight')
path
Which gives us the chain of entities leading from Markle to Argos: ‘Markle’, ‘Charles’, ‘Johnson’, ‘Sainsbury’, ‘Argos’.
So Meghan Markle and Prince Charles appeared in an article, Prince Charles and Boris Johnson appeared in another article, and so on...
Let’s take a look at which articles these were:
df_path = pd.DataFrame([(path[i-1], path[i]) for i in range(1, len(path))],
                       columns=['ent1', 'ent2'])

def get_common_title(ent1, ent2):
    df_art_path = df_articles[(df_articles['content'].str.contains(ent1)) &
                              (df_articles['content'].str.contains(ent2))]
    # keep the most recent article mentioning both entities
    df_art_path = df_art_path.sort_values('date', ascending=False).head(1)
    return df_art_path.iloc[0]['title']

df_path['titles'] = df_path.apply(lambda x: get_common_title(x['ent1'], x['ent2']), axis=1)
for title in df_path['titles']:
    print(title)
The figure above illustrates our route from Meghan all the way to Argos, one article at a time:
Meghan was in an article with Prince Charles about her forbidding Prince Harry to visit the latter.
Prince Charles appeared in an article with Boris Johnson on testing positive for Coronavirus.
Boris was in an article with Sainsbury’s on the topic of panic buying in shops.
Sainsbury’s appeared in an article with Argos on stores remaining open during the lockdown.
Onto more serious waters then. We can find each node’s centrality measure to find out how influential they are on the network. In other words, we will assign a score to each person or organisation based on how many other influential people or organisations they have appeared together with. We recall from an earlier paragraph that this can be achieved using the eigenvector centrality measure:
nodes = []
eigenvector_cents = []
ec_dict = nx.eigenvector_centrality(G, max_iter=1000, weight='weight')
for node in tqdm(G.nodes()):
    nodes.append(node)
    eigenvector_cents.append(ec_dict[node])

df_centralities = pd.DataFrame(data={
    'entity': nodes,
    'centrality': eigenvector_cents
})
Visualising the 20 most influential entities in the network gives:
The longer (and greener) the bar for an entity, the more influential they have been on the news.
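The plotting code for this chart isn't included in the article; a minimal matplotlib sketch, using a small hypothetical stand-in for the df_centralities frame built above, could look like this:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for the df_centralities frame computed earlier
df_centralities = pd.DataFrame(data={
    'entity': ['NHS', 'Johnson', 'Charles', 'Hancock'],
    'centrality': [0.31, 0.28, 0.17, 0.15]
})

# Keep the top entities and draw a horizontal bar chart, colouring each bar
# by its score so that higher centralities appear greener
df_top = df_centralities.sort_values('centrality').tail(20)
colours = plt.cm.summer_r((df_top['centrality'] / df_top['centrality'].max()).to_numpy())
fig, ax = plt.subplots(figsize=(8, 6))
ax.barh(df_top['entity'], df_top['centrality'], color=colours)
ax.set_xlabel('Eigenvector centrality')
ax.set_title('Most influential entities in the network')
fig.tight_layout()
```

The sorting ascending and plotting with barh puts the highest-scoring entity at the top of the chart.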
The NHS and Boris Johnson lead the pack in terms of influence on British news, which is not surprising, considering the situation we’re all in with COVID-19.
In other words, can we identify cliques in our network? Remember, cliques are a set of nodes in a graph in which every pair of nodes is connected. The algorithm we’ll use to do this is called k-clique communities (Palla et al., 2005), which allows us to find closely connected clusters in the network by defining k, or the minimum number of nodes required for a clique to be formed. It’s worth noting that one node can be very popular and belong to multiple cliques.
Setting k = 12 gives us 9 nice and manageable cliques:
from networkx.algorithms.community.kclique import k_clique_communities

cliques = list(k_clique_communities(G, 12))
As before, we’ll also calculate the eigenvector centrality in each subgraph formed by the cliques (this, again, will be based on edge weights) and visualise the results:
Each pair of graphs represents a clique. On the left plot, we see the position of the nodes on the network, and on the right, we see the most influential nodes in the clique.
This allows us to identify core topics on the news via the groups of people who appeared together a lot:
Clique 0: US political and commercial entities
Clique 1: Australian academic institutes
Clique 2: UK military entities
Clique 3: UK retailers
Clique 4: UK political organisations
Clique 5: London transport companies
Clique 6: British politicians
Clique 7: more British politicians
Clique 8: the British Royal Family
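The labels above come from manually inspecting the members of each community; listing them is straightforward (shown here with a small hypothetical stand-in for the cliques list computed earlier):

```python
# Hypothetical stand-in for the list returned by k_clique_communities
cliques = [frozenset({'Johnson', 'Hancock', 'NHS'}),
           frozenset({'Markle', 'Harry', 'Charles'})]

for i, clique in enumerate(cliques):
    print(f'Clique {i} ({len(clique)} members): {sorted(clique)}')
```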
We collected over 800 articles published on the 26th of March by some of the most-read British newspapers
We used a state-of-the-art Named Entity Recognition algorithm to extract people and organisations that appeared together in articles
We structured the data in graph form, resulting in a social network of over 2,000 entities and over 8,000 connections
We found that over 95% of entities in our network were connected to each other
We found that Meghan Markle can be linked to Argos
We found that the NHS, Boris Johnson and Prince Charles were the most influential entities on the news on the day
We identified 9 distinct groups of entities, mainly politicians and government bodies who regularly all appeared together on the news
This took some time and writing, so I commend you if you’ve made it this far — I hope it was worth it. This exercise merely scratched the surface of what can be achieved with graph data structures. I intend to go a bit further with this data in a follow-up article, but for now, I’ll leave it here.
Did I do something wrong? Could I have done something better? Did I do something well?
Please don’t hesitate to reach out to me on LinkedIn; I’m always happy to be challenged or just have a chat if you’re interested in my work.
If you want to play with the code yourself, please follow the link to my Google Colab notebooks:
},
{
"code": null,
"e": 23341,
"s": 23236,
"text": "This allows us to identify core topics on the news via the groups of people who appeared together a lot:"
},
{
"code": null,
"e": 23388,
"s": 23341,
"text": "Clique 0: US political and commercial entities"
},
{
"code": null,
"e": 23429,
"s": 23388,
"text": "Clique 1: Australian academic institutes"
},
{
"code": null,
"e": 23460,
"s": 23429,
"text": "Clique 2: UK military entities"
},
{
"code": null,
"e": 23483,
"s": 23460,
"text": "Clique 3: UK retailers"
},
{
"code": null,
"e": 23520,
"s": 23483,
"text": "Clique 4: UK political organisations"
},
{
"code": null,
"e": 23557,
"s": 23520,
"text": "Clique 5: London transport companies"
},
{
"code": null,
"e": 23587,
"s": 23557,
"text": "Clique 6: British politicians"
},
{
"code": null,
"e": 23622,
"s": 23587,
"text": "Clique 7: more British politicians"
},
{
"code": null,
"e": 23657,
"s": 23622,
"text": "Clique 8: the British Royal Family"
},
{
"code": null,
"e": 23763,
"s": 23657,
"text": "We collected over 800 articles published on the 26th or March by some of the most read British newspapers"
},
{
"code": null,
"e": 23896,
"s": 23763,
"text": "We used a state-of-the-art Named Entity Recognition algorithm to extract people and organisations that appeared together in articles"
},
{
"code": null,
"e": 24014,
"s": 23896,
"text": "We structured the data in graph form, resulting in a social network of over 2,000 entities and over 8,000 connections"
},
{
"code": null,
"e": 24093,
"s": 24014,
"text": "We found that over 95% of entities in our network were connected to each other"
},
{
"code": null,
"e": 24144,
"s": 24093,
"text": "We found that Meghan Markle can be linked to Argos"
},
{
"code": null,
"e": 24258,
"s": 24144,
"text": "We found that the NHS, Boris Johnson and Prince Charles were the most influential entities on the news on the day"
},
{
"code": null,
"e": 24392,
"s": 24258,
"text": "We identified 9 distinct groups of entities, mainly politicians and government bodies who regularly all appeared together on the news"
},
{
"code": null,
"e": 24691,
"s": 24392,
"text": "This took some time and writing, so I commend you if you’ve made it this far — I hope it was worth it. This exercise merely scratched the surface of what can be achieved with graph data structures. I intend to go a bit further with this data in a follow-up article, but for now, I’ll leave it here."
},
{
"code": null,
"e": 24778,
"s": 24691,
"text": "Did I do something wrong? Could I have done something better? Did I do something well?"
},
{
"code": null,
"e": 24919,
"s": 24778,
"text": "Please don’t hesitate to reach out to me on LinkedIn; I’m always happy to be challenged or just have a chat if you’re interested in my work."
}
] |
Neural Networks to Predict the Market | by Vivek Palaniappan | Towards Data Science
Machine Learning and deep learning have become new and effective strategies commonly used by quantitative hedge funds to maximize their profits. As an AI and finance enthusiast myself, this is exciting news as it combines two of my areas of interest. This article will be an introduction on how to use neural networks to predict the stock market, in particular the price of a stock (or index). This post is based on a Python project in my GitHub, where you can find the full Python code and how to use the program. Also, for more content like this, check out my own page: Engineer Quant
Finance is highly nonlinear and sometimes stock price data can even seem completely random. Traditional time series methods such as ARIMA and GARCH models are effective only when the series is stationary, which is a restricting assumption that requires the series to be preprocessed by taking log returns (or other transforms). However, the main issue arises in implementing these models in a live trading system, as there is no guarantee of stationarity as new data is added.
This is combated by using neural networks, which do not require any stationarity to be used. Furthermore, neural networks by nature are effective in finding the relationships between data and using it to predict (or classify) new data.
A typical full stack data science project has the following workflow:
Data acquisition — this provides us the featuresData preprocessing — an often dreaded but necessary step to make the data usableDevelop and implement model — where we choose the type of neural network and parametersBacktest model — a very crucial step in any trading strategyOptimization — finding suitable parameters
Data acquisition — this provides us the features
Data preprocessing — an often dreaded but necessary step to make the data usable
Develop and implement model — where we choose the type of neural network and parameters
Backtest model — a very crucial step in any trading strategy
Optimization — finding suitable parameters
The input data for our neural network is the past ten days of stock price data and we use it to predict the next day’s stock price data.
Fortunately, the stock price data required for this project is readily available in Yahoo Finance. The data can be acquired either by using their Python API, pdr.get_data_yahoo(ticker, start_date, end_date), or directly from their website.
In our case, we need to break up the data into training sets of ten prices and the next day's price. I have done this by defining a class Preprocessing, breaking it up into train and test data and defining a method gen_train(self, seq_len) that returns the training data (input and output) as numpy arrays, given a particular length of window (ten in our case). The full code is as follows:
def gen_train(self, seq_len):
    """
    Generates training data
    :param seq_len: length of window
    :return: X_train and Y_train
    """
    for i in range((len(self.stock_train)//seq_len)*seq_len - seq_len - 1):
        x = np.array(self.stock_train.iloc[i: i + seq_len, 1])
        y = np.array([self.stock_train.iloc[i + seq_len + 1, 1]], np.float64)
        self.input_train.append(x)
        self.output_train.append(y)
    self.X_train = np.array(self.input_train)
    self.Y_train = np.array(self.output_train)
Similarly, for the test data, I defined a method that returns the test data X_test and Y_test.
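For readers who want to see the windowing logic outside the class, here is a minimal self-contained sketch of the same sliding-window split on a synthetic price series. The array names and toy data are assumptions for illustration; the indexing mirrors the article's code, including the one-day gap before the target:

```python
import numpy as np

# Synthetic "price" series standing in for real stock data
prices = np.arange(100, dtype=np.float64)

seq_len = 10
inputs, outputs = [], []
for i in range(len(prices) - seq_len - 1):
    inputs.append(prices[i: i + seq_len])    # ten consecutive prices
    outputs.append(prices[i + seq_len + 1])  # the price to predict

X = np.array(inputs)
Y = np.array(outputs)
print(X.shape, Y.shape)  # (89, 10) (89,)
```

Each row of X is one ten-day window, and the matching entry of Y is the later price the network learns to predict.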
For this project, I have used two neural network models: the Multilayer Perceptron (MLP) and the Long Short-Term Memory (LSTM) network. I will give a short introduction into how these models work, but to read through how MLPs work, check out this article. For LSTMs, check out this excellent article by Jakob Aungiers.
MLPs are the simplest form of neural networks, where an input is fed into the model, and using certain weights, the values are fed forward through the hidden layers to produce the output. The learning comes from backpropagating through the hidden layers to change the value of the weights between each neuron. An issue with MLPs is the lack of ‘memory’. There is no sense of what happened in previous training data and how that might and should affect the new training data. In the context of our model, the difference between the ten days of data in one dataset and another dataset might be of importance (for example) but MLPs do not have the ability to analyse these relationships.
This is where LSTMs, or in general Recurrent Neural Networks (RNNs), come in. RNNs have the ability of storing certain information about the data for later use and this extends the network’s capability in analyzing the complex structure of the relationships between stock price data. A problem with RNNs is the vanishing gradient problem. This is due to the fact that when the number of layers increases, factors with value less than one are multiplied several times, and that causes the gradient to keep decreasing. This is combated by LSTMs, making them more effective.
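The shrinking effect behind the vanishing gradient is easy to see numerically: repeatedly multiplying per-layer derivative factors smaller than one quickly drives the surviving gradient toward zero. A toy illustration (the 0.9 factor is an arbitrary assumption, not a measured value):

```python
# Toy illustration: a constant per-layer derivative factor of 0.9
factor = 0.9
for depth in (5, 20, 50):
    surviving = factor ** depth
    print(f"{depth} layers: surviving gradient factor ~ {surviving:.6f}")
```

At 50 layers the factor has already dropped below one percent, which is why deep plain RNNs struggle to learn long-range dependencies.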
To implement the models, I have chosen keras because it uses the idea of adding layers to the network instead of defining the entire network at once. This opens us up to quick alteration of the number of layers and type of layers, which is handy when optimizing the network.
An important step in using the stock price data is to normalize the data. This would usually mean that you minus the average and divide by standard deviation but in our case, we want to be able to use this system on live trade over a period of time. So taking the statistical moments might not be the most accurate way to normalize the data. So I have merely divided the entire data by 200 (an arbitrary number that makes everything small). Although it seems as though the normalization was plucked out of thin air, it is still effective in making sure the weights in the neural network do not grow too large.
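A minimal sketch of this constant-scaling approach (200 is the article's arbitrary divisor; the toy prices are an assumption). Unlike z-scoring with statistical moments, a fixed divisor is trivially invertible on live data:

```python
import numpy as np

prices = np.array([150.0, 152.5, 149.0, 155.0])  # toy prices, an assumption

SCALE = 200.0                   # the article's arbitrary divisor
normalized = prices / SCALE     # keeps network inputs small
recovered = normalized * SCALE  # trivially invertible after prediction

print(normalized)
```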
Let us begin with the simpler MLP. In keras this is done by making a sequential model and adding dense layers on top of it. The full code is as follows:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(1, activation=tf.nn.relu))
model.compile(optimizer="adam", loss="mean_squared_error")
This is where the elegance of keras really shows. Just with those five lines of code, we have created an MLP with two hidden layers each with a hundred neurons. A little word about the optimizer. The Adam optimizer is gaining popularity in the machine learning community because it is a more efficient algorithm to optimize compared to traditional stochastic gradient descent. The advantages are best understood by looking at the advantages of two other extensions of stochastic gradient descent:
Adaptive Gradient Algorithm (AdaGrad) that maintains a per-parameter learning rate that improves performance on problems with sparse gradients (e.g. natural language and computer vision problems).
Root Mean Square Propagation (RMSProp) that also maintains per-parameter learning rates that are adapted based on the average of recent magnitudes of the gradients for the weight (e.g. how quickly it is changing). This means the algorithm does well on online and non-stationary problems (e.g. noisy).
Adam can be thought of as combining the benefits of the above extensions and that is why I have chosen to use Adam as my optimizer.
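To make the mechanics concrete, here is a hedged NumPy sketch of Adam's update rule minimizing a one-dimensional quadratic loss. The moment decay rates are the commonly quoted defaults, while the step size and toy loss are assumptions for illustration:

```python
import numpy as np

# Minimize f(w) = (w - 3)^2 with a bare-bones Adam loop
w = 0.0
m, v = 0.0, 0.0                   # first and second moment estimates
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = 2 * (w - 3)               # gradient of the quadratic loss
    m = b1 * m + (1 - b1) * g     # momentum-style moving average of gradients
    v = b2 * v + (1 - b2) * g * g # RMSProp-style moving average of squared gradients
    m_hat = m / (1 - b1 ** t)     # bias correction for the zero initialization
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(w)  # converges toward the minimum at w = 3
```

The `m` term captures the momentum-like benefit, and the `v` term gives the per-parameter adaptive step size discussed above.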
Now we need to fit the model with our training data. Again, keras makes it simple with only requiring the following code:
model.fit(X_train, Y_train, epochs=100)
Once we fit our model, we need to evaluate it against our test data to see how well it performed. This is done by
model.evaluate(X_test, Y_test)
You can use the information from the evaluation to assess the ability of the model to predict the stock prices.
For the LSTM model, the procedure is similar, hence I will post the code below, leaving the explaining for you to read up on:
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(20, input_shape=(10, 1), return_sequences=True))
model.add(tf.keras.layers.LSTM(20))
model.add(tf.keras.layers.Dense(1, activation=tf.nn.relu))
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(X_train, Y_train, epochs=50)
model.evaluate(X_test, Y_test)
One important point to note is the requirement by keras for the input data to be of certain dimensions, determined by your model. It is crucial that you reshape your data using numpy.
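As a quick sketch of what that reshape looks like for the LSTM above (the sample count here is an arbitrary assumption): an LSTM expects 3-D input of shape (samples, timesteps, features), so the 2-D window matrix needs a trailing feature axis:

```python
import numpy as np

X_train = np.random.rand(500, 10)    # 500 windows of 10 prices: fine for the MLP
X_lstm = X_train.reshape(-1, 10, 1)  # LSTM wants 3-D input: (samples, timesteps, features)

print(X_train.shape, X_lstm.shape)
```

The reshape copies no data and changes no values; it only adds the single-feature axis keras requires.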
Now that we have fitted our models using our training data and evaluated it using our test data, we can take the assessment a step further by backtesting the model on new data. This is done simply by
def back_test(strategy, seq_len, ticker, start_date, end_date, dim):
    """
    A simple back test for a given date period
    :param strategy: the chosen strategy. Note: the model must already be built and fitted with training data.
    :param seq_len: length of the days used for prediction
    :param ticker: company ticker
    :param start_date: starting date
    :type start_date: "YYYY-mm-dd"
    :param end_date: ending date
    :type end_date: "YYYY-mm-dd"
    :param dim: dimension required for strategy: 3dim for LSTM and 2dim for MLP
    :type dim: tuple
    :return: Percentage errors array that gives the errors for every test in the given date range
    """
    data = pdr.get_data_yahoo(ticker, start_date, end_date)
    stock_data = data["Adj Close"]  # a Series, so a single positional index suffices below
    errors = []
    for i in range((len(stock_data)//10)*10 - seq_len - 1):
        x = np.array(stock_data.iloc[i: i + seq_len]).reshape(dim) / 200
        y = np.array(stock_data.iloc[i + seq_len + 1]) / 200
        predict = strategy.predict(x)
        while predict == 0:
            predict = strategy.predict(x)
        error = (predict - y) / 100
        errors.append(error)
    total_error = np.array(errors)
    print(f"Average error = {total_error.mean()}")
However, this backtesting is a simplified version and not a full-blown backtest system. For full-blown backtest systems, you will need to consider factors such as survivorship bias, look-ahead bias, market regime change and transaction costs. Since this is merely an educational project, a simple backtest suffices. However, if you have questions about setting up a full backtest system, then feel free to contact me.
The following shows how my LSTM model performed when predicting the Apple stock price over the month of February
For a simple LSTM model with no optimization, that is quite a good prediction. It really shows us how robust neural networks and machine learning models are in modelling complex relationships between parameters.
Optimizing the neural network model is often important to improve the performance of the model in out-of-sample testing. I have not included the tuning in my open source version of the project, as I want it to be a challenge to those reading it to go ahead and try to optimize the model to make it perform better. For those who do not know about optimizing, it involves finding the hyperparameters that maximize the performance of the model. There are several ways in which you can search for these ideal hyperparameters, from grid search to stochastic methods. I strongly feel that learning to optimize models can take your machine learning knowledge to a new level, and hence, I am going to challenge you to come up with an optimized model that beats my performance shown in the graph above.
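As a starting point for that challenge, a grid search can be as simple as scoring every hyperparameter combination and keeping the best. This is a hedged sketch with a stand-in scoring function; a real run would train and evaluate the keras model for each combination, and the grid values and helper name are assumptions:

```python
import itertools

# Stand-in for "train the model and return its validation error" — an assumption
# for this sketch; a real search would fit and evaluate the keras model here.
def validation_error(units, learning_rate):
    return abs(units - 40) / 100 + abs(learning_rate - 0.01)

grid = {
    "units": [20, 40, 80],
    "learning_rate": [0.1, 0.01, 0.001],
}

# Exhaustively score every combination and keep the best one
best = min(
    itertools.product(grid["units"], grid["learning_rate"]),
    key=lambda combo: validation_error(*combo),
)
print(best)
```

Grid search is exhaustive and easy to reason about; stochastic methods such as random search trade that completeness for far fewer model fits on large grids.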
Machine learning is constantly evolving with new methods being developed every day. It is crucial that we update our knowledge constantly and the best way to do so is by building models for fun projects like stock price prediction. Although the LSTM model above is not good enough to be used in live trading, the foundations built by developing such a model can help us build better models that might one day be used in our trading system. | [
{
"code": null,
"e": 757,
"s": 172,
"text": "Machine Learning and deep learning have become new and effective strategies commonly used by quantitative hedge funds to maximize their profits. As an AI and finance enthusiast myself, this is exciting news as it combines two of my areas of interest. This article will be an introduction on how to use neural networks to predict the stock market, in particular the price of a stock (or index). This post is based on python project in my GitHub, where you can find the full python code and how to use the program. Also, for more content like this, check out my own page: Engineer Quant"
},
{
"code": null,
"e": 1234,
"s": 757,
"text": "Finance is highly nonlinear and sometimes stock price data can even seem completely random. Traditional time series methods such as ARIMA and GARCH models are effective only when the series is stationary, which is a restricting assumption that requires the series to be preprocessed by taking log returns (or other transforms). However, the main issue arises in implementing these models in a live trading system, as there is no guarantee of stationarity as new data is added."
},
{
"code": null,
"e": 1470,
"s": 1234,
"text": "This is combated by using neural networks, which do not require any stationarity to be used. Furthermore, neural networks by nature are effective in finding the relationships between data and using it to predict (or classify) new data."
},
{
"code": null,
"e": 1540,
"s": 1470,
"text": "A typical full stack data science project has the following workflow:"
},
{
"code": null,
"e": 1858,
"s": 1540,
"text": "Data acquisition — this provides us the featuresData preprocessing — an often dreaded but necessary step to make the data usableDevelop and implement model — where we choose the type of neural network and parametersBacktest model — a very crucial step in any trading strategyOptimization — finding suitable parameters"
},
{
"code": null,
"e": 1907,
"s": 1858,
"text": "Data acquisition — this provides us the features"
},
{
"code": null,
"e": 1988,
"s": 1907,
"text": "Data preprocessing — an often dreaded but necessary step to make the data usable"
},
{
"code": null,
"e": 2076,
"s": 1988,
"text": "Develop and implement model — where we choose the type of neural network and parameters"
},
{
"code": null,
"e": 2137,
"s": 2076,
"text": "Backtest model — a very crucial step in any trading strategy"
},
{
"code": null,
"e": 2180,
"s": 2137,
"text": "Optimization — finding suitable parameters"
},
{
"code": null,
"e": 2317,
"s": 2180,
"text": "The input data for our neural network is the past ten days of stock price data and we use it to predict the next day’s stock price data."
},
{
"code": null,
"e": 2555,
"s": 2317,
"text": "Fortunately, the stock price data required for this project is readily available in Yahoo Finance. The data can be acquired by either using their Python API, pdr.get_yahoo_data(ticker, start_date, end_date or directly from their website."
},
{
"code": null,
"e": 2943,
"s": 2555,
"text": "In our case, we need to break up the data into training sets of ten prices and the next day price. I have done this by defining a class Preprocessing, breaking it up into train and test data and defining a method get_train(self, seq_len)that returns the training data (input and output) as numpy arrays, given a particular length of window (ten in our case). The full code is as follows:"
},
{
"code": null,
"e": 3456,
"s": 2943,
"text": "def gen_train(self, seq_len): \"\"\" Generates training data :param seq_len: length of window :return: X_train and Y_train \"\"\" for i in range((len(self.stock_train)//seq_len)*seq_len - seq_len - 1): x = np.array(self.stock_train.iloc[i: i + seq_len, 1]) y = np.array([self.stock_train.iloc[i + seq_len + 1, 1]], np.float64) self.input_train.append(x) self.output_train.append(y) self.X_train = np.array(self.input_train) self.Y_train = np.array(self.output_train)"
},
{
"code": null,
"e": 3551,
"s": 3456,
"text": "Similarly, for the test data, I defined a method that returns the test data X_test and Y_test."
},
{
"code": null,
"e": 3861,
"s": 3551,
"text": "For this project, I have used two neural network models: the Multilayer Perceptron (MLP) and the Long Short Term Model (LSTM). I will give a short introduction into how these models work, but to read through how MLPs work, check out this article. For LSTMs, check out this excellent article by Jakob Aungiers."
},
{
"code": null,
"e": 4542,
"s": 3861,
"text": "MLPs are simplest form of neural networks, where an input is fed into the model, and using certain weights, the values are fed forward through the hidden layers to produce the output. The learning comes from backpropagating through the hidden layers to change the value of the weights between each neuron. An issue with MLPs is the lack of ‘memory’. There is no sense of what happened in previous training data and how that might and should affect the new training data. In the context of our model, the difference between the ten days of data in one dataset and another dataset might be of importance (for example) but MLPs do not have the ability to analyse these relationships."
},
{
"code": null,
"e": 5119,
"s": 4542,
"text": "This is where LSTMs, or in general Recurrent Neural Networks (RNNs) come in. RNNs have the ability of storing certain information about the data for later use and this extends the network’s capability in analyzing the complex structure of the relationships between stock price data. A problem with RNNs is the vanishing gradient problem. This is due to the fact that when the number of layers increases, the learning rate (value less that one) is multiplied several times, and that causes the gradient to keep decreasing. This is combated by LSTMs, making them more effective."
},
{
"code": null,
"e": 5394,
"s": 5119,
"text": "To implement the models, I have chosen keras because it uses the idea of adding layers to the network instead of defining the entire network at once. This opens us up to quick alteration of the number of layers and type of layers, which is handy when optimizing the network."
},
{
"code": null,
"e": 6004,
"s": 5394,
"text": "An important step in using the stock price data is to normalize the data. This would usually mean that you minus the average and divide by standard deviation but in our case, we want to be able to use this system on live trade over a period of time. So taking the statistical moments might not be the most accurate way to normalize the data. So I have merely divided the entire data by 200 (an arbitrary number that makes everything small). Although it seems as though the normalization was plucked out of thin air, it is still effective in making sure the weights in the neural network do not grow too large."
},
{
"code": null,
"e": 6157,
"s": 6004,
"text": "Let us begin with the simpler MLP. In keras this is done by making a sequential model and adding dense layers on top of it. The full code is as follows:"
},
{
"code": null,
"e": 6430,
"s": 6157,
"text": "model = tf.keras.models.Sequential()model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))model.add(tf.keras.layers.Dense(1, activation=tf.nn.relu))model.compile(optimizer=\"adam\", loss=\"mean_squared_error\")"
},
{
"code": null,
"e": 6922,
"s": 6430,
"text": "This is where the elegance of keras really shows. Just with those five lines of code, we have created a MLP with two hidden layers each with a hundred neurons. A little word about the optimizer. Adam optimizer is gaining popularity in the machine learning community because it is a more efficient algorithm to optimize compared to traditional stochastic gradient descent. The advantages are best understood by looking at the advantages of two other extensions of stochastic gradient descent:"
},
{
"code": null,
"e": 7119,
"s": 6922,
"text": "Adaptive Gradient Algorithm (AdaGrad) that maintains a per-parameter learning rate that improves performance on problems with sparse gradients (e.g. natural language and computer vision problems)."
},
{
"code": null,
"e": 7420,
"s": 7119,
"text": "Root Mean Square Propagation (RMSProp) that also maintains per-parameter learning rates that are adapted based on the average of recent magnitudes of the gradients for the weight (e.g. how quickly it is changing). This means the algorithm does well on online and non-stationary problems (e.g. noisy)."
},
{
"code": null,
"e": 7552,
"s": 7420,
"text": "Adam can be thought of as combining the benefits of the above extensions and that is why I have chosen to use Adam as my optimizer."
},
{
"code": null,
"e": 7674,
"s": 7552,
"text": "Now we need to fit the model with our training data. Again, keras makes it simple with only requiring the following code:"
},
{
"code": null,
"e": 7714,
"s": 7674,
"text": "model.fit(X_train, Y_train, epochs=100)"
},
{
"code": null,
"e": 7828,
"s": 7714,
"text": "Once we fit our model, we need to evaluate it against our test data to see how well it performed. This is done by"
},
{
"code": null,
"e": 7859,
"s": 7828,
"text": "model.evaluate(X_test, Y_test)"
},
{
"code": null,
"e": 7971,
"s": 7859,
"text": "You can use the information from the evaluation to assess the ability of the model to predict the stock prices."
},
{
"code": null,
"e": 8097,
"s": 7971,
"text": "For the LSTM model, the procedure is similar, hence I will post the code below, leaving the explaining for you to read up on:"
},
{
"code": null,
"e": 8425,
"s": 8097,
"text": "model = tf.keras.Sequential()model.add(tf.keras.layers.LSTM(20, input_shape=(10, 1), return_sequences=True))model.add(tf.keras.layers.LSTM(20))model.add(tf.keras.layers.Dense(1, activation=tf.nn.relu))model.compile(optimizer=\"adam\", loss=\"mean_squared_error\")model.fit(X_train, Y_train, epochs=50)model.evaluate(X_test, Y_test)"
},
{
"code": null,
"e": 8609,
"s": 8425,
"text": "One important point to note is the requirement by keras for the input data to be of certain dimensions, determined by your model. It is crucial that you reshape your data using numpy."
},
{
"code": null,
"e": 8809,
"s": 8609,
"text": "Now that we have fitted our models using our training data and evaluated it using our test data, we can take the assessment a step further by backtesting the model on new data. This is done simply by"
},
{
"code": null,
"e": 10031,
"s": 8809,
"text": "def back_test(strategy, seq_len, ticker, start_date, end_date, dim): \"\"\" A simple back test for a given date period :param strategy: the chosen strategy. Note to have already formed the model, and fitted with training data. :param seq_len: length of the days used for prediction :param ticker: company ticker :param start_date: starting date :type start_date: \"YYYY-mm-dd\" :param end_date: ending date :type end_date: \"YYYY-mm-dd\" :param dim: dimension required for strategy: 3dim for LSTM and 2dim for MLP :type dim: tuple :return: Percentage errors array that gives the errors for every test in the given date range \"\"\" data = pdr.get_data_yahoo(ticker, start_date, end_date) stock_data = data[\"Adj Close\"] errors = [] for i in range((len(stock_data)//10)*10 - seq_len - 1): x = np.array(stock_data.iloc[i: i + seq_len, 1]).reshape(dim) / 200 y = np.array(stock_data.iloc[i + seq_len + 1, 1]) / 200 predict = strategy.predict(x) while predict == 0: predict = strategy.predict(x) error = (predict - y) / 100 errors.append(error) total_error = np.array(errors) print(f\"Average error = {total_error.mean()}\")"
},
{
"code": null,
"e": 10449,
"s": 10031,
"text": "However, this backtesting is a simplified version and not a full blown backtest system. For full blown backtest systems, you will need to consider factors such as survivorship bias, look ahead bias, market regime change and transaction costs. Since this is merely an educational project, a simple backtest suffices. However, if you have questions about setting up a full backtest system, then feel free to contact me."
},
{
"code": null,
"e": 10562,
"s": 10449,
"text": "The following shows how my LSTM model performed when predicting the Apple stock price over the month of February"
},
{
"code": null,
"e": 10772,
"s": 10562,
"text": "For a simple LSTM model with no optimization, that is quite good prediction. It really shows us how robust neural networks and machine learning models are in modelling complex relationships between parameters."
},
{
"code": null,
"e": 11559,
"s": 10772,
"text": "Optimizing the neural network model is often important to improve the performance of the model in out of sample testing. I have not included the tuning in my open source version of the project, as I want it to be a challenge to those reading it to go ahead and try to optimize the model to make it perform better. For those who do not know about optimizing, it involves finding the hyperparameters that maximize the performance of the model. There are several ways in which you can search for these ideal hyperparameters, from grid search to stochastic methods. I strongly feel that learning to optimize models can take your machine learning knowledge to new level, and hence, I am going to challenge you to come up with a optimized model that beats my performance shown in graph above."
}
] |
How to deal with error “Error in eval(predvars, data, env) : numeric 'envir' arg not of length one” in R?
This error occurs when we do not pass the argument for the independent variable as a data frame. The predict function will predict the value of the dependent variable for the provided values of the independent variable, and we can also predict for the values of the independent variable with which the model was created.
Consider the below data frame −
set.seed(1)
x <-rnorm(20)
y <-runif(20,5,10)
df <-data.frame(x,y)
df
x y
1 -0.62645381 9.104731
2 0.18364332 8.235301
3 -0.83562861 8.914664
4 1.59528080 7.765182
5 0.32950777 7.648598
6 -0.82046838 8.946781
7 0.48742905 5.116656
8 0.73832471 7.386150
9 0.57578135 8.661569
10 -0.30538839 8.463658
11 1.51178117 7.388098
12 0.38984324 9.306047
13 -0.62124058 7.190486
14 -2.21469989 6.223986
15 1.12493092 5.353395
16 -0.04493361 5.497331
17 -0.01619026 6.581359
18 0.94383621 7.593171
19 0.82122120 8.310025
20 0.59390132 7.034151
Creating the linear model −
M <-lm(y~x,data=df)
Formula for prediction that results in error −
predict(M,newdata=df$x,interval="confidence")
Error in eval(predvars, data, env) :
numeric 'envir' arg not of length one
Formula for prediction that does not result in error −
predict(M,newdata=data.frame(df$x),interval="confidence")
fit lwr upr
1 7.642084 6.814446 8.469722
2 7.536960 6.927195 8.146725
3 7.669228 6.738695 8.599762
4 7.353775 6.214584 8.492966
5 7.518031 6.900897 8.135166
6 7.667261 6.744547 8.589975
7 7.497538 6.854767 8.140310
8 7.464980 6.749018 8.180943
9 7.486073 6.821666 8.150480
10 7.600420 6.902430 8.298410
11 7.364611 6.273305 8.455917
12 7.510202 6.885355 8.135048
13 7.641408 6.816180 8.466635
14 7.848187 6.091378 9.604995
15 7.414811 6.530792 8.298831
16 7.566622 6.935903 8.197340
17 7.562892 6.936919 8.188865
18 7.438312 6.639516 8.237107
19 7.454223 6.706932 8.201514
20 7.483722 6.814287 8.153156
We can simply use the model object as well, if we want to predict the dependent variable for the values of the independent variable that were used to build the model −
predict(M)
1 2 3 4 5 6 7 8
7.642084 7.536960 7.669228 7.353775 7.518031 7.667261 7.497538 7.464980
9 10 11 12 13 14 15 16
7.486073 7.600420 7.364611 7.510202 7.641408 7.848187 7.414811 7.566622
17 18 19 20
7.562892 7.438312 7.454223 7.483722
predict(M,interval="confidence")
fit lwr upr
1 7.642084 6.814446 8.469722
2 7.536960 6.927195 8.146725
3 7.669228 6.738695 8.599762
4 7.353775 6.214584 8.492966
5 7.518031 6.900897 8.135166
6 7.667261 6.744547 8.589975
7 7.497538 6.854767 8.140310
8 7.464980 6.749018 8.180943
9 7.486073 6.821666 8.150480
10 7.600420 6.902430 8.298410
11 7.364611 6.273305 8.455917
12 7.510202 6.885355 8.135048
13 7.641408 6.816180 8.466635
14 7.848187 6.091378 9.604995
15 7.414811 6.530792 8.298831
16 7.566622 6.935903 8.197340
17 7.562892 6.936919 8.188865
18 7.438312 6.639516 8.237107
19 7.454223 6.706932 8.201514
20 7.483722 6.814287 8.153156
C++ Vector Library - swap() Function

The C++ function std::vector::swap() exchanges the content of the vector with the contents of vector x.
Following is the declaration for the std::vector::swap() function from the std::vector header.
void swap (vector& x);
x − Another vector object of same type.
None
Constant i.e. O(1)
The following example shows the usage of std::vector::swap() function.
#include <iostream>
#include <vector>
using namespace std;
int main(void) {
vector<int> v1;
vector<int> v2 = {1, 2, 3, 4, 5};
v1.swap(v2);
cout << "Vector v1 contains" << endl;
for (int i = 0; i < v1.size(); ++i)
cout << v1[i] << endl;
return 0;
}
Let us compile and run the above program, this will produce the following result −
Vector v1 contains
1
2
3
4
5
Java Program to Get System MAC Address of Windows and Linux Machine - GeeksforGeeks

04 Jan, 2021
Media Access Control address (MAC address) is a unique hexadecimal identifier assigned to a Network Interface Controller (NIC) to be used as a network address in communications within a network segment. This use is common in most IEEE 802 networking technologies, including Ethernet, Wi-Fi, and Bluetooth. Within the Open Systems Interconnection (OSI) network model, MAC addresses utilized in the medium access control protocol sublayer of the data link layer. As typically represented, MAC addresses are recognizable as six groups of two hexadecimal digits, separated by hyphens, colons, or without a separator.
MAC addresses are primarily assigned by device manufacturers and therefore often mentioned as the burned-in address, or as an Ethernet hardware address, hardware address, or physical address.
Example 1
Java
// Java program to access the MAC address of the
// localhost machine
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.net.UnknownHostException;
import java.util.Enumeration;

public class MACAddress {

    // method to get the MAC Address
    void getMAC(InetAddress addr) throws SocketException
    {
        // create a variable of type NetworkInterface and
        // assign it with the value returned by the
        // getByInetAddress() method
        NetworkInterface iface
            = NetworkInterface.getByInetAddress(addr);

        // create a byte array and store the value returned
        // by the NetworkInterface.getHardwareAddress()
        // method
        byte[] mac = iface.getHardwareAddress();

        // convert the obtained byte array into a printable
        // String
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < mac.length; i++) {
            sb.append(String.format(
                "%02X%s", mac[i],
                (i < mac.length - 1) ? "-" : ""));
        }

        // print the final String containing the MAC Address
        System.out.println(sb.toString());
    }

    // Driver method
    public static void main(String[] args) throws Exception
    {
        // a variable of type InetAddress to store the
        // address of the local host
        InetAddress addr = InetAddress.getLocalHost();

        // instantiate the MACAddress class
        MACAddress obj = new MACAddress();

        System.out.print("MAC Address of the system : ");

        // call the getMAC() method on the current object
        // passing the localhost address as the parameter
        obj.getMAC(addr);
    }
}
Output
Example 2 (when the device has more than one MAC address)
Java
// Java program to access all the MAC addresses of the
// localhost machine
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.net.UnknownHostException;
import java.util.Enumeration;

public class MACAddress {

    public static void main(String[] args) throws Exception
    {
        // instantiate the MACAddress class
        MACAddress obj = new MACAddress();

        // call the getMAC() method on the current object
        // passing the localhost address as the parameter
        obj.getMAC();
    }

    // method to get the MAC addresses of the
    // localhost machine
    void getMAC()
    {
        try {
            // create an Enumeration of type
            // NetworkInterface and store the values
            // returned by
            // NetworkInterface.getNetworkInterfaces()
            // method
            Enumeration<NetworkInterface> networks
                = NetworkInterface.getNetworkInterfaces();

            // for every network in the networks Enumeration
            while (networks.hasMoreElements()) {
                NetworkInterface network
                    = networks.nextElement();

                // call getHardwareAddress() method on each
                // network and store the returned value in a
                // byte array
                byte[] mac = network.getHardwareAddress();

                if (mac != null) {
                    System.out.print(
                        "Current MAC address : ");

                    // convert the obtained byte array into
                    // a printable String
                    StringBuilder sb = new StringBuilder();
                    for (int i = 0; i < mac.length; i++) {
                        sb.append(String.format(
                            "%02X%s", mac[i],
                            (i < mac.length - 1) ? "-" : ""));
                    }

                    // print the final String containing the
                    // MAC Address
                    System.out.println(sb.toString());
                }
            }
        }
        catch (SocketException e) {
            e.printStackTrace();
        }
    }
}
Output
Functional Interfaces in Java
Stream In Java
Constructors in Java
Different ways of Reading a text file in Java
Exceptions in Java
Convert a String to Character array in Java
Java Programming Examples
Convert Double to Integer in Java
Implementing a Linked List in Java using Class
How to Iterate HashMap in Java?
Filter array of objects by a specific property in JavaScript?

Use the concept of map() along with the ternary operator (?). Following are our arrays of objects −
let firstCustomerDetails =
[
{firstName: 'John', amount: 100},
{firstName: 'David', amount: 50},
{firstName: 'Bob', amount: 80}
];
let secondCustomerDetails =
[
{firstName: 'John', amount: 400},
{firstName: 'David', amount: 70},
{firstName: 'Bob', amount: 40}
];
Let’s say we need to filter the array of objects by the amount property. For each position, the object with the greater amount is kept.
let firstCustomerDetails =
[
{firstName: 'John', amount: 100},
{firstName: 'David', amount: 50},
{firstName: 'Bob', amount: 80}
];
let secondCustomerDetails =
[
{firstName: 'John', amount: 400},
{firstName: 'David', amount: 70},
{firstName: 'Bob', amount: 40}
];
var output = firstCustomerDetails.map((key, position) =>
key.amount > secondCustomerDetails[position].amount ? key :
secondCustomerDetails[position]
);
console.log(output);
To run the above program, you need to use the following command −
node fileName.js.
Here, my file name is demo83.js.
This will produce the following output −
PS C:\Users\Amit\JavaScript-code> node demo83.js
[
{ firstName: 'John', amount: 400 },
{ firstName: 'David', amount: 70 },
{ firstName: 'Bob', amount: 80 }
]
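As a side note (with hypothetical sample data, not part of the original example), if the goal is simply to keep the objects whose property passes a threshold rather than to compare two arrays pairwise, Array.prototype.filter is the more direct tool:

```javascript
// keep only the customers whose amount is at least 70
const customers = [
   { firstName: 'John', amount: 400 },
   { firstName: 'David', amount: 70 },
   { firstName: 'Bob', amount: 40 }
];

const bigSpenders = customers.filter(customer => customer.amount >= 70);
console.log(bigSpenders);
// [ { firstName: 'John', amount: 400 }, { firstName: 'David', amount: 70 } ]
```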
For Statement List Implementations

The "FOR" construct offers looping capabilities for batch files. Following is the common construct of the ‘for’ statement for working with a list of values.
FOR %%variable IN list DO do_something
The classic ‘for’ statement consists of the following parts −
Variable declaration – This step is executed only once for the entire loop and used to declare any variables which will be used within the loop. In Batch Script, the variable declaration is done with the %% at the beginning of the variable name.
List – This will be the list of values for which the ‘for’ statement should be executed.
The do_something code block is what needs to be executed for each iteration for the list of values.
The following diagram shows the diagrammatic explanation of this loop.
Following is an example of how the ‘for’ statement can be used.
@echo off
FOR %%F IN (1 2 3 4 5) DO echo %%F
The key thing to note about the above program is −
The variable declaration is done with the %% sign at the beginning of the variable name.
The list of values is defined after the IN clause.
The do_something code is defined after the echo command. Thus for each value in the list, the echo command will be executed.
The above program produces the following output.
1
2
3
4
5
Prototyping a Recommender System Step by Step Part 2: Alternating Least Square (ALS) Matrix Factorization in Collaborative Filtering | by Kevin Liao | Towards Data Science

Part 1 of recommender systems can be found here
In the last post, we covered a lot of ground in how to build our own recommender systems and got our hands dirty with Pandas and Scikit-learn to implement a KNN item-based collaborative filtering movie recommender. The source code of the KNN recommender system can be found in my Github repo. In this post, we will talk about how to improve our movie recommender system with a more sophisticated machine learning technique: matrix factorization. Later in this post, we will discuss why we want to use matrix factorization in collaborative filtering, what matrix factorization is, and how it is implemented in Spark.
During the last section of the previous post, we asked our model for some movie recommendations. After we evaluated the list of recommended movies, we quickly identified two obvious limitations in our KNN approach. One is the “popularity bias”; the other is the “item cold-start problem”. There is a third limitation, the “scalability issue”, which appears when the underlying training data is too big to fit on one machine.
popularity bias: refers to system recommends the movies with the most interactions without any personalization
item cold-start problem: refers to when movies added to the catalogue have either none or very little interactions while recommender rely on the movie’s interactions to make recommendations
scalability issue: refers to lack of the ability to scale to much larger sets of data when more and more users and movies added into our database
All three above are very typical challenges for collaborative filtering recommender. They arrive naturally along with the user-movie (or movie-user) interaction matrix where each entry records an interaction of a user i and a movie j. In a real world setting, the vast majority of movies receive very few or even no ratings at all by users. We are looking at an extremely sparse matrix with more than 99% of entries are missing values.
With such a sparse matrix, what ML algorithms can be trained reliably enough to make inferences? To answer that question, we are effectively solving a data sparsity problem.
In collaborative filtering, matrix factorization is the state-of-the-art solution for the sparse data problem; it became widely known during the Netflix Prize Challenge.
What is matrix factorization? Matrix factorization is simply a family of mathematical operations for matrices in linear algebra. To be specific, a matrix factorization is a factorization of a matrix into a product of matrices. In the case of collaborative filtering, matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. One matrix can be seen as the user matrix where rows represent users and columns are latent factors. The other matrix is the item matrix where rows are latent factors and columns represent items.
How does matrix factorization solve our problems?
1. The model learns to factorize the rating matrix into user and movie representations, which allows the model to predict better personalized movie ratings for users
2. With matrix factorization, less-known movies can have latent representations as rich as popular movies have, which improves the recommender’s ability to recommend less-known movies
In the sparse user-item interaction matrix, the predicted rating user u will give item i is computed as:
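The equation image did not survive extraction; in the standard notation for a model with k latent factors, the predicted rating is presumably:

```latex
\hat{r}_{ui} = \mathbf{x}_u^{\top} \mathbf{y}_i = \sum_{f=1}^{k} x_{uf}\, y_{fi}
```

where \mathbf{x}_u is user u's latent vector from the user matrix and \mathbf{y}_i is item i's latent vector from the item matrix.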
Rating of item i given by user u can be expressed as a dot product of the user latent vector and the item latent vector.
Notice that in the above formula, the number of latent factors can be tuned via cross-validation. Latent factors are the features in the lower-dimensional latent space projected from the user-item interaction matrix. The idea behind matrix factorization is to use latent factors to represent user preferences or movie topics in a much lower-dimensional space. Matrix factorization is one of the most effective dimension reduction techniques in machine learning.
Very much like the concept of components in PCA, the number of latent factors determines the amount of abstract information that we want to store in a lower dimension space. A matrix factorization with one latent factor is equivalent to a most popular or top popular recommender (e.g. recommends the items with the most interactions without any personalization). Increasing the number of latent factors will improve personalization, until the number of factors becomes too high, at which point the model starts to overfit. A common strategy to avoid overfitting is to add regularization terms to the objective function.
The objective of matrix factorization is to minimize the error between true rating and predicted rating:
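The objective-function image is missing from this copy; it is presumably the standard regularized squared-error objective, summing over the set K of observed user-item pairs, with regularization weight λ:

```latex
\min_{\mathbf{x}_*,\, \mathbf{y}_*} \sum_{(u,i) \in \mathcal{K}} \left( r_{ui} - \mathbf{x}_u^{\top} \mathbf{y}_i \right)^2 + \lambda \left( \lVert \mathbf{x}_u \rVert^2 + \lVert \mathbf{y}_i \rVert^2 \right)
```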
Once we have an objective function, we just need a training routine (e.g., gradient descent) to complete the implementation of a matrix factorization algorithm. This implementation is actually called Funk SVD. It is named after Simon Funk, who shared his findings with the research community during the Netflix Prize challenge in 2006.
Although Funk SVD was very effective in matrix factorization with single machine during that time, it’s not scalable as the amount of data grows today. With terabytes or even petabytes of data, it’s impossible to load data with such size into a single machine. So we need a machine learning model (or framework) that can train on dataset spreading across from cluster of machines.
Alternating Least Square (ALS) is also a matrix factorization algorithm, and it runs in a parallel fashion. ALS is implemented in Apache Spark ML and built for large-scale collaborative filtering problems. ALS does a pretty good job of handling the scalability and sparseness of the ratings data, and it is simple and scales well to very large datasets.
Some high-level ideas behind ALS are:
Its objective function is slightly different than Funk SVD: ALS uses L2 regularization while Funk uses L1 regularization
Its training routine is different: ALS minimizes two loss functions alternately; it first holds the user matrix fixed and solves a regularized least-squares problem for the item matrix, then holds the item matrix fixed and solves for the user matrix
Its scalability: ALS runs its least-squares updates in parallel across multiple partitions of the underlying training data on a cluster of machines
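The alternating scheme is easy to see in a toy setting. The sketch below is not Spark's implementation — it is a rank-1, pure-Python illustration on a made-up 3×3 rating matrix, where each alternating update has a closed-form scalar solution x_u = Σ_i r_ui·y_i / (Σ_i y_i² + λ):

```python
# Minimal rank-1 ALS sketch on a tiny dense rating matrix (toy data).
R = [
    [5.0, 4.0, 1.0],
    [4.0, 5.0, 1.0],
    [1.0, 1.0, 5.0],
]
n_users, n_items = len(R), len(R[0])
lam = 0.01                       # L2 regularization strength
x = [1.0] * n_users              # user factors (rank 1, so scalars)
y = [1.0] * n_items              # item factors

def loss():
    err = sum((R[u][i] - x[u] * y[i]) ** 2
              for u in range(n_users) for i in range(n_items))
    reg = lam * (sum(v * v for v in x) + sum(v * v for v in y))
    return err + reg

before = loss()
for _ in range(10):              # alternate the closed-form updates
    for u in range(n_users):     # hold y fixed, solve for each x[u]
        x[u] = sum(R[u][i] * y[i] for i in range(n_items)) / \
               (sum(v * v for v in y) + lam)
    for i in range(n_items):     # hold x fixed, solve for each y[i]
        y[i] = sum(R[u][i] * x[u] for u in range(n_users)) / \
               (sum(v * v for v in x) + lam)
after = loss()
print("loss before: %.2f, after: %.2f" % (before, after))
# the loss drops substantially after the alternating updates
```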
If you are interested in learning more about ALS, you can find more details in this paper: Large-scale Parallel Collaborative Filtering for the Netflix Prize
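To make the alternating idea concrete, here is a single-machine NumPy sketch on a toy matrix (illustrative data, not Spark code; note that in the classic formulation each half-step is solved as a closed-form regularized least-squares problem):

```python
import numpy as np

# Hypothetical toy rating matrix; 0 marks "no interaction".
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [0., 2., 5.]])
mask = R > 0
n_users, n_items = R.shape
k, reg = 2, 0.05                        # latent factors, L2 regularization

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

for _ in range(15):
    # Hold V fixed: solve a regularized least-squares problem per user.
    for u in range(n_users):
        Vu = V[mask[u]]                               # items this user rated
        A = Vu.T @ Vu + reg * np.eye(k)
        U[u] = np.linalg.solve(A, Vu.T @ R[u, mask[u]])
    # Hold U fixed: solve per item.
    for i in range(n_items):
        Ui = U[mask[:, i]]                            # users who rated this item
        A = Ui.T @ Ui + reg * np.eye(k)
        V[i] = np.linalg.solve(A, Ui.T @ R[mask[:, i], i])

pred = U @ V.T
rmse = np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))
```

Because each user's (and each item's) subproblem is independent of the others, the inner loops can be distributed across partitions of the data, which is exactly what Spark exploits.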
Just like other machine learning algorithms, ALS has its own set of hyper-parameters. We probably want to tune its hyper-parameters via hold-out validation or cross-validation.
The most important hyper-parameters in Alternating Least Squares (ALS):
maxIter: the maximum number of iterations to run (defaults to 10)
rank: the number of latent factors in the model (defaults to 10)
regParam: the regularization parameter in ALS (defaults to 1.0)
Hyper-parameter tuning is a highly recurring task in many machine learning projects. We can code it up in a function to speed up the tuning iterations.
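A generic version of such a tuning function might look like the sketch below. The scorer here is a stand-in for brevity — in the real pipeline it would train an ALS model on a training split and return the validation RMSE — and it is rigged so the grid search lands on the values reported below; only the looping pattern is the point:

```python
import itertools

def tune(train_and_score, param_grid):
    """Grid search: try every combination, keep the one with the lowest RMSE."""
    best_params, best_rmse = None, float("inf")
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = train_and_score(**params)
        if score < best_rmse:
            best_params, best_rmse = params, score
    return best_params, best_rmse

# Stand-in scorer (NOT a real ALS training run): its minimum is placed at
# rank=20, regParam=0.05 purely for illustration.
def fake_scorer(maxIter, rank, regParam):
    return abs(rank - 20) * 0.01 + abs(regParam - 0.05) + 1.0 / maxIter

grid = {"maxIter": [10], "rank": [10, 20, 50], "regParam": [0.05, 0.1, 1.0]}
best, _ = tune(fake_scorer, grid)   # best == {'maxIter': 10, 'rank': 20, 'regParam': 0.05}
```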
After tuning, we found the best choice of hyper-parameters: maxIter=10, regParam=0.05, rank=20
Now that we have a wonderful model for movie recommendation, the next question is: how do we take our wonderful model and productize it into a recommender system? Machine learning model productization is another big topic, and I won't get into details about it. In this post, I will show how to build an MVP (minimum viable product) version of the ALS recommender.
To productize a model, we need to build a workflow around the model. A typical ML workflow roughly starts with data preparation via a pre-defined set of ETL jobs, followed by offline/online model training, then ingesting trained models into web services for production. In our case, we are going to build a very minimal version of the movie recommender that just does the job. Our workflow is as follows:
A new user inputs his/her favorite movies, then the system creates new user-movie interaction samples for the model
The system retrains the ALS model on the data with the new inputs
The system creates movie data for inference (in my case, I sample all movies from the data)
The system makes rating predictions on all movies for that user
The system outputs the top N movie recommendations for that user based on the ranking of the movie rating predictions
Here is a small snippet of the source code for our MVP recommender system:
This snippet demonstrates our make_recommendations method in our recommender's implementation. Please find the detailed source code for the recommender application in my GitHub Repo.
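The embedded snippet itself is not reproduced in this extract; as a stand-in, here is a minimal, self-contained sketch (plain NumPy, with made-up movie names and factor values) of the fold-in logic such a make_recommendations method performs: fit a latent vector for the new user against the fixed item factors, then rank the unrated movies by predicted rating:

```python
import numpy as np

# Hypothetical trained item factors (3 latent features) for a tiny catalogue.
movies = ["Iron Man", "Avengers", "Titanic", "Notebook", "Alien"]
V = np.array([[1.2, 0.1, 0.3],      # item latent matrix, one row per movie
              [1.1, 0.2, 0.4],
              [0.1, 1.3, 0.2],
              [0.0, 1.2, 0.1],
              [0.9, 0.1, 1.1]])

def make_recommendations(fav_ratings, top_n, reg=0.1):
    """Fold a new user in: fit their latent vector against the fixed item
    factors (one ALS half-step), then rank the movies they haven't rated."""
    idx = [movies.index(m) for m, _ in fav_ratings]
    r = np.array([score for _, score in fav_ratings])
    Vi = V[idx]
    u = np.linalg.solve(Vi.T @ Vi + reg * np.eye(V.shape[1]), Vi.T @ r)
    preds = V @ u
    preds[idx] = -np.inf            # don't recommend what was already rated
    ranked = np.argsort(preds)[::-1][:top_n]
    return [movies[i] for i in ranked]

recs = make_recommendations([("Iron Man", 5.0)], top_n=2)
```

In the real application the same steps run as a PySpark job over the full MovieLens catalogue; here the factors are tiny and invented so the logic is easy to follow.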
Once we have implemented the ALS recommender system in a Python script as a small PySpark program, we can submit our Spark application to a cluster in Client Deploy Mode or Cluster Deploy Mode and enjoy the power of distributed computing.
Finally, we are done with the technical details and implementations. Now let's ask our recommender for some movie recommendations. I will pretend to be a new user and input my favorite movie “Iron Man” again into this new recommender system. Let's see what movies it recommends to me. Hopefully, they are not the same list of popular movies that I have watched many times before.
For demo purposes, I submit my Spark application locally by running the following command in the terminal (instructions for the commands can be found here):
spark-submit --master local[4] --driver-memory 4g --executor-memory 8g src/als_recommender.py --movie_name "Iron Man" --top_n 10
Yay!! We successfully ran our movie recommender with Spark.
This new list of recommended movies is completely different from the list produced by the previous KNN recommender, which is very interesting! I have never watched any of the movies on this new list. I find it very surprising that the new recommender proposed such unusual movies to me. They might be too unusual for other users, which could be problematic.
One idea to further improve our movie recommender system is to blend this new list of movie recommendations with the previous list from the KNN recommender. We would basically implement a hybrid recommender system, and this hybrid recommender could offer both popular and less-known content to users.
In this post, we covered how to improve a collaborative filtering recommender system with matrix factorization. We learned that matrix factorization can solve the “popularity bias” and “item cold-start” problems in collaborative filtering. We also leveraged Spark ML to implement a distributed recommender system using Alternating Least Squares (ALS). The Jupyter Notebook version of this blog post can be found here. If you want to play around with my source code, you can find it here.
In my next post, we will dig deeper into matrix factorization techniques. We will develop a more generalized form of the matrix factorization model with a neural network implementation in Keras. Stay tuned! Until then, have fun with machine learning and recommenders!
Like what you read? Check out more data science / machine learning projects at my Github: Kevin’s Data Science Portfolio
emacs command in Linux with examples - GeeksforGeeks | 15 May, 2019
Introduction to the Emacs Editor in Linux/Unix Systems: Emacs refers to a family of editors, which means it has many versions, flavors, or iterations. The most commonly used version of the Emacs editor is GNU Emacs, created by Richard Stallman. The main difference between text editors like vi, vim, and nano and Emacs is that Emacs is faster, more powerful, and simpler in terms of usage because of its simple user interface. Unlike the vi editor, the Emacs editor does not use an insert mode; it is in editing mode by default, i.e., whatever you type is directly written to the buffer, unless you manually enter command mode by using keyboard shortcuts.
Installing the Emacs Editor:
Ubuntu / Debian:
sudo apt-get install emacs
Redhat / CentOS and Derivatives:
yum install emacs
If the above method doesn’t work for you or you want to manually compile emacs, follow these steps:
STEP 1: Download the latest version (26.1) of the source code from the GNU server with the following command:
curl -o /emacs/emacs-26.1.tar.gz https://ftp.gnu.org/pub/gnu/emacs/emacs-26.1.tar.gz
STEP 2: Extract the tar.gz file.
tar -zxvf emacs-26.1.tar.gz
STEP 3: Install Prerequisites.
sudo apt-get update
sudo apt-get install build-essential libgnutls28-dev libncurses-dev
STEP 4: Install Emacs.
cd /emacs/emacs-26.1/
./configure #Configure Emacs
make #build components using makefile
sudo make install #Install Emacs
The above steps will install Emacs on your system. To confirm the install, you can check from the terminal using the following command:
emacs --version
To use the emacs editor, use the command “emacs [-option] [file name]” (without quotation marks):
Example:
emacs new.txt
Explanation: This command creates a file called new.txt if it doesn’t already exist. If a file with that name already exists, its content is copied to the memory buffer and shown in the editing buffer area.
Note: Using the emacs command with no filename opens the default interface of the emacs editor, as shown in the below image. This screen is user-friendly and you can navigate using the link options highlighted in the screen, like the option visit new file creates a new file buffer for you to start writing.
Emacs Common Options:
--file file_name, --find-file file_name, --visit file_name : This option is used to provide the file name to edit. However, in most cases this is not required and the file name can be given directly.
+number : The number here specifies the line number in the file that follows it in the command, and the cursor is moved to that line. There should be no space between the number and the + sign.
+line:column : Here, line represents the line number or row and column represents the number of characters. The cursor is automatically placed at this position in the file that follows.
-q, --no-init-file : This option prevents Emacs from loading an initialization or init file.
--no-splash : This option prevents Emacs from showing the splash screen at startup.
-u user, --user user : Load the user's init file.
--version : Display version and license information.
--help : Display help.
Note: For more options, you can type “man emacs” or “emacs --help” without the quotation marks.
Emacs – Common Keyboard Shortcuts
General Shortcuts:
ctrl-x ctrl-f : Find file or Open a file. This command prompts for a file name and opens it in buffer for editing. Also, it creates a new file if it doesn’t already exist.
ctrl-x ctrl-s : Save File. This saves the current buffer content to the file.
ctrl-x ctrl-w : Write to file. This command prompts for a file name to save buffer.
Copy, cut and paste shortcuts:
ctrl-d : Cut the character at the position of cursor.
ESC d : Cut the word till next blank space from the current position.
ctrl-k : Cut till end of the line from current position.
ctrl-@ : Mark the current position as beginning for copy.
ESC w : copy area between mark and cursor to paste.
ctrl-y : Yank or Paste the recently copied or cut characters at the current position of cursor.
Search and Replace:
ctrl-s : Search forward - prompts for a search term and searches for it in the buffer from the current cursor position to the end of the buffer.
ctrl-r : Search backwards/reverse - prompts for a search term and searches from the current position to the beginning of the buffer.
ESC % : Replace- prompts for a search term and a replacement term and replaces the first occurrence of the word in buffer after cursor.
Moving cursor:
ctrl-a : Beginning of the line.
ctrl-e : End of line.
ctrl-f : Move forward by one character.
ctrl-b : Move back by one character.
ctrl-n : Move cursor to next line.
ctrl-p : Cursor to previous line.
ESC > : End of the buffer.
ESC < : Starting of the buffer.
ESC f : Move forward by one word.
ESC b : Move back by one word.
Miscellaneous:
ctrl-z : Stop Emacs and quit immediately without confirmation(All changes in buffer are lost).
ctrl-g : Cancel the current command and revert from command mode.
ctrl-x u : undo the last command.
ctrl-x ctrl-c : Save and quit.
ctrl-h i : Help in Emacs- describes emacs shortcuts and commands.
Help page inside emacs:
[
{
"code": null,
"e": 27624,
"s": 26781,
"text": "–file file_name, –find-file file_name, –visit file_nameThis option is used to provide file name to edit. However, in most cases, this is not required and directly file name ca be mentioned.+numberThe number here specifies the line number in the file which is followed in the command, and the cursor is moved to that line. There should be no space between the number and the + sign.+line:columnHere line represents the line number or row and the column represents the number of characters. The cursor is automatically placed to thisposition in the file that is followed.-q, –no-init-fileThis option prevents Emacs from loading an initialization or init file.–no-splashThis option prevents Emacs from showing splash screen at startup.-u user, –user userLoad user’s init file.–versionTo display version and license information.–helpDisplay help."
},
{
"code": null,
"e": 27814,
"s": 27624,
"text": "–file file_name, –find-file file_name, –visit file_nameThis option is used to provide file name to edit. However, in most cases, this is not required and directly file name ca be mentioned."
},
{
"code": null,
"e": 28007,
"s": 27814,
"text": "+numberThe number here specifies the line number in the file which is followed in the command, and the cursor is moved to that line. There should be no space between the number and the + sign."
},
{
"code": null,
"e": 28196,
"s": 28007,
"text": "+line:columnHere line represents the line number or row and the column represents the number of characters. The cursor is automatically placed to thisposition in the file that is followed."
},
{
"code": null,
"e": 28285,
"s": 28196,
"text": "-q, –no-init-fileThis option prevents Emacs from loading an initialization or init file."
},
{
"code": null,
"e": 28361,
"s": 28285,
"text": "–no-splashThis option prevents Emacs from showing splash screen at startup."
},
{
"code": null,
"e": 28403,
"s": 28361,
"text": "-u user, –user userLoad user’s init file."
},
{
"code": null,
"e": 28455,
"s": 28403,
"text": "–versionTo display version and license information."
},
{
"code": null,
"e": 28474,
"s": 28455,
"text": "–helpDisplay help."
},
{
"code": null,
"e": 28570,
"s": 28474,
"text": "Note: For more options, you can type “man emacs” or “emacs --help” without the quotation marks."
},
{
"code": null,
"e": 28604,
"s": 28570,
"text": "Emacs – Common Keyboard Shortcuts"
},
{
"code": null,
"e": 30408,
"s": 28604,
"text": "General Shortcuts:ctrl-x ctrl-f : Find file or Open a file. This command prompts for a file name and opens it in buffer for editing. Also, it creates a new file if it doesn’t already exist.ctrl-x ctrl-s : Save File. This saves the current buffer content to the file.ctrl-x ctrl-w : Write to file. This command prompts for a file name to save buffer.Copy, cut and paste shortcuts:ctrl-d : Cut the character at the position of cursor.ESC d : Cut the word till next blank space from the current position.ctrl-k : Cut till end of the line from current position.ctrl-@ : Mark the current position as beginning for copy.ESC w : copy area between mark and cursor to paste.ctrl-y : Yank or Paste the recently copied or cut characters at the current position of cursor.Search and Replace:ctrl-s : Search forward- prompts for a search terms and search it in the buffer from current cursor position to the end of the buffer.ctrl-r : Search backwards/reverse- prompts for a search term and search from current position to the beginning of the buffer.ESC % : Replace- prompts for a search term and a replacement term and replaces the first occurrence of the word in buffer after cursor.Moving cursor:ctrl-a : Beginning of the line.ctrl-e : End of line.ctrl-f : Move forward by one character.ctrl-b : Move back by one character.ctrl-n : Move cursor to next line.ctrl-p : Cursor to previous line.ESC > : End of the buffer.ESC < : Starting of the buffer.ESC f : Move forward by one word.ESC b : Move back by one word.Miscellaneous:ctrl-z : Stop Emacs and quit immediately without confirmation(All changes in buffer are lost).ctrl-g : Cencel current command and revert back from command mode.ctrl-x u : undo the last command.ctrl-x ctrl-c : Save and quit.ctrl-h i : Help in Emacs- describes emacs shortcuts and commands."
},
{
"code": null,
"e": 30758,
"s": 30408,
"text": "General Shortcuts:ctrl-x ctrl-f : Find file or Open a file. This command prompts for a file name and opens it in buffer for editing. Also, it creates a new file if it doesn’t already exist.ctrl-x ctrl-s : Save File. This saves the current buffer content to the file.ctrl-x ctrl-w : Write to file. This command prompts for a file name to save buffer."
},
{
"code": null,
"e": 30930,
"s": 30758,
"text": "ctrl-x ctrl-f : Find file or Open a file. This command prompts for a file name and opens it in buffer for editing. Also, it creates a new file if it doesn’t already exist."
},
{
"code": null,
"e": 31008,
"s": 30930,
"text": "ctrl-x ctrl-s : Save File. This saves the current buffer content to the file."
},
{
"code": null,
"e": 31092,
"s": 31008,
"text": "ctrl-x ctrl-w : Write to file. This command prompts for a file name to save buffer."
},
{
"code": null,
"e": 31504,
"s": 31092,
"text": "Copy, cut and paste shortcuts:ctrl-d : Cut the character at the position of cursor.ESC d : Cut the word till next blank space from the current position.ctrl-k : Cut till end of the line from current position.ctrl-@ : Mark the current position as beginning for copy.ESC w : copy area between mark and cursor to paste.ctrl-y : Yank or Paste the recently copied or cut characters at the current position of cursor."
},
{
"code": null,
"e": 31558,
"s": 31504,
"text": "ctrl-d : Cut the character at the position of cursor."
},
{
"code": null,
"e": 31628,
"s": 31558,
"text": "ESC d : Cut the word till next blank space from the current position."
},
{
"code": null,
"e": 31685,
"s": 31628,
"text": "ctrl-k : Cut till end of the line from current position."
},
{
"code": null,
"e": 31743,
"s": 31685,
"text": "ctrl-@ : Mark the current position as beginning for copy."
},
{
"code": null,
"e": 31795,
"s": 31743,
"text": "ESC w : copy area between mark and cursor to paste."
},
{
"code": null,
"e": 31891,
"s": 31795,
"text": "ctrl-y : Yank or Paste the recently copied or cut characters at the current position of cursor."
},
{
"code": null,
"e": 32305,
"s": 31891,
"text": "Search and Replace:ctrl-s : Search forward- prompts for a search terms and search it in the buffer from current cursor position to the end of the buffer.ctrl-r : Search backwards/reverse- prompts for a search term and search from current position to the beginning of the buffer.ESC % : Replace- prompts for a search term and a replacement term and replaces the first occurrence of the word in buffer after cursor."
},
{
"code": null,
"e": 32325,
"s": 32305,
"text": "Search and Replace:"
},
{
"code": null,
"e": 32460,
"s": 32325,
"text": "ctrl-s : Search forward- prompts for a search terms and search it in the buffer from current cursor position to the end of the buffer."
},
{
"code": null,
"e": 32586,
"s": 32460,
"text": "ctrl-r : Search backwards/reverse- prompts for a search term and search from current position to the beginning of the buffer."
},
{
"code": null,
"e": 32722,
"s": 32586,
"text": "ESC % : Replace- prompts for a search term and a replacement term and replaces the first occurrence of the word in buffer after cursor."
},
{
"code": null,
"e": 33051,
"s": 32722,
"text": "Moving cursor:ctrl-a : Beginning of the line.ctrl-e : End of line.ctrl-f : Move forward by one character.ctrl-b : Move back by one character.ctrl-n : Move cursor to next line.ctrl-p : Cursor to previous line.ESC > : End of the buffer.ESC < : Starting of the buffer.ESC f : Move forward by one word.ESC b : Move back by one word."
},
{
"code": null,
"e": 33083,
"s": 33051,
"text": "ctrl-a : Beginning of the line."
},
{
"code": null,
"e": 33105,
"s": 33083,
"text": "ctrl-e : End of line."
},
{
"code": null,
"e": 33145,
"s": 33105,
"text": "ctrl-f : Move forward by one character."
},
{
"code": null,
"e": 33182,
"s": 33145,
"text": "ctrl-b : Move back by one character."
},
{
"code": null,
"e": 33217,
"s": 33182,
"text": "ctrl-n : Move cursor to next line."
},
{
"code": null,
"e": 33251,
"s": 33217,
"text": "ctrl-p : Cursor to previous line."
},
{
"code": null,
"e": 33278,
"s": 33251,
"text": "ESC > : End of the buffer."
},
{
"code": null,
"e": 33310,
"s": 33278,
"text": "ESC < : Starting of the buffer."
},
{
"code": null,
"e": 33344,
"s": 33310,
"text": "ESC f : Move forward by one word."
},
{
"code": null,
"e": 33375,
"s": 33344,
"text": "ESC b : Move back by one word."
},
{
"code": null,
"e": 33678,
"s": 33375,
"text": "Miscellaneous:ctrl-z : Stop Emacs and quit immediately without confirmation(All changes in buffer are lost).ctrl-g : Cencel current command and revert back from command mode.ctrl-x u : undo the last command.ctrl-x ctrl-c : Save and quit.ctrl-h i : Help in Emacs- describes emacs shortcuts and commands."
},
{
"code": null,
"e": 33773,
"s": 33678,
"text": "ctrl-z : Stop Emacs and quit immediately without confirmation(All changes in buffer are lost)."
},
{
"code": null,
"e": 33840,
"s": 33773,
"text": "ctrl-g : Cencel current command and revert back from command mode."
},
{
"code": null,
"e": 33874,
"s": 33840,
"text": "ctrl-x u : undo the last command."
},
{
"code": null,
"e": 33905,
"s": 33874,
"text": "ctrl-x ctrl-c : Save and quit."
},
{
"code": null,
"e": 33971,
"s": 33905,
"text": "ctrl-h i : Help in Emacs- describes emacs shortcuts and commands."
},
{
"code": null,
"e": 33995,
"s": 33971,
"text": "Help page inside emacs:"
},
{
"code": null,
"e": 34009,
"s": 33995,
"text": "linux-command"
},
{
"code": null,
"e": 34029,
"s": 34009,
"text": "Linux-misc-commands"
},
{
"code": null,
"e": 34036,
"s": 34029,
"text": "Picked"
},
{
"code": null,
"e": 34047,
"s": 34036,
"text": "Linux-Unix"
},
{
"code": null,
"e": 34145,
"s": 34047,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34183,
"s": 34145,
"text": "TCP Server-Client implementation in C"
},
{
"code": null,
"e": 34218,
"s": 34183,
"text": "ZIP command in Linux with examples"
},
{
"code": null,
"e": 34253,
"s": 34218,
"text": "tar command in Linux with examples"
},
{
"code": null,
"e": 34289,
"s": 34253,
"text": "curl command in Linux with Examples"
},
{
"code": null,
"e": 34327,
"s": 34289,
"text": "UDP Server-Client implementation in C"
},
{
"code": null,
"e": 34365,
"s": 34327,
"text": "Conditional Statements | Shell Script"
},
{
"code": null,
"e": 34400,
"s": 34365,
"text": "Cat command in Linux with examples"
},
{
"code": null,
"e": 34436,
"s": 34400,
"text": "Tail command in Linux with examples"
},
{
"code": null,
"e": 34473,
"s": 34436,
"text": "touch command in Linux with Examples"
}
] |
Inorder Traversal (Iterative) | Practice | GeeksforGeeks | Given a binary tree. Find the inorder traversal of the tree without using recursion.
Example 1
Input:
       1
      / \
     2   3
    / \
   4   5
Output: 4 2 5 1 3
Explanation:
Inorder traversal (Left->Root->Right) of
the tree is 4 2 5 1 3.
Example 2
Input:
       8
      / \
     1   5
      \ / \
       7 10 6
        \ /
        10 6
Output: 1 7 10 8 6 10 5 6
Explanation:
Inorder traversal (Left->Root->Right)
of the tree is 1 7 10 8 6 10 5 6.
Expected time complexity: O(N)
Expected auxiliary space: O(N)
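Before the community solutions below, here is a minimal stack-based sketch of the expected O(N)-auxiliary-space approach, written in Python. The `Node` class and the driver at the bottom are assumptions for illustration only — on the judge you implement just the traversal against its own node type.

```python
# Iterative inorder traversal with an explicit stack (no recursion).
# Node is a stand-in for the judge's node type (data/left/right assumed).
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def in_order(root):
    result, stack, curr = [], [], root
    while curr or stack:
        # Slide down the left spine, saving ancestors on the stack.
        while curr:
            stack.append(curr)
            curr = curr.left
        curr = stack.pop()          # leftmost unvisited node
        result.append(curr.data)    # visit: Left -> Root -> Right
        curr = curr.right           # then traverse its right subtree
    return result

# Tree from Example 1:
#        1
#       / \
#      2   3
#     / \
#    4   5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(in_order(root))  # -> [4, 2, 5, 1, 3]
```

Each node is pushed and popped exactly once, giving O(N) time and O(H) ≤ O(N) extra space for the stack.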
0
utkarshagarwal1012 weeks ago
Node* getRightMost(Node* curr, Node* left){
while(left->right!=NULL&&left->right!=curr){ // helper function // until left->right is NULL or equal to curr
left=left->right;
}
return left;
}
vector<int> inOrder(Node* root)
{
//code here
vector<int>ans;
Node* curr = root;
while(curr){ // until the current node is not NULL
Node* leftNode = curr->left; // at each step take its left node
if(leftNode==NULL){ // if left node is NULL, that is doesn't exist then u have to move towards the right, and in inorder, when u make a call to right, first u store the curr->val
ans.push_back(curr->data); // store the curr->val in ans vector
curr=curr->right; /// move to the right
}
else{
Node* rightMost = getRightMost(curr,leftNode); // if left node is not NULL, then find the rightmost node of left child, since the rightmost's child's inorder successor would be curr node
if(rightMost->right==NULL){
rightMost->right = curr; // make a thread from rightmost to curr
curr=curr->left; // move cur towards left child
}
else{
                    rightMost->right = NULL; // if the thread already exists (rightMost->right is not NULL and points to curr), this left subtree has already been processed
// hence u need to move towards right and whenever u move towards right in inorder, first add element to and vector
ans.push_back(curr->data);
curr=curr->right; // move cur towards right
}
}
}
return ans;
}
};
+1
hanumanmanyam8373 weeks ago
class Solution
{
// Return a list containing the inorder traversal of the given tree
ArrayList<Integer> inOrder(Node root)
{
// Code
ArrayList<Integer>res=new ArrayList<>();
Stack<Node>st=new Stack<>();
while(true)
{
if(root!=null)
{
st.push(root);
root=root.left;
}
else
{
if(st.isEmpty())
{
break;
}
else
{
root=st.pop();
res.add(root.data);
root=root.right;
}
}
}
return res;
}
}
0
scien-terrific1 month ago
class Solution {
public:
vector<int> inOrder(Node* root)
{
//code here
vector<int>res;
Node* cur=root;
while(cur){
if(!cur->left){
res.push_back(cur->data);
cur=cur->right;
}
else{
Node* prev=cur->left;
while(prev->right and prev->right!=cur)prev=prev->right;
if(!prev->right){
prev->right=cur;
cur=cur->left;
}
else{
prev->right=NULL;
res.push_back(cur->data);
cur=cur->right;
}
}
}
return res;
}
};
0
devashishbakare2 months ago
Java
ArrayList<Integer> inOrder(Node root) {
    ArrayList<Integer> ans = new ArrayList<>();
    if (root == null) return ans;
    Stack<Node> stack = new Stack<>();
    while (true) {
        if (root != null) {
            // go to the extreme left until null is found, pushing nodes
            // onto the stack to keep track of the previous calls
            stack.push(root);
            root = root.left;
        } else {
            // break the loop when the stack is empty
            if (stack.isEmpty()) break;
            // pop the node since it is the extreme left,
            // then pass the call to its right subtree for the same check
            root = stack.peek();
            stack.pop();
            ans.add(root.data);
            root = root.right;
        }
    }
    return ans;
}
+1
mr_coder99332 months ago
vector<int> inOrder(Node* root)
{
vector<int> ans;
stack<Node*> s;
Node* node=root;
while(true){
if(node){
s.push(node);
node=node->left;
}else{
if(s.empty()) break;
node=s.top();
s.pop();
ans.push_back(node->data);
node=node->right;
}
}
return ans;
}
0
prabhakarsati294262 months ago
def inOrder(self, root):
    # code here
    if root is None:
        return
    st = []
    curr = root
    while curr != None:
        st.append(curr)
        curr = curr.left
    while len(st) > 0:
        curr = st.pop()
        print(curr.data, end=" ")
        curr = curr.right
        while curr != None:
            st.append(curr)
            curr = curr.left
    return st
0
himanshu11042003singh2 months ago
SIMPLE C++ SOLUTION:-
TIME COMPLEXITY- O(N)
SPACE COMPLEXITY- O(1)
CODE-
class Solution {
public:
    vector<int> inOrder(Node* root) {
        // code here
        vector<int> inorder;
        Node* curr = root;
        while (curr != NULL) {
            if (curr->left == NULL) {
                inorder.push_back(curr->data);
                curr = curr->right;
            }
            else {
                Node* prev = curr->left;
                while (prev->right && prev->right != curr) {
                    prev = prev->right;
                }
                if (prev->right == NULL) {
                    prev->right = curr;
                    curr = curr->left;
                }
                else {
                    prev->right = NULL;
                    inorder.push_back(curr->data);
                    curr = curr->right;
                }
            }
        }
        return inorder;
    }
};
0
detroix072 months ago
vector<int> inOrder(Node* root) {
    stack<Node*> s;
    vector<int> v;
    while (1) {
        if (root != NULL) {
            s.push(root);
            root = root->left;
        }
        else {
            if (s.size() == 0) break;
            root = s.top();
            s.pop();
            v.push_back(root->data);
            root = root->right;
        }
    }
    return v;
}
-3
imaniket2 months ago
vector<int> inOrder(Node* root)
{
auto node = root;
vector<int> ans;
if(!root) return ans;
stack<Node*> st;
while(true){
if(node){
st.push(node);
node= node->left;
}
else{
if(st.empty()) break;
node = st.top();
st.pop();
ans.push_back(node->data);
node= node->right;
}
}
return ans;
}
-1
codewithmitesh3 months ago
vector<int> inOrder(Node *root)
{
vector<int> ans;
// * step 1 :- Create an empty stack S of datatype BinaryNode
stack<Node *> s;
// * step 2 :- Initialize current node as root
Node *curr = root, *PoppedNode = NULL;
while (curr != NULL || (s.empty() == false))
{
// * step 3 :- Push the current node to S and set current = current->left until current is NULL
while (curr != NULL)
{
s.push(curr);
curr = curr->left;
}
// * Step 4 (a) :- If current is NULL and stack is not empty then Pop the top item from stack.
PoppedNode = s.top();
s.pop();
// * Step 4 (b) :- Print the popped item and set current = popped_item->right
ans.push_back(PoppedNode->data);
curr = PoppedNode->right;
}
return ans;
}
| [
{
"code": null,
"e": 323,
"s": 238,
"text": "Given a binary tree. Find the inorder traversal of the tree without using recursion."
},
{
"code": null,
"e": 333,
"s": 323,
"text": "Example 1"
},
{
"code": null,
"e": 507,
"s": 333,
"text": "Input:\n 1\n / \\\n 2 3\n / \\\n 4 5\nOutput: 4 2 5 1 3\nExplanation:\nInorder traversal (Left->Root->Right) of \nthe tree is 4 2 5 1 3.\n"
},
{
"code": null,
"e": 517,
"s": 507,
"text": "Example 2"
},
{
"code": null,
"e": 767,
"s": 517,
"text": "Input:\n 8\n / \\\n 1 5\n \\ / \\\n 7 10 6\n \\ /\n 10 6\nOutput: 1 7 10 8 6 10 5 6\nExplanation:\nInorder traversal (Left->Root->Right) \nof the tree is 1 7 10 8 6 10 5 6."
},
{
"code": null,
"e": 832,
"s": 769,
"text": "\nExpected time complexity: O(N)\nExpected auxiliary space: O(N)"
},
{
"code": null,
"e": 834,
"s": 832,
"text": "0"
},
{
"code": null,
"e": 863,
"s": 834,
"text": "utkarshagarwal1012 weeks ago"
},
{
"code": null,
"e": 2660,
"s": 863,
"text": " Node* getRightMost(Node* curr, Node* left){\n while(left->right!=NULL&&left->right!=curr){ // helper function // until left->right is NULL or equal to curr\n left=left->right;\n }\n return left;\n }\n vector<int> inOrder(Node* root)\n {\n //code here\n vector<int>ans;\n Node* curr = root;\n while(curr){ // until the current node is not NULL\n Node* leftNode = curr->left; // at each step take its left node\n if(leftNode==NULL){ // if left node is NULL, that is doesn't exist then u have to move towards the right, and in inorder, when u make a call to right, first u store the curr->val \n ans.push_back(curr->data); // store the curr->val in ans vector\n curr=curr->right; /// move to the right\n }\n else{\n Node* rightMost = getRightMost(curr,leftNode); // if left node is not NULL, then find the rightmost node of left child, since the rightmost's child's inorder successor would be curr node\n if(rightMost->right==NULL){\n rightMost->right = curr; // make a thread from rightmost to curr\n curr=curr->left; // move cur towards left child\n }\n else{\n rightMost->right = NULL; // if thread already exits means rightmost->right is not NULL, it is pointing to the curr, means this left tree is already processed\n // hence u need to move towards right and whenever u move towards right in inorder, first add element to and vector\n ans.push_back(curr->data);\n curr=curr->right; // move cur towards right\n }\n }\n }\n return ans;\n }\n \n};"
},
{
"code": null,
"e": 2663,
"s": 2660,
"text": "+1"
},
{
"code": null,
"e": 2691,
"s": 2663,
"text": "hanumanmanyam8373 weeks ago"
},
{
"code": null,
"e": 3436,
"s": 2691,
"text": "class Solution\n{\n // Return a list containing the inorder traversal of the given tree\n ArrayList<Integer> inOrder(Node root)\n {\n // Code\n ArrayList<Integer>res=new ArrayList<>();\n Stack<Node>st=new Stack<>();\n while(true)\n {\n if(root!=null)\n {\n st.push(root);\n root=root.left;\n }\n else\n {\n if(st.isEmpty())\n {\n break;\n }\n else\n {\n root=st.pop();\n res.add(root.data);\n root=root.right;\n }\n }\n }\n return res;\n }\n \n \n}"
},
{
"code": null,
"e": 3438,
"s": 3436,
"text": "0"
},
{
"code": null,
"e": 3464,
"s": 3438,
"text": "scien-terrific1 month ago"
},
{
"code": null,
"e": 4201,
"s": 3464,
"text": "class Solution {\npublic:\n vector<int> inOrder(Node* root)\n {\n //code here\n vector<int>res;\n Node* cur=root;\n while(cur){\n if(!cur->left){\n res.push_back(cur->data);\n cur=cur->right;\n }\n else{\n Node* prev=cur->left;\n while(prev->right and prev->right!=cur)prev=prev->right;\n if(!prev->right){\n prev->right=cur;\n cur=cur->left;\n }\n else{\n prev->right=NULL;\n res.push_back(cur->data);\n cur=cur->right;\n }\n }\n }\n return res;\n }\n};"
},
{
"code": null,
"e": 4203,
"s": 4201,
"text": "0"
},
{
"code": null,
"e": 4231,
"s": 4203,
"text": "devashishbakare2 months ago"
},
{
"code": null,
"e": 4237,
"s": 4231,
"text": "Java "
},
{
"code": null,
"e": 5196,
"s": 4239,
"text": " ArrayList<Integer> inOrder(Node root) { ArrayList<Integer> ans = new ArrayList<>(); if( root == null ) return ans; Stack<Node> stack = new Stack<>(); while( true ) { if( root != null){ //go to the extreme left until not found null and add nodes in stack for making the track of pervious call stack.push(root); root = root.left; } else { //break the loop when stack is empty if(stack.isEmpty()) break; else { //take the item out because its extreme left and passes call to it right for checking for the same root = stack.peek(); stack.pop(); ans.add(root.data); root = root.right; } } } return ans; } }"
},
{
"code": null,
"e": 5199,
"s": 5196,
"text": "+1"
},
{
"code": null,
"e": 5224,
"s": 5199,
"text": "mr_coder99332 months ago"
},
{
"code": null,
"e": 5680,
"s": 5224,
"text": "vector<int> inOrder(Node* root)\n {\n vector<int> ans;\n stack<Node*> s;\n Node* node=root;\n while(true){\n if(node){\n s.push(node);\n node=node->left;\n }else{\n if(s.empty()) break;\n node=s.top();\n s.pop();\n ans.push_back(node->data);\n node=node->right;\n }\n }\n return ans;\n }"
},
{
"code": null,
"e": 5682,
"s": 5680,
"text": "0"
},
{
"code": null,
"e": 5713,
"s": 5682,
"text": "prabhakarsati294262 months ago"
},
{
"code": null,
"e": 6151,
"s": 5715,
"text": "def inOrder(self, root): # code here if root is None: return st = [] curr = root while curr!= None: st.append(curr) curr = curr.left while len(st)>0: curr = st.pop() print(curr.data,end=\" \") curr = curr.right while curr!=None: st.append(curr) curr = curr.left return st"
},
{
"code": null,
"e": 6153,
"s": 6151,
"text": "0"
},
{
"code": null,
"e": 6187,
"s": 6153,
"text": "himanshu11042003singh2 months ago"
},
{
"code": null,
"e": 6209,
"s": 6187,
"text": "SIMPLE C++ SOLUTION:-"
},
{
"code": null,
"e": 6231,
"s": 6209,
"text": "TIME COMPLEXITY- O(N)"
},
{
"code": null,
"e": 6254,
"s": 6231,
"text": "SPACE COMPLEXITY- O(1)"
},
{
"code": null,
"e": 6260,
"s": 6254,
"text": "CODE-"
},
{
"code": null,
"e": 7027,
"s": 6260,
"text": "class Solution {public: vector<int> inOrder(Node* root) { //code here vector<int>inorder; Node* curr= root; while(curr!=NULL){ if(curr->left==NULL){ inorder.push_back(curr->data); curr= curr->right; } else{ Node* prev= curr->left; while(prev->right&&prev->right!=curr){ prev= prev->right; } if(prev->right==NULL){ prev->right= curr; curr=curr->left; } else{ prev->right=NULL; inorder.push_back(curr->data); curr=curr->right; } } } return inorder; }};"
},
{
"code": null,
"e": 7029,
"s": 7027,
"text": "0"
},
{
"code": null,
"e": 7051,
"s": 7029,
"text": "detroix072 months ago"
},
{
"code": null,
"e": 7440,
"s": 7051,
"text": "vector<int> inOrder(Node* root) { stack<Node*> s; vector<int> v; while(1) { if(root!=NULL){ s.push(root); root=root->left; } else { if(s.size()==0) break; root = s.top(); s.pop(); v.push_back(root->data); root=root->right; } } return v; }"
},
{
"code": null,
"e": 7443,
"s": 7440,
"text": "-3"
},
{
"code": null,
"e": 7464,
"s": 7443,
"text": "imaniket2 months ago"
},
{
"code": null,
"e": 7973,
"s": 7464,
"text": "vector<int> inOrder(Node* root)\n {\n auto node = root;\n vector<int> ans;\n if(!root) return ans;\n stack<Node*> st;\n while(true){\n if(node){\n st.push(node);\n node= node->left;\n }\n else{\n if(st.empty()) break;\n node = st.top();\n st.pop();\n ans.push_back(node->data);\n node= node->right;\n }\n }\n return ans;\n }"
},
{
"code": null,
"e": 7976,
"s": 7973,
"text": "-1"
},
{
"code": null,
"e": 8003,
"s": 7976,
"text": "codewithmitesh3 months ago"
},
{
"code": null,
"e": 8842,
"s": 8003,
"text": "vector<int> inOrder(Node *root)\n{\n vector<int> ans;\n\n // * step 1 :- Create an empty stack S of datatype BinaryNode\n stack<Node *> s;\n // * step 2 :- Initialize current node as root\n Node *curr = root, *PoppedNode = NULL;\n\n while (curr != NULL || (s.empty() == false))\n {\n // * step 3 :- Push the current node to S and set current = current->left until current is NULL\n while (curr != NULL)\n {\n s.push(curr);\n curr = curr->left;\n }\n // * Step 4 (a) :- If current is NULL and stack is not empty then Pop the top item from stack.\n PoppedNode = s.top();\n s.pop();\n // * Step 4 (b) :- Print the popped item and set current = popped_item->right\n ans.push_back(PoppedNode->data);\n curr = PoppedNode->right;\n }\n return ans;\n}"
},
{
"code": null,
"e": 8988,
"s": 8842,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 9024,
"s": 8988,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 9034,
"s": 9024,
"text": "\nProblem\n"
},
{
"code": null,
"e": 9044,
"s": 9034,
"text": "\nContest\n"
},
{
"code": null,
"e": 9107,
"s": 9044,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 9255,
"s": 9107,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 9463,
"s": 9255,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 9569,
"s": 9463,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
How to create nest tables within tables in HTML ? - GeeksforGeeks | 14 Aug, 2021
HTML tables are very helpful to structure the content in the form of rows and columns. But sometimes there is a need to add a table within a table. HTML supports this functionality and is known as the nesting of the tables. Tables can be nested together to create a table inside a table.
To create a nested table, we need to create a table using the <table> tag. This table is known as the outer table. The second table, the one that will be nested inside it, is called the inner table. The inner table is also created using the <table> tag, but there is one rule that must be kept in mind.
Note: The inner table always has to be placed between the <td> ..... </td> of the outer table.
Example 1: Below is an example of creating a nested table. The inner table is added to the second column of the first row of the first table i.e. inside the <td>...</td> tags of the outer table. The tables have been drawn using different colors for better understanding and clarity of the readers. The green border table represents the outer table whereas the inner table has a blue border.
HTML
<!DOCTYPE html>
<html>
<body>
    <table border="2" bordercolor="green">
        <tr>
            <td>Table 1</td>
            <td>
                Table 1
                <table border="2" bordercolor="blue">
                    <tr>
                        <td>Table 2</td>
                        <td>Table 2</td>
                    </tr>
                    <tr>
                        <td>Table 2</td>
                        <td>Table 2</td>
                    </tr>
                </table>
            </td>
        </tr>
        <tr>
            <td>Table 1</td>
            <td>Table 1.</td>
        </tr>
    </table>
</body>
</html>
Output:
Example 2: The above example is modified a little for better understanding.
HTML
<!DOCTYPE html>
<html>
<body>
    <h2 style="color:green">GeeksforGeeks</h2>
    <p><b>Nested tables</b></p>
    <table border="2" bordercolor="green">
        <tr>
            <td>main Table row 1 column 1</td>
            <td>main Table column 2
                <table border="2" bordercolor="blue">
                    <tr>
                        <td>inner Table row 1 column 1</td>
                        <td>inner Table row 1 column 2</td>
                    </tr>
                    <tr>
                        <td>inner Table row 2 column 1</td>
                        <td>inner Table row 2 column 2</td>
                    </tr>
                    <tr>
                        <td>inner Table row 3 column 1</td>
                        <td>inner Table row 3 column 2</td>
                    </tr>
                </table>
            </td>
        </tr>
        <tr>
            <td>main Table row 2 column 1</td>
            <td>main Table row 2 column 2</td>
        </tr>
    </table>
</body>
</html>
Output:
Note: Nested tables can be slow to load, restrict layout flexibility, and get in the way of a more functional web page. They are also not recommended from an SEO perspective.
| [
{
"code": null,
"e": 26039,
"s": 26011,
"text": "\n14 Aug, 2021"
},
{
"code": null,
"e": 26327,
"s": 26039,
"text": "HTML tables are very helpful to structure the content in the form of rows and columns. But sometimes there is a need to add a table within a table. HTML supports this functionality and is known as the nesting of the tables. Tables can be nested together to create a table inside a table."
},
{
"code": null,
"e": 26617,
"s": 26327,
"text": "To create a nested table, we need to create a table using the <table> tag. This table is known as the outer table. The second table that will be nested table is called the inner table. This table is also created using the <table> tag but there is a special thing that must be kept in mind."
},
{
"code": null,
"e": 26712,
"s": 26617,
"text": "Note: The inner table always has to be placed between the <td> ..... </td> of the outer table."
},
{
"code": null,
"e": 27103,
"s": 26712,
"text": "Example 1: Below is an example of creating a nested table. The inner table is added to the second column of the first row of the first table i.e. inside the <td>...</td> tags of the outer table. The tables have been drawn using different colors for better understanding and clarity of the readers. The green border table represents the outer table whereas the inner table has a blue border."
},
{
"code": null,
"e": 27108,
"s": 27103,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <body> <table border=\"2\" bordercolor=\"green\"> <tr> <td>Table 1</td> <td> Table 1 <table border=\"2\" bordercolor=\"blue\"> <tr> <td>Table 2</td> <td>Table 2</td> </tr> <tr> <td> Table 2 </td> <td>Table 2</td> </tr> </table> </td> </tr> <tr> <td> Table 1 </td> <td> Table 1. </td> </tr> </table></body> </html>",
"e": 27725,
"s": 27108,
"text": null
},
{
"code": null,
"e": 27733,
"s": 27725,
"text": "Output:"
},
{
"code": null,
"e": 27809,
"s": 27733,
"text": "Example 2: The above example is modified a little for better understanding."
},
{
"code": null,
"e": 27814,
"s": 27809,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <body> <h2 style=\"color:green\">GeeksforGeeks</h2> <p><b>Nested tables</b></p> <table border=\"2\" bordercolor=\"green\"> <tr> <td>main Table row 1 column 1</td> <td>main Table column 2 <table border=\"2\" bordercolor=\"blue\"> <tr> <td>inner Table row 1 column 1</td> <td>inner Table row 1 column 2</td> </tr> <tr> <td>inner Table row 2 column 1 </td> <td>inner Table row 2 column 2</td> </tr> <tr> <td>inner Table row 3 column 1 </td> <td>inner Table row 3 column 2</td> </tr> </table> </td> </tr> <tr> <td> main Table row 2 column 1 </td> <td> main Table row 2 column 2 </td> </tr> </table></body> </html>",
"e": 28817,
"s": 27814,
"text": null
},
{
"code": null,
"e": 28825,
"s": 28817,
"text": "Output:"
},
{
"code": null,
"e": 28998,
"s": 28825,
"text": "Note: Nested tables can be slow to load, restrictive for layouts, and prevent a more flexible and functional web page. They are lesser recommended from the SEO perspective."
},
{
"code": null,
"e": 29135,
"s": 28998,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 29150,
"s": 29135,
"text": "HTML-Questions"
},
{
"code": null,
"e": 29160,
"s": 29150,
"text": "HTML-Tags"
},
{
"code": null,
"e": 29167,
"s": 29160,
"text": "Picked"
},
{
"code": null,
"e": 29172,
"s": 29167,
"text": "HTML"
},
{
"code": null,
"e": 29189,
"s": 29172,
"text": "Web Technologies"
},
{
"code": null,
"e": 29194,
"s": 29189,
"text": "HTML"
},
{
"code": null,
"e": 29292,
"s": 29194,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29340,
"s": 29292,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 29364,
"s": 29340,
"text": "REST API (Introduction)"
},
{
"code": null,
"e": 29405,
"s": 29364,
"text": "HTML Cheat Sheet - A Basic Guide to HTML"
},
{
"code": null,
"e": 29455,
"s": 29405,
"text": "How to Insert Form Data into Database using PHP ?"
},
{
"code": null,
"e": 29505,
"s": 29455,
"text": "CSS to put icon inside an input element in a form"
},
{
"code": null,
"e": 29545,
"s": 29505,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 29578,
"s": 29545,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 29623,
"s": 29578,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 29666,
"s": 29623,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Java | Renaming a file - GeeksforGeeks | 26 Mar, 2018
In Java, we can rename a file using the renameTo(newName) method that belongs to the File class.

Declaration:
Following is the declaration for java.io.File.renameTo(File dest) method:
public boolean renameTo(File dest)
Parameters:
dest – The new abstract pathname for the existing abstract pathname.

Exceptions:
SecurityException : If a security manager exists and its method denies write access to either the old or new pathnames.
NullPointerException : If parameter destination is null.
// Java program to rename a file.
import java.io.File;

public class GeeksforGeeks {
    public static void main(String[] args)
    {
        // Backslashes in Windows paths must be escaped in Java string literals
        File oldName = new File("C:\\Users\\Siddharth\\Desktop\\java.txt");
        File newName = new File("C:\\Users\\Siddharth\\Desktop\\GeeksforGeeks.txt");

        if (oldName.renameTo(newName))
            System.out.println("Renamed successfully");
        else
            System.out.println("Error");
    }
}
Renamed successfully
{
"code": null,
"e": 24114,
"s": 24086,
"text": "\n26 Mar, 2018"
},
{
"code": null,
"e": 24218,
"s": 24114,
"text": "In Java we can rename a file using renameTo(newName) method that belongs to the File class.Declaration:"
},
{
"code": null,
"e": 24292,
"s": 24218,
"text": "Following is the declaration for java.io.File.renameTo(File dest) method:"
},
{
"code": null,
"e": 24328,
"s": 24292,
"text": "public boolean renameTo(File dest)\n"
},
{
"code": null,
"e": 24340,
"s": 24328,
"text": "Parameters:"
},
{
"code": null,
"e": 24419,
"s": 24340,
"text": "dest – The new abstract pathname for the existing abstract pathname.Exception:"
},
{
"code": null,
"e": 24539,
"s": 24419,
"text": "SecurityException : If a security manager exists and its method denies write access to either the old or new pathnames."
},
{
"code": null,
"e": 24596,
"s": 24539,
"text": "NullPointerException : If parameter destination is null."
},
{
"code": "// Java program to rename a file.import java.io.File; public class GeeksforGeeks { public static void main(String[] args) { File oldName = new File(\"C:\\Users\\Siddharth\\Desktop\\java.txt\"); File newName = new File(\"C:\\Users\\Siddharth\\Desktop\\GeeksforGeeks.txt\"); if (oldName.renameTo(newName)) System.out.println(\"Renamed successfully\"); else System.out.println(\"Error\"); }}",
"e": 25066,
"s": 24596,
"text": null
},
{
"code": null,
"e": 25088,
"s": 25066,
"text": "Renamed successfully\n"
},
{
"code": null,
"e": 25107,
"s": 25088,
"text": "java-file-handling"
},
{
"code": null,
"e": 25116,
"s": 25107,
"text": "Java-I/O"
},
{
"code": null,
"e": 25121,
"s": 25116,
"text": "Java"
},
{
"code": null,
"e": 25126,
"s": 25121,
"text": "Java"
},
{
"code": null,
"e": 25224,
"s": 25126,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25239,
"s": 25224,
"text": "Stream In Java"
},
{
"code": null,
"e": 25258,
"s": 25239,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 25279,
"s": 25258,
"text": "Constructors in Java"
},
{
"code": null,
"e": 25325,
"s": 25279,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 25355,
"s": 25325,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 25372,
"s": 25355,
"text": "Generics in Java"
},
{
"code": null,
"e": 25415,
"s": 25372,
"text": "Comparator Interface in Java with Examples"
},
{
"code": null,
"e": 25437,
"s": 25415,
"text": "PriorityQueue in Java"
},
{
"code": null,
"e": 25458,
"s": 25437,
"text": "Introduction to Java"
}
] |
Python For Loops - GeeksforGeeks | 25 Aug, 2021
Python's for loop is used for sequential traversal, i.e., for iterating over an iterable like a string, tuple, or list. It falls under the category of definite iteration: the number of repetitions is specified explicitly in advance. Python has no C-style for loop, i.e., for (i=0; i<n; i++); instead there is a “for in” loop, which is similar to the for-each loop in other languages. Let us learn how to use the for-in loop for sequential traversal.
Note: In Python, for loops only implements the collection-based iteration.
Syntax:
for var in iterable:
# statements
Here the iterable is a collection of objects like lists, tuples. The indented statements inside the for loops are executed once for each item in an iterable. The variable var takes the value of the next item of the iterable each time through the loop.
Python3
# Python program to illustrate
# Iterating over a list
print("List Iteration")
l = ["geeks", "for", "geeks"]
for i in l:
    print(i)

# Iterating over a tuple (immutable)
print("\nTuple Iteration")
t = ("geeks", "for", "geeks")
for i in t:
    print(i)

# Iterating over a String
print("\nString Iteration")
s = "Geeks"
for i in s:
    print(i)

# Iterating over dictionary
print("\nDictionary Iteration")
d = dict()
d['xyz'] = 123
d['abc'] = 345
for i in d:
    print("%s %d" % (i, d[i]))
Output:
List Iteration
geeks
for
geeks
Tuple Iteration
geeks
for
geeks
String Iteration
G
e
e
k
s
Dictionary Iteration
xyz 123
abc 345
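Iterating over a dictionary as above yields each key, and the value is looked up with d[i]. A common alternative (not shown in the original article) is to iterate .items(), which yields key–value pairs directly — a minimal sketch:

```python
# Iterating over key-value pairs with items()
d = {'xyz': 123, 'abc': 345}
for key, value in d.items():
    print(key, value)  # prints "xyz 123" then "abc 345"
```

Since Python 3.7, dictionaries preserve insertion order, so the pairs come out in the order they were added.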
Loop control statements change execution from its normal sequence. When execution leaves a scope, all automatic objects that were created in that scope are destroyed. Python supports the following control statements.
Python continue Statement returns the control to the beginning of the loop.
Python3
# Prints all letters except 'e' and 's'
for letter in 'geeksforgeeks':
    if letter == 'e' or letter == 's':
        continue
    print('Current Letter :', letter)
Output:
Current Letter : g
Current Letter : k
Current Letter : f
Current Letter : o
Current Letter : r
Current Letter : g
Current Letter : k
Python break statement brings control out of the loop.
Python3
for letter in 'geeksforgeeks':
    # break the loop as soon as it sees 'e'
    # or 's'
    if letter == 'e' or letter == 's':
        break

# letter still holds the value at which the loop broke
print('Current Letter :', letter)
Output:
Current Letter : e
The pass statement to write empty loops. Pass is also used for empty control statements, functions, and classes.
Python3
# An empty loop
for letter in 'geeksforgeeks':
    pass
print('Last Letter :', letter)
Output:
Last Letter : s
Python range() is a built-in function used when an action needs to be performed a specific number of times. In Python 3, range() is just a renamed version of the function called xrange() in Python 2. The range() function is used to generate a sequence of numbers. Depending on how many arguments are passed, the caller decides where that series of numbers begins and ends, as well as how big the difference is between one number and the next. range() takes up to three arguments:
start: integer starting from which the sequence of integers is to be returned
stop: integer before which the sequence of integers is to be returned. The range of integers end at stop – 1.
step: integer value which determines the increment between each integer in the sequence
Python3
# Python Program to
# show range() basics

# printing a number
for i in range(10):
    print(i, end=" ")
print()

# using range for iteration
l = [10, 20, 30, 40]
for i in range(len(l)):
    print(l[i], end=" ")
print()

# performing sum of first 10 numbers
sum = 0
for i in range(1, 10):
    sum = sum + i
print("Sum of first 10 numbers :", sum)
0 1 2 3 4 5 6 7 8 9
10 20 30 40
Sum of first 10 numbers : 45
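The example above uses only the one- and two-argument forms of range(). The three-argument form adds a step between consecutive values; a small sketch (the specific numbers below are illustrative, not from the article):

```python
# range(start, stop, step): from 2 up to (but not including) 20, in steps of 3
for i in range(2, 20, 3):
    print(i, end=" ")  # 2 5 8 11 14 17
print()

# A negative step counts downwards
for i in range(10, 0, -2):
    print(i, end=" ")  # 10 8 6 4 2
print()
```

As with the two-argument form, the stop value itself is never included in the sequence.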
In most programming languages (C/C++, Java, etc.), the use of else statements is restricted to if conditional statements. But Python also allows us to use an else clause with for loops.
Note: The else block just after for/while is executed only when the loop is NOT terminated by a break statement
Python3
# Python program to demonstrate
# for-else loop

for i in range(1, 4):
    print(i)
else:  # Executed because no break in for
    print("No Break\n")

for i in range(1, 4):
    print(i)
    break
else:  # Not executed as there is a break
    print("No Break")
Output:
1
2
3
No Break
1
Note: For more information refer to our Python for loop with else tutorial.
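A typical practical use of for-else (not covered in the article's example) is a search loop, where the else clause runs only when the loop finished without finding anything — a small sketch with illustrative values:

```python
# Search with for-else: else runs only if the loop was NOT broken
items = [4, 9, 15]
target = 9
for x in items:
    if x == target:
        print("found", x)  # prints "found 9"
        break
else:
    print("not found")
```

Because break skips the else clause, "not found" prints only when no element matched.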
{
"code": null,
"e": 41472,
"s": 41444,
"text": "\n25 Aug, 2021"
},
{
"code": null,
"e": 41946,
"s": 41472,
"text": "Python For loop is used for sequential traversal i.e. it is used for iterating over an iterable like string, tuple, list, etc. It falls under the category of definite iteration. Definite iterations mean the number of repetitions is specified explicitly in advance. In Python, there is no C style for loop, i.e., for (i=0; i<n; i++). There is “for in” loop which is similar to for each loop in other languages. Let us learn how to use for in loop for sequential traversals. "
},
{
"code": null,
"e": 42021,
"s": 41946,
"text": "Note: In Python, for loops only implements the collection-based iteration."
},
{
"code": null,
"e": 42029,
"s": 42021,
"text": "Syntax:"
},
{
"code": null,
"e": 42067,
"s": 42029,
"text": "for var in iterable:\n # statements"
},
{
"code": null,
"e": 42319,
"s": 42067,
"text": "Here the iterable is a collection of objects like lists, tuples. The indented statements inside the for loops are executed once for each item in an iterable. The variable var takes the value of the next item of the iterable each time through the loop."
},
{
"code": null,
"e": 42327,
"s": 42319,
"text": "Python3"
},
{
"code": "# Python program to illustrate# Iterating over a listprint(\"List Iteration\")l = [\"geeks\", \"for\", \"geeks\"]for i in l: print(i) # Iterating over a tuple (immutable)print(\"\\nTuple Iteration\")t = (\"geeks\", \"for\", \"geeks\")for i in t: print(i) # Iterating over a Stringprint(\"\\nString Iteration\")s = \"Geeks\"for i in s: print(i) # Iterating over dictionaryprint(\"\\nDictionary Iteration\")d = dict()d['xyz'] = 123d['abc'] = 345for i in d: print(\"% s % d\" % (i, d[i]))",
"e": 42798,
"s": 42327,
"text": null
},
{
"code": null,
"e": 42807,
"s": 42798,
"text": "Output: "
},
{
"code": null,
"e": 42937,
"s": 42807,
"text": "List Iteration\ngeeks\nfor\ngeeks\n\nTuple Iteration\ngeeks\nfor\ngeeks\n\nString Iteration\nG\ne\ne\nk\ns\n\nDictionary Iteration\nxyz 123\nabc 345"
},
{
"code": null,
"e": 43154,
"s": 42937,
"text": "Loop control statements change execution from its normal sequence. When execution leaves a scope, all automatic objects that were created in that scope are destroyed. Python supports the following control statements."
},
{
"code": null,
"e": 43230,
"s": 43154,
"text": "Python continue Statement returns the control to the beginning of the loop."
},
{
"code": null,
"e": 43238,
"s": 43230,
"text": "Python3"
},
{
"code": "# Prints all letters except 'e' and 's'for letter in 'geeksforgeeks': if letter == 'e' or letter == 's': continue print('Current Letter :', letter)",
"e": 43399,
"s": 43238,
"text": null
},
{
"code": null,
"e": 43408,
"s": 43399,
"text": "Output: "
},
{
"code": null,
"e": 43541,
"s": 43408,
"text": "Current Letter : g\nCurrent Letter : k\nCurrent Letter : f\nCurrent Letter : o\nCurrent Letter : r\nCurrent Letter : g\nCurrent Letter : k"
},
{
"code": null,
"e": 43596,
"s": 43541,
"text": "Python break statement brings control out of the loop."
},
{
"code": null,
"e": 43604,
"s": 43596,
"text": "Python3"
},
{
"code": "for letter in 'geeksforgeeks': # break the loop as soon it sees 'e' # or 's' if letter == 'e' or letter == 's': break print('Current Letter :', letter)",
"e": 43773,
"s": 43604,
"text": null
},
{
"code": null,
"e": 43782,
"s": 43773,
"text": "Output: "
},
{
"code": null,
"e": 43801,
"s": 43782,
"text": "Current Letter : e"
},
{
"code": null,
"e": 43914,
"s": 43801,
"text": "The pass statement to write empty loops. Pass is also used for empty control statements, functions, and classes."
},
{
"code": null,
"e": 43922,
"s": 43914,
"text": "Python3"
},
{
"code": "# An empty loopfor letter in 'geeksforgeeks': passprint('Last Letter :', letter)",
"e": 44006,
"s": 43922,
"text": null
},
{
"code": null,
"e": 44015,
"s": 44006,
"text": "Output: "
},
{
"code": null,
"e": 44032,
"s": 44015,
"text": "Last Letter : s "
},
{
"code": null,
"e": 44551,
"s": 44032,
"text": "Python range() is a built-in function that is used when a user needs to perform an action a specific number of times. range() in Python(3.x) is just a renamed version of a function called xrange() in Python(2.x). The range() function is used to generate a sequence of numbers. Depending on how many arguments user is passing to the function, user can decide where that series of numbers will begin and end as well as how big the difference will be between one number and the next.range() takes mainly three arguments. "
},
{
"code": null,
"e": 44629,
"s": 44551,
"text": "start: integer starting from which the sequence of integers is to be returned"
},
{
"code": null,
"e": 44739,
"s": 44629,
"text": "stop: integer before which the sequence of integers is to be returned. The range of integers end at stop – 1."
},
{
"code": null,
"e": 44828,
"s": 44739,
"text": "step: integer value which determines the increment between each integer in the sequence "
},
{
"code": null,
"e": 44836,
"s": 44828,
"text": "Python3"
},
{
"code": "# Python Program to# show range() basics # printing a numberfor i in range(10): print(i, end=\" \")print() # using range for iterationl = [10, 20, 30, 40]for i in range(len(l)): print(l[i], end=\" \")print() # performing sum of first 10 numberssum = 0for i in range(1, 10): sum = sum + iprint(\"Sum of first 10 numbers :\", sum)",
"e": 45168,
"s": 44836,
"text": null
},
{
"code": null,
"e": 45231,
"s": 45168,
"text": "0 1 2 3 4 5 6 7 8 9 \n10 20 30 40 \nSum of first 10 numbers : 45"
},
{
"code": null,
"e": 45440,
"s": 45231,
"text": "In most of the programming languages (C/C++, Java, etc), the use of else statements has been restricted with the if conditional statements. But Python also allows us to use the else condition with for loops. "
},
{
"code": null,
"e": 45553,
"s": 45440,
"text": "Note: The else block just after for/while is executed only when the loop is NOT terminated by a break statement "
},
{
"code": null,
"e": 45561,
"s": 45553,
"text": "Python3"
},
{
"code": "# Python program to demonstrate# for-else loop for i in range(1, 4): print(i)else: # Executed because no break in for print(\"No Break\\n\") for i in range(1, 4): print(i) breakelse: # Not executed as there is a break print(\"No Break\")",
"e": 45811,
"s": 45561,
"text": null
},
{
"code": null,
"e": 45820,
"s": 45811,
"text": "Output: "
},
{
"code": null,
"e": 45838,
"s": 45820,
"text": "1\n2\n3\nNo Break\n\n1"
},
{
"code": null,
"e": 45914,
"s": 45838,
"text": "Note: For more information refer to our Python for loop with else tutorial."
},
{
"code": null,
"e": 45929,
"s": 45914,
"text": "ankushgarg1998"
},
{
"code": null,
"e": 45944,
"s": 45929,
"text": "balajikawle777"
},
{
"code": null,
"e": 45960,
"s": 45944,
"text": "nikhilaggarwal3"
},
{
"code": null,
"e": 45981,
"s": 45960,
"text": "Python loop-programs"
},
{
"code": null,
"e": 45995,
"s": 45981,
"text": "python-basics"
},
{
"code": null,
"e": 46002,
"s": 45995,
"text": "Python"
},
{
"code": null,
"e": 46100,
"s": 46002,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 46109,
"s": 46100,
"text": "Comments"
},
{
"code": null,
"e": 46122,
"s": 46109,
"text": "Old Comments"
},
{
"code": null,
"e": 46150,
"s": 46122,
"text": "Read JSON file using Python"
},
{
"code": null,
"e": 46200,
"s": 46150,
"text": "Adding new column to existing DataFrame in Pandas"
},
{
"code": null,
"e": 46222,
"s": 46200,
"text": "Python map() function"
},
{
"code": null,
"e": 46266,
"s": 46222,
"text": "How to get column names in Pandas dataframe"
},
{
"code": null,
"e": 46301,
"s": 46266,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 46323,
"s": 46301,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 46353,
"s": 46323,
"text": "Iterate over a list in Python"
},
{
"code": null,
"e": 46385,
"s": 46353,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 46427,
"s": 46385,
"text": "Different ways to create Pandas Dataframe"
}
] |