Last year Packt asked me to develop a video course to teach various techniques of data visualization in R. Since I love the idea of video courses and tutorials, and I also enjoy plotting data, I readily agreed.

The result is this course, published last March, which I will briefly present below.

The course is available here:

https://www.packtpub.com/big-data-and-business-intelligence/learning-r-data-visualization-video

I wanted to create a course that was easy to follow and that, at the same time, could provide a good basis for even the most advanced forms of data visualization available today in R.

Packt was interested in presenting ggplot2, which is definitely the most advanced way of creating static plots in R. Since I regularly use ggplot2 and find it a tremendous tool, I was glad to be able to present its functionality in more detail. Three chapters are dedicated to this package. Here I present all the most important types of plots: histograms, box plots, scatterplots, bar charts and time series. Moreover, a whole chapter is dedicated to embellishing the default plots by adding elements, such as text labels and much more.

However, I am also very interested in interactive plotting, which I believe is now rapidly becoming commonplace for lots of applications. For this reason two chapters are completely dedicated to interactive plots. In the first I present the package rCharts, which is extremely powerful but also a bit tricky to use at times. In many cases there is little documentation to work with, and while developing the course I often found myself wandering through Stack Overflow searching for answers. Luckily for all of us, Prof. Ramnath Vaidyanathan, the creator of rCharts, is always available to answer users' questions quickly and clearly. In chapter 5 the viewer will be able to start from zero and quickly create nice interactive versions of all the plots I covered with ggplot2.

The last chapter is dedicated to Shiny and it is aimed at the creation of a full website for importing and plotting data. Here the reader will first learn the basics of Shiny and then will write the code to create the website and add lots of interesting functionalities.

I hope this video course will help R users become familiar with data visualization.

I would also like to take this opportunity to stress that I am happy to support viewers throughout the learning process: if you have any questions about the material in the course, do not hesitate to contact me at info@fabioveronesi.net

# R tutorial for Spatial Statistics

## Monday, 25 April 2016

## Thursday, 31 December 2015

### Wind Resource Assessment

This is an article we recently published in "Renewable and Sustainable Energy Reviews". It starts with a thorough review of the methods used for wind resource assessment: from algorithms based on physical laws to others based on statistics, plus mixed methods.

In the second part of the manuscript we present a method for wind resource assessment based on the application of Random Forest, coded completely in R.

Elsevier allows the full paper to be downloaded for FREE until the 12th of February, so if you are interested please download a copy.

This is the link: http://authors.elsevier.com/a/1SG5a4s9HvhNZ6

Below is the abstract.


#### Abstract

Wind resource assessment is fundamental when selecting a site for wind energy projects. Wind is influenced by several environmental factors and understanding its spatial variability is key in determining the economic viability of a site. Numerical wind flow models, which solve physical equations that govern air flows, are the industry standard for wind resource assessment. These methods have been proven over the years to be able to estimate the wind resource with a relatively high accuracy. However, measuring stations, which provide the starting data for every wind estimation, are often located at some distance from each other, in some cases tens of kilometres or more. This adds an unavoidable amount of uncertainty to the estimations, which can be difficult and time consuming to calculate with numerical wind flow models. For this reason, even though there are ways of computing the overall error of the estimations, methods based on physics fail to provide planners with detailed spatial representations of the uncertainty pattern. In this paper we introduce a statistical method for estimating the wind resource, based on statistical learning. In particular, we present an approach based on ensembles of regression trees, to estimate the wind speed and direction distributions continuously over the United Kingdom (UK), and provide planners with a detailed account of the spatial pattern of the wind map uncertainty.

## Thursday, 27 August 2015

### Spatio-Temporal Kriging in R

## Preface

I am writing this post mostly to remind myself of some theoretical background and the steps needed to perform spatio-temporal kriging in **gstat**.

This month I had some free time to spend on small projects not specifically related to my primary occupation. I decided to spend some of it trying to learn this technique, since it may become useful in the future. However, I had never used it before, so I first had to understand its basics, both in terms of theoretical background and programming.

Since I have used several resources to get a handle on it, I decided to share my experience and thoughts on this blog post because they may become useful for other people trying the same method. However, this post cannot be considered a full review of spatio-temporal kriging and its theoretical basis. I just mentioned some important details to guide myself and the reader through the comprehension of the topic, but these are clearly not exhaustive. At the end of the post I included some references to additional material you may want to browse for more details.

## Introduction

This is the first time I have considered spatio-temporal interpolation. Even though many datasets are indexed in both space and time, in the majority of cases time is not really taken into account for the interpolation. As an example we can consider temperature observations measured hourly from various stations in a given study area. There are several different things we can do with such a dataset. We could for instance create a series of maps with the average daily or monthly temperatures. Time is clearly considered in these studies, but not explicitly during the interpolation phase: if we want to compute daily averages we first perform the averaging and then krige, so the temporal interactions are not considered in the kriging model.

An example of this type of analysis is provided by Gräler (2012) in the following image, which depicts monthly averages for some environmental parameter in Germany:

There are cases and datasets in which performing 2D kriging on "temporal slices" may be appropriate. However, there are other instances where this is not possible, and therefore the only solution is to take time into account during kriging. For doing so, two possible solutions are suggested in the literature: using time as a third dimension, or fitting a covariance model with both spatial and temporal components (Gräler et al., 2013).

### Time as the third dimension

The idea behind this technique is extremely easy to grasp. To better understand it we can simply take a look at the equation for the sample semivariogram, from Sherman (2011):

γ(*h*) = 1 / (2|N(*h*)|) · Σ<sub>(i,j) ∈ N(*h*)</sub> [Z(*s<sub>i</sub>*) − Z(*s<sub>j</sub>*)]²   (Eq.1)

Under Matheron's Intrinsic Hypothesis (Oliver et al., 1989) we can assume that the variance between two points, *s<sub>i</sub>* and *s<sub>j</sub>*, depends only on their separation, which we indicate with the vector *h* in Eq.1. If we imagine a 2D example (i.e. purely spatial), the vector *h* is simply the one that connects the two points, *i* and *j*, with a line, and its value can be calculated with the Euclidean distance:

*h* = √[(x<sub>i</sub> − x<sub>j</sub>)² + (y<sub>i</sub> − y<sub>j</sub>)²]   (Eq.2)

If we consider a third dimension, which can be depth, elevation or time, it is easy to imagine Eq.2 adapted to accommodate the additional dimension.

The only problem with this method is that, in order for it to work properly, the temporal dimension needs to have a range similar to the spatial dimension. For this reason time needs to be scaled to align it with the spatial dimension. Gräler et al. (2013) suggest several ways to optimize the scaling and achieve meaningful results. Please refer to this article for more information.

### Spatio-Temporal Variogram

The second way of taking time into account is to adapt the covariance function to include the time component. In this case for each point *s<sub>i</sub>* there will be a time *t<sub>i</sub>* associated with it, and to calculate the variance between this point and another we need to calculate both their spatial separation *h* and their temporal separation *u*. Thus, the spatio-temporal variogram can be computed as follows, from Sherman (2011):

γ(*h*, *u*) = 1 / (2|N(*h*, *u*)|) · Σ<sub>(i,j) ∈ N(*h*, *u*)</sub> [Z(*s<sub>i</sub>*, *t<sub>i</sub>*) − Z(*s<sub>j</sub>*, *t<sub>j</sub>*)]²

With this equation we can compute a variogram taking into account every pair of points separated by distance *h* and time *u*.

## Spatio-Temporal Kriging in R

In R we can perform spatio-temporal kriging directly from **gstat**, with a set of functions very similar to what we are used to in standard 2D kriging. The package **spacetime** provides ways of creating objects where the time component is taken into account, and **gstat** uses these formats for its space-time analysis. Here I will present an example of spatio-temporal kriging using sensor data.

### Data

In 2011, as part of the OpenSense project, several wireless sensors to measure air pollution (O3, NO2, NO, SO2, VOC, and fine particles) were installed on top of trams in the city of Zurich. The project now is in its second phase and more information about it can be found here: http://www.opensense.ethz.ch/trac/wiki/WikiStart

On this page some example data about ozone and ultrafine particles are also distributed in csv format. These data have the following characteristics: time is in UNIX format, while position is in degrees (WGS 84). I will use these data to test spatio-temporal kriging in R.

### Packages

To complete this exercise we need to load several packages. First of all **sp**, for handling spatial objects, and **gstat**, which has all the functions needed to actually perform spatio-temporal kriging. Then **spacetime**, which we need to create the spatio-temporal objects. These are the three crucial packages. However, I also loaded some others that I used to complete smaller tasks. I loaded the **raster** package, because I use the functions `coordinates` and `projection` to create spatial data. There is no need to load it, since the same functions are available under different names in **sp**; however, I prefer these two because they are easier to remember. The last packages are **rgdal** and **rgeos**, for performing various operations on geodata.

The script therefore starts like:
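The snippet itself did not survive in this copy of the post; based on the packages just listed, the opening lines would simply be:

```r
# load the packages discussed above
library(sp)        # spatial objects
library(gstat)     # variograms and kriging
library(spacetime) # spatio-temporal objects
library(raster)    # coordinates() and projection()
library(rgdal)     # geodata input/output
library(rgeos)     # geometric operations
```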

### Data Preparation

There are a couple of issues to solve before we can dive into kriging. The first is that we need to translate the time from UNIX to `POSIXlt` or `POSIXct`, which are standard ways of representing time in R. But the very first thing we have to do is of course set the working directory and load the csv file:

setwd("...")

data <- read.table("ozon_tram1_14102011_14012012.csv", sep=",", header=T)

Now we need to address the UNIX time. So what is UNIX time anyway? It is a way of tracking time as the number of seconds elapsed since the UNIX epoch, which is January 1st 1970 GMT. For example, I am writing the first draft of this post on August 18th at 16:01:00 CET. If I count the number of seconds from the UNIX epoch to this exact moment (there is an app for that!!) I find the UNIX time, which is equal to: 1439910060

Now let's take a look at one entry in the column *generation_time* of our dataset. As you may notice, the UNIX time here is represented by 13 digits, while in the example above we just had 10. The UNIX time here also includes the milliseconds, which is something we cannot represent in R (as far as I know). So we cannot just convert each numerical value into `POSIXlt`; we first need to extract only the first 10 digits, and then convert the result. This can be done in one line of code but with multiple functions:

data$TIME <- as.POSIXlt(as.numeric(substr(paste(data$generation_time), 1, 10)), origin="1970-01-01")

We first need to transform the UNIX time from numerical to character format, using the function `paste(data$generation_time)`. This creates the character string shown above, which we can then subset using the function `substr`. This function extracts characters from a string and takes three arguments: a string, a starting position and a stopping position. In this case we basically want to delete the last 3 digits from our string, so we set the start at the first digit (`start=1`) and the stop at the tenth (`stop=10`). Then we need to change the string back to a numerical format, using the function `as.numeric`. Finally, we need one last function to tell R that this particular number is a Date/Time object. We can do this using the function `as.POSIXlt`, which takes the number we just created plus an origin. Since we are using UNIX time, we need to set the starting point at "*1970-01-01*". We can test this chain of functions on the first element of the vector *data$generation_time* to check its output:

> as.POSIXlt(as.numeric(substr(paste(data$generation_time[1]), start=1, stop=10)), origin="1970-01-01")
[1] "2011-10-14 11:14:46 CEST"

Now the `data.frame` data has a new column named *TIME* where the Date/Time information is stored.

Another issue with this dataset is the format of latitude and longitude. In the csv files these are represented in the format below:

Basically geographical coordinates are represented in degrees and minutes, but without any space. For example, for this point the longitude is 8°32.88', while the latitude is 47°24.22'. To obtain coordinates in a more manageable format we again need to work with strings:

data$LAT <- as.numeric(substr(paste(data$latitude),1,2))+(as.numeric(substr(paste(data$latitude),3,10))/60)

data$LON <- as.numeric(substr(paste(data$longitude),1,1))+(as.numeric(substr(paste(data$longitude),2,10))/60)

We again use a combination of `paste` and `substr` to extract only the numbers we need. To convert this format into decimal degrees, we need to sum the degrees and the minutes divided by 60. So in the first part of each line we extract the first digits of the string (two for latitude, one for longitude) and transform them back to numerical format; in the second part we extract the rest of the string, transform it into numbers and divide it by 60. This operation creates some `NA`s in the dataset, for which you will get a warning message. We do not have to worry about it, as we can simply exclude these rows, for example with `data <- na.omit(data)`.

### Subset

The ozone dataset by OpenSense provides ozone readings every minute or so, from October the 14th 2011 at around 11 a.m., up until January the 14th 2012 at around 2 p.m.

The size of this dataset is 200183 rows, which makes it rather big for kriging without a very powerful machine. For this reason, before we can proceed with this example we have to subset our data to make them more manageable. To do so we can use the standard subsetting method for `data.frame` objects using Date/Time:

> sub <- data[data$TIME>=as.POSIXct('2011-12-12 00:00 CET')&data$TIME<=as.POSIXct('2011-12-14 23:00 CET'),]
> nrow(sub)
[1] 6734

Here I created an object named *sub*, in which I kept only the readings from midnight on December 12th to 11 p.m. on the 14th. This creates a subset of 6734 observations, for which I was able to perform the whole experiment using around 11 Gb of RAM.

After this step we need to transform the object *sub* into a spatial object, and then change its projection into UTM so that the variogram will be calculated in metres and not degrees. These are the steps required to achieve all this:
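The code for these steps is missing from this copy of the post; a sketch, assuming the LON/LAT columns created earlier and the EPSG:3395 metric projection used in the rest of the post, would be:

```r
# promote sub to a spatial object using the LON/LAT columns created above
coordinates(sub) <- ~LON+LAT
projection(sub) <- CRS("+init=epsg:4326")   # data are in WGS 84 degrees

# re-project to a metric system (EPSG:3395, as used later in the post)
ozone.UTM <- spTransform(sub, CRS("+init=epsg:3395"))
```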
Now we have the object *ozone.UTM*, which is a `SpatialPointsDataFrame` with coordinates in metres.
### Spacetime Package

**Gstat** is able to perform spatio-temporal kriging by exploiting the functionalities of the package **spacetime**, which was developed by the same team as **gstat**. In **spacetime** we have two ways to represent spatio-temporal data: the `STFDF` and `STIDF` formats. The first represents objects with a complete space-time grid; in other words, this category includes objects such as the grid of weather stations presented in Fig.1. The spatio-temporal object is created using the *n* locations of the weather stations and the *m* time intervals of their observations, giving a spatio-temporal grid of size *n* x *m*.

`STIDF` objects are the ones we are going to use for this example. These are unstructured spatio-temporal objects, where both space and time change dynamically. For example, in this case we have data collected on top of trams moving around the city of Zurich, which means that the location of the sensors is not constant throughout the sampling window.

Creating `STIDF` objects is fairly simple: we just need to disassemble the `data.frame` we have into spatial, temporal and data components, and then merge them together to create the `STIDF` object.

The first thing to do is create the `SpatialPoints` object, with the locations of the sensors at any given time:

ozoneSP <- SpatialPoints(ozone.UTM@coords,CRS("+init=epsg:3395"))

This is simple to do with the function `SpatialPoints` in the package **sp**. This function takes two arguments: the first is a `matrix` or a `data.frame` with the coordinates of each point. In this case I used the coordinates of the `SpatialPointsDataFrame` we created before, which are provided in `matrix` format. With the second argument I set the projection in UTM.

At this point we need to perform a very important check for kriging, namely whether we have any duplicated points. It may sometimes happen that there are points with identical coordinates. Kriging cannot handle this and returns an error, generally in the form of a "singular matrix"; most of the time this problem is related to duplicated locations. So we now check if we have duplicates here, using the function `zerodist`:

dupl <- zerodist(ozoneSP)

It turns out that we have a couple of duplicates, which we need to remove. We can do that directly in the two lines of code that create the data and temporal components of the `STIDF` object:

ozoneDF <- data.frame(PPB=ozone.UTM$ozone_ppb[-dupl[,2]])

In this line I created a `data.frame` with only one column, named *PPB*, with the ozone observations in parts per billion. As you can see, I removed the duplicated points by excluding from the object *ozone.UTM* the rows whose indexes appear in the second column of the object *dupl*. We can use the same trick while creating the temporal part:

ozoneTM <- as.POSIXct(ozone.UTM$TIME[-dupl[,2]],tz="CET")

Now all we need to do is combine the objects *ozoneSP*, *ozoneTM* and *ozoneDF* into a `STIDF`:

timeDF <- STIDF(ozoneSP,ozoneTM,data=ozoneDF)

This is the object we are going to use to compute the variogram and perform the spatio-temporal interpolation. We can check the raw data contained in the `STIDF` object by using the spatio-temporal version of the function `spplot`, which is `stplot`:

stplot(timeDF)

### Variogram

The actual computation of the variogram at this point is pretty simple: we just need to use the appropriate function, `variogramST`. Its use is similar to the standard function for spatial kriging, even though there are some settings for the temporal component that need to be included.
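The call itself is missing from this copy of the post; given the discussion of the time-unit settings below, it was presumably something along these lines (the `assumeRegular` and `na.omit` settings are my guess):

```r
# sample spatio-temporal variogram, aggregating the time dimension to hours
var <- variogramST(PPB~1, data=timeDF, tunit="hours", assumeRegular=F, na.omit=T)
```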
As you can see, the first part of the call to the function `variogramST` is identical to a normal call to the function `variogram`: we first have the formula and then the data source. After that, however, we have to specify the time units (`tunits`) or the time lags (`tlags`). I found the documentation around this point a bit confusing, to be honest. I tested various combinations of parameters and the line of code I presented is the only one that gives me what appear to be good results. I presume that what I am telling the function is to aggregate the data to the hour, but I am not completely sure. I hope some of the readers can shed some light on this!!

I must warn you that this operation takes quite a long time, so please be aware of that. I personally ran it overnight.

### Plotting the Variogram

Basically the spatio-temporal version of the variogram includes different temporal lags. Thus what we end up with is not a single variogram but a series, which we can plot using the following line:

which returns the following image:

Among all the possible types of visualization for spatio-temporal variograms, this for me is the easiest to understand, probably because I am used to seeing variogram models. However, there are other ways to visualize it, such as the variogram map:

And the 3D wireframe:
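The plotting calls themselves were lost from this page; `plot.StVariogram` in **gstat** supports all three views, so they would presumably look like:

```r
plot(var, map=F)        # series of variograms, one line per time lag
plot(var, map=T)        # variogram map
plot(var, wireframe=T)  # 3D wireframe
```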

### Variogram Modelling

As in a normal 2D kriging experiment, at this point we need to fit a model to our variogram. For doing so we will use the functions `vgmST` and `fit.StVariogram`, which are the spatio-temporal counterparts of `vgm` and `fit.variogram`.

Below I present the code I used to fit all the models. For the automatic fitting I used most of the settings suggested in the following demo:

demo(stkrige)

Regarding the variogram models, in **gstat** we have 5 options: separable, product sum, metric, sum metric, and simple sum metric. You can find more information on fitting these models, including all the equations presented below, in Gräler et al. (2015), which is available as a pdf (I put the link in the "More Information" section).

#### Separable

This covariance model assumes separability between the spatial and the temporal components, meaning that the covariance function is given by the product of a purely spatial and a purely temporal term:

C(*h*, *u*) = C<sub>s</sub>(*h*) · C<sub>t</sub>(*u*)

According to Sherman (2011): "While this model is relatively parsimonious and is nicely interpretable, there are many physical phenomena which do not satisfy the separability". Many environmental processes, for example, do not satisfy the assumption of separability. This means that this model needs to be used carefully.

The first thing to set are the upper and lower limits for all the variogram parameters, which are used during the automatic fitting:

# lower and upper bounds
pars.l <- c(sill.s = 0, range.s = 10, nugget.s = 0, sill.t = 0, range.t = 1, nugget.t = 0, sill.st = 0, range.st = 10, nugget.st = 0, anis = 0)
pars.u <- c(sill.s = 200, range.s = 1000, nugget.s = 100, sill.t = 200, range.t = 60, nugget.t = 100, sill.st = 200, range.st = 1000, nugget.st = 100, anis = 700)

To create a separable variogram model we need to provide a model for the spatial component, one for the temporal component, plus the overall sill:
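The actual line is not shown in this copy. Given that the text below mentions setting the spatial sill at -60 by eye, it presumably looked roughly like this (the spherical models, the temporal sill and the overall sill are placeholders of my own):

```r
# by-eye separable model: spatial component, temporal component, overall sill
separable <- vgmST("separable", space=vgm(-60,"Sph",500,1), time=vgm(35,"Sph",500,1), sill=0.56)
```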

This line creates a basic variogram model, and we can check how it fits our data using the following line:
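That check is also missing from this copy; the usual way to overlay a model on the sample variogram is:

```r
# empirical variogram with the separable model overlaid
plot(var, separable, map=F)
```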

One thing you may notice is that the variogram parameters do not seem to have anything in common with the image shown above. I mean, in order to create this variogram model I had to set the sill of the spatial component at -60, which is total nonsense. However, I decided to try to fit this model by eye as best as I could, just to show you how to perform this type of fitting and calculate its error; in this case it cannot be taken seriously. I also found that for the automatic fit the parameters selected for `vgmST` do not make much of a difference, so you probably do not have to worry too much about the parameters you select in `vgmST`.

We can check how this model fits our data by using the function `fit.StVariogram` with the option `fit.method=0`, which keeps the model as it is but calculates its mean squared error (MSE) compared to the actual data:
This is basically the error of the eye fit. However, we can also use the same function to automatically fit the separable model to our data (here I used the settings suggested in the demo):

As you can see the error increases. This probably demonstrates that this model is not suitable for our data, even though with some magic we can create a pattern that is similar to what we see in the observations. In fact, if we check the fit by plotting the model it is clear that this variogram cannot properly describe our data:

To check the parameters of the model we can use the function `extractPar`:

> extractPar(separable_Vgm)
   range.s  nugget.s   range.t  nugget.t      sill
199.999323 10.000000 99.999714  1.119817 17.236256

#### Product Sum

A more flexible variogram model for spatio-temporal data is the product sum, which does not assume separability. Its covariance model is given by:

C(*h*, *u*) = *k* · C<sub>s</sub>(*h*) · C<sub>t</sub>(*u*) + C<sub>s</sub>(*h*) + C<sub>t</sub>(*u*)

with *k* > 0.
In this case in the function `vgmST` we need to provide both the spatial and the temporal components, plus the value of the parameter `k` (which needs to be positive). I first tried to set `k = 5`, but R returned an error message saying that it needed to be positive, which I did not understand. However, with 50 it worked, and as I mentioned the automatic fit does not care much about these initial values; probably the most important things are the upper and lower bounds we set before.

We can then proceed with the fitting process and check the MSE with the following two lines:
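Those lines are missing in this copy; a plausible reconstruction (the initial exponential models are my guess, `k=50` comes from the text above) is:

```r
prodSumModel <- vgmST("productSum", space=vgm(1,"Exp",150,0.5), time=vgm(1,"Exp",10,0.5), k=50)
prodSumModel_Vgm <- fit.StVariogram(var, prodSumModel, fit.method=11, method="L-BFGS-B", lower=pars.l)
attr(prodSumModel_Vgm, "MSE")
```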

This process returns the following model:

#### Metric

This model assumes identical covariance functions for both the spatial and the temporal components, but includes a spatio-temporal anisotropy (*k*) that allows some flexibility.

In this model all the distances (spatial, temporal and spatio-temporal) are treated equally, meaning that we only need to fit a joint variogram to all three. The only parameter we have to modify is the anisotropy *k*, which in R is named `stAni`. Creating a metric model in `vgmST` can be done as follows:

metric <- vgmST("metric", joint = vgm(50,"Mat", 500, 0), stAni=200)

The automatic fit produces the following MSE:
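The fitting call itself is not shown; following the same pattern as the other models, it would be approximately:

```r
metric_Vgm <- fit.StVariogram(var, metric, fit.method=11, method="L-BFGS-B", lower=pars.l)
attr(metric_Vgm, "MSE")  # mean squared error of the fitted metric model
```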

We can plot this model to visually check its accuracy:

#### Sum Metric

A more complex version of this model is the sum metric, which includes spatial and temporal covariance models, plus a joint component with the anisotropy:

This model allows maximum flexibility, since all the components can be set independently. In R this is achieved with the following line:
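The line did not survive here; a sketch with placeholder spherical components (all values illustrative) would be:

```r
sumMetric <- vgmST("sumMetric",
                   space = vgm(psill=5,   "Sph", range=500, nugget=0),
                   time  = vgm(psill=500, "Sph", range=500, nugget=0),
                   joint = vgm(psill=1,   "Sph", range=500, nugget=10),
                   stAni = 500)
```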

The automatic fit can be done like so:
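Again the call is missing; it would mirror the previous fits, e.g.:

```r
sumMetric_Vgm <- fit.StVariogram(var, sumMetric, fit.method=11, method="L-BFGS-B", lower=pars.l, upper=pars.u)
attr(sumMetric_Vgm, "MSE")
```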

Which creates the following model:

#### Simple Sum Metric

As the title suggests, this is a simpler version of the sum metric model. In this case, instead of having total flexibility for each component, we restrict them to a single nugget: we still have to set all the parameters, but we do not care about the nugget of each individual component, since we set one overall nugget effect for all three:
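The code is not shown in this copy; a sketch (placeholder values, single `nugget` shared by the three components) would be:

```r
SimplesumMetric <- vgmST("simpleSumMetric",
                         space = vgm(5,   "Sph", 500, 0),
                         time  = vgm(500, "Sph", 500, 0),
                         joint = vgm(1,   "Sph", 500, 0),
                         nugget = 1, stAni = 500)
SimplesumMetric_Vgm <- fit.StVariogram(var, SimplesumMetric, fit.method=11, method="L-BFGS-B", lower=pars.l)
```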

This returns a model similar to the sum metric:

#### Choosing the Best Model

We can visually compare all the models we fitted using wireframes in the following way:
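The comparison line is missing here; `plot.StVariogram` accepts a list of fitted models, so (using the object names from my sketches above) it would look something like:

```r
plot(var, list(separable_Vgm, prodSumModel_Vgm, metric_Vgm, sumMetric_Vgm, SimplesumMetric_Vgm), all=T, wireframe=T)
```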

The most important parameter to take into account for selecting the best model is certainly the MSE. Looking at these errors, it is clear that the best model is the sum metric, with an error of around 59, so I will use this model for kriging.

### Prediction Grid

Since we are performing spatio-temporal interpolation, it is clear that we are interested in estimating new values in both space and time. For this reason we need to create a spatio-temporal prediction grid. In this case I first downloaded the road network for the area around Zurich, then cropped it to match the extent of my study area, and finally created the spatial grid:

roads <- shapefile("VEC25_str_l_Clip/VEC25_str_l.shp")

This is the shapefile with the road network extracted from the Vector25 map of Switzerland. Unfortunately, for copyright reasons I cannot share it. This file is projected in CH93, which is the Swiss national projection. Since I wanted to perform a basic experiment, I decided not to include the whole network but only the major roads, which in Switzerland are called Klass1. So the first thing I did was to extract from the *roads* object only the lines belonging to Klass1 streets:

Klass1 <- roads[roads$objectval=="1_Klass",]

Then I changed the projection of this object from CH93 to UTM, so that it is comparable with what I used so far:

Klass1.UTM <- spTransform(Klass1,CRS("+init=epsg:3395"))

Now I can crop this file so that I obtain only the roads within my study area. I can use the function `crop` in **raster**, with the object *ozone.UTM* that I created before:

Klass1.cropped <- crop(Klass1.UTM,ozone.UTM)

This gives me the road network around the locations where the data were collected. I can show you the results with the following two lines:
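The two lines are not shown in this copy; given the description of the image below, they were presumably similar to:

```r
plot(Klass1.cropped)                  # Klass1 roads in black
points(ozone.UTM, pch=16, col="red")  # data points in red
```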

In this image the Klass1 roads are in black and the data points are represented in red. With this selection I can now use the function `spsample` to create a random grid of points along the road lines:

sp.grid.UTM <- spsample(Klass1.cropped,n=1500,type="random")

This generates the following grid, which I think I can share with you in `RData` format (gridST.RData):
As I mentioned, we now need to add a temporal component to this grid. We can do that again using the package **spacetime**. We first need to create a vector of Date/Times using the function `seq`:

tm.grid <- seq(as.POSIXct('2011-12-12 06:00 CET'),as.POSIXct('2011-12-14 09:00 CET'),length.out=5)

This creates a vector with 5 elements (`length.out=5`), with `POSIXct` values between the two Date/Times provided. In this case we are interested in creating a spatio-temporal data frame, since we do not yet have any data for the grid. Therefore we can use the function `STF` to merge the spatial and temporal information into a spatio-temporal grid:

grid.ST <- STF(sp.grid.UTM,tm.grid)

This can be used as new data in the kriging function.

### Kriging

This is probably the easiest step in the whole process. We have now created the spatio-temporal data frame, computed the best variogram model and created the spatio-temporal prediction grid. All we need to do now is a simple call to the function `krigeST` to perform the interpolation:

pred <- krigeST(PPB~1, data=timeDF, modelList=sumMetric_Vgm, newdata=grid.ST)

We can plot the results again using the function `stplot`:

stplot(pred)

## More information

There are various tutorials available that offer examples and guidance for performing spatio-temporal kriging. For example, we can just write:

vignette("st", package = "gstat")

and a pdf will open with some of the instructions I showed here. Plus there is a demo available at:

demo(stkrige)

In the article “Spatio-Temporal Interpolation using gstat”, Gräler et al. explain in detail the theory behind spatio-temporal kriging. The pdf of this article can be found here: https://cran.r-project.org/web/packages/gstat/vignettes/spatio-temporal-kriging.pdf

There are also some books and articles that I found useful to better understand the topic, for which I will put the references at the end of the post.

## References

Gräler, B., 2012. Different concepts of spatio-temporal kriging [WWW Document]. URL geostat-course.org/system/files/part01.pdf (accessed 8.18.15).

Gräler, B., Pebesma, E., Heuvelink, G., 2015. Spatio-Temporal Interpolation using gstat.

Gräler, B., Rehr, M., Gerharz, L., Pebesma, E., 2013. Spatio-temporal analysis and interpolation of PM10 measurements in Europe for 2009.

Oliver, M., Webster, R., Gerrard, J., 1989. Geostatistics in Physical Geography. Part I: Theory. Trans. Inst. Br. Geogr., New Series 14, 259–269. doi:10.2307/622687

Sherman, M., 2011. Spatial statistics and spatio-temporal data: covariance functions and directional properties. John Wiley & Sons.

All the code snippets were created by Pretty R at inside-R.org

## Sunday, 21 June 2015

### Organize a walk around London with R


The subtitle of this post can be "**How to plot multiple elements on interactive web maps in R**". In this experiment I will show how to include multiple elements in interactive maps created using both plotGoogleMaps and leafletR. To complete the work presented here you will need the following packages: **sp**, **raster**, **plotGoogleMaps** and **leafletR**.

I am going to use data from OpenStreetMap, which can be downloaded for free from this website: weogeo.com

In particular I downloaded the shapefile with the stores, the one with the tourist attractions and the polyline shapefile with all the roads in London. I will assume that you want to spend a day or two walking around London, and for this you will need the location of some hotels and the locations of all the Greggs in the area, for lunch. This post shows how to create a web map, with all these customized elements, that you can take with you while walking around the city.

Once you have downloaded the shapefile from weogeo.com you can open them and assign the correct projection with the following code:

stores <- shapefile("weogeo_j117529/data/shop_point.shp")
projection(stores) <- CRS("+init=epsg:3857")
roads <- shapefile("weogeo_j117529/data/route_line.shp")
projection(roads) <- CRS("+init=epsg:3857")
tourism <- shapefile("weogeo_j117529/data/tourism_point.shp")
projection(tourism) <- CRS("+init=epsg:3857")

To extract only the data we need for the map we can use these lines:
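The extraction code is not reproduced here; a plausible sketch is the following, where the attribute column names (`name`, `tourism`, `type`) are assumptions and should be checked against your download with `names(stores@data)`:

```r
# Hypothetical column names from the OSM extract -- verify before running
Greggs <- stores[grepl("Greggs", stores$name), ]    # all Greggs shops
Hotel <- tourism[tourism$tourism == "hotel", ]      # hotels
Footpaths <- roads[roads$type == "footway", ]       # walking routes
```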

**plotGoogleMaps**

I created three objects: two are points (Greggs and Hotel) and the last is of class *SpatialLinesDataFrame*. We already saw how to plot Spatial objects with plotGoogleMaps; here the only difference is that we need to create several maps and then link them together. Let's take a look at the following code:

Greggs.google <- plotGoogleMaps(Greggs, iconMarker=rep("http://local-insiders.com/wp-content/themes/localinsiders/includes/img/tag_icon_food.png", nrow(Greggs)), mapTypeId="ROADMAP", add=T, flat=T, legend=F, layerName="Gregg's", fitBounds=F, zoom=13)

Hotel.google <- plotGoogleMaps(Hotel, iconMarker=rep("http://www.linguistics.ucsb.edu/projects/weal/images/hotel.png", nrow(Hotel)), mapTypeId="ROADMAP", add=T, flat=T, legend=F, layerName="Hotels", previousMap=Greggs.google)

plotGoogleMaps(Footpaths, col="dark green", mapTypeId="ROADMAP", filename="Multiple_Objects_GoogleMaps.html", legend=F, previousMap=Hotel.google, layerName="Footpaths", strokeWeight=2)

As you can see I first create two objects using the same function and then I call the same function again to draw and save the map. I can link the three maps together using the options *add=T* and *previousMap*.

We need to be __careful__ here though, because the use of the option *add* is different from the standard plot function. In **plot** I can call the first element and then, if I want to add a second, I call the function again with the option *add=T*. __Here this option needs to go in the first and second calls, not in the last.__ Basically in this case we are telling R not to close the plot because later on we are going to add elements to it. In the last line we do not put *add=T*, thus telling R to go ahead and close the plot.

Another important option is *previousMap*, which is used starting from the second plot to link the various elements. This option references the previous object, meaning that the map in *Hotel.google* references *Greggs.google*, while in the last call I reference the previous *Hotel.google*, not the very first.

The zoom level, if you want to set it, goes only in the first plot.

Another thing I changed compared to the last example is the addition of custom icons to the plot, using the option *iconMarker*. This takes a vector of icons, not just one, with the same length as the *SpatialObject* to be plotted. That is why I use the function **rep**: to create a vector with the same URL repeated a number of times equal to the length of the object.

The icon can be whatever image you like. You can find a collection of free icons on this website: http://kml4earth.appspot.com/icons.html

The result is the map below, available here: Multiple_Objects_GoogleMaps.html

**leafletR**

We can do the same thing using leafletR. We first need to create *GeoJSON* files for each element of the map using the following lines:

Greggs.geojson <- toGeoJSON(Greggs)
Hotel.geojson <- toGeoJSON(Hotel)
Footpaths.geojson <- toGeoJSON(Footpaths)

Now we need to set the style for each element. For this task we are going to use the function **styleSingle**, which defines a single style for all the elements of the *GeoJSON*. This differs from the map in a previous post, in which we used the function **styleGrad** to create graduated colors depending on certain features of the dataset. We can change the icons of the elements in **leafletR** using the following code:

As you can see we have the option *marker*, which takes a vector with the name of the icon, its color and its size ("s" for small, "m" for medium or "l" for large). The names of the icons can be found here: https://www.mapbox.com/maki/, where you have a series of icons; if you hover the mouse over one of them you will see some info, among which is the name to use here (the very last name). The style of the lines is set using the two options *col* and *lwd*, for line width.
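The style-setting code itself is not reproduced here. Putting the above together, a plausible sketch is the following, where the Maki icon names ("fast-food", "lodging") and the colors are assumptions:

```r
# Icon names are assumptions -- pick your own from mapbox.com/maki
Greggs.style <- styleSingle(marker=c("fast-food", "red", "m"))
Hotel.style <- styleSingle(marker=c("lodging", "blue", "m"))
Footpaths.style <- styleSingle(col="darkgreen", lwd=2)  # line color and width
```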

Then we can simply use the function **leaflet** to set the various elements and styles of the map.

The result is the image below and the map available here: http://www.fabioveronesi.net/Blog/map.html
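The final call is not reproduced here; a sketch of how it could look, with the data and style lists matched element by element (the destination folder and popup setting are assumptions), is:

```r
# Sketch of the final leafletR call; argument values are assumptions
map <- leaflet(data=list(Greggs.geojson, Hotel.geojson, Footpaths.geojson),
               style=list(Greggs.style, Hotel.style, Footpaths.style),
               base.map="osm", popup="*", dest=".")
```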

