library(tidyverse)
library(patchwork) # we'll use this for some figures in this class
You have been using functions in R since the very first day of this class. These functions have come from base R or from one of the many packages we’ve used. Functions are great, as they allow you to automate tasks that you commonly need to do.
Think about one of the most basic functions, mean() – it allows you to take the mean of a dataset without having to compute the sum and then divide that sum by the number of values in the dataset. More complex functions, such as lm(), which fits a linear model, allow you to perform operations that would take many lines of code in just a single function call.
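To make this concrete, here is a tiny comparison (the vector my_vals is just made-up example data):

my_vals <- c(2, 4, 6, 8) # a small made-up dataset

sum(my_vals) / length(my_vals) # computing the mean "by hand"

mean(my_vals) # mean() does the same thing in a single call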
While base R and the thousands of available R packages provide a wide variety of functions, many times you will find yourself needing a function that does not exist. You can write your own R functions to automate tasks that you frequently find yourself performing. Writing a function is a much, much better approach than simply copying and pasting code to repeat tasks. The benefits of function writing over the copy-and-paste approach are nicely summarized in R4DS (excerpt below):
- You can give a function an evocative name that makes your code easier to understand.
- As requirements change, you only need to update code in one place, instead of many.
- You eliminate the chance of making incidental mistakes when you copy and paste (i.e. updating a variable name in one place, but not in another).
So if you find yourself copying and pasting code to perform the same task repeatedly, you should stop and write a function.
Before we move ahead, let’s have a brief refresher on some concepts and terminology related to functions. Functions are passed arguments (i.e. inputs); the body of the function then performs operation(s) on these arguments, and the function returns output(s). When you use a function it is referred to as calling the function.
A function is written in R as follows
function_name <- function(argument_1, argument_2) {
  # code performing the function's operation(s)
  # code performing the function's operation(s)
  # code performing the function's operation(s)...
}
You can see that on the first line of the code above, the function function_name is defined. By assigning function(argument_1, argument_2) to function_name we are declaring a new function called function_name, which takes two arguments, argument_1 and argument_2. Note that you can write functions that have as many (or as few) arguments as you want/need.
Inside the curly brackets {} is the body of the function – this is the code that is executed when the function is called. In R, a function will return (i.e. output) the result of the last line in the body of the function.
To illustrate how we create a function in R, let’s create a very simple function that takes a number as input and then returns the cube of that number.
It is very good practice to include a description of the function, its inputs, and its outputs.
cube_it <- function(cube_me){
  # Function computes the cube of the input
  # inputs: cube_me is the value to cube
  # output: the cube of cube_me
  cube_me^3 # take the cube of the input cube_me
}
Notice how we gave the function a descriptive name, in this case we called it cube_it. We also gave the argument (input) an informative name, cube_me. While we can technically name our function and its arguments whatever we like, it is smart to give them informative names that will help us (and others) understand the purpose of the function.
Let’s give our function a quick test to confirm that it works as we expect.
cube_it(5)
## [1] 125
Arguments are matched by name or position in the argument list. In the above example with the function cube_it(), we didn’t refer to the argument name when calling the function. The argument is thus matched by position, which in the cube_it() example does not leave any room for confusion since there is only one argument. Note, however, that we can call the function as follows
cube_it(cube_me = 5)
## [1] 125
Referring to the arguments by name when you have only one argument is not really necessary; however, when you have multiple arguments this can be very helpful. Oftentimes you may not recall the position of the arguments in the function definition, and thus you do not want to rely on positional matching as you may mismatch arguments.
Let’s highlight this point with a basic example where we create a function that takes one number and raises it to a specified power:
pow_it <- function(base_val, exp_val){
  # Raises an input value to a specified power
  # inputs: base_val is the value to be raised to a power; exp_val is the exponent value
  # output: base_val raised to the exp_val
  base_val^exp_val
}
Recall that unnamed arguments are matched in the order in which they are defined in the function declaration. If I do not use the argument names in my function call, then they will be matched by position – this could lead us to generate a result that doesn’t match our intentions. To prevent this from occurring we can specify the names of the arguments when we call the function.
Let’s raise two to the third power \(2^3\)
pow_it(base_val = 2, exp_val = 3)
## [1] 8
Since we are naming the arguments, the position does not matter – the name takes precedence.
pow_it(exp_val = 3, base_val = 2)
## [1] 8
We could have also called the function and relied on positional matching
pow_it(2, 3)
## [1] 8
Though this latter function call is less clear (from a human perspective), and we could have easily forgotten the order and called pow_it(3, 2), which would have computed \(3^2 = 9\) – not our intention.
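To see this pitfall in action, compare the two calls below (purely for illustration):

pow_it(2, 3) # positional matching: base_val = 2, exp_val = 3, so this returns 8
pow_it(3, 2) # oops – now base_val = 3 and exp_val = 2, so this returns 9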
FYI, \(Celsius = (Fahrenheit - 32) \times \frac{5}{9}\)
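As a quick aside, a function wrapping that conversion might look like the minimal sketch below (the function and argument names here are just one possible choice):

fahr_to_celsius <- function(temp_f){
  # Converts a temperature from degrees Fahrenheit to degrees Celsius
  # inputs: temp_f is the temperature in degrees F
  # output: the temperature in degrees C
  (temp_f - 32) * (5/9)
}

fahr_to_celsius(212) # boiling point of water, should return 100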
When calling a function, if you do not pass a value to an argument that is used in the body of the function, an error will result. Let’s illustrate this point using our pow_it() function. Recall that pow_it() takes two arguments, base_val and exp_val. Run the following code and see what happens.
pow_it(base_val = 3)
You’ve now seen that the code above throws the following error, telling us that we are missing an argument
Error in pow_it(base_val = 3) : argument "exp_val" is missing, with no default
While in many cases we want the code to “break” when it is used incorrectly, there are nonetheless situations where we would like the code to be “smart” and supply reasonable default arguments that prevent an error from occurring.
We can specify default values for arguments when we declare a function. Let’s redeclare pow_it() – this time supplying a reasonable default for the exp_val argument
pow_it <- function(base_val, exp_val = 1){
  # Raises base_val to the power exp_val; exp_val defaults to 1 when not supplied
  base_val^exp_val
}
In the function declaration above, we’ve supplied exp_val a default value of 1. If the user supplies a value for exp_val in their function call, then the user-supplied value will be used; otherwise the default value will be applied. Let’s illustrate this below.
pow_it(base_val = 3)
## [1] 3
You can see that a default of 1 was used for exp_val
pow_it(base_val = 3, exp_val = 3)
## [1] 27
However, when we supply a value for exp_val in our function call, this value is used.
In programming, scoping refers to the rules used to look up values associated with a variable.
A function first looks within the function itself (i.e. its local environment) to identify the values associated with the variables in use. If it finds the values there, then it stops looking, otherwise it expands its search to the next level of the environment. If a value is never found, then an error is thrown.
Let’s illustrate this with an example.
add_something <- function(my_number){
  add_number <- 1 # add_number is defined locally, inside the function
  my_number + add_number
}
Notice that the variable add_number does not appear in our Global Environment (see the Environment pane in the top right of your RStudio window). This is because variables defined within functions are locally defined and thus are only accessible within that function. R will first look for the variable add_number within the function – since it finds a value there, it stops looking and uses that value for add_number.
add_something(my_number = 5)
## [1] 6
To illustrate that R first looks inside the function and then stops if it finds the variable, let’s create a variable add_number outside of our function. This variable will live in our Global Environment (see the Environment pane in the top right of your RStudio window).
add_number <- 4
add_something(my_number = 5)
## [1] 6
You see that add_something() still uses the locally defined value for add_number when executing the function. However, if we hadn’t assigned a value to add_number within our function, R would look outside of the function for this value. Let’s illustrate this with an example.
First, we will redefine the add_something() function, though this time we will not supply a value for add_number within the function
add_something <- function(my_number){
  my_number + add_number # note: add_number is not defined within the function
}
Let’s assign add_number a value outside of our function. You’ll see that this variable is found in the Global Environment
add_number <- 10
Now, let’s run add_something()
add_something(my_number = 5)
## [1] 15
First, R looked for a value for add_number locally (i.e. within the function); since it did not find a value there, it then looked outside of the function and found a value in the Global Environment, which it then used in the function evaluation.
Functions in R return the last evaluated expression. In many cases we would like to return more than one value from a function. To return multiple outputs we can have the function return a vector c() or a list list() that contains the set of values/objects we want to return.
Let’s illustrate how we do this with a simple example. We will create a function that computes the minimum, maximum, and mean of a dataset.
get_stats_bad <- function(input_data){
  min(input_data)
  max(input_data)
  mean(input_data)
}
Now, let’s test this function on an example dataset
my_data <- tibble(x = runif(1000, min = 0, max = 50)) # generate a tibble with a column (x) of 1000 random values between 0 and 50
get_stats_bad(my_data$x)
## [1] 25.97018
Since the last statement evaluated in get_stats_bad() is the mean, that is the only value returned by our function. However, we would like to have the min, max, and mean returned. Let’s define a new function get_stats_good() that returns all of the computed statistics
get_stats_good <- function(input_data){
  # Computes basic statistics on a univariate dataset
  # inputs: input_data is a vector of values
  # outputs: vector containing the min, max, and mean of the input_data
  stat_min <- min(input_data)
  stat_max <- max(input_data)
  stat_mean <- mean(input_data)

  c(stat_min, stat_max, stat_mean)
}
You can see that in the function declaration above, the last evaluated statement is a vector containing all of the values we would like to return. Let’s test out this function to demonstrate how it works.
get_stats_good(my_data$x)
## [1] 0.1473962 49.9991931 25.9701808
You can see that the function performed as desired – returning all three of the computed statistics (min, max, mean) in a single vector.
When you have outputs of the same type, a vector works great for returning multiple values; however, when you want to return dissimilar data objects (e.g. a figure object and a data frame) a vector will not work. In these cases you can use a list() object, which can contain different data structures within it.

Let’s illustrate this with an updated version of our get_stats_good() function. We will compute the same statistics as before, though now we will also generate a histogram to plot the dataset’s distribution. The list() at the end of the function returns the statistics along with the fig_hist figure object.
get_stats_good <- function(input_data){
  # Computes basic statistics on a univariate dataset and creates a histogram of its distribution
  # inputs: input_data is a vector of values
  # outputs: list containing the min, max, mean, and a histogram of the input_data
  stat_min <- min(input_data)
  stat_max <- max(input_data)
  stat_mean <- mean(input_data)

  fig_hist <- ggplot() +
    geom_histogram(aes(x = input_data)) +
    theme_classic()

  list(stat_min, stat_max, stat_mean, fig_hist)
}
Let’s run the function to demonstrate how it works.
get_stats_good(my_data$x)
## [[1]]
## [1] 0.1473962
##
## [[2]]
## [1] 49.99919
##
## [[3]]
## [1] 25.97018
##
## [[4]]
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
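Note that it is often convenient to assign the returned list to an object and then access the individual elements with double square brackets. A quick sketch (the object name stats_list is just an arbitrary choice):

stats_list <- get_stats_good(my_data$x) # assign the returned list to an object

stats_list[[3]] # the third element of the list (the mean)
stats_list[[4]] # the fourth element of the list (the histogram figure)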
The data for this exercise is loaded below
state_temps <- read_csv("https://stahlm.github.io/ENS_215/Data/NOAA_State_Temp_Lab_Data.csv")
You can call your function state_climate_summary(state_temps, state_to_select), where the argument state_temps is the data frame with the temperature data and the argument state_to_select is the abbreviation for the state of interest. An example function call would look like state_climate_summary(state_temps, "MA")
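If you are unsure how to structure your function, the skeleton below may help. This is only a sketch – Year and Month are assumed to be columns in state_temps, and the column names used for the state abbreviation and the temperature values (state_abb and avg_temp below) are placeholders that you will need to swap for the actual column names in the data:

state_climate_summary <- function(df_temps, state_to_select){
  # Summarizes temperature data for a single state
  # inputs: df_temps is the temperature data frame; state_to_select is the state abbreviation
  # outputs: list with a monthly summary table, an annual mean table, and a figure

  df_state <- df_temps %>%
    filter(state_abb == state_to_select) # state_abb is a placeholder column name

  monthly_summary <- df_state %>%
    group_by(Month) %>%
    summarize(month_mean = mean(avg_temp), # avg_temp is a placeholder column name
              month_max = max(avg_temp),
              month_min = min(avg_temp))

  annual_summary <- df_state %>%
    group_by(Year) %>%
    summarize(annual_mean = mean(avg_temp))

  fig_annual <- ggplot(annual_summary, aes(x = Year, y = annual_mean)) +
    geom_line() +
    geom_smooth() +
    theme_classic()

  list(monthly_summary, annual_summary, fig_annual)
}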
Below is an example of what your output should look like
state_climate_summary(state_temps, "MA")
## [[1]]
## # A tibble: 12 x 4
## Month month_mean month_max month_min
## <dbl> <dbl> <dbl> <dbl>
## 1 1 24.3 33.8 14.7
## 2 2 25.4 34.2 12.8
## 3 3 34.2 44.1 25.7
## 4 4 45.0 50.6 39.9
## 5 5 56.0 61.8 47.7
## 6 6 64.5 68.6 58.4
## 7 7 70.0 75 65.7
## 8 8 67.9 73.5 61.4
## 9 9 60.8 66.3 56.1
## 10 10 50.2 57.4 43.4
## 11 11 39.6 45.8 32.5
## 12 12 28.6 41.8 15.8
##
## [[2]]
## # A tibble: 124 x 2
## Year annual_mean
## <dbl> <dbl>
## 1 1895 45.9
## 2 1896 46.0
## 3 1897 46.4
## 4 1898 47.2
## 5 1899 46.4
## 6 1900 47.0
## 7 1901 45.6
## 8 1902 46.0
## 9 1903 45.9
## 10 1904 43.3
## # ... with 114 more rows
##
## [[3]]
## `geom_smooth()` using formula 'y ~ x'
state_climate_summary_map(state_temps, "MA")
As an interesting exercise let’s examine some meteorological data and write a function to estimate the wet-bulb temperature. The wet-bulb temperature is the temperature measured by a thermometer that is covered in a water-saturated cloth over which ambient air is allowed to pass. The wet-bulb temperature provides a measure of the temperature that would be reached under ambient conditions of evaporative cooling. From the standpoint of human and organism survival/comfort, the wet-bulb temperature is very important, as it indicates the temperature that the organism could reach under the ambient conditions by evaporative cooling (e.g., sweating in humans, panting in dogs). In a hot but very dry environment, evaporative cooling is very effective, while in a hot and humid environment evaporation is minimal and thus evaporative cooling potential is very limited. This is why you can feel relatively comfortable in a hot and dry place (e.g., Arizona) while often feeling much less comfortable in a less hot but humid environment (e.g., New York in the summer). Thus, the wet-bulb temperature is always less than or equal to the air temperature (wet bulb and air temperature are exactly equal when the relative humidity is 100%).
When wet-bulb temperatures reach the high 20s (in degrees C), human health can become endangered and heat stress can occur. For a fit, acclimated person who is in the shade with a strong breeze and is fully hydrated, physical labor becomes difficult or impossible at a wet-bulb temperature of 31 degrees C or higher. For these same conditions a wet-bulb temperature of 35 degrees C or higher for more than 6 hours can lead to death (Buzan and Huber, 2020). These wet-bulb temperature thresholds for physical labor and risk of death are even lower for individuals who are not acclimated, have health conditions, and/or where access to shaded and breezy conditions is not available. For instance, high levels of fatalities occurred during the 2003 European and 2010 Russian heat waves, both of which had wet-bulb temperatures that did not exceed 28 degrees C (Raymond et al., 2020). Notably, some areas in southwest Asia are expected to have wet-bulb temperatures that frequently exceed the threshold for human adaptability by 2100 (Pal and Eltahir 2016).
While most meteorological stations measure air temperature and relative humidity, the wet-bulb temperature is much less commonly measured. However, it is possible to estimate the wet-bulb temperature from measurements of air temperature and relative humidity. We will use the approach of Stull (2011) to do this.
\[\begin{aligned} T_{wet} = \; & T_{air}\arctan\left(0.151977\sqrt{RH + 8.313659}\right) + \arctan(T_{air} + RH) - \arctan(RH - 1.676331) \\ & + 0.00391838\,(RH)^{3/2}\arctan(0.023101\,RH) - 4.686035 \end{aligned}\]
Where \(T_{wet}\) is the estimated wet-bulb temperature, \(T_{air}\) is the measured air temperature in degrees C, and \(RH\) is the measured relative humidity in percent. Note that the relative humidity in the equation above is entered as a numeric value for the percent (e.g., 85% relative humidity would be entered as 85).
Write a function called calc_wetbulb() that takes air temperature and relative humidity as inputs and returns the wet-bulb temperature. You can check if your function is working correctly by entering an air temperature of 20 deg C and a relative humidity of 10 percent, which should give you a wet-bulb temperature of approximately 6.96 deg C.
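If you want to compare against one possible implementation of the Stull (2011) equation above, here is a minimal sketch (the function and argument names are just one choice):

calc_wetbulb <- function(temp_air, RH){
  # Estimates the wet-bulb temperature (deg C) using the Stull (2011) approximation
  # inputs: temp_air is the air temperature in deg C; RH is the relative humidity in percent (e.g., 85)
  # output: the estimated wet-bulb temperature in deg C
  temp_air * atan(0.151977 * (RH + 8.313659)^(1/2)) +
    atan(temp_air + RH) -
    atan(RH - 1.676331) +
    0.00391838 * RH^(3/2) * atan(0.023101 * RH) -
    4.686035
}

calc_wetbulb(20, 10) # check value: should return roughly 6.96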
Compute the hourly wet-bulb temperatures for Chicago in 1995 and for Samawah, Iraq in 2015. There was a significant heat wave in Chicago in 1995, which led to >700 deaths, and there was a significant heat wave across many areas of the Middle East in 2015.
To get hourly meteorological data we will use the worldmet package (FYI, we used this package earlier in the term).
library(worldmet)
Let’s get data for 1995 for a met station in Chicago and for 2015 for a met station in Samawah, Iraq.
worldmet_data_chicago <- importNOAA(code = "725300-94846",
                                    year = 1995)

worldmet_data_chicago <- worldmet_data_chicago %>%
  filter(!is.na(RH), !is.na(air_temp)) # remove observations that are missing RH and/or air temp data

worldmet_data_samwawah <- importNOAA(code = "406740-99999",
                                     year = 2015)

worldmet_data_samwawah <- worldmet_data_samwawah %>%
  filter(!is.na(RH), !is.na(air_temp)) # remove observations that are missing RH and/or air temp data
Now you should compute the wet-bulb temperatures for Chicago and add this data as a column to your Chicago data frame. Do the same for Samawah, Iraq. Note that you can do this by putting your calc_wetbulb() function inside of a mutate(). See the example below.
worldmet_data_chicago <- worldmet_data_chicago %>%
  mutate(wet_bulb = calc_wetbulb(air_temp, RH))

worldmet_data_samwawah <- worldmet_data_samwawah %>%
  mutate(wet_bulb = calc_wetbulb(air_temp, RH))
Once you’ve computed the wet-bulb temperatures, explore this data. You might try plotting time series of the wet-bulb temperature, computing statistics on the wet-bulb temperatures (e.g., mean, max, 95th percentile, ...), or any other analysis that you think would be interesting.
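As one possible starting point, the sketch below plots an hourly time series of the wet-bulb temperature for Chicago and computes a few summary statistics. It assumes the date-time column returned by worldmet is named date, and the particular statistics chosen are just a suggestion:

ggplot(worldmet_data_chicago, aes(x = date, y = wet_bulb)) +
  geom_line() +
  labs(x = "Date", y = "Wet-bulb temperature (deg C)") +
  theme_classic()

worldmet_data_chicago %>%
  summarize(mean_wet_bulb = mean(wet_bulb),
            max_wet_bulb = max(wet_bulb),
            p95_wet_bulb = quantile(wet_bulb, 0.95))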