library(biclustermd)
Suppose you are a data scientist interested in modeling delays in flight arrivals at airports around the world, and you suspect that arrival delays vary with the month of travel. To investigate, you cluster average arrival delay (in minutes) by month and destination airport.
To illustrate this problem I'm using the dataset flights from Hadley Wickham's package nycflights13:
# install.packages("nycflights13")
library(nycflights13)
data("flights")
Per the documentation, flights contains data on all flights in 2013 that departed NYC via JFK, LaGuardia, or Newark. The variables we're interested in here are month, dest, and arr_delay.
library(dplyr)
flights <- flights %>%
select(month, dest, arr_delay)
biclustermd() requires data to be in table format, so we first summarize the data with dplyr (since we are analyzing average arrival delay) and then reshape it using spread() from tidyr.
library(tidyr)
flights <- flights %>%
group_by(month, dest) %>%
summarise(mean_arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
spread(dest, mean_arr_delay) %>%
as.data.frame()
Now we name the rows using the text version of month, then remove the month column and convert the data to a matrix.
rownames(flights) <- month.name[flights$month]
flights <- as.matrix(flights[, -1])
flights[1:5, 1:5]
## ABQ ACK ALB ANC ATL
## January NA NA 35.17460 NA 4.152047
## February NA NA 17.38889 NA 5.174092
## March NA NA 17.16667 NA 7.029286
## April 12.222222 NA 18.00000 NA 11.724280
## May -6.516129 3.904762 10.19643 NA 8.187036
Now that our data is in the correct form, we can bicluster it.
First we need to determine how many groups of months (r) and how many groups of destination airports (c) to define. Since the rows correspond to months in this analysis, setting r = 4 is a sensible starting point: one group for each season/quarter of the year. For the columns (airports) the number of groups is more ambiguous, just as in k-means clustering. Since flights from the US to each continent may take different amounts of time and be subject to different policies, let's set c = 6.
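Before committing to these choices, a quick informal check on c can help (a sketch, not part of the original analysis; the biclustermd() call itself is described just below): fit the biclustering for a few candidate column-cluster counts and compare the Final SSE each printed summary reports.
# Informal sensitivity check: compare the "Final SSE" printed for a few
# candidate column-cluster counts. The algorithm is random, so set a seed
# for reproducibility.
set.seed(29)
for (cc in c(4, 6, 8)) {
  print(biclustermd(data = flights, col_clusters = cc, row_clusters = 4))
}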
Now we're ready to bicluster. I'll put the code first and then give brief descriptions of selected arguments. This will mostly repeat the documentation for biclustermd().
library(biclustermd)
bc <- biclustermd(data = flights, col_clusters = 6, row_clusters = 4)
bc
##
## Data has 1260 values, 11.75% of which are missing
## 19 Iterations
## Initial SSE = 198666; Final SSE = 81587, a 58.9% reduction
## Rand similarity used; Indices: Columns (P) = 1, Rows (Q) = 1
biclustermd() outputs a list of class biclustermd with useful information; this list is used to make plots and training data, for example. Execute ?biclustermd to view the output in detail.
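For a quick overview of what the list contains, you can inspect its top-level structure with base R:
# Show the components of the biclustermd list (names and types only)
str(bc, max.level = 1)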
To plot SSE by iteration, use autoplot.biclustermd_sse(), which plots points by default:
library(ggplot2)
?autoplot.biclustermd_sse
autoplot(bc$SSE)
Next, the cluster similarity measures. If a similarity measure for rows equals one, the row shuffling from the last iteration and the second-to-last iteration are identical; if it equals zero, the two shufflings have nothing in common. The same goes for column similarities. autoplot.biclustermd_sim() plots the Rand indices by iteration.
?autoplot.biclustermd_sim
autoplot(bc$Similarities)
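To make the similarity measure concrete, here is a minimal sketch of the Rand index computed from its definition: the proportion of observation pairs on which two partitions agree (either both place the pair in the same cluster, or both split it). This is purely illustrative and not necessarily how biclustermd computes it internally.
rand_index <- function(p1, p2) {
  # All unordered pairs of observations
  pairs <- combn(length(p1), 2)
  # For each partition: does the pair fall in the same cluster?
  same1 <- p1[pairs[1, ]] == p1[pairs[2, ]]
  same2 <- p2[pairs[1, ]] == p2[pairs[2, ]]
  # Proportion of pairs on which the two partitions agree
  mean(same1 == same2)
}
rand_index(c(1, 1, 2, 2), c(1, 1, 2, 2))  # identical shufflings give 1
rand_index(c(1, 1, 2, 2), c(1, 2, 1, 2))  # disagreeing shufflings give less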
autoplot.biclustermd() makes visual analysis of biclustering results easy by making a heat map of the clustered data. The algorithm does use randomness, so my results may look different from yours.
?autoplot.biclustermd
autoplot(bc) +
scale_fill_viridis_c(na.value = "white") +
labs(x = "Destination Airport", y = "Month", fill = "Average Delay")
Oftentimes it is helpful to run the data through an S-shaped function before plotting. This is easy with autoplot.biclustermd(): set transform_colors = TRUE and specify the constant to scale your data by (with c) before it is run through a standard normal CDF:
autoplot(bc, transform_colors = TRUE, c = 1/10) +
scale_fill_viridis_c(na.value = "white", limits = c(0, 1)) +
labs(x = "Destination Airport", y = "Month", fill = "Average Delay")
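To see what this does to the data, here is a minimal sketch of the transformation as described above (scale by c, then apply the standard normal CDF with pnorm()); the package's internals may differ in detail:
# Scale by c = 1/10, then map through the standard normal CDF
transformed <- pnorm(flights * (1 / 10))
range(transformed, na.rm = TRUE)  # every value now lies strictly in (0, 1)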
To make the results really stand out, we can reorder the row and column groups with the reorder argument:
autoplot(bc, transform_colors = TRUE, c = 1/10, reorder = TRUE) +
scale_fill_viridis_c(na.value = "white", limits = c(0, 1)) +
labs(x = "Destination Airport", y = "Month", fill = "Average Delay")
Since the algorithm uses purposeful randomness, it is recommended that analysts run the biclustering multiple times and keep the result with the smallest SSE. The function rep_biclustermd() does exactly that. Its arguments are the same as those for biclustermd(), with three additions:
1. nrep: the number of times to repeat the biclustering.
2. parallel: a logical indicating whether parallelization should be used. FALSE by default.
3. ncores: the number of cores to parallelize over. Ignored if parallel = FALSE.
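For reference, an equivalent parallel run would look like the sketch below (assuming at least two cores are available); it is not run in this tutorial.
# Repeated biclustering parallelized over 2 cores (core count is an assumption)
# repeated_bc <- rep_biclustermd(flights, nrep = 100, col_clusters = 6,
#                                row_clusters = 4, parallel = TRUE, ncores = 2)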
Below we run biclustering 100 times without using parallelization.
repeated_bc <- rep_biclustermd(flights, nrep = 100, col_clusters = 6, row_clusters = 4)
repeated_bc
## $best_bc
##
## Data has 1260 values, 11.75% of which are missing
## 11 Iterations
## Initial SSE = 165098; Final SSE = 78478, a 52.5% reduction
## Rand similarity used; Indices: Columns (P) = 1, Rows (Q) = 1
##
## $rep_sse
## [1] 81594.84 89461.01 81586.77 90052.25 83048.40 80254.27 89572.42
## [8] 85061.18 81647.70 80254.27 84781.24 82666.69 80116.91 83790.67
## [15] 85336.34 83230.33 85677.37 88656.28 80513.17 84415.88 84991.58
## [22] 87317.68 83763.38 91621.77 82695.95 84612.58 81961.53 82543.57
## [29] 82337.74 89129.23 82609.88 81969.34 83782.45 85936.86 85588.77
## [36] 89826.51 81508.02 91821.06 78478.13 81586.77 83610.23 82940.41
## [43] 82683.98 83069.28 81959.46 82950.75 83550.50 90851.89 88371.18
## [50] 84714.03 88061.86 88520.66 83533.08 85879.47 85569.98 89795.90
## [57] 82063.30 84259.12 87743.41 90069.23 84501.07 89955.58 83806.77
## [64] 83605.14 81676.12 91369.84 84338.25 84971.77 84963.55 82130.68
## [71] 90133.20 83104.80 87229.64 85648.63 89201.13 83562.62 84732.15
## [78] 84713.03 81586.77 80721.26 83364.83 94137.17 82636.03 81634.72
## [85] 91231.94 83077.61 83210.25 81699.21 82405.06 82023.52 83785.14
## [92] 90240.37 82362.39 83224.12 83720.40 83230.33 81530.72 85320.52
## [99] 81573.44 84550.53
##
## $runtime
## user system elapsed
## 21.445 0.293 63.490
repeated_bc$best_bc returns the best biclustering and has the same structure as the output of biclustermd(). repeated_bc$rep_sse is the SSE for each repeat, in order. repeated_bc$runtime provides the user and system CPU times as well as the elapsed time, in seconds. The 100 repeats take approximately 20 seconds of CPU time to complete. Parallelization is quicker given sufficient complexity; however, as nrep gets large, parallelization slows down, since all nrep objects have to be written to memory, versus only the minimum-SSE object when processing serially. These remarks are current as of July 17, 2019.
repeated_bc$runtime
## user system elapsed
## 21.445 0.293 63.490
We can plot the best SSE at each repeat; that is, we take the cumulative minimum of rep_sse and plot it:
plot(cummin(repeated_bc$rep_sse), type = 'o', ylab = 'Cumulative Minimum', xlab = 'Repeat Number')
We'll end this tutorial by making a training dataset from the clustered data using gather.biclustermd(), which gives the name of the row and column each data point comes from, as well as the row and column groups it belongs to.
training <- gather(repeated_bc$best_bc)
training %>% head()
## row_name col_name row_cluster col_cluster bicluster_no value
## 1 January ABQ 1 1 1 NA
## 2 February ABQ 1 1 1 NA
## 3 March ABQ 1 1 1 NA
## 4 April ABQ 1 1 1 12.22222
## 5 January ACK 1 1 1 NA
## 6 February ACK 1 1 1 NA
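Because training is in long format, it plugs directly into standard tools. For example, here's a quick sketch using the dplyr verbs loaded earlier to compute each row-by-column group's average observed delay:
# Mean observed delay within each (row cluster, column cluster) cell
training %>%
  group_by(row_cluster, col_cluster) %>%
  summarise(mean_delay = mean(value, na.rm = TRUE))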
Now we’ll plot cell (1, 4) and fit a linear model to it.
autoplot(repeated_bc$best_bc, row_clusts = 1, col_clusts = 4) +
scale_fill_viridis_c(na.value = "white") +
labs(x = "Destination Airport", y = "Month", fill = "Average Delay")
model <- training %>%
filter(row_cluster == 1, col_cluster == 4) %>%
lm(value ~ row_name + col_name, data = .)
summary(model)
##
## Call:
## lm(formula = value ~ row_name + col_name, data = .)
##
## Residuals:
## Min 1Q Median 3Q Max
## -22.444 -2.712 0.126 2.489 31.746
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 9.3546 3.5028 2.671 0.00856 **
## row_nameFebruary -2.8113 1.4722 -1.910 0.05844 .
## row_nameJanuary -2.3886 1.4479 -1.650 0.10147
## row_nameMarch -4.1389 1.4355 -2.883 0.00462 **
## col_nameAUS 2.7282 4.7925 0.569 0.57019
## col_nameAVL -5.9103 5.8922 -1.003 0.31773
## col_nameBDL 4.0206 4.7925 0.839 0.40308
## col_nameBGR -6.6817 5.8924 -1.134 0.25895
## col_nameBNA 5.8319 4.7925 1.217 0.22591
## col_nameBQN 2.4314 4.7925 0.507 0.61281
## col_nameBTV 5.9935 4.7925 1.251 0.21338
## col_nameBUF 3.6674 4.7925 0.765 0.44556
## col_nameBUR -6.0667 4.7925 -1.266 0.20788
## col_nameBWI 6.3604 4.7925 1.327 0.18684
## col_nameBZN -1.5417 5.1849 -0.297 0.76669
## col_nameCHO -0.9124 5.8924 -0.155 0.87719
## col_nameCLE -1.7669 4.7925 -0.369 0.71298
## col_nameCLT 1.2744 4.7925 0.266 0.79074
## col_nameCMH 3.0317 4.7925 0.633 0.52814
## col_nameDCA 2.2127 4.7925 0.462 0.64508
## col_nameDEN 0.1885 4.7925 0.039 0.96869
## col_nameDTW -1.9800 4.7925 -0.413 0.68020
## col_nameEYW 6.2879 5.1853 1.213 0.22752
## col_nameFLL 2.8696 4.7925 0.599 0.55040
## col_nameHOU 3.2727 4.7925 0.683 0.49593
## col_nameIAD 7.4492 4.7925 1.554 0.12259
## col_nameIND 6.3126 4.7925 1.317 0.19015
## col_nameJAX 7.2084 4.7925 1.504 0.13504
## col_nameMCO -0.6007 4.7925 -0.125 0.90045
## col_nameMDW 2.6073 4.7925 0.544 0.58737
## col_nameMEM 5.0695 4.7925 1.058 0.29216
## col_nameMKE 4.1563 4.7925 0.867 0.38744
## col_nameMSP 0.4937 4.7925 0.103 0.91811
## col_nameMSY 1.6186 4.7925 0.338 0.73613
## col_nameORD -0.1586 4.7925 -0.033 0.97365
## col_nameORF 3.3241 4.7925 0.694 0.48920
## col_namePBI 3.1026 4.7925 0.647 0.51856
## col_namePHL 6.3537 4.7925 1.326 0.18730
## col_namePIT -1.2010 4.7925 -0.251 0.80254
## col_namePSE -4.8101 4.7925 -1.004 0.31745
## col_namePWM 8.4041 4.7925 1.754 0.08191 .
## col_nameRDU 4.9214 4.7925 1.027 0.30643
## col_nameROC 5.8926 4.7925 1.230 0.22114
## col_nameSAT -6.0691 4.7925 -1.266 0.20770
## col_nameSMF 3.2216 4.7925 0.672 0.50267
## col_nameSTL 3.7825 4.7925 0.789 0.43143
## col_nameSYR 1.0378 4.7925 0.217 0.82891
## col_nameTPA 3.2553 4.7925 0.679 0.49821
## col_nameXNA 3.8326 4.7925 0.800 0.42538
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.778 on 127 degrees of freedom
## (12 observations deleted due to missingness)
## Multiple R-squared: 0.3156, Adjusted R-squared: 0.05691
## F-statistic: 1.22 on 48 and 127 DF, p-value: 0.1907
sqrt(mean(resid(model) ^ 2))
## [1] 5.75738
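Since model is an ordinary lm() fit, it can be used for prediction in the usual way. For example (a sketch; March and BNA are simply one month-airport pair belonging to this bicluster):
# Predicted average arrival delay for March flights to BNA
predict(model, newdata = data.frame(row_name = "March", col_name = "BNA"))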
If you have questions, you can email me at johntreisner@gmail.com. If you've found an error, please open an issue.