In this problem, you will perform K-means clustering manually, with K = 2, on a small example with n = 6 observations and p = 2 features. The observations are as follows.

Obs. | \(X_1\) | \(X_2\)
---|---|---
1 | 1 | 4
2 | 1 | 3
3 | 0 | 4
4 | 5 | 1
5 | 6 | 2
6 | 4 | 0
3(a) Plot the observations.
n = 6
X = matrix(c(1, 4,
             1, 3,
             0, 4,
             5, 1,
             6, 2,
             4, 0),
           nrow = n, byrow = TRUE)
plot(X)
3(b) Randomly assign a cluster to each observation. You can use the sample() command in R to do this. Report the cluster labels for each observation.
set.seed(2^17 - 1)
clusters = sample(1:2, n, replace = TRUE)
clusters
## [1] 1 1 1 1 2 1
# red circles for cluster 1, blue triangles for cluster 2
col = rep("red", n)
col[clusters == 2] = "blue"
pch = rep(16, n)
pch[clusters == 2] = 17
plot(X, col = col, pch = pch)
3(c) Compute the centroid for each cluster.
# the centroid of each cluster is the mean of the observations assigned to it
centroids = aggregate(X, list(Cluster = clusters), mean)
centroids
## Cluster V1 V2
## 1 1 2.2 2.4
## 2 2 6.0 2.0
plot(X, col = col, pch = pch)
points(centroids[1,2:3], col = "red", pch = 8)
points(centroids[2,2:3], col = "blue", pch = 8)
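As a check, cluster 1 contains observations 1, 2, 3, 4, and 6, so its centroid is \(((1 + 1 + 0 + 5 + 4)/5, (4 + 3 + 4 + 1 + 0)/5) = (2.2, 2.4)\); cluster 2 contains only observation 5, giving centroid \((6, 2)\). Both match the output above.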
3(d) Assign each observation to the centroid to which it is closest, in terms of Euclidean distance. Report the cluster labels for each observation.
library(class)
# 1-nearest-neighbor classification with the two centroids as the "training
# set" assigns each observation to its nearest centroid in Euclidean distance
clusters = knn(centroids[, 2:3], X, factor(centroids[, 1]))
clusters
## [1] 1 1 1 2 2 2
## Levels: 1 2
col = rep("red", n)
col[clusters == 2] = "blue"
pch = rep(16, n)
pch[clusters == 2] = 17
plot(X, col = col, pch = pch)
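Equivalently, one can skip the knn() shortcut and compute the Euclidean distances directly. This sketch reproduces the labels above by picking, for each observation, the index of the nearest centroid:
# rows 1-2 of the combined matrix are the centroids, rows 3-8 the observations;
# keep the observation-to-centroid block of the distance matrix
dists = as.matrix(dist(rbind(as.matrix(centroids[, 2:3]), X)))[-(1:2), 1:2]
apply(dists, 1, which.min)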
3(e) Repeat steps (c) and (d) until the answers stop changing.
centroids = aggregate(X, list(Cluster = clusters), mean)
centroids
## Cluster V1 V2
## 1 1 0.6666667 3.666667
## 2 2 5.0000000 1.000000
plot(X, col = col, pch = pch)
points(centroids[1,2:3], col = "red", pch = 8)
points(centroids[2,2:3], col = "blue", pch = 8)
clusters = knn(centroids[,2:3], X, factor(centroids[,1]))
clusters
## [1] 1 1 1 2 2 2
## Levels: 1 2
centroids = aggregate(X, list(Cluster = clusters), mean)
centroids
## Cluster V1 V2
## 1 1 0.6666667 3.666667
## 2 2 5.0000000 1.000000
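Since the assignments did not change between the two passes, the algorithm has converged. For completeness, a compact sketch of the same iteration written as a loop, stopping when the cluster labels stop changing:
repeat {
  centroids = aggregate(X, list(Cluster = clusters), mean)
  new.clusters = knn(centroids[, 2:3], X, factor(centroids[, 1]))
  if (all(new.clusters == clusters)) break  # assignments unchanged: converged
  clusters = new.clusters
}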
3(f) In your plot from (a), color the observations according to the cluster labels obtained.
col = rep("red", n)
col[clusters == 2] = "blue"
pch = rep(16, n)
pch[clusters == 2] = 17
plot(X, col = col, pch = pch)
points(centroids[1,2:3], col = "red", pch = 8)
points(centroids[2,2:3], col = "blue", pch = 8)
In this problem, you will generate simulated data, and then perform PCA and K-means clustering on the data.
10(a) Generate a simulated data set with 20 observations in each of three classes (i.e. 60 observations total), and 50 variables.
Hint: There are a number of functions in R that you can use to generate data. One example is the rnorm() function; runif() is another option. Be sure to add a mean shift to the observations in each class so there are three distinct classes.
set.seed(2^17 - 1)
X = rbind(matrix(rnorm(20 * 50), nrow = 20, byrow = TRUE),
          matrix(rnorm(20 * 50), nrow = 20, byrow = TRUE) + 6,
          matrix(rnorm(20 * 50), nrow = 20, byrow = TRUE) + 12)
y = rep(1:3, each = 20)
10(b) Perform PCA on the 60 observations and plot the first two principal component score vectors. Use a different color to indicate the observations in each of the three classes. If the three classes appear separated in this plot, then continue on to part (c). If not, then return to part (a) and modify the simulation so that there is greater separation between the three classes. Do not continue to part (c) until the three classes show at least some separation in the first two principal component score vectors.
pca = prcomp(X)
plot(pca$x[,1:2], col = y)
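A quick check, not required by the problem: with a mean shift of 6 in every one of the 50 variables, adjacent class means are \(6\sqrt{50} \approx 42\) apart, so the shift direction should dominate the first principal component. The proportion of variance explained by the first two components can be checked directly:
pca$sdev[1:2]^2 / sum(pca$sdev^2)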
10(c) Perform K-means clustering of the observations with K = 3. How well do the clusters that you obtained in K-means clustering compare to the true class labels?
Hint: You can use the table() function in R to compare the true class labels to the class labels obtained by clustering. Be careful how you interpret the results: K-means clustering will arbitrarily number the clusters, so you cannot simply check whether the true class labels and clustering labels are the same.
# Note: kmeans() uses a single random initialization by default, so don't
# forget to do one of the following:
# * use hierarchical clustering of a sample to initialize the centroids, or
# * repeat the k-means clustering multiple times and keep the best result
#   (e.g. via the nstart argument; see the sketch below)
km = kmeans(X, centers = 3)
table(y, km$cluster)
##
## y 1 2 3
## 1 0 15 5
## 2 20 0 0
## 3 20 0 0

This run lands in a poor local optimum: classes 2 and 3 are merged into a single cluster, while class 1 is split across the other two.
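As flagged in the comment above, a minimal sketch of the second remedy: the nstart argument of kmeans() reruns the algorithm from multiple random starts and keeps the solution with the lowest total within-cluster sum of squares. With classes this well separated, the confusion table should then be diagonal up to an arbitrary relabeling.
km = kmeans(X, centers = 3, nstart = 20)
table(y, km$cluster)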
10(d) Perform K-means clustering with K = 2. Describe your results.
km = kmeans(X, centers = 2)
table(y, km$cluster)
##
## y 1 2
## 1 20 0
## 2 0 20
## 3 0 20

With K = 2, class 1 gets its own cluster and classes 2 and 3 are merged into the other.
10(e) Now perform K-means clustering with K = 4, and describe your results.
km = kmeans(X, centers = 4)
table(y, km$cluster)
##
## y 1 2 3 4
## 1 15 0 0 5
## 2 0 0 20 0
## 3 0 20 0 0

With K = 4, classes 2 and 3 are each recovered exactly, but class 1 is split across two clusters (15 and 5 observations).
10(f) Now perform K-means clustering with K = 3 on the first two principal component score vectors, rather than on the raw data. That is, perform K-means clustering on the 60 x 2 matrix of which the first column is the first principal component score vector, and the second column is the second principal component score vector. Comment on the results.
km = kmeans(pca$x[,1:2], centers = 3)
table(y, km$cluster)
##
## y 1 2 3
## 1 0 0 20
## 2 20 0 0
## 3 0 20 0

Clustering on the first two principal component score vectors recovers the three classes perfectly, up to an arbitrary relabeling of the clusters.
10(g) Using the scale() function, perform K-means clustering with K = 3 on the data after scaling each variable to have standard deviation one. How do these results compare to those obtained in (b)? Explain.
X.scaled = scale(X)
km = kmeans(X.scaled, centers = 3)
table(y, km$cluster)
##
## y 1 2 3
## 1 20 0 0
## 2 0 20 0
## 3 0 0 20

Scaling makes little difference here: every variable carries the same between-class mean shift, so scale() shrinks all columns by roughly the same factor and the cluster geometry is unchanged. The three classes are recovered perfectly, consistent with the clear separation seen in the plot from (b).
setwd("C:/Data/Insurance")
# columns 1 and 5 are categorical, so read them as factors and the rest as numeric
colClasses = rep("numeric", 85)
colClasses[1] = "factor"
colClasses[5] = "factor"
trn_X = read.table("trn_X.tsv", colClasses = colClasses)
trn_y = scan("trn_y.txt")
tst_X = read.table("tst_X.tsv", colClasses = colClasses)
# one-hot encode the factors consistently across train and test by building
# the model matrix on the combined data, then splitting it back apart
X = rbind(trn_X, tst_X)
X_mm = model.matrix(~ 0 + ., data = X)
trn_X = X_mm[1:5822, ]
tst_X = X_mm[5823:9822, ]
set.seed(2^17-1)
library(xgboost)
AverageAmongTopP = function(predictions, dtrain) {
  # mean of the true labels among the 20% of cases with the highest
  # predicted scores; used as the xgboost evaluation metric below
  labels = getinfo(dtrain, "label")
  ordering = order(predictions, decreasing = TRUE)
  count = round(0.2 * length(predictions))
  return(list(metric = "AverageAmongTopP", value = mean(labels[ordering[1:count]])))
}
model = xgboost(trn_X, trn_y, nrounds = 200,
                params = list(eta = 0.05,
                              max_depth = 2,
                              subsample = 0.7,
                              colsample_bytree = 0.8,
                              objective = "binary:logistic",
                              eval_metric = AverageAmongTopP))
## [1] train-AverageAmongTopP:0.134021
## [2] train-AverageAmongTopP:0.152062
## [3] train-AverageAmongTopP:0.153780
## [4] train-AverageAmongTopP:0.154639
## [5] train-AverageAmongTopP:0.153780
## ...
## [100] train-AverageAmongTopP:0.178694
## ...
## [197] train-AverageAmongTopP:0.195017
## [198] train-AverageAmongTopP:0.195017
## [199] train-AverageAmongTopP:0.195017
## [200] train-AverageAmongTopP:0.195017
predictions = predict(model, tst_X)
output = data.frame(Id = 1:length(predictions), Predicted = predictions)
write.csv(output, "predictions.csv", quote = FALSE, row.names = FALSE)
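As a final sanity check (a sketch, not part of the submission), the same top-20% metric can be computed directly on the fitted training probabilities; the result should agree with the last train-AverageAmongTopP value in the log above.
trn_pred = predict(model, trn_X)
top = order(trn_pred, decreasing = TRUE)[1:round(0.2 * length(trn_pred))]
mean(trn_y[top])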