sCompReorder reorders the component planes of the input map/data and returns an object of class "sReorder". It works by training a new map grid (a sheet-shaped grid with a rectangular lattice) on the component plane vectors, which are either the column-wise vectors of the codebook/data matrix or the covariance matrix thereof. As a result, similar component planes are placed closer to each other. If the data matrix is very large, it is highly recommended to use the trained map (i.e. its codebook matrix) as input in order to save computational cost (illustrated in the sketch below).
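A minimal sketch of that recommendation (hypothetical objects; data stands for any numeric matrix, as created in the Examples below): train a map first and reorder its much smaller codebook matrix rather than passing the large raw matrix directly.

sMap <- sPipeline(data=data)                            # trained map carrying the codebook matrix
sReorder <- sCompReorder(sMap=sMap, amplifier=2, metric="none")   # reorder column-wise codebook vectors
## passing the raw matrix also works, but is costlier for very large data:
## sReorder <- sCompReorder(sMap=data, amplifier=2, metric="none")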
sCompReorder(sMap, xdim = NULL, ydim = NULL, amplifier = NULL,
  metric = c("none", "pearson", "spearman", "kendall", "euclidean",
    "manhattan", "cos", "mi"),
  init = c("linear", "uniform", "sample"),
  algorithm = c("sequential", "batch"),
  alphaType = c("invert", "linear", "power"),
  neighKernel = c("gaussian", "bubble", "cutgaussian", "ep", "gamma"))
For the distance metrics supported by the metric argument, see sDistance for details.
The value returned is an object of class "sReorder", a list with the following components (a short access sketch follows the list):
nHex: the total number of rectangles in the grid
xdim: x-dimension of the grid
ydim: y-dimension of the grid
uOrder: the unique order/placement for each component plane that is reordered onto the "sheet"-shaped grid with a rectangular lattice
coord: a matrix of nHex x 2, with each row corresponding to the coordinates of each "uOrder" rectangle in the 2D map grid
call: the call that produced this result
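The components above can be inspected with the usual $ accessor. A minimal sketch, assuming an object sReorder returned by sCompReorder (as in the Examples below):

sReorder$nHex          # total number of rectangles in the grid
sReorder$xdim          # x-dimension of the grid
sReorder$ydim          # y-dimension of the grid
head(sReorder$uOrder)  # order/placement of each component plane
head(sReorder$coord)   # nHex x 2 matrix of rectangle coordinates
sReorder$call          # the call that produced this result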
All component planes are uniquely placed within a "sheet"-shaped rectangular grid. The size of this grid depends on the input arguments: if both xdim and ydim are given, nHex = xdim*ydim; otherwise, nHex = 5*sqrt(dlen), where dlen is the number of rows of the input data (see the sketch below).
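A rough illustration of this sizing rule (this is not the package's internal code, just the arithmetic of the note above, rounded up for illustration):

dlen <- 100                          # number of rows of the input data
xdim <- NULL; ydim <- NULL           # as in the default call
if (!is.null(xdim) && !is.null(ydim)) {
    nHex <- xdim * ydim              # both dimensions supplied explicitly
} else {
    nHex <- ceiling(5 * sqrt(dlen))  # heuristic when dimensions are omitted
}
nHex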
# 1) generate an iid normal random matrix of 100x10
data <- matrix(rnorm(100*10, mean=0, sd=1), nrow=100, ncol=10)
colnames(data) <- paste(rep('S',10), seq(1:10), sep="")

# 2) get trained using by default setup
sMap <- sPipeline(data=data)

# 3) reorder component planes in different ways
# 3a) directly using column-wise vectors of codebook matrix
sReorder <- sCompReorder(sMap=sMap, amplifier=2, metric="none")

# 3b) according to covariance matrix of pearson correlation of codebook matrix
sReorder <- sCompReorder(sMap=sMap, amplifier=2, metric="pearson")

# 3c) according to covariance matrix of pearson correlation of input matrix
sReorder <- sCompReorder(sMap=data, amplifier=2, metric="pearson")
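The reordered planes are typically passed on to visCompReorder for display. A minimal sketch, assuming the sMap and sReorder objects created above (only the two required arguments are shown; further display arguments are documented in visCompReorder):

# 4) visualise the reordered component planes (minimal call)
visCompReorder(sMap=sMap, sReorder=sReorder)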
See also: sTopology, sPipeline, sBMH, sDistance, visCompReorder