Title: | Core Mathematical Functions for Multi-Objective Optimization |
---|---|
Description: | Fast implementation of mathematical operations and performance metrics for multi-objective optimization, including filtering and ranking of dominated vectors according to Pareto optimality, computation of the empirical attainment function, V.G. da Fonseca, C.M. Fonseca, A.O. Hall (2001) <doi:10.1007/3-540-44719-9_15>, hypervolume metric, C.M. Fonseca, L. Paquete, M. López-Ibáñez (2006) <doi:10.1109/CEC.2006.1688440>, epsilon indicator, inverted generational distance, and Vorob'ev threshold, expectation and deviation, M. Binois, D. Ginsbourger, O. Roustant (2015) <doi:10.1016/j.ejor.2014.07.032>, among others. |
Authors: | Manuel López-Ibáñez [aut, cre] , Carlos Fonseca [ctb], Luís Paquete [ctb], Andreia P. Guerreiro [ctb], Mickaël Binois [ctb], Michael H. Buselli [cph] (AVL-tree library), Wessel Dankers [cph] (AVL-tree library), NumPy Developers [cph] (RNG and ziggurat constants), Jean-Sebastien Roy [cph] (mt19937 library), Makoto Matsumoto [cph] (mt19937 library), Takuji Nishimura [cph] (mt19937 library) |
Maintainer: | Manuel López-Ibáñez <[email protected]> |
License: | LGPL (>= 2) |
Version: | 0.1.2.9000 |
Built: | 2024-10-31 17:15:41 UTC |
Source: | https://github.com/multi-objective/moocore |
"double"
storage mode (base::storage.mode()
).Convert input to a matrix with "double"
storage mode (base::storage.mode()
).
as_double_matrix(x)
x | x is coerced to a numerical matrix().
Convert a list of attainment surfaces to a single EAF data.frame.
attsurf2df(x)
x | (list()) List of data frames, each one representing an attainment surface (as returned by eaf_as_list()).
data.frame()
Data frame with as many columns as objectives and an additional column percentiles.
data(SPEA2relativeRichmond)
attsurfs <- eaf_as_list(eaf(SPEA2relativeRichmond, percentiles = c(0, 50, 100)))
str(attsurfs)
eaf_df <- attsurf2df(attsurfs)
str(eaf_df)
Interactively choose according to empirical attainment function differences
choose_eafdiff(x, left = stop("'left' must be either TRUE or FALSE"))
x | (matrix()) Matrix of EAF differences, e.g., as returned by eafdiff() with rectangles = TRUE.
left | (logical(1)) If TRUE, choose the differences in favor of the left side (x); otherwise, those in favor of the right side (y).
matrix() where the first 4 columns give the coordinates of two corners of each rectangle and the last column gives the positive differences in favor of the chosen side.
extdata_dir <- system.file(package = "moocore", "extdata")
A1 <- read_datasets(file.path(extdata_dir, "wrots_l100w10_dat"))
A2 <- read_datasets(file.path(extdata_dir, "wrots_l10w100_dat"))
# Choose A1
rectangles <- eafdiff(A1, A2, intervals = 5, rectangles = TRUE)
rectangles <- choose_eafdiff(rectangles, left = TRUE)
reference <- c(max(A1[, 1], A2[, 1]), max(A1[, 2], A2[, 2]))
hv_A1 <- sapply(split.data.frame(A1[, 1:2], A1[, 3]), hypervolume, reference = reference)
hv_A2 <- sapply(split.data.frame(A2[, 1:2], A2[, 3]), hypervolume, reference = reference)
print(fivenum(hv_A1))
print(fivenum(hv_A2))
whv_A1 <- sapply(split.data.frame(A1[, 1:2], A1[, 3]), whv_rect,
                 rectangles = rectangles, reference = reference)
whv_A2 <- sapply(split.data.frame(A2[, 1:2], A2[, 3]), whv_rect,
                 rectangles = rectangles, reference = reference)
print(fivenum(whv_A1))
print(fivenum(whv_A2))
Same as eaf() but performs no checks and does not transform the input or the output. This function should be used by other packages that want to avoid redundant checks and transformations.
compute_eaf_call(x, cumsizes, percentiles)
x | (matrix()) Matrix of numerical values with "double" storage mode, where each row gives the coordinates of a point.
cumsizes | (integer()) Cumulative size of the different sets of points in x.
percentiles | (numeric()) Vector of percentiles of the EAF to compute.
data.frame()
A data frame containing the exact representation of the EAF. The last column gives the percentile that corresponds to each point. If groups is not NULL, then an additional column indicates to which group the point belongs.
See also: as_double_matrix(), transform_maximise().
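A sketch of how a caller might prepare the arguments, assuming that cumsizes is the cumulative number of points per set (as described for compute_eafdiff_call()):

x <- read_datasets(file.path(system.file(package = "moocore", "extdata"), "example1_dat"))
pts <- as_double_matrix(x[, 1:2])
cumsizes <- cumsum(table(x[, 3]))  # cumulative set sizes
str(compute_eaf_call(pts, cumsizes = cumsizes, percentiles = 50))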
Same as eafdiff() but performs no checks and does not transform the input or the output. This function should be used by other packages that want to avoid redundant checks and transformations.
compute_eafdiff_call(x, y, cumsizes_x, cumsizes_y, intervals, ret)
x, y | (matrix()) Matrices of numerical values, where each row gives the coordinates of a point.
cumsizes_x, cumsizes_y | Cumulative size of the different sets of points in x and y, respectively.
intervals | (integer(1)) The absolute range of the differences is partitioned into this number of intervals.
ret | (character(1)) Whether to return the EAF differences as points or as rectangles (see Value).
When returning points, a data.frame containing points where there is a transition in the value of the EAF differences. When returning rectangles, a matrix where the first 4 columns give the coordinates of two corners of each rectangle. In both cases, the last column gives the difference in terms of sets in x minus sets in y that attain each point (i.e., negative values are differences in favour of y).
See also: as_double_matrix(), transform_maximise().
The only purpose of this dataset is to provide an example of the use of vorobT() and vorobDev(). It was obtained by fitting two Gaussian processes on 20 observations of a bi-objective problem, then generating conditional simulations of both GPs at different locations and extracting the non-dominated values of coupled simulations.
CPFs
A data frame with 2967 observations on the following 3 variables.
f1
first objective values.
f2
second objective values.
set
indices of corresponding conditional Pareto fronts.
M Binois, D Ginsbourger, O Roustant (2015). “Quantifying uncertainty on Pareto fronts with Gaussian process conditional simulations.” European Journal of Operational Research, 243(2), 386–394. doi:10.1016/j.ejor.2014.07.032.
data(CPFs)
vorobT(CPFs, reference = c(2, 200))
This function computes the EAF given a set of 2D or 3D points and a vector sets that indicates to which set each point belongs.
eaf(x, sets, percentiles = NULL, maximise = FALSE, groups = NULL)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
sets | Vector indicating the set of each point in x. If missing, the last column of x is used.
percentiles | (numeric() or NULL) Vector of percentiles of the EAF to compute. NULL computes all attainment surfaces.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
groups | (factor() or NULL) If not NULL, the EAF is computed separately for each group.
data.frame()
A data frame containing the exact representation of the EAF. The last column gives the percentile that corresponds to each point. If groups is not NULL, then an additional column indicates to which group the point belongs.
There are several examples of data sets in
system.file(package="moocore","extdata")
. The current implementation
only supports two and three dimensional points.
Manuel López-Ibáñez
Viviane Grunert da Fonseca, Carlos M. Fonseca, Andreia O. Hall (2001). “Inferential Performance Assessment of Stochastic Optimisers and the Attainment Function.” In Eckart Zitzler, Kalyanmoy Deb, Lothar Thiele, Carlos A. Coello Coello, David Corne (eds.), Evolutionary Multi-criterion Optimization, EMO 2001, volume 1993 of Lecture Notes in Computer Science, 213–225. Springer, Berlin / Heidelberg. doi:10.1007/3-540-44719-9_15.
Carlos M. Fonseca, Andreia P. Guerreiro, Manuel López-Ibáñez, Luís Paquete (2011). “On the Computation of the Empirical Attainment Function.” In R H C Takahashi, Kalyanmoy Deb, Elizabeth F. Wanner, Salvatore Greco (eds.), Evolutionary Multi-criterion Optimization, EMO 2011, volume 6576 of Lecture Notes in Computer Science, 106–120. Springer, Berlin / Heidelberg. doi:10.1007/978-3-642-19893-9_8.
extdata_path <- system.file(package = "moocore", "extdata")
x <- read_datasets(file.path(extdata_path, "example1_dat"))
# Compute full EAF (sets is the last column)
str(eaf(x))
# Compute only best, median and worst
str(eaf(x[, 1:2], sets = x[, 3], percentiles = c(0, 50, 100)))
x <- read_datasets(file.path(extdata_path, "spherical-250-10-3d.txt"))
y <- read_datasets(file.path(extdata_path, "uniform-250-10-3d.txt"))
x <- rbind(data.frame(x, groups = "spherical"), data.frame(y, groups = "uniform"))
# Compute only median separately for each group
z <- eaf(x[, 1:3], sets = x[, 4], groups = x[, 5], percentiles = 50)
str(z)
Convert an EAF data frame to a list of data frames, where each element of the list is one attainment surface. The function attsurf2df() can be used to convert the list into a single data frame.
eaf_as_list(eaf)
eaf | (data.frame()) Data frame of EAF points, as returned by eaf().
list()
A list of data frames. Each data.frame
represents one attainment surface.
extdata_path <- system.file(package = "moocore", "extdata")
x <- read_datasets(file.path(extdata_path, "example1_dat"))
attsurfs <- eaf_as_list(eaf(x, percentiles = c(0, 50, 100)))
str(attsurfs)
Calculate the differences between the empirical attainment functions of two data sets.
eafdiff(x, y, intervals = NULL, maximise = FALSE, rectangles = FALSE)
x, y | (matrix() or data.frame()) Data corresponding to the left and right sides, respectively. Each has at least three columns, the third one being the set of each point. See also read_datasets().
intervals | (integer(1)) The absolute range of the differences [0, 1] is partitioned into this number of intervals.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
rectangles | (logical(1)) If TRUE, the EAF differences are returned as rectangles instead of points.
This function calculates the differences between the EAFs of two data sets.
With rectangles = FALSE, a data.frame containing points where there is a transition in the value of the EAF differences. With rectangles = TRUE, a matrix where the first 4 columns give the coordinates of two corners of each rectangle. In both cases, the last column gives the difference in terms of sets in x minus sets in y that attain each point (i.e., negative values are differences in favour of y).
A1 <- read_datasets(text = '
 3 2
 2 3

 2.5 1
 1 2

 1 2
')
A2 <- read_datasets(text = '
 4 2.5
 3 3
 2.5 3.5

 3 3
 2.5 3.5

 2 1
')
d <- eafdiff(A1, A2)
str(d)
d
d <- eafdiff(A1, A2, rectangles = TRUE)
str(d)
d
Computes the epsilon metric, either additive or multiplicative.
epsilon_additive(x, reference, maximise = FALSE)
epsilon_mult(x, reference, maximise = FALSE)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
reference | (matrix() or data.frame()) Reference set as a matrix or data frame of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
The epsilon metric of a set $A$ with respect to a reference set $R$ is defined as

$$\epsilon(A, R) = \max_{r \in R} \min_{a \in A} \max_{1 \leq i \leq m} \epsilon_i(a, r)$$

where $a$ and $r$ are objective vectors and, in the case of minimization of objective $i$, $\epsilon_i(a, r)$ is computed as $a_i / r_i$ for the multiplicative variant (respectively, $a_i - r_i$ for the additive variant), whereas in the case of maximization of objective $i$, $\epsilon_i(a, r) = r_i / a_i$ for the multiplicative variant (respectively, $r_i - a_i$ for the additive variant). This allows computing a single value for problems where some objectives are to be maximized while others are to be minimized. Moreover, a lower value corresponds to a better approximation set, independently of the type of problem (minimization, maximization or mixed). However, the meaning of the value is different for each objective type. For example, imagine that objective 1 is to be minimized and objective 2 is to be maximized, and the multiplicative epsilon computed here is $\epsilon(A, R) = 3$. This means that $A$ needs to be multiplied by 1/3 for all $f_1$ values and by 3 for all $f_2$ values in order to weakly dominate $R$. The computation of the multiplicative version for negative values doesn't make sense.
Computation of the epsilon indicator requires $O(m \cdot |A| \cdot |R|)$ operations, where $m$ is the number of objectives (dimension of vectors).
numeric(1)
A single numerical value.
Manuel López-Ibáñez
Eckart Zitzler, Lothar Thiele, Marco Laumanns, Carlos M. Fonseca, Viviane Grunert da Fonseca (2003). “Performance Assessment of Multiobjective Optimizers: an Analysis and Review.” IEEE Transactions on Evolutionary Computation, 7(2), 117–132. doi:10.1109/TEVC.2003.810758.
# Fig 6 from Zitzler et al. (2003).
A1 <- matrix(c(9,2, 8,4, 7,5, 5,6, 4,7), ncol = 2, byrow = TRUE)
A2 <- matrix(c(8,4, 7,5, 5,6, 4,7), ncol = 2, byrow = TRUE)
A3 <- matrix(c(10,4, 9,5, 8,6, 7,7, 6,8), ncol = 2, byrow = TRUE)
if (requireNamespace("graphics", quietly = TRUE)) {
  plot(A1, xlab = expression(f[1]), ylab = expression(f[2]),
       panel.first = grid(nx = NULL), pch = 4, cex = 1.5,
       xlim = c(0, 10), ylim = c(0, 8))
  points(A2, pch = 0, cex = 1.5)
  points(A3, pch = 1, cex = 1.5)
  legend("bottomleft", legend = c("A1", "A2", "A3"), pch = c(4, 0, 1),
         pt.bg = "gray", bg = "white", bty = "n", pt.cex = 1.5, cex = 1.2)
}
epsilon_mult(A1, A3) # A1 epsilon-dominates A3 => e = 9/10 < 1
epsilon_mult(A1, A2) # A1 weakly dominates A2 => e = 1
epsilon_mult(A2, A1) # A2 is epsilon-dominated by A1 => e = 2 > 1
# A more realistic example
extdata_path <- system.file(package = "moocore", "extdata")
path.A1 <- file.path(extdata_path, "ALG_1_dat.xz")
path.A2 <- file.path(extdata_path, "ALG_2_dat.xz")
A1 <- read_datasets(path.A1)[, 1:2]
A2 <- read_datasets(path.A2)[, 1:2]
ref <- filter_dominated(rbind(A1, A2))
epsilon_additive(A1, ref)
epsilon_additive(A2, ref)
# Multiplicative version of epsilon metric
epsilon_mult(A1, ref)
epsilon_mult(A2, ref)
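For intuition, the additive variant under minimisation reduces to the following naive computation (a sketch only; the package uses compiled code):

# eps_add(a, r): smallest value that can be subtracted from every objective of
# every point in a such that a weakly dominates r (minimisation of all objectives).
eps_add <- function(a, r)
  max(apply(r, 1L, function(z) min(apply(a, 1L, function(p) max(p - z)))))
A1 <- matrix(c(9,2, 8,4, 7,5, 5,6, 4,7), ncol = 2, byrow = TRUE)
A2 <- matrix(c(8,4, 7,5, 5,6, 4,7), ncol = 2, byrow = TRUE)
eps_add(A2, A1)  # 2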
Computes the hypervolume contribution of each point given a set of points with respect to a given reference point assuming minimization of all objectives. Dominated points have zero contribution. Duplicated points have zero contribution even if not dominated, because removing one of them does not change the hypervolume dominated by the remaining set.
hv_contributions(x, reference, maximise = FALSE)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
reference | (numeric()) Reference point as a vector of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
numeric()
A numerical vector with one element per row of x, giving the hypervolume contribution of each point.
Manuel López-Ibáñez
Carlos M. Fonseca, Luís Paquete, Manuel López-Ibáñez (2006). “An improved dimension-sweep algorithm for the hypervolume indicator.” In Proceedings of the 2006 Congress on Evolutionary Computation (CEC 2006), 1157–1163. doi:10.1109/CEC.2006.1688440.
Nicola Beume, Carlos M. Fonseca, Manuel López-Ibáñez, Luís Paquete, Jan Vahrenhold (2009). “On the complexity of computing the hypervolume indicator.” IEEE Transactions on Evolutionary Computation, 13(5), 1075–1082. doi:10.1109/TEVC.2009.2015575.
data(SPEA2minstoptimeRichmond)
# The second objective must be maximized
# We calculate the hypervolume contribution of each point of the union of all sets.
hv_contributions(SPEA2minstoptimeRichmond[, 1:2], reference = c(250, 0),
                 maximise = c(FALSE, TRUE))
# Duplicated points show zero contribution above, even if not
# dominated. However, filter_dominated removes all duplicates except
# one. Hence, there are more points below with nonzero contribution.
hv_contributions(filter_dominated(SPEA2minstoptimeRichmond[, 1:2], maximise = c(FALSE, TRUE)),
                 reference = c(250, 0), maximise = c(FALSE, TRUE))
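The definition can also be checked directly for a small nondominated set: the contribution of a point equals the hypervolume lost when that point is removed (a quick sketch using the functions documented here):

A <- matrix(c(9,2, 7,5, 4,7), ncol = 2, byrow = TRUE)
ref <- c(10, 10)
sapply(seq_len(nrow(A)), function(i)
  hypervolume(A, ref) - hypervolume(A[-i, , drop = FALSE], ref))
hv_contributions(A, reference = ref)  # same values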
Results of Hybrid GA on Vanzyl and Richmond water networks
HybridGA
A list with two data frames, each of them with three columns, as produced by read_datasets().
$vanzyl
data frame of results on Vanzyl network
$richmond
data frame of results on Richmond network. The second column is filled with NA.
Manuel López-Ibáñez (2009). Operational Optimisation of Water Distribution Networks. Ph.D. thesis, School of Engineering and the Built Environment, Edinburgh Napier University, UK. https://lopez-ibanez.eu/publications#LopezIbanezPhD.
data(HybridGA)
print(HybridGA$vanzyl)
print(HybridGA$richmond)
Compute the hypervolume metric with respect to a given reference point assuming minimization of all objectives. For 2D and 3D, the algorithm used (Fonseca et al. 2006; Beume et al. 2009) has $O(n \log n)$ complexity. For 4D or higher, the algorithm (Fonseca et al. 2006) has $O(n^{d-2} \log n)$ time and linear space complexity in the worst-case, but experimental results show that the pruning techniques used may reduce the time complexity even further.
hypervolume(x, reference, maximise = FALSE)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
reference | (numeric()) Reference point as a vector of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
numeric(1)
A single numerical value.
Manuel López-Ibáñez
Nicola Beume, Carlos M. Fonseca, Manuel López-Ibáñez, Luís Paquete, Jan Vahrenhold (2009). “On the complexity of computing the hypervolume indicator.” IEEE Transactions on Evolutionary Computation, 13(5), 1075–1082. doi:10.1109/TEVC.2009.2015575.
Carlos M. Fonseca, Luís Paquete, Manuel López-Ibáñez (2006). “An improved dimension-sweep algorithm for the hypervolume indicator.” In Proceedings of the 2006 Congress on Evolutionary Computation (CEC 2006), 1157–1163. doi:10.1109/CEC.2006.1688440.
data(SPEA2minstoptimeRichmond)
# The second objective must be maximized
# We calculate the hypervolume of the union of all sets.
hypervolume(SPEA2minstoptimeRichmond[, 1:2], reference = c(250, 0),
            maximise = c(FALSE, TRUE))
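For intuition only, the 2D case under minimisation can be computed by sweeping the points in order of the first objective. This sketch assumes x is mutually nondominated and strictly dominates the reference point; the package's compiled algorithm handles the general case:

hv2d <- function(x, ref) {
  x <- x[order(x[, 1L]), , drop = FALSE]  # sort by f1; f2 is then decreasing
  # Sum the areas of the horizontal slabs between consecutive f2 values.
  sum((ref[1L] - x[, 1L]) * (c(ref[2L], x[-nrow(x), 2L]) - x[, 2L]))
}
pts <- matrix(c(1,3, 3,1), ncol = 2, byrow = TRUE)
hv2d(pts, ref = c(4, 4))               # 5
hypervolume(pts, reference = c(4, 4))  # 5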
Functions to compute the inverted generational distance (IGD and IGD+) and the averaged Hausdorff distance between nondominated sets of points.
igd(x, reference, maximise = FALSE)
igd_plus(x, reference, maximise = FALSE)
avg_hausdorff_dist(x, reference, maximise = FALSE, p = 1L)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
reference | (matrix() or data.frame()) Reference set as a matrix or data frame of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
p | (integer(1)) Exponent of the average Hausdorff distance (see Details).
The generational distance (GD) of a set $A$ is defined as the distance between each point $a \in A$ and the closest point $r$ in a reference set $R$, averaged over the size of $A$. Formally,

$$GD_p(A, R) = \left( \frac{1}{|A|} \sum_{a \in A} \min_{r \in R} d(a, r)^p \right)^{1/p}$$

where the distance in our implementation is the Euclidean distance:

$$d(a, r) = \sqrt{\sum_{k=1}^m (a_k - r_k)^2}$$

The inverted generational distance (IGD) is calculated as $IGD_p(A, R) = GD_p(R, A)$.
The modified inverted generational distance (IGD+) was proposed by Ishibuchi et al. (2015) to ensure that IGD+ is weakly Pareto compliant, similarly to epsilon_additive() or epsilon_mult(). It modifies the distance measure as:

$$d^+(a, r) = \sqrt{\sum_{k=1}^m (\max\{0, a_k - r_k\})^2}$$
The average Hausdorff distance ($\Delta_p$) was proposed by Schütze et al. (2012) and it is calculated as:

$$\Delta_p(A, R) = \max\{ GD_p(A, R), IGD_p(A, R) \}$$
IGDX (Zhou et al. 2009) is the application of IGD to decision vectors instead of objective vectors to measure closeness and diversity in decision space. One can use the functions igd() or igd_plus() (recommended) directly, just passing the decision vectors as x.
There are different formulations of the GD and IGD metrics in the literature that differ on the value of $p$, on the distance metric used and on whether the term $|A|^{-1}$ is inside (as above) or outside the exponent $1/p$. GD was first proposed by Van Veldhuizen and Lamont (1998) with $p = 2$ and the term $|A|^{-1}$ outside the exponent. IGD seems to have been mentioned first by Coello Coello and Reyes-Sierra (2004); however, some people also used the name D-metric for the same concept with $p = 1$, and later papers have often used IGD/GD with $p = 1$. Schütze et al. (2012) proposed to place the term $|A|^{-1}$ inside the exponent, as in the formulation shown above. This has a significant effect for GD and less so for IGD given a constant reference set. IGD+ also follows this formulation. We refer to Ishibuchi et al. (2015) and Bezerra et al. (2017) for a more detailed historical perspective and a comparison of the various variants.
Following Ishibuchi et al. (2015), we always use $p = 1$ in our implementation of IGD and IGD+ because (1) it is the setting most used in recent works; (2) it makes irrelevant whether the term $|A|^{-1}$ is inside or outside the exponent $1/p$; and (3) the meaning of IGD becomes the average Euclidean distance from each reference point to its nearest objective vector. It is also slightly faster to compute.
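With $p = 1$, the implementation is therefore equivalent to this naive sketch (minimisation assumed; the package uses compiled code):

# Mean Euclidean distance from each reference point to its nearest point in a.
igd1 <- function(a, ref)
  mean(apply(ref, 1L, function(z) min(sqrt(colSums((t(a) - z)^2)))))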
GD should never be used directly to compare the quality of approximations to a Pareto front, as it often contradicts Pareto optimality (it is not weakly Pareto-compliant). We recommend IGD+ instead of IGD, since the latter contradicts Pareto optimality in some cases (see examples below) whereas IGD+ is weakly Pareto-compliant, but we implement IGD here because it is still popular due to historical reasons.
The average Hausdorff distance ($\Delta_p$) is also not weakly Pareto-compliant, as shown in the examples below.
numeric(1)
A single numerical value.
Manuel López-Ibáñez
Leonardo C. T. Bezerra, Manuel López-Ibáñez, Thomas Stützle (2017). “An Empirical Assessment of the Properties of Inverted Generational Distance Indicators on Multi- and Many-objective Optimization.” In Heike Trautmann, Günter Rudolph, Kathrin Klamroth, Oliver Schütze, Margaret M. Wiecek, Yaochu Jin, Christian Grimme (eds.), Evolutionary Multi-criterion Optimization, EMO 2017, volume 10173 of Lecture Notes in Computer Science, 31–45. Springer International Publishing, Cham, Switzerland. doi:10.1007/978-3-319-54157-0_3.
Carlos A. Coello Coello, Margarita Reyes-Sierra (2004). “A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm.” In Raúl Monroy, Gustavo Arroyo-Figueroa, Luis Enrique Sucar, Humberto Sossa (eds.), Proceedings of MICAI, volume 2972 of Lecture Notes in Artificial Intelligence, 688–697. Springer, Heidelberg, Germany.
Hisao Ishibuchi, Hiroyuki Masuda, Yuki Tanigaki, Yusuke Nojima (2015). “Modified Distance Calculation in Generational Distance and Inverted Generational Distance.” In António Gaspar-Cunha, Carlos Henggeler Antunes, Carlos A. Coello Coello (eds.), Evolutionary Multi-criterion Optimization, EMO 2015 Part I, volume 9018 of Lecture Notes in Computer Science, 110–125. Springer, Heidelberg, Germany.
Oliver Schütze, X. Esquivel, A. Lara, Carlos A. Coello Coello (2012). “Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization.” IEEE Transactions on Evolutionary Computation, 16(4), 504–522.
David A. Van Veldhuizen, Gary B. Lamont (1998). “Evolutionary Computation and Convergence to a Pareto Front.” In John R. Koza (ed.), Late Breaking Papers at the Genetic Programming 1998 Conference, 221–228.
A. Zhou, Qingfu Zhang, Yaochu Jin (2009). “Approximating the set of Pareto-optimal solutions in both the decision and objective spaces by an estimation of distribution algorithm.” IEEE Transactions on Evolutionary Computation, 13(5), 1167–1189. doi:10.1109/TEVC.2009.2021467.
# Example 4 from Ishibuchi et al. (2015)
ref <- matrix(c(10,0, 6,1, 2,2, 1,6, 0,10), ncol = 2, byrow = TRUE)
A <- matrix(c(4,2, 3,3, 2,4), ncol = 2, byrow = TRUE)
B <- matrix(c(8,2, 4,4, 2,8), ncol = 2, byrow = TRUE)
if (requireNamespace("graphics", quietly = TRUE)) {
  plot(ref, xlab = expression(f[1]), ylab = expression(f[2]),
       panel.first = grid(nx = NULL), pch = 23, bg = "gray", cex = 1.5)
  points(A, pch = 1, cex = 1.5)
  points(B, pch = 19, cex = 1.5)
  legend("topright", legend = c("Reference", "A", "B"), pch = c(23, 1, 19),
         pt.bg = "gray", bg = "white", bty = "n", pt.cex = 1.5, cex = 1.2)
}
cat("A is better than B in terms of Pareto optimality,\n however, IGD(A)=", igd(A, ref),
    "> IGD(B)=", igd(B, ref),
    "and AvgHausdorff(A)=", avg_hausdorff_dist(A, ref),
    "> AvgHausdorff(B)=", avg_hausdorff_dist(B, ref),
    ", which both contradict Pareto optimality.\nBy contrast, IGD+(A)=", igd_plus(A, ref),
    "< IGD+(B)=", igd_plus(B, ref), ", which is correct.\n")
# A less trivial example.
extdata_path <- system.file(package = "moocore", "extdata")
path.A1 <- file.path(extdata_path, "ALG_1_dat.xz")
path.A2 <- file.path(extdata_path, "ALG_2_dat.xz")
A1 <- read_datasets(path.A1)[, 1:2]
A2 <- read_datasets(path.A2)[, 1:2]
ref <- filter_dominated(rbind(A1, A2))
igd(A1, ref)
igd(A2, ref)
# IGD+ (weakly Pareto compliant)
igd_plus(A1, ref)
igd_plus(A2, ref)
# Average Hausdorff distance
avg_hausdorff_dist(A1, ref)
avg_hausdorff_dist(A2, ref)
Identify nondominated points with is_nondominated() and remove dominated ones with filter_dominated(). pareto_rank() ranks points according to Pareto-optimality, which is also called nondominated sorting (Deb et al. 2002).
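For reference, the underlying relation for minimisation is simply the following (a sketch; the package's compiled implementation also handles maximisation and weak dominance):

dominates <- function(p, q) all(p <= q) && any(p < q)  # p dominates q (minimisation)
dominates(c(1, 2), c(1, 3))  # TRUE
dominates(c(1, 2), c(2, 1))  # FALSE: incomparable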
is_nondominated(x, maximise = FALSE, keep_weakly = FALSE)
filter_dominated(x, maximise = FALSE, keep_weakly = FALSE)
pareto_rank(x, maximise = FALSE)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
keep_weakly | If FALSE, return FALSE for any duplicates of nondominated points.
pareto_rank()
is meant to be used like rank()
, but it
assigns ranks according to Pareto dominance. Duplicated points are kept on
the same front. When ncol(data) == 2
, the code uses the algorithm by Jensen (2003).
is_nondominated()
returns a logical vector of the same length
as the number of rows of data
, where TRUE
means that the
point is not dominated by any other point.
filter_dominated
returns a matrix or data.frame with only mutually nondominated points.
pareto_rank()
returns an integer vector of the same length as
the number of rows of data
, where each value gives the rank of each
point.
Manuel López-Ibáñez
Kalyanmoy Deb, A Pratap, S Agarwal, T Meyarivan (2002).
“A fast and elitist multi-objective genetic algorithm: NSGA-II.”
IEEE Transactions on Evolutionary Computation, 6(2), 182–197.
doi:10.1109/4235.996017.
M. T. Jensen (2003).
“Reducing the run-time complexity of multiobjective EAs: The NSGA-II and other algorithms.”
IEEE Transactions on Evolutionary Computation, 7(5), 503–515.
S <- matrix(c(1,1, 0,1, 1,0, 1,0), ncol = 2, byrow = TRUE)
is_nondominated(S)
is_nondominated(S, maximise = TRUE)
filter_dominated(S)
filter_dominated(S, keep_weakly = TRUE)
path_A1 <- file.path(system.file(package = "moocore"), "extdata", "ALG_1_dat.xz")
set <- read_datasets(path_A1)[, 1:2]
is_nondom <- is_nondominated(set)
cat("There are ", sum(is_nondom), " nondominated points\n")
if (requireNamespace("graphics", quietly = TRUE)) {
  plot(set, col = "blue", type = "p", pch = 20)
  ndset <- filter_dominated(set)
  points(ndset[order(ndset[, 1]), ], col = "red", pch = 21)
}
ranks <- pareto_rank(set)
str(ranks)
if (requireNamespace("graphics", quietly = TRUE)) {
  colors <- colorRampPalette(c("red", "yellow", "springgreen", "royalblue"))(max(ranks))
  plot(set, col = colors[ranks], type = "p", pch = 20)
}
Given a list of datasets, return the indexes of the pair with the largest EAF differences according to the method proposed by Diaz and López-Ibáñez (2021).
largest_eafdiff(x, maximise = FALSE, intervals = 5L, reference, ideal = NULL)
x | (list()) List of datasets, each one a matrix or data frame where the last column indicates the set of each point.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
intervals | (integer(1)) The absolute range of the differences is partitioned into this number of intervals.
reference | (numeric()) Reference point as a vector of numerical values.
ideal | (numeric() or NULL) Ideal point as a vector of numerical values; if NULL, it is computed from the input data.
list()
A list with two components, pair and value.
Juan Esteban Diaz, Manuel López-Ibáñez (2021). “Incorporating Decision-Maker's Preferences into the Automatic Configuration of Bi-Objective Optimisation Algorithms.” European Journal of Operational Research, 289(3), 1209–1222. doi:10.1016/j.ejor.2020.07.059.
# FIXME: This example is too large, we need a smaller one.
data(tpls50x20_1_MWT)
nadir <- apply(tpls50x20_1_MWT[, 2:3], 2L, max)
x <- largest_eafdiff(split.data.frame(tpls50x20_1_MWT[, 2:4], tpls50x20_1_MWT[, 1L]),
                     reference = nadir)
str(x)
Normalise points per coordinate to a range, e.g., c(1,2), where the minimum value will correspond to 1 and the maximum to 2. If bounds are given, they are used for the normalisation.
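Per coordinate, this is the usual linear rescaling. A one-column sketch under minimisation, with hypothetical defaults mirroring the description above:

rescale <- function(z, to_range = c(1, 2), lower = min(z), upper = max(z))
  to_range[1L] + (to_range[2L] - to_range[1L]) * (z - lower) / (upper - lower)
rescale(c(10, 15, 20))  # 1.0 1.5 2.0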
normalise(x, to_range = c(1, 2), lower = NA, upper = NA, maximise = FALSE)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
to_range | (numeric(2)) Normalise each coordinate to this range.
lower, upper | (numeric()) Bounds used for the normalisation. If NA, the minimum and maximum values of each coordinate are used.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
matrix()
A numerical matrix
Manuel López-Ibáñez
data(SPEA2minstoptimeRichmond)
# The second objective must be maximized
head(SPEA2minstoptimeRichmond[, 1:2])
head(normalise(SPEA2minstoptimeRichmond[, 1:2], maximise = c(FALSE, TRUE)))
head(normalise(SPEA2minstoptimeRichmond[, 1:2], to_range = c(0, 1), maximise = c(FALSE, TRUE)))
Combine datasets x and y by row, taking care of making all sets unique.
rbind_datasets(x, y)
x, y | (matrix() or data.frame()) Datasets with at least three columns, the last one being the set of each point.
matrix() or data.frame()
A dataset.
x <- data.frame(f1 = 5:10, f2 = 10:5, set = 1:6)
y <- data.frame(f1 = 15:20, f2 = 20:15, set = 1:6)
rbind_datasets(x, y)
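Conceptually, making the sets unique amounts to offsetting the set column of y, as sketched below (rbind_datasets() takes care of this for you; the exact renumbering is an implementation detail):

y2 <- y
y2$set <- y2$set + max(x$set)  # sets of y become 7..12
rbind(x, y2)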
Reads a text file in table format and creates a matrix from it. The file
may contain several sets, separated by empty lines. Lines starting by
'#'
are considered comments and treated as empty lines. The function
adds an additional column set
to indicate to which set each row
belongs.
read_datasets(file, col_names, text)
file | (character(1)) Filename that contains the data. Each line of the file gives the coordinates of one point.
col_names | (character()) Optional names for the columns of the output.
text | (character()) If file is not supplied, data are read from this character string via a text connection.
matrix()
A numerical matrix of the
data in the file. An extra column set
is added to indicate to
which set each row belongs.
A known limitation is that the input file must use newline characters
native to the host system, otherwise they will be, possibly silently,
misinterpreted. In GNU/Linux the program dos2unix
may be used
to fix newline characters.
There are several examples of data sets in
system.file(package="moocore","extdata")
.
Manuel López-Ibáñez
extdata_path <- system.file(package = "moocore", "extdata")
A1 <- read_datasets(file.path(extdata_path, "ALG_1_dat.xz"))
str(A1)
read_datasets(text = "1 2\n3 4\n\n5 6\n7 8\n", col_names = c("obj1", "obj2"))
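Because comment lines are treated as empty lines, they also act as set separators:

read_datasets(text = "1 2\n3 4\n# comments behave like empty lines\n5 6\n7 8\n")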
Results of SPEA2 when minimising electrical cost and maximising the minimum idle time of pumps on Richmond water network.
SPEA2minstoptimeRichmond
A data frame as produced by read_datasets()
. The second
column measures time in seconds and corresponds to a maximisation problem.
Manuel López-Ibáñez (2009). Operational Optimisation of Water Distribution Networks. Ph.D. thesis, School of Engineering and the Built Environment, Edinburgh Napier University, UK. https://lopez-ibanez.eu/publications#LopezIbanezPhD.
data(SPEA2minstoptimeRichmond)
str(SPEA2minstoptimeRichmond)
Results of SPEA2 with relative time-controlled triggers on Richmond water network.
SPEA2relativeRichmond
A data frame as produced by read_datasets()
.
Manuel López-Ibáñez (2009). Operational Optimisation of Water Distribution Networks. Ph.D. thesis, School of Engineering and the Built Environment, Edinburgh Napier University, UK. https://lopez-ibanez.eu/publications#LopezIbanezPhD.
data(SPEA2relativeRichmond)
str(SPEA2relativeRichmond)
Results of SPEA2 with relative time-controlled triggers on Vanzyl's water network.
SPEA2relativeVanzyl
An object of class data.frame
with 107 rows and 3 columns.
Manuel López-Ibáñez (2009). Operational Optimisation of Water Distribution Networks. Ph.D. thesis, School of Engineering and the Built Environment, Edinburgh Napier University, UK. https://lopez-ibanez.eu/publications#LopezIbanezPhD.
data(SPEA2relativeVanzyl)
str(SPEA2relativeVanzyl)
Various strategies of Two-Phase Local Search applied to the Permutation Flowshop Problem with Makespan and Weighted Tardiness objectives.
tpls50x20_1_MWT
A data frame with 1511 observations of 4 variables:
algorithm
TPLS search strategy
Makespan
first objective values.
WeightedTardiness
second objective values.
set
indices of the corresponding sets.
Jérémie Dubois-Lacoste, Manuel López-Ibáñez, Thomas Stützle (2011). “Improving the Anytime Behavior of Two-Phase Local Search.” Annals of Mathematics and Artificial Intelligence, 61(2), 125–154. doi:10.1007/s10472-011-9235-0.
data(tpls50x20_1_MWT)
str(tpls50x20_1_MWT)
Transform matrix according to maximise parameter
transform_maximise(x, maximise)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
x transformed such that every column where maximise is TRUE is multiplied by -1.
x <- data.frame(f1 = 1:10, f2 = 101:110)
rownames(x) <- letters[1:10]
transform_maximise(x, maximise = c(FALSE, TRUE))
transform_maximise(x, maximise = TRUE)
x <- as.matrix(x)
transform_maximise(x, maximise = c(FALSE, TRUE))
transform_maximise(x, maximise = TRUE)
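Since the transformation just flips signs, applying it twice with the same maximise value recovers the original data (a quick check):

x2 <- transform_maximise(transform_maximise(x, maximise = TRUE), maximise = TRUE)
all.equal(x2, x)  # TRUE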
Compute Vorob'ev threshold, expectation and deviation. Also, displaying the symmetric deviation function is possible. The symmetric deviation function is the probability for a given target in the objective space to belong to the symmetric difference between the Vorob'ev expectation and a realization of the (random) attained set.
vorobT(x, sets, reference, maximise = FALSE)
vorobDev(x, sets, reference, VE = NULL, maximise = FALSE)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
sets | Vector indicating the set of each point in x. If missing, the last column of x is used.
reference | (numeric()) Reference point as a vector of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
VE | (matrix()) Vorob'ev expectation, e.g., as returned by vorobT(); only used by vorobDev().
vorobT() returns a list with elements threshold, VE, and avg_hyp (average hypervolume).
vorobDev() returns the Vorob'ev deviation.
Mickaël Binois
M Binois, D Ginsbourger, O Roustant (2015). “Quantifying uncertainty on Pareto fronts with Gaussian process conditional simulations.” European Journal of Operational Research, 243(2), 386–394. doi:10.1016/j.ejor.2014.07.032.
C. Chevalier (2013), Fast uncertainty reduction strategies relying on Gaussian process models, University of Bern, PhD thesis.
Ilya Molchanov (2005). Theory of Random Sets. Springer.
data(CPFs)
res <- vorobT(CPFs, reference = c(2, 200))
res$threshold
res$avg_hyp
# Now print Vorob'ev deviation
VD <- vorobDev(CPFs, VE = res$VE, reference = c(2, 200))
VD
Return an estimation of the hypervolume of the space dominated by the input data following the procedure described by Auger et al. (2009). A weight distribution describing user preferences may be specified.
whv_hype(x, reference, ideal, maximise = FALSE, dist = "uniform", nsamples = 100000L, seed = NULL, mu = NULL)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
reference | (numeric()) Reference point as a vector of numerical values.
ideal | (numeric()) Ideal point as a vector of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
dist | (character(1)) Weight distribution: "uniform", "point" or "exponential" (see Details).
nsamples | (integer(1)) Number of samples used for Monte Carlo sampling.
seed | (integer(1) or NULL) Random seed for the sampling.
mu | (numeric(1) or numeric()) Parameter of the weight distribution (see Details).
The current implementation only supports 2 objectives.
A weight distribution (Auger et al. 2009) can be provided via the dist
argument. The ones currently supported are:
"uniform"
corresponds to the default hypervolume (unweighted).
"point"
describes a goal in the objective space, where the parameter mu
gives the coordinates of the goal. The resulting weight distribution is a multivariate normal distribution centred at the goal.
"exponential"
describes an exponential distribution with rate parameter 1/mu
, i.e., .
A single numerical value.
Anne Auger, Johannes Bader, Dimo Brockhoff, Eckart Zitzler (2009). “Articulating User Preferences in Many-Objective Problems by Sampling the Weighted Hypervolume.” In Franz Rothlauf (ed.), Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2009, 555–562. ACM Press, New York, NY.
See also: read_datasets(), eafdiff(), whv_rect().
whv_hype(matrix(2, ncol = 2), reference = 4, ideal = 1, seed = 42)
whv_hype(matrix(c(3, 1), ncol = 2), reference = 4, ideal = 1, seed = 42)
whv_hype(matrix(2, ncol = 2), reference = 4, ideal = 1, seed = 42,
         dist = "exponential", mu = 0.2)
whv_hype(matrix(c(3, 1), ncol = 2), reference = 4, ideal = 1, seed = 42,
         dist = "exponential", mu = 0.2)
whv_hype(matrix(2, ncol = 2), reference = 4, ideal = 1, seed = 42,
         dist = "point", mu = c(2.9, 0.9))
whv_hype(matrix(c(3, 1), ncol = 2), reference = 4, ideal = 1, seed = 42,
         dist = "point", mu = c(2.9, 0.9))
Calculates the hypervolume weighted by a set of rectangles (with zero weight
outside the rectangles). The function total_whv_rect()
calculates the
total weighted hypervolume as hypervolume()
+ scalefactor * abs(prod(reference - ideal)) * whv_rect()
. The details of the computation
are given by Diaz and López-Ibáñez (2021).
whv_rect(x, rectangles, reference, maximise = FALSE)
total_whv_rect(x, rectangles, reference, maximise = FALSE, ideal = NULL, scalefactor = 0.1)
x | (matrix() or data.frame()) Matrix or data frame of numerical values, where each row gives the coordinates of a point.
rectangles | (matrix()) Weighted rectangles, e.g., as returned by eafdiff() with rectangles = TRUE: the first 4 columns give the coordinates of two corners of each rectangle and the last column gives the weight.
reference | (numeric()) Reference point as a vector of numerical values.
maximise | (logical()) Whether the objectives must be maximised instead of minimised. Either a single logical value that applies to all objectives or one value per objective.
ideal | (numeric() or NULL) Ideal point as a vector of numerical values; only used by total_whv_rect(). If NULL, it is computed from the input data.
scalefactor | (numeric(1)) Real value within (0, 1] that scales the overall weight of the differences; only used by total_whv_rect().
TODO
numeric(1)
A single numerical value.
Juan Esteban Diaz, Manuel López-Ibáñez (2021). “Incorporating Decision-Maker's Preferences into the Automatic Configuration of Bi-Objective Optimisation Algorithms.” European Journal of Operational Research, 289(3), 1209–1222. doi:10.1016/j.ejor.2020.07.059.
See also: read_datasets(), eafdiff(), choose_eafdiff(), whv_hype().
rectangles <- as.matrix(read.table(header = FALSE, text = '
 1.0 3.0 2.0 Inf 1
 2.0 3.5 2.5 Inf 2
 2.0 3.0 3.0 3.5 3
'))
whv_rect(matrix(2, ncol = 2), rectangles, reference = 6)
whv_rect(matrix(c(2, 1), ncol = 2), rectangles, reference = 6)
whv_rect(matrix(c(1, 2), ncol = 2), rectangles, reference = 6)
total_whv_rect(matrix(2, ncol = 2), rectangles, reference = 6, ideal = c(1, 1))
total_whv_rect(matrix(c(2, 1), ncol = 2), rectangles, reference = 6, ideal = c(1, 1))
total_whv_rect(matrix(c(1, 2), ncol = 2), rectangles, reference = 6, ideal = c(1, 1))
Write data sets to a file in the same format as read_datasets().
write_datasets(x, file = "")
x | (matrix() or data.frame()) Dataset with at least three columns, the last one being the set of each point.
file | Either a character string naming a file or a connection open for writing. "" indicates output to the console.
No return value, called for side effects
See also: utils::write.table(), read_datasets().
x <- read_datasets(text = "1 2\n3 4\n\n5 6\n7 8\n", col_names = c("obj1", "obj2"))
write_datasets(x)