Commit 7d4a6934 authored by Nathalie Vialaneix

fixed a bug in itemize correction

parent f2682631
1 merge request: !3 Cranfix
@@ -2,17 +2,6 @@
#'
#' Combine multiple kernels into a single meta-kernel
#'
#' @details
#' The argument \code{method} allows one to specify the Unsupervised Multiple
#' Kernel Learning (UMKL) method to use:
#' \item{\code{"STATIS-UMKL"}}{: combines input kernels into the best
#' consensus of all kernels;}
#' \item \code{"full-UMKL"}{: computes a kernel that minimizes the distortion
#' between the meta-kernel and the k-NN graphs obtained from all input
#' kernels;}
#' \item \code{"sparse-UMKL"}{: a sparse variant of the \code{"full-UMKL"}
#' approach.}
#'
#' @param ... list of kernels (called 'blocks') computed on different datasets
#' and measured on the same samples.
#' @param scale boolean. If \code{scale = TRUE}, each block is standardized to
@@ -26,14 +15,25 @@
#' @param rho integer. Parameters for the augmented Lagrangian method. Default:
#' \code{20}.
#'
#' @return \code{combine.kernels} returns an object of classes \code{"kernel"} and
#' \code{"metaKernel"}, a list that contains the following components: \itemize{
#' @return \code{combine.kernels} returns an object of classes \code{"kernel"}
#' and \code{"metaKernel"}, a list that contains the following components:
#' \item{kernel}{: the computed meta-kernel matrix;}
#' \item{X}{: the dataset from which the kernel has been computed, as given by
#' the function \code{\link{compute.kernel}}. Can be \code{NULL} if a kernel
#' matrix was passed to this function;}
#' \item{weights}{: a vector containing the weights used to combine the
#' kernels.}
#'
#' @details
#' The argument \code{method} allows one to specify the Unsupervised Multiple
#' Kernel Learning (UMKL) method to use: \itemize{
#' \item{\code{"STATIS-UMKL"}}{: combines input kernels into the best
#' consensus of all kernels;}
#' \item \code{"full-UMKL"}{: computes a kernel that minimizes the distortion
#' between the meta-kernel and the k-NN graphs obtained from all input
#' kernels;}
#' \item \code{"sparse-UMKL"}{: a sparse variant of the \code{"full-UMKL"}
#' approach.}
#' }
#'
#' @author Jerome Mariette <jerome.mariette@@inrae.fr>
......
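A minimal usage sketch of the interface documented above (not part of the diff). It assumes this is the mixKernel package and reuses the TARAoceans example data referenced in the .Rd examples further down; the block names phychem and pro.phylo are taken from that package and are an assumption here.

# Sketch only: one kernel per data block, then a meta-kernel (mixKernel assumed).
library(mixKernel)
data(TARAoceans)

# Compute one kernel per block, all measured on the same samples.
phychem.kernel   <- compute.kernel(TARAoceans$phychem,   kernel.func = "linear")
pro.phylo.kernel <- compute.kernel(TARAoceans$pro.phylo, kernel.func = "abundance")

# Combine the blocks with one of the three UMKL methods described above.
meta.kernel <- combine.kernels(phychem = phychem.kernel,
                               pro.phylo = pro.phylo.kernel,
                               method = "full-UMKL", knn = 5, rho = 20)

meta.kernel$weights  # the weights used to combine the input kernels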
@@ -11,7 +11,7 @@
#' \code{"gaussian.radial.basis"}, \code{"poisson"} or \code{"phylogenetic"}.
#' Default: \code{"linear"}.
#' @param ... the kernel function arguments. Valid parameters for
#' pre-implemented kernels are:
#' pre-implemented kernels are: \itemize{
#' \item \code{phylogenetic.tree} (\code{"phylogenetic"}): an instance of
#' phylo-class that contains a phylogenetic tree (required).
#' \item \code{scale} (\code{"linear"} or \code{"gaussian.radial.basis"}):
@@ -30,6 +30,7 @@
#' \item \code{normalization} (\code{"poisson"}): character. Can be
#' \code{"deseq"} (more robust), \code{"mle"} (less robust) or
#' \code{"quantile"}.
#' }
#' @param test.pos.semidef boolean. If \code{test.pos.semidef = TRUE}, the
#' positive semidefiniteness of the resulting matrix is checked.
#'
......
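A sketch of how the kernel-specific arguments listed above are forwarded through the ... argument (again assuming the mixKernel interface; otu.counts and my.tree are hypothetical placeholders, not objects appearing in this diff).

# Sketch only: kernel-specific arguments are passed via '...'.
# 'otu.counts' (a count matrix) and 'my.tree' (an ape 'phylo' tree) are
# hypothetical placeholders used for illustration.
lin.kernel <- compute.kernel(TARAoceans$phychem, kernel.func = "linear",
                             scale = TRUE, test.pos.semidef = TRUE)
phy.kernel <- compute.kernel(otu.counts, kernel.func = "phylogenetic",
                             phylogenetic.tree = my.tree)
poi.kernel <- compute.kernel(otu.counts, kernel.func = "poisson",
                             normalization = "deseq")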
@@ -31,22 +31,21 @@ local topology of the datasets from each kernel. Default: \code{5}.}
\code{20}.}
}
\value{
\code{combine.kernels} returns an object of classes \code{"kernel"} and
\code{"metaKernel"}, a list that contains the following components: \itemize{
\code{combine.kernels} returns an object of classes \code{"kernel"}
and \code{"metaKernel"}, a list that contains the following components:
\item{kernel}{: the computed meta-kernel matrix;}
\item{X}{: the dataset from which the kernel has been computed, as given by
the function \code{\link{compute.kernel}}. Can be \code{NULL} if a kernel
matrix was passed to this function;}
\item{weights}{: a vector containing the weights used to combine the
kernels.}
}
kernels.}
}
\description{
Combine multiple kernels into a single meta-kernel
}
\details{
The argument \code{method} allows one to specify the Unsupervised Multiple
Kernel Learning (UMKL) method to use:
Kernel Learning (UMKL) method to use: \itemize{
\item{\code{"STATIS-UMKL"}}{: combines input kernels into the best
consensus of all kernels;}
\item \code{"full-UMKL"}{: computes a kernel that minimizes the distortion
@@ -55,6 +54,7 @@ Kernel Learning (UMKL) method to use:
\item \code{"sparse-UMKL"}{: a sparse variant of the \code{"full-UMKL"}
approach.}
}
}
\examples{
data(TARAoceans)
......
@@ -18,7 +18,7 @@ pre-implemented, that can be used by setting \code{kernel.func} to one of the
Default: \code{"linear"}.}
\item{...}{the kernel function arguments. Valid parameters for
pre-implemented kernels are:
pre-implemented kernels are: \itemize{
\item \code{phylogenetic.tree} (\code{"phylogenetic"}): an instance of
phylo-class that contains a phylogenetic tree (required).
\item \code{scale} (\code{"linear"} or \code{"gaussian.radial.basis"}):
@@ -36,7 +36,8 @@ pre-implemented kernels are:
\code{"cao"}.
\item \code{normalization} (\code{"poisson"}): character. Can be
\code{"deseq"} (more robust), \code{"mle"} (less robust) or
\code{"quantile"}.}
\code{"quantile"}.
}}
\item{test.pos.semidef}{boolean. If \code{test.pos.semidef = TRUE}, the
positive semidefiniteness of the resulting matrix is checked.}
......
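For reference on the markup being corrected (standard Rd syntax, not specific to this package): inside \itemize{}, entries are written as bare \item lines, whereas labelled \item{label}{description} pairs belong in \describe{} blocks, for example:

\itemize{
  \item \code{"full-UMKL"}: computes a kernel that minimizes the distortion
    between the meta-kernel and the k-NN graphs of the input kernels.
}

\describe{
  \item{\code{"full-UMKL"}}{computes a kernel that minimizes the distortion
    between the meta-kernel and the k-NN graphs of the input kernels.}
}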