---
title: neuroim2 Cookbook
date: '`r Sys.Date()`'
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{neuroim2 Cookbook}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r, echo = FALSE, message = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library(neuroim2)
library(purrr)
library(assertthat)
```
This vignette collects small, task‑oriented examples (“cookbook” style)
for common `NeuroVec` workflows that don’t warrant a full standalone vignette.
We assume basic familiarity with `NeuroVec` from `vignette("NeuroVector")`.
## Reducing a 4D NeuroVec over time into a NeuroVol
Sometimes you want to take a 4D time‑series (`NeuroVec`) and collapse the
time dimension into a single 3D volume by applying a reduction function
to each voxel’s time‑series (e.g., temporal mean, standard deviation, or
any custom summary).
The pattern is:
1. Convert the `NeuroVec` to a voxel × time matrix with `as.matrix()`.
2. Apply a function over the time axis for each voxel (row).
3. Reshape back to 3D and wrap as a `NeuroVol`.
```{r}
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name) # 4D NeuroVec
reduce_ts_to_vol <- function(x, FUN) {
  dm <- dim(x)
  stopifnot(length(dm) == 4)
  mat <- as.matrix(x)        # voxels × time
  vals <- apply(mat, 1, FUN) # one value per voxel
  # Rebuild a 3D volume, dropping the time dimension from the space
  NeuroVol(array(vals, dm[1:3]), drop_dim(space(x)))
}
# Example: temporal mean volume
mean_vol <- reduce_ts_to_vol(vec, mean)
mean_vol
```
You can plug in any summary function that maps a numeric vector to a
single value (e.g. `median`, a trimmed mean, robust summaries, etc.).
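For instance, a voxelwise temporal standard deviation, reusing the
helper defined above:

```{r}
# Temporal standard deviation at each voxel
sd_vol <- reduce_ts_to_vol(vec, sd)
sd_vol
```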
## Splitting a NeuroVec into blocks by an index vector
If you have a single concatenated `NeuroVec` with multiple logical
blocks along the time dimension (e.g., runs or sessions), you can
“un‑concatenate” it into one `NeuroVec` per block using
`split_blocks()`.
Create an index vector of length `dim(vec)[4]` (one entry per
timepoint) indicating which block each timepoint belongs to, then split
the time indices and convert each subset to a sub-vector (this is what
`split_blocks()` does internally):
```{r}
space4 <- NeuroSpace(c(10, 10, 10, 9), c(1, 1, 1))
vec4d <- NeuroVec(array(rnorm(10 * 10 * 10 * 9),
                        dim = c(10, 10, 10, 9)),
                  space4)
# Suppose timepoints 1–3 belong to block 1, 4–6 to block 2, 7–9 to block 3
block_idx <- c(1, 1, 1,
               2, 2, 2,
               3, 3, 3)
idx_list <- split(seq_len(dim(vec4d)[4]), block_idx)
blocks <- lapply(idx_list, function(ii) sub_vector(vec4d, ii))
length(blocks) # 3 blocks
dim(blocks[[1]]) # first block: 10×10×10×3
dim(blocks[[2]]) # second block: 10×10×10×3
dim(blocks[[3]]) # third block: 10×10×10×3
```
The result is a list of `NeuroVec` objects; access elements with
`blocks[[i]]` for block `i`. This is effectively the inverse of
concatenation along the time dimension. The helper `split_blocks()`
exposes this pattern with method dispatch for `NeuroVec`.
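As a minimal sketch (not run), the equivalent one-liner would be, assuming
`split_blocks()` takes the vector and the per-timepoint block index
described above:

```{r, eval = FALSE}
# Equivalent call through the helper; signature assumed from the
# description above
blocks2 <- split_blocks(vec4d, block_idx)
```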
## Converting a NeuroVec to a memory‑mapped MappedNeuroVec
For large 4D datasets or shared‑memory workflows on HPC, it is often
useful to back a `NeuroVec` by a memory‑mapped file using
`MappedNeuroVec`. The `as_mmap()` helper converts common vector types
(`NeuroVec`, `SparseNeuroVec`, `FileBackedNeuroVec`) into a
`MappedNeuroVec`, writing an uncompressed NIfTI file if needed.
```{r}
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name) # DenseNeuroVec in memory
# Convert to a memory-mapped representation backed by a temporary .nii file
mvec <- as_mmap(vec)
mvec
# Or explicitly choose an output file (must be uncompressed for mmap)
tmp_nii <- tempfile(fileext = ".nii")
mvec2 <- as_mmap(vec, file = tmp_nii, overwrite = TRUE)
inherits(mvec2, "MappedNeuroVec")
```
For sparse data (`SparseNeuroVec`), `as_mmap()` first densifies the
vector and then writes a full 4D NIfTI before mapping it, trading a
one-time memory cost for more efficient subsequent access and
multi-process sharing.
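Once mapped, the object supports the usual `NeuroVec` accessors; for
example, pulling a single voxel's time series with `series()`:

```{r}
# Time series at voxel (1, 1, 1), read through the memory map
ts1 <- series(mvec, 1, 1, 1)
length(ts1)
```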
## Mapping a kernel over a 3D NeuroVol with `mapf`
To apply a spatial kernel (e.g. a 3×3×3 smoothing filter) over a 3D
volume, use `mapf()` with a `Kernel` object. This keeps the familiar
`NeuroVol`/`NeuroSpace` metadata while performing neighborhood
computations.
```{r}
bspace <- NeuroSpace(c(10, 10, 10), c(1, 1, 1))
vol <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), bspace)
# A simple 3×3×3 smoothing kernel
kern <- Kernel(c(3, 3, 3), vdim = c(3, 3, 3))
smoothed_vol <- mapf(vol, kern)
smoothed_vol
```
You can also pass a logical mask to restrict computation to a region
while keeping the output volume in the original space.
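A minimal sketch (not run), assuming `mapf()` accepts a `mask` argument
as just described:

```{r, eval = FALSE}
# Restrict the neighborhood computation to voxels inside a mask;
# the `mask` argument is assumed from the description above
mask_vol <- LogicalNeuroVol(as.array(vol) > 0, bspace)
smoothed_masked <- mapf(vol, kern, mask = mask_vol)
```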
## Splitting a NeuroVec into ROIs by voxel clusters
Given voxelwise cluster labels (for example, from a parcellation),
`split_clusters()` can turn a `NeuroVec` into a list of `ROIVec`
objects—one per cluster—each containing the time‑series of voxels in
that cluster.
```{r}
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name)
# Fake cluster labels over the full 3D grid
n_vox <- prod(dim(vec)[1:3])
cl_lab <- sample(1:5, n_vox, replace = TRUE)
roi_list <- split_clusters(vec, cl_lab)
length(roi_list) # number of non-empty clusters
roi_list[[1]] # ROIVec for cluster "1"
coords(roi_list[[1]])[1:5, ] # first few voxel coordinates
dim(values(roi_list[[1]])) # time × voxels in that cluster
```
This is useful when you want to work directly with voxel‑level ROIs per
cluster rather than first building a `ClusteredNeuroVec`.
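For example, given the time × voxels layout of `values()` shown above,
one mean time series per cluster is just a row mean:

```{r}
# One mean time series per cluster (average across voxels)
cluster_means <- lapply(roi_list, function(r) rowMeans(values(r)))
length(cluster_means)      # one entry per cluster
length(cluster_means[[1]]) # one value per timepoint
```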
## Group‑wise voxel reduction with `split_reduce`
When you have a voxel‑wise grouping (e.g. tissue classes, parcels, or
custom regions), `split_reduce()` can compute one summary time‑series
per group directly from a `NeuroVec`.
```{r}
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name)
n_vox <- prod(dim(vec)[1:3])
# Assign each voxel to one of three arbitrary groups
fac <- factor(sample(1:3, n_vox, replace = TRUE))
# Default: mean over voxels in each group (per timepoint)
group_ts <- split_reduce(vec, fac)
dim(group_ts) # groups × timepoints
rownames(group_ts) # "1", "2", "3"
```
Here each row of `group_ts` is the mean time‑series of all voxels in
that group; you can supply any function `FUN` that maps a numeric vector
to a scalar (e.g. `median`, robust summaries, etc.).
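For instance, a per-group median time series:

```{r}
# Median instead of the default mean
group_ts_med <- split_reduce(vec, fac, median)
dim(group_ts_med)
```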
## Concatenating NeuroVols into a NeuroVec
To build a 4D `NeuroVec` from multiple 3D `NeuroVol` objects that share
the same space, use `concat()` along the time dimension.
```{r}
sp3 <- NeuroSpace(c(10, 10, 10), c(1, 1, 1))
vol1 <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), sp3)
vol2 <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), sp3)
vol3 <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), sp3)
# Concatenate volumes into a 4D NeuroVec (time dimension length 3)
vec_3 <- concat(vol1, vol2, vol3)
dim(vec_3) # 10 × 10 × 10 × 3
space(vec_3) # inherits spatial metadata from inputs
```
All input volumes must have identical spatial dimensions and `NeuroSpace`;
otherwise `concat()` will error.
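A quick demonstration of the failure mode, wrapped in `try()` so the
chunk still runs:

```{r}
# A volume in a different (8×8×8) space cannot be concatenated
sp_bad <- NeuroSpace(c(8, 8, 8), c(1, 1, 1))
vol_bad <- NeuroVol(array(rnorm(8 * 8 * 8), c(8, 8, 8)), sp_bad)
try(concat(vol1, vol_bad))
```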
## Concatenating NeuroVecs along the time dimension
You can also concatenate multiple `NeuroVec` objects (e.g. runs or
sessions) into a longer 4D vector, again with `concat()`.
```{r}
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
run1 <- read_vec(file_name) # 4D NeuroVec
run2 <- read_vec(file_name) # same space and shape
# Concatenate timepoints: result has dim(...)[4] = dim(run1)[4] + dim(run2)[4]
run12 <- concat(run1, run2)
dim(run1)
dim(run2)
dim(run12)
```
This is the natural inverse of splitting by blocks: you can break a
long `NeuroVec` into segments with `split_blocks()` and then reassemble
the full series with `concat()`.
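Putting the two halves together, a round trip over the synthetic
`vec4d` from the block-splitting example recovers the original
dimensions:

```{r}
# Split into blocks, then reassemble by concatenation
parts <- lapply(idx_list, function(ii) sub_vector(vec4d, ii))
vec_back <- do.call(concat, unname(parts))
identical(dim(vec_back), dim(vec4d))
```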
## Connected components in a 3D mask
To identify spatially contiguous clusters in a 3D volume or mask, use
`conn_comp()` on a `NeuroVol`. This is handy for summarizing thresholded
statistical maps or cleaning up binary masks.
```{r}
sp <- NeuroSpace(c(10, 10, 10), c(1, 1, 1))
arr <- array(0, c(10, 10, 10))
# Two small 2×2×2 clusters in opposite corners
arr[1:2, 1:2, 1:2] <- 1
arr[8:9, 8:9, 8:9] <- 1
vol <- NeuroVol(arr, sp)
# Find connected components above threshold 0 (26-connectivity by default)
cc <- conn_comp(vol, threshold = 0)
max(cc$index) # number of clusters (should be 2)
cc$size[cc$size > 0] # cluster sizes in voxels
```
You can also request a `cluster_table` of summary statistics or
`local_maxima` for peak finding:
```{r}
cc2 <- conn_comp(vol, threshold = 0, cluster_table = TRUE)
head(cc2$cluster_table)
```
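A common follow-up is to keep only the largest component as a binary
mask. A sketch (not run), assuming `cc$index` behaves as a voxelwise
label volume as used above:

```{r, eval = FALSE}
# Binary mask of the largest connected component; assumes cc$index
# can be coerced to an array of cluster labels
lab <- as.array(cc$index)
largest <- as.integer(names(which.max(table(lab[lab > 0]))))
mask_big <- NeuroVol(array(as.numeric(lab == largest), dim(lab)), sp)
sum(mask_big) # voxel count of the largest component
```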