public interface Caches

Cache a RandomAccessibleInterval of the typical NativeType implementations in a memory cell image with volatile cells.
TODO These methods should be in imglib2-cache.

Nested Class Summary

Modifier and Type | Interface and Description |
---|---|
static class | Caches.RandomAccessibleLoader<T extends NativeType<T>>: A simple CellLoader implementation that fills a pre-allocated cell with data from a RandomAccessible source at the same coordinates (see the sketch after this table). |
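The following is a minimal sketch of the copy-at-same-coordinates idea described above, not the actual Caches.RandomAccessibleLoader source. It assumes imglib2-cache's CellLoader and SingleCellArrayImg API; the class and variable names are illustrative only.

```java
import net.imglib2.Cursor;
import net.imglib2.RandomAccess;
import net.imglib2.RandomAccessible;
import net.imglib2.cache.img.CellLoader;
import net.imglib2.cache.img.SingleCellArrayImg;
import net.imglib2.type.NativeType;

/**
 * Illustrative sketch of a cell loader that copies values from a
 * RandomAccessible source into a pre-allocated cell at the same coordinates
 * (not the actual Caches.RandomAccessibleLoader implementation).
 */
public class RandomAccessibleLoaderSketch<T extends NativeType<T>> implements CellLoader<T> {

    private final RandomAccessible<T> source;

    public RandomAccessibleLoaderSketch(final RandomAccessible<T> source) {
        this.source = source;
    }

    @Override
    public void load(final SingleCellArrayImg<T, ?> cell) {
        // walk the pre-allocated cell and copy the source value at each position
        final Cursor<T> target = cell.localizingCursor();
        final RandomAccess<T> src = source.randomAccess();
        while (target.hasNext()) {
            target.fwd();
            src.setPosition(target);
            target.get().set(src.get());
        }
    }
}
```

A loader of this kind is typically handed to imglib2-cache's cell image factories so that cells are filled lazily on first access.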
Method Summary

Modifier and Type | Method and Description |
---|---|
static <T extends NativeType<T>> RandomAccessibleInterval<T> | cache(RandomAccessibleInterval<T> source, int... blockSize): Cache a RandomAccessibleInterval of the typical NativeType implementations in a memory cell image with volatile cells. |
static <T> ArrayList<Future<T>> | preFetch(RandomAccessible<T> source, Interval interval, long[] spacing, ExecutorService exec): Trigger pre-fetching of an Interval in a RandomAccessible by concurrent sampling of values at a sparse grid. |
Method Detail

cache

static <T extends NativeType<T>> RandomAccessibleInterval<T> cache(RandomAccessibleInterval<T> source, int... blockSize)

Cache a RandomAccessibleInterval of the typical NativeType implementations in a memory cell image with volatile cells. The result can be used with non-volatile types for processing, but it can also be wrapped into volatile types for visualization, see VolatileViews.wrapAsVolatile(RandomAccessible).

This is a very naive way to implement this kind of cache, but it serves the purpose of this tutorial. The imglib2-cache library offers more control over caches, and you should try it out.

Type Parameters:
- T
Parameters:
- source
- blockSize
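As a usage illustration (not taken from the tutorial itself), the sketch below caches a lazily evaluated 3D UnsignedByteType image in 64^3 blocks and then wraps it into volatile types for display. The function image, block size, and class name are arbitrary, and the import for Caches is omitted because its package is not shown in this excerpt.

```java
import bdv.util.volatiles.VolatileViews;
import net.imglib2.FinalInterval;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.position.FunctionRandomAccessible;
import net.imglib2.type.numeric.integer.UnsignedByteType;
import net.imglib2.type.volatiles.VolatileUnsignedByteType;
import net.imglib2.view.Views;

public class CacheExample {

    public static void main(final String[] args) {

        // a lazily evaluated stand-in for an expensive-to-compute source
        final FunctionRandomAccessible<UnsignedByteType> fn = new FunctionRandomAccessible<>(
                3,
                (pos, value) -> value.set(
                        (int) (pos.getLongPosition(0) ^ pos.getLongPosition(1) ^ pos.getLongPosition(2)) & 0xff),
                UnsignedByteType::new);
        final RandomAccessibleInterval<UnsignedByteType> source =
                Views.interval(fn, new FinalInterval(512, 512, 512));

        // cache the source in a cell image with 64^3 blocks; cells are filled on first access
        final RandomAccessibleInterval<UnsignedByteType> cached =
                Caches.cache(source, 64, 64, 64);

        // for visualization, wrap the cached image into volatile types
        final RandomAccessibleInterval<VolatileUnsignedByteType> volatileView =
                VolatileViews.wrapAsVolatile(cached);
    }
}
```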
preFetch

static <T> ArrayList<Future<T>> preFetch(RandomAccessible<T> source, Interval interval, long[] spacing, ExecutorService exec) throws InterruptedException, ExecutionException

Trigger pre-fetching of an Interval in a RandomAccessible by concurrent sampling of values at a sparse grid. Pre-fetching is only triggered, and the set of value-sampling Futures is returned so that it can be used to wait for completion or ignored (typically, ignoring it will be best).

This method is most useful to reduce time wasted waiting for high-latency data loaders (such as AWS S3 or Google Cloud). Higher and more random latency benefits from higher parallelism, e.g. total parallelism with Executors.newCachedThreadPool(). Medium-latency loaders may be served better by a limited number of threads, e.g. Executors.newFixedThreadPool(int). The optimal solution also depends on how the rest of the application is parallelized and how much caching memory is available.

We do not suggest using this to fill a CachedCellImg with a generator, because then the ExecutorService would do the complete processing work without any guarantee that the generated cells will persist.

Type Parameters:
- T
Parameters:
- source
- interval
- spacing
- exec
Throws:
- InterruptedException
- ExecutionException
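A hedged sketch of how preFetch could be invoked: openHighLatencySource() is a hypothetical stand-in for an application-specific cached loader (e.g. an S3-backed CachedCellImg), the grid spacing is arbitrary, and the import for Caches is again omitted because its package is not shown here.

```java
import java.util.ArrayList;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import net.imglib2.FinalInterval;
import net.imglib2.Interval;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.type.numeric.integer.UnsignedByteType;

public class PreFetchExample {

    public static void main(final String[] args) throws InterruptedException, ExecutionException {

        // a cached, lazily loaded image; openHighLatencySource() is a hypothetical
        // stand-in for e.g. an S3-backed CachedCellImg
        final RandomAccessibleInterval<UnsignedByteType> cached = openHighLatencySource();

        // high and variable latency: use an unbounded pool as suggested above
        final ExecutorService exec = Executors.newCachedThreadPool();

        // the region that will be displayed or processed next
        final Interval interval = new FinalInterval(new long[] {0, 0, 0}, new long[] {1023, 1023, 63});

        // sample one value per grid point; this touches (and therefore loads)
        // the underlying cells without reading the interval densely
        final ArrayList<Future<UnsignedByteType>> futures =
                Caches.preFetch(cached, interval, new long[] {256, 256, 64}, exec);

        // typically the futures are ignored; here we optionally wait for completion
        for (final Future<UnsignedByteType> f : futures)
            f.get();

        exec.shutdown();
    }

    private static RandomAccessibleInterval<UnsignedByteType> openHighLatencySource() {
        throw new UnsupportedOperationException("stand-in for an application-specific loader");
    }
}
```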
Copyright © 2015–2022 ImgLib2. All rights reserved.