Currently we use `boundless: false` when fetching COG tiles:

```ts
const tile = await image.fetchTile(x, y, {
  boundless: false,
  pool,
  signal,
});
```
and then we use `clipToImageBounds` to clip, on the CPU, to the valid region of the image:
```ts
/**
 * Clip a decoded tile array to the valid image bounds.
 *
 * Edge tiles in a COG are always encoded at the full tile size, with the
 * out-of-bounds region zero-padded. When `boundless=false` is requested, this
 * function copies only the valid pixel sub-rectangle into a new typed array,
 * returning a `RasterArray` whose `width`/`height` match the actual image
 * content rather than the tile dimensions.
 *
 * Interior tiles (where the tile fits entirely within the image) are returned
 * unchanged.
 */
function clipToImageBounds(
  self: HasTiffReference,
  x: number,
  y: number,
  array: RasterArray,
): RasterArray {
  const { width: clippedWidth, height: clippedHeight } =
    self.image.getTileBounds(x, y);

  // Interior tile — nothing to clip.
  if (clippedWidth === self.tileWidth && clippedHeight === self.tileHeight) {
    return array;
  }

  const clippedMask = array.mask
    ? clipRows(array.mask, self.tileWidth, clippedWidth, clippedHeight, 1)
    : array.mask;

  if (array.layout === "pixel-interleaved") {
    const { count, data } = array;
    const clipped = clipRows(
      data,
      self.tileWidth,
      clippedWidth,
      clippedHeight,
      count,
    );
    return {
      ...array,
      width: clippedWidth,
      height: clippedHeight,
      data: clipped as typeof data,
      mask: clippedMask,
    };
  }

  // band-separate
  const { bands } = array;
  const clippedBands = bands.map(
    (band) =>
      clipRows(
        band,
        self.tileWidth,
        clippedWidth,
        clippedHeight,
        1,
      ) as typeof band,
  );

  return {
    ...array,
    width: clippedWidth,
    height: clippedHeight,
    bands: clippedBands,
    mask: clippedMask,
  };
}
```
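The `clipRows` helper called above isn't shown in this snippet. As a rough sketch of what it presumably does (the name and signature are inferred from the call sites; the body is my assumption, not the actual implementation), it copies the leading `clippedWidth * samplesPerPixel` samples of each valid row out of the zero-padded, row-major tile buffer:

```typescript
// Hypothetical reconstruction of clipRows — signature inferred from the
// call sites above; the body is an assumption, not the real code.
type TypedArray = Uint8Array | Uint16Array | Int16Array | Float32Array | Float64Array;

function clipRows(
  data: TypedArray,
  tileWidth: number,
  clippedWidth: number,
  clippedHeight: number,
  samplesPerPixel: number,
): TypedArray {
  const Ctor = data.constructor as new (n: number) => TypedArray;
  const out = new Ctor(clippedWidth * clippedHeight * samplesPerPixel);
  for (let row = 0; row < clippedHeight; row++) {
    // Copy only the valid left-hand portion of each row, skipping the
    // zero-padded right edge and any fully padded bottom rows.
    const src = row * tileWidth * samplesPerPixel;
    out.set(
      data.subarray(src, src + clippedWidth * samplesPerPixel),
      row * clippedWidth * samplesPerPixel,
    );
  }
  return out;
}
```

For example, clipping a 4×4 single-band tile to a 2×3 valid region keeps samples 0, 1, 4, 5, 8, 9 of the row-major buffer.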
I think we should probably avoid clipping on the CPU and upload the raw tiles as-is. We can then rely on the existing nodata-masking or mask-array masking to discard the padded pixels at render time.
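As a CPU-side analogue of that render-time discard (a hedged sketch only — `isRendered` is a hypothetical name, not an existing function in this codebase), a padded texel is simply dropped when its value matches the declared nodata value:

```typescript
// Hypothetical sketch: the shader-side nodata test, expressed on the CPU.
// A zero-padded edge texel whose value equals the declared nodata value is
// discarded, so padded regions never reach the composited output.
function isRendered(value: number, nodata: number | null): boolean {
  if (nodata === null) return true; // no nodata declared: everything renders
  if (Number.isNaN(nodata)) return !Number.isNaN(value); // NaN-as-nodata case
  return value !== nodata;
}
```

With zero-padded edge tiles, a nodata value of 0 would make the padded region transparent without any CPU-side copying.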
The tradeoff of using `boundless: true` and removing CPU clipping:
- Should be faster, since there is less CPU overhead
- Should use slightly more GPU memory for the regions of the texture that are known to be nodata
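To put a rough number on that extra GPU memory (illustrative arithmetic only — the dimensions below are assumed, not measured from the real layers):

```typescript
// Extra texels uploaded when every edge tile is padded to the full tile
// size, versus uploading only the valid image content.
function paddedOverheadTexels(
  imageWidth: number,
  imageHeight: number,
  tileSize: number,
): number {
  const cols = Math.ceil(imageWidth / tileSize);
  const rows = Math.ceil(imageHeight / tileSize);
  const padded = cols * rows * tileSize * tileSize; // boundless upload size
  return padded - imageWidth * imageHeight; // texels known to be nodata
}

// e.g. a 1000x1000 image with 256px tiles pads up to 1024x1024:
// 1024*1024 - 1000*1000 = 48576 extra texels (~4.6% overhead).
```

The overhead is bounded by one tile-width strip along the right and bottom edges, so it shrinks as a fraction of total memory for larger images.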
In the `MultiCOGLayer` we always use `boundless: true` because we need the `uvTransform` to be consistent across all tiles. This is what produced the warped screenshots in #410, which was fixed in #411.

Code references: `deck.gl-raster/packages/deck.gl-geotiff/src/geotiff/render-pipeline.ts` lines 157–161 (at `72ca8ae`); `deck.gl-raster/packages/geotiff/src/fetch.ts` lines 388–456 (at `47afad5`).