
feat(skeleton): skeleton line width control and conversion from model units #878

Open

chrisj wants to merge 5 commits into google:master from seung-lab:cj-skeleton-shader-line-width-model-units

Conversation

@chrisj
Contributor

@chrisj chrisj commented Feb 2, 2026

continuing from: #820

Based on this section from the skeleton documentation:

The special vertex attribute id of "radius" may be used to indicate the radius in "stored model" units; it should have a "data_type" of "float32" and "num_components" of 1.

I've renamed the shader function to "modelToPixels".

The documentation also indicates:

If using a "radius" attribute, the scaling applied by "transform" should be uniform

So I think it is fine to keep the calculation of the scaling factor based on just a single dimension. What is lacking is the ability to use the display dimension scale. If that matters, I think a 3d approach to rendering skeletons is the right choice.
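To make the single-dimension approach concrete, here is a minimal sketch of what a model-units-to-pixels conversion could look like. The function names (`uniformScale`, `modelToPixels`) and the column-major 4x4 transform layout are assumptions for illustration, not the actual Neuroglancer implementation:

```typescript
// Sketch only: assumes a column-major 4x4 transform whose scaling is
// uniform, as the skeleton documentation requires when "radius" is used.

// Extract the scale factor from a single dimension (the x-axis column
// of the 3x3 linear part), relying on the uniform-scaling assumption.
function uniformScale(transform: Float64Array): number {
  return Math.hypot(transform[0], transform[1], transform[2]);
}

// Convert a radius in "stored model" units to screen pixels, given the
// projection's current pixels-per-transformed-unit factor (hypothetical
// parameter standing in for the renderer's zoom state).
function modelToPixels(
  radiusModelUnits: number,
  transform: Float64Array,
  pixelsPerTransformedUnit: number,
): number {
  return radiusModelUnits * uniformScale(transform) * pixelsPerTransformedUnit;
}
```

Because the scaling is assumed uniform, reading only one axis of the transform is enough; a non-uniform transform would make this single-dimension factor wrong for the other axes.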

I will look into finding a way to add good tests for this feature.

I left in the line annotation code so we can see what adding annotation support would look like, but I will strip that out of this PR.

…ls. Migrated skeleton shader editor to control vertex shader instead of fragment shader (match behavior of annotation layer).
…n indicates radius is in "stored model" units
@chrisj
Contributor Author

chrisj commented Mar 20, 2026

@seankmartin I believe the plan was for Metacell to take this over and implement a cylinder+sphere approach to rendering the skeleton. I had an idea that AI agents could handle it pretty well, and gpt-5.4 came up with a solution that looks good so far:
https://cj-cap-skel-share-dot-neuroglancer-dot-seung-lab.ue.r.appspot.com/#!middleauth+https://global.daf-apis.com/nglstate/api/v1/6228802970583040

seung-lab@6104b84#diff-31bd13c4f5980b931d888df0a45bf4303b010565bcf488e3021098006a7e5f17

https://github.com/seung-lab/neuroglancer/tree/cj-mesh-skeleton

If you haven't started work on this, I could wrap it up myself.

@seankmartin
Contributor

Thanks @chrisj! Haven't gotten around to starting anything on this one, so happy to review etc if you want to continue with it. Cheers :)

@chrisj
Contributor Author

chrisj commented May 5, 2026

@jbms
We are planning to implement skeleton rendering using cylinders and spheres as mentioned in the previous comment.

There is still interest in scaling annotations based on physical units.

Options:

  1. Create a 3d equivalent rendering mode for each annotation type: circles become spheres, lines become cylinders, polylines become cylinders plus spheres, and bounding boxes are ignored

  2. Use the approach in this PR, providing a modelToPixels or similar shader function

Both options require a choice of how to deal with non-uniform transforms; the skeleton documentation specifies that the transform should be uniform if a radius is given. We can scale using the transformation matrix, or add similar documentation indicating that annotations should also have a uniform transform when scaling using physical sizes.
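One way to handle the non-uniform case is sketched below: compute the per-axis scales from the transform's columns, warn if they disagree, and fall back to their geometric mean. This is illustrative only; the function names and the warning behavior are assumptions, not existing Neuroglancer code:

```typescript
// Per-axis scale factors from the 3x3 linear part of a column-major
// 4x4 transform (each column of the linear part is one model axis).
function axisScales(m: Float64Array): [number, number, number] {
  return [
    Math.hypot(m[0], m[1], m[2]),
    Math.hypot(m[4], m[5], m[6]),
    Math.hypot(m[8], m[9], m[10]),
  ];
}

// Single scale factor for radius scaling. For a uniform transform this
// equals the common axis scale; otherwise the geometric mean is used as
// a best-effort approximation and a warning is emitted.
function radiusScale(m: Float64Array, tolerance = 1e-6): number {
  const [sx, sy, sz] = axisScales(m);
  const mean = Math.cbrt(sx * sy * sz);
  if (
    Math.abs(sx - sy) > tolerance * mean ||
    Math.abs(sy - sz) > tolerance * mean
  ) {
    console.warn("Non-uniform transform; radius scaling is approximate.");
  }
  return mean;
}
```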

@jbms
Collaborator

jbms commented May 5, 2026

A 3-d rendering mode would allow the cross section views to properly show cross sections of the 3-d shape, and the 3d view to properly shade the shapes and handle occlusion; that would not be possible with just a modelToPixels function.

If the geometry just gets encoded as a new annotation type (point is already handled by ellipsoid, but there could be a new cylinder type) then transforms and proper clipping will get handled correctly automatically. However that won't allow using shader code to determine the geometry based on user-defined properties.

Note one caveat with setting the geometry from the shader: when using chunked annotations (e.g. with precomputed spatial index) that means Neuroglancer doesn't know the full geometry when selecting the chunks that are needed, which means e.g. a large cylinder may only get drawn if its center line intersects the viewport, even if part of it intersects the viewport.
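The chunk-selection caveat can be made concrete with a small sketch. If the radius only exists in shader code, the spatial index can compute bounds from the center line alone; knowing the radius up front allows padding those bounds so a wide cylinder is fetched whenever any part of it intersects the viewport. The interface and function names here are illustrative, not actual Neuroglancer APIs:

```typescript
// Axis-aligned bounds, as a spatial index might use for chunk selection.
interface Bounds {
  lower: number[];
  upper: number[];
}

// Bounds of a cylinder's center line only: all the index can know if the
// radius is determined later, inside the shader.
function lineBounds(a: number[], b: number[]): Bounds {
  return {
    lower: a.map((v, i) => Math.min(v, b[i])),
    upper: a.map((v, i) => Math.max(v, b[i])),
  };
}

// Conservative bounds when the radius is part of the annotation geometry:
// pad the center-line bounds by the radius on every axis, so chunks are
// fetched even when only the cylinder's side intersects the viewport.
function cylinderBounds(a: number[], b: number[], radius: number): Bounds {
  const base = lineBounds(a, b);
  return {
    lower: base.lower.map((v) => v - radius),
    upper: base.upper.map((v) => v + radius),
  };
}
```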
