feat(skeleton): skeleton line width control and conversion from model units #878

chrisj wants to merge 5 commits into google:master

Conversation
…ls. Migrated skeleton shader editor to control vertex shader instead of fragment shader (match behavior of annotation layer).
…n indicates radius is in "stored model" units
@seankmartin I believe the plan was for Metacell to take this over and implement a cylinder+sphere approach to rendering the skeleton. I had an idea that AI agents could handle it pretty well, and gpt-5.4 came up with a solution that looks good so far: seung-lab@6104b84#diff-31bd13c4f5980b931d888df0a45bf4303b010565bcf488e3021098006a7e5f17 https://github.com/seung-lab/neuroglancer/tree/cj-mesh-skeleton. If you haven't started work on this, I could wrap it up myself.
Thanks @chrisj! I haven't gotten around to starting anything on this one, so I'm happy to review etc. if you want to continue with it. Cheers :)
@jbms There is still interest in scaling annotations based on physical units. Options:

Both options require a choice of how to deal with non-uniform transforms: the skeleton documentation specifies that the transform should be uniform if a radius is given. We can scale using the transformation matrix, or we can add similar documentation indicating that annotations should also have a uniform transform when scaling using physical sizes.
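To make the non-uniform-transform concern concrete, here is a minimal sketch (helper names are illustrative, not Neuroglancer's actual API) that extracts per-axis scale factors from the 3x3 linear part of a model-to-physical transform and rejects transforms that are not uniform enough for radius scaling to be well defined:

```typescript
// Hypothetical helpers: given the 3x3 linear part of a model-to-physical
// transform (column-major), compute per-axis scale factors and check
// whether the transform is uniform enough to scale radii safely.
function axisScales(m: number[]): [number, number, number] {
  // Column i of the matrix is the image of model axis i;
  // its length is the scale applied along that axis.
  const len = (i: number) =>
    Math.hypot(m[3 * i], m[3 * i + 1], m[3 * i + 2]);
  return [len(0), len(1), len(2)];
}

function uniformScaleOrThrow(m: number[], relTol = 1e-3): number {
  const [sx, sy, sz] = axisScales(m);
  const mean = (sx + sy + sz) / 3;
  if (
    Math.abs(sx - mean) > relTol * mean ||
    Math.abs(sy - mean) > relTol * mean ||
    Math.abs(sz - mean) > relTol * mean
  ) {
    throw new Error("Transform is not uniform; radius scaling is ill-defined");
  }
  return mean;
}
```

The alternative design mentioned above (scaling by the full transformation matrix) would instead apply `axisScales` per axis rather than throwing, at the cost of radii that are no longer isotropic.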
A 3-d rendering mode would allow the cross-section views to properly show cross sections of the 3-d shape, and the 3-d view to properly shade the shapes and handle occlusion --- that would not be possible with just a 2-d approach.

If the geometry just gets encoded as a new annotation type (point is already handled by ellipsoid, but there could be a new cylinder type), then transforms and proper clipping will get handled correctly automatically. However, that won't allow using shader code to determine the geometry based on user-defined properties.

Note one caveat with setting the geometry from the shader: when using chunked annotations (e.g. with a precomputed spatial index), Neuroglancer doesn't know the full geometry when selecting the chunks that are needed. That means e.g. a large cylinder may fail to be drawn when its center line is outside the viewport, even though part of the cylinder intersects it.
continuing from: #820
Based on this section from the skeleton documentation:
I've renamed the shader function to `modelToPixels`.
The documentation also indicates:
So I think it is fine to keep the calculation of the scaling factor based on just a single dimension. What is lacking is the ability to use the display dimension scale. If that matters, I think a 3-d approach to rendering skeletons is the right choice.
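As a rough sketch of the single-dimension conversion described above (function and parameter names are illustrative, not the PR's actual code), a stored-model-unit radius can be turned into a pixel width using the scale of one chosen dimension plus the current zoom:

```typescript
// Sketch: convert a radius given in stored model units to screen pixels,
// basing the scale factor on a single dimension as this PR does.
// All names here are hypothetical.
function modelRadiusToPixels(
  radiusModelUnits: number,
  physicalUnitsPerModelUnit: number, // e.g. voxel size along the chosen dimension
  physicalUnitsPerPixel: number, // current zoom of the projection
): number {
  const radiusPhysical = radiusModelUnits * physicalUnitsPerModelUnit;
  return radiusPhysical / physicalUnitsPerPixel;
}
```

With a non-uniform display dimension scale this single factor is only an approximation, which is why a true 3-d rendering mode would be the more principled fix.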
I will look into finding a way to add good tests for this feature.

I left the line annotation code in so we can see what adding annotation support would look like, but I will strip that out of this PR.