Gasoonjia (Contributor) commented Jan 5, 2026

Stack from ghstack (oldest at bottom):


Add SlimTensor-based implementations of the following basic property-getter AOTI shim functions (sketched below):

1. `aoti_torch_get_data_ptr()` - Returns pointer to tensor data
2. `aoti_torch_get_sizes()` - Returns pointer to sizes array (SlimTensor stores int64_t directly)
3. `aoti_torch_get_strides()` - Returns pointer to strides array (SlimTensor stores int64_t directly)
4. `aoti_torch_get_dtype()` - Returns the scalar type as int32_t
5. `aoti_torch_get_dim()` - Returns the number of dimensions
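
A minimal sketch of the getter pattern, for illustration only. The handle and error types are modeled on the AOTI C shim, and the `SlimTensor` member names (`data_ptr()`, `sizes()`) are assumptions rather than the PR's actual code:

```cpp
#include <cstdint>

// Assumed stand-ins for the AOTI C shim types; the real definitions live in
// the shim headers.
using AtenTensorHandle = void*;
using AOTITorchError = int32_t;
constexpr AOTITorchError kSuccess = 0;

// Hypothetical SlimTensor surface, declared here only so the sketch is
// self-contained.
struct SlimTensor {
  void* data;
  int64_t* sizes_array;  // sizes stored as int64_t directly
  void* data_ptr() const { return data; }
  const int64_t* sizes() const { return sizes_array; }
};

extern "C" AOTITorchError aoti_torch_get_data_ptr(
    AtenTensorHandle tensor, void** ret_data_ptr) {
  auto* t = static_cast<SlimTensor*>(tensor);
  *ret_data_ptr = t->data_ptr();
  return kSuccess;
}

extern "C" AOTITorchError aoti_torch_get_sizes(
    AtenTensorHandle tensor, int64_t** ret_sizes) {
  auto* t = static_cast<SlimTensor*>(tensor);
  // SlimTensor already stores sizes as int64_t, so the internal array can be
  // returned directly with no conversion buffer.
  *ret_sizes = const_cast<int64_t*>(t->sizes());
  return kSuccess;
}
```

The remaining getters (`aoti_torch_get_strides()`, `aoti_torch_get_dtype()`, `aoti_torch_get_dim()`) follow the same shape: cast the opaque handle, read the property, and write it through the out-parameter.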

Key design:
- Create a new `common_shim_slim.h` so the new API can be developed without impacting the current pipeline. `common_shim_slim.{h,cpp}` will replace the current `common_shim.{h,cpp}` once everything is in place.
- Uses `#ifdef CUDA_AVAILABLE` conditional compilation to separate the CUDA-backend implementation from the MPS-backend one, since SlimTensor does not yet support MPS. The branch will be removed once SlimTensor gains MPS support (see the sketch after this list).
- Refactored into a header-only library so the caller's preprocessor flags determine which tensor type is used. This design supports both the CUDA backend (SlimTensor) and the MPS backend (ETensor) from a single library.
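
A sketch of the preprocessor dispatch described above; `CUDA_AVAILABLE` comes from the PR description, while the alias name and the exact `ETensor` spelling are placeholders:

```cpp
// Illustrative only: the caller's preprocessor flags, not a link-time choice,
// decide which tensor type the header-only shim operates on.
#ifdef CUDA_AVAILABLE
// CUDA builds route the shim through SlimTensor.
using ShimTensorType = SlimTensor;
#else
// MPS builds keep using ETensor until SlimTensor gains MPS support.
using ShimTensorType = ETensor;
#endif
```

Since the selection happens at preprocessing time in the caller's translation unit, a single header serves both backends without shipping two compiled variants of the library.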

Differential Revision: [D90126254](https://our.internmc.facebook.com/intern/diff/D90126254/)


pytorch-bot bot commented Jan 5, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16454

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 2 New Failures, 1 Unrelated Failure

As of commit 7676946 with merge base fd6fa87:

NEW FAILURES - The following jobs have failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


github-actions bot commented Jan 5, 2026

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track of changes and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


Labels: CLA Signed, fb-exported, meta-exported
