
cuequivariance
PyTorch GPU-accelerated equivariant tensor primitives and layers (SegmentedPolynomial, tensor products, spherical harmonics, equivariant linear layers) for building equivariant neural networks.
Provides GPU-accelerated implementations of equivariant polynomial/tensor operations for PyTorch (imported as cuet). It exposes a SegmentedPolynomial primitive with multiple CUDA backends and a set of high-level torch.nn.Module operations (ChannelWiseTensorProduct, FullyConnectedTensorProduct, Linear, SymmetricContraction, SphericalHarmonics, Rotation/Inversion) plus equivariant layers (BatchNorm, FullyConnectedTensorProductConv). Typical users build equivariant GNNs, message-passing layers, and physics-aware models (e.g., DiffDock, MACE, NEQUIP) that require SO(3)/O(3)-equivariant math on GPU.
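To ground the term "SO(3)-equivariant": an operation f is equivariant when rotating the inputs and then applying f gives the same result as applying f and then rotating the output. A minimal NumPy check of this property (purely illustrative, not using the library itself) uses the cross product, which is equivariant under proper rotations:

```python
import numpy as np

# Rotation by angle t about the z-axis (a proper rotation, det R = +1).
t = 0.7
R = np.array([
    [np.cos(t), -np.sin(t), 0.0],
    [np.sin(t),  np.cos(t), 0.0],
    [0.0,        0.0,       1.0],
])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-0.5, 0.4, 1.2])

# Equivariance of the cross product: R (a x b) == (R a) x (R b).
lhs = R @ np.cross(a, b)
rhs = np.cross(R @ a, R @ b)
assert np.allclose(lhs, rhs)
```

The library's tensor-product modules provide this same commute-with-rotation guarantee for general irreducible-representation inputs, with GPU kernels instead of dense NumPy math.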
Use this skill when you need highly efficient, correct equivariant tensor contractions and layers in PyTorch, especially for GPU-accelerated workloads: converting descriptor polynomials into torch modules, implementing message passing in equivariant GNNs, or exporting models to ONNX/TensorRT. Choose this over handwritten einsum code when you need performance and multiple backend strategies (naive, uniform_1d, fused_tp, indexed_linear).
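For contrast, the handwritten-einsum approach this library supersedes can be sketched in plain NumPy: a channel-wise bilinear contraction with one learned weight per channel, expressed as a single einsum with no kernel fusion or backend selection. The names and shapes here are illustrative only, not the cuet API:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, channels, dim = 8, 16, 3

x = rng.normal(size=(batch, channels, dim))   # node features (hypothetical shapes)
y = rng.normal(size=(batch, channels, dim))   # e.g. edge attributes
w = rng.normal(size=(channels,))              # one weight per channel

# Channel-wise tensor product: each channel of x pairs only with the
# matching channel of y, scaled by its own weight -- one dense einsum.
out = np.einsum('c,bci,bcj->bcij', w, x, y)
assert out.shape == (batch, channels, dim, dim)
```

A handwritten contraction like this materializes the full outer product per channel; the library's backends instead exploit the sparsity of Clebsch-Gordan coefficients and fuse segments into specialized CUDA kernels.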
Likely compatible with developer/engineering agent workflows and local execution CLIs that can run PyTorch code (Copilot/Codex/Gemini CLI style developer agents).
This skill has not yet been reviewed by our automated audit pipeline.