Research
Samir Awuapara
Solo researcher. Building neural network architectures from scratch in C++, Metal, and CUDA. No frameworks, no PyTorch.
Latest Articles
New · Apr 12, 2026
Organic Cache Locality in Sparse Neural Networks
Dynamically grown DAGs self-optimize their memory layout: 77–89% cache utilization with zero preprocessing.
Apr 2026
Dynamic Topology: Growing a Sparse Network from Scratch
39K → 3M parameters via dynamic growth. A self-assembling sparse DAG trained on a laptop in 20 minutes.
Apr 10, 2026
Atomic-Free Backward Passes on Metal
Eliminating atomics sounds like a win. On Metal, it's a 2.8x slowdown.
Apr 10, 2026
Metal Compute Shaders for Sparse ML
Almost no literature exists on Metal ML kernels. Here's what I learned.
Interested in this research?
I'm open to collaborating with researchers and institutions working on related problems.
Get in Touch