Abstract:
Sheaf Laplacians generalize graph Laplacians to vector-valued node signals, enabling richer relational models at increased computational cost. We present a spectral sparsification method for the $0$-dimensional sheaf Laplacian based on leverage-score edge sampling, with probabilities proportional to trace effective resistances and reweighting of the sampled edges. The resulting sparse operator preserves the original quadratic form on $(\ker L_{\mathcal F})^\perp$ with high probability: for $\varepsilon\in(0,1)$ and $p_{\mathrm{fail}}\in(0,1)$, we obtain a $(1\pm\varepsilon)$ approximation with probability at least $1-p_{\mathrm{fail}}$. This gives a principled path to faster sheaf diffusion and scalable sheaf-based learning, and supports empirical study of the sparsity–accuracy tradeoff through tunable sampling.
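The sampling scheme in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy sheaf (random $d\times d$ restriction maps on a random graph), the oversampling factor `q`, and all helper names are assumptions made for the example. Each edge contributes a block row $B_e$ of the coboundary, the trace effective resistance is $R_e=\operatorname{tr}(B_e L^+ B_e^\top)$, and edges are sampled with probability proportional to $R_e$ and reweighted by $1/(q\,p_e)$ so the sparsifier is unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sheaf: a random graph whose edges carry d x d restriction maps.
d, n = 2, 8
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.6]
maps = {e: (rng.standard_normal((d, d)), rng.standard_normal((d, d))) for e in edges}

def coboundary_row(e):
    """Block row B_e of the sheaf coboundary: +F_u on block u, -F_v on block v."""
    u, v = e
    B = np.zeros((d, n * d))
    Fu, Fv = maps[e]
    B[:, u * d:(u + 1) * d] = Fu
    B[:, v * d:(v + 1) * d] = -Fv
    return B

# Dense sheaf Laplacian L = sum_e B_e^T B_e.
L = sum(coboundary_row(e).T @ coboundary_row(e) for e in edges)

# Trace effective resistances R_e = tr(B_e L^+ B_e^T) as leverage-style scores.
Lp = np.linalg.pinv(L)
R = np.array([np.trace(coboundary_row(e) @ Lp @ coboundary_row(e).T) for e in edges])
p = R / R.sum()

# Sample q edges i.i.d. from p and reweight by 1/(q p_e); the theory ties q
# to eps and p_fail, here q is just a fixed oversampling choice.
q = 4 * len(edges)
idx = rng.choice(len(edges), size=q, p=p)
L_sparse = sum(
    coboundary_row(edges[i]).T @ coboundary_row(edges[i]) / (q * p[i]) for i in idx
)
```

By construction $\mathbb{E}[L_{\text{sparse}}]=L$, and since each sampled term is a scaled $B_e^\top B_e$, the sparsifier stays symmetric positive semidefinite and its kernel contains $\ker L_{\mathcal F}$.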
Scheduled for: 2026-03-12, 4:30 PM, Applied & Data Session #4.3, Heritage Hall Building 104
Status: Accepted
Collection: Applied Topology and Topological Data