Times: 2026 Mar 12 from 04:30PM to 04:50PM (Central Time (US & Canada))
Abstract:
Sheaf Laplacians generalize graph Laplacians to vector-valued node signals, enabling richer relational models but increasing computational cost. We present a spectral sparsification method for the $0$-dimensional sheaf Laplacian based on leverage-score-style edge sampling, where sampling probabilities come from trace effective resistances and sampled edges are reweighted. The resulting sparse operator preserves the original quadratic form on $(\ker L_{\mathcal F})^\perp$ with high probability: for $\varepsilon\in(0,1)$ and $p_{\mathrm{fail}}\in(0,1)$, we obtain a $(1\pm\varepsilon)$ approximation with probability at least $1-p_{\mathrm{fail}}$. This gives a principled path to faster sheaf diffusion and scalable sheaf-based learning, and supports empirical study of the sparsity–accuracy tradeoff through tunable sampling.
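
A minimal sketch of the sampling scheme described above, under several assumptions not stated in the abstract: the function name `sparsify_sheaf_laplacian`, the dense pseudoinverse used to compute effective resistances, and the oversampling constant are all illustrative choices, not the authors' implementation. It shows the basic pattern of forming coboundary block rows, scoring edges by trace effective resistance, and resampling with reweighting.

```python
import numpy as np

def sparsify_sheaf_laplacian(edges, restrictions, n, d, eps, rng=None):
    """Illustrative sketch: sparsify the 0-dimensional sheaf Laplacian by
    sampling edges with probability proportional to the trace of their
    matrix-valued effective resistance, then reweighting the sampled terms.

    edges        : list of (u, v) node pairs (0-indexed)
    restrictions : list of (F_u, F_v) pairs of d x d restriction maps per edge
    n, d         : number of nodes, stalk dimension
    eps          : target accuracy (controls the number of samples)
    """
    rng = rng or np.random.default_rng()
    m = len(edges)

    # Coboundary block row for edge e: (B_e x) = F_u x_u - F_v x_v.
    def block_row(e):
        (u, v), (Fu, Fv) = edges[e], restrictions[e]
        B = np.zeros((d, n * d))
        B[:, u * d:(u + 1) * d] = Fu
        B[:, v * d:(v + 1) * d] = -Fv
        return B

    # Dense sheaf Laplacian L = sum_e B_e^T B_e (dense only for illustration).
    L = sum(block_row(e).T @ block_row(e) for e in range(m))
    L_pinv = np.linalg.pinv(L)  # pseudoinverse; acts on (ker L)^perp

    # Trace effective resistance per edge: tr(B_e L^+ B_e^T).
    scores = np.array([np.trace(block_row(e) @ L_pinv @ block_row(e).T)
                       for e in range(m)])
    probs = scores / scores.sum()

    # Sample count of order (sum of scores) * log(nd) / eps^2;
    # the constant 4 is an arbitrary illustrative choice.
    q = int(np.ceil(4 * scores.sum() * np.log(n * d) / eps**2))

    # Draw q edges i.i.d. from probs; reweight each term by 1/(q * p_e).
    sampled = rng.choice(m, size=q, p=probs)
    L_sparse = np.zeros_like(L)
    for e in sampled:
        B = block_row(e)
        L_sparse += (1.0 / (q * probs[e])) * (B.T @ B)
    return L_sparse
```

In practice one would avoid forming the dense pseudoinverse (e.g. by approximating effective resistances with a fast solver) and keep only the distinct sampled edges with accumulated weights; the sketch keeps the dense computation to make the reweighting step explicit.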