Browsing by Author "Priebe, Carey E."
Now showing 1 - 2 of 2
Item
Discovering Communication Pattern Shifts in Large-Scale Labeled Networks Using Encoder Embedding and Vertex Dynamics
(IEEE Transactions on Network Science and Engineering, 2023-11-29)
Shen, Cencheng; Larson, Jonathan; Trinh, Ha; Qin, Xihan; Park, Youngser; Priebe, Carey E.
Analyzing large-scale time-series network data, such as social media and email communications, poses a significant challenge to understanding social dynamics, detecting anomalies, and predicting trends. In particular, the scalability of graph analysis is a critical hurdle impeding progress in large-scale downstream inference. To address this challenge, we introduce a temporal encoder embedding method. This approach leverages ground-truth or estimated vertex labels, enabling an efficient embedding of large-scale graph data and the processing of billions of edges within minutes. Furthermore, this embedding unveils a temporal dynamic statistic capable of detecting communication pattern shifts at every level, from individual vertices to vertex communities to the overall graph structure. We provide theoretical support confirming its soundness under random graph models, and demonstrate its numerical advantages in capturing evolving communities and identifying outliers. Finally, we showcase the practical application of our approach by analyzing an anonymized time-series communication network from a large organization spanning 2019–2020, enabling us to assess the impact of Covid-19 on workplace communication patterns.

Item
One-Hot Graph Encoder Embedding
(IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-11-28)
Shen, Cencheng; Wang, Qizhe; Priebe, Carey E.
In this paper we propose a lightning-fast graph embedding method called one-hot graph encoder embedding. It has linear computational complexity and the capacity to process billions of edges within minutes on a standard PC, making it an ideal candidate for huge graph processing. It is applicable to either the adjacency matrix or the graph Laplacian, and can be viewed as a transformation of the spectral embedding. Under random graph models, the graph encoder embedding is approximately normally distributed per vertex and asymptotically converges to its mean. We showcase three applications: vertex classification, vertex clustering, and graph bootstrap. In every case, the graph encoder embedding exhibits unrivalled computational advantages.
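The core idea behind the encoder embedding described above can be sketched in a few lines: project the adjacency matrix onto a one-hot label matrix whose columns are scaled by class size, so each vertex's embedding records its normalized connectivity to every community. The following NumPy sketch is illustrative only; the function name and dense-matrix interface are assumptions, not the authors' released implementation, which additionally handles sparse inputs and Laplacian variants.

```python
import numpy as np

def graph_encoder_embedding(A, labels):
    """Illustrative sketch of a one-hot graph encoder embedding.

    A: (n, n) adjacency matrix.
    labels: (n,) integer community labels in 0..K-1 (ground-truth or estimated).

    Builds a one-hot label matrix W with each column scaled by 1/class size,
    then computes Z = A @ W. The cost is linear in the number of edges when
    A is sparse, which is the source of the method's scalability.
    """
    n = A.shape[0]
    K = int(labels.max()) + 1
    counts = np.bincount(labels, minlength=K)  # size of each community
    W = np.zeros((n, K))
    # Row i gets 1/|class of i| in the column of its own label.
    W[np.arange(n), labels] = 1.0 / counts[labels]
    # Row i of Z: vertex i's average connectivity into each community.
    return A @ W
```

On a toy 4-vertex graph with two communities, each row of the resulting (n, K) matrix can be read directly as that vertex's normalized edge density toward each community, which is what makes per-vertex and per-community shift statistics cheap to compute on top of it.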