TPoison: Data-Poisoning Attack against GNN-Based Social Trust Model
Jiahui Zhao,
Nan Jiang,
Kanglu Pei,
Jie Wen,
Hualin Zhan and
Ziang Tu
Additional contact information
Jiahui Zhao: College of Information Engineering, East China Jiaotong University, Nanchang 330013, China
Nan Jiang: College of Information Engineering, East China Jiaotong University, Nanchang 330013, China
Kanglu Pei: School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW 2006, Australia
Jie Wen: College of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
Hualin Zhan: College of Information Engineering, East China Jiaotong University, Nanchang 330013, China
Ziang Tu: College of Information Engineering, East China Jiaotong University, Nanchang 330013, China
Mathematics, 2024, vol. 12, issue 12, 1-16
Abstract:
In online social networks, users can assign trust levels to one another to indicate how much they trust their friends. A variety of methods have improved the prediction of social trust relationships, among them graph neural networks (GNNs); however, adopting GNNs also introduces their vulnerabilities into social trust models. We propose a data-poisoning attack on GNN-based social trust models that exploits the structural characteristics of social trust networks. To keep the changes to the dataset from being detected, we apply a two-sample test for power-law distributions of discrete data, and we use an enhanced surrogate model to generate the poisoned samples. We evaluated the attack on three real-world datasets and compared it with two baseline methods. The experimental results show that our method effectively evades detection and, on all three metrics used, outperforms both baselines across all three datasets. On one of these metrics, our attack reduced the accuracy of the attacked models by 12.6%, 22.8%, and 13.8%, respectively.
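The record does not specify the exact test statistic behind the two-sample test for discrete power-law data. The sketch below illustrates the general idea under stated assumptions: compare the degree sequences of the original and poisoned graphs with a standard two-sample Kolmogorov-Smirnov test, alongside a discrete power-law exponent estimate. The function names, Zipf-distributed synthetic data, and significance threshold are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.stats import ks_2samp

def powerlaw_alpha(degrees, xmin=1):
    """MLE of the discrete power-law exponent (Clauset et al. approximation)."""
    d = np.asarray([x for x in degrees if x >= xmin], dtype=float)
    return 1.0 + len(d) / np.sum(np.log(d / (xmin - 0.5)))

def perturbation_detectable(orig_degrees, poisoned_degrees, alpha_level=0.05):
    """Two-sample KS test on the degree sequences: rejecting the null
    hypothesis (same distribution) means the poisoning is detectable."""
    stat, p_value = ks_2samp(orig_degrees, poisoned_degrees)
    return p_value < alpha_level

# Usage with synthetic, power-law-like degree sequences:
rng = np.random.default_rng(0)
orig = rng.zipf(2.5, size=2000)
poisoned = np.concatenate([orig[:-20], orig[-20:] + 1])  # small perturbation
print(powerlaw_alpha(orig), powerlaw_alpha(poisoned))
print(perturbation_detectable(orig, poisoned))

An attacker in this setting would accept a candidate perturbation only if such a check fails to flag it, which is consistent with the abstract's goal of keeping the poisoned dataset statistically indistinguishable from the original.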
Keywords: social trust model; graph neural network; data-poisoning attack
JEL-codes: C
Date: 2024
Downloads:
https://www.mdpi.com/2227-7390/12/12/1813/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/12/1813/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:12:p:1813-:d:1412745