New Large Language Model Can Protect Social Media Users' Privacy

Social media users may need to think twice before hitting that “Post” button.

A new large language model (LLM) developed by Georgia Tech researchers can help users filter out content that could put their privacy at risk and suggest alternative phrasing that keeps the meaning of their posts intact.

According to a new paper that will be presented at the 2024 Association for Computational Linguistics (ACL) conference, social media users should be careful about the information they self-disclose in their posts.

Many people use social media to express their feelings about their experiences without realizing the risks to their privacy. For example, a person who reveals their gender identity or sexual orientation may be subject to doxing and harassment from outside parties.

Others want to express their opinions without their employers or families knowing.

Ph.D. student Yao Dou and associate professors Alan Ritter and Wei Xu originally set out to study user awareness of self-disclosure privacy risks on Reddit. Working with anonymous users, they created an LLM to detect at-risk content.
Read more at cc.gatech.edu
