Securing change—the fight to protect our online space

One thing we can all agree on is that social media has drastically altered the way we interact with each other and the world around us. This societal shift has highlighted the need for awareness of the risks that accompany it. As quickly as the landscape of social interaction changes, new threats emerge.

While there have been extensive studies of traditional methods for reaching a mass audience, such as television, print and search engine advertising, there remains a significant gap in our understanding of how vulnerable social systems are to collective attention threats.

Dr. James Caverlee, associate professor in the Department of Computer Science and Engineering at Texas A&M University, is devoted to creating a world where every online interaction can be trusted, with assurances on who and what you are dealing with.

Collective attention threats ride on events such as breaking news, viral videos and popular memes, which can quickly spread misinformation, propaganda and malware. Because of this, users are, like never before, involuntary accomplices in the spread and success of these new hazards.

“It is imperative to develop new techniques to detect, analyze, model and defend against collective attention threats in large-scale social systems,” Caverlee said. “The overarching research goal of this project is to develop the framework, algorithms and systems for analyzing, modeling and defending against emergent collective attention threats in large-scale social systems.”

Since users typically depend on system operators for protection, Caverlee and his team are building a threat awareness application that will serve as an early-warning system for users. This countermeasure will prevent or mitigate the effects of these potential threats.

“YouTube, itself, is responsible for monitoring and expelling videos that are conduits to spam and malware; Twitter attempts to block spam accounts and messages once it collects sufficient evidence,” Caverlee said. “This one-size-fits-all method ignores individual risk profiles and suffers from either blocking too much content or allowing all content. Instead, we propose to develop a personalized awareness app that will communicate to each user their exposure to collective attention threats.”

When a user opens their Twitter timeline, for instance, the app will highlight tweets that are associated with a threat, giving each user more control over their social experience.

The idea is that the app may be able to sample evidence of collective attention threats early in the lifecycle of a collective attention phenomenon, for example, by sampling and labeling spam tweets from a trending topic. Based on this early evidence, the app will be able to identify and eliminate developing threats.
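To make the sample-and-label idea concrete, here is a minimal sketch, not the team's actual system: it assumes a hypothetical list of spam signal words and scores each sampled tweet by how many of its words match, then flags the batch when the suspicious share crosses a threshold. A real detector would learn its signals from labeled data rather than hard-code them.

```python
# Hypothetical spam signals for illustration only; a deployed system
# would learn these from labeled examples as threats evolve.
SPAM_SIGNALS = {"free", "click", "winner", "http://bit.ly"}

def score_tweet(text: str) -> float:
    """Fraction of words in the tweet that match known spam signals."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SPAM_SIGNALS)
    return hits / len(words)

def flag_early_threats(sampled_tweets, threshold=0.25):
    """Label each sampled tweet from a trending topic as suspicious or
    not, and report what share of the sample looks like spam."""
    labels = [score_tweet(t) >= threshold for t in sampled_tweets]
    spam_ratio = sum(labels) / len(labels)
    return labels, spam_ratio
```

The per-tweet labels could drive the timeline highlighting described above, while the batch-level ratio is the kind of early signal that could trigger a warning before a threat peaks.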

Because these threats grow and change so rapidly, Caverlee recognizes the need for a continually upgraded design. His team will provide its initial thoughts on the most relevant features influencing and predicting threats, and will continue to explore the most computationally efficient features in order to maintain responsiveness.
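To illustrate the trade-off between predictive and computationally efficient features, the sketch below extracts only lightweight signals, each computable in a single pass over the tweet text with no network calls, and combines them with a simple linear scorer. The feature names and weights here are assumptions for illustration, not the team's published feature set.

```python
def cheap_features(tweet: dict) -> dict:
    """Extract computationally light per-tweet features (illustrative).
    Everything here is O(length of text), so it can run responsively
    on every tweet in a user's timeline."""
    text = tweet.get("text", "")
    return {
        "num_urls": text.count("http"),   # embedded links
        "num_hashtags": text.count("#"),  # piggybacking on trending topics
        "num_mentions": text.count("@"),  # mass-mention spam
        "text_length": len(text),
        "account_age_days": tweet.get("account_age_days", 0),
    }

def threat_score(features: dict, weights: dict) -> float:
    """Simple weighted sum; the weights would be re-learned as the
    threat landscape changes, per the continually upgraded design."""
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())
```

The design choice worth noting is that feature extraction and scoring are separated: cheap features stay fixed in the client, while updated weights can be shipped as threats evolve.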

Caverlee began studying this topic in the mid-2000s by looking at web spam. That led to the study of emerging social systems, such as Facebook and Twitter, and the creation of “social honeypots” to lure social spammers and content polluters. Taking this further, Caverlee has also studied how online spaces can be manipulated by online campaigns and crowdsourced attacks.

“The ultimate goal is to build a scientific foundation for the deep understanding of these new threats, including new algorithms, frameworks and systems; give companies new tools to fight back against threats within their systems; and to give users themselves new power to make sense of their online experience,” Caverlee said.