
Social Media: Personalizing Interfaces Based on Individual Differences Can Positively Alter User Experience

According to a recently published study, a person's distrust of other humans predicts greater trust in artificial intelligence's ability to moderate content online. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.

“We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”

The study, “Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation,” published in the journal New Media & Society, found that users with a conservative political ideology were more likely to trust AI-powered moderation. Lead author Maria D. Molina and coauthor Sundar, who also co-directs Penn State’s Media Effects Research Laboratory, said this may stem from a distrust of mainstream media and social media companies.

The study also found that “power users,” experienced users of information technology, had the opposite tendency: they trusted the AI moderators less because they believe machines lack the ability to detect the nuances of human language.

More broadly, individual differences such as distrust of others and power usage predicted whether users invoked the positive or negative characteristics of machines when faced with an AI-based system for content moderation, which in turn shaped their trust in that system.

The researchers suggest that personalizing interfaces based on these individual differences can positively alter the user experience. The type of content moderation examined in the study involved monitoring social media posts for problematic content such as hate speech and suicidal ideation.
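For readers unfamiliar with how such moderation passes work, here is a minimal sketch in Python. The cue lists, category names, and keyword-matching rule are illustrative assumptions only; the article does not describe the study's actual classifier, and production systems use trained models rather than keyword lookups.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical cue lists for the two categories named in the study.
# A real moderation system would use a trained classifier instead.
CUES = {
    "hate_speech": {"<hateful phrase>"},                  # placeholder
    "suicidal_ideation": {"want to die", "end it all"},   # placeholder
}

@dataclass
class ModerationResult:
    post: str
    category: Optional[str]  # which definition the post matched, if any
    flagged: bool

def moderate(post: str) -> ModerationResult:
    """Flag a post if it contains any cue for a monitored category."""
    text = post.lower()
    for category, cues in CUES.items():
        if any(cue in text for cue in cues):
            return ModerationResult(post, category, True)
    return ModerationResult(post, None, False)

if __name__ == "__main__":
    for post in ["I just want to end it all.", "Nice weather today."]:
        result = moderate(post)
        print(result.flagged, result.category, "-", post)
```

However the flag is produced, what the study varied is whose decision it was presented as, which is where the trust effects appeared.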

The researchers recruited 676 participants in the United States, who were told they were helping to test a content-moderation system under development. Participants were given definitions of hate speech and suicidal ideation, then shown one of four different social media posts, each either flagged as fitting those definitions or not flagged. They were also told whether the decision to flag the post was made by AI, a human, or a combination of both.
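To make the design concrete, the sketch below shows one way such a between-subjects assignment could be implemented. The condition structure (four posts, flagged or not, three attributed sources) is inferred from the article's description; the variable names and assignment scheme are hypothetical, not the study's materials.

```python
import itertools
import random

# Stand-ins for the four stimulus posts; the actual posts are not
# reproduced in the article.
POSTS = ["post_1", "post_2", "post_3", "post_4"]
FLAG_DECISIONS = [True, False]           # flagged vs. not flagged
SOURCES = ["AI", "human", "AI + human"]  # attributed moderation source

# 4 posts x 2 flag decisions x 3 sources = 24 possible conditions
CONDITIONS = list(itertools.product(POSTS, FLAG_DECISIONS, SOURCES))

def assign_condition(participant_id: int) -> tuple:
    """Deterministically assign a participant to one condition."""
    rng = random.Random(participant_id)  # seeded for reproducibility
    return rng.choice(CONDITIONS)

print(assign_condition(1))  # one of the 24 (post, flagged, source) triples
```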

The demonstration was followed by a questionnaire about the participants’ individual differences, including their tendency to distrust others, political ideology, experience with technology, and trust in AI.

Molina and Sundar say their results may help shape future acceptance of AI. By customizing systems to the user, designers could alleviate skepticism and distrust and build appropriate reliance on AI.



Safeguarding against disinformation:

Combating disinformation, scams, and manipulation requires prioritizing Information and Media Literacy. Educate yourself and others about disinformation strategies, cultivate discernment, and question motivations to make informed choices. In addition, supporting independent journalism, fact-checking organizations, and reliable sources of information plays a crucial role in combating the spread of misinformation.

To stay informed and empowered: Sign up to receive the ReclaimTheFacts Newsletter and our latest Media and Information Literacy materials and tools straight to your inbox. You can also follow us on social media for regular updates.

We are diligently working to provide top-notch educational content on Media Literacy. Your donation will contribute to advertising efforts, expanding the reach of these materials on social media platforms. Support our cause and help empower more individuals through education.