DO YOU REALLY KNOW WHO YOU ARE CHATTING WITH?
Automating political dirty tricks…
The availability of manipulative open-source social media programs and affordable computing power has made it easier for malicious political actors to employ technology to disparage candidates, disrupt campaigns, and, in general, wreak havoc among individuals seeking truthful information or genuinely expressing an opinion.
The weaponization of technology in political activities can take various forms…
- Manipulative social media campaigns: Technology-created accounts can be employed to launch coordinated social media campaigns to polarize public opinion. These campaigns may involve spreading divisive content, targeting specific demographic groups, or exploiting sensitive issues to provoke emotional responses and create social divisions.
- Disinformation dissemination: Technology-created accounts can spread false or misleading information, often in the form of fake news, fabricated stories, or selectively edited content. By strategically disseminating such disinformation, these accounts seek to manipulate public perception, erode trust in institutions, and amplify existing political divisions. Creating exploitable faux crises is a perfect example of using technology-created accounts to manipulate public opinion.
- Amplification of extreme voices: Technology-created accounts can amplify the voices of lone actors sitting in their basements in their underwear, fringe extremist groups, or disgruntled disruptors by giving them a platform to spew their poison and artificially inflate their influence. Promoting and sharing content from these accounts contributes to the normalization of extreme views and further polarization of political conversations.
- Creation of echo chambers: The automated creation of bot accounts can contribute to echo chambers, where individuals are exposed only to information and viewpoints that align with their beliefs. This can reinforce ideological divisions, hinder constructive political conversations, and lead to the entrenchment of polarized opinions.
- Targeted advertising and fundraising: It has never been easier to use artificial intelligence data mining and profiling to exploit the vast amount of online data for targeted advertising, fundraising, or highly personalized messaging: techniques that manipulate personal data to reinforce existing biases and exacerbate political divisions within a group.
- Cyberattacks and hacking: As a form of censorship, malignant political actors may launch cyberattacks against political campaigns, organizations, or individuals to gain unauthorized access to sensitive information, disrupt operations, or spread chaos. These attacks can include phishing attempts, malware distribution, or DDoS (Distributed Denial of Service) attacks.
Do you know who you are talking to?
Increased awareness is a must to mitigate the negative impact of technology on political activities. While legislators and political actors claim to craft regulations that safeguard the public, these are often little more than attempts to secure control over communications channels that serve the best interests of corrupt political actors and regime apparatchiks.
There is no doubt that bots and fake accounts can be programmed to:
- Promote and amplify divisive messages, memes, or articles that target specific individuals and political groups.
- Disseminate false or misleading information, including rumors, fabricated news stories, or distorted facts, to manipulate public perception and sway political discourse.
- Engage in online arguments, sometimes between competing bots, launching personal attacks and inflammatory rhetoric to exacerbate existing political conflicts.
- Create the astroturf perception that a particular viewpoint is more popular or widely accepted than it actually is, manufacturing apparent widespread support for fringe issues.
- Artificially manufacture infighting among members of a particular organization to destroy the organization's messaging and effectiveness.
Recognizing a social media bot can be challenging, as some bots are designed using natural language processing (NLP) techniques to mimic human speech and behavior to appear authentic. However, several red flags might assist you in identifying a fake account or a potential bot-created message.
- Profile Information: Check the profile information of the account. Artificially created accounts often have generic or incomplete profiles with little personal information. Look for signs like a lack of profile pictures, limited posts or activity, and minimal or repetitive bio information.
- Posting Frequency: Bots often post at regular intervals or in a pattern. Look for a high volume of similarly themed posts within a short period or posts made consistently at odd hours.
- Content Quality: Bots tend to generate low-quality content with poor grammar, spelling mistakes, or nonsensical phrases. Look for excessive hashtags or repetitive phrases.
- Lack of Personal Engagement: Bots appear to lack personal engagement when responding to comments, mentions, or direct messages, or they may reply with generic or wildly unrelated messages.
- Repetitive or Generic Content: Bots often share the same or similar content repeatedly, using re-posts and re-tweets containing identical messages across multiple accounts. Look for patterns in the content they share or post.
- Network Analysis: Analyze an account's followers and those they follow. If most of its followers have suspicious characteristics (e.g., no profile picture, limited activity, few followers), it could indicate a bot network.
- Unusual Account Behavior: Bots may exhibit unusual behavior, such as following an excessive number of accounts within a short period, liking or retweeting unrelated content, or engaging in spammy behavior, like sending unsolicited messages or mentions.
- Automated Responses: Bots often use pre-set or automated responses to interact with others. They may reply with generic messages that don't directly address the context of the conversation.
It is important to note that what appears to be red flag behavior is not definitive proof of a bot's presence. The account and response might come from a newbie trying their social media wings or an uninformed, lesser-educated individual.
Bottom line…
We live in an age where computers can create the perception of reality, flooding us with deep fakes that look and sound like real things.
We are so screwed.
-- Steve
“Nullius in verba”-- take nobody's word for it!
"Acta non verba" -- actions not words
“Beware of false knowledge; it is more dangerous than ignorance.”-- George Bernard Shaw
“Progressive, liberal, Socialist, Marxist, Democratic Socialist -- they are all COMMUNISTS.”
“The key to fighting the craziness of the progressives is to hold them responsible for their actions, not their intentions.” -- OCS
"The object in life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane." -- Marcus Aurelius
“A people that elect corrupt politicians, imposters, thieves, and traitors are not victims... but accomplices” -- George Orwell
“Fere libenter homines id quod volunt credunt.” (The people gladly believe what they wish to.) -- Julius Caesar
“Describing the problem is quite different from knowing the solution. Except in politics.” -- OCS