

We needed that when Snowden leaked; now the political will is gone, and the average person just accepts it because they get free services in exchange for letting advertising companies and law enforcement live in their pocket.
A funny loophole. The person who stole the data did a crime.
Now that the data is public, it's fair game because it's public information.
Ah, I see you’re a man of culture as well
Can you name a single technology or human endeavour that doesn’t have negative side effects or potential for abuse?
The rise of anti-AI sentiment isn’t based on objective measurements of societal harm; it’s a meme because AI is new and popular and, as with all memes, some people feed on the outrage-based reinforcement generated by social media interactions.
I’m not saying that there are no problems with AI. I’m saying that people are treating it as if it were a massive problem because their perception is warped by social media.
They used this access to suppress the Occupy Wall Street protests, including targeting the online activists, by designating it a ‘counter-terrorism’ operation.
If you participated in these protests online, you’d suddenly find that the DEA knew about your marijuana use, the IRS decided your unfiled taxes warranted criminal charges, and your state and county police would receive ‘anonymous tips’ about any state laws you were violating.
This was all because DHS intelligence services were combing through the online records of anybody that they could remotely link to these protests.
Literally every single online company is giving your data to law enforcement, often including real-time access.
This is the thing that Snowden leaked.
Facebook, Gmail, your cellular provider, Amazon, credit card companies, your bank, etc. They’re all systems that law enforcement intelligence can access, probably without a subpoena (a business can choose to give up business records since they own them; you don’t own ‘your data’).
If you’re doing something online, or on your phone, you should pretend that there’s a law enforcement officer sitting and reading over your shoulder, because they effectively are. If they ever have cause to look at you, they’ll pull the history of your account (possibly limited to 30 days back, but there’s no guarantee of this) and see everything you’ve ever written and posted, including things that you deleted.
If you did anything illegal, they can use this information to start a new investigation, in addition to whatever investigation led them to your account. This can allow them access to even more accounts.
So, if you’re using any commercial service that holds your data, you should assume that a law enforcement officer is combing through your information and trying to find something to charge you with.
You should not use commercial services if you’re in the US. I know I’m preaching to the choir in this community, but sometimes people need to see it written in black and white.
It’s one thing to think AI is poor quality in some tasks, but some users act like AI is personally assaulting them every morning as they wake up for work and pissing in their coffee.
AI, which is inherently a misrepresentation of truth
Oh, you’re one of those
In the US criminal justice system, sentencing happens after the trial. A mistrial requires rules to be violated during the trial.
Also, there were at least 3 people in that room who each have a Juris Doctor and know the Arizona Court Rules, one of whom was representing the defendant. Not a single one of them objected to allowing this statement to be made.
They can’t appeal on this issue because the defense didn’t object to the statement and, therefore, did not preserve the issue for appeal.
AI should absolutely never be allowed in court. Defense is probably stoked about this because it’s obviously a mistrial. Judge should be reprimanded for allowing that shit
You didn’t read the article.
This isn’t grounds for a mistrial; the trial was already over. This happened during the sentencing phase. The defense didn’t object to the statements.
From the article:
Jessica Gattuso, the victim’s right attorney that worked with Pelkey’s family, told 404 Media that Arizona’s laws made the AI testimony possible. “We have a victim’s bill of rights,” she said. “[Victims] have the discretion to pick what format they’d like to give the statement. So I didn’t see any issues with the AI and there was no objection. I don’t believe anyone thought there was an issue with it.”
This is just weird uninformed nonsense.
The reason that outbursts, like gasping or crying, can cause a mistrial is that they can unfairly influence a jury, so the rules of evidence do not allow them. This isn’t part of the trial; the jury has already reached a verdict.
Victim impact statements are not evidence and are not governed by the rules of evidence.
It’s ludicrous that this was allowed and honestly is grounds to disbar the judge. If he allows AI nonsense like this, then his courtroom can not be relied upon for fair trials.
More nonsense.
If you were correct, and there were actual legal grounds to object to these statements, then the defense attorney could have objected to them.
Here’s an actual attorney. From the article:
Jessica Gattuso, the victim’s right attorney that worked with Pelkey’s family, told 404 Media that Arizona’s laws made the AI testimony possible. “We have a victim’s bill of rights,” she said. “[Victims] have the discretion to pick what format they’d like to give the statement. So I didn’t see any issues with the AI and there was no objection. I don’t believe anyone thought there was an issue with it.”
The direct answer to your question is: verification of the security of the platform that the other party is using is outside of the scope of the Signal protocol. Anything you send to the other party can be taken off of their device. Signal only concerns itself with securing the message over the network and making it hard for an adversary with network dominance to build a social graph. It doesn’t protect from all SIGINT.
Additionally, since the server is open source and the protocol is open and publicly documented, it is completely possible to build your own Signal client and give it whatever capabilities you’d like.
There are several open source packages available that allow you to interface with Signal without using the official Signal client:
https://github.com/AsamK/signal-cli
https://gitlab.com/signald/signald (also, https://signald.org/articles/clients/ )
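If you wanted to script that, a minimal sketch using Python’s subprocess to drive signal-cli might look like the following. The account and recipient numbers are placeholders, and the exact flags can differ between signal-cli versions, so check its documentation before relying on this:

```python
import subprocess

# Placeholders -- substitute your own registered account and a real recipient.
# Assumes signal-cli is installed and the account is already registered.
ACCOUNT = "+15555550100"
RECIPIENT = "+15555550199"

def send_message(text: str) -> None:
    """Send a message through the signal-cli command-line interface."""
    subprocess.run(
        ["signal-cli", "-a", ACCOUNT, "send", "-m", text, RECIPIENT],
        check=True,
    )

def receive_messages() -> str:
    """Poll for queued incoming messages and return signal-cli's raw output."""
    result = subprocess.run(
        ["signal-cli", "-a", ACCOUNT, "receive"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    send_message("Hello from an unofficial client")
    print(receive_messages())
```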
It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.
If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.
I would have gotten away with it if it were not for you kids!
I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. After a long time of talking to people online, you get used to how they respond to different rhetorical strategies.
In these bot-infested social spaces, there seem to be a large number of commenters who argue way too well while also deploying a huge number of fallacies. Individually, that could be explained by a person simply choosing to argue in bad faith; but in these spaces there are too many commenters using these tactics compared to the baseline I’ve established over my decades of talking to people online.
In addition, what you see in some of these spaces are commenters who seem to have a very structured way of arguing. Like they’ve picked your comment apart into bullet points and then selected arguments against each point which are technically on topic but misleading in a way.
I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.
For example, if you could somehow measure the number of good-faith comments versus fallacy-laden comments in a given community, there would likely be a normal baseline ratio (say, 10 people who are bad at arguing for every 1 person who is good at arguing, and of those skilled arguers, 10% are commenting in bad faith and using fallacies), and you could compare that ratio across various online topics to discover the ones that appear to be botted.
That way you could objectively say that, on the topic of gun control in this one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters and, therefore, know that this topic/subreddit is being actively LLM-botted. This information could be used to deploy anti-bot countermeasures (captchas, for example).
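Just to make the idea concrete, a toy sketch of that kind of metric might look like this. The classifier and the baseline numbers are completely hypothetical stand-ins, not anything measured in the OP’s research:

```python
# Toy sketch of the ratio idea above. The is_bad_faith classifier and the
# thresholds are hypothetical -- real research would need a validated way to
# score comments as good-faith vs. fallacy-laden.

BASELINE_BAD_FAITH_RATE = 0.01  # assumed: ~1% of comments are skilled bad-faith arguing

def bad_faith_rate(comments, is_bad_faith) -> float:
    """Fraction of comments the (hypothetical) classifier flags as bad-faith."""
    if not comments:
        return 0.0
    flagged = sum(1 for c in comments if is_bad_faith(c))
    return flagged / len(comments)

def looks_botted(comments, is_bad_faith, multiplier: float = 3.0) -> bool:
    """Flag a topic/community whose bad-faith rate is well above the baseline."""
    return bad_faith_rate(comments, is_bad_faith) > multiplier * BASELINE_BAD_FAITH_RATE
```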
The research in the OP is a good first step in figuring out how to solve the problem.
That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before accessing them. It doesn’t slow a regular person down, but it does require anyone running a bot to provide much more compute power to each bot, which increases the cost to the operator.
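For anyone curious how those hashing challenges generally work, here’s a generic proof-of-work sketch (not any specific site’s implementation): the server hands out a challenge and a difficulty, the visitor’s browser grinds through nonces, and the server verifies the answer with a single hash.

```python
import hashlib
from itertools import count

def has_leading_zero_bits(digest: bytes, bits: int) -> bool:
    """True if the top `bits` bits of the hash digest are all zero."""
    value = int.from_bytes(digest, "big")
    return value >> (len(digest) * 8 - bits) == 0

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce; expected cost grows roughly 2**difficulty_bits."""
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if has_leading_zero_bits(digest, difficulty_bits):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: a single hash, cheap no matter the difficulty."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return has_leading_zero_bits(digest, difficulty_bits)

nonce = solve(b"example-challenge", 16)   # the visitor's browser does this once
assert verify(b"example-challenge", nonce, 16)
```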
Because the source code is copyrighted. The fact that userA has an email address [email protected] isn’t copyrighted, so companies can ingest that data into their databases.