• 2 Posts
  • 213 Comments
Joined 1 year ago
Cake day: July 19th, 2023


  • No, left side is correct for the breakfast gun.

    A gun that size isn’t actually big enough for situations where you need a gun; it’s just meant to provide cover fire while you get a bigger, better gun. You’ll be using your left hand to fire the cover gun so that your right hand is available for picking up the bigger gun. This has the additional benefit of leaving your dominant hand free to eat with.



  • Yeah, the bank that manages my mortgage has mandatory text-message 2FA if you’re on a new computer. And something about Firefox keeps it from remembering my machine, so I have to do the text-message 2FA every time.

    Right now it’s working fine, but they had a period of a few months where the text messages would take 10 to 15 minutes to arrive after you tried to log in, and the login attempt would expire after 5 minutes, making it impossible to log in. All of which could be avoided if they would let me use a 2FA app.
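    For context, the 2FA apps in question don’t depend on SMS delivery at all: they compute codes locally from a shared secret and the current time, per RFC 4226 (HOTP) and RFC 6238 (TOTP). A minimal sketch in Python, using only the standard library (the secret value below is just the RFC test key, not anything real):

    ```python
    import hashlib
    import hmac
    import struct
    import time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # low nibble of last byte picks the window
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
        """RFC 6238 TOTP: HOTP over the current 30-second time step."""
        return hotp(secret, int(time.time()) // period, digits)

    # RFC 4226 Appendix D test secret; counter 0 yields "755224"
    print(hotp(b"12345678901234567890", 0))
    ```

    Since the code is derived entirely on-device, there’s no message to be delayed and nothing to expire in transit, which is exactly why an app would have sidestepped the bank’s SMS problem.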



  • AI generated csam is still csam.

    Idk, with real people the determination of whether someone is underage is based on their actual age, not their physical appearance. There are people who look unnaturally young but could legally do porn, and underage people who look much older but aren’t allowed to. It’s not about their appearance, but how old they are.

    With drawn or AI-generated CSAM, how would you draw that line of what’s fine and what’s a major crime with lifelong repercussions? There’s not an actual age to use, the images aren’t real, so how do you determine the legal age? Do you do a physical developmental point scale and pick a value that’s developed enough? Do you have a committee where they just say “yeah, looks kinda young to me” and convict someone for child pornography?

    To be clear, I’m not trying to defend these people, but determining what counts as legal versus illegal for fake images seems like a legal nightmare. I’m sure there are cases that would be more clear-cut (if they AI-generate with a specific age in the prompt, try to do deepfakes of a specific person, etc.), but a lot of it seems really murky when you try to imagine actually prosecuting over it.


  • No, the version they released isn’t the full parameter set, and it’s leading to really bad results for a lot of prompts. You get dramatically better results using their API version, so the full SD3 model is good, but the version we have is not.

    Here’s an example of SD3 API version: SD3 API

    And here’s the same prompt on the local weights version they released: SD3 local weights 2B

    People think Stability AI censored NSFW content in the released model, and that this has crippled its ability to understand a lot of poses and how anatomy works in general.

    For more examples of the issues with SD3, I’d recommend checking this reddit thread.





  • There’s also the issue of being fingerprinted. An unfortunate truth of the internet is that most browser/device setups are unique, and that makes it possible to track people that way. Having features like “do not track” turned on actually makes you more unique, making it easier to confidently identify you when you visit sites. It probably doesn’t matter though; in my experience basically every web browser/computer is recognized as a unique user now (with maybe the exception of a popular mobile browser on a popular mobile phone model).

    Anyways, visit https://www.amiunique.org/ to have your hopes of being anonymous crushed.
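    The core idea is simple enough to sketch: a tracker hashes whatever attributes the browser reports, and every extra signal you broadcast (including the DNT flag itself) shrinks the set of people who share your hash. A toy illustration in Python, with made-up attribute values standing in for what a real fingerprinting script would collect:

    ```python
    import hashlib
    import json

    def fingerprint(attrs: dict) -> str:
        """Hash a canonical JSON encoding of the reported browser attributes."""
        canonical = json.dumps(attrs, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    # Hypothetical "popular phone, popular browser" profile: shared by many users.
    common = {
        "user_agent": "Mozilla/5.0 (iPhone) Safari/604.1",
        "screen": "390x844",
        "timezone": "America/New_York",
        "do_not_track": None,
    }

    # Same device, but with DNT enabled: the header itself is one more
    # distinguishing signal, so fewer users share this fingerprint.
    rare = dict(common, do_not_track="1")

    print(fingerprint(common))
    print(fingerprint(rare))  # differs, despite identical hardware
    ```

    Two users with identical configurations hash identically and are indistinguishable; flip any one attribute and you land in a smaller, more identifiable bucket, which is why privacy flags can paradoxically make tracking easier.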



  • I maybe didn’t use the best example, but it was less about people actually being religious and more that if someone used any sort of popular phrasing with even a slight religious element, others would try to turn it into a religion debate.

    A better example is that someone might post a Polish word, someone else would reply “bless you,” acting like the Polish word was a sneeze, and then the 14-year-old atheists would descend and start a debate.