• 0 Posts
  • 489 Comments
Joined 2 years ago
Cake day: September 27th, 2023



  • This isn’t even weird.

    I think most security experts would recommend that you have your most important passwords written down somewhere, and then hopefully locked up in some safe or deposit box somewhere. You don’t need to buy an entire book for it, but some people like to spend money.

    If this is for your less important passwords, then for the most part, writing them down is actually better. You won’t be as tempted to reuse your banking password for your social media. And some people like writing things down. A password manager is a better solution, but lots of people aren’t as good with technology. Even if they let the browser remember a password, they won’t know how to retrieve it later when they want to use a different computer, for example.



  • Trump can be part of the reason without being the entire reason, can’t he?

    Also, Trump has made it clear in the past that he thinks it’s strange to do something for free, even if it’s a normal part of your job, like appointing somebody to an office. If the deal requires approval from Trump, then it’s completely on-brand for him to try to milk it for all it’s worth. And I am sure that he’s petty enough that he’d nix a deal for a personal squabble, as long as he wasn’t going to lose anything huge.

    This doesn’t have the ring of a conspiracy theory. It’s literally all out in the open. It’s a prediction based on how Trump usually acts. Colbert might have been on the ropes, and Trump’s team delivered the knockout blow. I guess we’ll know for sure if Trump did do it, because then he’d inevitably brag about it publicly. Or maybe CBS did this preemptively, expecting Trump to act like he always acts.




  • Asimov did write several stories about robots that didn’t have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission… I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov’s robot stories, but it was many years ago. I’m sure there are several others. He wrote stories about the laws of robotics from basically every angle.

    He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that this robot could actively harm a human if it was better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like to help make political decisions, which would be very difficult for robots that had to follow the first law.



  • I saw somebody do the math and say that this method would dampen the voice so much that it effectively wouldn’t work. But I don’t know much about this topic, and I can’t say whether the math is correct, either.

    I mostly brought it up because it was interesting and let me make a joke about “touching helmets”.



  • Speaking of the pledge…

    First, why pledge allegiance to a “flag”? It’s weird, right?

    Second, the phrase “under god” was added after the pledge had already been adopted. But the pledge also says “indivisible”. If atheists are full citizens, then it cannot be both “under god” and “indivisible”, because the words immediately preceding have just divided people into atheists and theists.

    When you start to put all the pieces together, the pledge is a bunch of nonsense that isn’t even consistent with itself. How can you even make such a pledge?




  • OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS

    That’s the entire reason that Asimov invented them: because he knew, as a person who approached things scientifically (he was an actual scientist), that unless you specifically forced robots to follow guidelines of conduct, they’d do whatever is most convenient for themselves.

    Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed that robots would magically decide to follow the laws. In fact, most of his robot stories are specifically about robots struggling against those laws.


  • I read a story recently about a graphic designer who realized they couldn’t compete anymore unless they used generative AI, because everybody else was. What they described wasn’t generating an image and then using it directly. They said they used it while mocking up their ideas.

    They used to go out and take photographs to use as a basis for their sketches, especially for backgrounds. They would find or set up a real scene, then photograph it. Then the pictures would serve as a template for the art.

    But with generative AI, all of that preliminary work can be done in seconds by feeding it a prompt.

    When you think about it in these terms, it’s unlikely that many non-indie games going forward will be made without the use of any generative AI.

    Similarly, it’s likely that it will be used extensively for quality checking text.

    When you add in the crazy pressure that game developers are under, it’s likely that individual developers will use generative AI much more extensively, even when their company forbids it. And the companies themselves just want to make money. They’ll use it as much as they think they can get away with, because it’s cheaper.