• 0 Posts
  • 115 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • I agree it’s a “hate the game, not the player” situation. The issue is how much influence he could have to steer the market to favor his product over the competition. It’s happened so many times in history: the better product fails because its maker can’t play the game like the inferior company can.

    To quote “Pirates of Silicon Valley”:

    Steve Jobs: We’re better than you are! We have better stuff.

    Bill Gates: You don’t get it, Steve. That doesn’t matter!

    So is it fair to consumers for big companies to be able to influence the game itself rather than just play within the same rules? I’d say no.


  • Maybe your argument isn’t against Lemmy, but against online discussion in general. Heated debates that devolve into less constructive posts have been around since the days of BBSes and Usenet. I don’t disagree with your point that people should try to act like adults when discussing topics, but a (not so) different format doesn’t change how people are, especially when they feel protected enough by anonymity to behave badly.


  • Police are trained to drive at higher speeds for obvious reasons, but even they are bound by the same constraints of reaction time and vehicle performance. I’ll be positive and give him the benefit of the doubt that he did try to avoid hitting her once he saw her (if he saw her at all), but I can’t imagine anyone being able to react, brake, or swerve in time in a setting like most 25 mph zones I know of (rough stopping-distance numbers in the sketch below). People speed through our 25 mph subdivision at 35-40 mph and I’m just waiting for the day someone gets clipped.
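
    A back-of-the-envelope sketch of why those extra 10-15 mph matter so much (Python; the 1.5 s reaction time and 0.7 friction coefficient are assumed round numbers, not measured values):

    ```python
    # Rough stopping distance = reaction distance + braking distance.
    # Assumptions: 1.5 s driver reaction time, mu = 0.7 (dry pavement), flat road.
    G = 9.81          # gravity, m/s^2
    MU = 0.7          # assumed tire-road friction coefficient
    REACTION_S = 1.5  # assumed driver reaction time, seconds

    def stopping_distance_m(speed_mph: float) -> float:
        """Distance covered while reacting plus distance to brake to a stop, in meters."""
        v = speed_mph * 0.44704            # mph -> m/s
        reaction = v * REACTION_S          # travelled before the brakes even engage
        braking = v ** 2 / (2 * MU * G)    # from v^2 = 2*a*d with a = mu*g
        return reaction + braking

    for mph in (25, 35, 40):
        d = stopping_distance_m(mph)
        print(f"{mph} mph: ~{d:.0f} m (~{d * 3.28:.0f} ft) to a full stop")
    ```

    That works out to roughly 26 m (~85 ft) at 25 mph versus roughly 50 m (~165 ft) at 40 mph - nearly double the distance before the car stops.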


  • There are two dangers in the current race to AGI and in developing the inevitable ANI products along the way. The first is that advancement and profit are the goals, while concern for AI safety and alignment in the event of success has taken a back seat (if it’s even considered anymore).

    The second is that we don’t even have to succeed at AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it’s still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any media well enough to be believable even against other AI detection tools. We’re just getting to that point - the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some text generation testers into thinking its stories were 100% human created.

    In short, we don’t need full general AI to end up with catastrophe; we’ll easily manage that ourselves with the “lesser” ones. Which will really fuel things if AGI comes along and sees what we’ve done.


  • In areas that are prone to earthquakes, not really. This isn’t one of them, so it’s unusual and worth a report and a determination of the source. A 4.0 would also feel different at a distance depending on the material - most of Florida’s ground wouldn’t transmit the energy well and would just slosh around a bit, unlike bedrock that can carry the energy much farther. My real question is: if this has a natural cause, is there any potential for seafloor movement that could power a tsunami? I don’t think so (rough numbers below), and that would be far worse for Florida coastline residents than the actual ground shaking.
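
    For scale, a rough sketch of the energy gap (Python, using the standard Gutenberg-Richter energy relation log10(E) = 1.5*M + 4.8 with E in joules; treating M7.0 as a rough low end for tsunami-generating quakes is my assumption):

    ```python
    # Compare the seismic energy released by a M4.0 with a magnitude that
    # could plausibly drive a tsunami, via log10(E) = 1.5*M + 4.8 (E in joules).
    def seismic_energy_joules(magnitude: float) -> float:
        return 10 ** (1.5 * magnitude + 4.8)

    m4 = seismic_energy_joules(4.0)
    m7 = seismic_energy_joules(7.0)   # assumed rough low end for tsunami-generating quakes
    print(f"M4.0: ~{m4:.1e} J")
    print(f"M7.0: ~{m7:.1e} J ({m7 / m4:,.0f}x the energy of a M4.0)")
    ```

    A 4.0 releases on the order of 30,000 times less energy than a 7.0, and without a large vertical displacement of the seafloor there’s nothing to push the water column, so a tsunami from something this size isn’t a realistic worry.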