figma balls
It’s really more of a proxy setup that I’m looking for. With Thunderbird, you can get what I’m describing for a single client. But if I want to have access to those emails from several clients, there needs to be a shared server to access.
docker-mbsync might be a component I could use, but it doesn’t sound like there’s a ready-made solution for this today.
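For anyone wanting to wire this up by hand, here’s a minimal sketch of the mbsync (isync) half: pull everything from the provider into a local Maildir on your server, then let an IMAP server there (e.g. Dovecot) serve that Maildir to all your clients. All hostnames, accounts, and paths below are placeholders, not a real setup.

```
# Hypothetical ~/.mbsyncrc -- hosts/users/paths are placeholders

IMAPAccount provider
Host imap.example.com
User me@example.com
PassCmd "pass show provider-imap"
TLSType IMAPS

IMAPStore provider-remote
Account provider

MaildirStore home-local
Path ~/Mail/
Inbox ~/Mail/INBOX
SubFolders Verbatim

Channel rehome
Far :provider-remote:
Near :home-local:
Patterns *
Sync Pull
Create Near
```

Run `mbsync rehome` from cron or a systemd timer, and point Dovecot’s mail location at `~/Mail` so every client sees the same mailbox.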
Yeah, they are ideally the same mailbox. I’d like a similar experience to Gmail, but with all the emails rehomed to my server.
Steam + Proton works for most games, but there are still rough edges that you need to be prepared to deal with. In my experience, it’s typically older titles and games that use anti-cheat that have the most trouble. Most of the time it just works; I even ran the Battle.net installer as an external Steam game with Proton enabled and was able to play Blizzard titles right away.
The biggest gap IMO is VR. If you have a VR headset that you use on your desktop and it’s important to you, stay on Windows. There is no realistic solution for VR integration on Linux yet. There are ways to kinda get something working with ALVR, but it’s incredibly janky and no dev will support it. There are rumors that Steam Link is being ported to Linux, but nothing official yet.
On balance, I’m incredibly happy with Mint since I switched last year. However, I do a decent amount of personal software development, and I’ve used Linux for 2 decades as a professional developer. I wouldn’t say the average Windows gamer would be happy dealing with the rough spots quite yet, but it’s like 95% of the way there these days. Linux has really grown up a lot in the last few years.
Ralph Nader saying that he thinks the death toll is over 200k is not a reasonable source to cite. The 30-50k estimates from most sources are already appallingly high. There’s an active contingent of Ben Shapiro types trying to convince everyone what Israel is doing is fine, don’t give them ammo to cast doubt on the official death count.
Not sure where that 200k number is from. The article you linked doesn’t say that and I haven’t seen a number that high reported anywhere myself. All the info I have seen bounds the estimates between 30k and 50k killed, either through active combat or through disease/malnourishment/injury.
https://www.aljazeera.com/news/longform/2023/10/9/israel-hamas-war-in-maps-and-charts-live-tracker
I’m sure there are plenty of Israelis that want to do this even if they won’t admit it to themselves but this isn’t the final anything. The IDF has killed around 37,000 Palestinians out of ~2.3 million. That’s horrible but nowhere near the “barely any left” stage.
A genocide on the scale of millions takes industrial effort to accomplish. I’m not saying it couldn’t happen, but given Israel’s reliance on foreign aid, current industrial capacity, and political position, it seems unlikely. My guess is Israel will take some more territory and the conflict (kinda tough to call the IDF bombing almost exclusively civilians a war) will peter out. Foreign aid will be allowed back in and Israel will put its mask back on.
Personally, I don’t see how this doesn’t end with half the middle east actively going to war with Israel if they don’t stop soon. The only thing really keeping them safe is the US, and Israel has burned a lot of political capital here. Their leaders are awful, power-hungry shits, but they’re not stupid. If they don’t try to rebuild some of that capital, there’s every chance that Israel loses its lifeline.
What comes years after things die down, I don’t know. Gazan sentiment towards Israel was already overwhelmingly negative before this, but the IDF has never done anything on this scale before. I don’t think Israel can allow Gaza any type of self-governance for decades after this. This is beyond even post-WW2 Japan levels of destruction, and unlike Japan every nation around them is still on their side.
I didn’t say it wasn’t amazing, nor that it couldn’t be a component in a larger solution, but I don’t think LLMs work like our brains, and I think the current trend of throwing more tokens/parameters/training at LLMs is a dead end. They’re simulating the language area of human brains, sure, but there’s no reasoning or understanding in an LLM.
In most cases, the responses from well-trained models are great, but you can pretty easily see the cracks when you spend extended time with them on a topic. You’ll start to get oddly inconsistent answers the longer the conversation goes and the more branches you take. The best-fit line (it’s a crude metaphor, but I don’t think it’s wrong) starts fitting less and less well until the conversation completely falls apart. That’s generally called “hallucination”, but I’m not a fan of that term because it implies a lot about the model that isn’t really true.
You may have already read this, but if you haven’t: Stephen Wolfram wrote a great overview of how GPT works that isn’t too technical. There’s also a great sci-fi novel from 2006 called Blindsight that explores how facsimiles of intelligence can exist without consciousness or even understanding, and I’ve found it to be a really interesting way to think about LLMs.
It’s possible to build a really good Chinese room that can pass the Turing test, and I think LLMs are exactly that. More tokens/parameters/training aren’t going to change that, they’ll just make them better Chinese rooms.
Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.
However, LLMs are chat interfaces for searching a large dataset, and that’s about it. Even the image generators are doing this; the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes the model seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.
A lot of people will say that’s intelligence, but it’s different; the LLM isn’t capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results, they don’t make the LLM smarter.
AGI needs something new, we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.
I wouldn’t shortchange how much lowering the barrier to entry can help. You have to fight Rust a lot to build anything complex, and that can have a chilling effect on contributions. This isn’t a dig at Rust; it has to force you to build things in a particular way because it has to guarantee memory safety at compile time. That isn’t to say Rust’s approach is the only way to be sure your code is safe, mind you, just that Rust’s insistence on memory safety at compile time is constraining.
To be frank, that isn’t necessary most of the time, and Rust will force you to spend ages worrying about problems that may not apply to your project. Java gets a bad rap, but it’s second only to Python in ease of use. When you’re working on an API-driven webapp, you really don’t need Rust’s efficiency as much as you need a well-defined architecture that people can easily contribute to.
I doubt it’ll magically fix everything on its own, but a combo of good contribution policies and a more approachable codebase might.
I think operating at 5V input might be a technical constraint for them. Compatibility revisions for existing hardware are a lot more difficult if the input voltage is 9x higher. Addressing that isn’t as easy as slapping a buck converter on the board.
Not saying requiring 5A was the right call, just that I can see reasons for not using USB-PD.
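To make the trade-off concrete with some made-up numbers (the 25 W figure is purely illustrative, not from any spec sheet): for a fixed power budget, a 5 V input forces the current way up, while accepting a higher USB-PD profile lowers the current but means the board needs its own step-down stage back to its 5 V rails.

```python
# Illustrative only: a board drawing ~25 W (hypothetical figure).
# P = V * I, so a fixed 5 V input forces the current up.
power_w = 25.0

v_legacy = 5.0   # legacy 5 V input
v_pd = 20.0      # a common USB-PD high-voltage profile

i_legacy = power_w / v_legacy  # 5.0 A -- why a beefy 5 V / 5 A supply is needed
i_pd = power_w / v_pd          # 1.25 A -- lower current, but the board must
                               # now step 20 V back down to its 5 V rails
print(f"{i_legacy} A at {v_legacy} V vs {i_pd} A at {v_pd} V")
```

The current drops, but that on-board conversion is exactly the hardware revision the comment above is talking about.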
Why do you think ventilators made people worse? They only put people on ventilators when their O2 sats dropped so low they were going to die of oxygen deprivation.
Part of the reason these rules are similar is because AI-generated images look very dreamlike. The objects in the image are synthesized from a large corpus of real images. The synthesis is usually imperfect, but close enough that human brains can recognize it as the type of object that was intended from the prompt.
Mythical creatures are imaginary, and the descriptions obviously come from human brains rather than real life. If anyone “saw” a mythical creature, it would have been the brain’s best approximation of a shape the person was expecting to see. But, just like a dream, it wouldn’t be quite right. The brain would be filling in the gaps rather than correctly interpreting something in real life.
In reading this thread, I get the sense that some people don’t (or can’t) separate gameplay and story. Saying, “this is a great game” to me has nothing to do with the story; the way a game plays can exist entirely outside a story. The two can work together well and create a fantastic experience, but “game” seems like it ought to refer to the thing you do since, you know, you’re playing it.
My personal favorite example of this is Outer Wilds. The thing you played was a platformer puzzle game and it was executed very well. The story drove the gameplay perfectly and was a fantastic mystery you solved as you played. As an experience, it was about perfect to me; the gameplay was fun and the story made everything you did meaningful.
I loved the story of TLoU and was thrilled when HBO adapted it. Honestly, it’s hard to imagine anyone enjoying the thing TLoU had you do separately from the story it was telling. It was basically “walk here, press X” most of the time with some brief interludes of clunky shooting and quicktime events.
I get the gameplay making the story more immersive, but there’s no reason the gameplay shouldn’t be judged on its own merit separately from the story.
This is an honest question, not a troll: what makes The Last of Us groundbreaking from a technical perspective? I played it and loved the story, but the gameplay was utterly boring to me. I got through the game entirely because I wanted to see the conclusion of the story and when the HBO show came out I was thrilled because it meant I wouldn’t have to play a game I hated to see the story of TLoU 2.
It’s been years, but my recollection is the game was entirely on rails, mostly walking and talking with infrequent bursts of quicktime events and clunky shooting. What was groundbreaking about it?
The math here is beyond me, but this statement from the paper seems contradictory:
Planck time is derived from the reduced Planck constant, the speed of light, and the gravitational constant. So wouldn’t there be at least four universal constants?
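For reference, the standard definition of the Planck time combines three constants (ħ is the reduced Planck constant, G the gravitational constant, c the speed of light):

```latex
t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.39 \times 10^{-44}\,\mathrm{s}
```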