GitLab has been working on support for ActivityPub/ForgeFed federation as well, currently only implemented for releases though.
Mercurial does have a few things going for it, though for most use-cases it lags behind Git in nearly every metric.
I really do like the fact that it keeps a sequential commit number; it's a lot easier to tell that "commit 405572" is newer than "commit 405488", compared to Git's "commit ea43f56" vs "commit ab446f1". (Though Git does have the describe format, which helps somewhat in this regard. E.g. "0.95b-4204-g1e97859fb" being the 4204th commit after tag 0.95b)
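For the curious, the describe format is easy to try out in a throwaway repository; the tag name and commit messages below are made up for the demonstration:

```shell
# Sketch: git describe output encodes "tag - commits since tag - g<hash>".
cd "$(mktemp -d)"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first"
git tag v0.95b
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "second"
git describe --tags   # prints v0.95b-1-g<short-hash>
```

The `-1-` in the output is the ordering hint: one commit after tag v0.95b, so any `v0.95b-2-g…` is newer than any `v0.95b-1-g…`.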
Well, one available case you can look at is Uru: Live / Myst Online, currently running under the name Myst Online: Uru Live: Again.
They open-sourced their Dirt/Headspin/Plasma engine, which required stripping out - among other things - the PhysX code from it.
I assume both the $20 and $25 prices were during alpha/early access. Was thinking entirely of release pricing.
Completely blanked on early access pricing, so yes, if you bought it before release then it was likely cheaper still.
It’s reasonably easy to guess exactly what you paid for the game, since the only change in price since launch was a $5 bump in January last year. It’s never been on sale.
It releases while I’m on the way back home from a trip to Manchester, might have to bring my Deck so I can play on the flight/train.
It’s somewhat amusing how Itanium managed to completely miss the mark, and just how short its heyday was.
It’s also somewhat amusing that I’m still today helping host a pair of HPE Itanium blades - and two two-node DEC Alpha servers - for OpenVMS development.
Now that’s one hefty changelog.
Going to be really amazing to play Factorio again without knowing how to solve everything.
In general, browser benchmarks seem to often favor Firefox in terms of startup and first interaction timings, and often favor Chrome when it comes to crunching large amounts of data through JavaScript.
I.e. for pages which use small amounts of JavaScript, but call into it quickly after loading, Firefox tends to come out on top. But for pages which load lots of JavaScript and then run it constantly, Chrome tends to come out on top.
We’re usually talking milliseconds-level of difference here though. So if you’re using a mobile browser or a low-power laptop, then the difference is often not measurable at all, unless the page is specifically optimized for one or the other.
There's a bunch of extensions that allow you to switch user-agent easily; I personally use this one, and it includes a list of known strings to choose between as well.
They used to also build the Polymer UI on the unreleased version 0 of the Shadow DOM spec, which - being a Chrome-only prototype - understandably didn't work on Firefox, where the site instead fell back to a really slow JavaScript polyfill to render its UI.
I haven’t checked on it lately, but I imagine they must’ve changed at least that by now.
One thing you can test is to apply a Chrome user-agent on Firefox when visiting YouTube. In my personal experience that actually noticeably improves the situation.
The EU AI Act classifies AI based on risk (in case of mistakes, etc.), and things like criminality assessment are classed as an unacceptable risk and therefore prohibited without exception.
There's a great high-level summary available for the act, if you don't want to read the hundreds of pages of text.
They couldn’t possibly do that, the EU has banned it after all.
To quote Microsoft themselves on the feature:
"No content moderation" is the most important part here; it will happily steal any and all corporate secrets it can see, since Microsoft haven't given it a way not to.
Go has a heavy focus on simplicity and ease-of-use by hiding away complexity through abstractions, something that makes it an excellent language for getting to the minimum-viable-product point. Which I definitely applaud it for, it can be a true joy to code an initial implementation in it.
The issue with hiding complexity like this is when you reach the limit of the provided abstractions, something that will inevitably happen when your project reaches a certain size. Many languages (like C/C++, Ruby, Python, etc) give you the option to - at that point - skip the abstractions and code directly against the underlying layers, but Go doesn't actually have that option.
One result of this is that many enterprise-sized Go projects have had to - in pure desperation - hire the people who designed Go in the first place, just to get the necessary expertise to be able to continue development.
Here's one example, in the form of a blog post, with some concrete cases where hidden complexity can cause issues in the longer term: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride
Well, this has certainly caused quite a bit of drama from all sides.
I'm curious about the earlier audit of libolm, which happened many years back (and by a reputable company); it feels like it should've caught any potentially exploitable issues after all - including timing attacks.