“almost all of the most technical employees in framework are using either ubuntu, fedora or nixos. I’m mostly on Windows because we need actually people that are using Windows because our employee base in framework is all Linux users”
- Nirav Patel
That is not the case in every country though. In France and Germany, for example, almost 3/4 of Google requests arrive via IPv6.
And the firmware inside that RP2040 is stored on plain old flash memory. So while the data may still be on the memory chip, the controller chip dies at just the same pace as every other USB drive - and then you can’t access it.
The problem is not the EU demanding that; it is rather Apple’s blatant incompetence at implementing it.
Well, doing none of the many chores to transform his pedo club into something socially acceptable, and instead killing his boredom by holding talks on a topic that neither has anything to do with the church nor is one he is remotely qualified to say anything about, is on a whole other level of disrespect, isn’t it?
No no, you are demanding, in a not very nice tone, that an open source community implement some bloated workaround for an issue specific to you with commercial software. You know how free and open source software works? Either you contribute something positive, or you count yourself lucky that you get to use something so great completely for free and stay silent. Bark at the commercial vendor that doesn’t use the money from licenses + selling your soul to build something half decent! This upcoming demand-culture around things that others kindly share while wanting nothing in return pisses me off. Especially when it’s not even about the project itself, but about carrying over unrelated cruft, instead of directing the demand at the entity it would be justified against.
Just build a browser extension that does the conversion. Or a script that watches a folder you drag the files into as an intermediary and then converts them automatically. And then share it for free, because you are a kind person! You might find a handful of people that like it. And then watch some asshat write you a demand: “stop converting to jpeg, forever stop that! I need bitmaps for my gameboy! Just give me a SETTING where I can choose, and a nice dialog where I can pick the freaking color palette!”
You using shitty software is not something somebody else would or should feel inclined to solve. Suggesting that everybody should suffer by not receiving the content they request from the webserver, but instead an arbitrarily lossy-compressed and therefore different picture, for your individual comfort, is just a self-centred, ignorant and narcissistic request. So go away and use Edge, and then complain to Microsoft (whom you pay, in contrast to Mozilla + community!) that their shitware doesn’t work.
This is the correct answer: every device you regularly use a Bitwarden client on automatically becomes a backup.
As far as I understand, in this case opaque binary test data was gradually added to the repository. Also, the released binaries did not correspond 1:1 with the code in the repo due to some build-chain reasons. Stuff like this makes it difficult to spot deliberately placed bugs or backdoors.
I think some measures can be:
So I think from a technical perspective there are ways to at least give attackers a hard time when trying to place covert backdoors. The larger problem is likely who does the work, because scalability is just such a hard problem for open source. Ultimately I think we need to come together globally and bear this work on many shoulders. For example, the “Prossimo” project by the Internet Security Research Group (the organisation behind Let’s Encrypt) is working on bringing memory safety to critical projects: https://www.memorysafety.org/ I also sincerely hope the German Sovereign Tech Fund ( https://www.sovereigntechfund.de/ ) takes this incident as a new angle for the outstanding work they’re doing. And ultimately, we need many more such organisations and initiatives, from both private companies and the public sector, to protect the technology that runs our societies.
Well, you must have either set up a port forward (IPv4) or opened the port to external traffic (IPv6) yourself. It is not reachable by default: home routers put a NAT between the internet and your devices, and in the case of IPv6 they drop unsolicited incoming requests. So (unless you have a very exotic and unsafe router) just uhhh don’t 😅 To serve websites it is enough to open 443 for HTTPS, and possibly 80 for HTTP if you want to serve an automatic redirect to HTTPS.
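If you want to check what is actually reachable, a small stdlib-only sketch that tests whether a TCP port accepts connections (run it from outside your network against your public IP; host and port here are placeholders):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if something on `host` accepts TCP connections on `port`."""
    try:
        # create_connection performs the full TCP handshake, so True means
        # the port is genuinely reachable, not merely unfiltered.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("your.public.ip", 443)` should be True for a web server, while 22 hopefully comes back False.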
That’s odd, I upgraded my Ender 3 with bed leveling and removed the knobs to mount the bed fixed, because the damn knobs keep moving and then you have to redo the bed calibration. To be honest, I can imagine one reason might be that a loosely mounted bed gives you more fault tolerance against the nozzle being too low. I put my bed on two parallel linear rollers for more rigidity, and combined with dual Z screws the nozzle has no chance anymore to produce any sort of first layer when it is slightly too low. That made me realize just how much the stock Ender 3 is flopping around, but also how this can give you mostly okayish results most of the time without having to deal with a ton of small tolerances.
A colleague of mine had a (non-externally-reachable) Raspberry Pi with default credentials hijacked for a botnet by an infected Windows computer in the home network. I guess you’ll always have people come over with devices whose security condition you don’t know. So I’ve started to consider the home network insecure too, and one of the things I want to set up is an internal SSH honeypot with notifications, so that I get informed about devices trying to hijack others. For this purpose that tool seems a possibility; hopefully it is possible to set up some monitoring and notification via Uptime Kuma.
Well, it is compiled to bytecode in a first step, and this bytecode then gets processed by the interpreter. Java does the exact same thing: it gets compiled to bytecode which is then executed by the JVM (Java Virtual Machine), which is essentially an interpreter that is just a little simpler than the Python one (it has fewer types, for example). And yet, nobody talks about a Java interpreter.
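You can look at that bytecode directly with the stdlib `dis` module, for example:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions the CPython interpreter actually executes
# for this function. (Exact opcode names vary between Python versions,
# e.g. BINARY_ADD before 3.11 and BINARY_OP from 3.11 on.)
dis.dis(add)
```

The `.pyc` files Python caches contain exactly this compiled form, analogous to Java’s `.class` files.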
Ah thank you, that wasn’t obvious to me from its website
Why do you prefer it over syncthing?
I don’t have one; I can only tell you that you can change the keyboard layout. The README of the firmware source code says:
To change the keyboard layout, adjust the matrix arrays in keyboard.c.
https://source.mnt.re/reform/reform/-/tree/master/reform2-keyboard-fw
You might find more information in the mnt forum, it is here: https://community.mnt.re/
You do not want OctoPrint on a machine that is busy. Otherwise you get load spikes that keep OctoPrint from sending the move commands (G-code) as fast as the printer executes the movements, and your printer stutters because it has to take small breaks waiting for the next command. This problem is pronounced with faster printers and with slicers that break up arcs into many small straight lines (which is practically all slicers).
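Some back-of-the-envelope numbers (illustrative values, not measurements) show why the host has to keep up:

```python
# Short segments at high speed translate into a high G-code command rate.
speed_mm_s = 150   # assumed print speed of a fast printer
segment_mm = 0.3   # assumed segment length a slicer emits along a curve

cmds_per_second = speed_mm_s / segment_mm
print(cmds_per_second)  # 500.0 -> the host must stream ~500 G1 lines/s
```

If a load spike delays the host for even a few milliseconds at that rate, the printer’s command buffer runs dry and it pauses mid-curve.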
What privacy concerns do you have? I’m all for privacy, but I don’t really see where registrars are a delicate topic in that. The most that comes to mind is that some (most?) offer a service where they do not give out your name and address for whois requests, but instead the details of the registrar (Namecheap has that, for example).
True words. The sustained effort to keep something in decent shape over years is not to be underestimated. When life changes and one is no longer able or willing to invest that amount of time, ill-timed issues can become quite the burden. At one point I decided to cut down on that with a better-founded setup that does backups with easy rollback automatically, and updates semi-automatically. I rely on my server(s), and going from having the idea to having it decently implemented took me a number of months. Simply because time for such activities is limited, and getting a complex and intertwined system like this reliably automated, fault tolerant and monitored is something else entirely than spinning up a one-off service.
With something like this, how do you handle the period of time while copying? I mean, you can’t really leave it running, as it wouldn’t be in a consistent state. An “under maintenance” page instead? Or copy to a fresh folder and, when done, tell the webserver to serve the new location?
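The “copy to a fresh folder, then switch” variant is a common answer. A hedged sketch (paths are made up; `mv -T` is GNU coreutils):

```shell
# Upload the new tree next to the old one, then repoint a symlink that the
# webserver's document root follows. The final rename is a single syscall,
# so visitors see either the old tree or the new one, never a half-copied
# state.
mkdir -p releases/v2
echo "new content" > releases/v2/index.html
ln -s "$PWD/releases/v2" current.tmp
mv -T current.tmp current   # atomically replace the 'current' symlink
```

Old releases stay on disk, so rolling back is just repointing the symlink again.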