When you get judged based on what websites you are visiting, you are very likely already considered the bad guy simply for using a VPN.
Yes, and I never claimed otherwise. OC claimed that they can see what you are doing, which is wrong.
Basically everything online can be done encrypted. BitTorrent has had support for encryption for years. There are other challenges, like hiding from DPI and the fact that you broadcast your torrent IP, but the content itself can be securely encrypted.
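A minimal sketch, assuming Transmission as the client (most other clients have an equivalent setting): in settings.json, 0 prefers plaintext, 1 prefers encryption, 2 requires it.

```json
{
    "encryption": 2
}
```

Note this only covers the peer connections; the tracker and DHT still learn your IP, which is the broadcast problem mentioned above.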
They see what sites you are visiting, yes, but they do not see what you are doing on them. They do not see the content of the traffic. Huge difference.
a VPN provider can indeed tell what you’re doing
News to me that HTTPS is broken.
Yes, any VPN provider will see what’s in your traffic, no way around that…ever…no matter who you choose
Yes, there is a way around it: just use HTTPS.
When you visit sites over HTTPS, the traffic is encrypted. They still see which sites you are visiting (via DNS and the TLS SNI field), but not what you do there.
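Roughly, for a visit to https://example.com/private?q=secret, an on-path observer (ISP or VPN) sees:

```
visible:   destination IP, hostname "example.com" (DNS lookup + TLS SNI), timing, traffic volume
encrypted: the path /private, query string, headers, cookies, request and response bodies
```

With encrypted DNS and ECH even the hostname can be hidden, but that is not widely deployed yet.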
Who says that it is no longer maintained? https://github.com/containers/podman-compose Looks fine to me?
Surprised Transmission has issues seeding that many; I thought Transmission 4.x made improvements in that area. How much RAM does your system have? Maybe at some point you just need more system resources to handle the load.
PS - For what it’s worth, you can still stick with Transmission and/or other torrent clients and just spread the torrents among multiple client instances, e.g. run multiple Transmission instances, each seeding 1000 torrents or whatever amount works for you.
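Something like this (a sketch; the config dirs and ports are arbitrary examples):

```sh
# Two independent daemons, each with its own config dir, RPC port, and peer port
transmission-daemon --config-dir ~/.config/transmission-a --port 9091 --peerport 51413
transmission-daemon --config-dir ~/.config/transmission-b --port 9092 --peerport 51414
```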
Those are duct-tape solutions. Why use them when there is a proper solution?
There are enough private trackers that do not require using a VPN.
There are tunnel protocols like 6to4, 6RD, and so on that get an IPv6 connection tunneled to you over IPv4. Various routers support them.
Another option is to ask your ISP whether they will supply an IPv6 prefix to you.
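For the tunnel route, a minimal manual 6to4 sketch on Linux (203.0.113.5 is a placeholder for your public IPv4 address; its hex form cb00:7105 determines your 2002::/16 prefix):

```sh
# Requires a public (non-CGNAT) IPv4 address
ip tunnel add tun6to4 mode sit remote any local 203.0.113.5 ttl 64
ip link set dev tun6to4 up
ip -6 addr add 2002:cb00:7105::1/16 dev tun6to4
ip -6 route add 2000::/3 via ::192.88.99.1 dev tun6to4
```

Be aware that the public 6to4 anycast relays have been deprecated (RFC 7526), so a tunnel broker is usually the more reliable route these days.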
I mean, Crypto AG was a thing, so it’s not that unrealistic.
But that Proton is run by the CIA is not that realistic imho, though not impossible.
You can disable the web updater in the config, which is the default when deploying via Docker. The only time I had a mismatch was when I migrated from a native Debian installation to a Docker one and fucked up some permissions, and that was during tinkering while migrating. It’s been solid for me ever since.
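If I remember right, the switch in config/config.php is this one (sketch):

```php
// Disable the built-in web-based updater
'upgrade.disable-web' => true,
```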
Again, there is no official Nextcloud auto-updater. OP chose to use an auto-updater, which bricked OP’s setup (a plugin was disabled).
Docker is kind of a giant mess in my experience. The trick to it is creating backup plans to recover your data when it fails.
That’s the trick for any production service, especially when you do an update.
They’re releasing a new version every two months or so and dropping old ones from support rapidly; pinning with a tag means that in 12 months the install would be exploitable.
The lifecycle can be found with a single online search. Here: https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule
Releases are maintained for roughly a year.
Set yourself a reminder if you would otherwise forget it.
What are you talking about? If you do not manually (or via something like Watchtower) pull the newest image, it will not update by itself.
I have never seen an auto-update feature in Nextcloud itself; can you please link to it?
The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower.
Lol. Do not blame others for your own incompetence. If you have automatic updates enabled, then it is your fault when they break things. Just pin the major version with a tag like nextcloud:29 or something. Upgrading major versions automatically in production is a terrible decision.
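For example, in a compose file (a sketch, assuming the official image):

```yaml
services:
  nextcloud:
    image: nextcloud:29   # pinned major; bump to :30 deliberately, after checking app compatibility
```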
That brings me to what’s available. I almost pulled the trigger on a Synology DS423+. It looks reasonably powerful, I can put in 4 SATA SSDs and 2 M.2 drives… that’s what I thought. But it turned out it’s not possible to use the M.2 slots as storage with anything but Synology’s own overpriced drives, which aren’t even available in my country.
You can use a script to make them available. Still a pain.
Since you only need 2 TB, why do you even bother with the M.2 slots?
Why do you think you need M.2 in the first place? I guess you are hung up on “SATA bad because M.2 new” (M.2 is btw only the connector, not the interface; there are SATA M.2 drives as well).
SATA can handle 6 Gbps. That’s six times more than most home network connections can even handle (rough numbers below). Since you have not mentioned once how many Ethernet ports the systems have or how fast they are, I figure you only have a 1 Gbps LAN.
Yes, NVMe SSDs are somewhat cheaper these days, but not by enough that I would bother with it. We are only talking about 2 × 2 TB.
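Rough numbers to put that in perspective:

```
SATA III:  6 Gbit/s ≈ 600 MB/s usable (after 8b/10b encoding)
1 GbE LAN: 1 Gbit/s ≈ 125 MB/s theoretical maximum
```

So a single SATA SSD already saturates a gigabit link several times over; NVMe speed would be invisible over the network.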
Yes, it has better defenses against timing attacks. The fact alone that multiple messages are bundled together (garlic routing) makes it harder to identify the route a single packet took.
Also, it seems that I2P is more vulnerable to deanonymization when traffic leaves the hidden network. I think the official I2P FAQ has some info about that, but I have not read up on it myself.
So you are basically saying that root CAs are unreliable or compromised?
The great thing is that you can decide on your own which CAs you trust. Also, please prove that those are actively malicious.
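On Debian/Ubuntu, for example, distrusting a bundled CA in the system store is one config line (a sketch; the CA filename is hypothetical):

```sh
# In /etc/ca-certificates.conf, prefix the entry with "!" to deselect it:
#   !mozilla/SomeDubiousCA.crt
# then rebuild the system trust store:
sudo update-ca-certificates
```

Browsers like Firefox ship their own trust store, so they have to be adjusted separately.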
And no, that is not the reason that packages are signed. I am guessing you mean packages like on Linux, contained in a distribution’s repositories. The reason is that you build a separate chain of trust. Why would I trust a CA whose job is issuing certificates for domains to vouch for code distribution? That’s not their job.
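As a sketch of what that separate chain looks like on Debian-style systems: apt verifies the repository’s signed metadata against keys you installed locally, completely independent of any TLS CA.

```sh
# The repo's InRelease file is clearsigned; apt checks it against keys
# in /etc/apt/trusted.gpg.d/ (or keys referenced via Signed-By).
# Manually, the same check looks roughly like:
gpg --verify InRelease
# Individual .deb files are then validated via hashes listed in the
# signed Packages index, not via HTTPS certificates.
```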