The nextcloud snap is the best and easiest way to selfhost nextcloud.
I said it. Fight me.
The “coreutils” that macOS ships by default are all older, shittier BSD versions. I discovered this when half of my scripts and commands didn’t work properly.
Silly me thought I could just bring my bash scripts over and not have any major issues (I’m not doing anything crazy). But even something as simple as grep didn’t work right, because the old, bad version Mac comes with couldn’t recursively search directories the way I expected.
All of the GNU versions are much better, and you can install them with Homebrew.
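For anyone hitting the same thing, the usual Homebrew incantation looks something like this (formula names are the standard ones; adjust to taste):

```shell
brew install coreutils gnu-sed grep findutils
# Homebrew installs these with a "g" prefix (ggrep, gsed, ...). To get the
# plain names, put the gnubin dirs first on PATH, e.g. in ~/.zshrc:
export PATH="$(brew --prefix)/opt/coreutils/libexec/gnubin:$PATH"
export PATH="$(brew --prefix)/opt/grep/libexec/gnubin:$PATH"
```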
Would you mind educating us plebs then? I had a similar question to OP’s, and I can assure you, I definitely don’t understand local auth services the way I probably should.
It might be worth taking a step back and looking at your objective with all of this and why you are doing it in the first place.
If it’s for privacy, then unfortunately that ship has sailed when it comes to email. Email is the digital equivalent of a postcard: it’s inherently not private, and nothing you do will make it private. Even services like Proton Mail aren’t private unless you only email other people on Proton.
I appreciate wanting to control your own destiny with it, but there are much more productive things you could spend your time on to improve your privacy.
If it’s working again all of a sudden, I would lean towards f2b (fail2ban). I don’t know what your “timeout” is, but if f2b got tripped, it would explain why you couldn’t get in yesterday but today it works (assuming your ban expires in 24 hours or so).
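If you want to confirm, fail2ban can tell you directly (assuming the SSH jail is named `sshd`, the default; the IP below is just an example):

```shell
sudo fail2ban-client status sshd                   # shows currently banned IPs
sudo fail2ban-client get sshd bantime              # ban length, in seconds
sudo fail2ban-client set sshd unbanip 203.0.113.7  # lift a ban (example IP)
```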
A GPU with a ton of VRAM is what you need, BUT
An alternate solution is something like a Mac mini with an M-series chip and 16GB of unified memory. The neural cores on Apple silicon are actually pretty impressive, and since they use unified memory, the models have access to whatever RAM the system has.
I only mention it because a Mac mini might be cheaper than a GPU with tons of VRAM by a couple hundred bucks.
And it will sip power comparatively.
A 4090 with 24GB of VRAM is $1,900; an M2 Mac mini with 24GB is $1,000.
I switched to the snap package and it’s been rock solid and pain free the entire time.
I welcome any and all comments on why snap is Satan.
I can’t tell if OP’s joke is intentionally confusing buffers with registers and everyone is playing along, or if people just aren’t making the distinction between the two in this thread.
Which is ironic and humorous…potentially by accident.
I use vimwiki and wrote a bash script that pulls all of the TODO items from across my wiki and puts them in a single file with TODO and IN PROGRESS sections.
I have a keybind that pulls up the list and runs the script to refresh it.
It’s not linked to any calendar though. I keep my to-do list and calendar separate.
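A minimal sketch of that kind of aggregator, assuming wiki pages are `*.wiki` files and items are tagged `TODO:` / `IN PROGRESS:` (the real script and wiki layout are the commenter’s own; sample files are created in a temp dir so the example is self-contained):

```shell
# Stand-in for ~/vimwiki, populated with two sample pages.
WIKI_DIR=$(mktemp -d)
printf 'TODO: file taxes\n' > "$WIKI_DIR/finance.wiki"
printf 'IN PROGRESS: paint shed\n' > "$WIKI_DIR/house.wiki"

OUT=$(mktemp)   # the single aggregated list
{
  echo '= TODO ='
  grep -rh --include='*.wiki' 'TODO:' "$WIKI_DIR"
  echo '= IN PROGRESS ='
  grep -rh --include='*.wiki' 'IN PROGRESS:' "$WIKI_DIR"
} > "$OUT"
cat "$OUT"
```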
I use Gmail and have that calendar for my personal stuff. At work I’m forced to use Outlook.
That’s why my Windows partition is still in regular use: I play Apex Legends. It used to play flawlessly on Pop!_OS, but over a year ago at this point it started screwing up.
I’d usually be able to get it working after a while, but when I have 30–45 minutes, or maybe an hour at most, to game, I don’t want to spend all of it “fixing”. I’d rather restart into Windows and be playing in a minute.
“rocinante” for my Proxmox host.
“awkward, past his prime, and engaged in a task beyond his capacities,” from Don Quixote’s Wikipedia page.
It seemed fitting considering it is a server built from old PC parts…engaged in tasks beyond its abilities.
The rest of my servers (mostly VMs) are named for what they actually do / which VLAN they are on (e.g. vm15) and aren’t fun or exciting names. But at least I know that if I am on that VM, it has access to that VLAN (or that it’s segregated from my other networks).
This is such horseshit. The drama of the Linux community never ceases to amaze me.
I totally believe you can hit the RAM limit on these. I was just saying I’ve surprisingly managed to be fine with 8GB.
Android emulators are notoriously memory hungry, and there are certain tasks that just flat out require more RAM regardless of how well it’s managed.
The advice I heard about these a while back is: if you know 8GB isn’t enough for you, then you aren’t the market segment they are targeting with the basic models.
That said, no “pro” model should come with just 8GB. It just waters down what “pro” means.
I’ve been using a MacBook Air with 8GB of RAM since they came out. It was on sale at Costco and I had a gift card; I think I paid $500 out of pocket.
I was worried that 8GB would limit me but it was the one on sale so I rolled with it. I can say that after several years, the only time it’s limited me was when I tried to run an AI model that was 8GB. Obviously, that becomes an issue at that point.
But for all I do with my Air, including creating a 1GB RAM disk for a script/automation ML job I run, I have never felt limited by the RAM.
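For reference, one common recipe for a 1 GiB RAM disk on macOS looks like this (not necessarily how the commenter does it; the sector math is 2097152 × 512 B = 1 GiB):

```shell
# Create and mount a 1 GiB RAM disk on macOS, then tear it down.
DEV=$(hdiutil attach -nomount ram://2097152)  # 2097152 sectors x 512 B = 1 GiB
diskutil erasevolume HFS+ 'RAMDisk' "$DEV"    # formats and mounts at /Volumes/RAMDisk
# ... run the job against /Volumes/RAMDisk ...
hdiutil detach "$DEV"                         # contents are discarded on detach
```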
I open a bajillion Firefox tabs, never close windows, etc. It’s an Air after all, not a workstation substitute, so my use cases aren’t overly taxing in the grand scheme of things. I’m not editing 4K video or doing rendering with it. But RAM hasn’t been an issue outside of the AI workload on the 8GB model, and tbh that’s only an issue because of the ML cores: they absolutely scream vs the 1080 Ti that’s in my server. My M1 with 8GB of RAM runs circles around my 24-core, 128GB-RAM server with a 1080 Ti.
I did just get a MacBook Pro for work that I requested with 128GB of RAM. But that’s because I wanted it for bigger AI models (and work is paying, not me).
I think he was trying to say apps get access to “root features” through a controlled abstraction layer/API.
They don’t/wouldn’t have carte blanche root access to the underlying system. It’s kind of like a Docker container or VM, or Flatpak/snap packages on Linux: they are sandboxed from everything else and have to be given explicit permission to do certain things (anything that would need root privileges or hardware access).
I think unRAID does that. But I never looked into it much tbh.
I don’t have nearly that much worth backing up (5TB, and realistically only 2TB is probably critical), but I have a Synology NAS (12TB RAID 1) and a TrueNAS box (ZFS striped/mirrored) that I back my stuff up to (and they back up to each other).
Then I have a Raspberry Pi with an 8TB USB drive at my parents’ house 4 hours away that my Synology backs up to (over Tailscale).
Oh, and I have an 8TB USB HDD that I plug in, back up my Synology NAS to, and throw in my fireproof safe. But that’s a manual backup I do once a quarter, or every 6 months if I remember. That’s a very, very last-resort backup.
My offsite is at my parents.
And no, I have not tested it because I don’t know how I’m actually supposed to do that.
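For what it’s worth, a restore test can be as simple as pulling one file back out of the backup and comparing it to the original. A toy sketch, with local temp dirs standing in for the NAS, the backup target, and the restore location:

```shell
# Toy restore drill: temp dirs stand in for live data, backup, and restore.
SRC=$(mktemp -d)       # "live" data
BACKUP=$(mktemp -d)    # "backup destination"
RESTORE=$(mktemp -d)   # scratch restore location

printf 'family photos\n' > "$SRC/important.txt"
cp "$SRC/important.txt" "$BACKUP/"       # stand-in for the real backup job
cp "$BACKUP/important.txt" "$RESTORE/"   # the actual drill: restore the file
cmp -s "$SRC/important.txt" "$RESTORE/important.txt" && echo 'restore OK'
```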
So it sounds like Vultr isn’t doing anything nefarious at all.
Someone apparently read the terms of service for the first time a few days ago and misunderstood them, since the language in question was in reference to the Vultr website, not your servers.
And either way, they removed the offending language to clear it up.
This seems like a knee jerk mob reaction more than anything.
There is no evidence that they’ve done anything with anyone’s data.
Is it completely unmaintained, or just feature complete and not getting recent updates?
I’ve seen people say “this tool isn’t being maintained because there aren’t recent check-ins,” and those two things are very different.
I think you are obligated to share your entire known_hosts file to prove this.