Oh, a government email domain would scare anyone off. It’s as bad as an “fbi.com” address. I doubt the permission is really there, as the post claims; what I have seen is the contrary. Anyway, try with a regular email address. If you want a background story, say you’re a student in a third-world country. That’s how I lived before Sci-Hub (via VPN), and it worked out most of the time (roughly a ~75% success rate).
Sci-Hub was the closest exploitation of such a “situation”.
Etesync maybe?
Providing links like this on a forum without explanation sounds like a trap. It’s sad that you got so many downvotes before adding the explanation (as given in the comments).
A few more questions remain… Why did you program this? As in, how is this different from or better than the alternatives?
There are so many! IMHO that’s a problem; as a user, I don’t know how to decide!
Very informative nonetheless! Thanks for your comment. Today I learnt more about this generational classification that seems to be everywhere.
Ah. I was speaking of planet Earth; the world is big. Americans may have their own trends. Similarly, between the different states you might see trends that are masked when taking numbers “globally”.
I never understood the generation gap. Are people of certain ages really more frequent? Instead, I believe humans reproduce at a more or less constant yearly rate. I understand the categories are meaningful to many, but to me it’s a ‘double-dipping’ statistical flaw.
Great for you. To me it’s not so much a matter of how much time but of timing. After all, you could just as well be reading the most elevated book saga. On the other hand, there’s the addiction of doomscrolling… I have seen friends scrolling posts on social media even on a pizza night, surrounded by others. Or trying to have chat conversations with potential dating partners instead of an actual phone call. That’s the kind of thing I find troublesome: the lack of “here and now” awareness. And something similar goes for the constant checking. For example, if you or anyone uses their phone 4 hours in total, I’d say it’s better if that’s in bigger chunks than in a million small distractions throughout the day, hindering many other activities.
I’m using github.com/mag37/dockcheck for this, with its “-d N” argument. There’s a tradeoff between stability and security; you need to decide for yourself. It will also depend on which services you’re hosting. For example, Nextcloud and Immich would be disastrous under such a regime.
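As a rough sketch of what that looks like in practice (the install path, schedule, and log location here are assumptions; the only flag taken from above is `-d N`, which I understand limits updates to images at least N days old — check dockcheck’s README for the exact semantics and other flags):

```shell
# Hypothetical crontab entry: run dockcheck every Monday at 04:00,
# skipping images newer than 10 days so bleeding-edge releases
# have time to get bugfixes before they land on your server.
0 4 * * 1  /opt/dockcheck/dockcheck.sh -d 10 >> /var/log/dockcheck.log 2>&1
```

The `-d` delay is the stability/security tradeoff knob: a larger N means fewer broken morning surprises, but a longer exposure window for patched vulnerabilities.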
We could set up scheduled posts so that those left behind get a chance via a weekly (or other period) reminder. Just brainstorming. Anyway, this one is stickied; that should suffice. I prefer the latter.
I can help. And I’d like the community to move to the feddit.org instance, since this one is bound to be hacked (no updates) sooner or later.
We should definitely have a wiki (though people should use “search” too; I wonder if a wiki would really help). This topic comes up every month. I have posted this already; here it goes again: https://github.com/anderspitman/awesome-tunneling
Perhaps a chronological view is a bonus if the idea lives on for long enough. And having links between stories, or tags, could be useful at some point too… https://www.usememos.com/
Ah. Indeed! This is about just one workshop in the conference. I misread that keyword.
The description reads more like pushing a vegan perspective. I don’t know if that’s a good way to get started. Nonetheless, I wish your endeavor the best!
Yet, you could ride a pig to work. Meanwhile, you can’t eat your EVs. It’s clear what Europe needs to do. /s
Ah, docker-mailserver and delta.chat could also be great for your case!!
E2E encryption is complicated; if you self-host for a group, having TLS and encrypting data at rest (storage) may be enough. Build a threat model first. That being said, I would recommend snikket.org, which is a curated set of extensions on top of XMPP, the open-source IM protocol that was the base of almost every app out there. Matrix and Rocket.Chat are both alright too. It also depends on your resources; Synapse requires a lot of RAM (or so I heard).
Have you tried Ollama? Some (if not all) models would do inference just fine with your current specs. Of course, it all depends on how many queries per unit of time you need, and whether you want to load a huge codebase and pass it as input. Anyway, give it a try.
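Trying it out is a two-command affair once Ollama is installed (the model name below is just an example; pick one whose size fits your RAM/VRAM):

```shell
# Download a model, then ask it something interactively or one-shot.
# Requires the Ollama daemon to be running locally.
ollama pull llama3.2
ollama run llama3.2 "Summarize what this server could do with a local LLM."
```

If a small model responds at a usable speed on your hardware, you can worry about throughput and context size afterwards.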