It is indeed infrequent, but the modern world has trained me to expect convenience and instantness. The last time I wanted a 12-year-old email, I was in the car with friends and wanted to pull it up. It wasn't anything important at all, to be clear, but I'm hoping to search my 12-year-old emails with the same convenience as last month's.
I think that's right: I fundamentally want an archive, not what a normal mail server provides. Part of my reason for looking at mail servers is that they would integrate directly with whatever front-end/client I'd normally use, whereas an archive might not.
And regarding archive-specific tools, I am seeing some things in a search, but I'm wondering if folks here have any recommendations. When I look at , for example, nothing comes up for email archives, just email servers. That, plus what I see when searching, makes me think the archive-specific tools are either oriented toward businesses or toward a CLI (like NotMuch, which was mentioned in the discussion here and does look cool).
This looks like a good backend for sure, but the web frontends look a little lacking, and I'm not seeing anything about a mobile frontend (other than just using a web one, which would be fine). Have you tried any of the web frontends?
This article isn't about email addresses associated with logins being released in a breach; it's about documents uploaded to the archive being stamped with the email address of the account that uploaded them, which can be viewed by anyone who downloads the document.
So in standard, everyday use of the site, email addresses are being revealed and associated with that person's actions. To take a benign example: if I upload a copy of the manual for my washing machine, my email is now linked to that document.
Then combine this with the facts that (1) the Internet Archive says in multiple places that it doesn't reveal this info anywhere, and (2) the issue has already been raised with the organization, and it starts to look like specific negligence on their part.
The NSA wants to watch people who are watching the Pornhub video of someone else watching porn. The third level there is more difficult to find.
Playing games was fine - it was loading things up that sucked. I haven't gotten Dota onto the SSD yet, but on the HDD it was real clunky and would half-load the landing page and sit there for ~10 seconds.
The biggest difference, though, is that Firefox now opens immediately instead of taking ~10 seconds after I click the icon.
It sounds like you have a heavy-duty door lock to be very secure, but you are essentially trying to backdoor all that security with a new internet-connected device. An adversary only has to break the weakest link here, rendering the physical door lock moot.
If you are going to have some internet-connected device ultimately controlling access to the house anyway, I'd go with a standard smart door lock that does exactly that (I haven't used them, but they exist). The physical lock on those is surely weaker than what you have now, but with your proposed solution the physical lock probably isn't what anyone would crack anyway.
Can "ai" make a good game, or just a thing that generates video and mostly accepts inputs (and it isnt even hardly doing that)?
Women are you and I are going to be a little late.
Datasette is a neat tool for publishing the static data in a SQLite database on the web, with a helpful GUI and a bunch of extensions available. I haven't come across a good enough reason to use it myself, but it may do what you want.
You can also spin it up locally so it won't be on the web at all, just accessed via your browser, if that's what you want.
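If it helps, the local setup is minimal. A rough sketch (mydata.db is just a placeholder filename; by default Datasette serves on localhost port 8001):

    pip install datasette    # Datasette is a Python package
    datasette mydata.db      # serve the database locally
    # then browse to http://localhost:8001

Publishing to the web is a separate 'datasette publish' step, so running it like this keeps everything on your own machine.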
The git repo should ignore the venv folder, so after you clone it you create a fresh one and activate it with those same steps.
Then when you're installing requirements with pip, the repo you cloned will likely have a requirements.txt file in it, so you run 'pip install -r requirements.txt'.
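Putting those steps together, the whole flow looks roughly like this (the repo URL is a placeholder; on Windows the activate command is venv\Scripts\activate instead):

    git clone https://example.com/some/repo.git   # placeholder URL
    cd repo
    python -m venv venv                 # create a fresh virtual environment
    source venv/bin/activate            # activate it for this shell session
    pip install -r requirements.txt     # install the repo's listed dependencies

The repo's .gitignore should have a venv/ line so the environment itself never gets committed.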