And that's when it will get real scary real soon!
That would make sense...
Yeah. I totally get what you're saying.
However, as you pointed out, AI can deal with far more information than a human possibly could. I don't think it would be unrealistic to assume that, in the near future, it will be possible to track someone across accounts based on things such as their interests, the way they type, etc. That would be a major privacy concern. I can totally see three-letter agencies using this technique to identify potential people of interest.
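To make the "the way they type" part concrete: here is a deliberately toy sketch of stylometric account linking, using character-trigram frequency vectors and cosine similarity. This is only an illustration of the idea, not anyone's actual method; real stylometry systems use far richer features and trained models, and the example posts below are made up.

```python
# Toy stylometry sketch: compare writing samples by the overlap of
# their character trigrams. Higher cosine similarity = more similar style.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical posts: two from the same writer, one from someone else.
post_a = "tbh i reckon local models are the way to go, privacy-wise."
post_b = "tbh i reckon self-hosting is the way to go, privacy-wise."
post_c = "Greetings. I shall enumerate my objections in due course."

print(cosine_similarity(trigram_profile(post_a), trigram_profile(post_b)))
print(cosine_similarity(trigram_profile(post_a), trigram_profile(post_c)))
```

Even this crude version scores the two same-writer posts as much more similar than the unrelated one, which is why doing it at scale, across platforms, with real models is the worrying part.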
Not really. All I did was ask it what it knew about llama@lemmy.dbzer0.com on Lemmy. It hallucinated a lot, though. The answer was five or six items long, and the only one that was even partially correct was the first one – it got the date wrong. But I never fed it any data.
Yeah, it hallucinated that part.
I couldn't agree more!
Oh, no. I don't dislike it, but I also don't have strong feelings about it. I'm just interested in hearing other people's opinions; I believe that if something is public, then it is indeed public.
I think so too. And I tried to do my research before making this post, but I wasn't able to find anyone bringing this issue up.
You can check Hugging Face's website for specific requirements. I will warn you that a lot of home machines don't meet the minimum requirements for many of the models available there. There is TinyLlama, which can run on most underpowered machines, but its functionality is very limited and it would fall short as an everyday AI chatbot. You can check my other comment too for other options.
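To put rough numbers on why many home machines fall short: just holding a model's weights takes roughly (parameter count) × (bytes per parameter), and actual usage is higher once you add the KV cache and runtime overhead. The sketch below uses approximate public parameter counts purely for illustration:

```python
# Back-of-envelope sketch of weight memory for a few model sizes.
# Real memory use is higher (KV cache, activations, overhead); the
# parameter counts are rough public figures, for illustration only.
def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory in GiB needed to hold the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 2**30

for name, params_b in [("TinyLlama-1.1B", 1.1), ("Llama-2-7B", 7.0), ("Mixtral-8x7B", 46.7)]:
    fp16 = weights_gib(params_b, 2.0)  # 16-bit floats: 2 bytes/param
    q4 = weights_gib(params_b, 0.5)    # ~4-bit quantized: ~0.5 bytes/param
    print(f"{name}: ~{fp16:.1f} GiB at fp16, ~{q4:.1f} GiB at 4-bit")
```

This is why a 1.1B model like TinyLlama fits on almost anything (a couple of GiB), while 7B-class models already strain typical laptops at fp16, and only become feasible there with aggressive quantization.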
The issue with that method, as you've noted, is that it prevents people with less powerful computers from running local LLMs. There are a few models that can run on an underpowered machine, such as TinyLlama, but most users, I daresay, want a model that can handle a plethora of tasks as efficiently as ChatGPT does. For people with such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral's Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe's platform (https://poe.com/). In particular, I use Poe to interact with the surprising variety of Llama models they have available on the website.
Yes, the platform in question is Perplexity AI, and it conducts web searches. When it performs a web search, it generally gathers and analyzes a substantial amount of data. This compiled information can be utilized in various ways, including creating profiles of specific individuals or users. The reason I bring this up is that some people might consider this a privacy concern.
I understand that Perplexity employs other language models to process queries and that the information it provides isn't necessarily part of the training data used by these models. However, the primary concern for some people could be that their posts are being scraped (which raises a lot of privacy questions) and could also, potentially, be used to train AI models. Hence, the question.