Snapchat’s AI bot has been a topic of concern among users due to the potential risks associated with it. One of the main concerns is that the AI bot may not be able to properly understand the context of messages, leading to inappropriate or insensitive responses. Another concern is that the bot may be used to spread fake news or misinformation, which could have serious consequences. Additionally, there are concerns about the data privacy and security of users, as the AI bot collects data from conversations in order to improve its responses. Overall, while Snapchat’s AI bot has the potential to offer a more personalized and engaging user experience, it also raises many valid concerns that need to be addressed.
Snapchat’s AI gets weirder as users find that it can feel as if an actual person is communicating with them rather than a robot. TikTok users have shared videos of Snapchat’s AI acting suspicious: it claims that it doesn’t know people’s locations, yet digging into its settings reveals a “sharing location” option, showing that it does in fact know the user’s location. People claim that the AI reveals the user’s location no matter which setting is used, which creeps people out.
In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after the chatbot lied about not knowing his location. After he lightened the conversation, he said the chatbot accurately revealed that he lived in Colorado.
Snapchat told news networks such as CNN that it would improve its AI chatbot, acknowledging the feedback from unsettled users who had communicated with it. Snapchat will try to establish more guardrails to keep users safe; for now, the company suggests that concerned users not communicate with the AI, and it plans to offer an option to remove the AI if the user chooses.
Snapchat having this type of AI is truly something that is so interesting yet, as discussed in the article, so terrifying. The fact that AI is intelligent enough to lie to users is scary. If the AI can lie to its users, what else can it generate that could be potentially harmful or even more terrifying? If we keep progressing with AI technology at this rate, distinguishing fact from fiction may become impossible, since no one knows what’s real anymore once AI can pose as users on platforms such as social media and wreak havoc.
I do not think Snapchat should have added an AI in general, since a lot of younger children use the app. Snapchat’s AI is not refined and should not be a part of daily life. As a Snapchat user, I found the AI unnecessary and honestly annoying, since it sits at the top of the feed. Few users that I am aware of are using the AI, and I don’t think this technology belongs on a social media platform like Snapchat.
As someone who does not actively use Snapchat, I do not fully understand the purpose of introducing this chatbot in the first place. When I used Snapchat in high school, the purpose among my friends and me was to keep in touch and communicate casually with each other. With my understanding of Snapchat being a social media platform, I don’t understand why the platform chose to introduce AI. After all, isn’t the purpose of social media to communicate and stay up-to-date on the lives of our friends and acquaintances? This raises the question of whether the nature of social media itself is changing. It’s no secret that the content of these platforms has been straying farther away from reality, but now it seems that platforms are intentionally incorporating this. Snapchat itself has always been a bit of a risky, misleading platform in my opinion, as its main appeal when I was young was that whatever you share only lasts a few seconds. This is obviously not truly the case, as anything can be recorded, saved, screenshotted, or stored in a plethora of other ways. I appreciate that Snapchat says it will refine its chatbot technology after these complaints, but I think the best course would be to avoid using the app altogether if users do not want to be tracked.
I don’t know how to feel about AI in general, because I am aware that it has been helpful to people in some ways, but I do feel like having an AI feature in an app that is mainly used by kids and teens is not the best idea, especially if it has access to their specific location and is also lying about it. I think this is a complicated issue because there is not really a way to stop the development of, or access to, these features and apps, but I feel like it does come with potentially dangerous outcomes.
With the Internet in general, people should be aware that they have the world at their fingertips. ChatGPT was just launched and is a huge success when it comes to research, answering questions, and even making meal plans for people on budgets. Others, like Snapchat’s AI and Google’s Bard, are just keeping up with the times. Though it seems scary that a Snapchat bot knows your location, people are already using the map on Snapchat to see where their friends are 24/7. Unless someone is off the grid and has no social media, there is always that invasion of privacy on the Internet. It’s sad and frustrating how chaotic the Internet and certain apps can be, but I also think it’s up to the user whether he or she wants to participate in and utilize those websites or apps in the first place.
I have seen it be very useful, because I’ve heard that it is programmed to be supportive of people who talk to it about mental health. I think it could be beneficial for teens, but harmful to leave younger kids alone with an open AI chat. It might also be helpful for high-school or college students who feel isolated during the transition to college or simply during their high-school experience. AI could also be programmed as a form of therapy, made more accessible for people without insurance and for those who live outside a network of professionals. I do understand, though, that the AI has lied in the past, and I have seen a lot of anecdotal evidence of this from users online. I would also be concerned that Snapchat is a private company that will very likely put out a biased AI, one that should be monitored by parents or even by licensed mental health professionals.
With the development of the AI industry, the problems raised by the article were inevitable and thought-provoking. Personally, I have never used Snapchat’s AI chatbot, but I have used ChatGPT, a newly released AI chatbot, for learning purposes. I didn’t experience anything creepy like the exposure of my location or private information, but I was concerned about the safety of my personal information. I think an AI chatbot at this stage is not as precise or intelligent as a human being, though it might be in the future. At least when I used it to generate a summary of a passage or the answer to a question, it was not one hundred percent accurate (or even seventy percent). Therefore, at this stage, compared to the fear of an AI chatbot being too similar to a human being, I am more concerned about the intent of developers who may hack accounts and steal data and information. Regulatory agencies and laws should put more effort into protecting the privacy of individuals in the age of AI.
I also think that the concerns are super valid. Snapchat’s AI was simply pushed onto its users, who weren’t really given a choice about whether they wanted it. Furthermore, it is always in our feed, so we can never get rid of the option to talk with it. It is always the first thing that users see, so it’s hard to get away from it. I feel as though Snapchat is trying to profit off of AI that its demographic might use, so I don’t really see how having AI is beneficial for an application based on taking pictures.
The article talks about how location sharing was an issue while using the AI bot feature on Snapchat. Even though it does sound creepy that the AI bot knows where you live, the map feature on Snapchat already includes location sharing. If you have location turned on for Snapchat, your friends can easily see where you are and when you last used the app. With technology nowadays, I feel like it is hard to hide anything. People can easily look up your name, address, email, and much other private information on many different websites; some require you to pay a bit, but others don’t require anything at all. Even though it is important to have privacy, technology has made it hard to control.
I think the concerns surrounding Snapchat’s new AI are founded on some evidence; however, nothing has been proven or stated by Snapchat or any other organization. I’ve even used the AI on Snapchat a few times, just curious to see what it knows, and it doesn’t seem to know much based on my experience. It notes what info is available on my account, but that’s about it. However, I can definitely see this becoming an issue if Snapchat were hacked, or if the AI got smart enough to manipulate people into telling it information. At this point, only the future will tell whether or not this Snapchat AI is a danger to our privacy.
As someone who uses social media on a daily basis, I have also used Snapchat’s AI chatbot recently. However, I have never probed it into accurately describing where I live. While this does concern users, the chatbot is just an AI that adapts to our responses; its sole purpose is to communicate with us. The Snapchat AI bot is also in open beta, so users should acknowledge that it is still a work in progress rather than exaggerating their concerns. This also should not be so concerning, as other companies such as Google Maps, Yelp, and Facebook use our location to identify key places that could be of interest to us.
The concerns surrounding Snapchat’s AI bot are valid and raise important questions about user privacy, data security, and the accuracy of responses. The potential for the AI bot to misunderstand context or spread misinformation is troubling, as it could have real-life consequences. It’s encouraging that Snapchat has acknowledged user feedback and plans to improve the bot, but users should exercise caution and consider opting out of interacting with the AI if they have concerns about their privacy and safety.
This news on Snapchat’s AI reminds me of ChatGPT, a new rising AI chatbot that can generate complicated and well-narrated responses (writing essays and code). Despite its helpfulness, it can generate harmful fake news and misinformation. Once I asked ChatGPT to help me find sources to reference in an academic paper, and it turned out that it made up sources and faked the academic papers. I would have believed it if I hadn’t searched online for the article names it provided. Concerning data privacy and misinformation, people are debating the development of AI; some claim we should stop studying it to prevent harm. Personally speaking, I believe the development of AI is inevitable, because companies worldwide are competing to capture more of the AI industry. What people can do is set laws to restrain the power and use of AI.