Snapchat’s AI bot has been a topic of concern among users due to the potential risks associated with it. One of the main concerns is that the AI bot may not be able to properly understand the context of messages, leading to inappropriate or insensitive responses. Another concern is that the bot may be used to spread fake news or misinformation, which could have serious consequences. Additionally, there are concerns about the data privacy and security of users, as the AI bot collects data from conversations in order to improve its responses. Overall, while Snapchat’s AI bot has the potential to offer a more personalized and engaging user experience, it also raises many valid concerns that need to be addressed.
Snapchat’s AI gets weirder as users find that it feels as if an actual person is communicating with them rather than a robot. TikTok users have shared videos of Snapchat’s AI acting suspicious: it claims that it doesn’t know people’s locations, yet digging into its settings shows that it does in fact know the user’s location, since there is a “sharing location” option in the AI’s settings. People claim the AI will reveal the user’s location no matter what setting is used, which creeps people out.
In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after the bot lied about not knowing where he was located. After the user lightened the conversation, he said the chatbot accurately revealed he lived in Colorado.
Snapchat told news networks such as CNN that it would improve its AI chatbot, acknowledging the feedback from unsettled users who had communicated with it. Snapchat will try to establish more guardrails to keep users safe; for now the company suggests not communicating with the AI, and there will be an option to remove it if the user chooses.
The Snapchat AI bot poses ethical and safety concerns for all Snapchat users. Whether the bot knows a user’s location or provides misinformation, it has the ability to instill fear and sway users’ beliefs on factual, objective, or political matters. Further, if the company is compromised and its data is stolen, this information would put users at risk. There are other AI companies users can turn to, so why does Snapchat feel the need to have its own? It seems to only pose a threat to the security of users rather than benefit them.
Snapchat not collecting user data would be more surprising to me. With more and more allegations about large companies taking advantage of technology to collect and profit from user data, and with the rising popularity of AI systems, Snapchat’s AI being a culprit makes sense. I remember seeing a video where a user convinced Snapchat’s AI that it was drunk, and of the AI systems I have seen, Snapchat’s responded the most like a human would.
Besides the concerns about privacy for AI, I am also concerned about youth using AI to potentially replace social interaction. It’s easy to see how kids might be curious to engage with such a robot, but what if they start to become dependent on it? I find it kind of crazy how much technology has risen over the years and how we are starting to adapt to it.
I’ve definitely seen a few videos regarding this post. In one, the user convinced Snapchat’s AI it was drunk, and the chatbot did start to reply like it was inebriated. From my knowledge of different AIs, Snapchat’s seems to be the most human-like when it comes to conversing (compared to ChatGPT). While I’m sure the bot itself is not the biggest problem, I’m not surprised these allegations are coming to light; thanks to more and more lawsuits, we all know that large companies have been gathering user data without consent for a while now. Snapchat not collecting user data would be more surprising to me.
The AI on Snapchat is completely unnecessary. There is no point in having an AI chat box on Snapchat, because people will treat it the way they treat the other people they snapchat and talk to it frequently about topics it is not equipped to respond to the way their friends might be. Also, Snapchat is accessible to younger users, and talking to an AI instead of real people their own age is detrimental to their social development. The AI also responds in an automated manner, so I agree with the notion that it comes across as insensitive, and I know many people that use the Snapchat AI for more pressing matters.
The AI bot created by Snapchat is a terrifying feature. My main concern is that it cannot be removed from a Snapchat user’s chat feed and is always at the top. This encourages users to use it, as it is not going away. I think this AI bot encourages communication with a robot that can never disagree with you, rather than a real human connection. Why would a person want to wait for a response from a friend when the chatbot is instantaneous? The chatbot is also allowed to view your story unless you select otherwise. It gains personal intel on a user based on their conversations and Snapchat story. It is a creepy feature that has ulterior motives. If that were not the case, it would be easy to remove if someone did not want to interact with it.
Snapchat AI is a joke. I have had one of those weird experiences: twice now I have gone out and woken up the next day to the Snapchat AI saying “Hey” to me. It is just off-putting in the weirdest way, especially because I did not message it in any way beforehand. I actually uninstalled the app just because I didn’t like it (but also because Snapchat is dying).
I believe Snapchat’s AI technology has raised concerns due to its ability to access and utilize personal information that users did not explicitly provide. I have seen videos of people exposing the AI or showing some of its responses, and it does not seem okay. It is important for Snapchat to address these concerns and be as transparent as possible when dealing with something like this, which people of many ages use.
Artificial intelligence is something that has gradually been becoming more of a reality. Lately, many people use ChatGPT to help them learn concepts or create narratives. The scary part is that it can create false narratives that become easy for some young people to believe. Snapchat has progressed into being a social media platform younger kids use often. With this AI chatbot there (and it is something you’re unable to remove from your chat list), it can easily become an influence on younger children. Yes, it can be playful conversation, but it can also be harmful in promoting things the AI may not know how to explain well. I think it is a bad thing to have overall, and there should at least be an option to remove it from your account.
I do not believe the Snapchat AI was created to harm or terrify people. Instead, I think it was created for the benefit of the people and to improve the app. By having its own AI, Snapchat is generating more downloads and more users with this new feature. Now folks can easily text their friends and get many questions answered in the same app without opening Google or another app. With regard to knowing our location, I can see how this can violate one’s privacy, but with newer updates giving users the option to turn off the AI bot, this risk may go away and the app will remain enjoyable.
I think that this chatbot AI, for now, is used as just a lighthearted way to pass the time. I think the main concerns stem from the fact that the bot can accurately reveal where the user is. I think this is just a scare about the app having information about them on standby, but I am sure these users don’t even realize that they are sharing more than just their location through every other app on their phones. I also think that asking the AI chatbot for your location amounts to verbal permission from the user for the app to access this information, so maybe that is why it can reveal locations accurately.
The development of artificial intelligence is most definitely inevitable. This is just one very early form of that. Honestly, I find any interaction with Snapchat AI more comedic than terrifying as I just use it to mess around. That being said, I don’t think it should have access to your location information unless you give it your location information, akin to speaking to a human. All we can do as users is wait for Snapchat to improve upon the AI and fix these big problems.
I am so glad this is a topic for discussion. I would like to add a concern to this list. First, I know many people who are interested in AI and fascinated by its new abilities. One of my friends described how ChatGPT typed a whole podcast script of a theoretical conversation between Joe Rogan and Michelle Obama. The AI art is equally interesting but scary. I have only used AI twice, to see what it could do and to get inspiration for a poem. However, the Snapchat AI was forced on Snapchat users. Specifically, it has been at the top of my messages since they added it to the app. In addition, it won’t let me get rid of the conversation and does not let me look at the settings until I accept the terms and conditions allowing it to use my location and profile information to improve its AI abilities. I am very frustrated by this new update. I know AI has amazing capabilities for the future, but I do not think anyone should be forced into using it, especially as a social outlet. I do not want to socialize with AI.
I am honestly just a bit curious as to why social media apps like Snapchat need these AI features. I see this happen all the time: if something becomes popular, then every app tries to incorporate it (the short-video format that TikTok popularized can now be found on every social media app). Because AI technology is still very new, it can quickly become dangerous to give users access to it while it is still in development and many bugs still need to be fixed. Especially for social media apps like Snapchat, which tend to have a much younger audience, allowing them to use technology that a large number of people barely understand can be problematic.
I’ve heard about Snapchat AI’s ability to accurately “guess” a user’s location, and honestly, it’s a little scary. I don’t understand why Snapchat added an AI bot to their app (unless it’s for greater market appeal, which I find absurd if that’s the case), but I hope that the developers don’t code this AI to obtain private or sensitive information from us. The “track someone’s location from the past 24 hours” feature that they introduced with Snapchat+ was enough to spark concern and debate, so I can see why people aren’t comfortable with the implementation of an AI bot. To acknowledge users’ concerns, I think Snapchat needs to be transparent in their AI moderation, or else they risk getting into some lawsuits.
Snapchat’s AI bot is only one of the many new AI products/features being rapidly released in recent times. While AI does bring about some benefits, it is still in a fairly early stage of development and currently has a good amount of drawbacks. Snapchat’s AI demonstrates some of these drawbacks (e.g. privacy invasion and the spread of misinformation) as well as an overall concern about whether AI can truly be more beneficial than harmful. While the developers of such AI tools may have good intentions, it is clear that much caution must be applied before their release as some current AI tools have been seen to have unforeseen harmful side effects. Additionally, it is imperative that the developers of these tools listen and respond to user feedback to quickly address any misbehavior by their AI (the same way Snapchat is currently doing).
I have used Snapchat’s AI chatbot once, since it just naturally pops up as one of my friends, so I tried it. I definitely acknowledge the concern regarding AI chatbots and data privacy, specifically the location of Snapchat users. Additionally, I agree that there is a lot to improve, as misunderstanding messages can lead to significant consequences. However, I also appreciate this feature, as it may help bridge the connection for people with a smaller social network. Lastly, if people worry about location, I am curious how we should address it, since most apps can easily get our location just by having us accept the user agreement.
Just like any other form of AI on social media, there are always going to be concerns. It’s going to be on Snapchat to establish clear guidelines and ethical frameworks to ensure that the chatbot develops in alignment with user expectations and societal norms. With regular updates, user feedback, and ongoing monitoring, Snapchat can help identify and mitigate any potential risks associated with the AI bot.
When I saw the new AI bot on Snapchat, I immediately got more concerned. While I don’t have much experience with AI, my limited knowledge of it makes me concerned about my privacy. I purposely never click on the bot, even though I know the entire app is already taking my data. I feel like AI is spreading across many platforms, and I have been and always will be concerned for my privacy.
I personally haven’t tried Snapchat’s AI bot yet, but I can see many other social media platforms following in Snapchat’s footsteps and adding some type of AI bot to their platforms. I agree that Snapchat’s AI bot can be seen as creepy and scary, especially for younger users who may not understand the technology behind it, so I think that Snapchat should make more advances to protect its users and add more safeguards to the AI bot.
I think many of the concerns held towards Snapchat AI could be cleared up with more transparency from Snapchat. As for the concern that the Snapchat AI has your precise location: it is receiving that information from Snapchat itself, which already has that access (if granted) for its Geofilters and Snap Map. There is a lot of justified unrest around the popularization of AI in everyday life, so developers should be as transparent as possible and give consumers information about how the AI works. This will decrease the spread of fear and misinformation that we have already observed happening around us.
I don’t like the AI bot in Snapchat. I understand how it can seem terrifying with all the information that it collects without our knowledge. I think it’s an unnecessary feature to have added; it might disconnect us from reality, because it feels like we are talking with a human being or with one of our friends, since it constantly collects information to have better conversations with us.
I have not experienced Snapchat’s AI, nor have I seen the videos discussed in this post regarding the system having knowledge of location and personal information. However, Snapchat’s AI is another version of ChatGPT and countless other AI systems. All of these systems collect data from the user, which is dangerous. I personally like the benefits of AI, but we must be wary of the downfalls, specifically losing our personal data to these artificial systems.
While I haven’t experienced Snapchat’s AI or seen the videos of this system knowing locations or private information, I think this is representative of AI as a whole. Snapchat’s AI bot is just another version of ChatGPT and countless other AI systems that already exist. These systems collect data from users, which is a slippery slope, as seen in this post. I personally like the benefits of AI, but we must be wary of the potential downfalls.
I don’t think Snapchat AI is necessary, particularly considering the platform’s target audience of young teenagers. The AI chat feature, which attempts to simulate conversation, seems to have been added to the app without any real purpose. Snapchat primarily serves as a platform for sharing photos and videos, emphasizing visual communication rather than text-based conversations. Introducing AI chat functionality could potentially detract from the core experience that Snapchat offers. Moreover, as the target demographic comprises young teenagers, there are concerns about the appropriateness and safety of AI chat interactions. Instead of focusing on developing and promoting an AI chat function, Snapchat could direct its resources towards enhancing privacy features and implementing robust safety measures to ensure the well-being and protection of its young user base.
The discussion of AI continues, and there is a natural fear of unknown technology, especially technology that can mimic humans. In order to make AI more human-like faster, Internet companies need large amounts of data, so they violate privacy policies and use users’ private data for analysis. Not only is this a violation of social morality, it may well be a violation of the law.
After hearing stories about some people’s experiences with AI, I have never really wanted to give it a try myself. I feel like Snapchat should have taken extra measures to ensure the new AI feature was completely safe for users with regard to their private information and personal safety before releasing it. Their user base spans an extremely diverse range of ages, and AI can potentially be mentally harmful for some, especially when they are trying to distinguish between fantasy and reality.
Considering AI is becoming more and more popular in technology today, I can’t say I am super surprised that Snapchat added this feature. I definitely find it a little bit unsettling that it has such accurate location services and knows almost everything, but then again I think this sort of data-collecting technology has been used for a while now just in a more subtle way. It’s exciting that computers are able to hold such intelligent conversations but I don’t think this is a super appropriate tool for children to be using, which is why I would say that it should probably be taken off Snapchat.
I personally don’t have much experience with AI bots outside of Snapchat, so I don’t get weirded out by any responses. I also don’t share personal information when chatting with the AI bot. However, it is quite terrifying to know that it has information about people’s locations. I turned off location services on Snapchat years ago, but I should probably make sure it’s turned off for the AI too. I understand technology is evolving and that this is a good, exciting thing, but I also think it’s scary that a company is gathering so much data from so many people.
I’ve noticed that social media apps like to copy others’ features. For example, after TikTok cemented itself as a power app, many other social media apps suddenly updated with a short video scrolling section. In the same way, as the age of AI is upon us, Snapchat has decided to add an AI bot to the app. I personally do not understand why this decision was necessary as I believe many younger teenagers/adults use Snapchat as a means to chat with their friends; there is no reason why Snapchat should have every single feature in its software. I have also heard about how the Snapchat AI claims it doesn’t know where the user is located but immediately provides the location of the nearest fast food restaurant when prompted. AI is powerful so anyone who uses it should be aware and exercise caution.
I may not use Snapchat constantly in my daily life, but adding an AI to chat with seems like something abruptly added to the app, without a clear purpose or reason for its arrival. I agree that it can raise some concerns, given the confusing claims that it knows people’s locations despite what the AI itself says. It’s possible this is the start of an experiment Snapchat is running with its users and AI, one that will change over time as it continues.
AI technology is emerging in our generation and seems to be gaining traction in improving our access to information. While this is a positive thing, the infringement on privacy and the uncertainty about what the AI knows are quite terrifying.
As a user of Snapchat, when I updated my phone and saw the AI feature, I initially thought it was a really cool concept. However, after using it for a couple of days, I realized how weird it actually is. It’s as if I were talking to a real person, and the fact that there is no option to remove the AI from your account is really upsetting. The only way you’re able to remove the AI, at the moment, is by buying Snapchat+, which is inconvenient for those who don’t want to buy extra things just to remove something we didn’t ask for in the first place. Although this industry is growing, I believe there needs to be better protection of users’ privacy.
Now this is a very interesting topic to touch on, because yes, it was interesting to start hearing about people using a Snapchat-generated AI. I was curious and had to see it for myself, so I logged back into Snapchat, and man, what can I say, it’s definitely something else to experience. Out of curiosity I started messing around with it, and it’s definitely terrifying to see how it can work. I don’t know exactly what to think, since it is kind of strange that it’s on an app used mostly by teens, who may not know the best ways to use the AI. I agree that in some cases it may help when it comes to mental health outreach to some extent (since of course nothing beats actual therapy). But if not used in a correct or careful manner, who knows what type of mess this can bring, since it does use the data of conversations to evolve more and more. I guess only time will tell how this situation fully unfolds.
The rise in the use of AI technology has been rapid and unregulated. This was seen with the recent fake image of an explosion at the Pentagon, created with AI technology, which briefly caused the stock market to dip as traders panicked over fake media. The rise of fake news seen in the last couple of years is nothing compared to the damage AI can do to our society and media platforms.
As a Snapchat user, I was very intrigued and freaked out when I realized I had a permanent AI bot in my app. At first, I thought it was kind of fun to mess around, but I really dislike how you’re unable to delete/remove the bot from your list of friends. I also heard about the ‘sharing location’ issue on another social media platform, and that combined with the inability to delete the AI, seems like a huge invasion of privacy on Snapchat’s part. While Snapchat has access to your conversations and saved “snaps,” the decision to put such a controversial tool — that can apparently know where you are, despite location settings — doesn’t seem fair to the users that would prefer to not have the AI.
Snapchat having this type of AI is truly something that is so interesting yet, as discussed in the article, so terrifying. The fact that AI is so intelligent that it can lie to users is scary. If the AI can lie to its users, what else can it generate that could be potentially harmful or even more terrifying to the user? It seems that if we progress with AI technology at this rate, fact versus fiction may be called into question, because no one knows what’s real anymore when AI can pose as users on things such as social media and wreak havoc.
I do not think Snapchat should have added an AI in general, since a lot of younger children use the app. Snapchat AI is not refined and should not be a part of daily life. As a Snapchat user, I found the AI to not be necessary and honestly annoying since it sits at the top of the feed. Not a lot of users that I am aware of are using AI and I don’t think this technology belongs on a social media platform like Snapchat.
As someone who does not actively use Snapchat, I do not fully understand the purpose of introducing this chatbot in the first place. When I used Snapchat in high school, the purpose among my friends and me was to keep in touch and communicate casually with each other. With my understanding of Snapchat as a social media platform, I don’t understand why it chose to introduce AI. After all, isn’t the purpose of social media to communicate and stay up to date on the lives of our friends and acquaintances? This raises the question of whether the nature of social media itself is changing. It’s no secret that the content of these platforms has been straying farther from reality, but now it seems that platforms are intentionally incorporating this. Snapchat itself has always been a bit of a risky/misleading platform in my opinion, as its main appeal when I was young was that whatever you share only lasts a few seconds. This is obviously not truly the case, as anything can be recorded, saved, screenshotted, or stored in a plethora of other ways. I appreciate that Snapchat claims it will edit its chatbot technology after these complaints, but I think the best thing would be to avoid using the app altogether if users do not want to be tracked.
I don’t know how to feel about AI in general, because I am aware that it has been helpful to people in some ways, but I do feel like having an AI feature in an app that is mainly used by kids and teens is not the best idea, especially if it has access to their specific location and is also lying about it. I think this is a complicated issue because there’s not really a way to stop the development of or access to these features and apps, but I feel like it does come with potentially dangerous outcomes.
With the Internet in general, people should be aware that they have the world at their fingertips. ChatGPT was just launched and is a huge success when it comes to research, answering questions, even making meal plans for people on budgets. Others, like Snapchat’s AI and Google’s Bard, are just keeping up with the times. Though it seems scary that a Snapchat bot knows your location, people are using the map on Snapchat to see where their friends are 24/7 anyway. Unless someone’s off the grid and has no social media, there is always that invasion of privacy on the Internet. It’s sad and frustrating how chaotic the Internet and certain apps can be, but I also think it’s up to the user whether he or she wants to participate and utilize those websites or apps in the first place.
I have seen it be very useful, because I’ve heard that it is programmed to be supportive of people who talk to it about mental health. I think it could be beneficial for teens, but it could be harmful to leave younger kids alone with an open AI chat. It might also be helpful for high school or college students who may feel isolated during the transition to college or simply during their high school experience. AI could also be programmed to be a form of therapy, made more accessible for people without insurance and who live outside a network of professionals. I do understand, though, that the AI has lied in the past, and I have seen a lot of anecdotal evidence of this from users online. I would also be concerned that Snapchat is a private company that will very likely put out a biased AI, which should be monitored by parents or even licensed (mental health) professionals.
With the development of the AI industry, the problems raised by the article were inevitable and thought-provoking. Personally, I have never used Snapchat’s AI chatbot before, but I have used ChatGPT, a newly released AI chatbot, for learning purposes. I didn’t experience anything creepy like the exposure of my location or private information, but I was concerned about the safety of my personal information. I think an AI chatbot at this stage is not as precise and intelligent as a human being, but it might be in the future. At least when I used it to generate a summary of a passage or the answer to a question, it was not one hundred percent accurate (or not even seventy percent). Therefore, at this stage, compared to the fear of an AI chatbot being too analogous to a human being, I am more concerned with the intent of developers who may hack accounts and steal data and information. Regulatory agencies and laws may need to put more effort into protecting the privacy of individuals in the age of AI.
I also think that the concerns are super valid. I find that Snapchat AI was just put onto its users, and there wasn’t really a choice given to its users on whether they wanted it. Furthermore, it is always in our feed, so we can never get rid of the option to talk with it. It is always the first thing that users see, so it’s hard to get away from it. I feel as though Snapchat is trying to profit off of AI that its demographic might use, so I don’t really see how having AI for an application based on taking pictures is beneficial.
The article talks about how location sharing was an issue while using the AI bot feature on Snapchat. Even though it sounds creepy that the AI bot knows where you live, the map feature on Snapchat already includes location sharing. If you have location turned on for Snapchat, your friends can easily see where you are and when you last used Snapchat. With technology nowadays, I feel like it is hard to hide anything. People can easily look up your name, address, email, and much other private information on many different websites; some require you to pay a bit, but others don’t require anything. Even though it is important to have privacy, technology has made it hard to control.
I think the concerns surrounding Snapchat’s new AI are founded on some evidence; however, nothing has been proven or stated by Snapchat or any other organization. I’ve even used the AI on Snapchat a few times, just curious to see what it knows, and it doesn’t seem to know much based on my experience. It notes what info is available on my account, but that’s about it. However, I can definitely see this becoming an issue if Snapchat were to be hacked or if the AI got smart enough to manipulate people into telling it information. At this point, only the future will tell whether or not this Snapchat AI is a danger to our privacy.
As someone who uses social media on a daily basis, I have also used Snapchat’s AI chatbot recently. However, I have never probed it into accurately describing where I live. While this does concern users, the chatbot is just an AI that adapts to our responses; its sole purpose is to communicate with us. The Snapchat AI bot is also in open beta, so users should acknowledge that it is still being worked on rather than exaggerating their concerns. This also should not be concerning, as other companies such as Google Maps, Yelp, or Facebook use our location to identify key places that could be of interest to us.
The concerns surrounding Snapchat’s AI bot are valid and raise important questions about user privacy, data security, and the accuracy of responses. The potential for the AI bot to misunderstand context or spread misinformation is troubling, as it could have real-life consequences. It’s encouraging that Snapchat has acknowledged user feedback and plans to improve the bot, but users should exercise caution and consider opting out of interacting with the AI if they have concerns about their privacy and safety.
Another questionable aspect of how people communicate with Snapchat AI is how far they push its boundaries. For instance, people try to make the Snapchat AI feel “uncomfortable” by threatening suicide, asking emotionally inappropriate questions, or outright sexually harassing it. I think these are not good things to say at all, let alone to a seemingly sentient algorithm. It makes one wonder whether AI technology could also be developed for mental health work, such as assisting therapists or psychiatrists with diagnoses and even written plans for patients.
This news on Snapchat’s AI reminds me of ChatGPT, a rising AI chatbot that can generate complicated and well-narrated responses (writing essays and code). Besides being helpful, it can generate harmful fake news and misinformation. Once I asked ChatGPT to help me find sources for reference when writing an academic paper, and it turned out that it made up sources and faked the academic papers. I would have believed it if I hadn’t searched online for the article names it provided. Concerned about data privacy and misinformation, people are getting into debates about the development of AI; some claim we should stop studying it to prevent harm. Personally speaking, I believe the development of AI is inevitable, because companies worldwide are competing to capture more of the AI industry. What people can do is set laws to restrain the power and use of AI.