Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains. 

No matter how well you understand technology for your day job, making sense of it as a parent is an entirely different story. Many of us are already navigating tough conversations with our kids about social media, healthy screen time habits, and plagiarism. Now throw generative artificial intelligence into the mix, and a whole new set of worries appears.

Are my kids going to AI for relationship advice? Is it making them lazy in school? Have they been the victims of deepfake bullying? It's enough to make even tech-savvy privacy pros want to throw out all of their devices.

In policy and business circles, we're often grappling with whether AI is fundamentally different from what came before. Does it need its own unique regulatory approach? Which department, if any, should take the lead on in-house compliance?

The good news for parents is that all the tips and tricks you picked up talking to your kids about "past" technology (mobile phones, social media, gaming, and so forth) work just as well for AI. Ultimately, the approach is the same: Learn about the technology and its benefits and risks; talk with your kids about what they're experiencing; and try the tech with them.

First things first: Young people are using AI, including in school. Recent polling from the Center for Democracy & Technology shows 86% of students have used AI, for both personal and school purposes. Common Sense Media research shows teens are using generative AI for school assignments, though often without teacher permission. Seventy-two percent of teens have used AI companions, with one-third engaging in social interactions and relationships. Notably, a third of teens find AI conversations as satisfying as, or even more satisfying than, those with humans.

It's important to remember that the AI used by young people is often not designed for them, tested on them, or optimized for their well-being. Young people have mixed feelings about AI and generative AI. They may find it entertaining or feel it allows them to receive targeted and personalized learning. 

At the same time, young people worry, as adults do, that the use of AI may weaken skills like writing, research, and reading comprehension. Half of teens distrust information from AI companions. Artificial intelligence can also exacerbate interpersonal concerns and bullying; a third of students reported a deepfake incident involving nonconsensual intimate imagery at their school in the past year. In tragic cases, families have sued AI companies whose chatbots allegedly encouraged their children to take their own lives.

One thing young people and parents agree on: Parents don't know how their kids are using AI and aren't talking with their kids about it.

Have a conversation with your child about whether they use AI and what they use it for. For younger kids especially, just asking them and their teachers what tech tools they use and how they use them can be helpful. Younger kids may not realize that summarized search results or a slide program that suggests statistics involve AI.

Do they use it in the classroom? To do homework? Have they heard of people using it to cheat, or considered such uses themselves? If they use generative AI to get help at school, do they also ask it other, more personal questions? Have they or their friends used AI chatbots? Have they created things with it? Do they like AI? If so, which products, and for what purposes? What excites them about it, and what may scare them? If this conversation seems daunting, find some more tips here.

Then think about what ground rules you'd like to set for AI in your family. Consider updating or creating a family media plan. Research the products your kids say they have used. Read the privacy policies and review third-party assessments, which can take into account specific concerns for youth. See if you can tighten any privacy settings. Check out any parental controls on AI products, if they are offered. OpenAI and Meta, for example, have recently announced them.

Next, test out the technology with your kids. Watch them ask a chatbot a question, and evaluate the answer and sources together. Talk about what personal information they may want or need to share when communicating with AI, and how the AI may otherwise use that information. Try a conversation on a topic your child knows well so they can assess the responses more critically. Create an image or a video together. Then talk about what may or may not be appropriate to use as inputs, and how you could make it clear to others that a creation is AI-generated.

Reassess which AI products you are comfortable with your kids using and under what circumstances. Some products may be entirely inappropriate for younger kids, or appropriate only with close parental supervision. Continue to use the tech that you let your kids use, and keep open lines of communication about it. Even if you have parental control tools turned on, remember that parental controls are not a panacea, both in how they function on the back end and in children's ability to navigate around them.

Ultimately, you want to be able to help your kids safely and smartly navigate this new technology — and, hopefully, navigate it right alongside them.

Lastly, as privacy pros, we have the unique ability to shift and shape the practices of many of these tech products — the same ones our families are using. If something gives you pause as a parent, keep that in mind the next time you're asked for your opinion at work.

Ariel Fox Johnson, CIPP/US, is the founder of Digital Smarts Law & Policy and a senior advisor to Common Sense Media.