This article gives you practical tips for analyzing responses from a citizen survey about city website usability. If you want hands-on advice on AI survey response analysis, this guide is for you.
Choosing the right tools for citizen survey response analysis
The best approach for survey analysis depends on your data’s form—whether you have structured answers or open, conversational responses.
Quantitative data: Numbers, choices, and rating scales (like a 1-5 rating for "How easy was the site to use?") are easy to handle in Excel or Google Sheets. These tools quickly surface trends and basic statistics with formulas and charts.
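If you prefer scripting over spreadsheets, the same counts take a few lines of Python. Here's a minimal sketch with pandas, assuming a hypothetical CSV export with a 1-5 `ease_of_use` column (the file and column names are illustrative):

```python
import pandas as pd

# Hypothetical export: one row per respondent, "ease_of_use" rated 1-5
responses = pd.read_csv("city_website_survey.csv")

# How many people found the site hard to use (ratings 1-2)?
hard_to_use = (responses["ease_of_use"] <= 2).sum()
print(f"{hard_to_use} of {len(responses)} respondents rated ease of use 1-2")

# Distribution and average: the same numbers COUNTIF/AVERAGE would give you
print(responses["ease_of_use"].value_counts().sort_index())
print(f"Average rating: {responses['ease_of_use'].mean():.2f}")
```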
Qualitative data: When you ask open-ended questions or use conversational interviews, answers can fill pages with paragraphs. Reading them manually isn’t realistic if you have more than a couple dozen responses. Here’s where AI steps in: it can spot patterns and extract key themes in minutes, something that would take you hours by hand.
There are two main tooling approaches for qualitative responses:
ChatGPT or a similar GPT tool for AI analysis
Copy-paste into ChatGPT: You can copy exported responses into ChatGPT or another GPT tool. Then you ask it to summarize, group, or analyze for patterns with your own prompts.
The catch: this process gets clunky with larger datasets. Big exports can exceed the model's context limits, so you'll often need to split files, paste them in parts, and repeat prompts, which isn't convenient or scalable for larger surveys.
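If you do go this route, a rough workaround is to split the export into chunks before prompting. A minimal sketch, assuming a plain-text export with one response per line (the file name and character budget are illustrative stand-ins for a real token limit):

```python
# Split an exported response file into chunks small enough to paste
# into ChatGPT one at a time. 8,000 characters is an illustrative budget.
CHUNK_CHAR_BUDGET = 8_000

def chunk_responses(path: str) -> list[str]:
    chunks, current, size = [], [], 0
    with open(path, encoding="utf-8") as f:
        for line in f:  # assumes one response per line
            if size + len(line) > CHUNK_CHAR_BUDGET and current:
                chunks.append("".join(current))
                current, size = [], 0
            current.append(line)
            size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

# Paste each chunk with the same prompt, then merge the partial summaries.
for i, chunk in enumerate(chunk_responses("survey_export.txt"), start=1):
    print(f"--- Chunk {i}: {len(chunk)} characters ---")
```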
All-in-one tool like Specific
Purpose-built AI platform: Tools like Specific are designed specifically to both collect and instantly analyze responses using AI. You can chat directly with AI about your results, just as you would with ChatGPT, but with more structure and features for managing what context the AI sees at any time.
Follow-up questions: Specific stands out by asking respondents follow-up questions in real time, making responses deeper, clearer, and more actionable. The automatic follow-up feature raises the quality of the insights you gather. (If you want to learn more, check the automatic AI follow-up questions article.)
AI-powered insights: As AI summarizes, you get a clear view of main ideas, trends, and actionable findings—without exporting data or doing manual work. For surveys about city website usability, this gets you to the “why” behind user frustrations, not just raw numbers. (For more detail, there’s a how-to guide on creating citizen surveys about city website usability.)
Instant collaboration: In Specific, you and your team can chat, filter, and review themes together within the same interface, making fast decisions on city website improvements.
When you’re dealing with issues as critical as usability, the stakes are high: according to recent research, 88% of online consumers are less likely to return after a bad website experience [1]. Picking the right AI tool for the analysis process is as important as the survey questions themselves.
Useful prompts that you can use for citizen survey analysis about city website usability
The power of AI analysis comes from your prompts. The right prompt transforms raw survey text into actionable findings and unlocks patterns that manual reading often misses. Let’s look at prompts you’ll want in your toolkit.
Prompt for core ideas: Use this to pull out main topics and the number of mentions—especially handy for city website usability feedback. This works whether you use Specific’s built-in AI analysis or paste survey data into ChatGPT:
Your task is to extract core ideas in bold (4-5 words per core idea) plus an explainer of up to 2 sentences.
Output requirements:
- Avoid unnecessary details
- Specify how many people mentioned each core idea (use numbers, not words), most mentioned on top
- No suggestions
- No indications
Example output:
1. **Core idea text:** explainer text
2. **Core idea text:** explainer text
3. **Core idea text:** explainer text
Context makes a difference: AI works better when you give context about your survey, its purpose, and what you want from the responses. For example:
We ran a conversational survey with Citizens about City Website Usability. Our main goal is to understand major obstacles people face when using the site. Please focus analysis on pain points, navigation difficulties, unclear information architecture, and accessibility issues highlighted by respondents.
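If you script the analysis instead of pasting into the chat UI, the same context-plus-prompt pattern carries over. A minimal sketch using the OpenAI Python SDK; the model name, file name, and truncated prompt are illustrative, and it assumes OPENAI_API_KEY is set in your environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

survey_context = (
    "We ran a conversational survey with citizens about city website usability. "
    "Focus on pain points, navigation difficulties, unclear information "
    "architecture, and accessibility issues."
)
core_ideas_prompt = "Your task is to extract core ideas in bold ..."  # full prompt from above
survey_text = open("survey_export.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[
        {"role": "system", "content": survey_context},
        {"role": "user", "content": f"{core_ideas_prompt}\n\nResponses:\n{survey_text}"},
    ],
)
print(response.choices[0].message.content)
```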
Drill deeper: If a core idea stands out, follow up with: "Tell me more about XYZ (core idea)". AI will find relevant quotes or patterns supporting that topic, letting you quickly validate and explore issues that might make or break the citizen experience.
Prompt for specific topics: Need to verify if anyone mentioned a specific idea (e.g., “search function”)? Use: "Did anyone talk about search function? Include quotes." Instantly see if it’s a real user pain point or a non-issue.
Prompt for personas: This helps you spot clusters in your data: "Based on the survey responses, identify and describe a list of distinct personas—similar to how 'personas' are used in product management. For each persona, summarize their key characteristics, motivations, goals, and any relevant quotes or patterns observed."
Prompt for pain points and challenges: Analyze recurring frustrations: "Analyze the survey responses and list the most common pain points, frustrations, or challenges mentioned. Summarize each, and note any patterns or frequency of occurrence."
Other useful approaches for citizen feedback on city websites include:
Prompt for motivations & drivers: Surface what drives user behavior: "From the survey conversations, extract the primary motivations, desires, or reasons participants express for their behaviors or choices. Group similar motivations together and provide supporting evidence from the data."
Prompt for sentiment analysis: Quickly split feedback into positive/negative/neutral: "Assess the overall sentiment expressed in the survey responses (e.g., positive, negative, neutral). Highlight key phrases or feedback that contribute to each sentiment category."
Prompt for suggestions & ideas: Gather ideas straight from citizens: "Identify and list all suggestions, ideas, or requests provided by survey participants. Organize them by topic or frequency, and include direct quotes where relevant."
Prompt for unmet needs & opportunities: Reveal hidden growth areas: "Examine the survey responses to uncover any unmet needs, gaps, or opportunities for improvement as highlighted by respondents."
Using prompts like these, you can move quickly from high-level themes, like citizens wanting faster load times (which 47% of website visitors expect in under 2 seconds [3]), to individual frustrations or bold new ideas. That's real evidence for change, not guesswork.
Want more inspiration for creating your survey? Try the best questions for citizen surveys about city website usability resource.
How Specific analyzes qualitative data by question type
Open-ended questions (with or without follow-ups): Specific's AI summarizes each question and folds in context from the follow-up exchanges. For example, if someone says, "The homepage is confusing," and the AI probes, "Which part is confusing?", that answer is included in the summary for the question.
Choices with follow-ups: Each answer choice—like “Difficult navigation”—has its own AI summary, drawing only from follow-up responses tied to that option. This means you find out exactly why people who selected “Difficult navigation” felt that way.
NPS surveys: AI sorts feedback by detractors, passives, and promoters, summarizing the follow-ups behind each group. You can see what makes a “promoter” happy, or what turns a “detractor” away, all in one click.
You can use a similar method in ChatGPT; it just takes more manual effort to group and segment the data by answer type, especially when cross-referencing follow-ups. The sketch below shows what that grouping looks like for an NPS export.
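A minimal sketch of that manual grouping, assuming a hypothetical CSV export with a 0-10 `score` and a `follow_up` text column; it uses the standard NPS buckets (0-6 detractors, 7-8 passives, 9-10 promoters):

```python
import pandas as pd

# Hypothetical export: one row per respondent, columns "score" (0-10) and "follow_up"
df = pd.read_csv("nps_export.csv")

def nps_bucket(score: int) -> str:
    # Standard NPS segmentation: 0-6 detractor, 7-8 passive, 9-10 promoter
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

df["segment"] = df["score"].apply(nps_bucket)

# Collect follow-up answers per segment, ready to paste into a prompt such as
# "Summarize what detractors say about the city website."
for segment, group in df.groupby("segment"):
    answers = "\n".join(f"- {text}" for text in group["follow_up"].dropna())
    print(f"=== {segment} ({len(group)} respondents) ===\n{answers}\n")
```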
If you’d like to set up a tailored survey flow, check out the guide to creating a citizen survey about city website usability.
Solving AI context size challenges in large citizen surveys
One common challenge when analyzing large volumes of qualitative data is the context limit of GPT-based AIs. If you export too many survey responses and paste them in, the AI might lose track or only analyze a sample, risking missed insights.
There are two effective ways to address context limits, both handled out of the box in platforms like Specific (a do-it-yourself version is sketched after this list):
Filtering: Analyze only the survey conversations where respondents answered specific questions, or gave certain answers. If you want to dig deep into citizens who had navigation issues, you filter for those responses—maximizing use of context, focusing analysis on exactly what you need.
Cropping: Limit what data the AI sees by sending it only selected questions from all the conversations. This lets you zoom in on, say, opinions about the “Events” section of the city site, without context overload.
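Outside a purpose-built tool, you can approximate both techniques yourself. A minimal sketch, assuming a hypothetical JSON export where each conversation maps questions to answers (all file names, question texts, and answer labels are illustrative):

```python
import json

# Hypothetical export: [{"answers": {"question text": "answer text", ...}}, ...]
with open("survey_export.json", encoding="utf-8") as f:
    conversations = json.load(f)

# Filtering: keep only respondents who reported navigation issues
navigation_issues = [
    c for c in conversations
    if c["answers"].get("How was navigating the site?") == "Difficult navigation"
]

# Cropping: send the AI only the questions you care about
KEEP_QUESTIONS = {"What do you think of the Events section?"}
cropped = [
    {q: a for q, a in c["answers"].items() if q in KEEP_QUESTIONS}
    for c in navigation_issues
]

print(f"{len(cropped)} focused conversations ready to send to the AI")
```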
Applying these smart filters makes it possible for AI to deliver focused, actionable summaries even in the largest data sets—essential for city website usability studies, where feedback volume is often high. For more on these technical details, see the AI survey response analysis page.
It’s worth noting that 73.1% of web designers say that a non-responsive (i.e., not mobile-optimized) site design is the main reason people leave a website [2]. By slicing and dicing survey data this way, you’ll know for sure if mobile issues are a major concern for your citizens or just an edge case.
Collaborative features for analyzing citizen survey responses
Analyzing city website usability feedback isn’t a solo sport—lots of stakeholders care about user experience, from IT to communications to city management. The challenge: aligning everyone quickly on what matters most in the feedback.
Chat-based collaboration: In Specific, you don’t just review AI summaries. You and your team can chat directly with AI about the survey data—posing your own questions, following up on threads, brainstorming possible fixes, and more.
Multiple chats, flexible focus: Need to segment analysis by mobile users? Or compare new vs. returning visitors? Each chat session in Specific can have its own filters. You always see who set up each chat, so team members can track what’s happening—making handoffs seamless.
Transparency in collaboration: Every chat shows exactly who asked what. When several team members join a conversation, their avatars show up alongside their queries in the AI chat, fostering accountability and shared understanding. No more guessing who flagged an insight or where a follow-up question came from.
Collaborative features dramatically speed up how citizens’ feedback turns into action—no more back-and-forth over tangled spreadsheets or endless email threads.
If you want to try out survey creation with AI editing built-in, see the AI survey editor overview. Or, to get straight to survey building, try the citizen survey generator for city website usability.
Create your citizen survey about city website usability now
Act on what your citizens care about—use AI to surface pain points instantly, collaborate with your team in real time, and turn city website feedback into smart improvements today.