How to use AI to analyze responses from a civil servant survey about public procurement transparency


Adam Sabla · Aug 22, 2025


This article gives you practical tips on how to use AI to analyze responses from a civil servant survey about public procurement transparency.

Choosing the right tools for analyzing civil servant survey data about public procurement transparency

The best approach and tooling for analyzing survey responses depends on the structure and format of your data. Here’s how I break it down:

  • Quantitative data: These are straightforward—think counts of how many civil servants chose a particular option or rated a process. I use classic tools like Excel or Google Sheets, since tallying numbers, making charts, and running simple stats are intuitive there. (If you prefer scripts, see the short pandas sketch after this list.)

  • Qualitative data: This is where it gets messy: written comments, long explanations, or responses to follow-up questions. Reading every word yourself is not realistic—especially if you have more than a dozen responses. That's where AI tools become essential, as they can distill large volumes of open-ended feedback into clear, actionable themes.
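If you'd rather script the quantitative tallies than click through a spreadsheet, the same counts take a few lines of pandas. Here's a minimal sketch, where the file name and column name are hypothetical placeholders:

```python
# A minimal sketch of tallying quantitative survey responses with pandas.
# "procurement_survey.csv" and "transparency_rating" are hypothetical
# placeholders for your own export and column names.
import pandas as pd

df = pd.read_csv("procurement_survey.csv")

# Count how many civil servants chose each option for one question.
counts = df["transparency_rating"].value_counts()
print(counts)

# Express the same tally as percentages for quick reporting.
print((counts / counts.sum() * 100).round(1))
```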

There are two tooling approaches for dealing with qualitative responses:

ChatGPT or a similar GPT tool for AI analysis

Copy-paste and prompt chat: You can export civil servant survey responses from your system (usually as a CSV) and paste them into ChatGPT or another GPT-powered model. From there, just chat about patterns, ask follow-up questions, or prompt it to identify key themes about public procurement transparency.

Not built for survey workflows: It's possible but not ideal. You have to manage the formatting headaches, keep prompts organized, and sometimes struggle with context limits if you have lots of answers.

Useful for quick, small jobs: I’d use this for one-off analyses or a batch of fewer than 50 responses. Anything more, and things can get clunky fast.
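If you take this route regularly, you can script the copy-paste workflow instead of doing it by hand. Below is a minimal sketch, assuming the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY in your environment, and hypothetical file and column names:

```python
# A minimal sketch of the "export and prompt" workflow. The file name
# and "open_feedback" column are hypothetical placeholders.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("procurement_survey.csv")
answers = "\n".join(f"- {a}" for a in df["open_feedback"].dropna())

prompt = (
    "I surveyed civil servants about public procurement transparency.\n"
    "Identify the key themes in these responses:\n\n" + answers
)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```

Note that this still shares ChatGPT's practical limits: with hundreds of responses, the prompt can exceed the model's context window, which is where the filtering and cropping approaches discussed later come in.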

All-in-one tool like Specific

Purpose-built for survey collection and AI-driven analysis: With Specific, I can collect survey data (including open-ended and follow-up questions) in a conversational way and immediately analyze the qualitative insights.

Follow-ups lead to richer data: When specific, tailored follow-up questions are asked, I notice the data quality skyrockets. Civil servants are more likely to share real stories or practical examples—something traditional surveys often miss. (This feature is explained here: automatic AI follow-up questions.)

AI summaries and chat analysis in one place: Specific instantly summarizes responses, highlights common threads, and lets me chat with the AI about any subgroup of survey responses (like responses to a particular question, or only from a specific location). I don’t worry about manual filtering, copying data, or tracking what I asked before—the platform keeps everything organized for me.

Advanced control with data filtering: Sometimes, I only want to analyze what respondents said after giving a particular answer. Specific’s context management makes this painless, so I’m always chatting about exactly what matters.

Learn more about these features in the AI survey response analysis guide.

Useful prompts that you can use to analyze civil servant responses about public procurement transparency

Well-crafted prompts are the secret sauce when it comes to making any AI analysis work—whether you use Specific’s built-in AI chat or paste your survey data into tools like ChatGPT.

Prompt for core ideas: This is my go-to for summarizing themes from any batch of qualitative feedback. Use this with either Specific or a general GPT tool:

Your task is to extract core ideas in bold (4-5 words per core idea) plus an explainer of up to 2 sentences.

Output requirements:

- Avoid unnecessary details

- Specify how many people mentioned each core idea (use numbers, not words), most mentioned on top

- No suggestions

- No indications

Example output:

1. **Core idea text:** explainer text

2. **Core idea text:** explainer text

3. **Core idea text:** explainer text

AI always performs better if you give it more context about the survey, its purpose, and your goals. Here’s an example of how I’d add context:

I ran a survey with 80 civil servants focused on public procurement transparency in the UK. My goal is to surface common pain points in applying transparency guidelines and using framework agreements. Please extract the biggest themes and outcomes.

If something specific catches my eye (say, concerns about the publication of contract completion certificates), I’ll drill into it with: Tell me more about contract completion certificate transparency.

Prompt for specific topics: To quickly check if civil servants brought up contract completion certificates or other hot issues, I prompt: Did anyone talk about contract completion certificates? Include quotes.

Prompt for pain points and challenges: When I want a list of frustrations, I ask:

Analyze the survey responses and list the most common pain points, frustrations, or challenges mentioned. Summarize each, and note any patterns or frequency of occurrence.


Prompt for sentiment analysis: If I want to see the mood—positive, neutral, or negative—across answers, I use:

Assess the overall sentiment expressed in the survey responses (e.g., positive, negative, neutral). Highlight key phrases or feedback that contribute to each sentiment category.


Prompt for suggestions & ideas: To surface actionable improvement ideas:

Identify and list all suggestions, ideas, or requests provided by survey participants. Organize them by topic or frequency, and include direct quotes where relevant.
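If you're running these prompts through a GPT tool's API rather than Specific's built-in chat, you can apply the whole battery in one scripted pass. A sketch, with the same hypothetical file and column names as earlier:

```python
# Run a battery of analysis prompts over the same responses in one pass.
# File and column names are hypothetical placeholders, as above.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("procurement_survey.csv")
answers = "\n".join(f"- {a}" for a in df["open_feedback"].dropna())

prompts = {
    "pain_points": "List the most common pain points, frustrations, or challenges mentioned.",
    "sentiment": "Assess the overall sentiment (positive, negative, neutral) with key phrases.",
    "suggestions": "List all suggestions, ideas, or requests, organized by topic or frequency.",
}

client = OpenAI()
for name, task in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{task}\n\nSurvey responses:\n{answers}"}],
    )
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```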


Want more ideas? Check out these recommendations for civil servant survey questions and tips for creating new surveys.

How Specific analyzes different question types in civil servant surveys

Specific’s analysis features are tailored to different question and answer structures, letting you get granular insights with zero manual effort:

  • Open-ended questions (with or without follow-ups): The AI summarizes every response and—if follow-ups are present—compiles a common summary across all related conversations for that question. This is incredibly useful for discovering why transparency is (or isn’t) achieved, for example, in publishing procurement plans. [1]

  • Multiple choice questions with follow-ups: Each choice branches into a separate summary of all the follow-up responses tied to that answer. If, for instance, civil servants express trust or skepticism about framework agreements, you’ll see a unique summary for each opinion. [2]

  • NPS (Net Promoter Score): I get an individual summary for promoters, passives, and detractors, including explanations from each group about why they feel the way they do about public procurement transparency processes. This is extremely valuable for tracking shifts over time.

Of course, you can try to get similar results using ChatGPT. It just takes more work—copying the right conversations, applying grouping logic yourself, and making sure you’re not missing patterns hidden in subgroups.
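To give a concrete sense of that grouping logic, here's a minimal sketch of segmenting NPS respondents yourself before summarizing each group. The column names are hypothetical placeholders:

```python
# Segment NPS respondents (promoters 9-10, passives 7-8, detractors 0-6)
# so each group can get its own summary. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("procurement_survey.csv")

def nps_group(score: int) -> str:
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

df["nps_group"] = df["nps_score"].apply(nps_group)

# Collect each group's open-ended explanations; each batch would then be
# sent to the model with a "summarize this group's reasoning" prompt.
for group, rows in df.groupby("nps_group"):
    batch = "\n".join(f"- {a}" for a in rows["nps_reason"].dropna())
    print(f"{group}: {len(rows)} respondents, batch ready for summarization")
```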

How to handle AI context limits when analyzing large civil servant survey data sets

AI tools—including ChatGPT and most custom models—can only handle so much text at once (their “context limit”). If you’re running a large survey on public procurement transparency, you might hit these limits. Here’s how I solve it:

  • Filtering: I filter conversations down to only those where civil servants answered a particular question or made a certain selection. This reduces the total data size, letting the AI go deep without losing track.

  • Cropping: I can crop the analysis to a single question (or a small group of questions), drastically shrinking the dataset so the AI can focus on the most relevant insights and you never exceed the context limit.

Specific handles these approaches natively—no extra effort on your part. They’re described in detail in the AI survey analysis docs.
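If you're scripting the analysis yourself instead, a common workaround for context limits is map-reduce style summarization: summarize the responses in batches, then summarize the batch summaries. A sketch under the same assumptions as the earlier examples (OpenAI SDK, hypothetical file and column names, and an illustrative batch size of 50):

```python
# Map-reduce summarization to stay under the model's context limit.
# The batch size and names are illustrative, not tuned recommendations.
import pandas as pd
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize the key themes:\n\n" + text}],
    )
    return reply.choices[0].message.content

df = pd.read_csv("procurement_survey.csv")
answers = list(df["open_feedback"].dropna())

# Map: summarize each batch of 50 responses separately.
batch_summaries = []
for i in range(0, len(answers), 50):
    batch = "\n".join(f"- {a}" for a in answers[i : i + 50])
    batch_summaries.append(summarize(batch))

# Reduce: summarize the batch summaries into one overview.
print(summarize("\n\n".join(batch_summaries)))
```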

Collaborative features for analyzing civil servant survey responses

The classic challenge: analyzing survey results is rarely a solo mission. Discussing takeaways from a civil servant public procurement transparency survey often means pulling in policy, legal, and procurement teams—and disagreements are common.

Easy real-time collaboration: With Specific, I analyze survey data by chatting with the AI—and I’m not alone. Each chat session can be filtered by question, response type, or subgroup. I can see who on my team started a specific analysis, and we share results instantly.

Multiple chats for better context: I create chats for different themes—for example, one chat dedicated to “framework agreement concerns,” another for “contract publication gaps.” Each is saved, always showing the team member’s name and avatar. This way, I can quickly trace back who asked what, and nothing gets lost.

Visibility into collaboration: When a colleague joins in, their questions are clearly attributed. We keep a running record of our hypotheses, findings, or next steps—turning analysis into a team sport, not a painful slog through comments on spreadsheets.

I recommend reading this step-by-step guide to launching a civil servant survey on public procurement transparency.

Create your civil servant survey about public procurement transparency now

Start collecting rich, actionable insights from civil servants today—Specific lets you instantly create, launch, and understand surveys with AI-powered analysis, from team collaboration to the deepest qualitative themes.

Create your survey

Try it out. It's fun!

Sources

  1. OECD. Implementing the OECD Recommendation on Public Procurement in OECD and Partner Countries – 2024 survey.

  2. Financial Times. The UK's use of framework agreements and the risks to transparency in government spending.

  3. Financial Times. UK CMA trials AI to detect bid-rigging in procurement.

  4. OGP Portugal. Barometer and opinion data on public procurement transparency and corruption perceptions.


Adam Sabla

Adam Sabla is an entrepreneur with experience building startups that serve over 1M customers, including Disney, Netflix, and BBC, with a strong passion for automation.
