RFP / RFQ Sheet Responder
Read RFP questions from Google Sheets, research them across Slack and approved domains, synthesize submission-ready answers, and write them back to the sheet.
Overview
For: Sales, RevOps, and Solutions Engineering teams
Integrations: Google Sheets, Slack, and web research on approved domains
Outputs
• Draft responses
• Evidence-backed answers
• Updated response sheet
Benefits
• Reduces manual work for solutions and revenue teams.
• Improves consistency across submitted answers.
• Preserves traceability between questions, evidence, and output.
What It Solves
RFP response work is repetitive, coordination-heavy, and difficult to scale without losing source fidelity.
Workflow
1. Read RFP questions from the operating spreadsheet.
2. Gather supporting evidence from Slack and approved research sources.
3. Write structured answers back into the sheet for submission workflows.
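The three steps above can be sketched as a plain function pipeline. This is a hypothetical illustration only: `readQuestions`, `gatherEvidence`, `synthesize`, and `writeAnswers` stand in for the Google Sheets, Slack, and web-search integrations, which the actual plan file wires up through MCP tool sessions.

```javascript
// Hypothetical sketch of the three-step workflow. The real plan delegates
// each step to a tool-backed session; here the steps are injected as plain
// async functions so the control flow is easy to see in isolation.
async function runRfpPipeline({ readQuestions, gatherEvidence, synthesize, writeAnswers }) {
  // Step 1: read the ordered question list from the sheet.
  const questions = await readQuestions();

  // Step 2: research and answer each question, preserving row order.
  const answers = [];
  for (const question of questions) {
    const evidence = await gatherEvidence(question); // Slack + web research
    answers.push(await synthesize(question, evidence));
  }

  // Step 3: write one answer per question row back to the sheet.
  await writeAnswers(answers.map((a) => a.answer));
  return answers;
}
```

Because answers are pushed in the same order the questions were read, the write in step 3 stays row-aligned with the original question column.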
Implementation
Review the underlying plan definition, inspect the template when available, and see how the workflow is encoded for repeatable execution.
Plan Code
rfp-sheet-responder.yaml
# RFP / RFQ Sheet Responder
# Reads questions from Google Sheets, researches each one across Slack and
# approved web domains, synthesizes a submission-ready answer, and writes the
# answer column back to the sheet.
requiredTools:
  - name: slack
    type: mcp
  - name: google-sheets
    type: mcp
  - name: internet_search
    type: internet_search
parameters:
  - name: spreadsheetId
    schema:
      type: string
      description: "Google Sheets spreadsheet ID containing the RFP/RFQ questions"
  - name: questionsRange
    schema:
      type: string
      default: "Sheet1!A2:A100"
      description: "A1 notation range for the questions column, for example 'Sheet1!A2:A100'"
  - name: answersRange
    schema:
      type: string
      default: "Sheet1!B2:B100"
      description: "A1 notation range for the answers column. Must align row-for-row with questionsRange"
  - name: slackChannelIds
    schema:
      type: string
      description: "Comma-separated Slack channel IDs to research, for example 'C012AB3CD,C098ZY7WX'"
  - name: searchDomains
    schema:
      type: string
      description: "Comma-separated domains to search for supporting evidence, for example 'yourcompany.com,docs.yourcompany.com'"
  - name: lookBackDays
    schema:
      type: string
      default: "14"
      description: "Number of days of Slack history to retrieve per channel"
components:
  schemas:
    RfpAnswer:
      type: object
      required:
        - question
        - answer
        - sourceSummary
        - confidence
      properties:
        question:
          type: string
          description: "The original RFP/RFQ question text"
        answer:
          type: string
          description: "Professional submission-ready answer written in first-person plural"
        sourceSummary:
          type: string
          description: "One sentence describing which Slack and web sources informed the answer"
        confidence:
          type: string
          enum: [high, medium, low]
          description: "Confidence based on the quality and specificity of the evidence found"
preCalls:
  - name: computeSlackCutoff
    in: vars
    var: oldestTs
    args:
      days: "{{ .params.lookBackDays }}"
    code: |-
      const days = parseInt(args.days, 10);
      const safeDays = Number.isFinite(days) && days > 0 ? days : 14;
      Math.floor(Date.now() / 1000) - (safeDays * 86400);
  - name: normalizeSlackChannels
    in: vars
    var: slackChannels
    args:
      raw: "{{ .params.slackChannelIds }}"
    code: |-
      String(args.raw || "")
        .split(",")
        .map((value) => value.trim())
        .filter(Boolean);
  - name: normalizeSearchDomains
    in: vars
    var: searchDomainsList
    args:
      raw: "{{ .params.searchDomains }}"
    code: |-
      String(args.raw || "")
        .split(",")
        .map((value) => value.trim())
        .filter(Boolean);
sessions:
  read-rfp-questions:
    tools:
      - name: google-sheets
        type: mcp
    prePrompt: |-
      Use the google-sheets tool to read the spreadsheet with ID "{{ .params.spreadsheetId }}"
      and retrieve the range "{{ .params.questionsRange }}".
      Return the full ordered list of cell values exactly as they appear in the sheet.
      Each non-empty row represents one RFP/RFQ question.
    prompt: |-
      Convert the sheet output into an ordered array of question strings.
      Rules:
      - preserve original row order
      - drop empty rows
      - trim obvious leading/trailing whitespace
      - output only the array of questions
  crawl-slack-per-question:
    dependsOn:
      - session: read-rfp-questions
    context: "{{ .context.questions }}"
    iterateOn: "context.questions"
    tools:
      - name: slack
        type: mcp
    prePrompt: |-
      You are researching evidence for this RFP/RFQ question:
      "{{ .it }}"
      Use the Slack tool to inspect the following channel IDs:
      {{ json .vars.slackChannels }}
      Research window:
      - lookback days: {{ .params.lookBackDays }}
      - oldest timestamp: {{ .vars.oldestTs }}
      Retrieve messages that contain facts, claims, product details, customer outcomes,
      implementation notes, security language, or metrics relevant to this question.
      Include concrete evidence whenever available:
      - message text
      - approximate date
      - channel context
    prompt: |-
      Summarize the strongest Slack evidence for answering:
      "{{ .it }}"
      Requirements:
      - extract only facts or claims useful for the response
      - call out specific numbers, outcomes, or implementation details if present
      - note when evidence is partial or ambiguous
      - if no relevant Slack evidence was found, say that explicitly
  search-web-per-question:
    dependsOn:
      - session: read-rfp-questions
    context: "{{ .context.questions }}"
    iterateOn: "context.questions"
    tools:
      - name: internet_search
        type: internet_search
    prePrompt: |-
      You are researching evidence for this RFP/RFQ question:
      "{{ .it }}"
      Search only within these domains:
      {{ json .vars.searchDomainsList }}
      Run targeted searches that combine the question's concepts with
      site-specific queries where useful.
      Look for:
      - product capabilities
      - architecture or deployment details
      - security or compliance claims
      - case studies
      - documentation
      - support and implementation material
    prompt: |-
      Summarize the best supporting web evidence for answering:
      "{{ .it }}"
      Requirements:
      - surface concrete facts and supporting claims
      - include URLs or page references when available
      - distinguish strong evidence from weak or indirect evidence
      - if nothing relevant was found, say that explicitly
  synthesize-answer-per-question:
    dependsOn:
      - session: crawl-slack-per-question
      - session: search-web-per-question
    context: true
    iterateOn: "context.questions"
    prompt: |-
      You are writing a professional RFP/RFQ response on behalf of the company.
      The question is:
      "{{ .it }}"
      You have two evidence streams in context for this same question:
      1. Slack research from crawl-slack-per-question
      2. Web research from search-web-per-question
      Write a polished answer suitable for direct inclusion in an RFP/RFQ response.
      Answer rules:
      - write in first-person plural
      - answer directly and concisely
      - ground the answer in real evidence from the provided sources
      - if evidence is incomplete, be honest and avoid overclaiming
      - never fabricate metrics, certifications, or specifics
      Confidence rubric:
      - high: strong and specific evidence backs the answer
      - medium: partial but meaningful evidence supports the answer
      - low: weak evidence; answer is best-effort and carefully qualified
      Map the output exactly to the RfpAnswer schema:
      - question
      - answer
      - sourceSummary
      - confidence
  write-answers-to-sheet:
    dependsOn:
      - session: synthesize-answer-per-question
    context: true
    tools:
      - name: google-sheets
        type: mcp
    prePrompt: |-
      Use the google-sheets tool to write answer text back to the spreadsheet.
      Inputs:
      - Spreadsheet ID: {{ .params.spreadsheetId }}
      - Target range: {{ .params.answersRange }}
      The synthesized answers are available in context under "answers".
      Write only the "answer" field from each answer object.
      Preserve original question order and write as a single answer column.
      Use RAW value input mode if the tool supports it.
    prompt: |-
      Confirm the write result.
      Return:
      - rowsWritten
      - spreadsheetId
      - rangeWritten
      - errors
      If all writes succeeded, return an empty errors array.
schema:
  type: object
  properties:
    questions:
      x-session: read-rfp-questions
      type: array
      items:
        type: string
      description: "Ordered list of questions read from the sheet"
    slackEvidence:
      x-session: crawl-slack-per-question
      type: array
      items:
        type: string
      description: "Per-question Slack evidence summaries aligned to the original question order"
    webEvidence:
      x-session: search-web-per-question
      type: array
      items:
        type: string
      description: "Per-question web evidence summaries aligned to the original question order"
    answers:
      x-session: synthesize-answer-per-question
      type: array
      items:
        $ref: "#/components/schemas/RfpAnswer"
      description: "Synthesized RFP/RFQ answers in original row order"
    writeConfirmation:
      x-session: write-answers-to-sheet
      type: object
      properties:
        rowsWritten:
          type: integer
          description: "Number of answer rows successfully written"
        spreadsheetId:
          type: string
          description: "Spreadsheet that was updated"
        rangeWritten:
          type: string
          description: "The actual A1 range written"
        errors:
          type: array
          items:
            type: string
          description: "Row-level or sheet-level write failures"
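Because `answersRange` must align row-for-row with `questionsRange`, a small guard like the one below can catch misconfigured parameters before anything is written back. This is a simplified sketch, not part of the plan file: it handles single-column A1 ranges such as `Sheet1!A2:A100`, and ignores open-ended or multi-column ranges.

```javascript
// Simplified A1-notation check: verifies two single-column ranges cover the
// same rows, so answers written to answersRange line up with questionsRange.
// Open-ended ranges (e.g. "A2:A") and multi-column ranges are out of scope.
function rangesAlign(questionsRange, answersRange) {
  const parse = (range) => {
    // Optional "Sheet!" prefix, then e.g. "A2:A100".
    const match = /^(?:[^!]+!)?([A-Z]+)(\d+):([A-Z]+)(\d+)$/.exec(range);
    if (!match || match[1] !== match[3]) return null; // not a single column
    return { startRow: Number(match[2]), endRow: Number(match[4]) };
  };
  const q = parse(questionsRange);
  const a = parse(answersRange);
  return !!q && !!a && q.startRow === a.startRow && q.endRow === a.endRow;
}
```

With the plan's defaults, `rangesAlign("Sheet1!A2:A100", "Sheet1!B2:B100")` passes: different columns, identical rows.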