One Tool, Three Platforms: Multi-Platform Bug Bounty Integration
How I built unified integration for HackerOne, Intigriti, and Bugcrowd with platform-specific formatters and a shared findings model. Part 4 of 5.

I found the same vulnerability on three different programs. Same IDOR pattern, same impact, same proof-of-concept.
Wrote three completely different reports. HackerOne wanted structured sections with their severity dropdown. Intigriti expected different field names and inline severity justification. Bugcrowd had its own template that matched neither.
That specific tedium—of reformatting the same finding three times—is exactly what automation should eliminate.
Multi-platform bug bounty integration requires a unified internal findings model that transforms to platform-specific formats at submission time. Store vulnerabilities once in a canonical structure with all possible fields. When submitting, platform formatters extract relevant data and restructure it for HackerOne, Intigriti, or Bugcrowd’s expected format. One truth, three presentations.
Why Not Just Use Each Platform’s API Directly?
Direct API integration seems simpler at first:
// The naive approach
if (platform === 'hackerone') {
await hackeroneAPI.submitReport(finding);
} else if (platform === 'intigriti') {
await intigritiAPI.submitReport(finding);
} else if (platform === 'bugcrowd') {
await bugcrowdAPI.submitReport(finding);
}
But then every piece of code needs platform awareness. Testing agents need to know which platform. Validation needs platform context. Storage needs platform-specific schemas.
The complexity explodes.
Instead, I built a unified findings model at the core. Every agent works with this model. Platform awareness only exists at two boundaries:
- Ingestion: When pulling program scope from platforms
- Submission: When sending reports to platforms
Everything between is platform-agnostic.
In part 1, I described the 4-tier agent architecture. The Reporter Agent handles submission—it’s the only agent that knows about platform differences.
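To make that boundary concrete, here's a minimal sketch of what the Reporter Agent sees per platform. The names (PlatformAdapter, ProgramScope, ingestScope) are illustrative, not the tool's actual interfaces; Finding is the unified model shown in the next section.
// Illustrative sketch: the only two places platform awareness lives.
// PlatformAdapter, ProgramScope, and the method names are hypothetical.
interface ProgramScope {
  programId: string;
  inScopeAssets: string[];
  outOfScopeAssets: string[];
}

interface PlatformAdapter {
  // Ingestion boundary: pull program scope from the platform
  ingestScope(programId: string): Promise<ProgramScope>;
  // Submission boundary: push a unified Finding out as a platform-specific report
  submitReport(finding: Finding): Promise<{ externalId: string }>;
}
// Every agent in between works only with Finding and ProgramScope.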
What Does the Unified Findings Model Look Like?
A finding has everything any platform might need:
interface Finding {
// Core identification
id: string;
sessionId: string;
targetAssetId: string;
// Vulnerability details
title: string;
description: string; // Markdown supported
vulnerabilityType: VulnType; // XSS, IDOR, SQLi, etc.
// Severity
cvssVector: string; // Full CVSS v3.1 vector
cvssScore: number; // Calculated from vector
severity: 'critical' | 'high' | 'medium' | 'low' | 'informational';
// Proof
poc: {
steps: string[]; // Reproduction steps
curl?: string; // Raw curl command
script?: string; // Python/JS script
};
// Evidence
evidence: {
screenshots: string[]; // File paths or base64
requestResponse: string[]; // HTTP exchanges
hashes: string[]; // SHA-256 for authenticity
};
// Metadata
confidence: number; // 0.0 - 1.0
status: FindingStatus; // new, validating, reviewed, submitted
createdAt: Date;
platform?: string; // Set at submission time
externalId?: string; // Platform's report ID after submission
}
This model captures everything. Not every field is used by every platform—but all fields are available for any platform that needs them.
[!NOTE] The CVSS vector is stored as a string (e.g., CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N). The score is calculated from this vector. Storing both allows quick sorting by score while preserving the detailed breakdown.
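The severity bucket in the model follows directly from that score. A minimal sketch using the standard CVSS v3.1 qualitative rating scale (mapping a 0.0 score to 'informational' is my assumption about how the model labels it):
// Standard CVSS v3.1 qualitative rating boundaries
function severityFromScore(score: number): Finding['severity'] {
  if (score >= 9.0) return 'critical';
  if (score >= 7.0) return 'high';
  if (score >= 4.0) return 'medium';
  if (score >= 0.1) return 'low';
  return 'informational'; // CVSS calls 0.0 "None"; the model uses 'informational'
}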
How Do Platform-Specific Formatters Work?
Each platform (HackerOne, Intigriti, Bugcrowd) has its own formatter that transforms the unified model; here's the shared interface and the HackerOne implementation:
// Simplified formatter pattern
interface PlatformFormatter {
format(finding: Finding): PlatformReport;
validate(report: PlatformReport): ValidationResult;
submit(report: PlatformReport): Promise<SubmissionResult>;
}
class HackerOneFormatter implements PlatformFormatter {
format(finding: Finding): HackerOneReport {
return {
data: {
type: 'report',
attributes: {
title: finding.title,
vulnerability_information: finding.description,
severity_rating: this.mapSeverity(finding.severity),
weakness_id: this.mapVulnType(finding.vulnerabilityType),
impact: this.generateImpactStatement(finding),
// ... platform-specific fields
}
}
};
}
}
I originally had one giant switch statement for formatting. I told myself "just add another case" was sustainable. By platform #3, the function was 400 lines. Separate formatters saved my sanity.
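What replaced the switch is essentially a registry keyed by platform name. A minimal sketch (IntigritiFormatter and BugcrowdFormatter are implied above but not shown, and the ok field on ValidationResult is an assumption):
// Sketch: one formatter instance per platform, looked up by name
const formatters: Record<string, PlatformFormatter> = {
  hackerone: new HackerOneFormatter(),
  intigriti: new IntigritiFormatter(),
  bugcrowd: new BugcrowdFormatter(),
};

async function submitViaFormatter(finding: Finding, platform: string): Promise<SubmissionResult> {
  const formatter = formatters[platform];
  if (!formatter) throw new Error(`No formatter registered for ${platform}`);

  const report = formatter.format(finding);   // unified model -> platform shape
  const result = formatter.validate(report);  // platform-specific sanity checks
  if (!result.ok) throw new Error(`Report failed validation for ${platform}`);

  return formatter.submit(report);
}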
How Does the Budget Manager Prevent Rate Limiting?
Each platform has different API limits:
- HackerOne: X requests per minute
- Intigriti: Different limits, different reset windows
- Bugcrowd: Yet another set of constraints
The Budget Manager tracks all of them:
class BudgetManager {
private budgets: Map<string, PlatformBudget>;
canRead(platform: string): boolean {
const budget = this.budgets.get(platform);
return budget.read.remaining > 0;
}
canWrite(platform: string): boolean {
const budget = this.budgets.get(platform);
return budget.write.remaining > 0;
}
consumeRead(platform: string): void {
const budget = this.budgets.get(platform);
budget.read.remaining--;
this.scheduleRefill(platform, 'read');
}
async waitForBudget(platform: string, type: 'read' | 'write'): Promise<void> {
// Poll until the relevant bucket has budget again
while (!(type === 'read' ? this.canRead(platform) : this.canWrite(platform))) {
await sleep(1000);
}
}
}
Before any API call, agents check with the Budget Manager:
async function fetchProgramScope(platform: string, programId: string) {
await budgetManager.waitForBudget(platform, 'read');
budgetManager.consumeRead(platform);
return await platformAPI.getProgram(programId);
}
This connects to failure-driven learning in part 3. When rate limits hit despite budget management, the failure detector adjusts budget estimates downward.
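A minimal sketch of that adjustment, assuming each bucket tracks a limit alongside remaining and that a 20% back-off is acceptable (both are assumptions, not the tool's exact policy):
interface RateBucket { limit: number; remaining: number; }
interface PlatformBudget { read: RateBucket; write: RateBucket; }

// Sketch: when a 429 slips through anyway, shrink the assumed limit
// so the next refill window is more conservative.
function adjustBudgetOnRateLimit(budget: PlatformBudget, type: 'read' | 'write'): void {
  const bucket = type === 'read' ? budget.read : budget.write;
  bucket.limit = Math.max(1, Math.floor(bucket.limit * 0.8)); // back off by 20%
  bucket.remaining = 0;                                       // pause until the next refill
}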
What Is CVSS v3.1 and Why Calculate It Myself?
CVSS (Common Vulnerability Scoring System) is the industry standard for severity. Version 3.1 uses 8 metrics:
| Metric | What It Measures |
|---|---|
| Attack Vector (AV) | Network, Adjacent, Local, Physical |
| Attack Complexity (AC) | Low, High |
| Privileges Required (PR) | None, Low, High |
| User Interaction (UI) | None, Required |
| Scope (S) | Unchanged, Changed |
| Confidentiality (C) | None, Low, High |
| Integrity (I) | None, Low, High |
| Availability (A) | None, Low, High |
I calculate CVSS myself rather than trusting platform defaults because:
- Consistency: Same vulnerability scored the same across platforms
- Credibility: Detailed CVSS breakdown shows I understand the impact
- Accuracy: Platform auto-scoring often uses simplified heuristics
function calculateCVSS(metrics: CVSSMetrics): { score: number; vector: string } {
// Implement CVSS v3.1 formula
const iss = 1 - ((1 - metrics.C) * (1 - metrics.I) * (1 - metrics.A));
const impact = metrics.S === 'unchanged'
? 6.42 * iss
: 7.52 * (iss - 0.029) - 3.25 * Math.pow(iss - 0.02, 15);
const exploitability = 8.22 * metrics.AV * metrics.AC * metrics.PR * metrics.UI;
// Scope Changed findings are scaled by 1.08; temporal/environmental metrics are still omitted
const scaled = (metrics.S === 'unchanged' ? 1 : 1.08) * (impact + exploitability);
const score = impact <= 0 ? 0 : roundUp(Math.min(scaled, 10));
return { score, vector: buildVectorString(metrics) };
}
[!TIP] Always include the CVSS vector string in reports, not just the score. Triage teams can verify your scoring methodology. "9.8 Critical" is less convincing than "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", which they can validate independently.
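The numeric metric values that calculateCVSS multiplies together come from fixed weights in the CVSS v3.1 specification. A lookup sketch (Privileges Required uses 0.68/0.50 for Low/High when Scope is Changed, omitted here for brevity):
// CVSS v3.1 base metric weights
const CVSS_WEIGHTS = {
  AV: { N: 0.85, A: 0.62, L: 0.55, P: 0.2 },  // Attack Vector
  AC: { L: 0.77, H: 0.44 },                    // Attack Complexity
  PR: { N: 0.85, L: 0.62, H: 0.27 },           // Privileges Required (Scope Unchanged)
  UI: { N: 0.85, R: 0.62 },                    // User Interaction
  CIA: { H: 0.56, L: 0.22, N: 0.0 },           // Confidentiality / Integrity / Availability
} as const;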
How Does First-Mover Priority Work?
New programs are gold. Less competition. More low-hanging fruit. Higher acceptance rates for initial reports.
My system detects new programs and prioritizes them:
async function checkNewPrograms(): Promise<NewProgram[]> {
const newPrograms: NewProgram[] = [];
for (const platform of ['hackerone', 'intigriti', 'bugcrowd']) {
const recent = await platformAPI.getRecentPrograms(platform, { hours: 24 });
const unknown = recent.filter(p => !db.hasProgram(p.id));
newPrograms.push(...unknown);
}
return newPrograms.sort((a, b) => b.freshness - a.freshness);
}
When new programs are detected:
- Immediate passive recon
- Scope clarification delay
- Active testing begins
- Report queuing
The freshness score decreases over time. A program launched 1 hour ago gets higher priority than one launched 20 hours ago.
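A minimal sketch of that decay; the linear shape and the 24-hour window matching the polling interval are assumptions:
// Sketch: freshness decays linearly from 1.0 at launch to 0.0 after 24 hours
function freshnessScore(launchedAt: Date, now: Date = new Date()): number {
  const hoursSinceLaunch = (now.getTime() - launchedAt.getTime()) / 3_600_000;
  return Math.max(0, 1 - hoursSinceLaunch / 24);
}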
What Are the Platform Authentication Differences?
Each platform authenticates differently:
HackerOne: Basic auth with username + API token
const credentials = btoa(username + ':' + apiToken);
const headers = {
'Authorization': 'Basic ' + credentials,
'Content-Type': 'application/json'
};
Intigriti: Different OAuth-style flow with refresh tokens
const headers = {
'Authorization': 'Bearer ' + accessToken,
'X-API-Key': apiKey
};
Bugcrowd: Yet another structure with API key in header
const headers = {
'Authorization': 'Token ' + token,
'Content-Type': 'application/vnd.bugcrowd+json'
};
The credential manager stores these separately and handles refresh for each:
class CredentialManager {
async getCredentials(platform: string): Promise<Credentials> {
const creds = await this.loadFromSecureStorage(platform);
if (this.needsRefresh(creds)) {
return await this.refresh(platform, creds);
}
return creds;
}
}
This connects to auth error recovery in part 3. When auth fails, the system attempts credential refresh before escalating.
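A minimal sketch of that recovery path, assuming refresh is callable from outside CredentialManager and that isAuthError is a hypothetical helper that checks for a 401/403 response:
// Sketch: retry once with refreshed credentials on an auth failure, then escalate
async function withAuthRetry<T>(
  platform: string,
  call: (creds: Credentials) => Promise<T>
): Promise<T> {
  const creds = await credentialManager.getCredentials(platform);
  try {
    return await call(creds);
  } catch (err) {
    if (!isAuthError(err)) throw err;                                 // not an auth problem, escalate as-is
    const refreshed = await credentialManager.refresh(platform, creds); // force a refresh
    return await call(refreshed);                                     // one retry, then let it escalate
  }
}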
How Does the Unified Model Handle Platform-Specific Fields?
Some platforms have unique requirements not covered by the base model.
Solution: extensible metadata
interface Finding {
// ... standard fields ...
platformMetadata?: {
hackerone?: {
weakness_id?: string; // HackerOne's weakness taxonomy ID
structured_scope_id?: string;
};
intigriti?: {
submission_type?: string; // Intigriti-specific field
};
bugcrowd?: {
bounty_table_entry?: string; // Bugcrowd payout tier
};
};
}
Formatters check for platform-specific metadata and use it if present. Otherwise, they derive the needed values from standard fields.
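Inside the HackerOne formatter, for example, that fallback is a one-liner. A sketch (mapVulnType is the same helper referenced in the formatter above):
// Sketch: prefer an explicit platform override, otherwise derive from standard fields
function resolveWeaknessId(finding: Finding, mapVulnType: (t: VulnType) => string): string {
  return (
    finding.platformMetadata?.hackerone?.weakness_id // explicit HackerOne taxonomy ID, if provided
    ?? mapVulnType(finding.vulnerabilityType)        // otherwise derived from the generic vuln type
  );
}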
What’s the Report Submission Flow?
From validated finding to platform submission:
Validated Finding (0.85+ confidence)
↓
Human Review Queue
↓
[Human approves]
↓
Formatter transforms to platform format
↓
Budget Manager confirms API availability
↓
Platform API submission
↓
External ID captured
↓
Status → 'submitted'
All submissions go through human-in-the-loop review (part 5). Automation prepares; humans decide.
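Wired together, that flow is a short pipeline. A sketch that assumes the formatter registry and Budget Manager shown earlier, a consumeWrite counterpart to consumeRead, an externalId field on SubmissionResult, and a hypothetical reviewQueue.waitForApproval standing in for the human gate from part 5:
// Sketch of the end-to-end submission flow for one finding
async function submitApprovedFinding(finding: Finding, platform: string): Promise<void> {
  if (finding.confidence < 0.85) return;             // only validated findings reach this point

  await reviewQueue.waitForApproval(finding.id);     // human approves before anything is sent

  const formatter = formatters[platform];
  const report = formatter.format(finding);          // unified model -> platform format

  await budgetManager.waitForBudget(platform, 'write');
  budgetManager.consumeWrite(platform);              // assumed counterpart to consumeRead

  const result = await formatter.submit(report);     // platform API submission
  finding.platform = platform;
  finding.externalId = result.externalId;            // capture the platform's report ID
  finding.status = 'submitted' as FindingStatus;     // FindingStatus shape is assumed
}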
Where Does This Series Go Next?
This is part 4 of a 5-part series on building bug bounty automation:
- Architecture & Multi-Agent Design
- From Detection to Proof: Validation & False Positives
- Failure-Driven Learning: Auto-Recovery Patterns
- One Tool, Three Platforms: Multi-Platform Integration (you are here)
- Human-in-the-Loop: The Ethics of Security Automation
Next up: why humans still make every submission decision, and how mandatory review gates protect researcher reputation.
Maybe platform differences aren’t obstacles. Maybe they’re forcing functions—requiring a cleaner internal model that happens to work anywhere, because it had to.