Overview

JobHive’s bulk operations let you manage high-volume hiring efficiently, from startup scaling to enterprise-level recruitment campaigns. Process hundreds of candidates at once while keeping quality and the candidate experience consistent.

Bulk Interview Creation

Create multiple interviews in a single API call with optimized processing

Parallel Processing

Handle thousands of concurrent interviews with automatic load balancing

Batch Results Export

Export comprehensive results for multiple interviews in various formats

Smart Rate Limiting

Intelligent request batching to maximize throughput within rate limits

Bulk Interview Creation

Single API Call for Multiple Interviews

Create up to 100 interviews per request with the bulk endpoint:
curl -X POST "https://backend.jobhive.ai/v1/interviews/bulk" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "interviews": [
      {
        "candidate_email": "[email protected]",
        "position": "Frontend Developer",
        "skills": ["React", "TypeScript", "CSS"]
      },
      {
        "candidate_email": "[email protected]", 
        "position": "Backend Developer",
        "skills": ["Node.js", "PostgreSQL", "Docker"]
      },
      {
        "candidate_email": "[email protected]",
        "position": "Full Stack Developer", 
        "skills": ["React", "Node.js", "MongoDB"]
      }
    ],
    "defaults": {
      "duration_minutes": 45,
      "difficulty": "intermediate",
      "company_name": "TechCorp Inc",
      "send_invitation": true
    }
  }'

Bulk Response Format

The bulk endpoint returns detailed success and failure information:
{
  "success": true,
  "data": {
    "successful": [
      {
        "index": 0,
        "interview": {
          "id": "int_abc123def456",
          "candidate_email": "[email protected]", 
          "interview_url": "https://app.jobhive.ai/interview/int_abc123def456",
          "status": "scheduled"
        }
      },
      {
        "index": 2,
        "interview": {
          "id": "int_ghi789jkl012",
          "candidate_email": "[email protected]",
          "interview_url": "https://app.jobhive.ai/interview/int_ghi789jkl012", 
          "status": "scheduled"
        }
      }
    ],
    "failed": [
      {
        "index": 1,
        "candidate_email": "[email protected]",
        "error": {
          "code": "INVALID_EMAIL",
          "message": "Email format is invalid"
        }
      }
    ],
    "summary": {
      "total_requested": 3,
      "successful_count": 2,
      "failed_count": 1,
      "success_rate": 0.67
    }
  }
}
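
A bulk call can succeed overall while individual entries fail, so always inspect both the successful and failed arrays rather than the top-level success flag alone. A minimal handling sketch, assuming a single-interview endpoint at /v1/interviews for individual retries:
async function handleBulkResponse(bulkResponse, originalInterviews) {
  const { successful, failed, summary } = bulkResponse.data;
  console.log(`Created ${summary.successful_count}/${summary.total_requested} interviews`);

  for (const failure of failed) {
    console.warn(`Index ${failure.index} failed: ${failure.error.code}: ${failure.error.message}`);

    // Validation errors need corrected input before retrying; other failures
    // can be retried individually (the single-interview path is an assumption)
    if (failure.error.code !== 'INVALID_EMAIL') {
      await fetch('https://backend.jobhive.ai/v1/interviews', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.JOBHIVE_API_KEY}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(originalInterviews[failure.index])
      });
    }
  }

  return successful.map(entry => entry.interview);
}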

Advanced Bulk Patterns

CSV/Excel Import Processing

Process candidate lists from spreadsheet uploads:
const csv = require('csv-parser');
const fs = require('fs');

async function processCSVFile(filePath) {
  const candidates = [];
  
  return new Promise((resolve, reject) => {
    fs.createReadStream(filePath)
      .pipe(csv())
      .on('data', (row) => {
        // Transform CSV row to interview format
        candidates.push({
          candidate_email: row.email,
          position: row.position,
          skills: row.skills.split(',').map(s => s.trim()),
          // Add custom fields from CSV
          experience_level: row.experience,
          preferred_start_date: row.start_date
        });
      })
      .on('end', async () => {
        try {
          // Process in batches of 50
          const batches = chunkArray(candidates, 50);
          const results = [];
          
          for (const batch of batches) {
            const response = await createBulkInterviews(batch);
            results.push(response);
            
            // Rate limiting delay
            await new Promise(resolve => setTimeout(resolve, 1000));
          }
          
          resolve(results);
        } catch (error) {
          reject(error);
        }
      })
      .on('error', reject);
  });
}

function chunkArray(array, chunkSize) {
  const chunks = [];
  for (let i = 0; i < array.length; i += chunkSize) {
    chunks.push(array.slice(i, i + chunkSize));
  }
  return chunks;
}

async function createBulkInterviews(candidates) {
  const response = await fetch('https://backend.jobhive.ai/v1/interviews/bulk', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.JOBHIVE_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      interviews: candidates,
      defaults: {
        duration_minutes: 45,
        difficulty: 'intermediate',
        send_invitation: true
      }
    })
  });
  
  return response.json();
}

// Usage
processCSVFile('./candidates.csv')
  .then(results => {
    const totalSuccess = results.reduce((sum, batch) => 
      sum + batch.data.successful.length, 0);
    console.log(`Successfully created ${totalSuccess} interviews`);
  })
  .catch(console.error);

Bulk Results Processing

Efficiently retrieve and process results from multiple completed interviews:
async function getBulkResults(interviewIds, includeTranscripts = false) {
  const batchSize = 20; // API limit for bulk results
  const results = [];
  
  for (let i = 0; i < interviewIds.length; i += batchSize) {
    const batch = interviewIds.slice(i, i + batchSize);
    
    try {
      const response = await fetch('https://backend.jobhive.ai/v1/interviews/bulk-results', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.JOBHIVE_API_KEY}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          interview_ids: batch,
          include_results: true,
          include_transcripts: includeTranscripts
        })
      });
      
      const batchResults = await response.json();
      results.push(...batchResults.data);
      
      // Rate limiting
      if (i + batchSize < interviewIds.length) {
        await new Promise(resolve => setTimeout(resolve, 1000));
      }
      
    } catch (error) {
      console.error(`Error processing batch ${i/batchSize + 1}:`, error);
    }
  }
  
  return results;
}

async function generateHiringReport(interviewIds) {
  const interviews = await getBulkResults(interviewIds);
  
  const report = {
    total_interviews: interviews.length,
    completed: interviews.filter(i => i.status === 'completed').length,
    average_score: 0,
    recommendations: {
      hire: 0,
      maybe: 0, 
      no_hire: 0
    },
    skill_analysis: {},
    position_breakdown: {}
  };
  
  const completedInterviews = interviews.filter(i => i.status === 'completed' && i.results);
  
  if (completedInterviews.length > 0) {
    // Calculate average score
    report.average_score = completedInterviews.reduce((sum, i) => 
      sum + i.results.overall_score, 0) / completedInterviews.length;
    
    // Count recommendations
    completedInterviews.forEach(interview => {
      report.recommendations[interview.results.recommendation]++;
      
      // Analyze by position
      const position = interview.position;
      if (!report.position_breakdown[position]) {
        report.position_breakdown[position] = { count: 0, avg_score: 0, total_score: 0 };
      }
      report.position_breakdown[position].count++;
      report.position_breakdown[position].total_score += interview.results.overall_score;
    });
    
    // Calculate position averages
    Object.keys(report.position_breakdown).forEach(position => {
      const data = report.position_breakdown[position];
      data.avg_score = data.total_score / data.count;
      delete data.total_score;
    });
  }
  
  return report;
}

// Usage
const interviewIds = ['int_abc123', 'int_def456', 'int_ghi789']; // ... more IDs
generateHiringReport(interviewIds)
  .then(report => {
    console.log('📊 Hiring Report:', JSON.stringify(report, null, 2));
  })
  .catch(console.error);
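
The Batch Results Export described in the overview can be as simple as flattening these result objects into a spreadsheet. A minimal CSV export sketch, assuming each result object carries the fields referenced in generateHiringReport above:
const fs = require('fs');

function exportResultsToCSV(interviews, outputPath) {
  const header = 'interview_id,candidate_email,position,status,overall_score,recommendation';

  const rows = interviews.map(i => [
    i.id,
    i.candidate_email,
    // Quote the position in case it contains commas
    `"${i.position}"`,
    i.status,
    i.results ? i.results.overall_score : '',
    i.results ? i.results.recommendation : ''
  ].join(','));

  fs.writeFileSync(outputPath, [header, ...rows].join('\n'));
  console.log(`Exported ${rows.length} rows to ${outputPath}`);
}

// Usage
getBulkResults(interviewIds)
  .then(interviews => exportResultsToCSV(interviews, './hiring-results.csv'))
  .catch(console.error);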

Performance Optimization

Parallel Processing Strategies

JavaScript Promise.all Pattern

async function createInterviewsParallel(candidates) {
  const MAX_CONCURRENT = 5;
  const results = [];
  
  for (let i = 0; i < candidates.length; i += MAX_CONCURRENT) {
    const batch = candidates.slice(i, i + MAX_CONCURRENT);
    
    const promises = batch.map(candidate => 
      createSingleInterview(candidate)
        .catch(error => ({ error, candidate }))
    );
    
    const batchResults = await Promise.all(promises);
    results.push(...batchResults);
  }
  
  return results;
}

Python ThreadPoolExecutor

from concurrent.futures import ThreadPoolExecutor, as_completed

def create_interviews_parallel(candidates, max_workers=5):
    results = []
    
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_to_candidate = {
            executor.submit(create_single_interview, candidate): candidate 
            for candidate in candidates
        }
        
        for future in as_completed(future_to_candidate):
            candidate = future_to_candidate[future]
            try:
                result = future.result()
                results.append(result)
            except Exception as e:
                results.append({'error': str(e), 'candidate': candidate})
                
    return results

Adaptive Delay Strategy

class RateLimitedClient {
  constructor(apiKey, requestsPerMinute = 300) {
    this.apiKey = apiKey;
    this.requestsPerMinute = requestsPerMinute;
    this.requestTimes = [];
  }
  
  async makeRequest(url, options) {
    await this.enforceRateLimit();
    
    const response = await fetch(url, {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        ...options.headers
      }
    });
    
    this.requestTimes.push(Date.now());
    
    if (response.status === 429) {
      // Fall back to a 1-second wait if the header is missing
      const retryAfter = parseInt(response.headers.get('X-RateLimit-Retry-After') || '1', 10);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      return this.makeRequest(url, options);
    }
    
    return response;
  }
  
  async enforceRateLimit() {
    const now = Date.now();
    const oneMinuteAgo = now - 60000;
    
    // Remove old requests
    this.requestTimes = this.requestTimes.filter(time => time > oneMinuteAgo);
    
    if (this.requestTimes.length >= this.requestsPerMinute) {
      const oldestRequest = this.requestTimes[0];
      const waitTime = 60000 - (now - oldestRequest);
      
      if (waitTime > 0) {
        await new Promise(resolve => setTimeout(resolve, waitTime));
      }
    }
  }
}

Dynamic Batch Sizing

import time

class OptimalBatchProcessor:
    def __init__(self, api_key):
        self.api_key = api_key
        self.optimal_batch_size = 50
        self.performance_history = []
    
    def process_candidates(self, candidates):
        total_processed = 0
        start_time = time.time()
        
        while total_processed < len(candidates):
            batch_start = total_processed
            batch_end = min(total_processed + self.optimal_batch_size, len(candidates))
            batch = candidates[batch_start:batch_end]
            
            batch_start_time = time.time()
            result = self.create_bulk_interviews(batch)
            batch_duration = time.time() - batch_start_time
            
            # Track performance
            self.performance_history.append({
                'batch_size': len(batch),
                'duration': batch_duration,
                'success_rate': result['data']['successful_count'] / len(batch)
            })
            
            # Adjust batch size based on performance
            self.adjust_batch_size()
            
            total_processed = batch_end
        
        total_duration = time.time() - start_time
        return {
            'total_processed': total_processed,
            'duration': total_duration,
            'throughput': total_processed / total_duration
        }
    
    def adjust_batch_size(self):
        if len(self.performance_history) < 3:
            return
        
        recent_performance = self.performance_history[-3:]
        avg_duration = sum(p['duration'] for p in recent_performance) / 3
        avg_success_rate = sum(p['success_rate'] for p in recent_performance) / 3
        
        # Increase batch size if performing well
        if avg_duration < 5 and avg_success_rate > 0.95:
            self.optimal_batch_size = min(self.optimal_batch_size + 10, 100)
        # Decrease batch size if struggling
        elif avg_duration > 15 or avg_success_rate < 0.8:
            self.optimal_batch_size = max(self.optimal_batch_size - 10, 10)

Monitoring & Observability

Bulk Operation Metrics

Track the performance and success of your bulk operations:
class BulkOperationMetrics {
  constructor() {
    this.metrics = {
      total_requests: 0,
      successful_interviews: 0,
      failed_interviews: 0,
      average_batch_time: 0,
      error_rates: {},
      throughput_per_minute: 0
    };
    this.start_time = Date.now();
  }
  
  recordBatchResult(batchSize, duration, result) {
    this.metrics.total_requests += batchSize;
    this.metrics.successful_interviews += result.data.successful_count;
    this.metrics.failed_interviews += result.data.failed_count;
    
    // Update running average batch time across all recorded batches
    this.batch_count = (this.batch_count || 0) + 1;
    this.metrics.average_batch_time +=
      (duration - this.metrics.average_batch_time) / this.batch_count;
    
    // Track error patterns
    result.data.failed.forEach(failure => {
      const errorCode = failure.error.code;
      this.metrics.error_rates[errorCode] = 
        (this.metrics.error_rates[errorCode] || 0) + 1;
    });
    
    // Calculate throughput
    const elapsed_minutes = (Date.now() - this.start_time) / 60000;
    this.metrics.throughput_per_minute = 
      this.metrics.successful_interviews / elapsed_minutes;
  }
  
  getReport() {
    const success_rate = this.metrics.successful_interviews / 
      (this.metrics.successful_interviews + this.metrics.failed_interviews);
    
    return {
      ...this.metrics,
      success_rate: success_rate,
      total_runtime_minutes: (Date.now() - this.start_time) / 60000
    };
  }
}

// Usage
const metrics = new BulkOperationMetrics();

async function processCandidatesWithMetrics(candidates) {
  const batches = chunkArray(candidates, 50);
  
  for (const batch of batches) {
    const start = Date.now();
    const result = await createBulkInterviews(batch);
    const duration = Date.now() - start;
    
    metrics.recordBatchResult(batch.length, duration, result);
    
    console.log(`Batch completed: ${result.data.successful_count}/${batch.length} successful`);
  }
  
  const report = metrics.getReport();
  console.log('📊 Final Metrics:', report);
}

Best Practices

Optimization Guidelines

Batch Size Strategy

Recommended Sizes
  • Start with 50 interviews per batch
  • Increase to 100 for stable operations
  • Reduce to 25 if seeing high error rates
  • Monitor performance and adjust dynamically

Error Handling

Resilience Patterns
  • Retry failed interviews individually
  • Log all failures for manual review
  • Implement exponential backoff (see the sketch after this list)
  • Set maximum retry limits
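
A minimal sketch of the backoff pattern, wrapping any of the fetch-based helpers above (failedCandidates stands in for whatever you collected from a failed array):
async function withRetries(fn, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      const delay = 1000 * 2 ** attempt;
      console.warn(`Attempt ${attempt + 1} failed, retrying in ${delay}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage: retry failed interviews individually, with a hard retry cap
withRetries(() => createBulkInterviews(failedCandidates))
  .catch(error => console.error('Giving up after max retries:', error));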

Rate Limiting

Respect Limits
  • Stay under 80% of rate limit
  • Implement request queuing
  • Use intelligent delays between batches
  • Monitor rate limit headers (see the sketch below)
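
One way to stay under 80% is to check the remaining-quota headers on every response and pause when the budget runs low. A sketch assuming conventional X-RateLimit-Limit and X-RateLimit-Remaining headers (only X-RateLimit-Retry-After appears elsewhere in this guide, so verify the exact names against real responses):
async function throttledFetch(url, options) {
  const response = await fetch(url, options);

  // Header names assumed; confirm them against your actual API responses
  const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '0', 10);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10);

  // Pause once less than 20% of the quota remains (the 80% guideline)
  if (limit > 0 && remaining / limit < 0.2) {
    console.warn(`Rate limit budget low (${remaining}/${limit}), pausing 5s`);
    await new Promise(resolve => setTimeout(resolve, 5000));
  }

  return response;
}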

Data Validation

Quality Assurance
  • Validate email formats before API calls (see the sketch after this list)
  • Check required fields completeness
  • Remove duplicates from candidate lists
  • Sanitize skill inputs
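
A minimal pre-flight validation pass covering all four checks:
function validateCandidates(candidates) {
  const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  const seen = new Set();
  const valid = [];
  const rejected = [];

  for (const candidate of candidates) {
    if (!emailPattern.test(candidate.candidate_email || '')) {
      rejected.push({ candidate, reason: 'invalid_email' });
    } else if (seen.has(candidate.candidate_email)) {
      rejected.push({ candidate, reason: 'duplicate_email' });
    } else if (!candidate.position || !candidate.skills?.length) {
      rejected.push({ candidate, reason: 'missing_required_fields' });
    } else {
      seen.add(candidate.candidate_email);
      // Sanitize skills: trim whitespace and drop empty entries
      valid.push({
        ...candidate,
        skills: candidate.skills.map(s => String(s).trim()).filter(Boolean)
      });
    }
  }

  return { valid, rejected };
}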

Common Pitfalls to Avoid

Anti-Patterns
  • Don’t send all interviews in a single massive request
  • Don’t ignore rate limit headers and retry immediately
  • Don’t assume all interviews will succeed
  • Don’t forget to handle partial failures gracefully

Pro Tips
  • Use webhooks for real-time updates instead of polling (see the sketch below)
  • Process results asynchronously to avoid blocking operations
  • Implement comprehensive logging for debugging
  • Test with small batches before scaling up
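
For the webhook tip, a minimal Express receiver (the event payload shape shown is an assumption; check the webhooks documentation for the actual schema):
const express = require('express');
const app = express();

app.post('/webhooks/jobhive', express.json(), (req, res) => {
  // Acknowledge quickly; do heavy processing asynchronously
  res.sendStatus(200);

  const event = req.body; // payload shape assumed
  if (event.type === 'interview.completed') {
    setImmediate(() => {
      console.log(`Interview ${event.interview_id} completed`);
      // e.g. fetch full results via the bulk-results endpoint shown above
    });
  }
});

app.listen(3000);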

Next Steps