API Performance Optimization: Sub-200ms Automotive Data at Scale

Serving 50 million monthly API requests for automotive data requires ruthless optimization. Our production API achieves p95 response times of 185ms while handling 5.8M daily requests, a 92% improvement over the initial 2.3-second average.
This technical deep-dive reveals the exact optimizations, architecture decisions, and code patterns that transformed a slow prototype into an enterprise-grade automotive data API.
Before Optimization:
- p95 response time: 2,300ms
- Database queries: 8-12 per request
- Cache hit rate: 12%
- Infrastructure cost: $8,200/month
After Optimization:
- p95 response time: 185ms (92% improvement)
- Database queries: 0.8 per request (90% reduction)
- Cache hit rate: 94%
- Infrastructure cost: $1,800/month (78% reduction)
Key Techniques: Redis multi-layer caching, database indexing, CDN for images, connection pooling, GraphQL for flexible queries
The Performance Challenge
Initial Architecture Problems
Production Metrics (Before Optimization):
// Slow endpoint - vehicle listing detail
GET /api/v1/vehicles/:id
// Performance characteristics
const beforeMetrics = {
avgResponseTime: 1850, // ms
p50: 1200, // ms
p95: 2300, // ms
p99: 4500, // ms
// Resource usage per request
databaseQueries: 12,
databaseQueryTime: 980, // ms total
externalAPICalls: 3,
externalAPITime: 650, // ms total
computationTime: 220, // ms
// Infrastructure load
dbConnections: 450, // concurrent
cpuUtilization: 78, // %
memoryUsage: 4.2, // GB
// Cost implications
monthlyRequests: 50_000_000,
infraCost: 8200, // $/month
costPerRequest: 0.000164 // $
};
// Database query breakdown
const slowQueries = {
vehicleDetails: 180, // ms
specifications: 220, // ms
pricingHistory: 340, // ms - SLOW!
sellerInformation: 95, // ms
similarVehicles: 520, // ms - VERY SLOW!
images: 140, // ms
reviews: 180, // ms
maintenance: 125, // ms
};
Critical Issues:
- N+1 Query Problem: Fetching related data caused cascade of queries
- No Caching: Every request hit database
- Unoptimized Indexes: Full table scans on 10M+ row tables
- Synchronous External Calls: Blocking on third-party APIs
- Image Processing: Real-time resizing on every request
- Connection Pool Exhaustion: 450 concurrent connections to PostgreSQL
Slow APIs force horizontal scaling:
- 6 servers needed for 5.8M daily requests
- RDS db.r5.2xlarge required for query load
- $8,200/month infrastructure cost
After optimization:
- 2 servers handle same load
- RDS db.r5.large sufficient
- $1,800/month infrastructure cost
- Savings: $6,400/month ($76,800/year)
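The server consolidation above is simple division; a quick sketch of the per-server load (illustrative arithmetic only, using the 5.8M daily figure from this section):

```typescript
// Per-server daily load before and after (same 5.8M requests/day).
function perServer(dailyRequests: number, servers: number): number {
  return dailyRequests / servers;
}

const daily = 5_800_000;
const before = Math.round(perServer(daily, 6)); // ~967K requests/day per server
const after = perServer(daily, 2);              // 2.9M requests/day per server
const avgRps = after / 86_400;                  // ~34 requests/second average
```

Even after consolidation the average per-server rate is modest; it is the p95/p99 tail under burst traffic, not the average, that dictates server count.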
Optimization Strategy
Performance Optimization Roadmap
┌──────────────────────────────────────────────────────────────┐
│ Layer 1: Edge Caching                                        │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ CloudFlare CDN - 100+ Global Locations                   │ │
│ │ - Static assets (images, CSS, JS)                        │ │
│ │ - Cache hit rate: 98%                                    │ │
│ │ - Response time: 15-50ms                                 │ │
│ └──────────────────────────────────────────────────────────┘ │
└──────────────────────────────┬───────────────────────────────┘
                               │ Cache Miss (2% of requests)
┌──────────────────────────────▼───────────────────────────────┐
│ Layer 2: Application Cache                                   │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ Redis Cluster - 3 Nodes (Master + 2 Replicas)            │ │
│ │ - Hot data: Vehicle listings, search results             │ │
│ │ - TTL: 15-60 minutes                                     │ │
│ │ - Cache hit rate: 94%                                    │ │
│ │ - Response time: 2-8ms                                   │ │
│ └──────────────────────────────────────────────────────────┘ │
└──────────────────────────────┬───────────────────────────────┘
                               │ Cache Miss (6% of requests)
┌──────────────────────────────▼───────────────────────────────┐
│ Layer 3: Query Optimization                                  │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ Optimized PostgreSQL Queries                             │ │
│ │ - Indexed columns                                        │ │
│ │ - Materialized views                                     │ │
│ │ - Connection pooling                                     │ │
│ │ - Query time: 15-80ms                                    │ │
│ └──────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
Layer 1: Redis Caching Strategy
Multi-Layer Cache Implementation
// Production-grade Redis caching
class VehicleCache {
private redis: RedisCluster;
private readonly TTL = {
HOT: 900, // 15 minutes - frequently accessed
WARM: 3600, // 1 hour - regularly accessed
COLD: 86400, // 24 hours - rarely accessed
SEARCH: 300 // 5 minutes - search results
};
async getVehicle(vehicleId: string): Promise<Vehicle | null> {
// Try cache first
const cacheKey = `vehicle:${vehicleId}`;
const cached = await this.redis.get(cacheKey);
if (cached) {
// Update access frequency for cache warming
await this.incrementAccessCount(vehicleId);
return JSON.parse(cached);
}
// Cache miss - fetch from database
const vehicle = await this.db.getVehicle(vehicleId);
if (vehicle) {
// Determine TTL based on vehicle popularity
const ttl = await this.calculateTTL(vehicleId);
await this.redis.setex(cacheKey, ttl, JSON.stringify(vehicle));
}
return vehicle;
}
private async calculateTTL(vehicleId: string): Promise<number> {
// Hot vehicles get shorter TTL (more frequent updates)
const accessCount = await this.getAccessCount(vehicleId);
if (accessCount > 100) return this.TTL.HOT; // Very popular
if (accessCount > 20) return this.TTL.WARM; // Popular
return this.TTL.COLD; // Normal
}
async searchVehicles(query: SearchQuery): Promise<SearchResults> {
// Generate deterministic cache key from query
const cacheKey = `search:${this.hashQuery(query)}`;
// Check cache
const cached = await this.redis.get(cacheKey);
if (cached) {
return JSON.parse(cached);
}
// Execute search
const results = await this.db.searchVehicles(query);
// Cache search results (shorter TTL - data changes frequently)
await this.redis.setex(
cacheKey,
this.TTL.SEARCH,
JSON.stringify(results)
);
return results;
}
// Cache warming strategy
async warmCache(): Promise<void> {
// Pre-populate cache with top 1000 most accessed vehicles
const popularVehicles = await this.db.getPopularVehicles(1000);
const pipeline = this.redis.pipeline();
for (const vehicle of popularVehicles) {
const key = `vehicle:${vehicle.id}`;
pipeline.setex(key, this.TTL.HOT, JSON.stringify(vehicle));
}
await pipeline.exec();
}
// Cache invalidation on updates
async invalidateVehicle(vehicleId: string): Promise<void> {
const patterns = [
`vehicle:${vehicleId}`,
`search:*`, // Invalidate all search caches (note: KEYS is O(N) and blocks Redis; prefer SCAN in production)
`similar:${vehicleId}*`
];
for (const pattern of patterns) {
const keys = await this.redis.keys(pattern);
if (keys.length > 0) {
await this.redis.del(...keys);
}
}
}
}
// Production cache metrics
const cacheMetrics = {
hitRate: 0.94, // 94% cache hit rate
avgHitTime: 4.2, // ms
avgMissTime: 78, // ms
memorySaved: '92GB/day', // avoided database queries
costSavings: '$4,200/month' // reduced database load
};
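The hit-rate metrics above also explain the headline "0.8 database queries per request": a sketch of the blended figures, assuming the original ~12 queries still run on a cache miss:

```typescript
// Blended Redis-layer latency and residual DB load from the metrics above.
const hitRate = 0.94;
const hitMs = 4.2;   // avg cache hit
const missMs = 78;   // avg cache miss (falls through to Postgres)

// Weighted average response time at the cache layer
const effectiveMs = hitRate * hitMs + (1 - hitRate) * missMs;

// A miss still runs roughly the original 12 queries, so per request:
const queriesPerRequest = (1 - hitRate) * 12; // ~0.72, close to the ~0.8 reported
```

The blended latency lands under 10ms even though misses cost 78ms, which is why hit rate, not miss latency, is the lever worth optimizing first.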
Advanced Caching Patterns
1. Cache-Aside Pattern (Lazy Loading):
async getCachedData<T>(
key: string,
fetchFn: () => Promise<T>,
ttl: number
): Promise<T> {
// Try cache first
const cached = await this.redis.get(key);
if (cached) return JSON.parse(cached);
// Fetch from source
const data = await fetchFn();
// Store in cache
await this.redis.setex(key, ttl, JSON.stringify(data));
return data;
}
2. Write-Through Pattern (Immediate Consistency):
async updateVehicle(vehicle: Vehicle): Promise<void> {
// Write to database AND cache simultaneously
await Promise.all([
this.db.updateVehicle(vehicle),
this.redis.setex(
`vehicle:${vehicle.id}`,
this.TTL.WARM,
JSON.stringify(vehicle)
)
]);
}
3. Cache Stampede Prevention:
async getWithStampedeProtection<T>(
key: string,
fetchFn: () => Promise<T>
): Promise<T> {
// Try cache
const cached = await this.redis.get(key);
if (cached) return JSON.parse(cached);
// Acquire distributed lock
const lockKey = `lock:${key}`;
const acquired = await this.redis.set(lockKey, '1', 'EX', 10, 'NX');
if (!acquired) {
// Another process is fetching - wait and retry
await this.sleep(100);
return this.getWithStampedeProtection(key, fetchFn);
}
try {
// Fetch data
const data = await fetchFn();
// Cache it
await this.redis.setex(key, this.TTL.WARM, JSON.stringify(data));
return data;
} finally {
// Release lock
await this.redis.del(lockKey);
}
}
Layer 2: Database Query Optimization
Index Strategy
Before: Missing Indexes (Full Table Scans)
-- Slow query (2.3 seconds on 10M rows)
EXPLAIN ANALYZE
SELECT * FROM vehicles
WHERE make = 'Toyota'
AND model = 'Camry'
AND year >= 2020
AND price BETWEEN 20000 AND 30000
ORDER BY price ASC
LIMIT 50;
-- Query plan (BEFORE):
-- Seq Scan on vehicles (cost=0..890123 rows=45000)
-- Planning Time: 2.3 ms
-- Execution Time: 2341.2 ms (SLOW!)
After: Strategic Indexes
-- Create composite indexes for common queries
CREATE INDEX CONCURRENTLY idx_vehicles_make_model_year_price
ON vehicles(make, model, year, price);
CREATE INDEX CONCURRENTLY idx_vehicles_location_price
ON vehicles(location, price);
CREATE INDEX CONCURRENTLY idx_vehicles_created_at
ON vehicles(created_at DESC);
-- Partial indexes for hot queries
CREATE INDEX CONCURRENTLY idx_vehicles_active_listings
ON vehicles(make, model, price)
WHERE status = 'active' AND deleted_at IS NULL;
-- Same query with indexes (45ms on 10M rows)
EXPLAIN ANALYZE
SELECT * FROM vehicles
WHERE make = 'Toyota'
AND model = 'Camry'
AND year >= 2020
AND price BETWEEN 20000 AND 30000
ORDER BY price ASC
LIMIT 50;
-- Query plan (AFTER):
-- Index Scan using idx_vehicles_make_model_year_price
-- Planning Time: 0.8 ms
-- Execution Time: 45.2 ms (FAST! 98% improvement)
Query Optimization Patterns
1. N+1 Query Elimination:
// BEFORE: N+1 problem (1 + N queries)
async getVehiclesWithSpecs(ids: string[]): Promise<VehicleWithSpecs[]> {
const vehicles = await this.db.query('SELECT * FROM vehicles WHERE id = ANY($1)', [ids]);
// BAD: Loops through each vehicle (N queries)
for (const vehicle of vehicles) {
vehicle.specifications = await this.db.query(
'SELECT * FROM specifications WHERE vehicle_id = $1',
[vehicle.id]
);
}
return vehicles;
}
// AFTER: Single JOIN query
async getVehiclesWithSpecs(ids: string[]): Promise<VehicleWithSpecs[]> {
// GOOD: Single query with JOIN
return this.db.query(`
SELECT
v.*,
json_agg(s.*) as specifications
FROM vehicles v
LEFT JOIN specifications s ON s.vehicle_id = v.id
WHERE v.id = ANY($1)
GROUP BY v.id
`, [ids]);
}
2. Materialized Views for Complex Aggregations:
-- Expensive aggregation query (runs on every request)
-- Execution time: 1.2 seconds (too slow to run per request)
SELECT
make,
model,
COUNT(*) as total_listings,
AVG(price) as avg_price,
MIN(price) as min_price,
MAX(price) as max_price,
AVG(mileage) as avg_mileage
FROM vehicles
WHERE status = 'active'
GROUP BY make, model
ORDER BY total_listings DESC;
-- Create materialized view (refreshed every hour)
CREATE MATERIALIZED VIEW vehicle_stats AS
SELECT
make,
model,
COUNT(*) as total_listings,
AVG(price) as avg_price,
MIN(price) as min_price,
MAX(price) as max_price,
AVG(mileage) as avg_mileage,
NOW() as last_updated
FROM vehicles
WHERE status = 'active'
GROUP BY make, model
ORDER BY total_listings DESC;
-- Unique index on the materialized view (REFRESH ... CONCURRENTLY requires one)
CREATE UNIQUE INDEX idx_vehicle_stats_make_model
ON vehicle_stats(make, model);
-- Query the materialized view (12ms)
SELECT * FROM vehicle_stats
WHERE make = 'Toyota' AND model = 'Camry';
-- Refresh strategy (background job every hour)
REFRESH MATERIALIZED VIEW CONCURRENTLY vehicle_stats;
3. Connection Pooling:
// Proper connection pool configuration
const pool = new Pool({
host: process.env.DB_HOST,
port: 5432,
database: process.env.DB_NAME,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
// Connection pool settings
min: 10, // Minimum connections
max: 50, // Maximum connections
idleTimeoutMillis: 30000, // Close idle connections after 30s
connectionTimeoutMillis: 5000, // Timeout acquiring connection
// Performance settings
statement_timeout: 10000, // Query timeout: 10 seconds
query_timeout: 10000,
// Keep-alive
keepAlive: true,
keepAliveInitialDelayMillis: 10000
});
// Connection usage pattern
async function executeQuery<T>(query: string, params: any[]): Promise<T[]> {
const client = await pool.connect();
try {
const result = await client.query(query, params);
return result.rows as T[];
} finally {
client.release(); // Return to pool
}
}
// Metrics
const poolMetrics = {
activeConnections: 28, // Current active
idleConnections: 12, // Available
waitingClients: 0, // No waiting
totalQueries: 145_000_000, // Lifetime
avgCheckoutTime: 2.1 // ms
};
Layer 3: CDN & Image Optimization
Image Delivery Strategy
// Image optimization and CDN integration
class ImageService {
private cdn: CloudflareCDN;
private storage: S3Storage;
async getOptimizedImage(
vehicleId: string,
imageId: string,
options: ImageOptions
): Promise<string> {
// Generate CDN URL with transformations
const cdnUrl = this.cdn.getImageURL({
path: `vehicles/${vehicleId}/${imageId}`,
width: options.width,
height: options.height,
quality: options.quality || 85,
format: options.format || 'webp', // Modern format
fit: options.fit || 'cover'
});
return cdnUrl;
}
async uploadVehicleImages(
vehicleId: string,
images: File[]
): Promise<string[]> {
const uploadPromises = images.map(async (image, index) => {
// Upload original to S3
const key = `vehicles/${vehicleId}/${index}_original.jpg`;
await this.storage.upload(key, image);
// Trigger CDN cache warming for common sizes
const sizes = [
{ width: 400, height: 300 }, // Thumbnail
{ width: 800, height: 600 }, // Medium
{ width: 1200, height: 900 } // Large
];
await Promise.all(
sizes.map(size => this.cdn.warmCache(key, size))
);
return this.cdn.getImageURL({ path: key });
});
return Promise.all(uploadPromises);
}
}
// Image optimization results
const imageMetrics = {
avgImageSize: {
before: 2400, // KB (JPEG)
after: 380, // KB (WebP)
reduction: '84%'
},
loadTime: {
before: 1200, // ms
after: 85, // ms
improvement: '93%'
},
cdnHitRate: 0.98, // 98%
bandwidthSaved: '18TB/month',
costSavings: '$1,200/month'
};
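The percentage claims in these metrics follow directly from the raw before/after numbers; a quick check (arithmetic only):

```typescript
// Percentage reduction implied by a before/after pair.
const reductionPct = (before: number, after: number): number =>
  Math.round(((before - after) / before) * 100);

const sizeCut = reductionPct(2400, 380); // 84% smaller (JPEG -> WebP)
const timeCut = reductionPct(1200, 85);  // 93% faster load
```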
Layer 4: API Architecture Patterns
GraphQL for Flexible Queries
// RESTful API (multiple requests needed)
// Request 1: Get vehicle
GET /api/v1/vehicles/12345
// Request 2: Get specifications
GET /api/v1/vehicles/12345/specifications
// Request 3: Get similar vehicles
GET /api/v1/vehicles/12345/similar
// Request 4: Get seller info
GET /api/v1/vehicles/12345/seller
// Total: 4 round trips (slow)
// GraphQL API (single request with exact data needed)
POST /graphql
{
vehicle(id: "12345") {
id
make
model
year
price
specifications {
engine
transmission
}
similarVehicles(limit: 5) {
id
make
model
price
}
seller {
name
rating
phone
}
}
}
// Total: 1 round trip (fast)
GraphQL Implementation:
// GraphQL schema
const typeDefs = gql`
type Vehicle {
id: ID!
make: String!
model: String!
year: Int!
price: Float!
# Nested resolvers with DataLoader
specifications: Specifications
similarVehicles(limit: Int): [Vehicle!]!
seller: Seller
images(size: ImageSize): [Image!]!
}
type Query {
vehicle(id: ID!): Vehicle
searchVehicles(query: SearchInput!): VehicleSearchResults!
}
`;
// Resolvers with DataLoader (prevents N+1)
const resolvers = {
Vehicle: {
specifications: (parent, args, context) => {
// DataLoader batches requests
return context.specLoader.load(parent.id);
},
similarVehicles: async (parent, args, context) => {
// Use cached results
return context.cache.getOrFetch(
`similar:${parent.id}`,
() => context.db.findSimilarVehicles(parent.id, args.limit)
);
},
seller: (parent, args, context) => {
return context.sellerLoader.load(parent.sellerId);
}
}
};
// DataLoader for batching
const createSpecLoader = (db) => new DataLoader(async (vehicleIds) => {
// Single query for all vehicle specs
const specs = await db.query(`
SELECT * FROM specifications
WHERE vehicle_id = ANY($1)
`, [vehicleIds]);
// Map results back to original order
return vehicleIds.map(id =>
specs.filter(s => s.vehicle_id === id)
);
});
Rate Limiting & Quotas
// Fixed-window rate limiting (one counter per user per time window)
class RateLimiter {
private redis: Redis;
async checkRateLimit(
userId: string,
tier: 'free' | 'pro' | 'enterprise'
): Promise<RateLimitResult> {
const limits = {
free: { requests: 1000, window: 3600 }, // 1K/hour
pro: { requests: 10000, window: 3600 }, // 10K/hour
enterprise: { requests: 100000, window: 3600 } // 100K/hour
};
const limit = limits[tier];
const key = `ratelimit:${userId}:${Math.floor(Date.now() / (limit.window * 1000))}`;
// Increment counter
const current = await this.redis.incr(key);
// Set expiry on first request
if (current === 1) {
await this.redis.expire(key, limit.window);
}
const remaining = Math.max(0, limit.requests - current);
const allowed = current <= limit.requests;
return {
allowed,
remaining,
limit: limit.requests, // exposed for the X-RateLimit-Limit header
resetAt: new Date(Date.now() + limit.window * 1000)
};
}
}
// Rate limit middleware
app.use(async (req, res, next) => {
const user = req.user;
const result = await rateLimiter.checkRateLimit(user.id, user.tier);
// Set headers
res.set('X-RateLimit-Limit', result.limit.toString());
res.set('X-RateLimit-Remaining', result.remaining.toString());
res.set('X-RateLimit-Reset', result.resetAt.toISOString());
if (!result.allowed) {
return res.status(429).json({
error: 'Rate limit exceeded',
retryAfter: result.resetAt
});
}
next();
});
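The window-key arithmetic is easier to see without Redis. Below is a single-process, in-memory sketch of the same fixed-window counter (a hypothetical class, not part of the production code, and unsuitable for multi-server deployments, which is exactly what the Redis version solves):

```typescript
// In-memory fixed-window counter using the same key scheme as above:
// one counter per (user, window-bucket) pair.
class InMemoryWindowLimiter {
  private counts = new Map<string, number>();

  constructor(private maxRequests: number, private windowSec: number) {}

  check(userId: string, nowMs: number = Date.now()): boolean {
    const bucket = Math.floor(nowMs / (this.windowSec * 1000));
    const key = `${userId}:${bucket}`;
    const current = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, current);
    return current <= this.maxRequests; // denied once the count exceeds max
  }
}

// Allow 3 requests per 1-second window:
const limiter = new InMemoryWindowLimiter(3, 1);
const t0 = 1_000_000;
const burst = [1, 2, 3, 4].map(() => limiter.check("user-1", t0));
// burst is [true, true, true, false]; the next window starts a fresh counter
const nextWindow = limiter.check("user-1", t0 + 1_000);
```

A known trade-off of fixed windows is the boundary burst: a client can spend a full quota at the end of one window and again at the start of the next. Sliding-window or true token-bucket variants smooth this out at the cost of more Redis state.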
Performance Monitoring
Real-Time Metrics Dashboard
// APM (Application Performance Monitoring)
class PerformanceMonitor {
trackEndpoint(endpoint: string, duration: number, success: boolean): void {
// Store metrics in time-series database
this.influxdb.write({
measurement: 'api_request',
tags: {
endpoint,
success: success.toString(),
region: process.env.AWS_REGION
},
fields: {
duration,
timestamp: Date.now()
}
});
// Real-time alerting
if (duration > 1000) {
this.alert('Slow endpoint detected', { endpoint, duration });
}
}
async getPerformanceReport(): Promise<PerformanceReport> {
const metrics = await this.influxdb.query(`
SELECT
MEAN(duration) as avg_duration,
PERCENTILE(duration, 50) as p50,
PERCENTILE(duration, 95) as p95,
PERCENTILE(duration, 99) as p99
FROM api_request
WHERE time > now() - 1h
GROUP BY endpoint
`);
return metrics;
}
}
// Production performance metrics
const performanceReport = {
endpoint: '/api/v1/vehicles/:id',
requests: 5_800_000, // daily
avgDuration: 128, // ms
p50: 95, // ms
p95: 185, // ms โ
p99: 340, // ms
errorRate: 0.0012, // 0.12%
cacheHitRate: 0.94 // 94%
};
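The p50/p95/p99 figures above come out of InfluxDB, but the underlying computation is easy to sketch. This uses the nearest-rank method with hypothetical samples (monitoring backends may interpolate differently):

```typescript
// Nearest-rank percentile over raw latency samples.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Ten illustrative latency samples (ms):
const latencies = [80, 95, 110, 120, 130, 150, 185, 210, 250, 340];
const p50 = percentile(latencies, 50); // 130
const p95 = percentile(latencies, 95); // 340
```

Note how a single 340ms outlier dominates p95 in a small sample; this is why percentile targets need high-volume windows (the dashboard above aggregates over an hour) to be stable.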
Cost-Benefit Analysis
Infrastructure Cost Breakdown
Before Optimization:
API Servers: 6x t3.large = $540/month
Database: RDS db.r5.2xlarge = $5,200/month
Redis: None = $0
CDN: CloudFlare Free = $0
Load Balancer: $25/month
Monitoring: $200/month
────────────────────────────────────
Total: $5,965/month
After Optimization:
API Servers: 2x t3.medium = $120/month
Database: RDS db.r5.large = $1,200/month
Redis: ElastiCache 3-node = $280/month
CDN: CloudFlare Pro = $20/month
Load Balancer: $25/month
Monitoring: $150/month
────────────────────────────────────
Total: $1,795/month
Monthly Savings: $4,170
Annual Savings: $50,040
Performance ROI
| Metric | Before | After | Improvement |
|---|---|---|---|
| p95 Response Time | 2,300ms | 185ms | 92% faster |
| Requests/Server | 967K/day | 2.9M/day | 200% increase |
| Database Load | 78% CPU | 32% CPU | 59% reduction |
| Cache Hit Rate | 12% | 94% | 683% improvement |
| Infrastructure Cost | $5,965/mo | $1,795/mo | 70% reduction |
| Cost per Request | $0.000164 | $0.000036 | 78% reduction |
3-Year TCO:
- Before: $214,740
- After: $64,620
- Total Savings: $150,120
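The savings figures above derive directly from the monthly totals in the cost breakdown:

```typescript
// Savings derived from the monthly infrastructure totals above.
const beforeMonthly = 5_965;
const afterMonthly = 1_795;

const monthlySavings = beforeMonthly - afterMonthly; // $4,170
const annualSavings = monthlySavings * 12;           // $50,040
const tcoBefore3y = beforeMonthly * 36;              // $214,740
const tcoAfter3y = afterMonthly * 36;                // $64,620
const tcoSavings3y = tcoBefore3y - tcoAfter3y;       // $150,120
```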
Implementation Checklist
Phase 1: Quick Wins (Week 1)
- Add Redis caching for hot endpoints
- Create database indexes for top 10 queries
- Enable connection pooling
- Implement basic CDN for images
- Expected Impact: 60% response time improvement
Phase 2: Advanced Optimization (Weeks 2-3)
- Implement multi-layer caching strategy
- Create materialized views for aggregations
- Optimize N+1 queries with DataLoader
- Set up comprehensive monitoring
- Expected Impact: 85% response time improvement
Phase 3: Architecture Evolution (Month 2)
- Migrate to GraphQL (optional)
- Implement advanced rate limiting
- Set up auto-scaling
- Deploy monitoring dashboards
- Expected Impact: 92% response time improvement
Conclusion
Achieving sub-200ms API response times for automotive data requires systematic optimization across all layers:
The Five Performance Pillars:
- Multi-Layer Caching - 94% cache hit rate with Redis
- Database Optimization - Strategic indexes and query patterns
- CDN Integration - 98% cache hit for images (84% size reduction)
- Connection Pooling - Efficient resource utilization
- Real-Time Monitoring - Continuous performance tracking
Production Results:
- 92% response time improvement (2,300ms → 185ms)
- 70% infrastructure cost reduction ($5,965 → $1,795/month)
- 200% capacity increase per server
- $150K saved over 3 years
For automotive data APIs serving millions of requests, these optimizations aren't optionalโthey're the difference between $215K vs $65K infrastructure costs over 3 years.
Carapis API includes all production optimizations: Redis caching, CDN integration, optimized queries, and sub-200ms p95 response times.
Get Started → | API Documentation → | View All Parsers →
Related Resources
- Anti-Detection Architecture - 99.9999% success rate at scale
- Korean Market Data ROI - Market analysis and extraction economics
- API Reference - Complete API documentation
- All Parser Documentation - 25+ market coverage
Questions? Contact our team at info@carapis.com or join our Telegram community.
