Performance
Optimize your Scalar application for speed and scalability
Learn how to optimize your Scalar application for maximum performance, scalability, and user experience.
Performance Areas
Database Optimization
Query optimization, indexing, and connection pooling
Caching Strategies
Redis, CDN, and application-level caching
Frontend Performance
Code splitting, lazy loading, and asset optimization
API Optimization
Response optimization and request batching
Database Optimization
Query Optimization
Optimize your database queries for better performance:
export const blogPost = defineModel({
name: 'blogPost',
fields: {
title: {
type: 'text',
indexed: true, // Add index for frequently queried fields
},
slug: {
type: 'slug',
indexed: true,
unique: true,
},
status: {
type: 'select',
indexed: true, // Index for filtering
},
publishedAt: {
type: 'datetime',
indexed: true, // Index for sorting
},
content: {
type: 'richtext',
// Don't index large text fields unless needed for search
},
},
// Define composite indexes for complex queries
indexes: [
{ fields: ['status', 'publishedAt'] },
{ fields: ['author', 'status'] },
],
});
Efficient Querying
Use select to limit returned fields and reduce payload size:
// ❌ Bad - Returns all fields
const posts = await scalar.content.findMany({
model: 'blogPost',
filter: { status: 'published' },
});
// ✅ Good - Only return needed fields
const posts = await scalar.content.findMany({
model: 'blogPost',
filter: { status: 'published' },
select: ['id', 'title', 'slug', 'excerpt', 'publishedAt'],
populate: {
author: ['id', 'name', 'avatar'],
},
});
// Use pagination to limit result sets
const posts = await scalar.content.findMany({
model: 'blogPost',
filter: { status: 'published' },
limit: 20,
offset: page * 20,
sort: '-publishedAt',
});
Connection Pooling
Configure database connection pooling for better resource management:
export default defineConfig({
database: {
url: process.env.DATABASE_URL,
pool: {
min: 2,
max: 10,
acquireTimeoutMillis: 30000,
createTimeoutMillis: 30000,
destroyTimeoutMillis: 5000,
idleTimeoutMillis: 30000,
reapIntervalMillis: 1000,
createRetryIntervalMillis: 200,
},
},
});
Database Indexes
Create strategic indexes for your most common queries:
-- Index for published posts ordered by date
CREATE INDEX idx_posts_published_date ON blog_posts(status, published_at DESC)
WHERE status = 'published';
-- Index for author posts
CREATE INDEX idx_posts_author_status ON blog_posts(author_id, status);
-- Full-text search index
CREATE INDEX idx_posts_search ON blog_posts
USING gin(to_tsvector('english', title || ' ' || content));
Caching Strategies
Redis Caching
Implement Redis for high-performance caching:
import Redis from 'ioredis';
const redis = new Redis({
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT || '6379'),
password: process.env.REDIS_PASSWORD,
retryStrategy: (times) => Math.min(times * 100, 2000), // back off between reconnect attempts
enableReadyCheck: false,
maxRetriesPerRequest: null,
});
export class CacheManager {
static async get<T>(key: string): Promise<T | null> {
try {
const cached = await redis.get(key);
return cached ? JSON.parse(cached) : null;
} catch (error) {
console.error('Cache get error:', error);
return null;
}
}
static async set(key: string, value: any, ttl = 3600): Promise<void> {
try {
await redis.setex(key, ttl, JSON.stringify(value));
} catch (error) {
console.error('Cache set error:', error);
}
}
static async del(key: string): Promise<void> {
try {
await redis.del(key);
} catch (error) {
console.error('Cache delete error:', error);
}
}
static async invalidatePattern(pattern: string): Promise<void> {
try {
const keys = await redis.keys(pattern);
if (keys.length > 0) {
await redis.del(...keys);
}
} catch (error) {
console.error('Cache invalidation error:', error);
}
}
}
// Usage example
export async function getCachedPosts(page = 1) {
const cacheKey = `blog:posts:page:${page}`;
let posts = await CacheManager.get(cacheKey);
if (!posts) {
posts = await scalar.content.findMany({
model: 'blogPost',
filter: { status: 'published' },
limit: 20,
offset: (page - 1) * 20,
sort: '-publishedAt',
});
await CacheManager.set(cacheKey, posts, 300); // 5 minutes
}
return posts;
}
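The invalidation hooks in the next section also clear per-post keys such as blog:post:&lt;slug&gt;, so individual posts should be cached under that key as well. A minimal sketch, reusing CacheManager and the findMany API shown above (assuming findMany returns an array):
export async function getCachedPost(slug: string) {
  const cacheKey = `blog:post:${slug}`;
  let post = await CacheManager.get(cacheKey);
  if (!post) {
    // Take the first match for the slug
    const results = await scalar.content.findMany({
      model: 'blogPost',
      filter: { slug, status: 'published' },
      limit: 1,
    });
    post = results[0] ?? null;
    if (post) {
      await CacheManager.set(cacheKey, post, 600); // 10 minutes
    }
  }
  return post;
}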
Cache Invalidation
Implement smart cache invalidation:
export const blogPost = defineModel({
name: 'blogPost',
hooks: {
afterCreate: async (record) => {
// Invalidate listing caches
await CacheManager.invalidatePattern('blog:posts:*');
await CacheManager.invalidatePattern('blog:featured:*');
},
afterUpdate: async (record, changes) => {
// Invalidate specific post cache
await CacheManager.del(`blog:post:${record.slug}`);
// Invalidate listings if status changed
if (changes.status) {
await CacheManager.invalidatePattern('blog:posts:*');
}
// Invalidate featured posts if featured status changed
if (changes.featured) {
await CacheManager.invalidatePattern('blog:featured:*');
}
},
afterDelete: async (record) => {
await CacheManager.del(`blog:post:${record.slug}`);
await CacheManager.invalidatePattern('blog:posts:*');
},
},
});
CDN Configuration
Configure CDN for static assets:
import { CloudFront } from 'aws-sdk';
const cloudfront = new CloudFront();
export async function invalidateCDN(paths: string[]) {
const params = {
DistributionId: process.env.CLOUDFRONT_DISTRIBUTION_ID!,
InvalidationBatch: {
CallerReference: Date.now().toString(),
Paths: {
Quantity: paths.length,
Items: paths,
},
},
};
try {
await cloudfront.createInvalidation(params).promise();
} catch (error) {
console.error('CDN invalidation error:', error);
}
}
// Usage
await invalidateCDN(['/blog/*', '/images/*']);
If you use Cloudflare instead of CloudFront, you can purge cached URLs through its API:
export async function purgeCloudflareCache(urls: string[]) {
const response = await fetch(
`https://api.cloudflare.com/client/v4/zones/${process.env.CLOUDFLARE_ZONE_ID}/purge_cache`,
{
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ files: urls }),
}
);
if (!response.ok) {
throw new Error('Cloudflare cache purge failed');
}
return response.json();
}
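Usage mirrors the CloudFront helper; Cloudflare expects absolute URLs (the domain below is a placeholder):
// Usage
await purgeCloudflareCache(['https://your-domain.com/blog/my-post']);
For a Next.js frontend, you can also control image formats and caching headers directly in next.config.js: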
/** @type {import('next').NextConfig} */
const nextConfig = {
images: {
domains: ['your-cdn-domain.com'],
formats: ['image/webp', 'image/avif'],
},
// Enable static optimization
experimental: {
optimizeCss: true,
optimizePackageImports: ['lucide-react'],
},
// Configure caching headers
async headers() {
return [
{
source: '/images/:path*',
headers: [
{
key: 'Cache-Control',
value: 'public, max-age=31536000, immutable',
},
],
},
{
source: '/api/:path*',
headers: [
{
key: 'Cache-Control',
value: 'public, s-maxage=60, stale-while-revalidate=300',
},
],
},
];
},
};
module.exports = nextConfig;
Frontend Performance
Code Splitting
Implement strategic code splitting:
import dynamic from 'next/dynamic';
import { Suspense } from 'react';
// Lazy load heavy components
const RichTextEditor = dynamic(
() => import('./rich-text-editor'),
{
ssr: false,
loading: () => <div>Loading editor...</div>,
}
);
const AdminDashboard = dynamic(
() => import('./admin-dashboard'),
{
ssr: false,
loading: () => <div>Loading dashboard...</div>,
}
);
// Route-based code splitting
const BlogPage = dynamic(() => import('../pages/blog'));
const ProductPage = dynamic(() => import('../pages/product'));
// Component-based lazy loading
export function LazyModal({ children }: { children: React.ReactNode }) {
return (
<Suspense fallback={<div>Loading...</div>}>
{children}
</Suspense>
);
}
Image Optimization
Optimize images for better performance:
import Image from 'next/image';
import { useState } from 'react';
interface OptimizedImageProps {
src: string;
alt: string;
width: number;
height: number;
priority?: boolean;
sizes?: string;
}
export function OptimizedImage({
src,
alt,
width,
height,
priority = false,
sizes = '(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw',
}: OptimizedImageProps) {
const [loading, setLoading] = useState(true);
return (
<div className="relative overflow-hidden">
{loading && (
<div
className="absolute inset-0 bg-gray-200 animate-pulse"
style={{ aspectRatio: `${width}/${height}` }}
/>
)}
<Image
src={src}
alt={alt}
width={width}
height={height}
priority={priority}
sizes={sizes}
quality={85}
placeholder="blur"
blurDataURL="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAUEAEAAAAAAAAAAAAAAAAAAAAA/8QAFQEBAQAAAAAAAAAAAAAAAAAAAAX/xAAUEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCdABmX/9k="
onLoad={() => setLoading(false)}
className={`transition-opacity duration-300 ${
loading ? 'opacity-0' : 'opacity-100'
}`}
/>
</div>
);
}
Bundle Analysis
Analyze and optimize your bundle size:
Install the bundle analyzer:
npm install --save-dev @next/bundle-analyzer
Configure it in next.config.js:
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // your existing config
});
Run the analysis:
ANALYZE=true npm run build
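A lot of bundle weight comes from importing an entire library when only one utility is needed. A common fix, shown here with lodash purely as an illustration, is to import from the specific module path so the bundler can drop the rest:
// ❌ Bad - pulls the whole library into the client bundle
import { debounce } from 'lodash';

// ✅ Good - imports only the single utility
import debounce from 'lodash/debounce';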
API Optimization
Response Optimization
Optimize API responses for better performance:
import { NextResponse } from 'next/server';
import { z } from 'zod';
// Cache-friendly JSON responses
// Note: compression itself (gzip/brotli) is handled by the hosting platform, CDN, or reverse proxy;
// setting Content-Encoding manually without compressing the body would corrupt responses.
export function cachedResponse(data: any, status = 200) {
  return NextResponse.json(data, {
    status,
    headers: {
      'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300',
    },
  });
}
// Paginated responses
export function paginatedResponse(data: any[], meta: any) {
return NextResponse.json({
data,
meta: {
total: meta.total,
page: meta.page,
limit: meta.limit,
totalPages: Math.ceil(meta.total / meta.limit),
hasNextPage: meta.page < Math.ceil(meta.total / meta.limit),
hasPreviousPage: meta.page > 1,
},
});
}
// Validate and optimize query parameters
const querySchema = z.object({
page: z.coerce.number().min(1).default(1),
limit: z.coerce.number().min(1).max(100).default(20),
sort: z.string().optional(),
filter: z.record(z.any()).optional(),
});
export function validateQuery(searchParams: URLSearchParams) {
const params = Object.fromEntries(searchParams.entries());
return querySchema.parse(params);
}
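Putting these helpers together in a route handler looks roughly like this; scalar.content.count is a hypothetical total-count call, so substitute whatever mechanism your setup provides:
// app/api/posts/route.ts (sketch)
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const { page, limit } = validateQuery(searchParams);

  const [data, total] = await Promise.all([
    scalar.content.findMany({
      model: 'blogPost',
      filter: { status: 'published' },
      limit,
      offset: (page - 1) * limit,
      sort: '-publishedAt',
    }),
    // Hypothetical count helper - replace with your actual total-count query
    scalar.content.count({ model: 'blogPost', filter: { status: 'published' } }),
  ]);

  return paginatedResponse(data, { total, page, limit });
}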
Request Batching
Implement request batching for better efficiency:
class RequestBatcher {
private batches = new Map<string, Promise<any>>();
private timers = new Map<string, NodeJS.Timeout>();
async batch<T>(
key: string,
request: () => Promise<T>,
delay = 10,
): Promise<T> {
// If batch already exists, return the existing promise
if (this.batches.has(key)) {
return this.batches.get(key)!;
}
// Create new batch promise
const batchPromise = new Promise<T>((resolve, reject) => {
const timer = setTimeout(async () => {
try {
const result = await request();
resolve(result);
} catch (error) {
reject(error);
} finally {
this.batches.delete(key);
this.timers.delete(key);
}
}, delay);
this.timers.set(key, timer);
});
this.batches.set(key, batchPromise);
return batchPromise;
}
}
const batcher = new RequestBatcher();
// Usage example
export async function getBatchedPosts(ids: string[]) {
const cacheKey = `posts:${[...ids].sort().join(',')}`; // copy before sorting so the caller's array is not mutated
return batcher.batch(cacheKey, async () => {
return scalar.content.findMany({
model: 'blogPost',
filter: { id: { $in: ids } },
});
});
}
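Because the batcher keys on the sorted id list, concurrent callers requesting the same set of posts are deduplicated into a single query:
// Both calls share one underlying findMany query
const [first, second] = await Promise.all([
  getBatchedPosts(['1', '2', '3']),
  getBatchedPosts(['3', '2', '1']),
]);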
Monitoring and Profiling
Performance Monitoring
Set up performance monitoring:
import { performance } from 'perf_hooks';
export class PerformanceMonitor {
private static timers = new Map<string, number>();
static start(label: string) {
this.timers.set(label, performance.now());
}
static end(label: string) {
const startTime = this.timers.get(label);
if (startTime) {
const duration = performance.now() - startTime;
console.log(`${label}: ${duration.toFixed(2)}ms`);
this.timers.delete(label);
return duration;
}
return 0;
}
static async measure<T>(label: string, fn: () => Promise<T>): Promise<T> {
this.start(label);
try {
const result = await fn();
return result;
} finally {
this.end(label);
}
}
}
// Usage
export async function getPostsWithMonitoring() {
return PerformanceMonitor.measure('fetch-posts', async () => {
return scalar.content.findMany({
model: 'blogPost',
filter: { status: 'published' },
});
});
}
Database Query Profiling
Profile database queries:
export default defineConfig({
database: {
url: process.env.DATABASE_URL,
debug: process.env.NODE_ENV === 'development',
hooks: {
beforeQuery: (query) => {
console.time(`Query: ${query.sql.slice(0, 50)}...`);
},
afterQuery: (query, result) => {
console.timeEnd(`Query: ${query.sql.slice(0, 50)}...`);
if (result.duration > 1000) {
console.warn(`Slow query detected: ${query.sql}`);
}
},
},
},
});
Performance Best Practices
General Guidelines
- Use appropriate data types - Choose the right field types for your data
- Implement pagination - Always paginate large result sets
- Cache frequently accessed data - Use Redis or in-memory caching
- Optimize images - Use WebP format and proper sizing
- Minimize API calls - Batch requests when possible
- Use CDN - Serve static assets from a CDN
- Monitor performance - Set up monitoring and alerting
Production Checklist
- Database indexes for all frequently queried fields
- Redis caching for expensive operations
- CDN configured for static assets
- Image optimization enabled
- Code splitting implemented
- Bundle size optimized
- Performance monitoring set up
- Error tracking configured
- Rate limiting enabled (see the sketch after this list)
- Compression enabled
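Rate limiting can reuse the Redis setup from the caching section. A minimal fixed-window sketch (the limits and key prefix are illustrative, and the client configuration mirrors the earlier example):
import Redis from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || '6379'),
  password: process.env.REDIS_PASSWORD,
});

export async function checkRateLimit(
  identifier: string,
  limit = 100,
  windowSeconds = 60,
): Promise<boolean> {
  const key = `ratelimit:${identifier}`;
  // INCR + EXPIRE implements a simple fixed-window counter
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, windowSeconds);
  }
  return count <= limit;
}

// Usage in a route handler:
// if (!(await checkRateLimit(clientIp))) {
//   return new Response('Too Many Requests', { status: 429 });
// }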
Performance optimization is an ongoing process. Use tools like Lighthouse, Web Vitals, and application performance monitoring to continuously improve your application's performance.
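As a starting point for that monitoring, Next.js ships a useReportWebVitals hook; the /api/vitals endpoint below is a placeholder for wherever you collect metrics:
'use client';

import { useReportWebVitals } from 'next/web-vitals';

export function WebVitalsReporter() {
  useReportWebVitals((metric) => {
    // Forward each metric (LCP, CLS, INP, TTFB, ...) to your collection endpoint
    navigator.sendBeacon('/api/vitals', JSON.stringify(metric));
  });

  return null;
}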