Performance Optimization

Optimize your Ignis application for speed and scalability.

1. Measure Performance

Identify bottlenecks before optimizing:

typescript
import { executeWithPerformanceMeasure } from '@venizia/ignis';

await executeWithPerformanceMeasure({
  logger: this.logger,
  scope: 'DataProcessing',
  description: 'Process large dataset',
  task: async () => {
    await processLargeDataset();
  },
});

Logs execution time automatically.

Deep Dive: See Performance Utility for advanced profiling.

2. Offload CPU-Intensive Tasks

Prevent blocking the event loop with Worker Threads:

Use Worker Threads for:

  • Complex calculations or crypto operations
  • Large file/data processing
  • Any synchronous task > 5ms
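
The Worker Thread Helper provides a higher-level API for this; the sketch below shows the underlying pattern with Node's built-in node:worker_threads module. It assumes a CommonJS build (so __filename resolves to the compiled file), and the SHA-256 hashing stands in for any CPU-bound work.

typescript
import { createHash } from 'node:crypto';
import { Worker, isMainThread, parentPort, workerData } from 'node:worker_threads';

if (isMainThread) {
  // Main thread: spawn a worker and await its result; the event loop stays free
  const hashInWorker = (items: string[]) =>
    new Promise<string[]>((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: items });
      worker.once('message', resolve);
      worker.once('error', reject);
    });

  hashInWorker(['a', 'b', 'c']).then((hashes) => console.log(hashes));
} else {
  // Worker thread: the synchronous, CPU-bound loop runs here, off the main thread
  const hashes = (workerData as string[]).map((item) =>
    createHash('sha256').update(item).digest('hex'),
  );
  parentPort?.postMessage(hashes);
}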

Deep Dive: See Worker Thread Helper for implementation guide.

3. Optimize Database Queries

| Technique | Example | Impact |
| --- | --- | --- |
| Select specific fields | fields: { id: true, name: true } | Reduce data transfer |
| Use indexes | Create indexes on WHERE/JOIN columns | 10-100x faster queries |
| Mandatory limit | limit: 20 | Prevent fetching massive datasets |
| Paginate results | limit: 20, offset: 0 | Prevent memory overflow |
| Eager load relations | include: [{ relation: 'creator' }] | Solve N+1 problem |

Query Operators Reference

Ignis supports extensive query operators for filtering:

| Operator | Description | Example |
| --- | --- | --- |
| eq | Equal (handles null) | { status: { eq: 'ACTIVE' } } |
| ne, neq | Not equal | { status: { ne: 'DELETED' } } |
| gt, gte | Greater than (or equal) | { age: { gte: 18 } } |
| lt, lte | Less than (or equal) | { price: { lt: 100 } } |
| like | SQL LIKE (case-sensitive) | { name: { like: '%john%' } } |
| ilike | Case-insensitive LIKE | { email: { ilike: '%@gmail%' } } |
| nlike, nilike | NOT LIKE variants | { name: { nlike: '%test%' } } |
| regexp | PostgreSQL regex (~) | { code: { regexp: '^[A-Z]+$' } } |
| iregexp | Case-insensitive regex (~*) | { name: { iregexp: '^john' } } |
| in, inq | Value in array | { status: { in: ['A', 'B'] } } |
| nin | Value NOT in array | { role: { nin: ['guest'] } } |
| between | Range (inclusive) | { age: { between: [18, 65] } } |
| is, isn | IS NULL / IS NOT NULL | { deletedAt: { is: null } } |
| and, or | Logical operators | { or: [{ a: 1 }, { b: 2 }] } |

Complex Filter Example:

typescript
await repo.find({
  filter: {
    where: {
      and: [
        { status: { in: ['ACTIVE', 'PENDING'] } },
        { createdAt: { gte: new Date('2024-01-01') } },
        { or: [
          { role: 'admin' },
          { permissions: { ilike: '%manage%' } }
        ]}
      ]
    },
    limit: 50,
  },
});

JSON Path Filtering

Filter and order by nested JSON/JSONB fields using PostgreSQL's #> path-extraction operator:

typescript
// Order by nested JSON path
await repo.find({
  filter: {
    order: ['metadata.nested[0].field ASC'],
  },
});

// The framework uses PostgreSQL #> operator for path extraction
// metadata #> '{nested,0,field}'

TIP

Avoid Deep Nesting: While Ignis supports deeply nested include filters, each level adds significant overhead to query construction and result mapping. We strongly recommend a maximum of 2 levels (e.g., User -> Orders -> Items). For more complex data fetching, consider separate queries.

Example:

typescript
await userRepository.find({
  filter: {
    fields: { id: true, name: true, email: true },  // ✅ Specific fields
    where: { status: 'ACTIVE' },
    limit: 20,                                       // ✅ Mandatory limit
    include: [{ 
      relation: 'orders',
      scope: {
        include: [{ relation: 'items' }]             // ✅ Level 2 (Recommended limit)
      }
    }],
  },
});
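
When a query would need a third include level, the tip above recommends separate queries instead. A sketch of that approach, stitching the results together in memory (itemRepository and the orderId foreign key are illustrative assumptions):

typescript
// Fetch users with their orders (level 1), then fetch all items in one extra query
const users = await userRepository.find({
  filter: { limit: 20, include: [{ relation: 'orders' }] },
});

const orderIds = users.data.flatMap((u) => u.orders.map((o) => o.id));
const items = await itemRepository.find({
  filter: { where: { orderId: { in: orderIds } }, limit: 1000 },
});

// Stitch items back onto their orders in memory
const itemsByOrder = new Map<string, any[]>();
for (const item of items.data) {
  const bucket = itemsByOrder.get(item.orderId) ?? [];
  bucket.push(item);
  itemsByOrder.set(item.orderId, bucket);
}
for (const user of users.data) {
  for (const order of user.orders) {
    order.items = itemsByOrder.get(order.id) ?? [];
  }
}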

4. Implement Caching

Reduce database load with caching:

| Cache Type | Use Case | Implementation |
| --- | --- | --- |
| Redis | Distributed cache, session storage | Redis Helper |
| In-Memory | Single-process cache | MemoryStorageHelper |

Example:

typescript
// Cache expensive query results: read through the cache, fall back to the database
let users = await redis.get('users:active');
if (!users) {
  users = await userRepository.find({ filter: { where: { active: true }, limit: 100 } });
  await redis.set('users:active', users, 300); // 5 min TTL
}
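
The read-through logic above generalizes into a small cache-aside helper. A sketch, assuming the Redis helper's get/set signature shown above (TTL in seconds):

typescript
// Generic cache-aside helper: return the cached value, or compute and cache it
const getOrSet = async <T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> => {
  const cached = await redis.get(key);
  if (cached) {
    return cached as T;
  }
  const value = await compute();
  await redis.set(key, value, ttlSeconds);
  return value;
};

// Usage
const activeUsers = await getOrSet('users:active', 300, () =>
  userRepository.find({ filter: { where: { active: true }, limit: 100 } }),
);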

5. Production Settings

| Setting | Value | Why |
| --- | --- | --- |
| NODE_ENV | production | Enables library optimizations |
| Process Manager | PM2, systemd, Docker | Auto-restart, cluster mode |
| Cluster Mode | CPU cores | Utilize all CPUs |

PM2 Cluster Mode:

bash
pm2 start dist/index.js -i max  # Use all CPU cores
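
For repeatable deployments, the same flags can live in a PM2 ecosystem file (a minimal sketch; the app name and script path are placeholders for your build output):

javascript
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'ignis-app',        // placeholder name
      script: 'dist/index.js',
      instances: 'max',         // one process per CPU core
      exec_mode: 'cluster',     // PM2 cluster mode
      env: { NODE_ENV: 'production' },
    },
  ],
};

Start it with pm2 start ecosystem.config.js.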

6. Transaction Support

Use transactions to ensure atomicity across multiple database operations:

typescript
// Start a transaction
const tx = await userRepository.beginTransaction({
  isolationLevel: 'READ COMMITTED', // or 'REPEATABLE READ' | 'SERIALIZABLE'
});

try {
  // Pass transaction to all operations
  const user = await userRepository.create({
    data: { name: 'John', email: 'john@example.com' },
    options: { transaction: tx },
  });

  await orderRepository.create({
    data: { userId: user.data.id, total: 100 },
    options: { transaction: tx },
  });

  // Commit if all operations succeed
  await tx.commit();
} catch (error) {
  // Rollback on any failure
  await tx.rollback();
  throw error;
}

Transaction Isolation Levels:

| Level | Description | Use Case |
| --- | --- | --- |
| READ COMMITTED | See committed data only (default) | Most CRUD operations |
| REPEATABLE READ | Consistent reads within transaction | Reports, aggregations |
| SERIALIZABLE | Full isolation, may retry | Financial transactions |

WARNING

Higher isolation levels reduce concurrency. Use READ COMMITTED unless you have specific consistency requirements.
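
SERIALIZABLE transactions are expected to occasionally fail under contention and should be retried. A sketch of a retry wrapper around the transaction API shown above; the check for PostgreSQL error code 40001 (serialization_failure) is an assumption about how your driver surfaces the error:

typescript
// Retry a SERIALIZABLE transaction on serialization failures
const withSerializableRetry = async <T>(
  run: (tx: any) => Promise<T>,
  maxAttempts = 3,
): Promise<T> => {
  for (let attempt = 1; ; attempt++) {
    const tx = await userRepository.beginTransaction({ isolationLevel: 'SERIALIZABLE' });
    try {
      const result = await run(tx);
      await tx.commit();
      return result;
    } catch (error: any) {
      await tx.rollback();
      // Only retry serialization failures, and only up to maxAttempts
      if (error?.code !== '40001' || attempt >= maxAttempts) {
        throw error;
      }
    }
  }
};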

7. Database Connection Pooling

Connection pooling significantly improves performance by reusing database connections instead of creating new ones for each request.

Configure in DataSource:

typescript
import { Pool } from 'pg';
import { drizzle } from 'drizzle-orm/node-postgres';
// AbstractDataSource is provided by the framework; adjust the import path to your setup
import { AbstractDataSource } from '@venizia/ignis';

export class PostgresDataSource extends AbstractDataSource {
  override connect(): void {
    const pool = new Pool({
      host: this.settings.host,
      port: this.settings.port,
      user: this.settings.username,
      password: this.settings.password,
      database: this.settings.database,

      // Connection pool settings
      max: 20,                      // Maximum connections in pool
      min: 5,                       // Minimum connections to maintain
      idleTimeoutMillis: 30000,     // Close idle connections after 30s
      connectionTimeoutMillis: 5000, // Fail if can't connect in 5s
      maxUses: 7500,                // Close connection after 7500 queries
    });

    this.connector = drizzle({ client: pool, schema: this.schema });
  }
}

Recommended Pool Sizes:

| Server RAM | Concurrent Users | Max Pool Size | Min Pool Size |
| --- | --- | --- | --- |
| < 2GB | < 100 | 10 | 2 |
| 2-4GB | 100-500 | 20 | 5 |
| 4-8GB | 500-1000 | 30 | 10 |
| > 8GB | > 1000 | 50+ | 15 |

Formula: max_connections = (number_of_cores * 2) + effective_spindle_count

For most applications (SSD-backed, so the effective spindle count is ≈ 1): max_connections = CPU_cores * 2 + 1. For example, an 8-core server gives 8 * 2 + 1 = 17 connections.

Monitoring Pool Health:

typescript
// Log pool statistics periodically
const pool = new Pool({ /* ... */ });

setInterval(() => {
  this.logger.info('[pool] Stats | total: %d | idle: %d | waiting: %d',
    pool.totalCount,
    pool.idleCount,
    pool.waitingCount
  );
}, 60000); // Every minute

Warning Signs:

  • waitingCount > 0 consistently → Increase max
  • idleCount === totalCount always → Decrease max
  • Connection timeouts → Check network, increase connectionTimeoutMillis

8. Query Optimization Tips

Use Indexes Strategically

typescript
import { eq } from 'drizzle-orm';
import { index, pgTable, text, timestamp } from 'drizzle-orm/pg-core';

// Create indexes on frequently queried columns
export const User = pgTable('User', {
  id: text('id').primaryKey(),
  email: text('email').notNull().unique(),  // Implicit unique index
  status: text('status').notNull(),
  createdAt: timestamp('created_at').notNull(),
}, (table) => ({
  // Composite index for common query patterns
  statusCreatedIdx: index('idx_user_status_created').on(table.status, table.createdAt),
  // Partial index for active users only
  activeEmailIdx: index('idx_active_email').on(table.email).where(eq(table.status, 'ACTIVE')),
}));

Avoid N+1 Queries

typescript
// ❌ BAD - N+1 queries
const users = await userRepo.find({ filter: { limit: 100 } });
for (const user of users.data) {
  user.posts = await postRepo.find({ filter: { where: { authorId: user.id } } });
}

// ✅ GOOD - Single query with relations
const users = await userRepo.find({
  filter: {
    limit: 100,
    include: [{ relation: 'posts' }],
  },
});

Batch Operations

typescript
// ❌ BAD - Many individual inserts
for (const item of items) {
  await repo.create({ data: item });
}

// ✅ GOOD - Batch insert
await repo.createMany({ data: items });

9. Memory Management

Stream Large Datasets

typescript
// ❌ BAD - Load all records into memory
const allUsers = await userRepo.find({ filter: { limit: 100000 } });

// ✅ GOOD - Process in batches
const batchSize = 1000;
let offset = 0;
let hasMore = true;

while (hasMore) {
  const batch = await userRepo.find({
    filter: { limit: batchSize, offset },
  });

  for (const user of batch.data) {
    await processUser(user);
  }

  hasMore = batch.data.length === batchSize;
  offset += batchSize;
}
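
One caveat: OFFSET pagination re-scans and discards every skipped row, so batches get progressively slower on large tables. Keyset (cursor) pagination keeps each batch fast. A sketch assuming an indexed, monotonically increasing id column, using the gt operator from the reference above:

typescript
// Keyset pagination: filter past the last seen id instead of using offset
const batchSize = 1000;
let lastId: string | undefined;
let hasMore = true;

while (hasMore) {
  const batch = await userRepo.find({
    filter: {
      where: lastId ? { id: { gt: lastId } } : {},
      order: ['id ASC'],
      limit: batchSize,
    },
  });

  for (const user of batch.data) {
    await processUser(user);
  }

  hasMore = batch.data.length === batchSize;
  lastId = batch.data.at(-1)?.id;
}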

Avoid Memory Leaks in Long-Running Processes

typescript
// ❌ BAD - Growing array in long-running process
const processedIds: string[] = [];
// This array grows forever!

// ✅ GOOD - Use Set with cleanup or external storage
const processedIds = new Set<string>();

// Periodically clear or use Redis
setInterval(() => {
  if (processedIds.size > 10000) {
    processedIds.clear();
  }
}, 3600000); // Every hour

10. High-Frequency Logging

For performance-critical applications like HFT systems, use HfLogger instead of standard logging.

Standard Logger vs HfLogger

| Feature | Standard Logger | HfLogger |
| --- | --- | --- |
| Latency | ~1-10 microseconds | ~100-300 nanoseconds |
| Allocation | Per-call string formatting | Zero in hot path |
| Use case | General applications | Trading, real-time systems |

HfLogger Usage

typescript
import { HfLogger, HfLogFlusher } from '@venizia/ignis-helpers';

// At initialization (once):
const logger = HfLogger.get('OrderEngine');
const MSG_ORDER_SENT = HfLogger.encodeMessage('Order sent');
const MSG_ORDER_FILLED = HfLogger.encodeMessage('Order filled');

// Start background flusher
const flusher = new HfLogFlusher();
flusher.start(100); // Flush every 100ms

// In hot path (~100-300ns, zero allocation):
logger.log('info', MSG_ORDER_SENT);
logger.log('info', MSG_ORDER_FILLED);

Key points:

  • Pre-encode messages at initialization, not in hot path
  • Use background flushing to avoid I/O blocking
  • HfLogger uses a lock-free ring buffer (64K entries, 16MB)

Deep Dive: See Logger Helper for complete HfLogger API.

Performance Checklist

| Category | Check | Impact |
| --- | --- | --- |
| Database | Connection pooling configured | High |
| Database | Indexes on WHERE/JOIN columns | High |
| Database | Limit on all queries | High |
| Queries | Using fields to select specific columns | Medium |
| Queries | Relations limited to 2 levels | Medium |
| Queries | Batch operations for bulk data | High |
| Memory | Large datasets processed in batches | High |
| Caching | Expensive queries cached | High |
| Workers | CPU-intensive tasks offloaded | High |
| Logging | HfLogger for hot paths (HFT) | High |
| Monitoring | Performance logging enabled | Low |

See Also