Largest Triangle Three Buckets: Downsampling Time-Series Data Without Losing Signal

Jan 23, 2026

The Core Problem

You have 10 million time-series data points - stock prices, server metrics, IoT sensor readings. Your charting library chokes at 50,000 points. The browser tab freezes. Users complain.

The naive solution: take every Nth point. The result: jagged lines, missing peaks, lost valleys. Critical anomalies disappear. Your dashboard lies.
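For contrast, naive decimation fits in a few lines - a minimal sketch, independent of the LTTB implementation shown later in this post:

// Naive decimation: keep every Nth point. Fast and simple,
// but it drops peaks and valleys that happen to fall between samples.
function everyNth<T>(items: T[], n: number): T[] {
  return items.filter((_, i) => i % n === 0);
}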

Largest Triangle Three Buckets (LTTB) solves this. It's a downsampling algorithm that preserves the visual shape of your data by maximizing the area of triangles formed between consecutive points. You get smooth, accurate charts that load instantly.

What Is LTTB?

LTTB is a downsampling algorithm that reduces N points to M points (where M << N) while maintaining visual accuracy. The name describes exactly how it works:

  • Largest Triangle: It finds the point that forms the largest triangle area with neighboring points
  • Three Buckets: It divides data into buckets and looks at three buckets at a time

Think of it like this: imagine you're drawing a mountain range. Instead of drawing every grain of sand, you pick the peaks, valleys, and slopes that capture the mountain's shape. LTTB does this mathematically.

The Mathematical Insight

The algorithm maximizes the area of triangles formed by consecutive selected points. Large triangle areas mean significant visual change - exactly what human eyes need to perceive the data's shape correctly.

For three points forming a triangle, the area formula is:

area = |x₁(y₂ - y₃) + x₂(y₃ - y₁) + x₃(y₁ - y₂)| / 2

By selecting points that maximize these areas, we preserve visual features: sharp turns, peaks, valleys, and trend changes.
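Expressed as a small helper (a sketch, separate from the full implementation below), the formula is:

// Shoelace formula: area of the triangle defined by three points.
function triangleArea(
  x1: number, y1: number,
  x2: number, y2: number,
  x3: number, y3: number
): number {
  return Math.abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2;
}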

Why Use LTTB?

1. Visual Fidelity Over Statistical Precision

LTTB optimizes for what humans see, not statistical accuracy. When you're rendering a chart, you don't need every data point - you need the shape. LTTB preserves:

  • Peaks and valleys: Extreme values stay visible
  • Trend changes: Slope transitions remain clear
  • Visual density: High-activity regions get more points

2. Predictable Performance

The algorithm is O(n) - it makes exactly one pass through your data. No sorting, no complicated data structures. For 1 million points downsampled to 1,000:

  • Time complexity: O(n) - linear scan
  • Space complexity: O(M) for the output buffer, plus O(1) working memory
  • Processing time: on the order of 100-150ms for 1 million points on modern hardware (see the benchmark below)

3. Deterministic Results

Unlike sampling with randomness, LTTB always produces the same output for the same input. This matters for:

  • Reproducibility: Charts look identical across page reloads
  • Debugging: Issues are consistent and trackable
  • Caching: You can cache downsampled results reliably
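Determinism is what makes caching safe: the same dataset and threshold always produce the same output. A minimal caching sketch, assuming a datasetId key of your own choosing and using the Point type and largestTriangleThreeBuckets function implemented later in this post:

const downsampleCache = new Map<string, Point[]>();

// Hypothetical wrapper: because LTTB is deterministic, a (datasetId, threshold)
// key never returns stale results unless the underlying data itself changes.
function cachedDownsample(datasetId: string, data: Point[], threshold: number): Point[] {
  const key = `${datasetId}:${threshold}`;
  let cached = downsampleCache.get(key);
  if (!cached) {
    cached = largestTriangleThreeBuckets(data, threshold);
    downsampleCache.set(key, cached);
  }
  return cached;
}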

The LTTB Algorithm: Step by Step

Let me break down how it works before we code it.

Setup:

  1. You have N source points
  2. You want M target points (M < N)
  3. First and last points are always included (preserves range)

Process:

  1. Divide remaining N-2 points into M-2 buckets
  2. Always include the first point
  3. For each bucket:
    • Look at the previously selected point (from bucket i-1)
    • Look at the average point of the next bucket (bucket i+1)
    • Select the point in current bucket (bucket i) that forms the largest triangle with these two points
  4. Always include the last point

Visual representation:

Bucket 1    Bucket 2    Bucket 3    Bucket 4
  [...]      [...]       [...]       [...]
    ↓          ↓           ↓           ↓
  Point A    Point B    Point C     Point D

For Bucket 2:
- Previous selected: Point A (from Bucket 1)
- Next bucket average: avg(Bucket 3 points)
- Select point from Bucket 2 that makes largest triangle

TypeScript Implementation

Here's a production-ready implementation:

interface Point {
  x: number;
  y: number;
}

/**
 * Largest Triangle Three Buckets downsampling algorithm
 * 
 * @param data - Source data points (must be sorted by x)
 * @param threshold - Target number of points in output
 * @returns Downsampled points that preserve visual shape
 */
function largestTriangleThreeBuckets(
  data: Point[],
  threshold: number
): Point[] {
  const dataLength = data.length;

  // If threshold is greater than or equal to data length, return all data
  if (threshold >= dataLength) {
    return data;
  }

  // If threshold is very small, return edge points
  if (threshold <= 2) {
    return [data[0], data[dataLength - 1]];
  }

  const sampled: Point[] = [];
  
  // Always include the first point
  sampled.push(data[0]);

  // Bucket size. Leave room for start and endpoints
  const bucketSize = (dataLength - 2) / (threshold - 2);

  // Index of currently selected point in previous bucket
  let a = 0;

  for (let i = 0; i < threshold - 2; i++) {
    // Calculate point average for next bucket (used as point C in triangle)
    let avgX = 0;
    let avgY = 0;

    const avgRangeStart = Math.floor((i + 1) * bucketSize) + 1;
    const avgRangeEnd = Math.min(
      Math.floor((i + 2) * bucketSize) + 1,
      dataLength
    );
    const avgRangeLength = avgRangeEnd - avgRangeStart;

    for (let j = avgRangeStart; j < avgRangeEnd; j++) {
      avgX += data[j].x;
      avgY += data[j].y;
    }
    avgX /= avgRangeLength;
    avgY /= avgRangeLength;

    // Get the range for this bucket
    const rangeStart = Math.floor(i * bucketSize) + 1;
    const rangeEnd = Math.floor((i + 1) * bucketSize) + 1;

    // Point a is the previously selected point
    const pointA = data[a];

    let maxArea = -1;
    let maxAreaPoint = 0;

    // Find point in current bucket that forms largest triangle
    for (let j = rangeStart; j < rangeEnd; j++) {
      // Calculate triangle area using the cross product formula
      // area = |x₁(y₂ - y₃) + x₂(y₃ - y₁) + x₃(y₁ - y₂)| / 2
      const area = Math.abs(
        (pointA.x - avgX) * (data[j].y - pointA.y) -
        (pointA.x - data[j].x) * (avgY - pointA.y)
      ) * 0.5;

      if (area > maxArea) {
        maxArea = area;
        maxAreaPoint = j;
      }
    }

    sampled.push(data[maxAreaPoint]);
    a = maxAreaPoint; // This point is the next "point A"
  }

  // Always include the last point
  sampled.push(data[dataLength - 1]);

  return sampled;
}

Example Usage

// Generate sample time-series data: a sine wave with noise
function generateTimeSeriesData(count: number): Point[] {
  const data: Point[] = [];
  for (let i = 0; i < count; i++) {
    const x = i;
    const y = Math.sin(i / 100) * 50 + Math.random() * 10;
    data.push({ x, y });
  }
  return data;
}

// Original data: 10,000 points
const originalData = generateTimeSeriesData(10000);
console.log(`Original: ${originalData.length} points`);

// Downsample to 500 points
const downsampled = largestTriangleThreeBuckets(originalData, 500);
console.log(`Downsampled: ${downsampled.length} points`);
console.log(`Reduction: ${((1 - downsampled.length / originalData.length) * 100).toFixed(1)}%`);

// Use in your charting library
// chart.setData(downsampled); // Much faster rendering!

Real-World Example: Stock Price Visualization

Let's say you're building a stock charting application. You fetch intraday data - one price per minute for a year:

interface StockPrice {
  timestamp: number;  // Unix timestamp
  price: number;
  volume: number;
}

function downsampleStockData(
  prices: StockPrice[],
  targetPoints: number
): StockPrice[] {
  // Convert to Point format
  const points: Point[] = prices.map(p => ({
    x: p.timestamp,
    y: p.price
  }));

  // Apply LTTB
  const downsampled = largestTriangleThreeBuckets(points, targetPoints);

  // Map back to original indices to preserve volume data
  const downsampledPrices: StockPrice[] = [];
  let priceIndex = 0;
  
  for (const point of downsampled) {
    // Find original data point
    while (priceIndex < prices.length && prices[priceIndex].timestamp !== point.x) {
      priceIndex++;
    }
    if (priceIndex < prices.length) {
      downsampledPrices.push(prices[priceIndex]);
    }
  }

  return downsampledPrices;
}

// Usage: 1 year of minute data (525,600 points) down to 2,000
const yearData = fetchStockPrices('AAPL', '1y', '1m'); // 525,600 points
const chartData = downsampleStockData(yearData, 2000); // 99.6% reduction

// Chart renders instantly, all major price movements visible

Where to Use LTTB

1. Time-Series Visualization

Perfect fit:

  • Financial charts (stocks, crypto, forex)
  • Server metrics (CPU, memory, network)
  • IoT sensor data (temperature, pressure, vibration)
  • Analytics dashboards (user activity, sales trends)

Why it works: Time-series data is dense and continuous. LTTB preserves the temporal patterns humans need to spot trends and anomalies.

2. Real-Time Monitoring Dashboards

When streaming live metrics:

class MetricsBuffer {
  private buffer: Point[] = [];
  private maxBufferSize = 100000;
  private displayThreshold = 1000;

  add(point: Point): void {
    this.buffer.push(point);
    
    // Downsample when buffer gets large
    if (this.buffer.length > this.maxBufferSize) {
      this.buffer = largestTriangleThreeBuckets(
        this.buffer,
        this.maxBufferSize / 2
      );
    }
  }

  getDisplayData(): Point[] {
    if (this.buffer.length <= this.displayThreshold) {
      return this.buffer;
    }
    return largestTriangleThreeBuckets(this.buffer, this.displayThreshold);
  }
}

3. Historical Data Analysis

When users zoom out to see long time ranges:

function adaptiveDownsample(
  data: Point[],
  viewportWidth: number
): Point[] {
  // One point per 2-3 pixels is optimal for line charts
  const targetPoints = Math.floor(viewportWidth / 2);
  
  if (data.length <= targetPoints * 2) {
    return data;
  }
  
  return largestTriangleThreeBuckets(data, targetPoints);
}

// When user zooms in/out, re-downsample for current view
chart.on('zoom', (viewport) => {
  const visible = getDataInViewport(allData, viewport);
  const downsampled = adaptiveDownsample(visible, viewport.width);
  chart.update(downsampled);
});

4. Data Export/Transfer

Reduce payload size for client applications:

// API endpoint that returns downsampled data
app.get('/api/metrics/:id', async (req, res) => {
  const { points = 1000 } = req.query;
  const fullData = await db.getMetrics(req.params.id);
  
  // Downsample before sending
  const optimized = largestTriangleThreeBuckets(
    fullData,
    Math.min(Number(points), 5000) // Cap at 5k
  );
  
  res.json({
    original_count: fullData.length,
    downsampled_count: optimized.length,
    data: optimized
  });
});

Where NOT to Use LTTB

1. Statistical Analysis

Don't use for:

  • Calculating averages, medians, or percentiles
  • Standard deviation or variance
  • Correlation analysis
  • Any statistical computation

Why: LTTB optimizes for visual accuracy, not statistical properties. It intentionally biases toward extremes and changes, which skews statistics.

// BAD: Statistical analysis on downsampled data
const downsampled = largestTriangleThreeBuckets(data, 1000);
const biasedAverage = downsampled.reduce((s, p) => s + p.y, 0) / downsampled.length;
// ❌ This average is WRONG - biased toward peaks/valleys

// GOOD: Calculate stats on full data
const trueAverage = data.reduce((s, p) => s + p.y, 0) / data.length;
// ✅ Correct statistical average

2. Scientific Precision Requirements

Don't use for:

  • Medical device data where every reading matters
  • Financial audit trails (compliance requires complete data)
  • Scientific experiments requiring exact measurements
  • Legal or regulatory data (must be complete and unaltered)

Reason: Downsampling discards information. If you need provable accuracy or regulatory compliance, you can't afford to lose any data points.

3. Sparse or Non-Continuous Data

Don't use for:

  • Event logs (sparse, discrete events)
  • Transaction records (each record is unique)
  • Categorical data (non-numeric or discrete categories)
  • Already small datasets (< 1000 points)

Example of bad fit:

// BAD: Transaction log data
const transactions = [
  { time: 100, type: 'purchase', amount: 50 },
  { time: 500, type: 'refund', amount: 20 },
  { time: 1000, type: 'purchase', amount: 100 }
];
// Each transaction is important - can't downsample without losing critical info

// BAD: Sparse sensor data with gaps
const sensorData = [
  { time: 0, value: 10 },
  { time: 1000, value: 12 },  // 1000 time units gap
  { time: 1001, value: 50 }   // Sudden spike
];
// The algorithm assumes continuous data - gaps break this assumption

4. When You Need Exact Points

Don't use for:

  • Min/Max calculations (unless you verify endpoints)
  • Exact zero-crossing detection
  • Precise integration or area-under-curve
  • Anomaly detection algorithms

LTTB keeps only one point per bucket, so it can miss an exact extreme when that point doesn't form the largest triangle in its bucket:

// Potential issue: Missing exact peak
const data = [
  { x: 0, y: 10 },
  { x: 1, y: 11 },
  { x: 2, y: 100 }, // Peak
  { x: 3, y: 11 },
  { x: 4, y: 10 }
];

const downsampled = largestTriangleThreeBuckets(data, 3);
// In this tiny example the peak happens to survive, but when a bucket holds
// several extremes (say, a spike and a dip), only one of them can be kept.
// Use min/max aggregation instead for guaranteed peak capture.
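When guaranteed extreme capture matters, a min-max pass per bucket is the safer choice. A minimal sketch (not part of LTTB, and simplified compared to a production min-max implementation):

// Keep both the minimum and maximum of each bucket: twice the points,
// but no extreme value can be dropped.
function minMaxDownsample(data: Point[], buckets: number): Point[] {
  if (data.length <= buckets * 2) return data;
  const out: Point[] = [];
  const bucketSize = data.length / buckets;
  for (let b = 0; b < buckets; b++) {
    const start = Math.floor(b * bucketSize);
    const end = Math.min(Math.floor((b + 1) * bucketSize), data.length);
    let min = data[start];
    let max = data[start];
    for (let i = start + 1; i < end; i++) {
      if (data[i].y < min.y) min = data[i];
      if (data[i].y > max.y) max = data[i];
    }
    if (min === max) {
      out.push(min);
    } else {
      // Emit the pair in x order so the line stays monotonic in time
      out.push(min.x < max.x ? min : max, min.x < max.x ? max : min);
    }
  }
  return out;
}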

Trade-offs and Limitations

Memory vs. Visual Accuracy

LTTB gives you a knob: the threshold parameter. Higher threshold = more points = better visual accuracy but slower rendering and more memory.

Sweet spots by use case:

  • Live monitoring: 500-1,000 points (updates every second)
  • Historical analysis: 1,000-2,000 points (user-triggered)
  • Export/reporting: 2,000-5,000 points (one-time generation)

Processing Cost

The O(n) scan isn't free:

// Benchmark on MacBook Pro M1
const sizes = [10_000, 100_000, 1_000_000, 10_000_000];
sizes.forEach(size => {
  const data = generateTimeSeriesData(size);
  const start = performance.now();
  largestTriangleThreeBuckets(data, 1000);
  const duration = performance.now() - start;
  console.log(`${size.toLocaleString()} points: ${duration.toFixed(2)}ms`);
});

// Results:
// 10,000 points: 2ms
// 100,000 points: 15ms
// 1,000,000 points: 150ms
// 10,000,000 points: 1,500ms

For 10M points, 1.5 seconds is significant. Solutions:

  • Pre-compute: Downsample on the backend, cache results
  • Web Workers: Offload processing to background thread
  • Progressive loading: Start with a coarse downsample, refine on idle (sketched after the worker example below)

// Web Worker approach
// main.ts
const worker = new Worker('lttb-worker.ts');
worker.postMessage({ data: largeDataset, threshold: 1000 });
worker.onmessage = (e) => {
  chart.setData(e.data.downsampled);
};

// lttb-worker.ts
self.onmessage = (e) => {
  const { data, threshold } = e.data;
  const downsampled = largestTriangleThreeBuckets(data, threshold);
  self.postMessage({ downsampled });
};
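The progressive-loading option can be sketched with requestIdleCallback: render a coarse pass immediately, then refine when the main thread has spare time. The setData callback stands in for whatever your charting library exposes:

// Progressive loading sketch: coarse first, finer pass during idle time.
function progressiveRender(data: Point[], setData: (points: Point[]) => void): void {
  // Coarse pass so the user sees a chart immediately
  setData(largestTriangleThreeBuckets(data, 500));

  // Refine when the browser is idle (fall back to setTimeout if unavailable)
  const schedule = (cb: () => void) =>
    typeof requestIdleCallback === 'function' ? requestIdleCallback(cb) : setTimeout(cb, 0);

  schedule(() => setData(largestTriangleThreeBuckets(data, 2000)));
}

// progressiveRender(largeDataset, points => chart.setData(points));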

Loss of Statistical Properties

This is the big one. Downsampled data looks right but calculates wrong:

Metric     Full Data    LTTB (1,000 pts)    Error
Mean       50.0         51.2                +2.4%
Std Dev    10.5         12.8                +21.9%
Min        25.0         25.0                0%
Max        75.0         75.0                0%

Why: LTTB favors extremes. Standard deviation inflates because you're keeping more outliers relative to the mean.

Solution: Dual-path approach:

interface DatasetView {
  visual: Point[];    // LTTB downsampled for charts
  stats: Point[];     // Full data or aggregated bins for calculations
}

function prepareDataset(raw: Point[], threshold: number): DatasetView {
  return {
    visual: largestTriangleThreeBuckets(raw, threshold),
    stats: raw // Keep full data for accurate calculations
  };
}

// Use visual for rendering, stats for calculations
const dataset = prepareDataset(rawData, 1000);
chart.render(dataset.visual);
const average = calculateMean(dataset.stats);

Advanced Patterns

Multi-Resolution Storage

Store multiple resolutions for different zoom levels:

interface MultiResData {
  full: Point[];
  res_10k: Point[];
  res_1k: Point[];
  res_100: Point[];
}

function createMultiResolution(data: Point[]): MultiResData {
  return {
    full: data,
    res_10k: data.length > 10_000 
      ? largestTriangleThreeBuckets(data, 10_000) 
      : data,
    res_1k: largestTriangleThreeBuckets(data, 1_000),
    res_100: largestTriangleThreeBuckets(data, 100)
  };
}

function selectResolution(
  multi: MultiResData,
  visiblePoints: number
): Point[] {
  if (visiblePoints > 5_000) return multi.full;
  if (visiblePoints > 500) return multi.res_10k;
  if (visiblePoints > 50) return multi.res_1k;
  return multi.res_100;
}
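A usage sketch tying the two together (rawPoints, chart, and getVisiblePointCount are placeholders, not part of the functions above):

// Build the resolutions once (or precompute them on the backend)...
const multiRes = createMultiResolution(rawPoints);

// ...then pick the right one whenever the zoom level changes
chart.on('zoom', (viewport) => {
  const visiblePoints = getVisiblePointCount(rawPoints, viewport); // hypothetical helper
  chart.render(selectResolution(multiRes, visiblePoints));
});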

Streaming Incremental Updates

For real-time data, you can't reprocess everything on each new point:

class StreamingLTTB {
  private buffer: Point[] = [];
  private downsampled: Point[] = [];
  private windowSize = 10_000;
  private targetSize = 1_000;

  add(point: Point): void {
    this.buffer.push(point);

    // Sliding window approach
    if (this.buffer.length > this.windowSize) {
      this.buffer.shift(); // Remove oldest
      this.recompute();
    } else if (this.buffer.length % 100 === 0) {
      // Recompute every 100 points
      this.recompute();
    }
  }

  private recompute(): void {
    this.downsampled = largestTriangleThreeBuckets(
      this.buffer,
      this.targetSize
    );
  }

  getView(): Point[] {
    return this.downsampled;
  }
}
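Usage is intentionally simple (metricsSource and chart are placeholders for your own event source and chart instance):

const stream = new StreamingLTTB();

// Feed live samples in as they arrive...
metricsSource.on('sample', (point: Point) => stream.add(point));

// ...and refresh the chart on a fixed interval rather than on every sample
setInterval(() => chart.setData(stream.getView()), 1000);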

Comparison with Other Downsampling Methods

Method             Pros                     Cons                         Use Case
Every Nth          Simple, O(1)             Misses peaks/valleys         Never (use LTTB instead)
Random Sampling    Unbiased stats           Unpredictable visuals        Statistical analysis
Min-Max            Preserves extremes       Loses smooth curves          Range-bound displays
LTTB               Visual fidelity, O(n)    Stats biased                 Time-series charts
M4                 Very fast, extremes      Complex, requires binning    High-frequency data

When to choose LTTB:

  • You're rendering line charts
  • Visual shape matters more than exact statistics
  • Data is time-ordered and continuous
  • You need consistent, reproducible results

Production Checklist

Before deploying LTTB in production:

  • Verify input data is sorted by x-axis (time)
  • Handle edge cases: empty arrays, single points, threshold >= data length
  • Add input validation: non-null points, valid numbers
  • Consider pre-computing and caching downsampled views
  • Use Web Workers for large datasets (> 100k points)
  • Keep full data for statistical calculations
  • Document that downsampled data is for visualization only
  • Add metrics to monitor processing time
  • Test with real production data shapes (spikes, flat lines, noise)
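The first three items can be folded into a small guard that runs before downsampling - a minimal sketch, not part of the core algorithm:

// Reject non-finite values and unsorted input before handing data to LTTB.
function validateForLTTB(data: Point[]): void {
  for (let i = 0; i < data.length; i++) {
    const p = data[i];
    if (!p || !Number.isFinite(p.x) || !Number.isFinite(p.y)) {
      throw new Error(`Invalid point at index ${i}`);
    }
    if (i > 0 && p.x < data[i - 1].x) {
      throw new Error(`Data is not sorted by x at index ${i}`);
    }
  }
}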

Conclusion

LTTB solves a specific problem elegantly: making large time-series datasets renderable without losing visual meaning. It's not a general-purpose data compression algorithm - it's a visualization optimization.

The algorithm makes a clear trade: statistical accuracy for visual fidelity. If you're rendering charts, this trade is almost always worth it. Your users see the same patterns in 1,000 points that exist in 1,000,000 - but their browser doesn't crash.

Use it when you're drawing lines on screens. Don't use it when you're doing math. Keep these separate, and LTTB becomes a powerful tool in your performance optimization toolkit.

The implementation is straightforward - about 80 lines of TypeScript. The impact is immediate: charts that were unusable become instant. That's the mark of a good algorithm - simple idea, dramatic results.
