
Neural Network Playground

Explore decision boundaries with interactive neural network training

Settings

Neural Network Classification

This demo visualizes how neural networks learn to classify 2D points by drawing decision boundaries.

How It Works

  1. Input layer (2 neurons): X and Y coordinates of each point
  2. Hidden layers: Configurable depth and width (e.g., 4-4 or 8-8)
  3. Output layer (2 neurons): Probability for each class
  4. Activation: ReLU or Sigmoid for hidden layers, Softmax for output

Training Process

  • Forward pass: Data flows through the network to produce predictions
  • Loss calculation: Cross-entropy measures prediction error
  • Backpropagation: Gradients flow backwards to compute weight updates
  • Gradient descent: Weights are adjusted to minimize loss
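The four steps above can be sketched for a single linear layer with a softmax output. This is a minimal illustration with hypothetical names, not the page's actual `default.js` (which wraps the same math in `DenseLayer`/`NeuralNetwork` classes):

```javascript
// One training step: forward pass, cross-entropy loss, gradient, update.
function softmax(logits) {
  const m = Math.max(...logits);                   // stabilize exponentials
  const exps = logits.map(z => Math.exp(z - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

function crossEntropy(probs, label) {
  return -Math.log(Math.max(probs[label], 1e-15)); // clamp to avoid log(0)
}

// weights[k][j]: weight from input j to logit k; biases[k]: bias of logit k.
function trainStep(weights, biases, input, label, lr) {
  // Forward pass: logits, then class probabilities
  const logits = weights.map((row, k) =>
    row.reduce((s, w, j) => s + w * input[j], biases[k]));
  const probs = softmax(logits);
  // Combined softmax + cross-entropy gradient: dL/dlogit_k = p_k - 1[k == label]
  const grad = probs.map((p, k) => p - (k === label ? 1 : 0));
  // Gradient descent: move weights against the gradient
  for (let k = 0; k < weights.length; k++) {
    biases[k] -= lr * grad[k];
    for (let j = 0; j < input.length; j++) {
      weights[k][j] -= lr * grad[k] * input[j];
    }
  }
  return crossEntropy(probs, label);
}
```

Repeating `trainStep` on the same labeled point drives the loss down, which is exactly what the playground's loss curve visualizes.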

Visual Elements

  • Points: Left-click adds class 1 (blue), right-click adds class 2 (orange)
  • Background: Color gradient shows the network’s decision boundary
  • Loss curve: Tracks training progress over epochs

Preset Datasets

  • XOR: Classic non-linearly separable problem
  • Spiral: Interleaved spiral patterns
  • Circles: Concentric circles
  • Moons: Two interlocking crescents
© 2013 - 2026 Cylian 🤖 Claude

Prompt used to regenerate this page:

Page: Neural Network Playground
Description: "Explore decision boundaries with interactive neural network training"
Category: artificial-intelligence
Icon: brain
Tags: neural-network, machine-learning, visualization, interactive
Status: new

Front matter (index.md):
  title: "Neural Network Playground"
  description: "Explore decision boundaries with interactive neural network training"
  icon: "brain"
  tags: ["neural-network", "machine-learning", "visualization", "interactive"]
  status: ["new"]

HTML structure (index.md):
  <section class="container visual size-800 ratio-1-1 canvas-contain">
    <canvas id="playground-canvas"></canvas>
  </section>

Widget files:
- _controls.right.md (weight: 20): ##### Settings
  <dl> with:
    - Dataset: select#dataset with options:
        "" (-- Select --), "xor", "spiral", "circles", "moons", "gaussian", "checkerboard", "ring"
    - Hidden Layers: select#hidden-layers with options:
        "4" (26w), "4-4" selected (42w), "8" (42w), "8-8" (106w), "4-4-4" (58w), "8-4" (74w)
    - Activation: select#activation with options: "relu" selected, "sigmoid", "tanh"
    - Learning Rate: input#learning-rate type=range min=0.001 max=0.1 step=0.001 value=0.03
  <div class="control-buttons"> with:
    {{< button id="btn-train" icon="play" label="Train" class="is-start" >}}
    {{< button id="btn-pause" icon="pause" label="Pause" class="is-pause" >}}
    {{< button id="btn-reset" label="Reset" >}}
    {{< button id="btn-clear" label="Clear" >}}

- _help.menu.md (weight: 10):
  <a class="item button" data-modal="instructions">
    {{< icon name="help-circle" >}}
    Help
  </a>

- instructions.modal.md (weight: 10): ##### How to Use
  <div class="playground-instructions">
    - Left-click adds blue point (class 1), right-click adds orange point (class 2)
    - Or select a preset dataset from the dropdown
    - Press Train to start neural network training
    ##### Tips: more neurons = more complex boundaries, ReLU faster, Sigmoid smoother, lower LR = more stable
  </div>

- _algorithm.after.md (weight: 85): Explains neural network classification: 2-input layer (x,y), configurable hidden layers, 2-output layer (softmax). Training: forward pass, cross-entropy loss, backpropagation, gradient descent. Visual: points, background decision boundary, loss curve.

Architecture (single file default.js):
  IIFE, imports: panic from '/_lib/panic_v3.js'

  Configuration state:
    hiddenLayers=[4,4], activationFn='relu', learningRate=0.03
    isTraining=false, epoch=0, lossHistory=[], MAX_LOSS_HISTORY=200
    dataPoints=[], canvas, ctx, network, animationId
    BOUNDARY_RESOLUTION=40

  Cached colors: surface('#ffffff'), class1('#3498db'), class2('#e67e22'),
    boundary1('rgba(52, 152, 219, 0.3)'), boundary2('rgba(230, 126, 34, 0.3)')

  DenseLayer class:
    constructor(inputSize, outputSize): Xavier initialization scale=sqrt(2/(in+out)), random weights +-scale, biases=0. Cache fields: lastInput, lastOutput, lastPreActivation, weightGrads, biasGrads.
    forward(input, activation): matrix-vector multiply + bias, stores lastPreActivation, applies activate(), stores lastOutput. Returns output array.
    activate(x, activation): sigmoid (clamped -500..500), tanh, relu (default).
    activateDerivative(x, activation): sigmoid s*(1-s), tanh 1-t^2, relu x>0?1:0.
    backward(outputGrad, activation): computes preActGrad = outputGrad * activateDerivative(preActivation). Computes weightGrads = preActGrad * lastInput. biasGrads = preActGrad. Returns inputGrad for previous layer.
    updateWeights(lr): biases -= lr * biasGrads, weights -= lr * weightGrads.
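A minimal sketch of the forward half of this contract, assuming the internals described above (the real `default.js` is not reproduced here):

```javascript
// DenseLayer: fully connected layer with cached activations for backprop.
class DenseLayer {
  constructor(inputSize, outputSize) {
    const scale = Math.sqrt(2 / (inputSize + outputSize)); // Xavier-style scale
    this.weights = Array.from({ length: outputSize }, () =>
      Array.from({ length: inputSize }, () => (Math.random() * 2 - 1) * scale));
    this.biases = new Array(outputSize).fill(0);
  }

  activate(x, activation) {
    if (activation === 'sigmoid') {
      const z = Math.max(-500, Math.min(500, x)); // clamp against overflow
      return 1 / (1 + Math.exp(-z));
    }
    if (activation === 'tanh') return Math.tanh(x);
    if (activation === 'none') return x;          // raw logits (output layer)
    return Math.max(0, x);                        // relu (default)
  }

  forward(input, activation) {
    this.lastInput = input;                       // cached for backward()
    this.lastPreActivation = this.weights.map((row, k) =>
      row.reduce((s, w, j) => s + w * input[j], this.biases[k]));
    this.lastOutput = this.lastPreActivation.map(z => this.activate(z, activation));
    return this.lastOutput;
  }
}
```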

  NeuralNetwork class:
    constructor(layerSizes): creates DenseLayer chain for each consecutive pair in layerSizes.
    forward(input): hidden layers use activationFn, output layer uses 'none' (raw logits). Then numerically stable softmax (max subtraction). Returns probabilities.
    loss(predicted, label): cross-entropy with epsilon=1e-15 clamping. Returns -log(p).
    trainStep(input, label, lr): forward pass, softmax+CE gradient (output - one-hot), manual output layer backward (no activation derivative needed for softmax+CE combined), then hidden layers backward through activation, then updateWeights for all layers. Returns loss.
    trainBatch(data, lr): shuffles data (sort with a random comparator, a biased shuffle; Fisher-Yates would be uniform), calls trainStep per point, returns average loss.
    predict(x, y): calls forward([x, y]).
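The "numerically stable softmax" mentioned in forward() is worth spelling out: exp() overflows for large logits, but softmax(z) equals softmax(z - max(z)), so shifting by the maximum first keeps every exponent at or below zero:

```javascript
// Numerically stable softmax via max subtraction.
function softmax(logits) {
  const m = Math.max(...logits);                 // shift so the largest logit is 0
  const exps = logits.map(z => Math.exp(z - m)); // all exponents <= 0, no overflow
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}
```

A naive `logits.map(Math.exp)` would return `Infinity / Infinity = NaN` for logits around 1000; the shifted version stays finite.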

  Dataset generators (7 presets, all return {x, y, label} arrays, coordinates in 0-1 range):
    generateXOR(n=25): 4 quadrants, class 0 in Q1+Q3, class 1 in Q2+Q4. noise=0.15. 4*n points.
    generateSpiral(n=50): 2 interleaved spirals, class 0 and class 1 rotated 180deg. r grows with angle t. noise=0.03. 2*n points.
    generateCircles(n=50): inner circle r=0.15 (class 0), outer circle r=0.35 (class 1). noise=0.03. 2*n points.
    generateMoons(n=50): upper moon class 0 (centered 0.4, radius 0.28, angle 0..PI), lower moon class 1 (centered 0.6, radius 0.28, angle PI..2PI, offset y=0.45). noise=0.12. 2*n points.
    generateGaussian(n=60): two Gaussian blobs with Box-Muller sampling. Cluster 1 at (0.35,0.6), cluster 2 at (0.55,0.45). stdDev=0.12. 2*n points.
    generateCheckerboard(n=15): 3x3 grid, label = (row+col)%2. cellSize=0.28, offset=0.18. noise=0.02. 9*n points.
    generateRing(n=50): 3 regions: inner (r<0.12, class 0), ring (r=0.25..0.37, class 1), outer (r=0.42..0.48, class 0). noise=0.03. 3*n points.
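As one concrete example, generateXOR above could look like the following. The quadrant centers and the uniform-noise shape are assumptions; the spec only fixes the labeling, the noise amplitude, and the 4*n count:

```javascript
// XOR dataset: one noisy cluster per quadrant, labeled by the quadrant parity.
function generateXOR(n = 25) {
  const noise = 0.15;
  const points = [];
  // Quadrants around (0.5, 0.5): Q1 and Q3 are class 0, Q2 and Q4 are class 1.
  const quadrants = [
    { cx: 0.75, cy: 0.75, label: 0 }, // Q1
    { cx: 0.25, cy: 0.75, label: 1 }, // Q2
    { cx: 0.25, cy: 0.25, label: 0 }, // Q3
    { cx: 0.75, cy: 0.25, label: 1 }, // Q4
  ];
  for (const q of quadrants) {
    for (let i = 0; i < n; i++) {
      points.push({
        x: q.cx + (Math.random() * 2 - 1) * noise,
        y: q.cy + (Math.random() * 2 - 1) * noise,
        label: q.label,
      });
    }
  }
  return points;
}
```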

  Rendering:
    drawDecisionBoundary(): samples network predictions on BOUNDARY_RESOLUTION(40) grid. Each cell colored blue or orange with alpha based on confidence (0.1 + abs(p-0.5)*2 * 0.3). Canvas resolution 800x800.
    drawPoints(): circles with radius max(5, min(8, w/80)). Class 0 = #3498db, class 1 = #e67e22. White border 2px.
    drawLossCurve(): 150x80 graph in top-right corner. Black semi-transparent background rgba(0,0,0,0.7). Green #2ecc71 curve. Shows "Loss: X.XXXX" and "Epoch: N" labels.
    drawStatus(): "TRAINING" indicator top-left when active.
    render(): clear with surface color, drawDecisionBoundary, drawPoints, drawLossCurve, drawStatus.
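The confidence-to-alpha mapping used by drawDecisionBoundary is small enough to state directly. It is a pure function of the class probability p, taken straight from the formula above:

```javascript
// Per-cell alpha: baseline 0.1, plus up to 0.3 more as the probability
// moves away from the 0.5 decision line (|p - 0.5| * 2 spans 0..1).
function boundaryAlpha(p) {
  return 0.1 + Math.abs(p - 0.5) * 2 * 0.3;
}
```

Cells right on the boundary (p near 0.5) stay faint at alpha 0.1, while confidently classified regions approach alpha 0.4, which is what makes the boundary visually sharpen as training converges.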

  Training loop:
    trainEpoch(): calls network.trainBatch(dataPoints, learningRate), increments epoch, pushes to lossHistory (max MAX_LOSS_HISTORY, shifts oldest).
    trainingLoop(): if not isTraining return. Trains epochsPerFrame=5 epochs per frame, renders, requestAnimationFrame.
    startTraining(): requires >= 2 points. Sets isTraining=true, updates button state, starts trainingLoop.
    stopTraining(): sets isTraining=false, cancels animationFrame, updates button state, renders.
    toggleTraining(): toggles start/stop.
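The lossHistory bookkeeping in trainEpoch amounts to a fixed-size sliding window. A sketch of that behavior (names follow the spec; the exact code is an assumption):

```javascript
// Keep at most MAX_LOSS_HISTORY entries, dropping the oldest once full.
const MAX_LOSS_HISTORY = 200;

function recordLoss(lossHistory, loss) {
  lossHistory.push(loss);
  if (lossHistory.length > MAX_LOSS_HISTORY) lossHistory.shift();
}
```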

  Network management:
    createNetwork(): architecture = [2, ...hiddenLayers, 2]. Resets epoch and lossHistory.
    resetNetwork(): stops training if active, calls createNetwork, renders.
    clearPoints(): stops training, clears dataPoints/epoch/lossHistory, renders.
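How the "4-4" select value becomes the [2, ...hiddenLayers, 2] architecture can be sketched as two small helpers (hypothetical names; the spec only fixes the dash-separated format and the input/output sizes):

```javascript
// "8-4" -> [8, 4]: one entry per hidden layer, parsed from the select value.
function parseHiddenLayers(value) {
  return value.split('-').map(Number);
}

// Full layer-size list: 2 inputs (x, y), the hidden layers, 2 class outputs.
function buildArchitecture(hiddenLayers) {
  return [2, ...hiddenLayers, 2];
}
```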

  Event handlers:
    handleCanvasClick(e): getBoundingClientRect, scale by canvas/rect ratio, normalize to 0-1. y = 1 - py/height. Left click (button 0) = label 0, right click (button 2) = label 1.
    handleContextMenu(e): preventDefault (for right-click).
    loadDataset(name): stops training, switch on name (xor/spiral/circles/moons/gaussian/checkerboard/ring), calls resetNetwork + render.
    updateHiddenLayers(value): parses "4-4" string, calls resetNetwork.
    updateActivation(value): sets activationFn, calls resetNetwork.
    updateLearningRate(value): sets learningRate.
    updateButtonState(): toggles .is-running class on btn-train parent (.control-buttons).
    updateColors(): caches --background-color-surface.
    initCanvas(): sets canvas 800x800 (no HiDPI scaling).
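The coordinate math in handleCanvasClick can be isolated as a pure function: scale the client position by the canvas/CSS-rect ratio, normalize to 0-1, and flip y so data-space y grows upward while screen y grows downward. The function name is hypothetical; the formulas follow the spec above:

```javascript
// Map a mouse event's client coordinates into the 0-1 data space.
function clientToData(clientX, clientY, rect, canvasWidth, canvasHeight) {
  const px = (clientX - rect.left) * (canvasWidth / rect.width);   // canvas pixels
  const py = (clientY - rect.top) * (canvasHeight / rect.height);
  return { x: px / canvasWidth, y: 1 - py / canvasHeight };        // y flipped
}
```

The scale factor matters because the canvas backing store is fixed at 800x800 while its CSS size varies with the layout; without it, clicks on a resized canvas would land in the wrong place.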

  Initialization:
    init(): gets playground-canvas, 2d context. Canvas events: click, contextmenu, mousedown (button===2 for right-click). Buttons: btn-train (toggleTraining), btn-reset (resetNetwork), btn-clear (clearPoints). Selects: dataset (loadDataset), hidden-layers (updateHiddenLayers), activation (updateActivation). Range: learning-rate (updateLearningRate). Theme: prefers-color-scheme change + theme-changed event -> updateColors + render. Creates initial network, renders.
    Auto-init: readyState check, DOMContentLoaded fallback.

SCSS file (default.scss):
  $breakpoint-mobile: 768px

  #playground-canvas: 100% width/height, cursor crosshair, background var(--background-color-surface)

  .playground-controls: flex column, gap 1rem

  .control-group: flex column, gap 0.5rem
    label: 0.75rem, weight 600, uppercase, letter-spacing 0.05em, var(--text-color-muted)
    select, input[type="range"]: 100% width, 0.5rem padding, 0.875rem font, surface bg, surface border, 4px radius, pointer cursor. Focus: primary border. Range: padding 0, height 24px.

  .control-buttons: flex column, gap 0.5rem
    .is-start: display block (visible)
    .is-pause: display none (hidden)
    &.is-running: .is-start none, .is-pause block

  .lr-display: flex, space-between, 0.75rem, muted color

  .playground-stats: flex column, gap 0.5rem
    .stat-row: flex, space-between, 0.875rem
      .stat-label: muted color
      .stat-value: weight 600, monospace, surface text

  .playground-instructions: 0.875rem, line-height 1.6, surface text
    ul: padding-left 1.25rem, li margin-bottom 0.5rem
    .key: inline-block, padding, 0.75rem monospace, surface bg, 3px radius
    .class-1: #3498db, weight 600
    .class-2: #e67e22, weight 600

  @media mobile: .control-buttons flex row wrap, .button flex 1 1 auto, min-width 100px

Page entirely generated and maintained by AI, without human intervention.

Instructions
How to Use

Add data points by clicking on the canvas:

  • Left-click adds a blue point (class 1)
  • Right-click adds an orange point (class 2)

Or select a preset dataset from the dropdown.

Press Train to start the neural network training. The background colors show the decision boundary as it evolves.

Tips
  • More hidden neurons = more complex boundaries
  • ReLU learns faster, Sigmoid is smoother
  • Lower learning rate = more stable training