Minimal Example
```html
<wally-audio-waveform
  [isStartRecording]="isRecording()"
  [isStopRecording]="!isRecording()">
</wally-audio-waveform>
```

Real-time audio waveform visualizer component with microphone input, FFT frequency analysis, and automatic audio recording. Features a responsive bar count, smooth 60fps animations, and built-in recording functionality with download support.
```bash
npx wally-ui add audio-waveform
```

```ts
import { AudioWaveform } from './components/wally-ui/audio-waveform/audio-waveform';
import { AudioWaveformService } from './components/wally-ui/audio-waveform/audio-waveform.service';

@Component({
  selector: 'app-example',
  imports: [AudioWaveform],
  templateUrl: './example.html'
})
```

Basic waveform visualization without recording timer display.

```html
<wally-audio-waveform
  [isStartRecording]="isRecording()"
  [isStopRecording]="!isRecording()">
</wally-audio-waveform>
```

Waveform visualization with recording timer in MM:SS format.

```html
<wally-audio-waveform
  [isStartRecording]="isRecording()"
  [isStopRecording]="!isRecording()"
  [showTimer]="true">
</wally-audio-waveform>
```

Record audio and download the result as a WebM file.

```html
<div class="flex flex-col gap-4">
  <!-- Waveform Visualizer -->
  <wally-audio-waveform
    [isStartRecording]="isRecording()"
    [isStopRecording]="!isRecording()"
    [showTimer]="true">
  </wally-audio-waveform>
  <!-- Recording Controls -->
  <div class="flex gap-2">
    @if (!isRecording()) {
      <button (click)="startRecording()"
        class="px-4 py-2 bg-red-500 text-white rounded-full">
        Start Recording
      </button>
    } @else {
      <button (click)="stopRecording()"
        class="px-4 py-2 bg-neutral-500 text-white rounded-full">
        Stop Recording
      </button>
    }
  </div>
</div>
```

```ts
export class RecordingComponent {
  isRecording = signal(false);
  startRecording() {
    this.isRecording.set(true);
  }
  stopRecording() {
    this.isRecording.set(false);
  }
}
```

Access the AudioWaveformService to download, retrieve, or clear recorded audio.

```ts
// Access the service to get recorded audio
export class RecordingComponent {
  @ViewChild(AudioWaveform) audioWaveform!: AudioWaveform;
  downloadRecording() {
    const service = this.audioWaveform.audioWaveformService;
    service.downloadRecording('my-recording.webm');
  }
  getRecordedBlob() {
    const service = this.audioWaveform.audioWaveformService;
    const blob = service.recordedAudioBlob();
    return blob;
  }
  clearRecording() {
    const service = this.audioWaveform.audioWaveformService;
    service.clearRecording();
  }
}
```

The component uses the Web Audio API to analyze microphone input in real time with FFT (Fast Fourier Transform) frequency analysis. Under the hood:

1. `getUserMedia()` requests microphone access.
2. An `AudioContext` processes the audio in real time.
3. An `AnalyserNode` performs the FFT analysis (256 samples → 128 frequency bins).
4. `getByteFrequencyData()` returns the frequency values (0-255).
5. The values are normalized to 0-100% for the bar heights.
6. `requestAnimationFrame` drives a smooth 60fps animation.
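A minimal standalone sketch of that pipeline using only standard browser APIs; the function shape, the `render` callback, and the default `barCount` are illustrative assumptions rather than the component's internal code:

```ts
// Illustrative sketch of the analysis pipeline, not the component's source.
async function startWaveform(render: (bars: number[]) => void, barCount = 65) {
  // 1. Request microphone access
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  // 2-3. Process audio in real time and attach an FFT analyser
  const audioContext = new AudioContext();
  const analyser = audioContext.createAnalyser();
  analyser.fftSize = 256;               // 256 samples -> 128 frequency bins
  analyser.smoothingTimeConstant = 0.8; // temporal smoothing of the FFT data
  audioContext.createMediaStreamSource(stream).connect(analyser);

  const frequencies = new Uint8Array(analyser.frequencyBinCount); // 128 bins

  const draw = () => {
    // 4. Read the current frequency values (0-255)
    analyser.getByteFrequencyData(frequencies);

    // 5. Downsample to barCount bars and normalize to 0-100%
    const step = Math.floor(frequencies.length / barCount) || 1;
    const bars = Array.from({ length: barCount }, (_, i) =>
      Math.round((frequencies[i * step] / 255) * 100)
    );
    render(bars);

    // 6. Schedule the next frame for a smooth ~60fps animation
    requestAnimationFrame(draw);
  };
  draw();
}
```

In the component, the equivalent of this `render` step is presumably what feeds the `audioData` signal described in the API table below.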
Service-level configuration (read-only constants for optimal performance):

- `FFT_SIZE = 256`: FFT size for frequency analysis. Higher values provide more frequency detail but use more CPU.
- `BAR_COUNT = 30 (mobile) | 65 (desktop)`: Bar count adjusts automatically based on viewport width for optimal display.
- `SMOOTHING = 0.8`: Smoothing constant for the animation. 0 = no smoothing (choppy), 1 = maximum smoothing (laggy).
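How the responsive bar count could be derived is sketched below; the 768px breakpoint and the helper name are assumptions for illustration, not values documented by the library:

```ts
// Hypothetical sketch: choose the bar count from the viewport width.
// The 768px breakpoint is an assumed value, not taken from the library.
function responsiveBarCount(viewportWidth = window.innerWidth): number {
  return viewportWidth < 768 ? 30 : 65;
}

// Recompute on resize so the bar density keeps matching the viewport.
let barCount = responsiveBarCount();
window.addEventListener('resize', () => {
  barCount = responsiveBarCount();
});
```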
| Property | Type | Default | Description |
|---|---|---|---|
| **Input Properties** | | | |
| isStartRecording | `InputSignal<boolean>` | false | Signal to start recording. Set to true to begin capturing audio |
| isStopRecording | `InputSignal<boolean>` | false | Signal to stop recording. Set to true to stop capturing audio |
| showTimer | `InputSignal<boolean>` | false | Controls visibility of the recording timer (MM:SS format) |
| **Service Signals (Read-Only)** | | | |
| isRecording | `WritableSignal<boolean>` | - | Current recording state (true = recording, false = stopped) |
| audioData | `WritableSignal<number[]>` | - | Array of normalized bar heights (0-100%) updated at 60fps |
| recordedAudioBlob | `WritableSignal<Blob \| null>` | - | Recorded audio as a Blob (audio/webm format). Available after stopping recording |
| recordedAudioUrl | `WritableSignal<string \| null>` | - | Temporary URL for the recorded audio. Use for playback or download |
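For playback, the recorded URL signal can be handed to an `<audio>` element. A sketch, assuming the same `ViewChild`/`audioWaveformService` access pattern as the earlier example; the selector and inline template here are illustrative:

```ts
import { Component, ViewChild } from '@angular/core';
import { AudioWaveform } from './components/wally-ui/audio-waveform/audio-waveform';

// Sketch: play back a finished recording via the recordedAudioUrl signal.
@Component({
  selector: 'app-playback',
  imports: [AudioWaveform],
  template: `
    <wally-audio-waveform />
    <button (click)="playRecording()">Play recording</button>
  `
})
export class PlaybackComponent {
  @ViewChild(AudioWaveform) audioWaveform!: AudioWaveform;

  playRecording() {
    // recordedAudioUrl() stays null until a recording has been stopped
    const url = this.audioWaveform.audioWaveformService.recordedAudioUrl();
    if (url) {
      void new Audio(url).play(); // temporary object URL (audio/webm)
    }
  }
}
```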
Public methods available on AudioWaveformService for managing recorded audio.
- `downloadRecording(filename?: string): void`: Downloads the recorded audio file. Optionally specify a custom filename.
- `clearRecording(): void`: Clears the recorded audio and revokes the URL to free memory. Call it after downloading or discarding a recording.
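The download/cleanup behavior described above maps onto the standard object URL pattern. The sketch below shows that pattern in isolation and is not the service's actual implementation:

```ts
// Illustrative object URL pattern, not the service's source code.
function downloadBlob(blob: Blob, filename = 'recording.webm'): string {
  const url = URL.createObjectURL(blob); // temporary URL backed by the blob
  const anchor = document.createElement('a');
  anchor.href = url;
  anchor.download = filename;            // clicking triggers a file download
  anchor.click();
  return url;                            // keep it until playback/download is done
}

function releaseRecording(url: string): void {
  URL.revokeObjectURL(url);              // frees the memory held by the object URL
}
```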
Automatic speech-to-text transcription will be added in a future release. The preview below shows the planned API.
```html
<!-- COMING SOON: Audio Transcription -->
<!-- The component will support automatic transcription in a future release -->
<wally-audio-waveform
  [isStartRecording]="isRecording()"
  [isStopRecording]="!isRecording()"
  [enableTranscription]="true"
  (transcriptionComplete)="onTranscription($event)">
</wally-audio-waveform>
```

Requires modern browsers with support for the Web Audio API, MediaStream API, and MediaRecorder API. Supported: Chrome 60+, Firefox 55+, Safari 14+, Edge 79+.
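Support for these APIs can be feature-detected before rendering the component; a minimal sketch (the helper name is illustrative):

```ts
// Sketch: feature-detect the browser APIs the component relies on.
function supportsAudioWaveform(): boolean {
  return (
    typeof AudioContext !== 'undefined' &&   // Web Audio API
    typeof MediaRecorder !== 'undefined' &&  // MediaRecorder API
    !!navigator.mediaDevices?.getUserMedia   // MediaStream API (getUserMedia)
  );
}
```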
The component follows accessibility best practices for audio recording interfaces.
- Built-in accessibility is included out of the box.
- Timer updates are announced via an `aria-live` region (when `showTimer` is true).
- The waveform visualizer is decorative and hidden from assistive technology (`aria-hidden`).
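Illustrative markup for that pattern, shown as a small standalone Angular component; this is not the component's actual template:

```ts
import { Component, input } from '@angular/core';

// Illustrative pattern only: the timer is announced politely while the
// decorative bars are hidden from assistive technology.
@Component({
  selector: 'app-timer-announcement-demo',
  template: `
    <span role="timer" aria-live="polite">{{ elapsed() }}</span>
    <div aria-hidden="true" class="flex gap-1">
      <!-- decorative waveform bars go here -->
    </div>
  `
})
export class TimerAnnouncementDemo {
  elapsed = input('00:00'); // MM:SS string, e.g. from a recording timer
}
```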