
Research prompt

Title: Reference-guided continuation for a single instrument


1. Context and assumptions

We assume:

  • Architecture A can generate single-instrument clips from text prompts alone.
  • Symbolic and performance layers exist (clean score → humanized performance → audio).
  • Users also want to continue or extend an existing musical idea for the same instrument.

This research focuses on reference-guided continuation:

  • Input: a short reference snippet for the target instrument (symbolic, and optionally audio in later phases), plus optional prompt text.
  • Output: a continuation that:
    • Maintains local musical coherence (key, tempo, feel).
    • Respects the style and basic patterns of the reference (rhythm, density, register).
    • Is not a trivial copy/paste or near-duplicate.

For this research, assume the main practical v0 path uses symbolic reference input (e.g. MIDI). Audio-to-symbolic extraction can be treated as a separate future extension.


2. Objectives

  1. Define what “continuation” means operationally
    • Temporal scope (e.g. 4–16 bars).
    • Degree of adherence to reference style vs freedom to evolve.

  2. Specify input and conditioning formats
    • How symbolic reference segments are represented.
    • How optional text prompts interact with the reference.

  3. Design 1–2 continuation approaches
    • At least one minimal approach that is implementable now.
    • Optionally one more advanced approach (e.g. style embedding, motif-aware model).

  4. Establish constraints for coherence and diversity
    • How to prevent near-duplication or looping artifacts.
    • How to avoid abrupt changes in feel/key/tempo unless requested.

  5. Define evaluation and success criteria
    • Human and automatic methods to assess coherence, style match, and novelty.

  6. Identify the riskiest assumptions and minimal experiments
    • Especially around style conditioning and avoiding overfitting to the reference.

3. Questions to answer

3.1 Product and UX framing

  1. What are the key user flows for continuation?

Examples:
  • User records or imports a short MIDI sketch and asks: “Continue this for 8 bars in the same style.”
  • User selects the last 4 bars of a generated clip and asks: “Give me an alternative continuation that is more energetic.”
  • User gives text prompt + reference: “Make this pattern evolve into something more dramatic over the next 8 bars.”

  2. What controls does the user have during continuation?

Potential controls:
  • Length of continuation (bars/seconds).
  • “Similarity to reference” (low → high).
  • “Energy / density” change (down, same, up).
  • Optional mood change.
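As a rough illustration, these controls could be bundled into a single request object. The sketch below is hypothetical (field names, ranges, and defaults are assumptions, not a finalized API):

```python
# Hypothetical control schema for a continuation request; names and ranges are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ContinuationControls:
    length_bars: int = 8           # how many bars to generate
    similarity: float = 0.7        # 0.0 = evolve freely, 1.0 = stay very close to the reference
    energy_change: int = 0         # -1 = calmer/sparser, 0 = same, +1 = more energetic/denser
    mood: Optional[str] = None     # optional mood change, e.g. "more dramatic"

    def validate(self) -> None:
        assert 1 <= self.length_bars <= 16, "v0 limits continuations to at most 16 bars"
        assert 0.0 <= self.similarity <= 1.0
        assert self.energy_change in (-1, 0, 1)
```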

  3. What constraints must be respected?
    • Keep tempo and meter consistent unless asked otherwise.
    • Maintain key/scale unless the user explicitly asks to modulate.
    • Avoid copying full bars verbatim beyond what is musically natural.

3.2 Representation and segmentation

  1. How is the reference segment represented?
    • Same symbolic format as Architecture A (tokens/notes with bar/beat positions, velocities, etc.); a hypothetical sketch follows this list.
    • Any additional structural tags needed? (e.g. phrase boundaries.)

  2. How is the continuation segment represented?
    • Same representation, but with an explicit boundary between reference and continuation.

  3. How do you segment the reference and continuation?
    • Fixed-length window (e.g. the last N bars as conditioning).
    • Or variable length depending on user selection.

  4. How do you handle partial bars or non-aligned references?
    • Round to the nearest bar?
    • Allow half-bar or beat-level alignment?
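To make these representation questions concrete, one minimal sketch is a bar/beat-anchored note event plus an explicit reference/continuation boundary. Everything below (names, fields, resolution) is a hypothetical stand-in, not the actual Architecture A format:

```python
# Hypothetical symbolic representation for discussion; not the real Architecture A schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NoteEvent:
    bar: int         # bar index from the start of the reference (0-based)
    beat: float      # onset position within the bar, in beats (allows beat- or half-bar alignment)
    pitch: int       # MIDI pitch (0-127)
    velocity: int    # MIDI velocity (1-127)
    duration: float  # duration in beats


@dataclass
class ContinuationSegment:
    tempo_bpm: float
    meter: str = "4/4"
    reference: List[NoteEvent] = field(default_factory=list)     # immutable user-provided part
    continuation: List[NoteEvent] = field(default_factory=list)  # generated part

    def boundary_bar(self) -> int:
        """Bar index at which the continuation starts (the explicit boundary)."""
        return max((n.bar for n in self.reference), default=-1) + 1
```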

3.3 Conditioning on the reference

  1. What information from the reference is used for conditioning?

Candidates:
  • Key/scale and tonal center.
  • Tempo and meter.
  • Register (pitch range).
  • Density (notes per bar).
  • Rhythmic motifs or patterns.
  • Harmonic information (for chords/arpeggios).

  2. What conditioning approaches are possible?

Approach A (minimal):
  • Concatenate reference tokens and have the model autoregressively continue.
  • Possibly clip the context length to the last N bars.
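A minimal sketch of Approach A, assuming an explicit bar-boundary token in the symbolic vocabulary and a generic `sample_next_token` callable (a hypothetical interface, not an existing API):

```python
# Sketch of Approach A: treat the clipped reference as a prompt and continue autoregressively.
# The token conventions and `sample_next_token` interface are assumptions.
from typing import Callable, List

BAR_TOKEN = "<bar>"   # assumed explicit bar-boundary token in the symbolic vocabulary
END_TOKEN = "<end>"


def clip_to_last_n_bars(tokens: List[str], n_bars: int) -> List[str]:
    """Keep only (roughly) the last n_bars of context, using bar-boundary tokens as anchors."""
    bar_positions = [i for i, t in enumerate(tokens) if t == BAR_TOKEN]
    if len(bar_positions) <= n_bars:
        return tokens
    return tokens[bar_positions[-n_bars]:]


def continue_sequence(
    reference_tokens: List[str],
    sample_next_token: Callable[[List[str]], str],
    target_bars: int,
    context_bars: int = 8,
    max_tokens: int = 2048,
) -> List[str]:
    """Autoregressively generate tokens until target_bars new bars have been emitted."""
    context = clip_to_last_n_bars(reference_tokens, context_bars)
    generated: List[str] = []
    new_bars = 0
    while new_bars < target_bars and len(generated) < max_tokens:
        token = sample_next_token(context + generated)
        if token == END_TOKEN:
            break
        generated.append(token)
        if token == BAR_TOKEN:
            new_bars += 1
    return generated
```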

Approach B (structured):
  • Extract features/summary from the reference (e.g. density, contour, rhythm patterns).
  • Condition the continuation model on both the tokens and these features.
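For Approach B, the feature summary could be as simple as the sketch below; the specific features (density, register, mean velocity, a crude tonal-center proxy) are illustrative assumptions:

```python
# Sketch of Approach B feature extraction: summarize the reference into a small feature
# dictionary used as extra conditioning alongside the tokens. Feature choices are assumptions.
from collections import Counter
from typing import Dict, List, Tuple

Note = Tuple[int, float, int, int]  # (bar, beat, midi_pitch, velocity); a minimal stand-in


def reference_features(notes: List[Note], num_bars: int) -> Dict[str, float]:
    if not notes or num_bars <= 0:
        return {}
    pitches = [n[2] for n in notes]
    velocities = [n[3] for n in notes]
    dominant_pc = Counter(p % 12 for p in pitches).most_common(1)[0][0]
    return {
        "density": len(notes) / num_bars,             # notes per bar
        "register_low": float(min(pitches)),          # lowest MIDI pitch used
        "register_high": float(max(pitches)),         # highest MIDI pitch used
        "mean_velocity": sum(velocities) / len(velocities),
        "dominant_pitch_class": float(dominant_pc),   # crude tonal-center proxy
    }
```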

  3. How do we combine text prompts with reference conditioning?
    • Text for high-level intent (“more energetic”, “sparser”, “modulate to G major”).
    • Reference for detailed style (groove, voicing habits).

3.4 Continuation model design

  1. What model architectures are viable for continuation?

Minimal option:
  • Same model family as Architecture A’s symbolic generator, used in “conditional continuation” mode:
    • Input: reference sequence (and optional prompt encoding).
    • Output: new tokens for the continuation.

More advanced option:
  • A model trained explicitly on continuation tasks:
    • Input: (prefix, desired continuation length, control tokens).
    • Output: continuation segment.

  2. How do you enforce:
    • Temporal coherence (no jumps in tempo or meter)?
    • Tonal coherence (no random key jumps unless requested)?
    • Style coherence (feel similar but not identical)?

  3. How do you ensure novelty vs overfitting?
    • Penalize exact repetition of long n-grams from the reference?
    • Use sampling strategies that avoid trivial looping?
    • Encourage controlled variation (e.g. motif transformation)?
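One cheap guard against near-duplication, sketched below under the assumption that both segments are token sequences: measure the longest n-gram the continuation copies verbatim from the reference and reject or re-sample when it exceeds a threshold (the threshold itself would need tuning against listening tests).

```python
# Sketch: detect near-duplication via the longest token n-gram the continuation shares
# with the reference. The 32-token threshold is an assumption, not a validated value.
from typing import Sequence


def longest_shared_ngram(reference: Sequence[str], continuation: Sequence[str]) -> int:
    """Length (in tokens) of the longest contiguous run copied verbatim from the reference."""
    best = 0
    max_n = min(len(reference), len(continuation))
    for n in range(1, max_n + 1):
        ref_ngrams = {tuple(reference[i:i + n]) for i in range(len(reference) - n + 1)}
        found = any(tuple(continuation[i:i + n]) in ref_ngrams
                    for i in range(len(continuation) - n + 1))
        if not found:
            break  # no shared n-gram of this length implies none longer either
        best = n
    return best


def is_near_duplicate(reference: Sequence[str], continuation: Sequence[str],
                      max_copied_tokens: int = 32) -> bool:
    return longest_shared_ngram(reference, continuation) > max_copied_tokens
```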

  4. What is the minimal training setup to get a useful continuation model?
    • Can we train on generic single-instrument data by:
      • Splitting pieces into (prefix, continuation) pairs?
      • Adding random crop positions?
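A minimal data-preparation sketch for this setup, assuming each piece arrives as a list of bars (each bar a list of symbolic tokens); pair counts, crop lengths, and seeding are illustrative choices:

```python
# Sketch: build (prefix, continuation) training pairs from single-instrument pieces by
# cropping at random bar positions. The bar-list input format and ranges are assumptions.
import random
from typing import List, Tuple

Bars = List[List[str]]  # a piece as a list of bars, each bar a list of symbolic tokens


def make_training_pairs(
    piece: Bars,
    pairs_per_piece: int = 4,
    prefix_range: Tuple[int, int] = (4, 16),
    continuation_range: Tuple[int, int] = (4, 16),
    seed: int = 0,
) -> List[Tuple[Bars, Bars]]:
    rng = random.Random(seed)
    pairs: List[Tuple[Bars, Bars]] = []
    for _ in range(pairs_per_piece):
        prefix_len = rng.randint(*prefix_range)
        continuation_len = rng.randint(*continuation_range)
        if len(piece) < prefix_len + continuation_len:
            continue  # piece too short for this crop size
        start = rng.randint(0, len(piece) - prefix_len - continuation_len)
        prefix = piece[start:start + prefix_len]
        continuation = piece[start + prefix_len:start + prefix_len + continuation_len]
        pairs.append((prefix, continuation))
    return pairs
```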

3.5 Integration with performance and rendering

  1. At what stage does continuation operate?
    • Symbolic level: reference symbolic → continuation symbolic.
    • Performance level: apply performance modeling after the full sequence (reference + continuation) is decided.
    • Audio level: the continuation is rendered with the same instrument and performance style.
  2. How do we ensure seamless audio joins?
    • Align on bar boundaries.
    • Use consistent performance parameters across the boundary.
    • Avoid sudden changes in loudness or timbre.
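One common way to avoid clicks and loudness jumps at the boundary (sketched below for mono audio; the fade length and equal-power curve are assumptions, not a decided design) is a short crossfade centered on the join:

```python
# Sketch: equal-power crossfade between the rendered reference audio and the rendered
# continuation at the bar boundary. Sample rate and fade length are assumptions.
import numpy as np


def crossfade_join(reference_audio: np.ndarray, continuation_audio: np.ndarray,
                   sample_rate: int = 44100, fade_seconds: float = 0.02) -> np.ndarray:
    """Join two mono float arrays with a short equal-power crossfade at the boundary."""
    fade_len = int(sample_rate * fade_seconds)
    fade_len = min(fade_len, len(reference_audio), len(continuation_audio))
    if fade_len == 0:
        return np.concatenate([reference_audio, continuation_audio])
    # Equal-power curves keep perceived loudness roughly constant through the overlap.
    t = np.linspace(0.0, np.pi / 2, fade_len)
    fade_out = np.cos(t)
    fade_in = np.sin(t)
    overlap = reference_audio[-fade_len:] * fade_out + continuation_audio[:fade_len] * fade_in
    return np.concatenate([reference_audio[:-fade_len], overlap, continuation_audio[fade_len:]])
```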

  3. How do we handle multiple continuation attempts?
    • Preserve the reference as immutable.
    • Cache different continuation variants (symbolic and/or audio).
    • Support A/B listening in the UX.

3.6 Evaluation

  1. What automatic metrics can approximate good continuation?

Symbolic-level:
  • Key and scale consistency between reference and continuation.
  • Similar density and rhythmic complexity, unless controls say otherwise.
  • Motif similarity measures (e.g. pattern reuse with variation).
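As a starting point, the sketch below computes two of these metrics: pitch-class histogram similarity as a proxy for key/scale consistency, and a density ratio as a proxy for comparable activity. The note format and any acceptance thresholds are assumptions to be validated against human ratings.

```python
# Sketch of two cheap symbolic metrics: pitch-class histogram similarity (key/scale proxy)
# and density ratio (activity proxy). Note format and thresholds are assumptions.
import math
from collections import Counter
from typing import List, Tuple

Note = Tuple[int, float, int, int]  # (bar, beat, midi_pitch, velocity); a minimal stand-in


def pitch_class_similarity(reference: List[Note], continuation: List[Note]) -> float:
    """Cosine similarity of 12-bin pitch-class histograms (1.0 = identical distribution)."""
    def histogram(notes: List[Note]) -> List[float]:
        counts = Counter(n[2] % 12 for n in notes)
        return [float(counts.get(pc, 0)) for pc in range(12)]

    a, b = histogram(reference), histogram(continuation)
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / norm if norm else 0.0


def density_ratio(reference: List[Note], ref_bars: int,
                  continuation: List[Note], cont_bars: int) -> float:
    """Continuation density divided by reference density (close to 1.0 = comparable activity)."""
    ref_density = len(reference) / max(ref_bars, 1)
    cont_density = len(continuation) / max(cont_bars, 1)
    return cont_density / ref_density if ref_density else float("inf")
```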

  2. What human evaluation setup is needed?

Examples:
  • Blind tests where listeners rate:
    • How coherent the continuation feels with the reference.
    • How well it matches a target instruction (e.g. “more energetic”).
  • Comparisons:
    • Continuation vs a naive baseline (e.g. an unrelated generated clip).
    • Multiple continuation options for the same reference.

  3. What is the minimum bar for success?
    • A majority of listeners rate continuations as coherent and stylistically similar.
    • Few cases of abrupt or jarring transitions (quantified via user feedback).

4. Scope and constraints

  • Single instrument (same as Architecture A and the performance module).
  • Primary input modality: symbolic reference (MIDI or internal symbolic representation).
  • Audio reference support can be treated as:
    • Out of scope for v0, or
    • A separate future research track (audio-to-symbolic extraction for this instrument).
  • Continuations limited to short spans (e.g. up to 8–16 bars) for this research.
  • Must preserve:
    • Tempo and meter unless explicitly overridden.
    • Overall key unless explicitly overridden.

5. Artifacts and deliverables

The research should produce:

  1. Continuation problem definition
    • Precise statement of what continuation means in v0.
    • Constraints and UX expectations.

  2. Input/output and conditioning spec
    • How reference, prompt, and control parameters are represented.
    • How they feed into the continuation model.

  3. Model design(s)
    • At least one minimal approach, including:
      • Architecture,
      • Training procedure,
      • Inference strategy.

  4. Integration design
    • Where continuation sits in the full pipeline.
    • How it interfaces with performance modeling and rendering.
    • Caching and UX behavior for multiple continuations.

  5. Evaluation plan
    • Automatic metrics.
    • Human listening test designs.

  6. Risk and experiment plan
    • Key assumptions (e.g. “simple autoregressive continuation with trimming is enough for v0”).
    • Small experiments to test these assumptions before full implementation.

6. Process guidance

  1. Start by defining the user stories and constraints for continuation.
  2. Specify the symbolic representation and segmentation strategy for references.
  3. Design the simplest possible continuation approach consistent with these constraints.
  4. Prototype on existing single-instrument data (prefix → continuation) and evaluate qualitatively.
  5. Iterate on conditioning and sampling to balance coherence and novelty.
  6. Document what works well enough for v0 and what should be deferred.

7. Non-goals

This research does not need to:

  • Handle multi-instrument or full-band continuations.
  • Perform robust audio-to-symbolic transcription for arbitrary user audio.
  • Guarantee global song-level structure beyond the local continuation window.
  • Incorporate advanced user editing tools beyond basic selection and regeneration.