Generating Melodies from Melodic Outlines: Towards an Improvisation Support System for Non-musicians

Authors

  • Tetsuro Kitahara Nihon University

DOI:

https://doi.org/10.52731/liir.v003.081

Keywords:

convolutional neural network, improvisation, melodic outline, melody generation

Abstract

One promising approach to supporting non-musicians' music improvisation is to let them input the coarse shape of the melodies they want to play. Drawing a melodic outline has therefore been adopted as the input method of existing improvisation support systems. However, generating melodies from melodic outlines with deep neural networks has not been fully explored. In this paper, we report an attempt to generate melodies from melodic outlines using a convolutional neural network. A given melodic outline is reduced with two convolution layers and then converted to a sequence of notes with two deconvolution layers. Objective and subjective evaluations suggest that our model, given sufficient filter width and channels, can generate melodies of moderate quality, close to that of human melodies.
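The two-convolution/two-deconvolution pipeline described in the abstract can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the outline is assumed to be a pitch contour sampled at fixed time steps, and the filter width, channel count, and pitch-vocabulary size are placeholder values, not the paper's settings.

```python
import torch
import torch.nn as nn

class OutlineToMelody(nn.Module):
    """Sketch: reduce a melodic outline with two 1-D convolution layers,
    then expand it to a note sequence with two transposed-convolution
    ("deconvolution") layers. All hyperparameters are illustrative."""
    def __init__(self, num_pitches=49, width=5, channels=32):
        super().__init__()
        pad = width // 2
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, width, stride=2, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, width, stride=2, padding=pad), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, width, stride=2,
                               padding=pad, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(channels, num_pitches, width, stride=2,
                               padding=pad, output_padding=1),
        )

    def forward(self, outline):            # outline: (batch, 1, time)
        z = self.encoder(outline)          # reduced representation
        return self.decoder(z)             # (batch, num_pitches, time) logits

outline = torch.randn(1, 1, 64)            # a smooth pitch contour, 64 time steps
logits = OutlineToMelody()(outline)        # per-step pitch logits
notes = logits.argmax(dim=1)               # one pitch index per time step
```

With stride-2 convolutions, each encoder layer halves the temporal resolution, so the decoder's transposed layers restore the original length; taking the argmax over the pitch axis yields a note sequence aligned with the input outline.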

References

Katsuhisa Ishida, Tetsuro Kitahara, and Masayuki Takeda. ism: Improvisation supporting system based on melody correction. In Proceedings of the International Conference on New Interfaces for Musical Expression, pages 177–180, June 2004.

Homei Miyashita and Kazushi Nishimoto. Theremoscore: A new-type musical score with temperature sensation. In Proceedings of the International Conference on New Interfaces for Musical Expression, pages –107, 2004.

Jan Buchholz, Eric Lee, Jonathan Klein, and Jan Borchers. coJIVE: A system to support collaborative jazz improvisation. Technical Report AIB-2007-04, Aachener Informatik-Berichte, RWTH Aachen, Department of Computer Science, http://www.informatik.rwth-aachen.de/go/id/lolj/lidx/1/file/47944, 2007.

Tetsuro Kitahara, Sergio Giraldo, and Rafael Ramírez. JamSketch: Improvisation support system with GA-based melody creation from user's drawing. In Proceedings of the 2017 International Symposium on Computer Music Multidisciplinary Research, pages 352–363, Matosinhos, Portugal, 2017.

Nicholas Trieu and Robert M. Keller. JazzGAN: Improvising with generative adversarial networks. In Proceedings of the 2018 Workshop on Musical Metacreation (MUME 2018), 2018.

Vincenzo Madaghiele, Pasquale Lisena, and Raphael Troncy. MINGUS: Melodic improvisation neural generator using seq2seq. In Proceedings of the 22nd International Society for Music Information Retrieval Conference (ISMIR 2021), pages 412–419, 2021.

Shunit Haviv Hakimi, Nadav Bhonker, and Ran El-Yaniv. BebopNet: Deep neural models for personalized jazz improvisations. In Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR 2020), pages 828–836, 2020.

Shih-Lun Wu and Yi-Hsuan Yang. The Jazz Transformer on the front line: Exploring the shortcomings of AI-composed music through quantitative measures. In Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR 2020), 2020.

Li-Chia Yang, Szu-Yu Chou, and Yi-Hsuan Yang. MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. In Proceedings of the International Society for Music Information Retrieval Conference, pages 324–331, 2017.

Kosuke Nakamura, Takashi Nose, Yuya Chiba, and Akinori Ito. A symbolic-level melody completion based on a convolutional neural network with generative adversarial learning. Journal of Information Processing, 28:248–257, 2020.

Yongjie Huang, Xiaofeng Huang, and Qiakai Cai. Music generation based on convolution-LSTM. Computer and Information Science, 11(3):50–56, 2018.

Published

2023-02-17