OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Symbolic Music Generation with Transformer-GANs
Aashiq Muhamed, Liang Li, Xingjian Shi, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2021) Vol. 35, Iss. 1, pp. 408-417
Open Access | Times Cited: 40

Showing 26-40 of 40 citing articles:

Harmonic Alchemy: Exploring Musical Creation through GANs
Asst. Prof Tabassum Khan, Aditi Sharma, Ayush Parate, et al.
International Journal of Advanced Research in Science Communication and Technology (2024), pp. 237-247
Open Access

Symbolic Music Generation from Graph-Learning-based Preference Modeling and Textual Queries
Xichu Ma, Yuchen Wang, Ye Wang
IEEE Transactions on Multimedia (2024) Vol. 26, pp. 10545-10558
Closed Access

Real-Time Emotion-Based Piano Music Generation Using Generative Adversarial Network (GAN)
Lijun Zheng, Chenglong Li
IEEE Access (2024) Vol. 12, pp. 87489-87500
Open Access

MELFuSION: Synthesizing Music from Image and Language Cues Using Diffusion Models
Sanjoy Chowdhury, Sayan Nag, K J Joseph, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 26816-26825
Closed Access

Generating High-Quality Symbolic Music Using Fine-Grained Discriminators
Z.H. Zhang, Liang Li, Jiehua Zhang, et al.
Lecture notes in computer science (2024), pp. 332-344
Closed Access

Affective Neural Responses Sonified through Labeled Correlation Alignment
Andrés Marino Álvarez-Meza, Héctor Cardona, Mauricio Orozco‐Alzate, et al.
Sensors (2023) Vol. 23, Iss. 12, pp. 5574-5574
Open Access | Times Cited: 1

EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation
Hsiao-Tzu Hung, Joann Ching, SeungHeon Doh, et al.
Zenodo (CERN European Organization for Nuclear Research) (2021)
Open Access | Times Cited: 2

Rethinking Golf Swing Classification: From A Frequency Domain View
Zhaoyang He, Zhuoming Zhu, Libin Jiao, et al.
Procedia Computer Science (2022) Vol. 202, pp. 252-259
Open Access | Times Cited: 1

Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
Hao‐Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1

Everybody Compose: Deep Beats To Music
Conghao Shen, Violet Z. Yao, Y.L. Liu
(2023), pp. 353-357
Open Access

Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
Jaeyong Kang, Soujanya Poria, Dorien Herremans
arXiv (Cornell University) (2023)
Open Access

MAGI-NET: Masked Global Information Network for Symbolic Music Generation
Dawei Wang, Pengfei Li, Jingcheng Wu
(2023), pp. 787-794
Closed Access

Page 2
