
OpenAlex, named after the Library of Alexandria, is an open-access bibliographic catalogue of scientific papers, authors, and institutions. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens the citing-article listing for that article. Finally, basic pagination options appear at the bottom of the page.
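Listings like this one can also be retrieved programmatically from the OpenAlex API, which exposes citing articles through the `cites:` filter on its works endpoint. The sketch below builds such a query URL; the work ID `W4387145944` is a hypothetical placeholder — look up the real ID for an article first (e.g. via `https://api.openalex.org/works/doi:<DOI>`).

```python
import urllib.parse

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build an OpenAlex query URL listing articles that cite `work_id`.

    Pagination mirrors this page: with 25 results per page, page 3
    covers citing articles 51-75.
    """
    params = {
        "filter": f"cites:{work_id}",   # works whose references include work_id
        "page": str(page),
        "per-page": str(per_page),
    }
    return "https://api.openalex.org/works?" + urllib.parse.urlencode(params)

# Hypothetical work ID used purely for illustration:
url = citing_works_url("W4387145944", page=3)
```

Fetching `url` (e.g. with `urllib.request` or `requests`) returns JSON with a `results` array of citing works and a `meta.count` total, which is what drives the "Showing X-Y of Z" line below.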
Requested Article:
Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects
Shiqing Zhang, Yijiao Yang, Chen Chen, et al.
Expert Systems with Applications (2023) Vol. 237, pp. 121692-121692
Closed Access | Times Cited: 73
Showing 51-75 of 73 citing articles:
Optimizing Emotional Insight through Unimodal and Multimodal Long Short-term Memory Models
Hemin Ibrahim, Chu Kiong Loo, Shreeyash Y. Geda, et al.
ARO-The Scientific Journal of Koya University (2024) Vol. 12, Iss. 1, pp. 154-160
Open Access
Contrastive Learning Joint Regularization for Pathological Image Classification with Noisy Labels
Wenping Guo, Gang Han, Yaling Mo, et al.
Electronics (2024) Vol. 13, Iss. 13, pp. 2456-2456
Open Access
Analysis of Human Emotion using Deep Learning
Santosh Kumar, Smriti Kumari Gupta, Yogesh Kumar
(2024), pp. 499-503
Closed Access
Feature Fusion Approach for Emotion Classification in EEG Signals
Yahya Alqahtani
Advances in intelligent systems and computing (2024), pp. 227-233
Closed Access
Multi-task disagreement-reducing multimodal sentiment fusion network
Zijun Wang, Jiang Nai-cheng, Chao Xinyue, et al.
Image and Vision Computing (2024) Vol. 149, pp. 105158-105158
Closed Access
Analyzing Recorded Video to Evaluate How Engaged and Emotional Students Are in Remote Learning Environments
Benyoussef Abdellaoui, Ahmed Remaida, Zineb Sabri, et al.
(2024), pp. 1-7
Closed Access
PENGARUH MEDIA AUDIO VISUAL TERHADAP KEMAMPUAN MENYIMAK BAHASA INGGRIS SISWA KELAS IV SEKOLAH DASAR [The Effect of Audio-Visual Media on the English Listening Skills of Fourth-Grade Elementary School Students]
Dadan Setiawan, Ghina Nur’aini, Sakirah, et al.
Jurnal Lensa Pendas (2024) Vol. 9, Iss. 2, pp. 177-184
Open Access
Cascaded Encoder-Decoder Reconstruction Network with Gated Mechanism for Multimodal Emotion Recognition under Missing Modalities
Linghui Sun, Xudong Li, Jingzhi Zhang, et al.
2022 International Joint Conference on Neural Networks (IJCNN) (2024) Vol. 9, pp. 1-10
Closed Access
Improving Speech Emotion Recognition through Hierarchical Classification and Text Integration for Enhanced Emotional Analysis and Contextual Understanding
Nawal Alqurashi, Yuhua Li, Kirill Sidorov
2022 International Joint Conference on Neural Networks (IJCNN) (2024), pp. 1-8
Closed Access
Weiwei Yu, Siong Yuen Kok, Gautam Srivastava
Expert Systems (2024)
Closed Access
Counterfactual discriminative micro-expression recognition
Yong Li, Menglin Liu, Lingjie Lao, et al.
Visual Intelligence (2024) Vol. 2, Iss. 1
Open Access
Multimodal Seed Data Augmentation for Low-Resource Audio Latin Cuengh Language
Lanlan Jiang, Xingguo Qin, Jingwei Zhang, et al.
Applied Sciences (2024) Vol. 14, Iss. 20, pp. 9533-9533
Open Access
Enhancing Multimodal Emotional Information Extraction in Film and Television through Adaptive Feature Fusion with DenseNet, Transformer, and 3D CNN Models
Shilei Liang
Applied Artificial Intelligence (2024) Vol. 38, Iss. 1
Open Access
Emotion-Recognition System for Smart Environments Using Acoustic Information (ERSSE)
Gabriela Santiago, José Aguilar, Rodrigo García
Information (2024) Vol. 15, Iss. 11, pp. 677-677
Open Access
RDA-MTE: an innovative model for emotion recognition in sports behavior decision-making
Shuling Zhang
Frontiers in Neuroscience (2024) Vol. 18
Open Access
Integrating gating and learned queries in audiovisual emotion recognition
Zaifang Zhang, Qing Guo, Shunlu Lu, et al.
Multimedia Systems (2024) Vol. 30, Iss. 6
Closed Access
A low heterogeneity missing modality recovery learning for speech-visual emotion recognition
Guanghui Chen, Lele Chen, Shuang Jiao, et al.
Expert Systems with Applications (2024) Vol. 266, pp. 126070-126070
Closed Access
A bidirectional cross-modal transformer representation learning model for EEG-fNIRS multimodal affective BCI
Xiaopeng Si, Shuai Zhang, Zhuobin Yang, et al.
Expert Systems with Applications (2024) Vol. 266, pp. 126081-126081
Closed Access
Speech Emotion Recognition Using Multi-Scale Global–Local Representation Learning with Feature Pyramid Network
Yuhua Wang, Jianxing Huang, Zhengdao Zhao, et al.
Applied Sciences (2024) Vol. 14, Iss. 24, pp. 11494-11494
Open Access
Personalized emotion analysis based on fuzzy multi-modal transformer model
Jianbang Liu, Mei Choo Ang, Jun Kit Chaw, et al.
Applied Intelligence (2024) Vol. 55, Iss. 3
Closed Access
FP-KDNet: Facial Perception and Knowledge Distillation Network for Emotion Recognition in Conversation
Chuangxin Cai, Xianxuan Lin, Jing Zhang, et al.
(2024), pp. 1-9
Closed Access
The Application of Emotion Recognition Based on Deep Learning in Photography Works
Adzrool Idzwan bin Ismail, Shihui Wang
(2024), pp. 279-284
Closed Access
EMOLIPS: Towards Reliable Emotional Speech Lip-Reading
Dmitry Ryumin, Elena Ryumina, Denis Ivanko
Mathematics (2023) Vol. 11, Iss. 23, pp. 4787-4787
Open Access | Times Cited: 1
Pose estimation-based visual perception system for analyzing fish swimming
Xin Wu, Jipeng Huang, Yonghui Wang, et al.
bioRxiv (Cold Spring Harbor Laboratory) (2022)
Open Access | Times Cited: 2