A DEEP ATTENTIVE MULTIMODAL LEARNING APPROACH FOR DISASTER IDENTIFICATION FROM SOCIAL MEDIA POSTS

Authors

  • Mrs C. Chaitanya
  • Shravya
  • Maghana

Keywords:

CNN, LSTM, RNN, Twitter, social media

Abstract

Twitter and other microblogging platforms have become indispensable for disseminating critical information, especially in the aftermath of both natural and man-made disasters. To relay critical information such as deaths, damage to facilities, and the urgent needs of affected people, users often attach multimedia content such as photographs and/or videos to their posts. Humanitarian organisations can greatly benefit from this data when planning an adequate and timely response. Because extracting useful information from massive volumes of messages is difficult, an automated method is needed to sort social media content into actionable and non-actionable disaster-related material. Although numerous studies have shown the effectiveness of integrating text and image content for disaster recognition, previous work mostly examined textual methods and/or used standard recurrent neural networks (RNNs) or convolutional neural networks (CNNs), whose performance can degrade on long input sequences. Using a combination of visual and textual information, this article presents a multimodal disaster detection system that identifies disaster-related tweets by fusing salient word features with visual features. A pretrained convolutional neural network (e.g., ResNet50) is used for visual feature extraction, while a bidirectional long short-term memory (BiLSTM) network coupled with an attention mechanism is employed for textual feature extraction. A feature-fusion technique and a softmax classifier are then used to combine the visual and textual features. The results demonstrate that the proposed multimodal system outperforms current multimodal and unimodal baselines by around 1% and 7%, respectively.
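The fusion pipeline the abstract describes can be sketched in a few lines: an attention mechanism pools the per-token BiLSTM states into one text vector, which is concatenated with a pooled visual feature vector (e.g., from ResNet50's penultimate layer) and passed to a softmax classifier. The sketch below is a minimal NumPy illustration; all dimensions, the random stand-in features, and the additive attention form are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative dimensions (assumed, not from the paper):
# T tokens, H-dim BiLSTM states, V-dim visual features, C classes
T, H, V, C = 20, 128, 2048, 2

# Random stand-ins for learned features
bilstm_states = rng.normal(size=(T, H))   # one hidden state per token
visual_feats  = rng.normal(size=V)        # e.g. ResNet50 pooled features

# Attention over token states: score each state, normalise, weighted sum
w_att  = rng.normal(size=H)
scores = bilstm_states @ w_att            # (T,) one score per token
alpha  = softmax(scores)                  # attention weights, sum to 1
text_feats = alpha @ bilstm_states        # (H,) attention-pooled text vector

# Feature fusion by concatenation, then softmax classification
fused = np.concatenate([text_feats, visual_feats])   # (H + V,)
W_cls = rng.normal(size=(C, H + V)) * 0.01
probs = softmax(W_cls @ fused)            # per-class probabilities
```

In a trained model `w_att` and `W_cls` would be learned jointly with the BiLSTM and CNN; concatenation is only one of several possible fusion strategies.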

Published

2025-11-22

How to Cite

A DEEP ATTENTIVE MULTIMODAL LEARNING APPROACH FOR DISASTER IDENTIFICATION FROM SOCIAL MEDIA POSTS. (2025). Excerpta Medica Archives Journal: Transaction B, 5(2), 18-26. https://stanfordgroup.org/index.php/EMAJ-B/article/view/106