Enhancing Aspect-Based Sentiment Analysis with Dynamic Few-Shot Prompting for Large Language Models
Received: 7 October 2025 | Revised: 10 November 2025 | Accepted: 21 November 2025 | Online: 9 February 2026
Corresponding author: Mohammed Ziaulla
Abstract
Large Language Models (LLMs) have shown great promise, but their effectiveness in few-shot settings is often limited by static prompting strategies, in which a fixed set of examples may lack contextual relevance for diverse test cases. To address this limitation, this paper introduces a dynamic few-shot prompting methodology for Aspect-Based Sentiment Analysis (ABSA) built on Google's Gemini LLM. For each test instance, our approach dynamically selects the most semantically pertinent examples from the training corpus by computing the cosine similarity between sentence embeddings, ensuring that the LLM receives tailored, contextually rich guidance for every prediction. We evaluated the methodology on the SemEval-2014 benchmark datasets for the laptop and restaurant domains. The results demonstrate state-of-the-art performance, with F1-scores of 87.3% and 90.0%, respectively, significantly surpassing static few-shot prompting and other established baselines. These findings underscore the critical role of example pertinence in few-shot learning and show that dynamic, context-aware prompting is a highly effective strategy for unlocking the full potential of LLMs on specialized Natural Language Processing (NLP) tasks without extensive model fine-tuning.
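To make the selection step concrete, the following is a minimal sketch of the dynamic example-selection loop the abstract describes. It is an illustration rather than the authors' implementation: the all-MiniLM-L6-v2 encoder, the top_k value, the toy training triples, and the prompt template are all assumptions, and the returned prompt would still need to be sent to Gemini (or any other LLM) to obtain a prediction.

```python
# Sketch of dynamic few-shot example selection for ABSA.
# Assumptions (not from the paper): the all-MiniLM-L6-v2 encoder,
# top_k, the toy training data, and the prompt template below.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical labelled training corpus: (sentence, aspect, polarity).
train = [
    ("The battery life is outstanding.", "battery life", "positive"),
    ("The screen is dim and washed out.", "screen", "negative"),
    ("It boots quickly but runs hot.", "boot time", "positive"),
]
# Embed all training sentences once, up front.
train_emb = encoder.encode([sentence for sentence, _, _ in train])

def build_prompt(test_sentence: str, top_k: int = 2) -> str:
    """Select the top_k most similar training examples and build a prompt."""
    test_emb = encoder.encode([test_sentence])
    sims = cosine_similarity(test_emb, train_emb)[0]
    best = sims.argsort()[::-1][:top_k]  # indices of the most similar examples
    shots = "\n\n".join(
        f'Sentence: "{train[i][0]}"\nAspect: {train[i][1]}\nSentiment: {train[i][2]}'
        for i in best
    )
    return (
        "Classify the sentiment of the aspect as positive, negative, or neutral.\n\n"
        f"{shots}\n\n"
        f'Sentence: "{test_sentence}"\nAspect:'
    )

print(build_prompt("The keyboard feels great but the battery dies fast."))
```

Because retrieval is per-instance, each test sentence arrives with demonstrations drawn from its own semantic neighborhood, which is the mechanism the abstract credits for the gain over a fixed example set.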
Keywords:
Aspect-Based Sentiment Analysis (ABSA), Large Language Models (LLMs), few-shot learning, dynamic prompting, semantic similarity, prompt engineering, Gemini LLM
References
K. Schouten and F. Frasincar, "Survey on Aspect-Level Sentiment Analysis," IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 3, pp. 813–830, Mar. 2016. DOI: https://doi.org/10.1109/TKDE.2015.2485209
M. Pontiki et al., "SemEval-2016 Task 5: Aspect Based Sentiment Analysis," in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), San Diego, California, 2016, pp. 19–30. DOI: https://doi.org/10.18653/v1/S16-1002
A. Chowdhery et al., "PaLM: Scaling Language Modeling with Pathways," Journal of Machine Learning Research, vol. 24, no. 1, Jan. 2023, Art. no. 240.
R. Anil et al., "Gemini: A Family of Highly Capable Multimodal Models," arXiv, May 09, 2025.
T. B. Brown et al., "Language models are few-shot learners," in Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2020, pp. 1877–1901.
M. Pontiki, D. Galanis, J. Pavlopoulos, H. Papageorgiou, I. Androutsopoulos, and S. Manandhar, "SemEval-2014 Task 4: Aspect Based Sentiment Analysis," in Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 2014, pp. 27–35. DOI: https://doi.org/10.3115/v1/S14-2004
W. Medhat, A. Hassan, and H. Korashy, "Sentiment analysis algorithms and applications: A survey," Ain Shams Engineering Journal, vol. 5, no. 4, pp. 1093–1113, Dec. 2014. DOI: https://doi.org/10.1016/j.asej.2014.04.011
J. Mir, A. Mahmood, and S. Khatoon, "Aspect Based Classification Model for Social Reviews," Engineering, Technology & Applied Science Research, vol. 7, no. 6, pp. 2296–2302, Dec. 2017. DOI: https://doi.org/10.48084/etasr.1578
M. Hu and B. Liu, "Mining and summarizing customer reviews," in Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 2004, pp. 168–177. DOI: https://doi.org/10.1145/1014052.1014073
S. Kiritchenko, X. Zhu, C. Cherry, and S. Mohammad, "NRC-Canada-2014: Detecting Aspects and Sentiment in Customer Reviews," in Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 2014, pp. 437–442. DOI: https://doi.org/10.3115/v1/S14-2076
L. Jiang, M. Yu, M. Zhou, X. Liu, and T. Zhao, "Target-dependent Twitter Sentiment Classification," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA, 2011, pp. 151–160.
P. Chen, Z. Sun, L. Bing, and W. Yang, "Recurrent Attention Network on Memory for Aspect Sentiment Analysis," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 2017, pp. 452–461. DOI: https://doi.org/10.18653/v1/D17-1047
Y. Wang, M. Huang, X. Zhu, and L. Zhao, "Attention-based LSTM for Aspect-level Sentiment Classification," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 2016, pp. 606–615. DOI: https://doi.org/10.18653/v1/D16-1058
D. Tang, B. Qin, X. Feng, and T. Liu, "Effective LSTMs for Target-Dependent Sentiment Classification," in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, 2016, pp. 3298–3307.
Y. Ma, H. Peng, T. Khan, E. Cambria, and A. Hussain, "Sentic LSTM: a Hybrid Network for Targeted Aspect-Based Sentiment Analysis," Cognitive Computation, vol. 10, no. 4, pp. 639–650, Aug. 2018. DOI: https://doi.org/10.1007/s12559-018-9549-x
H. Xu, B. Liu, L. Shu, and P. Yu, "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2019, pp. 2324–2335.
C. Sun, L. Huang, and X. Qiu, "Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2019, pp. 380–385.
J. Snell, K. Swersky, and R. Zemel, "Prototypical networks for few-shot learning," in Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 4080–4090.
Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, "Generalizing from a Few Examples: A Survey on Few-shot Learning," ACM Computing Surveys, vol. 53, no. 3, June 2020, Art. no. 63. DOI: https://doi.org/10.1145/3386252
C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in Proceedings of the 34th International Conference on Machine Learning - Volume 70, Sydney, Australia, 2017, pp. 1126–1135.
S. Ruder, "Neural Transfer Learning for Natural Language Processing," Ph.D. dissertation, School of Engineering and Informatics, College of Engineering and Informatics, National University of Ireland, Galway, Ireland, 2019.
T. Gao, A. Fisch, and D. Chen, "Making Pre-trained Language Models Better Few-shot Learners," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 2021, pp. 3816–3830. DOI: https://doi.org/10.18653/v1/2021.acl-long.295
B. Lester, R. Al-Rfou, and N. Constant, "The Power of Scale for Parameter-Efficient Prompt Tuning," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, 2021, pp. 3045–3059. DOI: https://doi.org/10.18653/v1/2021.emnlp-main.243
H. W. Chung et al., "Scaling instruction-finetuned language models," Journal of Machine Learning Research, vol. 25, no. 1, Jan. 2024, Art. no. 70.
X. L. Li and P. Liang, "Prefix-Tuning: Optimizing Continuous Prompts for Generation," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 2021, pp. 4582–4597. DOI: https://doi.org/10.18653/v1/2021.acl-long.353
Z. Cui, X. Shi, and Y. Chen, "Sentiment analysis via integrating distributed representations of variable-length word sequence," Neurocomputing, vol. 187, pp. 126–132, Apr. 2016. DOI: https://doi.org/10.1016/j.neucom.2015.07.129
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language Models are Unsupervised Multitask Learners," OpenAI blog, vol. 1, no. 8, 2019, Art. no. 9.
S. Narayan, C. Gardent, S. B. Cohen, and A. Shimorina, "Split and Rephrase," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 2017, pp. 606–616. DOI: https://doi.org/10.18653/v1/D17-1064
Z. Li, Y. Wei, Y. Zhang, X. Zhang, and X. Li, "Exploiting Coarse-to-Fine Task Transfer for Aspect-Level Sentiment Classification," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 1, pp. 4253–4260, July 2019. DOI: https://doi.org/10.1609/aaai.v33i01.33014253
J. Wei et al., "Chain-of-thought prompting elicits reasoning in large language models," in Proceedings of the 36th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 2022, pp. 24824–24837.
H. Zhang, Y.-N. Cheah, O. M. Alyasiri, and J. An, "Exploring aspect-based sentiment quadruple extraction with implicit aspects, opinions, and ChatGPT: a comprehensive survey," Artificial Intelligence Review, vol. 57, no. 2, Jan. 2024, Art. no. 17. DOI: https://doi.org/10.1007/s10462-023-10633-x
H. Wu, D. Yang, P. Liu, and X. Li, "Chain of Thought Guided Few-Shot Fine-Tuning of LLMs for Multimodal Aspect-Based Sentiment Classification," in 31st International Conference on Multimedia Modeling, Nara, Japan, 2025, pp. 182–194. DOI: https://doi.org/10.1007/978-981-96-2054-8_14
M. Radi, N. Omar, and W. Kaur, "Syntactic-Guided Chain of Thought for Iterative Implicit and Explicit Target Detection in Aspect-Based Sentiment Analysis," IEEE Access, vol. 13, pp. 84738–84751, 2025. DOI: https://doi.org/10.1109/ACCESS.2025.3568695
License
Copyright (c) 2025 Mohammed Ziaulla, Arun Biradar

This work is licensed under a Creative Commons Attribution 4.0 International License.
