Working Paper · 2025 · SSRN · Journal of Finance

VideoConviction: A Multimodal Benchmark for Human Conviction and Stock Market Recommendations

Authors: Michael Galarnyk, Veer Kejriwal, Agam Shah, Yash Bhardwaj, Nicholas Meyer, Anand Krishnan, Sudheer Chava

Abstract

Social media has amplified the reach of financial influencers known as "finfluencers," who share stock recommendations on platforms like YouTube. Understanding their influence requires analyzing multimodal signals such as tone, delivery style, and facial expressions, which extend beyond text-based financial analysis. We introduce VideoConviction, a multimodal dataset with over 6,000 expert annotations, produced through 457 hours of human effort, to benchmark multimodal large language models (MLLMs) and text-based large language models (LLMs) in financial discourse. Our results show that while multimodal inputs improve stock ticker extraction (e.g., extracting Apple's ticker AAPL), both MLLMs and LLMs struggle to distinguish investment actions and conviction (the strength of belief conveyed through confident delivery and detailed reasoning), often misclassifying general commentary as definitive recommendations. While high-conviction recommendations perform better than low-conviction ones, they still underperform the popular S&P 500 index fund. An inverse strategy, betting against finfluencer recommendations, outperforms the S&P 500 by 6.8% in annual returns but carries greater risk (Sharpe ratio of 0.41 vs. 0.65). Our benchmark enables a diverse evaluation of multimodal tasks, comparing model performance on both full video and segmented video inputs, and thereby supports deeper advances in multimodal financial research. Our code, dataset, and evaluation leaderboard are available under the CC BY-NC 4.0 license.
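The return and risk comparison above rests on standard annualization arithmetic. The following is a minimal Python sketch of how an inverse strategy's annualized return and Sharpe ratio could be computed from a series of per-period returns; the function names and the synthetic return series are illustrative assumptions, not the paper's actual methodology or data.

```python
import numpy as np

def annualized_sharpe(period_returns, risk_free_rate=0.0, periods=252):
    """Annualized Sharpe ratio from per-period (e.g., daily) returns."""
    excess = np.asarray(period_returns) - risk_free_rate / periods
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def annualized_return(period_returns, periods=252):
    """Geometric annualized return from per-period returns."""
    r = np.asarray(period_returns)
    total_growth = np.prod(1.0 + r)
    return total_growth ** (periods / len(r)) - 1.0

# Hypothetical illustration: an inverse strategy takes the opposite side of
# each finfluencer recommendation, so its per-period return is the negation
# of the recommendation portfolio's return. The series below is synthetic,
# generated for demonstration only; it is not the paper's data.
rng = np.random.default_rng(0)
reco_returns = rng.normal(-0.0003, 0.015, size=252)
inverse_returns = -reco_returns

print(f"inverse annual return: {annualized_return(inverse_returns):.2%}")
print(f"inverse Sharpe:        {annualized_sharpe(inverse_returns):.2f}")
```

Under this convention, a strategy can beat the index on raw annual return while still posting a lower Sharpe ratio, as the abstract reports (0.41 vs. 0.65), because higher volatility in the denominator offsets the higher mean excess return.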

Keywords

Multimodal, YouTube, Finance, Finfluencer, Benchmarking, Asset Pricing, Large Language Models

Social Finance Tags

#Experimental & Survey-Based Empirical · #Financing and Investment Decisions (Individual) · #Manager & Firm Behavior