Replication of Daniel Graves’ “What Lies Beneath Zero: Censoring, Demand Estimation, and Hidden Beliefs”

2025

Replication of the empirical results from Graves’ job market paper, together with the portfolio construction from Hidden Beliefs

Results from a test run. Portfolio returns sorted on the Hidden Beliefs Index (HBI), 2023Q1–2024Q4. Stocks are assigned to size deciles and HBI quintiles, yielding 50 equal-weighted portfolios. Returns are computed using HBI measures lagged one to three quarters and averaged across lags. Results report annualized four-factor alphas.

Project Summary: This project replicates the core empirical pipeline of Graves (2025), which estimates institutional investors’ unobserved demand for stocks they could have held but chose not to. I construct the Hidden Beliefs Index (HBI) using CRSP, Compustat, and 13F holdings data, and form characteristic-sorted portfolios based on these inferred beliefs. I then evaluate portfolio performance using standard asset-pricing factor models. The results support the paper’s central finding that information embedded in institutional non-holdings is not fully incorporated into prices, giving rise to statistically significant abnormal returns.
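The double sort described above can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: the data are simulated, the HBI values are random placeholders, and the lag-averaging and four-factor alpha regression steps are omitted. It shows only the mechanics of assigning stocks to size deciles and HBI quintiles and computing equal-weighted returns for the resulting 50 portfolios.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical universe: market cap, HBI signal, next-quarter return.
# In the actual pipeline these come from CRSP/Compustat/13F data.
stocks = [
    {"cap": random.lognormvariate(10, 1),
     "hbi": random.gauss(0, 1),
     "ret": random.gauss(0.02, 0.10)}
    for _ in range(500)
]

def bucket(values, n):
    """Assign each value to one of n quantile buckets (0..n-1) by rank."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, i in enumerate(ranked):
        out[i] = min(rank * n // len(values), n - 1)
    return out

size_decile = bucket([s["cap"] for s in stocks], 10)   # 10 size buckets
hbi_quintile = bucket([s["hbi"] for s in stocks], 5)   # 5 HBI buckets

# Equal-weighted return of each of the 10 x 5 = 50 portfolios.
portfolios = {}
for s, d, q in zip(stocks, size_decile, hbi_quintile):
    portfolios.setdefault((d, q), []).append(s["ret"])
port_ret = {key: mean(rets) for key, rets in portfolios.items()}

# High-minus-low HBI spread within each size decile.
spreads = [port_ret.get((d, 4), 0.0) - port_ret.get((d, 0), 0.0)
           for d in range(10)]
print(f"average high-minus-low HBI spread: {mean(spreads):.4f}")
```

In the full replication, the signal would be the HBI lagged one to three quarters and averaged across lags, and the portfolio return series would then be regressed on the four factors to obtain annualized alphas.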

A Study on the Political Nature of Conversational LLMs Using Item Response Theory

2025

under the guidance of Professor Andrew B. Hall (Stanford GSB)

Abstract: As large language models (LLMs) become increasingly integrated into political discourse, understanding their ideological leanings and response tendencies is crucial. This study applies a two-step item response theory (IRT) model to compare the likelihood that LLMs select supportive responses to binary questions from the Cooperative Election Study (CES) with that of the human respondents polled in the study. Analyzing models from major AI companies, I position them on the political spectrum relative to human respondents and find that they align most closely with individuals on the far left. I also observe that LLMs tend to adapt their political sentiment to mirror user input. These findings suggest that politically biased AI models can exacerbate polarization: by reinforcing users’ preexisting beliefs, they can create echo chambers that push users toward more extreme positions, while eroding trust in models among those with opposing views.
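The second step of a pipeline like this can be sketched with a standard two-parameter logistic (2PL) IRT model. The code below is a hedged illustration, not the study's code: the item parameters and response vector are invented, and a grid-search maximum likelihood stands in for whatever estimator the paper uses. It shows how, with item parameters held fixed (calibrated on the human CES sample in step one), a respondent, human or LLM, is placed on the latent scale from its binary responses.

```python
import math

def p_support(theta, a, b):
    """2PL IRT: probability of a supportive response given latent
    position theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a binary response vector under the 2PL model."""
    ll = 0.0
    for (a, b), y in zip(items, responses):
        p = p_support(theta, a, b)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

def estimate_theta(items, responses):
    """Step two: maximize the likelihood over a grid of theta values,
    holding the (pre-calibrated) item parameters fixed."""
    grid = [x / 100.0 for x in range(-400, 401)]
    return max(grid, key=lambda t: log_likelihood(t, items, responses))

# Hypothetical item parameters (a, b) and one respondent's answers.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, -1.2), (0.9, 1.5)]
responses = [1, 1, 0, 1, 0]
theta_hat = estimate_theta(items, responses)
print(f"estimated latent position: {theta_hat:.2f}")
```

Running the same estimator on an LLM's answers and on each human respondent's answers yields comparable positions on a common scale, which is what allows the models to be placed relative to the human distribution.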

Project Summary: I model the individual ability of every soccer/football player in Spain’s LaLiga. I also argue that a common modern methodology for the holistic evaluation of soccer/football players contains a critical flaw, and I propose an alternative. Attached are an evaluation of every player and a research paper detailing the methodology.