Value at Risk (VaR) Analysis in Python

Compute Value at Risk (VaR), Conditional VaR (CVaR), and maximum drawdown for a stock portfolio using historical simulation.

This page shows an AI data analysis example. A workflow is a list of prompts (instructions) used to complete one analysis task step by step. We send the same prompt list to multiple AI models, then compare their notebook outputs and scores.

What this AI workflow does

This AI Data Analyst workflow downloads 3 years of SPY, QQQ, and GLD price data, computes daily returns, and builds an equal-weight portfolio return series. It estimates 95% and 99% Value at Risk (VaR) using historical simulation and computes 95% Conditional VaR (CVaR/Expected Shortfall). It visualizes the return distribution with VaR and CVaR thresholds and reports maximum drawdown for the portfolio.

Who this example is for

This is for analysts and students who need a reproducible Python notebook for portfolio risk metrics from market data. It helps validate downside risk estimates and communicate them with standard plots and summary statistics.

Expected analysis outcomes

These are the results the AI workflow is expected to generate.

  • Daily equal-weight portfolio return series
  • Historical-simulation VaR estimates at 95% and 99%
  • CVaR (Expected Shortfall) at 95%
  • Maximum drawdown statistic
  • Histogram of returns with VaR/CVaR threshold lines
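
The risk metrics above can be sketched in a few lines. This is a minimal illustration on a synthetic return series standing in for the portfolio (the actual workflow derives returns from yfinance data):

```python
import numpy as np

# Synthetic stand-in for the daily equal-weight portfolio return series
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 750)  # ~3 years of trading days

# Historical-simulation VaR: the loss at the chosen percentile of returns
var_95 = -np.percentile(returns, 5)
var_99 = -np.percentile(returns, 1)

# CVaR / Expected Shortfall: average loss beyond the 95% VaR threshold
cvar_95 = -returns[returns <= -var_95].mean()

# Maximum drawdown from the cumulative wealth curve
wealth = np.cumprod(1 + returns)
drawdown = wealth / np.maximum.accumulate(wealth) - 1
max_dd = drawdown.min()

print(f"VaR 95%: {var_95:.4f}  VaR 99%: {var_99:.4f}  "
      f"CVaR 95%: {cvar_95:.4f}  Max drawdown: {max_dd:.4f}")
```

By construction, CVaR at a given confidence level is at least as large as the corresponding VaR, since it averages only the losses beyond the VaR cutoff.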

Tools and libraries used

Main Python packages and tooling used to run this AI data analysis task.

  • yfinance
  • pandas
  • numpy
  • matplotlib
  • scipy

Prompt sequence

This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.

  1. download 3 years of SPY, QQQ, GLD prices and compute daily portfolio returns assuming equal weights with yfinance
  2. calculate 95% and 99% Value at Risk using historical simulation
  3. calculate Conditional VaR (Expected Shortfall) at 95%
  4. plot the return distribution with VaR and CVaR thresholds marked

Model Comparison

We compared several LLMs on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.

Model              Score   Task   Execution   Output   Reasoning   Reliability   Run
glm-5.1            10/10   2/2    2/2         3/3      2/2         1/1           Open glm-5.1 conversation
gpt-oss:120b       10/10   2/2    2/2         3/3      2/2         1/1           Open gpt-oss:120b conversation
qwen3-coder-next   10/10   2/2    2/2         3/3      2/2         1/1           Open qwen3-coder-next conversation
gemma4:31b         9/10    2/2    2/2         2/3      2/2         1/1           Open gemma4:31b conversation
gpt-5.4            9/10    2/2    2/2         2/3      2/2         1/1           Open gpt-5.4 conversation
qwen3.5:397b       2/10    1/2    0/2         0/3      1/2         0/1           Open qwen3.5:397b conversation

How scoring works

Each run is graded on five dimensions:

  • Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
  • Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
  • Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
  • Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
  • Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.

The final score is the sum of these five dimensions, so the total ranges from 0 to 10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
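
As a worked check of the rubric, the qwen3.5:397b run graded later on this page totals its dimension scores like so:

```python
# Dimension scores from the qwen3.5:397b score cards below
scores = {
    "task_completion": 1,        # out of 2
    "execution_correctness": 0,  # out of 2
    "output_quality": 0,         # out of 3
    "reasoning_quality": 1,      # out of 2
    "reliability": 0,            # out of 1
}
total = sum(scores.values())
print(f"{total}/10")  # 2/10
```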

AI Analysis with qwen3.5:397b

This run uses AI with a step-by-step list of prompts to perform the analysis task "Value at Risk (VaR) Analysis in Python". Below, you can review the full conversation generated with qwen3.5:397b. After the conversation, we show detailed score cards for this run.

download 3 years of SPY, QQQ, GLD prices and compute daily portfolio returns assuming equal weights with yfinance
Thinking

The user wants to download 3 years of price data for SPY, QQQ, and GLD using yfinance, then compute daily portfolio returns with equal weights. This is a straightforward data retrieval and calculation task that I can do in one code block.

Let me write clean code to:

  1. Download 3 years of historical data for all three ETFs
  2. Extract adjusted close prices
  3. Calculate daily returns for each
  4. Compute equal-weighted portfolio returns (1/3 weight each)

I'll do this in one step since it's a simple data retrieval and calculation task.

I'll download 3 years of price data for SPY, QQQ, and GLD, then compute equal-weighted daily portfolio returns.

Notebook code
import yfinance as yf
import pandas as pd

# Download 3 years of historical data for SPY, QQQ, GLD
tickers = ['SPY', 'QQQ', 'GLD']
data = yf.download(tickers, period='3y')

# Recent yfinance versions return a MultiIndex on columns: (Ticker, Attribute)
# We need to select the 'Adj Close' attribute for all tickers.
# Using .loc to safely select the column level avoids KeyError if structure varies slightly.
if isinstance(data.columns, pd.MultiIndex):
    # Select 'Adj Close' from the second level of the MultiIndex
    data = data.loc[:, (slice(None), 'Adj Close')]
    # Flatten columns to just ticker names if needed, or keep MultiIndex for clarity
    # Here we flatten to make subsequent code simpler:
    data.columns = data.columns.get_level_values(0)
else:
    # Fallback for older versions or single ticker
    if 'Adj Close' in data.columns:
        data = data['Adj Close']
    elif all(t in data.columns for t in tickers):
        pass # Assume already correct
    else:
        raise ValueError("Could not locate 'Adj Close' data in the downloaded DataFrame.")

# Calculate daily returns for each ETF
returns = data.pct_change().dropna()

# Compute equal-weighted portfolio returns (1/3 weight each)
weights = [1/3, 1/3, 1/3]
portfolio_returns = (returns * weights).sum(axis=1)

# Create a summary DataFrame
portfolio_df = pd.DataFrame({
    'SPY_Return': returns['SPY'],
    'QQQ_Return': returns['QQQ'],
    'GLD_Return': returns['GLD'],
    'Portfolio_Return': portfolio_returns
})

# Show basic stats
portfolio_df.describe()
[*********************100%***********************]  3 of 3 completed
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexes/base.py:3641, in Index.get_loc(self, key)
   3640 try:
-> 3641     return self._engine.get_loc(casted_key)
   3642 except KeyError as err:

File pandas/_libs/index.pyx:168, in pandas._libs.index.IndexEngine.get_loc()
--> 168 'Could not get source, probably due dynamically evaluated source code.'

File pandas/_libs/index.pyx:197, in pandas._libs.index.IndexEngine.get_loc()
--> 197 'Could not get source, probably due dynamically evaluated source code.'

File pandas/_libs/hashtable_class_helper.pxi:7668, in pandas._libs.hashtable.PyObjectHashTable.get_item()
-> 7668 'Could not get source, probably due dynamically evaluated source code.'

File pandas/_libs/hashtable_class_helper.pxi:7676, in pandas._libs.hashtable.PyObjectHashTable.get_item()
-> 7676 'Could not get source, probably due dynamically evaluated source code.'

KeyError: 'Adj Close'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
Cell In[4], line 13
      9 # We need to select the 'Adj Close' attribute for all tickers.
     10 # Using .loc to safely select the column level avoids KeyError if structure varies slightly.
     11 if isinstance(data.columns, pd.MultiIndex):
     12     # Select 'Adj Close' from the second level of the MultiIndex
---> 13     data = data.loc[:, (slice(None), 'Adj Close')]
     14     # Flatten columns to just ticker names if needed, or keep MultiIndex for clarity
     15     # Here we flatten to make subsequent code simpler:
     16     data.columns = data.columns.get_level_values(0)

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexing.py:1200, in _LocationIndexer.__getitem__(self, key)
   1198     if self._is_scalar_access(key):
   1199         return self.obj._get_value(*key, takeable=self._takeable)
-> 1200     return self._getitem_tuple(key)
   1201 else:
   1202     # we by definition only have the 0th axis
   1203     axis = self.axis or 0

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexing.py:1386, in _LocIndexer._getitem_tuple(self, tup)
   1384 with suppress(IndexingError):
   1385     tup = self._expand_ellipsis(tup)
-> 1386     return self._getitem_lowerdim(tup)
   1388 # no multi-index, so validate all of the indexers
   1389 tup = self._validate_tuple_indexer(tup)

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexing.py:1067, in _LocationIndexer._getitem_lowerdim(self, tup)
   1065 # we may have a nested tuples indexer here
   1066 if self._is_nested_tuple_indexer(tup):
-> 1067     return self._getitem_nested_tuple(tup)
   1069 # we maybe be using a tuple to represent multiple dimensions here
   1070 ax0 = self.obj._get_axis(0)

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexing.py:1172, in _LocationIndexer._getitem_nested_tuple(self, tup)
   1169     axis -= 1
   1170     continue
-> 1172 obj = getattr(obj, self.name)._getitem_axis(key, axis=axis)
   1173 axis -= 1
   1175 # if we have a scalar, we are done

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexing.py:1442, in _LocIndexer._getitem_axis(self, key, axis)
   1440 # nested tuple slicing
   1441 if is_nested_tuple(key, labels):
-> 1442     locs = labels.get_locs(key)
   1443     indexer: list[slice | npt.NDArray[np.intp]] = [slice(None)] * self.ndim
   1444     indexer[axis] = locs

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexes/multi.py:4032, in MultiIndex.get_locs(self, seq)
   4028     continue
   4030 else:
   4031     # a slice or a single label
-> 4032     lvl_indexer = self._get_level_indexer(k, level=i, indexer=indexer)
   4034 # update indexer
   4035 lvl_indexer = _to_bool_indexer(lvl_indexer)

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexes/multi.py:3885, in MultiIndex._get_level_indexer(self, key, level, indexer)
   3882         return slice(i, j, step)
   3884 else:
-> 3885     idx = self._get_loc_single_level_index(level_index, key)
   3887     if level > 0 or self._lexsort_depth == 0:
   3888         # Desired level is not sorted
   3889         if isinstance(idx, slice):
   3890             # test_get_loc_partial_timestamp_multiindex

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexes/multi.py:3458, in MultiIndex._get_loc_single_level_index(self, level_index, key)
   3456     return -1
   3457 else:
-> 3458     return level_index.get_loc(key)

File ~/Library/mljar-studio/jlab_server/lib/python3.12/site-packages/pandas/core/indexes/base.py:3648, in Index.get_loc(self, key)
   3643     if isinstance(casted_key, slice) or (
   3644         isinstance(casted_key, abc.Iterable)
   3645         and any(isinstance(x, slice) for x in casted_key)
   3646     ):
   3647         raise InvalidIndexError(key) from err
-> 3648     raise KeyError(key) from err
   3649 except TypeError:
   3650     # If we have a listlike key, _check_indexing_error will raise
   3651     #  InvalidIndexError. Otherwise we fall through and re-raise
   3652     #  the TypeError.
   3653     self._check_indexing_error(key)

KeyError: 'Adj Close'
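
The KeyError arises because recent yfinance releases default to auto_adjust=True, which drops the 'Adj Close' field and places the price attribute on the first column level rather than the second. A more defensive extraction might look like the sketch below, demonstrated on a synthetic MultiIndex frame rather than a live download (the column layout here is an assumption modeled on the default yfinance output):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a multi-ticker yfinance download with
# auto_adjust=True: level 0 is the price field, level 1 is the ticker,
# and there is no 'Adj Close' (the 'Close' column is already adjusted).
tickers = ["SPY", "QQQ", "GLD"]
fields = ["Open", "High", "Low", "Close", "Volume"]
cols = pd.MultiIndex.from_product([fields, tickers])
data = pd.DataFrame(
    np.random.default_rng(0).random((5, len(cols))), columns=cols
)

# Prefer 'Adj Close' when present; otherwise fall back to 'Close',
# which is the adjusted series under auto_adjust=True.
level0 = data.columns.get_level_values(0)
price_field = "Adj Close" if "Adj Close" in level0 else "Close"
prices = data[price_field]

print(list(prices.columns))  # ['SPY', 'QQQ', 'GLD']
```

Selecting on the first level with `data[price_field]` also avoids the nested-tuple `.loc` indexing that triggered the error in the run above.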

Score cards for qwen3.5:397b for Value at Risk (VaR) Analysis in Python

Total score: 2/10

  • Task Completion: 1/2

    Needs work

    The workflow only attempts the initial data download and equal-weighted return calculation. It does not compute VaR, CVaR, maximum drawdown, or produce the required histogram.

  • Execution Correctness: 0/2

    Needs work

    The provided code fails with a KeyError on 'Adj Close', so it does not run successfully as shown. The notebook output indicates the core data extraction step is broken.

  • Output Quality: 0/3

    Needs work

    Expected outputs are missing: there is no VaR/CVaR result, no drawdown metric, and no plot. The only output is an error traceback.

  • Reasoning Quality: 1/2

    Needs work

    The assistant correctly identifies the need to download data and compute equal-weighted returns, but the reasoning stops far short of the full risk analysis task. It also incorrectly assumes the selected 'Adj Close' structure will work without verifying the actual yfinance output format.

  • Reliability: 0/1

    Needs work

    The workflow is fragile because it makes an unsupported assumption about the downloaded DataFrame structure and immediately errors. It is not robust to common yfinance output variations.

Try MLJAR Studio

Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.

Explore More AI Analysis Examples

Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.