# Benchmarks
This section documents the performance benchmarks comparing PKonfig and Pydantic Settings. The benchmarks and their methodology live in the repository under the `benchmarks/` directory; you can reproduce them locally and regenerate the results with the provided Make target (see the repository `Makefile`). Below is a short description of the scenarios and the latest results.
## pkonfig vs Pydantic Settings Benchmarks
This directory contains benchmarks comparing the performance of pkonfig and Pydantic Settings in various scenarios.
### Overview
The benchmarks compare pkonfig and Pydantic Settings in the following scenarios:

- **Simple Configuration**: basic configuration with a few fields loaded from environment variables
- **Nested Configuration**: hierarchical configuration with nested objects
- **Large Configuration**: configuration with many fields (100) to test scalability
- **Access Performance**: testing the performance of accessing configuration values
- **Dictionary Source**: loading configuration from a dictionary instead of environment variables
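To make the "Simple Configuration" scenario concrete, here is a plain-Python sketch of the work each library automates: reading a handful of typed fields from environment variables and coercing the string values. The `AppConfig` class, `load_from_env` helper, and `APP_` prefix are illustrative, not taken from the benchmark code.

```python
import os
from dataclasses import dataclass


@dataclass
class AppConfig:
    host: str
    port: int
    debug: bool


def load_from_env(prefix: str = "APP_") -> AppConfig:
    """Read typed fields from environment variables, coercing the strings."""
    return AppConfig(
        host=os.environ[prefix + "HOST"],
        port=int(os.environ[prefix + "PORT"]),
        debug=os.environ.get(prefix + "DEBUG", "false").lower() in ("1", "true", "yes"),
    )


# Both pkonfig and Pydantic Settings perform this kind of loading and
# coercion automatically; the benchmark times how fast each does it.
os.environ.update({"APP_HOST": "localhost", "APP_PORT": "8080"})
config = load_from_env()
print(config.port)  # → 8080
```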
### Running the Benchmarks
To run the benchmarks, make sure you have both pkonfig and Pydantic Settings installed:

```shell
pip install pydantic pydantic-settings
```

Then run the benchmark script:

```shell
python benchmarks/benchmark_pkonfig_vs_pydantic.py
```

The script prints the benchmark results to the console and saves them to a JSON file (`benchmark_results.json`).
### Methodology
Each benchmark scenario is run multiple times (1000 iterations for most scenarios, 100 for the large configuration scenario) to get statistically significant results. The following metrics are collected:

- Average execution time
- Median execution time
- Minimum execution time
- Maximum execution time
The benchmarks measure the time it takes to:

- Create a configuration object from various sources
- Access configuration values
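The metric collection described above can be sketched with a small stdlib harness. The `benchmark` helper below is illustrative, not the benchmark script's actual API; it times a callable repeatedly and summarizes the per-call durations in microseconds.

```python
import statistics
import time


def benchmark(fn, iterations: int = 1000) -> dict:
    """Time fn() repeatedly; return avg/median/min/max in microseconds."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1_000_000)
    return {
        "avg": statistics.mean(times),
        "median": statistics.median(times),
        "min": min(times),
        "max": max(times),
    }


# Example: time a trivial dictionary-based "configuration load".
stats = benchmark(lambda: dict(host="localhost", port=8080), iterations=1000)
print(sorted(stats))  # → ['avg', 'max', 'median', 'min']
```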
### Interpreting the Results
The benchmark results show the performance difference between pkonfig and Pydantic Settings in each scenario. A lower execution time indicates better performance.
The summary at the end of the benchmark run shows which library is faster in each scenario and by how much.
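The "times faster" figure is simply the ratio of the slower library's average time to the faster one's. The `speedup` helper below is a sketch of that calculation (not the script's actual code), applied to the Simple Configuration averages from the results table later in this document:

```python
def speedup(time_a: float, time_b: float) -> tuple[str, float]:
    """Return which side is faster ('a' or 'b') and the ratio of averages."""
    if time_a <= time_b:
        return "a", time_b / time_a
    return "b", time_a / time_b


# Simple Configuration averages in microseconds: pkonfig vs Pydantic Settings.
winner, factor = speedup(8.965, 75.632)
print(winner, round(factor, 2))  # → a 8.44
```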
### Notes
- The benchmarks are designed to be fair and representative of real-world usage patterns.
- Both libraries are used according to their recommended practices.
- The environment is reset between benchmark scenarios to ensure clean results.
- The benchmarks focus on performance and do not evaluate other aspects like feature set, API design, or ease of use.
## pkonfig vs Pydantic Settings Benchmark Results
This document presents the results of benchmarking pkonfig against Pydantic Settings in various scenarios.
### Summary
| Scenario | pkonfig (µs) | Pydantic (µs) | Faster | Times Faster |
|---|---|---|---|---|
| Simple Configuration | 8.965 | 75.632 | pkonfig | 8.44x |
| Nested Configuration | 6.741 | 340.237 | pkonfig | 50.48x |
| Large Configuration | 52.084 | 91.105 | pkonfig | 1.75x |
| Access Performance | 1.353 | 0.108 | pydantic | 12.54x |
| Dictionary Source | 10.759 | 277.819 | pkonfig | 25.82x |
### Methodology
These benchmarks run multiple iterations per scenario and report average times. See `benchmarks/README.md` for details and instructions to reproduce them. The raw JSON is saved to `benchmarks/benchmark_results.json`.
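The saved JSON can be inspected with the stdlib `json` module. The excerpt below is a hypothetical sketch of the file's shape (only the average values are taken from the table above; the real file's schema may differ, so inspect it after a benchmark run):

```python
import json

# Hypothetical excerpt mirroring benchmark_results.json; scenario and
# metric names are assumptions, only the averages come from the table.
raw = """
{
  "simple_configuration": {
    "pkonfig": {"avg": 8.965},
    "pydantic": {"avg": 75.632}
  }
}
"""

results = json.loads(raw)
for scenario, libs in results.items():
    faster = min(libs, key=lambda name: libs[name]["avg"])
    print(scenario, faster)  # → simple_configuration pkonfig
```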