Update "Performance-and-Profiling"

John McCardle 2026-02-07 22:33:59 +00:00
commit 9c742d4f72

Performance monitoring and optimization infrastructure for McRogueFace. Press F3 […]

- [#104](../issues/104) - Basic Profiling/Metrics (Closed - Implemented)
- [#148](../issues/148) - Dirty Flag RenderTexture Caching (Closed - Implemented)
- [#123](../issues/123) - Chunk-based Grid Rendering (Closed - Implemented)
- [#115](../issues/115) - SpatialHash Implementation (Open)
- [#113](../issues/113) - Batch Operations for Grid (Open)

**Key Files:**
- `src/Profiler.h` - ScopedTimer RAII helper

## Benchmark API

The benchmark API captures per-frame timing data to JSON files. C++ handles all timing; Python processes results afterward.

### Basic Usage

```python
import mcrfpy

mcrfpy.start_benchmark()

# ... run the scene you want to measure ...

# Stop and get the output filename
filename = mcrfpy.end_benchmark()
print(f"Benchmark saved to: {filename}")
```

### Adding Log Messages

```python
mcrfpy.log_benchmark("Combat started")

filename = mcrfpy.end_benchmark()
```
Log messages appear in the `logs` array of each frame in the output JSON.

### Headless Mode Note

In `--headless` mode with `step()`, the benchmark API warns that step-based simulation bypasses the game loop. For headless performance measurement, use Python's `time` module:
```python
import time
start = time.perf_counter()
# ... operation to measure ...
elapsed = time.perf_counter() - start
print(f"Operation took {elapsed*1000:.2f}ms")
```
The benchmark API works best with the normal game loop (non-headless mode).
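
If the same measurement recurs in several places, the timing pattern above can be wrapped in a context manager. A minimal sketch in plain Python (the `stopwatch` helper is illustrative, not part of the mcrfpy API):

```python
import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label):
    """Time a block with time.perf_counter() and print the elapsed time."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{label}: {elapsed_ms:.2f}ms")

# Hypothetical workload standing in for an operation under test:
with stopwatch("build lookup table"):
    data = [i * i for i in range(1000)]
```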

### Output Format

The JSON file contains per-frame data:

```json
{
  "frames": [
    {
      "frame_number": 1,
      "frame_time_ms": 16.7,
      "entity_render_time_ms": 2.1,
      "python_time_ms": 1.8,
      "logs": ["Player spawned"]
    }
  ],
  "summary": {
    "total_frames": 1000,
    "avg_frame_time_ms": 16.5
  }
}
```

### Processing Results

```python
import json

def analyze_benchmark(filename):
    with open(filename) as f:
        data = json.load(f)

    slow_frames = [fr for fr in data["frames"] if fr["frame_time_ms"] > 16.67]

    print(f"Slow frames (>16.67ms): {len(slow_frames)}")
    print(f"Average: {data['summary']['avg_frame_time_ms']:.2f}ms")

    for frame in slow_frames[:5]:
        print(f"  Frame {frame['frame_number']}: {frame['frame_time_ms']:.1f}ms")
        if frame.get("logs"):
            print(f"    {frame['logs']}")
```
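
Averages hide stutter; tail percentiles of the frame times often tell more. A small nearest-rank percentile helper over the same per-frame records (pure Python; `frame_time_ms` is the field used above, the helper itself is hypothetical):

```python
def frame_time_percentile(frames, pct):
    """Nearest-rank percentile of 'frame_time_ms' across frame records."""
    times = sorted(f["frame_time_ms"] for f in frames)
    if not times:
        raise ValueError("no frames recorded")
    rank = max(1, -(-pct * len(times) // 100))  # ceil(pct * n / 100)
    return times[rank - 1]

frames = [{"frame_time_ms": t} for t in (12.0, 14.0, 16.0, 33.0)]
print(frame_time_percentile(frames, 50))   # 14.0
print(frame_time_percentile(frames, 95))   # 33.0
```

A p95 well above the average is the usual signature of periodic spikes (GC, chunk rebuilds) rather than uniformly slow frames.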

- Per-frame counts:
  - Grid cells rendered
  - Entities rendered (visible/total)

**Implementation:** `src/ProfilerOverlay.cpp`

### Implemented Optimizations

**Chunk-based Rendering** ([#123](../issues/123)):
- Large grids divided into chunks
- Only visible chunks processed
- 1000x1000+ grids render efficiently
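
The culling test behind chunk-based rendering is plain rectangle arithmetic. A sketch of the idea (the 16-cell chunk size and `visible_chunks` helper are illustrative; McRogueFace's actual implementation is in C++):

```python
def visible_chunks(view_x, view_y, view_w, view_h, chunk_size=16):
    """Return (cx, cy) indices of chunks overlapping a view rectangle in cells."""
    first_cx = view_x // chunk_size
    first_cy = view_y // chunk_size
    last_cx = (view_x + view_w - 1) // chunk_size
    last_cy = (view_y + view_h - 1) // chunk_size
    return [(cx, cy)
            for cy in range(first_cy, last_cy + 1)
            for cx in range(first_cx, last_cx + 1)]

# A 40x30-cell view into a huge grid touches only 9 chunks, regardless of grid size:
print(len(visible_chunks(100, 100, 40, 30)))
```

This is why render cost scales with the view size, not the grid size.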

### Current Bottlenecks

**Entity Spatial Queries** - O(n) iteration for large counts:
- Use `grid.entities_in_radius()` for proximity queries
- SpatialHash planned in [#115](../issues/115)
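
The SpatialHash planned in [#115](../issues/115) can be sketched in a few lines: bucket entities by coarse grid cell so a radius query only scans nearby buckets instead of every entity. A pure-Python illustration, not the planned C++ implementation:

```python
import math
from collections import defaultdict

class SpatialHash:
    def __init__(self, cell_size=8):
        self.cell_size = cell_size
        self.buckets = defaultdict(set)

    def _cell(self, x, y):
        return (int(x) // self.cell_size, int(y) // self.cell_size)

    def insert(self, entity, x, y):
        self.buckets[self._cell(x, y)].add((entity, x, y))

    def query_radius(self, x, y, r):
        """Entities within r of (x, y); only buckets near the point are scanned."""
        c = self.cell_size
        results = []
        for cx in range(int(x - r) // c, int(x + r) // c + 1):
            for cy in range(int(y - r) // c, int(y + r) // c + 1):
                for entity, ex, ey in self.buckets[(cx, cy)]:
                    if math.hypot(ex - x, ey - y) <= r:
                        results.append(entity)
        return results

grid_hash = SpatialHash(cell_size=8)
grid_hash.insert("goblin", 10, 10)
grid_hash.insert("dragon", 100, 100)
print(grid_hash.query_radius(12, 12, 5))  # only the goblin's bucket is scanned
```

Keeping buckets current as entities move is the hard part in practice; this sketch covers only insertion and lookup.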

**Bulk Grid Updates** - Python/C++ boundary:
- Many individual `layer.set()` calls are slower than batch operations
- Use `layer.fill()` for uniform values
- Batch operations planned in [#113](../issues/113)
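
Until [#113](../issues/113) lands, the cheapest win is to stay on the C++ side: one `layer.fill()` replaces thousands of `layer.set()` calls. The batching principle, shown with a plain-Python stand-in for a layer (`FakeLayer` is purely illustrative, not a mcrfpy type):

```python
class FakeLayer:
    """Stand-in for a grid layer: per-cell set() vs a single fill()."""
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.cells = [[0] * w for _ in range(h)]

    def set(self, x, y, value):       # one call (boundary crossing) per cell
        self.cells[y][x] = value

    def fill(self, value):            # one call for the whole layer
        self.cells = [[value] * self.w for _ in range(self.h)]

a, b = FakeLayer(100, 100), FakeLayer(100, 100)
for y in range(100):                  # 10,000 individual calls...
    for x in range(100):
        a.set(x, y, 7)
b.fill(7)                             # ...versus one
assert a.cells == b.cells
```

The results are identical; only the number of crossings differs, and that is where the time goes.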

---

3. **Analyze**: Process JSON to find patterns in slow frames
4. **Optimize**: Make targeted changes
5. **Verify**: Re-run benchmark, compare results
6. **Iterate**: Repeat until acceptable performance
See [[Performance-Optimization-Workflow]] for the full methodology.
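
The Verify step is easier to trust when the comparison is scripted. A minimal before/after check over two benchmark files (it reads the `summary.avg_frame_time_ms` field from the output format; `compare_runs` itself is a hypothetical helper, not part of mcrfpy):

```python
import json

def compare_runs(before_file, after_file):
    """Print and return the percent change in average frame time."""
    summaries = []
    for path in (before_file, after_file):
        with open(path) as f:
            summaries.append(json.load(f)["summary"])
    before, after = (s["avg_frame_time_ms"] for s in summaries)
    change = (after - before) / before * 100
    print(f"avg frame time: {before:.2f}ms -> {after:.2f}ms ({change:+.1f}%)")
    return change
```

A negative change means the optimization helped; rerun a few times to rule out noise before trusting small deltas.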

### Performance Targets

## Related Systems

- [[Grid-Rendering-Pipeline]] - Chunk caching and dirty flags
- [[Entity-Management]] - Entity performance considerations
- [[Writing-Tests]] - Performance test creation