TileLayer (closes #200):
- apply_threshold(source, range, tile): Set tile index where heightmap value is in range
- apply_ranges(source, ranges): Apply multiple tile assignments in one pass
ColorLayer (closes #201):
- apply_threshold(source, range, color): Set fixed color where value is in range
- apply_gradient(source, range, color_low, color_high): Interpolate colors based on value
- apply_ranges(source, ranges): Apply multiple color assignments (fixed or gradient)
All methods return self for chaining. HeightMap size must match layer dimensions.
Later ranges override earlier ones if overlapping. Cells not matching any range are unchanged.
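Illustrative sketch, assuming an existing TileLayer `terrain`, ColorLayer `tint`,
and HeightMap `hmap` of matching size; the RGBA-tuple color form and the tile
indices are placeholders, only the apply_* signatures come from this change:
    # TileLayer: choose tile indices by height band (later ranges win on overlap)
    terrain.apply_threshold(hmap, (0.7, 1.0), 9)        # rock above 0.7
    terrain.apply_ranges(hmap, [
        ((0.0, 0.3), 12),                               # water
        ((0.3, 0.7), 5),                                # grass
    ])
    # ColorLayer: fixed color for low values, gradient elsewhere; calls chain
    tint.apply_threshold(hmap, (0.0, 0.3), (0, 0, 128, 160)) \
        .apply_gradient(hmap, (0.3, 1.0), (40, 120, 40, 160), (200, 200, 200, 160))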
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Rename parameters for clearer semantics:
- dig_hill: depth -> target_height
- dig_bezier: start_depth/end_depth -> start_height/end_height
The libtcod "dig" functions set terrain TO a target height, not
relative to current values. "target_height" makes this clearer.
Also add warnings for likely user errors:
- add_hill/dig_hill/dig_bezier with radius <= 0 (no-op)
- smooth with iterations <= 0 already raises ValueError
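Sketch of the renamed call (the x, y, radius argument order is assumed from
libtcod; only the height keyword names are part of this change):
    # The height value is an absolute target, not an offset from the current value.
    hmap.dig_hill(30, 30, 8.0, target_height=0.2)
    # dig_bezier follows the same pattern with start_height / end_height.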
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add three methods that create NEW HeightMap objects:
- threshold(range): preserve original values where in range, 0.0 elsewhere
- threshold_binary(range, value=1.0): set uniform value where in range
- inverse(): return (1.0 - value) for each cell
These operations are immutable - they preserve the original HeightMap.
Useful for masking operations with Grid.apply_threshold/apply_ranges.
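Sketch, given a HeightMap `hmap` normalized to 0..1:
    land = hmap.threshold((0.5, 1.0))          # keeps original values in range, 0.0 elsewhere
    mask = hmap.threshold_binary((0.5, 1.0))   # uniform value (default 1.0) where in range
    depth = hmap.inverse()                     # 1.0 - value per cell
    # hmap itself is unchanged; each call returns a new HeightMap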
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Position argument flexibility:
- get(), get_interpolated(), get_slope(), get_normal() now accept:
  - Two separate args: hmap.get(5, 5)
  - Tuple: hmap.get((5, 5))
  - List: hmap.get([5, 5])
  - Vector: hmap.get(mcrfpy.Vector(5, 5))
- Uses PyPositionHelper for standardized parsing
Subscript support:
- Add __getitem__ as shorthand for get(): hmap[5, 5] or hmap[(5, 5)]
Range validation:
- count_in_range() now raises ValueError when min > max
- count_in_range() accepts both tuple and list
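Equivalent lookups after this change (sketch, given an existing HeightMap `hmap`):
    h1 = hmap.get(5, 5)                     # two ints
    h2 = hmap.get((5, 5))                   # tuple
    h3 = hmap.get(mcrfpy.Vector(5, 5))      # Vector
    h4 = hmap[5, 5]                         # __getitem__ shorthand for get()
    n = hmap.count_in_range([0.25, 0.75])   # list form also accepted
    # hmap.count_in_range((0.9, 0.1)) now raises ValueError (min > max)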
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add methods to apply HeightMap data to Grid walkable/transparent properties:
- apply_threshold(source, range, walkable, transparent): Apply properties
to cells where HeightMap value is in the specified range
- apply_ranges(source, ranges): Apply multiple threshold rules in one pass
Features:
- Size mismatch between HeightMap and Grid raises ValueError
- Both methods return self for chaining
- Uses dynamic type lookup via module for HeightMap type checking
- First matching range wins in apply_ranges
- Cells not matching any range remain unchanged
- TCOD map is synced after changes for FOV/pathfinding
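Sketch, assuming the keyword form of the signature above and a size-matched
HeightMap `hmap`; the threshold values are placeholders:
    # Mid-range heights become open floor; high values become see-through walls.
    grid.apply_threshold(hmap, (0.2, 0.8), walkable=True, transparent=True) \
        .apply_threshold(hmap, (0.8, 1.0), walkable=False, transparent=True)
    # apply_ranges(source, ranges) batches several such rules in one pass;
    # the exact per-rule layout is not spelled out here.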
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add methods to query HeightMap values and statistics:
- get(pos): Get height value at integer coordinates
- get_interpolated(pos): Get bilinearly interpolated height at float coords
- get_slope(pos): Get slope angle (0 to pi/2) at position
- get_normal(pos, water_level): Get surface normal vector
- min_max(): Get (min, max) tuple of all values
- count_in_range(range): Count cells with values in range
All methods include proper bounds checking and error messages.
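Usage sketch (coordinates and water_level are placeholder values):
    h = hmap.get((10, 10))                      # exact cell value
    hs = hmap.get_interpolated((10.5, 10.25))   # bilinear sample at float coords
    slope = hmap.get_slope((10, 10))            # radians, 0 to pi/2
    normal = hmap.get_normal((10, 10), 0.0)     # surface normal at water_level 0.0
    lo, hi = hmap.min_max()
    wet = hmap.count_in_range((0.0, 0.3))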
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fixes potential integer overflow and invalid input issues:
- Add GRID_MAX constant (8192) to Common.h for global use
- Validate HeightMap dimensions against GRID_MAX to prevent
integer overflow in w*h calculations (65536*65536 = 0)
- Add min > max validation for clamp() and normalize()
- Add unit tests for all new validation cases
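Sketch of the rejected inputs (ValueError for oversized dimensions is an
assumption; min > max rejection on clamp/normalize is per the change above):
    try:
        mcrfpy.HeightMap((100000, 100000))   # exceeds GRID_MAX (8192)
    except ValueError:
        pass
    hmap = mcrfpy.HeightMap((64, 64))
    try:
        hmap.clamp(min=1.0, max=0.0)         # min > max
    except ValueError:
        pass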
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement the foundational HeightMap class for procedural generation:
- HeightMap(size, fill=0.0) constructor with libtcod backend
- Immutable size property after construction
- Scalar operations returning self for method chaining:
  - fill(value), clear()
  - add_constant(value), scale(factor)
  - clamp(min=0.0, max=1.0), normalize(min=0.0, max=1.0)
Includes procedural generation spec document and unit tests.
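Usage sketch (the (width, height) tuple form of `size` is assumed):
    hmap = mcrfpy.HeightMap((128, 128), fill=0.0)
    hmap.add_constant(0.5).scale(1.2).clamp().normalize()   # each call returns self
    print(hmap.size)                                        # read-only after construction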
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
McRogueFace needs to accept callable objects (properties on C++ objects)
and also support subclassing (getattr on user objects). Previously only
direct properties were supported; now shadowing a callback by name lets
custom objects "just work".
- Added CallbackCache struct and is_python_subclass flag to UIDrawable.h
- Created metaclass for tracking class-level callback changes
- Updated all UI type init functions to detect subclasses
- Modified PyScene.cpp event dispatch to try subclass methods
Scene subclasses can now define on_key(self, key, state) methods that
receive keyboard events, matching the existing on_enter, on_exit, and
update lifecycle callbacks.
Changes:
- Rename call_on_keypress to call_on_key (consistent naming with property)
- Add triggerKeyEvent helper in McRFPy_API
- Call triggerKeyEvent from GameEngine when key_callable is not set
- Fix condition to check key_callable.isNone() (not just pointer existence)
- Handle both bound methods and instance-assigned callables
Usage:
    class GameScene(mcrfpy.Scene):
        def on_key(self, key, state):
            if key == "Escape" and state == "end":
                quit_game()
Property assignment (scene.on_key = callable) still works and takes
precedence when key_callable is set via the property setter.
Includes comprehensive test: tests/unit/scene_subclass_on_key_test.py
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Enhanced tp_doc strings for both layer types to include:
- What happens when grid_size=None (inherits from parent Grid)
- That layers are created via Grid.add_layer() rather than directly
- FOV-related methods for ColorLayer
- Tile index -1 meaning no tile/transparent for TileLayer
- fill_rect method documentation
- Comprehensive usage examples
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Major Timer API improvements:
- Add `stopped` flag to Timer C++ class for proper state management
- Add `start()` method to restart stopped timers (preserves callback)
- Add `stop()` method that removes from engine but preserves callback
- Make `active` property read-write (True=start/resume, False=pause)
- Add `start` init parameter (default True) so timers can be created in a stopped state
- Add `mcrfpy.timers` module-level collection (tuple of active timers)
- One-shot timers now set stopped=true instead of clearing callback
- Remove deprecated `setTimer()` and `delTimer()` module functions
Timer callbacks now receive (timer, runtime) instead of just (runtime).
Updated all tests to use new Timer API and callback signature.
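Usage sketch; the Timer(name, callback, interval) constructor order and the
millisecond interval are assumptions, the rest mirrors the changes above:
    def on_tick(timer, runtime):        # new (timer, runtime) signature
        print("tick", runtime)
    t = mcrfpy.Timer("tick", on_tick, 250)
    t.active = False                    # pause
    t.active = True                     # resume
    t.stop()                            # removed from engine, callback preserved
    t.start()                           # restart the stopped timer
    print(len(mcrfpy.timers))           # module-level collection of active timers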
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Demonstrates the object-oriented Scene API as an alternative to the
module-level functions. Key features tested:
- Scene object creation and properties (name, active, children)
- scene.activate() vs mcrfpy.setScene()
- scene.on_key property - can be set on ANY scene, not just current
- Scene visual properties (pos, visible, opacity)
- Subclassing for lifecycle callbacks (on_enter, on_exit, update)
The on_key advantage resolves the confusion around keypressScene(), which
only works on the currently active scene.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace module-level audio functions with proper OOP API:
- mcrfpy.Sound: Wraps sf::SoundBuffer + sf::Sound for short effects
- mcrfpy.Music: Wraps sf::Music for streaming long tracks
- Both support: volume, loop, playing, duration, play/pause/stop
- Music adds position property for seeking
Add mcrfpy.keyboard singleton for real-time modifier state:
- shift, ctrl, alt, system properties (bool, read-only)
- Queries sf::Keyboard::isKeyPressed() directly
Add mcrfpy.__version__ = "1.0.0" for version identity
Remove old audio API entirely (no deprecation - unused in codebase):
- createSoundBuffer, loadMusic, playSound
- setMusicVolume, getMusicVolume, setSoundVolume, getSoundVolume
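Usage sketch; the file-path constructor arguments and seconds for
Music.position are assumptions:
    sfx = mcrfpy.Sound("assets/hit.wav")
    sfx.volume = 80
    sfx.play()
    music = mcrfpy.Music("assets/theme.ogg")   # streamed, not fully loaded
    music.loop = True
    music.play()
    music.position = 12.5                      # seek
    if mcrfpy.keyboard.shift:                  # real-time modifier state
        print("shift held")
    print(mcrfpy.__version__)                  # "1.0.0"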
closes #66, closes #160, closes #164
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add action economy system with free (LOOK, SPEAK) vs turn-ending (GO, WAIT, TAKE) actions
- Implement LOOK action with detailed descriptions for doors, objects, entities, directions
- Add SPEAK/ANNOUNCE speech system with room-wide and proximity-based message delivery
- Create multi-tile pathing with FOV interrupt detection (path cancels when new entity visible)
- Implement TAKE action with adjacency requirement and clear error messages
- Add conversation history and error feedback loop so agents learn from failed actions
- Create structured simulation logging for offline viewer replay
- Document offline viewer requirements in OFFLINE_VIEWER_SPEC.md
- Fix import path in 1_multi_agent_demo.py for standalone execution
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
keypressScene() sets the handler for the CURRENT scene, so we must
call setScene() first to make focus_demo the active scene before
registering the key handler.
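Corrected ordering, sketched (`handle_key` is any (key, state) callable):
    mcrfpy.setScene("focus_demo")       # make it the current scene first
    mcrfpy.keypressScene(handle_key)    # then the handler binds to focus_demo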
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements a comprehensive Python-level focus management system showing:
- FocusManager: central coordinator for keyboard routing, tab cycling, modal stack
- ModifierTracker: workaround for tracking Shift/Ctrl/Alt state (#160)
- FocusableGrid: WASD movement in a grid with player marker
- TextInputWidget: text entry with cursor, backspace, home/end
- MenuIcon: icons that open modal dialogs on Space/Enter
Features demonstrated:
- Click-to-focus on any widget
- Tab/Shift+Tab cycling through focusable widgets
- Visual focus indicators (blue outline)
- Keyboard routing to focused widget
- Modal dialog push/pop stack
- Escape to close modals
Addresses #143
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add AnimationConflictMode enum with three modes:
- REPLACE (default): Complete existing animation and start new one
- QUEUE: Wait for existing animation to complete before starting
- ERROR: Raise RuntimeError if property is already being animated
Changes:
- AnimationManager now tracks property locks per (target, property) pair
- Animation.start() accepts optional conflict_mode parameter
- Queued animations start automatically when property becomes free
- Updated type stubs with ConflictMode type alias
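Illustrative sketch only: the Animation constructor and the Python spelling of
the enum are not shown in this change and are assumed here; the conflict_mode
parameter itself is what this commit adds:
    slide = mcrfpy.Animation("x", 200.0, 1.0)       # hypothetical constructor form
    slide.start(sprite)                             # default: REPLACE
    follow = mcrfpy.Animation("x", 400.0, 1.0)
    follow.start(sprite, conflict_mode=mcrfpy.AnimationConflictMode.QUEUE)
    # ERROR mode would raise RuntimeError if "x" were already animating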
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add SpatialHash class for efficient spatial queries on entities:
- New SpatialHash.h/cpp with bucket-based spatial hashing
- Grid.entities_in_radius(x, y, radius) method for O(k) queries
- Automatic spatial hash updates on entity add/remove/move
Benchmark results at 2,000 entities:
- Single query: 16.2× faster (0.044ms → 0.003ms)
- N×N visibility: 104.8× faster (74ms → 1ms)
This enables efficient range queries for AI, visibility, and
collision detection without scanning all entities.
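Usage sketch (`player` and `react_to` are placeholders):
    for other in grid.entities_in_radius(player.x, player.y, 8):
        if other is not player:
            react_to(other)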
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Benchmark suite measuring entity performance at scale:
- B1: Entity creation (measures allocation overhead)
- B2: Full iteration (measures cache locality)
- B3: Single range query (measures O(n) scan cost)
- B4: N×N visibility (the "what can everyone see" problem)
- B5: Movement churn (baseline for spatial index overhead)
Key findings at 2,000 entities on 100×100 grid:
- Creation: 75k entities/sec
- Range query: 0.05ms (O(n) - checks all entities)
- N×N visibility: 128ms total (O(n²))
- EntityCollection iteration is 60× slower than direct iteration
Addresses #115, addresses #117
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
TurnOrchestrator: Coordinates multi-agent turn-based simulation
- Perspective switching with FOV layer updates
- Screenshot capture per agent per turn
- Pluggable LLM query callback
- SimulationStep/SimulationLog for full context capture
- JSON save/load with replay support
New demos:
- 2_integrated_demo.py: WorldGraph + action execution integration
- 3_multi_turn_demo.py: Complete multi-turn simulation with logging
Updated 1_multi_agent_demo.py with action parser/executor integration.
Tested with Qwen2.5-VL-32B: agents successfully navigate based on
WorldGraph descriptions and VLM visual input.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
ActionParser: Extracts structured actions from LLM text responses
- Regex patterns for GO, WAIT, LOOK, TAKE, DROP, PUSH, USE, etc.
- Direction normalization (N→NORTH, UP→NORTH)
- Handles "Action: GO EAST" and fallback patterns
- 12 unit tests covering edge cases
ActionExecutor: Executes parsed actions in the game world
- Movement with collision detection (walls, entities)
- Boundary checking
- ActionResult with path data for animation replay
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implements Python-side room graph data structures for LLM agent environments:
- Room, Door, WorldObject dataclasses with full metadata
- WorldGraph class with spatial queries (room_at, get_exits)
- Deterministic text generation (describe_room, describe_exits)
- Available action enumeration based on room state
- Factory functions for test scenarios (two_room, button_door)
Example output:
"You are in the guard room. The air is musty. On the ground you see
a brass key. Exits: east (the armory)."
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- grid_demo.py: Updated for new layer-based rendering
- Screenshots: Refreshed demo screenshots
- API docs: Regenerated with latest method signatures
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- 0_basic_vllm_demo.py: Single agent with FOV, grounded text, VLLM query
- 1_multi_agent_demo.py: Three agents with perspective cycling
Features demonstrated:
- Headless step() + screenshot() for AI-driven gameplay
- ColorLayer.apply_perspective() for per-agent fog of war
- Grounded text generation based on entity visibility
- Sequential VLLM queries with vision model support
- Proper FOV reset between perspective switches
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The benchmark API captures per-frame data from the game loop, which is
bypassed when using step()-based simulation control. This warning
informs users to use Python's time module for headless performance
measurement instead.
Also adds test_headless_benchmark.py which verifies:
- step() and screenshot() don't produce benchmark frames
- Wall-clock timing for headless operations
- Complex scene throughput measurement
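Timing sketch for headless runs; the screenshot filename argument and the
`automation` import path are assumptions:
    import time
    start = time.perf_counter()
    mcrfpy.step(0.016)                      # advance one simulated frame
    automation.screenshot("frame.png")      # synchronous in headless mode
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"step + capture: {elapsed_ms:.2f} ms")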
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implements Python-controlled simulation advancement for headless mode:
- Add mcrfpy.step(dt) to advance simulation by dt seconds
- step(None) advances to next scheduled event (timer/animation)
- Timers use simulation_time in headless mode for deterministic behavior
- automation.screenshot() now renders synchronously in headless mode
(captures current state, not previous frame)
This enables LLM agent orchestration (#156) by allowing:
- Set perspective, take screenshot, query LLM - all synchronous
- Deterministic simulation control without frame timing issues
- Event-driven advancement with step(None)
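Usage sketch:
    mcrfpy.step(0.1)     # advance the simulation by 0.1 seconds
    mcrfpy.step(None)    # jump to the next scheduled timer/animation event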
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
GridPoint.entities (#114):
- Returns list of entities at this grid cell position
- Enables convenient cell-based entity queries without manual iteration
- Example: grid.at(5, 5).entities → [<Entity>, <Entity>]
GridPointState.point (#16):
- Returns GridPoint if entity has discovered this cell, None otherwise
- Respects entity's perspective: undiscovered cells return None
- Enables entity.at(x,y).point.walkable style access
- Live reference: changes to GridPoint are immediately visible
This provides a simpler solution for #16 without the complexity of
caching stale GridPoint copies. The visible/discovered flags indicate
whether the entity "should" trust the data; Python can implement
memory systems if needed.
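Usage sketch (`plan_move` is a placeholder):
    here = grid.at(5, 5).entities       # entities standing on this cell
    seen = entity.at(5, 5).point        # GridPoint if discovered, else None
    if seen is not None and seen.walkable:
        plan_move(entity, 5, 5)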
closes #114, closes #16
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
ColorLayer enhancements:
- fill_rect(x, y, w, h, color): Fill rectangular region
- draw_fov(source, radius, fov, visible, discovered, unknown): One-time FOV draw
- apply_perspective(entity, visible, discovered, unknown): Bind layer to entity
- update_perspective(): Refresh layer from bound entity's gridstate
- clear_perspective(): Remove entity binding
New demo: tests/demo/perspective_patrol_demo.py
- Entity patrols around 10x10 central obstacle
- FOV layer shows visible/discovered/unknown states
- [R] to reset vision, [Space] to pause, [Q] to quit
- Demonstrates fog of war memory system
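A fog-of-war sketch using the calls above; the "color" layer-type string, the
RGBA-tuple colors, and the z_index value are assumptions:
    fog = grid.add_layer("color", z_index=10)     # overlay above entities
    fog.apply_perspective(player,
                          visible=(0, 0, 0, 0),
                          discovered=(0, 0, 0, 160),
                          unknown=(0, 0, 0, 255))
    fog.update_perspective()                      # manual refresh from the bound gridstate
    fog.clear_perspective()                       # detach from the entity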
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Phase 3 of Agent POV Integration:
Entity.updateVisibility() improvements:
- Now uses grid.fov_algorithm and grid.fov_radius instead of hardcoded values
- Updates any ColorLayers bound to this entity via apply_perspective()
- Properly triggers layer FOV recomputation when entity moves
New Entity.visible_entities(fov=None, radius=None) method:
- Returns list of other entities visible from this entity's position
- Optional fov parameter to override grid's FOV algorithm
- Optional radius parameter to override grid's fov_radius
- Useful for AI decision-making and line-of-sight checks
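Usage sketch (`guard`, `player`, and `chase` are placeholders):
    targets = guard.visible_entities()        # uses grid.fov_algorithm / fov_radius
    near = guard.visible_entities(radius=4)   # tighter radius override
    if player in targets:
        chase(guard, player)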
Test coverage in test_perspective_binding.py:
- Tests entity movement with bound layers
- Tests visible_entities with wall occlusion
- Tests radius override limiting visibility
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add `layers` dict parameter to Grid constructor for explicit layer definitions
- `layers={"ground": "color", "terrain": "tile"}` creates named layers
- `layers={}` creates empty grid (entities + pathfinding only)
- Default creates single TileLayer named "tilesprite" for backward compat
- Implement dynamic GridPoint property access via layer names
- `grid.at(x,y).layer_name = value` routes to corresponding layer
- Protected names (walkable, transparent, etc.) still use GridPoint
- Remove base layer rendering from UIGrid::render()
- Layers are now the sole source of grid rendering
- Old chunk_manager remains for GridPoint data access
- FOV overlay unchanged
- Update test to use explicit `layers={}` parameter
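Constructor sketch; the grid_size keyword and the per-cell value types are
assumptions, the layers dict and named-attribute routing are from this change:
    grid = mcrfpy.Grid(grid_size=(50, 50),
                       layers={"ground": "color", "terrain": "tile"})
    grid.at(3, 4).terrain = 7                  # routes to the "terrain" TileLayer
    grid.at(3, 4).ground = (30, 30, 30, 255)   # routes to the "ground" ColorLayer
    grid.at(3, 4).walkable = False             # protected name, stays on GridPoint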
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Adds a sub-grid system where grids larger than 64x64 cells are automatically
divided into 64x64 chunks, each with its own RenderTexture for incremental
rendering. This significantly improves performance for large grids by:
- Only re-rendering dirty chunks when cells are modified
- Caching rendered chunk textures between frames
- Viewport culling at the chunk level (skip invisible chunks entirely)
Implementation details:
- GridChunk class manages individual 64x64 cell regions with dirty tracking
- ChunkManager organizes chunks and routes cell access appropriately
- UIGrid::at() method transparently routes through chunks for large grids
- UIGrid::render() uses chunk-based blitting for large grids
- Compile-time CHUNK_SIZE (64) and CHUNK_THRESHOLD (64) constants
- Small grids (<= 64x64) continue to use flat storage (no regression)
Benchmark results show ~2x improvement in base layer render time for 100x100
grids (0.45ms -> 0.22ms) due to chunk caching.
Note: Dynamic layers (#147) still use full-grid textures; extending chunk
system to layers is tracked separately as #150.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implement per-layer dirty tracking and RenderTexture caching for
ColorLayer and TileLayer. Each layer now maintains its own cached
texture and only re-renders when content changes.
Key changes:
- Add dirty flag, cached_texture, and cached_sprite to GridLayer base
- Implement renderToTexture() for both ColorLayer and TileLayer
- Mark layers dirty on: set(), fill(), resize(), texture change
- Viewport changes (center/zoom) just blit cached texture portion
- Fallback to direct rendering if texture creation fails
- Add regression test with performance benchmarks
Expected performance improvement: Static layers render once, then
viewport panning/zooming only requires texture blitting instead of
re-rendering all cells.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implements ColorLayer and TileLayer classes with z_index ordering:
- ColorLayer: stores RGBA color per cell for overlays, fog of war, etc.
- TileLayer: stores sprite index per cell with optional texture
- z_index < 0: renders below entities
- z_index >= 0: renders above entities
Python API:
- grid.add_layer(type, z_index, texture) - create layer
- grid.remove_layer(layer) - remove layer
- grid.layers - list of layers sorted by z_index
- grid.layer(z_index) - get layer by z_index
- layer.at(x,y) / layer.set(x,y,value) - cell access
- layer.fill(value) - fill entire layer
Layers are allocated separately from UIGridPoint, reducing memory
for grids that don't need all features. Base grid retains walkable/
transparent arrays for TCOD pathfinding.
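Usage sketch; the "color"/"tile" type strings, the RGBA tuple, and `tiles`
(a texture) are assumptions:
    floor = grid.add_layer("tile", z_index=-1, texture=tiles)   # below entities
    overlay = grid.add_layer("color", z_index=5)                # above entities
    floor.set(2, 3, 17)                  # sprite index at (2, 3)
    overlay.fill((255, 0, 0, 64))        # tint the whole grid
    same = grid.layer(5)                 # lookup by z_index
    grid.remove_layer(overlay)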
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
compute_fov() was iterating through the entire grid to build a Python
list of visible cells, causing O(grid_size) performance instead of
O(radius²). On a 1000×1000 grid this was 15.76ms vs 0.48ms.
The fix returns None instead - users should use is_in_fov() to query
visibility, which is the pattern already used by existing code.
Performance: 33x speedup (15.76ms → 0.48ms on 1M cell grid)
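Usage pattern after the fix (argument names are assumed; `draw_highlight` is a
placeholder):
    grid.compute_fov(player.x, player.y, radius=8)   # now returns None
    if grid.is_in_fov(tx, ty):
        draw_highlight(tx, ty)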
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Track actual work time separately from frame time to determine
system load percentage:
- work_time_ms: Time spent doing actual work before display()
- sleep_time = frame_time_ms - work_time_ms
This allows calculating load percentage:
load% = (work_time / frame_time) * 100
Example at 60fps with light load:
- frame_time: 16.67ms, work_time: 2ms
- load: 12%, sleep: 14.67ms
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>