add async support in some AST visitors #719
PR Reviewer Guide 🔍 (Review updated until commit a6072b9)
Here are some key observations to aid the review process:
PR Code Suggestions ✨ (latest suggestions up to a6072b9)
Previous suggestions (up to commit 8039ec4)
The optimized code achieves a 13% speedup through several targeted micro-optimizations that reduce overhead in the hot loops.

**Key optimizations applied:**

1. **Hoisted loop-invariant computations**: Moved `isinstance` tuple constants (`compound_types`, `valid_types`) and frequently accessed attributes (`get_comment`, `orig_rt`, `opt_rt`) outside the loops to avoid repeated lookups.
2. **Precomputed key prefix**: Instead of repeatedly concatenating `test_qualified_name + "#" + str(self.abs_path)` inside the loops, this is computed once as `key_prefix` and reused with f-string formatting.
3. **Optimized `getattr` usage**: Replaced the costly `getattr(compound_line_node, "body", [])` pattern with a single `getattr(..., None)` call, then conditionally building the `nodes_to_check` list using unpacking (`*compound_line_node_body`) when a body exists.
4. **Reduced function call overhead**: Cached the `get_comment` method reference and called it once per `match_key`, reusing the same comment for all nodes that share the same key, rather than calling it for each individual node.
5. **String formatting optimization**: Replaced string concatenation with f-string formatting for better performance.

**Performance characteristics by test case:**

- **Large-scale tests** show the best improvements (10-79% faster), particularly `test_large_deeply_nested` (78.8% faster), where the inner-loop optimizations have maximum impact.
- **Basic cases** show modest gains (1-4% faster), as there is less loop-iteration overhead to optimize.
- **Edge cases** with minimal computation show negligible or slightly negative impact, due to the upfront setup cost of the hoisted variables.

The optimizations are most effective for functions with complex nested structures (for/while/if blocks) and many runtime entries, where the reduced per-iteration overhead compounds significantly.
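The hoisting pattern described above can be illustrated with a simplified, hypothetical sketch. The function name `annotate_nodes`, the `get_comment_fn` parameter, and the node-walking loop are stand-ins for illustration, not the actual codeflash internals:

```python
import ast

def annotate_nodes(tree, test_qualified_name, abs_path, get_comment_fn):
    # 1. Hoist loop-invariant values out of the loop: the isinstance tuple is
    #    built once, and the comment-lookup callable is cached in a local.
    compound_types = (ast.For, ast.While, ast.If)
    get_comment = get_comment_fn

    # 2. Precompute the key prefix once instead of concatenating per iteration.
    key_prefix = f"{test_qualified_name}#{abs_path}"

    comments = {}
    for node in ast.walk(tree):
        # 3. Single getattr with a None default, then conditional unpacking
        #    to build nodes_to_check only when a body actually exists.
        body = getattr(node, "body", None) if isinstance(node, compound_types) else None
        nodes_to_check = [node, *body] if body else [node]

        for n in nodes_to_check:
            lineno = getattr(n, "lineno", None)
            if lineno is None:
                continue
            match_key = f"{key_prefix}:{lineno}"
            # 4. Call get_comment once per match_key and reuse the result.
            if match_key not in comments:
                comments[match_key] = get_comment(match_key)
    return comments
```

The win comes from moving every per-iteration attribute lookup and string concatenation that does not depend on the loop variable out of the hot path, which compounds in deeply nested trees.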
⚡️ Codeflash found optimizations for this PR
📄 14% (0.14x) speedup for
…odeflash-ai/codeflash into async-support-for
Persistent review updated to latest commit a6072b9
PR Type
Enhancement, Tests
Description
Add async support in AST analyses
Extend unused helper detection to async
Enhance global assignment collection logic
Add comprehensive async-focused tests
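As a rough sketch of what "async support in AST visitors" typically means: in Python's `ast` module, `async def` parses to `ast.AsyncFunctionDef`, a node type distinct from `ast.FunctionDef`, so a visitor that only defines `visit_FunctionDef` silently skips coroutines. The class and method bodies below are illustrative, not the actual PR code:

```python
import ast

class FunctionCollector(ast.NodeVisitor):
    """Collects both sync and async function names (illustrative example)."""

    def __init__(self):
        self.found = []

    def visit_FunctionDef(self, node):
        self.found.append(node.name)
        self.generic_visit(node)

    # Async defs are a separate node type; without this method they are skipped.
    def visit_AsyncFunctionDef(self, node):
        self.found.append(node.name)
        self.generic_visit(node)

src = "async def fetch():\n    pass\n\ndef parse():\n    pass\n"
collector = FunctionCollector()
collector.visit(ast.parse(src))
```

Routing both node types through the same logic is the usual way to make an existing sync-only visitor async-aware without duplicating its body.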
Diagram Walkthrough
File Walkthrough
coverage_utils.py
Consider async defs in dependent function extraction
codeflash/code_utils/coverage_utils.py
unused_definition_remover.py
Async-aware entrypoint detection in remover
codeflash/context/unused_definition_remover.py
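A minimal sketch of async-aware entrypoint detection, assuming the change amounts to widening an `isinstance` check (the helper name `find_entrypoints` and the `entry_names` parameter are hypothetical, not the remover's real API):

```python
import ast

# A sync-only isinstance(node, ast.FunctionDef) check misses `async def`
# entrypoints, since those parse to the separate ast.AsyncFunctionDef type.
FUNCTION_DEFS = (ast.FunctionDef, ast.AsyncFunctionDef)

def find_entrypoints(source, entry_names):
    """Return top-level function defs (sync or async) whose names match."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, FUNCTION_DEFS) and node.name in entry_names
    ]
```

With the widened tuple, an async entrypoint is treated as used, so its helpers are no longer flagged as unused and removed.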
test_code_context_extractor.py
Tests for global assignment collection with async
tests/test_code_context_extractor.py
test_code_replacement.py
Tests for OptimFunctionCollector async coverage
tests/test_code_replacement.py
test_code_utils.py
Tests for async-aware dependent function extraction
tests/test_code_utils.py
test_unused_helper_revert.py
Async unused helper detection and revert tests
tests/test_unused_helper_revert.py