
Conversation

@Saga4 (Contributor) commented Sep 12, 2025

PR Type

Enhancement


Description

  • Add LSP-aware dynamic config getters

  • Reduce candidate count, generated tests, and looping time in LSP mode

  • Replace constant references with getters across the codebase

  • Improve profiling step logging


Diagram Walkthrough

flowchart LR
  A["config_consts: LSP constants + getters"] -- "get_n_candidates/get_n_tests/get_total_time" --> B["git_utils: worktree count uses getter"]
  A -- "dynamic values" --> C["function_optimizer: tests, candidates, timing"]
  A -- "dynamic values" --> D["test_runner: runtime params via getter"]
  C -- "profiling UX" --> E["Add info log + console rule"]

File Walkthrough

Relevant files (Enhancement):

config_consts.py (codeflash/code_utils/config_consts.py) +23/-0
Add LSP-specific constants and dynamic getters
  • Add LSP-specific constants.
  • Introduce getters for candidates, tests, time.
  • Route values via LSP enabled check.

git_utils.py (codeflash/code_utils/git_utils.py) +2/-2
Use dynamic candidate count for worktrees
  • Import dynamic candidate getter.
  • Use getter to size worktrees.

function_optimizer.py (codeflash/optimization/function_optimizer.py) +28/-19
Dynamic tests/candidates/time; better profiling logs
  • Replace constants with getters for tests/time/candidates.
  • Adjust executor and assertions to dynamic tests.
  • Pass dynamic candidate count to optimizers.
  • Improve line profiler logging and rule.

test_runner.py (codeflash/verification/test_runner.py) +4/-4
Test runners use dynamic total runtime
  • Swap fixed runtime with dynamic getter.
  • Apply to behavioral, profiling, benchmarking runners.

github-actions bot commented Sep 12, 2025

PR Reviewer Guide 🔍

(Review updated until commit 65a5b6a)

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Import Overhead

Repeated intra-function imports of is_LSP_enabled in small getters may add overhead on hot paths; consider caching the flag or the computed values to avoid repeated checks.

def get_n_candidates() -> int:
    from codeflash.lsp.helpers import is_LSP_enabled

    return N_CANDIDATES_LSP if is_LSP_enabled() else N_CANDIDATES


def get_n_tests_to_generate() -> int:
    from codeflash.lsp.helpers import is_LSP_enabled

    return N_TESTS_TO_GENERATE_LSP if is_LSP_enabled() else N_TESTS_TO_GENERATE


def get_total_looping_time() -> float:
    from codeflash.lsp.helpers import is_LSP_enabled

    return TOTAL_LOOPING_TIME_LSP if is_LSP_enabled() else TOTAL_LOOPING_TIME
Default Arg Eval

Using get_total_looping_time() as a function default value binds the value once, when the function is defined at import time; if LSP mode can change at runtime, the default may become stale. Pass None and compute inside, or set at call sites.

def run_and_parse_tests(
    self,
    testing_type: TestingMode,
    test_env: dict[str, str],
    test_files: TestFiles,
    optimization_iteration: int,
    testing_time: float = get_total_looping_time(),
    *,
Dynamic Defaults

Functions now default pytest_target_runtime_seconds to get_total_looping_time(), evaluated at import time; verify this is intended, that the getter's float return value is acceptable for the int annotation, and that LSP toggling after import is not required.

def run_behavioral_tests(
    test_paths: TestFiles,
    test_framework: str,
    test_env: dict[str, str],
    cwd: Path,
    *,
    pytest_timeout: int | None = None,
    pytest_cmd: str = "pytest",
    verbose: bool = False,
    pytest_target_runtime_seconds: int = get_total_looping_time(),
    enable_coverage: bool = False,
) -> tuple[Path, subprocess.CompletedProcess, Path | None, Path | None]:
    if test_framework == "pytest":
        test_files: list[str] = []
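To see why the reviewer flags this, here is a small self-contained demo (hypothetical names) of Python evaluating a default argument once, at definition time:

```python
state = {"lsp": False}

def current_runtime() -> int:
    # Pretend this is an LSP-aware getter.
    return 5 if state["lsp"] else 10

def run_tests(runtime: int = current_runtime()) -> int:
    # The default above was computed once, when `def` executed.
    return runtime

state["lsp"] = True  # toggling later does not affect the bound default
print(run_tests())  # 10, not 5
```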

github-actions bot commented Sep 12, 2025

PR Code Suggestions ✨

Latest suggestions up to 65a5b6a
Explore these optional code suggestions:

Possible issue
Make LSP getter import-safe

Importing inside these getters can raise import errors at runtime in non-LSP
environments or during early import cycles. Wrap the conditional in a try/except to
fall back safely when LSP helpers are unavailable. This prevents hard crashes and
preserves default behavior.

codeflash/code_utils/config_consts.py [21-25]

 def get_n_candidates() -> int:
-    from codeflash.lsp.helpers import is_LSP_enabled
+    try:
+        from codeflash.lsp.helpers import is_LSP_enabled  # local import to avoid hard dependency
+        return N_CANDIDATES_LSP if is_LSP_enabled() else N_CANDIDATES
+    except Exception:
+        # On any import/runtime issue, default to non-LSP values
+        return N_CANDIDATES
 
-    return N_CANDIDATES_LSP if is_LSP_enabled() else N_CANDIDATES
-
Suggestion importance[1-10]: 7


Why: The existing code does a local import that could fail in environments without LSP; adding a try/except improves robustness without changing behavior. Impact is moderate since it guards against runtime import errors but is not a critical bug fix.

Medium
Guard test count getter

Protect the lazy import and LSP check with a try/except to avoid failures when the
LSP module is absent or partially initialized. Default to standard values on any
exception.

codeflash/code_utils/config_consts.py [27-31]

 def get_n_tests_to_generate() -> int:
-    from codeflash.lsp.helpers import is_LSP_enabled
+    try:
+        from codeflash.lsp.helpers import is_LSP_enabled
+        return N_TESTS_TO_GENERATE_LSP if is_LSP_enabled() else N_TESTS_TO_GENERATE
+    except Exception:
+        return N_TESTS_TO_GENERATE
 
-    return N_TESTS_TO_GENERATE_LSP if is_LSP_enabled() else N_TESTS_TO_GENERATE
-
Suggestion importance[1-10]: 7


Why: Similar to the first, wrapping the lazy import prevents hard failures when LSP helpers are unavailable, defaulting safely to standard values. It’s accurate and improves resilience with moderate impact.

Medium
Harden looping time getter

Ensure resilience against missing LSP dependencies by catching exceptions around the
import and check. Return the default timing when LSP is unavailable to prevent
runtime errors during test runs.

codeflash/code_utils/config_consts.py [33-37]

 def get_total_looping_time() -> float:
-    from codeflash.lsp.helpers import is_LSP_enabled
+    try:
+        from codeflash.lsp.helpers import is_LSP_enabled
+        return TOTAL_LOOPING_TIME_LSP if is_LSP_enabled() else TOTAL_LOOPING_TIME
+    except Exception:
+        return TOTAL_LOOPING_TIME
 
-    return TOTAL_LOOPING_TIME_LSP if is_LSP_enabled() else TOTAL_LOOPING_TIME
-
Suggestion importance[1-10]: 7


Why: Catching exceptions around the LSP import/check avoids crashes during timing retrieval, defaulting to the non-LSP constant. Correct and useful for stability, but not a critical fix.

Medium

Previous suggestions

Suggestions up to commit 65a5b6a
General
Use cached LSP flag for tests

Reuse a cached LSP-enabled result to prevent repeated dynamic imports and checks,
improving performance and avoiding import-time side effects. Keep logic consistent
across all getters.

codeflash/code_utils/config_consts.py [27-31]

 def get_n_tests_to_generate() -> int:
-    from codeflash.lsp.helpers import is_LSP_enabled
+    return N_TESTS_TO_GENERATE_LSP if _is_lsp_enabled_cached() else N_TESTS_TO_GENERATE
 
-    return N_TESTS_TO_GENERATE_LSP if is_LSP_enabled() else N_TESTS_TO_GENERATE
-
Suggestion importance[1-10]: 6


Why: This aligns with the first suggestion’s caching goal and correctly targets the matching lines; modest performance gain and consistency, but dependent on introducing the cached helper safely.

Low
Cache LSP check for timing

Apply the same cached LSP check here to avoid repeated imports and ensure consistent
behavior. This prevents unnecessary overhead on frequently called timing paths.

codeflash/code_utils/config_consts.py [33-36]

 def get_total_looping_time() -> float:
-    from codeflash.lsp.helpers import is_LSP_enabled
+    return TOTAL_LOOPING_TIME_LSP if _is_lsp_enabled_cached() else TOTAL_LOOPING_TIME
 
-    return TOTAL_LOOPING_TIME_LSP if is_LSP_enabled() else TOTAL_LOOPING_TIME
-
Suggestion importance[1-10]: 6


Why: Matches the new hunk and would reduce repeated imports on a hot path; impact is moderate and contingent on the cached helper being added without altering expected dynamic behavior.

Low
Cache LSP-enabled check

Cache the LSP-enabled check to avoid repeated imports and function calls on hot
paths. This reduces overhead when these getters are called frequently during
optimization.

codeflash/code_utils/config_consts.py [21-25]

+from functools import cache
+
+@cache
+def _is_lsp_enabled_cached() -> bool:
+    from codeflash.lsp.helpers import is_LSP_enabled
+    return is_LSP_enabled()
+
 def get_n_candidates() -> int:
-    from codeflash.lsp.helpers import is_LSP_enabled
+    return N_CANDIDATES_LSP if _is_lsp_enabled_cached() else N_CANDIDATES
 
-    return N_CANDIDATES_LSP if is_LSP_enabled() else N_CANDIDATES
-
Suggestion importance[1-10]: 5


Why: The existing code matches the new hunk and the proposal can reduce repeated imports/calls, but it introduces global caching that may not reflect runtime changes to LSP state and adds a new helper that isn't shown integrated across the file.

Low

@Saga4 Saga4 marked this pull request as ready for review September 12, 2025 12:39
Persistent review updated to latest commit 65a5b6a

# LSP-specific
N_CANDIDATES_LSP = 3
N_TESTS_TO_GENERATE_LSP = 1
TOTAL_LOOPING_TIME_LSP = 5.0
Contributor:

we should not decrease this, as this can increase performance noise

Contributor:

we should also enable line profiling but reduce the number of candidates for it

Contributor Author:

we should also enable line profiling but reduce the number of candidates for it

We never disabled it for VSC; the base profiling and optimization-candidate profiling all work the same as the CLI. We can look into reducing the count, though.


# LSP-specific
N_CANDIDATES_LSP = 3
N_TESTS_TO_GENERATE_LSP = 1
Contributor:

Let's keep it 2.

Contributor:

Let's move the line profiler candidate count to the CLI.

Contributor Author:

As discussed with @mohammedahmed18, N_candidates is currently hardcoded in aiservices, so it needs to be parameterized with a max value:

    optimization_response_items, llm_cost = await optimize_python_code(
        request.user, ctx, data.dependency_code, n=5, python_version=python_version
    )

Comment on lines +21 to +36
def get_n_candidates() -> int:
    from codeflash.lsp.helpers import is_LSP_enabled

    return N_CANDIDATES_LSP if is_LSP_enabled() else N_CANDIDATES


def get_n_tests_to_generate() -> int:
    from codeflash.lsp.helpers import is_LSP_enabled

    return N_TESTS_TO_GENERATE_LSP if is_LSP_enabled() else N_TESTS_TO_GENERATE


def get_total_looping_time() -> float:
    from codeflash.lsp.helpers import is_LSP_enabled

    return TOTAL_LOOPING_TIME_LSP if is_LSP_enabled() else TOTAL_LOOPING_TIME
Contributor:

Let's inline this into the variable; these don't need to be functions that will be called constantly, and codeflash should be able to optimize this away.

3 participants