Add E2E tests for Workloads/Deployment in the dashboard #250
Conversation
Signed-off-by: SunsetB612 <[email protected]>
Summary of Changes
Hello @SunsetB612, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the test coverage for the dashboard's Workloads/Deployment management features by introducing a new set of end-to-end tests. These tests validate critical user interactions, from creating and modifying Deployments to listing and viewing their details, ensuring the reliability and stability of the UI.
Highlights
- New E2E Test Suite for Deployments: A comprehensive suite of Playwright end-to-end tests has been added for the Workloads/Deployment section of the dashboard.
- Full CRUD Operations Covered: The new tests cover the complete lifecycle of Deployments, including creation, viewing details, editing, and deletion functionalities.
- Deployment List and View Tests: Specific tests have been added to ensure the correct display of the Deployment list and the ability to view detailed information for individual Deployments.
- Test File Reorganization: The existing E2E test file ui/apps/dashboard/e2e/login/login.spec.ts has been renamed as part of the test suite organization.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request adds a comprehensive set of E2E tests for Deployment workload operations, which is a great addition. The tests cover create, list, edit, view, and delete functionalities. However, there are several areas for improvement to enhance the reliability and maintainability of these tests.
A major concern is the significant code duplication across the new test files, especially for setup (beforeEach), test data generation, and interaction with the Monaco editor. I recommend abstracting these common functionalities into a shared utility file.
Additionally, many tests rely on complex and fragile methods to interact with components (e.g., traversing React's internal fiber tree) and to handle asynchronous operations. This can lead to flaky tests that are hard to maintain. Simplifying these interactions and using Playwright's built-in waiting mechanisms more effectively would greatly improve test stability. I've left specific suggestions on the relevant lines of code.
// Wait for the editor content to load
let yamlContent = '';
let attempts = 0;
const maxAttempts = 30;

const expectedName = responseData?.data?.metadata?.name || '';
const expectedKind = responseData?.data?.kind || '';

while (attempts < maxAttempts) {
  yamlContent = await page.evaluate(() => {
    const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
    return textarea ? textarea.value : '';
  });

  if (yamlContent && yamlContent.length > 0) {
    const containsExpectedName = !expectedName || yamlContent.includes(expectedName);
    const containsExpectedKind = !expectedKind || yamlContent.includes(expectedKind);

    if (containsExpectedName && containsExpectedKind) {
      break;
    }
  }

  await page.waitForTimeout(500);
  attempts++;
}

// If the content is still empty, set it manually from the API response
if (!yamlContent || yamlContent.length === 0) {
  yamlContent = await page.evaluate((apiData) => {
    const data = apiData.data;
    const yaml = `apiVersion: ${data.apiVersion}
kind: ${data.kind}
metadata:
  name: ${data.metadata.name}
  namespace: ${data.metadata.namespace}
spec:
  replicas: ${data.spec.replicas}
  selector:
    matchLabels:
      app: ${data.spec.selector.matchLabels.app}
  template:
    metadata:
      labels:
        app: ${data.spec.template.metadata.labels.app}
    spec:
      containers:
      - name: ${data.spec.template.spec.containers[0].name}
        image: ${data.spec.template.spec.containers[0].image}
        ports:
        - containerPort: ${data.spec.template.spec.containers[0].ports[0].containerPort}`;

    const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
    if (textarea) {
      textarea.value = yaml;
      textarea.focus();
      textarea.dispatchEvent(new Event('input', { bubbles: true }));
      return yaml;
    }
    return '';
  }, responseData);
}

// If the content still cannot be obtained, report an error
if (!yamlContent || yamlContent.length === 0) {
  throw new Error(`Edit feature error: Monaco editor does not load deployment YAML content. Expected name: "${expectedName}", kind: "${expectedKind}"`);
}
This section of the test is extremely fragile and indicates a significant problem with test reliability. The test includes complex logic with a while loop and a fallback mechanism to manually reconstruct YAML from an API response if the Monaco editor doesn't load its content in time.
This is a test anti-pattern. Tests should not contain fallback application logic. Instead, they should reliably wait for the application to reach the desired state. This complex workaround will be a maintenance nightmare and is a strong indicator of underlying flakiness. The test should be simplified to properly wait for the editor content to be populated by the application itself, likely by waiting for a specific network response to complete and the UI to update accordingly.
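For instance, a minimal sketch of the wait-then-assert shape (assuming, as in the create test elsewhere in this PR, that the dialog is filled after a GET to an /api/v1/_raw/Deployment endpoint; the exact route and selector are assumptions to adjust):

// Wait for the application's own data fetch, then poll the editor until it reflects it.
const detailResponse = await page.waitForResponse(
  (response) => response.url().includes('/api/v1/_raw/Deployment') && response.status() === 200,
  { timeout: 15000 },
);
const payload = await detailResponse.json();
const expectedName = payload?.data?.metadata?.name ?? '';

// No fallback content is written by the test; it only waits for the app to render the YAML.
await expect
  .poll(
    async () =>
      page.evaluate(() => {
        const textarea = document.querySelector<HTMLTextAreaElement>('.monaco-editor textarea');
        return textarea?.value ?? '';
      }),
    { timeout: 15000 },
  )
  .toContain(expectedName);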
// Call React's onChange callback to update the component state
await page.evaluate((yaml) => {

  const findReactFiber = (element: any) => {
    const keys = Object.keys(element);
    return keys.find(key => key.startsWith('__reactFiber') || key.startsWith('__reactInternalInstance'));
  };

  const monacoContainer = document.querySelector('.monaco-editor');
  if (monacoContainer) {
    const fiberKey = findReactFiber(monacoContainer);
    if (fiberKey) {
      let fiber = (monacoContainer as any)[fiberKey];

      while (fiber) {
        if (fiber.memoizedProps && fiber.memoizedProps.onChange) {
          fiber.memoizedProps.onChange(yaml);
          return;
        }
        fiber = fiber.return;
      }
    }
  }

  const dialog = document.querySelector('[role="dialog"]');
  if (dialog) {
    const fiberKey = findReactFiber(dialog);
    if (fiberKey) {
      let fiber = (dialog as any)[fiberKey];

      const traverse = (node: any, depth = 0) => {
        if (!node || depth > 20) return false;

        if (node.memoizedProps && node.memoizedProps.onChange) {
          node.memoizedProps.onChange(yaml);
          return true;
        }

        if (node.child && traverse(node.child, depth + 1)) return true;
        if (node.sibling && traverse(node.sibling, depth + 1)) return true;

        return false;
      };

      traverse(fiber);
    }
  }
}, testDeploymentYaml);
The approach used to update the Monaco editor's content and trigger a state update in React is very complex and fragile. It relies on traversing React's internal Fiber tree (__reactFiber$) to find and call the onChange prop. This is an implementation detail that can easily break with any change to the component structure or React version, leading to flaky tests.
A more robust and simpler approach would be to use Playwright's built-in methods to simulate user input, or at least a less intrusive DOM manipulation. Consider using locator.fill() or dispatching a standard input event.
// A simpler way to set editor content and trigger updates.
await page.evaluate((yaml) => {
const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
if (textarea) {
textarea.value = yaml;
textarea.focus();
// Dispatch an 'input' event to notify React of the change.
textarea.dispatchEvent(new Event('input', { bubbles: true }));
}
}, testDeploymentYaml);
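If locator.fill() turns out to work against Monaco's hidden textarea (not guaranteed, since Monaco manages that element itself), the same step could be even shorter — a sketch to try, not a verified fix:

// Playwright's built-in input simulation; verify that Monaco accepts a programmatic fill() in this app.
const editorTextarea = page.locator('.monaco-editor textarea');
await editorTextarea.fill(testDeploymentYaml);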
try {
  await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({
    timeout: 15000
  });
} catch {
  // If the deployment does not show up in the list immediately, it may be due to caching or a refresh delay,
  // but a successful API response means the deployment has been created
}
The try...catch block here swallows potential assertion failures. If toBeVisible fails, the test will not fail but instead proceed, potentially masking bugs in the application. Assertions in tests should be direct. If the created deployment doesn't appear in the list within the timeout, the test should fail. Playwright's auto-waiting mechanism within expect is designed for this purpose.
await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({
timeout: 15000
});
test.beforeEach(async ({ page }) => {
  await page.goto(`${baseURL}/login`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.goto(`${baseURL}${basePath}`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});
The beforeEach hook contains redundant steps for authentication. The token is set in localStorage twice. This can be simplified to make the setup cleaner and more efficient. Consider setting the token once before the final navigation or reload. Also, this setup logic is duplicated across multiple test files. It would be best to extract it into a shared utility function to improve maintainability.
await page.goto(`${baseURL}${basePath}`);
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.reload({ waitUntil: 'domcontentloaded' });
await page.waitForSelector('text=Overview', { timeout: 30000 });
@gemini-code-assist could you provide any suggestions for reusing duplicate code across files?
@warjiang That's an excellent question! To address the duplication of the beforeEach setup and other common utilities across your Playwright test files, you can create a shared utility file. This approach centralizes common logic, making your tests more maintainable and easier to update.
I recommend creating a new file, for example, ui/apps/dashboard/e2e/utils/test-utils.ts, with the following content:
import { Page } from '@playwright/test';

// Global constants for E2E tests
export const HOST = process.env.HOST || 'localhost';
export const PORT = process.env.PORT || 5173;
export const baseURL = `http://${HOST}:${PORT}`;
export const basePath = '/multicloud-resource-manage';
export const token = process.env.KARMADA_TOKEN || '';

/**
 * Sets up authentication and navigates to the dashboard overview page.
 * This should be called in a test.beforeEach hook.
 */
export async function setupDashboardAuthentication(page: Page) {
  await page.goto(`${baseURL}${basePath}`);
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
}

/**
 * Generates a unique test deployment YAML.
 */
export function generateTestDeploymentYaml() {
  const timestamp = Date.now();
  return `apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-${timestamp}
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80`;
}
Then, in your test files like ui/apps/dashboard/e2e/workload/deployment/deployment-create.spec.ts, you can refactor them as follows:
- Add an import statement at the top of the file:
  import { test, expect } from '@playwright/test';
  import { setupDashboardAuthentication, generateTestDeploymentYaml, baseURL, basePath, token } from '../../utils/test-utils';
  (Note: The baseURL, basePath, and token constants are exported from test-utils.ts for use in other parts of your tests if needed, but the setupDashboardAuthentication function already uses them internally.)
- Remove the duplicated constants (lines 17-22 in deployment-create.spec.ts).
- Remove the duplicated generateTestDeploymentYaml function (lines 24-45 in deployment-create.spec.ts).
- Replace the test.beforeEach block with a call to the shared function. For the highlighted lines (48-55) in deployment-create.spec.ts, you can use the following suggestion:
test.beforeEach(async ({ page }) => {
  await setupDashboardAuthentication(page);
});
await page.waitForSelector('[role="tooltip"]', { timeout: 10000 });

// Click the Confirm button to confirm the deletion
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm)$/ });
The selector for the confirm button is not localized. While the delete button selector correctly handles both Chinese (删除) and English (Delete), the confirm button only looks for Confirm. This could cause the test to fail in a localized environment. For consistency and robustness, this should also handle different languages.
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm|确认)$/ });
The implementation is OK currently, since there is no need to take the Chinese scenario into consideration.
await page.goto(`${baseURL}/login`, { waitUntil: 'networkidle' });
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.goto(`${baseURL}${basePath}`, { waitUntil: 'networkidle' });
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.reload({ waitUntil: 'networkidle' });
Using waitUntil: 'networkidle' is a good practice for single-page applications as it's generally more reliable than domcontentloaded. It helps ensure that dynamic content fetched via API calls has loaded. For consistency and to improve the reliability of all tests, I recommend using networkidle in the beforeEach hooks of all the new E2E test files.
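Applied to the shared setup helper suggested above, that would look roughly like this (constants taken from the proposed test-utils.ts; whether the extra wait per test is worth it is a trade-off to tune):

test.beforeEach(async ({ page }) => {
  await page.goto(`${baseURL}${basePath}`, { waitUntil: 'networkidle' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'networkidle' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});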
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});

test('should view deployment details', async ({ page }) => {
This test is incomplete. It navigates to the deployment details view but doesn't perform any assertions to verify that the correct information is displayed. A test without assertions doesn't validate any behavior. You should add expect statements to check for the presence of key information, such as the deployment name, namespace, or labels on the details page.
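For example, assertions along these lines would verify the rendered data (the selectors are assumptions, since the details page markup isn't visible in this diff; the values come from the test YAML below):

// deploymentName is assumed to be extracted from the test YAML used in this spec.
await expect(page.getByText(deploymentName)).toBeVisible();
await expect(page.getByText('default')).toBeVisible();   // namespace from the test YAML
await expect(page.getByText('test-app')).toBeVisible();  // label value from the test YAML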
const testDeploymentYaml = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-for-view-${Date.now()}
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80`;
This test defines the test deployment YAML inline, whereas other tests like deployment-create.spec.ts use the generateTestDeploymentYaml helper function. This inconsistency makes the tests harder to maintain. For consistency, you should use the shared generateTestDeploymentYaml function here as well. This also reinforces the need to extract common helpers into a shared utility file.
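With the shared helper in place (import path taken from the earlier test-utils.ts suggestion), the inline YAML above collapses to something like:

import { generateTestDeploymentYaml } from '../../utils/test-utils';

const testDeploymentYaml = generateTestDeploymentYaml();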
/assign
// 等待删除确认tooltip出现
await page.waitForSelector('[role="tooltip"]', { timeout: 10000 });

// 点击Confirm按钮确认删除
Could we translate the comments into English? 😄
…glish Signed-off-by: SunsetB612 <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Signed-off-by: SunsetB612 <[email protected]>
…deployments tests Signed-off-by: SunsetB612 <[email protected]>
@@ -74,4 +74,4 @@ export default defineConfig({
    timeout: 120 * 1000,
    reuseExistingServer: !process.env.CI,
  },
});
});
Add an extra line at the end of the file.
@@ -16,13 +16,17 @@ concurrency:
  cancel-in-progress: true
permissions:
  contents: read # for actions/checkout to fetch code

env:
@gemini-code-assist should we use a new environment variable or re-use a GitHub build environment variable like RUNNER_DEBUG or ACTIONS_RUNNER_DEBUG?
jobs:
  golangci:
    name: lint
    runs-on: ubuntu-22.04
    steps:
      - name: checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v5
👍
// Wait for API call to succeed
const response = await apiRequestPromise;
expect(response.status()).toBe(200);
The assertion seems redundant, because
// Listen for API calls
const apiRequestPromise = page.waitForResponse(response => {
  return response.url().includes('/api/v1/_raw/Deployment') && response.status() === 200;
}, { timeout: 15000 });
only resolves when response.status() is 200, so the promise can only be awaited successfully in that case.
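An alternative that keeps the explicit assertion meaningful is to match only on the URL in the predicate, so a non-200 response fails the expect() instead of timing out the wait — a sketch of that trade-off:

// Match only on the URL; the status check below then turns an error response
// into a clear test failure rather than a waitForResponse timeout.
const apiRequestPromise = page.waitForResponse(
  (response) => response.url().includes('/api/v1/_raw/Deployment'),
  { timeout: 15000 },
);
// ... trigger the create action in the UI ...
const response = await apiRequestPromise;
expect(response.status()).toBe(200);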
const response = await apiRequestPromise;
expect(response.status()).toBe(200);

// Wait for dialog to close
It seems that we need to wait for the resource to be created successfully; maybe we need to add more retries in the dashboard to avoid these kinds of tricks.
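Until the dashboard itself retries, the test can at least poll instead of relying on a single fixed wait — a sketch, assuming deploymentName is already extracted from the test YAML:

// Retry the list view until the new deployment shows up or the overall timeout expires.
await expect(async () => {
  await page.reload({ waitUntil: 'domcontentloaded' });
  await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({ timeout: 2000 });
}).toPass({ timeout: 30000 });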
});

// Verify new deployment appears in list
const deploymentName = testDeploymentYaml.match(/name: (.+)/)?.[1];
Using a regexp can work, but it is not the best way here; maybe we can parse the content with a YAML library instead, like:
import { parse } from 'yaml';
import _ from 'lodash';

const yamlObject = parse(content) as Record<string, string>;
const deploymentName = _.get(yamlObject, 'metadata.name');
// Verify new deployment appears in list
const deploymentName = testDeploymentYaml.match(/name: (.+)/)?.[1];
if (deploymentName) {
For Deployment resources, the deployment name should not be empty. We can add an extra assertion to avoid an empty deployment name and remove the if check.
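In code, that could look like (a sketch of the suggested change):

const deploymentName = testDeploymentYaml.match(/name: (.+)/)?.[1];
expect(deploymentName).toBeTruthy(); // fail fast instead of silently skipping the check below
await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({ timeout: 15000 });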
});

test('should delete deployment successfully', async ({ page }) => {
  // 导航到工作负载页面
Still some Chinese 😢
Fixed. Please continue the review.
      traverse(fiber);
    }
  }
}, testDeploymentYaml);
@gemini-code-assist we need to set the monaco-editor with the generated YAML content; is there any better implementation for that? Maybe we can create the Kubernetes resources using kubectl.
That's an excellent question, @warjiang! You've hit on a key point for robust E2E testing.
Directly manipulating the Monaco editor's DOM and triggering React's internal onChange callbacks (as currently done in lines 39-93) is generally a fragile approach for E2E tests. It couples your tests tightly to the UI's internal implementation details, making them prone to breaking with minor UI changes or framework updates.
Your suggestion to create Kubernetes resources using kubectl (or a Kubernetes client library) is indeed a much better and more reliable strategy for setting up test preconditions. Here's why:
- Reliability: It bypasses the UI, making the test setup independent of UI rendering or JavaScript execution quirks.
- Speed: Directly interacting with the Kubernetes API is typically much faster than simulating complex UI interactions.
- Decoupling: It separates the test data setup from the UI interaction, allowing your tests to focus purely on verifying the dashboard's functionality rather than its ability to create resources.
To implement this, I recommend adding a utility function to ui/apps/dashboard/e2e/workload/deployment/test-utils.ts that uses kubectl to create the deployment. Here's how that function could look:
// ui/apps/dashboard/e2e/workload/deployment/test-utils.ts
import { Page } from '@playwright/test';
import { exec } from 'child_process'; // Add this import

// ... existing code ...

/**
 * Creates a Kubernetes Deployment using kubectl.
 * This is a more robust way to set up test data than UI interaction.
 * @param yamlContent The YAML content of the deployment.
 * @returns A Promise that resolves when the deployment is created.
 */
export async function createK8sDeployment(yamlContent: string): Promise<void> {
  return new Promise((resolve, reject) => {
    // Escape single quotes in the YAML content for the shell command
    const escapedYaml = yamlContent.replace(/'/g, "'\\''");
    const command = `echo '${escapedYaml}' | kubectl apply -f -`;
    exec(command, (error, stdout, stderr) => {
      if (error) {
        console.error(`Error creating Kubernetes deployment:\n${stderr}`);
        return reject(new Error(`Failed to create deployment: ${error.message}\n${stderr}`));
      }
      console.log(`Kubernetes deployment created successfully:\n${stdout}`);
      resolve();
    });
  });
}
Once you've added this utility, you can update deployment-delete.spec.ts (and deployment-create.spec.ts, deployment-edit.spec.ts, and deployment-view.spec.ts, which have similar logic) to use this new function. You'll also need to update the import statement in these files.
For ui/apps/dashboard/e2e/workload/deployment/deployment-delete.spec.ts, please consider replacing the current Monaco editor interaction (lines 39-93) with the following, which is also provided as a direct suggestion:
import { setupDashboardAuthentication, generateTestDeploymentYaml, createK8sDeployment } from './test-utils';
// ...
// Create a test deployment directly via Kubernetes API/kubectl to set up test data
const testDeploymentYaml = generateTestDeploymentYaml();
await createK8sDeployment(testDeploymentYaml);
This change will make your E2E tests more stable and efficient. Let me know if you'd like to explore using a Node.js Kubernetes client library instead of kubectl directly, which can offer even more control and error handling.
// Create a test deployment directly via Kubernetes API/kubectl to set up test data
const testDeploymentYaml = generateTestDeploymentYaml();
await createK8sDeployment(testDeploymentYaml);
+1 @SunsetB612, maybe you can try this way:
- Change to creating the k8s resources with kubectl, which makes it easier to create the resources.
- It lets us focus on the key feature we want to test. For example, for the delete test case we only want to test whether the delete function works as expected; we should not pay much attention to creating the resources through the web UI, and using kubectl is more convenient ~ (a matching cleanup sketch follows below)
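If the kubectl route is taken, a matching cleanup helper keeps test data from piling up between runs — a sketch in the same style as createK8sDeployment (the helper name deleteK8sDeployment and its placement in test-utils.ts are assumptions):

import { exec } from 'child_process'; // already imported in test-utils.ts if createK8sDeployment lives there

/**
 * Deletes a test Deployment created via kubectl, e.g. from a test.afterEach hook.
 */
export async function deleteK8sDeployment(name: string, namespace = 'default'): Promise<void> {
  return new Promise((resolve, reject) => {
    // --ignore-not-found keeps teardown idempotent if a test already removed the resource.
    exec(`kubectl delete deployment ${name} -n ${namespace} --ignore-not-found`, (error, _stdout, stderr) => {
      if (error) {
        return reject(new Error(`Failed to delete deployment ${name}: ${stderr}`));
      }
      resolve();
    });
  });
}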
Fixed. Please continue the review.
…f UI creation Signed-off-by: SunsetB612 <[email protected]>
No description provided.