
Conversation

@matthewgoodman13 (Contributor) commented Jul 14, 2022

Affiliated with Issue 47

TL;DR:
The ClusterID is now exported to Cloud Map and propagated to endpoints as a CLUSTER_ID attribute.
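
To illustrate the end result, here is a minimal sketch of how an endpoint could be registered in Cloud Map with the new attributes, using the AWS SDK for Go v2. The CLUSTER_ID and CLUSTERSET_ID attribute names come from this PR; the service id, instance id, and IP are placeholders, and this is not the controller's actual registration code.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/servicediscovery"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	sd := servicediscovery.NewFromConfig(cfg)

	// Register an endpoint with the cluster id (and clusterset id) carried as
	// custom instance attributes alongside the usual endpoint attributes.
	_, err = sd.RegisterInstance(ctx, &servicediscovery.RegisterInstanceInput{
		ServiceId:  aws.String("srv-example"),    // placeholder
		InstanceId: aws.String("10.0.0.12_8080"), // placeholder
		Attributes: map[string]string{
			"AWS_INSTANCE_IPV4": "10.0.0.12",
			"AWS_INSTANCE_PORT": "8080",
			"CLUSTER_ID":        "my-cluster",    // value taken from the id.k8s.io ClusterProperty
			"CLUSTERSET_ID":     "my-clusterset", // value taken from the clusterset ClusterProperty
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```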

Description of changes:

  • Added the ClusterProperty CRD (from the new about.k8s.io group); a rough sketch of the Go types follows this list.
  • In serviceexport_controller.go, we now watch for changes to ClusterProperty objects and reconcile all endpoints when one changes. This covers both adding and removing a ClusterProperty object.
    • This can be tested with kubectl apply -f config/samples/about_v1alpha1_clusterproperty.yaml
  • [Update 1] If no ClusterId is defined, an error is now thrown and the Service Export will not be created.
  • [Update 2 - b35bc75] ClustersetId added. This id is sent in DiscoverInstances as a QueryParameter so that only endpoints within the clusterset are returned (see the sketch after the Notes section). More unit tests added.
  • [Update 3] ClusterUtils added. Integration tests will fail until the next PR.
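
For reference, a rough sketch of what the about.k8s.io/v1alpha1 ClusterProperty Go types look like. Field names follow the upstream About API proposal rather than being copied from this PR, and kubebuilder markers and generated deepcopy code are omitted, so treat this as illustrative only:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterPropertySpec holds the value of a cluster-scoped property, e.g. the
// cluster id ("id.k8s.io") or the clusterset id.
type ClusterPropertySpec struct {
	Value string `json:"value"`
}

// ClusterProperty is a cluster-scoped resource in the about.k8s.io group.
// The serviceexport controller reads the property named "id.k8s.io" and
// propagates its value to exported endpoints as the CLUSTER_ID attribute.
type ClusterProperty struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ClusterPropertySpec `json:"spec,omitempty"`
}

// ClusterPropertyList contains a list of ClusterProperty objects.
type ClusterPropertyList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ClusterProperty `json:"items"`
}
```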

Notes:

  • This is a minimal PR; the main goal was to add the new about API. No broader error handling is introduced yet: a missing ClusterID only produces an error when the service is being exported. No unit tests were included in the initial revision (they were added in the later updates).
  • Adding or removing the id.k8s.io ClusterProperty adds or removes the CLUSTER_ID attribute on the endpoint.
  • In suite_test.go, changes were needed to exclude the ClusterProperty CRD from testing. Dedicated tests will need to be constructed that manually add the annotation allowing use of the '.k8s.io' suffix.
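
Update 2 above mentions sending the ClustersetId as a query parameter on DiscoverInstances. Below is a minimal sketch of that call with the AWS SDK for Go v2; the helper name is hypothetical, and the exact query key used by the controller isn't spelled out in this PR, so CLUSTERSET_ID here simply mirrors the attribute name used elsewhere in this thread.

```go
package cloudmap

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/servicediscovery"
)

// discoverClusterSetEndpoints returns the IPv4 addresses of service instances
// whose CLUSTERSET_ID attribute matches the given clusterset id. Cloud Map
// only returns instances whose custom attributes match every entry in
// QueryParameters, so endpoints from other clustersets are filtered out
// server-side.
func discoverClusterSetEndpoints(ctx context.Context, sd *servicediscovery.Client, nsName, svcName, clusterSetId string) ([]string, error) {
	out, err := sd.DiscoverInstances(ctx, &servicediscovery.DiscoverInstancesInput{
		NamespaceName:   aws.String(nsName),
		ServiceName:     aws.String(svcName),
		QueryParameters: map[string]string{"CLUSTERSET_ID": clusterSetId},
	})
	if err != nil {
		return nil, err
	}
	ips := make([]string, 0, len(out.Instances))
	for _, inst := range out.Instances {
		ips = append(ips, inst.Attributes["AWS_INSTANCE_IPV4"])
	}
	return ips, nil
}
```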

Testing:

  • Manual testing of the controller
  • make test runs without error
  • [Update 1] Tests updated to include ClusterID. A unit test was added for the missing-ClusterId case, expecting an error.
  • [Update 2] More unit tests added for ClusterId and ClustersetId
  • [Update 3] Integration tests will fail until the next PR

@codecov-commenter commented Jul 14, 2022

Codecov Report

Merging #165 (1aa5dde) into main (a95829a) will decrease coverage by 1.86%.
The diff coverage is 41.75%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #165      +/-   ##
==========================================
- Coverage   74.52%   72.65%   -1.87%     
==========================================
  Files          15       15              
  Lines        1480     1558      +78     
==========================================
+ Hits         1103     1132      +29     
- Misses        300      342      +42     
- Partials       77       84       +7     
Impacted Files                                          Coverage Δ
pkg/model/types.go                                      68.18% <25.00%> (-3.02%) ⬇️
...ntrollers/multicluster/serviceexport_controller.go   40.00% <36.17%> (-0.83%) ⬇️
...kg/controllers/multicluster/cloudmap_controller.go   45.23% <38.46%> (-0.45%) ⬇️
pkg/cloudmap/client.go                                  76.60% <50.00%> (-0.25%) ⬇️
pkg/cloudmap/api.go                                     81.32% <66.66%> (-1.48%) ⬇️
integration/janitor/api.go                              78.57% <100.00%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a95829a...1aa5dde. Read the comment docs.

@matthewgoodman13 matthewgoodman13 changed the title Reconcileclusterid Propagate ClusterID to Endpoints + Cloud Map Jul 14, 2022
@matthewgoodman13 matthewgoodman13 changed the title Propagate ClusterID to Endpoints + Cloud Map Propagate ClusterID to Cloud Map & Endpoints Jul 14, 2022
@matthewgoodman13 matthewgoodman13 merged commit 0354a3d into aws:main Jul 29, 2022
astaticvoid added a commit that referenced this pull request Jul 31, 2022: Added managed by label - issue 110 (aws#139)
runakash pushed a commit to runakash/aws-cloud-map-mcs-controller-for-k8s that referenced this pull request Jun 20, 2025
ClusterId, ClusterSetId getting exported to Cloud Map & getting propagated to endpoints as CLUSTER_ID and CLUSTERSET_ID attributes