chore: Enable migration tests for clusters in legacy schema #2975
base: master
Conversation
@@ -29,7 +29,6 @@ func TestMigAdvancedCluster_singleShardedMultiCloud(t *testing.T) {
}

func TestMigAdvancedCluster_symmetricGeoShardedOldSchema(t *testing.T) {
	acc.SkipIfAdvancedClusterV2Schema(t) // unexpected update and then: error operation not permitted, nums_shards from 1 -> > 1
This is the last mig test to enable for TPF.
// sendLegacySchemaRequestToRead sets ClusterID to a special value so Read can know whether it must use legacy schema.
// private state can't be used here because it's not available in Move Upgrader.
// ClusterID is computed (not optional) so the value will be overridden in Read and the special value won't ever appear in the state file.
Let me know if you have any questions about why we use ClusterID as a side channel to communicate between State Move / Upgrader and Read. Also, if you have a better idea, please let me know.
If we go with this option, do you think we can add a check that ClusterID != forceLegacySchema in one of the tests that exercise the upgrade?
func sendLegacySchemaRequestToRead(model *TFModel) {
	model.ClusterID = types.StringValue("forceLegacySchema")
}
As an alternative, would it be too complex to populate the replication spec list with only the bare minimum so our existing logic detects the legacy sharding config (objects with num_shards)? With this approach we would avoid receivedLegacySchemaRequestInRead using cluster_id, which looks more hacky.
@@ -39,101 +41,149 @@ func stateUpgraderFromV1(ctx context.Context, req resource.UpgradeStateRequest,
	setStateResponse(ctx, &resp.Diagnostics, req.RawState, &resp.State)
}

func setStateResponse(ctx context.Context, diags *diag.Diagnostics, stateIn *tfprotov6.RawState, stateOut *tfsdk.State) {
	rawStateValue, err := stateIn.UnmarshalWithOpts(tftypes.Object{
	// Minimum attributes needed from source schema. Read will fill in the rest
Non-blocking comment: in the future, when new attributes are added, will there be a compile-time or test failure if that field needs to be specified here?
In line 69 we're using IgnoreUndefinedAttributes: true, which means we're flexible about the schema. If you look at the code below, the only truly mandatory attributes are project_id and cluster; the rest we use on a best-effort basis, and it's OK if they don't come (e.g. later, when moving from a flex cluster).
So the schema doesn't fail if some attribute doesn't exist; later, when the value is read, it will be null.
it's ok if they don't come
OK, then why are we even populating them? What is the advantage of populating? Answering this question should also help me understand the consequence of:
later when the value is tried to be read it will be null
If the previous version/resource has them and we don't set them, then there will be a plan change. E.g. if timeouts or retain_backups_enabled is defined in SDKv2 and the user wants to migrate to TPF, they will get a plan change saying that timeouts/retain_backups_enabled will be deleted.
(We don't want plan changes when upgrading from SDKv2 to TPF, or when using a moved block from cluster to TPF adv_cluster.)
In the case that will come with moving from a flex cluster to adv_cluster, for example, those attributes don't exist, so it's OK not to send them.
In short: it's to avoid plan changes by filling in attributes that Read can't.
Description
Enable migration tests for clusters in legacy schema. With this PR, all mig tests for TPF are passing.
Link to any related issue(s): CLOUDP-295165