
Calling textAsBlob on an ascii field results in com.datastax.oss.driver.api.core.type.codec.CodecNotFoundException: Codec not found for requested operation: [BLOB <-> java.lang.String] #308

Closed
paayers opened this issue Sep 13, 2024 · 0 comments · Fixed by #311
paayers commented Sep 13, 2024

When migrating, we have a column 'node_id' that is held in an ascii field, but we need to compress those values and store them in a blob field. CDM, however, isn't able to convert the ascii field to a blob on the fly, and we get:
com.datastax.oss.driver.api.core.type.codec.CodecNotFoundException: Codec not found for requested operation: [BLOB <-> java.lang.String]
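
For context on the error: the DataStax Java driver resolves a codec per (CQL type, Java type) pair, and it ships no codec for [BLOB <-> java.lang.String], so binding the source ascii value (a String) against the target blob column fails before any data moves. Below is a minimal sketch of the kind of custom codec that covers this pairing, assuming a straight US-ASCII byte encoding; the class name AsciiToBlobCodec is illustrative and this is not CDM's actual fix:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.datastax.oss.driver.api.core.ProtocolVersion;
import com.datastax.oss.driver.api.core.type.DataType;
import com.datastax.oss.driver.api.core.type.DataTypes;
import com.datastax.oss.driver.api.core.type.codec.TypeCodec;
import com.datastax.oss.driver.api.core.type.reflect.GenericType;

/**
 * Hypothetical codec bridging [BLOB <-> java.lang.String]: the ascii source
 * value is stored as its US-ASCII bytes in the target blob column.
 */
public class AsciiToBlobCodec implements TypeCodec<String> {

  @Override
  public GenericType<String> getJavaType() {
    return GenericType.STRING;
  }

  @Override
  public DataType getCqlType() {
    return DataTypes.BLOB;
  }

  @Override
  public ByteBuffer encode(String value, ProtocolVersion protocolVersion) {
    return value == null
        ? null
        : ByteBuffer.wrap(value.getBytes(StandardCharsets.US_ASCII));
  }

  @Override
  public String decode(ByteBuffer bytes, ProtocolVersion protocolVersion) {
    if (bytes == null) {
      return null;
    }
    byte[] raw = new byte[bytes.remaining()];
    bytes.duplicate().get(raw); // duplicate() so the buffer's position is untouched
    return new String(raw, StandardCharsets.US_ASCII);
  }

  @Override
  public String format(String value) {
    // CQL literal form of a blob is 0x-prefixed hex.
    if (value == null) {
      return "NULL";
    }
    StringBuilder sb = new StringBuilder("0x");
    for (byte b : value.getBytes(StandardCharsets.US_ASCII)) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  @Override
  public String parse(String value) {
    // Minimal inverse of format(); assumes a well-formed 0x... literal.
    if (value == null || value.isEmpty() || value.equalsIgnoreCase("NULL")) {
      return null;
    }
    String hex = value.startsWith("0x") ? value.substring(2) : value;
    byte[] raw = new byte[hex.length() / 2];
    for (int i = 0; i < raw.length; i++) {
      raw[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return new String(raw, StandardCharsets.US_ASCII);
  }
}
```

Registering such a codec on the session, e.g. CqlSession.builder().addTypeCodecs(new AsciiToBlobCodec()), would let the driver bind String values to blob targets; presumably CDM would need an equivalent codec wired in internally.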

The source schema with node_id as an ascii field is this:

```
CREATE TABLE v2metrics_downsampled.per_node_3m_0 (
    metric_id blob,
    bucket_ts int,
    bucket_idx tinyint,
    node_id ascii,
    row_type tinyint,
    value blob,
    PRIMARY KEY ((metric_id, bucket_ts), bucket_idx, node_id, row_type)
) WITH CLUSTERING ORDER BY (bucket_idx ASC, node_id ASC, row_type ASC)
    AND additional_write_policy = '99p'
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND cdc = false
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '24', 'compaction_window_unit': 'HOURS', 'max_threshold': '32', 'min_threshold': '4', 'tombstone_threshold': '0.8', 'unchecked_tombstone_compaction': 'true', 'unsafe_aggressive_sstable_expiration': 'true'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND default_time_to_live = 2419200
    AND extensions = {}
    AND gc_grace_seconds = 3600
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair = 'BLOCKING'
    AND speculative_retry = '99p';
```

The target schema where node_id is a blob is simply:

```
CREATE TABLE v2metrics_downsampled.per_node_3m_0 (
    metric_id blob,
    bucket_ts int,
    bucket_idx tinyint,
    node_id blob,
    row_type tinyint,
    value blob,
    PRIMARY KEY ((metric_id, bucket_ts), bucket_idx, node_id, row_type)
) WITH CLUSTERING ORDER BY (bucket_idx ASC, node_id ASC, row_type ASC)
    AND additional_write_policy = '99p'
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND cdc = false
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '24', 'compaction_window_unit': 'HOURS', 'max_threshold': '32', 'min_threshold': '4', 'tombstone_threshold': '0.8', 'unchecked_tombstone_compaction': 'true', 'unsafe_aggressive_sstable_expiration': 'true'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND default_time_to_live = 2419200
    AND extensions = {}
    AND gc_grace_seconds = 3600
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair = 'BLOCKING'
    AND speculative_retry = '99p';
```

Being able to perform conversions like this would be super helpful.
Thanks
@pravinbhat pravinbhat self-assigned this Sep 13, 2024
@pravinbhat pravinbhat added the enhancement New feature or request label Sep 18, 2024
@pravinbhat pravinbhat linked a pull request Sep 19, 2024 that will close this issue