dlt version
0.5.2
Describe the problem
When running my pipeline, I got an error like:
<class 'dlt.normalize.exceptions.NormalizeJobFailed'>
Job for my_resource.7547067d79.typed-jsonl failed terminally in load 1723141697.7149394 with message [Errno 36] File name too long: '/home/alex/.dlt/pipelines/my_source/load/new/1723141697.7149394/new_jobs/<long name>.ee379b2de9.0.insert_values'.
The total filename length (without any of the path) is 43 characters.
The data structures here are deeply nested, with long key names coming from the third-party source, so I have no control over them unless I start renaming things.
Expected behavior
My expectation is that I can run a pipeline with deeply nested resources having long names.
@VioletM and I briefly discussed this issue over Slack. I'm told there is a table-name shortening mechanism that prevents this sort of problem for tables, but, if I understand correctly, the same mechanism isn't applied to the filenames used for intermediate operations. That mechanism, or a similar one, should presumably be applied to these filenames as well so that runs like this can succeed.
Steps to reproduce
I cannot share my source data, but any data with sufficiently long key names should produce this error when running the pipeline. A sketch of a repro follows.
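Something along these lines should trigger it (a hypothetical sketch, not the real source; all names and keys below are made up). Nested lists of dicts become child tables whose names concatenate the full key path, and those names feed into the intermediate job filenames:

```python
import dlt

# Made-up key: repeated to push the concatenated child-table name
# (and thus the job filename) past the OS filename limit.
long_key = "a_very_long_key_name_from_the_third_party_source_" * 4

# Nested lists of dicts produce child tables named parent__key__key...,
# so long keys compound quickly.
data = [{"id": 1, long_key: [{long_key: [{"value": 42}]}]}]

pipeline = dlt.pipeline(
    pipeline_name="long_names_repro",  # hypothetical names
    destination="duckdb",
    dataset_name="repro",
)
pipeline.run(data, table_name="my_resource")
```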
Operating system
Linux
Runtime environment
Local
Python version
3.11
dlt data source
API
dlt destination
DuckDB
Other deployment details
No response
Additional information
No response
@sheluchin there's a simple workaround (until we fix it properly). right now we support changing destination capabilities in code so you can limit identifier length:
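A sketch of that workaround, assuming the duckdb destination factory accepts a `max_identifier_length` capabilities override (verify against the docs for your dlt version):

```python
import dlt
from dlt.destinations import duckdb

# Assumption: the destination factory accepts capability overrides such as
# max_identifier_length; identifiers longer than this get shortened/tagged.
dest = duckdb(max_identifier_length=200)

pipeline = dlt.pipeline(
    pipeline_name="my_source",
    destination=dest,
    dataset_name="my_data",  # hypothetical dataset name
)
```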