Interoperability with pd.to_sql? #123
Comments
Answering my own question: I needed to change the YEAR series to pint[dimensionless] and then call co2_df.pint.dequantify() to get a SQL table that works.
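The write side of that workaround can be sketched as follows. This is a minimal illustration, not the notebook's exact code: pint-pandas's .pint.dequantify() yields a (name, unit) MultiIndex on the columns, and that shape is mocked here with plain pandas so only the SQL step is in play; the table name and unit strings are hypothetical.

```python
import sqlite3
import pandas as pd

# Mock of a dequantified frame: .pint.dequantify() produces a
# (column name, unit) MultiIndex on the columns.
df = pd.DataFrame({
    ("year", "No Unit"): [2019, 2020],
    ("co2", "megametric_ton"): [33.1, 31.5],
})

# SQL tables need flat column names, so fold the unit into the name.
flat = df.copy()
flat.columns = [f"{name} [{unit}]" for name, unit in df.columns]

with sqlite3.connect(":memory:") as con:
    flat.to_sql("co2", con, index=False)
    back = pd.read_sql("SELECT * FROM co2", con)

print(list(back.columns))  # unit metadata survives in the column names
```

SQLite is used here only because it ships with Python; the same flatten-before-write step applies to Trino or PostgreSQL via SQLAlchemy.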
I have updated the demo notebook to show the hoops through which one must jump in order to unpack the results that come back from pd.read_sql. The relevant snippet is:
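The snippet itself did not survive in this copy of the thread, but the unpacking step it describes can be sketched roughly as below. This assumes the "name [unit]" column-naming convention from the write side; the final re-quantification call is left as a comment because it requires pint-pandas.

```python
import re
import sqlite3
import pandas as pd

# Simulate a table written with units folded into the column names.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE co2 ("year [No Unit]" INT, "co2 [megametric_ton]" REAL)')
con.executemany("INSERT INTO co2 VALUES (?, ?)", [(2019, 33.1), (2020, 31.5)])
back = pd.read_sql("SELECT * FROM co2", con)

# Rebuild the (name, unit) MultiIndex that pint-pandas expects.
pattern = re.compile(r"^(?P<name>.+) \[(?P<unit>.+)\]$")
back.columns = pd.MultiIndex.from_tuples(
    [pattern.match(c).groups() for c in back.columns], names=[None, "unit"]
)

# quantified = back.pint.quantify(level=-1)  # requires pint-pandas
print(back.columns.tolist())
```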
Re-opening in case there's an obviously better way to do this.
I have updated the notebook to do a full round trip of a dataframe that includes non-quantified data (strings) as well as columns with homogeneous units and columns with heterogeneous units.
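One plausible flat encoding for those three column kinds is sketched below. This is hypothetical, not necessarily the notebook's approach: plain strings pass through unchanged, a homogeneous-unit column keeps its unit in the header, and a heterogeneous-unit column is split into a magnitude column plus a per-row unit column.

```python
import sqlite3
import pandas as pd

# Hypothetical frame covering the three cases discussed above.
df = pd.DataFrame({
    "country": ["FR", "US"],             # non-quantified strings
    "co2 [megametric_ton]": [0.3, 5.0],  # homogeneous unit, in the header
    "fuel": [12.0, 3.0],                 # heterogeneous magnitudes...
    "fuel_unit": ["barrel", "tonne"],    # ...with their per-row units
})

with sqlite3.connect(":memory:") as con:
    df.to_sql("emissions", con, index=False)
    back = pd.read_sql("SELECT * FROM emissions", con)

# The flat round trip is lossless; re-quantifying the heterogeneous
# column afterwards would mean building pint Quantity objects row by
# row from the (fuel, fuel_unit) pairs.
print(back.equals(df))
```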
What is the recommended way to write and read pint-pandas dataframes to and from SQL databases?
Is it immediately obvious that the answer should be to convert to/from JSON? In my case, I'm interested in writing to/from a TRINO database via SQLAlchemy, but happy to see answers that work for PostgreSQL, SQLite, etc. Here's a test case:
https://github.com/os-climate/data-platform-demo/blob/master/notebooks/pint-demo.ipynb
Relevant snippet:
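The snippet was lost in this copy of the thread, but the failure mode that motivates the question can be reproduced without pint installed: a quantified pint-pandas column holds Python objects that database drivers cannot bind. The class below is a stand-in for a pint Quantity, not pint's actual type.

```python
import sqlite3
import pandas as pd

class FakeQuantity:
    """Stand-in for a pint Quantity: an object sqlite3 cannot bind."""

df = pd.DataFrame({"co2": [FakeQuantity()]})
con = sqlite3.connect(":memory:")
try:
    # Writing the object-dtype column directly fails at parameter binding.
    df.to_sql("co2", con, index=False)
except sqlite3.Error as exc:
    print("write failed:", exc)
```

This is why some dequantify/flatten step (or a JSON detour) is needed before to_sql will accept the frame.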