Converting MFP CSV to YAML schedule #111
Conversation
I'll ask the students to add a column with station time. I hope it will become available in later downloads from the MFP website directly as well. It is included online.
I was discussing with @ammedd, and I think the best thing to do would be to use the CSV to populate as many fields as it can in the schedule, and leave the rest up to the user to populate in the YAML. This avoids asking the user to add columns to the CSV (which I think would be more error-prone). I'll be able to give this a proper review on Friday - just have some other items to clear before then :)
- instrument: CTD
  location:
    latitude: 55.124089
    longitude: 5.156524
  time: "2023-01-01 01:00:00"
- instrument: DRIFTER
  location:
    latitude: 55.124089
    longitude: 5.156524
  time: "2023-01-01 01:00:00"
But that means we need to either leave the dates blank or guess them, right? There are also stations that will have more than one instrument, but it will be hard to guess they are the same if we leave the dates blank. Of course, they have the same location, but if longitudes and latitudes are too close, that might be harder than just adding a column to the CSV. Although I understand that adding a column manually is more prone to error.
Okay, I've managed to look through the code. Thanks for picking this up @iuryt ! I definitely see how this will streamline things for users - especially for the waypoints and the bounding box.
Correct, we'll have to leave the dates blank - it's just information that the MFP export does not provide. Though the order of points in the YAML should match the CSV. If the MFP export changes in future, then we can adapt. So yes, it would also be difficult to guess whether stations are the same. Honestly, I think we should avoid making that guess. It's not optimal, but given that MFP->virtualship isn't necessarily 1-to-1, and given the limited export, I think it's all we can do. I also think this is the clearer approach from a maintenance POV - being explicit about what is supported, and not making additional assumptions which may not be right - and it hands control to the user to modify the schedule file.
Sounds like a good plan! My vote is for
Let's install
Other notes:
Let me know what your thoughts are on all this ^, and if there's anything I can do to help :)
But I tested it, and the current version is working.

# Define maximum depth and buffer for each instrument
instrument_properties = {
    "CTD": {"depth": 5000, "buffer": 1},
    "DRIFTER": {"depth": 1, "buffer": 5},
    "ARGO_FLOAT": {"depth": 2000, "buffer": 5},
}

Any comments on this, @erikvansebille? Let me know what you think. I have not yet added
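One hedged guess at how this table gets used, since the call site isn't shown in this thread: if the goal is sizing the space-time region for a schedule, a natural move is to take the worst case over all instruments at a station. The `region_limits` helper below is an illustrative assumption, not code from the PR.

```python
# Maximum depth and buffer per instrument, as defined in the PR
instrument_properties = {
    "CTD": {"depth": 5000, "buffer": 1},
    "DRIFTER": {"depth": 1, "buffer": 5},
    "ARGO_FLOAT": {"depth": 2000, "buffer": 5},
}

def region_limits(instruments):
    # Hypothetical helper: worst case over the instruments deployed at a
    # station - the deepest profile and the widest spatial buffer
    max_depth = max(instrument_properties[i]["depth"] for i in instruments)
    max_buffer = max(instrument_properties[i]["buffer"] for i in instruments)
    return max_depth, max_buffer

print(region_limits(["CTD", "DRIFTER"]))  # → (5000, 5)
```

A station carrying both a CTD and a drifter would then need the CTD's full depth range but the drifter's wider buffer.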
Looks good, but perhaps explain what
Good progress! Left some review comments. I think there is some duplication of information between the comments and the code. I recommend in general: Don't Write Comments - rewrite the code (really good YouTube channel in general for code style - highly recommend). The only time I write comments is to explain why something was done. E.g.,

# Keeping this legacy format for backward compatibility with older datasets
There are many ways to make things self-documenting. E.g.,
mfp_to_yaml(
mfp_file, str(path)
) # Pass the path to save in the correct directory
could become

mfp_to_yaml(
    mfp_file, save_directory=str(path)
)
It's possible to have clean, readable code with 0 duplication, but also 0 comments ;)
Do you think it would be possible to use the Pydantic objects directly here? (Waypoint, SpaceTimeRegion, then building up to Schedule) I think it would be less error prone than manually creating the dictionary/YAML.
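A minimal sketch of that idea. The real virtualship Pydantic model definitions aren't shown in this thread, so stdlib dataclasses stand in for them here and every field name is an assumption; the point is building typed objects first and serializing once at the end, instead of hand-assembling nested dicts.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

# Stand-ins for virtualship's Waypoint/Schedule models; real fields may differ
@dataclass
class Location:
    latitude: float
    longitude: float

@dataclass
class Waypoint:
    instrument: str
    location: Location
    time: Optional[str] = None  # left blank when the MFP export has no time

@dataclass
class Schedule:
    waypoints: list = field(default_factory=list)

# Build the schedule from parsed rows rather than a raw dictionary
schedule = Schedule(waypoints=[
    Waypoint("CTD", Location(55.124089, 5.156524), "2023-01-01 01:00:00"),
    Waypoint("DRIFTER", Location(55.124089, 5.156524)),
])

# asdict() recursively converts to plain dicts, ready for yaml.safe_dump
# (Pydantic models would offer their own serialization instead)
print(asdict(schedule)["waypoints"][0]["instrument"])  # → CTD
```

With real Pydantic models, validation errors (bad latitude, missing instrument) would surface at construction time instead of after the YAML is written.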
# Extract unique instruments from dataset
unique_instruments = np.unique(
    np.hstack(coordinates_data["Instrument"].apply(lambda a: a.split(", ")).values)
)
I think this would just be clearer as a for loop.
unique_instruments = set()
for value in coordinates_data["Instrument"]:
    unique_instruments |= set(value.split(", "))  # or `unique_instruments = unique_instruments | set(...)`
That way there's no need to write the comment since it's evident from the code
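The loop version is easy to sanity-check in isolation; here is a self-contained run with a toy stand-in for the "Instrument" column (the values are made up, real ones come from the MFP export):

```python
# Toy stand-in for coordinates_data["Instrument"]
instrument_column = ["CTD, DRIFTER", "CTD", "ARGO_FLOAT, DRIFTER"]

unique_instruments = set()
for value in instrument_column:
    # Each cell may list several comma-separated instruments
    unique_instruments |= set(value.split(", "))

print(sorted(unique_instruments))  # → ['ARGO_FLOAT', 'CTD', 'DRIFTER']
```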
if mfp_file:
    # Generate schedule.yaml from the MFP file
    click.echo(f"Generating schedule from {mfp_file}...")
    mfp_to_yaml(
        mfp_file, str(path)
    )  # Pass the path to save in the correct directory
If mfp_to_yaml returned a string representing the YAML instead of doing the file modifications, then we could use schedule.write_text(...) like in the else clause. So something like:
if mfp_file:
# don't repeat the comment here :) the log message is comment enough
click.echo(f"Generating schedule from {mfp_file}...")
schedule_body = mfp_to_yaml(
mfp_file,
)
else:
schedule_body = utils.get_example_schedule()
schedule.write_text(...)
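A self-contained sketch of that refactor. mfp_to_yaml is stubbed out and utils.get_example_schedule is replaced by a placeholder string, since their real bodies aren't part of this thread; what matters is that the converter returns text and the caller owns all file I/O.

```python
from pathlib import Path
import tempfile

def mfp_to_yaml(mfp_file):
    # Stub: the real function would parse the MFP export. Returning a
    # string keeps the function free of filesystem side effects.
    return f"# schedule generated from {mfp_file}\nwaypoints: []\n"

# Placeholder for utils.get_example_schedule()
EXAMPLE_SCHEDULE = "waypoints: []\n"

def write_schedule(schedule: Path, mfp_file=None):
    if mfp_file:
        schedule_body = mfp_to_yaml(mfp_file)
    else:
        schedule_body = EXAMPLE_SCHEDULE
    # Single write site, identical for both branches
    schedule.write_text(schedule_body)

with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "schedule.yaml"
    write_schedule(target, mfp_file="CoordinatesExport.xlsx")
    print(target.read_text().splitlines()[0])
    # → # schedule generated from CoordinatesExport.xlsx
```

Besides removing the duplicated comment, this makes mfp_to_yaml trivially unit-testable: assert on the returned string, no temp directories needed.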
@@ -12,6 +12,7 @@ dependencies:
  - pip
  - pyyaml
  - copernicusmarine >= 2
+ - openpyxl >= 3.1.5
Suggested change (drop the lower bound):

- openpyxl
I don't think we need to specify the lower bound. It's best to be flexible unless there is an incompatibility with version x in our codebase.
# Define maximum depth and buffer for each instrument
instrument_properties = {
    "CTD": {"depth": 5000, "buffer": 1},
    "DRIFTER": {"depth": 1, "buffer": 5},
    "ARGO_FLOAT": {"depth": 2000, "buffer": 5},
}
Suggested change (rename depth to max_depth and drop the comment):

instrument_properties = {
    "CTD": {"max_depth": 5000, "buffer": 1},
    "DRIFTER": {"max_depth": 1, "buffer": 5},
    "ARGO_FLOAT": {"max_depth": 2000, "buffer": 5},
}
        mfp_file, str(path)
    )  # Pass the path to save in the correct directory
else:
    # Create a default example schedule
Suggested change (remove the comment):

# Create a default example schedule
@ammedd @VeckoTheGecko
The file does not contain the expected time for each of the stations; how should we guess this?
Also, how should we organize this code? Should we embed it in the virtualship init command? E.g.

virtualship init --mfp_file ./CoordinatesExport-Filled.xlsx

So far, this is just a script that creates the yaml file. We can delete it after deciding where to implement it. I am also using the CSV version to avoid installing openpyxl, but we can also install it, no problem.
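The flag could be wired into the existing click-based init command along these lines. This is a sketch of the idea under discussion, not the merged implementation; the option name and the branch bodies are assumptions based on this thread.

```python
import click

@click.command()
@click.option(
    "--mfp_file",
    type=click.Path(),
    default=None,
    help="Optional MFP coordinates export used to pre-populate schedule.yaml.",
)
def init(mfp_file):
    # Hypothetical sketch mirroring the if/else discussed above
    if mfp_file:
        click.echo(f"Generating schedule from {mfp_file}...")
        # ... call mfp_to_yaml(mfp_file) here ...
    else:
        click.echo("Writing example schedule...")
        # ... write the default example schedule here ...
```

Making the flag optional keeps the existing `virtualship init` behavior unchanged for users without an MFP export.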