Adds images to examples
To make it simpler to grok what an example is doing. We should leverage this to make
clear that you can generate this kind of documentation yourself if you use Hamilton!

Also fixes some minor bugs I found with one of the examples.
skrawcz committed Apr 1, 2023
1 parent c17bdc7 commit db97ef4
Showing 56 changed files with 148 additions and 18 deletions.
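Each PNG added below is the output of Hamilton's `visualize_execution`, which emits Graphviz DOT source for the requested outputs and renders it. As a rough sketch of the idea (a hypothetical stand-in for illustration, not Hamilton's actual implementation), emitting DOT for a small DAG looks like this:

```python
def to_dot(edges):
    """Render a list of (upstream, downstream) node-name pairs as Graphviz DOT source."""
    lines = ["digraph pipeline {"]
    for up, down in edges:
        # One directed edge per dependency between nodes.
        lines.append(f'  "{up}" -> "{down}";')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical DAG edges, loosely modeled on the hello_world example's columns.
dag_edges = [
    ("spend", "avg_3wk_spend"),
    ("spend", "spend_per_signup"),
    ("signups", "spend_per_signup"),
]
print(to_dot(dag_edges))
```

Rendering DOT source like this to an image is what the `{"format": "png"}` render kwarg in the examples below asks graphviz to do.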
4 changes: 4 additions & 0 deletions examples/async/README.md
Original file line number Diff line number Diff line change
@@ -40,6 +40,10 @@ is awaited.
Any node inputs are awaited on prior to node computation if they are awaitable, so you can pass
in external tasks as inputs if you want.
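That input-awaiting behavior can be sketched as follows (a simplified stand-in for illustration, not the actual `h_async` internals; `resolve_inputs` and `external_task` are hypothetical names):

```python
import asyncio
import inspect

async def resolve_inputs(inputs: dict) -> dict:
    """Await any awaitable input values before node computation; pass plain values through."""
    resolved = {}
    for name, value in inputs.items():
        resolved[name] = await value if inspect.isawaitable(value) else value
    return resolved

async def main():
    async def external_task():
        return 41

    # Mix an in-flight coroutine with a plain value; both arrive resolved.
    inputs = {"a": external_task(), "b": 1}
    ready = await resolve_inputs(inputs)
    print(ready)  # {'a': 41, 'b': 1}

asyncio.run(main())
```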

Here is the execution visualized:

![pipeline](pipeline.dot.png)

## Caveats

1. This will break in certain cases when decorating an async function (E.G. with `extract_outputs`).
1 change: 1 addition & 0 deletions examples/async/fastapi_example.py
@@ -17,6 +17,7 @@ async def call(request: fastapi.Request) -> dict:
# Can instantiate a driver within a request as well:
# dr = h_async.AsyncDriver({}, async_module, result_builder=base.DictResult())
result = await dr.execute(["pipeline"], inputs=input_data)
# dr.visualize_execution(["pipeline"], "./pipeline.dot", {"format": "png"}, inputs=input_data)
return result


Binary file added examples/async/pipeline.dot.png
5 changes: 5 additions & 0 deletions examples/dask/hello_world/README.md
@@ -12,3 +12,8 @@ File organization:
* `data_loaders.py` houses logic to load data for the business_logic.py module. The
idea is that you'd swap this module out for other ways of loading data.
* `run.py` is the script that ties everything together.

# Visualization of execution
Here is the graph of execution:

![hello_world_dask](hello_world_dask.png)
Binary file added examples/dask/hello_world/hello_world_dask.png
2 changes: 1 addition & 1 deletion examples/dask/hello_world/run.py
@@ -45,7 +45,7 @@
]
df = dr.execute(output_columns)
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './hello_world_dask', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')
logger.info(df.to_string())
client.shutdown()
11 changes: 11 additions & 0 deletions examples/data_loaders/README.md
@@ -37,3 +37,14 @@ To load/analyze the data, you can run the script `run.py`

Note that you, as the user, have to manually handle connections/whatnot for duckdb.
We are currently designing the ability to do this natively in hamilton: https://github.com/dagworks-inc/hamilton/issues/197.

# Execution Graphs
You'll see that the execution graphs are practically the same, except for the top part. That is exactly what we want:
the data loader modules define the same set of functions, but have different requirements for getting the data!
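The swap works because the driver resolves functions by name from whichever module it is handed. A much-simplified sketch of that idea (a hypothetical stand-in, not Hamilton's API):

```python
import types

# Two hypothetical "loader modules" exposing the same function name.
csv_loader = types.SimpleNamespace(spend=lambda: [10, 20, 30])  # pretend: read from CSV
mock_loader = types.SimpleNamespace(spend=lambda: [1, 1, 1])    # pretend: mock data

def total_spend(loader_module) -> int:
    """Downstream logic depends only on the function name, not where the data came from."""
    return sum(loader_module.spend())

print(total_spend(csv_loader))   # 60
print(total_spend(mock_loader))  # 3
```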

## CSV
![csv](csv_execution_graph.png)
## DuckDB
![duckdb](duckdb_execution_graph.png)
## Mock
![mock](mock_execution_graph.png)
Binary file added examples/data_loaders/csv_execution_graph.png
Binary file added examples/data_loaders/duckdb_execution_graph.png
20 changes: 17 additions & 3 deletions examples/data_loaders/load_data_csv.py
@@ -4,12 +4,26 @@


def spend(db_path: str) -> pd.DataFrame:
return pd.read_csv(os.path.join(db_path, "marketing_spend.csv"))
"""Loads the marketing spend CSV and adds a date column,
where each row is a day starting from 2020-01-01."""
df = pd.read_csv(os.path.join(db_path, "marketing_spend.csv"))
df["date"] = pd.date_range(start="2020-01-01", periods=len(df), freq="D")
return df


def churn(db_path: str) -> pd.DataFrame:
return pd.read_csv(os.path.join(db_path, "marketing_spend.csv"))
"""Loads the churn CSV and adds a date column,
where each row is a day starting from 2020-01-01.
"""
df = pd.read_csv(os.path.join(db_path, "churn.csv"))
df["date"] = pd.date_range(start="2020-01-01", periods=len(df), freq="D")
return df


def signups(db_path: str) -> pd.DataFrame:
return pd.read_csv(os.path.join(db_path, "marketing_spend.csv"))
"""Loads the signups CSV and adds a date column,
where each row is a day starting from 2020-01-01.
"""
df = pd.read_csv(os.path.join(db_path, "signups.csv"))
df["date"] = pd.date_range(start="2020-01-01", periods=len(df), freq="D")
return df
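Each loader above attaches the same kind of date column: one row per day starting 2020-01-01. Mirroring `pd.date_range(start="2020-01-01", periods=len(df), freq="D")` in plain Python (a stand-alone sketch; `daily_dates` is a hypothetical helper, not part of the example):

```python
from datetime import date, timedelta

def daily_dates(n_rows: int, start: date = date(2020, 1, 1)) -> list[date]:
    """One date per row, counting up a day at a time, like
    pd.date_range(start="2020-01-01", periods=n_rows, freq="D")."""
    return [start + timedelta(days=i) for i in range(n_rows)]

print(daily_dates(3))
# [datetime.date(2020, 1, 1), datetime.date(2020, 1, 2), datetime.date(2020, 1, 3)]
```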
Binary file added examples/data_loaders/mock_execution_graph.png
3 changes: 3 additions & 0 deletions examples/data_loaders/run.py
@@ -31,18 +31,21 @@ def duckdb():
{"db_path": "./test_data/database.duckdb"}, load_data_duckdb, prep_data
)
print(driver.execute(VARS))
# driver.visualize_execution(VARS, './duckdb_execution_graph', {"format": "png"})


@main.command()
def csv():
driver = hamilton.driver.Driver({"db_path": "test_data"}, load_data_csv, prep_data)
print(driver.execute(VARS))
# driver.visualize_execution(VARS, './csv_execution_graph', {"format": "png"})


@main.command()
def mock():
driver = hamilton.driver.Driver({}, load_data_mock, prep_data)
print(driver.execute(VARS))
# driver.visualize_execution(VARS, './mock_execution_graph', {"format": "png"})


if __name__ == "__main__":
17 changes: 17 additions & 0 deletions examples/data_quality/pandera/README.md
@@ -67,3 +67,20 @@
It is best practice to create a python virtual environment for each project/example. We omit showing that step here.
> pip install -r requirements-spark.txt
> python run_spark.py

# Visualizing Execution
Again you'll see that the visualizations don't change much between the different ways of executing. But to help you
visualize what's going on, here is the output of `visualize_execution` for each of them.

## Vanilla Hamilton
![run](./run.png)

## Dask
![run_dask](./run_dask.png)

## Ray
![run_ray](./run_ray.png)

## Pandas on Spark
![run_spark](./run_spark.png)
Binary file added examples/data_quality/pandera/run.png
2 changes: 1 addition & 1 deletion examples/data_quality/pandera/run.py
@@ -44,7 +44,7 @@
"absenteeism_time_in_hours",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './run', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
Binary file added examples/data_quality/pandera/run_dask.png
2 changes: 1 addition & 1 deletion examples/data_quality/pandera/run_dask.py
@@ -61,7 +61,7 @@
"absenteeism_time_in_hours",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './run_dask', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
Binary file added examples/data_quality/pandera/run_ray.png
1 change: 1 addition & 0 deletions examples/data_quality/pandera/run_ray.py
@@ -58,6 +58,7 @@
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {}, graphviz_kwargs=dict(graph_attr={'ratio': "1"}))
# dr.visualize_execution(output_columns, './run_ray', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
Binary file added examples/data_quality/pandera/run_spark.png
2 changes: 1 addition & 1 deletion examples/data_quality/pandera/run_spark.py
@@ -65,7 +65,7 @@
"index_col",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './run_spark', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
17 changes: 17 additions & 0 deletions examples/data_quality/simple/README.md
@@ -65,3 +65,20 @@
It is best practice to create a python virtual environment for each project/example. We omit showing that step here.
> pip install -r requirements-spark.txt
> python run_spark.py

# Visualizing Execution
Again you'll see that the visualizations don't change much between the different ways of executing. But to help you
visualize what's going on, here is the output of `visualize_execution` for each of them.

## Vanilla Hamilton
![run](./run.png)

## Dask
![run_dask](./run_dask.png)

## Ray
![run_ray](./run_ray.png)

## Pandas on Spark
![run_spark](./run_spark.png)
Binary file added examples/data_quality/simple/run.png
2 changes: 1 addition & 1 deletion examples/data_quality/simple/run.py
@@ -44,7 +44,7 @@
"absenteeism_time_in_hours",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './run', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
Binary file added examples/data_quality/simple/run_dask.png
2 changes: 1 addition & 1 deletion examples/data_quality/simple/run_dask.py
@@ -61,7 +61,7 @@
"absenteeism_time_in_hours",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './run_dask', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
Binary file added examples/data_quality/simple/run_ray.png
2 changes: 1 addition & 1 deletion examples/data_quality/simple/run_ray.py
@@ -57,7 +57,7 @@
"absenteeism_time_in_hours",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {}, graphviz_kwargs=dict(graph_attr={'ratio': "1"}))
# dr.visualize_execution(output_columns, './run_ray', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
Binary file added examples/data_quality/simple/run_spark.png
2 changes: 1 addition & 1 deletion examples/data_quality/simple/run_spark.py
@@ -65,7 +65,7 @@
"index_col",
]
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './run_spark', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')

# let's create the dataframe!
5 changes: 5 additions & 0 deletions examples/dbt/README.md
@@ -69,6 +69,11 @@ We've organized the code into two separate DBT models:
DBT in python is still in beta, and we'll be opening issues/contributing to get it more advanced! We're especially excited about FAL as it helps solve some of the
uglier python problems we hit along the way.

## Visualizing Execution
Here is the DAG generated by Hamilton for the above example:

![titanic_dbt](titanic_dbt.png)

# Future Directions

This is just a start, and we think that Hamilton + DBT have a long/exciting future together. In particular, we could:
3 changes: 3 additions & 0 deletions examples/dbt/models/train_and_infer.py
@@ -32,6 +32,9 @@ def model(dbt, session):
results = titanic_dag.execute(
final_vars=["model_predict"], inputs={"raw_passengers_df": raw_passengers_df}
)
# pip install "sf-hamilton[visualization]" to get this to work
# titanic_dag.visualize_execution(["model_predict"], './titanic_dbt', {"format": "png"},
# inputs={"raw_passengers_df": raw_passengers_df})
# Take the "predictions" result, which is an np array
predictions = results["model_predict"]
# Return a dataframe!
Binary file added examples/dbt/titanic_dbt.png
5 changes: 5 additions & 0 deletions examples/hello_world/README.md
@@ -20,3 +20,8 @@ To run things:

If you have questions, or need help with this example,
join us on [slack](https://join.slack.com/t/hamilton-opensource/shared_invite/zt-1bjs72asx-wcUTgH7q7QX1igiQ5bbdcg), and we'll try to help!

# Visualizing Execution
Here is the graph of execution - pretty simple, right?

![my_dag](my_dag.dot.png)
Binary file added examples/hello_world/my_dag.dot.png
2 changes: 1 addition & 1 deletion examples/hello_world/my_script.py
@@ -34,4 +34,4 @@

# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
dr.visualize_execution(output_columns, "./my_dag.dot", {"format": "png"})
dr.display_all_functions("./my_full_dag.dot", {"format": "png"})
# dr.display_all_functions("./my_full_dag.dot", {"format": "png"})
9 changes: 9 additions & 0 deletions examples/model_examples/scikit-learn/README.md
@@ -23,3 +23,12 @@
## running it
* run.py houses the "driver code" required to stitch everything together. It is responsible for creating the
right configuration to create the DAG, as well as determining what python modules should be loaded.

# Visualization of execution
Here is the graph of execution for the digits data set and logistic regression model:

![model_dag_digits_logistic.dot.png](model_dag_digits_logistic.dot.png)

Here is the graph of execution for the iris data set and SVM model:

![model_dag_iris_svm](model_dag_iris_svm.dot.png)
3 changes: 2 additions & 1 deletion examples/model_examples/scikit-learn/run.py
@@ -58,7 +58,8 @@ def get_model_config(model_type: str) -> dict:
"""
dr = driver.Driver(dag_config, data_module, my_train_evaluate_logic, adapter=adapter)
# ensure you have done "pip install "sf-hamilton[visualization]"" for the following to work:
# dr.visualize_execution(['classification_report', 'confusion_matrix', 'fit_clf'], './model_dag.dot', {})
# dr.visualize_execution(['classification_report', 'confusion_matrix', 'fit_clf'],
# f'./model_dag_{_data_set}_{_model_type}.dot', {"format": "png"})
results = dr.execute(["classification_report", "confusion_matrix", "fit_clf"])
for k, v in results.items():
print(k, ":\n", v)
5 changes: 5 additions & 0 deletions examples/polars/README.md
@@ -17,6 +17,11 @@ To run things:
> python my_script.py
```

# Visualizing Execution
Here is the graph of execution - which should look the same as the pandas example:

![polars](polars.png)

# Caveat with Polars
There is one major caveat with Polars to be aware of: THERE IS NO INDEX IN POLARS LIKE THERE IS WITH PANDAS.

2 changes: 1 addition & 1 deletion examples/polars/my_script.py
@@ -29,5 +29,5 @@
print(df)

# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, './polars', {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')
Binary file added examples/polars/polars.png
4 changes: 4 additions & 0 deletions examples/ray/hello_world/README.md
@@ -20,6 +20,10 @@ For the vanilla Ray implementation use:

> python run.py
Here is the visualization of the execution:

![ray_dag](ray_dag.png)

For the [Ray Workflow](https://docs.ray.io/en/latest/workflows/concepts.html) implementation use:

> python run_rayworkflow.py
Binary file added examples/ray/hello_world/ray_dag.png
2 changes: 1 addition & 1 deletion examples/ray/hello_world/run.py
@@ -32,7 +32,7 @@
# let's create the dataframe!
df = dr.execute(output_columns)
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, "./ray_dag", {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')
print(df.to_string())
ray.shutdown()
5 changes: 5 additions & 0 deletions examples/reusing_functions/README.md
@@ -10,3 +10,8 @@

You can find the code in [unique_users.py](unique_users.py) and [reusable_subdags.py](reusable_subdags.py)
and look at how we run it in [main.py](main.py).

# Visualizing Execution
Here you can see the resulting shape of the DAG that will be executed:

![reusable_subdags](reusable_subdags.png)
8 changes: 8 additions & 0 deletions examples/reusing_functions/main.py
@@ -135,6 +135,14 @@ def main():
"monthly_unique_users_CA",
]
)
# dr.visualize_execution([
# "daily_unique_users_US",
# "daily_unique_users_CA",
# "weekly_unique_users_US",
# "weekly_unique_users_CA",
# "monthly_unique_users_US",
# "monthly_unique_users_CA",
# ], "./reusable_subdags", {"format": "png"})
print(result)


Binary file added examples/reusing_functions/reusable_subdags.png
4 changes: 4 additions & 0 deletions examples/scikit-learn/README.md
@@ -11,6 +11,10 @@ To run things:
```bash
> python run.py
```
# DAG Visualization:
Here is the visualization of the execution that the transformer currently performs if you run `run.py`:

![scikit_transformer](scikit_transformer.png)

# Limitations and TODOs
- The current implementation relies on Hamilton defaults' `base.HamiltonGraphAdapter` and `base.PandasDataFrameResult` which limits the compatibility with other computation engines supported by Hamilton
5 changes: 4 additions & 1 deletion examples/scikit-learn/run.py
@@ -98,7 +98,10 @@ def transform(self, X, y=None, **kwargs) -> pd.DataFrame:
X = X.to_dict(orient="series")

X_t = self.driver_.execute(final_vars=self.final_vars, overrides=self.overrides_, inputs=X)

# self.driver_.visualize_execution(final_vars=self.final_vars,
# output_file_path="./scikit_transformer",
# render_kwargs={"format": "png"},
# inputs=X)
self.n_features_out_ = len(self.final_vars)
self.feature_names_out_ = X_t.columns.to_list()
return X_t
Binary file added examples/scikit-learn/scikit_transformer.png
5 changes: 5 additions & 0 deletions examples/spark/pandas_on_spark/README.md
@@ -12,3 +12,8 @@ File organization:
* `data_loaders.py` houses logic to load data for the business_logic.py module. The
idea is that you'd swap this module out for other ways of loading data.
* `run.py` is the script that ties everything together.

# DAG Visualization:
Here is the visualization of the execution when you execute `run.py`:

![pandas_on_spark.png](pandas_on_spark.png)
2 changes: 1 addition & 1 deletion examples/spark/pandas_on_spark/run.py
@@ -67,7 +67,7 @@
# let's create the dataframe!
df = dr.execute(output_columns)
# To visualize do `pip install "sf-hamilton[visualization]"` if you want these to work
# dr.visualize_execution(output_columns, './my_dag.dot', {})
# dr.visualize_execution(output_columns, "./pandas_on_spark", {"format": "png"})
# dr.display_all_functions('./my_full_dag.dot')
print(type(df))
print(df.to_string())
2 changes: 1 addition & 1 deletion hamilton/function_modifiers/recursive.py
@@ -97,7 +97,7 @@ class subdag(base.NodeCreator):
def feature_engineering(source_path: str) -> pd.DataFrame:
'''You could recursively use Hamilton within itself.'''
dr = driver.Driver({}, feature_modules)
df = dr.exexcute(["feature_df"], inputs={"path": source_path})
df = dr.execute(["feature_df"], inputs={"path": source_path})
return df
You instead can use the `@subdag` decorator to do the same thing, with the added benefit of visibility into the\
