From 97107fdbd60b0c8954cf80267b839e44b8febd43 Mon Sep 17 00:00:00 2001
From: datejada
Date: Tue, 28 Jan 2025 14:59:41 +0100
Subject: [PATCH] Update documentation

---
 docs/src/10-how-to-use.md  | 449 ++++++++++---------------------
 docs/src/20-tutorials.md   |  11 +-
 docs/src/30-concepts.md    | 204 ++++++++++-------
 docs/src/40-formulation.md |   8 +-
 docs/src/50-schemas.md     |  18 ++
 docs/src/60-structures.md  |  74 ++++++
 src/structures.jl          |  17 +-
 7 files changed, 356 insertions(+), 425 deletions(-)
 create mode 100644 docs/src/50-schemas.md
 create mode 100644 docs/src/60-structures.md

diff --git a/docs/src/10-how-to-use.md b/docs/src/10-how-to-use.md
index f1ba52eb..cf480427 100644
--- a/docs/src/10-how-to-use.md
+++ b/docs/src/10-how-to-use.md
@@ -36,7 +36,7 @@ julia> using TulipaEnergyModel

### (Optional) Running automatic tests

-It is nice to check that tests are passing to make sure your environment is working. (This takes a minute or two.)
+It is nice to check that tests are passing to make sure your environment is working; this takes a minute or two.

- Enter package mode (press "]")

@@ -46,6 +46,9 @@ pkg> test TulipaEnergyModel

All tests should pass.

+!!! warning "Admin rights on your local machine"
+    Ensure you have admin rights on the folder where the package is installed; otherwise, an error will appear during the tests.
+
## Running a Scenario

To run a scenario, use the function:

@@ -57,256 +60,22 @@ The `connection` should have been created and the data loaded into it using [Tul
See the [tutorials](@ref tutorials) for a complete guide on how to achieve this.
The `output_folder` argument is optional; use it if you want to export the output.

-## [Input](@id input)
-
-Currently, we only accept input from [CSV files](@ref csv-files) that follow the [Schemas](@ref schemas).
-You can also check the [`test/inputs` folder](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs) for examples.
-
-### [CSV Files](@id csv-files)
-
-!!! 
danger - This section is out of date. The update of these docs is tracked in - -Below, we have a description of the files. -At the end, in [Schemas](@ref schemas), we have the expected columns in these CSVs. - -> **Tip:** -> If you modify CSV files and want to see your modifications, the normal `git diff` command will not be informative. -> Instead, you can use -> -> ```bash -> git diff --word-diff-regex="[^[:space:],]+" -> ``` -> -> to make `git` treat the `,` as word separators. -> You can also compare two CSV files with -> -> ```bash -> git diff --no-index --word-diff-regex="[^[:space:],]+" file1 file2 -> ``` - -#### [`graph-assets-data.csv`](@id graph-assets-data) - -This file contains the list of assets and the static data associated with each of them. - -The meaning of `Missing` data depends on the parameter, for instance: - -- `group`: No group assigned to the asset. - -#### [`graph-flows-data.csv`](@id graph-flows-data) - -The same as [`graph-assets-data.csv`](@ref graph-assets-data), but for flows. Each flow is defined as a pair of assets. - -#### [`assets-data.csv`](@id assets-data) - -This file contains the yearly data of each asset. - -The investment parameters are as follows: - -- The `investable` parameter determines whether there is an investment decision for the asset or flow. -- The `investment_integer` parameter determines if the investment decision is integer or continuous. -- The `investment_cost` parameter represents the cost in the defined [timeframe](@ref timeframe). Thus, if the timeframe is a year, the investment cost is the annualized cost of the asset. -- The `investment_limit` parameter limits the total investment capacity of the asset or flow. This limit represents the potential of that particular asset or flow. Without data in this parameter, the model assumes no investment limit. - -The meaning of `Missing` data depends on the parameter, for instance: - -- `investment_limit`: There is no investment limit. 
-- `initial_storage_level`: The initial storage level is free (between the storage level limits), meaning that the optimization problem decides the best starting point for the storage asset. In addition, the first and last time blocks in a representative period are linked to create continuity in the storage level. - -#### [`flows-data.csv`](@id flows-data) - -The same as [`assets-data.csv`](@ref assets-data), but for flows. Each flow is defined as a pair of assets. - -The meaning of `Missing` data depends on the parameter, for instance: - -- `investment_limit`: There is no investment limit. - -#### [`assets-profiles.csv`] (@id assets-profiles-definition) - -These files contain information about assets and their associated profiles. Each row lists an asset, the type of profile (e.g., availability, demand, maximum or minimum storage level), and the profile's name. -These profiles are used in the [intra-temporal constraints](@ref concepts-summary). - -#### [`flows-profiles.csv`](@id flows-profiles-definition) - -This file contains information about flows and their representative period profiles for intra-temporal constraints. Each flow is defined as a pair of assets. - -#### [`rep-periods-data.csv`](@id rep-periods-data) - -Describes the [representative periods](@ref representative-periods) by their unique ID, the number of timesteps per representative period, and the resolution per timestep. Note that in the test files the resolution units are given as hours for understandability, but the resolution is technically unitless. - -#### [`rep-periods-mapping.csv`](@id rep-periods-mapping) - -Describes the periods of the [timeframe](@ref timeframe) that map into a [representative period](@ref representative-periods) and the weight of the representative periods that construct a period. Note that each weight is a decimal between 0 and 1, and that the sum of weights for a given period must also be between 0 and 1 (but do not have to sum to 1). 
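The weight rules for the representative-period mapping described above can be sketched as a small validity check. This is an illustrative sketch, not package code: the `mapping` rows are made-up data and `valid_mapping` is a hypothetical helper.

```julia
# Hypothetical mapping rows: each period of the timeframe maps to one or
# more representative periods with a weight.
mapping = [
    (period = 1, rep_period = 1, weight = 0.7),
    (period = 1, rep_period = 2, weight = 0.3),
    (period = 2, rep_period = 2, weight = 1.0),
]

# Each weight must lie in [0, 1], and the weights of a given period must sum
# to at most 1 (they do not have to sum to exactly 1). A small tolerance
# absorbs floating-point rounding.
function valid_mapping(rows)
    all(0.0 <= r.weight <= 1.0 for r in rows) || return false
    for p in unique(r.period for r in rows)
        sum(r.weight for r in rows if r.period == p) <= 1.0 + 1e-9 || return false
    end
    return true
end
```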
- -#### `profiles-rep-periods.csv` - -Define all the profiles for the `rep-periods`. -The `profile_name` is a unique identifier, the `period` and `value` define the profile, and the `rep_period` field informs the representative period. - -The profiles are linked to assets and flows in the files [`assets-profiles`](@ref assets-profiles-definition), [`assets-timeframe-profiles`](@ref assets-profiles-definition), and [`flows-profiles`](@ref flows-profiles-definition). - -#### `assets-timeframe-profiles.csv` - -Like the [`assets-profiles.csv`](@ref assets-profiles-definition), but for the [inter-temporal constraints](@ref concepts-summary). - -#### `group-asset.csv` (optional) - -This file contains the list of groups and the methods that apply to each group, along with their respective parameters. - -#### `profiles-timeframe.csv` (optional) - -Define all the profiles for the `timeframe`. -This is similar to the [`profiles-rep-periods.csv`](@ref) except that it doesn't have a `rep-period` field and if this is not passed, default values are used in the timeframe constraints. - -#### [`assets-rep-periods-partitions.csv` (optional)](@id assets-rep-periods-partitions-definition) - -Contains a description of the [partition](@ref Partition) for each asset with respect to representative periods. -If not specified, each asset will have the same time resolution as the representative period, which is hourly by default. - -There are currently three ways to specify the desired resolution, indicated in the column `specification`. -The column `partition` serves to define the partitions in the specified style. - -- `specification = uniform`: Set the resolution to a uniform amount, i.e., a time block is made of `X` timesteps. The number `X` is defined in the column `partition`. The number of timesteps in the representative period must be divisible by `X`. -- `specification = explicit`: Set the resolution according to a list of numbers separated by `;` on the `partition`. 
Each number in the list is the number of timesteps for that time block. For instance, `2;3;4` means that there are three time blocks, the first has 2 timesteps, the second has 3 timesteps, and the last has 4 timesteps. The sum of the list must be equal to the total number of timesteps in that representative period, as specified in `num_timesteps` of [`rep-periods-data.csv`](@ref rep-periods-data). -- `specification = math`: Similar to explicit, but using `+` and `x` for simplification. The value of `partition` is a sequence of elements of the form `NxT` separated by `+`, indicating `N` time blocks of length `T`. For instance, `2x3+3x6` is 2 time blocks of 3 timesteps, followed by 3 time blocks of 6 timesteps, for a total of 24 timesteps in the representative period. - -The table below shows various results for different formats for a representative period with 12 timesteps. - -| Time Block | :uniform | :explicit | :math | -| :-------------------- | :------- | :---------------------- | :---------- | -| 1:3, 4:6, 7:9, 10:12 | 3 | 3;3;3;3 | 4x3 | -| 1:4, 5:8, 9:12 | 4 | 4;4;4 | 3x4 | -| 1:1, 2:2, …, 12:12 | 1 | 1;1;1;1;1;1;1;1;1;1;1;1 | 12x1 | -| 1:3, 4:6, 7:10, 11:12 | NA | 3;3;4;2 | 2x3+1x4+1x2 | - -Note: If an asset is not specified in this file, the balance equation will be written in the lowest resolution of both the incoming and outgoing flows to the asset. - -#### [`flows-rep-periods-partitions.csv` (optional)](@id flow-rep-periods-partitions-definition) - -The same as [`assets-rep-periods-partitions.csv`](@ref assets-rep-periods-partitions-definition), but for flows. +### [Input](@id input) -If a flow is not specified in this file, the flow time resolution will be for each timestep by default (e.g., hourly). +Currently, we only accept input from CSV files that follow the [Schemas](@ref schemas). 
-#### [`assets-timeframe-partitions.csv` (optional)](@id assets-timeframe-partitions) +You can also check the [`test/inputs` folder](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs) for examples of different predefined energy systems and features. Moreover, Tulipa's Offshore Bidding Zone Case Study can be found in . It shows how to start from user-friendly files and transform the data into the input files in the [Schemas](@ref schemas) through different functions. -The same as their [`assets-rep-periods-partitions.csv`](@ref assets-rep-periods-partitions-definition) counterpart, but for the periods in the [timeframe](@ref timeframe) of the model. +### Writing the output to CSV -### [Schemas](@id schemas) - -```@eval -using Markdown, TulipaEnergyModel - -Markdown.parse( - join(["- **`$filename`**\n" * - join( - [" - `$f: $t`" for (f, t) in schema], - "\n", - ) for (filename, schema) in TulipaEnergyModel.schema_per_table_name - ] |> sort, "\n") -) -``` - -## [Structures](@id structures) - -!!! danger - This section is out of date. The update of these docs is tracked in - -The list of relevant structures used in this package are listed below: - -### EnergyProblem - -The `EnergyProblem` structure is a wrapper around various other relevant structures. -It hides the complexity behind the energy problem, making the usage more friendly, although more verbose. - -#### Fields - -- `graph`: The Graph object that defines the geometry of the energy problem. -- `representative_periods`: A vector of [Representative Periods](@ref representative-periods). -- `constraints_partitions`: Dictionaries that connect pairs of asset and representative periods to [time partitions](@ref Partition) (vectors of time blocks). -- `timeframe`: The number of periods in the `representative_periods`. -- `dataframes`: A Dictionary of dataframes used to linearize the variables and constraints. These are used internally in the model only. 
-- `groups`: A vector of [Groups](@ref group). -- `model`: A JuMP.Model object representing the optimization model. -- `solution`: A structure of the variable values (investments, flows, etc) in the solution. -- `solved`: A boolean indicating whether the `model` has been solved or not. -- `objective_value`: The objective value of the solved problem (Float64). -- `termination_status`: The termination status of the optimization model. -- `time_read_data`: Time taken (in seconds) for reading the data (Float64). -- `time_create_model`: Time taken (in seconds) for creating the model (Float64). -- `time_solve_model`: Time taken (in seconds) for solving the model (Float64). - -#### Constructor - -The `EnergyProblem` can also be constructed using the minimal constructor below. - -- `EnergyProblem(connection)`: Constructs a new `EnergyProblem` object with the given `connection` that has been created and the data loaded into it using [TulipaIO](https://github.com/TulipaEnergy/TulipaIO.jl). The `graph`, `representative_periods`, and `timeframe` are computed using `create_internal_structures`. The `constraints_partitions` field is computed from the `representative_periods`, and the other fields are initialized with default values. - -See the [basic example tutorial](@ref basic-example) to see how these can be used. - -### GraphAssetData - -This structure holds all the information of a given asset. -These are stored inside the Graph. -Given a graph `graph`, an asset `a` can be accessed through `graph[a]`. - -### GraphFlowData - -This structure holds all the information of a given flow. -These are stored inside the Graph. -Given a graph `graph`, a flow from asset `u` to asset `v` can be accessed through `graph[u, v]`. - -### Partition - -A [representative period](@ref representative-periods) will be defined with a number of timesteps. 
-A partition is a division of these timesteps into [time blocks](@ref time-blocks) such that the time blocks are disjunct (not overlapping) and that all timesteps belong to some time block. -Some variables and constraints are defined over every time block in a partition. - -For instance, for a representative period with 12 timesteps, all sets below are partitions: - -- $\{\{1, 2, 3\}, \{4, 5, 6\}, \{7, 8, 9\}, \{10, 11, 12\}\}$ -- $\{\{1, 2, 3, 4\}, \{5, 6, 7, 8\}, \{9, 10, 11, 12\}\}$ -- $\{\{1\}, \{2, 3\}, \{4\}, \{5, 6, 7, 8\}, \{9, 10, 11, 12\}\}$ - -### [Timeframe](@id timeframe) - -The timeframe is the total period we want to analyze with the model. Usually this is a year, but it can be any length of time. A timeframe has two fields: - -- `num_periods`: The timeframe is defined by a certain number of periods. For instance, a year can be defined by 365 periods, each describing a day. -- `map_periods_to_rp`: Indicates the periods of the timeframe that map into a [representative period](@ref representative-periods) and the weight of the representative period to construct that period. - -### [Representative Periods](@id representative-periods) - -The [timeframe](@ref timeframe) (e.g., a full year) is described by a selection of representative periods, for instance, days or weeks, that nicely summarize other similar periods. For example, we could model the year into 3 days, by clustering all days of the year into 3 representative days. Each one of these days is called a representative period. _TulipaEnergyModel.jl_ has the flexibility to consider representative periods of different lengths for the same timeframe (e.g., a year can be represented by a set of 4 days and 2 weeks). To obtain the representative periods, we recommend using [TulipaClustering](https://github.com/TulipaEnergy/TulipaClustering.jl). 
- -A representative period has three fields: - -- `weight`: Indicates how many representative periods are contained in the [timeframe](@ref timeframe); this is inferred automatically from `map_periods_to_rp` in the [timeframe](@ref timeframe). -- `timesteps`: The number of timesteps blocks in the representative period. -- `resolution`: The duration in time of each timestep. - -The number of timesteps and resolution work together to define the coarseness of the period. -Nothing is defined outside of these timesteps; for instance, if the representative period represents a day and you want to specify a variable or constraint with a coarseness of 30 minutes. You need to define the number of timesteps to 48 and the resolution to `0.5`. - -### [Time Blocks](@id time-blocks) - -A time block is a range for which a variable or constraint is defined. -It is a range of numbers, i.e., all integer numbers inside an interval. -Time blocks are used for the periods in the [timeframe](@ref timeframe) and the timesteps in the [representative period](@ref representative-periods). Time blocks are disjunct (not overlapping), but do not have to be sequential. - -### [Group](@id group) - -This structure holds all the information of a given group with the following fields: - -- `name`: The name of the group. -- `invest_method`: Boolean value to indicate whether or not the group has an investment method. -- `min_investment_limit`: A minimum investment limit in MW is imposed on the total investments of the assets belonging to the group. -- `max_investment_limit`: A maximum investment limit in MW is imposed on the total investments of the assets belonging to the group. +To save the solution to CSV files, you can use [`export_solution_to_csv_files`](@ref). See the [tutorials](@ref tutorials) for an example showcasing this function. 
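The `Partition` and time block definitions above can be illustrated with plain ranges. The `is_partition` helper below is hypothetical (not part of the package); it only checks the two defining properties: the blocks are disjoint and every timestep is covered.

```julia
# The first example partition above: 12 timesteps split into disjoint,
# covering time blocks.
partition = [1:3, 4:6, 7:9, 10:12]

# A set of blocks is a partition of 1:num_timesteps iff, flattened and
# sorted, it reproduces 1:num_timesteps exactly (no gaps, no overlaps).
function is_partition(blocks, num_timesteps)
    covered = sort(reduce(vcat, collect.(blocks)))
    return covered == collect(1:num_timesteps)
end
```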
## [Exploring infeasibility](@id infeasible)

If your model is infeasible, you can try exploring the infeasibility with [JuMP.compute_conflict!](https://jump.dev/JuMP.jl/stable/api/JuMP/#JuMP.compute_conflict!) and [JuMP.copy_conflict](https://jump.dev/JuMP.jl/stable/api/JuMP/#JuMP.copy_conflict).

-> **Note:** Not all solvers support this functionality.
+!!! warning "Check your solver options!"
+    Not all solvers support this functionality; check your solver's documentation to see whether yours does.

Use `energy_problem.model` for the model argument. For instance:

@@ -332,21 +101,26 @@ create_model!(energy_problem; enable_names = false)

For more information, see the [JuMP documentation](https://jump.dev/JuMP.jl/stable/tutorials/getting_started/performance_tips/#Disable-string-names).

+## Finding an input parameter
+
+!!! tip "Are you looking for an input parameter?"
+    Please visit the [Model Parameters](@ref schemas) section for a description and location of the input parameters mentioned in this section.
+
## Storage specific setups

### [Seasonal and non-seasonal storage](@id seasonal-setup)

Section [Storage Modeling](@ref storage-modeling) explains the main concepts for modeling seasonal and non-seasonal storage in _TulipaEnergyModel.jl_. To define if an asset is one type or the other, consider the following:

-- _Seasonal storage_: When the storage capacity of an asset is greater than the total length of representative periods, we recommend using the inter-temporal constraints. To apply these constraints, you must set the input parameter `is_seasonal` to `true` in the [`assets-data.csv`](@ref schemas).
-- _Non-seasonal storage_: When the storage capacity of an asset is lower than the total length of representative periods, we recommend using the intra-temporal constraints. To apply these constraints, you must set the input parameter `is_seasonal` to `false` in the [`assets-data.csv`](@ref schemas). 
+- _Seasonal storage_: When the storage capacity of an asset is greater than the total length of representative periods, we recommend using the inter-temporal constraints. To apply these constraints, you must set the input parameter `is_seasonal` to `true`. +- _Non-seasonal storage_: When the storage capacity of an asset is lower than the total length of representative periods, we recommend using the intra-temporal constraints. To apply these constraints, you must set the input parameter `is_seasonal` to `false`. -> **Note:** -> If the input data covers only one representative period for the entire year, for example, with 8760-hour timesteps, and you have a monthly hydropower plant, then you should set the `is_seasonal` parameter for that asset to `false`. This is because the length of the representative period is greater than the storage capacity of the storage asset. +!!! info + If the input data covers only one representative period for the entire year, for example, with 8760-hour timesteps, and you have a monthly hydropower plant, then you should set the `is_seasonal` parameter for that asset to `false`. This is because the length of the representative period is greater than the storage capacity of the storage asset. ### [The energy storage investment method](@id storage-investment-setup) -Energy storage assets have a unique characteristic wherein the investment is based not solely on the capacity to charge and discharge, but also on the energy capacity. Some storage asset types have a fixed duration for a given capacity, which means that there is a predefined ratio between energy and power. For instance, a battery of 10MW/unit and 4h duration implies that the energy capacity is 40MWh. Conversely, other storage asset types don't have a fixed ratio between the investment of capacity and storage capacity. Therefore, the energy capacity can be optimized independently of the capacity investment, such as hydrogen storage in salt caverns. 
To define if an energy asset is one type or the other then consider the following parameter setting in the file [`assets-data.csv`](@ref schemas): +Energy storage assets have a unique characteristic wherein the investment is based not solely on the capacity to charge and discharge, but also on the energy capacity. Some storage asset types have a fixed duration for a given capacity, which means that there is a predefined ratio between energy and power. For instance, a battery of 10MW/unit and 4h duration implies that the energy capacity is 40MWh. Conversely, other storage asset types don't have a fixed ratio between the investment of capacity and storage capacity. Therefore, the energy capacity can be optimized independently of the capacity investment, such as hydrogen storage in salt caverns. To define if an energy asset is one type or the other then consider the following parameters: - _Investment energy method_: To use this method, set the parameter `storage_method_energy` to `true`. In addition, it is necessary to define: @@ -357,13 +131,13 @@ Energy storage assets have a unique characteristic wherein the investment is bas - _Fixed energy-to-power ratio method_: To use this method, set the parameter `storage_method_energy` to `false`. In addition, it is necessary to define the parameter `energy_to_power_ratio` to establish the predefined duration of the storage asset or ratio between energy and power. Note that all the investment costs should be allocated in the parameter `investment_cost`. -In addition, the parameter `capacity_storage_energy` in the [`graph-assets-data.csv`](@ref schemas) defines the energy per unit of storage capacity invested in (e.g., MWh/unit). +In addition, the parameter `capacity_storage_energy` defines the energy per unit of storage capacity invested in (e.g., MWh/unit). For more details on the constraints that apply when selecting one method or the other, please visit the [`mathematical formulation`](@ref formulation) section. 
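The battery arithmetic in the fixed energy-to-power ratio discussion above can be written out explicitly. This is a numerical sketch with illustrative variable names, not the package's internal formulation.

```julia
# Fixed energy-to-power ratio method: the invested power capacity and the
# predefined duration together imply the energy capacity.
capacity = 10.0              # MW per invested unit (battery example above)
energy_to_power_ratio = 4.0  # h, the fixed storage duration

# Implied energy per invested unit, i.e. the role played by the
# `capacity_storage_energy` parameter (MWh/unit)
implied_energy = capacity * energy_to_power_ratio  # 40.0 MWh
```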
### [Control simultaneous charging and discharging](@id storage-binary-method-setup) -Depending on the configuration of the energy storage assets, it may or may not be possible to charge and discharge them simultaneously. For instance, a single battery cannot charge and discharge at the same time, but some pumped hydro storage technologies have separate components for charging (pump) and discharging (turbine) that can function independently, allowing them to charge and discharge simultaneously. To account for these differences, the model provides users with three options for the `use_binary_storage_method` parameter in the [`assets-data.csv`](@ref schemas) file: +Depending on the configuration of the energy storage assets, it may or may not be possible to charge and discharge them simultaneously. For instance, a single battery cannot charge and discharge at the same time, but some pumped hydro storage technologies have separate components for charging (pump) and discharging (turbine) that can function independently, allowing them to charge and discharge simultaneously. To account for these differences, the model provides users with three options for the `use_binary_storage_method` parameter: - `binary`: the model adds a binary variable to prevent charging and discharging simultaneously. - `relaxed_binary`: the model adds a binary variable that allows values between 0 and 1, reducing the likelihood of charging and discharging simultaneously. This option uses a tighter set of constraints close to the convex hull of the full formulation, resulting in fewer instances of simultaneous charging and discharging in the results. @@ -373,7 +147,7 @@ For more details on the constraints that apply when selecting this method, pleas ## [Setting up unit commitment constraints](@id unit-commitment-setup) -The unit commitment constraints are only applied to producer and conversion assets. 
The `unit_commitment` parameter must be set to `true` to include the constraints in the [`assets-data.csv`](@ref schemas). Additionally, the following parameters should be set in that same file:
+The unit commitment constraints are only applied to producer and conversion assets. The `unit_commitment` parameter must be set to `true` to include the constraints. Additionally, the following parameters should be set:

- `unit_commitment_method`: It determines which unit commitment method to use. The current version of the code only includes the basic version. Future versions will add more detailed constraints as additional options.
- `units_on_cost`: Objective function coefficient on the `units_on` variable (e.g., no-load cost or idling cost in kEUR/h/unit)

@@ -384,7 +158,7 @@ For more details on the constraints that apply when selecting this method, pleas

## [Setting up ramping constraints](@id ramping-setup)

-The ramping constraints are only applied to producer and conversion assets. The `ramping` parameter must be set to `true` to include the constraints in the [`assets-data.csv`](@ref schemas). Additionally, the following parameters should be set in that same file:
+The ramping constraints are only applied to producer and conversion assets. The `ramping` parameter must be set to `true` to include the constraints. Additionally, the following parameters should be set:

- `max_ramp_up`: Maximum ramping up rate as a portion of the capacity of asset (p.u./h)
- `max_ramp_down`: Maximum ramping down rate as a portion of the capacity of asset (p.u./h)

@@ -395,15 +169,17 @@ For more details on the constraints that apply when selecting this method, pleas

## [Setting up a maximum or minimum outgoing energy limit](@id max-min-outgoing-energy-setup)

For the model to add constraints for a [maximum or minimum energy limit](@ref inter-temporal-energy-constraints) for an asset throughout the model's timeframe (e.g., a year), we need to establish a couple of parameters:

-- `is_seasonal = true` in the [`assets-data.csv`](@ref schemas). This parameter enables the model to use the inter-temporal constraints.
-- `max_energy_timeframe_partition` $\neq$ `missing` or `min_energy_timeframe_partition` $\neq$ `missing` in the [`assets-data.csv`](@ref schemas). This value represents the peak energy that will be then multiplied by the profile for each period in the timeframe.
-  > **Note:**
-  > These parameters are defined per period, and the default values for profiles are 1.0 p.u. per period. If the periods are determined daily, the energy limit for the whole year will be 365 times `max`or `min_energy_timeframe_partition`.
-- (optional) `profile_type` and `profile_name` in the [`assets-timeframe-profiles.csv`](@ref schemas) and the profile values in the [`profiles-timeframe.csv`](@ref schemas). If there is no profile defined, then by default it is 1.0 p.u. for all periods in the timeframe.
-- (optional) define a period partition in [`assets-timeframe-partitions.csv`](@ref schemas). If there is no partition defined, then by default the constraint is created for each period in the timeframe, otherwise, it will consider the partition definition in the file.
+- `is_seasonal = true`. This parameter enables the model to use the inter-temporal constraints.
+- `max_energy_timeframe_partition` $\neq$ `missing` or `min_energy_timeframe_partition` $\neq$ `missing`. This value represents the peak energy that will then be multiplied by the profile for each period in the timeframe.
+
+!!! info
+    These parameters are defined per period, and the default values for profiles are 1.0 p.u. per period. If the periods are determined daily, the energy limit for the whole year will be 365 times `max_energy_timeframe_partition` or `min_energy_timeframe_partition`.
+
+- (optional) `profile_type` and `profile_name` in the timeframe files. If there is no profile defined, then by default it is 1.0 p.u. for all periods in the timeframe.
+- (optional) define a period partition in timeframe partition files. 
If there is no partition defined, then by default the constraint is created for each period in the timeframe, otherwise, it will consider the partition definition in the file. -> **Tip:** -> If you want to set a limit on the maximum or minimum outgoing energy for a year with representative days, you can use the partition definition to create a single partition for the entire year to combine the profile. +!!! tip "Tip" + If you want to set a limit on the maximum or minimum outgoing energy for a year with representative days, you can use the partition definition to create a single partition for the entire year to combine the profile. ### Example: Setting Energy Limits @@ -412,8 +188,8 @@ Let's assume we have a year divided into 365 days because we are using days as p | Profile | Period Partitions | Example | | ------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | None | None | The default profile is 1.p.u. for each period and since there are no period partitions, the constraints will be for each period (i.e., daily). So the outgoing energy of the asset for each day must be less than or equal to 10MWh. | -| Defined | None | The profile definition and value will be in the [`assets-timeframe-profiles.csv`](@ref schemas) and [`profiles-timeframe.csv`](@ref schemas) files. 
For example, we define a profile that has the following first four values: 0.6 p.u., 1.0 p.u., 0.8 p.u., and 0.4 p.u. There are no period partitions, so constraints will be for each period (i.e., daily). Therefore the outgoing energy of the asset for the first four days must be less than or equal to 6MWh, 10MWh, 8MWh, and 4MWh. | -| Defined | Defined | Using the same profile as above, we now define a period partition in the [`assets-timeframe-partitions.csv`](@ref schemas) file as `uniform` with a value of 2. This value means that we will aggregate every two periods (i.e., every two days). So, instead of having 365 constraints, we will have 183 constraints (182 every two days and one last constraint of 1 day). Then the profile is aggregated with the sum of the values inside the periods within the partition. Thus, the outgoing energy of the asset for the first two partitions (i.e., every two days) must be less than or equal to 16MWh and 12MWh, respectively. | +| Defined | None | The profile definition and value will be in the timeframe profiles files. For example, we define a profile that has the following first four values: 0.6 p.u., 1.0 p.u., 0.8 p.u., and 0.4 p.u. There are no period partitions, so constraints will be for each period (i.e., daily). Therefore the outgoing energy of the asset for the first four days must be less than or equal to 6MWh, 10MWh, 8MWh, and 4MWh. | +| Defined | Defined | Using the same profile as above, we now define a period partition in the timeframe partitions file as `uniform` with a value of 2. This value means that we will aggregate every two periods (i.e., every two days). So, instead of having 365 constraints, we will have 183 constraints (182 every two days and one last constraint of 1 day). Then the profile is aggregated with the sum of the values inside the periods within the partition. 
Thus, the outgoing energy of the asset for the first two partitions (i.e., every two days) must be less than or equal to 16MWh and 12MWh, respectively. |

## [Defining a group of assets](@id group-setup)

@@ -421,29 +197,28 @@ A group of assets refers to a set of assets that share certain constraints. For

In order to define the groups in the model, the following steps are necessary:

-1. Create a group in the [`group-asset.csv`](@ref schemas) file by defining the `name` property and its parameters.
-2. In the file [`graph-assets-data.csv`](@ref schemas), assign assets to the group by setting the `name` in the `group` parameter/column.
+1. Create a group in the `group_asset` table (or CSV file) by defining the `name` property and its parameters.
+2. Assign assets to the group by setting the `name` in the `group` parameter/column of the asset file.

-   > **Note:**
-   > A missing value in the parameter `group` in the [`graph-assets-data.csv`](@ref schemas) means that the asset does not belong to any group.
+!!! info
+    A missing value in the parameter `group` means that the asset does not belong to any group.

Groups are useful to represent several common constraints; the following group constraints are available.

### [Setting up a maximum or minimum investment limit for a group](@id investment-group-setup)

-The mathematical formulation of the maximum and minimum investment limit for group constraints is available [here](@ref investment-group-constraints). The parameters to set up these constraints in the model are in the [`group-asset.csv`](@ref schemas) file.
+The mathematical formulation of the maximum and minimum investment limit for group constraints is available [here](@ref investment-group-constraints).

- `invest_method = true`. This parameter enables the model to use the investment group constraints.
- `min_investment_limit` $\neq$ `missing` or `max_investment_limit` $\neq$ `missing`.
This value represents the limits that will be imposed on the investment that belongs to the group. - > **Notes:** - > - > 1. A missing value in the parameters `min_investment_limit` and `max_investment_limit` means that there is no investment limit. - > 2. These constraints are applied to the investments each year. The model does not yet have investment limits to a group's accumulated invested capacity. +!!! info + 1. A missing value in the parameters `min_investment_limit` and `max_investment_limit` means that there is no investment limit. + 2. These constraints are applied to the investments each year. The model does not yet have investment limits to a group's accumulated invested capacity. ### Example: Group of Assets -Let's explore how the groups are set up in the test case called [Norse](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs/Norse). First, let's take a look at the group-asset.csv file: +Let's explore how the groups are set up in the test case called [Norse](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs/Norse). First, let's take a look at the `group-asset.csv` file: ```@example display-group-setup using DataFrames # hide @@ -454,7 +229,7 @@ assets = CSV.read(input_asset_file, DataFrame, header = 1) # hide In the given data, there are two groups: `renewables` and `ccgt`. Both groups have the `invest_method` parameter set to `true`, indicating that investment group constraints apply to both. For the `renewables` group, the `min_investment_limit` parameter is missing, signifying that there is no minimum limit imposed on the group. However, the `max_investment_limit` parameter is set to 40000 MW, indicating that the total investments of assets in the group must be less than or equal to this value. 
In contrast, the `ccgt` group has a missing value in the `max_investment_limit` parameter, indicating no maximum limit, while the `min_investment_limit` is set to 10000 MW for the total investments in that group.

-Let's now explore which assets are in each group. To do so, we can take a look at the graph-assets-data.csv file:
+Let's now explore which assets are in each group. To do so, we can take a look at the `asset.csv` file:

```@example display-group-setup
input_asset_file = "../../test/inputs/Norse/asset.csv" # hide
@@ -464,97 +239,111 @@ assets = assets[.!ismissing.(assets.group), [:asset, :type, :group]] # hide
```

Here we can see that the assets `Asgard_Solar` and `Midgard_Wind` belong to the `renewables` group, while the assets `Asgard_CCGT` and `Midgard_CCGT` belong to the `ccgt` group.

-> **Note:**
-> If the group has a `min_investment_limit`, then assets in the group have to allow investment (`investable = true`) for the model to be feasible. If the assets are not `investable` then they cannot satisfy the minimum constraint.
+!!! info
+    If the group has a `min_investment_limit`, then the assets in the group have to allow investment (`investable = true`) for the model to be feasible. If the assets are not `investable`, they cannot satisfy the minimum constraint.

## [Setting up multi-year investments](@id multi-year-setup)

-!!! danger
-    This section is out of date. The update of these docs is tracked in
+!!! warning "This feature is under a major refactor"
+    This section might have out-of-date information. The update of these docs is tracked in

It is possible to simultaneously model different years, which is especially relevant for modeling multi-year investments. Multi-year investments refer to making investment decisions at different points in time, such that a pathway of investments can be modeled. This is particularly useful when long-term scenarios are modeled, but modeling each year is not practical.
Or, in a business case, investment decisions are made in different years, which has an impact on the cash flow.

+### Filling the input data
+
In order to set up a model with year information, the following steps are necessary.

-- Fill in all the years in [`year-data.csv`](@ref schemas) file by defining the `year` property and its parameters.
+#### Year data

-  Differentiate milestone years and non-milestone years.
+Fill in all the years in the [`year-data.csv`](@ref schemas) file by defining the `year` property and its parameters. Differentiate milestone years and non-milestone years.

-  - Milestone years are the years you would like to model, e.g., if you want to model operation and/or investments (it is possibile to not allow investments) in 2030, 2040, and 2050. These 3 years are then milestone years.
-  - Non-milestone years are the investment years of existing units. For example, you want to consider a existing wind unit that is invested in 2020, then 2020 is a non-milestone year.
-    > **Note:** A year can both be a year that you want to model and that there are existing units invested, then this year is a milestone year.
+- Milestone years are the years you would like to model, e.g., if you want to model operation and/or investments (it is possible to not allow investments) in 2030, 2040, and 2050. These 3 years are then milestone years.
+- Non-milestone years are the investment years of existing units. For example, if you want to consider an existing wind unit that was invested in 2020, then 2020 is a non-milestone year.

+!!! info
+    A year can be both a year that you want to model and a year in which existing units were invested; in that case, it is a milestone year.
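The classification above can be sketched in a few lines of Julia. This is an illustrative snippet only (it is not part of the TulipaEnergyModel API, and the year lists are made up for the example):

```julia
# Illustrative only: classifying years for a multi-year setup.
# Milestone years are the years you model; non-milestone years are
# commission years of existing units that are not modeled directly.
modeled_years = [2030, 2040, 2050]   # years we want to optimize
commission_years = [2020, 2030]      # years in which existing units were built

milestone_years = sort(unique(modeled_years))
non_milestone_years = sort(setdiff(commission_years, milestone_years))

# 2030 hosts existing units AND is modeled, so it stays a milestone year;
# 2020 only hosts existing units, so it is a non-milestone year.
println(milestone_years)      # [2030, 2040, 2050]
println(non_milestone_years)  # [2020]
```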
-- Fill in the parameters in [`graph-assets-data.csv`](@ref schemas) and [`graph-flows-data.csv`](@ref schemas). These parameters are for the assets across all the years, i.e., not dependent on years. Examples are lifetime (both `technical_lifetime` and `economic_lifetime`) and capacity of a unit. +#### Commission year data - You also have to choose a `investment_method` for the asset, between `none`, `simple`, and `compact`. The below tables shows what happens to the activation of the investment and decommission variable for certain investment methods and the `investable` parameter. +Fill in the parameters related to the investment year (`commission_year` in the data) of the asset, i.e., investment costs and fixed costs. - Consider you only want to model operation without investments, then you would need to set `investable_method` to `none`. Neither investment variables and decommission variables are activated. And here the `investable_method` overrules `investable`, because the latter does not matter. +#### Assets investment data - > **Note:** Although it is called `investment_method`, you can see from the table that, actually, it controls directly the activation of the decommission variable. The investment variable is controlled by `investable`, which is overruled by `investable_method` in case of a conflict (i.e., for the `none` method). +Fill in the parameters in the asset file. These parameters are for the assets across all the years, i.e., not dependent on years. Examples are lifetime (both `technical_lifetime` and `economic_lifetime`) and capacity of a unit. 
- | investment_method | investable | investment variable | decommission variable | - | ----------------- | ---------- | ------------------- | --------------------- | - | none | true | false | false | - | none | false | false | false | - | simple | true | true | true | - | simple | false | false | true | - | compact | true | true | true | - | compact | false | false | true | +You also have to choose a `investment_method` for the asset, between `none`, `simple`, and `compact`. The below tables shows what happens to the activation of the investment and decommission variable for certain investment methods and the `investable` parameter. - For more details on the constraints that apply when selecting these methods, please visit the [`mathematical formulation`](@ref formulation) section. +Consider you only want to model operation without investments, then you would need to set `investable_method` to `none`. Neither investment variables and decommission variables are activated. And here the `investable_method` overrules `investable`, because the latter does not matter. - > **Note:** `compact` method can only be applied to producer assets and conversion assets. Transport assets and storage assets can only use `simple` method. +!!! info + Although it is called `investment_method`, you can see from the table that, actually, it controls directly the activation of the decommission variable. The investment variable is controlled by `investable`, which is overruled by `investable_method` in case of a conflict (i.e., for the `none` method). - - Fill in the assets and flows information in [`assets-data.csv`](@ref schemas) and [`flows-data.csv`](@ref schemas). 
+| investment_method | investable | investment variable | decommission variable | +| ----------------- | ---------- | ------------------- | --------------------- | +| none | true | false | false | +| none | false | false | false | +| simple | true | true | true | +| simple | false | false | true | +| compact | true | true | true | +| compact | false | false | true | - - In the `year` column, fill in all the milestone years. In the `commission_year` column, fill in the investment years of the existing assets that are still available in this `year`. - - If the `commission_year` is a non-milestone year, then it means the row is for an existing unit. The `investable` has to be set to `false`, and you put the existing units in the column `initial_units`. - - If the `commission_year` is a milestone year, then you put the existing units in the column `initial_units`. Depending on whether you want to model investments or not, you put the `investable` to either `true` or `false`. +For more details on the constraints that apply when selecting these methods, please visit the [`mathematical formulation`](@ref formulation) section. - Let's explain further using an example. To do so, we can take a look at the assets-data.csv file: +!!! info + The `compact` method can only be applied to producer assets and conversion assets. Transport assets and storage assets can only use `simple` method. - ```@example multi-year-setup - using DataFrames # hide - using CSV # hide - input_asset_file = "../../test/inputs/Multi-year Investments/asset-both.csv" # hide - assets_data = CSV.read(input_asset_file, DataFrame) # hide - assets_data = assets_data[1:10, [:asset, :milestone_year, :commission_year, :initial_units]] # hide - ``` +#### Assets and flows information - We allow investments of `ocgt`, `ccgt`, `battery`, `wind`, and `solar` in 2030. +Fill in the assets and flows information. - - `ocgt` has no existing units. 
- - `ccgt` has 1 existing units, invested in 2028, and still available in 2030. - - `ccgt` has 0.07 existing units, invested in 2020, and still available in 2030. Another 0.02 existing units, invested in 2030. - - `wind` has 0.07 existing units, invested in 2020, and still available in 2030. Another 0.02 existing units, invested in 2030. - - `solar` has no existing units. +- In the `year` column, fill in all the milestone years. In the `commission_year` column, fill in the investment years of the existing assets that are still available in this `year`. + - If the `commission_year` is a non-milestone year, then it means the row is for an existing unit. The `investable` has to be set to `false`, and you put the existing units in the column `initial_units`. + - If the `commission_year` is a milestone year, then you put the existing units in the column `initial_units`. Depending on whether you want to model investments or not, you put the `investable` to either `true` or `false`. - > **Note:** We only consider the existing units which are still available in the milestone years. +Let's explain further using an example. To do so, we can take a look at the `asset-both.csv` file: -- Fill in relevant profiles in [`assets-profiles.csv`](@ref schemas), [`flows-profiles.csv`](@ref schemas), and [`profiles-rep-periods.csv`](@ref schemas). Important to know that you can use different profiles for assets that are invested in different years. You fill in the profile names in `assets-profiles.csv` for relevant years. In `profiles-rep-periods.csv`, you relate the profile names with the modeled years. 
+```@example multi-year-setup
+using DataFrames # hide
+using CSV # hide
+input_asset_file = "../../test/inputs/Multi-year Investments/asset-both.csv" # hide
+assets_data = CSV.read(input_asset_file, DataFrame) # hide
+assets_data = assets_data[1:10, [:asset, :milestone_year, :commission_year, :initial_units]] # hide
+```
+
+We allow investments of `ocgt`, `ccgt`, `battery`, `wind`, and `solar` in 2030.
+
+- `ocgt` has no existing units.
+- `ccgt` has 1 existing unit, invested in 2028, and still available in 2030.
+- `battery` has 0.07 existing units, invested in 2020, and still available in 2030. Another 0.02 existing units, invested in 2030.
+- `wind` has 0.07 existing units, invested in 2020, and still available in 2030. Another 0.02 existing units, invested in 2030.
+- `solar` has no existing units.

+!!! info
+    We only consider the existing units which are still available in the milestone years.

+#### Profiles information

+Fill in relevant profiles in the files. It is important to know that you can use different profiles for assets that are invested in different years. You fill in the profile names in `assets-profiles.csv` for the relevant years. In `profiles-rep-periods.csv`, you relate the profile names with the modeled years.

Let's explain further using an example.
To do so, we can take a look at the `assets-profiles.csv` file: + +```@example multi-year-setup +input_asset_file = "../../test/inputs/Multi-year Investments/assets-profiles.csv" # hide +assets_profiles = CSV.read(input_asset_file, DataFrame, header = 1) # hide +assets_profiles = assets_profiles[1:2, :] # hide +``` - For economic representation, the following parameters need to be set up: +We have two profiles for `wind` invested in 2020 and 2030. Imagine these are two wind turbines with different efficiencies due to the year of manufacture. These are reflected in the profiles for the model year 2030, which are defined in the `profiles-rep-periods.csv` file. - - [optional] `discount year` and `discount rate` in the `model-parameters-example.toml` file: model-wide discount year and rate. By default, the model will use a discount rate of 0, and a discount year of the first milestone year. In other words, the costs will be discounted to the cost of the first milestone year. +### Economic representation - - `discount_rate` in [`graph-assets-data.csv`](@ref schemas) and [`graph-flows-data.csv`](@ref schemas): technology-specific discount rates. +For economic representation, the following parameters need to be set up: - - `economic_lifetime` in [`graph-assets-data.csv`](@ref schemas) and [`graph-flows-data.csv`](@ref schemas): used for discounting the costs. +- [optional] `discount year` and `discount rate` in the `model-parameters-example.toml` file: model-wide discount year and rate. By default, the model will use a discount rate of 0, and a discount year of the first milestone year. In other words, the costs will be discounted to the cost of the first milestone year. +- `discount_rate`: technology-specific discount rates. +- `economic_lifetime`: used for discounting the costs. - > **Note:** Since the model explicitly discounts, all the inputs for costs should be given in the costs of the relevant year. 
For example, to model investments in 2030 and 2050, the `investment_cost` should be given in 2030 costs and 2050 costs, respectively. +!!! info + Since the model explicitly discounts, all the inputs for costs should be given in the costs of the relevant year. For example, to model investments in 2030 and 2050, the `investment_cost` should be given in 2030 costs and 2050 costs, respectively. - For more details on the formulas for economic representation, please visit the [`mathematical formulation`](@ref formulation) section. +For more details on the formulas for economic representation, please visit the [`mathematical formulation`](@ref formulation) section. diff --git a/docs/src/20-tutorials.md b/docs/src/20-tutorials.md index 15c44e2b..2f9268f7 100644 --- a/docs/src/20-tutorials.md +++ b/docs/src/20-tutorials.md @@ -102,21 +102,22 @@ There is currently no reason to manually create and maintain these structures yo To avoid having to update this documentation whenever we make changes to the internals of TulipaEnergyModel before the v1.0.0 release, we will keep this section empty until then. -### Change optimizer and specify parameters +## Change optimizer and specify parameters By default, the model is solved using the [HiGHS](https://github.com/jump-dev/HiGHS.jl) optimizer (or solver). To change this, we can give the functions `run_scenario` or `solve_model!` a different optimizer. !!! warning - HiGHS is the only open source solver that we recommend. GLPK and Cbc are missing features and are not (fully) tested for Tulipa. + HiGHS is the only open source solver that we recommend. GLPK and Cbc are not (fully) tested for Tulipa. 
-For instance, we run the [GLPK](https://github.com/jump-dev/GLPK.jl) optimizer below: +For instance, let's run the Tiny example using the [GLPK](https://github.com/jump-dev/GLPK.jl) optimizer: ```@example using DuckDB, TulipaIO, TulipaEnergyModel, GLPK input_dir = "../../test/inputs/Tiny" # hide +# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny") connection = DBInterface.connect(DuckDB.DB) read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name) energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer) @@ -130,8 +131,8 @@ using GLPK solution = solve_model!(energy_problem, GLPK.Optimizer) ``` -Notice that, in any of these cases, we need to explicitly add the GLPK package -ourselves and add `using GLPK` before using `GLPK.Optimizer`. +!!! info + Notice that, in any of these cases, we need to explicitly add the GLPK package ourselves and add `using GLPK` before using `GLPK.Optimizer`. In any of these cases, default parameters for the `GLPK` optimizer are used, which you can query using [`default_parameters`](@ref). diff --git a/docs/src/30-concepts.md b/docs/src/30-concepts.md index 500df658..84529c09 100644 --- a/docs/src/30-concepts.md +++ b/docs/src/30-concepts.md @@ -110,8 +110,8 @@ So, let's recap: - The flow from the balance hub to the demand (`balance, demand`) has a uniform resolution of 3 hours; therefore, it has two blocks, one from hours 1 to 3 (i.e., `1:3`) and the other from hours 4 to 6 (i.e., `4:6`). - The `balance` hub integrates all the different assets with their different resolutions. The lowest resolution of all connections determines the balance equation for this asset. Therefore, the resulting resolution is into two blocks, one from hours 1 to 4 (i.e., `1:4`) and the other from hours 5 to 6 (i.e., `5:6`). -> **Note:** -> This example demonstrates that different time resolutions can be assigned to each asset and flow in the model. 
Additionally, the resolutions do not need to be uniform and can vary throughout the horizon.
+!!! tip "Fully flexible temporal resolution"
+    This example demonstrates that different time resolutions can be assigned to each asset and flow in the model. Furthermore, the resolutions do not need to be multiples of one another or evenly distributed, and they can vary throughout the time horizon.

The complete input data for this example can be found [here](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs/Variable%20Resolution).

@@ -123,7 +123,11 @@ Due to the flexible resolution, we must explicitly state how the constraints are

- How the resolution is determined (regardless of whether it is highest or lowest): the incoming flows, the outgoing flows, or a combination of both.
- How the related parameters are treated. We use two methods of aggregation, _sum_ or _mean_.

-Below is the table outlining the details for each type of constraint. Note _min_ means highest resolution, and _max_ means lowest resolution.
+Below is the table outlining the details for each type of constraint.
+
+!!! tip "Before reading the table consider this:"
+    To calculate the resolution of the constraints, we use the `min` function to determine the highest resolution in the constraint, and the `max` function to determine the lowest resolution.
+    For example, the consumer balance is a `power`-type constraint that involves both inputs and outputs, so the constraint resolution must be the minimum resolution among them to ensure it is at the highest resolution. If you have an input with `1h` resolution and an output with `2h` resolution, then the resolution of the constraint must be `1h` (i.e., `min(1h, 2h)`).
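The `min`/`max` rule can be written out directly. The snippet below is an illustrative sketch only (not TulipaEnergyModel API); the durations are made-up hourly block lengths:

```julia
# Illustrative only: picking the constraint resolution from the flow resolutions.
# `power`-type constraints use the highest resolution -> min of the durations;
# `energy`-type constraints use the lowest resolution -> max of the durations.
incoming_duration = 1  # hours per timestep block of the incoming flow
outgoing_duration = 2  # hours per timestep block of the outgoing flow

power_constraint_resolution = min(incoming_duration, outgoing_duration)   # highest resolution
energy_constraint_resolution = max(incoming_duration, outgoing_duration)  # lowest resolution

println(power_constraint_resolution)   # 1
println(energy_constraint_resolution)  # 2
```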
| Name | Variables involved | Profile involved | Constraint type | Resolution of the constraints | Profile aggregation | | ---------------------------------------------- | ------------------------------ | ---------------- | --------------- | ---------------------------------------------------------------------------------------- | ------------------- | @@ -167,7 +171,7 @@ As shown in the table, the resolution of the storage balance is energy, which is #### Consumer Balance -The flows coming from the balancing hub are defined every 3 hours. Therefore, the flows impose the lowest resolution and the demand is balanced every 3 hours. The input demand is aggregated as the mean of the hourly values in the input data. As with the storage balance, the flows are multiplied by their durations. +The flows coming from the balancing hub are defined every 3 hours. Then, _min(incoming flows, outgoing flows)_ becomes _min(3, -) = 3_, and thus balanced every 3 hours. The input demand is aggregated as the mean of the hourly values in the input data. ```math \begin{aligned} @@ -180,7 +184,7 @@ The flows coming from the balancing hub are defined every 3 hours. Therefore, th #### Hub Balance -The hub balance is quite interesting because it integrates several flow resolutions. Remember that we didn't define any specific time resolution for this asset. Therefore, the highest resolution of all incoming and outgoing flows in the horizon implies that the hub balance must be imposed for all 6 blocks. The balance must account for each flow variable's duration in each block. +The hub balance is quite interesting because it integrates several flow resolutions. Remember that we didn't define any specific time resolution for this asset. 
Therefore, the highest resolution of all incoming and outgoing flows in the horizon implies that the hub balance must be imposed for all 6 blocks, since _min(incoming flows, outgoing flows)_ becomes _min(1,2,3) = 1_.

```math
\begin{aligned}
@@ -340,14 +344,11 @@ The level of reduction and approximation error will depend on the case study. So

## [Flexible Time Resolution in the Unit Commitment and Ramping Constraints](@id flex-time-res-uc)

-!!! danger
-    This section is out of date. The update of these docs is tracked in
-
In the previous section, we have seen how the flexible temporal resolution is handled for the model's flow capacity and balance constraints. Here, we show how flexible time resolution is applied when considering the model's unit commitment and ramping constraints. Let's consider the example in the folder [`test/inputs/UC-ramping`](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs/UC-ramping) to explain how all these constraints are created in _TulipaEnergyModel.jl_ when having the flexible time resolution.

![unit-commitment-case-study](./figs/unit-commitment-case-study.png)

-The example demonstrates various assets that supply demand. Each asset has different input data in the `assets-data` file, which activates different sets of constraints based on the method. For example, the `gas` producer has ramping constraints but not unit commitment constraints, while the `ocgt` conversion has unit commitment constraints but not ramping constraints. Lastly, the `ccgt` and `smr` assets both have unit commitment and ramping constraints.
+The example demonstrates various assets that supply demand. Each asset has different input data files, which activate different sets of constraints based on the method. For example, the `gas` producer has ramping constraints but not unit commitment constraints, while the `ocgt` conversion has unit commitment constraints but not ramping constraints.
Lastly, the `ccgt` and `smr` assets both have unit commitment and ramping constraints. ```@example unit-commitment using DataFrames # hide @@ -375,7 +376,8 @@ filtered_flows_partitions = flows_partitions_data[!, ["from_asset", "to_asset", The default value for the assets and flows partitions is 1 hour. This means that assets and flows not in the previous tables are considered on an hourly basis in the model. -> **Important**: It's not recommended to set up the input data `partitions` in such a way that the `flow` variables have a lower resolution than the `units_on`. This is because doing so will result in constraints that fix the value of the `units_on` in the timestep block where the `flow` is defined, leading to unnecessary extra variable constraints in the model. For instance, if the `units_on` are hourly and the `flow` is every two hours, then a non-zero `flow` in the timestep block 1:2 will require the `units_on` in timestep blocks 1:1 and 2:2 to be the same and equal to one. Therefore, the time resolution of the `units_on` should always be lower than or equal to the resolution of the `flow` in the asset. +!!! warning "Not every combination of resolutions is viable" + It's not recommended to set up the input data `partitions` in such a way that the `flow` variables have a lower resolution than the `units_on`. This is because doing so will result in constraints that fix the value of the `units_on` in the timestep block where the `flow` is defined, leading to unnecessary extra variable constraints in the model. For instance, if the `units_on` are hourly and the `flow` is every two hours, then a non-zero `flow` in the timestep block 1:2 will require the `units_on` in timestep blocks 1:1 and 2:2 to be the same and equal to one. Therefore, the time resolution of the `units_on` should always be lower than or equal to the resolution of the `flow` in the asset. 
Remember that the section [`mathematical formulation`](@ref formulation) shows the unit commitment and ramping constraints in the model considering a uniform time resolution as a reference.

@@ -390,52 +392,67 @@ We will analyze each case in the following sections, considering the constraints

### Ramping in Assets with Multiple Outputs

-In the case of the `gas` asset, there are two output flows above the minimum operating point with different time resolutions. The ramping constraints follow the highest time resolution of the two flows at each timestep block. Since the highest resolution is always defined by the hourly output of the `flow(gas,ocgt)`, the ramping constraints are also hourly. The figure below illustrates this situation.
+In the case of the `gas` asset, there are two output flows above the minimum operating point with different time resolutions. The ramping constraints follow the highest time resolution of the two flows at each timestep block. Since the highest resolution is always defined by the hourly output of the $v^{\text{flow}}_{(\text{gas,ocgt}),b_k}$, the ramping constraints are also hourly. The figure below illustrates this situation.

![unit-commitment-gas-asset](./figs/unit-commitment-gas.png)

Let's now take a look at the resulting constraints in the model.

-`max_ramp_up(gas)`: The first constraint starts in the second timestep block and takes the difference between the output flows above the minimum operating point from $b_k =$ `2:2` and $b_k =$ `1:1`. Note that since the `flow(gas,ccgt)` is the same in both timestep blocks, the only variables that appear in this first constraint are the ones associated with the `flow(gas,ocgt)`. The second constraint takes the difference between the output flows from $b_k =$ `3:3` and $b_k =$ `2:2`; in this case, there is a change in the `flow(gas, ocgt)`; therefore, the constraint considers both changes in the output flows of the asset.
In addition, the ramping parameter is multiplied by the flow duration with the highest resolution, i.e., one hour, which is the duration of the `flow(gas,ocgt)`.
+`max_ramp_up(gas)`: The first constraint starts in the second timestep block and takes the difference between the output flows above the minimum operating point from $b_{k = 2:2}$ and $b_{k = 1:1}$. Note that since the $v^{\text{flow}}_{(\text{gas,ccgt}),b_k}$ is the same in both timestep blocks, the only variables that appear in this first constraint are the ones associated with the $v^{\text{flow}}_{(\text{gas,ocgt}),b_k}$. The second constraint takes the difference between the output flows from $b_{k = 3:3}$ and $b_{k = 2:2}$; in this case, there is a change in the $v^{\text{flow}}_{(\text{gas,ocgt}),b_k}$; therefore, the constraint considers both changes in the output flows of the asset. In addition, the ramping parameter is multiplied by the flow duration with the highest resolution, i.e., one hour, which is the duration of the $v^{\text{flow}}_{(\text{gas,ocgt}),b_k}$.

-- $b_k =$ `2:2`: -1 `flow(gas,ocgt,1:1)` + 1 `flow(gas,ocgt,2:2)` <= 1494
-- $b_k =$ `3:3`: -1 `flow(gas,ocgt,2:2)` + 1 `flow(gas,ocgt,3:3)` - 1 `flow(gas,ccgt,1:2)` + 1 `flow(gas,ccgt,3:4)` <= 1494
-- $b_k =$ `4:4`: -1 `flow(gas,ocgt,3:3)` + 1 `flow(gas,ocgt,4:4)` <= 1494
-- $b_k =$ `5:5`: -1 `flow(gas,ocgt,4:4)` + 1 `flow(gas,ocgt,5:5)` - 1 `flow(gas,ccgt,3:4)` + 1 `flow(gas,ccgt,5:6)` <= 1494
+```math
+\begin{aligned}
+& b_{k = 2:2}: - v^{\text{flow}}_{(\text{gas,ocgt}),1:1} + v^{\text{flow}}_{(\text{gas,ocgt}),2:2} \leq 1494 \\
+& b_{k = 3:3}: - v^{\text{flow}}_{(\text{gas,ocgt}),2:2} + v^{\text{flow}}_{(\text{gas,ocgt}),3:3} - v^{\text{flow}}_{(\text{gas,ccgt}),1:2} + v^{\text{flow}}_{(\text{gas,ccgt}),3:4} \leq 1494 \\
+& ...
+\end{aligned}
+```

For the maximum ramp down we have similar constraints to the ones shown above.
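For illustration, the mirrored maximum ramp-down constraints would take the following form. This is a sketch only, assuming the same ramping value of 1494 applies to ramp down (it is not taken verbatim from the model output):

```math
\begin{aligned}
& b_{k = 2:2}: v^{\text{flow}}_{(\text{gas,ocgt}),1:1} - v^{\text{flow}}_{(\text{gas,ocgt}),2:2} \leq 1494 \\
& b_{k = 3:3}: v^{\text{flow}}_{(\text{gas,ocgt}),2:2} - v^{\text{flow}}_{(\text{gas,ocgt}),3:3} + v^{\text{flow}}_{(\text{gas,ccgt}),1:2} - v^{\text{flow}}_{(\text{gas,ccgt}),3:4} \leq 1494 \\
& ...
\end{aligned}
```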
### Unit Commitment in Assets with Constant Time Resolution

-The `ocgt` asset includes both the `flow(oct,demand)` and the asset time resolution, which defines the resolution of the `units_on` variable, with a default setting of one hour. As a result, the unit commitment constraints are also set on an hourly basis. This is the conventional method for representing these types of constraints in power system models. The figure below illustrates this situation.
+The `ocgt` asset includes both the $v^{\text{flow}}_{(\text{ocgt,demand}),b_k}$ and the asset time resolution, which defines the resolution of the `units_on` variable, with a default setting of one hour. As a result, the unit commitment constraints are also set on an hourly basis. This is the conventional method for representing these types of constraints in power system models. The figure below illustrates this situation.

![unit-commitment-ocgt-asset](./figs/unit-commitment-ocgt.png)

Let's now take a look at the resulting constraints in the model. Because everything is based on an hourly timestep, the equations are simple and easy to understand.

-`limit_units_on(ocgt)`: The upper bound of the `units_o`n is the investment variable of the `asset`
+`limit_units_on(ocgt)`: The upper bound of the `units_on` is the investment variable of the asset.

-- $b_k =$ `1:1`: -1 `assets_investment(ocgt)` + 1 `units_on(ocgt,1:1)` <= 0
-- $b_k =$ `2:2`: -1 `assets_investment(ocgt)` + 1 `units_on(ocgt,2:2)` <= 0
-- $b_k =$ `3:3`: -1 `assets_investment(ocgt)` + 1 `units_on(ocgt,3:3)` <= 0
+```math
+\begin{aligned}
+& b_{k = 1:1}: v^{\text{units on}}_{\text{ocgt},1:1} \leq v^{\text{inv}}_{\text{ocgt}} \\
+& b_{k = 2:2}: v^{\text{units on}}_{\text{ocgt},2:2} \leq v^{\text{inv}}_{\text{ocgt}} \\
+& ...
+\end{aligned}
+```

`min_output_flow(ocgt)`: The minimum operating point is 10 MW, so the asset must produce an output flow greater than this value when the unit is online.
-- $b_k =$ `1:1`: 1 `flow(ocgt,demand,1:1)` - 10 `units_on(ocgt,1:1)` >= 0
-- $b_k =$ `2:2`: 1 `flow(ocgt,demand,2:2)` - 10 `units_on(ocgt,2:2)` >= 0
-- $b_k =$ `3:3`: 1 `flow(ocgt,demand,3:3)` - 10 `units_on(ocgt,3:3)` >= 0
+```math
+\begin{aligned}
+& b_{k = 1:1}: v^{\text{flow}}_{(\text{ocgt,demand}),1:1} \geq 10 \cdot v^{\text{units on}}_{\text{ocgt},1:1} \\
+& b_{k = 2:2}: v^{\text{flow}}_{(\text{ocgt,demand}),2:2} \geq 10 \cdot v^{\text{units on}}_{\text{ocgt},2:2} \\
+& ...
+\end{aligned}
+```

`max_output_flow(ocgt)`: The capacity is 100 MW, so the asset must produce an output flow lower than this value when the unit is online.

-- $b_k =$ `1:1`: 1 `flow(ocgt,demand,1:1)` - 100 `units_on(ocgt,1:1)` <= 0
-- $b_k =$ `2:2`: 1 `flow(ocgt,demand,2:2)` - 100 `units_on(ocgt,2:2)` <= 0
-- $b_k =$ `3:3`: 1 `flow(ocgt,demand,3:3)` - 100 `units_on(ocgt,3:3)` <= 0
+```math
+\begin{aligned}
+& b_{k = 1:1}: v^{\text{flow}}_{(\text{ocgt,demand}),1:1} \leq 100 \cdot v^{\text{units on}}_{\text{ocgt},1:1} \\
+& b_{k = 2:2}: v^{\text{flow}}_{(\text{ocgt,demand}),2:2} \leq 100 \cdot v^{\text{units on}}_{\text{ocgt},2:2} \\
+& ...
+\end{aligned}
+```

For the maximum ramp down we have similar constraints to the ones shown above.

### Unit Commitment and Ramping in Assets with Flexible Time Resolution that are Multiples of Each Other

-In this case, the `smr` asset has an output `flow(smr,demand)` in a hourly basis, but its time resolution (i.e., partition) is every six hours. Therefore, the `unist_on` variables are defined in timestep block of every six hours. As a result, the unit commitment and ramping constraints are set on highest resolution of both, i.e., the hourly resolution of the `flow(smr,demand)`. The figure below illustrates this situation.
+In this case, the `smr` asset has an output $v^{\text{flow}}_{(\text{smr,demand}),b_k}$ on an hourly basis, but its time resolution (i.e., partition) is every six hours.
Therefore, the `units_on` variables are defined in timestep blocks of six hours. As a result, the unit commitment and ramping constraints are set on the highest resolution of the two, i.e., the hourly resolution of the $v^{\text{flow}}_{(\text{smr,demand}),b_k}$. The figure below illustrates this situation.

![unit-commitment-smr-asset](./figs/unit-commitment-smr.png)

@@ -443,49 +460,59 @@ Let's now take a look at the resulting constraints in the model.

`limit_units_on(smr)`: The `units_on` variables are defined every 6h; therefore, the upper bound of the variable is also every 6h. In addition, the `smr` is not investable and has one existing unit that limits the commitment variables.

-- $b_k =$ `1:6`: 1 `units_on(smr,1:6)` <= 1
-- $b_k =$ `7:12`: 1 `units_on(smr,7:12)` <= 1
-- $b_k =$ `13:18`: 1 `units_on(smr,13:18)` <= 1
-- $b_k =$ `19:24`: 1 `units_on(smr,19:24)` <= 1
+```math
+\begin{aligned}
+& b_{k = 1:6}: v^{\text{units on}}_{\text{smr},1:6} \leq 1 \\
+& b_{k = 7:12}: v^{\text{units on}}_{\text{smr},7:12} \leq 1 \\
+& ...
+\end{aligned}
+```

`min_output_flow(smr)`: The minimum operating point is 150 MW, so the asset must produce an output flow greater than this value when the unit is online. Since the `units_on` variables are defined every 6h, the first six constraints show that the minimum operating point is multiplied by the variable in block `1:6`. The next six constraints are multiplied by the `units_on` in block `7:12`, and so on.
-- $b_k =$ `1:1`: 1 `flow(smr,demand,1:1)` - 150 `units_on(smr,1:6)` >= 0
-- $b_k =$ `2:2`: 1 `flow(smr,demand,2:2)` - 150 `units_on(smr,1:6)` >= 0
-- $b_k =$ `3:3`: 1 `flow(smr,demand,3:3)` - 150 `units_on(smr,1:6)` >= 0
-- $b_k =$ `4:4`: 1 `flow(smr,demand,4:4)` - 150 `units_on(smr,1:6)` >= 0
-- $b_k =$ `5:5`: 1 `flow(smr,demand,5:5)` - 150 `units_on(smr,1:6)` >= 0
-- $b_k =$ `6:6`: 1 `flow(smr,demand,6:6)` - 150 `units_on(smr,1:6)` >= 0
-- $b_k =$ `7:7`: 1 `flow(smr,demand,7:7)` - 150 `units_on(smr,7:12)` >= 0
-- $b_k =$ `8:8`: 1 `flow(smr,demand,8:8)` - 150 `units_on(smr,7:12)` >= 0
+```math
+\begin{aligned}
+& b_{k = 1:1}: v^{\text{flow}}_{(\text{smr,demand}),1:1} \geq 150 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& b_{k = 2:2}: v^{\text{flow}}_{(\text{smr,demand}),2:2} \geq 150 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& ... \\
+& b_{k = 6:6}: v^{\text{flow}}_{(\text{smr,demand}),6:6} \geq 150 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& b_{k = 7:7}: v^{\text{flow}}_{(\text{smr,demand}),7:7} \geq 150 \cdot v^{\text{units on}}_{\text{smr},7:12} \\
+& ...
+\end{aligned}
+```

`max_output_flow(smr)`: The capacity is 200 MW, so the asset must produce an output flow lower than this value when the unit is online. Similar to the minimum operating point constraint, here the `units_on` for the timestep block `1:6` are used in the first six constraints, the `units_on` for the timestep block `7:12` in the next six constraints, and so on.
-- $b_k =$ `1:1`: 1 `flow(smr,demand,1:1)` - 200 `units_on(smr,1:6)` <= 0
-- $b_k =$ `2:2`: 1 `flow(smr,demand,2:2)` - 200 `units_on(smr,1:6)` <= 0
-- $b_k =$ `3:3`: 1 `flow(smr,demand,3:3)` - 200 `units_on(smr,1:6)` <= 0
-- $b_k =$ `4:4`: 1 `flow(smr,demand,4:4)` - 200 `units_on(smr,1:6)` <= 0
-- $b_k =$ `5:5`: 1 `flow(smr,demand,5:5)` - 200 `units_on(smr,1:6)` <= 0
-- $b_k =$ `6:6`: 1 `flow(smr,demand,6:6)` - 200 `units_on(smr,1:6)` <= 0
-- $b_k =$ `7:7`: 1 `flow(smr,demand,7:7)` - 200 `units_on(smr,7:12)` <= 0
-- $b_k =$ `8:8`: 1 `flow(smr,demand,8:8)` - 200 `units_on(smr,7:12)` <= 0
+```math
+\begin{aligned}
+& b_{k = 1:1}: v^{\text{flow}}_{(\text{smr,demand}),1:1} \leq 200 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& b_{k = 2:2}: v^{\text{flow}}_{(\text{smr,demand}),2:2} \leq 200 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& ... \\
+& b_{k = 6:6}: v^{\text{flow}}_{(\text{smr,demand}),6:6} \leq 200 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& b_{k = 7:7}: v^{\text{flow}}_{(\text{smr,demand}),7:7} \leq 200 \cdot v^{\text{units on}}_{\text{smr},7:12} \\
+& ...
+\end{aligned}
+```

`max_ramp_up(smr)`: The ramping capacity is 20 MW, so the change in the output flow above the minimum operating point needs to be below that value when the asset is online. For constraints from `2:2` to `6:6`, the `units_on` variable is the same, i.e., `units_on` at timestep block `1:6`. The ramping constraint at timestep block `7:7` shows the `units_on` from the timestep blocks `1:6` and `7:12` since the change in the flow includes both variables. Note that if the `units_on` variable is zero in the timestep block `1:6`, then the ramping constraint at timestep block `7:7` allows the asset to go from zero flow to the minimum operating point plus the ramping capacity (i.e., 150 + 20 = 170).
-- $b_k =$ `2:2`: -1 `flow(smr,demand,1:1)` + 1 `flow(smr,demand,2:2)` - 20 `units_on(smr,1:6)` <= 0
-- $b_k =$ `3:3`: -1 `flow(smr,demand,2:2)` + 1 `flow(smr,demand,3:3)` - 20 `units_on(smr,1:6)` <= 0
-- $b_k =$ `4:4`: -1 `flow(smr,demand,3:3)` + 1 `flow(smr,demand,4:4)` - 20 `units_on(smr,1:6)` <= 0
-- $b_k =$ `5:5`: -1 `flow(smr,demand,4:4)` + 1 `flow(smr,demand,5:5)` - 20 `units_on(smr,1:6)` <= 0
-- $b_k =$ `6:6`: -1 `flow(smr,demand,5:5)` + 1 `flow(smr,demand,6:6)` - 20 `units_on(smr,1:6)` <= 0
-- $b_k =$ `7:7`: -1 `flow(smr,demand,6:6)` + 1 `flow(smr,demand,7:7)` + 150 `units_on(smr,1:6)` - 170 `units_on(smr,7:12)` <= 0
-- $b_k =$ `8:8`: -1 `flow(smr,demand,7:7)` + 1 `flow(smr,demand,8:8)` - 20 `units_on(smr,7:12)` <= 0
-- $b_k =$ `9:9`: -1 `flow(smr,demand,8:8)` + 1 `flow(smr,demand,9:9)` - 20 `units_on(smr,7:12)` <= 0
+```math
+\begin{aligned}
+& b_{k = 2:2}: - v^{\text{flow}}_{(\text{smr,demand}),1:1} + v^{\text{flow}}_{(\text{smr,demand}),2:2} \leq 20 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& b_{k = 3:3}: - v^{\text{flow}}_{(\text{smr,demand}),2:2} + v^{\text{flow}}_{(\text{smr,demand}),3:3} \leq 20 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& ... \\
+& b_{k = 6:6}: - v^{\text{flow}}_{(\text{smr,demand}),5:5} + v^{\text{flow}}_{(\text{smr,demand}),6:6} \leq 20 \cdot v^{\text{units on}}_{\text{smr},1:6} \\
+& b_{k = 7:7}: - v^{\text{flow}}_{(\text{smr,demand}),6:6} + v^{\text{flow}}_{(\text{smr,demand}),7:7} \leq - 150 \cdot v^{\text{units on}}_{\text{smr},1:6} + 170 \cdot v^{\text{units on}}_{\text{smr},7:12} \\
+& b_{k = 8:8}: - v^{\text{flow}}_{(\text{smr,demand}),7:7} + v^{\text{flow}}_{(\text{smr,demand}),8:8} \leq 20 \cdot v^{\text{units on}}_{\text{smr},7:12} \\
+& ...
+\end{aligned}
+```

For the maximum ramp down we have similar constraints to the ones shown above.
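+As a consistency check on these coefficients (a derivation sketch, not additional model output), the boundary constraint at $b_{k = 7:7}$ follows from applying the ramp limit to the flow above the minimum operating point on each side of the `units_on` block boundary:
+
+```math
+\left( v^{\text{flow}}_{(\text{smr,demand}),7:7} - 150 \cdot v^{\text{units on}}_{\text{smr},7:12} \right) - \left( v^{\text{flow}}_{(\text{smr,demand}),6:6} - 150 \cdot v^{\text{units on}}_{\text{smr},1:6} \right) \leq 20 \cdot v^{\text{units on}}_{\text{smr},7:12}
+```
+
+which rearranges to the listed constraint with the right-hand side $- 150 \cdot v^{\text{units on}}_{\text{smr},1:6} + 170 \cdot v^{\text{units on}}_{\text{smr},7:12}$, i.e., the minimum operating point plus the ramping capacity appears on the incoming block.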
### Unit Commitment and Ramping in Assets with Flexible Time Resolution that are NOT Multiples of Each Other

-In this case, the `ccgt` asset has an output `flow(ccgt,demand)` on a two-hour basis, but its time resolution (i.e., partition) is every three hours. Therefore, the `unist_on` variables are defined in a timestep block every three hours. This setup means that the flow and unit commitment variables are not multiples of each other. As a result, the unit commitment and ramping constraints are defined on the highest resolution, meaning that we also need the intersections of both resolutions. The figure below illustrates this situation.
+In this case, the `ccgt` asset has an output $v^{\text{flow}}_{(\text{ccgt,demand}),b_k}$ on a two-hour basis, but its time resolution (i.e., partition) is every three hours. Therefore, the `units_on` variables are defined in a timestep block every three hours. This setup means that the flow and unit commitment variables are not multiples of each other. As a result, the unit commitment and ramping constraints are defined on the highest resolution, meaning that we also need the intersections of both resolutions. The figure below illustrates this situation.

![unit-commitment-ccgt-asset](./figs/unit-commitment-ccgt.png)

@@ -493,36 +520,56 @@ Let's now take a look at the resulting constraints in the model.

`limit_units_on(ccgt)`: The `units_on` variables are defined every 3h; therefore, the upper bound of the variable is also every 3h. In addition, the `ccgt` is investable and has one existing unit that limits the commitment variables.
-- $b_k =$ `1:3`: -1 `assets_investment(ccgt)` + 1 `units_on(ccgt,1:3)` <= 1
-- $b_k =$ `4:6`: -1 `assets_investment(ccgt)` + 1 `units_on(ccgt,4:6)` <= 1
-- $b_k =$ `7:9`: -1 `assets_investment(ccgt)` + 1 `units_on(ccgt,7:9)` <= 1
+```math
+\begin{aligned}
+& b_{k = 1:3}: v^{\text{units on}}_{\text{ccgt},1:3} \leq 1 + v^{\text{inv}}_{\text{ccgt}} \\
+& b_{k = 4:6}: v^{\text{units on}}_{\text{ccgt},4:6} \leq 1 + v^{\text{inv}}_{\text{ccgt}} \\
+& ...
+\end{aligned}
+```

`min_output_flow(ccgt)`: The minimum operating point is 50 MW, so the asset must produce an output flow greater than this value when the unit is online. Here, we can see the impact on the constraints of having different temporal resolutions that are not multiples of each other. For instance, the constraint is defined for all the intersections, i.e., `1:2`, `3:3`, `4:4`, `5:6`, etc., to ensure that the minimum operating point is correctly defined considering all the timestep blocks of the `flow` and the `units_on` variables.

-- $b_k =$ `1:2`: 1 `flow(ccgt,demand,1:2)` - 50 `units_on(ccgt,1:3)` >= 0
-- $b_k =$ `3:3`: 1 `flow(ccgt,demand,3:4)` - 50 `units_on(ccgt,1:3)` >= 0
-- $b_k =$ `4:4`: 1 `flow(ccgt,demand,3:4)` - 50 `units_on(ccgt,4:6)` >= 0
-- $b_k =$ `5:6`: 1 `flow(ccgt,demand,5:6)` - 50 `units_on(ccgt,4:6)` >= 0
+```math
+\begin{aligned}
+& b_{k = 1:2}: v^{\text{flow}}_{(\text{ccgt,demand}),1:2} \geq 50 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \\
+& b_{k = 3:3}: v^{\text{flow}}_{(\text{ccgt,demand}),3:4} \geq 50 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \\
+& b_{k = 4:4}: v^{\text{flow}}_{(\text{ccgt,demand}),3:4} \geq 50 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \\
+& b_{k = 5:6}: v^{\text{flow}}_{(\text{ccgt,demand}),5:6} \geq 50 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \\
+& ...
+\end{aligned}
+```

`max_output_flows(ccgt)`: The capacity is 200 MW, so the asset must produce an output flow lower than this value when the unit is online.
The situation is similar to the minimum operating point constraint: we have constraints for all the intersections of the resolutions to ensure the correct definition of the maximum capacity.

-- $b_k =$ `1:2`: 1 `flow(ccgt,demand,1:2)` - 200 `units_on(ccgt,1:3)` <= 0
-- $b_k =$ `3:3`: 1 `flow(ccgt,demand,3:4)` - 200 `units_on(ccgt,1:3)` <= 0
-- $b_k =$ `4:4`: 1 `flow(ccgt,demand,3:4)` - 200 `units_on(ccgt,4:6)` <= 0
-- $b_k =$ `5:6`: 1 `flow(ccgt,demand,5:6)` - 200 `units_on(ccgt,4:6)` <= 0
+```math
+\begin{aligned}
+& b_{k = 1:2}: v^{\text{flow}}_{(\text{ccgt,demand}),1:2} \leq 200 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \\
+& b_{k = 3:3}: v^{\text{flow}}_{(\text{ccgt,demand}),3:4} \leq 200 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \\
+& b_{k = 4:4}: v^{\text{flow}}_{(\text{ccgt,demand}),3:4} \leq 200 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \\
+& b_{k = 5:6}: v^{\text{flow}}_{(\text{ccgt,demand}),5:6} \leq 200 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \\
+& ...
+\end{aligned}
+```

`max_ramp_up(ccgt)`: The ramping capacity is 120 MW, so the change in the output flow above the minimum operating point needs to be below that value when the asset is online. When the time resolutions of the flow and `units_on` are not multiples of each other, we encounter some counterintuitive constraints. For example, consider the constraint at timestep block `4:4`. This constraint only involves `units_on` variables because the flow above the minimum operating point at timestep block `4:4` differs from the previous timestep block `3:3` only in terms of the `units_on` variables. As a result, the ramping-up constraint establishes a relationship between the `units_on` variables at `1:3` and `4:6`. This means that if the unit is on at timestep `1:3`, then it must also be on at timestep `4:6`. However, this is redundant because there is already a flow variable defined for `3:4` that ensures this, thanks to the minimum operating point and maximum capacity constraints.
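+The cancellation at timestep block `4:4` can be made explicit with a short derivation sketch using the constants above. The flow above the minimum operating point involves the same flow variable $v^{\text{flow}}_{(\text{ccgt,demand}),3:4}$ on both sides of the boundary, so it drops out of the difference:
+
+```math
+\left( v^{\text{flow}}_{(\text{ccgt,demand}),3:4} - 50 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \right) - \left( v^{\text{flow}}_{(\text{ccgt,demand}),3:4} - 50 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \right) \leq 120 \cdot v^{\text{units on}}_{\text{ccgt},4:6}
+```
+
+leaving only the relationship between the two `units_on` variables, $50 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \leq 170 \cdot v^{\text{units on}}_{\text{ccgt},4:6}$.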
Therefore, although this constraint is not incorrect, it is unnecessary due to the flexible time resolutions that are not multiples of each other.

-- $b_k =$ `3:3`: -1 `flow(ccgt,demand,1:2)` + 1 `flow(ccgt,demand,3:4)` - 120 `units_on(ccgt,1:3)` <= 0
-- $b_k =$ `4:4`: 50 `units_on(ccgt,1:3)` - 170 `units_on(ccgt,4:6)` <= 0
-- $b_k =$ `5:6`: -1 `flow(ccgt,demand,3:4)` + 1 `flow(ccgt,demand,5:6)` - 120 `units_on(ccgt,4:6)` <= 0
-- $b_k =$ `7:8`: -1 `flow(ccgt,demand,5:6)` + 1 `flow(ccgt,demand,7:8)` + 50 `units_on(ccgt,4:6)` - 170 `units_on(ccgt,7:9)` <= 0
-- $b_k =$ `9:9`: -1 `flow(ccgt,demand,7:8)` + 1 `flow(ccgt,demand,9:10)` - 120 `units_on(ccgt,7:9)` <= 0
+```math
+\begin{aligned}
+& b_{k = 3:3}: - v^{\text{flow}}_{(\text{ccgt,demand}),1:2} + v^{\text{flow}}_{(\text{ccgt,demand}),3:4} \leq 120 \cdot v^{\text{units on}}_{\text{ccgt},1:3} \\
+& b_{k = 4:4}: 0 \leq - 50 \cdot v^{\text{units on}}_{\text{ccgt},1:3} + 170 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \\
+& b_{k = 5:6}: - v^{\text{flow}}_{(\text{ccgt,demand}),3:4} + v^{\text{flow}}_{(\text{ccgt,demand}),5:6} \leq 120 \cdot v^{\text{units on}}_{\text{ccgt},4:6} \\
+& b_{k = 7:8}: - v^{\text{flow}}_{(\text{ccgt,demand}),5:6} + v^{\text{flow}}_{(\text{ccgt,demand}),7:8} \leq - 50 \cdot v^{\text{units on}}_{\text{ccgt},4:6} + 170 \cdot v^{\text{units on}}_{\text{ccgt},7:9} \\
+& b_{k = 9:9}: - v^{\text{flow}}_{(\text{ccgt,demand}),7:8} + v^{\text{flow}}_{(\text{ccgt,demand}),9:10} \leq 120 \cdot v^{\text{units on}}_{\text{ccgt},7:9} \\
+& ...
+\end{aligned}
+```

For the maximum ramp down we have similar constraints to the ones shown above.

-> **Important**: The time resolutions of the unit commitment constraints do not have to be multiples of each other. However, using multiples of each other can help avoid extra redundant constraints.
+!!! warning "Avoiding redundant constraints"
+    The time resolutions of the unit commitment constraints do not have to be multiples of each other.
However, using multiples of each other can help avoid extra redundant constraints. ### Unit Commitment and Ramping Case Study Results @@ -549,12 +596,9 @@ The equations of intra- and inter-temporal constraints for energy storage are av ### Example to Model Seasonal and Non-seasonal Storage -!!! danger - This section is out of date. The update of these docs is tracked in - We use the example in the folder [`test/inputs/Storage`](https://github.com/TulipaEnergy/TulipaEnergyModel.jl/tree/main/test/inputs/Storage) to explain how all these concepts come together in _TulipaEnergyModel.jl_. -Let's first look at this feature's most relevant input data, starting with the `assets-data` file. Here, we show only the storage assets and the appropriate columns for this example, but all the input data can be found in the previously mentioned folder. +Let's first look at this feature's most relevant input data in the input files. Here, we show only the storage assets and the appropriate columns for this example, but all the input data can be found in the previously mentioned folder. ```@example seasonal-storage using DataFrames # hide @@ -595,8 +639,8 @@ phs_partitions_file = "../../test/inputs/Storage/assets-timeframe-partitions.csv phs_partitions = CSV.read(phs_partitions_file, DataFrame, header = 1) # hide ``` -> **Note:** -> For the sake of simplicity, we show how using three representative days can recover part of the chronological information of one week. The same method can be applied to more representative periods to analyze the seasonality across a year or longer timeframe. +!!! info + For the sake of simplicity, we show how using three representative days can recover part of the chronological information of one week. The same method can be applied to more representative periods to analyze the seasonality across a year or longer timeframe. 
Now let's solve the example and explore the results:

diff --git a/docs/src/40-formulation.md b/docs/src/40-formulation.md
index 05f24bf5..8d07daa9 100644
--- a/docs/src/40-formulation.md
+++ b/docs/src/40-formulation.md
@@ -577,7 +577,7 @@ v^{\text{intra-storage}}_{a,k_y,b^{\text{first}}_{k_y}} \geq p^{\text{init stora
This constraint allows us to consider the storage seasonality throughout the model's timeframe (e.g., a year). The parameter $p^{\text{map}}_{p_y,k_y}$ determines how much of the representative period $k_y$ is in the period $p_y$, and you can use a clustering technique to calculate it. For _TulipaEnergyModel.jl_, we recommend using [_TulipaClustering.jl_](https://github.com/TulipaEnergy/TulipaClustering.jl) to compute the clusters for the representative periods and their map.

-For the sake of simplicity, we show the constraint assuming the inter-storage level between two consecutive periods $p_y$; however, _TulipaEnergyModel.jl_ can handle more flexible period block definition through the timeframe definition in the model using the information in the file [`assets-timeframe-partitions.csv`](@ref assets-timeframe-partitions).
+For the sake of simplicity, we show the constraint assuming the inter-storage level between two consecutive periods $p_y$; however, _TulipaEnergyModel.jl_ can handle more flexible period block definitions through the timeframe definition in the model, using the information in the timeframe partitions file (see [schemas](@ref schemas)).
```math \begin{aligned} @@ -679,7 +679,7 @@ v^{\text{flow}}_{f,k_y,b_{k_y}} \geq - p^{\text{availability profile}}_{f,y,k_y, v^{\text{inv}}_{a,y} \leq \frac{p^{\text{inv limit}}_{a,y}}{p^{\text{capacity}}_{a}} \quad \forall y \in \mathcal{Y}, \forall a \in \mathcal{A}^{\text{i}}_y ``` -If the parameter `investment_integer` in the [`assets-data.csv`](@ref assets-data) file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer. +If the parameter `investment_integer` is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer. #### Maximum Energy Investment Limit for Assets @@ -687,7 +687,7 @@ If the parameter `investment_integer` in the [`assets-data.csv`](@ref assets-dat v^{\text{inv energy}}_{a,y} \leq \frac{p^{\text{inv limit energy}}_{a,y}}{p^{\text{energy capacity}}_{a}} \quad \forall y \in \mathcal{Y}, \forall a \in \mathcal{A}^{\text{i}}_y \cap \mathcal{A}^{\text{se}}_y ``` -If the parameter `investment_integer_storage_energy` in the [`assets-data.csv`](@ref assets-data) file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer. +If the parameter `investment_integer_storage_energy` is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer. 
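+As a small numerical illustration (the numbers are made up, not from any of the test systems): with an energy investment limit of 250 MWh and a unit energy capacity of 100 MWh, setting `investment_integer_storage_energy` to true turns the bound into
+
+```math
+v^{\text{inv energy}}_{a,y} \leq \left\lfloor \frac{250}{100} \right\rfloor = 2
+```
+
+so at most two whole storage-energy units can be invested in, instead of the fractional limit of 2.5.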
#### Maximum Investment Limit for Flows

```math
v^{\text{inv}}_{f,y} \leq \frac{p^{\text{inv limit}}_{f,y}}{p^{\text{capacity}}_{f}} \quad \forall y \in \mathcal{Y}, \forall f \in \mathcal{F}^{\text{ti}}_y
```

-If the parameter `investment_integer` in the [`flows-data.csv`](@ref flows-data) file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.
+If the parameter `investment_integer` is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.

### [Inter-temporal Energy Constraints](@id inter-temporal-energy-constraints)

diff --git a/docs/src/50-schemas.md b/docs/src/50-schemas.md
new file mode 100644
index 00000000..eac16b68
--- /dev/null
+++ b/docs/src/50-schemas.md
@@ -0,0 +1,18 @@
+# [Model Parameters](@id schemas)
+
+The input data for the optimization model parameters must follow the schema below for each table. To create these tables, we currently use CSV files that follow this same schema and then convert them into tables using TulipaIO, as shown in the basic example of the [Tutorials](@ref basic-example) section.
+
+The schemas can be accessed at any time after loading the package by typing `TulipaEnergyModel.schema_per_table_name` in the Julia console.
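+For example, assuming `schema_per_table_name` behaves like a dictionary keyed by table name (the table name used below is illustrative, not necessarily one of the actual keys), a quick way to explore it in the REPL is:
+
+```julia
+julia> using TulipaEnergyModel
+
+julia> keys(TulipaEnergyModel.schema_per_table_name)  # names of all expected tables
+
+julia> TulipaEnergyModel.schema_per_table_name["asset"]  # schema of a single table; "asset" is an illustrative key
+```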
Here is the complete list of model parameters in the schemas per table (or CSV file):
+
+```@eval
+using Markdown, TulipaEnergyModel
+
+Markdown.parse(
+    join(["- **`$filename`**\n" *
+        join(
+            [" - `$f: $t`" for (f, t) in schema],
+            "\n",
+        ) for (filename, schema) in TulipaEnergyModel.schema_per_table_name
+    ] |> sort, "\n")
+)
+```
diff --git a/docs/src/60-structures.md b/docs/src/60-structures.md
new file mode 100644
index 00000000..22bc3e60
--- /dev/null
+++ b/docs/src/60-structures.md
@@ -0,0 +1,74 @@
+# [Model Structures](@id structures)
+
+```@contents
+Pages = ["60-structures.md"]
+Depth = 3
+```
+
+The relevant structures used in this package are listed below:
+
+## EnergyProblem
+
+The `EnergyProblem` structure is a wrapper around various other relevant structures.
+It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.
+
+### Fields
+
+- `db_connection`: A DuckDB connection to the input tables in the model.
+- `graph`: The Graph object that defines the geometry of the energy problem.
+- `model`: A JuMP.Model object representing the optimization model.
+- `objective_value`: The objective value of the solved problem (Float64).
+- `variables`: A [TulipaVariable](@ref TulipaVariable) structure to store all the information related to the variables in the model.
+- `constraints`: A [TulipaConstraint](@ref TulipaConstraint) structure to store all the information related to the constraints in the model.
+- `representative_periods`: A vector of [Representative Periods](@ref representative-periods).
+- `solved`: A boolean indicating whether the `model` has been solved or not.
+- `termination_status`: The termination status of the optimization model.
+- `timeframe`: A structure with the number of periods in the `representative_periods` and the mapping between the periods and their representatives.
+- `model_parameters`: A [ModelParameters](@ref ModelParameters) structure to store all the parameters that are exclusive to the model.
+- `years`: A vector with the information of all the milestone years.
+
+### Constructor
+
+The `EnergyProblem` can also be constructed using the minimal constructor below.
+
+- `EnergyProblem(connection)`: Constructs a new `EnergyProblem` object with the given `connection` that has been created and the data loaded into it using [TulipaIO](https://github.com/TulipaEnergy/TulipaIO.jl). The `graph`, `representative_periods`, `timeframe`, and `years` are computed using `create_internal_structures`.
+
+See the [basic example tutorial](@ref basic-example) for how these can be used.
+
+## GraphAssetData
+
+This structure holds all the information of a given asset.
+These are stored inside the Graph.
+Given a graph `graph`, an asset `a` can be accessed through `graph[a]`.
+
+## GraphFlowData
+
+This structure holds all the information of a given flow.
+These are stored inside the Graph.
+Given a graph `graph`, a flow from asset `u` to asset `v` can be accessed through `graph[u, v]`.
+
+## [Timeframe](@id timeframe)
+
+The timeframe is the total period we want to analyze with the model. Usually this is a year, but it can be any length of time. A timeframe has two fields:
+
+- `num_periods`: The timeframe is defined by a certain number of periods. For instance, a year can be defined by 365 periods, each describing a day.
+- `map_periods_to_rp`: Indicates the periods of the timeframe that map into a [representative period](@ref representative-periods) and the weight of the representative period to construct that period.
+
+## [Representative Periods](@id representative-periods)
+
+The [timeframe](@ref timeframe) (e.g., a full year) is described by a selection of representative periods, for instance, days or weeks, that nicely summarize other similar periods.
For example, we could model the year with 3 days by clustering all the days of the year into 3 representative days. Each one of these days is called a representative period. _TulipaEnergyModel.jl_ has the flexibility to consider representative periods of different lengths for the same timeframe (e.g., a year can be represented by a set of 4 days and 2 weeks). To obtain the representative periods, we recommend using [TulipaClustering](https://github.com/TulipaEnergy/TulipaClustering.jl).
+
+A representative period has three fields:
+
+- `weight`: Indicates how many representative periods are contained in the [timeframe](@ref timeframe); this is inferred automatically from `map_periods_to_rp` in the [timeframe](@ref timeframe).
+- `timesteps`: The number of timestep blocks in the representative period.
+- `resolution`: The duration in time of each timestep.
+
+The number of timesteps and resolution work together to define the coarseness of the period.
+Nothing is defined outside of these timesteps; for instance, if the representative period represents a day and you want to specify a variable or constraint with a coarseness of 30 minutes, you need to set the number of timesteps to 48 and the resolution to `0.5`.
+
+## [Time Blocks](@id time-blocks)
+
+A time block is a range for which a variable or constraint is defined.
+It is a range of numbers, i.e., all integer numbers inside an interval.
+Time blocks are used for the periods in the [timeframe](@ref timeframe) and the timesteps in the [representative period](@ref representative-periods). Time blocks are disjoint (not overlapping), but do not have to be sequential.
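+As a small worked example of how timesteps, resolution, and time blocks fit together (the block choices are illustrative): for a representative day with 48 timesteps and a resolution of `0.5` hours, a time block $b$ spans $|b| \cdot 0.5$ hours, e.g.,
+
+```math
+b = 1:4 \Rightarrow 4 \cdot 0.5 = 2 \text{ h}, \qquad b = 5:6 \Rightarrow 2 \cdot 0.5 = 1 \text{ h}
+```
+
+so the non-overlapping blocks `1:4`, `5:6`, and `7:48` together cover the whole day.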
diff --git a/src/structures.jl b/src/structures.jl index 1f730c45..ce9d577d 100644 --- a/src/structures.jl +++ b/src/structures.jl @@ -51,6 +51,9 @@ mutable struct TulipaVariable end end +""" +Structure to hold the JuMP constraints for the TulipaEnergyModel +""" mutable struct TulipaConstraint indices::DataFrame table_name::String @@ -311,16 +314,18 @@ Structure to hold all parts of an energy problem. It is a wrapper around various It hides the complexity behind the energy problem, making the usage more friendly, although more verbose. # Fields +- `db_connection`: A DuckDB connection to the input tables in the model - `graph`: The Graph object that defines the geometry of the energy problem. -- `representative_periods`: A vector of [Representative Periods](@ref representative-periods). -- `constraints_partitions`: Dictionaries that connect pairs of asset and representative periods to [time partitions (vectors of time blocks)](@ref Partition) -- `timeframe`: The number of periods of the `representative_periods`. -- `dataframes`: The data frames used to linearize the variables and constraints. These are used internally in the model only. -- `model_parameters`: The model parameters. - `model`: A JuMP.Model object representing the optimization model. +- `objective_value`: The objective value of the solved problem (Float64). +- `variables`: A [TulipaVariable](@ref TulipaVariable) structure to store all the information related to the variables in the model. +- `constraints`: A [TulipaConstraint](@ref TulipaConstraint) structure to store all the information related to the constraints in the model. +- `representative_periods`: A vector of [Representative Periods](@ref representative-periods). - `solved`: A boolean indicating whether the `model` has been solved or not. -- `objective_value`: The objective value of the solved problem. - `termination_status`: The termination status of the optimization model. 
+- `timeframe`: A structure with the number of periods in the `representative_periods` and the mapping between the periods and their representatives.
+- `model_parameters`: A [ModelParameters](@ref ModelParameters) structure to store all the parameters that are exclusive to the model.
+- `years`: A vector with the information of all the milestone years.

# Constructor
- `EnergyProblem(connection)`: Constructs a new `EnergyProblem` object with the given connection. The `constraints_partitions` field is computed from the `representative_periods`, and the other fields are initialized with default values.