stride Project API

This page documents the API methods available in the Project class.

Project

class stride.Project(config: ProjectConfig, project_path: Path, **connection_kwargs: Any)

Manages a Stride project.

classmethod create(config_file: Path | str, base_dir: Path = PosixPath('.'), overwrite: bool = False, dataset_requirements: DatasetDimensionRequirements | None = None, dataset: str = 'global', data_dir: Path | None = None) → Self

Create a project from a config file.

Parameters:
  • config_file – Defines the project inputs.

  • base_dir – Base directory in which to create the project directory; defaults to the current directory. The project directory will be base_dir / project_id.

  • overwrite – Set to True to overwrite the project directory if it already exists.

  • dataset_requirements – Optional requirements to use when checking dataset consistency.

  • dataset – Name of dataset, if provided. Can be “global” or “global-test”.

  • data_dir – Directory containing datasets. Defaults to STRIDE_DATA_DIR env var or ~/.stride/data.

Examples

>>> from stride import Project
>>> Project.create("my_project.json5")
classmethod load(project_path: Path | str, **connection_kwargs: Any) → Self

Load a project from a serialized directory.

Parameters:
  • project_path – Directory containing an existing project.

  • connection_kwargs – Keyword arguments to be forwarded to the DuckDB connect call. Pass read_only=True if you will not be mutating the database so that multiple stride processes can access the database simultaneously.

Examples

>>> from stride import Project
>>> with Project.load("my_project_path", read_only=True) as project:
...     project.list_scenario_names()
close() → None

Close the connection to the database.
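
Examples

A minimal sketch of closing a project explicitly when not using a context manager; “my_project_path” is a hypothetical project directory.

>>> project = Project.load("my_project_path")  # hypothetical path
>>> project.list_scenario_names()
>>> project.close()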

property con: DuckDBPyConnection

Return the connection to the database.
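
Examples

A minimal sketch of running an ad-hoc query through the underlying DuckDB connection; the table name queried below is hypothetical.

>>> project = Project.load("my_project_path", read_only=True)
>>> project.con.sql("SELECT * FROM energy_projection LIMIT 5").show()  # hypothetical table name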

property config: ProjectConfig

Return the project configuration.

property path: Path

Return the project path.

property palette: ColorPalette

Get or create the color palette for this project.

The palette is automatically populated with:
  • Scenarios from the project config
  • Model years from start_year, end_year, step_year
  • Metrics, which are populated during project creation

To refresh metrics after project updates, call:

>>> project.populate_palette_metrics()
>>> project.save_palette()

populate_palette_metrics() → None

Populate the palette with all metrics (sectors and end uses) from the database.

This method queries the database for unique sectors and end uses and adds them to the metrics category of the palette. It’s called automatically during project creation, but can be called manually to refresh the palette after updates.

Examples

>>> project = Project.load("my_project")
>>> project.populate_palette_metrics()
>>> project.save_palette()
refresh_palette_colors() → None

Refresh all palette colors to use the correct themes for each category.

This is useful for fixing palettes that may have incorrect color assignments (e.g., metrics using model year colors). It reassigns colors while preserving the labels in each category.

Examples

>>> project = Project.load("my_project")
>>> project.refresh_palette_colors()
>>> project.save_palette()
save_palette() → None

Save the current palette state back to the project config file.

override_calculated_tables(overrides: list[CalculatedTableOverride]) → None

Override one or more calculated tables.

remove_calculated_table_overrides(overrides: list[CalculatedTableOverride]) → None

Remove one or more overridden calculated tables.

Parameters:

overrides – The calculated table overrides to remove.

Examples

>>> project.remove_calculated_table_overrides(
...     [
...         CalculatedTableOverride(
...             scenario="baseline",
...             table_name="energy_projection_res_load_shapes",
...         )
...     ]
... )
>>> project.remove_calculated_table_overrides(
...     [
...         CalculatedTableOverride(
...             scenario="baseline",
...             table_name="energy_projection_res_load_shapes_override",
...         )
...     ]
... )
copy_dbt_template() → None

Copy the dbt template for all scenarios.

export_calculated_table(scenario_name: str, table_name: str, filename: Path, overwrite: bool = False) → None

Export the specified calculated table to filename. The output format, CSV or Parquet, is inferred from the filename’s suffix.
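
Examples

A minimal sketch reusing the scenario and table names from the override examples above; the output filename is illustrative.

>>> from pathlib import Path
>>> project.export_calculated_table(
...     scenario_name="baseline",
...     table_name="energy_projection_res_load_shapes",
...     filename=Path("res_load_shapes.csv"),  # format inferred from the .csv suffix
... )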

show_calculated_table(scenario_name: str, table_name: str, limit: int = 20) → None

Print a limited number of rows of the table to the console.
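
Examples

A minimal sketch using the scenario and table names from the examples above.

>>> project.show_calculated_table("baseline", "energy_projection_res_load_shapes", limit=10)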

has_table(name: str, schema: str = 'main') → bool

Return True if the table name is in the specified schema.
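
Examples

A minimal sketch; the table name below is hypothetical.

>>> project.has_table("energy_projection")  # hypothetical table name
True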

list_scenario_names() → list[str]

Return a list of scenario names in the project.

list_tables(schema: str = 'main') → list[str]

List all tables stored in the database in the specified schema.
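
Examples

A minimal sketch; the non-default schema name below is hypothetical.

>>> project.list_tables()
>>> project.list_tables(schema="baseline")  # hypothetical schema name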

list_calculated_tables() → list[str]

List all calculated tables stored in the database. They apply to each scenario.
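
Examples

A minimal sketch; pair with show_calculated_table() to inspect any table returned here.

>>> project.list_calculated_tables()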

static list_data_tables() → list[str]

List the data tables available in any project.
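
Examples

A minimal sketch; as a static method, this can be called on the class without loading a project.

>>> from stride import Project
>>> Project.list_data_tables()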

persist() → None

Persist the project config to the project directory.

compute_energy_projection(use_table_overrides: bool = True) → None

Compute the energy projection dataset for all scenarios.

This operation overwrites all tables and views in the database.

Parameters:

use_table_overrides – If True, compute results based on the table overrides specified in the project config.
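
Examples

A minimal end-to-end sketch; “my_project_path” is a hypothetical project directory. The project is loaded without read_only=True because this operation writes to the database.

>>> from stride import Project
>>> with Project.load("my_project_path") as project:  # hypothetical path
...     project.compute_energy_projection()
...     project.export_energy_projection()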

export_energy_projection(filename: Path = PosixPath('energy_projection.csv'), overwrite: bool = False) → None

Export the energy projection table to a file.

Parameters:
  • filename – Filename to create. Supports .csv and .parquet.

  • overwrite – If True, overwrite the file if it already exists.

Examples

>>> project.export_energy_projection()
INFO: Exported the energy projection table to energy_projection.csv
get_energy_projection(scenario: str | None = None) → DuckDBPyRelation

Return the energy projection table, optionally for a scenario.

Parameters:

scenario – If None (the default), return a table with all scenarios. Otherwise, filter to the given scenario.

Returns:

Relation containing the data.

Return type:

DuckDBPyRelation
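
Examples

A minimal sketch; the scenario name comes from the examples above, and df() is the standard DuckDBPyRelation method for materializing the result as a pandas DataFrame.

>>> rel = project.get_energy_projection(scenario="baseline")
>>> df = rel.df()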

show_data_table(scenario: str, data_table_id: str, limit: int = 20) → None

Print a limited number of rows of the data table to the console.
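
Examples

A minimal sketch; the data table id below is hypothetical. Use Project.list_data_tables() to see the available ids.

>>> project.show_data_table("baseline", "residential_stock", limit=10)  # hypothetical data_table_id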

get_table_overrides() → dict[str, list[str]]

Return a dictionary of tables being overridden for each scenario.
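
Examples

A minimal sketch; the comment describes the mapping per the return type above.

>>> project.get_table_overrides()  # maps scenario name to the list of overridden table names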