vertiport_autonomy.training package

Submodules

vertiport_autonomy.training.curriculum module

Curriculum learning trainer that progressively increases task difficulty across training phases.

class vertiport_autonomy.training.curriculum.CurriculumTrainer(log_dir: str = 'logs', model_dir: str = 'models')[source]

Bases: object

Curriculum learning trainer for vertiport autonomy.

__init__(log_dir: str = 'logs', model_dir: str = 'models')[source]

Initialize the curriculum trainer.

Parameters:
  • log_dir – Directory for training logs

  • model_dir – Directory for saving models

set_custom_phases(phases: List[Dict[str, Any]]) None[source]

Set custom curriculum phases.

Parameters:

phases – List of phase configurations
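The documentation does not spell out the shape of a phase configuration. As an illustration only, a custom curriculum might be expressed as a list of dicts; the key names below (`name`, `scenario_path`, `total_timesteps`) are assumptions, not the package's documented schema:

```python
# Hypothetical phase configurations for set_custom_phases().
# Key names are illustrative; check the package source for the real schema.
custom_phases = [
    {"name": "hover_only", "scenario_path": "scenarios/easy.yaml", "total_timesteps": 200_000},
    {"name": "single_approach", "scenario_path": "scenarios/medium.yaml", "total_timesteps": 500_000},
    {"name": "full_traffic", "scenario_path": "scenarios/hard.yaml", "total_timesteps": 1_000_000},
]

# trainer = CurriculumTrainer()
# trainer.set_custom_phases(custom_phases)
```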

train_phase(phase_config: Dict[str, Any], model: PPO | None = None) PPO[source]

Train a single curriculum phase.

Parameters:
  • phase_config – Configuration for this phase

  • model – Previous model to continue from (None for first phase)

Returns:

Trained model for this phase

run_full_curriculum() PPO[source]

Run the complete curriculum learning process.

Returns:

Final trained model
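Given the signatures above, run_full_curriculum presumably chains train_phase calls, feeding each phase's trained model into the next. The driving loop likely resembles this sketch, where `train_phase` is a stand-in that merely records which phases a "model" has passed through:

```python
# Sketch of curriculum chaining: each phase resumes from the previous
# phase's model. This train_phase is a stand-in, not the real trainer.
def train_phase(phase_config, model=None):
    history = [] if model is None else list(model)
    history.append(phase_config["name"])
    return history  # stand-in for a trained PPO model

phases = [{"name": "easy"}, {"name": "medium"}, {"name": "hard"}]

model = None
for phase in phases:
    model = train_phase(phase, model)

print(model)  # → ['easy', 'medium', 'hard']
```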

run_single_phase(phase_name: str, model: PPO | None = None) PPO[source]

Run a single phase of the curriculum.

Parameters:
  • phase_name – Name of the phase to run

  • model – Optional model to continue from

Returns:

Trained model for this phase

Raises:

ValueError – If phase_name is not found
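Since run_single_phase raises ValueError for unknown names, the name lookup presumably resembles the minimal sketch below. The phase registry shown is made up; only the raise-on-miss behaviour mirrors what the documentation states:

```python
# Minimal sketch of the name-to-phase lookup behind run_single_phase().
# Registry contents are hypothetical.
phases = {
    "hover_only": {"total_timesteps": 200_000},
    "full_traffic": {"total_timesteps": 1_000_000},
}

def lookup_phase(phase_name: str) -> dict:
    if phase_name not in phases:
        raise ValueError(f"Unknown phase: {phase_name!r}")
    return phases[phase_name]
```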

get_phase_names() List[str][source]

Get list of available phase names.

Returns:

List of phase names

vertiport_autonomy.training.curriculum.main()[source]

Curriculum training entry point.

vertiport_autonomy.training.trainer module

Basic training utilities for vertiport autonomy agents.

class vertiport_autonomy.training.trainer.Trainer(log_dir: str = 'logs', model_dir: str = 'models', n_envs: int = 50, **ppo_kwargs)[source]

Bases: object

Basic trainer for PPO agents in vertiport environments.

__init__(log_dir: str = 'logs', model_dir: str = 'models', n_envs: int = 50, **ppo_kwargs)[source]

Initialize the trainer.

Parameters:
  • log_dir – Directory for training logs

  • model_dir – Directory for saving models

  • n_envs – Number of parallel environments

  • **ppo_kwargs – Additional keyword arguments forwarded to PPO
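Assuming the trainer wraps Stable-Baselines3, typical **ppo_kwargs overrides would be that library's standard PPO constructor arguments. The names below match the Stable-Baselines3 API; the values are illustrative, not the trainer's defaults:

```python
# Illustrative keyword arguments forwarded to the PPO constructor.
# Names follow the Stable-Baselines3 PPO API; values are examples only.
ppo_kwargs = {
    "learning_rate": 3e-4,
    "n_steps": 2048,    # rollout length per environment
    "batch_size": 64,
    "gamma": 0.99,      # discount factor
    "ent_coef": 0.01,   # entropy bonus for exploration
}

# trainer = Trainer(n_envs=50, **ppo_kwargs)
```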

create_environment(scenario_path: str) VecNormalize[source]

Create a vectorized and normalized environment.

Parameters:

scenario_path – Path to scenario configuration file

Returns:

Normalized vectorized environment

create_model(env: VecNormalize, **override_params) PPO[source]

Create a PPO model.

Parameters:
  • env – Environment for training

  • **override_params – Parameters to override defaults

Returns:

PPO model instance

create_callbacks(save_freq: int = 50000, eval_freq: int = 10000, n_eval_episodes: int = 5, name_prefix: str = 'ppo_vertiport') list[source]

Create training callbacks.

Parameters:
  • save_freq – Frequency for saving checkpoints

  • eval_freq – Frequency for evaluation

  • n_eval_episodes – Number of episodes for evaluation

  • name_prefix – Prefix for saved model names

Returns:

List of callbacks
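One caveat worth checking: if these callbacks wrap Stable-Baselines3's CheckpointCallback and EvalCallback, save_freq and eval_freq are counted per callback call, i.e. per step of the vectorized environment, so the effective interval in total timesteps scales with n_envs. This sketch assumes the SB3 semantics; the trainer may already compensate internally:

```python
# Effective checkpoint interval under Stable-Baselines3 semantics,
# where save_freq counts vectorized-env steps, not total timesteps.
n_envs = 50
save_freq = 50_000

effective_interval = save_freq * n_envs  # timesteps between checkpoints
print(effective_interval)  # → 2500000

# To checkpoint roughly every `target` total timesteps instead:
target = 50_000
per_env_freq = max(target // n_envs, 1)
print(per_env_freq)  # → 1000
```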

train(scenario_path: str, total_timesteps: int, tb_log_name: str = 'PPO_Vertiport', save_final: bool = True, final_model_name: str = 'ppo_vertiport_final', **model_params) PPO[source]

Train a PPO agent.

Parameters:
  • scenario_path – Path to scenario configuration

  • total_timesteps – Total training timesteps

  • tb_log_name – TensorBoard log name

  • save_final – Whether to save the final model

  • final_model_name – Name for the final model

  • **model_params – Additional model parameters

Returns:

Trained PPO model

vertiport_autonomy.training.trainer.main()[source]

Basic training entry point.

Module contents

Training utilities and frameworks.

class vertiport_autonomy.training.Trainer(log_dir: str = 'logs', model_dir: str = 'models', n_envs: int = 50, **ppo_kwargs)[source]

Bases: object

Basic trainer for PPO agents in vertiport environments.

__init__(log_dir: str = 'logs', model_dir: str = 'models', n_envs: int = 50, **ppo_kwargs)[source]

Initialize the trainer.

Parameters:
  • log_dir – Directory for training logs

  • model_dir – Directory for saving models

  • n_envs – Number of parallel environments

  • **ppo_kwargs – Additional keyword arguments forwarded to PPO

create_callbacks(save_freq: int = 50000, eval_freq: int = 10000, n_eval_episodes: int = 5, name_prefix: str = 'ppo_vertiport') list[source]

Create training callbacks.

Parameters:
  • save_freq – Frequency for saving checkpoints

  • eval_freq – Frequency for evaluation

  • n_eval_episodes – Number of episodes for evaluation

  • name_prefix – Prefix for saved model names

Returns:

List of callbacks

create_environment(scenario_path: str) VecNormalize[source]

Create a vectorized and normalized environment.

Parameters:

scenario_path – Path to scenario configuration file

Returns:

Normalized vectorized environment

create_model(env: VecNormalize, **override_params) PPO[source]

Create a PPO model.

Parameters:
  • env – Environment for training

  • **override_params – Parameters to override defaults

Returns:

PPO model instance

train(scenario_path: str, total_timesteps: int, tb_log_name: str = 'PPO_Vertiport', save_final: bool = True, final_model_name: str = 'ppo_vertiport_final', **model_params) PPO[source]

Train a PPO agent.

Parameters:
  • scenario_path – Path to scenario configuration

  • total_timesteps – Total training timesteps

  • tb_log_name – TensorBoard log name

  • save_final – Whether to save the final model

  • final_model_name – Name for the final model

  • **model_params – Additional model parameters

Returns:

Trained PPO model

class vertiport_autonomy.training.CurriculumTrainer(log_dir: str = 'logs', model_dir: str = 'models')[source]

Bases: object

Curriculum learning trainer for vertiport autonomy.

__init__(log_dir: str = 'logs', model_dir: str = 'models')[source]

Initialize the curriculum trainer.

Parameters:
  • log_dir – Directory for training logs

  • model_dir – Directory for saving models

get_phase_names() List[str][source]

Get list of available phase names.

Returns:

List of phase names

run_full_curriculum() PPO[source]

Run the complete curriculum learning process.

Returns:

Final trained model

run_single_phase(phase_name: str, model: PPO | None = None) PPO[source]

Run a single phase of the curriculum.

Parameters:
  • phase_name – Name of the phase to run

  • model – Optional model to continue from

Returns:

Trained model for this phase

Raises:

ValueError – If phase_name is not found

set_custom_phases(phases: List[Dict[str, Any]]) None[source]

Set custom curriculum phases.

Parameters:

phases – List of phase configurations

train_phase(phase_config: Dict[str, Any], model: PPO | None = None) PPO[source]

Train a single curriculum phase.

Parameters:
  • phase_config – Configuration for this phase

  • model – Previous model to continue from (None for first phase)

Returns:

Trained model for this phase