src.fairreckitlib.evaluation.pipeline.evaluation_event

This module contains all event ids, event args and a print switch for the evaluation pipeline.

Constants:

ON_BEGIN_EVAL_METRIC: id of the event that is used when evaluation with a metric starts.
ON_BEGIN_EVAL_PIPELINE: id of the event that is used when the evaluation pipeline starts.
ON_BEGIN_FILTER_RECS: id of the event that is used when recs filtering starts.
ON_BEGIN_LOAD_RATING_SET: id of the event that is used when a rating set is being loaded.
ON_BEGIN_LOAD_TEST_SET: id of the event that is used when a test set is being loaded.
ON_BEGIN_LOAD_TRAIN_SET: id of the event that is used when a train set is being loaded.
ON_BEGIN_METRIC: id of the event that is used when a metric computation starts.
ON_END_EVAL_METRIC: id of the event that is used when evaluation with a metric finishes.
ON_END_EVAL_PIPELINE: id of the event that is used when the evaluation pipeline ends.
ON_END_FILTER_RECS: id of the event that is used when recs filtering finishes.
ON_END_LOAD_RATING_SET: id of the event that is used when a rating set has been loaded.
ON_END_LOAD_TEST_SET: id of the event that is used when a test set has been loaded.
ON_END_LOAD_TRAIN_SET: id of the event that is used when a train set has been loaded.
ON_END_METRIC: id of the event that is used when a metric computation finishes.

Classes:

EvaluationPipelineEventArgs: event args related to the evaluation pipeline.
MetricEventArgs: event args related to a metric.

Functions:

get_eval_events: list of evaluation pipeline event IDs.
get_eval_event_print_switch: switch to print evaluation pipeline event arguments by ID.

This program has been developed by students from the bachelor Computer Science at Utrecht University within the Software Project course. © Copyright Utrecht University (Department of Information and Computing Sciences)

"""This module contains all event ids, event args and a print switch for the evaluation pipeline.

Constants:

    ON_BEGIN_EVAL_METRIC: id of the event that is used when evaluation with a metric starts.
    ON_BEGIN_EVAL_PIPELINE: id of the event that is used when the evaluation pipeline starts.
    ON_BEGIN_FILTER_RECS: id of the event that is used when recs filtering starts.
    ON_BEGIN_LOAD_RATING_SET: id of the event that is used when a rating set is being loaded.
    ON_BEGIN_LOAD_TEST_SET: id of the event that is used when a test set is being loaded.
    ON_BEGIN_LOAD_TRAIN_SET: id of the event that is used when a train set is being loaded.
    ON_BEGIN_METRIC: id of the event that is used when a metric computation starts.
    ON_END_EVAL_METRIC: id of the event that is used when evaluation with a metric finishes.
    ON_END_EVAL_PIPELINE: id of the event that is used when the evaluation pipeline ends.
    ON_END_FILTER_RECS: id of the event that is used when recs filtering finishes.
    ON_END_LOAD_RATING_SET: id of the event that is used when a rating set has been loaded.
    ON_END_LOAD_TEST_SET: id of the event that is used when a test set has been loaded.
    ON_END_LOAD_TRAIN_SET: id of the event that is used when a train set has been loaded.
    ON_END_METRIC: id of the event that is used when a metric computation finishes.

Classes:

    EvaluationPipelineEventArgs: event args related to the evaluation pipeline.
    MetricEventArgs: event args related to a metric.

Functions:

    get_eval_events: list of evaluation pipeline event IDs.
    get_eval_event_print_switch: switch to print evaluation pipeline event arguments by ID.

This program has been developed by students from the bachelor Computer Science at
Utrecht University within the Software Project course.
© Copyright Utrecht University (Department of Information and Computing Sciences)
"""

from dataclasses import dataclass
from typing import Callable, Dict, List

from ...core.events.event_dispatcher import EventArgs
from ...core.io.event_io import print_load_df_event_args
from ...data.filter.filter_event import print_filter_event_args
from .evaluation_config import MetricConfig

ON_BEGIN_LOAD_TEST_SET = 'EvaluationPipeline.on_begin_load_test_set'
ON_END_LOAD_TEST_SET = 'EvaluationPipeline.on_end_load_test_set'
ON_BEGIN_LOAD_TRAIN_SET = 'EvaluationPipeline.on_begin_load_train_set'
ON_END_LOAD_TRAIN_SET = 'EvaluationPipeline.on_end_load_train_set'
ON_BEGIN_LOAD_RATING_SET = 'EvaluationPipeline.on_begin_load_rating_set'
ON_END_LOAD_RATING_SET = 'EvaluationPipeline.on_end_load_rating_set'
ON_BEGIN_EVAL_PIPELINE = 'EvaluationPipeline.on_begin_eval_pipeline'
ON_END_EVAL_PIPELINE = 'EvaluationPipeline.on_end_eval_pipeline'
ON_BEGIN_METRIC = 'EvaluationPipeline.on_begin_metric'
ON_END_METRIC = 'EvaluationPipeline.on_end_metric'
ON_BEGIN_EVAL_METRIC = 'EvaluationPipeline.on_begin_eval_metric'
ON_END_EVAL_METRIC = 'EvaluationPipeline.on_end_eval_metric'
ON_BEGIN_FILTER_RECS = 'EvaluationPipeline.on_begin_filter_recs'
ON_END_FILTER_RECS = 'EvaluationPipeline.on_end_filter_recs'


@dataclass
class EvaluationPipelineEventArgs(EventArgs):
    """Evaluation Pipeline Event Arguments.

    event_id: the unique ID that classifies the evaluation pipeline event.
    metrics_config: list of metric configurations that is used in the evaluation pipeline.
    """

    metrics_config: List[MetricConfig]


@dataclass
class MetricEventArgs(EventArgs):
    """Metric Event Arguments.

    event_id: the unique ID that classifies the metric event.
    metric_config: the metric configuration that is used.
    """

    metric_config: MetricConfig


def get_eval_events() -> List[str]:
    """Get a list of evaluation pipeline event IDs.

    Returns:
        a list of unique evaluation pipeline event IDs.
    """
    return [
        # DataframeEventArgs
        ON_BEGIN_LOAD_TEST_SET,
        ON_END_LOAD_TEST_SET,
        # DataframeEventArgs
        ON_BEGIN_LOAD_TRAIN_SET,
        ON_END_LOAD_TRAIN_SET,
        # DataframeEventArgs
        ON_BEGIN_LOAD_RATING_SET,
        ON_END_LOAD_RATING_SET,
        # EvaluationPipelineEventArgs
        ON_BEGIN_EVAL_PIPELINE,
        ON_END_EVAL_PIPELINE,
        # MetricEventArgs
        ON_BEGIN_METRIC,
        ON_END_METRIC,
        # MetricEventArgs
        ON_BEGIN_EVAL_METRIC,
        ON_END_EVAL_METRIC,
        # FilterDataframeEventArgs
        ON_BEGIN_FILTER_RECS,
        ON_END_FILTER_RECS,
    ]


def get_eval_event_print_switch(elapsed_time: float = None) -> Dict[str, Callable[[EventArgs], None]]:
    """Get a switch that prints evaluation pipeline event arguments by ID.

    Returns:
        the print evaluation pipeline event switch.
    """
    return {
        ON_BEGIN_EVAL_PIPELINE:
            lambda args: print('\nStarting Evaluation Pipeline to process',
                               len(args.metrics_config), 'metric(s)'),
        ON_BEGIN_EVAL_METRIC:
            lambda args: print('Starting evaluation with metric', args.metric_config.name),
        ON_BEGIN_FILTER_RECS: print_filter_event_args,
        ON_BEGIN_LOAD_RATING_SET: print_load_df_event_args,
        ON_BEGIN_LOAD_TEST_SET: print_load_df_event_args,
        ON_BEGIN_LOAD_TRAIN_SET: print_load_df_event_args,
        ON_BEGIN_METRIC:
            lambda args: print('Starting metric', args.metric_config.name),
        ON_END_EVAL_PIPELINE:
            lambda args: print('Finished Evaluation Pipeline on',
                               len(args.metrics_config), 'metrics',
                               f'in {elapsed_time:1.4f}s'),
        ON_END_EVAL_METRIC:
            lambda args: print('Finished evaluating metric', args.metric_config.name,
                               f'in {elapsed_time:1.4f}s'),
        ON_END_FILTER_RECS: lambda args: print_filter_event_args(args, elapsed_time),
        ON_END_LOAD_RATING_SET: lambda args: print_load_df_event_args(args, elapsed_time),
        ON_END_LOAD_TEST_SET: lambda args: print_load_df_event_args(args, elapsed_time),
        ON_END_LOAD_TRAIN_SET: lambda args: print_load_df_event_args(args, elapsed_time),
        ON_END_METRIC:
            lambda args: print('Finished metric', args.metric_config.name,
                               f'in {elapsed_time:1.4f}s'),
    }
@dataclass
class EvaluationPipelineEventArgs(src.fairreckitlib.core.events.event_args.EventArgs):

Evaluation Pipeline Event Arguments.

event_id: the unique ID that classifies the evaluation pipeline event.
metrics_config: list of metric configurations that is used in the evaluation pipeline.

EvaluationPipelineEventArgs(event_id: str, metrics_config: List[src.fairreckitlib.evaluation.pipeline.evaluation_config.MetricConfig])
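As a usage sketch, the dataclass takes an event id and a list of metric configurations. The `EventArgs` base and `MetricConfig` below are simplified stand-ins for the real fairreckitlib classes, and the metric names are placeholders:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EventArgs:
    """Simplified stand-in for the fairreckitlib EventArgs base class."""
    event_id: str

@dataclass
class MetricConfig:
    """Simplified stand-in; only the name field is modelled here."""
    name: str

@dataclass
class EvaluationPipelineEventArgs(EventArgs):
    """Event args for pipeline-level events, mirroring the module above."""
    metrics_config: List[MetricConfig]

ON_BEGIN_EVAL_PIPELINE = 'EvaluationPipeline.on_begin_eval_pipeline'

# Construct args the way the pipeline would when dispatching the begin event.
args = EvaluationPipelineEventArgs(ON_BEGIN_EVAL_PIPELINE,
                                   [MetricConfig('NDCG@10'), MetricConfig('P@10')])
```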
@dataclass
class MetricEventArgs(src.fairreckitlib.core.events.event_args.EventArgs):

Metric Event Arguments.

event_id: the unique ID that classifies the metric event.
metric_config: the metric configuration that is used.

MetricEventArgs(event_id: str, metric_config: src.fairreckitlib.evaluation.pipeline.evaluation_config.MetricConfig)
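Because MetricEventArgs derives from the EventArgs dataclass, the generated constructor takes the inherited event_id first and metric_config second, matching the signature above. A minimal sketch with stub classes (not the real fairreckitlib types):

```python
from dataclasses import dataclass, fields

@dataclass
class EventArgs:
    """Simplified stand-in for the fairreckitlib EventArgs base class."""
    event_id: str

@dataclass
class MetricConfig:
    """Simplified stand-in; only the name field is modelled here."""
    name: str

@dataclass
class MetricEventArgs(EventArgs):
    """Event args for metric-level events, mirroring the module above."""
    metric_config: MetricConfig

# Inherited dataclass fields come first in the generated __init__.
field_names = [f.name for f in fields(MetricEventArgs)]

args = MetricEventArgs('EvaluationPipeline.on_begin_metric', MetricConfig('coverage'))
```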
def get_eval_events() -> List[str]:

Get a list of evaluation pipeline event IDs.

Returns:
    a list of unique evaluation pipeline event IDs.
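A typical use of get_eval_events is to register one listener per event id. The add_listener registry below is a hypothetical stand-in, since the real EventDispatcher API is not shown in this module, and the event list is abbreviated to two of the fourteen ids:

```python
from typing import Callable, Dict, List

def get_eval_events() -> List[str]:
    """Abbreviated copy of the module's list; the real function returns all 14 ids."""
    return [
        'EvaluationPipeline.on_begin_metric',
        'EvaluationPipeline.on_end_metric',
    ]

# Hypothetical listener registry, standing in for the real EventDispatcher.
listeners: Dict[str, List[Callable]] = {}

def add_listener(event_id: str, func: Callable) -> None:
    """Register func to be invoked whenever event_id is dispatched."""
    listeners.setdefault(event_id, []).append(func)

# Subscribe a single handler to every evaluation pipeline event.
for event_id in get_eval_events():
    add_listener(event_id, print)
```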

def get_eval_event_print_switch(elapsed_time: float = None) -> Dict[str, Callable[[src.fairreckitlib.core.events.event_args.EventArgs], None]]:

Get a switch that prints evaluation pipeline event arguments by ID.

Returns:
    the print evaluation pipeline event switch.
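To illustrate the switch pattern, the sketch below looks up a handler by event id and applies it to event args. The stub classes stand in for the real fairreckitlib types, the metric name is a placeholder, and the handler returns a string instead of printing so the result is easy to inspect:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class MetricConfig:
    """Simplified stand-in; only the name field is modelled here."""
    name: str

@dataclass
class MetricEventArgs:
    """Simplified stand-in for the module's metric event args."""
    event_id: str
    metric_config: MetricConfig

ON_END_METRIC = 'EvaluationPipeline.on_end_metric'

def get_switch(elapsed_time: Optional[float] = None) -> Dict[str, Callable]:
    """Mirror the module's switch, but return the message rather than print it."""
    return {
        ON_END_METRIC:
            lambda args: (f'Finished metric {args.metric_config.name} '
                          f'in {elapsed_time:1.4f}s'),
    }

# Look up the handler for the event id and apply it to the event args.
switch = get_switch(elapsed_time=0.1234)
message = switch[ON_END_METRIC](MetricEventArgs(ON_END_METRIC, MetricConfig('P@10')))
```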