src.fairreckitlib.evaluation.metrics.lenskit.lenskit_rating_metric
This module contains the lenskit rating metric and creation functions.
Classes:
LensKitRatingMetric: rating metric implementation for lenskit.
Functions:
create_mae: create the MAE rating metric (factory creation compatible).
create_rmse: create the RMSE rating metric (factory creation compatible).
This program has been developed by students from the bachelor Computer Science at Utrecht University within the Software Project course. © Copyright Utrecht University (Department of Information and Computing Sciences)
1"""This module contains the lenskit rating metric and creation functions. 2 3Classes: 4 5 LensKitRatingMetric: rating metric implementation for lenskit. 6 7Functions: 8 9 create_mae: create the MAE rating metric (factory creation compatible). 10 create_rmse: create the RMSE rating metric (factory creation compatible). 11 12This program has been developed by students from the bachelor Computer Science at 13Utrecht University within the Software Project course. 14© Copyright Utrecht University (Department of Information and Computing Sciences) 15""" 16 17from typing import Any, Dict 18 19from lenskit.metrics import predict 20import pandas as pd 21 22from ...evaluation_sets import EvaluationSets 23from ..metric_base import ColumnMetric 24 25 26class LensKitRatingMetric(ColumnMetric): 27 """Rating metric implementation for the LensKit framework.""" 28 29 def on_evaluate(self, eval_sets: EvaluationSets) -> float: 30 """Evaluate the sets for the performance of the metric. 31 32 Args: 33 eval_sets: the sets to use for computing the performance of the metric. 34 35 Returns: 36 the evaluated performance. 37 """ 38 lenskit_ratings = eval_sets.ratings.drop('rating', axis=1) 39 score_column = 'score' if 'score' in lenskit_ratings else 'prediction' 40 scores = pd.merge(eval_sets.test, lenskit_ratings, how='left', on=['user', 'item']) 41 return predict.user_metric(scores, score_column=score_column, metric=self.eval_func) 42 43 44def create_mae(name: str, params: Dict[str, Any], **_) -> LensKitRatingMetric: 45 """Create the MAE rating metric. 46 47 Args: 48 name: the name of the metric. 49 params: there are no parameters for this metric. 50 51 Returns: 52 the LensKitAccuracyMetric wrapper of MAE. 53 """ 54 return LensKitRatingMetric(name, params, predict.mae) 55 56 57def create_rmse(name: str, params: Dict[str, Any], **_) -> LensKitRatingMetric: 58 """Create the RMSE rating metric. 59 60 Args: 61 name: the name of the metric. 62 params: there are no parameters for this metric. 
63 64 Returns: 65 the LensKitAccuracyMetric wrapper of RMSE. 66 """ 67 return LensKitRatingMetric(name, params, predict.rmse)
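The merge-and-select logic in on_evaluate can be sketched with plain pandas on toy data. The DataFrames below are illustrative, not part of the library, and the per-user aggregation shown is what lenskit's predict.user_metric is assumed to do when given predict.mae (group by user, apply the metric, average the results):

```python
import pandas as pd

# Hypothetical test set and model predictions (illustrative data only).
test = pd.DataFrame({
    'user': [1, 1, 2],
    'item': [10, 11, 10],
    'rating': [4.0, 3.0, 5.0],
})
predictions = pd.DataFrame({
    'user': [1, 1, 2],
    'item': [10, 11, 10],
    'prediction': [3.5, 3.0, 4.0],
})

# Mirror the merge in on_evaluate: a left join keeps every test pair,
# leaving NaN where the model produced no prediction.
scores = pd.merge(test, predictions, how='left', on=['user', 'item'])

# Pick the score column the same way the metric does.
score_column = 'score' if 'score' in scores else 'prediction'

# Per-user MAE, then averaged over users (the assumed behaviour of
# predict.user_metric with predict.mae).
per_user = (scores[score_column] - scores['rating']).abs().groupby(scores['user']).mean()
print(per_user.mean())  # → 0.625
```

User 1 has errors 0.5 and 0.0 (per-user MAE 0.25), user 2 has error 1.0, so the averaged result is 0.625. Note the left join means unpredicted pairs contribute NaN scores rather than being silently dropped.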