src.fairreckitlib.evaluation.metrics.lenskit.lenskit_accuracy_metric
This module contains the lenskit accuracy metric and creation functions.
Classes:
LensKitAccuracyMetric: accuracy metric implementation for lenskit.
Functions:
create_ndcg: create the NDCG@K accuracy metric (factory creation compatible).
create_hit_ratio: create the HR@K accuracy metric (factory creation compatible).
create_precision: create the P@K accuracy metric (factory creation compatible).
create_recall: create the R@K accuracy metric (factory creation compatible).
create_mean_recip_rank: create the MRR accuracy metric (factory creation compatible).
This program has been developed by students from the bachelor Computer Science at Utrecht University within the Software Project course. © Copyright Utrecht University (Department of Information and Computing Sciences)
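The metrics listed above wrap LensKit's top-n accuracy measures. As a rough, self-contained sketch (plain Python, not the LensKit implementations themselves), the quantities they measure for a single recommendation list can be written as:

```python
def precision_at_k(recs, relevant, k):
    """P@K: fraction of the top-k recommendations that are relevant."""
    return sum(1 for item in recs[:k] if item in relevant) / k

def recall_at_k(recs, relevant, k):
    """R@K: fraction of the relevant items found in the top-k recommendations."""
    return sum(1 for item in recs[:k] if item in relevant) / len(relevant)

def hit_at_k(recs, relevant, k):
    """HR@K for one list: 1.0 if any relevant item is in the top-k, else 0.0."""
    return 1.0 if any(item in relevant for item in recs[:k]) else 0.0

def reciprocal_rank(recs, relevant):
    """RR: 1 / rank of the first relevant item (0.0 if none is recommended)."""
    for rank, item in enumerate(recs, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0
```

For example, with `recs = ['a', 'b', 'c', 'd']` and `relevant = {'b', 'd'}`, the first relevant item sits at rank 2, so P@2, R@2, and RR all come out to 0.5 while HR@2 is 1.0. LensKit computes these per user and the wrapper then averages them over all recommendation lists.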
"""This module contains the lenskit accuracy metric and creation functions.

Classes:

    LensKitAccuracyMetric: accuracy metric implementation for lenskit.

Functions:

    create_ndcg: create the NDCG@K accuracy metric (factory creation compatible).
    create_hit_ratio: create the HR@K accuracy metric (factory creation compatible).
    create_precision: create the P@K accuracy metric (factory creation compatible).
    create_recall: create the R@K accuracy metric (factory creation compatible).
    create_mean_recip_rank: create the MRR accuracy metric (factory creation compatible).

This program has been developed by students from the bachelor Computer Science at
Utrecht University within the Software Project course.
© Copyright Utrecht University (Department of Information and Computing Sciences)
"""

from typing import Any, Callable, Dict

from lenskit import topn
import pandas as pd

from ...evaluation_sets import EvaluationSets
from ..metric_base import BaseMetric
from ..metric_constants import KEY_METRIC_PARAM_K


class LensKitAccuracyMetric(BaseMetric):
    """Accuracy metric implementation for the LensKit framework."""

    def __init__(
            self,
            name: str,
            params: Dict[str, Any],
            eval_func: Callable[[pd.DataFrame, pd.DataFrame], pd.DataFrame],
            group: str):
        """Construct the lenskit accuracy metric.

        Args:
            name: the name of the metric.
            params: the parameters of the metric.
            eval_func: the lenskit evaluation function.
            group: the group name of the lenskit evaluation function.
        """
        BaseMetric.__init__(self, name, params)
        self.eval_func = eval_func
        self.group = group

    def on_evaluate(self, eval_sets: EvaluationSets) -> float:
        """Evaluate the sets for the performance of the metric.

        Args:
            eval_sets: the sets to use for computing the performance of the metric.

        Returns:
            the evaluated performance.
        """
        # Drop the rating column; lenskit does not need it for top-n analysis.
        lenskit_ratings = eval_sets.ratings.drop('rating', axis=1)
        # Lenskit groups the recommendation lists by this column.
        lenskit_ratings['Algorithm'] = 'APPROACHNAME'

        analysis = topn.RecListAnalysis()
        k = self.params.get(KEY_METRIC_PARAM_K)
        if k:
            analysis.add_metric(self.eval_func, k=k)
        else:
            analysis.add_metric(self.eval_func)

        results = analysis.compute(lenskit_ratings, eval_sets.test)
        # Average the per-list scores over the single 'Algorithm' group.
        return float(results.groupby('Algorithm')[self.group].mean().iloc[0])


def create_ndcg(name: str, params: Dict[str, Any], **_) -> LensKitAccuracyMetric:
    """Create the NDCG@K accuracy metric.

    Args:
        name: the name of the metric.
        params: containing the following name-value pairs:
            K(int): the number of item recommendations to test on.

    Returns:
        the LensKitAccuracyMetric wrapper of NDCG@K.
    """
    return LensKitAccuracyMetric(name, params, topn.ndcg, 'ndcg')


def create_hit_ratio(name: str, params: Dict[str, Any], **_) -> LensKitAccuracyMetric:
    """Create the HR@K accuracy metric.

    Args:
        name: the name of the metric.
        params: containing the following name-value pairs:
            K(int): the number of item recommendations to test on.

    Returns:
        the LensKitAccuracyMetric wrapper of HR@K.
    """
    return LensKitAccuracyMetric(name, params, topn.hit, 'hit')


def create_precision(name: str, params: Dict[str, Any], **_) -> LensKitAccuracyMetric:
    """Create the P@K accuracy metric.

    Args:
        name: the name of the metric.
        params: containing the following name-value pairs:
            K(int): the number of item recommendations to test on.

    Returns:
        the LensKitAccuracyMetric wrapper of P@K.
    """
    return LensKitAccuracyMetric(name, params, topn.precision, 'precision')


def create_recall(name: str, params: Dict[str, Any], **_) -> LensKitAccuracyMetric:
    """Create the R@K accuracy metric.

    Args:
        name: the name of the metric.
        params: containing the following name-value pairs:
            K(int): the number of item recommendations to test on.

    Returns:
        the LensKitAccuracyMetric wrapper of R@K.
    """
    return LensKitAccuracyMetric(name, params, topn.recall, 'recall')


def create_mean_recip_rank(name: str, params: Dict[str, Any], **_) -> LensKitAccuracyMetric:
    """Create the MRR accuracy metric.

    Args:
        name: the name of the metric.
        params: there are no parameters for this metric.

    Returns:
        the LensKitAccuracyMetric wrapper of MRR.
    """
    return LensKitAccuracyMetric(name, params, topn.recip_rank, 'recip_rank')
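`on_evaluate` delegates the actual computation to LensKit's `topn.RecListAnalysis`, which expects the recommendations and the held-out test set as pandas frames. The sketch below uses hypothetical data and computes reciprocal rank by hand (rather than calling `topn.recip_rank`) to illustrate the frame layout the method feeds in and the mean aggregation it performs at the end; with a single constant `'Algorithm'` value, grouping by that column before averaging yields the same number as a plain mean.

```python
import pandas as pd

# Hypothetical top-3 recommendation lists for two users. RecListAnalysis
# identifies each list by its grouping columns, which is why on_evaluate
# adds the constant 'Algorithm' column before computing.
recs = pd.DataFrame({
    'user': [1, 1, 1, 2, 2, 2],
    'item': ['a', 'b', 'c', 'd', 'e', 'f'],
    'rank': [1, 2, 3, 1, 2, 3],
    'Algorithm': 'APPROACHNAME',
})

# Held-out test interactions (the ground truth per user).
test = pd.DataFrame({'user': [1, 2], 'item': ['b', 'd']})

# Per-user reciprocal rank, computed by hand: 1 / rank of the first hit.
hits = recs.merge(test, on=['user', 'item'])
recip_rank = 1.0 / hits.groupby('user')['rank'].min()

# on_evaluate averages the per-list scores and returns a single float.
score = float(recip_rank.mean())
```

Here user 1's first hit is at rank 2 (RR = 0.5) and user 2's at rank 1 (RR = 1.0), so the metric evaluates to 0.75.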