evaluate.py

import sys
import os
import pickle
import json

import sklearn.metrics as metrics
from sklearn.metrics import precision_recall_curve

# Expect: model file, features directory, scores output path, plots output path
if len(sys.argv) != 5:
    sys.stderr.write('Arguments error. Usage:\n')
    sys.stderr.write('\tpython evaluate.py model features scores plots\n')
    sys.exit(1)

model_file = sys.argv[1]
matrix_file = os.path.join(sys.argv[2], 'test.pkl')
scores_file = sys.argv[3]
plots_file = sys.argv[4]

# Load the trained model and the pickled test feature matrix
with open(model_file, 'rb') as fd:
    model = pickle.load(fd)

with open(matrix_file, 'rb') as fd:
    matrix = pickle.load(fd)

# Column 1 holds the labels; the remaining columns are the features
labels = matrix[:, 1].toarray()
x = matrix[:, 2:]

# Probability of the positive class for each test sample
predictions_by_class = model.predict_proba(x)
predictions = predictions_by_class[:, 1]

precision, recall, thresholds = precision_recall_curve(labels, predictions)
auc = metrics.auc(recall, precision)

# Write the AUC metric and the precision-recall curve points as JSON
with open(scores_file, 'w') as fd:
    json.dump({'auc': auc}, fd)

with open(plots_file, 'w') as fd:
    json.dump({'prc': [{
        'precision': p,
        'recall': r,
        'threshold': t
    } for p, r, t in zip(precision, recall, thresholds)]}, fd)
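To see what the script computes without a trained model or pickled feature matrix, here is a minimal sketch of the same precision-recall logic on hand-written toy labels and scores (the `labels`/`scores` values below are made up for illustration). Note that `precision_recall_curve` returns one more precision/recall point than thresholds, which is why the script's `zip()` silently drops the last curve point:

```python
import json
from sklearn.metrics import precision_recall_curve, auc

# Toy ground-truth labels and predicted positive-class probabilities
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

precision, recall, thresholds = precision_recall_curve(labels, scores)

# Area under the precision-recall curve, as in the script
pr_auc = auc(recall, precision)

# thresholds is one element shorter, so zip() truncates the curve by one point
points = [{'precision': float(p), 'recall': float(r), 'threshold': float(t)}
          for p, r, t in zip(precision, recall, thresholds)]

print(json.dumps({'auc': float(pr_auc)}))
```

The same truncation happens in `evaluate.py` when it writes the `prc` plot file.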