
2. Footnote: We make an exception for sources that can be taken for granted in the instructional setting, namely, the course materials. To minimize documentation effort, we also do not expect you to credit the course staff for ideas you get from them, although it’s nice to do so anyway.

 

Grading Guidelines: (still under construction)

Grading:

  • Clear explanation of your main model (20 points)
    • Explain any preprocessing you did, and state clearly what your model takes as input
    • Explain clearly what algorithm you used for training, not just the model itself
  • How does your model fit the problem description?  (10 points)
  • How does your model account for the fact that there were 3 bots?  (10 points)
  • How were parameters chosen in a principled fashion?  (10 points)
  • Failed attempts. Lay out a clear flow of reasoning for why you tried various models and how each failure guided your next choice. Give a clear comparison of the things you tried. Don't chase numbers; show a clear progression of thought and how each model informed the next.  (15 points)
  • Visualizations: what did you learn from them, and how did they guide you? This includes tables, plots, graphs, etc.  (10 points)
  • Supervision: How did you use the labeled examples you were given in your model? Did you use them to minimize Kaggle submissions?  (10 points)
  • Unlabeled examples: How were the unlabeled data points part of your model?  (10 points)
  • Understanding the data: what did you learn from your observations, and how was it used in your approach?  (5 points)
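One common way to choose parameters in a principled fashion is k-fold cross-validation on the labeled examples, which also helps minimize Kaggle submissions since candidates are compared offline. The sketch below is a minimal, hedged illustration in pure Python: the threshold "model" and the toy data are hypothetical stand-ins, not part of the assignment.

```python
import random

def k_fold_splits(n, k):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)          # fixed seed for reproducibility
    fold = n // k
    for i in range(k):
        val = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        val_set = set(val)
        train = [j for j in idx if j not in val_set]
        yield train, val

def cv_score(xs, ys, threshold, k=5):
    """Mean validation accuracy of a toy threshold classifier across k folds."""
    accs = []
    for train, val in k_fold_splits(len(xs), k):
        # A real model would be fit on the training fold here; this toy
        # "model" just predicts True when the feature exceeds the threshold.
        correct = sum((xs[i] > threshold) == ys[i] for i in val)
        accs.append(correct / len(val))
    return sum(accs) / len(accs)

# Hypothetical labeled data: one feature that separates the classes at 0.5.
xs = [i / 20 for i in range(20)]
ys = [x > 0.5 for x in xs]

# Pick the candidate parameter with the best average validation accuracy.
best = max([0.2, 0.5, 0.8], key=lambda t: cv_score(xs, ys, t))
```

The same pattern extends to any hyperparameter grid: score each candidate by its mean held-out accuracy and report the comparison, which gives the graders a principled trail of evidence rather than a single leaderboard number.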

 

Bonus (at the discretion of the graders):

 

  • Tried new or additional methods not necessarily covered in class

  • Developed new algorithms or methods, or tweaked existing methods to fit the problem better