A Method for Assessing User-generated Tests for Online Courses Exploiting Crowdsourcing Concept

Reviewed
Ghada Farouk Naiem, Sozo Inoue
International Workshop on Web Intelligence and Smart Sensing (IWWISS)
1-6
2014-09-01
Saint Etienne, France.
http://iwwiss.ht.sfc.keio.ac.jp/2014/
In this research, we focus on the challenge of user-generated tests, where any user can create question content. If only a teacher or a limited set of people is allowed to create tests, the content becomes limited, which leads either to a heavy workload for the teacher or to a reduction in the quality of the test due to the shortage in the number or randomness of questions. To solve these problems, the system should allow any user to create questions and to edit their own questions. In this paper, we propose a method for assessing a user's answer to a question, taking into account the difficulty and the validity of the question. The difficulty reflects how low the scores obtained on the question generally are, and the validity reflects whether the question can qualify a user's understanding in line with the objective of the test. Since the formalizations of assessment and validity given in this paper are recursive, we also propose a heuristic algorithm that iteratively calculates both values. Moreover, we show simulation results to confirm that the algorithm converges and that it reflects validity and difficulty.
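The paper's exact formulas are not reproduced here, but the abstract's idea lends itself to a fixed-point iteration: user assessments, question difficulty, and question validity are each updated from the others until the values stabilize. The Python sketch below is a minimal illustration under assumed definitions; the function name assess, the weighting validity * (1 + difficulty), and the correlation-based validity update are hypothetical stand-ins, not the authors' formalization.

import numpy as np

def assess(scores, iters=100, tol=1e-6):
    """Illustrative fixed-point scheme (assumed, not the paper's formulas).

    scores: (U, Q) array with scores[u, q] in [0, 1], user u's raw score
    on question q. Returns per-user assessments, per-question difficulty,
    and per-question validity.
    """
    U, Q = scores.shape
    assessment = scores.mean(axis=1)        # initial assessment: mean score
    difficulty = np.zeros(Q)
    validity = np.ones(Q)
    for _ in range(iters):
        # Difficulty: a question is hard if even highly assessed users
        # score low on it (assessment-weighted mean score, inverted).
        a = assessment / max(assessment.sum(), 1e-12)
        difficulty = 1.0 - a @ scores
        # Validity: agreement between a question's scores and the current
        # user assessments, clipped to stay non-negative and normalized.
        cs = scores - scores.mean(axis=0)
        ca = assessment - assessment.mean()
        validity = np.clip(ca @ cs, 0.0, None)
        vmax = validity.max()
        validity = validity / vmax if vmax > 0 else np.ones(Q)
        # Assessment: validity- and difficulty-weighted mean of a user's
        # scores, so valid, hard questions count for more.
        w = validity * (1.0 + difficulty)
        new_assessment = scores @ w / max(w.sum(), 1e-12)
        if np.abs(new_assessment - assessment).max() < tol:
            return new_assessment, difficulty, validity
        assessment = new_assessment
    return assessment, difficulty, validity

For example, assess(np.random.rand(50, 20)) runs the iteration on a random 50-user, 20-question score matrix; any such mutually recursive definition needs an iterative scheme of this shape, which is the role the abstract assigns to the proposed heuristic algorithm.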
