A Method for Automatic Assessment of User-generated Tests and Its Evaluation

Reviewed, Featured
Atsushi Taniguchi, Sozo Inoue
ACM Int'l Conf. Pervasive and Ubiquitous Computing (Ubicomp) Poster
225-228
2015-09-09
Osaka
http://ubicomp.org/ubicomp2015/
In this paper, we propose an e-Learning web system in which any user can not only answer questions but also create them. We also present an algorithm that assesses the answer logs while taking the difficulty and validity of the questions into account. Today, e-Learning systems are adopted at many universities, but in most settings only teachers or a limited number of people can create questions, which places a heavy burden on them. If any user could create questions, this burden on teachers could be reduced. We therefore formulated an algorithm that reflects the difficulty and validity of questions in the evaluation, and built a web system for creating and answering questions. We carried out an experiment in a university class to examine whether the proposed algorithm could correctly evaluate real human-made questions through the system. As a result, assessment scores that take difficulty and validity into account were calculated for the human-made questions.
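The abstract does not spell out how difficulty and validity enter the assessment. A minimal sketch of one plausible scheme is given below, under assumptions that are entirely illustrative: difficulty is estimated as the fraction of wrong answers, validity down-weights questions that everyone (or no one) answers correctly, and a user's score is the sum of weights of correctly answered questions. The function name `assess` and the log format are hypothetical, not the paper's.

```python
# Illustrative sketch only: the paper's actual algorithm is not given in
# the abstract, so the difficulty and validity estimates here are
# placeholder assumptions.

def assess(answer_logs):
    """answer_logs: dict mapping question_id -> list of (user_id, correct)."""
    difficulty = {}
    for q, logs in answer_logs.items():
        correct = sum(1 for _, ok in logs if ok)
        # Assumed difficulty estimate: fraction of wrong answers.
        difficulty[q] = 1 - correct / len(logs)

    # Assumed validity estimate: a question answered correctly by everyone
    # or by no one discriminates poorly, so weight the extremes toward 0.
    validity = {q: 4 * d * (1 - d) for q, d in difficulty.items()}

    user_scores = {}
    for q, logs in answer_logs.items():
        weight = difficulty[q] * validity[q]
        for user, ok in logs:
            user_scores[user] = user_scores.get(user, 0.0) + (weight if ok else 0.0)
    return user_scores, difficulty, validity
```

Under this sketch, a question every student answers correctly contributes nothing to anyone's score, while a question with a roughly even split of right and wrong answers contributes the most.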
