DOI: 10.1007/978-3-030-27618-8_8
Article

TRR: Reducing Crowdsourcing Task Redundancy

Published: 26 August 2019

Abstract

In this paper, we address the problem of task redundancy in crowdsourcing systems and provide a methodology that decreases the overall effort required to accomplish a crowdsourcing task. Typical task assignment systems assign each task to a fixed number of crowd workers, even though tasks vary in difficulty: easy tasks need fewer assignments than hard ones. We present TRR, a task redundancy reducer that assigns tasks to crowd workers over several work iterations and adaptively estimates how many workers are needed in each iteration for Boolean and classification task types. TRR stops assigning a task to crowd workers once it detects convergence among the workers' opinions, which in turn reduces the cost and time invested in answering the task. TRR supports Boolean, classification, and rating task types and covers both crowdsourcing task assignment schemes: anonymous and non-anonymous worker task assignment. The paper includes experimental results obtained from simulation experiments on crowdsourced datasets.
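The paper defines TRR's exact convergence test; as a rough illustration of the general idea, the iterative assign-then-check loop for a single Boolean task can be sketched with an entropy-based stopping rule. Everything here is an illustrative assumption, not the paper's method: the `ask_workers` callback, the batch size, the worker cap, and the entropy threshold are all hypothetical parameters.

```python
import math

def entropy(p):
    """Shannon entropy of a Bernoulli distribution (0 when votes are unanimous)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def collect_until_convergence(ask_workers, batch_size=3, max_workers=15,
                              entropy_threshold=0.5):
    """Request votes for one Boolean task in small batches, stopping early
    once the workers' answers have converged (low vote entropy).

    ask_workers(k) is assumed to return a list of k Boolean votes.
    Returns (majority_label, number_of_workers_actually_used).
    """
    votes = []
    while len(votes) < max_workers:
        votes.extend(ask_workers(batch_size))
        p_yes = sum(votes) / len(votes)
        if entropy(p_yes) <= entropy_threshold:
            break  # opinions converged: stop assigning, saving cost and time
    majority = sum(votes) / len(votes) >= 0.5
    return majority, len(votes)
```

Under this sketch, an easy task on which workers agree stops after the first batch of 3 votes, while a contentious task keeps drawing workers until the cap, mirroring the abstract's point that easy tasks need fewer assignments than hard ones.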


Cited By

  • TDG4Crowd. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), pp. 2984–2992. DOI: 10.24963/ijcai.2023/333. Online publication date: 19 August 2023



Published In

Database and Expert Systems Applications: 30th International Conference, DEXA 2019, Linz, Austria, August 26–29, 2019, Proceedings, Part II
Aug 2019
479 pages
ISBN:978-3-030-27617-1
DOI:10.1007/978-3-030-27618-8

Publisher

Springer-Verlag, Berlin, Heidelberg


Author Tags

  1. Crowdsourcing task redundancy
  2. Crowdsourcing HITs redundancy
  3. Crowdsourcing tasks


