Accepted Papers
- Creating a Persian-English Comparable Corpus
Homa Baradaran Hashemi, Azadeh Shakery, and Heshaam Faili
- Tie-breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation
Guillaume Cabanac, Gilles Hubert, Mohand Boughanem, and Claude Chrisment
- A New Approach for Cross-Language Plagiarism Analysis
Rafael Corezola Pereira, Viviane P. Moreira, and Renata Galante
- Evaluating Information Extraction
Andrea Esuli and Fabrizio Sebastiani
- Which Log for Which Information? Gathering Multilinguality Data from Different Log File Types
Maria Gäde, Vivien Petras, and Juliane Stiller
- Automated Component-Level Evaluation: Present and Future
Allan Hanbury and Henning Müller
- MapReduce for information retrieval evaluation: "Let's quickly test this on 12 TB of data"
Djoerd Hiemstra and Claudia Hauff
- Validating Query Simulators: An Experiment Using Commercial Searches and Purchases
Bouke Huurnink, Katja Hofmann, Maarten de Rijke, and Marc Bron
- A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages
Aki Loponen and Kalervo Järvelin
- Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements
Walid Magdy and Gareth Jones
- On the Evaluation of Entity Profiles
Maarten de Rijke, Krisztian Balog, Toine Bogers, and Antal van den Bosch
- Using Parallel Corpora for Multilingual (Multi-Document) Summarisation Evaluation
Marco Turchi, Josef Steinberger, Ralf Steinberger, and Mijail Kabadjov