Abdullatif Köksal Defended His MS Thesis

Title: Datasets and Transformer Models for Cross-lingual Relation Classification

Abstract: Relation classification is one of the key topics in information extraction, and it can be used to construct knowledge bases or to provide useful information for question answering. Current approaches to relation classification focus mainly on English and require large amounts of human-annotated training data. Creating and annotating such training data for low-resource languages is impractical and expensive. To overcome this issue, we propose two cross-lingual relation classification models: a baseline model based on Multilingual BERT (mBERT) and a new multilingual pretraining setup called Matching the Multilingual Blanks (MTMB), which significantly improves the baseline through distant supervision. For evaluation, we introduce RELX, a new public benchmark dataset for cross-lingual relation classification in English, French, German, Spanish, and Turkish. We also provide the RELX-Distant dataset, which contains hundreds of thousands of sentences with relations collected from Wikipedia and Wikidata by distant supervision for these languages. We observe that MTMB significantly outperforms the mBERT baseline across the presented languages, with a 2.14% absolute improvement in average F1-score. We further investigate MTMB's effectiveness in low-resource settings: when only 10% of the training data is used, it achieves a 10.58% absolute improvement in average F1-score over mBERT.
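Below is a minimal sketch of the kind of mBERT-based relation classifier the abstract describes. It is an illustrative assumption, not the thesis's exact architecture: the entity-marker tokens, the use of the [CLS] representation, and the number of relation classes are hypothetical choices for demonstration.

```python
# Illustrative sketch of a relation classifier on top of Multilingual BERT (mBERT).
# Assumptions (not from the thesis): <e1>/<e2> entity markers, a [CLS]-based
# classification head, and 18 relation classes.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-multilingual-cased"
ENTITY_MARKERS = ["<e1>", "</e1>", "<e2>", "</e2>"]  # assumed marker scheme

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.add_special_tokens({"additional_special_tokens": ENTITY_MARKERS})


class RelationClassifier(nn.Module):
    def __init__(self, num_relations: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        # Resize embeddings to account for the added marker tokens.
        self.encoder.resize_token_embeddings(len(tokenizer))
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.classifier(cls_repr)


# Usage example: predict the relation between two marked entities in a sentence.
model = RelationClassifier(num_relations=18)  # number of classes is an assumption
sentence = "<e1>Marie Curie</e1> was born in <e2>Warsaw</e2>."
batch = tokenizer(sentence, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
predicted_relation = logits.argmax(dim=-1)
```

Because mBERT shares a single multilingual vocabulary and encoder, the same classifier can be trained on English data and evaluated on the other RELX languages, which is the cross-lingual transfer setting the thesis studies.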
