Performance Evaluation of Open Source Large Language Model (LLM) Frameworks

This project aims to assess the performance of open-source large language model (LLM) frameworks. As these frameworks grow in popularity, it is crucial to evaluate them systematically to understand their capabilities and limitations.

The objectives of this project include:

  • Evaluating the efficiency and effectiveness of different open-source LLM frameworks.
  • Comparing the performance of LLM frameworks in terms of training time, inference speed, and resource utilization.
  • Analyzing the scalability of LLM frameworks to handle large datasets and complex language tasks.
  • Assessing the quality of generated text and the ability to fine-tune models for specific domains.

To achieve these objectives, students will design and conduct experiments using various open-source LLM frameworks. The experiments will use standard benchmark datasets and evaluate the performance metrics listed above. Additionally, techniques for optimizing and fine-tuning the frameworks for specific use cases will be explored.
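As a rough illustration of how the inference-speed measurements described above could be carried out, the following is a minimal, framework-agnostic sketch in Python. The function name `benchmark_inference` and the `generate_fn` callable are hypothetical names, not part of any specific framework: any text-generation call (e.g. a Hugging Face `pipeline` or a llama.cpp binding) could be wrapped and passed in.

```python
import time
import statistics

def benchmark_inference(generate_fn, prompts, warmup=1):
    """Measure per-prompt latency (seconds) and aggregate throughput
    (prompts/second) for any text-generation callable.

    generate_fn: a hypothetical callable taking a prompt string and
    returning generated text; wrap the framework under test here.
    """
    # Warm-up runs so one-time costs (model load, caches) don't skew timings.
    for p in prompts[:warmup]:
        generate_fn(p)

    latencies = []
    start = time.perf_counter()
    for p in prompts:
        t0 = time.perf_counter()
        generate_fn(p)
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start

    return {
        "mean_latency_s": statistics.mean(latencies),
        "p95_latency_s": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
        "throughput_prompts_per_s": len(prompts) / total,
    }
```

The same harness can be run against each framework with identical prompts, which keeps the comparison fair; resource utilization (GPU memory, CPU load) would be sampled separately, e.g. with `nvidia-smi` or `psutil`.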

The findings of this project will provide valuable insights into the strengths and weaknesses of different open-source LLM frameworks. It will help researchers, developers, and practitioners in selecting the most suitable framework for their specific language processing tasks. Furthermore, it will contribute to the advancement of LLM technology by identifying areas for improvement and optimization.

Project Members: 

İbrahim Furkan Özçelik
Mustafa Berk Turgut

Project Advisor: 

Atay Özgövde

Project Status: 

Project Year: 

2023 Fall

Contact us

Department of Computer Engineering, Boğaziçi University,
34342 Bebek, Istanbul, Turkey
