Cem Ersoy and Albert Ali Salah receive BUVAK Award for Excellence in Research

International Evening at CMPE

CmpE team wins the Eating Condition Sub-Challenge Award at INTERSPEECH Computational Paralinguistics Challenge

İsmail Arı Computer Science and Engineering Summer School (In Turkish)

Cerberus competes for CMPE in RoboCup 2015 World Championship

Binnur Görer awarded Anita Borg scholarship

Ada Lovelace Day

2015 Senior Projects

About Us

Department Overview

The Computer Engineering Department (CmpE), with its 24 full-time faculty members, is home to Turkey's largest research groups in Computer and Sensor Networks, Video and Image Processing, Robotics, and Artificial Intelligence. With approximately 200 graduate students enrolled in its two-year Master of Science (with thesis) and PhD programs, CmpE has the largest graduate program in Turkey.

The undergraduate program in computer engineering is designed so that students acquire a balanced background in computer hardware, software, and computer applications, and can adapt to rapidly changing technology over their professional careers.

The research labs and research projects span a wide spectrum, ranging from embedded system design and real-time operating systems to parallel, distributed, and ubiquitous computing and multi-agent systems, including various aspects such as multimedia communications, security, and human-computer interaction.



  1. CMPE 579 Seminar: Sebastian Wernicke
    • Calendar: cmpe.events@gmail.com
    • Start time: 12:00pm
    • Title : Discovery in Millions of Genomes: Three key challenges for bioinformatics
      Speaker : Dr. Sebastian Wernicke, Seven Bridges Genomics, London and Cambridge (MA)
      Time : Tuesday December 1, 2015, 12:00-12:50
      Place : ETA 16 (AVS Seminar Room)

      Our ability to sequence the human genetic code advances even more rapidly than computation. Rather than doubling in performance every 18 months (“Moore’s law”), our ability to read genetic codes quadruples in that same amount of time. It is therefore estimated that by next year, a million individuals will have had their entire genome sequenced, and that by 2018, we will have generated ~2 exabytes of genetic information.

      While these genomic datasets – often generated through large consortia or through government initiatives – hold many promises for the effective future treatment of diseases such as cancer and rare genetic conditions, their very large size and medical sensitivity pose significant challenges for processing this data while at the same time keeping it secure. Solving these challenges requires very specific approaches that can’t be simply transferred from other large-scale computing approaches.

      This talk will outline three key computational challenges of discovering information in large genomic datasets and present our current research on these topics.

      The first challenge: Connect the data and make it available for analysis
      Genetic information is only useful if it can be analyzed efficiently by researchers. This is a nontrivial challenge since, on the one hand, the data is inherently sensitive and personal, yet on the other hand, researchers must be able to run tens of thousands of different and customized tools on it. Additionally, the data usually does not reside in a single place but in multiple locations, and is far too large to import into a single location. This requires the development of novel computing concepts, for which we will present our current approaches and thinking.

      The second challenge: Make algorithms come to the data
      Analyzing genetic data usually happens through complex chains of tools (“pipelines”). This poses two main challenges: First, despite this complexity, pipelines need to be easily reproducible and distributable by researchers. Second, as the same algorithms will usually run on distributed datasets, we need to ensure that they behave reliably in very different environments, from small desktop setups to massively parallel cloud environments. We will present the Common Workflow Language (CWL), an open source project initiated by Seven Bridges to tackle this challenge.

      The third challenge: Develop new, massively scalable algorithms
      Most of the algorithms and tools that are used today in generating and exploring genetic information were developed at a time when sequencing 1000 human genomes was considered a very ambitious feat. Consequently, these algorithms and methods were developed with small datasets in mind and do not scale efficiently to larger datasets, requiring new concepts to be developed and implemented. We will present so-called graph genomes, which are an example of these new technologies that we are developing in collaboration with the UK 100k genomes project.
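The graph-genome idea mentioned above can be illustrated with a small sketch: instead of a single linear reference string, the genome is stored as a directed graph of sequence segments, and alternate paths through the graph spell out different haplotypes. This is a hypothetical toy illustration of the general concept, not the data format used by Seven Bridges or the UK 100k genomes project.

```python
# Toy sketch of a "graph genome": sequence segments as graph nodes, with
# alternate paths encoding known variants alongside the reference.
# Illustrative only; real implementations use far more compact structures.

class GenomeGraph:
    def __init__(self):
        self.segments = {}   # node id -> DNA segment
        self.edges = {}      # node id -> list of successor node ids

    def add_segment(self, node_id, seq):
        self.segments[node_id] = seq
        self.edges.setdefault(node_id, [])

    def add_edge(self, src, dst):
        self.edges[src].append(dst)

    def spell(self, path):
        """Concatenate the segments along a path of node ids."""
        return "".join(self.segments[n] for n in path)

# Shared flanking sequence, plus a single-nucleotide variant site where
# the reference carries 'A' and a known variant carries 'G'.
g = GenomeGraph()
g.add_segment(1, "ACGT")
g.add_segment(2, "A")     # reference allele
g.add_segment(3, "G")     # variant allele
g.add_segment(4, "TTCA")
g.add_edge(1, 2); g.add_edge(1, 3)
g.add_edge(2, 4); g.add_edge(3, 4)

print(g.spell([1, 2, 4]))  # reference haplotype: ACGTATTCA
print(g.spell([1, 3, 4]))  # variant haplotype:   ACGTGTTCA
```

Because both alleles sit in the graph as alternate branches, reads from either haplotype can align to a path rather than being penalized as mismatches against a single linear reference, which is one motivation for this representation at scale.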

      Dr. Sebastian Wernicke leads the global strategy and growth of Seven Bridges Genomics, a Cambridge (MA)-based company that builds and deploys platforms for Next Generation Sequencing analysis. He also serves as managing director of Seven Bridges Genomics UK, a research-focused subsidiary in London that works in close collaboration with Genomics England and the 100k Genomes project. Dr. Wernicke joined Seven Bridges in 2012 after spending several years consulting for Fortune 500 pharmaceutical and financial services companies on their strategic initiatives. He received his Ph.D. in Bioinformatics from the University of Jena in Germany, where he developed novel algorithms for the combinatorial analysis of biological networks; today his tools and algorithms for network analytics are used by thousands of researchers worldwide.

Monday, December 7th

  1. Course evaluations period

Monday, December 14th

  1. Applications to graduate programs for 2016 Spring term

Friday, December 25th

  1. Classes end