BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MBZUAI - ECPv6.15.16.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MBZUAI
X-ORIGINAL-URL:https://asmbzuaipr-staging-71adee5795-ajdpepcwanf7bwcd.a03.azurefd.net
X-WR-CALDESC:Events for MBZUAI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Dubai
BEGIN:STANDARD
TZOFFSETFROM:+0400
TZOFFSETTO:+0400
TZNAME:+04
DTSTART:20220101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Dubai:20230920T110000
DTEND;TZID=Asia/Dubai:20230920T120000
DTSTAMP:20260415T022422Z
CREATED:20230914T124413Z
LAST-MODIFIED:20250109T051851Z
UID:9808-1695207600-1695211200@asmbzuaipr-staging-71adee5795-ajdpepcwanf7bwcd.a03.azurefd.net
SUMMARY:Self-supervised DNA models and scalable sequence processing with memory augmented transformers
DESCRIPTION:In recent years\, the domain of genomics has greatly benefited from advancements in artificial intelligence\, particularly machine learning’s capability to interpret genomic sequences. Such interpretations often require intricate analyses of the complex molecular processes at play within DNA functionality. In the first part of this talk\, we introduce GENA-LM\, a suite of transformer-based DNA language models tailored to process and decode DNA sequences. However\, the scalability of these transformer architectures poses its own set of challenges\, especially the computational complexity that grows quadratically with input size. The second part of the talk delves into our approach of recurrent memory augmentation in pre-trained transformer models. This strategy notably boosts the models’ capacity to handle long input sequences—up to millions of tokens—while maintaining computational efficiency. The implications of this are profound\, not only improving performance in language modeling tasks but also holding promise for memory-intensive applications. We propose that such memory augmentation can further propel the capabilities of models like GENA-LM\, extending their domain of application in bioinformatics and truly harnessing the power of AI in genomics discovery. \nAbout the Speaker: \nDr Mikhail Burtsev is a Landau AI Fellow at the London Institute. He studied microelectronics at the Moscow Power Engineering Institute\, before doing his PhD in computer science at the Keldysh Institute of Applied Mathematics. He held senior research positions at the Anokhin Institute of Normal Physiology and later the Kurchatov Institute\, and visiting research positions at Cambridge. He was Scientific Director of the Artificial Intelligence Research Institute in Moscow\, and set up and ran the Neural Nets and Deep Learning Laboratory at the Moscow Institute of Physics and Technology. Under his leadership\, the laboratory developed the award-winning open-source conversational AI framework\, DeepPavlov. \nDr Burtsev researches the mathematics behind more intelligent AI\, including continual learning and memory-augmented neural networks\, as well as AI-assisted maths.
URL:https://asmbzuaipr-staging-71adee5795-ajdpepcwanf7bwcd.a03.azurefd.net/event/self-supervised-dna-models-and-scalable-sequence-processing-with-memory-augmented-transformers/
LOCATION:Online Webinar
CATEGORIES:Virtual
ATTACH;FMTTYPE=image/png:https://staticcdn.mbzuai.ac.ae/mbzuaiwpprd01/2023/09/Mikhail-burstev37.png
END:VEVENT
END:VCALENDAR