
Spring 2024 GRASP Seminar: Yutong Bai, Johns Hopkins University

GRASP Lab

“Listening to the Data: Visual Learning from the Bottom Up”

ABSTRACT
We introduce a novel sequential modeling approach that enables learning a Large Vision Model (LVM) without using any linguistic data. To do this, we define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources such as semantic segmentations and depth reconstructions, without needing any meta-knowledge beyond the pixels. Once this wide variety of visual data (comprising 420 billion tokens) is represented as sequences, the model can be trained to minimize a cross-entropy loss for next-token prediction. By training across various scales of model architecture and data diversity, we provide empirical evidence that our models scale effectively. Many different vision tasks can be solved by designing suitable visual prompts at test time.
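To make the training objective concrete, below is a minimal sketch (in PyTorch) of next-token prediction with a cross-entropy loss over discrete "visual sentence" tokens. The vocabulary size, sequence length, and the tiny causal transformer used here are illustrative assumptions, not the LVM's actual tokenizer or architecture.

    # Minimal sketch: autoregressive next-token prediction over visual tokens.
    # VOCAB_SIZE, SEQ_LEN, and the tiny model below are assumptions for illustration.
    import torch
    import torch.nn as nn

    VOCAB_SIZE = 8192   # assumed size of the visual-token codebook (e.g. from a VQ tokenizer)
    SEQ_LEN    = 256    # assumed number of tokens per visual sentence
    D_MODEL    = 512

    class TinyCausalLM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
            layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

        def forward(self, tokens):                      # tokens: (B, T) integer ids
            T = tokens.size(1)
            causal = nn.Transformer.generate_square_subsequent_mask(T)
            h = self.blocks(self.embed(tokens), mask=causal)
            return self.head(h)                         # logits: (B, T, VOCAB_SIZE)

    model = TinyCausalLM()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loss_fn = nn.CrossEntropyLoss()

    # One training step: predict token t+1 from tokens up to t (teacher forcing).
    tokens = torch.randint(0, VOCAB_SIZE, (4, SEQ_LEN))  # stand-in for tokenized images/video
    logits = model(tokens[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), tokens[:, 1:].reshape(-1))
    loss.backward()
    opt.step()

In this setup, "visual prompting" at test time amounts to conditioning generation on a prefix of tokens (e.g. an image followed by an example annotation) and letting the model continue the sequence.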

PRESENTER
Yutong Bai is a 5th-year CS PhD student at Johns Hopkins University advised by Prof. Alan Yuille, and currently a visiting student at UC Berkeley advised by Prof. Alyosha Efros. She has interned at Meta AI (FAIR Labs) and Google Brain, and she was selected as a 2023 Apple Scholar and an EECS Rising Star.
