Apple is sponsoring the International Conference on Learning Representations (ICLR), which will take place in person from May 7 to 11 in Vienna, Austria. ICLR brings together professionals dedicated to the advancement of deep learning.
Schedule
Below is the schedule of Apple-sponsored workshops and events at ICLR 2024. Visit the Apple booth in Halle A, booth #3, on May 7-9 from 9:00 a.m. to 5:00 p.m. CEST and on May 10 from 9:00 a.m. to 4:00 p.m. CEST.
Tuesday May 7
- Efficient ConvBN Blocks for Transfer Learning and Beyond
- 10:45 – 12:45 CEST, Halle B, #176
- Kaichao You (Tsinghua University), Qin Guo (Tsinghua University), Anchang Bao (Tsinghua University), Meng Cao, Ping Huang, Jiulong Shan, Mingsheng Long (Tsinghua University)
- Poly-View Contrastive Learning
- 16:30 – 18:30 CEST, Halle B, #284
- Amitis Shidani (Oxford University), Dan Busbridge, Devon Hjelm, Jason Ramapuram, Eeshan Gunesh Dhekane, Russ Webb
Wednesday May 8
- Women in Machine Learning (WiML) Social
- 12:00 – 14:00 CEST, Halle B 6
- Dhruti Shah and Maria Cervera will participate in the WiML social.
- Ferret: Refer and Ground Anything Anywhere at Any Granularity
- 16:30 – 18:30 CEST, Halle B, #81
- Haoxuan You, Haotian Zhang, Liangliang Cao, Zhe Gan, Bowen Zhang, Zirui Wang, Xianzhi Du, Shih-Fu Chang (Columbia University), Yinfei Yang
Thursday May 9
- Generative Modeling with Phase Stochastic Bridges
- 10:30 – 10:45 CEST, Halle A 8-9
- Tianrong Chen (Georgia Institute of Technology), Jiatao Gu, Josh Susskind, Shuangfei Zhai, Laurent Dinh, Evangelos Theodorou (Georgia Institute of Technology)
- Generative Modeling with Phase Stochastic Bridges
- 10:45 – 12:45 CEST, Halle B, #77
- Tianrong Chen (Georgia Institute of Technology), Jiatao Gu, Josh Susskind, Shuangfei Zhai, Laurent Dinh, Evangelos Theodorou (Georgia Institute of Technology)
- MOFI: Learning Image Representations from Noisy Entity Annotated Images
- 10:45 – 12:45 CEST, Halle B, #190
- Wentao Wu, Aleksei Timofeev, Chen Chen, Bowen Zhang, Kun Duan, Shuangning Liu, Yantao Zheng, Jonathon Shlens (Google; work done while at Apple), Xianzhi Du, Zhe Gan, Yinfei Yang
- Manifold Diffusion Fields
- 16:30 – 18:30 CEST, Halle B, #38
- Ahmed Elhag (AIMS Senegal), Yuyang Wang, Josh Susskind, Miguel Ángel Bautista Martín
- Matryoshka Diffusion Models
- 16:30 – 18:30 CEST, Halle B, #246
- Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Josh Susskind, Navdeep Jaitly
Friday May 10
- Large Language Models as Generalizable Policies for Embodied Tasks
- 10:45 – 12:45 CEST, Halle B, #264
- Andrew Szot (Georgia Institute of Technology), Max Schwarzer (Université de Montréal), Harsh Agrawal, Bogdan Mazoure, Rin Metcalf Susa, Natalie Mackraz, Walter Talbott, Devon Hjelm, Alexander Toshev
- TiC-CLIP: Continual Training of CLIP Models
- 16:30 – 18:30 CEST, Halle B, #139
- Saurabh Garg (Carnegie Mellon University), Hadi Pouransari, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri
Saturday May 11
Accepted Papers
Compressing LLMs: The Truth is Rarely Pure and Never Simple
Ajay Jaiswal (University of Texas at Austin), Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang (University of Texas at Austin), Yinfei Yang
Data Filtering Networks
Alex Fang (University of Washington), Albin Madappally Jose, Amit Jain, Ludwig Schmidt (University of Washington), Alexander Toshev, Vaishaal Shankar
Efficient ConvBN Blocks for Transfer Learning and Beyond
Kaichao You (Tsinghua University), Qin Guo (Tsinghua University), Anchang Bao (Tsinghua University), Meng Cao, Ping Huang, Jiulong Shan, Mingsheng Long (Tsinghua University)
Efficient-3DiM: Learning a Generalizable Single-Image Novel-View Synthesizer in One Day
Yifan Jiang, Hao Tang, Rick Chang, Liangchen Song, Zhangyang Wang (University of Texas at Austin), Liangliang Cao
FedHyper: A Universal and Robust Learning Rate Scheduler for Federated Learning with Hypergradient Descent
Ziyao Wang (University of Maryland College Park), Jianyu Wang, Ang Li (University of Maryland College Park)
Ferret: Refer and Ground Anything Anywhere at Any Granularity
Haoxuan You, Haotian Zhang, Liangliang Cao, Zhe Gan, Bowen Zhang, Zirui Wang, Xianzhi Du, Shih-Fu Chang (Columbia University), Yinfei Yang
Generative Modeling with Phase Stochastic Bridges
Tianrong Chen (Georgia Institute of Technology), Jiatao Gu, Josh Susskind, Shuangfei Zhai, Laurent Dinh, Evangelos Theodorou (Georgia Institute of Technology)
Guiding Instruction-based Image Editing via Multimodal Large Language Models
Tsu-Jui Fu (University of California, Santa Barbara), Wenze Hu, Xianzhi Du, William Wang (University of California, Santa Barbara), Yinfei Yang, Zhe Gan
Hindsight PRIORs for Reward Learning from Human Preferences
Mudit Verma (Apple Intern/Arizona State University), Rin Metcalf Susa
JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling
Jingyang Zhang, Shiwei Li, Yuanxun Lu (Nanjing University), Tian Fang, David McKinnon, Yanghai Tsin, Long Quan (Hong Kong University of Science and Technology), Yao Yao (Nanjing University)
Large Language Models as Generalizable Policies for Embodied Tasks
Andrew Szot (Georgia Institute of Technology), Max Schwarzer (Université de Montréal), Harsh Agrawal, Bogdan Mazoure, Rin Metcalf Susa, Natalie Mackraz, Walter Talbott, Devon Hjelm, Alexander Toshev
Large-scale Training of Foundation Models for Wearable Biosignals
Salar Abbaspourazad, Oussama Elachqar, Andy Miller, Saba Emrani, Udhay Nallasamy, Ian Shapiro
LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures
Vimal Thilak, Omid Saremi, Preetum Nakkiran, Josh Susskind, Chen Huang, Hanlin Goh, Laurent Dinh, Etai Littwin
Manifold Diffusion Fields
Ahmed Elhag (AIMS Senegal), Yuyang Wang, Josh Susskind, Miguel Ángel Bautista Martín
Matryoshka Diffusion Models
Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Josh Susskind, Navdeep Jaitly
MOFI: Learning Image Representations from Noisy Entity Annotated Images
Wentao Wu, Aleksei Timofeev, Chen Chen, Bowen Zhang, Kun Duan, Shuangning Liu, Yantao Zheng, Jonathon Shlens (Google; work done while at Apple), Xianzhi Du, Zhe Gan, Yinfei Yang
Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization
Yuhang Zang (Nanyang Technological University), Hanlin Goh, Josh Susskind, Chen Huang
Poly-View Contrastive Learning
Amitis Shidani (Oxford University), Dan Busbridge, Devon Hjelm, Jason Ramapuram, Eeshan Gunesh Dhekane, Russ Webb
ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models
Iman Mirzadeh, Keivan Alizadeh Vahid, Sachin Mehta, Carlo C Del Mundo, Oncel Tuzel, Golnoosh Samei, Mohammad Rastegari, Mehrdad Farajtabar
TiC-CLIP: Continual Training of CLIP Models
Saurabh Garg (Carnegie Mellon University), Hadi Pouransari, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri
Vanishing Gradients in Reinforcement Finetuning of Language Models
Noam Razin (Tel Aviv University), Hattie Zhou (University of Montreal), Preetum Nakkiran, Josh Susskind, Omid Saremi, Arwen Bradley, Vimal Thilak, Etai Littwin
What Algorithms Can Transformers Learn? A Study in Length Generalization
Hattie Zhou (University of Montreal), Omid Saremi, Etai Littwin, Arwen Bradley, Noam Razin (Tel Aviv University), Josh Susskind, Samy Bengio, Preetum Nakkiran
Conformal Prediction via Regression-as-Classification
Etash Guha (RIKEN AIP), Shlok Natarajan (Salesforce), Thomas Möllenhoff (RIKEN AIP), Emtiyaz Khan (RIKEN AIP), Eugene Ndiaye
How to Compute Hessian-Vector Products?
Mathieu Dagreou (Inria), Thomas Moreau (Inria), Samuel Vaiter (CNRS), Pierre Ablin
Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling
Aadirupa Saha, Branislav Kveton (Amazon)
Pseudo-Generalized Dynamic View Synthesis from a Video
Xiaoming Zhao (UIUC), Fangchang Ma, Josh Susskind, Miguel Ángel Bautista Martín, Alex Colburn, Alex Schwing
When can transformers reason with abstract symbols?
Enric Boix (MIT), Josh Susskind, Omid Saremi, Emmanuel Abbe, Etai Littwin, Samy Bengio
Accepted Workshop Papers
Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals
Ran Liu (Georgia Institute of Technology), Ellen Zippi, Hadi Pouransari, Chris Sandino, Jingping Nie (Columbia University), Hanlin Goh, Erdrin Azemi, Ali Moin
How far are we from intelligent visual deductive reasoning?
Yizhe Zhang, Richard Bai, Ruixiang Zhang, Jiatao Gu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly
Large-scale Training of Foundation Models for Wearable Biosignals
Salar Abbaspourazad, Oussama Elachqar, Andy Miller, Saba Emrani, Udhay Nallasamy, Ian Shapiro
Rephrase, don't repeat: Overcoming scaling laws in data-constrained language modeling
Pratyush Maini (CMU), Skyler Seto, David Grangier, Richard Bai, Yizhe Zhang, Navdeep Jaitly
Acknowledgements
Samy Bengio is a member of the ICLR 2024 Organizing Committee.
Samy Bengio, Miguel Angel Bautista Martin, Eugene Ndiaye, and Yizhe Zhang are ICLR 2024 Area Chairs.
Fartash Faghri, Enrico Fini, Devon Hjelm, Bogdan Mazoure, Wenze Hu, Rin Metcalf Susa, Vimal Thilak and Luca Zappella are ICLR 2024 reviewers.