- Google DeepMind
- Mountain View, USA
- fangyuliu.me/about
- https://orcid.org/0000-0001-7038-3623
- @hardy_qr
Stars
Paper List for Contrastive Learning for Natural Language Processing (a minimal sketch of the shared contrastive objective appears after this list).
Links to conference/journal publications in automated fact-checking (resources for the TACL'22 and EMNLP'23 papers).
[TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models.
Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention net, in PyTorch
Language Models Can See: Plugging Visual Controls in Text Generation
Improving Word Translation via Two-Stage Contrastive Learning (ACL 2022). Keywords: Bilingual Lexicon Induction, Word Translation, Cross-Lingual Word Embeddings.
simpleT5 is built on top of PyTorch Lightning ⚡️ and Transformers 🤗, letting you quickly train your T5 models.
TimeLMs: Diachronic Language Models from Twitter
[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation
Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch
Code for the paper "Improving Factual Completeness and Consistency of Image-to-Text Radiology Report Generation"
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations
[NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
[EMNLP 2021] Code and data for our paper "Visually Grounded Reasoning across Languages and Cultures"
[CoNLL'21] MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System (ACL 2022)
[EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels.
Code for the paper "Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into BERT"
Source code for the paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers" (EMNLP 2021)
An open source implementation of CLIP.
The official implementation of "BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?" (ACL 2021 main conference)
[TACL 2021] Code for our paper "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs"
[RepL4NLP 2021] Hierarchical Sparse Variational Autoencoder (HSVAE)
Code for the paper "An Improved Baseline for Sentence-level Relation Extraction" (AACL-IJCNLP 2022)
🤖 A Python library for learning and evaluating knowledge graph embeddings
[NAACL'21 & ACL'21] SapBERT: Self-alignment pretraining for BERT & XL-BEL: Cross-Lingual Biomedical Entity Linking.
PyTorch implementation of Barlow Twins (a sketch of its redundancy-reduction loss follows below).
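Several of the starred projects above (the contrastive-learning paper list, Mirror-BERT, TaCL, SimCTG, SapBERT, the contrastive OOD-detection code) build on an InfoNCE-style objective with in-batch negatives. As rough orientation only, here is a minimal PyTorch sketch of that objective; the function name, temperature default, and two-view setup are assumptions for illustration, not code taken from any of these repos.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over paired embeddings (hypothetical helper, not from any repo above).

    z1, z2: (N, D) embeddings of two views of the same N examples;
    z1[i] and z2[i] form the positive pair, all other rows act as
    in-batch negatives. The temperature default is an assumption.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    # Cosine-similarity logits between every cross-view pair, (N, N).
    logits = z1 @ z2.T / temperature
    # Row i's positive sits at column i; cross-entropy pulls it above the negatives.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```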
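The last item, Barlow Twins, drops explicit negatives and instead decorrelates embedding dimensions via the cross-correlation matrix of two augmented views. A minimal sketch, assuming (N, D) embeddings per view; the off-diagonal weight `lambd` and the epsilon are illustrative defaults, not values from the starred implementation.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lambd: float = 5e-3) -> torch.Tensor:
    """Barlow Twins redundancy-reduction loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same batch.
    lambd: weight on the off-diagonal term; an assumed default.
    """
    n = z1.size(0)
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # Cross-correlation matrix between the two views, (D, D).
    c = (z1.T @ z2) / n
    # Invariance term: push the diagonal toward 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonals toward 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```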