Anthropic
Bethesda, MD
15:29 (UTC -04:00)
@EzraWu
https://scholar.google.com/citations?user=jxJflawAAAAJ
Stars
Sorted by: Recently starred
Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering
LLM training code for Databricks foundation models
Tools for managing datasets for governance and training.
A tiny library for coding with large language models.
🦜🔗 Build context-aware reasoning applications
tiktoken is a fast BPE tokeniser for use with OpenAI's models.
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image …
Solve puzzles. Improve your pytorch.
Beautiful calculator app for macOS, Linux & Windows
A word2vec negative sampling implementation with correct CBOW update.
A bilingual NLI dataset annotated in Spanish and human translated into English
This repository contains the FewGLUE dataset for few-shot natural language understanding.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Beyond Accuracy: Behavioral Testing of NLP models with CheckList
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
BERT models for many languages created from Wikipedia texts
Code for the RecAdam paper: Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting.
Obtain Word Alignments using Pretrained Language Models (e.g., mBERT)