Publications

GLM-130B: An Open Bilingual Pre-Trained Model

We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It is an attempt to open-source a 100B-scale model at least as good as GPT-3 and to unveil how models of such a scale can be successfully pre-trained. Over the course of this effort, we face numerous unexpected technical and engineering challenges, particularly loss spikes and divergence. In this paper, we describe the training process of GLM-130B, including its design choices, training strategies for both efficiency and stability, and engineering efforts. The resulting GLM-130B model significantly outperforms GPT-3 175B on a wide range of popular English benchmarks, a performance advantage not observed in OPT-175B or BLOOM-176B. It also consistently and significantly outperforms ERNIE TITAN 3.0 260B, the largest Chinese language model, across related benchmarks. Finally, we leverage a unique scaling property of GLM-130B to reach INT4 quantization without quantization-aware training and with almost no performance loss, making it the first among 100B-scale models to do so. More importantly, this property enables effective inference on 4×RTX 3090 (24G) or 8×RTX 2080 Ti (11G) GPUs, the most affordable GPUs ever required for using a 100B-scale model. The GLM-130B model weights are publicly accessible, and its code, training logs, related toolkit, and lessons learned are open-sourced at https://github.com/THUDM/GLM-130B.
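As a rough illustration of what post-training INT4 weight quantization involves, the sketch below quantizes a weight matrix to signed 4-bit integers with a per-row absmax scale and dequantizes it for inference. The function names and the symmetric absmax scheme are assumptions for illustration, not the GLM-130B implementation.

```python
# Hypothetical sketch of symmetric per-channel INT4 weight quantization,
# in the spirit of the quantization-without-retraining described above.
import numpy as np

def quantize_int4(weight: np.ndarray):
    """Quantize a 2-D weight matrix to signed INT4 values, one scale per output row."""
    # Per-row absmax scale maps values into the signed 4-bit range [-7, 7].
    scale = np.abs(weight).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0, 1.0, scale)          # avoid division by zero
    q = np.clip(np.round(weight / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float weight matrix for inference."""
    return q.astype(np.float32) * scale

# Toy usage: the reconstruction error stays small relative to the weight magnitude.
w = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
print("mean abs error:", np.abs(w - w_hat).mean())
```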

CogKR: Cognitive Graph for Multi-hop Knowledge Reasoning

Inferring new facts from an existing knowledge graph with explainable reasoning processes is an important problem, known as knowledge graph (KG) reasoning. The problem is often formulated as finding a specific path that represents the query relation and connects the query entity to the correct answer. However, due to the limited expressiveness of individual paths, most previous works fail to capture the complex subgraph structure of the graph. We propose CogKR, which traverses the knowledge graph to conduct multi-hop reasoning. More specifically, motivated by the dual process theory from cognitive science, our framework is composed of an extension module and a reasoning module. By building a cognitive graph through iterative coordination of the two modules, CogKR can cope with more complex reasoning scenarios in the form of subgraphs instead of individual paths. Experiments on three knowledge graph reasoning benchmarks demonstrate that CogKR achieves significant improvements in accuracy over previous methods while remaining explainable. Moreover, we evaluate CogKR on the challenging one-shot link prediction task, where it exhibits superior accuracy and scalability compared to state-of-the-art approaches.
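To make the "extend, then reason over a subgraph" idea concrete, here is a minimal toy loop that grows a small cognitive subgraph around a query entity and scores candidate answers. The graph data, scoring heuristic, and function names are illustrative assumptions only, not the CogKR model (which learns both modules).

```python
# Toy sketch of iterative extension + reasoning over a knowledge graph,
# scoring answers from a growing subgraph rather than a single path.
from collections import defaultdict

# Toy knowledge graph: head entity -> list of (relation, tail entity) edges.
KG = defaultdict(list)
for h, r, t in [
    ("alice", "works_at", "lab"),
    ("lab", "located_in", "city"),
    ("alice", "friend_of", "bob"),
    ("bob", "lives_in", "city"),
]:
    KG[h].append((r, t))

def reason(query_entity, query_relation, max_hops=3):
    """Grow a small subgraph around the query and score candidate answers."""
    cognitive_graph = {query_entity}       # entities admitted so far
    scores = defaultdict(float)            # candidate answer scores
    frontier = {query_entity}
    for hop in range(max_hops):
        new_frontier = set()
        for head in frontier:
            for rel, tail in KG[head]:
                # Extension step: admit newly reached entities into the subgraph.
                if tail not in cognitive_graph:
                    new_frontier.add(tail)
                # Reasoning step (toy): reward entities reached via the query
                # relation, discounted by hop distance.
                if rel == query_relation:
                    scores[tail] += 1.0 / (hop + 1)
        cognitive_graph |= new_frontier
        frontier = new_frontier
    return max(scores, key=scores.get) if scores else None

print(reason("alice", "lives_in"))   # answers from the subgraph, not one path
```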

Policy-Gradient Training of Fair and Unbiased Ranking Functions

While implicit feedback (e.g., clicks, dwell times) is an abundant and attractive source of data for learning to rank, it can produce unfair ranking policies for both exogenous and endogenous reasons. Exogenous reasons typically manifest as biases in the training data, which are then reflected in the learned ranking policy and often lead to rich-get-richer dynamics. Moreover, even after such biases are corrected, reasons endogenous to the design of the learning algorithm can still lead to ranking policies that do not allocate exposure among items in a fair way. To address both sources of unfairness, we present the first learning-to-rank approach that simultaneously accounts for presentation bias and merit-based fairness of exposure. Specifically, we define a class of amortized fairness-of-exposure constraints that can be chosen based on the needs of an application, and we show how these fairness criteria can be enforced despite the selection biases in implicit feedback data. The key result is an efficient and flexible policy-gradient algorithm, called FULTR, which is the first to enable counterfactual estimators for both utility estimation and fairness constraints. Beyond the theoretical justification of the framework, we show empirically that the proposed algorithm can learn accurate and fair ranking policies from biased and noisy feedback.
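The sketch below illustrates the general shape of such an approach: a stochastic Plackett-Luce ranking policy trained by REINFORCE on an inverse-propensity-weighted utility minus a penalty for deviating from merit-proportional exposure. The propensity values, exposure model, disparity term, and penalty weighting are simplified assumptions for illustration; this is not the FULTR algorithm or its constraint formulation.

```python
# Minimal sketch of policy-gradient learning to rank with an IPS-corrected
# utility and a fairness-of-exposure penalty (toy, single-query setting).
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(5)                                  # item log-scores of the ranking policy
clicks = np.array([1.0, 0.0, 1.0, 0.0, 0.0])         # biased click feedback for one query
propensity = np.array([0.9, 0.7, 0.5, 0.3, 0.2])     # assumed examination propensities
merit = clicks / propensity                           # counterfactual (IPS) relevance estimate
lam, lr = 0.5, 0.05                                   # fairness penalty weight, learning rate

def sample_ranking(theta):
    """Sample a ranking from the Plackett-Luce policy via the Gumbel trick."""
    return np.argsort(-(theta + rng.gumbel(size=theta.shape)))

def grad_log_prob(theta, ranking):
    """Exact gradient of the Plackett-Luce log-probability of the sampled ranking."""
    g, remaining = np.zeros_like(theta), list(ranking)
    for item in ranking:
        p = np.exp(theta[remaining]); p /= p.sum()
        g[item] += 1.0
        g[remaining] -= p
        remaining.remove(item)
    return g

for _ in range(500):
    ranking = sample_ranking(theta)
    positions = np.empty(len(theta), dtype=int)
    positions[ranking] = np.arange(len(theta))
    expo = 1.0 / np.log2(positions + 2.0)             # position-based exposure model
    utility = expo @ merit                             # IPS-weighted utility of the ranking
    target = merit / merit.sum() * expo.sum()          # merit-proportional exposure allocation
    disparity = np.abs(expo - target).sum()            # toy fairness-of-exposure violation
    reward = utility - lam * disparity                 # penalised objective
    theta += lr * reward * grad_log_prob(theta, ranking)   # REINFORCE update
```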

EFCNN: A Restricted Convolutional Neural Network for Expert Finding