Quantizing neural networks with low-precision numerics is an effective way to reduce storage and computation requirements, making it feasible to run neural networks on resource-constrained devices. However, network accuracy deteriorates after standard quantization, especially for the object detection task. To overcome this dilemma, we propose a teacher–student quantization training scheme for one-stage object detection networks. First, the quantization configurations of the weights and activations are determined from the statistical probability density of each layer, ensuring a lower representation error for the fixed-point numerics. Second, supplementary supervision from a high-performance floating-point teacher detection network assists the training of the quantized student detection network. The expression and fitting ability of the quantized student is thus significantly improved by selectively imitating the teacher's key features, as indicated by a feature selection matrix. The superiority of our method is verified on several benchmark datasets in comparison with related methods. On the PASCAL VOC2007 test set, the mean average precision (mAP) of the 4-bit quantized tiny-YOLOv2 trained with our method reaches 61.1%, compared with 57.4% for its 32-bit floating-point counterpart trained with the plain method. Further, 3-bit quantized detection networks still achieve satisfactory mAP even without pretrained weights, demonstrating the robustness of the proposed method.
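To make the first idea concrete, the sketch below quantizes one layer's values to signed fixed-point, choosing the clipping range from the empirical distribution of the values instead of the raw min/max. The percentile rule used here is an illustrative stand-in (an assumption, not the paper's exact density-based configuration), but it shows why distribution-aware ranges lower the representation error for bell-shaped weight and activation distributions.

```python
import numpy as np

def quantize_per_layer(x, num_bits=4, percentile=99.9):
    """Uniform symmetric quantization of a layer's values.

    The clipping threshold is taken from a percentile of |x|
    (a hypothetical stand-in for the paper's statistical-density
    rule), so rare outliers do not inflate the quantization step.
    """
    threshold = np.percentile(np.abs(x), percentile)
    # Signed b-bit integers span [-2^(b-1), 2^(b-1) - 1].
    qmax = 2 ** (num_bits - 1) - 1
    scale = threshold / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    # Return de-quantized values, as used in simulated-quantization training.
    return q * scale

# Example: 4-bit quantization of synthetic Gaussian weights.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=10_000)
w_q = quantize_per_layer(w, num_bits=4)
```

With 4 bits the output takes at most 16 distinct levels, and the quantization error stays small relative to the signal because the step size is matched to where the probability mass lies.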