The randomness and complexity of urban traffic scenes make drivable-area detection a difficult task for self-driving cars. Inspired by human driving behavior, we propose a novel method for detecting drivable areas that fuses pixel information from a monocular camera with spatial information from a light detection and ranging (LIDAR) scanner. Analogous to the bijection of collineation, we introduce a new concept called co-point mapping: a bijection that maps points from the LIDAR scanner to points on the edge of the image segmentation. Our method locates candidate drivable areas through self-learning models based on initial drivable areas obtained by fusing obstacle information with superpixels. In addition, four features are fused to achieve more robust performance. In particular, we propose a feature called drivable degree (DD) to characterize how drivable each LIDAR point is. After the initial drivable area is characterized by the features obtained through self-learning, a Bayesian framework is used to compute the final probability map of the drivable area. Our approach introduces no common hypotheses and requires no training step, yet it achieves state-of-the-art performance on the ROAD-KITTI benchmark. Experimental results demonstrate that the proposed method is a general and efficient approach to drivable-area detection.
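The Bayesian fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the naive-Bayes independence assumption across the fused features, and the uniform prior are all assumptions introduced here for clarity.

```python
import numpy as np

def bayes_probability_map(feature_likelihoods, prior=0.5):
    """Fuse per-pixel feature likelihoods into a drivable-area probability map.

    Assumes (for illustration) that features are conditionally independent,
    so per-feature log-likelihood ratios can simply be summed (naive Bayes).

    feature_likelihoods: list of (p_drivable, p_not_drivable) array pairs,
    one pair per feature, each array of shape (H, W).
    prior: scalar prior probability that a pixel is drivable.
    """
    eps = 1e-9  # avoid log(0) for degenerate likelihoods
    # Start from the prior log-odds, broadcast to the image shape.
    log_odds = np.full_like(feature_likelihoods[0][0],
                            np.log(prior / (1.0 - prior)), dtype=float)
    for p_d, p_nd in feature_likelihoods:
        # Accumulate the evidence each feature contributes.
        log_odds += np.log((p_d + eps) / (p_nd + eps))
    # Convert log-odds back to the posterior P(drivable | features).
    return 1.0 / (1.0 + np.exp(-log_odds))
```

For example, a pixel whose features all favor "drivable" (e.g., likelihoods 0.9 vs. 0.1) accumulates positive log-odds and ends up with a posterior close to 1, while conflicting features pull the posterior back toward the prior.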