Seeking effective neural networks is a critical and practical problem in deep learning. Beyond the design of depth, type of convolution, normalization, and nonlinearities, the topological connectivity of a neural network is also important. Previous rule-based principles of modular design reduce the difficulty of building an effective architecture, but constrain the possible topologies to limited spaces. In this paper, we attempt to optimize the connectivity in neural networks. We propose a topological perspective that represents a network as a complete graph for analysis, where nodes carry out aggregation and transformation of features, and edges determine the flow of information. By assigning learnable parameters to the edges, which reflect the magnitude of the connections, the learning process can be performed in a differentiable manner. We further attach an auxiliary sparsity constraint to the distribution of connectedness, which encourages the learned topology to focus on critical connections. This learning process is compatible with existing networks and adapts to larger search spaces and different tasks. Quantitative experimental results show that the learned connectivity is superior to traditional rule-based ones, such as random, residual, and complete connectivity. In addition, it obtains significant improvements in image classification and object detection without introducing an excessive computation burden.
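To make the mechanism concrete, the sketch below illustrates the core idea of differentiable connectivity learning as described above: nodes of a complete directed acyclic graph aggregate features weighted by learnable edge parameters, and an L1-style penalty on the gated edge magnitudes promotes sparsity. This is a minimal PyTorch-style illustration under our own assumptions, not the paper's actual implementation; the class and method names (`LearnableTopologyStage`, `sparsity_loss`) and the sigmoid gating are hypothetical choices for exposition.

```python
# Minimal sketch (illustrative, not the paper's code): a stage whose nodes
# form a complete DAG with one learnable scalar per directed edge. Edge
# magnitudes gate feature aggregation; an auxiliary L1 term encourages the
# learned topology to keep only critical connections.
import torch
import torch.nn as nn


class LearnableTopologyStage(nn.Module):
    def __init__(self, num_nodes: int, channels: int):
        super().__init__()
        self.num_nodes = num_nodes
        # One learnable logit per directed edge (i -> j); the sigmoid of a
        # logit reflects the magnitude of that connection.
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes + 1, num_nodes))
        # Each node transforms its aggregated input (conv-BN-ReLU here).
        self.transforms = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(),
            )
            for _ in range(num_nodes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.edge_logits)  # edge magnitudes in (0, 1)
        outputs = [x]  # outputs[0] is the stage input
        for j in range(self.num_nodes):
            # Node j aggregates features from all preceding nodes, weighted
            # by the learnable edge gates (differentiable in the gates).
            agg = sum(gates[i, j] * outputs[i] for i in range(len(outputs)))
            outputs.append(self.transforms[j](agg))
        return outputs[-1]

    def sparsity_loss(self) -> torch.Tensor:
        # Auxiliary sparsity constraint on the distribution of connectedness:
        # an L1 mean over gated edge weights pushes most gates toward zero.
        return torch.sigmoid(self.edge_logits).mean()
```

In training, such a penalty would typically be added to the task loss with a small coefficient, e.g. `loss = task_loss + lam * stage.sparsity_loss()`, so that the optimizer jointly fits the features and prunes unimportant edges; the coefficient `lam` here is again an assumed hyperparameter.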