Dependency parsing, an essential task in Natural Language Processing, is a key step in analyzing and understanding text. Most previous work on unsupervised dependency parsing is based on generative models. To induce a grammar effectively, various knowledge priors and inductive biases are manually encoded into the learning process. However, these priors and biases are mostly local features that can only be defined by experts. Another disadvantage of generative models is their context-freeness, which limits the information available for scoring dependencies in a sentence. We propose several approaches to unsupervised dependency parsing that automatically capture useful information: correlations between tokens, contextual information, and multilingual similarity.
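To make the task concrete: a dependency parse assigns each token exactly one head, so a tree over a sentence can be encoded as a head-index array. The sketch below is purely illustrative (the sentence, the 1-based head convention with 0 as ROOT, and the `dependencies` helper are our choices, not part of the work described).

```python
sentence = ["She", "reads", "books"]
# heads[i] is the 1-based index of token i's head; 0 denotes the artificial ROOT.
heads = [2, 0, 2]  # "reads" is the root; "She" and "books" depend on it.

def dependencies(tokens, heads):
    """List (head, dependent) word pairs for a head-index encoding of a parse."""
    arcs = []
    for i, h in enumerate(heads, start=1):
        head_word = "ROOT" if h == 0 else tokens[h - 1]
        arcs.append((head_word, tokens[i - 1]))
    return arcs

print(dependencies(sentence, heads))
# → [('reads', 'She'), ('ROOT', 'reads'), ('reads', 'books')]
```

Unsupervised parsing aims to induce such head assignments from raw text alone, without treebank annotation.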