In recent years, prediction-based distributional word vectors (i.e., word embeddings) have become ubiquitous in natural language processing. While word embeddings robustly capture the existence of semantic associations between words, they fail to reflect, due to the distributional nature of embedding models, the exact type of semantic link that holds between the words, that is, the exact semantic relation (e.g., synonymy, antonymy, hypernymy). This talk presents an overview of recent models that fine-tune distributional word spaces for specific lexico-semantic relations, using external knowledge from lexico-semantic resources (e.g., WordNet) for supervision. I will analyze models that specialize embeddings for semantic similarity (as opposed to other types of semantic association) as well as models that specialize word vectors for detecting particular relations, both symmetric (e.g., synonymy, antonymy) and asymmetric (e.g., hypernymy, meronymy). The talk will also examine evaluation procedures and downstream tasks that benefit from specialized embedding spaces. Finally, I will demonstrate how to transfer embedding specializations to resource-lean languages, for which no external lexico-semantic resources exist.
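To make the idea of specializing a word space with lexical supervision concrete, here is a minimal retrofitting-style sketch (one representative post-processing approach): each word vector is iteratively pulled toward the vectors of its lexical neighbors while staying close to its original position. The toy vectors and the synonym pair below are purely illustrative stand-ins for an embedding space and WordNet-extracted constraints, not data from any specific model.

```python
import numpy as np

# Toy embedding space (illustrative vectors, not real embeddings).
vectors = {
    "cheap": np.array([1.0, 0.0]),
    "inexpensive": np.array([0.0, 1.0]),
    "pricey": np.array([0.9, 0.1]),
}

# Synonym constraints as they might be extracted from WordNet (hypothetical).
synonyms = [("cheap", "inexpensive")]

def retrofit(vectors, pairs, alpha=1.0, beta=1.0, iterations=10):
    """Pull each constrained word toward its lexical neighbors while
    keeping it anchored to its original (distributional) vector."""
    new = {w: v.copy() for w, v in vectors.items()}
    neighbors = {w: [] for w in vectors}
    for a, b in pairs:
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(iterations):
        for w, ns in neighbors.items():
            if not ns:
                continue  # words without constraints stay unchanged
            # Weighted average of the original vector and neighbor vectors.
            new[w] = (alpha * vectors[w] + beta * sum(new[n] for n in ns)) \
                     / (alpha + beta * len(ns))
    return new

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

specialized = retrofit(vectors, synonyms)
# The synonym pair moves closer together in the specialized space.
print(cos(specialized["cheap"], specialized["inexpensive"]))
```

The key design choice, shared by this family of methods, is that supervision only reshapes the regions of the space touched by lexical constraints; words absent from the resource keep their original distributional vectors, which is also what motivates the cross-lingual transfer methods for resource-lean languages discussed in the talk.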