In this talk, I will discuss several papers to be presented at ACL 2019: some at the main conference, some at the workshops and related events. The central topic of many of them is how to make use of symbolic linguistic structures, such as knowledge bases and graphs of taxonomic relations, in the era of neural NLP models. It is not obvious how to directly encode graph structures in a neural network, owing to their sparseness (and almost any lexical resource can be viewed as a form of multi-label weighted graph). On the other hand, hundreds of person-years have been spent manually encoding linguistic information into these resources, and it would be a great loss not to use them. Yet, to date, most neural NLP models rely on word and character embeddings derived from text alone, potentially limiting their performance.