--- title: "Distributional Semantics in R with the 'wordspace' Package" author: "Stefan Evert" date: "1 April 2016" output: rmarkdown::html_vignette: fig_width: 6 fig_height: 4 pdf_document: null vignette: > %\VignetteIndexEntry{Introduction to Wordspace} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Distributional semantic models (DSMs) represent the meaning of a target term (which can be a word form, lemma, morpheme, word pair, etc.) in the form of a feature vector that records either co-occurrence frequencies of the target term with a set of feature terms (_term-term model_) or its distribution across textual units (_term-context model_). Such DSMs have become an indispensable ingredient in many NLP applications that require flexible broad-coverage lexical semantics. Distributional modelling is an empirical science. DSM representations are determined by a wide range of parameters such as size and type of the co-occurrence context, feature selection, weighting of co-occurrence frequencies (often with statistical association measures), distance metric, dimensionality reduction method and the number of latent dimensions used. Despite recent efforts to carry out systematic evaluation studies, the precise effects of these parameters and their relevance for different application settings are still poorly understood. The **wordspace** package aims to provide a flexible, powerful and easy to use "interactive laboratory" that enables its users to build DSMs and experiment with them, but that also scales up to the large models required by real-life applications. Further background information and references can be found in: > Evert, Stefan (2014). Distributional semantics in R with the wordspace package. > In _Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations_, pages 110--114, Dublin, Ireland. Before continuing with this tutorial, load the package with ```{r message=FALSE} library(wordspace) ``` ## Input formats The most general representation of a distributional model takes the form of a sparse matrix, with entries specified as a triplet of row label (_target term_), column label (_feature term_) and co-occurrence frequency. A sample of such a table is included in the package under the name `DSM_VerbNounTriples_BNC`, listing syntactic verb-noun co-occurrences in the British National Corpus: ```{r echo=FALSE} set.seed(42) idx <- sort(sample.int(nrow(DSM_VerbNounTriples_BNC), 10)) knitr::kable(DSM_VerbNounTriples_BNC[idx, ]) ``` The `wordspace` package creates DSM objects from such triplet representations, which can easily be imported into R from a wide range of file and database formats. Ready-made import functions are provided for TAB-delimited text files (as used by [DISSECT](https://github.com/composes-toolkit/dissect)), which may be compressed to save disk space, and for term-document models created by the text-mining package `tm`. The native input format is a pre-compiled sparse matrix representation generated by the [UCS toolkit](http://www.collocations.de/software.html). In this way, UCS serves as a hub for the preparation of co-occurrence data, which can be collected from dependency pairs, extracted from a corpus indexed with the [IMS Corpus Workbench](https://cwb.sourceforge.io/) or imported from various other formats. ## Creating a DSM The first step in the creation of a distributional semantic model is the compilation of a co-occurrence matrix. 
## Creating a DSM

The first step in the creation of a distributional semantic model is the compilation of a co-occurrence matrix. Let us illustrate the procedure for verb-noun co-occurrences from the written part of the British National Corpus. First, we extract the relevant rows from the table above.

```{r}
Triples <- subset(DSM_VerbNounTriples_BNC, mode == "written")
```

Note that many verb-noun pairs such as _(walk, dog)_ still have multiple entries in `Triples`: _dog_ can appear either as the subject or as the object of _walk_.

```{r}
subset(Triples, noun == "dog" & verb == "walk")
```

There are two ways of dealing with such cases: we can either add up the frequency counts (a _dependency-filtered model_) or treat "dog-as-subject" and "dog-as-object" as two different terms (a _dependency-structured model_). We opt for a dependency-filtered model in this example -- can you work out how to compile the corresponding dependency-structured DSM in R, either for verbs or for nouns as target terms?

The `dsm` constructor function expects three vectors of the same length, containing row label (target term), column label (feature term) and co-occurrence count (or pre-weighted score) for each nonzero cell of the co-occurrence matrix. In our example, we use nouns as targets and verbs as features. Note the option `raw.freq=TRUE` to indicate that the matrix contains raw frequency counts.

```{r}
VObj <- dsm(target=Triples$noun, feature=Triples$verb, score=Triples$f, raw.freq=TRUE)
dim(VObj)
```

The constructor automatically computes marginal frequencies for the target and feature terms by summing over rows and columns of the matrix, respectively. This information is collected in the data frames `VObj$rows` and `VObj$cols`, together with the number of nonzero elements in each row and column:

```{r}
subset(VObj$rows, rank(-f) <= 6) # 6 most frequent nouns
```

This way of computing marginal frequencies is appropriate for syntactic co-occurrence and term-document models. In the case of surface co-occurrence based on token spans, the correct marginal frequencies have to be provided separately in the `rowinfo=` and `colinfo=` arguments (see `?dsm` for details).

The actual co-occurrence matrix is stored in `VObj$M`. Since it is too large to display on screen, we extract the top left corner with the `head` method for DSM objects. Note that you can also use `head(VObj, Inf)` to extract the full matrix.

```{r}
head(VObj)
```

## The DSM parameters

Rows and columns with few nonzero cells provide unreliable semantic information and can lead to numerical problems (e.g. because a sparse association score deletes the remaining nonzero entries). It is therefore common to apply frequency thresholds to both rows and columns, here in the form of requiring at least 3 nonzero cells. The option `recursive=TRUE` guarantees that both criteria are satisfied by the final DSM when rows and columns are filtered at the same time (see the examples in `?subset.dsm` for an illustration).

```{r}
VObj <- subset(VObj, nnzero >= 3, nnzero >= 3, recursive=TRUE)
dim(VObj)
```

If you want to filter _only_ rows or columns, pass the constraint as a named argument: `subset=(nnzero >= 3)` for rows and `select=(nnzero >= 3)` for columns.
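For example, the following calls (not evaluated here, since we keep the recursively filtered model from above) would drop rare rows while keeping all columns, and vice versa:

```{r eval=FALSE}
VObj.rowfiltered <- subset(VObj, subset=(nnzero >= 3))  # filter rows only
VObj.colfiltered <- subset(VObj, select=(nnzero >= 3))  # filter columns only
```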
The next step is to weight the co-occurrence frequency counts. Here, we use the _simple log-likelihood_ association measure with an additional logarithmic transformation, which has shown good results in evaluation studies. The `wordspace` package computes _sparse_ (or "positive") versions of all association measures by default, setting negative associations to zero. This guarantees that the sparseness of the co-occurrence matrix is preserved. We also normalize the weighted row vectors to unit Euclidean length (`normalize=TRUE`).

```{r}
VObj <- dsm.score(VObj, score="simple-ll", transform="log", normalize=TRUE, method="euclidean")
```

Printing a DSM object shows information about the dimensions of the co-occurrence matrix and whether it has already been scored. Note that the scored matrix does not replace the original co-occurrence counts, so `dsm.score` can be executed again at any time with different parameters.

```{r}
VObj
```

Most distributional models apply a dimensionality reduction technique to make data sets more manageable and to refine the semantic representations. A widely used technique is singular value decomposition (SVD). Since `VObj` is a sparse matrix, `dsm.projection` automatically applies an efficient algorithm from the `sparsesvd` package.

```{r}
VObj300 <- dsm.projection(VObj, method="svd", n=300)
dim(VObj300)
```

`VObj300` is a dense matrix with 300 columns, giving the coordinates of the target terms in 300 latent dimensions. Its attribute `"R2"` shows what proportion of information from the original matrix is captured by each latent dimension.

```{r, fig.width=7, fig.height=3, echo=2}
par(mar=c(4,4,1,1))
plot(attr(VObj300, "R2"), type="h", xlab="latent dimension", ylab="R2")
```

## Using DSM representations

The primary goal of a DSM is to determine "semantic" distances between pairs of words.

```{r}
pair.distances("book", "paper", VObj300, method="cosine")
```

The arguments to `pair.distances` can also be parallel vectors, so that distances for a large number of word pairs can be computed efficiently. By default, the function converts similarity measures into an equivalent distance metric -- the angle between vectors in the case of cosine similarity. If you want the actual similarity values, specify `convert=FALSE`:

```{r}
pair.distances("book", "paper", VObj300, method="cosine", convert=FALSE)
```

We are often interested in finding the nearest neighbours of a given term in the DSM space:

```{r}
nearest.neighbours(VObj300, "book", n=14) # reduced space
```

The return value is actually a vector of distances to the nearest neighbours, labelled with the corresponding terms. Here is how you obtain the actual neighbour terms:

```{r}
nn <- nearest.neighbours(VObj, "book", n=15) # unreduced space
names(nn)
```

The neighbourhood plot visualizes nearest neighbours as a semantic network based on their mutual distances. This often helps interpretation by grouping related neighbours. The network below shows that _book_ as a text type is similar to _novel_, _essay_, _poem_ and _article_; as a form of document it is similar to _paper_, _letter_ and _document_; and as a publication it is similar to _leaflet_, _magazine_ and _newspaper_.

```{r, echo=c(1,3), fig.height=4}
nn.mat <- nearest.neighbours(VObj300, "book", n=15, dist.matrix=TRUE)
par(mar=c(1,1,1,1))
plot(nn.mat)
```

A straightforward way to evaluate distributional representations is to compare them with human judgements of the semantic similarity between word pairs. The `wordspace` package includes two well-known data sets of this type: Rubenstein-Goodenough (`RG65`) and `WordSim353` (a superset of `RG65` with judgements from new test subjects).

```{r echo=FALSE}
knitr::kable(RG65[seq(5, 65, 10), ])
```
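Note that the terms in the data set are POS-disambiguated (e.g. `book_N`), whereas the row labels of `VObj300` are bare lemmas. As a rough sketch of what such an evaluation involves -- and assuming, as in the other wordspace evaluation data sets, that the pairs are stored in columns `word1` and `word2` -- one could strip the suffix with base R and look up a few pairs by hand (not evaluated here; several of the words are missing from our small model):

```{r eval=FALSE}
w1 <- sub("_N$", "", RG65$word1)  # strip the "_N" suffix by hand
w2 <- sub("_N$", "", RG65$word2)
pair.distances(head(w1), head(w2), VObj300, method="cosine")
```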
There is also a ready-made evaluation function, which computes the Pearson and rank correlations between the DSM distances and the human similarity judgements. The option `format="HW"` adjusts the POS-disambiguated notation of terms in the data set (e.g. `book_N`) to the format used by our distributional model (`book`).

```{r}
eval.similarity.correlation(RG65, VObj300, convert=FALSE, format="HW")
```

Evaluation results can also be visualized in the form of a scatterplot with a trend line.

```{r, echo=2}
par(mar=c(4,4,2,1))
plot(eval.similarity.correlation(RG65, VObj300, convert=FALSE, format="HW", details=TRUE))
```

The rank correlation of 0.308 is very poor, mostly due to the small amount of data on which our DSM is based. Much better results are obtained with pre-compiled DSM vectors from a large Web corpus, which are also included in the package. Note that target terms are given in a different format there (which corresponds to the format in `RG65`).

```{r, echo=2}
par(mar=c(4,4,2,1))
plot(eval.similarity.correlation(RG65, DSM_Vectors, convert=FALSE, details=TRUE))
```

## Advanced techniques

Schütze (1998) used DSM representations for word sense disambiguation (or, more precisely, word sense induction) based on a clustering of the sentence contexts of an ambiguous word. The `wordspace` package includes a small data set with such contexts for a selection of English words. Let us look at the noun _vessel_ as an example, which has two main senses ("ship" and "blood vessel"):

```{r}
Vessel <- subset(SemCorWSD, target == "vessel" & pos == "n")
table(Vessel$gloss)
```

Sentence contexts are given as tokenized strings (`$sentence`), in lemmatized form (`$hw`) and as lemmas annotated with part-of-speech codes (`$lemma`). Choose the version that matches the representation of target terms in your DSM.

```{r, echo=FALSE}
knitr::kable(Vessel[, c("sense", "sentence")], row.names=FALSE)
```

Following Schütze, each context is represented by a centroid vector obtained by averaging over the DSM vectors of all context words.

```{r}
centroids <- context.vectors(DSM_Vectors, Vessel$lemma, row.names=Vessel$id)
```

This returns a matrix of centroid vectors for the 12 sentence contexts of _vessel_ in the data set. The vectors can now be clustered and analyzed with standard R functions. Partitioning around medoids (PAM) has shown good and robust performance in evaluation studies.

```{r, echo=2:4}
par(mar=c(2, 2, 2, 1))
library(cluster) # clustering algorithms of Kaufman & Rousseeuw (1990)
res <- pam(dist.matrix(centroids), 2, diss=TRUE, keep.diss=TRUE)
plot(res, col.p=factor(Vessel$sense), shade=TRUE, which=1, main="WSD for 'vessel'")
```

Colours in the plot above indicate the gold standard sense of each instance of _vessel_. A confusion matrix confirms perfect clustering of the two senses:

```{r, echo=1, eval=2}
table(res$clustering, Vessel$sense)
knitr::kable(table(res$clustering, Vessel$sense))
```

We can also use a pre-defined function for the evaluation of clustering tasks, which is convenient but does not produce a visualization of the clusters. Note that the "target terms" of the task must correspond to the row labels of the centroid matrix, which we have set to the sentence IDs (`Vessel$id`) above.

```{r}
eval.clustering(Vessel, M=centroids, word.name="id", class.name="sense")
```

As a final example, let us look at a simple approach to compositional distributional semantics, which computes the compositional meaning of two words as the element-wise sum or product of their DSM vectors.
```{r}
mouse <- VObj300["mouse", ] # extract row vectors from matrix
computer <- VObj300["computer", ]
```

The nearest neighbours of _mouse_ are problematic, presumably because the type vector represents a mixture of the two senses that is not close to either meaning in the semantic space.

```{r}
nearest.neighbours(VObj300, "mouse", n=12)
```

By adding the vectors of _mouse_ and _computer_, we obtain neighbours that seem to fit the "computer mouse" sense very well:

```{r}
nearest.neighbours(VObj300, mouse + computer, n=12)
```

Note that the target is specified as a distributional vector rather than a term in this case. Observations from the recent literature suggest that element-wise multiplication is not compatible with non-sparse SVD-reduced DSMs, so it is not surprising to find completely unrelated nearest neighbours in our example:

```{r}
nearest.neighbours(VObj300, mouse * computer, n=12)
```