| Title: | A Slow Version of the Rapid Automatic Keyword Extraction (RAKE) Algorithm |
|---|---|
| Description: | A mostly pure-R implementation of the RAKE algorithm (Rose, S., Engel, D., Cramer, N. and Cowley, W. (2010) <doi:10.1002/9780470689646.ch1>), which can be used to extract keywords from documents without any training data. |
| Authors: | Christopher Baker [aut, cre] |
| Maintainer: | Christopher Baker <[email protected]> |
| License: | MIT + file LICENSE |
| Version: | 0.1.1 |
| Built: | 2024-11-12 06:46:25 UTC |
| Source: | CRAN |
A data frame containing PLOS publication data for publications related to dogs. The purpose of this data frame is to provide an example of some text to extract keywords from.
dog_pubs
A data frame with 30 rows and 3 variables:

- doi: The publication's DOI
- title: The publication's title
- abstract: The publication's abstract
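As a quick illustration (not part of the original documentation), you can inspect the data directly. The library call assumes the package name slowraker, which is not stated on this page:

```r
library(slowraker)  # package name assumed

# A quick look at the example publications
head(dog_pubs$title)
nrow(dog_pubs)  # 30 rows, per the format description above
```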
A data frame containing all possible parts-of-speech, as per the openNLP package. This list was taken from "Part-Of-Speech Tagging with R". pos_tags contains the following two columns:

- The abbreviation for the part-of-speech (i.e., its tag)
- A short description of the part-of-speech
pos_tags
An object of class data.frame with 36 rows and 2 columns.
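For example (a sketch that assumes the tag abbreviation is the first of the two columns described above, and that the package is attached as shown earlier), you can look up the verb tags that slowrake() excludes by default:

```r
# Verb-related POS tags (the default stop_pos value of slowrake())
verb_tags <- c("VB", "VBD", "VBG", "VBN", "VBP", "VBZ")

# Show their descriptions, assuming column 1 holds the tag abbreviation
pos_tags[pos_tags[[1]] %in% verb_tags, ]
```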
rbind a rakelist
rbind_rakelist(rakelist, doc_id = NULL)
| Argument | Description |
|---|---|
| rakelist | An object of class rakelist (i.e., the output of slowrake()). |
| doc_id | An optional vector of document IDs, which should be the same length as rakelist. |
A single data frame which contains all documents' keywords. The doc_id column tells you which document a keyword was found in.
```r
rakelist <- slowrake(txt = dog_pubs$abstract[1:2])

# Without specifying doc_id:
head(rbind_rakelist(rakelist = rakelist))

# With specifying doc_id:
head(rbind_rakelist(rakelist = rakelist, doc_id = dog_pubs$doi[1:2]))
```
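Conceptually, the binding amounts to something like the base-R sketch below. This is only an illustration of the output structure, not the package's actual implementation; the helper name bind_rakelist_sketch is made up, and the fallback of using positional indices as document IDs is an assumption:

```r
# Illustration only: prepend a doc_id column to each document's keyword
# data frame, then stack the per-document data frames into one.
bind_rakelist_sketch <- function(rakelist, doc_id = NULL) {
  if (is.null(doc_id)) doc_id <- seq_along(rakelist)  # assumed default IDs
  dfs <- Map(function(df, id) cbind(doc_id = id, df), rakelist, doc_id)
  do.call(rbind, dfs)
}
```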
A relatively slow version of the Rapid Automatic Keyword Extraction (RAKE) algorithm. See "Automatic keyword extraction from individual documents" for details on how RAKE works, or read the "Getting started" vignette (vignette("getting-started")).
slowrake(txt, stop_words = smart_words, stop_pos = c("VB", "VBD", "VBG", "VBN", "VBP", "VBZ"), word_min_char = 3, stem = TRUE)
| Argument | Description |
|---|---|
| txt | A character vector, where each element of the vector contains the text for one document. |
| stop_words | A vector of stop words which will be removed from your documents. The default value (smart_words) is the SMART stop word list. |
| stop_pos | All words that have a part-of-speech (POS) that appears in stop_pos will be considered stop words. The default removes words tagged as verbs; see the pos_tags data frame for all possible tags. |
| word_min_char | The minimum number of characters that a word must have to remain in the corpus. Words with fewer than word_min_char characters will be removed. |
| stem | Do you want to stem the words before running RAKE? |
An object of class rakelist, which is just a list of data frames (one data frame for each element of txt). Each data frame will have the following columns:

- keyword: A keyword that was identified by RAKE.
- freq: The number of times the keyword appears in the document.
- score: The keyword's score, as per the RAKE algorithm. Keywords with higher scores are considered to be higher quality than those with lower scores.
- stem: If you specified stem = TRUE, you will get the stemmed versions of the keywords in this column. When you choose stemming, the keyword's score (score) will be based on its stem, but the reported number of times that the keyword appears (freq) will still be based on the raw, unstemmed version of the keyword.
slowrake(txt = "some text that has great keywords") slowrake(txt = dog_pubs$title[1:2], stem = FALSE)
slowrake(txt = "some text that has great keywords") slowrake(txt = dog_pubs$title[1:2], stem = FALSE)
A vector containing the SMART information retrieval system stop words. See tm::stopwords('SMART') for more details.
smart_words
An object of class character of length 571.
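For example (an illustrative sketch; the extra stop words are arbitrary and the package is assumed to be attached), you can extend smart_words with your own domain-specific terms and pass the result to slowrake():

```r
# Add a couple of arbitrary, domain-specific words to the SMART list
my_stops <- c(smart_words, "dog", "dogs")
slowrake(txt = dog_pubs$abstract[1], stop_words = my_stops)
```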