NHSDataDictionaRy - a package for accessing the NHS Data Dictionary via web scraping, plus other useful functions

Context

This package was commissioned by the NHS-R community and is intended for web scraping the NHS Data Dictionary website for useful lookup tables. The NHS-R community have been pivotal in getting this package off the ground.

The package is maintained by Gary Hutson, Head of Advanced Analytics at Arden and GEM Commissioning Support Unit; to contact the maintainer directly, you can navigate to this site.

Additionally, the package has been developed with generic web scraping functionality, allowing other websites containing data tables and elements to be scraped.
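
As an illustration of that generic capability, the sketch below applies the tableR() function (introduced later in this document) to an arbitrary page; the URL and xpath are hypothetical placeholders rather than real endpoints.

# A minimal sketch of generic scraping with tableR().
# The URL and xpath are hypothetical placeholders - swap in a real page
# containing an HTML table and the xpath of that table.
other_table <- NHSDataDictionaRy::tableR(
  url   = "https://example.com/page-with-a-table",
  xpath = '//*[@id="example_table"]',
  title = "Example lookup from another site"
)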

Loading the package

To load the package, you can use the below command:

library(NHSDataDictionaRy)
library(dplyr)
library(magrittr)
library(tibble)

This brings in the functions needed to work with the package. The sub-sections below show how to use the package as intended; they assume a tibble of NHS Data Dictionary elements, referred to as nhs_tibble, sketched next.
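
A minimal sketch of how that tibble might be created, assuming the nhs_data_elements() function (covered later in this document) returns the link names, URLs, and xpaths used in the examples that follow:

# A minimal sketch: build the element lookup tibble used in the later examples.
# Assumes nhs_data_elements() takes no arguments and returns the link names,
# URLs and xpaths for each NHS Data Dictionary element.
nhs_tibble <- NHSDataDictionaRy::nhs_data_elements()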

Text manipulation of the tibble

The NHSDataDictionaRy package provides several Microsoft Excel-style convenience functions for working with text data. These are:

  • left_xl()
  • right_xl()
  • mid_xl()
  • len_xl()

I will demonstrate how these can be used on the tibble extracted from the previous example in the following sub-sections.
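
For readers more familiar with base R, the helpers broadly mirror substr() and nchar(); the comparison below is an illustrative assumption based on the outputs shown in the examples, not the package's documented internals.

# Illustrative base R equivalents (an assumption based on the example outputs)
x <- "ACCOMMODATION STATUS (SNOMED CT)"
substr(x, 1, 22)                        # roughly left_xl(x, 22)
substr(x, nchar(x) - 23 + 1, nchar(x))  # roughly right_xl(x, 23)
substr(x, 12, 12 + 20 - 1)              # roughly mid_xl(x, 12, 20)
nchar(x)                                # roughly len_xl(x)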

left_xl() function

The left_xl function expects two parameters: the first is the text to work with and the second is the number of characters to return from the left of the string:

# Grab a subset of the data frame
df <- nhs_tibble[10,]
result <- NHSDataDictionaRy::left_xl(df$link_name, 22)
print(result)
#> [1] "ACCOMMODATION STATUS ("
class(result)
#> [1] "character"

right_xl() function

This works in the same way as the left function, but returns characters from the right of the string inward:

# Grab a subset of the data frame
df <- nhs_tibble[10,]
result <- NHSDataDictionaRy::right_xl(df$link_name, 23)
print(result)
#> [1] "TION STATUS (SNOMED CT)"
class(result)
#> [1] "character"

mid_xl() function

This function takes a slightly different approach and expects three input parameters: the first is the text to work with, the second is the position at which to start, and the third is the number of characters to return from that position:

# Grab a subset of the data frame
df <- nhs_tibble[10,]
# Original string
original <- df$link_name
result <- NHSDataDictionaRy::mid_xl(df$link_name, 12, 20)
print(original); print(result)
#> [1] "ACCOMMODATION STATUS (SNOMED CT)"
#> [1] "ON STATUS (SNOMED CT"
class(result)
#> [1] "character"

len_xl() function

This is a simple but useful function, as it returns the length of the string:

# Grab a subset of the data frame
df <- nhs_tibble[10,]
# Original string
original <- df$link_name
string_length <- NHSDataDictionaRy::len_xl(original)
print(string_length)
#> [1] 32
class(string_length)
#> [1] "integer"

Working with the NHS R Data Dictionary lookup

This package provides functionality for working with the nhs_data_elements() output extracted from the NHS Data Dictionary website. The two most useful functions for extracting elements are the tableR and xpathTextR functions. These can work with the returned tibble to extract useful lookups.

tableR function (utilising scrapeR function)

The scrapeR function is the workhorse, but tableR wraps its results in a tidy tibble output. The example below shows how to use the returned tibble and pass it through tableR to scrape a lookup table:

# Filter by a specific lookup required
if(is.null(nhs_tibble)){
  print("The NHS tibble has not loaded, this could be due to internet connection issues.")
} else{
  reduced_tibble <-
    dplyr::filter(nhs_tibble, link_name == "ACTIVITY TREATMENT FUNCTION CODE")
}

# Use the tableR function to query the NHS Data Dictionary website and return the associated tibble

national_codes <- NHSDataDictionaRy::tableR(url = reduced_tibble$full_url,
                                            xpath = reduced_tibble$xpath_nat_code,
                                            title = "NHS Hospital Activity Treatment Function National Codes")

# The query has returned results; if the URL does not have a lookup table, an error will be thrown

print(head(national_codes,10))
#> # A tibble: 10 × 4
#>    Code  Description                               Dict_Type DttmExtracted      
#>    <chr> <chr>                                     <chr>     <dttm>             
#>  1 100   General Surgery Service                   NHS Hosp… 2024-12-02 06:49:18
#>  2 101   Urology Service                           NHS Hosp… 2024-12-02 06:49:18
#>  3 102   Transplant Surgery Service                NHS Hosp… 2024-12-02 06:49:18
#>  4 103   Breast Surgery Service                    NHS Hosp… 2024-12-02 06:49:18
#>  5 104   Colorectal Surgery Service                NHS Hosp… 2024-12-02 06:49:18
#>  6 105   Hepatobiliary and Pancreatic Surgery Ser… NHS Hosp… 2024-12-02 06:49:18
#>  7 106   Upper Gastrointestinal Surgery Service    NHS Hosp… 2024-12-02 06:49:18
#>  8 107   Vascular Surgery Service                  NHS Hosp… 2024-12-02 06:49:18
#>  9 108   Spinal Surgery Service                    NHS Hosp… 2024-12-02 06:49:18
#> 10 109   Bariatric Surgery Service                 NHS Hosp… 2024-12-02 06:49:18

Not all lookups have associated national code tables; if none is returned, you will receive a message saying the lookup table is not available for that NHS Data Dictionary type.
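
A minimal defensive sketch for that case, assuming the call signals an error when no national code table exists (whether it errors or returns NULL with a message should be checked against the package documentation):

# A hedged sketch: wrap the call so a missing lookup table does not stop a script.
# Assumes tableR() throws an error when no table exists; adjust if it instead
# returns NULL with a message.
maybe_codes <- tryCatch(
  NHSDataDictionaRy::tableR(url = reduced_tibble$full_url,
                            xpath = reduced_tibble$xpath_nat_code,
                            title = "Example lookup"),
  error = function(e) {
    message("No national code table available for this NHS Data Dictionary type.")
    NULL
  }
)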

Using my lookup with NHS data

Some lookups are needed frequently; one example is mapping a specialty code to its specialty unit description. I will show an example with a made-up data frame to illustrate the use case for these lookups and the benefit of keeping them up to date:


act_aggregations <- tibble(SpecCode = as.character(c(101, 102, 103, 104, 105)),
                           ActivityCounts = round(rnorm(5, 250, 3), 0),
                           Month = rep("May", 5))

# Use dplyr to join the NHS activity by specialty code

if(is.null(national_codes)){
  print("The national codes lookup has not loaded, this could be due to internet connection issues.")
} else{
  act_aggregations %>% 
    left_join(national_codes, by = c("SpecCode" = "Code"))
}
#> # A tibble: 5 × 6
#>   SpecCode ActivityCounts Month Description        Dict_Type DttmExtracted      
#>   <chr>             <dbl> <chr> <chr>              <chr>     <dttm>             
#> 1 101                 250 May   Urology Service    NHS Hosp… 2024-12-02 06:49:18
#> 2 102                 260 May   Transplant Surger… NHS Hosp… 2024-12-02 06:49:18
#> 3 103                 248 May   Breast Surgery Se… NHS Hosp… 2024-12-02 06:49:18
#> 4 104                 256 May   Colorectal Surger… NHS Hosp… 2024-12-02 06:49:18
#> 5 105                 259 May   Hepatobiliary and… NHS Hosp… 2024-12-02 06:49:18


  
This easily joins the lookup onto your data.

The benefit of having this in an R package is that you can instantly obtain the most relevant and up-to-date NHS lookups, removing the need for a massive data warehouse to capture this information.

nhs_table_findeR function

This function performs the steps above in one consolidated call. This means there is no need to call the nhs_data_elements() and tableR() functions separately; they are nested inside this convenience function. This is how you would use it:

nhs_table_findeR("ACCOMMODATION STATUS CODE", title="Accomodation Status Code National Code Lookup")
#> # A tibble: 54 × 4
#>    Code  Description                               Dict_Type DttmExtracted      
#>    <chr> <chr>                                     <chr>     <dttm>             
#>  1 MA00  Mainstream Housing                        Accomoda… 2024-12-02 06:49:19
#>  2 MA01  Owner occupier                            Accomoda… 2024-12-02 06:49:19
#>  3 MA02  Settled mainstream housing with family/f… Accomoda… 2024-12-02 06:49:19
#>  4 MA03  Shared ownership scheme e.g. Social Home… Accomoda… 2024-12-02 06:49:19
#>  5 MA04  Tenant - Local Authority/Arms Length Man… Accomoda… 2024-12-02 06:49:19
#>  6 MA05  Tenant - Housing Association              Accomoda… 2024-12-02 06:49:19
#>  7 MA06  Tenant - private landlord                 Accomoda… 2024-12-02 06:49:19
#>  8 MA09  Other mainstream housing (not listed)     Accomoda… 2024-12-02 06:49:19
#>  9 HM00  Homeless                                  Accomoda… 2024-12-02 06:49:19
#> 10 HM01  Rough sleeper                             Accomoda… 2024-12-02 06:49:19
#> # ℹ 44 more rows
# Lower case still works
glimpse(nhs_table_findeR("accommodation status code"))
#> Rows: 54
#> Columns: 4
#> $ Code          <chr> "MA00", "MA01", "MA02", "MA03", "MA04", "MA05", "MA06", …
#> $ Description   <chr> "Mainstream Housing", "Owner occupier", "Settled mainstr…
#> $ Dict_Type     <chr> "Not Specified", "Not Specified", "Not Specified", "Not …
#> $ DttmExtracted <dttm> 2024-12-02 06:49:19, 2024-12-02 06:49:19, 2024-12-02 06…

xpathTextR function

This function has been provided to return elements from a website other than HTML tables, as the other functions predominantly work with tables. The example below shows how this can be implemented, but it requires retrieving the xpath via the Inspect command in Google Chrome (CTRL + SHIFT + I):


url <- "https://datadictionary.nhs.uk/data_elements/abbreviated_mental_test_score.html"
xpath_element <- '//*[@id="element_abbreviated_mental_test_score.description"]'

# Run the xpathTextR function to retrieve details of the element

result_list <- NHSDataDictionaRy::xpathTextR(url, xpath_element)
print(result_list)
#> $result
#> [1] "Description\n  \n  \n    \n  \n        \n    \n            \n      ABBREVIATED MENTAL TEST SCORE\n                    is the same as attribute \n  \n            \n            \n      PERSON SCORE.\n        \n        \n    \n        \n    \n            \n      ABBREVIATED MENTAL TEST SCORE\n                    is recorded during a \n  \n            \n            \n      Breast Cancer Care Spell\n                    where the \n  \n            \n            \n      ASSESSMENT TOOL\n                    is \n  \n            \n            \n      'Abbreviated Mental Test Score'.\n        \n        \n    \n        \n    The score is in the range 0 to 10.\n      \n  \n  \n\n"
#> 
#> $website_passed
#> [1] "https://datadictionary.nhs.uk/data_elements/abbreviated_mental_test_score.html"
#> 
#> $xpath_passed
#> [1] "//*[@id=\"element_abbreviated_mental_test_score.description\"]"
#> 
#> $html_node_result
#> {html_document}
#> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en" data-whc_version="21.1">
#> [1] <head>\n<link rel="shortcut icon" href="../oxygen-webhelp/template/resour ...
#> [2] <body class="wh_topic_page frmBody">\n        <a href="#wh_topic_body" cl ...
#> 
#> $datetime_access
#> [1] "2024-12-02 06:49:20 UTC"
#> 
#> $person_accessed
#> [1] " - "

This provides details of the result: the text retrieved live from the website (which would need some cleaning), the website passed to the function, the xpath used, the result of the node search, the date and time the list was generated, and the person and domain accessing it.

Cleaning the text example

The example below shows how the text could be cleaned once it is retrieved:

# Use the returned result and do some text processing
clean_text <- trimws(unlist(result_list$result))
clean_text <- clean_text %>% 
  gsub("[\r\n]", "", .) %>% # Remove newline and carriage return characters
  trimws() %>% # Trim any leading/trailing whitespace
  as.character() # Cast to a character vector

print(clean_text)
#> [1] "Description                                        ABBREVIATED MENTAL TEST SCORE                    is the same as attribute                                 PERSON SCORE.                                                  ABBREVIATED MENTAL TEST SCORE                    is recorded during a                                 Breast Cancer Care Spell                    where the                                 ASSESSMENT TOOL                    is                                 'Abbreviated Mental Test Score'.                                The score is in the range 0 to 10."

I have used trimws() to tidy the result element extracted (via unlist()) from the list returned by the previous function; I then pipe into gsub() to remove the newlines, apply trimws() again to make sure the outer spacing is sorted, and convert (cast) the value to a character string. Finally, the result is printed.
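
As the printed output shows, the cleaned text still contains runs of internal spaces; a small extra step (not part of the original example) could collapse them:

# Optional extra step: collapse runs of internal whitespace into single spaces
clean_text <- gsub("\\s+", " ", clean_text)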

Getting data from OpenSafely

A contribution has been added to the package to allow OpenSafely data to be examined. To get the OpenSafely data, you specify the code list required and it will be pulled back for you. To do this, follow the example below:

# Check if the connection has returned any values
if(is.null(result_list)){
  print("There is an issue with the internet. This function cannot be used until the internet is available.")
} else{
  os_list <- NHSDataDictionaRy::openSafely_listR("opensafely/ace-inhibitor-medications")
  glimpse(os_list)
}
#> Rows: 1,096
#> Columns: 6
#> $ type          <chr> "amp", "amp", "amp", "amp", "amp", "amp", "amp", "amp", …
#> $ id            <chr> "2.191211e+16", "2.192711e+16", "2.998391e+16", "2.19124…
#> $ bnf_code      <chr> "0205051AAAAAAAA", "0205051AAAAAAAA", "0205051AAAAAAAA",…
#> $ nm            <chr> "Perindopril tosilate 2.5mg tablets (Teva UK Ltd)", "Per…
#> $ Dict_Type     <chr> "Not Specified", "Not Specified", "Not Specified", "Not …
#> $ DttmExtracted <dttm> 2024-12-02 06:49:21, 2024-12-02 06:49:21, 2024-12-02 06…

This extends the functionality of the tableR wrapper to pull back HTML tables, and has been added as its own function for convenience when working with the OpenSafely site.

Wrapping up

There are lots of use cases for this package, and I would like to keep iterating on this tool, so please contact me with suggestions of what could be included in future versions.