ChEMBL Resources

The SARfaris: GPCR, Kinase, ADME

Tuesday, 6 March 2018

ChEMBL tissues: Increasing depth, breadth and accuracy of annotations

Our recent tissue annotation efforts have focused on increasing the breadth and depth of the tissue annotations first introduced in ChEMBL 22. The figure above shows the increase in depth and coverage from that initial point until now.

We continue to use a suite of tissue ontologies, namely Uberon, the Experimental Factor Ontology (EFO), CALOHA and the BRENDA Tissue Ontology (BTO), to identify assays where the tissue is the assay system. We have increased the detail of the information we capture to reflect the more granular tissues mentioned in the assays, such as 'Popliteal lymph node' and 'Substantia nigra pars compacta', where previously the higher-level terms 'lymph node' and 'Substantia nigra' might have been captured.

Plasma based assays

We have recently focused annotation efforts on plasma-based assays, in response to end-user interest in these assays as well as the general prevalence of plasma as an assay system for many functional/ADME assays.

Assays with multiple tissue types
We have also increased tissue curation of bioassays whose measurements are recorded across multiple tissues in a single assay, e.g. 'Kidney/Liver' or 'Heart/Liver'. In these cases, bespoke entries representing the tissue combination are created in the Tissue Dictionary.
Ongoing improvements to tissue curation

·      These newly created tissue targets, and the assays annotated with them, will be available in the next ChEMBL release (ChEMBL 24).
·      Our future web interface tissue search functionality will also make use of the hierarchies inherent in the tissue ontologies to retrieve more granular tissue terms when searching with a higher-level term. For example, a search for assays annotated with the tissue 'compound eye' (UBERON:0000018) should ideally also retrieve assays annotated with direct children of this term, e.g. ommatidium (UBERON:0000971).
·      The nature of ontological terms is such that species differences may not always be clear where a single tissue term is used across different taxonomic groups to describe tissues that perform the same function in different species but have clear anatomical differences. An example is the term 'eye', which can refer to the 'compound eye' (UBERON:0000018) found in insects or the 'camera-type eye' (UBERON:0000019) found in humans. We plan to use taxonomic constraint information to disambiguate cases like these and improve the correctness of mappings.
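The hierarchy-aware search described above can be sketched in a few lines: a query term is expanded to include all of its descendant terms before matching assay annotations. This is a minimal illustration only; the parent-to-children mapping below is a tiny hand-written fragment, not a real ontology load.

```python
# Expand a query term to itself plus all transitive descendants.
def expand(term, children):
    """Return the term together with all of its descendant terms."""
    result = {term}
    for child in children.get(term, ()):
        result |= expand(child, children)
    return result

# Hypothetical parent -> children fragment of UBERON
children = {
    "UBERON:0000018": ["UBERON:0000971"],  # compound eye -> ommatidium
}

terms = expand("UBERON:0000018", children)
# Searching for 'compound eye' now also matches 'ommatidium' annotations.
```

A real implementation would build the `children` mapping from the ontology's subclass relations rather than hard-coding it.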
For queries and questions on tissue annotation-related matters, please contact our help desk.

Tuesday, 23 January 2018

Targets in ChEMBL through the years

Evolution of targets over time

While ChEMBL was first released in 2009, the data on which it is built originate from publications extending back to 1975. Despite relatively sparse coverage in the early years compared to now, it is interesting to see how the publicly available data for targets have grown over time. This interactive plot aims to present key data for each of ChEMBL's targets over the years, in a style inspired by the late Hans Rosling's TED talk on global development (if you haven't already seen it, I recommend that you watch it now!)

As shown above, dragging the slider at the bottom of the plot updates the year to reflect the data available up to that point. The following values are shown:

  • Y-axis: The cumulative sum of compounds with a pChEMBL value for the target
  • X-axis: The maximum pChEMBL value or LLE (depending on radio button selection) achieved to date for the target
  • Point Size: The maximum phase achieved for the target
  • Colour: Protein classification
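The per-year values above can be derived with a couple of grouped running aggregates. Here is a minimal sketch with made-up activity data (the column names and values are illustrative, not ChEMBL's schema): the cumulative compound count and the running maximum pChEMBL per target, snapshotted at a given slider year.

```python
import pandas as pd

# Made-up activity records: one row per compound measurement
activities = pd.DataFrame({
    "target":  ["CHEMBL203", "CHEMBL203", "CHEMBL204", "CHEMBL203"],
    "year":    [2009, 2010, 2010, 2011],
    "pchembl": [6.2, 7.5, 5.8, 7.1],
}).sort_values("year")

# Running per-target aggregates over time
activities["n_compounds"] = activities.groupby("target").cumcount() + 1
activities["max_pchembl"] = activities.groupby("target")["pchembl"].cummax()

# Data available up until the selected year (the slider position)
snapshot = activities[activities["year"] <= 2010].groupby("target").last()
```

For the selected year 2010, `snapshot` holds two compounds and a running maximum pChEMBL of 7.5 for CHEMBL203, and one compound at 5.8 for CHEMBL204.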

Hovering the mouse over a data point reveals the target's name; however, as the number of points increases, it may become difficult to make sense of the data. In addition to the controls at the top of the plot, which allow you to zoom and pan, it is possible to filter the data by protein classification. For example, single-clicking on "Enzyme" toggles these points on and off, while double-clicking turns off all other points, allowing you to isolate the data for enzymes.

Use the plot to explore the target data in ChEMBL, and feel free to share any interesting observations in the comments.

The plot was created using Dash and Plotly. You can view a larger version of the plot here, or download the source code here.

Thursday, 11 January 2018

Software Engineer Wanted!

We are currently seeking a talented Software Engineer to work on developing our exciting SureChEMBL resource.

SureChEMBL is a publicly available large-scale database containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature by an automated text- and image-mining pipeline on a daily basis, producing a database of more than 19 million chemical structures.

The successful candidate will have a minimum of 3 years of professional development experience with strong core Java Enterprise Edition development skills (please see job description below for full requirements).

For more details of the position, or to apply please visit:

The closing date for applications is 21st January 2018.

Wednesday, 12 July 2017

Using autoencoders for molecule generation

Some time ago we found the following paper, so we decided to take a look at it and train the described model using ChEMBL.

Luckily for us, we also found two open source implementations of the model: the original authors' one and a second one. We decided to rely on the latter, as the original author states that it might be easier to have greater success using it.

What is the paper about? It describes how molecules can be generated and specifically designed using autoencoders.

First of all we will give a simple and not very technical introduction for those who are not familiar with autoencoders, and then go through an IPython notebook showing a few examples of how to use the model.

  1. Autoencoder introduction

Autoencoders are one of the many popular unsupervised deep learning algorithms used nowadays for many different fields and purposes. They consist of two joint main blocks, an encoder and a decoder, both made of neural networks.

In classical cryptography, the cryptographer defines encoding and decoding functions to make the data impossible to read for those who might intercept the message but do not have the decoding function. A classical example of this is the Caesar cipher.

However, with autoencoders we don't need to define the encoding and decoding functions ourselves; this is exactly what the autoencoder does for us. We just need to set up the architecture of our autoencoder, and it will automatically learn the encoding and decoding functions by minimizing a loss (also called cost or objective) function with an optimization algorithm. In an ideal world we would reach a loss of 0.0, meaning that all the data we used as input are perfectly reconstructed after encoding and decoding. This is not usually the case :)
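To make the encode/decode/minimize-the-loss loop concrete, here is a toy numerical sketch (not the paper's model): a linear autoencoder with a 1-D bottleneck, trained by plain gradient descent to minimize the reconstruction error on 2-D points that lie near a line.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points close to a line, so a 1-D code reconstructs them well
t = rng.normal(size=(200, 1))
X = np.hstack([t, 3 * t]) + 0.01 * rng.normal(size=(200, 2))

W_enc = 0.1 * rng.normal(size=(2, 1))   # encoder: 2-D input -> 1-D code
W_dec = 0.1 * rng.normal(size=(1, 2))   # decoder: 1-D code -> 2-D output

lr = 0.01
for _ in range(1000):
    Z = X @ W_enc                 # encode: latent representation
    X_hat = Z @ W_dec             # decode: reconstruction
    err = X_hat - X
    loss = (err ** 2).mean()      # reconstruction (MSE) loss
    # gradients of the reconstruction error (up to a constant factor)
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```

After training, `loss` is close to zero: the 1-D latent code has learnt the direction the data vary along. Real autoencoders stack non-linear layers, but the encode/decode/minimize structure is the same.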

So, after the encoding phase we get an intermediate representation of the data (also called the latent representation, or code). This is why it is said that autoencoders can learn a new representation of data.

The two most typical scenarios for using autoencoders are:

  1. Dimensionality reduction: by setting up a bottleneck layer (the layer in the middle) with lower dimensionality than the input layer, we get a lower-dimensional representation of our data in the latent space. This can be loosely compared to classic PCA. The difference is that PCA is purely linear, while autoencoders usually use non-linear transfer functions (multiple layers with relu, tanh, sigmoid... transfer functions). In fact, the optimal solution for an autoencoder using only linear transfer functions is strongly related to PCA.

  2. Generative models: as the latent representation (the representation after the encoding phase) is just an n-dimensional array, it can be really tempting to artificially generate n-dimensional arrays and decode them in order to get new items (molecules!) based on the learnt representation. This is what we will do in the following example.

  2. Model training

Apart from RNNs, most machine/deep learning approaches require a fixed-length vector as input. The authors decided to take SMILES strings no longer than 120 characters and one-hot encode them to feed the model. This left out less than 3% of the molecules in ChEMBL, all of them above 1000 daltons.
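The fixed-length one-hot encoding can be sketched as follows: pad each SMILES string to 120 characters and mark one position per character over a character set. The charset below is a small illustrative subset, not the one used by the actual model.

```python
import numpy as np

# Illustrative charset; the real model derives its charset from the data
CHARSET = [" ", "c", "C", "O", "N", "(", ")", "1", "2", "="]
MAXLEN = 120

def one_hot_smiles(smiles, charset=CHARSET, maxlen=MAXLEN):
    """Encode a SMILES string as a (maxlen, len(charset)) one-hot matrix."""
    if len(smiles) > maxlen:
        raise ValueError("SMILES longer than %d characters" % maxlen)
    padded = smiles.ljust(maxlen)  # pad with spaces up to the fixed length
    vec = np.zeros((maxlen, len(charset)), dtype=np.float32)
    for i, ch in enumerate(padded):
        vec[i, charset.index(ch)] = 1.0
    return vec

x = one_hot_smiles("c1ccccc1")  # benzene -> a (120, 10) matrix
```

Each row of `x` has exactly one 1, so every molecule becomes a fixed-size input regardless of its SMILES length; this is why strings over 120 characters had to be dropped.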

We trained the autoencoder using the whole ChEMBL database (except that 3%), with an 80/20 train/test partition, reaching a validation accuracy of 0.99.

  3. Example

As the code repository only provides a model trained with 500k ChEMBL 22 molecules, and training a model against the whole of ChEMBL is quite a time-expensive task, we wanted to share with you the model we trained with the whole of ChEMBL 23, together with an IPython notebook containing some basic usage examples.

To run the notebook locally, you just need to clone the repository, create a conda environment using the provided environment.yml file and run jupyter notebook:

cd autoencoder_ipython
conda env create -f environment.yml
jupyter notebook

The notebook covers simple usage of the model:

  • Encoding and decoding a molecule (aspirin) to check that the model is working properly.
  • Sampling the latent space next to aspirin and getting auto-generated aspirin neighbours (figure 3a in the original publication), validating the molecules and checking how many of them don't already exist in ChEMBL.
  • Interpolation between two molecules. This didn't work as well as in the paper.
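The latent-space operations in those two examples boil down to simple vector arithmetic. Below is a sketch: `model.encode`/`model.decode` stand in for the trained model's methods and are not implemented here, and the latent dimensionality is illustrative.

```python
import numpy as np

def neighbours(z, n=10, scale=0.01, seed=0):
    """Sample n latent points in a small ball around z."""
    rng = np.random.default_rng(seed)
    return [z + scale * rng.normal(size=z.shape) for _ in range(n)]

def interpolate(z_a, z_b, steps=5):
    """Evenly spaced latent points on the line from z_a to z_b."""
    return [(1 - a) * z_a + a * z_b for a in np.linspace(0.0, 1.0, steps)]

# With the real model one would do something like:
#   z = model.encode(one_hot("CC(=O)Oc1ccccc1C(=O)O"))     # aspirin
#   candidates = [model.decode(p) for p in neighbours(z)]  # hypothetical API
z_a, z_b = np.zeros(292), np.ones(292)
path = interpolate(z_a, z_b)
```

Decoded points near a molecule tend to give close analogues, while points along an interpolation path may decode to invalid SMILES, which is one reason the interpolation results were weaker than in the paper.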

  4. Other possible uses for this model

As stated in the original paper, this model can also be used to optimize a molecule towards a desired property (AlogP, QED...).

Latent representations of molecules can also be used as structural molecular descriptors for target prediction algorithms. Most popular target prediction algorithms use fingerprints, which involve an obvious loss of structural information: a molecule can't be reconstructed from its fingerprint representation. As the latent representation preserves all of a molecule's 2D structural information in most cases (0.99 accuracy on the ChEMBL test set), we also believe that it may be used to improve the accuracy of fingerprint-based target prediction algorithms.

Hope you enjoy it!