VMXi-Versatile Macromolecular Crystallography in situ
Open Access
Abstract: A group of three deep-learning tools, referred to collectively as CHiMP (Crystal Hits in My Plate), were created for analysis of micrographs of protein crystallization experiments at the Diamond Light Source (DLS) synchrotron, UK. The first tool, a classification network, assigns images into categories relating to experimental outcomes. The other two tools are networks that perform both object detection and instance segmentation, resulting in masks of individual crystals in the first case and masks of crystallization droplets in addition to crystals in the second case, allowing the positions and sizes of these entities to be recorded. The creation of these tools used transfer learning, where weights from a pre-trained deep-learning network were used as a starting point and repurposed by further training on a relatively small set of data. Two of the tools are now integrated at the VMXi macromolecular crystallography beamline at DLS, where they have the potential to remove the need for any user input, both for monitoring crystallization experiments and for triggering in situ data collections. The third is being integrated into the XChem fragment-based drug-discovery screening platform, also at DLS, to allow the automatic targeting of acoustic compound dispensing into crystallization droplets.
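The positions and sizes recorded from the instance masks can be illustrated with a minimal sketch (a hypothetical helper for illustration, not the CHiMP code), assuming the network outputs an instance-labelled mask in which each crystal carries a unique integer ID and 0 is background:

```python
import numpy as np

# Hedged sketch: given an instance-labelled mask (0 = background, each crystal
# a unique integer ID), record the centroid and pixel area of every crystal.
def crystal_positions_and_sizes(mask: np.ndarray) -> dict:
    results = {}
    for label in np.unique(mask):
        if label == 0:
            continue  # skip background
        ys, xs = np.nonzero(mask == label)
        results[int(label)] = {
            "centroid": (float(ys.mean()), float(xs.mean())),
            "area_px": int(ys.size),
        }
    return results

# Toy mask with two "crystals"
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:3] = 1  # a 2x2 crystal
mask[5:8, 5:7] = 2  # a 3x2 crystal
print(crystal_positions_and_sizes(mask))
```

Centroids like these are the kind of per-crystal record that can then drive downstream actions such as targeting in situ data collection.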
Oct 2024
Open Access
Abstract: Many bioimaging research projects require objects of interest to be identified, located, and then traced to allow quantitative measurement. Depending on the complexity of the system and imaging, instance segmentation is often done manually, and automated approaches still require weeks to months of an individual’s time to acquire the necessary training data for AI models. As such, there is a strong need to develop approaches for instance segmentation that minimize the use of expert annotation while maintaining quality on challenging image analysis problems.
Herein, we present Science Scribbler: Virus Factory, a citizen science project that we ran on the Zooniverse platform, in which citizen scientists annotated a cryo-electron tomography volume by locating and categorising viruses using point-based annotations instead of manually drawing outlines. One crowdsourcing workflow produced a database of virus locations, and the other produced a set of classifications of those locations. Together, these allowed mask annotations to be generated for training a deep learning–based segmentation model. From this model, segmentations were produced that allowed measurements such as counts of the viruses by virus class.
The application of citizen science–driven crowdsourcing to the generation of instance segmentations of volumetric bioimages is a step towards developing annotation-efficient segmentation workflows for bioimaging data. This approach aligns with the growing interest in citizen science initiatives that combine the collective intelligence of volunteers with AI to tackle complex problems while involving the public with research that is being undertaken in these important areas of science.
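The conversion from point-based annotations to mask annotations can be sketched minimally (the fixed radius, class encoding, and helper name are assumptions for illustration, not the project's exact pipeline):

```python
import numpy as np

# Hedged sketch: turn crowd-sourced (class, point) annotations into circular
# mask annotations of an assumed fixed radius, usable as weak labels for
# training a segmentation model.
def points_to_mask(shape, points, radius):
    """points: list of (class_id, (y, x)) pairs; returns a uint8 label image."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=np.uint8)
    for class_id, (cy, cx) in points:
        disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        mask[disc] = class_id  # paint a disc of this virus class
    return mask

points = [(1, (10, 10)), (2, (30, 40))]  # (virus class, (y, x)) pairs
mask = points_to_mask((64, 64), points, radius=5)
```

In practice the radius would be chosen to roughly match the particle size, and per-class masks would then feed a standard segmentation training loop.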
Jan 2024
Open Access
Abstract: Public participation in research, also known as citizen science, is being increasingly adopted for the analysis of biological volumetric data. Researchers working in this domain are applying online citizen science as a scalable distributed data analysis approach, with recent research demonstrating that non-experts can productively contribute to tasks such as the segmentation of organelles in volume electron microscopy data. This, alongside the growing challenge to rapidly process the large amounts of biological volumetric data now routinely produced, means there is increasing interest within the research community to apply online citizen science for the analysis of data in this context. Here, we synthesise core methodological principles and practices for applying citizen science for analysis of biological volumetric data. We collate and share the knowledge and experience of multiple research teams who have applied online citizen science for the analysis of volumetric biological data using the Zooniverse platform (www.zooniverse.org). We hope this provides inspiration and practical guidance regarding how contributor effort via online citizen science may be usefully applied in this domain.
Jun 2023
I13-2-Diamond Manchester Imaging
Diamond Proposal Number(s): [16205]
Open Access
Abstract: Methane (CH4) hydrate dissociation and CH4 release are potential geohazards currently investigated using X-ray computed tomography (XCT). Image segmentation is an important data processing step for this type of research. However, it is often time consuming, computing resource-intensive, operator-dependent, and tailored for each XCT dataset due to differences in greyscale contrast. In this paper, an investigation is carried out using U-Nets, a class of convolutional neural network, to segment synchrotron XCT images of CH4-bearing sand during hydrate formation, and extract porosity and CH4 gas saturation. Three U-Net deployments previously untried for this task are assessed: (1) a bespoke 3D hierarchical method, (2) a 2D multi-label, multi-axis method and (3) RootPainter, a 2D U-Net application with interactive corrections. U-Nets are trained using small, targeted hand-annotated datasets to reduce operator time. It was found that the segmentation accuracy of all three methods surpasses that of mainstream watershed and thresholding techniques. Accuracy slightly reduces in low-contrast data, which affects volume fraction measurements, but errors are small compared with gravimetric methods. Moreover, U-Net models trained on low-contrast images can be used to segment higher-contrast datasets, without further training. This demonstrates model portability, which can expedite the segmentation of large datasets over short timespans.
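Once the volume is segmented, porosity and gas saturation follow from simple voxel counts. A sketch under an assumed label convention (0 = sand grain, 1 = water/hydrate, 2 = CH4 gas), not the paper's code:

```python
import numpy as np

# Hedged sketch of the post-segmentation measurements, assuming voxel labels
# 0 = solid grain, 1 = water/hydrate, 2 = CH4 gas.
def porosity_and_gas_saturation(labels: np.ndarray):
    pore = np.isin(labels, (1, 2)).sum()          # all non-solid voxels
    porosity = pore / labels.size                 # pore fraction of the volume
    gas_saturation = (labels == 2).sum() / pore   # gas fraction of pore space
    return porosity, gas_saturation

# Toy 10x10x10 volume: 500 water/hydrate voxels, 200 gas voxels, 300 solid
vol = np.zeros((10, 10, 10), dtype=np.uint8)
vol[:5] = 1
vol[5:7] = 2
phi, sg = porosity_and_gas_saturation(vol)
print(phi, sg)  # porosity 0.7, gas saturation ~0.286
```

The same counts per sub-volume would give the spatially resolved volume fractions that the greyscale-contrast differences make hard to obtain by thresholding alone.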
Dec 2022
Open Access
Abstract: Segmentation of 3-dimensional (3D, volumetric) images is a widely used technique that allows interpretation and quantification of experimental data collected using a number of techniques (for example, Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Electron Tomography (ET)). Although the idea of semantic segmentation is a relatively simple one, namely giving each pixel a label that defines what it represents (e.g. cranial bone versus brain tissue), the subjective and laborious nature of the manual labelling task, coupled with the huge size of the data (multi-GB files containing billions of pixels), means this process is often a bottleneck in imaging workflows. In recent years, deep learning has brought models capable of fast and accurate interpretation of image data into the toolbox available to scientists. These models are often trained on large image datasets that have been annotated at great expense. In many cases, however, scientists working on novel samples and using new imaging techniques do not yet have access to large stores of annotated data. To overcome this issue, simple software tools that allow the scientific community to create segmentation models using relatively small amounts of training data are required. Volume Segmantics is a Python package that provides a command line interface (CLI) as well as an Application Programming Interface (API) for training 2-dimensional (2D) PyTorch (Paszke et al., 2019) deep learning models on small amounts of annotated 3D image data. The package also enables applying these models to new (often much larger) 3D datasets to speed up the process of semantic segmentation.
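The slice-wise strategy of applying a 2D model to a 3D volume can be sketched generically (this is an illustration of the concept, not the Volume Segmantics API; the threshold stands in for a trained model):

```python
import numpy as np

# Hedged sketch: apply a 2D per-slice model along one axis of a 3D volume and
# restack the per-slice predictions into a volumetric segmentation.
def segment_volume_slicewise(volume, predict_2d, axis=0):
    slices = [predict_2d(s) for s in np.moveaxis(volume, axis, 0)]
    return np.moveaxis(np.stack(slices), 0, axis)

# Stand-in for a trained 2D model: a simple intensity threshold.
threshold_model = lambda img: (img > 0.5).astype(np.uint8)

vol = np.random.default_rng(0).random((4, 16, 16))
seg = segment_volume_slicewise(vol, threshold_model, axis=0)
```

Training on 2D slices keeps memory requirements modest and lets a model learned from a small annotated sub-volume be swept across a much larger dataset.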
Oct 2022
Abstract: Recent developments in experimental microscopy techniques have led to improvements in the way we visualize various biological phenomena. Presently, state-of-the-art microscopy involves cryogenic sample preparation, 3D correlative microscopy and milling, followed by tilt series acquisition of biological volumes, leading to datasets with nanometer-scale information. While this workflow is technically possible, it is still challenging to collect, process, and analyze these large datasets, especially when the workflow includes correlative imaging and segmentation steps. In the Artificial Intelligence and Informatics (AI&I) group at The Rosalind Franklin Institute we are automating these workflow steps to solve computationally difficult and time-intensive problems by developing open-source software tools. Here, we present some notable examples.
Aug 2022
B24-Cryo Soft X-ray Tomography
I13-2-Diamond Manchester Imaging
Krios I-Titan Krios I at Diamond
Open Access
Abstract: As sample preparation and imaging techniques have expanded and improved to include a variety of options for larger sizes and numbers of samples, the bottleneck in volumetric imaging is now data analysis. Annotation and segmentation are both common, yet difficult, data analysis tasks which are required to bring meaning to volumetric data. The SuRVoS application has been updated and redesigned to provide access to both manual and machine learning-based segmentation and annotation techniques, including support for crowd-sourced data. Combining adjacent, similar voxels into supervoxels provides a mechanism for speeding up segmentation, both in the painting of annotation and by training a segmentation model on a small amount of annotation. The support for layers allows multiple datasets to be viewed and annotated together, which, for example, enables the use of correlative data (e.g. crowd-sourced annotations or secondary imaging techniques) to guide segmentation. The ability to work with larger data on high-performance servers with GPUs has been added through a client-server architecture; the PyTorch-based image processing and segmentation server is flexible and extensible, allowing the implementation of deep learning-based segmentation modules. The client side has been built around Napari, allowing integration of SuRVoS into an ecosystem for open-source image analysis, while the server side has been built with cloud computing and extensibility through plugins in mind. Together, these improvements to SuRVoS provide a platform for accelerating the annotation and segmentation of volumetric and correlative imaging data across modalities and scales.
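The supervoxel speed-up can be illustrated with a toy sketch (not SuRVoS internals; the supervoxel map and helper are assumptions for illustration): painting a label onto one supervoxel annotates every voxel it contains at once, so sparse annotation effort covers many voxels.

```python
import numpy as np

# Hedged sketch: spread a handful of painted supervoxel labels to every voxel
# belonging to those supervoxels.
def spread_annotation(supervoxels, painted):
    """supervoxels: int array mapping each voxel to a supervoxel ID.
    painted: dict {supervoxel_id: class_label}. Returns a label array."""
    out = np.zeros_like(supervoxels)
    for sv_id, label in painted.items():
        out[supervoxels == sv_id] = label
    return out

# Four toy "supervoxels" of 25 voxels each on a 10x10 grid
sv = np.repeat(np.arange(4), 25).reshape(10, 10)
ann = spread_annotation(sv, {0: 1, 3: 2})  # paint two of the four supervoxels
```

In a real workflow the supervoxel map would come from an over-segmentation of the image data, so each painted region also respects local image boundaries.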
Apr 2022
DIAD-Dual Imaging and Diffraction Beamline
Christina Reinhard, Michael Drakopoulos, Sharif I. Ahmed, Hans Deyhle, Andrew James, Christopher M. Charlesworth, Martin Burt, John Sutter, Steven Alexander, Peter Garland, Thomas Yates, Russell Marshall, Ben Kemp, Edmund Warrick, Armando Pueyos, Ben Bradnick, Maurizio Nagni, A. Douglas Winter, Jacob Filik, Mark Basham, Nicola Wadeson, Oliver N. F. King, Navid Aslani, Andrew J. Dent
Open Access
Abstract: The Dual Imaging and Diffraction (DIAD) beamline at Diamond Light Source is a new dual-beam instrument for full-field imaging/tomography and powder diffraction. This instrument provides the user community with the capability to dynamically image 2D and 3D complex structures and perform phase identification and/or strain mapping using micro-diffraction. The aim is to enable in situ and in operando experiments that require spatially correlated results from both techniques, by providing measurements from the same specimen location quasi-simultaneously. Using an unusual optical layout, DIAD has two independent beams originating from one source that operate in the medium energy range (7–38 keV) and are combined at one sample position. Here, either radiography or tomography can be performed using monochromatic or pink beam, with a 1.4 mm × 1.2 mm field of view and a feature resolution of 1.2 µm. Micro-diffraction is possible with a variable beam size between 13 µm × 4 µm and 50 µm × 50 µm. One key functionality of the beamline is image-guided diffraction, a setup in which the micro-diffraction beam can be scanned over the complete area of the imaging field of view. This moving beam setup enables the collection of location-specific information about the phase composition and/or strains at any given position within the image/tomography field of view. The dual beam design allows fast switching between imaging and diffraction mode without the need for complicated and time-consuming mode changes. Real-time selection of areas of interest for diffraction measurements as well as the simultaneous collection of both imaging and diffraction data of (irreversible) in situ and in operando experiments are possible.
Nov 2021
I13-2-Diamond Manchester Imaging
W. M. Tun, G. Poologasundarampillai, H. Bischof, G. Nye, O. N. F. King, M. Basham, Y. Tokudome, R. M. Lewis, E. D. Johnstone, P. Brownbill, M. Darrow, I. L. Chernyavsky
Diamond Proposal Number(s): [23941, 22562]
Open Access
Abstract: Multi-scale structural assessment of biological soft tissue is challenging but essential to gain insight into structure–function relationships of tissue/organ. Using the human placenta as an example, this study brings together sophisticated sample preparation protocols, advanced imaging and robust, validated machine-learning segmentation techniques to provide the first massively multi-scale and multi-domain information that enables detailed morphological and functional analyses of both maternal and fetal placental domains. Finally, we quantify the scale-dependent error in morphological metrics of heterogeneous placental tissue, estimating the minimal tissue scale needed in extracting meaningful biological data. The developed protocol is beneficial for high-throughput investigation of structure–function relationships in both normal and diseased placentas, allowing us to optimize therapeutic approaches for pathological pregnancies. In addition, the methodology presented is applicable in the characterization of tissue architecture and physiological behaviours of other complex organs with similarity to the placenta, where an exchange barrier possesses circulating vascular and avascular fluid spaces.
Jun 2021
I04-1-Macromolecular Crystallography (fixed wavelength)
Akane Kawamura, Martin Münzel, Tatsuya Kojima, Clarence Yapp, Bhaskar Bhushan, Yuki Goto, Anthony Tumber, Takayuki Katoh, Oliver N. F. King, Toby Passioura, Louise J. Walport, Stephanie B. Hatch, Sarah Madden, Susanne Müller, Paul E. Brennan, Rasheduzzaman Chowdhury, Richard J. Hopkinson, Hiroaki Suga, Christopher J. Schofield
Diamond Proposal Number(s): [1230, 9306]
Open Access
Abstract: The JmjC histone demethylases (KDMs) are linked to tumour cell proliferation and are current cancer targets; however, very few highly selective inhibitors for these are available. Here we report cyclic peptide inhibitors of KDM4A-C with selectivity over other KDMs/2OG oxygenases, including the closely related KDM4D/E isoforms. Crystal structures and biochemical analyses of one of the inhibitors (CP2) with KDM4A reveal that CP2 binds differently to, but competes with, histone substrates in the active site. Substitution of the active site binding arginine of CP2 to N-ɛ-trimethyl-lysine or methylated arginine results in cyclic peptide substrates, indicating that KDM4s may act on non-histone substrates. Targeted modifications to CP2 based on crystallographic and mass spectrometry analyses result in variants with greater proteolytic robustness. Peptide dosing in cells manifests KDM4A target stabilization. Although further development is required to optimize cellular activity, the results reveal the feasibility of highly selective non-metal chelating, substrate-competitive inhibitors of the JmjC KDMs.
Apr 2017