B24-Cryo Soft X-ray Tomography
I13-2-Diamond Manchester Imaging
Krios I-Titan Krios I at Diamond
|
Open Access
Abstract: As sample preparation and imaging techniques have expanded and improved to accommodate larger and more numerous samples, the bottleneck in volumetric imaging is now data analysis. Annotation and segmentation are both common, yet difficult, data analysis tasks which are required to bring meaning to volumetric data. The SuRVoS application has been updated and redesigned to provide access to both manual and machine learning-based segmentation and annotation techniques, including support for crowd-sourced data. Combining adjacent, similar voxels into supervoxels provides a mechanism for speeding up segmentation, both in the painting of annotation and by training a segmentation model on a small amount of annotation. Support for layers allows multiple datasets to be viewed and annotated together, which, for example, enables the use of correlative data (e.g. crowd-sourced annotations or secondary imaging techniques) to guide segmentation. The ability to work with larger data on high-performance servers with GPUs has been added through a client-server architecture; the PyTorch-based image processing and segmentation server is flexible and extensible, and allows the implementation of deep learning-based segmentation modules. The client side has been built around napari, allowing integration of SuRVoS into an ecosystem for open-source image analysis, while the server side has been built with cloud computing and extensibility through plugins in mind. Together these improvements to SuRVoS provide a platform for accelerating the annotation and segmentation of volumetric and correlative imaging data across modalities and scales.
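The supervoxel-based annotation speed-up described above can be illustrated with a toy sketch. Here supervoxels are approximated by regular blocks (SuRVoS itself uses boundary-following supervoxels, not blocks), and painting any voxel labels its whole supervoxel at once; all function names are illustrative, not SuRVoS's API.

```python
import numpy as np

def block_supervoxels(shape, block=4):
    """Assign every voxel a supervoxel id using regular blocks.
    A crude stand-in for boundary-following supervoxels."""
    zz, yy, xx = np.indices(shape)
    n_blocks = tuple(-(-s // block) for s in shape)  # ceil division
    return np.ravel_multi_index((zz // block, yy // block, xx // block), n_blocks)

def paint(labels, supervoxels, voxel, cls):
    """Label the entire supervoxel containing `voxel` with class `cls`,
    so one brush stroke annotates many adjacent, similar voxels."""
    labels[supervoxels == supervoxels[voxel]] = cls
    return labels

# One painted voxel annotates a whole 4x4x4 supervoxel (64 voxels).
sv = block_supervoxels((8, 8, 8), block=4)
labels = paint(np.zeros((8, 8, 8), dtype=int), sv, (1, 2, 3), cls=2)
```

The point of the design is that annotation effort scales with the number of supervoxels, not the number of voxels, which is what makes both painting and training-from-sparse-annotation fast.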
|
Apr 2022
|
I13-2-Diamond Manchester Imaging
|
Diamond Proposal Number(s):
[9396]
Open Access
Abstract: Recently, several convolutional neural networks have been proposed not only for 2D images, but also for 3D and 4D volume segmentation. Nevertheless, due to the large size of the latter, acquiring a sufficient amount of training annotations is much more strenuous than for 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine 4D segmentation labels made by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information, but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. In our experiments we test the models qualitatively, quantitatively and behaviourally, using prespecified segmentations. We demonstrate them in the domain of time-series tomograms, which are typically undersampled to allow more frequent capture and therefore pose a particularly challenging problem. Finally, our dataset and code are publicly available.
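The temporal-refinement idea can be sketched for a single voxel as a plain Viterbi pass over its per-time-point class probabilities, with a transition matrix that favours keeping the same label. The `stay` probability and transition structure here are illustrative assumptions, not the paper's fitted HMM parameters.

```python
import numpy as np

def viterbi_refine(probs, stay=0.9):
    """Refine one voxel's per-time-point class probabilities (T, K)
    with a simple HMM favouring temporal coherence.
    `stay` is an assumed self-transition probability."""
    T, K = probs.shape
    trans = np.full((K, K), (1.0 - stay) / (K - 1))
    np.fill_diagonal(trans, stay)
    log_p = np.log(probs + 1e-12)
    log_t = np.log(trans)
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = log_p[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_t   # (prev class, current class)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_p[t]
    path = np.zeros(T, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):             # trace back the best path
        path[t] = back[t + 1, path[t + 1]]
    return path

# A flickering voxel: frame 2 weakly disagrees with its neighbours.
probs = np.array([[0.9, 0.1], [0.9, 0.1], [0.45, 0.55], [0.9, 0.1], [0.9, 0.1]])
# Frame-by-frame argmax flickers to class 1 at t=2; the HMM keeps class 0.
```

This shows how inter-volume information plus the network's own confidence can suppress temporally incoherent flickers that per-volume segmentation leaves behind.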
|
Dec 2021
|
DIAD-Dual Imaging and Diffraction Beamline
|
Christina Reinhard, Michael Drakopoulos, Sharif I. Ahmed, Hans Deyhle, Andrew James, Christopher M. Charlesworth, Martin Burt, John Sutter, Steven Alexander, Peter Garland, Thomas Yates, Russell Marshall, Ben Kemp, Edmund Warrick, Armando Pueyos, Ben Bradnick, Maurizio Nagni, A. Douglas Winter, Jacob Filik, Mark Basham, Nicola Wadeson, Oliver N. F. King, Navid Aslani, Andrew J. Dent
Open Access
Abstract: The Dual Imaging and Diffraction (DIAD) beamline at Diamond Light Source is a new dual-beam instrument for full-field imaging/tomography and powder diffraction. This instrument provides the user community with the capability to dynamically image 2D and 3D complex structures and perform phase identification and/or strain mapping using micro-diffraction. The aim is to enable in situ and in operando experiments that require spatially correlated results from both techniques, by providing measurements from the same specimen location quasi-simultaneously. Using an unusual optical layout, DIAD has two independent beams originating from one source that operate in the medium energy range (7–38 keV) and are combined at one sample position. Here, either radiography or tomography can be performed using monochromatic or pink beam, with a 1.4 mm × 1.2 mm field of view and a feature resolution of 1.2 µm. Micro-diffraction is possible with a variable beam size between 13 µm × 4 µm and 50 µm × 50 µm. One key functionality of the beamline is image-guided diffraction, a setup in which the micro-diffraction beam can be scanned over the complete area of the imaging field of view. This moving-beam setup enables the collection of location-specific information about the phase composition and/or strains at any given position within the image/tomography field of view. The dual-beam design allows fast switching between imaging and diffraction mode without the need for complicated and time-consuming mode changes. Real-time selection of areas of interest for diffraction measurements, as well as the simultaneous collection of both imaging and diffraction data during (irreversible) in situ and in operando experiments, is possible.
|
Nov 2021
|
Abstract: The challenge of processing big data effectively and efficiently is crucial for many synchrotron facilities, which can collect up to several petabytes of data annually. At Diamond Light Source, tomographic data are reconstructed with the Python-based software Savu, which utilises Message Passing Interface protocols to efficiently reconstruct parallel-beam geometry data. When projection data are undersampled and/or noisy, regularised iterative reconstruction methods can provide better reconstruction quality than direct methods. The iterative methods, however, require significantly more computational resources than direct methods, and their usability is impeded by the choice of additional parameters. Notably, the use of 2D regularised iterative methods for reconstruction of 3D objects results in inconsistent (saw-shaped) features in the orientation perpendicular to the slicing plane. Because of large data sizes, fully 3D regularised model-based iterative reconstruction is problematic or impossible in practice, owing to high memory requirements and long processing times. In this work, we demonstrate a practical solution which enables an approximated fully 3D regularised iterative reconstruction running in parallel on a computing cluster. This modification delivers image quality equivalent to exact 3D reconstruction with high computational efficiency.
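The slab-parallel approximation described above can be illustrated with a toy numpy/scipy sketch, in which a Gaussian filter stands in for one regularised reconstruction step (the real pipeline runs iterative methods inside Savu, not a Gaussian filter); slab and halo sizes are illustrative assumptions. With a halo at least as wide as the filter footprint, stitching the cropped slabs reproduces the full-volume result:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_by_slabs(vol, slab=8, halo=6, sigma=1.0):
    """Process a volume in overlapping z-slabs and stitch the crops,
    mimicking cluster-parallel processing of one large volume.
    The Gaussian filter is a stand-in for a regularised 3D step."""
    nz = vol.shape[0]
    out = np.empty(vol.shape, dtype=float)
    for z0 in range(0, nz, slab):
        z1 = min(z0 + slab, nz)
        # Extend each slab by a halo so the 3D operator sees its
        # neighbours; the halo is cropped away before stitching.
        a, b = max(0, z0 - halo), min(nz, z1 + halo)
        chunk = gaussian_filter(vol[a:b].astype(float), sigma)
        out[z0:z1] = chunk[z0 - a : z0 - a + (z1 - z0)]
    return out
```

Because each slab can be handled by a different cluster node, this recovers 3D-consistent results (no saw-shaped slicing artefacts) without ever holding the whole volume in one process's memory.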
|
Nov 2021
|
Open Access
Abstract: In cryo-electron tomography (cryo-ET) of biological samples, the quality of tomographic reconstructions can vary depending on the transmission electron microscope (TEM) instrument and data acquisition parameters. In this paper, we present Parakeet, a ‘digital twin’ software pipeline for the assessment of the impact of various TEM experiment parameters on the quality of three-dimensional tomographic reconstructions. The Parakeet digital twin is a digital model that can be used to optimize the performance and utilization of a physical instrument to enable in silico optimization of sample geometries, data acquisition schemes and instrument parameters. The digital twin performs virtual sample generation, TEM image simulation, and tilt series reconstruction and analysis within a convenient software framework. As well as being able to produce physically realistic simulated cryo-ET datasets to aid the development of tomographic reconstruction and subtomogram averaging programs, Parakeet aims to enable convenient assessment of the effects of different microscope parameters and data acquisition parameters on reconstruction quality. To illustrate the use of the software, we present the example of a quantitative analysis of missing wedge artefacts on simulated planar and cylindrical biological samples and discuss how data collection parameters can be modified for cylindrical samples where a full 180° tilt range might be measured.
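The missing wedge analysed in the paper can be modelled minimally in 2D by zeroing the Fourier components that a limited tilt range never samples. This toy sketch is emphatically not Parakeet's image-formation model, which simulates the full TEM optics; it only illustrates why a wider tilt range gives a more faithful reconstruction.

```python
import numpy as np

def missing_wedge_mask(shape, max_tilt_deg=60.0):
    """Boolean mask of 2D Fourier components sampled by a +/- max_tilt
    tilt series; the remainder forms the 'missing wedge'."""
    kz = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    ang = np.degrees(np.arctan2(np.abs(kz), np.abs(kx) + 1e-12))
    return ang <= max_tilt_deg

def apply_missing_wedge(img, max_tilt_deg=60.0):
    """Zero the unsampled wedge and return the degraded image."""
    f = np.fft.fft2(img) * missing_wedge_mask(img.shape, max_tilt_deg)
    return np.fft.ifft2(f).real
```

For cylindrical samples, where the full 180° tilt range can be measured, the wedge vanishes and the degradation disappears entirely, which is the scenario the paper quantifies.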
|
Oct 2021
|
I13-2-Diamond Manchester Imaging
|
W. M. Tun, G. Poologasundarampillai, H. Bischof, G. Nye, O. N. F. King, M. Basham, Y. Tokudome, R. M. Lewis, E. D. Johnstone, P. Brownbill, M. Darrow, I. L. Chernyavsky
Diamond Proposal Number(s):
[23941, 22562]
Open Access
Abstract: Multi-scale structural assessment of biological soft tissue is challenging but essential to gain insight into structure–function relationships of tissue/organ. Using the human placenta as an example, this study brings together sophisticated sample preparation protocols, advanced imaging and robust, validated machine-learning segmentation techniques to provide the first massively multi-scale and multi-domain information that enables detailed morphological and functional analyses of both maternal and fetal placental domains. Finally, we quantify the scale-dependent error in morphological metrics of heterogeneous placental tissue, estimating the minimal tissue scale needed in extracting meaningful biological data. The developed protocol is beneficial for high-throughput investigation of structure–function relationships in both normal and diseased placentas, allowing us to optimize therapeutic approaches for pathological pregnancies. In addition, the methodology presented is applicable in the characterization of tissue architecture and physiological behaviours of other complex organs with similarity to the placenta, where an exchange barrier possesses circulating vascular and avascular fluid spaces.
|
Jun 2021
|
I23-Long wavelength MX
|
Open Access
Abstract: In this paper a practical solution for the reconstruction and segmentation of low-contrast X-ray tomographic data of protein crystals from the long-wavelength macromolecular crystallography beamline I23 at Diamond Light Source is provided. The resulting segmented data will provide the path lengths through both diffracting and non-diffracting materials as a basis for analytical absorption corrections for X-ray diffraction data taken in the same sample environment ahead of the tomography experiment. X-ray tomography data from protein crystals can be difficult to analyse due to very low or absent contrast between the different materials: the crystal, the sample holder and the surrounding mother liquor. The proposed data processing pipeline consists of two major sequential operations: model-based iterative reconstruction to improve contrast and minimize the influence of noise and artefacts, followed by segmentation. The segmentation aims to partition the reconstructed data into four phases: the crystal, mother liquor, loop and vacuum. In this study three semi-automated segmentation methods are evaluated: Gaussian mixture models, geodesic distance thresholding and a novel morphological method, RegionGrow, implemented specifically for the task. The complete reconstruction-segmentation pipeline is integrated into the MPI-based data analysis and reconstruction framework Savu, which reduces computation time through parallelization across a computing cluster and makes the developed methods easily accessible.
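The Gaussian-mixture option among the segmentation methods above can be sketched with scikit-learn by clustering voxel intensities into phases. This toy version works on intensities alone and ignores spatial context, the geodesic distance thresholding method and RegionGrow; the function name is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_phase_labels(vol, n_phases=4, random_state=0):
    """Cluster voxel intensities of a reconstructed volume into
    `n_phases` classes (e.g. crystal, mother liquor, loop, vacuum)
    with a 1D Gaussian mixture over intensity."""
    x = vol.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_phases, random_state=random_state).fit(x)
    return gmm.predict(x).reshape(vol.shape)
```

In the low-contrast regime the paper targets, intensity histograms of the phases overlap heavily, which is why the model-based iterative reconstruction step comes first: it widens the gaps between the mixture components before clustering.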
|
May 2021
|
I13-2-Diamond Manchester Imaging
|
Diamond Proposal Number(s):
[9396]
Open Access
Abstract: Over recent years, many approaches have been proposed for the denoising or semantic segmentation of X-ray computed tomography (CT) scans. In most cases, high-quality CT reconstructions are used; however, such reconstructions are not always available. When the X-ray exposure time has to be limited, undersampled tomograms (in terms of their component projections) are attained. This low number of projections offers low-quality reconstructions that are difficult to segment. Here, we consider CT time-series (i.e. 4D data), where the limited time for capturing fast-occurring temporal events results in the time-series tomograms being necessarily undersampled. Fortunately, in these collections, it is common practice to obtain representative highly sampled tomograms before or after the time-critical portion of the experiment. In this paper, we propose an end-to-end network that can learn to denoise and segment the time-series’ undersampled CTs, by training with the earlier highly sampled representative CTs. Our single network can offer two desired outputs while only training once, with the denoised output improving the accuracy of the final segmentation. Our method is able to outperform state-of-the-art methods in the task of semantic segmentation and offer comparable results in regard to denoising. Additionally, we propose a knowledge transfer scheme using synthetic tomograms. This not only allows accurate segmentation and denoising using less real-world data, but also increases segmentation accuracy. Finally, we make our datasets, as well as the code, publicly available.
|
May 2021
|
Data acquisition
|
A. D. Parsons, S. Ahmed, M. Basham, D. Bond, B. Bradnick, M. Burt, T. Cobb, N. Dougan, M. Drakopoulos, F. Ferner, J. Filik, C. Forrester, L. Hudson, P. Joyce, B. Kaulich, A. Kavva, J. Kelly, J. Mudd, B. Nutter, P. Quinn, K. Ralphs, C. Reinhard, J. Shannon, M. Taylor, T. Trafford, X. Tran, E. Warrick, A. Wilson, A. D. Winter
Open Access
Abstract: We present a beamline analogue, capable of system prototyping, integrated development and testing, specifically designed to provide a facility for full scientific testing of instrument prototypes. With an identical backend to real beamline instruments, the P99 development rig has allowed increased confidence and troubleshooting ahead of final scientific commissioning. We present details of the software and hardware components of this environment and how these have been used to develop functionality for the new operational instruments. We present several high-impact examples of such integrated prototyping development, including the instrumentation for DIAD (integrated Dual Imaging And Diffraction) and the J08 (soft X-ray ptychography) beamline end station.
|
Oct 2019
|
I13-2-Diamond Manchester Imaging
|
Diamond Proposal Number(s):
[9396]
Open Access
Abstract: X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture, at sufficient time resolution, the events of interest in a time-series, compelling researchers to perform data collection with a low number of projections for each tomogram in order to achieve the desired 'frame rate'. To aid the analysis process, it is common practice to collect a representative tomogram with many projections before or after the time-critical portion of the experiment, without detrimentally affecting the time-series. For this paper these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using learned behaviour from a dataset containing a high number of projections, taken of the same sample at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as downstream it enables, for example, easier segmentation of the tomograms in areas of interest.
The method itself comprises a convolutional neural network which, through training, learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, the accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corruption from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
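For intuition, the interpolation baseline that UDNN is designed to outperform can be sketched as simple linear upsampling along the sinogram's θ-axis (rows = projection angles); the function name and upscaling factor are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_sinogram(sino, factor=4):
    """Interpolate extra projection angles along the theta-axis of a
    sinogram (axis 0 = angles, axis 1 = detector pixels). This is the
    plain interpolation baseline, not the learned UDNN mapping."""
    return zoom(sino, (factor, 1), order=1)  # linear interpolation
```

A 45-projection sinogram upscaled by 4 yields 180 angle rows; the learned network replaces this generic interpolation with behaviour learned from the highly sampled tomogram of the same sample, which is where its accuracy advantage comes from.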
|
May 2019
|