Deep learning based 3D-segmentation of dendritic spines recorded with two-photon in vivo imaging
Published in: 6th International BonnBrain Conference, Bonn, Germany, 2023
Type: Poster Presentation
Citation
Fabrizio Musacchio, Pragya Mishra, Pranjal Dhole, Shekoufeh Gorgi Zadeh, Sophie Crux, Felix Nebeling, Stefanie Poll, Manuel Mittag, Falko Fuhrmann, Eleonora Ambrad, Andrea Baral, Julia Steffen, Miguel Fernandes, Thomas Schultz, and Martin Fuhrmann, "Deep learning based 3D-segmentation of dendritic spines recorded with two-photon in vivo imaging" (2023). Poster, 6th International BonnBrain Conference, Bonn, Germany. https://bonnbrain.de/
Abstract
The automatic detection of dendritic spines in 3D remains a challenging and not fully resolved problem for two-photon in vivo imaging. The emergence of convolutional neural networks (CNNs) such as the U-Net [1] has enabled deep learning based segmentation pipelines for biomedical images in general and for dendritic spines in particular (e.g., [2,3]). While these pipelines work well for in vitro confocal image data, they yield lower prediction accuracy on volumetric in vivo two-photon images, which have a lower signal-to-noise ratio and larger motion artifacts. Researchers in this field therefore still tend to analyze dendritic spines manually, which is time-consuming and prone to human bias. We developed a pipeline for multi-class semantic image segmentation based on a fully convolutional neural network that specifically targets 3D two-photon in vivo image data. Choosing the U-Net as the underlying network architecture keeps the number of required labeled training images small (<50). The U-Net processes 2D images to reduce computation time, and a post-hoc 3D connectivity analysis merges the classified spine pixels and reconstructs the 3D morphology. Our pipeline segments spines from their associated dendrites with 85% accuracy and enables further analysis of, e.g., spine morphology and spine density.
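To illustrate the idea of the post-hoc 3D connectivity step described above, here is a minimal sketch: a 2D network classifies each image plane into background, dendrite, and spine, and the spine pixels are then merged across planes via 3D connected components. The class indices, the `segment_slice` stand-in, and the minimum component size are illustrative assumptions, not the actual implementation from the poster.

```python
import numpy as np
from scipy import ndimage

# assumed class labels for the multi-class segmentation
BACKGROUND, DENDRITE, SPINE = 0, 1, 2


def segment_slice(image_2d: np.ndarray) -> np.ndarray:
    """Stand-in for the trained 2D U-Net: returns a per-pixel class map.

    In the real pipeline this would be the network's argmax prediction;
    here we simply threshold so the example runs end to end.
    """
    labels = np.full(image_2d.shape, BACKGROUND, dtype=np.uint8)
    labels[image_2d > 0.5] = DENDRITE
    labels[image_2d > 0.8] = SPINE
    return labels


def reconstruct_spines(stack: np.ndarray, min_voxels: int = 5):
    """Run slice-wise segmentation, then merge spine pixels in 3D."""
    # 1) classify every z-plane independently with the 2D network
    class_map = np.stack([segment_slice(plane) for plane in stack])

    # 2) 26-connectivity links spine pixels across neighboring planes
    structure = ndimage.generate_binary_structure(3, 3)
    spine_labels, _ = ndimage.label(class_map == SPINE, structure=structure)

    # 3) drop tiny components that are likely noise
    sizes = np.bincount(spine_labels.ravel())
    keep = np.flatnonzero(sizes >= min_voxels)
    keep = keep[keep != 0]  # label 0 is background
    spine_labels *= np.isin(spine_labels, keep)

    return class_map, spine_labels, len(keep)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_stack = rng.random((8, 64, 64))  # stand-in (z, y, x) volume
    _, spine_labels, n_spines = reconstruct_spines(demo_stack)
    print(f"reconstructed {n_spines} candidate 3D spine components")
```

With the per-spine label volume in hand, quantities such as spine volume (voxel counts) or spine density (components per dendrite length) can be derived in a straightforward follow-up step.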