This workflow creates and trains a U-Net for segmenting cell images. The trained network is then used to predict the segmentation of unseen data.
Data:
The training data is a set of 30 sections from a serial section Transmission Electron Microscopy (ssTEM) data set of the Drosophila first instar larva ventral nerve cord (VNC). The microcube measures approximately 2 x 2 x 1.5 microns, with a resolution of 4 x 4 x 50 nm/pixel.
The corresponding binary labels are provided in an in-out fashion, i.e. white for the pixels of segmented objects and black for the rest of the pixels (which correspond mostly to membranes).
(Source: http://brainiac2.mit.edu/isbi_challenge/home)
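For readers who want to see what such a network looks like in code, here is a minimal U-Net sketch in Keras. It is not the exact network this workflow builds: the depth, filter counts, 128x128 input size, and loss choice are all illustrative assumptions.

```python
# A minimal U-Net sketch (tf.keras). Depth, filter counts, input size,
# and loss are assumptions for illustration, not the workflow's settings.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 1)):
    inputs = layers.Input(shape=input_shape)

    # Contracting path: convolve, keep the feature map for the skip, downsample
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck
    b = conv_block(p2, 64)

    # Expanding path: upsample, concatenate the skip connection, convolve
    u2 = layers.UpSampling2D(2)(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.UpSampling2D(2)(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)

    # 1x1 convolution to one sigmoid channel: per-pixel in/out probability,
    # matching the white/black "in-out" labels described above
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```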
Dear all,
I tried this workflow; however, it does not work in my hands. It gives me the following error:
ERROR Keras Network Learner 3:213 Execute failed: An error occurred while creating the Keras network from its layer specifications. Details:
'DataFrame' object has no attribute 'convert_objects'
Traceback (most recent call last):
File "<string>", line 5, in <module>
File "C:\Program Files\KNIME\plugins\org.knime.dl.python_4.4.1.v202108230956\py\DLPythonNetworkSpecToDataFrameConverter.py", line 97, in get_layer_data_specs_as_data_frames
input_specs = extractor.input_specs_to_data_frame()
File "C:\Program Files\KNIME\plugins\org.knime.dl.python_4.4.1.v202108230956\py\DLPythonNetworkSpecToDataFrameConverter.py", line 57, in input_specs_to_data_frame
return self.__layer_data_specs_to_data_frame(self._network_spec.input_specs)
File "C:\Program Files\KNIME\plugins\org.knime.dl.python_4.4.1.v202108230956\py\DLPythonNetworkSpecToDataFrameConverter.py", line 88, in __layer_data_specs_to_data_frame
specs_with_numeric_types = specs.convert_objects(convert_numeric=True)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5465, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'convert_objects'
When this issue has come up before, reverting to an older version of pandas in your Python environment generally solves it. See this thread for more details:
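For context on why the downgrade helps: the plugin calls DataFrame.convert_objects, which was deprecated and later removed from pandas, so a newer pandas in the environment KNIME points to triggers exactly this AttributeError. The snippet below only illustrates the API change on a made-up DataFrame; the supported fix for the KNIME node is still to pin an older pandas in that environment, not to patch the plugin code.

```python
# convert_objects(convert_numeric=True) no longer exists in newer pandas.
# A rough modern equivalent is to coerce each column with pd.to_numeric.
# The "dimension" column here is made-up example data.
import pandas as pd

specs = pd.DataFrame({"dimension": ["128", "128", "1"]})

# Old (removed): specs.convert_objects(convert_numeric=True)
# Rough equivalent: try to convert each column to numeric,
# leaving columns that cannot be converted untouched.
specs_with_numeric_types = specs.apply(pd.to_numeric, errors="ignore")
print(specs_with_numeric_types.dtypes)
```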
In the paper, they stated that "The ground truth boundary maps for the training images were created by one coauthor (AC) who manually segmented each neurite of the training volume by manually marking its borders on each 2D plane."
I hope that answers your question, but let me know if anything is still unclear.