I've just started to use KNIME and have been learning it with the given examples. At the moment I am working on a workflow to classify metal particles. First I built a simple workflow using the Random Forest Learner/Predictor. Now I am trying to get better results with DL4J. To start, I just adapted the AlexNet example to classify the metal particles. It works, but I want to improve the accuracy of the model. There are some parameters I am not sure about, especially which features to adjust within the "Feature Calculator (Beta)". Is there any chance you can help me adjust my nodes to improve my workflow? I tried to attach the pictures and the workflow, but both are too big. Can I send you those files as a PM? Please advise.
We have an SFTP server for that purpose; I have sent you the details as a PM.
I've uploaded the files. You can find them under the following link: https://www.dropbox.com/sh/2r3lj83tmyxc8kb/AAA5z56hnN6HH9AWxSWSgw8ca?dl=0. In the meantime I am trying to improve the preprocessing of my workflow and have changed some parameters. I could reach approx. 38% error with the AlexNet workflow and approx. 26% error with a simple alternative random forest workflow.
I think that my workflow has two major issues. First of all, my database isn't big enough... but I'm working on extending it. It takes a lot of time to simulate processes to get new images. The second issue, in my opinion, is the poor preprocessing. If you have a closer look at the images you can see that there are sometimes more particles on the image than just the main metal particle. I think this might influence the feature calculator.
I tried to preprocess the images in several ways... First I started to use the Global Thresholder + Labeling Filter to get rid of the background and smaller pixel clusters. The problem is that there are sometimes bigger particles than the main particle... Moreover, the thresholder node excludes some pixels from the main particle and declares them as background. I could use the Interactive Annotator node to segment it myself, but that would take ages and I think that's not really the point. Could you help me, please?
I have another question... Why do I get multiplied images as output after the Feature Calculator (Beta)?
I am not sure what you mean by “multiplied images”; the node should either output the calculated features appended to the input images, or just the calculated features. Can you send us a screenshot that visualizes your question?
I tried to visualize it and attached some screenshots.
It looks like you just need to change some settings to get your desired result. You can modify the behaviour of the node, so that it only returns the calculated features. To do so, change the Column Creation Mode setting from Append to New Table.
Also, you can always use a Column Filter node to change the columns in a KNIME table.
I attached a screenshot highlighting the option: the pink underline marks the location of the Column Creation Mode menu and the blue arrow the dropdown menu where you can change the setting.
Thanks for your answer, but I think you misunderstood me. The appended images were on purpose... I meant that the rows are getting multiplied (screenshot attached). I could fix the problem by using the Resizer node, but I don't understand why...
If you look at the rows closely, you see that they are not exactly the same: the name of the images differs in the last digit. This indicates that the features were calculated for each dimension of the image separately.
You are probably operating on an RGB image, which means one color image actually consists of 3 gray-value images, one for each color channel. Many features are only valid on certain dimension configurations. This is why the node allows you to select the dimensions you want to calculate your features on.
To continue, you can either decide to forfeit the color information (e.g. by combining the channels via the Projector node), average the features of the color channels together after the features have been calculated (you could use the GroupBy node for that), or handle each channel separately.
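As an aside, the channel combination the Projector node performs amounts to a simple projection along the channel axis. A minimal NumPy sketch of that idea, assuming an H×W×3 RGB array (the random image here is just a stand-in):

```python
import numpy as np

# Stand-in for an RGB image: height x width x 3 channels
rgb = np.random.rand(64, 64, 3)

# Combine the channels by averaging along the channel axis,
# a rough equivalent of a mean projection over the channel dimension
gray = rgb.mean(axis=2)

print(gray.shape)  # (64, 64)
```

After this projection there is only one gray-value plane per image, so downstream feature calculation produces one row per image instead of one per channel.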
After a while I am a step further... I think that I am going to classify the particles based only on a segmentation:
- Global Threshold (IsoData or Yen) --> Find Edges --> Fill Holes --> Connected Component Analysis --> Labeling Filter --> ?
The problem is that there are more segments than I need... Is there a node to filter out the biggest particle and/or identify the middle particle? I think that would improve my results.
What do you think about this method? Any other advice?
Here is the link with the workflow and images: https://www.dropbox.com/sh/2r3lj83tmyxc8kb/AAA5z56hnN6HH9AWxSWSgw8ca?dl=0
The segmentation approach can lead to good results. If you want to get some ideas on the processing steps involved, you can take a look at the following workflow:
To be able to filter the segments better, you can calculate features of each segment, such as size in pixels, diameter, etc.
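As a rough illustration outside of KNIME, the "threshold, fill holes, label components, keep the biggest segment" idea could be sketched like this in Python with SciPy (the toy mask below is a made-up stand-in for your thresholded images):

```python
import numpy as np
from scipy import ndimage

# Toy binary mask with two blobs: one large "main" particle
# and one small spurious particle
mask = np.zeros((20, 20), dtype=bool)
mask[4:14, 4:14] = True    # large blob, 100 px
mask[16:18, 16:18] = True  # small blob, 4 px

# Fill holes, then label the connected components
filled = ndimage.binary_fill_holes(mask)
labels, n = ndimage.label(filled)

# Count pixels per label and keep only the largest component
# (label 0 is background, so we zero its count out first)
sizes = np.bincount(labels.ravel())
sizes[0] = 0
largest = labels == sizes.argmax()

print(n, largest.sum())  # 2 components; the largest has 100 pixels
```

Keeping the component with the maximum pixel count is exactly the "filter the biggest particle" step; for the "middle" particle you would instead compare each segment's centroid to the image center.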
Good afternoon Gab,
Thank you for your response. First of all, I started to use the Segment Features node to calculate some features. Then I read that the Feature Calculator (Beta) should replace the three feature calculator nodes (Image Segment Features, Segment Features, and the deprecated Feature Calculator). So I calculated the features on the unchanged image and on the segments... So far so good, but I do not understand exactly how the results are composed in comparison to the Segment Features node, because I am calculating features on an image and a segment at the same time.
Moreover, I tried to filter the segments with the GroupBy node by the sum of pixels. I get the segment with the maximum number of pixels, but the other features are aggregated by the mean. I tried to use the Joiner node to match the segment with the maximum number of pixels to its dedicated features, but it doesn't work like it should. I think it is due to the names being changed during the process. Is there an easier way to solve that problem?
Thank you for your help!
The Segment Features node calculates features on the shape of segments; it does not consider pixel values at all. The Feature Calculator node can calculate features in 3 different ways:
Image: Features are calculated based on all pixels in the image
Labeling: Features are calculated per segment in the labeling
Image + Labeling: Features are calculated on the pixel values of the image, restricted to the regions described by the labeling
The following example workflow shows the applications: https://www.knime.com/nodeguide/community/image-processing/tutorials/node-tutorials/feature-calculation
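Outside of KNIME, the difference between the three modes can be illustrated with a small NumPy sketch; the image and labeling values below are made up purely for demonstration:

```python
import numpy as np

# Toy image and a matching labeling (0 = background, 1 and 2 = segments)
img = np.array([[10, 10, 50],
                [10, 10, 50],
                [90, 90, 50]], dtype=float)
labels = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [0, 0, 2]])

# "Image" mode: one feature value computed over all pixels of the image
print(img.mean())

# "Labeling" mode: per-segment shape features, e.g. pixel count per segment
for lab in (1, 2):
    print(lab, (labels == lab).sum())

# "Image + Labeling" mode: pixel-value features restricted to each segment
for lab in (1, 2):
    print(lab, img[labels == lab].mean())
```

So "Image" gives one row per image, while the labeling-based modes give one row per segment, which is why combining them in one node produces a mixed result table.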
To make your life easier when joining images, you should add a common property to all rows that originate from the same image. An easy way to do this is to add an Image Properties node and select the Name feature; this will add the same image name to all segments, allowing for easy grouping and joining later on.
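For illustration, the same grouping logic can be written in pandas (the table, image names, and column names below are all made up): selecting, per image, the entire row of the largest segment keeps that segment's other features intact, instead of averaging them the way a plain mean aggregation would.

```python
import pandas as pd

# Toy segment feature table: several segments per image, with a common
# image-name column like the one the Image Properties node would add
df = pd.DataFrame({
    "Image Name": ["img_0", "img_0", "img_1", "img_1"],
    "Num Pixels": [120, 800, 640, 30],
    "Diameter":   [12.0, 31.0, 28.0, 6.0],
})

# Per image, find the index of the row with the most pixels,
# then pull out those whole rows
largest = df.loc[df.groupby("Image Name")["Num Pixels"].idxmax()]
print(largest)
```

Each resulting row is one real segment, so "Num Pixels" and "Diameter" stay consistent with each other, which is the behavior the GroupBy-then-Joiner attempt was trying to reproduce.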
Thank you very much... That's what I was looking for.