Unfortunately, it is hard to give advice just from looking at the workflow snippet you posted. Could you give some more information about the problem you are facing? In theory, you could use the same approach as in the semantic segmentation example, adapted to your number of classes. Of course, this is only possible if you have suitable training data available.
Hello @DaveK, sorry for the late reply.
I was trying to create a workflow that can draw frames around cars and people in aerial images. I developed the part of the workflow that I posted; it was supposed to be the segmentation part. I don’t know whether it is correct, and I don’t know how to link that part to the training part.
I also tried the “SEMANTIC SEGMENTATION” workflow, but I didn’t understand how to change the input images, because my images are not ready for training. How should I proceed? Should I create a table with labels for my images? If yes, how?
Sorry to ask you so many questions, but I really don’t know what else to do.
Your problem sounds more like an object detection task than a segmentation task. In object detection, the goal is to find the bounding boxes and class labels of the objects contained in an image (have a look at this GitHub repo for examples). This is in contrast to semantic segmentation, where you want to find a class label for each pixel. For object detection you would probably want to use a different approach than the segmentation method shown in the Semantic Segmentation workflow. A very well-known and successful method is Faster R-CNN; however, we currently do not have an example for it in KNIME, and it might also be a bit problematic to implement. It is probably easier to start with raw Python, for which there should be plenty of existing implementations of Faster R-CNN, e.g. the GitHub repo I mentioned earlier (this). Personally, I have already worked with an extension of Faster R-CNN called Mask R-CNN using this implementation. It is pretty straightforward to use.
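To make the distinction concrete, here is a minimal sketch (with made-up toy data, plain NumPy, no KNIME nodes) of what the two kinds of labels look like for one image:

```python
import numpy as np

# Hypothetical object-detection label for one image:
# one row per object, [x_min, y_min, x_max, y_max], plus a class id per object.
boxes = np.array([[ 30,  40, 120, 110],   # e.g. a car
                  [200,  50, 230, 140]])  # e.g. a person
classes = np.array([0, 1])                # 0 = "car", 1 = "person"

# Hypothetical semantic-segmentation label for the same 256x256 image:
# one class id per pixel (here 2 = background).
mask = np.full((256, 256), 2, dtype=np.uint8)
mask[40:110, 30:120] = 0   # pixels belonging to the car
mask[50:140, 200:230] = 1  # pixels belonging to the person

print(boxes.shape)   # one row per object
print(mask.shape)    # one label per pixel
```

So a detection model predicts a variable-length list of boxes per image, while a segmentation model predicts an array with the same spatial size as the image.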
However, before you start doing this you need a suitable dataset first. If you want to do object detection you need to have bounding box annotations for each of your images.
Do you have any annotations for your images? If not, I don’t know whether it is feasible to create them manually in your case; this depends on the number of images available and on whether you could use a similar dataset for training.
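If you end up annotating images yourself, a common minimal format is a table with one row per object. The file names, column names, and values below are hypothetical, just to show the shape of such an annotation table and how you would group it per image:

```python
import csv
import io

# Hypothetical annotation table: one row per object,
# several rows may refer to the same image.
annotation_csv = """image,x_min,y_min,x_max,y_max,label
aerial_001.png,30,40,120,110,car
aerial_001.png,200,50,230,140,person
aerial_002.png,15,60,90,130,car
"""

rows = list(csv.DictReader(io.StringIO(annotation_csv)))

# Group the boxes by image so each image maps to its list of objects.
by_image = {}
for r in rows:
    box = [int(r["x_min"]), int(r["y_min"]), int(r["x_max"]), int(r["y_max"])]
    by_image.setdefault(r["image"], []).append((box, r["label"]))

print(len(by_image))                  # number of annotated images
print(by_image["aerial_001.png"])    # all objects in the first image
```

Most detection implementations expect something equivalent to this per-image grouping, even if their concrete file format (COCO JSON, Pascal VOC XML, ...) differs.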
Also, depending on the complexity of your images, you could maybe even use a more classical image analysis approach that does not use deep learning.
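As a sketch of what such a classical approach could look like: if the objects stand out clearly from the background, a simple threshold followed by connected-component labelling already yields candidate bounding boxes. The toy image below is made up; in a real workflow you would load your aerial image instead.

```python
import numpy as np
from scipy import ndimage

# Toy "aerial image": dark background with two bright blobs
# standing in for vehicles (purely synthetic example data).
img = np.zeros((100, 100))
img[10:20, 10:30] = 1.0   # blob 1
img[60:75, 50:65] = 1.0   # blob 2

# Classical pipeline: threshold, then connected-component labelling.
binary = img > 0.5
labels, n_objects = ndimage.label(binary)

# Bounding box (as slices) of each detected component.
slices = ndimage.find_objects(labels)
print(n_objects)
for y, x in slices:
    print((x.start, y.start, x.stop, y.stop))  # (x_min, y_min, x_max, y_max)
```

Whether this works on real aerial images depends heavily on contrast and clutter; it is only a starting point, not a substitute for a trained detector.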
I think that the best option for me is to avoid deep learning because it would be too difficult to complete the workflow.
I saw the “car counting” workflow, which should work without deep learning, but it doesn’t fit my project. How can I implement my workflow properly? Could I upload it here?
I don’t know what your dataset looks like, so I can’t give you specific advice. I’m happy to help if you have more specific questions, but you will have to do your own research to figure out the best general approach for your project.
You can upload your workflow here and maybe I can point you in a possible direction. Make sure to include some example images if possible.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.