I’m fairly new to KNIME, and this might sound like a basic question, but I need “live streaming” of data into my analytics platform on the “edge”. So, I have to install my analytics platform on an edge server to begin with (this part is fine), and then make the platform read continuously from an OPC server and feed the data into my “pre-trained” neural network.
Can you share an example workflow, or some guidance on how to do this?
I shall be ever so grateful.
thanks a lot
Hello @Puya18 -
I confess that I am not very familiar with OPC servers. Might you be able to connect to it using JDBC, or using a REST interface? If so, KNIME should be able to do that in both cases.
May I ask how you’re defining the neural network? Is it in Keras, or can you export it to PMML, or is it in a completely different format? KNIME has a few options that may work here as well - I’m just trying to narrow down what you’re working with.
First, on the OPC: it’s basically server/client communication based on TCP/IP and Microsoft’s DCOM… long story! The bottom line is that you get a live value reading of your variables once per second (or whatever your clock rate is). So, if you’re monitoring 5 variables, you get a 1x5 array of, say, floating-point values each second.
What I’d like to make happen is:
KNIME reads my live 1x5 arrays and stacks them on top of each other for 10 seconds, ending up with a 10x5 matrix. This 10x5 matrix can then be used as the “input” to my MLP neural network predictor.
I intend to use the MLP or PMML predictors from KNIME’s main node repository.
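For what it’s worth, the buffering step described above can be sketched in a few lines of Python outside KNIME. This is a minimal sketch, not a KNIME workflow: the OPC read itself is stubbed out with simulated values (in practice each reading would come from your OPC client or a REST call, once per second), and the window/variable sizes are just the numbers from this thread.

```python
from collections import deque

WINDOW = 10   # seconds of readings to stack (rows of the matrix)
N_VARS = 5    # variables monitored on the OPC server (columns)

# A deque with maxlen automatically drops the oldest reading,
# giving a sliding 10-second window.
buffer = deque(maxlen=WINDOW)

def push_reading(reading):
    """Append one 1x5 reading; return the 10x5 window once full, else None."""
    buffer.append(list(reading))
    if len(buffer) == WINDOW:
        return [row[:] for row in buffer]  # 10x5 matrix for the predictor
    return None

# Simulated feed standing in for the once-per-second OPC poll.
matrix = None
for t in range(12):
    fake_reading = [t + v * 0.1 for v in range(N_VARS)]
    matrix = push_reading(fake_reading)
    # Once `matrix` is not None, it could be handed to the MLP/PMML predictor.
```

After the 10th reading, every subsequent second yields a fresh 10x5 matrix covering the most recent 10 seconds, which matches the input shape described above.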
Any assistance will be highly appreciated.
Hello @Puya18 -
I asked one of our support engineers about this, as I don’t think there’s a way to do this completely within KNIME (not yet). He suggested possibly using Spring Cloud Data Flow to deploy a model built by KNIME: SCDF has a PMML reader/applier that can consume a PMML model KNIME builds, so you might use SCDF as the framework to deploy it.
More here: https://cloud.spring.io/spring-cloud-dataflow/