I'm trying to import NetCDF files into KNIME. I found some R code that does this very well, implemented it in an R Snippet node, and it worked. Unfortunately the NetCDF file contains several tables with different dimensions, and I don't know how to export them out of the R snippet. Is there a workaround for this problem?
Structure of NetCDF:
variables 1-50 -> dim: 50
variables 50-100 -> dim: 51
library(ncdf4)  # provides nc_open() and ncvar_get()

nc <- nc_open("file.nc", write = FALSE, readunlim = FALSE, verbose = FALSE)
d <- nc$nvars          # number of variables in the file
CVM <- list()
for (i in 1:d) {
  v <- nc$var[[i]]              # metadata for the i-th variable
  CVM[[i]] <- ncvar_get(nc, v)  # read its data and append to the list
  names(CVM)[i] <- v$name
}
nc_close(nc)
R <- CVM   # snippet output: a list of tables with differing dimensions
Thx for any help
Interesting question! I haven't worked with netcdf data much before but I think I understand the problem. There are a couple of ways that I would try to get this to work. The central tension is that your data is not quite a flat table (x-rows, y-columns) and we need something like that in KNIME.
Open an R-snippet node and export the meta information about the netcdf tables (table names and dimensions) into KNIME. Then use a Table Row To Variable Loop Start node to begin reading tables from the nc file one at a time. Here is where things diverge in order to "flatten" the table.
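The first R-snippet node could look something like this (a sketch, assuming the ncdf4 package is available to KNIME's R and that the file is called "file.nc" — adjust both to your setup; in recent KNIME versions the snippet's output variable is knime.out, in older ones it was R):

```r
library(ncdf4)  # assumed available in KNIME's R installation

nc <- nc_open("file.nc", readunlim = FALSE)

# One row per variable: its name plus its dimension sizes collapsed
# to a single string, so the result is a flat KNIME table.
knime.out <- data.frame(
  name = sapply(nc$var, function(v) v$name),
  dims = sapply(nc$var, function(v) paste(v$size, collapse = "x")),
  stringsAsFactors = FALSE
)
nc_close(nc)
```

Each row of this table then becomes one iteration's flow variables, which the R snippet inside the loop can use to decide which variable to read.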
Adapt each table in the R-snippet node to have the same number of columns. How you do this depends on your application, but everything needs to have the same number of columns and the same column names when the data is collected in the loop end node. The advantage of this approach is that it allows you to "see" all of your data in the same context (aka table), which may be important depending on your application.
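The padding step inside the loop body could be sketched like this in base R (pad_to is a hypothetical helper name; the target width would come from the widest table in your file):

```r
# Pad a data.frame on the right with NA columns so every loop iteration
# emits the same width, with uniform names V1, V2, ... for the Loop End.
pad_to <- function(df, width) {
  for (j in seq_len(width - ncol(df))) {
    df[[ncol(df) + 1]] <- NA
  }
  names(df) <- paste0("V", seq_len(width))
  df
}

a <- as.data.frame(matrix(1:10, ncol = 5))  # a "dim 50"-style narrow table
b <- as.data.frame(matrix(1:12, ncol = 6))  # a "dim 51"-style wider table
a <- pad_to(a, 6)
b <- pad_to(b, 6)
# a and b now both have columns V1..V6 and can be stacked by the Loop End
```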
Do your analysis on a per-table basis inside the loop and aggregate (close the loop) with your aggregations rather than the raw data. The advantage here is that this approach is likely simpler to implement and results in smaller tables. It has the drawback, however, of not letting you compare across different tables in your analysis.
I'm looking forward to seeing if we can find a solution, as netcdf seems like a pretty cool data format and I think it will continue to be relevant in the future.