Create/Delete folder in Azure DataLake Blob Storage

Hi,

I'm trying to move files in Azure Blob Storage from one folder to another, and I would like to recreate the same folder path under the destination folder. However, I get an error when I try to create the new folder through KNIME, even though the account key has all the permissions needed to create or delete objects.

Has anyone faced the same issue?

Here is the log:

2021-02-18 10:38:38,744 : WARN : KNIME-Worker-13-Transfer Files 0:607 : : PathCopier : Transfer Files : 0:607 : Something went wrong during the copying / moving process. See log for further details.
org.knime.ext.azure.blobstorage.filehandling.AzureUtils$WrappedBlobStorageException: Server encountered an internal error. Please try again after some time.
at org.knime.ext.azure.blobstorage.filehandling.AzureUtils.toIOE(AzureUtils.java:118)
at org.knime.ext.azure.blobstorage.filehandling.AzureUtils.toIOE(AzureUtils.java:136)
at org.knime.ext.azure.blobstorage.filehandling.fs.AzureBlobStorageFileSystemProvider.createDirectoryInternal(AzureBlobStorageFileSystemProvider.java:252)
at org.knime.ext.azure.blobstorage.filehandling.fs.AzureBlobStorageFileSystemProvider.createDirectoryInternal(AzureBlobStorageFileSystemProvider.java:1)
at org.knime.filehandling.core.connections.base.BaseFileSystemProvider.createDirectory(BaseFileSystemProvider.java:543)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at org.knime.filehandling.core.connections.FSFiles.createDirectories(FSFiles.java:165)
at org.knime.filehandling.utility.nodes.transfer.PathCopier.createDirectories(PathCopier.java:218)
at org.knime.filehandling.utility.nodes.transfer.TransferFilesNodeModel.copy(TransferFilesNodeModel.java:162)
at org.knime.filehandling.utility.nodes.transfer.TransferFilesNodeModel.execute(TransferFilesNodeModel.java:139)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:576)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1245)
at org.knime.core.node.Node.execute(Node.java:1025)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:558)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:334)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
Caused by: com.azure.storage.blob.models.BlobStorageException: Status code 500, "<?xml version="1.0" encoding="utf-8"?><Error><Code>InternalError</Code><Message>Server encountered an internal error. Please try again after some time.
RequestId:20e24e11-a01e-003b-19d9-058232000000
Time:2021-02-18T09:38:38.7249514Z</Message></Error>"
at sun.reflect.GeneratedConstructorAccessor420.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.azure.core.http.rest.RestProxy.instantiateUnexpectedException(RestProxy.java:320)
at com.azure.core.http.rest.RestProxy.lambda$ensureExpectedStatus$3(RestProxy.java:361)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:118)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1782)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:320)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:337)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2344)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onSubscribe(MonoCacheTime.java:276)
at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:191)
at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:132)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:112)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:178)
at reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:96)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1782)
at reactor.core.publisher.MonoCollectList$MonoCollectListSubscriber.onComplete(MonoCollectList.java:121)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:378)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:373)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:429)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:645)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:96)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1526)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1275)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1322)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Mono.block(Mono.java:1680)
at com.azure.storage.common.implementation.StorageImplUtils.blockWithOptionalTimeout(StorageImplUtils.java:99)
at com.azure.storage.blob.BlobClient.uploadWithResponse(BlobClient.java:222)
at com.azure.storage.blob.BlobClient.uploadWithResponse(BlobClient.java:185)
at com.azure.storage.blob.BlobClient.upload(BlobClient.java:163)
at org.knime.ext.azure.blobstorage.filehandling.fs.AzureBlobStorageFileSystemProvider.createDirectoryInternal(AzureBlobStorageFileSystemProvider.java:247)
at org.knime.ext.azure.blobstorage.filehandling.fs.AzureBlobStorageFileSystemProvider.createDirectoryInternal(AzureBlobStorageFileSystemProvider.java:1)
at org.knime.filehandling.core.connections.base.BaseFileSystemProvider.createDirectory(BaseFileSystemProvider.java:543)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at org.knime.filehandling.core.connections.FSFiles.createDirectories(FSFiles.java:165)
at org.knime.filehandling.utility.nodes.transfer.PathCopier.createDirectories(PathCopier.java:218)
at org.knime.filehandling.utility.nodes.transfer.TransferFilesNodeModel.copy(TransferFilesNodeModel.java:162)
at org.knime.filehandling.utility.nodes.transfer.TransferFilesNodeModel.execute(TransferFilesNodeModel.java:139)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:576)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1245)
at org.knime.core.node.Node.execute(Node.java:1025)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:558)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:334)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
2021-02-18 10:38:38,747 : ERROR : KNIME-Worker-13-Transfer Files 0:607 : : Node : Transfer Files : 0:607 : Execute failed: Server encountered an internal error. Please try again after some time.
(The accompanying stack trace is identical to the WARN trace above.)

Hi @geppopompo,

the log contains a "Status code 500: Server encountered an internal error" response from Azure.

Sounds like Azure had a temporary problem, and you might try it again later. Does the problem still happen?
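If you want to retry the failing operation programmatically outside of KNIME, transient 500s are usually safe to retry with exponential backoff. A minimal sketch (the `fn` callable is just a stand-in for whatever Azure call fails; nothing here is specific to the KNIME connector):

```python
import time

def with_retries(fn, attempts=4, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))
```
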


Hi @sascha.wolke,

I don't think the connection is the issue. In the same job I can download and move files from existing folders; the issue only appears when I try to create a folder from KNIME.

Thanks !

Gianpaolo

Hi @geppopompo,

can you describe your setup and what you are trying to do? Are you transferring files between two different storage accounts, different containers, or only different directories inside the same container? If the latter, try to use only one connector.

Can you make sure that you are able to create new folders in the destination container? (Verify that the folder does not exist before running the Create Folder node.)

While testing this, I got the same error on storage accounts with the ADLS Gen2 hierarchical namespace enabled, but I'm not sure right now whether this is the same problem.

Regards,
Sascha

Hi @sascha.wolke ,

I have tried different ways. Moving from the same container (or a different one) into an existing folder works; it does not work when KNIME has to create the new target folder. I tried with a different container and with the same container, and with the same connector and a different connector. I also tried to create the new folder from this window


but I get the same error. It seems that the data lake (I'm using Data Lake Storage Gen2) does not allow KNIME to create the folder, even though the key has all the permissions.

I did not get this part:

"While testing around with this, I got the same error on storage accounts with ADLS Gen2 hierarchical namespace enabled. But I'm not sure right now if this is the same problem."

Could you give more info?

Thanks!

Hi @geppopompo,

the Azure Blob Storage Connector works only on storage accounts with the hierarchical namespace disabled.

On a storage account with the hierarchical namespace enabled (aka ADLS Gen2), you have to use the new Azure Data Lake Storage Gen2 Connector. The new connector has not been released yet, but you can try a development version in the nightly KNIME build if you like.
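For context on why creating a folder fails: judging from the stack trace above (`BlobClient.upload` inside `createDirectoryInternal`), the Blob Storage connector emulates a folder by uploading a zero-length marker blob. On a flat-namespace account that is just another blob; on an HNS-enabled account the same request can be rejected. A hypothetical sketch of such a marker-name convention (the trailing-slash naming is an assumption for illustration, not the connector's confirmed format):

```python
def directory_marker_name(path: str) -> str:
    """Normalize a folder path into the name of a zero-length marker blob
    ending in '/', the way flat blob stores commonly emulate directories."""
    # Drop empty segments caused by leading, trailing, or doubled slashes.
    name = "/".join(segment for segment in path.split("/") if segment)
    return name + "/"
```
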

Cheers

Hi @sascha.wolke ,

thanks for your clarification. I'm happy to be a tester :slight_smile: How can I do that?
It is really important for me to understand whether I can move the folders and files once they are processed; otherwise reading all the files will take too much time and resources.

Waiting for your feedback.

Thanks!

Gianpaolo

Hi @geppopompo,

you can find the nightly build here if you would like to test (read the disclaimer carefully!): Nightly build downloads | KNIME

Cheers
Sascha

Hi @sascha.wolke ,

do you know if there is a timeline for the release?
I suppose that version is not stable.

Thanks

Gianpaolo

Hi @geppopompo,

the nightly is only a testing version and not stable; don't use it in production. The next KNIME release is planned for the summer (July), but there is no fixed date yet. Maybe we will release the ADLS Gen2 connector as part of a bugfix release before that. I will leave a message here to keep you up to date in that case.

Cheers
Sascha

Hi @geppopompo,

a small update on this: we are trying to release the new node with the next KNIME 4.3.3 release. See Tobias's post.

Cheers
Sascha

Hi Sascha,

thanks for the update !

Gianpaolo

I have a requirement to transfer KNIME tables from local storage to a new Data Lake folder that is created based on the current datetime.
I used a similar connection to the one in the workflow picture, i.e. Microsoft Authentication → Azure Blob Storage Connector → Transfer Files.
I also checked the "create missing folders" option in the Transfer Files node and prepared the target path as below.

I get the following error when I run the workflow:

Execute failed: The requested URI does not represent any resource on the server.
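The datetime-based target folder described above can be sketched as follows (the `root` folder name is just a placeholder, not a real path from my setup):

```python
from datetime import datetime

def datetime_folder(root: str = "processed") -> str:
    """Build a destination folder path named after the current timestamp,
    e.g. processed/2021-02-18_10-38-38."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return f"{root}/{stamp}"
```
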

Hi @rajvenkatesh_k,

can you create a new thread about this and explain there which nodes you are using? This thread was about the difference between the Azure Blob Storage connector and the new Azure Data Lake Storage Gen2 connector.

Cheers
Sascha


Hi @sascha.wolke,

I found the heading of this topic similar.

But as suggested, a new thread has been created:


This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.