Feature available: large content transfer using built-in connectors in Logic Apps Standard
Published Nov 25, 2022

Some connectors routinely have to deal with entities whose content is too large to load completely into memory. Connectors currently in this category include SFTP, FTP, and Azure Blob. This list is not exhaustive, and it will only grow as we add more built-in connectors.

 

The SFTP (SftpWithSsh) and Azure Blob connectors from Azure, which run in the shared cloud environment, support transferring files up to 1 GiB (1024 MiB); refer to the limitations sections of the SFTP and Blob connectors respectively. To take full advantage of large content transfer, both the connector and the application engine (Logic Apps in this case) must support it. This capability is now provisioned in Logic Apps Standard for built-in connectors, with a more liberal limit on content size: in general, the transferable content size goes well beyond 1 GiB, though the exact limit differs per connector.

 

Which connectors have enabled this already?

As of this writing, the SFTP and FTP built-in connectors have implemented this, with no limit on file size for download and upload operations.

 

How to use it?

1. Large content download

We'll take the SFTP connector as an example. The first step is to update the host settings of the app using the Kudu tool (see how to modify host.json), if needed. This step ensures that the connector expects files up to a certain size. The relevant settings are:

  • Runtime.ServiceProviders.Sftp.MaxFileSizeInBytes
  • FlowRunRetryableActionJobCallback.MaximumContentLengthInBytesForPartialContent

The first is the connector setting that caps the file size the connector can fetch using the Get File Content (V2) action. This action is the second version of the Get File Content action, which is used to fetch smaller files; we compare the two versions in more detail later in this article. The default value of this host setting is 2 GiB (2147483648 bytes).

 

The second host setting sets the upper bound on the output size of all actions that use the large content transfer implementation. It is not connector specific; it applies to every action from every connector executing in the given logic app, and ensures the Logic Apps engine expects output content up to a certain size. The default value is 1 GiB (1073741824 bytes). You don't need to update these settings if your files will be smaller than the defaults.

 

The host.json was modified as follows to allow file downloads of up to 5 GB.

 

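The screenshot is reconstructed below as a minimal host.json sketch. It assumes both settings are raised to 5368709120 bytes (5 GiB) and placed under extensions.workflow.settings, the usual location for Logic Apps Standard host settings; the rest of the file is standard boilerplate:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.ServiceProviders.Sftp.MaxFileSizeInBytes": "5368709120",
        "FlowRunRetryableActionJobCallback.MaximumContentLengthInBytesForPartialContent": "5368709120"
      }
    }
  }
}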

 

 

Restart your logic app from the Overview page after saving this file.

 

The next step is workflow creation. You need to use the Get File Content (V2) action. If you changed any of the settings above to override the defaults, the new value of the connector setting appears in the description of this action. Although you can download large files using this action, the output link that normally appears when you view the action in run history won't be visible when the file size is larger than the memory-referenceable size (more on this later in the article); making it available for large content is still work in progress. This effectively means you can't use the content directly unless the workflow includes an upload action that sends the data to some service.

 

2. Large content upload

Add the existing SFTP Upload File Content action to the workflow to send large files to a file server. If you wish to use the content of the file downloaded by the Get action, the file content input parameter should reference the output 'Body of Get File Content (V2)'; if you wish to send the file to some other server, create a new connection. Note that there is no V2 version of this action and no host setting modification is required for large files. A minimal sketch of both actions in the workflow definition follows.
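Here is how the two actions might look in the workflow definition. The action shape is the standard ServiceProvider form used by built-in connectors in Logic Apps Standard, but the operation IDs, parameter names, file paths, and connection names below are assumptions for illustration:

"actions": {
  "Get_File_Content_(V2)": {
    "type": "ServiceProvider",
    "inputs": {
      "parameters": {
        "filePath": "/inbound/large-file.bin"
      },
      "serviceProviderConfiguration": {
        "connectionName": "sftp-source",
        "operationId": "getFileContentV2",
        "serviceProviderId": "/serviceProviders/Sftp"
      }
    },
    "runAfter": {}
  },
  "Upload_file_content": {
    "type": "ServiceProvider",
    "inputs": {
      "parameters": {
        "filePath": "/outbound/large-file.bin",
        "fileContent": "@body('Get_File_Content_(V2)')"
      },
      "serviceProviderConfiguration": {
        "connectionName": "sftp-destination",
        "operationId": "uploadFileContent",
        "serviceProviderId": "/serviceProviders/Sftp"
      }
    },
    "runAfter": {
      "Get_File_Content_(V2)": [ "Succeeded" ]
    }
  }
}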

 

Important note: the download and upload operations are I/O bound, so performance depends on available network bandwidth. Generally speaking, CPU usage should stay comfortably low as long as multiple workflows with large content operations are not running simultaneously. When running multiple workflows at scale (see the section below on limits), it is advisable to deploy the logic app in the same Azure region as the file server and to use WS3 (the Workflow Standard 3 plan) to maximize performance. Read this for more information on pricing tiers.

 

Scale

Below are the results of some scale tests we ran to characterize the feature's performance. We deployed a container instance as our SFTP server, running an Ubuntu image with an Azure File Share as the underlying storage. The logic app (named PerfBench-SFTP) was created in the same region on the WS3 pricing tier. The workflow is simple: we want to test the download operation.

 

[Figure: the test workflow for the Get File Content (V2) action]

 

The trigger request payload carries a file name parameter so the testing client can control which file to download. In the first iteration of testing, only two files were used, sized 200 MiB and 512 MiB. A total of 100 requests were sent in batches at 2-minute intervals; because the file sizes are moderately large, we wanted pauses between batches. In each batch, 15 requests were sent at once, with the file name chosen at random from the two options in each request to distribute the load evenly.

 

The average execution duration of the action (latency) was roughly 175 seconds, and 1% of runs took 8 minutes to download the file. Latency here is the interval between action start and action end; it includes only the time required to execute the operation, not the waiting time (job delay) before it actually starts. Fetching a 512 MiB file with this action takes only 40 seconds when a single request is sent; when multiple sessions execute concurrently, each operation takes longer because system resources are divided.

 

 

 

 

These numbers come from the following query over the workflow action telemetry:

WorkflowActions
// completed workflow actions during the test window
| where TIMESTAMP between (datetime(2022-10-31 14:00) .. datetime(2022-11-01))
| where siteName == "PerfBench-SFTP"
| where TaskName contains "end"  // keep only action-completion records
// convert milliseconds to seconds, then compute the average and percentiles
| summarize average = avg(todouble(durationInMilliseconds)) / 1000,
            percentiles(todouble(durationInMilliseconds) / 1000, 50, 90, 99)

 

 

 

 

 

Action duration (seconds):

average    percentile__50    percentile__90    percentile__99
175.186    142.653           383.563           474.461

 

Here is a snapshot of the performance counters from the logic app, captured from Application Insights with a 1-minute sampling interval; a query sketch for pulling the same counters follows.
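Application Insights exposes these counters in its performanceCounters table, so a query along these lines should reproduce the charts. This is a sketch: the exact counter names vary by runtime and are an assumption here.

performanceCounters
// average each counter per minute over the test window
| where timestamp between (datetime(2022-10-31 14:00) .. datetime(2022-11-01))
| where name in ("% Processor Time", "% Processor Time Normalized", "IO Data Bytes/sec")
| summarize avg(value) by name, bin(timestamp, 1m)
| render timechart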

 

[Charts: Logic app I/O bytes exchanged per second; Processor time; Normalized processor time]

 

From the first graph, we can see that the transfer rate went over 250 MB/s at some points. This is possible only when the client and server are in the same region, taking advantage of the Azure cloud network. Notice the six distinct spikes, which correspond to the request batches (100 requests sent in batches of 15).

 

For CPU usage, notice the difference between the two graphs, %Processor Time and Normalized Processor Time. These are included to show how the pricing tier makes a difference. By definition, Normalized processor time = Processor time / number of available CPUs. While the former graph spiked to 130%, the corresponding value in the latter is ~32%, implying 4 available CPUs, which is consistent with the WS3 plan's allocation of 4 vCPUs.

 

During scale-out, only two instances were allocated.

[Charts: instance count; CPU usage per instance]

 

Here is a snapshot of the SFTP server's metrics during the same interval.

 

[Charts: SFTP server instance CPU; SFTP server instance bytes transmitted per second]

CPU usage hovers around 400% (not normalized), which is significantly high. Thanks to the higher number of cores available to this deployment, the server was able to push through during this test, but that may not always be the case.

 

The throughput of this action can't be generalized because of the variable input, file size: the larger the file, the fewer actions execute per minute. A better throughput metric is bytes downloaded per second.

Average throughput = total bytes downloaded / total test time = 37329305600 bytes / 1200 s ≈ 31.1 MB/s. (With an even split between the two files, total bytes downloaded = 100 × (200 MiB + 512 MiB) / 2 = 37329305600 bytes.)

 

To put things in perspective, here are the metrics from a performance test of the Create Folder action from the same connector.

 

[Charts: Logic app Normalized Processor Time (CPU usage); SFTP server instance I/O bytes transmitted per second; SFTP server instance CPU]

The logic app's CPU went just over 1.2%, and the SFTP instance's was no more than 65%. A total of 3000 requests were sent with a batch size of 300 and an interval of 10 s. The average execution latency was ~700 milliseconds. The Create Folder action doesn't transmit any file data, so the server's network I/O is expectedly low.

 

 

In the second iteration of testing, the same number of requests were sent with the same batch size, except that additional files were added with the following sizes: 2.5 GB, 2 GiB, 1.5 GB, and 900 MiB. This time many runs failed with the following error:

 

TIMESTAMP:      2022-11-03 11:44:24.9634642
operationName:  FlowHttpEngine.GetErrorMessageFromException
message:        Http request failed with unexpected exception of type 'SshException' and message exception: 'Channel was closed.'.
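To spot and count these failures in telemetry, a query along the following lines should work. It's a sketch: it assumes the error records above surface in the same WorkflowActions table used earlier and that the table exposes a message column.

WorkflowActions
// count SSH channel/session failures in 5-minute buckets during the test window
| where TIMESTAMP between (datetime(2022-11-03 11:00) .. datetime(2022-11-03 13:00))
| where siteName == "PerfBench-SFTP"
| where message contains "Channel was closed" or message contains "Session was not open"
| summarize failures = count() by bin(TIMESTAMP, 5m)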

 

When the SFTP client runs into errors with the message 'Channel was closed' or 'Session was not open', it indicates an overloaded server that is closing existing channels. Metrics from the container instance confirm this (notice the 900% CPU):

 

[Charts: SFTP server CPU; SFTP server I/O bytes per second]

 

Logic app metrics from the same test:

 

[Charts: Logic app I/O bytes; Logic app Normalized Processor Time]

 

The app's metrics show a sharp spike toward the beginning of the test and then a sudden drop, which happened because failed runs were no longer pulling file data from the server. The extremely fast transfer rate of the Azure network, combined with multiple concurrent download sessions from the logic app, exhausted the compute capacity of the file server. This is where users need to exercise caution with large content operations. Unlike the granular Logic Apps actions that involve little I/O and short execution times, like the ones used in the performance benchmarks in a previous blog post, these operations should execute only under a reasonable burst load. A burst of 100k executions was not an issue in that scenario because the average action execution time was under 5 milliseconds and the whole batch finished in 14 minutes on the WS3 deployment, whereas here a load of only 100 requests took over 20 minutes to finish. Moreover, once the file sizes were increased, the same number of requests was enough to cause run failures.

For large content operations, users must restrict the number of runs that can execute in parallel, based on file size and the load capacity of their file server. For example, when the same action was used with small files (1 KiB each), no server-side failures were seen even with a burst load of 10k requests, and the whole test finished in just 4 minutes. One way to enforce such a cap is shown below.
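Trigger-level concurrency control in the workflow definition caps how many runs execute at once. A minimal sketch (the limit of 10 is illustrative; choose a value based on your file sizes and server capacity):

"triggers": {
  "manual": {
    "type": "Request",
    "kind": "Http",
    "runtimeConfiguration": {
      "concurrency": {
        "runs": 10
      }
    }
  }
}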

 

Next, we'll look at the results of the Upload File Content action performance test. This action can't be tested in isolation for large files, because the file data to be uploaded must come from the output of another action that itself supports large content. If we include the Get File Content (V2) action in the workflow, each run executes both operations and the metrics won't reflect the performance of the upload action alone.

A workaround is to place a foreach scope after the Get action, fanning out into multiple instances of the upload action that all use the Get action's output as their file content input. Instead of multiple runs we get multiple action executions in the same run, and instead of run-level concurrency the workflow uses the action-level concurrency offered by foreach. The concurrency limit was set to the maximum (50); see the configuration sketch after this paragraph.
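In workflow JSON, action-level concurrency is configured through the foreach scope's runtimeConfiguration. A sketch of the shell (the range-based fan-out is illustrative, and the upload action inside is the one sketched earlier, abbreviated here):

"For_each_upload": {
  "type": "Foreach",
  "foreach": "@range(0, 100)",
  "runtimeConfiguration": {
    "concurrency": {
      "repetitions": 50
    }
  },
  "actions": {
    "Upload_file_content": { "type": "ServiceProvider" }
  },
  "runAfter": {
    "Get_File_Content_(V2)": [ "Succeeded" ]
  }
}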

 

[Figure: workflow for the upload action performance test]

 

In this test, instead of using a container instance as the SFTP server, we used a storage account with SFTP enabled, which stores the file data in blob storage and exposes public SFTP endpoints. This server performs much better on all parameters: availability, bandwidth, and latency.

 

Round #1: a burst load of 100 with a 200 MiB file. The run took a total of 20.26 minutes to finish, and the average latency of the upload action was ~43 s.

avg_durationInSeconds    percentile_durationInSeconds_90    percentile_durationInSeconds_95    percentile_durationInSeconds_99
43.3574                  76.553                             81.343                             83.052

 

[Charts: Logic app %CPU per instance; storage account (SFTP server) ingress bytes per minute]

 

 

Round #2: a burst size of 100 with a 1 GB file. This was a massive test because of the file size, and it took around 3 hours to complete. It would have taken much less time had the foreach concurrency been turned off; this was another case where running fewer actions in parallel would have been more beneficial.

[Charts: Logic app CPU usage per instance; storage account (server) ingress bytes per minute]

 

What is the maximum file size supported by the SFTP connector?

The short answer is that we don't know. Theoretically there is no limit, and any size should work; we have tested file transfers up to 5 GiB.

 

Why is the output link not visible for some action executions?

This depends on the file size in the action run and the value of the host setting ContentLink.MaximumContentSizeInBytes. Files smaller than a threshold size are referenceable in memory, and the outputs link is visible for them; for larger files it is not. The threshold can be modified by changing this host setting; its default value is 209715200 bytes (200 MiB). Making the content link available for all file sizes is a future task in large content support.
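For example, raising the threshold to 500 MiB (524288000 bytes) might look like this in host.json. This is a sketch that uses the setting name exactly as given above and assumes it lives in the same extensions.workflow.settings section as the earlier example:

{
  "extensions": {
    "workflow": {
      "settings": {
        "ContentLink.MaximumContentSizeInBytes": "524288000"
      }
    }
  }
}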

 

Are the large content operations from built-ins faster compared to shared connectors?

Yes. The shared connectors use chunking to divide the file into multiple small pieces and transfer each piece in a separate HTTP session. Notice the difference in execution time in this run, which fetches a 512 MiB file using both the SFTP-SSH connector and the built-in SFTP; the difference grows with file size.

 

[Figure: SFTP-SSH Get - 1m 50s; SFTP built-in - 44s]
