Copy files from SharePoint to Blob Storage using Azure Data Factory


Azure Data Factory is a great tool for ETL pipelines, and we love working with it. However, when it comes to integration with the rest of the non-Azure Microsoft world (especially SharePoint), it can get a bit frustrating. In a recent project I wanted to build a solution that allows employees to upload documents from their phones via the OneDrive app and its integrated scan function. The uploaded documents should then be fetched and copied to Azure Data Lake Storage using an Azure Data Factory pipeline. It seemed like a simple task until I ran into several not-so-simple problems. In this blog post I want to share the final solution and walk through all the necessary steps.

Addendum from 15.12.2022: In the meantime I have expanded the solution so that it copies multiple files from multiple SharePoint folders to Azure Blob Storage using a nested loop. If you are interested in a step-by-step guide on this topic, you can find the blog here: Nested ForEach loops in Azure Data Factory – Syntera. If you only need to copy multiple files from one SharePoint folder, the solution shown in this blog will be sufficient.

The Solution

Let’s first have a look at the final pipeline in Azure Data Factory and break it down into individual tasks to solve:

Prerequisite:

  1. Register SharePoint (SPO) application in Azure Active Directory (AAD).
  2. Grant SPO site permission to registered application in AAD.
  3. Provision a Resource Group, Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS) in your Azure subscription.

Inside of ADF pipeline:

  1. WebActivity “GetBearerToken”: Get an access token from SPO via API call.
  2. WebActivity “GetSPOFolderMetadata”: Get SPO folder metadata including a list of all files in the SPO target folder using the SPO access token via API call.
  3. CopyActivity “Copy data from SPO to ADLS” inside ForEach loop: Iterate through the list of files and copy each file to ADLS.

1. Register SPO application in AAD

To enable ADF to access content from SPO we need to register an application in AAD. Basically, we are creating an identity inside our Azure environment and, in a second step, telling SPO that anyone who shows up with this identity can be trusted. To set up the application in AAD I followed this Microsoft guide: Copy data from SharePoint Online List – Azure Data Factory & Azure Synapse | Microsoft Learn.

  1. Search for and select Azure Active Directory in the Azure portal
  2. Inside of AAD under Manage select App registrations
  3. Click on New registration
  4. Choose a name for your registered app, leave the other values at their defaults and select Register
  5. After the registration select your application and under Manage select Authentication
  6. Under Platform configurations click on Add a platform
  7. Under Configure platforms select Web
  8. Enter a redirect URI and a Front-channel logout URL (in our case these do not really matter) and make sure the checkbox ID tokens is checked
  9. Under Manage select Certificates & secrets and click on New client secret under Client secrets (0)
  10. Add a Description and a suitable expiry date
  11. Make sure you record the secret value for later use; once you leave the page it will never be displayed again!

We will need the secret value and secret ID as well as the Application (client) ID and Directory (tenant) ID from the Overview page later for the API calls and to set up permissions in SPO.

2. Grant SPO site permission to registered application in AAD

Now that we have created an identity, we need to grant it permission to the SPO site we want to get files from.

  1. Open the SharePoint Online site link: https://[your_site_url]/_layouts/15/appinv.aspx
    • replace [your_site_url] with the SPO site URL e.g. if your SPO domain name was “syntera” and your SPO Site “blog” your URL would look like this:
      • https://syntera.sharepoint.com/sites/blog/_layouts/15/appinv.aspx
  2. Fill in the fields as follows:
    • App Id: Copy the Application (client) ID from your registered app
    • Title: Choose a title
    • App Domain: localhost.com
    • Redirect URL: https://www.localhost.com
    • Permission Request XML: Insert the following code snippet:
<AppPermissionRequests AllowAppOnlyPolicy="true">
    <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Read"/>
</AppPermissionRequests>
  3. Select Create and Trust it on the next page

Important:
Azure Access Control (ACS), a service of Azure Active Directory (Azure AD), was retired on November 7, 2018. This means that if your SPO tenant was created after this date, the use of ACS app-only access tokens is disabled by default. To enable it (necessary for this solution), run Set-SPOTenant -DisableCustomAppAuthentication $false from your SharePoint admin PowerShell.

3. Provision Resource Group, ADF and ADLS

  1. In Azure Portal select Resource groups under Navigate and select Create
  2. Choose your subscription, a name (in my case: dev-rg-blog-00000) and a region and select Review + create
  3. In Azure Portal select Create a resource, search for Azure Data Factory and select Create
  4. Choose your subscription, the resource group you created previously, a name (in my case: dev-adf-blog-00000) and a region
  5. Under Git configuration check the box Configure Git later and select Review + create (you can configure your Git integration later following another guide if necessary for your project)
  6. In Azure Portal select Create a resource, search for “storage”, select Storage Account and then Create
  7. Choose your subscription, the resource group you created previously, a name (in my case: devadlsblog00000), a region and change the redundancy to LRS (lowest cost option – adjust for your case)
  8. In Advanced under Data Lake Storage Gen2 check the box for Enable hierarchical namespace and select Review + create
  9. Go to your created ADLS resource and under Data Storage select Containers
  10. Select + Container on the top, give the container a name and create it

4. Create Pipeline in ADF and get access token from SPO

Now that all the prerequisites are set up, we can start building the pipeline in ADF. In the first step we create a web activity that retrieves an access token from the SPO site via an API call (a small sketch to test the same request outside ADF follows the list below).

  1. Open ADF Studio
    • In Azure portal under Navigate select Resource groups and click on the resource group you created
    • Inside your resource group you should see the ADF and ADLS resources we created previously
    • After selecting your ADF resource under Getting started you should see Open Azure Data Factory Studio, which will open a new tab
  2. Create a new pipeline
    • Under Author right-click Pipelines and select New pipeline
  3. Create a Web activity
    • Under General select Web and drag it to your pipeline
  4. Under General choose a suitable name (in my case: GetBearerToken)
  5. In Settings enter the following information:
    • URL: https://accounts.accesscontrol.windows.net/[Tenant-ID]/tokens/OAuth/2
      • replace [Tenant-ID] with the Directory (tenant) ID from your registered app
    • Method: POST
    • Authentication: None
    • Headers:
      • Name: Content-Type
      • Value: application/x-www-form-urlencoded
    • Body: grant_type=client_credentials&client_id=[Client-ID]@[Tenant-ID]&client_secret=[Client-Secret]&resource=00000003-0000-0ff1-ce00-000000000000/[Tenant-Name].sharepoint.com@[Tenant-ID]
      • replace [Tenant-ID] with the Directory (tenant) ID from your registered app
      • replace [Client-ID] with the Application (client) ID from your registered app
      • replace [Client-Secret] with the value of the generated secret from your registered app
      • replace [Tenant-Name] with the SPO domain name (in my case: syntera)
  6. Select Debug to check if everything is set up correctly (check the output of the activity to see the access token)
  7. Under General check the box for Secure output to ensure that your access tokens don’t get logged in your pipeline runs
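
If you want to sanity-check the token request outside of ADF, the same call can be reproduced with a few lines of Python. This is only a minimal sketch using the requests library; the placeholder values are the ones from your registered app, not real values from this setup:

import requests

# Placeholders: fill in the values from your registered app
tenant_id = "<Directory (tenant) ID>"
client_id = "<Application (client) ID>"
client_secret = "<client secret value>"
tenant_name = "<SPO domain name, e.g. syntera>"

token_url = f"https://accounts.accesscontrol.windows.net/{tenant_id}/tokens/OAuth/2"
body = {
    "grant_type": "client_credentials",
    "client_id": f"{client_id}@{tenant_id}",
    "client_secret": client_secret,
    "resource": f"00000003-0000-0ff1-ce00-000000000000/{tenant_name}.sharepoint.com@{tenant_id}",
}

# The endpoint expects application/x-www-form-urlencoded, which requests
# sends automatically when a dict is passed via data=
response = requests.post(token_url, data=body)
response.raise_for_status()
access_token = response.json()["access_token"]
print(access_token[:20] + "...")  # don't print the full token in real logs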

5. Get SPO folder metadata (list of files)

Now that we can retrieve access tokens from SPO, we are able to get information and files from the SPO tenant. In order to copy multiple files from a folder, we first have to get a list of all files currently located in that specific folder. To receive this list, we will use an API function from SPO which returns the metadata of a specified folder (a local test of the same call is sketched after the list below):

  1. Add another Web activity and connect it with the previous one
  2. Under General choose a suitable name (in my case: GetSPOFolderMetadata) and check the box for secure input (again to ensure the access token does not get logged in your pipeline runs)
  3. Under Settings put in the following information:
    • URL: https://[sharepoint-domain-name].sharepoint.com/sites/[sharepoint-site]/_api/web/GetFolderByServerRelativeUrl('/sites/[sharepoint-site]/[relative-path-to-folder]')/Files
      • replace [sharepoint-domain-name] with your SPO domain name (in my case: syntera)
      • replace [sharepoint-site] with your SPO site name (in my case: blog)
      • replace [relative-path-to-folder] with the relative URL of your folder (you can check the folder path in SPO by right-clicking on the folder, selecting Details and selecting More details at the bottom of the window on the right)
    • Method: GET
    • Authentication: None
    • Headers (there are two headers):
      • Name1: Authorization
      • Value1: @{concat('Bearer ', activity('GetBearerToken').output.access_token)}
      • Replace GetBearerToken with the name you gave to the first web activity
      • Name2: Accept
      • Value2: application/json
  4. Select Debug to check if everything is set up correctly (check the output of the activity to see the folder metadata)
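
The same metadata call can be tested locally before wiring it into ADF. The following Python sketch reuses the access_token from the previous snippet; the domain, site and folder values are placeholders you need to replace with your own:

import requests

domain = "<sharepoint-domain-name>"   # e.g. syntera
site = "<sharepoint-site>"            # e.g. blog
folder = "<relative-path-to-folder>"  # assumed example: Shared Documents/Scans

url = (
    f"https://{domain}.sharepoint.com/sites/{site}"
    f"/_api/web/GetFolderByServerRelativeUrl('/sites/{site}/{folder}')/Files"
)
headers = {
    "Authorization": f"Bearer {access_token}",  # token from the previous step
    "Accept": "application/json",
}

# The response contains a "value" array, which is exactly what the ForEach
# activity iterates over in the next step
files = requests.get(url, headers=headers).json()["value"]
for f in files:
    print(f["Name"], f["ServerRelativeUrl"])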

6. Iterate through list of files and copy each file to ADLS

With the list of files we received as output of the last web activity, we can now set up a copy job that copies the files to ADLS. For this task we need a ForEach activity to loop through every item in the list and a copy activity to actually copy the data. For the copy activity we need to define a source and a sink dataset, which themselves refer to linked services that define where the source and sink are located. In our case the sink is ADLS, which has a supported connector; the linked service for the source, however, is handled through another API call. (A sketch that imitates the whole loop outside ADF follows at the end of this section.)

  1. Add a ForEach activity and connect it with the previous web activity
  2. Under Settings check the box Sequential and in Items enter the following: @activity('GetSPOFolderMetadata').output.value
    • Replace GetSPOFolderMetadata with the name you chose for the second web activity
  3. Inside the ForEach activity add a copy data activity and rename it (in my case: Copy data from SPO to ADLS)
  4. Create source linked service
    • In Manage under Connections select Linked services and click on New
    • In New linked service search for “HTTP” and select it.
    • Define a name for your linked service (in my case: ls_spo_Blog)
    • Set Authentication type to Anonymous
    • Scroll down to Parameters select New and call it RelativeURL
    • For the Base URL enter the following URL: https://[sharepoint-domain-name].sharepoint.com/sites/[sharepoint-site]/_api/web/GetFileByServerRelativeUrl('@{linkedService().RelativeURL}')/$value
      • replace [sharepoint-domain-name] with your SPO domain name (in my case: syntera)
      • replace [sharepoint-site] with your SPO site name (in my case: blog)
  5. Create source dataset
    • Right-click on Datasets and select New dataset
    • Under New dataset search for “HTTP” and select it
    • Under Select format choose the suitable format for your data (in my case the files are .pdf so I chose binary)
    • In Set properties define a name (in my case: ds_spo_Blog) and select the linked service you created previously
    • In the overview of the dataset under Connection in Linked service properties enter: @dataset().RelativeURL
    • Under Parameters select New and call it RelativeURL
  6. Set up parameters for the copy data activity
    • Go back to the copy data activity
    • Under Source in Dataset properties enter: @{item().ServerRelativeUrl}
    • Request method: GET
    • Additional headers: @{concat('Authorization: Bearer ', activity('GetBearerToken').output.access_token)}
      • Replace GetBearerToken with the name you gave to the first web activity
  7. Create sink dataset and linked service:
    • Under Sink select New to create a new sink dataset
    • Under New dataset search for and choose Azure Data Lake Storage Gen2 and continue
    • Under Select format choose the suitable format for your data (in my case the files are .pdf so I chose binary)
    • In Set properties define a name (in my case: ds_adls_Blog)
    • For Linked service select New
    • Define a name for your linked service (in my case: ls_adls_Blog)
    • Under Account selection method select your subscription and the storage account name from the drop down
  8. Back in Set properties select the Linked service you just created and in the file path enter the container name
  9. Select Debug to check if everything is set up correctly; if successful, check your blob storage to validate that the files were copied.
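
For reference, the whole ForEach/copy pattern can also be imitated in plain Python: download each file through GetFileByServerRelativeUrl and upload it to the container with the azure-storage-blob SDK. This is only a sketch under the assumption that access_token and the files list from the previous snippets are available and that you have a connection string for your storage account; it is not what ADF does internally, just the same sequence of calls:

import requests
from azure.storage.blob import BlobServiceClient

# Placeholders: connection string and container name of your storage account
blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = blob_service.get_container_client("<container-name>")

domain = "<sharepoint-domain-name>"
site = "<sharepoint-site>"

for f in files:  # 'files' is the list returned by the folder metadata call above
    download_url = (
        f"https://{domain}.sharepoint.com/sites/{site}"
        f"/_api/web/GetFileByServerRelativeUrl('{f['ServerRelativeUrl']}')/$value"
    )
    content = requests.get(
        download_url, headers={"Authorization": f"Bearer {access_token}"}
    ).content
    # Reuse the original file name for the blob (the metadata field is assumed to be "Name")
    container.upload_blob(name=f["Name"], data=content, overwrite=True)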

Further Considerations

After all these steps we now have a working solution, but there are some further things to consider. We secured the output of the ‘GetBearerToken’ activity and the input of the ‘GetSPOFolderMetadata’ activity to ensure the access token does not get logged. However, the secret of the app we registered is still hard coded in the ‘GetBearerToken’ activity, which means it will get logged in your pipeline runs. Consider setting up an Azure Key Vault to secure the secret of the registered app. For reusability within your company, I would also advise you to parametrize all inputs to the pipeline (e.g. the SPO site, the application and client ID, etc.). It is a one-time effort that will make your life much easier when applying this pipeline to a different use case.
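
If you test the calls outside ADF, as in the snippets above, the same principle applies there: pull the client secret from Key Vault at runtime instead of hard coding it. The following is a minimal sketch assuming a Key Vault already exists; the vault and secret names are placeholders of your choosing (inside ADF itself, the usual route is an Azure Key Vault linked service):

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholders: vault URL and secret name are assumptions for illustration
vault_url = "https://<your-key-vault-name>.vault.azure.net"
secret_client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Retrieve the registered app's client secret at runtime instead of hard coding it
client_secret = secret_client.get_secret("<spo-app-client-secret-name>").value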


58 responses to “Copy files from SharePoint to Blob Storage using Azure Data Factory”

  1. (403) Forbidden., Source: I am getting this error in the copy activity inside the ForEach loop.

    • Since the previous steps work, the Bearer Token is generated successfully. I therefore assume that the problem is related to the permissions on SharePoint side.

      Can you please check if you have set up everything according to point 2 in the blog?

      On the one hand the permissions (“Read”) or the scope could be wrong (different SharePoint site).

      If your SPO tenant was created after 07.11.2018, can you check if the use of ACS app-only access tokens is enabled? To enable it run the command Set-SPOTenant -DisableCustomAppAuthentication $false from your SharePoint admin PowerShell.

      Please let me know if any of these suggestions helped in solving your issue

      • I also got a forbidden error: Http request failed with client error, status code 403 Forbidden, please check your activity settings. If you configured a baseUrl that includes path, please make sure it ends with ‘/’.

        I tried adding the / to the URL but this didn’t resolve the issue. I ran the PowerShell command and checked if the SPO site was right.

        I tried using different sinks and checked if I can write files and folders to the sink with AzCopy, which works.

        So I guess the issue has something to do with SPO rights. Please help.

        • Hi Roy

          Microsoft’s error messages are sometimes not very helpful or even misleading. In your case there should be no problem with the URL, but the registered app in Azure AD probably has no permission on the SharePoint site.

          Can you check if you set up everything according to “2. Grant SPO site permission to registered application in AAD” in the blog?

  2. ErrorCode=HttpRequestFailedWithUnauthorizedError,’Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Http request failed with status code 401 Unauthorized, usually this is caused by invalid credentials, please check your activity settings.

    I am getting this error in the copy activity.

    • This error usually occurs when the token does not have the correct permissions. This can be caused by various problems in the setup. What throws me off is that a possible problem with the token should already occur in the activity “Get SPO Folder Metadata” and not only in the “Copy Data” step.

      Does the “Get SPO Folder Metadata” activity run without any problems, or are you perhaps not using this activity at all in your solution?

      If so, please check if you have set up everything according to point 2 in the blog. Especially check if the setup on the SharePoint side has the correct permissions and if your SPO tenant has enabled the use of ACS app-only access tokens. To enable it run the command Set-SPOTenant -DisableCustomAppAuthentication $false from your SharePoint admin PowerShell.

      Please let me know if that helped in resolving your issue

      • Hello manikanta, Dimitri – I am getting the same issue. I checked SPO DisableCustomAppAuthentication is already set to false.

        Were you able to resolve this?

  3. Linked service test connection is expecting a parameter value. What value to be passed there?

  4. Hi Alina

    We pass parameters to the Base URL field of the linked service (e.g. “@{linkedService().RelativeURL}”). These parameters need to be defined, otherwise the Base URL cannot reference them.

    To do so go to 6.4 in the blog and follow the step “Scroll down to Parameters select New and call it RelativeURL”. Also make sure to pass the parameter through in the dataset (6.5 in the blog) and the copy activity itself (6.6 in the blog).

    Please let me know if that helped in resolving your issue

  5. Thanks for sharing this method with us. I would like to share that it is easy now if you are using Rsync to copy from SharePoint/OneDrive to Azure Blob, and one of the main reasons Rsync is used so widely is its ability to copy only the things that are different between the source (copy from) and target (copy to). There can be millions of files, database records, configuration details, etc., and if only one thing has changed, it will copy only that one thing.
    Other recommended tools, which are GUI-based, are ShareGate, GS Richcopy 360, SyncBack and GoodSync; take a look

    • Hi Triveni

      Can you please give me further information on your error? What is the error message? Which dataset is causing the error (we are using two different datasets in this solution)?

      • Hi @Dimitri Bütikofer, I am getting an error in the copy activity inside the ForEach:

        The template function ‘dataset’ is not defined or not valid

        I created the process exactly as you described, but I am getting this error after debugging.
        Can you please help me figure this out?

        • Hi Joshi,

          I think you get the error because of what you entered at point 6.5 in the blog, can you check this step again: “In the overview of the dataset under Connection in Linked service properties enter: @dataset().RelativeURL”

  6. Hi Dimitri, there is a wrong parameter in Additional headers (Source tab in the copy activity)

    In article you Wrote: Additional headers: @{concat(‘Authentication: Bearer ‘, activity(‘GetBearerToken’).output.access_token)}

    It should be:

    @{concat(‘Authorization: Bearer ‘, activity(‘GetBearerToken’).output.access_token)}

    Replace Authentication with Authorization

    https://learn.microsoft.com/en-us/azure/data-factory/connector-sharepoint-online-list?tabs=data-factory#copy-file-from-sharepoint-online

  7. I am encountering an error in the copy activity. The error message reads ‘The file operation has failed at path ‘ContainerName/FolderInDL’. The error type is System.WebException and the message is ‘The remote server returned a 403 Forbidden error’. Could you please help me determine what the issue might be?

    • Hi Kim

      It looks like your Data Factory has no access to the required container on your Storage Account. Try to authorize the Data Factory on your container using RBAC and run the pipeline again.

  8. I am able to copy the Excel files from SharePoint to the data lake using a binary dataset, however the files are not in the same format as in SharePoint; the files are getting corrupted. Any suggestions to fix this issue?

    • Hi Pirla

      In my case I was trying to copy .pdf files to the data lake, which is why I had to choose Binary as the dataset format. If your files are Excel files you need to choose Excel as the dataset format (reference: point 6.5 in the blog, choose Excel instead of Binary).

      This should solve your problem.

      • Hey,

        Is there any solution to this? I have multiple .xlsx and .csv files, but I’m not able to copy them to the data lake in the correct format.

        • Hi Miuqei

          Your problem is that you cannot define the correct format for the dataset because you have mixed file types?

          If so, you would probably need to handle it in two separate pipelines (one for .xlsx and one for .csv) and insert a filter after the GetSPOFolderMetadata activity to only include .xlsx or .csv files (check if the filename contains the relevant suffix).

  9. Hey there ,

    I have an issue with the source.

    I cannot find a way to add “Data set properties”

    I have the exact same options without the Dataset properties.

    • Hi Shlomi

      Dataset properties shows you the available parameters you can set for the dataset, so in your case there are no parameters. In section 6.5 in the blog, under the last bullet point, you can find the instruction to create one. After you have created the parameter in the dataset it should be visible in the dataset properties.

  10. Thanks for the detailed post; with this there are no more failures in debug. However, it is not reading any files. The item count is always 0. I tried with a binary dataset and a pdf file, and with an excel file and an excel connection.
    What could be the possible issue?

    • Hi Gaurav

      Can you check your output of the GetSPOFolderMetadata activity? It looks like you are either not getting any items in the metadata (e.g. wrong URL or wrong permissions) or not referencing it correctly in the copy activity.

      • Thanks, I can get the files now but only from the same folder, not from subfolders. Is it supposed to read from subfolders too?

        • No worries, I saw your response for the next question. I can take it from here.

  11. Hey there ,

    Is there a way to do this process recursively and move all files in folders and subfolders?

    Thanks

  12. Hi,
    I followed the steps as explained in the blog. I am able to run everything before the ForEach loop successfully and it lists out the file details in the output. But after adding the ForEach loop / copy data activity, I am getting the error below:
    Failed to run {my pipeline_name} (Pipeline)
    error details:
    {
    “code”: “BadRequest”,
    “message”: null,
    “target”: “pipeline//runid/d2ba0e99-3702-4588-87f1-a607e262bdf8”,
    “details”: null,
    “error”: null
    }

    • Hi GV

      The Microsoft error responses are sometimes not very informative. Unfortunately I can’t get any information from the error.

      From my experience and your description, I assume that the URL that is constructed in the copy activity is not correct. Could you check the URL that is used by the copy activity (you should be able to see it hardcoded without parameters in the output of the activity)?

  13. Hey Dimitri! Great post, I love it. Every step is very well explained.

    I would be very grateful if you could please give me your thoughts on this:

    I keep getting this error in the GetSPOFolderMetadata-step

    {“error_description”:”Exception of type ‘Microsoft.IdentityModel.Tokens.AudienceUriValidationFailedException’ was thrown.”}

    I wonder what might be causing this issue. I am not sure I posted the [relative-path-to-folder] correctly (I posted ‘/sites/group’)

    Br.

    • Hi Jesse

      The relative-path-to-folder is wrong (see point 5.3 in the blog). Inside the brackets it should be ‘/sites/[sharepoint-site]/[relative-path-to-folder]’. You can check the folder path in SPO by right-clicking on the folder, selecting Details and selecting More details at the bottom of the window on the right. To get the [relative-path-to-folder], select only the part after “…/sites/[sharepoint-site]/” in the folder path from SPO (example: if the folder path from SPO is “…/sites/mysposite/myfolder/mysubfolder”, your relative-path-to-folder would be “myfolder/mysubfolder”).

  14. How will this work in Data Factory in Fabric? Do we still have to create an app in the Azure portal, especially if we’re using Fabric in a Power BI capacity where there is not much Azure integration?

    • Hi Renjith

      Microsoft will likely make changes to Data Factory to better fit in with the Fabric toolbox, but you can still create data pipelines in the same way as before Microsoft Fabric was introduced. For this solution to work you need to create an app in the Azure portal for the authentication workflow to SharePoint. I don’t quite understand what Power BI has to do with the solution described in my blog post. After copying the data to Azure Blob Storage you can connect to it with Power BI. With the introduction of OneLake this might even be an easier process than before.

  15. There are several ways to copy files from SharePoint to Azure Blob Storage, depending on your requirements and the tools you have available. Here are some common methods:

    – Use third-party tools like GS Richcopy 360, ShareGate and GoodSync: this is the easy and direct way.
    – Use Azure Logic Apps: it is a cloud-based service that allows you to create workflows that integrate with various systems, including SharePoint and Azure Blob Storage. You can create a Logic App that retrieves files from SharePoint using the SharePoint connector, and then uses the Azure Blob Storage connector to upload the files to Azure Blob Storage. This approach requires no coding and is relatively easy to set up.

    – Use Azure Data Factory: it is a cloud-based data integration service that allows you to create data pipelines that move and transform data between various sources and destinations, including SharePoint and Azure Blob Storage. You can create a pipeline that retrieves files from SharePoint using the SharePoint connector and then uses the Azure Blob Storage connector to upload the files to Azure Blob Storage. This approach requires some configuration and coding but provides more flexibility and scalability than Logic Apps.

    – Use a custom solution: If you have specific requirements or constraints that prevent you from using Logic Apps or Azure Data Factory, you can create a custom solution using the SharePoint REST API and the Azure Blob Storage SDK. This approach requires more coding and testing but provides complete control over the copying process and can be tailored to your specific needs.

    Overall, the best approach depends on your specific requirements and the tools and skills available on your team.

  16. Instead of a single folder (referred to as “blog” here), I have multiple folders, e.g. Project1, Project2 and so on. In each of these folders there is a folder “Solution”, and in that lies the pdf file or files that I want to copy to blob storage. So I have to iterate over each project folder, then the Solution folder and then the file. Can you please suggest the necessary changes that I have to make in the pipeline? Thanks in advance

      • Hi Dimitri, thanks a lot for writing these informative blogs and for your reply.
        One more thing I am trying to add is the original file name. While transferring the files, the original file name and type get changed when they are copied to the blob storage. In my case the pdf files change to binary and some random hash name is given in the blob storage. I am able to store those pdf file names in the metadata of each blob by defining Metadata in the sink of the Copy data from SPO activity via the ADF portal interface. But somehow, when I use the create_update pipeline Python function with the JSON of the same pipeline (which also has the metadata defined), that metadata does not get defined in the pipeline. So I have two queries:
        1. Is there a way to preserve the file name and file type during the copy activity?
        2. How to overcome the problem in the pipelines.create_or_update Python function: the metadata fields don’t get defined when we use the function, even after defining the metadata fields in the JSON.
        It would be very helpful if you could provide some guidance, because there is not much content on the internet related to these specific services.

        • Hi Dimitri,
          In reference to the above comment, I found a workaround for query no. 2. I was using the azure-mgmt-datafactory package for the create_update pipeline function, but when I used the REST API for ADF via the Python requests library instead, it worked (i.e. the metadata fields now get defined in the sink of the Copy data from SPO activity).
          But I am still not able to find a solution for query 1. If you can shed some light on it, that would be great

          • Hi Virat

            Unfortunately I have never used the Python libraries you mentioned. Regarding the preservation of the file name: the activity that reads out the metadata should return a value with the name of each file. You can reference this value in your copy activity to define the name of the blob that is created.

  17. Dimitri, thank you so much for this well explained article. I managed to execute it in one go! I think the last part that is missing is on the sink side: how do we save the read files under their original file names?

    • Within the metadata obtained from the SharePoint Online (SPO) folder (step 5 in the blog), you’ll discover a wealth of information for each file in the folder (check the output of GetSPOFolderMetadata to see the structure and available values). To pinpoint the file’s original name, look for a field labeled “Filename” or something similar to it. You can reference this value inside another activity in the ForEach loop using the expression @{item().Filename}. You can use this value in the path for the sink to save the file under its original name.

  18. Hi Dimitry, thank you for this great method.
    The first two blocks are working just fine, GetToken & GetSPOfolder. When I add the ForEach I get the error:
    {
    “code”: “BadRequest”,
    “message”: null,
    “target”: “pipeline//runid/b4f9e46a-a18d-4a03-9f66-9cb3ad3adb4b”,
    “details”: null,
    “error”: null
    }

    • Hi Adir

      The error message provided doesn’t give a lot of information. Can you try to isolate the problem, for example by adding a different action such as a Wait activity inside the ForEach loop and checking if that works? Afterwards, build up the activity inside the ForEach loop step by step and test it periodically.

    • Check the copy data activity, Source, Additional headers. Make sure it is correct; it might be the cause.
