
Logic Apps


What are Logic Apps?
Integration and orchestration have always been a challenge for most enterprise systems, and the arrival of the cloud added another layer of complexity to the equation. In the on-premises world (within a customer-controlled environment), something like BizTalk would be the choice for integrating and setting up communication between internal and external systems. In the cloud there was a gap for this kind of solution, so Microsoft took BizTalk's concept of workflows, designed via any web browser, developed a scalable integration solution, and called it Logic Apps.

Microsoft has invested a lot of time and money to bring the technology to Azure and to move the cloud towards a serverless architecture. The idea of creating, developing and deploying without worrying about the infrastructure is an interesting concept, and Logic Apps are a good example of this approach.

The serverless architecture approach has some key advantages:

– Reduced Development Time
– Reduced Time to Market
– Per action billing 

Main Components
To create a Logic App, you require the following components:
1. Connector
2. Condition

A connector provides the ability to communicate with other systems or services. A connector is an API wrapper that exposes two kinds of operations used to connect to the system or service:
Actions: These are selected and driven by the user. For example, sending an email when a request has been received.
Triggers: A trigger is a notification that an event has occurred. For example, when a new file has been uploaded to a specific file storage account. There are two types of triggers: polling triggers and push triggers.
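As an illustration of how a trigger appears in the underlying workflow JSON, the following sketch (a recurrence trigger; the interval is an arbitrary value of my choosing) fires the workflow every 15 minutes:

```json
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Minute",
            "interval": 15
        }
    }
}
```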

There are two different types of connectors: built-in or managed.
Built-In: These are actions to call an API endpoint, provide an HTTP endpoint, call other Logic Apps or Azure Functions, or schedule a recurring task.
Managed: These managed connectors provide access to APIs for various services. They fall into the categories of Standard Connectors, On-Premises Connectors, Integration Account Connectors and Enterprise Connectors.

Logic Apps provide the facility to add conditions after each action. This operation is optional. To add a condition, click the plus after an action and select “Add a condition”. You will be presented with three boxes: the first is the field to be evaluated, the second is the evaluation operation, and the final box is the comparison value.
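Under the hood, those three boxes map onto a condition expression in the workflow definition. A minimal sketch (the action and field names here are hypothetical) might look like:

```json
"Check_order_total": {
    "type": "If",
    "expression": {
        "and": [
            { "greater": [ "@body('Get_order')?['total']", 100 ] }
        ]
    },
    "actions": {
        "Flag_for_approval": {
            "type": "Compose",
            "inputs": "Order needs approval",
            "runAfter": {}
        }
    },
    "else": { "actions": {} },
    "runAfter": { "Get_order": [ "Succeeded" ] }
}
```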

Logic app actions are grouped into API actions, Workflow actions and Condition actions.

API Actions

Workflow Actions

Condition Actions

Each action contains the following basic options:

Settings brings up the following options, which alter the behaviour of the action; in most cases these options are rarely changed.

Configure Run After
When I first started to use Logic Apps, I applied if statements and switch statements based on the result of an action, until I found this setting. The action only runs when the condition is true; in the example below, this action will execute only if the previous action was successful.
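In the workflow JSON this setting appears as a "runAfter" property on the action. The sketch below (action names are hypothetical) runs a follow-up action only when the previous one failed or timed out:

```json
"Log_failure": {
    "type": "Compose",
    "inputs": "Process_order did not succeed",
    "runAfter": {
        "Process_order": [ "Failed", "TimedOut" ]
    }
}
```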

Peek Code
This option is very useful if you want to, for example, verify that the correct variables/objects have been selected and applied to the action for it to execute successfully. I have used this option to copy and reapply a connection string to a new action, to provide consistency and speed up the development process.

Bringing it all Together…
The following example is a completed logic app within the browser, but what is happening under the hood?

Behind Logic Apps, a JSON workflow is generated, composed in the Logic App Workflow Definition Language. The image below provides the basic structure of the definition language:

Schema: The schema defines the version of the definition language.
ContentVersion: This is the definition version.
Parameters: Within your workflow, you can specify input parameters with a maximum of 50 per workflow. These are useful when passing data from one action to another.
Trigger: This is where the workflow triggers are defined and configured.
Actions: The actions within the workflow are defined within this section of the schema.
Outputs: The outputs of each action are defined within this section of the schema.
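Putting those sections together, an empty workflow definition has roughly the following shape (the schema URL shown is the 2016-06-01 version; check the current documentation for later versions):

```json
{
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "triggers": {},
    "actions": {},
    "outputs": {}
}
```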

In addition to the basic schema, you’ll have a variable called “$connections”.

The connection values can contain multiple connections, for various resources across the Azure platform. The values are added when you add a new connection via the UI. This is where it pays to build your initial workflow within the browser and not within Visual Studio. However, this presents another problem around tokenising which I will cover in further sections of this blog.
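For illustration, a $connections value holding a single blob storage connection might look like the sketch below (the subscription, resource group and location placeholders are mine, not generated values):

```json
"$connections": {
    "value": {
        "azureblob": {
            "connectionId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/connections/azureblob",
            "connectionName": "azureblob",
            "id": "/subscriptions/{subscription-id}/providers/Microsoft.Web/locations/{location}/managedApis/azureblob"
        }
    }
}
```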

Logic Apps have the capability to use Key Vault to secure connection strings and other key data. I would certainly recommend implementing Key Vault early in the development process, as it is best practice. Please refer to the Microsoft documentation for further guidance.
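One common pattern (a sketch; the vault name and secret name are hypothetical) is to resolve secrets from Key Vault in the ARM deployment parameter file, so the secret value never appears in source control:

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "cosmosDBAccessKey": {
            "reference": {
                "keyVault": {
                    "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.KeyVault/vaults/{vault-name}"
                },
                "secretName": "cosmos-access-key"
            }
        }
    }
}
```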

Who Should Develop Logic Apps?
Logic Apps are a simple, easy-to-use tool for designing and creating integrations for solutions of any size, so the question is: who should develop Logic Apps? I would say anyone with some knowledge of how applications are architected and how they talk to each other. How complex the workflow can be depends on how much experience the designer has. In my own experience, workflow development has not been difficult; the blockers I have come across relate to the lack of integration with other systems, or to custom functionality required to complete the workflow.

For example, whilst working on a solution, I was unsure how we could integrate authentication into the flow. The problem I faced was authenticating against many AAD instances and then completing the flow. How can you keep the data secure and reuse the tokens? I developed an Azure Function to keep the configuration separate from the calling action and to give me greater control over the implementation of the authentication mechanism. I was also able to reuse the same function in other child workflows.
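Calling an Azure Function from a workflow action looks roughly like the sketch below (the function name, resource ID placeholders and body fields are all hypothetical):

```json
"Get_auth_token": {
    "type": "Function",
    "inputs": {
        "function": {
            "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/sites/{function-app}/functions/GetAuthToken"
        },
        "body": {
            "tenantId": "@triggerBody()?['tenantId']"
        }
    },
    "runAfter": {}
}
```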

Could Logic Apps be Part of a Domain Solution?
I came across the term Domain-Driven Design (DDD) over a decade ago, but never got the opportunity to deep dive and develop a solution with the methodology, so my experience with DDD is very limited. From what I have experienced of using DDD with cloud technologies, I would say (in answer to the question “could Logic Apps be part of a domain solution?”) yes, it’s certainly possible. Especially around handling Azure Service Bus messages from one domain to another, or handling interactions from an external source, whether it’s via a web API or WCF. The ability to create custom functions can overcome any specific requirements to connect systems together.

So far so Good?
We have covered how to develop Logic Apps, and the benefits, but what are the downsides in their current form?

Provisioning and Deployments
As I highlighted in the Workflow Definition Language section, connections are generated and hardcoded into the workflow, so what do you do if you want to deploy the workflow into multiple environments? This is where provisioning a workflow can be a painful process. During the development of the last project I worked on, the team discovered an issue with tokenising connection strings, specifically, how to extract the correct API connection string for a specific resource in an environment. Logic App API connections are added to the workflow and the resource group in the format <Connection Type><Number>, for example ServiceBus1, ServiceBus2. This is not particularly useful when you wish to provision multiple connections to similar resources.

The best way to add a tokenised connection string is to break it down into tokenised parameters and variables. An example of the tokenised parameters and variables is shown below:

"parameters": {
    "storageAccountName": {
        "type": "string"
    },
    "cosmosDBAccountName": {
        "type": "string"
    },
    "cosmosDBAccessKey": {
        "type": "securestring"
    },
    "storageAccountAccessKey": {
        "type": "securestring"
    }
},
"variables": {
    "DocumentDBId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/', resourceGroup().location, '/managedApis/documentdb')]",
    "StorageAccountId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/', resourceGroup().location, '/managedApis/azureblob')]"
}

Once the parameters/variables values are set, they can be implemented in the resource section of the logic app, as shown below:


{
    "type": "Microsoft.Web/connections",
    "name": "documentdb",
    "apiVersion": "2016-06-01",
    "location": "[resourceGroup().location]",
    "properties": {
        "displayName": "documentdb",
        "api": {
            "id": "[variables('DocumentDBId')]"
        },
        "parameterValues": {
            "databaseAccount": "[parameters('cosmosDBAccountName')]",
            "accessKey": "[parameters('cosmosDBAccessKey')]"
        },
        "nonSecretParameterValues": {
            "databaseAccount": "[parameters('cosmosDBAccountName')]"
        }
    },
    "dependsOn": []
}

Although a Logic App is ultimately a JSON script that can be developed within Visual Studio, the experience is not the greatest and you’re detached from the resources you have within the Azure platform. I have found that developing a Logic App from within a browser against Azure resources is far better, for the following reasons:

1. Immediate feedback of process against real resources
2. Isolate and resolve errors quickly

There is one issue with this approach: tokenisation. One small change can become a lengthy process, which goes against the ethos of fast development, deployment and scalable solutions. I go into more detail on this in the next section.


In the first iteration of a spike, we started with a large workflow encompassing all the necessary actions. Over time this became too difficult to manage, especially in the browser. If you’re zoomed in on a specific action within the browser and make a change, the browser will refresh and jump to another part of the workflow that you’re not working on. This became very frustrating and unworkable. To alleviate this issue, I broke the Logic App up into smaller ones. This made maintenance easier and gave each Logic App a single responsibility. The tokenisation of smaller workflows became an easier task to manage, and the team could resolve issues quickly. The workflow and deployment scripts can be added to source control, which provides control around the development and release of new features.
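Breaking a large workflow into smaller ones means the parent calls each child as a nested workflow action. A sketch of such a call (the action name, resource ID placeholders and body are all mine) looks like:

```json
"Call_order_processing": {
    "type": "Workflow",
    "inputs": {
        "host": {
            "triggerName": "manual",
            "workflow": {
                "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Logic/workflows/{child-workflow-name}"
            }
        },
        "body": "@triggerBody()"
    },
    "runAfter": {}
}
```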

Final Thoughts
I think Logic Apps are a good piece of tech for orchestration on the Azure platform; however, the technology is still evolving, and the process of development and deployment is not as straightforward as the documentation would lead you to believe. Once the deployment of a Logic App is more in line with how other resources are provisioned and deployed, it could become a useful tool within any user’s arsenal for developing and integrating solutions. Knowing the limitations is half the battle; once you are aware of them, you can allow for any additional time required for deployments. With no specific hardware to configure, the cost is based on workload only. This is where Microsoft plan to take their serverless architecture, on a cost-per-action plan.

The integration with Azure Functions is a nice touch for any custom work that needs to be carried out. I can only see this area getting better and better. Watch this space.
