I've set up an Azure Function with a CosmosDBTrigger in Python, targeting a container I've called "Tasks". The idea is that every time a task is added to the container, it triggers an Azure Function invocation, which then processes that task based on the supplied metadata.
Currently the function works in a basic sense: it picks up the changes and processes them. However, it is not scaling out and processing the changes to the container in parallel; as tasks are added they are processed one by one, not in parallel across different invocations as intended. Is there a way to ensure tasks are processed in parallel?
For added background I am on the Flex Consumption plan.
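For reference, the handler is essentially a sequential loop over the batch of changed documents. This is a simplified stand-in (the names `process_task` and `handle_changes` are illustrative; in the real app the list comes from the CosmosDBTrigger binding):

```python
def process_task(task):
    # Hypothetical per-task work, keyed off the task's metadata.
    return f"done-{task['id']}"

def handle_changes(tasks):
    # Each invocation walks its batch one document at a time,
    # which is where the one-by-one behaviour shows up.
    results = []
    for task in tasks:
        results.append(process_task(task))
    return results

handle_changes([{"id": 1}, {"id": 2}])
```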
This question has been asked before; a summary of a previous answer:
In a nutshell, the degree of parallelism is defined by the number of physical partitions in the monitored container. The Change Feed needs to guarantee the order of events within each logical partition; if all changes were delivered in parallel, that guarantee would be broken.
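To illustrate the ordering constraint, here is a minimal pure-Python sketch (not the actual change feed machinery) of how changes can be fanned out across partitions while each partition's changes stay in feed order:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Simulated change feed events: (partition_key, sequence_number).
changes = [("pk1", 1), ("pk2", 1), ("pk1", 2), ("pk3", 1), ("pk2", 2)]

# Group changes by partition: order must be preserved within a partition.
by_partition = defaultdict(list)
for pk, seq in changes:
    by_partition[pk].append(seq)

processed = {}

def process_partition(pk):
    # One partition's changes are consumed sequentially, in feed order.
    processed[pk] = list(by_partition[pk])

# Distinct partitions, however, can be consumed in parallel -
# which is why parallelism is capped at the partition count.
with ThreadPoolExecutor() as pool:
    list(pool.map(process_partition, by_partition))
```

So with a single physical partition in "Tasks", the trigger has only one lease to hand out, and only one worker drains it at a time.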