Azure Devops environment variable gets cross contaminated when run in parallel - Stack Overflow


We have a pipeline with a number of environments to deploy onto: dev, qa, iat, preprod. We do a build, some checks, and a deploy to dev. Once that part has completed, the next three environments are gated and we can run them all in parallel or one at a time.

We currently have to run this pipeline one at a time for qa, iat, and vNextIat because an environment variable appears to be contaminated by one of the other environments.

The error refers to an attempt to retrieve the state from the wrong storage account:

The image shows the IAT deployment failing as it tries to get the state file (I think) from the vNextIat environment. The pksvnextiatsa is the storage account from within the vNextIat environment. It should be targeting pksiatsa (it's a simple naming scheme).

  • Why is this happening?
  • Why is it OK when run on its own?
  • How can I fix this?

To be clear, because of the gates they aren't kicked off at exactly the same time, but quickly one after the other. It always fails when more than one of them is running, and it always seems to be attempting to get the next environment's resources. Not always, but mostly.


asked Jan 30 at 14:39 by onesixtyfourth; edited Jan 30 at 16:30 by marc_s
  • Can you provide some details about your YAML pipeline and the type of agent pool that you're using? Cloud-hosted agents run jobs on separate machines, so there's no possibility of one job impacting the other. – bryanbcook Commented Jan 30 at 16:22
  • The error code 404 indicates the resource (key or the storage account itself) doesn't exist when the task is executing. It can happen when you are referencing the incorrect resource (wrong variable value) or the resource doesn't exist. If the resource needs to be prepared in a specific stage/job, you should set up the correct stage and job dependencies. From the screenshot, it appears that IAT does NOT depend on vNextIat, which caused the error. It's recommended to share the YAML file for further checking. – wade zhou - MSFT Commented Jan 31 at 8:18

1 Answer


We currently have to run this pipeline one at a time for qa, iat & vNextIat because the environment variable appears to be contaminated from one of the other environments.

You haven't shown any details about these variables, but my first impression is that this is a scope issue.

In a YAML pipeline (and its corresponding templates), you can set a variable at various scopes:

  • Root level (pipeline): available to all jobs in the pipeline.
  • Stage level: available only to a specific stage.
  • Job level: available only to a specific job.
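The three scopes above can be sketched in a single pipeline; the variable names and values here are illustrative, not taken from the question:

```yaml
# Root (pipeline) level: visible to every stage and job
variables:
  - name: buildConfiguration
    value: 'Release'

stages:
  - stage: deploy_iat
    # Stage level: visible only to jobs within this stage
    variables:
      - name: storageAccount
        value: 'pksiatsa'
    jobs:
      - job: deploy
        # Job level: visible only to this job
        variables:
          - name: stateKey
            value: 'iat.tfstate'
        steps:
          - script: echo "$(buildConfiguration) $(storageAccount) $(stateKey)"
```

A job in another stage would see `buildConfiguration`, but not `storageAccount` or `stateKey`, which is exactly the isolation you want between environments.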

You should reduce the scope of your environment-related variables as much as possible.

I suggest organizing the variables by component/environment and then setting them at the job level, though in some cases the stage level might be OK as well. This should prevent the "contamination" of variables from other environments.

Example - referencing a variables template at the job level:

parameters:
  # other parameters here

  - name: environment
    type: string
    displayName: 'Environment'

jobs:
  - job: deploy_${{ parameters.environment }}
    displayName: 'Deploy to ${{ parameters.environment }}'
    variables:
      # Consumers of this job are expected to provide a variables template 
      # using the following folder structure:
      # /pipelines/variables/{environment}-variables.yaml
      - template: /pipelines/variables/${{ parameters.environment }}-variables.yaml@self
    steps:
      # ...
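For completeness, a per-environment variables template referenced by the job above might look like this; the variable names and values are hypothetical and should match your own naming scheme:

```yaml
# /pipelines/variables/iat-variables.yaml
variables:
  - name: storageAccount
    value: 'pksiatsa'
  - name: resourceGroup
    value: 'pks-iat-rg'
```

Because the template is selected by `${{ parameters.environment }}` at compile time, each job gets only its own environment's values.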