pipeline.json
Purpose
This configuration file defines settings that affect the pipeline as a whole rather than a specific account/environment.
Example Configuration
{
  "type": "ec2",
  "owner_email": "",
  "documentation": "",
  "notifications": {
    "email": "",
    "slack": ""
  },
  "pipeline_notifications": [],
  "promote_restrict": "none",
  "base": "tomcat8",
  "env": ["stage", "prod"],
  "primary_region": "us-east-1",
  "image": {
    "bake_instance_type": "t2.small",
    "root_volume_size": 6,
    "builder": "ebs"
  },
  "lambda": {
    "app_description": "default description",
    "runtime": "java8",
    "handler": "main",
    "vpc_enabled": false,
    "package_type": "zip"
  },
  "pipeline_files": [],
  "chaos_monkey": {
    "enabled": false,
    "mean_time": 5,
    "minimum_time": 3,
    "exceptions": []
  },
  "instance_links": {},
  "permissions": {
    "read_roles": [],
    "write_roles": []
  },
  "traffic_guards": {
    "accounts": []
  },
  "cloudfunction": {
    "project_name": "my-project*",
    "entry_point": "hello_get",
    "runtime": "python37"
  }
}
Configuration Details
type
Specifies what type of pipeline to use for the application.
Type: string
Default: "ec2"
Options:
"ec2" - Sets up an AWS EC2 pipeline and infrastructure
"datapipeline" - Sets up an AWS Data Pipeline infrastructure
"lambda" - Sets up an AWS Lambda pipeline and infrastructure
"stepfunction" - Sets up an AWS Step Function pipeline and infrastructure
"cloudfunction" - Sets up a GCP Cloud Function pipeline and infrastructure, and deploys code
"s3" - Sets up an AWS S3 pipeline and infrastructure
"rolling" - Sets up a "rolling" style pipeline. Requires custom templates.
"manual" - Sets up pipelines from raw Spinnaker Pipeline JSON; see Configuration Files Advanced Usages for more info.
owner_email
The application owner's email address. This is not used directly in the pipeline but can be consumed by other tools.
Type: string
Default: null
documentation
Link to the application's documentation. This is not used directly in the pipeline but can be consumed by other tools.
Type: string
Default: null
notifications Block
Warning: notifications is deprecated; see pipeline_notifications instead.
Where to send pipeline failure notifications.
pipeline_notifications Array
Where to send pipeline notifications. Notifications can be sent on several events, including pipelines starting, completing, and failing. Any notification option supported in Spinnaker can be defined, including Slack, Microsoft Teams, Bearychat, PubSub, Google Chat, and Email.
pipeline_notifications
Array of notification definitions.
Type: array
Default: []
Example Email:
[
  {
    "level": "pipeline",
    "type": "email",
    "address": "jane.doe@who.com",
    "cc": "jon.doe@optional.com",
    "when": [
      "pipeline.failed",
      "pipeline.complete",
      "pipeline.starting"
    ]
  }
]
Example Google Cloud Pub/Sub:
[
  {
    "level": "pipeline",
    "type": "pubsub",
    "publisherName": "my-publisher",
    "when": [
      "pipeline.starting",
      "pipeline.complete",
      "pipeline.failed"
    ]
  }
]
Example Google Chat:
[
  {
    "level": "pipeline",
    "type": "googlechat",
    "address": "https://chat.google.com/v1/spaces/my-google-chat-space",
    "when": [
      "pipeline.starting",
      "pipeline.complete",
      "pipeline.failed"
    ]
  }
]
Example custom messages:
Some notification types support custom messages, which can be defined using the message field:
[
  {
    /* First define your notification, e.g. slack or teams */
    /* ... */
    "message": {
      "pipeline.complete": { "text": "A pipeline finished, wow!" },
      "pipeline.failed": { "text": "A pipeline has failed :(" },
      "pipeline.starting": { "text": "A pipeline started!" }
    }
  }
]
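A Slack notification follows the same shape as the email example above; a sketch, assuming Spinnaker's Slack integration is configured (the channel name here is hypothetical, and the exact fields should be checked against the Spinnaker notification documentation):

```json
[
  {
    "level": "pipeline",
    "type": "slack",
    "address": "my-team-channel",
    "when": [
      "pipeline.starting",
      "pipeline.complete",
      "pipeline.failed"
    ]
  }
]
```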
promote_restrict
Restriction setting for promotions to prod* accounts.
Type: string
Default: "none"
Options:
"masters-only" - Only masters/owners of a repository can approve deployments
"members-only" - Any member of a repository can approve deployments
"none" - No restrictions
base
The base AMI to use for baking the application. This can be an alias defined in ami-lookup.json or an AMI ID.
Type: string
Default: "tomcat8"
env
List of accounts that the application will be deployed to. Order matters, as it defines the order of the pipeline. The accounts should be named the same as they are in Spinnaker Clouddriver.
Type: array
Default: ["stage", "prod"]
image Block
Holds settings for the baked image.
bake_instance_type
Defines the instance type for Rosco (the bake step) to use. A larger instance type can help with large or complex bakes. Refer to: https://aws.amazon.com/ec2/instance-types/
Type: string
Default: "t2.small"
root_volume_size
Defines the root volume size of the resulting AMI in GB.
Type: number
Units: Gigabyte
Default: 6
lambda Block
Holds settings related to Lambda deployments.
runtime
The runtime environment for the Lambda function. Since the value is passed directly to the Lambda API, new runtimes are automatically supported as they are released.
Type: string
Default: "java8"
Options:
"java8"
"nodejs"
"nodejs4.3"
"nodejs6.10"
"nodejs8.10"
"python2.7"
"python3.6"
"dotnetcore1.0"
"dotnetcore2.0"
"nodejs4.3-edge"
"go1.x"
services Block
Access to different cloud services will be added to an inline policy for an IAM Role. Keys must match a corresponding template in src/foremast/templates/infrastructure/iam/key.json.j2.
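For instance, a services block enabling several of the options documented below might look like the following (a sketch; the bucket name is illustrative):

```json
"services": {
  "cloudwatchlogs": true,
  "parameterstore": true,
  "s3": ["other_bucket"]
}
```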
cloudwatchlogs
Add CloudWatch Logs access. Lambda Functions will automatically have this added.
Type: boolean
Default: false
parameterstore
Add SSM Parameter Store PutParameter and GetParametersByPath access based on the app name.
Type: boolean
Default: false
rds-db
Add RDS-DB Connect access to RDS DB resources. Expects the RDS DB user to match the Spinnaker appname, or the use of Secrets Manager credentials for the DB connection. (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html)
Type: array
Default: []
Example: ["db-12ABC34DEFG5HIJ6KLMNOP78QR", "*"]
rds-data
Add RDS Data API access. By using the Data API for Aurora Serverless, you can work with a web-services interface to your Aurora Serverless DB cluster. The Data API doesn't require a persistent connection to the DB cluster. Instead, it provides a secure HTTP endpoint and integration with AWS SDKs. You can use the endpoint to run SQL statements without managing connections.
Requires credentials stored in AWS Secrets Manager.
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html)
Type: boolean
Default: false
redshift-data
Add Redshift Data API access. You can access your Amazon Redshift database using the built-in Amazon Redshift Data API. Using this API, you can access Amazon Redshift data from web services-based applications, including AWS Lambda, AWS AppSync, Amazon SageMaker notebooks, and AWS Cloud9.
The Data API doesn't require a persistent connection to the cluster. Instead, it provides a secure HTTP endpoint and integration with AWS SDKs. You can use the endpoint to run SQL statements without managing connections. Calls to the Data API are asynchronous.
The Data API uses credentials stored in AWS Secrets Manager.
(https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html)
Type: boolean
Default: false
s3
Add S3 access to the provided bucket. You may need to override the default templates; see templates_path. To grant access to other S3 buckets, provide a list of bucket names.
Type: boolean XOR array
Default: false
Example boolean: { "s3": true }
Example array: { "s3": ["other_bucket"] }
gcp_roles
Adds GCP roles to the given projects.
Wildcards (*) are supported in the project_name field. For example, project-one* may match project-one-prod or project-one-stage, depending on which environment is being deployed to.
Type: array of objects
Default: None
Example:
"gcp_roles": [
  {
    "project_name": "project-one*",
    "roles": [
      "roles/secretmanager.secretAccessor",
      "roles/pubsub.subscriber"
    ]
  },
  {
    "project_name": "project-two*",
    "roles": [
      "roles/storage.objectViewer"
    ]
  }
]
chaos_monkey Block
Key that configures Chaos Monkey.
mean_time
Mean time between terminations. If mean_time is n, the probability of a termination on each day is 1/n. For example, a mean_time of 5 gives a 20% chance of a termination on any given day.
Type: number
Units: Days
Default: 5
minimum_time
Minimum time between terminations.
Type: number
Units: Days
Default: 3
exceptions
Accounts that Chaos Monkey will not affect.
Type: array
Default: []
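Putting these keys together, a block that enables Chaos Monkey everywhere except one account might look like this (the values and exception name are illustrative):

```json
"chaos_monkey": {
  "enabled": true,
  "mean_time": 5,
  "minimum_time": 3,
  "exceptions": ["stage"]
}
```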
instance_links Block
Adds custom instance links to Spinnaker. This takes a dictionary where the key is the name of the link and the value is the destination.
Example:
{
  "instance_links": {
    "health": ":8080/health",
    "documentation": "http://example.com"
  }
}
permissions Block
Key that configures permissions for an application (leverages Fiat roles/groups). For more info, visit: https://www.spinnaker.io/setup/security/authorization/
read_roles
Roles that should have read permission to this application in Spinnaker.
Type: array
Default: []
write_roles
Roles that should have write permission to this application in Spinnaker.
Type: array
Default: []
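As a sketch, a permissions block granting read access to one group and write access to another might look like this (the role names are hypothetical and must match roles known to Fiat):

```json
"permissions": {
  "read_roles": ["qa-team"],
  "write_roles": ["dev-team"]
}
```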
traffic_guards Block
Key that configures Traffic Guards for an application.
accounts
Accounts that Traffic Guards will be enabled for. Traffic Guards allow you to specify critical clusters that should always have active instances. If a user or process tries to delete, disable, or resize the server group, Spinnaker will verify that the operation will not leave the cluster with no active instances, and will fail the operation if it would.
Type: array
Default: []
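For example, to guard clusters in the production account only (assuming the account is named "prod" in Clouddriver, as in the example configuration at the top of this page):

```json
"traffic_guards": {
  "accounts": ["prod"]
}
```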
cloudfunction Block
Holds settings related to GCP Cloud Function deployments.
project_name
The project name. Wildcards are supported to ensure the correct project is used in each GCP environment. For example, my-project* may match my-project-prod or my-project-stage, depending on the environment being deployed to.
Type: string
Default: None
Required: Yes
entry_point
The entry point of your code. Typically this is a function or method name.
Type: string
Default: None
Required: Yes
Example: my_function
runtime
The runtime your function is using. See the GCP docs for a full list of options.
Type: string
Default: None
Required: Yes
Example: python37