aggregatedList(project, filter=None, includeAllScopes=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
Retrieves an aggregated list of autoscalers.
aggregatedList_next(previous_request, previous_response)
Retrieves the next page of results.
close()
Close httplib2 connections.
delete(project, zone, autoscaler, requestId=None, x__xgafv=None)
Deletes the specified autoscaler.
get(project, zone, autoscaler, x__xgafv=None)
Returns the specified autoscaler resource. Gets a list of available autoscalers by making a list() request.
insert(project, zone, body=None, requestId=None, x__xgafv=None)
Creates an autoscaler in the specified project using the data included in the request.
list(project, zone, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
Retrieves a list of autoscalers contained within the specified zone.
list_next(previous_request, previous_response)
Retrieves the next page of results.
patch(project, zone, autoscaler=None, body=None, requestId=None, x__xgafv=None)
Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.
testIamPermissions(project, zone, resource, body=None, x__xgafv=None)
Returns permissions that a caller has on the specified resource.
update(project, zone, autoscaler=None, body=None, requestId=None, x__xgafv=None)
Updates an autoscaler in the specified project using the data included in the request.
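A minimal usage sketch, assuming google-api-python-client is installed and Application Default Credentials are configured; project and zone names used in the sketches below are placeholders. The discovery client is built for the alpha Compute Engine API, and its autoscalers() collection exposes the instance methods listed above.

```python
from googleapiclient import discovery

# Build a client for the Compute Engine alpha API. With no credentials argument,
# the library falls back to Application Default Credentials.
service = discovery.build('compute', 'alpha')

# The autoscalers() collection exposes the instance methods documented below.
autoscalers = service.autoscalers()
```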
aggregatedList(project, filter=None, includeAllScopes=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
Retrieves an aggregated list of autoscalers. Args: project: string, Project ID for this request. (required) filter: string, A filter expression that filters resources listed in the response. The expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:` operator can be used with string fields to match substrings. For non-string fields it is equivalent to the `=` operator. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = "Intel Skylake") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = "Intel Skylake") OR (cpuPlatform = "Intel Broadwell") AND (scheduling.automaticRestart = true) ``` includeAllScopes: boolean, Indicates whether every visible scope for each scope type (zone, region, global) should be included in the response. For new resource types added after this field, the flag has no effect as new resource types will always include every visible scope for each scope type in response. For resource types which predate this field, if this flag is omitted or false, only scopes of the scope types where the resource type is expected to be found will be included. maxResults: integer, The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`) orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported. pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results. returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": { # A list of AutoscalersScopedList resources. "a_key": { # [Output Only] Name of the scope containing this set of autoscalers. 
"autoscalers": [ # [Output Only] A list of autoscalers contained in this scope. { # Represents an Autoscaler resource. Google Compute Engine has two Autoscaler resources: * [Zonal](/compute/docs/reference/rest/alpha/autoscalers) * [Regional](/compute/docs/reference/rest/alpha/regionAutoscalers) Use autoscalers to automatically add or delete instances from a managed instance group according to your defined autoscaling policy. For more information, read Autoscaling Groups of Instances. For zonal managed instance groups resource, use the autoscaler resource. For regional managed instance groups, use the regionAutoscalers resource. "autoscalingPolicy": { # Cloud Autoscaler policy. # The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. "coolDownPeriodSec": 42, # The number of seconds that the autoscaler waits before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. "cpuUtilization": { # CPU utilization policy. # Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. "predictiveMethod": "A String", # Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. "utilizationTarget": 3.14, # The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. }, "customMetricUtilizations": [ # Configuration parameters of autoscaling based on a custom metric. { # Custom utilization metric policy. "filter": "A String", # A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. 
- The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a *per-group metric* for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value. "metric": "A String", # The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. "singleInstanceAssignment": 3.14, # If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. Autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is for example pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead. "utilizationTarget": 3.14, # The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances. "utilizationTargetType": "A String", # Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. }, ], "loadBalancingUtilization": { # Configuration parameters of autoscaling based on load balancing. # Configuration parameters of autoscaling based on load balancer. "utilizationTarget": 3.14, # Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8. }, "maxNumReplicas": 42, # The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than minimal number of replicas. "minNumReplicas": 42, # The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed. "mode": "A String", # Defines operating mode for this policy. 
"scaleDownControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledDownReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scaleInControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledInReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scalingSchedules": { # Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed. "a_key": { # Scaling based on user-defined schedule. The message describes a single scaling schedule. A scaling schedule changes the minimum number of VM instances an autoscaler can recommend, which can trigger scaling out. "description": "A String", # A description of a scaling schedule. "disabled": True or False, # A boolean value that specifies whether a scaling schedule can influence autoscaler recommendations. If set to true, then a scaling schedule has no effect. 
This field is optional, and its value is false by default. "durationSec": 42, # The duration of time intervals, in seconds, for which this scaling schedule is to run. The minimum allowed value is 300. This field is required. "minRequiredReplicas": 42, # The minimum number of VM instances that the autoscaler will recommend in time intervals starting according to schedule. This field is required. "schedule": "A String", # The start timestamps of time intervals when this scaling schedule is to provide a scaling signal. This field uses the extended cron format (with an optional year field). The expression can describe a single timestamp if the optional year is set, in which case the scaling schedule runs once. The schedule is interpreted with respect to time_zone. This field is required. Note: These timestamps only describe when autoscaler starts providing the scaling signal. The VMs need additional time to become serving. "timeZone": "A String", # The time zone to use when interpreting the schedule. The value of this field must be a time zone name from the tz database: http://en.wikipedia.org/wiki/Tz_database. This field is assigned a default value of “UTC” if left empty. }, }, }, "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format. "description": "A String", # An optional description of this resource. Provide this property when you create the resource. "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server. "kind": "compute#autoscaler", # [Output Only] Type of the resource. Always compute#autoscaler for autoscalers. "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. "recommendedSize": 42, # [Output Only] Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction. "region": "A String", # [Output Only] URL of the region where the instance group resides (for autoscalers living in regional scope). "scalingScheduleStatus": { # [Output Only] Status information of existing scaling schedules. "a_key": { "lastStartTime": "A String", # [Output Only] The last time the scaling schedule became active. Note: this is a timestamp when a schedule actually became active, not when it was planned to do so. The timestamp is in RFC3339 text format. "nextStartTime": "A String", # [Output Only] The next time the scaling schedule is to become active. Note: this is a timestamp when a schedule is planned to run, but the actual time might be slightly different. The timestamp is in RFC3339 text format. "state": "A String", # [Output Only] The current state of a scaling schedule. }, }, "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "status": "A String", # [Output Only] The status of the autoscaler configuration. 
Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future. "statusDetails": [ # [Output Only] Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter. { "message": "A String", # The status message. "type": "A String", # The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions. }, ], "target": "A String", # URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler. "zone": "A String", # [Output Only] URL of the zone where the instance group resides (for autoscalers living in zonal scope). }, ], "warning": { # [Output Only] Informational warning which replaces the list of autoscalers when the list is empty. "code": "A String", # [Output Only] A warning code, if applicable. 
For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, }, }, "kind": "compute#autoscalerAggregatedList", # [Output Only] Type of resource. Always compute#autoscalerAggregatedList for aggregated lists of autoscalers. "nextPageToken": "A String", # [Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results. "selfLink": "A String", # [Output Only] Server-defined URL for this resource. "unreachables": [ # [Output Only] Unreachable resources. end_interface: MixerListResponseWithEtagBuilder "A String", ], "warning": { # [Output Only] Informational warning message. "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, }
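A sketch of a single aggregatedList call, assuming the service object built in the earlier sketch; the project ID and filter expression are placeholders that follow the filter syntax described above.

```python
# 'service' is the discovery client from the earlier sketch.
request = service.autoscalers().aggregatedList(
    project='my-project',
    filter='name != example-autoscaler',
    maxResults=100,
)
response = request.execute()

# 'items' maps scope names such as 'zones/us-central1-a' to an
# AutoscalersScopedList; empty scopes carry a 'warning' instead of 'autoscalers'.
for scope, scoped_list in response.get('items', {}).items():
    for autoscaler in scoped_list.get('autoscalers', []):
        print(scope, autoscaler['name'], autoscaler.get('status'))
```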
aggregatedList_next(previous_request, previous_response)
Retrieves the next page of results. Args: previous_request: The request for the previous page. (required) previous_response: The response from the request for the previous page. (required) Returns: A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
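The usual paging idiom, assuming the same service object: keep requesting pages until aggregatedList_next returns None.

```python
# 'service' is the discovery client from the earlier sketch.
request = service.autoscalers().aggregatedList(project='my-project')
while request is not None:
    response = request.execute()
    for scoped_list in response.get('items', {}).values():
        for autoscaler in scoped_list.get('autoscalers', []):
            print(autoscaler['name'])
    # Returns None once the collection has no further pages.
    request = service.autoscalers().aggregatedList_next(
        previous_request=request,
        previous_response=response,
    )
```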
close()
Close httplib2 connections.
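Recent versions of the generated client can also be used as a context manager, which closes the underlying httplib2 connections on exit; a sketch, with the same placeholder names as above.

```python
from googleapiclient import discovery

# Used as a context manager, the client calls close() on exit.
with discovery.build('compute', 'alpha') as service:
    autoscalers = service.autoscalers().list(
        project='my-project', zone='us-central1-a').execute()

# Outside a with-block, call service.close() explicitly when finished.
```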
delete(project, zone, autoscaler, requestId=None, x__xgafv=None)
Deletes the specified autoscaler. Args: project: string, Project ID for this request. (required) zone: string, Name of the zone for this request. (required) autoscaler: string, Name of the autoscaler to delete. (required) requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000). x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/alpha/globalOperations) * [Regional](/compute/docs/reference/rest/alpha/regionOperations) * [Zonal](/compute/docs/reference/rest/alpha/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zonalOperations` resource. For more information, read Global, Regional, and Zonal Resources. "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise. "creationTimestamp": "A String", # [Deprecated] This field is deprecated. "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created. "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format. "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated. "errors": [ # [Output Only] The array of errors encountered while processing this operation. { "code": "A String", # [Output Only] The error type identifier for this error. "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional. "message": "A String", # [Output Only] An optional, human-readable error message. }, ], }, "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`. "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found. "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server. "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format. "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources. "metadata": { # [Output Only] Service-specific metadata attached to this operation. "a_key": "", # Properties of the object. Contains field @type with type URL. 
}, "name": "A String", # [Output Only] Name of the operation. "operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request. "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on. "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses. "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations. "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format. "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`. "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation. "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource. "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from. "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com`. "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated. { "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, ], "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations. }
get(project, zone, autoscaler, x__xgafv=None)
Returns the specified autoscaler resource. Gets a list of available autoscalers by making a list() request. Args: project: string, Project ID for this request. (required) zone: string, Name of the zone for this request. (required) autoscaler: string, Name of the autoscaler to return. (required) x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Represents an Autoscaler resource. Google Compute Engine has two Autoscaler resources: * [Zonal](/compute/docs/reference/rest/alpha/autoscalers) * [Regional](/compute/docs/reference/rest/alpha/regionAutoscalers) Use autoscalers to automatically add or delete instances from a managed instance group according to your defined autoscaling policy. For more information, read Autoscaling Groups of Instances. For zonal managed instance groups resource, use the autoscaler resource. For regional managed instance groups, use the regionAutoscalers resource. "autoscalingPolicy": { # Cloud Autoscaler policy. # The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. "coolDownPeriodSec": 42, # The number of seconds that the autoscaler waits before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. "cpuUtilization": { # CPU utilization policy. # Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. "predictiveMethod": "A String", # Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. "utilizationTarget": 3.14, # The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. }, "customMetricUtilizations": [ # Configuration parameters of autoscaling based on a custom metric. { # Custom utilization metric policy. "filter": "A String", # A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. 
For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a *per-group metric* for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value. "metric": "A String", # The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. "singleInstanceAssignment": 3.14, # If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. Autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is for example pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead. "utilizationTarget": 3.14, # The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances. "utilizationTargetType": "A String", # Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. }, ], "loadBalancingUtilization": { # Configuration parameters of autoscaling based on load balancing. # Configuration parameters of autoscaling based on load balancer. "utilizationTarget": 3.14, # Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8. }, "maxNumReplicas": 42, # The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. 
The maximum number of replicas must not be lower than minimal number of replicas. "minNumReplicas": 42, # The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed. "mode": "A String", # Defines operating mode for this policy. "scaleDownControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledDownReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scaleInControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledInReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scalingSchedules": { # Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed. "a_key": { # Scaling based on user-defined schedule. The message describes a single scaling schedule. 
A scaling schedule changes the minimum number of VM instances an autoscaler can recommend, which can trigger scaling out. "description": "A String", # A description of a scaling schedule. "disabled": True or False, # A boolean value that specifies whether a scaling schedule can influence autoscaler recommendations. If set to true, then a scaling schedule has no effect. This field is optional, and its value is false by default. "durationSec": 42, # The duration of time intervals, in seconds, for which this scaling schedule is to run. The minimum allowed value is 300. This field is required. "minRequiredReplicas": 42, # The minimum number of VM instances that the autoscaler will recommend in time intervals starting according to schedule. This field is required. "schedule": "A String", # The start timestamps of time intervals when this scaling schedule is to provide a scaling signal. This field uses the extended cron format (with an optional year field). The expression can describe a single timestamp if the optional year is set, in which case the scaling schedule runs once. The schedule is interpreted with respect to time_zone. This field is required. Note: These timestamps only describe when autoscaler starts providing the scaling signal. The VMs need additional time to become serving. "timeZone": "A String", # The time zone to use when interpreting the schedule. The value of this field must be a time zone name from the tz database: http://en.wikipedia.org/wiki/Tz_database. This field is assigned a default value of “UTC” if left empty. }, }, }, "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format. "description": "A String", # An optional description of this resource. Provide this property when you create the resource. "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server. "kind": "compute#autoscaler", # [Output Only] Type of the resource. Always compute#autoscaler for autoscalers. "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. "recommendedSize": 42, # [Output Only] Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction. "region": "A String", # [Output Only] URL of the region where the instance group resides (for autoscalers living in regional scope). "scalingScheduleStatus": { # [Output Only] Status information of existing scaling schedules. "a_key": { "lastStartTime": "A String", # [Output Only] The last time the scaling schedule became active. Note: this is a timestamp when a schedule actually became active, not when it was planned to do so. The timestamp is in RFC3339 text format. "nextStartTime": "A String", # [Output Only] The next time the scaling schedule is to become active. Note: this is a timestamp when a schedule is planned to run, but the actual time might be slightly different. The timestamp is in RFC3339 text format. 
"state": "A String", # [Output Only] The current state of a scaling schedule. }, }, "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "status": "A String", # [Output Only] The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future. "statusDetails": [ # [Output Only] Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter. { "message": "A String", # The status message. "type": "A String", # The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions. }, ], "target": "A String", # URL of the managed instance group that this autoscaler will scale. 
This field is required when creating an autoscaler. "zone": "A String", # [Output Only] URL of the zone where the instance group resides (for autoscalers living in zonal scope). }
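A sketch of fetching a single autoscaler and reading a few of the documented fields; resource names are placeholders.

```python
# 'service' is the discovery client from the earlier sketch.
autoscaler = service.autoscalers().get(
    project='my-project',
    zone='us-central1-a',
    autoscaler='example-autoscaler',
).execute()

# A few of the fields documented in the schema above.
policy = autoscaler['autoscalingPolicy']
print(autoscaler['name'],
      autoscaler.get('status'),
      autoscaler.get('recommendedSize'),
      policy.get('minNumReplicas'),
      policy.get('maxNumReplicas'))
```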
insert(project, zone, body=None, requestId=None, x__xgafv=None)
Creates an autoscaler in the specified project using the data included in the request. Args: project: string, Project ID for this request. (required) zone: string, Name of the zone for this request. (required) body: object, The request body. The object takes the form of: { # Represents an Autoscaler resource. Google Compute Engine has two Autoscaler resources: * [Zonal](/compute/docs/reference/rest/alpha/autoscalers) * [Regional](/compute/docs/reference/rest/alpha/regionAutoscalers) Use autoscalers to automatically add or delete instances from a managed instance group according to your defined autoscaling policy. For more information, read Autoscaling Groups of Instances. For zonal managed instance groups resource, use the autoscaler resource. For regional managed instance groups, use the regionAutoscalers resource. "autoscalingPolicy": { # Cloud Autoscaler policy. # The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. "coolDownPeriodSec": 42, # The number of seconds that the autoscaler waits before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. "cpuUtilization": { # CPU utilization policy. # Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. "predictiveMethod": "A String", # Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. "utilizationTarget": 3.14, # The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. }, "customMetricUtilizations": [ # Configuration parameters of autoscaling based on a custom metric. { # Custom utilization metric policy. "filter": "A String", # A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. 
- You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a *per-group metric* for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value. "metric": "A String", # The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. "singleInstanceAssignment": 3.14, # If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. Autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is for example pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead. "utilizationTarget": 3.14, # The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances. "utilizationTargetType": "A String", # Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. }, ], "loadBalancingUtilization": { # Configuration parameters of autoscaling based on load balancing. # Configuration parameters of autoscaling based on load balancer. "utilizationTarget": 3.14, # Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8. }, "maxNumReplicas": 42, # The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than minimal number of replicas. "minNumReplicas": 42, # The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. 
If not provided, autoscaler chooses a default value depending on maximum number of instances allowed. "mode": "A String", # Defines operating mode for this policy. "scaleDownControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledDownReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scaleInControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledInReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scalingSchedules": { # Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed. "a_key": { # Scaling based on user-defined schedule. The message describes a single scaling schedule. A scaling schedule changes the minimum number of VM instances an autoscaler can recommend, which can trigger scaling out. "description": "A String", # A description of a scaling schedule. 
"disabled": True or False, # A boolean value that specifies whether a scaling schedule can influence autoscaler recommendations. If set to true, then a scaling schedule has no effect. This field is optional, and its value is false by default. "durationSec": 42, # The duration of time intervals, in seconds, for which this scaling schedule is to run. The minimum allowed value is 300. This field is required. "minRequiredReplicas": 42, # The minimum number of VM instances that the autoscaler will recommend in time intervals starting according to schedule. This field is required. "schedule": "A String", # The start timestamps of time intervals when this scaling schedule is to provide a scaling signal. This field uses the extended cron format (with an optional year field). The expression can describe a single timestamp if the optional year is set, in which case the scaling schedule runs once. The schedule is interpreted with respect to time_zone. This field is required. Note: These timestamps only describe when autoscaler starts providing the scaling signal. The VMs need additional time to become serving. "timeZone": "A String", # The time zone to use when interpreting the schedule. The value of this field must be a time zone name from the tz database: http://en.wikipedia.org/wiki/Tz_database. This field is assigned a default value of “UTC” if left empty. }, }, }, "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format. "description": "A String", # An optional description of this resource. Provide this property when you create the resource. "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server. "kind": "compute#autoscaler", # [Output Only] Type of the resource. Always compute#autoscaler for autoscalers. "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. "recommendedSize": 42, # [Output Only] Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction. "region": "A String", # [Output Only] URL of the region where the instance group resides (for autoscalers living in regional scope). "scalingScheduleStatus": { # [Output Only] Status information of existing scaling schedules. "a_key": { "lastStartTime": "A String", # [Output Only] The last time the scaling schedule became active. Note: this is a timestamp when a schedule actually became active, not when it was planned to do so. The timestamp is in RFC3339 text format. "nextStartTime": "A String", # [Output Only] The next time the scaling schedule is to become active. Note: this is a timestamp when a schedule is planned to run, but the actual time might be slightly different. The timestamp is in RFC3339 text format. "state": "A String", # [Output Only] The current state of a scaling schedule. }, }, "selfLink": "A String", # [Output Only] Server-defined URL for the resource. 
"selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "status": "A String", # [Output Only] The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future. "statusDetails": [ # [Output Only] Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter. { "message": "A String", # The status message. "type": "A String", # The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions. }, ], "target": "A String", # URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler. "zone": "A String", # [Output Only] URL of the zone where the instance group resides (for autoscalers living in zonal scope). 
} requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000). x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/alpha/globalOperations) * [Regional](/compute/docs/reference/rest/alpha/regionOperations) * [Zonal](/compute/docs/reference/rest/alpha/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zonalOperations` resource. For more information, read Global, Regional, and Zonal Resources. "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise. "creationTimestamp": "A String", # [Deprecated] This field is deprecated. "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created. "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format. "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated. "errors": [ # [Output Only] The array of errors encountered while processing this operation. { "code": "A String", # [Output Only] The error type identifier for this error. "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional. "message": "A String", # [Output Only] An optional, human-readable error message. }, ], }, "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`. "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found. "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server. "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format. "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources. "metadata": { # [Output Only] Service-specific metadata attached to this operation. "a_key": "", # Properties of the object. Contains field @type with type URL. }, "name": "A String", # [Output Only] Name of the operation. 
"operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request. "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on. "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses. "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations. "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format. "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`. "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation. "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource. "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from. "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com`. "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated. { "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, ], "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations. }
list(project, zone, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
Retrieves a list of autoscalers contained within the specified zone. Args: project: string, Project ID for this request. (required) zone: string, Name of the zone for this request. (required) filter: string, A filter expression that filters resources listed in the response. The expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:` operator can be used with string fields to match substrings. For non-string fields it is equivalent to the `=` operator. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = "Intel Skylake") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = "Intel Skylake") OR (cpuPlatform = "Intel Broadwell") AND (scheduling.automaticRestart = true) ``` maxResults: integer, The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`) orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported. pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results. returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Contains a list of Autoscaler resources. "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": [ # A list of Autoscaler resources. { # Represents an Autoscaler resource. Google Compute Engine has two Autoscaler resources: * [Zonal](/compute/docs/reference/rest/alpha/autoscalers) * [Regional](/compute/docs/reference/rest/alpha/regionAutoscalers) Use autoscalers to automatically add or delete instances from a managed instance group according to your defined autoscaling policy. For more information, read Autoscaling Groups of Instances. 
For zonal managed instance groups resource, use the autoscaler resource. For regional managed instance groups, use the regionAutoscalers resource. "autoscalingPolicy": { # Cloud Autoscaler policy. # The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. "coolDownPeriodSec": 42, # The number of seconds that the autoscaler waits before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. "cpuUtilization": { # CPU utilization policy. # Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. "predictiveMethod": "A String", # Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. "utilizationTarget": 3.14, # The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. }, "customMetricUtilizations": [ # Configuration parameters of autoscaling based on a custom metric. { # Custom utilization metric policy. "filter": "A String", # A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. 
If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a *per-group metric* for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value. "metric": "A String", # The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. "singleInstanceAssignment": 3.14, # If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. Autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is for example pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead. "utilizationTarget": 3.14, # The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances. "utilizationTargetType": "A String", # Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. }, ], "loadBalancingUtilization": { # Configuration parameters of autoscaling based on load balancing. # Configuration parameters of autoscaling based on load balancer. "utilizationTarget": 3.14, # Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8. }, "maxNumReplicas": 42, # The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than minimal number of replicas. "minNumReplicas": 42, # The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed. "mode": "A String", # Defines operating mode for this policy. "scaleDownControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledDownReplicas": { # Encapsulates numeric value that can be either absolute or relative. 
# Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scaleInControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledInReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scalingSchedules": { # Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed. "a_key": { # Scaling based on user-defined schedule. The message describes a single scaling schedule. A scaling schedule changes the minimum number of VM instances an autoscaler can recommend, which can trigger scaling out. "description": "A String", # A description of a scaling schedule. "disabled": True or False, # A boolean value that specifies whether a scaling schedule can influence autoscaler recommendations. If set to true, then a scaling schedule has no effect. This field is optional, and its value is false by default. "durationSec": 42, # The duration of time intervals, in seconds, for which this scaling schedule is to run. The minimum allowed value is 300. This field is required. 
"minRequiredReplicas": 42, # The minimum number of VM instances that the autoscaler will recommend in time intervals starting according to schedule. This field is required. "schedule": "A String", # The start timestamps of time intervals when this scaling schedule is to provide a scaling signal. This field uses the extended cron format (with an optional year field). The expression can describe a single timestamp if the optional year is set, in which case the scaling schedule runs once. The schedule is interpreted with respect to time_zone. This field is required. Note: These timestamps only describe when autoscaler starts providing the scaling signal. The VMs need additional time to become serving. "timeZone": "A String", # The time zone to use when interpreting the schedule. The value of this field must be a time zone name from the tz database: http://en.wikipedia.org/wiki/Tz_database. This field is assigned a default value of “UTC” if left empty. }, }, }, "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format. "description": "A String", # An optional description of this resource. Provide this property when you create the resource. "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server. "kind": "compute#autoscaler", # [Output Only] Type of the resource. Always compute#autoscaler for autoscalers. "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. "recommendedSize": 42, # [Output Only] Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction. "region": "A String", # [Output Only] URL of the region where the instance group resides (for autoscalers living in regional scope). "scalingScheduleStatus": { # [Output Only] Status information of existing scaling schedules. "a_key": { "lastStartTime": "A String", # [Output Only] The last time the scaling schedule became active. Note: this is a timestamp when a schedule actually became active, not when it was planned to do so. The timestamp is in RFC3339 text format. "nextStartTime": "A String", # [Output Only] The next time the scaling schedule is to become active. Note: this is a timestamp when a schedule is planned to run, but the actual time might be slightly different. The timestamp is in RFC3339 text format. "state": "A String", # [Output Only] The current state of a scaling schedule. }, }, "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "status": "A String", # [Output Only] The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. 
Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future. "statusDetails": [ # [Output Only] Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter. { "message": "A String", # The status message. "type": "A String", # The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions. }, ], "target": "A String", # URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler. "zone": "A String", # [Output Only] URL of the zone where the instance group resides (for autoscalers living in zonal scope). }, ], "kind": "compute#autoscalerList", # [Output Only] Type of resource. Always compute#autoscalerList for lists of autoscalers. "nextPageToken": "A String", # [Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. 
Subsequent list requests will have their own nextPageToken to continue paging through the results. "selfLink": "A String", # [Output Only] Server-defined URL for this resource. "warning": { # [Output Only] Informational warning message. "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, }
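For illustration, a minimal sketch of a single list() call. The project, zone, and filter value are placeholder assumptions (the filter follows the expression syntax described above), and only a handful of the returned fields are printed.

```python
# Minimal sketch: list autoscalers in one zone, optionally filtered.
# Uses Application Default Credentials; names are placeholders.
from googleapiclient import discovery

service = discovery.build('compute', 'alpha')

response = service.autoscalers().list(
    project='my-project',
    zone='us-central1-a',
    filter='name != example-autoscaler',  # optional filter expression
    maxResults=100,
).execute()

for autoscaler in response.get('items', []):
    print(autoscaler['name'],
          autoscaler.get('status'),
          autoscaler.get('recommendedSize'))
```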
list_next(previous_request, previous_response)
Retrieves the next page of results. Args: previous_request: The request for the previous page. (required) previous_response: The response from the request for the previous page. (required) Returns: A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
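For illustration, the usual paging pattern with list() and list_next(): execute the current request, process the page, then ask for the next request until list_next() returns None. The project and zone names are placeholder assumptions.

```python
# Canonical paging loop over all autoscalers in a zone.
from googleapiclient import discovery

service = discovery.build('compute', 'alpha')  # Application Default Credentials

request = service.autoscalers().list(project='my-project', zone='us-central1-a')
while request is not None:
    response = request.execute()
    for autoscaler in response.get('items', []):
        print(autoscaler['name'])
    # Returns None once there are no more pages.
    request = service.autoscalers().list_next(
        previous_request=request, previous_response=response)
```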
patch(project, zone, autoscaler=None, body=None, requestId=None, x__xgafv=None)
Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. Args: project: string, Project ID for this request. (required) zone: string, Name of the zone for this request. (required) body: object, The request body. The object takes the form of: { # Represents an Autoscaler resource. Google Compute Engine has two Autoscaler resources: * [Zonal](/compute/docs/reference/rest/alpha/autoscalers) * [Regional](/compute/docs/reference/rest/alpha/regionAutoscalers) Use autoscalers to automatically add or delete instances from a managed instance group according to your defined autoscaling policy. For more information, read Autoscaling Groups of Instances. For zonal managed instance groups resource, use the autoscaler resource. For regional managed instance groups, use the regionAutoscalers resource. "autoscalingPolicy": { # Cloud Autoscaler policy. # The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. "coolDownPeriodSec": 42, # The number of seconds that the autoscaler waits before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. "cpuUtilization": { # CPU utilization policy. # Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. "predictiveMethod": "A String", # Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. "utilizationTarget": 3.14, # The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. }, "customMetricUtilizations": [ # Configuration parameters of autoscaling based on a custom metric. { # Custom utilization metric policy. "filter": "A String", # A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. 
For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a *per-group metric* for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value. "metric": "A String", # The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. "singleInstanceAssignment": 3.14, # If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. Autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is for example pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead. "utilizationTarget": 3.14, # The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances. "utilizationTargetType": "A String", # Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. }, ], "loadBalancingUtilization": { # Configuration parameters of autoscaling based on load balancing. # Configuration parameters of autoscaling based on load balancer. "utilizationTarget": 3.14, # Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8. }, "maxNumReplicas": 42, # The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. 
The maximum number of replicas must not be lower than minimal number of replicas. "minNumReplicas": 42, # The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed. "mode": "A String", # Defines operating mode for this policy. "scaleDownControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledDownReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scaleInControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledInReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scalingSchedules": { # Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed. "a_key": { # Scaling based on user-defined schedule. The message describes a single scaling schedule. 
A scaling schedule changes the minimum number of VM instances an autoscaler can recommend, which can trigger scaling out. "description": "A String", # A description of a scaling schedule. "disabled": True or False, # A boolean value that specifies whether a scaling schedule can influence autoscaler recommendations. If set to true, then a scaling schedule has no effect. This field is optional, and its value is false by default. "durationSec": 42, # The duration of time intervals, in seconds, for which this scaling schedule is to run. The minimum allowed value is 300. This field is required. "minRequiredReplicas": 42, # The minimum number of VM instances that the autoscaler will recommend in time intervals starting according to schedule. This field is required. "schedule": "A String", # The start timestamps of time intervals when this scaling schedule is to provide a scaling signal. This field uses the extended cron format (with an optional year field). The expression can describe a single timestamp if the optional year is set, in which case the scaling schedule runs once. The schedule is interpreted with respect to time_zone. This field is required. Note: These timestamps only describe when autoscaler starts providing the scaling signal. The VMs need additional time to become serving. "timeZone": "A String", # The time zone to use when interpreting the schedule. The value of this field must be a time zone name from the tz database: http://en.wikipedia.org/wiki/Tz_database. This field is assigned a default value of “UTC” if left empty. }, }, }, "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format. "description": "A String", # An optional description of this resource. Provide this property when you create the resource. "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server. "kind": "compute#autoscaler", # [Output Only] Type of the resource. Always compute#autoscaler for autoscalers. "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. "recommendedSize": 42, # [Output Only] Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction. "region": "A String", # [Output Only] URL of the region where the instance group resides (for autoscalers living in regional scope). "scalingScheduleStatus": { # [Output Only] Status information of existing scaling schedules. "a_key": { "lastStartTime": "A String", # [Output Only] The last time the scaling schedule became active. Note: this is a timestamp when a schedule actually became active, not when it was planned to do so. The timestamp is in RFC3339 text format. "nextStartTime": "A String", # [Output Only] The next time the scaling schedule is to become active. Note: this is a timestamp when a schedule is planned to run, but the actual time might be slightly different. The timestamp is in RFC3339 text format. 
"state": "A String", # [Output Only] The current state of a scaling schedule. }, }, "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "status": "A String", # [Output Only] The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future. "statusDetails": [ # [Output Only] Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter. { "message": "A String", # The status message. "type": "A String", # The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions. }, ], "target": "A String", # URL of the managed instance group that this autoscaler will scale. 
This field is required when creating an autoscaler. "zone": "A String", # [Output Only] URL of the zone where the instance group resides (for autoscalers living in zonal scope). } autoscaler: string, Name of the autoscaler to patch. requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000). x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/alpha/globalOperations) * [Regional](/compute/docs/reference/rest/alpha/regionOperations) * [Zonal](/compute/docs/reference/rest/alpha/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zoneOperations` resource. For more information, read Global, Regional, and Zonal Resources. "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise. "creationTimestamp": "A String", # [Deprecated] This field is deprecated. "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created. "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format. "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated. "errors": [ # [Output Only] The array of errors encountered while processing this operation. { "code": "A String", # [Output Only] The error type identifier for this error. "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional. "message": "A String", # [Output Only] An optional, human-readable error message. }, ], }, "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`. "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found. "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server. "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format. "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources. "metadata": { # [Output Only] Service-specific metadata attached to this operation. "a_key": "", # Properties of the object. 
Contains field @type with type URL. }, "name": "A String", # [Output Only] Name of the operation. "operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request. "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on. "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses. "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations. "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format. "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`. "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation. "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource. "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from. "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com`. "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated. { "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, ], "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations. }
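Example: a minimal sketch of calling patch() through the google-api-python-client discovery client to merge one scaling schedule into an existing policy. The project, zone, autoscaler, and schedule values are illustrative placeholders, and the client is assumed to pick up application default credentials; adjust both to your environment.

    from googleapiclient import discovery

    compute = discovery.build('compute', 'alpha')

    # Merge-patch body: only the fields being changed need to be present.
    body = {
        'autoscalingPolicy': {
            'scalingSchedules': {
                'weekday-peak': {                   # hypothetical schedule key
                    'schedule': '0 8 * * MON-FRI',  # extended cron, interpreted in timeZone
                    'durationSec': 3600,
                    'minRequiredReplicas': 10,
                    'timeZone': 'UTC',
                }
            }
        }
    }

    operation = compute.autoscalers().patch(
        project='my-project',        # placeholder
        zone='us-central1-a',        # placeholder
        autoscaler='my-autoscaler',  # placeholder
        body=body,
    ).execute()
    print(operation['name'], operation['status'])  # an Operation resource is returned

Because patch() uses JSON merge-patch processing, schedules and policy fields not named in the body should be left in place.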
testIamPermissions(project, zone, resource, body=None, x__xgafv=None)
Returns permissions that a caller has on the specified resource. Args: project: string, Project ID for this request. (required) zone: string, The name of the zone for this request. (required) resource: string, Name or id of the resource for this request. (required) body: object, The request body. The object takes the form of: { "permissions": [ # The set of permissions to check for the 'resource'. Permissions with wildcards (such as '*' or 'storage.*') are not allowed. "A String", ], } x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is allowed. "A String", ], }
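Example: a sketch of a testIamPermissions() call with the discovery client. The permission strings and resource names below are illustrative, not an exhaustive list, and application default credentials are assumed.

    from googleapiclient import discovery

    compute = discovery.build('compute', 'alpha')

    result = compute.autoscalers().testIamPermissions(
        project='my-project',      # placeholder
        zone='us-central1-a',      # placeholder
        resource='my-autoscaler',  # placeholder name or id
        body={'permissions': [
            'compute.autoscalers.get',     # example permissions to probe
            'compute.autoscalers.update',
        ]},
    ).execute()

    # The response echoes back only the permissions the caller actually holds.
    print(result.get('permissions', []))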
update(project, zone, autoscaler=None, body=None, requestId=None, x__xgafv=None)
Updates an autoscaler in the specified project using the data included in the request. Args: project: string, Project ID for this request. (required) zone: string, Name of the zone for this request. (required) body: object, The request body. The object takes the form of: { # Represents an Autoscaler resource. Google Compute Engine has two Autoscaler resources: * [Zonal](/compute/docs/reference/rest/alpha/autoscalers) * [Regional](/compute/docs/reference/rest/alpha/regionAutoscalers) Use autoscalers to automatically add or delete instances from a managed instance group according to your defined autoscaling policy. For more information, read Autoscaling Groups of Instances. For zonal managed instance groups resource, use the autoscaler resource. For regional managed instance groups, use the regionAutoscalers resource. "autoscalingPolicy": { # Cloud Autoscaler policy. # The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. "coolDownPeriodSec": 42, # The number of seconds that the autoscaler waits before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. "cpuUtilization": { # CPU utilization policy. # Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. "predictiveMethod": "A String", # Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. "utilizationTarget": 3.14, # The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. }, "customMetricUtilizations": [ # Configuration parameters of autoscaling based on a custom metric. { # Custom utilization metric policy. "filter": "A String", # A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. 
- You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a *per-group metric* for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value. "metric": "A String", # The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. "singleInstanceAssignment": 3.14, # If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. Autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is for example pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead. "utilizationTarget": 3.14, # The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances. "utilizationTargetType": "A String", # Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. }, ], "loadBalancingUtilization": { # Configuration parameters of autoscaling based on load balancing. # Configuration parameters of autoscaling based on load balancer. "utilizationTarget": 3.14, # Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8. }, "maxNumReplicas": 42, # The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than minimal number of replicas. "minNumReplicas": 42, # The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. 
If not provided, autoscaler chooses a default value depending on maximum number of instances allowed. "mode": "A String", # Defines operating mode for this policy. "scaleDownControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledDownReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scaleInControl": { # Configuration that allows for slower scale in so that even if Autoscaler recommends an abrupt scale in of a MIG, it will be throttled as specified by the parameters below. "maxScaledInReplicas": { # Encapsulates numeric value that can be either absolute or relative. # Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window autoscaler looks at when computing recommendations. Possibly all these VMs can be deleted at once so user service needs to be prepared to lose that many VMs in one step. "calculated": 42, # [Output Only] Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value of a 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded. "fixed": 42, # Specifies a fixed number of VM instances. This must be a positive integer. "percent": 42, # Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. }, "timeWindowSec": 42, # How far back autoscaling looks when computing recommendations to include directives regarding slower scale in, as described above. }, "scalingSchedules": { # Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed. "a_key": { # Scaling based on user-defined schedule. The message describes a single scaling schedule. A scaling schedule changes the minimum number of VM instances an autoscaler can recommend, which can trigger scaling out. "description": "A String", # A description of a scaling schedule. 
"disabled": True or False, # A boolean value that specifies whether a scaling schedule can influence autoscaler recommendations. If set to true, then a scaling schedule has no effect. This field is optional, and its value is false by default. "durationSec": 42, # The duration of time intervals, in seconds, for which this scaling schedule is to run. The minimum allowed value is 300. This field is required. "minRequiredReplicas": 42, # The minimum number of VM instances that the autoscaler will recommend in time intervals starting according to schedule. This field is required. "schedule": "A String", # The start timestamps of time intervals when this scaling schedule is to provide a scaling signal. This field uses the extended cron format (with an optional year field). The expression can describe a single timestamp if the optional year is set, in which case the scaling schedule runs once. The schedule is interpreted with respect to time_zone. This field is required. Note: These timestamps only describe when autoscaler starts providing the scaling signal. The VMs need additional time to become serving. "timeZone": "A String", # The time zone to use when interpreting the schedule. The value of this field must be a time zone name from the tz database: http://en.wikipedia.org/wiki/Tz_database. This field is assigned a default value of “UTC” if left empty. }, }, }, "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format. "description": "A String", # An optional description of this resource. Provide this property when you create the resource. "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server. "kind": "compute#autoscaler", # [Output Only] Type of the resource. Always compute#autoscaler for autoscalers. "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. "recommendedSize": 42, # [Output Only] Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction. "region": "A String", # [Output Only] URL of the region where the instance group resides (for autoscalers living in regional scope). "scalingScheduleStatus": { # [Output Only] Status information of existing scaling schedules. "a_key": { "lastStartTime": "A String", # [Output Only] The last time the scaling schedule became active. Note: this is a timestamp when a schedule actually became active, not when it was planned to do so. The timestamp is in RFC3339 text format. "nextStartTime": "A String", # [Output Only] The next time the scaling schedule is to become active. Note: this is a timestamp when a schedule is planned to run, but the actual time might be slightly different. The timestamp is in RFC3339 text format. "state": "A String", # [Output Only] The current state of a scaling schedule. }, }, "selfLink": "A String", # [Output Only] Server-defined URL for the resource. 
"selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "status": "A String", # [Output Only] The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future. "statusDetails": [ # [Output Only] Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter. { "message": "A String", # The status message. "type": "A String", # The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions. }, ], "target": "A String", # URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler. "zone": "A String", # [Output Only] URL of the zone where the instance group resides (for autoscalers living in zonal scope). 
} autoscaler: string, Name of the autoscaler to update. requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000). x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/alpha/globalOperations) * [Regional](/compute/docs/reference/rest/alpha/regionOperations) * [Zonal](/compute/docs/reference/rest/alpha/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zoneOperations` resource. For more information, read Global, Regional, and Zonal Resources. "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise. "creationTimestamp": "A String", # [Deprecated] This field is deprecated. "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created. "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format. "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated. "errors": [ # [Output Only] The array of errors encountered while processing this operation. { "code": "A String", # [Output Only] The error type identifier for this error. "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional. "message": "A String", # [Output Only] An optional, human-readable error message. }, ], }, "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`. "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found. "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server. "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format. "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources. "metadata": { # [Output Only] Service-specific metadata attached to this operation. "a_key": "", # Properties of the object. Contains field @type with type URL. }, "name": "A String", # [Output Only] Name of the operation. 
"operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request. "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on. "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses. "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations. "selfLink": "A String", # [Output Only] Server-defined URL for the resource. "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id. "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format. "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`. "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation. "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource. "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from. "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com`. "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated. { "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response. "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" } { "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding). "value": "A String", # [Output Only] A warning data value corresponding to the key. }, ], "message": "A String", # [Output Only] A human-readable description of the warning code. }, ], "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations. }