reCAPTCHA Enterprise API . projects . assessments

Instance Methods

annotate(name, body=None, x__xgafv=None)

Annotates a previously created Assessment to provide additional information on whether the event turned out to be authentic or fraudulent.

close()

Close httplib2 connections.

create(parent, body=None, x__xgafv=None)

Creates an Assessment of the likelihood an event is legitimate.
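
These methods are called on the resource object that the discovery-based Python client builds at runtime. A minimal sketch of obtaining it, assuming the google-api-python-client package and Application Default Credentials are set up in your environment:

    from googleapiclient import discovery

    service = discovery.build("recaptchaenterprise", "v1")
    assessments = service.projects().assessments()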

Method Details

annotate(name, body=None, x__xgafv=None)
Annotates a previously created Assessment to provide additional information on whether the event turned out to be authentic or fraudulent.

Args:
  name: string, Required. The resource name of the Assessment, in the format "projects/{project}/assessments/{assessment}". (required)
  body: object, The request body.
    The object takes the form of:

{ # The request message to annotate an Assessment.
  "annotation": "A String", # Optional. The annotation that will be assigned to the Event. This field can be left empty to provide reasons that apply to an event without concluding whether the event is legitimate or fraudulent.
  "hashedAccountId": "A String", # Optional. Optional unique stable hashed user identifier to apply to the assessment. This is an alternative to setting the hashed_account_id in CreateAssessment, for example when the account identifier is not yet known in the initial request. It is recommended that the identifier is hashed using hmac-sha256 with stable secret.
  "reasons": [ # Optional. Optional reasons for the annotation that will be assigned to the Event.
    "A String",
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Empty response for AnnotateAssessment.
}
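
A minimal usage sketch, assuming the google-api-python-client package and Application Default Credentials are available. The project ID, assessment ID, secret, and account identifier below are placeholders; "FRAUDULENT" and "CHARGEBACK" are two of the enum values this method accepts:

    import base64
    import hashlib
    import hmac

    from googleapiclient import discovery

    # Hash a stable account identifier as the hashedAccountId docs recommend
    # (HMAC-SHA256 with a stable secret); both inputs are placeholders.
    digest = hmac.new(b"stable-secret", b"account-id", hashlib.sha256).digest()

    service = discovery.build("recaptchaenterprise", "v1")
    response = service.projects().assessments().annotate(
        name="projects/my-project/assessments/assessment-id",
        body={
            "annotation": "FRAUDULENT",
            "reasons": ["CHARGEBACK"],
            # Bytes fields travel as base64 strings in the JSON body.
            "hashedAccountId": base64.b64encode(digest).decode("ascii"),
        },
    ).execute()

On success, response is the empty object shown above.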
close()
Close httplib2 connections.

create(parent, body=None, x__xgafv=None)
Creates an Assessment of the likelihood an event is legitimate.

Args:
  parent: string, Required. The name of the project in which the assessment will be created, in the format "projects/{project}". (required)
  body: object, The request body.
    The object takes the form of:

{ # A reCAPTCHA assessment resource.
  "accountDefenderAssessment": { # Account Defender risk assessment. # Assessment returned by Account Defender when a hashed_account_id is provided.
    "labels": [ # Labels for this request.
      "A String",
    ],
    "recommendedAction": "A String", # Recommended action after this request.
  },
  "event": { # The event being assessed.
    "expectedAction": "A String", # Optional. The expected action for this type of event. This should be the same action provided at token generation time on client-side platforms already integrated with recaptcha enterprise.
    "siteKey": "A String", # Optional. The site key that was used to invoke reCAPTCHA on your site and generate the token.
    "token": "A String", # Optional. The user response token provided by the reCAPTCHA client-side integration on your site.
    "userAgent": "A String", # Optional. The user agent present in the request from the user's device related to this event.
    "userIpAddress": "A String", # Optional. The IP address in the request from the user's device related to this event.
  },
  "name": "A String", # Output only. The resource name for the Assessment in the format "projects/{project}/assessments/{assessment}".
  "riskAnalysis": { # Risk analysis result for an event. # Output only. The risk analysis result for the event being assessed.
    "reasons": [ # Reasons contributing to the risk analysis verdict.
      "A String",
    ],
    "score": 3.14, # Legitimate event score from 0.0 to 1.0. (1.0 means very likely legitimate traffic while 0.0 means very likely non-legitimate traffic).
  },
  "tokenProperties": { # Output only. Properties of the provided event token.
    "action": "A String", # Action name provided at token generation.
    "createTime": "A String", # The timestamp corresponding to the generation of the token.
    "hostname": "A String", # The hostname of the page on which the token was generated.
    "invalidReason": "A String", # Reason associated with the response when valid = false.
    "valid": True or False, # Whether the provided user response token is valid. When valid = false, the reason could be specified in invalid_reason or it could also be due to a user failing to solve a challenge or a sitekey mismatch (i.e the sitekey used to generate the token was different than the one specified in the assessment).
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A reCAPTCHA assessment resource.
  "accountDefenderAssessment": { # Account Defender risk assessment. # Assessment returned by Account Defender when a hashed_account_id is provided.
    "labels": [ # Labels for this request.
      "A String",
    ],
    "recommendedAction": "A String", # Recommended action after this request.
  },
  "event": { # The event being assessed.
    "expectedAction": "A String", # Optional. The expected action for this type of event. This should be the same action provided at token generation time on client-side platforms already integrated with recaptcha enterprise.
    "siteKey": "A String", # Optional. The site key that was used to invoke reCAPTCHA on your site and generate the token.
    "token": "A String", # Optional. The user response token provided by the reCAPTCHA client-side integration on your site.
    "userAgent": "A String", # Optional. The user agent present in the request from the user's device related to this event.
    "userIpAddress": "A String", # Optional. The IP address in the request from the user's device related to this event.
  },
  "name": "A String", # Output only. The resource name for the Assessment in the format "projects/{project}/assessments/{assessment}".
  "riskAnalysis": { # Risk analysis result for an event. # Output only. The risk analysis result for the event being assessed.
    "reasons": [ # Reasons contributing to the risk analysis verdict.
      "A String",
    ],
    "score": 3.14, # Legitimate event score from 0.0 to 1.0. (1.0 means very likely legitimate traffic while 0.0 means very likely non-legitimate traffic).
  },
  "tokenProperties": { # Output only. Properties of the provided event token.
    "action": "A String", # Action name provided at token generation.
    "createTime": "A String", # The timestamp corresponding to the generation of the token.
    "hostname": "A String", # The hostname of the page on which the token was generated.
    "invalidReason": "A String", # Reason associated with the response when valid = false.
    "valid": True or False, # Whether the provided user response token is valid. When valid = false, the reason could be specified in invalid_reason or it could also be due to a user failing to solve a challenge or a sitekey mismatch (i.e the sitekey used to generate the token was different than the one specified in the assessment).
  },
}
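
A minimal usage sketch, assuming a token has already been generated by the client-side integration; the project ID, site key, and token below are placeholders:

    from googleapiclient import discovery

    service = discovery.build("recaptchaenterprise", "v1")
    assessment = service.projects().assessments().create(
        parent="projects/my-project",
        body={
            "event": {
                "token": "token-from-client",
                "siteKey": "my-site-key",
                "expectedAction": "login",
            }
        },
    ).execute()

    # Trust the score only if the token is valid and the action matches
    # the one this backend expected.
    props = assessment["tokenProperties"]
    if props["valid"] and props["action"] == "login":
        print("risk score:", assessment["riskAnalysis"]["score"])
    else:
        print("rejected:", props.get("invalidReason", "action mismatch"))

Checking tokenProperties.valid and tokenProperties.action before reading riskAnalysis.score guards against replayed or mis-scoped tokens.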