BigQuery API . tabledata

Instance Methods

close()

Close httplib2 connections.

insertAll(projectId, datasetId, tableId, body=None)

Streams data into BigQuery one record at a time without needing to run a load job. Requires the WRITER dataset role.

list(projectId, datasetId, tableId, maxResults=None, pageToken=None, selectedFields=None, startIndex=None)

Retrieves table data from a specified set of rows. Requires the READER dataset role.

list_next(previous_request, previous_response)

Retrieves the next page of results.

Method Details

close()
Close httplib2 connections.
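
Example (illustrative sketch only, not part of the generated reference; the service handle below comes from google-api-python-client's standard discovery flow with application-default credentials):

from googleapiclient.discovery import build

# Build the BigQuery v2 service; credentials are resolved by the client library.
service = build('bigquery', 'v2')
try:
    tabledata = service.tabledata()
    # ... issue insertAll() / list() requests here ...
finally:
    # Release the underlying httplib2 connections held by the client.
    service.close()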
insertAll(projectId, datasetId, tableId, body=None)
Streams data into BigQuery one record at a time without needing to run a load job. Requires the WRITER dataset role.

Args:
  projectId: string, Project ID of the destination table. (required)
  datasetId: string, Dataset ID of the destination table. (required)
  tableId: string, Table ID of the destination table. (required)
  body: object, The request body.
    The object takes the form of:

{
  "ignoreUnknownValues": True or False, # [Optional] Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.
  "kind": "bigquery#tableDataInsertAllRequest", # The resource type of the response.
  "rows": [ # The rows to insert.
    {
      "insertId": "A String", # [Optional] A unique ID for each row. BigQuery uses this property to detect duplicate insertion requests on a best-effort basis.
      "json": { # Represents a single JSON object. # [Required] A JSON object that contains a row of data. The object's properties and values must match the destination table's schema.
        "a_key": "",
      },
    },
  ],
  "skipInvalidRows": True or False, # [Optional] Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.
  "templateSuffix": "A String", # If specified, treats the destination table as a base template, and inserts the rows into an instance table named "{destination}{templateSuffix}". BigQuery will manage creation of the instance table, using the schema of the base template table. See https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables for considerations when working with templates tables.
}


Returns:
  An object of the form:

    {
  "insertErrors": [ # An array of errors for rows that were not inserted.
    {
      "errors": [ # Error information for the row indicated by the index property.
        {
          "debugInfo": "A String", # Debugging information. This property is internal to Google and should not be used.
          "location": "A String", # Specifies where the error occurred, if present.
          "message": "A String", # A human-readable description of the error.
          "reason": "A String", # A short error code that summarizes the error.
        },
      ],
      "index": 42, # The index of the row that error applies to.
    },
  ],
  "kind": "bigquery#tableDataInsertAllResponse", # The resource type of the response.
}
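
Example (illustrative sketch only; the project, dataset, table, and row values below are placeholders, not values defined by this API):

from googleapiclient.discovery import build

service = build('bigquery', 'v2')

body = {
    'kind': 'bigquery#tableDataInsertAllRequest',
    'skipInvalidRows': False,
    'ignoreUnknownValues': False,
    'rows': [
        {
            'insertId': 'row-0001',                 # best-effort de-duplication key
            'json': {'name': 'alice', 'age': 30},   # must match the destination table's schema
        },
    ],
}

response = service.tabledata().insertAll(
    projectId='my-project',
    datasetId='my_dataset',
    tableId='my_table',
    body=body,
).execute()

# Per-row failures are reported in insertErrors rather than raised as exceptions.
for error in response.get('insertErrors', []):
    print(error['index'], error['errors'])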
list(projectId, datasetId, tableId, maxResults=None, pageToken=None, selectedFields=None, startIndex=None)
Retrieves table data from a specified set of rows. Requires the READER dataset role.

Args:
  projectId: string, Project ID of the table to read. (required)
  datasetId: string, Dataset ID of the table to read. (required)
  tableId: string, Table ID of the table to read. (required)
  maxResults: integer, Maximum number of results to return.
  pageToken: string, Page token, returned by a previous call, identifying the result set.
  selectedFields: string, List of fields to return (comma-separated). If unspecified, all fields are returned.
  startIndex: string, Zero-based index of the starting row to read.

Returns:
  An object of the form:

    {
  "etag": "A String", # A hash of this page of results.
  "kind": "bigquery#tableDataList", # The resource type of the response.
  "pageToken": "A String", # A token used for paging results. Providing this token instead of the startIndex parameter can help you retrieve stable results when an underlying table is changing.
  "rows": [ # Rows of results.
    {
      "f": [ # Represents a single row in the result set, consisting of one or more fields.
        {
          "v": "",
        },
      ],
    },
  ],
  "totalRows": "A String", # The total number of rows in the complete table.
}
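
Example (illustrative sketch only; identifiers and field names are placeholders):

from googleapiclient.discovery import build

service = build('bigquery', 'v2')

request = service.tabledata().list(
    projectId='my-project',
    datasetId='my_dataset',
    tableId='my_table',
    maxResults=100,
    selectedFields='name,age',   # comma-separated column names
)
response = request.execute()

# Each row is a dict with an "f" list of cells; each cell holds its value under "v".
for row in response.get('rows', []):
    print([cell['v'] for cell in row['f']])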
list_next(previous_request, previous_response)
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
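
Example (illustrative sketch of paging through all rows with list_next(); identifiers are placeholders):

from googleapiclient.discovery import build

service = build('bigquery', 'v2')
tabledata = service.tabledata()

request = tabledata.list(
    projectId='my-project',
    datasetId='my_dataset',
    tableId='my_table',
    maxResults=500,
)
while request is not None:
    response = request.execute()
    for row in response.get('rows', []):
        print([cell['v'] for cell in row['f']])
    # list_next returns None once the last page has been consumed.
    request = tabledata.list_next(request, response)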