
[Bug] job_execution_timeout_seconds is not passed as jobTimeoutMs to BigQuery jobs #1165

@linchun3

Description

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Which packages are affected?

  • dbt-adapters
  • dbt-tests-adapter
  • dbt-athena
  • dbt-athena-community
  • dbt-bigquery
  • dbt-postgres
  • dbt-redshift
  • dbt-snowflake
  • dbt-spark

Current Behavior

When job_execution_timeout_seconds is configured in profiles.yml, the setting is only honoured client-side: the dbt run task gives up after job_execution_timeout_seconds, but the underlying BigQuery job is never cancelled by the server.

I can confirm that the jobTimeoutMs key is missing from the final JSON payload sent to the BigQuery Jobs API.

This appears to be a regression introduced in PR #1109: the old client-side timeout logic was removed in order to rely on python-bigquery's server-side implementation. However, the logic to add the server-side jobTimeoutMs parameter to the job's configuration was never added, leaving the feature non-functional.

Expected Behavior

When job_execution_timeout_seconds is set to N in a profile, and a dbt model's query execution time exceeds N seconds, the following should happen:

  • The BigQuery job running on the server should be cancelled.
  • The dbt run command should fail with an error indicating that the job was cancelled or timed out by the server.

Steps To Reproduce

In this environment

  • dbt-core: 1.8.1
  • dbt-bigquery: 1.8.1
  • google-cloud-bigquery: 3.20.1

With this config

Configure a BigQuery profile in profiles.yml and set a short timeout:

my_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: ...
      dataset: ...
      threads: 4
      job_execution_timeout_seconds: 10

Run

Create and run a dbt model with a query that is guaranteed to take longer than the timeout.

Example Model (models/long_running_query.sql):

-- long running query > 60s

Command:

dbt run --select long_running_query

Problem

There is no error. The dbt run command gives up after 10 seconds, but the BigQuery job continues to run on the server for more than 60 seconds.

Environment

- OS: MacOS
- Python: Python 3.10
- dbt-core: 1.8.1
- dbt-bigquery: 1.8.1
- google-cloud-bigquery: 3.20.1

Additional Context

Background & Timeline

The implementation of this feature has evolved over time, leading to the current bug:

Initial State: Originally, dbt-bigquery used a client-side timeout because the underlying python-bigquery library did not support a server-side jobTimeoutMs parameter, as noted in googleapis/python-bigquery#1421.

Server-Side Support Added: The python-bigquery library introduced support for the server-side jobTimeoutMs parameter in googleapis/python-bigquery#1675.

dbt-bigquery Adopts the Change: To leverage this improvement, dbt-bigquery PR #1109 removed the client-side timeout to utilise the server-side timeout.

The Current Bug: However, the investigation revealed that while the client-side timeout was removed, the logic to add the server-side jobTimeoutMs to the job's configuration was not implemented. As confirmed by inspecting the api_repr of the job configuration, the jobTimeoutMs key was missing from the payload sent to BigQuery.
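To make the api_repr check concrete, here is a minimal stand-in class (illustration only; the real class is google.cloud.bigquery.QueryJobConfig) showing how a snake_case job_timeout_ms option should surface as the camelCase jobTimeoutMs key in the API payload, and how its absence matches the observed bug:

```python
# Minimal stand-in for QueryJobConfig (hypothetical class; illustration only).
# It shows the snake_case option being serialized as the camelCase
# `jobTimeoutMs` key the BigQuery Jobs API expects.
class FakeQueryJobConfig:
    def __init__(self, job_timeout_ms=None):
        self._properties = {}
        if job_timeout_ms is not None:
            # The BigQuery REST API represents int64 fields as strings.
            self._properties["jobTimeoutMs"] = str(job_timeout_ms)

    def to_api_repr(self):
        return dict(self._properties)


print(FakeQueryJobConfig(job_timeout_ms=10_000).to_api_repr())
# {'jobTimeoutMs': '10000'}
print(FakeQueryJobConfig().to_api_repr())
# {} -- the bug: the key is missing when the option is never set
```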

Potential Fix

A potential solution is to populate the job_params dictionary with the timeout value before it is used to create the QueryJobConfig.

This change should be made in dbt/adapters/bigquery/connections.py, within the raw_execute method:

# In dbt/adapters/bigquery/connections.py, inside raw_execute()

# ... after the job timeouts are read from the connection ...
job_creation_timeout = self.get_job_creation_timeout_seconds(conn)
job_execution_timeout = self.get_job_execution_timeout_seconds(conn)

# === PROPOSED FIX START ===
if job_execution_timeout:
    # BigQuery expects the server-side timeout in milliseconds (jobTimeoutMs).
    job_params["job_timeout_ms"] = job_execution_timeout * 1000
# === PROPOSED FIX END ===

def fn():
    return self._query_and_results(
        # ... rest of the function unchanged ...

This ensures the `job_timeout_ms` parameter is included in the job configuration, allowing BigQuery to correctly enforce the job execution timeout on the server side. I have verified that this change resolves the issue.
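The proposed mapping can be sketched as a small self-contained helper (the name add_job_timeout and the bare dict are illustrative, not from the dbt-bigquery codebase): the profile's seconds value is converted into the milliseconds BigQuery expects, and the key is only added when a timeout is actually configured.

```python
# Illustrative helper (hypothetical name) mirroring the proposed fix:
# convert seconds to milliseconds and skip the key when no timeout is set.
def add_job_timeout(job_params, job_execution_timeout_seconds):
    if job_execution_timeout_seconds:
        job_params["job_timeout_ms"] = job_execution_timeout_seconds * 1000
    return job_params


print(add_job_timeout({}, 10))    # {'job_timeout_ms': 10000}
print(add_job_timeout({}, None))  # {}
```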

Labels: triage:product (In Product's queue) · type:bug (Something isn't working as documented)
