Databricks Knowledge Base


Jobs (GCP)

These articles can help you with your Databricks jobs.

16 Articles in this category



Distinguish active and dead jobs

Learn how to distinguish between active and dead Databricks jobs....

Last updated: May 10th, 2022 by Adam Pavlacka

Spark job fails with Driver is temporarily unavailable

Learn how to troubleshoot Spark jobs that fail with a Driver is temporarily unavailable error....

Last updated: May 10th, 2022 by Adam Pavlacka

How to delete all jobs using the REST API

Learn how to delete all Databricks jobs using the REST API....

Last updated: May 10th, 2022 by Adam Pavlacka
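The approach that article describes can be sketched with the Jobs API's `jobs/list` and `jobs/delete` endpoints. This is a minimal illustration, not the article's exact script: the workspace URL and personal access token below are placeholders you must fill in.

```python
import json
import urllib.request

HOST = "https://<databricks-instance>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"        # placeholder personal access token

def _call(path, payload=None):
    """POST (or GET when payload is None) a Jobs API endpoint and return parsed JSON."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        f"{HOST}{path}",
        data=data,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST" if payload is not None else "GET",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or b"{}")

def delete_all_jobs():
    """Repeatedly list jobs and delete each one until none remain."""
    while True:
        jobs = _call("/api/2.1/jobs/list").get("jobs", [])
        if not jobs:
            return
        for job in jobs:
            _call("/api/2.1/jobs/delete", {"job_id": job["job_id"]})
```

Because `jobs/list` is paginated, re-listing after each batch of deletions is simpler than tracking page tokens; the loop ends once the listing comes back empty.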

Job cluster limits on notebook output

Job clusters have a maximum notebook output size of 20 MB. If the output is larger, it results in an error....

Last updated: May 10th, 2022 by Jose Gonzalez

Job fails, but Apache Spark tasks finish

Your job fails, but all of the Apache Spark tasks have completed successfully. You are using spark.stop() or System.exit(0) in your code....

Last updated: May 10th, 2022 by harikrishnan.kunhumveettil
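Since Databricks manages the Spark context for you, calling `spark.stop()` or `System.exit(0)` tears it down early and marks the run failed even though every task completed. As a hedged illustration (this checker is hypothetical, not part of the article), notebook source can be scanned for those calls before scheduling:

```python
import re

# Calls that terminate the Databricks-managed Spark context or the JVM;
# either one causes the job run to be reported as failed.
FORBIDDEN = [r"\bspark\.stop\(\)", r"\bsc\.stop\(\)", r"\bSystem\.exit\("]

def find_terminating_calls(source: str):
    """Return (line_number, line) pairs that contain spark.stop()/System.exit calls."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line) for p in FORBIDDEN):
            hits.append((n, line.strip()))
    return hits
```

To end a notebook deliberately, `dbutils.notebook.exit("message")` returns control to the job scheduler without killing the Spark context.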

Job fails due to job rate limit

Learn how to resolve Databricks job failures due to job rate limits....

Last updated: May 10th, 2022 by Adam Pavlacka

Job fails with invalid access token

Jobs that run for more than 48 hours fail with an invalid access token error when the dbutils token expires....

Last updated: May 11th, 2022 by manjunath.swamy

Task deserialization time is high

Configure cluster-installed libraries to install on executors at cluster launch vs executor launch to speed up your job task runs....

Last updated: February 23rd, 2023 by Adam Pavlacka

Pass arguments to a notebook as a list

Use a JSON file to temporarily store arguments that you want to use in your notebook....

Last updated: October 29th, 2022 by pallavi.gowdar
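The JSON-file technique from that article can be sketched as a pair of helpers: the calling job writes the argument list to a file, and the notebook reads it back. The path and key names below are assumptions for illustration, not the article's exact values.

```python
import json

def write_args(path, args):
    """Store a list of arguments as JSON so a notebook can read them later.
    On Databricks, a path under /dbfs/ makes the file visible to the notebook."""
    with open(path, "w") as f:
        json.dump({"args": args}, f)

def read_args(path):
    """Read the argument list back inside the notebook."""
    with open(path) as f:
        return json.load(f)["args"]
```

This sidesteps the limitation that notebook widgets and job parameters pass only strings: the list round-trips through JSON with its structure intact.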

Uncommitted files causing data duplication

Partially uncommitted files from a failed write can result in apparent data duplication. Adjust VACUUM settings to resolve the issue....

Last updated: November 8th, 2022 by gopinath.chandrasekaran

Multi-task workflows using incorrect parameter values

If parallel tasks running on the same cluster use Scala companion objects, the wrong values can be used because the tasks share a single class instance in the JVM....

Last updated: December 5th, 2022 by Rajeev kannan Thangaiah

Job fails with Spark Shuffle FetchFailedException error

Disable the default Spark Shuffle service to work around a FetchFailedException error....

Last updated: December 5th, 2022 by shanmugavel.chandrakasu

Users unable to view job results when using remote Git source

Databricks does not manage permissions for remote repos, so you must sync changes with a local notebook so non-admin users can view results....

Last updated: March 7th, 2023 by ravirahul.padmanabhan

Single scheduled job tries to run multiple times

Ensure your cron syntax is correct when scheduling jobs. A wildcard in the wrong field can produce unexpected results....

Last updated: January 20th, 2023 by monica.cao
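Databricks job schedules use Quartz cron syntax, whose first field is seconds (seconds, minutes, hours, day-of-month, month, day-of-week), so a `*` left in the seconds field fires the schedule every second of the target minute. As a hedged sketch (this helper is illustrative, not from the article), the wildcard fields of an expression can be inspected:

```python
# Quartz cron field order as used by Databricks job schedules.
QUARTZ_FIELDS = ["seconds", "minutes", "hours", "day-of-month", "month", "day-of-week"]

def fields_firing_every_unit(expr: str):
    """Return the names of fields set to '*', i.e. firing on every value of that unit."""
    parts = expr.split()
    return [name for name, part in zip(QUARTZ_FIELDS, parts) if part == "*"]

# "0 30 5 * * ?"  -> fires once, at 05:30:00 every day
# "* 30 5 * * ?"  -> '*' in the seconds field: fires every second during 05:30
```

A `*` in the day-of-month or month fields is normal; a `*` in the seconds or minutes field is the usual cause of a single scheduled job launching many runs.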

Add custom tags to a Delta Live Tables pipeline

Manually edit the JSON configuration file to add custom tags....

Last updated: February 24th, 2023 by John.Lourdu

Update notification settings for jobs with the Jobs API

You can use the Jobs API to add email notifications to some or all of the jobs in your workspace....

Last updated: March 17th, 2023 by manoj.hegde
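A minimal sketch of that update, using the Jobs API's `jobs/update` endpoint with an `email_notifications` block; the workspace URL and token are placeholders, and this is an illustration of the request shape rather than the article's exact script.

```python
import json
import urllib.request

HOST = "https://<databricks-instance>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"        # placeholder personal access token

def notification_payload(job_id, emails):
    """Build the /api/2.1/jobs/update body that adds on-failure email alerts."""
    return {
        "job_id": job_id,
        "new_settings": {
            "email_notifications": {"on_failure": emails},
        },
    }

def update_notifications(job_id, emails):
    """POST the update for one job; loop over job IDs to cover a whole workspace."""
    req = urllib.request.Request(
        f"{HOST}/api/2.1/jobs/update",
        data=json.dumps(notification_payload(job_id, emails)).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

`jobs/update` merges `new_settings` into the existing job definition, so only the notification block needs to be sent.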


© Databricks 2022-2023. All rights reserved. Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation.
