Knowledge Base for Databricks on AWS

Updated Apr 14, 2022


Jobs

These articles can help you with your Databricks jobs.

  • Distinguish active and dead jobs
  • Spark job fails with Driver is temporarily unavailable
  • How to delete all jobs using the REST API
  • Identify less used jobs
  • Job cluster limits on notebook output
  • Job fails, but Apache Spark tasks finish
  • Job fails due to job rate limit
  • Create table in overwrite mode fails when interrupted
  • Apache Spark Jobs hang due to non-deterministic custom UDF
  • Apache Spark job fails with Failed to parse byte string
  • Apache Spark UI shows wrong number of jobs
  • Apache Spark job fails with a Connection pool shut down error
  • Job fails with atypical errors message
  • Apache Spark job fails with maxResultSize exception
  • Databricks job fails because library is not installed
  • Jobs failing on Databricks Runtime 5.5 LTS with an SQLAlchemy package error
  • Job failure due to Azure Data Lake Storage (ADLS) CREATE limits
  • Job fails with invalid access token
  • How to ensure idempotency for jobs
  • Monitor running jobs with a Job Run dashboard
  • Streaming job has degraded performance
  • Task deserialization time is high
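One of the articles above covers deleting all jobs using the REST API. As a rough sketch of that approach (assuming the Jobs API 2.1 `jobs/list` and `jobs/delete` endpoints, with placeholder workspace host and personal access token), the loop could look like:

```python
import json
import urllib.request


def delete_payloads(jobs):
    """Build one jobs/delete request body per job returned by jobs/list."""
    return [{"job_id": job["job_id"]} for job in jobs]


def delete_all_jobs(host, token):
    """List every job in the workspace, then delete each one.

    host and token are placeholders, e.g.
    "https://<workspace>.cloud.databricks.com" and a personal access token.
    Note: jobs/list is paginated; a production script should follow the
    pagination fields in the response rather than read a single page.
    """
    headers = {"Authorization": f"Bearer {token}"}
    req = urllib.request.Request(f"{host}/api/2.1/jobs/list", headers=headers)
    with urllib.request.urlopen(req) as resp:
        jobs = json.load(resp).get("jobs", [])
    for payload in delete_payloads(jobs):
        body = json.dumps(payload).encode()
        delete_req = urllib.request.Request(
            f"{host}/api/2.1/jobs/delete",
            data=body,
            headers={**headers, "Content-Type": "application/json"},
        )
        urllib.request.urlopen(delete_req).close()
```

Deleting jobs is irreversible, so it is worth printing or logging the job list before issuing the delete calls.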


© Databricks 2022. All rights reserved. Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation.
