A Databricks job has been configured with 3 tasks, each of which is a Databricks notebook. Task A does not depend on other tasks. Tasks B and C run in parallel, with each having a serial dependency on Task A. If task A fails during a scheduled run, which statement describes the results of this run?
A. Because all tasks are managed as a dependency graph, no changes will be committed to the Lakehouse until all tasks have successfully been completed.
B. Tasks B and C will attempt to run as configured; any changes made in task A will be rolled back due to task failure.
C. Unless all tasks complete successfully, no changes will be committed to the Lakehouse; because task A failed, all commits will be rolled back automatically.
D. Tasks B and C will be skipped; some logic expressed in task A may have been committed before task failure.
E. Tasks B and C will be skipped; task A will not commit any changes because of stage failure.
Explanation:
When a Databricks job runs multiple tasks with dependencies, the tasks are
executed in a dependency graph. If a task fails, the downstream tasks that depend on it are
skipped and marked as Upstream failed. However, the failed task may have already
committed some changes to the Lakehouse before the failure occurred, and those changes
are not rolled back automatically. Therefore, the job run may result in a partial update of the
Lakehouse. To avoid this, you can use the transactional writes feature of Delta Lake to
ensure that the changes are only committed when the entire job run succeeds.
Alternatively, you can use the Run if condition to configure tasks to run even when some or
all of their dependencies have failed, allowing your job to recover from failures and
continue running.
References:
transactional writes: https://docs.databricks.com/delta/deltaintro.html#transactional-writes
Run if: https://docs.databricks.com/en/workflows/jobs/conditional-tasks.html
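As a point of reference, the dependency graph and the Run if behaviour described above can be sketched as a Jobs API 2.1-style task definition. The job name and notebook paths below are placeholders, and the payload is an illustrative sketch rather than a verified request body:

```python
# Illustrative Jobs API 2.1-style settings for the three-task job described
# above (names and notebook paths are placeholders, not values from the question).
job_settings = {
    "name": "example_three_task_job",
    "tasks": [
        {
            "task_key": "task_a",
            "notebook_task": {"notebook_path": "/Repos/jobs/task_a"},
        },
        {
            "task_key": "task_b",
            "depends_on": [{"task_key": "task_a"}],
            "notebook_task": {"notebook_path": "/Repos/jobs/task_b"},
        },
        {
            "task_key": "task_c",
            "depends_on": [{"task_key": "task_a"}],
            # Without a Run if condition, dependent tasks run only when all
            # dependencies succeed, which is why B and C are skipped when A fails.
            "run_if": "ALL_DONE",
            "notebook_task": {"notebook_path": "/Repos/jobs/task_c"},
        },
    ],
}
```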
Review the following error traceback:
Which statement describes the error being raised?
A. The code executed was PySpark but was executed in a Scala notebook.
B. There is no column in the table named heartrateheartrateheartrate
C. There is a type error because a column object cannot be multiplied.
D. There is a type error because a DataFrame object cannot be multiplied.
E. There is a syntax error because the heartrate column is not correctly identified as a column.
Explanation:
The error being raised is an AnalysisException, which is a type of exception
that occurs when Spark SQL cannot analyze or execute a query due to some logical or
semantic error. In this case, the error message indicates that the query cannot resolve the
column name ‘heartrateheartrateheartrate’ given the input columns ‘heartrate’ and ‘age’.
This means that there is no column in the table named ‘heartrateheartrateheartrate’, and
the query is invalid. A possible cause of this error is a typo or a copy-paste mistake in the
query. To fix this error, the query should use a valid column name that exists in the table,
such as ‘heartrate’.
References: AnalysisException
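Although the traceback itself is not reproduced above, this class of error can be recreated with a short PySpark snippet; the sample data below is purely illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(72.0, 34)], ["heartrate", "age"])

# In Python, 3 * "heartrate" is string repetition, producing the name
# "heartrateheartrateheartrate" before Spark ever analyzes the query.
# Resolving that name against the input columns ["heartrate", "age"] fails
# with an AnalysisException: cannot resolve 'heartrateheartrateheartrate'.
df.select(3 * "heartrate")
```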
In order to facilitate near real-time workloads, a data engineer is creating a helper function to leverage the schema detection and evolution functionality of Databricks Auto Loader. The desired function will automatically detect the schema of the source directory, incrementally process JSON files as they arrive in a source directory, and automatically evolve the schema of the table when new fields are detected. The function is displayed below with a blank: Which response correctly fills in the blank to meet the specified requirements?
A. Option A
B. Option B
C. Option C
D. Option D
E. Option E
Explanation:
Option B correctly fills in the blank to meet the specified requirements. Option B uses the
“cloudFiles.schemaLocation” option, which is required for the schema detection and
evolution functionality of Databricks Auto Loader. Additionally, option B uses the
“mergeSchema” option, which is required for the schema evolution functionality of
Databricks Auto Loader. Finally, option B uses the “writeStream” method, which is required
for the incremental processing of JSON files as they arrive in a source directory. The other
options are incorrect because they either omit the required options, use the wrong method,
or use the wrong format.
References:
Configure schema inference and evolution in Auto Loader:
https://docs.databricks.com/en/ingestion/auto-loader/schema.html
Write streaming data: https://docs.databricks.com/spark/latest/structuredstreaming/writing-streaming-data.html
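A hedged sketch of the kind of helper function the question describes is shown below. The source directory, checkpoint path, and target table name are placeholders, spark is assumed to be the ambient SparkSession of a Databricks notebook, and the option names follow the Auto Loader documentation cited above:

```python
def auto_load_json(source_dir: str, checkpoint_path: str, target_table: str):
    """Incrementally ingest JSON with schema detection and evolution."""
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        # Required so Auto Loader can track the inferred schema and evolve it
        # when new fields appear in the source data.
        .option("cloudFiles.schemaLocation", checkpoint_path)
        .load(source_dir)
        .writeStream
        # Allows newly detected fields to be added to the target table schema.
        .option("mergeSchema", "true")
        .option("checkpointLocation", checkpoint_path)
        .toTable(target_table)
    )
```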
A junior data engineer has been asked to develop a streaming data pipeline with a grouped
aggregation using DataFrame df. The pipeline needs to calculate the average humidity and
average temperature for each non-overlapping five-minute interval. Events are recorded
once per minute per device.
Streaming DataFrame df has the following schema:
"device_id INT, event_time TIMESTAMP, temp FLOAT, humidity FLOAT"
Code block:
Choose the response that correctly fills in the blank within the code block to complete this
task.
A. to_interval("event_time", "5 minutes").alias("time")
B. window("event_time", "5 minutes").alias("time")
C. "event_time"
D. window("event_time", "10 minutes").alias("time")
E. lag("event_time", "10 minutes").alias("time")
Explanation:
This is the correct answer because the window function is used to group
streaming data by time intervals. The window function takes two arguments: a time column
and a window duration. The window duration specifies how long each window is, and must
be a multiple of 1 second. In this case, the window duration is “5 minutes”, which means
each window will cover a non-overlapping five-minute interval. The window function also
returns a struct column with two fields: start and end, which represent the start and end
time of each window. The alias function is used to rename the struct column as “time”.
Verified References: [Databricks Certified Data Engineer Professional], under “Structured
Streaming” section; Databricks Documentation, under “WINDOW” section.
https://www.databricks.com/blog/2017/05/08/event-time-aggregation-watermarkingapache-sparks-structured-streaming.html
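Because the original code block is not reproduced above, the sketch below reconstructs what the completed aggregation might look like with option B in the blank; grouping additionally by device_id is an assumption, and df is the streaming DataFrame from the question:

```python
from pyspark.sql.functions import avg, window

# Non-overlapping (tumbling) five-minute windows over event_time; the window
# struct column is aliased as "time" per option B.
agg_df = (
    df.groupBy(window("event_time", "5 minutes").alias("time"), "device_id")
      .agg(avg("humidity").alias("avg_humidity"),
           avg("temp").alias("avg_temp"))
)
```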
A data ingestion task requires a one-TB JSON dataset to be written out to Parquet with a target part-file size of 512 MB. Because Parquet is being used instead of Delta Lake, built-in file-sizing features such as Auto-Optimize & Auto-Compaction cannot be used. Which strategy will yield the best performance without shuffling data?
A. Set spark.sql.files.maxPartitionBytes to 512 MB, ingest the data, execute the narrow transformations, and then write to parquet.
B. Set spark.sql.shuffle.partitions to 2,048 partitions (1TB*1024*1024/512), ingest the data, execute the narrow transformations, optimize the data by sorting it (which automatically repartitions the data), and then write to parquet.
C. Set spark.sql.adaptive.advisoryPartitionSizeInBytes to 512 MB bytes, ingest the data, execute the narrow transformations, coalesce to 2,048 partitions (1TB*1024*1024/512), and then write to parquet.
D. Ingest the data, execute the narrow transformations, repartition to 2,048 partitions (1TB* 1024*1024/512), and then write to parquet.
E. Set spark.sql.shuffle.partitions to 512, ingest the data, execute the narrow transformations, and then write to parquet.
Explanation:
The key to efficiently converting a large JSON dataset to Parquet files of a
specific size without shuffling data lies in controlling the size of the output files directly.
Setting spark.sql.files.maxPartitionBytes to 512 MB configures Spark to process
data in chunks of 512 MB. This setting directly influences the size of the part-files
in the output, aligning with the target file size.
Narrow transformations (which do not involve shuffling data across partitions) can
then be applied to this data.
Writing the data out to Parquet will result in files that are approximately the size
specified by spark.sql.files.maxPartitionBytes, in this case, 512 MB.
The other options involve unnecessary shuffles or repartitions (B, C, D) or an
incorrect setting for this specific requirement (E).
References:
Apache Spark Documentation: Configuration - spark.sql.files.maxPartitionBytes
Databricks Documentation on Data Sources: Databricks Data Sources Guide
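For illustration, strategy A might look like the following in practice; the input and output paths and the filter are placeholders:

```python
# Read the source in ~512 MB input splits so each task produces a part-file
# near the target size, provided only narrow transformations follow
# (no shuffle, no repartition).
spark.conf.set("spark.sql.files.maxPartitionBytes", 512 * 1024 * 1024)

df = spark.read.json("/mnt/raw/one_tb_dataset/")
cleaned = df.filter("value IS NOT NULL")  # narrow transformation only
cleaned.write.mode("overwrite").parquet("/mnt/curated/one_tb_dataset_parquet/")
```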
A junior data engineer seeks to leverage Delta Lake's Change Data Feed functionality to create a Type 1 table representing all of the values that have ever been valid for all rows in a bronze table created with the property delta.enableChangeDataFeed = true. They plan to execute the following code as a daily job: Which statement describes the execution and results of running the above query multiple times?
A. Each time the job is executed, newly updated records will be merged into the target table, overwriting previous values with the same primary keys.
B. Each time the job is executed, the entire available history of inserted or updated records will be appended to the target table, resulting in many duplicate entries.
C. Each time the job is executed, the target table will be overwritten using the entire history of inserted or updated records, giving the desired result.
D. Each time the job is executed, the differences between the original and current versions are calculated; this may result in duplicate entries for some records.
E. Each time the job is executed, only those records that have been inserted or updated since the last execution will be appended to the target table giving the desired result.
Explanation: Reading table’s changes, captured by CDF, using spark.read means that you are reading them as a static source. So, each time you run the query, all table’s changes (starting from the specified startingVersion) will be read.
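A hedged reconstruction of the kind of daily job the question describes is shown below; the table names and starting version are placeholders, and the filter on _change_type uses the Change Data Feed metadata column documented by Databricks:

```python
# Batch (static) read of the change feed: every run re-reads all changes from
# the starting version onward, which is why repeated runs append duplicates.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 0)
    .table("bronze")
)

(changes
    .filter("_change_type IN ('insert', 'update_postimage')")
    .write.mode("append")
    .saveAsTable("bronze_history"))
```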
A user new to Databricks is trying to troubleshoot long execution times for some pipeline logic they are working on. Presently, the user is executing code cell-by-cell, using display() calls to confirm code is producing the logically correct results as new transformations are added to an operation. To get a measure of average time to execute, the user is running each cell multiple times interactively. Which of the following adjustments will get a more accurate measure of how code is likely to perform in production?
A. Scala is the only language that can be accurately tested using interactive notebooks; because the best performance is achieved by using Scala code compiled to JARs, all PySpark and Spark SQL logic should be refactored.
B. The only way to meaningfully troubleshoot code execution times in development notebooks is to use production-sized data and production-sized clusters with Run All execution.
C. Production code development should only be done using an IDE; executing code against a local build of open source Spark and Delta Lake will provide the most accurate benchmarks for how code will perform in production.
D. Calling display() forces a job to trigger, while many transformations will only add to the logical query plan; because of caching, repeated execution of the same logic does not provide meaningful results.
E. The Jobs UI should be leveraged to occasionally run the notebook as a job and track execution time during incremental code development because Photon can only be enabled on clusters launched for scheduled jobs.
Explanation:
In Databricks notebooks, using the display() function triggers an action that
forces Spark to execute the code and produce a result. However, Spark operations are
generally divided into transformations and actions. Transformations create a new dataset
from an existing one and are lazy, meaning they are not computed immediately but added
to a logical plan. Actions, like display(), trigger the execution of this logical plan.
Repeatedly running the same code cell can lead to misleading performance measurements
due to caching. When a dataset is used multiple times, Spark's optimization mechanism
caches it in memory, making subsequent executions faster. This behavior does not
accurately represent the first-time execution performance in a production environment
where data might not be cached yet.
To get a more realistic measure of performance, it is recommended to:
Clear the cache or restart the cluster to avoid the effects of caching.
Test the entire workflow end-to-end rather than cell-by-cell to understand the
cumulative performance.
Consider using a representative sample of the production data, ensuring it
includes various cases the code will encounter in production.
References:
Databricks Documentation on Performance Optimization: Databricks Performance
Tuning
Apache Spark Documentation: RDD Programming Guide - Understanding
transformations and actions
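One way to follow these recommendations is sketched below; result_df stands in for the DataFrame produced by the pipeline logic, and the "noop" sink is used so the full plan executes without writing output anywhere:

```python
import time

# Clear cached data so the measurement reflects a cold, production-like run.
spark.catalog.clearCache()

start = time.time()
# Materialize the entire query plan, rather than the partial evaluation that
# display() performs to render a preview of the results.
result_df.write.format("noop").mode("overwrite").save()
print(f"Elapsed: {time.time() - start:.1f} s")
```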
A junior data engineer is working to implement logic for a Lakehouse table named silver_device_recordings. The source data contains 100 unique fields in a highly nested JSON structure. The silver_device_recordings table will be used downstream to power several production monitoring dashboards and a production model. At present, 45 of the 100 fields are being used in at least one of these applications. The data engineer is trying to determine the best approach for dealing with schema declaration given the highly-nested structure of the data and the numerous fields. Which of the following accurately presents information about Delta Lake and Databricks that may impact their decision-making process?
A. The Tungsten encoding used by Databricks is optimized for storing string data; newly-added native support for querying JSON strings means that string types are always most efficient.
B. Because Delta Lake uses Parquet for data storage, data types can be easily evolved by just modifying file footer information in place.
C. Human labor in writing code is the largest cost associated with data engineering workloads; as such, automating table declaration logic should be a priority in all migration workloads.
D. Because Databricks will infer schema using types that allow all observed data to be processed, setting types manually provides greater assurance of data quality enforcement.
E. Schema inference and evolution on Databricks ensure that inferred types will always accurately match the data types used by downstream systems.
Explanation:
This is the correct answer because it accurately presents information about
Delta Lake and Databricks that may impact the decision-making process of a junior data
engineer who is trying to determine the best approach for dealing with schema declaration
given the highly-nested structure of the data and the numerous fields. Delta Lake and
Databricks support schema inference and evolution, which means that they can
automatically infer the schema of a table from the source data and allow adding new
columns or changing column types without affecting existing queries or pipelines. However,
schema inference and evolution may not always be desirable or reliable, especially when
dealing with complex or nested data structures or when enforcing data quality and
consistency across different systems. Therefore, setting types manually can provide
greater assurance of data quality enforcement and avoid potential errors or conflicts due to
incompatible or unexpected data types. Verified References: [Databricks Certified Data
Engineer Professional], under “Delta Lake” section; Databricks Documentation, under
“Schema inference and partition of streaming DataFrames/Datasets” section.
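To make the trade-off concrete, a minimal sketch of declaring types manually (rather than relying on inference) follows; the field names are hypothetical and the source path is a placeholder:

```python
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

# Explicit schema: records that do not match these types surface as errors or
# nulls (depending on parser mode) instead of silently widening an inferred
# type to accommodate them.
declared_schema = StructType([
    StructField("device_id", StringType(), nullable=False),
    StructField("recorded_at", TimestampType(), nullable=True),
    StructField("reading", DoubleType(), nullable=True),
])

df = spark.read.schema(declared_schema).json("/mnt/raw/device_recordings/")
```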
The business intelligence team has a dashboard configured to track various summary
metrics for retail stores. This includes total sales for the previous day alongside totals and
averages for a variety of time periods. The fields required to populate this dashboard have
the following schema:
For Demand forecasting, the Lakehouse contains a validated table of all itemized sales
updated incrementally in near real-time. This table, named products_per_order, includes the
following fields:
Because reporting on long-term sales trends is less volatile, analysts using the new
dashboard only require data to be refreshed once daily. Because the dashboard will be
queried interactively by many users throughout a normal business day, it should return
results quickly and reduce total compute associated with each materialization.
Which solution meets the expectations of the end users while controlling and limiting
possible costs?
A. Use the Delta Cache to persist the products_per_order table in memory to quickly update the dashboard with each query.
B. Populate the dashboard by configuring a nightly batch job to save the required values to a summary table, which is then used to quickly update the dashboard with each query.
C. Use Structured Streaming to configure a live dashboard against the products_per_order table within a Databricks notebook.
D. Define a view against the products_per_order table and define the dashboard against this view.
Explanation:
Given the requirement for daily refresh of data and the need to ensure quick
response times for interactive queries while controlling costs, a nightly batch job to precompute and save the required summary metrics is the most suitable approach.
By pre-aggregating data during off-peak hours, the dashboard can serve queries
quickly without requiring on-the-fly computation, which can be resource-intensive
and slow, especially with many users.
This approach also limits the cost by avoiding continuous computation throughout
the day and instead leverages a batch process that efficiently computes and stores
the necessary data.
The other options (A, C, D) either do not address the cost and performance
requirements effectively or are not suitable for the use case of less frequent data
refresh and high interactivity.
References:
Databricks Documentation on Batch Processing: Databricks Batch Processing
Data Lakehouse Patterns: Data Lakehouse Best Practices
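A hedged sketch of the nightly batch job in option B follows; the column and table names are assumptions, since the question's schemas are not reproduced above:

```python
from pyspark.sql.functions import avg, sum as sum_, to_date

# Pre-aggregate once per night so interactive dashboard queries read a small
# summary table instead of scanning the full itemized sales table.
daily_summary = (
    spark.table("products_per_order")
    .withColumn("order_date", to_date("order_timestamp"))
    .groupBy("store_id", "order_date")
    .agg(sum_("sales_amount").alias("total_sales"),
         avg("sales_amount").alias("avg_sale_amount"))
)

daily_summary.write.mode("overwrite").saveAsTable("gold.daily_store_sales")
```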
A data engineer needs to capture pipeline settings from an existing pipeline in the workspace, and use them to create and version a JSON file to create a new pipeline. Which command should the data engineer enter in a web terminal configured with the Databricks CLI?
A. Use the get command to capture the settings for the existing pipeline; remove the pipeline_id and rename the pipeline; use this in a create command
B. Stop the existing pipeline; use the returned settings in a reset command
C. Use the clone command to create a copy of an existing pipeline; use the get JSON command to get the pipeline definition; save this to git
D. Use list pipelines to get the specs for all pipelines; get the pipeline spec from the returned results, parse it, and use this to create a pipeline
Explanation:
The Databricks CLI provides a way to automate interactions with Databricks
services. When dealing with pipelines, you can use the databricks pipelines get --pipeline-id command to capture the settings of an existing pipeline in JSON format. This
JSON can then be modified by removing the pipeline_id to prevent conflicts and renaming
the pipeline to create a new pipeline. The modified JSON file can then be used with the
databricks pipelines create command to create a new pipeline with those settings.
References:
Databricks Documentation on CLI for Pipelines: Databricks CLI - Pipelines
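The JSON-editing step in that workflow can be sketched in Python as below; the file names, the "spec" wrapper, and the new pipeline name are assumptions about the shape of the databricks pipelines get output rather than verified details:

```python
import json

# existing_pipeline.json is assumed to hold the output of
# `databricks pipelines get --pipeline-id <id>`.
with open("existing_pipeline.json") as f:
    settings = json.load(f)

# The settings may be nested under a "spec" key; fall back to the top level.
spec = settings.get("spec", settings)
spec.pop("pipeline_id", None)        # drop the id so a new pipeline is created
spec["name"] = "copied_pipeline"     # rename to avoid clashing with the original

# Version this file in git, then pass it to `databricks pipelines create`.
with open("new_pipeline.json", "w") as f:
    json.dump(spec, f, indent=2)
```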