
If Delta Lake Uses Immutable Files, How Do UPDATE, DELETE, and MERGE Work?


One of the most common questions data engineers ask is: if Delta Lake stores data in immutable Parquet files, how can it support operations like UPDATE, DELETE, and MERGE? The answer lies in Delta Lake’s transaction log and its clever file rewrite mechanism.

🔍 Immutable Files in Delta Lake

Delta Lake stores data in Parquet files, which are immutable by design. This immutability ensures consistency and prevents accidental corruption. But immutability doesn’t mean data can’t change — it means changes are handled by creating new versions of files rather than editing them in place.

⚡ How UPDATE Works

When you run an UPDATE statement, Delta Lake:

  1. Identifies the files containing rows that match the update condition.
  2. Reads those files and applies the update logic.
  3. Writes out new Parquet files with the updated rows.
  4. Marks the old files as removed in the transaction log.

UPDATE people
SET age = age + 1
WHERE country = 'India';

Result: The updated rows are written into new files, while old files are excluded from the active snapshot.
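The four steps above can be sketched in plain Python. This is a simulation of the copy-on-write idea, not Delta Lake's actual implementation: each "file" here is just an immutable tuple of rows, and the log is a list of add/remove actions.

```python
# Sketch: simulating Delta Lake's UPDATE as a copy-on-write file rewrite.
# Each "file" is an immutable tuple of (name, country, age) rows.

def update_table(files, predicate, apply_update):
    """Rewrite files containing matching rows; return new files plus log actions."""
    new_files, log = [], []
    for f in files:
        if any(predicate(row) for row in f):          # step 1: find affected files
            rewritten = tuple(                        # steps 2-3: read and rewrite
                apply_update(row) if predicate(row) else row for row in f
            )
            new_files.append(rewritten)
            log.append(("remove", f))                 # step 4: mark old file removed
            log.append(("add", rewritten))
        else:
            new_files.append(f)                       # untouched files stay as-is
    return new_files, log

files = [
    (("Asha", "India", 30), ("Bob", "USA", 41)),
    (("Carlos", "Spain", 25),),
]
new_files, log = update_table(
    files,
    predicate=lambda r: r[1] == "India",
    apply_update=lambda r: (r[0], r[1], r[2] + 1),    # age = age + 1
)
```

Note that the second file is never touched: only files that actually contain matching rows are rewritten, which is why narrow update predicates are much cheaper than broad ones.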

🗑️ How DELETE Works

DELETE follows a similar process:

  1. Finds files containing rows that match the delete condition.
  2. Rewrites those files without the deleted rows.
  3. Marks old files as removed in the transaction log.

DELETE FROM people
WHERE birthDate < '1955-01-01';

Result: Rows are removed by rewriting files, not by editing them directly.
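The same rewrite pattern handles DELETE, with one wrinkle worth seeing: a file whose rows all match simply disappears from the snapshot without a replacement. Again, this is a toy simulation (tuples as files), not Delta Lake's code:

```python
# Sketch: DELETE as a file rewrite, mirroring the steps above.
# Rows are (birthYear, label) tuples; a "file" is a tuple of rows.

def delete_from_table(files, predicate):
    """Rewrite matching files without the deleted rows; log the changes."""
    new_files, log = [], []
    for f in files:
        if any(predicate(row) for row in f):
            kept = tuple(row for row in f if not predicate(row))
            log.append(("remove", f))        # old file leaves the snapshot
            if kept:                         # a fully-deleted file has no successor
                new_files.append(kept)
                log.append(("add", kept))
        else:
            new_files.append(f)
    return new_files, log

files = [((1955, "old"), (1990, "young")), ((2001, "younger"),)]
new_files, log = delete_from_table(files, predicate=lambda r: r[0] < 1956)
```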

🔄 How MERGE Works

MERGE (also known as upsert) combines insert, update, and delete logic. Delta Lake:

  1. Matches source and target rows based on a condition.
  2. Updates or deletes matching rows by rewriting files.
  3. Inserts new rows into new files.

MERGE INTO people AS target
USING updates AS source
ON target.id = source.id
WHEN MATCHED THEN UPDATE SET target.age = source.age
WHEN NOT MATCHED THEN INSERT (id, name, age) VALUES (source.id, source.name, source.age);

Result: MERGE rewrites affected files and appends new ones, ensuring the table reflects the latest state.
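Ignoring the file layout for a moment, the row-level semantics of that MERGE reduce to an upsert keyed on id: matched rows take the source version, unmatched source rows are inserted. A minimal sketch (illustrative only; real MERGE rewrites the matched target files and appends new files for inserts):

```python
# Sketch: MERGE (upsert) semantics, keyed on the first tuple element (id).

def merge_rows(target_rows, source_rows):
    """WHEN MATCHED -> source row replaces target; WHEN NOT MATCHED -> insert."""
    merged = {row[0]: row for row in target_rows}   # index target by id
    for row in source_rows:
        merged[row[0]] = row                        # update or insert
    return [merged[key] for key in sorted(merged)]

target = [(1, "Asha", 30), (2, "Bob", 41)]
source = [(2, "Bob", 42), (3, "Carlos", 25)]
result = merge_rows(target, source)
```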

📌 Behind the Scenes: Transaction Log

Delta Lake maintains a transaction log (ordered JSON commit files stored in the table's _delta_log directory) that records every operation. Each commit creates a new snapshot of the table. This log enables:

  • ACID transactions — reliable updates even in distributed environments.
  • Time travel — query older versions of the table.
  • Scalability — efficient file skipping and compaction.
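This is also the key to time travel: a version of the table is just the result of replaying the log's add/remove actions up to that commit. A simplified sketch of that replay (the real log entries carry much more metadata, such as statistics and partition values):

```python
import json

# Sketch: replaying a Delta-style JSON log of add/remove actions to
# reconstruct the active file set at any version ("time travel").
commits = [
    [{"add": "part-000.parquet"}, {"add": "part-001.parquet"}],     # version 0
    [{"remove": "part-000.parquet"}, {"add": "part-002.parquet"}],  # version 1 (UPDATE)
]
log_lines = [json.dumps(actions) for actions in commits]

def snapshot(log_lines, as_of_version):
    """Return the set of data files active at the given table version."""
    active = set()
    for line in log_lines[: as_of_version + 1]:
        for action in json.loads(line):
            if "add" in action:
                active.add(action["add"])
            if "remove" in action:
                active.discard(action["remove"])
    return active
```

Querying version 0 still sees part-000.parquet even though version 1 logically removed it; the old file is untouched on storage until it is vacuumed.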

🚀 Best Practices

  • Use OPTIMIZE to compact small files after heavy updates/deletes, optionally with ZORDER BY to cluster data for better file skipping.
  • Leverage MERGE for change data capture (CDC) and upserts.
  • Monitor the transaction log for auditing and debugging.

✅ Conclusion

Delta Lake doesn’t break immutability — it embraces it. By rewriting files and tracking changes in the transaction log, Delta Lake enables powerful operations like UPDATE, DELETE, and MERGE while preserving data integrity and enabling time travel.

🔖 Hashtags

#DeltaLake #BigData #DataEngineering #Spark #ImmutableFiles #MERGE #Lakehouse
