Showing posts from June, 2024

Spark Execution Internals: Deconstructing Jobs, Stages, and Shuffles

Understanding Spark Execution: A Deep Dive

If you are working with Big Data, writing code that "works" is only half the battle. To truly master Apache Spark, you need to understand how your code is translated into physical execution. Today, let's break down a specific Spark snippet to see how Jobs, Stages, and Tasks are born.

The Scenario

Imagine we have the following PySpark code:

```python
df = spark.read.parquet("sales")
result = (
    df.filter("amount > 100")
    .select("customer_id", "amount")
    .repartition(4)
    .groupBy("customer_id")
    .sum("amount")
)
result.write.mode("overwrite").parquet("output")
```

Our Cluster Constraints:
- Input Data: 12 partitions.
- Cluster Hardware: 4 executors, each capable of running 2 tasks simultaneously.

Q1. How many Spark Jobs will be created?

Answer: 1 Job. In Spark, a Job is triggered by an Action. Transformations (like filter or groupBy) are lazy...
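The task arithmetic implied by the cluster constraints can be sketched in plain Python. This is a hedged illustration of the reasoning only, not Spark itself; it assumes (as is standard) that the first stage runs one task per input partition, and that tasks beyond the cluster's concurrent slots queue up in extra "waves":

```python
import math

# Constraints stated in the post.
input_partitions = 12      # input data: 12 partitions
executors = 4              # 4 executors
tasks_per_executor = 2     # each executor runs 2 tasks simultaneously

# The first stage (read -> filter -> select) runs one task per input partition.
stage1_tasks = input_partitions

# Total concurrent task slots across the cluster.
slots = executors * tasks_per_executor  # 4 * 2 = 8

# With more tasks than slots, tasks execute in waves.
stage1_waves = math.ceil(stage1_tasks / slots)  # ceil(12 / 8) = 2

print(f"stage 1: {stage1_tasks} tasks on {slots} slots -> {stage1_waves} waves")
```

After the `repartition(4)` shuffle, the next stage would have only 4 tasks, which fit into the 8 available slots in a single wave.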

Optimizing SQL queries

🚀 Optimizing SQL queries is crucial for improving database performance and ensuring efficient use of resources.

👉 A few SQL query optimization techniques are as below:

✅ Index Optimization
➡️ Ensure indexes are created on columns that are frequently used in 'WHERE' clauses, 'JOIN' conditions, and 'ORDER BY' clauses.
➡️ Use composite indexes for columns that are frequently queried together.
➡️ Regularly analyze and rebuild fragmented indexes.

✅ Query Refactoring
➡️ Break complex queries into simpler subqueries or use common table expressions (CTEs).
➡️ Avoid unnecessary columns in the 'SELECT' clause to reduce the data processed.

✅ Join Optimization
➡️ Use the appropriate type of join (INNER JOIN, LEFT JOIN, etc.) based on the requirements.
➡️ Ensure join columns are indexed to speed up the join operation.
➡️ Consider the join order, starting with the smallest table.

✅ Use of Proper Data Types
➡️ Choose the most efficient data type for your col...
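The index-optimization point above can be seen directly in a query plan. The following is a minimal sketch using Python's built-in sqlite3 module with a hypothetical `orders` table (the table, column, and index names are illustrative, not from the post); it compares the plan for a WHERE-clause query before and after creating an index on the filtered column:

```python
import sqlite3

# Build a small in-memory table (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 10, i * 1.5) for i in range(100)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 4.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT amount FROM orders WHERE customer_id = 3"

before = plan(query)  # without an index: a full table scan, e.g. "SCAN orders"
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # with the index: an index search, e.g. "... USING INDEX idx_orders_customer"

print("before:", before)
print("after: ", after)
```

The same before/after comparison (via EXPLAIN or EXPLAIN ANALYZE) works in most databases and is a quick way to verify that an index is actually being used.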