Multiple criteria for aggregation on a PySpark DataFrame


In this article, we will discuss how to perform aggregation with multiple criteria on a PySpark DataFrame.

Dataframe in use: a student dataset with the columns ID, NAME, DEPT and FEE (created in the code below).

In PySpark, groupBy() is used to collect identical data into groups on a PySpark DataFrame and then run aggregate functions on the grouped data, so we can perform several aggregations at once.

Syntax:

dataframe.groupBy('column_name_group').agg(functions)

where,

  • column_name_group is the column to group by
  • functions are the aggregate functions
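
For instance, grouping by a department column and summing a fee column looks like the sketch below (a minimal sketch; it assumes the dataframe and functions names defined in the full examples later in this article):

Python3
# minimal sketch of the groupBy().agg() pattern; assumes `dataframe`
# and `functions` as defined in the full examples below
dataframe.groupBy('DEPT').agg(functions.sum('FEE')).show()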

Let us first understand what aggregations are. They live in the functions module of pyspark.sql, so we need to import that module to get started. The aggregate functions are:

  • count(): returns the number of rows in each group.

Syntax: dataframe.groupBy('column_name_group').count()

  • mean(): returns the mean of the values in each group.

Syntax: dataframe.groupBy('column_name_group').mean('column_name')

  • max(): returns the maximum of the values in each group.

Syntax: dataframe.groupBy('column_name_group').max('column_name')

  • min(): returns the minimum of the values in each group.

Syntax: dataframe.groupBy('column_name_group').min('column_name')

  • sum(): returns the total (sum) of the values in each group.

Syntax: dataframe.groupBy('column_name_group').sum('column_name')

  • avg(): returns the average of the values in each group.

Syntax: dataframe.groupBy('column_name_group').avg('column_name')
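
Each of these can also be called directly on the grouped data, without agg(); a quick sketch, again assuming the dataframe built in the examples below:

Python3
# shorthand aggregations called directly on the grouped data;
# assumes `dataframe` as defined in the examples below
dataframe.groupBy('DEPT').count().show()      # rows per group
dataframe.groupBy('DEPT').mean('FEE').show()  # mean FEE per group
dataframe.groupBy('DEPT').max('FEE').show()   # maximum FEE per group
dataframe.groupBy('DEPT').min('FEE').show()   # minimum FEE per group
dataframe.groupBy('DEPT').sum('FEE').show()   # total FEE per group
dataframe.groupBy('DEPT').avg('FEE').show()   # average FEE per group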

To aggregate with several functions at once, we use agg() with the following syntax.

Syntax: dataframe.groupBy('column_name_group').agg(functions.aggregate_function('column_name'), functions.aggregate_function('column_name'), ...)
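
Each aggregated column can also be renamed with alias(); a minimal sketch (the min_fee, max_fee and avg_fee names are illustrative, not part of the examples below):

Python3
# renaming aggregated columns with alias(); the new names
# (min_fee, max_fee, avg_fee) are illustrative
dataframe.groupBy('DEPT').agg(
    functions.min('FEE').alias('min_fee'),
    functions.max('FEE').alias('max_fee'),
    functions.avg('FEE').alias('avg_fee')).show()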

Example 1: multiple aggregations on the FEE column, grouped by the DEPT column

Python3
# importing module
import pyspark
  
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
  
# import the functions module
from pyspark.sql import functions
  
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
  
# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]
        ]
  
# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']
  
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
  
  
# grouping by DEPT and aggregating FEE with min, max, sum, mean, count and avg
dataframe.groupBy('DEPT').agg(functions.min('FEE'),
                              functions.max('FEE'),
                              functions.sum('FEE'), 
                              functions.mean('FEE'),
                              functions.count('FEE'),
                              functions.avg('FEE')).show()


Output:
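
The values implied by the data above are shown below; the row order from show() may vary, and note that mean() and avg() both render the column name avg(FEE):

+----+--------+--------+--------+--------+----------+--------+
|DEPT|min(FEE)|max(FEE)|sum(FEE)|avg(FEE)|count(FEE)|avg(FEE)|
+----+--------+--------+--------+--------+----------+--------+
|  IT|   45000|   56000|  101000| 50500.0|         2| 50500.0|
|  CS|   41000|   85000|  171000| 57000.0|         3| 57000.0|
| ECE|   45000|   49000|   94000| 47000.0|         2| 47000.0|
|Mech|   21000|   21000|   21000| 21000.0|         1| 21000.0|
+----+--------+--------+--------+--------+----------+--------+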

Example 2: multiple aggregations grouped by the DEPT and NAME columns

Python3

# importing module
import pyspark
  
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
  
# import the functions module
from pyspark.sql import functions
  
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
  
# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]
        ]
  
# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']
  
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
  
  
# grouping by DEPT and NAME and aggregating FEE
# with min, max, sum, mean, count and avg functions
dataframe.groupBy('DEPT', 'NAME').agg(functions.min('FEE'), 
                                      functions.max('FEE'), 
                                      functions.sum('FEE'),
                                      functions.mean('FEE'), 
                                      functions.count('FEE'), 
                                      functions.avg('FEE')).show()

Output:
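
Since every (DEPT, NAME) pair occurs exactly once in this dataset, each group contains a single row: min, max, sum, mean and avg all equal that row's FEE, and count is 1.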