How do you select a range of rows from a PySpark DataFrame?
In this article, we will select a range of rows from a PySpark DataFrame.
This can be done in the following ways:
- Using filter().
- Using where().
- Using a SQL expression.
Creating a DataFrame for demonstration:
Python3
# importing module
import pyspark
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
# list of students data
data = [["1", "sravan", "vignan", 67, 89],
["2", "ojaswi", "vvit", 78, 89],
["3", "rohith", "vvit", 100, 80],
["4", "sridevi", "vignan", 78, 80],
["1", "sravan", "vignan", 89, 98],
["5", "gnanesh", "iit", 94, 98]]
# specify column names
columns = ['student ID', 'student NAME',
'college', 'subject1', 'subject2']
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
# display dataframe
dataframe.show()
Output:
Method 1: Using filter()
This function filters the DataFrame by keeping only the records that satisfy the given condition.
Syntax: dataframe.filter(condition)
Example: Python code to select rows of the DataFrame based on the subject1 column.
Python3
# select rows with subject1
# marks between 23 and 78
dataframe.filter(
dataframe.subject1.between(23,78)).show()
Output:
Method 2: Using where()
This function also filters the DataFrame by keeping only the records that satisfy the given condition; where() is an alias for filter().
Syntax: dataframe.where(condition)
Example 1: Python program to select rows of the DataFrame based on the subject1 column.
Python3
# select rows with subject1
# marks between 85 and 100
dataframe.where(
dataframe.subject1.between(85, 100)).show()
Output:
Example 2: Select rows of the DataFrame based on the college column.
Python3
# select rows where the college
# column is vvit
dataframe.where(
dataframe.college.between("vvit", "vvit")).collect()
Output:
[Row(student ID='2', student NAME='ojaswi', college='vvit', subject1=78, subject2=89),
Row(student ID='3', student NAME='rohith', college='vvit', subject1=100, subject2=80)]
Method 3: Using a SQL expression
By running a SQL query with the between operator, we can get the desired range of rows.
Syntax: spark.sql("SELECT * FROM my_view WHERE column_name BETWEEN value1 AND value2")
Example 1: Python program to select rows from the DataFrame based on the subject1 column.
Python3
# create a view for the dataframe
dataframe.createOrReplaceTempView("my_view")
# rows with subject1 between 23 and 78
spark.sql(
    "SELECT * FROM my_view WHERE subject1 BETWEEN 23 AND 78").collect()
Output:
[Row(student ID='1', student NAME='sravan', college='vignan', subject1=67, subject2=89),
Row(student ID='2', student NAME='ojaswi', college='vvit', subject1=78, subject2=89),
Row(student ID='4', student NAME='sridevi', college='vignan', subject1=78, subject2=80)]
Example 2: Select rows based on the student ID column. Since the column name contains a space, it must be quoted with backticks in SQL.
Python3
# create a view for the dataframe
dataframe.createOrReplaceTempView("my_view")
# rows with student ID between 1 and 3
spark.sql(
    "SELECT * FROM my_view WHERE `student ID` BETWEEN 1 AND 3").collect()
Output:
[Row(student ID='1', student NAME='sravan', college='vignan', subject1=67, subject2=89),
Row(student ID='2', student NAME='ojaswi', college='vvit', subject1=78, subject2=89),
Row(student ID='3', student NAME='rohith', college='vvit', subject1=100, subject2=80),
Row(student ID='1', student NAME='sravan', college='vignan', subject1=89, subject2=98)]