The Impala GROUP BY clause is used along with the SELECT statement to arrange identical data into groups.
Following is the syntax of the GROUP BY clause.
select data from table_name Group BY col_name;
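In practice, the columns in the SELECT list are usually combined with an aggregate function such as count(), sum(), avg(), min(), or max(), and every selected column that is not aggregated must be listed in the GROUP BY clause. As a minimal sketch, assuming a customers table like the one introduced below, a query counting how many customers share each address could look like this:

-- Hypothetical example: number of customers per address.
-- address appears un-aggregated in the SELECT list, so it must also
-- appear in the GROUP BY clause.
select address, count(*) from customers Group BY address;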
Assume we have a table named customers in the database my_db, whose contents are as follows.
[quickstart.cloudera:21000] > select * from customers;
Query: select * from customers
+----+----------+-----+-----------+--------+
| id | name     | age | address   | salary |
+----+----------+-----+-----------+--------+
| 1  | Ramesh   | 32  | Ahmedabad | 20000  |
| 2  | Khilan   | 25  | Delhi     | 15000  |
| 3  | kaushik  | 23  | Kota      | 30000  |
| 4  | Chaitali | 25  | Mumbai    | 35000  |
| 5  | Hardik   | 27  | Bhopal    | 40000  |
| 6  | Komal    | 22  | MP        | 32000  |
+----+----------+-----+-----------+--------+
Fetched 6 row(s) in 0.51s
Using a GROUP BY query, you can get the total amount of salary for each customer, as shown below.
[quickstart.cloudera:21000] > Select name, sum(salary) from customers Group BY name;
On executing, the above query gives the following output.
Query: select name, sum(salary) from customers Group BY name
+----------+-------------+
| name     | sum(salary) |
+----------+-------------+
| Ramesh   | 20000       |
| Komal    | 32000       |
| Hardik   | 40000       |
| Khilan   | 15000       |
| Chaitali | 35000       |
| kaushik  | 30000       |
+----------+-------------+
Fetched 6 row(s) in 1.75s
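The same pattern works for any column. As a further hedged sketch, grouping by age instead of name would collapse the two customers aged 25 (Khilan and Chaitali) into a single row whose sum(salary) is 50000, given the sample data above:

-- Hypothetical variation: total salary per age group.
-- With the six sample rows above, age 25 covers Khilan and Chaitali,
-- so their salaries (15000 + 35000) are summed into one row.
select age, sum(salary) from customers Group BY age;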
Assume that this table has multiple records, as shown below.
+----+----------+-----+-----------+--------+
| id | name     | age | address   | salary |
+----+----------+-----+-----------+--------+
| 1  | Ramesh   | 32  | Ahmedabad | 20000  |
| 2  | Ramesh   | 32  | Ahmedabad | 1000   |
| 3  | Khilan   | 25  | Delhi     | 15000  |
| 4  | kaushik  | 23  | Kota      | 30000  |
| 5  | Chaitali | 25  | Mumbai    | 35000  |
| 6  | Chaitali | 25  | Mumbai    | 2000   |
| 7  | Hardik   | 27  | Bhopal    | 40000  |
| 8  | Komal    | 22  | MP        | 32000  |
+----+----------+-----+-----------+--------+
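The tutorial does not show how these extra rows were added. As a minimal sketch, assuming the customers table keeps its original columns (id, name, age, address, salary) and that Impala enforces no primary-key constraint on id, the duplicate entries could be appended with INSERT statements along these lines:

-- Hypothetical inserts producing the duplicate rows above;
-- the id values are illustrative, matching the renumbered table.
insert into customers values (2, 'Ramesh', 32, 'Ahmedabad', 1000);
insert into customers values (6, 'Chaitali', 25, 'Mumbai', 2000);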
Now, taking the duplicate entries among the records into account, you can again use the GROUP BY clause to get the total salary of each customer, as shown below.
Select name, sum(salary) from customers Group BY name;
On executing, the above query gives the following output.
Query: select name, sum(salary) from customers Group BY name
+----------+-------------+
| name     | sum(salary) |
+----------+-------------+
| Ramesh   | 21000       |
| Komal    | 32000       |
| Hardik   | 40000       |
| Khilan   | 15000       |
| Chaitali | 37000       |
| kaushik  | 30000       |
+----------+-------------+
Fetched 6 row(s) in 1.75s
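Because sum() silently folds the duplicate rows into a single total, it can be helpful to also check how many rows each group contains. A hedged sketch, assuming the same sample data:

-- Hypothetical check: row count alongside the salary total per name.
-- With the eight sample rows above, Ramesh and Chaitali would each
-- report a count of 2, and every other name a count of 1.
select name, count(*), sum(salary) from customers Group BY name;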