
Spark Revisited: Integrating Spark with Spring Boot

They say a birthmark shows how you died in your previous life. I have one on my belly, so I figured I must have been stabbed to death last time around, and wondered who could be so cruel... Then I met someone with a similar mark in the same spot, and suddenly I felt better about it.
Best reply: or maybe the two of you were barbecue skewers in your previous lives.


Preface

  The previous post covered setting up the Spark environment and submitting jobs. This one deploys Spark directly inside a web service built with Spring Boot, handing some of the data-processing logic over to Spark. As for the underlying principles, I'll go through them one by one once I understand Spark more deeply!

Coding

  Spring Boot makes it quick to scaffold a web project. I never used to pay much attention to the dependencies in the pom, but after falling into the Spark/Scala version-compatibility trap I learned how hard it is to put together a pom that just works. Here it is, offered with love:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.3.2.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
    <scala.version>2.10.4</scala.version>
    <spark.version>1.6.2</spark.version>
</properties>

<dependencies>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-logging</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>

    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.4.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>${spark.version}</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-launcher_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.4</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.specs</groupId>
        <artifactId>specs</artifactId>
        <version>1.2.5</version>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.ansj</groupId>
        <artifactId>ansj_seg</artifactId>
        <version>5.1.1</version>
    </dependency>

</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>

    </plugins>
</build>

This covers the dependencies needed by both Spring Boot and Spark.

Next, write a method that counts words. The program is the same as before; only the SparkConfig setup has changed.
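The post doesn't show that configuration class itself, so here is a minimal sketch of what it might look like: a Spring @Configuration class that exposes a JavaSparkContext bean for the service below to autowire. The class name, app name, and local master URL are my assumptions, not taken from the original project.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SparkConfig {

    // Hypothetical settings: app name and local[*] master are placeholders for a dev setup
    @Bean
    public SparkConf sparkConf() {
        return new SparkConf()
                .setAppName("springboot-spark-demo")
                .setMaster("local[*]");
    }

    // The JavaSparkContext bean that WordCountService autowires below
    @Bean
    public JavaSparkContext javaSparkContext(SparkConf sparkConf) {
        return new JavaSparkContext(sparkConf);
    }
}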

import java.io.Serializable;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import scala.Tuple2;

@Component
public class WordCountService implements Serializable {

    private static final Pattern SPACE = Pattern.compile(" ");

    // transient keeps the SparkContext out of serialized closures (see the note below)
    @Autowired
    private transient JavaSparkContext sc;

    public Map<String, Integer> run() {
        Map<String, Integer> result = new HashMap<>();
        JavaRDD<String> lines = sc.textFile("C:\\Users\\bd2\\Downloads\\blsmy.txt").cache();

        // Debug-only: map() is lazy and its result is never used, so these printlns
        // won't actually run unless an action is called on the returned RDD.
        lines.map(new Function<String, String>() {
            @Override
            public String call(String s) throws Exception {
                System.out.println(s);
                return s;
            }
        });

        System.out.println(lines.count());

        // Split every line into words
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) throws Exception {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // Pair each word with an initial count of 1
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            private static final long serialVersionUID = 1L;

            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts per word
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            private static final long serialVersionUID = 1L;

            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        // Bring the results back to the driver and return them as a plain Map
        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<String, Integer> tuple : output) {
            result.put(tuple._1(), tuple._2());
        }

        return result;
    }
}

Note, note, note
Two things in the code above deserve attention:
implements Serializable
private transient JavaSparkContext sc
transient keeps sc from being serialized; without it, you will hit an error like this:

Task not serializable] with root cause
java.io.NotSerializableException: com.quick.spark.xxx

Don't ask how I know. This problem cost me an entire afternoon of blood and tears; I read through answers in Chinese, English, and even Japanese... The text I used is the English edition of The Hunchback of Notre-Dame (《巴黎圣母院》), and the results are below.

Results

(Figure: word-count results)
The code is on GitHub if you'd like to take a look.
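For reference, a minimal controller to trigger the job over HTTP might look like the sketch below. The class name and mapping path are my own placeholders, not necessarily what's in the repo.

import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WordCountController {

    @Autowired
    private WordCountService wordCountService;

    // Hypothetical endpoint: runs the Spark job and returns the word counts as JSON
    @RequestMapping("/wordcount")
    public Map<String, Integer> wordCount() {
        return wordCountService.run();
    }
}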

Postscript

  The code is all back at the office, and the connection where I live is painfully slow, so this short post took over half an hour to write...
  I've been using Spark for less than four days, and this demo has given me a better feel for it. The book I ordered a few days ago, 《Spark快速大数据分析》 (Learning Spark), arrived today and looks worth reading.

Follow-up

This morning I rewrote the code using the lambda expressions introduced in Java 8, as shown in the figure below.

(Figure: the code rewritten with lambdas)

The amount of code is roughly cut in half, and it's said to run faster too...
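Since the screenshot isn't reproduced here, this is my rough reconstruction of what the lambda version of run() might look like, reusing the same sc, SPACE, and imports as the WordCountService above; it is a guess at the code in the figure, not the original.

    public Map<String, Integer> run() {
        JavaRDD<String> lines = sc.textFile("C:\\Users\\bd2\\Downloads\\blsmy.txt").cache();

        JavaPairRDD<String, Integer> counts = lines
                .flatMap(s -> Arrays.asList(SPACE.split(s)))                  // split each line into words
                .mapToPair(word -> new Tuple2<String, Integer>(word, 1))      // pair each word with a count of 1
                .reduceByKey((a, b) -> a + b);                                // sum the counts per word

        Map<String, Integer> result = new HashMap<>();
        for (Tuple2<String, Integer> tuple : counts.collect()) {
            result.put(tuple._1(), tuple._2());
        }
        return result;
    }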
