Relocating Classes using Apache Maven Shade Plugin

Ana Suzuki
2 min read · Oct 24, 2017



I just had a weird encounter while running a Spark job.

I have a Maven project that contains two modules: ModuleA and ModuleB.

To make a long story short, ModuleA depends on ModuleB, and ModuleA calls a class from ModuleB.

Upon submitting the job, I got this error:

Exception in thread "main" java.lang.NoSuchMethodError: io.netty.buffer.CompositeByteBuf.addComponents(ZLjava/lang/Iterable;)Lio/netty/buffer/CompositeByteBuf;
...
...
...

Found out that multiple Netty versions were being pulled in, and the version that won the classpath didn't have that addComponents overload (the dependency-tree sketch below shows how to surface this):

  • ModuleA: org.apache.spark:spark-core and org.apache.hbase:hbase-server
  • ModuleB: org.elasticsearch.client:transport
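If you want to check this yourself, the Maven dependency tree makes the conflict visible. A minimal sketch, run from the parent project (the maven-dependency-plugin's tree goal accepts an -Dincludes filter):

# Show only Netty artifacts in each module's dependency tree
mvn dependency:tree -Dincludes=io.netty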

To tell you the truth, I had been debugging this for hours; I almost lost my humanity. I tried excluding the package in pom.xml, in assembly.xml, and even in the Spark config, but to no avail.

Finally, I came across the Apache Maven Shade Plugin, one of whose abilities is to rename the packages of selected dependencies. Take a look at Apache Spark's and Elasticsearch's pom.xml files. They also use this plugin.

The solution is this:

  1. Exclude the Netty packages from org.apache.hbase:hbase-server
<!-- ModuleA's pom.xml -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <version>${hbase.version}</version>
  <exclusions>
    <!-- Keep hbase-server from dragging in its own Netty -->
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
    </exclusion>
  </exclusions>
</dependency>
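To confirm the exclusion took effect, re-run the dependency tree for ModuleA; hbase-server should no longer show up as a source of io.netty. A sketch, assuming ModuleA's artifactId is moduleA (adjust -pl to your actual module name):

# Only spark-core should still pull in io.netty for ModuleA now
mvn dependency:tree -Dincludes=io.netty -pl moduleA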

2. Add the maven-shade-plugin to ModuleB to relocate the Netty classes that come with org.elasticsearch.client:transport

<!-- ModuleB's pom.xml -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.0</version>
  <configuration>
    <!-- Replace the main artifact instead of attaching a -shaded jar -->
    <shadedArtifactAttached>false</shadedArtifactAttached>
    <artifactSet>
      <includes>
        <include>*:*</include>
      </includes>
    </artifactSet>
    <filters>
      <!-- Strip signature files so the shaded jar isn't rejected as tampered -->
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
    <relocations combine.children="append">
      <!-- Move io.netty classes into a private package -->
      <relocation>
        <pattern>io.netty</pattern>
        <shadedPattern>com.shaded_package.io.netty</shadedPattern>
        <includes>
          <include>io.netty.**</include>
        </includes>
      </relocation>
    </relocations>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>

This basically prefixes every class whose package starts with io.netty with com.shaded_package, and it also rewrites ModuleB's own bytecode so its references point at the relocated classes. However, if you doubt this and want proof, compile the module to a jar file and do jar -tf moduleb.jar | grep shaded_package
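For illustration only, the listing should show Netty's class files under the relocated path, something like this (the exact entries depend on your Netty version):

$ jar -tf moduleb.jar | grep shaded_package
com/shaded_package/io/netty/buffer/ByteBuf.class
com/shaded_package/io/netty/buffer/CompositeByteBuf.class
...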

3. I didn't touch the Netty packages from org.apache.spark:spark-core, since that is the version my job needs on the classpath.

After all of that, my Spark job works like a charm.
