Some time ago, I introduced Dagger into our project (Java 8). The migration itself went fine and wasn't large, but the project's build time has since increased by roughly 40% (the project has 4k+ source files).
So the lead engineer gave me this task (I've rephrased it slightly):
Create a new project with Dagger and add an interface that Dagger would generate an implementation for. Then add 100 such interfaces, then 200, 300, 1000. By measuring compilation time at each step, we can determine the relationship between the two: whether it's linear or not, and how fast it grows.
Obviously, I don't want to do this manually (or, ideally, at all). But I still need to figure out how Dagger affects compilation time.
My hypothesis is that it depends primarily on the total number of source files rather than on the number of Dagger interfaces; I suspect it's the annotation scanning that's costly, not the code generation itself. But I need data.
How can I get it?
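To at least automate the scaling step, here's a minimal sketch of a generator that emits N trivial injectable classes and N `@Component` interfaces into the benchmark project (all names here, i.e. the `bench` package, `BenchService`, `BenchComponent`, are made up for illustration; adapt the output path to your layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Generates N pairs of (injectable class, Dagger component interface)
// so the benchmark project can be scaled without writing files by hand.
public class DaggerSourceGenerator {

    // Source of the i-th injectable class.
    static String serviceSource(int i) {
        return "package bench;\n\n"
             + "import javax.inject.Inject;\n\n"
             + "public class BenchService" + i + " {\n"
             + "    @Inject public BenchService" + i + "() {}\n"
             + "}\n";
    }

    // Source of the i-th component interface Dagger will implement.
    static String componentSource(int i) {
        return "package bench;\n\n"
             + "import dagger.Component;\n\n"
             + "@Component\n"
             + "public interface BenchComponent" + i + " {\n"
             + "    BenchService" + i + " service();\n"
             + "}\n";
    }

    public static void main(String[] args) throws IOException {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 100;
        Path dir = Paths.get("src/main/java/bench");
        Files.createDirectories(dir);
        for (int i = 0; i < n; i++) {
            Files.write(dir.resolve("BenchService" + i + ".java"),
                        serviceSource(i).getBytes());
            Files.write(dir.resolve("BenchComponent" + i + ".java"),
                        componentSource(i).getBytes());
        }
    }
}
```

Varying the number of plain classes and the number of component interfaces independently would also let me test the scanning-vs-generation hypothesis above.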
<!-- pom.xml -->
<dagger.ver>2.51.1</dagger.ver>
<!-- ... -->
<dependency>
  <groupId>com.google.dagger</groupId>
  <artifactId>dagger</artifactId>
  <version>${dagger.ver}</version>
</dependency>
<dependency>
  <groupId>com.google.dagger</groupId>
  <artifactId>dagger-compiler</artifactId>
  <version>${dagger.ver}</version>
  <scope>provided</scope>
</dependency>
<!-- ... -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArguments>
      <XDignore.symbol.file/>
    </compilerArguments>
    <fork>true</fork>
    <annotationProcessorPaths>
      <path>
        <groupId>com.google.dagger</groupId>
        <artifactId>dagger-compiler</artifactId>
        <version>${dagger.ver}</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
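Once I have (source file count, build seconds) pairs, a least-squares fit in log-log space would distinguish linear from superlinear growth: fitting time = a * n^b, an exponent b near 1 means linear, b > 1 means superlinear. A generic sketch (nothing Dagger-specific; the data in main is synthetic, not a real measurement):

```java
import java.util.Arrays;

// Fits measured (n, time) pairs to a power law time = a * n^b
// by ordinary least squares on the log-log transformed data.
public class BuildTimeFit {

    // Returns the exponent b of the best-fit power law.
    static double powerLawExponent(double[] n, double[] t) {
        double[] x = Arrays.stream(n).map(Math::log).toArray();
        double[] y = Arrays.stream(t).map(Math::log).toArray();
        int k = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < k; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        return (k * sxy - sx * sy) / (k * sxx - sx * sx);
    }

    public static void main(String[] args) {
        // Synthetic sanity check: t = 0.001 * n^2 should give b close to 2.
        double[] n = {100, 200, 300, 1000};
        double[] t = new double[n.length];
        for (int i = 0; i < n.length; i++) t[i] = 0.001 * n[i] * n[i];
        System.out.println(powerLawExponent(n, t));
    }
}
```

With real data the fit won't be exact, so I'd also eyeball a plot of the residuals before trusting the exponent.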