Feature/fix code coverage issue #228

Merged: 3 commits, Dec 12, 2023
4 changes: 2 additions & 2 deletions README.md
@@ -18,7 +18,7 @@ Migrate and Validate Tables between Origin and Target Cassandra Clusters.

### Prerequisite
- Install Java 8, as the Spark binaries are compiled with it.
- Install Spark version [3.4.1](https://archive.apache.org/dist/spark/spark-3.4.1/) on a single VM (no cluster necessary) where you want to run this job. Spark can be installed by running the following: -
- Install Spark version [3.4.1](https://archive.apache.org/dist/spark/spark-3.4.1/spark-3.4.1-bin-hadoop3-scala2.13.tgz) on a single VM (no cluster necessary) where you want to run this job. Spark can be installed by running the following: -
```
wget https://archive.apache.org/dist/spark/spark-3.4.1/spark-3.4.1-bin-hadoop3-scala2.13.tgz
tar -xvzf spark-3.4.1-bin-hadoop3-scala2.13.tgz
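# Optional follow-up, not part of the project's documented steps (directory
# name assumed from the tarball above): put the extracted distribution on
# PATH so that spark-submit resolves.
export SPARK_HOME="$PWD/spark-3.4.1-bin-hadoop3-scala2.13"
export PATH="$SPARK_HOME/bin:$PATH"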
@@ -97,7 +97,7 @@ Each line above represents a partition-range (`min,max`). Alternatively, you can
./spark-submit --properties-file cdm.properties \
--conf spark.cdm.schema.origin.keyspaceTable="<keyspacename>.<tablename>" \
--conf spark.cdm.tokenRange.partitionFile="/<path-to-file>/<csv-input-filename>" \
--master "local[*]" --driver-memory 25G --executor-memory 25G \
--master "local[*]" --driver-memory 25G --executor-memory 25G \
--class com.datastax.cdm.job.<Migrate|DiffData> cassandra-data-migrator-4.x.x.jar &> logfile_name_$(date +%Y%m%d_%H_%M).txt
```
This mode is specifically useful for processing a subset of partition-ranges that may have failed during a previous run.
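Since the partition file is simply one `min,max` pair per line, reading it back is straightforward. A minimal standalone sketch (class and method names here are hypothetical, not part of CDM):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class PartitionFileSketch {
    // Hypothetical helper: parse one "min,max" line into a [min, max] pair.
    static BigInteger[] parseRange(String line) {
        String[] parts = line.trim().split(",");
        return new BigInteger[] { new BigInteger(parts[0].trim()),
                                  new BigInteger(parts[1].trim()) };
    }

    public static void main(String[] args) {
        List<BigInteger[]> ranges = new ArrayList<>();
        // Token-range values as they would appear in the CSV input file.
        for (String line : new String[] { "1,100", "-507900353496146534,-107285462027022883" }) {
            ranges.add(parseRange(line));
        }
        System.out.println(ranges.size() + " ranges loaded");
    }
}
```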
3 changes: 3 additions & 0 deletions RELEASE.md
@@ -1,4 +1,7 @@
# Release Notes
## [4.1.9 to 4.1.11] - 2023-12-11
- Code test & coverage changes

## [4.1.8] - 2023-10-13
- Upgraded to use Scala 2.13

84 changes: 24 additions & 60 deletions pom.xml
@@ -3,15 +3,14 @@

<groupId>datastax.cdm</groupId>
<artifactId>cassandra-data-migrator</artifactId>
<version>4.1.10-SNAPSHOT</version>
<version>4.1.11-SNAPSHOT</version>
<packaging>jar</packaging>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.version>2.13.12</scala.version>
<scala.main.version>2.13</scala.main.version>
<spark.version>3.4.1</spark.version>
<scalatest.version>3.2.17</scalatest.version>
<connector.version>3.4.1</connector.version>
<cassandra.version>5.0-alpha1</cassandra.version>
<junit.version>5.9.1</junit.version>
@@ -151,12 +150,6 @@
</dependency>

<!-- Test Dependencies -->
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_${scala.main.version}</artifactId>
<version>${scalatest.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
@@ -200,20 +193,27 @@
</resources>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>4.8.0</version>
<executions>
<execution>
<phase>process-sources</phase>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>4.8.0</version>
<executions>
<execution>
<phase>process-sources</phase>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>

</execution>
</executions>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.2</version>
</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
@@ -242,35 +242,6 @@
</execution>
</executions>
</plugin>
<!-- Instructions from http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin -->
<!-- disable surefire -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.2</version>
<configuration>
<skipTests>true</skipTests>
</configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
<groupId>org.scalatest</groupId>
<artifactId>scalatest-maven-plugin</artifactId>
<version>2.2.0</version>
<configuration>
<reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
<junitxml>.</junitxml>
<filereports>WDF TestSuite.txt</filereports>
</configuration>
<executions>
<execution>
<id>test</id>
<goals>
<goal>test</goal>
</goals>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
@@ -304,35 +275,28 @@
<id>jacoco-check</id>
<phase>test</phase>
<goals>
<goal>check</goal>
<goal>report</goal>
<goal>check</goal>
</goals>
<configuration>
<excludes>
<!-- Excluding all the Scala classes -->
<exclude>com.datastax.cdm.job.*</exclude>
</excludes>
<rules>
<rule>
<element>BUNDLE</element>
<limits>
<limit>
<counter>COMPLEXITY</counter>
<value>COVEREDRATIO</value>
<!-- <minimum>0.33</minimum>-->
<minimum>0</minimum>
<minimum>0.33</minimum>
</limit>
<limit>
<counter>INSTRUCTION</counter>
<value>COVEREDRATIO</value>
<!-- <minimum>41%</minimum>-->
<minimum>0%</minimum>
<minimum>45%</minimum>
</limit>
<limit>
<counter>LINE</counter>
<value>MISSEDCOUNT</value>
<!-- <maximum>1544</maximum>-->
<maximum>3085</maximum>
<maximum>1500</maximum>
</limit>
</limits>
</rule>
2 changes: 1 addition & 1 deletion src/main/java/com/datastax/cdm/job/SplitPartitions.java
@@ -35,7 +35,7 @@ public class SplitPartitions {

public static Logger logger = LoggerFactory.getLogger(SplitPartitions.class.getName());

public static Collection<Partition> getRandomSubPartitions(int numSplits, BigInteger min, BigInteger max, int coveragePercent) {
public static List<Partition> getRandomSubPartitions(int numSplits, BigInteger min, BigInteger max, int coveragePercent) {
logger.info("ThreadID: {} Splitting min: {} max: {}", Thread.currentThread().getId(), min, max);
List<Partition> partitions = getSubPartitions(numSplits, min, max, coveragePercent);
Collections.shuffle(partitions);
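The narrowing of the return type from `Collection<Partition>` to `List<Partition>` matches what the method already does internally: `Collections.shuffle` is defined only for `List`, so the narrower type exposes the shuffled, index-addressable result to callers and to the new tests. A small standalone illustration (names hypothetical, not CDM code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShuffleSketch {
    // Collections.shuffle requires a List (positional access), which is why
    // returning List is a better fit than the broader Collection interface.
    static List<Integer> shuffledCopy(List<Integer> input) {
        List<Integer> copy = new ArrayList<>(input);
        Collections.shuffle(copy);
        return copy;
    }

    public static void main(String[] args) {
        List<Integer> out = shuffledCopy(List.of(1, 2, 3, 4, 5));
        System.out.println(out.size() + " elements, order randomized");
    }
}
```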
40 changes: 40 additions & 0 deletions src/test/java/com/datastax/cdm/job/SplitPartitionsTest.java
@@ -0,0 +1,40 @@
package com.datastax.cdm.job;

import org.junit.jupiter.api.Test;

import java.math.BigInteger;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;
import java.util.stream.Stream;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class SplitPartitionsTest {

    @Test
    void getRandomSubPartitionsTest() {
        List<SplitPartitions.Partition> partitions = SplitPartitions.getRandomSubPartitions(10, BigInteger.ONE,
                BigInteger.valueOf(100), 100);
        assertEquals(10, partitions.size());
        partitions.forEach(p -> {
            assertEquals(9, p.getMax().longValue() - p.getMin().longValue());
        });
    }

    @Test
    void getRandomSubPartitionsTestOver100() {
        List<SplitPartitions.Partition> partitions = SplitPartitions.getRandomSubPartitions(8, BigInteger.ONE,
                BigInteger.valueOf(44), 200);
        assertEquals(8, partitions.size());
    }

    @Test
    void batchesTest() {
        List<String> mutable_list = Arrays.asList("e1", "e2", "e3", "e4", "e5", "e6");
        Stream<List<String>> out = SplitPartitions.batches(mutable_list, 2);
        assertEquals(3, out.count());
    }

}
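The `batchesTest` above expects `batches(...)` to split 6 elements into 3 sub-lists of 2. One plausible implementation of such a helper, sketched here for illustration (the project's actual `SplitPartitions.batches` may differ):

```java
import java.util.List;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class BatchesSketch {
    // Split a list into consecutive sub-lists of at most `size` elements.
    static <T> Stream<List<T>> batches(List<T> source, int size) {
        int count = (source.size() + size - 1) / size; // ceiling division
        return IntStream.range(0, count)
                .mapToObj(i -> source.subList(i * size,
                        Math.min((i + 1) * size, source.size())));
    }

    public static void main(String[] args) {
        long n = batches(List.of("e1", "e2", "e3", "e4", "e5", "e6"), 2).count();
        System.out.println(n + " batches");
    }
}
```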