CognitiveJ is an open-source, fluent Java (8) API that manages and orchestrates the interaction between Java applications and Microsoft's Cognitive Services (Project Oxford) machine-learning and image-processing libraries, allowing you to query and analyse images.
Faces
- Facial Detection – Capture faces, gender, age and associated facial features and landmarks from an image
- Emotion Detection – Derive emotional state from faces within an image
- Verification – Verify, with a confidence score, whether two different faces are of the same person
- Identification – Identify a person from a set of known people.
- Find Similar – Detect, group and rank similar faces
- Grouping – Group people based on facial characteristics
- Person Group/Person/Face Lists – Create, manage and train groups, face lists and persons used by the identification, grouping and find-similar features.
Vision
- Image Describe – Describe the visual content of an image and return a real-world caption of what the image shows.
- Image Analysis – Extract key details from an image and flag whether it is of an adult/racy nature.
- OCR – Detect and extract text from an image.
- Thumbnail – Create thumbnail images based on key points of interest from the image.
Overlay (Experimental)
- Apply image layers onto images to visually represent found features.
- Apply captions onto faces and images.
- Graphically illustrate the Faces/Vision feature sets.
- Pixelate faces in an image.
Other Features
- Works with local or remote images (a sketch of the local-file case follows this list)
- Validation of parameters
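Since the library accepts local images as well as URLs, here is a minimal sketch of the local-file case. It assumes overloads of ImageOverlayBuilder.builder(...) and FaceScenarios.findFaces(...) that accept a java.io.File exist alongside the URL-based versions used in the examples later in this README; check the actual signatures before relying on it.

public static void main(String[] args) {
    // ASSUMPTION: File-based overloads of builder(...) and findFaces(...) exist
    // alongside the URL-based versions used elsewhere in this README.
    File localImage = new File("family-photo.jpg");
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    ImageOverlayBuilder.builder(localImage)
            .outlineFacesOnImage(faceScenarios.findFaces(localImage), RectangleType.FULL, CognitiveJColourPalette.STRAWBERRY)
            .launchViewer();
}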
Getting Started
- Java 8
- Subscription keys for the MS Cognitive libraries (free registration here)
- Add the dependency from JCenter
Gradle:
repositories {
    jcenter()
}
dependencies {
    compile "cognitivej:cognitivej:0.6.2"
    ...
}
Maven:
<dependency>
    <groupId>cognitivej</groupId>
    <artifactId>cognitivej</artifactId>
    <version>0.6.2</version>
</dependency>
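The examples that follow read the subscription keys through getProperty (presumably a static import of System.getProperty). The property names are simply the convention used in these snippets; one way to supply your own keys is sketched below.

// Sketch only: the property names mirror the convention used by the examples below;
// replace the placeholders with the keys from your Cognitive Services subscription.
System.setProperty("azure.cognitive.subscriptionKey", "<face-api-key>");
System.setProperty("azure.cognitive.emotion.subscriptionKey", "<emotion-api-key>");
System.setProperty("azure.cognitive.vision.subscriptionKey", "<vision-api-key>");
// Or pass them on the command line instead:
// java -Dazure.cognitive.subscriptionKey=<face-api-key> -Dazure.cognitive.emotion.subscriptionKey=<emotion-api-key> ... YourExample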
Chained Builders - The builders are simple, lightweight wrappers over the MS Cognitive REST calls that manage the marshalling of parameters/responses, the HTTP communication and retry strategies. The builders are chained to allow for follow-up manipulation of resources that have been created or retrieved, where applicable.
Scenarios - Scenarios are real-world use-case classes that greatly simplify the interaction between the builders and the wrapper classes. While there is no reason you can't interact directly with the builders, the scenarios already have much of the boilerplate logic in place to reduce the burden.
Overlay - Allows for creating and writing new images based on the results from the queries. Note: work is ongoing around collision detection and observing boundaries.
Wrappers - Simple domain wrappers around request/response/parameter objects (e.g. Face, FaceAttributes, Person etc.).
Face – Detect can detect faces within an image and return them as a collection of Face results.
public static void main(String[] args) {
    // The Face and Emotion subscription keys are read from system properties.
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(IMAGE_URL);
    // Detect every face in the image, outline each one and open the result in a viewer.
    imageOverlayBuilder.outlineFacesOnImage(faceScenarios.findFaces(IMAGE_URL), RectangleType.FULL,
            CognitiveJColourPalette.STRAWBERRY).launchViewer();
}
Face – Landmarks can detect a face within an image and apply its facial landmarks.
public static void main(String[] args) throws IOException {
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    // Detect a single face and draw its facial landmarks onto the image.
    Face face = faceScenarios.findSingleFace(IMAGE_URL);
    ImageOverlayBuilder.builder(IMAGE_URL).outFaceLandmarksOnImage(face).launchViewer();
}
Face – Detect with Attributes displays associated attributes for detected faces
public static void main(String[] args) {
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    List<Face> faces = faceScenarios.findFaces(IMAGE_URL);
    // Outline each face and write its attributes (age, gender, etc.) to the side of the image.
    ImageOverlayBuilder.builder(IMAGE_URL)
            .outlineFacesOnImage(faces, RectangleType.CORNERED, CognitiveJColourPalette.MEADOW)
            .writeFaceAttributesToTheSide(faces, CognitiveJColourPalette.MEADOW)
            .launchViewer();
}
Face – Verify will validate (with a confidence score) whether two different faces are of the same person.
public static void main(String[] args) {
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(CANDIDATE_1);
    // Verify whether CANDIDATE_1 and CANDIDATE_2 show the same person and overlay the result.
    imageOverlayBuilder.verify(CANDIDATE_2, faceScenarios.verifyFaces(CANDIDATE_1, CANDIDATE_2)).launchViewer();
}
Face – Identify will identify a person (or people) within an image. Before the library can identify anyone, the Cognitive services must first be supplied with a sample set of candidates. Currently supports up to 1,000 candidates.
public static void main(String[] args) {
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(IMAGE);
    // Build a named person group from the candidate images; this group is what
    // later identification calls are made against.
    List<ImageHolder> candidates = candidates();
    People people = ScenarioHelper.createPeopleFromHoldingImages(candidates, ImageNamingStrategy.DEFAULT);
    String groupId = faceScenarios.createGroupWithPeople(randomAlphabetic(6).toLowerCase(), people);
}
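The snippet above only builds and registers the candidate group; identification still requires the group to be trained and then queried with the faces detected in the target image. The continuation below is a sketch of that workflow only: trainGroup and identifyPersonsInGroup are hypothetical names standing in for whatever FaceScenarios actually exposes, so treat it as pseudocode rather than confirmed API.

// Hypothetical continuation: trainGroup and identifyPersonsInGroup are illustrative
// names only, not confirmed CognitiveJ methods; check FaceScenarios for the real calls.
faceScenarios.trainGroup(groupId);                       // train the newly created person group
List<Face> faces = faceScenarios.findFaces(IMAGE);       // detect the faces to identify
faceScenarios.identifyPersonsInGroup(groupId, faces);    // match each face against the group
imageOverlayBuilder.launchViewer();                      // view the (annotated) source image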
**Face – Pixelate** will identify all faces within an image and pixelate them.
public static void main(String[] args) {
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(IMAGE);
    // Pixelate every detected face, then open the modified image in a viewer.
    faceScenarios.findFaces(IMAGE).forEach(imageOverlayBuilder::pixelateFaceOnImage);
    imageOverlayBuilder.launchViewer();
}
Emotion – Detect will detect the emotion shown by each face within an image.
public static void main(String[] args) {
    FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.subscriptionKey"),
            getProperty("azure.cognitive.emotion.subscriptionKey"));
    // Detect the emotion of each face and overlay it onto the image.
    ImageOverlayBuilder.builder(IMAGE_URL).outlineEmotionsOnImage(faceScenarios.findEmotionFaces(IMAGE_URL)).launchViewer();
}
Vision – Describe will analyse the contents of an image and describe it with a human-readable caption.
public static void main(String[] args) {
    ComputerVisionScenario computerVisionScenario = new ComputerVisionScenario(getProperty("azure.cognitive.vision.subscriptionKey"));
    // Generate a natural-language description of the image and overlay it as a caption.
    ImageDescription imageDescription = computerVisionScenario.describeImage(IMAGE_URL);
    ImageOverlayBuilder.builder(IMAGE_URL).describeImage(imageDescription).launchViewer();
}
**Vision – OCR** will analyse an image and extract any text within it into a machine-readable result.
public static void main(String[] args) {
    ComputerVisionScenario computerVisionScenario = new ComputerVisionScenario(getProperty("azure.cognitive.vision.subscriptionKey"));
    // Run OCR over the image and overlay the extracted text onto it.
    OCRResult ocrResult = computerVisionScenario.ocrImage(IMAGE_URL);
    ImageOverlayBuilder.builder(IMAGE_URL).ocrImage(ocrResult).launchViewer();
}