
Creating a REST API Call with a WorkItemHandler on Red Hat Process Automation Manager

Usually we use tasks to represent the steps of a process in Red Hat Process Automation Manager (RHPAM). But most of the time we need a customized task involved, and that's where WorkItemHandler comes in handy.
For this demo, I'm creating a simple WorkItemHandler that provides a custom task on top of RHPAM, which fetches a response from a third-party API provider.

So here is pretty much the raw concept,

As usual, we begin by creating a pom.xml file. It is important to set the dependencies' scope to provided, so they are not bundled into the jar (the KIE Server already provides them at runtime).

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.edw</groupId>
    <artifactId>anime-work-item-handler</artifactId>
    <version>1.0.0</version>
    <description>a simple rest api to be used within RHPAM</description>
    <dependencies>
        <!-- artifact name and versions are illustrative; match them to your RHPAM release -->
        <dependency>
            <groupId>org.kie</groupId>
            <artifactId>kie-api</artifactId>
            <version>7.38.0.Final</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.13</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
</project>

and a simple Java class to do a simple REST API call,

package com.edw;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.HashMap;
import java.util.List;

import org.codehaus.jackson.map.ObjectMapper;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class AnimeWorkItemHandler implements WorkItemHandler {
    public void abortWorkItem(WorkItem wi, WorkItemManager wim) {
        // nothing to clean up when the work item is aborted
    }

    // will try to fire to the https://jikan.moe/ api
    public void executeWorkItem(WorkItem wi, WorkItemManager wim) {
        String name = (String) wi.getParameter("name");

        String nameResponse = "";
        String imageUrl = "";

        try {
            // api endpoint = https://api.jikan.moe/v3/search/character?q=???&limit=1
            URL url = new URL(String.format("https://api.jikan.moe/v3/search/character?q=%s&limit=1", name));

            URLConnection urlConnection = url.openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(urlConnection.getInputStream()));

            String inputLine;
            StringBuffer stringBuffer = new StringBuffer();
            while ((inputLine = in.readLine()) != null) {
                stringBuffer.append(inputLine);
            }
            in.close();

            ObjectMapper objectMapper = new ObjectMapper();
            HashMap jsonResult = objectMapper.readValue(stringBuffer.toString(), HashMap.class);
            List<HashMap> results = (List<HashMap>) jsonResult.get("results");

            nameResponse = (String) results.get(0).get("name");
            imageUrl = (String) results.get(0).get("image_url");
        } catch (Exception e) {
            e.printStackTrace();
        }

        // always complete the work item so the process can continue
        HashMap result = new HashMap();
        result.put("nameResponse", nameResponse);
        result.put("imageUrl", imageUrl);
        wim.completeWorkItem(wi.getId(), result);
    }
}
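Before wiring the handler into RHPAM, it can help to eyeball the request it will send. Here is a minimal shell sketch of the same substitution the String.format call performs (note that, like the Java code, this does not URL-encode the name, so multi-word values would need encoding first):

```shell
# substitute the task's "name" parameter into the Jikan search endpoint,
# mirroring the String.format call in the handler
name="naruto"
url=$(printf 'https://api.jikan.moe/v3/search/character?q=%s&limit=1' "$name")
echo "$url"
```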

Once created, we need to build it with a Maven command,

mvn clean package

After the jar has been built, the next step is to upload that jar through your Business Central,

If the upload succeeds, we can see the uploaded jar file within our Nexus,

Next, let's go to the RHPAM dashboard and create a work item definition,

And put in these values; make sure the contents of “parameters” and “results” are the same as in your Java class, which in my case are “name”, “nameResponse” and “imageUrl”.

[
  [
    "name" : "AnimeWorkItemHandler",
    "parameters" : [
        "name" : new StringDataType()
    ],
    "results" : [
        "nameResponse" : new StringDataType(),
        "imageUrl" : new StringDataType()
    ],
    "displayName" : "AnimeWorkItemHandler",
    "icon" : ""
  ]
]

The next step, and perhaps the most important one, is to include our newly created jar file in the RHPAM project. We can do so by going to the “Settings” tab and creating a new Work Item Handler,

And import the library and dependency needed

The last step is adding our new WorkItemHandler to our existing workflow,

So as you can see, it's not that difficult to create a custom task on RHPAM. The full code for this project can be accessed on my GitHub page,


How to Create and Test Workflow on Red Hat Process Automation Manager with REST API

Red Hat Process Automation Manager, or RHPAM, is a platform that is used for automating business decisions and processes. It enables enterprise business and IT users to document, simulate, manage, automate and monitor business processes and decisions.

And for today's example, I'm trying to simulate logic that validates a very simple condition: checking whether a user registration is valid or not. The rule is that a user should have a valid name and an age between 21 and 49.

First we need to create a simple Data Object to accommodate our parameter data and result,

Next is creating a simple decision table,

Put data constraint there,

Create expected result,

After adding some decision logic, the end result should be like this,

Next is creating a workflow (Business Process); here we do some data unmarshalling, logging, validate the data against our existing decision table, and provide the result. The result of this workflow is going to be stored in a “status” variable.

The whole project structure will look like this,

Next, we can build, and deploy after that. This will deploy our project to the KIE Server.

Now here comes the exciting part: how to access and test our deployed PAM project through its REST API. First we need to understand that our project is deployed to a KIE Server, so we need to see what API services are provided there. We can check easily by opening our KIE Server's Swagger page,


We can list all projects available on our KIE Server with this curl command; don't forget to replace pamuser and pampassword with your actual RHPAM username and password,

curl -kv https://pamuser:pampassword@kieserver-url/services/rest/server/containers/

Use this curl command to see the api endpoint for our business process workflow,

curl -kv https://pamuser:pampassword@kieserver-url/services/rest/server/containers/Project01_1.0.0-SNAPSHOT/processes

Once we have found our business process id, we can start our workflow by using this curl command,

curl -kv https://pamuser:pampassword@kieserver-url/services/rest/server/containers/Project01_1.0.0-SNAPSHOT/processes/Project01.Business01/instances \
-H 'Content-Type: application/json' \
--data-raw '{
    "application": {
        "com.edw.project01.User": {
            "age": 2,
            "name": "edwin"
        }
    }
}'

This curl command will return a specific numerical id,

We can see the progress of the corresponding id on the “Process Instances” menu,

Or by using a curl api call,

curl -kv https://pamuser:pampassword@kieserver-url/services/rest/server/containers/Project01_1.0.0-SNAPSHOT/processes/instances/12
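All of these KIE Server endpoints follow the same URL pattern, so the call for a specific process variable can be composed from the container id and the instance id; a small sketch (kieserver-url stays a placeholder for your actual host):

```shell
# compose the endpoint that reads one process variable ("status")
base="https://kieserver-url/services/rest/server"
container="Project01_1.0.0-SNAPSHOT"
instance=12
echo "$base/containers/$container/processes/instances/$instance/variables/instances/status"
```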

Next is checking the workflow result. Given that we provided an “age” parameter below 21, we are expecting the registration result to be “false”. We can check it with this curl command,

curl -kv https://pamuser:pampassword@kieserver-url/services/rest/server/containers/Project01_1.0.0-SNAPSHOT/processes/instances/12/variables/instances/status

And it is showing that the value is “false”, returned as an XML response,

Let's try with correct data,

curl -kv https://pamuser:pampassword@kieserver-url/services/rest/server/containers/Project01_1.0.0-SNAPSHOT/processes/Project01.Business01/instances \
-H 'Content-Type: application/json' \
--data-raw '{
    "application": {
        "com.edw.project01.User": {
            "age": 25,
            "name": "edwin"
        }
    }
}'

This time it shows “true”, again as an XML response,

You can check my sample code on GitHub,


So, it's pretty simple, right?
Have fun with RHPAM (Y)


Migrating ReplicationControllers Between Openshift Clusters

For this scenario, I'm trying to migrate replicationcontrollers (RC) between two OCP clusters with different versions. One OCP is on version 3.x, while the other one is 4.x.
It's actually quite tricky. The first method I tried was a simple export on OCP 3,

oc get rc -o yaml -n projectname --export > rc.yaml

And do a simple import on OCP 4

oc create -n projectname -f rc.yaml

But an error happened,

Error from server (Forbidden): replicationcontrollers "rc-1" is forbidden: 
cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>

It seems that despite my using the --export parameter, the previous DC's uid is somehow still exported into rc.yaml,

  metadata:
    labels:
      app: HelloWorld
      group: com.redhat.edw
      openshift.io/deployment-config.name: helloworld
      provider: fabric8
      version: "1.0"
    name: helloworld-5
    namespace: project
    ownerReferences:
    - apiVersion: apps.openshift.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: DeploymentConfig
      name: helloworld
      uid: 8a96de62-9be4-11ea-a05c-0a659b38d468
    resourceVersion: "65350349"
    selfLink: /api/v1/namespaces/project/replicationcontrollers/helloworld-5
    uid: 257eda84-9be7-11ea-a05c-0a659b38d468

The solution is removing the ownerReferences block from the yaml,

sed -i '/ownerReferences/,+6 d' rc.yaml

Openshift will regenerate the ownerReferences block once the RC is successfully imported into the new project.
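To see exactly what that sed address range deletes, here is a self-contained sketch against a trimmed-down rc.yaml (the file content is illustrative, and `,+N` addressing is a GNU sed extension): the matching line plus the six lines after it are removed.

```shell
# build a miniature rc.yaml containing an ownerReferences block
cat > /tmp/rc-demo.yaml <<'EOF'
metadata:
  name: helloworld-5
  ownerReferences:
  - apiVersion: apps.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: DeploymentConfig
    name: helloworld
    uid: 8a96de62-9be4-11ea-a05c-0a659b38d468
spec:
  replicas: 1
EOF
# delete the matching line and the 6 lines that follow it
sed -i '/ownerReferences/,+6 d' /tmp/rc-demo.yaml
cat /tmp/rc-demo.yaml
```

Only the metadata name and the spec survive; the entire owner reference, including the stale uid, is gone.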

But another problem arises. Despite having successfully imported all my RCs, they are not showing when I run the oc get rc command. The culprit is revisionHistoryLimit; removing it from our DC solves this problem.

oc patch  dc helloworld -n project --type json --patch '[{ "op": "remove", "path": "/spec/revisionHistoryLimit" }]'

Migrating Image Stream from One Openshift Image Registry to Another Image Registry with Skopeo

I have a requirement where I need to move all images from the Image Registry on Openshift 3 to the Image Registry on Openshift 4. There are a lot of ways to do it, such as mounting the same disk to multiple Openshift instances, or moving them manually using docker pull, tag and then push.

After brainstorming for quite some time, I came up with the solution of using Skopeo as the tool for the image migration. It's a very convenient tool for copying images from one image registry to another.

It is actually a very simple script. First, we need to capture all images within every OCP3 project,

oc get project -o template --template='{{range.items}}{{.metadata.name}}{{"\n"}}{{end}}' | while read line
do
	oc get imagestreamtag -n $line -o template \
		--template='{{range.items}}{{.metadata.namespace}}{{"/"}}{{.metadata.name}}{{"\n"}}{{end}}' >> images.txt
done
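One detail worth stressing: the redirect inside the loop body has to append (`>>`) rather than truncate (`>`), otherwise each project would overwrite images.txt and only the last project's images would survive. A quick local demo with made-up project names (no cluster needed):

```shell
rm -f /tmp/images-demo.txt
# three pretend project names stand in for the oc get project output
printf 'project-a\nproject-b\nproject-c\n' | while read line
do
    echo "$line/someimage:latest" >> /tmp/images-demo.txt
done
cat /tmp/images-demo.txt
```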

Use this command to capture your OCP username and token,

# capturing your username
oc whoami 

# capturing your token
oc whoami -t

And then we need to iterate over the content of the generated file, using the username and token you got from the previous command.

cat images.txt | while read line
do
	# the destination registry hostname is an example; use your actual OCP4 registry route
	skopeo copy --src-creds ocp3username:ocp3token --src-tls-verify=false \
		--dest-creds ocp4username:ocp4token --dest-tls-verify=false \
		docker://docker-registry-from.ocp3/$line \
		docker://docker-registry-to.ocp4/$line
done

After all is done, what is left is doing a simple validation to count how many images have been migrated.

oc get imagestreamtag --no-headers | wc -l
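Counting is the quick check; to be thorough you can capture the image list from both clusters the same way and diff them. Sketched here with hypothetical captured lists (the file names and contents are illustrative):

```shell
# pretend lists captured from each cluster with the earlier imagestreamtag loop
printf 'project/app-a:latest\nproject/app-b:1.0\n' > /tmp/ocp3-images.txt
printf 'project/app-a:latest\nproject/app-b:1.0\n' > /tmp/ocp4-images.txt
# no diff output means every image stream tag made it across
diff /tmp/ocp3-images.txt /tmp/ocp4-images.txt && echo "migration complete"
```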

Deploy a New Application and Building It Using Openshift S2I Feature and a Custom Base Image

There are lots of ways to deploy apps to Openshift; one of them is the oc new-app command. We are now trying to create a new app using that command, but specifying a custom base image for it. For this example, I'm using an OpenJDK 11 and RHEL 7 base image.

The commands are quite easy; run them in your code folder,

D:\source> oc new-app registry.access.redhat.com/openjdk/openjdk-11-rhel7~. --name=spring-boot-2

D:\source> oc start-build spring-boot-2 --from-dir=.

It will create a BuildConfig with the name of spring-boot-2,

D:\source> oc get bc spring-boot-2
NAME            TYPE      FROM      LATEST
spring-boot-2   Source    Binary    3

We can see the detail of our BuildConfig by running this command,

D:\source> oc describe bc spring-boot-2

Strategy:       Source
From Image:     ImageStreamTag openjdk-11-rhel7:latest
Output to:      ImageStreamTag spring-boot-2:latest
Binary:         provided on build

And if we have some code changes and want to redeploy, we can run this command,

D:\source> oc start-build spring-boot-2 --from-dir=.

It will rebuild the whole image, using the new code uploaded from the existing source directory.