Openshift Posts

Migrating ReplicationControllers Between Openshift Clusters

For this scenario, I'm trying to migrate ReplicationControllers (RC) between two OCP clusters with different versions. One cluster is on version 3.x, while the other one is on 4.x.
It's actually quite tricky. This is the first method I tried, a simple export on OCP 3,

oc get rc -o yaml -n projectname --export > rc.yaml

And do a simple import on OCP 4

oc create -n projectname -f rc.yaml

But an error happens,

Error from server (Forbidden): replicationcontrollers "rc-1" is forbidden: 
cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>

It seems like despite using the --export parameter, it somehow still exports the previous DC uid in rc.yaml,

    labels:
      app: HelloWorld
      group: com.redhat.edw
      openshift.io/deployment-config.name: helloworld
      provider: fabric8
      version: "1.0"
    name: helloworld-5
    namespace: project
    ownerReferences:
    - apiVersion: apps.openshift.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: DeploymentConfig
      name: helloworld
      uid: 8a96de62-9be4-11ea-a05c-0a659b38d468
    resourceVersion: "65350349"
    selfLink: /api/v1/namespaces/project/replicationcontrollers/helloworld-5
    uid: 257eda84-9be7-11ea-a05c-0a659b38d468

The solution is removing the ownerReferences tag from the yaml,

sed -i '/ownerReferences/,+6 d' rc.yaml

The ownerReferences tag will be regenerated once the RC is successfully imported into the new project.
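
Applied to the metadata above, that sed command deletes the ownerReferences line plus the six lines that follow it, leaving roughly this:

    labels:
      app: HelloWorld
      group: com.redhat.edw
      openshift.io/deployment-config.name: helloworld
      provider: fabric8
      version: "1.0"
    name: helloworld-5
    namespace: project
    resourceVersion: "65350349"
    selfLink: /api/v1/namespaces/project/replicationcontrollers/helloworld-5
    uid: 257eda84-9be7-11ea-a05c-0a659b38d468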


But another problem arises. Despite having successfully imported all my RCs, they are not showing up when I do an oc get rc command. The culprit is revisionHistoryLimit; removing it from our DC solves this problem.

oc patch  dc helloworld -n project --type json --patch '[{ "op": "remove", "path": "/spec/revisionHistoryLimit" }]'
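
Afterwards, listing the ReplicationControllers again (just a plain oc get, nothing specific to this workaround) should show the imported RCs:

oc get rc -n project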

Securing Connection Between Pods in Openshift with SSL

In this post, I'm trying to create a simple microservices application on top of Openshift 3.11, where each service talks to the other over a secure connection using a self-signed SSL certificate managed by Openshift.

The goal of having Openshift manage the SSL certificate through an Openshift Secret is to be able to roll or rotate the certificate on every service from Openshift, without having to replace the SSL certificate on each service manually.

First, generate a p12 keystore by using keytool,

cert>keytool -genkey -alias edw 
	-keystore edw.p12 -storetype PKCS12 
	-keyalg RSA -storepass password 
	-validity 730 -keysize 4096
What is your first and last name?
  [Unknown]:  Edwin
What is the name of your organizational unit?
  [Unknown]:  Company 01
What is the name of your organization?
  [Unknown]:  IT
What is the name of your City or Locality?
  [Unknown]:  Jakarta
What is the name of your State or Province?
  [Unknown]:  Jakarta
What is the two-letter country code for this unit?
  [Unknown]:  ID
Is CN=Edwin, OU=Company 01, O=IT, L=Jakarta, ST=Jakarta, C=ID correct?
  [no]:  yes
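
To verify what ended up inside the generated keystore, a plain keytool listing (nothing project-specific) can be used:

keytool -list -keystore edw.p12 -storetype PKCS12 -storepass password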

Next is creating two Java projects which are connected to one another,

https://github.com/edwin/ssl-pods-example
https://github.com/edwin/ssl-pods-example-2

There are several parts of the code that need mentioning,

First is making sure the https option is active in application.properties, including our p12 certificate and making the certificate password a parameter. This parameter will later be injected as an environment variable on Openshift.

server.ssl.key-store-type=PKCS12
server.ssl.key-store=cert/edw.p12
server.ssl.key-store-password=${SSLPASSWORD}
server.ssl.key-alias=edw

server.port=8443
server.ssl.enabled=true

And next, because we are using a custom certificate, don't forget to include it in the RestTemplate.

// Apache HttpClient 4.x and Spring imports used by this configuration class
import java.io.FileInputStream;
import java.security.KeyStore;

import javax.net.ssl.SSLContext;

import org.apache.http.client.HttpClient;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

@Configuration
public class MyRestTemplate {

    @Value("${server.ssl.key-store}")
    private String sslKeyStore;

    @Value("${server.ssl.key-store-password}")
    private String sslPassword;

    @Bean
    public RestTemplate restTemplate() throws Exception {
        KeyStore clientStore = KeyStore.getInstance("PKCS12");
        clientStore.load(new FileInputStream(sslKeyStore), sslPassword.toCharArray());

        SSLContext sslContext = SSLContextBuilder
                .create()
                .loadTrustMaterial(clientStore, new TrustSelfSignedStrategy())
                .build();
        SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
        HttpClient httpClient = HttpClients.custom()
                .setSSLSocketFactory(socketFactory)
                .build();
        HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory(httpClient);

        return new RestTemplate(factory);
    }
}
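
As a rough illustration of how this RestTemplate could then be used to call the other pod over https, here is a hypothetical caller (the controller, endpoint path, and internal service URL are assumptions, not taken from the repositories):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class CallOtherServiceController {

    // the RestTemplate bean defined above, with our keystore as trust material
    @Autowired
    private RestTemplate restTemplate;

    // hypothetical endpoint; the service name and port are assumptions
    @GetMapping("/call-other-service")
    public String callOtherService() {
        return restTemplate.getForObject("https://ssl-pods-example-2:8443/", String.class);
    }
}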

Deploy those two applications to Openshift,

oc new-app registry.access.redhat.com/openjdk/openjdk-11-rhel7~https://github.com/edwin/ssl-pods-example

oc new-app registry.access.redhat.com/openjdk/openjdk-11-rhel7~https://github.com/edwin/ssl-pods-example-2

Deploy the certificate as an OCP Secret and mount it as a volume on our applications,

oc create secret generic cert --from-file=cert\edw.p12

oc set volume dc ssl-pods-example --add -t secret -m /deployments/cert --name cert --secret-name cert
oc set volume dc ssl-pods-example-2 --add -t secret -m /deployments/cert --name cert --secret-name cert

And deploy our certificate password as an OCP Secret, injecting it as an environment variable into our applications,

oc create secret generic sslpassword --from-literal=SSLPASSWORD=password

oc set env dc ssl-pods-example --from=secret/sslpassword 
oc set env dc ssl-pods-example-2 --from=secret/sslpassword 
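
To double-check that the variable is actually set on a deployment config, the standard listing flag of oc set env can be used:

oc set env dc/ssl-pods-example --list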

After everything is deployed on OCP, next is giving our applications a route. I'm using the re-encrypt method to ensure end-to-end encryption within the app. In order to do so, we need to include our application's CA certificate as our route's destination certificate. We can export that certificate from the p12 file using this command,

keytool -exportcert -keystore edw.p12 -storetype PKCS12 -storepass password -alias edw -file edw.crt -rfc

And paste the certificate on our route,
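
For reference, a rough sketch of what such a re-encrypt route could look like (the route and service names are assumptions; the destinationCACertificate content is the edw.crt exported above):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ssl-pods-example        # assumed route name
spec:
  to:
    kind: Service
    name: ssl-pods-example      # assumed service created by oc new-app
  port:
    targetPort: 8443
  tls:
    termination: reencrypt
    destinationCACertificate: |-
      -----BEGIN CERTIFICATE-----
      ... contents of edw.crt ...
      -----END CERTIFICATE-----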

The end result would be like the image below,

And as you can see, we are using a certificate end to end to secure our connection.

Deploy a New Application and Building It Using Openshift S2I Feature and a Custom Base Image

There are lots of ways to deploy apps to Openshift, and one of them is by using the oc new-app command. We are now trying to create a new app with that command, but specifying a custom base image for it. For this example, I'm using an OpenJDK 11 on RHEL 7 base image.

The command is quite easy, run it in your code folder,

D:\source> oc new-app registry.access.redhat.com/openjdk/openjdk-11-rhel7~. --name=spring-boot-2

D:\source> oc start-build spring-boot-2 --from-dir=.

It will create a BuildConfig with the name of spring-boot-2,

D:\source> oc get bc spring-boot-2
NAME            TYPE      FROM      LATEST
spring-boot-2   Source    Binary    3

We can see the details of our BuildConfig by running this command,

D:\source> oc describe bc spring-boot-2

....
Strategy:       Source
From Image:     ImageStreamTag openjdk-11-rhel7:latest
Output to:      ImageStreamTag spring-boot-2:latest
Binary:         provided on build
....

And if we have some code changes and want to redeploy, we can run this command,

D:\source> oc start-build spring-boot-2 --from-dir=.

It will rebuild the whole image, using the new code uploaded from the existing source directory.
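
To follow the rebuild while it runs, the standard build-log command works here (nothing specific to this setup):

D:\source> oc logs -f bc/spring-boot-2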

Create Async HTTP Process with Red Hat Fuse on Top of Openshift 3.11

Got a unique requirement yesterday where one process on Red Hat Fuse was consuming a very long time, therefore creating lots of timeout errors on the frontend. It happens because some process on Fuse performs a very complicated task, and we cannot modify the existing api flow due to business requirements.

For this example, I want to simulate the same slowness on my Fuse + Spring Boot app,

package com.redhat.edw;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

A simple Camel route,

package com.redhat.edw;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class Routes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        restConfiguration("servlet")
                .bindingMode(RestBindingMode.json)
        ;

        rest()
                .get("hello")
                .route()
                .setBody(method("helloRouteHandler", "setHelloWithName"))
                .endRest()
        ;
    }
}

A placeholder bean,

package com.redhat.edw;

import lombok.*;

@Data
@ToString
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class HelloResponse {
    private String content;
}

And a Handler class,

package com.redhat.edw;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Slf4j
@Component("helloRouteHandler")
public class HelloRouteHandler {

    @Value("${name}")
    private String name;

    public HelloResponse setHelloWithName() throws Exception {

        /*
         * simulate a very long process (10 seconds)
         */
        Thread.sleep(10000);

        /*
         * this process will still get called after the 10 seconds, regardless of sync or async
         */
        log.info("calling hello for "+name);

        return HelloResponse.builder().content(name).build();
    }
}

And two simple properties files,

# The Camel context name
camel.springboot.name=FuseHelloWorld

# enable all management endpoints
endpoints.enabled=true
management.security.enabled=false

camel.component.servlet.mapping.contextPath=/api/*

name=Edwin
# OpenShift ConfigMap name
spring.application.name=fuse-hello-world

spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.reload.strategy=restart_context
spring.cloud.kubernetes.reload.monitoring-config-maps=true
spring.cloud.kubernetes.reload.monitoring-secrets=true
spring.cloud.kubernetes.reload.mode=polling

Run the project, try calling the /api/hello api, and see how much time is needed to get a response.

As you can see, 10 seconds is not an acceptable response time for a regular web user, and we need to improve it. The workaround is quite simple: Spring has an @Async annotation which we will utilize on the Handler class,

package com.redhat.edw;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

@Slf4j
@Component("helloRouteHandler")
public class HelloRouteHandler {

    @Value("${name}")
    private String name;

    @Async
    public HelloResponse setHelloWithName() throws Exception {

        /*
         * simulate a very long process (10 seconds)
         */
        Thread.sleep(10000);

        /*
         * this process will still get called after the 10 seconds, regardless of sync or async
         */
        log.info("calling hello for "+name);

        return HelloResponse.builder().content(name).build();
    }
}

Which will give a faster response,
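
One thing worth noting: @Async is only honored when async support is enabled somewhere in the Spring context, for example with @EnableAsync. The repository may already handle this; the sketch below just shows the annotation added to the existing Application class:

package com.redhat.edw;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableAsync;

@SpringBootApplication
@EnableAsync // enables processing of @Async annotations
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}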

And we can deploy our code to Openshift by using this command,

mvn fabric8:deploy -Pfabric8

A successful deployment will look like this,

And finally, the complete code can be downloaded from this url,

https://github.com/edwin/fuse-with-async-http

[Openshift] Adding a NodeSelector on the Fly

This is a code snippet for adding a specific nodeSelector to an rc (ReplicationController),

oc patch rc simple-helloworld -p '{"spec": {"template": {"spec": {"nodeSelector": {"infra": "my-infra-node"}}}}}' -n dev

The same approach should work for other types, such as Deployment or DeploymentConfig, as shown below.
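
For instance, the equivalent patch against a DeploymentConfig (reusing the hypothetical names above) would look like this:

oc patch dc simple-helloworld -p '{"spec": {"template": {"spec": {"nodeSelector": {"infra": "my-infra-node"}}}}}' -n dev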