Wednesday, 6 September 2017

Introduction to ForgeRock DevOps - Part 3 - Deploying Clusters

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:

http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html
http://identity-implementation.blogspot.co.uk/2017/05/introduction-to-forgerock-devops-part-2.html


I will be using IBM Bluemix here as I have recent experience of it, but nearly all of the concepts will be similar for any other cloud environment.

Deploying Clusters

So now we have Docker images uploaded to Bluemix. The next step is to deploy those images into a Kubernetes cluster. First we need to create a cluster, then we can deploy into it. For what we are doing here we need a standard (paid) cluster.

Preparation

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

2. Choose a location. You can view the available locations with:

bx cs locations



3. Choose a machine type. You can view the machine types available in a location with:

bx cs machine-types dal10



4. Check for VLANs. You need to choose both a public and a private VLAN for a standard cluster. It should look something like this:

bx cs vlans dal10



If you need to create them, initialise the SoftLayer CLI first:

bx sl init

When prompted, just select Single Sign On (option 2):



You should be logged in and able to create vlans:

bx sl vlan create -t public -d dal10 -s 8 -n waynepublic

Note: your Bluemix account needs permission to create VLANs; if you don't have this you will be told, and you will need to contact support. You should get one free public VLAN, I believe.

Creating a Cluster

1. Create a cluster:

Assuming you have public and private VLANs, you can create a Kubernetes cluster:

bx cs cluster-create --location dal10 --machine-type u1c.2x4 --workers 2 --name wbcluster --private-vlan 1638423 --public-vlan 2106869



You *should* also be able to use the Bluemix UI to create clusters.

2. You may need to wait a little while for the cluster to be deployed. You can check its status using:

bx cs clusters



During the deployment you will likely receive various emails from Bluemix confirming infrastructure has been provisioned.

3. When the cluster has finished deploying (its state is no longer pending), set the new cluster as the current context:

bx cs cluster-config wbcluster



The statement in yellow is the important bit: copy and paste that export command back into the terminal to configure the environment so that kubectl can talk to the cluster.
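
The export will look something like this (the exact path will vary with your user, location and cluster name):

export KUBECONFIG=/Users/<you>/.bluemix/plugins/container-service/clusters/wbcluster/kube-config-dal10-wbcluster.yml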



4. Now you can run kubectl commands. View the cluster config with:

kubectl config view



See the Kubernetes documentation for the full set of commands you can run; we will only be looking at a few key ones for now.
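
For example, two useful sanity checks (both assume the KUBECONFIG export above):

kubectl get nodes
kubectl get pods --all-namespaces

The first lists the worker nodes in the cluster; the second lists everything currently running on it.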

5. Clone (or download) the ForgeRock Kubernetes repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/fretes/browse

6. Navigate to the fretes directory:

cd /usr/local/DevOps/stash/fretes

 

7. We need to make a tweak to the fretes/helm/custom.yaml file and add the following:

storageClass: ibmc-file-bronze



This specifies the type of storage we want our deployment to use in Bluemix. If it were AWS or Azure you would need something similar.

8. From the same terminal window in which you set up kubectl, navigate to the fretes/helm/ directory and run:

helm init



This will install the Helm server-side component (Tiller) into the cluster, ready to process the helm scripts we are going to run.
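
You can verify that both the Helm client and the in-cluster server component are responding with:

helm version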

9. Run the OpenAM helm script, which will deploy instances of AM, backed by DJ, into our Kubernetes cluster:

/usr/local/DevOps/stash/fretes/helm/bin/openam.sh

This script will take a while, and will again trigger the provisioning of infrastructure, storage and other components, resulting in emails from Bluemix. While this is happening you should see something like this:



If you have to re-deploy on subsequent occasions, the storage will not need to be re-provisioned and the whole process will be significantly faster. When it is all done you should see something like this:



10. Proxy the kube dash:

kubectl proxy



Navigate to http://127.0.0.1:8001/ui in a browser and you should see the Kubernetes console!



Here you can see everything that has been deployed automatically using the helm script!

We have multiple instances of AM and DJ with storage deployed into Bluemix ready to configure!

In the next blog we will take a detailed look at the Kubernetes dashboard to understand exactly what we have done, but for now let's take a quick look at one of our new AM instances.

11. Log in to AM:

Ctrl-C the proxy command and type the following:

bx cs workers wbcluster



You can see a list of our workers above, and the public IPs they are exposed on.

Note: There are defined ways of accessing applications using Kubernetes, typically you would use an ingress or a load balancer and not go directly using the public IP. We may look at these in later blogs.

As you probably know, AM expects a fully qualified domain name, so before we can log in we need to edit /etc/hosts and add the following:

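For example (substitute the public IP of one of your workers, taken from the bx cs workers output above; 30080 is the port AM is exposed on):

<worker-public-ip>   openam.example.com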


Then you can navigate to AM:

http://openam.example.com:30080/openam



You should be able to login with amadmin/password!


Summary

So far in this series we have created docker containers with the ForgeRock components, uploaded these to Bluemix and run the orchestration helm script to actually deploy instances of these containers into a meaningful architecture. Not bad!

In the next blog we will take a detailed look at the kubernetes console and examine what has actually been deployed.





Thursday, 29 June 2017

Open Banking, PSD2 & Screen Scraping

Open Banking & PSD2

PSD2 is due to come into force in September 2018; meanwhile the UK is forging ahead with Open Banking, which is due to come into force even earlier, in January 2018. Both regulations are all about cracking open banking APIs to increase digital competitiveness and improve consumer choice.

The 9 biggest UK banks have been collaborating in the form of the Open Banking Working Group (OBWG) to define the solution for Open Banking in the UK. After much discussion and deliberation the OBWG has determined that Open Banking should be achieved through the use of open standards and specifically the use of the OAuth 2.0 family of standards.

OAuth 2.0

OAuth 2.0 is something I use just about every day and it's something that all of us have probably used at one time or another though we may not have realised it. OAuth is a standard designed for Delegated Authorization. 

We commonly refer to Authentication as proving who you are, whereas Authorization determines what you are allowed to do. Authentication is typically achieved with some sort of username and password (and ideally a second factor). Authorization is generally concerned with the policy and permissions that apply once I have authenticated.

Effectively, Delegated Authorization is a way to permit someone to do something on my behalf. A very common example can be seen with Instagram and Twitter, when a Twitter user gives Instagram permission to post to their Twitter feed.

With OAuth 2.0, Instagram redirects you to Twitter; you authenticate with Twitter and consent to Instagram posting to your Twitter account. Twitter then shares an authorization code with Instagram, which Instagram exchanges for an access token. This access token can only be used to post to your Twitter account; Instagram could not, for example, use it to delete your tweets.
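
As a sketch, that final code-for-token exchange looks something like this (the endpoint and values are purely illustrative placeholders, not Twitter's actual API):

curl -X POST https://authorization-server.example/oauth2/token \
  -d grant_type=authorization_code \
  -d code=SplxlOBeZQQYbYS6WxSbIA \
  -d redirect_uri=https://client.example/callback \
  -d client_id=<client-id> \
  -d client_secret=<client-secret>

The access token comes back in a JSON response, scoped only to the permissions the user consented to.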

In a world without OAuth 2.0, Instagram would have to know your Twitter username and password in order to post a Tweet. This would allow them to post to your Twitter feed but it would also enable them to do anything else that you could do if you authenticated. More crucially your username and password have now been shared with a third party who you have to trust. Propagating passwords is never a good thing for security and is really the very definition of a security anti-pattern. This is how screen scraping works.

Screen Scraping

Up to now there has been no standards-based mechanism for sharing account data. There are services at the moment that can aggregate your financial data in one place. These services are convenient for many, but to use them you have to share your credentials with them. So if you want the aggregator to be able to report on your bank account, you need to share your banking credentials with the aggregator. You have to trust a third party with your banking credentials.

Putting aside the issues of trust, massive credential leaks are now a weekly occurrence and the more you share your credentials around the more vulnerable those credentials become.

Open Banking aims to put an end to this by using secure, trusted open standards such as OAuth. As a security professional and as a customer I feel very strongly that this is the right way to do Open Banking and it ensures I remain in control of my account data and enables me to revoke third party access at any time.

Right now there is much debate and discussion as to whether screen scraping should be permitted under both PSD2 and Open Banking. A number of groups are petitioning for it to remain a valid approach for data sharing under the new regulations.

I can appreciate the difficulties many organisations may face in transitioning from screen scraping to an OAuth 2.0 based model but I cannot in good conscience support the screen scraping approach and I suspect that if it were to be adopted as an acceptable interim solution that it would persist for the longer term and undermine the benefits that an API driven approach to Open Banking would bring.

The Kantara Initiative is a non-profit organisation dedicated to advancing digital identity and data privacy. If you feel as strongly as I do about this, please visit the Kantara Initiative and sign the pledge against screen scraping:










Friday, 12 May 2017

Introduction to ForgeRock DevOps - Part 2 - Building Docker Containers

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:


I will be using IBM Bluemix here as I have recent experience of it, but nearly all of the concepts will be similar for any other cloud environment.

Building Docker Containers

First up, we need to actually create the Docker containers that will host the ForgeRock components.

Prerequisites

Install all of the below:

Docker: https://www.docker.com/

Used to build, tag and upload Docker containers.

Bluemix CLI: http://clis.ng.bluemix.net/ui/home.html

Used to deploy and configure the Bluemix environment.

CloudFoundry CLI: https://github.com/cloudfoundry/cli

Bluemix dependency.

Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Used to deploy and manage Kubernetes clusters.

Initial Configuration

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

Note we are using the US instance of Bluemix here as it has support for Kubernetes in beta.


When prompted to select an account, just type 1. If you are logged in successfully you should see the above. Now you can interact with the Bluemix environment just as you would if you were logged in via a browser.

2. Add the Bluemix Docker components:

bx plugin repo-add Bluemix https://plugins.ng.bluemix.net
bx plugin install container-service -r Bluemix
bx plugin install IBM-Containers -r Bluemix

Check they have installed:

bx plugin list


3. Clone (or download) the ForgeRock Docker Repo to somewhere local:


4. Download the ForgeRock AM and DS component binaries from backstage:


5. Unzip and copy ForgeRock binaries into the Docker build directories:

AM:

unzip AM-5.0.0.zip
cp openam/AM-5.0.0.war /usr/local/DevOps/stash/docker/openam/


DJ:

mv DS-5.0.0.zip /usr/local/DevOps/stash/docker/opendj/opendj.zip


Amster:

mv Amster-5.0.0.zip /usr/local/DevOps/stash/docker/amster/amster.zip

For those unfamiliar, Amster is our new RESTful configuration tool for AM in the 5 platform, replacing SSOADM with a far more DevOps friendly tool. I'll be covering it in a future blog.

Build Containers

We are going to create three containers: AM, DJ & Amster:

1. Build and tag the OpenAM container (don't forget the trailing .):

cd /usr/local/DevOps/stash/docker/openam
docker build -t wayneblacklockfr/openam .

Note: wayneblacklockfr/openam is just a name to tag the container with locally; replace it with whatever you like, but keep the /openam.

All being well you will see something like the below:


Congratulations, you have built your first ForgeRock container! 

Now we need to get the namespace for tagging. This is usually your username, but check it using:

bx ic namespace-get


Now let's tag it ready for upload to Bluemix. Use the container ID output at the end of the build process and your namespace:

docker tag d7e1700cfadd registry.ng.bluemix.net/wayneblacklock/openam:14.0.0
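
The general form of the tag is (the placeholders here are illustrative):

docker tag <image-id> registry.<region>.bluemix.net/<namespace>/<image-name>:<version>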


Repeat the process for Amster and DS.

2. Build and Tag Amster container:

cd /usr/local/DevOps/stash/docker/amster
docker build -t wayneblacklockfr/amster .
docker tag 54bf5bd46bf1 registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

3. Build and Tag DS container:

cd /usr/local/DevOps/stash/docker/opendj
docker build -t wayneblacklockfr/opendj .
docker tag 19b8a6f4af73 registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. View the containers:

You can take a look at what we have built with: docker images 

Push Containers

Finally we want to push our containers up to the Bluemix registry.

1. Login again:

bx login -a https://api.ng.bluemix.net

2. Initiate the Bluemix container service; this may take a moment:

bx ic init


Ignore Option 1 and Option 2; we are not doing either.

3. Push your Docker images up to Bluemix:

docker push registry.ng.bluemix.net/wayneblacklock/openam:14.0.0 

docker push registry.ng.bluemix.net/wayneblacklock/amster:14.0.0 

docker push registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0 

4. Confirm your images have been uploaded:

bx ic images


If you log in to the Bluemix web app you should be able to see your containers in the catalog:


Next Time

We will take a look at actually deploying a Kubernetes cluster and everything we have to do to ready our containers for deployment. 

Friday, 28 April 2017

Introduction to ForgeRock DevOps - Part 1

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

As always with this blog I am going to step through a fully worked example. In this case I am using IBM Bluemix, however it could just as easily have been AWS, Azure, GKE or any service that supports Kubernetes. By the end of this blog you will have a containerised instance of ForgeRock OpenAM and OpenDJ running on Bluemix, deployed using Kubernetes. First off we will cover the basics.

DevOps Basics

There are many tutorials out there introducing DevOps that do a great job, so I am not going to repeat those here. Instead I will point you towards the excellent ForgeRock Platform 5 DevOps guide, which also takes you through DevOps deployment step by step into Minikube or GKE:


What I want to do briefly is touch on some of the key ideas that really helped me to understand DevOps. I do not claim to be an expert but I think I am beginning to piece it all together:

12 Factor Applications: Best practices for developing applications, superbly summarised here; this is why we need containers and DevOps.

Docker: Technology for building, deploying and managing containers.

Containers: A minimal operating system plus the components necessary to host an application. Traditionally we host apps in virtual machines with full-blown operating systems, whereas containers cut all of that down to just what you need for the application you are going to run.

In Docker, containers are built from Dockerfiles, which are effectively recipes for building containers from different components, e.g. a recipe for a container running Tomcat.
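
As an illustrative sketch (this is not the actual ForgeRock Dockerfile), a recipe for a container running an application in Tomcat might look like:

# Start from an official Tomcat base image
FROM tomcat:8.5
# Deploy the application into Tomcat's webapps directory
COPY openam.war /usr/local/tomcat/webapps/openam.war
# Expose the default Tomcat port
EXPOSE 8080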

Container Registry: A place where built containers can be uploaded to, managed, downloaded from and deployed from. You could have a registry running locally; cloud environments will also typically have registries they use to retrieve containers at deployment time.

Kubernetes: An engine for orchestrating the deployment of containers. Because containers are very minimal, they need extra elements provisioned, such as volume storage, secrets storage and configuration. In addition, when you deploy any application you need load balancing and numerous other considerations. Kubernetes is a language for defining all of these requirements, and an engine for implementing them all.

In cloud environments such as AWS, Azure and IBM Bluemix that support Kubernetes, this effectively means that Kubernetes will manage the configuration of the cloud infrastructure for you, in effect abstracting away all of the usual configuration specific to those environments.

Storage is a good example: in Kubernetes you can define persistent volume claims, which are effectively a way of asking for storage. With Kubernetes you do not need to be concerned with the specifics of how this storage is provisioned; Kubernetes will do that for you regardless of whether you deploy onto AWS, Azure or IBM Bluemix.
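
For example, a minimal persistent volume claim looks something like this (the name and size are hypothetical; the ForgeRock helm charts define their own claims):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: openam-config
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibmc-file-bronze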

This enables automated and simplified deployment of your application to any environment that supports Kubernetes! If you want to move from one environment to another, just point your script at that environment! Better still, Kubernetes gives you a consistent deployment management and monitoring dashboard across all of these environments!



Helm: An engine for scripting Kubernetes deployments and operations. The ForgeRock platform uses this for DevOps deployment. It simply enables scripting of Kubernetes functionality, and configuration of things like environment variables that may change between deployments.
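
For flavour, a typical Helm 2 invocation looks something like this (the chart path and values file are illustrative; the fretes repo wraps these calls in shell scripts such as openam.sh):

helm init
helm install ./openam -f custom.yaml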

The above serves as a very brief introduction to the world of DevOps and helps to set the scene for our deployment.

If you want to follow along with this guide, please get yourself a paid IBM Bluemix account; alternatively, if you want to use GKE or Minikube (for local deployment), take a look at the superb ForgeRock DevOps Guide. I will likely cover Azure and AWS deployment in later blogs, however everything we talk about here will still be relevant for those and other cloud environments, as after all that is the whole point of Kubernetes!

In Part 2 we will get started by installing some prerequisites and building our first docker containers.






Tuesday, 11 April 2017

Making Rest Calls from IDM Workflow

I attended the Starling Bank Hackathon this weekend and had a great time, I will shortly be writing a longer blog post to talk all about it but before that I briefly wanted to blog about a little bit of code that might be really helpful to anyone building IDM workflows.

The External REST Endpoint

OpenIDM has a REST API that effectively allows you to invoke external REST services hosted anywhere. You might use this, for example, to call out to an identity verification service as part of a registration workflow, and I made good use of it at the hackathon.

With the following piece of code you can create some JSON and call out to a REST service outside of OpenIDM:
java.util.logging.Logger logger = java.util.logging.Logger.getLogger("")
logger.info("Make REST call)

def slurper = new groovy.json.JsonSlurper()
def result = slurper.parseText('{"destinationAccountUid": "a41dd561-d64c-4a13-8f86-532584b8edc4","payment": {"amount": 12.34,"currency": "GBP"},"reference": "text"}')

result = openidm.action('external/rest', 'call', ['body': (new groovy.json.JsonBuilder(result)).toString(), 'method': 'POST', 'url': 'https://api-sandbox.starlingbank.com/api/v1/payments/local', 'contentType':'application/json', 'authenticate': ['type':'bearer', 'token': 'lRq08rfL4vzy2GyoqkJmeKzjwaeRfSKfWbuAi9NFNFZZ27eSjhqRNplBwR2do3iF'], 'forceWrap': true ])
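
The call returns the parsed response from the remote service, so you can log it or act on it straight away (a quick sketch; the response shape depends entirely on the API you call):

logger.info("REST response: " + result)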

A really small bit of code but with it you can do all sorts of awesome things!


Sunday, 2 April 2017

Introducing IDM Workflow

ForgeRock Identity Management includes an OOTB workflow engine based on BPMN (Business Process Model and Notation). This isn't unique; most identity management solutions have some form of workflow engine. However, in my experience they are typically based on some proprietary technology and/or very painful to work with.

I have recently had to build some workflows for various customer Proof of Concepts and I am really impressed by how quickly you can pull something together so I wanted to write up a blog entry.




So in this blog we are going to use a brand new instance of IDM (installed locally) and create a simple request and approval workflow which we will then test.

I am going to use the Eclipse Activiti plugin for this; there are other BPMN editors, but I am going to stick with what I know for now. I am also not going to spend much time talking about BPMN beyond what we need to build a meaningful workflow; much more information is available here. In the spirit of this blog I am just going to get on with it and walk you through the basic steps to build and test a simple workflow.

Additionally, the workflow samples that ship with IDM are a brilliant place to start. I highly recommend taking a look at them and using them as the basis of your workflows until you get comfortable building them yourself.

Getting Started

I am going to assume you have an installation of IDM already; if not, check out my IDM beginners series.

IDM ships with a built-in version of the Activiti workflow engine: https://www.activiti.org/. We are going to use the free Eclipse Activiti Workflow Designer to build our workflows.

Firstly, download and install the Eclipse IDE.

When you have Eclipse installed, fire it up and navigate to Help -> Install New Software:




Click Add:



Enter the following location: https://www.activiti.org/designer/update/ and press OK.



Wait for the installation process to complete. Now that is all out of the way, let's get started!

Create a New Project

Navigate to File -> New Project:


Then select General -> Project (we do not need an Activiti Project):


Give your project a name:



And press Finish.

Building a New Workflow

Right click on the new project and select New File:


Give it a name similar to the following:


.bpmn20.xml is the convention we use for workflow files in IDM.

Finally right click on our new workflow, select Open With and Other...



And select Activiti Diagram Editor.

Ok, you should now have a blank canvas that looks a bit like this:



Let's get started. The first thing we need is a Start Event. Drag one over from the menu on the right and drop it somewhere on the workflow canvas.


Now we need some workflow steps. As we are building a simple request and approval workflow, we probably need:
  • A step to actually create a request for something (that is actually our StartEvent we just created).
  • A step to gather some information and determine who the request needs to go to for approval.
  • A step for the actual approval.
  • A step for processing the result. Typically you also want to send an email containing the response. In fact, we probably need two steps here, one for success and one for failure.
We will build the workflow steps and connect them together first, before implementing the actual logic. Select a Script Task from the menu on the right and drag it onto our canvas.


You should now have something like this:
We probably want to give our Script Task a name. Click on it and give it a new name:


Just replace the Name value with something appropriate:
We also need to make sure that all Script Tasks have a script defined so that IDM can parse them successfully. The easiest way to do this is to add a simple logging statement to the task. Select the Process Request task again, then the Main config tab, and add some simple logging script:



Now, you can use either JavaScript or Groovy for scripting. I tend to use Groovy, but that is just personal choice.

java.util.logging.Logger logger = java.util.logging.Logger.getLogger("")
logger.info("SimpleWorkflow - Process Request")


Make sure you save your work.

Now select the Start Event; you should see the following menu:

Click on the Create Connection arrow, but keep the mouse button held down and drag a connection over to our Process Request task. We now have a flow linking the two tasks in sequence:
Next we need a User Task. Similar to before, select User Task from the menu on the right, drag it into the canvas, give it a name and connect it to the Process Request task.


Ok, now things get a little more complicated, as a request could be either approved or rejected. So we need a Gateway, specifically an Exclusive Gateway:


We also need two new Script Tasks; build the workflow out as below:


Remember to add some simple script to Main config for each new task, otherwise parsing of the workflow will fail.

Finally we need an EndEvent:


Put this to the right of the Approved and Rejected tasks, and connect them to it as below:


We now have a basic workflow outline, time to make it actually do something.

Workflow Logic

StartEvent

We are going to start with our StartEvent. What this actually translates to is the form that a user will complete in self service to make their request.

Click on the StartEvent and select the Form tab


Click New. You should see the following:


Fill it in exactly as I have below then press OK:


We should now have an attribute on our form:


Let's add another one. Justification is a common field when making a request; add it in exactly the same way:


One more thing before we test this in IDM. Click somewhere on the canvas until you can edit the process Name.


Change My process to something meaningful, like Request with Justification. Make sure you save the workflow.

Testing the Workflow in IDM

Although our workflow isn't really doing anything yet, this is a good time to quickly test what it looks like in IDM.

Fire up IDM, navigate to the openidm directory and create a new workflow directory:


Now copy the SimpleWorkflow.bpmn20.xml file into the new workflow directory; you should see IDM pick it up in the logs. In fact, you will probably see a warning, which we will ignore for now.


Log in to IDM as a user other than openidm-admin, someone you have created previously; I'm using a user called jbloggs. Remember to log in to user self service: http://localhost.localdomain.com:8080/#login/

You should see the dashboard, and our new process!


Click Details and you can see the form we created earlier! You can enter a request and justification, but do not hit Start, because right now nothing will happen.

More Workflow Logic

Ok, so we can make a request now, but it doesn't go to anyone. Let's fix that; there are a few things to do here. Firstly, we need to gather some more information about the requestor for the approver. Select the StartEvent, then Main config, and set Initiator as "initiatorId":


Select the Process Request task and enter the following script, as Groovy, into Main config, then save your work.


java.util.logging.Logger logger = java.util.logging.Logger.getLogger("")
logger.info("SimpleWorkflow - Process Request " + initiatorId);
// find user
readStartUserFromRepoParams = [_queryId:'for-userName',uid:initiatorId]
qresults = openidm.query('managed/user', readStartUserFromRepoParams)
// get user details
users = qresults.result
execution.setVariable("userId", users[0]._id)
execution.setVariable("userName", users[0].userName)
execution.setVariable("givenName", users[0].givenName)
execution.setVariable("sn", users[0].sn)
execution.setVariable("mail", users[0].mail)
// set approver
execution.setVariable("approverId", "openidm-admin")

All we are doing here is retrieving the user data for the initiating user and setting it as variables in the workflow process. We are also assigning the user who will approve the request (simply as a static id here, but you can easily make this dynamic, for example by assigning the task to a group or to a manager - I may cover this in a later blog).

A few more steps: we need to set the assignee for the approval task. Select the Approve Request task and enter the following for assignee:

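If you have followed the Process Request script above, the assignee will be the approverId workflow variable we set there (the expression below is my assumption, in the style used by the IDM sample workflows):

${approverId}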

We also need to configure the approval form as below to show the data we just collected:


We also need to add the request fields the user just filled in:



We also need to add an approvalResult to the form. This field is a little different, as it is an enum:


Because this field is an enumeration, we need to add some form values for the user to choose from. Press New:


And configure the Form Value configuration as below and press OK:


Do the same for "rejected", and you should end up with the following:

Save your work.


Back to IDM

Again, copy the workflow file into IDM. Now log out, log back in as a user, select the workflow and populate the request:


Press Start. All being well, you should see confirmation that the workflow process has been started. Now log out and log back in as openidm-admin; you should see that there is a request to be approved on the dashboard:


And if you select Details, you can see the request itself and the additional information we put into it, as well as our approval drop down:


However as before do not click Complete just yet, as we need to actually make this do something.

Final Workflow Logic

Back to the workflow editor. Let's take a look at the exclusive gateway we added a bit earlier:



So what we need now is some logic, based on the approval result, to send the workflow to the right place. Click on the flow to the Approved task:



Take a look at the Main config, and specifically the Condition field:


Enter the following:

${approvalResult=='approved'}


Do something similar for the Rejected flow:

${approvalResult=='rejected'}

You will notice these two conditions are based on the enum we defined earlier; depending on the selection made by the approver, the flow will go to either the Approved or the Rejected task.

Now let's finish the workflow. Select the Approved task, then Main config and Script, and enter the following:

java.util.logging.Logger logger = java.util.logging.Logger.getLogger("")
logger.info("SimpleWorkflow - Approved")

java.text.SimpleDateFormat formatUTC = new java.text.SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.S'Z'");     
formatUTC.setTimeZone(TimeZone.getTimeZone("UTC"));
requestDate = formatUTC.format(new Date());
                
def requesterNotification = [
   "receiverId":  userId,
   "requesterId" : "",
   "requester" : "",
   "createDate" : requestDate,
   "notificationType" : "info",
   "notificationSubtype" : "",
   "message" : "The access request was accepted"
];
                
openidm.create("repo/ui/notification/", null, requesterNotification)

This bit of script simply uses the IDM notification engine to let the user know that their request has been approved.

Save your work for the last time and copy the workflow into IDM.

Back to IDM for the Last Time

So now, we should have a complete workflow.

Login to IDM as jbloggs and make a request.
Login to IDM as openidm-admin to approve the request.
Finally, log back in as jbloggs and you should see a notification the request has been approved.


We have only looked at the "happy" flow here; I leave the rejected flow as a task for the interested reader.

And that's it - a very basic workflow, but hopefully you can begin to see what is possible. In future blogs I'll look at actually assigning a role based on the approval, and also at enabling the request drop-down dynamically. I really just wanted to give a taster of what can be done in a relatively short time frame with IDM's workflow engine.