Thursday, May 24, 2012

Oracle Data Integrator 11.1.1.6 - Hyperion Planning IKM bug

There seems to be a bug in ODI 11.1.1.6 when using IKM SQL to Hyperion Planning. This post explains the bug and how to resolve it. As a bonus, we learn that Hyperion is still an important topic for the ODI development team at Oracle ;-)

While preparing the ODI lab for KScope12, I came across an issue with the Hyperion Planning IKM. I implemented an interface which loads metadata from a database table to a Planning application. So, as always, I dragged & dropped my source and target DataStores into my interface, mapped the source columns to the target columns and then went on to select the KMs on the Flow tab. Those of you familiar with ODI and the Hyperion KMs know that it's important to set the "Staging Area Different From Target" option on the Definition tab. If this option has not been set, the IKM SQL to Hyperion Planning does not appear in the IKM dropdown.



However, in ODI 11.1.1.6, even after setting the Staging Area to either the Memory Engine or a custom staging area, I was not able to select the Planning IKM. I hadn't seen this in previous versions, so it appears to be new in 11.1.1.6.


I was wondering whether this behavior was the same with other KMs which require a Staging Area different from the Target, so I built a quick interface to write data from the database table to a flat file, using IKM SQL to File Append. This worked fine: I was able to select the IKM from the dropdown list, which confirmed that the problem must be specific to the Planning IKM (and potentially others as well, but we will get to that in a moment). If everything had been set up as needed and the Planning IKM still wasn't selectable, there had to be a reason.

The mechanism which determines the KMs that ODI offers in the dropdown is tied to the technologies of the Interface's Source, Staging Area and Target. So I checked the definition of the IKM and found that its Source Technology was set to Generic SQL.


At first glance, this seems to make sense because the usability of the IKM should not be limited to a certain database technology like Oracle or SQL Server. However, ODI is very strict about the selected technologies, and since I was using a DataStore of technology Oracle, the KM was not available in my interface.

Now there are two things you can do:
  1. Align the Source Technology with what you are using (in my case Oracle)
  2. Set the Source Technology to <Undefined>
While the first option works, the second is the better choice: it does not restrict the IKM to a certain technology, so it would also work if I were using the Memory Engine or SQL Server as a staging area. The sketch below illustrates the matching behavior.
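To make the matching behavior concrete, here is a minimal conceptual sketch. This is plain Python pseudocode of the behavior as observed above, not Oracle's actual implementation:

```python
# Conceptual sketch of the KM filtering rule described above --
# not Oracle's code, just the observed behavior.

def km_is_selectable(km_source_tech, interface_source_tech):
    """A KM appears in the IKM dropdown only if its Source Technology
    matches the interface's source technology, or is left <Undefined>."""
    if km_source_tech is None:        # <Undefined> acts as a wildcard
        return True
    return km_source_tech == interface_source_tech

# As shipped in 11.1.1.6: Source Technology = "Generic SQL",
# my source DataStore = "Oracle" -> the KM is hidden.
print(km_is_selectable("Generic SQL", "Oracle"))  # False

# Option 1: align the technologies -> works, but only for Oracle sources.
print(km_is_selectable("Oracle", "Oracle"))       # True

# Option 2: set the Source Technology to <Undefined> -> works everywhere.
print(km_is_selectable(None, "SQL Server"))       # True
```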


This little change resolves the bug (unless you defined your Data Server with the Generic SQL technology, in which case it would have worked fine in the first place). After changing this setting, I'm able to select the IKM as expected.


The last thing I wanted to know was why this only happens in 11.1.1.6. It could have been related to a change in the underlying logic of how ODI determines the available KMs, but since that would be hard to find out, I decided to go the easy route and compared the IKM's Source Technology between versions 11.1.1.6 and 11.1.1.5. As you probably already assumed yourself, the IKM in 11.1.1.5 does not define a Source Technology, which confirms that the Generic SQL setting was introduced in 11.1.1.6.

So what did we learn? If you are using the IKM SQL to Hyperion Planning in ODI 11.1.1.6, you can resolve the issue described above by modifying the Source Technology. Another thing we know now is that Oracle is still working on the Hyperion KMs and might come up with some cool new features for us in the future!

Thursday, March 1, 2012

Setting up ODI Environments

There’s a common misconception about how to set up environments for Oracle Data Integrator (ODI). While the resulting setup doesn’t prevent you from building integration processes, it causes confusion and is the root cause of serious problems later on. This post will tell you what you need to know, so enjoy the benefit of learning from the experience of many ODI implementations and set yourself up for a successful implementation right from the start (or just contact MindStream Analytics and we’ll give you industry-leading value from inception throughout the entire development and operating life cycle).

Since 2008, I’ve been involved in many ODI implementations as Integration Lead and Architect. Thinking back to my first project always puts me a little bit at unease, although I also built some neat processes there. Today I would definitely do some things a lot differently, but then again it was a great learning experience that allowed me to get very familiar with the underlying principles. (Also, the client asked me to come back because the intermediary consulting firm wasn’t able to perform as expected in this complex environment.)

Since Oracle designated ODI as the replacement for Hyperion Application Link (HAL), a lot of clients are starting from scratch with their implementations and environments. This is always very nice, like a fresh breeze and an optimistic start, because it makes it possible to implement all the best practices I’ve developed over time. However, one thing is often not optimal: the initial installation and configuration of the environments. I’m not trying to blame the infrastructure consultants; they do a great job getting environments up and running. It’s rather a matter of understanding the specific requirements of ODI, which differ from those of the other Hyperion products. The main point is that there are significant differences between a development and a production environment: while Hyperion Planning and Financial Management use pretty much the same artifacts in all environments (hierarchies, rules, calc scripts etc.), the objects ODI uses come in different formats.

Typically, ODI is installed in each individual environment, e.g. DEV, TEST, PROD. While the installation can usually be done in pretty much the same way (install ODI Studio, Standalone and/or J2EE agent, ODI Console), there are some differences in configuring the repositories in the different environments. Let’s take a look at the different repository types:
  • Master Repository: stores the available Technologies and the connectivity information (Topology), registers the Work Repositories, and holds the Security definitions
  • Work Repository: generally, a Work Repository contains the definitions of the integration processes and the execution logs. There are two types, though:
    • Development Work Repository (DWR): allows creation and modification of integration processes and components; the ODI Designer module is only available for this type of Work Repository
    • Execution Work Repository (EWR): only allows execution of Scenarios (executables); all objects are read-only
What I’ve noticed several times is that all environments were set up the same way: one Master Repository and one Development Work Repository in each of DEV, TEST and PROD. While this actually does work, I would be concerned that it may be the root cause of future inconsistencies. It causes confusion because it allows a developer to create integration processes in environments other than DEV. While this is often considered a quick and easy way to develop hot fixes, and a workaround for what are incorrectly deemed “migration errors” (usually hard-coded paths and application names), it is an open invitation to serious problems with ODI. If someone really understands ODI, there are actually very elegant and efficient ways to deal with every aspect of these concerns (but this is a different discussion).

| Environment        | Development WR | Execution WR                            | Master Repository |
|--------------------|----------------|-----------------------------------------|-------------------|
| DEV                | X              | X (optional; DWR is usually sufficient) | X                 |
| TEST               |                | X                                       | X                 |
| STAGE (if desired) |                | X                                       | X                 |
| PROD               |                | X                                       | X                 |

Note: instead of creating one Master Repository per environment, it is also possible to create one centralized Master Repository which can be accessed from all environments (see below).



A Development Work Repository should only exist in DEV: one place to make changes, while all other environments only execute the processes which have been designed in DEV. This principle is a foundation that every software project should follow, unless you want to risk running into serious issues with version inconsistencies.

All other environments should only have an Execution Work Repository, used to transfer over and execute the Scenarios (“executables”) which were designed in DEV. ODI supports this principle by making the Designer available only for a Development Work Repository; trying to connect to an Execution Work Repository via Designer leads to an error message. This way, Projects, Models etc. can only be accessed in DEV. Access to EWRs is granted via the Operator, its main purpose being monitoring and debugging processes as well as importing the Scenarios which have been created in DEV.
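As an illustration, this is roughly what a scripted promotion from DEV into an execution repository could look like. It is only a sketch: it assumes ODI 11g's standalone-agent startcmd wrapper and the standard OdiExportScen/OdiImportScen tools, the scenario name and file paths are hypothetical, and the exact parameter names should be double-checked against the ODI Tools Reference for your version.

```python
# Sketch of promoting a Scenario from the DEV work repository to an EWR.
# Assumptions: ODI 11g standalone agent with its startcmd wrapper, plus the
# OdiExportScen / OdiImportScen tools; names and paths below are made up.
import subprocess

SCEN_NAME = "LOAD_PLANNING_METADATA"   # hypothetical scenario name
SCEN_VERSION = "001"
EXPORT_FILE = "/shared/odi_migration/LOAD_PLANNING_METADATA_001.xml"

# 1) Run against the DEV agent: export the Scenario to a file.
subprocess.run(
    ["./startcmd.sh", "OdiExportScen",
     f"-SCEN_NAME={SCEN_NAME}",
     f"-SCEN_VERSION={SCEN_VERSION}",
     f"-FILE_NAME={EXPORT_FILE}"],
    check=True,
)

# 2) Run against the TEST/PROD agent (connected to the EWR): import it.
subprocess.run(
    ["./startcmd.sh", "OdiImportScen",
     f"-FILE_NAME={EXPORT_FILE}",
     "-IMPORT_MODE=SYNONYM_INSERT_UPDATE"],
    check=True,
)
```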


The setup of the Master Repository is different. It is actually possible to leverage one central Master Repository for all environments, since all types of WRs can access the same Master Repository. From an architecture standpoint I like this idea because it simplifies the migration process. However, there are some concerns about it. In my experience, most clients are more comfortable with a separate Master in each environment rather than one central repository, particularly because it is one way to prevent users from executing a process in the wrong Context.

I do understand this concern, but by the same token I would propose designing a Security Concept which handles these access matters. Once this has been set up, a central Master Repository has several advantages over environment-specific Master Repositories, especially in regards to Migration as well as setting up Disaster Recovery environments.