Thursday, March 27, 2014

Deployment of mappings across different environments.

In a real-world scenario, we have three environments in which to develop, test, and deploy our mappings and workflows. They are as follows:

1) Development - Develop the mapping.
2) Test - Test the functionality of the mapping as per the business requirement.
3) Production - Deploy to production to run against real data.

Once we develop the mappings/workflows in the development environment, we need to move them to the QA/Test environment for their testing, and after this stage the workflow is deployed to production.

There are basically 3 ways we can move/migrate the mappings:

1) Import/Export
2) Deployment groups
3) Copy/paste or Drag and drop

Before making any changes to the Production folder, make sure you take a backup of all the objects.

1. Export and import


In this method we export objects from the source environment and import the same objects into the target environment. This is suitable when we have to deploy only a few objects.
  • Export the mappings that you want to move from the DEV folder and save them as XML in some folder.
  • Take a backup of the Production mappings before replacing them.
  • Import the XML files into the Production folder.
  • Save the mappings.
While doing this, we need to check the following things:
1. Check the Replace check box in case the source or target is already present in the folder.
2. For other reusable transformations, such as source shortcuts, target shortcuts, or any other transformations (Sequence Generator, Lookup etc.), choose Reuse (not Replace).
3. In the conflict resolution wizard, under global copy options, select (or mark) Retain Sequence Generator, Normalizer or XML key current values.
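The export file produced in the first step is plain XML, so you can sanity-check it before importing, e.g. list which mappings it would replace. Below is a minimal Python sketch; the toy XML fragment only approximates the real powrmart.dtd structure, and the element, attribute, and mapping names are illustrative assumptions, not taken from an actual export.

```python
# Sketch: inspect a PowerCenter export XML before importing it, so you
# know which mappings will land in the target folder.
import xml.etree.ElementTree as ET

# Toy stand-in for an exported file (structure is approximate).
export_xml = """<POWRMART>
  <REPOSITORY NAME="DEV_REPO">
    <FOLDER NAME="DEV_FOLDER">
      <MAPPING NAME="m_load_customers"/>
      <MAPPING NAME="m_load_orders"/>
    </FOLDER>
  </REPOSITORY>
</POWRMART>"""

def list_mappings(xml_text):
    """Return the mapping names found in an export file."""
    root = ET.fromstring(xml_text)
    return [m.get("NAME") for m in root.iter("MAPPING")]

print(list_mappings(export_xml))  # ['m_load_customers', 'm_load_orders']
```

A quick scan like this makes it obvious which Production objects you must back up before ticking the Replace check box.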

2. Create Deployment Group in Informatica.


In this deployment technique a deployment group is created and all the objects that need to be moved are added to it (often via a label). The advantage is that a huge number of objects can be moved in a single shot, and if any issue occurs during the process we can roll back to the previous versions.

In Informatica, there are two types of deployment groups. They are:

Static Deployment Group: You add objects manually. Create a static deployment group when the objects in the group are not going to change.
Dynamic Deployment Group: An object query is used to populate the deployment group. Create a dynamic group if the objects in the group are going to change frequently.

CREATING A DEPLOYMENT GROUP: Use the Deployment Group Editor to create and edit deployment groups.

Follow the below steps for creating a deployment group:

  1. Login to the Repository Manager. Click on Tools > Deployment > Groups to view the existing deployment groups in the Deployment Group Browser.
  2. Click New to create the deployment group in the Deployment Group Editor. Enter a name for the deployment group.
  3. Select whether to create a static or dynamic deployment group.
  4. If you are creating a dynamic deployment group, click Queries to select a query from the Query Browser, and then click Close to return to the Deployment Group Editor.
  5. Optionally, you can enter a comment for the deployment group.
  6. Click OK. After you create a deployment group, it appears under the Deployment Groups node in the Navigator of the Repository Manager. If you created a static deployment group, you can now add objects to it.

3. Drag and drop the objects.

The last option is drag and drop, which is followed when we don't have any proper migration process in place, or when there is an issue during Informatica maintenance periods.
Simply drag and drop the mappings from the source folder to the target folder.

Tuesday, March 18, 2014

Transaction Control Transformation.

Hi Folks,

Today I am going to tell you about the transaction control transformation, which may not be well known but has its own significance.


TRANSACTION CONTROL TRANSFORMATION IN INFORMATICA

Transaction Control is an active and connected transformation. It is used to control the commit and rollback of transactions. You can define a transaction based on a varying number of input rows; for example, you can define a transaction on a group of rows in the employees data using the department id as the key.
In Informatica PowerCenter, you can define the transaction at the following levels:

  1. Mapping level.
  2. Session level.

  • Mapping level: Use the transaction control transformation to define the transactions.
  • Session level: You can specify the "Commit Type" option in the session properties tab. The different options of "Commit Type" are Target, Source and User Defined. If you have used the transaction control transformation in the mapping, then the "Commit Type" will always be "User Defined".
When you run a session, the integration service evaluates the expression for each row in the transaction control transformation. When it evaluates the expression as commit, it commits all the rows in the transaction to the target(s). When it evaluates the expression as rollback, it rolls back all the rows in the transaction from the target(s).
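The commit-on-key-change behaviour can be sketched outside Informatica. The Python below is an illustration only: it mimics what a transaction control expression such as IIF(dept_id != prev_dept_id, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION) would do, and the function and data names are made up for the example.

```python
# Sketch of the commit-on-key-change logic a Transaction Control
# transformation implements: "commit" the open transaction whenever
# the department id changes, like TC_COMMIT_BEFORE on a new key.
def group_transactions(rows, key):
    """Split rows into transactions, one per consecutive key value."""
    transactions, current, last_key = [], [], None
    for row in rows:
        if current and row[key] != last_key:
            transactions.append(current)  # commit the open transaction
            current = []
        current.append(row)
        last_key = row[key]
    if current:
        transactions.append(current)      # final commit at end of data
    return transactions

employees = [
    {"dept_id": 10, "name": "A"},
    {"dept_id": 10, "name": "B"},
    {"dept_id": 20, "name": "C"},
]
print(group_transactions(employees, "dept_id"))
```

Each inner list corresponds to one transaction the integration service would commit to the target as a unit.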

Hope this will prove to be helpful.
Thanks.

Friday, March 14, 2014

Basic Interview Questions on Cognos.



Query Studio : 

is a very user-friendly ad hoc query tool that allows users to easily access information. With Query Studio, the user drags and drops fields to execute queries, and with multiple options available at the touch of a mouse, it can group data, create summaries, display charts, filter information and export data. An author who has access to Query Studio is referred to as a Business Author.



Metrics Studio :

is a powerful scorecard tool that allows for the creation and monitoring of goals throughout an organization. An author who has access to Metrics Studio is a Business Manager.


Analysis Studio :

is the user-friendly tool used to quickly analyze summarized information with powerful dimension and variable crossing for complex analysis, discovery of trends and forecasting. An Author who has access to Analysis Studio is a Business Analyst.

Report Studio :

is the professional report-authoring tool. The tool allows for the development of pixel-perfect reports, such as invoices and statements, as well as very complex layouts, such as those required for enterprise dashboards. An Author who has access to Report Studio is a Professional Author.





Cascading prompts :
Cascading prompts allow a user to use values selected from one prompt to filter the values in another prompt.


Difference between a view and a materialized view
A view stores only the query; whenever you execute the view, it reads from the base tables. With a materialized view, the loading or replication takes place only once (until the next refresh), so the results are stored physically, which gives you better query performance.
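To make the difference concrete, here is a small sketch using SQLite from Python. SQLite has no native materialized views, so the "materialized view" below is emulated as a snapshot table; this illustrates the concept rather than DB2 or Oracle syntax, and all table names are invented for the example.

```python
# Sketch contrasting a view (query re-run on every access) with a
# materialized view, emulated here as a snapshot table that only
# changes when it is explicitly refreshed.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (year INT, price INT)")
con.executemany("INSERT INTO sales VALUES (?, ?)", [(2010, 100), (2010, 200)])

# A view stores only the query text.
con.execute("CREATE VIEW v_totals AS "
            "SELECT year, SUM(price) AS total FROM sales GROUP BY year")
# A "materialized view": query results copied into a real table once.
con.execute("CREATE TABLE mv_totals AS "
            "SELECT year, SUM(price) AS total FROM sales GROUP BY year")

con.execute("INSERT INTO sales VALUES (2010, 300)")  # base table changes

print(con.execute("SELECT total FROM v_totals").fetchone())   # (600,) sees the new row
print(con.execute("SELECT total FROM mv_totals").fetchone())  # (300,) stale until refreshed
```

The view reflects the new base-table row immediately (at the cost of re-running the aggregation), while the snapshot answers from precomputed data until it is refreshed.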

Tuesday, March 11, 2014

Incremental Aggregation in Informatica

INCREMENTAL AGGREGATION IN INFORMATICA


Theory

Incremental aggregation is the process of capturing the changes in the source and calculating the aggregations in a session. It lets the integration service update the target incrementally and avoids recalculating the aggregations over the entire source. Consider the sales table below as an example and see how incremental aggregation works.

Source:

YEAR PRICE
----------
2010 100
2010 200
2010 300
2011 500
2011 600
2012 700

For simplicity, I have used only the year and price columns of the sales table. We need to do the aggregation and find the total price for each year.

When you run the session for the first time with incremental aggregation, the integration service processes the entire source and stores the data in two files: an index file and a data file. The integration service creates these files in the cache directory specified in the aggregator transformation properties.

After the aggregation, the target table will have the below data.

Target:

YEAR PRICE
----------
2010 600
2011 1100
2012 700
Now assume that the next day a few more rows are added to the source table.

Source:

YEAR PRICE
----------
2010 100
2010 200
2010 300
2011 500
2011 600
2012 700

2010 400
2011 100
2012 200
2013 800

Now for the second run, you have to pass only the new data changes to the incremental aggregation, so the source will contain just the last four records. Incremental aggregation uses the data stored in the cache to calculate the aggregation. Once the aggregation is done, the integration service writes the changes to the target and to the cache. The target table will then contain the data below.
Target:

YEAR PRICE
----------
2010 1000
2011 1200
2012 900
2013 800
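The mechanism can be sketched in a few lines of Python. The dictionary below stands in for the aggregator's index/data cache files; this is an illustration of the idea, not the actual cache format the integration service uses.

```python
# Sketch of incremental aggregation: the first run aggregates the full
# source into a cache; later runs fold only the new rows into that
# cache instead of re-reading everything.
def incremental_aggregate(cache, new_rows):
    """Fold (year, price) rows into the running per-year totals."""
    for year, price in new_rows:
        cache[year] = cache.get(year, 0) + price
    return cache

cache = {}
# First run: the complete source.
incremental_aggregate(cache, [(2010, 100), (2010, 200), (2010, 300),
                              (2011, 500), (2011, 600), (2012, 700)])
# Second run: only the four new rows.
incremental_aggregate(cache, [(2010, 400), (2011, 100), (2012, 200), (2013, 800)])
print(cache)  # {2010: 1000, 2011: 1200, 2012: 900, 2013: 800}
```

Note that the second run touches only four rows yet produces exactly the target table shown above, which is the whole point of keeping the cache.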

Points to remember

  • When you use incremental aggregation, the first time you have to run the session with the complete source data; in subsequent runs you pass only the changes in the source data. 
  • Use incremental aggregation only if the target is not going to change significantly. If the incremental aggregation process changes more than half of the data in the target, the session performance may not benefit. In this case go for normal aggregation.
Note: The integration service creates a new aggregate cache when:
  1. A new version of the mapping is saved. 
  2. You configure the session to reinitialize the aggregate cache. 
  3. The aggregate cache files are moved or deleted. 
  4. The number of partitions is decreased.

Configuring the mapping for incremental aggregation

Before enabling the incremental aggregation option, make sure that you capture only the changes in the source data. You can use a lookup transformation or a stored procedure transformation to filter out the data which has already been processed. You can also create a trigger on the source database and read only the source changes in the mapping.

You can find the incremental aggregation in the session properties tab under the performance section.

Friday, March 7, 2014

Performing Update without Look-Up and Update strategy.

Hi Folks,

This may sound a bit odd: when we have the Lookup and Update Strategy transformations, why not use them?

But trust me this is a very commonly encountered question in interviews. So here I can give you a solution for this with descriptions.

Please go through it twice; you will definitely be able to make it out.

You might have come across an ETL scenario where you need to update a huge table with a few records and occasional inserts. The straightforward approach of using a LookUp transformation to identify the inserts and updates, and an Update Strategy to do the insert or update, may not be right for this particular scenario, mainly because the LookUp transformation's performance degrades as the lookup table size increases.
In this article, let's talk about a design that can take care of this scenario.

The Theory

When you configure an Informatica PowerCenter session, you have several options for handling database operations such as insert, update, delete.

Specifying an Operation for All Rows

During session configuration, you can select a single database operation for all rows using the Treat Source Rows As setting from the 'Properties' tab of the session.

  • Insert :- Treat all rows as inserts. 
  • Delete :- Treat all rows as deletes. 
  • Update :- Treat all rows as updates. 
  • Data Driven :- Integration Service follows instructions coded into Update Strategy flag rows for insert, delete, update, or reject.

Specifying Operations for Individual Target Rows

Once you determine how to treat all rows in the session, you can also set options for individual rows, which gives you additional control over how each row behaves. Define these options in the Transformations view on the Mapping tab of the session properties.
 
Insert :- Select this option to insert a row into a target table.
Delete :- Select this option to delete a row from a table.
Update :- You have the following options in this situation:
  • Update as Update :- Update each row flagged for update if it exists in the target table. 
  • Update as Insert :- Insert each row flagged for update. 
  • Update else Insert :- Update the row if it exists. Otherwise, insert it. 
Truncate Table :- Select this option to truncate the target table before loading data.
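The "Update else Insert" behaviour amounts to an upsert. Here is a rough Python/SQLite sketch of that logic; the table and column names are made up for illustration, and the real work is of course done by the integration service, not application code.

```python
# Sketch of the "Update else Insert" target option: update the row if
# the key exists, otherwise insert it, with no Update Strategy or
# LookUp transformation involved.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (emp_id INTEGER PRIMARY KEY, salary INT)")
con.execute("INSERT INTO target VALUES (1, 1000)")

def update_else_insert(con, emp_id, salary):
    cur = con.execute("UPDATE target SET salary = ? WHERE emp_id = ?",
                      (salary, emp_id))
    if cur.rowcount == 0:                      # no row updated -> insert
        con.execute("INSERT INTO target VALUES (?, ?)", (emp_id, salary))

update_else_insert(con, 1, 2000)  # existing key -> update
update_else_insert(con, 2, 1500)  # new key      -> insert
print(con.execute("SELECT emp_id, salary FROM target ORDER BY emp_id").fetchall())
# [(1, 2000), (2, 1500)]
```

This try-the-update-first approach suits the scenario described above (mostly updates, occasional inserts), since the common case costs a single statement.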

Design and Implementation


Now we understand the properties we need to use for our design implementation.

We can create the mapping just like an 'INSERT'-only mapping, without any LookUp or Update Strategy transformation. During session configuration, let's set up the session properties such that the session will have the capability to both insert and update.

First, set the Treat Source Rows As property as shown in the image below.


Now let's set the properties for the target table as shown below. Choose the Insert and Update else Insert properties.


That's all we need to set up the session for update and insert without an update strategy.

All The Best


Getting ALIVE..!!

Hi All

I have not been active on my blog for a long time, as I was tangled up in many things. Now I am back again to bring some new things which might help you in one way or the other.

My initial blogs were on .NET; then I moved to Mainframes, as I was hired as a Mainframe professional. Now I am working in an MNC as an Informatica and Cognos developer, so here I want to share some of the important things.

Hope this will help you all.

Thanks

Saturday, August 17, 2013

Some common DB2 / SQL errors with their resolution.

Hi
I am here presenting some common errors; I hope they will help you in interviews.

INSTANT EXPERT: SQL ERROR CODES
DB2 UDB Version 8 for z/OS

Introduction

SQL return codes provided by DB2 UDB for OS/390 and z/OS can be confusing, and often reference manuals are not available or close at hand when you really need them. This reference card will review SQL return code processing and common SQL error conditions you may encounter in your daily work with DB2.

Retrieving SQL Return Code Information & Messages Into Your Programs

• COBOL programs executing SQL statements communicate with DB2 via a Working Storage area called the SQL Communications Area (SQLCA).
• When DB2 executes SQL statements, it returns the results of the operation into the SQLCODE and SQLSTATE fields in the SQLCA. SQLCODE provides key information about the success or failure of SQL statement execution.

SQLCODE Overview

• If SQLCODE = 0, execution was successful.
• If SQLCODE > 0, execution was successful with a warning.
• If SQLCODE < 0, execution was not successful.
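The sign convention above can be captured in a tiny helper. This is just a sketch of the convention; SQLCODE +100, used in the example, is the common "row not found" warning.

```python
# Sketch of the DB2 SQLCODE sign convention:
# 0 = success, positive = success with a warning, negative = error.
def classify_sqlcode(sqlcode):
    if sqlcode == 0:
        return "success"
    if sqlcode > 0:
        return "success with warning"
    return "error"

print(classify_sqlcode(0), classify_sqlcode(100), classify_sqlcode(-811))
# success / success with warning / error
```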

-805 DBRM OR PACKAGE NAME location-name.collection-id.dbrm-name.consistency-token NOT FOUND IN PLAN plan-name. REASON reason.
Suggestion: Ensure the COLLECTION name is in the DB2 PLAN. Recompile and BIND the DB2 program. Verify the correct LOAD library is being used.

-811 THE RESULT OF AN EMBEDDED SELECT STATEMENT OR A SUBSELECT IN THE SET CLAUSE OF AN UPDATE STATEMENT IS A TABLE OF MORE THAN ONE ROW, OR THE RESULT OF A SUBQUERY OF A BASIC PREDICATE IS MORE THAN ONE VALUE.
Suggestion: -811 is often detected after a program check for DB2 data existence. Consider using the new DB2 V8 FETCH FIRST ROW ONLY feature instead.

-818 THE PRECOMPILER-GENERATED TIMESTAMP x IN THE LOAD MODULE IS DIFFERENT FROM THE BIND TIMESTAMP y BUILT FROM THE DBRM z.
Suggestion: Recompile and BIND the DB2 program. Verify the correct LOAD library is being used.

-904 UNSUCCESSFUL EXECUTION CAUSED BY AN UNAVAILABLE RESOURCE. REASON reason-code, TYPE OF RESOURCE resource-type, AND RESOURCE NAME resource-name.
Suggestion: -904 is usually caused because a database utility job has started the desired DB2 object in utility mode. Check the DB2 Master Log for more details on the resource name; contact the DBA.

-911 THE CURRENT UNIT OF WORK HAS BEEN ROLLED BACK DUE TO DEADLOCK OR TIMEOUT. REASON reason-code, TYPE OF RESOURCE resource-type, AND RESOURCE NAME resource-name.
Suggestion: Review the DB2 Master Log to find the process holding DB2 locks. Consider adding additional COMMITs to the program holding the DB2 resource.

-913 UNSUCCESSFUL EXECUTION CAUSED BY DEADLOCK OR TIMEOUT. REASON CODE reason-code, TYPE OF RESOURCE resource-type, AND RESOURCE NAME resource-name.
Suggestion: Review the DB2 Master Log to find the process holding DB2 locks. Consider adding additional COMMITs to the program holding the DB2 resource.

-922 AUTHORIZATION FAILURE: error-type ERROR. REASON reason-code.
Suggestion: The connection to DB2 has failed due to authority for the USER or PLAN. Contact the DBA to check DB2 authorizations.

-927 THE LANGUAGE INTERFACE (LI) WAS CALLED WHEN THE CONNECTING ENVIRONMENT WAS NOT ESTABLISHED. THE PROGRAM SHOULD BE INVOKED UNDER THE DSN COMMAND.

-117 THE NUMBER OF VALUES ASSIGNED IS NOT THE SAME AS THE NUMBER OF SPECIFIED OR IMPLIED COLUMNS.
Suggestion: Provide one value for each column in the table.

-150 (DB2 V8) THE OBJECT OF THE INSERT, DELETE, OR UPDATE STATEMENT IS A VIEW, SYSTEM-MAINTAINED MATERIALIZED QUERY TABLE, OR TRANSITION TABLE FOR WHICH THE REQUESTED OPERATION IS NOT PERMITTED.
Suggestion: Be certain to specify base DB2 table/view names for INSERT statements.

-180 THE DATE, TIME, OR TIMESTAMP VALUE value IS INVALID.
Suggestion: Verify the data value is in the correct range and value type.

-181 THE STRING REPRESENTATION OF A DATETIME VALUE IS NOT A VALID DATETIME VALUE.
Suggestion: Verify the data format with the SQL Reference Guide.

-204 name IS AN UNDEFINED NAME.
Suggestion: Correct the DB2 CREATOR or OBJECT NAMEs located in the SQL statements.

-227 (DB2 V8) FETCH fetch-orientation IS NOT ALLOWED, BECAUSE CURSOR cursor-name HAS AN UNKNOWN POSITION (sqlcode, sqlstate).
Suggestion: CLOSE and re-OPEN the cursor; for scrollable cursors, use FIRST, LAST, BEFORE, AFTER, or ABSOLUTE to establish a valid position.

-305 THE NULL VALUE CANNOT BE ASSIGNED TO OUTPUT HOST VARIABLE NUMBER position-number BECAUSE NO INDICATOR VARIABLE IS SPECIFIED.
Suggestion: Add a null indicator variable to the SELECT statement in the format "column:hostvarind".

-501 THE CURSOR IDENTIFIED IN A FETCH OR CLOSE STATEMENT IS NOT OPEN.
Suggestion: Correct the logic in the application program to OPEN the cursor before the FETCH or CLOSE statement.

-502 THE CURSOR IDENTIFIED IN AN OPEN STATEMENT IS ALREADY OPEN.
Suggestion: Correct the logic in the application program to CLOSE the cursor before the OPEN statement.

-503 A COLUMN CANNOT BE UPDATED BECAUSE IT IS NOT IDENTIFIED IN THE UPDATE CLAUSE OF THE SELECT STATEMENT OF THE CURSOR.
Suggestion: Use the FOR UPDATE clause in your cursor.

-530 THE INSERT OR UPDATE VALUE OF FOREIGN KEY constraint-name IS INVALID.
Suggestion: Ensure that the INSERT of the row in the DB2 PARENT table is completed before the INSERT of the row in the CHILD table.

-532 THE RELATIONSHIP constraint-name RESTRICTS THE DELETION OF ROW WITH RID X'rid-number'.
Suggestion: Change the program to DELETE the CHILD table row before the DELETE of the row in the PARENT table.

-551 auth-id DOES NOT HAVE THE PRIVILEGE TO PERFORM OPERATION operation ON OBJECT object-name.
Suggestion: Contact the support DBA to GRANT the needed privilege.

-803 AN INSERTED OR UPDATED VALUE IS INVALID BECAUSE THE INDEX IN INDEX SPACE indexspace-name CONSTRAINS COLUMNS OF THE TABLE SO NO TWO ROWS CAN CONTAIN DUPLICATE VALUES IN THOSE COLUMNS. RID OF EXISTING ROW IS X'rid'.
Suggestion: Verify the DB2 INDEX and, if needed, change the statement to an UPDATE.