There arise several situations in Pentaho DI where we need to execute a single piece of code for every row coming from the input stream, with each row generating a different set of output. To accomplish this, Pentaho has a step named “Copy rows to result”. This step allows you to transfer rows of data (in memory) to the next transformation (or job entry) in a job via an internal result row set. The rows can then be consumed by the “Get rows from result” step and by job entries that are able to process the internal result row set.

Scenario: Suppose you have an Excel file which contains rows of employee names along with their details. Check the sample employee details in the image below. The requirement is to create a separate Excel file for each employee along with their details.

Step I: Create a Job with two transformations/KTRs:
Transformation 1: Load Employees List into Memory
Transformation 2: Generate Output for every Employee

Step II: The idea is to read all the contents of the employee file into memory using the “Copy rows to result” step, so the rows can later be passed on as parameters. In the first transformation, do the following as shown below. After successfully executing this step, all the incoming rows from the input step (Table Input, as in the picture) will be stored in memory.

Step III: Once you have completed this step, open the transformation settings of the second transformation and check the two options shown in the image below. “Copy previous results to parameters” will allow you to store the incoming rows as parameters, which makes it easy to use the row values as variables. “Execute for every input row” will ensure that the second transformation is executed once for every entry held in memory. This is how we achieve a different output for each employee entry. Now you need to add parameters in the same transformation settings of the second KTR, as in the image below. The parameter names can change according to your requirement.
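Pentaho wires this loop together in the GUI, but the control flow of the job can be sketched in plain Python to make the mechanics concrete. In this sketch, the employee records, field names, and the helper functions are illustrative assumptions (not part of the article), and it writes CSV files instead of Excel so it needs only the standard library:

```python
import csv
import os

# Hypothetical sample rows standing in for the employee sheet in the article.
employees = [
    {"name": "Alice", "department": "Sales", "salary": "50000"},
    {"name": "Bob", "department": "IT", "salary": "60000"},
]

def load_employees_into_memory(rows):
    """Transformation 1: like 'Copy rows to result' -- collect all
    incoming rows into an in-memory result set."""
    return list(rows)

def generate_output_for_employee(params, out_dir):
    """Transformation 2: executed once per row; the row's fields arrive
    as parameters ('Copy previous results to parameters')."""
    path = os.path.join(out_dir, f"{params['name']}.csv")
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(params.keys()))
        writer.writeheader()
        writer.writerow(params)
    return path

def run_job(rows, out_dir):
    """The job: Step II loads rows into memory, then Step III runs the
    second transformation once per row ('Execute for every input row')."""
    result_rows = load_employees_into_memory(rows)
    return [generate_output_for_employee(row, out_dir) for row in result_rows]
```

Calling `run_job(employees, some_directory)` produces one file per employee, mirroring the one-output-per-row behaviour the two checkboxes give you in the real job.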