Yes, you are right. For data like this, we would usually recommend ComplexDataReader. However, setting up this component can sometimes be a little tricky, and whether it is the best solution depends on the exact structure of your data. For example:
1. Is there always just one header, some number of transactions and one footer?
2. Is it always in this order?
3. Does each of these records have a prefix or a standard field?
I assume that the answer is "yes" to all of those questions, and based on that assumption I have prepared two different example solutions for you.
One of them uses the ComplexDataReader (complexData.grf). It splits the data into three parts:
1. Header: The ComplexDataReader assumes that the header is just one line, and then it automatically continues to the next state: transactions.
2. Transactions: Lines are routed to the second output as long as their prefix is not FOOTER.
3. Footer: The ComplexDataReader, as it is set up in my example, assumes that after it switches to state "$2 footer", there is just one "footer" line left to be processed.
The data are then passed to three output ports, each with its own metadata.
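To illustrate the state machine that the ComplexDataReader implements, here is a minimal sketch of the same logic in plain Python. The sample data, field layout, and the "FOOTER" prefix are assumptions for illustration only; the actual graph in complexData.grf does this declaratively with per-state metadata:

```python
def split_records(lines):
    """Split lines into (header, transactions, footer) using a
    simple state machine: one header line, then transactions
    until a line prefixed with FOOTER, then one footer line."""
    header, transactions, footer = None, [], None
    state = "header"
    for line in lines:
        if state == "header":
            header = line              # the first line is the header
            state = "transactions"     # advance to the next state
        elif state == "transactions":
            if line.startswith("FOOTER"):
                footer = line          # switch to the final state
                state = "done"
            else:
                transactions.append(line)
    return header, transactions, footer

# Hypothetical sample input, mirroring the assumed structure.
sample = [
    "HEADER 2024-01-01",
    "TRANS 1 100.00",
    "TRANS 2 250.50",
    "FOOTER 2",
]
h, t, f = split_records(sample)
```

In the graph, each of the three results would then go to its own output port with its own metadata.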
With this kind of data, you can also consider filtering record lines based on the known prefix (or part of a string and so on). For example, if the transaction line always starts with a prefix TRANS, you can filter the records to a given output using the following expression: startsWith($in.0.field1,"TRANS") in a Filter component (see filterData.grf). This way you can then reformat each line and process it independently.
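For comparison, the CTL expression startsWith($in.0.field1,"TRANS") behaves roughly like the following Python sketch (the record values here are made up for illustration):

```python
def is_transaction(field1):
    """Rough equivalent of the CTL expression
    startsWith($in.0.field1, "TRANS") used in the Filter component."""
    return field1.startswith("TRANS")

# Hypothetical records; only lines with the TRANS prefix pass the filter.
records = ["HEADER 2024-01-01", "TRANS 1 100.00", "FOOTER 2"]
transactions = [r for r in records if is_transaction(r)]
```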
Please review these, and if neither is a suitable solution for you, please provide me with an example of your data (feel free to remove any sensitive information from the file), or you can send it to firstname.lastname@example.org.
Thanks and have a nice day,
Eva