This is very much a beginner's question.
I have a history of transactions for a large number of properties that I want to prepare for analysis. The transactions are in a file that I have set up (or can set up) to also import into a database table. The supporting data is also in CSV files, but could likewise be loaded into database tables.
- I need to merge in some additional data from CSV files, looking up by ID. This effectively just adds columns from one file to the master table via a simple join: for example, taking a property transaction and adding a type classification by looking that transaction up in a separate file/table. Is the join table node the best one to use for this?
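To make what I mean concrete, here is roughly the operation in pandas terms (the column names and data are made up; my real files have more fields):

```python
import pandas as pd

# Hypothetical stand-ins for my real files: the master transaction table
# and a separate classification lookup keyed by transaction ID.
transactions = pd.DataFrame({
    "txn_id": [1, 2, 3],
    "amount": [100.0, 250.0, 80.0],
})
classifications = pd.DataFrame({
    "txn_id": [1, 2, 3],
    "txn_type": ["sale", "lease", "sale"],
})

# A left join keeps every transaction and simply adds the extra columns.
merged = transactions.merge(classifications, on="txn_id", how="left")
print(merged)
```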
- I then need to 'flag' some records: we look up whether a property transaction is of a type listed in a separate file/table, in this case a list of transaction types that are considered distinct types for review. I think the right way to do this is with an outer join to add the flag?
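Again for concreteness, this is the kind of flagging I have in mind, sketched in pandas with made-up names (a left join plus a matched/not-matched indicator rather than a full outer join):

```python
import pandas as pd

# Hypothetical stand-ins: the transactions and a list of types for review.
transactions = pd.DataFrame({
    "txn_id": [1, 2, 3],
    "txn_type": ["sale", "lease", "transfer"],
})
review_types = pd.DataFrame({"txn_type": ["lease", "transfer"]})

# indicator=True records whether each row found a match in the review list;
# the flag is then just "did this row match?".
flagged = transactions.merge(review_types, on="txn_type",
                             how="left", indicator=True)
flagged["review_flag"] = flagged["_merge"] == "both"
flagged = flagged.drop(columns="_merge")
print(flagged)
```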
- Finally, I need to match against one last data set that links each transaction to a client record. Getting the correct client record requires taking the transaction date and searching a file that holds, for each property, the client ID together with a start date and an end date. I am really not sure of the best way to go about this:
- Should I take the joined table and loop over it with the 'Table Row to Variable Loop Start'?
- If so, what would be the best approach or nodes to use to take each row, pick out the ID and date, and use them to query the client dataset?
- Or, is it better and more straightforward to push the client data and the joined table into a database and then write a view or stored procedure that handles this search-and-combine for me?
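In case it helps clarify the date-range matching I'm describing, here is a sketch of the logic in pandas (all names and dates invented): join on the property, then keep only the client record whose start/end range contains the transaction date, so no row-by-row loop is needed.

```python
import pandas as pd

# Hypothetical stand-ins: each client record covers one property
# for a start/end date range.
transactions = pd.DataFrame({
    "property_id": [10, 10, 20],
    "txn_date": pd.to_datetime(["2020-06-01", "2022-01-15", "2021-03-10"]),
})
clients = pd.DataFrame({
    "property_id": [10, 10, 20],
    "client_id": ["A", "B", "C"],
    "start_date": pd.to_datetime(["2019-01-01", "2021-01-01", "2020-01-01"]),
    "end_date": pd.to_datetime(["2020-12-31", "2023-12-31", "2022-12-31"]),
})

# Join on property (producing one candidate row per client record),
# then filter to the record whose date range contains the transaction date.
candidates = transactions.merge(clients, on="property_id", how="left")
matched = candidates[
    candidates["txn_date"].between(candidates["start_date"],
                                   candidates["end_date"])
]
print(matched[["property_id", "txn_date", "client_id"]])
```

The equivalent in SQL would be a join with a `BETWEEN` condition on the dates, which is what a view or stored procedure would express.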
Many thanks in advance,