Currently, you cannot continue modeling directly from a batch dataflow. You have two options now, plus a best practice to help you avoid this situation in the future:
Option 1a – Cascading Batch Dataflows
You can export the results of your batch dataflow as an Xcalar Table and continue modeling from it. At the end of your modeling session, exporting a new batch dataflow will capture the entire data algorithm, from data source to output.
Note: If you choose this option, Xcalar recommends you use your modeling cluster both for executing the batch dataflow and for this new stage of modeling. Modeling in your operational cluster could put your organization's Service Level Objectives (SLOs) at risk.
Option 1b – Export and Re-import CSV
In rare circumstances, such as when the output of the first batch dataflow is extremely small, or when you need to process that output in a large number of distinct ways, it may be more efficient to export the data as CSV and then model a separate dataflow from that CSV output. Because Xcalar streamlines execution of operations on data in memory, disk-intensive operations like CSV export are usually prohibitively expensive, so this option is impractical outside those rare cases.
Option 2 – Repeat the Steps
Depending on the amount of work involved, it might be faster simply to repeat your modeling steps. It is easier to operationalize one batch dataflow than to work through the dependencies of multiple batch dataflows.
Best Practice – Duplicate Your Workbook
The best practice for next time is to Duplicate your Workbook, either before you release modeling memory or when you export the batch dataflow. A Duplicated Workbook doesn't take up memory, and it contains all of the metadata needed to work forward from that stopping point, or from any other active, hidden, or temporary virtual table in your work.
Would you like me to add your name to the existing feature request for the ability to continue modeling from a batch dataflow? Let me know if you run into any additional challenges!