Your instinct is spot on, but rather than framing these as best practices, keep in mind that each dataset is unique, and each dataset's error handling tends to be equally unique. The use of these facilities is therefore somewhat subjective. That said, you are 100% right that Xcalar Design provides all of the tools you need (and then some) to handle your data errors:
Grossly mis-structured data – A great practice is to anticipate records with badly mis-structured data. Consider the possibility that your data is not well formed, and if it doesn't conform to your expectations, use a Python try/except to raise an error that halts the import. For more information, see Import UDFs error questions.
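To make that concrete, here is a minimal sketch of the try/except pattern in an import-UDF-style parser. The function name `parse_records`, the line-per-JSON-record layout, and the required `"id"` field are all hypothetical assumptions for illustration; adapt them to your own parser and structural expectations.

```python
import json

def parse_records(in_stream):
    """Hypothetical import UDF body: parse one JSON record per line.

    If a line is grossly mis-structured, raise immediately so the
    import halts instead of silently loading bad data.
    """
    for line_num, line in enumerate(in_stream, start=1):
        try:
            record = json.loads(line)
            # Structural expectation (an assumption for this sketch):
            # every record must be an object with an "id" field.
            if not isinstance(record, dict) or "id" not in record:
                raise ValueError("record missing required 'id' field")
        except (json.JSONDecodeError, ValueError) as err:
            # Re-raise with context; the import halts here.
            raise ValueError(f"line {line_num}: bad record: {err}") from err
        yield record
```

The key design choice is to re-raise rather than swallow the exception: a halted import with a line number in the message is far easier to debug than a dataset that silently loaded half its records.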
Once you know that your data is roughly well structured, it commonly still has numerous inconsistencies to accommodate. In that situation, you face a classic decision: omit the problem data at the source, or handle the unexpected data in Xcalar. Only you can make the right call in the context of your organization. If you choose to do this work in Xcalar Design (which is common), I find that FNFs, null, and None provide significant flexibility:
FNFs will highlight when the data within a field does not match the field's type, for example, when you have string data in an integer field. The fact that FNFs don't actually change the underlying data is key here. For more info on FNFs, check out Field not Found FNFs - are these the same as Nulls?
Next, because Xcalar Design treats empty strings (""), null, and None as distinct entities, you can use them to represent different no-value error conditions, such as different kinds of missing binary data. If you can identify these conditions in a UDF, you can yield JSON containing null or None, as needed. For more information, refer to last week's thread, Handling Nulls in Xcalar.
Be well, @vhall, and happy data-wrangling with import UDFs!