I'm just starting to look into Embedded Process in Teradata. I'm a data miner / advanced SQL coder, so my intention is to use an Embedded Process to manage predictive scoring and similar analytics.
Suppose I have a database table containing 'predictive scoring' functions (each record contains a function/predictive model, and the columns represent the input parameters of that function/model), which are then applied as a 'scoring' mechanism to customer-level tables (to predict risk, churn, fraud, etc.).
What I'm wondering is: would the embedded process be instantiated/made static when used to score a customer table, or could it be modified as the records are processed?
I realise that the MPP nature of Teradata makes sequential processing of records awkward, but the idea of having an analytics model as an embedded process that can be adaptive (self-learning) as records are processed is very cool. I can manage the statistics side of the problem; I'm just unsure whether an Embedded Process used in this way would be applied atomically (parameters fixed for the whole scan) or dynamically (applied to each record individually, with updates in between).
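To make the atomic-vs-dynamic distinction concrete, here is a minimal, purely illustrative sketch in Python (not Teradata-specific; the `ChurnModel` class and its running-mean "model" are hypothetical stand-ins for a real predictive model). Atomic scoring treats the model's parameters as a snapshot for the whole table scan; dynamic scoring lets the model update itself after each record, so later rows see adapted parameters:

```python
from dataclasses import dataclass


@dataclass
class ChurnModel:
    # Toy model: the "score" is a customer's spend's distance from a running mean.
    mean_spend: float = 0.0
    n_seen: int = 0

    def score(self, spend: float) -> float:
        return abs(spend - self.mean_spend)

    def update(self, spend: float) -> None:
        # Incremental (self-learning) update of the running mean.
        self.n_seen += 1
        self.mean_spend += (spend - self.mean_spend) / self.n_seen


def score_atomically(model: ChurnModel, spends: list[float]) -> list[float]:
    # Parameters are frozen for the duration of the scan.
    return [model.score(s) for s in spends]


def score_dynamically(model: ChurnModel, spends: list[float]) -> list[float]:
    # Model adapts after each record; later rows see updated parameters.
    scores = []
    for s in spends:
        scores.append(model.score(s))
        model.update(s)
    return scores
```

Note that the dynamic version is order-dependent: scoring the same rows in a different order yields different results, which is exactly the property that sits uneasily with MPP parallelism, where rows are spread across AMPs and processed concurrently.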
What are your thoughts / comments?