The Friday Night Project #16 – Quotation Management Part 1


In this Mini Section we are going to extend upon the Simple Quotation Engine from FNP#6 in order to do Quotation Management. This will involve persisting information from the “Get Quote” method including Customer and Property Details as well as the resulting Quotation information. In this first part we will discuss the business process and establish the new database tables we will need. In the second part we will create the necessary Data Access Objects using iBatis and wire the Quotation Manager into the Web service of FNP#8 and FNP#9, and in the third part we will complete the Quotation Management process by providing for the ability to buy a TZA Insurance Policy using the Web service.

Quotation Management Business Process

First let's recap the TZA-Insurance business model, i.e. the provision of house insurance, whereby quotations are provided to customers based upon the physical details of their house or property and its location (Zip Code). The quotation itself is produced by the Simple Quotation Engine, a very simple algorithm that ultimately comes down to accessing a lookup table (ZipCodeRiskFactors) that assigns different levels of risk associated with fire, flood, theft, etc. to a given location (in this case a range of Zip Codes).

It would be possible to simply run the Simple Quotation Engine every time the customer asked for a quotation, and if they have changed the details of the property for which they are requesting a quotation that is indeed required. However, TZA-Insurance appreciates that customers will most likely “shop around” and may request quotations from several insurers before settling on one. In line with the rest of the industry, TZA-Insurance quotations are valid for 30 days; therefore, if the same customer requests a quotation for a set of property details identical to one they submitted within the previous 30 days, they will be provided with the same quotation amount as before. In this case we will not use the Simple Quotation Engine at all; rather we will retrieve and use the previously stored quotation.
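The reuse rule is easy to sketch in code. The illustrative Python below is not part of the TZA codebase (the function and field names are assumptions for the sketch); it simply shows the decision: reuse a stored quotation when the property details match and the quote is still inside its 30-day validity window, otherwise fall back to the quotation engine.

```python
from datetime import date, timedelta

QUOTE_VALIDITY = timedelta(days=30)

def get_quote(property_details, stored_quotes, today, run_engine):
    """Reuse a stored quotation when the property details match and the
    quote is still within its 30-day validity window; otherwise fall
    back to the quotation engine for a fresh quote."""
    for quote in stored_quotes:
        same_property = quote["property"] == property_details
        still_valid = today < quote["start_date"] + QUOTE_VALIDITY
        if same_property and still_valid:
            return quote["amount"]            # reuse the stored quotation
    return run_engine(property_details)       # compute a fresh quotation
```

In the real application the "stored quotes" live in the QUOTATION and PROPERTY tables we are about to create, and the engine is the Simple Quotation Engine from FNP#6.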

To achieve this we are going to need three new tables to maintain a persistent record of current or potential Customers, their Properties and any Quotations associated with them, so this week we will set up all of the database elements necessary to provide for Quotation Management.

Update the TZA-Database Project.

Start up your Eclipse IDE (including the Teradata Plug-In) against your Friday Night Project Workspace.

Open up the TZA-Database project and in particular the src/sql folder.

Add new Table Definitions to CreateTZA_Tables.sql

The first thing we need to do is add the definitions for our three new tables. Double click on the CreateTZA_Tables.sql file and, when it opens in the SQL Editor, cut and paste the following definitions into it.

--/**
-- * CUSTOMER Table used to hold Customer Information
-- */

CREATE TABLE CUSTOMER
(
CUSTOMER_ID INTEGER NOT NULL PRIMARY KEY,
FIRSTNAME VARCHAR(50),
MIDDLENAME VARCHAR(50),
LASTNAME VARCHAR(50),
DOB DATE NOT NULL,
PREFIX VARCHAR(4),
SALUTATION VARCHAR(10)
)
UNIQUE INDEX (FIRSTNAME, MIDDLENAME, LASTNAME);
COMMIT;

--/**
-- * PROPERTY Table used to hold Property Information
-- */

CREATE TABLE PROPERTY
(
PROPERTY_ID INTEGER NOT NULL PRIMARY KEY,
HOUSE_NAMENUMBER VARCHAR(15),
STREET_ADDRESS1 VARCHAR(50) NOT NULL,
STREET_ADDRESS2 VARCHAR(50),
STREET_ADDRESS3 VARCHAR(50),
STREET_ADDRESS4 VARCHAR(50),
CITY VARCHAR(50) NOT NULL,
STATE CHAR(2) NOT NULL,
ZIPCODE CHAR(5) NOT NULL,
--/*
-- * OWNERSHIP:- 0 - RENTED, 1 - OWNED, 2 - MORTGAGED.
-- */
OWNERSHIP INTEGER NOT NULL,
NUM_BEDROOMS INTEGER NOT NULL,
YEAR_BUILT CHAR(4) NOT NULL,
--/*
-- * PROPERTY_TYPE:- 0 - APARTMENT, 1 - CONDOMINIUM, 2 - TOWN HOUSE, 3 - DUPLEX, 4 - DETACHED.
-- */
PROPERTY_TYPE INTEGER NOT NULL,
BUILDINGS_AMOUNT_INSURED INTEGER NOT NULL,
BUILDINGS_COVER CHAR(1) NOT NULL,
BUILDINGS_ACCIDENTAL_COVER CHAR(1) NOT NULL,
CONTENTS_AMOUNT_INSURED INTEGER NOT NULL,
CONTENTS_COVER CHAR(1) NOT NULL,
CONTENTS_ACCIDENTAL_COVER CHAR(1) NOT NULL,
SINGLE_ITEM_LIMIT INTEGER,
ALARMED CHAR(1) NOT NULL,
SECURITY_PATROLLED CHAR(1) NOT NULL,
CUSTOMER_ID INTEGER NOT NULL,
FOREIGN KEY(CUSTOMER_ID) REFERENCES CUSTOMER(CUSTOMER_ID)
)
UNIQUE INDEX(HOUSE_NAMENUMBER, STREET_ADDRESS1, ZIPCODE);
COMMIT;

--/**
-- * QUOTATION Table holds information on Quotes given to a CUSTOMER against their PROPERTY
-- */

CREATE TABLE QUOTATION
(
QUOTATION_ID INTEGER NOT NULL PRIMARY KEY,
PROPERTY_ID INTEGER NOT NULL,
AMOUNT NUMERIC(18,2) NOT NULL,
CURRENCY_CODE CHAR(3) NOT NULL,
START_DATE DATE NOT NULL,
EXPIRY_DATE DATE NOT NULL,
FOREIGN KEY(PROPERTY_ID) REFERENCES PROPERTY(PROPERTY_ID)
)
UNIQUE INDEX (PROPERTY_ID);
COMMIT;

Nothing particularly interesting to see here: relatively standard database definitions of the elements that appeared previously in the TZA-InsuranceProcess Business Objects and the TZA-InsuranceService Schema Definitions (XSDs). Strings become VARCHARs, Booleans become CHAR(1), BigDecimals become NUMERIC(18,2), etc. We add some Teradata-specific UNIQUE INDEX statements to improve SELECT performance, but it is not really rocket navigation (remember, rocket science is easy: just light the blue touch paper and retire; steering the thing is the clever part).

Update DropTZA_Tables.sql

We need to add the corresponding drop statements to the DropTZA_Tables.sql file so we can have our automated TZA-Database build. Double click on the DropTZA_Tables.sql file and when it opens in the SQL Editor cut and paste the following definitions into it.

DROP TABLE QUOTATION;
COMMIT;

DROP TABLE PROPERTY;
COMMIT;

DROP TABLE CUSTOMER;
COMMIT;

Update DeleteTZA_Data.sql

We also need to add the corresponding delete statements to the DeleteTZA_Data.sql file so we can have our automated TZA-Database build. Double click on the DeleteTZA_Data.sql file and when it opens in the SQL Editor cut and paste the following definitions into it.

DELETE FROM QUOTATION;

DELETE FROM PROPERTY;

DELETE FROM CUSTOMER;

InitializeTZA_DB

We are not going to load any data into these tables at this time, so that is all the editing we need to do. All that remains is to run the TZA-Database build in order to re-initialize TZA_DB with our new tables.

The TZA-Database build.xml can be run from within Eclipse using the built in ANT View (so you never have to leave Eclipse).

  • In Eclipse select Window -> Show View -> Other... -> Ant -> Ant to open the Ant view
  • In the Ant View select the Add Buildfiles... option (Green Plus Sign with a little Ant beside it).
  • Use the Buildfile Selection Dialog to find the TZA-Database -> build.xml and add it to this view.

  • In the ANT View Double Click on the InitialiseTZA_DB task to clear any existing data and rebuild your database.

So that’s it, we are done: TZA_DB is populated with three new tables into which the Quotation Manager can persist Customer, Property and Quotation information? Not so fast, Toto; you need to ask whether you are ready to be “Highly Available”.

High Availability.

While this is just a sample application, which realistically will only ever run on your local test system, you should always be thinking ahead to what you "should" do in a full production application and architecting appropriately (remember all the good work we did in FNP#10 to Query Band enable the application).

In this section we are going to think about what we might do, architecturally, to support High Availability (HA). Being HA ready within the Enterprise Data Warehouse world ultimately comes down to having a second EDW available (the Secondary system) to take over from the Production or Primary system should it become unavailable. There are a range of techniques used to keep these two systems synchronized (nightly backup from the Primary with restore to the Secondary, partial table copy of changes from Primary to Secondary or live replication between them). While a detailed discussion of these different techniques is beyond the scope of this article, what is important to us here is how we ensure that the Primary Key for a given table is kept unique when we may be required to insert rows on either the Primary or Secondary system at any given point in time depending upon which one is considered the active system (the other being off-line or passive). Note we are not going to specifically address Dual Active here, where both Primary and Secondary may be active at the same time, although the approach we are going to take could be extended to support a Dual Active approach.

So consider our previous definition of the Customer Table (repeated below):

CREATE TABLE CUSTOMER
(
CUSTOMER_ID INTEGER NOT NULL PRIMARY KEY,
FIRSTNAME VARCHAR(50),
MIDDLENAME VARCHAR(50),
LASTNAME VARCHAR(50),
DOB DATE NOT NULL,
PREFIX VARCHAR(4),
SALUTATION VARCHAR(10)
)
UNIQUE INDEX (FIRSTNAME, MIDDLENAME, LASTNAME);

The Primary Key (which will also be the Teradata Primary Index) is going to be CUSTOMER_ID, and to satisfy the future requirement of high availability we need to ensure that it can be kept unique across the two systems. To achieve this we are going to use a Stored Procedure as a means to provide a consistent interface. While we could have done this within our application code, pushing the interface and implementation into the database gives us an advantage: we can change the underlying algorithm (implementation), say to support a Dual Active environment, without having to change the application. Our initial Stored Procedure approach is shown below:

CREATE PROCEDURE "GetCustomerID" (OUT CUSTOMER_ID INTEGER)
BEGIN
DECLARE workingID INTEGER;
SELECT MAX(CUSTOMER_ID) INTO workingID FROM CUSTOMER;
IF workingID IS NULL THEN
SET workingID = 1;
ELSE
SET workingID = workingID + 1;
END IF;
SET CUSTOMER_ID = workingID;
END;

Seems simple enough: find out the current maximum value of CUSTOMER_ID in the CUSTOMER table, add 1 and return it to the calling process. However, there are two issues with this approach.

MAX(CUSTOMER_ID) Explain Plan

The simplest way to illustrate the first problem is to demonstrate the Execution Plan capability of the Teradata Plug-in for Eclipse. When presented with a SQL statement it is possible to have Teradata “Explain” how it proposes to deal with it. This “Explain Plan” or “Execution Plan” can provide great insight into how Teradata will handle a given SQL statement. The “Explain Plan” is significant across the range of Tactical and Strategic queries, where the sheer volume of “Tactical” queries can be as significant, in terms of system resource utilization over a given time period, as a single “Strategic” query.

In order to demonstrate the use of the “Explain Plan” start by creating a SQL file (Scratchpad.sql) to hold our High Availability experiments as follows.

  • Right Click on src/sql and select New -> File -> Sql
  • Set the file Name to Scratchpad.sql.

Quotation Management Create Scratchpad.sql

Click Finish and a template SQL file will open within the Eclipse editor.

Enter the following SQL into the Scratchpad:

SELECT MAX(CUSTOMER_ID) FROM TZA_DB.CUSTOMER;

Initialize the Connection Profile to point to your preferred database and Right Click on the SQL statement. Once the Connection is established (for the first time) you will be offered the standard SQL operations menu as shown below:

Quotation Management Get Execution Plan

  • Select the Get Execution Plan option and take a look at the Execution Plan that Teradata has for this SQL Statement.

Now I’m no Database expert but words like LOCK, ALL, SUM, AGGREGATE don’t sound good when we are operating in a Tactical, Web service manner, so there is clearly something to address here.

Concurrent Sessions

The second issue, while significant within any database system, is exacerbated by the Internet/Web based nature of the applications operating upon our database. Any high performance Web or Web service application will provide for concurrent user access to the application, which in this example will be reflected, through the Connection Pool, in concurrent access to the underlying database. In traditional Enterprise Data Warehouse applications (which are typically Read Only Analytics) this is just a matter of workload management. However, in this application we will be adding to the Customer, Property and Quotation tables, which will require the use of INSERT and potentially UPDATE SQL statements. This forces us to think about the implications of concurrent access to the data in order to ensure that one thread of execution cannot overwrite the work of another.

So while the Stored Procedure defined above could be used by one application Thread of Execution to obtain the next CUSTOMER_ID, until that Thread of Execution actually Inserts a corresponding row into the CUSTOMER table with that value there is nothing to stop one or more concurrent Threads of Execution from executing the GetCustomerID Stored Procedure against the existing CUSTOMER table, which would ultimately lead to a Duplicate Index error.
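This race is easy to reproduce outside the database. The sketch below is an illustrative Python simulation (not TZA or Teradata code): many threads perform the naive "read the maximum, claim max + 1" sequence against a shared counter. Without mutual exclusion around the read-then-write pair, two threads can read the same maximum and claim the same ID; serializing the pair (as the database must) keeps every ID unique.

```python
import threading

def allocate_ids(num_threads, lock=None):
    """Each worker reads the current maximum ID and then claims max + 1,
    mimicking a SELECT MAX followed by an INSERT. Without mutual
    exclusion two workers can read the same maximum and claim the same
    ID -- the in-memory equivalent of a Duplicate Index error."""
    current = [0]                 # stands in for MAX(CUSTOMER_ID)
    claimed = []
    guard = threading.Lock()      # protects the claimed list only

    def worker():
        if lock is not None:
            with lock:                 # read and write as one atomic step
                next_id = current[0] + 1
                current[0] = next_id
        else:
            next_id = current[0] + 1   # read ...
            current[0] = next_id       # ... then write, racily
        with guard:
            claimed.append(next_id)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return claimed
```

With a lock supplied, 100 workers always claim IDs 1 through 100 exactly once; without it, duplicates can (and under load will) appear.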

Therefore we need to look at some alternative approaches that can help solve these issues.

Simple Index Table

The first thing to consider from the Explain Plan above is the potential volumes involved. Let’s say TZA-Insurance takes off and reaches the 5 million customer mark. Do we really want to be doing SUM and AGGREGATE type activities against 5 million records just to determine that the next customer ID should be 5,000,001? Probably not. Therefore we introduce a new Simple Index Table as a means to maintain the current “MAX” Customer ID as follows:

CREATE TABLE CUSTOMER_INDEX
(
CUSTOMER_ID INTEGER NOT NULL PRIMARY KEY
);

So now the associated Stored Procedure would be:

CREATE PROCEDURE "GetCustomerID" (OUT CUSTOMER_ID INTEGER)
BEGIN
DECLARE workingID INTEGER;
SELECT CUSTOMER_ID INTO workingID FROM CUSTOMER_INDEX;
IF workingID IS NULL THEN
SET workingID = 1;
ELSE
SET workingID = workingID + 1;
END IF;
SET CUSTOMER_ID = workingID;
END;

To test this approach we can add the following lines to the Scratchpad.sql file.

CREATE TABLE TZA_DB.CUSTOMER_INDEX
(
CUSTOMER_ID INTEGER NOT NULL PRIMARY KEY
);

SELECT CUSTOMER_ID FROM TZA_DB.CUSTOMER_INDEX;

Select the CREATE TABLE statement, Right Click on it and select "Execute Selected Text" in order to create the table.

Now Right Click on the SELECT statement and select the Get Execution Plan option:

Quotation Management Get Exec Plan for Customer_index

The explain looks better, and while it does have an all-AMPs RETRIEVE in it we know it will not be doing a SUM or AGGREGATE across 5+ million rows of Customer information.

While it is our architectural “intention” that the CUSTOMER_INDEX table only ever has one row in it (with the current CUSTOMER_INDEX value), no one has told the Teradata Optimizer this, so it needs to go check all of the AMPs for any rows that might be in the table. Ideally we would like to pin this single-row table onto a single Row Hash or AMP to further improve the performance of this highly tactical query.

Complex Index Table

Therefore we change our definition of the CUSTOMER_INDEX table as follows:

CREATE TABLE CUSTOMER_INDEX
(
LOCK_POINT INTEGER NOT NULL PRIMARY KEY,
CUSTOMER_ID INTEGER NOT NULL
);

Note that the Primary Key is positioned on the LOCK_POINT column (not the CUSTOMER_ID column). The application logic will need to artificially ensure that this LOCK_POINT always has a fixed, known value, thus allowing applications to access the table with a single AMP query.

To test this approach first drop the current CUSTOMER_INDEX table (use the Data Source Explorer to find and delete (drop) the TZA_DB.CUSTOMER_INDEX table) then add the following lines to the Scratchpad.sql file.

CREATE TABLE TZA_DB.CUSTOMER_INDEX
(
LOCK_POINT INTEGER NOT NULL PRIMARY KEY,
CUSTOMER_ID INTEGER NOT NULL
);

SELECT CUSTOMER_ID FROM TZA_DB.CUSTOMER_INDEX WHERE LOCK_POINT = 1;

Select the CREATE TABLE statement, Right Click on it and select Execute Selected Text in order to create the table.

Now Right Click on the SELECT statement and select the Get Execution Plan option:

Quotation Management Execution Plan with Lock Point

This looks much better as we are doing a single-AMP Retrieve based upon a unique primary index (sounds nice and Tactical to me).

Duplicate Entries

Now that we can get the value of the current Customer ID, we can determine what the next value will be (probably just +1 in this simple example; however, we could allocate even numbers only on one side and odd numbers only on the other for a Dual Active System). We still need to solve the potential duplicate entry problem, which requires us to do an update on the fly to lock down any newly selected ID. So within the Stored Procedure, once we have the new ID, we need to update the CUSTOMER_INDEX table to reflect the assignment of that ID to the current Thread of Execution.

Our modified GetCustomerID Stored Procedure might be as follows:

CREATE PROCEDURE "GetCustomerID" (OUT CUSTOMER_ID INTEGER)
BEGIN
DECLARE workingID INTEGER;
SELECT CUSTOMER_ID INTO workingID FROM CUSTOMER_INDEX WHERE LOCK_POINT = 1;
IF workingID IS NULL THEN
SET workingID = 1;
INSERT CUSTOMER_INDEX (1,1);
ELSE
SET workingID = workingID + 1;
INSERT CUSTOMER_INDEX (1, workingID);
END IF;

SET CUSTOMER_ID = workingID;
END;

UPSERT the Index

Now while this will work it is all very “Programmatic” when what we really want is a Database solution to our GetCustomerID Stored Procedure. If we are really clever (or know someone like FredK) we can come up with an even simpler Stored Procedure where we use an UPSERT as follows:

CREATE PROCEDURE "GetCustomerID" (OUT CUSTOMER_ID INTEGER)
BEGIN
BEGIN REQUEST
UPDATE CUSTOMER_INDEX SET CUSTOMER_ID=CUSTOMER_ID+1 WHERE LOCK_POINT=1
ELSE INSERT CUSTOMER_INDEX (1, 1);
SELECT CUSTOMER_ID INTO :CUSTOMER_ID FROM CUSTOMER_INDEX WHERE LOCK_POINT=1;
END REQUEST;
END;

Here we use an UPDATE ELSE INSERT (UPSERT) approach to increment any existing CUSTOMER_ID row by 1 (where the LOCK_POINT = 1), and if there is no row we initialize one with CUSTOMER_ID = 1 and LOCK_POINT = 1. So within a single SQL statement we have satisfied our requirement to quickly find the only row within the CUSTOMER_INDEX and update it with a new value. Finally we Select the updated value from the table so that it can be returned to the application. Note this all occurs within a BEGIN REQUEST / END REQUEST block to ensure its integrity.

A quick check on the Explain Plan for the UPSERT statement shows we are still doing single-AMP processing with either an UPDATE or an INSERT. In practice the INSERT will only happen on the very first call to the Stored Procedure when no customer index yet exists.

Quotation Management Explain Plan for Upsert
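The behavior the UPSERT procedure relies on can be emulated in a few lines. The illustrative Python below is an in-memory stand-in for the CUSTOMER_INDEX table (not Teradata code): "update if the LOCK_POINT row exists, else insert it", then read the value back, all under one lock, which plays the role of the BEGIN REQUEST / END REQUEST block.

```python
import threading

class IndexTable:
    """In-memory stand-in for the CUSTOMER_INDEX table: at most one row
    per LOCK_POINT, holding the last allocated CUSTOMER_ID."""

    def __init__(self):
        self._rows = {}                  # LOCK_POINT -> CUSTOMER_ID
        self._lock = threading.Lock()    # plays the role of the REQUEST block

    def get_customer_id(self, lock_point=1):
        with self._lock:
            if lock_point in self._rows:          # UPDATE ... WHERE LOCK_POINT=1
                self._rows[lock_point] += 1
            else:                                 # ELSE INSERT (1, 1)
                self._rows[lock_point] = 1
            return self._rows[lock_point]         # SELECT ... INTO :CUSTOMER_ID
```

Because the increment-or-initialize and the read-back happen as one atomic unit, concurrent callers can never receive the same ID, which is exactly the guarantee the BEGIN REQUEST / END REQUEST block gives the stored procedure.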

Complete the update of the TZA-Database Project.

Based upon this new knowledge we need to go back and update the TZA-Database project to reflect this revised approach to allocating Customer, Property and Quotation IDs. First return to the src/sql folder:

Add Index Table Definitions to CreateTZA_Tables.sql

Now that we understand the definition required for the various index tables, we need to add these definitions. Double click on the CreateTZA_Tables.sql file and when it opens in the SQL Editor cut and paste the following definitions into it.

--/**
-- * CUSTOMER_INDEX Table used to hold current Customer Index
-- */
CREATE TABLE CUSTOMER_INDEX
(
LOCK_POINT INTEGER NOT NULL PRIMARY KEY,
CUSTOMER_ID INTEGER NOT NULL
);

--/**
-- * PROPERTY_INDEX Table used to hold current Property Index
-- */
CREATE TABLE PROPERTY_INDEX
(
LOCK_POINT INTEGER NOT NULL PRIMARY KEY,
PROPERTY_ID INTEGER NOT NULL
);

--/**
-- * QUOTATION_INDEX Table used to hold current Quotation Index
-- */
CREATE TABLE QUOTATION_INDEX
(
LOCK_POINT INTEGER NOT NULL PRIMARY KEY,
QUOTATION_ID INTEGER NOT NULL
);

Add Index Table elements to DropTZA_Tables.sql

We need to add the corresponding drop statements to the DropTZA_Tables.sql file so we can have our automated TZA-Database build. Double click on the DropTZA_Tables.sql file and when it opens in the SQL Editor cut and paste the following definitions into it.

DROP TABLE CUSTOMER_INDEX;
COMMIT;

DROP TABLE PROPERTY_INDEX;
COMMIT;

DROP TABLE QUOTATION_INDEX;
COMMIT;

Add Index Table elements to ClearTZA_Data.sql

We also need to add the corresponding delete statements to the ClearTZA_Data.sql file so we can have our automated TZA-Database build. Double click on the ClearTZA_Data.sql file and when it opens in the SQL Editor cut and paste the following definitions into it.

DELETE FROM CUSTOMER_INDEX;

DELETE FROM PROPERTY_INDEX;

DELETE FROM QUOTATION_INDEX;

Create the Get Index Stored Procedures

Now we create the Stored Procedures required to access the index tables in a tactical and high-availability focused manner. Double click on the CreateTZA_Procedures.sql file and when it opens in the SQL Editor cut and paste the following definitions into it.

--/**
-- *
-- * Stored Procedure to Get a Customer ID from the CUSTOMER_INDEX Table
-- *
-- */
CREATE PROCEDURE "GetCustomerID" (OUT CUSTOMER_ID INTEGER)
BEGIN
BEGIN REQUEST
UPDATE CUSTOMER_INDEX SET CUSTOMER_ID=CUSTOMER_ID+1 WHERE LOCK_POINT=1
ELSE INSERT CUSTOMER_INDEX (1, 1);
SELECT CUSTOMER_ID INTO :CUSTOMER_ID FROM CUSTOMER_INDEX WHERE LOCK_POINT=1;
END REQUEST;
END;
/
COMMIT;
/
--/**
-- *
-- * Stored Procedure to Get a Property ID from the PROPERTY_INDEX Table
-- *
-- */
CREATE PROCEDURE "GetPropertyID" (OUT PROPERTY_ID INTEGER)
BEGIN
BEGIN REQUEST
UPDATE PROPERTY_INDEX SET PROPERTY_ID=PROPERTY_ID+1 WHERE LOCK_POINT=1
ELSE INSERT PROPERTY_INDEX (1, 1);
SELECT PROPERTY_ID INTO :PROPERTY_ID FROM PROPERTY_INDEX WHERE LOCK_POINT=1;
END REQUEST;
END;
/
COMMIT;
/
--/**
-- *
-- * Stored Procedure to Get a Quotation ID from the QUOTATION_INDEX Table
-- *
-- */
CREATE PROCEDURE "GetQuotationID" (OUT QUOTATION_ID INTEGER)
BEGIN
BEGIN REQUEST
UPDATE QUOTATION_INDEX SET QUOTATION_ID=QUOTATION_ID+1 WHERE LOCK_POINT=1
ELSE INSERT QUOTATION_INDEX (1, 1);
SELECT QUOTATION_ID INTO :QUOTATION_ID FROM QUOTATION_INDEX WHERE LOCK_POINT=1;
END REQUEST;
END;
/
COMMIT;
/

Drop the Get Index Stored Procedures

As before we add the appropriate lines to the DropTZA_Procedures.sql file to hold the drop statements.

The DropTZA_Procedures.sql file can then be built up by inserting the following code segment into it using copy and paste.

DROP PROCEDURE GetCustomerID;
COMMIT;

DROP PROCEDURE GetPropertyID;
COMMIT;

DROP PROCEDURE GetQuotationID;
COMMIT;

Run the Ant Build

Having added these new SQL statements we can return to the Ant View and expand the TZA-Database project. The Ant targets CreateTZA-Macros, CreateTZA-Procedures, DropTZA-Macros and DropTZA-Procedures can now be seen in the Ant view and these can be exercised individually if you require.

However, we configured the dependencies within DropTZA_DB and InitialiseTZA_DB so that a single Ant target (InitialiseTZA_DB) can be used to rebuild TZA_DB in a repeatable manner, so make sure to Double Click on this at some point before proceeding.

  • Finally in the Ant View Double Click on the InitialiseTZA_DB task to clear any existing data and rebuild your database.

Note that if you haven’t run the CreateTZA-Macros or CreateTZA-Procedures targets then, when the InitialiseTZA_DB target is first run, the DropTZA-Macros and DropTZA-Procedures targets may fail (as nothing has been created for them to Drop); however, the onerror setting in the SQL Task will let this pass. A subsequent run of InitialiseTZA_DB will run as follows:

Quotation Management Ant Build
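For reference, the tolerance for a failing DROP comes from the onerror attribute of Ant’s built-in sql task (valid values are abort, continue and stop); setting it to continue logs the failure but lets the build carry on. A minimal sketch of what such a target might look like is below; the driver class, URL and credentials are placeholders, not the project’s actual settings:

```xml
<target name="DropTZA-Procedures">
  <!-- onerror="continue" reports the DROP failure but lets the build
       proceed, so a first run against an empty database does not abort -->
  <sql driver="com.teradata.jdbc.TeraDriver"
       url="jdbc:teradata://dbserver/DATABASE=TZA_DB"
       userid="tza_user" password="tza_password"
       onerror="continue">
    DROP PROCEDURE GetCustomerID;
  </sql>
</target>
```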

So that is all of the database work required for Quotation Management completed: we have tables available to hold Customer, Property and Quotation information, with related Index Tables and Stored Procedures available to allocate Primary Key values for them in a tactical manner, and these are also future-proofed against a "High Availability" requirement.

In the next part of this Quotation Management Mini-Section we will develop the Data Access Layer necessary for the Quotation Management business logic to make use of these database elements. To do this we will use the iBatis Object Relational Mapping framework, and we will rely on the capabilities of the Teradata plug-in for Eclipse to do a lot of the work for us.