Blog
The best minds from Teradata, our partners, and customers blog about relevant topics and features.
343 Views
0 Comments

A new capability available in 16.10 allows you to specify which map a table operator will use when it executes. The capability is activated by adding an "EXECUTE MAP = map-name" clause to the definition of the table operator, or to the SQL used to execute the table operator. This posting describes the capability and its value when executing table operators.
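
For illustration, here is a minimal sketch of what the execution-time form could look like. The operator, map, and table names are placeholders, and the exact placement of the clause may vary by release, so treat this as an assumption to verify against the 16.10 SQL documentation rather than definitive syntax.

-- Hypothetical names throughout; verify clause placement for your release.
-- The EXECUTE MAP clause can also be included in the table operator's
-- definition so that the operator always runs on that map.
SELECT *
FROM my_table_operator (
       ON (SELECT txn_id, txn_amount FROM sales_detail)
     ) AS d
EXECUTE MAP = SparseMap_1Amp;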

Read more...

523 Views
2 Comments

Several new fields have been added to DBQLogTbl in the 16.20 software release. This posting describes the new data and how it can be useful to you.
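
If you want to see what your own DBQLogTbl contains after the upgrade, a simple data dictionary query lists its current columns, which makes newly added fields easy to spot. This is just a convenience check, not something taken from the posting itself.

-- List the columns of the DBQL log table; new 16.20 fields appear here
-- once the upgrade is in place.
SELECT ColumnName, ColumnType, ColumnLength
FROM DBC.ColumnsV
WHERE DatabaseName = 'DBC'
  AND TableName = 'DBQLogTbl'
ORDER BY ColumnId;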

Read more...

626 Views
2 Comments

If you are using block-level compression (BLC), you may have glanced over the list of 40+ DBS Control parameters that influence how this feature behaves on your platform. Be aware that most of these parameters come with reasonable defaults for your system, and with some exceptions, you are not encouraged to experiment with them.

 

One parameter that might be particularly tempting to change is called CompressionLevel. This posting describes what the compression level parameter is and why you do not want to casually start increasing it.

 

This posting applies to Teradata Database 16.10 and greater releases.

Read more...

566 Views
0 Comments

Early versions of Teradata Express included some small sample databases, but those were removed from later releases.

New Teradata users usually ask for existing tables to start playing around with, so to make that easy I published several data sets on GitHub.

Read more...

1933 Views
8 Comments

This posting describes two approaches available today for better understanding platform I/O utilization: OutReqTime and IOTAs. Both are found in ResUsage tables.
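
As a rough illustration of the IOTA approach, a query along the following lines turns the IOTA counters into a utilization percentage. The column names UsedIota and FullPotentialIota in ResUsageSpma are assumptions and should be verified against the ResUsage documentation for your release.

-- Approximate I/O utilization per node and logging interval, expressed as
-- the share of full-potential IOTAs actually consumed.
-- NOTE: IOTA column names are assumed; confirm them in your ResUsage docs.
SELECT TheDate,
       TheTime,
       NodeID,
       UsedIota,
       FullPotentialIota,
       100.00 * UsedIota / NULLIFZERO(FullPotentialIota) AS PctIOUtilization
FROM DBC.ResUsageSpma
WHERE TheDate = CURRENT_DATE
ORDER BY TheTime, NodeID;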

Read more...

483 Views
0 Comments

Parsing time is often an afterthought, but sometimes it becomes a concern because parsing takes longer than expected. Whether or not parsing time is an issue for you, there are small benefits to be gained by creating a special workload to manage the parsing activity for all your queries. This blog posting explains why, and how to set that up on your system.

Read more...

409 Views
0 Comments

Back in 14.10, some new fields related to Capacity on Demand (COD) and other types of hard limits were quietly added to the ResUsage tables. You might be interested in these fields today because they report how long queries were delayed due to hard limit enforcement. This can give you insight into whether a hard limit, such as WM COD, is ever being reached, and its actual impact on platform activity when it is.

Read more...

1836 Views
7 Comments

Temperature-based block-level compression (TBBLC), also known as "compress on cold," was introduced in Teradata Database 14.0. TBBLC automatically compresses infrequently accessed data and decompresses frequently accessed data. Access frequency is determined by statistics collected by Teradata Virtual Storage (TVS). TBBLC may be applied on a table-by-table basis or, by default, on all user tables.

 

This blog posting provides a little background and describes the easiest way to enable TBBLC on just a few selected tables in your database.
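
As a hint of what table-level enablement looks like, here is a minimal sketch using the BLOCKCOMPRESSION table option with the AUTOTEMP setting. The table and column names are placeholders; check the CREATE TABLE documentation for your release before relying on the exact syntax.

-- Placeholder table illustrating temperature-based BLC on a single table.
-- BLOCKCOMPRESSION = AUTOTEMP lets TVS temperature drive the decision:
-- cold data blocks are compressed, and blocks that warm up are decompressed.
CREATE TABLE sales_history,
     BLOCKCOMPRESSION = AUTOTEMP
( sale_id    INTEGER,
  sale_date  DATE FORMAT 'YYYY-MM-DD',
  amount     DECIMAL(12,2)
)
PRIMARY INDEX (sale_id);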

Read more...

653 Views
0 Comments

If I told you there was a way you might be able to speed up parsing time for your queries, would you be interested?  

 

If yes, consider "expediting" express requests. This blog posting explains what that means, how it works, and when it can help you. The goal of expediting express requests is to get the best performance you can from parsing when the system is being fully utilized. 

 

Note: This is a rewrite of an earlier blog posting of a similar name ("Expediting Express Requests") that I posted back in 2014. Enhancements have been made to this capability since then, so it's worth giving this posting a read even if you are familiar with how the option used to work.

Read more...

560 Views
0 Comments

[Image: GenomeWorldWindow-StephenBrobst-Web-650.png]

 

About the Insights

This data visualization shows genetic variations (and similarities) across multiple human populations and geographies using data from the 1000 Genomes Project. Each frame shows a different community or geography within the 1000 Genomes Project and was built from the pure genome data. The observer can clearly see the variations between communities, demonstrating that large-scale genome data provides clear insight into geographic communities across the globe. The goal of the project is to prove the value of large-scale genome analytics, using high-intensity super graphic methods, to better understand the genetic patterns of cancer and how to develop personalized medical treatments aligned to the genetic composition of individuals.

 

About the Analytics

This visualization shows a collection of Quartal Super Graphics created by VizExplorer, sitting on top of a Teradata relational database and using query pushdown for large-scale data processing.

 

The large-scale processing starts by applying the quartal tree algorithm, an in-database recursive algorithm that processes the positional information of the entire 1000 Genomes population into a common hierarchical quartal grid. A database query is then used to build the subset of data for each of the corresponding communities within the total population. That subset of data is used to render the heatmap shown in each frame. Finally, the frames are assembled into a graphic made up of 'small multiples' so the pattern of sequence data can be observed across the communities in the entire 1000 Genomes Project. Genomic data is extremely large: a database of just 25,000 tumors implies over seventy-five trillion data records.

 

About the Analyst

Andrew is the Chief Technology Officer for VizExplorer. He holds a Bachelor of Surveying from Otago University and a Diploma of Computer Science from Victoria University. He is a cartographer by training and has created over 60 patents and inventions in the areas of cartography, data visualization, and high performance database design. He and his team have been awarded two Smithsonian laureates for heroism in information technology related to data visualization. Andrew has co-authored a book on mathematical gaming analytics as well as over sixty articles in areas related to data visualization and advanced analytics. Andrew was born and raised in the South Island of New Zealand. He now lives in California with his wife and four children.

 

Stephen Brobst

 

Stephen is the Chief Technology Officer for Teradata Corporation. Stephen performed his graduate work in Computer Science at the Massachusetts Institute of Technology, where his Master's and PhD research focused on high-performance parallel processing. He also completed an MBA with joint course and thesis work at the Harvard Business School and the MIT Sloan School of Management. During Barack Obama's first term he was also appointed to the Presidential Council of Advisors on Science and Technology (PCAST) in the working group on Networking and Information Technology Research and Development (NITRD). He was recently ranked by ExecRank as the #4 CTO in the United States (behind the CTOs of Amazon.com, Tesla Motors, and Intel) out of a pool of 10,000+ CTOs.

 

Stephen is the data guy. Andrew is the visualization guy. Together they have been teaching advanced data visualization for over ten years at The Data Warehousing Institute and in other forums. Included in this course is a deep examination of the patterns in the genomics super graphic. Stephen and Andrew are both avid fans of the outdoors and have backpacked together in New Zealand and across the USA.
