Entries Tagged as 'DMX'

Predixion Insight, now 1.2

Predixion Insight 1.2 shipped today, with quite a few new features: PMML support, data profiling, data normalization, Visual Macros for all the tools, VBA API integration, and many others.

The web site has a new look, and our support forums are now up and running: feel free to post your questions there, about Predixion Insight or about data mining in Excel in general.

Predixion Insight now has a splash screen with links to video tutorials, our online documentation, and a rich sample dataset.

One feature that I love in this release is Data Profiling. With a couple of clicks, it computes common statistics for all columns in your PowerPivot or Excel dataset and renders a report that lets you explore those statistics.

[image: Data Profiling report]

To try it out, go to Insight Analytics \ Explore Data and select Profile Data.

Like all other Predixion Insight tasks, this one lets you select an Excel range, an Excel table, or a PowerPivot table as the data source for profiling.

The second pane of the wizard lets you select the type of statistics to collect: Basic or Advanced. Advanced is a bit slower (think seconds instead of milliseconds), but much more useful.

The Data Profiling result is rendered in a new Excel sheet, with a few sections:

Data Summary

The first section, Data Summary, tells you what kinds of columns were detected in your data. It looks like this:

[image: Data Summary section]

CustomerID is identified as a likely key (a unique row identifier), so statistics will not be collected for this column.

Continuous Statistics

[image: Continuous Statistics section]

This section contains information about numeric columns. Here are the profile columns generated under Advanced analysis:

- # – the ordinal of the column in the source table. Useful if, after sorting by any other measure, you want to restore the table to its original layout

- Column Name

- “Looks Like” – the result of the Predixion Insight heuristic analysis of the column. A column may be regarded as Interval (a range of values), Multinomial (a few distinct values, hence categorical or discrete), Binomial (two values), or Constant (a single value)

- Count of Blanks, Minimum, Maximum, Mean, Sample Variance, Standard Deviation, Range, Kurtosis, Skewness, Standard Error of Mean, Approximate Median, Mode, and the 95% confidence interval (assuming a normal distribution). If you feel rusty about any of these metrics, just click on the column name and your browser will give you a friendly refresher; a quick formula version of two of them also follows right after this list
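For reference, two of these statistics in formula form (my notation, not the report's): the standard error of the mean is $SE = s/\sqrt{n}$, where $s$ is the sample standard deviation and $n$ is the number of rows, and the 95% confidence interval under the normality assumption mentioned above is $\bar{x} \pm 1.96 \cdot SE$.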

Discrete Statistics

[image: Discrete Statistics section]

Ordinal, Column Name, “Looks Like” and Count of Blanks appear again, just like in Continuous Statistics. The number of distinct states is added for discrete columns.

Other columns:

- Top 80% states – number of distinct states covering 80% of the data

- The top 3 most and least popular values, with their respective counts

The report can be filtered by column name. You may notice that certain columns appear in both the Continuous and the Discrete sections of the report. Such columns may have a numeric type but a distribution that suggests they should be regarded as categorical. The Data Profile report should help you decide whether to use Classify or Estimate for such columns. (If in doubt, this may help: no model for Constant, Estimate for Interval, Classify for all others.)

Correlation/Covariance

[image: Correlation/Covariance section]

For numeric columns, this section presents the correlation and covariance matrices. While not very visible in this dataset, the correlation matrix uses a heat map from 1 (green, perfect positive correlation) to –1 (red, perfect negative correlation); white cells show no correlation (a value of 0). The correlation matrix can be sorted by correlation for each individual column, and sorting it by the column index, #, restores the original shape of the matrix.
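As a quick refresher, correlation is just covariance rescaled by the two standard deviations, $\mathrm{corr}(X,Y) = \mathrm{cov}(X,Y) / (\sigma_X \sigma_Y)$, which is why correlation is bounded by ±1 and lends itself to a fixed heat map, while covariance is not.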

Data Mining for the Cloud (or how I spent my summer)

This week, at the KDD (Knowledge Discovery and Data Mining) conference, we (as in the Microsoft SQL Server Data Mining team) presented the Table Analysis Tools for the Cloud, a preview of a technology that enables anybody to play with some of Microsoft’s data mining tools, without any bulky downloads and with zero configuration effort.

Around May this year I practically entered some sort of sabbatical: 3 months to work on an incubation project of my choice (yes, the Microsoft SQL Server organization does this kind of thing! If it sounds appealing, check out our recruiting site or, even better, contact our SQL Server Data Mining recruiter directly: Melsa Clarke - melsac AT microsoft DOT com). With some help from Jamie, various management levels, and some nice guys on the SQL Server Data Services team, I gathered the infrastructure for a Software as a Service incubation and set up a web incarnation of the Table Analysis Tools add-in for Excel.

Now it is up and running. The entry page is at sqlserverdatamining.com/cloud, so if I have bored you already and you don’t want to read the rest of this stuff, go ahead and browse that page.

[image: Table Analysis Tools for the Cloud entry page]


What it is

TAT Cloud is a set of canned data mining tasks that you can use without having SQL Server installed on your machine. It consists of encapsulations of some common data mining problems, such as detecting key influencers, forecasting, generating predictive scorecards, or doing market basket analysis. The tasks can be executed directly from your browser: just go to the web page, upload your data (in CSV format), and run a tool from the toolbar. Even better, the tasks can be executed directly from Excel. For this, however, you will need Excel 2007 and an add-in which can be downloaded from here.

All tasks work on a table (an Excel table, or a table in CSV format that you upload to the web interface). All tasks produce reports that can be used to learn more about the analyzed data.

Here is a complete list of features:

- Analyze Key Influencers — detects the columns that impact your target column and presents a report of the values in other columns that correlate strongly with values in your target column
- Detect Categories (clustering, for data miners) — identifies groups of table rows that share similar characteristics. A categories report is generated, which details the characteristics of each category
- Fill From Example — to some extent, similar to Excel’s Autofill feature: it learns from a few examples and extends the learned patterns to the remaining rows in the table
- Forecasting — analyzes vertical series of numeric data, detects periodicity, trends, and correlations between series, and produces a forecast for those series
- Highlight Exceptions — finds the interesting (or unusual, or out-of-the-ordinary) rows in your table
- Scenario Analysis — What-If and Goal-Seek tools based on a probabilistic model built on top of your data
- Prediction Calculator — a tool for generating prediction scorecards
- Market Basket Analysis — analyzes transaction tables to identify groups of items that appear together in transactions

All features work from Excel. However, not all the features are implemented in the web interface. They will show up eventually!

A really nice presentation of how to use the tools was written by Brent Ozar on SQLServerPedia. Thanks, Brent!


How it works

Your data (CSV file or Excel spreadsheet) is uploaded to the web service site. There, Analysis Services crunches it and produces the reports you get, either in the browser or in a different spreadsheet. The data and the mining model used for analyzing it are deleted immediately after processing. If something bad happens and your session does not conclude successfully (fancy wording for "if it crashes"), then both data and models are removed automatically after 15-20 minutes.

You will notice in the Excel add-in (as well as in the web interface) a strange IP address, 131.107.181.99 — this is the IP address of the service. It will be changed to something cleaner very soon.

Excel uses HTTPS to connect to the service, in an effort to protect your data. However, you should not use this technology on sensitive data.


What it is not

This is not an official shipping Microsoft technology. It is actually less than even a beta. It may crash, it may produce incorrect results, it may be shut down at any time. BTW, I would appreciate it if you posted a note here or on the Microsoft data mining forums if this happens (particularly about crashes; I should probably know if it gets shut down)!


What next

Well, try the tools: sqlserverdatamining.com/cloud. I recommend downloading the Excel 2007 add-in rather than using the web application, as it is more functional. Post any questions you might have on the Microsoft data mining forums. And check out this blog periodically for announcements (typically new additions to the web interface) or for more details on how this stuff works.

DMX in 2008: Model Filtering (Part 2 of 2)

This second post on model filtering deals with nested table filters. The most common use of nested tables in SQL Server Data Mining is to model transaction tables. That is, model Customer entities together with the “bag” of products they purchased.

Nested table filters primarily serve two purposes:

- Use (in the model) only those nested table rows with a certain property (e.g., consider only products that are not “Coca Cola”, because most people do buy “Coca Cola”, so it is not that interesting)
- Use (in the model) only those cases whose associated nested tables have certain properties (e.g., build a prediction model only on customers who have purchased “Coca Cola”)

Notice the difference between the filters: while the first filter removes products from shopping baskets before modeling, the second removes customers from the model (those that have not purchased Coca Cola).
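To make this concrete, here is a DMX sketch of both filter flavors (the structure, model, and column names are hypothetical):

-- Flavor 1: filter the nested rows - model the baskets, but ignore Coca Cola purchases
ALTER MINING STRUCTURE [Market Basket]
ADD MINING MODEL [Baskets Without Cola]
(
    [Customer ID],
    [Products] PREDICT
    (
        [Product]
    ) WITH FILTER ([Product] <> 'Coca Cola')
)
USING Microsoft_Association_Rules

-- Flavor 2: filter the cases - model only customers who did purchase Coca Cola
ALTER MINING STRUCTURE [Market Basket]
ADD MINING MODEL [Cola Buyers]
(
    [Customer ID],
    [Products] PREDICT
    (
        [Product]
    )
)
USING Microsoft_Association_Rules
WITH FILTER (EXISTS (SELECT * FROM [Products] WHERE [Product] = 'Coca Cola'))

The first statement trims rows inside each shopping basket before training; the second keeps or drops entire customers based on the contents of their nested table.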

[Read more →]

DMX in 2008: Model Filtering (Part 1 of 2)

This is the first post in a small series describing my favorite data mining feature in SQL Server 2008, model filtering. Filtering allows you to take full advantage of the mining structure / mining model dichotomy, separating the data staging (in the structure) from the modeling (in the mining model).

SQL Server 2008 allows specifying, for each mining model, a filter to be applied to the training data. The filter acts as a partitioning mechanism inside the mining structure, and it is applied on top of any training/testing partitioning already existing at the structure level. Furthermore, filters can be applied to data in nested tables, but more about that in the next post. Filters allow you to build different models for various partitions of the data with minimal effort, either to obtain better overall accuracy (think of it as manually specifying the first split in a decision tree) or to compare the patterns between different partitions.
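To give a quick taste of the syntax before the walkthrough, here is a minimal sketch of attaching a filtered model to an existing structure (the structure and column names are hypothetical):

ALTER MINING STRUCTURE [Customers]
ADD MINING MODEL [Pacific Customers]
(
    [Customer Key],
    [Age],
    [Region],
    [Bike Buyer] PREDICT
)
USING Microsoft_Decision_Trees
WITH FILTER ([Region] = 'Pacific')

Only the structure rows matching the filter expression are used to train this model; a sibling model in the same structure can use a different filter (or none at all) over the same staged data.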

So, let’s start with a simple scenario, using the same old Adventure Works data - building a mining model that predicts how likely a customer is to purchase a bike based on various demographic information. Let’s also assume that you know, based on your experience with the data, that the Geography factor plays an important role, and you don’t want your North American patterns to hide (because of volume) the peculiarities of the Pacific market.

Suppose you start with a mining structure defined like below:

[Read more →]

(More) DMX features in 2008: Better use of structure columns in models

In a previous post I presented some of the new DMX features in the November CTP of SQL Server 2008. So, here are some new cool DMX tricks.

Assume you want to use Microsoft Association Rules and Microsoft Decision Trees on the same data. Also, assume that the data contains one numeric column (say, Age). You may have noticed that Decision Trees supports continuous columns, while Association Rules does not. Not a big deal - we have discretization, and Age can be added twice to the same mining structure as, say, Age (continuous) and Age Disc (discretized). However, the two different names raise a problem in the case of NATURAL PREDICTION JOIN (where input columns are bound by name to the model columns).

In SQL Server 2005, the mining model columns typically had the same name as the mining structure’s. There was no way in DMX to change the model column names (well, there was a way in BI Dev Studio).

Another problem: one would not include the email, name, or phone number of a customer in a mining model because, at best, this would increase training time and, in the worst case, would unnecessarily complicate the model with fake patterns. But this makes it hard to link the training cases leading to one pattern (available with the drillthrough feature) to information that would perhaps make that pattern actionable (like contact information, if the pattern suggests a strong probability of buying a product).

In 2008, these problems are much easier to solve, with model column aliasing and structure column drillthrough, and here is how they work.
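As a preview, here is a rough sketch of both features in DMX (the structure, model, and column names are hypothetical). The AS clause gives the model column its own name, and the StructureColumn function retrieves, at drillthrough time, a structure column that was never part of the model:

ALTER MINING STRUCTURE [Customers]
ADD MINING MODEL [Customer Trees]
(
    [Customer Key],
    [Age Disc] AS [Age],  -- model column named Age, bound to the discretized structure column
    [Bike Buyer] PREDICT
)
USING Microsoft_Decision_Trees
WITH DRILLTHROUGH

-- fetch a structure-only column (a hypothetical Email) along with the model's training cases
SELECT StructureColumn('Email'), * FROM [Customer Trees].CASES

With the alias in place, a NATURAL PREDICTION JOIN against either model binds to the same Age input column name.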

[Read more →]

New DMX features in 2008 : Test and Training data

As Raman mentioned in the Data Miner newsletter, the November CTP of SQL Server 2008 is now available.

It includes many really cool new features in Analysis Services. Among them: Holdout support, Model filtering, DMX Column aliasing, Drillthrough enhancements, Cross validation and practically two new forecasting algorithms under the Microsoft_Time_Series umbrella.
I intend to present all of them briefly, and I start today with the Holdout support.

Most data mining tasks require a validation step, performed right after modeling. This validation step typically consists of evaluating the model’s performance against data that was not seen during training (test data).

The test data ideally has the same statistical properties as the training data (the data seen by the mining model during training). An easy way to achieve statistical similarity between the training and test sets is random sampling. This method is not guaranteed to give correct results (statistically similar populations) but, assuming that the random sampling mechanism is independent of the data, it will work in most common scenarios.

SQL Server Integration Services has a Percentage Sampling transform, which extracts a random sample (a certain percentage of rows) using a mechanism independent of the actual data being sampled. This is why we strongly recommended using Integration Services to generate test/training partitions for SQL Server Data Mining.

However, there are a few problems:
- Integration Services has to save at least one of the sample sets in a relational table (or some other form of output destination)
- This sampling method can only be applied to data coming from a relational source (or, in general, a source that can be used with IS). That makes it difficult to use IS sampling with applications that do data mining on in-memory data
- Integration Services is rather hard to use to sample data for models with nested tables. It can be done, but it takes around 11 simple steps and 14 transforms :-) for a single nested table (an example is available here: Sampling Nested Tables)

Now there is a simpler way to do this. You may remember that, in the SQL Server Data Mining architecture, a Mining Structure object acts as a data space, while a mining model is a problem to be addressed in that data space. As the mining structure describes the data space, it is natural for the structure to partition the data into training and testing sets.

In SQL Server 2008, the new wizard for creating a model (or structure — yes, there is a wizard now for creating a structure with no models!) allows specifying the desired size of the “holdout” dataset — that is, data to be stored in the mining structure for testing purposes, without being available for model training. By default, the holdout size is 30% (leaving 70% for training). You may choose to specify a fixed number of rows instead of a percentage, or both (in which case the semantics are “use 30%, but no more than 10000 rows, for test data”).
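As a quick preview of that DMX, a structure with a holdout specification might look like the following sketch (hypothetical names; the clause combines a percentage with a row cap, matching the "use 30%, but no more than 10000 rows" semantics above):

CREATE MINING STRUCTURE [Customers]
(
    [Customer Key] LONG KEY,
    [Age] LONG CONTINUOUS,
    [Region] TEXT DISCRETE,
    [Bike Buyer] LONG DISCRETE
)
WITH HOLDOUT (30 PERCENT OR 10000 CASES)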

The rest of this post shows how to express the holdout wizardry in DMX.

[Read more →]

Excel data mining cell functions

In the previous post I presented a not-so-documented cell function installed in Excel by the data mining add-ins. Jamie also gave some cool demos featuring these functions. To complete the topic, here is a full description of the cell functions and how they can be used.

The Excel Data Mining Client add-in includes 3 cell functions: DMPREDICT, DMPREDICTTABLEROW, and DMCONTENTQUERY. These functions are included in the add-ins for SQL Server 2005, but are not documented or supported there. However, they will be both documented and supported in the add-ins for SQL Server 2008.

To use these functions, you will need a mining model available on the server. For the examples below, I created a Microsoft Decision Trees mining model by running the Classify task in the DM Client add-in over the Table Analysis Tools Sample spreadsheet in the sample workbook installed by the add-ins.
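Before going through the functions one by one, it helps to know what they do conceptually: a DMPREDICT call executes, behind the scenes, the equivalent of a DMX singleton prediction query against the server model. A rough sketch of such a query, with a hypothetical model name and hypothetical input columns, would be:

SELECT Predict([Purchased Bike])
FROM [Classify Purchased Bike]
NATURAL PREDICTION JOIN
(SELECT 35 AS [Age], 'Skilled Manual' AS [Occupation]) AS t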

[Read more →]

Regression accuracy: Excel’s regression vs. the SSDM algorithm

A recent post on the MSDN Forums raised an interesting issue: Excel Data Analysis’s Linear Regression and SSDM were returning different results. Specifically, the SSDM results were much worse.

The issue turned out to be a data modeling one (columns were not mapped properly). However, during the investigation I had to compare Excel’s linear regression with the SSDM regression algorithm(s). I thought this might be interesting, so here is one way to compare the results from the two implementations.

I started with some simple (X,Y) data (available for download as a CSV file). First step - run Excel’s Data Analysis regression tool. The results are typically displayed in a separate spreadsheet, and the interesting part is the Coefficients section:

[image: Excel Data Analysis regression output, Coefficients section]

Therefore, the Excel regression formula is
Y = 2.37486095661847 * X - 0.310344827586214

Next thing — apply Excel’s regression coefficients to the existing data. I did this by adding a column in my data spreadsheet and populating it with a formula:
[image: the data spreadsheet with a new column applying the regression coefficients]

Next, I created a Microsoft Linear Regression mining model on the same data. There are a variety of ways to do this, such as exporting the data to a table, connecting directly to the Excel spreadsheet or, the simplest way, using the Excel add-ins.

To get the model’s predictions in Excel, I used one of the functions exposed by the Excel add-ins, DMPREDICT. If you do not have the add-ins, you can always execute a prediction query in SQL Server Management Studio or BI Dev Studio (a sketch of such a query follows below).
However, with the add-in’s function, getting the prediction results is really easy:
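For the Management Studio route mentioned above, the equivalent DMX singleton query would look roughly like this (assuming the model is named [XY Linear Regression] and the columns are named X and Y, as in the CSV):

SELECT Predict([Y])
FROM [XY Linear Regression]
NATURAL PREDICTION JOIN
(SELECT 10.0 AS [X]) AS t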

[Read more →]