This video shows you how to run a sample AutoAI experiment to create a Watson Machine Learning model.

Start in a Watson Studio project, and add a new AutoAI experiment to that project. To run an AutoAI experiment, you'll need the Watson Machine Learning service. Here you have the option to associate a Watson Machine Learning service with this project: you can either create a new service instance or select an existing one. When you return to the page where you're creating the experiment, just reload the page, and you'll see the new service instance listed.

For this first experiment, you will select a sample. The Bank marketing sample contains data collected from phone calls to a bank in response to a marketing campaign. When you select a sample, the experiment name and description are filled in for you, so you're ready to create the experiment.

Next, the AutoAI experiment builder displays. Since this experiment is from a sample, the Bank marketing source file is already selected, and so is the column to predict. In this case, it's the Y column, which represents whether a user will sign up for a term deposit as part of the marketing campaign. Based on the data set and the selected column to predict, AutoAI analyzes a subset of the data and chooses a prediction type and a metric to optimize. In this case, since the column to predict contains values of Y or N for Yes or No, binary classification was chosen, the positive class is Yes, and the optimized metric is ROC AUC. ROC AUC measures the area under the receiver operating characteristic curve, which plots the true positive rate against the false positive rate across classification thresholds.

Now run the experiment, and wait as the pipeline leaderboard fills in. During AutoAI training, your data set is split into two parts: training data and holdout data. The training data is used by the AutoAI training stages to generate the model pipelines, and cross-validation scores are used to rank them. After training, the holdout data is used to evaluate the resulting pipeline models and to compute performance information such as ROC curves and confusion matrices. (The first sketch at the end of this transcript illustrates the split and the cross-validation ranking.)

Next, AutoAI generates pipelines using different estimators, such as the XGBoost classifier, and enhancements, such as hyperparameter optimization and feature engineering, with the pipelines ranked on the ROC AUC metric. Hyperparameter optimization is a mechanism for automatically exploring a search space of potential hyperparameters, building a series of models, and comparing them using metrics of interest (see the second sketch below). Feature engineering attempts to transform the raw data into the combination of features that best represents the problem, to achieve the most accurate prediction (see the third sketch below).

Okay! The run has completed. The legend explains where to find the data, top algorithm, pipelines, and feature transformers on the relationship map. You can view the full log to see complete details. By default, you'll see the relationship map, but you can swap views to see the progress map. Scroll down to take a look at the leaderboard. You may want to start by comparing the pipelines. This chart provides metrics for the four pipelines, viewed by cross-validation score or by holdout score. You can also view the pipelines ranked by other metrics, such as average precision. Back on the Experiment summary tab, you can select an individual pipeline to review the model evaluation, confusion matrix, precision-recall curve, model information, feature transformations, and feature importance. This pipeline had the highest ranking, so you can save it as a machine learning model.
Just accept the defaults, and save the model. Now view the model. From here, you can deploy the model; the final sketch below outlines one way to do this programmatically. Find more videos in the IBM Watson Data and AI Learning Center at http://ibm.biz/learning-centers.
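To make the training flow concrete, here is a minimal scikit-learn sketch of the split-and-rank idea described above: split off holdout data, rank candidate estimators by cross-validated ROC AUC, then score the top-ranked one on the holdout set. The file path, column names, and choice of estimators are illustrative assumptions; AutoAI performs these steps internally with its own pipelines.

```python
# Minimal sketch of AutoAI's split-and-rank stage, using scikit-learn in
# place of AutoAI's internal pipelines. The CSV path is a placeholder;
# the UCI bank marketing data uses ";" separators and a "y" target column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("bank.csv", sep=";")       # placeholder path: assumption
X = pd.get_dummies(df.drop(columns=["y"]))  # one-hot encode categoricals
y = (df["y"] == "yes").astype(int)          # positive class: Yes

# Split into training data and holdout data, as AutoAI does.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42
)

# Rank candidate estimators by cross-validated ROC AUC on the training data.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=3, scoring="roc_auc")
    print(name, scores.mean())

# After training, evaluate the top-ranked model on the holdout data.
best = candidates["forest"].fit(X_train, y_train)
print("holdout ROC AUC:", roc_auc_score(y_hold, best.predict_proba(X_hold)[:, 1]))
```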
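Hyperparameter optimization can be sketched with scikit-learn's RandomizedSearchCV, standing in for AutoAI's own optimizer: sample candidate hyperparameters from a search space, build a model for each, and compare them on the metric of interest. The search space below is an assumption chosen for illustration.

```python
# Sketch of hyperparameter optimization: explore a search space of candidate
# hyperparameters and compare the resulting models by ROC AUC.
# Reuses X_train and y_train from the previous sketch.
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

search_space = {                     # illustrative search space: assumption
    "n_estimators": randint(50, 400),
    "max_depth": randint(2, 12),
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=search_space,
    n_iter=20,                       # number of models built and compared
    scoring="roc_auc",               # the metric being optimized
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```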
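Feature engineering can likewise be sketched with standard transformers: derive new combinations of the raw features and check whether they improve the cross-validated score. The specific transformations here are assumptions for illustration; AutoAI selects its transformations automatically.

```python
# Sketch of feature engineering: derive new feature combinations from the
# raw columns and compare the cross-validated score against a plain baseline.
# Reuses X_train and y_train from the first sketch.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

engineered = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(engineered, X_train, y_train, cv=3, scoring="roc_auc")
print(scores.mean())   # compare against the un-engineered logreg baseline
```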
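For deploying the saved model programmatically rather than through the UI, a rough sketch with the ibm_watson_machine_learning Python client is shown below. The credentials, space ID, and model ID are placeholders, and metadata property names can vary across client versions, so treat this as an assumption-laden outline rather than the exact steps in the video.

```python
# Hypothetical sketch: deploying a saved AutoAI model with the
# ibm_watson_machine_learning Python client. All IDs and credentials
# below are placeholders you would replace with your own values.
from ibm_watson_machine_learning import APIClient

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # region endpoint: assumption
    "apikey": "YOUR_API_KEY",                    # placeholder
}
client = APIClient(credentials)
client.set.default_space("YOUR_SPACE_ID")        # deployment space: placeholder

deployment = client.deployments.create(
    artifact_uid="YOUR_MODEL_ID",                # ID of the saved model
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Bank marketing deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
print(deployment)  # the returned details include the deployment's status
```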