In this tutorial, build a machine learning application to predict turbofan engine degradation. This application is structured into three important steps:
Prediction Engineering
Feature Engineering
Machine Learning
In the first step, create new labels from the data by using Compose. In the second step, generate features for the labels by using Featuretools. In the third step, search for the best machine learning pipeline using EvalML. After working through these steps, you should understand how to build machine learning applications for real-world problems like predicting remaining useful life.
[1]:
from demo.turbofan_degredation import load_sample
from matplotlib.pyplot import subplots
import composeml as cp
import featuretools as ft
import evalml
Use a dataset provided by NASA simulating turbofan engine degradation. The dataset contains data about engines that have been monitored over time. Each engine had operational settings and sensor measurements recorded over a number of cycles. The remaining useful life (RUL) is the number of cycles an engine has left before it needs maintenance. What makes this dataset special is that the engines run all the way until failure, giving you precise RUL information for every engine at every point in time. The model you build in this tutorial predicts RUL.
[2]:
df = load_sample()
df.head()
5 rows × 27 columns
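Since the engines in this sample run until failure, you can derive the RUL directly from the recorded cycles. The following is a minimal sketch (not part of the original notebook) that counts the observations per engine; it assumes the sample contains the engine_no column used later when building the entity set.

cycles_per_engine = df.groupby('engine_no').size()

# At its first observation, an engine's RUL equals the number of
# remaining observations, which is the total count minus one.
print(cycles_per_engine - 1)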
Which range is the RUL of a turbofan engine in?
In this prediction problem, you want to group the RUL data into ranges, then predict which range the RUL falls in. You can vary the ranges to create different prediction problems. For example, the ranges could be manually defined (0 - 150, 150 - 300, etc.) or based on the quartiles of historical observations. Binning the RUL in different ways lets you explore scenarios that are crucial for making better maintenance decisions, as sketched below.
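To get a feel for the two approaches before running any searches, here is a minimal pandas sketch with hypothetical RUL values; the actual labels are binned later in this tutorial with the label times object.

import pandas as pd

rul_values = pd.Series([10, 45, 80, 120, 160, 290])  # hypothetical historical RULs

# Manually defined ranges.
manual_ranges = pd.cut(rul_values, bins=[0, 150, 300])

# Ranges based on the quartiles of the observed values.
quartile_ranges = pd.qcut(rul_values, q=4)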
Let’s start by defining a labeling function that calculates the RUL of an engine. Given that the engines run all the way until failure, the RUL is simply the number of remaining observations. This labeling function will be used by a label maker to extract the training examples.
[3]:
def rul(ds):
    return len(ds) - 1
Represent the prediction problem by creating a label maker with the following parameters:
The target_entity as the column for the engine ID, since you want to process records for each engine.
The labeling_function as the function you defined previously.
The time_index as the column for the event time.
[4]:
lm = cp.LabelMaker(
    target_entity='engine_no',
    labeling_function=rul,
    time_index='time',
)
Run a search to get the training examples by using the following parameters:
The records sorted by the event time, since the search expects the records to be sorted chronologically. Otherwise, an error occurs.
num_examples_per_instance as the number of training examples to find for each engine.
minimum_data as the amount of data to use to make features for the first training example.
gap as the number of rows to skip between examples. This is done to cover different points in time of an engine.
You can easily tweak these parameters and run more searches for training examples as the requirements of your model change.
[5]:
lt = lm.search(
    df.sort_values('time'),
    num_examples_per_instance=20,
    minimum_data=5,
    gap=20,
    verbose=False,
)

lt.head()
The output from the search is a label times table with three columns:
The engine ID associated with the records. There can be many training examples generated from each engine.
The event time of the engine. This is also known as a cutoff time for building features. Only data that existed before the cutoff time is valid to use for predictions (see the sketch after this list).
The value of the RUL. This is calculated by the labeling function.
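To make the cutoff time concrete, here is a minimal sketch (assuming the label times columns are named engine_no and time, as produced by the search above) that selects the only records allowed when building features for the first label:

first_label = lt.iloc[0]

# Records for this engine that existed strictly before the cutoff time;
# only these rows may be used to compute its features.
valid_records = df[
    (df['engine_no'] == first_label['engine_no'])
    & (df['time'] < first_label['time'])
]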
At this point, you only have continuous values of the RUL. As a helpful reference, you can print out the search settings that were used to generate these labels.
[6]:
lt.describe()
Label Distribution
------------------
count     22.000000
mean      75.045455
std       43.795496
min        6.000000
25%       37.750000
50%       74.000000
75%      111.250000
max      153.000000


Settings
--------
gap                                 20
minimum_data                         5
num_examples_per_instance           20
target_column                      rul
target_entity                engine_no
target_type                 continuous
window_size                       None


Transforms
----------
No transforms applied
You can also get a better look at the values by plotting the distribution and the cumulative count across time.
[7]:
%matplotlib inline
fig, ax = subplots(nrows=2, ncols=1, figsize=(6, 8))
lt.plot.distribution(ax=ax[0])
lt.plot.count_by_time(ax=ax[1])
fig.tight_layout(pad=2)
With the continuous values, you can explore different ranges without running the search again. In this case, use quartiles to bin the values into ranges.
[8]:
lt = lt.bin(4, quantiles=True, precision=0)
When you print out the settings again, you can now see that the description of the labels has been updated and reflects the latest changes.
[9]:
lt.describe()
Label Distribution
------------------
(5.0, 38.0]       6
(38.0, 74.0]      5
(74.0, 111.0]     5
(111.0, 153.0]    6
Total: 22


Settings
--------
gap                                 20
minimum_data                         5
num_examples_per_instance           20
target_column                      rul
target_entity                engine_no
target_type                   discrete
window_size                       None


Transforms
----------
1. bin
  - bins: 4
  - labels: None
  - precision: 0
  - quantiles: True
  - right: True
Look at the new label distribution and cumulative count across time.
[10]:
fig, ax = subplots(nrows=2, ncols=1, figsize=(6, 8))
lt.plot.distribution(ax=ax[0])
lt.plot.count_by_time(ax=ax[1])
fig.tight_layout(pad=2)
In the previous step, you generated the labels. The next step is to generate features.
Let’s start by representing the data with an entity set. That way, you can generate features based on the relational structure of the dataset. You currently have a single table of records where one engine can have many records. This one-to-many relationship can be represented by normalizing an engine entity. The same can be done for other one-to-many relationships. Because you want to make predictions based on the engine, you should use this engine entity as the target entity for generating features.
[11]:
es = ft.EntitySet('observations')

es.entity_from_dataframe(
    dataframe=df.reset_index(),
    entity_id='records',
    index='id',
    time_index='time',
)

es.normalize_entity(
    base_entity_id='records',
    new_entity_id='engines',
    index='engine_no',
)

es.normalize_entity(
    base_entity_id='records',
    new_entity_id='cycles',
    index='time_in_cycles',
)

es.plot()
Now you can generate features by using a method called Deep Feature Synthesis (DFS). That method automatically builds features by stacking and applying mathematical operations called primitives across relationships in an entity set. The more structure an entity set has, the better DFS can leverage the relationships to generate useful features. Run DFS with these parameters:
entity_set as the entity set you structured previously.
target_entity as the engine entity.
cutoff_time as the label times that you generated previously. The label values are appended to the feature matrix.
[12]:
fm, fd = ft.dfs(
    entityset=es,
    target_entity='engines',
    agg_primitives=['sum'],
    trans_primitives=[],
    cutoff_time=lt,
    cutoff_time_in_index=True,
    include_cutoff_time=False,
    verbose=False,
)

fm.head()
5 rows × 25 columns
There are two outputs from DFS: a feature matrix and feature definitions. The feature matrix is a table that contains the feature values with the corresponding labels based on the cutoff times. The feature definitions are a list of features that can be stored and reused later to calculate the same set of features on future data.
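For example, here is a minimal sketch of persisting the feature definitions so they can be reapplied to new data in a later session (the file name is hypothetical):

# Save the feature definitions to disk, then reload them elsewhere.
ft.save_features(fd, 'feature_definitions.json')
fd = ft.load_features('feature_definitions.json')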
In the previous steps, you generated the labels and features. The final step is to build the machine learning pipeline.
Start by extracting the labels from the feature matrix and splitting the data into a training set and a holdout set.
[13]:
y = fm.pop('rul').cat.codes

splits = evalml.preprocessing.split_data(
    X=fm,
    y=y,
    test_size=0.2,
    random_state=2,
    problem_type='multiclass',
)

X_train, X_holdout, y_train, y_holdout = splits
Run a search on the training set to find the best machine learning model. During the search process, several different pipelines are evaluated to find the best one.
[14]:
automl = evalml.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type='multiclass',
    objective='f1 macro',
    random_state=0,
    allowed_model_families=['catboost', 'random_forest'],
    max_iterations=3,
)

automl.search(
    data_checks='disabled',
    show_iteration_plot=False,
)
Generating pipelines to search over...
*****************************
* Beginning pipeline search *
*****************************

Optimizing for F1 Macro.
Greater score is better.

Searching up to 3 pipelines.
Allowed model families: random_forest, catboost

(1/3) Mode Baseline Multiclass Classificati... Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean F1 Macro: 0.113
High coefficient of variation (cv >= 0.2) within cross validation scores. Mode Baseline Multiclass Classification Pipeline may not perform as estimated on unseen data.
(2/3) Random Forest Classifier w/ Imputer      Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean F1 Macro: 0.678
(3/3) CatBoost Classifier w/ Imputer           Elapsed:00:02
        Starting cross validation
        Finished cross validation - mean F1 Macro: 0.567
High coefficient of variation (cv >= 0.2) within cross validation scores. CatBoost Classifier w/ Imputer may not perform as estimated on unseen data.

Search finished after 00:03
Best pipeline: Random Forest Classifier w/ Imputer
Best pipeline F1 Macro: 0.677778
Once the search is complete, you can print out information about the best pipeline found, like the parameters in each component.
[15]:
automl.best_pipeline.describe()
automl.best_pipeline.graph()
***************************************
* Random Forest Classifier w/ Imputer *
***************************************

Problem Type: multiclass
Model Family: Random Forest
Number of features: 24

Pipeline Steps
==============
1. Imputer
         * categorical_impute_strategy : most_frequent
         * numeric_impute_strategy : mean
         * categorical_fill_value : None
         * numeric_fill_value : None
2. Random Forest Classifier
         * n_estimators : 100
         * max_depth : 6
         * n_jobs : -1
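The best pipeline is not the only thing worth inspecting. If you want to compare every pipeline evaluated during the search, the AutoMLSearch object keeps a rankings table; a brief sketch:

# DataFrame summarizing all pipelines evaluated during the search.
automl.rankings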
Score the model performance by evaluating predictions on the holdout set.
[16]:
best_pipeline = automl.best_pipeline.fit(X_train, y_train)

score = best_pipeline.score(
    X=X_holdout,
    y=y_holdout,
    objectives=['f1 macro'],
)

dict(score)
{'F1 Macro': 0.7}
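The F1 macro score summarizes performance across all four ranges in a single number. For a per-range breakdown, one option is a quick sketch with scikit-learn (not part of the original pipeline):

from sklearn.metrics import classification_report

# Per-range precision, recall, and F1 on the holdout set. Depending on
# the EvalML version, the predictions and holdout labels may need to be
# converted to pandas series first (as done in the prediction step below).
holdout_pred = best_pipeline.predict(X_holdout).to_series()
print(classification_report(y_holdout.to_series(), holdout_pred))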
From the pipeline, you can see which features are most important for predictions.
[17]:
feature_importance = best_pipeline.feature_importance
feature_importance = feature_importance.set_index('feature')['importance']

top_k = feature_importance.abs().sort_values().tail(20).index
feature_importance[top_k].plot.barh(figsize=(8, 8), fontsize=14, width=.7);
<AxesSubplot:ylabel='feature'>
You are ready to make predictions with your trained model. Start by calculating the same set of features by using the feature definitions. Use a cutoff time based on the latest information available in the dataset.
[18]:
fm = ft.calculate_feature_matrix(
    features=fd,
    entityset=es,
    cutoff_time=ft.pd.Timestamp('2001-01-08'),
    cutoff_time_in_index=True,
    verbose=False,
)

fm.head()
3 rows × 24 columns
Now predict which one of the four ranges the RUL is in.
[19]:
y_pred = best_pipeline.predict(fm)
y_pred = y_pred.to_series().values

prediction = fm[[]]
prediction['rul (estimate)'] = y_pred
prediction.head()
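The predictions are the integer category codes that the model was trained on. If you want the original RUL ranges back, here is a minimal sketch; it assumes the binned label column in lt is categorical, which is what the bin transform above produces.

# Map each predicted code back to the corresponding RUL range.
ranges = lt['rul'].cat.categories
prediction['rul range (estimate)'] = [ranges[code] for code in y_pred]
prediction.head()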
You have completed this tutorial. You can revisit each step to explore and fine-tune the model using different parameters until it is ready for production. For more information about how to work with the features produced by Featuretools, take a look at the Featuretools documentation. For more information about how to work with the models produced by EvalML, take a look at the EvalML documentation.