What is Imputation and Implementation Techniques

In this blog, we will discuss what imputation is and how to implement common imputation techniques. If your dataset contains missing values, one option is simply to drop the rows or columns that contain them. This approach is strongly discouraged, however, because it shrinks the dataset and can cause the analysis to deviate from reality.

Instead, we should either use machine learning techniques that are unaffected by missing values or use imputers to fill in the gaps. An imputer is an estimator that completes a dataset’s missing values. For numerical features it can substitute the mean, the median, or a constant; for categorical features it uses the most frequent value or a constant. Additionally, you can train a model to predict the missing values.

What is Imputation?

Imputation is a statistical technique for estimating missing values in a dataset. When values are missing, imputation fills in the gaps using methods such as mean imputation, median imputation, or hot-deck imputation. One benefit of imputation is that it helps preserve the relationships between variables in a dataset, which matters for both descriptive and inferential statistics.

In addition, imputation can increase the power of statistical analyses by reducing the amount of missing data. There are drawbacks, however: imputation can introduce bias if the method is applied incorrectly, and imputed values are often less reliable than observed values, which can lead to inaccurate results. Overall, imputation is a useful tool for dealing with missing data. Missing values make a dataset hard to analyze or interpret; replacing them with estimates gives a complete dataset that is easier to work with and, when used correctly, improves the accuracy of statistical analyses.
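As a quick illustration of the simplest of these methods, mean imputation, here is a minimal sketch using pandas; the small DataFrame and its column names are invented for the example.

import numpy as np
import pandas as pd

# Toy data with missing entries (hypothetical example values)
df = pd.DataFrame({"age": [25, 30, np.nan, 40],
                   "salary": [50000, 60000, 55000, np.nan]})

# Mean imputation: replace each column's NaNs with that column's mean
df_imputed = df.fillna(df.mean())
print(df_imputed)

Median imputation is the same idea using df.median(), while hot-deck imputation instead copies the value from a similar observed record.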

Implementation of Imputation Techniques

The SimpleImputer class provides simple strategies for imputing missing values. Missing values can be imputed with the statistics (mean, median, or most frequent) of each column in which they occur, or they can be replaced with a constant value. This class also supports several missing-value encodings.

import numpy as np
from sklearn.impute import SimpleImputer

# Replace NaNs with the mean of each column (learned from the fitted data)
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit([[2, 3], [np.nan, 6], [4, 8]])  # column means: 3.0 and about 5.67

X = [[np.nan, 3], [6, np.nan], [4, 8]]
print(imp.transform(X))  # each NaN is replaced by its column's mean
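SimpleImputer also handles categorical data when the strategy is 'most_frequent' or 'constant'. Below is a minimal sketch in the spirit of the scikit-learn documentation; the small categorical DataFrame is invented for illustration.

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Categorical columns with missing entries (toy data)
df = pd.DataFrame([["a", "x"], [np.nan, "y"], ["a", np.nan], ["b", "y"]],
                  dtype="category")

# Fill each missing entry with the most frequent value in its column
imp = SimpleImputer(strategy="most_frequent")
print(imp.fit_transform(df))

With strategy="constant", the replacement value is supplied through the fill_value parameter instead.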

A more sophisticated approach is the IterativeImputer class, which models each feature with missing values as a function of the other features and uses that estimate for imputation. It works in a round-robin fashion: at each step, one feature column is designated as the output y and the remaining feature columns are treated as the input X. A regressor is fitted on (X, y) for the known values of y and is then used to predict the missing values of y. This is done for each feature in turn, and the whole process is repeated for max_iter imputation rounds. The results of the final imputation round are returned.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # enables the experimental IterativeImputer
from sklearn.impute import IterativeImputer

# In the training data the second feature is roughly twice the first
imp = IterativeImputer(max_iter=10, random_state=0)
imp.fit([[2, 4], [3, 6], [1, 2], [np.nan, 7], [7, np.nan]])

X_test = [[np.nan, 4], [9, np.nan], [np.nan, 3]]
print(np.round(imp.transform(X_test)))  # missing entries are filled using the learned relationship
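The regressor used at each step is configurable: by default IterativeImputer fits a BayesianRidge model, but any scikit-learn regressor can be supplied through the estimator parameter. Here is a minimal sketch reusing the toy data above; the choice of ExtraTreesRegressor and its settings are purely for illustration.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # enables the experimental IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.impute import IterativeImputer

# Plug in a tree-based regressor instead of the default BayesianRidge
imp = IterativeImputer(estimator=ExtraTreesRegressor(n_estimators=10, random_state=0),
                       max_iter=10, random_state=0)
imp.fit([[2, 4], [3, 6], [1, 2], [np.nan, 7], [7, np.nan]])
print(imp.transform([[np.nan, 4], [9, np.nan]]))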

The KNNImputer class provides imputation based on the k-nearest neighbors approach. By default, the nearest neighbors are found with nan_euclidean_distances, a Euclidean distance metric that supports missing values. Each missing feature is imputed using the values of the n_neighbors nearest neighbors that have a value for that feature. The neighbors’ values are either averaged uniformly or weighted by the distance to each neighbor. If a sample is missing more than one feature, its neighbors may differ depending on which feature is being imputed.

import numpy as np
from sklearn.impute import KNNImputer

nan = np.nan
X = [[1, 2, nan], [4, 5, 6], [nan, 8, 9], [3, 3, 7]]

# Each NaN is replaced by the average of that feature over the 2 nearest neighbors
imputer = KNNImputer(n_neighbors=2, weights="uniform")
print(imputer.fit_transform(X))
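To weight closer neighbors more heavily rather than averaging them uniformly, you can pass weights="distance". A small variation of the example above:

import numpy as np
from sklearn.impute import KNNImputer

X = [[1, 2, np.nan], [4, 5, 6], [np.nan, 8, 9], [3, 3, 7]]

# Closer neighbors contribute more to the imputed value
imputer = KNNImputer(n_neighbors=2, weights="distance")
print(imputer.fit_transform(X))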

Also, read about the 176B Parameter Bloom Model.