If you are working on any real data set, you will often need to normalise the values to improve model accuracy. For this we will use the StandardScaler() class from sklearn.

sklearn.preprocessing.StandardScaler(): This class standardizes features by removing the mean and scaling to unit variance.

The standard score of a sample x is calculated as: z = (x – u) / s

x = sample value

u = mean of the training samples

s = standard deviation of the training samples
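The formula above can be verified by hand: a minimal sketch (with illustrative values) that standardizes an array manually with NumPy and confirms it matches what StandardScaler produces.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([10.0, 15.0, 22.0, 33.0, 25.0, 34.0, 56.0]).reshape(-1, 1)

# Manual standardization: subtract the mean, divide by the
# (population) standard deviation -- exactly z = (x - u) / s.
z_manual = (x - x.mean()) / x.std()

# The same transformation via StandardScaler.
z_scaler = StandardScaler().fit_transform(x)

print(np.allclose(z_manual, z_scaler))  # the two results agree
```

Note that StandardScaler uses the population standard deviation (ddof=0), which is also NumPy's default for `np.std`.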

**Parameters:**

copy : boolean, optional, default True

If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.

with_mean : boolean, True by default

If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.

with_std : boolean, True by default

If True, scale the data to unit variance (or equivalently, unit standard deviation).
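The effect of with_mean and with_std can be seen in a short sketch (the data values here are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])

# with_mean=False: data is only divided by the standard deviation,
# so the column means are not shifted to zero.
no_center = StandardScaler(with_mean=False).fit_transform(X)

# with_std=False: data is only centered; the spread is left unchanged.
no_scale = StandardScaler(with_std=False).fit_transform(X)

print(no_scale.mean(axis=0))  # centered: per-column means are ~0
print(no_center.std(axis=0))  # scaled: per-column stds are ~1
```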

**Attributes:**

scale_ : ndarray or None, shape (n_features,)

Per feature relative scaling of the data. This is calculated using np.sqrt(var_). Equal to None when with_std=False.

New in version 0.17: scale_

mean_ : ndarray or None, shape (n_features,)

The mean value for each feature in the training set. Equal to None when with_mean=False.

var_ : ndarray or None, shape (n_features,)

The variance for each feature in the training set. Used to compute scale_. Equal to None when with_std=False.

n_samples_seen_ : int or array, shape (n_features,)

The number of samples processed by the estimator for each feature. If there are no missing samples, n_samples_seen_ will be an integer, otherwise it will be an array. It is reset on new calls to fit, but increments across partial_fit calls.
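These fitted attributes can be inspected directly after calling fit; a small sketch with illustrative data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])

sc = StandardScaler().fit(X)
print(sc.mean_)            # per-feature mean of the training set
print(sc.var_)             # per-feature (population) variance
print(sc.scale_)           # np.sqrt(var_), used to divide the data
print(sc.n_samples_seen_)  # number of samples processed (3 here)
```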

**Methods**

fit(X[, y]): Compute the mean and std to be used for later scaling.

fit_transform(X[, y]): Fit to data, then transform it.

get_params([deep]): Get parameters for this estimator.

inverse_transform(X[, copy]): Scale back the data to the original representation.

partial_fit(X[, y]): Online computation of mean and std on X for later scaling.

set_params(**params): Set the parameters of this estimator.

transform(X[, copy]): Perform standardization by centering and scaling.

**Example program:**


```
import numpy as np
from sklearn.preprocessing import StandardScaler

X = [10, 15, 22, 33, 25, 34, 56]
Y = [101, 105, 222, 333, 225, 334, 556]
print("Before standardisation X values are ", X)
print("Before standardisation Y values are ", Y)

# StandardScaler expects a 2D array of shape (n_samples, n_features),
# so reshape each list into a single-feature column, then flatten
# the result back to 1D for printing.
sc = StandardScaler()
X = sc.fit_transform(np.array(X).reshape(-1, 1)).ravel()
Y = sc.fit_transform(np.array(Y).reshape(-1, 1)).ravel()
print("After standardisation X values are ", X)
print("After standardisation Y values are ", Y)
```


**Output:**


```
Before standardisation X values are [10, 15, 22, 33, 25, 34, 56]
Before standardisation Y values are [101, 105, 222, 333, 225, 334, 556]
After standardisation X values are [-1.27049317 -0.91475508 -0.41672176 0.36590203 -0.20327891 0.43704965
2.00229723]
After standardisation Y values are [-1.14102498 -1.11369504 -0.31429431 0.44411152 -0.29379685 0.450944
1.96775565]
```
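Since the fitted scaler stores mean_ and scale_, the transformation can also be undone. A short sketch of inverse_transform, using the same X values as the example above:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([10, 15, 22, 33, 25, 34, 56], dtype=float).reshape(-1, 1)

sc = StandardScaler()
X_scaled = sc.fit_transform(X)

# inverse_transform multiplies by scale_ and adds mean_ back,
# recovering the original values.
X_back = sc.inverse_transform(X_scaled)

print(np.allclose(X, X_back))  # True: the round trip recovers X
```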
