Day 17: Dimensionality Reduction

Python for Data Science

Muhammad Dawood
6 min read · Aug 1, 2023

Welcome to Day 17 of our Python for data science challenge! Dimensionality reduction is a powerful technique for simplifying high-dimensional data while preserving its essential information. Today, we will explore two widely used techniques, Principal Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE): how they reduce dimensions, how to visualize high-dimensional data, and how dimensionality reduction is applied in data analysis. Let's dive in!

Dimensionality Reduction:

Dimensionality reduction is a technique used to reduce the number of features (dimensions) in a high-dimensional dataset while preserving as much relevant information as possible. It is beneficial for various reasons, including data visualization, computational efficiency, and improving model performance. Two popular dimensionality reduction techniques are Principal Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE).

PCA (Principal Component Analysis):

PCA is a linear dimensionality reduction technique that transforms the data into a new set of orthogonal components (principal components) by maximizing the variance along each component. The first principal component captures the most significant variance in the data, and subsequent components capture the remaining variance in descending order of importance. By projecting the data onto a reduced set of principal components, PCA can effectively reduce the dimensions while preserving most of the variance in the data. PCA is mainly used for unsupervised feature extraction, data compression, and noise reduction.
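
To make the intuition concrete, here is a minimal NumPy sketch of the idea behind PCA (not the scikit-learn implementation used later in this post): centre the data, compute the covariance matrix, and project onto the eigenvectors with the largest eigenvalues. The random data is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # illustrative data: 100 samples, 5 features

X_centered = X - X.mean(axis=0)         # PCA works on mean-centred data
cov = np.cov(X_centered, rowvar=False)  # 5 x 5 covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: the covariance matrix is symmetric
order = np.argsort(eigvals)[::-1]       # sort components by variance, descending
components = eigvecs[:, order[:2]]      # keep the top 2 principal components

X_reduced = X_centered @ components     # project onto the new 2D basis
print(X_reduced.shape)                  # (100, 2)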

Strengths of PCA:

  • Fast and computationally efficient for large datasets.
  • Widely used implementations (such as scikit-learn's) centre the data automatically before computing the components.
  • Provides interpretable principal components that represent linear combinations of the original features (see the loadings sketch below).
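
As a quick illustration of that interpretability, once a PCA model has been fitted (for instance, the 'pca' object from the implementation section below), its components_ attribute exposes each component's weights on the original features:

# Assuming 'pca' is a fitted PCA model, e.g. pca = PCA(n_components=2).fit(X)
# Each row of pca.components_ is one principal component, expressed as a
# linear combination (the "loadings") of the original features.
for i, component in enumerate(pca.components_):
    print(f"PC{i + 1} loadings: {component}")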

Limitations of PCA:

  • PCA is sensitive to the scale of the data, so it is essential to standardize or normalize features before applying it (a pipeline sketch follows this list).
  • It may not capture non-linear relationships in the data.
  • It may not preserve local structures well, which limits its use for visualization.
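
A minimal sketch of that pipeline approach, assuming 'X' is your raw feature matrix:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Standardize features to zero mean and unit variance before PCA,
# so that no single large-scale feature dominates the components.
pca_pipeline = make_pipeline(StandardScaler(), PCA(n_components=2))
X_pca = pca_pipeline.fit_transform(X)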

t-SNE (t-distributed Stochastic Neighbor Embedding):

t-SNE is a non-linear dimensionality reduction technique primarily used for visualizing high-dimensional data in a lower-dimensional space (e.g., 2D or 3D). It focuses on preserving the local structures in the data, meaning it tries to maintain the similarity between nearby points in the high-dimensional space when projecting them onto the lower-dimensional space. t-SNE is particularly useful for visualizing clusters or patterns in complex datasets.

Strengths of t-SNE:

  • Effective at preserving local structures, making it suitable for data visualization.
  • Can reveal hidden patterns and clusters in high-dimensional data.
  • Well-suited for exploratory data analysis and identifying outliers.

Limitations of t-SNE:

  • Computationally expensive and can be slow on large datasets.
  • t-SNE is sensitive to the choice of the perplexity hyperparameter, which affects the quality of the visualizations (see the sweep sketch after this list).
  • t-SNE is not suitable for feature extraction or data compression tasks, as it doesn't provide meaningful components.
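
A minimal sketch of such a perplexity sweep, assuming 'X' is your data (the values 5, 30, and 50 are typical starting points, not definitive choices):

from sklearn.manifold import TSNE

# Compute one embedding per perplexity value and compare the resulting
# plots; note that perplexity must be smaller than the number of samples.
embeddings = {}
for perplexity in (5, 30, 50):
    tsne = TSNE(n_components=2, perplexity=perplexity, random_state=42)
    embeddings[perplexity] = tsne.fit_transform(X)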

Implementing PCA and t-SNE using Python’s scikit-learn library:

To implement PCA and t-SNE in Python, you can use scikit-learn, a popular machine learning library that provides easy-to-use implementations of many algorithms, including dimensionality reduction techniques.

PCA Implementation:

from sklearn.decomposition import PCA

# Assuming 'X' is your high-dimensional data.
# Choose the number of components for the lower-dimensional
# representation (e.g., 2 for a 2D visualization).
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

# Access the explained variance ratio of the principal components
explained_variance_ratio = pca.explained_variance_ratio_

t-SNE Implementation:

from sklearn.manifold import TSNE

# Assuming 'X' is your high-dimensional data.
# Perplexity is a hyperparameter that can be tuned for better visualizations.
tsne = TSNE(n_components=2, perplexity=30)
X_tsne = tsne.fit_transform(X)
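
Note that, unlike PCA, scikit-learn's TSNE estimator has no separate transform method: you call fit_transform on the whole dataset each time, so an existing embedding cannot be reused to project new, unseen points.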

Choosing the appropriate number of components for PCA and interpreting the explained variance ratio:

When applying PCA, you can choose the number of components based on your specific requirements. A common approach is to set the number of components so that they explain a certain percentage of the total variance in the data. The explained variance ratio (available through the explained_variance_ratio_ attribute) gives the proportion of variance explained by each principal component, and its cumulative sum tells you how many components retain a desired amount of variance.

For example, if you want to retain 95% of the variance in the data:

import numpy as np

cumulative_variance = np.cumsum(pca.explained_variance_ratio_)
num_components = np.argmax(cumulative_variance >= 0.95) + 1
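
You can then refit PCA with that component count and confirm how much variance is retained:

pca = PCA(n_components=num_components)
X_reduced = pca.fit_transform(X)

# The total explained variance should now be at least 0.95
print(pca.explained_variance_ratio_.sum())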

Visualization of high-dimensional data in lower dimensions:

After applying dimensionality reduction, you can plot the reduced data using libraries like Matplotlib or Seaborn to visualize the data in lower dimensions:

import matplotlib.pyplot as plt

# Assuming 'X_pca' (or 'X_tsne') contains the reduced data
plt.scatter(X_pca[:, 0], X_pca[:, 1], label='PCA')
# or: plt.scatter(X_tsne[:, 0], X_tsne[:, 1], label='t-SNE')
plt.legend()
plt.xlabel('Component 1')
plt.ylabel('Component 2')
plt.title('Data Visualization')
plt.show()

Application of Dimensionality Reduction in Data Analysis:

Using PCA for Feature Extraction and Data Compression:

In data analysis and machine learning, PCA can be employed as a preprocessing step to reduce the number of features and improve model performance. By selecting the most important principal components, PCA acts as a feature extraction technique, helping to focus on the most informative aspects of the data. It can also be used for data compression by retaining a reduced set of components that still captures a significant portion of the variance. The compressed data occupies less memory and speeds up computations without sacrificing much information.
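
As a minimal sketch of this compression idea, you can reconstruct the data from the retained components with inverse_transform and measure how much information was lost (here 'X' is assumed to be your standardized feature matrix):

from sklearn.decomposition import PCA
import numpy as np

# Compress: a float n_components asks PCA to keep just enough
# components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_compressed = pca.fit_transform(X)

# Decompress: map the compressed data back to the original feature space.
X_reconstructed = pca.inverse_transform(X_compressed)

# Mean squared reconstruction error: the information lost by compression.
reconstruction_error = np.mean((X - X_reconstructed) ** 2)
print(f"Kept {pca.n_components_} components, reconstruction MSE: {reconstruction_error:.4f}")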

Using t-SNE for Data Visualization and Pattern Identification:

t-SNE is mainly used for visualizing complex high-dimensional data in a lower-dimensional space. It helps to identify clusters, patterns, and outliers that might be difficult to perceive in the original high-dimensional space. Data scientists often use t-SNE in exploratory data analysis to gain insights into the underlying structure of the data. However, t-SNE is not used for feature extraction or compression, as it doesn’t provide meaningful components.

Practical Application:

For a practical example, let’s consider using dimensionality reduction on a real-world dataset such as the famous Iris dataset.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Load the Iris dataset
data = load_iris()
X = data.data
y = data.target

# Standardize the features (important for PCA)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# PCA
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_scaled)

# t-SNE (random_state fixed so the stochastic embedding is reproducible)
tsne = TSNE(n_components=2, perplexity=30, random_state=42)
X_tsne = tsne.fit_transform(X_scaled)

# Visualize PCA and t-SNE results
plt.figure(figsize=(10, 5))

plt.subplot(1, 2, 1)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap='viridis')
plt.xlabel('Component 1 (PCA)')
plt.ylabel('Component 2 (PCA)')
plt.title('PCA')

plt.subplot(1, 2, 2)
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y, cmap='viridis')
plt.xlabel('Component 1 (t-SNE)')
plt.ylabel('Component 2 (t-SNE)')
plt.title('t-SNE')

plt.tight_layout()
plt.show()

Output:

In this example, we applied both PCA and t-SNE to the Iris dataset, which has four features. The reduced 2D representations of the data are then visualized, with points coloured according to their respective classes. The plots show the clusters and patterns present in the data, aiding in understanding its structure.

Overall, dimensionality reduction techniques like PCA and t-SNE are valuable tools for data analysis, visualization, and machine learning tasks. They offer insights into high-dimensional data, make computation more efficient, and enhance the performance of various data-driven applications.

Congratulations on completing Day 17 of our Python for data science challenge! Today, you explored the power of dimensionality reduction, learning about PCA and t-SNE, reducing dimensions, and visualizing high-dimensional data. Dimensionality Reduction empowers you to uncover meaningful patterns and simplify complex data for better analysis.

As you continue your Python journey, remember the significance of dimensionality reduction in streamlining data analysis and improving model performance. Tomorrow, on Day 18, we will delve into the world of Natural Language Processing (NLP), a transformative field in data science.
