I have three dataframes (different variables) that I am trying to run a PCA on in Python. Their sizes are:
df1 = 17 rows × 60212 columns (the 17 rows are model names, and the 60212 columns are the data)
df2 = 17 rows × 60077 columns
df3 = 17 rows × 61513 columns
My biggest issue is that when I try to concat them the datapoints don't align, so NaN appears in the cells, and when I drop those rows all the data disappears. There are no overlapping points. So I was wondering if I should do an individual PCA on each dataframe and go from there.
Option 1: Perform Individual PCA
You can use sklearn StandardScaler and PCA for this:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
You will need to standardise each dataframe so the features have equal weight, then run PCA on each one individually:
def perform_pca(df, n_components):
    # Standardise features to zero mean and unit variance
    scaler = StandardScaler()
    standardized_data = scaler.fit_transform(df)
    # Fit PCA and project the data onto the principal components
    pca = PCA(n_components=n_components)
    principal_components = pca.fit_transform(standardized_data)
    return principal_components, pca.explained_variance_ratio_

pcs_df1, var_ratio_df1 = perform_pca(df1, n_components=5)
pcs_df2, var_ratio_df2 = perform_pca(df2, n_components=5)
pcs_df3, var_ratio_df3 = perform_pca(df3, n_components=5)
Then combine with numpy:
combined_pcs = np.hstack([pcs_df1, pcs_df2, pcs_df3])
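If it helps downstream analysis, you can label the combined scores with the model names (a minimal sketch; it assumes df1, df2 and df3 share the same 17-model row index, and the combined_pcs_df name is just illustrative):

# Wrap the 17 × 15 score matrix in a labelled DataFrame
combined_pcs_df = pd.DataFrame(
    combined_pcs,
    index=df1.index,
    columns=[f"{name}_pc{i+1}" for name in ("df1", "df2", "df3") for i in range(5)],
)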
Option 2: Impute the missing (NaN) values
If the missing values should carry meaningful information, you can impute them with mean imputation, KNN imputation (a sketch follows the mean-imputation example below), or something else like matrix factorisation, and then apply PCA:
Combine with pandas (note that concat with axis=1 aligns on the row index, so the 17 model names need to match across all three dataframes, otherwise you get the misaligned NaN cells you describe):
combined_df = pd.concat([df1, df2, df3], axis=1)
Impute values example:
from sklearn.impute import SimpleImputer

# Replace each NaN with the mean of its column
imputer = SimpleImputer(strategy='mean')
combined_df_imputed = imputer.fit_transform(combined_df)
Run PCA on that:
pca = PCA(n_components=10)
principal_components = pca.fit_transform(combined_df_imputed)
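If mean imputation is too crude, a KNN-based imputer is a drop-in replacement (a minimal sketch; n_neighbors=5 is an arbitrary choice here):

from sklearn.impute import KNNImputer

# Fill each NaN from the 5 most similar rows (models) instead of the column mean
knn_imputer = KNNImputer(n_neighbors=5)
combined_df_imputed = knn_imputer.fit_transform(combined_df)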
Option 3: Feature selection or aggregation
If you want to perform PCA on all the data but reduce the dimensionality beforehand, you can select common or representative features from each dataframe and use only those for PCA (see the sketch below).
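One simple selection heuristic is to keep only the highest-variance columns from each dataframe (a minimal sketch; the top_variance_features helper and the top_k=1000 cutoff are illustrative, not prescriptive):

def top_variance_features(df, top_k=1000):
    # Keep the top_k columns with the highest variance
    variances = df.var()
    return df[variances.nlargest(top_k).index]

selected_df1 = top_variance_features(df1)
selected_df2 = top_variance_features(df2)
selected_df3 = top_variance_features(df3)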
Alternatively, aggregate groups of related columns, e.g. by averaging them. This assumes the columns carry a grouping to aggregate over, such as a MultiIndex whose first level names the group; note that groupby(axis=1) is deprecated in recent pandas, so transpose instead:
summarized_df1 = df1.T.groupby(level=0).mean().T
summarized_df2 = df2.T.groupby(level=0).mean().T
summarized_df3 = df3.T.groupby(level=0).mean().T
Then concatenate these dfs:
combined_df = pd.concat([summarized_df1, summarized_df2, summarized_df3], axis=1)
Then run PCA on the combined dataframe as described above.
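For completeness, that last step would look something like this (a minimal sketch, reusing the scaler-plus-PCA pattern from Option 1):

# Standardise the combined features, then project onto 10 components
scaled = StandardScaler().fit_transform(combined_df)
pca = PCA(n_components=10)
principal_components = pca.fit_transform(scaled)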