I'm new to recommender systems and I'm currently building a collaborative-filtering-based recommender system. My dataset currently has 600 users and 9,000 items with ratings. I have built a user–item interaction matrix and do all my operations in NumPy. I use the Pearson correlation coefficient to find the top-k most similar users for each user. My current code for finding the top 10 most similar users per target user has time complexity O(m²n) and space complexity O(mn), where m is the number of users and n is the number of items.
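For reference, a simplified sketch of what my current approach does (here `R` is a toy stand-in for my 600×9,000 rating matrix; the pairwise Pearson step is the O(m²n) part):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 600, 9000, 10  # users, items, neighbors
R = rng.integers(0, 6, size=(m, n)).astype(float)  # toy user-item rating matrix

# Pearson correlation between every pair of user rows: O(m^2 * n) time
sim = np.corrcoef(R)             # shape (m, m)
np.fill_diagonal(sim, -np.inf)   # exclude each user's self-similarity

# indices of the top-k most similar users for each target user
top_k = np.argpartition(-sim, k, axis=1)[:, :k]  # shape (m, k)
```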
Given this time complexity, it will be infeasible for a large number of users. While researching, I found that dimensionality reduction can be a solution. But my concern is that if I reduce the number of users, the system will no longer be able to produce recommendations for every user, which is what I want.
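To clarify my understanding of the suggestion: I believe dimensionality reduction (e.g. a truncated SVD) would shrink the item dimension n into d latent factors while keeping all m users, so every user still gets a representation to compute similarities on. The number of factors `d = 50` here is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(600, 9000)).astype(float)  # toy rating matrix

d = 50  # number of latent factors (picked arbitrarily for this sketch)
# mean-center each user's ratings before factorizing
U, S, Vt = np.linalg.svd(R - R.mean(axis=1, keepdims=True), full_matrices=False)

# one d-dimensional row per user: all 600 users are kept, only the
# item dimension (9000 -> d) is reduced for the similarity search
user_factors = U[:, :d] * S[:d]  # shape (600, d)
```

If that reading is right, similarity search over `user_factors` would cost O(m²d) instead of O(m²n).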
So what are the different ways to optimize this so that it scales?