In Vertical Federated Logistic Regression, suppose there are two parties, A and B, where Party B holds the labels. If Party A obtains the gradient information in plaintext, it can infer Party B's labels, because the backward derivative ∂Z_A (the gradient of the loss with respect to Party A's intermediate output) reveals the training labels. The underlying reason is that the logistic loss produces derivatives pointing in opposite directions for the two labels, so the sign of the derivative inevitably encodes label information. In binary classification, for example, the gradient direction is strongly correlated with the label:
Based on the gradient's sign alone, Party A can distinguish label 0 from label 1.
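To make the classification-side leakage concrete, here is a minimal NumPy sketch (all variable names are illustrative, not from any federated learning framework). It assumes the standard logistic-regression fact that the per-sample gradient with respect to the linear score z is the residual sigmoid(z) − y; since sigmoid(z) lies in (0, 1), the residual's sign alone determines the binary label.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
z = rng.normal(size=8)             # linear scores from some model state
y = rng.integers(0, 2, size=8)     # true binary labels held by Party B

# Residual = per-sample gradient factor that Party A would see in plaintext.
# If y = 0: d = sigmoid(z) > 0.  If y = 1: d = sigmoid(z) - 1 < 0.
d = sigmoid(z) - y

inferred = (d < 0).astype(int)     # recover labels purely from the gradient sign
print(inferred.tolist() == y.tolist())  # True: labels fully recovered
```

This is exactly the leakage described above: no matter what the model weights are, a plaintext residual hands Party A every label.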
However, in Vertical Federated Linear Regression, where the labels are continuous (for example, house price prediction), what information would the gradient leak? Could Party A safely receive the gradient in plaintext?
After reviewing related papers on privacy-preserving federated learning for linear regression, I found that this issue is not discussed.
In regression problems, is it acceptable to leave the gradient unprotected and simply let Party A obtain it in plaintext? If Party A obtains the plaintext gradient, it only learns the residual y* − y, i.e., the gap between the predicted and actual values; unlike the classification case, this does not let Party A directly read off the label values.
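The intuition above can be illustrated with a small sketch (values and names are purely illustrative). In linear regression the per-sample gradient factor is the continuous residual d = y* − y; a single residual value is consistent with infinitely many (y*, y) pairs, so Party A cannot decode y from d alone without also knowing the full prediction y*, part of which comes from Party B's features.

```python
import numpy as np

rng = np.random.default_rng(1)

# One plaintext residual d = y* - y that Party A would observe.
d = 0.7

# Candidate full predictions y* (unknown to Party A, since Party B
# contributes part of the score).  Each candidate implies a different
# label value consistent with the SAME observed residual.
y_hat_candidates = rng.uniform(0, 100, 5)
y_candidates = y_hat_candidates - d

print(len(set(y_candidates)))  # 5 distinct labels, all consistent with d
```

So the regression residual leaks the prediction error, but not the label itself, which is why the question of whether that weaker leakage is acceptable remains open.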
Is it possible to obtain gradient data in plain text in regression problems?