privacy - Can Gradient Information Be Exposed in Vertical Federated Linear Regression Without Leaking Label Information?

In Vertical Federated Logistic Regression, suppose there are two parties, A and B, where Party B holds the labels. If Party A obtains the gradient in plaintext, it can infer Party B's labels, because the backward derivative ∂L/∂Z_A reveals the training labels. The fundamental reason is that the logistic loss produces derivatives of opposite sign for the two labels, so the direction of the gradient inevitably encodes label information. Concretely, in binary classification the per-sample derivative with respect to the logit is ∂L/∂z = ŷ − y with ŷ = σ(z) ∈ (0, 1), which is negative when y = 1 and positive when y = 0.

Based on the gradient direction, we can therefore infer label values 0 and 1.
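As a concrete illustration, here is a minimal numpy sketch (the sample size, logits, and variable names are mine, purely illustrative) of how the sign of the per-sample derivative ŷ − y recovers every binary label exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 samples whose true binary labels belong to Party B.
y = rng.integers(0, 2, size=8)        # labels in {0, 1}
z = rng.normal(size=8)                # logits of the joint model (illustrative)
y_hat = 1.0 / (1.0 + np.exp(-z))      # sigmoid predictions, strictly in (0, 1)

# Per-sample derivative of the logistic loss w.r.t. the logit:
#   dL/dz = y_hat - y
# Since 0 < y_hat < 1, this is negative exactly when y = 1.
grad = y_hat - y

inferred = (grad < 0).astype(int)     # Party A's guess from the sign alone
print("true labels    :", y)
print("inferred labels:", inferred)
assert np.array_equal(inferred, y)    # the sign recovers every label
```

This is precisely why plaintext gradients are treated as unsafe for the label holder in the classification setting.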

However, in Vertical Federated Linear Regression the labels are continuous, as in house price prediction. What kind of information would the gradient leak there? Could Party A receive the gradient in plaintext?

After reviewing related papers on privacy-preserving federated learning for linear regression, I have not found this issue discussed.

In regression problems, is it possible to skip protecting the gradient and simply let Party A obtain it in plaintext? Obtaining the plaintext gradient only tells Party A the residual y* − y, i.e., the gap between the predicted and actual values, so it does not let Party A directly infer the labels the way it can in classification problems.
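To make this concrete, here is a small numpy sketch of the regression case (the vertical split, parameters, and data-generating process are my own assumptions for illustration, not taken from any particular protocol). With squared-error loss, the per-sample quantity exposed to Party A is the residual d_i = ŷ_i − y_i, and recovering y_i from it would additionally require Party B's partial prediction, which Party A never holds:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Hypothetical vertical split: Party A holds x_A, Party B holds x_B and y.
x_A = rng.normal(size=n)              # A's feature (illustrative)
x_B = rng.normal(size=n)              # B's feature (illustrative)
theta_A, theta_B = 0.7, -1.3          # current model parameters (illustrative)
y = 2.0 * x_A - 1.0 * x_B + rng.normal(scale=0.1, size=n)  # B's continuous labels

# Joint prediction under squared-error loss: y_hat = u_A + u_B.
u_A = theta_A * x_A                   # A knows its own partial prediction
u_B = theta_B * x_B                   # A never sees B's partial prediction
d = (u_A + u_B) - y                   # residual y_hat - y, revealed to A in plaintext

grad_A = d * x_A                      # A's actual parameter gradient

# A's best label reconstruction from what it holds (u_A and d):
y_guess = u_A - d                     # equals y - u_B, off by B's unseen contribution
print("reconstruction error (= -u_B):", np.round(y_guess - y, 3))
```

The guess is off by exactly u_B, which is the sense in which a continuous residual, unlike the gradient sign in classification, does not by itself pin down the label.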

Is it acceptable, then, to exchange the gradient in plaintext in regression problems?
