The exponential mechanism is a generalized differentially private algorithm for any output space $\mathcal{Y}$. Unlike the Laplace mechanism, which must output real-valued statistics, $\mathcal{Y}$ can be anything, such as models, clusterings, or voting results.
Our mechanism takes in some input $x$ and outputs some $y \in \mathcal{Y}$. Let the "quality" of an output $y$ for the input $x$ be measured by a function
$$q(x, y) \in \mathbb{R}.$$
For example, if $x$ is a dataset and $y$ is a model, $q(x, y)$ could be the inverse error of $y$ on $x$.
Without privacy, our output would simply be the $y$ that maximizes quality,
$$y^* = \operatorname*{arg\,max}_{y \in \mathcal{Y}} q(x, y).$$
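To make this concrete, here is a minimal Python sketch (not from the original text) of a hypothetical quality function and the non-private baseline. It assumes $x$ is a list of (value, label) pairs, each candidate $y$ is a threshold classifier, and quality is the accuracy (one minus the error) of $y$ on $x$; all names are illustrative.

```python
def quality(x, y):
    """q(x, y): accuracy of threshold classifier y on dataset x.

    x is a list of (value, label) pairs; y predicts label=True iff value >= y.
    """
    return sum(1 for value, label in x if (value >= y) == label) / len(x)

def non_private_argmax(x, candidates):
    """Without privacy: simply return the candidate maximizing q(x, y)."""
    return max(candidates, key=lambda y: quality(x, y))
```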
To add randomness, we use the generalized sensitivity,
$$\Delta q = \max_{y \in \mathcal{Y}} \max_{x \sim x'} \left| q(x, y) - q(x', y) \right|,$$
where $x \sim x'$ denotes that $x$ and $x'$ are neighbors. Intuitively, the generalized sensitivity is the maximum amount by which changing a single input record can change the quality of any output. For example, since changing one of the $n$ examples changes the error rate of any model by at most $1/n$, the generalized sensitivity of the machine learning setting (input dataset, output model) is $\Delta q = 1/n$.
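As an illustration only (the function names and the replace-one-record neighbor definition below are assumptions, not from the original text), one can probe this quantity empirically by swapping single records and measuring how much any candidate's quality moves; a finite scan like this only lower-bounds the true generalized sensitivity, which maximizes over all neighboring pairs.

```python
def empirical_sensitivity(x, candidates, domain, quality):
    """Lower-bound Delta_q by scanning datasets x' that differ from x in one
    record (replacements drawn from `domain`). Illustration only: the true
    generalized sensitivity maximizes over *all* neighboring pairs."""
    worst = 0.0
    for i in range(len(x)):
        for record in domain:
            x_prime = x[:i] + [record] + x[i + 1:]
            for y in candidates:
                worst = max(worst, abs(quality(x, y) - quality(x_prime, y)))
    return worst
```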
In the exponential mechanism, we sample the output using probability
$$\Pr[M(x) = y] = \frac{1}{Z} \exp\left( \frac{\varepsilon\, q(x, y)}{2 \Delta q} \right),$$
where
$$Z = \sum_{y' \in \mathcal{Y}} \exp\left( \frac{\varepsilon\, q(x, y')}{2 \Delta q} \right)$$
is a normalizing constant to ensure a valid probability distribution. This formulation assigns exponentially higher probability to higher-quality outputs, but due to the sampling randomness, we can show that this method is differentially private.
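A minimal sketch of the sampling step, assuming the quality function and sensitivity are supplied by the caller (the function and parameter names are mine, not the original's). Subtracting the maximum score before exponentiating does not change the distribution but avoids overflow:

```python
import math
import random

def exponential_mechanism(x, candidates, quality, sensitivity, epsilon):
    """Sample y with probability proportional to exp(eps * q(x, y) / (2 * Delta_q))."""
    scores = [epsilon * quality(x, y) / (2 * sensitivity) for y in candidates]
    shift = max(scores)  # for numerical stability; cancels in the normalization
    weights = [math.exp(s - shift) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]
```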
We'll show that the exponential mechanism is $\varepsilon$-differentially private. For all neighbors $x \sim x'$ and outputs $y \in \mathcal{Y}$, we have
$$\frac{\Pr[M(x) = y]}{\Pr[M(x') = y]} = \exp\left( \frac{\varepsilon \left( q(x, y) - q(x', y) \right)}{2 \Delta q} \right) \cdot \frac{Z'}{Z},$$
where $Z$ and $Z'$ are the normalizing constants for $x$ and $x'$, respectively.
Next, since the difference is at most its absolute value, which is at most $\Delta q$ by the definition of generalized sensitivity, we can bound the first term in the product:
$$\exp\left( \frac{\varepsilon \left( q(x, y) - q(x', y) \right)}{2 \Delta q} \right) \leq \exp\left( \frac{\varepsilon \left| q(x, y) - q(x', y) \right|}{2 \Delta q} \right) \leq \exp\left( \frac{\varepsilon\, \Delta q}{2 \Delta q} \right) = e^{\varepsilon / 2}.$$
As for the second term, we have
$$\frac{Z'}{Z} = \frac{\sum_{y' \in \mathcal{Y}} \exp\left( \frac{\varepsilon\, q(x', y')}{2 \Delta q} \right)}{\sum_{y' \in \mathcal{Y}} \exp\left( \frac{\varepsilon\, q(x, y')}{2 \Delta q} \right)}.$$
Since $\Delta q$ is the maximum difference between $q(x', y')$ and $q(x, y')$, we have
$$\exp\left( \frac{\varepsilon\, q(x', y')}{2 \Delta q} \right) \leq \exp\left( \frac{\varepsilon \left( q(x, y') + \Delta q \right)}{2 \Delta q} \right) = e^{\varepsilon / 2} \exp\left( \frac{\varepsilon\, q(x, y')}{2 \Delta q} \right),$$
and thus
$$\frac{Z'}{Z} \leq \frac{e^{\varepsilon / 2} \sum_{y' \in \mathcal{Y}} \exp\left( \frac{\varepsilon\, q(x, y')}{2 \Delta q} \right)}{\sum_{y' \in \mathcal{Y}} \exp\left( \frac{\varepsilon\, q(x, y')}{2 \Delta q} \right)} = e^{\varepsilon / 2}.$$
Combining both terms, we finally get
$$\frac{\Pr[M(x) = y]}{\Pr[M(x') = y]} \leq e^{\varepsilon / 2} \cdot e^{\varepsilon / 2} = e^{\varepsilon}.$$
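The bound can also be checked numerically on a toy example (entirely hypothetical: datasets are lists of bits, the quality of output $y$ is the number of times $y$ appears in the dataset, so $\Delta q = 1$ under single-record replacement):

```python
import math

def exact_probs(x, candidates, quality, sensitivity, epsilon):
    """Exact output distribution of the exponential mechanism."""
    weights = [math.exp(epsilon * quality(x, y) / (2 * sensitivity)) for y in candidates]
    total = sum(weights)
    return [w / total for w in weights]

count_quality = lambda x, y: x.count(y)        # Delta_q = 1 for bit datasets
x, x_prime = [0, 0, 1, 1, 1], [0, 0, 0, 1, 1]  # neighbors: one record replaced
epsilon = 1.0
p = exact_probs(x, [0, 1], count_quality, 1, epsilon)
p_prime = exact_probs(x_prime, [0, 1], count_quality, 1, epsilon)
# every probability ratio stays within e^epsilon, as the proof guarantees
assert all(a <= math.exp(epsilon) * b for a, b in zip(p, p_prime))
```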
This can then be generalized from single outcomes to sets of outcomes by summing the bound over the elements of the set, and thus the exponential mechanism formulation is $\varepsilon$-differentially private.
In the special case of finite $\mathcal{Y}$, for some input $x$ and output $Y$ of the exponential mechanism, we have with probability at least $1 - \beta$
$$q(x, Y) \geq \max_{y \in \mathcal{Y}} q(x, y) - \frac{2 \Delta q}{\varepsilon} \ln \frac{|\mathcal{Y}|}{\beta}.$$
This gives us a strong bound on how good our differentially private output will be. Since $\Delta q$ is usually dependent on the dataset size $n$, we can directly calculate the $n$ that gives us a desired difference between our output and the optimal result. For example, in the machine learning setting, $\Delta q = 1/n$, and if we let the desired difference be at most $\alpha$, we have
$$n \geq \frac{2}{\alpha \varepsilon} \ln \frac{|\mathcal{Y}|}{\beta}.$$
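Continuing under the same assumptions ($\Delta q = 1/n$ because quality is accuracy, failure probability $\beta$), the required dataset size can be computed directly; the function name and the numbers below are illustrative:

```python
import math

def required_n(num_candidates, alpha, epsilon, beta=0.05):
    """Smallest n such that (2 / (n * eps)) * ln(|Y| / beta) <= alpha,
    assuming quality is accuracy so that Delta_q = 1/n."""
    return math.ceil(2 * math.log(num_candidates / beta) / (alpha * epsilon))

# e.g., 1,000 candidate models, 5% accuracy gap, eps = 1: prints 397
print(required_n(num_candidates=1000, alpha=0.05, epsilon=1.0))
```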