{"title": "Privacy Amplification by Mixing and Diffusion Mechanisms", "book": "Advances in Neural Information Processing Systems", "page_first": 13298, "page_last": 13308, "abstract": "A fundamental result in differential privacy states that the privacy guarantees of a mechanism are preserved by any post-processing of its output. In this paper we investigate under what conditions stochastic post-processing can amplify the privacy of a mechanism. By interpreting post-processing as the application of a Markov operator, we first give a series of amplification results in terms of uniform mixing properties of the Markov process defined by said operator. Next we provide amplification bounds in terms of coupling arguments which can be applied in cases where uniform mixing is not available. Finally, we introduce a new family of mechanisms based on diffusion processes which are closed under post-processing, and analyze their privacy via a novel heat flow argument. On the applied side, we generalize the analysis of \"privacy amplification by iteration\" in Noisy SGD and show it admits an exponential improvement in the strongly convex case, and study a mechanism based on the Ornstein\u2013Uhlenbeck diffusion process which contains the Gaussian mechanism with optimal post-processing on bounded inputs as a special case.", "full_text": 
"PrivacyAmpli\ufb01cationbyMixingandDiffusionMechanismsBorjaBalleGillesBartheMPIforSecurityandPrivacyIMDEASoftwareInstituteMarcoGaboardiBostonUniversityJosephGeumlekUniversityofCalifornia,SanDiegoAbstractAfundamentalresultindifferentialprivacystatesthattheprivacyguaranteesofamechanismarepreservedbyanypost-processingofitsoutput.Inthispaperweinvestigateunderwhatconditionsstochasticpost-processingcanamplifytheprivacyofamechanism.Byinterpretingpost-processingastheapplicationofaMarkovoperator,we\ufb01rstgiveaseriesofampli\ufb01cationresultsintermsofuniformmixingpropertiesoftheMarkovprocessde\ufb01nedbysaidoperator.Nextweprovideampli\ufb01cationboundsintermsofcouplingargumentswhichcanbeappliedincaseswhereuniformmixingisnotavailable.Finally,weintroduceanewfamilyofmechanismsbasedondiffusionprocesseswhichareclosedunderpost-processing,andanalyzetheirprivacyviaanovelheat\ufb02owargument.Ontheappliedside,wegeneralizetheanalysisof\u201cprivacyampli\ufb01cationbyiteration\u201dinNoisySGDandshowitadmitsanexponentialimprovementinthestronglyconvexcase,andstudyamechanismbasedontheOrnstein\u2013UhlenbeckdiffusionprocesswhichcontainstheGaussianmechanismwithoptimalpost-processingonboundedinputsasaspecialcase.1IntroductionDifferentialprivacy(DP)[15]hasariseninthelastdecadeintoastrongde-factostandardforprivacy-preservingcomputationinthecontextofstatisticalanalysis.ThesuccessofDPisbased,atleastinpart,ontheavailabilityofrobustbuildingblocks(e.g.,theLaplace,exponentialandGaussianmechanisms)togetherwithrelativelysimplerulesforanalyzingcomplexmechanismsbuiltoutoftheseblocks(e.g.,compositionandrobustnesstopost-processing).Theinherenttensionbetweenprivacyandutilityinpracticalapplicationshassparkedarenewedinterestintothedevelopmentoffurtherrulesleadingtotighterprivacybounds.Atrendinthisdirectionisto\ufb01ndwaystomeasuretheprivacyintroducedbysourcesofrandomnessthatarenotaccountedforbystandardcompositionrules.Generallyspeaking,thesearereferredtoasprivacyampli\ufb01cationrules,withprominentexamplesb
eingampli\ufb01cationbysubsampling[9,18,20,6,5,8,2,27],shuf\ufb02ing[16,10,3]anditeration[17].Motivatedbytheseconsiderations,inthispaperweinitiateasystematicstudyofprivacyampli\ufb01cationbystochasticpost-processing.Speci\ufb01cally,givenaDPmechanismMproducing(probabilistic)outputsinXandaMarkovoperatorKde\ufb01ningastochastictransitionbetweenXandY,weareinterestedinmeasuringtheprivacyofthepost-processedmechanismK\u25e6MproducingoutputsinY.Thestandardpost-processingpropertyofDPstatesthatK\u25e6MisatleastasprivateasM.Ourgoalistounderstandunderwhatconditionsthepost-processedmechanismK\u25e6MisstrictlymoreprivatethanM.Roughlyspeaking,thisampli\ufb01cationshouldbenon-trivialwhentheoperatorK\u201cforgets\u201dinformationaboutthedistributionofitsinputM(D).Ourmaininsightisthat,atleastwhenY=X,33rdConferenceonNeuralInformationProcessingSystems(NeurIPS2019),Vancouver,Canada.\ftheforgetfulnessofKfromthepointofviewofDPcanbemeasuredusingsimilartoolstotheonesdevelopedtoanalyzethespeedofconvergence,i.e.mixing,oftheMarkovprocessassociatedwithK.Inthissetting,weprovidethreetypesofresults,eachassociatedwithastandardmethodusedinthestudyofconvergenceforMarkovprocesses.Inthe\ufb01rstplace,Section3providesDPampli\ufb01cationresultsforthecasewheretheoperatorKsatis\ufb01esauniformmixingcondition.TheseincludestandardconditionsusedintheanalysisofMarkovchainsondiscretespaces,includingthewell-knownDobrushincoef\ufb01centandDoeblin\u2019sminorizationcondition[19].Althoughinprincipleuniformmixingconditionscanalsobede\ufb01nedinmoregeneralnon-discretespaces[12],mostMarkovoperatorsofinterestinRddonotexhibituniformmixingsincethespeedofconvergencedependsonhowfaraparttheinitialinputsare.Convergenceanalysesinthiscaserelyonmoresophisticatedtools,includingLyapunovfunctions[22],couplingmethods[21]andfunctionalinequalities[1].Followingtheseideas,Section4investigatestheuseofcouplingmethodstoquantifyprivacyampli\ufb01cationbypost-processingunderR\u00e9nyiDP[23].Thesemethodsapplytooperatorsgivenby,e.g.,Gaussian
and Laplace distributions, for which uniform mixing does not hold. Results in this section are intimately related to the privacy amplification by iteration phenomenon studied in [17] and can be interpreted as extensions of their main results to more general settings. In particular, our analysis unpacks the shifted Rényi divergence used in the proofs from [17] and allows us to easily track the effect of iterating arbitrary noisy Lipschitz maps. As a consequence, we show an exponential improvement on the privacy amplification by iteration of Noisy SGD in the strongly convex case, which follows from applying this generalized analysis to strict contractions.

Our last set of results concerns the case where K is replaced by a family of operators (P_t)_{t≥0} forming a Markov semigroup [1]. This is the natural setting for continuous-time Markov processes, and includes diffusion processes defined in terms of stochastic differential equations [25]. In Section 5 we associate (a collection of) diffusion mechanisms (M_t)_{t≥0} to a diffusion semigroup. Interestingly, these mechanisms are, by construction, closed under post-processing in the sense that P_s ∘ M_t = M_{s+t}. We show the Gaussian mechanism falls into this family – since Gaussian noise is closed under addition – and also present a new mechanism based on the Ornstein–Uhlenbeck process which has better mean squared error than the standard Gaussian mechanism (and matches the error of the optimally post-processed Gaussian mechanism with bounded inputs). Our main result on diffusion mechanisms provides a generic Rényi DP guarantee based on an intrinsic notion of sensitivity derived from the geometry induced by the semigroup. The proof relies on a heat flow argument reminiscent of the analysis of mixing in diffusion processes based on functional inequalities [1].

2 Background

We start by introducing notation and concepts that will be used throughout the paper. We write [n] = {1, ..., n}, a ∧ b = min{a, b} and [a]_+ = max{a, 0}.

Probability. Let X = (X, Σ, λ) be a measurable space with sigma-algebra Σ and base measure λ. We write P(X) to denote the set of probability distributions on X. Given a probability distribution µ ∈ P(X) and a measurable event E ⊆ X we write µ(E) = P[X ∈ E] for a random variable X ∼ µ, denote its expectation under f : X → R^d by E[f(X)], and can get back its distribution as µ = Law(X). Given two distributions µ, ν (or, in general, arbitrary measures) we write µ ≪ ν to denote that µ is absolutely continuous with respect to ν, in which case there exists a Radon–Nikodym derivative dµ/dν. We shall reserve the notation p_µ = dµ/dλ to denote the density of µ with respect to the base measure. We also write C(µ, ν) to denote the set of couplings between µ and ν; i.e. π ∈ C(µ, ν) is an element of P(X × X) with marginals µ and ν. The support of a distribution µ is supp(µ).

Markov Operators. We will use K(X, Y) to denote the set of Markov operators K : X → P(Y) defining a stochastic transition map between X and Y and satisfying that x ↦ K(x)(E) is measurable for every measurable E ⊆ Y. Markov operators act on distributions µ ∈ P(X) on the left through (µK)(E) = ∫ K(x)(E) µ(dx), and on functions f : Y → R on the right through (Kf)(x) = ∫ f(y) K(x, dy), which can also be written as (Kf)(x) = E[f(X)] with X ∼ K(x). The kernel of a Markov operator K (with respect to λ) is the function k(x, ·) = dK(x)/dλ associating with x the density of K(x) with respect to a fixed measure.

Divergences. A popular way to measure dissimilarity between distributions is to use Csiszár divergences D_φ(µ‖ν) = ∫ φ(dµ/dν) dν, where φ : R_+ → R is convex with φ(1) = 0. Taking φ(u) = (1/2)|u − 1| yields the total variation distance TV(µ, ν), and the choice φ(u) = [u − e^ε]_+ with ε ≥ 0 gives the hockey-stick divergence D_{e^ε}, which satisfies

D_{e^ε}(µ‖ν) = ∫ [dµ/dν − e^ε]_+ dν = ∫ [p_µ − e^ε p_ν]_+ dλ = sup_{E⊆X} (µ(E) − e^ε ν(E)).

It is easy to check that ε ↦ D_{e^ε}(µ‖ν) is monotonically decreasing and D_1 = TV. All Csiszár divergences satisfy joint convexity, D((1−γ)µ_1 + γµ_2 ‖ (1−γ)ν_1 + γν_2) ≤ (1−γ)D(µ_1‖ν_1) + γD(µ_2‖ν_2), and the data processing inequality, D(µK‖νK) ≤ D(µ‖ν) for any Markov operator K. Rényi divergences¹ are another way to compare distributions. For α > 1 the Rényi divergence of order α is defined as R_α(µ‖ν) = (1/(α−1)) log ∫ (dµ/dν)^α dν, and also satisfies the data processing inequality. Finally, to measure similarity between µ, ν ∈ P(R^d) we sometimes use the ∞-Wasserstein distance:

W_∞(µ, ν) = inf_{π∈C(µ,ν)} inf{w ≥ 0 : ‖X − Y‖ ≤ w holds almost surely for (X, Y) ∼ π}.

¹Rényi divergences do not belong to the family of Csiszár divergences.

Differential Privacy. A mechanism M : D^n → P(X) is a randomized function that takes a dataset D ∈ D^n over some universe of records D and returns a (sample from a) distribution M(D). We write D ≃ D′ to denote two databases differing in a single record. We say that M satisfies² (ε, δ)-DP if sup_{D≃D′} D_{e^ε}(M(D)‖M(D′)) ≤ δ [15]. Furthermore, we say that M satisfies (α, ϵ)-RDP if sup_{D≃D′} R_α(M(D)‖M(D′)) ≤ ϵ [23].

²This divergence characterization of DP is due to [4].

3 Amplification From Uniform Mixing

We start our analysis of privacy amplification by stochastic post-processing by considering settings where the Markov operator K satisfies one of the following uniform mixing conditions.

Definition 1. Let K ∈ K(X, Y) be a Markov operator, γ ∈ [0, 1] and ε ≥ 0. We say that K is:
(1) γ-Dobrushin if sup_{x,x′} TV(K(x), K(x′)) ≤ γ;
(2) (γ, ε)-Dobrushin if sup_{x,x′} D_{e^ε}(K(x)‖K(x′)) ≤ γ;
(3) γ-Doeblin if there exists a distribution ω ∈ P(Y) such that K(x) ≥ (1 − γ)ω for all x ∈ X;
(4) γ-ultra-mixing if for all x, x′ ∈ X we have K(x) ≪ K(x′) and dK(x)/dK(x′) ≥ 1 − γ.

Most of these conditions arise in the context of mixing analyses in Markov chains. In particular, the Dobrushin condition can be traced back to [13], while Doeblin's condition was introduced earlier [14] (see also [24]). Ultra-mixing is a strengthening of Doeblin's condition used in [12]. The (γ, ε)-Dobrushin condition is, on the other hand, new, and is designed to be a generalization of Dobrushin tailored for amplification under the hockey-stick divergence. It is not hard to see that Dobrushin's is the weakest among these conditions, and in fact we have the implications summarized in Figure 1 (see Lemma 9). This explains why the amplification bounds in the following result are increasingly stronger, and in particular why the first two only provide amplification in δ, while the last two also amplify the ε parameter.

Theorem 1. Let M be an (ε, δ)-DP mechanism. For a given Markov operator K, the post-processed mechanism K ∘ M satisfies:
(1) (ε, δ′)-DP with δ′ = γδ if K is γ-Dobrushin;
(2) (ε, δ′)-DP with δ′ = γδ if K is (γ, ε̃)-Dobrushin with³ ε̃ = log(1 + (e^ε − 1)/δ);
(3) (ε′, δ′)-DP with ε′ = log(1 + γ(e^ε − 1)) and δ′ = γ(1 − e^{ε′−ε}(1 − δ)) if K is γ-Doeblin;
(4) (ε′, δ′)-DP with ε′ = log(1 + γ(e^ε − 1)) and δ′ = γδe^{ε′−ε} if K is γ-ultra-mixing.

³We take the convention ε̃ = ∞ whenever δ = 0, in which case the (γ, ∞)-Dobrushin condition is obtained with respect to the divergence D_∞(µ‖ν) = µ(supp(µ) \ supp(ν)).

A few remarks about this result are in order. First we note that (2) is stronger than (1), since the monotonicity of hockey-stick divergences implies TV = D_1 ≥ D_{e^ε̃}. Also note how in the results above we always have ε′ ≤ ε, and in fact the form of ε′ is the same as obtained under amplification by subsampling when, e.g., a γ-fraction of the original dataset is kept. This is not a coincidence, since the proofs of (3) and (4) leverage the overlapping mixtures technique used to analyze amplification by subsampling in [2]. However, we note that for (3) we can have δ′ > 0 even with δ = 0. In fact the Doeblin condition only leads to an amplification in δ if γ ≤ δe^ε/((1 − δ)(e^ε − 1)).

[Figure 1: Implications between the mixing conditions γ-ultra-mixing, γ-Doeblin, γ-Dobrushin and (γ, ε)-Dobrushin.]

Mixing condition     | Local DP condition
γ-Dobrushin          | (0, γ)-LDP
(γ, ε)-Dobrushin     | (ε, γ)-LDP
γ-Doeblin            | Blanket condition⁴
γ-ultra-mixing       | (log(1/(1−γ)), 0)-LDP
Table 1: Relation between mixing conditions and local DP.

We conclude this section by noting that the conditions in Definition 1, despite being quite natural, might be too stringent for proving amplification for DP mechanisms on, say, R^d. One way to see this is to interpret the operator K : X → P(Y) as a mechanism and to note that the uniform mixing conditions on K can be rephrased in terms of local DP (LDP) [18] properties (see Table 1 for the property translations), where the supremum is taken over any pair of inputs (instead of neighboring ones). This motivates the results in the next section, where we look for finer conditions to prove amplification by stochastic post-processing.

⁴The blanket condition is a necessary condition for LDP introduced in [3] to analyze privacy amplification by shuffling.

4 Amplification From Couplings

In this section we turn to coupling-based proofs of amplification by post-processing under the Rényi DP framework. Our first result is a measure-theoretic generalization of the shift-reduction lemma in [17] which does not require the underlying space to be a normed vector space. The main differences in our proof are to use explicit couplings instead of the shifted Rényi divergence, which implicitly relies on the existence of a norm (through the use of W_∞), and to replace the identity U + W − W = U between random variables, which depends on the vector-space structure, with transport operators H_π and H_{π′} which satisfy µH_{π′}H_π = µ in a general measure-theoretic setting.

Given a coupling π ∈ C(µ, ν) with µ, ν ∈ P(X), we construct a transport Markov operator H_π : X → P(X) with kernel⁵ h_π(x, y) = p_π(x, y)/p_µ(x), where p_π = dπ/d(λ ⊗ λ) and p_µ = dµ/dλ. It is immediate to verify from the definition that H_π is a Markov operator satisfying the transport property µH_π = ν (see Lemma 16).

⁵Here we use the convention 0/0 = 0.

Theorem 2. Let α ≥ 1, µ, ν ∈ P(X) and K ∈ K(X, Y). For any distribution ω ∈ P(X) and coupling π ∈ C(ω, µ) we have

R_α(µK‖νK) ≤ R_α(ω‖ν) + sup_{x∈supp(ν)} R_α((H_πK)(x)‖K(x)).  (1)

Note that this result captures the data-processing inequality for Rényi divergences, since taking ω = µ and the identity coupling yields R_α(µK‖νK) ≤ R_α(µ‖ν). The next examples illustrate the use of this theorem to obtain amplification by operators corresponding to the addition of Gaussian and Laplace noise.

Example 1 (Iterated Gaussian). We can show that (1) is tight and equivalent to the shift-reduction lemma [17] on R^d by considering the simple scenario of adding Gaussian noise to the output of a Gaussian mechanism. In particular, suppose M(D) = N(f(D), σ_1² I) for some function f with global L2-sensitivity ∆, and the Markov operator K is given by K(x) = N(x, σ_2² I). The post-processed mechanism is given by (K ∘ M)(D) = N(f(D), (σ_1² + σ_2²)I), which satisfies (α, α∆²/(2(σ_1² + σ_2²)))-RDP. We now show how this result also follows from Theorem 2. Given two datasets D ≃ D′ we write µ = M(D) = N(u, σ_1² I) and ν = M(D′) = N(v, σ_1² I) with ‖u − v‖ ≤ ∆. We take ω = N(w, σ_1² I) for some w to be determined later, and couple ω and µ through a translation τ = u − w, yielding a coupling π with p_π(x, y) ∝ exp(−‖x − w‖²/(2σ_1²)) I[y = x + τ] and a transport operator H_π with kernel h_π(x, y) = I[y = x + τ]. Plugging these into (1) we get

R_α(µK‖νK) ≤ α‖w − v‖²/(2σ_1²) + sup_{x∈R^d} R_α(K(x + τ)‖K(x)) = (α/2)(‖w − v‖²/σ_1² + ‖u − w‖²/σ_2²).

Finally, taking w = θu + (1 − θ)v with θ = (1 + σ_2²/σ_1²)^{−1} yields R_α(µK‖νK) ≤ α∆²/(2(σ_1² + σ_2²)).

Example 2 (Iterated Laplace). To illustrate the flexibility of this technique, we also apply it to get an amplification result for iterated Laplace noise, in which Laplace noise is added to the output of a Laplace mechanism. We begin by noting a negative result: there is no amplification in the (ε, 0)-DP regime.

Lemma 3. Let M(D) = Lap(f(D), λ_1) for some function f : D → R with global L1-sensitivity ∆, and let the Markov operator K be given by K(x) = Lap(x, λ_2). The post-processed mechanism K ∘ M does not achieve (ε, 0)-DP for any ε < ∆/max{λ_1, λ_2}.

Note that M achieves (∆/λ_1, 0)-DP and K(f(D)) achieves (∆/λ_2, 0)-DP. However, the iterated Laplace mechanism K ∘ M above still offers additional privacy in the relaxed RDP setting. An application of (1) allows us to identify some of this improvement. Recall from [23, Corollary 2] that M satisfies (α, (1/(α−1)) log g_α(∆/λ_1))-RDP with

g_α(z) = (α/(2α−1)) exp(z(α−1)) + ((α−1)/(2α−1)) exp(−zα).

As in Example 1, we take ω = Lap(w, λ_1) for some w to be determined later, and couple ω and µ through a translation τ = u − w. Through (1) we obtain

R_α(µK‖νK) ≤ (1/(α−1)) log(g_α(|w−v|/λ_1)) + sup_{x∈R} R_α(K(x+τ)‖K(x)) = (1/(α−1)) log(g_α(|w−v|/λ_1) g_α(|u−w|/λ_2)).

In the simple case where λ_1 = λ_2, an amplification result is observed from the log-convexity of g_α, since g_α(a)g_α(b) ≤ g_α(a+b). When λ_1 ≠ λ_2, certain values of w still result in amplification, but they depend nontrivially on α. However, we also observe that this improvement vanishes as α → ∞, since the necessary convexity also vanishes. In the limit, the lowest upper bound offered by (1) for R_∞ (which reduces to (ε, 0)-DP) matches the ∆/max{λ_1, λ_2} result of Lemma 3.

Example 3 (Lipschitz Kernel). As a warm-up for the results in Section 4.1, we now re-work Example 1 with a slightly more complex Markov operator. Suppose ψ is an L-Lipschitz map⁶ and let K(x) = N(ψ(x), σ_2² I). Taking M to be the Gaussian mechanism from Example 1, we will show that the post-processed mechanism K ∘ M satisfies (α, α∆²/(2σ_*²))-RDP with σ_*² = σ_1² + σ_2²/L². To prove this bound, we instantiate the notation from Example 1, and use the same coupling strategy to obtain

R_α(µK‖νK) ≤ (α/2)(‖w−v‖²/σ_1² + sup_{x∈R^d} ‖ψ(x+τ) − ψ(x)‖²/σ_2²) ≤ (α/2)(‖w−v‖²/σ_1² + L²‖u−w‖²/σ_2²),

where the second inequality uses the Lipschitz property. As before, the result follows from taking w = θu + (1−θ)v with θ = (1 + σ_2²/(L²σ_1²))^{−1}. This example shows that we get amplification (i.e. σ_*² > σ_1²) for any L < ∞ and σ_2 > 0, although the amount of amplification decreases as L grows. On the other hand, for L < 1 the amplification is stronger than just adding Gaussian noise (Example 1).

⁶That is, ‖ψ(x) − ψ(y)‖ ≤ L‖x − y‖ for any pair x, y.

4.1 Amplification by Iteration in Noisy Projected SGD with Strongly Convex Losses

Now we use Theorem 2 and the computations above to show that the proof of privacy amplification by iteration [17, Theorem 22] can be extended to explicitly track the Lipschitz coefficients in a “noisy iteration” algorithm. In particular, this allows us to show an exponential improvement on the rate of privacy amplification by iteration in Noisy SGD when the loss is strongly convex. To obtain this result we first provide an iterated version of Theorem 2 in R^d with Lipschitz Gaussian kernels. This version of the analysis introduces an explicit dependence on the W_∞ distances along an “interpolating” path between the initial distributions µ, ν ∈ P(R^d), which could later be optimized for different applications. In our view, this helps to clarify the intuition behind the previous analysis of amplification by iteration.

Theorem 4. Let α ≥ 1, µ, ν ∈ P(R^d) and let K ⊆ R^d be a convex set. Suppose K_1, ..., K_r ∈ K(R^d, R^d) are Markov operators where Y_i ∼ K_i(x) is obtained as⁷ Y_i = Π_K(ψ_i(x) + Z_i) with Z_i ∼ N(0, σ²I), where the maps ψ_i : K → R^d are L-Lipschitz for all i ∈ [r]. For any µ_0, µ_1, ..., µ_r ∈ P(R^d) with µ_0 = µ and µ_r = ν we have

R_α(µK_1···K_r ‖ νK_1···K_r) ≤ (αL²/(2σ²)) Σ_{i=1}^r L^{2(r−i)} W_∞(µ_i, µ_{i−1})².  (2)

Furthermore, if L ≤ 1 and W_∞(µ, ν) = ∆, then

R_α(µK_1···K_r ‖ νK_1···K_r) ≤ α∆²L^{r+1}/(2rσ²).  (3)

⁷Here Π_K(x) = argmin_{y∈K} ‖x − y‖ denotes the projection operator onto the convex set K ⊆ R^d.

Note how taking L = 1 in the bound above we obtain α∆²/(2rσ²) = O(1/r), which matches [17, Theorem 1]. On the other hand, for L strictly smaller than 1, the analysis above shows that the amplification rate is O(L^{r+1}/r) as a consequence of the maps ψ_i being strict contractions, i.e. ‖ψ_i(x) − ψ_i(y)‖ < ‖x − y‖. For L > 1 this result is not useful since the sum will diverge; however, the proof could easily be adapted to handle the case where each ψ_i is L_i-Lipschitz with some L_i > 1 and some L_i < 1.

We now apply this result to improve the per-person privacy guarantees of noisy projected SGD (Algorithm 1) in the case where the loss function is smooth and strongly convex.

Algorithm 1: Noisy Projected Stochastic Gradient Descent — NoisyProjSGD(D, ℓ, η, σ, ξ_0)
Input: Dataset D = (z_1, ..., z_n), loss function ℓ : K × D → R, learning rate η, noise parameter σ, initial distribution ξ_0 ∈ P(K)
  Sample x_0 ∼ ξ_0
  for i ∈ [n] do
    x_i ← Π_K(x_{i−1} − η(∇_x ℓ(x_{i−1}, z_i) + Z)) with Z ∼ N(0, σ²I)
  return x_n

A function f : K ⊆ R^d → R defined on a convex set is β-smooth if it is continuously differentiable and ∇f is β-Lipschitz, i.e., ‖∇f(x) − ∇f(y)‖ ≤ β‖x − y‖, and is ρ-strongly convex if the function g(x) = f(x) − (ρ/2)‖x‖² is convex. When we say that a loss function ℓ : K × D → R satisfies a property (e.g. smoothness) we mean the property is satisfied by ℓ(·, z) for all z ∈ D. Furthermore, we recall from [17] that a mechanism M : D^n → X satisfies (α, ϵ)-RDP at index i if R_α(M(D)‖M(D′)) ≤ ϵ holds for any pair of databases D and D′ differing in the i-th coordinate.

Theorem 5. Let ℓ : K × D → R be a C-Lipschitz, β-smooth, ρ-strongly convex loss function. If η ≤ 2/(β + ρ), then NoisyProjSGD(D, ℓ, η, σ, ξ_0) satisfies (α, αϵ_i)-RDP at index i, where ϵ_n = 2C²/σ² and

ϵ_i = (2C²/((n−i)σ²)) (1 − 2ηβρ/(β+ρ))^{(n−i+1)/2} for 1 ≤ i ≤ n−1.

Since [17, Theorem 23] shows that for smooth Lipschitz loss functions the guarantee at index i of NoisyProjSGD is given by ϵ_i = O(C²/((n−i)σ²)), our result provides an exponential improvement in the strongly convex case. This implies, for example, that using the technique in [17, Corollary 31] one can show that, in the strongly convex setting, running Θ(log(d)) additional iterations of NoisyProjSGD on public data is enough to attain (up to constant factors) the same optimization error as non-private SGD while providing privacy for all individuals.

5 Diffusion Mechanisms

Now we go beyond the analysis from the previous sections and simultaneously consider a family of Markov operators P = (P_t)_{t≥0} indexed by a continuous parameter t and satisfying the semigroup property P_t P_s = P_{t+s}. Such a P is called a Markov semigroup and can be used to define a family of output perturbation mechanisms M^f_t(D) = P_t(f(D)) which are closed under post-processing by P in the sense that P_s ∘ M^f_t = M^f_{t+s}. The semigroup property greatly simplifies the analysis of privacy amplification by post-processing since, for example, if we show that M^f_t satisfies (α, ϵ(t))-RDP, then this immediately provides RDP guarantees for any post-processing of M_t by any number of operators in P. The main result of this section provides such a privacy analysis for mechanisms arising from symmetric diffusion Markov semigroups in Euclidean space. We will show this class includes the well-known Gaussian mechanism, and also identify another interesting mechanism in this class arising from the Ornstein–Uhlenbeck diffusion process.

Roughly speaking, a diffusion Markov semigroup P = (P_t)_{t≥0} on R^d corresponds to the case where X_t ∼ P_t(x) defines a Markov process (X_t)_{t≥0} arising from a (time-homogeneous Itô) stochastic differential equation (SDE) of the form X_0 = x and dX_t = u(X_t)dt + v(X_t)dW_t, where W_t is a standard d-dimensional Wiener process, and the drift u : R^d → R^d and diffusion v : R^d → R^{d×d} coefficients satisfy appropriate regularity assumptions.
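The bounds in Theorem 1 are closed-form and cheap to evaluate. The sketch below (Python; the function names and parameter values are ours, purely illustrative) computes the amplified parameters for the Doeblin and ultra-mixing cases and checks the remarks above: both yield the same ε′ ≤ ε, ultra-mixing also shrinks δ, while for this choice of γ (above the threshold δe^ε/((1−δ)(e^ε−1))) the Doeblin bound increases δ.

```python
import math

def doeblin_amplification(eps, delta, gamma):
    # Theorem 1(3): gamma-Doeblin operator K applied to an (eps, delta)-DP mechanism.
    eps_p = math.log(1 + gamma * (math.exp(eps) - 1))
    delta_p = gamma * (1 - math.exp(eps_p - eps) * (1 - delta))
    return eps_p, delta_p

def ultra_mixing_amplification(eps, delta, gamma):
    # Theorem 1(4): gamma-ultra-mixing operator K.
    eps_p = math.log(1 + gamma * (math.exp(eps) - 1))
    delta_p = gamma * delta * math.exp(eps_p - eps)
    return eps_p, delta_p

eps, delta, gamma = 1.0, 1e-5, 0.1
e3, d3 = doeblin_amplification(eps, delta, gamma)
e4, d4 = ultra_mixing_amplification(eps, delta, gamma)
assert e3 == e4 < eps   # both share the subsampling-like epsilon amplification
assert d4 < delta < d3  # ultra-mixing shrinks delta; here Doeblin increases it
```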
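Example 1's optimization over the interpolation point w = θu + (1−θ)v can also be replayed numerically: sweeping θ in the coupling bound obtained from (1) recovers the closed-form guarantee of the composed Gaussian mechanism exactly at θ = (1 + σ_2²/σ_1²)^{−1}. A minimal sketch of this check (our own, with illustrative values; v1 and v2 stand for σ_1² and σ_2²):

```python
alpha, delta2 = 2.0, 1.0   # Renyi order alpha and squared sensitivity Delta^2
v1, v2 = 1.0, 4.0          # sigma_1^2 and sigma_2^2

def coupling_bound(theta):
    # RHS of (1) in Example 1 for w = theta*u + (1-theta)*v:
    # (alpha/2) * (||w-v||^2 / v1 + ||u-w||^2 / v2) with ||u-v||^2 = delta2.
    return (alpha / 2) * (theta**2 * delta2 / v1 + (1 - theta)**2 * delta2 / v2)

best = min(coupling_bound(i / 10000) for i in range(10001))
closed_form = alpha * delta2 / (2 * (v1 + v2))  # direct RDP of N(f(D), (v1+v2) I)
assert abs(best - closed_form) < 1e-6
# the optimum is attained at theta = (1 + v2/v1)^(-1) = v1/(v1+v2)
assert abs(coupling_bound(v1 / (v1 + v2)) - closed_form) < 1e-12
```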
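The rate in Theorem 4 can likewise be compared against the exact Rényi divergence in a simple special case: linear contractions ψ_i(x) = Lx with K = R^d (no projection), started from point masses at u and v with ‖u − v‖ = ∆. After r steps both chains are Gaussian with variance σ²(1 + L² + ··· + L^{2(r−1)}) and means L^r u and L^r v, so R_α is available in closed form. A minimal sketch (our own construction, not from the paper; parameter values illustrative):

```python
alpha, L, sigma2, delta = 2.0, 0.5, 1.0, 1.0  # sigma2 = sigma^2, delta = W_inf(mu, nu)

def exact_renyi(r):
    # Exact Renyi divergence between the two Gaussian chains after r steps
    # of x -> L*x + N(0, sigma^2 I).
    var_r = sigma2 * sum(L ** (2 * j) for j in range(r))
    return alpha * (L ** (2 * r)) * delta**2 / (2 * var_r)

def theorem4_bound(r):
    # Eq. (3): alpha * Delta^2 * L^(r+1) / (2 r sigma^2), valid for L <= 1.
    return alpha * delta**2 * L ** (r + 1) / (2 * r * sigma2)

for r in range(1, 20):
    assert exact_renyi(r) <= theorem4_bound(r)
# for L < 1 the bound decays geometrically in r, unlike the O(1/r) rate at L = 1
assert theorem4_bound(10) < theorem4_bound(5) / 10
```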
In this paper, however, we shall follow [1] and take a more abstract approach to Markov diffusion semigroups. We synthesize this approach by making a number of hypotheses on P that we discuss after introducing two core concepts from the theory of Markov semigroups.

In the context of a Markov semigroup P, the action of the Markov operators P_t on functions can be used to define the generator L of the semigroup as the operator given by Lf = (d/dt)(P_t f)|_{t=0}. In particular, for a diffusion semigroup arising from the SDE dX_t = u(X_t)dt + v(X_t)dW_t it is well-known that one can write the generator as Lf = ⟨u, ∇f⟩ + (1/2)⟨vv^⊤, H(f)⟩, where H(f) is the Hessian of f and the second term is a Frobenius inner product. Using the generator one also defines the so-called carré du champ operator Γ(f, g) = (1/2)(L(fg) − fLg − gLf). This operator is bilinear and non-negative in the sense that Γ(f) := Γ(f, f) ≥ 0. The carré du champ operator can be interpreted as a device to measure how far L is from being a first-order differential operator, since, e.g., if L = Σ_i a_i ∂/∂x_i then L(fg) = fLg + gLf and therefore Γ(f, g) = 0. The operator Γ can also be related to notions of curvature/contractivity of the underlying semigroup [1]. Below we illustrate these concepts with the example of Brownian motion; but first we formally state our assumptions on the semigroup.

Assumption 1. Suppose the Markov semigroup P = (P_t)_{t≥0} ⊂ K(R^d, R^d) satisfies the following:
(1) There exists a unique non-negative invariant measure λ; that is, λP_t = λ for all t ≥ 0. When the invariant measure is finite we normalize it to be a probability measure.
(2) The operators P_t admit a symmetric kernel p_t(x, y) = p_t(y, x) with respect to the invariant measure. Equivalently, the invariant measure λ is reversible for the Markov process X_t.
(3) The generator L satisfies the diffusion property Lφ(f) = φ′(f)Lf + φ″(f)Γ(f) for any differentiable φ : R → R. This is a chain rule property saying that L is a second-order differential operator without constant terms.

Example 4 (Brownian Motion). The simplest diffusion process is the Brownian motion given by the simple SDE dX_t = √2 dW_t, which corresponds to the semigroup P given by P_t(x) = N(x, 2t). In this case, the
mechanismMft(D)=Pt(f(D))isaGaussianmechanismwithvariance\u03c32=2tandthereforesatis\ufb01es(\u03b1,\u03b1\u220624t)-RDP,where\u2206istheglobalL2-sensitivityoff.Adirectsubstitutionwithu=0andv=\u221a2Ishowsthatthesemigroup\u2019sgeneratoristhestandardLaplacianinRd,L=\u22072=Pdi=1\u22022\u2202x2i,andasimplecalculationyieldstheexpression\u0393(f,g)=h\u2207f,\u2207giforthecarr\u00e9duchampoperator.NowwecheckthatPsatis\ufb01estheconditionsinAssumption1.First,werecallthatBrownianmotionhastheLebesguemeasure\u03bbonRdasitsuniqueinvariantmeasure;thishappenstobeanon-\ufb01nitemeasure.Withrespectto\u03bb,thesemigrouphaskernelpt(x,y)\u221dexp(\u2212kx\u2212yk24t)whichisclearlysymmetric.Finally,weusethechainruleforthegradienttoverifythatLf=\u22072\u03c6(f)=\u2207(\u03c60(f)\u2207f)=\u03c600(f)h\u2207f,\u2207fi+\u03c60(f)\u22072f=\u03c600(f)\u0393(f)+\u03c60(f)Lf.Nowweturntothemainresultofthissection,whichprovidesaprivacyanalysisforthediffusionmechanismMftassociatedwithanarbitrarysymmetricdiffusionMarkovsemigroup.Thekeyinsight8ThedetailsarenotrelevantheresinceweworkdirectlywithsemigroupssatisfyingAssumption1.Wereferto[25]fordetails.7\fbehindthisresultisthatthecarr\u00e9duchampoperatorofthesemigroupprovidesameasure\u039b(t)ofintrinsicsensitivityforthemechanismMftde\ufb01nedas:\u039b(t)=supD\u2019D0Z\u221et\u03baf(D),f(D0)(s)ds,where\u03bax,x0(t)=supy\u2208Rd\u0393(cid:18)logpt(x,y)pt(x0,y)(cid:19).Theorem6.Letf:Dn\u2192RdandletP=(Pt)t\u22650byaMarkovsemigrouponRdsatisfyingAssumption1.IfthemechanismMft(D)=Pt(f(D))hasintrinsicsensitivity\u039b(t),thenitsatis\ufb01es(\u03b1,\u03b1\u039b(t))-RDPforany\u03b1>1andt>0.Example5(BrownianMotion,Continued).ToillustratetheuseofTheorem6weshowhowitcanbeusedtorecovertheprivacyguaranteesoftheGaussianmechanismthroughitsconnectionwithBrownianmotion.WeletPbethesemigroupfromExample4andstartbyusing\u0393(f)=k\u2207fk2tocompute\u03bax,x0(t)asfollows:\u0393(cid:18)logpt(x,y)pt(x0,y)(cid:19)=(cid:13)(cid:13)(cid:13)(cid:13)\u2207y(cid:18)kx0\u2212yk2\u221
2kx\u2212yk24t(cid:19)(cid:13)(cid:13)(cid:13)(cid:13)2=kx\u2212x0k24t2.NowweuseR\u221et1s2ds=1tand\u22062=supD\u2019D0kf(D)\u2212f(D0)k2toseethatthemechanismassociatedwithPhasintrinsicsensitivity\u039b(t)=\u220624t,yieldingtheprivacyguaranteefromExample4.5.1TheOrnstein-UhlenbeckMechanismBeyondBrownianmotion,anotherwell-knowndiffusionprocessistheOrnstein-Uhlenbeckprocesswithparameters\u03b8,\u03c1>0givenbytheSDEdXt=\u2212\u03b8Xtdt+\u221a2\u03c1dWt.ThisdiffusionprocessisassociatewiththesemigroupP=(Pt)t\u22650givenbyPt(x)=N(e\u2212\u03b8tx,\u03c12\u03b8(1\u2212e\u22122\u03b8t)I).OneinterpretationofthisdiffusionprocessistothinkofXtasaBrownianmotionwithvariance\u03c12appliedtoameanreverting\ufb02owthatpullsaparticletowardstheoriginatarate\u03b8.Inparticular,themechanismMft(D)isgivenbyreleasinge\u2212\u03b8tf(D)+N(0,\u03c12\u03b8(1\u2212e\u22122\u03b8t)).Takingthelimitt\u2192\u221eoneseesthatthe(unique)invariantmeasureofPistheGaussiandistribution\u03bb=N(0,\u03c12\u03b8I).FromtheSDEcharacterizationofthisprocessitiseasytocheckthatitsgeneratorisLf=\u03c12\u22072f\u2212\u03b8hx,\u2207fiandtheassociatedcarr\u00e9duchampoperatoris\u0393(f,g)=\u03c12h\u2207f,\u2207gi.Thus,Psatis\ufb01esconditions(1)and(3)inAssumption1.TocheckthesymmetryconditionweapplyachangeofmeasuretotheGaussiandensity\u02dcpt(x,y)ofPtwithrespecttotheLebesguemeasuretogetitsdensityw.r.t.\u03bb:pt(x,y)=\u02dcpt(x,y)\u02dcp\u03bb(y)\u221dexp(cid:16)\u2212\u03b8ky\u2212e\u2212\u03b8txk22\u03c12(1\u2212e\u22122\u03b8t)(cid:17)exp(cid:16)\u2212\u03b8kyk22\u03c12(cid:17)=exp(cid:18)\u2212\u03b8kxk2\u22122e\u03b8thx,yi+kyk22\u03c12(e2\u03b8t\u22121)(cid:19),where\u02dcp\u03bbisthedensityof\u03bbw.r.t.theLebesguemeasure.Thus,Theorem6yieldsthefollowing.Corollary7.Letf:Dn\u2192RdhaveglobalL2-sensitivity\u2206andP=(Pt)t\u22650betheOrnstein-Uhlenbecksemigroupwithparameters\u03b8,\u03c1.Forany\u03b1>1andt>0themechanismMft(D)=Pt(f(D))satis\ufb01es(\u03b1,\u03b1\u039b(t))-RDPwith\u039b(t)=\u03b8\u220622\u03c12(e2\u03b8t\u2
2121).TheOrnstein-UhlenbeckmechanismisnotanunbiasedmechanismsinceE[Mft(D)]=e\u2212\u03b8tf(D).ThisbiasisthereasonwhytheprivacyguaranteeinCorollary7exhibitsarateO(e\u22122\u03b8t),while,forexample,theBrownianmotionmechanismonlyexhibitsarateO(t\u22121).Inparticular,theOrnstein-Uhlenbeckmechanismachievesitsprivacynotonlybyintroducingnoise,butalsobyshrinkingf(D)towardsadata-independentpoint(theorigininthiscase);thiseffectivelycorrespondstoreducingthesensitivityofffrom\u2206toe\u2212\u03b8t\u2206.Thisprovidesawaytotrade-offvarianceandbiasinthemean-squarederror(MSE)incurredbyprivatelyreleasingf(D)inasimilarwaythatcanbeachievedbypost-processingtheGaussianmechanismwhenf(D)isknowntobebounded.Toformalizethisresultwede\ufb01nethemeansquarederrorEOU(\u03b8,\u03c1,t)oftheOrnstein-Uhlenbeckmechanismwithparameters\u03b8,\u03c1attimet,whichisgivenby:EOU(\u03b8,\u03c1,t),E[kf(D)\u2212Mft(D)k2]=(1\u2212e\u2212\u03b8t)2kf(D)k2+d\u03c12\u03b8(1\u2212e\u22122\u03b8t).(4)8\fSimilarly,wede\ufb01neEGM(\u03b8,\u03c1,t)asthemeansquarederrorofaGaussianmechanismwiththesameprivacyguaranteesasMftwithparameters\u03b8,\u03c1.Inparticular,wehaveEGM(\u03b8,\u03c1,t)=d\u02dc\u03c32,where\u02dc\u03c32,\u03c12(e2\u03b8t\u22121)\u03b8(cf.Corollary7).Wealsonotethepost-processedGaussianmechanism(PGM)D7\u2192\u03b2(f(D)+N(0,\u02dc\u03c32I))whichmultipliestheoutputbyascalar\u03b2optimizedtominimizetheMSEundertheconditionkf(D)k\u2264RyieldserrorEPGM(\u03b8,\u03c1,t)\u2264EGM(\u03b8,\u03c1,t)(1+d\u02dc\u03c32R2)\u22121.Theorem8.Supposef:Dn\u2192RdhasglobalL2-sensitivity\u2206andsatis\ufb01essupDkf(D)k\u2264R.If\u03b8R2\u22644d\u03c12thenwehaveEOU(\u03b8,\u03c1,t)EGM(\u03b8,\u03c1,t)\u22641forallt\u22650andlimt\u2192\u221eEOU(\u03b8,\u03c1,t)EGM(\u03b8,\u03c1,t)=0.Inparticular,taking\u03b8=log(cid:16)1+d\u220622\u0001R2(cid:17)and\u03c12=\u03b8\u220622\u0001(e2\u03b8\u22121)with\u0001>0,themechanismMftsatis\ufb01es(\u03b1,\u03b1\u0001)-RDPattimet=1andwehaveEOU(\u03b8,\u03c1,1)EGM(\u03b8,\u03c1,1)\u2264(cid:16)
1+d\u220622\u0001R2(cid:17)\u22121.ThisresultnotonlyshowsthattheOrnstein-UhlenbeckmechanismisuniformlybetterthantheGaussianmechanismforanylevelofprivacy,butalsoshowsthatinthismechanismtheerroralwaysstaysboundedandcanattainthesameleveloferrorastheGaussianmechanismwithoptimalpost-processing.ToseethisnotethatwiththechoicesofparametersmadeinthesecondstatementgiveEGM(\u03b8,\u03c1,1)=d\u220622\u0001andthereforeEOU(\u03b8,\u03c1,1)\u2264d\u22062R22\u0001R2+d\u22062,whichbehaveslikeO(R2)with\u2206constantandeither\u0001\u21920ord\u2192\u221e.6ConclusionWehaveundertakenasystematicstudyofampli\ufb01cationbypost-processing.Ourresultsyieldimprovementsoverrecentworkonampli\ufb01cationbyiteration,andintroduceanewOrnstein-UhlenbeckmechanismwhichismoreaccuratethantheGaussianmechanism.Inthefutureitwouldbeinterestingtostudyapplicationsofampli\ufb01cationbypost-processing.OnepromisingapplicationisHierarchicalDifferentialPrivacy,whereinformationisreleasedunderincreasinglystrongprivacyconstraints(e.g.toarestrictedgroupwithinacompany,globallywithinacompany,and\ufb01nallytooutsideparties).AcknowledgementsMGwaspartiallysupportedbyNSFgrantCCF-1718220.References[1]DominiqueBakry,IvanGentil,andMichelLedoux.AnalysisandgeometryofMarkovdiffusionoperators,volume348.SpringerScience&BusinessMedia,2013.[2]BorjaBalle,GillesBarthe,andMarcoGaboardi.Privacyampli\ufb01cationbysubsampling:Tightanalysesviacouplingsanddivergences.InAdvancesinNeuralInformationProcessingSystems31:AnnualConferenceonNeuralInformationProcessingSystems2018,NeurIPS2018,3-8December2018,Montr\u00e9al,Canada.,pages6280\u20136290,2018.[3]BorjaBalle,JamesBell,Adri\u00e0Gasc\u00f3n,andKobbiNissim.Theprivacyblanketoftheshuf\ufb02emodel.CoRR,abs/1903.02837,2019.[4]GillesBartheandFedericoOlmedo.Beyonddifferentialprivacy:Compositiontheoremsandrelationallogicforf-divergencesbetweenprobabilisticprograms.InInternationalColloquiumonAutomata,Languages,andProgramming,pages49\u201360.Springer,2013.[5]AmosBeimel,HaiBrenner,ShivaPrasadKasiviswanath
an,andKobbiNissim.Boundsonthesamplecomplexityforprivatelearningandprivatedatarelease.Machinelearning,94(3):401\u2013437,2014.[6]AmosBeimel,KobbiNissim,andUriStemmer.Characterizingthesamplecomplexityofprivatelearners.InProceedingsofthe4thconferenceonInnovationsinTheoreticalComputerScience,pages97\u2013110.ACM,2013.[7]S\u00e9bastienBubeck.Convexoptimization:Algorithmsandcomplexity.FoundationsandTrendsR(cid:13)inMachineLearning,8(3-4):231\u2013357,2015.9\f[8]MarkBun,KobbiNissim,UriStemmer,andSalilVadhan.Differentiallyprivatereleaseandlearningofthresholdfunctions.InFoundationsofComputerScience(FOCS),2015IEEE56thAnnualSymposiumon,pages634\u2013649.IEEE,2015.[9]KamalikaChaudhuriandNinaMishra.Whenrandomsamplingpreservesprivacy.InAnnualInternationalCryptologyConference,pages198\u2013213.Springer,2006.[10]AlbertCheu,AdamD.Smith,JonathanUllman,DavidZeber,andMaximZhilyaev.Distributeddifferentialprivacyviashuf\ufb02ing.InAdvancesinCryptology-EUROCRYPT2019-38thAnnualInternationalConferenceontheTheoryandApplicationsofCryptographicTechniques,Darmstadt,Germany,May19-23,2019,Proceedings,PartI,pages375\u2013403,2019.[11]JoelECohen,YohIwasa,GhRautu,MaryBethRuskai,EugeneSeneta,andGhZbaganu.Relativeentropyundermappingsbystochasticmatrices.Linearalgebraanditsapplications,179:211\u2013235,1993.[12]PDelMoral,MLedoux,andLMiclo.OncontractionpropertiesofMarkovkernels.Probabilitytheoryandrelated\ufb01elds,126(3):395\u2013420,2003.[13]RolandLDobrushin.CentrallimittheoremfornonstationaryMarkovchains.I.TheoryofProbability&ItsApplications,1(1):65\u201380,1956.[14]W.Doeblin.Surlesproprietesasymptotiquesdemouvementsr\u00c9gisparcertainstypesdecha\u00cenessimples(suiteet\ufb01n).Bulletinmath\u00e9matiquedelaSoci\u00e9t\u00e9RoumainedesSciences,39(2):3\u201361,1937.[15]CynthiaDwork,FrankMcSherry,KobbiNissim,andAdamSmith.Calibratingnoisetosensitivityinprivatedataanalysis.InTheoryofcryptography,pages265\u2013284.Springer,2006.[16]\u00dalfarErlingsson,VitalyFeldman,IlyaMironov,AnanthRaghunathan,KunalTalw
ar,andAbhradeepThakurta.Ampli\ufb01cationbyshuf\ufb02ing:Fromlocaltocentraldifferentialprivacyviaanonymity.InProceedingsoftheThirtiethAnnualACM-SIAMSymposiumonDiscreteAlgorithms,pages2468\u20132479.SIAM,2019.[17]VitalyFeldman,IlyaMironov,KunalTalwar,andAbhradeepThakurta.Privacyampli\ufb01cationbyiteration.In2018IEEE59thAnnualSymposiumonFoundationsofComputerScience(FOCS),pages521\u2013532.IEEE,2018.[18]ShivaPrasadKasiviswanathan,HominKLee,KobbiNissim,SofyaRaskhodnikova,andAdamSmith.Whatcanwelearnprivately?SIAMJournalonComputing,40(3):793\u2013826,2011.[19]DavidALevinandYuvalPeres.Markovchainsandmixingtimes,volume107.AmericanMathematicalSoc.,2017.[20]NinghuiLi,WahbehQardaji,andDongSu.Onsampling,anonymization,anddifferentialpri-vacyor,k-anonymizationmeetsdifferentialprivacy.InProceedingsofthe7thACMSymposiumonInformation,ComputerandCommunicationsSecurity,pages32\u201333.ACM,2012.[21]TorgnyLindvall.Lecturesonthecouplingmethod.CourierCorporation,2002.[22]SeanPMeynandRichardLTweedie.Markovchainsandstochasticstability.SpringerScience&BusinessMedia,2012.[23]IlyaMironov.R\u00e9nyidifferentialprivacy.In30thIEEEComputerSecurityFoundationsSymposium,CSF2017,SantaBarbara,CA,USA,August21-25,2017,pages263\u2013275,2017.[24]EsaNummelin.GeneralirreducibleMarkovchainsandnon-negativeoperators,volume83.CambridgeUniversityPress,2004.[25]Bernt\u00d8ksendal.Stochasticdifferentialequations.InStochasticdifferentialequations,pages65\u201384.Springer,2003.[26]MaximRaginsky.Strongdataprocessinginequalitiesand\u03a6-Sobolevinequalitiesfordiscretechannels.IEEETransactionsonInformationTheory,62(6):3355\u20133389,2016.10\f[27]Yu-XiangWang,BorjaBalle,andShivaKasiviswanathan.Subsampledr\u00e9nyidifferentialprivacyandanalyticalmomentsaccountant.InProceedingsofthe22ndInternationalConferenceonArti\ufb01cialIntelligenceandStatistics(AISTATS),2019.11\f", "award": [], "sourceid": 7289, "authors": [{"given_name": "Borja", "family_name": "Balle", "institution": "Amazon"}, {"given_name": "Gilles", 
"family_name": "Barthe", "institution": "Max Planck Institute"}, {"given_name": "Marco", "family_name": "Gaboardi", "institution": "Univeristy at Buffalo"}, {"given_name": "Joseph", "family_name": "Geumlek", "institution": "University of California, San Diego"}]}
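As a quick numeric sanity check of the ratio bound in Theorem 8, the following sketch (not from the paper; parameter values and function names are ours) evaluates Eq. (4) against the MSE $d\tilde{\sigma}^2$ of the matching Gaussian mechanism, at the worst case $\|f(D)\| = R$ and under the hypothesis $\theta R^2 \leq 4 d \rho^2$:

```python
import math

def mse_ou(theta, rho, t, d, norm_f):
    # Eq. (4): squared-bias term plus variance term of the OU mechanism.
    return ((1 - math.exp(-theta * t)) ** 2 * norm_f ** 2
            + (d * rho ** 2 / theta) * (1 - math.exp(-2 * theta * t)))

def mse_gm(theta, rho, t, d):
    # d * sigma_tilde^2 with sigma_tilde^2 = rho^2 (e^{2 theta t} - 1) / theta.
    return d * rho ** 2 * (math.exp(2 * theta * t) - 1) / theta

# Illustrative parameters satisfying theta * R^2 <= 4 * d * rho^2.
d, theta, rho, R = 5, 0.5, 1.0, 2.0
assert theta * R ** 2 <= 4 * d * rho ** 2

ratios = [mse_ou(theta, rho, t, d, R) / mse_gm(theta, rho, t, d)
          for t in (0.1, 1.0, 5.0, 20.0)]
print(ratios)  # stays <= 1 and decays towards 0 as t grows
```

As $t$ grows, $\mathcal{E}_{OU}$ stays bounded by $R^2 + d\rho^2/\theta$ while $\mathcal{E}_{GM}$ grows like $e^{2\theta t}$, so the ratio decays to zero, matching the limit in the theorem.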