{"title": "Learning Auctions with Robust Incentive Guarantees", "book": "Advances in Neural Information Processing Systems", "page_first": 11591, "page_last": 11601, "abstract": "We study the problem of learning Bayesian-optimal revenue-maximizing auctions. The classical approach to maximizing revenue requires a known prior distribution on the demand of the bidders, although recent work has shown how to replace the knowledge of a prior distribution with a polynomial sample. However, in an online setting, when buyers can participate in multiple rounds, standard learning techniques are susceptible to \\emph{strategic overfitting}: bidders can improve their long-term wellbeing by manipulating the trajectory of the learning algorithm in earlier rounds. For example, they may be able to strategically adjust their behavior in earlier rounds to achieve lower, more favorable future prices. Such non-truthful behavior can hinder learning and harm revenue. In this paper, we combine tools from differential privacy, mechanism design, and sample complexity to give a repeated auction that (1) learns bidder demand from past data, (2) is approximately revenue-optimal, and (3) strategically robust, as it incentivizes bidders to behave truthfully.", "full_text": "Learning Auctions with Robust Incentive Guarantees\n\nJacob Abernethy\n\nGeorgia Tech\n\nprof@gatech.edu\n\nRachel Cummings\n\nGeorgia Tech\n\nBhuvesh Kumar\n\nGeorgia Tech\n\nrachelc@gatech.edu\n\nbhuvesh@gatech.edu\n\nJamie Morgenstern\n\nGeorgia Tech\n\nSamuel Taggart\nOberlin College\n\njamiemmt.cs@gatech.edu\n\nsam.taggart@oberlin.edu\n\nAbstract\n\nWe study the problem of learning Bayesian-optimal revenue-maximizing auctions.\nThe classical approach to maximizing revenue requires a known prior distribution\non the demand of the bidders, although recent work has shown how to replace\nthe knowledge of a prior distribution with a polynomial sample. 
However, in an online setting, when buyers can participate in multiple rounds, standard learning techniques are susceptible to strategic overfitting: bidders can improve their long-term wellbeing by manipulating the trajectory of the learning algorithm through bidding. For example, they may be able to strategically adjust their behavior in earlier rounds to achieve lower, more favorable future prices. Such non-truthful behavior can hinder learning and harm revenue. In this paper, we combine tools from differential privacy, mechanism design, and sample complexity to give a repeated auction that (1) learns bidder demand from past data, (2) is approximately revenue-optimal, and (3) is strategically robust, as it incentivizes bidders to behave truthfully.\n\n1 Introduction\n\nWhen we observe prices in market settings (stock exchanges, farmers' markets, ad auctions) we understand that these prices were not chosen arbitrarily. Rather, the seller (auctioneer, market maker, etc.) selected these prices after observing a stream of previous transactions, which provide relevant information about the demands of buyers that is key to maximizing income as well as managing available inventory. The process of setting prices from a growing database of previous sales is fundamentally a learning problem, with all of the typical tradeoffs (bias versus variance, and so on). In the case of repeated auctions, however, there is one additional challenge: market participants are often quite aware of the underlying learning procedures employed by the auctioneer and can seek to benefit using deceptive bidding strategies. 
Buyers, in other words, can aim to induce overfitting, introducing additional hurdles to the learning problem at hand.\nUnder Bayesian assumptions, and in a batch setting where agents act only once, auction pricing has been well understood since the work of Myerson [39], who characterized the revenue-optimal scheme as a function of the prior distribution of the bidders' values. Frequentist alternatives to this model have been introduced in recent years [21, 13, 4, 18, 37, 5, 38, 25, 17, 23], with the goal of designing auctions with good revenue guarantees if one does not have a prior but instead is given only samples from the underlying distribution. These methods, however, still imagine only a one-shot mechanism and are not robust to multi-round strategic behavior of bidders.\nThis paper studies the design of multi-round auction-learning algorithms that exhibit theoretical guarantees limiting a buyer's ability to manipulate the mechanism towards their own benefit.\n\n33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.\n\nOur results aim to nudge the development of optimal auctions closer to the realistic environments where such mechanisms are deployed. We employ tools from differential privacy as our core technique to control the impact of any individual buyer's strategy on her utility in future participation. A differentially private mechanism ensures that the output of a computation has only a small dependence on any one input data point. Privacy has been previously used as a tool to achieve truthfulness in a variety of game-theoretic environments [14], including mechanism design [33, 40], mediated games [29, 16], and market design [15, 28, 44, 12]. Our seller's learning algorithm is differentially private with respect to bid data, which limits the effect of each player's bid on future choices of single-round auctions, thus disentangling incentives across rounds. 
In this sense we use differential privacy not as a tool for information security but instead for robustness; this in turn yields the desired incentive guarantees.\nOur two main results are the first computationally tractable algorithms for learning nearly-optimal Bayesian revenue-maximizing auctions with an approximate truthfulness guarantee. We first give a learning algorithm for which there exists an approximate equilibrium in which all bidders report their true values. Along the way to this result, we provide several useful technical lemmas for comparing the revenue of mechanisms on two similar distributions, and for comparing the revenue of two similar mechanisms on any fixed distribution. These may find applications of independent interest beyond this work. Second, under an assumption limiting changes in buyer behavior from round to round, we construct a mechanism for which there is an exact equilibrium where every bidder bids within a small constant of their value, which also achieves a nearly optimal revenue guarantee. A mechanism with this guarantee is substantially more complex to achieve, requiring arguments about what bidders might learn about each other across rounds.\n\nRelated Work The classical Myerson auction [39] maximizes revenue when selling a single item to buyers drawn from a fixed, known distribution. More recent work investigated how to maximize revenue when that distribution is unknown but some small number of samples are available [21, 13, 4, 18, 37, 5, 38, 25, 17, 23] (or in an online model [9]). This line of work assumes every buyer in the sample reported their values truthfully, rather than manipulating their reported data. If each buyer appears in at most one auction, then they will not have an incentive to misreport their bids. 
However, strategic buyers participating repeatedly in these auctions may have an incentive to manipulate their behavior so as to guarantee themselves more utility in future rounds. We extend this line of work by assuming each buyer can participate in several auctions, and so we must analyze bidders' incentives across rounds if we hope to learn a good auction from their past bidding behavior. A related line of work includes dynamic auctions [42, 3, 36, 31], where buyers strategize about how their behavior today affects the seller's behavior in the future, and [8], where the buyer uses a no-regret learning algorithm to bid across rounds.\nOur results are related to work on iteratively learning prices [7, 35, 10, 41], although these results do not consider multiple appearances of bidders across rounds. Most closely related to our work is that of Liu et al. [30], which assumes buyers may appear more than once, and finds no-regret posted prices or anonymous reserves. We leverage several of their novel ideas, such as maintaining a differentially private internal state to guarantee approximate truthfulness. Further, our work optimizes over a substantially more complicated space of mechanisms.\nWith repeated appearances of each buyer, our auction-learning problem comes to resemble dynamic mechanism design. We therefore review some of the relevant literature. A truthful mechanism is given in [6] that exactly maximizes social welfare in a dynamic environment, and [27, 43] extended this mechanism to maximize revenue. In contrast, our mechanism approximately maximizes revenue in a dynamic environment with much looser assumptions on buyers' value distributions, but compares to the weaker per-round benchmark of the optimal single-shot revenue. Epasto et al. [22] take a similar approach and propose a dynamic mechanism for buyers who strategize across rounds under a large-market assumption. 
In contrast to our proposed mechanism, theirs will not generally run in polynomial time. Additionally, we focus on mechanisms which explicitly limit an agent's ability to manipulate them.\n\n2 Model and Preliminaries\n\nThe model We consider a T-round auction, where a seller sells a supply of J identical items to n unit-demand bidders each round. For each round t and population i, a value v_{i,t} is sampled from a fixed distribution D_i, representing the amount the bidder is willing to pay for the item. We let D = D_1 x ... x D_n denote the product distribution of value distributions, and we use v to denote a vector of values sampled from this distribution. Further, we let v_{-i} denote v with the i-th element removed, and use (v'_i, v_{-i}) to denote the same vector with v'_i replacing the i-th element.\nWe consider a setting similar to Liu et al. [30] where a bidder from any population may appear several times over the course of the T rounds, drawing a fresh value each time.1 In this setting, bidders may have an incentive to misreport their values in order to change the mechanism in future rounds, and their potential reward for doing so depends on the number of future rounds in which they expect to participate. Amin et al. [1] show that very little can be done when a bidder participates in every round, so we assume this cannot occur. Formally:\nAssumption 1. No bidder participates in more than k rounds of the T-round auction.\nMechanism Design Basics One can view a mechanism (auction) M := (x, p) as having two components: (a) a possibly-randomized allocation rule x : V^n -> X, which takes in a vector of values (bids) v and returns a feasible allocation of the items, where x_i(v) is 1 if i receives the item and 0 otherwise; and (b) a payment rule p : V^n -> R^n, which takes v and outputs a vector of payments demanded of each player. 
Assuming, for the moment, that bidders bid their true values v, we can define the expected revenue of a mechanism M as the expectation of the payments received,\nRev(M; D) := E_{v~D}[sum_{i=1}^n p_i(v)].\nWe make the standard assumption that the participants have quasi-linear utility: for a vector of bids v' (which may not necessarily match the values v), a bidder's utility for allocation x_i(v') and payment p_i(v') is\nu_i(x, p, v') = v_i * x_i(v') - p_i(v').\nWe may now introduce the notion of a truthful mechanism. A mechanism M := (x, p) is deemed truthful if, given a vector of true values v and some arbitrary vector of bids v', each agent receives no less utility bidding v_i rather than v'_i; that is, for every i, it must hold that u_i(x, p, v') <= u_i(x, p, (v_i, v'_{-i})).\nLet us now recall a classical result in Bayesian-optimal mechanism design when the seller's goal is to maximize revenue. Myerson [39] essentially fully characterized the solution in this setting. The interested reader can learn more in Hartline [24]; we briefly review these results here in two pieces. The first piece states that payments in truthful mechanisms essentially depend solely on the allocation function.\nTheorem 2 (Payment Identity, Myerson [39]). A mechanism is truthful if and only if it has a monotone allocation rule and payments which for all valuation profiles v satisfy\np_i(v) = v_i * x_i(v_i, v_{-i}) - integral_0^{v_i} x_i(u, v_{-i}) du + p_i(0, v_{-i}).\nThe second key result is that for truthful mechanisms, the expected revenue can be written in terms of welfare in a remapped virtual value space.\nTheorem 3 (Myerson [39]). 
For any truthful mechanism M = (x, p) with values distributed according to D, the expected revenue from player i can be written as E_{v~D}[phi_i(v_i) x_i(v)], where phi_i(v_i) is the virtual value, given by phi_i(v_i) = v_i - (1 - F_i(v_i))/f_i(v_i). So, Rev(M; D) = E_{v~D}[sum_i phi_i(v_i) x_i(v)].\nWe will use the notation M*_D to denote the revenue-optimal mechanism for distribution D; Myerson provides a precise construction of this auction.\nDefinition 4 (Myerson's Auction). Fixing a prior distribution D, given a value profile v, Myerson's revenue-optimal mechanism M*_D calculates virtual values phi_i(v_i) = v_i - (1 - F_{D_i}(v_i))/f_{D_i}(v_i) and (a) selects the feasible allocation which maximizes virtual welfare according to the virtual values and (b) charges payments according to the Payment Identity of Theorem 2.\n1Our results can be shown to hold when values are not drawn fresh, in which case D_i is the distribution of drawn values, taking into account the process by which bidders are redrawn from the population.\nTruthfulness and Dynamic Equilibrium The mechanism design preliminaries discussed previously are for one-shot games, where players do not observe past actions of others and adjust their strategies accordingly. Turning now to multi-round play, we need to expand our notion of player behavior and strategy. We will now assume that a strategy for each bidder maps the history of observed actions to contingency plans over actions in the current and future rounds: we assume agents observe their own outcomes x^t_i and p^t_i in rounds in which they participate, but not the full historical data used by the designer to produce the mechanism each round. 
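To make Myerson's construction (Theorem 3 and Definition 4) concrete, here is a minimal sketch, ours rather than the paper's, for the simplest case: one item and i.i.d. Uniform(0, h) bidders, where the virtual value works out to phi(v) = 2v - h and the reserve to h/2.

```python
def virtual_value(v, h):
    # For Uniform(0, h): F(v) = v/h and f(v) = 1/h, so
    # phi(v) = v - (1 - F(v)) / f(v) = 2v - h.
    return 2 * v - h

def myerson_single_item(bids, h):
    """Allocate to the bidder with the highest virtual value, if nonnegative;
    charge the threshold bid implied by the Payment Identity (Theorem 2)."""
    phis = [virtual_value(b, h) for b in bids]
    winner = max(range(len(bids)), key=lambda i: phis[i])
    if phis[winner] < 0:
        return None, 0.0  # no sale: every virtual value is negative
    reserve = h / 2  # the value at which phi crosses zero
    competing = [b for i, b in enumerate(bids) if i != winner]
    threshold = max([reserve] + competing)
    return winner, threshold
```

In the i.i.d. regular case sketched here, Myerson's auction reduces to a second-price auction with reserve h/2; for asymmetric or irregular distributions the allocation and payment rules are more involved.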
A history h^t_i in round t for each agent i then consists of a bid, an allocation, and a payment for each round the agent has participated in. Given a history h^t_i in the current round t for agent i, we denote agent i's strategy as sigma^t_i(v^t_i; h^t_i). We suppress the dependence on the history when clear from context.\nNote that because agents do not observe the historical bids of others, they must form probabilistic beliefs about these bids, and we assume these beliefs are Bayesian in nature. Denote by mu^t_i(h^t_i) the beliefs of agent i in round t after observing history h^t_i. A profile of strategies is an equilibrium for this game if in every round t, for every agent i and every history h^t_i that agent i might have observed previously, agent i's strategy maximizes their expected total utility over the current and future rounds; the expectation is taken over the randomness of agent i's beliefs as well as the future values of all agents. Formally, let U^t_i(sigma, v^t_i, h^t_i) denote the total expected utility of agent i in rounds t, t + 1, . . . , T given a value of v^t_i in the current round and an observed history h^t_i, playing strategy sigma.\nDefinition 5 (Perfect Bayesian Equilibrium (PBE)). A profile of strategies sigma is an eta-approximate Perfect Bayesian equilibrium if for every agent i, round t, history h^t_i for agent i in round t, and value v^t_i for i in round t, the strategy sigma^t_i(v^t_i; h^t_i) approximately maximizes agent i's total expected utility from future rounds up to an additive eta. That is, U^t_i(sigma, v^t_i, h^t_i) >= U^t_i(sigma', v^t_i, h^t_i) - eta for every alternate strategy sigma'. 
If eta = 0, we say that sigma is an exact Perfect Bayesian Equilibrium.\nIn other words, a PBE is a strategy sigma such that, for every bidder i and for every history of the game, if all other bidders besides i behave according to the strategy sigma, then playing sigma_i is approximately utility-maximizing behavior for bidder i.\nDefinition 6 (eta-utility-approximate BIC). A mechanism is eta-utility-approximately Bayesian incentive compatible if the strategy profile where every agent bids truthfully in every history is an eta-approximate Perfect Bayesian equilibrium.\nWe also consider a more robust notion of incentive compatibility, where there exists an exact equilibrium with each agent bidding eta-close to her true value, a notion also used in Liu et al. [30].\nDefinition 7 (eta-bid-approximate BIC). A mechanism is eta-bid-approximate BIC if there exists an exact PBE where each bidder bids within eta of their value in every history.\nDefinition 6 guarantees that all bidders bidding truthfully in all rounds is an (approximate) Bayes-Nash equilibrium (BNE). In proving a mechanism utility-approximate BIC, one therefore may assume bidders report truthfully in future rounds. Consequently, the only impact an agent's bid has on their future utility is through its impact on future mechanisms. Definition 7 guarantees the existence of an exact equilibrium in which all bidders bid within eta of their value. 
Therefore, mechanisms with this guarantee will need to ensure bidders do not change their behavior in later rounds in response to other bidders' behavior in earlier rounds.\n\nDifferential Privacy Background We now provide some basics on differential privacy, our main technique for guaranteeing approximate truthfulness in equilibrium. We refer to a database Z in Z^n as a collection of data from n individuals, and we say that two databases are neighboring if they differ in at most one entry.\nDefinition 8 (Differential Privacy [19]). An algorithm (mapping) A : Z^n -> R is (eps, delta)-differentially private if for neighboring databases Z, Z' in Z^n and subsets of possible outputs S in R, we have P[A(Z) in S] <= exp(eps) P[A(Z') in S] + delta.\nThe parameter eps quantifies the algorithm's privacy guarantee; smaller eps corresponds to stronger privacy. A key property of differential privacy is that it is robust to post-processing.\nLemma 9 (Post-processing [19]). Let A : Z^n -> R be an (eps, delta)-differentially private algorithm and let f : R -> R' be a random function. Then f o A : Z^n -> R' is also (eps, delta)-differentially private.\nWe need a more precise notion of privacy when multiple agents are involved, each receiving different information. For notation, we say two databases are i-neighbors if they differ only in the i-th entry. Also, let A(Z)_{-i} denote the vector of outputs to all players except player i.\nDefinition 10 (Joint Differential Privacy [29]). 
An algorithm A : Z^n -> R^n is (eps, delta)-jointly differentially private if for every i in [n], every pair of i-neighbors Z, Z' in Z^n, and every S in R^{n-1},\nP[A(Z)_{-i} in S] <= exp(eps) P[A(Z')_{-i} in S] + delta.\nFinally, we can reason about the joint differential privacy of a mechanism decomposed into a public sanitized broadcast, i.e. as if on a "billboard," and a private non-sanitized portion. The following lemma shows that privacy is still preserved under such a decomposition.\nLemma 11 (Billboard Lemma [26]). Suppose A : Z^n -> R is (eps, delta)-differentially private. Consider any collection of functions f_i : Z_i x R -> R', for i in [n], where Z_i is the portion of the database containing i's data. Then the composition {f_i(Pi_i(Z), A(Z))} is (eps, delta)-jointly differentially private, where Pi_i : Z -> Z_i is the projection to i's data.\nThe final tools we borrow from differential privacy are the exponential mechanism [34] and the ability to maintain a histogram estimate of values which arrive one at a time. The primary technique for the latter involves data structures known as tree-based aggregations [20, 11]. This protocol is a differentially private method for maintaining cumulative sums: at any round t <= T it can return an estimate of the number of elements arriving prior to round t, and the entire execution is differentially private. We provide more details about our instances of these algorithms in Appendices B.1.3 and A.1.1.\n\n3 Utility-Approximate Bayesian Incentive Compatibility\n\nIn this section, we give an online algorithm (Algorithm 1) for learning the optimal auction which is utility-approximate BIC. 
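Before turning to the algorithm, the two most basic tools just reviewed, Laplace noise for a counting query (the standard mechanism behind Definition 8) and post-processing (Lemma 9), can be sketched concretely. The function names, binning, and the simplified sensitivity accounting below are ours, and the noise source is injectable so the sketch can be checked deterministically:

```python
import random

def laplace_histogram(bids, bins, epsilon, rng=None):
    """Noisy histogram of bids: each bid lands in one bin, so under
    add/remove neighbors each count has sensitivity 1 and Laplace(1/epsilon)
    noise per bin gives (epsilon, 0)-DP (a simplified accounting)."""
    rng = rng or random.Random()
    counts = [0.0] * len(bins)
    for b in bids:
        for j, (lo, hi) in enumerate(bins):
            if lo <= b < hi:
                counts[j] += 1.0
                break
    scale = 1.0 / epsilon
    # Laplace(scale) drawn as the difference of two Exponential(1/scale) draws
    lap = lambda: rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return [c + lap() for c in counts]

def posted_price_from(noisy_counts, bins):
    # Post-processing (Lemma 9): any function of the private histogram is
    # private "for free" -- e.g. post the left endpoint of the fullest bin.
    j = max(range(len(noisy_counts)), key=lambda k: noisy_counts[k])
    return bins[j][0]
```

The point of the second function is the one the paper leans on: once the histogram is released privately, choosing future auctions from it costs no additional privacy.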
The main idea is to use differential privacy to explicitly control the amount of information the auctioneer takes forward from round t to later rounds. We do so by maintaining a differentially private estimate H'_{i,t} of each empirical bid distribution, and choosing future auctions based only on this differentially private estimate. Thus, from the perspective of any bidder, her behavior in round t has very little chance of affecting any of the auctions selected in later rounds. In round t, we run Myerson's mechanism with prior H'_{i,t} to compute allocations and payments. Thus the one-shot mechanism in round t is exactly incentive compatible with respect to the current round. The differentially private subroutine used to compute the estimate H'_{i,t} is described in full detail in Appendix A, with privacy, truthfulness, and revenue guarantees presented in Section 3.1.\n\nAlgorithm 1: Utility-Approximate BIC Online Auction\nParameters: discretization beta, privacy eps, upper bound on support h, num. of rounds T\nInitialize: H'_{i,0} <- Uniform(0, h) for i = 1, ..., n\nfor t = 1, ..., T do\nReceive bid profile v_t = (v_{1,t}, ..., v_{n,t}), rounded down to integer multiples of beta\nRun Myerson (Def. 4) with H'_{t-1} as prior and v_t as bids for allocations/payments\nfor i = 1, ..., n do\nUpdate H'_{i,t} via two-fold tree aggregation (Algorithm 3), giving as input v_{i,t}\nend\nend\n\n3.1 Privacy, Truthfulness, and Revenue Guarantees\n\nWe now prove that the learning subroutine of Algorithm 1 satisfies differential privacy (Theorem 12), use this to prove the mechanism is utility-approximate BIC (Theorem 13), and show that Algorithm 1 achieves an o(1) additive approximation to the optimal revenue (Theorem 14).\nTheorem 12. 
The stream of estimates {H'_t}_{t=1}^T maintained by Algorithm 1 is (eps, eps/T)-differentially private with respect to the stream of input bids {v_t}_{t=1}^T.\nWe emphasize that Theorem 12 does not claim that Algorithm 1 is itself differentially private; it only states that the procedure rests on a differentially private subroutine. This distinction is critical: our algorithm is not differentially private in its selection of allocations and payments in round t. However, the information the mechanism carries forward (namely, the estimated empirical distribution) is maintained in a differentially private manner. This is sufficient for guaranteeing that bidders' behavior in round t does not significantly affect which auctions are selected in later rounds. This will allow us to prove a utility-approximate BIC guarantee, but will not be sufficiently strong to argue about bid-approximate BIC. For that, we will need to additionally ensure that the allocations in round t are differentially private and the payments are jointly differentially private; see Section 4.\nWe now turn our attention to proving a guarantee on the truthfulness of Algorithm 1, which will lean heavily on the privacy guarantee given in Theorem 12. We note that if our mechanism were (eps, 0)-differentially private, then a result of [33] would apply directly, stating that any (eps, 0)-DP mechanism is 2eps-dominant-strategy incentive compatible.\nTwo issues arise if one were to try this approach in our setting. First, the entire mechanism is not differentially private, as discussed above. A bidder i's behavior might have significant impact on other bidders' allocations and payments, and those bidders may as a result choose to behave differently in later rounds based on that information. 
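The tree-based aggregation behind the privately maintained estimates (Algorithm 3 in the paper) can be illustrated by the standard streaming binary counter, in which each arrival touches at most about log2(T) noisy partial sums; the code below is our simplified single-counter sketch, not the paper's two-fold variant, again with an injectable noise source:

```python
import math
import random

def private_counter(stream, epsilon, noise=None):
    """Release a noisy running sum after each arrival. Position t is covered
    by at most log2(T)+1 dyadic blocks, so each element influences only that
    many noisy nodes (the standard binary-mechanism accounting, simplified)."""
    T = len(stream)
    L = max(1, math.ceil(math.log2(T + 1)))
    if noise is None:
        rng = random.Random(0)
        scale = L / epsilon  # each element touches ~L noisy nodes
        noise = lambda: rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    alpha = [0.0] * (L + 1)      # exact partial sums, one per level
    alpha_hat = [0.0] * (L + 1)  # their noisy releases
    outputs = []
    for t in range(1, T + 1):
        x = stream[t - 1]
        i = (t & -t).bit_length() - 1  # lowest set bit of t
        # close out levels below i into a new noisy node at level i
        alpha[i] = sum(alpha[j] for j in range(i)) + x
        for j in range(i):
            alpha[j] = alpha_hat[j] = 0.0
        alpha_hat[i] = alpha[i] + noise()
        # the prefix sum up to t is the sum of nodes on t's binary expansion
        est = sum(alpha_hat[j] for j in range(L + 1) if (t >> j) & 1)
        outputs.append(est)
    return outputs
```

With the noise switched off the counter returns the exact prefix sums, which is a convenient sanity check on the tree bookkeeping.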
Thus we relax to the weaker incentive guarantee of utility-approximate BIC, avoiding the issue of other bidders behaving differently in response to activity from earlier rounds. Second, the stream of estimates maintained by our mechanism is (eps, delta)-differentially private for delta = eps/T > 0, and not (eps, 0)-differentially private, which would be necessary for the result of [33] to hold.\nTheorem 13. Algorithm 1 is kh*eps*(2 + 1/T)-utility-approximate BIC when eps < 1.\nFinally, we consider the revenue-optimality of our proposed mechanism. Our revenue guarantees rely on tools developed and described in Appendix A.2. Recall that we use Rev(M; D) to denote the expected revenue generated by the mechanism M on a value distribution D, and that D and D' respectively denote the joint distributions of true values and of true values rounded down to the nearest multiple of beta. Let M*_{H'_t}, M*_{D'}, and M*_D be the truly revenue-optimal mechanisms for the distributions H'_t, D', and D, respectively. In each round of our mechanism, we get a sample from D' and run Myerson's auction with H'_t as the prior; that is, we run M*_{H'_t}. We use Rev(M*_{H'_t}; D') to denote the expected revenue we achieve in round t. The bidders' true values are drawn from D, and we use Rev(M*_D; D) to denote the optimal expected revenue in any round. In this section, we will compare the revenue Rev(M*_{H'_t}; D') of our mechanism to the optimal revenue Rev(M*_D; D).\nThe main result of this section, Theorem 14, bounds the difference between our average expected revenue, (1/T) sum_{t=1}^T Rev(M*_{H'_t}; D'), and that of the optimum. We show that over T rounds, with high probability the average expected revenue is within an additive o(1) error of the optimal.\nTheorem 14. 
With probability at least 1 - alpha, Algorithm 1 satisfies (1/T) sum_{t=1}^T Rev(M*_{H'_t}; D') >= Rev(M*_D; D) - beta*J - 4hn^2 * O~(1/sqrt(T) + 1/(T*eps)) for regular distributions D and eps < 1.\nProof. We start by instantiating Lemma 34 for every round t with failure probability alpha/T. Taking a union bound over all T rounds and summing over t ensures that with probability 1 - alpha, (1/T) sum_{t=1}^T Rev(M*_{H'_t}; D') >= Rev(M*_D; D) - beta*J - (1/T) sum_{t=1}^T gamma_t, where gamma_t is the per-round error term of Lemma 34 (with its alpha set to alpha/T here): a sampling-error term of order 2hn sqrt(2 log(nT/alpha)) / sqrt(2t), plus a privacy-noise term of order sigma/t up to logarithmic factors, with sigma = O~(sqrt(log T * log(2hnT/(beta*alpha)) * log(h/beta)) / eps).\nIn the remainder of the proof, we bound (1/T) sum_{t=1}^T gamma_t. The first step uses the facts that sum_{t=1}^T 1/sqrt(t) <= 2 sqrt(T) and sum_{t=1}^T 1/t = H_T <= log T + 1 <= 2 log T. Plugging in the expression for sigma and combining terms then gives (1/T) sum_{t=1}^T gamma_t = 4hn^2 * O~(1/sqrt(T) + 1/(T*eps)).\nHence, w.p. >= 1 - alpha, (1/T) sum_{t=1}^T Rev(M*_{H'_t}; D') >= Rev(M*_D; D) - beta*J - 4hn^2 * O~(1/sqrt(T) + 1/(T*eps)).\nNotice that this statement says that if we set beta, the discretization parameter, to be o(1) in terms of T, then the revenue one can earn is a (1 - o(1))-approximation to the optimal revenue.\n\n4 Bid-Approximate Bayesian Incentive Compatibility\n\nWe now describe an algorithm for "training" a mechanism to achieve nearly optimal revenue in an exact equilibrium, where each bidder bids close to their true value. This contrasts with the result in the prior section, which shows that bidders bidding their exact true values is an approximate equilibrium. While the equilibrium we describe in this section is not quite a truthful one, we can compare its revenue with the revenue of the optimal mechanism facing truthful bids.\nThe primary technical meat of this section has two parts. First, we describe how to modify Algorithm 1 to get the stronger incentive guarantee of bid-approximate Bayesian incentive compatibility. Theorem 13 gave utility-approximate BIC, which means all bidders behaving truthfully in all rounds is an approximate Bayes-Nash equilibrium (BNE). Bid-approximate BIC means there is an exact Bayes-Nash equilibrium where all bidders bid within eta of their values. 
The main challenge here comes from whether bidders change their behavior in later rounds based on the non-truthful behavior of other bidders in earlier rounds.2 To make this jump, we guarantee that round t's allocations and payments leak very little information about a bidder's behavior to other bidders. So, bidders will have very little ability to condition on one another's behavior in round t when selecting a strategy in later rounds. To this end, we ensure that the allocations in round t are differentially private and the payments in round t are jointly differentially private, informally stated below.\nTheorem 15. Algorithm 2 runs in polynomial time, is eta_t-bid-approximate BIC in round t for eta_t = c + O~(t^{-1/4}), and achieves expected revenue Rev = OPT - c' - O~(t^{-1/4}) for small constants c, c'.\nThe more precise result, given in Theorem 19, shows that under mild assumptions about bidders' behavior, every equilibrium bid under our mechanism lies close to the true value in the bid space. That is, in round t, we show that no bidder in equilibrium will underbid by more than a small factor eta_t. The main challenge in getting such a result is to bound the extent to which bidders' behavior in the current round can affect their future utilities. If we can control this quantity, we can control the amount by which bidders can game the system in the current round, and we also ensure that bidders are unable to learn much about the value distributions of other bidders. 
The formal revenue guarantees for our mechanism are presented in Theorem 20, where we show that with high probability, the average expected revenue obtained by our mechanism is close to the optimal expected revenue of the mechanism with complete information about the value distribution of the bidders (i.e., Myerson’s auction on D).

4.1 A mechanism with private payments and allocations

We now present Algorithm 2, a bid-approximate BIC online auction. The primary challenge is to ensure that the allocation and payments in round t do not leak much information about bidders’ behavior to one another. The mechanism ensures this by making choices which are all jointly differentially private, ensuring that bidders do not substantially affect either the mechanism’s state or what other bidders learn about them. The mechanism maintains a private estimate of the empirical distribution (as before), and also computes prices and allocations in round t using jointly differentially private algorithms. We use the exponential mechanism (Algorithm 6) to pick round t’s allocation, and black-box payments (Algorithm 5) to pick payments in round t (which in expectation yields payments close to the truthful payments). This additional step of ensuring that round t’s decisions are differentially private is crucial when bidders might condition their behavior in later rounds upon what they learn from round t (which might be the case in bid-approximate equilibria).

To describe exact equilibria, we need to argue about each bidder’s utility for modifying her bid, and argue that shading by more than ηt hurts her utility. We do this using a punishing mechanism, which penalizes bidders who shade their bids by running a strictly truthful mechanism with some probability ρt in each round.
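To make the allocation step concrete, here is a minimal, self-contained sketch of the exponential mechanism applied to posted-price selection. The bids, candidate prices, revenue score, and sensitivity bound are illustrative assumptions for this sketch, not the paper’s Algorithm 6, which scores allocations rather than prices:

```python
import math
import random

def exponential_mechanism(candidates, quality, epsilon, sensitivity):
    """Sample a candidate with probability proportional to exp(eps * quality / (2 * sensitivity))."""
    weights = [math.exp(epsilon * quality(c) / (2.0 * sensitivity)) for c in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]
    r, acc = random.random(), 0.0
    for c, p in zip(candidates, probs):
        acc += p
        if r <= acc:
            return c, probs
    return candidates[-1], probs

bids = [3.0, 5.0, 7.0]                              # hypothetical round-t bids
revenue = lambda p: p * sum(b >= p for b in bids)   # revenue of posting price p
prices = [1, 2, 3, 4, 5, 6, 7]                      # discretized candidate prices
# One changed bid moves the revenue of any price by at most max(prices) = 7.
chosen, probs = exponential_mechanism(prices, revenue, epsilon=1.0, sensitivity=7.0)
```

Higher-revenue prices receive exponentially more probability mass, while a single changed bid shifts any candidate’s selection probability by at most a factor of e^ε — the trade-off between near-optimality and limited leakage that the mechanism relies on.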
Since we are going to show that there exists an equilibrium where in round t every bidder bids within ηt of her true value, it will be convenient to define the round t equilibrium bid distributions Ft = F1,t × ··· × Fn,t.

2This is not a challenge when we assume all bidders behave truthfully, since bidders won’t have non-truthful behavior in earlier rounds to condition their behavior upon.

Algorithm 2: Bid-approximate BIC Online Auction
Parameters: discretization β, privacy ε, upper bound on support h, number of rounds T
Initialize: H′i,0 ← Uniform(0, h) for i = 1, ··· , n
for t = 1, ··· , T do
    Receive vector of bids bt = (b1,t, . . . , bn,t), rounded down to multiples of β
    With probability ρt run mechanism StrictlyTruthful(bt) (Algorithm 4)
    else
        for i = 1, . . . , n do
            Use H′i,t−1 to calculate φi,t(bi,t) (Theorem 3)
        end
        Use the exponential mechanism (Algorithm 6) to select allocation xt(φt(bt))
        Use black-box payments (Algorithm 5) to calculate payments pt(bt)
    end
    for i = 1, . . . , n do
        Update H′i,t via two-fold tree aggregation (Algorithm 3), giving as input bi,t
    end
end

4.2 Privacy, Truthfulness, and Revenue Guarantees

In this subsection, we provide the differential privacy and incentive guarantees of Algorithm 2. We refer to Appendix B for additional technical details and proofs.

Lemma 16. Algorithm 2 is (3ε, 3ε/T)-jointly differentially private in the bids of users.

Lemma 17. Fix a round t, population i, and the bidder from population i in round t. This bidder’s total utility in rounds t′ > t for misreporting in round t is at most εkh(2 + 1/T) more than under truthful reporting.

Finally, we introduce an assumption we make about the equilibrium behavior of bidders facing this mechanism.
Informally, the assumption states that a mechanism run over many rounds should have equilibrium bid distributions which place similar probability on any bid in adjacent rounds.

Assumption 18 (λ-Stable Bid Distribution). We say the mechanism M supports a λ-stable bid distribution if it has some BNE with distribution of equilibrium bids Ft in round t such that for all populations i and rounds t, there exists λt s.t. ‖Ft − Ft−1‖∞ ≤ λt.

Remark. Consider a mechanism with very similar behavior in round t versus t + 1: both the mechanism’s distributions over allocation and payment rules in rounds t and t + 1 are very close. If all other bidders’ strategies from round t to t + 1 are also very close, then the utility of any particular bid b for a bidder with valuation v from population i will be quite close in the two rounds, but that bidder’s utility-optimal bid need not be identical, or even particularly close, across the two rounds.

This suggests that analyzing exact equilibria in iterated settings is quite complex, in that the distribution over utility-optimal bids might shift quite substantially from round to round. Mechanisms and equilibria without this property may have highly erratic behavior, and such equilibria may not support a learning procedure which competes with the (truthful) optimal revenue. Hence we assume that the mechanism in Algorithm 2 supports a λ-stable bid distribution with the condition that the quantity $\Sigma_T := \sum_{t=1}^{T-1} \lambda_t$ satisfies $\Sigma_T/T = o(1)$. We now present the truthfulness guarantee of Algorithm 2.

Theorem 19.
Algorithm 2 in round t is ηt-bid approximate BIC, i.e., in round t, any bidder i with value vi,t reports bid bi,t which satisfies vi,t − ηt ≤ bi,t ≤ vi,t, where

$$\eta_t = h\sqrt{\frac{8n^2\gamma_t + 6k\epsilon}{\rho_t J}}, \qquad \gamma_t = \sqrt{\frac{\log(2hn/\beta\alpha)}{2t}} + \Sigma_t + \frac{\sigma}{t},$$

$$\sigma = \frac{8}{\epsilon}\,\log T \log\frac{h}{\beta}\,\sqrt{\log T \log\frac{h}{\beta}\,\ln\frac{hn}{\beta\alpha\delta}}, \qquad \delta = \frac{\epsilon}{T}.$$

Let F′t be the distribution of round t bids (Ft) rounded down to the nearest multiple of β, and let MH′t be the mechanism we run in round t on the bids we receive from F′t. The expected revenue we achieve in round t is Rev(MH′t; F′t), which we will compare with the optimal revenue Rev(M∗D; D).

We now present the main revenue guarantee for Algorithm 2, i.e., we compare the average expected revenue of Algorithm 2 with the optimal revenue. We refer to Appendix B.4 for the proof.

Theorem 20. Using Algorithm 2 for T rounds, with probability at least 1 − α the average expected revenue obtained by the mechanism over the T rounds satisfies

$$\frac{1}{T}\sum_{t=1}^{T} \mathrm{Rev}(M_{H'_t};\, F'_t) \;\ge\; \mathrm{Rev}(M^*_{D};\, D) \;-\; \left( hnJ^{2/3}\,\tilde{O}\!\left(\frac{1}{T^{1/4}}\right) + \frac{J\ln n}{\epsilon} + \frac{3h\epsilon\ln J}{k\epsilon} + hJ^{2/3}(12k\epsilon)^{1/3} + \beta J \right).$$

References

[1] Kareem Amin, Afshin Rostamizadeh, and Umar Syed. 2013. Learning prices for repeated auctions with strategic buyers. In Advances in Neural Information Processing Systems. 1169–1177.

[2] Aaron Archer, Christos Papadimitriou, Kunal Talwar, and Éva Tardos. 2004. An approximate truthful mechanism for combinatorial auctions with single parameter agents.
Internet Mathe-\nmatics 1, 2 (2004), 129\u2013150.\n\n[3] Itai Ashlagi, Constantinos Daskalakis, and Nima Haghpanah. 2016. Sequential mechanisms\nwith ex-post participation guarantees. In Proceedings of the 2016 ACM Conference on Eco-\nnomics and Computation. ACM, 213\u2013214.\n\n[4] Maria-Florina Balcan, Avrim Blum, Jason D Hartline, and Yishay Mansour. 2005. Mechanism\ndesign via machine learning. In Foundations of Computer Science, 2005. FOCS 2005. 46th\nAnnual IEEE Symposium on. IEEE, 605\u2013614.\n\n[5] Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. 2016. Sample complexity of\nautomated mechanism design. In Advances in Neural Information Processing Systems. 2083\u2013\n2091.\n\n[6] Dirk Bergemann and Juuso Valimaki. 2010. The Dynamic Pivot Mechanism. Econometrica\n\n78, 2 (2010), 771\u2013789.\n\n[7] Avrim Blum and Jason D Hartline. 2005. Near-optimal online auctions. In Proceedings of\nthe sixteenth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and\nApplied Mathematics, 1156\u20131163.\n\n[8] Mark Braverman, Jieming Mao, Jon Schneider, and Matt Weinberg. 2018. Selling to a no-\nregret buyer. In Proceedings of the 2018 ACM Conference on Economics and Computation.\nACM, 523\u2013538.\n\n[9] Sebastien Bubeck, Nikhil R Devanur, Zhiyi Huang, and Rad Niazadeh. 2017. Online auctions\nand multi-scale online learning. In Proceedings of the 2017 ACM Conference on Economics\nand Computation. ACM, 497\u2013514.\n\n[10] Nicolo Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. 2015. Regret minimization for re-\nserve prices in second-price auctions. IEEE Transactions on Information Theory 61, 1 (2015),\n549\u2013564.\n\n[11] T-H Hubert Chan, Elaine Shi, and Dawn Song. 2011. Private and continual release of statistics.\n\nACM Transactions on Information and System Security (TISSEC) 14, 3 (2011), 26.\n\n[12] Yiling Chen, Stephen Chong, Ian A Kash, Tal Moran, and Salil Vadhan. 2016. 
Truthful mecha-\nnisms for agents that value privacy. ACM Transactions on Economics and Computation (TEAC)\n4, 3 (2016), 13.\n\n[13] Richard Cole and Tim Roughgarden. 2014. The sample complexity of revenue maximization.\nIn Proceedings of the forty-sixth annual ACM symposium on Theory of computing. ACM, 243\u2013\n252.\n\n[14] Rachel Cummings. 2017. Differential Privacy As a Tool for Truthfulness in Games. XRDS 24,\n\n1 (Sept. 2017), 34\u201337.\n\n9\n\n\f[15] Rachel Cummings, Stratis Ioannidis, and Katrina Ligett. 2015. Truthful Linear Regression. In\n\nProceedings of The 28th Conference on Learning Theory (COLT \u201915). 448\u2013483.\n\n[16] Rachel Cummings, Michael Kearns, Aaron Roth, and Zhiwei Steven Wu. 2015. Privacy and\nTruthful Equilibrium Selection for Aggregative Games. In Proceedings of the 11th Interna-\ntional Conference on Web and Internet Economics (WINE \u201915). 286\u2013299.\n\n[17] Nikhil R Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. 2016. The sample com-\nplexity of auctions with side information. In Proceedings of the forty-eighth annual ACM sym-\nposium on Theory of Computing. ACM, 426\u2013439.\n\n[18] Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. 2015. Revenue maximization\n\nwith a single sample. Games and Economic Behavior 91 (2015), 318\u2013333.\n\n[19] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating noise to\nsensitivity in private data analysis. In Theory of cryptography conference. Springer, 265\u2013284.\n\n[20] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N Rothblum. 2010. Differential privacy\nunder continual observation. In Proceedings of the forty-second ACM symposium on Theory of\ncomputing. ACM, 715\u2013724.\n\n[21] Edith Elkind. 2007. Designing and Learning Optimal Finite Support Auctions. In Proceedings\nof the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA \u201907). 
Society\nfor Industrial and Applied Mathematics, Philadelphia, PA, USA, 736\u2013745. http://dl.acm.\norg/citation.cfm?id=1283383.1283462\n\n[22] Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, and Song Zuo. 2018. Incentive-\nAware Learning for Large Markets. In Proceedings of the 2018 World Wide Web Conference\non World Wide Web. International World Wide Web Conferences Steering Committee, 1369\u2013\n1378.\n\n[23] Yannai A. Gonczarowski and Noam Nisan. 2017. Ef\ufb01cient Empirical Revenue Maximization in\nSingle-parameter Auction Environments. In Proceedings of the 49th Annual ACM Symposium\non Theory of Computing (STOC \u201917). 856\u2013868.\n\n[24] Jason Hartline. 2013. Mechanism design and approximation. Book draft. October 122 (2013).\n\n[25] Jason Hartline and Samuel Taggart. 2016. Non-Revelation Mechanism Design. arXiV preprint\n\n1608.01875. (2016).\n\n[26] Justin Hsu, Zhiyi Huang, Aaron Roth, Tim Roughgarden, and Zhiwei Steven Wu. 2016. Private\n\nmatchings and allocations. SIAM J. Comput. 45, 6 (2016), 1953\u20131984.\n\n[27] Sham Kakade, Ilan Lobel, and Hamid Nazerzadeh. 2013. Optimal Dynamic Mechanism De-\n\nsign and the Virtual-Pivot Mechanism. Operations Research 64, 4 (2013), 837\u2013854.\n\n[28] Sampath Kannan, Jamie Morgenstern, Ryan Rogers, and Aaron Roth. 2015. Private Pareto\nOptimal Exchange. In Proceedings of the Sixteenth ACM Conference on Economics and Com-\nputation (EC \u201915). 261\u2013278.\n\n[29] Michael Kearns, Mallesh Pai, Aaron Roth, and Jonathan Ullman. 2014. Mechanism Design in\nLarge Games: Incentives and Privacy. In Proceedings of the 5th Conference on Innovations in\nTheoretical Computer Science (ITCS \u201914). ACM, 403\u2013410.\n\n[30] Jinyan Liu, Zhiyi Huang, and Xiangning Wang. 2018. Learning Optimal Reserve Price against\n\nNon-myopic Bidders. In Advances in Neural Information Processing Systems. 2038\u20132048.\n\n[31] Siqi Liu and Christos-Alexandros Psomas. 2018. 
On the competition complexity of dynamic\nmechanism design. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on\nDiscrete Algorithms. Society for Industrial and Applied Mathematics, 2008\u20132025.\n\n[32] Pascal Massart. 1990. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The\n\nannals of Probability (1990), 1269\u20131283.\n\n10\n\n\f[33] Frank McSherry and Kunal Talwar. 2007. Mechanism design via differential privacy. In Foun-\ndations of Computer Science, 2007. FOCS\u201907. 48th Annual IEEE Symposium on. IEEE, 94\u2013\n103.\n\n[34] Frank McSherry and Kunal Talwar. 2007. Mechanism Design via Differential Privacy. In\nProceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science. 94\u2013\n103.\n\n[35] Andres M Medina and Mehryar Mohri. 2014. Learning theory and algorithms for revenue\noptimization in second price auctions with reserve. In Proceedings of the 31st International\nConference on Machine Learning (ICML-14). 262\u2013270.\n\n[36] Vahab S Mirrokni, Renato Paes Leme, Pingzhong Tang, and Song Zuo. 2016. Dynamic Auc-\n\ntions with Bank Accounts.. In IJCAI. 387\u2013393.\n\n[37] Jamie Morgenstern and Tim Roughgarden. 2016. Learning simple auctions. In Conference on\n\nLearning Theory. 1298\u20131318.\n\n[38] Jamie H Morgenstern and Tim Roughgarden. 2015. On the pseudo-dimension of nearly opti-\n\nmal auctions. In Advances in Neural Information Processing Systems. 136\u2013144.\n\n[39] Roger B Myerson. 1981. Optimal auction design. Mathematics of operations research 6, 1\n\n(1981), 58\u201373.\n\n[40] Kobbi Nissim, Claudio Orlandi, and Rann Smorodinsky. 2012. Privacy-aware Mechanism\nDesign. In Proceedings of the 13th ACM Conference on Electronic Commerce (EC \u201912). ACM,\n774\u2013789.\n\n[41] Renato Paes Leme, Martin Pal, and Sergei Vassilvitskii. 2016. A \ufb01eld guide to personalized\nreserve prices. In Proceedings of the 25th International Conference on World Wide Web. 
Inter-\nnational World Wide Web Conferences Steering Committee, 1093\u20131102.\n\n[42] Christos Papadimitriou, George Pierrakos, Christos-Alexandros Psomas, and Aviad Rubin-\nstein. 2016. On the complexity of dynamic mechanism design. In Proceedings of the twenty-\nseventh annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Ap-\nplied Mathematics, 1458\u20131475.\n\n[43] Alessandro Pavan, Ilya Segal, and Juuso Toikka. 2014. Dynamic Mechanism Design: A My-\n\nersonian Approach. Econometrica 82, 2 (2014), 601\u2013653.\n\n[44] David Xiao. 2013. Is privacy compatible with truthfulness?. In Proceedings of the 4th confer-\n\nence on Innovations in Theoretical Computer Science. ACM, 67\u201386.\n\n11\n\n\f", "award": [], "sourceid": 6183, "authors": [{"given_name": "Jacob", "family_name": "Abernethy", "institution": "Georgia Institute of Technology"}, {"given_name": "Rachel", "family_name": "Cummings", "institution": "Georgia Tech"}, {"given_name": "Bhuvesh", "family_name": "Kumar", "institution": "Georgia Tech"}, {"given_name": "Sam", "family_name": "Taggart", "institution": "Oberlin College"}, {"given_name": "Jamie", "family_name": "Morgenstern", "institution": "University of Washington"}]}