Contact Info
Amin Sayedi
Foster School of Business
Paccar 483
University of Washington
Seattle, WA 98195
Email: aminsa@uw.edu

Papers Under Review and Working Papers


Real-time Bidding in Online Display Advertising [pdf]

Display advertising is a major source of revenue for many online publishers and content providers. Historically, display advertising impressions have been sold through pre-negotiated contracts, known as reservation contracts, between publishers and advertisers. In recent years, a growing number of impressions are being sold through real-time bidding (RTB), in which advertisers bid for impressions in real time as consumers visit publishers' websites. RTB allows advertisers to target consumers at an individual level using browser cookie information, and enables them to customize their ads for each individual. The rapid growth of RTB has created new challenges for advertisers and publishers in deciding how much budget and ad inventory to allocate to RTB. In this paper, we use a game theory model with two advertisers and a publisher to study the effects of RTB on advertisers' and publishers' strategies and profits. We show that symmetric advertisers use asymmetric strategies, where one advertiser buys all of his impressions in RTB whereas the other advertiser focuses on reservation contracts. Interestingly, we find that while both advertisers benefit from the existence of RTB, the advertiser that focuses on reservation contracts benefits more than the advertiser that focuses on RTB. We show that while RTB lowers the equilibrium price of impressions in reservation contracts, it increases the publisher's total revenue. Despite many analysts' belief that RTB, being more efficient, will replace reservation contracts, we show that publishers must sell a sufficiently large fraction of their impressions through reservation contracts in order to maximize their revenue. We extend our model to consider premium consumers, the publisher's uncertainty about the number of future visitors, and the effectiveness of ad customization.
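To make the two selling channels concrete, here is a minimal toy sketch (not the paper's model; the per-visitor values, truthful bidding, and the second-price rule are illustrative assumptions) contrasting a reservation sale at a pre-negotiated price with an RTB auction whose bids can depend on the individual visitor:

    import random

    def sell_impression(value_a, value_b, reservation_price, use_rtb):
        # Toy comparison: a second-price RTB auction vs. a pre-negotiated reservation sale.
        # value_a, value_b are the two advertisers' (illustrative) valuations for this visitor.
        if use_rtb:
            bids = {"A": value_a, "B": value_b}        # truthful bidding assumed
            winner = max(bids, key=bids.get)
            price = min(bids.values())                 # second-price rule
        else:
            winner, price = "A", reservation_price     # contracted buyer at the contracted price
        return winner, price

    random.seed(0)
    for _ in range(3):
        va, vb = random.uniform(0, 1), random.uniform(0, 1)   # cookie-based, per-visitor values
        print(sell_impression(va, vb, 0.4, use_rtb=True), sell_impression(va, vb, 0.4, use_rtb=False))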


Exclusivity in Online Advertising, joint with M. Baghaie and K. Jerath [pdf]

As sponsored search becomes increasingly important as an advertising medium for firms, search engines are exploring more advanced bidding and ranking mechanisms to increase their revenue from sponsored search auctions. For instance, Google, Yahoo! and Bing are investigating auction mechanisms in which each advertiser submits two bids: one bid for the standard display format in which multiple advertisers are displayed, and one bid for being shown exclusively. If an advertiser's exclusive-placement bid is high enough, only that advertiser is displayed; otherwise, multiple advertisers are displayed and ranked based on their multiple-placement bids. We call such auctions two-dimensional auctions and study the GSP2D mechanism, which is an extension of the GSP mechanism and has recently been patented by Yahoo! as a key candidate for implementing two-dimensional exclusive-display auctions. In a significant advance on the existing literature on sponsored search auctions, we assume that advertisers have uncertain valuations and solve for the Bayesian Nash equilibria of the GSP and GSP2D auctions.
We find that allowing advertisers to bid for exclusivity can increase the revenue of the search engine because competition is heightened---bidders compete not only for positions in the non-exclusive outcome but also over whether the outcome is exclusive or non-exclusive. Interestingly, however, under certain conditions, the revenue of the search engine can decrease---competition between outcomes leads bidders to reduce their bids for their non-preferred outcome; specifically, a bidder who values the exclusive outcome highly will bid high for exclusivity and, simultaneously, bid low for non-exclusivity to increase the chance of obtaining the exclusive outcome. We also find interesting results on the bidding strategies of advertisers in GSP2D; for instance, we find that, under certain conditions, advertisers have an incentive to bid above their true valuations.
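The allocation logic of such a two-dimensional auction can be sketched in a few lines. The comparison rule and pricing details below are simplifying assumptions for illustration, not the patented GSP2D implementation:

    def allocate_gsp2d(bids, num_slots):
        # bids: {advertiser: (multi_bid, exclusive_bid)}.
        # Show one advertiser exclusively if some exclusive bid beats the combined
        # value of the best non-exclusive outcome; otherwise rank by multi bids.
        best_exclusive = max(bids, key=lambda a: bids[a][1])
        ranked = sorted(bids, key=lambda a: bids[a][0], reverse=True)
        non_exclusive_value = sum(bids[a][0] for a in ranked[:num_slots])
        if bids[best_exclusive][1] >= non_exclusive_value:   # assumed comparison rule
            return [best_exclusive]                           # exclusive display
        return ranked[:num_slots]                             # standard multi-ad display

    print(allocate_gsp2d({"A": (2, 7), "B": (3, 1), "C": (1, 0)}, num_slots=2))   # ['A']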


Publications


The Effects of Autoscaling in Cloud Computing on Product Launch, joint with Amir Fazli and Jeff Shulman [pdf], forthcoming in Management Science

Web-based firms often rely on cloud-based computational resources to serve customers, but the number of customers they will serve is rarely known at the time of product launch. A recent innovation in cloud computing, known as autoscaling, allows companies to automatically scale their computational load up or down to match customer demand. We build a game theory model to examine how autoscaling will affect firms' decisions to enter a new market and the resulting equilibrium prices, profitability, and consumer surplus. The model produces novel results depending on the likelihood of a firm's success in the new market, differentiation among potential entrants, and the cost of computational capacity. For instance, in contrast to previous capacity commitment models with demand certainty, we show that autoscaling can mitigate price competition if the likelihood of a firm's success in the market is moderate and the cost of capacity is sufficiently low. This is because without autoscaling, the firms' uncertainty about demand would lead to excessive computational capacities and thus aggressive price competition. We also find that when the likelihood of success is sufficiently high, autoscaling facilitates entry for one firm yet deters a second firm from simultaneously entering the market.
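For readers unfamiliar with the technology itself, a generic threshold-style autoscaling rule looks roughly like the sketch below; this is background illustration only, not the capacity model analyzed in the paper:

    import math

    def autoscale(observed_demand, capacity_per_instance, min_instances=1, max_instances=100):
        # Provision just enough instances to cover observed demand, within configured bounds.
        needed = math.ceil(observed_demand / capacity_per_instance)
        return max(min_instances, min(max_instances, needed))

    print(autoscale(observed_demand=930, capacity_per_instance=200))   # -> 5 instances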


Strategic Compliments in Sales, joint with J. Shulman [pdf], Quantitative Marketing and Economics (2017)

Salespersons often spend time and money giving prospective buyers compliments such as kind words, meals and gifts. Though prior research has shown that compliments can influence a prospective buyer's decision, the extent to which salespersons should make these investments is unknown. In this paper, we develop an analytical model to examine how seller and buyer characteristics affect the equilibrium provision of compliments by the seller. We establish that the optimal magnitude of compliments is non-monotonic in the buyer's sensitivity to compliments. We identify conditions under which a seller of a high-quality product will offer greater (or lesser) compliments than a seller of a lower-quality product. We show that, under certain conditions, an uninformed buyer earns greater utility than a buyer who knows the quality of the seller's product. The findings have implications for sellers in their choice of compliments and for buyers in the inferences they draw from the compliments received.


Expertise in Online Markets, joint with S. Despotakis, I. Hafalir and R. Ravi [pdf], Management Science (2016)

We examine the effect of the presence of knowledgeable buyers (experts) in online markets where auctions with a hard close and posted prices are widely used. We model buyer expertise as the ability to accurately predict the quality, or condition, of an item. In auctions with a hard close, sniping (submitting bids in the last minute) emerges as an equilibrium strategy for experts. We show that non-experts bid more aggressively as the proportion of experts increases. As a consequence, we establish that the auction platform may obtain a higher revenue by (i) enforcing a hard close and allowing sniping, and (ii) withholding information regarding the quality of the item. Moreover, in online markets where both auctions and posted prices are available, we show that the presence of experts allows the sellers of high-quality items to signal their quality by choosing to sell via auctions.


Keyword Management Costs and "Broad Match" in Sponsored Search Advertising, joint with W. Amaldoss and K. Jerath [pdf], Marketing Science (2015)

In sponsored search advertising, advertisers choose the keywords to bid on, decide how much to bid on them, customize the ads for the chosen keywords by geographic region, time of day and season, and then measure and manage campaign effectiveness. We call the costs that arise from such activities keyword management costs. To reduce these costs and increase advertisers' participation in keyword auctions, search engines offer a tool called broad match, which automates bidding on keywords. These automated bids are based on the search engine's estimates of the advertisers' valuations and, therefore, may be less accurate than the bids the advertisers would have submitted themselves. Using a game-theoretic model, we examine the strategic role of keyword management costs, and of broad match, in sponsored search advertising. We show that because these costs inhibit participation by advertisers in keyword auctions, the search engine has to reduce the reserve price, which reduces the search engine's profits. This motivates the search engine to offer broad match as a tool to reduce keyword management costs. If the accuracy of broad match bids is high enough, advertisers adopt broad match and benefit from the cost reduction, whereas if the accuracy is very low, advertisers do not use it. Interestingly, at moderate levels of bid accuracy, advertisers individually find it attractive to reduce costs by using broad match, but competing advertisers also adopt broad match and the increased competition hurts all advertisers' profits; thus a "prisoner's dilemma" arises. Adoption of broad match by advertisers increases search engine profits, and it therefore seems natural to expect that the search engine will be motivated to improve broad match accuracy. Our analysis shows that the search engine will increase broad match accuracy up to the point where advertisers choose broad match, but increasing the accuracy any further reduces the search engine's profits. We consider a number of extensions to the basic model to show the robustness of our insights.
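The accuracy trade-off can be pictured with a small toy simulation; the Gaussian noise model below is an illustrative assumption, not the valuation-estimation process in the paper:

    import random

    def broad_match_bid(true_value, accuracy, rng):
        # Illustrative broad-match bid: the search engine's noisy estimate of the
        # advertiser's valuation; higher accuracy means a tighter estimate.
        noise = rng.gauss(0, 1 - accuracy)
        return max(0.0, true_value + noise)

    rng = random.Random(1)
    for accuracy in (0.2, 0.6, 0.95):
        bids = [broad_match_bid(1.0, accuracy, rng) for _ in range(1000)]
        avg_error = sum(abs(b - 1.0) for b in bids) / len(bids)
        print(f"accuracy={accuracy:.2f}  mean |bid - value| = {avg_error:.3f}")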


Competitive Poaching in Sponsored Search Advertising and Strategic Impact on Traditional Advertising, joint with K. Jerath and K. Srinivasan [pdf], Marketing Science (2014)

An important decision for a firm is how to allocate its advertising budget among different types of advertising. Most traditional channels of advertising, such as advertising on television and in print, serve the purpose of building consumer awareness and desire about the firm's products. With recent developments in technology, sponsored search (or paid search) advertising at search engines in response to a keyword searched by a user has become an important part of most firms' advertising efforts. An advantage of sponsored search advertising is that, since the firm advertises in response to a consumer-initiated search, it is a highly targeted form of communication and the sales-conversion rate is typically higher than in traditional advertising. However, a consumer would search for a specific product or brand only if she is already aware of it due to previous awareness-generating traditional advertising efforts. Moreover, competing firms can use sponsored search to free-ride on the awareness-building efforts of other firms by directly advertising on their keywords and thereby "poaching" their customers. Anecdotal evidence shows that this is a frequent occurrence. In other words, traditional advertising builds awareness, while sponsored search is a form of technology-enabled communication that helps target consumers at a later stage of the purchase process, which in turn induces competitors to poach these potential customers.
Using a game theory model, we study the implications of these tradeoffs on the advertising decisions of competing firms, and on the design of the sponsored search auction by the search engine. We find that symmetric firms may follow asymmetric advertising strategies, with one firm focusing on traditional advertising and the other firm focusing on sponsored search with poaching. Interestingly, the search engine benefits from discouraging competition in its own auctions in sponsored search advertising by shielding firms from competitors' bids. This explains the practice of employing "keyword relevance" measures, under which search engines such as Google, Yahoo! and Bing under-weight the bids of firms bidding on competitors' keywords. We also obtain various other interesting insights on the interplay between sponsored search advertising and traditional advertising.
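As a concrete (and purely illustrative) picture of how relevance under-weighting can shield a firm from a poacher's higher bid, consider a rank-by-weighted-bid rule; the weight values below are assumptions, not any search engine's actual scores:

    def rank_ads(bids, relevance):
        # Rank ads by relevance-weighted bid; a bid placed on a competitor's
        # keyword receives a lower relevance weight (weights are illustrative).
        scores = {ad: bids[ad] * relevance[ad] for ad in bids}
        return sorted(scores, key=scores.get, reverse=True)

    bids      = {"brand_owner": 1.0, "poacher": 1.8}
    relevance = {"brand_owner": 1.0, "poacher": 0.4}   # poacher's bid is under-weighted
    print(rank_ads(bids, relevance))                   # brand_owner outranks the higher raw bid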


A Near Pareto Optimal Auction with Budget Constraints, joint with I. Hafalir and R. Ravi [pdf], Games and Economic Behavior (2011)

In a setup where a divisible good is to be allocated to a set of bidders with budget constraints, we introduce a mechanism in the spirit of the Vickrey auction. In the mechanism we propose, understating budgets or values is weakly dominated. Since the revenue is increasing in budgets and values, all kinds of equilibrium deviations from true valuations turn out to be beneficial to the auctioneer. We also show that the ex-post Nash equilibrium of our mechanism is near Pareto optimal in the sense that all full winners' values are above all full losers' values.


We Know Who You Followed Last Summer: Inferring Social Link Creation Times in Twitter, joint with C. Borgs, J. Chayes, B. Karrer, B. Meeder and R. Ravi [pdf]

Understanding a network's temporal evolution appears to require multiple observations of the graph over time. These often expensive repeated crawls can only answer questions about what happened from observation to observation, not about what happened before or between network snapshots. Contrary to this picture, we propose a method for Twitter's social network that takes a single static snapshot of network edges and user account creation times to accurately infer when these edges were formed. This method can be exact in theory, and we demonstrate empirically that, for a large subset of Twitter relationships, it is accurate to within hours in practice. We study users who have a very large number of edges or who are recommended by Twitter. We examine the graph formed by these nearly 1,800 Twitter celebrities and their 862 million edges in detail, showing that a single static snapshot can give novel insights about Twitter's evolution. We conclude from this analysis that real-world events and changes to Twitter's interface for recommending users strongly influence network growth.
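To convey the flavor of such an inference (a simplified illustration, assuming the snapshot lists a user's followers in the order the follows occurred; this is not the paper's full procedure), one can lower-bound every edge's creation time from account creation times alone:

    def edge_time_lower_bounds(followers_in_follow_order, account_created):
        # Edge i cannot predate follower i's account creation, nor any earlier
        # edge in the list, so a running maximum gives a lower bound per edge.
        bounds, running_max = [], float("-inf")
        for follower in followers_in_follow_order:
            running_max = max(running_max, account_created[follower])
            bounds.append(running_max)
        return bounds

    created = {"u1": 100, "u2": 300, "u3": 250}                  # hypothetical account creation times
    print(edge_time_lower_bounds(["u1", "u2", "u3"], created))   # [100, 300, 300]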

Appeared in the Twentieth International World Wide Web Conference, WWW '11 (Acceptance Rate 12%).


Game-theoretic Models of Information Overload in Social Networks, joint with C. Borgs, J. Chayes, B. Karrer, B. Meeder, R. Ravi and R. Reagans [pdf]

We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks, where users non-strategically choose a subset of celebrities to follow based on the utility derived from high-quality updates as well as the disutility of having to wade through too many updates. Our two variants model users either dropping some potential connections (the followership model) or leaving the network altogether (the engagement model). We show that, even under very simple and natural models of celebrity rate competition, the followership model has no pure-strategy Nash equilibrium. We then identify special cases in both models where pure rate equilibria exist for the celebrities: for the followership model, we show existence of pure rate equilibria when there is a global ranking of the celebrities in terms of the quality of their updates to users; for the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we give a finite, though inefficient, procedure to determine whether pure equilibria exist in the general case of the followership model.
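As a stylized illustration (our own notation and functional form, not the exact specification used in the paper), a user's utility from following a set S of celebrities with update rates r_j and qualities q_j might be written as

    u(S) = \sum_{j \in S} q_j r_j \;-\; c\Big(\sum_{j \in S} r_j\Big),

where the first term is the value of high-quality updates and the increasing cost c(.) captures the overload from the total volume of updates; in the followership model a user trims S, while in the engagement model she may abandon the network entirely.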

Appeared in the Seventh Workshop on Algorithms and Models for the Web Graph, WAW '10.


Trading Off Mistakes and Don't-know Predictions, joint with A. Blum and M. Zadimoghaddam [pdf]

We discuss an online learning framework in which, on a given example, the agent is allowed to say "I don't know" as well as to make an incorrect prediction. We analyze the trade-off between saying "I don't know" and making mistakes. If the number of don't-know predictions is forced to be zero, the model reduces to the well-known mistake-bound model introduced by Littlestone [Lit88]. On the other hand, if no mistakes are allowed, the model reduces to the KWIK framework introduced by Li et al. [LLW08]. We propose a general, though inefficient, algorithm for finite concept classes that minimizes the number of don't-know predictions when a certain number of mistakes is allowed. We then present specific polynomial-time algorithms for the concept classes of monotone disjunctions and linear separators.
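The trade-off can be illustrated with a generic version-space predictor over a finite concept class (an illustrative sketch of the setting, not the algorithm from the paper): predicting only when enough consistent concepts agree trades don't-know predictions against mistakes.

    def predict_with_idk(version_space, x, agreement_threshold):
        # Predict a label only when at least agreement_threshold of the concepts
        # still consistent with past data agree on it; otherwise say "don't know".
        # threshold 1.0: no mistakes possible (KWIK-style, many don't-knows);
        # threshold just above 0.5: majority vote, mistake-bound style.
        votes = [h(x) for h in version_space]
        frac_true = sum(votes) / len(votes)
        if frac_true >= agreement_threshold:
            return True
        if 1 - frac_true >= agreement_threshold:
            return False
        return None                                    # "I don't know"

    def update(version_space, x, label):
        # Discard concepts inconsistent with the revealed label.
        return [h for h in version_space if h(x) == label]

    # Hypothetical toy class: threshold functions x >= t on the integers 0..4.
    concepts = [lambda x, t=t: x >= t for t in range(5)]
    print(predict_with_idk(concepts, 3, agreement_threshold=0.75))   # True (4 of 5 agree)
    concepts = update(concepts, 3, True)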

Appeared in the Twenty-fourth Annual Conference on Neural Information Processing Systems, NIPS '10, as a spotlight paper (spotlight acceptance rate under 6%).


Expressive Auctions for Externalities in Online Advertising, joint with A. Ghosh [pdf]

When online ads are shown together, they compete for user attention and conversions, imposing negative externalities on each other. While the competition for user attention in sponsored search can be captured via models of clickthrough rates, the post-click competition for conversions cannot: since an advertiser's value-per-click is proportional to the conversion probability conditional on a click, which depends on the other ads displayed, the advertiser's private value is no longer one-dimensional, and the GSP mechanism is not adequately expressive. We study the design of expressive GSP-like mechanisms for the simplest form that an advertiser's private value can have in the presence of such externalities---an advertiser's value depends on exclusivity, i.e., whether her ad is shown exclusively or along with other ads.

Part of our results appeared in the Fifth Workshop on Ad Auctions, AAW '09.
Appeared in the Nineteenth International World Wide Web Conference, WWW '10 (Acceptance Rate 14%).


Mechanism Design for Complexity-constrained Bidders, joint with R. Kumar and M. Mahdian [pdf]

A well-known result due to Vickrey gives a mechanism for selling a number of goods to interested buyers in a way that achieves the maximum social welfare. In practice, a problem with this mechanism is that it requires the buyers to specify a large number of values. In this paper we study the problem of designing optimal mechanisms subject to constraints on the complexity of the bidding language, in a setting where buyers have additive valuations for a large set of goods. This setting is motivated by sponsored search auctions, where the valuations of the advertisers are more or less additive and the number of keywords up for sale is huge. We give a complete solution for this problem when the valuations of the buyers are drawn from simple classes of prior distributions. For a more realistic class of priors, we show that a mechanism akin to the broad match mechanism currently in use provides a reasonable bicriteria approximation.

Appeared in the Fifth Workshop on Internet and Network Economics, WINE '09.


Minimizing Movement, joint with E. Demaine, M. Hajiaghayi, H. Mahini, S. Oveisgharan and M. Zadimoghaddam
Journal Version [pdf] Conference Version [pdf]

We give approximation algorithms and inapproximability results for a class of movement problems. In general, these problems involve planning the coordinated motion of a large collection of objects (representing anything from a robot swarm or firefighter team to map labels or network messages) to achieve a global property of the network while minimizing the maximum or average movement. In particular, we consider the goals of achieving connectivity (undirected and directed), achieving connectivity between a given pair of vertices, achieving independence (a dispersion problem), and achieving a perfect matching (with applications to multicasting). This general family of movement problems encompasses an intriguing range of graph and geometric algorithms, with several real-world applications and a surprising range of approximability. In some cases, we obtain tight approximation and inapproximability results using direct techniques (without use of PCP), assuming just that P != NP.

Journal version appeared in ACM Transactions on Algorithms, ACM TALG 5(3), 2009.
Conference version appeared in Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07.


Scheduling to Minimize Gaps and Power Consumption, joint with E. Demaine, M. Ghodsi, M. Hajiaghayi and M. Zadimoghaddam [pdf]

This paper considers scheduling tasks while minimizing the power consumption of one or more processors, each of which can go to sleep at a fixed cost alpha. There are two natural versions of this problem, both considered extensively in recent work: minimize the total power consumption (including computation time), or minimize the number of "gaps" in execution. For both versions in a multiprocessor system, we develop a polynomial-time algorithm based on sophisticated dynamic programming. In a generalization of the power-saving problem, where each task can execute in any of a specified set of time intervals, we develop a (1 + (2/3)alpha)-approximation, and show that dependence on alpha is necessary. In contrast, the analogous multi-interval gap scheduling problem is set-cover hard (and thus not o(lg n)-approximable), even in the special cases of just two intervals per job or just three unit intervals per job. We also prove several other hardness-of-approximation results. Finally, we give an O(sqrt(n))-approximation for maximizing throughput given a hard upper bound on the number of gaps.
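In one common formalization (our notation; we assume the processor starts asleep, draws unit power while awake, pays a fixed cost alpha each time it wakes from sleep, and sleeps through every idle period), a schedule whose maximal busy intervals are B_1, ..., B_k has

    E = \sum_{i=1}^{k} |B_i| + \alpha k, \qquad \text{gaps} = k - 1,

so the energy objective minimizes E while the gap objective minimizes k - 1; in reality a processor may also idle awake through gaps shorter than alpha, a possibility the simplification above ignores.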

Appeared in Proceedings of the 19th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '07.


Spanning Trees with Minimum Weighted Degrees, joint with M. Ghodsi, H. Mahini, K. Mirjalali, S. Oveisgharan and M. Zadimoghaddam [pdf]

Given a metric graph G, we are concerned with finding a spanning tree of G in which the maximum weighted degree over its vertices is minimized. In a metric graph (or its spanning tree), the weighted degree of a vertex is defined as the sum of the weights of its incident edges. In this paper, we propose a 4.5-approximation algorithm for this problem. We also prove that it is NP-hard to approximate this problem within a factor of 2 - epsilon.
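In symbols (restating the definitions above), the weighted degree of a vertex v in a spanning tree T of G with edge weights w is

    d_T(v) = \sum_{e \in T,\; v \in e} w(e),

and the objective is \min_{T} \max_{v} d_T(v); the hardness result rules out a polynomial-time (2 - \epsilon)-approximation for any \epsilon > 0 unless P = NP.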

Appeared in Information Processing Letters, IPL 104(3), 2007.