<div dir="ltr"><div>Dear all,<br><br>The details of this week's seminar talk are below. Hope to see you at İAS.<br><br>Gökhan Göğüş<br><br>* İAS is supported by Sabancı University<br><br><b>Speaker: Naci Saldı, Özyeğin University</b><br><b>Title: Approximate Nash Equilibria in Mean-Field Games with Discounted Cost</b><br><b>Date: 20 October 2017</b><br><b>Time: 15.40</b><br><b>Place: Sabancı University, Karaköy Communication Center, Bankalar Caddesi 2, Karaköy 34420, İstanbul</b><br><br></div>
<p class="MsoNormal" style="text-align:justify">In this talk, I will present a
general theory for discrete-time mean-field games with discounted
infinite-horizon cost. I will cover both perfect-state and partial-state
information structures. The state space of each player is a Polish space, and
at each time, the players are coupled through the empirical distribution of
their states, which affects both the players' individual costs and their
state transition probabilities. I will first discuss the difficulties
encountered in any attempt to obtain an exact Nash equilibrium in such dynamic
games with decentralized information and a finite number of players. The
mean-field approach offers a way out of this difficulty. Focusing first on
perfect state information, and using the solution concept of Markov-Nash
equilibrium, I will show, under some mild conditions, the existence of a
mean-field equilibrium in the infinite-population limit. I will then show that
the policy obtained from the mean-field equilibrium constitutes an approximate
Markov-Nash equilibrium when the number of players is sufficiently large.
Following this, I will turn to the class of discrete-time partially observed
mean-field games. Using the technique of converting the original partially
observed stochastic control problem into a fully observed one on the belief
space, together with the dynamic programming principle, I will establish the
existence of Nash equilibria under mild technical conditions. I will again show,
as in the perfect state information case, that the mean-field equilibrium
policy, when adopted by each player, forms an approximate Nash equilibrium for
games with sufficiently many players.</p></div>
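<div><br></div><div>For those unfamiliar with the setup, the following is a minimal sketch of the kind of discounted cost with empirical-distribution coupling described in the abstract; the notation (player state x^i_t, action a^i_t, empirical measure e^N_t, discount factor β, stage cost c, transition kernel p) is ours and not necessarily the speaker's:<br><br>
J^i(π^1, …, π^N) = E[ Σ_{t=0}^{∞} β^t c(x^i_t, a^i_t, e^N_t) ],&nbsp;&nbsp;&nbsp;&nbsp; e^N_t = (1/N) Σ_{j=1}^{N} δ_{x^j_t},<br><br>
where β ∈ (0,1) and the next state x^i_{t+1} is drawn from a transition kernel p(· | x^i_t, a^i_t, e^N_t), so that the empirical distribution e^N_t enters both each player's cost and its state dynamics, as stated in the abstract.</div>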