The COST Action CA16228 European Network for Game theory is organizing an online mini-conference on mathematical game theory and its applications. This workshop will be the scientific part of the annual MC meeting of the COST Action.

Four young speakers from the different workgroups will present their work, followed by an invited talk by Costis Daskalakis.

**The link to the talks should open when you click here; if it does not work, copy and paste the following address into your browser:**

**https://zoom.us/j/241150956?pwd=clc4L1I5M3BYTXEvQnErOVBtRFI0UT09**

**The schedule (Central European Time):**

**12:20 – 12:30** A few opening words

**12:30 – 13:00** WG1 speaker – **Laura Vargas Koch**, Aachen University

**Title:** When Cars Actually Take Space – Nash Flows Over Time with Spillback

**Abstract:** Modeling traffic in road networks is a widely studied but challenging problem, especially under the assumption that drivers act selfishly. A common approach is the deterministic queuing model. The basic idea is to model traffic by a continuous flow that travels over time through a network whose arcs are endowed with transit times and capacities. Whenever the flow rate exceeds an arc's capacity, the flow particles build up a queue. So far it was not possible to represent spillback in this model. By introducing storage capacities, arcs can become full and thus might block preceding arcs, i.e., spillback occurs. We carry over the main results of the original model to our generalization, i.e., we characterize dynamic equilibria by sequences of particular static flows, so-called spillback thin flows.

This is joint work with Leon Sering.

**13:00 – 13:30** WG2 speaker – **Tommaso Cesari**, Toulouse University

**Title:** Cooperation in Online Learning

**Abstract:** In cooperative online learning, a network of communicating agents interacts sequentially with an adversarial environment to solve a common online convex optimization problem.

In this presentation, we will give an overview of some recent positive and negative results on the theoretical guarantees that can be achieved in this setting under different modeling assumptions.

**15 minutes' coffee break**

**13:45 – 14:15** WG3 speaker – **Raimundo Saona**, IST Austria

**Title:** The Complexity of POMDPs with Long-run Average Objectives

**Abstract:** We study the problem of approximation of optimal values in partially observable Markov decision processes (POMDPs) with long-run average objectives. POMDPs are a standard model for dynamic systems with probabilistic and nondeterministic behaviour in uncertain environments. In long-run average objectives, rewards are associated with every transition of the POMDP and the payoff is the long-run average of the rewards during the execution of the POMDP. We establish strategy complexity and computational complexity results. Our main result shows that finite-memory strategies suffice to approximate the optimal value, and the related decision problem is recursively-enumerable complete.

**14:15 – 14:45** WG4 speaker – **Dima Shaiderman**, Tel Aviv University

**Title:** Repeated Games with Incomplete Information over Predictable Ergodic Systems

**Abstract:** We study a zero-sum repeated game with incomplete information on one side, in which the states of the game are determined by a stochastic process. Consider an asymptotically predictable ergodic system with finitely many states. At each stage of the infinitely repeated game the process realizes a state, Player 1 is informed of this state, and the zero-sum game that corresponds to it is played. Player 2 is informed only of the actions played. We show that the uniform value of this game exists. Moreover, in the case of two states, the game-theoretic results (i.e., the existence of the uniform value) may be utilized to deduce a probabilistic property of the underlying system.

Joint work with Ehud Lehrer.

**15 minutes' coffee break**

**15:00 – 16:00** Invited speaker talk (also a special session of the One World Mathematical Game Theory Seminar): **Costis Daskalakis**, MIT

**Title:** Equilibrium Computation and the Foundations of Deep Learning

**Abstract:** Deep Learning has recently yielded important advances in single-agent learning challenges, much of that progress being fueled by the empirical success of gradient descent and its variants in computing local optima of non-convex optimization problems. In multi-agent learning applications, the role of single-objective optimization is played by equilibrium computation, yet our understanding of its complexity in settings that are relevant for Deep Learning remains sparse. In this talk we focus on min-max optimization of nonconvex-nonconcave objectives, which has found applications in GANs and other adversarial learning problems. Here, not only are there no known gradient-descent-based methods converging to even local and approximate min-max equilibria, but the computational complexity of identifying them remains poorly understood. We show that finding approximate local min-max equilibria of Lipschitz and smooth objectives requires a number of queries to the function and its gradient that is exponential in the relevant parameters, in sharp contrast to the polynomial number of queries required to find approximate local minima of non-convex objectives. Our oracle lower bound is a byproduct of a complexity-theoretic result showing that finding approximate local min-max equilibria is computationally equivalent to finding Brouwer fixed points, and Nash equilibria in non-zero-sum games, and thus PPAD-complete. Minimal complexity-theory knowledge will be assumed in the talk.

Joint work with Stratis Skoulakis and Manolis Zampetakis.
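Though not part of the talk itself, the non-convergence phenomenon the abstract alludes to can already be seen in the simplest bilinear game: a minimal Python sketch (the objective f(x, y) = x·y, the starting point, and the step size are illustrative choices, not from the talk) shows plain simultaneous gradient descent-ascent spiraling away from the unique min-max equilibrium at (0, 0).

```python
import math

def gda(x, y, lr=0.1, steps=100):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y:
    x descends (min player), y ascends (max player)."""
    for _ in range(steps):
        gx, gy = y, x                     # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy   # simultaneous update
    return x, y

x0, y0 = 1.0, 1.0
x, y = gda(x0, y0)

# Each step multiplies the distance to the equilibrium (0, 0)
# by sqrt(1 + lr^2) > 1, so the iterates spiral outward.
print(math.hypot(x, y) > math.hypot(x0, y0))  # True
```

Even in this convex-concave toy case the iterates move away from the equilibrium, which is one reason the nonconvex-nonconcave setting discussed in the talk is so much harder.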

Program committee: Miklos Pinter, Panayotis Mertikopoulos, Antonin Kucera, Agnes Cseh

Local Organization: Galit Ashkenazi-Golan