-
Giuseppe Caire
TU Berlin
Cell-free Massive MIMO
Dense large-scale antenna deployments are one of the most promising technologies for delivering very large
throughputs per unit area in cellular networks. We consider such a dense deployment: a distributed
system of multi-antenna remote radio head (RRH) units connected to the same fronthaul and serving a geographical area.
Knowledge of the DL channel between each active user and its nearby RRH antennas is most efficiently obtained at the
RRHs via reciprocity based training, that is, by estimating a user's channel using uplink (UL) pilots transmitted by the user,
and exploiting the UL/DL channel reciprocity. In this semi-tutorial talk, we review the cell-free approach proposed
by Marzetta, Larsson et al., involving only data packet replication at each RRH and local combining (i.e., no centralized
joint precoding of the signals). We identify some shortcomings of this approach, and propose a novel scheme based on
coded UL pilots and ``on-the-fly'' pilot collision detection, which allows each RRH to decide autonomously whether or not
to send data to a specific user, without explicit association of users to RRHs. We also present an interesting stochastic geometry problem arising from this
architecture, nicknamed the ``unique covering problem'', and an approximate solution for the two-dimensional case recently obtained
by Haenggi et al., motivated by our own work. Finally, we review some simple but powerful asymptotic random matrix theory results,
which allow a clean analysis of massive MIMO systems via closed-form or almost closed-form SINR expressions.
-
Laurent Decreusefond
Telecom ParisTech
Distances between point processes and applications
There is always a discrepancy between models and reality: a model may be ill-suited or wrongly calibrated. In telecommunications, Poisson point processes are often used to represent resources as well as customers. There is strong evidence that this model is far from reality, at least for the modeling of base stations. We show how the theory of distances between point processes can be used to assess the error induced by such bad choices. We also show how one can reconcile the global hypothesis of ``Poissonianity'' with the local characteristics of repulsiveness.
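To make the idea concrete, here is a minimal sketch (my own construction, not the speaker's) of a transport-type distance between two one-dimensional point patterns with equally many points: matching the sorted samples pairwise yields the 1-Wasserstein distance, which is small for a slightly jittered regular pattern and larger against an independent uniform (binomial/Poisson-like) one.

```python
import random

def w1_distance_1d(xs, ys):
    # 1-Wasserstein (transport) distance between two 1-D point patterns
    # with equally many points: match the sorted samples pairwise.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

rng = random.Random(0)
n = 100
# "Poisson-like": n independent uniform points on [0, 1].
poisson_like = [rng.random() for _ in range(n)]
# Perfectly regular ("repulsive") pattern, and a slightly jittered copy.
grid = [(k + 0.5) / n for k in range(n)]
jittered = [min(1.0, max(0.0, g + rng.gauss(0.0, 0.01))) for g in grid]

d_far = w1_distance_1d(poisson_like, grid)   # irregular vs. regular pattern
d_near = w1_distance_1d(jittered, grid)      # nearly identical patterns
```

In higher dimensions the pairwise matching becomes an optimal-assignment problem, but the principle of measuring model error as a transport cost is the same.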
-
Meik Dörpinghaus
TU Dresden
Oversampling increases the capacity pre-log of noncoherent Rayleigh fading channels
Fading is one of the key impairments of mobile radio channels. To understand the impact of fading, we study the capacity of continuous-time noncoherent Rayleigh fading channels, where the channel realizations are not known a priori to the transmitter and the receiver. Almost all results in the literature on the capacity of this channel are based on the assumption that the receiver performs matched filtering followed by sampling at symbol rate. This yields a discrete-time channel in which each transmitted symbol corresponds to one output sample. However, this approach does not properly model the underlying physical channel. As a result, many known capacity results for the discrete-time symbol rate sampling model do not properly describe the capacity of the actual physical channel. The reason for this is that the multiplication of the channel input process with the fading process leads to a bandwidth expansion, such that symbol rate sampling does not provide a sufficient statistic and, thus, is not capacity-achieving.
In this talk, we present results showing that for a continuous-time, time-selective, Rayleigh block-fading channel, oversampling the continuous-time channel output with respect to the symbol rate increases the capacity pre-log factor, implying that symbol rate sampling is not capacity-achieving. Moreover, we discuss the capacity pre-log factor for the important case of a bandlimited continuous-time stationary Rayleigh fading channel, where oversampling makes it possible to acquire a sufficient statistic. This case poses some challenging mathematical problems and is still open.
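The bandwidth-expansion mechanism can be seen numerically (an illustrative sketch of my own, not taken from the talk): multiplying two signals in time convolves their spectra, so the product of two tones at DFT bins 3 and 4 lives at bin 7, outside the band of either factor.

```python
import cmath
import math

def dft_mag(x):
    # Magnitudes of the discrete Fourier transform (direct O(n^2) sum).
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

n = 64
# Two narrowband inputs: complex exponentials at bins 3 and 4.
a = [cmath.exp(2j * math.pi * 3 * t / n) for t in range(n)]
b = [cmath.exp(2j * math.pi * 4 * t / n) for t in range(n)]
# Pointwise multiplication in time convolves the two spectra,
# moving the energy to bin 3 + 4 = 7: the occupied band expands.
prod = [a[t] * b[t] for t in range(n)]
spec = dft_mag(prod)
peak_bin = spec.index(max(spec))  # -> 7
```

Sampling the product at a rate matched only to the factors' bandwidth would alias this expanded spectrum, which is the intuition for why symbol rate sampling loses information.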
-
Ayalvadi Ganesh
University of Bristol
Channel Assignment as a Graph Colouring Problem
We study decentralised algorithms for spectrum allocation in multi-channel wireless systems. We model channel assignment as a node colouring problem, with the objective of minimising interference by maximising the spatial separation between nodes using the same channel. We consider a minimax criterion of maximising the minimum distance between two nodes assigned the same colour. We show that, if channels are assigned at random, then the minimum distance between nodes using the same channel scales as the ratio of the square root of the number of channels to the total number of nodes. On the other hand, a simple decentralised algorithm achieves a minimum distance that scales as the square root of the ratio of the number of channels to nodes.
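The gap between the two scalings is easy to observe empirically. The sketch below (a hypothetical implementation of one simple decentralised rule, not necessarily the algorithm of the talk) compares random assignment with a greedy rule in which each node picks the channel whose nearest already-assigned same-channel user is farthest away.

```python
import math
import random

def min_same_channel_distance(points, channels):
    # Smallest distance between two nodes assigned the same channel.
    best = float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if channels[i] == channels[j]:
                best = min(best, math.dist(points[i], points[j]))
    return best

def greedy_assignment(points, k):
    # Decentralised rule: each node in turn picks the channel whose
    # nearest already-assigned user of that channel is farthest away.
    channels = []
    for i, p in enumerate(points):
        def nearest(c):
            return min((math.dist(p, points[j]) for j in range(i)
                        if channels[j] == c), default=float("inf"))
        channels.append(max(range(k), key=nearest))
    return channels

rng = random.Random(0)
n, k = 200, 8
pts = [(rng.random(), rng.random()) for _ in range(n)]
d_random = min_same_channel_distance(pts, [rng.randrange(k) for _ in range(n)])
d_greedy = min_same_channel_distance(pts, greedy_assignment(pts, k))
# The greedy rule achieves a much larger minimum same-channel distance.
```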
-
Benedikt Jahnel
WIAS Berlin
Large deviations in relay-augmented wireless networks
We consider a single-cell network of random transmitters and relays in a bounded domain of
Euclidean space. We present two large deviation principles for transmitters which are unable to
communicate with the base station via one relay hop. In the first model, mobile transmitters
are unable to communicate due to low SIR. In the second model, static transmitters with time-
dependent transmission times are unable to communicate due to capacity constraints at the
chosen relay. In both cases the large-deviation rate function can be characterized as a relative
entropy w.r.t. the transmitter intensity. These results are part of a joint project with
the Leibniz Institute for High Performance Microelectronics (IHP) in Frankfurt (Oder).
-
Nicolas Macris
EPFL Lausanne
Spatial coupling in Bayesian inference
Spatial coupling first originated as an engineering construction in the field of error correcting codes for communications, and has been applied since then in various inference problems. This construction takes an underlying system and enlarges it by coupling many copies of this system along a spatial chain, in such a way that first order phase transition thresholds remain unchanged but metastable states disappear. This has desirable algorithmic consequences. More recently spatial coupling has been used as a proof technique to derive lower bounds for thresholds of (uncoupled) random constraint satisfaction problems and also to prove predictions of replica formulas for free energies and mutual informations in coding, compressive sensing and matrix factorization. The talk will review this set of ideas.
-
Mathew Penrose
University of Bath
Recent results on variants of random geometric graphs
In the classic random geometric graph (RGG) model $G(n, r)$, $n$ vertices
are placed uniformly at random in the unit square and connected by an edge
whenever they are at most $r$ apart. We consider the following variants: first
the random bipartite geometric graph where there are two types of vertex
and connections only between opposite types, and second the "soft" RGG
where vertices at most $r$ apart are connected with probability $p$ (for a further
parameter $p$).
In both variants we describe asymptotic results on connectivity, both of
which illustrate that the main obstacle to connectivity is often the presence
of isolated vertices. We also give a result on percolation in the first of these
variants. If time permits we shall also describe a recent result on the domination
number of the classic RGG.
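A small simulation sketch (my own, with illustrative parameters) of the soft RGG: thinning the edges with probability $p < 1$ makes isolated vertices, the typical obstacle to connectivity, far more likely.

```python
import math
import random

def soft_rgg(n, r, p, rng):
    # Soft random geometric graph on the unit square: vertices at most
    # r apart are joined independently with probability p.
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r and rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def isolated_count(adj):
    # Number of vertices with no neighbours at all.
    return sum(1 for nbrs in adj if not nbrs)

rng = random.Random(1)
# p = 1 recovers the classic RGG G(n, r); smaller p thins the edges.
dense = soft_rgg(300, 0.15, 1.0, rng)
thin = soft_rgg(300, 0.15, 0.1, rng)
```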
-
Dominic Schuhmacher
Georg-August-Universität Göttingen
Wireless network signals with moderately correlated shadowing still appear Poisson
The strengths of signals emitted from transmitters in a wireless network are observed by a user at a fixed position. We assume that transmitters are placed deterministically or randomly according to a hard core or Poisson point process and that signals are subjected to power-law path loss and log-normal shadowing effects. It has recently been shown that, even in more general models, the point process of signal strengths observed by the user is close to Poisson if shadowing effects are independent and "sufficiently strong". In the present work we provide upper bounds on the Wasserstein distance between the point process of signal strengths and the Poisson process with the same mean measure, and we show that Poisson process limit theorems also hold under the more realistic assumption that shadowing effects are moderately correlated.
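The model ingredients are easy to write down. The sketch below (with illustrative parameters of my choosing) builds the point process of received powers for a deterministic grid of transmitters under power-law path loss and log-normal shadowing; stronger shadowing visibly scrambles the regular structure of the grid.

```python
import math
import random

def signal_powers(transmitters, alpha, sigma, rng):
    # Power received at the origin from each transmitter: power-law
    # path loss |x|^(-alpha) times log-normal shadowing exp(sigma * Z),
    # with i.i.d. standard normal Z. Returned in increasing order.
    return sorted(
        math.dist(x, (0.0, 0.0)) ** (-alpha) * math.exp(sigma * rng.gauss(0.0, 1.0))
        for x in transmitters
    )

rng = random.Random(1)
# A deterministic 4 x 4 grid of transmitters around a user at the origin.
grid = [(i - 1.5, j - 1.5) for i in range(4) for j in range(4)]
weak = signal_powers(grid, 4.0, 0.1, rng)    # mild shadowing
strong = signal_powers(grid, 4.0, 3.0, rng)  # strong shadowing
# Under strong i.i.d. shadowing, the ordered powers from the regular
# grid already resemble the points of a Poisson process on (0, inf).
```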
-
Slawomir Stanczak
HHI and TU Berlin
Efficient nonlinear estimation of sparse stochastic signals in communication systems
In contrast to Gaussian models, sparse stochastic processes contain significant information in higher-order moments that must be assessed and taken into account by efficient algorithms in next-generation communication systems. These algorithms must exploit sparse and low-rank structures arising, e.g., due to physical propagation effects of multiple-antenna transmission at higher frequencies. In this talk, we introduce a model of sparse stochastic signals using distributions over $\ell_p$-balls, which allows for a unified quantification of the necessary higher-order statistics. To obtain real-time capable estimation algorithms for next-generation wireless systems, we propose a class of neural-network inspired estimators that are orders of magnitude less complex than general iterative and convex optimization alternatives. By precisely quantifying the resulting estimation error, novel points on the performance/complexity frontier may be found.
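As a toy example of the gain from nonlinearity on sparse signals (my own sketch, far simpler than the estimator class of the talk), a single soft-thresholding layer, the basic building block of such neural-network inspired estimators, already beats the raw noisy observation in mean squared error:

```python
import random

def soft_threshold(y, tau):
    # One-layer, neural-network-style nonlinearity: shrink towards 0,
    # setting small observations exactly to 0.
    if y > tau:
        return y - tau
    if y < -tau:
        return y + tau
    return 0.0

rng = random.Random(4)
n, k = 1000, 100
# A sparse signal: k entries of magnitude 3, the rest exactly 0.
x = [3.0 if i < k else 0.0 for i in range(n)]
y = [xi + rng.gauss(0.0, 0.5) for xi in x]       # noisy observation
x_hat = [soft_threshold(yi, 1.0) for yi in y]    # nonlinear estimate

mse_raw = sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / n
mse_soft = sum((xh - xi) ** 2 for xh, xi in zip(x_hat, x)) / n
# Zeroing the many small entries removes most of the noise, at the
# price of a small bias on the few large entries.
```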
-
Giovanni Luca Torrisi
IAC CNR Rome
A large deviation approach to super-critical bootstrap percolation on the random graph $G(n,p)$
We consider the Erdős--Rényi random graph $G(n,p)$ and we analyze the simple
irreversible epidemic process on the graph, known in the literature as bootstrap percolation.
We provide a fine asymptotic analysis of the final size $A_n^*$ of active nodes, under a suitable
super-critical regime. More specifically, we establish large deviation principles for the sequence of random variables
$\{\frac{n- A_n^*}{f(n)}\}_{n\geq 1}$, allowing the scaling function $f$ to vary in the
widest possible range.
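For intuition, bootstrap percolation on $G(n,p)$ is easy to simulate (an illustrative sketch with parameters of my choosing, placed in a super-critical regime where almost all nodes end up active):

```python
import random

def bootstrap_percolation(n, p, seeds, r, rng):
    # Bootstrap percolation on G(n, p): start from a random set of
    # active seed nodes; any node with at least r active neighbours
    # becomes active, irreversibly, until no further activations occur.
    # Returns the final size A* of the active set.
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    active = set(rng.sample(range(n), seeds))
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v not in active and len(adj[v] & active) >= r:
                active.add(v)
                changed = True
    return len(active)

rng = random.Random(2)
# Super-critical regime: mean degree ~25, threshold r = 2, 40 seeds;
# the cascade activates almost every node.
a_star = bootstrap_percolation(500, 0.05, 40, 2, rng)
```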