Some topics in probability theory, combinatorics and information theory
Date
2016
Publisher
University of Delaware
Abstract
This dissertation explores three topics at the intersection of probability theory, combinatorics and information theory. The first part studies small ball inequalities for sums and differences of independent, identically distributed random variables taking values in very general settings. Depending on the setting (abelian or non-abelian groups, vector spaces, or Banach spaces), we provide a collection of inequalities relating different small ball probabilities that are sharp in many cases of interest. We show that underlying these distribution-free probabilistic inequalities are inequalities of an extremal combinatorial nature, related, among other things, to classical packing problems such as the kissing number problem. As an application, we develop various moment inequalities.
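To illustrate the flavor of such comparisons with a classical warm-up (this simple bound follows from the Cauchy-Schwarz inequality and is recalled here only as an indicative example, not as one of the sharp results of the dissertation): if X and Y are independent and identically distributed on a discrete abelian group, with p(x) = P(X = x), then for every z,
\[
\mathbb{P}(X+Y=z) \;=\; \sum_{x} p(x)\,p(z-x) \;\le\; \Big(\sum_{x} p(x)^2\Big)^{1/2}\Big(\sum_{x} p(z-x)^2\Big)^{1/2} \;=\; \mathbb{P}(X-Y=0),
\]
so the largest point mass of the sum is always controlled by the small ball probability of the difference at zero.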
The second part is devoted to exploring a formal parallel between entropy inequalities in information theory and sumset estimates in additive combinatorics. Our work is closely related to the study of more-sums-than-differences (MSTD) sets in additive combinatorics. Various information-theoretic inequalities are obtained, such as an entropy analogue of the Freiman-Pigarev inequality. We also present applications of our results to the construction of polar codes with significantly improved error probability compared to the canonical construction.
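For context, in one standard formulation the Freiman-Pigarev inequality compares the sumset and the difference set of a finite set A of integers,
\[
|A+A|^{3/4} \;\le\; |A-A| \;\le\; |A+A|^{4/3},
\]
and the formal dictionary underlying this part replaces the cardinality log|A| by the Shannon entropy H(X); for instance, if X and Y are independent and uniformly distributed on finite sets A and B, then H(X+Y) is at most log|A+B|, so entropies of sums play the role of logarithms of sumset sizes.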
The concentration of measure phenomenon is one of the cornerstones of geometric functional analysis and probability theory, and it is widely used in many other areas. In the third part, we study the concentration properties of the information content, which is of central interest in information theory and is closely related to other areas such as probability theory, statistics and statistical physics. We obtain sharp exponential deviation estimates for the information content, as well as a sharp bound on the varentropy, for convex probability measures on Euclidean spaces.
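In the notation commonly used in this literature (recalled here only to fix ideas), if X has density f on \(\mathbb{R}^n\), the information content and the varentropy are
\[
\tilde h(X) \;=\; -\log f(X), \qquad V(X) \;=\; \mathrm{Var}\big(-\log f(X)\big) \;=\; \mathbb{E}\big[\big(-\log f(X)-h(X)\big)^2\big],
\]
where \(h(X) = \mathbb{E}[-\log f(X)]\) is the differential entropy. A known benchmark in the log-concave case is the sharp bound \(V(X) \le n\); the third part establishes the analogous sharp deviation and varentropy bounds for the larger class of convex probability measures.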