## Section: Scientific Foundations

### Rate-distortion theory

Source coding and joint source-channel coding
rely on fundamental concepts of information theory, such as
entropy, memoryless or correlated sources, channel capacity,
and rate-distortion performance bounds.
Compression algorithms are designed to operate as close as possible to the
optimal rate-distortion bound, R(D), for a given signal.
The source coding theorem establishes performance bounds for
lossless and lossy coding. In lossless coding, the lower
bound on the rate is given by the entropy of the source. In lossy
coding, the bound is given by the rate-distortion function R(D).
This function gives
the minimum rate needed to represent a given
signal subject to
a given distortion constraint.
The rate-distortion bound is usually called OPTA
(*Optimum Performance Theoretically Attainable*). It is usually
difficult to find closed-form expressions for the function R(D),
except for
specific cases such as Gaussian sources. For real signals, this function
is defined as the
convex hull of all feasible (rate, distortion) points.
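As an illustration of such a closed-form case: for a memoryless Gaussian source of variance σ² under squared-error distortion, the rate-distortion function is R(D) = ½ log₂(σ²/D) for 0 < D ≤ σ², and 0 otherwise. A minimal sketch in Python (the function name is ours):

```python
import math

def gaussian_rd(variance: float, distortion: float) -> float:
    """R(D) of a memoryless Gaussian source with the given variance,
    under squared-error distortion, in bits per sample."""
    if distortion >= variance:
        # Encoding nothing (reconstructing the mean) already meets the target.
        return 0.0
    return 0.5 * math.log2(variance / distortion)

# Halving the allowed distortion costs exactly half a bit per sample:
print(gaussian_rd(1.0, 0.25))   # 1.0 bit/sample
print(gaussian_rd(1.0, 0.125))  # 1.5 bits/sample
```

Note the characteristic behaviour: each extra bit of rate divides the achievable distortion by four (6 dB per bit).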
The problem of finding the rate-distortion function
on this convex hull then becomes a rate-distortion minimization
problem which, by using a Lagrangian formulation, can be expressed as
the minimization of the cost J = D + λR,
where λ is the Lagrange multiplier controlling the rate-distortion trade-off.

The Lagrangian cost function J is differentiated with respect to the different optimisation parameters, e.g. with respect to coding parameters such as quantization factors. The Lagrange multiplier λ is then tuned in order to reach the targeted rate-distortion point. When the problem is to optimise the end-to-end Quality of Service (QoS) of a communication system, the rate-distortion metrics must in addition take into account channel properties and channel coding. Joint source-channel coding optimisation improves the tradeoff between compression efficiency and robustness to channel noise.
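In practice, a coder often selects, among a discrete set of feasible operating points, the one minimising the Lagrangian cost J = D + λR. The sketch below illustrates this selection; the (rate, distortion) points are purely illustrative, not taken from any real coder:

```python
# Choose, among candidate operating points, the one minimising
# the Lagrangian cost J = D + lam * R.

def best_operating_point(points, lam):
    """Return the (rate, distortion) pair with minimal J = D + lam * R."""
    return min(points, key=lambda p: p[1] + lam * p[0])

# Hypothetical (rate in bits/sample, distortion in MSE) points
# lying on the convex hull of feasible operating points.
points = [(0.5, 9.0), (1.0, 4.0), (2.0, 1.0), (4.0, 0.1)]

# A small lambda favours low distortion; a large lambda favours low rate.
print(best_operating_point(points, 0.1))   # -> (4.0, 0.1)
print(best_operating_point(points, 20.0))  # -> (0.5, 9.0)
```

Sweeping λ from small to large values traces out the operating points along the convex hull, which is how a target rate or distortion is reached in rate-distortion optimised coder control.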