komm.QAModulation
Quadrature-amplitude modulation (QAM). It is a complex modulation scheme in which the constellation is given by the Cartesian product of two PAM constellations: the in-phase constellation and the quadrature constellation. More precisely, the $i$-th constellation symbol is given by $$ \begin{aligned} x_i = \left[ A_\mathrm{I} \left( 2i_\mathrm{I} - M_\mathrm{I} + 1 \right) + \mathrm{j} A_\mathrm{Q} \left( 2i_\mathrm{Q} - M_\mathrm{Q} + 1 \right) \right] \exp(\mathrm{j}\phi), \quad & i \in [0 : M), \\ & i_\mathrm{I} = i \bmod M_\mathrm{I}, \\ & i_\mathrm{Q} = \lfloor i / M_\mathrm{I} \rfloor, \end{aligned} $$ where $M_\mathrm{I}$ and $M_\mathrm{Q}$ are the orders (powers of $2$), and $A_\mathrm{I}$ and $A_\mathrm{Q}$ are the base amplitudes of the in-phase and quadrature constellations, respectively. Also, $\phi$ is the phase offset. The order of the resulting complex-valued constellation is $M = M_\mathrm{I} M_\mathrm{Q}$, a power of $2$.
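The formula above can be reproduced independently of komm in a few lines of plain Python. This is a minimal sketch only; the function name and parameter defaults are chosen here for illustration.

```python
import cmath

def qam_constellation(M_I, M_Q, A_I=1.0, A_Q=1.0, phi=0.0):
    # Build the QAM constellation directly from the formula above:
    # x_i = [A_I (2 i_I - M_I + 1) + j A_Q (2 i_Q - M_Q + 1)] exp(j phi),
    # with i_I = i mod M_I and i_Q = floor(i / M_I).
    points = []
    for i in range(M_I * M_Q):
        i_I = i % M_I             # in-phase index
        i_Q = i // M_I            # quadrature index
        x = A_I * (2 * i_I - M_I + 1) + 1j * A_Q * (2 * i_Q - M_Q + 1)
        points.append(x * cmath.exp(1j * phi))
    return points

points = qam_constellation(4, 4)  # square 16-QAM with default amplitudes
```

With the square $16$-QAM defaults, the first and last points are $-3-3\mathrm{j}$ and $3+3\mathrm{j}$, matching the doctest in the examples.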
Parameters:
- `orders` (`tuple[int, int] | int`) – A tuple $(M_\mathrm{I}, M_\mathrm{Q})$ with the orders of the in-phase and quadrature constellations, respectively; both $M_\mathrm{I}$ and $M_\mathrm{Q}$ must be powers of $2$. If specified as a single integer $M$, then it is assumed that $M_\mathrm{I} = M_\mathrm{Q} = \sqrt{M}$; in this case, $M$ must be a square power of $2$.
- `base_amplitudes` (`tuple[float, float] | float`) – A tuple $(A_\mathrm{I}, A_\mathrm{Q})$ with the base amplitudes of the in-phase and quadrature constellations, respectively. If specified as a single float $A$, then it is assumed that $A_\mathrm{I} = A_\mathrm{Q} = A$. The default value is `1.0`.
- `phase_offset` (`float`) – The phase offset $\phi$ of the constellation. The default value is `0.0`.
- `labeling` (`Literal['natural_2d', 'reflected_2d'] | ArrayLike`) – The binary labeling of the modulation. Can be specified either as a 2D-array of integers (see base class for details), or as a string. In the latter case, the string must be either `'natural_2d'` or `'reflected_2d'`. The default value is `'reflected_2d'`, corresponding to the Gray labeling.
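One way to reproduce the `'reflected_2d'` (Gray) labeling is to Gray-code each axis index separately and concatenate the bit groups. This is a sketch; the exact bit ordering (in-phase bits first, each group MSB-first) is an assumption checked against the square $16$-QAM example on this page.

```python
def gray_bits(n, width):
    # Binary-reflected Gray code of n, written MSB-first.
    g = n ^ (n >> 1)
    return [(g >> (width - 1 - k)) & 1 for k in range(width)]

def reflected_2d_labeling(M_I, M_Q):
    # Assumption: label of symbol i = Gray code of i_I followed by
    # Gray code of i_Q, with i_I = i mod M_I and i_Q = floor(i / M_I).
    m_I, m_Q = M_I.bit_length() - 1, M_Q.bit_length() - 1
    labels = []
    for i in range(M_I * M_Q):
        i_I, i_Q = i % M_I, i // M_I
        labels.append(gray_bits(i_I, m_I) + gray_bits(i_Q, m_Q))
    return labels
```

For $(M_\mathrm{I}, M_\mathrm{Q}) = (4, 4)$, this reproduces the labeling array shown in the first example below.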
Examples:
-
The square $16$-QAM modulation with $(M_\mathrm{I}, M_\mathrm{Q}) = (4, 4)$, $(A_\mathrm{I}, A_\mathrm{Q}) = (1, 1)$, and Gray labeling is constructed below.
>>> qam = komm.QAModulation(16)
>>> qam.constellation
array([-3.-3.j, -1.-3.j,  1.-3.j,  3.-3.j,
       -3.-1.j, -1.-1.j,  1.-1.j,  3.-1.j,
       -3.+1.j, -1.+1.j,  1.+1.j,  3.+1.j,
       -3.+3.j, -1.+3.j,  1.+3.j,  3.+3.j])
>>> qam.labeling
array([[0, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0],
       [0, 0, 0, 1], [0, 1, 0, 1], [1, 1, 0, 1], [1, 0, 0, 1],
       [0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1], [1, 0, 1, 1],
       [0, 0, 1, 0], [0, 1, 1, 0], [1, 1, 1, 0], [1, 0, 1, 0]])
-
The rectangular $8$-QAM modulation with $(M_\mathrm{I}, M_\mathrm{Q}) = (4, 2)$, $(A_\mathrm{I}, A_\mathrm{Q}) = (1, 2)$, and natural labeling is constructed below.
>>> qam = komm.QAModulation(
...     orders=(4, 2),
...     base_amplitudes=(1.0, 2.0),
...     labeling="natural_2d"
... )
>>> qam.constellation
array([-3.-2.j, -1.-2.j,  1.-2.j,  3.-2.j,
       -3.+2.j, -1.+2.j,  1.+2.j,  3.+2.j])
>>> qam.labeling
array([[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 0],
       [0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
constellation: NDArray[complexfloating] (cached property)
The constellation $\mathbf{X}$ of the modulation.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.constellation
array([-3.-3.j, -1.-3.j, 1.-3.j, 3.-3.j,
-3.-1.j, -1.-1.j, 1.-1.j, 3.-1.j,
-3.+1.j, -1.+1.j, 1.+1.j, 3.+1.j,
-3.+3.j, -1.+3.j, 1.+3.j, 3.+3.j])
labeling: NDArray[integer] (cached property)
The labeling $\mathbf{Q}$ of the modulation.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.labeling
array([[0, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0],
[0, 0, 0, 1], [0, 1, 0, 1], [1, 1, 0, 1], [1, 0, 0, 1],
[0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1], [1, 0, 1, 1],
[0, 0, 1, 0], [0, 1, 1, 0], [1, 1, 1, 0], [1, 0, 1, 0]])
inverse_labeling: dict[tuple[int, ...], int] (cached property)
The inverse labeling of the modulation. It is a dictionary that maps each binary tuple to the corresponding constellation index.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.inverse_labeling
{(0, 0, 0, 0): 0, (0, 1, 0, 0): 1, (1, 1, 0, 0): 2, (1, 0, 0, 0): 3,
(0, 0, 0, 1): 4, (0, 1, 0, 1): 5, (1, 1, 0, 1): 6, (1, 0, 0, 1): 7,
(0, 0, 1, 1): 8, (0, 1, 1, 1): 9, (1, 1, 1, 1): 10, (1, 0, 1, 1): 11,
(0, 0, 1, 0): 12, (0, 1, 1, 0): 13, (1, 1, 1, 0): 14, (1, 0, 1, 0): 15}
order: int (cached property)
The order $M$ of the modulation.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.order
16
bits_per_symbol: int (cached property)
The number $m$ of bits per symbol of the modulation. It is given by $$ m = \log_2 M, $$ where $M$ is the order of the modulation.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.bits_per_symbol
4
energy_per_symbol: float (cached property)
The average symbol energy $E_\mathrm{s}$ of the constellation. It assumes equiprobable symbols. It is given by $$ E_\mathrm{s} = \frac{1}{M} \sum_{i \in [0:M)} \lVert x_i \rVert^2, $$ where $\lVert x_i \rVert^2$ is the energy of constellation symbol $x_i$, and $M$ is the order of the modulation.
For the QAM, it is given by $$ E_\mathrm{s} = \frac{A_\mathrm{I}^2}{3} \left( M_\mathrm{I}^2 - 1 \right) + \frac{A_\mathrm{Q}^2}{3} \left( M_\mathrm{Q}^2 - 1 \right). $$ For the special case of a square QAM, it simplifies to $$ E_\mathrm{s} = \frac{2A^2}{3}(M - 1). $$
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.energy_per_symbol
10.0
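The closed-form expression can be sanity-checked against a brute-force average over the constellation. This is a sketch using the square $16$-QAM parameters; the variable names are chosen here for illustration.

```python
# Closed-form symbol energy vs. brute-force average (square 16-QAM,
# A_I = A_Q = 1, phase offset 0).
M_I = M_Q = 4
A_I = A_Q = 1.0

symbols = [
    A_I * (2 * (i % M_I) - M_I + 1) + 1j * A_Q * (2 * (i // M_I) - M_Q + 1)
    for i in range(M_I * M_Q)
]
average = sum(abs(x) ** 2 for x in symbols) / len(symbols)
closed_form = A_I**2 / 3 * (M_I**2 - 1) + A_Q**2 / 3 * (M_Q**2 - 1)
```

Both evaluate to $10.0$, in agreement with the doctest above.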
energy_per_bit: float (cached property)
The average bit energy $E_\mathrm{b}$ of the constellation. It assumes equiprobable symbols. It is given by $$ E_\mathrm{b} = \frac{E_\mathrm{s}}{m}, $$ where $E_\mathrm{s}$ is the average symbol energy, and $m$ is the number of bits per symbol of the modulation.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.energy_per_bit
2.5
symbol_mean: complex (cached property)
The mean $\mu_\mathrm{s}$ of the constellation. It assumes equiprobable symbols. It is given by $$ \mu_\mathrm{s} = \frac{1}{M} \sum_{i \in [0:M)} x_i. $$
For the QAM, it is given by $$ \mu_\mathrm{s} = 0. $$
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.symbol_mean
0j
minimum_distance: float (cached property)
The minimum Euclidean distance $d_\mathrm{min}$ of the constellation. It is given by $$ d_\mathrm{min} = \min_ { i, j \in [0:M), ~ i \neq j } \lVert x_i - x_j \rVert. $$
For the QAM, it is given by $$ d_\mathrm{min} = 2 \min(A_\mathrm{I}, A_\mathrm{Q}). $$ For the special case of a square QAM, it simplifies to $$ d_\mathrm{min} = 2 A. $$
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.minimum_distance
2.0
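The closed form $d_\mathrm{min} = 2 \min(A_\mathrm{I}, A_\mathrm{Q})$ can be verified by brute force over all symbol pairs. A sketch, using the rectangular $8$-QAM parameters from the second example above:

```python
# Brute-force minimum distance of the rectangular 8-QAM with
# (M_I, M_Q) = (4, 2) and (A_I, A_Q) = (1, 2).
M_I, M_Q = 4, 2
A_I, A_Q = 1.0, 2.0

symbols = [
    A_I * (2 * (i % M_I) - M_I + 1) + 1j * A_Q * (2 * (i // M_I) - M_Q + 1)
    for i in range(M_I * M_Q)
]
d_min = min(
    abs(x - y) for k, x in enumerate(symbols) for y in symbols[k + 1:]
)
```

Here $d_\mathrm{min} = 2 = 2 \min(1, 2)$: the in-phase spacing ($2 A_\mathrm{I}$) is smaller than the quadrature spacing ($2 A_\mathrm{Q}$).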
modulate()
Modulates one or more sequences of bits to their corresponding constellation symbols.
Parameters:
- `input` (`ArrayLike`) – The input sequence(s). Can be either a single sequence whose length is a multiple of $m$, or a multidimensional array where the last dimension is a multiple of $m$.
Returns:
- `output` (`NDArray[complexfloating]`) – The output sequence(s). Has the same shape as the input, with the last dimension divided by $m$.
Examples:
>>> qam = komm.QAModulation(16)
>>> qam.modulate([0, 0, 1, 1, 0, 0, 0, 1])
array([-3.+1.j, -3.-1.j])
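Conceptually, modulation is an index lookup: split the bit sequence into groups of $m$ bits, map each group to a constellation index through the inverse labeling, and read off the symbol. A sketch independent of komm, hard-coded to the square $16$-QAM with Gray labeling shown above:

```python
def gray_bits(n, width):
    # Binary-reflected Gray code of n, as an MSB-first bit tuple.
    g = n ^ (n >> 1)
    return tuple((g >> (width - 1 - k)) & 1 for k in range(width))

# Square 16-QAM constellation and inverse Gray labeling.
constellation = [(2 * (i % 4) - 3) + 1j * (2 * (i // 4) - 3) for i in range(16)]
inverse = {gray_bits(i % 4, 2) + gray_bits(i // 4, 2): i for i in range(16)}

def modulate(bits, m=4):
    # Group bits into m-tuples and look each one up in the inverse labeling.
    return [
        constellation[inverse[tuple(bits[k:k + m])]]
        for k in range(0, len(bits), m)
    ]
```

This reproduces the doctest above: `[0, 0, 1, 1]` maps to index $8$ (symbol $-3+\mathrm{j}$) and `[0, 0, 0, 1]` to index $4$ (symbol $-3-\mathrm{j}$).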
demodulate_hard()
Demodulates one or more sequences of received points to their corresponding sequences of hard bits ($\mathtt{0}$ or $\mathtt{1}$) using hard-decision decoding.
Parameters:
- `input` (`ArrayLike`) – The input sequence(s). Can be either a single sequence, or a multidimensional array.
Returns:
- `output` (`NDArray[integer]`) – The output sequence(s). Has the same shape as the input, with the last dimension multiplied by $m$.
demodulate_soft()
Demodulates one or more sequences of received points to their corresponding sequences of soft bits (L-values) using soft-decision decoding. The soft bits are the log-likelihood ratios of the bits, where positive values correspond to bit $\mathtt{0}$ and negative values correspond to bit $\mathtt{1}$.
Parameters:
- `input` (`ArrayLike`) – The received sequence(s). Can be either a single sequence, or a multidimensional array.
- `snr` (`float`) – The signal-to-noise ratio (SNR) of the channel. It should be a positive real number. The default value is `1.0`.
Returns:
- `output` (`NDArray[floating]`) – The output sequence(s). Has the same shape as the input, with the last dimension multiplied by $m$.
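An exact L-value for bit $k$ is $\mathrm{LLR}_k(y) = \log \frac{\sum_{i : q_{i,k} = 0} \exp(-\lVert y - x_i \rVert^2 / N_0)}{\sum_{i : q_{i,k} = 1} \exp(-\lVert y - x_i \rVert^2 / N_0)}$, so positive values favor bit $\mathtt{0}$, matching the convention above. The sketch below is independent of komm and hard-coded to the square $16$-QAM with Gray labeling; deriving the noise power as $N_0 = E_\mathrm{s} / \mathrm{snr}$ is an assumption of this sketch.

```python
import math

def gray_bits(n, width):
    # Binary-reflected Gray code of n, as an MSB-first bit list.
    g = n ^ (n >> 1)
    return [(g >> (width - 1 - k)) & 1 for k in range(width)]

constellation = [(2 * (i % 4) - 3) + 1j * (2 * (i // 4) - 3) for i in range(16)]
labeling = [gray_bits(i % 4, 2) + gray_bits(i // 4, 2) for i in range(16)]
es = sum(abs(x) ** 2 for x in constellation) / 16  # average symbol energy

def demodulate_soft(received, snr=1.0, m=4):
    n0 = es / snr  # assumed noise power derivation
    llrs = []
    for y in received:
        metrics = [math.exp(-abs(y - x) ** 2 / n0) for x in constellation]
        for k in range(m):
            num = sum(p for p, lab in zip(metrics, labeling) if lab[k] == 0)
            den = sum(p for p, lab in zip(metrics, labeling) if lab[k] == 1)
            llrs.append(math.log(num / den))
    return llrs
```

At high SNR, a received point at $-3+\mathrm{j}$ (label `[0, 0, 1, 1]`) yields positive L-values for the first two bits and negative for the last two.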