komm.TerminatedConvolutionalCode

Terminated convolutional code. It is a linear block code obtained by terminating an $(n_0, k_0)$ convolutional code. A total of $h$ information blocks (each containing $k_0$ information bits) is encoded. The dimension of the resulting block code is thus $k = h k_0$; its length depends on the termination mode employed. There are three possible termination modes:

  • Direct truncation. The encoder always starts at state $0$, and its output ends immediately after the last information block. The encoder need not end in state $0$. The resulting block code will have length $n = h n_0$.

  • Zero termination. The encoder always starts and ends at state $0$. To achieve this, a sequence of $k_0 \mu$ tail bits is appended to the information bits, where $\mu$ is the memory order of the convolutional code. The resulting block code will have length $n = (h + \mu) n_0$.

  • Tail-biting. The encoder always starts and ends at the same state. To achieve this, the initial state of the encoder is chosen as a function of the information bits. The resulting block code will have length $n = h n_0$.

For more details, see LC04, Sec. 12.7 and WBR01.
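
The parameter bookkeeping for the three modes can be checked with simple arithmetic. A sketch, using the parameters of the example code below ($n_0 = 2$, $k_0 = 1$, $h = 3$, $\mu = 1$):

>>> n0, k0, h, mu = 2, 1, 3, 1
>>> h * k0  # dimension k, the same for all modes
3
>>> (h * n0, (h + mu) * n0, h * n0)  # lengths: direct truncation, zero termination, tail-biting
(6, 8, 6)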

Attributes:

  • convolutional_code (ConvolutionalCode)

    The convolutional code to be terminated.

  • num_blocks (int)

    The number $h$ of information blocks.

  • mode (TerminationMode)

    The termination mode. It must be one of 'direct-truncation' | 'zero-termination' | 'tail-biting'. The default value is 'zero-termination'.

Examples:

>>> convolutional_code = komm.ConvolutionalCode([[0b1, 0b11]])
>>> code = komm.TerminatedConvolutionalCode(convolutional_code, num_blocks=3, mode='direct-truncation')
>>> (code.length, code.dimension, code.redundancy)
(6, 3, 3)
>>> code.generator_matrix
array([[1, 1, 0, 1, 0, 0],
       [0, 0, 1, 1, 0, 1],
       [0, 0, 0, 0, 1, 1]])
>>> code.minimum_distance()
2
>>> code = komm.TerminatedConvolutionalCode(convolutional_code, num_blocks=3, mode='zero-termination')
>>> (code.length, code.dimension, code.redundancy)
(8, 3, 5)
>>> code.generator_matrix
array([[1, 1, 0, 1, 0, 0, 0, 0],
       [0, 0, 1, 1, 0, 1, 0, 0],
       [0, 0, 0, 0, 1, 1, 0, 1]])
>>> code.minimum_distance()
3
>>> code = komm.TerminatedConvolutionalCode(convolutional_code, num_blocks=3, mode='tail-biting')
>>> (code.length, code.dimension, code.redundancy)
(6, 3, 3)
>>> code.generator_matrix
array([[1, 1, 0, 1, 0, 0],
       [0, 0, 1, 1, 0, 1],
       [0, 1, 0, 0, 1, 1]])
>>> code.minimum_distance()
3

length: int property

The length $n$ of the code.

dimension: int property

The dimension $k$ of the code.

redundancy: int property

The redundancy $m$ of the code.

rate: float property

The rate $R = k/n$ of the code.
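
For instance, the tail-biting code from the examples above, with $k = 3$ and $n = 6$, has rate $1/2$ (a usage sketch):

>>> code.rate
0.5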

generator_matrix: npt.NDArray[np.integer] cached property

The generator matrix $G \in \mathbb{B}^{k \times n}$ of the code.

generator_matrix_right_inverse: npt.NDArray[np.integer] cached property

The right-inverse $G^+ \in \mathbb{B}^{n \times k}$ of the generator matrix.

check_matrix: npt.NDArray[np.integer] cached property

The check matrix $H \in \mathbb{B}^{m \times n}$ of the code.
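
The generator and check matrices satisfy $G H^\mathsf{T} = 0$ over $\mathbb{B}$. A quick sanity-check sketch, reusing any of the code objects from the examples above:

>>> import numpy as np
>>> G, H = code.generator_matrix, code.check_matrix
>>> bool(np.all((G @ H.T) % 2 == 0))
True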

encode

Applies the encoding mapping $\mathrm{Enc} : \mathbb{B}^k \to \mathbb{B}^n$ of the code. This method takes one or more sequences of messages and returns their corresponding codeword sequences.

Parameters:

  • input (ArrayLike)

    The input sequence(s). Can be either a single sequence whose length is a multiple of $k$, or a multidimensional array where the last dimension is a multiple of $k$.

Returns:

  • output (NDArray[integer])

    The output sequence(s). Has the same shape as the input, with the last dimension expanded from $bk$ to $bn$, where $b$ is a positive integer.
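
A usage sketch, with the direct-truncation code from the examples above. Since the code is linear, the expected output can be read off its generator matrix: encoding the message $u = (1, 0, 1)$ yields $u G$, the sum of the first and third rows.

>>> code.encode([1, 0, 1])
array([1, 1, 0, 1, 1, 1])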

inverse_encode

Applies the inverse encoding mapping $\mathrm{Enc}^{-1} : \mathbb{B}^n \to \mathbb{B}^k$ of the code. This method takes one or more sequences of codewords and returns their corresponding message sequences.

Parameters:

  • input (ArrayLike)

    The input sequence(s). Can be either a single sequence whose length is a multiple of $n$, or a multidimensional array where the last dimension is a multiple of $n$.

Returns:

  • output (NDArray[integer])

    The output sequence(s). Has the same shape as the input, with the last dimension contracted from $bn$ to $bk$, where $b$ is a positive integer.
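
Continuing the sketch above, applying inverse_encode to the codeword recovers the original message, since $G G^+ = I$:

>>> code.inverse_encode([1, 1, 0, 1, 1, 1])
array([1, 0, 1])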

check

Applies the check mapping $\mathrm{Chk}: \mathbb{B}^n \to \mathbb{B}^m$ of the code. This method takes one or more sequences of received words and returns their corresponding syndrome sequences.

Parameters:

  • input (ArrayLike)

    The input sequence(s). Can be either a single sequence whose length is a multiple of $n$, or a multidimensional array where the last dimension is a multiple of $n$.

Returns:

  • output (NDArray[integer])

    The output sequence(s). Has the same shape as the input, with the last dimension contracted from $bn$ to $bm$, where $b$ is a positive integer.
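
Continuing the sketch above: the syndrome of a valid codeword is the all-zero word (here $m = 3$).

>>> code.check([1, 1, 0, 1, 1, 1])
array([0, 0, 0])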

codewords cached

Returns the codewords of the code. This is a $2^k \times n$ matrix whose rows are all the codewords. The codeword in row $i$ corresponds to the message obtained by expressing $i$ in binary with $k$ bits (MSB on the right).
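
For the direct-truncation code from the examples above ($k = 3$, $n = 6$), the codebook has $2^3 = 8$ codewords. A sketch, assuming codewords is invoked as a method; the row for $i = 3$ corresponds to the message $(1, 1, 0)$, since the MSB is on the right:

>>> code.codewords().shape
(8, 6)
>>> code.codewords()[3]
array([1, 1, 1, 0, 0, 1])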

codeword_weight_distribution cached

Returns the codeword weight distribution of the code. This is an array of shape $(n + 1)$ in which the element at position $w$ is equal to the number of codewords of Hamming weight $w$, for $w \in [0 : n]$.
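
Continuing the sketch above, the distribution can be verified by hand from the eight codewords; the smallest nonzero weight with a nonzero count is $w = 2$, matching the minimum distance reported in the examples:

>>> code.codeword_weight_distribution()
array([1, 0, 1, 3, 2, 1, 0])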

minimum_distance cached

Returns the minimum distance $d$ of the code. This is equal to the minimum Hamming weight of the non-zero codewords.

coset_leaders cached

Returns the coset leaders of the code. This is a $2^m \times n$ matrix whose rows are all the coset leaders. The coset leader in row $i$ is a word of minimal Hamming weight whose syndrome equals $i$ expressed in binary with $m$ bits (MSB on the right). This matrix may be used as a lookup table (LUT) for syndrome-based decoding.
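
A sketch of the LUT usage, continuing the example above: the all-zero syndrome (row $0$) indexes the coset of the code itself, whose leader is the all-zero error pattern.

>>> code.coset_leaders()[0]
array([0, 0, 0, 0, 0, 0])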

coset_leader_weight_distribution cached

Returns the coset leader weight distribution of the code. This is an array of shape $(n + 1)$ in which the element at position $w$ is equal to the number of coset leaders of Hamming weight $w$, for $w \in [0 : n]$.
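
Continuing the sketch above (the values below follow from enumerating the $2^m = 8$ cosets of the direct-truncation code by hand): one coset has leader weight $0$, five have leader weight $1$, and two have leader weight $2$.

>>> code.coset_leader_weight_distribution()
array([1, 5, 2, 0, 0, 0, 0])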

packing_radius cached

Returns the packing radius of the code. This is also called the error-correcting capability of the code, and is equal to $\lfloor (d - 1) / 2 \rfloor$.
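
Continuing the sketch above: the direct-truncation code has $d = 2$, so its packing radius is $\lfloor (2 - 1) / 2 \rfloor = 0$ and it is not guaranteed to correct any errors; the zero-termination code, with $d = 3$, has packing radius $1$.

>>> code.packing_radius()
0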

covering_radius cached

Returns the covering radius of the code. This is equal to the maximum Hamming weight of the coset leaders.
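
Continuing the sketch above: the heaviest coset leader of the direct-truncation code has Hamming weight $2$, in agreement with the coset leader weight distribution sketched earlier.

>>> code.covering_radius()
2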