
Talk:BCH code


Terrible Explication


This is a general comment on the BCH entry ... it is AWFUL, TERRIBLE, OBTUSE, BAD, ....

Classic case of nerds writing for nerds. Most of us are just trying to figure out how things work, get things done, and make a little money - and don't have time to be nerds.

This entry should explain BCH codes ... not the mathematics behind them. This page should be moved to a separate referenced "BCH theory" page.

Replace this entry in entirety with something more intuitive and tutorial ... (... PLEASE ...) —Preceding unsigned comment added by 75.107.115.229 (talk) 05:25, 6 June 2010 (UTC)[reply]

I'm not sure what sort of explanation you're looking for, beyond the fact that BCH is a cyclic block code! The fact that it is useful is due to its particular mathematical properties, which can only really be explained by presenting them.
However, an "applications" section would probably benefit the article. Oli Filth(talk|contribs) 11:14, 6 June 2010 (UTC)[reply]

This article is exactly as criticized above - OBTUSE and of LITTLE USE to anyone who does not already have a sophisticated mathematical background. The observation that a BCH is a "cyclic block code" is of little help, and even seems to be a bit of a put down or an attempt at "witty repartee" rather than a genuine attempt to be helpful or to understand the problem. There most certainly is a helpful, intuitive level of explanation between "it's a cyclic block code" and the highly technical and inaccessible treatment given in this article. This isn't a criticism that should be dismissed or ridiculed - it goes to the heart of the public service that Wikipedia is trying to provide. Hopefully someone who has both an understanding of the subject and a desire to help others understand will make the needed improvements. Alan.A.Mick (talk) 14:58, 30 December 2011 (UTC)[reply]

Really? Describing BCH as a "cyclic block code" is seen as "witty repartee"? I think the entry is perfect just the way it is... If you think otherwise, then take a crack at rewriting it... *THAT* is the heart of the public service that Wikipedia provides... — Preceding unsigned comment added by 173.73.54.71 (talk) 17:04, 22 October 2012 (UTC)[reply]

It sounds like the above comments are not complaining about the description of BCH as a cyclic block code, but rather that this information is not sufficient to understand what BCH is or how it is used. A layman's explanation of what the jargon means, particularly why it is relevant, would be an improvement. Even better, set the jargon aside for the theory section, and explain what is being done and why. Is the term "Galois Field" meaningful in any way to someone learning about BCH on Wikipedia? Does the phrase "primitive narrow-sense BCH codes" help the reader understand why they are different from general BCH codes, why one might prefer either of them, or help the reader understand how to construct BCH codes? It is understandable that every article cannot be a hand-holding tutorial that gingerly leads the reader along to illumination, but surely there is some happy medium between that and strict mathematical formalism.

Some helpful information might be how to determine the correction capability for a given code. For example, if the message is 512 bytes and the BCH length is 8, then the maximum correction is 4 (always?). Or can different GF(2^n) fields be chosen to alter the corrective power? I think that a section on fields of size 2^n would be helpful for the people in the binary world. It is mathematically interesting for other fields, but a lot of people accessing the page will just be trying to understand GF(2^n) only; I don't know if there are then short-cuts/simplifications to some of the explanations?
CRC32 and Reed–Solomon have some similar subject matter. Not everyone will be familiar with algebraic fields. However, those criticizing could be constructive by saying what is better in those pages. BCH and Reed–Solomon are technically more difficult, so I would expect them to be a little more difficult to understand (require more reading). I think some worked binary examples would be helpful, along with the mathematical notation. Golay codes have some historical use. I think BCH could include technology that currently uses them. This is in the lead. Thanks to whoever has contributed to this page. 71.19.161.130 (talk) 17:11, 16 December 2013 (UTC)[reply]
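
As a rough worked example of the 512-byte question above (assuming "BCH length 8" means 8 parity bytes and a binary BCH code shortened from GF(2^13)): a 512-byte message is 4096 bits, so the smallest field with 2^m - 1 >= 4096 + parity has m = 13 (n = 8191, then shortened). Correcting t errors needs at most m*t parity bits, so 8 parity bytes = 64 bits give t = floor(64/13) = 4, which matches the "maximum correction is 4" guess; choosing a different field size m or a different parity budget changes t accordingly.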

Although BCH code is a class of cyclic error correction code, the fact that it is a cyclic code only makes clear that the maximum size of a message for a field based on GF(2^n) is (2^n - 1) elements. There is nothing in the encoding or decoding process of common error correcting codes where the fact that such codes are cyclic provides any benefit. Note that an original-view Reed–Solomon code is not a cyclic code unless the evaluated data points are successive powers of the field primitive. Outside of academics, the most common usage for Reed–Solomon original-view encoding is some implementations of jerasure, an erasure-only code used for data storage; typically a set of m evaluated data points is {0, 1, ..., m-1}, resulting in a non-cyclic code, and for GF(2^n), the maximum message length is 2^n (as opposed to 2^n - 1). Rcgldr (talk) 19:24, 16 December 2020 (UTC)[reply]

Another problem with the references


The last two links in the references are broken. 194.171.252.100 09:53, 14 September 2007 (UTC)[reply]

Problem with reference


The second reference, to this PDF, is not accessible to people without an account on the relevant database. As such, it seems that a full citation would be more appropriate: "A Simple Step-by-Step Decoding of Binary BCH Codes", CHR et al., IEICE Trans. Fundamentals, 2005; E88-A: 2236-2239. --Dyfrgi 07:55, 11 January 2007 (UTC)[reply]

Encoding/Decoding


The examples are not very elaborate. How does one create the generator polynomial from the minimal polynomials m_1(x), m_3(x), ...? What is the importance of x^4 + x + 1? The way to determine the cycles of roots of α, α^3, and α^5 is not explained. It's not clarified that r(x) means the redundancy polynomial. How do I calculate the syndrome values? How do I calculate the error locator polynomials? Shall I continue?

Can anyone please provide us with a complete algorithm description, detailed enough so it can be replicated on paper or in source code?

--Zom-B 16:54, 24 June 2007 (UTC)[reply]

"What is the importance of ?" - it is just an example of a primitive polynomial used as the basis for GF(2^4). All non-zero numbers for this field are powers of x (hex 2). "How to calculate syndromes ... error locator polynomial" - I'm assuming the article has been updated since the question was asked, since it now explains the process. "cycles of roots" - I don't see this in the article anymore. I updated the article to explain the minimum polynomials for powers of 2 modulo the field primitive polynomial. The least common multiples are simply the product of non-duplicate minimum polynomials. Rcgldr (talk) 19:33, 16 December 2020 (UTC)[reply]

The example didn't seem to be right so I have expanded it. The encoding section needs to be revised accordingly but I don't have time to do that right now. Richard Pinch 06:50, 30 June 2007 (UTC)[reply]

I have created a Czech version of the page (based on material from this one); now I am going to update the English one in a similar way. The explanation of the unreadable-character correction, leading to a simple reuse of the Euclidean algorithm with the Forney formula, will be added in the near future (no new source material, just simplifying the reasoning).

The explanation of Λ could be simplified, or rather fully replaced by the explanation based on unreadable characters.

I have generalised the BCH code definition in the Czech version to cover Reed–Solomon, which was somewhat inconsistently suggested on this page. I am not sure whether I am allowed to do the same here in the English version. Hippo.69 (talk) 18:39, 29 December 2012 (UTC)[reply]

Encoding should just be the process of calculating the remainder polynomial and appending its values to the message? All methods for calculating a CRC, etc. can be used, such as table look-ups and calculations in parallel. If it is this simple, I don't think an extra section is needed. 71.19.161.130 (talk) 16:54, 16 December 2013 (UTC)[reply]
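
For illustration, here is a minimal Octave sketch of that CRC-style view (an illustration only, assuming the (15,5) code with generator polynomial x^10 + x^8 + x^5 + x^4 + x^2 + x + 1; gf2_mod is a made-up helper name): the parity bits are the remainder of m(x)*x^(n-k) divided by g(x), appended to the message.

n = 15;  k = 5;
g = bin2dec('10100110111');          % g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1

function r = gf2_mod(a, g)
  % remainder of the binary polynomial a(x) divided by g(x), over GF(2)
  dg = floor(log2(g));
  while a >= 2^dg
    da = floor(log2(a));
    a = bitxor(a, bitshift(g, da - dg));
  end
  r = a;
end

msg      = bin2dec('10101');         % 5 message bits
shifted  = bitshift(msg, n - k);     % m(x) * x^(n-k)
parity   = gf2_mod(shifted, g);      % the n-k parity bits
codeword = bitxor(shifted, parity);  % systematic codeword: message bits then parity bits
printf('codeword = %s\n', dec2bin(codeword, n));
assert(gf2_mod(codeword, g) == 0);   % every valid codeword is divisible by g(x)

Table look-ups and parallel, multi-bit-per-step versions of the same division are possible exactly as for a CRC.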

Dispute on Graph


I don't know how this is going to be useful, but to the best of my knowledge this is the graph I obtained from running my simulation code. The main program is given here (ancillary functions omitted for want of space). You can get the full program from here.

#!/usr/bin/octave -q

#A simple communications channel,

%BPSK uncoded.
System.BER=[];
System.Error=[];
System.Length=36*1000;
System.Mesg=zeros(1,System.Length);
System.Coded=[];

%BCH coded.
System.BCHError=[];
System.BCHBER=[];
System.BCHLength=System.Length*(63.0/36.0);
System.BCHCoded=zeros(1,System.BCHLength);
System.BCHDeCoded=[];


%BPSK parameters.
System.Modulated=[];
System.DeModulated=[];
System.DeCoded=[];
System.CarrierRate=10;
System.DataRate=2;
System.SNRdb=0:0.5:10;


gen_poly=BCH_poly();

Length=length(System.SNRdb);
System.BCHActualErrors=zeros(1,Length);
System.BCHCorrected=zeros(1,Length);

%Additive White Gaussian Noise, power.
System.N0=5;


for iter=1:Length

  disp('=>')
  System.SNRdb(iter)
  System.BCHBER

  disp('Simulating AWGN  channel with BCH & BPSK')

  %Value of SNR
  Eb=System.SNR=10^(System.SNRdb(iter)/10)*System.N0;
  
  %Signal strength, power.
  System.Eb=  System.SNR*System.N0;
  Ec=Eb*36.0/63.0;
  
  %Source Message
  System.Mesg=mod(floor(randn(1,System.Length)),2);

  Rxword=[];
  Chunks=length(System.Mesg)/36;
  for biter=1:Chunks
    bmesg=System.Mesg((biter-1)*36+1:biter*36);
    cwmesg=GF_product(gen_poly,GF_polarize(bmesg));

    %convert from the 0,-1 mode to the 1,0 mode.
    cwmesgTx=cwmesg+1;

    System.BCHCoded((biter-1)*63+1:(biter)*63)=cwmesg;

%Modulate BPSK
    bch1=bpskmod(cwmesgTx,20,10,sqrt(Ec));

%Add noise
    bch2=bch1+sqrt(System.N0)*boxmuller(length(bch1)); 

%Demodulate BPSK
    cwRx=bpskdemod(bch2,20,10,sqrt(Ec)); 

%   This is in the +1,+0 mode,
%   convert to a 0,-1 mode
    cwrecv=cwRx-1;

    Rxword=[Rxword cwrecv];

%error pos gives roots w.r.t index of 0.
   errpos=BCH_decode_berlekamp(cwrecv,63,36,5);%_berlekamp

%so add +1, to convert error position locations form 0 index, to +1 index.
    errpos=errpos+1;

    if isempty(errpos)
      %disp('Codeword is correct')
      correctw=cwrecv;
    else
      %printf('Correcting %d errors\n',length(errpos))
      %errpos
      correctw=cwrecv;
      for i=1:length(errpos)
        %flip the bits,
	correctw(errpos(i))=-(1+correctw(errpos(i)));
      end
    end

   %disp('Corrected CW')
   %correctw
   %fgets(stdin)

   
   CorrectError=length(errpos);
   System.BCHCorrected(iter)=System.BCHCorrected(iter)+CorrectError;

   System.BCHDeCoded((biter-1)*63+1:biter*63)=correctw;
  end
  
  System.BCHActualErrors(iter)=Difference(System.BCHCoded,Rxword);
  System.BCHError(iter)=Difference(System.BCHCoded,System.BCHDeCoded);
  System.BCHBER(iter)=System.BCHError(iter)/System.BCHLength;





  %BPSK part of simulation,
  System.Coded=System.Mesg;

  %Modulate 
  System.Modulated=bpskmod(System.Coded,System.CarrierRate,System.DataRate,sqrt(System.Eb));
  
  
  %Channel with AWGN
  System.Channel=System.Modulated + sqrt(System.N0)*randn(1,length(System.Modulated));


  %DeModulate
  System.DeModulated=bpskdemod(System.Channel,System.CarrierRate,System.DataRate,sqrt(System.Eb));

  %Channel Decode
  System.DeCoded=System.DeModulated;

  %Error Calculation
  System.Error(iter)=Difference(System.Coded,System.DeCoded);
  System.BER(iter)=System.Error(iter)/System.Length;

end

disp('System Error')
System.Error

disp('System BER')
System.BER

disp('System.BCHError')
System.BCHError

disp('System.BCHBER')
System.BCHBER

disp('System BCH Actual')
System.BCHActualErrors

disp('System BCH Corrected')
System.BCHCorrected

disp('System SNRdb')
System.SNRdb

semilogy(System.SNRdb,System.BER,";Simulated BPSK;",System.SNRdb,0.5*erfc(sqrt(10.^(System.SNRdb/10))),";Theoretical BPSK;",System.SNRdb,System.BCHBER,";Simulated BCH, [63,36];")
title "BPSK Uncoded, Vs BCH Coded systems"
save bchchannel_sim System
fgets(stdin)
fgets(stdin)

Let me know what's the problem, if there is any. --பராசக்தி 16:48, 16 September 2006 (UTC)


I tried to run your code on my Octave, but I don't have the required other functions. Perhaps it is better to consider the theoretical error rate. The (63,36) binary BCH code with Berlekamp–Massey decoding should correct all error patterns of weight <= 5 and no error patterns of weight > 5. Since the theoretical BER for BPSK is

p = \tfrac{1}{2}\operatorname{erfc}\left(\sqrt{E_b/N_0}\right),

we can write

P_b \approx \sum_{i=t+1}^{n} \frac{i}{n}\binom{n}{i}\, p_c^{\,i}\,(1-p_c)^{n-i},

where p_c is the raw error rate at the input of the decoder (due to the rate-adjusted SNR). This assumes further that all decoder failures are detected, which is not a terrible assumption for this code.

I think that p should be equal to your "simulated BPSK" error rate. Perhaps you could add these theoretical curves to your graph to see what is going on.

Here is my Matlab code (apologies but my Octave installation has never quite worked for me).

n = 63;
k = 36;
t = 5;
rate = k/n;

ebno = 0:0.5:10;

bpsk_ber = 0.5*erfc(sqrt(10.^(ebno/10)));
coded_raw_ber = 0.5*erfc(sqrt(10.^(rate*ebno/10)));
coded_ber = 0*coded_raw_ber;

b = (0:n)/n;
b(1:(t+1)) = 0;
for i=1:length(ebno)
  coded_ber(i) = dot(binopdf(0:n,n,coded_raw_ber(i)),b);
end

semilogy(ebno,bpsk_ber,'r-',ebno,coded_raw_ber,'g-',ebno,coded_ber,'b-');
title('(63,36) BCH with bounded distance decoding');
legend('BPSK','Input to decoder','After decoder',3);
grid on


I have given a link to the code tarball on my website. Please use that for the other functions. AFAIK it is correct. The method may be different, but I think it is statistically significant, as I ran the code for 36000 bits and got a BER of 1e-4 (10x more bits than 1/BER). Also, the right way would be to run any simulation just until about 10 or 100 errors accumulate at each SNR value, as I understand from books by Moore on ECC. Let me know, now that the other functions are available. I will try & check your claims as well. --பராசக்தி 00:38, 19 September 2006 (UTC)

---

Here is an updated version of my code which uses the Matlab Communications Toolbox for the simulation. The theoretical BER formula/curve was updated to count only message bit errors.

n = 63;
k = 36;
t = 5;
rate = k/n;

ebno = 0:0.5:10;

ebno_lin = 10.^(ebno/10);
bpsk_ber = 0.5*erfc(sqrt(ebno_lin));
coded_raw_ber = 0.5*erfc(sqrt(rate*ebno_lin));
coded_ber = 0*coded_raw_ber;

% Theory
[X,Y] = meshgrid(0:k,0:n-k);
T = (((X+Y)>t)+0).*X;
for i=1:length(ebno)
  msg_err = binopdf(0:k,k,coded_raw_ber(i));
  par_err = binopdf(0:(n-k),n-k,coded_raw_ber(i))';
  joint = msg_err(ones(1,n-k+1),:) .* par_err(:,ones(1,k+1));
  coded_ber(i) = sum(sum(T.*joint))/k;
end

% Simulation (all zeros codeword)
M = 500;
sim_ber = zeros(1,length(ebno));
for i=1:2:15
  y = -1 + randn(M,n)/sqrt(2*rate*ebno_lin(i));
  ybit = (y>0)+0;
  xhat = bchdec(gf(ybit,1),n,k);
  sim_ber(i) = sum(sum(xhat~=0))/M/k;
end

semilogy(ebno,bpsk_ber,'r-',ebno,coded_raw_ber,'g-',ebno,coded_ber,'b-',ebno,sim_ber,'bo');
title('(63,36) BCH with bounded distance decoding');
legend('BPSK','Input to decoder','After decoder','Simulation',3);
xlabel('E_b / N_0');
ylabel('BER');
grid on;
set(gca,'YLim',[1e-7 1]);

The results of this script are plotted here. You will notice the theory and simulation now match because they are both correct.

Ideas from Wolfram


Wolfram Alpha writes the function in several alternative forms; see [1]. It also gives alternative, simpler forms for another expression; see [2].

More to come... -- James Michael DuPont (talk) 07:34, 29 August 2010 (UTC)[reply]

Decoding example


From Glrx's recent edit summary: "single digit symbols and errors make this a poor example." Thanks for your input. I feel that the example is a good one, as it is simple and practical. Additional examples could be added to fully illustrate the complexity of the subject, but I don't know how much value that would add to the article. Thoughts? Bobmath (talk) 15:30, 28 July 2011 (UTC)[reply]

Euclidean algorithm


For RS codes, the Euclidean decoder is another popular algorithm. I don't know if this is true for BCH codes, but since the Peterson decoder section of the Reed–Solomon error correction article refers to this BCH article, perhaps the Euclidean decoder could be mentioned here as well, with a link to that section. Rcgldr (talk) 04:30, 3 November 2011 (UTC)[reply]

I agree; for nonbinary codes or for decoding with unreadable characters I think it is the best method, as it computes Ω(x) during the calculation, so the Forney formula can be used easily afterwards. The Peterson algorithm is good only for explanation purposes; computing several determinants cannot be faster than a gcd computation. The Berlekamp–Massey algorithm is probably comparable, especially for the binary case. Both favourite algorithms compute Λ(x) without 'guessing' its degree. Hippo.69 (talk) 18:56, 29 December 2012 (UTC)[reply]

Simplified BCH Example


The article says that in GF(2^m) it is true that (a + b)^2 = a^2 + b^2.

But I think that is true in GF(2).

Page 29 of Error Control Coding, by Lin and Costello, seems to support this case. 82.57.60.212 (talk) 11:23, 14 January 2012 (UTC) Damix[reply]

I don't have Lin and Costello, but addition in GF(2^k) is just exclusive or. The 2ab does not represent a field multiply of 2 times ab but rather ab + ab — which is zero. The same confusion arises in formal differentiation. Glrx (talk) 20:11, 15 January 2012 (UTC)[reply]
To answer Damix's original concern, I would like to note the following mathematical fact. The characteristic of all the non-zero elements in GF(p^k) (for prime p) is p. That is, if you sum any element of GF(p^k) p times, you get zero. Arun Kandasamy 12:34, 25 February 2012 (UTC) — Preceding unsigned comment added by Arun ks (talkcontribs)
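
For illustration, a small Octave check of both points above (assuming the GF(2^4) field with primitive polynomial x^4 + x + 1 used elsewhere on this page; gf_mul16 is a made-up table-based helper): addition is XOR, so a + a = 0 and the 2ab term in (a + b)^2 vanishes.

prim = bin2dec('10011');
tbl = zeros(1, 15);  v = 1;              % antilog table of alpha^0 .. alpha^14
for i = 0:14
  tbl(i+1) = v;
  v = bitshift(v, 1);
  if bitand(v, 16), v = bitxor(v, prim); end
end

function c = gf_mul16(a, b, tbl)
  % table-based multiply in GF(2^4)
  if a == 0 || b == 0, c = 0; return; end
  c = tbl(mod(find(tbl == a) + find(tbl == b) - 2, 15) + 1);
end

a = 9;  b = 13;                           % two arbitrary nonzero field elements
assert(bitxor(a, a) == 0);                % a + a = 0 (characteristic 2)
lhs = gf_mul16(bitxor(a, b), bitxor(a, b), tbl);         % (a + b)^2
rhs = bitxor(gf_mul16(a, a, tbl), gf_mul16(b, b, tbl));  % a^2 + b^2
assert(lhs == rhs);
printf('(a+b)^2 = a^2 + b^2 holds for a=%d, b=%d\n', a, b);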

Syndromes numbering


Hmmm, I should reindex the syndromes so they are indexed the same way in all sections ... in the next edit, probably. Hippo.69 (talk) 00:47, 30 December 2012 (UTC) I hope this is corrected now. Hippo.69 (talk) 20:51, 4 January 2013 (UTC)[reply]

Possible error in the Forney algorithm explanation


I am not familiar with Wikipedia markup or anything like that, but I want to point out something which seems to me like a mistake. Close to the beginning of the Forney algorithm explanation, the exponent in the multiplicative formula for Λ(x) is negative, which contradicts the definition that its roots are the reciprocals of the error locations (which in fact would be the error locations themselves under that definition). If this is a mistake I hope someone will fix it, and if it's not, please explain to me why. Also excuse the lack of formatting and any spelling mistakes; I was typing this on my phone. — Preceding unsigned comment added by 37.26.148.151 (talk) 12:57, 16 September 2015 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on BCH code. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot IITalk to my owner:Online 16:24, 22 March 2016 (UTC)[reply]

Chess hashing


I just read that two early chess programs used BCH coding for their hash functions. What advantage does BCH coding add to hashing? (I am hoping it might eliminate collisions. Was/is it not used more widely due to speed considerations? Chess programs are always looking for maximum speed, accepting some degree of error in the process.) I was looking for a simple explanation, but it appears this is not a simple subject. Thanks 2601:181:8301:4510:54EA:A058:77AF:BF1B (talk) 13:15, 24 June 2016 (UTC)[reply]

Example


In the example it reads:

Seems an error to me. Madyno (talk) 21:46, 14 October 2021 (UTC)[reply]

@Madyno: - that part of the example is correct. The confusing part is stating that α = x, but then reusing x on the following lines for m_1(x), m_2(x), ... . It might have been more clear if another variable was used, for example m_1(z), m_2(z), and so on. Rcgldr (talk) 20:49, 15 October 2021 (UTC)[reply]
@Rcgldr:
Thank you for your explanation. However, being a mathematician, it makes no sense to me. x looks like a polynomial, but if it is meant to be a number, why not say so. Madyno (talk) 21:16, 15 October 2021 (UTC)[reply]
@Madyno: - for this article, a finite field GF(2^m) is defined by a reducing polynomial of degree m. In the example, the reducing polynomial for GF(2^4) is x^4 + x + 1, with a primitive element value α = 0010. There are a total of 8 primitive element values, but using α = x = 0010 is the simplest choice. In the following lines, x is redefined as a power of alpha, which is why I think switching to using z would have made this simpler to understand, with the goal of finding the minimum polynomial m_i(x) that satisfies m_i(α^i) = 0, since it's the 1-bit coefficients of m_i(x) that define the minimum polynomials. Rcgldr (talk) 23:20, 15 October 2021 (UTC)[reply]
Thanks again, I do understand all this, except for the statement α = x. As far as I know, α is a primitive element. The other 7 are α^2, α^4, α^7, α^8, α^11, α^13 and α^14. Extending GF(2) with any one of these will generate GF(2^4). The confusing part is the (possible) redefinition of the generic variable x as being the same as α from there on. Madyno (talk) 11:13, 16 October 2021 (UTC)[reply]

We're running around in circles. Compare it with the complex numbers. Extend the reals with a root of the polynomial x^2 + 1. Call this root i. As far as I know, no one will say i = x. Of course there is the sloppy way of saying: for x = i it holds that x^2 + 1 = 0. This doesn't mean i and x are the same, but merely: if you give x the value i, the polynomial will have the value 0. No more than that. Madyno (talk) 20:02, 16 October 2021 (UTC)[reply]

Defining the 8 primitive elements as powers of x assumes α = x, so why not just define the 8 primitive elements as powers of α, which is what I see in textbooks for error correction codes. Since I've switched the terms of the reducing polynomial from x to z, α = 0010 matches the table on page 6 of bch_code.pdf, which lists the powers of α. Rcgldr (talk) 12:11, 17 October 2021 (UTC)[reply]
The point is you're confusing different representations of GF(2^4). One way is the 16 nibbles: 0000, 0001, ..., 1111. They form a variant of the 16 vectors: (0,0,0,0), (0,0,0,1), ..., (1,1,1,1), showing GF(2^4) as a 4-dimensional vector space over GF(2). Another, equivalent, representation is with the polynomials a_3 x^3 + a_2 x^2 + a_1 x + a_0, where (a_3, a_2, a_1, a_0) is one of the vectors. Multiplication, also for the vector representation, is defined as for polynomials, but modulo, for instance, x^4 + x + 1 or another irreducible polynomial of degree 4. A third way is, as I wrote, extending GF(2) with a primitive element α, a root of an irreducible polynomial over GF(2) with degree 4, for instance x^4 + x + 1. In the representation with polynomials, the polynomial x is a primitive root of unity. It generates the multiplicative group of GF(2^4). The successive powers are:
x, x^2, x^3, x + 1, x^2 + x, x^3 + x^2, x^3 + x + 1, x^2 + 1, x^3 + x, x^2 + x + 1, x^3 + x^2 + x, x^3 + x^2 + x + 1, x^3 + x^2 + 1, x^3 + 1, 1.
Hence x^2 is also a primitive root of unity. But x^3 is not: (x^3)^5 = x^15 = 1. Madyno (talk) 19:31, 17 October 2021 (UTC)[reply]
Most (if not all) of my textbooks for error correcting codes define a primitive element as a polynomial in x, such as α = x. Getting back to the article, changing the terms of the reducing polynomial from x to z should eliminate the confusing part of the article where x previously had two meanings. Rcgldr (talk) 20:35, 17 October 2021 (UTC)[reply]

In the end the confusion arises from the statement α = x, where x is a polynomial. Formally this should read: α is a root of x^4 + x + 1. Madyno (talk) 07:27, 18 October 2021 (UTC)[reply]

I made your suggested change. My confusion about this is due to textbooks and articles I've read. One of my textbooks, Error Control Coding by Shu Lin and Daniel J. Costello Jr., has an example in GF(2^4) where they define α as a binary value: α = 0010. The pdf file I linked to above just lists powers of α in a table, where α^1 = 0010. In the textbook Error Correcting Codes by Peterson and Weldon, for GF(2^4) modulo x^4 + x + 1 ... "let α denote the residue class that contains X", followed by a table where α^1 = 0010. None of these use that syntax, even though it is more mathematically correct. The books and articles target implementations rather than theory. Rcgldr (talk) 08:10, 18 October 2021 (UTC)[reply]
If interested, I have example code that matches this wiki article for 3-error correction, using Sugiyama's extended Euclid decoder (note the target audience for this was a hardware group, so the decoder emulates a pair of shift registers): bch15.c. It is written for Visual Studio and uses intrinsics for the carryless multiply instruction, PCLMULQDQ. Rcgldr (talk) 08:18, 18 October 2021 (UTC)[reply]
Okay, I rest my case. Madyno (talk) 12:19, 18 October 2021 (UTC)[reply]
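
For anyone following this thread later, here is a small Octave sketch of the point about primitive elements (an illustration, assuming GF(2^4) with x^4 + x + 1 and alpha represented by the 4-bit value 0010): it lists the successive powers of alpha and marks the phi(15) = 8 powers alpha^i with gcd(i, 15) = 1, which are the primitive ones.

prim = bin2dec('10011');
v = 1;
for i = 0:14
  tag = '';
  if gcd(i, 15) == 1, tag = '  (primitive)'; end
  printf('alpha^%2d = %s%s\n', i, dec2bin(v, 4), tag);
  v = bitshift(v, 1);                          % multiply by alpha
  if bitand(v, 16), v = bitxor(v, prim); end   % reduce modulo x^4 + x + 1
end
% alpha^3, for example, is not primitive: (alpha^3)^5 = alpha^15 = 1, so its order is only 5.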

Scope of Galois field


Should the scope of the Galois field be q^m rather than q? RossBoswell (talk) 21:00, 23 April 2024 (UTC)[reply]

Decoding based on extended Euclidean algorithm


Is there a reason for using matrix-style notation versus the simpler notation shown in the other Wikipedia articles about the extended Euclidean algorithm or the Reed–Solomon implementation of the extended Euclidean algorithm? For decoding purposes, there is no need to calculate t as seen in Extended_Euclidean_algorithm#Example_2, which results in the simpler still Reed–Solomon_error_correction#Example_3, which renames s to A. There is also the issue of always using , or using negative constant (versus variable) powers such as instead of , and it's not clear that or that means , since the syntax is also used for actual multiplication versus repeated addition depending on context. It could be simpler to follow if the example just chose a specific instance of and used 4-bit binary numbers such as where appropriate. Rcgldr (talk) 04:43, 11 July 2024 (UTC)[reply]
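
On the remark about negative constant powers (which seems to refer to exponents of alpha), here is a small hedged check (assuming GF(2^4) with x^4 + x + 1 and alpha = 0010): exponents of alpha wrap modulo 2^4 - 1 = 15, so alpha^-3 denotes the same field element as alpha^12.

prim = bin2dec('10011');
tbl = zeros(1, 15);  v = 1;        % antilog table of alpha^0 .. alpha^14
for i = 0:14
  tbl(i+1) = v;
  v = bitshift(v, 1);
  if bitand(v, 16), v = bitxor(v, prim); end
end
inv_a3 = tbl(mod(-3, 15) + 1);     % alpha^(-3), looked up via the exponent mod 15
assert(inv_a3 == tbl(12 + 1));     % same element as alpha^12
printf('alpha^-3 = %s = alpha^12\n', dec2bin(inv_a3, 4));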