Welcome!

Hello, Ylloh, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are some pages that you might find helpful.

I hope you enjoy editing here and being a Wikipedian! Please sign your messages on discussion pages using four tildes (~~~~); this will automatically insert your username and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or ask your question and then place {{helpme}} before the question on your talk page. Again, welcome! Oleg Alexandrov (talk) 03:00, 24 November 2007 (UTC)

Style note

And a small note. Per WP:MoS#Sections and headings, one should use lowercase in section headings, like this. Cheers, Oleg Alexandrov (talk) 03:00, 24 November 2007 (UTC)

Coding theory

Hello and thank you very much for your valuable contributions to the articles on various codes for error correction and detection! The reason that I removed category:Coding theory in Walsh code was simply that if the subject is related to Error detection and correction and not to any other subcategory (such as Data compression), there is no use having the article sorted in both categories. In fact, I think most if not all articles in the Coding theory category should be moved to the appropriate subcategory. What's your opinion on this? Isheden (talk) 12:57, 14 October 2010 (UTC)

Thank you! I don't have a strong opinion on whether logical parent categories should be listed too, and would support your rule of not doing so for now. I'm sure there's a Wikipedia guideline somewhere, though. For me, it was just important to add those articles to some sensible category. ylloh (talk) 13:14, 14 October 2010 (UTC)

Walsh–Hadamard

Hello. After you moved Walsh code to Walsh-Hadamard code, I moved it from Walsh-Hadamard code to Walsh–Hadamard code. That is the form required by WP:MOS for this sort of thing. (I.e. an en-dash, not a hyphen.) Michael Hardy (talk) 06:41, 30 July 2011 (UTC)

Thanks! ylloh (talk) 07:57, 30 July 2011 (UTC)

Pseudo-random generator

I was a bit confused for a moment because of the existence of three related articles: pseudorandom generator, pseudorandom number generator, and cryptographically secure pseudorandom number generator. Surprisingly, the first article is never mentioned in the other two articles. Do you have some good ideas on how the three articles could be better interlinked? Nageh (talk) 18:00, 6 February 2012 (UTC)

Yeah, it seems that some cleanup is necessary here. Unfortunately, the three articles use different language and formalization. I guess there should be one main article with common concepts, and the others should link back to that main article. ylloh (talk) 18:54, 6 February 2012 (UTC)

Hoeffding's inequality

I don't know how it looks with MathJax, but without it your expansion earlier today results in several red error messages beginning "Failed to parse (lexing error): \Pr\Big(\text{$n$ coin tosses...". I don't think you can use $ signs inside <math>. Qwfp (talk) 20:31, 21 February 2012 (UTC)
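(For anyone who hits this later: texvc does not accept $...$ nested inside \text{...}. A hypothetical formula of the same shape, with the usual workaround of moving the symbols out of the \text block, would look like this:)

```latex
% Broken under texvc ($...$ nested inside \text{}):
%   \Pr\Big(\text{$n$ coin tosses give $k$ heads}\Big)
% Working alternative: keep the symbols outside \text
\Pr\Big( n \text{ coin tosses give } k \text{ heads} \Big)
```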

This looked fine in MathJax. I corrected it. Thanks! ylloh (talk)
That was quick! That has indeed fixed it. Thanks, Qwfp (talk) 20:45, 21 February 2012 (UTC)

expansion rates, again

Hi Ylloh,

I hope you are not mad at me for contradicting you so much. I think the root cause of our disagreement is what we think math articles on Wikipedia are for.

Completely aside from that, I would like to ask you a favor, since you are at a much higher level in graph theory than I am (BTW you should not call me an "expert" all the time, I'm not). Here is my problem:

As you know, for a d-regular graph, we have the relation

$h(G) \le d.$

I have the intuition that for any graph, we have

$h(G) \le \Delta(G),$

where $\Delta(G)$ is the maximum degree of the graph.

Is that relation wrong? If it is true, I did not find it in any source, and you know how I hate unsourced material... ;-)

I'd be thankful if you could help me on this. --MathsPoetry (talk) 21:44, 28 March 2012 (UTC)

This is in fact true, since every set $S$ has at most $\Delta(G) \cdot |S|$ neighbors. It is a basic fact that people use all the time without explicitly stating it, which is why you are having a hard time finding a source. ylloh (talk) 21:52, 28 March 2012 (UTC)
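(To spell out the counting argument, in my own write-up, stated for the edge expansion $h(G)$ of an n-vertex graph; the same bound holds for vertex expansion, since each vertex of $S$ contributes at most $\Delta(G)$ neighbors:)

```latex
% Each vertex of S has at most \Delta(G) incident edges, so at most
% \Delta(G)|S| edges can leave S:
|\partial S| \le \Delta(G)\,|S|
\quad\Longrightarrow\quad
h(G) = \min_{0 < |S| \le n/2} \frac{|\partial S|}{|S|} \le \Delta(G).
% For a d-regular graph, \Delta(G) = d, recovering h(G) \le d.
```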
Thanks for that explanation, Ylloh. Thank you too for that discussion about the expander rates formula; I hope I was not too harsh with you, which was not my intention. I have finished with the English Wikipedia, I will close my discussion page and user page right now. There's no point in trying to help people who don't want to be helped. No need to answer here, I won't read your answer. --MathsPoetry

Disambiguation link notification for September 20

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Small-bias sample space, you added a link pointing to the disambiguation page Uniform distribution (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:29, 20 September 2012 (UTC)

Merger

You seem to have merged Walsh-Hadamard code into Hadamard code with no discussion or explanation. Please explain your reasons for the merger, and why you chose not to discuss it. Deltahedron (talk) 17:39, 21 February 2013 (UTC)

Please see my comment on Talk:Walsh–Hadamard_code. Thanks! ylloh (talk) 19:33, 21 February 2013 (UTC)
I have responded there. Deltahedron (talk) 19:42, 21 February 2013 (UTC)

Reed–Solomon article - simple encoding procedure

I'm looking at the simple encoding procedure section and I seem to be missing something about the codewords being generated, specifically

$C(x) = (p_x(a_1), p_x(a_2), \dots, p_x(a_n)), \qquad p_x(a) = \sum_{i=1}^{k} x_i a^{i-1}.$

Assume the message x = {1, 0, 0, ... , 0}, then C(x) = {1, 1, 1, ... , 1}, which clearly isn't a multiple of a generator polynomial. The original message can be restored using

$p_x(a) = \sum_{i=1}^{n} C_i(x) \prod_{j \ne i} \frac{a - a_j}{a_i - a_j},$

but how is correction performed on such a codeword? Rcgldr (talk) 03:49, 31 March 2013 (UTC)
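(A toy sketch of the procedure under discussion, my own illustration rather than code from the article, over the prime field GF(7) instead of the usual GF(2^m):)

```python
# Toy "simple encoding procedure" over GF(7): all 7 field elements are used
# as evaluation points. Illustration only.
p = 7                              # field size; also the code length n here

def poly_eval(coeffs, a):
    """Evaluate coeffs[0] + coeffs[1]*a + ... mod p (Horner's method)."""
    r = 0
    for c in reversed(coeffs):
        r = (r * a + c) % p
    return r

def encode(msg, points):
    """C(x) = (p_x(a_1), ..., p_x(a_n)) with p_x defined by the message."""
    return [poly_eval(msg, a) for a in points]

def interpolate(points, values, a):
    """Evaluate the Lagrange interpolant of (points, values) at a, mod p."""
    total = 0
    for i, (ai, yi) in enumerate(zip(points, values)):
        term = yi
        for j, aj in enumerate(points):
            if j != i:
                term = term * (a - aj) * pow(ai - aj, -1, p) % p
        total = (total + term) % p
    return total

points = list(range(p))            # evaluation points a_i = 0, 1, ..., 6
msg = [1, 0, 0]                    # x = (1, 0, 0): the constant polynomial 1
cw = encode(msg, points)
print(cw)                          # [1, 1, 1, 1, 1, 1, 1] -- the all-ones word
print(interpolate(points, cw, 3))  # 1: interpolating an uncorrupted word
                                   # recovers p_x, here the constant 1
```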

Reading that section and the next section again, apparently "correction" meant trying a huge number of combinations to generate the equivalent of a histogram, then choosing the most popular result. So I assume encoding by multiplying by or taking the modulo with a generator polynomial wasn't done until later, and that since the 1980s (or perhaps earlier) almost all implementations use/used the modulo method: (original message) · x^t mod (generator polynomial). Rcgldr (talk) 23:08, 1 April 2013 (UTC)
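(Sketching that remainder construction, again over GF(7) and with a made-up g(x); a real BCH-view RS generator polynomial would be built from roots α, α^2, ..., α^t:)

```python
# Systematic BCH-view encoding sketch over GF(7): the codeword is
# m(x)*x^t - (m(x)*x^t mod g(x)), hence a multiple of g(x).
p = 7

def poly_mod(num, den):
    """Remainder of num(x) / den(x) over GF(p); coefficients low-to-high."""
    num = num[:]
    inv = pow(den[-1], -1, p)                 # inverse of leading coefficient
    for i in range(len(num) - len(den), -1, -1):
        q = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q * d) % p
    return num[:len(den) - 1]

g = [3, 5, 1]                     # hypothetical generator polynomial, t = 2
msg = [2, 4, 6]                   # message m(x), coefficients low-to-high
shifted = [0] * (len(g) - 1) + msg            # m(x) * x^t
rem = poly_mod(shifted, g)
codeword = [(s - r) % p for s, r in zip(shifted, rem + [0] * len(msg))]
print(codeword)                   # parity in the low t positions, then msg
print(poly_mod(codeword, g))      # [0, 0]: divisible by g(x), as required
```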
You are right that the interpolation method above only works for received words that have not been corrupted. A simple (but inefficient) decoder that corrects up to half the minimum distance is as follows: for all possible messages, check whether the corresponding codeword is close to the received word. If so, we have found the original message. This is the Theoretical decoder mentioned in the Reed-Solomon article and is very slow. The efficient methods mentioned in the article also work in the "simple encoding procedure" setting, but their description becomes a bit different since the encoding is a bit different. ylloh (talk) 15:10, 2 April 2013 (UTC)
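(A sketch of that exhaustive decoder in the same GF(7) toy setting; it enumerates all p^k candidate messages, so it is only usable for tiny codes:)

```python
# Brute-force "theoretical decoder": try every message and keep the one whose
# codeword lies within (d-1)//2 Hamming distance of the received word.
from itertools import product

p, k = 7, 3
points = list(range(p))           # evaluation points
n = len(points)
d = n - k + 1                     # Reed-Solomon minimum distance

def poly_eval(coeffs, a):
    r = 0
    for c in reversed(coeffs):
        r = (r * a + c) % p
    return r

def encode(msg):
    return [poly_eval(msg, a) for a in points]

def decode(received):
    for msg in product(range(p), repeat=k):   # all p**k candidate messages
        cw = encode(list(msg))
        errs = sum(x != y for x, y in zip(cw, received))
        if errs <= (d - 1) // 2:              # unique within half the distance
            return list(msg)
    return None                               # too many errors

received = encode([1, 0, 0])      # the all-ones codeword from above
received[0] = (received[0] + 3) % p           # corrupt one symbol
print(decode(received))           # [1, 0, 0]
```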
  • update - the article was missing two original view decoders: Berlekamp–Welch (1986), with time complexity O(n^3), and Gao (2002), with time complexity of roughly O(n^2) based on my testing, but I did not find any citable references, so I did not include a time complexity value in the article. I added these two decoders to the article and cleaned up the Berlekamp–Welch article. There are also polynomial time list decoders (they generate a list of potential polynomials), which have the potential to go beyond the normal error correction limit, which I mentioned, but there are a few variations of these, and I didn't find a good example to attempt to explain. These decoders are relatively slow and meant for shorter code words. Comparing Gao's original view extended Euclid decoder with Sugiyama's BCH view extended Euclid decoder, Gao's decoder took 3 to 12 times as long for the same RS(N,K) code, depending on N and K (Gao's algorithm works with N+2 values per iteration, Sugiyama's with (N-K)+2 per iteration; the number of iterations for the same RS(N,K) and error count is the same based on my testing). Berlekamp–Welch is the slowest due to O(n^3) Gaussian elimination. Rcgldr (talk) 04:39, 23 January 2018 (UTC)

Notation Used for Codes

Thanks for leading me to the page Block_code. I think the definition of block codes presented there is not the most general definition. The most general (and most widely accepted) definition of a block code is an injective map from $M$ to $\Sigma^n$, where M is your message set, and the code is represented as an (n, |M|, d) code. Of course, M can be $\Sigma^k$. From my understanding, when speaking in general, the term 'message length' does not make sense. When your set M is closed under linear operations and your map is linear, then your code is linear. An alternative definition of a code is the image set of C, in which case linearity just means that this set should be linear.
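(A toy illustration of that point, my own example: five messages mapped into binary 4-tuples give an (n, |M|, d) = (4, 5, 2) code, and the "message length" log2(5) is not an integer:)

```python
# Five messages mapped injectively into {0,1}^4 with pairwise Hamming
# distance >= 2: a binary (n=4, |M|=5, d=2) code with no integer "length k".
code = {0: (0,0,0,0), 1: (0,0,1,1), 2: (0,1,0,1), 3: (1,0,0,1), 4: (1,1,1,1)}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

d = min(hamming(code[i], code[j]) for i in code for j in code if i < j)
print(d)   # 2
```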

You get the definition that you mention from the definition in Block_code by setting $M = \Sigma^k$ and $|M| = |\Sigma|^k$. I'm not sure which definition is most widely accepted, and this requires some literature evidence. In theoretical computer science, I have never seen any other definition than the one in Block_code. ylloh (talk) 13:39, 24 April 2013 (UTC)

ArbCom elections are now open!

Hi,
You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 13:46, 23 November 2015 (UTC)

ArbCom 2017 election voter message

Hello, Ylloh. Voting in the 2017 Arbitration Committee elections is now open until 23.59 on Sunday, 10 December. All users who registered an account before Saturday, 28 October 2017, made at least 150 mainspace edits before Wednesday, 1 November 2017 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2017 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery (talk) 18:42, 3 December 2017 (UTC)

Reed Solomon's original view ... choice of evaluation points ... powers of α

Back in Feb 2013, you updated the article including this statement: "In many contexts it is convenient ...sequence of successive powers of a primitive root α"

For conventional BCH codes, the generator polynomial needs to have sequential powers of α as roots, and the BCH variation of Reed Solomon codes are cyclic codes, but for Reed Solomon original view, I find no advantage to any permutation of evaluation points, including sequentially increasing powers of α, and I found no citable references that mention this. I have working programs for two polynomial time original view decoders, Berlekamp–Welch (Gaussian elimination) and Gao (extended Euclid algorithm). As long as the encoder and decoder use the same permutation of evaluation points and code word values, it doesn't make any difference, as the recovered encoding polynomial will be the same. Welch has a reference to the original RS encoding, where the evaluation points begin with 0, followed by successive powers of α from 1 to n-2, then followed by 1 (= α^(n-1) = α^0): The Original View of Reed–Solomon Codes. So although the powers of α are sequential, including a 0 in the set of evaluation points breaks the pattern. In my testing and in several articles, the set of evaluation points is simply {0, 1, 2, ..., n-1} (i.e., a_i = i-1), ignoring any field-related ordering (such as powers of α). With the original view, since 0 is included in the set of evaluation points, the maximum code word size n can be the same as q, the number of elements in the field (for BCH, the maximum value for n is q-1).
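(A quick sanity check of the permutation claim, as a GF(7) toy sketch of my own: shuffling the (evaluation point, value) pairs leaves the interpolated encoding polynomial unchanged:)

```python
# Interpolating an uncorrupted codeword recovers the same encoding polynomial
# no matter how the (evaluation point, value) pairs are ordered.
import random

p = 7
points = list(range(p))
msg = [3, 1, 4]                                 # encoding polynomial coeffs

def poly_eval(coeffs, a):
    r = 0
    for c in reversed(coeffs):
        r = (r * a + c) % p
    return r

def mul_linear(poly, root):
    """Multiply poly(x) by (x - root) mod p; coefficients low-to-high."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k + 1] = (out[k + 1] + c) % p
        out[k] = (out[k] - c * root) % p
    return out

def interpolate_coeffs(pairs):
    """Lagrange interpolation returning the coefficient list, low-to-high."""
    result = [0] * len(pairs)
    for i, (ai, yi) in enumerate(pairs):
        basis, denom = [1], 1
        for j, (aj, _) in enumerate(pairs):
            if j != i:
                basis = mul_linear(basis, aj)
                denom = denom * (ai - aj) % p
        scale = yi * pow(denom, -1, p) % p
        for k, c in enumerate(basis):
            result[k] = (result[k] + c * scale) % p
    return result

pairs = [(a, poly_eval(msg, a)) for a in points]
random.shuffle(pairs)                           # any permutation of the pairs
print(interpolate_coeffs(pairs))                # [3, 1, 4, 0, 0, 0, 0] always
```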

I'll wait for feedback or some time before I consider removing that paragraph from the original view section. The BCH view section already covers using sequential powers of α in its generator polynomial. Rcgldr (talk) 05:05, 23 January 2018 (UTC)

I initially deleted that paragraph with the intention of moving it closer to the start of that section, but then restored it (via undo). I updated the talk section. Rcgldr (talk) 08:57, 24 January 2018 (UTC)

Reed–Solomon - properties - original view - cyclic

Thank you for cleaning up the wording in the properties section about making the original view cyclic. By using successive powers of α for evaluation points, the code is cyclic, and the inverse Fourier transform can be used to convert an error free codeword back into the original generating polynomial, or, if the codeword was rotated, into a polynomial with different coefficients but the same degree. Although this is interesting and should be included in the article, I'm not sure how original view encoders or decoders would benefit from a cyclic code, and many original view articles mention using 0 as one of the evaluation points, which would make the code non-cyclic. For BCH encoders, an LFSR can be used, but for original view encoding, the only optimizations I see are algorithms like Horner's method. Rcgldr (talk) 19:16, 6 February 2018 (UTC)
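(For reference, a short argument in my own notation for why successive powers of α as evaluation points give a cyclic code, assuming α has order n:)

```latex
% Codeword of p(x):  c = ( p(\alpha^0), p(\alpha^1), \dots, p(\alpha^{n-1}) ).
% Set q(x) := p(\alpha x); then \deg q = \deg p < k, and since \alpha^n = 1:
\bigl( q(\alpha^0), q(\alpha^1), \dots, q(\alpha^{n-1}) \bigr)
  = \bigl( p(\alpha^1), p(\alpha^2), \dots, p(\alpha^{n-1}), p(\alpha^0) \bigr),
% i.e. the cyclic shift of c is again a codeword (the codeword of q).
```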

ArbCom 2018 election voter message

Hello, Ylloh. Voting in the 2018 Arbitration Committee elections is now open until 23.59 on Sunday, 3 December. All users who registered an account before Sunday, 28 October 2018, made at least 150 mainspace edits before Thursday, 1 November 2018 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2018 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery (talk) 18:42, 19 November 2018 (UTC)