
NOS Compressor

Summary

The NOS Compressor (Num Optimized Split) is a text compression algorithm designed to efficiently identify and encode repetitive patterns. Through advanced techniques such as Magic Split, subtoken generation via rotations, and Huffman encoding, the NOS Compressor achieves high compression rates without loss of information. This article details the compression process of the NOS Compressor, highlighting the importance of Magic Split in identifying the optimal splitting point for text tokenization.

Text compression is essential for efficient data storage and transmission. The NOS Compressor combines three techniques to this end: Magic Split to choose the splitting symbol, subtoken generation via rotations to expose repeated patterns, and Huffman encoding to represent those patterns compactly.

Magic Split: Identifying the Optimal Splitting Point

Magic Split is a key component of the NOS Compressor that determines the optimal splitting point in the text. This process is carried out as follows:

  • Frequency Analysis: Magic Split records the positions of every symbol in the text and measures the gaps between consecutive occurrences of the same symbol.
  • Variation Calculation: For each symbol, the variation is the difference between its largest and smallest gap.
  • Selection of Optimal Point: Among the symbols whose variation is neither 0 nor 1, the one with the smallest variation (that is, the most regularly spaced symbol) is selected as the splitting point for text tokenization.
def magic_split(self):
    # Collect, for every symbol, the gaps between its consecutive occurrences.
    unique_symbols = set(self.text)
    symbol_distances = {}
    for symbol in unique_symbols:
        indices = [i for i, char in enumerate(self.text) if char == symbol]
        if len(indices) > 1:
            symbol_distances[symbol] = [indices[i + 1] - indices[i] for i in range(len(indices) - 1)]

    # Variation = spread between the largest and smallest gap for a symbol.
    variation = {symbol: max(distances) - min(distances)
                 for symbol, distances in symbol_distances.items()}

    # Keep only symbols whose spacing variation is neither 0 nor 1.
    mins = {symbol: v for symbol, v in variation.items() if v not in (0, 1)}
    if not mins:
        return None  # no suitable splitting symbol found

    # The most regularly spaced of the remaining symbols wins.
    return min(mins, key=mins.get)
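The method above can be exercised in isolation. The sketch below is a standalone adaptation of the same logic (a plain function rather than a method, with illustrative names), showing how the splitting symbol is chosen for a small input:

```python
def magic_split(text):
    """Standalone sketch of the Magic Split heuristic described above."""
    symbol_variation = {}
    for symbol in set(text):
        indices = [i for i, ch in enumerate(text) if ch == symbol]
        if len(indices) > 1:
            gaps = [b - a for a, b in zip(indices, indices[1:])]
            variation = max(gaps) - min(gaps)
            if variation not in (0, 1):
                symbol_variation[symbol] = variation
    if not symbol_variation:
        return None  # no symbol qualifies as a splitting point
    # Pick the symbol with the smallest remaining variation.
    return min(symbol_variation, key=symbol_variation.get)
```

For the input `"ab ab ab  ab"`, the space character is the only symbol whose gap variation is neither 0 nor 1, so it is selected as the splitting point.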

Subtoken Generation via Rotations

Once the optimal splitting point is identified, subtokens are generated via rotations: each token is rotated against the others to expose repetitive patterns in the text, and the resulting subtokens are the input to the Huffman encoding stage.

Subtoken generation via rotations is a fundamental process in the NOS Compressor for identifying repetitive patterns in the text. This process consists of the following steps:

  • Token Rotation: Each token in the text is rotated to create a series of substrings.
  • Comparison of Substrings: The resulting substrings of rotated tokens are compared to identify common ones among them.
  • Identification of Optimal Subtokens: The longest and most frequently repeated subtokens are selected as the optimal symbols for encoding.
def rotate_string(self, string, n):
    # Rotate the string left by n positions (n may exceed the length).
    index = n % len(string)
    return string[index:] + string[:index]

def rotate_compare(self, tokiA, tokiB):
    # Rotate the larger token against the other, collecting the common
    # substring found at each rotation.
    if tokiA >= tokiB:
        tokA, tokB = tokiA, tokiB
    else:
        tokA, tokB = tokiB, tokiA
    ltokA = len(tokA)

    rotations = {}
    for i in range(ltokA):
        rotated = self.rotate_string(tokA, i)
        rotations[i] = self.common_string(rotated, tokB)

    # Keep the longest common substring that is more than one character
    # but shorter than the full token.
    best_r = ""
    for rot in rotations.values():
        if 1 < len(rot) < ltokA and len(rot) > len(best_r):
            best_r = rot
    return best_r

def get_subTokens(self, spl):
    # Compare every pair of distinct tokens and collect the best
    # rotated match for each pair.
    sub_tokens = self.text.split(spl)
    toks = []
    for tok in sub_tokens:
        for tok2 in sub_tokens:
            if tok != tok2:
                toks.append(self.rotate_compare(tok, tok2))
    return list(set(toks))

def tokenize(self, optimal_splitter):
    tokens = self.get_subTokens(optimal_splitter)
    tokenized_sentence = {}
    chunks = self.text.split(optimal_splitter)
    for txt in chunks:
        if len(txt) < 3:
            # Chunks too short to split are kept as-is.
            tokenized_sentence[txt] = txt
            continue
        best_split = ""
        for tok in tokens:
            # Prefer the longest subtoken that actually splits the chunk.
            if tok and len(tok) < len(txt) and len(tok) > len(best_split):
                parts = txt.split(tok)
                if len(parts) > 1:
                    best_split = tok
                    tokenized_sentence[txt] = " " + parts[0] + "-" + tok + "-" + parts[1]
    return tokenized_sentence
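The helper `common_string` is referenced but not shown in the code above. The standalone sketch below assumes it returns the longest common substring of its two arguments (an assumption, not the article's actual implementation), and demonstrates how rotating a token enumerates the candidate alignments that rotate_compare searches:

```python
from difflib import SequenceMatcher

def rotate_string(s, n):
    # Rotate the string left by n positions.
    k = n % len(s)
    return s[k:] + s[:k]

def common_string(a, b):
    # Longest common substring via difflib; assumed semantics of the
    # helper that is not shown in the article's code.
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return a[m.a:m.a + m.size]

# Every rotation of a token gives one candidate alignment to compare:
rotations = [rotate_string("abcd", i) for i in range(4)]
```

Here `rotations` enumerates `"abcd"`, `"bcda"`, `"cdab"`, and `"dabc"`; comparing each against a second token with `common_string` yields the shared patterns from which the best subtoken is chosen.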

Huffman Encoding

Huffman encoding is used to assign variable-length codes to subtokens, aiming to achieve efficient representation of the identified repetitive patterns. This process is based on constructing a Huffman tree from the frequencies of subtokens, followed by assigning codes to each subtoken based on its position in the Huffman tree.
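The article does not show the encoding code, so the following is a minimal generic Huffman table builder, not the NOS Compressor's own implementation: it counts subtoken frequencies, repeatedly merges the two least frequent subtrees, and prefixes "0" or "1" to the codes on each merge.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix-free Huffman code table from an iterable of symbols."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tiebreak id, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees, extending their codes.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]
```

For the input `"aaabbc"`, the most frequent symbol `a` receives a one-bit code while `b` and `c` each receive two-bit codes, so frequent subtokens cost fewer bits.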

Conclusions

The NOS Compressor offers an effective solution for text compression, leveraging advanced techniques such as Magic Split, subtoken generation via rotations, and Huffman encoding. This approach allows for high compression rates without loss of information, making it suitable for a variety of applications requiring efficient data compression.
