According to Ethereum (ETH) co-founder Vitalik Buterin, the new image compression method, the Transformer-based 1-Dimensional Tokenizer (TiTok), can encode images to a size small enough to be added on-chain.
On his Warpcast account (https://warpcast.com/vitalik.eth/0x1389d35c), Buterin called the image compression method a new way to “encode a profile picture.” He went on to say that if an image can be compressed to 320 bits, which he called “basically a hash,” it would be small enough to go on-chain for each user.
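For context on where a figure like 320 bits could come from, the arithmetic below is a rough sketch: the 32-token count comes from the TiTok paper, while the 1,024-entry codebook size is an assumption used purely for illustration.

```python
import math

# Assumption for illustration: each of the 32 tokens is an index into a
# codebook of 1,024 entries, so one token costs log2(1024) = 10 bits.
NUM_TOKENS = 32          # per the TiTok paper: one image -> 32 tokens
CODEBOOK_SIZE = 1_024    # hypothetical codebook size (not stated in the article)

bits_per_token = math.log2(CODEBOOK_SIZE)      # 10 bits
total_bits = int(NUM_TOKENS * bits_per_token)  # 320 bits
total_bytes = total_bits // 8                  # 40 bytes

print(f"{total_bits} bits (~{total_bytes} bytes) per profile picture")
# 320 bits is on the order of a hash (SHA-256, for example, is 256 bits),
# which is why Buterin describes it as small enough to store on-chain.
```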
The Ethereum co-founder became interested in TiTok after seeing an X post by a researcher at the artificial intelligence (AI) imaging platform Leonardo AI.
The researcher, posting under the name @Ethan_smith_20, briefly explained how the method could help those interested in reinterpreting high-frequency details within images to encode complex images into just 32 tokens.
Buterin's comments suggest the method could make it significantly easier for developers and creators to produce profile pictures and non-fungible tokens (NFTs).
Resolving previous image tokenization issues
TiTok, developed through a collaboration between TikTok's parent company ByteDance and the Technical University of Munich, is described as an innovative one-dimensional tokenization framework that differs significantly from the predominant two-dimensional methods in use.
According to the research paper on the image tokenization method, TiTok can compress 256-by-256-pixel images into “32 distinct tokens.”
The paper points out problems with previous image tokenization methods such as VQGAN: image tokenization was possible, but strategies were limited to “2D latent grids with fixed downsampling factors.”
2D tokenization struggles to handle the redundancies found within images, since nearby regions show many similarities.
TiTok promises to solve this problem by tokenizing images into 1D latent sequences, providing a “compact latent representation” and eliminating region redundancy.
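As a rough illustration of the difference between the two layouts (the 16x downsampling factor is an assumed example typical of VQGAN-style tokenizers, not a figure from the article):

```python
# Hypothetical comparison of latent sizes, for illustration only.
# A 2D tokenizer with a fixed downsampling factor of 16 turns a
# 256x256 image into a 16x16 grid of tokens; neighbouring grid cells
# often describe very similar image regions (redundancy).
image_size = 256
downsample_factor = 16                       # assumed, typical of VQGAN-style tokenizers
grid_side = image_size // downsample_factor  # 16
tokens_2d = grid_side * grid_side            # 256 tokens

# A 1D tokenizer like TiTok is free to allocate a short latent
# sequence for the whole image instead of one token per image region.
tokens_1d = 32                               # per the TiTok paper

print(f"2D grid: {tokens_2d} tokens, 1D sequence: {tokens_1d} tokens "
      f"({tokens_2d // tokens_1d}x fewer)")
```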
Additionally, the tokenization strategy could help optimize image storage on blockchain platforms while delivering notable improvements in processing speed: the paper reports speeds up to 410 times faster than comparable current technologies, a significant step forward in computational efficiency.
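To make the storage argument concrete, the sketch below packs 32 hypothetical 10-bit token IDs into a 40-byte blob, the kind of compact value that could plausibly be stored on-chain. The packing scheme and codebook size are assumptions for illustration, not part of TiTok or of any on-chain standard.

```python
from typing import List

def pack_tokens(token_ids: List[int], bits_per_token: int = 10) -> bytes:
    """Pack token indices into a compact byte string (illustrative only)."""
    value = 0
    for token in token_ids:
        if not 0 <= token < (1 << bits_per_token):
            raise ValueError("token id out of range for the assumed codebook")
        value = (value << bits_per_token) | token
    total_bits = bits_per_token * len(token_ids)
    return value.to_bytes((total_bits + 7) // 8, "big")

def unpack_tokens(blob: bytes, count: int, bits_per_token: int = 10) -> List[int]:
    """Reverse of pack_tokens: recover the original token indices."""
    value = int.from_bytes(blob, "big")
    mask = (1 << bits_per_token) - 1
    return [(value >> (bits_per_token * i)) & mask for i in reversed(range(count))]

# Example: 32 made-up token ids -> 40 bytes, small enough for on-chain storage.
tokens = list(range(32))
blob = pack_tokens(tokens)
assert unpack_tokens(blob, 32) == tokens
print(len(blob), "bytes")   # 40
```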