Recent Comments
Given the promising results of the HCR framework for single drop regions, have you considered incorporating advanced sequence modeling techniques, such as Transformer architectures, to improve the reconstruction quality for multiple drop regions and longer gaps?
Thank you for your thoughtful question. Indeed, state-of-the-art Transformer architectures may be better able to draw inferences from noisy and/or incomplete data. Thus, employing Transformer decoders in the reconstruction phase of the introduced HCR (halftone-based compression and reconstruction) framework could help generate more coherent outputs. However, for applications with limited computational resources, such as mobile apps, LSTMs may still be competitive.
In our paper, we leave the exploration of these advanced techniques to future work.
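As a rough, hypothetical illustration of the trade-off discussed above, the sketch below contrasts a Transformer-based reconstructor with a lighter LSTM-based one for filling dropped regions of a feature sequence. All names (`TransformerReconstructor`, `LSTMReconstructor`, `feature_dim`, the drop mask) are assumptions for this toy, not the paper's actual HCR implementation, and the Transformer variant uses an encoder-style self-attention stack over the masked sequence (a common infilling setup) rather than a literal decoder.

```python
# Illustrative sketch only: module names and shapes are assumptions,
# not the HCR implementation from the paper.
import torch
import torch.nn as nn


class TransformerReconstructor(nn.Module):
    """Predicts features for dropped regions from the surrounding context."""

    def __init__(self, feature_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feature_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feature_dim, feature_dim)

    def forward(self, x, drop_mask):
        # x: (batch, seq_len, feature_dim); drop_mask: (batch, seq_len) bool,
        # True where the region was dropped and must be reconstructed.
        x = x.masked_fill(drop_mask.unsqueeze(-1), 0.0)  # blank out dropped slots
        h = self.encoder(x)                              # global self-attention context
        return self.head(h)


class LSTMReconstructor(nn.Module):
    """Lighter-weight alternative for resource-constrained deployments."""

    def __init__(self, feature_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, feature_dim)

    def forward(self, x, drop_mask):
        x = x.masked_fill(drop_mask.unsqueeze(-1), 0.0)
        h, _ = self.lstm(x)
        return self.head(h)


if __name__ == "__main__":
    batch, seq_len, dim = 2, 32, 64
    x = torch.randn(batch, seq_len, dim)
    mask = torch.zeros(batch, seq_len, dtype=torch.bool)
    mask[:, 10:18] = True  # a single contiguous drop region per sequence
    for model in (TransformerReconstructor(dim), LSTMReconstructor(dim)):
        out = model(x, mask)
        params = sum(p.numel() for p in model.parameters())
        print(type(model).__name__, tuple(out.shape), f"{params} params")
```

Printing the parameter counts gives a quick, informal sense of the footprint gap that motivates keeping an LSTM option for mobile deployments; a real comparison would of course also measure latency and reconstruction quality on multi-gap inputs.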