
Selected as an Oral Paper (0.78% of 11,532 valid submissions) at CVPR 2024

Geunhyuk Youk's paper was selected for an oral presentation at CVPR 2024. Congratulations!


  • Poster: This means you’ll present your paper as a poster. Of the 2719 accepted papers, 2305 (84.8%) are in this category.

  • Poster (Highlight): This also means you’ll present your paper as a poster. However, your paper was identified by your Area Chair as being especially interesting or innovative. As in CVPR 2023, your paper will be annotated with a special symbol in the program and during the poster session. Of the 2719 accepted papers, 324 (11.9%) were selected as highlights.

  • Oral: This means you’ll present your paper as a ~15 minute oral talk, in addition to presenting a poster. This is a big honor, as only 90 (3.3%) of the 2719 accepted papers were selected as orals, which corresponds to 0.78% of the 11,532 valid paper submissions. It is also a big responsibility, since you will be presenting in front of an audience of thousands of people.

-------------------------------------------------------------------------------------------------------------------------------------------------

Title: FMA-Net: Flow Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring

Authors: Geunhyuk Youk, Jihyong Oh, Munchurl Kim

Abstract:

We present a joint learning scheme of video super-resolution and deblurring, called VSRDB, to restore clean high-resolution (HR) videos from blurry low-resolution (LR) ones. This joint restoration problem has drawn much less attention compared to single restoration problems. In this paper, we propose a novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA), which constitute our VSRDB framework, denoted as FMA-Net. Specifically, our proposed FGDF enables precise estimation of both spatio-temporally-variant degradation and restoration kernels that are aware of motion trajectories through sophisticated motion representation learning. Compared to conventional dynamic filtering, the FGDF enables FMA-Net to effectively handle large motions in VSRDB. Additionally, the stacked FRMA blocks trained with our novel temporal anchor (TA) loss, which temporally anchors and sharpens features, refine features in a coarse-to-fine manner through iterative updates. Extensive experiments demonstrate the superiority of the proposed FMA-Net over state-of-the-art methods in terms of both quantitative and qualitative quality. Codes and pre-trained models are available at: https://kaist-viclab.github.io/fmanet-site.
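The flow-guided dynamic filtering idea can be illustrated with a short PyTorch sketch. This is not the authors' implementation; the tensor names, shapes, flow convention, and the tap-by-tap loop below are assumptions made purely for illustration. It only shows the core idea: the support positions of a per-pixel dynamic filter are shifted by a predicted optical flow, so the filter taps follow the motion trajectory instead of a fixed local window.

# Minimal, hypothetical sketch of flow-guided dynamic filtering (not the FMA-Net code).
import torch
import torch.nn.functional as F

def flow_guided_dynamic_filtering(frames, filters, flows):
    """
    frames:  (B, T, C, H, W)   neighboring LR frames
    filters: (B, T, k*k, H, W) per-pixel dynamic filter weights (k x k taps per frame)
    flows:   (B, T, 2, H, W)   optical flow from the center frame to each frame
    returns: (B, C, H, W)      filtered center-frame result
    """
    B, T, C, H, W = frames.shape
    k = int(filters.shape[2] ** 0.5)
    r = k // 2

    # Base sampling grid in pixel coordinates (x in channel 0, y in channel 1).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frames.device)  # (2, H, W)

    out = frames.new_zeros(B, C, H, W)
    for t in range(T):
        tap = 0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                # Offset each filter tap by the flow, so the k x k support is
                # placed along the motion trajectory rather than a fixed window.
                offset = torch.tensor([dx, dy], dtype=torch.float32,
                                      device=frames.device).view(1, 2, 1, 1)
                coords = base + flows[:, t] + offset          # (B, 2, H, W)
                # Normalize pixel coordinates to [-1, 1] for grid_sample.
                grid = torch.stack(
                    (2.0 * coords[:, 0] / (W - 1) - 1.0,
                     2.0 * coords[:, 1] / (H - 1) - 1.0), dim=-1)  # (B, H, W, 2)
                sampled = F.grid_sample(frames[:, t], grid, align_corners=True)
                out = out + sampled * filters[:, t, tap].unsqueeze(1)
                tap += 1
    return out

The explicit loops over taps are kept only to make the sampling pattern obvious; a practical implementation would vectorize over all k*k taps and frames at once.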

-------------------------------------------------------------------------------------------------------------------------------------------------







