

Poster

Enhancing Tampered Text Detection through Frequency Feature Fusion and Decomposition

Zhongxi Chen · Shen Chen · Taiping Yao · Ke Sun · Shouhong Ding · Xianming Lin · Liujuan Cao · Rongrong Ji

# 225
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Document image tampering poses a grave risk to the veracity of information, with potential consequences ranging from misinformation dissemination to financial and identity fraud. While current detection methods utilize frequency information to uncover tampering invisible to the naked eye, they often fall short in precisely integrating this information and enhancing the high-frequency components vital for detecting subtle tampering. Addressing these gaps, we introduce the Feature Fusion and Decomposition Network (FFDN), a novel approach for Document Image Tampering Detection (DITD). Our method synergizes a Visual Enhancement Module (VEM) with a Wavelet-like Frequency Enhancement (WFE) module to improve the detection of subtle tampering traces. Specifically, the VEM enhances the detection of subtle tampering traces while preserving the original RGB detection capabilities, and the WFE further decomposes features into high-frequency and low-frequency components, placing emphasis on minuscule yet critical tampering details. Rigorous testing on the DocTamper dataset confirms FFDN's effectiveness, significantly outperforming existing state-of-the-art methods in detecting tampering.
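
The abstract does not specify the exact formulation of the wavelet-like decomposition in the WFE. As a rough illustration only, the sketch below shows one common way to split a feature map into low- and high-frequency components with a single-level Haar transform in PyTorch; the function name `haar_decompose` and the choice of Haar filters are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def haar_decompose(x: torch.Tensor):
    """Split a feature map (B, C, H, W) into low- and high-frequency parts
    using a single-level Haar transform. Hypothetical sketch: the paper's WFE
    is only described as "wavelet-like", not necessarily an exact Haar DWT."""
    # 2x2 Haar filters: one approximation (low) band and three detail (high) bands.
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    kernels = torch.stack([ll, lh, hl, hh]).unsqueeze(1)  # (4, 1, 2, 2)

    b, c, h, w = x.shape
    # Apply the same four filters to every channel (depthwise, stride 2).
    weight = kernels.repeat(c, 1, 1, 1).to(dtype=x.dtype, device=x.device)  # (4C, 1, 2, 2)
    out = F.conv2d(x, weight, stride=2, groups=c)                           # (B, 4C, H/2, W/2)
    out = out.view(b, c, 4, h // 2, w // 2)

    low = out[:, :, 0]                  # LL band: coarse structure
    high = out[:, :, 1:].flatten(1, 2)  # LH/HL/HH bands: fine, high-frequency details
    return low, high

# Example: decompose a dummy feature map.
feat = torch.randn(2, 64, 32, 32)
low, high = haar_decompose(feat)
print(low.shape, high.shape)  # torch.Size([2, 64, 16, 16]) torch.Size([2, 192, 16, 16])
```

In a setup like this, the high-frequency bands would carry the subtle edge-level traces that tampering tends to leave, while the low-frequency band preserves the coarse document layout; how FFDN fuses these components back into the detection branch is described in the paper itself.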
