Multi-focus image fusion using maximum symmetric surround saliency detection

Durga Prasad Bavirisetti, Ravindra Dhuli

Abstract

In digital photography, two or more objects in a scene cannot be brought into focus at the same time. If we focus on one object, we may lose detail in the others, and vice versa. Multi-focus image fusion generates an all-in-focus image from several source images, each focused on a different part of the scene. In this paper, we propose a new multi-focus image fusion method based on two-scale image decomposition and maximum-symmetric-surround saliency detection. This saliency map is particularly suitable because it highlights the salient information in the source images with well-defined boundaries. We develop a weight-map construction method based on this saliency information; the resulting weight map reliably distinguishes focused from defocused regions. Using this weight map, we design a fusion algorithm that transfers only focused-region information into the fused image. Unlike multi-scale fusion methods, our method requires only a two-scale decomposition and is therefore computationally efficient. The proposed method is tested on several multi-focus image datasets and compared with traditional and recently proposed fusion methods using various fusion metrics. The results show that the proposed method outperforms the existing ones.
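As a rough illustration of the pipeline described above, the following NumPy sketch combines a simplified grayscale variant of maximum-symmetric-surround saliency with a box-filter two-scale decomposition and a per-pixel saliency-comparison weight map. The filter radii, the grayscale simplification, and the winner-take-all weighting rule are illustrative assumptions for this sketch, not the authors' exact formulation.

```python
import numpy as np

def box_filter(img, r):
    # Mean filter of radius r via a padded integral image (edge padding).
    h, w = img.shape
    p = np.pad(img, r, mode="edge")
    ii = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def msss_saliency(gray):
    # Simplified grayscale maximum-symmetric-surround saliency:
    # each pixel is compared against the mean of the largest
    # surround window that is symmetric about it and fits in the image.
    h, w = gray.shape
    ii = np.pad(np.cumsum(np.cumsum(gray, axis=0), axis=1), ((1, 0), (1, 0)))
    ys, xs = np.mgrid[0:h, 0:w]
    yo = np.minimum(ys, h - 1 - ys)   # symmetric vertical offset
    xo = np.minimum(xs, w - 1 - xs)   # symmetric horizontal offset
    y1, y2 = ys - yo, ys + yo + 1
    x1, x2 = xs - xo, xs + xo + 1
    area = (y2 - y1) * (x2 - x1)
    region_sum = ii[y2, x2] - ii[y1, x2] - ii[y2, x1] + ii[y1, x1]
    surround_mean = region_sum / area
    blurred = box_filter(gray, 1)     # small blur stands in for a Gaussian
    return np.abs(surround_mean - blurred)

def fuse_two_scale(img1, img2, base_radius=15):
    # Two-scale decomposition: base = local mean, detail = residual.
    base1, base2 = box_filter(img1, base_radius), box_filter(img2, base_radius)
    det1, det2 = img1 - base1, img2 - base2
    # Binary weight map: pick whichever source is more salient per pixel
    # (winner-take-all rule is an illustrative assumption).
    w = (msss_saliency(img1) >= msss_saliency(img2)).astype(float)
    fused_base = w * base1 + (1.0 - w) * base2
    fused_detail = w * det1 + (1.0 - w) * det2
    return fused_base + fused_detail
```

In practice a binary weight map of this kind is usually smoothed or refined before blending to avoid visible seams at focus boundaries; the sketch omits that step for brevity.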

Keywords

Saliency map; weight map; out-of-focus; image fusion

Copyright (c) 2016 Durga Prasad Bavirisetti, Ravindra Dhuli