
UM E-Theses Collection (澳門大學電子學位論文庫)

Title

New regional multifocus image fusion techniques for extending depth of field

English Abstract

In digital imaging systems, due to the nature of the optics involved, the depth of field is restricted within the field of view: parts of the scene are in focus while others are defocused. Image fusion is a research topic concerned with combining information from multiple images into one fused image. Although a large number of methods have been proposed, many challenges remain in obtaining clearer results of higher quality. This dissertation mainly addresses the multifocus image fusion problem, a special case of image fusion, by fusing several images of the same scene taken with different focuses.

Firstly, existing research in multifocus image fusion tends to emphasize pixel-level fusion using transform domain methods. Region-level image fusion methods, especially those using new coding techniques, are still limited. To address this issue, we provide an overview of regional multifocus image fusion, and two different orthogonal matching pursuit based sparse representation methods are adopted for regional multifocus image fusion. Regional image fusion using sparse representation can achieve comparable or even better performance on multifocus image fusion problems.

Secondly, the fusion results of regional fusion approaches are encouraging; however, some information is still lost in the fusion procedure. The prominent problem of regional approaches is that their performance highly depends on correct image segmentation. If the segmentation is incorrect, artifacts will be generated in the fused image. The image segmentation steps in traditional regional approaches are also time-consuming. Besides, the number of segments is uncontrollable, and clarity information is not considered in the image partition. Hence, to avoid the above issues, a novel fusion framework based on superpixel segmentation and its post-processing is proposed. A fast and effective segmentation method is adopted to generate the superpixels. The fusion of the source images is then conducted in a superpixel-by-superpixel manner. Local image features are better preserved in superpixels than in individual image pixels, and more accurate initial decision maps are obtained by comparing the focus information in superpixels. A novel superpixel-based mean filtering is also proposed to improve the initial decision map. In this filter, the spatial consistency of the initial decision map is considered to effectively fix incorrectly selected superpixels. With the refined decision map, a better fusion result can be obtained.

Lastly, a better fusion result can be generated when clarity information is considered in the segmentation process. Nevertheless, the way of introducing clarity information into the segmentation process is still naive. To obtain better image partitions, we need a new way to introduce clarity information into the production of superpixels. This motivates a new multifocus image fusion method that enhances linear spectral clustering with depth information to produce better image partitions: we directly embed depth information into the linear spectral clustering. In addition, a multi-image guided filter is proposed to refine the initial decision map, using information from the different source images for the refinement. With the refined decision map, a better fusion result can be acquired.
In summary, this dissertation focuses on the above three aspects, which aim to develop new regional multifocus image fusion techniques for extending depth of field. All of this work has been published as a book chapter or in international journals.
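
To illustrate the superpixel-based fusion idea described in the abstract (a minimal sketch, not the dissertation's exact algorithm), the following Python code partitions one source image into superpixels with SLIC, compares a simple Laplacian-energy focus measure inside each superpixel, and copies the sharper region into the fused result. The library choices (scikit-image, SciPy), the focus measure, and the function name fuse_multifocus are assumptions made for illustration only; the dissertation's own segmentation, focus measures, and decision-map refinement differ.

import numpy as np
from scipy.ndimage import laplace
from skimage.segmentation import slic

def fuse_multifocus(img_a, img_b, n_segments=400):
    """Illustrative fusion of two grayscale images of the same scene
    taken with different focuses (assumed helper, not the thesis method)."""
    # Partition the scene into superpixels on one source image
    # (channel_axis=None tells SLIC the input is 2-D grayscale).
    labels = slic(img_a, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)

    # Simple focus measure: squared Laplacian response,
    # which is large where the image is sharp.
    sharp_a = laplace(img_a.astype(float)) ** 2
    sharp_b = laplace(img_b.astype(float)) ** 2

    fused = np.empty_like(img_a)
    decision = np.zeros(labels.shape, dtype=bool)  # True => take img_a
    for sp in np.unique(labels):
        mask = labels == sp
        # Compare mean focus energy inside this superpixel and
        # copy the sharper source region into the fused image.
        take_a = sharp_a[mask].mean() >= sharp_b[mask].mean()
        decision[mask] = take_a
        fused[mask] = img_a[mask] if take_a else img_b[mask]
    return fused, decision

In practice, the initial decision map returned here would still need spatial refinement (for example, the superpixel-based mean filtering or multi-image guided filtering mentioned above) before the final fusion.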

Issue date

2018.

Author

Duan, Jun Wei

Faculty

Faculty of Science and Technology

Department

Department of Computer and Information Science

Degree

Ph.D.

Subject

Multispectral imaging

Supervisor

Chen, Long

Chen, C. L.

Files In This Item

Full-text (Intranet only)

Location
1/F Zone C
Library URL
991006732029706306