ORIGINAL ARTICLE
Year : 2022  |  Volume : 12  |  Issue : 3  |  Page : 177-191

A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints

Safavipour MH, Doostari MA, Sadjedi H

1 Department of Electrical Engineering, Shahed University, Tehran, Iran
2 Department of Computer Engineering, Shahed University, Tehran, Iran

Date of Submission: 29-Mar-2021
Date of Decision: 12-Mar-2022
Date of Acceptance: 19-Apr-2022
Date of Web Publication: 26-Jul-2022

Correspondence Address:
Mohammad A Doostari
Department of Computer Engineering, Shahed University, Tehran
Iran

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmss.jmss_103_21

  Abstract 


Background: The most significant motivations for designing multi-biometric systems are high-accuracy recognition and high-security assurance, as well as overcoming limitations such as non-universality, noisy sensor data, and large intra-user variations. Therefore, choosing the data for fusion is of high significance in the design of a multimodal biometric system. Feature vectors contain richer information than scores, decisions, and even raw data, making feature-level fusion more effective than fusion at other levels. Method: In the proposed method, kernels are used for fusion in feature space. First, the face features are extracted using kernel-based methods, the features of the right and left irises are extracted using the Hough transform and the Daugman algorithm, and the features of both thumbprints are extracted using a Gabor filter bank. Second, after normalization, we use kernel methods to map the feature vectors to a kernel Hilbert space, where nonlinear relations appear linear, so that the feature spaces become compatible. Then, dimensionality reduction algorithms are used for the fusion of the feature vectors extracted from the fingerprints, irises, and face. Since the proposed system uses the face, both right and left irises, and the right and left thumbprints, it is a hybrid multi-biometric system. We carried out the tests on seven databases. Results: Our results show that the hybrid multimodal template, while being secure against spoofing attacks and making the system robust, can use a dimensionality of only 15 features to raise the accuracy of the hybrid multimodal biometric system to 100%, a significant improvement over uni-biometric and other multimodal systems. Conclusion: The proposed method can be used to search large databases. Consequently, in a large database of secure multimodal templates, the corresponding class of a test sample can be correctly identified without any consistency error.

Keywords: Feature-level fusion, hybrid, kernel, multimodal biometric


How to cite this article:
Safavipour MH, Doostari MA, Sadjedi H. A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints. J Med Signals Sens 2022;12:177-91

How to cite this URL:
Safavipour MH, Doostari MA, Sadjedi H. A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints. J Med Signals Sens [serial online] 2022 [cited 2022 Aug 13];12:177-91. Available from: https://www.jmssjournal.net/text.asp?2022/12/3/177/351883




  Introduction


Biometric systems relying on a single biometric modality suffer considerable limitations owing to the biometric traits themselves, poor data quality, and noise. Multi-biometric systems use fusion to integrate multiple biometric sources so that authentication accuracy is improved.[1] Five scenarios for provisioning multiple sources of biometric information are commonly distinguished: multi-sensor, multi-algorithm, multi-instance, multi-sample, and multimodal systems. In the first four, several pieces of evidence are drawn from a single biometric trait (e.g., iris or fingerprint), whereas in the fifth scenario (the multimodal biometric system), several biometric traits (e.g., iris and fingerprint) are considered. A further possibility is to combine several of these five scenarios, typically known as a hybrid multi-biometric system. Moreover, combining a greater number of traits increases the complexity and security of user authentication.[2] Hence, the implementation of multimodal biometric systems is recommended to address the aforesaid problems. Owing to its enhanced reliability, applicability, and security, multimodal biometrics has become a development orientation in biometric recognition for many researchers.[3] Biometric data fusion may take place at four levels. If it happens at the sensor level,[4] raw data are combined. This type of fusion is illogical for designing a multimodal system; however, it can help increase the efficiency of a uni-biometric system. Feature-level fusion combines the feature vectors extracted from various biometrics of the same class. Furthermore, the scores achieved from various classifiers may be combined at the score level[5] in case each classifier pertains to a single biometric; owing to its simplicity and low-cost processing, this is the most popular fusion technique in multi-biometric system design. Finally, decision-level fusion combines several decisions, each of which is the product of a uni-biometric system.[6] Decision-level fusion is less efficient than even score-level fusion, and both of these levels depend on the recognition performance of the underlying unimodal systems, leaving limited room for improvement. Compared with the other three levels, the feature level is capable of retaining the most discriminative data from the original feature sets while removing the redundant information among them.[3] Therefore, feature-level fusion proves best for designing a multimodal system owing to the rich information in feature vectors [Figure 1].
Figure 1: Different levels of fusion in multi-biometric systems



Fusion of feature vectors can take place within a feature space, transforming multiple feature vectors into a single vector whose discriminative power exceeds that of the original vectors, through "serial or parallel combination," "dimensionality reduction algorithms," or "binary feature fusion." This article suggests feature-level fusion of five biometric modalities, namely the face, the right and left thumbprints, and the right and left irises, using the dimensionality reduction process in order to achieve a robust and secure biometric template for a multimodal recognition system. First, face feature vectors are extracted by kernel linear discriminant analysis (KLDA), fingerprint features are extracted using a Gabor filter bank, and iris features are extracted by the Hough transform and the Daugman algorithm. Then, after normalization, kernel methods are used to make the three feature spaces compatible and to turn nonlinear relations into linear ones. In doing so, along with dimensionality reduction, the feature vectors of the thumbprints, irises, and face are combined through mapping onto the kernel Hilbert space.


  Material and Method


The block diagram of the hybrid multimodal biometric system, comprising the three main modules of feature extraction, feature fusion, and classification, is shown in [Figure 2]. The modules are explained in the following subsections.
Figure 2: Block diagram of the hybrid multimodal biometric system



Feature extraction module

The feature extraction module extracts the best features for each of the face, iris, and fingerprint biometrics separately and maps the system from image space to feature space.

Face feature extraction

As [Figure 3] shows, to extract face features, if the images have any background other than the face, face detection algorithms are first used to segregate the face from the background. Then, face features are identified using algorithms introduced for face recognition, such as principal component analysis (PCA),[7] linear discriminant analysis (LDA),[8] locality preserving projections (LPPs)[2] and local binary patterns,[1] discrete cosine transform[9] and singular value decomposition,[10] canonical correlation analysis (CCA)[11] and discriminant correlation analysis,[12] neural networks (NNs), and deep learning.[13]
Figure 3: Face detector and extractor module



Face recognition has wide applicability as an important and interesting topic in the computer vision domain, with applications ranging from surveillance and human–computer interfaces to access control and augmented reality. However, it remains challenging owing to both intrinsic and extrinsic appearance changes (e.g., aging, expression variations, occlusion, pose, and illumination variations).[14] The face recognition problem is therefore often considered nonlinear, mostly owing to its complexity and the small number of prototype images. Given that kernel techniques can effectively capture nonlinear similarities among samples, kernel-based face recognition methods have been introduced to extend linear algorithms: kernel functions are used to map the samples implicitly onto a new, higher-dimensional feature space. The kernel function is defined as k(x, y) = ⟨Φ(x), Φ(y)⟩, where Φ: Rⁿ → H denotes a nonlinear mapping from the original space to a kernel Hilbert space and ⟨Φ(x), Φ(y)⟩ denotes the dot product of the two mapped vectors Φ(x) and Φ(y). Evaluating the kernel function therefore amounts to computing the dot product of two data points in the kernel Hilbert space corresponding to that kernel. This important property of kernel functions is known as the kernel trick.[15]

To find the kernel function that best computes principal components or linear discriminants from the high-order correlations of the input pixels forming a face image, the input image is mapped into a higher-order feature space using several kernels, and the kernel that responds best is selected (implemented in MATLAB). The five kernel functions used for extracting face features, together with the kernel-based dimensionality reduction methods commonly used in kernel applications, can be represented as follows (a hedged implementation sketch follows the list):

Gaussian function: k(x, y) = exp(−‖x − y‖²/(2σ²)) (σ: bandwidth parameter of the Gaussian kernel) (1)

Polynomial function: k(x, y) = (xᵀy)^d (d: degree of the polynomial function) (2)

PolyPlus function: k(x, y) = (xᵀy + 1)^d (3)

Linear function: k(x, y) = xᵀy (4)

Hamming function: a kernel based on the Hamming similarity between the two images (m: number of pixels in the image) (5)
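
The printed formulas for these kernels did not all survive reproduction here, so the following sketch (illustrative Python, not the article's MATLAB code) implements them under common conventions; the Gaussian scaling of 2σ² and the Hamming form (fraction of the m pixels on which two quantized images agree) are assumptions rather than the article's exact definitions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF): k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def polynomial_kernel(x, y, d=2):
    # Polynomial: k(x, y) = (x . y)^d, with degree d
    return np.dot(x, y) ** d

def polyplus_kernel(x, y, d=2):
    # PolyPlus (inhomogeneous polynomial): k(x, y) = (x . y + 1)^d
    return (np.dot(x, y) + 1.0) ** d

def linear_kernel(x, y):
    # Linear: k(x, y) = x . y
    return np.dot(x, y)

def hamming_kernel(x, y):
    # Assumed form: fraction of the m pixels on which the two
    # (quantized) images agree; the article's exact definition is not shown.
    m = x.size
    return np.count_nonzero(x == y) / m
```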

Iris feature extraction

Despite the existing challenges, iris recognition is attracting attention as an efficient biometric technology. The Daugman algorithm[16] and the Hough transform[17] are used to extract iris features. As [Figure 4] shows, the algorithm for extracting iris features can be summarized in three steps:
Figure 4: Iris segmentation and normalization



  1. The first and highly significant step in iris recognition is localizing the iris boundaries in the eye image
  2. After establishing the inner and outer iris boundaries, a geometric normalization scheme is invoked: a rubber sheet model transforms the iris texture in the annular region from Cartesian coordinates to pseudo-polar coordinates, so that the segmented annular region is unwrapped into a rectangular block of fixed dimensions (a sketch of this step follows the list)
  3. Although the two irises can be compared using the unwrapped iris directly (e.g., via correlation filters), a feature extraction procedure is generally applied to encode the textural content.
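
A minimal sketch of the rubber-sheet step, assuming circular pupil and limbus boundaries and nearest-neighbor sampling; the radial and angular resolutions are illustrative choices, not values taken from the article:

```python
import numpy as np

def rubber_sheet_unwrap(eye_img, pupil_c, pupil_r, iris_c, iris_r,
                        radial_res=64, angular_res=256):
    """Map the annular iris region onto a fixed-size rectangular block
    (Daugman's rubber-sheet model)."""
    out = np.zeros((radial_res, angular_res), dtype=eye_img.dtype)
    for j in range(angular_res):
        t = 2.0 * np.pi * j / angular_res
        # Boundary points on the pupil (inner) and limbus (outer) circles.
        x_in = pupil_c[0] + pupil_r * np.cos(t)
        y_in = pupil_c[1] + pupil_r * np.sin(t)
        x_out = iris_c[0] + iris_r * np.cos(t)
        y_out = iris_c[1] + iris_r * np.sin(t)
        for i in range(radial_res):
            r = i / (radial_res - 1)
            # Linear interpolation between the two boundaries gives the
            # pseudo-polar coordinates (r, theta).
            x = (1 - r) * x_in + r * x_out
            y = (1 - r) * y_in + r * y_out
            out[i, j] = eye_img[int(round(y)), int(round(x))]
    return out
```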


To extract the iris features, a 1D Log-Gabor filter can be applied to the normalized image to represent the iris texture information. The frequency response of the Log-Gabor filter can be represented as (6):

G(f) = exp(−(log(f/f0))² / (2(log(σ/f0))²)) (6)

where f0 indicates the center frequency, and σ denotes the filter bandwidth.

The iris features are encoded in a 9600-bit code, and the upper and lower eyelashes in a 9600-bit mask.
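
A sketch of one common version of this encoding (in the style of open-source implementations of Daugman's method): each row of the normalized iris is filtered with a 1D Log-Gabor filter and every complex coefficient is phase-quantized to 2 bits. The filter parameters, and the 32 × 150 layout that would yield a 9600-bit code, are assumptions:

```python
import numpy as np

def log_gabor_1d(n, f0=0.1, sigma_over_f0=0.5):
    # Frequency response G(f) = exp(-(log(f/f0))^2 / (2 * log(sigma/f0)^2)),
    # defined for positive frequencies only.
    f = np.fft.fftfreq(n)
    g = np.zeros(n)
    pos = f > 0
    g[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) /
                    (2.0 * np.log(sigma_over_f0) ** 2))
    return g

def encode_iris(norm_iris):
    """Phase-quantize the 1D Log-Gabor response of each row to 2 bits;
    e.g., a 32 x 150 normalized iris yields 32 * 150 * 2 = 9600 bits."""
    rows, cols = norm_iris.shape
    g = log_gabor_1d(cols)
    code = np.zeros((rows, cols, 2), dtype=np.uint8)
    for i in range(rows):
        resp = np.fft.ifft(np.fft.fft(norm_iris[i].astype(float)) * g)
        code[i, :, 0] = resp.real > 0  # bit 1: sign of the real part
        code[i, :, 1] = resp.imag > 0  # bit 2: sign of the imaginary part
    return code.ravel()
```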

Fingerprint feature extraction

Fingerprint recognition is primarily feature based (rather than image based), and the features used have a physical interpretation. The fingerprint texture features are used as the fingerprint feature space. Methods such as the Gabor filter bank, minutiae matching,[18] the short-time Fourier transform,[19] and the Gabor wavelet[20] are used for fingerprint feature extraction. One common method, the Gabor filter bank, is illustrated in [Figure 5].
Figure 5: Fingerprint feature extraction steps and the resulting Gabor filter bank



After preprocessing (enhancement, binarization, and thinning) to improve the fingerprint image, the fingerprint feature extraction algorithm can be summarized in four major steps (a sketch follows the list):

  1. Determining the reference point and corresponding target area
  2. Segmenting the target area around the reference point
  3. Filtering the target area at six or eight different directions using the Gabor filter bank
  4. Calculating the absolute standard deviation of gray levels at each segment in order to generate a feature vector.[21]
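
A rough sketch of steps 3 and 4, filtering at eight orientations and taking the absolute deviation of gray levels per cell; the article segments circular sectors around the reference point, so the rectangular grid and the filter parameters used here are simplifying assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=33):
    # Real part of a Gabor filter tuned to ridge frequency `freq`
    # and orientation `theta`.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) *
            np.cos(2.0 * np.pi * freq * xr))

def fingerprint_features(roi, n_orients=8, grid=8):
    """Filter the target area around the reference point at n_orients
    directions and collect one deviation value per cell."""
    feats = []
    h, w = roi.shape
    ch, cw = h // grid, w // grid
    for k in range(n_orients):
        filtered = convolve(roi.astype(float),
                            gabor_kernel(np.pi * k / n_orients))
        for i in range(grid):
            for j in range(grid):
                cell = filtered[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
                # Absolute deviation of gray levels within the cell.
                feats.append(np.mean(np.abs(cell - cell.mean())))
    return np.asarray(feats)
```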


Feature fusion module

The feature space contains the richest data; that is, feature vectors are quantitatively and qualitatively richer in information than the other levels. Data fusion in the feature space, which contains the main components and discriminants of the raw data (image space), is important in two respects: first, it can derive a combination of discriminant information from the original feature sets; second, it can eliminate the extraneous and repetitive information produced by correlation between the feature sets, so that the best decision is made in the shortest possible time. In other words, feature fusion produces the vector with maximum distinction and minimum dimensions on which the system can base its decision.[6]

[Figure 6] illustrates the strategies for vector fusion in the feature space: "serial or parallel combination,"[1],[3] "dimensionality reduction" methods including "feature extraction" and "feature selection,"[22],[23],[24],[25] and "binary feature fusion."[22],[26],[27]
Figure 6: Fusion strategies for feature space



In this article, the features of the right and left thumbprints, the right and left irises, and the face are combined through the process of "dimensionality reduction." As shown in [Figure 7], feature space fusion takes place in three separate steps. Initially, we normalize the feature vectors, i.e., features not located in the same range are transferred to a similar range. The value ranges of the first, second, and third feature spaces often differ widely, so normalization is necessary in all three feature spaces before fusion. If normalization is not carried out, the features of one space will dominate the final result. In other words, feature vectors may have different distributions and variation ranges, which would give them significantly different impacts on the fused feature vector and the final result. Therefore, feature vectors must be normalized before fusion. The aim of normalization algorithms is usually to set the mean and the variance of a dataset to specific values. Using an appropriate normalization method also helps handle outliers, which constitute one major source of error in the training phase. Methods such as min–max, median, and z-score are used for normalizing numeric datasets; a sketch of these follows [Figure 7].
Figure 7: Fusion of features with dimensionality reduction algorithms based on kernel

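
A minimal sketch of the normalizations named above, applied per feature (column) so that all three feature spaces land in comparable ranges; reading the "median" method as a median/MAD rescaling is an assumption:

```python
import numpy as np

def zscore(X):
    # Shift each feature to zero mean and unit variance.
    sd = X.std(axis=0)
    return (X - X.mean(axis=0)) / np.where(sd == 0, 1.0, sd)

def minmax(X):
    # Rescale each feature to the [0, 1] range.
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def median_mad(X):
    # Median/MAD rescaling, more robust to the outliers mentioned above.
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0)
    return (X - med) / np.where(mad == 0, 1.0, mad)
```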


After normalization, appropriate kernel functions are applied separately to each of the feature spaces of the fingerprints, the irises, and the face, transferring them to the higher-dimensional kernel feature space, where nonlinear relations appear linear. Then, in this space, the orthogonal linear transformation of PCA maps the features onto a new coordinate system such that the largest variance of the features lies along the first coordinate axis, the second largest along the second, and so on. This preserves the components of the original set with the greatest impact on variance, reducing dimensionality and enabling feature space fusion. Real-world recognition applications face nonlinear issues due to high dimensionality, noise in the original data, and correlation between variables, requiring kernel-based dimensionality reduction methods (with an appropriate choice of kernel).[28] Class distributions often overlap, and in most cases recognition precision decreases as the number of classes increases. Therefore, on the one hand, using the appropriate kernel function in each feature space can lead to favorable separation between classes; on the other, using LDA in the kernel Hilbert space can create a class structure. The problems caused by the small number of samples and the absence of supervision are thus resolved, and we see better results from the fusion of the three feature sets. Using nonlinear kernel LDA on the feature set creates class separation, meaning maximum correlation between the samples of each class while the correlation between samples of different classes is minimized. It is proven that, using the inner product operator between features in the original space, the optimal solution of the kernel LDA can be found directly without explicitly computing the kernel mapping for each of the original space features. A sketch of this fusion pipeline follows.
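
A compact sketch of the pipeline using scikit-learn: one kernel mapping with PCA (KPCA) per modality, then LDA over the concatenation to impose the class structure. The per-modality kernels echo the best-performing ones reported later (Gaussian for the face, polynomial for the irises, linear for the thumbprints), but the component counts and this exact composition are illustrative assumptions, not the article's tuned configuration:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fuse_features(face_X, iris_X, finger_X, y, n_comp=50, final_dim=15):
    """Map each normalized feature space through its own kernel PCA,
    concatenate the projections, and reduce the fused vector with LDA."""
    fused = np.hstack([
        KernelPCA(n_components=n_comp, kernel="rbf").fit_transform(face_X),
        KernelPCA(n_components=n_comp, kernel="poly",
                  degree=2).fit_transform(iris_X),
        KernelPCA(n_components=n_comp, kernel="linear").fit_transform(finger_X),
    ])
    # LDA yields at most (n_classes - 1) discriminant axes, so with
    # 100 classes a 15-dimensional template is feasible.
    lda = LinearDiscriminantAnalysis(n_components=final_dim)
    return lda.fit_transform(fused, y)
```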

Classifier module

As explained in the previous sections, the features of the face, iris, and fingerprint images are extracted first. Next, after normalization, the feature vectors are mapped into the kernel (Hilbert) space, where PCA or LDA (for creating a class structure) is used to reduce dimensionality and store a multimodal biometric template representing each class in the database. Finally, in the recognition phase, the classifier module compares each new multimodal template obtained from the previous modules (feature extraction and fusion) with the templates stored in the database during the enrollment phase, and determines its class by the greatest similarity (or shortest distance) between the new template and a stored template.

Good performance of the classifier module is of high significance for the efficiency of the system. In this article, as [Figure 8] shows, the output of nine classifiers has been evaluated: four classifiers with distance functions, two NN classifiers (radial basis function NN[29] and probabilistic NN[30]), a k-nearest neighbor classifier,[31] a kernel support vector machine (KSVM) classifier,[32] and a Gaussian classifier.[33]
Figure 8: Evaluated classifiers



The efficiency of many machine learning algorithms largely depends on the metric used to measure the similarity of input patterns.[34] Distance functions are the most common metrics used in classification. Any function D: X × X → [0, ∞) satisfying, for all x, y, and z, the conditions D(x, y) ≥ 0; D(x, y) = 0 ⟷ x = y; D(x, y) = D(y, x); and D(x, z) ≤ D(x, y) + D(y, z) is a distance (metric) function. The four main distance functions used for classification are listed in [Table 1]; a sketch of a distance-based template classifier follows the table.
Table 1: Distance functions

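
The exact contents of [Table 1] are not reproduced here, so the following sketch uses four common choices, including the angular distance assumed to underlie the Dis_Angle classifier mentioned later; note that the cosine-based distances do not satisfy all four metric axioms. A nearest-template rule then assigns the class of the closest stored template:

```python
import numpy as np

def d_euclidean(x, y):
    return np.linalg.norm(x - y)

def d_cityblock(x, y):
    return np.sum(np.abs(x - y))

def d_angle(x, y):
    # Angular (cosine) distance, the assumed basis of Dis_Angle.
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def d_correlation(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return 1.0 - np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

def nearest_template(query, templates, labels, dist=d_angle):
    # Assign the class of the stored template at the smallest distance.
    return labels[int(np.argmin([dist(query, t) for t in templates]))]
```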







  Experimental Results


Database

A basic part of biometric research lies in access to proper data, with an acceptable number of classes as well as a sufficient number of samples for training and testing. Furthermore, one has to be able to create the necessary diversity within the training space by varying the specimens of each class, so that the statistical tests are significant. In this article, to study the face recognition system, three databases have been used: ORL,[34],[35] FERET,[36] and the multi-biometric database of Shahed University (gathered at Shahed University, Tehran, Iran) [Figure 9]. The Shahed University face database contains 500 images taken from 100 persons, five images per person, with varying lighting, facial expressions, and face postures.
Figure 9: Image samples from ORL, FERET, and Shahed face databases (partial)



The CASIA-IrisV1[37] database from the Chinese Academy of Sciences Institute of Automation (2006) was utilized for testing the proposed method. CASIA-IrisV1 comprises 756 iris images taken from 108 subjects. Furthermore, as [Figure 10] shows, Shahed University's iris database contains 500 left iris images (100 persons, with five images of the left iris of each person) and as many right iris images. The Shahed University iris images have been recorded with an ICHECK-2E-S iris scanner, produced by Behin Pajoohesh Khavar Co., with 4.0 lp/mm resolution at 60% or higher contrast, more than 22 pixels per millimeter (more than 120 pixels per iris diameter), and image dimensions of 22 mm × 38 mm. In [Figure 10], 80 images are illustrated from the Shahed University database and 60 from the CASIA database.
Figure 10: Image samples from CASIA and Shahed iris databases (partial)



The fingerprint database of Shahed University contains 5,000 images covering all 10 fingerprints of 100 students and staff of Shahed University in Tehran. The images have been recorded by an FSCL-ZP fingerprint scanner produced by Behin Pajoohesh Khavar Co. with an imaging precision of 100 dpi. In [Figure 11], 72 images from the four databases of right- and left-hand fingerprints are illustrated. In the tests, the databases of the thumbprints of both hands have been used.
Figure 11: Image samples from Shahed thumb and index fingerprint databases (partial)



Performance evaluation

The objective of the training phase is to calculate the parameters necessary for extracting features from the images (raw data), so that the images distinguished by the feature vectors satisfy the target function (e.g., the desired recognition precision). In the testing phase, the same parameters are applied to new data to determine how well the resulting feature vectors discriminate. Then, to study the efficiency of the system, the classification results are compared with the target function. These operations are similar to finding the weights of each neuron in an NN before checking the accuracy of test data classification, which determines the NN's efficiency. In these tests, 100 classes are envisaged for system training and testing. For this purpose, the faces, right and left irises, and right and left thumbprints of 100 persons registered in the aforesaid databases were selected for feature extraction. Eighty percent of each person's images (class) are used for training and the remaining 20% for testing.

Any biometric system's performance can be influenced by the size of the database and the images it contains. For the proposed system, recognition accuracy, precision, the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), sensitivity, recall, specificity, and efficiency are used for evaluation [Table 2], where TP = true positive, FN = false negative, FP = false positive, and TN = true negative; a sketch of these computations follows [Table 2].
Table 2: Performance parameters

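
A sketch of the [Table 2] parameters computed from the four confusion counts; treating "efficiency" as the mean of sensitivity and specificity is an assumption, since the table's formulas are not reproduced here:

```python
def performance_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)        # recall / true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    efficiency = (sensitivity + specificity) / 2  # assumed definition
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "efficiency": efficiency}
```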


A ROC curve is obtained by plotting the true-positive rate (also called sensitivity) against the false-positive rate (false match rate) at varying threshold settings. The false-positive rate equals (1 − specificity). The AUC denotes the probability that the classifier ranks a randomly selected positive instance above a randomly selected negative instance (given that "positive" ranks greater than "negative"). The area under the curve can be computed as follows (the integral boundaries are inverted because a large threshold value T lies lower on the X-axis):

AUC = ∫ TPR(T) · FPR′(T) dT, taken from T = +∞ down to T = −∞

Next, the optimal performance of the introduced system is presented using performance parameters including recognition accuracy, ROC curve, AUC, sensitivity, specificity, and efficiency. The ROC curves and the verification performance are not sufficient to validate the multi-biometric system's overall performance. Thus, Bengio et al.[38] proposed a statistical test comprising the half total error rate (HTER) and a confidence interval (CI). Accordingly, in this study, the introduced method is tested against these two parameters. The HTER is calculated as follows:

HTER = (FAR + FRR)/2

To compute the CI around the HTER, we need the bound σ × zα/2, where σ and zα/2 are defined as:

σ = sqrt( FAR(1 − FAR)/(4·NI) + FRR(1 − FRR)/(4·NG) )

zα/2 = 1.645 for a 90% CI, 1.960 for a 95% CI, and 2.576 for a 99% CI

where NG and NI stand for the total number of intra-class comparisons and the total number of inter-class comparisons, respectively. A sketch of the AUC and HTER computations follows.
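
A sketch of both statistics: the AUC as a trapezoidal approximation of the integral above, and the HTER with Bengio's parametric confidence bound; the function names and the choice z = 1.96 (a 95% CI) are illustrative:

```python
import numpy as np

def roc_auc(tpr, fpr):
    # Trapezoidal approximation of the area under the ROC curve;
    # sorting by FPR undoes the inverted threshold parameterization.
    order = np.argsort(fpr)
    return float(np.trapz(np.asarray(tpr)[order], np.asarray(fpr)[order]))

def hter_with_ci(far, frr, n_inter, n_intra, z=1.96):
    """HTER = (FAR + FRR) / 2 with the CI bound sigma * z_(alpha/2);
    n_inter = NI (inter-class comparisons), n_intra = NG (intra-class)."""
    hter = (far + frr) / 2.0
    sigma = np.sqrt(far * (1.0 - far) / (4.0 * n_inter) +
                    frr * (1.0 - frr) / (4.0 * n_intra))
    return hter, (hter - z * sigma, hter + z * sigma)
```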

Now, we first present the results obtained from the uni-biometric systems for face, iris, and fingerprint recognition separately with the corresponding classifiers. Next, the fusion of the two thumbprints, and of the right and left irises, is examined in multi-instance recognition systems. Finally, the recognition results of the hybrid multimodal biometric system, obtained from feature-level fusion of the face, two irises, and two thumbprints, are presented with the same classifiers.

Uni-biometric face recognition

[Figure 12] illustrates the best results for face recognition by uni-biometric systems with the linear algorithms PCA, LDA, LPP, feature subset selection (FSS), PDV, and CCA, and with the kernel-based nonlinear algorithms kernel PCA (KPCA), KLDA, and kernel locality preserving projection (KLPP), for feature extraction on the three face databases (ORL, FERET, and Shahed University).
Figure 12: Results of face recognition by the uni-biometric system on ORL, FERET, and Shahed databases



As expected, given the internal and external variations in the FERET and Shahed University databases on the one hand, and the small number of training samples on the other, kernel-based nonlinear methods perform better for feature extraction. KLDA creates a class structure that partly resolves the problems originating from the low number of samples and the lack of supervision, and we see better performance on the Shahed University database, where the number of training samples is limited.

Multi-instance iris recognition

To investigate the iris uni-biometric system, we consider 100 classes of the CASIA database of right and left irises. In the left iris database, three images of each iris are used: two for training and one for testing. In the right iris database, four iris images are used per class: three for training and one for testing. The Daugman algorithm and the Hough transform are utilized for iris feature extraction. A total of 9600 features are extracted for the iris; then, using the six dimensionality reduction methods PCA, LDA, CCA, KPCA, KLDA, and KLPP, the features are reduced to between 20 and 150, with the classification results illustrated in [Figure 13].
Figure 13: Comparing results of multi-instance iris recognition system, (a) Hough transform, (b) Daugman with dimensionality reduction algorithms



If the 9600 features extracted from the iris are used directly for classification without any dimensionality reduction, we obtain at most 93.52% recognition. Furthermore, NN classifiers are virtually unusable owing to the low number of training samples compared with the number of features. Applying the kernel-based nonlinear algorithms KLDA (class structure) and KPCA (without class structure) in the feature space reduces dimensionality to 100 features while enhancing recognition to 97% [Table 3].
Table 3: Comparing performance of five kernel-based dimensionality reduction algorithms



Multi-instance fingerprint recognition

By applying eight logarithmic Gabor (log-Gabor) filters at various frequencies, 73,960 features are extracted from each fingerprint, and by using kernel functions to map the features to the higher kernel space, nonlinear relations are transformed into linear ones. Then, by applying LDA and PCA in the kernel Hilbert space, the 73,960 fingerprint features are reduced to 150 features, which are given to the classifier as input. The results of uni-biometric recognition for the 73,960 features extracted from one fingerprint, and for the 150 dimensionally reduced features, are shown in [Table 4] after applying the five kernel functions. The Hamming kernel function increases recognition to 75% in the KLDA class structure and to 69% in the KPCA non-class structure. The fusion of the features of the right and left thumbprints based on the dimensionality reduction strategy reduces the dimension to 150 features while increasing recognition to 87% [Figure 14].
Table 4: Comparing results of uni-biometric and multi-instance fingerprints recognition system for 5 kernel functions with two strategies of feature fusion

Figure 14: Comparing results of uni-biometric and multi-instance fingerprints recognition system with dimensionality reduction algorithms



Hybrid multimodal recognition system

[Table 5] compares the recognition results of the uni-biometric face, iris, and fingerprint systems with those of feature fusion in the multi-instance iris and fingerprint systems in the kernel Hilbert space, after applying the five kernel functions.
Table 5: Comparing performance of uni-biometric and multi-instance systems for five kernel functions



The KLDA algorithm creates a class structure through the Hamming and Gaussian kernels to extract the best face features (93% recognition). Furthermore, this algorithm extracts the best features with the poly kernel (about 95% recognition) in the iris uni-biometric system and with the linear kernel (about 70% recognition) in the fingerprint uni-biometric system. Linear functions are observed to be the best KLDA kernels for feature fusion in the iris and fingerprint multi-instance systems (100% and 87%). Recognition in the KPCA non-class structure with the Gaussian and linear kernels declines by 5% to 10% for the face and fingerprint uni-biometric systems. However, the linear function remains the best kernel for the fusion of fingerprint features.

The results in [Table 6] clearly show the efficacy of the method proposed in this article for extracting and fusing face, iris, and fingerprint features into a robust and secure multimodal template. In addition to achieving 100% recognition with the introduced method, the reduction of features to a dimensionality of 35 is highly significant. In other words, the multimodal template obtained by combining 147,920 (2 × 73,960) features from the two fingerprints, 19,200 (2 × 9,600) features from the two irises, and 43,200 pixels from the face image is summarized in only 35 features. This 35-dimensional feature vector can serve as a unique identifier of a person.
Table 6: Effective dimensions in the kernel Hilbert space in hybrid multimodal recognition system



In [Table 7], the performance of the various classifiers is presented for the introduced hybrid multimodal recognition system. With the KLDA method and the poly kernel function, a dimensionality of only 15 features is enough to obtain a multimodal template with which the Dis_Angle metric classifiers and the linear KSVM achieve 100% recognition for the final decision in the hybrid multimodal biometric system. In the nonlinear KPCA method with the Gaussian function, however, the length of this feature vector increases to 35 features, with minor changes in recognition.
Table 7: Comparing performance of nine various classifiers in hybrid multimodal recognition system



The ROC curve (AUC = 0.9988) of the proposed hybrid multi-biometric system in [Figure 15] clearly illustrates its good performance. The contribution of the feature fusion strategy to the highly favorable performance of the proposed multimodal biometric system is clear, particularly with the three Dis_Angle metric classifiers, the NN, and the kernel support vector machine (KSVM). Even with this small number of features, discrimination remains high; therefore, fusion in the feature space based on the strategy of kernel-based dimensionality reduction is very appropriate.
Figure 15: ROC curves of the hybrid multimodal recognition systems on Shahed face database, CASIA right and left iris databases, and Shahed fingerprint databases. ROC – Receiver operating characteristic






  Conclusion


In this article, because the feature space carries richer information (in both quality and quantity) than the image and decision spaces, feature-level fusion was shown to be more effective than fusion at the other levels (sensor, score, and decision); it is therefore suggested for obtaining a robust and secure multimodal template. Of the three proposed strategies for fusing feature vectors, the dimensionality reduction process with kernel methods was selected. For the fusion of feature vectors, each feature space has to be mapped with a kernel function appropriate to the biometric used. Kernel-based methods transform nonlinear problems into problems that can be solved linearly; that is why the features are mapped from the original space into the kernel Hilbert space using an appropriate kernel function. The PCA and LDA algorithms are applied in the kernel Hilbert space for the fusion of face, iris, and fingerprint features while reducing dimensionality; when the class structure is preserved, better results are achieved. The proposed method is also appropriate for searching large databases (identification applications). Therefore, it is possible to accurately determine the corresponding class of a test sample in a large database of secure multimodal templates without any consistency error.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Li Y, Zou B, Deng SH, Zhou G. Using feature fusion strategies in continuous authentication on smartphones. IEEE Internet Comput 2020;24:49-56.
2. Joseph T, Kalaiselvan SA, Aswathy SU, Radhakrishnan R, Shamna AR. A multimodal biometric authentication scheme based on feature fusion for improving security in cloud environment. J Ambient Intell Human Comput 2021;12:6141-9.
3. Zhifang W, Jiaqi Z, Yanchao L, Guoqiang L, Qi H. Multi-feature multimodal biometric recognition based on quaternion locality preserving projection. Chin J Electron 2019;28:789-96.
4. Tong Y, Bai J, Chen X. Research on multi-sensor data fusion technology. J Phys Conf Ser 2020;1624:032046.
5. Zhang Y, Gao C, Pan S, Li Z, Xu Y, Qiu H. A score-level fusion of fingerprint matching with fingerprint liveness detection. IEEE Access 2020;8:183391-400.
6. Jain AK, Ross AA, Nandakumar K. Introduction to Biometrics. New York, London: Springer; 2011.
7. Zhang Y, Xiao X, Yang LX, Xiang Y, Zhong SH. Secure and efficient outsourcing of PCA-based face recognition. IEEE Trans Inf Forensics Secur 2019;15:1683-95.
8. Tan X, Deng L, Yang Y, Qu Q, Wen L. Optimized regularized linear discriminant analysis for feature extraction in face recognition. Evol Intell 2019;12:73-82.
9. Abikoye OC, Shoyemi IF, Aro TO. Comparative analysis of illumination normalizations on principal component analysis based feature extraction for face recognition. FUOYE J Eng Technol 2019;4:67-9.
10. Agarwal A, Mishra G, Agarwal K. Super resolution technique for face recognition using SVD. Int J Eng Res Technol 2020;8:1-5.
11. Gao X, Sun Q, Xu H. Multiple-rank supervised canonical correlation analysis for feature extraction, fusion and recognition. Expert Syst Appl 2017;84:171-85.
12. Haghighat AM. Low resolution face recognition in surveillance systems using discriminant correlation analysis. In: 12th IEEE International Conference on Automatic Face & Gesture Recognition; 2017.
13. Zangeneh E, Rahmati M, Mohsenzadeh Y. Low resolution face recognition using a two-branch deep convolutional neural network architecture. Expert Syst Appl 2020;139:1-11.
14. Wang D, Lu H, Yang MH. Kernel collaborative face recognition. Pattern Recognit 2015;48:3025-37.
15. Zhao H, Lai ZH, Leung H, Zhang X. Kernel-based nonlinear feature learning. In: Feature Learning and Understanding. Cham: Springer; 2020.
16. Alam M, Rahman Khan A, Salehin ZU, Uddin M, Jahan Soheli S, Zaman Khan T. Combined PCA-Daugman method: An efficient technique for face and iris recognition. J Adv Math Comput Sci 2020;35:34-44.
17. Abiyev RH, Kilic KI. Robust feature extraction and iris recognition for biometric personal identification. In: Biometric Systems, Design and Applications. IntechOpen; 2011.
18. Patel RB, Hiran D, Patel J. Biometric fingerprint recognition using minutiae score matching. Springer; 2020.
19. Manickam A, Devarasan E, Manogaran G, Kumar Priyan M, Varatharajan R, Hsu CH, et al. Score level based latent fingerprint enhancement and matching using SIFT feature. Multimed Tools Appl 2019;78:3065-85.
20. Onifade OF, Akinde P, Olubusola Isinkaye F. Circular Gabor wavelet algorithm for fingerprint liveness detection. J Adv Comput Sci Technol 2020;9:1-5.
21. Jain AK, Nandakumar K, Ross A. 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognit Lett 2016;79:80-105.
22. Saini N, Sinha A. Efficient fusion of face and palmprint in Gabor filtered Wigner domain. Int J Biomet 2020;12:301-16.
23. Kamlaskar C, Deshmukh S, Gosavi S. Novel canonical correlation analysis based feature level fusion algorithm for multimodal recognition in biometric sensor systems. Sensor Lett 2019;17:75-86.
24. Tiong LC, Kim ST, Ro YM. Implementation of multimodal biometric recognition via multi-feature deep learning networks and feature fusion. Multimed Tools Appl 2019;78:22743-72.
25. Haghighat M, Abdel-Mottaleb M, Alhalabi W. Discriminant correlation analysis: Real-time feature level fusion for multimodal biometric recognition. IEEE Trans Inf Forensics Secur 2016;11:1984-96.
26. Zhang H, Li S, Shi Y, Yang J. Graph fusion for finger multimodal biometrics. IEEE Access 2019;7:28607-15.
27. Kabir W, Omair Ahmad M, Swamy MN. A multi-biometric system based on feature and score level fusions. IEEE Access 2019;7:59437-50.
28. Kempfert KC, Wang Y, Chen C, Wong SW. A comparison study on nonlinear dimension reduction methods with kernel variations: Visualization, optimization and classification. Intell Data Anal 2020;24:267-90.
29. Roguia S, Mohamed N. An optimized RBF-neural network for breast cancer classification. Int J Inform Appl Math 2020;1:24-34.
30. Tang ZH. Leaf image recognition and classification based on GBDT-probabilistic neural network. J Phys Conf Ser 2020;1592:012061.
31. Prabavathy S, Rathikarani V, Dhanalakshmi P. Classification of musical instruments using SVM and KNN. Int J Innov Technol Explor Eng 2020;9:1186-90.
32. Hekmatmanesh A, Wu H, Jamaloo F, Li M. A combination of CSP-based method with soft-margin SVM classifier and generalized RBF kernel for imagery-based brain computer interface applications. Multimed Tools Appl 2020;79:17521-49.
33. Dan CH, Wei Y, Ravikumar P. Sharp statistical guarantees for adversarially robust Gaussian classification. In: Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119; 2020.
34. Kusnadi A, Ngadiman VA, Prasetya SG. Image restoration effect on DCT high frequency removal and Wiener algorithm for detecting facial key points. In: Proceeding of the Electrical Engineering Computer Science and Informatics, Vol. 7; 2020.
35. Phillips PJ, Newton EM. Meta-analysis of face recognition algorithms. In: 5th IEEE Conference on Automatic Face and Gesture Recognition, Washington, DC; 2002.
36. Tallón-Ballesteros AJ. Computation of virtual training samples and the experiments on face recognition. In: Fuzzy Systems and Data Mining V: Proceedings of FSDM 2019. 2019;320:212.
37. Ihsanto E, Kurniawan J, Husna D, Presekal A, Ramli K. Development and analysis of a zeta method for low-cost, camera-based iris recognition. Int J Adv Comput Sci Appl 2020;11:320-6.
38. Bengio S, Mariéthoz J. A statistical significance test for person authentication. In: Proceedings of Odyssey 2004: The Speaker and Language Recognition Workshop; 2004.
