

ORIGINAL ARTICLE 

Year : 2022 | Volume : 12 | Issue : 3 | Page : 177-191

A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints
Mohammad H Safavipour^{1}, Mohammad A Doostari^{2}, Hamed Sadjedi^{1}
^{1} Department of Electrical Engineering, Shahed University, Tehran, Iran ^{2} Department of Computer Engineering, Shahed University, Tehran, Iran
Date of Submission: 29-Mar-2021
Date of Decision: 12-Mar-2022
Date of Acceptance: 19-Apr-2022
Date of Web Publication: 26-Jul-2022
Correspondence Address: Mohammad A Doostari, Department of Computer Engineering, Shahed University, Tehran, Iran
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jmss.jmss_103_21
Background: The most significant motivations for designing multibiometric systems are high-accuracy recognition and high-security assurance, as well as overcoming limitations such as non-universality, noisy sensor data, and large intra-user variations. Therefore, choosing the data for fusion is highly significant in the design of a multimodal biometric system. Feature vectors contain richer information than scores, decisions, and even raw data, making feature-level fusion more effective than fusion at other levels. Method: In the proposed method, a kernel is used for fusion in the feature space. First, face features are extracted using kernel-based methods, the features of both right and left irises are extracted using the Hough transform and the Daugman algorithm, and the features of both thumbprints are extracted using a Gabor filter bank. Second, after normalization, kernel methods are used to map the feature vectors to a kernel Hilbert space, where nonlinear relations appear linear, in order to make the feature spaces compatible. Dimensionality reduction algorithms are then applied to fuse the feature vectors extracted from the fingerprints, irises, and face. Since the proposed system uses the face, both right and left irises, and both right and left thumbprints, it is a hybrid multibiometric system. We carried out the tests on seven databases. Results: Our results show that the hybrid multimodal template, while being secure against spoof attacks and making the system robust, can use a dimensionality of only 15 features to increase the accuracy of the hybrid multimodal biometric system to 100%, a significant improvement over unibiometric and other multimodal systems. Conclusion: The proposed method can be used to search large databases. Consequently, in a large database of secure multimodal templates, the corresponding class of a test sample can be correctly identified without any consistency error.
Keywords: Feature-level fusion, hybrid, kernel, multimodal biometric
How to cite this article: Safavipour MH, Doostari MA, Sadjedi H. A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints. J Med Signals Sens 2022;12:177-91
How to cite this URL: Safavipour MH, Doostari MA, Sadjedi H. A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints. J Med Signals Sens [serial online] 2022 [cited 2022 Dec 8];12:177-91. Available from: https://www.jmssjournal.net/text.asp?2022/12/3/177/351883
Introduction   
Biometric systems relying on a single biometric modality suffer considerable limitations owing to biometric trait constraints, poor data quality, and noise. Multibiometric systems use fusion to integrate multiple biometric sources so that authentication accuracy is improved.^{[1]} There are five reasonably conceivable scenarios for providing multiple sources of biometric information. According to the available evidence sources, multibiometric systems can be categorized into five scenarios: multi-sensor, multi-algorithm, multi-instance, multi-sample, and multimodal. In the first four, several pieces of evidence are drawn from a single biometric trait (e.g., iris or fingerprint), whereas in the fifth scenario (called the multimodal biometric system), several biometric traits (e.g., iris and fingerprint) are considered. Another possibility is to use a combination of these five scenarios, typically known as a hybrid multibiometric system. Moreover, combining a greater number of traits improves the complexity and security of user authentication.^{[2]} Hence, the implementation of multimodal biometric systems is recommended to address the aforesaid problems. Owing to its enhanced reliability, applicability, and security, multimodal biometrics has become a development direction of biometric recognition for many researchers.^{[3]} Biometric data fusion may take place at four levels. If it happens at the sensor level,^{[4]} raw data are combined. This type of fusion is impractical for designing a multimodal system; however, it can help increase the efficiency of a unibiometric system. Feature-level fusion combines the feature vectors extracted from different biometrics of the same class.
Furthermore, the scores obtained from various classifiers may be combined at the score level^{[5]} in case each classifier pertains to a single biometric. Owing to its simplicity and low-cost processing, this is the most popular fusion technique used in designing a multibiometric system. Finally, decision-level fusion may occur by combining several decisions, each of which is the product of a unibiometric system.^{[6]} Decision-level fusion is less efficient than even score-level fusion, and both of these levels depend on the recognition performance of the unimodal biometrics, leaving limited room for improvement. Feature-level fusion, compared with the other three levels, is capable of detecting the most discriminative data in the original feature sets while removing the redundant information shared among them.^{[3]} It is therefore clear that feature-level fusion is best suited for designing a multimodal system, owing to the rich information in the feature vectors [Figure 1].
Fusion of feature vectors can occur within a feature space to transform multiple feature vectors into a single vector, so that the final vector has higher discriminative power than the original vectors, through the processes of “serial or parallel combination,” “dimensionality reduction algorithms,” or “binary feature fusion.” This article proposes feature-level fusion of five biometric modalities, namely the face, the right and left thumbprints, and the right and left irises, through a dimensionality reduction process in order to achieve a robust and secure biometric template for a multimodal recognition system. First, face feature vectors are extracted by kernel linear discriminant analysis (KLDA), fingerprint features are extracted using a Gabor filter bank, and iris features are extracted by the Hough transform and the Daugman algorithm. Then, after normalization, kernel methods are used to make the three feature spaces compatible and to distinguish between linear and nonlinear relations. In doing so, along with dimensionality reduction, the feature vectors of the thumbprints, irises, and face are combined through mapping onto the kernel Hilbert space.
Material and Method   
The block diagram of the hybrid multimodal biometric system, comprising the three main modules of feature extraction, feature fusion, and classification, is shown in [Figure 2]. The modules are explained in the following subsections.
Feature extraction module
The feature extraction module extracts the best features for each of the face, iris, and fingerprint biometrics separately and maps the system from the image space to the feature space.
Face feature extraction
As [Figure 3] shows, to extract face features, if the images have any background other than the face, face detection algorithms are first used to segregate the face from the background. Then, face features are identified using the algorithms introduced for face recognition, such as principal component analysis (PCA),^{[7]} linear discriminant analysis (LDA),^{[8]} locality preserving projections (LPPs),^{[2]} local binary patterns,^{[1]} discrete cosine transform,^{[9]} singular value decomposition,^{[10]} canonical correlation analysis (CCA),^{[11]} discriminant correlation analysis,^{[12]} neural networks (NNs), and deep learning.^{[13]}
Face recognition has wide applicability as an important and interesting topic in the computer vision domain. Its applications cover a wide range from surveillance and human-computer interfaces to access control and augmented reality. However, it remains a difficult challenge owing to both intrinsic and extrinsic appearance changes (e.g., aging, expression variations, occlusion, pose, and illumination variations).^{[14]} The face recognition problem is therefore often considered nonlinear, mostly owing to its complexity and the small number and scale of prototype images. Given that kernel techniques can effectively capture nonlinear similarities among samples, kernel-based face recognition methods have been introduced to extend linear algorithms: the corresponding kernel functions are used to map the samples implicitly onto a new feature space of higher dimensionality. A kernel function is defined as k(x, y) = <Q(x), Q(y)>, where Q: R^{n} → H denotes a nonlinear mapping from the original space to a kernel Hilbert space and <Q(x), Q(y)> denotes the dot product of the two mapped vectors Q(x) and Q(y). Therefore, evaluating the kernel function is equivalent to taking the dot product of two data points in the kernel Hilbert space corresponding to that kernel. This important property of kernel functions is what gives rise to the kernel trick.^{[15]}
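This equivalence can be checked numerically; the minimal sketch below assumes the homogeneous degree-2 polynomial kernel (one of the kernels listed in the next subsection) and an explicit quadratic feature map, chosen purely for illustration:

```python
import numpy as np

# Kernel trick check: for the degree-2 polynomial kernel,
# k(x, y) = (x . y)^2 equals the dot product <Q(x), Q(y)> of an explicit
# quadratic feature map Q, so the high-dimensional map never has to be formed.

def quad_map(v):
    """Explicit feature map Q for the homogeneous degree-2 polynomial kernel."""
    return np.outer(v, v).ravel()

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
print(np.dot(x, y) ** 2)                 # 121.0
print(np.dot(quad_map(x), quad_map(y)))  # 121.0
```

For an n-pixel image the explicit map has n^{2} components, while the kernel evaluation stays O(n); this is why the mapping can remain implicit.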
To find the kernel function that best computes the principal components or linear discriminants in a space of high-order correlations among the input pixels forming a face image, the input image is mapped into a higher-order feature space using multiple kernels, and the kernel with the best response is selected (implemented in MATLAB). The five kernel functions used for extracting face features, together with the kernel-based dimensionality reduction methods commonly used in many kernel applications, can be represented as:
Gaussian function: k(x, y) = exp(−‖x − y‖^{2}/2σ^{2}) (σ: bandwidth parameter of the Gaussian kernel) (1)
Polynomial function: k(x, y) = (x·y)^{d} (d: degree of the polynomial function) (2)
PolyPlus function: k(x, y) = (x·y + 1)^{d} (3)
Linear function: k(x, y) = x·y (4)
Hamming function: (m: number of pixels in the image) (5)
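These kernels can be sketched in a few lines. The parameter defaults, and in particular the Hamming kernel's formula (shown here as a normalized agreement count over m pixels), are assumptions, since the source text does not reproduce the exact formulas:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel; sigma is the bandwidth parameter."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def polynomial_kernel(x, y, d=2):
    """Polynomial kernel of degree d."""
    return np.dot(x, y) ** d

def polyplus_kernel(x, y, d=2):
    """'PolyPlus' (inhomogeneous polynomial) kernel."""
    return (np.dot(x, y) + 1) ** d

def linear_kernel(x, y):
    """Linear kernel: the plain dot product."""
    return np.dot(x, y)

def hamming_kernel(x, y, m=None):
    """Hamming-based kernel for binary patterns; m is the number of pixels.
    This normalized-agreement form is an assumption, not the paper's formula."""
    m = len(x) if m is None else m
    return np.sum(x == y) / m

x = np.array([1.0, 0.0, 1.0, 1.0])
y = np.array([1.0, 1.0, 1.0, 0.0])
print(linear_kernel(x, y))   # 2.0
print(hamming_kernel(x, y))  # 0.5
```

Each function returns a scalar similarity, so any of them can fill the k(x, y) slot in the kernel matrices used later for dimensionality reduction.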
Iris feature extraction
Despite the existing challenges, iris recognition is attracting attention as an efficient biometric technology. The Daugman algorithm^{[16]} and Hough transform^{[17]} are used for extracting iris features. As [Figure 4] shows, the algorithm for extracting iris features can be summarized in three steps:
 The first and most significant step in iris recognition is localizing the iris boundaries in the eye image
 After establishing the inner and outer boundaries of the iris, a geometric normalization scheme is invoked and a rubber sheet model is used to transform the iris texture in the annular region from Cartesian coordinates to pseudo-polar coordinates. The regions segmented from the circles are normalized into rectangular blocks of equal dimensions
 Although two irises can be compared using the unwrapped iris (e.g., via correlation filters), a feature extraction procedure is generally applied to encode the textural content.
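The rubber-sheet step above can be sketched as follows; the output resolution, nearest-neighbour sampling, and concentric circular boundaries are simplifying assumptions made for illustration:

```python
import numpy as np

def rubber_sheet_normalize(img, cx, cy, r_pupil, r_iris,
                           n_radial=64, n_angular=512):
    """Daugman-style rubber-sheet model (sketch): remap the annular iris
    region to a fixed-size rectangular block in pseudo-polar coordinates.
    Nearest-neighbour sampling is used here for simplicity."""
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    for i, rho in enumerate(np.linspace(0, 1, n_radial)):
        # interpolate radius between the pupil and iris boundaries
        r = r_pupil + rho * (r_iris - r_pupil)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]
    return out

eye = np.random.rand(240, 320)
block = rubber_sheet_normalize(eye, cx=160, cy=120, r_pupil=30, r_iris=90)
print(block.shape)  # (64, 512)
```

In a full implementation the two boundary circles come from the Hough transform or Daugman's integro-differential operator, and bilinear interpolation replaces the rounding used here.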
For extracting the iris features, a 1D Log-Gabor filter can be applied to the normalized image to represent the iris texture information. The Log-Gabor filter, denoting the frequency response, can be represented as (6):

G(f) = exp(−(log(f/f_{0}))^{2}/(2(log(σ/f_{0}))^{2})) (6)

where f_{0} indicates the center frequency, and σ denotes the filter bandwidth.
The iris features are encoded in a 9600-bit code, and the upper and lower eyelashes in a 9600-bit mask.
Fingerprint feature extraction
Fingerprint recognition is primarily feature based (rather than image based), and the features used have a physical interpretation. The fingerprint texture features are used as the fingerprint feature space. Methods such as the Gabor filter bank, minutiae matching,^{[18]} short-time Fourier transform,^{[19]} and Gabor wavelets^{[20]} are used for fingerprint feature extraction. One common feature extraction method is the Gabor filter bank, illustrated in [Figure 5].  Figure 5: Fingerprint feature extraction steps and the resulting Gabor filter bank
After preprocessing the fingerprint image (enhancement, binarization, and thinning steps), the fingerprint feature extraction algorithm may be summarized in four major steps:
 Determining the reference point and corresponding target area
 Segmenting the target area around the reference point
 Filtering the target area at six or eight different directions using the Gabor filter bank
 Calculating the average absolute deviation of gray levels in each sector in order to generate a feature vector.^{[21]}
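These four steps can be sketched as follows; the square-grid sectoring (standing in for circular sectors around the reference point), the filter parameters, and the FFT-based filtering are illustrative assumptions rather than the paper's exact filter bank:

```python
import numpy as np

def gabor_kernel(ksize=31, theta=0.0, freq=0.1, sigma=4.0):
    """One real Gabor kernel at orientation theta (sketch)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def fingercode(region, n_orientations=8, grid=(4, 4)):
    """Filter the segmented target area at n_orientations directions and take
    the average absolute deviation of gray levels in each grid cell, giving a
    FingerCode-style feature vector (sketch)."""
    feats = []
    rows = np.array_split(np.arange(region.shape[0]), grid[0])
    cols = np.array_split(np.arange(region.shape[1]), grid[1])
    F = np.fft.fft2(region)
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        # circular convolution via FFT is adequate for a sketch
        resp = np.real(np.fft.ifft2(F * np.fft.fft2(kern, s=region.shape)))
        for r in rows:
            for c in cols:
                cell = resp[np.ix_(r, c)]
                feats.append(np.mean(np.abs(cell - cell.mean())))
    return np.array(feats)

region = np.random.rand(64, 64)
v = fingercode(region)
print(v.shape)  # (128,) = 8 orientations x 16 cells
```

With more orientations, finer sectoring, and several filter frequencies, vectors of the scale reported later (tens of thousands of features per fingerprint) are obtained.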
Feature fusion module
The feature space contains the richest data; that is, feature vectors are quantitatively and qualitatively better sources of information than the other levels. Data fusion in the feature space, which contains the principal components and discriminants of the raw data (image space), is important in two respects: first, it can derive a combination of discriminant information from the original feature sets; second, it can eliminate the extraneous and repetitive information produced by correlation between the feature sets, so that the best decision is made in the shortest possible time. In other words, feature fusion should produce the vector that creates maximum distinction with minimum dimensions, enabling the system to make the best decision.^{[6]}
[Figure 6] illustrates the strategy of vector fusion in the feature space. It is based on the three processes of “serial or parallel combination,”^{[1],[3]} “dimensionality reduction” methods including “feature extraction” and “feature selection,”^{[22],[23],[24],[25]} and “binary feature fusion.”^{[22],[26],[27]}
In this article, the features of the right and left thumbprints, the right and left irises, and the face are combined through the process of “dimensionality reduction.” As shown in [Figure 7], feature space fusion takes place in three separate steps. Initially, we normalize the feature vectors, i.e., features not located in the same range are transferred to a similar range. The ranges of feature vector values often differ widely between the first, second, and third feature spaces. Therefore, normalization is necessary in all three feature spaces before their fusion; otherwise, the features of one space would dominate the final result. In other words, feature vectors may have different distributions and variation ranges, which would give them significantly different impacts on the fused feature vector and the final result. The aim of normalization algorithms is usually to set the mean and variance of datasets to specific values. Using a single appropriate normalization method also helps fix the problem of outliers, which constitute one major source of training-phase error. Methods such as min-max, median, and z-score are used for normalizing numeric datasets.  Figure 7: Fusion of features with dimensionality reduction algorithms based on kernel
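A minimal sketch of why normalization matters before fusion, using z-score and min-max on two toy feature vectors (the values are illustrative only):

```python
import numpy as np

def zscore(v):
    """z-score normalization: zero mean, unit variance."""
    return (v - v.mean()) / v.std()

def minmax(v, lo=0.0, hi=1.0):
    """min-max normalization into the range [lo, hi]."""
    return lo + (hi - lo) * (v - v.min()) / (v.max() - v.min())

face = np.array([120.0, 200.0, 80.0])  # e.g. pixel-scale face features
iris = np.array([0.2, 0.9, 0.4])       # e.g. filter-response iris features

# Without normalization the pixel-scale face features would dominate any
# concatenated vector; after z-scoring, both spaces contribute on one scale.
fused = np.concatenate([zscore(face), zscore(iris)])
print(np.round(fused.mean(), 10))  # 0.0
```

The same reasoning applies to the fingerprint features; only after all three spaces share a common scale is the kernel mapping applied.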
After normalization, appropriate kernel functions are applied separately to each of the feature spaces of the fingerprints, the irises, and the face in order to transfer them to the higher-dimensional kernel feature space, where nonlinear relations appear linear. Then, in this space, the orthogonal linear transformation of PCA maps the features onto a new coordinate system such that the largest variance of the features lies on the first coordinate axis, the second-largest variance on the second coordinate axis, and so on. This preserves the components of the original set with the greatest impact on variance, reducing dimensionality and enabling feature space fusion. Real-world recognition applications face nonlinear issues owing to large dimensions, noise in the original data, and correlation between variables, requiring kernel-based dimensionality reduction methods (with an appropriately chosen kernel).^{[28]} Class distributions often overlap and, in most cases, recognition precision decreases as the number of classes increases. Therefore, using the appropriate kernel function in each feature space can lead to favorable separation between classes, and using LDA in the kernel Hilbert space can create a class structure. The problems caused by the small number of samples and the absence of a supervisor are thus resolved, yielding better results from the fusion of the three feature sets. Using kernel nonlinear LDA on a feature set in which class separation is created maximizes the correlation between the samples of each class while simultaneously minimizing the correlation between samples from different classes. It can be shown that, using the inner product operator between features in the original space, the optimal solution of kernel nonlinear LDA can be found directly without having to compute the kernel function for each of the original space features.
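The mapping-then-PCA step can be sketched as standard kernel PCA; the Gaussian kernel, toy data, and component count here are assumptions for illustration:

```python
import numpy as np

def kernel_pca(X, n_components, sigma=1.0):
    """Kernel PCA sketch: build a Gaussian kernel matrix, double-center it
    in the (implicit) Hilbert space, and project the training samples onto
    the leading eigenvectors."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # centering in the Hilbert space
    w, V = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return Kc @ alphas                   # projections of the training samples

# two toy "classes" of 50-dimensional normalized feature vectors
X = np.vstack([np.random.randn(20, 50), np.random.randn(20, 50) + 3.0])
Z = kernel_pca(X, n_components=15, sigma=5.0)
print(Z.shape)  # (40, 15)
```

Replacing the eigendecomposition of the centered kernel matrix with the generalized eigenproblem of between- and within-class scatter gives the KLDA (class-structured) variant described in the text.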
Classifier module
As explained in previous sections, the features of the face, iris, and fingerprint images are extracted first. Next, after normalization, by mapping the feature vectors into the kernel (Hilbert) space, PCA or discriminant analysis (for creating a class structure) is used to reduce dimensionality and store a multimodal biometric template representing each class in the database. Finally, in the recognition phase, the classifier module compares each new multimodal biometric template obtained from the previous modules (feature extraction and fusion) with the templates previously stored in the database during the enrollment phase, and determines its class based on the greatest similarity (or shortest distance) between the new template and a stored template.
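The nearest-template decision can be sketched as follows; the enrolled templates and the Euclidean metric are illustrative assumptions (the paper evaluates several metrics):

```python
import numpy as np

def classify(template, enrolled):
    """Nearest-template decision (sketch): return the class whose stored
    multimodal template is closest to the probe template."""
    labels = list(enrolled)
    d = [np.linalg.norm(template - enrolled[c]) for c in labels]
    return labels[int(np.argmin(d))]

# hypothetical enrolled multimodal templates, one per class
enrolled = {"alice": np.array([0.1, 0.9]), "bob": np.array([0.8, 0.2])}
print(classify(np.array([0.15, 0.8]), enrolled))  # alice
```

In the real system each template is the 15- or 35-dimensional fused vector described in the results, and the distance function is one of those compared in the next subsection.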
Good performance of the classifier module is highly significant for the efficiency of the system. In this article, as [Figure 8] shows, the output of nine classifiers has been evaluated. These classifiers include four classifiers with distance functions, two NN classifiers (radial basis function NN^{[29]} and probabilistic NN^{[30]}), the k-nearest neighbor classifier,^{[31]} the kernel support vector machine (KSVM) classifier,^{[32]} and the Gaussian classifier.^{[33]}
The efficiency of many machine learning algorithms largely depends on the metric used to measure the similarity of input patterns.^{[34]} Distance functions are the most common metrics used in classification. Any function D: X × X → [0, ∞) satisfying, for all x, y, and z, D(x, y) ≥ 0; D(x, y) = 0 ⟷ x = y; D(x, y) = D(y, x); and D(x, z) ≤ D(x, y) + D(y, z) is a distance or metric function. The four main distance functions used for classification are listed in [Table 1].
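Since Table 1 is not reproduced in this text, the sketch below shows four commonly used distance functions, including an angle-based distance matching the "Dis_Angle" classifiers referenced in the results; which four metrics Table 1 actually lists is an assumption:

```python
import numpy as np

def dis_euclidean(x, y):
    """Euclidean (L2) distance."""
    return np.sqrt(np.sum((x - y) ** 2))

def dis_cityblock(x, y):
    """City-block (Manhattan, L1) distance."""
    return np.sum(np.abs(x - y))

def dis_angle(x, y):
    """Angle/cosine distance: 1 - cos(x, y)."""
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def dis_correlation(x, y):
    """Correlation distance: 1 - Pearson correlation of x and y."""
    xc, yc = x - x.mean(), y - y.mean()
    return 1.0 - np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(dis_euclidean(x, y))  # ~1.4142
print(dis_angle(x, y))      # 1.0
```

Note that the angle and correlation distances do not strictly satisfy all four metric axioms above, which is why they are usually called dissimilarity measures.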
Experimental Results   
Database
A basic part of biometric research lies in access to proper data, with an acceptable number of classes and a sufficient number of samples for training and testing. Furthermore, one must be able to create the necessary diversity within the training space by varying the specimens of each class, so that statistically significant tests can be performed. In this article, three databases have been used to study the face recognition system: ORL,^{[34],[35]} FERET,^{[36]} and the multibiometric database of Shahed University (gathered at Shahed University, Tehran, Iran) [Figure 9]. The Shahed University face database contains 500 images taken from 100 persons. Five images were registered for each person, with varying lighting, illumination, facial expressions, and face postures.  Figure 9: Image samples from ORL, FERET, and Shahed face databases (partial)
The CASIA-IrisV1^{[37]} database from the Chinese Academy of Sciences Institute of Automation (2006) was used for testing the proposed method. CASIA-IrisV1 comprises 756 iris images taken from 108 subjects. Furthermore, as [Figure 10] shows, Shahed University's iris database contains 500 left iris images (100 persons with five images of each person's left iris) and as many right iris images (100 persons with five images of each person's right iris). The images in Shahed University's iris database were recorded using an ICHECK2ES iris scanner, produced by Behin Pajoohesh Khavar Co., with a resolution of 4.0 lp/mm at 60% or higher contrast, more than 22 pixels per millimeter (more than 120 pixels per iris diameter), and image dimensions of 22 mm × 38 mm. In [Figure 10], 80 images from the Shahed University database and 60 from the CASIA database are illustrated.
The fingerprint database of Shahed University contains 5000 images of all 10 fingerprints of 100 students and staff of Shahed University in Tehran. The images were recorded by an FSCLZP fingerprint scanner produced by Behin Pajoohesh Khavar Co. with an imaging precision of 100 dpi. In [Figure 11], 72 images from the four databases of right- and left-hand fingerprints are illustrated. In the tests, the databases of thumbprints of both hands were used.  Figure 11: Image samples from Shahed thumb and index fingerprint databases (partial)
Performance evaluation
The objective of the training phase is to calculate the parameters necessary for extracting features from images (raw data) so that the feature vectors derived from the images satisfy the target function (which can be the desired recognition precision). In the testing phase, the same parameters are applied to new data to determine how discriminative the resulting feature vectors are. Then, to study the efficiency of the system, the classification results are compared with the desired target function. These operations are similar to finding the weights of each neuron in an NN before studying the accuracy of test data classification, which determines the efficiency of the NN. In these tests, 100 classes are envisaged for system training and testing. For this purpose, the faces, the right and left irises, and the right and left thumbprints of 100 persons registered in the aforesaid databases were selected for feature vector extraction. Eighty percent of each person's images (class) are used for training and the remaining 20% for testing.
The performance of any biometric system can be influenced by the size of the database and the images it contains. For the proposed system, recognition accuracy, precision, receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), sensitivity, recall, specificity, and efficiency are used for evaluation [Table 2], where TP = true positive, FN = false negative, FP = false positive, and TN = true negative.
A ROC curve is developed by plotting the true-positive rate (also called sensitivity) against the false-positive rate (false match rate) at different threshold settings. The false-positive rate equals (1 − specificity). The AUC denotes the probability that the classifier ranks a randomly selected positive instance above (greater than) a randomly selected negative instance (given that “positive” ranks greater than “negative”). The area under the curve can be computed as follows (the integral boundaries are inverted because a large threshold value T lies lower on the x-axis):

AUC = ∫_{+∞}^{−∞} TPR(T) FPR′(T) dT
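The rank interpretation of the AUC can also be computed directly from genuine and impostor score sets; a sketch with illustrative scores:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive (genuine) score ranks
    above a random negative (impostor) score, computed directly from that
    rank interpretation; ties count half."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

genuine = [0.9, 0.8, 0.75, 0.6]   # illustrative match scores
impostor = [0.5, 0.4, 0.65, 0.2]  # illustrative non-match scores
print(roc_auc(genuine, impostor))  # 0.9375
```

This pairwise computation agrees with the integral form above, since both express the same probability over random genuine/impostor pairs.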
Next, the optimal performance of the introduced system is presented with performance parameters including recognition accuracy, ROC curve, AUC, sensitivity, specificity, and efficiency. The ROC curves and the verification performance are not sufficient to validate the overall performance of the multibiometric system. Thus, Bengio et al.^{[38]} proposed a statistical test including a half total error rate (HTER) and confidence interval (CI). Accordingly, in this study, the introduced method is tested against these two parameters. The HTER is calculated as follows:

HTER = (FAR + FRR)/2
To compute the CI around the HTER, we need to find the bound σ × z_{α/2}. Following Bengio et al., σ and z_{α/2} are defined as:

σ = √(FAR(1 − FAR)/(4·NI) + FRR(1 − FRR)/(4·NG))

with z_{α/2} = 1.645 for a 90% CI, 1.960 for a 95% CI, and 2.576 for a 99% CI,
where the NG and NI, respectively, stand for the total number of intraclass comparisons and the total number of interclass comparisons.
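A sketch of the HTER and its confidence bound as described by Bengio et al.; the error rates and comparison counts below are illustrative, and the 95% z-value is assumed:

```python
import numpy as np

def hter_ci(far, frr, n_impostor, n_genuine, z=1.960):
    """HTER with its confidence half-width, following Bengio et al.:
    HTER = (FAR + FRR) / 2, and the CI half-width is z * sigma with
    sigma = sqrt(FAR(1-FAR)/(4*NI) + FRR(1-FRR)/(4*NG)).
    z = 1.960 corresponds to a 95% interval."""
    hter = (far + frr) / 2.0
    sigma = np.sqrt(far * (1 - far) / (4 * n_impostor)
                    + frr * (1 - frr) / (4 * n_genuine))
    return hter, z * sigma

# illustrative values: FAR/FRR with NI inter-class and NG intra-class comparisons
hter, half = hter_ci(far=0.02, frr=0.04, n_impostor=9900, n_genuine=100)
print(round(hter, 3))  # 0.03
```

The reported interval is then HTER ± z·σ, so systems whose intervals do not overlap can be said to differ significantly.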
Now, we first illustrate the results obtained from unibiometric systems for face, iris, and fingerprint recognition separately, with the corresponding classifiers. Furthermore, the fusion of the two fingerprints, as well as of the right and left irises, is examined in a multi-instance recognition system. Finally, the recognition results of the hybrid multimodal biometric system, obtained from the feature-level fusion of the face, two irises, and two thumbprints, are illustrated with the same classifiers.
Unibiometric face recognition
[Figure 12] illustrates the best results for face recognition by unibiometric systems with the linear algorithms PCA, LDA, LPP, feature subset selection (FSS), PDV, and CCA, and the kernel-based nonlinear algorithms kernel PCA (KPCA), KLDA, and kernel locality preserving projection (KLPP) for feature extraction on the three face databases (ORL, FERET, and Shahed University).  Figure 12: Results of face recognition by the unibiometric system on ORL, FERET, and Shahed databases
As expected, given the internal and external variations in the FERET and Shahed University databases on the one hand, and the small number of training samples on the other, kernel-based nonlinear methods perform better for feature extraction. KLDA creates a class structure that partly resolves the problems originating from the low number of samples and the lack of supervision, and we see better performance on the Shahed University database, where the number of training samples is limited.
Multiinstance iris recognition
To investigate the iris unibiometric system, we consider 100 classes of the CASIA database of right and left irises. In the left iris database, three images of each iris are used: two for training and one for testing. In the right iris database, four images of each iris are used: three for training and one for testing. The Daugman algorithm and Hough transform are used for iris feature extraction. A total of 9600 features are extracted for each iris, and the six dimensionality reduction methods of PCA, LDA, CCA, KPCA, KLDA, and KLPP then reduce the features to a dimensionality of 20-150, whose classification results are illustrated in [Figure 13].  Figure 13: Comparing results of multi-instance iris recognition system, (a) Hough transform, (b) Daugman with dimensionality reduction algorithms
If the 9600 features extracted from the iris are used directly for classification without any dimensionality reduction, we obtain at most 93.52% recognition. Furthermore, NN classifiers are virtually unusable owing to the low number of training samples relative to the number of features. Applying nonlinear algorithms based on kernel functions, with KLDA (class structure) and KPCA (without class structure) in the feature space, reduces dimensionality to 100 features while enhancing recognition to 97% [Table 3].  Table 3: Comparing performance of five kernel-based dimensionality reduction algorithms
Multiinstance fingerprint recognition
By applying eight log-Gabor filters at various frequencies, 73,960 features are extracted from each fingerprint, and by using kernel functions to map the features to a higher kernel space, nonlinear relations are transformed into linear ones. Then, by applying LDA and PCA in the kernel Hilbert space, the 73,960 fingerprint features are reduced to 150 features, which are given to the classifier as input. The recognition results of the unibiometric system for the 73,960 features extracted from one fingerprint and for the 150 dimensionally reduced features are shown in [Table 4] after applying the five kernel functions. The Hamming kernel function increases recognition to up to 75% in the KLDA class structure and up to 69% in the KPCA non-class structure. The fusion of the features of the right and left thumbprints based on the dimensionality reduction strategy reduces the dimension to 150 features while increasing recognition to up to 87% [Figure 14].  Table 4: Comparing results of unibiometric and multi-instance fingerprint recognition systems for 5 kernel functions with two strategies of feature fusion
 Figure 14: Comparing results of unibiometric and multiinstance fingerprints recognition system with dimensionality reduction algorithms
Hybrid multimodal recognition system
[Table 5] compares the recognition results of the unibiometric systems for face, iris, and fingerprint with the feature fusion of the multi-instance iris and fingerprint systems in the kernel Hilbert space after applying the five kernel functions.  Table 5: Comparing performance of unibiometric and multi-instance systems for five kernel functions
The KLDA algorithm creates a class structure through the Hamming and Gaussian kernels to extract the best face features (93% recognition). Furthermore, this algorithm extracts the best features with the poly kernel (about 95% recognition) in the iris unibiometric system and with the linear kernel (about 70% recognition) in the fingerprint unibiometric system. Linear functions are observed to be the best KLDA kernels for feature fusion in the iris and fingerprint multi-instance systems (100% and 87%). Recognition in the KPCA non-class structure with the Gaussian and linear kernels declines by 5%-10% for the face and fingerprint unibiometric systems. However, the linear function remains the best kernel for the fusion of fingerprint features.
The results in [Table 6] clearly show the efficacy of the proposed method for extracting and fusing face, iris, and fingerprint features to obtain a robust and secure multimodal template. In addition to achieving 100% recognition, the reduction of the features to a dimensionality of 35 is highly significant. In other words, the multimodal template obtained by the proposed method, combining 147,920 (2 × 73,960) features from the two fingerprints, 19,200 (2 × 9600) features from the two irises, and 43,200 pixels from the face image, is summarized in only 35 features. This 35-dimensional feature vector can be a unique identifier of a person.  Table 6: Effective dimensions in the kernel Hilbert space in hybrid multimodal recognition system
In [Table 7], the performance of various classifiers is presented for the introduced hybrid multimodal recognition system. With the KLDA method and the poly function, a dimensionality of only 15 features is enough to obtain a multimodal template, such that the Dis_Angle metric classifiers and linear KSVM achieve 100% recognition for the final decision in the hybrid multimodal biometric system. However, in the nonlinear KPCA method with the Gaussian function, the length of this feature vector increases to 35 features, with minor changes in recognition.  Table 7: Comparing performance of nine various classifiers in hybrid multimodal recognition system
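A Dis_Angle metric classifier of the kind compared above can be read as nearest-template matching under an angular (cosine) distance. The following is a hedged sketch of that idea (the paper does not give its exact formulation, so this interpretation and the function name are assumptions):

```python
import numpy as np

def dis_angle_classify(templates, labels, query):
    """Assign the query to the enrolled template at the smallest
    angular (cosine) distance -- a sketch of a Dis_Angle classifier."""
    T = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    d = 1.0 - T @ q              # cosine distance to every template
    return labels[int(np.argmin(d))]
```

Because the fused template is only 15-dimensional, such a metric classifier is cheap enough to evaluate against every enrolled identity, which is what makes the method attractive for searching large databases.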
The ROC curve (AUC = 0.9988) for the proposed hybrid multibiometric system in [Figure 15] clearly illustrates its good performance. The contribution of the feature fusion strategy to this highly favorable performance is evident, particularly with the three Dis_Angle metric classifiers, the nearest-neighbor (NN) classifier, and the Kernel Support Vector Machine (KSVM). Even with this small number of features, the classes remain well separated; fusion in the feature space through kernel-based dimensionality reduction is therefore very appropriate.  Figure 15: ROC curves of the hybrid multimodal recognition systems on the Shahed face database, the CASIA right and left iris databases, and the Shahed fingerprint databases. ROC – Receiver operating characteristic
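An AUC figure such as the 0.9988 quoted above can be estimated directly from genuine and impostor match scores without plotting the curve, using the rank-based identity AUC = P(genuine score > impostor score). A minimal sketch (score values here are toy data, not the paper's):

```python
import numpy as np

def auc_from_scores(genuine, impostor):
    """Rank-based AUC estimate: the probability that a randomly chosen
    genuine score exceeds a randomly chosen impostor score, with ties
    counted as one half."""
    g = np.asarray(genuine, dtype=float)[:, None]
    i = np.asarray(impostor, dtype=float)[None, :]
    return float((g > i).mean() + 0.5 * (g == i).mean())
```

Perfectly separated score distributions give an AUC of 1.0; the reported 0.9988 indicates that genuine and impostor comparisons overlap only marginally.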
Conclusion   
Because the feature space carries richer information (in both quality and quantity) than the image and decision spaces, feature-level fusion is more effective than fusion at the other levels (sensor, score, and decision) and is therefore recommended for obtaining a robust and secure multimodal template. Of the three strategies proposed for fusing the feature vectors, dimensionality reduction with kernel methods was selected. For fusion, each feature space has to be mapped with a kernel function appropriate to the biometric it represents. Kernel-based methods transform nonlinear problems into problems that admit a linear solution; that is why the features are mapped from the original space into the kernel Hilbert space using an appropriate kernel function. The PCA and LDA algorithms are then applied in the kernel Hilbert space to fuse the face, iris, and fingerprint features while reducing dimensionality; better results are achieved when the class structure is preserved. The proposed method is also well suited to searching large databases (identification applications). It is therefore possible to accurately distinguish the corresponding class of a test sample in a large database of secure multimodal templates without any consistency error.
Financial support and sponsorship
None.
Conflicts of interest
There are no conflicts of interest.
