Representing the uncertainty in fuzzy sets, conceptualized by the pioneering work of Zadeh , is the main theme of this work. The fuzziness of a fuzzy set is called uncertainty by another exponent of fuzzy sets, Yager , who introduced the concept of specificity as an important measure of uncertainty in a fuzzy set or possibility distribution. Since any crisp set is deemed to have zero fuzziness, one way of measuring the uncertainty is to find the difference between the uncertainty and the specificity  of a fuzzy subset containing one and only one element. Representing the uncertainty in fuzzy sets by entropy functions is another way.
Most entropy functions were defined in the probabilistic domain, as an entropy measure gives the degree of uncertainty associated with a probability distribution. The Shannon entropy function  defined in the probabilistic domain has a logarithmic gain function, which creates problems with zero probability; it is therefore replaced with an exponential gain in the Pal and Pal entropy function . The Hanman-Anirban entropy function  contains a polynomial exponential gain with free parameters, which enables it to become a membership function.
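The contrast between the two gain functions can be illustrated with a short sketch. This is a minimal illustration rather than anything from the paper itself; the Pal-Pal gain is assumed in its usual form e^(1 - p):

```python
import math

def shannon_entropy(p):
    """Shannon entropy: -sum p_i * log(p_i). The logarithmic gain is
    undefined at p_i = 0, so zero-probability terms must be skipped."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def exponential_entropy(p):
    """Pal-Pal entropy: sum p_i * e^(1 - p_i). The exponential gain
    stays finite as p_i approaches 0, avoiding the problem above."""
    return sum(pi * math.exp(1.0 - pi) for pi in p)

uniform = [0.25] * 4
skewed = [0.97, 0.01, 0.01, 0.01]
# Both measures are larger for the uniform (more uncertain) distribution.
assert shannon_entropy(uniform) > shannon_entropy(skewed)
assert exponential_entropy(uniform) > exponential_entropy(skewed)
```

Both entropies rank the uniform distribution as most uncertain; the difference lies only in the behaviour of the gain near zero probability.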
The motivation for this work stems from two reasons: 1) to expand the scope of information sets in  by defining an adaptive exponential gain function that empowers a membership function to act as an agent, and 2) to develop higher forms of information sets, such as the Hanman Transform, which helps evaluate the information source values by way of higher-level uncertainty representation, and the Hanman filter, which helps modify the information.
In our previous work  we introduced the information set and also developed some features and the inner product classifier (IPC) for ear-based authentication. In the present work we extend the information sets to represent higher forms of uncertainty, in addition to formulating a new classifier. The original information set features were derived from the non-normalized Hanman-Anirban entropy, which is not suitable for representing higher forms of uncertainty because of its constant parameters; hence this entropy needs to be made adaptive by treating its parameters as variables. The power of the resulting adaptive entropy is immense, as it can tackle both time-varying and spatially varying situations. Our main consideration here is to assess the applicability and suitability of information set based features for the distinct and unique iris textures.
The paper is organized as follows: Section 2 introduces the information set, and Section 3 describes the extraction of features based on this set. Segmentation of the iris and the use of information set based features for iris authentication are discussed in Section 4. The Inner Product Classifier (IPC) is described along with the formulation of the Hanman Transform classifier in Section 5. The results of applying IPC to the iris database using the proposed features are given in Section 6, followed by the conclusions in Section 7.
2. An Introduction to Information Sets
Assume a fuzzy set formed from a set of gray levels, termed the information source values, and the corresponding membership function values . Each pair in the fuzzy set becomes a product in the information set on representing the uncertainty in the information source values using the Hanman-Anirban entropy function, as proved later.
Probability vs. Possibility: We consider here two types of uncertainty: probabilistic uncertainty which results from the probability distribution of the information source values (gray levels) and possibilistic uncertainty which results from their possibility distribution. The uncertainty in the probability distribution is defined by the Shannon entropy function  as
where . Pal and Pal  have used the exponential gain function in place of the logarithmic gain function to define
These two entropy functions give a measure of the probabilistic uncertainty. If the probabilities are replaced by information source values normalized to the range [0, 1], the logarithmic gain function from (1) and the exponential gain function from (2) cannot model the possibility distribution because they lack parameters. Unlike a probability distribution, a possibility distribution requires a membership function, which in turn needs parameters to model the distribution. As the Hanman-Anirban entropy function is an information-theoretic entropy function containing parameters in its exponential gain function, we can use these parameters to convert the gain function into a membership function. The non-normalized form of this function is defined as:
Just like (1) and (2), (3) is also probability based, but it can represent the possibility distribution after substituting the normalized information source values in (3) and then choosing the parameters in the exponential gain function statistically. The well-known membership functions used to represent a possibility distribution are the exponential and Gaussian membership functions, given by
where is the exponential membership function and is the Gaussian membership function, with taken as . The fuzzifier  that gives the spread of the information source values with respect to the reference is defined as
This gives more spread than is possible with the variance. We now consider a triangular membership function given by
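The membership functions discussed above can be sketched as follows. Since the exact formulas for the fuzzifier and the reference are not reproduced in the text, a variance-like spread and the window mean are assumed here as stand-ins:

```python
import math

def fuzzifier(window):
    """Spread of the information source values about their mean. The
    paper's fuzzifier gives a wider spread than the variance; a plain
    standard deviation is assumed here as a stand-in."""
    mean = sum(window) / len(window)
    return math.sqrt(sum((mean - x) ** 2 for x in window) / len(window)) or 1.0

def mu_exponential(x, ref, fh):
    # Exponential membership: decays with the distance |ref - x|.
    return math.exp(-abs(ref - x) / fh)

def mu_gaussian(x, ref, fh):
    # Gaussian membership: decays with the squared deviation from ref.
    return math.exp(-((ref - x) ** 2) / (2.0 * fh ** 2))

def mu_triangular(x, x_max):
    # Triangular membership relative to the window maximum.
    return 1.0 - abs(x_max - x) / x_max if x_max else 0.0
```

All three functions peak at 1 when the value coincides with the reference and fall off with distance, differing only in the shape of the fall-off.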
Note that is the maximum of in a window or sub-image. Assuming and setting the parameters accordingly, Equation (3) takes the following form, with the gain function becoming exponential:
Similarly, with another choice of parameters, Equation (3) takes another form, with the gain function becoming Gaussian:
It may be noted that in the derivation of (8) and (9) the parameters are chosen to be statistical, computed from the statistics of the sub-images in windows. We avoid normalizing the information H in all the equations, but normalization is inevitable during feature generation for practical reasons. For generality of the membership function, we ignore the superscripts e and g in Equations (8) and (9) respectively and represent the information set as
We can also derive the entropy function using the triangular membership function with a suitable choice of parameters. The information set denoted by contains the complement of the membership function. In the context of information sets, the role of the membership function is enlarged by terming it an agent, which can be its complement, its square or an intuitionistic form. The agent can take care of both spatially varying and time-varying information source values.
Definition of Information Set: A set of information source values can be converted into an information set by representing the uncertainty in their distribution. The basic information set consists of a set of information values, each value being the product of an information source value (property/attribute) and its membership value (an agent in the general case). It is denoted by
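A minimal sketch of this definition, assuming a Gaussian membership about the window mean with a standard-deviation fuzzifier (the paper's fuzzifier has a wider spread):

```python
import math

def information_set(window):
    """Basic information set: each information source value I paired with
    its membership value mu collapses into the product I * mu. The
    Gaussian membership and the variance-like fuzzifier are assumptions,
    as the paper's exact forms are not reproduced here."""
    mean = sum(window) / len(window)
    fh = math.sqrt(sum((mean - x) ** 2 for x in window) / len(window)) or 1.0
    mu = [math.exp(-((mean - x) ** 2) / (2.0 * fh ** 2)) for x in window]
    return [x * m for x, m in zip(window, mu)]
```

Values near the window mean retain most of their magnitude, while outliers are attenuated by their low membership.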
Note that the membership function not only represents the distribution of information source values but also acts as an agent that helps generate different information sets such as , , .
Derivation of Information Sets by the Mamta-Hanman Entropy Function: The 2D non-normalized form of this entropy function  is given by
This entropy function allows us to change not only the exponential gain function but also the information source values, thereby facilitating the generation of different types of information sets very easily. It is easy to derive (9) by fixing the parameters suitably; the exponential gain function in (11) then yields the corresponding information set. This form allows us to derive different information sets by converting the exponential gain function into a membership function.
2.1. Hanman Transforms
These transforms are a higher form of information sets. Note that information sets are the result of determining the uncertainty in the information source values, whereas the transforms will be shown to be the result of determining the uncertainty in the information source values by the information gathered on them. The formulation of the transforms is only possible if the parameters in the Hanman-Anirban entropy function are varying, though they are assumed to be constant in . We now present the adaptive entropy function and its properties.
2.2. The Adaptive Hanman-Anirban Entropy Function
The non-normalized Hanman-Anirban entropy function with varying parameters is called the adaptive entropy function, which is relevant to spatially varying and time-varying information source values. To this end, we modify this entropy function by taking the two parameters a and b as zero and the other two parameters c and d as variables. The resulting adaptive entropy function is therefore:
We will now prove that (13) satisfies the properties of an entropy function when the parameters c and d are varying.
The Proof of Properties:
1) The exponential gain function, also called the information gain, is continuous (it should not be confused with Iij, which stands for the information source value); hence the product of the information source value and its gain is continuous, being the product of two continuous functions, and H, being a sum of continuous functions, is also continuous.
2) H is bounded. Since the exponential gain function lies between 0 and 1, each term of the sum is bounded for each I, and hence H is also bounded.
3) With increase in , decreases since .
4) When all the arguments are equal, H is an increasing function of n.
5) is a concave function where and .
The function is concave if the Hessian matrix is negative definite.
as and are in the range [0, 1].
The Hessian matrix is the square matrix of second-order partial derivatives, having the following form:
where ; hence all the eigenvalues of this Hessian matrix are negative. Thus the Hessian is negative definite, so the entropy H is concave.
6) Entropy H is maximum when all ’s are equal. In other words,
In that case, .
7) The entropy is minimum if and only if all the ’s are 0 except a single one equal to 1.
Significance of the Adaptive Hanman-Anirban Entropy Function: We have already seen the role of the information gain as an agent when the parameters are constant. We now examine its usefulness in the context of varying parameters. Taking the derivative of the information gain with respect to the varying parameter, we find that the absolute value of this derivative gives the information value. When the information gain changes as a result of a change in the parameter responsible for modifying the information source value, it produces the information set after adjusting the sign, as the information value must be positive. A higher form of the information set results if the parameter is itself an agent. We now derive the transforms based on this concept.
2.3. The Adaptive Hanman-Anirban Entropy Function as the Transform
Fixing the remaining parameters in (13), the entropy function takes a new form called the Hanman transform, which maps the spatial-domain information source values into the information domain as:
In this, the exponential gain is made a function of the information value, which has already been shown to be a measure of the uncertainty; this new gain function is termed an agent.
Note that the information source value weighted by this new agent in the Hanman transform (21) gives a better representation of the uncertainty. The division by the maximum gray level in a window is necessitated by the fact that this ratio serves as a better statistic than the raw value in (21). Note that if the information source values are already normalized, no division is needed.
Proof: The zero-order transform can be obtained with one choice of parameters in (13); a probabilistic counterpart follows similarly. Note that the deviations of the possibility distribution and the probability distribution from unity are what cause the uncertainty in the information source values; the corresponding gain functions are the agents. In the case of the Laplace transform the agent is the exponential kernel, and with a suitable choice the agent in the Fourier transform is complex. On the other hand, a function of the information source value alone is not a transform. The first-order transforms involve agents representing possibilistic and probabilistic information values respectively. In view of this discussion, the definition of a transform now follows.
Definition of Transform: The gain function in the adaptive entropy function can be a function of the probabilistic information (distribution) or the possibilistic information (distribution), and it weights the information source values, giving rise to the first-order (or zero-order) transform.
The Relevance of Transforms to Real-Life Scenarios: The information source values received by our senses are perceived by the mind as information values; hence these are natural variables, just like fuzzy variables. That is, using the information values perceived by the agent on the information source values, the entropy improves its representation of uncertainty.
The Relation between Information Sets and Hanman Transforms: The information sets derived directly from the Hanman-Anirban entropy function are the basic ones, and those derived from the adaptive Hanman-Anirban entropy function are higher forms of information sets. The latter are useful for representing time-varying and spatially varying information source values.
The Heterogeneous Transforms:
If the agent comes from another information source, along with its membership function and reference parameter, then (21) becomes what is called the Heterogeneous Transform.
In this the agent from a different information source evaluates the information source of interest.
Algorithm for Hanman Transform Features
1) Compute the membership function value for each gray level in a window of size W × W. In our experimental study we have used Equation (4) for computing the membership values.
2) Obtain the normalized information value by dividing the information value with the maximum gray level in the window.
3) Multiply the normalized information value from Step 2 with the corresponding gray level in Equation (21).
4) Repeat Steps 2 and 3 in a window and sum all the products to get a feature value.
5) Form a feature vector by repeating Steps 1 - 4 on all windows of an iris strip.
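The steps above can be sketched as follows. The membership form, the fuzzifier and the exact gain of Equation (21) are not reproduced in the text, so an exponential membership, a variance-like spread and the gain exp(-normalized information) are all assumptions:

```python
import math

def hanman_transform_feature(window):
    """One Hanman-transform feature from a window (Steps 1-4)."""
    mean = sum(window) / len(window)
    fh = math.sqrt(sum((mean - x) ** 2 for x in window) / len(window)) or 1.0
    g_max = max(window) or 1.0
    feature = 0.0
    for g in window:
        mu = math.exp(-abs(mean - g) / fh)   # Step 1: membership value
        info = g * mu / g_max                # Step 2: normalized information
        feature += g * math.exp(-info)       # Steps 3-4: gain weights g, summed
    return feature

def hanman_transform_vector(strip, w=7):
    """Step 5: one feature per non-overlapping w x w window of an iris
    strip, given as a 2-D list of gray levels."""
    rows, cols = len(strip), len(strip[0])
    feats = []
    for r in range(0, rows - w + 1, w):
        for c in range(0, cols - w + 1, w):
            win = [strip[i][j] for i in range(r, r + w) for j in range(c, c + w)]
            feats.append(hanman_transform_feature(win))
    return feats
```

The feature vector length thus equals the number of non-overlapping windows that fit into the strip.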
2.4. Hanman Filter
The information sets derived from fuzzy sets may not always possess the desirable characteristics. By modifying the information sets with certain functions or operators it is possible to get better features. The modification of the information is required to meet certain objectives, such as better classification or a new interpretation.
Let us see how to modify the information at a pixel in a window. This is done by taking the membership function as a function of a parameter s. The modified information is defined as
The dependency of the membership function in (22) on s is incorporated as
In type-1 fuzzy sets, the fuzzifier is constant as in (24), whereas type-2 fuzzy sets result from varying it. Here the membership function depends on the scale. We modify the information by using an agent to provide new content through the Hanman-Anirban entropy function with a suitable substitution, leading to
where the parametric frequency of the cosine function is defined in terms of u and s. We can write the r.h.s. of Equation (25) as the product of the information value and the cosine function. This filter is different from the Gabor filter, which is the convolution of the image with the product of Gaussian and cosine functions; we have no such restriction in (25). By using Fij we can create several information images having varied frequency components. These images are aggregated to get a composite image. Next, windows of varying size are used to partition this image, and the values within a window are averaged to get a feature value.
Definition of 1st Order Filter: If the frequency is a function of Iij as in (24), then (25) is termed the first-order Hanman filter.
Definition of Zero-Order Filter: If the frequency is not a function of Iij but only a constant, then (25) is termed the zero-order Hanman filter. Choosing a suitable form in the exponential gain function converts the zero-order Hanman filter into a Gabor-type filter, as given by
We can fix “s” in (25) to any value. In the general case s is fixed to the window size, i.e. s = w. Then we have
An algorithm for the extraction of Hanman filter features is as follows:
1) Generate 12 information sets using a window of size W × W for W = 7 from an image, for 3 values of u and 4 values of s.
2) Form the composite information set by aggregating all 12 sets.
3) Take the average value in a window as the feature.
4) Repeat Steps 1 - 3 on all windows in an iris image to produce a feature vector.
5) Generate different feature vectors corresponding to different values of W.
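A sketch of this algorithm follows. The parametric frequency of (25) is elided in the text, so omega = 2*pi*u/s, an exponential membership and a variance-like fuzzifier are all assumptions:

```python
import math

def hanman_filter_image(norm_img, u, s):
    """Hanman-filter sketch: each information value I * mu is modulated
    by a cosine whose frequency depends on u and the scale s."""
    flat = [x for row in norm_img for x in row]
    mean = sum(flat) / len(flat)
    fh = math.sqrt(sum((mean - x) ** 2 for x in flat) / len(flat)) or 1.0
    omega = 2.0 * math.pi * u / s  # assumed parametric frequency
    return [[x * math.exp(-abs(mean - x) / fh) * math.cos(omega * x)
             for x in row] for row in norm_img]

def hanman_filter_features(norm_img, us=(1, 2, 3), ss=(1, 2, 3, 4), w=7):
    """Steps 1-5: 12 filtered images (3 u-values x 4 s-values) are summed
    into a composite image, which is then averaged over w x w windows."""
    rows, cols = len(norm_img), len(norm_img[0])
    composite = [[0.0] * cols for _ in range(rows)]
    for u in us:
        for s in ss:
            filt = hanman_filter_image(norm_img, u, s)
            for i in range(rows):
                for j in range(cols):
                    composite[i][j] += filt[i][j]
    feats = []
    for r in range(0, rows - w + 1, w):
        for c in range(0, cols - w + 1, w):
            vals = [composite[i][j] for i in range(r, r + w)
                    for j in range(c, c + w)]
            feats.append(sum(vals) / len(vals))
    return feats
```

Unlike a Gabor filter bank, no convolution is involved: the cosine directly modulates the information values pixel by pixel.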
The Utility of the Hanman Filter: Its utility rests on the choice of a suitable type of function that can modify the information. Consider the example of charcoal, whose elements may be represented as information values, whereas the elements of burning charcoal may be represented as the product of the information value and the temperature of the charcoal.
The Difference between the Hanman Transform and the Hanman Filter: The function of the Hanman transform is to evaluate the information source values by a gain function that uses the information already obtained on them, while the function of the Hanman filter is to modify the information using a suitable function. Both lead to higher forms of information sets because the gain functions used are functions of information values.
Hanman Filter Features
An Example: Let us consider a window of size 5 × 5 from an iris strip, with the original gray levels, the normalized gray levels, the probability distribution and the Gaussian membership function values denoted as before. Features of the first-order HF are extracted using Equation (25) and those of the zero-order HF are extracted using Equation (26).
Two typical feature values for three values of frequency change (u) and two values of scale change (s) are shown in Table 1. A comparison of recognition rates due to different feature types is shown in Table 2, in which the basic information values yield the highest recognition rate (3rd column), and the next highest recognition rate is given by a kind of Hanman transform (5th column) that evaluates the information source values based on the membership function values instead of the information values as in the Hanman transform (7th column).
2.5. Divergent Information
If two memberships in the role of agents evaluate the same information source value, we get divergent information. Let there be a set of information source values and two membership functions that view it differently. Then the divergent information is expressed as
Table 1. Typical feature values of Hanman filter.
Table 2. Comparison of different features based on the results of authentication.
The divergent evaluation simply follows from the Hanman transform, as given by
We can use this measure in quantifying the quality of evaluation of any information source.
2.6. Random Information
By changing the membership function values randomly, one can distort the distribution pattern present in the information values. If r is a random number, the basic information can be made random by using:
The corresponding random evaluation is expressed as,
Taking the complement of the membership function gives what can be termed the twisted information. This leads to the twisted evaluation, expressed as
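The random and twisted variants can be sketched directly from the definitions; the element-wise pairing of source values with membership values is an assumption:

```python
import random

def random_information(values, mu, seed=0):
    """Random information: each basic information value I * mu is scaled
    by a random number r, distorting the distribution pattern."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [x * m * rng.random() for x, m in zip(values, mu)]

def twisted_information(values, mu):
    """Twisted information: the complement 1 - mu replaces the membership,
    inverting which source values are emphasized."""
    return [x * (1.0 - m) for x, m in zip(values, mu)]
```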
3. Derivation of Information Set Based Features
3.1. Effective Information Source Value
This feature emerges directly from the definition of the basic information set. The effective information source value from the kth window is computed from:
Replacing the membership function with the Gaussian membership function in (33) leads to what we term the Effective Gaussian Information (EGI):
3.2. Total Effective Gaussian Information (TEGI)
Just like the above, this feature also comes directly from the basic information. TEGI is defined as the product of the Effective Gaussian Information and the effective Gaussian membership function value, given by
where is computed using:
We can also consider alternative or arbitrary functions, but we have adopted only this form in our study.
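A sketch of EGI and TEGI under stated assumptions: Equations (33)-(35) are not reproduced here, so normalized membership-weighted sums are assumed for both the effective information and the effective membership:

```python
import math

def gaussian_mu(window):
    """Gaussian memberships about the window mean, with a
    standard-deviation fuzzifier (an assumed stand-in)."""
    mean = sum(window) / len(window)
    fh = math.sqrt(sum((mean - x) ** 2 for x in window) / len(window)) or 1.0
    return [math.exp(-((mean - x) ** 2) / (2.0 * fh ** 2)) for x in window]

def egi(window):
    """Effective Gaussian Information: membership-weighted aggregate of
    the gray levels in a window (normalized weighted sum assumed)."""
    mu = gaussian_mu(window)
    return sum(x * m for x, m in zip(window, mu)) / sum(mu)

def tegi(window):
    """Total EGI: product of the EGI and an effective membership value,
    taken here as the membership-weighted mean of the memberships."""
    mu = gaussian_mu(window)
    mu_eff = sum(m * m for m in mu) / sum(mu)
    return egi(window) * mu_eff
```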
3.3. Energy Features (EF)
From (12) we can write the gain function as a triangular function. Hence the energy feature from the kth window is written as:
It may be noted that the choice of an appropriate membership function is an important issue, which is sidestepped here by opting for an experimentally proven function.
3.4. Sigmoid Features (SF)
Unlike the energy features, these features are the result of considering the information values in the form of the sigmoid function, SF, expressed as
where is the average gray level in the kth window.
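The energy and sigmoid features can be sketched as follows; since (12) and the exact sigmoid form are elided in the text, a triangular gain about the window maximum and a sigmoid centred on the window average are assumptions:

```python
import math

def energy_feature(window):
    """Energy feature sketch: gray levels weighted by a triangular gain
    about the window maximum, squared and summed (an assumed instance)."""
    x_max = max(window) or 1.0
    return sum((x * (1.0 - abs(x_max - x) / x_max)) ** 2 for x in window)

def sigmoid_feature(window):
    """Sigmoid feature sketch: gray levels passed through a sigmoid
    centred on the average gray level of the window (an assumption)."""
    avg = sum(window) / len(window)
    return sum(x / (1.0 + math.exp(-(x - avg))) for x in window)
```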
To extract features, an iris strip is divided into windows of size 7 × 7 and the gray levels are normalized. The number of features is equal to the number of non-overlapping windows fitted into an iris strip. The classification of features is performed using the Inner Product Classifier (IPC) in .
4. Formulation of Inner Product Classifier (IPC)
This classifier makes use of the error vectors between the training feature vectors of a user and a single test feature vector. As our objective is to get the error vector with the least disorder, we generate all possible t-normed error vectors by applying t-norms on two error vectors of a user at a time. As each t-normed error vector involves two training feature vectors, these are averaged to get the aggregated training feature vector. The inner product of each t-normed error vector and the corresponding aggregated training feature vector must be the least to represent a user. The infimum of all the least inner products over all users gives the identity of a user. This is the concept behind the design of the IPC.
Before presenting the algorithm, let us denote the number of users by , the number of training samples per user by and the number of feature values by . The features are normalized by using:
where denotes an information set based feature such as Effective Gaussian Information (EGI), Total Effective Gaussian Information (TEGI), the energy feature (EF), the sigmoid feature (SF), the Hanman Transform (HT) feature or the Hanman Filter (HF) feature; it stands for any one of these feature types.
Algorithm for IPC
1) Compute the error vector pertaining to a user (l) between the feature vectors of the training samples of that user and the feature vector of the unknown test sample, given by
where i stands for the ith sample of the lth user; the remaining symbols denote the number of samples of a user and the number of feature values.
2) Compute the normed error vectors from all possible pairs of error vectors belonging to the lth user using the Frank t-norm as follows:
where is the Frank t-norm given by:
As , the number of pairs generated from (35) is . Let , be the index for the number of pairs.
3) Find the average feature value of the ith and kth training samples from
The above t-normed error vectors act as support vectors and the average feature vectors act as weights. The necessary and sufficient condition is that the inner product of the two must be the least for the training sample to be matched with the test sample.
4) Evaluate the inner product from
The overall q is the error measure associated with the lth user. While matching, whichever user yields the minimum of q over all l provides the identity of the test user that owns the training sample.
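The IPC of Steps 1-4 can be sketched as follows, with the error vectors taken as absolute differences (an assumption, since the exact error definition is elided) and the Frank t-norm in its standard form:

```python
import math

def frank_tnorm(a, b, s=2.0):
    """Frank t-norm: log_s(1 + (s^a - 1)(s^b - 1)/(s - 1)), s > 0, s != 1."""
    return math.log(1.0 + (s ** a - 1.0) * (s ** b - 1.0) / (s - 1.0), s)

def ipc_identify(train, test, s=2.0):
    """Inner Product Classifier sketch. `train[l]` holds the normalized
    feature vectors of user l's training samples; `test` is one normalized
    feature vector. For each user, every pair of error vectors is combined
    component-wise with the Frank t-norm (Step 2), the two training
    vectors of the pair are averaged (Step 3), and their inner product is
    formed (Step 4); the user with the overall minimum inner product
    identifies the test sample."""
    best_user, best_q = None, float("inf")
    for user, samples in enumerate(train):
        errors = [[abs(f - g) for f, g in zip(sample, test)]
                  for sample in samples]
        q_user = float("inf")
        for i in range(len(samples)):
            for k in range(i + 1, len(samples)):
                e = [frank_tnorm(a, b, s) for a, b in zip(errors[i], errors[k])]
                avg = [(f + g) / 2.0 for f, g in zip(samples[i], samples[k])]
                q_user = min(q_user, sum(x * y for x, y in zip(e, avg)))
        if q_user < best_q:
            best_user, best_q = user, q_user
    return best_user
```

With t training samples per user, the inner loop visits the t(t-1)/2 pairs of error vectors mentioned in Step 2.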
Extensions of IPC
Assume the exponential membership function and the corresponding information value. Replacing the weighting term in (42) with the exponential of this information value gives the Hanman Transform classifier, expressed as
Another extension is the weighted Hanman Transform classifier, obtained by combining (42) and (43) as
5. Application to Iris Based Authentication
The above information-set based features are now applied to iris textures to demonstrate their effectiveness in the authentication of users. Many approaches to iris recognition are in vogue in the literature, but they fail to yield good recognition rates on partially occluded irises. As texture is a region concept, the proposed approach proceeds with the granularization of an image by varying the window size on the iris strip so as to get an appropriate texture representation. Moreover, the proposed information set based approach is capable of modifying the information on the texture to facilitate easy classification. No new approach to the segmentation of the iris is attempted; we have used existing methods for segmentation. In this case study our emphasis is mainly on texture representation and classification using the information set based features.
5.1. A Brief Review of Iris as a Biometric
The iris has been a topic of interest for person authentication ever since the pioneering works of Daugman  and Wildes . In iris recognition, the onus is on selecting the most suitable features that enable accurate classification. As the iris is endowed with a specific texture, it can be used for investigating new texture representations and classifiers.
The Gabor filter has played a significant role in characterizing the iris texture by way of iris codes generated using phase information; hence it is one of the best tools to characterize and classify textures . The advantage of using the Gabor filter is its ability to quantify the spatio-temporal components of texture. It may be noted that better recognition of irises can only stem from a better understanding of textures. Even nearly 20 years after the inception of iris technology, efforts to find better features and classifiers are still ongoing  .
5.2. Literature Survey
The original works of Daugman  and Wildes  are the harbingers of iris based personal authentication. Daugman   uses Gabor wavelet phase information, whereas Wildes uses the Laplacian of Gaussian filter at multiple scales as features. Some important contributions to iris recognition are now discussed.
Segmentation of the iris texture region plays a pivotal role in iris recognition. Different approaches, such as morphological operations  and thresholding using histogram curve analysis , are used for segmentation. Camus and Wildes  have presented a method that does not rely on edge detection by the Hough transform for segmentation. The method of Du et al.  determines the accuracy of iris recognition for a partial iris image. There are a host of problems, such as the non-circular shape of the iris and pupil and off-axis images, which have prompted special consideration  . It has been shown that better iris segmentation helps improve the overall performance of iris recognition . Many new methods for iris segmentation can be found in .
Gabor filter features are the most sought after as far as texture is concerned . Other feature extraction methods, such as the Hilbert transform  and wavelet based filters , are also extensively used in the literature. Among the classification algorithms, mention may be made of the correlation of phase information from windows  and the Support Vector Machine (SVM) , apart from simple Euclidean distance classifiers.
Practical implementation of iris based biometrics requires faster and more efficient data storage, and a possible solution to this problem using FPGAs is suggested in . Spoofing of the iris from iris codes is a real threat, and to circumvent it countermeasures are developed in . Factors affecting the quality of iris images captured at visible wavelengths are investigated in . Concerns regarding the degradation of quality due to compression techniques are dispelled in . The quality of iris images and its effect on recognition rates are analysed with respect to the visible area of the iris texture region . An attempt is made to enable iris recognition using directional wavelets . A new methodology for biometric recognition using the periocular region (the facial region close to the eye) rather than texture features from the visible iris under Near Infrared (NIR) lighting conditions is discussed in , whereas iris recognition using score level fusion techniques on video frames is presented in .
5.3. Segmentation of Iris and Generation of Strips
Segmentation forms a very important part of iris recognition, as is evident from its effect on performance improvement . Though segmentation is not the main concern of this paper, we discuss the segmentation methodology briefly. The iris segmentation is done using the Hough transform based approach . In this, the Canny edge detector  is applied to get the segmented regions, followed by the Hough transform, which detects the boundaries of circular regions in the segmented regions. For strip generation, polar to rectangular conversion is employed without recourse to interpolation. A sample image from the database and the corresponding iris are depicted in Figure 1. The iris strips are affected by the occlusion of the eyes due to eyelids and eyelashes, as evident from Figure 1(b). To rectify this problem the iris strip is juxtaposed with itself, and the middle portion of the resulting strip is free of occlusion, as in Figure 2(b). These middle rectangular strips are enhanced and normalized before feature extraction.
The database, CASIA-Iris-V3-Lamp , collected using a hand-held iris sensor, has eye images of 411 people with at least 10 images per user. The intra-class
Figure 1. Sample image of iris and the rectangular strip that is generated from it.
Figure 2. Generation of iris strip devoid of occlusions and eyelids.
variation was introduced in the database by turning the lamps on or off during the acquisition. The experiments were carried out on 4100 left eye images of 411 people with the training to test sample ratio of 9:1 using k-fold validation. This database also contains some samples having rotation, translation, occlusion and illumination effects as shown in Figure 3.
6. Results and Discussion
The extracted features from each iris strip are EGI, TEGI, SF, EF, HF and HT. The dimensions of all test strips are normalized before matching with the training strips.
6.1. The Features Used for Comparison
The performance of the above feature types is evaluated and compared with that of the conventional Gabor filter using SVM in . After numerous trials the parameters of the Gabor filter were set as follows: standard deviations σx = 3 and σy = 3, phase offset 0, aspect ratio 1, orientations θ = π/4, 2π/4, 3π/4 and π, and wavelengths λ = 1, 2, 3.
6.2. Performance Evaluation of the Proposed Features
As shown in Table 3, IPC and linear SVM (SVML) show comparable results with the proposed features and the Gabor features, but polynomial SVM (SVMP) gives good results only with HT and HF. The accuracies are the mean values of the recognition rates under k-fold validation. IPC gives its best recognition rate of 98.1% with EF, while SVML gives its best recognition rate of 99.2% with SF. The recognition rates with the Gabor filter are 90.3% and 97.3% using IPC and SVML respectively. As the Gabor features are very large in number (more than 10,000), all the classifiers are about 10 times slower with them.
To tackle the problem of partially occluded eyes, we apply majority voting on the iris strips, which gives better performance than the individual iris strips.
Figure 3. Example iris images in CASIA-Iris-Lamp.
Table 3. Features and their mean recognition rates with different classifiers after k-fold validation.
6.3. Majority Voting
As noted in , certain regions of an iris strip, such as the middle region, possess the most discriminative texture. It may be noted that significant texture regions are present in the iris at different radial distances from the pupillary boundary. This might be attributed to the fact that for some persons the iris textures are spread over the region between the pupillary boundary and the limbic boundary , while the majority of people have iris texture features lying closer to the pupillary boundary. The aggregation of results from iris strips of different sizes enhances the overall recognition rate. In a few cases, correct classification is obtained with the small-sized iris strips; hence the need for considering features from iris strips of different sizes.
Based on the above observation, the iris region between the pupillary boundary and the limbic boundary is divided into strips of three partial sizes in addition to the full size. The number of features depends upon the window size chosen to partition an iris. In our study, the window size is taken as 7. The feature vector lengths corresponding to iris strips of 1/4, 1/2, 3/4 and full size are 78, 156, 234 and 273 respectively. The original iris strip size is 48 × 270. The accuracy achieved with IPC on each strip size is given in the 3rd column of Table 4. The maximum recognition rate is obtained on the 3/4 size strip by all feature types. The features extracted closer to the pupillary boundary yield lower accuracy than those closer to the middle of the iris region.
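The windowed extraction can be sketched as below. This is a hedged illustration only: the block-level statistic here is simply the mean, standing in for the actual information-set features (EF, SF, EGI, TEGI, HF, HT), and the resulting feature count for a full 48 × 270 strip with non-overlapping 7 × 7 windows differs from the counts quoted above, which depend on how the strips are cropped in the actual system.

```python
import numpy as np

def window_features(strip, win=7):
    """Partition a 2-D iris strip into non-overlapping win x win blocks
    and return one placeholder feature (the block mean) per block."""
    h, w = strip.shape
    feats = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            feats.append(strip[i:i + win, j:j + win].mean())
    return np.array(feats)

# Example on a dummy strip of the stated size 48 x 270
strip = np.ones((48, 270))
feats = window_features(strip)
```

In the actual pipeline each 7 × 7 window would feed the chosen entropy-based feature instead of the mean.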
At the matching stage, each region of the test iris strip is matched with the corresponding regions of all the training strips considering only one of the six feature types using IPC. With a view to improving the results of IPC on individual strips, the majority voting method is applied to the results of the four iris strips obtained using features of one type at a time. It assigns the identity of the user whose training iris strip gets the maximum votes from the four strips of different sizes (acting like four classifiers) .
Table 4. Majority voting results for different features with IPC.
As mentioned above, when the decisions from the individual feature types on strips of different sizes are combined using the majority voting method, the final decision is as shown in the penultimate column of Table 4. A further enhancement in the recognition rates is obtained when the results from all iris strips are combined using the classification accuracies of the individual feature types as weights, similar to ranks , using IPC. The combined recognition rate from all feature types on all four strips then attains 100%, as shown in the last column of Table 4. By applying majority voting to the matching results of the four iris strips of different sizes, the effect of occlusions can be minimized to a great extent.
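The accuracy-weighted combination can be sketched as follows. This is a hedged reading of the scheme: each feature type contributes its decision weighted by that type's classification accuracy, and the identity with the largest weighted score wins; the exact weighting used in the paper may differ.

```python
def weighted_fusion(decisions, weights):
    """decisions: predicted identity from each feature type;
    weights: that feature type's classification accuracy.
    Returns the identity with the largest accumulated weight."""
    scores = {}
    for ident, w in zip(decisions, weights):
        scores[ident] = scores.get(ident, 0.0) + w
    return max(scores, key=scores.get)

# Two feature types of accuracy 0.9 and 0.8 vote 'a'; one of 0.99 votes 'b'
fused = weighted_fusion(['a', 'b', 'a'], [0.9, 0.99, 0.8])
```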
A segmental approach of this type for iris recognition is proposed in . Instead of the accept option that we have used in the majority voting method, a reject option can also be used to detect possible erroneous classifications when the accept option fails to reach a consensus.
6.4. A Comparison with the Existing Methods
We have also compared the performance of our features (Table 4, where the results correspond to the 3/4 size iris strip) with that of existing features such as PCA, ICA , Local Binary Patterns (LBP) , Gabor  and Log Gabor  on the same database using k-fold validation; the results are given in Table 5. The highest performance (99.35%) is obtained with HF, EF, SF and HT using IPC, whereas the highest performance among the existing features is 96.2%, obtained with ICA using SVML.
6.5. Verification Evaluation
At the verification level, IPC is compared with the Euclidean distance classifier (EC) on the proposed features. The performance of IPC and EC is shown as two separate ROCs over the six feature types denoted EF, HF, SF, EGI, TEGI and HT, and is also judged by the recognition rates.
The Euclidean distance based ROC plot in Figure 4(b) shows a maximum GAR of 93.3% at an FAR of 0.1% with the HF features. A maximum GAR of 99% at an FAR of 0.1% is achieved with HT by IPC in the ROC of Figure 4(a). The performance of IPC is better than that of EC, as shown in Figure 4(a) and Figure 4(b).
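The GAR-at-FAR operating points quoted above can be read off an ROC computed as follows. This is a generic sketch assuming similarity scores (higher means more likely genuine), not the scoring used by IPC or EC specifically.

```python
def gar_at_far(genuine, impostor, far_target=0.001):
    """Sweep thresholds from high to low over similarity scores and
    return the genuine accept rate at the loosest threshold whose
    false accept rate still does not exceed far_target."""
    best_gar = 0.0
    for t in sorted(set(genuine) | set(impostor), reverse=True):
        far = sum(s >= t for s in impostor) / len(impostor)
        if far <= far_target:
            best_gar = sum(s >= t for s in genuine) / len(genuine)
        else:
            break  # lowering the threshold further only raises FAR
    return best_gar

# Toy scores: all genuine scores exceed all impostor scores
gar = gar_at_far([0.9, 0.8, 0.7], [0.4, 0.3], far_target=0.0)
```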
At the verification level the proposed features are also compared with Gabor filter as it is extensively used for iris. As shown in Figure 4(a) the proposed features perform better than Gabor filter.
Table 5. Comparison of the existing features using SVML.
Figure 4. ROC of average authentication by k-fold validation using different features with (a) IPC; (b) EC.
This paper introduces the important concept of a transform to represent higher forms of uncertainty. This transform, derived from the adaptive Hanman-Anirban entropy function, is called the Hanman Transform (HT). Such transforms have immense potential as they cater to both spatially varying and time varying situations. As the information need not be in the desired form, this paper shows how to modify information sets using a filter function, resulting in the Hanman Filter (HF) of zero order and first order. In addition to these two feature types, we have formulated four more: the Effective Gaussian Information source value (EGI), Total Effective Gaussian Information (TEGI), Energy Feature (EF) and Sigmoid Feature (SF). These features are extracted from the rectangular iris strip by partitioning it into windows of different sizes. The performance of IPC is similar to that of SVML, but consistent across all feature types. IPC gives the best results on EF whereas SVML gives the best results on SF; of all the feature types, EF and HT have an edge over the others. Thus the new features and IPC are shown to be effective on the iris database.
The results of authentication using iris strips of four sizes show that the 3/4 size strips yield the best results on all feature types using IPC. Applying majority voting to the authentication results obtained with a single feature type on all four strips provides 99.8% accuracy, whereas the second-level majority voting with six feature types on all four strips achieves 100% accuracy.
This paper makes several contributions that include: 1) proof of properties of the adaptive Hanman-Anirban entropy, 2) extension of information sets to Hanman filter and Hanman transforms, 3) derivation of information set based features, viz., EGI, TEGI, EF and SF and validation of these features on iris based authentication, and 4) formulation of Hanman Transform classifier.
One ramification of this work is that a plethora of features can be generated from information sets for tackling different kinds of problems, though we have chosen iris recognition to demonstrate the effectiveness of our features.
This research work was funded by Department of Science and Technology (DST), Government of India. We acknowledge the database CASIA-IrisV3 from the Chinese Academy of Sciences, Institute of Automation.
 Du, Y., Bonney, B., Ives, R., Etter, D. and Schultz, R. (2005) Analysis of Partial Iris Recognition Using a 1-D Approach. International Conference on Acoustics, Speech, Signal Processing, Philadelphia, Vol. 2, 961-964.
 Abhyankar, A., Hornak, L. and Schuckers, S. (2005) Off-Angle Iris Recognition Using Bi-Orthogonal Wavelet Network System. 4th IEEE Workshop Automatic Identification Advanced Technologies, Buffalo, 16-18 October 2005, 239-244.
 Tisse, C., Martin, L., Torres, L. and Robert, M. (2002) Person Identification Technique Using Human Iris Recognition. 15th International Conference on Vision Interface, Calgary, 27-29 May 2002, 294-299.
 Huang, H. and Hu, G. (2005) Iris Recognition Based on Adjustable Scale Wavelet Transform. 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Shanghai, 1-4 September 2005, 7533-7536.
 Miyazawa, K., Ito, K., Aoki, T., Kobayashi, K. and Nakajima, H. (2005) An Efficient Iris Recognition Algorithm Using Phase-Based Image Matching. International Conference on Image Processing, Genoa, Vol. 2, 49-52.
 Roy, K. and Bhattacharya, P. (2006) Iris Recognition with Support Vector Machines. Lecture Notes in Computer Science Vol. 3832, International Conference on Biometrics, Hong Kong, 5-7 January 2006, 486-492.
 Sayeed, F., Hanmandlu, M., Ansari, A.Q. and Vasikarla, S. (2011) Iris Recognition Using Segmental Euclidean Distances. 8th International Conference on Information Technology: New Generations, Las Vegas, 11-13 April 2011, 520-525.
 Wang, Y. and Han, J.-Q. (2005) Iris Recognition Using Independent Component Analysis. Proceedings of the 4th International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005, 18-21.
 Sun, Z., Tan, T. and Qiu, X. (2006) Graph Matching Iris Image Blocks with Local Binary Pattern. International Conference on Biometrics, Hong Kong, 5-7 January 2006, 366-372.