Tikrit Journal of Pure Science


Image transformations reveal deep information about image features, and many types of image transformation have appeared in the last decade. One of them is the curvelet transform, which has improved image-processing techniques, especially in the field of feature extraction. Personal authentication systems adopt biometric information as one of their major components, and the palm print is one of the main approaches to personal identification. This research therefore studies the moments extracted from the curvelet-transform coefficients of palm print images in order to obtain a highly efficient personal identification system. Two major phases are constructed in this research to adopt the moments of the low-frequency curvelet coefficients for personal identification. In the first phase, a database was built for 50 persons by acquiring nine images for each hand (9 for the left hand and 9 for the right hand). The images are acquired and then processed to extract the region of interest (ROI) by locating the palm centroid and fixing a square region based on that centroid; this preprocessing is an important step toward stable features. Histogram equalization is applied to the images, followed by the Sobel operator and morphological operations to highlight the palm print features; each image is then decomposed using the curvelet transform, and the low-frequency coefficients (which hold the details) are selected. Seven moments are evaluated for each image (18 images) and stored in the database file (so each person has 126 values); this phase is called personal database preparation. The second phase is the detection phase, which applies the same steps to evaluate the moments and then searches the database for the person closest to the tested one. System evaluation, measured by statistical metrics, shows good results reaching 96% when applied to 50 persons under different acquisition conditions.
The effect of the ROI dimension, for individual hands and for both hands integrated, was also studied, which led to a recommended dimension of 192*192.

1-Introduction
Utilizing the physical and behavioral characteristics of humans is the main goal of biometric systems for personal recognition [1]. Physiological features such as the iris and fingerprints are unique to each person and stable; they cannot be repeated or observed as another person's characteristics, and they cannot be stolen [2]. Features extracted from the face are among the most widely used biometrics, but their most important defect is the set of problems related to lighting, expression, and pose [1]. Fingerprints also suffer from some defects, as the elderly and manual workers are often unable to give clear and acceptable fingerprints [3]. The iris is a reliable biometric measure, but one of the most important limitations of using it is the price of high-quality iris input devices, in addition to the difficulty of user acceptance because of the discomfort of capturing the iris image [1]. Palm print recognition systems are promising technologies that have received great interest. These systems have succeeded among many biometric systems due to real-time computation, effective feature extraction, and high accuracy. Palm print recognition systems have been used in a variety of commercial applications, where low-resolution images (100 dpi or less) are used, and in forensic applications, where high-resolution images (100 dpi or above) are used [4]. The patterns in the palm print provide a great deal of useful information for identification; the palm acts as a reliable human identifier because these patterns are permanent and do not change throughout a person's life, and they differ from one person to another even in the case of twins [5].

2-Palm print features
In defining personal identity, the features of the palm are considered promising [1]. The inner surface of the palm contains three main lines (flexion creases) that are clear and do not change over the life of a person. These lines are called the heart line, the head line, and the life line. The palm also contains secondary lines called wrinkles, which are the lines on the palm other than the main lines; they are irregular and thinner than the main lines, as are the ridges found throughout the palm of the hand [5]. The three main lines and the secondary lines of the palm form between the second and fifth months of pregnancy, while other features appear after birth. The three main lines depend on heredity, while the other features do not, and therefore have great importance in determining personal identity [6]. Depending on the patterns of the lines in the palm print, it is determined whether two images belong to the same person; this is known as palm print recognition [5]. Figure (1) shows features that can be extracted from low-resolution palm print images and figure (2) shows features that can be extracted from high-resolution palm print images [4].

Several earlier works have addressed palm print recognition. The work "on-line palm print identification" [3] aimed to distinguish people in real time, applying a CCD camera-based palm print device to capture images of the palm print. To represent the low-resolution palm print image and match different palm print images, 2-D Gabor phase coding was used to extract the texture properties of the palm print, and the Hamming distance was used for matching [3]. In 2005, Tee Connie et al. presented their research "an automated palm print recognition system", suggesting an automatic system to recognize the palm print relying on a scanner. Several linear subspace projection techniques (PCA, FDA, ICA) were selected and compared to analyze palm print images in a multi-frequency, multi-resolution representation.
Experimental results show that applying FDA to the wavelet sub-band gives FAR and FRR less than 1.356 and 1.492, respectively [5]. In 2008, Adams Kong et al. published their research "three measures for secure palm print identification", in which they addressed three security problems: template re-issuance, single-restart attacks, and database attacks. They adopted a random routing filter bank as a feature extractor to create noise-like feature codes for re-issuing templates. The results showed that the resulting templates decrease in accuracy [7]. "Empirical study of light source selection for palm print recognition" was presented by Zhenhua et al. in 2011, where they analyzed the performance of the palm print recognition system under 7 different illuminations, including white light. Experimental results showed that white is not the best light; yellow and purple light may achieve higher accuracy in recognizing the palm print than white light [8]. In 2011, V. Subbiah B. and M. A. Leo Vijilious published their paper "palm print recognition using contourlet transform energy feature", providing a new way to extract the region of interest (ROI) and then applying the contourlet transform to extract features; for the feature-selection process, the energy of each sub-band is calculated. For the final biometric classification, the nearest neighbor classifier is used. The results were promising: when the contourlet transform was combined with the energy feature, high accuracy was obtained [6]. In 2013, Hatem et al., in their paper "Palmprint recognition using 2-d wavelet, Ridgelet, Curvelet and Contourlet", made a comparative study between image transformations for the purpose of palm print identification. The contourlet transform achieved the highest recognition rate, followed by the wavelet [4]. Sampada A. Dhole and V. H. Patil adopted "palm print recognition using contourlet transform" in 2015, where the research aims to analyze the performance of the palm print recognition system using contourlet features. The results showed that using the contourlet transform followed by PCA for dimension reduction gives a greater effect than using PCA alone [2]. In 2017, Pawan et al. published their paper "Palmprint recognition using binary wavelet transform and LBP representation", where they suggested a system for recognizing the palm print relying on the binary wavelet transform due to its ability to represent edges well, and obtained a genuine rate of 98% [9]. In 2020, palm print recognition moved toward highly robust automated techniques; Poonam et al., in their research "palm print recognition using robust template matching", obtained better results in terms of Correct Recognition Rate (CRR) and Equal Error Rate (EER) [1]. Also in 2020, Fei, L., Zhang, B., et al. published their research "feature extraction for 3-d palm print recognition", where palm print recognition moves to matching in 3-D; they present a comprehensive overview of feature extraction and matching for 3-D palm print recognition [10].

Feature extraction
Feature extraction refers to obtaining higher-level information from an image, such as texture, color, and shape. Features contain information related to the image and are used in image-processing tasks such as search, retrieval, and storage [11]. The process of converting the input data into a set of features is called feature extraction. Obtaining the most relevant information from the original data is its main goal: that information is represented in a lower-dimensional space, which matters especially when the input data is too large to be processed directly and is suspected of being redundant, so it is converted into a feature vector. Features are classified into two main categories [12]:
- Global features, which are divided into two sub-categories [12]:
1. Topological, such as projection profiles and the number of openings.
2. Statistical, such as invariant moments.
- Local features, such as branches, joints, and concave and convex parts.
Tuceryan and Jain divided feature extraction methods into four main categories [11]:
- Structural domain: textures are represented by primitives (micro-texture) and by the hierarchy of the spatial arrangement of the primitives (macro-texture).
- Statistical domain: according to the non-deterministic properties that govern the distributions and relationships between the gray levels of an image, statistical methods represent the texture indirectly. This technique is among the earliest approaches in machine vision.
- Model-based methods: model-based texture analysis, such as fractal models and Markov models, is based on building models that can describe the texture. These methods describe the image as a linear combination of a set of basis functions or as a probability model.
- Transform-based methods: in a transform-based method, an image is represented in a space whose coordinate system has an interpretation closely related to texture properties (such as frequency or scale). These methods rely on transforms such as the Fourier transform, the Gabor transform, and the wavelet transform; the most widespread tool, preferred by researchers, is the wavelet transform.
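As a small illustration of the "statistical invariant moment" features mentioned above, the following sketch (using NumPy; the function names are our own, not from the paper) computes central image moments and the first Hu invariant, which does not change when the object is translated:

```python
import numpy as np

def central_moments(img, p, q):
    """Central moment mu_pq of a grayscale image (translation-invariant)."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00  # intensity centroid
    ybar = (y * img).sum() / m00
    return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()

def hu_first(img):
    """First Hu invariant, eta20 + eta02, with scale normalization."""
    m00 = np.asarray(img, dtype=float).sum()
    eta = lambda p, q: central_moments(img, p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)
```

Because the moments are taken about the centroid, the same shape placed anywhere in the image yields the same invariant value.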

Curvelet transformation
The curvelet transform, proposed by Candès and Donoho, was developed to overcome the limitations of traditional multi-scale representations [13]. It is a multi-resolution transformation method associated with computer vision and image processing: a multi-scale, geometric, directional transformation that allows non-adaptive sparse representations of objects that have edges [13]. The sparsity of the Fourier series is destroyed by discontinuities: reconstructing the discontinuities with good accuracy requires a large number of Fourier terms. The wavelet transform was found to solve this problem, as it is localized and multi-scale. Although the wavelet transform is good at representing point singularities in 1-D and 2-D signals, it fails to efficiently represent curved singularities. The curvelet transform was developed specifically to represent objects that contain curves, i.e. objects that appear smooth except for discontinuities along a general curve; images containing edges are a good example of this type [14]. Figure (3) illustrates the edge-representation ability of the wavelet transform (A) and the curvelet transform (B). Note that more wavelets are required to represent the edge, because of their square support, compared with curvelets, which have an elongated needle shape [14]. The curvelet transform opens the possibility of analyzing an image with different block sizes, but with one transformation. The image is decomposed into a group of sub-bands, and then each band is analyzed by means of a local ridgelet transform, where the size of the blocks can change at each scale [15]. The first-generation discrete curvelet transform (DCTG1) algorithm is summarized by the following steps [16]:
1. Sub-band decomposition: the image is passed through a filter bank to obtain the sub-bands, as shown in equation (1):
f ↦ (P0 f, Δ1 f, Δ2 f, …) … (1)
2. Smooth partitioning: each sub-band is windowed into squares whose size depends on the scale.
3. Renormalization: each square is renormalized to unit scale.
4. Ridgelet analysis: each renormalized square is analyzed by the discrete ridgelet transform.
Figure (4) shows the first-generation discrete curvelet transform (DCTG1) [15].
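The sub-band decomposition step can be illustrated with a minimal sketch. The code below partitions the 2-D frequency plane into radial rings with hard masks; this is only a crude stand-in for the smooth curvelet filter bank (the real DCTG1 additionally applies smooth partitioning, renormalization, and ridgelet analysis), and the function name is illustrative:

```python
import numpy as np

def fft_subbands(img, n_bands=3):
    """Split an image into radial frequency bands via the 2-D FFT.
    A simplified stand-in for the curvelet sub-band filtering step."""
    img = np.asarray(img, dtype=float)
    F = np.fft.fftshift(np.fft.fft2(img))          # center the spectrum
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.mgrid[:h, :w]
    r = np.hypot(y - cy, x - cx)                   # radial frequency
    edges = np.linspace(0, r.max(), n_bands + 1)
    edges[-1] = r.max() + 1                        # include outermost ring
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)                # hard ring mask
        band = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
        bands.append(band)
    return bands
```

Because the ring masks partition the whole frequency plane, the sub-bands sum back exactly to the original image, mirroring the perfect-reconstruction property of the curvelet filter bank.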

Invariant moments
Standard moments are defined as the projection of f(x, y) onto the monomial x^p y^q. The basis set x^p y^q is not orthogonal, so recovering the image from these moments is computationally complex and extremely difficult; in addition, the information content of m_pq possesses a degree of redundancy [17]. To overcome the problems associated with standard moments, Teague proposed orthogonal moments based on the theory of orthogonal polynomials. Zernike moments are one class of orthogonal moments [17]. Zernike moments have many characteristics that make them suitable for image analysis and pattern recognition: they are orthogonal, they are not affected by image rotation, and they are easy to generate to arbitrary order [18]. Zernike introduced a set of complex polynomials forming a complete orthogonal set on the unit circle (x² + y² = 1). Equation (2) indicates the form of these polynomials [17].
V_nm(x, y) = V_nm(ρ sin θ, ρ cos θ) = R_nm(ρ) exp(jmθ) … (2)
where n represents the order and is zero or a positive integer; m represents the repetition and is a positive or negative integer subject to the two conditions n − |m| = even and |m| ≤ n; ρ is the length of the vector from the origin to the pixel at the point (x, y); θ represents the angle between the vector ρ and the x axis; and R_nm denotes the radial polynomial, represented by equation (3) [18]:
R_nm(ρ) = Σ_{s=0}^{(n−|m|)/2} (−1)^s [(n − s)! / (s! ((n + |m|)/2 − s)! ((n − |m|)/2 − s)!)] ρ^(n−2s) … (3)
The projection of the image function onto this orthogonal basis gives the Zernike moments, as explained by equation (5):
A_nm = ((n + 1)/π) ∬ f(x, y) V*_nm(ρ, θ) dx dy, over x² + y² ≤ 1 … (5)
For digital images, the sum form of equation (6) is used:
A_nm = ((n + 1)/π) Σ_x Σ_y f(x, y) V*_nm(ρ, θ), x² + y² ≤ 1 … (6)
When the Zernike moments are calculated, the center of the image is determined first, then the range of the unit circle is determined; points outside the circle are neglected and not taken into consideration in the calculation [18]. Zernike moments have many advantages for digital image processing applications, including [19]:
1. Zernike moments provide a unique description of the object with little redundancy of information.
2. Images can be reconstructed from them perfectly.
3. Zernike moments are not affected by rotation.
4. Compared with Hu moments, Zernike moments are more flexible, more accurate, and easier to reconstruct from.
5. Zernike moments perform well in shape recognition applications, since their invariants can be calculated independently without the need to calculate lower-order invariants.
6. For the purpose of obtaining a good descriptor for a given image database, the orthogonality property allows evaluating the order required to calculate the moments.
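Equations (2), (3), and (6) translate directly into a short NumPy sketch (assuming a square grayscale image mapped onto the unit disc; the function names are ours, not from the paper):

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Radial polynomial R_nm(rho), equation (3)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Zernike moment A_nm over the unit disc, equation (6)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    # map the pixel grid onto the unit circle centred on the image
    xn = (2 * x - w + 1) / (w - 1)
    yn = (2 * y - h + 1) / (h - 1)
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0            # points outside the unit circle are ignored
    V_conj = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * V_conj[mask])
```

A known sanity check is that R_nm(1) = 1 for all valid (n, m), and that the magnitude |A_nm| is unchanged when the image is rotated, which is the property the recognition system relies on.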

Proposed algorithm
The proposed algorithm, based on the curvelet transform, includes two main phases in order to determine identity directly from the characteristics of the palm print images, as follows:
Database preparation phase:
1. Nine left-hand and nine right-hand images are acquired for each person.
2. The region of interest is specified for each image.
3. An adaptive histogram is applied to the ROI images to improve the contrast of the images.
4. The Sobel operator is applied and morphological operations are performed on the images to show the lines of the palm.
5. The images are analyzed by adopting the curvelet transform.
6. The low-frequency coefficients are taken.
7. The moments are calculated for these coefficients.
8. The moments are stored in the database, where each person has nine rows in the right-hand database and nine rows in the left-hand database.
Evaluation phase:
1. One image each of the left and right hand of the person to be tested is acquired.
2. The region of interest is determined for the left and right palm images.
3. An adaptive histogram is applied to the image of the region of interest.
4. The Sobel operator is applied and morphological operations are performed on the region of interest.
5. The images of the region of interest are analyzed by adopting the curvelet transform.
6. The low-frequency coefficients are selected.
7. The moments of these coefficients are calculated.
8. The moment values are compared with the previously stored databases for the left and right hands.
9. If the MSE value, the correlation value, and the Euclidean distance fall within the threshold limits specified in advance, a message appears stating that the person is recognized; otherwise, new entries are added to the two databases for the new person.
All vectors are used for the purpose of measuring performance efficiency in identification. The linear correlation coefficient is used and is illustrated by equation (7):
r = (n∑xy − ∑x∑y) / √[(n∑x² − (∑x)²)(n∑y² − (∑y)²)] … (7)
where n is the number of values, ∑x is the total of the first variable's values, ∑y is the total of the second variable's values, ∑xy is the sum of the products of the first and second values, ∑x² is the sum of the squares of the first values, and ∑y² is the sum of the squares of the second values.
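The matching step of the evaluation phase, using the Euclidean distance, the mean square error, and the correlation coefficient of equation (7), might be sketched as follows. The threshold values are taken from the paper; the helper names and the use of the Euclidean distance for ranking (which the paper found more accurate) are our own framing:

```python
import numpy as np

def match_scores(probe, template):
    """Euclidean distance, MSE, and linear correlation (equation 7)
    between a probe moment vector and a stored template row."""
    probe = np.asarray(probe, dtype=float)
    template = np.asarray(template, dtype=float)
    euclid = np.sqrt(np.sum((probe - template) ** 2))
    mse = np.mean((probe - template) ** 2)
    n = probe.size
    num = n * np.sum(probe * template) - probe.sum() * template.sum()
    den = np.sqrt((n * np.sum(probe ** 2) - probe.sum() ** 2)
                  * (n * np.sum(template ** 2) - template.sum() ** 2))
    return euclid, mse, num / den

def identify(probe, database, d_thresh=0.006, mse_thresh=6e-6):
    """Return the index of the closest database row if it passes both
    thresholds, otherwise None (person treated as unregistered)."""
    best, best_d, best_mse = None, np.inf, np.inf
    for i, row in enumerate(database):
        d, mse, _ = match_scores(probe, row)
        if d < best_d:
            best, best_d, best_mse = i, d, mse
    if best_d <= d_thresh and best_mse <= mse_thresh:
        return best
    return None
```

In this sketch, a rejected probe would trigger the enrollment path of step 9, adding new rows to the left- and right-hand databases.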

Implementation steps
Building the database: in this experiment the steps listed in the proposed algorithm are adopted, but here the selected images are analyzed by the curvelet transform with different levels and different sizes. Through the study of the curvelet coefficients, it was found that the lowest level, which contains the low frequencies, holds extensive data about the palm image, and accordingly it was adopted for identification by applying the following steps:
First step: acquiring images and determining the region of interest. Nine images are acquired for the left hand and nine for the right hand, as shown in figures (5), (6), (7), and (8).
Fourth step: calculating the seven moments for the first coefficient of the curvelet transform and adding a person's entry to the database (nine rows for the left hand and nine rows for the right hand), as shown in the next section. Figure (15), table (2), and figure (16) show the values of the moments for the first person in the database.
The effect of changing the dimensions of the region of interest has also been studied. A database was built to include 15 persons for each of the dimensions mentioned in table (3). The table shows the values of the Euclidean distance with a threshold of 0.006, the mean square error with a threshold of 6e-06, and the correlation coefficient with a threshold fixed at 0.0097, after performing the test on a person who was previously registered in the database (person no. 1); see figures (17), (18), and (19). Table (4) is an example of the checks carried out on 5 people for the right and left hands; it shows the results of the examination for different images of persons previously registered in the databases. Table (3) shows that the use of dimensions 32*32 is not useful in the recognition process, as its results are far from accurate.
Increasing the dimensions to 64*64 led to a clear improvement in recognition, but the results remained far from the pre-set threshold. Using the 128*128 dimensions leads to good recognition results, so it is recommended. The 192*192 dimensions yield even better recognition, so they are highly recommended for personal authentication. Finally, from the above results it is evident that using the Euclidean distance in the matching process gives more accurate results than using the correlation coefficient.
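The centroid-based ROI extraction and the Sobel step used throughout the pipeline might be sketched as follows (NumPy only; a full implementation would also apply the adaptive histogram and the morphological operations, and the function names are illustrative):

```python
import numpy as np

def square_roi(img, size=192):
    """Crop a size x size square centred on the intensity centroid,
    mirroring the centroid-based ROI step, clamped to the image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    total = img.sum()
    cy = int((y * img).sum() / total)
    cx = int((x * img).sum() / total)
    half = size // 2
    cy = min(max(cy, half), img.shape[0] - half)   # keep square inside image
    cx = min(max(cx, half), img.shape[1] - half)
    return img[cy - half:cy + half, cx - half:cx + half]

def sobel_magnitude(img):
    """Gradient magnitude with the 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                 # accumulate the 3x3 correlation
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)
```

With `size=192` this produces the 192*192 ROI that the experiments above recommend, and the Sobel magnitude highlights the principal lines before the curvelet decomposition.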

Result discussion
The results of the mean square error support the results of the Euclidean distance.

Conclusion
The results show that adopting the curvelet transform as a primary treatment of the acquired images, by decomposing them into their primary coefficients, concentrates the palm print information, which clearly contributed to a high degree of recognition. It was also found that the 192*192 dimensions were the most suitable for determining personal identity.

Future work
- Through the results obtained and the conclusions reached, the proposed algorithm could be adopted in some real applications to identify persons.
- It could be used in security services.
- It could be developed by adopting other image transformations and then comparing the results to reach more accurate results.
- The Zernike moments could be replaced with fractal-dimension characteristics in order to extract the properties of the palm print texture.