A Pixel-Based Method for Image Compression

Digital images are commonly subjected to image compression techniques to reduce their size for efficient storage and fast transmission. In this paper, a new pixel-based scheme is proposed for grayscale image compression. It is a hybrid technique that combines a spatial modelling stage based on minimum residuals with a transform stage based on the Discrete Wavelet Transform (DWT), and it mixes lossless and lossy coding to ensure high performance in terms of both compression ratio and quality. The proposed technique has been applied to a set of standard test images, and the results obtained are significantly encouraging compared with the Joint Photographic Experts Group (JPEG) standard.


Introduction
With the enormous growth of modern communication and networking applications, fast transmission and compact storage of data have become pressing issues. Every moment, an enormous number of images are stored and shared among people. Despite significant developments in storage capacity and in high-quality, high-capacity communication networks, compression algorithms remain pivotal for keeping transmission time low and storage requirements reasonable [1]. Over the previous decades, various image compression methods have been proposed, which can be broadly classified into two main types: lossless and lossy. With lossless (also called noiseless) compression, the reconstructed image is exactly identical to the original, since no data are lost and error-free reconstruction is guaranteed; this is required in medical, security and military imaging. With lossy compression, the exact original image cannot be reconstructed; some noise is introduced because data are discarded during the compression process. This is widely used for fast transmission of still images over the Internet, where a certain amount of error is acceptable [2][3][4]. Thus, the choice between lossless and/or lossy techniques is determined by the requirements of the targeted application [5][6].

Image compression is achieved when one or more kinds of redundancy are curtailed. In grayscale image compression, three basic data redundancies may exist: inter-pixel redundancy, coding redundancy and psycho-visual redundancy [1]. Removing image data redundancy is the backbone of image compression; depending on how the redundancy is exploited, the compression scheme and technique vary, where the scheme implies lossless or lossy compression, while the technique implies Transform Coding (TC), Spatial Coding (SC) or Hybrid Coding (HC) [5]. A vast number of techniques have been suggested in the attempt to compress images efficiently. Some of them are standards, such as JPEG and JPEG2000, while much work is still under study, such as Block Truncation Coding, Vector Quantization, Predictive Coding and Fractal coding [7]. JPEG is the best-known international standard image compression technique due to its high compression with visually pleasing quality, ease of use and speed. JPEG is a transform coding technique that exploits the Discrete Cosine Transform (DCT), which efficiently represents each spatial segment (region) of size 8x8, followed by quantization, zigzag ordering (from top left to bottom right) and, finally, encoding of the Direct Current (DC) and Alternating Current (AC) coefficients [8,9]. In this paper, we introduce a new hybrid compression technique (mixed lossy and lossless) to compress grayscale images. The new modelling technique is based on exploiting a pixel model of minimum residuals, which yields significantly efficient performance in terms of compression ratio and quality. The remaining sections of this paper are organized as follows: section 2 is concerned with the most relevant works, while sections 3, 4 and 5 discuss the proposed technique, the experimental results and the conclusions, respectively.

Related Works
The related works concerning this paper can be divided into two parts. The first part of the surveyed works discusses some effectors [10][11][12] exploited as a pre-processing step to eliminate inherited spatial redundancy, such as: Ghadah, K. and Shaymaa F.
[10] in (2017) adopted a hybrid technique for improving the performance of linear-based polynomial techniques in two stages. The first stage starts by utilizing a lossy fixed predictor (one causal neighbour) to eliminate the correlation embedded between pixels, then applies the wavelet transform, utilizing a linear polynomial on the approximation subband and soft thresholding on the remaining subbands. The second stage is near-lossless: another coder stage encodes the difference between the lossy reconstructed image (of the first stage) and the original image, quantized uniformly to guarantee a nonnegative integer indicating the error tolerance. Experimental results on three standard 256x256 grayscale images show that the suggested method outperforms traditional polynomial coding; the Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR) were (12.2681, 12.4215) and (58.7706, 45.68397) respectively. The results are directly affected by the fixed predictor model, with the limitation of its one-dimensional, first-order causality. Ghadah, K. and Murooj A. [11] in (2018) proposed a lossy method aimed at improving linear-based polynomial techniques by using fixed-predictor and selective-predictor techniques within a lossy scheme. The fixed predictor decorrelates the high dependency by eliminating the redundancy embedded between neighbouring pixels; linear polynomial coding is then utilized, followed by uniform scalar quantization of the polynomial approximation coefficients. Here, the fixed predictor is regarded as a pre-processing step that enhances polynomial performance while preserving image information. The selective predictor clearly removes any remaining embedded redundancy, with promising performance compared to the fixed predictor model. Experimental results on standard 256x256 grayscale images indicate high image quality with a promising compression ratio; for the Lena image with a fixed predictor of 9 local neighbouring pixels, CR and PSNR were 5.3464 and 34.8224 respectively. The selective predictor, however, suffers from problems related to time and storage (index overhead).
Ghadah, K. and Heba K. [12] in (2020) introduced a lossless compression method for grayscale medical images using a hierarchical technique and a fixed predictor. The hierarchical scheme is used to improve the performance of the fixed-predictor technique, which is characterized by a low compression rate when used alone. By utilizing an even/odd hierarchical scheme to partition the input image into four quadrants and then applying the same fixed predictor to each quadrant, both the compression rate and the quality of the reconstructed image are enhanced. The results improve further when different fixed predictor models are exploited. This was evident from the results obtained when applying the algorithm to three standard 256x256 medical images; for the Brain, Knee and Tummy images the compression ratios were 14.6155, 13.3966 and 25.924 respectively. The second part is concerned with other pixel-based techniques, including: Firas J. and Hind Q. [13] in (2012) proposed an approach for image compression based on a new method called the Five Modulus Method (FMM). The suggested approach can be applied to colour images, but it is most appropriate for bi-level images (black-and-white medical images). For simplicity, the original image is partitioned into nxn blocks, and a novel algorithm transforms all image pixels into numbers divisible by 5 for each of the R, G and B planes, which does not noticeably affect the human visual system (HVS).
Then the image values are divided by 5, resulting in a new image with values in the range 0-51. After that, the minimum value is found and subtracted from the resulting array. Each pixel then needs approximately 6 bits for representation, which is clearly less than the traditional 8-bit representation. Despite the high PSNR (44.376 for the Lena image), only a low compression ratio (between 1.6 and 1.87) was gained; therefore, this method cannot be used standalone, but it might be embedded into other techniques. Kaur, N. [1] in (2013) presented a new method for image compression based on image byte streaming and pixel correlation, implemented with the DCT. This technique is particularly appropriate for designing simple and fast decoders. Colour images are separated into three planes and each plane is compressed alone. The reduction in compressed size depends on the colour coefficients: the more identical colour coefficients there are, the more the size is reduced. Experimental results on different images of different sizes show that it achieves more than 50% compression (ratios between 5.4 and 7.2) without any effect on the quality of the compressed images, because bit references are used. The main limitation of this method is that it is used only within the JPEG image compression process. Pralhadrao, V. and Saravanan, N. [14] in (2013) introduced a spatial-domain lossless image compression algorithm called Pixel Size Reduction (PSR) for synthetic and other 24-bit colour images. The idea is similar to the Huffman method, where image pixels are represented with the least number of binary bits instead of 8 bits per colour. Three basic steps are performed on the input images: first, a pixel-occurrence table is prepared for each colour component and stored in order; second, each pixel is revalued, assigning 0 to the most frequent value, 1 to the next and so on, after which the minimum bit length is found and stored as header information for each pixel to help reconstruct it during decoding; lastly, the header pixels are compressed using Lempel-Ziv-Welch (LZW) encoding. Compared with standard methods like Huffman and RLE, the results show that the CR of the proposed PSR algorithm is between 1 and 1.5, and it performs well especially when the most frequent unique pixels dominate. Narmatha, C. et al. [15] in (2017) suggested a novel near-lossless pixel-based compression scheme for grayscale images. A level of security against intruders is achieved during encoding, and good quality is obtained for the retrieved image at decoding. Processes of separation, shuffling and conversion are performed in two stages, resulting in a binary image that is encoded as the last step of the encoding process. Experimental results show that about 60% of the original image can be reduced during compression, with an error rate close to zero. Standard 256x256 grayscale images were tested using MATLAB; for the Lena image the CR was about 50.761%. Although the PSNR is high and the quality is good in most cases, in some cases it may not be. Rime, R. et al. [16] in (2016) introduced a new transformation method called Enhanced DPCM Transformation (EDT) for both medical and natural images. Huffman entropy coding and Differential Pulse Code Modulation (DPCM) are used for lossless and near-lossless image compression. For simplicity, the input image is first divided into small blocks, assuming some prediction error during transmission and compression, and all the smaller images are arranged on the basis of quotient and remainder.
Second, Huffman encoding is applied to the resulting image samples after the prediction error is found. Despite its greater complexity, this method can be efficient for lossless and/or near-lossless medical image compression. Compared with the standard JPEG-LS, for the Lena image the CR and PSNR were (7.88, 39.16) and (9.43, 34.54) respectively.

Proposed Methodology
The proposed modelling technique takes advantage of the correlation between neighbouring pixels. Since neighbouring pixels are not statistically independent, we can exploit this dependency between adjacent pixels and build a mathematical model based on finding the mean value of each set (row) of correlated pixels. We introduce a new pixel-based technique that implicitly combines a spatial modelling technique based on minimum residuals with the DWT transform technique, and that also mixes lossless and lossy coding to ensure high performance in terms of compression ratio and quality. The encoder of the proposed method, shown in (Fig. 1), produces one vector of mean values, which is compressed using DPCM, and two matrices: the first is the quotients (indexes) matrix of small integer values, which is compressed using Huffman and LZW coding; the second is the remainders (residual) matrix, which carries less valuable information and is compressed using the Haar wavelet compression technique. The following steps are implemented in the encoder unit (equations 1-4 are suggested in this work).
Step 1: Read the input square grayscale image I of size mxn.
Step 2: Read the number of neighbours (limit, i.e. ngb) and the step size (inc).
Step 3: Read the input image row by row and compute the mean value per row, such that: Vmean(m) = (1/n) × Σ I(m, j), for j = 1, …, n … (1) where Vmean(m) is the means vector of the input image, used to compute the extra sub-mean values in the subsequent steps.
Step 4: To control the compression ratio and image quality, two input parameters are recruited, which permit the user to accomplish that: - The first input parameter is the number of pixel neighbours (ngb); with this parameter we can specify how many extra sub-mean values will be computed.
- The second input parameter is the step size, or increment value (inc). With this parameter we can specify how far apart (the difference) two consecutive values of the sub-mean vector are. The accumulated value of this parameter increases in each iteration, where the number of iterations must not exceed the value of the first parameter (ngb). The purpose of these iterations is to compute a set of sub-means per mean value, as in equation (2) below: Vsubmean(t) = Vmean(m) × t × inc … (2) where Vsubmean is a vector of sub-mean values per row, inc is the step size or increment value such that 0 < inc < 1, and t is a positive integer such that 1 ≤ t ≤ ngb.
Step 5: For each pixel I(m, n) in the current row: 5.a. Find the index t of the nearest sub-mean, i.e. the t associated with the lowest nonnegative value of the subtraction between the pixel value I(m, n) and Vsubmean(t), and store it in the indexes array, as in equation (3): Mx_aray(m, n) = t, where (I(m, n) − Vsubmean(t)) is the lowest nonnegative difference … (3) 5.b. In the same iteration as step 5.a, calculate the remainder (residual) between the current pixel value I(m, n) and the selected (nearest) sub-mean value Vsubmean(t), as in equation (4): Res(m, n) = I(m, n) − Vsubmean(t) … (4) where Res is an array of the remainder (residual) values. Before moving to the next column, the value t is saved in the indexes array cell; here t represents how many of the row's Vsubmean values the current pixel value exceeds. Otherwise a zero value is saved in the indexes array cell, which means that the current pixel value is less than all Vsubmean values. A minimal code sketch of this modelling step is given below.
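As an illustration of steps 3-5, the following is a minimal Python sketch of the per-row modelling (equations 1-4). The variable names, the use of t × inc for the sub-means, and the handling of the index-0 case are assumptions made for this sketch, guided by the worked example later in the paper.

```python
import numpy as np

def encode_row(row, ngb=8, inc=0.25):
    """Model one image row: return its mean, index entries (t) and residuals."""
    v_mean = row.mean()                                    # equation (1): row mean
    sub_means = np.array([v_mean * t * inc                 # equation (2): sub-means,
                          for t in range(1, ngb + 1)])     # assuming t * inc accumulation
    indexes, residuals = [], []
    for pixel in row:
        diffs = pixel - sub_means                          # differences to each sub-mean
        nonneg = np.where(diffs >= 0)[0]
        if nonneg.size:                                    # equation (3): lowest nonnegative difference
            t = int(nonneg[np.argmin(diffs[nonneg])]) + 1  # 1-based index t
            res = pixel - sub_means[t - 1]                 # equation (4): residual
        else:                                              # pixel below all sub-means: index 0;
            t, res = 0, float(pixel)                       # residual assumed to be the pixel itself
        indexes.append(t)
        residuals.append(res)
    return v_mean, np.array(indexes), np.array(residuals)

# e.g. a pixel of 135 in a row whose mean is 100 yields t = 5 and residual 10
# with ngb = 8 and inc = 0.25, matching the worked example later in the paper.
```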

Fig. 1: Flowchart of Encoder unit
Step 6: Executing the above steps results in two different arrays (Mx_aray and Res) and one vector (Vmean); each of them is compressed separately using a different compression method, such that (a short sketch of the DPCM and quantization steps is given after this list):
- For the mean vector (Vmean) computed in equation (1), we apply the DPCM lossless coding method. DPCM is a simple, symmetric spatial coding technique that utilizes the correlation (similarity) embedded between neighbouring pixels in a flexible way, where the model varies with the image details (characteristics). The DPCM core consists of two steps: prediction, of a stochastic basis, and finding the error (residual), of a probabilistic basis; namely, each pixel's value is predicted from neighbouring pixels, and then the difference (residual) between the original and the predicted image is found [17]. Equation (5) is an example of DPCM for a one-dimensional matrix [18].
- Since the indexes array (Mx_aray) holds important data, we apply lossless entropy coding methods (Huffman and LZW).
- For the array of remainder values (Res), we apply the Haar Discrete Wavelet Transform (DWT) with three hierarchical levels. The DWT is a coding strategy that attempts to segregate different characteristics of a signal in such a way that the signal energy is collected into few components (the low-pass subband); this makes the compression of these components more efficient than compressing the signal itself [19]. The simple Haar wavelet basis adopted here, as in [19], is controlled by three parameters: the quantization step (Q), Alpha (α) and Beta (β). The quantization step for the detail subbands at each wavelet level (w) is computed according to equation (6) [19], so that the quantization step (Qstep) is reduced as the wavelet level increases.
Qstep_w = Q × α^(w−1) for the LH and HL subbands at level w, and Qstep_w = Q × β × α^(w−1) for the HH subband at level w … (6) where Qstep_w is the quantization step per level, w is the wavelet level, and LH, HL and HH are the detail subbands. To quantize the low-high, high-low and high-high subbands, equation (7) [19] is applied; then, at the third level, the coefficients of these subbands are compressed using the lossless Arithmetic Coding technique.
QLHHL_w = LHHL_w / Qstep_w for the LH and HL subbands at level w, and QHH_w = HH_w / Qstep_w for the HH subband at level w … (7) where QLHHL and QHH are the quantized residual coefficients. Lastly, the low-low (LL) subband of the third level is compressed using Huffman coding, since it holds significant information. The output of the encoder is a compressed file containing three separately compressed items.
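The following minimal sketch makes the mean-vector coding and the subband quantization concrete. The previous-neighbour form of the DPCM and the rounding of the quantized coefficients are assumptions of this sketch; the step sizes follow equations (6) and (7).

```python
import numpy as np

def dpcm_encode(vector):
    """DPCM residuals: keep the first value, then store successive differences
    (assumed previous-neighbour predictor; the paper's equation (5) is not reproduced here)."""
    vector = np.asarray(vector, dtype=float)
    return np.concatenate(([vector[0]], np.diff(vector)))

def dpcm_decode(residuals):
    """Inverse DPCM: the cumulative sum rebuilds the original vector exactly."""
    return np.cumsum(residuals)

def q_step(Q, alpha, beta, w, subband):
    """Quantization step per wavelet level w, equation (6)."""
    step = Q * alpha ** (w - 1)
    return step * beta if subband == "HH" else step   # HH uses the extra beta factor

def quantize(coeffs, Q, alpha, beta, w, subband):
    """Equation (7): divide detail coefficients by the level's step (rounding assumed)."""
    return np.round(np.asarray(coeffs, dtype=float) / q_step(Q, alpha, beta, w, subband))
```

For example, dpcm_decode(dpcm_encode(v_mean)) recovers the mean vector losslessly, while quantize(...) is applied to the LH, HL and HH coefficients of each wavelet level before entropy coding.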
A good decoder is one for which the reconstructed signal (image) has good quality with little degradation as perceived by a human observer. The decoding process starts by reading the compressed data and re-constructing an approximated uncompressed image. (Fig. 2) shows the block diagram of the decoder of the proposed method. The decoder performs the following steps: - Rebuild the mean vector (Vmean) by applying inverse DPCM.
- The quotient (indexes) array (Mx_aray) is reconstructed by applying the inverse of the LZW and Huffman coding.
- For the remainder (residual) part: first, de-quantize the wavelet coefficients by multiplying them by the corresponding quantization step per level (the inverse of equation (7)), giving IRes, an array of de-quantized coefficients. Second, apply the inverse Haar DWT to reconstruct the approximated remainder values of the (IRes) array.
- Reconstruct the original image values using equation (9), such that: I_Reconstruct(m, n) = Vmean(m) × Mx_aray(m, n) × inc + IRes(m, n) … (9) where IRes is the inverse-quantized array of the residual part. The example below illustrates the encoder and decoder processes of the suggested method. Example: let the mean value of the first row of an image be 100, the number of neighbours (ngb) be 8, and the step size be inc = 0.25 (for simplicity, the DWT is not applied to the residual in this example). For pixel I(1,1), the smallest nonnegative difference value in the Res array is 10, obtained at t = 5 (i.e. the pixel value is 135 and the selected sub-mean is 100 × 5 × 0.25 = 125).

Decoding:
To retrieve the value of pixel I(1,1) in the previous example, the decoder unit has just three arrays (Vmean, Mx_aray, Res) and two parameters (limit/ngb, inc), so the sub-mean value per row must be re-computed. The retrieved value Rvm is then Rvm = Vmean(1) × Mx_aray(1,1) × inc + Res(1,1) = 100 × 5 × 0.25 + 10 = 135, as the small sketch below verifies.
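A minimal sketch of this decoding step, using the example's values (the DWT stage is skipped here, so the reconstruction is exact):

```python
# Decoder side of the worked example: Vmean = 100, inc = 0.25, Mx_aray(1,1) = 5, Res(1,1) = 10.
v_mean, inc = 100, 0.25
t, res = 5, 10
sub_mean = v_mean * t * inc          # re-computed sub-mean: 125.0
rvm = sub_mean + res                 # retrieved value Rvm, equation (9) without the DWT stage
print(rvm)                           # -> 135.0 (the original pixel value)
```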

Results and Discussions
Testing was applied to well-known standard images (Cameraman, Lena and Girl), all grayscale (8 bits/pixel) and of size 256×256; see (Fig. 3). In this work, the mean vector and the indexes array are losslessly compressed because of the significant data they hold.
From the experimental results of this work, shown in (Tables 1-3), several points can be made. First, the values of the mean vector show little variance among the different tested images (between 78 and 110 bytes) and have a fixed size in both cases, indicating that the mean values are not affected by the control parameters. Second, the small size of the resulting mean vector reflects the advantage of mixing the DPCM and Huffman entropy coding methods, which exploit the high correlation between mean values. Third, the size of the indexes matrix varies more among the tested images, with a byte size between 980 and 1324 in case 1 and between 2050 and 2450 in case 2, which means that its size is affected by the control parameters: as ngb increases, the index matrix size increases. Fourth, mixing the LZW and Huffman encoding methods is highly advantageous in reducing the indexes matrix. The results in the tables below clearly illustrate that the technique is directly affected by the image's characteristics (the variation in grayscale); in other words, the compression ratio generally varies according to the image nature. The performance of the method is also affected by the control parameters (ngb and step size) and by the quantization parameters (Q, α and β), especially when applied with the multiresolution technique; the values of these quantization parameters should not be equal to zero at any level of the wavelet transform. The statistical (spatial) part is controlled by the two control parameters, which determine the mean vector and indexes sizes. From the testing results, the best value for the number of neighbours is less than 20 and the best value for the step size is between 0.125 and 0.25. The compression ratio increases gradually as the number of neighbours increases and the step size decreases, but with more computational complexity. For the transform part, the compression ratio increases significantly as the number of wavelet levels increases, because more quantization is achieved. (Fig. 4) shows an example of applying this method to the standard Lena grayscale image with different control parameters; it clearly shows how the number of neighbours and the step size affect the residual image and, at the same time, how the quantization value affects the image quality. (Tables 1-3) show the results for three images with two cases per image, where each case has determined values for both ngb and the step size. The best value for the quantization step Q is between 20 and 30; for Alpha, the best value is between 0.8 and 1.3, while Beta is best set to a value greater than Alpha and less than 1.8.
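The CR and PSNR figures reported in the tables are assumed here to follow the standard definitions; a minimal sketch of these assumed metrics for 8-bit grayscale images (the section's own metric equations were not recoverable):

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between the original and reconstructed 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```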

Conclusions
The proposed method is based on hybrid techniques and has a low error rate due to utilizing the best values related to the mean value per row. From the experimental results we can note the following: 1- The tested images have varying details: a complex grayscale image such as Lena, a less complex one such as Girl, and an image of moderate nature with a large smooth background such as Cameraman. The mean vector and indexes array hold significant information, so they are losslessly compressed; the diversity in the grayscale of the images has little effect on the size of these matrices.
2- The number of neighbours (limit) and the step size / increment (inc) are the compression control parameters, which affect the method's performance: more neighbours and a smaller step size lead to more compression (at a rate of about 30%). 3- The quantization parameters (Q, α and β) have a considerable effect on the method's performance: with high values of these parameters, more compression is gained (at a rate of about 15%), but with degradation in the reconstructed image quality (at a rate of about 10%).

Work limitations
Standardization: the proposed method produces promising performance results, but it is still complex and needs to be optimized in order to be comparable with the available standard techniques.