ORIGINAL ARTICLE
Year : 2019  |  Volume : 9  |  Issue : 4  |  Page : 211-220

A generalized ghost detection and segmentation method for double-joint photographic experts group compression


Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA

Date of Submission: 19-Apr-2019
Date of Decision: 27-May-2019
Date of Acceptance: 11-Jul-2019
Date of Web Publication: 23-Oct-2019

Correspondence Address:
Mrs. Sepideh Azarianpour
Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmss.JMSS_19_19

  Abstract 

Background: The versatility of digital photographs and the widespread use of image-processing tools have made image manipulation accessible and ubiquitous. Thus, there is an urgent need to develop digital image forensics tools, specifically for the joint photographic experts group (JPEG) format, which is the most prevalent format for storing digital photographs. Existing double-JPEG methods need improvement to reduce their sensitivity to the random grid shifts that are highly common in manipulation scenarios. A fully automatic pipeline, in which segmentation is followed by a classifier, is also still required. Methods: First, a low-pass filter (with some modifications) is used to distinguish between high-textured and low-textured areas. Then, using the inconsistency values between the quality factors, a grayscale image, called the ghost image, is constituted. To automate the whole method, a novel segmentation method is also proposed, which extracts the ghost borders. In the last step of the proposed method, using the Kolmogorov–Smirnov statistic, the distance between the two separated areas (the ghost area and the rest of the image) is calculated and compared with a predefined threshold to confirm forgery or authenticity. Results: In this study, a simple yet efficient algorithm to detect double-JPEG compression is proposed. This method reveals the subvisual differences in quality factor across different parts of the image. Afterward, forgery borders are extracted and used to assess an authenticity score. In our experiments, the average specificity of our segmentation method exceeds 92% and the average precision is 75%. Conclusion: The final binary classification results are compared with six state-of-the-art methods. According to several performance metrics, our method outperforms the previously proposed ones.

Keywords: Blind image forensics, double-joint photographic experts group compression, forgery detection, image authenticity, image tampering, quality-factor


How to cite this article:
Azarianpour S, Sadri AR. A generalized ghost detection and segmentation method for double-joint photographic experts group compression. J Med Signals Sens 2019;9:211-20

How to cite this URL:
Azarianpour S, Sadri AR. A generalized ghost detection and segmentation method for double-joint photographic experts group compression. J Med Signals Sens [serial online] 2019 [cited 2023 Jun 5];9:211-20. Available from: https://www.jmssjournal.net/text.asp?2019/9/4/211/269792


  Introduction


The versatility of digital cameras and cellphones has resulted in numerous multimedia files being transmitted over the web or stored on personal computers. Moreover, the development of image-processing technology makes image manipulation much easier. Thus, the media world today faces many challenges and doubts, and image authentication demands greater attention in legal and journalistic contexts.[1]

Digital image forensics (DIF) techniques address these problems and determine whether media files are original or forged.[1],[2],[3] In general, DIF approaches can be categorized into two groups: active forensics and passive forensics.[4] Active approaches, such as watermarking[5],[6],[7] and digital signatures,[8],[9] have been used to authenticate the credibility or ownership of digital media. These approaches require that some prior information be inserted into the image; in fact, the acquisition device must be equipped with the embedded security signal. The limitations of active approaches make them impractical and hard to use.[1] On the contrary, in passive or blind DIF methods, the forensic analyzer detects image tampering without any prior knowledge or protection.[4] Nirmalkar et al. categorized passive DIF methods, also known as nonintrusive DIF methods, into different groups.[10] Based on this categorization, an important group of passive DIF techniques is the format-based family. These methods scrutinize the artifacts and inconsistencies in different parts of an image and determine whether the image is spliced from regions with different compression levels. For this purpose, a wide variety of methods are available, based on blocking artifacts,[2],[3],[11] quality factor,[12],[13],[14] discrete cosine transform (DCT) coefficients,[2],[15] or quantization error in image compression formats such as the joint photographic experts group (JPEG) file format.[12],[16]

Image splicing is one of the most pervasive scenarios of image forgery, in which a region of the source JPEG image is cropped and moved into another target JPEG image to generate a composite forged image.[2] Then, the resulting composite image is compressed in JPEG file format one more time. The whole process causes double-JPEG compression. In the JPEG compression standard, the 8 × 8 DCT blocks are quantized by an 8 × 8 matrix known as quantization table.[17]

There are some recent studies that detect double-JPEG compression.[2],[3],[12],[14],[18],[19] The most relevant papers to our study are briefly discussed in the following.

Lukas and Fridrich[20] explored the statistical patterns in the histogram of JPEG coefficients: double peaks and missing values in the histogram of DCT coefficients are obvious symptoms of double-JPEG compression. In the same study,[20] an effective method for estimating the primary quantization table is also proposed.

In the study of Taimori et al.,[21] based on the generalized Benford's law and the distribution of the first digits of AC JPEG coefficients, singly compressed images can be distinguished from doubly compressed ones.

The study of Yang et al.[14] presented a four-class categorization of double-JPEG compression. Actually, the primary and the secondary compression grids can be shifted from each other,[2] and they can employ different or the same quantization tables.[14],[22] These four categories are named C1–C4 and are described below:

(C1) Aligned double-JPEG compression with different quantization matrix;

(C2) Aligned double-JPEG compression with the same quantization matrix;

(C3) Nonaligned double-JPEG compression with different quantization matrix;

(C4) Nonaligned double-JPEG compression with the same quantization matrix.

Yang et al.[14] also proposed a new method for identifying double-JPEG compression with the same quantization matrix, i.e., C2 and C4. As noted by Huang et al.,[22] C2 and C4 are more challenging. However, to the best of our knowledge, there is still no fully automatic method for C1 and C3, and the existing algorithms still need development and refinement before practical implementation.

One of the existing methods for C1 is the JPEG ghost method presented by Farid.[13] This method detects local manipulation based on the difference between the quality factors of two JPEG compressions.[13] In this method, the given image is compressed again in JPEG format. Under some constraints, which will be discussed in Section 2, by differencing this recompressed image and the image under inspection, the low-textured parts of the image become dark; this dark area is called the JPEG ghost.

There are some recent works related to the C1 and C3 categories.[23],[24],[25] A previous study[23] rearranged the quantized DCT coefficients with the same frequency and applied multiple high-pass filters to them to extract features; it then used principal component analysis (PCA) for dimensionality reduction in conjunction with a support vector machine (SVM) classifier. Another study[24] proposed a convolutional neural network-based method for double-JPEG compression detection in both the aligned and nonaligned cases. Dalmia and Okade[25] suggested a filtering procedure based on the DCT histogram for nonaligned double-JPEG compressed images, which reduces the noise effect related to misalignment of the DCT grids.

We previously extended Farid's algorithm to C1 and C3 through some postprocessing and iterations in Azarian-Pour et al.[17] In this article, a new family of methods for addressing the problem of double-JPEG compression with different quantization matrices (C1 and C3) is presented. A novel segmentation method, well suited to extracting the ghost borders, is also proposed. The main contributions of this article are as follows.

  • Proposing a broad and straightforward family of methods to discriminate low-textured parts of the image from the whole image
  • Eliminating two previous constraints in Farid's method; thereby, we do not necessarily require the primary quality factor to be greater than the secondary quality factor or the DCT grids to be aligned
  • Reducing the computational complexity for the ghost detection step (in comparison to Farid[13])
  • Automating the analysis of the difference image, using our proposed segmentation algorithm which automatically reveals the location of the tampered area.


The rest of this article is organized as follows. Section 2 contains the main idea of the paper. The proposed algorithm, which consists of three main steps, is presented in Section 3. The simulation results in Section 4 are devoted to comparing the performances of these approaches.


  Main Idea and Problem Statement


In the standard JPEG compression format, each color channel of a color image is first partitioned into 8 × 8 pixel blocks and then converted to frequency space using a two-dimensional (2D)-DCT. Afterward, each DCT coefficient c is quantized by a quantization step s:

cq = [c/s],   (1)

where [·] denotes the rounding function. Now consider a set of coefficients cdq which are double quantized by quantization steps s0 and s1, respectively (s0 > s1), so

cdq = [[c/s0] s0/s1].   (2)
It has been shown in Farid[13] that the quantization history of cdq (the values of s0 and s1) can be determined. Actually, Farid[13] proved that the energy of the difference between cdq and its requantized version, plotted versus a trial step s2, has a global minimum (zero) at s2 = s1 and a local minimum at s2 = s0. Now, assume an image I goes through the hypothetical scenario of [Figure 1]. In this situation, the whole image is called double quantized. However, there is a subtle difference between the foreground region and the background region. Since the final quality factor (the splicer's quantization standard) is determined by the background, which is assumed to be higher, the image is doubly compressed (at quality factors q0 and q1) in the forged regions, and it is singly compressed (at quality factor q1) in the original regions.
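This double-quantization behavior is easy to reproduce numerically. The following sketch (an illustrative Python example with synthetic coefficients and arbitrarily chosen quantization steps, not part of the original experiments) double quantizes a set of coefficients with steps s0 > s1 and then scans a trial step s2; the requantization energy is zero at s2 = s1 and shows a local dip at s2 = s0.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(0.0, 40.0, size=100_000)           # synthetic DCT coefficients

s0, s1 = 8, 5                                     # primary and secondary steps, s0 > s1
c_q = np.round(c / s0) * s0                       # single quantization (dequantized values)
c_dq = np.round(c_q / s1) * s1                    # double quantization

def requant_energy(values, s2):
    """Mean squared error between the values and their requantization by step s2."""
    return np.mean((values - np.round(values / s2) * s2) ** 2)

for s2 in range(2, 13):
    print(s2, round(requant_energy(c_dq, s2), 3))  # zero at s2 = s1, a local dip at s2 = s0
```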
Figure 1: Hypothetical scenario for double-quantized image, composed of two different quality images



It has been similarly shown in the previous literature[13] that, in the case q0 < q1, the compression history of I can be determined from the difference energy image

d(x, y, q2) = 1/3 Σcc (I(x, y, cc) − Iq2(x, y, cc))²,   (3)

where I(x, y, cc) denotes the intensity of the pixel (x, y) on color channel cc of the image I, in which cc ∈ {R, G, B}. The image Iq2 is the resaved version of image I at quality factor q2. The above equation is very sensitive to the image content, meaning that it is higher in detailed regions and lower in smoother regions. To compensate for the image content texture, we first average the difference over a spatial window,

δ(x, y, q2) = 1/(3w²) Σcc Σi Σj (I(x + i, y + j, cc) − Iq2(x + i, y + j, cc))²,  i, j ∈ {0, …, w − 1},   (4)

and then normalize the averaged values into the interval (0, 1), which results in the ghost image g:

g(x, y, q2) = (δ(x, y, q2) − minx,y δ) / (maxx,y δ − minx,y δ),   (5)

where the window size w is typically 16.[13]
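As a point of reference, Eqs. (3)–(5) can be prototyped as follows (a minimal Python sketch using Pillow for the resaving and a uniform filter for the w × w averaging; the function names and the trial quality value are illustrative assumptions, not the implementation used in this paper).

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def jpeg_resave(img, q2):
    """Recompress a PIL image in memory at quality factor q2."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q2)
    buf.seek(0)
    return Image.open(buf)

def jpeg_ghost(img, q2, w=16):
    """Ghost image g(x, y, q2) of Eqs. (3)-(5): windowed, normalized squared difference."""
    I = np.asarray(img.convert("RGB"), dtype=np.float64)
    I2 = np.asarray(jpeg_resave(img, q2).convert("RGB"), dtype=np.float64)
    d = ((I - I2) ** 2).mean(axis=2)                  # average over the three color channels
    delta = uniform_filter(d, size=w)                 # w x w spatial averaging
    return (delta - delta.min()) / (delta.max() - delta.min())   # normalize into [0, 1]

# Usage (illustrative): darker regions of g suggest a part already compressed near quality q2.
# g = jpeg_ghost(Image.open("suspect.jpg"), q2=70)
```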

Besides using the JPEG ghost, the inconsistencies in the quality factor of a double-quantized image can be determined using other techniques. The JPEG ghost detection method needs another JPEG quantization to yield Iq2. However, simpler techniques, for instance, low-pass filtering or window averaging, are able to create an image Î similar to Iq2. As will be discussed in Section 3, the image I − Î also contains information to separate low- and high-textured parts of the image.

The JPEG ghost detection method of Farid[13] has the advantage that it works for tampering detection in both high-textured and low-textured images. However, it still suffers from a number of limitations. One of the fundamental constraints is that this approach only works for C1 (see Section 1); in other words, it does not work for nonaligned DCT grid cases. Second, the method of Farid[13] needs a manual search for the ghost instead of an automatic one; as a result, the forensic specialist has to make relentless efforts to detect the forged area. As we mentioned in Section 1, a previous study[17] has overcome these limitations, so the ideas of Azarian-Pour et al.[17] are exploited here too.


  Proposed Method


The proposed method contains three main steps. In the first and foremost one, forgery footprints become visible: tampering details are revealed, and the affected region, known as the ghost area, is recognizable with the naked eye. However, to automate the method, in the second step we extract the precise border of this ghost area through a novel segmentation method. In the last step, the classifier assigns the suspicious image to the original or tampered group. In other words, in the second and third steps, we only analyze the information given by the first step.

First step: Proposed ghost detection method

To discriminate high-textured regions from low-textured regions of a given image, we first separate high-frequency areas from low-frequency ones. The schematic of the first step is illustrated in [Figure 2]. A low-pass filter (LPF) F is applied to image I, yielding Î. The image I and its smoothed version Î are therefore approximately the same, despite some differences in details. Incidentally, these small differences carry valuable information for revealing forgery. In this article, eight different smoothing filters are proposed and compared; the first two filters are
Figure 2: Schematic of the proposed ghost detection method







F1 = 1/9 [1 1 1; 1 1 1; 1 1 1],   (6)

F2 = 1/16 [1 2 1; 2 4 2; 1 2 1],   (7)

where F1 and F2 refer to unweighted and weighted 2D moving-average LPFs, described in the spatial domain. The next three filters are described in the frequency domain, by their transfer functions,

F3(u, v) = 1 / (1 + (D(u, v)/D0)^10),   (8)

F4(u, v) = 1 / (1 + (D(u, v)/D0)^4),   (9)

F5(u, v) = e^(−D²(u, v)/2D0²).   (10)

These filters are, respectively, the fifth-order Butterworth LPF (8), the second-order Butterworth LPF (9), and the Gaussian LPF (10). The term D(u, v) denotes the distance from the origin of the Fourier transform, and D0 = 25 is allocated as the cutoff frequency.

The next three filters originate from the discrete wavelet transform and are schematically described in [Figure 3]. First, a 2D wavelet transform is applied to the image. Then, the detail bands (H, V, and D) are discarded. Afterward, the reconstruction algorithm (inverse discrete wavelet transform) is applied only to the approximation band. Subsequently, the output is a smoothed version of image I of the same size. These filters are numbered F6, F7, and F8, corresponding to the wavelets db1, sym2, and sym8, respectively.
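A possible realization of these wavelet-based filters, assuming the PyWavelets package and a single image channel, is sketched below; it simply zeroes the detail bands before reconstruction, as described above, and is illustrative rather than the exact implementation used here.

```python
import numpy as np
import pywt

def wavelet_lowpass(channel, wavelet="db1"):
    """Smooth one image channel by discarding the H, V and D detail bands (F6-F8)."""
    cA, (cH, cV, cD) = pywt.dwt2(channel, wavelet)
    details = (np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD))
    smoothed = pywt.idwt2((cA, details), wavelet)
    return smoothed[:channel.shape[0], :channel.shape[1]]   # crop possible one-pixel padding

# F6, F7 and F8 correspond to wavelet="db1", "sym2" and "sym8", respectively.
```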
Figure 3: Schematic of low-pass filter F6 using two-dimensional wavelet decomposition, wavelet type db1. The other two, F7 and F8, can be implemented using wavelet types sym2 and sym8, respectively



We expect that after applying these filters, the low-textured parts of the image are affected much less than the higher-quality regions. Hence, the amount of I − Î at each pixel depends on its quality factor. Furthermore, as mentioned in a previous study,[13] to compensate for the image content texture, spatial averaging and normalization are also utilized (blocks "2D Moving Window Average" and "Normalization" in [Figure 2]). By substituting Iq2 with Î, the equations are similar to (4) and (5).

Eventually, after normalizing the energy of the difference image, low-textured parts of the image become dark, while higher-quality parts are brighter. In other words, the amount of g is a measure of the quality.
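Putting the first step together, the following sketch (illustrative Python using the Gaussian transfer function F5 and the same windowing and normalization as Eqs. (4) and (5); not the authors' code) produces the ghost image g from a single low-pass filtering of the image under inspection.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gaussian_lpf(channel, D0=25.0):
    """Frequency-domain Gaussian LPF, F5(u, v) = exp(-D^2(u, v) / (2 D0^2))."""
    M, N = channel.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2              # squared distance from the center
    H = np.exp(-D2 / (2.0 * D0 ** 2))
    F = np.fft.fftshift(np.fft.fft2(channel))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def proposed_ghost(I_rgb, D0=25.0, w=16):
    """Ghost image from the difference between I and its low-pass filtered version I_hat."""
    I = np.asarray(I_rgb, dtype=np.float64)
    I_hat = np.stack([gaussian_lpf(I[..., c], D0) for c in range(3)], axis=-1)
    d = ((I - I_hat) ** 2).mean(axis=2)
    delta = uniform_filter(d, size=w)                    # the "2D Moving Window Average" block
    return (delta - delta.min()) / (delta.max() - delta.min())   # the "Normalization" block
```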

Second step: Proposed multistage segmentation method

In this subsection, we explain our approach for extracting the ghost borders, which is an iterative three-stage algorithm. Then, in the next section, the segmentation result is discussed and a classifier decides whether to assign the image to the original or tampered group.

Our approach for extracting ghost borders is composed of three stages [Figure 4]:
Figure 4: Schematic of the proposed segmentation method



Stage 1

At first, the ghost image g is partitioned into N nonoverlapping k × k pixel blocks. The effect of the parameter k will be discussed later.

Stage 2

The 7D feature vector for each partition is calculated according to [Table 1]. Image gi denotes the ith k × k partition of the image g which is under inspection. The term pi is the PDF of the gray levels of gi, which is approximated by the histogram of the image gi, L is the number of gray levels, and j ∈ {1, ..., L}. Features 1–4 denote the 1st–4th cumulants, that is, average, variance, skewness, and kurtosis.[26] Features 5–7 belong to the gray-level difference statistics family; they represent, respectively, contrast, angular second moment, and entropy.[27]
Table 1: Extracting feature vector for each k × k block



The term xi denotes the 7D corresponding feature vector of the ith partition. The output of this stage will be vector set (x1, x2,..., xN), where N is the total number of partitions.

Stage 3

In this step, the "authenticity label" for each block is obtained using the k-means clustering algorithm. The output label is 1 for singly quantized or original regions and 0 for doubly quantized or fake regions. The label of the ith block is denoted by li ∈ {0, 1}. If the images I and g are of size m × n pixels, then the image l will be of size ⌈m/k⌉ × ⌈n/k⌉ pixels (⌈·⌉ denotes the ceiling function). Therefore, it is obvious that the parameter k affects the resolution of the image l.
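The three stages can be prototyped as below (an illustrative Python sketch in which the one-pixel displacement for the gray-level difference statistics, the histogram bin count, and the use of scikit-learn's KMeans are assumptions; the features follow [Table 1] and the ghost image g is assumed to be normalized into [0, 1]).

```python
import numpy as np
from sklearn.cluster import KMeans

def block_features(gi, L=256):
    """7-D feature vector of one k x k ghost-image block (cf. Table 1)."""
    x = gi.ravel().astype(np.float64)
    mu, var = x.mean(), x.var()
    std = np.sqrt(var) + 1e-12
    skewness = np.mean((x - mu) ** 3) / std ** 3
    kurtosis = np.mean((x - mu) ** 4) / std ** 4
    # Gray-level difference statistics for a one-pixel horizontal displacement.
    diff = np.abs(gi[:, 1:] - gi[:, :-1]).ravel()
    p, _ = np.histogram(diff, bins=L, range=(0.0, 1.0))
    p = p / max(p.sum(), 1)
    j = np.arange(L) / (L - 1)
    contrast = np.sum(j ** 2 * p)
    asm = np.sum(p ** 2)                                  # angular second moment
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([mu, var, skewness, kurtosis, contrast, asm, entropy])

def segment_ghost(g, k=32):
    """Stages 1-3: k x k partitioning, feature extraction and 2-means block labelling."""
    m, n = g.shape
    feats, pos = [], []
    for i in range(0, m, k):
        for j in range(0, n, k):
            feats.append(block_features(g[i:i + k, j:j + k]))
            pos.append((i // k, j // k))
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.array(feats))
    labels = np.zeros((int(np.ceil(m / k)), int(np.ceil(n / k))), dtype=int)
    for (bi, bj), lab in zip(pos, km.labels_):
        labels[bi, bj] = lab
    return labels, km.cluster_centers_
```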

Choosing parameter k

The parameter k should be chosen in a way that detects the ghost area while maintaining precision and resolution. Too large a value of k results in a low-resolution version of the labels [Figure 5]a; it also causes errors in detecting labels, since the detection error occurs when the forged area is comparatively smaller than the block dimensions. On the other hand, as depicted in [Figure 5]b, [Figure 5]c, [Figure 5]d, [Figure 5]e, other problems arise when choosing a small value of k. The scattering of 0 and 1 labels over the entire image confuses the forensic analyzer and causes errors in the classification step (the next step of the algorithm). Recalling the image splicing scenario, it is assumed that one part of an image is cut and pasted into another image. Hence, the ideal result of segmentation should have only one connected component. To this end, we develop a new iterative method for choosing the parameter k.
Figure 5: The first row: the results of applying different independent values of k. In panel (a), the accuracy of segmentation is very low; as the value of k decreases (panels [b-e]), a more accurate boundary is obtained, but scattering and holes begin to appear. The second row illustrates the procedure of modified k selection. In this approach, as k reduces (transition between panels [f-j]), a higher resolution of the tampered region can be achieved without any holes or artifacts



In the first place, k is initialized with a large value. In the following iterations, it is cut in half. At each iteration, segmentation is only applied to the edge blocks. We use the term edge block for each k × k partition that has at least one adjacent block with a different label at the previous iteration. Applying the procedure to edge blocks instead of all blocks plays a great role in reducing the computational complexity. Furthermore, the labels are not scattered anymore; rather, they are distinctly separated. In this way, the advantages of both cases (large and small values of k) are preserved. Moreover, since k has to be cut in half at each iteration, it should be of the form 2^p. [Figure 5]f, [Figure 5]g, [Figure 5]h, [Figure 5]i, [Figure 5]j illustrate the procedure above using the set of parameters k ∈ {64, 32, 16, 8, 4}. As seen in [Figure 5], this results in a more accurate segmentation than a noniterative procedure.
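A compact sketch of this iterative refinement is given below (illustrative Python building on the hypothetical block_features and segment_ghost helpers sketched above; it assumes the image dimensions are multiples of the initial block size and relabels edge blocks by their nearest cluster centroid).

```python
import numpy as np

def edge_blocks(labels):
    """Blocks with at least one 4-neighbour carrying a different label."""
    e = np.zeros_like(labels, dtype=bool)
    e[:-1, :] |= labels[:-1, :] != labels[1:, :]
    e[1:, :] |= labels[1:, :] != labels[:-1, :]
    e[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    e[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    return e

def iterative_segment(g, k_max=64, k_min=4):
    """Start with k = k_max, then repeatedly halve k and relabel only the edge blocks."""
    labels, centroids = segment_ghost(g, k_max)               # coarse initial labelling
    k = k_max
    while k > k_min:
        k //= 2
        labels = np.kron(labels, np.ones((2, 2), dtype=int))  # each block becomes 4 children
        for i, j in zip(*np.where(edge_blocks(labels))):      # refine edge blocks only
            f = block_features(g[i * k:(i + 1) * k, j * k:(j + 1) * k])
            labels[i, j] = np.argmin(np.linalg.norm(centroids - f, axis=1))
    return labels
```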

Third step: Proposed classification method

After performing the segmentation task, a criterion is required to conclusively confirm or refute the presence of forgery. Clearly distinct segmented regions can be decisive proof of tampering, while similarity between the two segmented areas invalidates the segmentation result and indicates the authenticity of the image. To this end, we investigate different statistical measures of the distance between two probability distributions, namely the Bhattacharyya distance,[28] the Kullback–Leibler divergence,[28] the symmetric Kullback–Leibler divergence,[28] and the Kolmogorov–Smirnov[13] statistic. In this step, the distance d between the two clusters of the image is calculated. Then, by comparing this criterion with a specific threshold Th, the algorithm reports the final authenticity evaluation. A larger distance shows an inherent difference between the two segmented areas and confirms the presence of forgery, whereas d < Th means that no forgery is detected and the segmentation result is unreliable. The one-dimensional Bhattacharyya distance between cluster 0 (the ghost segment) and cluster 1 (the rest of the image), with normal gray-level distributions N(μ0, σ0²) and N(μ1, σ1²), is given by

DB = (1/4) ln[(1/4)(σ0²/σ1² + σ1²/σ0² + 2)] + (1/4)(μ0 − μ1)²/(σ0² + σ1²).   (11)

Here, both regions are assumed to be normally distributed.

The Kullback–Leibler divergence and the symmetric Kullback–Leibler divergence are defined as

DKL(P0 ‖ P1) = Σu P0(u) ln(P0(u)/P1(u))   (12)

and

DSKL(P0, P1) = DKL(P0 ‖ P1) + DKL(P1 ‖ P0),   (13)

where P0(u) and P1(u) are the probability density functions of segments 0 and 1, respectively, which can be estimated by their corresponding histograms.

The Kolmogorov–Smirnov statistic is defined as

DKS = maxu |C0(u) − C1(u)|,   (14)

where C0(u) and C1(u) are the cumulative distribution functions of segments 0 and 1, respectively. The optimal threshold for each distance criterion will be calculated in Section 4.2. Using this optimal threshold and the best LPF, chosen in Section 4.1, the proposed algorithm is briefly illustrated in [Figure 6]. The first row assumes a composite forged photo combined from two original images in a black box; after performing the three steps, a forgery is detected. In the second row, an original image is analyzed, and at the end, its authenticity is approved.
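The four distance measures and the final thresholding step can be sketched as follows (illustrative Python operating on the gray levels of the two segments; the histogram bin count and the default threshold are placeholders, with the optimal thresholds reported in [Table 3]).

```python
import numpy as np

def bhattacharyya(x0, x1):
    """Eq. (11) for two (assumed normal) gray-level populations."""
    m0, m1, v0, v1 = x0.mean(), x1.mean(), x0.var(), x1.var()
    return 0.25 * np.log(0.25 * (v0 / v1 + v1 / v0 + 2)) + 0.25 * (m0 - m1) ** 2 / (v0 + v1)

def kl_divergence(x0, x1, bins=64):
    """Eq. (12), with the PDFs estimated by histograms."""
    p0, edges = np.histogram(x0, bins=bins, range=(0.0, 1.0))
    p1, _ = np.histogram(x1, bins=edges)
    p0 = p0 / p0.sum() + 1e-12
    p1 = p1 / p1.sum() + 1e-12
    return np.sum(p0 * np.log(p0 / p1))

def symmetric_kl(x0, x1):
    """Eq. (13)."""
    return kl_divergence(x0, x1) + kl_divergence(x1, x0)

def kolmogorov_smirnov(x0, x1):
    """Eq. (14): maximum gap between the two empirical CDFs."""
    u = np.sort(np.concatenate([x0, x1]))
    c0 = np.searchsorted(np.sort(x0), u, side="right") / x0.size
    c1 = np.searchsorted(np.sort(x1), u, side="right") / x1.size
    return np.abs(c0 - c1).max()

def classify(g, pixel_labels, Th=0.1):
    """pixel_labels: block labels upsampled to image size; report 'forged' when d > Th."""
    x0, x1 = g[pixel_labels == 0].ravel(), g[pixel_labels == 1].ravel()
    return "forged" if kolmogorov_smirnov(x0, x1) > Th else "original"
```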
Figure 6: The procedure of the proposed algorithm for forgery detection on a sample tampered image (upper row) and a sample original image (bottom row)




  Simulation Results


For our simulations, we have utilized the following four standard databases, all of which include original shots of landscapes, people, man-made objects, wildlife, and monuments, both indoors and outdoors:

  • Uncompressed Color Image Database (UCID)[29] contains 1338 uncompressed raw images of size 512 × 384 pixels, in Tagged Image File Format (TIFF), captured by a Minolta Dimage 5 digital color camera, available for download[30]
  • McGill Calibrated Color Image Database, also known as CCID,[31] encompasses 1152 images in nine different categories. These images were taken by two Nikon Coolpix 5700 digital cameras, called "Pippin" and "Merry." The images in this database are either the original-size 1920 × 2560 pixel images or scaled-down versions at 786 × 576 pixels, with both TIFF and JPEG extensions, available for download.[32] We utilize the full-size TIFF format. Moreover, we removed the 56 repeated items from this database, yielding a total of 1096 unique images.
  • Never-compressed Color Image Database, in this article called NCID,[33] consists of 5000 original TIFF raw format digital images of size 640 × 480 pixels, lossless true color and never compressed, bit depth of 24, available for download[34]
  • CASIA Tampered Image Detection Evaluation Database version 2.0 contains 7491 authentic and 5123 tampered color images. Their size varies from 240 × 160 to 900 × 600 pixels. Both uncompressed and JPEG compressed images with different quality factors exist in this database, available for download.[35] We ignore the whole tampered set, because we want to monitor the process of image splicing and be aware of their primary quality factors, for further evaluations.


Ultimately, 11704 TIFF and 3221 JPEG authentic images altogether have been exploited in order to create the singly and doubly compressed image sets. According to the image forgery process displayed in [Figure 1], JPEG images of different qualities are required to be spliced together. For this purpose, in the first place, all the TIFF images have been compressed in JPEG format at quality factors QF ∈ {50, 55, ..., 95}, which results in ten different quality sets.

For creating manipulated images, the background is chosen from the group q1 ∈ QF and the foreground is chosen from the group q0 ∈ QF; the foreground is cropped with a random mask and inserted, as the tampered region, into the background. Thus, 100 different spliced sets are constituted, identified by their (q0, q1) pairs. The crop mask employed here is used later as the Ground Truth (GT) for evaluating the segmentation results.
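The splicing procedure of [Figure 1] can be reproduced along the following lines (an illustrative Python sketch using Pillow; the file names, the rectangular mask geometry, and the temporary files are placeholders rather than the exact dataset-generation code).

```python
import numpy as np
from PIL import Image

def make_spliced(bg_tiff, fg_tiff, q0, q1, out_path="spliced.jpg"):
    """Foreground compressed at q0, pasted into a q1 background, then resaved at q1."""
    bg = Image.open(bg_tiff).convert("RGB")
    fg = Image.open(fg_tiff).convert("RGB").resize(bg.size)

    fg.save("_fg_q0.jpg", quality=q0)                 # primary compression of the foreground
    bg.save("_bg_q1.jpg", quality=q1)                 # background at the final quality factor

    fg_q0 = np.asarray(Image.open("_fg_q0.jpg"))
    bg_q1 = np.asarray(Image.open("_bg_q1.jpg"))

    # Random rectangular crop mask, kept as the Ground Truth for the segmentation evaluation.
    rng = np.random.default_rng(0)
    h, w = bg_q1.shape[:2]
    y, x = int(rng.integers(0, h // 2)), int(rng.integers(0, w // 2))
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + h // 4, x:x + w // 4] = True

    composite = bg_q1.copy()
    composite[mask] = fg_q0[mask]
    Image.fromarray(composite).save(out_path, quality=q1)   # secondary (double) compression
    return mask
```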

All experiments were carried out in MATLAB R2016a using Intel® Core™ i5-2670QM (3.10GHz) processor and 4GB RAM.

Experimental results of ghost detection and segmentation steps

In addition to the eight smoothing filters defined in Section 3.1, the JPEG recompressor used in a previous study[13] is utilized here too, for comparing performances. There is no limitation on our proposed filters, but for Farid's method, the following two constraints are necessary:

  • The parameter of double quantization, q2, should be almost equal to the lower quality factor to discriminate the ghost
  • The DCT grids of the tampered region of I and the JPEG compressor should be aligned.


Although we have no prior knowledge about q2 and possible shifting of the DCT grids, we use the approaches of Azarian-Pour et al.[17] in order to overcome these limitations. Ultimately, nine different smoothing filters are compared and the best one is employed in the next section, in which we choose the best distance criterion and analyze the result of the classifier. To this end, 500 forged images are randomly chosen from the manipulated dataset. Each manipulated image is processed by the ghost detection step using the nine different smoothing filters. Afterward, the iterative segmentation method is applied to the ghost output.

In each case, by comparing the final segmentation result with the GT, the numbers of true positives (TPs), true negatives (TNs), false positives, and false negatives are obtained. Reporting these raw values for each image, or only in averaged form, is not very informative on its own. Instead, we use the accuracy, precision, specificity, and sensitivity defined in the study of Theodoridis and Koutroumbas.[28]
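For completeness, these four criteria can be computed from the pixel-wise confusion counts as in the small helper below (an illustrative sketch in which the positive class is the tampered region).

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Accuracy, precision, specificity and sensitivity from binary masks (1 = tampered)."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / max(tp + fp, 1),
        "specificity": tn / max(tn + fp, 1),
        "sensitivity": tp / max(tp + fn, 1),
    }
```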

These criteria are averaged over the 500 above-mentioned images and displayed in [Table 2], along with the average run time in terms of seconds.
Table 2: Average segmentation results for nine different filters



In the results of [Table 2], the best values of accuracy, specificity, sensitivity, and precision are displayed in boldface. It is seen that the Gaussian LPF shows the best performance overall. Thus, for further simulations, F5(u, v) = e^(−D²(u, v)/2D0²), with D0 = 25, is used as the smoothing filter. [Figure 7] shows two examples of segmentation of the forged area, using F5 as the smoothing filter.
Figure 7: Final segmentation results on two different forged images. (a) Spliced image using quality factor 85 (forged area) and 90 (background image). (b) Green area: singly compressed, red area: doubly compressed. (c) Spliced image using 95 (forged area) and 85 (background image). (d) Green area: singly compressed, red area: doubly compressed



Experimental results of the classification step

The final classification step is a simple thresholding of the distance measure. Thus, the value of the threshold Th for each type of distance must be determined. For this purpose, we applied steps 1 and 2 of the proposed method to 10,000 original and 10,000 tampered images, from which the segmented areas are obtained. Then, using formulae (11–14), the distance between the two areas is calculated. We set Th so as to minimize the classification error rate on this set of images, which occurs when P(e|Original) = P(e|Forged). The values of the optimum Th and the minimum resulting error rate are displayed in [Table 3].
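This equal-error threshold can be found with a simple sweep, as sketched below (illustrative Python; d_orig and d_forged stand for the distance values measured on the original and tampered training images, respectively).

```python
import numpy as np

def equal_error_threshold(d_orig, d_forged, num=1000):
    """Pick Th where P(e | Original) and P(e | Forged) are (nearly) equal."""
    lo = min(d_orig.min(), d_forged.min())
    hi = max(d_orig.max(), d_forged.max())
    best_th, best_gap = lo, np.inf
    for th in np.linspace(lo, hi, num):
        p_e_orig = np.mean(d_orig > th)          # originals wrongly declared forged
        p_e_forged = np.mean(d_forged <= th)     # forgeries wrongly declared original
        gap = abs(p_e_orig - p_e_forged)
        if gap < best_gap:
            best_th, best_gap = th, gap
    return best_th
```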
Table 3: Optimum threshold and minimum error rate for each distance measure



Due to the superior performance and accuracy of the Kolmogorov–Smirnov statistic [Table 3], the classification task is performed using this criterion.

For reporting the final results, a comprehensive simulation is performed. First, 10,000 authentic images are randomly chosen from the ten different quality groups, each one containing 1000 images. On the other hand, 10,000 tampered images are chosen from the 10 × 10 = 100 different quality groups, each one containing 100 images. The result of simulation is reported in [Table 4]. For the authentic images, the percentage of true detection (TN rate) versus quality factor is displayed and for the tampered ones, the percentage of true detection (TP rate) versus both q0 and q1 is depicted.
Table 4: Percentage of true detection rate, for different quality images



Moreover, for better illustration, the sensitivity for the tampered images is plotted versus Δq = |q0 − q1| in [Figure 8]. At the end, the F1 score, defined in Theodoridis and Koutroumbas[28] as
Figure 8: Sensitivity of composite forged images versus Δq = |q0 − q1|





F1 = 2 × (precision × sensitivity) / (precision + sensitivity),

is calculated, respectively, for the UCID, NCID, CCID, and CASIA databases. F1 is a measure of the accuracy of a test; it considers both the precision and the sensitivity of the test to compute the score.

The final results of our proposed method are shown in [Table 5], compared to six state-of-the-art methods. The results of Li's method,[36] Milani's method,[37] Dong's method,[38] and Taimori's method[39] were previously quoted in Taimori et al.[39] In summary, the method of a previous study[36] is based on extracting alternating current (AC) mode features from the first digits of DCT coefficients, inspired by Benford's law. Milani et al.[37] used a highly accurate approach based on the same feature set, using a Markov transition probability matrix. Dong et al.[38] used quantized AC modes based on texture features; they exploited the PCA algorithm for dimensionality reduction and an SVM for training the classifier. The method of Taimori et al.[39] also used PCA and SVM for feature selection and classification, respectively. We also add the results of Azarian-Pour et al.,[17] which is based on a method similar to Farid's[13] and is modified to be compatible with both aligned and nonaligned DCT grids. Finally, the performance of Yang et al.[23] has been compared, which is based on DCT coefficients of the same frequency that exhibit the direction effect; again, PCA and SVM are exploited for feature selection and classification. As can be seen in [Table 5], in most cases our method outperforms the other methods, particularly in terms of sensitivity.
Table 5: Performance metrics of our proposed method, compared to five other methods




  Conclusion


In composite tampered images, discrepancies between different parts of the image can often lead forensic specialists to detect forgery. A low quality factor in the JPEG compression scheme distinctly affects the high-frequency texture of the image. By tracing these inconsistencies, we are able to estimate which regions do not originally belong to the image under inspection. In this article, we proposed a fully automatic method for detecting JPEG recompression based on separating low-frequency parts from high-frequency ones. It has been demonstrated that after applying an LPF, the low-textured parts of the image are affected less than high-quality regions. Hence, the difference (ghost image) at each pixel reveals inconsistencies in the quality factor. Motivated by this observation, the new algorithm provides a procedure for constructing a forensic analysis method for digital images which does not fail in the nonaligned DCT grid cases. We also proposed a new segmentation method; although we have used it for ghost detection purposes, this technique can be applied to segmentation tasks in other fields.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.


  Biographies




Sepideh Azarianpour is a 2nd-year PhD student in the biomedical engineering department at Case Western Reserve University, Ohio, USA. She joined the Center of Computational Imaging and Personalized Diagnostics (CCIPD) in 2018. She earned her M.Sc. degree in electrical engineering at Sharif University of Technology, Tehran, Iran, and her B.Sc. degree in electrical engineering at Isfahan University of Technology, Isfahan, Iran. Her research focuses on developing, evaluating, and applying novel quantitative methods for identifying sub-visual image features and employing them in different artificial intelligence applications.

Email: [email protected]



Amir Reza Sadri was born in Isfahan, Iran. He received the B.Sc. degree in electrical engineering from the Department of Electrical Engineering, University of Kashan, Kashan, Iran, and the M.Sc. degree in electrical engineering from Isfahan University of Technology, Isfahan, Iran, in 2012. His research interests include medical image analysis, system identification, and software development.

Email: [email protected]

 
  References

1. Farid H. Image forgery detection. IEEE Signal Processing Magazine 2009;26:16-25.
2. Qu Z, Luo W, Huang J. A Convolutive Mixing Model for Shifted Double JPEG Compression with Application to Passive Image Authentication. In: Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing; 2008. p. 1661-4.
3. Bianchi T, Piva A. Detection of nonaligned double JPEG compression based on integer periodicity maps. IEEE Transactions on Information Forensics and Security 2012;7:842-8.
4. Li CT. Emerging Digital Forensics Applications for Crime Detection, Prevention, and Security. Hershey, PA, USA: IGI Global; 2013.
5. Eggers JJ, Girod B. Blind Watermarking Applied to Image Authentication. Vol. 3. In: Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing; 2001. p. 1977-80.
6. Cox I, Miller M, Bloom J, Fridrich J, Kalker T. Digital Watermarking and Steganography. Burlington, MA, USA: Morgan Kaufmann; 2007.
7. Chandra M, Pandey S, Chaudhary R. Digital Watermarking Technique for Protecting Digital Images. In: Proceedings International Conference on Computer Science and Information Technology; 2010. p. 226-33.
8. Lou DC, Liu JL. Fault resilient and compression tolerant digital signature for image authentication. IEEE Transactions on Consumer Electronics 2000;46:31-9.
9. Schneider M, Chang SF. A Robust Content Based Digital Signature for Image Authentication. In: Proceedings IEEE International Conference on Image Processing; 1996. p. 227-30.
10. Nirmalkar N, Kamble S, Kakde S. A Review of Image Forgery Techniques and Their Detection. In: Proceedings International Conference on Innovations in Information, Embedded and Communication Systems; 2015. p. 1-5.
11. Bianchi T, Piva A. Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Transactions on Information Forensics and Security 2012;7:1003-17.
12. Galvan F, Puglisi G, Bruna AR, Battiato S. First quantization matrix estimation from double compressed JPEG images. IEEE Transactions on Information Forensics and Security 2014;9:1299-310.
13. Farid H. Exposing digital forgeries from JPEG ghosts. IEEE Transactions on Information Forensics and Security 2009;4:154-60.
14. Yang J, Xie J, Zhu G, Kwong S, Shi YQ. An effective method for detecting double JPEG compression with the same quantization matrix. IEEE Transactions on Information Forensics and Security 2014;9:1933-42.
15. Bianchi T, Rosa AD, Piva A. Improved DCT Coefficient Analysis for Forgery Localization in JPEG Images. In: Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing; 2011. p. 2444-7.
16. Li B, Ng TT, Li X, Tan S, Huang J. Revealing the trace of high-quality JPEG compression through quantization noise analysis. IEEE Transactions on Information Forensics and Security 2015;10:558-73.
17. Azarian-Pour S, Babaie-Zadeh M, Sadri AR. An Automatic JPEG Ghost Detection Approach for Digital Image Forensics. In: Proceedings IEEE Iranian Conference on Electrical Engineering; 2016. p. 1645-9.
18. Pevny T, Fridrich J. Detection of double-compression in JPEG images for applications in steganography. IEEE Transactions on Information Forensics and Security 2008;3:247-58.
19. Niu Y, Li X, Zhao Y, Ni R. An enhanced approach for detecting double JPEG compression with the same quantization matrix. Signal Process 2019;76:89-96. Available from: http://www.sciencedirect.com/science/article/pii/S0923596518309196. [Last accessed on 2019 Sep 10].
20. Lukas J, Fridrich J. Estimation of Primary Quantization Matrix in Double Compressed JPEG Images. In: Proceedings Digital Forensic Research Workshop; 2003. p. 5-8.
21. Taimori A, Razzazi F, Behrad A, Ahmadi A, Babaie-Zadeh M. A Proper Transform for Satisfying Benford's Law and its Application to Double JPEG Image Forensics. In: Proceedings IEEE International Symposium on Signal Processing and Information Technology; 2012. p. 240-4.
22. Huang F, Huang J, Shi YQ. Detecting double JPEG compression with the same quantization matrix. IEEE Transactions on Information Forensics and Security 2010;5:848-56.
23. Yang P, Ni R, Zhao Y. Double JPEG Compression Detection by Exploring the Correlations in DCT Domain. In: 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE; 2018. p. 728-32.
24. Barni M, Bondi L, Bonettini N, Bestagini P, Costanzo A, Maggini M, et al. Aligned and non-aligned double JPEG detection using convolutional neural networks. J Vis Commun Image Represent 2017;49:153-63. Available from: http://www.sciencedirect.com/science/article/pii/S104732031730175X. [Last accessed on 2019 Sep 8].
25. Dalmia N, Okade M. Robust first quantization matrix estimation based on filtering of recompression artifacts for non-aligned double compressed JPEG images. Signal Process 2018;61:9-20. Available from: http://www.sciencedirect.com/science/article/pii/S0923596517302084. [Last accessed on 2019 Sep 10].
26. Comon P, Jutten C. Handbook of Blind Source Separation: Independent Component Analysis and Applications. Burlington, MA, USA: Academic Press; 2010.
27. Costaridou L. Medical Image Analysis Methods. Boca Raton, FL, USA: CRC Press; 2005.
28. Theodoridis S, Koutroumbas K. Pattern Recognition. 4th ed. Burlington, MA, USA: Academic Press; 2008.
29. Schaefer G, Stich M. UCID: An uncompressed color image database. In: Electronic Imaging 2004, San Jose, California, USA: International Society for Optics and Photonics; 2003. p. 472-80.
30. Available from: http://homepages.lboro.ac.uk/cogs/datasets/ucid/ucid.html. [Last accessed on 2019 Sep 8].
31. Olmos A, Kingdom FA. A biologically inspired algorithm for the recovery of shading and reflectance images. Perception 2004;33:1463-73.
32. Available from: http://tabby.vision.mcgill.ca. [Last accessed on 2019 Sep 10].
33. Liu Q, Sung A, Qiao M. A method to detect JPEG-based double compression. In: 8th International Symposium on Neural Networks, ISNN 2011, Guilin, China; 2011. p. 466-76.
34. Available from: http://www.shsu.edu/~qxl005/New/Downloads/never_compressed_images.zip. [Last accessed on 2019 Sep 10].
35. Available from: http://forensics.idealtest.org/. [Last accessed on 2019 Sep 9].
36. Li B, Shi YQ, Huang J. Detecting Doubly Compressed JPEG Images by Using Mode Based First Digit Features. In: Proceedings IEEE International Workshop on Multimedia Signal Processing; 2008. p. 730-5.
37. Milani S, Tagliasacchi M, Tubaro S. Discriminating Multiple JPEG Compression Using First Digit Features. In: Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing; 2012. p. 2253-6.
38. Dong L, Kong X, Wang B, You X. Double Compression Detection Based on Markov Model of the First Digits of DCT Coefficients. In: Proceedings IEEE International Conference on Image and Graphics; 2011. p. 234-7.
39. Taimori A, Razzazi F, Behrad A, Ahmadi A, Babaie-Zadeh M. Quantization-unaware double JPEG compression detection. J Math Imaging Vis 2016;54:269-86.

