ORIGINAL ARTICLE
Year: 2014 | Volume: 4 | Issue: 3 | Page: 223-230
A New Seeded Region Growing Technique for Retinal Blood Vessels Extraction
Atefeh Sadat Sajadi, Seyed Hojat Sabzpoushan
Department of Biomedical Engineering, School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
Date of Web Publication: 19-Sep-2019
Correspondence Address: Atefeh Sadat Sajadi, Department of Biomedical Engineering, School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/2228-7477.137841
Distribution of retinal blood vessels (RBVs) in retinal images plays an important role in the prevention, diagnosis, monitoring, and treatment of diseases such as diabetes, high blood pressure, and heart disease. Therefore, detecting the exact location of RBVs is very important for ophthalmologists. One of the frequently used techniques for extracting these vessels is region growing-based segmentation. In this paper, we propose a new region growing (RG) technique for RBV extraction, called cellular automata-based segmentation (CAS). RG techniques often require manual seed point selection, that is, human intervention; however, due to the complex structure of the vessels in retinal images, tracking them manually is very difficult. Therefore, to make the proposed technique fully automatic, we use an automatic seed point selection method. The proposed RG technique was tested on the Digital Retinal Images for Vessel Extraction database for three different initial seed sets and evaluated against the manual segmentations of the retinal images available in this database. Three quantitative criteria, accuracy, true positive rate, and false positive rate, were used to evaluate the method. Visual scrutiny of the segmentation results and the quantitative criteria show that using cellular automata to extract the blood vessels is promising. The important point, however, is that correct initial seeds have a decisive effect on the final segmentation results.
Keywords: Automatic seed point selection, cellular automata, image segmentation, region growing, retinal blood vessels
How to cite this article: Sajadi AS, Sabzpoushan SH. A New Seeded Region Growing Technique for Retinal Blood Vessels Extraction. J Med Signals Sens 2014;4:223-30
Introduction
Segmentation, the separation of structures of interest from the background and from each other, is an essential analysis function for which numerous algorithms have been developed in the field of image processing. In medical imaging, automated delineation of different image components is used for analyzing anatomical structure and tissue types, the spatial distribution of function and activity, and pathological regions. Since segmentation requires classification of pixels, it is often treated as a pattern recognition (PR) problem and addressed with related techniques. Especially in medical imaging, where variability in the data may be high, PR techniques that provide flexibility and convenient automation are of special interest. [1]
Pattern recognition techniques deal with the automatic detection or classification of objects or features. Region growing (RG) (or grow-cut) is one of the PR techniques and is a frequently used segmentation method. However, it has a main disadvantage: it often requires user-supplied seed points. [2] A seed point is the starting point for RG, and its selection is very important for the segmentation result; if a seed point is selected outside the region of interest (ROI), the final segmentation result will certainly be incorrect. [3] In order to make RG segmentation fully automatic, it is necessary to develop an automatic and accurate seed point selection method.
Several papers have proposed RG techniques for image segmentation. Some of them proposed automatic algorithms for seed selection, some considered semi-automatic approaches, and others selected the seeds manually.
Several studies have applied automatic approaches to seed selection. For example, Shan et al. [3] developed a new automatic seed point selection (ASPS) method for ultrasound images. This method is composed of five steps: (1) reduce speckle, (2) select an iterative threshold, (3) delete the boundary-connected regions, (4) rank the regions, and (5) determine the seed point. They compared their method with the ASPS method of Madabhushi and Metaxas, [4] in which, after several preprocessing steps, a seed point score formula is used to evaluate a set of randomly selected points and the point with the highest score is taken as the seed point. Gómez et al. [5] introduced a new automatic seeded RG algorithm, called automatic seeded region growing-IB1, that performs the segmentation of color and multispectral images; for automatic seed selection, the histogram of each band is analyzed to obtain a set of representative pixel values, and the seeds are generated from all the image pixels with representative gray values. Three methods have been proposed to generate seeds automatically. [6] The first method partitions the image into a set of rectangular regions of fixed size and selects the centers of these rectangles as the seeds. The second finds the edges of the image and obtains the initial seeds from the centroids of the color edges. The third method extends the second to deal with noise by applying an image smoothing filter. Feng et al. [7] proposed an automatic RG algorithm for video object segmentation that features automatic selection of seeds, so that the entire segmentation requires no action from human users; to select the seeds automatically for RG, they used a competitive learning neural network to perform the initial segmentation and set the skeletons of the foreground and background as the seeded regions for object and background, respectively. Shih and Cheng [8] proposed an automatic seeded RG algorithm for color image segmentation.
Dalmau and Alarcon [9] proposed a segmentation strategy using cellular automata (CA) with an automatic thresholding scheme for seed selection to extract the retinal blood vessels (RBVs), called matched filter with cellular automata (MFCA). Bhuiyan et al. [10] proposed an adaptive RG technique to extract the vessels' edges and segment them. Palomera-Pérez et al. [11] also presented a multi-scale feature extraction and RG technique for RBVs.
Gao et al. [12] and Hamamci et al. [13] considered cellular automata-based segmentation (CAS) with semi-automatic selection of seeds. Gao et al. [12] proposed an efficient three-dimensional method for the medical tissue extraction task; in the three-dimensional view, the user only needs to specify certain two-dimensional image pixels as seeds in the multi-planar reformation. Hamamci et al. [13] utilized the following seed selection procedure: (1) draw a line along the maximum visible diameter of the tumor, (2) crop the line by 15% from each end and thicken it to 3 pixels wide to obtain the foreground seeds, (3) choose the bounding box of the sphere whose diameter is 20% longer than the line as the volume of interest (VOI), and (4) use the 1-voxel-wide border of this VOI as the background seeds.
There are also several examples of CAS techniques [14],[15],[16],[17],[18] that start the segmentation process with manual seed selection.
This paper is organized as follows. First, the proposed method for RBV extraction, including automatic seeding and the CAS technique for the final extraction, is described. Then, the experimental results are presented and compared with manually extracted RBVs. Finally, the discussion and conclusion are provided.
Methods
[Figure 1] shows the block diagram of the proposed RBVs extraction method. According to this figure, an ASPS algorithm is first used to determine starting points for the segmentation step. Next, the CAS technique is applied, and pixels are classified as vessel or nonvessel (background) pixels.
Figure 1: Block diagram of the proposed retinal blood vessels extraction technique
Automatic Seed Point Selection
For selecting seed points automatically, two main criteria were considered.
Value Similarity Condition
A seed pixel candidate must have a similarity higher than a threshold value.
Spatial Proximity Condition
A seed pixel candidate must have a maximum distance to its neighbors less than a threshold value.
Each pixel that simultaneously satisfies conditions 1 and 2 is selected as a seed point. Connected components of seed pixels are taken as one seed; therefore, a selected seed can be a single pixel or a region of several pixels. [8]
For example, a specific measure of the value similarity condition (VSC) between two pixels is the difference between their gray values, and a specific measure of the spatial proximity condition (SPC) is the Euclidean distance. The variance of the gray values in a region and the compactness of the region can also be used as measures of the VSC and SPC of pixels within a region, respectively. [19]
Following the work of Shih and Cheng, [8] we considered two measures for criteria 1 and 2: the normalized standard deviation (NSD) for the VSC and the maximum relative Euclidean distance (MRED) for the SPC.
A retinal image is specified in the RGB color space. This color space is suitable for color display but, because of the high correlation among its R, G, and B components, it is not good for color analysis. Furthermore, distance in the RGB color space does not represent perceptual difference on a uniform scale. [8] Therefore, we calculated these two measures in the YCbCr color space, where Y is the luminance and Cb and Cr carry the color information: Cb is the difference between the blue component and a reference value, and Cr is the difference between the red component and a reference value. [20] In this color space, the color difference perceived by humans is expressed directly by the Euclidean distance. The RGB color space can be transformed to the YCbCr color space using Eq. 1: [20]
$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.000 \\ 112.000 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$
where R, G, and B are normalized to [0, 1] (the 8-bit ITU-R BT.601 form given in [20]).
Note that choosing a low value for this threshold means that a smaller number of pixels will be classified as seeds and some parts of the vessels may be missed; conversely, a high value means that a larger number of pixels will be classified as seeds and different regions may become connected.
Regarding the retinal images, note that not all pixels should be considered in the seed selection process; in particular, the pixels belonging to the dark surrounding region of the retinal image must be excluded. Therefore, a mask determining the valid pixels, that is, the ROI of the retinal image, has to be generated.
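To make the seed-selection step concrete, the following Python sketch combines the pieces described above: the RGB-to-YCbCr conversion of Eq. 1, the VSC based on a normalized standard deviation compared against Otsu's threshold, the SPC based on the maximum relative Euclidean distance to the eight neighbors, and the ROI mask. This is a minimal sketch rather than the authors' implementation; the 3 × 3 neighborhood, the "similarity = 1 − NSD" form of the VSC, and the NumPy/SciPy/scikit-image helpers are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def rgb_to_ycbcr(rgb):
    """Convert an RGB image with channels in [0, 1] to YCbCr (Eq. 1, ITU-R BT.601 form)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 16.0 + 65.481 * r + 128.553 * g + 24.966 * b
    cb = 128.0 - 37.797 * r - 74.203 * g + 112.000 * b
    cr = 128.0 + 112.000 * r - 93.786 * g - 18.214 * b
    return y, cb, cr

def select_seeds(rgb, roi_mask, mred_threshold=0.04):
    """Automatic seed point selection using the VSC (NSD vs. Otsu) and SPC (MRED) conditions."""
    y, cb, cr = rgb_to_ycbcr(rgb)

    # Condition 1 (VSC): similarity = 1 - normalized standard deviation of the 3x3
    # neighborhood of Y; a seed candidate must be more similar than Otsu's threshold [21].
    local_std = ndimage.generic_filter(y, np.std, size=3)
    similarity = 1.0 - local_std / (local_std.max() + 1e-12)
    vsc = similarity > threshold_otsu(similarity[roi_mask])

    # Condition 2 (SPC): the maximum relative Euclidean distance (in Cb/Cr) between a
    # pixel and its eight neighbors must stay below the MRED threshold.
    # (np.roll wraps around the image border; the wrap is ignored here for brevity.)
    norm = np.sqrt(y ** 2 + cb ** 2 + cr ** 2) + 1e-12
    mred = np.zeros_like(y)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            dcb = cb - np.roll(np.roll(cb, dy, axis=0), dx, axis=1)
            dcr = cr - np.roll(np.roll(cr, dy, axis=0), dx, axis=1)
            mred = np.maximum(mred, np.sqrt(dcb ** 2 + dcr ** 2) / norm)
    spc = mred < mred_threshold

    seeds = vsc & spc & roi_mask
    # Connected components of seed pixels are treated as single seeds.
    labeled_seeds, num_seeds = ndimage.label(seeds)
    return labeled_seeds, num_seeds
```

Assigning the resulting seed components to the vessel (label '1') or background (label '−1') class, as in [Figure 3], is a further step not shown in this sketch.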
Cellular Automata-Based Segmentation
Ulam and von Neumann originally conceived CA in the 1940s to provide a formal framework for investigating the behavior of complex, extended systems. CAs are dynamical systems in which space and time are discrete. [12]
A CA is described as a triple A = (S, N, δ), where S is a nonempty state set, N is the neighborhood system, and δ: S^N → S is the evolution rule (transition function).
Nonempty State Set S
Let P be the set of sites (cells) of a discrete lattice L, and let p ∈ P. In our case, the cell state S_p is a triplet (l_p, θ_p, C_p): the label l_p of the current cell, the strength θ_p of the current cell, and the cell feature vector C_p defined by the image. In general, θ_p ∈ [0, 1].
A digital image is a two-dimensional array of n × m pixels. An unlabeled image may be considered as a particular configuration state of a CA, where the cellular space is defined by the n × m array given by the image, and the initial states for every p ∈ P are set to:
$$l_p = 0, \qquad \theta_p = 0, \qquad C_p = I_p \quad \text{for all } p \in P,$$
where $I_p$ is the feature vector (pixel value) of cell $p$ given by the image.
When the user starts the CAS by specifying the segmentation seeds, the labels of the seeded cells are set accordingly, and their strength is set to the seed strength value; this defines the initial state of the CA. The segmentation process then grows the selected seeds, trying to occupy the entire region of interest. The CAS calculations continue until the CA converges to a stable configuration.
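The evolution rule δ itself is not reproduced in this excerpt, so the sketch below follows the standard GrowCut rule of Vezhnevets and Konouchine, [16] on which the CAS approaches cited above are built: at every step a cell is conquered by a neighboring cell whose attack force, that is, the neighbor's strength attenuated by the feature difference, exceeds the cell's own strength. The vectorized NumPy formulation and the use of a single per-pixel intensity as the cell feature are assumptions.

```python
import numpy as np

def grow_cut(feature, seed_labels, seed_strength=1.0, max_iters=500):
    """Minimal GrowCut-style cellular automaton (after Vezhnevets and Konouchine [16]).

    feature     : HxW float array of per-pixel features (e.g., green-channel intensity).
    seed_labels : HxW int array, +1 for vessel seeds, -1 for background seeds, 0 otherwise.
    Returns the converged label map (+1 vessel, -1 background, 0 if never reached).
    """
    labels = seed_labels.astype(np.int8).copy()                # l_p
    strength = np.where(seed_labels != 0, seed_strength, 0.0)  # theta_p in [0, 1]
    max_diff = float(feature.max() - feature.min()) + 1e-12

    # 8-connected (Moore) neighborhood offsets.
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

    for _ in range(max_iters):
        changed = False
        for dy, dx in offsets:
            # Neighbor q of every cell p, obtained by shifting the maps.
            # (np.roll wraps around the image border; the wrap is ignored here for brevity.)
            nb_labels = np.roll(np.roll(labels, dy, axis=0), dx, axis=1)
            nb_strength = np.roll(np.roll(strength, dy, axis=0), dx, axis=1)
            nb_feature = np.roll(np.roll(feature, dy, axis=0), dx, axis=1)

            # Monotonically decreasing attenuation g of the feature difference.
            g = 1.0 - np.abs(feature - nb_feature) / max_diff
            attack = g * nb_strength

            conquered = attack > strength
            if np.any(conquered):
                labels[conquered] = nb_labels[conquered]
                strength[conquered] = attack[conquered]
                changed = True
        if not changed:
            break  # the CA has converged to a stable configuration
    return labels
```

In this formulation the seeded cells, whose strength equals the maximum value of 1, can never be conquered, so the seeds produced by the ASPS step are preserved throughout the evolution.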
Experimental Results
Dataset
The proposed CAS technique was tested on the color retinal images available in the Digital Retinal Images for Vessel Extraction (DRIVE) database. [23] The DRIVE database contains 40 images divided into two sets, training and test, each containing 20 images. The results of a manual segmentation (gold standard), a second independent manual segmentation, and a mask image delimiting the ROI are also available in this database.
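For reference, a DRIVE test image, its gold-standard segmentation, and its ROI mask can be loaded as follows; the folder and file names mirror the layout of the publicly distributed DRIVE archive but are assumptions here and may need to be adapted to a local copy.

```python
import numpy as np
from PIL import Image

# Assumed paths/file names; adjust to the local copy of the DRIVE archive.
image = np.asarray(Image.open("DRIVE/test/images/19_test.tif"), dtype=float) / 255.0
gold = np.asarray(Image.open("DRIVE/test/1st_manual/19_manual1.gif")) > 0
roi_mask = np.asarray(Image.open("DRIVE/test/mask/19_test_mask.gif")) > 0

print(image.shape, gold.shape, roi_mask.shape)
```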
Automatic Seed Point Selection Results
The results of the ASPS on color retinal images are shown in [Figure 2] for image 19 from the DRIVE database. These results consist of the main retinal image, the mask generated for it, and the seeds selected by applying conditions 1 and 2. Seed points are shown in green. Otsu's threshold [21] was used as the threshold for NSD; for MRED, an experimental threshold of 0.025 was used.
Figure 2: Automatic seed point selection result for image 19 from the Digital Retinal Images for Vessel Extraction database. (a) Main image, (b) Region of interest, (c) Final selected seeds using a maximum relative Euclidean distance threshold of 0.025. Seed points are shown in green
For MRED, experimental thresholds of 0.04 and 0.05 were also applied to image 19, and their results were compared with the 0.025 result. [Figure 3] shows this comparison. In this figure, for better visualization, a label has been assigned to each pixel: white pixels are vascular seed points with label '1', green pixels are background seeds with label '−1', and the remaining pixels, with label '0', are segmented as vessel or background after applying the CAS technique.
Figure 3: Automatic seed point selection for image 19 from the Digital Retinal Images for Vessel Extraction database using Otsu's threshold for condition 1 and thresholds of (a) 0.025, (b) 0.04, (c) 0.05 for the maximum relative Euclidean distance. Red circles in subfigure (a) mark parts for which no seed point was selected; in subfigure (c) they mark parts where vessel and background seeds were merged
According to [Figure 3], with the MRED threshold of 0.025 the number of seeds is not sufficient and some parts of the vessels have no seed [Figure 3]a; that is, 0.025 is too small an MRED threshold. For the threshold of 0.05, the number of selected seeds is so high that different regions have become connected [Figure 3]c; this connection causes some parts of the vessels to be missed and wrongly classified as background seeds. With the threshold of 0.04, these two problems are largely eliminated; that is, there are enough seeds for every part of the vessels and the background, and the two regions remain separate.
Cellular Automata-Based Segmentation Results
As described previously, the selected seeds have a decisive effect on the segmentation result; in other words, if the seeds are selected outside the ROI, the final segmentation will certainly be incorrect. To see this effect, the proposed segmentation algorithm was applied to the 20 test images of the DRIVE database using three sets of seeds: the first produced with the experimental MRED threshold of 0.025, the second with 0.04, and the third with 0.05.
[Figure 4] shows the result of CAS for image 19 with these three thresholds. As the figure shows, with the MRED threshold of 0.025 some parts of the vessels are missed and the segmentation result is unsatisfactory, while the thresholds of 0.04 and 0.05 give more accurate RBV segmentations. Comparing the CAS results with the manual segmentation shows that the CA cannot extract very thin vessels well, which may be caused by incorrect seed sets for vessels and background.
Figure 4: The result of applying cellular automata-based segmentation to image 19 from the Digital Retinal Images for Vessel Extraction database considering three different thresholds for the maximum relative Euclidean distance. (a) Threshold of 0.025, (b) Threshold of 0.04, (c) Threshold of 0.05, (d) Manual segmentation of image 19
Performance Evaluation
To evaluate the CAS technique, three quantitative criteria were used: true positive rate (TPR), false positive rate (FPR), and accuracy (ACC). TPR and FPR are the ratios of well-segmented and wrongly segmented vessel pixels, respectively, and ACC is a global measure giving the ratio of all well-segmented pixels. These criteria are defined as:
$$\mathrm{TPR} = \frac{TP}{TP + FN} \qquad (16)$$
$$\mathrm{FPR} = \frac{FP}{FP + TN} \qquad (17)$$
$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (18)$$
In Eqs. 16-18, TP (TN) is the number of vascular (nonvascular) pixels that are correctly segmented, and FP and FN are the numbers of pixels wrongly segmented as vessel and background, respectively. [Table 2] shows the best and worst values of these criteria among the 20 test color retinal images of the DRIVE database for the experimental MRED thresholds of 0.025, 0.04, and 0.05.
Table 2: The best and the worst results of the quantitative criteria for the 20 test images from the DRIVE database using different experimental thresholds for MRED
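For completeness, the criteria of Eqs. 16-18 can be computed directly from a binary segmentation and the gold standard. The sketch below restricts the pixel counts to the ROI mask; whether the reported figures exclude pixels outside the field of view is not stated in the text, so that restriction is an assumption.

```python
import numpy as np

def vessel_metrics(segmented, gold, roi_mask):
    """TPR, FPR and ACC (Eqs. 16-18) computed over the pixels inside the ROI mask."""
    seg = segmented[roi_mask].astype(bool)   # predicted vessel pixels
    ref = gold[roi_mask].astype(bool)        # gold-standard vessel pixels
    tp = np.sum(seg & ref)      # vessel pixels correctly segmented
    tn = np.sum(~seg & ~ref)    # background pixels correctly segmented
    fp = np.sum(seg & ~ref)     # background pixels segmented as vessel
    fn = np.sum(~seg & ref)     # vessel pixels segmented as background
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return tpr, fpr, acc
```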
The averages of these criteria are also presented in [Table 2]. According to these average values, the experimental threshold of 0.04 gives better ACC and TPR results than 0.025, while 0.025 gives a better FPR. Between the thresholds of 0.04 and 0.05 the differences in the quantitative criteria are negligible. However, because of the effect of high MRED thresholds on seed selection discussed previously, we propose using the threshold of 0.04 for this condition.
Dalmau and Alarcon, [9] who used MFCA for extracting RBVs, and Palomera-Pérez et al. [11] used the ACC, TPR, and FPR criteria to evaluate their segmentation results. [Table 3] compares their performance with that of the proposed technique. According to this table, our technique is more accurate and has a lower average FPR than these two methods; Dalmau's method, however, has a better average TPR.
Table 3: Performance results compared with two other methods on the DRIVE database
Bhuiyan et al. [10] applied their method to the STARE database [24] and reported an average ACC of 0.9498 for only five images.
Cellular Automata-Based Segmentation Results on Abnormal Retinal Images
Diabetic retinopathy causes some abnormalities in retinal images, such as microaneurysms (MAs) and hard exudates (HEs); the red and yellow spots in the retinal images show MAs and HEs, respectively. The presence of these abnormalities affects the vessel segmentation results. Images 8 and 14 of the DRIVE database contain HEs and MAs, respectively. The results of CAS on these abnormal images are shown in [Figure 5] and [Figure 6], and the quantitative criteria are presented in [Table 4] for the experimental threshold of 0.04. These figures show that MAs and HEs are wrongly selected as vessel seeds and extracted by the CA. Thus, the number of wrongly segmented vessel pixels (FP) increases, which affects the reliability of the proposed method. Therefore, it is necessary to remove them with preprocessing schemes before the ASPS and CAS steps.
Figure 5: Cellular automata-based segmentation (CAS) results on abnormal retinal images. (a) Image 08 from the Digital Retinal Images for Vessel Extraction database with hard-exudate abnormality, (b) Automatic seed point selection result using Otsu's threshold for condition 1 and a threshold of 0.04 for the maximum relative Euclidean distance, (c) The result of applying CAS, (d) Manual segmentation of image 08
Figure 6: Cellular automata-based segmentation (CAS) results on abnormal retinal images. (a) Image 14 from the Digital Retinal Images for Vessel Extraction database with microaneurysm abnormality, (b) Automatic seed point selection result using Otsu's threshold for condition 1 and a threshold of 0.04 for the maximum relative Euclidean distance, (c) The result of applying CAS, (d) Manual segmentation of image 14
Computational Costs
The running speed of the algorithm is an important parameter, especially in medical applications, where it is vital to provide results quickly; long running times can also be wearisome for users. Therefore, high-speed algorithms are usually preferred.
The average running time of each step of the proposed RBV extraction method, together with the test system information, is presented in [Table 5]. The average elapsed time to extract the blood vessels using CAS is almost 2 min.
Table 5: The average elapsed time to extract blood vessels using the proposed method
Discussion and Conclusion
In this paper, a new RG technique, called CAS, was proposed for extracting the vessels in retinal images. The technique starts from a set of initial seed points, which are selected automatically using two conditions, "value similarity" and "spatial proximity." By considering different thresholds for the SPC, we produced three seed sets of different sizes to see the effect of the initial seeds on the final segmentation. The results show that CA can extract the blood vessels well, but the initial seeds play an important role in the final segmentation: considering too few or too many of them leads to an incomplete segmentation in which some parts of the vessels may be missed. This problem occurred mainly when extracting very thin vessels with CAS. Using an adaptive scheme that selects the SPC threshold according to the characteristics of each image is one suggestion for eliminating this problem and achieving more accurate segmentation results. Another problem that affects the segmentation results and must be removed is the presence of abnormalities such as MAs and HEs; detecting and removing the MAs and HEs, or segmenting and classifying them, can be considered as future work.
References
1. Bankman IN. Handbook of Medical Image Processing and Analysis. Part 2. San Diego, CA: Academic Press; 2009. p. 71-2.
2. Kirbas C, Quek F. A review of vessel extraction techniques and algorithms. ACM Comput Surv 2004;36:81-121.
3. Shan J, Cheng HD, Wang Y. A novel automatic seed point selection algorithm for breast ultrasound images. In: 19th International Conference on Pattern Recognition; 2008. p. 1-4.
4. Madabhushi A, Metaxas DN. Combining low-, high-level and empirical domain knowledge for automated segmentation of ultrasonic breast lesions. IEEE Trans Med Imaging 2003;22:155-69.
5. Gómez O, González JA, Morales EF. Image segmentation using automatic seeded region growing and instance-based learning. In: 12th Iberoamerican Congress on Pattern Recognition; 2007. p. 1-10.
6. Fan J, Zeng G, Body M, Hacid M. Seeded region growing: An extensive and comparative study. Pattern Recognit 2005;26:1139-56.
7. Feng Y, Fang H, Jiang J. Region Growing with Automatic Seeding for Semantic Video Object Segmentation. Berlin, Heidelberg: Springer-Verlag; 2005. p. 542-9.
8. Shih FY, Cheng S. Automatic seeded region growing for color image segmentation. Image Vis Comput 2005;23:877-86.
9. Dalmau O, Alarcon T. MFCA: Matched filters with cellular automata for retinal vessel detection. Lect Notes Artif Intell 2011;7094:504-14.
10. Bhuiyan A, Nath B, Chua J. An adaptive region growing segmentation for blood vessel detection from retinal images. In: International Conference on Computer Vision Theory and Applications, Setubal, Portugal; 2007. p. 404-9.
11. Palomera-Pérez MA, Martinez-Perez ME, Benítez-Pérez H, Ortega-Arjona JL. Parallel multiscale feature extraction and region growing: Application in retinal blood vessel detection. IEEE Trans Inf Technol Biomed 2010;14:500-6.
12. Gao Y, Yang J, Xu X, Shi F. Efficient cellular automaton segmentation supervised by pyramid on medical volumetric data and real time implementation with graphics processing unit. Expert Syst Appl 2011;38:6866-71.
13. Hamamci A, Unal G, Kucuk N, Engin K. Cellular automata segmentation of brain tumors on post contrast MR images. Med Image Comput Comput Assist Interv 2010;13:137-46.
14. Adams R, Bischof L. Seeded region growing. IEEE Trans Pattern Anal Mach Intell 1994;16:641-7.
15. Kim E, Shen T, Huang X. A parallel cellular automata with label priors for interactive brain tumor segmentation. In: 23rd International Symposium on Computer-Based Medical Systems; 2010. p. 232-7.
16. Vezhnevets V, Konouchine V. GrowCut: Interactive multi-label N-D image segmentation by cellular automata. In: Proceedings of Graphicon; 2005.
17. Ghosh P, Antani SK, Long LR, Thoma GR. Unsupervised grow-cut: Cellular automata-based medical image segmentation. In: First IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology; 2011. p. 40-7.
18. Kauffmann C, Piché N. Seeded ND medical image segmentation by cellular automaton on GPU. Int J Comput Assist Radiol Surg 2010;5:251-62.
19. Jain R, Kasturi R, Schunck BG. Machine Vision. New York: McGraw-Hill; 1995. p. 73-111.
20. Poynton CA. A Technical Introduction to Digital Video. New York: John Wiley and Sons; 1996. p. 175.
21. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979;9:62-6.
22. Gray L. A mathematician looks at Wolfram's new kind of science. Not Am Math Soc 2003;50:200-11.
23. DRIVE: Digital Retinal Images for Vessel Extraction; 2008. Available from: http://www.isi.uu.nl/Research/Databases/DRIVE/.
24. STARE database. Available from: http://www.parl.clemson.edu/stare/probing/.
Authors
Atefeh Sadat Sajadi has been with the Department of Biomedical Engineering at Iran University of Science and Technology (IUST), where she received her M.Sc. degree in biomedical engineering. Her research interests are biomedical image processing and modeling.
Seyed Hojat Sabzpoushan is with the Department of Biomedical Engineering at Iran University of Science and Technology (IUST). His research interests are biomedical systems modeling and control.