ECE 532 Digital Image Analysis
Project Report
On
Texture Based Feature Extraction and Object Tracking
By
Priyanka Goswami
December 2016
Department of Electrical and Computer Engineering
The University of Arizona
CONTENTS
Abstract
1.1 Introduction
1.2 Feature Extraction
1.3 Texture analysis based feature extraction
1.3.1 Local Binary Pattern
1.3.2 Local Derivative Pattern
1.3.3 Local Ternary Pattern
1.4 Tracking Cloud Position using Texture based Features
1.5 Project Outcome, Results and Analysis
Conclusion
References
ABSTRACT
Feature extraction involves reducing the amount of resources needed to describe a large set
of data. For images, this means deriving information from a large initial set by transforming
images into a reduced set of features, known as a feature vector. This can be used for storing
large numbers of images in limited memory space and also for different applications like image
analysis, pattern recognition and image retrieval. It is required in many fields like
machine learning, biomedical imaging, character recognition systems, satellite imaging, etc.
The effectiveness of feature extraction depends on the method adopted for extracting
the features from an image. Hence this project involves implementing different texture
analysis based extraction techniques, namely Local Binary Pattern (LBP), Local Derivative
Pattern (LDP) and Local Ternary Pattern (LTP), and a boundary based extraction technique,
the Freeman Chain Code, and carrying out a comparative study.
This is applied to identify cloud patterns and track their motion (as pixel position
changes) in time series images acquired from weather satellites like GOES.
1.1 Introduction
Feature extraction can be defined as identifying a set of defining characteristics of an
image and using this set of data as a representation of the image for storage, analysis and
image retrieval. This helps in reducing the amount of resources used to describe a large set of
data and is especially required if there is a need for storing and analysing a large number of
images like in medical applications, face or pattern recognition, optical character recognition
systems and for image retrieval. Because of limited memory and computational resources, it
is advantageous to store only the image features, rather than the complete image and use this
reduced data set for further processing and analysis. This also helps in reducing the actual
time taken by an application to process and analyse an image.
The effectiveness of feature extraction depends on the method adopted for extracting
the features from an image. Since these features are used in place of the actual image, it is
imperative that the method used for feature extraction gives the best representation of the
image. This project involves implementing different texture analysis based extraction
techniques, namely Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local
Ternary Pattern (LTP), and carrying out a comparative study based on factors such as
computation time, size of the feature set and image recognition rate. This is done using
standard data sets, and the results are compared to previous studies. The feature extraction
algorithms are run on unprocessed images.
Based on the above analysis, texture analysis based feature extraction is applied to
time series images of cloud patterns to track their motion, in terms of pixel position
changes, using the difference in texture histograms. A simulation of the movement of clouds,
based on the change in their edge positions, is also carried out to show the motion in real time.
All the codes and graphs for analysis of results are developed in MATLAB. The datasets
used for carrying out the analysis are:
• Standard images provided in MATLAB, like peppers.png, a 384 by 512 RGB image. This was
converted to an 8 bit gray scale image (0-255 gray scale values).
• The Extended Yale Face Database B, which contains 16128 images of 28 human subjects
under 9 poses and 64 illumination conditions [1].
• Images of Alaska from the GOES-WEST sector from the website goes.gsfc.nasa.gov [2].
1.2 Feature Extraction
Feature extraction is one of the main components of modern digital image analysis. It
involves computing a set of descriptors that describe the image and are unique to it; in
effect, feature extraction produces a summary of the image. These descriptors (also
called image “features”, “characteristics”, “labels” or “properties”) can be of different types,
and for each image they form a feature vector. The feature vector of an image can be
defined as a set of data that contains the relevant information about the image, and this
information can be used to accurately represent the image. Hence, instead of using the original
images, the feature vectors of the images can be used for carrying out analysis, recognition
and retrieval.
Feature extraction has become extremely important because of the following two
reasons:
1. Raw images can be large. For example, a typical MRI scan generates a 512 by 512
image with a pixel depth of 16 bits, giving a file size of 5-6 MB, and 100-1000 images may
be taken at a time depending on the diagnosis [4]. Because of limited memory, there can be
limits on the number of images that can be stored at a time. This is an issue in fields like
medical imaging, where images over a period of time (patient history) need to be stored and
processed to accurately carry out analysis and detection of diseases. Using feature vectors,
only the main characteristics of the images need to be stored, and these occupy much less
space than the original images, thus allowing for storage of a large number of images.
2. For carrying out different types of analysis, only a few features of the original image may
be required. For example, to identify clouds and track them, only the texture or edge
information of the cloud is needed. If the application is run on the whole set of original
images, it will not only use up a lot of computational resources (number of processors,
RAM, etc.) but also take a substantial amount of time. If only the texture or edge based
feature vector of each image is used instead, time and computation are reduced
substantially.
Hence feature extraction is of great importance in many fields like machine learning,
biomedical imaging, character recognition systems, satellite imaging, meteorological
imaging and weather prediction.
There are different types of features that can be used to represent an image and they
can be categorised as follows:
1. Boundary, shapes, contour or edge based descriptors using techniques like Chain codes,
Thresholding, Hough Transform.
2. Image texture based descriptors using techniques like LBP, LTP, etc.
3. Spatial relations based descriptors, which give relations between the different features in
an image.
In the project, texture based image descriptors are used to represent images and
features are extracted using different texture analysis techniques.
1.3 Texture Analysis Based Feature Extraction
Texture can be defined as the “properties that represent the surface of an object” or
“as something consisting of mutually related elements”, i.e. a group of pixels that represent a
common pattern in an image, called a texture primitive, texture element or texel [5].
Textures can generally be classified as smooth, coarse, fine grained, rough, etc.
Additionally, image texture is scale dependent: if we look at the whole image of a grass
field, the texture appears uniform, but on dividing it into regions, every frame may show
slight variations in texture.
Texture Analysis can be classified into two categories [6]:
1. Statistical Approach – Different textures are represented by feature vectors, and
classification is carried out using statistics and probability. The texture character has a
direct relation to the spatial size of the texel. For recognition, methods such as computing
co-occurrence matrices and using them to classify textures have been used. This approach
is widely used for texture analysis and classification.
2. Syntactic Approach – This method is based on the assumption that in an image, texture
primitives are located at regular intervals, and this placement has to be determined to define
a texture. For texture recognition, a “grammar” structure is constructed for every texture
class in the training phase; for recognition, syntactic analysis of the texture “word” is
carried out and matched against the “grammars” to assign it to a class [5]. This approach is
not used as widely as the statistical approach.
Texture analysis is a widely studied and researched field in digital image analysis and
machine vision. It can be used for feature extraction, object recognition, medical imaging and
classification, object tracking and motion analysis, recognition of cloud types from
meteorological satellite data [5], and recognition and classification of fields, forests,
agricultural land, cities, etc. from satellite images and maps.
In the project, the following texture analysis techniques are used for feature
extraction:
1. Local Binary Pattern (LBP)
2. Local Derivative Pattern (LDP)
3. Local Ternary Pattern (LTP)
The following sections give a brief description about each technique along with the
algorithms used to develop the codes in MATLAB.
1.3.1 Local Binary Pattern
Local Binary Pattern (LBP) was first described by Ojala et al. [7] as a two level
version of the original texture spectrum method proposed by Wang and He [8]. In the method,
gray-level based local patterns are used to create a histogram of the image, where each bin of
the histogram describes a localised texture of the image, i.e. the image is described as a
collection of micro-patterns, and the LBP encodes the first order derivatives. The main
advantages of this technique are that it is rotation invariant and computationally inexpensive.
It is also unaffected by uniform gray-scale shifts in the image. The technique has been widely
used for applications like texture representation and classification, face recognition systems
and object tracking.
In this scheme an image can be processed as a whole or divided into regions. A cell is
then selected; the cell can be rectangular, or circular with parameters (P, R), where P is the
number of pixels considered and R is the radius of the cell. Using the center pixel as
the threshold, the neighbouring pixels are classified as 0 or 1. Once this is carried out
for every neighbouring pixel, the binary value is converted to the equivalent decimal value.
This is done over the whole image, region by region. Once the values are computed, they are
used to build a histogram with 2^8 = 256 bins. If there are multiple regions, the histograms
of the regions are concatenated.
In the project, the image is processed as a single entity and not segmented into
regions, and a rectangular cell of 3 by 3 pixels is used. All images are gray
level; RGB images are processed after converting them to gray level. The algorithm is
developed as follows:
1. Store the gray level image and get the image dimensions [rows, cols].
2. Extend the image boundaries.
7
3. For every pixel in the image, select a 3 by 3 neighbourhood around it, keeping it as the
center. Let the center pixel be Gc with coordinates (x,y) and the neighbouring pixels be
Gn.
4. Keeping Gc as the threshold, for every neighbouring pixel, if Gn ≥ Gc then the value is 1,
else the value is 0.
5. After computing this for every neighbouring pixel, the values are listed in clockwise order
and the equivalent decimal value is calculated.
6. Once this is done for every pixel, the values are grouped into 256 bins and the histogram
is created.
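The following is a minimal MATLAB sketch of this algorithm (a hypothetical helper, not the exact project code). It assumes a gray level input image of class uint8 and uses padarray from the Image Processing Toolbox to extend the boundaries:

function [lbpImg, lbpHist] = lbp_sketch(I)
    % Extend the image boundaries by replicating the border pixels
    I = double(padarray(I, [1 1], 'replicate'));
    [rows, cols] = size(I);
    lbpImg = zeros(rows-2, cols-2);
    % Clockwise neighbour offsets, starting from the top-left pixel
    dr = [-1 -1 -1  0  1  1  1  0];
    dc = [-1  0  1  1  1  0 -1 -1];
    w  = 2.^(7:-1:0);                           % bit weights, MSB first
    for r = 2:rows-1
        for c = 2:cols-1
            Gc = I(r, c);                       % center pixel as threshold
            bits = zeros(1, 8);
            for k = 1:8
                bits(k) = I(r+dr(k), c+dc(k)) >= Gc;   % Gn >= Gc -> 1
            end
            lbpImg(r-1, c-1) = sum(w .* bits);  % binary to decimal
        end
    end
    lbpHist = histcounts(lbpImg(:), 0:256);     % 256-bin feature vector
end

Running this on a gray scale image produces the kind of LBP image and 256-bin histogram shown in Figure 2.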
Figure 1 shows a sample computation of LBP for a single pixel Gc = 156 in a 3 by 3
neighbourhood. Each neighbour Gn is thresholded as 1 if Gn ≥ Gc, else 0:

125 130 175        0 0 1
180 156 189   →    1 0 1
145 160 165        0 1 1

Reading the thresholded values clockwise from the top-left gives (00111101)2 = (61)10.

Figure 1. Computation of LBP in a gray scale image

The LBP code, written in MATLAB, is tested on the peppers.png image (after converting it to
gray scale). Figure 2 shows the gray scale image, the output LBP image and the LBP histogram.
The histogram gives the textural features of the image and is stored as its feature vector.
Figure 2. LBP image and histogram of peppers.png image
1.3.2 Local Derivative Pattern
Since LBP gives the first order derivative of the pattern in an image, it fails to detect
finer details, and hence B. Zhang and Y. Gao proposed a higher order local pattern
descriptor known as the Local Derivative Pattern (LDP) [9]. The method is able to
capture finer details in images and gave better results when used for facial recognition. The
LDP captures the change of derivative directions among local neighbours and encodes the
turning points in a given direction. For a center pixel, the LDP is calculated in 4
directions: 0°, 45°, 90° and 135°. For this, a set of templates was proposed, and these
templates are run on a cell of 5 by 5 pixels, keeping the center pixel Gc as the reference
point. These templates are shown in Figure 3.
Figure 3. Templates for calculating the 2nd order derivative pattern, as proposed by B. Zhang and Y.
Gao [9] [Courtesy: B. Zhang, Y. Gao, S. Zhao, J. Liu, "Local derivative pattern versus local binary
pattern: Face recognition with higher-order local pattern descriptor"]
For every pixel, an 8 bit binary pattern is generated for every direction. The final
value is calculated by concatenating the four 8 bit values to get a 32 bit value. The histogram
for the image is determined by concatenating the individual histograms for the four
directions.
For the project, the 2nd order LDP was calculated for gray scale images, and the histograms
for all four directions were concatenated and stored as the feature vector of the image. The
4-point template is used, which means that for a given pixel Gc, if the value is increasing
or decreasing monotonically the bit is ‘0’, else, if there is a change of gradient (one
derivative increasing and the other decreasing), the bit is ‘1’. The algorithm for the LDP,
developed in MATLAB, is as follows:
1. Get the gray level image and reflect the image boundaries.
2. Get the image dimensions as [rows, cols].
3. Keeping the first pixel as the center pixel Gc, select a 5 by 5 neighbourhood.
4. Using the templates described in Figure 3, calculate the 8 bit 2nd order derivative pattern
for 0°. There are 8 computations in total, one for each of the 8 neighbouring pixels.
5. This is repeated for 45°, 90° and 135° using the corresponding templates.
6. Histograms for each direction are computed separately and then concatenated to give the
overall feature vector of the image.
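Below is a minimal MATLAB sketch of the 2nd order LDP for the 0° direction only (a hypothetical helper, not the exact project code). Instead of the templates, it uses the equivalent sign-of-derivative formulation from [9]: the 0° first order derivative is I'(Z) = I(Z) − I(Z_right), and a bit is '1' when the derivative changes sign between the center pixel and a neighbour (a turning point), '0' when both have the same sign (monotonic):

function ldpImg = ldp0_sketch(I)
    % Reflect the image boundaries (2 pixels, for the 5 by 5 support)
    I = double(padarray(I, [2 2], 'symmetric'));
    [rows, cols] = size(I);
    % First order derivative along 0 degrees: D(r,c) = I(r,c) - I(r,c+1)
    D = I(:, 1:end-1) - I(:, 2:end);
    ldpImg = zeros(rows-4, cols-4);
    dr = [-1 -1 -1  0  1  1  1  0];             % clockwise neighbours
    dc = [-1  0  1  1  1  0 -1 -1];
    w  = 2.^(7:-1:0);
    for r = 3:rows-2
        for c = 3:cols-2
            bits = zeros(1, 8);
            for k = 1:8
                % '1' when the derivative direction turns at the neighbour
                bits(k) = D(r, c) * D(r+dr(k), c+dc(k)) <= 0;
            end
            ldpImg(r-2, c-2) = sum(w .* bits);
        end
    end
end

The same routine, with the derivative taken along the other three directions, gives the 45°, 90° and 135° patterns whose histograms are concatenated in step 6.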
The code was first tested using the example values described in [9], and the results
were verified. The computation was done in the following way (Figure 4), on the 5 by 5
example neighbourhood with reference point Gc = 4:

2 5 3 5 1
6 7 9 1 5
2 3 4 8 2
3 2 3 2 9
1 2 3 2 1

With the 0° templates applied around the reference point (the highlighted cells in [9]), the
final result is 01010100. This is repeated in the opposite direction and for all other angles,
using the corresponding templates.

Figure 4. 2nd order LDP calculation for four bits of 0°
The code was then run on the peppers.png image. The results are shown in Figure 5,
which includes the original image and the 2nd order LDP images for the four angles.
Figure 5. 2nd order LDP for 0°, 45°, 90° and 135° of the peppers.png image
1.3.3 Local Ternary Pattern
Local Ternary Pattern (LTP) was introduced by X. Tan and B. Triggs for face
recognition under difficult lighting conditions [10]. They proposed the method as a
generalisation of the LBP scheme, which is sensitive to noise and unable to accurately
detect smooth, weak illumination gradients [10]. Instead of the 2 valued code (‘0’ and ‘1’)
used in LBP, LTP uses a 3 valued code (‘-1’, ’0’ and ‘1’). This is done by setting a threshold
value and classifying the neighbourhood pixels based on the value of the central pixel and the
threshold.
For a central pixel Gc, a neighbouring pixel Gn and a threshold value t, the LTP value is
computed as follows:
• If Gn ≥ Gc + t, the value is 1,
• If |Gn – Gc| < t, the value is 0,
• If Gn ≤ Gc – t, the value is -1.
Hence a region around Gc, defined by Gc ± t, is assigned 0, and any value above or below
this region is assigned 1 or -1 respectively. Using this technique, two LTP codes are
calculated for each pixel of an image: the Upper LTP, which is the decimal representation of
the ‘+1’ values, and the Lower LTP, which is the decimal representation of the ‘-1’ values.
Figure 6 shows a sample calculation for a center pixel Gc = 150 with threshold t = 5
(upper limit = 155, lower limit = 145) in a 3 by 3 neighbourhood:

135 120 148        -1 -1  0
156 150 115   →     1  0 -1
162 158 154         1  1  0

Upper LTP (‘+1’ values):   Lower LTP (‘-1’ values):
0 0 0                      1 1 0
1 0 0                      0 0 1
1 1 0                      0 0 0
(00000111)2 = (7)10        (11010000)2 = (208)10

Figure 6. Computing the Upper and Lower LTP values in a 3 by 3 neighbourhood
Using these, a combined histogram is created, and this gives the textural features of the
image.
The algorithm used for developing the LTP code in MATLAB is described below:
1. Load the image and reflect the image boundaries.
2. Get the image dimensions in form of [rows, cols].
3. For the central pixel Gc, select a 3 by 3 neighbourhood. For each of the 8 neighbouring
pixels, calculate the value according to the conditions above.
4. The ‘+1’ values are stored as the Upper LTP and the ‘-1’ values are stored as the Lower
LTP.
5. The binary sequences of the two LTPs are converted to decimal equivalent and stored.
6. Histograms for each LTP are calculated and concatenated to give the overall feature vector
of the image.
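A minimal MATLAB sketch of this algorithm is given below (a hypothetical helper, not the exact project code). It returns the two LTP images and the concatenated 512-bin histogram used as the feature vector:

function [featVec, upperImg, lowerImg] = ltp_sketch(I, t)
    % Reflect the image boundaries
    I = double(padarray(I, [1 1], 'symmetric'));
    [rows, cols] = size(I);
    upperImg = zeros(rows-2, cols-2);
    lowerImg = zeros(rows-2, cols-2);
    dr = [-1 -1 -1  0  1  1  1  0];             % clockwise neighbours
    dc = [-1  0  1  1  1  0 -1 -1];
    w  = 2.^(7:-1:0);
    for r = 2:rows-1
        for c = 2:cols-1
            Gc = I(r, c);
            up = zeros(1, 8);
            lo = zeros(1, 8);
            for k = 1:8
                Gn = I(r+dr(k), c+dc(k));
                up(k) = Gn >= Gc + t;           % '+1' ternary values
                lo(k) = Gn <= Gc - t;           % '-1' ternary values
            end
            upperImg(r-1, c-1) = sum(w .* up);
            lowerImg(r-1, c-1) = sum(w .* lo);
        end
    end
    % Concatenated Upper and Lower LTP histograms form the feature vector
    featVec = [histcounts(upperImg(:), 0:256), histcounts(lowerImg(:), 0:256)];
end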
Figure 7 shows the output of the code based on the above LTP algorithm, developed
in MATLAB and executed on the peppers.png image. It includes the gray level image, the
Upper LTP and the Lower LTP image.
Figure 7. Lower and Upper LTP output images for peppers.png
1.4 Tracking Cloud Position using Texture based Features
Object tracking is used to determine the motion of different elements in a series of
images. It is useful in various fields like surveillance, meteorology and medical imaging.
Object tracking can be of two types:
1. Kernel Based Tracking
2. Contour Based Tracking
In the project, a form of contour based tracking is developed to understand
and track the motion of clouds from a time series of images taken by the GOES satellite. For
tracking the cloud position, the following algorithm is developed in MATLAB:
1. Using one of the texture analysis schemes (for example LBP or LTP), the texture histogram
of the first image in the time series is calculated and stored as the reference histogram.
This histogram gives the textural feature vector of the cloud.
2. For every subsequent image in the sequence, the histogram is calculated. If the cloud has
moved in the image, the local textures will differ from the reference image, and hence the
bin values of the histogram will differ.
3. The difference between the histograms is calculated using the χ² (Chi Squared) distance.
This measure is used because a given difference in the larger valued bins is less important
than the same difference in the smaller valued bins: the larger valued bins represent a
constant texture (which may correspond to land in the actual image), so the differences in
the smaller valued bins must be emphasised. The measure has been used successfully in many
object tracking schemes. Let Href and Himg be the two histograms. The difference between
them is calculated using the following formula:

χ²(Href, Himg) = Σi (Href,i − Himg,i)² / (Href,i + Himg,i)
4. In this way the histogram difference for each image is calculated with respect to the
reference image, and this gives the pixel position difference of the cloud.
5. By plotting the pixel position difference, the direction and speed of the cloud’s movement
can be determined. For example, if the pixel position difference remains constant over
multiple images, the cloud position has not changed. A minimal sketch of this loop is given
below.
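The sketch assumes the lbp_sketch helper from section 1.3.1 and a cell array imgFiles of gray scale image file names ordered in time (both hypothetical names); a small eps is added to the denominator of the χ² formula to avoid division by zero for empty bins:

% Chi Squared distance between the reference histogram and another histogram
chiSq = @(Href, Himg) sum(((Href - Himg).^2) ./ (Href + Himg + eps));

% Reference histogram from the first image in the time series
[~, Href] = lbp_sketch(imread(imgFiles{1}));

% Distance of every image in the sequence to the reference
d = zeros(numel(imgFiles), 1);
for n = 1:numel(imgFiles)
    [~, H] = lbp_sketch(imread(imgFiles{n}));
    d(n) = chiSq(Href, H);
end

% A flat stretch in this plot means the cloud position has not changed
plot(d); xlabel('Image number'); ylabel('\chi^2 distance to reference');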
Figure 8 shows the gray scale image, the LBP image and the Upper LTP image of one of the cloud images.
Figure 8. Gray scale, LBP and LTP images of the cloud [gray scale image courtesy: http://goes.gsfc.nasa.gov/]
1.5 Project Outcome, Results and Analysis
The project had the following three objectives:
1. Developing and implementing the LBP, LDP and LTP texture analysis techniques in
MATLAB.
2. Using the above techniques for feature extraction and comparing them based on
computation time, size of the feature vectors and accuracy (%) of identification.
3. Based on the above results, using the best technique to implement cloud position tracking.
Once the LBP, LDP and LTP codes were developed and tested in MATLAB for the
peppers.png image, a comparative study for the three techniques was carried out to determine
which of them gave the best results. This was done as follows:
1. A Training Set was created using 50 images from each of the first 5 image sets from the
Extended Yale Face Database B.
2. Then, from each set (each subject’s face), 30 test images were selected, each with a
different facial position and illumination condition.
3. A MATLAB code was written to read all the images from the Training Set, compute the
texture based feature vector (histogram) of each image using LBP, LDP and LTP, and
store them.
4. The next step was to compute the feature vectors for each test image set and compare them
to the Training Set. This was done using the nearest neighbour method (the “knnclassify”
function in MATLAB). The code for this is shown below:
%Generating class labels (N = 50, since there are 50 training images per class)
class_label = zeros(no_img, 1);
for i = 1:no_img
    class_label(i,1) = ceil(i/N);   % ceil rounds up to the nearest integer
end

%Classifying the test set with the nearest neighbour method
class = knnclassify(test_set, train_set, class_label);

%Determining the total no. of correctly identified images
%(num is the true class label of the current test set)
t_corr = 0;
[total, cols] = size(test_set);
for i = 1:total
    if class(i,1) == num
        t_corr = t_corr + 1;
    end
end
per_acc = (t_corr/total)*100;       % Calculating the % accuracy of identification
5. This is carried out for all the test sets and a graph is plotted comparing the results.
Figure 9 shows the LBP, LDP and LTP images generated for one of the images from the
Extended Yale Face Database B.
Original Images
Local Binary Pattern
Local Derivative Pattern (2nd order derivative in 0°)
Local Ternary Pattern (Upper LTP)
Figure 9. LBP, LDP and LTP images of a test face image from the Extended Yale Dataset
Table 1 shows the accuracy of identification and classification based on the k nearest
neighbour, for three techniques and Figure 10 shows the graph of accuracy of identification
(%) for each image set, using the three techniques.
Set No. LBP - accuracy (%) LDP- accuracy (%) LTP - accuracy (%)
Set 1 66.7 61.7 61.7
Set 2 56.7 70.8 68.3
Set 3 70.0 69.2 68.3
Set 4 60.0 67.5 75.0
Set 5 66.7 84.2 78.3
Table 1. % accuracy in identifying images based on textural feature vectors for LBP, LDP, LTP
Figure 10. Accuracy (%) of identification for LBP, LTP, LDP
Based on the above results the following observations are made:
1. The results are consistent with previous studies carried out in [9], [10], [11] and [12].
2. The feature vector size for each image is smallest for LBP (256 by 1). LTP gives a
feature vector of size 256 by 2 (Upper and Lower LTP), while LDP gives a feature vector of
size 256 by 4 (one histogram for each direction: 0°, 45°, 90° and 135°).
3. From the above graph it can be said that the LDP technique has the highest accuracy (%)
of identification – 84.2% for Image Set 5.
4. Also LBP has the maximum variation in the accuracy (%), with the lowest value being
56.7% for Set 2 and highest accuracy of 70.0% for Set 3.
5. Additionally the most consistent result is given by the LTP technique.
6. On the basis of code complexity, LDP is the most complex and LBP the least. Execution
time is also highest for LDP and lowest for LBP.
After carrying out the above analysis, the LBP and LTP based texture analysis methods
were used to carry out the cloud position tracking described in section 1.4. A total of
40 images, taken over a period of 8 hours, were analysed, and the change in position was
determined in terms of pixel position changes by computing the χ² value between the fixed
reference histogram and the cloud image histograms over time. The images are IR2 channel
images of the state of Alaska from the GOES-WEST sector (website:
http://goes.gsfc.nasa.gov). Figure 11 shows the edge contours of the clouds for images taken
at an interval of 1 hour and the graph of the pixel position difference determined using LBP
and LTP. Figure 12 shows the graph of the change in pixel position, showing the cloud motion
determined using LBP and LTP for the 40 images taken over 8 hours.
(Blue: reference cloud edge at time T0; Red: cloud edge at time Tt)
Figure 11. Images showing the contours of the cloud edges and the graph of the change in pixel value,
for LBP and LTP
Figure 12. Change in pixel position for the 40 cloud images, using LBP and LTP
As seen from the above figures, LBP gives a much smoother graph of the change in pixel
position than LTP, and this can be verified from the cloud contour images, which show the
advancing position of the clouds with respect to the reference. Hence it can be concluded
that for tracking cloud positions, LBP gives a better result than LTP.
CONCLUSION
Feature extraction is a very important component of digital image analysis and
machine vision, especially for image storage and retrieval in large databases. Feature
vectors of images can also be used for fast processing and analysis, but the effectiveness of
this depends on the feature extraction technique. In the project, texture based feature
extraction is carried out using three different texture analysis techniques: Local Binary
Pattern, Local Derivative Pattern and Local Ternary Pattern. The effectiveness of each
technique is analysed by performing an identification and classification experiment on a
standard image set, the Extended Yale Face Database B. Based on it, LDP gives the highest
identification accuracy and LTP the most consistent results, while LBP gives the smallest
feature vector, is the least complex and is computationally the fastest. The LTP and LBP
techniques are then used to carry out tracking of cloud position by determining the change in
pixel position. This is done by calculating the Chi Squared distance between two histograms,
that of the reference and that of the test image, taken from the NASA GOES website for the
state of Alaska. The contour of the cloud edge is also determined using the standard MATLAB
function imcontour, to verify the result. From the graphs, it can be concluded that for cloud
position tracking, the LBP technique gives a better result than LTP.
All the codes have been developed and tested in MATLAB.
All the codes have been developed and tested in MATLAB.
REFERENCES
[1]. A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, “From few to many:
Illumination cone models for face recognition under variable lighting and pose,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 643–660, Jun. 2001
[2]. IR2 images of Alaska – GOES-WEST sector, [Online]:
http://goes.gsfc.nasa.gov/goeswest/alaska/
[3]. J. Sklansky, "Image segmentation and feature extraction", IEEE Trans. on Systems Man
and Cybernetics, vol. 8, pp. 237-247, 1981
[4]. ‘Magnetic Resonance Imaging’, [Online]: http://www.telemedproviders.com
/telemedicine-articles/91-magnetic-resonance-imaging-mri.html
[5]. M. Sonka, V. Hlavac, R. Boyle, “Image Processing, Analysis and Machine Vision”,
Thomson-Engineering, 2007
[6]. R. M. Haralick, "Statistical and Structural Approaches to Texture", Proceedings of the
IEEE, vol. 67, no. 5, pp. 786-804, 1979
[7]. T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation
invariant texture classification with local binary patterns,” IEEE Trans. on Pattern
Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002
[8]. L. Wang, D. He, “Texture classification using texture spectrum”, Pattern Recognition,
vol. 23, pp. 905-910, 1990
[9]. B. Zhang, Y. Gao, S. Zhao, J. Liu, "Local derivative pattern versus local binary pattern:
Face recognition with higher-order local pattern descriptor", IEEE Trans. Image Process.,
vol. 19, no. 2, pp. 533-544, Feb. 2010
[10]. X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under
difficult lighting conditions”, IEEE Trans. Image Process., vol. 19, no. 6, pp. 1635-1650, 2010
[11]. S.K. Vipparthi, S. Murala, A.B. Gonde, Q.M. Jonathan Wu, "Local directional mask
maximum edge patterns for image retrieval and face recognition", IET Computer Vision,
vol. 10, no. 3, pp. 182-192, March 2016
[12]. H. Rami, M. Hamri, Lh. Masmoudi, “Object Tracking in Images Sequence using Local
Binary Pattern (LBP)”, International Journal of Computer Applications (0975-8887),
vol. 63, no. 20, February 2013
[13]. www.mathworks.com