Here is the ColorBlobDetectionActivity class from the color blob detection sample. The particular chunk of code that I am having difficulty understanding is Line 114 to Line 135, in the onTouch method implemented in this class.
When the onTouch method is invoked, that is, when the user touches a colored blob, int rows = mRgba.rows() and int cols = mRgba.cols() are calculated. Since mRgba is a Mat returned by onCameraFrame(), it represents a camera frame. So I think rows and cols represent the number of pixels along the y-axis (height) and the x-axis (width) of the frame, respectively. Since a frame is the area viewed by the camera (which in this app is the full screen of the device), rows and cols should then give the number of pixels along the y-axis and x-axis of the screen, respectively.
The next two statements are:
int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;
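To make the arithmetic concrete, here is a small self-contained sketch using made-up numbers (the actual view and frame sizes depend on the device; the 1280x800 view and 1024x768 frame below are purely hypothetical). It reproduces the same centering formula and the touch-coordinate translation that follows it in the sample:

```java
public class OffsetDemo {
    // Same arithmetic as the sample: how far the frame must be shifted
    // so that it sits centered inside the (possibly larger) view.
    static int offset(int viewSize, int frameSize) {
        return (viewSize - frameSize) / 2;
    }

    public static void main(String[] args) {
        int cols = 1024, rows = 768;      // hypothetical frame size (mRgba.cols(), mRgba.rows())
        int viewW = 1280, viewH = 800;    // hypothetical view size (getWidth(), getHeight())

        int xOffset = offset(viewW, cols);  // (1280 - 1024) / 2 = 128
        int yOffset = offset(viewH, rows);  // (800 - 768) / 2 = 16

        // Translating a screen touch into frame coordinates, as the
        // sample's onTouch does with event.getX()/getY():
        int touchX = 640, touchY = 400;     // touch at the screen centre
        int frameX = touchX - xOffset;      // 512 = centre column of the frame
        int frameY = touchY - yOffset;      // 384 = centre row of the frame

        System.out.println(xOffset + " " + yOffset + " " + frameX + " " + frameY);
    }
}
```

If my reading is right, the offsets are simply the width of the blank margin on each side when the frame is letterboxed in the middle of the view.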
The questions are:

1. What exactly do xOffset and yOffset represent? mOpenCvCameraView is an instance of CameraBridgeViewBase, which according to the documentation is a basic class responsible for implementing the interaction of the camera and OpenCV. The documentation is silent on getWidth() and getHeight(), but I think they also give the width and height of the camera frame (in pixels?), so they should be the same as cols and rows. Is that correct?

2. Can you explain a bit the formula they have used to calculate xOffset and yOffset (in the above two statements)?