
I'd like to take pictures of labels on a jar of food and be able to transform them so the label is flat, with the right and left sides resized to match the center of the image.

Ideally, I'd like to use the contrast between the label and the background in order to find the edges and apply the correction. Otherwise, I can ask the user to somehow identify the corners and sides of the image.


I'm looking for general techniques and algorithms to take an image that is skewed spherically (cylindrically, in my case) and flatten it. Currently, the image of a label wrapped around a jar or bottle has features and text that shrink as they recede to the left or right of the image. Also, the lines that mark the edges of the label are parallel only in the center of the image, and skew towards each other at the left and right extremes of the label.

After manipulating the image, I would like to be left with an almost perfect rectangle where the text and features are uniformly sized, as if I took a picture of the label when it was not on the jar or bottle.

Also, I would like it if the technique could automatically detect the edges of the label, in order to apply the suitable correction. Otherwise I would have to ask my user to indicate the label boundaries.

I've already Googled and found articles like this one: flattening curved documents, but I am looking for something a bit simpler, as my needs are for labels with a simple curve.

  • Nikie has what appears to be an all-encompassing solution. It gets much simpler, though, if you know that the camera is always "square" to the jar, with no confusing background. Then you find the edges of the jar and apply the simple trigonometric (arcsine?) transformation, without much additional fiddling. Once the image is flattened you can isolate the label itself. Commented May 19, 2012 at 11:37
  • @Daniel That is what I did here. Ideally one would take into account the not-perfectly-parallel projection as well, but I didn't.
    – Szabolcs
    Commented May 21, 2012 at 10:58
  • The work is very good, but the code shows an error on my system. I am using MATLAB 2017a; is the code compatible with it? Thank you. Commented Feb 5, 2018 at 9:54
  • When I run your program, I get this error: `C:/Users/Michael Balcerzak/Documents/opencv-text-detection/opencv-text-detection/imageSticting.py:103: UserWarning: Bi-quadratic interpolation behavior has changed due to a bug in the implementation of scikit-image. The new version now serves as a wrapper around SciPy's interpolation functions, which itself is not verified to be a correct implementation. Until skimage's implementation is fixed, we recommend to use bi-linear or bi-cubic interpolation instead. warped = tf.warp(img, tform3, order=2)` How do I fix this? Commented May 11, 2020 at 18:51
  • @MichaelBalcerzak Welcome to SE.SP! Please do not add a comment as an answer. Your question, as it stands, is not a good fit for this site. You are asking a programming question. Even if the reason for the code is signal processing, we do not debug code here. Please ask your question on Stack Overflow itself.
    – Peter K.
    Commented May 11, 2020 at 22:03

1 Answer


A similar question was asked on Mathematica.Stackexchange. My answer over there evolved and got quite long in the end, so I'll summarize the algorithm here.

Abstract

The basic idea is:

  1. Find the label.
  2. Find the borders of the label.
  3. Find a mapping from image coordinates to cylinder coordinates, so that the pixels along the left border of the label map to (0 / [anything]), the pixels along the right border to (1 / [anything]), the pixels along the top border to ([anything] / 1), and so on.
  4. Transform the image using this mapping.

The algorithm only works for images where:

  1. the label is brighter than the background (this is needed for the label detection)
  2. the label is rectangular (this is used to measure the quality of a mapping)
  3. the jar is (almost) vertical (this is used to keep the mapping function simple)
  4. the jar is cylindrical (this is used to keep the mapping function simple)

However, the algorithm is modular. At least in principle, you could write your own label detection that does not require a dark background, or you could write your own quality measurement function that can cope with elliptical or octagonal labels.

Results

These images were processed fully automatically, i.e. the algorithm takes the source image, works for a few seconds, then shows the mapping (left) and the undistorted image (right):

[seven example results: fitted mapping (left) and flattened label (right)]

The next images were processed with a modified version of the algorithm, where the user selects the left and right borders of the jar (not the label), because in a frontal shot the curvature of the label cannot be estimated from the image (i.e. the fully automatic algorithm would return images that are slightly distorted):

[two results from the manually assisted version]

Implementation

1. Find the label

The label is bright in front of a dark background, so I can find it easily using binarization:

src = Import["https://i.sstatic.net/rfNu7.png"];
binary = FillingTransform[DeleteBorderComponents[Binarize[src]]]

binarized image

I simply pick the largest connected component and assume that's the label:

labelMask = Image[SortBy[ComponentMeasurements[binary, {"Area", "Mask"}][[All, 2]], First][[-1, 2]]]

largest component
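For readers without Mathematica, the same two steps (binarize, keep the biggest blob) can be sketched in Python with scipy.ndimage; the helper name is mine, not from the original code:

```python
import numpy as np
from scipy import ndimage

def largest_component(binary):
    """Keep only the largest connected component of a binary mask
    (a rough stand-in for the ComponentMeasurements + SortBy step)."""
    labels, n = ndimage.label(binary)           # label connected components
    if n == 0:
        return np.zeros_like(binary, dtype=bool)
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)     # mask of the biggest blob
```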

2. Find the borders of the label

Next step: find the top/bottom/left/right borders using simple derivative convolution masks:

topBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{1}, {-1}}]];
bottomBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{-1}, {1}}]];
leftBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{1, -1}}]];
rightBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{-1, 1}}]];

[border masks: top, bottom, left, right]
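The same derivative masks can be expressed in numpy as discrete differences along each axis (my own stand-in for the ImageConvolve calls, with row 0 at the top as usual for arrays):

```python
import numpy as np

def border_masks(mask):
    """Find top/bottom/left/right border pixels of a binary mask
    via discrete derivatives (numpy version of the convolution masks)."""
    m = mask.astype(np.int8)
    top    = np.diff(m, axis=0, prepend=0) == 1    # 0 -> 1 going down rows
    bottom = np.diff(m, axis=0, append=0) == -1    # 1 -> 0 going down rows
    left   = np.diff(m, axis=1, prepend=0) == 1    # 0 -> 1 going right
    right  = np.diff(m, axis=1, append=0) == -1    # 1 -> 0 going right
    return top, bottom, left, right
```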

This is a little helper function that finds all white pixels in one of these four images and converts the indices to coordinates. (Position returns indices, which are 1-based {y, x} tuples with y = 1 at the top of the image, but the image processing functions expect coordinates, which are 0-based {x, y} tuples with y = 0 at the bottom of the image):

{w, h} = ImageDimensions[topBorder];
maskToPoints = Function[mask, {#[[2]]-1, h - #[[1]]+1} & /@ Position[ImageData[mask], 1.]];
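The same index-to-coordinate flip in numpy, for reference (function name is mine):

```python
import numpy as np

def mask_to_points(mask):
    """Convert (row, col) array indices of white pixels, with row 0 at
    the top, into (x, y) image coordinates with y = 0 at the bottom."""
    h = mask.shape[0]
    idx = np.argwhere(mask)                        # 0-based (row, col) pairs
    return np.column_stack([idx[:, 1], h - 1 - idx[:, 0]])
```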

3. Find a mapping from image to cylinder coordinates

Now I have four separate lists of coordinates of the top, bottom, left, right borders of the label. I define a mapping from image coordinates to cylinder coordinates:

arcSinSeries = Normal[Series[ArcSin[\[Alpha]], {\[Alpha], 0, 10}]]
Clear[mapping];
mapping[{x_, y_}] := 
   {
    c1 + c2*(arcSinSeries /. \[Alpha] -> (x - cx)/r) + c3*y + c4*x*y, 
    top + y*height + tilt1*Sqrt[Clip[r^2 - (x - cx)^2, {0.01, \[Infinity]}]] + tilt2*y*Sqrt[Clip[r^2 - (x - cx)^2, {0.01, \[Infinity]}]]
   }

This is a cylindrical mapping that maps X/Y coordinates in the source image to cylinder coordinates. The mapping has 10 degrees of freedom for height/radius/center/perspective/tilt. I used the Taylor series to approximate the arc sine, because I couldn't get the optimization working with ArcSin directly. The Clip calls are my ad-hoc attempt to prevent complex numbers during the optimization. There's a trade-off here: on the one hand, the function should be as close to an exact cylindrical mapping as possible, to give the lowest possible distortion. On the other hand, if it's too complicated, it gets much harder to find optimal values for the degrees of freedom automatically.

(The nice thing about doing image processing with Mathematica is that you can play around with mathematical models like this very easily, introduce additional terms for different distortions and use the same optimization functions to get final results. I've never been able to do anything like that using OpenCV or Matlab. But I never tried the symbolic toolbox for Matlab; maybe that makes it more useful.)
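To see how good the truncation is, here is a quick numerical check of the order-10 Taylor polynomial of arcsin: it is very accurate near the centre of the jar (small argument) and degrades towards the rim, where the argument approaches 1:

```python
import numpy as np

def arcsin_series(a):
    """Order-10 Taylor polynomial of arcsin, the same truncation as above."""
    return a + a**3 / 6 + 3 * a**5 / 40 + 15 * a**7 / 336 + 105 * a**9 / 3456

# Small error near the centre of the jar, larger towards the rim (|a| -> 1)
err_centre = abs(arcsin_series(0.5) - np.arcsin(0.5))
err_rim = abs(arcsin_series(0.95) - np.arcsin(0.95))
```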

Next I define an "error function" that measures the quality of an image -> cylinder coordinate mapping. It's just the sum of squared errors for the border pixels:

errorFunction =
  Flatten[{
    (mapping[#][[1]])^2 & /@ maskToPoints[leftBorder],
    (mapping[#][[1]] - 1)^2 & /@ maskToPoints[rightBorder],
    (mapping[#][[2]] - 1)^2 & /@ maskToPoints[topBorder],
    (mapping[#][[2]])^2 & /@ maskToPoints[bottomBorder]
    }];

This error function measures the "quality" of a mapping: it's lowest if the points on the left border are mapped to (0 / [anything]), the pixels on the top border are mapped to ([anything] / 1), and so on.

Now I can tell Mathematica to find coefficients that minimize this error function. I can make "educated guesses" about some of the coefficients (e.g. the radius and center of the jar in the image). I use these as starting points of the optimization:

leftMean = Mean[maskToPoints[leftBorder]][[1]];
rightMean = Mean[maskToPoints[rightBorder]][[1]];
topMean = Mean[maskToPoints[topBorder]][[2]];
bottomMean = Mean[maskToPoints[bottomBorder]][[2]];
solution = 
 FindMinimum[
   Total[errorFunction], 
    {{c1, 0}, {c2, rightMean - leftMean}, {c3, 0}, {c4, 0}, 
     {cx, (leftMean + rightMean)/2}, 
     {top, topMean}, 
     {r, rightMean - leftMean}, 
     {height, bottomMean - topMean}, 
     {tilt1, 0}, {tilt2, 0}}][[2]]
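The fit itself is an ordinary least-squares problem on the border residuals, so it translates directly to other environments. A toy two-parameter version with scipy.optimize (synthetic border pixels, cx and r held fixed; all names are mine) shows the shape of the optimization:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy version of the FindMinimum step: recover c1, c2 in
# u(x) = c1 + c2 * arcsin((x - cx) / r), so that the left border
# maps to u = 0 and the right border maps to u = 1.
cx, r = 100.0, 90.0
left_x = np.full(20, 40.0)        # synthetic left-border pixel x-values
right_x = np.full(20, 160.0)      # synthetic right-border pixel x-values

def u(x, c1, c2):
    return c1 + c2 * np.arcsin((x - cx) / r)

def residuals(p):
    c1, c2 = p
    return np.concatenate([u(left_x, c1, c2) - 0.0,
                           u(right_x, c1, c2) - 1.0])

fit = least_squares(residuals, x0=[0.0, 1.0])
c1, c2 = fit.x
```

In the full problem the residual vector also contains the top/bottom border terms, and cx, r, the tilts etc. are extra unknowns with the same kind of educated starting guesses.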

FindMinimum finds values for the 10 degrees of freedom of my mapping function that minimize the error function. Combining the generic mapping with this solution, I get a mapping from X/Y image coordinates to cylinder coordinates that fits the label area. I can visualize this mapping using Mathematica's ContourPlot function:

Show[src,
 ContourPlot[mapping[{x, y}][[1]] /. solution, {x, 0, w}, {y, 0, h}, 
  ContourShading -> None, ContourStyle -> Red, 
  Contours -> Range[0, 1, 0.1], 
  RegionFunction -> Function[{x, y}, 0 <= (mapping[{x, y}][[2]] /. solution) <= 1]],
 ContourPlot[mapping[{x, y}][[2]] /. solution, {x, 0, w}, {y, 0, h}, 
  ContourShading -> None, ContourStyle -> Red, 
  Contours -> Range[0, 1, 0.2],
  RegionFunction -> Function[{x, y}, 0 <= (mapping[{x, y}][[1]] /. solution) <= 1]]]

[contour plot of the fitted mapping over the source image]

4. Transform the image

Finally, I use Mathematica's ImageForwardTransformation function to distort the image according to this mapping:

ImageForwardTransformation[src, mapping[#] /. solution &, {400, 300}, DataRange -> Full, PlotRange -> {{0, 1}, {0, 1}}]

That gives the results as shown above.
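In libraries without a forward-transformation function, the equivalent warp is usually done with the inverse mapping: for each output column, compute the source column by inverting the arcsine map and resample. A minimal sketch with scipy for the simple upright-cylinder case (the helper and the fixed mapping u = 0.5 + arcsin((x - cx)/r)/π are my own simplification, not the full 10-parameter model above):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_cylinder(img, cx, r, out_w):
    """Flatten one cylindrical band of a grayscale image by inverting
    u = 0.5 + arcsin((x - cx) / r) / pi, column by column."""
    h = img.shape[0]
    u = np.linspace(0.05, 0.95, out_w)     # skip the extreme rim columns
    theta = (u - 0.5) * np.pi              # angle around the cylinder axis
    src_x = cx + r * np.sin(theta)         # inverse of the arcsine mapping
    rows = np.repeat(np.arange(h, dtype=float)[:, None], out_w, axis=1)
    cols = np.tile(src_x, (h, 1))
    return map_coordinates(img, [rows, cols], order=1)  # bilinear resample
```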

Manually assisted version

The algorithm above is fully automatic. No adjustments required. It works reasonably well as long as the picture is taken from above or below. But if it's a frontal shot, the radius of the jar cannot be estimated from the shape of the label. In these cases, I get much better results if I let the user select the left/right borders of the jar manually and set the corresponding degrees of freedom in the mapping explicitly.

This code lets the user select the left/right borders:

LocatorPane[Dynamic[{{xLeft, y1}, {xRight, y2}}], 
 Dynamic[Show[src, 
   Graphics[{Red, Line[{{xLeft, 0}, {xLeft, h}}], 
     Line[{{xRight, 0}, {xRight, h}}]}]]]]

LocatorPane

This is the alternative optimization code, where the center and radius are given explicitly:

manualAdjustments = {cx -> (xLeft + xRight)/2, r -> (xRight - xLeft)/2};
solution = 
  FindMinimum[
   Total[errorFunction /. manualAdjustments], 
    {{c1, 0}, {c2, rightMean - leftMean}, {c3, 0}, {c4, 0}, 
     {top, topMean}, 
     {height, bottomMean - topMean}, 
     {tilt1, 0}, {tilt2, 0}}][[2]]
solution = Join[solution, manualAdjustments]
  • Removes sunglasses ... mother of god...
    – Spacey
    Commented May 18, 2012 at 18:37
  • Do you happen to have a reference to the cylindrical mapping? And perhaps equations for the inverse mapping? @niki-estner
    – Ita
    Commented Jun 4, 2018 at 17:15
