OCR depends on clear images. If details are hard for a human reader to make out, OCR will have even more difficulty discerning characters.
Ideally, when scanning or photographing text, the image should be optimized so that there is clear contrast between text and background. Wrinkles and folds should be minimized, e.g., by perpendicular lighting in photography, or moderate pressure in scanning. If there are colored stains, images can be adjusted to remove blotches of that color.
Images can also be improved after capture for use in OCR. About three minutes in the free IrfanView produced the image below from the one in the question. It was processed "by inspection" (decreased gamma, increased contrast, increased sharpness), but the processing could be tuned further by testing against the OCR tool to optimize accuracy.
![Processed image](https://cdn.statically.io/img/i.sstatic.net/Ya1Q8.jpg)
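A similar cleanup can also be scripted, which is handy for batches. Below is a minimal sketch using Python's Pillow library (an assumption; the image above was produced with IrfanView). The function name `preprocess_for_ocr` and the default adjustment factors are illustrative, not a reproduction of the exact settings used above.

```python
from PIL import Image, ImageEnhance

def preprocess_for_ocr(img, gamma=0.8, contrast=1.5, sharpness=2.0):
    """Convert to grayscale, darken midtones, then boost contrast and sharpness."""
    gray = img.convert("L")
    # Gamma correction via a lookup table; gamma < 1 darkens midtones,
    # which tends to make faint ink stand out from the background.
    lut = [min(255, int(255 * (i / 255.0) ** (1.0 / gamma))) for i in range(256)]
    gray = gray.point(lut)
    gray = ImageEnhance.Contrast(gray).enhance(contrast)
    gray = ImageEnhance.Sharpness(gray).enhance(sharpness)
    return gray
```

Typical usage would be something like `preprocess_for_ocr(Image.open("scan.jpg")).save("scan_clean.png")` before feeding the result to the OCR tool; the right factors depend on the source material, so it is worth iterating while checking the OCR output.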
In addition, if one is using Tesseract extensively on similar data, it is possible to train the tool to recognize specific fonts and specific characters. If one is dealing just with numerical data, for example, Tesseract can be trained to recognize only digits, punctuation and spaces, increasing accuracy. That training takes some effort, and might be worthwhile only for a long-term project with voluminous data (e.g., digitizing many back-issues of a newspaper that used only a few fonts).
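For the digits-only case, a character whitelist is a much lighter-weight option than full training. A hedged sketch of the command line follows (the filenames are placeholders, and whitelist behavior varies by Tesseract version; with Tesseract 4+ the LSTM engine may ignore it, in which case the legacy engine via `--oem 0` is needed):

```shell
# Restrict recognition to digits, common punctuation, and spaces.
tesseract invoice.png invoice_out --psm 6 \
  -c tessedit_char_whitelist="0123456789.,-/ "
```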