The fallacy is about picking a "target" in retrospect, after one already has the data (akin to drawing a target around the tightest cluster of bullet holes on the barn after one has already fired the shots that made them), and then calculating the probability that the data would fall so close to the target in the same way one would if the target had been predicted beforehand.
For example, suppose one wants to show that some food has a health benefit. One could take a sample of people who started eating that food and compare them to a control group on a very large number of health variables. Even if the food has no causal effect on any of the variables, if one tests a large enough number of them, it is fairly likely that the two groups will differ to a statistically significant degree on some variable just by random chance (similar to the spurious correlations website, which charts a large number of different variables and then shows only the pairs whose graphs happen to "match" fairly well). If one then picks the variable with the largest difference between the test group and the control group (say, performance on a test of grip strength) and calculates in retrospect the "probability" that the two groups would differ so much on that variable under a null hypothesis, one may get a low probability and claim that this makes a case for rejecting the null hypothesis and concluding the food caused the difference.
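The effect of picking the biggest difference after the fact is easy to simulate. The following sketch (a hypothetical illustration, not real data; the variable counts and group sizes are made up) draws two groups from the *same* distribution on many variables, so the null hypothesis is true for every one of them, and then looks at the smallest p-value across all the variables:

```python
# Hypothetical simulation of the multiple-comparisons effect described above:
# two groups with NO true difference on any variable, but the smallest of
# many p-values is routinely below the usual 0.05 threshold.
import math
import random

random.seed(0)

def two_sample_p(a, b):
    """Two-sided p-value from a z-test on the difference of sample means
    (normal approximation; adequate for this illustration)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

n_variables = 100   # health variables measured (made-up number)
n_per_group = 30    # people per group (made-up number)

# The null is true for every variable: both groups drawn from N(0, 1).
p_values = []
for _ in range(n_variables):
    test = [random.gauss(0, 1) for _ in range(n_per_group)]
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    p_values.append(two_sample_p(test, control))

print(f"smallest of {n_variables} p-values: {min(p_values):.4f}")
# If the tests were independent, the chance that at least one of them
# comes out "significant" at the 0.05 level is 1 - 0.95^100:
print(f"chance of at least one p < 0.05 by luck: {1 - 0.95 ** n_variables:.3f}")  # ≈ 0.994
```

Reporting only that smallest p-value, as if that variable had been the single pre-registered hypothesis, is exactly the retrospective target-drawing the fallacy describes.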
I don't think the wiki's phrasing about "differences in data are ignored, but similarities are overemphasized" is very clear, but in my example one could say the "similarity" that's overemphasized is the test group's significantly higher average grip strength, while the "differences" that are ignored are all the other variables on which members of the test group are no more similar to one another than they are to members of the control group.
The wiki gets that particular phrasing from this list of fallacies which they cite; you can see the page for it here, along with the examples they give of focusing on similarities while ignoring differences, such as a dating site that claims two people are a great match by highlighting a few questions they answered similarly while ignoring all the other questions they answered differently.
Note that when these types of examples are analogized to the Texas sharpshooter, what matters is that the target is chosen in retrospect; it's not important to the analogy that the person drawing the target is also the one who "performed a method", i.e. fired the gun. If one sees a friend's car with a windshield splattered with bugs, draws a target around the densest cluster, and then argues the bugs must be preferentially attracted to that part of the windshield, that would be the same fallacy. I don't know of a name for a version of this fallacy where it's treated as important that the same person both created the data by performing a method and then picked the target in light of the data, if that's what you're asking for.