Grade inflation has been an issue in the US since the mid-1970s, so welcome to the club. See endgradeinflation.org. None of the attempts to curb it have succeeded so far; the practice of student evaluations is deeply rooted in US colleges and cannot be easily changed.
The uphill battle against grade inflation has been spearheaded by the University of North Carolina at Chapel Hill, one of the top 5 large public US universities. They put a rather extensive research effort into figuring out the patterns of grade inflation. The cause, as you observed, is what economists call a market failure: the self-interested actions of individual players lead to outcomes that are worse for everybody. The employers of the graduates, and the grad programs they apply to, suffer the most, as they cannot distinguish good students from bad ones. Organizations and student societies that rely solely on GPA (grade point average) discover great differences between disciplines: the humanities end of the spectrum has been hit the hardest by grade inflation, while engineering and the sciences, which have more specific assessment and evaluation criteria, tend to produce lower grades. The opening page of this 2000 report provides a specific figure to answer your question: roughly a 15% increase in student evaluations is associated with a 1 standard deviation increase in the course average grade. That standard deviation was 0.4 on the American scale that runs from 0 to 4; at the time the report was written, the average GPA at UNC was 3.18.
In the mid-2000s, UNC came up with the idea of an effective grade, called the achievement index. In very simplistic terms, it essentially normalizes each class to have the same GPA. Each student is mapped onto a percentile implied by their grade in a given class, relative to the distribution of grades in that class; the percentiles across all classes that a student took are then aggregated; and the student's ultimate achievement GPA is reported based on a normative judgement of what the university wants to see as the average GPA and the range of grades. The idea is grounded in item-response theory, or can alternatively be explained in Bayesian terms (a maximum a posteriori estimate of student ability). As you can imagine, this caused student unrest the likes of which UNC had not seen since the civil rights movement of the 1960s (o tempora, o mores... how petty motives are these days), so the faculty chickened out and ruled against it.
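For concreteness, here is a toy sketch of the percentile-normalization step in Python. This is my own simplification, not UNC's actual achievement index (which fits an item-response model); the class rosters, the target mean of 3.0, and the target standard deviation of 0.4 are made-up illustration values.

```python
from statistics import NormalDist

# Hypothetical rosters: class -> {student: grade on the 0-to-4 scale}.
grades = {
    "PHIL101": {"alice": 4.0, "bob": 3.7, "carol": 3.7, "dave": 3.3},
    "MATH231": {"alice": 3.3, "bob": 2.7, "eve": 3.0, "dave": 2.0},
}

def percentile_in_class(student, roster):
    """Mid-rank percentile of one student's grade within one class."""
    g = roster[student]
    below = sum(1 for v in roster.values() if v < g)
    tied = sum(1 for v in roster.values() if v == g)
    return (below + 0.5 * tied) / len(roster)

def achievement_gpa(student, target_mean=3.0, target_sd=0.4):
    """Average the student's per-class percentiles, then map that average
    onto a normative GPA distribution, clipped to the [0, 4] scale."""
    pcts = [percentile_in_class(student, roster)
            for roster in grades.values() if student in roster]
    mean_pct = sum(pcts) / len(pcts)
    # Invert a standard normal CDF (clamped away from 0 and 1) to get a
    # z-score, then rescale to the university's chosen GPA distribution.
    z = NormalDist().inv_cdf(min(max(mean_pct, 1e-6), 1 - 1e-6))
    return max(0.0, min(4.0, target_mean + target_sd * z))
```

The point of the exercise: a student who sits high in the distribution of every class ends up with a higher effective GPA than one who sits low, regardless of how leniently each class was graded nominally.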
Still, UNC has found a way to put the grades into context by augmenting the transcript with the average GPA of the other students who took each particular class, the student's percentile in a given class, and the "schedule point average" = the average GPA of all the students in the classes that a student took. The above link shows a clear picture of somebody with a nominal GPA of 3.6, well above the classmates' average GPA of 3.0, consistently performing above the median (7 grades above the median, 5 at the median, 0 below), vs. somebody who was only able to achieve a GPA of 2.5 in easier classes with an average GPA of 3.2 (1 grade above the median, 3 at the median, 9 below).
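The "schedule point average" is trivial to compute: it is just the mean over everyone's grades in the classes the student enrolled in. A minimal sketch with made-up rosters (my own naming, not UNC's code):

```python
# Hypothetical rosters: class -> {student: grade on the 0-to-4 scale}.
grades = {
    "PHIL101": {"alice": 4.0, "bob": 3.0, "dave": 3.3},
    "MATH231": {"alice": 3.3, "eve": 3.0, "dave": 2.0},
}

def schedule_point_average(student):
    """Average grade of *all* students in the classes this student took,
    a proxy for how leniently graded the student's schedule was."""
    rosters = [r for r in grades.values() if student in r]
    all_grades = [g for r in rosters for g in r.values()]
    return sum(all_grades) / len(all_grades)
```

Comparing a student's nominal GPA against this number is exactly what the augmented transcript lets a reader do at a glance.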
The dramatic timeline (if you know how to read between the lines... I grew up in the Soviet Union and have that unfortunate skill) of UNC's attempts to deal with grade inflation is available here. Some other institutions are likely to use these or similar ideas, including another high-profile public school, Berkeley. (The administrators' claim that the university's computer system cannot handle the additional evaluation method is ridiculous; I could crunch these numbers on my laptop.)