BlueRaja's answer is certainly more complete than this one (and gives good references), but here's a rough overview of linear programming. Suppose that you have a linear function (in high school courses, it's typically a function of two variables) that you want to optimize over a convex "feasible" region whose boundary consists of pieces of lines (again, in high school, the region is typically described using linear inequalities in two variables).
Because the function to optimize is linear, the set of points at which the function takes a particular value, say c, is a line, and the lines for different values of c are all parallel; the function's value is greater than c everywhere on one side of that line and less than c everywhere on the other side. So, you can think about sweeping through this family of constant-value lines, increasing or decreasing the value of the target function as appropriate, to optimize it.
Since the feasible region's boundary consists of line segments, as the constant-value line sweeps through and out of the feasible region, it last touches the feasible region at a vertex (or possibly at all points on an edge connecting two vertices). So, assuming the feasible region is bounded, an optimal solution must occur at a vertex of the feasible region.
Given all that, linear programming comes down to evaluating the target function at all the vertices of the feasible region to find the optimal value.
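To make that concrete, here's a small sketch of the vertex-checking idea in Python (the particular objective 3x + 2y and the constraints are made up for illustration, not taken from anything above): every vertex of the feasible region lies where two constraint boundaries meet, so we intersect each pair of boundary lines, discard infeasible points, and evaluate the objective at what's left.

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c (an illustrative example):
#   x + y <= 4,   x <= 2,   x >= 0 (as -x <= 0),   y >= 0 (as -y <= 0)
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def objective(x, y):
    # Maximize 3x + 2y (made-up target function for the sketch).
    return 3 * x + 2 * y

def intersect(c1, c2):
    # Solve the 2x2 system where both constraint boundaries hold with equality.
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:        # parallel boundary lines: no unique intersection
        return None
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return x, y

def feasible(x, y):
    # A point is a vertex candidate only if it satisfies every constraint.
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

# Check every pair of constraint boundaries; keep the feasible intersections.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(*p)]

best = max(vertices, key=lambda p: objective(*p))
print(best, objective(*best))   # the optimal vertex and the optimal value
```

For this example the feasible vertices are (0, 0), (2, 0), (0, 4), and (2, 2), and the maximum of 3x + 2y over them is 10, attained at (2, 2). Of course, practical solvers like the simplex method don't enumerate every vertex; they walk from vertex to better vertex along edges, but the underlying fact they rely on is the same one described above.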