I taught introductory programming at the college level.
It was a bread-and-butter course, all the faculty did it, and I think we did it quite well.
We followed a common text and had common exams, but we each had our own classroom method that worked.
It's been a long time since then, but occasionally I get to tutor some kid in programming, and the whole picture is about the same.
The way I do it is to start at the bottom, as concrete as possible.
What students know forms a structure.
They already have a lot of concepts.
I am building further concepts on top of those, and pruning off any they may have formed that are counter-productive.
At the same time, I make them learn by doing.
I had built a little computer with an Intel 8008 chip, some EPROM, and a few circuits.
I had programmed it to play a little duet when the I/O chip was connected to a couple speakers.
I would explain how the little program worked: an inner loop counted a counter down to zero, which acted as a delay.
Then the program toggled the output bit and started the countdown again.
It would do that for a while, and then switch to another delay, giving another pitch, and so on.
The memory chip had a little timer, and if I tucked a capacitor lead under one of the timer inputs, the program would run veeeeery slowly.
The class could hear the speakers going click, click, click...
I wanted the class to understand that the computer was doing very simple things one step at a time.
Then I would un-hook the capacitor lead, and the "music" would burst forth. (applause)
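If you're curious what the shape of that program was, here is a rough sketch in Python rather than 8008 code; toggle_speaker() is just a stand-in for flipping the output bit on the I/O chip, and the delay numbers are invented.

    # Rough sketch of the duet program's structure (Python, not 8008 code).
    def toggle_speaker():
        pass  # on the real hardware: flip the output bit on the I/O chip

    def play_note(delay_count, half_cycles):
        # Shorter delay_count -> faster toggling -> higher pitch.
        for _ in range(half_cycles):
            count = delay_count
            while count > 0:       # the inner loop: just count down
                count -= 1         # this burns time and sets the pitch
            toggle_speaker()       # toggle the output bit, then do it again

    # The whole "song" is just a list of (delay, duration) pairs.
    melody = [(120, 400), (100, 450), (90, 500)]   # made-up numbers
    for delay, half_cycles in melody:
        play_note(delay, half_cycles)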
I had also built a simulator for a very simple decimal computer, with 1000 memory locations, each holding a signed 4-digit decimal number.
It had very simple opcodes like "add to accumulator", "jump if negative", and so on.
I would have them write little programs in this "machine language", like adding two numbers, or adding up a list of numbers.
Then they could watch it work by single-stepping, or holding down the Enter key to watch it run "fast".
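Something in the spirit of that simulator fits in a few lines of Python; the opcode numbers and instruction layout here are my own guesses at a plausible scheme, not the exact ones from the course.

    # Sketch of a tiny decimal machine of that sort (details are invented).
    # Each word is a signed 4-digit number; an instruction is read as
    # opcode = word // 1000, address = word % 1000.
    def run(memory, trace=False):
        acc, pc = 0, 0
        while True:
            op, addr = divmod(memory[pc], 1000)
            pc += 1
            if op == 0:    return acc            # HALT
            elif op == 1:  acc = memory[addr]    # LOAD into accumulator
            elif op == 2:  memory[addr] = acc    # STORE accumulator
            elif op == 3:  acc += memory[addr]   # ADD to accumulator
            elif op == 4:  pc = addr             # JUMP
            elif op == 5 and acc < 0:            # JUMP IF NEGATIVE
                pc = addr
            if trace:                            # crude "single-stepping"
                print(f"pc={pc:03d} acc={acc:+05d}")

    # A little program in this "machine language": add cells 90 and 91 into 92.
    mem = [0] * 1000
    mem[0] = 1090    # LOAD  90
    mem[1] = 3091    # ADD   91
    mem[2] = 2092    # STORE 92
    mem[3] = 0       # HALT
    mem[90], mem[91] = 1234, 111
    run(mem)
    print(mem[92])   # 1345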
The point of this was to put in place the concept that computers can only do a very small number of different basic operations, and they do them one at a time.
This is to counter the impression they have that computers are complicated, and that they do everything all at the same time, and read your mind in the bargain.
From there we went on to programming in a "real" language (BASIC :), starting with very simple but interesting programs, working up through conditionals, loops, arrays, files, merging, and so on.
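For example, by the time we reached merging, the exercise boiled down to something like this (shown here in Python for brevity, though the class wrote it in BASIC):

    # Merge two already-sorted lists into one sorted list.
    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:               # take the smaller front item
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:])                  # one list ran out; copy the rest
        out.extend(b[j:])
        return out

    print(merge([1, 4, 9], [2, 3, 10]))    # [1, 2, 3, 4, 9, 10]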
The object was to put in place a sufficient skill set so they could take on a project of their own choosing, because that's the only thing that makes programming interesting - the use to which you can put it.
I would throw out some ideas for projects, and then they would take it from there.
I would ask for written ideas, and then progress reports, to keep them from postponing it to the last minute and then panicking.
I think the projects were the best part, because they were learning under their own power.
That initial grounding in a very concrete understanding of what computers do made it much easier to teach concepts later on that would otherwise be real speed-bumps, like arrays or (in a later course) pointers.
We tend to glorify the concept of "abstraction" as this wonderful thing, but it needs to be built on a concrete foundation, not on air.