It's easy to forget that computer science is a very young field. Young fields are messy. How many names can you think of for the idea of a sequence of objects? How about the idea of mapping keys to values? We've not yet gone through a period of contraction and simplification, reviewing all this new knowledge we've generated, getting rid of the epicycles and hasty complications and teasing out, perhaps, a simpler point of view.
If I ask you to imagine scientific progress, you probably picture activities that generate new knowledge: test tubes and telescopes, raising the bar, heroically expanding the upper bound of human understanding. Smart people doing hard things. Call this Type I progress.
There's a second kind of progress that is not at all concerned with the upper bound of human understanding. It deals with raising the average: teachers and learners, expanding the pool of people who understand what's going on. Call this Type II progress.
Teaching itself comes in at least two forms. The first is broadcasting: transmitting existing knowledge to as many students as possible. But no matter how many students you reach, if that's all you're doing you aren't making as much progress as you could. The internet is a powerful tool for Type I teaching, but it can't help much with Type II. That is why it is not a satisfactory replacement for teachers.
The second type of teaching is a form of compression: making things easier to understand. I don't mean simply eliding details, or making your proofs more terse. I mean reducing the time it takes to explain an idea and its implications.
Computer science is hard. Logic is hard. And that's fine. But if we leave this world as complicated as we found it then we've failed to do our jobs. Think about it this way: if the next generation learns at the same speed as yours, they won't have time to move beyond you. Type II teaching is what enables Type I progress.
Physics went through a period of compression in the middle of the last century. Richard Feynman's reputation wasn't built on discovering new particles or laws of nature, but on discovering better ways to reason about what we already knew. Mathematics has gone through several rebuilding periods. That's why you can pick up a child's math book today and find negative numbers, the square root of two, and many cheerful facts about the square of the hypotenuse. Every one of those mundane ideas was once the hardest problem in the world. My word, people died in arguments over the Pythagorean Theorem. Now we teach it to kids in half an hour. If that's not progress I don't know what is.
So how do we get there in computer science? How can we simplify what we already know so the next crop learns what they need to put our best efforts to shame?
The second form of progress is closely related to the second form of teaching. To my mind, understanding and explaining are just opposite ends of the same process. The only way to prove that you understand something is to explain it to somebody else. Not even using the knowledge is an airtight proof. That's why teachers are always telling you to show your work, to explain step-by-step how you got to the answer.
The most powerful way I know of to understand and explain is through story. Rendering a complex idea into a simple example, analogy, metaphor or allegory simultaneously achieves compression and a way to spread that idea far and wide. Making a good story also forces you to think hard about ways to drive home both the idea and its implications.
For example, the XOR operation is widely used in cryptography and steganography. It has the useful property that, applied twice, it gives back the original input. A xor B creates the encrypted gibberish C by combining the message A with the secret key B; C xor B gives you back A, the original message. This neat fact is rediscovered every day by novice programmers, who then proceed to use it incorrectly. Getting it right involves tricky math, devious thinking, and expert peer review. It's much easier to break an encryption scheme than it is to write one.
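Here is a minimal sketch of that round trip in Python (the helper name `xor_bytes` and the sample messages are my own, for illustration). The second half demonstrates the classic novice mistake: reuse the same key twice and an eavesdropper can cancel it out without ever knowing it.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeat the key to cover the data, then XOR byte by byte.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

message = b"attack at dawn"
secret = b"k3y"

ciphertext = xor_bytes(message, secret)    # A xor B = C
recovered = xor_bytes(ciphertext, secret)  # C xor B = A
assert recovered == message

# The trap: encrypting two messages with the same key.
# XORing the two ciphertexts cancels the key entirely,
# leaving the XOR of the two plaintexts -- no key required.
m1 = b"attack at dawn"
m2 = b"retreat now!!!"
c1 = xor_bytes(m1, secret)
c2 = xor_bytes(m2, secret)
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)
```

The math of the round trip is trivial; everything hard about cryptography is in what this toy leaves out, which is exactly why the neat fact alone is not an encryption scheme.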
Now imagine a lizard named Xor. He's a chameleon. He wants to blend into the background and be invisible. "I'm part dinosaur on my mother's side," says Xor proudly. "My great-aunt is a Steganosaurus. The last time I saw her, I didn't even see her!" The trouble is that Xor is colorblind. He thinks he's hidden but everyone else can actually see him. His friends try and fail to convince him otherwise until the day a bird swoops down and eats him. Poor Xor!
A month from now you will probably remember more about Xor the lizard than about the technical explanation. The story loses detail, but I suggest that detail gets lost anyway. And, maybe, you will be that much more cautious about writing your own code for hiding information.
I've encountered a lot of resistance from people in the field to "reification", locking abstract ideas into concrete analogies. For the most part the criticisms are valid. Reification is essentially a throwback to ancient memory techniques, and the flaw of story is that it can express only so much, and imprecisely. There are lots of complex ideas that cannot be easily analogized. But then, you can't draw a five-dimensional cube either; plenty of useful things still can be.
The power of story is that it tickles the animistic circuitry taking up so much of our brains, and needs little context. A good story can survive millennia. Once when my sister-in-law was teaching the Alouette song to her children, she stopped and wondered what the words meant. You've probably heard it:
Alouette, gentille Alouette, Alouette, je te plumerai...
She looked it up and learned that it describes the process of plucking all the feathers from a bird for cooking. This horrified her for a moment, but children were expected to help with such things and they were taught how with work songs. It wasn't a sick joke. It was dinner. This is the power of story and song: generations after dressing a bird ceased to be a relevant skill, people still sing Alouette in a language they don't even understand. The trick, then, is to pick new payloads.
Great examples of teaching through story can be found in, of all places, Japanese manga comics and anime. One, called Spice & Wolf, follows the adventures of an ambitious traveling peddler and his supernatural companion-slash-love-interest, an unemployed harvest goddess. So far, so weird. But it's actually a course in economics and political history. Each episode explains an economic mechanism and how it affects our plucky heroes: past-posting, price discrimination, triangular trade, barter, letters of credit, tariffs, the intersection of war and politics, and the evolution of trading guilds into international banking.
It's one thing to hear about "information arbitrage". It's another to watch the heroes of Spice & Wolf race to sell their cargo before a wavefront of news reaches the outlying towns, get squeezed by established merchants manipulating exchange rates, or cajole a friend into putting in a good word with the guild. Shows like this are rare, but there are manga that teach cooking, history, and the essentials of Shinto.
Another valid criticism of story is that it sticks around long after it's not needed. Lord knows I've spent hours I'll never get back battling outdated "folklore" about how computers work, often inside my own head. Worse, the demands of compelling drama and memorable story may not map to the bits that are important to learn. Or if they do today, they may not tomorrow. I still catch myself waiting a few seconds after turning off my computer before moving it so as not to damage the hard drive. But it no longer has a hard drive.
I don't have a good answer for this except "tread carefully". Analogize only fundamental ideas, or ideas that are likely to still be relevant in the medium or long term. Be always willing to throw a mental model away. I ask myself, "Is there a good chance this little allegory will be obsolete in ten years? Yes? Delete it, and look deeper." This is also a good way to compress further. Fundamental ideas are more enlightening anyway.
I've been telling you stories here all along. A claim, a principle, an illustrative example, then a summing up. We've covered elements of history, psychology, theories of progress and education, a bit of computer science, and the nature of story itself, all in less than two thousand words. Nothing new was discovered; no Type I progress has been made. Only questions and ideas, compressed and enhanced in a way that, I hope, will live beyond me. If it's possible to do all that here, why not try the same with computer science?
"This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality." -- Danny Hillis, "Richard Feynman and The Connection Machine"