A good EMP burst induces high-voltage pulses in every exposed conductor. Anything susceptible to overvoltage will pop, which is pretty much everything made from semiconductors. It doesn't have to be on, or even plugged in. The real question is how big the EMP is. If it's big enough, pop go all the transistors that read out the data. Most equipment is not hardened against this; it takes a lot of extra parts and design work to harden. Consumer gear is built for low cost, so this kind of issue is simply ignored.
Hard drives are fairly unreliable once you own enough of them. The recent firmware problems turned out to affect many years of drive production. My own RAID 6 server has five drives, and all of them suffer from this firmware bug. They are server-grade drives, the best made by a top vendor. The particular bug is not easy to trigger, but if it is triggered the drive goes offline and won't come back. It can be fixed, but it is easy to lose the data if you're not very careful. Just in my own gear, I've had a couple of hard drives fail in the TiVo, a couple in laptops (not dropped), and a top-of-the-line 10,000 RPM Fibre Channel drive popped just a couple of weeks ago in one of our servers at work. And that's a small fraction of the stuff I see. The price of hard drives and the technology in them has pressed the market very hard, and the margins are thin. It seems to me that many years ago, when hard drives were a lot more expensive, we had fewer failures. But it is hard to compare.
Of course, my personal super-duper RAID 6 SOHO server is offline right now because a fan is going bad. Run long enough, it might cook all five drives. The box is supposed to protect itself against that kind of problem, but I turned it off until I get a replacement fan, just to be sure. Of course, turning it off raises the odds that the firmware bug will surface, because it is triggered by a power cycle when the drive's error log is exactly a certain length.
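If you'd rather not bet on the enclosure's own thermal protection, a watchdog script is cheap insurance. Here's a rough Python sketch that polls SMART temperatures via smartctl and halts the box before the drives cook; the device names and the 50 C ceiling are assumptions you'd tune to your own hardware, not anything from my setup.

```python
#!/usr/bin/env python3
"""Sketch: poll drive temperatures with smartctl and shut down if a
failing fan lets the array overheat. Device list and threshold are
assumptions -- adjust for your own hardware. Needs smartmontools."""

import subprocess
import sys
import time

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumed names
MAX_TEMP_C = 50     # assumed safe ceiling; check your drives' spec sheets
POLL_SECONDS = 60

def drive_temp(dev):
    """Return the raw Temperature_Celsius value (SMART attribute 194), or None."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Temperature_Celsius":
            return int(fields[9])  # RAW_VALUE column of the attribute table
    return None  # drive may report temperature under a different attribute

def main():
    while True:
        for dev in DRIVES:
            temp = drive_temp(dev)
            if temp is not None and temp >= MAX_TEMP_C:
                print(f"{dev} at {temp} C -- halting before the drives cook",
                      file=sys.stderr)
                subprocess.run(["shutdown", "-h", "now"])  # requires root
                return
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```

Crude, but a cron-started watchdog like this fails safe: a dead fan costs you uptime instead of five drives.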
Anyway, suffice it to say that over the last 30-plus years I've seen a lot of folks lose a lot of data on those "reliable" hard drives. The new ones may be more reliable, but it doesn't look that way to me, and when they crash they take a lot more data with them than the old ones did.
Not far from here, in Oakland, CA, there was a big fire a few years ago. The kind of thing you'd expect "cannot happen." Lots of hard drives were melted into slag as people ran from their homes. Some people didn't make it out either, and this was in a modern US city.
The recent events in Japan undoubtedly caused an incredible data loss as well. It is hard to be prepared for everything. If you back up all your important data into the cloud, it is not clear that RAID is really needed. But I have too much data to put it all there, so a local RAID server is the most cost-effective compromise for me. I still put a fair amount in the cloud, and I'm moving more that way as time goes on. One approach to the distribution problem is to find a friend in a remote city to mirror with. RAID 6 servers mirrored (or backed up) across a distance are pretty good protection. That is, as long as a fumble-fingered delete doesn't erase the wrong thing and get faithfully mirrored to the other side. And use different brands of disks, and perhaps even different servers. Still, a software flaw in the mirroring can get ya. If you have a lot of data, you might not notice the problem for years. Then it's "where did those old photos go?"
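One cheap defense against that quiet failure mode is a periodic checksum manifest: hash everything, save the list, and diff it on the next run, so a vanished or silently corrupted file shows up even when the delete was faithfully mirrored. Here's a rough Python sketch of the idea; the paths are placeholders, not anything from a real setup.

```python
#!/usr/bin/env python3
"""Sketch: checksum manifest to catch silent loss on a mirror.
A file that vanishes or changes between runs gets reported, even if
the delete was faithfully replicated. Paths are placeholders."""

import hashlib
import json
import sys
from pathlib import Path

DATA_ROOT = Path("/srv/photos")          # assumed data directory
MANIFEST = Path("/srv/photos.manifest")  # assumed manifest location

def sha256(path):
    """Hash a file in 1 MB chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan():
    """Map each file's relative path to its current hash."""
    return {str(p.relative_to(DATA_ROOT)): sha256(p)
            for p in sorted(DATA_ROOT.rglob("*")) if p.is_file()}

if __name__ == "__main__":
    current = scan()
    if MANIFEST.exists():
        old = json.loads(MANIFEST.read_text())
        missing = sorted(set(old) - set(current))
        changed = sorted(k for k in set(old) & set(current)
                         if old[k] != current[k])
        for name in missing:
            print(f"MISSING: {name}")   # a mirrored delete, or a lost file
        for name in changed:
            print(f"CHANGED: {name}")   # possible corruption / bit rot
        if missing or changed:
            sys.exit(1)  # keep the old manifest as evidence until investigated
    MANIFEST.write_text(json.dumps(current, indent=2))
```

Run it from cron on both ends of the mirror, and a deleted folder of old photos turns up as a MISSING report in days rather than a mystery years later.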
Since this subtle Seagate bug had been in their drives for years without being discovered, it was quite a surprise for everyone. According to experts I know or have read, the full extent of the problem has never been fully acknowledged, though I haven't followed it lately. Fixing the firmware in my drives is not easy; I don't think it can be done while they're in the server, and the flashing process itself runs some risk of bricking a drive.