The most popular difference between a standard HDD and an SSD is also one of the main ones – no moving parts.
Instead of platters spinning at high speed and magnetic heads floating over them to read and write data, an SSD is, in essence, a somewhat more sophisticated USB flash drive.
(Image source: http://www.cnet.com/how-to/digital-storage-basics-part-4-ssd-explained/)
Even though that comparison is an oversimplification, the basic principle is the same.
Of course, the main advantage is that when you drop it (not if – the vast majority of us have dropped a drive at one point or another), the chances that it will be physically damaged badly enough to prevent data recovery are slim to none.
As you can see in the picture above, an SSD comprises a controller, DRAM or NAND flash memory serving as a cache (optional), and DRAM (server solutions) or NAND memory chips (consumer solutions), which actually hold the data.
The controller is a processor whose basic task is to bridge communication between the device using the SSD and the memory chips themselves. It runs microcode (firmware – essentially a mini operating system) that decides how the SSD behaves and which functionalities are available from series to series: reading, writing, ECC, encryption, garbage collection, the algorithm for distributing data across different memory cells, etc.
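One of those firmware tasks – distributing writes across different memory cells – is usually called wear leveling. Here is a minimal, hypothetical sketch of the idea (real firmware is far more elaborate and vendor-specific): track how many times each block has been erased and always direct new writes to the least-worn block.

```python
# Toy wear-leveling sketch (hypothetical; not any vendor's actual firmware).
# The controller keeps an erase counter per block and sends every new
# write to the block with the fewest erases, spreading wear evenly.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks  # erases endured by each block

    def pick_block(self):
        # choose the block with the fewest erases so far
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1  # writing here costs one erase cycle
        return block

wl = WearLeveler(4)
for _ in range(8):       # eight writes across four blocks
    wl.pick_block()
print(wl.erase_counts)   # wear is spread evenly: [2, 2, 2, 2]
```

Without this, repeated writes to the same logical address would exhaust one group of cells long before the rest of the drive.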
The memory chip architecture is organized into blocks, and blocks are divided into pages. Typical page sizes are 512 B, 2,048 B, or 4,096 B. Each page carries an additional ~1/32 of its size as an ECC add-on, whose purpose is to verify the integrity of the data read. For example, a 16 KiB block holds 32 pages of 512 B each (data), plus another 16 B of ECC per page.
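The arithmetic in that example can be checked directly:

```python
# Recomputing the layout example from the text: a 16 KiB block made of
# 512 B pages, each page carrying an extra ~1/32 of its size as ECC.

PAGE_SIZE = 512          # bytes of data per page
BLOCK_SIZE = 16 * 1024   # bytes of data per block (16 KiB)
ECC_RATIO = 32           # ~1/32 of the page size is ECC

pages_per_block = BLOCK_SIZE // PAGE_SIZE   # 32 pages
ecc_per_page = PAGE_SIZE // ECC_RATIO       # 16 bytes of ECC

print(pages_per_block, ecc_per_page)  # 32 16
```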
The first problem with these drives is that memory of this type endures a limited number of reads and, more importantly, writes (erasures also count toward this number). Another problem is the appearance of bad blocks. The usual practice with standard HDDs is to ship more capacity than declared, so that for a time this spare space can replace sectors marked as bad. Manufacturers applied the same reasoning here: the physical SSD capacity is up to 25% larger than the declared product capacity. That “extra” zone or space cannot be accessed by usual means, not even by the OS (although that depends on the SSD manufacturer).
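To put a number on that hidden zone, here is the over-provisioning math with an assumed example capacity (the 240 GB figure is illustrative, not from the text):

```python
# Over-provisioning math: if the raw NAND capacity is up to 25% larger
# than the advertised capacity, a drive sold as 240 GB (assumed example)
# could carry up to 300 GB of physical flash. The hidden 60 GB serves
# as spare area for replacing blocks that go bad.

advertised_gb = 240      # capacity visible to the OS (hypothetical example)
overprovision = 0.25     # up to 25% extra, per the text

raw_gb = advertised_gb * (1 + overprovision)
spare_gb = raw_gb - advertised_gb
print(raw_gb, spare_gb)  # 300.0 60.0
```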
Solid state drives can lose data due to:
- damage of physical or electronic nature
- bad sectors
- firmware corruption
- file corruption by software or operating system
- memory cells degradation due to a large and frequent number of writes
- accidental file deletion
- fire, flood, and other natural disasters
Data recovery from an SSD can be done in two ways. The first, and by far the easier, applies to cases of electronic malfunction and firmware corruption: the damage can be repaired, or special tools can simulate proper firmware operation so that the data becomes accessible.
The other, more complicated by an exponential degree, is to desolder the memory chips one by one, make memory dumps, and start reconstructing the data. It's a long and tedious process, lasting 60-90 days, which also means it is a lot more expensive.
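Part of what makes chip-off reconstruction tedious is that the controller does not store a file contiguously on one chip; it spreads it across all of them. As a toy illustration (assuming a simple round-robin, page-by-page striping, which real controllers complicate considerably), reassembly means re-interleaving every dump in exactly the right order:

```python
# Toy sketch of chip-off reassembly (hypothetical layout). We assume the
# controller striped data page-by-page across the chips in round-robin
# order, so recovery must re-interleave the per-chip dumps.

def reassemble(dumps, page_size):
    """Re-interleave per-chip dumps, assuming round-robin page striping."""
    out = bytearray()
    num_pages = len(dumps[0]) // page_size
    for page in range(num_pages):
        for dump in dumps:  # one page from each chip, in controller order
            out += dump[page * page_size:(page + 1) * page_size]
    return bytes(out)

# Two chips with 4-byte pages; the original data "ABCDEFGHIJKLMNOP"
# was striped so chip 0 holds pages 0 and 2, chip 1 holds pages 1 and 3.
chip0 = b"ABCDIJKL"
chip1 = b"EFGHMNOP"
print(reassemble([chip0, chip1], 4))  # b'ABCDEFGHIJKLMNOP'
```

In practice the striping order, page size, and any scrambling are unknown up front and must be deduced per controller model, which is where the 60-90 days go.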
And one last thing. To sell their products to governments, armies, and multinational companies, manufacturers had to implement very strong data encryption in their products. Almost all new SSDs automatically encrypt all data written to them. In some cases, without the manufacturer's help, it is impossible to recover the data.
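Why does controller-side encryption defeat chip-off recovery? Because the flash chips only ever hold ciphertext. The following is a deliberately toy cipher (a SHA-256-derived XOR keystream, not the AES that real self-encrypting drives use) just to show the principle: with the controller's key the data comes back, without it the dump is garbage.

```python
# Toy illustration only: real SSDs use AES inside the controller, not
# this hash-based XOR keystream. The point is that the chips store only
# ciphertext, so a raw dump without the key is unreadable.

import hashlib

def keystream_xor(key, data):
    """XOR data with a SHA-256-derived keystream (toy cipher, not AES)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

secret = b"user data on the SSD"
stored = keystream_xor(b"controller-key", secret)   # what the chips hold
assert stored != secret                             # ciphertext, not plaintext
assert keystream_xor(b"controller-key", stored) == secret  # key recovers it
assert keystream_xor(b"wrong-key", stored) != secret       # without it: garbage
```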