Delkin Blog

Understanding Embedded Memory Systems


For new engineers, understanding complicated embedded computing systems and memory subsystems can be a challenge. The good news is that these systems are actually easier to understand than they seem on the surface. It is simply necessary to break down the parts of the systems to understand how they work. Understanding how embedded memory systems work makes it possible for engineers and OEMs to choose the right components for their applications.

 

At a basic level, embedded memory consists of volatile or non-volatile hardware components. Volatile storage provides temporary storage space, while non-volatile storage is persistent. Both forms store binary data grouped as bytes, which contain 8 bits of information, double bytes, which contain 16 bits, and quad bytes, which contain 32 bits. The embedded memory system is controlled by the embedded operating system.
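
The byte groupings described above can be sketched in Python (illustrative only, not embedded code): packing the same value as an 8-, 16-, and 32-bit unsigned integer shows the three sizes.

```python
import struct

# Pack the value 42 as a byte (8 bits), a double byte (16 bits),
# and a quad byte (32 bits); the packed length shows the grouping.
value = 42
byte = struct.pack("<B", value)         # 1 byte  = 8 bits
double_byte = struct.pack("<H", value)  # 2 bytes = 16 bits
quad_byte = struct.pack("<I", value)    # 4 bytes = 32 bits

print(len(byte) * 8, len(double_byte) * 8, len(quad_byte) * 8)  # → 8 16 32
```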

 

In addition to the embedded operating system, software also helps to control the memory system. Software triggers the CPU—also called the processor—to send an electrical signal through an address line that identifies where the required data is stored on the memory chip. The electrical pulse travels through the address line to a transistor. There, the pulse can charge a capacitor. The capacitor can store a 1 bit, which is on, or a 0 bit, which is off.
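
The read path described above can be modeled with a toy sketch. The names here are illustrative, not a real hardware interface: an address selects a cell, and the cell's stored charge reads back as a 1 or a 0.

```python
# Toy model of the addressing described above. Each entry in `cells`
# stands in for a capacitor: 1 means charged (on), 0 means discharged (off).

class ToyMemory:
    def __init__(self, size):
        self.cells = [0] * size    # all capacitors start discharged

    def write(self, address, bit):
        self.cells[address] = bit  # charge (1) or discharge (0) the capacitor

    def read(self, address):
        return self.cells[address]

mem = ToyMemory(16)
mem.write(5, 1)
print(mem.read(5), mem.read(6))  # → 1 0
```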

 

Random access memory, or RAM, uses volatile memory. This kind of memory is only accessible while the power is on. In the early days of embedded systems, read-only memory, or ROM, ran applications and the operating system. Both RAM and ROM are addressed in byte-sized units of information.

 

For data retention, non-volatile memory is used. This kind of memory is necessary for storing large quantities of bytes organized into sectors, or underlying groups, particularly in servers and personal computers. For fixed-disk drives, like HDDs, and solid-state drives (SSDs), it has not been necessary for memory to be small in size and to have low power requirements. However, these things are critically important for embedded systems. In embedded systems, space is at a premium, and low power usage is essential. For these systems, non-volatile or persistent storage can be achieved using a removable media source, such as a USB drive, SD card, or CF card, or it may itself be embedded and non-removable. For embedded systems, access to the data in the memory and the stability of the data must be maintained with or without a power source. Because embedded systems have small form factors, physical space is a major constraint on the kind of memory that can be used. Many embedded systems use blocks of 512 bytes or 4 Kbytes (4,096 bytes) for small form factor devices.

 

Volatile Memory 101

The role of volatile memory, or RAM, is reading instructions generated by the CPU. This can lead to operations being performed that cause data to be written back to memory. Although this process involved 8 bits in the past, 32-bit and 64-bit computing are now the norm, thanks to the increased power of modern embedded CPUs.

 

Volatile memory can be found in chips on the same package substrate as the CPU and I/O ports. This arrangement is called a system-on-chip—or SoC—and is often used when board space is limited, such as in small devices like drones.

 

Non-Volatile Memory 101

Rotating disks were once the norm for persistent storage. These days, flash memory has become more common. Flash memory was originally referred to as flash RAM and was invented in 1984 by Dr. Fujio Masuoka. The flash name came from his colleague at Toshiba, Shoji Ariizumi, who thought that erasing data from cells seemed similar to a camera flash.

 

Flash memory is programmed electronically for long-term service. Original forms of flash memory were called EEPROM devices, which stands for electrically erasable programmable read-only memory. In these devices, memory cells are arranged in a grid of columns and rows, with two transistors at each intersection separated by a thin oxide layer. When one transistor is connected to the other, the value of the cell is 1. When they are disconnected, the cell value is 0. Cell values are changed through the Fowler-Nordheim tunneling process. During this process, which is also known as field emission, electrons move through a cell’s oxide barrier to change that cell’s charge. After this process is completed and the power is removed, the cell retains its last setting—either programmed or erased.
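
The programmed and erased states described above can be sketched as follows. This is a conceptual model only: an erased cell reads as 1, programming changes it to 0, and an erase returns it to 1.

```python
# Toy model of a single flash cell's two states. In real flash,
# programming and erasing move charge through the oxide barrier;
# here we only track the resulting stored value.

class ToyFlashCell:
    def __init__(self):
        self.value = 1   # cells start in the erased state, which reads as 1

    def program(self):
        self.value = 0   # programming changes the cell's charge to store 0

    def erase(self):
        self.value = 1   # erasing restores the erased (1) state

cell = ToyFlashCell()
cell.program()
programmed = cell.value  # 0 while programmed
cell.erase()
erased = cell.value      # 1 after erase
print(programmed, erased)  # → 0 1
```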

 

Although many people associate flash memory with consumer devices, this technology has far broader uses. While flash is a useful component of cell phones, digital cameras, and other mobile devices, it can also be used in industrial and military applications. In fact, any time a device has a small form factor but needs permanent data storage, flash is a viable solution. Additionally, flash can be incorporated in the form of non-resident memory, such as a removable USB drive, or it can take a resident form and be attached directly to the board. Because flash does not have any moving parts, it is much more stable than other types of memory.

 

There are both advantages and drawbacks to consider with flash, as there are with all technology. By understanding the pros and cons of flash, you can make the right memory selection for your application. First, here are some of the benefits of choosing flash memory:

 

  • Flash memory can be extremely small, which is an important feature for many devices. Although small in size, flash can still provide a significant amount of storage and top-level reliability.
  • Flash operates with very low power consumption.
  • Flash doesn’t have any rotating media, which helps to enhance the reliability of the device in which it is used. Likewise, it doesn’t have any physical disk head movement or heat issues, which means random I/O performance doesn’t suffer from seek delays.

 

Here is a look at some of the potential challenges of working with flash:

 

  • Flash memory can wear out, and wear is becoming a greater concern as flash memory devices become smaller.
  • Flash technology is continuing to evolve and improve. While these changes may be beneficial in the long run, they can trigger issues in OEM production cycles, such as causing parts to become obsolete.
  • The flash memory chip fabrication process is not the same across the board. As such, there can be differences in the timing, performance, and quality of flash memory devices. This is true even in products with the same part number from the same manufacturer.
  • Flash memory cells can experience bit disturbance. To counteract this issue, it is necessary for flash to have error detection and correction algorithms.
  • A power loss can lead to incomplete writes and erases unless there is an integrated system of protection.
  • If data must be wiped completely clean from a device, having flash memory can leave some lingering questions. There is not a clear way to completely erase data from flash memory.
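
The error detection and correction mentioned above can be illustrated with a toy Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword. This is a simplified sketch; real NAND controllers use much stronger codes such as BCH or LDPC.

```python
# Toy Hamming(7,4) error-correcting code: 4 data bits are protected
# by 3 parity bits, so any single bit flip (a "bit disturbance")
# can be located and corrected.

def hamming_encode(d):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming_encode(data)
corrupted = list(codeword)
corrupted[2] ^= 1                    # simulate a single bit disturbance
print(hamming_decode(corrupted) == data)  # → True
```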

 

Two Types of Flash

There are two main types of flash memory on the market: NOR and NAND. NOR was designed as a replacement for ROM and other forms of non-volatile memory. It allows individual bytes to be read, thanks to a full set of address lines, and it can usually run as quickly as DRAM. As such, NOR lets programs run XIP, or execute in place, directly. The challenge with NOR is that erases happen extremely slowly. If erase operations are not performed regularly, this is not an issue. NOR can be used for OS image storage, system configuration information, and removable CF cards.

 

NAND has a smaller footprint and lower cost with higher capacities than NOR. Unlike NOR, NAND is limited to a serial interface. This means that reads, writes, and erases have to happen in blocks rather than individual bytes. NAND is a good replacement for hard drives, but it can’t stand in for ROM the way NOR can. Despite those potential issues, NAND has become the dominant flash technology and is used in both removable and resident form. For applications that need fast erase operations and high capacities at a low cost, NAND flash memory is a good fit.
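
The block-oriented access described above can be sketched with a toy NAND model that reads and writes whole pages and erases whole blocks, in contrast to NOR's byte-level reads. The sizes here are deliberately tiny and illustrative.

```python
# Toy NAND model: access happens at page granularity, and erases
# happen at block granularity (a block spans several pages).
# Real page sizes are kilobytes; these are shrunk for illustration.

PAGE_SIZE = 4        # bytes per page (illustrative)
PAGES_PER_BLOCK = 2  # pages per erase block (illustrative)

class ToyNand:
    def __init__(self, blocks):
        # Erased flash reads as all 1s, i.e. 0xFF bytes.
        self.data = [[[0xFF] * PAGE_SIZE for _ in range(PAGES_PER_BLOCK)]
                     for _ in range(blocks)]

    def read_page(self, block, page):
        return list(self.data[block][page])

    def write_page(self, block, page, payload):
        assert len(payload) == PAGE_SIZE  # whole pages only, no byte access
        self.data[block][page] = list(payload)

    def erase_block(self, block):
        for page in self.data[block]:     # an erase touches every page
            page[:] = [0xFF] * PAGE_SIZE

nand = ToyNand(blocks=2)
nand.write_page(0, 0, [1, 2, 3, 4])
nand.erase_block(0)                       # wipes the whole block, not one byte
print(nand.read_page(0, 0))  # → [255, 255, 255, 255]
```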

 

One of the most important features of NAND to understand is how information is organized in the memory cells. SLC—or single-level cell—memory keeps one bit of information on each cell. MLC, or multi-level cell, memory stores two bits per cell, while TLC, or triple-level cell, stores three bits of information per cell. With the increased cell capacity, there are some trade-offs. While higher-density cells may make storage solutions more affordable, every increase in cell capacity is associated with a slowdown in performance and an increased risk of bit errors. MLC functions about one-third as fast as SLC and is more prone to errors. TLC is slower than MLC and has an even greater risk of errors. This increased risk of errors means that more complex error-correcting codes are necessary. Generally, the endurance of NAND flash memory is improved if its blocks undergo fewer write operations.
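
The trade-off above comes down to how many distinct voltage states each cell must hold: storing n bits per cell requires 2^n states, which is why density gains cost speed and reliability. A quick sketch:

```python
# Each additional bit per cell doubles the number of voltage states
# the cell must reliably distinguish, narrowing the margin between
# states and raising the bit error rate.

bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3}
states_per_cell = {name: 2 ** bits for name, bits in bits_per_cell.items()}

for name in ("SLC", "MLC", "TLC"):
    print(name, bits_per_cell[name], states_per_cell[name])
```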

 

NAND needs a controller for the commands and data movements that occur between the flash memory and the host computer. Block reads, writes, erases, and other basic operations happen through the controller. In some designs, a discrete controller performs these tasks while driver software manages other flash activities, such as making sure wear happens evenly and managing bad blocks. However, some designers of flash memory chips increase efficiency by building a controller into the NAND flash’s physical package. This is called managed NAND.

 

Although there are efficiencies that can be gained from managed NAND chips, they have traditionally been more costly than non-managed NAND. The prices are coming more in line with each other, however, and an increasing number of engineers are choosing managed NAND in the form of resident managed NAND as an embedded multimedia card, or eMMC, in place of flash and a controller. eMMC offers a good way for some designers to get around the quirks of NAND memory, but it also has a few of its own challenges. For instance, slow read times with random reads are a problem.

 

For designers, there is no simple answer when it comes to the question of which form of flash memory is the best. There are only the best kinds of flash memory for specific systems. Carefully considering how the memory will be used is the most important step in making a selection. With tradeoffs between price, endurance, quality, and capacity all necessary, developers have to prioritize the features that are most important for their applications.

 

Decoding the Interface

For resident flash memory, there is a direct interface between the address bus, the data, and the flash itself. In order to prevent confusion between developers and flash memory manufacturers, there is a common flash memory interface—CFI—that sets an open interface standard. This CFI is used by most flash memory vendors, and it is endorsed by the Joint Electron Device Engineering Council—JEDEC—through their non-volatile memory subcommittee. Because of this, parts from different vendors are very similar and comply with physical, electrical, and common interface standards.
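
The query handshake the CFI standard defines (writing the query command 0x98 to address 0x55, then reading the ASCII string "QRY" at word offsets 0x10 through 0x12) can be simulated with a toy model. This sketches the protocol only, not real hardware access.

```python
# Toy simulation of a CFI query. Per the standard, writing 0x98 to
# address 0x55 puts the chip into query mode, after which the
# identification string "QRY" is readable at offsets 0x10-0x12.

class ToyCfiFlash:
    QUERY_TABLE = {0x10: ord("Q"), 0x11: ord("R"), 0x12: ord("Y")}

    def __init__(self):
        self.query_mode = False

    def write(self, address, value):
        if address == 0x55 and value == 0x98:  # CFI query command
            self.query_mode = True

    def read(self, address):
        if self.query_mode and address in self.QUERY_TABLE:
            return self.QUERY_TABLE[address]
        return 0xFF  # outside the query table, model an erased read

chip = ToyCfiFlash()
chip.write(0x55, 0x98)
ident = "".join(chr(chip.read(a)) for a in (0x10, 0x11, 0x12))
print(ident)  # → QRY
```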

 

These standards are also at play in small embedded systems, although it may be necessary to look carefully through small embedded board datasheets to find out if persistent storage is incorporated into the board. Some boards have both resident storage and space for removable storage, such as a USB port.

 

System Software Operations

For persistent flash memory, system software must drive the operations. A software data storage stack consisting of connected software components is what allows this to happen. The system software separates services so that applications can make requests of the file system, and those requests are translated into block requests, similar to the way this process worked in block-based hard drives. Through the data storage stack, the file system and block drivers can replace components that may sometimes be included as part of the embedded OS, allowing the system software to provide flexible and reliable service.
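
The stack described above can be sketched as a pair of layers: a file system that translates file requests into block requests, and a block driver that services them. All names and sizes here are illustrative.

```python
# Minimal data storage stack sketch: the application talks to the
# file system, the file system splits data into block requests, and
# the block driver services each block against the storage medium.

BLOCK_SIZE = 4  # bytes per block (tiny, for illustration)

class BlockDriver:
    def __init__(self, num_blocks):
        self.blocks = {n: bytes(BLOCK_SIZE) for n in range(num_blocks)}

    def write_block(self, n, payload):
        self.blocks[n] = payload

    def read_block(self, n):
        return self.blocks[n]

class FileSystem:
    def __init__(self, driver):
        self.driver = driver
        self.files = {}     # file name -> list of block numbers
        self.next_free = 0  # naive allocator: hand out blocks in order

    def write_file(self, name, data):
        numbers = []
        for i in range(0, len(data), BLOCK_SIZE):  # translate to block requests
            self.driver.write_block(self.next_free, data[i:i + BLOCK_SIZE])
            numbers.append(self.next_free)
            self.next_free += 1
        self.files[name] = numbers

    def read_file(self, name):
        return b"".join(self.driver.read_block(n) for n in self.files[name])

fs = FileSystem(BlockDriver(num_blocks=8))
fs.write_file("log.txt", b"abcdefgh")      # spans two blocks
print(fs.read_file("log.txt"))  # → b'abcdefgh'
```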

 

Flash memory drivers don’t only read, write, and erase, but also ensure that the performance of the memory is as reliable as possible. To achieve this, wear-leveling algorithms are included as part of flash memory software. These algorithms ensure that wear occurs evenly across the memory device, instead of allowing the same block to be written many times over. Keep in mind that in NAND flash memory, erase operations work on entire blocks—and not just pages—so a whole block must be erased before its pages can be rewritten. The wear-leveling algorithm included in flash memory is a critical part of its performance. Without it, blocks would wear out extremely quickly and the flash memory itself would only perform for a fraction of its expected lifespan.
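
A minimal wear-leveling sketch, assuming the simplest possible policy (always erase the least-worn block): real flash translation layers are far more sophisticated, but the idea is the same.

```python
# Naive wear leveling: each new erase cycle targets the block with
# the lowest erase count, so no single block wears out early.

erase_counts = [0, 0, 0, 0]  # one counter per block

def pick_block():
    """Choose the least-worn block for the next write/erase cycle."""
    return min(range(len(erase_counts)), key=lambda b: erase_counts[b])

for _ in range(8):           # simulate eight erase cycles
    erase_counts[pick_block()] += 1

print(erase_counts)  # → [2, 2, 2, 2]
```

Without the `pick_block` policy (always writing block 0, say), the counts would instead be [8, 0, 0, 0] and block 0 would exhaust its endurance four times faster.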

 

Another form of protective software that is integrated into NAND flash memory is BBM, or bad block management. This software looks for write errors and failures. If the software finds a problem, the block in question can be remapped to a different location. There is typically a space reserved for the remapping of bad blocks.
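
The remapping described above can be sketched in a few lines; the reserved spare area and block numbers here are illustrative.

```python
# Toy bad block management: a failed block is remapped to a spare
# block from a reserved area, and all later accesses are redirected.

SPARE_BLOCKS = [100, 101]  # reserved replacement blocks (illustrative)
remap = {}                 # bad block -> spare block

def resolve(block):
    """Return the physical block to use for a logical block."""
    return remap.get(block, block)

def mark_bad(block):
    """Retire a block that failed a write by assigning it a spare."""
    remap[block] = SPARE_BLOCKS.pop(0)

mark_bad(7)                # pretend block 7 just failed a write
print(resolve(7), resolve(8))  # → 100 8
```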

 

Understanding the Life of Data

Data stored in memory goes through a multi-step process. System startup begins with the boot process. The boot image comes from the persistent memory and is loaded into RAM. This image includes all of the components of the system, such as the drivers and apps. Once the image is loaded into RAM, the system runs from volatile memory, while the data in the flash memory—the non-volatile data—remains available.

 

During normal operations, a series of tasks has to occur. For example, if the device is a sensor-based IoT device, applications utilize persistent data stored in the file system via the flash memory. The tasks may include file opening, file system operations, and reading and writing data. Today, there is a push for data collection to increasingly be performed on local embedded devices, particularly on platforms that require security for connected devices.

 

When the system has to shut down, it is important for it to follow a particular order of operations. The outstanding I/O requests to flash memory must be completed so that the data can be saved to the memory before the power is turned off. Although orderly, powered shutdowns are ideal for preventing system errors, the system should also provide data protection that prevents loss in the event of a power failure.
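
The shutdown ordering described above can be sketched as follows, with hypothetical file names standing in for real pending I/O: every outstanding request is flushed before power is removed.

```python
# Toy orderly shutdown: all outstanding I/O requests are completed
# (flushed to the flash stand-in) before power is removed, so no
# in-flight writes are lost. File names here are illustrative.

pending_writes = [("sensor.log", b"temp=21"), ("state.cfg", b"armed=0")]
flash = {}  # stands in for persistent storage

def shutdown():
    while pending_writes:              # complete every outstanding request...
        name, data = pending_writes.pop(0)
        flash[name] = data
    return "power off"                 # ...only then remove power

print(shutdown(), len(pending_writes), sorted(flash))
```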

 

While navigating the world of modern embedded memory can seem daunting, it is possible to find the right solutions to fit your application’s requirements when you work with a trusted provider. Whether you need embedded memory solutions for commercial grade or industrial grade applications, you can trust the team at Delkin to help you select the best option to fit your needs. We offer a range of rugged, customizable storage solutions. Get in touch with our product team to learn more about our custom storage options.

 
