mempkg
Modelling Arbitrarily Large Memories in VHDL

2003.9.1

Modeling large memories involves a tradeoff. On the one hand, if you model the memory as an array, you statically allocate at least that much memory on the host machine. On the other hand, if you model only part of the memory, leaving the remaining address lines unconnected, you get only a partial simulation of your system. Neither solution provides the flexibility of the create-on-demand memory described below.

The create-on-demand memory works on the following principle: whenever there is a write to the memory, an entry holding both the address and the data is appended to an underlying linked list. Reading is performed by scanning the list until an address match occurs; if no match is found, an X is returned. The advantage is that you allocate only as much computer memory as you actually need.
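The sketch below illustrates this idea in VHDL-93. The actual mempkg.vhd source is not reproduced here, so all of the names (mem_list_sketch, mem_node, mem_ptr, mem_write, mem_read, ADDR_WIDTH, DATA_WIDTH) are illustrative assumptions, not the package's real interface.

library ieee;
use ieee.std_logic_1164.all;

package mem_list_sketch is

  constant ADDR_WIDTH : natural := 20;   -- 1 Meg locations
  constant DATA_WIDTH : natural := 32;

  type mem_node;                          -- incomplete type, needed for the pointer
  type mem_ptr is access mem_node;
  type mem_node is record
    addr : std_logic_vector(ADDR_WIDTH - 1 downto 0);
    data : std_logic_vector(DATA_WIDTH - 1 downto 0);
    nxt  : mem_ptr;
  end record;

  -- Write: update the node if the address is already in the list,
  -- otherwise allocate a new node at the head of the list.
  procedure mem_write (variable head : inout mem_ptr;
                       addr          : in std_logic_vector(ADDR_WIDTH - 1 downto 0);
                       data          : in std_logic_vector(DATA_WIDTH - 1 downto 0));

  -- Read: scan the list; a location that was never written returns all 'X'.
  procedure mem_read (variable head : in mem_ptr;
                      addr          : in std_logic_vector(ADDR_WIDTH - 1 downto 0);
                      data          : out std_logic_vector(DATA_WIDTH - 1 downto 0));

end package mem_list_sketch;

package body mem_list_sketch is

  procedure mem_write (variable head : inout mem_ptr;
                       addr          : in std_logic_vector(ADDR_WIDTH - 1 downto 0);
                       data          : in std_logic_vector(DATA_WIDTH - 1 downto 0)) is
    variable p : mem_ptr := head;
  begin
    while p /= null loop
      if p.addr = addr then              -- written before: update in place
        p.data := data;
        return;
      end if;
      p := p.nxt;
    end loop;
    head := new mem_node'(addr, data, head);   -- first write to this address
  end procedure mem_write;

  procedure mem_read (variable head : in mem_ptr;
                      addr          : in std_logic_vector(ADDR_WIDTH - 1 downto 0);
                      data          : out std_logic_vector(DATA_WIDTH - 1 downto 0)) is
    variable p : mem_ptr := head;
  begin
    while p /= null loop
      if p.addr = addr then
        data := p.data;
        return;
      end if;
      p := p.nxt;
    end loop;
    data := (others => 'X');             -- never written
  end procedure mem_read;

end package body mem_list_sketch;

Note one deliberate deviation from the description above: in this sketch, a write to an address that is already in the list updates the existing node instead of appending a duplicate, so repeated writes to the same location do not grow the list.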

This method is not without its disadvantages, though. If the simulation really does touch the entire memory, the linked list is likely to consume more computer memory than a simple array. Further, as the list grows, the wall-clock time needed to search it grows with it. The search-time problem can be reduced by storing the entries in sorted order and using a binary search, or by using a hashing function, as sketched below.
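As one example of the hashing approach, the address can be mapped to a bucket so that each lookup scans only one short per-bucket list. This is a hedged sketch building on the illustrative mem_list_sketch package above; the bucket count and the hash (the low-order address bits) are arbitrary choices for the example, not anything prescribed by mempkg.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use work.mem_list_sketch.all;   -- mem_ptr and ADDR_WIDTH from the sketch above

package mem_hash_sketch is

  -- 256 short lists instead of one long one.
  type bucket_array is array (0 to 255) of mem_ptr;

  -- Cheap hash: just the low-order eight address bits.
  function bucket_of (addr : std_logic_vector(ADDR_WIDTH - 1 downto 0))
    return natural;

end package mem_hash_sketch;

package body mem_hash_sketch is

  function bucket_of (addr : std_logic_vector(ADDR_WIDTH - 1 downto 0))
    return natural is
  begin
    return to_integer(unsigned(addr(7 downto 0)));
  end function bucket_of;

  -- A model would then keep "variable buckets : bucket_array;" and call, e.g.,
  --   mem_read(buckets(bucket_of(addr)), addr, d);
  -- so every lookup scans only the entries that hash to the same bucket.

end package body mem_hash_sketch;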

Even with this, the memory consumption problem does not go away. A partial solution is possible if you know beforehand that neither the address nor the data will exceed 32 bits. If so, you can store both as integers, which is cheaper than storing std_logic_vectors. With MVL9, each bit of a std_logic_vector needs at least 4 bits of host memory, so even in an efficient simulator a 32-bit std_logic_vector occupies 16 bytes. Stored as an integer, the same 32-bit value needs only 4 bytes. Adding 4 bytes for the address (also an integer) and 4 bytes for the pointer to the next element gives 12 bytes per list node, compared with 16 bytes per array element. This is certainly cheaper, but of course there is the computational overhead of converting between std_logic_vector and integer on every read and write.
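A sketch of such an integer-packed node and its conversion helpers is shown below; the names are again illustrative, not mempkg's. One caveat worth stating: VHDL's integer type is only guaranteed to reach 2**31 - 1, so an unsigned value stored this way must in practice fit in 31 bits.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package mem_int_sketch is

  -- One list node with everything packed into host integers:
  -- roughly 4 bytes each for address, data and pointer on a 32-bit host.
  type int_node;
  type int_ptr is access int_node;
  type int_node is record
    addr : integer;
    data : integer;
    nxt  : int_ptr;
  end record;

  -- Conversion paid on every write ...
  function slv_to_int (v : std_logic_vector) return integer;
  -- ... and on every read.
  function int_to_slv (v : integer; width : natural) return std_logic_vector;

end package mem_int_sketch;

package body mem_int_sketch is

  function slv_to_int (v : std_logic_vector) return integer is
  begin
    -- Caveat: VHDL's integer is only guaranteed up to 2**31 - 1, so an
    -- unsigned value stored this way must in practice fit in 31 bits.
    return to_integer(unsigned(v));
  end function slv_to_int;

  function int_to_slv (v : integer; width : natural) return std_logic_vector is
  begin
    return std_logic_vector(to_unsigned(v, width));
  end function int_to_slv;

end package body mem_int_sketch;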

You can choose to make your own tradeoffs when you make models using linked lists. The good news is that these tradeoffs can result in a faster, more resource-efficient model.

Whichever way you choose, you will need to build a linked list in VHDL in order to model the memory without the penalties described above.

'mempkg.vhd' is a package written in VHDL-93. A VHDL memory model, 1 Meg locations x 32 bits wide, that uses mempkg is also provided. A test bench and the associated vectors file are included in the ready-to-use 'tar' bundle. The memory package defines the linked list and the write and read functions for the memory; you can build whatever memory you want around it. The package does not restrict you to modelling an SRAM, DRAM, FIFO, or any other particular kind of memory element.
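For illustration, wrapping such a list in a simple memory model might look roughly like the following. This is built on the illustrative sketch package above, not on mempkg's real entry points, whose names and parameter lists may differ; the synchronous-write, asynchronous-read behaviour is an arbitrary choice for the example.

library ieee;
use ieee.std_logic_1164.all;
use work.mem_list_sketch.all;

entity sram_model is
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  std_logic_vector(ADDR_WIDTH - 1 downto 0);
    din  : in  std_logic_vector(DATA_WIDTH - 1 downto 0);
    dout : out std_logic_vector(DATA_WIDTH - 1 downto 0)
  );
end entity sram_model;

architecture behavioural of sram_model is
begin

  -- Access types may only live in variables, so the list head is a process
  -- variable; this process owns the entire storage for the model.
  mem_proc : process (clk, addr)
    variable head : mem_ptr := null;
    variable d    : std_logic_vector(DATA_WIDTH - 1 downto 0);
  begin
    if rising_edge(clk) and we = '1' then
      mem_write(head, addr, din);          -- allocate or update a node
    end if;
    mem_read(head, addr, d);               -- 'X's for locations never written
    dout <= d;
  end process mem_proc;

end architecture behavioural;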

Details on compiling, running, and using mempkg are available in the README file provided.

Source: http://www.comit.com/


File: mempkg.tar (28,672 bytes)
