U.S. patent application number 15/944191 was published by the patent office on 2018-08-09 as publication number 20180225198 for information processing device, non-transitory computer readable recording medium, and information processing system.
The applicant listed for this patent is Toshiba Memory Corporation. Invention is credited to Daisuke Hashimoto, Shinichi Kanno.
United States Patent Application 20180225198
Kind Code: A1
Kanno; Shinichi; et al.
August 9, 2018
INFORMATION PROCESSING DEVICE, NON-TRANSITORY COMPUTER READABLE
RECORDING MEDIUM, AND INFORMATION PROCESSING SYSTEM
Abstract
According to one embodiment, an information processing device
includes a nonvolatile memory, an assignment unit, and a
transmission unit. The assignment unit assigns logical address
spaces to spaces. Each of the spaces is assigned to at least one
write management area included in the nonvolatile memory. The write
management area is a unit of an area which manages the number of
writes. The transmission unit transmits a command for the
nonvolatile memory and identification data of a space assigned to a
logical address space corresponding to the command.
Inventors: Kanno; Shinichi (Tokyo, JP); Hashimoto; Daisuke
(Musashino, Tokyo, JP)
Applicant: Toshiba Memory Corporation, Tokyo, JP
Family ID: 56111298
Appl. No.: 15/944191
Filed: April 3, 2018
Related U.S. Patent Documents

Application Number    Filing Date     Patent Number
14656413              Mar 12, 2015    9977734
15944191
62090690              Dec 11, 2014
Current U.S. Class: 1/1
Current CPC Class: G06F 2212/7207 20130101; G06F 2212/7202 20130101;
G06F 12/0246 20130101; G06F 2212/7205 20130101; G06F 2212/7201
20130101
International Class: G06F 12/02 20060101 G06F012/02
Claims
1. A memory system comprising: a nonvolatile memory comprising
first and second namespaces; and a controller configured to control
the nonvolatile memory, and communicate with an external processing
device, wherein the first namespace corresponds to a first logical
address space of a first program executed in the external
processing device, and a second logical address space of a second
program executed in the external processing device, the second
namespace corresponds to a third logical address space of a third
program executed in the external processing device, and a fourth
logical address space of a fourth program executed in the external
processing device, and the controller translates a first logical
address in the first or second logical address space to a first
physical address indicating a first location in the first
namespace, and translates a second logical address in the third or
fourth logical address space to a second physical address
indicating a second location in the second namespace.
2. The memory system of claim 1, wherein the first logical address
space has a first characteristic feature which is different from a
second characteristic feature of the second logical address space,
and the third logical address space has a third characteristic
feature which is different from a fourth characteristic feature of
the fourth logical address space.
3. The memory system of claim 2, wherein the first to fourth
characteristic features are first to fourth write frequency
features about the first to fourth logical address spaces.
4. The memory system of claim 1, wherein the controller receives a
write command, identification data of a namespace assigned to a
logical address space corresponding to the write command, write
data, and a third logical address of the write data, from the
external processing device, translates the third logical address to
a third physical address of a location in the namespace indicated
by the identification data, and writes the write data to the
location in the namespace.
5. The memory system of claim 1, wherein the controller receives a
read command, identification data of a namespace assigned to a
logical address space corresponding to the read command, and a
logical address of read data, from the external processing
device, translates the logical address to a physical
address of a location in the namespace indicated by the
identification data, reads the read data from the location in the
namespace, and transmits the read data to the external processing
device.
6. The memory system of claim 1, wherein the controller comprises a
address translation table associating each of the first and second
logical addresses with each of the first and second physical
addresses, and management data associating each of erase unit areas
included in the nonvolatile memory with the first or second
namespace.
7. The memory system of claim 6, wherein the nonvolatile memory is
a NAND flash memory, and the erase unit areas are physical
blocks.
8. The memory system of claim 6, wherein the controller receives a
configuration command, and generates the management data based on
the configuration command.
9. The memory system of claim 6, wherein the controller observes a
first data storage condition of the first namespace and a second
data storage condition of the second namespace, and allocates each
of the erase unit areas to the first or second namespace based on
the first and second data storage conditions.
10. The memory system of claim 9, wherein the controller allocates
each of the erase unit areas to the first or second namespace such
that the first data storage condition is on the same level as the
second data storage condition.
11. The memory system of claim 9, wherein the first data storage
condition includes a data capacity, an access frequency, a write
frequency, the number of accesses, the number of writes, or a data
storage ratio about the first namespace, and the second data
storage condition includes a data capacity, an access frequency, a
write frequency, the number of accesses, the number of writes, or
a data storage ratio about the second namespace.
12. The memory system of claim 1, wherein the controller executes
garbage collection in each of the first and second namespaces.
13. The memory system of claim 12, wherein the controller changes
an empty erase unit area from the first namespace to the second
namespace after the garbage collection in the first namespace, to
execute wear leveling between the first namespace and the second
namespace.
14. The memory system of claim 12, wherein the nonvolatile memory
comprises each of first and second provisioning areas corresponding
to each of the first and second namespaces.
15. The memory system of claim 12, wherein the controller comprises
first and second buffer memories corresponding to the first and
second namespaces.
16. A memory system comprising: a nonvolatile memory comprising
first and second namespaces; and a controller configured to control
the nonvolatile memory, and communicate with a processing device,
wherein the first namespace corresponds to a first logical address
space of a first program executed in the processing device, and a
second logical address space of a second program executed in the
processing device, the second namespace corresponds to a third
logical address space of a third program executed in the processing
device, and the controller translates a first logical address in
the first or second logical address space to a first physical
address indicating a first location in the first namespace, and
translates a second logical address in the third logical address
space to a second physical address indicating a second location in
the second namespace.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of application Ser. No.
14/656,413, filed Mar. 12, 2015 and claims the benefit of U.S.
Provisional Application No. 62/090,690, filed Dec. 11, 2014, the
entire contents of which are incorporated herein by reference.
FIELD
[0002] Embodiments described herein relate generally to an
information processing device, a non-transitory computer readable
recording medium, and an information processing system.
BACKGROUND
[0003] A solid state drive (SSD) includes a nonvolatile
semiconductor memory and has an interface which is similar to that
of a hard disk drive (HDD). For example, at the time of data
writing, the SSD receives a write command, logical block addressing
(LBA) of a writing destination, and write data from an information
processing device, translates the LBA into physical block
addressing (PBA) based on a lookup table (LUT), and writes the
write data to a position indicated by the PBA.
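The write path described above can be sketched in a few lines. This
is a minimal, hypothetical model (names and geometry are invented,
not from the application): the LUT maps each LBA to a PBA, and,
because flash pages are not overwritten in place, every write goes
to a fresh page and remaps the LBA.

```python
class SimpleSSD:
    """Toy model of LUT-based LBA-to-PBA translation on the write path."""

    def __init__(self, num_pages):
        self.lut = {}                    # LBA -> PBA (the lookup table)
        self.media = [None] * num_pages  # physical pages
        self.next_free = 0               # naive append-only allocator

    def write(self, lba, data):
        pba = self.next_free             # flash pages are never overwritten;
        self.next_free += 1              # a fresh page is allocated instead
        self.media[pba] = data
        self.lut[lba] = pba              # remap the LBA to the new PBA
        return pba

    def read(self, lba):
        return self.media[self.lut[lba]]
```

Rewriting the same LBA leaves the old page behind as invalid data,
which is what garbage collection (discussed later) reclaims.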
BRIEF DESCRIPTION OF THE DRAWING
[0004] FIG. 1 is a block diagram showing an example of a structure
of an information processing system according to a first
embodiment;
[0005] FIG. 2 is a block diagram showing an example of a
relationship between LBA spaces, namespaces, address translation
tables, garbage collection units, and management data;
[0006] FIG. 3 is a flowchart showing an example of a process
performed by a reception unit and a configuration unit according to
the first embodiment;
[0007] FIG. 4 is a flow chart showing an example of a process
performed by a garbage collection unit and an address translation
unit according to the first embodiment;
[0008] FIG. 5 is a block diagram showing an example of an
allocating state of namespaces according to the first
embodiment;
[0009] FIG. 6 is a flow chart showing an example of a process
executed by the information processing device according to the
first embodiment;
[0010] FIG. 7 is a block diagram showing an example of a structure
of an information processing system of a second embodiment;
[0011] FIG. 8 is a data structural diagram showing an example of a
translation table according to the second embodiment;
[0012] FIG. 9 is a flowchart showing an example of a write process
of a memory system according to the second embodiment;
[0013] FIG. 10 is a flowchart showing an example of a read process
of the memory system of the second embodiment;
[0014] FIG. 11 is a block diagram showing an example of a structure
of an information processing system according to a third
embodiment; and
[0015] FIG. 12 is a perspective view showing a storage system
according to the third embodiment.
DETAILED DESCRIPTION
[0016] In general, according to one embodiment, an information
processing device includes an assignment unit and a transmission
unit. The assignment unit assigns logical address spaces to spaces.
Each of the spaces is assigned to at least one write management
area of write management areas included in a nonvolatile memory.
The write management area is a unit of an area which manages the
number of writes. The transmission unit transmits a command for the
nonvolatile memory and identification data of a space assigned to a
logical address space corresponding to the command.
[0017] Embodiments will be described hereinafter with reference to
drawings. In the following description, the same reference numerals
denote components having nearly the same functions and
arrangements, and a repetitive description thereof will be given if
necessary. In the following embodiments, access means both data
reading and data writing.
First Embodiment
[0018] FIG. 1 is a block diagram showing an example of a structure
of an information processing system according to the present
embodiment.
[0019] An information processing system 1 includes an information
processing device 2 and a memory system 3. The information
processing system 1 may include a plurality of information
processing devices 2. A case where the information processing system
1 includes a plurality of information processing devices 2 is
explained later in the second embodiment.
(Explanation of the Memory System 3)
[0020] The memory system 3 is, for example, an SSD, and includes a
controller 4 and a nonvolatile memory 5. The memory system 3 may be
included in the information processing device 2, or the
information processing device 2 and the memory system 3 may be
communicably connected through a network.
[0021] In the present embodiment, at least one NAND flash memory is
used as the nonvolatile memory 5. However, the present embodiment
can be applied to various nonvolatile memories including a
plurality of write management areas, and such various nonvolatile
memories may be, for example, a NOR flash memory, magnetoresistive
random access memory (MRAM), phase change random access memory
(PRAM), resistive random access memory (ReRAM), and ferroelectric
random access memory (FeRAM). Here, the write management area is an
area of a unit which manages the number of writes. The nonvolatile
memory 5 may include a three dimensional memory.
[0022] For example, the nonvolatile memory 5 includes a plurality
of blocks (physical blocks). The plurality of blocks include a
plurality of memory cells arranged at crossing points of word lines
and bit lines. In the nonvolatile memory 5, data are erased at once
block by block. That is, a block is an area of a unit of data
erase. Data write and data read are performed page by page (word
line by word line) in each block. That is, a page is an area of a
unit of data write or an area of a unit of data read.
[0023] In the present embodiment, the number of writes is managed
block by block.
[0024] The information processing device 2 is a host device of the
memory system 3. The information processing device 2 sends a
configuration command C1 which associates the blocks of the
nonvolatile memory 5 with a space including at least one block to
the memory system 3.
[0025] In the following description, the space will be explained as
a namespace.
[0026] Furthermore, the information processing device 2 sends a
write command C2 together with namespace identification data (NSID)
6, LBA 7 which indicates a writing destination, data size 8 of the
write data, and write data 9 to the memory system 3.
[0027] In the present embodiment, a plurality of namespaces
NS.sub.0 to NS.sub.M (M is an integer which is 1 or more) are
spaces obtained by dividing a plurality of blocks B.sub.0 to
B.sub.N (N is an integer which is M or more) included in the
nonvolatile memory 5. In the present embodiment, the namespace
NS.sub.0 includes the blocks B.sub.0 to B.sub.2, and the namespace
NS.sub.M includes the blocks B.sub.N-2 to B.sub.N. The other
namespaces NS.sub.1 to NS.sub.M-1 are the same as the namespaces
NS.sub.0 and NS.sub.M. Note that the assignment relationship
between the namespaces NS.sub.0 to NS.sub.M and the blocks B.sub.0
to B.sub.N is an example, and the number of the blocks to be
assigned to a single namespace can be arbitrarily changed. The
number of blocks may be different between namespaces.
[0028] The controller 4 includes a memory unit 10, buffer memories
F.sub.0 to F.sub.M, and a processor 11.
[0029] The memory unit 10 stores address translation tables T.sub.0
to T.sub.M corresponding to their respective namespaces NS.sub.0 to
NS.sub.M. For example, the memory unit 10 may be used as a work
memory. The memory unit 10 may be a volatile memory such as dynamic
random access memory (DRAM) or static random access memory (SRAM),
or may be a nonvolatile memory. The memory unit 10 may be a
combination of a volatile memory and a nonvolatile memory.
[0030] Each of the address translation tables T.sub.0 to T.sub.M is
data associating an LBA with a PBA based on data writes to the
corresponding one of the namespaces NS.sub.0 to NS.sub.M, and may
be an LUT, for example. Note that part or all of the address
translation tables T.sub.0 to T.sub.M may be stored in a different
memory such as the memory 12.
[0031] Each of the buffer memories F.sub.0 to F.sub.M stores write
data for the corresponding one of the namespaces NS.sub.0 to
NS.sub.M until the amount of data becomes suitable for writing.
[0032] The processor 11 includes a memory 12, reception unit 13,
configuration unit 14, address translation unit 15, write unit 16,
and garbage collection unit G.sub.0 to G.sub.M.
[0033] The memory 12 stores a program 17 and management data 18. In
the present embodiment, the memory 12 is included in the processor
11; however, it may be provided outside the processor 11. The
memory 12 is, for example, a nonvolatile memory. Note that a part
of or the whole program 17 and management data 18 may be stored in
a different memory such as the memory unit 10.
[0034] The program 17 is, for example, firmware. The processor 11
executes the program 17 to function as the reception unit 13,
configuration unit 14, address translation unit 15, write unit 16,
and garbage collection units G.sub.0 to G.sub.M.
[0035] The management data 18 indicates a relationship between the
namespaces NS.sub.0 to NS.sub.M and the blocks B.sub.0 to B.sub.N.
By referring to the management data 18, it can be determined which
block belongs to which namespace.
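One plausible shape for the management data 18 is a mapping from
each namespace to the blocks it owns; "which block is in which
namespace" is then a reverse lookup. The NSID strings and block
numbers below are illustrative, not from the application.

```python
# Hypothetical management data 18: namespace -> list of block numbers.
management_data = {
    "NS0": [0, 1, 2],  # e.g. NS0 holds blocks B0 to B2, as in the embodiment
    "NS1": [3, 4, 5],
}

def namespace_of(management_data, block):
    """Return the NSID owning `block`, or None if unassigned."""
    for nsid, blocks in management_data.items():
        if block in blocks:
            return nsid
    return None
```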
[0036] The reception unit 13 receives, from the information
processing device 2, the configuration command C1 to associate each
block with each namespace in the nonvolatile memory 5. Furthermore,
the reception unit 13 receives, from the information processing
device 2, the write command C2, NSID 6, LBA 7, data size 8, and
data 9.
[0037] In the following description, a case where the write command
C2 is accompanied by the NSID 6 which represents the namespace
NS.sub.0 is explained for the sake of simplicity. However, the
write command C2 may be accompanied by an NSID which represents any
of the other namespaces NS.sub.1 to NS.sub.M.
[0038] When the reception unit 13 receives the configuration
command C1 of the namespace, the configuration unit 14 assigns the
blocks B.sub.0 to B.sub.N to the namespaces NS.sub.0 to NS.sub.M to
generate the management data 18 and stores the management data 18
in the memory 12. The assignment of the blocks B.sub.0 to B.sub.N
to the namespaces NS.sub.0 to NS.sub.M may be performed by the
configuration unit 14 observing data storage conditions of the
namespaces NS.sub.0 to NS.sub.M in such a manner that the data
capacities, access frequencies, write frequencies, the numbers of
accesses, the numbers of writes, or data storage ratios are set to
the same level between the namespaces NS.sub.0 to NS.sub.M. Or, the
assignment may be performed based on an instruction from the
information processing device 2, or an instruction from the manager
of the memory system 3.
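The application leaves the assignment policy of paragraph [0038]
open. One simple policy that keeps the data capacities "on the same
level" is to deal blocks out round-robin, as in this hypothetical
sketch:

```python
def assign_blocks(num_blocks, namespaces):
    """Round-robin assignment: each namespace ends up with block
    counts differing by at most one (one way to equalize capacity)."""
    mapping = {ns: [] for ns in namespaces}
    for b in range(num_blocks):
        mapping[namespaces[b % len(namespaces)]].append(b)
    return mapping
```

A real configuration unit would instead weigh the observed access
and write frequencies, but the resulting management data has the
same shape.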
[0039] The data capacity here is a writable data size, the access
frequency or the write frequency is the number of accesses or the
number of writes per unit time, and the data storage ratio is the
ratio of the size of the area in which data is already stored to
the total area size.
[0040] Furthermore, based on the result of the garbage collection
executed for each of the namespaces NS.sub.0 to NS.sub.M, the
configuration unit 14 transfers an empty block in which no data is
stored from the namespace on which the garbage collection was
executed to another namespace, and updates the management data 18.
Thus, wear leveling can be performed between the namespaces
NS.sub.0 to NS.sub.M. The assignment change between the namespaces
NS.sub.0 to NS.sub.M and the blocks B.sub.0 to B.sub.N may be
performed by the configuration unit 14 based on an observation of
the data storage conditions of the namespaces NS.sub.0 to NS.sub.M,
as at the time of generation of the management data 18. Or, the
assignment change may be performed based on an instruction from the
information processing device 2 or an instruction from the manager
of the memory system 3. For example, an empty block is transferred
from a namespace with a lower data capacity, access frequency,
number of accesses, or data storage ratio to a namespace with a
higher data capacity, access frequency, number of accesses, or data
storage ratio.
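The block transfer of paragraph [0040] can be sketched as follows.
This is a hypothetical illustration: the `usage` metric (here an
arbitrary per-namespace number such as a write count) and all data
shapes are assumed, not specified by the application.

```python
def transfer_empty_block(management_data, usage, empty_blocks):
    """Move one empty block from the least-used namespace to the
    most-used one, updating the management data in place."""
    src = min(usage, key=usage.get)   # e.g. lowest write frequency
    dst = max(usage, key=usage.get)   # e.g. highest write frequency
    for b in management_data[src]:
        if b in empty_blocks:
            management_data[src].remove(b)
            management_data[dst].append(b)
            return b                  # the transferred block
    return None                       # no empty block available to move
```

Repeating this after each garbage collection spreads erase wear
across namespaces, which is the wear-leveling effect the paragraph
describes.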
[0041] Furthermore, the configuration unit 14 sets provisioning
areas P.sub.0 to P.sub.M which are not normally used for each of
the namespaces NS.sub.0 to NS.sub.M in the nonvolatile memory 5
based on the configuration command C1 for over provisioning. The
setting of the provisioning areas P.sub.0 to P.sub.M may be
performed by the configuration unit 14 based on the data capacity
of each of the namespaces NS.sub.0 to NS.sub.M. Or, the setting may
be performed based on an instruction from the information
processing device 2, or an instruction from the manager of the
memory system 3.
[0042] In the present embodiment, the provisioning areas P.sub.0 to
P.sub.M are secured in the nonvolatile memory 5; however, they may
be secured in any other memory in the memory system 3. For example,
the provisioning areas P.sub.0 to P.sub.M may be secured in a
memory such as DRAM or SRAM.
[0043] When the reception unit 13 receives the write command C2,
the address translation unit 15 translates the LBA 7 attached to
the write command C2 into a PBA, using the address translation
table T.sub.0 corresponding to the namespace NS.sub.0 indicated by
the NSID 6 attached to the write command C2, and records the
association in that table.
[0044] In the present embodiment, the address translation unit 15
is achieved by the processor 11; however, the address translation
unit 15 may be structured separately from the processor 11.
[0045] Furthermore, the address translation unit 15 performs the
address translation based on the address translation tables T.sub.0
to T.sub.M; however, the address translation may be performed by a
key-value type retrieval. For example, the LBA is set as a key and
the PBA is set as a value for achieving the address translation by
key-value type retrieval.
[0046] The write unit 16 writes the write data 9 in a position
indicated by the PBA obtained from the address translation unit 15.
In the present embodiment, the write unit 16 stores the write data
9 in the buffer memory F.sub.0 corresponding to the namespace
NS.sub.0 indicated by the NSID 6 with the write command C2. Then,
the write unit 16 writes the data of the buffer memory F.sub.0 to a
position indicated by the PBA when the buffer memory F.sub.0
reaches the data amount suitable for the namespace NS.sub.0.
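The buffering in paragraph [0046] can be modeled as below. This is
a toy sketch of one buffer memory F.sub.0: the flush size and the
`flushed` list (which stands in for writes to the nonvolatile
memory) are assumptions for illustration.

```python
class NamespaceBuffer:
    """Per-namespace write buffer: data accumulates until it reaches
    flush_size, then is written out in one batch."""

    def __init__(self, flush_size):
        self.flush_size = flush_size
        self.pending = []
        self.flushed = []  # stands in for batched writes to the flash

    def write(self, data):
        self.pending.append(data)
        if sum(len(d) for d in self.pending) >= self.flush_size:
            self.flushed.append(b"".join(self.pending))
            self.pending = []
```

Batching writes this way lets each namespace flush at a granularity
suited to its own workload, which is why the embodiment gives each
namespace its own buffer.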
[0047] The garbage collection units G.sub.0 to G.sub.M correspond
to the namespaces NS.sub.0 to NS.sub.M and execute the garbage
collection in each of the namespaces NS.sub.0 to NS.sub.M. The
garbage collection is a process to release an unnecessary memory
area or a process to secure a continuous available memory area by
collecting data written in a memory area with gaps. The garbage
collection units G.sub.0 to G.sub.M may be configured to execute
garbage collections in parallel, or consecutively.
[0048] The garbage collection is explained in detail using the
garbage collection unit G.sub.0 as a representative example of the
garbage collection units G.sub.0 to G.sub.M. The garbage collection
unit G.sub.0 first selects the blocks B.sub.0 to B.sub.2
corresponding to the namespace NS.sub.0 based on the management
data 18. Then, the garbage collection unit G.sub.0 performs the
garbage collection with respect to the selected blocks B.sub.0 to
B.sub.2. Then, based on a result of the garbage collection
performed by the garbage collection unit G.sub.0, the address
translation unit 15 updates the address translation table
T.sub.0.
[0049] Note that, in the present embodiment, the LBA and the PBA
are associated with each other in the address translation tables
T.sub.0 to T.sub.M, and the block identifiable by the PBA and the
NSID are associated with each other in the management data 18.
Therefore, when each LBA is unique (not shared with any other
namespace) and the management data 18 has been generated, the
namespace NS.sub.0 which is the writing destination can be
specified on the processor 11 side from the LBA 7 attached to the
write command C2. Therefore, after the generation of the management
data 18, attaching the NSID 6 to the write command C2 can be
omitted, and the NSID 6 may be acquired on the processor 11 side
based on the LBA 7, the address translation tables T.sub.0 to
T.sub.M, and the management data 18.
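The controller-side lookup of paragraph [0049] chains the two data
structures: find the table containing the LBA, convert its PBA to a
block number, then find the block in the management data. The
page-per-block geometry and all values below are invented for
illustration.

```python
PAGES_PER_BLOCK = 4  # assumed geometry: 4 pages per physical block

def nsid_from_lba(tables, management_data, lba):
    """Recover the NSID from an LBA, assuming LBAs are globally
    unique across namespaces (the condition in paragraph [0049])."""
    for table in tables.values():
        if lba in table:
            block = table[lba] // PAGES_PER_BLOCK  # PBA -> block number
            for nsid, blocks in management_data.items():
                if block in blocks:
                    return nsid
    return None
```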
[0050] FIG. 2 is a block diagram showing an example of a
relationship between LBA spaces, the namespaces NS.sub.0 to
NS.sub.M, the address translation tables T.sub.0 to T.sub.M, the
garbage collection units G.sub.0 to G.sub.M, and the management
data 18.
[0051] LBA spaces A.sub.0 to A.sub.M of the information processing
device 2 are assigned to the namespaces NS.sub.0 to NS.sub.M,
respectively.
[0052] The LBA space A.sub.0 includes logical addresses 0 to
E.sub.0. The LBA space A.sub.1 includes logical addresses 0 to
E.sub.1. The LBA space A.sub.M includes logical addresses 0 to
E.sub.M. The other LBA spaces A.sub.2 to A.sub.M-1 include a
plurality of logical addresses similarly.
[0053] In the following description, the LBA space A.sub.0 and the
namespace NS.sub.0 assigned to the LBA space A.sub.0 are explained
representatively for the sake of simplicity. However, the other
LBA spaces A.sub.1 to A.sub.M and namespaces NS.sub.1 to NS.sub.M
are structured in the same way.
[0054] When writing the data of the LBA space A.sub.0 to the
nonvolatile memory 5, the information processing device 2 sends the
write command C2, NSID 6 indicating the namespace NS.sub.0
corresponding to the LBA space A.sub.0, LBA 7 within LBA space
A.sub.0, data size 8, and write data 9 corresponding to the LBA 7
to the memory system 3.
[0055] The management data 18 associates the namespace NS.sub.0
with the blocks B.sub.0 to B.sub.2.
[0056] The garbage collection unit G.sub.0 performs the garbage
collection with respect to the blocks B.sub.0 to B.sub.2 included
in the namespace NS.sub.0 corresponding to the garbage collection
unit G.sub.0 based on the management data 18.
[0057] As a result of the garbage collection, the data arrangement
will be changed within the blocks B.sub.0 to B.sub.2. Therefore,
the garbage collection unit G.sub.0 instructs the address
translation unit 15 (omitted from FIG. 2) to update the address
translation table T.sub.0. The address translation unit 15 updates
the address translation table T.sub.0 corresponding to the
namespace NS.sub.0 to match the data arrangement after the garbage
collection.
[0058] FIG. 3 is a flowchart showing an example of a process
performed by the reception unit 13 and the configuration unit 14
according to the present embodiment.
[0059] In step S301, the reception unit 13 receives the
configuration command C1 of the namespaces NS.sub.0 to
NS.sub.M.
[0060] In step S302, the configuration unit 14 assigns the blocks
B.sub.0 to B.sub.N of the nonvolatile memory 5 to the namespaces
NS.sub.0 to NS.sub.M and generates the management data 18.
[0061] In step S303, the configuration unit 14 stores the
management data 18 in the memory 12.
[0062] FIG. 4 is a flow chart showing an example of a process
performed by the garbage collection unit G.sub.0 and the address
translation unit 15 according to the present embodiment. Note that
the same process is executed in the other garbage collection units
G.sub.1 to G.sub.M. The process shown in FIG. 4 may be performed
based on an instruction from the information processing device 2,
for example. Or, the process may be performed based on an
instruction from the manager of the memory system 3. Furthermore,
the garbage collection unit G.sub.0 may execute the process of FIG.
4 voluntarily by, for example, observing the data storage condition
of the namespace NS.sub.0 of the garbage collection target and
determining the start of the garbage collection appropriately. More
specifically, the garbage collection unit G.sub.0 executes the
garbage collection with respect to the namespace NS.sub.0 when the
number of empty blocks within the namespace NS.sub.0 is a
predetermined number or less, or when a ratio of empty blocks to
the whole blocks within the namespace NS.sub.0 is a predetermined
value or less.
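The two start conditions described above reduce to a simple
predicate. The threshold values below are illustrative; the
application only says "a predetermined number" and "a predetermined
value".

```python
def should_start_gc(num_empty, num_total, min_empty=2, min_ratio=0.1):
    """Start garbage collection when empty blocks fall to an
    absolute count, or to a ratio of the namespace's blocks."""
    return num_empty <= min_empty or num_empty / num_total <= min_ratio
```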
[0063] In step S401, the garbage collection unit G.sub.0 selects
the blocks B.sub.0 to B.sub.2 corresponding to the namespace
NS.sub.0 which is the garbage collection target based on the
management data 18.
[0064] In step S402, the garbage collection unit G.sub.0 executes
the garbage collection with respect to the blocks B.sub.0 to
B.sub.2 within the selected namespace NS.sub.0.
[0065] In step S403, the address translation unit 15 updates the
address translation table T.sub.0 corresponding to the namespace
NS.sub.0 which is the garbage collection target based on the
conditions of the blocks B.sub.0 to B.sub.2 after the garbage
collection.
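Steps S401 to S403 can be condensed into one compaction routine:
select the pages whose blocks belong to the target namespace, copy
the still-valid ones into a free block, and remap the translation
table. This is a hypothetical sketch; the page/block geometry and
data shapes are invented.

```python
def garbage_collect(table, valid_pbas, ns_blocks, free_block,
                    pages_per_block=4):
    """Compact valid pages of the namespace's blocks (S401/S402)
    into `free_block` and update the translation table (S403)."""
    dst = free_block * pages_per_block
    for lba, pba in sorted(table.items()):
        if pba // pages_per_block in ns_blocks and pba in valid_pbas:
            table[lba] = dst  # the page now lives in the free block
            dst += 1
    return table
```

After this, the namespace's original blocks hold no valid pages and
can be erased, or handed to another namespace for wear leveling as
in paragraph [0040].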
[0066] In the present embodiment explained above, a predetermined
number of blocks, or a number of blocks set by the information
processing device 2, can be assigned to each of the namespaces
NS.sub.0 to NS.sub.M, the data corresponding to the namespaces
NS.sub.0 to NS.sub.M can be written to the blocks B.sub.0 to
B.sub.N assigned to the namespaces NS.sub.0 to NS.sub.M, and
different data amounts can be set for the namespaces NS.sub.0 to
NS.sub.M.
[0067] In the present embodiment, the garbage collection can be
performed in each of the namespaces NS.sub.0 to NS.sub.M
independently and efficiently.
[0068] In the present embodiment, as a result of the garbage
collection, an empty block which does not store data can be
transferred from the namespace on which the garbage collection was
executed to another namespace, and an empty block can thereby be
secured within the other namespace. Therefore, the namespace to
which a block is assigned can be changed, wear leveling can be
performed between the namespaces NS.sub.0 to NS.sub.M, and the life
of the nonvolatile memory 5 can be prolonged.
[0069] In the present embodiment, the provisioning areas P.sub.0 to
P.sub.M having different data amounts can be set in each of the
namespaces NS.sub.0 to NS.sub.M, and the over provisioning can be
achieved in each of the namespaces NS.sub.0 to NS.sub.M. Thus, the
write speed can be accelerated and performance can be maintained,
and consequently, the reliability can be improved.
[0070] In the present embodiment, the address translation tables
T.sub.0 to T.sub.M are managed for each of the namespaces NS.sub.0
to NS.sub.M, and the address translation and changing of the
relationship between the LBA and PBA can be performed efficiently
in each of the namespaces NS.sub.0 to NS.sub.M.
[0071] In the present embodiment, if the address translation is
performed by the key-value type retrieval, the address translation
can be performed efficiently even if the data volume of the
nonvolatile memory 5 is large.
[0072] In the present embodiment, highly sophisticated memory
management can be achieved in each of the namespaces NS.sub.0 to
NS.sub.M, the life of the nonvolatile memory 5 can be prolonged,
the production costs can be reduced, and write/read processes
to/from the nonvolatile memory 5 divided by the namespaces NS.sub.0
to NS.sub.M can be rapid.
(Explanation of Information Processing Device 2)
[0073] The information processing device 2 includes a memory 21 and
a processor 22.
[0074] In the information processing device 2, various programs
including application programs and operating systems (hereinafter
referred to as objects) and various kinds of data can be identified
by object identification data (hereinafter referred to as object
IDs).
[0075] The information processing device 2 assigns LBA spaces
A.sub.0 to A.sub.M to objects, and manages LBA spaces A.sub.0 to
A.sub.M assigned to each of the objects identified by their
respective object IDs.
[0076] The memory 21 is, for example, a nonvolatile memory for
storing a program 23.
[0077] The processor 22 executes the program 23 stored in the
memory 21 to function as a frequency calculation unit 24, an
assignment unit 25 and a transmission unit 26.
[0078] The frequency calculation unit 24 calculates the write
frequency for each of the LBA spaces corresponding to each of the
objects.
[0079] The assignment unit 25 assigns the LBA spaces A.sub.0 to
A.sub.M to the namespaces NS.sub.0 to NS.sub.M based on the write
frequency for each of the LBA spaces corresponding to each of the
objects.
[0080] The transmission unit 26 generates the configuration command
C1 based on an assignment result of the assignment unit 25, and
transmits the configuration command C1 to the memory system 3.
[0081] The transmission unit 26 furthermore generates the write
command C2 based on the assignment result of the assignment unit
25, and transmits the write command C2, the NSID 6 which represents
the namespace NS.sub.0 assigned to an LBA space of an object
transmitting the write command C2, the data size 8 and the write
data 9 to the memory system 3.
[0082] FIG. 5 is a block diagram showing an example of an
allocating state of namespaces according to the present
embodiment.
[0083] FIG. 5 illustrates a state in which the LBA spaces of the
objects are assigned to four namespaces NS.sub.0 to NS.sub.3.
However, the number of the namespaces may be two or more.
[0084] The LBA space of each object is assigned to any one of the
write frequency groups based on its write frequency. FIG. 5
illustrates a case in which six write frequency groups L.sub.0 to
L.sub.5 are used. Write frequency groups L.sub.0 to L.sub.5 are
successively designated as follows from high to low in write
frequencies: the write frequency group L.sub.0 is designated
"Extremely Hot"; the write frequency group L.sub.1, "Hot"; the
write frequency group L.sub.2, "Warm"; the write frequency group
L.sub.3, "Cool"; the write frequency group L.sub.4, "Cold"; and the
write frequency group L.sub.5, "Extremely Cold".
[0085] The write frequency groups L.sub.0 to L.sub.5 are assigned
to the namespaces NS.sub.0 to NS.sub.3 based on both their
respective characteristic features and their respective object
IDs.
[0086] For example, the write frequency groups L.sub.0 to L.sub.5
are assigned to the namespaces NS.sub.0 to NS.sub.3 in such a
manner that write frequency groups having different characteristic
features are included in the same namespace.
[0087] For example, the write frequency group L.sub.0 is extremely
high in write frequency, which means that a large number of extra
regions must be secured for it. Therefore, the write frequency group
L.sub.0 is assigned to all of the namespaces NS.sub.0 to NS.sub.3.
This arrangement makes it possible to use the extra regions
efficiently.
[0088] For example, the write frequency group L.sub.5, which is
extremely low in write frequency, is dividedly assigned to the
namespaces NS.sub.2 and NS.sub.3. The divisional assignment of the
write frequency group L.sub.5 to the namespaces NS.sub.2 and
NS.sub.3 improves efficiency of the garbage collection.
[0089] For example, the write frequency group L.sub.1, which is
high in write frequency, is assigned to the namespace NS.sub.0 or
NS.sub.3 on an object basis, and the LBA space of one object is
assigned to only one namespace (NS.sub.0 or NS.sub.3), so that
garbage collection is executed independently on an object basis,
such as on a user application basis. Therefore, when garbage
collection is executed for a certain user application, the
performance of the other user applications is prevented from
deteriorating.
[0090] FIG. 6 is a flowchart showing an example of a process
executed by the information processing device 2 according to the
present embodiment.
[0091] In Step S601, the frequency calculation unit 24 calculates
the write frequency for each of the LBA spaces corresponding to
each of the objects.
[0092] In Step S602, the assignment unit 25 assigns the LBA space
for each of the objects to any one of write frequency groups
L.sub.0 to L.sub.5 based on the write frequency corresponding to
the LBA spaces for each of the objects.
[0093] In Step S603, the assignment unit 25 assigns each of the LBA
spaces to at least one of namespaces NS.sub.0 to NS.sub.M based on
each object ID and write frequency groups L.sub.0 to L.sub.5.
[0094] In Step S604, the transmission unit 26 transmits the write
command C2, the NSID 6 which represents the assignment result of
the namespaces NS.sub.0 to NS.sub.M, the LBA 7, the data size 8 and
the write data 9 to the memory system 3.
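The four steps of FIG. 6 can be sketched as plain functions. This is a minimal illustration under assumed names and an assumed ranking policy; the embodiment does not fix how groups map to namespaces beyond the examples of FIG. 5.

```python
from typing import Dict, Tuple

def calculate_frequencies(write_counts: Dict[str, int],
                          interval_s: float) -> Dict[str, float]:
    """S601: write frequency (writes per second) of each object's LBA space."""
    return {obj: n / interval_s for obj, n in write_counts.items()}

def assign_to_namespaces(freqs: Dict[str, float],
                         n_namespaces: int) -> Dict[str, int]:
    """S602/S603: rank objects by write frequency and spread them round-robin
    over the namespaces (an illustrative policy, not the embodiment's)."""
    ranked = sorted(freqs, key=freqs.get, reverse=True)
    return {obj: i % n_namespaces for i, obj in enumerate(ranked)}

def build_write_command(nsid: int, lba: int,
                        data: bytes) -> Tuple[str, int, int, int, bytes]:
    """S604: the tuple sent to the memory system 3: write command C2,
    NSID 6, LBA 7, data size 8, and write data 9."""
    return ("C2", nsid, lba, len(data), data)
```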
[0095] In the present embodiment explained above, the LBA spaces
A.sub.0 to A.sub.M for each of the objects are assigned to the
namespaces NS.sub.0 to NS.sub.M based on the write frequency for
each of the LBA spaces corresponding to each of the objects.
Therefore, the memory system 3 can be improved in service quality
and device performance, prolonged in lifetime, and configured
appropriately.
[0096] For example, the present embodiment makes it possible to
adjust the write frequency and the number of writes for each of the
namespaces NS.sub.0 to NS.sub.3 by assigning LBA spaces having
different characteristic features to one namespace.
[0097] For example, in the present embodiment, an LBA space being
extremely high in write frequency can be assigned to a plurality of
namespaces, which makes it possible to secure extra regions
plentifully.
[0098] For example, an LBA space being extremely low in write
frequency can be assigned to a plurality of namespaces in the
present embodiment, which makes it possible to make garbage
collection efficient.
[0099] For example, in the present embodiment, an LBA space of an
object being high in write frequency is assigned to one of the
namespaces. Therefore, when the garbage collection is executed for
a certain user application, the performance of the other user
application is prevented from deteriorating.
[0100] In the present embodiment, both the assignment of the LBA
spaces A.sub.0 to A.sub.M and the assignment of the namespaces
NS.sub.0 to NS.sub.M are executed based on write frequencies.
Instead, however, other information such as a combination of the
numbers of writes, write frequencies and read frequencies may be
used to execute both the assignment of the LBA spaces A.sub.0 to
A.sub.M and the assignment of the namespaces NS.sub.0 to
NS.sub.M.
[0101] Furthermore, the assignment unit 25 may assign the LBA spaces
A.sub.0 to A.sub.M to the namespaces NS.sub.0 to NS.sub.M based on
user settings.
[0102] In the present embodiment, a compaction unit of each of the
namespaces NS.sub.0 to NS.sub.M may be provided instead of or
together with garbage collection units G.sub.0 to G.sub.M. The
compaction unit corresponding to each of namespaces NS.sub.0 to
NS.sub.M executes compaction with respect to each of the namespaces
NS.sub.0 to NS.sub.M based on the management data 18.
[0103] In the present embodiment, the communication of
configuration command C1 between, for example, the information
processing device 2 and the memory system 3 may be omitted. For
example, the address translation unit 15 may include a part or the
whole of the functions of the configuration unit 14. For example, the
address translation unit 15 may generate the management data 18 and
address translation tables T.sub.0 to T.sub.M of the namespaces NS.sub.0
to NS.sub.M by associating the NSID 6 and LBA 7 added to the write
command C2 with the PBA corresponding to the LBA 7. The management
data 18 and the address translation tables T.sub.0 to T.sub.M may
be coupled or divided arbitrarily. The structure in which the
communication of the configuration command C1 is omitted and the
address translation unit 15 includes a part or the whole of the
functions of the configuration unit 14 is explained in detail in
the second embodiment below.
Second Embodiment
[0104] In the present embodiment, an information processing system
in which a memory system writes write data received from a plurality
of information processing devices and sends read data back to the
information processing devices is explained.
[0105] FIG. 7 is a block diagram showing an example of a structure
of an information processing system of the present embodiment.
[0106] An information processing system 1A includes a plurality of
information processing devices D.sub.0 to D.sub.M and a memory
system 3A. Each of the information processing devices D.sub.0 to
D.sub.M functions similarly to the information processing device 2.
The memory system 3A differs from the above memory system 3 mainly
in that it includes a translation table (translation data) 20
instead of the address translation tables T.sub.0 to T.sub.M and the
management data 18, it transmits/receives data, information,
signals, and commands to/from the information processing devices
D.sub.0 to D.sub.M, and the address translation unit 15 functions
as the configuration unit 14. In the present embodiment,
differences from the first embodiment are explained, and the same
explanation or substantially the same explanation may be omitted or
simplified.
[0107] The memory system 3A is included in, for example, a cloud
computing system. In the present embodiment, a case where the
memory system 3A is shared by the information processing devices
D.sub.0 to D.sub.M is exemplified; however, it may instead be shared
by a plurality of users. At least one of the information processing
devices D.sub.0 to D.sub.M may be a virtual machine.
[0108] In the present embodiment, NSID added to a command is used
as an access key to namespaces.
[0109] In the present embodiment, the information processing
devices D.sub.0 to D.sub.M have access rights to their
corresponding namespaces NS.sub.0 to NS.sub.M. However, only a
single information processing device may have access rights to one
or more namespaces, or a plurality of information processing
devices may have an access right to a common namespace.
[0110] Each of the information processing devices D.sub.0 to
D.sub.M transfers, together with the write command C2, an NSID 6W
indicative of its corresponding write destination space, LBA 7W
indicative of the write destination, data size 8, and write data 9W
to the memory system 3A.
[0111] Each of the information processing devices D.sub.0 to
D.sub.M transfers, together with a read command C3, an NSID 6R
indicative of its corresponding read destination space, and LBA 7R
indicative of the read destination to the memory system 3A.
[0112] Each of the information processing devices D.sub.0 to
D.sub.M receives read data 9R corresponding to the read command C3
or information indicative of a read error from the memory system
3A.
[0113] The memory system 3A includes a controller 4A and the
nonvolatile memory 5.
[0114] The controller 4A includes an interface unit 19, a memory
unit 10, buffer memories F.sub.0 to F.sub.M, and a processor 11. In
the present embodiment, the controller 4A may include one or more
processors 11.
[0115] The interface unit 19 transmits/receives data, information,
signals, and commands to/from external devices such as the
information processing devices D.sub.0 to D.sub.M.
[0116] The memory unit 10 stores a translation table 20. A part of
or the whole translation table 20 may be stored in a different
memory such as the memory 12.
[0117] The translation table 20 is data which associates the LBA,
PBA, and NSID with each other. The translation table 20 is
explained later with reference to FIG. 8.
[0118] The buffer memories F.sub.0 to F.sub.M are used as write
buffer memories and read buffer memories for the namespaces
NS.sub.0 to NS.sub.M.
[0119] The processor 11 includes the memory 12 storing the program
17, reception unit 13, address translation unit 15, write unit 16,
read unit 21, and garbage collection units G.sub.0 to G.sub.M. When
the program 17 is executed, the processor 11 functions as the
reception unit 13, address translation unit 15, write unit 16, read
unit 21, and garbage collection units G.sub.0 to G.sub.M.
[0120] The reception unit 13 receives, at the time of data write,
the write command C2, NSID 6W, LBA 7W, data size 8, and write data
9W from the information processing devices D.sub.0 to D.sub.M
through the interface unit 19.
[0121] The reception unit 13 receives, at the time of data read,
the read command C3, NSID 6R, and LBA 7R from the information
processing devices D.sub.0 to D.sub.M through the interface unit
19.
[0122] When the reception unit 13 receives the write command C2,
based on the LBA 7W and NSID 6W added to the write command C2, the
address translation unit 15 determines the PBA of the write
destination in the namespace indicated by the NSID 6W. The address
translation unit 15 then updates the translation table 20 by
associating the LBA 7W, NSID 6W, and the determined PBA with each
other.
[0123] When the read command C3 is received by the reception unit
13, based on the LBA 7R and NSID 6R added to the read command C3,
and the translation table 20, the address translation unit 15
determines the PBA of the read destination indicated by the NSID
6R.
[0124] The write unit 16 writes the write data 9W at a position
indicated by the PBA corresponding to the namespace indicated by
the NSID 6W via a buffer memory corresponding to the namespace
indicated by the NSID 6W.
[0125] The read unit 21 reads the read data 9R from the position
indicated by the PBA corresponding to the namespace indicated by
NSID 6R via the buffer memory corresponding to the namespace
indicated by NSID 6R. Then, the read unit 21 sends the read data 9R
to the information processing device issuing the read command C3
via the interface unit 19.
[0126] In the present embodiment, the garbage collection units
G.sub.0 to G.sub.M execute garbage collection of each of the
namespaces NS.sub.0 to NS.sub.M based on the translation table
20.
[0127] FIG. 8 is a data structural diagram showing an example of
the translation table 20 according to the present embodiment.
[0128] The translation table 20 associates the LBA, PBA, and NSID
with each other.
[0129] For example, the translation table 20 associates the LBA 200,
PBA 300, and NS.sub.0 with each other. For example, the translation
table 20 associates the LBA 201, PBA 301, and NS.sub.0 with each
other. For example, the translation table 20 associates the LBA 200,
PBA 399, and NS.sub.M with each other.
[0130] The address translation unit 15 determines the PBA such that
the PBA 300 associated with the LBA 200 and the NSID indicative of
the namespace NS.sub.0 and PBA 399 associated with the LBA 200 and
the NSID indicative of the namespace NS.sub.M differ from each
other.
[0131] Thus, the address translation unit 15 can select PBA 300
when the NSID received with the LBA 200 indicates the namespace
NS.sub.0 and select PBA 399 when the NSID received with the LBA 200
indicates the namespace NS.sub.M.
[0132] Therefore, even if the same logical address is used by a
plurality of the information processing devices D.sub.0 to D.sub.M,
the memory system 3A can be shared by the information processing
devices D.sub.0 to D.sub.M.
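A minimal sketch of this keying scheme, using the example values of FIG. 8 (the dictionary representation is an assumption; a table or list layout works equally well):

```python
# Translation table 20 keyed by the pair (NSID, LBA): the same LBA received
# from different information processing devices resolves to different PBAs.
translation_table = {
    ("NS0", 200): 300,   # LBA 200 in namespace NS0 -> PBA 300
    ("NS0", 201): 301,   # LBA 201 in namespace NS0 -> PBA 301
    ("NSM", 200): 399,   # the same LBA 200 in namespace NSM -> PBA 399
}

def translate(nsid: str, lba: int) -> int:
    """Resolve (NSID, LBA) to a PBA, as the address translation unit 15
    does when a read command C3 arrives."""
    return translation_table[(nsid, lba)]
```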
[0133] FIG. 9 is a flowchart showing an example of a write process
of the memory system 3A according to the present embodiment.
[0134] FIG. 9 is explained on the assumption that the write command
C2 is issued from the information processing device D.sub.0 amongst
the information processing devices D.sub.0 to D.sub.M, and that the
NSID 6W which indicates the namespace NS.sub.0 is added to the write
command C2. However, the process is performed similarly when the
write command C2 is issued from any of the information processing
devices D.sub.1 to D.sub.M. Furthermore, the process is performed
similarly when the NSID 6W which indicates any of the other
namespaces NS.sub.1 to NS.sub.M is added to the write command C2.
[0135] In step S901, the reception unit 13 receives the write
command C2, NSID 6W, LBA 7W, data size 8, and write data 9W from
the information processing device D.sub.0 via the interface unit
19.
[0136] In step S902, when the write command C2 is received by the
reception unit 13, based on the LBA 7W and NSID 6W added to the
write command C2, the address translation unit 15 determines the
PBA of a write destination in the namespace NS.sub.0 indicated by
the NSID 6W.
[0137] In step S903, the address translation unit 15 updates the
translation table 20 by associating the LBA 7W, NSID 6W, and the
determined PBA with each other.
[0138] In step S904, the write unit 16 writes the write data 9W at
a position indicated by the PBA corresponding to the namespace
NS.sub.0 indicated by the NSID 6W via the buffer memory F.sub.0
corresponding to the namespace NS.sub.0 indicated by the NSID
6W.
[0139] FIG. 10 is a flowchart showing an example of a read process
of the memory system 3A according to the present embodiment.
[0140] FIG. 10 is explained on the assumption that the read command
C3 is issued from the information processing device D.sub.M amongst
the information processing devices D.sub.0 to D.sub.M, and that the
NSID 6R which indicates the namespace NS.sub.M is added to the read
command C3. However, the process is performed similarly when the
read command C3 is issued from any of the information processing
devices D.sub.0 to D.sub.M-1. Furthermore, the process is performed
similarly when the NSID 6R which indicates any of the other
namespaces NS.sub.0 to NS.sub.M-1 is added to the read command C3.
[0141] In step S1001, the reception unit 13 receives the read
command C3, NSID 6R, and LBA 7R from the information processing
device D.sub.M via the interface unit 19.
[0142] In step S1002, when the read command C3 is received by the
reception unit 13, based on the LBA 7R and NSID 6R added to the
read command C3, and translation table 20, the address translation
unit 15 determines the PBA of a read destination.
[0143] In step S1003, the read unit 21 reads the read data 9R from
the position indicated by the PBA corresponding to the namespace
NS.sub.M indicated by the NSID 6R via the buffer memory F.sub.M
corresponding to the namespace NS.sub.M indicated by the NSID 6R,
and sends the read data 9R to the information processing device
D.sub.M issuing the read command C3 via the interface unit 19.
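The controller-side flows of FIGS. 9 and 10 can be condensed into the following sketch. PBA allocation is simulated by a simple counter, and buffering through F.sub.0 to F.sub.M and the NAND access itself are elided; the class and attribute names are assumptions, not the embodiment's implementation.

```python
class MemorySystemSketch:
    """Condensed model of the write path (S901-S904) and read path
    (S1001-S1003) of the memory system 3A."""

    def __init__(self) -> None:
        self.table = {}    # translation table 20: (nsid, lba) -> pba
        self.flash = {}    # stands in for the nonvolatile memory 5
        self.next_pba = 0  # hypothetical free-PBA allocator

    def write(self, nsid: str, lba: int, data: bytes) -> int:
        pba = self.next_pba            # S902: determine the write destination
        self.next_pba += 1
        self.table[(nsid, lba)] = pba  # S903: update the translation table
        self.flash[pba] = data         # S904: write (buffering elided)
        return pba

    def read(self, nsid: str, lba: int) -> bytes:
        pba = self.table[(nsid, lba)]  # S1002: address translation
        return self.flash[pba]         # S1003: read and return data 9R
```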
[0144] In the present embodiment described above, the nonvolatile
memory 5 is divided into a plurality of the namespaces NS.sub.0 to
NS.sub.M. The information processing devices D.sub.0 to D.sub.M can
access the namespaces whose access rights are granted thereto.
Consequently, data security can be improved.
[0145] The controller 4A of the memory system 3A controls the
namespaces NS.sub.0 to NS.sub.M independently space by space.
Therefore, the conditions of use of each of the namespaces NS.sub.0
to NS.sub.M may differ.
[0146] The memory system 3A associates the LBA, PBA, and NSID with
each other, and thus, even if the same LBA sent from a plurality of
independent information processing devices is received, the data
can be distinguished based on the NSID.
[0147] In each of the above embodiments, data in a table format can
be implemented as a different data structure such as a list
format.
Third Embodiment
[0148] In the present embodiment, the information processing
systems 1 and 1A explained in the first and second embodiments are
further explained in detail.
[0149] FIG. 11 is a block diagram showing an example of a detailed
structure of the information processing system 1 according to the
present embodiment.
[0150] In FIG. 11, the information processing system 1B includes an
information processing device 2B and a memory system 3B. The
information processing system 1B may include a plurality of
information processing devices as in the second embodiment. That
is, the information processing device 2 and the information
processing devices D.sub.0 to D.sub.M of the first and second
embodiments correspond to the information processing device 2B.
[0151] The memory systems 3 and 3A according to the first and
second embodiments correspond to the memory system 3B.
[0152] The processor 11 of the first and second embodiments
corresponds to the CPUs 43F and 43B.
[0153] The address translation tables T.sub.0 to T.sub.M according
to the first embodiment and the translation table 20 of the second
embodiment correspond to a LUT 45.
[0154] The memory unit 10 of the first and second embodiments
corresponds to a DRAM 47.
[0155] The interface unit 19 according to the second embodiment
corresponds to a host interface 41 and a host interface controller
42.
[0156] The buffer memories F.sub.0 to F.sub.M of the first and
second embodiments correspond to a write buffer WB and read buffer
RB.
[0157] The information processing device 2B functions as a host
device.
[0158] The controller 4 includes a front end 4F and a back end
4B.
[0159] The front end (host communication unit) 4F includes a host
interface 41, host interface controller 42, encode/decode unit 44,
and CPU 43F.
[0160] The host interface 41 communicates with the information
processing device 2B to exchange requests (write command, read
command, erase command), LBA, and data.
[0161] The host interface controller (control unit) 42 controls the
communication of the host interface 41 based on the control of the
CPU 43F.
[0162] The encode/decode unit (advanced encryption standard (AES))
44 encodes the write data (plaintext) transmitted from the host
interface controller 42 in a data write operation. The
encode/decode unit 44 decodes encoded read data transmitted from
the read buffer RB of the back end 4B in a data read operation.
Note that the transmission of the write data and read data can be
performed without using the encode/decode unit 44 as occasion
demands.
[0163] The CPU 43F controls the above components 41, 42, and 44 of
the front end 4F to control the whole function of the front end
4F.
[0164] The back end (memory communication unit) 4B includes a write
buffer WB, read buffer RB, LUT unit 45, DDRC 46, DRAM 47, DMAC 48,
ECC 49, randomizer RZ, NANDC 50, and CPU 43B.
[0165] The write buffer (write data transfer unit) WB temporarily
stores the write data transmitted from the information processing
device 2B. Specifically, the write buffer WB temporarily stores the
write data until it reaches a predetermined data size suitable for
the nonvolatile memory 5.
[0166] The read buffer (read data transfer unit) RB temporarily
stores the read data read from the nonvolatile memory 5.
Specifically, the read buffer RB rearranges the read data into the
order suitable for the information processing device 2B (the order
of the logical addresses LBA designated by the information
processing device 2B).
[0167] The LUT 45 is data used to translate the logical address LBA
into the corresponding physical address PBA.
[0168] The DDRC 46 controls double data rate (DDR) in the DRAM
47.
[0169] The DRAM 47 is a volatile memory which stores, for example,
the LUT 45.
[0170] The direct memory access controller (DMAC) 48 transfers the
write data and the read data through an internal bus IB. In FIG.
11, only a single DMAC 48 is shown; however, the controller 4 may
include two or more DMACs 48. The DMAC 48 may be set in various
positions inside the controller 4.
[0171] The ECC (error correction unit) 49 adds an error correction
code (ECC) to the write data transmitted from the write buffer WB.
When the read data is transmitted to the read buffer RB, the ECC
49, if necessary, corrects the read data read from the nonvolatile
memory 5 using the added ECC.
[0172] The randomizer RZ (or scrambler) disperses the write data in
such a manner that the write data are not biased toward a certain
page or a word line direction of the nonvolatile memory 5 in the
data write operation. By dispersing the write data in this manner,
the number of writes can be equalized and the life of the memory
cells MC of the nonvolatile memory 5 can be prolonged. Therefore,
the reliability of the nonvolatile memory 5 can be improved.
Furthermore, the read data read from the nonvolatile memory 5 also
passes through the randomizer RZ in the data read operation.
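A common way to realize such a randomizer is to XOR the data with a pseudo-random sequence, for example from a linear feedback shift register (LFSR); because XOR is its own inverse, passing the read data through the same stream restores it. The sketch below uses a 16-bit LFSR purely as an illustration; the actual polynomial, seed handling, and word width of the embodiment are not specified.

```python
def lfsr_stream(seed: int, n: int) -> bytes:
    """Generate n pseudo-random bytes from a 16-bit Fibonacci LFSR
    (taps 16, 14, 13, 11; an illustrative choice)."""
    state, out = seed & 0xFFFF, bytearray()
    for _ in range(n):
        for _ in range(8):  # advance eight bits per output byte
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
        out.append(state & 0xFF)
    return bytes(out)

def scramble(data: bytes, seed: int = 0xACE1) -> bytes:
    """XOR the data with the LFSR stream; applying scramble() twice with the
    same seed restores the original data."""
    return bytes(b ^ k for b, k in zip(data, lfsr_stream(seed, len(data))))
```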
[0173] The NAND controller (NANDC) 50 uses a plurality of channels
(four channels CH0 to CH3 are shown in the Figure) to access the
nonvolatile memory 5 in parallel in order to satisfy a demand for a
certain speed.
[0174] The CPU 43B controls each component above (45 to 50, and RZ)
of the back end 4B to control the whole function of the back end
4B.
[0175] Note that the structure of the controller 4 shown in FIG. 11
is an example and no limitation is intended thereby.
[0176] FIG. 12 is a perspective view showing a storage system
according to the present embodiment.
[0177] The storage system 100 includes the memory system 3B as an
SSD.
[0178] The memory system 3B is, for example, a relatively small
module whose external dimensions are approximately 20 mm × 30 mm.
Note that the size and scale of the memory system 3B are not limited
thereto and may be changed arbitrarily.
[0179] Furthermore, the memory system 3B may be applied to the
information processing device 2B serving as a server used in a data
center or a cloud computing system employed in a company
(enterprise) or the like. Thus, the memory system 3B may be an
enterprise SSD (eSSD).
[0180] The information processing device 2B includes a plurality of
connectors (for example, slots) 30 opening upwardly, for example.
Each connector 30 is, for example, a serial attached SCSI (SAS)
connector. With the SAS connector, high speed mutual communication
can be established between the information processing device 2B and
each memory system 3B via a dual port of 6 Gbps. Note that the
connector 30 may instead comply with PCI Express (PCIe) or NVM
Express (NVMe).
[0181] A plurality of memory systems 3B are individually attached
to the connectors 30 of the information processing device 2B and
supported in such an arrangement that they stand in an
approximately vertical direction. Using this structure, a plurality
of memory systems 3B can be mounted collectively in a compact size,
and the memory systems 3B can be miniaturized. Furthermore, the
shape of each memory system 3B of the present embodiment is a
2.5-inch small form factor (SFF). With this shape, the memory system
3B is compatible with an enterprise HDD (eHDD), and easy system
compatibility with eHDDs can be achieved.
[0182] Note that the use of the memory system 3B is not limited to
enterprise applications. For example, the memory system 3B can be
used as a memory medium of a consumer electronic device such as a
notebook portable computer or a tablet terminal.
[0183] As can be understood from the above, the information
processing system 1B and the storage system 100 having the
structure of the present embodiment can achieve mass storage while
providing the same advantages as the first and second embodiments.
[0184] While certain embodiments have been described, these
embodiments have been presented by way of example only, and are not
intended to limit the scope of the inventions. Indeed, the novel
embodiments described herein may be embodied in a variety of other
forms; furthermore, various omissions, substitutions and changes in
the form of the embodiments described herein may be made without
departing from the spirit of the inventions. The accompanying
claims and their equivalents are intended to cover such forms or
modifications as would fall within the scope and spirit of the
inventions.
* * * * *