U.S. patent application number 11/197,499 was published by the patent office on 2005-12-15 for "Data back up method and its programs." Invention is credited to Hirabayashi, Motoaki; Yamada, Kyoko; and Yamada, Mitsugu.

Application Number: 20050278299 / 11/197,499
Family ID: 29243769
Publication Date: 2005-12-15
United States Patent Application 20050278299
Kind Code: A1
Yamada, Kyoko; et al.
December 15, 2005
Data back up method and its programs
Abstract
User data backup functions are realized through a computer,
which is located on the management service provider corporation
side and interfaces between a user side computer environment and a
storage service side computer environment to support storage
service. This computer selects storage devices which meet the user
side conditions from a plurality of entirely or partially empty
storage devices owned by the storage service side computer
environment. The computer receives user data from the user side
computer environment, divides the user data into records of a
predetermined size and transmits the records to the storage service
side computer environment so that the records are distributed and
stored to the selected storage devices.
Inventors: Yamada, Kyoko (Kawasaki, JP); Hirabayashi, Motoaki (Yokohama, JP); Yamada, Mitsugu (Yokohama, JP)
Correspondence Address: MATTINGLY, STANGER, MALUR & BRUNDIDGE, P.C., 1800 DIAGONAL ROAD, SUITE 370, ALEXANDRIA, VA 22314, US
Family ID: 29243769
Appl. No.: 11/197,499
Filed: August 5, 2005
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
11197499 | Aug 5, 2005 |
10367767 | Feb 19, 2003 |
Current U.S. Class: 1/1; 707/999.001; 714/E11.125
Current CPC Class: G06F 11/1469 20130101; G06F 11/1458 20130101; Y10S 707/99955 20130101; Y10S 707/99953 20130101; G06F 11/1466 20130101; G06F 11/1464 20130101
Class at Publication: 707/001
International Class: G06F 007/00
Foreign Application Data

Date | Code | Application Number
Apr 26, 2002 | JP | 2002-125447
Claims
What is claimed is:
1. An information processing system comprising: a plurality of
storage systems having a plurality of storage means; a computer,
coupled to the storage systems, which responds to a data storing
request from a user, and which is configured to obtain information
relating to storage areas from each of the storage systems, select
one or more storage areas which satisfy a user condition based on
the information relating to storage areas of each of the storage
systems, and transfer data to one or more storage systems having
the selected one or more storage areas in response to the data
storing request.
2. The information processing system according to claim 1, wherein
the user condition includes at least one of a division number of
the data, a cost to store the data, and a data storing term of the
data.
3. The information processing system according to claim 2, wherein
the obtained information includes at least one of an available
storage area, a cost to use the available storage area, and an
available term of each storage system.
4. The information processing system according to claim 3, wherein
the computer is further configured to select storage areas
corresponding to the division number included in the user
condition, divide the data based on the division number, and
transfer the divided data to storage systems having the selected
storage areas in response to the data storing request.
5. The information processing system according to claim 4, wherein
the computer responds to a data read request, and the computer is
further configured to send a read request to each storage system to
which the divided data is transferred, and to generate data from
the divided data in response to the data read request.
6. The information processing system according to claim 4, wherein
each of the data size to be divided is determined by available
storage area.
7. The information processing system according to claim 3, wherein
the computer is further configured to re-select storage areas when
a total cost of the selected storage areas is more than the cost
included in the user condition.
8. The information processing system according to claim 4, wherein
the computer is further configured to monitor available terms
included in the obtained information, select a storage area if the
available term is exceeded, and migrate the divided data stored in
the storage area that has exceeded the available term to the
selected area.
9. An information processing system comprising: a plurality of
storage systems having a plurality of storage areas; a computer,
coupled to the storage systems, which responds to a data storing
request from a user, and which obtains information relating to
storage areas from each of the storage systems, selects one or more
storage areas which satisfy a user condition based on the
information relating to storage areas of each of the storage
systems, and transfers data to one or more storage systems having
the selected one or more storage areas in response to the data
storing request.
10. The information processing system according to claim 9, wherein
the user condition includes at least one of a division number of
the data, a cost to store the data, and a data storing term of the
data.
11. The information processing system according to claim 10,
wherein the obtained information includes at least one of an
available storage area, a cost to use the available storage area,
and an available term of each storage system.
12. The information processing system according to claim 11,
wherein the computer selects storage areas corresponding to the
division number included in the user condition, divides the data
based on the division number, and transfers the divided data to
storage systems having the selected storage areas.
13. The information processing system according to claim 12,
wherein the computer responds to a data read request, and wherein
the computer sends a read request to each storage system to which
the divided data is transferred, and generates data from the
divided data in response to the data read request.
14. The information processing system according to claim 12,
wherein each of the data size is determined by available storage
area.
15. The information processing system according to claim 11,
wherein the computer re-selects storage areas when a total cost of
the selected storage areas is more than the cost included in the
user condition.
16. The information processing system according to claim 12,
wherein the computer monitors available terms included in the
obtained information, selects a storage area if the available term
is exceeded, and migrates the divided data stored in the storage
area that has exceeded the available term to the selected storage
area.
17. A data storing method comprising: obtaining information
relating to storage areas from a plurality of storage systems;
selecting one or more storage areas which satisfy a user condition
based on information relating to storage areas of each storage
system; and transferring data to one or more storage systems having
the selected one or more storage areas.
18. A data storing method comprising: obtaining information
relating to storage areas from a plurality of storage systems;
selecting two or more storage areas which satisfy a user condition
based on information relating to storage areas of each storage
system, dividing the data; and transferring the divided data to two
or more storage systems having the selected two or more storage
areas.
19. The data storing method according to claim 17, wherein the user
condition includes at least one of a division number of the data, a
cost to store the data, and a data storing term of the data.
20. The data storing method according to claim 18, wherein the
obtained information includes at least one of an available storage
area, a cost to use the available storage area, and an available
term of each storage system.
21. The data storing method according to claim 19, wherein the
storage area is selected by the division number, and the data is
divided by the division number.
22. The data storing method according to claim 17, further
comprising: sending a read request to each storage system to which
the divided data is transferred; and generating data from the
divided data.
23. The data storing method according to claim 17, wherein each of
the data size to be divided is determined by the available storage
area.
24. The data storing method according to claim 18, further
comprising: re-selecting storage areas when a total cost of the
selected storage areas is more than the cost included in the user
condition.
25. The data storing method according to claim 18, further
comprising: monitoring available terms included in the obtained
information; selecting a storage area if the available term is
exceeded; and migrating the divided data stored in the storage area
that has exceeded the available term to the selected storage area.
Description
[0001] The present application is a continuation of application
Ser. No. 10/367,767, filed Feb. 19, 2003, the contents of which are
incorporated herein by reference.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to storage backup techniques,
and more particularly, to a technique for backing up storage in a
remote place.
[0003] Since contents of disk storage may be lost by an unexpected
accident, data backup is made in most computer systems. Further,
data backup tape and other media are kept in a remote site so that
they will not be lost together with the original copies in case of
a fire, earthquake or the like. For example, a backup method using a
SAN (storage area network) is disclosed in Japanese Patent Laid-open
No. 2002-7304, and a data sharing-based backup method is disclosed in
Japanese Patent Laid-open No. 2000-82008.
[0004] Large-scale earthquakes, synchronized virus attacks and the
like pose a growing threat to computer systems and their data, making
it mandatory to keep two or three copies of each piece of data as
well as to back them up in a remote site. Backup is therefore
becoming a swelling burden in terms of storage capacity, cost and
overhead.
SUMMARY OF THE INVENTION
[0005] The present invention has been made in view of the background
mentioned above, and an object of the present invention is to provide
a highly safe data backup technique that is advantageous in terms of
cost.
[0006] According to one aspect of the present invention, there is
provided a user data backup technique by computer means which
resides between a user side computer environment and a storage
service side computer environment to support storage service,
service side computer environment to support storage service, the
data backup method comprising the steps of: selecting a storage
device which meets the user side conditions from a plurality of
entirely or partially empty storage devices owned by the storage
service side computer environment; receiving user data from the
user side computer environment; and transmitting the user data to
the storage service side computer environment for storage in the
selected storage device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 shows the configuration of a storage service system
according to a preferred embodiment of the present invention;
[0008] FIG. 2 shows an example of disk dividing and data
storage;
[0009] FIG. 3 shows an example of a data allocation TBL 123;
[0010] FIG. 4 shows an example of a user conditions TBL 122;
[0011] FIG. 5 shows an example of a SSP conditions TBL 124;
[0012] FIG. 6 shows an example of a disk dividing TBL 125;
[0013] FIG. 7 is a flowchart showing a processing procedure by a
matching unit 115 in the preferred embodiment;
[0014] FIG. 8 is a time chart showing a processing procedure for
data backup in the preferred embodiment;
[0015] FIG. 9 is a time chart showing a processing procedure for
data restoration in the preferred embodiment;
[0016] FIG. 10 is a time chart showing a processing procedure for
data migration in the preferred embodiment; and
[0017] FIG. 11 is a time chart showing a processing procedure for
disk return in the preferred embodiment.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0018] Preferred embodiments of the present invention will be
described in detail with reference to the accompanying drawings
below.
[0019] FIG. 1 shows a configuration of a backup storage service
system according to an embodiment of the present invention. In the
system of FIG. 1, a management service provider corporation
(hereinafter denoted as a MSP) provides data backup service to
users (hereinafter denoted as USRs) by using idle resources of a
plurality of storage service provider corporations (hereinafter
denoted as SSPs). Each SSP has a SSP server 103 which is a computer
environment on the storage service side. Each USR has a USR server
102 which is a computer environment on the user side. A MSP server
101 is computing means which interfaces between the two computer
environments in order to support the storage service.
[0020] In FIG. 1, the MSP server 101 receives data from USR servers
102-1 and 102-2, divides the data into records and stores the
records in idle resources managed by SSP servers 103-1, 103-2 and
103-3. Here, idle resources mean the currently unused areas of the
storage devices such as tape libraries and RAID devices provided
for data center business and storage service business. The MSP
server 101, the USR servers 102 and the SSP servers 103 are
connected via a network such as the Internet.
[0021] The MSP server 101 comprises a processing section composed
of a service reception unit 113, a resource management unit 114, a
matching unit 115, a data transfer unit 116, a data dividing unit
117, a data restore unit 118 and a data migration unit 119. Also
there are provided, on its storage device, a user conditions TBL
(table) 122, a data allocation TBL 123, a SSP conditions TBL 124
and a disk dividing TBL 125.
[0022] The service reception unit 113 is notified by the USR
servers 102 of the size of each data item to be backed up and the
user's preferred condition (cost, etc.) for using the backup
storage service. The service reception unit 113 stores those
obtained conditions into the user conditions TBL 122. The resource
management unit 114 is notified by the SSP servers 103 of their
conditions (empty disk capacity, availability period, etc.) for
providing disks. The resource management unit 114 stores those
obtained conditions into the SSP conditions TBL 124.
[0023] The matching unit 115 searches the user conditions TBL 122
and SSP conditions TBL 124 for mutually conforming combinations. The
data transfer unit 116 controls data transfer between the USR
servers 102 and the MSP server 101 and between the MSP server 101
and the SSP servers 103.
[0024] The data dividing unit 117 divides user data into records
whose size is determined depending on the empty disk capacities
offered by the SSP servers for backup. The data restore unit 118
refers to the data allocation TBL 123 and reassembles original data
from records distributed to a plurality of SSP disks. The data
migration unit 119 moves data to an empty area in another SSP
server 103 if the availability term of the current backup disk
expires or if it becomes necessary during the availability term to
return a disk which is currently being used as an idle resource. The
user need not be aware of any data migration executed, since the
pertinent processing is completed within the MSP, which results in a
reduced operational cost for the user.
[0025] The user conditions TBL 122 stores the user's preferred
conditions such as cost and availability term. This table will be
described later in detail with reference to FIG. 4. The data
allocation TBL 123 stores information about how SSP disks are
allocated to user data. In order to raise the safety of data, the
data allocation TBL is duplicated. The other copy is held in a
separate alternative MSP server. If the MSP server 101 becomes
unavailable due to disaster or failure, the data allocation TBL 123
in the alternative MSP server is accessed. This table will be
described later in detail with reference to FIG. 3. The SSP
conditions TBL 124 stores disk lending conditions such as empty
capacity and cost. This table will be described later in detail
with reference to FIG. 5. The disk dividing TBL 125 stores how the
lent disks are divided by the MSP into partitions. This table will
be described later in detail with reference to FIG. 6.
[0026] Each USR server 102 comprises a service demanding unit 111
and a data transfer unit 112. The USR server 102-1 manages user
data A (131) while the USR server 102-2 manages user data B (132)
and user data C (133).
[0027] When a USR server 102 uses the storage service, the USR
server 102 issues a backup demanding request to notify the MSP of
the required disk capacity, preferred cost, term of use and number
of distributions. The number of distributions, which may be
specified arbitrarily by the user, is an index determining the
number of sites to which the data is apportioned for storage.
Generally in a local site, RAID technology is used so that accesses
are dispersed to a plurality of storage devices such as hard disks.
In the case of the present invention, data is apportioned to a
plurality of separate SSP disks connected via a network. How many
SSP disks are to be used is determined by the number of
distributions, a variable specified by the user. Reliability can be
raised by using them like a single disk.
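The apportioning described above can be sketched roughly in Python. This is a minimal illustration only; the record size, function name and round-robin layout are assumptions for illustration, not details taken from the patent:

```python
# Minimal sketch of apportioning backup data among a user-specified
# number of distributions. Record size and names are illustrative
# assumptions, not taken from the patent.
def distribute(data: bytes, num_distributions: int, record_size: int = 4):
    """Split data into fixed-size records and assign them round-robin
    to num_distributions separate backup disks."""
    records = [data[i:i + record_size]
               for i in range(0, len(data), record_size)]
    disks = [[] for _ in range(num_distributions)]
    for i, record in enumerate(records):
        disks[i % num_distributions].append(record)
    return disks
```

With the number of distributions set to 5, successive records land on five different SSP disks, so no single SSP holds the whole data.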
[0028] Once some disks are judged appropriate for backup by the MSP
server 101, the data is sent to the MSP server 101 from the data
transfer unit 112. To restore data from backup disks, the USR
server 102 issues a restore demanding request to the MSP server 101
and receives the data from the MSP server 101 via the data transfer
unit 112.
[0029] Each SSP server 103 comprises a resource registering unit
120 and a data transfer unit 121. The SSP server 103-1 manages disk
A (141), disk B (142) and disk C (143). The SSP server 103-2
manages disk D (144) and disk E (145). The SSP server 103-3
manages disk F (146), disk G (147) and disk H (148).
[0030] Of the disks managed by a SSP server 103, those disks
available for the backup storage service are registered to the MSP
server 101 by the resource registering unit 120. The data transfer
unit 121 manages data exchange between the USR server and the MSP
server.
[0031] For example, assume that the backup storage service is to be
applied to the user data A (131) in the USR server 102-1 and the
user data B (132) in the USR server 102-2. Hereinafter, a disk
means a logically independent storage device. Empty disks lent as
idle resources are the disk B (142) and disk C (143) under
management of the SSP server 103-1, the disk D (144) under
management of the SSP server 103-2 and the disk F (146) and disk G
(147) under management of the SSP server 103-3. A total of five
disks are offered for the backup storage service.
[0032] Disk B is divided into partition 1 (142-1), partition 2
(142-2) and partition 3 (142-3); Disk C is divided into partition
1 (143-1) and partition 2 (143-2); Disk D is divided into partition
1 (144-1), partition 2 (144-2) and partition 3 (144-3); Disk F is
divided into partition 1 (146-1) and partition 2 (146-2); and Disk G
is divided into partition 1 (147-1), partition 2 (147-2) and
partition 3 (147-3).
[0033] User data A (131), after given the matching processing and
then divided into five records in the MSP server 101, is stored to
disk B partition 1 (142-1) in the SSP server 103-1, disk C
partition 1 (143-1) in the SSP server 103-1, disk D partition 1
(144-1) in the SSP server 103-2, disk F partition 1 (146-1) in the
SSP server 103-3 and disk G partition 1 (147-1) in the SSP server
103-3.
[0034] FIG. 2 shows how data on a user disk may be divided and
stored for backup in the aforementioned embodiment. An original
disk 201 is divided into a plurality of records according to the
capacities of the backup disks. The original disk 201 corresponds
to user data A (131) or user data B (132). The backup disks are
represented by disks given numerals 211, 212, 213, 214 and 215
respectively. Each record, labeled with symbol R, may be either a
block, a character or a bit. One block is data consisting of
characters. The resultant records are sequentially stored on the
respective disk partitions in accordance with the number of
distributions. In the case of FIG. 2 where the specified number of
distributions is assumed to be 5, the records are sequentially
stored on the five backup disks 211, 212, 213, 214 and 215. Also
note that in this example an ECC or parity record is stored for
every four data records (R1, R2, R3 and R4 for instance). If one of
the four adjacent records is lost, its corresponding ECC or parity
record can be used to regenerate the lost record. Thus, R1, R2, R3,
R4 and their ECC or parity are stored on
backup disks 211, 212, 213, 214 and 215, respectively. In this
manner, the divided records are sequentially stored on the backup
disks. Combining the apportioning of data among a plurality of
disks depending on the number of distributions with a parity check
or ECC technique, this method is aimed at not only improving access
performance but also securing the data.
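As one hedged illustration of the FIG. 2 layout, the following sketch stripes data records over four data disks plus a fifth parity disk, computing the parity record as the XOR of each group of four records. The patent names ECC or parity without fixing an algorithm, so XOR, the function names and sizes here are all assumptions:

```python
# Hedged sketch of the FIG. 2 layout: each group of four data records
# gets an XOR parity record stored on an extra (fifth) disk.
def xor_parity(blocks):
    """XOR a list of byte strings together (shorter blocks are
    treated as zero-padded)."""
    out = bytearray(max(len(b) for b in blocks))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def stripe_with_parity(records, group: int = 4):
    """Place each group of `group` data records on `group` disks and
    their parity record on the extra disk (index `group`)."""
    disks = [[] for _ in range(group + 1)]
    for g in range(0, len(records), group):
        chunk = records[g:g + group]
        for d, rec in enumerate(chunk):
            disks[d].append(rec)
        disks[group].append(xor_parity(chunk))
    return disks
```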
[0035] Since ECC or parity information is stored as a separate
record, even if one disk becomes unavailable due to a failure or
the like, it is possible to restore data from records on the other
disks. In addition, since each disk is a separate SSP disk, it is
not possible to restore the whole data from one disk, which brings
about a merit that the security of important data can be protected.
Generally, making the size of each record smaller raises security
although this requires longer processing time.
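The regeneration of a lost record from the other disks can likewise be sketched. Assuming the XOR parity scheme used for illustration above, the lost record is simply the XOR of the surviving records of its group and the group's parity record:

```python
# Sketch of regenerating one lost record, assuming XOR parity: the
# missing record equals the XOR of the surviving records of its
# parity group and the parity record itself. Names are illustrative.
def recover_record(surviving_records, parity_record: bytes) -> bytes:
    """Regenerate a lost record from the other records in its parity
    group and the parity record."""
    out = bytearray(parity_record)
    for rec in surviving_records:
        for i, byte in enumerate(rec):
            out[i] ^= byte
    return bytes(out)
```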
[0036] FIG. 3 is an example of the data allocation TBL 123 showing
how user data is allocated to backup disks. In the data allocation
TBL 123, data 401 contains a user server name and a disk name,
indicating which user data is backed up. Each backup disk 402
contains a SSP server name, disk name and a partition name,
indicating which partition is hit by the matching unit 115. In the
case of FIG. 3, user data stored on USR1-A is divided into five
sets and stored respectively in SSP1-B1, SSP1-C1, SSP2-D1, SSP3-F1
and SSP3-G1. Likewise, user data stored on USR2-B is divided into
five sets and stored respectively in SSP1-B2, SSP1-C2, SSP2-D2,
SSP3-F2 and SSP3-G2.
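A hypothetical in-memory form of the allocation shown in FIG. 3 might look like the following; the dictionary layout and lookup helper are assumptions for illustration, not the patent's actual data format:

```python
# Hypothetical in-memory form of the data allocation TBL 123 of
# FIG. 3: each user disk maps to the five SSP partitions holding its
# divided records.
data_allocation_tbl = {
    "USR1-A": ["SSP1-B1", "SSP1-C1", "SSP2-D1", "SSP3-F1", "SSP3-G1"],
    "USR2-B": ["SSP1-B2", "SSP1-C2", "SSP2-D2", "SSP3-F2", "SSP3-G2"],
}

def backup_locations(user_disk: str):
    """Look up which SSP partitions hold the records of one user disk."""
    return data_allocation_tbl[user_disk]
```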
[0037] FIG. 4 is an example of the user conditions TBL 122 where
user-specified conditions for using the backup storage service are
stored. In the user conditions TBL 122, each user data 501 contains
a user server name and a disk name, indicating which user data is
concerned. Each capacity 502 contains the size of the data. Each
cost 503 contains a monthly rental fee per unit capacity. Each term
(start-end) 504 contains two dates between which the service is to
be used or the user data is to be backed up. Each number of
distributions 505 contains an index indicating the number of disks
to which the user data, including ECC or parity records, is to be
apportioned. Specifying a higher value for the number of
distributions results in higher safety since the user data will be
apportioned among a larger number of disks.
[0038] FIG. 5 is an example of the SSP conditions TBL 124 where
SSP-specified conditions for providing the backup storage service are
stored. In the SSP conditions TBL 124, each SSP disk 601 contains a
SSP name and a disk name, identifying a disk registered by the SSP.
Each capacity 602 indicates the empty capacity of the disk. Each
cost 603 indicates the monthly rental fee per unit capacity charged
for the disk. Each term (start-end) 604 contains two dates between
which the disk is available. Each installation site 605 contains
the name of the site where the disk resides.
[0039] FIG. 6 is an example of the disk dividing TBL 125, which
stores how available SSP disks are divided into partitions before
being lent to users when the storage service is used in the present
embodiment. In the disk dividing TBL 125, each disk 701 contains a SSP name
and a disk name, identifying a SSP disk registered by the SSP. Each
partition 702 contains the name of a partition on the disk. In FIG.
6, disk SSP1-B is divided into three partitions named B1, B2 and B3
respectively. Likewise, SSP1-C is divided into two partitions named
C1 and C2 respectively. Information in each partition 702 field
corresponds to the logical block number associated with the
partition within the disk.
[0040] FIG. 7 shows a flowchart describing how the matching unit
115 operates to search the user conditions and SSP conditions for
mutually conforming combinations. If the operation of the matching
unit 115 is started, conditions for using the backup service are
obtained from the user conditions TBL 122 (Step 300). Then SSP
conditions for providing the backup service are obtained from the
SSP conditions TBL 124 (Step 301). Then, the minimum backup
capacity per disk is calculated by dividing the capacity to back up
by (the number of distributions-1) (Step 302). The matching unit
115 searches for appropriate disks which meet this minimum backup
disk capacity and other conditions. At first, the condition level is
set to 1 before condition level judgment is done (Step 303). Search is
done at each condition level.
[0041] At condition level 1, the minimum backup disk capacity is
compared with the capacity 602 of a SSP disk (Step 306) and if the
minimum backup disk capacity is smaller, the term (start and end)
during which the SSP disk is available is compared with the term
(start and end) during which the user wants to back up the data
(Step 307). If the term during which the user wants to back up the
data is within the term during which the SSP disk is available, the
identifier of the SSP disk is stored to the memory (Step 309). This
judgment flow is executed for each SSP disk registered (Step 305).
Of the hit SSP disks, the lowest cost SSP disk is selected (Step
310) and the minimum backup disk capacity is allocated from the
selected SSP disk (Step 311). The hit SSP disk is excluded from the
object of comparison in Steps 306 and 307 (Step 312). This loop is
repeated as many times as the number of distributions (Step 304) so
that as many conforming backup disks as the number of distributions
are detected. The total cost for using the backup disks hit in this
manner is calculated and compared with the user cost (Step 313). If
the calculated total cost exceeds the user cost, the condition
level is incremented by 1 (Step 314) to execute another search
flow.
[0042] At condition level 2, the date from which the SSP disk is
available is compared with the date from which the user wants to
back up the data (Step 308). Unlike in Step 307, the date until
which the SSP disk is available is not compared with the date until
which the user wants to back up the data. Then, the total cost for
using the backup disks hit in this manner is calculated and
compared with the user cost (Step 313). If the calculated total
cost exceeds the user cost, the condition level is incremented by 1
(Step 314) to execute another search flow.
[0043] At condition level 3, it is not required to distribute the
data to different SSP disks and therefore it is allowed to store
the data on the same SSP disk. That is, backup disk search is
repeated without executing Step 312 where each hit SSP disk is
excluded from the object of comparison.
[0044] At condition level 3, it is possible that all the user data
is allocated to a single SSP disk. Instead of condition level 3,
the processing procedure may also be altered in such a manner that
1 is subtracted from the number of distributions specified by the
user and then condition level 1 is executed again from Step 302. In
this case, the total cost may be reduced to the user cost or below
without ignoring the user-desired number of distributions. Note
that concentrating the user data to only one SSP disk or SSP server
103 is also an implementation of the present invention although the
safety of the user data is sacrificed.
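Condition level 1 of the matching flow in FIG. 7 can be sketched roughly as follows. The field names ("capacity", "cost", "start", "end", "distributions") and the cost model are assumptions for illustration only, and condition levels 2 and 3 (the relaxed term check and disk-reuse variants) are omitted:

```python
import math

# Rough sketch of condition level 1 of the matching unit (FIG. 7).
# Field names and the cost model are illustrative assumptions.
def match_level1(user: dict, ssp_disks: list):
    # Step 302: minimum backup capacity per disk, since data is spread
    # over (number of distributions - 1) data disks plus parity.
    min_cap = math.ceil(user["capacity"] / (user["distributions"] - 1))
    chosen, remaining = [], list(ssp_disks)
    for _ in range(user["distributions"]):               # Step 304 loop
        candidates = [d for d in remaining
                      if d["capacity"] >= min_cap        # Step 306
                      and d["start"] <= user["start"]
                      and d["end"] >= user["end"]]       # Step 307
        if not candidates:
            return None          # no conforming combination at level 1
        best = min(candidates, key=lambda d: d["cost"])  # Step 310
        chosen.append(best)                              # Step 311
        remaining.remove(best)                           # Step 312
    total = sum(d["cost"] * min_cap for d in chosen)
    return chosen if total <= user["cost"] else None     # Step 313
```

If `match_level1` returns None because the total cost exceeds the user cost, the flowchart proceeds to condition level 2 and then level 3, each relaxing one constraint.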
[0045] If backup disks are determined as mentioned above, the
allocation and disk dividing are registered (Step 315). That is,
the SSP disk partitions allocated according to the minimum backup
disk capacity are registered to the disk dividing TBL 125 and data
allocation TBL 123. Note that if the matching unit 115 fails to
find any SSP disk conforming to the user conditions even at
condition level 3, the matching unit 115 terminates its processing
after notifying the USR server 102 of the failure.
[0046] FIG. 8 is a time chart indicating how exchanges are done
when data is backed up in the embodiment. The resource management
unit 114 in the MSP server 101 requests SSP conditions from the
resource registering unit 120 in the SSP server 103 (1). When an
empty disk is registered in the SSP server 103, the resource
registering unit 120 registers the conditions to the resource
management unit 114 (2). To make the empty disk available, the
resource management unit 114 requests the resource registering unit
120 to reserve the empty disk for backup (3).
[0047] For the USR server 102 to use the backup storage service,
the service demanding unit 111 issues a service demanding request
to the service reception unit 113 and notifies the unit of
user-desired conditions (4). Upon receiving the request, the
service reception unit 113 issues a matching processing request to
the matching unit 115 (5). The matching unit 115 searches the user
conditions and SSP conditions for mutually conforming combinations.
After the search, the matching unit 115 notifies the resource
management unit 114 of the matching result (6). The resource
management unit 114 refers to the data allocation TBL 123 and disk
dividing TBL 125 and issues a rental request to the resource
registering unit 120 in a backup SSP server 103 (7). The rental
request includes the specification of which disk partitions are to
be used for backup. After a rental request is
issued to each backup SSP server 103, the resource management unit
114 notifies the matching unit 115 of the completion (8). Then, the
matching unit 115 issues a rental settlement notice to the service
reception unit 113 (9). Finally, the service reception unit 113
issues a rental settlement notice to the service demanding unit 111
(10).
[0048] Upon receiving the rental settlement notice, the USR server
102 transmits the original data from the data transfer unit 112 to
the data transfer unit 116 in the MSP server 101 (11). The transfer
unit 116 passes the data to the data dividing unit 117 (12).
[0049] According to the data allocation TBL 123, the data dividing
unit 117 divides the data into records for distribution to the
backup SSP servers 103 and transmits the records to the data
transfer unit 116 (13). The data transfer unit 116 transfers the
records to the data transfer unit 121 of each backup SSP server 103
(14).
[0050] FIG. 9 is a time chart indicating how exchanges are done
when data is restored in the embodiment. For a USR server 102 to
restore data by using the backup storage service, the service
demanding unit 111 issues a service demanding request to the
service reception unit 113 and specifies which user data 401 is to
be restored (1). Upon receiving the service demanding request, the
service reception unit 113 requests the resource management unit
114 to restore the data (2).
[0051] According to the data allocation TBL 123 and disk dividing
TBL 125, the resource management unit 114 requests the resource
registering unit 120 in each backup SSP server 103 to transfer the
target data (3). This request includes the specification of a
backup disk partition from which data is to be transferred. The
data transfer unit 121 passes the data to the data transfer unit
116 in the MSP server 101 (4). The transferred data is allowed to
enter the data restore unit 118 where restore processing is done
(5). If the restore processing succeeds, the data is transferred
from the data restore unit 118 via the data transfer unit 116 (6)
to the data transfer unit 112 in the USR server 102 (7).
[0052] Note that if records cannot be obtained from a SSP server
103 due to failure or the like, the data restore unit 118
regenerates the lost records by using ECC or parity
information.
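The regeneration step in paragraph [0052] can be sketched with a single XOR parity record (RAID-4 style). This is an assumption for illustration; the patent names ECC or parity information but does not fix the exact scheme.

```python
# Illustrative sketch of parity-based record regeneration: if one SSP
# server cannot deliver its record, the missing record equals the XOR
# of the parity record and all surviving records. A single XOR parity
# is assumed; the patent does not specify the exact ECC scheme.
def make_parity(records):
    """XOR all records byte-by-byte into one parity record."""
    parity = bytearray(len(records[0]))
    for rec in records:
        for i, b in enumerate(rec):
            parity[i] ^= b
    return bytes(parity)

def regenerate(surviving_records, parity):
    """Recover the single lost record from parity plus survivors."""
    return make_parity(list(surviving_records) + [parity])

recs = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = make_parity(recs)
# Simulate losing the second record and rebuilding it:
assert regenerate([recs[0], recs[2]], parity) == recs[1]
```

A single parity record tolerates one failed SSP server; tolerating more simultaneous failures would require a stronger code such as Reed-Solomon.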
[0053] FIG. 10 is a time chart indicating how processing is done
for data migration. If the available term of a resource expires,
the MSP server 101 must perform data migration. The resource
management unit 114 checks the term 604 fields of the SSP
conditions TBL 124, and if the available term of any SSP disk
expires, it issues a data migration request to the data migration
unit 119 (1). The data migration unit 119 issues a matching
processing request for the user data to be moved to the matching
unit 115 (2). The user data to be moved is identified by referring
to the disk dividing TBL 125 and data allocation TBL 123.
[0054] The matching unit 115 searches the user conditions and SSP
conditions for mutually conforming combinations. Note that the data
to be moved is limited to the data stored in resources whose
availability term has expired. Therefore, the minimum backup
capacity and the number of distributions for the data to be moved
are set anew, rather than using the initially set minimum backup
space and number of distributions. At step 302, the number of SSP
disks whose availability term has expired is used instead of (the
number of distributions-1). The cost comparison at step 313 is also
omitted.
After the search, the matching unit 115 notifies the resource
management unit 114 of the matching result (3), and the resource
management unit 114 issues a rental notice to the resource
registering unit 120 of, for example, the SSP server 103-2 (4). The
rental notice includes the specification of a new disk partition
for backup. The resource management unit 114 requests the resource
registering unit 120 of the SSP server 103-1 (whose availability
term has expired) to collect the data (5). The collection request
includes the specification of the disk partition to which the data
is to be moved. The data transfer unit 121 transfers the data to
the data transfer unit 116 of the MSP server 101 (6).
[0055] The data migration unit 119 is notified that the data is
transferred to the MSP server 101 (7). The data migration unit 119
notifies the data transfer unit 116 of the address of the data
destination (the address of the SSP server 103-2, disk identifier,
partition identifier, logical block number of the partition in the
disk, etc.) (8). Then, the data transfer unit 116 transfers the
received data to the data transfer unit 121 of the data destination
SSP server 103-2 (9). When migration of the user data which must be
moved to new backup disks is complete, the data migration unit 119
updates the disk dividing TBL 125 and data allocation TBL 123 so as
to indicate the new backup disks. In addition, the resource
management unit 114 of the MSP server 101 issues a rental ending
request to the resource registering unit 120 of the SSP server 103-1
used formerly for backup (10). This request includes the
specification of the disk partition whose rental is to be
terminated.
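The migration trigger described in paragraph [0053] can be sketched as a scan of the term fields. The table layout, field names, and date format below are assumptions for illustration; the patent only specifies that the term 604 fields of the SSP conditions TBL 124 are checked.

```python
# Hypothetical sketch of the migration trigger: the resource
# management unit scans the SSP conditions table and flags disks whose
# availability term has passed; their data must be migrated to newly
# rented partitions. Field names and dates are illustrative only.
from datetime import date

ssp_conditions_tbl = [  # stand-in for the SSP conditions TBL 124
    {"server": "SSP-103-1", "disk": "d0", "term": date(2005, 1, 31)},
    {"server": "SSP-103-2", "disk": "d1", "term": date(2006, 6, 30)},
]

def expired_disks(tbl, today):
    """Return rows whose availability term has already passed."""
    return [row for row in tbl if row["term"] < today]

for row in expired_disks(ssp_conditions_tbl, date(2005, 12, 15)):
    # In the patent's flow this would issue a data migration request
    # to the data migration unit 119 for this server/disk pair.
    print("migrate data off", row["server"], row["disk"])
```

After migration completes, the disk dividing TBL 125 and data allocation TBL 123 would be updated to point at the new backup disks, as paragraph [0055] describes.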
[0056] FIG. 11 is a time chart indicating how exchanges are done
when a disk is returned. If a SSP server 103 requests the MSP
server 101 to return a lent storage device within its availability
term, the MSP server 101 must execute data migration, too. Upon
receiving a storage return request from the resource registering
unit 120 of, for example, the SSP server 103-1 (0), the resource
management unit 114 starts the data migration procedure based on
this request. This request includes the identifier of a disk which
is to be returned. The subsequent procedure is the same as that
described with reference to FIG. 10.
[0057] For user data under generation management, a plurality of
backup copies is usually held, one for each generation. Term 504
fields can be set so as to always retain a
predetermined number of such backup copies. In this case, the MSP
server 101 monitors the term 504 fields of the user conditions TBL
122 and, when the term of a disk expires, issues a rental ending
request to the SSP server 103 having the disk in order to release
its partitions.
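The retention policy in paragraph [0057] can be sketched as keeping a fixed number of generations and releasing the rest. The generation count and the table shape are assumptions for illustration; the patent only says the term 504 fields can be set to retain a predetermined number of copies.

```python
# Hypothetical sketch of generation retention: keep at most N backup
# generations per user; older generations' disks receive rental
# ending requests so their partitions are released. N and the
# (generation, disk) tuple layout are illustrative assumptions.
MAX_GENERATIONS = 3

def prune_generations(backups, max_gens=MAX_GENERATIONS):
    """backups: list of (generation_number, disk_id), newest last.
    Returns (kept, released); released disks would get rental
    ending requests in the patent's flow."""
    kept = backups[-max_gens:]
    released = backups[:-max_gens]
    return kept, released

kept, released = prune_generations(
    [(1, "d1"), (2, "d2"), (3, "d3"), (4, "d4")])
```

Here generation 1 on disk d1 would be released, while the three newest generations remain backed up.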
[0058] According to the present invention, it is possible to
provide users with a data backup storage device which offers high
safety against disasters and the like and is advantageous in terms
of cost.
* * * * *