U.S. patent application number 11/058198 was filed with the patent office on 2005-06-23 for heterogeneous computer system, heterogeneous input/output system and data back-up method for the systems.
Invention is credited to Fukuzawa, Yasuko, Nakano, Toshio, Yamamoto, Akira.
Application Number: 20050138241 / 11/058198
Family ID: 14218504
Filed Date: 2005-06-23

United States Patent Application 20050138241
Kind Code: A1
Fukuzawa, Yasuko; et al.
June 23, 2005
Heterogeneous computer system, heterogeneous input/output system
and data back-up method for the systems
Abstract
A system for storing data including first and second storage
systems, each having a disk controller and plural disks under
control of the disk controller. The first disk controller receives
plural first I/O requests each including a different first disk ID
from the other first I/O requests, and determines if the first disk
ID indicates one of the second disks. If the first disk ID
indicates one of the second disks, the first disk controller
obtains an address of the second disk controller and a second disk
ID of the second disks, and sends a second I/O request to the
second disk controller based on the obtained address and disk ID.
Plural second I/O requests, whose target second disk is different
from each other based on the different first disk ID, are sent from
the first storage system to the second storage system.
Inventors: Fukuzawa, Yasuko (Yokohama-shi, JP); Yamamoto, Akira (Sagamihara-shi, JP); Nakano, Toshio (Chigasaki-shi, JP)
Correspondence Address: MATTINGLY, STANGER, MALUR & BRUNDIDGE, P.C., 1800 DIAGONAL ROAD, SUITE 370, ALEXANDRIA, VA 22314, US
Family ID: 14218504
Appl. No.: 11/058198
Filed: February 16, 2005
Related U.S. Patent Documents

Application Number | Filing Date    | Patent Number
11058198           | Feb 16, 2005   |
10663656           | Sep 17, 2003   | 6892268
10326978           | Dec 24, 2002   | 6721841
09594012           | Jun 15, 2000   | 6529976
09052985           | Apr 1, 1998    | 6098129
Current U.S. Class: 710/36; 714/E11.12
Current CPC Class: G06F 11/1456 20130101; G06F 3/0623 20130101; G06F 3/0685 20130101; G06F 3/065 20130101
Class at Publication: 710/036
International Class: G06F 013/28

Foreign Application Data

Date         | Code | Application Number
Apr 1, 1997  | JP   | 09-098389
Claims
What is claimed is:
1. A system for storing data comprising: a first storage system
including a first disk controller and a plurality of first disks
under control of the first disk controller; and a second storage
system including a second disk controller and a plurality of second
disks under control of the second disk controller, wherein the
first disk controller is configured to receive a plurality of first
I/O requests each including a different first disk ID from the
other first I/O requests, and for each first I/O request, determine
if the first disk ID indicates one of the plurality of second
disks, and wherein for each first I/O request, if the first disk ID
indicates one of the plurality of second disks, the first disk
controller is configured to obtain an address of the second disk
controller and a second disk ID of the one of the plurality of
second disks based on the first disk ID, and sends a second I/O
request to the second disk controller according to the obtained
address of the second controller and the obtained second disk ID,
and wherein a plurality of second I/O requests, whose target second
disk is different from each other based on the different first disk
ID, are sent from the first storage system to the second storage
system.
2. A system for storing data according to claim 1, wherein the
first storage system further comprises: correlation information
among a first disk ID, the address of the second disk controller,
and a second disk ID, for each first disk ID included in the
plurality of first I/O requests.
3. A system for storing data according to claim 2, wherein the
correlation information is set in the first disk controller by a
computer, which is different from a computer sending the plurality
of first I/O requests to the first disk controller.
4. A system for storing data according to claim 1, wherein the
second disk ID is an address of a second disk to identify the
second disk among the plurality of second disks.
5. A system for storing data according to claim 1, wherein the
second disk ID is an address assigned to the second storage
system.
6. A system for storing data according to claim 1, wherein for each
first I/O request, if the first disk ID indicates one of the
plurality of first disks, the first disk controller is configured
to execute I/O processing to the one of the plurality of first
disks according to the received first I/O request.
7. A system for storing data according to claim 1, wherein the
second storage system further comprises: an interface for coupling
to a computer.
8. A system for storing data according to claim 7, wherein the
second storage system is coupled to the first storage system via
the same type of interface as the interface for coupling to a
computer.
9. A system for storing data comprising: a first storage system
including a first disk controller and a plurality of first disks
coupled to the first disk controller; and a second storage system
including a second disk controller and a plurality of second disks
coupled to the second disk controller, wherein the first disk
controller is configured to manage a plurality of first disk IDs,
which includes a plurality of first type first disk IDs each
designating one of the plurality of second disks, wherein the first
disk controller is configured to receive a first I/O request
including a first disk ID, and select one disk controller coupled
to a disk designated by the first disk ID included in the received
first I/O request, and wherein if the selected disk controller is
the second disk controller, the first disk controller is configured
to obtain an address of the second disk controller, obtain a second
disk ID of a second disk associated with the first disk ID among a
plurality of second disk IDs each associated with one of the
plurality of second type first disk IDs, and send a second I/O
request to the second storage system according to the obtained
address of the second disk controller and the obtained second disk
ID.
10. A system for storing data according to claim 9, wherein the
first storage system further comprises: correlation information
among a second type first disk ID, the address of the second disk
controller, and a second disk ID for each of the plurality of
second type first disk IDs.
11. A system for storing data according to claim 9, wherein the
second disk ID is an address assigned to the second storage
system.
12. A system for storing data according to claim 10, wherein the
correlation information is set in the first disk controller by a
computer, which is different from a computer issuing a first I/O
request to the first storage system.
13. A system for storing data according to claim 9, wherein if the
selected disk controller is the first disk controller, the first
disk controller is configured to execute I/O process to one of the
plurality of first disks according to the received first I/O
request.
14. A system for storing data according to claim 9, wherein the
second storage system further comprises: an interface for coupling
to a computer.
15. A system for storing data according to claim 14, wherein the
second storage system is coupled to the first storage system via
the same type of interface as the interface for coupling to a
computer.
16. A system for storing data comprising: a first disk controller;
and a plurality of first disks coupled to the first disk
controller, wherein the first disk controller is configured to
manage a plurality of first disk IDs, which includes a plurality of
first type first disk IDs each designating one of a plurality of
second disks of another storage system including a plurality of
said second disks and a second disk controller, wherein the first
disk controller is configured to receive a first I/O request
including a first disk ID, and select one disk controller coupled
to a disk designated by the first disk ID included in the received
first I/O request, and wherein if the selected disk controller is
the second disk controller, the first disk controller is configured
to obtain an address of the second disk controller, obtain a second
disk ID of a second disk associated with the first disk ID among a
plurality of second disk IDs each associated with one of a
plurality of second type first disk IDs, and send a second I/O
request to the second storage system according to the obtained
address of the second disk controller and the obtained second disk
ID.
17. A storage system according to claim 16, further comprising:
correlation information among a second type first disk ID, the
address of the second disk controller, and a second disk ID for
each of the plurality of second type first disk IDs.
18. A storage system according to claim 17, wherein the correlation
information is set in the first disk controller by a computer,
which is different from a computer issuing a first I/O request to
the first storage system.
19. A storage system according to claim 16, wherein the second disk
ID is an address assigned to the second storage system.
20. A storage system according to claim 16, wherein if the selected
disk controller is the first disk controller, the first disk
controller is configured to execute I/O process to one of the
plurality of first disks according to the received first I/O
request.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a continuation of application
Ser. No. 10/663,656, filed Sep. 17, 2003; which is a divisional of
application Ser. No. 10/326,978 filed on Dec. 24, 2002, now U.S.
Pat. No. 6,721,841; which is a continuation of application Ser. No.
09/594,012, filed on Jun. 15, 2000, now U.S. Pat. No. 6,529,976,
which is a continuation of application Ser. No. 09/052,985 filed on
Apr. 1, 1998, now U.S. Pat. No. 6,098,129, the contents of which
are hereby incorporated herein by reference.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to a heterogeneous computer
system comprising a host computer and a plurality of I/O
subsystems, and more particularly to a method for making it
possible to back up the data stored in a memory between a host
computer and an I/O subsystem which cannot be directly connected
due to a difference in access interface, and to a heterogeneous
computer system in which a plurality of I/O subsystems having
different access interfaces are connected to the system and the
host computer.
[0003] In mainframes, a large-scale memory hierarchy (storage
hierarchy) including a combination of a plurality of external
memories having different processing speeds and different storage
capacities is accompanied by a satisfactory data management
function and an overall storage management function intended to
support an optimum data arrangement and an efficient operation.
IBM's DFSMS (Data Facility Storage Management Subsystem) is an
example, which is described in detail in "IBM SYSTEMS JOURNAL, Vol.
28, No. 1, 1989, pp. 77-103".
[0004] The disk data of the I/O subsystem of the mainframe computer
system having the above-mentioned management function can be backed
up in a medium such as a magnetic tape or a magnetic tape library
capable of storing a large quantity of data with a low cost per
bit.
[0005] An open system such as a personal computer or a work
station, unlike the mainframe, is not equipped with a magnetic tape
or a magnetic tape library capable of storing a large quantity of
data.
[0006] Generally, in an open system such as a personal computer or
a work station, a disk is accessed in accordance with a
fixed-length record format, while the mainframe accesses a disk in
accordance with a variable-length record format called the count
key data format.
[0007] As a result, the disk subsystem for the mainframe computer
is often configured independently of the disk subsystem for the
open system.
[0008] On the other hand, a technique for transmitting and
receiving data between I/O subsystems is disclosed in U.S. Pat. No.
5,155,845.
[0009] In a disk subsystem for an open system and a disk subsystem
for a mainframe computer which use different host computers, the
back-up and other functions are independently operated and
managed.
[0010] In view of the fact that the open system lacks a medium such
as a magnetic tape or a magnetic tape library capable of storing a
large quantity of data, as described above, it is effective to
back up the data in the I/O subsystem of the mainframe.
[0011] An ordinary disk system for the open system, however, cannot
be connected directly to the mainframe due to the difference in the
interface thereof.
[0012] U.S. Pat. No. 5,155,845 fails to disclose how to process the
read/write operation for a storage system not directly connected to
a host computer.
SUMMARY OF THE INVENTION
[0013] An object of the present invention is to provide a method
and a system for backing up data stored in a memory between a host
computer and an I/O subsystem that cannot be connected directly to
each other due to the difference in access interface.
[0014] Specifically, an object of the invention is to provide a
method and a system for backing up data stored in an I/O subsystem
of an open system from a mainframe not directly connected to the
I/O subsystem.
[0015] Another object of the invention is to provide a method and a
computer system in which a mainframe is capable of accessing a
memory of an I/O subsystem of an open system not directly connected
to the mainframe.
[0016] Still another object of the invention is to provide a system
and a method of access in which two or more I/O subsystems having
different interfaces can be connected to a mainframe.
[0017] In order to achieve the above-mentioned objects, according
to one aspect of the present invention, there is provided a
heterogeneous computer system comprising a first host computer, a
first I/O subsystem directly connected to the first host computer
by an interface of variable-length record format and including at
least one external memory, a second host computer, a second I/O
subsystem directly connected to the second host computer by an
interface of fixed-length record format and including at least one
external memory, and a communication unit for connecting the first
I/O subsystem to the second I/O subsystem;
[0018] wherein the first I/O subsystem includes a table for storing
a device address of an external memory, data indicating one of the
external memory of the first I/O subsystem and the external memory
of the second I/O subsystem to which the device address is
assigned, and a device address of the external memory in the second
I/O subsystem when the device address is assigned to the external
memory of the second I/O subsystem; and
[0019] wherein upon receipt of a read/write request conforming to
the interface of variable-length record format from the first host
computer and including an address of an external memory to be read
from or written into, and upon decision, with reference to the
table, that the external memory address included in the read/write
request is assigned to the external memory included in the second
I/O subsystem, the first I/O subsystem converts the read/write
request into a second read/write request conforming to the
interface of fixed-length record format and sends the second
read/write request to the second I/O subsystem.
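The address table and routing behavior described in this aspect can be sketched in code. This is a hypothetical illustration only: the class names, table fields, and the stubbed conversion step are assumptions, not an implementation prescribed by the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MappingEntry:
    """One row of the first I/O subsystem's table (illustrative fields)."""
    local_device: bool                            # address assigned to a local external memory?
    remote_controller_addr: Optional[str] = None  # address of the second disk controller
    remote_device_addr: Optional[int] = None      # device address within the second I/O subsystem

class FirstIOSubsystem:
    def __init__(self, table):
        self.table = table  # device address -> MappingEntry

    def handle_request(self, device_addr, request):
        entry = self.table[device_addr]
        if entry.local_device:
            # Address belongs to this subsystem: execute locally (stubbed).
            return ("local", device_addr, request)
        # Address is assigned to an external memory of the second I/O
        # subsystem: convert the variable-length-format request into a
        # fixed-length-format one and forward it (conversion stubbed).
        converted = {"op": request["op"], "format": "fixed-length",
                     "device": entry.remote_device_addr}
        return ("forwarded", entry.remote_controller_addr, converted)

table = {
    0x00: MappingEntry(local_device=True),
    # A vacant local device address assigned to a disk of the second subsystem:
    0x10: MappingEntry(False, "disk-controller-B", 0x03),
}
subsystem = FirstIOSubsystem(table)
route, target, req = subsystem.handle_request(0x10, {"op": "read", "format": "variable-length"})
print(route, target, req["device"])
```

The key design point of this aspect is that the host never addresses the second subsystem directly; it only sees device addresses of the first subsystem, and the table decides whether a request is served locally or converted and forwarded.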
[0020] According to another aspect of the invention, there is
provided a heterogeneous computer system comprising a first host
computer, a first I/O subsystem directly connected to the first
host computer by an interface of variable length record format and
including at least one external memory, a back-up system connected
to the first host computer, a second host computer, a second I/O
subsystem directly connected to the second host computer by an
interface of fixed length record format and including at least one
external memory, and a communication unit for connecting the first
I/O subsystem to the second I/O subsystem;
[0021] wherein the first host computer includes a means for issuing
to the first I/O subsystem a read request conforming to the
interface of variable-length record format and containing the
address of an external memory from which data is to be read, and
backing up the data received from the first I/O subsystem into the
back-up system;
[0022] wherein the first I/O subsystem includes a table for storing
the device address of an external memory, data indicating which one
of the external memories of the first and the second I/O subsystems
the device address is assigned to, and the device address of the
external memory in the second I/O subsystem when the first device
address is assigned to the external memory of the second I/O
subsystem; and
[0023] wherein upon receipt from the first host computer of a read
request conforming to the interface of variable-length record
format including an external memory address to be read, and upon
decision, with reference to the above mentioned table, that the
device address of the memory address included in the read request
is assigned to the external memory included in the second I/O
subsystem, the first I/O subsystem converts the read request into a
second read request conforming to the fixed-length interface and
sends the second read request to the second I/O subsystem while at
the same time sending to the first host computer the data received
from the second I/O subsystem.
[0024] According to still another aspect of the invention, there is
provided a heterogeneous computer system comprising a first host
computer, a first I/O subsystem directly connected to the first
host computer by an interface of variable length record format and
including at least one external memory, a back-up system connected
to the first host computer, a second host computer, a second I/O
subsystem directly connected to the second host computer by an
interface of fixed length record format and including at least one
external memory, and a communication unit for connecting the first
I/O subsystem to the second I/O subsystem;
[0025] wherein the first host computer includes a means for issuing
to the first I/O subsystem a write request conforming to the
interface of variable-length record format including the address of
an external memory into which data is to be written, and sending
the data read from the back-up system to the first I/O
subsystem;
[0026] wherein the first I/O subsystem includes a table for storing
the device address of an external memory, data indicating which one
of the external memories of the first and the second I/O subsystems
the device address is assigned to, and the device address of the
external memory in the second I/O subsystem when the first device
address is assigned to the external memory of the second I/O
subsystem; and
[0027] wherein upon receipt from the first host computer of a write
request conforming to the interface of variable-length record
format including the device address of an external memory to be
written into, and upon decision, with reference to the table, that
the address of the external memory included in the write request is
assigned to the external memory included in the second I/O
subsystem, the first I/O subsystem converts the write request into
a second write request conforming to the interface of fixed-length
record format, and sends the second write request to the second I/O
subsystem while at the same time sending the data received from the
first host computer to the second I/O subsystem.
[0028] According to yet another aspect of the invention, there is
provided a heterogeneous I/O system for use with a host computer
connected thereto, comprising a first I/O subsystem including at
least one external memory, and a second I/O subsystem connected to
the first I/O subsystem and including at least one external
memory;
[0029] wherein the first I/O subsystem includes a table for storing
a device address of an external memory, data indicating one of the
external memories of the first and the second I/O subsystems to
which the device address is assigned, and a device address of the
external memory in the second I/O subsystem when the first device
address is assigned to the external memory of the second I/O
subsystem;
[0030] wherein upon receipt from the host computer of a read/write
request designating the device address of an external memory to be
read from or written into, and upon decision, with reference to the
table, that the designated device address is assigned to the
external memory included in the second I/O subsystem, the first I/O
subsystem sends the read/write request to the second I/O
subsystem.
[0031] According to a further aspect of the invention, there is
provided a heterogeneous I/O system for use with a host computer
connected thereto, comprising a first I/O subsystem having an
interface of variable-length record format and including at least
one external memory, a second I/O subsystem having an interface of
fixed-length record format and including at least one external
memory, and a communication unit for connecting the first I/O
subsystem to the second I/O subsystem;
[0032] wherein the first I/O subsystem includes a table for storing
a device address of an external memory, data indicating one of the
external memories of the first and the second I/O subsystems to
which the device address is assigned, and a device address of the
external memory in the second I/O subsystem when the first device
address is assigned to the external memory of the second I/O
subsystem; and
[0033] wherein upon receipt from the host computer of a read/write
request conforming to the interface of variable-length record
format including the address of an external memory to be read from
or written into, and upon decision, with reference to the table,
that the external memory address included in the read/write request
is assigned to the external memory included in the second I/O
subsystem, the first I/O subsystem converts the read/write request
into a second read/write request conforming to the interface of
fixed-length record format and sends it to the second I/O
subsystem.
[0034] According to an embodiment of the invention, there is
provided a heterogeneous computer system in which an I/O subsystem
for an open system is connected to an I/O subsystem for a mainframe
by a communication unit. In order to access the data in the I/O
subsystem for the open system from the mainframe, so that the data
in the disk connected to the I/O subsystem for the open system can
be backed up in a magnetic tape library system, a table is prepared
for assigning a vacant address of the memory in the local subsystem
to the memory of the I/O subsystem for the open system. A request
of variable-length record format received from the mainframe is
converted into a request of fixed-length record format for the open
system, the disk designated according to the table is accessed, and
the data thus obtained is sent to the mainframe and backed up in
the back-up system.
[0035] This configuration can back up the data of an I/O subsystem
for an open system in a back-up system under the management of a
mainframe not directly connected to the particular I/O
subsystem.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] FIG. 1 is a block diagram showing a configuration of a
heterogeneous computer system according to an embodiment of the
present invention.
[0037] FIG. 2 is a block diagram showing a configuration of a
heterogeneous computer system according to another embodiment of
the invention.
[0038] FIG. 3 is a block diagram showing a configuration of a disk
controller of the heterogeneous computer system shown in FIGS. 1
and 2.
[0039] FIG. 4 is a diagram showing a configuration of a local
controller-connected disk data (table) for the systems shown in
FIGS. 1 and 2.
[0040] FIG. 5 is a diagram showing a configuration of a remote
controller-connected disk data (table) for the systems shown in
FIGS. 1 and 2.
[0041] FIG. 6 is a diagram showing the interconnection of disk
devices as viewed from the mainframe.
[0042] FIG. 7 is a diagram showing an example of the processing
flow of a disk controller A in the case where the data in an I/O
subsystem for an open system is backed up in an MT library system
of the mainframe.
[0043] FIG. 8 is a diagram showing an example of the processing
flow of a disk controller A in the case where data are restored in
an I/O subsystem for an open system from an MT library system of
the mainframe.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0044] Embodiments of the invention will be described below with
reference to the accompanying drawings.
[0045] FIG. 1 is a diagram showing a configuration of a computer
system according to an embodiment of the invention.
[0046] A processing system A 100 includes a mainframe 101, a
channel interface A 102, a channel interface B 103, a magnetic tape
(MT) controller 106, a magnetic tape library controller 130, a
magnetic tape library 107, a disk controller A 104, a disk drive
group A 105 and a service processor A 109. A back-up processing
device 162 and a restore processing device 164 are mounted on the
mainframe 101.
[0047] The mainframe 101 accesses the disk controller A 104 through
the channel interface B 103 conforming with a variable-length
record format called the count-key data format.
[0048] The count-key-data format is a record format in which a
record constituting a unit of read/write operation is configured of
three fields including a count field, a key field and a data
field.
[0049] A record ID is stored in the count field, a key data for
accessing the record is stored in the key field, and the data used
by an application program is stored in the data field.
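The three-field record described in paragraphs [0048] and [0049] can be sketched as a small data structure. This is an illustrative simplification: the field types and the example values are assumptions, and a real count field also carries additional metadata not modeled here.

```python
from dataclasses import dataclass

@dataclass
class CKDRecord:
    """Sketch of one count-key-data record, the unit of read/write."""
    count: bytes  # count field: holds the record ID
    key: bytes    # key field: key data used to access the record
    data: bytes   # data field: the data used by the application program

# A record identified by ID 0x0001 and located via the key "EMP001"
# (hypothetical values for illustration):
record = CKDRecord(count=b"\x00\x01", key=b"EMP001", data=b"application payload")
print(record.key)
```

The contrast with the open system's fixed-length (FBA) format, discussed below, is that an FBA request addresses equal-sized blocks by number, so it has no per-record count or key fields to carry across the conversion.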
[0050] In the description that follows, the magnetic tape (MT)
controller 106, the magnetic tape library controller 130 and the
magnetic tape library 107 are collectively referred to as an MT
library system 116. The disk controller A 104 and the disk drive
group A 105 constitute an I/O subsystem 10 connected to the
mainframe 101. In similar fashion, the disk controller B 113 and
the disk drive group B 114 constitute an I/O subsystem 20 connected
to a host 111 for an open system.
[0051] An optical disk or the like, as well as a magnetic disk,
constitutes a rank of storage hierarchy connected through the
channel interface. The following description refers to the case in
which the MT library system 116 is connected.
[0052] The disk controller A 104 contains local
controller-connected disk data 314 and remote controller-connected
disk data 315.
[0053] The local controller-connected disk data 314 and the remote
controller-connected disk data 315 are data provided for making it
possible for the mainframe to access a disk device of the I/O
subsystem not directly connected thereto. Specifically, the data
314 and 315 are a table for assigning a vacant address of the
memory in the local I/O subsystem for the processing system A to
the memory of the I/O subsystem for the open system so that the
data in the I/O subsystem 20 for the processing system B can be
accessed from the mainframe 101. The data 314 and 315 will be
described in detail later.
[0054] The processing system B 110 includes a host 111 for the open
system, a SCSI (small computer system interface) 112, the disk
controller B 113, the disk drive group B 114 and a service
processor B 115.
[0055] The host 111 for the open system accesses the disk
controller B 113 through the SCSI 112 having a fixed-length record
which is a unit of read/write operation.
[0056] The disk controller A 104 and the disk controller B 113 are
connected by a communication line 108. The communication line 108
can be, for example, a SCSI cable B 117.
[0057] In the description that follows, the count-key-data format
will be called the CKD format, and the fixed-length block format
will be called an FBA (fixed block architecture) format.
[0058] Also, the record of the CKD format will be referred to as
the CKD record, and the record of the FBA format will be referred
to as the FBA record.
[0059] FIG. 2 is a diagram showing another example of a computer
system according to the invention, in which a single I/O subsystem,
for the mainframe is connected to two or more I/O subsystems for an
open system.
[0060] In a processing system X 120, the interfaces of an open
system host X 121 and a disk controller X 123 are connected to each
other by a fiber channel interface 122. The fiber channel interface
122 is an optical fiber cable which can increase the length of
connection between a host and a control device.
[0061] In many cases, however, a fiber channel interface based on
SCSI is employed between a host and a control device.
[0062] Also, an interface such as a fiber channel interface X 126
can be used to connect a disk controller X 123 and the disk
controller B 113.
[0063] The data back-up system in the configuration of FIG. 2 is an
expansion of the data back-up system in the configuration of FIG.
1.
[0064] The fundamental operation of each system is such that the
mainframe 101 and the hosts 111 and 121 for the open system access
the magnetic tape library 107 constituting an external memory or
the disk drive group A 105, the disk drive group B 114 and the disk
drive group X 124 through each interface.
[0065] The process in the mainframe 101 establishes a route to the
data stored externally through each interface under the control of
an arbitrary operating system such as Hitachi's VOS3
(virtual-storage operating system 3) for supporting the channel
interface, while the process in the host for the open system
establishes a route to the externally-stored data through each
interface under the control of an arbitrary operating system such
as UNIX (a registered trade mark owned by X/Open in U.S.A. and
other countries) for supporting the SCSI.
[0066] FIG. 3 is a diagram showing a configuration of the disk
controller A 104.
[0067] The disk controller A 104 includes an MPU 302 for executing a
control system process 307 of the disk controller, a memory 301, a
host data transfer device 303, a disk/cache device 304, an
inter-I/O subsystem data transfer device 305, a data transfer
device 306 and a control bus 308 for connecting these devices.
[0068] The control system process 307 operates in a multitask or
multiprocessor environment.
[0069] The memory 301 includes various microprograms 312 and
various data 313.
[0070] Especially, the disk controller A 104 has stored therein the
local controller-connected disk data 314 and the remote
controller-connected disk data 315, as described above with
reference to FIG. 1.
[0071] The disk controller B 113 and the disk controller X 123 have
a configuration similar to the disk controller A 104 and will not
be described in detail.
[0072] The disk controller B 113 and the disk controller X 123,
however, are not required to contain the local controller-connected
disk data 314 and the remote controller-connected disk data
315.
[0073] The local controller-connected disk data 314 is the data
indicating the connections of the controllers and the like, and
stored in the memory 301 of the disk controller A 104. The local
controller-connected disk data 314 exists as the data corresponding
to each disk device.
[0074] The local controller-connected disk data 314 is shown in
FIG. 4.
[0075] The device address 400 is an identifier (ID) for
discriminating a disk device to be read from or written into by a
host computer such as the mainframe 101, and is the data also
contained in the read/write request issued by the host computer
such as the mainframe 101.
[0076] Local controller connection data 401 is the data indicating
whether or not the disk drive corresponding to the local
controller-connected disk data 314 is actually connected to the
local controller.
[0077] A remote controller connection pointer 402 indicates whether
or not the local controller-connected disk data 314 is assigned to a
disk drive connected to a remote controller.
[0078] In the case where such data is assigned to a disk drive
connected to a remote controller, the pointer indicates the
corresponding remote controller-connected disk data 315. Otherwise,
the pointer assumes a null value.
[0079] In the case where the remote controller connection pointer
402 is valid (i.e., in the case where the particular device address
400 is assigned to a disk device connected to a remote controller),
it represents the state in which the local controller connection
data 401 is not assigned.
[0080] In the case where the remote controller connection pointer
402 is invalid (i.e., in the case where the device address 400 is
not assigned to a disk drive connected to a remote controller), on
the other hand, the local controller connection data 401 may
indicate the state of no-assignment.
[0081] In other words, the device address 400 may be assigned to
neither a disk device connected to a local controller nor a disk
device connected to a remote controller.
[0082] An attribute 403 is the data unique to a device including
the interface, the function, the data format and the block length
of the disk drive.
[0083] The remote controller-connected disk data 315 shown in FIG. 5
is the data corresponding to a disk drive not directly connected to
the disk controller A 104.
[0084] It follows therefore that each remote controller-connected
disk data 315 is pointed to by one of the local
controller-connected disk data 314.
[0085] A connection controller address 500 represents the address
of a controller connected with a disk device corresponding to the
remote controller-connected disk data 315. According to this
embodiment, the address of the disk controller B 113 is stored as
the connection controller address 500.
[0086] A disk address 501 represents the address assigned in the
controller actually connected to a corresponding disk drive.
[0087] The local controller-connected disk data 314 and the remote
controller-connected disk data 315 are set from the service
processor 109.
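The two tables described above can be sketched as follows; this is a minimal illustration assuming Python dataclasses, in which the field names (mirroring reference numerals 400 to 403, 500 and 501 of FIGS. 4 and 5) and the helper `resolve` are hypothetical stand-ins, not names taken from the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemoteControllerConnectedDiskData:
    """Remote controller-connected disk data 315 (FIG. 5)."""
    connection_controller_address: str  # address 500 of the connected controller
    disk_address: str                   # address 501 within that controller

@dataclass
class LocalControllerConnectedDiskData:
    """Local controller-connected disk data 314 (FIG. 4)."""
    device_address: str                                  # identifier 400 seen by the host
    locally_connected: bool                              # connection data 401
    remote: Optional[RemoteControllerConnectedDiskData]  # pointer 402 (None = null)
    attribute: str = ""                                  # attribute 403

def resolve(entry: LocalControllerConnectedDiskData):
    """Classify a device address as local, remote, or unassigned."""
    if entry.remote is not None:       # pointer 402 valid -> remote disk
        return ("remote", entry.remote.connection_controller_address,
                entry.remote.disk_address)
    if entry.locally_connected:        # data 401 set -> locally connected disk
        return ("local", None, None)
    return ("unassigned", None, None)  # assigned to neither controller (paragraph [0081])
```

As in paragraphs [0079] to [0081], a valid pointer 402 implies the local flag 401 is unset, and an entry may be assigned to neither controller.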
[0088] According to this embodiment, the mainframe 101 recognizes
that the disk drive group B 114 (disks C and D) is also connected
to the disk controller A 104 through the disk controller B 113, as
shown in FIG. 6, taking advantage of the local controller-connected
disk data 314 and the remote controller-connected disk data 315
shown in FIGS. 4 and 5.
[0089] This is because a vacant disk drive address available in the
disk controller A 104 is assigned by the disk controller A 104 to a
disk drive of the I/O subsystem for the open system.
[0090] Now, the back-up processing will be described with reference
to FIGS. 1, 7 and 8.
[0091] Specifically, in FIG. 1 the back-up process 162 on the
mainframe 101 causes the data in the disk device group B 114 of the
open system of the processing system B to be backed up in the MT
library system 116 through the disk controller A 104 and the
mainframe 101 of the processing system A.
[0092] Conversely, the data backed up in the MT library system 116
is restored in the disk drive group B 114 of the open system of the
processing system B through the mainframe 101 and the disk
controller A 104 of the processing system A.
[0093] The back-up operation and the restoration described above
are executed in response to a command from the mainframe 101.
[0094] First, an explanation will be given of the case in which the
data in the disk drive group B 114 of the open system for the
processing system B is backed up in the MT library system 116
through the disk controller A 104 and the mainframe 101 of the
processing system A.
[0095] As already described above, the mainframe 101 has recognized
that the disk drive group B 114 (disks C and D) is also connected
to the disk controller A 104. Therefore, the operation of the
mainframe 101, which is simply to issue a read request to the disk
controller A 104 and back up the received data in the MT library
system 116, will not be described specifically.
[0096] In the case of backing up data into the MT library system
116, the mainframe 101 issues a read request to the disk controller
A 104. The disk controller A 104 executes the process in accordance
with the flowchart of FIG. 7 in response to a read request from the
mainframe 101.
[0097] First, step 700 finds out a corresponding local
controller-connected disk data 314 from the address of the disk
drive designated in the read request.
[0098] Step 701 checks whether the designated disk drive is
connected to the disk controller A 104 or not.
[0099] In the case where the disk drive is connected to the disk
controller A 104, step 702 reads the corresponding data from the
particular disk drive.
[0100] In the case where the disk drive is not connected to the
disk controller A 104, in contrast, step 703 checks whether the
designated disk drive is connected to a remote disk controller
(disk controller B 113). In other words, it checks whether the
remote controller connection pointer 402 assumes a null value.
[0101] In the case where the check result shows that the remote
controller connection pointer 402 assumes a null value indicating
that the designated disk drive is not connected to the remote disk
controller, an error is reported in step 704.
[0102] The operation specifically related to the invention is
represented by step 705 and subsequent steps executed in the case
where a designated disk drive is connected to a remote disk
controller (disk controller B 113).
[0103] First, in the case where the check result shows that the
remote controller connection pointer 402 does not assume the null
value indicating that the designated disk drive is connected to a
remote disk controller, step 705 finds out the remote
controller-connected disk data 315 corresponding to the designated
disk drive based on the remote controller connection pointer 402.
Then, the address of the disk controller (disk controller B 113)
actually connected to the designated disk drive and the address of
the disk drive in the disk drive group B connected to the
particular disk controller B 113 are acquired on the basis of the
remote controller-connected disk data 315 found as above.
[0104] Then, step 706 converts the address of the data to be read
which has been received in the read request into the format of the
disk drive connected to the disk controller B 113.
[0105] In a read/write request from the mainframe 101, the address
of data to be read or written is normally designated by the
cylinder number, the head number and the record number according to
the CKD format.
[0106] The record address expressed by the cylinder number, the
head number and the record number will hereinafter be called
CCHHR.
[0107] The disk drive connected to the disk controller B 113, on
the other hand, has an access interface designated by LBA (logical
block address) in accordance with the FBA format.
[0108] Consequently, step 706 converts the access address of the
data to be read from CKD format to FBA format.
[0109] The conversion formula is given, for example, by
LBA = (CC × number of heads + HH) × track length + record number ×
record length
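The conversion of step 706 can be sketched as the following function; the geometry parameters (number of heads, track length, record length) are illustrative inputs, since the specification does not fix their values.

```python
def cchhr_to_lba(cc: int, hh: int, record_number: int,
                 num_heads: int, track_length: int, record_length: int) -> int:
    """Convert a CKD record address (CCHHR) to an FBA logical block address.

    Implements: LBA = (CC x number of heads + HH) x track length
                      + record number x record length
    """
    return (cc * num_heads + hh) * track_length + record_number * record_length
```

For example, with 15 heads, a track length of 100 and a record length of 8, the record at cylinder 2, head 3, record 4 maps to LBA (2 × 15 + 3) × 100 + 4 × 8 = 3332.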
[0110] According to this embodiment, the disk controller A 104 and
the disk controller B 113 may have the same interface, in which
case the conversion of the input/output interface format is not
required.
[0111] Step 707 issues a request to the disk controller B 113 to
read the data from the area of the corresponding disk drive
calculated in step 706.
[0112] Step 708 waits for the arrival of the requested data from
the disk controller B 113.
[0113] Step 709 sends the data received from the disk controller B
113 to the mainframe 101, thereby completing the process.
[0114] The disk controller B 113 simply reads the data requested by
the disk controller A 104 from a disk drive, and sends it to the
disk controller A 104. This process, therefore, is not described
specifically in the processing flow.
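The read handling of FIG. 7 can be sketched as below. This is a minimal illustration, assuming the tables are plain dictionaries; `read_local`, `read_remote` and the fixed disk geometry are hypothetical stand-ins for the controller's actual disk access and inter-I/O-subsystem transfer.

```python
def cchhr_to_lba_sketch(cchhr, num_heads=15, track_length=100, record_length=8):
    """Step 706: CKD (CCHHR) to FBA (LBA) conversion, with assumed geometry."""
    cc, hh, record = cchhr
    return (cc * num_heads + hh) * track_length + record * record_length

def handle_read(disk_table, device_address, cchhr, read_local, read_remote):
    entry = disk_table[device_address]                # step 700: find data 314
    if entry["locally_connected"]:                    # step 701
        return read_local(device_address, cchhr)      # step 702: local read
    remote = entry["remote"]                          # step 703: pointer 402
    if remote is None:
        raise IOError("device address not assigned")  # step 704: report error
    controller = remote["controller_address"]         # step 705: from data 315
    disk = remote["disk_address"]
    lba = cchhr_to_lba_sketch(cchhr)                  # step 706: convert address
    return read_remote(controller, disk, lba)         # steps 707-709: remote read
```

A request whose address resolves through pointer 402 is thus forwarded to the remote controller with a converted LBA; all other requests are served locally or rejected.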
[0115] Next, an explanation will be given of a case in which data
backed up in the MT library system 116 is restored by the restore
process 164 on the mainframe 101 in the disk drive group B 114 of
the open system of the processing system B through the disk
controller A 104 and the mainframe 101 of the processing system
A.
[0116] As described already above, the mainframe 101 has recognized
that the disk drive group B 114 (disks C and D) is also connected
to the disk controller A 104.
[0117] Therefore, no explanation will be given of the operation of
the mainframe 101 which is simply to issue a write request to the
disk controller A 104 to write the data read from the MT library
system 116.
[0118] Upon receipt of a write request from the mainframe 101, the
disk controller A 104 executes the process in accordance with the
flowchart of FIG. 8.
[0119] In the processing flow of FIG. 8, steps 800 to 801 and 803
to 806 are similar to steps 700 to 701 and 703 to 706 in FIG. 7,
respectively, and therefore will not be explained. Also, step 802
is normally a write operation, since the request from the mainframe
101 is a write request. Only the parts different from FIG. 7 will
be described below.
[0120] Step 807 issues a request to the disk controller B 113 to
write data in the area of the corresponding disk drive calculated
in step 806.
[0121] Next, in step 808, the write data is received from the
mainframe 101 and sent to the disk controller B 113.
[0122] Then, step 809 waits for a report on the completion of the
write request from the disk controller B 113, and upon receipt of
the completion report, sends it to the mainframe 101 thereby to
complete the process.
[0123] The disk controller B 113 simply writes the data sent from
the disk controller A 104 into the corresponding disk drive and
reports the completion to the disk controller A 104. The related
processing flow, therefore, is not shown specifically.
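The remote-write path of FIG. 8 (steps 807 to 809) can be sketched as follows, assuming a remote-controller object whose `issue_write`, `send_data` and `wait_completion` methods are hypothetical stand-ins for the inter-I/O-subsystem interface; the ordering mirrors the flow, issuing the write request before forwarding the host data and relaying the completion report.

```python
def remote_write(remote_controller, disk_address, lba,
                 receive_from_host, report_to_host):
    remote_controller.issue_write(disk_address, lba)  # step 807: request the write
    data = receive_from_host()                        # step 808: get write data
    remote_controller.send_data(data)                 #           forward to controller B
    completion = remote_controller.wait_completion()  # step 809: wait for report
    report_to_host(completion)                        #           relay to mainframe
    return completion
```

Note that the disk controller A 104 acts purely as a relay here: the address conversion has already been done in step 806, and the data itself passes through without modification.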
[0124] The foregoing description concerns a system for backing up
data of the disk drive group B 114 of the open system of the
processing system B by the processing system A. As another
embodiment, a heterogeneous I/O subsystem can be configured in
which only the disk controller B and the disk drive group B are
connected to the processing system A and the mainframe is connected
with two I/O subsystems having different interfaces. In such a
case, three or more instead of two I/O subsystems can be
connected.
[0125] The above-mentioned embodiment permits data to be backed up
between I/O subsystems having different access interfaces.
[0126] As a result, data stored in an I/O subsystem for an open
system can be backed up into an I/O subsystem for the
mainframe.
[0127] Also, the back-up mechanism of the mainframe includes a
large-capacity, high-performance and high-reliability MT library
system. The data of the I/O subsystem for an open system,
therefore, can be backed up by a mainframe back-up mechanism high in
performance and reliability.
[0128] Further, different I/O subsystems can be connected to the
mainframe.
* * * * *