U.S. patent number 9,158,630 [Application Number 14/133,945] was granted by the patent office on 2015-10-13 for testing integrity of replicated storage.
This patent grant is currently assigned to EMC Corporation. The grantee listed for this patent is EMC Corporation. Invention is credited to Assaf Natanzon.
United States Patent 9,158,630
Natanzon
October 13, 2015

Testing integrity of replicated storage
Abstract
In one aspect, a method includes marking a list of locations
currently being shipped to a replica site, generating coarse
granularity hash signatures of data for each area of memory in the
snapshot volume, generating fine granularity hash signatures of
data for each subarea of memory in the snapshot volume, sending
coarse granularity hash signatures to the replica site for each
area not being changed in a production volume and sending fine
granularity hash signatures to the replica site for each area being
changed in the production volume. The snapshot volume is a snapshot
of the production volume.
Inventors: Natanzon; Assaf (Tel Aviv, IL)
Applicant: EMC Corporation (Hopkinton, MA, US)
Assignee: EMC Corporation (Hopkinton, MA)
Family ID: 54252673
Appl. No.: 14/133,945
Filed: December 19, 2013
Current U.S. Class: 1/1
Current CPC Class: G06F 3/065 (20130101); G06F 11/2097 (20130101); G06F 11/2094 (20130101); G06F 3/067 (20130101); G06F 3/0619 (20130101); G06F 2201/83 (20130101); G06F 2201/855 (20130101); G06F 2201/84 (20130101)
Current International Class: G06F 12/00 (20060101); G06F 13/00 (20060101); G06F 11/14 (20060101); G06F 3/06 (20060101)
References Cited [Referenced By]

U.S. Patent Documents

Foreign Patent Documents
1154356        Nov 2001    EP
WO 00 45581    Aug 2000    WO
Other References
Gibson, "Five Point Plan Lies at the Heart of Compression
Technology:" Apr. 29, 1991; p. 1. cited by applicant .
Soules, "Metadata Efficiency in Versioning File Systems;" 2003; pp.
1-16. cited by applicant .
AIX System Management Concepts: Operating Systems and Devices; May
2000; pp. 1-280. cited by applicant .
Soules et al.; "Metadata Efficiency in a Comprehensive Versioning
File System;" May 2002; CMU-CS-02-145; School of Computer Science,
Carnegie Mellon University, Pittsburgh, PA 15213; 33 pages. cited
by applicant .
Linux Filesystems; Sams Publishing; 2002; pp. 17-22 and 67-71.
cited by applicant .
Bunyan, "Multiplexing in a BrightStor® ARCserve® Backup
Release 11;" Mar. 2004; pp. 1-4. cited by applicant .
Marks, "Network Computing;" Feb. 2, 2006; pp. 1-8. cited by
applicant .
Hill, "Network Computing;" Jun. 8, 2006; pp. 1-9. cited by
applicant .
Microsoft Computer Dictionary; Microsoft Press; Fifth Edition; 2002;
2 pages. cited by applicant .
Retrieved from http://en.wikipedia.org/wiki/DEFLATE; DEFLATE; Jun.
19, 2008; pp. 1-6. cited by applicant .
Retrieved from http://en.wikipedia.org/wiki/Huffman_coding;
Huffman Coding; Jun. 8, 2008; pp. 1-11. cited by applicant .
Retrieved from http://en.wikipedia.org/wiki/LZ77; LZ77 and LZ78;
Jun. 17, 2008; pp. 1-2. cited by applicant .
U.S. Appl. No. 11/609,560 downloaded Jan. 7, 2015 265 pages. cited
by applicant .
U.S. Appl. No. 12/057,652 downloaded Jan. 7, 2015 296 pages. cited
by applicant .
U.S. Appl. No. 11/609,561 downloaded Jan. 7, 2015 219 pages. cited
by applicant .
U.S. Appl. No. 11/356,920 downloaded Jan. 7, 2015 272 pages. cited
by applicant .
U.S. Appl. No. 10/512,687 downloaded Jan. 7, 2015 Part 1 of 2; 300
pages. cited by applicant .
U.S. Appl. No. 10/512,687 downloaded Jan. 7, 2015 Part 2 of 2; 254
pages. cited by applicant .
U.S. Appl. No. 11/536,233 downloaded Jan. 7, 2015 256 pages. cited
by applicant .
U.S. Appl. No. 11/536,215 downloaded Jan. 7, 2015 172 pages. cited
by applicant .
U.S. Appl. No. 11/536,160 downloaded Jan. 7, 2015 230 pages. cited
by applicant .
U.S. Appl. No. 11/964,168 downloaded Jan. 7, 2015 222 pages. cited
by applicant.
Primary Examiner: Rossiter; Sean D
Attorney, Agent or Firm: Daly, Crowley, Mofford &
Durkee, LLP
Claims
What is claimed is:
1. A method comprising: marking a list of locations currently being
shipped to a replica site; generating coarse granularity hash
signatures of data for each area of memory in the snapshot volume,
the snapshot volume being a snapshot of a production volume;
generating fine granularity hash signatures of data for each
subarea of memory in the snapshot volume; sending coarse
granularity hash signatures to the replica site for each area not
being changed in the production volume; and sending fine
granularity hash signatures to the replica site for each area being
changed in the production volume.
2. The method of claim 1, further comprising: receiving from the
replica site fine granularity hash signatures of data for subareas
of memory in a replica volume; and comparing the fine granularity
hash signatures from the replica site with the fine granularity
hash signatures of the production site for corresponding subareas
of memory.
3. The method of claim 2, further comprising: comparing the fine
granularity hash signatures received from the production site with
the fine granularity hash signatures of the replica volume for
corresponding subareas of memory; and sending from the replica site
the potential locations of corruption in the replica volume for the
fine granularity hash signatures received from the production site
that do not match with the fine granularity hash signatures of the
replica volume.
4. The method of claim 3, further comprising: comparing the coarse
granularity hash signatures received from the production site with
the coarse granularity hash signatures of the replica volume for
corresponding areas of memory; and sending, from the replica site
to the production site, fine granularity hash signatures for those
subareas within an area of memory where the coarse granularity hash
signature received from the production site does not match the
coarse granularity hash signature of the corresponding area in the
replica volume.
5. The method of claim 1, wherein a sub-area is equal to a smallest
block used in snapshot shipping.
6. The method of claim 1, further comprising checking first at
least one of areas within the replica volume with a higher recent
activity during an integrity test or areas within the replica
volume which are not write active during an integrity test.
7. The method of claim 1, further comprising, if a new snapshot of
the production volume is configured to be shipped to the replica
site, adding differences between fine granularity hash signatures
from the replica site and the fine granularity hash signatures at
the production site to a suspected difference list.
8. The method of claim 7, further comprising: removing entries from
the suspected difference list for locations being updated in the
production volume; and reporting the suspected list as errors if
the suspected list is not empty.
9. The method of claim 1, further comprising pausing shipping a
snapshot of the production volume from the production site to the
replica site if an integrity check of a portion of a volume is not
complete.
10. The method of claim 9, further comprising reporting as errors
differences between fine granularity hash signatures from the
replica site and the fine granularity hash signatures at the
production site.
11. An apparatus, comprising: electronic hardware circuitry
configured to: mark a list of locations currently being shipped to
a replica site; generate coarse granularity hash signatures of data
for each area of memory in the snapshot volume, the snapshot volume
being a snapshot of a production volume; generate fine granularity
hash signatures of data for each subarea of memory in the snapshot
volume; send coarse granularity hash signatures to the replica site
for each area not being changed in the production volume; and send
fine granularity hash signatures to the replica site for each area
being changed in the production volume; wherein the circuitry
comprises at least one of a processor, a memory, a programmable
logic device or a logic gate.
12. The apparatus of claim 11, further comprising circuitry
configured to: receive from the replica site fine granularity hash
signatures of data for subareas of memory in a replica volume;
compare the fine granularity hash signatures from the replica site
with the fine granularity hash signatures of the production site
for corresponding subareas of memory; compare the fine granularity
hash signatures received from the production site with the fine
granularity hash signatures of the replica volume for corresponding
subareas of memory; send from the replica site the potential
locations of corruption in the replica volume for the fine
granularity hash signatures received from the production site that
do not match with the fine granularity hash signatures of the
replica volume; compare the coarse granularity hash signatures
received from the production site with the coarse granularity hash
signatures of the replica volume for corresponding areas of memory;
and send, from the replica site to the production site, fine
granularity hash signatures for those subareas within an area of
memory where the coarse granularity hash signature received from
the production site does not match the coarse granularity hash
signature of the corresponding area in the replica volume.
13. The apparatus of claim 11, further comprising circuitry
configured to check first at least one of areas within the replica
volume with a higher recent activity during an integrity test or
areas within the replica volume which are not write active during
an integrity test.
14. The apparatus of claim 11, further comprising circuitry
configured to, if a new snapshot of the production volume is
configured to be shipped to the replica site: add differences
between fine granularity hash signatures from the replica site and
the fine granularity hash signatures at the production site to a
suspected difference list; remove entries from the suspected
difference list for locations being updated in the production
volume; and report the suspected list as errors if the suspected
list is not empty.
15. The apparatus of claim 11, further comprising circuitry
configured to: pause shipping a snapshot of the production volume
from the production site to the replica site if an integrity check
of a portion of a volume is not complete; and report as errors
differences between fine granularity hash signatures from the
replica site and the fine granularity hash signatures at the
production site.
16. An article comprising: a non-transitory computer-readable
medium that stores computer-executable instructions, the
instructions causing a machine to: mark a list of locations
currently being shipped to a replica site; generate coarse
granularity hash signatures of data for each area of memory in the
snapshot volume, the snapshot volume being a snapshot of a
production volume; generate fine granularity hash signatures of
data for each subarea of memory in the snapshot volume; send coarse
granularity hash signatures to the replica site for each area not
being changed in the production volume; and send fine granularity
hash signatures to the replica site for each area being changed in
the production volume.
17. The article of claim 16, further comprising instructions
causing the machine to: receive from the replica site fine
granularity hash signatures of data for subareas of memory in a
replica volume; compare the fine granularity hash signatures from
the replica site with the fine granularity hash signatures of the
production site for corresponding subareas of memory; compare the
fine granularity hash signatures received from the production site
with the fine granularity hash signatures of the replica volume for
corresponding subareas of memory; send from the replica site the
potential locations of corruption in the replica volume for the
fine granularity hash signatures received from the production site
that do not match with the fine granularity hash signatures of the
replica volume; compare the coarse granularity hash signatures
received from the production site with the coarse granularity hash
signatures of the replica volume for corresponding areas of memory;
and send, from the replica site to the production site, fine
granularity hash signatures for those subareas within an area of
memory where the coarse granularity hash signature received from
the production site does not match the coarse granularity hash
signature of the corresponding area in the replica volume.
18. The article of claim 16, further comprising instructions
causing the machine to check first at least one of areas within the
replica volume with a higher recent activity during an integrity
test or areas within the replica volume which are not write active
during an integrity test.
19. The article of claim 16, further comprising instructions
causing the machine to, if a new snapshot of the production volume
is configured to be shipped to the replica site: add differences
between fine granularity hash signatures from the replica site and
the fine granularity hash signatures at the production site to a
suspected difference list; remove entries from the suspected
difference list for locations being updated in the production
volume; and report the suspected list as errors if the suspected
list is not empty.
20. The article of claim 16, further comprising instructions
causing the machine to: pause shipping a snapshot of the production
volume from the production site to the replica site if an integrity
check of a portion of a volume is not complete; and report as
errors differences between fine granularity hash signatures from
the replica site and the fine granularity hash signatures at the
production site.
Description
BACKGROUND
Computer data is vital to today's organizations and a significant
part of protection against disasters is focused on data protection.
As solid-state memory has advanced to the point where cost of
memory has become a relatively insignificant factor, organizations
can afford to operate with systems that store and process terabytes
of data.
Conventional data protection systems include tape backup drives,
for storing organizational production site data on a periodic
basis. Another conventional data protection system uses data
replication, by creating a copy of production site data of an
organization on a secondary backup storage system, and updating the
backup with changes. The backup storage system may be situated in
the same physical location as the production storage system, or in
a physically remote location. Data replication systems generally
operate either at the application level, at the file system level,
or at the data block level.
SUMMARY
In one aspect, a method includes marking a list of locations
currently being shipped to a replica site, generating coarse
granularity hash signatures of data for each area of memory in the
snapshot volume, generating fine granularity hash signatures of
data for each subarea of memory in the snapshot volume, sending
coarse granularity hash signatures to the replica site for each
area not being changed in a production volume and sending fine
granularity hash signatures to the replica site for each area being
changed in the production volume. The snapshot volume is a snapshot
of the production volume.
In another aspect, an apparatus includes electronic hardware
circuitry configured to mark a list of locations currently being
shipped to a replica site, generate coarse granularity hash
signatures of data for each area of memory in the snapshot volume,
generate fine granularity hash signatures of data for each subarea
of memory in the snapshot volume, send coarse granularity hash
signatures to the replica site for each area not being changed in a
production volume and send fine granularity hash signatures to the
replica site for each area being changed in the production volume.
The snapshot volume is a snapshot of the production volume.
In a further aspect, an article includes a non-transitory
computer-readable medium that stores computer-executable
instructions. The instructions cause a machine to mark a list of
locations currently being shipped to a replica site, generate
coarse granularity hash signatures of data for each area of memory
in the snapshot volume, the snapshot volume being a snapshot of a
production volume, generate fine granularity hash signatures of
data for each subarea of memory in the snapshot volume, send coarse
granularity hash signatures to the replica site for each area not
being changed in the production volume and send fine granularity
hash signatures to the replica site for each area being changed in
the production volume.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an example of a data protection system
used with a continuous replication mode.
FIG. 2 is an illustration of an example of a journal history of
write transactions for a storage system.
FIG. 3 is a block diagram of an example of a data protection system
used with a snapshot shipping mode.
FIG. 4 is a flowchart of an example of a process to replicate data
from a production site to a replication site using the snapshot
shipping mode.
FIGS. 5 to 7 are flowcharts of an example of processes used to test
the integrity of a replicated storage.
FIGS. 8 and 9 are flowcharts of processes used to focus integrity
testing within a replicated storage.
FIG. 10 is a computer on which any of the processes of FIGS. 5 to 9
may be implemented.
DETAILED DESCRIPTION
Described herein are techniques to test the integrity of a
replicated storage.
The following definitions may be useful in understanding the
specification and claims.
BACKUP SITE--a facility where replicated production site data is
stored; the backup site may be located in a remote site or at the
same location as the production site;
BOOKMARK--a bookmark is metadata information stored in a
replication journal which indicates a point in time.
DATA PROTECTION APPLIANCE (DPA)--a computer or a cluster of
computers responsible for data protection services including inter
alia data replication of a storage system, and journaling of I/O
requests issued by a host computer to the storage system;
HASH SIGNATURE--a hash signature is generated using an algorithm
such as a cryptographic hash function (e.g., SHA-1 or SHA-2) and is
sometimes referred to herein as a signature (see the illustrative
sketch following these definitions);
HOST--at least one computer or network of computers that runs at
least one data processing application that issues I/O requests to
one or more storage systems; a host is an initiator within a SAN;
HOST DEVICE--an internal interface in a host, to a logical storage
unit;
IMAGE--a copy of a logical storage unit at a specific point in
time;
INITIATOR--a node in a SAN that issues I/O requests;
I/O DATA--Data that will be or is written to a volume by, for
example, an application, sometimes called write transaction data or
write data;
I/O REQUEST--an input/output request (sometimes referred to as an
I/O), which may be a read I/O request (sometimes referred to as a
read request or a read) or a write I/O request (sometimes referred
to as a write request or a write);
JOURNAL--a record of write transactions issued to a storage system;
used to maintain a duplicate storage system, and to roll back the
duplicate storage system to a previous point in time;
LOGICAL UNIT--a logical entity provided by a storage system for
accessing data from the storage system. The logical unit may be a
physical logical unit or a virtual logical unit;
LUN--a logical unit number for identifying a logical unit;
PHYSICAL LOGICAL UNIT--a physical entity, such as a disk or an
array of disks, for storing data in storage locations that can be
accessed by address;
PRODUCTION SITE--a facility where one or more host computers run
data processing applications that write data to a storage system
and read data from the storage system;
REMOTE ACKNOWLEDGEMENTS--an acknowledgement from remote DPA to the
local DPA that data arrived at the remote DPA (either to the
appliance or the journal);
SIGNATURE--a signature is a hash signature;
SPLITTER ACKNOWLEDGEMENT--an acknowledgement from a DPA to the
protection agent (splitter) that data has been received at the DPA;
this may be achieved by an SCSI status command;
SAN--a storage area network of nodes that send and receive an I/O
and other requests, each node in the network being an initiator or
a target, or both an initiator and a target;
SOURCE SIDE--a transmitter of data within a data replication
workflow, during normal operation a production site is the source
side; and during data recovery a backup site is the source side,
sometimes called a primary side;
STORAGE SYSTEM--a SAN entity that provides multiple logical units
for access by multiple SAN initiators;
TARGET--a node in a SAN that replies to I/O requests;
TARGET SIDE--a receiver of data within a data replication workflow;
during normal operation a backup site is the target side, and during
data recovery a production site is the target side, sometimes
called a secondary side;
VIRTUAL LOGICAL UNIT--a virtual storage entity which is treated as
a logical unit by virtual machines;
WAN--a wide area network that connects local networks and enables
them to communicate with one another, such as the Internet.
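As an illustration of the HASH SIGNATURE definition above, the
following minimal Python sketch (illustrative only; it is not part
of the patent text) computes a signature with SHA-1 from the
standard library:

    import hashlib

    def hash_signature(data: bytes) -> bytes:
        # A strong hash signature (here SHA-1) of a block of data;
        # identical data always yields an identical signature.
        return hashlib.sha1(data).digest()

    assert hash_signature(b"block A") == hash_signature(b"block A")
    assert hash_signature(b"block A") != hash_signature(b"block B")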
A description of journaling and some techniques associated with
journaling may be found in the patent titled "METHODS AND
APPARATUS FOR OPTIMAL JOURNALING FOR CONTINUOUS DATA REPLICATION,"
U.S. Pat. No. 7,516,287, which is hereby incorporated by
reference.
AN EXAMPLE OF A REPLICATION SYSTEM USED WITH A CONTINUOUS
REPLICATION MODE (FIGS. 1 AND 2)
Referring to FIG. 1, a data protection system 100 includes two
sites; Site I, which is a production site, and Site II, which is a
backup site or replica site. Under normal operation the production
site is the source side of system 100, and the backup site is the
target side of the system. The backup site is responsible for
replicating production site data. Additionally, the backup site
enables roll back of Site I data to an earlier point in time, which
may be used in the event of data corruption or a disaster, or
alternatively in order to view or to access data from an earlier
point in time.
FIG. 1 is an overview of a system for data replication of either
physical or virtual logical units. Thus, one of ordinary skill in
the art would appreciate that, in a virtual environment, a
hypervisor, in one example, would consume logical units and
generate a distributed file system on them, such as VMFS, which
creates files in the file system and exposes the files as logical
units to the virtual machines (each VMDK is seen as a SCSI device
by virtual hosts). In another example, the hypervisor consumes a
network based file system and exposes files in the NFS as SCSI
devices to virtual hosts.
During normal operations, the direction of replicate data flow goes
from source side to target side. It is possible, however, for a
user to reverse the direction of replicate data flow, in which case
Site I starts to behave as a target backup site, and Site II starts
to behave as a source production site. Such change of replication
direction is referred to as a "failover". A failover may be
performed in the event of a disaster at the production site, or for
other reasons. In some data architectures, Site I or Site II
behaves as a production site for a portion of stored data, and
behaves simultaneously as a backup site for another portion of
stored data. In some data architectures, a portion of stored data
is replicated to a backup site, and another portion is not.
The production site and the backup site may be remote from one
another, or they may both be situated at a common site, local to
one another. Local data protection has the advantage of minimizing
data lag between target and source, and remote data protection has
the advantage of being robust in the event that a disaster occurs
at the source side.
The source and target sides communicate via a wide area network
(WAN) 128, although other types of networks may be used.
Each side of system 100 includes three major components coupled via
a storage area network (SAN); namely, (i) a storage system, (ii) a
host computer, and (iii) a data protection appliance (DPA).
Specifically with reference to FIG. 1, the source side SAN includes
a source host computer 104, a source storage system 108, and a
source DPA 112. Similarly, the target side SAN includes a target
host computer 116, a target storage system 120, and a target DPA
124. As well, the protection agent (sometimes referred to as a
splitter) may run on the host, on the storage, in the network, or
at a hypervisor level; DPAs are optional, and DPA code may run on
the storage array too, or the DPA 124 may run as a virtual
machine.
Generally, a SAN includes one or more devices, referred to as
"nodes". A node in a SAN may be an "initiator" or a "target", or
both. An initiator node is a device that is able to initiate
requests to one or more other devices; and a target node is a
device that is able to reply to requests, such as SCSI commands,
sent by an initiator node. A SAN may also include network switches,
such as fiber channel switches. The communication links between
each host computer and its corresponding storage system may be any
appropriate medium suitable for data transfer, such as fiber
communication channel links.
The host communicates with its corresponding storage system using
small computer system interface (SCSI) commands.
System 100 includes source storage system 108 and target storage
system 120. Each storage system includes physical storage units for
storing data, such as disks or arrays of disks. Typically, storage
systems 108 and 120 are target nodes. In order to enable initiators
to send requests to storage system 108, storage system 108 exposes
one or more logical units (LU) to which commands are issued. Thus,
storage systems 108 and 120 are SAN entities that provide multiple
logical units for access by multiple SAN initiators.
Logical units are a logical entity provided by a storage system,
for accessing data stored in the storage system. The logical unit
may be a physical logical unit or a virtual logical unit. A logical
unit is identified by a unique logical unit number (LUN). Storage
system 108 exposes a logical unit 136, designated as LU A, and
storage system 120 exposes a logical unit 156, designated as LU
B.
LU B is used for replicating LU A. As such, LU B is generated as a
copy of LU A. In one embodiment, LU B is configured so that its
size is identical to the size of LU A. Thus, for LU A, storage
system 120 serves as a backup for source side storage system 108.
Alternatively, as mentioned hereinabove, some logical units of
storage system 120 may be used to back up logical units of storage
system 108, and other logical units of storage system 120 may be
used for other purposes. Moreover, there is symmetric replication
whereby some logical units of storage system 108 are used for
replicating logical units of storage system 120, and other logical
units of storage system 120 are used for replicating other logical
units of storage system 108.
System 100 includes a source side host computer 104 and a target
side host computer 116. A host computer may be one computer, or a
plurality of computers, or a network of distributed computers; each
computer may include inter alia a conventional CPU, volatile and
non-volatile memory, a data bus, an I/O interface, a display
interface and a network interface. Generally a host computer runs
at least one data processing application, such as a database
application and an e-mail server.
Generally, an operating system of a host computer creates a host
device for each logical unit exposed by a storage system in the
host computer SAN. A host device is a logical entity in a host
computer, through which a host computer may access a logical unit.
Host computer 104 identifies LU A and generates a corresponding host
device 140, designated as Device A, through which it can access LU
A. Similarly, host computer 116 identifies LU B and generates a
corresponding device 160, designated as Device B.
In the course of continuous operation, host computer 104 is a SAN
initiator that issues I/O requests (write/read operations) through
host device 140 to LU A using, for example, SCSI commands. Such
requests are generally transmitted to LU A with an address that
includes a specific device identifier, an offset within the device,
and a data size. Offsets are generally aligned to 512 byte blocks.
The average size of a write operation issued by host computer 104
may be, for example, 10 kilobytes (KB); i.e., 20 blocks. For an I/O
rate of 50 megabytes (MB) per second, this corresponds to
approximately 5,000 write transactions per second.
System 100 includes two data protection appliances, a source side
DPA 112 and a target side DPA 124. A DPA performs various data
protection services, such as data replication of a storage system,
and journaling of I/O requests issued by a host computer to source
side storage system data. As explained in detail herein, when
acting as a target side DPA, a DPA may also enable roll back of
data to an earlier point in time, and processing of rolled back
data at the target site. Each DPA 112 and 124 is a computer that
includes inter alia one or more conventional CPUs and internal
memory.
As an additional safety precaution, each DPA is a cluster of such
computers. Use of a cluster ensures that if a DPA computer is down,
then the DPA functionality switches over to another computer. The
DPA computers within a DPA cluster communicate with one another
using at least one communication link suitable for data transfer
via fiber channel or IP based protocols, or such other transfer
protocol. One computer from the DPA cluster serves as the DPA
leader. The DPA cluster leader coordinates between the computers in
the cluster, and may also perform other tasks that require
coordination between the computers, such as load balancing.
In the architecture illustrated in FIG. 1, DPA 112 and DPA 124 are
standalone devices integrated within a SAN. Alternatively, each of
DPA 112 and DPA 124 may be integrated into storage system 108 and
storage system 120, respectively, or integrated into host computer
104 and host computer 116, respectively. Both DPAs communicate with
their respective host computers through communication lines such as
fiber channels using, for example, SCSI commands or any other
protocol.
DPAs 112 and 124 are configured to act as initiators in the SAN;
i.e., they can issue I/O requests using, for example, SCSI
commands, to access logical units on their respective storage
systems. DPA 112 and DPA 124 are also configured with the necessary
functionality to act as targets; i.e., to reply to I/O requests,
such as SCSI commands, issued by other initiators in the SAN,
including inter alia their respective host computers 104 and 116.
Being target nodes, DPA 112 and DPA 124 may dynamically expose or
remove one or more logical units.
As described hereinabove, Site I and Site II may each behave
simultaneously as a production site and a backup site for different
logical units. As such, DPA 112 and DPA 124 may each behave as a
source DPA for some logical units, and as a target DPA for other
logical units, at the same time.
Host computer 104 and host computer 116 include protection agents
144 and 164, respectively. Protection agents 144 and 164 intercept
SCSI commands issued by their respective host computers, via host
devices to logical units that are accessible to the host computers.
A data protection agent may act on an intercepted SCSI command
issued to a logical unit, in one of the following ways: send the
SCSI commands to its intended logical unit; redirect the SCSI
command to another logical unit; split the SCSI command by sending
it first to the respective DPA; after the DPA returns an
acknowledgement, send the SCSI command to its intended logical
unit; fail a SCSI command by returning an error return code; and
delay a SCSI command by not returning an acknowledgement to the
respective host computer.
A protection agent may handle different SCSI commands differently,
according to the type of the command. For example, a SCSI command
inquiring about the size of a certain logical unit may be sent
directly to that logical unit, while a SCSI write command may be
split and sent first to a DPA associated with the agent. A
protection agent may also change its behavior for handling SCSI
commands, for example as a result of an instruction received from
the DPA.
Specifically, the behavior of a protection agent for a certain host
device generally corresponds to the behavior of its associated DPA
with respect to the logical unit of the host device. When a DPA
behaves as a source site DPA for a certain logical unit, then
during normal course of operation, the associated protection agent
splits I/O requests issued by a host computer to the host device
corresponding to that logical unit. Similarly, when a DPA behaves
as a target device for a certain logical unit, then during normal
course of operation, the associated protection agent fails I/O
requests issued by the host computer to the host device corresponding
to that logical unit.
Communication between protection agents and their respective DPAs
may use any protocol suitable for data transfer within a SAN, such
as fiber channel, or SCSI over fiber channel. The communication may
be direct, or via a logical unit exposed by the DPA. Protection
agents communicate with their respective DPAs by sending SCSI
commands over fiber channel.
Protection agents 144 and 164 are drivers located in their
respective host computers 104 and 116. Alternatively, a protection
agent may also be located in a fiber channel switch, or in any
other device situated in a data path between a host computer and a
storage system or on the storage system itself. In a virtualized
environment, the protection agent may run at the hypervisor layer
or in a virtual machine providing a virtualization layer.
What follows is a detailed description of system behavior under
normal production mode, and under recovery mode.
In production mode DPA 112 acts as a source site DPA for LU A.
Thus, protection agent 144 is configured to act as a source side
protection agent; i.e., as a splitter for host device A.
Specifically, protection agent 144 replicates SCSI I/O write
requests. A replicated SCSI I/O write request is sent to DPA 112.
After receiving an acknowledgement from DPA 112, protection agent
144 then sends the SCSI I/O write request to LU A. After receiving
a second acknowledgement from storage system 108, host computer 104
acknowledges that the I/O command is complete.
When DPA 112 receives a replicated SCSI write request from data
protection agent 144, DPA 112 transmits certain I/O information
characterizing the write request, packaged as a "write
transaction", over WAN 128 to DPA 124 on the target side, for
journaling and for incorporation within target storage system
120.
DPA 112 may send its write transactions to DPA 124 using a variety
of modes of transmission, including inter alia (i) a synchronous
mode, (ii) an asynchronous mode, and (iii) a snapshot mode. In
synchronous mode, DPA 112 sends each write transaction to DPA 124,
receives back an acknowledgement from DPA 124, and in turn sends
an acknowledgement back to protection agent 144. Protection agent
144 waits until receipt of such acknowledgement before sending the
SCSI write request to LU A.
In asynchronous mode, DPA 112 sends an acknowledgement to
protection agent 144 upon receipt of each I/O request, before
receiving an acknowledgement back from DPA 124.
In snapshot mode, DPA 112 receives several I/O requests and
combines them into an aggregate "snapshot" of all write activity
performed in the multiple I/O requests, and sends the snapshot to
DPA 124, for journaling and for incorporation in target storage
system 120. In snapshot mode DPA 112 also sends an acknowledgement
to protection agent 144 upon receipt of each I/O request, before
receiving an acknowledgement back from DPA 124.
For the sake of clarity, the ensuing discussion assumes that
information is transmitted at write-by-write granularity.
While in production mode, DPA 124 receives replicated data of LU A
from DPA 112, and performs journaling and writing to storage system
120. When applying write operations to storage system 120, DPA 124
acts as an initiator, and sends SCSI commands to LU B.
During a recovery mode, DPA 124 undoes the write transactions in
the journal, so as to restore storage system 120 to the state it
was at, at an earlier time.
As described hereinabove, LU B is used as a backup of LU A. As
such, during normal production mode, while data written to LU A by
host computer 104 is replicated from LU A to LU B, host computer
116 should not be sending I/O requests to LU B. To prevent such I/O
requests from being sent, protection agent 164 acts as a target
site protection agent for host Device B and fails I/O requests sent
from host computer 116 to LU B through host Device B.
Target storage system 120 exposes a logical unit 176, referred to
as a "journal LU", for maintaining a history of write transactions
made to LU B, referred to as a "journal". Alternatively, journal LU
176 may be striped over several logical units, or may reside within
all of or a portion of another logical unit. DPA 124 includes a
journal processor 180 for managing the journal.
Journal processor 180 functions generally to manage the journal
entries of LU B. Specifically, journal processor 180 enters write
transactions received by DPA 124 from DPA 112 into the journal, by
writing them into the journal LU, reads the undo information for
the transaction from LU B, updates the journal entries in the
journal LU with undo information, applies the journal transactions
to LU B, and removes already-applied transactions from the
journal.
Referring to FIG. 2, a write transaction 200 for a journal is
illustrated. The journal may be used to provide
an adaptor for access to storage 120 at the state it was in at any
specified point in time. Since the journal contains the "undo"
information necessary to roll back storage system 120, data that
was stored in specific memory locations at the specified point in
time may be obtained by undoing write transactions that occurred
subsequent to such point in time.
Write transaction 200 generally includes the following fields: one
or more identifiers; a time stamp, which is the date & time at
which the transaction was received by source side DPA 112; a write
size, which is the size of the data block; a location in journal LU
176 where the data is entered; a location in LU B where the data is
to be written; and the data itself.
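A hedged sketch of these fields as a data structure (the field
names are illustrative assumptions, not taken from the patent):

    from dataclasses import dataclass

    @dataclass
    class WriteTransaction:
        transaction_id: int    # one or more identifiers
        timestamp: float       # date & time received by source side DPA 112
        write_size: int        # size of the data block
        journal_offset: int    # location in journal LU 176 where data is entered
        target_offset: int     # location in LU B where the data is to be written
        data: bytes            # the data itself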
Write transaction 200 is transmitted from source side DPA 112 to
target side DPA 124. As shown in FIG. 2, DPA 124 records the write
transaction 200 in the journal that includes four streams. A first
stream, referred to as a DO stream, includes new data for writing
in LU B. A second stream, referred to as a DO METADATA stream,
includes metadata for the write transaction, such as an identifier,
a date & time, a write size, a beginning address in LU B for
writing the new data in, and a pointer to the offset in the DO
stream where the corresponding data is located. Similarly, a third
stream, referred to as an UNDO stream, includes old data that was
overwritten in LU B; and a fourth stream, referred to as an UNDO
METADATA stream, includes an identifier, a date & time, a write size, a
beginning address in LU B where data was to be overwritten, and a
pointer to the offset in the UNDO stream where the corresponding
old data is located.
In practice each of the four streams holds a plurality of write
transaction data. As write transactions are received dynamically by
target DPA 124, they are recorded at the end of the DO stream and
the end of the DO METADATA stream, prior to committing the
transaction. During transaction application, when the various write
transactions are applied to LU B, prior to writing the new DO data
into addresses within the storage system, the older data currently
located in such addresses is recorded into the UNDO stream. In some
examples, the metadata stream (e.g., UNDO METADATA stream or the DO
METADATA stream) and the data stream (e.g., UNDO stream or DO
stream) may be kept in a single stream each (i.e., one UNDO data
and UNDO METADATA stream and one DO data and DO METADATA stream) by
interleaving the metadata into the data stream.
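A minimal sketch of the four-stream journal described above
(illustrative; the representation as Python lists is an
assumption):

    # DO holds new data; DO METADATA points into DO. UNDO holds the old,
    # overwritten data; UNDO METADATA points into UNDO.
    journal = {"DO": [], "DO METADATA": [], "UNDO": [], "UNDO METADATA": []}

    def record_write(tx_id, timestamp, size, address, data):
        # New writes are appended to the end of the DO and DO METADATA
        # streams prior to committing the transaction.
        journal["DO"].append(data)
        journal["DO METADATA"].append(
            (tx_id, timestamp, size, address, len(journal["DO"]) - 1))

    def save_undo(tx_id, timestamp, size, address, old_data):
        # Before new DO data overwrites an address, the older data at
        # that address is recorded into the UNDO stream.
        journal["UNDO"].append(old_data)
        journal["UNDO METADATA"].append(
            (tx_id, timestamp, size, address, len(journal["UNDO"]) - 1))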
AN EXAMPLE OF A REPLICATION SYSTEM USED WITH A SNAPSHOT SHIPPING
MODE (FIGS. 3 AND 4)
Referring to FIG. 3, a data protection system 300 includes a data
protection appliance (DPA) cluster 302a and a storage array 306a at
a production site and a DPA cluster 302b and a storage array 306b
at a replication site. The DPA clusters 302a, 302b are connected by
a network 304 (e.g., a WAN, a Fibre Channel and so forth).
The storage array 306a includes a primary storage volume 312a, a
journal 316a, a first snapshot 322a, a second snapshot 322b, APIs
350a and a delta marking stream 360. The storage array 306b
includes a replica storage volume 312b which replicates the primary
storage volume 312a, a journal 316b, a list of snapshots 340, and
APIs 350b.
Referring to FIG. 4, an example of a process to send data from the
production site to the replication site using a snapshot shipping
mode is a process 400. Process 400 generates a first snapshot of a
production storage array (402). For example, the DPA cluster 302a
generates a first snapshot 322a of the primary storage volume 312a.
In one example, the DPA cluster 302a generates the first snapshot
322a using the API 350a. At first time initialization, the first
snapshot is sent to the replica site.
Process 400 generates a second snapshot of a production storage
array (406). For example, the DPA cluster 302a generates a second
snapshot 322b of the primary storage volume 312a. In one example,
the DPA cluster 302a generates the second snapshot 322b using the
API 350a.
Process 400 obtains differences between the first snapshot and the
second snapshot (408). For example, the DPA cluster 302a obtains
the differences between the first snapshot 322a and the second
snapshot 322b by using the API 350a.
In one example, obtaining the differences between the first
snapshot and the second snapshot may be performed by executing a
vendor specific read difference command. The read difference
command is a vendor specific command which returns locations of the
difference and data in the locations. In one example, the result of
executing the read difference command is a sequence of (location,
length); (location, length) and so forth. In other examples, the
read difference command returns a change bitmap.
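A hedged sketch of consuming the read difference output (the
command itself is vendor specific; only its (location, length)
output shape is taken from the text above):

    # Example output of a read difference command: changed regions as
    # (location, length) pairs.
    differences = [(0, 512), (4096, 1024), (1048576, 512)]

    def changed_offsets(diffs, block_size=512):
        # Expand (location, length) pairs into block-aligned offsets.
        for location, length in diffs:
            for offset in range(location, location + length, block_size):
                yield offset

    print(sorted(changed_offsets(differences))[:4])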
Process 400 adds the differences to a delta marking stream (410).
For example, the DPA cluster 302a adds the differences obtained in
processing block 408 to the delta marking stream 360 using the
API 350a.
Process 400 deletes the first snapshot (418). For example, the DPA
cluster 302a deletes the first snapshot 322a.
Process 400 reads the data that changed in the second snapshot
(422) and sends the data to the replication site (440). For
example, the DPA cluster 302a reads the data that changed in the
second snapshot 322b and sends the data to the DPA cluster
302b.
Process 400 renames the second snapshot to the first snapshot (446)
and performs processing block 406. For example, the DPA cluster
302a renames the second snapshot to the first snapshot.
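A compact, runnable sketch of one cycle of process 400, modeling
snapshots as dictionaries of offset-to-data (illustrative
assumptions; real snapshots, the delta marking stream, and the read
difference command are storage array primitives):

    def read_differences(first, second):
        # Models the vendor-specific read difference command (408).
        return {off: data for off, data in second.items()
                if first.get(off) != data}

    def ship_snapshot_cycle(first, second, replica, delta_marking_stream):
        diffs = read_differences(first, second)      # (408)
        delta_marking_stream.extend(diffs)           # (410)
        # The first snapshot is deleted here (418).
        for off, data in diffs.items():              # (422), (440)
            replica[off] = data
        return second                                # (446): second becomes first

    replica, delta = {}, []
    snap1 = {0: b"aaaa", 512: b"bbbb"}
    snap2 = {0: b"aaaa", 512: b"cccc"}
    snap1 = ship_snapshot_cycle(snap1, snap2, replica, delta)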
FIGS. 5 to 7 depict an example of processes used to test the
integrity of a replicated storage. FIGS. 5 and 7 depict processes
(e.g., a process 500 and a process 700, respectively) that occur at
the production site, while FIG. 6 depicts a process (e.g., a
process 600) that occurs at the replication site.
Theoretically, checking the integrity of replication in a snapshot
shipping mode is relatively easy. Once a snapshot is shipped to the
replica site, both production and replica storage have the same
snapshot, and the system can simply start scrubbing the devices by
creating strong hash signatures (e.g., SHA-1 or SHA-2) for the
whole disk or for large parts of the disk (if fine granularity
errors need to be found). Then, once a comparison is complete, the
system can decide whether the production and replica volumes are
identical.
The issue with the above approach is that it requires the snapshots
to remain in existence for a long time. In particular, the
snapshots cannot be erased during the period of checking for
integrity. If the replicated volume size is large, the scrubbing
can take a significant amount of time, and keeping the snapshots
will thus require a significant amount of memory space.
As will be described herein, the integrity check is started at the
beginning of the volume and continues to the end of the volume. The
integrity check is performed a portion of the volume at a time.
Since the system is in the middle of snapshot shipping, some of the
portion may be changed during the shipping of the snapshot. By
performing processes 500, 600 and 700 the snapshots do not need to
be kept for long periods of time consuming significant amounts of
memory.
Referring to FIG. 5, process 500 generates a snapshot of the
production volume to form a snapshot volume (502).
Process 500 goes to a first portion of the snapshot volume (508).
The integrity check is performed a portion of the volume at a time.
In one particular example, the snapshot volume may be a terabyte
and a portion of the snapshot volume may be 10 gigabytes.
Process 500 reads a first area of the portion of the snapshot of
the volume (514). In one example, a portion is 10 gigabytes and the
first area is 10 megabytes.
Process 500 generates a coarse granularity hash signature (518). A
coarse granularity hash signature is a hash signature of the data
in an area.
Process 500 generates fine granularity hash signatures (522). The
area is further broken down into subareas. A fine granularity hash
signature is a hash signature of the data in a subarea. A fine
granularity hash signature is generated for each subarea that makes
up the area. In one example, a subarea may be the size of the
smallest block used for snapshot shipping.
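A sketch of processing blocks 518 and 522, using the example area
size given above (the subarea size here is an assumed smallest
snapshot shipping block, not a value from the patent):

    import hashlib

    AREA_SIZE = 10 * 1024 * 1024   # example from the text: a 10 megabyte area
    SUBAREA_SIZE = 4096            # assumption: smallest snapshot shipping block

    def coarse_signature(area_data: bytes) -> bytes:
        # (518) one hash signature over the whole area
        return hashlib.sha1(area_data).digest()

    def fine_signatures(area_data: bytes) -> list:
        # (522) one hash signature per subarea making up the area
        return [hashlib.sha1(area_data[i:i + SUBAREA_SIZE]).digest()
                for i in range(0, len(area_data), SUBAREA_SIZE)]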
Process 500 determines if the area of memory intersects with a
portion of the volume that is undergoing changes due to differences
occurring in the production volume from the current snapshot volume
(528). While the integrity check is occurring, portions of the
production volume may change. Process 500 determines if those
differences occurred within the area of memory currently being
processed.
If the area of memory does intersect with a portion of the volume
that is undergoing changes, process 500 sends the fine granularity
hash signatures to the replica site (530).
If the area of memory does not intersect with a portion of the
volume that is undergoing changes, process 500 sends the coarse
granularity hash signature to the replica site (536).
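The decision in processing blocks 528, 530 and 536 can then be
sketched as follows (illustrative; changed_areas stands for the
marked list of locations currently being shipped, and the signature
helpers are from the sketch above):

    def signatures_to_send(area_index, area_data, changed_areas):
        # (528) does this area intersect in-flight changes?
        if area_index in changed_areas:
            return ("fine", fine_signatures(area_data))     # (530)
        return ("coarse", coarse_signature(area_data))      # (536)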
Process 500 determines if there are any more areas left to process
in the current portion (538). If there are more areas left, process
500 reads the next area (540) and repeats processing block 518.
If there are no more areas left, process 500 determines if there
are any more portions left in the snapshot volume (542). If there are
more portions, process 500 goes to the next portion (546) and reads
the first area (514). If there are no more portions, process 500
ends. In some embodiments, if the check of the portion is not
complete the system will not move to ship the next snapshots (i.e.,
going from processing block 446 to processing block 406).
In other embodiments, the integrity check may not delay the
snapshot shipping process. In this case when a second snapshot is
generated (406), the integrity tool is configured to start reading
from the second snapshot, but the differences are added to a
special data structure of areas to be ignored.
Referring to FIG. 6, process 600 goes to the first area of the
portion of the replica volume (602). Process 600 determines, for
the current area, whether a coarse granularity hash signature
was received from the production site for the corresponding area in
the snapshot volume (608). If a coarse granularity hash signature
was received from the production site, process 600 determines a
coarse granularity hash signature for the corresponding area in the
replica volume (614).
Process 600 determines if the coarse granularity hash signature for
an area from the snapshot volume at the production site is the same
as the coarse granularity hash signature for the corresponding area
in the replica volume (618). If the coarse granularity hash
signature for an area from the snapshot volume at the production
site is not the same as the coarse granularity hash signature for
the corresponding area in the replica volume, process 600 generates
and sends fine granularity hash signatures of the subareas of the
area in the replica volume to the production site (622).
If a coarse granularity hash signature was not received from the
production site, process 600 determines if fine granularity hash
signatures for the corresponding area in the replica volume were
received (626). If fine granularity hash signatures were received
from the production site, process 600 determines fine granularity
hash signatures for the corresponding subareas in the replica
volume (632).
Process 600 determines if the fine granularity hash signatures for
an area from the snapshot volume at the production site are the same
as the fine granularity hash signatures for the corresponding
subareas in the replica volume (636). If the fine granularity hash
signatures for an area from the snapshot volume at the production
site are not the same as the fine granularity hash signatures for
the corresponding subareas in the replica volume, process 600 sends
the locations of those differences to the production site
(642).
Process 600 determines if there are any more areas left (628) and,
if there are more areas left, goes to the next area (630) and
repeats processing block 608.
If there are no more areas left, process 600 determines if
there are any portions left (632); and if there are portions left,
process 600 goes to the next portion (634) and repeats processing
block 602. If there are no portions left, process 600 ends.
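A condensed sketch of the replica-side handling of one area in
process 600 (illustrative; signature helpers as in the earlier
sketch, and the return value stands for the message sent back to
the production site):

    def check_area(received, replica_area_data):
        kind, sigs = received
        if kind == "coarse":                                    # (608)
            if sigs != coarse_signature(replica_area_data):     # (614), (618)
                # Mismatch: reply with fine signatures of the subareas (622).
                return ("fine", fine_signatures(replica_area_data))
        elif kind == "fine":                                    # (626)
            local = fine_signatures(replica_area_data)          # (632)
            bad = [i for i, (a, b) in enumerate(zip(sigs, local)) if a != b]
            if bad:                                             # (636)
                return ("bad_locations", bad)                   # (642)
        return None   # signatures match; nothing to report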
Referring to FIG. 7, process 700 determines if fine granularity
hash signatures were received from the replica site (702), for
example, the fine granularity hash signatures sent by the replica
site in processing block 622 (FIG. 6).
If there were fine granularity hash signatures received from the
replica site, process 700 determines if there are any differences
between the fine granularity hash signatures from the replica site
and the fine granularity hash signatures at the production site for
the corresponding subareas (704). If there are differences, process
700 marks the areas as suspected dirty in a dirty list (710).
Process 700 determines if bad locations were received from the
replica site (714), for example, the locations sent by the replica
site in processing block 642 (FIG. 6). If there are bad locations
received, process 700 marks the locations in the dirty list (716).
Process 700 determines if the system is configured to stop or pause
shipping new snapshots of the production volume to the replica site
until the integrity testing of a portion is complete (722) and if
the system is configured as such, process 700 reports the dirty
list as integrity errors (730).
Otherwise, process 700 removes entries from the dirty list that are
being changed in the production volume (734). If process 700
determines that the dirty list is not empty (738), process 700
reports the dirty list as a list of integrity errors (742).
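The production-side bookkeeping of process 700 could be sketched as
follows (illustrative; the dirty list is modeled as a set of
suspected subarea locations):

    def mark_suspects(reply, dirty_list, production_fine_sigs):
        kind, payload = reply
        if kind == "fine":                               # (702)
            for i, sig in enumerate(payload):
                if sig != production_fine_sigs[i]:       # (704)
                    dirty_list.add(i)                    # (710)
        elif kind == "bad_locations":                    # (714)
            dirty_list.update(payload)                   # (716)

    def report_errors(dirty_list, changed_in_production, pause_shipping):
        if pause_shipping:                               # (722)
            return sorted(dirty_list)                    # (730)
        dirty_list -= changed_in_production              # (734)
        return sorted(dirty_list) if dirty_list else []  # (738), (742)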
Referring to FIGS. 8 and 9, as described above, the portions chosen
for processing were not chosen for any particular reason. However,
if portions are chosen where there is a greater chance of finding
errors, then finding these errors will occur faster. Processes 800
and 900 are examples of processes to determine which portions of
the volume to check first.
Referring to FIG. 8, process 800 checks for regions in a volume
with a higher recent activity (810). For example, locations which
are more write active in the last few days or since the last check
are checked first, as these locations are probably more important,
and if there was corruption due to a replication error it is more
likely to happen within these blocks. In one particular example,
storage tiering statistics of the storage (e.g., EMC® Fully
Automated Storage Tiering (FAST®)) are used to find the most
active areas.
Referring to FIG. 9, process 900 checks regions which are not write
active (902). For example, storage statistics are used to determine
locations which are not write active. For example, a defect may be
caused if the spindles of a device are not functioning well (i.e.,
some sectors are corrupted). If the sectors are corrupted and
active in the system, these errors would be discovered. However, if
an area of storage is not accessed, or the sectors are only read
from, then the sectors may not be verified at the replica site
since the system does not read from them.
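An illustrative sketch of the prioritization in processes 800 and
900, assuming per-region write counts are available from storage
tiering statistics:

    def prioritized_regions(write_counts):
        # Process 800: most write-active regions first (810);
        # process 900: regions with no write activity next (902).
        active = sorted((r for r, c in write_counts.items() if c > 0),
                        key=lambda r: -write_counts[r])
        idle = [r for r, c in write_counts.items() if c == 0]
        return active + idle

    print(prioritized_regions({0: 12, 1: 0, 2: 99, 3: 0}))  # [2, 0, 1, 3]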
Referring to FIG. 10, in one example, a computer 1000 includes a
processor 1002, a volatile memory 1004, a non-volatile memory 1006
(e.g., hard disk) and a user interface (UI) 1008 (e.g., a
graphical user interface, a mouse, a keyboard, a display, touch
screen and so forth). The non-volatile memory 1006 stores computer
instructions 1012, an operating system 1016 and data 1018. In one
example, the computer instructions 1012 are executed by the
processor 1002 out of volatile memory 1004 to perform all or part
of the processes described herein (e.g., processes 500, 600, 700,
800 and 900).
The processes described herein (e.g., processes 500, 600, 700, 800
and 900) are not limited to use with the hardware and software of
FIG. 10; they may find applicability in any computing or processing
environment and with any type of machine or set of machines that is
capable of running a computer program. The processes described
herein may be implemented in hardware, software, or a combination
of the two. The processes described herein may be implemented in
computer programs executed on programmable computers/machines that
each includes a processor, a non-transitory machine-readable medium
or other article of manufacture that is readable by the processor
(including volatile and non-volatile memory and/or storage
elements), at least one input device, and one or more output
devices. Program code may be applied to data entered using an input
device to perform any of the processes described herein and to
generate output information.
The system may be implemented, at least in part, via a computer
program product (e.g., in a non-transitory machine-readable
storage medium such as, for example, a non-transitory
computer-readable medium), for execution by, or to control the
operation of, data processing apparatus (e.g., a programmable
processor, a computer, or multiple computers). Each such program
may be implemented in a high level procedural or object-oriented
programming language to communicate with a computer system.
However, the programs may be implemented in assembly or machine
language. The language may be a compiled or an interpreted language
and it may be deployed in any form, including as a stand-alone
program or as a module, component, subroutine, or other unit
suitable for use in a computing environment. A computer program may
be deployed to be executed on one computer or on multiple computers
at one site or distributed across multiple sites and interconnected
by a communication network. A computer program may be stored on a
non-transitory machine-readable medium that is readable by a
general or special purpose programmable computer for configuring
and operating the computer when the non-transitory machine-readable
medium is read by the computer to perform the processes described
herein. For example, the processes described herein may also be
implemented as a non-transitory machine-readable storage medium,
configured with a computer program, where upon execution,
instructions in the computer program cause the computer to operate
in accordance with the processes. A non-transitory machine-readable
medium may include but is not limited to a hard drive, compact
disc, flash memory, non-volatile memory, volatile memory, magnetic
diskette and so forth but does not include a transitory signal per
se.
The processes described herein are not limited to the specific
examples described. For example, the processes 500, 600, 700, 800
and 900 are not limited to the specific processing order of FIGS. 5
to 9, respectively. Rather, any of the processing blocks of FIGS. 5
to 9 may be re-ordered, combined or removed, performed in parallel
or in serial, as necessary, to achieve the results set forth
above.
In other examples, one of ordinary skill in the art would recognize
that increasing and decreasing reference counts may be done
opposite as described. For example, the reference count can be
decreased and then increased. One of ordinary skill in the art
would also recognize that a value is changed from a first state to
a second state when the signature data is needed to avoid erasure
of the data and when the data is no longer needed the value returns
to a first state.
The processing blocks (for example, in the processes 500, 600, 700,
800 and 900) associated with implementing the system may be
performed by one or more programmable processors executing one or
more computer programs to perform the functions of the system. All
or part of the system may be implemented as special purpose logic
circuitry (e.g., an FPGA (field-programmable gate array) and/or an
ASIC (application-specific integrated circuit)). All or part of the
system may be implemented using electronic hardware circuitry that
includes electronic devices such as, for example, at least one of a
processor, a memory, a programmable logic device or a logic
gate.
Elements of different embodiments described herein may be combined
to form other embodiments not specifically set forth above. Other
embodiments not specifically described herein are also within the
scope of the following claims.
* * * * *