U.S. patent application number 13/118250, for systems and methods for performing data replication, was filed with the patent office on May 27, 2011 and published on 2011-12-01. This patent application is currently assigned to COMMVAULT SYSTEMS, INC. Invention is credited to Andrei Erofeev.
United States Patent Application 20110295805
Kind Code: A1
Application Number: 13/118250
Family ID: 45004435
Inventor: Erofeev; Andrei
Published: December 1, 2011
SYSTEMS AND METHODS FOR PERFORMING DATA REPLICATION
Abstract
Preparing source data to be replicated in a continuous data
replication environment. Certain systems and methods populate a
file name database with entries having a unique file identifier
descriptor (FID), short name and a FID of the parent directory of
each directory or file on a source storage device. Such information
is advantageously gathered during scanning of a live file system
without requiring a snapshot of the source storage device. The
database can be further used to generate absolute file names
associated with data operations to be replayed on a destination
storage device. Based on the obtained FIDs, certain embodiments can
further combine write operations to be replayed on the destination
storage device and/or avoid replicating temporary files to the
destination system.
Inventors: Erofeev; Andrei (Marlboro, NJ)
Assignee: COMMVAULT SYSTEMS, INC. (Oceanport, NJ)
Family ID: 45004435
Appl. No.: 13/118250
Filed: May 27, 2011
Related U.S. Patent Documents

Application Number: 61349624
Filing Date: May 28, 2010
Current U.S. Class: 707/634; 707/E17.005
Current CPC Class: G06F 16/1844 20190101; H04L 67/1095 20130101; G06F 11/1435 20130101; G06F 11/1458 20130101; G06F 11/2094 20130101; G06F 11/1471 20130101; G06F 2201/84 20130101; H04L 29/0854 20130101; G06F 16/1734 20190101; G06F 11/1662 20130101
Class at Publication: 707/634; 707/E17.005
International Class: G06F 17/30 20060101 G06F017/30
Claims
1. A method for identifying data to be copied in a data replication
system, the method comprising: obtaining with a scanning module
executing on a computing device a first file identifier descriptor
(FID) of a first directory on a live source file system, the first
FID being one of a plurality of unique identifiers corresponding to
a plurality of directories and files on the source file system;
adding the first FID to a queue; storing a current journal sequence
number from a file system filter driver identifying a first time;
following said storing, accessing a current directory of the
plurality of directories on the source file system that corresponds
to a next FID stored in the queue; obtaining additional FIDs for
each immediate child directory and immediate child file in the
current directory; if no changes have been made to the current
directory since the first time, populating a file name database
with the additional FIDs of each immediate child directory and
immediate child file in the current directory, adding the
additional FIDs of each immediate child directory of the current
directory to the queue, and removing the next FID from the queue;
and if changes have been made to the first directory since the
first time, repeating said storing, said accessing and said
obtaining the additional FIDs.
2. The method of claim 1, wherein said obtaining the first FID is
performed without performing a snapshot on the source file
system.
3. The method of claim 1, wherein the first directory is the
current directory.
4. The method of claim 1, additionally comprising repeating said
storing, said accessing and said obtaining the additional FIDs for
each FID stored in the queue.
5. The method of claim 1, wherein said populating the file name
database comprises, for each immediate child directory and immediate
child file in the current directory, storing in the file name
database: the additional FID for the immediate child directory or
immediate child file; a corresponding short name for the immediate
child directory or immediate child file; and the next FID as a
parent directory of the immediate child directory or immediate
child file.
6. The method of claim 1, wherein said changes comprise namespace
changes to the current directory.
7. The method of claim 1, wherein the first directory comprises a
root directory of the live source file system.
8. The method of claim 1, additionally comprising monitoring at
least one data management operation directed to first data stored
in the source file system.
9. The method of claim 8, additionally comprising replaying the at
least one data management operation on replication data stored on a
destination file system.
10. The method of claim 9, additionally comprising: constructing,
from information populated in the file name database, an absolute
file name that corresponds to the location of the first data on the
source file system; and transmitting the absolute file name to the
destination system to direct said replaying of the at least one
data management operation.
11. A system for preparing data for replication from a source
computing device in a network, the system comprising: a queue
configured to store a plurality of file identifier descriptors
(FIDs) each comprising a unique identifier that corresponds to one
of a plurality of directories and files on a source file system; a
scanning module executing on a computing device and configured to
scan the source file system while in a live state and to populate
the queue with the plurality of FIDs; a database comprising file
name data that associates each of the plurality of FIDs with a
short name and a parent FID, wherein the scanning module is further
configured to populate the database with the file name data based
on said scan of the source file system in the live state; and at
least one database thread configured to receive a data entry
identifying a data management operation associated with at least
one of the plurality of directories and files on the source file
system and to construct from the FID associated with the at least
one directory or file an absolute file name for transmission to a
destination system along with a copy of the data management
operation for replaying on the destination system.
12. The system of claim 11, wherein the scanning module is further
configured to: access a current directory of the plurality of
directories on the source file system that corresponds to a next
FID in the queue; and obtain additional FIDs for each immediate
child directory and immediate child file in the current
directory.
13. The system of claim 12, wherein the scanning module is further
configured to: populate the file name database with the additional
FIDs of each immediate child directory and immediate child file in
the current directory, and add the additional FIDs of each
immediate child directory of the current directory to the
queue.
14. The system of claim 11, further comprising a filter driver
situated between the source file system and at least one
application configured to request the data management
operation.
15. The system of claim 14, wherein the filter driver is further
configured to assign journal sequence numbers to each journal entry
associated with a requested change to the source file system.
16. The system of claim 15, wherein the scanning module is further
configured to receive a current journal sequence number from the
filter driver prior to accessing the current directory.
17. The system of claim 16, wherein the scanning module is
configured to repeat said accessing and obtaining when changes are
detected to the current directory following a time of the current
journal sequence number but prior to said obtaining.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of priority under 35
U.S.C. § 119(e) of U.S. Provisional Patent Application No.
61/349,624, filed on May 28, 2010, and entitled "SYSTEMS AND
METHODS FOR PERFORMING DATA REPLICATION," the disclosure of which
is hereby incorporated by reference in its entirety.
BACKGROUND
[0002] 1. Field
[0003] The present disclosure relates to performing copy and/or
data management operations in a computer network and, in
particular, to systems and methods for performing data replication
in a storage management system.
[0004] 2. Description of the Related Art
[0005] Computers have become an integral part of business
operations such that many banks, insurance companies, brokerage
firms, financial service providers, and a variety of other
businesses rely on computer networks to store, manipulate, and
display information that is constantly subject to change.
Oftentimes, the success or failure of an important transaction may
turn on the availability of information that is both accurate and
current. Accordingly, businesses worldwide recognize the commercial
value of their data and seek reliable, cost-effective ways to
protect the information stored on their computer networks.
[0006] To address the need to maintain current copies of electronic
information, certain data replication systems have been provided to
"continuously" copy data from one or more source machines to one or
more destination machines. These continuous data replication (CDR)
systems provide several advantages for disaster recovery solutions
and can substantially reduce the amount of data that is lost during
an unanticipated system failure.
[0007] One drawback of such CDR systems is that, during an initial
synchronization phase, many systems record absolute file names when
scanning a source storage device in order to replicate the scanned
data to a same location on a destination storage device. Moreover,
this scanning is generally performed while the source file system
is in a fixed state, such as based on a snapshot of the file
system.
SUMMARY
[0008] In view of the foregoing, a need exists for improved systems
and methods for preparing and transmitting source data to be
replicated to a destination system. For instance, there is a need
for systems and methods for scanning a live file system during an
initial synchronization phase between the source and destination
systems.
[0009] Certain embodiments of the invention are provided for
intelligent data replication. In particular, embodiments of the
invention include improved systems and methods for scanning a source
file system having data to be copied in a CDR system. In certain
embodiments, such scanning is performed on a live file system
without requiring a snapshot of the file system data. For example,
file identifier descriptors (FIDs) of files and/or directories on
the file system, which are generally not exposed outside the kernel,
can be used to populate a file name database usable to construct an
absolute file name when transmitting data to a replication system.
[0010] In certain further embodiments, use of FIDs to track files
on the source system and/or an introduction of a slight delay in
the replication process allows for more efficient data replication
operations. For instance, journal entries representing monitored
data operations on the source file system can be stored without
storing the actual data. Moreover, location information in the log
entries can be analyzed to determine if multiple write operations
on the source system can be combined into a single write operation
on the destination system. In yet other embodiments, temporary
files on the source system can be identified and not copied to the
destination system.
[0011] According to certain embodiments, a method is provided for
identifying data to be copied in a data replication system. The
method can include obtaining with a scanning module executing on a
computing device a first file identifier descriptor (FID) of a
first directory on a live source file system. In some cases, the
first FID is one of a plurality of unique identifiers corresponding
to a plurality of directories and files on the source file system.
The method may further include adding the first FID to a queue, and
can also include storing a current journal sequence number from a
file system filter driver identifying a first time. In some
instances, the method includes, following said storing, accessing a
current directory of the plurality of directories on the source
file system that corresponds to a next FID stored in the queue. The
method can additionally include obtaining additional FIDs for each
immediate child directory and immediate child file in the current
directory. If no changes have been made to the current directory
since the first time, the method can include: populating a file
name database with the additional FIDs of each immediate child
directory and immediate child file in the current directory; adding
the additional FIDs of each immediate child directory of the
current directory to the queue; and/or removing the next FID from
the queue. If changes have been made to the first directory since
the first time, the method can include repeating said storing, said
accessing and said obtaining the additional FIDs.
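As a rough sketch only, the loop described above might look as follows in Python, with inode numbers standing in for FIDs and the filter driver's change journal stubbed out (both are assumptions made purely for illustration):

```python
import os
from collections import deque

name_db = {}  # FID -> (short name, parent FID)

def fid_of(path):
    # Assumption: use the inode number as a stand-in for a kernel FID.
    return os.stat(path).st_ino

def current_journal_seq():
    # Stub: a real system would read this from the file system filter driver.
    return 0

def changed_since(path, journal_seq):
    # Stub: ask the filter driver whether this directory saw namespace
    # changes after sequence number journal_seq.
    return False

def scan_live(root):
    queue = deque([(fid_of(root), root)])
    while queue:
        dir_fid, dir_path = queue[0]
        while True:
            seq = current_journal_seq()            # record the "first time"
            children = list(os.scandir(dir_path))  # obtain immediate children
            if not changed_since(dir_path, seq):
                break                              # scan of this directory is consistent
            # The directory changed mid-scan: repeat with a fresh sequence number.
        for entry in children:
            child_fid = fid_of(entry.path)
            name_db[child_fid] = (entry.name, dir_fid)   # populate the database
            if entry.is_dir(follow_symlinks=False):
                queue.append((child_fid, entry.path))    # scan subdirectory later
        queue.popleft()                                  # remove this FID from the queue
    return name_db
```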
[0012] In some embodiments, a system is provided for preparing data
for replication from a source computing device in a network. The
system may include a queue configured to store a plurality of file
identifier descriptors (FIDs) each comprising a unique identifier
that corresponds to one of a plurality of directories and files on
a source file system. The system can also include a scanning module
executing on a computing device and configured to scan the source
file system while in a live state and to populate the queue with
the plurality of FIDs. In certain cases, the system additionally
includes a database comprising file name data that associates each
of the plurality of FIDs with a short name and a parent FID. The
scanning module can be further configured to populate the database
with the file name data based on said scan of the source file
system in the live state. The system can also include at least one
database thread configured to receive a data entry identifying a
data management operation associated with at least one of the
plurality of directories and files on the source file system and to
construct from the FID associated with the at least one directory
or file an absolute file name for transmission to a destination
system along with a copy of the data management operation for
replaying on the destination system.
[0013] According to other aspects of the disclosure, a method is
provided for performing data replication. The method can include
monitoring a plurality of journal entries associated with writing
data to a source storage device. The method may further include
identifying a first journal entry of the plurality of journal
entries. The first journal entry may comprise a first data write
operation, a first file identifier descriptor (FID) of a file to be
modified by the first data write operation on the source storage
device, and a first location of a first portion of the file to be
modified. The method can also include identifying a second journal
entry of the plurality of journal entries, the second journal entry
comprising a second data write operation, a second FID of a file to
be modified by the second data write operation on the source
storage device, and a second location of a second portion of the
file to be modified. In some instances, the method additionally
includes determining that the first and second data write
operations can be combined into a single write operation. The
method may also include constructing an absolute file name based on
at least one of said first and second FIDs, wherein neither the
first nor second journal entries comprises the absolute file name.
In some embodiments, the method includes transmitting the single
write operation and the absolute file name to a destination storage
device to replay on the destination storage device the data
modifications associated with the first and second write
operations.
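A brief sketch of the write-combining idea, assuming a hypothetical journal entry layout of (FID, byte offset, payload):

```python
def combine_writes(entries):
    """Coalesce consecutive writes when a write to the same FID begins
    exactly where the previous one ended."""
    combined = []
    for op in entries:
        prev = combined[-1] if combined else None
        if (prev is not None
                and prev["fid"] == op["fid"]
                and prev["offset"] + len(prev["data"]) == op["offset"]):
            prev["data"] += op["data"]      # extend the earlier write
        else:
            combined.append(dict(op))       # start a new destination write
    return combined

journal = [
    {"fid": 303, "offset": 0, "data": b"hello "},
    {"fid": 303, "offset": 6, "data": b"world"},   # contiguous: merged above
    {"fid": 404, "offset": 0, "data": b"other"},
]
assert len(combine_writes(journal)) == 2  # two destination writes remain
```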
[0014] According to yet further aspects of the disclosure, a system
is provided for performing data replication. The system can include
at least one computer application executing on a computing device
and configured to generate operations associated with data on a
source storage device. The system may also include a filter module
disposed between the at least one computer application and the
first storage device. The filter module can be configured to
identify from the operations, a first data modification operation,
a first file identifier descriptor (FID) of a file to be modified
by the first data modification operation, and a first location of a
first portion of the file to be modified, and a second data
modification operation, a second FID of a file to be modified by
the second data modification operation, and a second location of a
second portion of the file to be modified. The system can further
include a processing module configured to determine that the first
and second data modification operations can be combined into a
single modification operation. In some embodiments, the system also
includes at least one database thread configured to construct an
absolute file name for replaying the single modification operation
on replication data of a destination storage device based on at
least one of said first and second FIDs. In some cases, neither the
first nor second data modification operations comprises the
absolute file name.
[0015] According to other embodiments, a system is provided for
performing data replication. The system can include means for
monitoring a plurality of journal entries associated with writing
data to a source storage device. The system can further include
means for identifying a first journal entry of the plurality of
journal entries, the first journal entry comprising a first data
write operation, a first file identifier descriptor (FID) of a file
to be modified on the source storage device, and a first location
of a first portion of the file to be modified, and for identifying
a second journal entry of the plurality of journal entries, the
second journal entry comprising a second data write operation, a
second FID of a file to be modified on the source storage device,
and a second location of a second portion of the file to be
modified. The system can also include means for determining that
the first and second data write operations can be combined into a
single write operation. In certain embodiments, the system further
includes means for constructing an absolute file name based on at
least one of said first and second FIDs, wherein neither the first
nor second journal entries comprises the absolute file name. The
system may additionally include means for transmitting the single
write operation and the absolute file name to a destination storage
device to replay on the destination storage device the data
modifications associated with the first and second write
operations.
[0016] According to additional aspects, a method is provided for
performing data replication. The method can include monitoring data
operations associated with an application executing on a computing
device, the data operations operative to write data to a first
storage device. The method can also include populating a log file
with a plurality of data entries indicative of the data operations.
In some cases, the method includes identifying a first one of the
plurality of data entries associated with writing data to a
temporary file on the first storage device. The method may
additionally include replaying to a second storage device, based on
a portion of the data entries, a portion of the data operations to
replicate data to a first location on the second storage device.
The portion of the data entries according to some embodiments does
not include the first one of the plurality of data entries.
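For illustration, a small Python sketch of this filtering step; the suffix-based test for temporary files is purely an assumption, as the disclosure does not say how such files are recognized:

```python
def is_temporary(short_name):
    # Assumed convention; a real system might consult file attributes instead.
    return short_name.endswith(".tmp") or short_name.startswith("~")

def entries_to_replay(log_entries, name_db):
    """Yield only the log entries whose target file is not temporary."""
    for entry in log_entries:
        short_name, _parent_fid = name_db[entry["fid"]]
        if not is_temporary(short_name):
            yield entry
```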
[0017] According to some aspects of the disclosure, a system is
provided for performing data replication between two computing
devices in a network. The system can include at least one computer
application executing on a first computing device and configured to
generate a plurality of operations associated with storing data on
a source storage device, the data comprising at least one temporary
file and at least one non-transitory file. The system may also
include a log file comprising a plurality of data entries
indicative of the plurality of operations. In some cases, a first
one of the plurality of data entries is associated with writing the
at least one temporary file. According to some embodiments, the
system includes a processing module executing on a computing device and configured to
identify a first one of the plurality of data entries associated
with writing the temporary file. The processing module may be
further configured to copy a portion of the entries of the log file
to a second computing device in network communication with the
first computing device. The portion of the data entries according
to some embodiments does not include the first one of the plurality
of data entries.
[0018] For purposes of summarizing the disclosure, certain aspects,
advantages and novel features of the inventions have been described
herein. It is to be understood that not necessarily all such
advantages may be achieved in accordance with any particular
embodiment of the invention. Thus, the invention may be embodied or
carried out in a manner that achieves or optimizes one advantage or
group of advantages as taught herein without necessarily achieving
other advantages as may be taught or suggested herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] FIG. 1 illustrates a block diagram of a data replication
system according to certain embodiments of the invention.
[0020] FIG. 2 illustrates a block diagram of an exemplary
embodiment of a source system of the data replication system of
FIG. 1.
[0021] FIG. 3 illustrates a block diagram of an exemplary
embodiment of a destination system of the data replication system
of FIG. 1.
[0022] FIG. 4 illustrates a block diagram of further details of an
exemplary embodiment of the data replication system of FIG. 1.
[0023] FIG. 5 illustrates various fields of an exemplary embodiment
of a log entry usable by the data replication systems of FIGS. 1
and 4.
[0024] FIG. 6 illustrates a block diagram of an exemplary
embodiment of the data replication system of FIG. 1 having a
storage manager module.
[0025] FIG. 7 illustrates a flow chart of an exemplary embodiment
of an installation process usable by the data replication system of
FIG. 4.
[0026] FIG. 8 illustrates a flow chart of an embodiment of a
process of taking a consistency replication point usable by the
data replication system of FIG. 4.
[0027] FIG. 9 illustrates a block diagram of an exemplary
embodiment of a pathname translation system usable with embodiments
of a data replication system.
[0028] FIG. 10 illustrates an exemplary embodiment of a pathname
translation database usable with embodiments of the pathname
translation system of FIG. 9.
[0029] FIG. 11 illustrates a flowchart of an exemplary embodiment
of a pathname translation process executable by the pathname
translation system of FIG. 9.
[0030] FIG. 12 illustrates a block diagram of another embodiment of
a pathname or file name translation system usable with embodiments
of a data replication system.
[0031] FIG. 13 illustrates a flowchart of a process for scanning a
live file system, according to certain embodiments of the
invention.
[0032] FIG. 14 illustrates a flowchart of a scan routine usable
within the process of FIG. 13, according to certain embodiments of
the invention.
[0033] FIGS. 15A and 15B illustrate a flowchart of a replication
process for interleaving a stream of journal entries with database
results of a live file system scan, according to certain
embodiments of the invention.
[0034] FIG. 16 illustrates a flowchart of another replication
process usable with embodiments of a data replication system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0035] As will be seen from the disclosure herein, certain
embodiments of systems and methods are provided for intelligent
data replication. In particular, embodiments of the invention
include improved systems and methods for scanning a source file
system having data to be copied in a CDR system. In certain
embodiments, such scanning is performed on a live file system
without requiring a snapshot of the file system data. For example,
FIDs of files and/or directories on the file system can be used to
populate a file name database usable to construct an absolute file
name when transmitting data to the replication system.
[0036] In certain further embodiments, use of FIDs to track files
on the source system and/or accumulating a group of journal entries
to transmit during the replication process allows for more
efficient data replication operations. For instance, logs with
entries representing monitored data operations on the source file
system can be stored without actual data. Moreover, location
information in the log entries can be analyzed to determine if
multiple write operations on the source system can be combined into
a single write operation on the destination system. In yet other
embodiments, temporary files on the source system can be passed
over when identifying source data to be copied to the destination
system.
[0037] The features of the systems and methods will now be
described with reference to the drawings summarized above.
Throughout the drawings, reference numbers are re-used to indicate
correspondence between referenced elements. The drawings,
associated descriptions, and specific implementations are provided
to illustrate embodiments of the invention and not to limit the
scope of the disclosure.
[0038] Moreover, embodiments of the invention can be used in
combination with replication systems and methods described in U.S.
Pat. No. 7,651,593, which is hereby incorporated herein in its
entirety to be considered part of this specification.
[0039] FIG. 1 illustrates a block diagram of a data replication
system 100 according to certain embodiments of the invention. As
shown, the replication system 100 comprises a source system 102
capable of communicating with a destination system 104 by sending
and/or receiving data over a network 106. For instance, in certain
embodiments, the destination system 104 receives and/or stores a
replicated copy of at least a portion of data, such as
application-specific data, associated with the source system
102.
[0040] The illustrated network 106 advantageously comprises any
means for communicating data between two or more systems or
components. In certain embodiments, the network 106 comprises a
computer network. For example, the network 106 may comprise a
public network such as the Internet, virtual private network (VPN),
token ring or TCP/IP based network, wide area network (WAN), local
area network (LAN), an intranet network, point-to-point link, a
wireless network, cellular network, wireless data transmission
system, two-way cable system, interactive kiosk network, satellite
network, broadband network, baseband network, combinations of the
same or the like. In embodiments wherein the source system 102 and
destination system 104 are part of the same computing device, the
network 106 may represent a communications socket or other suitable
internal data transfer path or mechanism.
[0041] As shown, the source system 102 comprises one or more
applications 108 residing on and/or being executed by a computing
device. For instance, the applications 108 may comprise software
applications that interact with a user to process data and may
include, for example, database applications (e.g., SQL
applications), word processors, spreadsheets, financial
applications, management applications, e-commerce applications,
browsers, combinations of the same or the like. For example, in
certain embodiments, the applications 108 may comprise one or more
of the following: MICROSOFT EXCHANGE, MICROSOFT SHAREPOINT,
MICROSOFT SQL SERVER, ORACLE, MICROSOFT WORD and LOTUS NOTES.
[0042] The source system 102 further comprises one or more
processes, such as filter drivers 110, that interact with data
(e.g., production data) associated with the applications 108. For
instance, the filter driver 110 may comprise a file system filter
driver, an operating system driver, a filtering program, a data
trapping program, an application, a module of the application 108,
an application programming interface ("API"), or other like
software module or process that, among other things, monitors
and/or intercepts particular application requests targeted at a
file system, another file system filter driver, a network attached
storage ("NAS"), a storage area network ("SAN"), mass storage
and/or other memory or raw data. In some embodiments, the filter
driver 110 may reside in the I/O stack of the application 108 and
may intercept, analyze and/or copy certain data traveling from the
application 108 to a file system.
[0043] In certain embodiments, the filter driver 110 may intercept
data modification operations that include changes, updates and new
information (e.g., data writes) with respect to the application(s)
108 of interest. For example, the filter driver 110 may locate,
monitor and/or process one or more of the following with respect to
a particular application 108, application type or group of
applications: data management operations (e.g., data write
operations, file attribute modifications), logs or journals (e.g.,
NTFS change journal), configuration files, file settings, control
files, other files used by the application 108, combinations of the
same or the like. In certain embodiments, such data may also be
gathered from files across multiple storage systems within the
source system 102. Furthermore, the filter driver 110 may be
configured to monitor changes to particular files, such as files
identified as being associated with data of the applications
108.
[0044] In certain embodiments, multiple filter drivers 110 may be
deployed on a computing system, each filter driver being dedicated
to data of a particular application 108. In such embodiments, not
all information associated with the client system 102 may be
captured by the filter drivers 110 and, thus, the impact on system
performance may be reduced. In other embodiments, the filter driver
110 may be suitable for use with multiple application types and/or
may be adaptable or configurable for use with multiple applications
108. For example, one or more instances of customized or
particularized filtering programs may be instantiated based on
application specifics or other needs or preferences.
[0045] The illustrated source system 102 further comprises a source
storage device 112. The source storage device 112 may include any
type of media capable of storing data. For example, the source
storage device 112 may comprise magnetic storage (such as a disk or
a tape drive) or other type of mass storage. In certain
embodiments, the source storage device 112 may be internal and/or
external to (e.g., remote to) the computing device(s) having the
applications 108 and the filter drivers 110.
[0046] As further illustrated in FIG. 1, the destination system 104
comprises a replication module 114 and a destination storage device
116. In certain embodiments, the replication module 114 is
configured to monitor and/or manage the copying of data from the
source system 102 to the destination system 104, such as data
retrieved by the filter drivers 110. In yet other embodiments, the
replication module 114 is a "dumb" server or terminal that receives
and executes instructions from the source system 102.
[0047] The destination storage device 116 may include any type of
media capable of storing data, such as replication data sent from
the source system 102. For example, the destination storage 116
device may comprise magnetic storage (such as a disk or a tape
drive) or other type of mass storage. In certain embodiments, the
destination storage device 116 may be internal and/or external to
the computing device(s) having the replication module 114.
[0048] In certain embodiments, the source storage device 112 and/or
the destination storage device 116 may be implemented as one or
more storage "volumes" that include physical storage disks defining
an overall logical arrangement of storage space. For instance,
disks within a particular volume may be organized as one or more
groups of redundant array of independent (or inexpensive) disks
(RAID). In certain embodiments, either or both of the storage
devices 112, 116 may include multiple storage devices of the same
or different media.
[0049] FIG. 2 illustrates a block diagram of an exemplary
embodiment of the source system 102 of FIG. 1. In particular, the
source system 102 comprises a client computer 230 on which the
application(s) 108 and the filter driver(s) 110 reside and/or are
executed. In certain embodiments, the client computer 230 comprises
any computing device capable of processing data and includes, for
example, a server computer, a workstation, a personal computer, a
cell phone, a portable computing device, a tablet computer, a
handheld computing device, a personal digital assistant (PDA) or
the like.
[0050] The illustrated client computer 230 further comprises a file
system 234 for organizing files and directories accessible by the
client computer 230. In certain embodiments, the file system 234
comprises a data structure usable to keep track of a collection of
files and/or directories stored on the source storage device 112.
The file system 234 may include, for example, a local file system,
a network file system, a file server, a management program or the
like, or may include multiple file systems accessible by an
operating system. For instance, in embodiments wherein the storage
device 112 is associated with multiple volumes, each volume may be
associated with its own file system 234, or a single file system
234 may span across the multiple volumes.
[0051] The illustrated client computer 230 also comprises one or
more data agents 236. In certain embodiments, the data agent 236
comprises a module responsible for performing data and/or storage
tasks related to the client computer 230. For example, the data
agent 236 may manage and/or coordinate the compilation of and/or
transferring of replication data from the source system 102. In
other embodiments, the data agent 236 may provide archiving,
migrating, and/or recovery of client computer data.
[0052] In certain embodiments, the client computer 230 comprises a
plurality of data agents 236, each of which performs data
management operations related to data associated with each
application 108. In such embodiments, the data agent 236 may be
aware of the various files, folders, registry files and/or system
resources that are impacted by a particular application 108. For
instance, the data agent 236 may be programmed to detect data
management requests by a particular application 108 and determine
which files, folders and/or system resources are associated with
the data management requests.
[0053] In certain embodiments, the data agent 236 is configured to
perform data management operations in accordance with one or more
"storage policies" or other preferences. A storage policy may
include a data structure or other information having a set of
preferences and other storage criteria for performing a storage
operation. The preferences and storage criteria may include, but
are not limited to, information regarding storage locations,
relationships between system components, network pathways,
retention policies, data characteristics, compression or encryption
requirements, preferred system components, combinations of the same
or the like.
[0054] In certain embodiments, one or more data agents 236 are
configured to perform an initial "seeding" or synchronization
process of a replication process. For example, prior to (or
concurrently with) data replication using one or more filter
drivers 110, the data agent 236 may perform a scan of the source
system 102 (e.g., the source storage device 112). For instance, the
data agent 236 may evaluate the folders and/or directory structure
of the source system 102 to determine which folders are used by a
particular application 108. In certain embodiments, the data agent
236 may also identify, arrange, and queue necessary data of the
application 108 to provide a proper platform for replication. For
example, the data agent 236 may populate source log(s) 244 with
application data that has already been written to the source
storage device 112. In certain embodiments, this populating is
performed based on a snapshot or point-in-time copy of the file
system. In yet other embodiments, as described below, the data
agent 236 is configured to scan a live file system.
[0055] In certain embodiments, when the data agent 236 is initially
installed or enabled on the client computer 230, the data agent 236
may evaluate the application 108. For instance, the data agent 236
may determine the application's organizational structure, which may
include, for example, folder, directory and file information. The
information gathered by the data agent 236 may be sufficient to
define a complete "set" of information to be replicated such that
suitable baseline data representing the current operational state
of the application 108 is identified. In some instances, this
initial process may require the examination and identification of
data related to application operations occurring prior to the
installation of data agent 236. The data agent 236 may also be
configured to identify general configuration and operational
information regarding the application 108.
[0056] In certain embodiments, the data agent 236 may be configured
to access and/or monitor particular files, folders, directories,
registries, preferences and/or other like data structures for
information to be replicated. All or a portion of the information
gathered by the data agent 236 may be copied over to the
destination system 104 as part of the initial seeding or
initialization process. After the seeding process is complete, data
replication may occur on a substantially continuous basis based on
data transfers occurring between application(s) 108 and source
storage device 112. In certain embodiments, the seeding process may
occur substantially concurrently with execution of the
application(s) 108. For instance, data operations from the
application(s) 108 may be temporarily stored in a queue or buffer
until the seeding process, or a portion thereof, is complete.
[0057] As shown in FIG. 2, the client computer 230 communicates
through the file system 234 with the source storage device 112,
which further includes a database 240 and database logs 242. In yet
other embodiments, the client computer may communicate with a NAS or
the like. In certain embodiments, data intended for the source
storage device 112 may be first written to a file in the database
logs 242 and subsequently committed to the database 240 in
accordance with data management techniques for enhancing storage
operation performance. Moreover, although only one database 240 and
one database log 242 are depicted in FIG. 2, it will be understood
that the source storage device 112 may comprise additional
databases 240, database logs 242 and/or other directory and file
storage structures to meet the storage needs of the client computer
230.
[0058] As illustrated in FIG. 2, the filter driver 110 is
advantageously located between the application 108 and the file
system 234. For instance, the filter driver 110 may be deployed in
the stack as an I/O buffer and/or process in the data path between
the application 108 and the file system 234. In such embodiments,
the filter driver 110 may intercept, snoop, supervise, trap,
process or otherwise be cognizant of some or all operations (e.g.,
data modification operations, file modification operations, read
operations and the like) from the application 108 to its associated
location(s) on the source storage device 112.
[0059] For example, in certain embodiments, the filter driver 110
may communicate with an associated data agent 236 to determine
where data for a particular application 108 will be stored (e.g.,
particular folders on the file system 234). In certain embodiments,
the filter driver 110 and/or the data agent 236 may also monitor
and/or parse data management operations to determine if new or
additional folders are affected by the production volume data of
the particular application 108. In certain embodiments, the data
agent 236 may monitor data management operations and/or other data
for other purposes, such as, for example, for satisfying a query or
command by a storage manager component or the like.
[0060] As further depicted in FIG. 2, one or more of the filter
drivers 110 and associated data agent(s) 236 may be grouped
together as a single module, such as driver module 237. In yet
other embodiments, the data agent(s) 236 may be separate from the
driver module 237.
[0061] As discussed above, in certain embodiments, the filter
driver 110 is preferably configured to monitor and/or filter data
management operations associated with a particular application 108.
The filter driver 110 may be further configured, according to
predefined criteria, to cause particular data to be written to one
or more source logs 244 for subsequent replication. For instance,
the filter driver 110 may be configured to intercept, scrub, parse
and/or trap data management operations and to populate the source
logs 244 with changes associated therewith.
[0062] In certain embodiments, the filter driver 110 may examine
the data management operation in progress, determine whether the
type of operation is one of interest for replication purposes,
and/or copy select or all data to source log 244. For instance, as
discussed above, the filter driver 110 may determine if the data
management operation concerns data in one or more files determined
as relevant to replication (e.g., files that may store data for a
particular application). In other embodiments, the filter driver
110 may generate log entries for all data management
operations.
[0063] The filter driver 110 may further process and/or traverse
the data and copy, generate or examine other relevant information,
such as a log entry number, time information (e.g., time stamp),
application type, data size and start field, combinations of the
same or the like, that may be useful in the replication process. In
other embodiments, the filter driver 110 may monitor files on the
source storage device 112 for modifications of data relating to the
subject application 108. For instance, as disclosed above, the
filter driver 110 may monitor a select group of files, which have
been associated with the application 108, or folders to detect
changes to data stored therein. In certain embodiments, the filter
driver 110 or other system component may detect when a data write
operation of the application is made to a file or folder not in the
select group. The filter driver 110 or other system component may
then determine from the properties of the data write modification
if the subject folder or file should be added to the select group
(for subsequent monitoring).
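As one hedged illustration of how the select group might grow, with the relevance test being entirely hypothetical:

```python
monitored = {"/var/app/data.db", "/var/app/data.log"}  # the select group

def relevant_to_application(path):
    # Assumption: the monitored application keeps its data under /var/app.
    return path.startswith("/var/app/")

def should_log_write(path):
    if path in monitored:
        return True                    # already in the select group
    if relevant_to_application(path):
        monitored.add(path)            # add the file for subsequent monitoring
        return True
    return False                       # unrelated write; not replicated
```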
[0064] In certain embodiments, the filter driver 110 is deployed
(e.g., by data agent 236) on the client computer 230 prior to the
beginning of the replication process. In embodiments wherein the
filter driver 110 is deployed after replication begins, pertinent
application data already stored on the source storage device 112
may be copied to the source logs 244 prior to the replication
process (e.g., during the initial "seeding" process described
above).
[0065] In certain embodiments, the filter driver 110 may be enabled
and/or disabled by the data agent 236. For instance, enabling the
filter driver 110 may allow it to populate an associated source
log 244 with log entries from application data passed from the
application 108 to the source storage device 112. When the filter
driver 110 is disabled, data may pass directly through to the
source storage device 112 without being copied to the source logs
244.
[0066] In certain embodiments, the data agent 236 monitors the
storage capacity of the source logs 244. For instance, when one or
more of the source logs 244 reach a particular memory threshold,
the data agent 236 may open a socket and communicate to the
destination system 104 that a copy of the source log 244 is ready
to be transmitted. In other embodiments, the data agent 236 may be
configured to copy the source log 244 to the destination system 104
at periodic intervals or in accordance with other predefined
criteria. In yet other embodiments, the source logs maintain the
history of previous intercepted changes (e.g., the last N gigabytes
of previous changes). As just one example scenario, the history of
intercepted changes can be used in the event that network
connectivity is temporarily lost. For example, the history of
intercepted changes can be accessed, and any changes that were not
transmitted due to the connectivity interruption can be transmitted
or retransmitted to the appropriate destination. This may be
particularly useful where there are multiple destination devices
and where the changes are successfully transmitted to a first
subset of the multiple destination devices, but not a second subset
of the multiple destination devices. In this case, the history can
be accessed to transmit or retransmit the appropriate intercepted
changes to the second subset of destination devices.
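A compact sketch of replaying retained history per destination; the per-destination acknowledgment bookkeeping shown here is an assumption:

```python
history = [{"seq": n, "op": f"write-{n}"} for n in range(1, 6)]  # retained changes
last_acked = {"dest-a": 5, "dest-b": 3}  # highest sequence number each confirmed

def retransmit(send):
    """Resend any retained change a destination has not yet acknowledged."""
    for dest, acked in last_acked.items():
        for entry in history:
            if entry["seq"] > acked:
                send(dest, entry)

retransmit(lambda dest, e: print(dest, e["seq"]))  # resends 4 and 5 to dest-b
```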
[0067] In certain embodiments, the source system 102 communicates
with the associated destination system to verify that the two
systems are synchronized. For instance, the source system 102 may
receive from the destination system an identification (e.g., unique
serial number) of the data write operation currently being
replicated by the destination system. The source system 102 may
then compare the received identification with the data write operation
being forwarded to the source storage device 112.
[0068] FIG. 3 illustrates a block diagram of an exemplary
embodiment of the destination system 104 of FIG. 1. In particular,
the destination system 104 comprises the replication module 114,
which communicates with one or more replication logs 352 and the
destination storage device 116. In certain embodiments, the
replication module 114 comprises any computing device capable of
processing data and includes, for example, a server computer, a
workstation, a personal computer or the like.
[0069] In certain embodiments, the replication logs 352 contain a
copy of the data stored on the source logs of a client system, such
as the source logs 244 of FIG. 2. The replication logs 352 comprise
any type of memory capable of storing data including, for example,
cache memory. In certain embodiments, the replication logs 352 may
reside on the destination system 104, such as, for example, on the
destination storage device 116, or at least a portion of the
replication logs 352 may be external to the destination system 104.
In certain embodiments, once the replication logs 352 have been
populated with the data from the source logs 244, the data on the
source logs 244 is available to be erased and/or overwritten to
conserve memory space.
[0070] The replication module 114 of the destination system 104
further comprises a replication agent 356 and one or more
processes, such as threads 358. In certain embodiments, the
replication agent 356 comprises one or more software modules that
coordinate the transfer of data from the replication logs 352 to
the destination storage device 116.
[0071] For example, in certain embodiments, the replication agent
356 instantiates an appropriate number of threads, processes, or
routines 358 for copying data from the replication logs 352 to the
destination storage device 116. In certain embodiments, the number
of threads 358 is based on one or more of the following factors:
the number of log files sent from the source logs 244 to the
replication logs 352, information received from the data agent(s)
236, information generated by the filter driver(s) 110, and the
type(s) of application data being tracked.
[0072] In certain embodiments, the replication agent 356 further
includes mapping or correlation information that determines when
and to where the data from the replication logs 352 is copied by
the threads 358. In certain embodiments, such mapping information
may be based on system- or user-defined parameters and/or may be
automatically generated, such as based on the status of the
destination storage device 116.
[0073] The one or more threads 358 (or processes) direct movement
of data from replication logs 352 to the appropriate location on
the destination storage device 116. In operation, in certain
embodiments, the threads 358 advantageously process (or traverse)
replication logs 352 for particular types of data and then copy
that data to certain locations on one or more replication volumes
based on data paths identified by the replication agent 356 and/or
associated with each thread 358. For example, the thread(s) 358 may
sequentially process each entry in the replication log 352 and
write the associated data to the destination storage device
116.
[0074] In certain embodiments, each thread 358 is assigned to a
hard-coded path pair, which includes (i) a source path identifying
the location on the source storage device 112 associated with a
data management operation (e.g., c:\Folder\) and (ii) a destination
path identifying the location on the destination storage device 116
to receive the replicated data (e.g., D:\folder\) from the thread
358.
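For example, such a path pair might be applied as a simple prefix substitution (a sketch only; the disclosure does not detail the translation mechanics):

```python
SOURCE_PREFIX = r"c:\Folder"   # example pair from the text above
DEST_PREFIX = r"D:\folder"

def to_destination(source_path):
    """Map a source path to its destination counterpart for this thread."""
    if not source_path.lower().startswith(SOURCE_PREFIX.lower()):
        raise ValueError("path is outside this thread's source path")
    return DEST_PREFIX + source_path[len(SOURCE_PREFIX):]

print(to_destination(r"c:\Folder\logs\a.txt"))  # -> D:\folder\logs\a.txt
```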
[0075] FIG. 4 illustrates further details of a replication system
400 in accordance with certain embodiments of the invention. As
illustrated, the replication system 400 comprises the source system
102 in communication with the destination system 104, portions of
which are described in more detail with respect to FIGS. 1-3.
[0076] As detailed above, in certain embodiments, the filter driver
110 preferably substantially continuously populates data relating
to one or more of the applications 108 to the source logs 244. As
shown in FIG. 4, the source logs 244 further comprise a first log
file 460 and a second log file 462. In certain embodiments, the
filter driver 110 sequentially writes log entries to the source
logs 244, and when a certain capacity of the first log file 460 is
reached, the filter driver 110 begins populating the second log
file 462 with log entries.
[0077] In yet other embodiments, data relating to each application
108 of interest may be written to a particular log file established
for that application. For example, with reference to FIG. 4, the
first log file 460 may relate to a first application of interest,
whereas the second log file 462 may relate to a second application
of interest.
[0078] In certain embodiments, each of the log files of the source
logs 244 may be established by the data agent(s) 236 and/or the
filter driver(s) 110 as part of an initial deployment or
initialization process. Moreover, data may be written to the source
logs 244 as determined by preferences stored on or accessed by the
client computer 230 in a preference database 465.
[0079] For example, as further shown in FIG. 4, the first and
second log files 460, 462 may comprise a series of entries, each
having an identifier that indicates the sequence order and/or type
of entry being made. For instance, the illustrated entry identifier
("L1") may indicate that the particular entry represents a first
database entry in a particular order of operation. The illustrated
entry identifier ("L2") may indicate a second database entry in a
particular order of operation, and so forth. The illustrated entry
identifier ("Dl") may indicate that the particular entry represents
a first database commit entry in a particular order of operation.
Thus, in the example described above, the log entries identified by
L1 and L2 may correspond to modifications associated with a
particular database transaction, and the log entry identified by D1
may correspond to a commit command for the particular
transaction.
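To illustrate the ordering, here is a hypothetical sketch that buffers modification entries (L1, L2, ...) and releases them when the matching commit entry (D1) arrives; the entry format is invented for the example:

```python
log = [
    {"id": "L1", "txn": 7, "op": "modify"},
    {"id": "L2", "txn": 7, "op": "modify"},
    {"id": "D1", "txn": 7, "op": "commit"},
]

def committed_operations(entries):
    """Hold each transaction's modifications until its commit entry appears."""
    pending = {}
    for entry in entries:
        if entry["op"] == "commit":
            yield from pending.pop(entry["txn"], [])
        else:
            pending.setdefault(entry["txn"], []).append(entry)

print([e["id"] for e in committed_operations(log)])  # -> ['L1', 'L2']
```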
[0080] It will be understood that, although only two log files are
shown in FIG. 4, more or fewer log files may be used with
embodiments of the invention. For instance, multiple applications
108 may be monitored by the filter drivers 110 and, thus,
additional log files may be added as necessary or desired.
Moreover, although in some embodiments, each application 108 and
each log file in the source logs 244 may have its own associated
filter driver 110, in other embodiments, a single filter driver 110
may be deployed and configured for use with multiple applications
108 such that there are separate log files for each monitored
application 108.
[0081] With continued reference to FIG. 4, in certain embodiments
of the invention, the data agent 236 and/or filter driver 110 may
be advantageously configured to pause, or quiesce, the application
108 during data replication. For instance, the data agent 236 may
cause the application 108 to temporarily suspend data management
operations to the source storage device 112 once the application
108 reaches a known "good," "stable" or "recoverable" state. In
certain embodiments, such a state may be defined as when particular
computing operations of the application 108 are complete to a point
such that further operation, recovery and/or rolling back of the
application 108 may occur, based on the recorded data, without the
loss of critical information or computing operations needed for
operation of the application 108. This point of referential
integrity is generally referred to herein as a known good state of
the application 108.
[0082] In certain embodiments, the data agent 236 instructs the
quiescing of the application 108 through an application programming
interface (API). For instance, the data agent 236 may send a
command (e.g., FLRSNAP.FOO) to the application 108 that causes the
application 108 to quiesce. When the application 108 has placed
itself in a known good state, the application 108 may send an
acknowledgment to the data agent 236.
[0083] In certain embodiments, once the data management operations
are suspended, the I/O buffers in the data path of the application
are flushed (and/or the writes in the queues are flushed), and the
source logs 244 are populated. For example, some or all of the
pending data management operations (e.g., as of the time of the
suspension of the application) may be allowed to complete and/or
percolate through the data path. The filter driver 110 and/or data
agent 236 then inserts a logical marker or tag in the source log
file denoting that a "consistency point" or "consistency recovery
point" has been reached. In some embodiments, the consistency point
indicates the time at which the application 108 is at a known good
state. For instance, in certain embodiments, the data agent 236
instructs the filter driver 110 to insert a consistency point entry
into the source logs 244.
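The quiesce/flush/mark/resume sequence could be summarized by the following sketch, in which every interface is a hypothetical stand-in (the disclosure names only an illustrative API command, FLRSNAP.FOO):

```python
source_log = []

class StubApplication:
    # Stand-in for quiescing a real application through its API.
    def quiesce(self):
        print("application suspending writes; reaching known good state")
    def restart(self):
        print("application resuming normal data management operations")

def flush_io_buffers():
    print("flushing pending operations through the data path")

def take_consistency_point(app):
    app.quiesce()                        # reach a known good state
    flush_io_buffers()                   # let pending writes complete
    source_log.append({"type": "CRP"})   # insert the consistency point marker
    app.restart()                        # resume normal operation

take_consistency_point(StubApplication())
```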
[0084] FIG. 4 illustrates consistency point entries as log entries
463 and 464 in, respectively, the first and second log files 460,
462. As shown, the consistency point entries are represented by
"CRP" in the source logs 244. In certain embodiments, once the
consistency point is identified and inserted into the source log
244, the data agent 236 may instruct the application 108 to
"restart" so as to resume normal data management operations from
the application 108 to the source storage device 112.
[0085] Notwithstanding the foregoing, it will be understood that,
in certain embodiments, although application 108 is quiesced, it
need not actually pause or suspend operation during the quiescent
period. Rather, the application 108 may continue to operate
substantially normally but may internally queue, or otherwise
buffer, data management operations intended for the source storage
device 112. After the quiescent period, the buffered modification
operations may be allowed to complete (i.e., be sent to the source
storage device 112).
[0086] In yet other embodiments, policies for the frequency of
consistency point entries may be automatically generated. For
instance, the data agent 236 may be configured to quiesce the
application 108 based on the status (e.g., capacity) of the source
logs 244, the replication logs 352 and/or the destination storage
device 116. In yet other embodiments, quiescing of the application
108 may be performed based on an automatic reporting procedure. For
instance, a module of the replication system 400 may be configured
to gather, receive and/or analyze information associated with a
failure rate and/or health of applicable servers. Additional
details of such status monitoring are provided in U.S. patent
application Ser. No. 11/120,619, filed May 2, 2005, now published
as US 2006-0053261 A1, which is hereby incorporated herein by
reference in its entirety. For example, the frequency of
consistency points may be selected or adjusted to mitigate risks
detected in a storage network.
[0087] In certain embodiments, one or more log entries in the
source logs 244 are preferably associated with journal sequence
numbers and/or time information, such as, for example, assigned a
time stamp indicative of the client system time with which the
particular log entries are associated. For instance, the time
information may indicate the time at which: the log entry is
written to the source log 244, the data management operation is
generated by the application 108, the data modification operation
is committed to disk or the like. In certain embodiments, not all
the log entries are assigned a time stamp. Rather, particular types
of data, such as, for example, consistency point markers and/or
database commit entries, are assigned time stamps.
[0088] In certain embodiments of the invention, the data agent 236
coordinates with the replication agent 356 to copy log files from
the source logs 244 to the replication logs 352. Such copying may
be initiated based on any suitable factor, such as, for example,
preset copying intervals, capacity thresholds reached in the source
logs 244, time lapsed since the last copy operation, replication
agent 356 requests for a copy operation, and/or based on specific
parameters or requirements associated with a particular application
108. For instance, certain data-sensitive applications may be
copied more frequently than other applications in order to reduce
the amount of potential data loss due to a failure occurring
between copy operations.
[0089] As further illustrated in FIG. 4, the replication logs 352
include a first log file 466 and a second log file 468. In certain
embodiments, each of these log files 466, 468 corresponds,
respectively, to the first log file 460 and the second log file 462
of the source logs 244. For instance, data may be transferred
between the replication log(s) 352 and the source log(s) 244 such
that the order in which the data was stored in the source log(s)
244 is preserved. In addition, the log files may be recreated in
the replication log(s) 352 to reflect the organization of source
logs 244. For example, the first log file 460 and the second log
file 462 in the source logs 244 may be transferred and recreated by
the replication agent 356 and/or the data agent 236. In other
embodiments, however, data may be transferred and stored in a
different order without preserving source system correlations
and/or may be rearranged during transfer to, or upon arrival in, the
replication volumes 116A, 116B.
[0090] The illustrated destination system 104 further comprises an
optional preference database 470 in communication with the
replication agent 356. In certain embodiments, the preference
database 470 includes storage policies or other preferences usable
by the replication agent 356 in managing data. For instance, the
stored preferences may indicate the desired frequency at which the
threads 358 should copy the data from the destination logs 352 to
the replication volumes 116A, 116B. The preference database 470 may
also store path information for detailing to which location(s) on
the replication volume(s) 116A, 116B the data in the replication
log(s) 352 should be copied. In yet other embodiments, the
preference database 470 may include storage policies that dictate
particular criteria for performing one or more data management
operations on the replicated data.
[0091] With continued reference to FIG. 4, the replication module
114 further comprises one or more processes, such as a replication
set or a log processing module 469 with a first thread 358A and a
second thread 358B. In certain embodiments, as discussed above, the
threads 358A, 358B are instantiated by the replication agent 356 to
transfer data from the first and second replication logs 466, 468
to the first replication volume 116A and/or the second replication
volume 116B.
[0092] In certain embodiments, the threads 358A, 358B utilize time
stamp or other temporal information that enables processing and/or
replaying of modification operations. For example, based on time
stamp information, the threads 358A, 358B may rearrange the
replication data such that the data is stored on the one or more
replication volumes in the proper order (e.g., the order in which
the data was intended to be written to the source storage device
112). In such embodiments, the replicated data may be subsequently
retrieved, recalled or otherwise accessed or processed and may be
used to accurately restore the state of the application 108 as it
existed at a given point in time. In yet other embodiments, other
data management operations (e.g., searching, data classification)
may be performed on the replicated data.
[0093] In certain embodiments, instructions for the storage
operations are sent from the data agent 236 on the source system
102. For instance, the instructions may be included in the log file
entries copied from the source system 102. In yet other
embodiments, the storage operations are coordinated by the
replication agent 356 (e.g., according to storage policies stored in
the preference database 470) in combination with, or independent
of, the data agent 236. In yet other embodiments, policies for
storage operations may be stored in another system management
component (e.g., a storage manager module).
[0094] In certain embodiments, a snapshot is taken for each volume
in which data is being replicated. For instance, with reference to
FIG. 4, first thread 358A is writing to the first replication
volume 116A, and second thread 358B is writing to the second
replication volume 116B. In such embodiments, when the first and
second threads 358A, 358B arrive at a consistency point log entry,
a snapshot is taken of the replicated data in each replication
volume 116A, 116B.
[0095] In certain preferred embodiments, when the snapshot is
performed at a particular consistency point, the time of the
snapshot is advantageously logically associated with the time that
the consistency point was generated at the client system 102 (e.g.,
the client system time of the known good state of the application
108). For instance, the time stamp of the consistency point may be
used to logically assign a "time" to the snapshot of the replicated
data. In such a process, the snapshot of the replicated data then
appears as if the snapshot was directly taken on the data in the
source system 102 at the time of the consistency point. Such a
process allows for the snapshot data to be viewed as a direct copy
of the production volume data for a particular application (e.g.,
source storage device 112) at a certain point in time (e.g., the
time of a known good state of an application).
[0096] While certain embodiments of storage operations have been
disclosed as being usable with the replication system 400 of FIG.
4, a wide variety of other storage operations may also be performed
on the replication data and/or in conjunction with consistency
point information. For example, other copies of the replicated data
may be performed, such as, but not limited to, creation, storage,
retrieval, migration, deletion, auxiliary copies, incremental
copies, differential copies, Hierarchical Storage Management
("HSM") copies, archive copies, backup copies, Information
Lifecycle Management ("ILM") copies, other types of copies and
versions of electronic data or the like.
[0097] In certain embodiments, after appropriate storage operations
are performed on the replicated data, a message may be sent to
other system management components (e.g., a snapshot manager and/or
optional storage manager) indicating that the replication process
is complete up to the time stamp associated with the consistency point.
At this point, the replication agent 356 may instruct copy
operations associated with the threads 358A, 358B to resume.
[0098] FIG. 5 illustrates an exemplary embodiment of a data
structure of a log entry 500 usable with the replication systems
described herein. In certain embodiments, the log entry 500
comprises information regarding modifications to data and/or files
on the source storage device 112 and may include, for example,
information regarding: which file was modified, the time of the
modification, the type of modification, the related data, a unique
identification, combinations of the same or the like. For exemplary
purposes, the various fields of the log entry 500 will be described
with respect to a data write operation in the replication system
400 of FIG. 4.
[0099] In certain embodiments, the log entry 500 is initially
generated by the filter driver 110 and is stored in the source log
244. For example, the log entry 500 may comprise a data word having
a plurality of fields. As illustrated, the log entry 500 comprises
a log entry number field 502, a path field 504, a time stamp field
506, an application type field 508, a write type field 510, a size
field 512, a checksum field 514, an offset field 516 and a payload
field 522.
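For illustration, the enumerated fields might be laid out as the following C structure; the widths and types are assumptions made for this sketch, as the text does not specify them:

    /* Illustrative layout of the log entry fields of FIG. 5. */
    #include <stdint.h>
    #include <time.h>

    struct log_entry {
        uint64_t entry_number;  /* log entry number field 502 */
        char     path[256];     /* path field 504 (pathname, FID or inode) */
        time_t   timestamp;     /* time stamp field 506 (client system time) */
        uint32_t app_type;      /* application type field 508 */
        uint32_t write_type;    /* write type field 510 (write, commit, CRP) */
        uint64_t size;          /* size field 512 (bytes modified) */
        uint32_t checksum;      /* checksum field 514 */
        uint64_t offset;        /* offset field 516 within the file */
        uint8_t  payload[];     /* payload field 522 (captured data) */
    };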
[0100] The log entry number field 502 may include information
regarding the entry number assigned to the log entry 500 for system
management purposes such that entries may be tracked and reordered
relative to one another if necessary. For example, as mentioned
herein, log entries may be arranged in a temporally sequential
manner based on the application write operation with which the
particular log entry 500 is associated. In certain embodiments, log
entry numbers or other information may be recycled over time once
all the numbers in a particular range have been used. In yet other
embodiments, the log entry number field 502 may be configured to
store other types of identification data for labeling the log entry
500.
[0101] The path field 504 may include information regarding the
file path on the source storage device 112 with which the data
write operation was associated. For example, a path of
"C:\DIR\USER\" may indicate that the log entry corresponds to an
operation writing data to a folder or file on the source storage
device having the designated pathname. In certain embodiments, the
path field 504 may include an absolute file pathname. In other
embodiments, the path field 504 may include an abbreviated
pathname, an FID, and/or an inode (e.g., for UNIX-based
systems).
[0102] Moreover, the path field 504 may include information
relating to the log entry's replication volume destination, and
thus may be useful in establishing or confirming correlation or
pairing information used by the thread(s) 358A, 358B. For instance,
in certain embodiments, the file path of a particular log file may
be hard-coded to one or more particular replication volume(s).
[0103] The time stamp field 506 may include information relating to
the time when the subject data write occurred. In certain
embodiments, the time stamp is advantageously associated with the
time of the client computer 230 on which the application 108 is
executing. For instance, the filter driver 110 may access the
source system time when generating the log entry 500. In other
embodiments, the time stamp may be provided by the filter driver
110 and/or may be relative to the replication system time.
[0104] The application type field 508 may include information
identifying the application type with which the log entry 500 is
associated (e.g., MICROSOFT OUTLOOK data, MICROSOFT SHAREPOINT
data, ORACLE data, SQL data, MICROSOFT WORD data, MICROSOFT
INTERNET EXPLORER data or the like).
[0105] The write type field 510 may include information regarding
the category of write data involved with the log entry 500. For
instance, the write type may identify if the log entry 500 is
associated with a database modification, a log write, a database
commit command, a consistency point or the like. In certain
embodiments, the information in the write type field 510 is used to
implement parallelism between multiple threads when performing data
replication. For instance, a first thread (e.g., thread 358A) may
handle log write commands, and a second thread (e.g., thread 358B)
may handle commit database commands. In certain embodiments, the
data stored in the write type field 510 may be used for
prioritizing the processing of various log entries (e.g.,
processing by the threads 358).
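A minimal sketch of such write-type-based dispatch follows; the enumerator names and the thread assignments are illustrative assumptions consistent with the two-thread example above:

    /* Sketch: choose a worker thread from the write type field,
     * echoing the example of one thread for log writes and another
     * for database commits. Enumerators are hypothetical. */
    enum write_type {
        WT_DB_MODIFY,
        WT_LOG_WRITE,
        WT_DB_COMMIT,
        WT_CONSISTENCY_POINT
    };

    static int thread_for_entry(enum write_type wt)
    {
        switch (wt) {
        case WT_LOG_WRITE: return 0;  /* e.g., thread 358A */
        case WT_DB_COMMIT: return 1;  /* e.g., thread 358B */
        default:           return 0;  /* other types: arbitrary choice */
        }
    }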
[0106] The size field 512 may include information relating to the
size (e.g., the number of bytes) of the data being modified by the
data write operation. In yet other embodiments, the size field 512
may contain information relating to the size of other or additional
segments within the log entry 500, such as, for example, the size
of the payload field 522.
[0107] The checksum field 514 may include information relating to
error checking to ensure, for example, that the log entry 500, when
created and subsequently transmitted, contains the expected number
of bits and has not been corrupted or otherwise impermissibly
changed. For instance, the checksum field 514 may store data
representing the arithmetic sum of some or all of the fields in the
log entry 500.
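A byte-wise additive checksum is one plausible reading of the "arithmetic sum" described above; a minimal sketch:

    /* Sketch of a simple additive checksum over the bytes of the
     * other fields of a log entry. */
    #include <stddef.h>
    #include <stdint.h>

    static uint32_t entry_checksum(const uint8_t *bytes, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += bytes[i];      /* wraps modulo 2^32 on overflow */
        return sum;
    }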
[0108] The offset field 516 may include information relating to the
location within a file or portion of data that the data write is
occurring. For instance, if the subject data write operation is
associated with modifying the twentieth through the thirtieth bytes
of a file or piece of data fifty bytes long, the offset field 516
may store a value of twenty. In such embodiments, the information
in the offset field 516 may be used jointly with the information in
the size field 512 to identify the entire portion of a file being
modified. For instance, in the above example the size field 512 may
store a value of eleven to indicate the length of the modified
section (i.e., twentieth through thirtieth bytes).
[0109] The payload field 522 may include information relating to
the data written from the application 108 to the source storage
device 112. This information generally represents the application
data captured by the filter driver 110 for replication and may
include additional information for the ongoing operation or
reconstitution of the application 108.
[0110] It will be understood that the illustrative filter driver
log entry 500 shown in FIG. 5 merely represents one possible
embodiment of a log entry suitable for use with embodiments of the
invention and that other embodiments may be used if desired. For
example, in other embodiments, the log entry 500 may comprise more
or fewer fields to accommodate the requirements of the particular
replication or storage operation system involved and/or to achieve
certain data or management goals, such as conserving memory,
increasing processing speed and increasing the amount of
information in each log entry. For instance, in certain embodiments
wherein the path determination for a particular log file or log
entry is dynamic, the log entry 500 may not include the path field
504. In other embodiments, the log entry 500 may include a priority
field that may be used for prioritizing replication and/or data
management operations of data associated with the log entry
500.
[0111] In other embodiments, the log entry 500 may concern a file
attribute change rather than a data write operation. In such
embodiments, the write type field 510 may identify the log entry
500 as being associated with a file attribute change. Furthermore,
the log entry 500 may store information regarding the new file
attribute but would not require offset or size values to be stored
in the size field 512 and/or the offset field 516.
[0112] In yet other embodiments, as discussed in more detail below,
the log entry 500 may not have a payload portion. Such embodiments
can significantly reduce the size of the log files and/or increase
system performance since copies of the actual data entries are not
needed. Rather, information stored in the log entry 500 can be used
by a file system driver (e.g., filter driver 110) to obtain a copy
of the data from the source storage device 112, when needed. Such
information can be obtained from the path field 504, size field
512, offset field 516 and/or other data identification information,
such as inodes, FIDs or the like.
[0113] FIG. 6 illustrates another embodiment of a replication
system 600 similar to the replication system 400 of FIG. 4. As
shown, the replication system 600 further includes a storage
manager 680 that communicates with the source system 102 and the
replication system 104. In certain embodiments, the storage manager
680 is a software module or application that is configured to
direct the performance of one or more storage operations and, in
particular, the replication of data from the source system 102 to
the replication system 104. In further embodiments, the storage
manager 680 may perform one or more of the operations or functions
described above with respect to the data agent 236 and/or the
replication agent 356. For instance, the storage manager 680 may
direct and/or coordinate the performance of one or more storage
operations on the replicated data (e.g., snapshots of the
replicated data).
[0114] In certain embodiments, the storage manager 680 maintains an
index 682, such as a cache, for storing information relating to:
logical relationships and associations between components of the
replication system 600, user preferences, management tasks, and/or
other useful data. For example, the storage manager 680 may use its
index 682 to track the location and timestamps of one or more
snapshots of the replicated data. In certain embodiments, the
storage manager 680 may track logical associations between one or
more media agents (not shown) and/or storage devices.
[0115] The storage manager 680 may also use its index 682 to track
the status of data management operations to be performed, storage
patterns associated with the system components such as media use,
storage growth, network bandwidth, Service Level Agreement ("SLA")
compliance levels, data protection levels, storage policy
information, storage criteria associated with user preferences,
retention criteria, storage operation preferences, and other
storage-related information. The index 682 may typically reside on
the storage manager's hard disk and/or in another database.
[0116] As shown in FIG. 6, the storage manager 680 further
communicates with a database 684. In certain embodiments, the
storage manager database 684 comprises a memory for storing system
management information relating to the replication of data. For
instance, the database 684 may be configured to store storage
and/or restore policies, user preferences, the status or location
of system components or data, combinations of the same and the
like. In yet other embodiments, the database 684 may be configured
to store information described above with respect to the index 682.
In yet other embodiments, at least a portion of the index 682 may
be stored on the database 684.
[0117] Additional details of storage manager modules useful with
embodiments of the replication systems described herein are
described in U.S. Pat. No. 7,389,311, issued Jun. 17, 2008, which
is hereby incorporated herein by reference in its entirety.
[0118] FIG. 7 illustrates a simplified flowchart of an
initialization process 700 in accordance with certain embodiments
of the invention. In particular, the initialization process 700
concerns certain preliminary processes and acts for setting up a
system for performing data replication, as disclosed herein. For
exemplary purposes, the initialization process 700 will be
described hereinafter with reference to the components of the
replication system 400 of FIG. 4.
[0119] The initialization process 700 begins with Block 705,
wherein one or more data agent(s) 236 are installed on the client
computer 230. In certain embodiments, the data agent 236 may be
installed remotely from other portions of the replication system
400 based on a particular need or to conform to certain directives
or resident storage policies. In other embodiments, the data agent
236 may be installed locally by a system user as desired. For
instance, installation of the data agent 236 may include deployment
and installation of object code files and supporting software.
[0120] In certain embodiments, the data agent 236 may be installed
for each application 108 of interest, or one or more data agents
236 may be installed for a larger number of applications 108.
Furthermore, in certain embodiments, an installation guide such as
a wizard or other program may recommend the appropriate number and
type of data agents 236 to install (which may be performed
substantially automatically based on application and system
configuration information).
[0121] At Block 710, the installed data agents 236 may perform
certain auto-discovery routines in order to determine basic system
and application information. In some embodiments, the
auto-discovery routines may be considered part of the installation
process. For example, the data agent 236 may begin the
auto-discovery process by scanning and evaluating the folder and
directory structure of the client computer 230 to determine which
folders are used by a particular application 108. In certain
embodiments, such information allows the data agent 236 to identify
and locate files or other information necessary to replicate the
current operating state of the application 108 of interest.
[0122] In certain embodiments, the scanning and evaluation process
may involve scanning multiple physical and/or logical volumes
associated with the source storage device 112 and/or within a given
network or enterprise to locate the data and system configuration
information necessary for data replication.
[0123] After the appropriate resources have been discovered and
examined, the data agent 236 may identify, arrange, coordinate
and/or queue the necessary data within various locations or
instances of the application 108 to establish a platform for proper
data replication (Block 715). In certain embodiments, this process
may be a precursor for performing the initial seeding or
synchronization operation described above.
[0124] Next, at Block 720, the data agent 236 communicates with the
replication agent 356. For instance, the data agent 236 may
transmit to the replication agent 356 information regarding the
replication of data. The data agent 236 may also request
information from the replication agent 356 and/or other network
management components for any information that may bear on, or be
related to, the correlation or mapping of network storage paths for
replication data. For example, the data agent 236 may consult the
preference database 470 of the destination system 104, the
preference database 465 of the source system 102 and/or a storage
manager component, for correlation or pairing information. Based on
this information, data paths may be identified for use by threads
358 when copying data from the replication logs 352 to the
replication volumes 116A, 116B. In certain embodiments, one or more
data paths may be dynamically coded or determined, such as, for
example, based on one or more storage policies and/or
preferences.
[0125] At Block 730, the initialization process 700 includes
installing and initializing the filter drivers 110. In certain
embodiments, such installation and/or initialization is based at
least in part on information obtained by the data agent 236 during
the discovery or scanning process (Block 710). For example, in
certain embodiments, one or more filter drivers 110 may be
installed by the data agent 236 in the I/O path of the
application(s) 108.
[0126] FIG. 8 illustrates a simplified flowchart of an embodiment
of a process for establishing a consistency replication point in
accordance with certain embodiments of the invention. In
particular, the replication process 800 involves the copying of
data from a source system to a destination system. Furthermore, in
certain embodiments, the replication process 800 is configured to
be performed after completion of the initialization process 700 of
FIG. 7. For exemplary purposes, the replication process 800 will be
described hereinafter with reference to the components of the
replication system 400 of FIG. 4.
[0127] The replication process 800 begins with Block 805, wherein
the filter driver 110 populates the source log(s) 244 with data
associated with the application 108, such as data identified by the
data agent 236. As discussed in more detail above, such data may
relate to data or file modification operations being passed from
the application 108 to the source storage device 112. In certain
embodiments, the filter driver 110 populates the source logs 244 in
a temporally sequential manner such that operations and data are
recorded in time-ascending (or descending) order (e.g., first
operation at the top and last operation at the bottom).
[0128] In certain embodiments, the data is populated in the source
logs 244 in a format similar to the structure of the log entry 500
of FIG. 5. In other embodiments, the data may be populated in other
suitable formats to satisfy the requirements of the particular
replication system. For instance, the log file format may comprise
a two- or multi-column structure, wherein the information in a
first column may indicate the type of data operation performed, and
the log entry's position in the log file indicates the order of the
operation relative to other operations in the log file. The
information in a second column may indicate the payload data
associated with the data operation indicated by the first
column.
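For instance, such a log file might resemble the following hypothetical fragment, in which the first column indicates the operation type, row position implies ordering, and the second column carries any payload:

    WRITE  | <payload for first operation>
    WRITE  | <payload for second operation>
    COMMIT | <no payload>
    CRP    | <no payload>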
[0129] After or concurrently with Block 805, the data agent 236 or
other system component pauses or quiesces the application 108
(Block 810). As discussed above, such quiescing causes the
application 108 to temporarily suspend data modification operations
to the source storage device 112 once the application 108 reaches a
known good state.
[0130] Once new modification operations are suspended and the
associated source log 244 is populated based on the modification
operations up to the known good state, the data agent 236 or other
replication system component inserts a logical marker or tag in the
source log 244 (Block 815). This "consistency point" denotes that
the state of the data is such that the application 108 may be
recovered or that further stable operation from that point going
forward is ensured. Once the consistency point is identified and
established, the data agent 236 may restart the application 108
such that data modification operations from the application 108 to
the source storage device 112 resume.
[0131] As referenced by Block 820, the data agent 236 or other
system management component coordinates the transfer of the data in
the source logs 244. In certain embodiments, the data agent 236
coordinates with the replication agent 356 to copy data from the
source logs 244 to the replication log(s) 352. For instance, the
replication agent 356 and/or data agent 236 may open a network path
or a communication socket between the source log(s) 244 and the
replication log(s) 352. The log entries of the source log(s) 244
may then be transferred as described above to populate the
replication log(s) 352. In certain embodiments, as the replication
log 352 is populated, the replication agent 356 may also obtain
configuration information from the data agent 236 or other system
management component such as, for example, a storage manager. Such
configuration information may identify aspects of the set of
information being transferred as well as identify pairing
information that correlates certain types of replication data with
certain replication volumes or other storage destinations.
[0132] At Block 825, the replication process 800 includes
instantiating one or more threads 358 to begin the transfer of data
from the replication log(s) 352 to certain replication volumes
116A, 116B. In certain embodiments, the replication agent 356 is
configured to instantiate one or more of the threads 358A, 358B. In
certain embodiments, the threads 358 are instantiated and/or
particularized based on pairing or correlation information received
from a management component and/or based on certain system
configuration information (e.g., available replication volumes),
data path information, the type of information in the transferred
data set, combinations of the same and the like. For example, the
replication agent 356 may instantiate one or more threads 358 that
correlate certain data types with certain data volumes and may
specify primary and alternate data paths.
[0133] Once instantiated, the threads 358 process and/or traverse
the replication log(s) 352 until a consistency point is encountered
(Block 830). In certain embodiments, when reaching a consistency
point, the thread 358 stops scanning the replication log 352 and
notifies the replication agent 356 that the thread 358 has reached
the consistency point (Block 835).
[0134] In certain embodiments, once all active threads 358
associated with traversing the replication logs 352 have notified
the replication agent 356 that a consistency point has been
reached, the replication process 800 moves to Block 840. At this
point, the replicated data stored in the replication volumes 116A,
116B preferably represents a known good state of the application
108.
[0135] At Block 840, the replication agent 356 suspends further
operation by the threads 358. For instance, the replication agent
356 may suspend data writes to the destination volumes 116A, 116B.
At this point, the replication process 800 proceeds with Block 845,
wherein one or more storage operations (e.g., snapshots), as
described in more detail above, may be performed on the replicated
data.
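The coordination of Blocks 830-845 might be condensed into the C sketch below; the log-traversal stubs and the fixed two-thread count are illustrative assumptions mirroring threads 358A and 358B:

    /* Each thread replays its replication log until it reaches a
     * consistency point (CRP), then notifies the replication agent;
     * once all threads have reported, writes are suspended and a
     * snapshot may be taken. */
    #include <pthread.h>
    #include <stdbool.h>

    enum { N_THREADS = 2 };                  /* e.g., threads 358A, 358B */

    static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int threads_at_crp = 0;
    static int entries_left[N_THREADS] = { 3, 3 };  /* fake log contents */

    static bool next_entry_is_crp(int id) { return entries_left[id] == 0; }
    static void replay_next_entry(int id) { entries_left[id]--; }

    static void *log_thread(void *arg)
    {
        int id = *(int *)arg;
        while (!next_entry_is_crp(id))       /* Block 830: traverse log */
            replay_next_entry(id);

        pthread_mutex_lock(&lk);             /* Block 835: notify agent */
        if (++threads_at_crp == N_THREADS)
            pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lk);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N_THREADS];
        int ids[N_THREADS] = { 0, 1 };
        for (int i = 0; i < N_THREADS; i++)
            pthread_create(&t[i], NULL, log_thread, &ids[i]);

        pthread_mutex_lock(&lk);
        while (threads_at_crp < N_THREADS)   /* wait for every thread */
            pthread_cond_wait(&cv, &lk);
        pthread_mutex_unlock(&lk);

        /* Block 840: thread writes are now suspended.
         * Block 845: snapshot volumes 116A, 116B here. */
        for (int i = 0; i < N_THREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }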
[0136] As discussed above, one of the advantages of the embodiments
of the data replication systems disclosed herein is that such
systems are capable of translating information intercepted by a
filter driver on a first (source) system into information that is
suitable for replay (e.g., replication) on a second (destination)
system. In certain embodiments, however, the identification of
files or directories in the source system may not be suitable for
use with the directory structure of the destination system.
[0137] For example, in UNIX-based systems, such as SOLARIS and
LINUX, file system operations are generally identified as
operations on "inodes" (or "vnodes") such that files are referenced
by a unique inode number and/or by a combination of one or more
directory inode numbers and a short name. Such systems often
utilize file name or pathname translation algorithms to implement a
user-level hierarchical view of the file system.
[0138] Such usage of inodes and short names, however, is not
conducive to replaying data modification operations on a second
system, such as occurs in the data replication systems disclosed
herein. That is, a path having one or more inodes and/or short
names does not provide a destination system with the appropriate
information for performing the replicated data modification
operation.
[0139] On certain operating systems (e.g., SOLARIS 10, LINUX 2.6)
pathname translation may sometimes be performed within the
operating system kernel by traversing a directory name lookup cache
(DNLC) backwards. Using such translation systems in the data
replication environment, however, may yield concurrency issues if
certain locking processes are not performed. For instance, in order
to ensure that other threads or processes do not rename one of the
components of a file's absolute path between the time that the
thread computes the absolute path and the time that a relevant log
entry is emitted, the DNLC would need to be locked against updates
from other threads during that period of time. Having this central
lock on the DNLC, however, may impose severe performance penalties
on the entire operating system.
[0140] FIG. 9 illustrates a block diagram of an exemplary
embodiment of a data path 900 usable to generate journal entries.
Portions of the data path 900 can be configured to more efficiently
perform pathname translation in a data replication system. For
example, in certain embodiments, the data path 900 is
advantageously configured to convert inode numbers (such as those
used inside the kernel driver and/or associated virtual file system
handlers) of a source system into absolute file pathnames to be
used on one or more replication systems. In certain embodiments,
all or a portion of the pathname translation is advantageously
implemented in the application space external to the kernel space
(e.g., in "userland"), thereby reducing potential loads on the
source system.
[0141] As shown, the data path 900 comprises a filter driver 910.
In certain embodiments, the filter driver 910 is configured to
monitor data management operations, such as data write operations
or file attribute modification operations, associated with a
computer application executing on a source computer. For instance,
such operations may comprise changes to data in a production level
memory. Examples of embodiments of filter drivers usable with the
data path 900 are described in more detail herein.
[0142] The filter driver 910 is further configured to populate a
queue 912 with log entries, or "raw" journal entries, related to
detected data modification operations from the application. In
certain embodiments, the log entries generated by the filter driver
910 are each associated with an inode that identifies to which
directory and/or file on the source storage device the associated
data modification was directed. The queue 912 is configured to
store the log entries until they are processed by a driver thread
(or process) 914. In certain embodiments, the queue 912 is
implemented in volatile memory on the source system.
[0143] The queue 912 forwards the log entries to the driver thread
914. In certain embodiments, the driver thread 914 polls the queue
912 for log entries newly generated by the filter driver 910. The driver
thread 914 subsequently stores the log entries in a buffer 916. In
certain embodiments, the buffer 916 may be labeled a "raw" buffer
in that it is configured to store "raw" log entries, which were
generated by the filter driver 910 and/or which do not yet have an
absolute file pathname.
[0144] In certain embodiments, the buffer 916 is a memory-based
queue for storing the log entries until processed by a database
thread (or process) 918. In certain embodiments, the buffer 916
advantageously facilitates and/or expedites the unloading of raw
records from expensive driver memory to swappable application
memory. For instance, the buffer 916 may comprise an application
level-buffer of a size between approximately 40 megabytes and
approximately 60 megabytes. In certain embodiments, the buffer 916
is advantageously implemented as a first-in first-out buffer.
[0145] In certain embodiments, the database thread 918 is
advantageously capable of performing inode-to-pathname translation
for each of the log entries in the buffer 916. After performing the
translation, the database thread 918 may send the log entry (with
the absolute file pathname instead of the inode entry) to a desired
destination, such as a replication system, for further
processing.
[0146] In certain embodiments, the database thread 918 is
configured to access a pathname database 920 to enable the thread
918 to perform pathname translation. The pathname database 920
advantageously stores information that associates one or more
inodes or short names with an absolute file pathname. In yet other
embodiments, the pathname database 920 may comprise other means or
data for performing pathname translation, including, but not
limited to, a flat table, customized code, combinations of the same
or the like.
[0147] In certain embodiments of the invention, accessing the
pathname database 920 introduces delay into the data path 900. For
example, at certain points in the replication process, the filter
driver 910 may generate log entries at a quicker pace than the
pathname translations being performed by the database thread 918.
For instance, high activity disk lookups in the database 920 for
each log entry may require more time than the generation of the log
entries by the filter driver 910.
[0148] In such embodiments, the buffer 916 is advantageously
capable of adapting itself to the speed of the database thread 918.
For example, when the lookups by the database thread 918 are
relatively fast, the buffer 916 does not introduce significant
delay into the data flow (e.g., relatively no performance
degradation due to the buffer 916). Thus, the buffer 916 may be
advantageously sized to be relatively transparent to the data
stream (e.g., has a small footprint). However, when the database
lookups begin to slow down, the buffer 916 is able to store
multiple log entries until the database thread 918 is able to catch
up.
[0149] Other mechanisms may be used to prevent user applications
from over-running the data path components (e.g., the queue 912,
the buffer 916, etc.). For example, in some cases user processes
can generate input/output so fast that such components overflow and
start swapping. In such a case, the filter driver 910 (or other
appropriate component, such as the driver thread 914) may throttle
the input/output by introducing small delays into the input/output
path. For example, the filter driver 910 may lengthen the delays
when an in-memory queue maintained by the filter driver 910
approaches a preconfigured limit. Where the input/output throttling
does not remedy the situation, and overflow still occurs, the
system may abort and reinitialize the replication process.
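One hedged sketch of such throttling appears below; the 75% threshold and the linear delay scaling are assumptions rather than parameters taken from the text:

    /* Introduce a small delay into the input/output path, lengthening
     * it as the in-memory queue approaches its configured limit. */
    #include <stddef.h>
    #include <unistd.h>

    static void throttle_io(size_t queue_len, size_t queue_limit)
    {
        if (queue_len * 4 < queue_limit * 3)
            return;                          /* below ~75% full: no delay */

        /* Delay grows with queue fullness, capped near 10 ms. */
        useconds_t delay_us = (useconds_t)(10000 * queue_len / queue_limit);
        usleep(delay_us);
    }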
[0150] In certain embodiments, the database lookups by the database
thread 918 may become so time intensive that the maximum storage
capacity of the buffer 916 is reached. In such embodiments, the
buffer 916 is configured to provide disk swapping functionality to
avoid overflow of the buffer 916, which may result in memory
problems and/or aborting replication. For instance, as shown in
FIG. 9, the buffer 916 may store excess log entries in a folder in
memory 922. In certain embodiments, the memory 922 may comprise a
disk and/or may be located on the storage device of the source
machine.
[0151] FIG. 10 illustrates an embodiment of a pathname database 920
of the data path 900 of FIG. 9. In particular, the pathname
database 920 may be advantageously accessed by the database thread
918 when determining an absolute file pathname for one or more log
entries.
[0152] The illustrated pathname database 920 is configured for
inode-to-pathname translation, such as for a UNIX-based system. In
particular, the pathname database 920 includes three columns: a
directory inode (or parent inode) column 1022, a short name column
1024 and an entry inode column 1026. In yet other embodiments, as
described in more detail below, the inode information in the
database 920 can be replaced with FIDs.
[0153] In certain embodiments, each inode in a UNIX-based system is
recorded as an entry in the pathname database 920. For instance,
FIG. 10 illustrates a system having four inodes, each having a
single entry in the entry inode column 1026 and having a value of
"1" through "4." The corresponding short name column 1024
identifies the short name of the file or folder associated with the
particular inode. For instance, entry inode "4" identifies a folder
or file with the short name of "user," while entry inode "1"
identifies a root directory. The directory inode column 1022, or
parent inode column, identifies the inode of the parent directory
to the particular entry inode. For instance, entry inode "3," which
has a short name of "file," is a child of the folder with an inode
of "2."
[0154] As can be seen from the illustrated pathname database 920,
when the database thread 918 receives a log entry with a particular
inode, the database thread 918 is able to access the pathname
database 920 and construct an absolute file pathname using the
information stored therein for transmission to the destination
system.
[0155] FIG. 11 illustrates an embodiment of a simplified pathname
translation process 1100, such as may be performed by the database
thread 918 of FIG. 9 in conjunction with the pathname database 920
of FIG. 10. For example, the pathname translation process 1100 may
be used to translate an inode to a pathname, such as an absolute
file pathname to be used by a destination system in replicating
data.
[0156] As shown, the translation process 1100 begins at Block 1105,
wherein the database thread 918 receives a log entry to be
processed. For example, with reference to FIG. 9, the database
thread 918 may retrieve the log entry from the buffer 916. In certain
embodiments, the log entry preferably represents a data
modification operation associated with a particular application on
the source system.
[0157] At Block 1110, the database thread 918 identifies the inode
associated with the particular operation represented by the log
entry. For instance, the inode may represent a file or folder to
which data is to be written. In other embodiments, the inode in the
log entry may identify a file name to be modified or other data or
file modification operation.
[0158] At Block 1115, the database thread 918 accesses the pathname
database 920 to acquire information for translating the inode to an
absolute file pathname. In particular, the database thread 918
searches the entry inode column 1026 for an entry that corresponds
to the value of the log entry inode. Once the corresponding inode
entry is found, the database thread 918 determines (and stores) the
associated short name from the short name column 1024 (Block
1120).
[0159] The translation process then proceeds with Block 1125. If
the subject inode does not correspond to the root directory ("/"),
the database thread 918 identifies from the directory inode 1022
the inode of the parent directory (Block 1130). The database thread
918 then searches the entry inode column 1026 for the parent
directory inode (Block 1135) and adds the short name associated
with the parent directory inode to the absolute file pathname
(Block 1140).
[0160] The translation process 1100 then returns to Block 1125 to
repeat the lookups and construction of the absolute file pathname
until the database thread 918 reaches the root directory. Once the
database thread 918 reaches the root directory, the database thread
918 stores the fully translated file pathname with the associated
log entry (Block 1145), and the translation process 1100
terminates.
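The loop of Blocks 1115-1145 might be rendered as the self-contained C sketch below, in which a small in-memory array stands in for the pathname database 920 and mirrors the example values of FIG. 10:

    /* Walk parent inodes to build an absolute pathname, prepending
     * each short name until the root directory is reached. */
    #include <stdio.h>
    #include <string.h>

    struct db_row { int parent; const char *short_name; int inode; };

    static const struct db_row db[] = {
        { 0, "/",    1 },   /* root directory */
        { 1, "dir",  2 },
        { 2, "file", 3 },
        { 2, "user", 4 },
    };

    static const struct db_row *lookup(int inode)
    {
        for (size_t i = 0; i < sizeof db / sizeof db[0]; i++)
            if (db[i].inode == inode)
                return &db[i];
        return NULL;
    }

    static int translate(int inode, char *out, size_t outsz)
    {
        char buf[512] = "";
        const struct db_row *row;

        while ((row = lookup(inode)) != NULL && row->inode != 1) {
            char tmp[512];
            snprintf(tmp, sizeof tmp, "/%s%s", row->short_name, buf);
            strcpy(buf, tmp);             /* prepend short name (Block 1140) */
            inode = row->parent;          /* move to parent (Block 1130) */
        }
        if (row == NULL)
            return -1;                    /* inode not in database */
        snprintf(out, outsz, "%s", buf[0] ? buf : "/");
        return 0;
    }

    int main(void)
    {
        char path[512];
        if (translate(4, path, sizeof path) == 0)
            printf("%s\n", path);         /* prints /dir/user */
        return 0;
    }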
[0161] For exemplary purposes, the translation process 1100 will
now be described with reference to a data write command "vop_write
(4, DATA)" and the values illustrated in the pathname database of
FIG. 10. To begin the translation process, the database thread 918
receives the log entry representing the command "vop_write (4,
DATA)" (Block 1105) which corresponds to writing "DATA" to inode
"4" on the source system (Block 1110).
[0162] The database thread 918 then accesses the pathname database
920 and searches the entry inode column 1026 for a value of "4"
(Block 1115). Upon finding "4" in the entry inode column 1026, the
database thread 918 determines from the short name column 1024 that
the short name corresponding to inode "4" is "user" (Block
1120).
[0163] Because inode "4" does not correspond to the root directory
(Block 1125), the database thread 918 identifies from the directory
inode column 1022 that the parent directory inode of inode "4" is
inode "2" (Block 1130). The database thread 918 then returns to
search the inode entry column 1026 for the inode value of "2"
(Block 1135), determines that the short name for inode "2" is
"dir," and adds "dir" to the file pathname (Block 1140).
[0164] Because inode "2" does not correspond to the root directory
(Block 1125), the database thread 918 identifies from the directory
inode column 1022 that the parent directory inode of inode "2" is
inode "1" (Block 1130). The database thread 918 then searches the
inode entry column 1026 for the inode value of "1" (Block 1135) and
determines that the inode "1" corresponds to the root directory
("/") (Block 1140).
[0165] Now that the database thread 918 has encountered the root
directory (Block 1125), the database thread 918 stores the
translated file pathname (i.e., "/dir/user") with the subject log
entry, and the translation process 1100 terminates.
[0166] It will be understood that the translation process 1100 may
differ in other embodiments of the invention in order to suit the
needs of the particular system(s) involved. For instance, the
translation process 1100 may be used to translate particular inodes
into file pathnames shorter than an absolute file pathname, such as
for example a relative pathname. In yet other embodiments, the
process can use FIDs in place of inodes to construct absolute file
names of files on the source system.
[0167] In certain embodiments, the three-column database 920
provides significant advantages over a flat two-column table (e.g.,
with an inode column and an absolute file pathname column). For
instance, the three-column database structure of the pathname
database 920 may use less memory than the two-column table and/or
expedite folder rename operations. As an example, when a name of a
folder is modified, the three-column database structure allows for
a single lookup and modification (e.g., modifying the short name
column 1024 entry associated with the entry inode column 1026 entry
of the subject inode), while the two-column table would require
multiple lookups and modifications corresponding to each entry
having an absolute file pathname that includes the folder to be
renamed.
[0168] As discussed above, in certain embodiments, the pathname
database 920 is maintained in userland (e.g., an application space
external to the kernel space). In such embodiments, the pathname
database 920 may be advantageously managed and/or accessed by
userland code without impacting the resources of the operating
system kernel or other applications.
[0169] In certain embodiments, the pathname database 920 may be
initially populated during an initialization period. For instance,
a snapshot may be taken to produce a static image of the file
system of the source system. The pathname database 920 may then be
populated based on the snapshot. As subsequent changes are made to
file names of the source system, corresponding changes are made in
the pathname database 920 in order to maintain synchronization. In
yet other embodiments, as discussed in more detail below, the
pathname database 920 can be populated based on a scan of a live
source file system.
[0170] In yet other embodiments, the pathname database 920 may be
specific to the files and/or folders of one or more particular
applications. For example, the pathname database 920 may include
inodes, short names and related information only for those inodes
affected by a single application (e.g., MICROSOFT EXCHANGE). In yet
other embodiments, multiple pathname databases 920 may be used.
[0171] As can be appreciated in the translation systems and methods
described with reference to FIGS. 9-11, it can be important for the
file system on the destination system to be in a synchronized state
with the pathname database 920; otherwise the replication system
can encounter a file system error (e.g., a file directory does not
exist) when attempting to apply a journal entry on the destination
system. In certain embodiments, this error can result in
replication failure and require a resynchronization of both the
source and destination systems via the initial seeding or
synchronization process discussed above.
[0172] Moreover, as discussed, certain embodiments of the initial
synchronization process include performing an initial file system
scan and populating the pathname database 920. As mentioned, this
can be performed by scanning a file system snapshot of the source
system, which obtains a static image of the file system. In
particular, the replication system can take a consistent snapshot
in which, for the duration of the snapshot (e.g., the file system
is flushed, frozen, and snapped by file system driver), no
namespace changing operations are allowed, such as renames of a
directory, deletes, and/or creates. If the replication system
detects that some of these operations have occurred during the
synchronization process, the replication system may need to delete
the snapshot and re-perform the scan. In certain circumstances,
especially in active file systems, the replication system can get
trapped in a virtually infinite loop trying to take the snapshot
over and over again due to the constantly changing files.
[0173] In yet other embodiments, the source file system may not
support the taking of consistent snapshots or may require
additional drivers to be installed, thereby complicating the
snapshot process. Moreover, taking snapshots of a root file system
can introduce even further complications. For example, in some
cases a root file system is allocated on a file system on which a
snapshot cannot be taken, or on which it is difficult to take a
snapshot. For instance, in Linux based systems the root file system
is often located on a separate partition, outside of the Linux
Volume Manager (LVM). Moreover, system directories (e.g., /etc,
/var/tmp, /tmp) are often not sub-divided and are therefore all
located on the same root file system. In such cases, it can be
difficult to take a snapshot because modifications to these
directories occur on an on-going, continual basis.
[0174] Thus, certain embodiments of the invention provide systems
and methods for producing a consistent image of a live source file
system in a pathname, or file name, translation database and/or on
a destination system without requiring a static image of the file
system. Such embodiments can advantageously allow for changes to
the source file system to occur while other portions of the file
system are being scanned, thereby expediting the initial seeding of
the file name database and/or destination system and replaying of
intercepted data changes on the replication system.
[0175] For example, certain embodiments of the invention in a UNIX
environment utilize FIDs for performing snapless synchronization
and replication in a CDR system. In certain embodiments, each FID
comprises a sequence of between eight and sixteen bytes that
uniquely identifies a file or folder in a file system.
[0176] In certain embodiments, the FIDs are introduced as part of a
UNIX kernel for supporting a stateless implementation of Network File
System (NFS) version 3 or below. For example, the NFS 3 file system
can access files and directories via handles, in which the file
system encodes all relevant information that it needs to later
translate the FID to the corresponding file or directory inode.
[0177] In certain embodiments, NFS does not interpret contents of
handles, but it uses the contents to directly refer to the files
and directories of interest. For instance, these handles can
contain FIDs, file/directory inode numbers, generation numbers or
the like. Moreover, file systems that are NFS-compatible (i.e., can
be exported via NFS) can support the use of FID scanning, as
discussed in more detail below.
[0178] The use of FIDs can provide several advantages both during
scanning and during replication, such as for improving writes to
the destination system. For instance, systems can address
subdirectories and carry on with scanning even while the user makes
changes to the file system, including such changes as renaming
parent folders. Moreover, because the file system translates FID
information to locate files on a storage device (e.g., mapping of
FIDs to vnodes), the FIDs can be used to identify and access files
or folders that are renamed or moved in a file system.
[0179] Another advantage is that generation IDs that are encoded
into FIDs give additional robustness. For instance, if a file or
directory is deleted and then recreated elsewhere during scanning,
the file system may reuse an inode number. However, file systems
using FIDs generally increment the generation ID portion of the FID
with each new file system object, thereby resulting in an absolutely
unique FID. Thus, FIDs are unique both in space and in time, and
using them can reduce the chance of accidentally confusing an old
file system object with a recreated one.
[0180] FIG. 12 illustrates a block diagram of another embodiment of
a pathname or file name translation system 1200 that is configured
to more efficiently perform file name translation in a data
replication system. For example, in certain embodiments, the
translation system 1200 is advantageously configured to associate
FIDs, which are generally used primarily inside the kernel, with
file short names and store the associations in a database. The
system 1200 can be further configured to convert FIDs referencing
files on a source system into absolute file names to be used on one
or more replication systems. This translation can be advantageously
implemented in the application space external to the kernel space
(e.g., userland), thereby reducing potential loads on the source
system.
[0181] As shown, the system 1200 includes the file system 1202 of
the source computing system. In certain embodiments, the file
system 1202 comprises a UNIX environment implementing NFS. In other
embodiments, the file system 1202 can comprise an NFS-compatible
file system.
[0182] In communication with the file system 1202 is a scanning
module 1204. In certain embodiments, the scanning module 1204 is
configured to scan a live file system (e.g., file system 1202), to
build a database of FIDs and associated short names that reflect
the structure of the file system 1202, such as during an initial
seeding or synchronization phase of data replication. For instance,
the scanning module 1204 can advantageously populate the database
without performing a snapshot of the source file system 1202.
[0183] In certain embodiments, the scanning module 1204 can
comprise one or more filter drivers, such as file system drivers
that execute on a computing device, such as the source computing
device. In certain embodiments, the scanning module 1204 can
comprise one or more data agents 236. In yet other embodiments, the
scanning module 1204 can comprise a plurality of modules, either in
software or hardware, that are configured to perform the functions
described herein.
[0184] In particular, the scanning module 1204 maintains a FID
queue 1206 to assist with producing a consistent image of the live
source file system 1202. For instance, the FID queue 1206 can store
a plurality of FIDs processed by the scanning module 1204 to
populate a database. In certain embodiments, the queue 1206
comprises a first-in first-out (FIFO) buffer or other like
memory.
[0185] The system 1200 further comprises a database thread 1208
configured to translate FIDs to absolute file names for replaying
operations (e.g., as stored in a replication log file) on a
destination system. For example, after performing the file name
translation, the database thread 1208 may send a log entry (with
the absolute file name instead of the FID) to a desired
destination, such as a replication system, for further
processing.
[0186] In certain embodiments, the database thread 1208 is
configured to access a file name database 1210 to enable the thread
1208 to perform file name translation. The file name database 1210
advantageously stores information that associates one or more FIDs
with short names and directory information. For instance, the file
name database 1210 can be similar to the pathname database 920
illustrated in FIGS. 9 and 10, with inode information of the
pathname database 920 being replaced with FID information. In yet
other embodiments, the file name database 1210 may comprise other
means or data for performing file name translation, including, but
not limited to, a flat table, customized code, combinations of the
same or the like.
[0187] FIG. 13 illustrates a flowchart of an exemplary embodiment
of a process 1300 for scanning a live file system. In particular,
the process 1300 can be advantageously used to scan a source file
system in a replication environment, such as a CDR environment,
without performing a snapshot on the source file system. Such file
system scanning can be less sensitive to user changes during the
scan and is able to interleave journal entries generated after the
scan with a file name database (e.g., database 1210) populated as
part of the scan.
[0188] For example, the process 1300 may be performed on the file
system 234 of the source system 102 of FIG. 2. For exemplary
purposes, the process 1300 will be described with reference to the
components of the file name translation system 1200 of FIG. 12.
[0189] As shown, the process 1300 begins with Block 1305 by
establishing an empty queue, such as queue 1206, for holding FIDs
during scanning of the file system 1202. At Block 1310, the process
1300 also creates an empty file name database 1210, as described
above.
[0190] The scanning module 1204 then adds the FID of the source
file system's root directory to the queue 1206 (Block 1315) and
obtains a file descriptor by opening the root directory in
read-only (RO) mode (Block 1320). The scanning module 1204 can
obtain the file descriptor by issuing an open( ) call, for example.
The file descriptor may comprise an integer or other appropriate
identifier in userland, and can be used as a file handler or file
identifier for input/output. Behind the scenes in the kernel, the
file descriptor number can be associated with the corresponding
file object. Thus, when a userland application writes some data to
a file descriptor, the kernel is aware of what object the data
should be written to. At this point, after obtaining the file
descriptor, the process 1300 begins a recursive procedure for
stepping through each of the directories in the file system 1202
and populating the database 1210 with information usable to
recreate a consistent image of the file system 1202 on the
destination system.
[0191] As shown, at Block 1325, the scanning module 1204 obtains
the next FID from the queue 1206. In the initial pass through the
process 1300, the FID will generally be the root directory FID. At
Block 1330, the scanning module 1204 asks the filter driver to
associate, in the kernel, the appropriate previously obtained file
descriptor with the current FID. In certain embodiments, the
scanning module 1204 invokes an ioctl( ) API (e.g., FLR_OPEN_BY_FID
(fd, FID)) that receives both a pre-opened file descriptor and an
FID. In response, a file system filter driver then converts the FID
to a file system vnode via a file-system-provided API and inserts
the obtained vnode into the handler or file structure corresponding
to the passed file descriptor. Once this has completed, the
application can then access the file or directory by making usual
system calls and passing them the modified file descriptor.
[0192] At Block 1331, the scanning module 1204 scans the directory
corresponding to the FID using the associated file descriptor. For
example, the scanning module 1204 steps into the directory
associated with the current FID, such as through invoking the
fchdir(fd) command, and reads each of the direct directory
children, such as through the opendir(".") and readdir( ) commands.
At Block 1335, for each detected subdirectory, the scanning module
1204 appends the FID associated with the subdirectory to the end of
the queue 1206 for further analysis.
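For concreteness, the following C sketch outlines Blocks 1320 through 1335 under several assumptions: the dummy descriptor is obtained via open("/", O_RDONLY) at Block 1320; FLR_OPEN_BY_FID is the exemplary driver ioctl named above (the numeric request code used here is an invented placeholder); and queue_pop( ), queue_push( ) and fid_of( ) are hypothetical helpers for the FID queue 1206 and for recovering a child's FID.

    #include <dirent.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Exemplary driver ioctl named in the text; the numeric request
     * code is an invented placeholder. */
    #define FLR_OPEN_BY_FID 0x4008F101
    typedef uint64_t fid_t;

    extern int   queue_pop(fid_t *fid);      /* assumed FIFO of pending FIDs */
    extern void  queue_push(fid_t fid);
    extern fid_t fid_of(const char *name);   /* assumed child-FID lookup */

    /* One pass of the FID-driven scan (Blocks 1325-1335). dummy_fd is
     * assumed to have been obtained via open("/", O_RDONLY) at Block
     * 1320 and is reused for every directory. */
    int scan_one_directory(int dummy_fd)
    {
        fid_t fid;
        if (queue_pop(&fid) != 0)
            return 1;                        /* queue empty: scan complete */

        /* Ask the filter driver to attach the vnode for this FID to the
         * pre-opened descriptor (Block 1330). */
        if (ioctl(dummy_fd, FLR_OPEN_BY_FID, &fid) < 0)
            return -1;

        if (fchdir(dummy_fd) < 0)            /* step into the directory */
            return -1;

        DIR *dir = opendir(".");
        if (dir == NULL)
            return -1;

        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            if (!strcmp(ent->d_name, ".") || !strcmp(ent->d_name, ".."))
                continue;
            if (ent->d_type == DT_DIR)       /* subdirectory: scan later */
                queue_push(fid_of(ent->d_name));
            /* Each child (file or directory) would also be inserted into
             * the file name database here (Block 1340). */
        }
        closedir(dir);
        return 0;
    }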
[0193] At Block 1340, for each of the immediate directory children,
the scanning module 1204 also populates the file name database 1210
with the FID and relative (short) name information. For instance,
the scanning module 1204 may insert a row in the database 1210 that
includes a parent directory FID, a short name of the file or folder
and the entry's FID (see, e.g., FIG. 10).
[0194] Moving to Block 1345, the scanning module 1204 determines if
there are additional FIDs stored in the queue 1206. If so, the
scanning module 1204 returns to Block 1325 to obtain the next FID
from the queue 1206 and to step through the immediate children of
the directory associated with that FID. In certain situations, with
the continuously changing file system and the possibility that the
same directory is scanned more than once, the scanning module 1204
can further resolve possible structural inconsistency problems
between the scan list of direct children and the contents of the
file name database 1210 (Block 1346). This technique is described
in further detail below (e.g., with respect to the process 1400 of
FIG. 14).
[0195] When there are no additional FIDs stored in the queue 1206,
the process 1300 concludes and monitors log or journal entries for
any changes to the source file system directories (Block 1350).
[0196] As can be seen, with the process 1300, the scanning module
1204 does not address directories or files by absolute file names.
Rather, the scanning module 1204 scans each of the file system
directories individually by addressing each directory by its unique
FID, and by populating the database 1210 with the FIDs of children,
along with their relative (short) names. UNIX systems typically do
not allow direct userland access to file system objects using FIDs.
Thus, the process 1300 generally constructs a dummy file descriptor
that is initially associated with the root directory. The filter
driver then locates desired file or directory objects (e.g.,
children files or directories) by their FID and associates those
objects with the dummy file descriptor. Thus, a userland
application can then, e.g., use a "read directory" operation to
obtain the list of children.
[0197] One of the benefits of the snapless scanning process 1300 of
FIG. 13 is that the file system 1202 can undergo changes by the
user during scanning without requiring a rescan of the entire file
system. However, in certain circumstances, although the user's
changes are intercepted by a file system driver and are appended to
a change journal for further replay, and although the scanning
module 1204 is made less sensitive to the user's changes by using the
FID-driven scan, replication processes disclosed herein can still
encounter problems in the replicating phase when trying to replay
collected journal entries on the destination system and/or when
performing database lookups.
[0198] For instance, in the case of snapshot-based scanning (e.g.,
FIGS. 9-11), the image on the destination system and the contents
of the pathname database 920 are populated based on a point-in-time
replica of the entire file system (i.e., the snapshot). Thus, all
pending journal entries can be applied to the database 920 and/or
replayed on the destination system following the initial scan in the
order that they were generated because, logically, the journal
entries are generated after the snapshot.
[0199] However, in the case of snapless scanning (e.g., FIGS. 12
and 13), each directory is scanned at a different time and, likely,
during user modifications to different portions of the file system.
Thus, it becomes important to know if a particular folder or
directory was scanned before or after a particular journal entry
was generated. That is, if a journal entry associated with the
contents of a folder is generated before the scanning of the
folder, it may not be appropriate to apply the journal entry to the
folder. Otherwise, there would be a risk of introducing
inconsistencies into the file name database 1210, which can require
the rescanning of one or more directories in order to repopulate
the database 1210.
[0200] To address such risks of inconsistencies, in certain
embodiments of the invention, when scanning a particular directory
during an FID-driven scan, systems and methods can query the file
system driver for its current journal sequence number. The sequence
number is then stored in the file name database 1210 or other
location along with the identification of the current directory's
children. When the system is to apply a stream of journal entries
to the database 1210, the system can ignore all journal entries
that were generated before the subject directory was scanned, as
identified by the stored journal sequence number.
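A minimal sketch of this filtering rule, assuming a journal_entry record carrying the driver-assigned sequence number and a scan_seq_of( ) lookup of the sequence number stored for a directory at scan time (both names are hypothetical):

    #include <stdint.h>

    typedef struct {
        uint64_t seq;        /* journal sequence number of the entry */
        uint64_t parent_fid; /* FID of the directory the entry affects */
        /* ... operation type, short names, etc. ... */
    } journal_entry;

    /* Assumed lookup of the sequence number stored when parent_fid was
     * scanned (Block 1420). */
    extern uint64_t scan_seq_of(uint64_t fid);

    /* An entry is applied only if it was generated after the directory
     * it touches was scanned; older entries are already reflected in
     * the database and are ignored. */
    int should_apply(const journal_entry *e)
    {
        return e->seq > scan_seq_of(e->parent_fid);
    }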
[0201] FIG. 14 further illustrates a flowchart of an exemplary
embodiment of a scan routine 1400 for obtaining both structural
information from a file system (e.g., which FID represents a child
of which parent directory FID) and the time at which the particular
relationship was observed. In certain embodiments, this information
is stored in the same database. In particular, the scan routine
1400 can be implemented as part of the scan process 1300 (e.g., at
Blocks 1325 to 1340) to improve FID-driven scanning and preserve
consistency between the source file system image and the file name
database. For exemplary purposes, the process 1400 will be
described with reference to the components of the system 1200 of
FIG. 12.
[0202] The scanning module 1204 obtains the first directory FID
from the queue 1206 and associates a file descriptor with the FID,
such as via an ioctl( ) call (e.g., a FLR_OPEN_BY_FID(fd,FID)
command). For example, these actions may be performed in the manner
described above with respect to FIG. 13 (Blocks 1325 and 1330). At
Block 1415, the routine 1400 steps into the directory of the
current FID. For example, as discussed above, this can take place
by the scanning module 1204 invoking the fchdir(fd) command.
[0203] At Block 1420, the scanning module 1204 obtains the current
journal sequence number. In certain embodiments, the sequence
number is assigned by a file system filter driver to each newly
generated journal entry, being incremented with elementary changes
made to the file system. In certain embodiments, the scanning
module 1204 obtains the sequence number from the filter driver; the
sequence number can be used as a measure of time to advantageously
coordinate the file system scan results with the stream of journal
entries generated by the user's changes to file system data.
[0204] Upon obtaining the journal sequence number, the scanning
module 1204 begins monitoring the current directory for changes
(Block 1425). For instance, the scanning module can invoke an
ioctl( ) call that takes the FID of the current directory and
initiates collecting statistics for the directory in the driver. In
certain embodiments, collecting the statistics comprises utilizing
a counter of namespace changing operations applicable to the
current directory and intercepted by the driver.
[0205] The scanning module 1204 then obtains the FID for each
immediate child in the directory (Block 1430), as discussed in more
detail above with respect to Block 1335. After processing each of
the immediate children in the current directory, the scanning
module 1204 determines if there were any structural changes to the
directory during the scan (Block 1435). For instance, the scanning
module 1204 could issue an ioctl( ) call that stops the collecting
of statistics (e.g., by the driver) and returns the number of
namespace changing operations that happened since Block 1425.
[0206] If there were structural changes to the directory during
that time, the routine 1400 assumes that the scan was not clean and
repeats the scan process for the current directory by returning to
Block 1425. On the other hand, if no changes are detected, the
routine continues on with the file scanning process. For example,
the routine proceeds to populate the file name database 1210 with
the FIDs of direct children of the current directory. This can
occur in the fashion described above with respect to the process
1300 of FIG. 13 (Block 1340). In certain embodiments, the routine
1400 also stores the scan sequence number obtained at Block 1420
for the current directory in the file name database 1210. For
example, the stored
sequence number can then be used during replication to apply
collected log entries, as described below with respect to FIGS.
15A-15B.
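The retry loop of Blocks 1420 through 1435 might look roughly as follows in C; the two statistics ioctls, their request codes, and the helper functions are all assumptions standing in for driver-specific details.

    #include <stdint.h>
    #include <sys/ioctl.h>

    typedef uint64_t fid_t;

    /* Hypothetical statistics ioctls; the request codes and semantics
     * are assumptions standing in for driver-specific details. */
    #define FLR_START_DIR_STATS 0x4008F102  /* begin counting namespace ops */
    #define FLR_STOP_DIR_STATS  0x4008F103  /* stop counting, return count */

    extern int      driver_fd;                 /* assumed filter driver handle */
    extern uint64_t current_journal_seq(void); /* assumed query (Block 1420) */
    extern int      scan_children(fid_t dir);  /* Block 1430 */
    extern void     db_store_children(fid_t dir, uint64_t scan_seq); /* Block 1340 */

    /* Rescan the directory until no namespace change intervenes, then
     * record its children together with the scan sequence number. */
    int clean_scan(fid_t dir)
    {
        for (;;) {
            uint64_t seq = current_journal_seq();                /* Block 1420 */
            if (ioctl(driver_fd, FLR_START_DIR_STATS, &dir) < 0) /* Block 1425 */
                return -1;
            if (scan_children(dir) < 0)                          /* Block 1430 */
                return -1;
            long changes = 0;
            if (ioctl(driver_fd, FLR_STOP_DIR_STATS, &changes) < 0) /* Block 1435 */
                return -1;
            if (changes == 0) {            /* clean scan: persist results */
                db_store_children(dir, seq);
                return 0;
            }
            /* structural change during the scan: repeat from Block 1425 */
        }
    }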
[0207] In certain situations, with the continuously changing file
system and the possibility that the same directory is scanned more
than once, the scanning module 1204 can further resolve possible
structural inconsistency problems between the scan list of direct
children and the contents of the file name database 1210. For
instance, the scanning module 1204 can request a rescan of
suspicious file system objects by appending their FIDs to the queue
1206 and/or by re-parenting to "null_fid" all database children of
the current directory that are not identified on the scan list.
[0208] In certain embodiments, for files that are on the scan list
but are not identified in the database 1210 as children of the
current directory, the routine 1400, such as through the database
thread 1208, can add rows corresponding to the files in the
database 1210. For subdirectories that are on the scan list but are
not identified in the database 1210 as children of the current
directory, the routine 1400 can determine if the database already
has an entry for the subdirectory. If so, and the subdirectory is
identified as a child of another directory, the database thread
1208 can re-parent the subdirectory to the current directory and
request a re-scan of the previous parent by obtaining its FID and
appending it to the queue 1206. Otherwise, the database thread 1208
can add a new row describing that the subdirectory is the child of
the current directory and can append the FID of the subdirectory to
the FID queue 1206, thereby ensuring that the child subdirectory
will be recursively scanned before the scan completes.
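One possible shape for this reconciliation of a scanned subdirectory is sketched below in C; all database and queue helpers are assumed, and the logic is illustrative rather than prescribed.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t fid_t;

    /* Assumed database and queue helpers. */
    extern bool  db_has_entry(fid_t fid);
    extern fid_t db_parent_of(fid_t fid);
    extern void  db_add_child(fid_t parent, fid_t child, const char *name);
    extern void  db_set_parent(fid_t child, fid_t new_parent);
    extern void  queue_push(fid_t fid);

    /* Reconcile one subdirectory found on the scan list against the
     * database; database children absent from the scan list would
     * separately be re-parented to a "null_fid" sentinel pending
     * rescan. */
    void reconcile_subdir(fid_t current_dir, fid_t child, const char *name)
    {
        if (!db_has_entry(child)) {
            /* Unknown subdirectory: record it and scan it recursively
             * before the overall scan completes. */
            db_add_child(current_dir, child, name);
            queue_push(child);
        } else if (db_parent_of(child) != current_dir) {
            /* Known under another parent: re-parent it here and rescan
             * the stale previous parent. */
            fid_t old_parent = db_parent_of(child);
            db_set_parent(child, current_dir);
            queue_push(old_parent);
        }
    }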
[0209] After the file name database 1210 is initially populated,
certain embodiments of the invention are configured to detect file
system changes on the source system and replicate such changes to
the file name database 1210 and, ultimately, to the destination
system. FIGS. 15A and 15B illustrate a flowchart of an exemplary
replication process 1500 for interleaving a stream of journal
entries with the results of the live file system scan in the
database 1210, such as generated by the process 1300 and/or routine
1400. For exemplary purposes, the process 1500 will be described
with reference to the components of the file name
translation system 1200 of FIG. 12.
[0210] In general, the process 1500 includes obtaining and
comparing sequence numbers of journal entries with scan sequence
numbers of respective FIDs in the database 1210. Based on this
comparison, the process 1500 determines whether or not to apply the
journal entry to the database 1210 and destination system, to
discard the journal entry, or to trigger an additional FID
scan.
[0211] The process 1500 begins at Block 1505 by obtaining the next
journal entry and its associated sequence number from the file
system driver, such as the filter driver 110, or a source log 244.
The process 1500 determines if the current journal entry is
associated with a rename (or move) operation of a file or
subdirectory from one parent directory to another (Block 1510). If
not, the process 1500 then determines if the current journal entry
is associated with a create or remove operation of a file or
subdirectory in a parent directory (Block 1515). If not, the
process 1500 returns to Block 1505 to obtain the next journal
entry.
[0212] If it is determined at Block 1515 that the journal entry is
associated with a create or remove operation, the scanning module
1204 and/or the database thread 1208 look up the parent directory's
FID in the database 1210 to obtain the FID's scan sequence number,
such as discussed with respect to Block 1420 (Block 1520). If the
sequence number of the journal entry is less than or equal to the
FID sequence number (Block 1525), the process 1500 disregards the
journal entry under the assumption that the journal entry was
generated before the scanning of the corresponding portion of the
file system (Block 1530). The process 1500 then returns to Block
1505 to obtain the next journal entry.
[0213] If the sequence number of the journal entry is greater than
the FID sequence number in the database 1210, the process 1500
updates the database 1210 to reflect or include the obtained
journal entry (Block 1534). The database thread 1208 then obtains
from the database 1210 the absolute file names of both the parent
directory and the created/removed entity (Block 1535) and, with
this information, sends the journal entry for replay on the
destination system (Block 1540). The process 1500 then returns to
Block 1505 to obtain the next journal entry.
[0214] If at Block 1510, the journal entry is associated with a
rename (or move) operation of a file or subdirectory from one
parent directory to another, the process 1500 moves to Block 1545
to obtain from the database 1210 the FIDs of both the source and
destination parent directories. If the journal entry sequence
number is greater than the sequence number associated with the scan
of the source directory (Block 1550), the process 1500 then
determines if the journal entry sequence number is also greater
than the sequence number associated with the scan of the
destination directory (Block 1555). If it is not, the database
thread 1208 removes the child from the source directory in the
database 1210 (Block 1560) and converts the journal entry from a
rename operation to a remove operation (Block 1565). The database
thread 1208 then sends the journal entry to the destination system
to remove the child from the source directory (Block 1570). The
process 1500 then returns to Block 1505.
[0215] If at Block 1555 the journal entry sequence number is
determined to be greater than the sequence numbers associated with
the scans of both the source and destination directories, the database
thread 1208 applies the journal entry to the database 1210 (Block
1575). The database thread 1208 further obtains from the database
1210 the absolute file names of the involved file system objects
(Block 1580) and sends the journal entry to the destination system
for replay (Block 1585). The process 1500 then returns to Block
1505.
[0216] If at Block 1550 it is determined that the journal entry
sequence number is less than or equal to the sequence number
associated with the scan of the source directory, the process 1500
further determines if the journal entry sequence number is greater
than the sequence number associated with the scan of the
destination directory (Block 1590). If so, the process 1500
recognizes that the FID of the object being moved is not in the
file name database 1210. That is, the source directory was scanned
after the rename was detected, and the destination directory was
scanned before the rename was detected, indicating that the
scanning module 1204 missed the moved file system object. In this
situation, the process 1500 repeats the file system scan beginning
with the FID of the object moved in the rename operation (Block
1592). The process 1500 then returns to Block 1505 to obtain the
next journal entry.
[0217] If at Block 1590 it is determined that the journal entry
sequence number is less than or equal to the sequence numbers
associated with the scans of the source and destination
directories, the process 1500 disregards the journal entry (i.e.,
the entry occurred before the scans of both the source and
destination directories) (Block
1595) and returns to Block 1505 to obtain the next journal
entry.
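Collecting Blocks 1545 through 1595 in one place, the rename handling can be summarized by the following hedged C sketch; the record layout and helper routines are assumptions, not the patent's required implementation.

    #include <stdint.h>

    typedef struct {
        uint64_t seq;      /* driver-assigned journal sequence number */
        uint64_t src_dir;  /* FID of the source parent directory */
        uint64_t dst_dir;  /* FID of the destination parent directory */
        /* ... names and other operation details ... */
    } rename_entry;

    extern uint64_t scan_seq_of(uint64_t dir_fid);          /* stored at Block 1420 */
    extern void apply_and_replay(rename_entry *e);          /* Blocks 1575-1585 */
    extern void convert_rename_to_remove(rename_entry *e);  /* Blocks 1560-1570 */
    extern void rescan_moved_object(rename_entry *e);       /* Block 1592 */

    /* Dispatch one rename entry according to Blocks 1545-1595. */
    void handle_rename(rename_entry *e)
    {
        int after_src = e->seq > scan_seq_of(e->src_dir);
        int after_dst = e->seq > scan_seq_of(e->dst_dir);

        if (after_src && after_dst)
            apply_and_replay(e);         /* rename postdates both scans */
        else if (after_src && !after_dst)
            convert_rename_to_remove(e); /* destination scan already saw it */
        else if (!after_src && after_dst)
            rescan_moved_object(e);      /* object missed by the scan */
        /* else: entry predates both scans; discard (Block 1595) */
    }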
[0218] In certain embodiments, the use of FIDs in file system
scanning and/or causing the associated filter driver to refer to
affected file system objects by FIDs can advantageously provide for
more efficient handling of write operations. For instance, written
data does not need to be journaled from the file system driver to
userland. Rather, the FID of the modified file and the
offset/length of the modified regions can be sent to the userland
application for use in reading the data directly from the file by
opening the file with the FID and by merging the modified byte
ranges.
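A sketch of this data-less journaling, reusing the same illustrative FLR_OPEN_BY_FID ioctl as above: only the FID, offset, and length are recorded, and the bytes are fetched with pread( ) when replication actually needs them. The record layout is an assumption.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define FLR_OPEN_BY_FID 0x4008F101   /* illustrative request code */

    /* Data-less journal record: only the location of the change is
     * captured, never the written bytes themselves. */
    typedef struct {
        uint64_t fid;     /* FID of the modified file */
        off_t    offset;  /* start of the modified byte range */
        size_t   length;  /* length of the modified byte range */
    } write_record;

    /* Fetch the modified bytes directly from the source file when the
     * data is actually needed for transfer to the destination. */
    ssize_t fetch_modified_bytes(int dummy_fd, const write_record *w, char *buf)
    {
        uint64_t fid = w->fid;
        if (ioctl(dummy_fd, FLR_OPEN_BY_FID, &fid) < 0)
            return -1;               /* e.g., the file was already deleted */
        return pread(dummy_fd, buf, w->length, w->offset);
    }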
[0219] This process can provide several benefits. For instance, not
piping written data from the driver to the userland application or
journal can provide significant improvements in performance. That
is, copying data first in the driver's memory, then passing the
data to the userland application and storing the data in the disk
cache can be quite expensive. By not journaling the actual data,
but obtaining the data when needed directly from disk, system
performance can be improved several times.
[0220] In yet other embodiments, the use of FIDs in combination
with a slight delay in the actual transfer of data to the
destination system can allow replication systems to accumulate a
list of changed byte ranges in memory. This can provide further
advantages in that the replication system can analyze the changed
bytes and optimize and/or improve replication of data to the
destination system.
[0221] For example, in certain embodiments, inventive systems and
methods can combine multiple write operations into a single write
operation based on the FIDs and byte ranges associated with
operations by one or more applications. For example, the journal
entry stream identifying the data operations intended for the
source file system can be modified to refer to FIDs instead of
inode numbers and to journal the offset and/or length of
overwritten byte ranges instead of actual data. This allows systems
and methods to obtain written data directly from disk, thereby
achieving a significant improvement in performance.
[0222] For instance, in certain embodiments, the file system filter
driver and/or data agent(s) monitoring data operations can collapse
repetitive writes to a single location into one write operation. In
yet other embodiments,
the file system driver can combine modified adjacent byte ranges
into a single write operation. In further embodiments, the file
system driver can read non-combinable byte ranges in the order of
increasing file offsets, thereby obtaining better performance from
the file system and during subsequent replication.
[0223] In certain embodiments, the file system driver and/or data
agent(s) can improve replication with respect to temporary files.
The phrases "temporary file" or "temporary data" are broad terms
and are used herein in their ordinary sense and include, without
limitation, data that is created by a program or application (e.g.,
editors and compilers) for some transitory purpose, but deleted
later, generally within a short period of time.
[0224] For instance, in conventional replication systems, when an
application creates a temporary file, the new contents of the
temporary file are generally replicated from the source to the
destination system. A brief time later, the REMOVE command is
replicated from the source system that deletes the transferred data
on the destination system, mimicking the manner in which the file
was created and removed by the application(s) on the source
system.
[0225] By introducing the slight delay in the replication process,
such as by accumulating a number of journal entries, inventive
systems and methods may encounter an error (e.g., a "no such file
or directory" or "file not found" error) from the file system when
attempting to read the contents of a temporary file when, within
the delay period, the temporary data has been removed from the
source system. As a result, the replication system does not send
the temporary data across to the destination system, as the file
system is not able to locate the deleted file by the FID.
[0226] FIG. 16 illustrates a flowchart of an exemplary process 1600
including a delay period as discussed above for improving
continuous data replication. In particular, the process 1600
addresses the analysis of multiple (e.g., two) data operations
received from one or more applications during an introduced delay
period; however, it will be appreciated that other embodiments of
the invention can introduce longer delays that capture additional
data operations for analysis. For exemplary purposes, the process
1600 will be described with reference to the components of the
replication system 400 of FIG. 4 utilizing an FID-driven
replication procedure.
[0227] As shown at Block 1605, the process 1600 begins by receiving
from the filter driver 110 a first journal entry related to a
modified file. In certain embodiments, the file system filter
driver 110 intercepts or otherwise accesses a data modification
operation sent by the application(s) 108. At Block 1610, the data
agent 236 identifies the FID of the file to be modified on the
source system 102 and the offset and length of the modified
portions of the file. In certain embodiments, the filter driver 110
advantageously does not store or otherwise retain a copy of the
actual data to be modified for each such data modification
operation.
[0228] Also, instead of immediately transmitting to the destination
system 104 the logs and data associated with replaying the data
operation on the destination system 104, the process 1600
introduces a delay in the replication of data. In certain
embodiments, this delay is between approximately three and four
seconds; in other embodiments, the delay can be of a shorter or
longer duration.
[0229] Due to the delay, the data agent 236 receives at least a
second journal entry before the data associated with the first
journal entry is accessed (Block 1615). At Block 1620, based on the
second journal entry, the data agent 236 identifies the FID of the
file to be modified on the source system 102 and the offset and
length of the modified portions of the file. Based on the data
location information received from both the first and second
journal entries, the data agent 236 determines if the data
modification operations from the two journal entries are write
operations for the same data (Block 1625). If so, the filter driver
110 processes only the later data write operation associated with
the second journal entry and accesses the modified data portions on
disk for transmission to the destination system (Block 1630). The
earlier data operation of the first journal entry is ignored as
being out-of-date.
[0230] However, if the data operations are for different file
regions, the data agent 236 determines if the operations concern
writes to adjacent byte ranges that can be combined (Block 1635).
For example, the data agent 236 can determine if the distance
between the two byte ranges is larger than a predetermined
threshold. For instance, the threshold can be based on the size of
overhead (e.g., a header) associated with journal entries. In
certain embodiments, the threshold distance is 200 bytes. In yet
other embodiments, the threshold distance can be larger (e.g., 1
KB) or smaller and/or dynamically adjusted.
[0231] If the distance between the two byte ranges is less than the
threshold, the process 1600 combines the separate write operations
of the first and second journal entries into a single journal entry
having a single write. In this case, the single write operation is
replayed on the destination system 104 with both byte ranges being
replicated (Block 1640).
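A minimal sketch of the adjacency test and merge of Blocks 1635 through 1640, assuming the two ranges target the same FID, are ordered by offset, and use a fixed 200-byte threshold (which, as noted above, could also be larger, smaller, or dynamic):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>

    /* A pending write to a file, expressed as a byte range. */
    typedef struct {
        off_t  offset;
        size_t length;
    } byte_range;

    /* Assumed gap threshold; the text suggests basing it on per-entry
     * journal overhead (e.g., 200 bytes). */
    #define MERGE_THRESHOLD 200

    /* Merge b into a when the gap between the two ranges (a before b,
     * sorted by offset) is within the threshold (Blocks 1635-1640);
     * returns false if the writes must stay separate (Block 1645). */
    bool try_merge(byte_range *a, const byte_range *b)
    {
        off_t a_end = a->offset + (off_t)a->length;
        if (b->offset - a_end > MERGE_THRESHOLD)
            return false;
        off_t b_end = b->offset + (off_t)b->length;
        if (b_end > a_end)
            a->length = (size_t)(b_end - a->offset);
        return true;
    }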
[0232] If the byte ranges are sufficiently separated, the data
operations from the two journal entries cannot be combined, and the
process 1600 handles the journal entries separately (Block 1645).
That is, the data agent 236 accesses each of the modified portions
of the file(s) based on the information in the two journal entries.
If either of the data access requests results in a particular type
of file system error, such as a "no such file or directory" or
"file not found" error (Block 1650), the process 1600 discards the
journal entry associated with the request (Block 1655). For
instance, in certain embodiments, due to the introduced delay, by
the time the data is requested, the data may have already been
deleted, moved or removed, such as is the case with temporary
files.
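The error handling of Blocks 1650 and 1655 might reduce to an errno check after the open-by-FID and read, as in this sketch (the request code and calling convention are assumed as before):

    #include <errno.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define FLR_OPEN_BY_FID 0x4008F101   /* illustrative request code */

    /* Returns 1 if the journal entry should be discarded because the
     * file behind the FID no longer exists (Blocks 1650-1655), 0 if the
     * data was read successfully, and -1 on any other error. */
    int read_or_discard(int dummy_fd, uint64_t fid, off_t off,
                        char *buf, size_t len)
    {
        if (ioctl(dummy_fd, FLR_OPEN_BY_FID, &fid) < 0)
            return errno == ENOENT ? 1 : -1;   /* deleted temporary file */
        if (pread(dummy_fd, buf, len, off) < 0)
            return errno == ENOENT ? 1 : -1;
        return 0;
    }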
[0233] Finally, if no error is received when trying to access the
data, the process 1600 transfers the modified portions pertaining
to each journal entry for replay and replication on the destination
system 104. In certain embodiments, the transfer and/or replay of
the journal entries can be performed in order of increasing file
offsets, especially with journal entries associated with the same
FID.
[0234] Although the process 1600 is described with reference to
particular arrangements, it will be understood that other
embodiments of the invention may have more or fewer blocks than
those described above. For instance, the data location information
extracted from the second journal entry can be further compared
with data location information of a third journal entry or
additional journal entries. Thus, as can be seen, the process 1600
can be repeated for each subsequent journal entry captured by the
filter driver 110.
[0235] Embodiments of the invention have been described herein with
reference to UNIX file systems, which can include LINUX, XFS,
Veritas, EXT3 file systems, and the like.
[0236] In certain embodiments of the invention, data replication
systems and methods may be used in a modular storage management
system, embodiments of which are described in more detail in U.S.
Pat. No. 7,035,880, issued Apr. 5, 2006, which is hereby
incorporated herein by reference in its entirety. For example, the
data replication system may be part of a storage operation cell
that includes combinations of hardware and software components
directed to performing storage operations on electronic data.
Exemplary storage operation cells usable with embodiments of the
invention include CommCells as embodied in the QNet storage
management system and the QiNetix storage management system by
CommVault Systems, Inc. (Oceanport, N.J.), and as further described
in U.S. Pat. No. 7,454,569, issued Nov. 18, 2008, which is hereby
incorporated herein by reference in its entirety.
[0237] Systems and modules described herein may comprise software,
firmware, hardware, or any combination(s) of software, firmware, or
hardware suitable for the purposes described herein. Software and
other modules may reside on servers, workstations, personal
computers, computerized tablets, PDAs, and other devices suitable
for the purposes described herein. Software and other modules may
be accessible via local memory, via a network, via a browser, or
via other means suitable for the purposes described herein. Data
structures described herein may comprise computer files, variables,
programming arrays, programming structures, or any electronic
information storage schemes or methods, or any combinations
thereof, suitable for the purposes described herein. User interface
elements described herein may comprise elements from graphical user
interfaces, command line interfaces, and other interfaces suitable
for the purposes described herein.
[0238] Embodiments of the invention are also described above with
reference to flowchart illustrations and/or block diagrams of
methods, apparatus (systems) and computer program products. It will
be understood that each block of the flowchart illustrations and/or
block diagrams, and combinations of blocks in the flowchart
illustrations and/or block diagrams, may be implemented by computer
program instructions. These computer program instructions may be
provided to a processor of a general purpose computer, special
purpose computer, or other programmable data processing apparatus
to produce a machine, such that the instructions, which execute via
the processor of the computer or other programmable data processing
apparatus, create means for implementing the acts specified in the
flowchart and/or block diagram block or blocks.
[0239] These computer program instructions may also be stored in a
computer-readable memory that can direct a computer or other
programmable data processing apparatus to operate in a particular
manner, such that the instructions stored in the computer-readable
memory produce an article of manufacture including instruction
means which implement the acts specified in the flowchart and/or
block diagram block or blocks. The computer program instructions
may also be loaded onto a computer or other programmable data
processing apparatus to cause a series of operations to be
performed on the computer or other programmable apparatus to
produce a computer implemented process such that the instructions
which execute on the computer or other programmable apparatus
provide steps for implementing the acts specified in the flowchart
and/or block diagram block or blocks.
[0240] In addition, methods and functions described herein are not
limited to any particular sequence, and the acts or blocks relating
thereto can be performed in other sequences that are appropriate.
For example, described acts or blocks may be performed in an order
other than that specifically disclosed, or multiple acts or blocks
may be combined in a single act or block.
[0241] While certain embodiments of the inventions have been
described, these embodiments have been presented by way of example
only, and are not intended to limit the scope of the disclosure.
Indeed, the novel methods and systems described herein may be
embodied in a variety of other forms; furthermore, various
omissions, substitutions and changes in the form of the methods and
systems described herein may be made without departing from the
spirit of the disclosure. The accompanying claims and their
equivalents are intended to cover such forms or modifications as
would fall within the scope and spirit of the disclosure.
* * * * *