Method, system and serving node for data backup and restoration

Hu; Huanhuan; et al.

Patent Application Summary

U.S. patent application number 14/006674 was filed with the patent office on 2011-08-16 and published on 2014-01-23 as publication number 20140025638, for a method, system and serving node for data backup and restoration. This patent application is currently assigned to ZTE CORPORATION. The applicants listed for this patent are Bin Guo, Huanhuan Hu and Haowei Li, who are also credited with the invention.

Publication Number: 20140025638
Application Number: 14/006674
Publication Date: 2014-01-23

United States Patent Application 20140025638
Kind Code A1
Hu; Huanhuan; et al.    January 23, 2014

Method, system and serving node for data backup and restoration

Abstract

Disclosed are a method, a system and a serving node for data backup and restoration. A serving node generates a backup schedule after receiving a data backup request and uploads the schedule to a cluster public storage block; each serving node then sends the copy to be backed up to a target node for consolidation. For restoration, a client sends a data record in the consolidated copy to a serving node through a data writing request; the serving node generates a data distribution table according to a load balancing principle and sends the request to other serving nodes according to the table to perform a write operation; after the write operation completes, the client sends the data writing request for the next data record, until the copy is completely restored. Backup and restoration are thus performed in a complex cluster network while accounting for performance differences among serving nodes, data consistency, data integrity and uniform data distribution.


Inventors: Hu; Huanhuan (Shenzhen, CN); Li; Haowei (Shenzhen, CN); Guo; Bin (Shenzhen, CN)

Applicant:

Name           City      State  Country  Type
Hu; Huanhuan   Shenzhen         CN
Li; Haowei     Shenzhen         CN
Guo; Bin       Shenzhen         CN

Assignee: ZTE CORPORATION (Shenzhen, Guangdong, CN)

Family ID: 46858642
Appl. No.: 14/006674
Filed: August 16, 2011
PCT Filed: August 16, 2011
PCT NO: PCT/CN11/78466
371 Date: September 22, 2013

Current U.S. Class: 707/654
Current CPC Class: G06F 11/1464 20130101; H04L 67/1097 20130101; G06F 11/1461 20130101; G06F 2201/82 20130101; H04L 67/2857 20130101; G06F 16/21 20190101; G06F 11/1469 20130101; H04L 67/1002 20130101; G06F 16/184 20190101
Class at Publication: 707/654
International Class: G06F 17/30 20060101 G06F017/30

Foreign Application Data

Date Code Application Number
Mar 22, 2011 CN 201110069873.4

Claims



1. A method for data backup and restoration, comprising: generating, by one of a plurality of serving nodes, a backup schedule, after receipt of a data backup request; uploading, by the serving node, the generated backup schedule to a cluster public storage block; sending, by each of the plurality of serving nodes, a copy which needs to be backed up to a target node for consolidation, according to the backup schedule; sending, by a client, a data record in a consolidated copy to the serving node through a data writing request; generating, by one of the plurality of serving nodes, a data distribution table according to a load balancing principle; sending, by the serving node, the data writing request to other serving nodes to perform a write operation, according to the generated data distribution table; and sending, by the client, a data writing request for a next data record in the consolidated copy after the write operation is completed, until the copy is completely restored.

2. The method according to claim 1, further comprising: after sending the data writing request to other serving nodes according to the data distribution table, storing, by the client, the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.

3. The method according to claim 1, wherein the cluster public storage block is located in the serving node, and wherein data in the cluster public storage block can be accessed or modified in real time by any one of the serving nodes in the cluster.

4. The method according to claim 1, further comprising: before sending the copy which needs to be backed up, hot backing up, by each of the serving nodes, local data which needs to be backed up in a specified path.

5. The method according to claim 1, wherein the sending the copy which needs to be backed up to the target node comprises: transmitting the copy which needs to be backed up to one or more target nodes external to the cluster.

6. The method according to claim 1, wherein the consolidation comprises: consolidating, by the target node, a plurality of copies of a single data file into a latest copy according to a version number and a timestamp of the data, when the target node detects that the data in all of the serving nodes are complete according to the backup schedule.

7. A system for data backup and restoration, comprising a serving node, a client and a target node, wherein the serving node is configured to generate a backup schedule after receipt of a data backup request, to upload the generated backup schedule to a cluster public storage block, to send a copy which needs to be backed up to a target node for consolidation, to generate a data distribution table according to a load balancing principle after receipt of a data writing request, to send the data writing request to other serving nodes according to the data distribution table to perform a write operation, and to send a notification to the client when the write operation is completed; and wherein the client is configured to send a data record in a consolidated copy to the serving node through the data writing request, and to send the data writing request for a next data record after receipt of the notification, until the copy is completely restored.

8. The system according to claim 7, wherein the client is further configured so that, after the serving node sends the data writing request to other serving nodes to perform the write operation according to the data distribution table, the client stores the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.

9. The system according to claim 7, wherein the consolidation performed by the target node comprises: consolidating by the target node a plurality of copies of a single data file into a latest copy according to a version number and a timestamp of the data, when the target node detects that the data in all of the serving nodes are complete according to the backup schedule.

10. A serving node for data backup and restoration, configured to generate a backup schedule after receipt of a data backup request, to upload the generated backup schedule to a cluster public storage block, to send a copy which needs to be backed up to a target node for consolidation, to generate a data distribution table according to a load balancing principle after receipt of a data writing request, to send a data writing request to other serving nodes according to a data distribution table to perform a write operation, and to send a notification to a client when the write operation is completed.

11. The serving node according to claim 10, wherein the serving node is further configured so that after sending the data writing request to other serving nodes to perform the write operation according to the data distribution table, the client stores the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.

12. The method according to claim 2, wherein the cluster public storage block is located in the serving node, and wherein data in the cluster public storage block can be accessed or modified in real time by any one of the serving nodes in the cluster.

13. The method according to claim 2, further comprising: before sending the copy which needs to be backed up, hot backing up, by each of the serving nodes, local data which needs to be backed up in a specified path.

14. The method according to claim 2, wherein the sending the copy which needs to be backed up to the target node comprises: transmitting the copy which needs to be backed up to one or more target nodes external to the cluster.

15. The method according to claim 2, wherein the consolidation comprises: consolidating, by the target node, a plurality of copies of a single data file into a latest copy according to a version number and a timestamp of the data, when the target node detects that the data in all of the serving nodes are complete according to the backup schedule.

16. The system according to claim 8, wherein the consolidation performed by the target node comprises: consolidating by the target node a plurality of copies of a single data file into a latest copy according to a version number and a timestamp of the data, when the target node detects that the data in all of the serving nodes are complete according to the backup schedule.

17. The serving node according to claim 10, wherein the serving node is further configured so that after sending the data writing request to other serving nodes to perform the write operation according to the data distribution table, the client stores the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.
Description



TECHNICAL FIELD

[0001] The application relates to a distributed cache system in cloud computing, and in particular to a method, system and serving node for data backup and restoration.

BACKGROUND

[0002] Cloud computing results from the development and convergence of conventional computer and network technologies, such as grid computing, distributed computing, parallel computing, utility computing, network storage technologies, virtualization technologies, load balancing technologies and the like. Cloud computing intends to integrate a plurality of relatively low-cost computing entities over a network into a single system with significant computing power. Distributed cache is one aspect of cloud computing, and it functions to provide distributed storage service for mass data as well as high-speed read and write capability.

[0003] Generally, a distributed cache system is composed of several server nodes (also known as serving nodes) and clients interconnected with each other. The server nodes store data, and a client may perform data manipulations on a server node, such as writing, reading, updating, deleting and the like. Generally, written data is not stored in only a single server node; instead, copies of a data file may be stored in a plurality of nodes for backup. Each of the copies comprises one or more data records composed of Keys and Values, where a Key serves as an index to the data and a Value is the data content represented by the Key. Logically, there is a one-to-one correspondence between Keys and Values. Attribute information, such as a version number, a timestamp and the like, is added to each data record when the data record is actually stored, so as to ensure consistency of the data stored in the distributed cache system; the attribute information is stored in the data records of the copy. In a distributed cache system cluster (hereinafter referred to as a cluster), each of a plurality of serving nodes stores a respective copy of the same data. Performing data backup and restoration on the plurality of serving nodes in the cluster is a hard problem; its core is how to perform the backup and restoration while keeping data consistency, data integrity and uniform data distribution in the complex network environment of the cluster, taking the performance differences between serving nodes into account.
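For illustration only, the record layout described above might be modeled as in the following Python sketch; the class and field names are assumptions of this sketch, not part of the application.

```python
from dataclasses import dataclass

@dataclass
class DataRecord:
    """One record in a copy: a Key indexing a Value, plus the
    attribute information used to keep stored data consistent."""
    key: str          # index to the data
    value: bytes      # data content represented by the Key
    version: int      # version number, updated on each write
    timestamp: float  # time of the last write

# A copy of a data file can then be viewed as a mapping from Keys
# to DataRecords, e.g. {record.key: record for record in records}.
```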

SUMMARY

[0004] Accordingly, the application provides a method, a system and a serving node for data backup and restoration which are able to perform the data backup and restoration in a distributed cache system.

[0005] To this end, the technical solutions of the application are provided as follows.

[0006] One aspect of the application provides a method for data backup and restoration, which may comprise: generating, by one of a plurality of serving nodes, a backup schedule, after receipt of a data backup request; uploading, by the serving node, the generated backup schedule to a cluster public storage block; sending, by each of the plurality of serving nodes, a copy which needs to be backed up to a target node for consolidation, according to the backup schedule; sending, by a client, a data record in a consolidated copy to the serving node through a data writing request; generating, by one of the plurality of serving nodes, a data distribution table according to a load balancing principle; sending, by the serving node, the data writing request to other serving nodes to perform a write operation, according to the generated data distribution table; and sending, by the client, a data writing request for a next data record in the consolidated copy after the write operation is completed, until the copy is completely restored.

[0007] Alternatively, the method may further comprise: after sending the data writing request to other serving nodes according to the data distribution table, storing, by the client, the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.

[0008] Alternatively, the cluster public storage block may be located in the serving node, and wherein data in the cluster public storage block may be accessed or modified in real time by any one of the serving nodes in the cluster.

[0009] Alternatively, the method may further comprise: before sending the copy which needs to be backed up, hot backing up, by each of the serving nodes, local data which needs to be backed up in a specified path.

[0010] Alternatively, the sending the copy which needs to be backed up to the target node may comprise: transmitting the copy which needs to be backed up to one or more target nodes external to the cluster.

[0011] Alternatively, the consolidation may comprise: consolidating, by the target node, a plurality of copies of a single data file into a latest copy according to a version number and a timestamp of the data, when the target node detects that the data in all of the serving nodes are complete according to the backup schedule.

[0012] Another aspect of the application provides a system for data backup and restoration, comprising a serving node, a client and a target node, wherein the serving node may be configured to generate a backup schedule after receipt of a data backup request, to upload the generated backup schedule to a cluster public storage block, to send a copy which needs to be backed up to a target node for consolidation, to generate a data distribution table according to a load balancing principle after receipt of a data writing request, to send the data writing request to other serving nodes according to the data distribution table to perform a write operation, and to send a notification to the client when the write operation is completed; and wherein the client may be configured to send a data record in a consolidated copy to the serving node through the data writing request, and to send the data writing request for a next data record after receipt of the notification, until the copy is completely restored.

[0013] Alternatively, the client may be further configured so that, after the serving node sends the data writing request to other serving nodes to perform the write operation according to the data distribution table, the client stores the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.

[0014] Alternatively, the consolidation performed by the target node may comprise: consolidating, by the target node, a plurality of copies of a single data file into a latest copy according to a version number and a timestamp of the data, when the target node detects that the data in all of the serving nodes are complete according to the backup schedule.

[0015] Another aspect of the application provides a serving node for data backup and restoration, which may be configured to generate a backup schedule after receipt of a data backup request, to upload the generated backup schedule to a cluster public storage block, to send a copy which needs to be backed up to a target node for consolidation, to generate a data distribution table according to a load balancing principle after receipt of a data writing request, to send the data writing request to other serving nodes according to the data distribution table to perform a write operation, and to send a notification to a client when the write operation is completed.

[0016] Alternatively, after the serving node sends the data writing request to other serving nodes to perform the write operation according to the data distribution table, the client may store the data record in a linked list for failed data record restoration when the write operation fails, wherein whenever the client sends the data writing request, the data record in the linked list for failed data record restoration is sent preferentially.

[0017] According to the method, the system and the serving node for data backup and restoration, the serving node generates a backup schedule after receipt of a data backup request and uploads the generated backup schedule to a cluster public storage block, so that each serving node sends a copy which needs to be backed up to a target node for consolidation; the client then sends a data record in the consolidated copy to the serving node through a data writing request, the serving node generates a data distribution table according to a load balancing principle and sends the data writing request to other serving nodes according to the data distribution table to perform a write operation, and the client sends the data writing request for a next data record after the write operation is completed, until the copy is completely restored. Through the above method and system, data backup and restoration may be performed in a complex cluster network with consideration of performance differences among the serving nodes as well as data consistency, data integrity and uniform data distribution. In general, the above method is an effective and highly reliable data backup and restoration solution which ensures data consistency in the distributed cache system, and it addresses the technical issue of data backup and restoration in the distributed cache system.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] FIG. 1 is a flowchart of a method for data backup and restoration according to the application.

[0019] FIG. 2 is a flowchart of a method for data backup according to an embodiment of the application.

[0020] FIG. 3 is a flowchart of a method for consolidation according to an embodiment of the application.

[0021] FIG. 4 is a flowchart of a method for data restoration according to an embodiment of the application.

[0022] FIG. 5 is a structure diagram of a system for data backup and restoration according to the application.

DETAILED DESCRIPTION

[0023] Generally, data backup in a cluster using a distributed cache system comprises two steps: an exporting step, in which a plurality of copies of a data file in the cluster are exported to a target node; and a consolidating step, in which these copies are consolidated into one data file based on a certain consolidation algorithm after the exporting step is performed, and the consolidated data file is stored for future data restoration.

[0024] Data restoration in the cluster may be performed by data import. Specifically, a data writing request is sent to a serving node in the cluster from a client, so as to write data records in the data file into the distributed cache system one by one.

[0025] Hereinafter, the technical solutions of the application will be described in detail with reference to the drawings and the embodiments.

[0026] FIG. 1 is a flowchart of a method for data backup and restoration according to the application. As shown in FIG. 1, the method comprises the following steps.

[0027] Step 101: a serving node generates a backup schedule after receipt of a data backup request.

[0028] Specifically, a client sends a data backup request to any one of serving nodes, as desired; the serving node, which has received the data backup request, generates the backup schedule according to a data distribution table in the cluster. The serving node may be referred to as a backup schedule generating node.

[0029] Step 102: the backup schedule is uploaded to a cluster public storage block.

[0030] Specifically, the cluster public storage block is located in the serving node and the data therein may be accessed or modified in real time by any serving node in the cluster.
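As a minimal sketch of such a shared block, the following Python class models it as a thread-safe key-value area; a real cluster would back it with a replicated or network-shared medium, and all names here are assumptions of this sketch.

```python
import threading

class ClusterPublicStorageBlock:
    """Shared area holding the backup schedule; any serving node in
    the cluster may read or modify its contents at any time."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

# The schedule-generating node would upload its schedule with, e.g.:
# block.put("backup_schedule", {"state": "enable", "nodes": [...]})
```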

[0031] Additionally, after the step 102, a target node may monitor the backup schedule in the cluster public storage block and place the backup schedule in an "enable" state.

[0032] Step 103: each serving node sends a copy which needs to be backed up to the target node according to the backup schedule.

[0033] Additionally, before sending a copy which needs to be backed up, each serving node may hot back up local data, which needs to be backed up, in a specified path.

[0034] Specifically, each serving node in the cluster is able to monitor the state of the backup schedule in the public storage block; if the state is "enable", each serving node hot backs up local data, which needs to be backed up, in the specified path, and then sends the copy to the target node. The data hot backup means that the serving node completely backs up data at a certain time point without corruption of the data, even if the data is being modified. The advantage of using hot backup is that it may ensure that the data in the cluster can be backed up while being updated by the client. Additionally, after each serving node sends the copy which needs to be backed up to the target node, the serving node may send a message to the cluster public storage block, to change the backup schedule from the "enable" state to a "sent" state.

[0035] In the step of sending the copy which needs to be backed up to the target node, the copy may be transmitted to one or more target nodes external to the cluster. In this way, it can be ensured that there is sufficient storage space on a target node for storing the backup data file: if the storage space in the current target node is full, or the target node is disconnected, another target node may be specified to continue the backup.
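The failover just described might look like the following sketch, which tries each external target node in turn and moves on when one is full or disconnected; the transport callable and exception type are hypothetical stand-ins, not part of the application.

```python
class TargetUnavailable(Exception):
    """Raised by the transport when a target node is full or disconnected."""

def upload_copy(copy_bytes, target_nodes, send):
    """Send the copy which needs to be backed up to the first target
    node that accepts it; `send(target, data)` is a hypothetical
    transport callable raising TargetUnavailable on failure."""
    for target in target_nodes:
        try:
            send(target, copy_bytes)
            return target          # the backup continues on this target
        except TargetUnavailable:
            continue               # specify another target node
    raise RuntimeError("no target node available to continue the backup")
```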

[0036] Step 104: the target node performs consolidation.

[0037] Specifically, the target node detects that data in all serving nodes are complete according to the backup schedule, and then consolidates a plurality of copies of a single data file into a latest copy according to attributes of the data such as a version number and a timestamp. In this way, data consistency can be ensured. The step of consolidating copies into the latest copy comprises:

[0038] Step 104a: a first copy of the data file is opened to obtain a data record, i.e., a Key and a corresponding Value;

[0039] Step 104b: other copies of the data file are opened to obtain other Values corresponding to the Key obtained in the step 104a;

[0040] Step 104c: all Value copies corresponding to the same Key are compared with each other according to the latest-version-number-and-timestamp principle, to obtain the latest Value copy.

[0041] Step 104d: a consolidated Value is written to the first copy of the data file and this data record is deleted from other copies of the data file.

[0042] Step 104e: data records in the first copy of the data file are traversed to consolidate remaining data records in other copies of the data file, so as to write the consolidated data record to the first copy of the data file.

[0043] Additionally, after the target node performs the consolidation, a message is sent to the cluster public storage block, and the backup schedule is changed to a "consolidation completed" state; the client polls whether the backup schedule is in the "consolidation completed" state; if yes, the backup process ends.

[0044] Step 105: the client sends the data records in the consolidated copy to the serving node through a data writing request.

[0045] Specifically, the client opens the consolidated copy and traverses the data records therein to fetch one data record, and adds this data record to the data writing request, and then sends the data writing request to any one of the serving nodes in the cluster.

[0046] Step 106: the serving node generates a data distribution table according to a load balancing principle.

[0047] Specifically, since the data distribution table is generated according to the load balancing principle, it can be ensured that the data is evenly restored among the serving nodes and that the data load on each serving node is balanced in the complex cluster network.
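The application does not fix a particular load-balancing principle, so the sketch below simply assigns each Key to the currently least-loaded serving node; a real system might instead use consistent hashing or weighted node capacities. All names are assumptions of this sketch.

```python
import heapq

def build_distribution_table(keys, node_ids):
    """Build a data distribution table that spreads Keys evenly, so
    that restored data and load are balanced across serving nodes."""
    load = [(0, node) for node in node_ids]   # (records assigned, node)
    heapq.heapify(load)
    table = {}
    for key in keys:
        count, node = heapq.heappop(load)     # least-loaded node so far
        table[key] = node
        heapq.heappush(load, (count + 1, node))
    return table

# build_distribution_table(["k1", "k2", "k3"], ["node1", "node2"])
# -> {"k1": "node1", "k2": "node2", "k3": "node1"}
```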

[0048] Step 107: the serving node sends the data writing request to other serving nodes according to the data distribution table to perform a write operation, and sends a data writing request for a next data record after the above write operation is completed, until the copy is completely restored.

[0049] Additionally, after the serving node sends the data writing request to other serving nodes according to the data distribution table to perform the write operation, if the write operation fails, the client stores the data record in a linked list for failed data record restoration; the data record in the linked list is sent preferentially whenever the client sends a data writing request.

[0050] FIG. 2 is a flowchart of data backup according to an embodiment of the application. As shown in FIG. 2, the flowchart comprises:

[0051] Step 201: a client sends a data backup request to serving node 1 in a cluster, and the serving node 1 generates a backup schedule according to a data distribution table in the cluster.

[0052] Step 202: the serving node 1 uploads the local backup schedule to a cluster public storage block.

[0053] Step 203: a target node monitors the backup schedule in the cluster public storage block and then places the backup schedule in an "enable" state.

[0054] Step 204: each serving node (serving nodes 1-2 shown in FIG. 2) in the cluster polls the state of the backup schedule in the cluster public storage block; if the state is "enable", each serving node hot backs up local data in a specified data directory.

[0055] Step 205: each serving node (serving nodes 1-2 shown in FIG. 2) in the cluster sends the backup data in the data directory to the target node and changes the backup schedule from the "enable" state to a "sent" state.

[0056] Step 206: the target node polls whether the backup schedule is in the "copy completed" state; if yes, the target node performs the consolidation.

[0057] Step 207: after the consolidation, the target node places the backup schedule in a "consolidation completed" state.

[0058] Step 208: the client polls whether the backup schedule is in the "consolidation completed" state; if yes, the backup process ends.
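Steps 201-208 drive the backup through a sequence of schedule states held in the cluster public storage block. The sketch below lists those states and shows how a node might poll for one of them; the polling helper and the mapping-style view of the block are assumptions of this sketch.

```python
import time

# Schedule states appearing in the backup flow of FIG. 2.
ENABLE = "enable"                            # set by the target node
SENT = "sent"                                # copies sent by serving nodes
COPY_COMPLETED = "copy completed"            # all copies received
CONSOLIDATION_COMPLETED = "consolidation completed"

def wait_for_state(block, wanted, interval=1.0):
    """Poll the backup schedule state in the public storage block
    (any mapping-like view of it) until it reaches `wanted`; this is
    how the target node and the client wait in steps 206 and 208."""
    while block.get("backup_schedule_state") != wanted:
        time.sleep(interval)

# e.g. wait_for_state({"backup_schedule_state": "sent"}, SENT)
```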

[0059] FIG. 3 is a flowchart of consolidation according to an embodiment of the application. As shown in FIG. 3, the consolidation comprises:

[0060] Step 301: each copy of the data file in the cluster is opened.

[0061] Step 302: a first copy of the data file is traversed to fetch one data record composed of a Key, a Value, a version number, a timestamp and the like.

[0062] Step 303: data records corresponding to the same Key in other data copies are fetched in turn.

[0063] Step 304: the latest copy is selected from a plurality of copies of the same data record as the consolidated data record, according to the latest-version-number-and-timestamp principle.

[0064] Step 305: the consolidated data record is written to the first copy of the data file.

[0065] Step 306: the data records corresponding to the Key in the other data copies are deleted.

[0066] Step 307: a second copy of the data file is traversed after each data record in the first copy of the data file is traversed, and the consolidated data record is written to the first copy of the data file according to steps 302-306, until all copies of the data file are traversed.
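Steps 301-307 amount to a newest-wins merge keyed on the version number and timestamp. A compact in-memory sketch of that merge follows; the tuple layout of a record is an assumption of this sketch, since the application does not fix a file format.

```python
def consolidate(copies):
    """Consolidate several copies of one data file into a latest copy.

    `copies` is a list of dicts mapping Key -> (value, version, timestamp).
    For every Key the record with the highest (version, timestamp) pair
    wins, mirroring the latest-version-number-and-timestamp principle.
    """
    merged = {}
    for copy in copies:                # first copy, then remaining copies
        for key, (value, version, ts) in copy.items():
            kept = merged.get(key)
            if kept is None or (version, ts) > (kept[1], kept[2]):
                merged[key] = (value, version, ts)
    return merged

# consolidate([{"k": ("old", 1, 100.0)}, {"k": ("new", 2, 101.0)}])
# -> {"k": ("new", 2, 101.0)}
```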

[0067] FIG. 4 is a flowchart of data restoration according to an embodiment of the application. As shown in FIG. 4, the data restoration comprises the following steps:

[0068] Step 401: a client opens a backup data file, fetches one data record, adds the fetched data record to a data writing request, and sends the data writing request to serving node 1 in the cluster.

[0069] Step 402: the serving node in the cluster generates a data distribution table according to a load balancing principle, after receipt of the data writing request.

[0070] Step 403: the serving node that generates the data distribution table sends the data writing request to other serving nodes according to the data distribution table.

[0071] Step 404: another serving node which receives the data writing request locally stores the data records carried by the request and returns a corresponding state code to the serving node that generates the data distribution table.

[0072] Step 405: the serving node that generates the data distribution table, i.e., the serving node 1, returns the state code to the client; if the write operation is successful, the client fetches a next data record; if the write operation fails, the client stores the data record in the linked list for failed data record restoration, in which the data record is sent preferentially whenever the client sends a data writing request.

[0073] Additionally, each time before the client sends a data writing request, the client fetches a next data record from the linked list for failed data record restoration or the backup data file, and repeats steps 401 to 405, until all data records in the backup data file are successfully restored.
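Putting steps 401-405 together, the client-side restore loop might look like the sketch below; `write_record` stands in for the data writing request round-trip and is assumed to return True on a successful state code.

```python
from collections import deque

def restore(records, write_record):
    """Replay backup records into the cluster one by one.

    `write_record(record)` sends one data writing request to a serving
    node and returns True on success. Failed records are kept in a
    retry list that is always drained before new records are sent,
    matching the linked list for failed data record restoration.
    """
    failed = deque()
    pending = deque(records)
    while failed or pending:
        record = failed.popleft() if failed else pending.popleft()
        if not write_record(record):
            failed.append(record)   # sent preferentially on the next request
            # (a real client would bound retries or back off here)
```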

[0074] FIG. 5 is a structure diagram of a data backup and restoration system according to the application. As shown in FIG. 5, the system comprises serving nodes 51, a client 52 and target nodes 53.

[0075] The serving node 51 is configured to generate a backup schedule after receipt of a data backup request; upload the backup schedule to a cluster public storage block; send a copy which needs to be backed up to the target node 53 according to the backup schedule; send the data writing request to other serving nodes 51 according to a data distribution table to perform a write operation, after generating the data distribution table according to a load balancing principle when the data writing request is received; and send a notification to the client 52 after the write operation is completed.

[0076] Specifically, the client 52 sends the data backup request to any one of the serving nodes 51, as desired; the serving node 51 which receives the data backup request generates the backup schedule according to the current data distribution table in the cluster. The cluster public storage block is located in a serving node 51, and the data therein may be accessed or modified in real time by any serving node 51 in the cluster. Each serving node 51 in the cluster is able to monitor the state of the backup schedule in the cluster public storage block; if the state is "enable", each serving node 51 hot backs up local data, which needs to be backed up, in a specified path, and then sends the backup data file to the target node 53. The hot backup means that the serving node 51 completely backs up the data at a certain time point without corruption of the data, even if the data is being modified; this ensures that the data in the cluster can be backed up while being updated by the client. The serving node 51 generates the data distribution table according to the load balancing principle, so as to ensure that the data is evenly restored among the serving nodes 51 and that the data load on each serving node 51 is balanced in the complex cluster network. Additionally, after the serving node 51 sends the backup data file to the target node 53, a message is sent to the cluster public storage block, and the backup schedule is changed from the "enable" state to the "sent" state.

[0077] Additionally, the serving node 51 may be configured to hot back up local data which needs to be backed up in a specified path, before sending a copy which needs to be backed up.

[0078] Additionally, the serving node 51 may be configured so that after the data writing request is sent to other serving nodes 51 according to the data distribution table to perform a write operation, if the write operation fails, the client 52 stores the data record in the linked list for failed data record restoration, in which the data record in the linked list is sent preferentially whenever the client 52 sends the data writing request.

[0079] The client 52 is configured to send a data record in the consolidated copy to the serving node 51 through the data writing request, and send the data writing request for a next data record after receiving a completion notification, until all copies are restored.

[0080] Specifically, the client 52 opens the consolidated copy and traverses the data records therein to fetch one data record, adds this data record to the data writing request to be sent, and then sends the data writing request to any one of the serving nodes 51 in the cluster. The data distribution table is generated according to the load balancing principle so as to ensure that the data is evenly restored among the serving nodes and that the data load on each serving node is balanced in the complex cluster network.

[0081] The target node 53 is configured to perform the consolidation.

[0082] Specifically, the target node is one or more nodes external to the cluster. To perform the consolidation, the target node consolidates a plurality of copies of a single data file into a latest copy according to attributes of the data such as the version number and timestamp, when the target node monitors that the data in all serving nodes are complete according to the backup schedule.

[0083] Additionally, after performing the consolidation, the target node 53 sends a message to the cluster public storage block and places the backup schedule in a "consolidation completed" state; the client 52 polls whether the backup schedule is in the "consolidation completed" state; if yes, the backup process ends.

[0084] Additionally, the target node 53 may be configured to monitor the backup schedule in the cluster public storage block and place the backup schedule in an "enable" state.

[0085] The application further provides the serving node 51 in the data backup and restoration system, as shown in FIG. 5.

[0086] The serving node 51 is configured to generate the backup schedule after the receipt of the data backup request; upload the backup schedule to the cluster public storage block; send the copy which needs to be backed up to the target node 53 according to the backup schedule; send the data writing request to other serving nodes 51 according to the data distribution table to perform the write operation, after generation of the data distribution table according to the load balancing principle when the data writing request is received; and inform the client 52 after the write operation is completed.

[0087] Specifically, the client 52 sends the data backup request to any one of the serving nodes 51, as desired; the serving node 51 which receives the data backup request generates the backup schedule according to the data distribution table in the cluster. The cluster public storage block is in a serving node 51, and the data therein may be accessed or modified in real time by any serving node 51 in the cluster. Each serving node 51 in the cluster is able to monitor the state of the backup schedule in the cluster public storage block; if the state is "enable", each serving node 51 hot backs up local data, which needs to be backed up, in the specified path, and then sends the backup data file to the target node 53. The hot backup means that the serving node 51 is still able to completely back up the data at a certain time point without corruption of the data, even if the data is being modified; this ensures that the data in the cluster can be backed up while being updated by the client. The serving node 51 generates the data distribution table according to the load balancing principle, so as to ensure that the data is evenly restored among the serving nodes 51 and that the data load on each serving node 51 is balanced in the complex cluster network. Additionally, after the serving node 51 sends the copy which needs to be backed up to the target node 53, a message is sent to the cluster public storage block, and the backup schedule is changed from the "enable" state to the "sent" state.

[0088] Additionally, the serving node 51 may be configured to hot back up local data which needs to be backed up in the specified path, before sending the copy which needs to be backed up.

[0089] Additionally, the serving node 51 may be configured so that after the data writing request is sent to other serving nodes 51 according to the data distribution table to perform the write operation, if the write operation fails, the client stores the data record in the linked list for failed data record restoration, in which the data record is sent preferentially whenever the client sends a data writing request.

[0090] It should be understood that the foregoing descriptions are only preferred embodiments of the disclosure and are not intended to limit the patent scope of the disclosure. All changes, equivalent substitutions or modifications made without departing from the spirit and scope of the application shall fall within the protection scope of the application.

* * * * *

