Configurable Dynamic Load Shedding Method in Distributed Stream Computing System

Park; Ji Hyoun; et al.

Patent Application Summary

U.S. patent application number 13/962971 was filed with the patent office on 2013-08-09 for a configurable dynamic load shedding method in distributed stream computing system, and published as 20150046506 on 2015-02-12. This patent application is currently assigned to Hong Kong Applied Science and Technology Research Institute Company Limited. The applicant listed for this patent is Hong Kong Applied Science and Technology Research Institute Company Limited. The invention is credited to Zhi Bin Lei, Ji Hyoun Park, Kangheng Wu.

Publication Number: 20150046506
Application Number: 13/962971
Family ID: 52449556
Publication Date: 2015-02-12

United States Patent Application 20150046506
Kind Code A1
Park; Ji Hyoun; et al. February 12, 2015

Configurable Dynamic Load Shedding Method in Distributed Stream Computing System

Abstract

A computer implemented method of load shedding used in a stream computing system that considers the relative importance of each of the applications processing the incoming input data or events. The load shedding method also accounts for system physical constraints, such as memory and CPU utilization. The method first observes the workload of each application and the arriving rate of the incoming input data or events. If the system is under an overloading condition, it calculates an input data or event drop ratio for each application such that the projected sum of all applications' workloads will be at or below the system capacity when the unprocessed input data or events are dropped according to each application's drop ratio.


Inventors: Park; Ji Hyoun (Hong Kong, HK); Wu; Kangheng (Hong Kong, HK); Lei; Zhi Bin (Hong Kong, HK)

Applicant: Hong Kong Applied Science and Technology Research Institute Company Limited (Hong Kong, HK)

Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited (Hong Kong, HK)

Family ID: 52449556
Appl. No.: 13/962971
Filed: August 9, 2013

Current U.S. Class: 709/201
Current CPC Class: G06F 9/5083 (2013.01)
Class at Publication: 709/201
International Class: H04L 29/08 (2006.01)

Claims



1. A computer implemented method for load shedding in a distributed stream computing system, comprising: detecting a processing latency; calculating a target projection point for system load; if the processing latency is greater than a latency threshold: calculating a drop ratio for each of one or more applications running in the system based on one or more drop ratio computation factors comprising: the target projection point for system load, arriving rate of data or events, processing rate of data or events, amount of system resources for processing data or events, and system resource capacity; determining a load shedding percentage using the drop ratio for each of the one or more applications; dropping a fraction of unprocessed data or events by the load shedding percentage for each of the one or more applications; repeating the method steps until the processing latency is not greater than the latency threshold.

2. The method of claim 1, wherein the load shedding percentage is constrained by and proportional to available buffer in the corresponding application for holding unprocessed input data or events.

3. The method of claim 1, further comprising: determining an incremental drop ratio delta for each of the one or more applications, wherein the incremental drop ratio delta is the drop ratio for the corresponding application divided by a number; wherein the load shedding percentage is initially equal to the incremental drop ratio delta for the corresponding application and increments by the same delta for each cycle.

4. The method of claim 3, wherein the number for dividing drop ratio is proportional to available buffer in the corresponding application for holding unprocessed input data or events.

5. The method of claim 1, wherein the drop ratio computation factors further comprise a relative importance of each of the one or more applications running in the system.

6. The method of claim 5, wherein computational result accuracy of the application is changed by adjusting the relative importance of the corresponding application.

7. The method of claim 1, wherein the target projection point for system load is calculated to be a projection point on a system capacity line of the system such that the system resources are utilized at maximum under constraints of one or more configuration parameters.

8. The method of claim 1, wherein the target projection point for system load is calculated to be a projection point at a distance below a system capacity line of the system according to a guaranteed probability of buffer overflow control requirement.

9. The method of claim 1, wherein the processing latency is the average of one or more process latencies observed for a certain period of time at one or more nodes in the system.

10. The method of claim 1, wherein the processing latency is the minimum of one or more process latencies observed at one or more nodes in the system.

11. A load shedding module for a distributed stream computing system, the load shedding module is configured to execute a process comprising: detecting a processing latency; calculating a target projection point for system load; if the processing latency is greater than a latency threshold: calculating a drop ratio for each of one or more applications running in the system based on one or more drop ratio computation factors comprising: the target projection point for system load, arriving rate of data or events, processing rate of data or events, amount of system resources for processing data or events, and system resource capacity; determining a load shedding percentage using the drop ratio for each of the one or more applications; dropping a fraction of unprocessed data or events by the load shedding percentage for each of the one or more applications; repeating the method steps until the processing latency is not greater than the latency threshold.

12. The load shedding module of claim 11, wherein the load shedding percentage is constrained by and proportional to available buffer in the corresponding application for holding unprocessed input data or events.

13. The load shedding module of claim 11, configured to further execute: determining an incremental drop ratio delta for each of the one or more applications, wherein the incremental drop ratio delta is the drop ratio for the corresponding application divided by a number; wherein the load shedding percentage is initially equal to the incremental drop ratio delta for the corresponding application and increments by the same delta for each cycle.

14. The load shedding module of claim 13, wherein the number for dividing drop ratio is proportional to available buffer in the corresponding application for holding unprocessed input data or events.

15. The load shedding module of claim 11, wherein the drop ratio computation factors further comprise a relative importance of each of the one or more applications running in the system.

16. The load shedding module of claim 15, wherein computational result accuracy of the application is changed by adjusting the relative importance of the corresponding application.

17. The load shedding module of claim 11, wherein the target projection point for system load is calculated to be a projection point on a system capacity line of the system such that the system resources are utilized at maximum under constraints of one or more configuration parameters.

18. The load shedding module of claim 11, wherein the target projection point for system load is calculated to be a projection point at a distance below a system capacity line of the system according to a guaranteed probability of buffer overflow control requirement.

19. The load shedding module of claim 11, wherein the processing latency is the average of one or more process latencies observed for a certain period of time at one or more nodes in the system.

20. The load shedding module of claim 11, wherein the processing latency is the minimum of one or more process latencies observed at one or more nodes in the system.
Description



COPYRIGHT NOTICE

[0001] A portion of the disclosure of this patent document contains material, which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

FIELD OF THE INVENTION

[0002] The present invention relates generally to information systems used in distributed stream computing. Particularly, the present invention relates to overload management in distributed stream computing systems. Still more specifically, the present invention relates to load shedding methods in distributed stream computing systems.

BACKGROUND

[0003] Stream computing is about producing a continuous stream of fresh computational results as new data or events are input in real-time. Resource provisioning and allocation are particularly difficult due to the time-varying and sporadic occurrence of new data or events, which induces unknown resource demands over time. Under an overload condition, in which the arriving rate of new data or events exceeds the capacity of the system, the system lacks the resources to process the new incoming data or events within a tolerable time period. Consequently, the processing latency grows uncontrollably, compromising the freshness of the stream of computational results.

[0004] Computing architectures and techniques have been developed to address the abovementioned problem. One such architecture divides computational resources into physical or logical units (nodes) for processing the input data or events, and distributes the incoming input data or events to the nodes according to a distribution scheme. The distribution scheme can be as simple as round robin or as complex as intelligent distribution based on constantly monitored load levels of the nodes. The advantage of this architecture is that computational processing can be distributed and performed in parallel, and physical/logical units of computational resources can be added or removed according to the actual runtime load levels, thus achieving scalability. One example of such distributed stream computing systems is described in the document: Neumeyer et al., S4: Distributed Stream Computing Platform, Santa Clara, Calif., U.S.A., 2010; the content of which is incorporated herein by reference in its entirety.
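As a minimal illustrative sketch of the simplest distribution scheme mentioned above (round robin), the following Python fragment assigns events to nodes in a fixed rotation; the function and node names are hypothetical and not taken from the referenced S4 system.

```python
from itertools import cycle

def distribute_round_robin(events, nodes):
    # Assign each incoming event to the next node in a fixed rotation.
    node_cycle = cycle(nodes)
    return [(next(node_cycle), event) for event in events]

# Example: three events spread across two nodes.
assignments = distribute_round_robin(["e1", "e2", "e3"], ["node_a", "node_b"])
# [('node_a', 'e1'), ('node_b', 'e2'), ('node_a', 'e3')]
```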

[0005] Load shedding is a computing technique that discards some fraction of the unprocessed input data or events in order to reduce the system load, which in turn reduces the observable latency of the stream of computational results. One issue with load shedding is how to discard unprocessed input data or events most efficiently while ensuring that deviations from the perfect computing results are minimized.

[0006] One load shedding strategy is to eliminate incoming input data or events once the system resource capacity is reached; for example, when a buffer for holding input data or events to be processed is full. However, this strategy treats all input data or events indiscriminately and does not account for differences in the importance of the input data or events. This leads to unpredictable or poor accuracy in the computational results. In addition, the rate of data or event elimination cannot be adjusted for varying input data or event arriving rates and observable processing latency during runtime.

[0007] Another load shedding strategy is to continuously monitor the actual processing latency and/or resource (such as CPU and memory) utilization, compare them with a pre-determined optimal processing latency and/or resource utilization rate, and discard randomly selected unprocessed input data or events based on the differences between the actual and optimal processing latencies and/or resource utilization rates. One example of this strategy is described in the document: Kalyvianaki et al., Overload Management in Data Stream Processing Systems with Latency Guarantees, Stockholm, Sweden, 2012; the content of which is incorporated herein by reference in its entirety. This strategy, however, suffers from the same problem of unpredictable or poor accuracy in the computing results.

[0008] Some other load shedding strategies require the system to have active knowledge of the usage of the input data. The usage can be in the form of data queries of the input data specified by a user. The decisions of when and what to discard rely on the analysis of these queries in order to determine the different levels of importance of the input data. Runtime control of the discard decisions can be achieved by specially designed queries. For example, U.S. Patent Application Publication No. 2012/027,843 discloses a method of controlling load shedding for excluding data streams of a data process input into a data stream management system.

[0009] Another example of such a load shedding strategy applies XML query processing on input data and makes discard decisions based on patterns of XML data structures. The details of this example are disclosed in the document: Wei et al., Utility-driven Load Shedding for XML Stream Processing, Worcester Polytechnic Institute, U.S.A., 2003; the content of which is incorporated herein by reference in its entirety. The downside of these load shedding strategies, however, is that they are inflexible and highly application- and data-specific.

SUMMARY

[0010] It is an objective of the presently claimed invention to provide a method of load shedding used in distributed stream computing systems that is efficient, optimal, flexible, and balanced between computing result accuracy and processing latency.

[0011] It is a further objective to provide such a method of load shedding that considers the relative importance of each of the applications processing the incoming input data or events. The presently claimed method of load shedding also accounts for system physical constraints, such as memory and CPU utilization. The load shedding method first observes the workload of each application and the arriving rate of the incoming input data or events. If the system is under an overloading condition, it calculates an input data or event drop ratio for each application such that the projected sum of all applications' workloads will be at or below the system capacity when the unprocessed input data or events are dropped according to each application's drop ratio.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] Embodiments of the invention are described in more detail hereinafter with reference to the drawings, in which

[0013] FIG. 1 shows, in a 2-dimensional space, the system capacity line of an exemplary distributed stream computing system having two applications, a current system load under an overloading condition, and three target projection points as aids in illustrating the presently claimed load shedding method;

[0014] FIG. 2 further shows the current system load being moved towards the target projection point in incremental steps; and

[0015] FIG. 3 further shows a revised target projection point for system stability with guaranteed probability of buffer overflow control.

DETAILED DESCRIPTION

[0016] In the following description, load shedding methods and systems used in distributed stream computing systems and the like are set forth as preferred examples. It will be apparent to those skilled in the art that modifications, including additions and/or substitutions, may be made without departing from the scope and spirit of the invention. Specific details may be omitted so as not to obscure the invention; however, the disclosure is written to enable one skilled in the art to practice the teachings herein without undue experimentation.

[0017] In accordance with various embodiments, the load shedding method approaches the problem of how much and which input data or events to drop by first defining an architecture of a distributed stream computing system in which a plurality of applications are deployed on one or more physical computing processing units (each including all necessary computing resources such as CPUs and memory), virtual partitions of computing processing units, or logical computing processing units (collectively referred to as "nodes").

[0018] Each node runs one or more instances of the applications. An application running on one or more nodes is denoted by App_i. The application App_i requires a certain amount of computing resources, denoted by C_i, of the nodes to process an incoming input data or event. The arriving rate of the incoming input data or events to be processed by App_i is the number of incoming input data or events which arrive in a unit of time, denoted by λ_i. The required workload of the application App_i for processing the input data or events at runtime is then (λ_i * C_i). The actual processing rate of the input data or events being processed by App_i is the number of input data or events processed in a unit of time, denoted by x_i. The load shedding percentage of input data or events is then p_i = (λ_i - x_i)/λ_i. The computing capacity of a node is denoted by M_j. Therefore, a foreseeable overloading condition can be defined as Sum_i(λ_i * C_i) > Sum_j(M_j). In other words, when the sum of the required workloads of all applications exceeds the sum of all nodes' computing capacities, an overloading condition occurs.
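As an illustration, here is a short Python sketch of the overloading test defined above, assuming the per-application arriving rates λ_i, per-event costs C_i, and node capacities M_j have already been measured; all names are illustrative.

```python
def is_overloaded(arrival_rates, unit_costs, node_capacities):
    """True when Sum_i(lambda_i * C_i) > Sum_j(M_j), i.e. the required
    workload of all applications exceeds the total node capacity."""
    required = sum(lam * c for lam, c in zip(arrival_rates, unit_costs))
    return required > sum(node_capacities)

# Two applications on two nodes: required load 7*2 + 4*3 = 26 > 20, overloaded.
print(is_overloaded([7.0, 4.0], [2.0, 3.0], [10.0, 10.0]))  # True
```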

[0019] When the distributed stream computing system is running at maximum capacity, the sum of all applications' actual workloads equals the sum of all nodes' computing capacities. This can be mathematically represented by Sum_i(x_i * C_i) = Sum_j(M_j), or Sum_i(x_i * C_i) - Sum_j(M_j) = 0. Mathematically, Sum_i(x_i * C_i) - Sum_j(M_j) = 0 is a hyper-plane (referred to as the "system capacity line"); together with the minimum boundary condition point x_i = 0, it bounds a multi-dimensional shape in a multi-dimensional space. Let P(x_1, x_2, x_3, . . . x_N) be a point in the multi-dimensional space representing the current system load with all the applications running. When P(x_1, x_2, x_3, . . . x_N) is located on the system capacity line, the sum of all applications' actual workloads equals the sum of all nodes' computing capacities. When P(x_1, x_2, x_3, . . . x_N) is located within the multi-dimensional shape bounded by the Sum_i(x_i * C_i) - Sum_j(M_j) = 0 hyper-plane and the x_i = 0 point (below the system capacity line), the sum of all applications' actual workloads is below the sum of all nodes' computing capacities, and an under-loading condition is occurring. When P(x_1, x_2, x_3, . . . x_N) is located outside of the bounded multi-dimensional shape (above the system capacity line), the sum of all applications' actual workloads is above the sum of all nodes' computing capacities, and an overloading condition is occurring. In order to reduce the actual average processing latency to or below the user-acceptable average processing latency, the load shedding module is to drop certain input data or events, and by doing so bring the system load to a target projection point on or below the system capacity line.

[0020] FIG. 1 shows, in a 2-dimensional space, the system capacity line of an exemplary distributed stream computing system having two applications, App_1 and App_2, experiencing an overloading condition. In this 2-dimensional space, a current system load, P_0, is located above the system capacity line; three target projection points of system load, P'_1, P'_2, and P'_3, are identified. P'_1 is achieved by dropping input data or events to be processed by App_1, P'_3 is achieved by dropping input data or events to be processed by App_2, and P'_2 is achieved by dropping input data or events to be processed by both App_1 and App_2. P'_2 is the optimal target projection point, as the fewest input data or events will be dropped for each of App_1 and App_2, hence least impacting the computational result accuracy of both applications. P'_2 is the orthogonal projection of P_0 onto the system capacity line, calculated as follows:

System capacity line: C_1*x_1 + C_2*x_2 - Sum_j(M_j) = 0

For P_0(x_1[0], x_2[0]):

P'_2(x_1[2], x_2[2]) = (x_1[0] - C_1*(C_1*x_1[0] + C_2*x_2[0] - Sum_j(M_j))/(C_1^2 + C_2^2),

x_2[0] - C_2*(C_1*x_1[0] + C_2*x_2[0] - Sum_j(M_j))/(C_1^2 + C_2^2))
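A small numeric sketch of this 2-dimensional projection, assuming illustrative values for C_1, C_2, Sum_j(M_j), and P_0:

```python
def project_2d(x1, x2, c1, c2, capacity):
    # Orthogonal projection of P0 = (x1, x2) onto the capacity line
    # c1*x1 + c2*x2 - capacity = 0, matching the P'_2 formula above.
    excess = (c1 * x1 + c2 * x2 - capacity) / (c1**2 + c2**2)
    return x1 - c1 * excess, x2 - c2 * excess

# P0 = (10, 8), C1 = 2, C2 = 3, total capacity 20.
x1p, x2p = project_2d(10.0, 8.0, 2.0, 3.0, 20.0)
# The projected point lands exactly on the line: 2*x1p + 3*x2p == 20.
```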

[0021] It can be assumed that at the initial overloading condition, before load shedding begins, the applications' processing rates x_1[0] and x_2[0] at the current system load, P_0, are the input data or event arriving rates λ_1 and λ_2, respectively.

[0022] To generalize, for a current system load, P(x_1, x_2, x_3, . . . x_N), the optimal target projection point, P'(x'_1, x'_2, x'_3, . . . x'_N), can be calculated as:

x'_i = x_i - C_i*(Sum_i(C_i*x_i) - Sum_j(M_j))/Sum_i(C_i^2).

[0023] The load shedding percentage of incoming input data or events, or drop ratio, for each application is:

p_i = (x_i - x'_i)/x_i, or (λ_i - x'_i)/λ_i for x_i = λ_i.
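The following sketch computes the generalized projection and the resulting drop ratios, under the assumption from paragraph [0021] that x_i = λ_i at the initial overload; the values are illustrative.

```python
def target_projection(x, c, capacity):
    # x'_i = x_i - C_i*(Sum_i(C_i*x_i) - Sum_j(M_j)) / Sum_i(C_i^2)
    excess = (sum(ci * xi for ci, xi in zip(c, x)) - capacity) / sum(ci**2 for ci in c)
    return [xi - ci * excess for xi, ci in zip(x, c)]

def drop_ratios(lam, x_target):
    # p_i = (lambda_i - x'_i) / lambda_i, taking x_i = lambda_i initially.
    return [(li - xi) / li for li, xi in zip(lam, x_target)]

lam = [10.0, 8.0]                                    # arriving rates at overload
x_target = target_projection(lam, [2.0, 3.0], 20.0)  # optimal projection point
p = drop_ratios(lam, x_target)                       # ~[0.369, 0.692]
```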

[0024] In order to minimize the negative effect on the computational result accuracy of the applications, incoming input data or events are dropped incrementally according to an increasing load shedding percentage calculated for each application. The calculation takes into consideration the available buffer for each application to hold unprocessed incoming input data or events. Referring to FIG. 2, the current system load, P_0, is moved towards the target projection point, P'_2, in steps as the incoming input data or events to be processed by App_1 and App_2 are dropped using load shedding percentages that increase in n_1 and n_2 steps of sizes:

delta_1 = (λ_1 - x'_1)/(n_1*λ_1) and delta_2 = (λ_2 - x'_2)/(n_2*λ_2), respectively.

[0025] To generalize, the incremental drop ratio delta is (λ_i - x'_i)/(n_i*λ_i), where n_i is a number proportional to the available buffer in App_i for holding unprocessed incoming input data or events.
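One possible sketch of this incremental ramp-up; the step count n_i (here 4) would in practice be derived from App_i's available buffer, and the numbers continue the running example above.

```python
def shedding_schedule(lam_i, x_target_i, n_i):
    # Load shedding percentage per cycle: starts at one delta and grows by
    # the same delta until the full drop ratio (lam_i - x'_i)/lam_i is reached.
    delta = (lam_i - x_target_i) / (n_i * lam_i)
    return [step * delta for step in range(1, n_i + 1)]

# App_1 from the example above, shedding ramped up over 4 cycles.
for pct in shedding_schedule(10.0, 6.3077, 4):
    print(f"shed {pct:.1%} of unprocessed events this cycle")
```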

[0026] Taking the relative importance of each application into additional consideration, the system capacity line is modified to be:

Sum_i(x_i*s_i*C_i) - Sum_j(M_j) = 0, where s_i is the relative importance coefficient of App_i.

[0027] The incremental drop ratio delta is then modified to be (λ_i - x'_i)/(n_i*λ_i*s_i).

[0028] The relative importance coefficient can be pre-configured, dynamically adjusted, and updated at runtime based on conditions of the applications and the distributed stream computing system. For example, to increase the computational result accuracy of an application, its corresponding relative importance coefficient value can be made larger.
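A sketch of the importance-weighted delta of paragraph [0027]: a larger s_i yields a smaller per-cycle increment for that application, so its events are shed more gently. The numeric values are illustrative.

```python
def weighted_delta(lam_i, x_target_i, n_i, s_i):
    # Incremental drop ratio delta with importance coefficient s_i:
    # (lambda_i - x'_i) / (n_i * lambda_i * s_i)
    return (lam_i - x_target_i) / (n_i * lam_i * s_i)

# Doubling App_1's importance halves its per-cycle shedding increment.
print(weighted_delta(10.0, 6.3077, 4, 1.0))  # ~0.0923
print(weighted_delta(10.0, 6.3077, 4, 2.0))  # ~0.0462
```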

[0029] Assume the occurrence pattern of incoming input data or events follows a random Gaussian distribution, and further assume that the arriving rates (λ_1, λ_2, λ_3, . . . λ_N) take their mean values with a standard deviation of r. The current system load, P, then becomes the center point of a shape in the multi-dimensional space having a volume proportional to r. Inside this shape are all the probable current system load values. To compensate for the probable current system loads that are higher than P, the target projection point of system load, P', must be set somewhere below the system capacity line to ensure system stability with a guaranteed probability of buffer overflow control. For example, if P' is set at a distance of 1×r below the system capacity line, there is 68% confidence that the buffers will not overflow; 2×r gives 95% confidence; and 3×r gives 99.7% confidence.

[0030] Referring to FIG. 3, the current system load, P_0, is the center of a circle having a radius of r. The circle area contains all the probable current system load values. To ensure system stability with 99.7% confidence that the buffers will not overflow, the target projection point of system load, P'_2, is set at 3×r below the system capacity line.
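A sketch of shifting the target below the capacity line by k×r under the Gaussian assumption above: reducing the capacity constant by k*r*||C|| moves the projected point a Euclidean distance of k×r below the original line. It reuses target_projection from the earlier sketch; the example values are illustrative.

```python
import math

def target_with_margin(x, c, capacity, r, k):
    # k = 1, 2, or 3 for roughly 68%, 95%, or 99.7% confidence that the
    # buffers will not overflow, per the Gaussian assumption above.
    c_norm = math.sqrt(sum(ci**2 for ci in c))
    return target_projection(x, c, capacity - k * r * c_norm)

# 99.7% confidence target for the running example with r = 0.5.
safe_target = target_with_margin([10.0, 8.0], [2.0, 3.0], 20.0, r=0.5, k=3)
```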

[0031] In accordance with various embodiments, a load shedding module implementing the method of the presently claimed invention monitors the processing latencies of the nodes, and if any one node exhibits an observed latency greater than a pre-defined user-acceptable latency value, the load shedding module computes the target projection point of system load, a drop ratio, and an incremental drop ratio delta for each of the applications running in the distributed stream computing system. The target projection point can optionally be revised according to guaranteed probability of buffer overflow control requirements, in which case a revised drop ratio and incremental drop ratio delta for each of the applications are determined. Each application drops its unprocessed input data or events by a load shedding percentage that is initially equal to its corresponding incremental drop ratio delta and increments by the same delta for each cycle, until the observed average latency at each node is not greater than the pre-defined user-acceptable latency value.
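Putting the pieces together, here is a hypothetical monitoring loop for the load shedding module described in paragraph [0031]; the field names, latency source, and cycle structure are all assumptions, and target_projection is the earlier sketch, not the patent's implementation.

```python
def load_shedding_cycle(observed_latency, latency_threshold, apps, capacity):
    # One control cycle: ramp each app's shedding percentage up by its delta
    # while the observed latency exceeds the user-acceptable threshold.
    # `apps` is a list of dicts with illustrative keys: lam, cost, n, shed_pct.
    if observed_latency <= latency_threshold:
        for app in apps:
            app["shed_pct"] = 0.0   # healthy again: stop shedding
        return False
    lam = [a["lam"] for a in apps]
    cost = [a["cost"] for a in apps]
    x_target = target_projection(lam, cost, capacity)  # earlier sketch
    for app, xt in zip(apps, x_target):
        delta = (app["lam"] - xt) / (app["n"] * app["lam"])
        app["shed_pct"] = min(app["shed_pct"] + delta, 1.0)
    return True
```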

[0032] The embodiments disclosed herein may be implemented using general purpose or specialized computing devices, computer processors, or electronic circuitries including but not limited to digital signal processors (DSP), application specific integrated circuits (ASIC), field programmable gate arrays (FPGA), and other programmable logic devices configured or programmed according to the teachings of the present disclosure. Computer instructions or software codes running in the general purpose or specialized computing devices, computer processors, or programmable logic devices can readily be prepared by practitioners skilled in the software or electronic art based on the teachings of the present disclosure.

[0033] In some embodiments, the present invention includes computer storage media having computer instructions or software codes stored therein which can be used to program computers or microprocessors to perform any of the processes of the present invention. The storage media can include, but are not limited to, floppy disks, optical discs (Blu-ray Disc, DVD, CD-ROM), magneto-optical disks, ROMs, RAMs, flash memory devices, or any type of media or devices suitable for storing instructions, codes, and/or data.

[0034] The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art.

[0035] The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

* * * * *

