System And Method For Managing Mpls-te Overload

JAIN; PRADEEP G. ;   et al.

Patent Application Summary

U.S. patent application number 13/838673 was filed with the patent office on 2013-03-15 and published on 2013-12-05 for a system and method for managing MPLS-TE overload. The applicant listed for this patent is PRADEEP G. JAIN, JAISHAL SHAH, KANWAR D. SINGH, SRIKRISHNAN VENKATARAMAN. Invention is credited to PRADEEP G. JAIN, JAISHAL SHAH, KANWAR D. SINGH, SRIKRISHNAN VENKATARAMAN.

Application Number: 20130322254 (13/838673)
Family ID: 49670125
Publication Date: 2013-12-05

United States Patent Application 20130322254
Kind Code A1
JAIN; PRADEEP G. ;   et al. December 5, 2013

SYSTEM AND METHOD FOR MANAGING MPLS-TE OVERLOAD

Abstract

A system, method and apparatus for detecting MPLS-TE overload conditions and informing an IGP routing protocol, wherein the IGP routing protocol communicates the overload condition to the nodes in the MPLS-TE routing domain by inserting a new flag or bit value in an OSPF Router Information Capability TLV or an IS-IS Router Capability TLV.


Inventors: JAIN; PRADEEP G.; (MOUNTAIN VIEW, CA) ; SINGH; KANWAR D.; (MOUNTAIN VIEW, CA) ; SHAH; JAISHAL; (MOUNTAIN VIEW, CA) ; VENKATARAMAN; SRIKRISHNAN; (MOUNTAIN VIEW, CA)
Applicant:
Name                       City           State  Country  Type

JAIN; PRADEEP G.           MOUNTAIN VIEW  CA     US
SINGH; KANWAR D.           MOUNTAIN VIEW  CA     US
SHAH; JAISHAL              MOUNTAIN VIEW  CA     US
VENKATARAMAN; SRIKRISHNAN  MOUNTAIN VIEW  CA     US
Family ID: 49670125
Appl. No.: 13/838673
Filed: March 15, 2013

Related U.S. Patent Documents

Application Number Filing Date Patent Number
61653219 May 30, 2012

Current U.S. Class: 370/236
Current CPC Class: H04L 47/125 20130101; H04L 45/124 20130101; H04L 43/16 20130101; H04L 45/125 20130101; H04L 45/50 20130101; H04L 47/724 20130101; H04L 45/16 20130101; H04L 47/35 20130101; H04L 45/04 20130101
Class at Publication: 370/236
International Class: H04L 12/56 20060101 H04L012/56

Claims



1. A method for managing MPLS-TE loading, comprising: monitoring a utilization level of a label switch router (LSR) associated with one or more label switched paths (LSPs); and in response to a determination of said utilization level being indicative of an MPLS-TE overload condition, transmitting an overload message toward an Interior Gateway Protocol (IGP) router, said overload message adapted to cause said IGP router to advertise said overload condition.

2. The method of claim 1, wherein said utilization level is associated with one or more of a memory utilization level, a central processing unit (CPU) utilization level and an input/output utilization level.

3. The method of claim 1, wherein said utilization level is associated with one or more of a number of received RSVP packets, a rate of RSVP packet reception, a number of dropped RSVP packets and a rate of dropped RSVP packets.

4. The method of claim 1, wherein said IGP is adapted to advertise said overload condition to routers within the IGP domain via a flag setting or bit state within a Router Information Capability TLV or sub-TLV.

5. The method of claim 4, wherein said IGP comprises an Open Shortest Path First (OSPF) protocol.

6. The method of claim 5, wherein said OSPF protocol advertises said overload condition via a flag setting or bit having a first state within an OSPF Router Information Capability Type-Length-Value (TLV) or sub-TLV.

7. The method of claim 1, wherein said IGP comprises an Intermediate System to Intermediate System (IS-IS) protocol.

8. The method of claim 7, wherein said IS-IS protocol advertises said overload condition via a flag setting or bit having a first state within an IS-IS Router Information Capability TLV or sub-TLV.

9. The method of claim 1, further comprising: in response to a determination that said utilization level is no longer indicative of an MPLS-TE overload condition, transmitting a non-overload message toward an Interior Gateway Protocol (IGP) router, said non-overload message adapted to cause said IGP to advertise said non-overload condition.

10. The method of claim 9, wherein said utilization level is associated with one or more of a memory utilization level, a central processing unit (CPU) utilization level, an input/output utilization level, a number of received RSVP packets, a rate of RSVP packet reception, a number of dropped RSVP packets and a rate of dropped RSVP packets.

11. The method of claim 5, wherein said OSPF protocol advertises a non-overload condition via said flag setting or bit having a second state within an OSPF Router Information Capability Type-Length-Value (TLV) or sub-TLV.

12. The method of claim 7, wherein said IS-IS protocol advertises a non-overload condition via said flag setting or bit having a second state within an IS-IS Router Information Capability TLV or sub-TLV.

13. The method of claim 9, wherein: an OSPF protocol advertises an overload condition via a flag setting or bit having a first state within an OSPF Router Information Capability Type-Length-Value (TLV) or sub-TLV; and said OSPF protocol advertises a non-overload condition via said flag setting or bit having a second state within said OSPF Router Information Capability Type-Length-Value (TLV) or sub-TLV.

14. The method of claim 9, wherein: an IS-IS protocol advertises an overload condition via a flag setting or bit having a first state within an IS-IS Router Information Capability Type-Length-Value (TLV) or sub-TLV; and said IS-IS protocol advertises a non-overload condition via said flag setting or bit having a second state within said IS-IS Router Information Capability Type-Length-Value (TLV) or sub-TLV.

15. The method of claim 1, wherein said IGP advertised overload condition is adapted to cause other LSRs to reroute existing LSPs around an overloaded LSR.

16. The method of claim 1, wherein said IGP advertised overload condition inhibits other LSRs from routing new LSPs through said overloaded LSR.

17. The method of claim 9, wherein said IGP advertised non-overload condition enables other LSRs to route existing and new LSPs to the non-overloaded LSR.

18. A telecom network element for managing MPLS-TE loading, comprising a processor configured for: monitoring a utilization level of a label switch router (LSR) associated with one or more label switched paths (LSPs); and in response to a determination of said utilization level being indicative of an MPLS-TE overload condition, transmitting an overload message toward an Interior Gateway Protocol (IGP) router, said overload message adapted to cause said IGP to advertise said overload condition.

19. A computer program product wherein computer instructions, when executed by a processor in a telecom network element, adapt the operation of the telecom network element to perform a method for managing MPLS-TE loading, the method comprising: monitoring a utilization level of a label switch router (LSR) associated with one or more label switched paths (LSPs); and in response to a determination of said utilization level being indicative of an MPLS-TE overload condition, transmitting an overload message toward an Interior Gateway Protocol (IGP) router, said overload message adapted to cause said IGP to advertise said overload condition.

20. A tangible and non-transient computer readable storage medium storing instructions which, when executed by a computer, adapt the operation of the computer to provide a method for managing MPLS-TE loading, the method comprising: monitoring a utilization level of a label switch router (LSR) associated with one or more label switched paths (LSPs); and in response to a determination of said utilization level being indicative of an MPLS-TE overload condition, transmitting an overload message toward an Interior Gateway Protocol (IGP) router, said overload message adapted to cause said IGP to advertise said overload condition.
Description



CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/653,219, filed May 30, 2012, entitled TE-LSP SYSTEMS AND METHODS (Attorney Docket No. 811458-PSP) which application is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

[0002] The invention relates to the field of communication networks such as multi-protocol label switching (MPLS) networks and, more particularly but not exclusively, to resource overload detection and management mechanisms.

BACKGROUND

[0003] Multiprotocol Label Switching (MPLS) enables efficient delivery of a wide variety of differentiated, end-to-end services. Multiprotocol Label Switching (MPLS) traffic engineering (TE) provides a mechanism for selecting efficient paths across an MPLS network based on bandwidth considerations and administrative rules. Each label switching router maintains a TE link state database with a current network topology. Once a path is computed, TE is used to maintain a forwarding state along that path.

[0004] In the case of Resource Reservation Protocol (RSVP) Inter-Domain TE-LSPs, a router or other network element or node may experience a resource overutilization condition (i.e., insufficient memory, processor, input/output or other resources) in response to receiving a large number of RSVP Packets. Such a condition may result in the RSVP/MPLS process temporarily dropping RSVP packets to conserve resources. If the condition persists, then the node may start tearing down existing MPLS-TE LSPs to release resources, which in turn may lead to service interruption in a Service Provider Network.

SUMMARY

[0005] Various deficiencies in the prior art are addressed by systems, methods, apparatus, mechanisms, telecom network elements and the like for managing MPLS-TE loading, such as by detecting, responding to, and otherwise managing MPLS-TE loading conditions in a manner adapted to minimize service impact. Various embodiments provide a mechanism for alerting other routers, network elements or nodes in an MPLS-TE domain so that they may avoid using an overloaded node in subsequent new MPLS-TE LSP path computations.

[0006] Various embodiments are directed toward propagating information indicative of an MPLS-TE overload condition. In particular, upon detecting such a condition, various MPLS/RSVP tasks inform a routing protocol (e.g., OSPF, IS-IS and the like) about this state. The routing protocol in turn communicates the overload condition to the nodes in the MPLS-TE routing domain by inserting a new flag or bit value in an OSPF Router Information Capability TLV (if using OSPF) or an IS-IS Router Capability TLV (if using IS-IS).

[0007] A method for managing MPLS-TE loading according to one embodiment comprises: monitoring a utilization level of a label switch router (LSR) associated with one or more label switched paths (LSPs); and in response to said utilization level being indicative of an MPLS-TE overload condition, transmitting an overload message toward an Interior Gateway Protocol (IGP) router, said overload message adapted to cause said IGP to advertise said overload condition.

[0008] The utilization level may be associated with one or more of a memory utilization level, a central processing unit (CPU) utilization level, an input/output utilization level, a number of received RSVP packets, a rate of RSVP packet reception, a number of dropped RSVP packets and a rate of dropped RSVP packets.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The teachings herein can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:

[0010] FIG. 1 depicts a high-level block diagram of a system benefiting from various embodiments;

[0011] FIG. 2 depicts a flow diagram of a method according to one embodiment;

[0012] FIG. 3 depicts a high-level block diagram of a computing device suitable for use in performing functions described herein.

[0013] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.

DETAILED DESCRIPTION OF THE INVENTION

[0014] Various embodiments provide systems, methods and/or apparatus for detecting, responding to, and otherwise managing MPLS-TE overload conditions in a manner adapted to minimize service impact.

[0015] Generally speaking, various embodiments are directed toward propagating information indicative of an MPLS-TE overload condition. In particular, upon detecting such a condition, various MPLS/RSVP tasks inform a routing protocol (e.g., OSPF, IS-IS and the like) about this state. The routing protocol in turn communicates the overload condition to the nodes in the MPLS-TE routing domain by inserting a new flag or bit value in an OSPF Router Information Capability TLV (if using OSPF) or an IS-IS Router Capability TLV (if using IS-IS).

[0016] FIG. 1 depicts a high-level block diagram of a communication network benefiting from various embodiments. Specifically, the network 100 of FIG. 1 provides a Multi-Protocol Label Switching (MPLS) network supporting Resource Reservation Protocol (RSVP). The network may be modified by those skilled in the art to use other MPLS-related protocols rather than the exemplary protocol discussed herein.

[0017] As depicted in FIG. 1, exemplary network 100 includes a plurality of nodes 110.sub.1-110.sub.7 (collectively, nodes 110) that are interconnected via a plurality of communication links 120 (collectively, communication links 120). The network 100 is managed by a management system 130, which may provide any suitable management functions for the network 100. The network 100 may comprise any suitable type of network and, thus, the nodes 110 may be any suitable types of nodes. For example, the network 102 may be an MPLS network in which nodes 110 are label switching routers (LSRs).

[0018] The nodes 110 are configured for transporting traffic within the network 102. The nodes 110 may transport traffic within network 102 using any suitable protocols (e.g., Internet Protocol (IP), MPLS, and the like, as well as various combinations thereof).

[0019] The nodes 110 are configured to collect link state information associated with the communication link(s) 120 to which each node 110 is connected. The nodes 110 are further configured to flood the collected link state information within network 102.

[0020] In one embodiment, the collection and flooding of link state information is performed using a link-state Interior Gateway Protocol (IGP), such as Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), or any other suitable protocol. In this manner, each node 110 receives link state information associated with network 102 and, thus, each node 110 is able to maintain a database including information suitable for use in computing paths (e.g., network topology information, link state information, and the like). This type of database is typically referred to as a Traffic Engineering (TE) database. The nodes 110 also may be configured to store link constraints for use in computing paths for network 102.

[0021] The link constraints may include any suitable link constraints which may be evaluated within the context of path computation. For example, the link constraints may include one or more of a link utilization for the link, a minimum link capacity required for a link, a maximum link bandwidth allowed for a link, a link cost associated with a link, an administrative constraint associated with the link, and the like, as well as various combinations thereof.

[0022] The link constraints may be configured on the nodes 110 in any suitable manner. For example, the link constraints may be pre-configured on the nodes 110 (e.g., automatically and/or by administrators), specified when requesting path computation or establishment, and the like, as well as various combinations thereof. In such embodiments, the link constraints may be provided to the nodes 110, for storage on the nodes 110, from any suitable source(s) of link constraints (e.g., a management system such as MS 130, or any other suitable source).

[0023] Although primarily depicted and described herein with respect to embodiments in which link constraints are configured on the nodes 110, in other embodiments the link constraints may not be stored on the nodes 110. For example, in embodiments in which path computation is performed by a device or devices other than nodes 110 (e.g., by a management system, such as MS 130), link constraints may only be available to the device(s) computing the paths.

[0024] In network 102, at least a portion of the nodes 110 may be configured to operate as ingress nodes into network 102 and, similarly, at least a portion of the nodes 110 may be configured to operate as egress nodes from network 102. In FIG. 1, for example, for a given path between node 110.sub.1 and node 110.sub.7, node 110.sub.1 operates as an ingress node for the path and node 110.sub.7 operates as an egress node for the path. It will be appreciated that each of the nodes 110 may operate as an ingress node only, an egress node only, or both an ingress and egress node (e.g., for different traffic flows).

[0025] As each of the nodes 110 may be configured to operate as an ingress node and/or as an egress node, each node 110 configured to operate as an ingress node may be referred to as an ingress node 110 and each node 110 configured to operate as an egress node may be referred to as an egress node 110.

[0026] In one embodiment, the ingress nodes 110 each are configured for computing paths to egress nodes 110, thereby enabling establishment of connections, from the ingress nodes 110 to the egress nodes 110, configured for transporting traffic via the network 102. The ingress nodes 110, in response to path computation requests, compute the requested paths based on the network information (e.g., network topology, link state, and the like, which may be available in a TE database and/or any other suitable database or databases) and link constraints available to the ingress nodes 110, respectively. The ingress nodes 110, upon computation of paths, may then initiate establishment of connections using the computed paths. The ingress nodes 110 may then transmit information to the egress nodes 110 via the established connections, at which point the egress nodes 110 may then forward the information to other networks and devices.

[0027] In one embodiment, MS 130 is configured for computing paths from ingress nodes 110 to egress nodes 110, thereby enabling establishing of connections, from the ingress nodes 110 to the egress nodes 110, configured for transporting traffic via the network 102. The MS 130, in response to path computation requests, computes the requested paths based on the network information (e.g., network topology, link state, and the like, which may be available in a TE database and/or any other suitable database or databases) and link constraints available to MS 130. The MS 130, upon computing a path, transmits path configuration information for the computed path to the relevant nodes 110, where the path configuration information may be used to establish a connection via the computed path within network 102. The ingress node 110 of the computed path may then transmit information to the egress node 110 via the connection, at which point the egress node 110 may then forward the information to other networks and devices.

[0028] In various embodiments, the network 102 comprises an MPLS network in which nodes 110 are label switching routers (LSRs) operating according to Multi-Protocol Label Switching (MPLS) Label Distribution Protocol (LDP).

[0029] FIG. 2 depicts a flow diagram of a method according to one embodiment. Specifically, FIG. 2 depicts a flow diagram of a method for managing MPLS-TE overload conditions in a manner adapted to minimize service impact. The method 200 of FIG. 2 contemplates that some or all of a plurality of label switching routers (LSRs) associated with various label switched paths (LSPs) through an MPLS network operate to monitor various operating parameters to determine thereby whether an MPLS-TE overload condition exists or is imminent.

[0030] At step 210, an LSP is established between an ingress node and an egress node. Referring to box 215, the established LSP further supports upstream and downstream messages between the various LSRs. In particular, RSVP Path messages are propagated downstream toward the egress node while RSVP Resv messages are propagated upstream toward the ingress node.

[0031] At step 220, at one or more of the egress (or transit) nodes or LSRs forming the LSP, resource utilization is monitored to determine if an MPLS-TE overload condition exists or is imminent. Referring to box 225, this determination may be made with respect to memory, CPU, input/output or other resources, a number of received RSVP packets, a rate of RSVP packet reception, a number of dropped RSVP packets, the rate at which RSVP packets are dropped, one or more resource utilization threshold levels and/or other mechanisms. For example, an MPLS/RSVP task processing mechanism at the node or LSR continually polls its system resource utilization and/or RSVP packet reception statistics.
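The polling and threshold check described above can be sketched as follows. This is an illustrative sketch only: the metric names and threshold values are assumptions for the example, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LsrStats:
    cpu_pct: float          # CPU utilization, percent
    mem_pct: float          # memory utilization, percent
    rsvp_rx_rate: float     # RSVP packets received per second
    rsvp_drop_rate: float   # RSVP packets dropped per second

# Assumed example thresholds (box 225 leaves the actual levels open)
CPU_LIMIT = 90.0
MEM_LIMIT = 85.0
RSVP_RX_LIMIT = 5000.0
RSVP_DROP_LIMIT = 10.0

def is_te_overloaded(s: LsrStats) -> bool:
    """Return True if any monitored metric indicates that an MPLS-TE
    overload condition exists or is imminent."""
    return (s.cpu_pct >= CPU_LIMIT
            or s.mem_pct >= MEM_LIMIT
            or s.rsvp_rx_rate >= RSVP_RX_LIMIT
            or s.rsvp_drop_rate >= RSVP_DROP_LIMIT)
```

In practice the node would evaluate this predicate on each polling cycle and notify the IGP only when the result changes, as described at steps 230 and 250.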

[0032] At step 230, in response to a determination that an MPLS-TE overload condition exists or is imminent at a particular node or LSR, the MPLS/RSVP task processing mechanism (or other mechanism) at the node or LSR informs the IGP of the overload condition.

[0033] At step 240, the IGP advertises the MPLS-TE overload condition to routers within the MPLS domain. Referring to box 245, such advertising is performed via a new or predefined flag or bit setting within an IGP router capability TLV or sub-TLV, such as via an Open Shortest Path First (OSPF) routing protocol, an Intermediate System to Intermediate System (IS-IS) routing protocol and the like. Other IGP advertising mechanisms may also be used. Further, other types of IGP may also be used.

[0034] Various embodiments described herein utilize IGP advertising mechanisms conforming to IS-IS CAPABILITY TLVs and sub-TLVs such as described in more detail in Internet Engineering Task Force (IETF) document "IS-IS Extensions for Advertising Router Information." As defined therein, the IS-IS router CAPABILITY TLV is composed of 1 octet specifying the number of bytes in the value field, and a variable length value field, starting with 4 octets of Router ID, indicating the source of the TLV, and followed by 1 octet of flags. A set of optional sub-TLVs may follow the flag field. Sub-TLVs are formatted as described in IETF Request for Comment (RFC) 3784. Various embodiments use assigned or unassigned bits or flags within a value field (or other fields) to indicate an overload condition.
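As a rough illustration of the value layout just described (length octet, 4 octets of Router ID, 1 octet of flags), the TLV could be encoded as below. The TLV type value 242 is the IS-IS Router CAPABILITY TLV type from the cited IETF work; the specific flag bit used for the overload condition is a hypothetical assumption for this sketch.

```python
ISIS_ROUTER_CAP_TLV = 242   # IS-IS Router CAPABILITY TLV type
TE_OVERLOAD_FLAG = 0x04     # assumed flag bit for the MPLS-TE overload condition

def build_router_cap_tlv(router_id, overloaded):
    """Encode a type octet, a length octet, 4 octets of Router ID,
    and 1 octet of flags; optional sub-TLVs would follow the flags."""
    rid = bytes(int(octet) for octet in router_id.split("."))  # 4-octet Router ID
    flags = TE_OVERLOAD_FLAG if overloaded else 0x00
    value = rid + bytes([flags])
    return bytes([ISIS_ROUTER_CAP_TLV, len(value)]) + value

tlv = build_router_cap_tlv("192.0.2.1", overloaded=True)
# tlv layout: [type=242, length=5, 192, 0, 2, 1, flags]
```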

[0035] Various embodiments described herein utilize IGP advertising mechanisms conforming to OSPF CAPABILITY TLVs and sub-TLVs such as described in more detail in IETF RFC 4970. The format of the Router Informational Capabilities TLV includes a "value" field comprising a variable length sequence of capability bits rounded to a multiple of 4 octets padded with undefined bits. Various embodiments use assigned or unassigned bits or flags within the value field (or other fields) to indicate an overload condition.
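The padding rule described above (a variable-length run of capability bits rounded up to a multiple of 4 octets) can be sketched as follows. The particular bit position chosen for the overload capability is a hypothetical assumption, not an assigned value.

```python
def build_ri_capability_bits(bit_positions):
    """Build the variable-length capability-bit field: set each requested
    bit (bit 0 = most-significant bit of the first octet) and pad with
    zero bits so the field length is a multiple of 4 octets."""
    n_bits = max(bit_positions, default=0) + 1
    n_octets = (n_bits + 7) // 8          # octets needed for the highest bit
    n_octets = ((n_octets + 3) // 4) * 4  # round up to a multiple of 4 octets
    buf = bytearray(n_octets)
    for pos in bit_positions:
        buf[pos // 8] |= 0x80 >> (pos % 8)
    return bytes(buf)

# Assumed example: a hypothetical TE-overload capability carried in bit 10
value = build_ri_capability_bits({10})
```

Setting bit 10 lands in the second octet, and the field is padded out to the 4-octet minimum; a bit beyond position 31 would grow the field to 8 octets.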

[0036] For example, upon determining that an MPLS-TE overload condition exists or is imminent, an MPLS/RSVP task processing mechanism at the node or LSR may inform the IGP of the overload condition. The IGP in turn advertises this condition by adapting or setting to a first state a flag or bit of an OSPF Router Information Capability TLV, an IS-IS Router Capability TLV, another TLV, an existing LSP attribute and the like.

[0037] Various embodiments are adapted to propagate information indicative of an MPLS-TE overload condition upstream to a head-end router (such as an ingress LSR, ABR and the like), causing the head-end router to initiate or trigger a reroute (if desired) of one or more LSPs supported by transit or egress LSRs. A head-end router receiving information indicative of a downstream MPLS-TE overload condition may request re-routing for any existing TE-LSP transiting the overloaded egress (or transit) nodes or LSRs forming the TE-LSP. Suitable mechanisms for requesting rerouting exist, including those described in more detail in various Internet Engineering Task Force (IETF) Requests for Comment (RFCs), such as RFC 5710 (PathErr Message Triggered MPLS and GMPLS LSP Reroutes).

[0038] Generally speaking, when performing a path computation for any new TE-LSP, the head-end router should avoid a router advertising an MPLS-TE overload condition if possible. In this manner, for existing or new MPLS-TE LSPs associated with an overloaded router, one or more head-end routers operate to reduce the RSVP load (resource load) associated with the overloaded router. In this manner, the resources of the overloaded router are conserved such that existing RSVP sessions may quickly return to a normal working state.

[0039] Thus, in various embodiments, an IGP advertised overload condition operates to inhibit other LSRs from routing new LSPs through an overloaded LSR. Similarly, an IGP advertised non-overload condition operates to enable other LSRs to route existing and new LSPs to the non-overloaded LSR.
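The head-end behavior described above, preferring paths that bypass any LSR currently advertising the overload condition, can be sketched as a pruned shortest-path computation. The graph structure and node names are illustrative only; a real implementation would run its constrained SPF over the TE database.

```python
import heapq

def cspf_avoiding_overload(graph, src, dst, overloaded):
    """Dijkstra-style path computation that skips transit nodes
    advertising the MPLS-TE overload condition.
    graph is {node: {neighbor: cost}}; returns (cost, path) or None."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            # prune overloaded transit nodes; the destination itself is kept
            if nbr in overloaded and nbr != dst:
                continue
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return None

graph = {"A": {"B": 1, "C": 2}, "B": {"D": 1}, "C": {"D": 1}}
# With B advertising overload, the computed A-to-D path detours through C
# even though the path through B would be cheaper.
```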

[0040] At step 250, in response to a determination that an MPLS-TE overload condition no longer exists or is no longer imminent at a particular node or LSR, the MPLS/RSVP task processing mechanism (or other mechanism) at the node or LSR informs the IGP of the non-overload condition. The IGP advertises the MPLS-TE non-overload condition to routers within the MPLS domain in a manner similar to that described above with respect to step 240.

[0041] For example, upon determining that an MPLS-TE overload condition no longer exists or is no longer imminent, an MPLS/RSVP task processing mechanism at the node or LSR may inform the IGP of the non-overload condition. The IGP in turn advertises this condition by adapting or setting to a second state a flag or bit of an OSPF Router Information Capability TLV, an IS-IS Router Capability TLV, another TLV, an existing LSP attribute and the like.

[0042] Thus, in one embodiment, a node entering an MPLS-TE overloaded state informs the IGP of this state such that the IGP advertises the overload state to all the nodes in the MPLS-TE domain by, illustratively, setting an MPLS-TE overload flag or bit in a corresponding TLV or sub-TLV. Similarly, a node exiting an MPLS-TE overloaded state (i.e., returning to a normal state) informs the IGP of this state such that the IGP advertises the normal state to all the nodes in the MPLS-TE domain by, illustratively, resetting the MPLS-TE overload flag or bit in the corresponding TLV or sub-TLV.
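The enter/exit behavior above amounts to advertising only on state transitions rather than on every poll. A minimal sketch, where the IGP-notification callback is an assumed stand-in for the actual flag set/reset mechanism:

```python
class TeOverloadAdvertiser:
    """Notify the IGP exactly once per overload-state transition,
    which then sets or resets the overload flag in the TLV."""

    def __init__(self, igp_notify):
        self.overloaded = False
        self._notify = igp_notify  # callable(flag_state: bool)

    def update(self, overloaded_now):
        # Advertise only when the state actually changes
        if overloaded_now != self.overloaded:
            self.overloaded = overloaded_now
            self._notify(overloaded_now)

events = []
adv = TeOverloadAdvertiser(events.append)
adv.update(True)    # enter overloaded state -> advertise flag set
adv.update(True)    # still overloaded -> no new advertisement
adv.update(False)   # exit overloaded state -> advertise flag reset
```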

[0043] Thus, the IGP advertised overload condition is adapted to cause other LSRs to reroute existing LSPs around an overloaded LSR and/or to avoid routing new LSPs through the overloaded LSR. Similarly, an IGP advertised non-overload condition is adapted to cause other LSRs to again route existing or new LSPs through a previously overloaded LSR if such routing is appropriate in terms of cost constraints, path management criteria and so on.

[0044] FIG. 3 depicts a high-level block diagram of a computing device, such as a processor in a telecom network element, suitable for use in performing functions described herein, such as the various network management functions, LSR functions, encapsulation functions, routing/path functions and so on associated with the various elements described above with respect to the figures.

[0045] As depicted in FIG. 3, computing device 300 includes a processor element 303 (e.g., a central processing unit (CPU) and/or other suitable processor(s)), a memory 304 (e.g., random access memory (RAM), read only memory (ROM), and the like), a cooperating module/process 305, and various input/output devices 306 (e.g., a user input device (such as a keyboard, a keypad, a mouse, and the like), a user output device (such as a display, a speaker, and the like), an input port, an output port, a receiver, a transmitter, and storage devices (e.g., a persistent solid state drive, a hard disk drive, a compact disk drive, and the like)).

[0046] It will be appreciated that the functions depicted and described herein may be implemented in software and/or in a combination of software and hardware, e.g., using a general purpose computer, one or more application specific integrated circuits (ASIC), and/or any other hardware equivalents. In one embodiment, the cooperating process 305 can be loaded into memory 304 and executed by processor 303 to implement the functions as discussed herein. Thus, cooperating process 305 (including associated data structures) can be stored on a computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette, and the like.

[0047] It will be appreciated that computing device 300 depicted in FIG. 3 provides a general architecture and functionality suitable for implementing functional elements described herein or portions of the functional elements described herein.

[0048] It is contemplated that some of the steps discussed herein as software methods may be implemented within hardware, for example, as circuitry that cooperates with the processor to perform various method steps. Portions of the functions/elements described herein may be implemented as a computer program product wherein computer instructions, when processed by a computing device, adapt the operation of the computing device such that the methods and/or techniques described herein are invoked or otherwise provided. Instructions for invoking the inventive methods may be stored in tangible and non-transitory computer readable medium such as fixed or removable media or memory, transmitted via a tangible or intangible data stream in a broadcast or other signal bearing medium, and/or stored within a memory within a computing device operating according to the instructions.

[0049] Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. Thus, while the foregoing is directed to various embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. As such, the appropriate scope of the invention is to be determined according to the claims.

* * * * *

