Collapsing And Placement Of Applications

Pang; Jackson Ngoc Ki; et al.

Patent Application Summary

U.S. patent application number 15/173431 was filed with the patent office on 2016-06-03 and published on 2016-12-08 as publication number 20160357424 for collapsing and placement of applications. The applicant listed for this patent is Cisco Technology, Inc. Invention is credited to Jackson Ngoc Ki Pang, Ali Parandehgheibi, and Michael Standish Watts.

Application Number: 20160357424 (Ser. No. 15/173431)
Family ID: 57451053
Publication Date: 2016-12-08

United States Patent Application 20160357424
Kind Code A1
Pang; Jackson Ngoc Ki; et al. December 8, 2016

COLLAPSING AND PLACEMENT OF APPLICATIONS

Abstract

The present technology is directed to providing visibility into the data flows of a multi-tier application, helping network teams understand and develop the application's dataflow. The technology is directed to an application dependency map visualized in a collapsible chart. The collapsible chart displays the policies and relationships between the logical entities that carry a multi-tier application, and the collapsible multi-tier application UI displays the application's data flows. Because such charts are large and complex, the present technology avoids displaying the entire topology of a multi-tier application, focusing instead on the dependency relationships of interest.


Inventors: Pang; Jackson Ngoc Ki (Sunnyvale, CA); Watts; Michael Standish (Mill Valley, CA); Parandehgheibi; Ali (Sunnyvale, CA)
Applicant: Cisco Technology, Inc., San Jose, CA, US
Family ID: 57451053
Appl. No.: 15/173431
Filed: June 3, 2016

Related U.S. Patent Documents

Application Number 62/171,899 (provisional), filed Jun 5, 2015

Current U.S. Class: 1/1
Current CPC Class: G06F 9/45558 20130101; G06F 2221/2111 20130101; H04L 43/08 20130101; H04L 43/12 20130101; H04L 67/22 20130101; G06F 16/235 20190101; H04L 41/0668 20130101; H04L 43/062 20130101; G06F 16/173 20190101; H04L 1/242 20130101; H04L 9/3239 20130101; H04L 43/02 20130101; H04L 45/66 20130101; H04L 45/74 20130101; G06F 16/174 20190101; G06F 21/566 20130101; G06N 20/00 20190101; H04L 41/0803 20130101; H04L 41/0893 20130101; H04L 47/31 20130101; H04L 63/1441 20130101; H04L 67/12 20130101; G06F 3/0482 20130101; G06F 2009/45595 20130101; G06N 99/00 20130101; H04L 43/045 20130101; H04L 45/46 20130101; H04L 61/2007 20130101; H04L 67/1002 20130101; H04L 41/0816 20130101; H04L 43/0811 20130101; H04L 43/106 20130101; H04L 45/507 20130101; H04J 3/0661 20130101; H04L 63/1408 20130101; H04L 63/16 20130101; H04W 84/18 20130101; G06F 16/248 20190101; H04L 47/20 20130101; H04L 47/2483 20130101; H04L 69/22 20130101; G06F 16/1748 20190101; G06F 2009/45587 20130101; H04L 43/0805 20130101; G06F 16/137 20190101; H04L 43/0864 20130101; H04L 43/16 20130101; H04L 47/2441 20130101; H04L 41/046 20130101; G06F 2221/2115 20130101; H04L 63/0263 20130101; H04L 67/36 20130101; H04L 63/1416 20130101; G06F 16/2322 20190101; G06F 2009/4557 20130101; H04L 9/0866 20130101; H04L 41/0806 20130101; H04L 43/0876 20130101; H04L 41/22 20130101; H04L 69/16 20130101; H04L 43/0841 20130101; G06F 16/122 20190101; H04L 47/28 20130101; H04J 3/14 20130101; G06F 21/53 20130101; G06F 2221/2105 20130101; H04L 41/12 20130101; H04L 67/10 20130101; G06T 11/206 20130101; H04L 47/32 20130101; H04L 63/1425 20130101; H04L 67/16 20130101; G06F 16/17 20190101; G06F 2221/2145 20130101; H04L 67/42 20130101; G06F 16/162 20190101; G06F 16/1744 20190101; H04L 9/3242 20130101; G06F 3/04847 20130101; G06F 16/24578 20190101; G06F 2221/033 20130101; H04L 63/0876 20130101; G06F 16/9535 20190101; H04L 47/11 20130101; H04L 63/1433 20130101; G06F 16/285 20190101; G06F 16/288 20190101; H04L 41/16 20130101; H04W 72/08 20130101; H04L 43/0829 20130101; H04L 43/10 20130101; H04L 45/306 20130101; H04L 63/145 20130101; H04L 63/20 20130101; G06F 2009/45591 20130101; H04L 45/38 20130101; G06F 3/04842 20130101; H04L 43/04 20130101; H04L 63/1458 20130101; G06F 21/552 20130101; H04L 43/0858 20130101; H04L 43/0882 20130101; H04L 43/0888 20130101; H04L 63/0227 20130101; H04L 63/1466 20130101; G06F 16/2365 20190101; G06F 16/29 20190101; G06F 2221/2101 20130101; H04L 63/06 20130101
International Class: G06F 3/0484 20060101 G06F003/0484; G06F 17/30 20060101 G06F017/30; G06F 3/0482 20060101 G06F003/0482; G06T 11/20 20060101 G06T011/20

Claims



1. A method for visualizing a multi-tier application comprising: representing the multi-tier application in a dynamic graph as a limited number of logical entities, at least one logical entity being a first cluster of additional logical entities; receiving an input effective to explode the first cluster of logical entities into a plurality of additional logical entities, at least one of the additional logical entities being a second cluster of logical entities; and rearranging the dynamic graph to accommodate the plurality of additional logical entities.

2. The method of claim 1, comprising: selecting the limited number of logical entities to represent the multi-tier application based on query criteria, and arranging the limited number of logical entities based on the query criteria.

3. The method of claim 1, comprising: representing logical entities being a cluster of additional logical entities in a first representation, while representing logical entities that have been fully exploded to reveal all dependencies in a second representation.

4. The method of claim 1, wherein logical entities that are partially exploded such that some dependencies are revealed while other dependencies are clustered are represented in the first representation.

5. The method of claim 1, comprising: anchoring a logical entity in response to receiving an input, wherein when the dynamic graph is rearranged, the logical entity remains anchored in place.

6. The method of claim 1, comprising: receiving a selection of one of the plurality of logical entities; and presenting a description of the logical entity.

7. The method of claim 5, wherein the rearranging the dynamic graph to accommodate the plurality of additional logical entities includes shifting the existing logical entities to the left, except the anchored logical entity, and introducing the additional logical entities to the right.

8. The method of claim 1, comprising: hiding a logical entity in response to a received input.

9. A system for arranging a graph representing a multi-tier application, the system comprising: a processor; and a non-transitory computer readable medium storing processor executable instructions, the instructions effective to cause the processor to: represent the multi-tier application in a dynamic graph as a limited number of nodes representing logical entities, at least one node being a first cluster of additional logical entities; receive an input effective to explode the first cluster of logical entities into a plurality of additional nodes representing logical entities, at least one of the additional nodes being a second cluster of logical entities; and rearrange the dynamic graph to accommodate the plurality of additional nodes representing logical entities.

10. The system of claim 9, wherein the instructions are effective to: select the nodes representing logical entities to represent the multi-tier application based on query criteria, and arrange the nodes representing logical entities based on the query criteria.

11. The system of claim 9, wherein the instructions are effective to: represent nodes being a cluster of additional logical entities in a first representation, while representing nodes that have been fully exploded to reveal all dependencies in a second representation.

12. The system of claim 9, wherein nodes that are partially exploded such that some dependencies are revealed while other dependencies are clustered are represented in the first representation.

13. The system of claim 9, wherein the instructions are effective to: anchor a node in response to receiving an input, wherein when the dynamic graph is rearranged, the node remains anchored in place.

14. The system of claim 9, wherein the instructions are effective to: receive a selection of one of the nodes; and present a description of the logical entity represented by the node.

15. A non-transitory computer readable medium comprising instructions stored thereon, the instructions effective to cause a processor to: represent a multi-tier application in a dynamic graph as a limited number of nodes representing logical entities, at least one node being a first cluster of additional logical entities; receive an input effective to explode the first cluster of logical entities into a plurality of additional nodes representing logical entities, at least one of the additional nodes being a second cluster of logical entities; and rearrange the dynamic graph to accommodate the plurality of additional nodes representing logical entities.

16. The non-transitory computer readable medium of claim 15, wherein the instructions are effective to: select the nodes representing logical entities to represent the multi-tier application based on query criteria, and arrange the nodes representing logical entities based on the query criteria.

17. The non-transitory computer readable medium of claim 15, wherein the instructions are effective to: represent nodes being a cluster of additional logical entities in a first representation, while representing nodes that have been fully exploded to reveal all dependencies in a second representation.

18. The non-transitory computer readable medium of claim 15, wherein nodes that are partially exploded such that some dependencies are revealed while other dependencies are clustered are represented in the first representation.

19. The non-transitory computer readable medium of claim 15, wherein the instructions are effective to: anchor a node in response to receiving an input, wherein when the dynamic graph is rearranged, the node remains anchored in place.

20. The non-transitory computer readable medium of claim 15, wherein the instructions are effective to: receive an input effective to explode the first cluster of logical entities into a plurality of additional nodes representing logical entities, at least one of the additional nodes being a second cluster of logical entities, wherein the expanded additional nodes are presented to give an illusion of a tree structure.
Description



RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Application No. 62/171,899, entitled "SYSTEM FOR MONITORING AND MANAGING DATACENTERS," filed Jun. 5, 2015, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The present technology pertains to the visualization of a multi-tier application, and more specifically to the customization of the visualization of the multi-tier application.

BACKGROUND

[0003] Datacenters can include a large number of servers and virtual machines, and consequently a large number of data flows between those servers and virtual machines. Monitoring and managing a datacenter's network can be cumbersome, especially for a datacenter with a large number of servers, virtual machines, and data flows. Visualizing the network can help network operators manage and monitor it. However, because of the large number of data flows, visualizing those flows can itself be very cumbersome.

[0004] Multi-tier applications can be very complex. Thousands of logical entities could potentially be responsible for the performance of an application. Given this complexity, a multi-tier application cannot easily be viewed in a conventional dependency graph. Prior art solutions too often attempt to display an entire network or application structure in a single chart that can be zoomed in or out (see, e.g., U.S. Pat. No. 9,246,773, issued on Jan. 26, 2016). Such solutions rely heavily on algorithms to lay out the graph. However, given the complexity of multi-tier applications, conventional systems are not well suited to allowing a system administrator to explore such graphs.

BRIEF DESCRIPTION OF THE FIGURES

[0005] In order to describe the manner in which the above-recited and other advantages and attributes of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only example embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:

[0006] FIG. 1 illustrates an example of a network traffic monitoring system in accordance with some embodiments;

[0007] FIG. 2 illustrates an example of a network environment in accordance with some embodiments;

[0008] FIG. 3 illustrates an example of a data pipeline for determining clusters in an application dependency map in accordance with some embodiments;

[0009] FIG. 4 illustrates an example method for creating and interacting with a dynamic graph in accordance with some embodiments;

[0010] FIGS. 5A and 5B illustrate example graphs in accordance with some embodiments; and

[0011] FIG. 6 illustrates an example system embodiment.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview

[0012] The present technology is directed to providing visibility into the data flows of a multi-tier application, helping network teams understand and develop the application's dataflow. The technology is directed to an application dependency map visualized in a collapsible chart. The collapsible chart displays the policies and relationships between the logical entities that carry a multi-tier application, and the collapsible multi-tier application UI displays the application's data flows. Because such charts are large and complex, the present technology avoids displaying the entire topology of a multi-tier application, focusing instead on the dependency relationships of interest.

[0013] The present technology is directed to an application dependency map visualized in a collapsible dynamic graph. The dynamic graph displays the policies and relationships between each logical entity of a multi-tier application. A graph user interface displays the dependencies between each logical entity of an application, and the interface allows a user to expand or collapse a node representing a logical entity. Nodes can be moved and anchored within the interface to allow a user to explore the graph and find a desired view of the multi-tier application.

[0014] Edges connecting the nodes represent policies between the nodes.
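
As a rough illustration of the expand/collapse and anchoring behavior described in the two paragraphs above, a minimal sketch follows; the class names, method names, and layout stub are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A logical entity in the dynamic graph; may itself be a cluster."""
    name: str
    children: list = field(default_factory=list)  # nested logical entities
    expanded: bool = False
    anchored: bool = False  # anchored nodes keep their position on re-layout

class DynamicGraph:
    def __init__(self, roots):
        self.visible = list(roots)  # the limited set of nodes shown initially

    def explode(self, node):
        """Replace a visible cluster node with its children, then re-layout."""
        if node.children and not node.expanded:
            node.expanded = True
            i = self.visible.index(node)
            self.visible[i:i + 1] = node.children
            self.relayout()

    def collapse(self, node):
        """Fold an exploded node's children back into the single cluster node."""
        if node.expanded:
            node.expanded = False
            self.visible = [n for n in self.visible if n not in node.children]
            self.visible.append(node)
            self.relayout()

    def relayout(self):
        """Rearrange the graph; anchored nodes stay where the user pinned them."""
        for n in self.visible:
            if not n.anchored:
                pass  # a real implementation would run its layout algorithm here

# A web tier whose two servers are initially collapsed into one cluster node.
web = Node("web-tier", children=[Node("web1"), Node("web2")])
g = DynamicGraph([web, Node("db")])
g.explode(web)
print([n.name for n in g.visible])  # ['web1', 'web2', 'db']
```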

[0015] The graph is created using data gathered from a data collection and analytics layer. Data used and made visible in the collapsible tree flow chart includes, e.g., data flows from one logical entity to another; the policies that govern those data flows; and the host, host group, and subnet each data flow came from.

[0016] The UI is customizable. A user can select elements to adjust subnet groupings and cluster groupings. Additionally, the user can upload side information. Examples of side information are DNS names, host names, etc.

Description

[0017] Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.

[0018] The disclosed technology is directed to the visualization of data flows within a datacenter, and specifically to the generation and presentation of a parallel coordinate chart representing analyzed data that describes those flows. A parallel coordinate chart can display a large number of data flows in a single chart: each flow is drawn as a line intersecting a plurality of parallel axes, each axis representing one attribute of the flow. Representing flows in this manner makes it possible to identify outlier flows for further investigation.
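
For illustration, a parallel coordinate chart of the kind described can be drawn with off-the-shelf tooling; this sketch uses pandas' parallel_coordinates helper with made-up flow attributes (the patent does not prescribe any particular library):

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Each row is one flow; each numeric column becomes one of the parallel axes.
flows = pd.DataFrame({
    "packets": [10, 12, 11, 9000],
    "bytes": [1_000, 1_100, 950, 9_000_000],
    "duration_s": [0.5, 0.6, 0.4, 30.0],
    "label": ["normal", "normal", "normal", "outlier"],
})

# Every flow is drawn as one polyline crossing the three attribute axes;
# the outlier's line visibly diverges from the bundle of normal flows.
parallel_coordinates(flows, class_column="label")
plt.show()
```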

[0019] Referring now to the drawings, FIG. 1 is an illustration of a network traffic monitoring system 100 in accordance with an embodiment. The network traffic monitoring system 100 can include a configuration manager 102, sensors 104, a collector module 106, a data mover module 108, an analytics engine 110, and a presentation module 112. In FIG. 1, the analytics engine 110 is also shown in communication with out-of-band data sources 114, third party data sources 116, and a network controller 118.

[0020] The configuration manager 102 can be used to provision and maintain the sensors 104, including installing sensor software or firmware in various nodes of a network, configuring the sensors 104, updating the sensor software or firmware, among other sensor management tasks. For example, the sensors 104 can be implemented as virtual partition images (e.g., virtual machine (VM) images or container images), and the configuration manager 102 can distribute the images to host machines. In general, a virtual partition may be an instance of a VM, container, sandbox, or other isolated software environment. The software environment may include an operating system and application software. For software running within a virtual partition, the virtual partition may appear to be, for example, one of many servers or one of many operating systems executed on a single physical server. The configuration manager 102 can instantiate a new virtual partition or migrate an existing partition to a different physical server. The configuration manager 102 can also be used to configure the new or migrated sensor.

[0021] The configuration manager 102 can monitor the health of the sensors 104. For example, the configuration manager 102 may request status updates and/or receive heartbeat messages, initiate performance tests, generate health checks, and perform other health monitoring tasks. In some embodiments, the configuration manager 102 can also authenticate the sensors 104. For instance, the sensors 104 can be assigned a unique identifier, such as by using a one-way hash function of a sensor's basic input/output system (BIOS) universally unique identifier (UUID) and a secret key stored by the configuration manager 102. The UUID can be a large number that may be difficult for a malicious sensor or other device or component to guess. In some embodiments, the configuration manager 102 can keep the sensors 104 up to date by installing the latest versions of sensor software and/or applying patches. The configuration manager 102 can obtain these updates automatically from a local source or the Internet.
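
A sensor identifier derived this way might be computed as in the following sketch; the patent specifies only a one-way hash over the BIOS UUID and a secret key, so the choice of HMAC-SHA256 and all names here are assumptions:

```python
import hashlib
import hmac
import uuid

def sensor_id(bios_uuid: uuid.UUID, secret_key: bytes) -> str:
    """Derive a hard-to-guess sensor identifier from the BIOS UUID and a secret."""
    return hmac.new(secret_key, bios_uuid.bytes, hashlib.sha256).hexdigest()

# The configuration manager, which holds the secret, can recompute this value
# to authenticate a sensor that presents its identifier.
print(sensor_id(uuid.UUID("12345678-1234-5678-1234-567812345678"), b"deployment-secret"))
```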

[0022] The sensors 104 can reside on various nodes of a network, such as a virtual partition (e.g., VM or container) 120; a hypervisor or shared kernel managing one or more virtual partitions and/or physical servers 122; an application-specific integrated circuit (ASIC) 124 of a switch, router, gateway, or other networking device; a packet capture (pcap) 126 appliance (e.g., a standalone packet monitor, a device connected to a network device's monitoring port, a device connected in series along a main trunk of a datacenter, or similar device); or another element of a network. The sensors 104 can monitor network traffic between nodes, and send network traffic data and corresponding data (e.g., host data, process data, user data, etc.) to the collectors 106 for storage. For example, the sensors 104 can sniff packets being sent over their hosts' physical or virtual network interface cards (NICs), or individual processes can be configured to report network traffic and corresponding data to the sensors 104. Incorporating the sensors 104 on multiple nodes and within multiple partitions of some nodes of the network can provide for robust capture of network traffic and corresponding data from each hop of data transmission. In some embodiments, each node of the network (e.g., VM, container, or other virtual partition 120; hypervisor, shared kernel, or physical server 122; ASIC 124; pcap 126; etc.) includes a respective sensor 104. However, it should be understood that various software and hardware configurations can be used to implement the sensor network 104.

[0023] As the sensors 104 capture communications and corresponding data, they may continuously send network traffic data to the collectors 106. The network traffic data can include metadata relating to a packet, a collection of packets, a flow, a bidirectional flow, a group of flows, a session, or a network communication of another granularity. That is, the network traffic data can generally include any information describing communication on all layers of the Open Systems Interconnection (OSI) model. For example, the network traffic data can include source/destination MAC address, source/destination IP address, protocol, port number, etc. In some embodiments, the network traffic data can also include summaries of network activity or other network statistics such as number of packets, number of bytes, number of flows, bandwidth usage, response time, latency, packet loss, jitter, and other network statistics.

[0024] The sensors 104 can also determine additional data for each session, bidirectional flow, flow, packet, or other more granular or less granular network communication. The additional data can include host and/or endpoint information, virtual partition information, sensor information, process information, user information, tenant information, application information, network topology, application dependency mapping, cluster information, or other information corresponding to each flow.

[0025] In some embodiments, the sensors 104 can perform some preprocessing of the network traffic and corresponding data before sending the data to the collectors 106. For example, the sensors 104 can remove extraneous or duplicative data or they can create summaries of the data (e.g., latency, number of packets per flow, number of bytes per flow, number of flows, etc.). In some embodiments, the sensors 104 can be configured to only capture certain types of network information and disregard the rest. In some embodiments, the sensors 104 can be configured to capture only a representative sample of packets (e.g., every 1,000th packet or other suitable sample rate) and corresponding data.
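
A minimal sketch of the sampling and summarization behavior described above; every value and name here is illustrative, since the patent mentions only summaries and a representative sample such as every 1,000th packet:

```python
from collections import defaultdict

SAMPLE_RATE = 1000  # forward every 1,000th packet (or another suitable rate)

class SamplingSensor:
    def __init__(self):
        self.count = 0
        # Per-flow running summaries (latency, flow counts, etc. would be similar).
        self.flow_summary = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def observe(self, flow_key, packet_len):
        """Summarize every packet, but select only a representative sample."""
        self.count += 1
        summary = self.flow_summary[flow_key]
        summary["packets"] += 1
        summary["bytes"] += packet_len
        return self.count % SAMPLE_RATE == 0  # True: report this packet in full

sensor = SamplingSensor()
sampled = sum(sensor.observe(("10.0.0.1", "10.0.0.2", 80), 1500) for _ in range(5000))
print(sampled, dict(sensor.flow_summary))  # 5 sampled packets; full flow summary
```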

[0026] Since the sensors 104 may be located throughout the network, network traffic and corresponding data can be collected from multiple vantage points or multiple perspectives in the network to provide a more comprehensive view of network behavior. The capture of network traffic and corresponding data from multiple perspectives, rather than just at a single sensor located in the data path or in communication with a component in the data path, allows the data to be correlated from the various data sources, which may be used as additional data points by the analytics engine 110. Further, collecting network traffic and corresponding data from multiple points of view ensures more accurate data is captured. For example, a conventional sensor network may be limited to sensors running on external-facing network devices (e.g., routers, switches, network appliances, etc.) such that east-west traffic, including VM-to-VM or container-to-container traffic on a same host, may not be monitored. In addition, packets that are dropped before traversing a network device or packets containing errors may not be accurately monitored by the conventional sensor network. The sensor network 104 of various embodiments substantially mitigates or eliminates these issues altogether by locating sensors at multiple points of potential failure. Moreover, the network traffic monitoring system 100 can verify multiple instances of data for a flow (e.g., source endpoint flow data, network device flow data, and endpoint flow data) against one another.

[0027] In some embodiments, the network traffic monitoring system 100 can assess a degree of accuracy of flow data sets from multiple sensors and utilize a flow data set from a single sensor determined to be the most accurate and/or complete. The degree of accuracy can be based on factors such as network topology (e.g., a sensor closer to the source may be more likely to be more accurate than a sensor closer to the destination), a state of a sensor or a node hosting the sensor (e.g., a compromised sensor/node may have less accurate flow data than an uncompromised sensor/node), or flow data volume (e.g., a sensor capturing a greater number of packets for a flow may be more accurate than a sensor capturing a smaller number of packets).

[0028] In some embodiments, the network traffic monitoring system 100 can assemble the most accurate flow data set and corresponding data from multiple sensors. For instance, a first sensor along a data path may capture data for a first packet of a flow but may be missing data for a second packet of the flow while the situation is reversed for a second sensor along the data path. The network traffic monitoring system 100 can assemble data for the flow from the first packet captured by the first sensor and the second packet captured by the second sensor.
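
The assembly step might look like the following sketch, which merges per-packet records from several sensors keyed by sequence number; the function and data shapes are hypothetical:

```python
def assemble_flow(*captures):
    """Merge per-packet records from several sensors into one flow record set.

    Each capture maps a packet key (here, a TCP sequence number) to the data a
    sensor recorded for that packet; any sensor's record fills gaps left by
    the others.
    """
    merged = {}
    for capture in captures:
        for seq, record in capture.items():
            merged.setdefault(seq, record)
    return [merged[seq] for seq in sorted(merged)]

# The first sensor missed the second packet and vice versa; the assembled
# flow contains data for both packets.
first_sensor = {1000: "packet-1 data"}
second_sensor = {2000: "packet-2 data"}
print(assemble_flow(first_sensor, second_sensor))
```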

[0029] As discussed, the sensors 104 can send network traffic and corresponding data to the collectors 106. In some embodiments, each sensor can be assigned to a primary collector and a secondary collector as part of a high availability scheme. If the primary collector fails or communications between the sensor and the primary collector are not otherwise possible, a sensor can send its network traffic and corresponding data to the secondary collector. In other embodiments, the sensors 104 are not assigned specific collectors but the network traffic monitoring system 100 can determine an optimal collector for receiving the network traffic and corresponding data through a discovery process. In such embodiments, a sensor can change where it sends its network traffic and corresponding data if its environment changes, such as if a default collector fails or if the sensor is migrated to a new location and it would be optimal for the sensor to send its data to a different collector. For example, it may be preferable for the sensor to send its network traffic and corresponding data on a particular path and/or to a particular collector based on latency, shortest path, monetary cost (e.g., using private resources versus public resources provided by a public cloud provider), error rate, or some combination of these factors. In other embodiments, a sensor can send different types of network traffic and corresponding data to different collectors. For example, the sensor can send first network traffic and corresponding data related to one type of process to one collector and second network traffic and corresponding data related to another type of process to another collector.
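
A minimal sketch of the primary/secondary failover scheme described above; the Collector stub and all names are hypothetical:

```python
class Collector:
    def __init__(self, address, healthy=True):
        self.address = address
        self.healthy = healthy

    def send(self, data):
        if not self.healthy:
            raise ConnectionError(f"collector {self.address} unreachable")
        print(f"stored at {self.address}: {data!r}")

def report(data, primary, secondary):
    """Send to the assigned primary collector; fail over to the secondary."""
    try:
        primary.send(data)
    except ConnectionError:
        secondary.send(data)

# The primary is down, so the flow data lands on the secondary collector.
report({"flow": "..."}, Collector("10.0.0.1", healthy=False), Collector("10.0.0.2"))
```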

[0030] The collectors 106 can be any type of storage medium that can serve as a repository for the network traffic and corresponding data captured by the sensors 104. In some embodiments, data storage for the collectors 106 is located in an in-memory database, such as dashDB from IBM®, although it should be appreciated that the data storage for the collectors 106 can be any software and/or hardware capable of providing rapid random access speeds typically used for analytics software. In various embodiments, the collectors 106 can utilize solid state drives, disk drives, magnetic tape drives, or a combination of the foregoing according to cost, responsiveness, and size requirements. Further, the collectors 106 can utilize various database structures such as a normalized relational database or a NoSQL database, among others.

[0031] In some embodiments, the collectors 106 may only serve as network storage for the network traffic monitoring system 100. In such embodiments, the network traffic monitoring system 100 can include a data mover module 108 for retrieving data from the collectors 106 and making the data available to network clients, such as the components of the analytics engine 110. In effect, the data mover module 108 can serve as a gateway for presenting network-attached storage to the network clients. In other embodiments, the collectors 106 can perform additional functions, such as organizing, summarizing, and preprocessing data. For example, the collectors 106 can tabulate how often packets of certain sizes or types are transmitted from different nodes of the network. The collectors 106 can also characterize the traffic flows going to and from various nodes. In some embodiments, the collectors 106 can match packets based on sequence numbers, thus identifying traffic flows and connection links. As it may be inefficient to retain all data indefinitely in certain circumstances, in some embodiments, the collectors 106 can periodically replace detailed network traffic data with consolidated summaries. In this manner, the collectors 106 can retain a complete dataset describing one period (e.g., the past minute or other suitable period of time), with a smaller dataset of another period (e.g., the previous 2-10 minutes or other suitable period of time), and progressively consolidate network traffic and corresponding data of other periods of time (e.g., day, week, month, year, etc.). In some embodiments, network traffic and corresponding data for a set of flows identified as normal or routine can be winnowed at an earlier period of time while a more complete data set may be retained for a lengthier period of time for another set of flows identified as anomalous or as an attack.
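
The progressive consolidation described above could be sketched as follows, with illustrative retention windows (one minute of full detail, ten minutes of summaries); the patent leaves the periods and record format open:

```python
def consolidate(records, now, detail_window=60, summary_window=600):
    """Keep full detail for the last minute; roll older records into a summary.

    records: list of dicts with "ts" (epoch seconds), "packets", and "bytes".
    Coarser daily/weekly/monthly rollups would repeat the same pattern.
    """
    detailed = [r for r in records if now - r["ts"] <= detail_window]
    older = [r for r in records if detail_window < now - r["ts"] <= summary_window]
    summary = {
        "window": (now - summary_window, now - detail_window),
        "packets": sum(r["packets"] for r in older),
        "bytes": sum(r["bytes"] for r in older),
    }
    return detailed, summary

recs = [{"ts": t, "packets": 10, "bytes": 1500} for t in (995, 900, 500)]
print(consolidate(recs, now=1000))  # one detailed record, two consolidated
```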

[0032] Computer networks may be exposed to a variety of different attacks that expose vulnerabilities of computer systems in order to compromise their security. Some network traffic may be associated with malicious programs or devices. The analytics engine 110 may be provided with examples of network states corresponding to an attack and network states corresponding to normal operation. The analytics engine 110 can then analyze network traffic and corresponding data to recognize when the network is under attack. In some embodiments, the network may operate within a trusted environment for a period of time so that the analytics engine 110 can establish a baseline of normal operation. Since malware is constantly evolving and changing, machine learning may be used to dynamically update models for identifying malicious traffic patterns.

[0033] In some embodiments, the analytics engine 110 may be used to identify observations which differ from other examples in a dataset. For example, if a training set of example data with known outlier labels exists, supervised anomaly detection techniques may be used. Supervised anomaly detection techniques utilize data sets that have been labeled as normal and abnormal and train a classifier. In a case in which it is unknown whether examples in the training data are outliers, unsupervised anomaly detection techniques may be used. Unsupervised anomaly detection techniques may be used to detect anomalies in an unlabeled test data set under the assumption that the majority of instances in the data set are normal, by looking for instances that seem to fit least well with the remainder of the data set.
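
As one concrete example of an unsupervised technique of this kind (the patent does not name a specific algorithm), an isolation forest flags the flows that fit least well with the rest of the data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows are flows; columns are numeric flow attributes (packets, bytes, duration).
flows = np.array([[10, 1_000, 0.5],
                  [12, 1_100, 0.6],
                  [11, 950, 0.4],
                  [9_000, 9_000_000, 30.0]])  # the last flow is anomalous

model = IsolationForest(random_state=0).fit(flows)
print(model.predict(flows))  # -1 flags suspected outliers, 1 flags inliers
```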

[0034] The analytics engine 110 can include a data lake 130, an application dependency mapping (ADM) module 140, and elastic processing engines 150. The data lake 130 is a large-scale storage repository that provides massive storage for various types of data, enormous processing power, and the ability to handle nearly limitless concurrent tasks or jobs. In some embodiments, the data lake 130 is implemented using the Hadoop® Distributed File System (HDFS™) from Apache® Software Foundation of Forest Hill, Md. HDFS™ is a highly scalable and distributed file system that can scale to thousands of cluster nodes, millions of files, and petabytes of data. HDFS™ is optimized for batch processing where data locations are exposed to allow computations to take place where the data resides. HDFS™ provides a single namespace for an entire cluster to allow for data coherency in a write-once, read-many access model. That is, clients can only append to existing files in the node. In HDFS™, files are separated into blocks, which are typically 64 MB in size and are replicated in multiple data nodes. Clients access data directly from data nodes.

[0035] In some embodiments, the data mover 108 receives raw network traffic and corresponding data from the collectors 106 and distributes or pushes the data to the data lake 130. The data lake 130 can also receive and store out-of-band data 114, such as statuses on power levels, network availability, server performance, temperature conditions, cage door positions, and other data from internal sources, and third party data 116, such as security reports (e.g., provided by Cisco® Systems, Inc. of San Jose, Calif., Arbor Networks® of Burlington, Mass., Symantec® Corp. of Sunnyvale, Calif., Sophos® Group plc of Abingdon, England, Microsoft® Corp. of Seattle, Wash., Verizon® Communications, Inc. of New York, N.Y., among others), geolocation data, IP watch lists, Whois data, configuration management database (CMDB) or configuration management system (CMS) as a service, and other data from external sources. In other embodiments, the data lake 130 may instead fetch or pull raw traffic and corresponding data from the collectors 106 and relevant data from the out-of-band data sources 114 and the third party data sources 116. In yet other embodiments, the functionality of the collectors 106, the data mover 108, the out-of-band data sources 114, the third party data sources 116, and the data lake 130 can be combined. Various combinations and configurations are possible as would be known to one of ordinary skill in the art.

[0036] Each component of the data lake 130 can perform certain processing of the raw network traffic data and/or other data (e.g., host data, process data, user data, out-of-band data or third party data) to transform the raw data to a form useable by the elastic processing engines 150. In some embodiments, the data lake 130 can include repositories for flow attributes 132, host and/or endpoint attributes 134, process attributes 136, and policy attributes 138. In some embodiments, the data lake 130 can also include repositories for VM or container attributes, application attributes, tenant attributes, network topology, application dependency maps, cluster attributes, etc.

[0037] The flow attributes 132 relate to information about flows traversing the network. A flow is generally one or more packets sharing certain attributes that are sent within a network within a specified period of time. The flow attributes 132 can include packet header fields such as a source address (e.g., Internet Protocol (IP) address, Media Access Control (MAC) address, Domain Name System (DNS) name, or other network address), source port, destination address, destination port, protocol type, class of service, among other fields. The source address may correspond to a first endpoint (e.g., network device, physical server, virtual partition, etc.) of the network, and the destination address may correspond to a second endpoint, a multicast group, or a broadcast domain. The flow attributes 132 can also include aggregate packet data such as flow start time, flow end time, number of packets for a flow, number of bytes for a flow, the union of TCP flags for a flow, among other flow data.
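
The flow attributes 132 might be held in a record like the following sketch; the field set is a plausible subset of the attributes listed above, not a schema from the patent:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Flow attributes of the kind held in repository 132 (fields illustrative)."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str       # e.g. "TCP" or "UDP"
    start_time: float   # flow start, epoch seconds
    end_time: float     # flow end, epoch seconds
    packets: int
    bytes: int
    tcp_flags: int      # union (bitwise OR) of the TCP flags seen on the flow

flow = FlowRecord("10.0.0.5", 49152, "10.0.1.9", 80, "TCP",
                  1.0, 2.5, packets=12, bytes=9_400, tcp_flags=0b00011000)
print(flow.bytes / flow.packets)  # aggregate statistics derive from the record
```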

[0038] The host and/or endpoint attributes 134 describe host and/or endpoint data for each flow, and can include host and/or endpoint name, network address, operating system, CPU usage, network usage, disk space, ports, logged users, scheduled jobs, open files, and information regarding files and/or directories stored on a host and/or endpoint (e.g., presence, absence, or modifications of log files, configuration files, device special files, or protected electronic information). As discussed, in some embodiments, the host and/or endpoints attributes 134 can also include the out-of-band data 114 regarding hosts such as power level, temperature, and physical location (e.g., room, row, rack, cage door position, etc.) or the third party data 116 such as whether a host and/or endpoint is on an IP watch list or otherwise associated with a security threat, Whois data, or geocoordinates. In some embodiments, the out-of-band data 114 and the third party data 116 may be associated by process, user, flow, or other more granular or less granular network element or network communication.

[0039] The process attributes 136 relate to process data corresponding to each flow, and can include process name (e.g., bash, httpd, netstat, etc.), ID, parent process ID, path (e.g., /usr2/username/bin/, /usr/local/bin, /usr/bin, etc.), CPU utilization, memory utilization, memory address, scheduling information, nice value, flags, priority, status, start time, terminal type, CPU time taken by the process, the command that started the process, and information regarding a process owner (e.g., user name, ID, user's real name, e-mail address, user's groups, terminal information, login time, expiration date of login, idle time, and information regarding files and/or directories of the user).

[0040] The policy attributes 138 contain information relating to network policies. Policies establish whether a particular flow is allowed or denied by the network as well as a specific route by which a packet traverses the network. Policies can also be used to mark packets so that certain kinds of traffic receive differentiated service when used in combination with queuing techniques such as those based on priority, fairness, weighted fairness, token bucket, random early detection, round robin, among others. The policy attributes 138 can include policy statistics such as a number of times a policy was enforced or a number of times a policy was not enforced. The policy attributes 138 can also include associations with network traffic data. For example, flows found to be non-conformant can be linked or tagged with corresponding policies to assist in the investigation of non-conformance.

[0041] The analytics engine 110 may include any number of engines 150, including, for example, a flow engine 152 for identifying flows or an attacks engine 154 for identifying attacks on the network. In some embodiments, the analytics engine can include a separate distributed denial of service (DDoS) attack engine 155 for specifically detecting DDoS attacks. In other embodiments, a DDoS attack engine may be a component or a sub-engine of a general attacks engine. In some embodiments, the attacks engine 154 and/or the DDoS engine 155 can use machine learning techniques to identify security threats to a network. For example, the attacks engine 154 and/or the DDoS engine 155 can be provided with examples of network states corresponding to an attack and network states corresponding to normal operation. The attacks engine 154 and/or the DDoS engine 155 can then analyze network traffic data to recognize when the network is under attack. In some embodiments, the network can operate within a trusted environment for a time to establish a baseline for normal network operation for the attacks engine 154 and/or the DDoS engine 155.

[0042] The analytics engine 110 may further include a search engine 156. The search engine 156 may be configured, for example, to perform a structured search, an NLP (Natural Language Processing) search, or a visual search. Data may be provided to the engines from one or more processing components.

[0043] The analytics engine 110 can also include a policy engine 158 that manages network policy, including creating and/or importing policies, monitoring policy conformance and non-conformance, enforcing policy, simulating changes to policy or network elements affecting policy, among other policy-related tasks.

[0044] The ADM module 140 can determine dependencies of applications of the network. That is, particular patterns of traffic may correspond to an application, and the interconnectivity or dependencies of the application can be mapped to generate a graph for the application (i.e., an application dependency mapping). In this context, an application refers to a set of networking components that provides connectivity for a given set of workloads. For example, in a conventional three-tier architecture for a web application, first endpoints of the web tier, second endpoints of the application tier, and third endpoints of the data tier make up the web application. The ADM module 140 can receive input data from various repositories of the data lake 130 (e.g., the flow attributes 132, the host and/or endpoint attributes 134, the process attributes 136, etc.). The ADM module 140 may analyze the input data to determine that there is first traffic flowing between external endpoints on port 80 of the first endpoints corresponding to Hypertext Transfer Protocol (HTTP) requests and responses. The input data may also indicate second traffic between first ports of the first endpoints and second ports of the second endpoints corresponding to application server requests and responses and third traffic flowing between third ports of the second endpoints and fourth ports of the third endpoints corresponding to database requests and responses. The ADM module 140 may define an ADM for the web application as a three-tier application including a first EPG comprising the first endpoints, a second EPG comprising the second endpoints, and a third EPG comprising the third endpoints.
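
A toy version of this tier inference might group endpoints into candidate EPGs by the server port they answer on, as in the sketch below; real ADM runs use far richer attributes, and all names here are illustrative:

```python
from collections import defaultdict

# Observed flows as (source endpoint, destination endpoint, destination port).
flows = [("client", "web1", 80), ("client", "web2", 80),
         ("web1", "app1", 8080), ("web2", "app1", 8080),
         ("app1", "db1", 5432)]

# Endpoints that answer on the same server port form a candidate EPG/tier.
tiers = defaultdict(set)
for _src, dst, port in flows:
    tiers[port].add(dst)

for port, endpoints in sorted(tiers.items()):
    print(f"port {port}: EPG {sorted(endpoints)}")
# port 80:   EPG ['web1', 'web2']  (web tier)
# port 5432: EPG ['db1']           (data tier)
# port 8080: EPG ['app1']          (application tier)
```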

[0045] The presentation module 112 can include an application programming interface (API) or command line interface (CLI) 160, a security information and event management (SIEM) interface 162, and a web front-end 164. As the analytics engine 110 processes network traffic and corresponding data and generates analytics data, the analytics data may not be in a human-readable form or it may be too voluminous for a user to navigate. The presentation module 112 can take the analytics data generated by the analytics engine 110 and further summarize, filter, and organize the analytics data as well as create intuitive presentations for the analytics data.

[0046] In some embodiments, the API or CLI 160 can be implemented using Hadoop® Hive from Apache® for the back end, and Java® Database Connectivity (JDBC) from Oracle® Corporation of Redwood Shores, Calif., as an API layer. Hive is a data warehouse infrastructure that provides data summarization and ad hoc querying. Hive provides a mechanism to query data using a variation of structured query language (SQL) that is called HiveQL. JDBC is an application programming interface (API) for the programming language Java®, which defines how a client may access a database.

[0047] In some embodiments, the SIEM interface 162 can be implemented using Hadoop® Kafka for the back end, and software provided by Splunk®, Inc. of San Francisco, Calif. as the SIEM platform. Kafka is a distributed messaging system that is partitioned and replicated. Kafka uses the concept of topics. Topics are feeds of messages in specific categories. In some embodiments, Kafka can take raw packet captures and telemetry information from the data mover 108 as input, and output messages to a SIEM platform, such as Splunk®. The Splunk® platform is utilized for searching, monitoring, and analyzing machine-generated data.

[0048] In some embodiments, the web front-end 164 can be implemented using software provided by MongoDB®, Inc. of New York, N.Y. and Hadoop® ElasticSearch from Apache® for the back end, and Ruby on Rails™ as the web application framework. MongoDB® is a document-oriented NoSQL database based on documents in the form of JavaScript® Object Notation (JSON) with dynamic schemas. ElasticSearch is a scalable and real-time search and analytics engine that provides domain-specific language (DSL) full querying based on JSON. Ruby on Rails™ is a model-view-controller (MVC) framework that provides default structures for a database, a web service, and web pages. Ruby on Rails™ relies on web standards such as JSON or extensible markup language (XML) for data transfer, and hypertext markup language (HTML), cascading style sheets (CSS), and JavaScript® for display and user interfacing.

[0049] Although FIG. 1 illustrates an example configuration of the various components of a network traffic monitoring system, those of skill in the art will understand that the components of the network traffic monitoring system 100 or any system described herein can be configured in a number of different ways and can include any other type and number of components. For example, the sensors 104, the collectors 106, the data mover 108, and the data lake 130 can belong to one hardware and/or software module or multiple separate modules. Other modules can also be combined into fewer components and/or further divided into more components.

[0050] FIG. 2 illustrates an example of a network environment 200 in accordance with an embodiment. In some embodiments, a network traffic monitoring system, such as the network traffic monitoring system 100 of FIG. 1, can be implemented in the network environment 200. It should be understood that, for the network environment 200 and any environment discussed herein, there can be additional or fewer nodes, devices, links, networks, or components in similar or alternative configurations. Embodiments with different numbers and/or types of clients, networks, nodes, cloud components, servers, software components, devices, virtual or physical resources, configurations, topologies, services, appliances, deployments, or network devices are also contemplated herein. Further, the network environment 200 can include any number or type of resources, which can be accessed and utilized by clients or tenants. The illustrations and examples provided herein are for clarity and simplicity.

[0051] The network environment 200 can include a network fabric 202, a Layer 2 (L2) network 204, a Layer 3 (L3) network 206, and servers 208a, 208b, 208c, 208d, and 208e (collectively, 208). The network fabric 202 can include spine switches 210a, 210b, 210c, and 210d (collectively, "210") and leaf switches 212a, 212b, 212c, 212d, and 212e (collectively, "212"). The spine switches 210 can connect to the leaf switches 212 in the network fabric 202. The leaf switches 212 can include access ports (or non-fabric ports) and fabric ports. The fabric ports can provide uplinks to the spine switches 210, while the access ports can provide connectivity to endpoints (e.g., the servers 208), internal networks (e.g., the L2 network 204), or external networks (e.g., the L3 network 206).

[0052] The leaf switches 212 can reside at the edge of the network fabric 202, and can thus represent the physical network edge. For instance, in some embodiments, the leaf switches 212d and 212e operate as border leaf switches in communication with edge devices 214 located in the external network 206. The border leaf switches 212d and 212e may be used to connect any type of external network device, service (e.g., firewall, deep packet inspector, traffic monitor, load balancer, etc.), or network (e.g., the L3 network 206) to the fabric 202.

[0053] Although the network fabric 202 is illustrated and described herein as an example leaf-spine architecture, one of ordinary skill in the art will readily recognize that various embodiments can be implemented based on any network topology, including any data center or cloud network fabric. Indeed, other architectures, designs, infrastructures, and variations are contemplated herein. For example, the principles disclosed herein are applicable to topologies including three-tier (including core, aggregation, and access levels), fat tree, mesh, bus, hub and spoke, etc. Thus, in some embodiments, the leaf switches 212 can be top-of-rack switches configured according to a top-of-rack architecture. In other embodiments, the leaf switches 212 can be aggregation switches in any particular topology, such as end-of-row or middle-of-row topologies. In some embodiments, the leaf switches 212 can also be implemented using aggregation switches.

[0054] Moreover, the topology illustrated in FIG. 2 and described herein is readily scalable and may accommodate a large number of components, as well as more complicated arrangements and configurations. For example, the network may include any number of fabrics 202, which may be geographically dispersed or located in the same geographic area. Thus, network nodes may be used in any suitable network topology, which may include any number of servers, virtual machines or containers, switches, routers, appliances, controllers, gateways, or other nodes interconnected to form a large and complex network. Nodes may be coupled to other nodes or networks through one or more interfaces employing any suitable wired or wireless connection, which provides a viable pathway for electronic communications.

[0055] Network communications in the network fabric 202 can flow through the leaf switches 212. In some embodiments, the leaf switches 212 can provide endpoints (e.g., the servers 208), internal networks (e.g., the L2 network 204), or external networks (e.g., the L3 network 206) access to the network fabric 202, and can connect the leaf switches 212 to each other. In some embodiments, the leaf switches 212 can connect endpoint groups (EPGs) to the network fabric 202, internal networks (e.g., the L2 network 204), and/or any external networks (e.g., the L3 network 206). EPGs are groupings of applications, or application components, and tiers for implementing forwarding and policy logic. EPGs can allow for separation of network policy, security, and forwarding from addressing by using logical application boundaries. EPGs can be used in the network environment 200 for mapping applications in the network. For example, EPGs can comprise a grouping of endpoints in the network indicating connectivity and policy for applications.

[0056] As discussed, the servers 208 can connect to the network fabric 202 via the leaf switches 212. For example, the servers 208a and 208b can connect directly to the leaf switches 212a and 212b, which can connect the servers 208a and 208b to the network fabric 202 and/or any of the other leaf switches. The servers 208c and 208d can connect to the leaf switches 212b and 212c via the L2 network 204. The servers 208c and 208d and the L2 network 204 make up a local area network (LAN). LANs can connect nodes over dedicated private communications links located in the same general physical location, such as a building or campus.

[0057] A WAN can connect to the leaf switches 212d or 212e via the L3 network 206. WANs can connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical light paths, synchronous optical networks (SONET), or synchronous digital hierarchy (SDH) links. LANs and WANs can include L2 and/or L3 networks and endpoints.

[0058] The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. The nodes typically communicate over the network by exchanging discrete frames or packets of data according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). In this context, a protocol can refer to a set of rules defining how the nodes interact with each other. Computer networks may be further interconnected by an intermediate network node, such as a router, to extend the effective size of each network. The endpoints 208 can include any communication device or component, such as a computer, server, blade, hypervisor, virtual machine, container, process (e.g., running on a virtual machine), switch, router, gateway, host, device, external network, etc.

[0059] In some embodiments, the network environment 200 also includes a network controller running on the host 208a. The network controller is implemented using the Application Policy Infrastructure Controller (APIC™) from Cisco®. The APIC™ provides a centralized point of automation and management, policy programming, application deployment, and health monitoring for the fabric 202. In some embodiments, the APIC™ is operated as a replicated synchronized clustered controller. In other embodiments, other configurations or software-defined networking (SDN) platforms can be utilized for managing the fabric 202.

[0060] In some embodiments, a physical server 208 may have instantiated thereon a hypervisor 216 for creating and running one or more virtual switches (not shown) and one or more virtual machines 218, as shown for the host 208b. In other embodiments, physical servers may run a shared kernel for hosting containers. In yet other embodiments, the physical server 208 can run other software for supporting other virtual partitioning approaches. Networks in accordance with various embodiments may include any number of physical servers hosting any number of virtual machines, containers, or other virtual partitions. Hosts may also comprise blade/physical servers without virtual machines, containers, or other virtual partitions, such as the servers 208a, 208c, 208d, and 208e.

[0061] The network environment 200 can also integrate a network traffic monitoring system, such as the network traffic monitoring system 100 shown in FIG. 1. For example, the network traffic monitoring system of FIG. 2 includes sensors 220a, 220b, 220c, and 220d (collectively, "220"), collectors 222, and an analytics engine, such as the analytics engine 110 of FIG. 1, executing on the server 208e. The analytics engine 208e can receive and process network traffic data collected by the collectors 222 and detected by the sensors 220 placed on nodes located throughout the network environment 200. Although the analytics engine 208e is shown to be a standalone network appliance in FIG. 2, it will be appreciated that the analytics engine 208e can also be implemented as a virtual partition (e.g., VM or container) that can be distributed onto a host or cluster of hosts, software as a service (SaaS), or other suitable method of distribution. In some embodiments, the sensors 220 run on the leaf switches 212 (e.g., the sensor 220a), the hosts 208 (e.g., the sensor 220b), the hypervisor 216 (e.g., the sensor 220c), and the VMs 218 (e.g., the sensor 220d). In other embodiments, the sensors 220 can also run on the spine switches 210, virtual switches, service appliances (e.g., firewall, deep packet inspector, traffic monitor, load balancer, etc.) and in between network elements. In some embodiments, sensors 220 can be located at each (or nearly every) network component to capture granular packet statistics and data at each hop of data transmission. In other embodiments, the sensors 220 may not be installed in all components or portions of the network (e.g., shared hosting environment in which customers have exclusive control of some virtual machines).

[0062] As shown in FIG. 2, a host may include multiple sensors 220 running on the host (e.g., the host sensor 220b) and various components of the host (e.g., the hypervisor sensor 220c and the VM sensor 220d) so that all (or substantially all) packets traversing the network environment 200 may be monitored. For example, if one of the VMs 218 running on the host 208b receives a first packet from the WAN 206, the first packet may pass through the border leaf switch 212d, the spine switch 210b, the leaf switch 212b, the host 208b, the hypervisor 216, and the VM. Since all or nearly all of these components contain a respective sensor, the first packet will likely be identified and reported to one of the collectors 222. As another example, if a second packet is transmitted from one of the VMs 218 running on the host 208b to the host 208d, sensors installed along the data path, such as at the VM 218, the hypervisor 216, the host 208b, the leaf switch 212b, and the host 208d will likely result in capture of metadata from the second packet.

[0063] FIG. 3 illustrates an example of a data pipeline 300 for determining clusters in an application dependency map in accordance with an example embodiment. In some embodiments, the data pipeline 300 can be directed by a network traffic monitoring system, such as the network traffic monitoring system 100 of FIG. 1; an analytics engine, such as the analytics engine 110 of FIG. 1; an application dependency mapping module, such as the ADM module 140 of FIG. 1; or other network service or network appliance. The data pipeline 300 includes a data collection stage 302 in which network traffic data and corresponding data (e.g., host data, process data, user data, etc.) are captured by sensors (e.g., the sensors 104 of FIG. 1) located throughout the network. The data may comprise, for example, raw flow data and raw process data. As discussed, the data can be captured from multiple perspectives to provide a comprehensive view of the network. The data collected may also include other types of information, such as tenant information, virtual partition information, out-of-band information, third party information, and other relevant information. In some embodiments, the flow data and associated data can be aggregated and summarized daily or according to another suitable increment of time, and flow vectors, process vectors, host vectors, and other attribute vectors can be calculated during the data collection stage 302. This can substantially reduce processing during an ADM run.

[0064] The data pipeline 300 also includes an ADM input data stage 304 in which a network or security administrator or other authorized user may configure an ADM run by selecting the date range of the flow data and associated data to analyze, and those nodes for which the administrator wants application dependency maps and/or cluster information. In some embodiments, the administrator can also input side information, such as server load balancers, route tags, and previously identified clusters, during the ADM input data stage 304. In other embodiments, the side information can be pulled automatically, or another network element can push the side information for the ADM run.
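
A hypothetical configuration structure for an ADM run might carry exactly the inputs named above: a date range, the selected nodes, and the side information. The shape and field names here are illustrative assumptions, not the disclosure's format.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AdmRunConfig:
        start: date                                             # first day of flow data
        end: date                                               # last day of flow data
        selected_nodes: set[str] = field(default_factory=set)   # nodes to map/cluster
        load_balancers: list[str] = field(default_factory=list) # side information
        route_tags: dict[str, str] = field(default_factory=dict)
        prior_clusters: list[set[str]] = field(default_factory=list)

    run = AdmRunConfig(start=date(2016, 5, 1), end=date(2016, 5, 31),
                       selected_nodes={"10.0.1.5", "10.0.1.6"})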

[0065] The next stage of the data pipeline 300 is pre-processing 306. During the pre-processing stage 306, nodes of the network are partitioned into selected node and dependency node subsets. Selected nodes are those nodes for which the user requests application dependency maps and cluster information. Dependency nodes are those nodes that are not explicitly selected by the user for an ADM run but that communicate with the selected nodes. To obtain the partitioning information, edges of an application dependency map (i.e., flow data) and unprocessed attribute vectors can be analyzed.
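
A minimal sketch of this partitioning, under the assumption that flow edges are available as (source, destination) pairs: dependency nodes are simply the endpoints that exchange flows with a selected node without having been selected themselves.

    def partition_nodes(selected, flow_edges):
        """selected: set of node ids chosen by the user;
        flow_edges: iterable of (src, dst) pairs from the flow data."""
        dependency = set()
        for src, dst in flow_edges:
            if src in selected and dst not in selected:
                dependency.add(dst)
            elif dst in selected and src not in selected:
                dependency.add(src)
        return selected, dependency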

[0066] Other tasks can also be performed during the pre-processing stage 306, including identifying dependencies of the selected nodes and the dependency nodes; replacing the dependency nodes with tags based on the dependency nodes' subnet names; extracting attribute vectors for the selected nodes, such as by aggregating daily vectors across multiple days, calculating term frequency-inverse document frequency (tf-idf), and normalizing the vectors (e.g., l₂ normalization); and identifying existing clusters.
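
As an illustration of the vector extraction named above, the following sketch applies tf-idf weighting, which down-weights attributes that appear on most nodes, followed by l₂ normalization, which scales each node's vector to unit length so distances compare fairly. The raw-count matrix layout is an assumption.

    import numpy as np

    def tfidf_l2(counts):
        """counts: (n_nodes, n_attrs) matrix of raw attribute counts."""
        counts = np.asarray(counts, dtype=float)
        tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
        df = np.count_nonzero(counts, axis=0)                 # nodes carrying each attr
        idf = np.log(counts.shape[0] / np.maximum(df, 1)) + 1.0
        tfidf = tf * idf
        norms = np.linalg.norm(tfidf, axis=1, keepdims=True)
        return tfidf / np.maximum(norms, 1e-12)               # l2-normalized rows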

[0067] In some embodiments, the pre-processing stage 306 can include early attribute fusion pre-processing. Early fusion is a fusion scheme in which attributes are combined into a single representation. Attributes may be derived from various domains (e.g., network, host, virtual partition, process, user, etc.), and an attribute vector in an early fusion system may represent the concatenation of disparate attribute types or domains.

[0068] Early fusion may be effective for attributes that are similar or have a similar structure (e.g., fields of TCP and UDP packets or flows). Such attributes may be characterized as being of the same type or within the same domain. Early fusion may be less effective for distant attributes or attributes of different types or domains (e.g., flow-based attributes versus process-based attributes). Thus, in some embodiments, only attributes in the network domain (i.e., network traffic-based attributes, such as packet header information, number of packets for a flow, number of bytes for a flow, and similar data) may be analyzed. In other embodiments, an ADM run may limit analysis to attributes in the process domain (i.e., process-based attributes, such as process name, parent process, process owner, etc.). In yet other embodiments, attribute sets in other domains (e.g., the host domain, virtual partition domain, user domain, etc.) may be the focus of the ADM run.
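
A minimal sketch of early fusion as described in the two preceding paragraphs: per-domain vectors for one node are concatenated into a single representation. Restricting the domains argument to a single entry (e.g., ("network",)) corresponds to the single-domain analyses described above. The dictionary layout is an assumption.

    import numpy as np

    def early_fusion(node, domain_vectors, domains=("network", "process", "host")):
        """domain_vectors: dict mapping domain -> {node: 1-D numpy array};
        returns one concatenated early-fusion vector for the node."""
        return np.concatenate([domain_vectors[d][node] for d in domains])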

[0069] After pre-processing, the data pipeline 300 may proceed to a clustering stage 308. In the clustering stage 308, various machine learning techniques can be implemented to analyze attribute vectors within a single domain or across different domains to determine the optimal clustering given a set of input nodes. Machine learning is an area of computer science in which the goal is to develop models using example observations (i.e., training data) that can be used to make predictions on new observations. The models or logic are not based on theory but are empirically based or data-driven.

[0070] During the clustering stage 308, respective attribute vectors of nodes are evaluated using machine learning to identify an optimal clustering for a selected set of nodes. Supervised or unsupervised learning techniques can be used depending on the availability of training data and other related information (e.g., network topology). For example, an ADM module (or other suitable system) can receive configuration information regarding a network from a configuration management system (CMS), configuration management database (CMDB), or other similar system. In some embodiments, the ADM module can receive the configuration data in a proprietary or open source format utilized by the CMS or CMDB and translate the information to training data observations for the particular machine learning approach(es) implemented by the ADM module. In other embodiments, the CMS or CMDB and the ADM module may be closely integrated and the CMS or CMDB can automatically provide the ADM module with suitable training data. In yet other embodiments, a network administrator or authorized user may receive the configuration data from the CMS or CMDB and the administrator or user can manually label nodes to create the training data.
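
The disclosure does not commit to a specific algorithm for the clustering stage 308, so the following sketch uses one common unsupervised stand-in, k-means with a silhouette sweep to choose the number of clusters, purely for illustration.

    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def cluster_nodes(vectors, k_range=range(2, 11)):
        """vectors: (n_nodes, n_attrs) fused, normalized attribute matrix;
        returns the cluster count and labels with the best silhouette."""
        best_k, best_score, best_labels = None, -1.0, None
        for k in k_range:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
            score = silhouette_score(vectors, labels)
            if score > best_score:
                best_k, best_score, best_labels = k, score, labels
        return best_k, best_labels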

[0071] In some embodiments, the network traffic monitoring system 100 is useful for presenting a visualization of data flows so that a network administrator can better monitor or investigate data flows. Presentation module 112 can present an application dependency map by providing a dynamic graph interface effective to customize a view of an application.

[0072] Multi-tier applications can be very complex. Thousands of logical entities could potentially be responsible for the performance of an application. Given this complexity, a multi-tier application cannot easily be viewed in a conventional dependency graph.

[0073] FIG. 4 illustrates an example method for producing an application dependency graph in a dynamic graph interface, and FIG. 5A and FIG. 5B illustrate an example graph 500 in an example graph interface and illustrate functionality provided by the dynamic graph interface.

[0074] In some embodiments, graph 500 can be produced by presentation module 112 in combination with ADM engine 140 by searching data analyzed by an analytics engine (ADM engine 140) in response to a received query to visualize a multi-tier application 402.

[0075] The search can identify a plurality of logical entities, which could number in the thousands or more depending on the application, and select 404 a limited number of logical entities to represent the multi-tier application based on the query criteria. It would not be useful to represent the application in great detail, as the amount of information presented would be overwhelming and hard to visualize in the dynamic graph interface. Instead, presentation module 112 can be configured to represent no more than a maximum number of nodes representing logical entities in an initially presented graph. The maximum number can be configurable. Likewise, a minimum number of nodes representing logical entities can also be configured.
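
One hypothetical way to enforce such a cap is to rank entities by how well they match the query and keep only the top entries, bounded by the configured maximum and minimum; the scoring function below is a placeholder assumption.

    def select_entities(entities, match_score, max_nodes=10, min_nodes=2):
        """entities: list of logical-entity ids; match_score: id -> float."""
        ranked = sorted(entities, key=match_score, reverse=True)
        n = max(min_nodes, min(max_nodes, len(ranked)))  # respect both bounds
        return ranked[:n]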

[0076] The selected 404 logical entities are used to represent 406 the multi-tier application in a dynamic graph 500 as a limited number of nodes representing logical entities. As illustrated in FIG. 5A, ten nodes are used to represent the application in graph 500. However, as illustrated in FIG. 5B, in which only node 508 is expanded, many additional logical entities are collapsed into clusters of logical entities. The logical entities within a cluster can be many layers deep, or can represent endpoints. The logical entities/clusters can be determined by the ADM engine 140, as described above.

[0077] In some embodiments, the limited number of nodes representing logical entities can be arranged 408 based on the query criteria. While some queries might be higher-level queries that request a view of an entire application, other queries might be lower-level queries that attempt to view a tier of the multi-tier application, entities grouped into a given subnet, or functional entities. In some embodiments, the query might even identify a particular logical entity that is desired to be viewed. Especially in embodiments wherein the query has some specificity, it can be desirable to arrange the logical entities (and select the entities to be displayed) based on the search criteria. For example, if a cluster of logical entities best matches the search criteria, a node representing that cluster might be arranged at the center of the dynamic graph, or to the left of the dynamic graph, or in any arrangement selected by an arrangement algorithm. In such embodiments, data reflecting how well a logical entity matches the search criteria can be an input into a graph layout algorithm.
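
As a sketch of feeding match quality into a layout, the best-matching node could seed a force-directed pass pinned at the center; networkx and the centering heuristic here are illustrative assumptions, not the disclosure's method.

    import networkx as nx

    def arrange(graph, match_score):
        """Pin the best query match at the origin, then let a
        force-directed layout place the remaining nodes around it."""
        best = max(graph.nodes, key=match_score)
        return nx.spring_layout(graph, pos={best: (0.0, 0.0)}, fixed=[best], seed=1)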

[0078] Since multi-tier applications can be so complex and have so many logical entities, it is a benefit of the present technology to provide functionality that allows the dynamic graph interface to receive inputs effective to customize the graph or to explore the graph.

[0079] FIG. 5A illustrates an example initial view of a graph 500 in the dynamic graph interface. Graph 500 presents logical entities of application 502 "nprd." The graph presents a plurality of nodes such as node 508 and node 514. Some nodes, such as node 514, are fully expanded and represented as filled-in circles, while collapsed nodes, such as node 508, are visually distinct and represented as outlined circles. Lines between the nodes can represent edges of the graph, or logical communication flows.

[0080] In some embodiments, the dynamic graph interface can receive an input, such as a double click, on one of the nodes, and in response to the input the graph interface can explode 410 the node into a plurality of additional logical entities. This is illustrated in FIG. 5A and FIG. 5B, wherein node 508 in FIG. 5A is illustrated as an outlined circle, indicating that it is a node representing a collection of collapsed logical entities. When a double click is received on node 508, it can become expanded as illustrated in FIG. 5B. In FIG. 5B, node 508 has become a filled-in circle, indicating that it is completely expanded. However, the nodes representing the logical entities that have expanded from node 508 may represent further logical entities. See, for example, node 524, which is represented as an outlined circle; this indicates that it too can be expanded.
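
A minimal sketch of the expand state transition: each collapsible node owns a set of child entities, and expanding it adds the children to the visible set while the node itself becomes a fully expanded (filled-in) node. The data structures are assumptions; the cluster hierarchy itself would come from the ADM run.

    def expand(visible, expanded, children, node):
        """visible/expanded: sets of node ids; children: node -> set of ids.
        Expanding adds the node's children to the view and marks the node
        itself as fully expanded (drawn as a filled-in circle)."""
        kids = children.get(node, set())
        if node in expanded or not kids:
            return visible, expanded  # nothing to do: already expanded, or a leaf
        return visible | kids, expanded | {node}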

[0081] As a node is expanded 410, the graph 500 can be rearranged 412 to accommodate the plurality of additional nodes representing logical entities. As illustrated in FIG. 5B, some of the already existing nodes in graph 500 have shifted to the right, and some nodes have adjusted their spacing.

[0082] While the expansion of node 508 appears to have produced a tree-like structure, it is important to note that the actual structure of the graph is not a tree. The expanded nodes representing logical entities do not necessarily represent end nodes, and in many cases the expanded nodes representing logical entities might intercommunicate or communicate back to other existing entities in the graph. However, in some embodiments, it can be useful to give a tree-like impression to the user of the dynamic graph interface. If a user is looking to see where communications flow from entity 508, it may be most comfortable for the user to view the expanded entities to the right, which appear to be downstream. However, this is just an illusion created for convenience; none of the expanded entities need to be downstream of entity 508.

[0083] In addition to the expansion of logical entities, FIG. 5A illustrates additional inputs available for arranging and rearranging graph 500. The graph can be zoomed in/out with controls 504. Additionally, nodes representing logical entities can be anchored in place with control 506. Nodes 514 and 508 have been anchored in place as represented by an anchor symbol inside the circle representing the entity. When a graph is rearranged, the nodes anchored in place will not move. In some embodiments, graph 500 can be rearranged to make a node representing an entity the center of the graph 500 using control 510.
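
One way such anchoring could interact with rearrangement, sketched with networkx as an assumed layout engine: the previous positions seed the new layout and the anchored nodes are passed as fixed, so, as described above, they do not move.

    import networkx as nx

    def relayout(graph, old_pos, anchored):
        """Recompute the layout after nodes are added or expanded; anchored
        nodes keep their exact positions, others start from where they were."""
        seed = {n: old_pos[n] for n in graph.nodes if n in old_pos}
        fixed = [n for n in anchored if n in seed] or None
        return nx.spring_layout(graph, pos=seed or None, fixed=fixed, seed=1)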

[0084] In addition to the illustrated controls, nodes representing logical entities can be repositioned through drag actions received in the dynamic graph interface. Nodes representing logical entities can also be hidden or collapsed into other nodes.

[0085] When a node that represents a collection of other entities, such as node 508 in FIG. 5A, is expanded, the node can transform to represent only a single entity, such as node 508 in FIG. 5B. This is evident in information display 516 in FIG. 5A and FIG. 5B. When a node is selected, information regarding that node can be displayed in information display 516. For example, in FIG. 5A, node 508 is selected, and information display 516 shows the name of the logical entity represented by the node, the number of endpoints represented by the node, the number of neighbors of the logical entities represented by the node, the number of subnets encompassed by the node, the number of logical entities that are provided with data or communication flows from entities represented by the node, and the number of logical entities that consume data or communication flows from entities represented by the node. In FIG. 5B, the information regarding the node has changed because node 508 has been expanded and many of the entities previously represented by node 508 are now represented by other nodes (such as node 524).

[0086] Once graph 500 is arranged in a way suitable to the user, control 512 can be used to save and share the graph. In some embodiments, the arrangement of the graph can be saved as a template for future visualizations of the same application with updated data.
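
An illustrative sketch of saving an arrangement as a template: only layout state (positions, expansion, anchors) is persisted, so the same view can later be re-populated with updated flow data. The JSON schema is an assumption.

    import json

    def save_template(path, pos, expanded, anchored):
        """Persist layout state only; node data is re-queried on load."""
        state = {
            "positions": {n: [float(x) for x in p] for n, p in pos.items()},
            "expanded": sorted(expanded),
            "anchored": sorted(anchored),
        }
        with open(path, "w") as f:
            json.dump(state, f, indent=2)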

[0087] FIG. 6 illustrates a conventional system bus computing system architecture 600 that can be used with any of the system components illustrated in FIG. 1, wherein the components of the system 600 are in electrical communication with each other using a bus 605. Exemplary system 600 includes a processing unit (CPU or processor) 610 and a system bus 605 that couples various system components, including the system memory 615, such as read only memory (ROM) 670 and random access memory (RAM) 675, to the processor 610. The system 600 can include a cache 612 of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 610. The system 600 can copy data from the memory 615 and/or the storage device 630 to the cache 612 for quick access by the processor 610. In this way, the cache 612 can provide a performance boost that avoids processor 610 delays while waiting for data. These and other modules can control or be configured to control the processor 610 to perform various actions. Other system memory 615 may be available for use as well. The memory 615 can include multiple different types of memory with different performance characteristics. The processor 610 can include any general purpose processor and a hardware module or software module, such as module 1 637, module 2 634, and module 3 636 stored in storage device 630, configured to control the processor 610, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 610 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.

[0088] To enable user interaction with the computing device 600, an input device 645 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, and so forth. An output device 635 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 600. The communications interface 640 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic attributes here may easily be substituted for improved hardware or firmware arrangements as they are developed.

[0089] Storage device 630 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 675, read only memory (ROM) 680, and hybrids thereof.

[0090] The storage device 630 can include software modules 637, 634, 636 for controlling the processor 610. Other hardware or software modules are contemplated. The storage device 630 can be connected to the system bus 605. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 610, bus 605, display 635, and so forth, to carry out the function.

[0091] For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks, including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.

[0092] In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.

[0093] Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.

[0094] Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.

[0095] The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.

[0096] Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular attributes or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural attributes and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described attributes or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described attributes and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Moreover, claim language reciting "at least one of" a set indicates that one member of the set or multiple members of the set satisfy the claim.

* * * * *

