U.S. patent number 11,294,893 [Application Number 15/145,738] was granted by the patent office on 2022-04-05 for aggregation of queries.
This patent grant is currently assigned to Pure Storage, Inc. The grantee listed for this patent is Pure Storage, Inc. Invention is credited to Par Botes.
![](/patent/grant/11294893/US11294893-20220405-D00000.png)
![](/patent/grant/11294893/US11294893-20220405-D00001.png)
![](/patent/grant/11294893/US11294893-20220405-D00002.png)
![](/patent/grant/11294893/US11294893-20220405-D00003.png)
![](/patent/grant/11294893/US11294893-20220405-D00004.png)
![](/patent/grant/11294893/US11294893-20220405-D00005.png)
![](/patent/grant/11294893/US11294893-20220405-D00006.png)
![](/patent/grant/11294893/US11294893-20220405-D00007.png)
United States Patent 11,294,893
Botes
April 5, 2022
Aggregation of queries
Abstract
A method for querying a storage system is provided. The method
includes receiving, at one of a plurality of storage nodes of the
storage system, a query relating to metadata of the storage system.
The method includes determining which authorities have ownership of
ranges of user data to which the query pertains and distributing
the query or portions of the query to the authorities that have
ownership of the data, wherein each of the authorities accesses the
metadata of the storage system associated with the query. The
method includes aggregating replies to the query from the
authorities that have ownership of the ranges of user data, to form
a query reply.
Inventor: Botes, Par (Mountain View, CA)
Applicant: Pure Storage, Inc., Mountain View, CA, US
Assignee: Pure Storage, Inc. (Mountain View, CA)
Family ID: 1000006219727
Appl. No.: 15/145,738
Filed: May 3, 2016
Prior Publication Data
US 20160275142 A1, published Sep 22, 2016
Related U.S. Patent Documents
Application No. 14/664,434, filed Mar 20, 2015
Current U.S. Class: 1/1
Current CPC Class: G06F 16/2433 (20190101)
Current International Class: G06F 16/00 (20190101); G06F 16/242 (20190101)
Field of Search: 711/154,156,158,170; 714/6.24,6.32
References Cited
U.S. Patent Documents
Foreign Patent Documents
EP 2164006, Mar 2010
EP 2256621, Dec 2010
WO 02-13033, Feb 2002
WO 2008103569, Aug 2008
WO 2008157081, Dec 2008
WO 2013032825, Jul 2013
WO 2016154151, Sep 2016
Other References
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/023485, dated Jul.
21, 2016. cited by applicant .
U.S. Appl. No. 14/664,434, filed Mar. 20, 2015, SQL-Like Query
Language for Selecting and Retrieving Systems Telemetry Including
Performance, Access and Audit Data, Par Botes. cited by applicant
.
U.S. Appl. No. 15/145,738, filed May 3, 2016, SQL-Like Query
Language for Selecting and Retrieving Systems Telemetry Including
Performance, Access and Audit Data, Par Botes. cited by applicant
.
Hwang, Kai, et al. "RAID-x: A New Distributed Disk Array for
I/O-Centric Cluster Computing," HPDC '00 Proceedings of the 9th
IEEE International Symposium on High Performance Distributed
Computing, IEEE, 2000, pp. 279-286. cited by applicant .
Schmid, Patrick: "RAID Scaling Charts, Part 3:4-128 kB Stripes
Compared", Tom's Hardware, Nov. 27, 2007
(http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS.1735-4.html),
See pp. 1-2. cited by applicant .
Storer, Mark W. et al., "Pergamum: Replacing Tape with Energy
Efficient, Reliable, Disk-Based Archival Storage," Fast '08: 6th
USENIX Conference on File and Storage Technologies, San Jose, CA,
Feb. 26-29, 2008, pp. 1-16. cited by applicant .
Ju-Kyeong Kim et al., "Data Access Frequency based Data Replication
Method using Erasure Codes in Cloud Storage System", Journal of the
Institute of Electronics and Information Engineers, Feb. 2014, vol.
51, No. 2, pp. 85-91. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2015/018169, dated May 15,
2015. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2015/034302, dated Sep.
11, 2015. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2015/039135, dated Sep.
18, 2015. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2015/039136, dated Sep.
23, 2015. cited by applicant .
International Search Report, PCT/US2015/039142, dated Sep. 24,
2015. cited by applicant .
International Search Report, PCT/US2015/034291, dated Sep. 30,
2015. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2015/039137, dated Oct. 1,
2015. cited by applicant .
International Search Report, PCT/US2015/044370, dated Dec. 15,
2015. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/031039, dated May 5,
2016. cited by applicant .
International Search Report, PCT/US2016/014604, dated May 19, 2016.
cited by applicant .
International Search Report, PCT/US2016/014361, dated May 30, 2016.
cited by applicant .
International Search Report, PCT/US2016/014356, dated Jun. 28,
2016. cited by applicant .
International Search Report, PCT/US2016/014357, dated Jun. 29,
2016. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/016504, dated Jul. 6,
2016. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/024391, dated Jul.
12, 2016. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/026529, dated Jul.
19, 2016. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/033306, dated Aug.
19, 2016. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/047808, dated Nov.
25, 2016. cited by applicant .
Stalzer, Mark A., "FlashBlades: System Architecture and
Applications," Proceedings of the 2nd Workshop on Architectures and
Systems for Big Data, Association for Computing Machinery, New
York, NY, 2012, pp. 10-14. cited by applicant .
International Search Report and the Written Opinion of the
International Searching Authority, PCT/US2016/042147, dated Nov.
30, 2016. cited by applicant.
Primary Examiner: Corrielus; Jean M
Attorney, Agent or Firm: Womble Bond Dickinson (US) LLP
Claims
The invention claimed is:
1. A method, comprising: receiving a query relating to metadata of
a storage system; determining which of a plurality of authorities,
implemented in the storage nodes and controlling how and where data
is stored, have ownership of ranges of user data associated with
the metadata to which the query pertains; distributing the query to
the authorities that have ownership of the user data associated
with the metadata to which the query pertains, wherein the query
causes each of the authorities to access the metadata of the
storage system associated with the query and transmit a reply to a
metadata query engine residing in one of the plurality of storage
nodes, wherein at least one of the authorities to which the query
or portions thereof is distributed is in the storage node differing
from the at least one storage node that has the metadata query
engine receiving the query; and aggregating the information from
the replies to the query from the authorities, to form a query
reply for transmission responsive to the receiving.
2. The method of claim 1, wherein a metadata query engine is
configured to receive the query and select a master authority from
among the plurality of authorities, and wherein the master
authority is configured to perform the determining, the
distributing and the aggregating.
3. The method of claim 1, wherein the method further comprises
formatting the aggregated replies to form the query reply.
4. The method of claim 1, wherein each of the plurality of storage
nodes has a distributed portion of the metadata query engine.
5. The method of claim 1, wherein: a processor of the one of the
plurality of storage nodes performs the receiving, query parsing,
and query reply formatting; a processor of a second one of the
plurality of storage nodes performs the determining; and a
processor of a third one of the plurality of storage nodes performs
the accesses to the metadata as duties of the authorities receiving
the distributed query or portions of the query.
6. The method of claim 1, wherein: the metadata includes
information about changes to the user data; the query is a
real-time question about transactions involving the user data; and
replies to the query or portion thereof from each authority are
based on accesses to the information about changes to the user
data.
7. A storage system, comprising: a plurality of storage nodes, each
having a processor, a plurality of authorities, and at least one
storage unit having a first memory configured to hold metadata and
a second memory configured to hold user data, wherein each
authority owns a range of user data; at least one storage node
having a metadata query engine; and the at least one storage node
configured to perform a method comprising: receiving, at one of a
plurality of storage nodes of the storage system, a query relating
to metadata of the storage system; determining which of a plurality
of authorities that are implemented in the storage nodes and
control how and where data is stored have ownership of ranges of
user data associated with the metadata to which the query pertains;
distributing the query to the authorities that have ownership of
the user data associated with the metadata to which the query
pertains, wherein the query causes each of the authorities to
access the metadata of the storage system associated with the query
and transmit a reply to a metadata query engine in one of the
plurality of storage nodes, each reply comprising information based
on the query or one or more of the portions of the query and based
on the accessing of the metadata, wherein at least one of the
authorities to which the query or portions thereof is distributed
is in the storage node differing from the at least one storage node
that has the metadata query engine receiving the query;
aggregating, by the metadata query engine, the information from the
replies to the query from the authorities, to form a query reply;
and transmitting the query reply, by the query engine, responsive
to the receiving the query.
8. The storage system of claim 7, wherein a metadata query engine
is configured to receive the query and select a master authority
from among the plurality of authorities, and wherein the master
authority is configured to perform the determining, the
distributing and the aggregating.
9. The storage system of claim 7, wherein the method further
comprises formatting the aggregated replies to form the query
reply.
10. The storage system of claim 7, wherein each of the plurality of
storage nodes has a distributed portion of the metadata query
engine.
11. The storage system of claim 7, wherein: a processor of the one
of the plurality of storage nodes performs the receiving, query
parsing, and query reply formatting; a processor of a second one of
the plurality of storage nodes performs the determining; and a
processor of a third one of the plurality of storage nodes performs
the accesses to the metadata as duties of the authorities receiving
the distributed query or portions of the query.
12. The storage system of claim 7, wherein: the metadata includes
information about changes to the user data; the query is a
real-time question about transactions involving the user data; and
replies to the query or portion thereof from each such authority
are based on accesses to the information about changes to the user
data.
13. A tangible, non-transitory, computer-readable media having
instructions thereupon for a processor of a storage system, which
when executed cause the processor to perform a method comprising:
receiving, at one of a plurality of storage nodes of the storage
system, a query relating to metadata of the storage system, wherein
at least the one of the plurality of storage nodes has a metadata
query engine, wherein each of the plurality of storage nodes has a
processor and implements a plurality of authorities, and at least
one storage unit having a first memory configured to hold metadata
and a second memory configured to hold user data, and wherein each
authority owns a range of user data; determining which of the
plurality of authorities have ownership of ranges of user data
associated with the metadata to which the query pertains;
distributing the query to the authorities that have ownership of
the user data associated with the metadata to which the query
pertains, wherein the query causes each authority to access the
metadata of the storage system and transmit a reply to a metadata
query engine in one of the plurality of storage nodes, each reply
comprising information based on the query and based on the
accessing of the metadata, wherein at least one of the authorities
to which the query or portions thereof is distributed is in the
storage node differing from the at least one storage node that has
the metadata query engine receiving the query; and aggregating, by
the metadata query engine, the information from the replies to the
query from the authorities, to form a query reply for transmission
responsive to the receiving.
14. The computer-readable media of claim 13, wherein the metadata
query engine is configured to select a master authority from among
the plurality of authorities, and wherein the master authority is
configured to perform the determining, the distributing and the
aggregating.
15. The computer-readable media of claim 13, wherein the method
further comprises formatting the aggregated replies to form the
query reply.
16. The computer-readable media of claim 13, wherein: a processor
of a first one of the plurality of storage nodes performs the
receiving and further duties of the metadata query engine,
including query parsing, and query reply formatting; a processor of
a second one of the plurality of storage nodes performs the
determining; and processors of at least a third one and a fourth
one of the plurality of storage nodes perform the accesses to the
metadata of the storage system as duties of the authorities
receiving the distributed query or portions thereof.
17. The computer-readable media of claim 13, wherein: the metadata
includes information about changes to the user data; the query is a
real-time question about transactions involving the user data; and
replies to the query or portion thereof from each such authority
are based on accesses to the information about changes to the user
data.
Description
BACKGROUND
Solid-state memory, such as flash, is currently in use in
solid-state drives (SSD) to augment or replace conventional hard
disk drives (HDD), writable CD (compact disk) or writable DVD
(digital versatile disk) drives, collectively known as spinning
media, and tape drives, for storage of large amounts of data. Flash
and other solid-state memories have characteristics that differ
from spinning media. Yet, many solid-state drives are designed to
conform to hard disk drive standards for compatibility reasons,
which makes it difficult to provide enhanced features or take
advantage of unique aspects of flash and other solid-state memory.
In addition, it can be difficult to get information about system
operation from a solid-state drive conforming to a hard disk drive
standard and operating as a closed system.
It is within this context that the embodiments arise.
SUMMARY
In some embodiments, a method for querying a storage system is
provided. The method includes receiving, at one of a plurality of
storage nodes of the storage system, a query relating to metadata
of the storage system. The method includes determining which
authorities have ownership of ranges of user data to which the
query pertains and distributing the query or portions of the query
to the authorities that have ownership of the data, wherein each of
the authorities accesses the metadata of the storage system
associated with the query. The method includes aggregating replies
to the query from the authorities that have ownership of the ranges
of user data, to form a query reply. The embodiments include a
computer readable medium having instructions which, when executed by
a processor, perform the method. The embodiments also include a
storage system configured to perform the method.
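For readers who think in code, the method of the summary can be pictured as a scatter-gather pass over the authorities. The sketch below is only an illustration of that flow under assumed details, not the patented implementation; the `Authority` class, its inode-range fields, and the helper names are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Authority:
    """Hypothetical stand-in for an authority owning a range of user data."""
    low: int                                      # first inode number owned
    high: int                                     # last inode number owned
    metadata: dict = field(default_factory=dict)  # inode -> metadata record

    def owns(self, inode: int) -> bool:
        return self.low <= inode <= self.high

    def run_query(self, inodes):
        # Each authority accesses only the metadata it owns.
        return [self.metadata[i] for i in inodes
                if self.owns(i) and i in self.metadata]

def answer_metadata_query(inodes, authorities):
    """Determine owning authorities, distribute the query, aggregate replies."""
    owners = [a for a in authorities if any(a.owns(i) for i in inodes)]
    replies = [a.run_query(inodes) for a in owners]       # distribute
    return [rec for reply in replies for rec in reply]    # aggregate

auth = Authority(0, 99, {7: {"path": "/a", "size": 10}})
print(answer_metadata_query([7, 500], [auth]))  # [{'path': '/a', 'size': 10}]
```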
Other aspects and advantages of the embodiments will become
apparent from the following detailed description taken in
conjunction with the accompanying drawings which illustrate, by way
of example, the principles of the described embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The described embodiments and the advantages thereof may best be
understood by reference to the following description taken in
conjunction with the accompanying drawings. These drawings in no
way limit any changes in form and detail that may be made to the
described embodiments by one skilled in the art without departing
from the spirit and scope of the described embodiments.
FIG. 1 is a perspective view of a storage cluster with multiple
storage nodes and internal storage coupled to each storage node to
provide network attached storage, in accordance with some
embodiments.
FIG. 2 is a block diagram showing an interconnect switch coupling
multiple storage nodes in accordance with some embodiments.
FIG. 3 is a multiple level block diagram, showing contents of a
storage node with a metadata query engine, and contents of one of
the non-volatile solid state storage units in accordance with some
embodiments.
FIG. 4 is a flow diagram of a method for querying regarding
metadata in a storage cluster, which can be practiced by an
embodiment of a storage node.
FIG. 5 is a combination block and action diagram, showing the
metadata query engine of FIG. 3 communicating with a master
authority that distributes a query or portions of a query to
authorities that own ranges of user data in accordance with some
embodiments.
FIG. 6 is a block diagram showing an example arrangement of storage
nodes, non-volatile solid-state storages or storage units, metadata
query engines and authorities, accessing metadata of the storage
system in answer to a query in accordance with some
embodiments.
FIG. 7 is a flow diagram of a method for querying a storage system,
which can be practiced using a metadata query engine as shown in
FIGS. 3-6 in accordance with some embodiments.
FIG. 8 is an illustration showing an exemplary computing device
which may implement the embodiments described herein.
DETAILED DESCRIPTION
A storage cluster that has a metadata query engine in one or more
of the storage nodes of the storage cluster is herein described.
The metadata query engine can receive queries regarding metadata,
access metadata and form results according to the queries. Results
are returned from the storage node(s) in the form of a reply, or a
file that can be read in the storage cluster. Various embodiments
use a format of a structured query language (SQL), and may comply
with the Open Database Connectivity (ODBC) standard.
The embodiments below describe a storage cluster that stores user
data, such as user data originating from one or more user or client
systems or other sources external to the storage cluster. The
storage cluster distributes user data across storage nodes housed
within a chassis, using erasure coding and redundant copies of
metadata. Erasure coding refers to a method of data protection or
reconstruction in which data is stored across a set of different
locations, such as disks, storage nodes or geographic locations.
Flash memory is one type of solid-state memory that may be
integrated with the embodiments, although the embodiments may be
extended to other types of solid-state memory or other storage
medium, including non-solid state memory. Control of storage
locations and workloads is distributed across the storage
locations in a clustered peer-to-peer system. Tasks such as
mediating communications between the various storage nodes,
detecting when a storage node has become unavailable, and balancing
I/Os (inputs and outputs) across the various storage nodes, are all
handled on a distributed basis. Data is laid out or distributed
across multiple storage nodes in data fragments or stripes that
support data recovery in some embodiments. Ownership of data can be
reassigned within a cluster, independent of input and output
patterns. This architecture, described in more detail below, allows a
storage node in the cluster to fail, with the system remaining
operational, since the data can be reconstructed from other storage
nodes and thus remain available for input and output operations. In
various embodiments, a storage node may be referred to as a cluster
node, a blade, or a server.
The storage cluster is contained within a chassis, i.e., an
enclosure housing one or more storage nodes. A mechanism to provide
power to each storage node, such as a power distribution bus, and a
communication mechanism, such as a communication bus that enables
communication between the storage nodes are included within the
chassis. The storage cluster can run as an independent system in
one location according to some embodiments. In one embodiment, a
chassis contains at least two instances of both the power
distribution and the communication bus which may be enabled or
disabled independently. The internal communication bus may be an
Ethernet bus; however, other technologies such as Peripheral
Component Interconnect (PCI) Express, InfiniBand, and others, are
equally suitable. The chassis provides a port for an external
communication bus for enabling communication between multiple
chassis, directly or through a switch, and with client systems. The
external communication may use a technology such as Ethernet,
InfiniBand, Fibre Channel, etc. In some embodiments, the external
communication bus uses different communication bus technologies for
inter-chassis and client communication. If a switch is deployed
within or between chassis, the switch may act as a translation
between multiple protocols or technologies. When multiple chassis
are connected to define a storage cluster, the storage cluster may
be accessed by a client using either proprietary interfaces or
standard interfaces such as network file system (NFS), common
internet file system (CIFS), small computer system interface (SCSI)
or hypertext transfer protocol (HTTP). Translation from the client
protocol may occur at the switch, chassis external communication
bus or within each storage node.
Each storage node may be one or more storage servers and each
storage server is connected to one or more non-volatile solid state
memory units, which may be referred to as storage units. One
embodiment includes a single storage server in each storage node
and between one and eight non-volatile solid state memory units;
however, this one example is not meant to be limiting. The storage
server may include a processor, dynamic random access memory (DRAM)
and interfaces for the internal communication bus and power
distribution for each of the power buses. Inside the storage node,
the interfaces and storage unit share a communication bus, e.g.,
PCI Express, in some embodiments. The non-volatile solid state
memory units may directly access the internal communication bus
interface through a storage node communication bus, or request the
storage node to access the bus interface. The non-volatile solid
state memory unit contains an embedded central processing unit
(CPU), solid state storage controller, and a quantity of solid
state mass storage, e.g., between 2 and 32 terabytes (TB) in some
embodiments. An embedded volatile storage medium, such as DRAM, and
an energy reserve apparatus are included in the non-volatile solid
state memory unit. In some embodiments, the energy reserve
apparatus is a capacitor, super-capacitor, or battery that enables
transferring a subset of DRAM contents to a stable storage medium
in the case of power loss. In some embodiments, the non-volatile
solid state memory unit is constructed with a storage class memory,
such as phase change or magnetoresistive random access memory
(MRAM) that substitutes for DRAM and enables a reduced power
hold-up apparatus.
One of many features of the storage nodes and non-volatile solid
state storage is the ability to proactively rebuild data in a
storage cluster. The storage nodes and non-volatile solid state
storage can determine when a storage node or non-volatile solid
state storage in the storage cluster is unreachable, independent of
whether there is an attempt to read data involving that storage
node or non-volatile solid state storage. The storage nodes and
non-volatile solid state storage then cooperate to recover and
rebuild the data in at least partially new locations. This
constitutes a proactive rebuild, in that the system rebuilds data
without waiting until the data is needed for a read access
initiated from a client system employing the storage cluster. These
and further details of the storage memory and operation thereof are
discussed below.
FIG. 1 is a perspective view of a storage cluster 160, with
multiple storage nodes 150 and internal solid-state memory coupled
to each storage node to provide network attached storage or storage
area network, in accordance with some embodiments. A network
attached storage, storage area network, or a storage cluster, or
other storage memory, could include one or more storage clusters
160, each having one or more storage nodes 150, in a flexible and
reconfigurable arrangement of both the physical components and the
amount of storage memory provided thereby. The storage cluster 160
is designed to fit in a rack, and one or more racks can be set up
and populated as desired for the storage memory. The storage
cluster 160 has a chassis 138 having multiple slots 142. It should
be appreciated that chassis 138 may be referred to as a housing,
enclosure, or rack unit. In one embodiment, the chassis 138 has
fourteen slots 142, although other numbers of slots are readily
devised. For example, some embodiments have four slots, eight
slots, sixteen slots, thirty-two slots, or other suitable number of
slots. Each slot 142 can accommodate one storage node 150 in some
embodiments. Chassis 138 includes flaps 148 that can be utilized to
mount the chassis 138 on a rack. Fans 144 provide air circulation
for cooling of the storage nodes 150 and components thereof,
although other cooling components could be used, or an embodiment
could be devised without cooling components. A switch fabric 146
couples storage nodes 150 within chassis 138 together and to a
network for communication to the memory. In an embodiment depicted
in FIG. 1, the slots 142 to the left of the switch fabric 146 and
fans 144 are shown occupied by storage nodes 150, while the slots
142 to the right of the switch fabric 146 and fans 144 are empty
and available for insertion of storage node 150 for illustrative
purposes. This configuration is one example, and one or more
storage nodes 150 could occupy the slots 142 in various further
arrangements. The storage node arrangements need not be sequential
or adjacent in some embodiments. Storage nodes 150 are hot
pluggable, meaning that a storage node 150 can be inserted into a
slot 142 in the chassis 138, or removed from a slot 142, without
stopping or powering down the system. Upon insertion or removal of
storage node 150 from slot 142, the system automatically
reconfigures in order to recognize and adapt to the change.
Reconfiguration, in some embodiments, includes restoring redundancy
and/or rebalancing data or load.
Each storage node 150 can have multiple components. In the
embodiment shown here, the storage node 150 includes a printed
circuit board 158 populated by a CPU 156, i.e., processor, a memory
154 coupled to the CPU 156, and a non-volatile solid state storage
152 coupled to the CPU 156, although other mountings and/or
components could be used in further embodiments. The memory 154 has
instructions which are executed by the CPU 156 and/or data operated
on by the CPU 156. As further explained below, the non-volatile
solid state storage 152 includes flash or, in further embodiments,
other types of solid-state memory.
Referring to FIG. 1, storage cluster 160 is scalable, meaning that
storage capacity with non-uniform storage sizes is readily added,
as described above. One or more storage nodes 150 can be plugged
into or removed from each chassis and the storage cluster
self-configures in some embodiments. Plug-in storage nodes 150,
whether installed in a chassis as delivered or later added, can
have different sizes. For example, in one embodiment a storage node
150 can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB,
etc. In further embodiments, a storage node 150 could have any
multiple of other storage amounts or capacities. Storage capacity
of each storage node 150 is broadcast, and influences decisions of
how to stripe the data. For maximum storage efficiency, an
embodiment can self-configure as wide as possible in the stripe,
subject to a predetermined requirement of continued operation with
loss of up to one, or up to two, non-volatile solid state storage
units 152 or storage nodes 150 within the chassis.
FIG. 2 is a block diagram showing a communications interconnect 170
and power distribution bus 172 coupling multiple storage nodes 150.
Referring back to FIG. 1, the communications interconnect 170 can
be included in or implemented with the switch fabric 146 in some
embodiments. Where multiple storage clusters 160 occupy a rack, the
communications interconnect 170 can be included in or implemented
with a top of rack switch, in some embodiments. As illustrated in
FIG. 2, storage cluster 160 is enclosed within a single chassis
138. External port 176 is coupled to storage nodes 150 through
communications interconnect 170, while external port 174 is coupled
directly to a storage node. External power port 178 is coupled to
power distribution bus 172. Storage nodes 150 may include varying
amounts and differing capacities of non-volatile solid state
storage 152 as described with reference to FIG. 1. In addition, one
or more storage nodes 150 may be a compute only storage node as
illustrated in FIG. 2. Authorities 168 are implemented on the
non-volatile solid state storages 152, for example as lists or
other data structures stored in memory. In some embodiments the
authorities are stored within the non-volatile solid state storage
152 and supported by software executing on a controller or other
processor of the non-volatile solid state storage 152. In a further
embodiment, authorities 168 are implemented on the storage nodes
150, for example as lists or other data structures stored in the
memory 154 and supported by software executing on the CPU 156 of
the storage node 150. Authorities 168 control how and where data is
stored in the non-volatile solid state storages 152 in some
embodiments. This control assists in determining which type of
erasure coding scheme is applied to the data, and which storage
nodes 150 have which portions of the data. Each authority 168 may
be assigned to a non-volatile solid state storage 152. Each
authority may control a range of inode numbers, segment numbers, or
other data identifiers which are assigned to data by a file system,
by the storage nodes 150, or by the non-volatile solid state
storage 152, in various embodiments.
Every piece of data, and every piece of metadata, has redundancy in
the system in some embodiments. In addition, every piece of data
and every piece of metadata has an owner, which may be referred to
as an authority. If that authority is unreachable, for example
through failure of a storage node, there is a plan of succession
for how to find that data or that metadata. In various embodiments,
there are redundant copies of authorities 168. Authorities 168 have
a relationship to storage nodes 150 and non-volatile solid state
storage 152 in some embodiments. Each authority 168, covering a
range of data segment numbers or other identifiers of the data, may
be assigned to a specific non-volatile solid state storage 152. In
some embodiments the authorities 168 for all of such ranges are
distributed over the non-volatile solid state storages 152 of a
storage cluster. Each storage node 150 has a network port that
provides access to the non-volatile solid state storage(s) 152 of
that storage node 150. Data can be stored in a segment, which is
associated with a segment number and that segment number is an
indirection for a configuration of a RAID (redundant array of
independent disks) stripe in some embodiments. The assignment and
use of the authorities 168 thus establishes an indirection to data.
Indirection may be referred to as the ability to reference data
indirectly, in this case via an authority 168, in accordance with
some embodiments. A segment identifies a set of non-volatile solid
state storage 152 and a local identifier into the set of
non-volatile solid state storage 152 that may contain data. In some
embodiments, the local identifier is an offset into the device and
may be reused sequentially by multiple segments. In other
embodiments the local identifier is unique for a specific segment
and never reused. The offsets in the non-volatile solid state
storage 152 are applied to locating data for writing to or reading
from the non-volatile solid state storage 152 (in the form of a
RAID stripe). Data is striped across multiple units of non-volatile
solid state storage 152, which may include or be different from the
non-volatile solid state storage 152 having the authority 168 for a
particular data segment.
If there is a change in where a particular segment of data is
located, e.g., during a data move or a data reconstruction, the
authority 168 for that data segment should be consulted, at that
non-volatile solid state storage 152 or storage node 150 having
that authority 168. In order to locate a particular piece of data,
embodiments calculate a hash value for a data segment or apply an
inode number or a data segment number. The output of this operation
points to a non-volatile solid state storage 152 having the
authority 168 for that particular piece of data. In some
embodiments there are two stages to this operation. The first stage
maps an entity identifier (ID), e.g., a segment number, inode
number, or directory number, to an authority identifier. This
mapping may include a calculation such as a hash or a bit mask. The
second stage is mapping the authority identifier to a particular
non-volatile solid state storage 152, which may be done through an
explicit mapping. The operation is repeatable, so that when the
calculation is performed, the result of the calculation repeatably
and reliably points to a particular non-volatile solid state
storage 152 having that authority 168. The operation may include
the set of reachable storage nodes as input. If the set of
reachable non-volatile solid state storage units changes, the
optimal set changes. In some embodiments, the persisted value is
the current assignment (which is always true) and the calculated
value is the target assignment the cluster will attempt to
reconfigure towards. This calculation may be used to determine the
optimal non-volatile solid state storage 152 for an authority in
the presence of a set of non-volatile solid state storage 152 that
are reachable and constitute the same cluster. The calculation also
determines an ordered set of peer non-volatile solid state storage
152 that will also record the authority to non-volatile solid state
storage mapping so that the authority may be determined even if the
assigned non-volatile solid state storage is unreachable. A
duplicate or substitute authority 168 may be consulted if a
specific authority 168 is unavailable in some embodiments.
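A minimal sketch of this two-stage lookup follows, assuming a fixed power-of-two authority count and a SHA-256 hash; both are arbitrary illustrative choices, since the text only requires that the calculation be repeatable and that stage two be an explicit mapping.

```python
import hashlib

NUM_AUTHORITIES = 128  # hypothetical fixed, power-of-two authority count

def authority_id(entity_id: str) -> int:
    """Stage 1: map an entity ID (segment, inode, or directory number)
    to an authority identifier via a repeatable hash plus bit mask."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") & (NUM_AUTHORITIES - 1)

def storage_for(entity_id: str, authority_map: dict) -> str:
    """Stage 2: map the authority identifier to a particular non-volatile
    solid state storage unit through an explicit mapping."""
    return authority_map[authority_id(entity_id)]

# Repeatability: the same entity ID always resolves to the same storage
# unit as long as the explicit authority-to-unit mapping is unchanged.
unit_map = {aid: f"storage-unit-{aid % 4}" for aid in range(NUM_AUTHORITIES)}
assert storage_for("inode:4711", unit_map) == storage_for("inode:4711", unit_map)
```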
With reference to FIGS. 1 and 2, two of the many tasks of the CPU
156 on a storage node 150 are to break up write data, and
reassemble read data. When the system has determined that data is
to be written, the authority 168 for that data is located as above.
When the segment ID for data is already determined the request to
write is forwarded to the non-volatile solid state storage 152
currently determined to be the host of the authority 168 determined
from the segment. The host CPU 156 of the storage node 150, on
which the non-volatile solid state storage 152 and corresponding
authority 168 reside, then breaks up or shards the data and
transmits the data out to various non-volatile solid state storage
152. The transmitted data is written as a data stripe in accordance
with an erasure coding scheme. In some embodiments, data is
requested to be pulled, and in other embodiments, data is pushed.
In reverse, when data is read, the authority 168 for the segment ID
containing the data is located as described above. The host CPU 156
of the storage node 150 on which the non-volatile solid state
storage 152 and corresponding authority 168 reside requests the
data from the non-volatile solid state storage and corresponding
storage nodes pointed to by the authority. In some embodiments the
data is read from flash storage as a data stripe. The host CPU 156
of storage node 150 then reassembles the read data, correcting any
errors (if present) according to the appropriate erasure coding
scheme, and forwards the reassembled data to the network. In
further embodiments, some or all of these tasks can be handled in
the non-volatile solid state storage 152. In some embodiments, the
segment host requests the data be sent to storage node 150 by
requesting pages from storage and then sending the data to the
storage node making the original request.
In some systems, for example in UNIX-style file systems, data is
handled with an index node or inode, which specifies a data
structure that represents an object in a file system. The object
could be a file or a directory, for example. Metadata may accompany
the object, as attributes such as permission data and a creation
timestamp, among other attributes. A segment number could be
assigned to all or a portion of such an object in a file system. In
other systems, data segments are handled with a segment number
assigned elsewhere. For purposes of discussion, the unit of
distribution is an entity, and an entity can be a file, a directory
or a segment. That is, entities are units of data or metadata
stored by a storage system. Entities are grouped into sets called
authorities. Each authority has an authority owner, which is a
storage node that has the exclusive right to update the entities in
the authority. In other words, a storage node contains the
authority, and the authority, in turn, contains entities.
A segment is a logical container of data in accordance with some
embodiments. A segment is an address space between medium address
space and physical flash locations, i.e., data segment numbers are
in this address space. Segments may also contain meta-data,
which enable data redundancy to be restored (rewritten to different
flash locations or devices) without the involvement of higher level
software. In one embodiment, an internal format of a segment
contains client data and medium mappings to determine the position
of that data. Each data segment is protected, e.g., from memory and
other failures, by breaking the segment into a number of data and
parity shards, where applicable. The data and parity shards are
distributed, i.e., striped, across non-volatile solid state storage
152 coupled to the host CPUs 156 (See FIG. 5) in accordance with an
erasure coding scheme. Usage of the term segments refers to the
container and its place in the address space of segments in some
embodiments. Usage of the term stripe refers to the same set of
shards as a segment and includes how the shards are distributed
along with redundancy or parity information in accordance with some
embodiments.
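As a concrete, deliberately simplified illustration of breaking a segment into data and parity shards, the sketch below uses single XOR parity, which tolerates the loss of any one shard. This stands in for whatever erasure coding scheme a real system would use (e.g., one tolerating multiple failures); nothing here is prescribed by the text.

```python
def shard_segment(segment: bytes, n_data: int = 4):
    """Split a segment into n_data data shards plus one XOR parity shard."""
    size = -(-len(segment) // n_data)  # ceiling division
    shards = [segment[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(n_data)]
    parity = bytearray(size)
    for shard in shards:               # each parity byte is the XOR of
        for j, b in enumerate(shard):  # the corresponding data bytes
            parity[j] ^= b
    return shards, bytes(parity)

def rebuild_missing(survivors, parity):
    """Reconstruct a single missing data shard from survivors and parity."""
    out = bytearray(parity)
    for shard in survivors:
        for j, b in enumerate(shard):
            out[j] ^= b
    return bytes(out)

shards, parity = shard_segment(b"client data protected by striping")
assert rebuild_missing(shards[1:], parity) == shards[0]
```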
A series of address-space transformations takes place across an
entire storage system. At the top are the directory entries (file
names) which link to an inode. Inodes point into medium address
space, where data is logically stored. Medium addresses may be
mapped through a series of indirect mediums to spread the load of
large files, or implement data services like deduplication or
snapshots. Segment addresses
are then translated into physical flash locations. Physical flash
locations have an address range bounded by the amount of flash in
the system in accordance with some embodiments. Medium addresses
and segment addresses are logical containers, and in some
embodiments use a 128 bit or larger identifier so as to be
practically infinite, with a likelihood of reuse calculated as
longer than the expected life of the system. Addresses from logical
containers are allocated in a hierarchical fashion in some
embodiments. Initially, each non-volatile solid state storage 152
may be assigned a range of address space. Within this assigned
range, the non-volatile solid state storage 152 is able to allocate
addresses without synchronization with other non-volatile solid
state storage 152.
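A toy allocator illustrating this hierarchical scheme, with invented names: each storage unit hands out addresses from its own pre-assigned range and therefore never needs to synchronize with its peers.

```python
class RangeAllocator:
    """Per-storage-unit allocator over a pre-assigned address range."""

    def __init__(self, base: int, span: int):
        self.next = base          # next unallocated address
        self.limit = base + span  # exclusive end of this unit's range

    def allocate(self) -> int:
        # No locking across units: collisions are impossible because
        # the ranges assigned to different units are disjoint.
        if self.next >= self.limit:
            raise RuntimeError("range exhausted; request a new range")
        addr = self.next
        self.next += 1
        return addr

unit_a = RangeAllocator(base=0, span=1 << 20)
unit_b = RangeAllocator(base=1 << 20, span=1 << 20)
assert unit_a.allocate() != unit_b.allocate()
```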
Data and metadata are stored by a set of underlying storage layouts
that are optimized for varying workload patterns and storage
devices. These layouts incorporate multiple redundancy schemes,
compression formats, and index algorithms. Some of these layouts
store information about authorities and authority masters, while
others store file metadata and file data. The redundancy schemes
include error correction codes that tolerate corrupted bits within
a single storage device (such as a NAND flash chip), erasure codes
that tolerate the failure of multiple storage nodes, and
replication schemes that tolerate data center or regional failures.
In some embodiments, low density parity check (LDPC) code is used
within a single storage unit. Reed-Solomon encoding is used within
a storage cluster, and mirroring is used within a storage grid in
some embodiments. Metadata may be stored using an ordered log
structured index (such as a Log Structured Merge Tree), and large
data may not be stored in a log structured layout.
In order to maintain consistency across multiple copies of an
entity, the storage nodes agree implicitly on two things through
calculations: (1) the authority that contains the entity, and (2)
the storage node that contains the authority. The assignment of
entities to authorities can be done by pseudorandomly assigning
entities to authorities, by splitting entities into ranges based
upon an externally produced key, or by placing a single entity into
each authority. Examples of pseudorandom schemes are linear hashing
and the Replication Under Scalable Hashing (RUSH) family of hashes,
including Controlled Replication Under Scalable Hashing (CRUSH). In
some embodiments, pseudo-random assignment is utilized only for
assigning authorities to nodes because the set of nodes can change.
The set of authorities cannot change, so any subjective function may
be applied in these embodiments. Some placement schemes
automatically place authorities on storage nodes, while other
placement schemes rely on an explicit mapping of authorities to
storage nodes. In some embodiments, a pseudorandom scheme is
utilized to map from each authority to a set of candidate authority
owners. A pseudorandom data distribution function related to CRUSH
may assign authorities to storage nodes and create a list of where
the authorities are assigned. Each storage node has a copy of the
pseudorandom data distribution function, and can arrive at the same
calculation for distributing, and later finding or locating an
authority. Each of the pseudorandom schemes requires the reachable
set of storage nodes as input in some embodiments in order to
conclude the same target nodes. Once an entity has been placed in
an authority, the entity may be stored on physical devices so that
no expected failure will lead to unexpected data loss. In some
embodiments, rebalancing algorithms attempt to store the copies of
all entities within an authority in the same layout and on the same
set of machines.
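Rendezvous (highest-random-weight) hashing gives the flavor of such a scheme: every storage node that agrees on the reachable set computes the same ordered candidate list with no coordination. It is used below purely as a simple stand-in for the RUSH/CRUSH-family distribution functions named above; the node names and replica count are invented.

```python
import hashlib

def candidate_owners(authority_id: int, reachable_nodes, replicas: int = 2):
    """Deterministically rank candidate owners for an authority using
    rendezvous hashing; identical inputs yield identical output on
    every node that performs the calculation."""
    def weight(node: str) -> int:
        h = hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return sorted(reachable_nodes, key=weight, reverse=True)[:replicas]

nodes = ["node-a", "node-b", "node-c", "node-d"]
# Any node with the same reachable set concludes the same target owners.
assert candidate_owners(42, nodes) == candidate_owners(42, sorted(nodes, reverse=True))
```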
Examples of expected failures include device failures, stolen
machines, datacenter fires, and regional disasters, such as nuclear
or geological events. Different failures lead to different levels
of acceptable data loss. In some embodiments, a stolen storage node
impacts neither the security nor the reliability of the system,
while depending on system configuration, a regional event could
lead to no loss of data, a few seconds or minutes of lost updates,
or even complete data loss.
In the embodiments, the placement of data for storage redundancy is
independent of the placement of authorities for data consistency.
In some embodiments, storage nodes that contain authorities do not
contain any persistent storage. Instead, the storage nodes are
connected to non-volatile solid state storage units that do not
contain authorities. The communications interconnect between
storage nodes and non-volatile solid state storage units consists
of multiple communication technologies and has non-uniform
performance and fault tolerance characteristics. In some
embodiments, as mentioned above, non-volatile solid state storage
units are connected to storage nodes via PCI express, storage nodes
are connected together within a single chassis using Ethernet
backplane, and chassis are connected together to form a storage
cluster. Storage clusters are connected to clients using Ethernet
or Fibre Channel in some embodiments. If multiple storage clusters
are configured into a storage grid, the multiple storage clusters
are connected using the Internet or other long-distance networking
links, such as a "metro scale" link or private link that does not
traverse the internet.
Authority owners have the exclusive right to modify entities, to
migrate entities from one non-volatile solid state storage unit to
another non-volatile solid state storage unit, and to add and
remove copies of entities. This allows for maintaining the
redundancy of the underlying data. When an authority owner fails,
is going to be decommissioned, or is overloaded, the authority is
transferred to a new storage node. Transient failures make it
non-trivial to ensure that all non-faulty machines agree upon the
new authority location. The ambiguity that arises due to transient
failures can be resolved automatically by a consensus protocol such
as Paxos, hot-warm failover schemes, via manual intervention by a
remote system administrator, or by a local hardware administrator
(such as by physically removing the failed machine from the
cluster, or pressing a button on the failed machine). In some
embodiments, a consensus protocol is used, and failover is
automatic. If too many failures or replication events occur in too
short a time period, the system goes into a self-preservation mode
and halts replication and data movement activities until an
administrator intervenes in accordance with some embodiments.
As authorities are transferred between storage nodes and authority
owners update entities in their authorities, the system transfers
messages between the storage nodes and non-volatile solid state
storage units. With regard to persistent messages, messages that
have different purposes are of different types. Depending on the
type of the message, the system maintains different ordering and
durability guarantees. As the persistent messages are being
processed, the messages are temporarily stored in multiple durable
and non-durable storage hardware technologies. In some embodiments,
messages are stored in RAM, NVRAM and on NAND flash devices, and a
variety of protocols are used in order to make efficient use of
each storage medium. Latency-sensitive client requests may be
persisted in replicated NVRAM, and then later NAND, while
background rebalancing operations are persisted directly to
NAND.
Persistent messages are persistently stored prior to being
replicated. This allows the system to continue to serve client
requests despite failures and component replacement. Although many
hardware components contain unique identifiers that are visible to
system administrators, manufacturer, hardware supply chain and
ongoing monitoring quality control infrastructure, applications
running on top of the infrastructure address virtualized addresses.
These virtualized addresses do not change over the lifetime of the
storage system, regardless of component failures and replacements.
This allows each component of the storage system to be replaced
over time without reconfiguration or disruptions of client request
processing.
In some embodiments, the virtualized addresses are stored with
sufficient redundancy. A continuous monitoring system correlates
hardware and software status and the hardware identifiers. This
allows detection and prediction of failures due to faulty
components and manufacturing details. The monitoring system also
enables the proactive transfer of authorities and entities away
from impacted devices before failure occurs by removing the
component from the critical path in some embodiments.
FIG. 3 is a multiple level block diagram, showing contents of a
storage node 150 with a metadata query engine 228, and contents of
a non-volatile solid state storage 152 of the storage node 150.
Data is communicated to and from the storage node 150 by a network
interface controller (NIC) 202 in some embodiments. Each storage
node 150 has a CPU 156, and one or more non-volatile solid state
storage 152, as discussed above. Moving down one level in FIG. 3,
each non-volatile solid state storage 152 has a relatively fast
non-volatile solid state memory, such as nonvolatile random access
memory (NVRAM) 204, and flash memory 206. In some embodiments,
NVRAM 204 may be a component that does not require program/erase
cycles (DRAM, MRAM, PCM), and can be a memory that supports being
written far more often than it is read.
Moving down another level in FIG. 3, the NVRAM 204 is implemented
in one embodiment as high speed volatile memory, such as dynamic
random access memory (DRAM) 216, backed up by energy reserve 218.
Energy reserve 218 provides sufficient electrical power to keep the
DRAM 216 powered long enough for contents to be transferred to the
flash memory 206 in the event of power failure. In some
embodiments, energy reserve 218 is a capacitor, super-capacitor,
battery, or other device, that supplies a suitable supply of energy
sufficient to enable the transfer of the contents of DRAM 216 to a
stable storage medium in the case of power loss. The flash memory
206 is implemented as multiple flash dies 222, which may be
referred to as packages of flash dies 222 or an array of flash dies
222. It should be appreciated that the flash dies 222 could be
packaged in any number of ways, with a single die per package,
multiple dies per package (i.e. multichip packages), in hybrid
packages, as bare dies on a printed circuit board or other
substrate, as encapsulated dies, etc. In the embodiment shown, the
non-volatile solid state storage 152 has a controller 212 or other
processor, and an input output (I/O) port 210 coupled to the
controller 212. I/O port 210 is coupled to the CPU 156 and/or the
network interface controller 202 of the flash storage node 150.
Flash input output (I/O) port 220 is coupled to the flash dies 222,
and a direct memory access unit (DMA) 214 is coupled to the
controller 212, the DRAM 216 and the flash dies 222. In the
embodiment shown, the I/O port 210, controller 212, DMA unit 214
and flash I/O port 220 are implemented on a programmable logic
device (PLD) 208, e.g., a field programmable gate array (FPGA). In
this embodiment, each flash die 222 has pages, organized as sixteen
kB (kilobyte) pages 224, and a register 226 through which data can
be written to or read from the flash die 222. In further
embodiments, other types of solid-state memory are used in place
of, or in addition to flash memory illustrated within flash die
222.
The metadata query engine 228 can be found on one or more of the
storage nodes 150 of the storage cluster 160. A query regarding
metadata is sent (for example through the network interface
controller 202) to the storage node 150, which responds to the
query. In some embodiments, the format of the query and the
response to the query conform to the Open Database Connectivity
standard and/or provides a structured query language or other
formatting with features of a structured query language. Various
embodiments of the metadata query engine 228 have various
capabilities, some of which are discussed below. One version can
transform a portion of metadata from a native storage format into a
relational format, in response to a query. The relational format
shows one or more relationships between one portion of metadata and
another portion of metadata in some embodiments. In some
embodiments, there may be a schema defining a structure relating
portions of the metadata to one another, and the response to the
query could be expressed in terms of such a schema. While metadata
query engine 228 is illustrated as a separate unit in FIG. 3, it
should be appreciated that this is not meant to be limiting. That
is, metadata query engine 228 may be integrated with CPU 156 of
storage node 150 or PLD 208 of the storage unit 152. In some
embodiments, the authority for a particular storage node 150 may
cooperate with CPU 156 and metadata query engine 228 in order to
achieve the functionality described herein, such as receiving
queries regarding metadata, accessing metadata and generating
results according to the queries. It should be further appreciated
that metadata query engine 228 operates within a storage node, as
opposed to a compute node, to execute the functionality of the
embodiments presented.
In some embodiments, the metadata query engine 228 receives the
query and stores the query. The metadata query engine 228 populates
a file with results of the query. Alternatively, the metadata query
engine 228 updates contents of a file with results of the query, as
a response to reading the file. The file is available for reading
from the storage cluster 160, and reading the file returns the
results of the query. Some versions of metadata query engine 228
can sort, retrieve and aggregate types of metadata. There may be an
expression in the query pertaining to the metadata, and the
metadata query engine 228 can retrieve metadata pertaining to the
query, evaluate an expression in the query and return a result of
the query. Alternatively, a query could include a nested query,
which the metadata query engine 228 evaluates and applies to
metadata, and returns a result. As mentioned above, metadata query
engine 228 may be integrated with CPU 156 in some embodiments.
Metadata query engine 228 may also be software stored in memory of
storage node 150 and executed by CPU 156. In an alternative
embodiment, metadata query engine 228 may be embodied as a
programmable logic device, such as an application specific
integrated circuit or field programmable gate array. Thus, metadata
query engine 228 may be embodied as hardware, software, firmware,
or any combination of these embodiments.
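The stored-query behavior described above, where reading a results file refreshes its contents, might look roughly like the following. The class, file layout, and JSON encoding are illustrative assumptions rather than details given in the text.

```python
import json
import tempfile
from pathlib import Path

class StoredQuery:
    """Hypothetical stored query whose results file is updated on read."""

    def __init__(self, name: str, query_fn, results_dir: Path):
        self.query_fn = query_fn                  # evaluates the stored query
        self.path = results_dir / f"{name}.json"  # file readable from the cluster

    def read(self, metadata):
        # Reading re-evaluates the query and rewrites the file, so the
        # file always reflects the most recent results of the query.
        self.path.write_text(json.dumps(self.query_fn(metadata)))
        return json.loads(self.path.read_text())

q = StoredQuery("largest_files",
                lambda md: sorted(md, key=md.get, reverse=True)[:3],
                Path(tempfile.mkdtemp()))
print(q.read({"a.log": 10, "b.dat": 99, "c.bin": 42}))  # ['b.dat', 'c.bin', 'a.log']
```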
One embodiment of the metadata query engine 228 adds, updates, or
deletes metadata according to the query. The metadata may relate to
user data that is striped across the storage nodes 150 and located
via an authority as described above in some embodiments. Below are
some examples of queries which various embodiments of the metadata
query engine 228 could handle. The examples are written
descriptively and could be expressed in an appropriate format or
programming language according to a suitable specification for the
queries. A query could ask to be shown the most frequently accessed
files in a recent time span, the top ten (or other specified number
of) files by size, or the one hundred most recently accessed files,
and so on. A query could ask to aggregate the
total amount of storage capacity consumed within a specified
directory in some embodiments. A query could ask which client is
the most frequent accessor of a group of files, or who has accessed
a specified file besides the owner of the file. Relational views of
files or data could be requested in some embodiments, such as
showing which data is related to which files, or which files or
data is generated by which applications or which owners or groups,
etc. A query could be defined and stored, and the system could
output, populate or update a file or files on an ongoing basis, or
responsive to reading a file. The file could then be read, which
shows the results of the query as of the most recent update to the
file. Queries about operations, status, memory health and system
health could also be handled. Further examples of queries are
readily devised in accordance with the teachings herein. It should
be appreciated that the above examples are not meant to be limiting
as the queries may be related to any storage system telemetry,
which includes performance, access and audit data associated with
the system.
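To make the descriptive examples above concrete, the sketch below
evaluates three of them in Python over illustrative metadata rows;
the field names are invented for this example and are not the
telemetry schema of any embodiment.

```python
from collections import Counter

files = [
    {"path": "/proj/a.log", "size": 500,  "access_count": 12, "client": "c1"},
    {"path": "/proj/b.dat", "size": 9000, "access_count": 3,  "client": "c2"},
    {"path": "/home/c.txt", "size": 120,  "access_count": 40, "client": "c1"},
]

# "Top N files by size" (here N = 10).
top_by_size = sorted(files, key=lambda f: f["size"], reverse=True)[:10]

# "Aggregate the total storage capacity within a specified directory."
total_in_proj = sum(f["size"] for f in files
                    if f["path"].startswith("/proj/"))

# "Which client is the most frequent accessor of a group of files?"
by_client = Counter()
for f in files:
    by_client[f["client"]] += f["access_count"]
most_frequent_client = by_client.most_common(1)[0][0]

print(top_by_size, total_in_proj, most_frequent_client)
```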
Storage cluster 160, in various embodiments as disclosed herein,
can be contrasted with storage arrays in general. The storage nodes
150 are part of a collection that creates the storage cluster 160.
Each storage node 150 owns a slice of data and computing required
to provide the data. Multiple storage nodes 150 cooperate to store
and retrieve the data. Storage memory or storage devices, as used
in storage arrays in general, are less involved with processing and
manipulating the data. Storage memory or storage devices in a
storage array receive commands to read, write, or erase data. The
storage memory or storage devices in a storage array are not aware
of a larger system in which they are embedded, or what the data
means. Storage memory or storage devices in storage arrays can
include various types of storage memory, such as RAM, solid state
drives, hard disk drives, etc. The storage units 152 described
herein have multiple interfaces active simultaneously and serving
multiple purposes. In some embodiments, some of the functionality
of a storage node 150 is shifted into a storage unit 152,
transforming the storage unit 152 into a combination of storage
unit 152 and storage node 150. Moving computing (relative to
storage data) into the storage unit 152 places this computing
closer to the data itself. The various system embodiments have a
hierarchy of storage node layers with different capabilities. By
contrast, in a storage array, a controller owns and knows
everything about all of the data that the controller manages in a
shelf or storage devices. In a storage cluster 160, as described
herein, multiple controllers in multiple storage units 152 and/or
storage nodes 150 cooperate in various ways (e.g., for erasure
coding, data sharding, metadata communication and redundancy,
storage capacity expansion or contraction, data recovery, and so
on).
FIG. 4 is a flow diagram of a method for querying regarding
metadata in a storage cluster, which can be practiced by an
embodiment of a storage node. Particularly, the method can be
practiced by a processor of a storage node in a storage cluster, as
opposed to a compute node executing an application. In an action
402, a query is received regarding metadata. The query could be
formatted according to a structured query language or conform to
the Open Database Connectivity standard. It should be appreciated
that the query can be related to storage system telemetry, which
includes performance, access and audit data. In an action 404,
metadata is accessed according to the query. The metadata could be
in one or more of the storage nodes of the storage cluster and
located via an authority. It should be appreciated that the
metadata may reside in one or more of the storage units. In an
action 406, a result is formed according to the query. The result
could be formatted according to a structured query language or
conform to the Open Database Connectivity standard. In some
embodiments, the metadata is stored and accessed in a native
storage format and the result can be transformed to a relational
format as described above. The result of the query is returned, in
an action 408.
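As an informal illustration of the method of FIG. 4, the sketch
below performs the four actions in one function; the helper names
and the shapes of the query and the storage nodes are assumptions
made for the example, not the interfaces of any embodiment.

```python
def handle_metadata_query(query, storage_nodes):
    # Action 402: the query has been received (it is the argument).
    # Action 404: access metadata according to the query, wherever it
    # resides among the storage nodes or storage units.
    matched = [row for node in storage_nodes
               for row in node["metadata"]
               if query["predicate"](row)]
    # Action 406: form a result according to the query (here, the raw
    # rows; an engine could also transform them to a relational format).
    result = {"query_id": query["id"], "rows": matched}
    # Action 408: return the result of the query.
    return result

nodes = [{"metadata": [{"size": 5}, {"size": 50}]},
         {"metadata": [{"size": 500}]}]
print(handle_metadata_query(
    {"id": 1, "predicate": lambda r: r["size"] > 10}, nodes))
```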
FIG. 5 is a combination block and action diagram, showing the
metadata query engine 228 of FIG. 3 communicating with a master
authority 506 that distributes a query 502 or portions of a query
to authorities 168 that own ranges of user data. Various processors
in the storage system can perform the duties of the metadata query
engine(s) 228, the master authority 506 and other authorities 168.
One example arrangement is depicted and described with reference to
FIG. 6. In some embodiments, the metadata query engine 228 performs
the duties of the master authority 506. It should be appreciated
that while an embodiment employing a master authority to achieve
the communication and distribution of the query is described, this
is not meant to be limiting, as alternative means are readily
devised.
Continuing with FIG. 5, in the scenario depicted, a query 502
relating to metadata in the storage system comes into a metadata
query engine 228, in a storage node 150. This could follow the same
path as for external I/O processing. In some embodiments, there is
one metadata query engine 228 in one of the storage nodes 150 in a
storage system. In some embodiments, each storage node 150 has a
metadata query engine 228, or a portion of a distributed metadata
query engine 228. One or more compute-only nodes of the storage
system could also have a metadata query engine 228 or a portion of
a distributed metadata query engine 228. A query 502 could arrive
at any of these. The metadata query engine 228 designates a master
authority 506 in some embodiments, which could be one of the
authorities 168 in that same storage node 150 that has the metadata
query engine 228, or could be another authority 168 in another
storage node 150. A query parser 512 in the metadata query engine
228 parses the query 502. Depending upon the nature of the query,
and specific to implementations, the query parser 512 may break the
query into sub-queries. The metadata query engine 228 then forwards
the query or the sub-queries to the selected master authority
506.
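The sketch below illustrates this parse-and-forward step. Splitting
a query into one sub-query per AND-ed clause is purely an
illustrative policy, and the MasterAuthority stub stands in for
whatever entity is selected; neither is the parsing scheme of any
embodiment.

```python
def parse_query(query_text):
    """Break a query into sub-queries; here, one per AND-ed clause."""
    return [clause.strip() for clause in query_text.split(" AND ")]

class MasterAuthority:
    """Stub: distribution to the owning authorities would happen here."""
    def handle(self, sub_queries):
        return [f"dispatched: {q}" for q in sub_queries]

def forward_to_master(sub_queries, master_authority):
    """Forward the query or its sub-queries to the master authority."""
    return master_authority.handle(sub_queries)

subs = parse_query("size > 1000 AND accessed_within_days < 7")
print(forward_to_master(subs, MasterAuthority()))
```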
The master authority 506, or one of the metadata query engines 228,
or a processor operating under direction of the master authority
506, determines which authorities 168 own the data portions, and
the metadata relating to those data portions, relevant to the query
502. For example, the master authority 506 could perform a
mapping operation, mapping subject matter of the query 502 to
ranges of user data, and then mapping these to corresponding
authorities 168. Based on this mapping, the master authority 506
forwards the query 502 or a version thereof to the respective
authorities 168, or divides the query into query portions 510 and
sends these to them. The master authority 506, or more
specifically a processor or the metadata query engine 228 acting on
behalf of the master authority 506, thus distributes the query 502
or portions of the query 502 to the authorities 168 that have
ownership of the data to which the query 502 pertains. Upon
receiving the query 502 or the query portion 510, each such
authority 168 (or, more specifically, a processor acting under
direction of or on behalf of an authority 168) accesses metadata in
the storage nodes 150, gathers relevant information, and returns
the information in reply to the query 502 or query portion 510, to
the master authority 506, or in some embodiments returns the
information to the metadata query engine 228. The master authority
506 aggregates the query results 508. In some embodiments, the
metadata query engine 228 has an aggregator 514 that aggregates the
query results 508. In the embodiment shown in FIG. 5, the metadata
query engine 228 has a formatter 516 that formats or reformats this
aggregated metadata in some embodiments, and presents the query
results in a query reply 504. The query reply 504 is sent out by
the metadata query engine 228, in response to the original query
502.
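A minimal sketch of this scatter/gather flow follows: the query is
mapped to the authorities whose owned ranges it touches, each
receives its portion, and the replies are aggregated into one reply.
Modeling range ownership as a small dictionary, and the byte-offset
ranges themselves, are assumptions made for the illustration.

```python
authorities = {
    "A1": {"range": (0, 100),   "metadata": [{"offset": 10,  "size": 7}]},
    "A2": {"range": (100, 200), "metadata": [{"offset": 150, "size": 9}]},
}

def owners_of(lo, hi):
    """Authorities whose owned range overlaps the queried range."""
    return [a for a, v in authorities.items()
            if v["range"][0] < hi and lo < v["range"][1]]

def scatter_gather(query):
    lo, hi = query["offset_range"]
    replies = []
    for a in owners_of(lo, hi):                    # distribute portions
        replies.extend(r for r in authorities[a]["metadata"]
                       if lo <= r["offset"] < hi)  # each authority replies
    return {"query": query, "results": replies}    # aggregate the reply

print(scatter_gather({"offset_range": (0, 200)}))
```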
As mentioned above, various embodiments of the storage system have
and use distributed protocols for accessing data, in which there is
no single, global master. In some embodiments, each authority, as
an owner of a range of data, can generate or obtain a token that
certifies validity of an I/O command made by that authority, so as
to access data. Tokens could be voted on and signed by a witness
quorum of storage nodes. In some embodiments a token has a time
interval of validity, with a timestamp, and the token expires after
the time interval passes. Messages could be accompanied by or
reference tokens, so as to validate each message. The token
provides evidence of authority ownership that is independently
verifiable, e.g., through a signature or other readily devisable
means. In some embodiments, the token is specific to assignment of
the authority and a storage node of the storage cluster. The token
may have a timestamp, an authority identifier and/or a storage node
identifier, among various possibilities. Having a lease duration,
i.e., the time interval of validity of the token, provides the
desired properties that the token can be independently validated
without requiring a coordinated global update in order to evolve or
revoke authority ownership. In some embodiments, all commands are
validated by a receiver, i.e., a receiving authority at the storage
node in which that authority resides, against the lease token held
by the command-issuing authority at the time of command
submission.
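By way of illustration, the sketch below models such a lease token
and its receiver-side validation, with an HMAC over a shared key
standing in for the witness-quorum signature; the wire format and
the key handling are invented for the example and are not the token
scheme of any embodiment.

```python
import hashlib
import hmac
import time

QUORUM_KEY = b"shared-witness-quorum-key"  # illustrative stand-in

def issue_token(authority_id, node_id, lease_seconds=30):
    """Token tied to an authority, a storage node, a timestamp and a
    time interval of validity (the lease duration)."""
    payload = f"{authority_id}:{node_id}:{time.time()}:{lease_seconds}"
    sig = hmac.new(QUORUM_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def validate(token):
    """Receiver checks the signature independently, then the lease:
    the token expires after its time interval of validity passes."""
    expected = hmac.new(QUORUM_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    _, _, ts, lease = token["payload"].rsplit(":", 3)
    return time.time() < float(ts) + float(lease)

tok = issue_token("A7", "node-3")
print(validate(tok))  # True while the lease is unexpired
```

Because validity can be checked from the token alone, no coordinated
global update is needed to let ownership lapse; the lease simply
expires.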
An authority can delegate a task to another authority, on the same
or another storage node in the storage cluster. In versions with
tokens, this delegation could be accompanied by, associated with
and/or validated by a token. Usage of a distributed protocol, e.g.,
embodiments with tokens, prevents vulnerability of the system to a
single point of failure, as may occur with a single master. The
embodiments may utilize any known messaging system for a
distributed system, as the master/slave embodiment and the token
based embodiment (where there is no master) are two examples
provided for illustrative purposes. Under a token based protocol,
various processors of the system may be utilized to execute the
token based protocol, or any distributed protocol without a master,
in a manner similar to that discussed above with reference to FIG.
5 and the master-slave protocol. For example, in some embodiments the
metadata query engine 228 contains the logic (software, hardware,
firmware, or combinations thereof) for performing actions under a
token based protocol.
FIG. 6 is a block diagram showing an example arrangement of storage
nodes 150, non-volatile solid-state storages or storage units 152,
metadata query engines 228 and authorities 168, accessing metadata
of the storage system in answer to a query 502. This shows both the
versatility and the distributed nature of the metadata query engine
228 in a storage system. Conceptually, the metadata query engine
228 could be considered as distributed across some or all of the
storage nodes 150, or each of the storage nodes 150 could be
considered to have a copy of or an instantiation of the metadata
query engine 228. In some embodiments, the CPU 156 of the storage
node 150 performs duties of the metadata query engine 228 and the
authorities 168 in that storage node 150. In this embodiment, as
one example, the metadata is in the NVRAM 204 of each non-volatile
solid-state storage or storage unit 152. A controller 212 in the
non-volatile solid-state storage or storage unit 152 performs
accesses to metadata and manages the NVRAM 204 and the flash memory
206, as previously described. As an example scenario, the CPU 156,
as the processor of one of the storage nodes 150, could perform the
receiving of the query 502 and further duties of the metadata query
engine 228 in that storage node 150. These duties could also
include query parsing and query reply formatting. The CPU 156, as
the processor of another one of the storage nodes 150, could
perform the duties of a master authority 506 in that storage node
150, as selected by the metadata query engine 228 that receives the
query 502. And, processors in further storage nodes 150 could
perform accesses to metadata of the storage system as duties of the
authorities 168 that receive the distributed query 502 or query
portions 510.
Thus, authorities 168 to which the query 502 or query portions 510
are distributed can be in the same or differing storage nodes 150
from the storage node 150 that has the metadata query engine 228
that receives the query 502. In one example of system behavior, the
metadata includes information about changes to user data. The query
502 could be a real-time question about transactions involving user
data. Replies to the query 502 or query portions 510 would then be
based on accesses to the information about changes to user data,
which are recorded in the metadata. A query could ask about
performance of the storage system, for example to determine
performance benchmarks or determine if or why performance has
decreased. Metadata could track system states, which could be used
for tracking system performance or to determine if a user has
changed aspects of the system. As in other embodiments of the
storage system or storage cluster 160, each authority 168 owns a
range of user data, and these ranges of user data are
non-overlapping in some embodiments. That is, a range of user data
owned by one authority 168 does not overlap with a range of data
owned by another authority 168. A storage node 150 can have two or
more authorities 168. Authorities 168 can be moved from one storage
node 150 to another storage node 150, or to or from a compute-only
node. Authorities 168 can be implemented as data structures in
metadata.
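A small sketch of this non-overlapping ownership follows: because
the owned ranges do not overlap, any offset into the user data maps
to exactly one authority. The sorted lookup table is an assumption
made for the illustration.

```python
import bisect

# Sorted, non-overlapping range starts and the owning authority ids,
# e.g., A1 owns [0, 100), A2 owns [100, 200), and so on.
range_starts = [0, 100, 200, 300]
owners = ["A1", "A2", "A3", "A4"]

def owner_of(offset):
    """Exactly one authority owns any given offset, because the
    owned ranges of user data do not overlap."""
    i = bisect.bisect_right(range_starts, offset) - 1
    return owners[i]

print(owner_of(42), owner_of(150), owner_of(250))  # A1 A2 A3
```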
FIG. 7 is a flow diagram of a method for querying a storage system,
which can be practiced using a metadata query engine as shown in
FIGS. 3-6. More specifically, the method can be practiced using one
or more processors acting on behalf of one or more metadata query
engines or a distributed query engine, in a storage system. In an
action 702, a query is received. For example, the query could be
received at a storage node, or at a metadata query engine in a
storage node. In an action 704, it is determined which authorities
have ownership of ranges of user data to which the query pertains.
This action can be performed by a processor, a metadata query
engine, a master authority selected by the metadata query engine, a
processor acting on behalf of such a master authority, or through a
distributed protocol without a master, e.g., a token based
protocol.
In an action 706, the query, or portions of the query, are
distributed to the authorities that have ownership of the ranges of
user data to which the query pertains. This operation can be
executed by the structural entities named above. Each authority
accesses metadata, in an action 708. Each authority forms a reply
to the query or portion of the query, in an action 710. These
authorities can be in various storage nodes in the storage system,
and are not necessarily in the same storage node as the storage
node that has the metadata query engine, although they can be.
Replies from the authorities are aggregated to form the query
reply, in an action 712. Such aggregating can be performed by the
master authority, by the metadata query engine, or through a token
based protocol. In some embodiments, in an action 714, the query
reply is formatted. This can be performed by the metadata query
engine.
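The sketch below strings the actions of FIG. 7 together as one
pipeline, with a stub per action; the decomposition into these
particular helpers is illustrative rather than prescriptive.

```python
def receive(query):                      # action 702
    return query

def determine_owners(query):             # action 704
    return ["A1", "A2"]                  # stub: owners of the ranges

def distribute(query, owners):           # action 706
    return {a: query for a in owners}    # query or portions per authority

def reply_from(authority, portion):      # actions 708 and 710
    return {"authority": authority, "rows": []}  # stub metadata access

def aggregate(replies):                  # action 712
    return [r for reply in replies for r in reply["rows"]]

def format_reply(rows):                  # action 714
    return {"results": rows}

q = receive({"text": "top files by size"})
portions = distribute(q, determine_owners(q))
print(format_reply(aggregate(
    [reply_from(a, p) for a, p in portions.items()])))
```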
It should be appreciated that the methods described herein may be
performed with a digital processing system, such as a conventional,
general-purpose computer system. Special purpose computers, which
are designed or programmed to perform only one function, may be used
in the alternative. FIG. 8 is an illustration showing an exemplary
computing device which may implement the embodiments described
herein. The computing device of FIG. 8 may be used to perform
embodiments of the functionality for a metadata query engine in a
storage system in accordance with some embodiments. The computing
device includes a central processing unit (CPU) 801, which is
coupled through a bus 805 to a memory 803 and a mass storage device
807. Mass storage device 807 represents a persistent data storage
device such as a disc drive, which may be local or remote in some
embodiments. The mass storage device 807 could implement a backup
storage, in some embodiments. Memory 803 may include read only
memory, random access memory, etc. Applications resident on the
computing device may be stored on or accessed via a computer
readable medium such as memory 803 or mass storage device 807 in
some embodiments. Applications may also be in the form of modulated
electronic signals accessed via a network modem or other
network interface of the computing device. It should be appreciated
that CPU 801 may be embodied in a general-purpose processor, a
special purpose processor, or a specially programmed logic device
in some embodiments.
Display 811 is in communication with CPU 801, memory 803, and mass
storage device 807, through bus 805. Display 811 is configured to
display any visualization tools or reports associated with the
system described herein. Input/output device 809 is coupled to bus
805 in order to communicate information in command selections to
CPU 801. It should be appreciated that data to and from external
devices may be communicated through the input/output device 809.
CPU 801 can be defined to execute the functionality described
herein to enable the functionality described with reference to
FIGS. 1-7. The code embodying this functionality may be stored
within memory 803 or mass storage device 807 for execution by a
processor such as CPU 801 in some embodiments. The operating system
on the computing device may be MS-WINDOWS™, UNIX™, LINUX™,
iOS™, CentOS™, Android™, Redhat Linux™, z/OS™, or
other known operating systems. It should be appreciated that the
embodiments described herein may also be integrated with a
virtualized computing system implemented with physical computing
resources.
Detailed illustrative embodiments are disclosed herein. However,
specific functional details disclosed herein are merely
representative for purposes of describing embodiments. Embodiments
may, however, be embodied in many alternate forms and should not be
construed as limited to only the embodiments set forth herein.
It should be understood that although the terms first, second, etc.
may be used herein to describe various steps or calculations, these
steps or calculations should not be limited by these terms. These
terms are only used to distinguish one step or calculation from
another. For example, a first calculation could be termed a second
calculation, and, similarly, a second step could be termed a first
step, without departing from the scope of this disclosure. As used
herein, the term "and/or" and the "/" symbol includes any and all
combinations of one or more of the associated listed items.
As used herein, the singular forms "a", "an" and "the" are intended
to include the plural forms as well, unless the context clearly
indicates otherwise. It will be further understood that the terms
"comprises", "comprising", "includes", and/or "including", when
used herein, specify the presence of stated features, integers,
steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers,
steps, operations, elements, components, and/or groups thereof.
Therefore, the terminology used herein is for the purpose of
describing particular embodiments only and is not intended to be
limiting.
It should also be noted that in some alternative implementations,
the functions/acts noted may occur out of the order noted in the
figures. For example, two figures shown in succession may in fact
be executed substantially concurrently or may sometimes be executed
in the reverse order, depending upon the functionality/acts
involved.
With the above embodiments in mind, it should be understood that
the embodiments might employ various computer-implemented
operations involving data stored in computer systems. These
operations are those requiring physical manipulation of physical
quantities. Usually, though not necessarily, these quantities take
the form of electrical or magnetic signals capable of being stored,
transferred, combined, compared, and otherwise manipulated.
Further, the manipulations performed are often referred to in
terms, such as producing, identifying, determining, or comparing.
Any of the operations described herein that form part of the
embodiments are useful machine operations. The embodiments also
relate to a device or an apparatus for performing these operations.
The apparatus can be specially constructed for the required
purpose, or the apparatus can be a general-purpose computer
selectively activated or configured by a computer program stored in
the computer. In particular, various general-purpose machines can
be used with computer programs written in accordance with the
teachings herein, or it may be more convenient to construct a more
specialized apparatus to perform the required operations.
A module, an application, a layer, an agent or other
method-operable entity could be implemented as hardware, firmware,
or a processor executing software, or combinations thereof. It
should be appreciated that, where a software-based embodiment is
disclosed herein, the software can be embodied in a physical
machine such as a controller. For example, a controller could
include a first module and a second module. A controller could be
configured to perform various actions, e.g., of a method, an
application, a layer or an agent.
The embodiments can also be embodied as computer readable code on a
non-transitory computer readable medium. The computer readable
medium is any data storage device that can store data, which can be
thereafter read by a computer system. Examples of the computer
readable medium include hard drives, network attached storage
(NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs,
CD-RWs, magnetic tapes, and other optical and non-optical data
storage devices. The computer readable medium can also be
distributed over a network-coupled computer system so that the
computer readable code is stored and executed in a distributed
fashion. Embodiments described herein may be practiced with various
computer system configurations including hand-held devices,
tablets, microprocessor systems, microprocessor-based or
programmable consumer electronics, minicomputers, mainframe
computers and the like. The embodiments can also be practiced in
distributed computing environments where tasks are performed by
remote processing devices that are linked through a wire-based or
wireless network.
Although the method operations were described in a specific order,
it should be understood that other operations may be performed in
between described operations, described operations may be adjusted
so that they occur at slightly different times or the described
operations may be distributed in a system which allows the
occurrence of the processing operations at various intervals
associated with the processing.
In various embodiments, one or more portions of the methods and
mechanisms described herein may form part of a cloud-computing
environment. In such embodiments, resources may be provided over
the Internet as services according to one or more various models.
Such models may include Infrastructure as a Service (IaaS),
Platform as a Service (PaaS), and Software as a Service (SaaS). In
IaaS, computer infrastructure is delivered as a service. In such a
case, the computing equipment is generally owned and operated by
the service provider. In the PaaS model, software tools and
underlying equipment used by developers to develop software
solutions may be provided as a service and hosted by the service
provider. SaaS typically includes a service provider licensing
software as a service on demand. The service provider may host the
software, or may deploy the software to a customer for a given
period of time. Numerous combinations of the above models are
possible and are contemplated.
Various units, circuits, or other components may be described or
claimed as "configured to" perform a task or tasks. In such
contexts, the phrase "configured to" is used to connote structure
by indicating that the units/circuits/components include structure
(e.g., circuitry) that performs the task or tasks during operation.
As such, the unit/circuit/component can be said to be configured to
perform the task even when the specified unit/circuit/component is
not currently operational (e.g., is not on). The
units/circuits/components used with the "configured to" language
include hardware: for example, circuits, memory storing program
instructions executable to implement the operation, etc. Reciting
that a unit/circuit/component is "configured to" perform one or
more tasks is expressly intended not to invoke 35 U.S.C. 112, sixth
paragraph, for that unit/circuit/component. Additionally,
"configured to" can include generic structure (e.g., generic
circuitry) that is manipulated by software and/or firmware (e.g.,
an FPGA or a general-purpose processor executing software) to
operate in a manner that is capable of performing the task(s) at
issue. "Configured to" may also include adapting a manufacturing
process (e.g., a semiconductor fabrication facility) to fabricate
devices (e.g., integrated circuits) that are adapted to implement
or perform one or more tasks.
The foregoing description, for the purpose of explanation, has been
described with reference to specific embodiments. However, the
illustrative discussions above are not intended to be exhaustive or
to limit the invention to the precise forms disclosed. Many
modifications and variations are possible in view of the above
teachings. The embodiments were chosen and described in order to
best explain the principles of the embodiments and their practical
applications, to thereby enable others skilled in the art to best
utilize the embodiments and various modifications as may be suited
to the particular use contemplated. Accordingly, the present
embodiments are to be considered as illustrative and not
restrictive, and the invention is not to be limited to the details
given herein, but may be modified within the scope and equivalents
of the appended claims.
* * * * *