U.S. patent application number 15/900525 was filed with the patent office on 2018-02-20 and published on 2018-07-05 for managed storage unit shutdown in a distributed storage network.
The applicant listed for this patent application is International Business Machines Corporation. The invention is credited to Andrew D. Baptist, Wesley B. Leggette, and Jason K. Resch.
United States Patent Application 20180188964
Kind Code: A1
Application Number: 15/900525
Family ID: 62711716
Filed: February 20, 2018
Published: July 5, 2018
Inventors: Resch, Jason K.; et al.
MANAGED STORAGE UNIT SHUTDOWN IN A DISTRIBUTED STORAGE NETWORK
Abstract
A method begins by a load balancing module of a distributed
storage network (DSN) receiving status information from a set of
DSN processing units and selecting, based on the status information,
a DSN processing unit from the set of DSN processing units to
process a data access request. The method continues with the load
balancing module transmitting the data access request to the
selected DSN processing unit and receiving an indication of
unfavorable performance from the DSN processing unit. The method
continues with the load balancing module cancelling the data access
request and receiving a second indication from the DSN processing
unit, wherein the second indication indicates favorable performance.
Inventors: Resch, Jason K. (Chicago, IL); Baptist, Andrew D. (Mt. Pleasant, WI); Leggette, Wesley B. (Chicago, IL)

Applicant: International Business Machines Corporation, Armonk, NY, US

Family ID: 62711716
Appl. No.: 15/900525
Filed: February 20, 2018
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number | Parent of
15/345,172         | Nov 7, 2016  | 9,927,976     | 15/900,525
14/307,625         | Jun 18, 2014 | 9,495,118     | 15/345,172
61/860,498         | Jul 31, 2013 |               | 14/307,625
Current U.S. Class: 1/1

Current CPC Class: G06F 3/0644 (20130101); H04L 67/18 (20130101); H04L 67/1097 (20130101); G06F 3/0635 (20130101); G06F 3/0604 (20130101); G06F 3/067 (20130101); H04L 67/1008 (20130101); G06F 2206/1012 (20130101); G06F 3/0638 (20130101); G06F 2211/1028 (20130101); H04L 67/306 (20130101); G06F 11/1076 (20130101); G06F 3/0619 (20130101); H03M 13/373 (20130101); H04L 67/025 (20130101); G06F 3/061 (20130101); G06F 3/064 (20130101); H03M 13/3761 (20130101); H03M 13/1515 (20130101)

International Class: G06F 3/06 (20060101) G06F003/06; H04L 29/08 (20060101) H04L029/08; G06F 11/10 (20060101) G06F011/10
Claims
1. A method for execution by a load balancing module of a dispersed
storage network (DSN), the method comprises: receiving, at the load
balancing module, status information from each DSN processing unit
of a set of DSN processing units; based on the status information,
selecting, by the load balancing module, a DSN processing unit of
the set of DSN processing units to process a data access request;
transmitting, by the load balancing module, the data access request
to the DSN processing unit selected to process the data access
request; receiving, at the load balancing module, a first
indication from the DSN processing unit, wherein the DSN processing
unit continues to process previously pending data access requests,
and further wherein the first indication indicates unfavorable
performance; detecting, by the load balancing module, the first
indication from the DSN processing unit; based on the first
indication, cancelling, by the load balancing module, the selecting
the DSN processing unit to process the data access request; and
receiving, at the load balancing module, a second indication from
the DSN processing unit, wherein the second indication indicates
favorable performance.
2. The method of claim 1, wherein the status information from each
DSN processing unit of the set of DSN processing units includes at
least one of an indicator of a time frame of availability, an
indicator of a time frame of unavailability, a time frame for a
scheduled software update, a time frame for a scheduled new
hardware addition, an error message, a maintenance schedule, a
communications error rate, and a storage error rate.
3. The method of claim 1, wherein the first indication indicates
unfavorable performance based on the DSN processing unit
determining to temporarily suspend operation.
4. The method of claim 3, wherein the determining, by the DSN
processing unit to temporarily suspend operation is based on at
least one of adding new software, activating new hardware,
recovering from a storage error, recovering from a communications
error, receiving a suspend request, and interpreting a maintenance
schedule.
5. The method of claim 1, wherein the previously pending data
access requests include one or more encoded data slice access
activities.
6. The method of claim 1, wherein the selecting, by the load
balancing module, a DSN processing unit of the set of DSN
processing units to process a data access request is further based
on at least one of receiving resource status information,
initiating a query, receiving an error message, and detecting an
unfavorable performance state.
7. The method of claim 1, wherein the selecting, by the load
balancing module, a DSN processing unit of the set of DSN
processing units to process a data access request is further based
on the DSN processing unit comparing favorably to availability of
other DSN processing units of the set of DSN processing units.
8. The method of claim 1, wherein the first indication indicating
unfavorable performance includes the DSN processing unit doing at
least one of ignoring the request, sending a late unfavorable
response, issuing unfavorable resource status information, and
ignoring resource status requests from the load balancing
module.
9. The method of claim 1, further comprising: after cancelling, by
the load balancing module, the selecting the DSN processing unit to
process the data access request, selecting another DSN processing
unit to process the data access request.
10. A computing device comprises: an interface for interfacing with
a network; memory; and a processing module operably coupled to the
interface and to the memory, wherein the processing module is
operable to: receive status information from each DSN processing
unit of a set of DSN processing units; based on the status
information, select a DSN processing unit of the set of DSN
processing units to process a data access request; transmit the
data access request to the DSN processing unit selected to process
a data access request; receive a first indication from the DSN
processing unit, wherein the DSN processing unit continues to
process previously accepted data access requests, and further
wherein the first indication indicates unfavorable performance;
detect the first indication from the DSN processing unit; based on
the first indication, cancel the data access request for the DSN
processing unit; and receive a second indication from the DSN
processing unit, wherein the second indication indicates favorable
performance.
11. The computing device of claim 10, wherein the status
information from each DSN processing unit of the set of DSN
processing units includes at least one of an indicator of a time
frame of availability, an indicator of a time frame of
unavailability, a time frame for a scheduled software update, a
time frame for a scheduled new hardware addition, an error message,
a maintenance schedule, a communications error rate, and a storage
error rate.
12. The computing device of claim 10, wherein the first indication
indicates unfavorable performance based on a temporary suspension
of operation.
13. The computing device of claim 12, wherein the DSN processing
unit has determined to temporarily suspend operation based on at
least one of adding new software, activating new hardware,
recovering from a storage error, recovering from a communications
error, receiving a suspend request, and interpreting a maintenance
schedule.
14. The computing device of claim 10, wherein the previously
accepted data access requests include one or more encoded data slice
access activities.
15. The computing device of claim 10, wherein the processing module
is further operable to select a DSN processing unit of
the set of DSN processing units to process a data access request
further based on at least one of receiving resource status
information, initiating a query, receiving an error message, and
detecting an unfavorable performance state.
16. The computing device of claim 10, wherein the processing module
is further operable to select a DSN processing unit of the set of
DSN processing units to process a data access request further based
on the DSN processing unit comparing favorably to
availability of other DSN processing units of the set of DSN
processing units.
17. The computing device of claim 10, wherein the first indication
includes the DSN processing unit executing at least one of ignoring
the request, sending a late unfavorable response, issuing
unfavorable resource status information, and ignoring resource
status requests from the computing device.
18. The computing device of claim 10, wherein the processing module
is further operable to select another DSN processing unit to process
the data access request.
19. A computer readable storage medium comprises: at least one
memory section that stores operational instructions that, when
executed by one or more processing resources of a plurality of
processing resources of one or more computing devices of a
distributed network, cause the one or more computing devices to:
receive status information from each DSN processing unit of a set
of DSN processing units; based on the status information, select a
DSN processing unit of the set of DSN processing units to process a
data access request; transmit the data access request to the DSN
processing unit selected to process a data access request; receive
a first indication from the DSN processing unit, wherein the DSN
processing unit continues to process previously accepted data
access requests, and further wherein the first indication indicates
unfavorable performance; detect the first indication from the DSN
processing unit; based on the first indication, cancel the data
access request for the DSN processing unit; and receive a second
indication from the DSN processing unit, wherein the second
indication indicates favorable performance.
20. The computer readable storage medium of claim 19, wherein the
status information from each DSN processing unit of the set of DSN
processing units includes at least one of an indicator of a time
frame of availability, an indicator of a time frame of
unavailability, a time frame for a scheduled software update, a
time frame for a scheduled new hardware addition, an error message,
a maintenance schedule, a communications error rate, and a storage
error rate.
Description
CROSS REFERENCE TO RELATED PATENTS
[0001] The present U.S. Utility Patent Application claims priority
pursuant to 35 U.S.C. § 120 as a continuation-in-part of U.S.
Utility application Ser. No. 15/345,172, entitled "STORING DATA IN
A DIRECTORY-LESS DISPERSED STORAGE NETWORK," filed Nov. 7, 2016,
which claims priority pursuant to 35 U.S.C. § 121 as a
divisional of U.S. Utility application Ser. No. 14/307,625,
entitled "STORING DATA IN A DIRECTORY-LESS DISPERSED STORAGE
NETWORK", filed Jun. 18, 2014, now U.S. Pat. No. 9,495,118, issued
on Nov. 15, 2016, which claims priority pursuant to 35 U.S.C.
§ 119(e) to U.S. Provisional Application No. 61/860,498,
entitled "DISPERSED STORAGE AND COMPUTING NETWORK COMPONENTS AND
OPTIMIZATIONS", filed Jul. 31, 2013, all of which are hereby
incorporated herein by reference in their entirety and made part of
the present U.S. Utility Patent Application for all purposes.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] Not applicable.
INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT
DISC
[0003] Not applicable.
BACKGROUND OF THE INVENTION
Technical Field of the Invention
[0004] The present disclosure relates generally to computer
networks and more particularly to dispersed storage of data and
distributed task processing of data.
Description of Related Art
[0005] Computing devices are known to communicate data, process
data, and/or store data. Such computing devices range from wireless
smart phones, laptops, tablets, personal computers (PC),
workstations, and video game devices, to data centers that support
millions of web searches, stock trades, or on-line purchases every
day. In general, a computing device includes a central processing
unit (CPU), a memory system, user input/output interfaces,
peripheral device interfaces, and an interconnecting bus
structure.
[0006] As is further known, a computer may effectively extend its
CPU by using "cloud computing" to perform one or more computing
functions (e.g., a service, an application, an algorithm, an
arithmetic logic function, etc.) on behalf of the computer.
Further, for large services, applications, and/or functions, cloud
computing may be performed by multiple cloud computing resources in
a distributed manner to improve the response time for completion of
the service, application, and/or function. For example, Hadoop is
an open source software framework that supports distributed
applications enabling application execution by thousands of
computers.
[0007] In addition to cloud computing, a computer may use "cloud
storage" as part of its memory system. As is known, cloud storage
enables a user, via its computer, to store files, applications,
etc. on an Internet storage system. The Internet storage system may
include a RAID (redundant array of independent disks) system and/or
a dispersed storage system that uses an error correction scheme to
encode data for storage.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
[0008] FIG. 1 is a schematic block diagram of an embodiment of a
dispersed or distributed storage network (DSN) in accordance with
the present invention;
[0009] FIG. 2 is a schematic block diagram of an embodiment of a
computing core in accordance with the present invention;
[0010] FIG. 3 is a schematic block diagram of an example of
dispersed storage error encoding of data in accordance with the
present invention;
[0011] FIG. 4 is a schematic block diagram of a generic example of
an error encoding function in accordance with the present
invention;
[0012] FIG. 5 is a schematic block diagram of a specific example of
an error encoding function in accordance with the present
invention;
[0013] FIG. 6 is a schematic block diagram of an example of a slice
name of an encoded data slice (EDS) in accordance with the present
invention;
[0014] FIG. 7 is a schematic block diagram of an example of
dispersed storage error decoding of data in accordance with the
present invention;
[0015] FIG. 8 is a schematic block diagram of a generic example of
an error decoding function in accordance with the present
invention;
[0016] FIG. 9A is a schematic block diagram of another embodiment
of a dispersed storage network (DSN) system in accordance with the
present disclosure;
[0017] FIG. 9B is a diagram illustrating an example of
load-balancing in accordance with the present disclosure;
[0018] FIG. 10A is a schematic block diagram of another embodiment
of a distributed storage (DS) execution unit in accordance with the
present disclosure;
[0019] FIG. 10B is a diagram illustrating an example of memory
utilization in accordance with the present disclosure;
[0020] FIG. 10C is a diagram illustrating another example of memory
utilization in accordance with the present disclosure;
[0021] FIG. 10D is a flowchart illustrating an example of updating
memory utilization information in accordance with the present
disclosure;
[0022] FIG. 10E is a flowchart illustrating example ways to
identify slices needing a rebuild in accordance with the present
disclosure;
[0023] FIG. 10F is a flowchart illustrating another example of
updating memory utilization information;
[0024] FIG. 10G is a schematic block diagram illustrating an
example DS client module structure for memory utilization;
[0025] FIG. 11A is a schematic block diagram of another embodiment
of a dispersed storage network (DSN) system in accordance with the
present disclosure;
[0026] FIG. 11B is a diagram illustrating an example of generating
a slice name in accordance with the present disclosure;
[0027] FIG. 11C is a flowchart illustrating an example of
co-locating storage of data in accordance with the present
disclosure;
[0028] FIG. 11D is a flowchart illustrating one example of
obtaining the plurality of sets of encoded data slices to be
co-located; and
[0029] FIG. 11E is a schematic block diagram of another embodiment
of a dispersed storage network (DSN) system in accordance with the
present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
[0030] FIG. 1 is a schematic block diagram of an embodiment of a
dispersed, or distributed, storage network (DSN) 10 that includes a
plurality of computing devices 12-16, a managing unit 18, an
integrity processing unit 20, and a DSN memory 22. The components
of the DSN 10 are coupled to a network 24, which may include one or
more wireless and/or wire lined communication systems; one or more
non-public intranet systems and/or public internet systems; and/or
one or more local area networks (LAN) and/or wide area networks
(WAN).
[0031] The DSN memory 22 includes a plurality of storage units 36
that may be located at geographically different sites (e.g., one in
Chicago, one in Milwaukee, etc.), at a common site, or a
combination thereof. For example, if the DSN memory 22 includes
eight storage units 36, each storage unit is located at a different
site. As another example, if the DSN memory 22 includes eight
storage units 36, all eight storage units are located at the same
site. As yet another example, if the DSN memory 22 includes eight
storage units 36, a first pair of storage units are at a first
common site, a second pair of storage units are at a second common
site, a third pair of storage units are at a third common site, and
a fourth pair of storage units are at a fourth common site. Note
that a DSN memory 22 may include more or fewer than eight storage
units 36. Further note that each storage unit 36 includes a
computing core (as shown in FIG. 2, or components thereof) and a
plurality of memory devices for storing dispersed error encoded
data.
[0032] Each of the computing devices 12-16, the managing unit 18,
and the integrity processing unit 20 include a computing core 26,
which includes network interfaces 30-33. Computing devices 12-16
may each be a portable computing device and/or a fixed computing
device. A portable computing device may be a social networking
device, a gaming device, a cell phone, a smart phone, a digital
assistant, a digital music player, a digital video player, a laptop
computer, a handheld computer, a tablet, a video game controller,
and/or any other portable device that includes a computing core. A
fixed computing device may be a computer (PC), a computer server, a
cable set-top box, a satellite receiver, a television set, a
printer, a fax machine, home entertainment equipment, a video game
console, and/or any type of home or office computing equipment.
Note that each of the managing unit 18 and the integrity processing
unit 20 may be separate computing devices, may be a common
computing device, and/or may be integrated into one or more of the
computing devices 12-16 and/or into one or more of the storage
units 36.
[0033] Each interface 30, 32, and 33 includes software and hardware
to support one or more communication links via the network 24
indirectly and/or directly. For example, interface 30 supports a
communication link (e.g., wired, wireless, direct, via a LAN, via
the network 24, etc.) between computing devices 14 and 16. As
another example, interface 32 supports communication links (e.g., a
wired connection, a wireless connection, a LAN connection, and/or
any other type of connection to/from the network 24) between
computing devices 12 and 16 and the DSN memory 22. As yet another
example, interface 33 supports a communication link for each of the
managing unit 18 and the integrity processing unit 20 to the
network 24.
[0034] Computing devices 12 and 16 include a dispersed storage (DS)
client module 34, which enables the computing device to dispersed
storage error encode and decode data (e.g., data 40) as
subsequently described with reference to one or more of FIGS. 3-8.
In this example embodiment, computing device 16 functions as a
dispersed storage processing agent for computing device 14. In this
role, computing device 16 dispersed storage error encodes and
decodes data on behalf of computing device 14. With the use of
dispersed storage error encoding and decoding, the DSN 10 is
tolerant of a significant number of storage unit failures (the
number of failures is based on parameters of the dispersed storage
error encoding function) without loss of data and without the need
for redundant or backup copies of the data. Further, the DSN 10
stores data for an indefinite period of time without data loss and
in a secure manner (e.g., the system is very resistant to
unauthorized attempts at accessing the data).
[0035] In operation, the managing unit 18 performs DS management
services. For example, the managing unit 18 establishes distributed
data storage parameters (e.g., vault creation, distributed storage
parameters, security parameters, billing information, user profile
information, etc.) for computing devices 12-14 individually or as
part of a group of user devices. As a specific example, the
managing unit 18 coordinates creation of a vault (e.g., a virtual
memory block associated with a portion of an overall namespace of
the DSN) within the DSN memory 22 for a user device, a group of
devices, or for public access and establishes per vault dispersed
storage (DS) error encoding parameters for a vault. The managing
unit 18 facilitates storage of DS error encoding parameters for
each vault by updating registry information of the DSN 10, where
the registry information may be stored in the DSN memory 22, a
computing device 12-16, the managing unit 18, and/or the integrity
processing unit 20.
[0036] The managing unit 18 creates and stores user profile
information (e.g., an access control list (ACL)) in local memory
and/or within memory of the DSN memory 22. The user profile
information includes authentication information, permissions,
and/or the security parameters. The security parameters may include
encryption/decryption scheme, one or more encryption keys, key
generation scheme, and/or data encoding/decoding scheme.
[0037] The managing unit 18 creates billing information for a
particular user, a user group, a vault access, public vault access,
etc. For instance, the managing unit 18 tracks the number of times
a user accesses a non-public vault and/or public vaults, which can
be used to generate per-access billing information. In another
instance, the managing unit 18 tracks the amount of data stored
and/or retrieved by a user device and/or a user group, which can be
used to generate per-data-amount billing information.
[0038] As another example, the managing unit 18 performs network
operations, network administration, and/or network maintenance.
Network operations includes authenticating user data allocation
requests (e.g., read and/or write requests), managing creation of
vaults, establishing authentication credentials for user devices,
adding/deleting components (e.g., user devices, storage units,
and/or computing devices with a DS client module 34) to/from the
DSN 10, and/or establishing authentication credentials for the
storage units 36. Network administration includes monitoring
devices and/or units for failures, maintaining vault information,
determining device and/or unit activation status, determining
device and/or unit loading, and/or determining any other system
level operation that affects the performance level of the DSN 10.
Network maintenance includes facilitating replacing, upgrading,
repairing, and/or expanding a device and/or unit of the DSN 10.
[0039] The integrity processing unit 20 performs rebuilding of
`bad` or missing encoded data slices. At a high level, the
integrity processing unit 20 performs rebuilding by periodically
attempting to retrieve/list encoded data slices, and/or slice names
of the encoded data slices, from the DSN memory 22. Retrieved
encoded slices are checked for errors due to data corruption,
outdated version, etc. If a slice includes an error, it is flagged
as a `bad` slice. Encoded data slices that were not received
and/or not listed are flagged as missing slices. Bad and/or
missing slices are subsequently rebuilt using other retrieved
encoded data slices that are deemed to be good slices to produce
rebuilt slices. The rebuilt slices are stored in the DSN memory
22.
[0040] FIG. 2 is a schematic block diagram of an embodiment of a
computing core 26 that includes a processing module 50, a memory
controller 52, main memory 54, a video graphics processing unit 55,
an input/output (IO) controller 56, a peripheral component
interconnect (PCI) interface 58, an IO interface module 60, at
least one IO device interface module 62, a read only memory (ROM)
basic input output system (BIOS) 64, and one or more memory
interface modules. The one or more memory interface module(s)
includes one or more of a universal serial bus (USB) interface
module 66, a host bus adapter (HBA) interface module 68, a network
interface module 70, a flash interface module 72, a hard drive
interface module 74, and a DSN interface module 76.
[0041] The DSN interface module 76 functions to mimic a
conventional operating system (OS) file system interface (e.g.,
network file system (NFS), flash file system (FFS), disk file
system (DFS), file transfer protocol (FTP), web-based distributed
authoring and versioning (WebDAV), etc.) and/or a block memory
interface (e.g., small computer system interface (SCSI), internet
small computer system interface (iSCSI), etc.). The DSN interface
module 76 and/or the network interface module 70 may function as
one or more of the interface 30-33 of FIG. 1. Note that the IO
device interface module 62 and/or the memory interface modules
66-76 may be collectively or individually referred to as IO
ports.
[0042] FIG. 3 is a schematic block diagram of an example of
dispersed storage error encoding of data. When a computing device
12 or 16 has data to store, it dispersed storage error encodes the
data in accordance with a dispersed storage error encoding process
based on dispersed storage error encoding parameters. The dispersed
storage error encoding parameters include an encoding function
(e.g., information dispersal algorithm, Reed-Solomon, Cauchy
Reed-Solomon, systematic encoding, non-systematic encoding, on-line
codes, etc.), a data segmenting protocol (e.g., data segment size,
fixed, variable, etc.), and per data segment encoding values. The
per data segment encoding values include a total, or pillar width,
number (T) of encoded data slices per encoding of a data segment
(i.e., in a set of encoded data slices); a decode threshold number
(D) of encoded data slices of a set of encoded data slices that are
needed to recover the data segment; a read threshold number (R) of
encoded data slices to indicate a number of encoded data slices per
set to be read from storage for decoding of the data segment;
and/or a write threshold number (W) to indicate a number of encoded
data slices per set that must be accurately stored before the
encoded data segment is deemed to have been properly stored. The
dispersed storage error encoding parameters may further include
slicing information (e.g., the number of encoded data slices that
will be created for each data segment) and/or slice security
information (e.g., per encoded data slice encryption, compression,
integrity checksum, etc.).
[0043] In the present example, Cauchy Reed-Solomon has been
selected as the encoding function (a generic example is shown in
FIG. 4 and a specific example is shown in FIG. 5); the data
segmenting protocol is to divide the data object into fixed sized
data segments; and the per data segment encoding values include: a
pillar width of 5, a decode threshold of 3, a read threshold of 4,
and a write threshold of 4. In accordance with the data segmenting
protocol, the computing device 12 or 16 divides the data (e.g., a
file (e.g., text, video, audio, etc.), a data object, or other data
arrangement) into a plurality of fixed sized data segments (e.g., 1
through Y of a fixed size in the range of kilobytes to terabytes or
more). The number of data segments created is dependent on the size
of the data and the data segmenting protocol.
[0044] The computing device 12 or 16 then dispersed storage error
encodes a data segment using the selected encoding function (e.g.,
Cauchy Reed-Solomon) to produce a set of encoded data slices. FIG.
4 illustrates a generic Cauchy Reed-Solomon encoding function,
which includes an encoding matrix (EM), a data matrix (DM), and a
coded matrix (CM). The size of the encoding matrix (EM) is
dependent on the pillar width number (T) and the decode threshold
number (D) of selected per data segment encoding values. To produce
the data matrix (DM), the data segment is divided into a plurality
of data blocks and the data blocks are arranged into D number of
rows with Z data blocks per row. Note that Z is a function of the
number of data blocks created from the data segment and the decode
threshold number (D). The coded matrix is produced by matrix
multiplying the data matrix by the encoding matrix.
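The coded-matrix computation lends itself to a short sketch. The Python fragment below (illustrative only; the name encode_segment is invented here, and a real dispersed storage implementation performs the arithmetic over a finite field such as GF(2^8) rather than the plain integer arithmetic shown) captures the shape of the CM = EM x DM step:

    import numpy as np

    def encode_segment(data_blocks, encoding_matrix):
        """Produce the coded matrix CM = EM x DM of FIG. 4.

        data_blocks: D x Z data matrix (D = decode threshold, Z blocks per row)
        encoding_matrix: T x D encoding matrix (T = pillar width)
        Returns the T rows of CM; row i holds the blocks of encoded data slice i.
        NOTE: integer arithmetic is used only to show the computation's shape;
        production systems use finite-field math so that any D rows decode.
        """
        DM = np.asarray(data_blocks)
        EM = np.asarray(encoding_matrix)
        CM = EM @ DM  # T x Z coded matrix
        return list(CM)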
[0045] FIG. 5 illustrates a specific example of Cauchy Reed-Solomon
encoding with a pillar number (T) of five and decode threshold
number of three. In this example, a first data segment is divided
into twelve data blocks (D1-D12). The coded matrix includes five
rows of coded data blocks, where the first row of X11-X14
corresponds to a first encoded data slice (EDS 1_1), the second row
of X21-X24 corresponds to a second encoded data slice (EDS 2_1),
the third row of X31-X34 corresponds to a third encoded data slice
(EDS 3_1), the fourth row of X41-X44 corresponds to a fourth
encoded data slice (EDS 4_1), and the fifth row of X51-X54
corresponds to a fifth encoded data slice (EDS 5_1). Note that the
second number of the EDS designation corresponds to the data
segment number.
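Reusing the sketch above with the FIG. 5 parameters (T=5, D=3, twelve data blocks arranged as three rows of four), the matrix values below are hypothetical placeholders rather than an actual Cauchy matrix:

    # D1-D12 arranged into D=3 rows of Z=4 blocks each.
    DM = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12]]
    # Hypothetical 5x3 encoding matrix; a real Cauchy Reed-Solomon matrix
    # would be used in practice.
    EM = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 1],
          [1, 1, 1],
          [1, 2, 3]]
    slices = encode_segment(DM, EM)  # rows map to EDS 1_1 .. EDS 5_1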
[0046] Returning to the discussion of FIG. 3, the computing device
also creates a slice name (SN) for each encoded data slice (EDS) in
the set of encoded data slices. A typical format for a slice name
80 is shown in FIG. 6. As shown, the slice name (SN) 80 includes a
pillar number of the encoded data slice (e.g., one of 1-T), a data
segment number (e.g., one of 1-Y), a vault identifier (ID), a data
object identifier (ID), and may further include revision level
information of the encoded data slices. The slice name functions
as, at least part of, a DSN address for the encoded data slice for
storage and retrieval from the DSN memory 22.
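The slice-name fields of FIG. 6 can be pictured as a small record; the field widths and packing order below are illustrative assumptions, not the patent's actual format:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SliceName:
        """Illustrative slice name per FIG. 6."""
        pillar_number: int   # 1..T: which encoded data slice of the set
        segment_number: int  # 1..Y: which data segment of the data object
        vault_id: int        # vault identifier (portion of the DSN namespace)
        object_id: int       # data object identifier
        revision: int = 0    # optional revision level

        def dsn_address(self) -> str:
            # Pack the fields into a sortable hex string; a real DSN would use
            # a fixed-width binary layout (assumed here for illustration).
            return (f"{self.vault_id:08x}:{self.object_id:016x}:"
                    f"{self.segment_number:08x}:{self.pillar_number:02x}:"
                    f"{self.revision:04x}")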
[0047] As a result of encoding, the computing device 12 or 16
produces a plurality of sets of encoded data slices, which are
provided with their respective slice names to the storage units for
storage. As shown, the first set of encoded data slices includes
EDS 1_1 through EDS 5_1 and the first set of slice names includes
SN 1_1 through SN 5_1 and the last set of encoded data slices
includes EDS 1_Y through EDS 5_Y and the last set of slice names
includes SN 1_Y through SN 5_Y.
[0048] FIG. 7 is a schematic block diagram of an example of
dispersed storage error decoding of a data object that was
dispersed storage error encoded and stored in the example of FIG.
4. In this example, the computing device 12 or 16 retrieves from
the storage units at least the decode threshold number of encoded
data slices per data segment. As a specific example, the computing
device retrieves a read threshold number of encoded data
slices.
[0049] To recover a data segment from a decode threshold number of
encoded data slices, the computing device uses a decoding function
as shown in FIG. 8. As shown, the decoding function is essentially
an inverse of the encoding function of FIG. 4. The coded matrix
includes a decode threshold number of rows (e.g., three in this
example) and the decoding matrix is an inversion of the encoding
matrix that includes the corresponding rows of the coded matrix.
For example, if the coded matrix includes rows 1, 2, and 4, the
encoding matrix is reduced to rows 1, 2, and 4, and then inverted
to produce the decoding matrix.
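A matching decode sketch follows; numpy's floating-point inversion stands in for the finite-field inversion an actual DSN would use, and the function name is again an invention for illustration:

    def decode_segment(received_rows, row_indices, encoding_matrix):
        """Recover the data matrix from any decode-threshold number of rows.

        received_rows: D x Z coded rows that were successfully retrieved
        row_indices: the pillar indices those rows came from, e.g., [0, 1, 3]
        Per FIG. 8: reduce EM to the retrieved rows, invert it, and multiply.
        """
        EM = np.asarray(encoding_matrix, dtype=float)
        reduced = EM[row_indices, :]              # D x D reduced encoding matrix
        decoding_matrix = np.linalg.inv(reduced)  # inverse of the reduced EM
        return decoding_matrix @ np.asarray(received_rows, dtype=float)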
[0050] FIG. 9A is a schematic block diagram of another embodiment
of a dispersed storage network (DSN) system that includes the DSN
memory 22 of FIG. 1, a set of storage units 1-N, where each storage
unit includes the computing device 16 of FIG. 1, and a
load-balancing module 498. The DSN memory 22 includes the storage
unit set 438. The storage unit set 438 includes a set of storage
units 36 of FIG. 1.
[0051] The system functions to store data 500 as a plurality of
sets of encoded data slices 504 in the storage unit set 438. The
load-balancing module 498 selects one of the computing devices,
based on resource status information 502 from the computing
devices, to encode the data 500 using a dispersed storage error
coding function to produce the plurality of sets of encoded data
slices 504 for storage in the storage unit set 438. The resource
status information 502 includes one or more of an indicator of a
time frame of availability, an indicator of a time frame of
unavailability, a time frame for a scheduled software update, a
time frame for a scheduled new hardware addition, an error message,
a maintenance schedule, a communications error rate, and a storage
error rate.
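The resource status information 502 can be pictured as a simple record; the field names below are illustrative stand-ins, not taken from the patent:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ResourceStatus:
        """One unit's resource status information 502 (field names hypothetical)."""
        available_window: Optional[str] = None        # time frame of availability
        unavailable_window: Optional[str] = None      # time frame of unavailability
        software_update_window: Optional[str] = None  # scheduled software update
        hardware_addition_window: Optional[str] = None
        error_message: Optional[str] = None
        maintenance_schedule: List[str] = field(default_factory=list)
        comm_error_rate: float = 0.0
        storage_error_rate: float = 0.0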
[0052] In an example of operation, a computing device determines to
at least temporarily suspend operations. The determining may be
based on one or more of adding new software, activating new
hardware, recovering from a storage error, recovering from a
communications error, receiving a suspend request, and interpreting
the maintenance schedule. The computing device continues to perform
a slice access activity with regard to pending data access
requests associated with the computing device. The load-balancing
module 498 receives a new data access request. The load-balancing
module 498 determines availability of each of the computing devices
based on one or more of receiving resource status information 502,
initiating a query, receiving an error message, and detecting an
unfavorable performance (e.g., detecting slow response latency).
The load-balancing module 498 selects the computing device when the
availability (e.g., previously known availability) of the computing
device compares favorably to availability of other computing
devices. The load-balancing module 498 forwards the data access
request to the computing device.
[0053] While suspending operations, the computing device indicates
the unfavorable performance to the load-balancing module. The
indicating unfavorable performance includes at least one of
ignoring the request, sending a late unfavorable response, issuing
unfavorable resource status information, and ignoring resource
status requests from the load-balancing module. The load-balancing
module 498 interprets the indication to determine that the data
access request is to be reassigned. The load-balancing module 498
un-selects the computing device from the data access assignment.
For example, the load-balancing module sends a cancellation message
to the computing device and selects another computing device and
sends the data access request to the other computing device.
[0054] FIG. 9B is a diagram illustrating an example of
load-balancing. The method includes step 506 where a computing
device determines to temporarily suspend operations. The method
continues at step 508 where the computing device continues to
execute pending operations. For example, the computing device
continues to process previously accepted data access requests. The
method continues at step 510 where a load-balancing module receives
a data access request. The method continues at step 512 where the
load-balancing module assesses availability of a set of computing
devices that includes the computing device. The assessing includes
producing availability information based on one or more of
interpreting performance indicators, receiving resource status
information, initiating a query, receiving an error message, and
detecting favorable performance.
[0055] The method continues at step 514 where the load-balancing
module selects the computing device for execution of the data
access request. For example, the load-balancing module selects the
computing device when availability of the computing device compares
more favorably to availability of other computing devices. The
method continues at step 516 where the load-balancing module
forwards the data access request to the computing device.
[0056] The method continues at step 518 where the computing device
indicates unfavorable performance. For example, the computing
device ignores the data access request. As another example, the
computing device waits a delay time period before sending a data
access response causing the load-balancing module to interpret the
data access response as a late data access response associated with
unfavorable performance. As yet another example, the computing
device delays responses associated with previously accepted data
access requests. The method continues at step 520 where the
load-balancing module detects the indicated unfavorable
performance. For example, the load-balancing module detects the
indicated unfavorable performance when the data access response was
not received within a desired response timeframe.
[0057] The method continues at step 522 where the load-balancing
module un-selects the computing device for execution of the data
access request. The un-selecting includes one or more of sending a
cancellation message to the computing device, selecting another
computing device for the data access request, and assigning the
other computing device the data access request.
[0058] The method continues at step 524 where the computing device
determines to resume operations. The determining may be based on
one or more of detecting that new software is operational,
detecting that new hardware is operational, detecting that an error
condition has cleared, and detecting that a level of pending data
access requests has fallen below a low data access request
threshold level. The method continues at step 526 where the
computing device indicates favorable performance. For example, the
computing device generates data access responses in accordance with
desired data access response timing. As another example, the
computing device responds to all data access requests. As yet
another example, the computing device sends favorable resource
status information to the load-balancing module.
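The select/forward/un-select loop of steps 510-522 can be sketched as follows. The send, poll, and send_cancellation methods are hypothetical stand-ins for whatever transport the DSN uses, and the availability score is assumed to be derived from the resource status information:

    def dispatch(request, units, availability, timeout_s=2.0):
        """Forward a data access request to the most available unit
        (steps 510-516); on an unfavorable indication such as a late or
        missing response (steps 518-520), cancel and reassign (step 522)."""
        for unit in sorted(units, key=lambda u: availability[u], reverse=True):
            unit.send(request)
            response = unit.poll(timeout=timeout_s)  # None models an ignored request
            if response is None or response.unfavorable:
                unit.send_cancellation(request)      # un-select this unit
                continue
            return response
        raise RuntimeError("no DSN processing unit accepted the request")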
[0059] FIG. 10A is a schematic block diagram of another embodiment
of a storage unit (SU) 36 that includes the distributed storage
(DS) client module 34 and one or more memory devices 88. The memory
includes a plurality of portions of memory associated with
different utilizations. The portions may be physical memory or
virtual memory space. The plurality of portions includes one or
more portions utilized for slices memory 606, utilized for rebuilt
slices memory 608, reserved for rebuilt slices memory 610, and
un-utilized memory 612. The un-utilized memory 612 is associated
with available storage capacity, where the available storage
capacity may be calculated as the total memory size minus the
memory used for each of the utilized for slices memory 606, the
utilized for rebuilt slices memory 608, and the reserved for
rebuilt slices memory 610.
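This four-portion accounting reduces to a small bookkeeping structure. The sketch below is a minimal illustration, assuming the portion names of FIG. 10A and treating all sizes as TB; the class and method names are invented here:

    from dataclasses import dataclass

    @dataclass
    class MemoryUtilization:
        """Tracks the four memory portions of FIG. 10A (sizes in TB)."""
        total: float
        slices: float = 0.0    # utilized for slices memory 606
        rebuilt: float = 0.0   # utilized for rebuilt slices memory 608
        reserved: float = 0.0  # reserved for rebuilt slices memory 610

        @property
        def unutilized(self) -> float:
            # Available capacity = total minus the three utilized/reserved portions.
            return self.total - self.slices - self.rebuilt - self.reserved

        def reserve_for_rebuild(self, amount: float) -> None:
            if amount > self.unutilized:
                raise MemoryError("insufficient unutilized space to reserve")
            self.reserved += amount

        def store_rebuilt(self, amount: float) -> None:
            # Move space from 'reserved' to 'utilized for rebuilt slices'.
            self.reserved -= amount
            self.rebuilt += amount

        def commit_rebuilt(self) -> None:
            # Fold rebuilt slices into the ordinary slices portion.
            self.slices += self.rebuilt
            self.rebuilt = 0.0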
[0060] The storage unit 36 functions to store encoded data slices
600 in the utilized for slices memory 606 and store rebuilt encoded
data slices 602 in the utilized for rebuilt slices memory 608. The
DS client module 34 may obtain the rebuilt encoded data slices by
at least one of: receiving the rebuilt encoded data slices and
generating the rebuilt encoded data slices by retrieving
representations of encoded data slices from a decode threshold
number of other storage units 36. When encoded data slices are to
be stored, the DS client module 34 determines whether sufficient
available storage capacity of the un-utilized memory is available
for utilization for slices memory. For instance, the DS client
module compares a size of an encoded data slice for storage to the
size of the un-utilized memory. The DS client module indicates that
storage space is available when the size of the encoded data slice
is less than the size of the un-utilized memory. The DS client
module 34 may determine the size of the reserved for rebuilt slices
memory based on identifying encoded data slices to be rebuilt. The
identifying includes at least one of detecting a slice error and
receiving an indication of the slice error.
[0061] In an example of operation, the DS client module 34
identifies a plurality of encoded data slices requiring rebuilding.
The DS client module 34 determines an amount of reserve memory 610
required for storage of rebuilt slices for the identified plurality
of encoded data slices requiring rebuilding. The determining may
include exchanging memory utilization information 604 with at least
one other storage unit, where the exchanging includes receiving an
amount of memory required for an encoded data slice associated
with, for example, a slice error. The DS client module 34 updates
the memory utilization information to include the amount of reserve
memory required. The memory utilization information includes one or
more of size of the utilized for slices memory, size of the
utilized for rebuilt slices memory, size of the reserved for
rebuilt slices memory, and size of the un-utilized memory. The DS
client module 34 outputs the memory utilization information 604 to
one or more of a computing device, a managing unit, and a user
device.
[0062] The DS client module 34 obtains rebuilt encoded data slices
(e.g., receives, generates) and stores the rebuilt encoded data
slices in the utilized for rebuilt encoded data slices memory.
Accordingly, the DS client module updates the reserved for rebuilt
slices memory by a similar memory size amount as storage of the
rebuild encoded data slices (e.g., lowers size of reserved for
rebuilt slices memory and raises size for utilized for rebuilt
slices memory). The DS client module updates the memory utilization
information and may output the updated memory utilization
information.
[0063] FIGS. 10B-C are diagrams illustrating examples of memory
utilization for a series of times frames, where each timeframe
indicates an amount of memory utilized for slices, rebuilt slices,
reserved for rebuilt slices, unutilized, and a total amount of
memory capacity. The total amount of memory capacity remains
constant over the time intervals. In particular, FIG. 10B
illustrates examples of the memory utilization 614 for a first set
of time intervals T1-5. At T1, stored slices use 300 TB of memory
space of a total capacity of 500 TB of memory space leaving 200 TB
of unutilized memory space. At T2, 50 TB of slices for rebuilding
are detected such that reserved for rebuilding is incremented by 50
TB and unutilized memory space is lowered by 50 TB from 200 TB to
150 TB. At T3, a first 20 TB of rebuilt slices are obtained and
stored such that the reserved memory space for rebuilt slices is
lowered by 20 TB from 50 TB to 30 TB. At T4, a remaining 30 TB of
rebuilt slices are obtained and stored such that the reserved memory
space for rebuilt slices is lowered by another 30 TB from 30 TB to 0
TB and the utilized memory space for rebuilt slices is raised by 30
TB from 20 TB to 50 TB. At T5, the rebuilt slices are moved to the
memory space for slices, thus raising the slices memory by 50 TB
from 300 TB to 350 TB and lowering the rebuilt slices memory from 50
TB to 0 TB. Utilized memory includes the combination 615 of the
utilized for slices memory 606, the utilized for rebuilt slices
memory 608, and the reserved for rebuilt slices memory 610.
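Replaying the T1-T5 numbers against the MemoryUtilization sketch above is a quick sanity check of the accounting (illustrative only):

    m = MemoryUtilization(total=500.0, slices=300.0)  # T1: 200 TB unutilized
    m.reserve_for_rebuild(50.0)   # T2: unutilized 200 -> 150
    m.store_rebuilt(20.0)         # T3: reserved 50 -> 30, rebuilt 0 -> 20
    m.store_rebuilt(30.0)         # T4: reserved 30 -> 0, rebuilt 20 -> 50
    m.commit_rebuilt()            # T5: slices 300 -> 350, rebuilt -> 0
    assert m.unutilized == 150.0  # matches T6 at the start of FIG. 10C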
[0064] FIG. 10C continues the examples of memory utilization 616
for a second set of time intervals T6-T10. The example begins at
time interval T6, which is equivalent to the memory utilization of
T5. At T7, 100 TB of new slices are stored, thus raising the memory
utilization of slices from 350 TB to 450 TB and lowering the
unutilized memory space from 150 TB to 50 TB. At T8, 50 TB of slices
for rebuilding are detected such that the memory space reserved for
rebuilding is incremented by 50 TB from 0 TB to 50 TB and the
unutilized memory space is lowered by 50 TB from 50 TB to 0 TB.
Requests for storage of new slices are rejected since the memory
space of the unutilized memory is zero. At T9, 50 TB of rebuilt
slices are received and stored in the memory space of the rebuilt
slices, thus raising the rebuilt slices from 0 TB to 50 TB and
lowering the memory space reserved for rebuilt slices from 50 TB to
0 TB. At T10, the slices in the rebuilt slices memory space are
considered part of the memory space of slices, thus raising the
memory space of the slices from 450 TB to 500 TB and lowering the
memory space of the rebuilt slices from 50 TB to 0 TB. As such, the
memory storage space is full and subsequent requests for storage of
slices or rebuilt slices shall be rejected.
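Continuing the same hedged sketch through T6-T10 (the bare `m.slices += 100.0` assignment stands in for ordinary slice storage, which the sketch does not otherwise model):

    m.slices += 100.0             # T7: unutilized 150 -> 50
    m.reserve_for_rebuild(50.0)   # T8: unutilized 50 -> 0; new writes rejected
    m.store_rebuilt(50.0)         # T9: reserved 50 -> 0, rebuilt 0 -> 50
    m.commit_rebuilt()            # T10: slices 450 -> 500; memory is full
    assert m.unutilized == 0.0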
[0065] FIG. 10D is a flowchart illustrating an example of updating
memory utilization information. The method begins at step 618 where
a processing module (e.g., of a distributed storage DS client
module) identifies a plurality of encoded data slices requiring
rebuilding. As further delineated in FIG. 10E (flowchart
illustrating example ways to identify slices needing a rebuild),
the identifying includes at least one of: receiving an error
message 632 (e.g., no slices detected for rebuild, no access to
rebuild information, not enough space to rebuild, etc.); receiving
a rebuilding request 634 (e.g., to rebuild specific data slices or
range of data slices); detecting missing or corrupted encoded data
slices by comparing a list of locally stored encoded data slices
(or range of slices) to a list of remotely stored encoded data
slices (or range of slices) associated with the locally stored
encoded data slices to identify missing slices or detecting
unfavorable slice integrity (e.g., corrupted slices); monitoring
downloads 638 to the DS memory meeting minimum read/write (R/W)
width thresholds but less than a full pillar width (successful
download, but not all slices above threshold successfully
downloaded); determining 640 when DSN read/write (R/W) requests
occur for the plurality of encoded data slices and comparing to
known times of inaccessibility for the DS memory storing the
plurality of encoded data slices (e.g., DS memory was down for
maintenance when original slice R/W request occurred); and querying
vaults related to the plurality of encoded data slices 641 to
determine one or more missing or corrupted encoded data slices
(e.g., other vaults sharing the same data slices may have a list or
copies which include the missing or corrupted data slices).
[0066] The rebuilding of the plurality of encoded data slices is,
in one embodiment, queued for at least one of individual, group, or
batch processing and the processing will be performed at a
significant time delay from the queuing. As the rebuild processing
may occur in the future, the embodiments of FIGS. 10A-G ensure
that memory space is set aside for rebuilds such that interceding
requests for memory slice storage will not over utilize memory
needed for the rebuild before it has a chance to occur.
[0067] The method continues at the step 620 where the processing
module determines an amount of memory space to reserve for the
plurality of encoded data slices requiring rebuilding. The
determining includes identifying slice sizes based on at least one
of initiating a slice size query with regards to the remotely
stored encoded data slices, receiving a query response, and
performing a local lookup based on a slice name.
[0068] The method continues at step 622 where the processing module
updates memory utilization information to include the amount of
memory space to reserve. For example, the processing module
increments an amount of memory reserved for rebuilt slices by the
amount of memory space to reserve and decrements unutilized memory
space by the amount of memory space to reserve. The method
continues at step 624 where the processing module sends the memory
utilization information to at least one of a storing entity and a
managing unit. The sending may further include determining whether
a sum of an amount of memory utilized for slices, an amount of
memory utilized for rebuilt slices, and an amount of memory reserved
for rebuilt slices is greater than a capacity of memory. When the
sum is greater, the processing module may further send an
indication that the memory is full.
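In terms of the earlier MemoryUtilization sketch, the full-memory check described for step 624 might be expressed as follows (an assumption about how the comparison is written, not the patent's code):

    def memory_full(m: MemoryUtilization) -> bool:
        # Step 624: report full when utilized-for-slices plus
        # utilized-for-rebuilt plus reserved-for-rebuilt meets or
        # exceeds the memory capacity.
        return m.slices + m.rebuilt + m.reserved >= m.total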
[0069] The method continues at step 626 where the processing module
obtains rebuilt encoded data slices (e.g., received, generated). The
method continues at step 628 where the processing module stores the
rebuilt encoded data slices in a local DS memory. The method
continues at step 630 where the processing module updates the
amount of memory space to reserve for remaining encoded data slices
requiring rebuilding. The updating includes determining an amount
of memory space utilized to store the obtained rebuilt encoded data
slices, incrementing the amount of memory space utilized for
rebuilt slices by the amount of memory space utilized to store the
obtained rebuilt encoded data slices, and decrementing the amount
of memory space reserved for rebuilt slices by the amount of memory
space utilized to store the obtained rebuilt encoded data slices.
The updating may further include updating the memory space utilized
for slices to include the amount of memory space utilized to store
the obtained rebuilt encoded data slices and decrementing the
amount of memory space utilized to store the rebuilt encoded data
slices. The method loops back to the step where the processing
module updates the memory utilization information.
[0070] FIG. 10F is a flowchart illustrating another example of
updating memory utilization information. The method begins at step
642 where a processing module (e.g., DS integrity processing unit
20) attempts to retrieve a plurality of encoded data slices from a
DS memory to perform an integrity check. Slices are retrieved based
on any of: list(s) of slice addresses, list(s) of slice names,
range(s) of slice addresses, and range(s) of slice names. In step 644, it is
determined if the encoded data slices were retrieved during the
attempted retrieval. In step 646, encoded data slices that were not
received and/or not listed are flagged as missing slices. Retrieved
encoded data slices are checked for errors due to data corruption,
outdated version, etc. In step 648, if a slice
includes an error, it is flagged as a `bad` slice. Bad and/or
missing slices are subsequently rebuilt using other retrieved
encoded data slices that are deemed to be good slices to produce
rebuilt slices.
[0071] The rebuilding of the plurality of encoded data slices is,
in one embodiment, queued for at least one of individual, group, or
batch processing and the processing will be performed at a
significant time delay from the queuing. As the rebuild processing
may occur in the future, the embodiments of FIGS. 10A-G ensure
that memory space is set aside for rebuilds such that interceding
requests for memory slice storage will not over utilize memory
needed for the rebuild before it has a chance to occur.
[0072] The method continues at the step 650 where the processing
module determines an amount of memory space to reserve for the
plurality of encoded data slices requiring rebuilding. The
determining includes identifying slice sizes based on at least one
of initiating a slice size query with regards to the remotely
stored encoded data slices, receiving a query response, and
performing a local lookup based on a slice name.
[0073] The method continues at step 652 where the processing module
updates memory utilization information to include the amount of
memory space to reserve. For example, the processing module
increments an amount of memory reserved for rebuilt slices by the
amount of memory space to reserve and decrements unutilized memory
space by the amount of memory space to reserve. The method
continues at step 653 where the processing module sends the memory
utilization information to at least one of a storing entity (e.g.,
storage/vault peers), user units, and a managing unit. The sending
may further include determining whether a sum of an amount of
memory utilized for slices, an amount of memory utilized for rebuilt
slices, and an amount of memory reserved for rebuilt slices is
greater than a capacity of memory. When the sum is greater, the
processing module may further send an indication that the memory is
full.
[0074] The method continues at step 654 where the processing module
obtains rebuilt encoded data slices (e.g., received, generated) and
stores, in step 656, the rebuilt encoded data slices in a local DS
memory. The method continues at step 657 where the processing
module updates the amount of memory space to reserve for remaining
encoded data slices requiring rebuilding. The updating includes
determining an amount of memory space utilized to store the
obtained rebuilt encoded data slices, incrementing the amount of
memory space utilized for rebuilt slices by the amount of memory
space utilized to store the obtained rebuilt encoded data slices,
and decrementing the amount of memory space reserved for rebuilt
slices by the amount of memory space utilized to store the obtained
rebuilt encoded data slices. The updating may further include
updating the memory space utilized for slices to include the amount
of memory space utilized to store the obtained rebuilt encoded data
slices and decrementing the amount of memory space utilized to
store the rebuilt encoded data slices.
[0075] FIG. 10G is a schematic block diagram illustrating an
example DS client module 34 structure for memory utilization. DS
client module 34 may include a plurality of processing modules (or
sub-modules) to perform one or more steps of the embodiments of
FIGS. 10A-F. While this example is shown as seven separate modules,
the modules may be combined/separated into any number of modules
(local or remote) to complete the various steps and functions of
the various embodiments of FIGS. 10A-F.
[0076] As shown, identify module 34-1 identifies a plurality of
encoded data slices that require rebuilding, wherein rebuilding of
the plurality of encoded data slices is queued for at least one of
individual, group, or batch processing and the processing will be
performed at a significant time delay from the queuing. Determine
module 34-2 determines an amount of memory required for storage of
the rebuilt encoded data slices for the plurality of encoded data
slices. Update module 34-3 updates utilization information of the
memory by allocating a portion of available memory to the amount of
memory required. Indicate module 34-4 indicates the memory
utilization (e.g., by sending the updated utilization information
604 of the memory to at least one of a storing entity (e.g., other
storage/vault peers) and a managing unit). Obtain module 34-5
obtains rebuilt data slices (e.g., from other good copies or
related vaults or generates them from other encoded data slices).
Store module 34-6 stores the rebuilt encoded data slices in the
reserve memory; and modify module 34-7 modifies the utilization
information to reflect the stored rebuilt encoded data slices.
Additional modules may be included within DS client module 34 to
perform additional tasks (for example, but not limited to, passing
encoded data slices to/from slice memory during non-rebuild
write/read (W/R) operations). Alternatively, obtain module 34-5 and
store module 34-6 may perform the receive and store slices 600
tasks, respectively.
[0077] FIG. 11A is a schematic block diagram of another embodiment
of a dispersed storage network (DSN) system that includes the
dispersed storage (DS) processing unit 16 and the distributed
storage network (DSN) module 22 of FIG. 1. The DSN module 22
includes at least two storage unit sets 1-2. Each storage unit set
includes a set of storage units 36 of FIG. 1. The system functions
to store at least two data objects in a common storage unit
set.
[0078] In an example of operation, the computing device 16 receives
a data object 1 write request 700. The computing device 16 encodes
data object 1 using a dispersed storage error coding function to
produce first sets (data object 1) of encoded data slices 700-1, 2,
. . . n (where n equals the width (number of pillars) of the
encoded data slice set). The computing device 16 generates first
sets of slice names for the first sets of encoded data slices. The
computing device 16 issues one or more sets of data object 1 write
slice requests to a storage unit set 1 that includes the first sets
of encoded data slices and the corresponding first sets of slice
names, where the first sets of slice names fall within a range of
slice names associated with the storage unit set 1.
[0079] With data object 1 stored in the first set of storage units
36, the computing device 16 receives a data object 2 co-locate
write request 702 with regards to storing a second data object in
the same set of storage units 36 as the first data object (e.g., in
the storage unit set 1). The data object 2 co-locate write request
includes a data identifier (ID) of the data object to be co-located
with (e.g., a data ID of the data object 1), a data ID of the
second data object (e.g., the data object 2 to be co-located), and
may include the data (e.g., data object 2) to be co-located when it
is not already stored within the DSN module 22.
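For concreteness, the shape of the co-locate write request described above might be modeled as follows; the field names are hypothetical stand-ins for the request contents listed in this paragraph.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoLocateWriteRequest:
    """Hypothetical shape of the data object 2 co-locate write request 702."""
    co_locate_with_id: str        # data ID of data object 1 to be co-located with
    data_id: str                  # data ID of the second data object (data object 2)
    data: Optional[bytes] = None  # data object 2 itself; omitted when already stored

# A request carrying the data, and one referencing data already in the DSN.
req_with_data = CoLocateWriteRequest("object1", "object2", b"payload")
req_reference = CoLocateWriteRequest("object1", "object2")
```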
[0080] When the data object to be co-located (e.g., the second data
object) is included in the data object 2 co-locate write request,
the computing device 16 identifies the set of storage units 36
associated with the data ID of data object 1 to be co-located with
(e.g., the storage unit set 1). The determining includes accessing
one or more of a directory and a dispersed hierarchical index to
identify a DSN address associated with the data ID of data object 1
to be co-located with and performing a DSN address-to-physical
location table lookup to identify the set of storage units 36
associated with the data ID of data object 1 to be co-located with.
Next, the computing device encodes the second data object (data
object 2) to produce second sets of encoded data slices for storage
in the storage unit set 1. The computing device 16 generates second
sets of slice names for the second sets of encoded data slices,
where the second sets of slice names are based on the first sets of
slice names such that the second sets of slice names fall within
the range of slice names associated with the set of storage units 36
associated with the data ID of data object 1 to be co-located with.
The computing device 16 issues data object 2 write slice requests to
the set of storage units 36 associated with the data ID of the data
object to be co-located with (e.g., to storage unit set 1), where the
data object 2 write slice requests include the second sets of encoded
data slices.
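A minimal sketch of this write path, assuming toy stand-ins for the directory, the DSN address-to-physical-location table, and the dispersed storage error coding function, might look like this:

```python
# Toy stand-ins: a directory mapping data IDs to DSN addresses, an
# address-to-physical-location table, and a fake erasure coder.

def encode(data, width=5):
    """Placeholder for the dispersed storage error coding function:
    splits data into `width` pillar slices (no real erasure coding)."""
    step = -(-len(data) // width)  # ceiling division so no bytes are dropped
    return [data[i * step:(i + 1) * step] for i in range(width)]

def co_locate_write(request, directory, address_table, storage_sets):
    # Identify the storage unit set holding data object 1: directory lookup,
    # then DSN address-to-physical-location table lookup.
    dsn_addr = directory[request["co_locate_with_id"]]
    unit_set = address_table[dsn_addr]
    # Encode data object 2 and derive slice names inside object 1's range.
    slices = encode(request["data"])
    names = [f"{dsn_addr}/obj2/slice{i}" for i in range(len(slices))]
    # Issue the data object 2 write slice requests to that storage unit set.
    storage_sets[unit_set].update(zip(names, slices))
    return names

directory = {"object1": "vault1.gen1.addr42"}
address_table = {"vault1.gen1.addr42": "set1"}
storage_sets = {"set1": {}, "set2": {}}
co_locate_write({"co_locate_with_id": "object1", "data": b"hello world"},
                directory, address_table, storage_sets)
print(sorted(storage_sets["set1"]))
```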
[0081] When the data object to be co-located is not included in the
data object 2 co-locate write request, the computing device 16
determines whether the data object to be co-located is already
co-located. The determining includes the computing device 16
identifying the storage unit set associated with storage of the
second data object and comparing the identity to the identity of
the storage unit set associated with storage of the first data
object. When data object 2 to be co-located is not already
co-located (e.g., with data object 1), the computing device 16
recovers data object 2 from the storage unit set associated with
storage of the second data object (e.g., from storage unit set 2).
The recovering includes issuing data object 2 read slice requests
704 to the storage unit set associated with storage of the second
data object and receiving the second sets of encoded data slices
(e.g., received from storage unit set 2). Next, the computing
device 16 issues the data object 2 write slice requests to the set
of storage units 36 associated with the data ID of the data object
1 to be co-located with (e.g., to storage unit set 1), where the
data object 2 write slice requests include the received second
sets of encoded data slices and the corresponding second sets of
slice names.
[0082] FIG. 11B is a diagram illustrating an example of generating
an updated slice name for a previously stored encoded data slice of
a second data object to be co-located with one or more encoded data
slices of a first data object. The slice name 706 has a structure
that includes a slice index field 708, a vault identifier (ID)
field 710, a generation field 712, an object number field 714, and
a segment number field 716. A substantial number of the fields of
the slice name structure of a slice name of the previously stored
encoded data slice of the second data object are updated to be
substantially aligned with corresponding fields of the slice name
structure of a slice name of the one or more encoded data slices of
the first data object. For example, a vault ID field entry of the
previous data object 2 slice 1 is updated to be substantially the
same as a vault ID field entry of data object 1 slice 1. As another
example, an object number field entry of the previous data object 2
slice 1 is updated based on an object number field entry of data
object 1 slice 1 such that the slice name of the
updated data object 2 slice 1 falls within a range of slice names
associated with storage of the first data object.
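A Python sketch of the slice name structure 706 and the field realignment described above follows; the concrete field types and the align_with helper are assumptions based on the examples in this paragraph.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SliceName:
    # The five fields of the slice name structure 706 of FIG. 11B;
    # integer types are an assumption.
    slice_index: int
    vault_id: int
    generation: int
    object_number: int
    segment_number: int

def align_with(old: SliceName, target: SliceName) -> SliceName:
    """Hypothetical helper: copy the fields that place the updated name
    inside the target object's slice name range (here, vault ID and
    object number, per the examples above)."""
    return replace(old, vault_id=target.vault_id,
                   object_number=target.object_number)

obj1_slice1 = SliceName(1, vault_id=7, generation=2,
                        object_number=900, segment_number=0)
obj2_slice1 = SliceName(1, vault_id=3, generation=2,
                        object_number=14, segment_number=0)
print(align_with(obj2_slice1, obj1_slice1))  # vault_id=7, object_number=900
```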
[0083] FIG. 11C is a flowchart illustrating an example of
co-locating storage of data objects. The method begins at step 718
where a processing module (e.g., a distributed storage (DS)
processing unit) receives a data object 2 co-locate write request
to co-locate a data object 2 with a data object 1 to be co-located
with. The write request includes one or more of data identifiers
(IDs) for the data object 2 to be co-located and the data object 1
to be co-located with. The method continues at step 720 where the
processing module obtains a plurality of sets of encoded data
slices for the data object 2 to co-locate. The obtaining includes
one of receiving, generating, and retrieving. When receiving, the
processing module extracts the plurality of sets of encoded data
slices from the write request 700. When generating, the processing
module encodes the data object 2 to be co-located using a dispersed
storage error coding function to produce the plurality of sets of
encoded data slices. When retrieving, the processing module
identifies previous sets of slice names utilized to store the
plurality of sets of encoded data slices based on a data ID of the
data object 2 to be co-located, issues one or more sets of read
slice requests to a previously utilized set of storage units, where
the one or more sets of read slice requests include the previous
sets of slice names, and receives the plurality of sets of encoded
data slices 704.
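The three obtaining paths of step 720 might be dispatched as sketched below; the helper names and request keys are hypothetical.

```python
# Hypothetical dispatch over the three obtaining paths of step 720.

def obtain_slices(request, encoder, read_slices):
    if "slices" in request:        # receiving: slices arrived with the request
        return request["slices"], None
    if "data" in request:          # generating: encode data object 2 locally
        return encoder(request["data"]), None
    # retrieving: read back using the previous sets of slice names
    previous_names = request["previous_names"]
    return read_slices(previous_names), previous_names

# Receiving path: the write request already carries the encoded slices.
slices, previous = obtain_slices({"slices": [b"s1", b"s2"]},
                                 encoder=None, read_slices=None)
print(slices, previous)  # [b's1', b's2'] None
```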
[0084] The method continues at step 722 where the processing
module generates a plurality of sets of slice names for the
plurality of sets of encoded data slices based on addressing
information of the data object 1 to be co-located with. For
example, the processing module generates the plurality of sets of
slice names to include a vault ID associated with the data object
to be co-located with and an object number field entry that causes
the generated plurality of sets of slice names to fall within a
slice name range that is associated with a set of storage units
where the data object to be co-located with is stored.
[0085] The method continues at step 724 where the processing
module stores the plurality of sets of encoded data slices in the
set of storage units using the generated plurality of sets of slice
names. The storing includes generating one or more sets of write
slice requests that include the plurality of sets of encoded data
slices and the generated plurality of sets of slice names and
outputting the one or more sets of write slice requests to the set
of storage units. When storage of the plurality of sets of encoded
data slices in the set of storage units is confirmed, and when the
plurality of sets of encoded data slices were retrieved using the
previous sets of slice names, the method continues at step 726
where the processing module deletes the plurality of sets of
encoded data slices utilizing the previous sets of slice names. For
example, the processing module issues a set of delete slice
requests that includes the previous sets of slice names to the
previously utilized set of storage units.
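A compact sketch of steps 724-726, with an in-memory dict standing in for the set of storage units, might read:

```python
# An in-memory dict stands in for the storage units; names are assumptions.

def store_and_relocate(slices, new_names, previous_names, storage):
    # Step 724: write the slices under the newly generated slice names.
    storage.update(zip(new_names, slices))
    confirmed = all(name in storage for name in new_names)
    # Step 726: once the writes are confirmed, and only if the slices were
    # retrieved under previous names, delete the previously stored copies.
    if confirmed and previous_names:
        for name in previous_names:
            storage.pop(name, None)

storage = {"set2/old0": b"a", "set2/old1": b"b"}
store_and_relocate([b"a", b"b"], ["set1/new0", "set1/new1"],
                   ["set2/old0", "set2/old1"], storage)
print(sorted(storage))  # ['set1/new0', 'set1/new1']
```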
[0086] FIG. 11D is a flowchart illustrating one example of
obtaining the plurality of sets of encoded data slices to be
co-located. The obtaining, step 720, includes multiple processing
paths for receiving, generating, and retrieving the plurality of
sets of encoded data slices to be co-located (data object 2) based
on the location of data object 2 at the time of the request. When
receiving, the processing module extracts, in step 727, the ID of
data object 1, the ID of data object 2, and, if included with the
request, the plurality of data object 2 sets of encoded slices from
the write request 700. When data object 2 to be co-located (e.g.,
the second data object) is included in the data object 2 co-locate
write request, the computing device 16 identifies, beginning with
step 730, the set of storage units 36 associated with the data ID of
data object 1 to be co-located with (e.g., the storage unit set
1). The determining includes accessing one or more of a directory
in step 731 and a dispersed hierarchical index in step 732 to
identify a DSN address associated with the data ID of data object 1
to be co-located with, and performing a DSN address-to-physical
location table lookup in step 734 to identify the physical location
(PL), i.e., the set of storage units 36, associated with the data ID
of the data object to be co-located with. If data object 2 is not
already encoded, it is encoded in step 729 using a dispersed storage
error coding function.
[0087] When the data object to be co-located is not included in the
data object 2 co-locate write request, the computing device 16
determines whether the data object to be co-located is already
co-located. The determining includes comparing data object 2 PL to
data object 1 PL. If they are co-located (i.e., data object 2 PL
falls within the range of addresses of data object 1 PL), no further
action is required. When data object 2 to be co-located is not
already co-located, the computing device 16 recovers (reads), in
step 736, the second data object from the storage unit set
associated with storage of the second data object (e.g., from
storage unit set 2).
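The co-location test and conditional migration of FIG. 11D might be sketched as follows, under an assumed lookup table mapping data IDs to physical locations:

```python
# Assumed lookup table from data IDs to physical locations (PLs).

def ensure_co_located(obj2_id, obj1_id, locations, move):
    pl1, pl2 = locations[obj1_id], locations[obj2_id]
    if pl2 == pl1:
        return False                  # already co-located; nothing to do
    move(obj2_id, src=pl2, dst=pl1)   # read from set 2, rewrite into set 1
    locations[obj2_id] = pl1
    return True

locations = {"object1": "set1", "object2": "set2"}
moved = ensure_co_located("object2", "object1", locations,
                          move=lambda oid, src, dst: None)
print(moved, locations["object2"])  # True set1
```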
[0088] FIG. 11E is a schematic block diagram of another embodiment
of a dispersed storage network (DSN) system in accordance with the
present disclosure. Computing device 16 may include a plurality of
processing modules (or sub-modules) to perform one or more steps of
the embodiments of FIGS. 11A-D. While this example is shown as four
separate modules, the modules may be combined or separated into any
number of modules (local or remote) to complete the various steps
and functions of the various embodiments of FIGS. 11A-D.
[0089] As shown, receive module 16-1 operates to receive a data
object co-locate write request. Obtain module 16-2 operates to
obtain a plurality of sets of encoded data slices for a data object
to co-locate. Generate module 16-3 operates to generate a plurality
of sets of slice names for the data object to co-locate based on
another plurality of sets of slice names associated with a data
object to be co-located with. Store module 16-4 operates to store
the plurality of sets of encoded data slices in DS memory using the
generated plurality of sets of slice names for the data object to
co-locate.
[0090] As may be used herein, the terms "substantially" and
"approximately" provide an industry-accepted tolerance for their
corresponding terms and/or relativity between items. Such an
industry-accepted tolerance ranges from less than one percent to
fifty percent and corresponds to, but is not limited to, component
values, integrated circuit process variations, temperature
variations, rise and fall times, and/or thermal noise. Such
relativity between items ranges from a difference of a few percent
to magnitude differences. As may also be used herein, the term(s)
"operably coupled to", "coupled to", and/or "coupling" includes
direct coupling between items and/or indirect coupling between
items via an intervening item (e.g., an item includes, but is not
limited to, a component, an element, a circuit, and/or a module)
where, for indirect coupling, the intervening item does not modify
the information of a signal but may adjust its current level,
voltage level, and/or power level. As may further be used herein,
inferred coupling (i.e., where one element is coupled to another
element by inference) includes direct and indirect coupling between
two items in the same manner as "coupled to". As may even further
be used herein, the term "operable to" or "operably coupled to"
indicates that an item includes one or more of power connections,
input(s), output(s), etc., to perform, when activated, one or more
of its corresponding functions and may further include inferred
coupling to one or more other items. As may still further be used
herein, the term "associated with", includes direct and/or indirect
coupling of separate items and/or one item being embedded within
another item. As may be used herein, the term "compares favorably",
indicates that a comparison between two or more items, signals,
etc., provides a desired relationship. For example, when the
desired relationship is that signal 1 has a greater magnitude than
signal 2, a favorable comparison may be achieved when the magnitude
of signal 1 is greater than that of signal 2 or when the magnitude
of signal 2 is less than that of signal 1.
[0091] As may also be used herein, the terms "processing module",
"processing circuit", and/or "processing unit" may be a single
processing device or a plurality of processing devices. Such a
processing device may be a microprocessor, micro-controller,
digital signal processor, microcomputer, central processing unit,
field programmable gate array, programmable logic device, state
machine, logic circuitry, analog circuitry, digital circuitry,
and/or any device that manipulates signals (analog and/or digital)
based on hard coding of the circuitry and/or operational
instructions. The processing module, module, processing circuit,
and/or processing unit may be, or further include, memory and/or an
integrated memory element, which may be a single memory device, a
plurality of memory devices, and/or embedded circuitry of another
processing module, module, processing circuit, and/or processing
unit. Such a memory device may be a read-only memory, random access
memory, volatile memory, non-volatile memory, static memory,
dynamic memory, flash memory, cache memory, and/or any device that
stores digital information. Note that if the processing module,
module, processing circuit, and/or processing unit includes more
than one processing device, the processing devices may be centrally
located (e.g., directly coupled together via a wired and/or
wireless bus structure) or may be distributedly located (e.g.,
cloud computing via indirect coupling via a local area network
and/or a wide area network). Further note that if the processing
module, module, processing circuit, and/or processing unit
implements one or more of its functions via a state machine, analog
circuitry, digital circuitry, and/or logic circuitry, the memory
and/or memory element storing the corresponding operational
instructions may be embedded within, or external to, the circuitry
comprising the state machine, analog circuitry, digital circuitry,
and/or logic circuitry. Still further note that, the memory element
may store, and the processing module, module, processing circuit,
and/or processing unit executes, hard coded and/or operational
instructions corresponding to at least some of the steps and/or
functions illustrated in one or more of the Figures. Such a memory
device or memory element can be included in an article of
manufacture.
[0092] The present disclosure has been described above with the aid
of method steps illustrating the performance of specified functions
and relationships thereof. The boundaries and sequence of these
functional building blocks and method steps have been arbitrarily
defined herein for convenience of description. Alternate boundaries
and sequences can be defined so long as the specified functions and
relationships are appropriately performed. Any such alternate
boundaries or sequences are thus within the scope and spirit of the
claimed disclosure. Further, the boundaries of these functional
building blocks have been arbitrarily defined for convenience of
description. Alternate boundaries could be defined as long as the
certain significant functions are appropriately performed.
Similarly, flow diagram blocks may also have been arbitrarily
defined herein to illustrate certain significant functionality. To
the extent used, the flow diagram block boundaries and sequence
could have been defined otherwise and still perform the certain
significant functionality. Such alternate definitions of both
functional building blocks and flow diagram blocks and sequences
are thus within the scope and spirit of the claimed disclosure. One
of average skill in the art will also recognize that the functional
building blocks, and other illustrative blocks, modules and
components herein, can be implemented as illustrated or by discrete
components, application specific integrated circuits, processors
executing appropriate software and the like or any combination
thereof.
[0093] The present disclosure may have also been described, at
least in part, in terms of one or more embodiments. An embodiment
of the present disclosure is used herein to illustrate the present
disclosure, an aspect thereof, a feature thereof, a concept
thereof, and/or an example thereof. A physical embodiment of an
apparatus, an article of manufacture, a machine, and/or of a
process that embodies the present disclosure may include one or
more of the aspects, features, concepts, examples, etc. described
with reference to one or more of the embodiments discussed herein.
Further, from figure to figure, the embodiments may incorporate the
same or similarly named functions, steps, modules, etc. that may
use the same or different reference numbers and, as such, the
functions, steps, modules, etc. may be the same or similar
functions, steps, modules, etc. or different ones.
[0094] While the transistors in the above-described figure(s)
is/are shown as field effect transistors (FETs), as one of ordinary
skill in the art will appreciate, the transistors may be
implemented using any type of transistor structure including, but
not limited to, bipolar, metal oxide semiconductor field effect
transistors (MOSFET), N-well transistors, P-well transistors,
enhancement mode, depletion mode, and zero voltage threshold (VT)
transistors.
[0095] Unless specifically stated to the contrary, signals to, from,
and/or between elements in a figure of any of the figures presented
herein may be analog or digital, continuous time or discrete time,
and single-ended or differential. For instance, if a signal path is
shown as a single-ended path, it also represents a differential
signal path. Similarly, if a signal path is shown as a differential
path, it also represents a single-ended signal path. While one or
more particular architectures are described herein, other
architectures can likewise be implemented that use one or more data
buses not expressly shown, direct connectivity between elements,
and/or indirect coupling between other elements as recognized by
one of average skill in the art.
[0096] The term "module" is used in the description of the various
embodiments of the present disclosure. A module includes a
processing module, a functional block, hardware, and/or software
stored on memory for performing one or more functions as may be
described herein. Note that, if the module is implemented via
hardware, the hardware may operate independently and/or in
conjunction with software and/or firmware. As used herein, a module may
contain one or more sub-modules, each of which may be one or more
modules.
[0097] While particular combinations of various functions and
features of the present disclosure have been expressly described
herein, other combinations of these features and functions are
likewise possible. The present disclosure is not limited by the
particular examples disclosed herein and expressly incorporates
these other combinations.
* * * * *