U.S. patent application number 11/973,395 was filed with the patent office on October 5, 2007, and published on June 12, 2008, as publication number 20080140982, for "Redundant multiple computer architecture". Invention is credited to John M. Holt.

United States Patent Application
Publication Number: 20080140982
Application Number: 11/973,395
Kind Code: A1
Inventor: Holt; John M.
Publication Date: June 12, 2008
Family ID: 39268046

Redundant multiple computer architecture
Abstract
A multiple computer system incorporating a redundant memory
architecture is disclosed. Memory locations (0-99, A-C) stored on
one machine (C1, M1) are also stored on the hierarchically adjacent
machine (C2, M2) and are maintained updated. In the event of the
failure of only one machine, the hierarchically adjacent machine
has the memory locations of the failed machine and is able to
resume or take over the computational tasks of the failed machine,
thereby providing a measure of redundancy. Both distributed shared
memory (DSM) and replicated shared memory (RSM) systems are
disclosed. In particular, a partially replicated memory system,
structure, and method of replicating and using the same are all
disclosed.
Inventors: Holt; John M. (Essex, GB)

Correspondence Address:
PERKINS COIE LLP
P.O. BOX 2168
MENLO PARK, CA 94026
US

Family ID: 39268046
Appl. No.: 11/973,395
Filed: October 5, 2007
Related U.S. Patent Documents

Application Number: 60/850,532
Filing Date: Oct 9, 2006
Current U.S. Class: 711/173; 711/E12.016
Current CPC Class: G06F 9/52 (20130101)
Class at Publication: 711/173; 711/E12.016
International Class: G06F 12/00 (20060101)
Foreign Application Data

Date: Oct 5, 2006
Code: AU
Application Number: 2006905527
Claims
1. A method of storing data in a multiple computer system
comprising a plurality of computers each having a local memory and
each being interconnected to the other computers via a
communications network, said method comprising: (i) partitioning
the local memory of each computer into two compartments; (ii) for
each computer storing data created by, or required for, the
operation of said computer firstly in a compartment in said
computer, and secondly in a compartment of one other computer; and
(iii) updating changes in content or value in said stored data at
both said compartments; whereby in the event of failure of only one
of said computers said stored and updated data is available in the
remaining computers.
2. The method as in claim 1, including the further step of: (iv)
allocating a hierarchical order to said computers; and (v) for each
computer storing the data for that computer in one of said local
memory compartments and storing the data for the hierarchically
adjacent computer in the other compartment of said local
memory.
3. The method as in claim 2, including the step of: (vi) making all
the data stored on each computer accessible to all other ones of
said computers to thereby form a distributed shared memory computer
system.
4. The method as in claim 1, including the step of: (vii)
replicating some of the stored data and storing same on each said
computer, but not replicating all of said stored data to thereby
form a partially replicated stored memory computer system.
5. The method as in claim 4, wherein the replicated stored memory
of each computer is substantially the same.
6. The method as in claim 4, wherein the replicated stored memory
is substantially located in a single computer.
7. The method as in claim 1, including the further step of:
transmitting changes made to a memory location of a first computer
to another computer for storage therein, and said other computer
transmitting said changes to the remaining computers.
8. The method as in claim 7, wherein said multiple computers are
arranged in a hierarchical order and said first computer and said
other computer are adjacent computers in said hierarchical
order.
9. A computer program stored in a computer readable media, the
computer program including executable computer program instructions
and adapted for execution by a plurality of computers in a multiple
computer system including a plurality of computers to modify the
operation of the multiple computer system; the modification of
operation including performing a method of storing data in said
multiple computer system, said method comprising: (i) partitioning
the local memory of each computer into two compartments; (ii) for
each computer storing data created by, or required for, the
operation of said computer firstly in a compartment in said
computer, and secondly in a compartment of one other computer; and
(iii) updating changes in content or value in said stored data at
both said compartments; whereby in the event of failure of only one
of said computers said stored and updated data is available in the
remaining computers.
10. A multiple computer system comprising: a plurality of computers
each having a local memory and each being interconnected to the
other computers via a communications network, the local memory of
each computer being partitioned into two compartments; data storage
allocation means to allocate to each computer data created by, or
required for, the operation of that computer firstly in a
compartment in that computer, and secondly in a compartment of one
other computer; and data updating means to store changes in the
content or value of said stored data at both said compartments;
whereby in the event of failure of only one of said computers all
said stored and updated data is available in the remaining
computers.
11. The system as in claim 10, wherein said computers are arranged
in a hierarchical order and each computer stores data for that
computer in one of said local memory compartments and stores data
for the hierarchically adjacent computer in the other compartment
of said local memory.
12. The system as in claim 10, wherein all data stored on each
computer is accessible to all other ones of said computers so that
said system comprises a distributed shared memory computer
system.
13. The system as in claim 10, wherein some of said stored data is
replicated and stored on each of said computers, but not all of
said stored data is replicated, so that said system comprises a
partially replicated stored memory computer system.
14. The system as in claim 13, wherein the replicated stored memory
of each computer is substantially the same.
15. The system as in claim 10, wherein the replicated stored memory
is substantially located in a single computer.
16. The system as in any one of claims 10 to 15, wherein changes
made to a memory location of a first computer are transmitted to
another computer for storage therein, and said other computer
transmits said changes to the remaining computers.
17. The system as in claim 16, wherein said multiple computers are
arranged in a hierarchical order and said first computer and said
other computer are adjacent computers in said hierarchical
order.
18. A data storage allocation control system comprising: means for
identifying data created by or required for the operation of a
particular primary computer; and means for allocating said
identified data firstly in a first partitioned memory compartment
in that particular primary computer, and for allocating said
identified data secondly in a second partitioned memory compartment
of one other computer different from said primary computer.
19. A data storage allocation control system as defined in claim
18, further comprising: data updating means to store changes in the
content or value of said stored data at both said first and said
second compartments.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of priority to
U.S. Provisional Application Nos. 60/850,532 (5027W-US) filed 9
Oct. 2006; and to Australian Provisional Application No. 2006905529
(5027W-AU) filed on 5 Oct. 2006, each of which is hereby
incorporated herein by reference.
[0002] This application is related to concurrently filed U.S.
Application entitled "Redundant Multiple Computer Architecture,"
(Attorney Docket No. 61130-8022.US02 (5027W-US02)) and concurrently
filed U.S. Application entitled "Redundant Multiple Computer
Architecture," (Attorney Docket No. 61130-8022.US03 (5027W-US03)),
each of which is hereby incorporated herein by reference.
FIELD OF THE INVENTION
[0003] The present invention relates to multiple computer systems
and to single computer systems operating in a multiple computer
system environment. In particular, the present invention relates to
the provision of redundancy in multiple computer systems.
BACKGROUND
[0004] Ideally, redundancy is provided in a multiple computer
system so that in the event that one computer fails, not only is
the data which is stored in local application memory of the failed
computer preserved on another computer, but that other computer (or
a different computer), or a number of computers is/are able to step
in and undertake the computing task previously undertaken by the
application program of the failed computer.
[0005] Hitherto, such redundancy has not been available. For
example, in supercomputing a "checkpoint" system is used. Under
this arrangement, at predetermined intervals of, say, every hour,
or after some predetermined or dynamically determined number of
operations have been performed, execution stops and a permanent
record is made of the current status and current data of each
computer. As a consequence, in the event of a failure, it is
necessary to stop all computers, restore the status and data as of
the last checkpoint, and then, with a replaced or repaired
computer, recommence executing instructions as of the last
checkpoint.
[0006] Another form of multiple computer system is that known as
Distributed Shared Memory (DSM). Here individual computers are
interconnected by means of a communications network or some other
equivalent communications link and the local memory of each of the
computers is accessible by any one of the other computers.
Hitherto, redundancy has not been possible in DSM computing.
[0007] A different form of multiple computer system has recently
been described, but not commercially used, and this is known as
Replicated Shared Memory (RSM). This system is described in
International Patent Application No. PCT/AU2005/000580 (Attorney
Ref 5027F-WO) published under WO 2005/103926 (to which U.S. patent
application Ser. No. 11/111,946 and published under No.
2005-0262313 corresponds) in the name of the present applicant.
This specification discloses how different portions of an
application program written to execute on only a single computer
can be operated substantially simultaneously on a corresponding
different one of a plurality of computers. That simultaneous
operation has not been commercially used as of the priority date of
the present application. International Patent Application Nos.
PCT/AU2005/001641 (WO 2006/110937) (Attorney Ref 5027F-D1-WO) to
which U.S. patent application Ser. No. 11/259,885 entitled:
"Computer Architecture Method of Operation for Multi-Computer
Distributed Processing and Co-ordinated Memory and Asset Handling"
corresponds, and PCT/AU2006/000532 (WO 2006/110957) (Attorney Ref:
5027F-D2-WO) both in the name of the present applicant and both
unpublished as at the priority date of the present application,
also disclose further details. The contents of the specification of
each of the abovementioned prior application(s) are hereby
incorporated into the present specification by cross reference for
all purposes.
[0008] Briefly stated, the abovementioned patent specifications
disclose that at least one application program written to be
operated on only a single computer can be simultaneously operated
on a number of computers each with independent local memory. The
memory locations required for the operation of that program are
replicated in the independent local memory of each computer. On
each occasion on which the application program writes new data to
any replicated memory location, that new data is transmitted and
stored at each corresponding memory location of each computer. Thus
apart from the possibility of transmission delays, each computer
has a local memory the contents of which are substantially
identical to the local memory of each other computer and are
updated to remain so. Since all application programs, in general,
read data much more frequently than they cause new data to be
written, the abovementioned arrangement enables very substantial
advantages in computing speed to be achieved. In particular, the
stratagem enables two or more commodity computers interconnected by
a commodity communications network to be operated simultaneously
running the application program written to be executed on
only a single computer.
GENESIS OF THE INVENTION
[0009] The genesis of the present invention is a desire to provide
at least some redundancy in multiple computer systems.
SUMMARY OF THE INVENTION
[0010] According to a first aspect of the present invention there
is disclosed a method of storing data in a multiple computer system
comprising a plurality of computers each having a local memory and
each being interconnected to the other computers via a
communications network, said method comprising the steps of:
[0011] (i) partitioning the local memory of each computer into two
compartments,
[0012] (ii) for each computer storing data created by, or required
for, the operation of said computer firstly in a compartment in
said computer, and secondly in a compartment of one other computer,
and
[0013] (iii) updating changes in content or value in said stored
data at both said compartments,
whereby in the event of failure of only one of said computers said
stored and updated data is available in the remaining
computers.
[0014] According to a second aspect of the present invention there is
disclosed a multiple computer system comprising a plurality of
computers each having a local memory and each being interconnected
to the other computers via a communications network, the local
memory of each computer being partitioned into two compartments,
said system including data storage allocation means to allocate to
each computer data created by, or required for, the operation of
that computer firstly in a compartment in that computer, and
secondly in a compartment of one other computer, and data updating
means to store changes in the content or value of said stored data
at both said compartments, whereby in the event of failure of only
one of said computers all said stored and updated data is available
in the remaining computers.
[0015] According to a third aspect of the present invention there
is disclosed a single computer adapted to operate in a multiple
computer system comprising a plurality of computers each having a
local memory and each being interconnected to the other computers
via a communications network, said single computer having a local
memory which is partitioned into two compartments, a communications
port for connection with said communications network, a data
updating means connected with said communications port to receive
data from, or send data to, said communications port, and a data
storage allocation means to store in a first of said compartments
first data created by, or required for, the operation of said
computer, to send said first data to said communications port for
storage in another computer, and to receive from said
communications port second data created by, or required for, the
operation of another computer whereby in the event of failure of
said another computer the data required for said single computer to
take over the computational tasks of said another computer is
present in said single computer.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Embodiments of the present invention will now be described
with reference to the drawings in which:
[0017] FIG. 1 is a schematic representation of a Redundant Array of
Independent Disks (RAID) in which static data is able to be stored
in a redundant manner,
[0018] FIG. 2 is a schematic representation of a DSM multiple
computer system,
[0019] FIG. 3 is a schematic representation of a DSM multiple
computer system with memory arranged to provide redundancy,
[0020] FIG. 4A is a schematic illustration of a prior art computer
arranged to operate JAVA code and thereby constitute a single JAVA
virtual machine,
[0021] FIG. 4B is a drawing similar to FIG. 4A but illustrating the
initial loading of code,
[0022] FIG. 4C illustrates the interconnection of a multiplicity of
computers each being a JAVA virtual machine to form a multiple
computer system,
[0023] FIG. 5 schematically illustrates "n" application running
computers to which at least one additional server machine X is
connected,
[0024] FIG. 5A is a schematic representation of an RSM multiple
computer system,
[0025] FIG. 5B is a similar schematic representation of a partial
or hybrid RSM multiple computer system,
[0026] FIGS. 6 and 7 are each a schematic representation of an RSM
multiple computer system,
[0027] FIGS. 6A and 7A illustrate a modified case of FIGS. 6 and 7
of partially replicated application memory
locations/contents/values,
[0028] FIG. 8 is a modification to the arrangement illustrated in
FIG. 5 in which partial replicated shared memory is provided with
redundancy,
[0029] FIG. 9 is a view similar to FIG. 8 and illustrating another
partial replicated shared memory system,
[0030] FIG. 10 is a further embodiment in which redundancy is
provided by means of an additional single computer, and
[0031] FIG. 11 is a view similar to FIG. 10 and illustrating a
modification to the arrangement of FIG. 10.
DETAILED DESCRIPTION
[0032] In computing tasks where continued access to stored data on
a disk drive storage device is crucial, it is known to provide disk
drive redundancy by means of a Redundant Array of Independent Disks
(RAID) and such an arrangement is schematically illustrated in FIG.
1. It is important to note in this connection that the redundancy
of the disk drive is in relation to failure of a single disk and
has nothing to do with the failure of the computer which needs to
access the data stored on the disk. It is also noted that the data
is static in the sense that the data once written to the disk does
not change and is persistent until it is eventually
overwritten.
[0033] In the arrangement illustrated in FIG. 1, a computer 1 is
connected to a disk controller 2 which is in turn connected to a
plurality of "n" disks or disk drives D1, D2, . . . Dn, where "n"
is an integer greater than or equal to two. In the illustrated
embodiment, five disks or disk drives D1-D5 are illustrated. Data
from the computer or machine 1 is sent to the disk controller 2
where a decision is made as to what data to store on which disk.
Some data A is stored on disk D1, some data B is stored on disk D2,
some data C is stored on disk D3, and some data D is stored on disk
D4. In order to provide redundancy, some additional data, which is
conventionally termed parity data, is stored on disk 5 and this is
indicated as P[A+B+C+D]. The concept of parity is well known in
computing. In order to give a trivial example, if the value of A is
12, the value of B is 13, the value of C is 14, and the value of D
is 15, then utilising a simple parity algorithm what is stored on
disk D5 is the sum 54 of these four individual pieces of data. As a
consequence, if for any reason disk D1, for example, were to fail,
then it would be possible to reconstitute the data A by taking the
value of the data stored on disk D5 (e.g. the parity sum 54) and
subtracting 13, 14, and then 15 in turn from this total to arrive
at the original figure for A. This is an example of a reversible
encoding technique. In general, parity utilises reversible encoding
techniques. It will be appreciated in the light of the description
provided herein that this is merely an illustrative example of a
particular kind of parity information and recovery of the original
data from the failed disk drive using the stored parity data, and
that the invention is not limited only to this particular form of
parity data or data recovery, but rather contemplates any form of
parity data and recovery.
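By way of illustration only, the following minimal sketch expresses the sum-parity arithmetic of the above example in the JAVA language used elsewhere in this specification. The class and method names are assumptions made for this sketch and do not appear in the patent; practical RAID systems commonly use XOR parity rather than the simple arithmetic sum used here for clarity.

    // A minimal sketch of the sum-parity scheme described above (assumed
    // names; not the patent's own code). Parity is the arithmetic sum of
    // the data values, e.g. P[A+B+C+D] stored on disk D5.
    public class SumParity {

        static long parity(long[] data) {
            long sum = 0;
            for (long d : data) {
                sum += d;
            }
            return sum;
        }

        // Recover the value lost on a failed disk by subtracting the
        // surviving values from the stored parity sum.
        static long recover(long paritySum, long[] survivors) {
            long lost = paritySum;
            for (long s : survivors) {
                lost -= s;
            }
            return lost;
        }

        public static void main(String[] args) {
            long[] data = {12, 13, 14, 15};   // A, B, C, D from the example
            long p = parity(data);            // 54, stored on disk D5
            // Disk D1 (holding A = 12) fails; recover A from B, C, D and p.
            System.out.println(recover(p, new long[]{13, 14, 15})); // 12
        }
    }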
[0034] In FIG. 1, each of the disks D1-D5 is shown as having only
three data locations. In the second data location, data W, X, Y,
and Z are stored on disks D2-D5 respectively, whilst their parity
data sum is stored on disk D1. Similarly, data H, I, J, and K are stored on disks
D3, D4, D5, and D1 respectively whilst their parity data sum is
stored on disk D2. This arrangement distributes the stored sums, or
parity data, amongst the various disks and this is advantageous
since it evens out the storage requirement between disks. That is,
it would be possible to store the data A, the data W and the data H
for example all on disk D1 and store all the parity data on disk D5
but this arrangement is generally undesirable.
[0035] The abovementioned arrangement provides an acceptable level
of redundancy, particularly where a delay can be tolerated between
the time of failure and the time at which operation of the data
store can re-commence. However, it should be noted that the
computer 1 is not a multiple computer system and that the
redundancy is only in respect of the static data stored on the
disks and so the RAID system does not provide any assistance in the
event of the failure of computer 1.
[0036] Turning now to FIG. 2, a known multiple computer system is
illustrated in which "n" computers C1, C2 . . . Cn are provided
each of which has a corresponding local memory m1, m2 . . . mn. The
computers C1, C2 . . . Cn are interconnected by means of a
communication system 5 which typically takes the form of a
commercially available ETHERNET or similar communication system or
network, though any communication network or system capable of
providing the described level of communication may be utilised. For
the purposes of explanation, but not as a limitation of the
invention, each of the individual memories is provided with 100
memory locations which are conveniently consecutively numbered so
that the memory locations of the local memory m1 are 0-99, whilst
the memory locations for the local memory m2 are numbered 100-199,
etc. A characteristic of the DSM system is that each of the
individual computers is able to access each of the memory locations
of all the other computers in addition to its own memory locations.
This architecture arrangement has the advantage of increasing the
total memory available to all the computers; however, it does
result in slowing of the computational speed of the multiple
computer system because of the need for memory reads and memory
writes to take place from one computer to another via the
communications system 5.
[0037] In accordance with a first embodiment of the present
invention, as illustrated in FIG. 3, the abovementioned distributed
shared memory multiple computer system can be modified by
partitioning the memory of each computer into two parts. The
computers are arranged in a hierarchy being numbered from C1
through to Cn. Each computer preferably has its "own" memory stored
in one of the compartments of the partitioned local memory, and the
memory of the adjacent hierarchical computer in the other local
memory compartment. Thus local memory m2 of computer C2 includes
the memory locations 100-199 of computer C2 and includes memory
locations R0-R99 which are a replica of the memory locations 0-99
of computer C1.
[0038] In the multiple computer system of FIG. 3, on those
occasions where data is to be read, it is read from the "normal"
computer. Thus if memory location 120 is to be read this is read
from computer C2 which would have been the case for the computer
system of FIG. 2. However, on those occasions where data is to be
written, or overwritten, then the data has to be written to two
locations. For example, in the case of memory location 20, the data
is written to computer C1 and is also written to computer C2 to
memory location R20.
[0039] Thus in the arrangement of FIG. 3, if the computer C1, for
example were to fail then a request to read, for example, memory
location 58 which was directed to computer C1 would be
unsuccessful. Instead the request is then directed to the adjacent
computer C2 and memory location R58 is read from computer C2. In
this way, the failure of one of the computers C1-Cn does not
disrupt the entire operation of the multiple computer system.
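The read and write routing just described can be pictured with the following minimal sketch, which assumes the looped hierarchical ordering described later (so that the replica of the last machine is held by the first). All class and method names here are assumptions for illustration, not code disclosed in the patent.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RedundantDsm {
        private final int n;                                  // computers C1..Cn
        private final List<Map<Integer, Long>> own = new ArrayList<>();     // "own" compartment
        private final List<Map<Integer, Long>> replica = new ArrayList<>(); // adjacent machine's data
        private final boolean[] failed;

        RedundantDsm(int n) {
            this.n = n;
            this.failed = new boolean[n];
            for (int i = 0; i < n; i++) {
                own.add(new HashMap<>());
                replica.add(new HashMap<>());
            }
        }

        // Machine i (counted from 0) is the "normal" home of locations
        // 100*i .. 100*i+99, matching the numbering of FIG. 2.
        private int homeOf(int location) { return location / 100; }

        // The hierarchy is looped, so the replica of machine i lives on
        // machine (i+1) mod n.
        private int adjacentOf(int machine) { return (machine + 1) % n; }

        // A write goes to two compartments: the home machine and its
        // hierarchically adjacent replica (e.g. location 20 -> R20 on C2).
        void write(int location, long value) {
            int home = homeOf(location);
            if (!failed[home]) {
                own.get(home).put(location, value);
            }
            replica.get(adjacentOf(home)).put(location, value);
        }

        // A read is normally served by the home machine; if only that one
        // machine has failed, the request is redirected to the adjacent
        // machine's replica (e.g. R58 read from C2).
        Long read(int location) {
            int home = homeOf(location);
            if (!failed[home]) {
                return own.get(home).get(location);
            }
            return replica.get(adjacentOf(home)).get(location);
        }

        void markFailed(int machine) { failed[machine] = true; }
    }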
[0040] The computational tasks which were carried out by the failed
computer should be re-allocated so as to share these amongst the
remaining computers.
[0041] In one embodiment the computers each use a "virtual memory
page fault" procedure, or similar, to ensure that every time that a
particular computer such as C1 writes to a replicated application
memory location/content/value, the content or value of that write
operation (that is, the updated value of the written-to replicated
application memory location) is subsequently updated to the
corresponding replica application memory locations/contents/values
of computer C2. Alternatively, each machine C1 . . . Cn may use any
"tagging" (or similar "marking", "alerting") means or methods to
record or indicate that a write to one or more replicated
application memory locations/contents/values has taken place, and
that in due course, the identified replicated application memory
locations which have been recorded or identified as having been
written to, are to have their new value in turn propagated to all
other corresponding replica application memory
locations/contents/values on one or more other member machines of
the replicated shared memory arrangement or other operating
plurality of machines. One such tagging method is disclosed in the
International Patent Application Nos. PCT/AU2005/001641
(WO 2006/110937) (Attorney Ref 5027F-D1-WO) to which U.S. patent
application Ser. No. 11/259,885 entitled: "Computer Architecture
Method of Operation for Multi-Computer Distributed Processing and
Co-ordinated Memory and Asset Handling" corresponds and
PCT/AU2006/000532 (WO 2006/110957) (Attorney Ref 5027F-D2-WO).
Ultimately however, how the writes are detected is not important;
what is important is that they be detected and that in due course
the memory contents or values are sent to computer C2.
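A minimal sketch of such a "tagging" approach follows. The use of a set of dirty locations and a callback standing in for transmission to the replica are assumptions for illustration only, not the mechanism disclosed in the cited applications.

    import java.util.HashMap;
    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;

    public class TaggedWriteTracker {
        private final Map<Integer, Long> memory = new HashMap<>();
        private final Set<Integer> dirty = new LinkedHashSet<>(); // tagged locations

        // Every write is applied locally and the location tagged as modified.
        void write(int location, long value) {
            memory.put(location, value);
            dirty.add(location);
        }

        // In due course, the current value of each tagged location is sent
        // to the corresponding replica (represented here by a callback).
        void propagate(java.util.function.BiConsumer<Integer, Long> sendToReplica) {
            for (int location : dirty) {
                sendToReplica.accept(location, memory.get(location));
            }
            dirty.clear();
        }
    }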
[0042] In addition to computer C2 being updated with writes to the
memory of computer C1, the computer C2 is preferably also updated
from time to time with advice that computer C1 in executing its
portion of the application program 50 has reached certain
"milestone" instructions.
[0043] In a simple embodiment of this "milestone" technique, from
time to time each computer (eg C1) halts execution of code and for
each thread records the program counter and associated state data
(eg one or more of thread stacks, register memory locations and
method frames). This information is then sent to the corresponding
computer C2. Then the computer C1 resumes execution. This simple
embodiment may not work with all application programs but will work
with a substantial number or proportion of such application
programs. In a further embodiment, both "milestones" and memory
changes are collected and/or sent at the same time (ie at the time
of code execution halt, or the execution halt is timed to coincide
with the detected write to memory) so that computer C2 receives
both together. "Together" in this instance can be a single message
containing both items of data, or two or more messages closely
spaced in time.
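One way to picture the bundling of a "milestone" with the pending memory changes is the sketch below; the Milestone record and its fields are illustrative assumptions (a real system would also capture thread stacks, register memory locations and method frames, as noted above).

    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    public class MilestoneCheckpoint {
        // One milestone bundles the position reached with the pending
        // memory changes, so computer C2 receives both "together".
        record Milestone(String threadName, int programCounter,
                         Map<Integer, Long> memoryChanges) implements Serializable {}

        private final Map<Integer, Long> pendingWrites = new HashMap<>();

        void recordWrite(int location, long value) {
            pendingWrites.put(location, value);
        }

        // Called at a milestone instruction: execution halts, the snapshot
        // is taken, and execution then resumes.
        Milestone takeMilestone(String threadName, int programCounter) {
            Milestone m = new Milestone(threadName, programCounter,
                                        new HashMap<>(pendingWrites));
            pendingWrites.clear();
            return m; // a real system would transmit this to computer C2 here
        }
    }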
[0044] In the event that a computer, for example computer C1,
should fail, then several consequences flow. Firstly, updates to
the memory location of computer C1 are sent to computer C2 instead.
Secondly, computer C2 is able to initiate execution of the
application program previously executed by computer C1 commencing
at the position of the last "milestone" instruction reached by
computer C1 prior to its failure. In this connection the computer
C2 utilises both the application code and the memory contents of
computer C1 which are replicated in computer C2.
[0045] The above-mentioned failure is able to be detected by a
conventional detector attached to each of the application program
running machines and reporting to machine X, for example.
[0046] Such a detector is commercially available as a Simple
Network Management Protocol (SNMP). This is essentially a small
program which operates in the background and provides a specified
output signal in the event that failure is detected.
[0047] Such a detector is able to sense failure in a number of
ways, any one, or more, of which can be used simultaneously. For
example, machine X can interrogate each of the other machines M1,
M2, . . . Mn in turn requesting a reply. If no reply is forthcoming
after a predetermined time, or after a small number of "reminders"
are sent, also without reply, the non-responding machine is
pronounced "dead".
[0048] Alternatively, or additionally, each of the machines M1, . .
. Mn can at regular intervals, say every 30 seconds, send a
predetermined message to machine X (or to all other machines in the
absence of a server) to say that all is well. In the absence of
such a message the machine can be presumed "dead" or can be
interrogated (and if it then fails to respond) is pronounced
"dead".
[0049] Further methods include looking for a turn-on event in an
uninterruptible power supply (UPS) used to power each machine,
which therefore indicates a failure of mains power. Similarly,
conventional switches such as those manufactured by CISCO of
California, USA include a provision to check either the presence of
power to the communications network 53, or whether the network
cable is disconnected.
[0050] In some circumstances, for example for enhanced redundancy
or for increased bandwidth, each individual machine can be
"multi-peered" which means there are two or more links between the
machine and the communications network 53. An SNMP product which
provides two options in this circumstance-namely wait for both/all
links to fail before signalling machine failure, or signal machine
failure if any one link fails, is the 12 Port Gigabit Managed
Switch GSM 7212 sold under the trade marks NETGEAR and PROSAFE.
[0051] The arrangements illustrated in FIGS. 4A-4C are described
with reference to the JAVA language. However, it will be apparent
to those skilled in the art that the invention is not limited to
this language and, in particular, can be used with other languages
(including procedural, declarative and object oriented languages)
including the MICROSOFT.NET platform and architecture (Visual
Basic, Visual C, and Visual C++, and Visual C#), FORTRAN, C, C++,
COBOL, BASIC and the like.
[0052] It is known in the prior art to provide a single computer or
machine (produced by any one of various manufacturers and having an
operating system (or equivalent control software or other
mechanism) operating in any one of various different languages)
utilizing the particular language of the application by creating a
virtual machine as illustrated in FIG. 4A.
[0053] The code and data and virtual machine configuration or
arrangement of FIG. 4A takes the form of the application code 50
written in the JAVA language and executing within the JAVA virtual
machine 61. Thus where the intended language of the application is
the language JAVA, a JAVA virtual machine is used which is able to
operate code in JAVA irrespective of the machine manufacturer and
internal details of the computer or machine. For further details,
see "The JAVA Virtual Machine Specification" 2.sup.nd Edition by T.
Lindholm and F. Yellin of Sun Microsystems Inc of the USA which is
incorporated herein by reference.
[0054] This conventional art arrangement of FIG. 4A is modified by
the present applicant by the provision of an additional facility
which is conveniently termed a "distributed run time" or a
"distributed run time system" DRT 71 and as seen in FIG. 4B.
[0055] In FIGS. 4B and 4C, the application code 50 is loaded onto
the Java Virtual Machine(s) M1, M2, . . . Mn in cooperation with
the distributed runtime system 71, through the loading procedure
indicated by arrow 75 or 75A or 75B. As used herein the terms
"distributed runtime" and the "distributed run time system" are
essentially synonymous, and by means of illustration but not
limitation are generally understood to include library code and
processes which support software written in a particular language
running on a particular platform. Additionally, a distributed
runtime system may also include library code and processes which
support software written in a particular language running within a
particular distributed computing environment. A runtime system
(whether a distributed runtime system or not) typically deals with
the details of the interface between the program and the operating
system such as system calls, program start-up and termination, and
memory management. For purposes of background, a conventional
Distributed Computing Environment (DCE) (that does not provide the
capabilities of the inventive distributed run time or distributed
run time system 71 used in the preferred embodiments of the present
invention) is available from the Open Software Foundation. This
Distributed Computing Environment (DCE) performs a form of
computer-to-computer communication for software running on the
machines, but among its many limitations, it is not able to
implement the desired modification or communication operations.
Among its functions and operations the preferred DRT 71 coordinates
the particular communications between the plurality of machines M1,
M2, . . . Mn. Moreover, the preferred distributed runtime 71 comes
into operation during the loading procedure indicated by arrow 75A
or 75B of the JAVA application 50 on each JAVA virtual machine 72
or machines JVM#1, JVM#2, . . . JVM#n of FIG. 4C. It will be
appreciated in light of the description provided herein that
although many examples and descriptions are provided relative to
the JAVA language and JAVA virtual machines so that the reader may
get the benefit of specific examples, there is no restriction to
either the JAVA language or JAVA virtual machines, or to any other
language, virtual machine, machine or operating environment.
[0056] FIG. 4C shows in modified form the arrangement of the JAVA
virtual machines, each as illustrated in FIG. 4B. It will be
apparent that again the same application code 50 is loaded onto
each machine M1, M2 . . . Mn. However, the communications between
each machine M1, M2 . . . Mn are as indicated by arrows 83, and
although physically routed through the machine hardware, are
advantageously controlled by the individual DRT's 71/1 . . . 71/n
within each machine. Thus, in practice this may be conceptualised
as the DRT's 71/1, . . . 71/n communicating with each other via the
network or other communications link 53 rather than the machines
M1, M2 . . . Mn communicating directly themselves or with each
other. Contemplated and included are either this direct
communication between machines M1, M2 . . . Mn or DRT's 71/1, 71/2
. . . 71/n or a combination of such communications. The preferred
DRT 71 provides communication that is transport, protocol, and link
independent.
[0057] The one common application program or application code 50
and its executable version (with likely modification) is
simultaneously or concurrently executing across the plurality of
computers or machines M1, M2 . . . Mn. The application program 50
is written to execute on a single machine or computer (or to
operate on the multiple computer system of the abovementioned
patent applications which emulate single computer operation).
Essentially the modified structure is to replicate an identical
memory structure and contents on each of the individual
machines.
[0058] The term "common application program" is to be understood to
mean an application program or application program code written to
operate on a single machine, and loaded and/or executed in whole or
in part on each one of the plurality of computers or machines M1,
M2 . . . Mn, or optionally on each one of some subset of the
plurality of computers or machines M1, M2 . . . Mn. Put somewhat
differently, there is a common application program represented in
application code 50. This is either a single copy or a plurality of
identical copies each individually modified to generate a modified
copy or version of the application program or program code. Each
copy or instance is then prepared for execution on the
corresponding machine. At the point after they are modified they
are common in the sense that they perform similar operations and
operate consistently and coherently with each other. It will be
appreciated that a plurality of computers, machines, information
appliances, or the like implementing the above arrangement may
optionally be connected to or coupled with other computers,
machines, information appliances, or the like that do not implement
the above arrangement.
[0059] The same application program 50 (such as for example a
parallel merge sort, or a computational fluid dynamics application
or a data mining application) is run on each machine, but the
executable code of that application program is modified on each
machine as necessary such that each executing instance (copy or
replica) on each machine coordinates its local operations on that
particular machine with the operations of the respective instances
(or copies or replicas) on the other machines such that they
function together in a consistent, coherent and coordinated manner
and give the appearance of being one global instance of the
application (i.e. a "meta-application").
[0060] The copies or replicas of the same or substantially the same
application codes are each loaded onto a corresponding one of the
interoperating and connected machines or computers. As the
characteristics of each machine or computer may differ, the
application code 50 may be modified before loading, or during the
loading process, or with some disadvantages after the loading
process, to provide a customization or modification of the
application code on each machine. Some dissimilarity between the
programs or application codes on the different machines may be
permitted so long as the other requirements for interoperability,
consistency, and coherency as described herein can be maintained.
As will become apparent hereafter, each of the machines M1, M2 .
. . Mn and thus all of the machines M1, M2 . . . Mn have the same
or substantially the same application code 50, usually with a
modification that may be machine specific.
[0061] Before the loading of, or during the loading of, or at any
time preceding the execution of, the application code 50 (or the
relevant portion thereof) on each machine M1, M2 . . . Mn, each
application code 50 is modified by a corresponding modifier 51
according to the same rules (or substantially the same rules since
minor optimizing changes are permitted within each modifier 51/1,
51/2 . . . 51/n).
[0062] Each of the machines M1, M2 . . . Mn operates with the same
(or substantially the same or similar) modifier 51 (in some
arrangements implemented as a distributed run time or DRT 71 and in
other arrangements implemented as an adjunct to the application
code and data 50, and also able to be implemented within the JAVA
virtual machine itself). Thus all of the machines M1, M2 . . . Mn
have the same (or substantially the same or similar) modifier 51
for each modification required. A different modification, for
example, may be required for memory management and replication, for
initialization, for finalization, and/or for synchronization
(though not all of these modification types may be required for all
arrangements).
[0063] There are alternative implementations of the modifier 51 and
the distributed run time 71. For example, as indicated by broken
lines in FIG. 4C, the modifier 51 may be implemented as a component
of or within the distributed run time 71, and therefore the DRT 71
may implement the functions and operations of the modifier 51.
Alternatively, the function and operation of the modifier 51 may be
implemented outside of the structure, software, firmware, or other
means used to implement the DRT 71 such as within the code and data
50, or within the JAVA virtual machine itself. In one embodiment,
both the modifier 51 and DRT 71 are implemented or written in a
single piece of computer program code that provides the functions
of the DRT and modifier. In this case the modifier function and
structure is, in practice, subsumed into the DRT. Independent of
how it is implemented, the modifier function and structure is
responsible for modifying the executable code of the application
code program, and the distributed run time function and structure
is responsible for implementing communications between and among
the computers or machines. The communications functionality in one
arrangement is implemented via an intermediary protocol layer
within the computer program code of the DRT on each machine. The
DRT can, for example, implement a communications stack in the JAVA
language and use the Transmission Control Protocol/Internet
Protocol (TCP/IP) to provide for communications or talking between
the machines. These functions or operations may be implemented in a
variety of ways, and it will be appreciated in light of the
description provided herein that exactly how these functions or
operations are implemented or divided between structural and/or
procedural elements, or between computer program code or data
structures, is not important or crucial.
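As a purely illustrative sketch of the kind of JAVA-language TCP/IP communications stack the DRT could implement, consider the following; the two-field update message and the class name are assumptions, not a documented DRT protocol.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    public class DrtLink {
        private final DataOutputStream out;

        DrtLink(String host, int port) throws IOException {
            // One TCP connection to the DRT of a peer machine on network 53.
            out = new DataOutputStream(new Socket(host, port).getOutputStream());
        }

        // Transmit one replica memory update to the peer DRT.
        synchronized void sendUpdate(int location, long value) throws IOException {
            out.writeInt(location);
            out.writeLong(value);
            out.flush();
        }
    }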
[0064] However, in the arrangement illustrated in FIG. 4C, a
plurality of individual computers or machines M1, M2 . . . Mn are
provided, each of which are interconnected via a communications
network 53 or other communications link. Each individual computer
or machine is provided with a corresponding modifier 51. Each
individual computer is also provided with a communications port
which connects to the communications network. The communications
network 53 or path can be any electronic signalling, data, or
digital communications network or path and is preferably a slow
speed, and thus low cost, communications path, such as a network
connection over the Internet or any common networking
configurations including ETHERNET or INFINIBAND and extensions and
improvements, thereto. Preferably, the computers are provided with
one or more known communications ports (such as CISCO Power Connect
5224 Switches) which connect with the communications network
53.
[0065] As a consequence of the above described arrangement, if each
of the machines M1, M2, . . . , Mn has, say, an internal or local
memory capability of 10 MB, then the total memory available to the
application code 50 in its entirety is not, as one might expect,
the number of machines (n) times 10 MB. Nor is it the additive
combination of the internal memory capability of all n machines.
Instead it is either 10 MB, or some number greater than 10 MB but
less than n x 10 MB. In the situation where the internal memory
capacities of the machines are different, which is permissible,
then in the case where the internal memory in one machine is
smaller than the internal memory capability of at least one other
of the machines, then the size of the smallest memory of any of the
machines may be used as the maximum memory capacity of the machines
when such memory (or a portion thereof) is to be treated as
`common` memory (i.e. similar equivalent memory on each of the
machines M1 . . . Mn) or otherwise used to execute the common
application code.
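For the limiting case in which all of the local memory is to be treated as `common`, the capacity rule described above can be expressed as a short sketch; the class and method names are assumptions for illustration.

    public class CommonMemoryCapacity {
        // Returns the maximum memory usable as `common`, given each
        // machine's local capacity in megabytes: the smallest local
        // memory governs, not the sum across machines.
        static long maxCommonCapacityMb(long[] localCapacitiesMb) {
            long min = Long.MAX_VALUE;
            for (long c : localCapacitiesMb) {
                min = Math.min(min, c);
            }
            return min;
        }

        public static void main(String[] args) {
            // Five machines of 10 MB each: common memory is 10 MB,
            // not 5 x 10 MB.
            System.out.println(maxCommonCapacityMb(new long[]{10, 10, 10, 10, 10}));
        }
    }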
[0066] However, even though the manner in which the internal memory
of each machine is treated may initially appear to be a possible
constraint on performance, how this results in improved operation
and performance will become apparent hereafter. Naturally, each
machine M1, M2 . . . Mn has a private (i.e. `non-common`) internal
memory capability. The private internal memory capability of the
machines M1, M2, . . . , Mn are normally approximately equal but
need not be. For example, when a multiple computer system is
implemented or organized using existing computers, machines, or
information appliances, owned or operated by different entities,
the internal memory capabilities may be quite different. On the
other hand, if a new multiple computer system is being implemented,
each machine or computer is preferably selected to have an
identical internal memory capability, but this need not be so.
[0067] It is to be understood that the independent local memory of
each machine represents only that part of the machine's total
memory which is allocated to that portion of the application
program running on that machine. Thus, other memory will be
occupied by the machine's operating system and other computational
tasks unrelated to the application program 50.
[0068] Non-commercial operation of a prototype multiple computer
system indicates that not every machine or computer in the system
utilises or needs to refer to (e.g. have a local replica of) every
possible memory location. As a consequence, it is possible to
operate a multiple computer system without the local memory of each
machine being identical to every other machine, so long as the
local memory of each machine is sufficient for the operation of
that machine. That is to say, provided a particular machine does
not need to refer to (for example have a local replica of) some
specific memory locations, then it does not matter that those
specific memory locations are not replicated in that particular
machine.
[0069] It may also be advantageous to select the amounts of
internal memory in each machine to achieve a desired performance
level in each machine and across a constellation or network of
connected or coupled plurality of machines, computers, or
information appliances M1, M2, . . . , Mn. Having described these
internal and common memory considerations, it will be apparent in
light of the description provided herein that the amount of memory
that can be common between machines is not a limitation.
[0070] In some arrangements, some or all of the plurality of
individual computers or machines can be contained within a single
housing or chassis (such as so-called "blade servers" manufactured
by Hewlett-Packard Development Company, Intel Corporation, IBM
Corporation and others) or the multiple processors (eg symmetric
multiple processors or SMPs) or multiple core processors (eg dual
core processors and chip multithreading processors) manufactured by
Intel, AMD, or others, or implemented on a single printed circuit
board or even within a single chip or chipset. Similarly, also
included are computers or machines having multiple cores, multiple
CPU's or other processing logic.
[0071] When implemented in a non-JAVA language or application code
environment, the generalized platform, and/or virtual machine
and/or machine and/or runtime system is able to operate application
code 50 in the language(s) (possibly including for example, but not
limited to any one or more of source-code languages,
intermediate-code languages, object-code languages, machine-code
languages, and any other code languages) of that platform and/or
virtual machine and/or machine and/or runtime system environment,
and utilize the platform, and/or virtual machine and/or machine
and/or runtime system and/or language architecture irrespective of
the machine or processor manufacturer and the internal details of
the machine. It will also be appreciated that the platform and/or
runtime system can include virtual machine and non-virtual machine
software and/or firmware architectures, as well as hardware and
direct hardware coded applications and implementations.
[0072] For a more general set of virtual machine or abstract
machine environments, and for current and future computers and/or
computing machines and/or information appliances or processing
systems, and that may not utilize or require utilization of either
classes and/or objects, the structure, method and computer program
and computer program product are still applicable. Examples of
computers and/or computing machines that do not utilize either
classes and/or objects include for example, the x86 computer
architecture manufactured by Intel Corporation and others, the
SPARC computer architecture manufactured by Sun Microsystems, Inc
and others, the Power PC computer architecture manufactured by
International Business Machines Corporation and others, and the
personal computer products made by Apple Computer, Inc., and
others.
[0073] These types of computers, computing machines, information
appliances, and the virtual machine or virtual computing
environments implemented thereon that do not utilize the idea of
classes or objects may be generalized, for example, to include
primitive data types (such as integer data types, floating point
data types, long data types, double data types, string data types,
character data types and Boolean data types), structured data types
(such as arrays and records), derived types, or other code or data
structures of procedural languages or other languages and
environments such as functions, pointers, components, modules,
structures, references and unions. These structures and procedures,
when applied in combination where required, maintain a computing
environment where memory locations, address ranges, objects,
classes, assets, resources, or any other procedural or structural
aspect of a computer or computing environment are where required
created, maintained, operated, and deactivated or deleted in a
coordinated, coherent, and consistent manner across the plurality
of individual machines M1, M2 . . . Mn.
[0074] This analysis or scrutiny of the application code 50 can
take place either prior to loading the application program code 50,
or during the application program code 50 loading procedure, or
even after the application program code 50 loading procedure (or
some combination of these). It may be likened to an
instrumentation, program transformation, translation, or
compilation procedure in that the application code can be
instrumented with additional instructions, and/or otherwise
modified by meaning-preserving program manipulations, and/or
optionally translated from an input code language to a different
code language (such as for example from source-code language or
intermediate-code language to object-code language or machine-code
language). In this connection it is understood that the term
"compilation" normally or conventionally involves a change in code
or language, for example, from source code to object code or from
one language to another language. However, in the present instance
the term "compilation" (and its grammatical equivalents) is not so
restricted and can also include or embrace modifications within the
same code or language. For example, the compilation and its
equivalents are understood to encompass both ordinary compilation
(such as for example by way of illustration but not limitation,
from source-code to object code), and compilation from source-code
to source-code, as well as compilation from object-code to object
code, and any altered combinations therein. It is also inclusive of
so-called "intermediary-code languages" which are a form of "pseudo
object-code".
[0075] By way of illustration and not limitation, in one
arrangement, the analysis or scrutiny of the application code 50
takes place during the loading of the application program code such
as by the operating system reading the application code 50 from the
hard disk or other storage device, medium or source and copying it
into memory and preparing to begin execution of the application
program code. In another arrangement, in a JAVA virtual machine,
the analysis or scrutiny may take place during the class loading
procedure of the java.lang.ClassLoader.loadClass method (e.g.
"java.lang.ClassLoader.loadClass( )").
[0076] Alternatively, or additionally, the analysis or scrutiny of
the application code 50 (or of a portion of the application code)
may take place even after the application program code loading
procedure, such as after the operating system has loaded the
application code into memory, or optionally even after execution of
the relevant corresponding portion of the application program code
has started, such as for example after the JAVA virtual machine has
loaded the application code into the virtual machine via the
"java.lang.ClassLoader.loadClass( )" method and optionally
commenced execution.
[0077] Persons skilled in the computing arts will be aware of
various possible techniques that may be used in the modification of
computer code, including but not limited to instrumentation,
program transformation, translation, or compilation means and/or
methods.
[0078] One such technique is to make the modification(s) to the
application code, without a preceding or consequential change of
the language of the application code. Another such technique is to
convert the original code (for example, JAVA language source-code)
into an intermediate representation (or intermediate-code language,
or pseudo code), such as JAVA byte code. Once this conversion takes
place the modification is made to the byte code and then the
conversion may be reversed. This gives the desired result of
modified JAVA code.
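As a hedged, source-level illustration of such a meaning-preserving modification (the actual change would be made at the byte-code level, as described above), compare the original and modified forms below; the DRT.notifyWrite helper is an assumption for this sketch only, not a disclosed API.

    public class Example {
        static int counter; // a replicated application memory location

        // Original, unmodified application code:
        static void incrementOriginal() {
            counter = counter + 1;
        }

        // After modification: the write is followed by an instruction that
        // alerts the distributed run time, so the new value can be
        // propagated to the other machines in due course.
        static void incrementModified() {
            counter = counter + 1;
            DRT.notifyWrite("Example.counter", counter);
        }

        // Stand-in for the distributed run time's update facility.
        static class DRT {
            static void notifyWrite(String location, int newValue) {
                // queue (location, newValue) for transmission to the
                // other machines M1, M2 . . . Mn
            }
        }
    }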
[0079] A further possible technique is to convert the application
program to machine code, either directly from source-code or via
the abovementioned intermediate language or through some other
intermediate means. Then the machine code is modified before being
loaded and executed. A still further such technique is to convert
the original code to an intermediate representation, which is thus
modified and subsequently converted into machine code.
[0080] The present invention encompasses all such modification
routes and also a combination of two, three or even more, of such
routes.
[0081] The DRT 71 or other code modifying means is responsible for
creating or replicating a memory structure and contents on each of
the individual machines M1, M2 . . . Mn that permits the plurality
of machines to interoperate. In some arrangements this replicated
memory structure will be identical, whilst in other arrangements
this memory structure will have portions that are identical and
other portions that are not. In still other arrangements the memory
structures are different only in format or storage conventions such
as Big Endian or Little Endian formats or conventions.
[0082] These structures and procedures, when applied in combination
where required, maintain a computing environment where the memory
locations, address ranges, objects, classes, assets, resources, or
any other procedural or structural aspect of a computer or
computing environment are where required created, maintained,
operated, and deactivated or deleted in a coordinated, coherent,
and consistent manner across the plurality of individual machines
M1, M2 . . . Mn.
[0083] Therefore the terminology "one", "single", and "common"
application code or program includes the situation where all
machines M1, M2 . . . Mn are operating or executing the same
program or code and not different (and unrelated) programs, in
other words copies or replicas of same or substantially the same
application code are loaded onto each of the interoperating and
connected machines or computers.
[0084] In conventional arrangements utilising distributed software,
memory access from one machine's software to memory physically
located on another machine typically takes place via the network
interconnecting the machines. Thus, the local memory of each
machine is able to be accessed by any other machine and therefore
cannot be said to be independent. However, because the read and/or
write memory access to memory physically located on another
computer requires the use of the slow network
interconnecting the computers, in these configurations such memory
accesses can result in substantial delays in memory read/write
processing operations, potentially of the order of 10^6-10^7 cycles
of the central processing unit of the
machine (given contemporary processor speeds). Ultimately this
delay is dependent upon numerous factors, such as for example, the
speed, bandwidth, and/or latency of the communication network. This
in large part accounts for the diminished performance of the
multiple interconnected machines in the prior art arrangement.
[0085] However, in the present arrangement all reading of memory
locations or data is satisfied locally because a current value of
all (or some subset of all) memory locations is stored on the
machine carrying out the processing which generates the demand to
read memory.
[0086] Similarly, all writing of memory locations or data is
satisfied locally because a current value of all (or some subset of
all) memory locations is stored on the machine carrying out the
processing which generates the demand to write to memory.
[0087] Such local memory read and write processing operations can
typically be satisfied within 10^2-10^3 cycles of the central
processing unit. Thus, in practice there is substantially less
waiting for memory accesses which involve reads and/or writes.
Also, the local memory of each machine is not able to be accessed
by any other machine and can therefore be said to be
independent.
[0088] The arrangement is transport, network, and communications
path independent, and does not depend on how the communication
between machines or DRTs takes place. Even electronic mail (email)
exchanges between machines or DRTs may suffice for the
communications.
[0089] In connection with the above, it will be seen from FIG. 5
that there are a number of machines M1, M2, . . . Mn, "n" being an
integer greater than or equal to two, on which the application
program 50 of FIG. 4C is being run substantially simultaneously.
These machines are allocated a number 1, 2, 3, . . . etc. in a
hierarchical order. This order is normally looped or closed so that
whilst machines 2 and 3 are hierarchically adjacent, so too are
machines "n" and 1. There is preferably a further machine X which
is provided to enable various housekeeping functions to be carried
out, such as acting as a lock server. In particular, the further
machine X can be a low value machine, and much less expensive than
the other machines which can have desirable attributes such as
processor speed. Furthermore, an additional low value machine (X+1)
is preferably available to provide redundancy in case machine X
should fail. Where two such server machines X and X+1 are provided,
they are preferably, for reasons of simplicity, operated as dual
machines in a cluster configuration. Machines X and X+1 could be
operated as a multiple computer system in accordance with the
present invention, if desired. However this would result in
generally undesirable complexity. If the machine X is not provided
then its functions, such as housekeeping functions, are provided by
one, or some, or all of the other machines.
[0090] FIG. 5A is a schematic diagram of a replicated shared memory
system. In FIG. 5A three machines are shown of a total of "n"
machines (n being an integer greater than one), that is, machines
M1, M2, . . . Mn. Additionally, a communications network 53 is
shown interconnecting the three machines and a preferable (but
optional) server machine X which can also be provided and which is
indicated by broken lines. In each of the individual machines,
there exists a memory 102 and a CPU 103. In each memory 102 there
exist three memory locations, a memory location A, a memory
location B, and a memory location C. Each of these three memory
locations is replicated in a memory 102 of each machine.
[0091] This arrangement of the replicated shared memory system
allows a single application program written for, and intended to be
run on, a single machine, to be substantially simultaneously
executed on a plurality of machines, each with independent local
memories, accessible only by the corresponding portion of the
application program executing on that machine, and interconnected
via the network 53. In International Patent Application No
PCT/AU2005/001641 (WO2006/110,937) (Attorney Ref 5027F-D1-WO) to
which U.S. patent application Ser. No. 11/259,885 entitled:
"Computer Architecture Method of Operation for Multi-Computer
Distributed Processing and Co-ordinated Memory and Asset Handling"
corresponds, a technique is disclosed to detect modifications or
manipulations made to a replicated memory location, such as a write
to a replicated memory location A by machine M1 and correspondingly
propagate this changed value written by machine M1 to the other
machines M2 . . . Mn which each have a local replica of memory
location A. This result is achieved by the preferred embodiment of
detecting write instructions in the executable object code of the
application to be run that write to a replicated memory location,
such as memory location A, and modifying the executable object code
of the application program, at the point corresponding to each such
detected write operation, such that new instructions are inserted
to additionally record, mark, tag, or by some such other recording
means indicate that the value of the written memory location has
changed.
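Purely by way of illustration, the following JAVA sketch shows what
a fragment of the application code might look like after such
modification; the names Example, setA, DRT and markChanged are
illustrative assumptions and do not appear in the drawings or the
cross referenced specifications.

    // Hypothetical sketch of application code after modification:
    // an instruction is inserted after each detected write to a
    // replicated memory location to record that its value changed.
    public class Example {
        static int a; // replica of application memory location "A"

        static void setA(int newValue) {
            a = newValue;                   // original write operation
            DRT.markChanged("A", newValue); // inserted recording step
        }
    }

    class DRT {
        static void markChanged(String name, int value) {
            // record, mark or tag the written location for later
            // propagation to the other machines (see below)
        }
    }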
[0092] An alternative arrangement is that illustrated in FIG. 5B
and termed partial or hybrid replicated shared memory (RSM). Here
memory location A is replicated on computers or machines M1 and M2,
memory location B is replicated on machines M1 and Mn, and memory
location C is replicated on machines M1, M2 and Mn. However, the
memory locations D and E are present only on machine M1, the memory
locations F and G are present only on machine M2, and the memory
locations Y and Z are present only on machine Mn. Such an
arrangement is disclosed in Australian Patent Application No. 2005
905 582 Attorney Ref 50271 (to which U.S. patent application Ser.
No. 11/583,958 (60/730,543) and PCT/AU2006/001447 (WO2007/041762)
correspond). In such partial or hybrid RSM systems, changes made
by one computer to memory locations which are not replicated on any
other computer do not need to be updated at all. Furthermore, a
change made by any one computer to a memory location which is only
replicated on some computers of the multiple computer system need
only be propagated or updated to those some computers (and not to
all other computers).
[0093] Consequently, for both RSM and partial RSM, a background
thread task or process is able to, at a later stage, propagate the
changed value to the other machines which also replicate the
written to memory location, such that subject to an update and
propagation delay, the memory contents of the written to memory
location on all of the machines on which a replica exists, are
substantially identical. Various other alternative embodiments are
also disclosed in the abovementioned prior art.
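A minimal JAVA sketch of such a background propagation thread
follows, assuming a queue filled by the inserted recording
instructions and a hypothetical send() method which transmits a
replica memory update via the network 53.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch: a background thread that, at a later
    // stage, propagates changed values to the replica holders.
    public class PropagationThread implements Runnable {
        private final BlockingQueue<String> tagged =
            new LinkedBlockingQueue<>();

        // called by the inserted recording instructions
        public void noteWrite(String locationName) {
            tagged.offer(locationName);
        }

        public void run() {
            try {
                while (true) {
                    String name = tagged.take(); // await a tagged write
                    send(name);                  // update replica holders
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // orderly shutdown
            }
        }

        private void send(String locationName) {
            // transmit the location's current value via network 53
        }
    }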
[0094] Turning now to FIG. 6, an example of the RSM multiple
computer system of FIG. 5 is illustrated with "n" being 5 so
that in this example there are five computers M1-M5. In FIG. 6,
application memory locations such as "A", "B", etc are replicated
in the independent local memory of each machine and are numbered
accordingly so that machine M1 has replica application memory
location/content/value A1 and the equivalent replica application
memory location/content/value on machine M2 is location A2, and so
on for the other machines and replicated application memory
locations/contents/values. Apart from minor delays in updating of
replicated application memory locations with updated content/data,
the contents or value of each of the replica application memory
locations/content/value A (e.g. A1, A2, etc.) is identical. This is
also true for application memory locations/contents/values B, C,
and so on.
[0095] In the event that the operation of machine M1 causes the
content or value of replicated application memory location/content
A1 to be changed/updated (e.g. written to by the application
program or application program code), the DRT of machine M1 causes
the new/changed contents or value of replica application memory
location/content "A1" to be transmitted from machine M1 via the
communications network 53 to another machine (which is preferably
the hierarchically adjacent machine M2). This communication is
indicated by transmission 601 in FIG. 6. Machine M2 receives this
information, updates its own corresponding replica application
memory location/content A2 and then has its DRT transmit the
new/changed contents or values to each of the other machines M3-M5
as transmission 602, or alternatively re-transmits the received
replica memory update transmission 601 as transmission 602 to
machines M3-M5.
[0096] Turning now to FIG. 6A, a modified example of FIG. 6 is
shown. Specifically indicated in FIG. 6A is an arrangement of
partially replicated application memory locations/contents/values,
where replicated application memory location/content/value "A" is
not replicated on all machines, but instead only machines M1, M2
and M5. Also indicated are partially replicated application memory
locations "B", "C", "L", "W", and "Z", as well as a fully
replicated application memory location "D" which is indicated to be
replicated on all machines M1 . . . M5. Specifically indicated is
replica memory update transmission 601A which corresponds to
replica memory update transmission 601 of FIG. 6. Also shown is
replica memory update transmission 602A which corresponds to
replica memory update transmission 602 of FIG. 6, however unlike
transmission 602 which was sent to all machines M3 . . . M5,
transmission 602A is only sent to those machines on which a
corresponding replica application memory location/content/value "A"
resides--that is, machine M5. Thus, as illustrated in FIG. 6A,
replica memory update transmissions sent by machine M2 (or more
generally, a paired machine) are preferably only sent to those
machines on which a corresponding replica memory
location/value/content resides. As a consequence of this preferred
arrangement, superfluous or unnecessary replica memory update
transmissions are not sent to machines on which corresponding
replica memory location(s)/content(s)/value(s) are not resident or
do not exist, thereby conserving bandwidth of the network 53.
[0097] In a similar fashion, as illustrated in FIG. 7, should the
execution of the application program carried out by machine M3
result in the content or value of replicated application memory
location/content "C" being amended (that is, replica application
memory location/content "C3"), then the new/changed value or
content is communicated by the DRT of machine M3 to machine M4 as
indicated by transmission 701 in FIG. 7. Machine M4 updates its
corresponding replica application memory location C4 and
communicates the change to the other machines M1, M2 and M5 on
which a corresponding replica memory location/content resides as
indicated by transmission 702 in FIG. 7.
[0098] In one embodiment the machines M1 . . . M5 in FIG. 6 and
FIG. 7 each use a "virtual memory page faults" procedure, or
similar, to
ensure that every time that a machine writes to a replicated
application memory location/content, the content or value of that
write operation (that is, the updated value of the written-to
replicated application memory location) is subsequently updated to
the hierarchical adjacent machine (M2 and M4 respectively) or other
paired machine. Alternatively, each machine M1 . . . M5 may use any
"tagging" (or similar "marking", "alerting") means or methods to
record or indicate that a write to one or more replicated
application memory locations/contents/values has taken place, and
that in due course, the identified replicated application memory
locations which have been recorded or identified as having been
written to, are to have their new value in turn propagated to all
other corresponding replica application memory
locations/contents/values on one or more other member machines of
the replicated shared memory arrangement or other operating
plurality of machines. One such tagging method is disclosed in the
International Patent Application Nos. PCT/AU2005/001641
(WO2006/110937) (Attorney Ref 5027F-D1-WO) to which U.S. patent
application Ser. No. 11/259,885 entitled: "Computer Architecture
Method of Operation for Multi-Computer Distributed Processing and
Co-ordinated Memory and Asset Handling" corresponds and
PCT/AU2006/000532 (WO2006/110957) (Attorney Ref 5027F-D2-WO).
Ultimately however, how the writes are detected is not important,
what is important is that they be detected and in due course the
memory contents or value is sent to the hierarchical adjacent
machine (or other paired machine).
[0099] Preferably, the replica memory update transmissions sent by
a first machine (such as machine M1) to a second machine (such as
machine M2) comprise an identifier and updated value of the
written-to replicated application memory location. International
Patent Application Nos. PCT/AU2005/001641 (WO2006/110937) (Attorney
Ref 5027F-D1-WO) to which U.S. patent application Ser. No.
11/259,885 entitled: "Computer Architecture Method of Operation for
Multi-Computer Distributed Processing and Co-ordinated Memory and
Asset Handling" corresponds and PCT/AU2006/000532 (WO2006/110957)
(Attorney Ref 5027F-D2-WO), disclose an arrangement of replica
memory update transmissions comprising replica memory
location/content identifiers and associated update values, and the
contents of each specification of the abovementioned prior
application(s) are hereby incorporated into the present
specification by cross reference for all purposes.
[0100] In a further preferred arrangement, the replica memory
update transmissions sent by a first machine (such as machine M1)
to a second machine (such as machine M2) further comprise at least
one "count value" and/or "resolution value" associated with one or
more replica memory location/content identifiers and associated
update values.
[0101] The data protocol or data format which is used to transmit
information between the various machines enables bundles or packets
of data to be transmitted or received out of the sequence in which
they were created. One way of doing this is to utilize the
contention detection, recognition and data format techniques
described in International Patent Application No. PCT/AU2007/ . . .
entitled "Advanced Contention Detection" (Attorney Reference
5027T-WO) lodged simultaneously herewith and claiming priority of
Australian Patent Application No. 2006 905 527, (and to which U.S.
Provisional Patent Application No. 60/850,711 corresponds). The
contents of the above specifications are hereby incorporated in the
present specification in full for all purposes.
[0102] Briefly stated, the abovementioned data protocol or message
format includes both the address of a memory location where a value
or content is to be changed, the new value or content, and a count
number indicative of the position of the new value or content in a
sequence of consecutively sent new values or content.
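Such a message may, for example, be sketched in JAVA as follows;
the class and field names and their types are illustrative
assumptions only.

    import java.io.Serializable;

    // Hypothetical sketch of the data format described above.
    public class ReplicaUpdateMessage implements Serializable {
        public final String locationId; // written-to location identity
        public final Object newValue;   // the new content or value
        public final long count;        // position in source's sequence

        public ReplicaUpdateMessage(String locationId,
                                    Object newValue, long count) {
            this.locationId = locationId;
            this.newValue = newValue;
            this.count = count;
        }
    }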
[0103] Thus a sequence of messages is issued from one or more
sources. Typically each source is one computer of a multiple
computer system and the messages are memory updating messages which
include a memory address and a (new or updated) memory content.
[0104] Thus each source issues a string or sequence of messages
which are arranged in a time sequence of initiation or
transmission. The problem arises that the communication network 53
cannot always guarantee that the messages will be received in their
order of transmission. Thus a message which is delayed may update a
specific memory location with an old or stale content which
inadvertently overwrites a fresh or current content.
[0105] In order to address this problem each source of messages
includes a count value in each message. The count value indicates
the position of each message in the sequence of messages issuing
from that source. Thus each new message from a source has a count
value incremented (preferably by one) relative to the preceding
messages. Thus the message recipient is able to both detect out of
order messages, and ignore any messages having a count value lower
than the last received message from that source. Thus earlier sent
but later received messages do not cause stale data to overwrite
current data.
[0106] As explained in the abovementioned cross referenced
specifications, later received packets which are later in sequence
than earlier received packets overwrite the content or value of the
earlier received packet with the content or value of the later
received packet. However, in the event that delays, latency and the
like within the network 53 result in a later received packet being
one which is earlier in sequence than an earlier received packet,
then the content or value of the earlier received packet is not
overwritten and the later received packet is effectively discarded.
Each receiving computer is able to determine where the latest
received packet is in the sequence because of the accompanying
count value. Thus if the later received packet has a count value
which is greater than the last received packet, then the current
content or value is overwritten with the newly received content or
value. Conversely, if the newly received packet has a count value
which is lower than the existing count value, then the received
packet is not used to overwrite the existing value or content. In
the event that the count values of both the existing packet and the
received packet are identical, then a contention is signalled and
this can be resolved.
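The receiving logic just described may be sketched in JAVA as
follows, re-using the illustrative ReplicaUpdateMessage above; the
maps and the resolveContention() stub are likewise assumptions.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of a receiver's count-value check.
    public class ReplicaReceiver {
        private final Map<String, Long> lastCount = new HashMap<>();
        private final Map<String, Object> memory = new HashMap<>();

        public void onReceive(ReplicaUpdateMessage m) {
            long last = lastCount.getOrDefault(m.locationId,
                                               Long.MIN_VALUE);
            if (m.count > last) {
                memory.put(m.locationId, m.newValue); // newer: apply
                lastCount.put(m.locationId, m.count);
            } else if (m.count < last) {
                // earlier-sent but later-received packet: discard
            } else {
                resolveContention(m); // equal count values: contention
            }
        }

        private void resolveContention(ReplicaUpdateMessage m) {
            // resolved as per the cross referenced specifications
        }
    }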
[0107] This resolution requires that where a machine is about to
propagate a new value for a memory location, and that machine is
the same machine which generated the previous value for the same
memory location, the count value for the newly generated memory
update is not increased by one (1) but instead is increased by
more than one, such as by two (2) (or by at least two). A fuller
explanation is contained in the
abovementioned cross referenced PCT specification.
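Purely as an illustrative sketch of this sender-side rule (the
names again being assumptions):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a machine about to propagate a new value
    // skips at least one count value when it also generated the
    // previous value for the same memory location.
    public class CountAllocator {
        private final Map<String, Long> lastCount = new HashMap<>();

        public long nextCount(String locationId,
                              boolean thisMachineWroteLast) {
            long last = lastCount.getOrDefault(locationId, 0L);
            long next = thisMachineWroteLast ? last + 2 : last + 1;
            lastCount.put(locationId, next);
            return next;
        }
    }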
[0108] Preferably also, the replica memory update transmissions
sent by a first machine (such as machine M1) to a second machine
(such as machine M2) further include a list or table of one or
more addresses or other identifiers or identifying means of one or
more other machine(s) to which the replica memory update
transmission is to be directed by the paired second machine (e.g.
machine M2). Preferably, such list of one or more addresses or
other identifiers or identifying means includes those machines on
which corresponding replica application memory
location(s)/content(s)/value(s) of the replica memory update
transmission reside, and excludes those machines in which no
corresponding replica application memory
location(s)/content(s)/value(s) of the replica memory update
transmission reside. Preferably then, the paired second machine
(e.g. machine M2) upon receipt of a replica memory update
transmission from its paired first machine (e.g machine M1),
utilises the associated list of one or more addresses or other
identifiers or identifying means of the received replica memory
update transmission to either forward the received transmission to
the machines identified by such list, or alternatively generate a
new corresponding replica memory update transmission to be sent to
the machines identified by such list.
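By way of illustration, the forwarding behaviour of the paired
second machine may be sketched in JAVA as follows, assuming the
received message is accompanied by such a list of addresses.

    import java.net.InetSocketAddress;
    import java.util.List;

    // Hypothetical sketch: the paired machine applies a received
    // update and forwards it only to replica-holding machines.
    public class PairedForwarder {
        public void onPairedUpdate(ReplicaUpdateMessage m,
                                   List<InetSocketAddress> holders) {
            applyLocally(m); // update this machine's own replica
            for (InetSocketAddress target : holders) {
                forward(m, target); // forward (or re-send) the update
            }
        }

        private void applyLocally(ReplicaUpdateMessage m) {
            // as sketched for ReplicaReceiver above
        }

        private void forward(ReplicaUpdateMessage m,
                             InetSocketAddress target) {
            // transmit via network 53 to the identified machine
        }
    }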
[0109] Each of the hierarchical adjacent machines M2, M4, etc. (or
other paired machines) has loaded on it the same application
program 50 (and preferably the same portion of the same application
program 50), and associated replicated application program memory
locations/contents/values (such as replicated application memory
location "A"), as its corresponding adjacent machines M1, M3, etc
(or other paired machines). Preferably however, this portion of the
application program stored on the hierarchical adjacent machines
M2, M4, etc. is not being executed but is merely available to
commence execution in the event of failure of the adjacent machine
M1, M3, etc.
[0110] In the event that the operation of machine M1 causes the
content or value of the replicated application memory
location/content/value A to be changed/updated (such as for
example, by the application program and/or application program code
writing/storing a new value of "99" to replica application memory
location "A"), the DRT of machine M1 causes the new contents or
value of replicated application memory location "A" (that is, the
updated value "99") to be transmitted in a replica memory update
transmission 601 from machine M1 via the communications network 53
to the machine M2 (or other paired machine). Preferably the replica
memory update transmission 601 comprises the identity (or other
identifier) of replicated application memory location "A", and the
associated updated value of replica application memory location "A"
(that is, the updated value "99"). Preferably additionally, the
replica memory update transmission 601 further comprises at least
one "count value" and/or "resolution value", and which is to be
associated with the updated value of replica memory location "A".
Machine M2 upon receipt of replica memory update transmission 601,
updates its own corresponding replica application memory
location/content/value A2 with the received updated value "99", and
then has its DRT transmit either the received replica update
transmission 601 (shown as replica update transmission 602), or
alternatively transmit a new replica memory update transmission
(comprising the identity and new content(s)/value(s), and
preferably an associated "count value" and/or "resolution value",
of replicated memory location A, of the received replica update
transmission 601) to each of the other machines M3 . . . M5. This
communication is indicated by broken arrows in FIG. 6. The updating
techniques and equipment are as described in the above-mentioned
cross-referenced applications and are preferably implemented by the
computer code disclosed therein.
[0111] Turning now to FIG. 7A, there is shown an arrangement of
partially replicated application memory locations/contents/values,
where replicated application memory location/content/value "A" is
not replicated on all machines, but instead only on machines M1,
M2 and M5. Also indicated are partially replicated application memory
locations "B", "C", "L", "W", and "Z", as well as a fully
replicated application memory location "D" which is indicated to be
replicated on all machines M1 . . . M5. Specifically indicated is
replica memory update transmission 701A from machine M3 to machine
M5 for an updated value of replicated application memory location
"L", and a corresponding replica memory update transmission 702A
from machine M5 to those machines on which a corresponding replica
application memory location/content/value "L" resides--that is,
machine M2. Thus, as illustrated in FIG. 7A, replica memory update
transmissions sent by machine M5 (or more generally, a paired
machine) are preferably only sent to those machines on which a
corresponding replica memory location/value/content resides. As a
consequence of this preferred arrangement, superfluous or
unnecessary replica memory update transmissions are not sent to
machines on which corresponding replica memory
location(s)/content(s)/value(s) are not resident or do not exist,
thereby conserving bandwidth of the network 53.
[0112] In addition, each of the hierarchical adjacent machines M2,
M4, etc. is preferably updated from time to time with advice that
the adjacent machine M1, M3, etc. in executing its portion of the
application program 50 has reached certain "milestone"
instructions.
[0113] In a simple embodiment of this "milestone" technique, from
time to time each of the adjacent machines M1, M3, etc. halts
execution of the application program code (that is, the executing
code and/or threads of application program 50), and for each thread
records the program counter and associated state data (such as for
example but not restricted to one or more of application's thread
invocation stack(s), register memory locations/contents/values, and
method frames). This information is then sent to the hierarchical
adjacent machines M2, M4, etc (or other paired machine), preferably
in a similar manner of transmission to that utilised by replica
memory update transmissions (such as for example replica memory
update transmissions 601 or 602). Then the machines M1, M3, etc.
resume execution. Alternatively, a spare thread can capture the
current status and associated state data of one or more executing
threads without halting such executing threads. This simple
embodiment may not work with all application programs but will work
with a substantial number or proportion of such application
programs. In a further embodiment, both "milestones" and replica
memory update transmissions are collected and/or sent at the same
time (i.e. at the time of the code execution halt, or the execution
halt is timed to coincide with one or more of the replica memory
update transmissions/messages of the machines M1, M3, etc.) so that
the machines M2, M4, etc. receive both together. Thus, "together"
means receiving both in either order at the same time or within a
small interval of time.
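The alternative of capturing state without halting the executing
threads may be sketched in JAVA as follows; Thread.getAllStackTraces()
is standard JAVA, whilst sendToPairedMachine() is an illustrative
assumption (a complete embodiment would also capture invocation
stacks and method frames as described above).

    import java.util.Map;

    // Hypothetical sketch: a spare thread samples the execution
    // state of all threads and sends it as "milestone" data to the
    // paired machine.
    public class MilestoneCapture implements Runnable {
        public void run() {
            Map<Thread, StackTraceElement[]> snapshot =
                Thread.getAllStackTraces(); // per-thread positions
            sendToPairedMachine(snapshot);
        }

        private void sendToPairedMachine(
                Map<Thread, StackTraceElement[]> snapshot) {
            // sent like a replica memory update (e.g. 601 or 602)
        }
    }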
[0114] In the event that, say, machine M5 should fail, then several
consequences flow. Firstly, replica memory update transmissions by
all other machines to the failed machine (e.g. machine M5) are
preferably discontinued, whilst replica memory update transmissions
by all other machines continue to be sent as normal to all
remaining machines (that is, excluding the failed machine M5).
Preferably, all other machines (e.g. machines M1-M4) are notified
of the failure of machine M5, and thereafter preferably do not send
replica memory update transmissions to the failed machine M5. Thus
each machine which is still operative is continually updated with
replica memory update transmissions by all other machines even
though no further replica memory update transmissions are sent to
failed machine M5, or alternatively replica memory update
transmissions/messages sent to failed machine M5 are of no effect.
Thus the execution carried out by the non-failed machines M1-M4 can
continue. Secondly and optionally, machine M1 (which is the
hierarchical adjacent machine (paired machine) to the failed
machine M5) is able to initiate execution of the portion of the
application program previously executed by machine M5 commencing at
the position of the last "milestone" state data received by machine
M1 from machine M5 prior to failure. In this connection machine M1
utilizes both the same application program code and the replicated
application memory locations/contents/values of machine M5 which
are available in machine M1 either in a disk store or some other
memory arrangement.
[0115] The above-mentioned failure is able to be detected by a
conventional detector attached to each of the application program
running machines and reporting to machine X, for example.
[0116] One such detector arrangement may be through the use of the
Simple Network Management Protocol (SNMP) of a switch
interconnecting each of the plural machines. This is essentially a
small program which operates in the background of the switch and
provides a specified output signal in the event that failure of a
communications link interconnecting a machine (such as a
disconnected network cable) is detected. Machine X may either then
"poll" the switch using the SNMP protocol to enquire about the
network connection status of each of the machines, or alternatively
receive a message or signal from the SNMP equipped switch informing
machine X when a link failure of an individual machine has occurred
(such as for example, a network cable being cut or
disconnected).
[0117] A second alternative detector arrangement to sense failure
of a machine is by machine X "polling" each machine directly at
regular intervals. For example, machine X can interrogate each of
the other machines M1, M2, . . . Mn in turn requesting a reply. If
no reply is forthcoming after a predetermined time, or after a
small number of "reminders" are sent, also without reply, the
non-responding machine is pronounced "dead"/"failed".
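Such direct polling by machine X may be sketched in JAVA as
follows, assuming each machine listens on a known TCP port; the
timeout and number of "reminders" are illustrative assumptions.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Hypothetical sketch: machine X interrogates a machine and,
    // after a small number of unanswered attempts, pronounces it
    // "dead"/"failed".
    public class MachinePoller {
        static boolean isAlive(String host, int port) {
            for (int attempt = 0; attempt < 3; attempt++) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 2000);
                    return true; // a reply was forthcoming
                } catch (IOException noReply) {
                    // no reply within the predetermined time; retry
                }
            }
            return false; // pronounced "dead"/"failed"
        }
    }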
[0118] Alternatively, or additionally, each of the machines M1, . .
. Mn can at regular intervals, say every 30 seconds, send a
predetermined message to machine X (or to all other machines in the
absence of a server) to say that all is well. In the absence of
such a message the machine can be presumed "dead"/"failed", or can
be interrogated and, if it then fails to respond, pronounced
"dead"/"failed".
[0119] Further methods include looking for a turn-on event in an
uninterruptible power supply (UPS) used to power each machine,
which therefore indicates a failure of mains power. Similarly,
conventional switches such as those manufactured by CISCO of
California, USA include a provision to check both the presence of
power to a communications network cable and whether the network
cable is disconnected.
[0120] In some circumstances, for example for enhanced redundancy
or for increased bandwidth, each individual machine can be
"multi-peered" which means there are two or more links between the
machine and the communications network 53. An SNMP product which
provides two options in this circumstance, namely to wait for
both/all links to fail before signalling machine failure, or to
signal machine
failure if any one link fails, is the 12 Port Gigabit Managed
Switch GSM 7212 sold under the trade marks NETGEAR and PROSAFE.
[0121] A disadvantage of the arrangement illustrated in FIG. 6 is
that there is considerable traffic on each of the interconnections
between the machines M1, M2 . . . M5 and the communications network
53 since, as indicated by the two arrows pointing in opposite
directions for machine M2, it is both receiving messages from
machine M1 and sending messages to all other machines. Restated,
the communications link or port of machine M2 both receives the
replica memory update transmissions of machine M1, and sends such
received transmissions to all other machines M3 . . . M5. As a
consequence, there is a requirement for considerable bandwidth in
the individual communication links interconnecting each machine to
the communication network 53.
[0122] In accordance with a preferred embodiment of the present
invention, better utilization of bandwidth is achieved where there
is a direct communications link between each single machine and
its "hierarchical adjacent machine" (or other paired machine), for
example machine M1 and M2 of FIG. 6. In the arrangement illustrated
in FIG. 6, in the event that machine M1 changes/updates the
contents or value of replicated application memory
location/content/value "A", then this information is transmitted
directly from machine M1 to M2 via such direct communications link.
As in the previous embodiment, machine M2 thereafter receives and
processes via such direct communications link the received replica
memory update transmission as described above for transmission 601
of FIG. 6. Thus, following receipt of such transmission, a second
transmission is sent via the communications network 53 (either
taking the form of the original received transmission, or
alternatively a new transmission generated by machine M2) of the
updated contents or value of replica application memory
location/content/value "A" received by machine M2 via the direct
communications link, and sent to each of the remaining machines M3
. . . M5 in accordance with the above description for replica
memory update transmission 602.
[0123] Such an alternative arrangement as this has one significant
advantage. The demands on bandwidth for the interconnections
between the mirroring machines of the second group and the
communications network 53 are reduced because replica memory update
transmissions from machine M1 to machine M2, and subsequently from
machine M2 to machines M3 . . . M5, both having the same updated
replica application memory contents/values of replicated memory
location "A", are not received and sent respectively on the same
communications link (and therefore, the same updated replica
application memory contents/values of replicated application memory
location "A" are not being sent twice (in opposite directions) on
the same communications link).
[0124] In this connection "direct" can include within its scope any
link which avoids the network 53, or specialised linkages through
the network 53. Additionally, such a "direct" connection can
further include any other arrangement (such as multiple links
between machines M1 . . . M5 and the network 53) in which a single
replica memory update transmission (and/or associated updated
content(s)/value(s)) of a first machine (such as machine M1) does
not traverse the same communications link of the corresponding
"hierarchical adjacent machine" (e.g. machine M1/2, or other paired
machine) more than once. As an example of the latter, if machines
M1 and M2 are each provided with a dual port connection to the
network 53, then one port of each dual port can provide the direct
connection.
[0125] The tasks which machine M5 was previously undertaking prior
to failure are now, because the "milestones" state data of machine
M5 is also available in machine M1, allocated to, and initiated by,
the hierarchically adjacent machine M1.
[0126] Naturally, under these circumstances, the computational load
on machine M1 (having assumed the computational load of machine M5
in addition to its own load) is very much greater than that of the
other machines and therefore it is desirable for there to be an
evening out, or re-distribution, of the computational loads amongst
the remaining machines. This evening out, levelling, or
re-distribution, of the computational load amongst the remaining
machines is however optional, and may depend on one or more of a
variety of factors, for example on the capabilities of the machine
and whether the machine may be able to handle the increased
computational burden.
[0127] Turning now to FIG. 8, a still further embodiment based upon
the architecture of FIG. 6 is illustrated. In this embodiment, the
application memory of each of the machines of the multiple computer
system is modified so that there is hybrid replicated shared
memory. That is to say, each of the machines includes two distinct
regions of application memory. One region is a replicated region
containing replicated application memory locations/contents such as
R1 and R2 each of which is replicated on each machine.
[0128] The other portion or region of the application memory of
each computer M1, M2, . . . Mn is a local application memory which
is partitioned into two compartments. The first compartment for
machine M1, for example, contains application memory locations such
as A, B and C which are used only by the portions of the
application program of machine M1 and thus are not replicated
throughout all other machines for use by the other portions of the
application program of the other machines. Instead, in order to
provide redundancy as in the arrangement described above in
connection with FIG. 3, a replica of application memory locations
A, B and C is stored in the other compartment of the hierarchically
adjacent machine (or other paired machine), which in this example
is machine M2.
[0129] Similarly, machine M2 has local application memory
locations/contents D, E and F which are stored in the first
compartment of machine M2's local application memory and replicated
in the second compartment of machine M3 (not illustrated).
[0130] Preferably the memory of the second compartments is stored
in some auxiliary memory such as a hard disk where it is available
but does not fetter machine M1's normal operation (such as, for
example, by consuming available local memory or application
memory); however this is not a requirement of this invention.
[0131] In the event of machine failure, for example failure of
machine M1, the replicated application memory locations/contents
such as R1 and R2 are already available on all other machines. The
independent memory of machine M1 (that is, the application memory
of the first compartment) is available on machine M2 and thus is
not lost by the failure of machine M1. The tasks which machine M1
was previously undertaking prior to failure are now, because the
"milestones" of machine M1 are also stored in machine M2, allocated
to, and initiated by, the hierarchically adjacent machine M2. The
machine M2 already has available to it replicas of the application
memory locations/contents A, B and C which are specific to the
computational tasks previously being carried out by machine M1 and
which are now to be carried out by machine M2. Machine Mn continues
its computational tasks and continues to have access to the
application memory locations it requires, namely memory locations
X, Y and Z; the fact that the replica of these application memory
locations has been lost with the failure of machine M1 is of no
consequence. Preferably
also, machine Mn would be notified of the failure of machine M1,
and thereafter discontinue updating transmissions of application
memory locations X, Y, and Z to machine M1.
[0132] Again, the computational load on machine M2 (having assumed
the computational load of machine M1 in addition to its own load)
is very much greater than that of the other machines and therefore
it is desirable for there to be an evening out or re-distribution
of the computational loads amongst the remaining machines. As in
the other embodiment, this evening out, levelling, or
re-distribution, of the computational loads amongst the remaining
machines is however optional, and may depend on one or more of a
variety of factors, for example on the capabilities of the machine
and whether the machine may be able to handle the increased
computational burden.
[0133] Turning now to FIG. 9, a further development of the
arrangement illustrated in FIG. 8 is illustrated in FIG. 9 in
respect of a multiple computer system having three machines or
computers M1, M2 and M3. It will be apparent that the invention is
not limited to any particular number of machines, so long as there
are a sufficient number of machines to provide the redundancy
described herein. As in FIG. 8, application memory locations R1 and
R2 are replicated application memory locations/contents on all
machines. Machine M1 has application memory locations A and B for
its use and a replica of these locations is stored on machine M2
in the form of locations A^1 and B^1 which are preferably
data-compressed versions of the contents of memory locations A and
B respectively. Similarly, machine M2 has application memory
locations C and D for its own use, and stored in the hierarchically
adjacent machine M3 are pointers or labels C^1 and D^1 to the
locations on the hard disk HD3 of computer M3 where the contents
or values of the application memory locations C and D are
replicated.
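The storage of such a data-compressed replica may be sketched
using the standard JAVA java.util.zip classes, assuming (for
illustration only) that the contents of a memory location are
available as a byte array.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    // Hypothetical sketch: machine M2 stores A^1, a compressed
    // version of the contents of machine M1's memory location A.
    public class CompressedReplica {
        static byte[] compress(byte[] contents) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(contents); // replica held in compressed form
            }
            return out.toByteArray();
        }
    }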
[0134] Again, in the event of failure of any one of the machines
M1, M2, and M3 then the content of the memory locations unique to
the failed machine can be reconstituted from the data stored on
machines which are operative.
[0135] Turning now to FIG. 10, in a further embodiment a multiple
computer system utilizing four machines M1-M4 is illustrated. Here
the machines which execute the application program 50 are the
machines M1-M3 and the additional machine M4 is provided for the
purposes of redundancy. The multiple computers M1-M3 operate under
a hybrid or partial RSM arrangement so that the independent
application memory of each machine M1-M3 is divided into two
portions. In the first such portion are located all those
application memory locations such as R1 and R2 which are replicated
on each machine M1-M3 (or at least two machines) and maintained up
to date by the replica memory update transmissions sent in due
course via the network 53.
[0136] In addition, each of the machines M1-M3 has a second portion
of its independent application memory in which are located those
application memory locations/contents such as A and B for machine
M1 that are only required for the execution of that portion of the
application program 50 being executed by machine M1. Similarly,
machines M2 and M3 only require access to application memory
locations C and D and to application memory locations E and F
respectively.
[0137] In order to provide redundancy, the further machine M4 is
provided. Machine M4 need not be identical to any one of the
machines M1-M3, nor need any one of the machines M1-M3 be identical
to any of the others, but clearly they can be if desired. Machine
M4 may or may not have replicated application memory
locations/contents/values R1 and R2. A copy of each of the
application memory locations A-F is provided on machine M4. In
addition changes made to the contents or value of any of the
application memory locations A-F are communicated by the machine
causing the change (i.e. one of machines M1-M3) to the redundancy
machine M4.
[0138] Furthermore, the redundancy machine M4 is provided with a
copy of the portion of the application program 50 as loaded onto,
and modified for use by, each of the machines M1-M3.
[0139] In addition, the redundancy machine M4 receives from time to
time the abovementioned "milestone" state data from each of the
application program executing machines M1-M3, which indicates the
progress to date of each of the machines M1-M3.
[0140] Thus, in the event that one (say M2) of the application
program executing machines M1-M3 should fail, then machine M4 is
able to initiate execution from the last "milestone" state data
reached by machine M2. For this activity, machine M4 utilizes the
copy of machine M2's application program as stored in machine M4,
and the contents or values of application memory locations/contents
C and D as stored by machine M4 and previously utilized by machine
M2. Finally, machine M4 in taking over the computational task
carried out by machine M2 can be expected to need to refer to the
content or value of the replicated application memory locations R1,
R2 etc. which, although not present in machine M4, can be read from
any one of the remaining application program executing machines
which has not failed (i.e. machines M1 and M3 in this example).
[0141] In the further embodiment illustrated in FIG. 11, the
machine M4 is as described above in relation to FIG. 10 save that
the machine M4 has a hard disk memory HD4 upon which are stored the
replica contents or values of the application memory locations A-F
of machines M1-M3. In machine M4 are stored pointers or labels
A^1-F^1 which point to the corresponding storage locations A-F in
the hard disk HD4.
[0142] The foregoing describes only some embodiments of the present
invention and modifications, obvious to those skilled in the art,
can be made thereto without departing from the scope of the present
invention. For example, reference to JAVA includes both the JAVA
language and also JAVA platform and architecture.
[0143] In all described instances of modification, where the
application code 50 is modified before, or during loading, or even
after loading but before execution of the unmodified application
code has commenced, it is to be understood that the modified
application code is loaded in place of, and executed in place of,
the unmodified application code subsequent to the modifications
being performed.
[0144] Alternatively, in the instances where modification takes
place after loading and after execution of the unmodified
application code has commenced, it is to be understood that the
unmodified application code may either be replaced with the
modified application code in whole, corresponding to the
modifications being performed, or alternatively, the unmodified
application code may be replaced in part or incrementally as the
modifications are performed incrementally on the executing
unmodified application code. Regardless of which such modification
routes are used, the modifications subsequent to being performed
execute in place of the unmodified application code.
[0145] It is advantageous to use a global identifier as a form
of `meta-name` or `meta-identity` for all the similar equivalent
local objects (or classes, or assets or resources or the like) on
each one of the plurality of machines M1, M2 . . . Mn. For example,
rather than having to keep track of each unique local name or
identity of each similar equivalent local object on each machine of
the plurality of similar equivalent objects, one may instead define
or use a global name corresponding to the plurality of similar
equivalent objects on each machine (e.g. "globalname7787"), and
with the understanding that each machine relates the global name to
a specific local name or object (e.g. "globalname7787" corresponds
to object "localobject456" on machine M1, and "globalname7787"
corresponds to object "localobject885" on machine M2, and
"globalname7787" corresponds to object "localobject111" on machine
M3, and so forth).
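Such a per-machine table may be sketched in JAVA as follows, using
the example names given above.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: each machine relates the global
    // `meta-name' to its own local object.
    public class GlobalNameTable {
        private final Map<String, Object> globalToLocal =
            new HashMap<>();

        void register(String globalName, Object localObject) {
            // e.g. on machine M1:
            //   register("globalname7787", localobject456)
            globalToLocal.put(globalName, localObject);
        }

        Object lookup(String globalName) {
            return globalToLocal.get(globalName);
        }
    }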
[0146] It will also be apparent to those skilled in the art in
light of the detailed description provided herein that in a table
or list or other data structure created by each DRT 71 when
initially recording or creating the list of all, or some subset of
all objects (e.g. memory locations or fields), for each such
recorded object on each machine M1, M2 . . . Mn there is a name or
identity which is common or similar on each of the machines M1, M2
. . . Mn. However, in the individual machines the local object
corresponding to a given name or identity will or may vary over
time since each machine may, and generally will, store memory
values or contents at different memory locations according to its
own internal processes. Thus the table, or list, or other data
structure in each of the DRTs will have, in general, different
local memory locations corresponding to a single memory name or
identity, but each global "memory name" or identity will have the
same "memory value or content" stored in the different local memory
locations. So for each global name there will be a family of
corresponding independent local memory locations with one family
member in each of the computers. Although the local memory name may
differ, the asset, object, location etc has essentially the same
content or value. So the family is coherent.
[0147] The term "table" or "tabulation" as used herein is intended
to embrace any list or organised data structure of whatever format
and within which data can be stored and read out in an ordered
fashion.
[0148] It will also be apparent to those skilled in the art in
light of the description provided herein that the abovementioned
modification of the application program code 50 during loading can
be accomplished in many ways or by a variety of means. These ways
or means include, but are not limited to at least the following
five ways and variations or combinations of these five, including
by: [0149] (i) re-compilation at loading, [0150] (ii) a
pre-compilation procedure prior to loading, [0151] (iii)
compilation prior to loading, [0152] (iv) "just-in-time"
compilation(s), or [0153] (v) re-compilation after loading (but,
for example, before execution of the relevant or corresponding
application code in a distributed environment).
[0154] Traditionally the term "compilation" implies a change in
code or language, for example, from source to object code or one
language to another. Clearly the use of the term "compilation" (and
its grammatical equivalents) in the present specification is not so
restricted and can also include or embrace modifications within the
same code or language.
[0155] Those skilled in the computer and/or programming arts will
be aware that when additional code or instructions is/are inserted
into an existing code or instruction set to modify same, the
existing code or instruction set may well require further
modification (such as for example, by re-numbering of sequential
instructions) so that offsets, branching, attributes, mark up and
the like are properly handled or catered for.
[0156] Similarly, in the JAVA language memory locations include,
for example, both fields and array types. The above description
deals with fields and the changes required for array types are
essentially the same mutatis mutandis. Also the present invention
is equally applicable to programming languages similar to JAVA
(including procedural, declarative and object oriented languages),
including the Microsoft.NET platform and architecture (Visual
Basic, Visual C/C++, and C#), FORTRAN, C/C++, COBOL, BASIC,
etc.
[0157] The terms object and class used herein are derived from the
JAVA environment and are intended to embrace similar terms derived
from different environments such as dynamically linked libraries
(DLL), or object code packages, or function unit or memory
locations.
[0158] Various implementations are described relative to the above
arrangements. Any one or each of these various means may be
implemented by computer program code statements or instructions
(possibly including by a plurality of computer program code
statements or instructions) that execute within computer logic
circuits, processors, ASICs, logic or electronic circuit hardware,
microprocessors, microcontrollers or other logic to modify the
operation of such logic or circuits to accomplish the recited
operation or function. In another implementation firmware is used
and in other implementations hardware. Furthermore, any one or each
of these various implementations may be a combination of computer
program software, firmware, and/or hardware.
[0159] Any and each of the abovedescribed methods, procedures,
and/or routines may advantageously be implemented as a computer
program and/or computer program product stored on any tangible
media or existing in electronic, signal, or digital form. Such a
computer program or computer program product comprises
instructions, separately and/or organized as modules, programs,
subroutines, or in any other way, for execution in processing logic
such as in a processor or microprocessor of a computer, computing
machine, or information appliance; the computer program or computer
program product modifies the operation of the computer in which it
executes, or of a computer coupled with, connected to, or otherwise
in signal communications with the computer on which the computer
program or computer program product is present or
executing. Such a computer program or computer program product
modifies the operation and architectural structure of the computer,
computing machine, and/or information appliance to alter the
technical operation of the computer and realize the technical
effects described herein.
[0160] The invention may therefore constitute a computer program
product comprising a set of program instructions stored in a
storage medium or existing electronically in any form and operable
to permit a plurality of computers to carry out any of the methods,
procedures, routines, or the like as described herein including in
any of the claims.
[0161] Furthermore, the invention includes (but is not limited to)
a plurality of computers, or a single computer adapted to interact
with a plurality of computers, interconnected via a communication
network or other communications link or path and each operable to
substantially simultaneously or concurrently execute the same or a
different portion of an application code written to operate on only
a single computer, on a corresponding different one of the
computers.
The computers are programmed to carry out any of the methods,
procedures, or routines described in the specification or set forth
in any of the claims, on being loaded with a computer program
product or upon subsequent instruction. Similarly, the invention
also includes within its scope a single computer arranged to
co-operate with like, or substantially similar, computers to form a
multiple computer system.
[0162] The term "distributed runtime system", "distributed
runtime", or "DRT" and such similar terms used herein are intended
to capture or include within their scope any application support
system (potentially of hardware, or firmware, or software, or
combination and potentially comprising code, or data, or operations
or combination) to facilitate, enable, and/or otherwise support the
operation of an application program written for a single machine
(e.g. written for a single logical shared-memory machine) to
instead operate on a multiple computer system with independent
local memories and operating in a replicated shared memory
arrangement. Such DRT or other "application support software" may
take many forms, including being either partially or completely
implemented in hardware, firmware, software, or various
combinations thereof.
[0163] The methods of this invention described herein are
preferably implemented in such an application support system, such
as DRT described in International Patent Application No.
PCT/AU2005/000580 published under WO 2005/103926 (and to which U.S.
patent application Ser. No. 11/111,946 Attorney Code 5027F-US
corresponds), however this is not a requirement of this invention.
Alternatively, an implementation of the methods may take the form
of a functional or effective application support system (such as a
DRT described in the abovementioned PCT specification) either in
isolation, or in combination with other software, hardware,
firmware, or other methods of any of the above incorporated
specifications, or combinations thereof.
[0164] The reader is directed to the abovementioned PCT
specification for a full description, explanation and examples of a
distributed runtime system (DRT) generally, and more specifically a
distributed runtime system for the modification of application
program code suitable for operation on a multiple computer system
with independent local memories functioning as a replicated shared
memory arrangement, and the subsequent operation of such modified
application program code on such multiple computer system with
independent local memories operating as a replicated shared memory
arrangement.
[0165] Also, the reader is directed to the abovementioned PCT
specification for further explanation, examples, and description of
various provided methods and means which may be used to modify
application program code during loading or at other times.
[0166] Also, the reader is directed to the abovementioned PCT
specification for further explanation, examples, and description of
various provided methods and means which may be used to modify
application program code suitable for operation on a multiple
computer system with independent local memories and operating as a
replicated shared memory arrangement.
[0167] Finally, the reader is directed to the abovementioned PCT
specification for further explanation, examples, and description of
various provided methods and means which may be used to operate
replicated memories of a replicated shared memory arrangement, such
as updating of replicated memories when one of such replicated
memories is written-to or modified.
[0168] In alternative multicomputer arrangements, such as
distributed shared memory arrangements and more general distributed
computing arrangements, the above described methods may still be
applicable, advantageous, and used. Specifically, any
multi-computer arrangement where replica, "replica-like",
duplicate, mirror, cached or copied memory locations exist, such as
any multiple computer arrangement where memory locations (singular
or plural), objects, classes, libraries, packages etc are resident
on a plurality of connected machines and preferably updated to
remain consistent, then the methods may apply. For example,
distributed computing arrangements of a plurality of machines (such
as distributed shared memory arrangements) with cached memory
locations resident on two or more machines and optionally updated
to remain consistent comprise a functional "replicated memory
system" with regard to such cached memory locations, and is to be
included within the scope of the present invention. Thus, it is to
be understood that the aforementioned methods apply to such
alternative multiple computer arrangements. The above disclosed
methods may be applied in such "functional replicated memory
systems" (such as distributed shared memory systems with caches)
mutatis mutandis.
[0169] It is also provided and envisaged that any of the described
functions or operations described as being performed by an optional
server machine X (or multiple optional server machines) may instead
be performed by any one or more than one of the other participating
machines of the plurality (such as machines M1, M2, M3 . . . Mn of
FIG. 1).
[0170] Alternatively or in combination, it is also further provided
and envisaged that any of the described functions or operations
described as being performed by an optional server machine X (or
multiple optional server machines) may instead be partially
performed by (for example broken up amongst) any one or more of the
other participating machines of the plurality, such that the
plurality of machines taken together accomplish the described
functions or operations described as being performed by an optional
machine X. For example, the described functions or operations
described as being performed by an optional server machine X may
be broken up amongst one or more of the participating machines of
the
plurality.
[0171] Further alternatively or in combination, it is also further
provided and envisaged that any of the described functions or
operations described as being performed by an optional server
machine X (or multiple optional server machines) may instead be
performed or accomplished by a combination of an optional server
machine X (or multiple optional server machines) and any one or
more of the other participating machines of the plurality (such as
machines M1, M2, M3 . . . Mn), such that the plurality of machines
and optional server machines taken together accomplish the
described functions or operations described as being performed by
an optional single machine X. For example, the functions or
operations described as being performed by an optional server
machine X may be broken up amongst an optional server machine X and
one or more of the participating machines of the plurality.
[0172] The terms "object" and "class" used herein are derived from
the JAVA environment and are intended to embrace similar terms
derived from different environments, such as modules, components,
packages, structs, libraries, and the like.
[0173] The terms "object" and "class" as used herein are also
intended to embrace any association of one or more memory
locations. Specifically, for example, the terms "object" and
"class" are intended to include within their scope any association
of plural memory locations, such as a related set of memory
locations (such as one or more memory locations comprising an array data
structure, one or more memory locations comprising a struct, one or
more memory locations comprising a related set of variables, or the
like).
[0174] Reference to JAVA in the above description and drawings
includes, together or independently, the JAVA language, the JAVA
platform, the JAVA architecture, and the JAVA virtual machine.
Additionally, the present invention is equally applicable mutatis
mutandis to other non-JAVA computer languages (possibly including
for example, but not limited to any one or more of, programming
languages, source-code languages, intermediate-code languages,
object-code languages, machine-code languages, assembly-code
languages, or any other code languages), machines (possibly
including for example, but not limited to any one or more of,
virtual machines, abstract machines, real machines, and the like),
computer architectures (possibly including for example, but not
limited to any one or more of, real computer/machine architectures,
or virtual computer/machine architectures, or abstract
computer/machine architectures, or microarchitectures, or
instruction set architectures, or the like), or platforms (possibly
including for example, but not limited to any one or more of,
computer/computing platforms, or operating systems, or programming
languages, or runtime libraries, or the like).
[0175] Examples of such programming languages include procedural
programming languages, or declarative programming languages, or
object-oriented programming languages. Further examples of such
programming languages include the Microsoft.NET language(s) (such
as Visual BASIC, Visual BASIC.NET, Visual C/C++, Visual C/C++.NET,
C#, C#.NET, etc.), FORTRAN, C/C++, Objective C, COBOL, BASIC, Ruby,
Python, etc.
[0176] Examples of such machines include the JAVA Virtual Machine,
the Microsoft.NET CLR, virtual machine monitors, hypervisors,
VMWare, Xen, and the like.
[0177] Examples of such computer architectures include, Intel
Corporation's x86 computer architecture and instruction set
architecture, Intel Corporation's NetBurst microarchitecture, Intel
Corporation's Core microarchitecture, Sun Microsystems' SPARC
computer architecture and instruction set architecture, Sun
Microsystems' UltraSPARC III microarchitecture, IBM Corporation's
POWER computer architecture and instruction set architecture, IBM
Corporation's POWER4/POWER5/POWER6 microarchitecture, and the
like.
[0178] Examples of such platforms include, Microsoft's Windows XP
operating system and software platform, Microsoft's Windows Vista
operating system and software platform, the Linux operating system
and software platform, Sun Microsystems' Solaris operating system
and software platform, IBM Corporation's AIX operating system and
software platform, Sun Microsystems' JAVA platform, Microsoft's
.NET platform, and the like.
[0179] When implemented in a non-JAVA language or application code
environment, the generalized platform, and/or virtual machine
and/or machine and/or runtime system is able to operate application
code 50 in the language(s) (including for example, but not limited
to any one or more of source-code languages, intermediate-code
languages, object-code languages, machine-code languages, and any
other code languages) of that platform, and/or virtual machine
and/or machine and/or runtime system environment, and utilize the
platform, and/or virtual machine and/or machine and/or runtime
system and/or language architecture irrespective of the machine
manufacturer and the internal details of the machine. It will also
be appreciated in light of the description provided herein that the
platform and/or runtime system may include virtual machine and
non-virtual machine software and/or firmware architectures, as well
as hardware and direct hardware coded applications and
implementations.
[0180] For a more general set of virtual machine or abstract
machine environments, and for current and future computers and/or
computing machines and/or information appliances or processing
systems that may not utilize or require utilization of either
classes and/or objects, the structure, method, and computer program
and computer program product are still applicable. Examples of
computers and/or computing machines that do not utilize either
classes and/or objects include for example, the x86 computer
architecture manufactured by Intel Corporation and others, the
SPARC computer architecture manufactured by Sun Microsystems, Inc
and others, the PowerPC computer architecture manufactured by
International Business Machines Corporation and others, and the
personal computer products made by Apple Computer, Inc., and
others. For these types of computers, computing machines,
information appliances, and the virtual machine or virtual
computing environments implemented thereon that do not utilize the
idea of classes or objects, the structures and methods described
herein may be generalized, for example, to include primitive data
types (such as integer data types, floating point data types, long
data types, double data types, string data types, character data
types and Boolean data types), structured data types (such as
arrays and records), derived types, or other code or data
structures of procedural languages or other languages and
environments such as functions, pointers, components, modules,
structures, references and unions.
[0181] In the JAVA language memory locations include, for example,
both fields and elements of array data structures. The above
description deals with fields, and the changes required for array
data structures are essentially the same mutatis mutandis.
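[0181a] Purely as an illustrative example, in the JAVA fragment
below the write to the field balance and the write to the array
element history[0] each modify a single replicated memory location,
and each write would be propagated to the other machines in the
same manner; the class and member names are hypothetical.

    class Account {
        int balance;                 // a field: one replicated memory location
        int[] history = new int[10]; // each element: one replicated memory location

        void deposit(int amount) {
            balance += amount;   // a field write, propagated to other machines
            history[0] = amount; // an array-element write, propagated identically
        }
    }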
[0182] Any and all embodiments of the present invention are
anticipated to be able to take numerous forms and implementations,
including in software implementations, hardware implementations,
silicon implementations, firmware implementations, or
software/hardware/silicon/firmware combination implementations.
[0183] Various methods and/or means are described relative to
embodiments of the present invention. In at least one embodiment of
the invention, any one or each of these various means may be
implemented by computer program code statements or instructions
(possibly including by a plurality of computer program code
statements or instructions) that execute within computer logic
circuits, processors, ASICs, microprocessors, microcontrollers, or
other logic to modify the operation of such logic or circuits to
accomplish the recited operation or function. In another
embodiment, any one or each of these various means may be
implemented in firmware and in other embodiments such may be
implemented in hardware. Furthermore, in at least one embodiment of
the invention, any one or each of these various means may be
implemented by a combination of computer program software,
firmware, and/or hardware.
[0184] Any and each of the aforedescribed methods, procedures,
and/or routines may advantageously be implemented as a computer
program and/or computer program product stored on any tangible
media or existing in electronic, signal, or digital form. Such a
computer program or computer program product comprises
instructions, separately and/or organized as modules, programs,
subroutines, or in any other way, for execution in processing logic
such as in a processor or microprocessor of a computer, computing
machine, or information appliance; the computer program or computer
program product modifies the operation of the computer on which it
executes, or of a computer coupled with, connected to, or otherwise
in signal communication with the computer on which the computer
program or computer program product is present or executing. Such a
computer program or computer program product modifies the operation
and architectural structure of the computer, computing machine,
and/or information appliance to alter the technical operation of
the computer and realize the technical effects described herein.
[0185] For ease of description, some or all of the memory locations
herein may be indicated or described as being replicated on each
machine (as shown in FIG. 5A); therefore, replica memory updates to
any of the replicated memory locations by one machine
will be transmitted/sent to all other machines. Importantly, the
methods and embodiments of this invention are not restricted to
wholly replicated memory arrangements, but are applicable to and
operable for partially replicated shared memory arrangements
mutatis mutandis (e.g. where one or more memory locations are only
replicated on a subset of a plurality of machines, such as shown in
FIG. 5B).
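[0185a] By way of illustration only, and continuing the
hypothetical ReplicatedMemory sketch above, a partially replicated
arrangement differs from a wholly replicated one only in which
machines appear in each location's list of replica holders; the
names below are again hypothetical.

    import java.util.List;
    import java.util.Map;

    class ReplicationConfiguration {
        // Hypothetical channel, as in the earlier sketch.
        interface Link { void send(String location, Object newValue); }

        // Configures the replica holders as seen from machine M1 of four
        // machines M1..M4 (m2, m3 and m4 are channels to the others).
        static void configure(Map<String, List<Link>> replicas,
                              Link m2, Link m3, Link m4) {
            // Wholly replicated location (as in FIG. 5A): every other
            // machine holds a replica, so an update is sent to all of them.
            replicas.put("X", List.of(m2, m3, m4));
            // Partially replicated location (as in FIG. 5B): only machine
            // M2 also holds a replica, so an update is sent to M2 alone.
            replicas.put("Y", List.of(m2));
        }
    }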
[0186] To summarize, there is disclosed a method of storing data in
a multiple computer system comprising a plurality of computers each
having a local memory and each being interconnected to the other
computers via a communications network, the method comprising the
steps of:
(i) partitioning the local memory of each computer into two
compartments, (ii) for each computer storing data created by, or
required for, the operation of the computer firstly in a
compartment in the computer, and secondly in a compartment of one
other computer, and (iii) updating changes in content or value in
the stored data at both the compartments, whereby in the event of
failure of only one of the computers the stored and updated data is
available in the remaining computers.
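[0186a] A minimal sketch of steps (i) to (iii) in the JAVA language
follows, assuming a hypothetical Link channel to the one other
computer; the two maps stand for the two compartments of step (i),
and the names used are illustrative only and not the invention as
claimed.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class TwoCompartmentMemory {
        // Hypothetical channel to the one other (partner) computer.
        interface Link { void send(String location, Object value); }

        // Step (i): the local memory is partitioned into two compartments.
        private final Map<String, Object> own = new ConcurrentHashMap<>();
        private final Map<String, Object> other = new ConcurrentHashMap<>();
        private final Link partner;

        TwoCompartmentMemory(Link partner) { this.partner = partner; }

        // Steps (ii) and (iii): data created by, or required for, this
        // computer is stored firstly here and secondly on the partner,
        // and every change is updated at both compartments.
        void write(String location, Object value) {
            own.put(location, value);
            partner.send(location, value);
        }

        // Invoked when the partner reports a change to its own data; the
        // mirror kept here is what survives a failure of the partner.
        void onPartnerUpdate(String location, Object value) {
            other.put(location, value);
        }
    }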
[0187] Preferably the method includes the further step of:
(iv) allocating a hierarchical order to the computers, and (v) for
each computer storing the data for that computer in one of the
local memory compartments and storing the data for the
hierarchically adjacent computer in the other compartment of the
local memory.
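[0187a] Steps (iv) and (v) may be pictured, purely illustratively,
as the computers forming a ring in the hierarchical order, each
computer mirroring its data to the hierarchically adjacent
computer; with zero-indexed machines, the hypothetical helper below
selects that partner.

    class HierarchicalRing {
        // Machine i mirrors its data to the hierarchically adjacent
        // machine (i + 1) modulo n: a ring M1 -> M2 -> ... -> Mn -> M1.
        static int partnerOf(int machineIndex, int n) {
            return (machineIndex + 1) % n;
        }
    }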
[0188] Preferably the method includes the step of:
(vi) making all the data stored on each computer accessible to all
other ones of the computers to thereby form a distributed shared
memory computer system.
[0189] Preferably the method includes the step of:
(vii) replicating some of the stored data and storing same on each
of the computers, but not replicating all of the stored data to
thereby form a partially replicated stored memory computer system.
[0190] Preferably the replicated stored memory of each computer is
substantially the same.
[0191] Preferably the replicated stored memory is substantially
located in a single computer.
[0192] Preferably the method includes the further step of
transmitting changes made to a memory location of a first computer
to another computer for storage therein, and the other computer
transmitting the changes to the remaining computers.
[0193] Preferably the multiple computers are arranged in a
hierarchical order and the first computer and the other computer
are adjacent computers in the hierarchical order.
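[0193a] By way of a non-limiting sketch of the relayed updating of
the two preceding paragraphs: the first computer sends a change
only to the hierarchically adjacent computer, which stores the
change and then forwards it to the remaining computers, relieving
the first computer of the fan-out work; the names used are
hypothetical.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class RelayingComputer {
        // Hypothetical channel, as in the earlier sketches.
        interface Link { void send(String location, Object value); }

        // Mirror of the first (hierarchically adjacent) computer's data.
        private final Map<String, Object> mirror = new ConcurrentHashMap<>();
        // Channels to all remaining computers of the plurality.
        private final List<Link> remaining;

        RelayingComputer(List<Link> remaining) { this.remaining = remaining; }

        // Invoked on receipt of a change from the first computer: store
        // the change locally, then relay it to the remaining computers.
        void onChangeFromFirstComputer(String location, Object value) {
            mirror.put(location, value);
            for (Link peer : remaining) {
                peer.send(location, value);
            }
        }
    }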
[0194] Also disclosed is a multiple computer system comprising a
plurality of computers each having a local memory and each being
interconnected to the other computers via a communications network,
the local memory of each computer being partitioned into two
compartments, the system including data storage allocation means to
allocate to each computer data created by, or required for, the
operation of that computer firstly in a compartment in that
computer, and secondly in a compartment of one other computer, and
data updating means to store changes in the content or value of the
stored data at both the compartments, whereby in the event of
failure of only one of the computers all the stored and updated
data is available in the remaining computers.
[0195] Preferably the computers are arranged in a hierarchical
order and each computer stores data for that computer in one of the
local memory compartments and stores data for the hierarchically
adjacent computer in the other compartment of the local memory.
[0196] Preferably all data stored on each computer is accessible to
all other ones of the computers whereby the system comprises a
distributed shared memory computer system.
[0197] Preferably some of the stored data is replicated and stored
on each of the computers, but not all of the stored data is
replicated whereby the system comprises a partially replicated
stored memory computer system.
[0198] Preferably the replicated stored memory of each computer is
substantially the same.
[0199] Preferably the replicated stored memory is substantially
located in a single computer.
[0200] Preferably changes made to a memory location of a first
computer are transmitted to another computer for storage therein,
and the other computer transmits the changes to the remaining
computers.
[0201] Preferably the multiple computers are arranged in a
hierarchical order and the first computer and the other computer
are adjacent computers in the hierarchical order.
[0202] Further there is disclosed a single computer adapted to
operate in a multiple computer system comprising a plurality of
computers each having a local memory and each being interconnected
to the other computers via a communications network, the single
computer having a local memory which is partitioned into two
compartments, a communications port for connection with the
communications network, a data updating means connected with the
communications port to receive data from, or send data to, the
communications port, and a data storage allocation means to store
in a first of the compartments first data created by, or required
for, the operation of the computer, to send the first data to the
communications port for storage in another computer, and to receive
from the communications port second data created by, or required
for, the operation of another computer whereby in the event of
failure of the another computer the data required for the single
computer to take over the computational tasks of the another
computer is present in the single computer.
[0203] Preferably the multiple computer system has a hierarchical
order allocated to the computers thereof, and the another computer
comprises the hierarchically adjacent computer.
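[0203a] Purely as an illustrative sketch of the take-over just
described: because the single computer already holds, in its second
compartment, the data of the hierarchically adjacent computer,
detection of that computer's failure allows the single computer to
resume the failed computer's computational tasks from that mirror;
failure detection and task resumption are hypothetical stand-ins
below.

    import java.util.Map;

    class FailoverHandler {
        // The second compartment: a mirror of the adjacent computer's data.
        private final Map<String, Object> otherCompartment;

        FailoverHandler(Map<String, Object> otherCompartment) {
            this.otherCompartment = otherCompartment;
        }

        // Invoked when the hierarchically adjacent computer is detected
        // to have failed (the detection mechanism is not shown).
        void onAdjacentComputerFailed() {
            resumeTasksUsing(otherCompartment); // hypothetical, application-specific
        }

        private void resumeTasksUsing(Map<String, Object> data) {
            // Application-specific resumption of the failed computer's
            // computational tasks, operating on the mirrored data.
        }
    }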
[0204] Furthermore there is disclosed a method of storing data in a
multiple computer system comprising a plurality of computers each
having an independent local memory and each being interconnected to
the other computers via a communications network, each of the
computers executing a portion of a same application program written
to be operated on a single machine, and at least one application
memory location/content replicated in each of the independent local
memories, the method comprising the steps of:
(i) partitioning the local application memory of each computer into
two compartments, (ii) for the executing portion of the application
program of each computer storing data created by, or required for,
the operation of the portion by the computer firstly in a
compartment in the computer, and secondly in a compartment of one
other computer, and (iii) updating changes in content or value of
application memory locations in the stored data at both the
compartments, whereby in the event of failure of only one of the
computers the stored and updated data is available in the remaining
computers.
[0205] Further there is disclosed a multiple computer system
comprising a plurality of computers each having an independent
local memory and executing a different portion of a same
application program, the independent local memories comprising at
least one application memory location/content replicated on each of
the independent memories, and where the application program is
written to operate on only a single computer, each computer
being interconnected to the other computers via a communications
network, the local application memory of each computer being
partitioned into two compartments, the system including data
storage allocation means to allocate to each computer data created
by, or required for, the operation of that computer firstly in a
compartment in that computer, and secondly in a compartment of one
other computer, and data updating means to store changes in the
content or value of the stored data at both the compartments,
whereby in the event of failure of only one of the computers all
the stored and updated data is available in the remaining
computers.
[0206] Further still there is disclosed a single computer adapted
to operate in a multiple computer system comprising a plurality of
computers each having an independent local memory and executing a
different portion of a same application program, the independent
local memories comprising at least one application memory
location/content replicated in each of the independent memories,
and where the application program is written to operate on only a
single computer, and each computer being interconnected to the
other computers via a communications network, the single computer
having a local application memory which is partitioned into two
compartments, a communications port for connection with the
communications network, a data updating means connected with the
communications port to receive data from, or send data to, the
communications port, and a data storage allocation means to store
in a first of the compartments first data created by, or required
for, the operation of the computer, to send the first data to the
communications port for storage in another computer, and to receive
from the communications port second data created by, or required
for, the operation of another computer whereby in the event of
failure of the another computer the data required for the single
computer to take over the computational tasks of the another
computer is present in the single computer.
[0207] The term "comprising" (and its grammatical variations) as
used herein is used in the inclusive sense of "having" or
"including" and not in the exclusive sense of "consisting only
of"
* * * * *