U.S. patent application number 11/188,309 was filed with the patent office on July 25, 2005, and published on February 2, 2006, under publication number 20060026407, for "Delegating tasks between multiple processor cores." This patent application is currently assigned to Texas Instruments Incorporated. The invention is credited to Gerard Chauvel.

Application Number: 20060026407 / 11/188,309
Family ID: 34931294
Filed: July 25, 2005

United States Patent Application 20060026407
Kind Code: A1
Chauvel; Gerard
February 2, 2006

Delegating tasks between multiple processor cores
Abstract
An electronic device comprising a first processor and a second
processor, the second processor coupled to the first processor and
adapted to receive an address from the first processor, to pause
execution of a first thread at a switch point, and to use the
address to retrieve and execute a group of instructions in a second
thread. Prior to executing the group of instructions in the second
thread, the second processor pushes onto a hardware-controlled
stack data pertaining to the switch point, the data comprising
information needed to resume execution of the first thread at the
switch point.
Inventors: Chauvel; Gerard (Antibes, FR)
Correspondence Address: Texas Instruments Incorporated, P.O. Box 655474, M/S 3999, Dallas, TX 75265, US
Assignee: Texas Instruments Incorporated (Dallas, TX)
Family ID: 34931294
Appl. No.: 11/188,309
Filed: July 25, 2005
Current U.S. Class: 712/228; 711/E12.067; 712/242
Current CPC Class: G06F 9/30174 (20130101); G06F 2212/6012 (20130101); G06F 9/45504 (20130101); Y02D 10/00 (20180101); Y02D 10/13 (20180101); G06F 12/1081 (20130101); G06F 12/0802 (20130101)
Class at Publication: 712/228; 712/242
International Class: G06F 9/00 20060101 G06F009/00

Foreign Application Data
Jul 27, 2004 (EP) 04291918.3
Claims
1. An electronic device, comprising: a first processor; and a
second processor coupled to the first processor and adapted to
receive an address from the first processor, to pause execution of
a first thread at a switch point, and to use said address to
retrieve and execute a group of instructions in a second thread;
wherein, prior to executing the group of instructions in the second
thread, the second processor pushes onto a hardware-controlled
stack data pertaining to the switch point, said data comprising
information needed to resume execution of the first thread at the
switch point.
2. The electronic device of claim 1, wherein the group of
instructions is pre-programmed into the second processor.
3. The electronic device of claim 1, wherein the data comprises
only a minimum amount of information needed to resume execution of
the first thread at the switch point.
4. The electronic device of claim 3, wherein said data comprises no
more than four registers.
5. The electronic device of claim 4, wherein three of said four
registers consist of two program counters and a status of the
second processor.
6. The electronic device of claim 3, wherein the minimum amount of
information comprises contents of only those registers used by the
group of instructions.
7. The electronic device of claim 1, wherein, prior to resuming
execution of the first thread at the switch point, the second
processor pops the data off of the stack and stores the data to
registers in the second processor.
8. The electronic device of claim 1, wherein the second processor
stores an address of a topmost stack entry to a memory prior to
executing the second thread.
9. The electronic device of claim 1, wherein the first processor
sends to the second processor a command with said address, and
wherein the command causes the second processor to retrieve and
execute the group of instructions.
10. The electronic device of claim 1, wherein the device is at
least one of a battery-operated device or a mobile communication
device.
11. A processor, comprising: decode logic adapted to receive from
another processor an address of a group of instructions; and fetch
logic coupled to the decode logic and adapted to fetch the group of
instructions from storage; wherein the decode logic pauses
processing of a first thread at a switch point and processes the
group of instructions in a separate thread; wherein, prior to
processing the group of instructions, the processor pushes onto a
hardware-controlled stack data pertaining to the switch point, said
data comprising contents of registers used by the group of
instructions.
12. The processor of claim 11, wherein the group of instructions is
pre-programmed into the processor.
13. The processor of claim 11, wherein the data comprises only
contents of no more than four registers.
14. The processor of claim 13, wherein three of said four registers
consist of two program counters and a status of the processor.
15. The processor of claim 11, wherein, prior to resuming execution
of the first thread at the switch point, the processor pops the
data off of the stack and stores the data to said registers in the
processor.
16. The processor of claim 11, wherein the processor stores an
address of a topmost stack entry to a memory prior to executing the
separate thread.
17. The processor of claim 11, wherein the processor receives a
command with said address, and wherein the command causes the
processor to retrieve and execute the group of instructions.
18. A method of delegating a task from a first processor to a
second processor, comprising: transferring an address of a group of
instructions from the first processor to the second processor;
pausing execution of a first thread in the second processor at a
switch point; pushing data onto a stack, said data comprising
contents of registers used by said group of instructions;
retrieving said group of instructions using the address; executing
said group of instructions in a second thread; and popping said
data off of the stack and storing the data to said registers in the
second processor.
19. The method of claim 18, wherein pushing data onto the stack
comprises pushing a maximum of four registers onto the stack.
20. The method of claim 18, wherein pushing data onto the stack
comprises pushing only contents of registers used by said group of
instructions.
21. The method of claim 20, wherein pushing only contents of
registers used by said group of instructions comprises pushing two
program counters and a status of the second processor onto the
stack.
22. The method of claim 18 further comprising transferring a
command from the first processor to the second processor, wherein
the command causes the second processor to retrieve and execute
said group of instructions.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to European Patent
Application No. 04291918.3, filed on Jul. 27, 2004 and incorporated
herein by reference. This application is related to co-pending and
commonly assigned applications Ser. No. ______ (Attorney Docket No.
TI-38581 (1962-22200)), entitled, "Emulating A Direct Memory Access
Controller," and Ser. No. ______ (Attorney Docket No. TI-38584
(1962-22500), entitled, "Interrupt Management In Dual Core
Processors," which are incorporated by reference herein.
BACKGROUND
[0002] Many systems comprise dual processor cores. One of these
processor cores is typically designated to be the "host," or main,
processor. The other processor may be termed a "secondary"
processor. While performing a series of tasks, the host processor
may determine that delegating one or more tasks to the secondary
processor would be expeditious, so that the host processor may
allocate its resources for performing other tasks. In such a case,
the host processor must program the secondary processor to perform
the task or tasks that are to be delegated. For example, if the
host processor delegates the execution of a particular algorithm to
the secondary processor, the host processor must program the
secondary processor to execute the algorithm. It is time-consuming
and energy-consuming for a host processor to have to program the
secondary processor.
BRIEF SUMMARY
[0003] Disclosed herein is a technique for delegating tasks between
multiple processor cores. An illustrative embodiment comprises an
electronic device comprising a first processor and a second
processor, the second processor coupled to the first processor and
adapted to receive an address from the first processor, to pause
execution of a first thread at a switch point, and to use the
address to retrieve and execute a group of instructions in a second
thread. Prior to executing the group of instructions in the second
thread, the second processor pushes onto a hardware-controlled
stack data pertaining to the switch point, the data comprising
information needed to resume execution of the first thread at the
switch point.
[0004] Another illustrative embodiment comprises a processor that
comprises decode logic adapted to receive from another processor an
address of a group of instructions. The processor also comprises
fetch logic coupled to the decode logic and adapted to fetch the
group of instructions from storage. The decode logic pauses
processing of a first thread at a switch point and processes the
group of instructions in a separate thread. Prior to processing the
group of instructions, the processor pushes onto a
hardware-controlled stack data pertaining to the switch point, the
data comprising contents of registers used by the group of
instructions.
[0005] Yet another illustrative embodiment comprises a method of
delegating a task from a first processor to a second processor. The
method comprises transferring an address of a group of instructions
from the first processor to the second processor, pausing execution
of a first thread in the second processor at a switch point,
pushing data onto a stack, the data comprising contents of
registers used by the group of instructions. The method further
comprises retrieving the group of instructions using the address,
executing the group of instructions in a second thread, and popping
the data off of the stack and storing the data to the registers in
the second processor.
NOTATION AND NOMENCLATURE
[0006] Certain terms are used throughout the following description
and claims to refer to particular system components. As one skilled
in the art will appreciate, companies may refer to a component by
different names. This document does not intend to distinguish
between components that differ in name but not function. In the
following discussion and in the claims, the terms "including" and
"comprising" are used in an open-ended fashion, and thus should be
interpreted to mean "including, but not limited to . . . ". Also,
the term "couple" or "couples" is intended to mean either an
indirect or direct connection. Thus, if a first device couples to a
second device, that connection may be through a direct connection,
or through an indirect connection via other devices and
connections.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] For a more detailed description of the preferred embodiments
of the present invention, reference will now be made to the
accompanying drawings, wherein:
[0008] FIG. 1 shows a diagram of a system in accordance with
preferred embodiments of the invention and including a Java Stack
Machine ("JSM") and a Main Processor Unit ("MPU"), in accordance
with embodiments of the invention;
[0009] FIG. 2 shows a block diagram of the JSM of FIG. 1 in
accordance with preferred embodiments of the invention;
[0010] FIG. 3 shows various registers used in the JSM of FIGS. 1
and 2, in accordance with embodiments of the invention;
[0011] FIG. 4 shows the preferred operation of the JSM to include
"micro-sequences," in accordance with embodiments of the
invention;
[0012] FIG. 5 shows an illustrative switching process between two
execution threads, in accordance with a preferred embodiment of the
invention;
[0013] FIG. 6 shows an illustrative 32-bit instruction that may be
incorporated into a micro-sequence, in accordance with a preferred
embodiment of the invention;
[0014] FIG. 7 shows a flow diagram of the switching process of FIG.
5, in accordance with embodiments of the invention;
[0015] FIG. 8 shows a flow diagram describing a delegation
technique in accordance with a preferred embodiment of the
invention; and
[0016] FIG. 9 shows the system described herein, in accordance with
preferred embodiments of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0017] The following discussion is directed to various embodiments
of the invention. Although one or more of these embodiments may be
preferred, the embodiments disclosed should not be interpreted, or
otherwise used, as limiting the scope of the disclosure, including
the claims, unless otherwise specified. In addition, one skilled in
the art will understand that the following description has broad
application, and the discussion of any embodiment is meant only to
be exemplary of that embodiment, and not intended to intimate that
the scope of the disclosure, including the claims, is limited to
that embodiment.
[0018] Described herein is a technique by which a host processor
may delegate a task to a secondary processor by simply sending a
command and an address to the secondary processor. The command
causes the secondary processor to use the address to locate and
retrieve a group of instructions that has been pre-programmed into
the secondary processor. Executing this group of instructions
causes the secondary processor to perform whatever task the host
processor delegated to the secondary processor. However, before the
secondary processor executes the group of instructions, it must
first stop what it is doing in a currently executing thread and
must further "bookmark" its place in the currently executing
thread. By bookmarking its place in the currently executing thread,
the secondary processor can execute the group of instructions and
then resume executing in the thread at the bookmarked location.
Accordingly, a technique for bookmarking a spot in a thread and a
technique for delegating tasks from the host processor to the
secondary processor are now discussed in turn.
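The delegation flow described above can be illustrated with a brief simulation sketch. All names here (SecondaryCore, delegate, and so forth) are hypothetical and chosen only for illustration; the application describes hardware behavior, not a software API.

```python
# Illustrative simulation of delegating a task by sending only a
# command and an address; all class and method names are hypothetical.

class SecondaryCore:
    def __init__(self, preprogrammed):
        # Groups of instructions pre-programmed into the secondary
        # processor, keyed by their starting address.
        self.preprogrammed = preprogrammed
        self.bookmark = None          # saved switch-point context
        self.current_thread_pc = 0    # position in the currently executing thread
        self.log = []

    def delegate(self, address):
        """The host sends a command plus an address; no programming step."""
        # Bookmark the switch point before leaving the current thread.
        self.bookmark = self.current_thread_pc
        # Use the address to locate and run the pre-programmed group.
        for instr in self.preprogrammed[address]:
            self.log.append(instr)
        # Resume the original thread at the bookmarked switch point.
        self.current_thread_pc = self.bookmark
        self.bookmark = None

core = SecondaryCore({0x100: ["taskA_step1", "taskA_step2"]})
core.current_thread_pc = 42
core.delegate(0x100)
assert core.log == ["taskA_step1", "taskA_step2"]
assert core.current_thread_pc == 42   # execution resumes at the switch point
```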
[0019] In the context of software code, a "thread" may be defined
as a single stream of code execution. While executing a software
program, a processor may switch from a first thread to a second
thread in order to complete a particular task. For example, the
first thread may comprise some stimulus (i.e., instruction) that,
when executed by the processor, causes the processor to halt
execution of the first thread and to begin execution of the second
thread. The second thread may comprise the performance of some task
by a different portion of the software program.
[0020] The point in the first thread at which the switch is made
may be termed the "switch point." When switching from the first
thread to the second thread, the processor first "bookmarks" the
switch point, so that when the processor has finished executing the
second thread of code, it can resume execution in the first thread
at the switch point.
[0021] In order to bookmark the switch point, the processor stores
all information that pertains to the switch point (known as the
"context" of the switch point). Such information includes all
registers, the program counter, pointer to the stack, etc. The
processor copies such information to memory and retrieves the
information later to resume execution in the first thread at the
switch point. Bookmarking the switch point is time-consuming and
consumes power which may be in limited supply in, for example, a
battery-operated device such as a mobile phone.
[0022] Processors that store to memory all information pertaining
to the switch point spend time and power unnecessarily in doing so.
Whereas such processors store all registers, the program counter,
the stack pointer, and so forth, the subject matter described
herein is achieved at least in part by the realization that, in
many cases, less than all of this information need be stored. For
example, in some cases only three values need be saved to
sufficiently bookmark the switch point: the program counter (PC), a
second program counter called the micro-program counter (μPC),
discussed below, and a status register. Once the processor has
finished executing the second thread, these three values provide
sufficient information for the processor to find the switch point
in the first thread and resume execution there.
[0023] Accordingly, described herein is a programmable electronic
device, such as a processor, that is able to bookmark a switch
point using a minimal amount of information pertaining to the
switch point. A "minimal" amount of information generally comprises
information in one or more registers, but not all registers, of a
processor core. For example, in some embodiments, a "minimal"
amount of information comprises a PC register, a μPC register
and a status register. In other embodiments, a "minimal" amount of
information comprises the PC register, the μPC register and the
status register, as well as one or more additional registers, but
less than all registers. In still other embodiments, a "minimal"
amount of information comprises less than all registers. In yet
other embodiments, a "minimal" amount of information consists of
only the information (i.e., registers) necessary to bookmark a
switch point, where the amount of information (i.e., number of
registers) varies depending on the processor used and/or the
software application being processed. In such cases, the "minimal"
amount of information may simply be one register or may be all of
the registers in the processor core. Instead of storing all switch
point information to memory, the processor described herein pushes
a minimal amount of switch point information onto a processor
stack. Later, when the processor needs the switch point
information, it pops the information off of the stack and uses the
information to resume execution at the switch point. In this way,
the time and power demands placed on the processor are reduced or
even minimized, resulting in increased performance.
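The minimal context store described above can be sketched as follows. The register names (PC, μPC, status) follow the description; the push/pop discipline shown is an assumption for illustration.

```python
# Sketch of the minimal context store: instead of writing the full
# register file to memory, only the registers needed to resume the
# first thread (here the PC, the micro-PC, and the status register)
# are pushed onto a stack. Hypothetical names, for illustration only.

def minimal_context_push(stack, regs):
    for name in ("PC", "uPC", "STATUS"):   # three values suffice here
        stack.append((name, regs[name]))

def minimal_context_pop(stack, regs):
    for _ in range(3):
        name, value = stack.pop()
        regs[name] = value

regs = {"PC": 0x2000, "uPC": 0x0, "STATUS": 0b01, "R0": 7}
stack = []
minimal_context_push(stack, regs)

# The second thread may freely clobber the saved registers...
regs.update(PC=0x8000, uPC=0x8100, STATUS=0b11)

# ...and popping the stack restores exactly the saved context.
minimal_context_pop(stack, regs)
assert regs["PC"] == 0x2000 and regs["uPC"] == 0x0 and regs["STATUS"] == 0b01
assert regs["R0"] == 7   # untouched registers were never saved or restored
```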
[0024] Some situations, however, require more than a minimal amount
of information to be stored because a minimal amount is not
sufficient to properly bookmark the switch point. Accordingly, the
disclosed processor is
capable of bookmarking a switch point using a minimal amount of
information ("minimal context store") needed to resume execution at
the switch point. The processor also is capable of bookmarking a
switch point using more than a minimal amount of information ("full
context store"), as described further below.
[0025] The processor described herein is particularly suited for
executing Java™ Bytecodes or comparable code. As is well known,
Java is particularly suited for embedded applications. Java is a
stack-based language, meaning that a processor stack is heavily
used when executing various instructions (e.g., Bytecodes), which
instructions generally have a size of 8 bits. Java is a relatively
"dense" language, meaning that on average each instruction may
perform a large number of functions compared to instructions in
various other languages. The dense nature of Java is of particular benefit for
portable, battery-operated devices that preferably include as
little memory as possible to save space and power. The reason,
however, for executing Java code is not material to this disclosure
or the claims which follow. Further, the processor advantageously
includes one or more features that permit the execution of the Java
code to be accelerated.
[0026] Referring now to FIG. 1, a system 100 is shown in accordance
with a preferred embodiment of the invention. As shown, the system
includes at least two processors 102 and 104. Processor 102 is
referred to for purposes of this disclosure as a Java Stack Machine
("JSM") and processor 104 may be referred to as a Main Processor
Unit ("MPU"). System 100 may also include memory 106 coupled to
both the JSM 102 and MPU 104 and thus accessible by both
processors. At least a portion of the memory 106 may be shared by
both processors meaning that both processors may access the same
shared memory locations. Further, if desired, a portion of the
memory 106 may be designated as private to one processor or the
other. System 100 also includes a Java Virtual Machine ("JVM") 108,
compiler 110, and a display 114. The MPU 104 preferably includes an
interface to one or more input/output ("I/O") devices such as a
keypad to permit a user to control various aspects of the system
100. In addition, data streams may be received from the I/O space
into the JSM 102 to be processed by the JSM 102. Other components
(not specifically shown) may be included as desired for various
applications.
[0027] As is generally well known, Java code comprises a plurality
of "Bytecodes" 112. Bytecodes 112 may be provided to the JVM 108,
compiled by compiler 110 and provided to the JSM 102 and/or MPU 104
for execution therein. In accordance with a preferred embodiment of
the invention, the JSM 102 may execute at least some, and generally
most, of the Java Bytecodes. When appropriate, however, the JSM 102
may request the MPU 104 to execute one or more Java Bytecodes not
executed or executable by the JSM 102. In addition to executing
Java Bytecodes, the MPU 104 also may execute non-Java instructions.
The MPU 104 also hosts an operating system ("O/S") (not
specifically shown) which performs various functions including
system memory management, the system task management that schedules
the JVM 108 and most or all other native tasks running on the
system, management of the display 114, receiving input from input
devices, etc. Without limitation, Java code may be used to perform
any one of a variety of applications including multimedia, games or
web based applications in the system 100, while non-Java code,
which may comprise the O/S and other native applications, may still
run on the system on the MPU 104.
[0028] The JVM 108 generally comprises a combination of software
and hardware. The software may include the compiler 110 and the
hardware may include the JSM 102. The JVM may include a class
loader, Bytecode verifier, garbage collector, and a Bytecode
interpreter loop to interpret the Bytecodes that are not executed
on the JSM processor 102.
[0029] In accordance with preferred embodiments of the invention,
the JSM 102 may execute at least two types of instruction sets. One
type of instruction set may comprise standard Java Bytecodes. As is
well-known, Java is a stack-based programming language in which
instructions generally target a stack. For example, an integer add
("IADD") Java instruction pops two integers off the top of the
stack, adds them together, and pushes the sum back on the stack. A
"simple" Bytecode instruction is generally one in which the JSM 102
may perform an immediate operation either in a single cycle (e.g.,
an "iadd" instruction) or in several cycles (e.g., "dup2_x2"). A
"complex" Bytecode instruction is one in which several memory
accesses may be required to be made within the JVM data structure
for various verifications (e.g., NULL pointer, array boundaries).
As will be described in further detail below, one or more of the
complex Bytecodes may be replaced by a "micro-sequence" comprising
various other instructions.
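The stack-based semantics of the IADD Bytecode mentioned above can be simulated in a few lines. The list-based stack here is only an illustrative stand-in for the processor's operand stack.

```python
# Simulation of the stack-based IADD Bytecode described above: two
# integers are popped off the top of the stack, added together, and
# the sum is pushed back on the stack.

def iadd(stack):
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)

stack = [10, 3, 4]
iadd(stack)
assert stack == [10, 7]   # top two entries replaced by their sum
```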
[0030] Another type of instruction set executed by the JSM 102 may
include instructions other than standard Java instructions. In
accordance with at least some embodiments of the invention, the
other instruction set may include register-based and memory-based
operations to be performed. This other type of instruction set
generally complements the Java instruction set and, accordingly,
may be referred to as a complementary instruction set architecture
("C-ISA"). By complementary, it is meant that a complex Java
Bytecode may be replaced by a "micro-sequence" comprising C-ISA
instructions. The execution of Java may be made more efficient and
run faster by replacing some sequences of Bytecodes by preferably
shorter and more efficient sequences of C-ISA instructions. The two
sets of instructions may be used in a complementary fashion to
obtain satisfactory code density and efficiency. As such, the JSM
102 generally comprises a stack-based architecture for efficient
and accelerated execution of Java Bytecodes combined with a
register-based architecture for executing register and memory based
C-ISA instructions. Both architectures preferably are tightly
combined and integrated through the C-ISA. Because various of the
data structures described herein are generally JVM-dependent and
thus may change from one JVM implementation to another, the
software flexibility of the micro-sequence provides a mechanism for
various JVM optimizations now known or later developed.
[0031] FIG. 2 shows an exemplary block diagram of the JSM 102. As
shown, the JSM includes a core 120 coupled to data storage 122 and
instruction storage 130. The core may include one or more
components as shown. Such components preferably include a plurality
of registers 140, three address generation units ("AGUs") 142, 147,
micro-translation lookaside buffers (micro-TLBs) 144, 156, a
multi-entry micro-stack 146, an arithmetic logic unit ("ALU") 148,
a multiplier 150, decode logic 152, and instruction fetch logic
154. In general, operands may be retrieved from data storage 122 or
from the micro-stack 146 and processed by the ALU 148, while
instructions may be fetched from instruction storage 130 by fetch
logic 154 and decoded by decode logic 152. The address generation
unit 142 may be used to calculate addresses based, at least in
part, on data contained in the registers 140. The AGUs 142 may
calculate addresses for C-ISA instructions. The AGUs 142 may
support parallel data accesses for C-ISA instructions that perform
array or other types of processing. The AGU 147 couples to the
micro-stack 146 and may manage overflow and underflow conditions in
the micro-stack preferably in parallel. The micro-TLBs 144, 156
generally perform the function of a cache for the address
translation and memory protection information bits that are
preferably under the control of the operating system running on the
MPU 104. The decode logic 152 comprises auxiliary registers
151.
[0032] Referring now to FIG. 3, the registers 140 may include 16
registers designated as R0-R15. In some embodiments, registers
R0-R5 and R8-R14 may be used as general purposes ("GP") registers
usable for any purpose by the programmer. Other registers, and some
of the GP registers, may be used for specific functions. For
example, in addition to use as a GP register, register R5 may be
used to store the base address of a portion of memory in which Java
local variables may be stored when used by the current Java method.
The top of the micro-stack 146 can be referenced by the values in
registers R6 and R7. The top of the micro-stack 146 has a matching
address in external memory pointed to by register R6. The values
contained in the micro-stack 146 are the latest updated values,
while their corresponding values in external memory may or may not
be up to date. Register R7 provides the data value stored at the
top of the micro-stack 146. Register R15 may be used for status and
control of the JSM 102. At least one bit (called the
"Micro-Sequence-Active" bit) in status register R15 is used to
indicate whether the JSM 102 is executing a simple instruction or a
complex instruction through a micro-sequence. This bit controls, in
particular, which program counter is used (the PC or the μPC) to
fetch the next instruction, as will be explained below.
[0033] Referring again to FIG. 2, as noted above, the JSM 102 is
adapted to process and execute instructions from at least two
instruction sets, at least one having instructions from a
stack-based instruction set (e.g., Java). The stack-based
instruction set may include Java Bytecodes. Java Bytecodes may push
data onto the micro-stack 146 and, unless the micro-stack is empty,
pop data from it.
The micro-stack 146 preferably comprises the top n entries of a
larger stack that is implemented in data storage 122. Although the
value of n may vary in different embodiments, in accordance with at
least some embodiments, the size n of the micro-stack may be the
top eight entries in the larger, memory-based stack. The
micro-stack 146 preferably comprises a plurality of gates in the
core 120 of the JSM 102. By implementing the micro-stack 146 in
gates (e.g., registers) in the core 120 of the processor 102,
access to the data contained in the micro-stack 146 is generally
very fast, although any particular access speed is not a limitation
on this disclosure.
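The relationship between the micro-stack and the larger memory-based stack can be sketched as follows. The spill and refill policy shown (oldest entry spilled on overflow, one entry refilled on underflow) is an assumption for illustration; the application states only that the AGU 147 manages overflow and underflow conditions.

```python
# Sketch of a micro-stack holding the top n entries of a larger
# memory-based stack. Pushes beyond capacity spill the oldest fast
# entry to memory; pops past the fast entries refill from memory.
# The exact spill/refill policy is a hypothetical illustration.

class MicroStack:
    def __init__(self, n=8):
        self.n = n        # e.g., the top eight entries kept in core gates
        self.fast = []    # register/gate entries inside the core
        self.memory = []  # remainder of the stack in data storage

    def push(self, value):
        if len(self.fast) == self.n:
            self.memory.append(self.fast.pop(0))  # spill oldest to memory
        self.fast.append(value)

    def pop(self):
        if not self.fast and self.memory:
            self.fast.append(self.memory.pop())   # refill from memory
        return self.fast.pop()

ms = MicroStack(n=2)
for v in (1, 2, 3):
    ms.push(v)
assert ms.memory == [1]   # oldest entry spilled when capacity was exceeded
assert ms.pop() == 3 and ms.pop() == 2 and ms.pop() == 1
```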
[0034] The ALU 148 adds, subtracts, and shifts data. The multiplier
150 may be used to multiply two values together in one or more
cycles. The instruction fetch logic 154 generally fetches
instructions from instruction storage 130. The instructions may be
decoded by decode logic 152. Because the JSM 102 is adapted to
process instructions from at least two instruction sets, the decode
logic 152 generally comprises at least two modes of operation, one
mode for each instruction set. As such, the decode logic unit 152
may include a Java mode in which Java instructions may be decoded
and a C-ISA mode in which C-ISA instructions may be decoded.
[0035] The data storage 122 generally comprises data cache
("D-cache") 124 and data random access memory ("DRAM") 126.
Reference may be made to U.S. Pat. No. 6,826,652, filed Jun. 9,
2000 and U.S. Pat. No. 6,792,508, filed Jun. 9, 2000, both
incorporated herein by reference. Reference also may be made to
U.S. Ser. No. 09/932,794 (Publication No. 20020069332), filed Aug.
17, 2001 and incorporated herein by reference. The stack (excluding
the micro-stack 146), arrays and non-critical data may be stored in
the D-cache 124, while Java local variables, critical data and
non-Java variables (e.g., C, C++) may be stored in the DRAM 126. The
instruction storage 130 may comprise instruction RAM ("I-RAM") 132
and instruction cache ("I-cache") 134. The I-RAM 132 may be used
for "complex" micro-sequenced Bytecodes or micro-sequences, as
described below. The I-cache 134 may be used to store other types
of Java Bytecode and mixed Java/C-ISA instructions.
[0036] As noted above, the C-ISA instructions generally complement
the standard Java Bytecodes. For example, the compiler 110 may scan
a series of Java Bytecodes 112 and replace a complex Bytecode with
a micro-sequence as explained previously. The micro-sequence may be
created to optimize the function(s) performed by the replaced
complex Bytecodes.
[0037] FIG. 4 illustrates the operation of the JSM 102 to replace
Java Bytecodes with micro-sequences. FIG. 4 shows some, but not
necessarily all, components of the JSM. In particular, the
instruction storage 130, the decode logic 152, and a micro-sequence
vector table 162 are shown. The decode logic 152 receives
instructions from the instruction storage 130 and accesses the
micro-sequence vector table 162. In general and as described above,
the decode logic 152 receives instructions (e.g., instructions 170)
from instruction storage 130 via instruction fetch logic 154 (FIG.
2) and decodes the instructions to determine the type of
instruction for subsequent processing and execution. In accordance
with the preferred embodiments, the JSM 102 either executes the
Bytecode from instructions 170 or replaces a Bytecode from
instructions 170 with a micro-sequence as described below.
[0038] The micro-sequence vector table 162 may be implemented in
the decode logic 152 or as separate logic in the JSM 102. The
micro-sequence vector table 162 preferably includes a plurality of
entries 164. The entries 164 may include one entry for each
Bytecode that the JSM may receive. For example, if there are a
total of 256 Bytecodes, the micro-sequence vector table 162
preferably comprises at least 256 entries. Each entry 164
preferably includes at least two fields: a field 166 and an
associated field 168. Field 168 may comprise a single bit that
indicates whether the instruction 170 is to be directly executed or
whether the associated field 166 contains a reference to a
micro-sequence. For example, a bit 168 having a value of "0" ("not
set") may indicate the field 166 is invalid and thus, the
corresponding Bytecode from instructions 170 is directly executable
by the JSM. Bit 168 having a value of "1" ("set") may indicate that
the associated field 166 contains a reference to a
micro-sequence.
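The table lookup described in this paragraph may be sketched as follows. This is an illustrative Python model, not part of the disclosure; names such as VectorEntry and decode are assumptions.

```python
# Minimal sketch of the micro-sequence vector table 162: one entry per
# Bytecode, each entry holding a reference field (166) and a single
# valid bit (168). All names are illustrative.

NUM_BYTECODES = 256  # e.g., one entry for each of 256 Bytecodes

class VectorEntry:
    def __init__(self, ref_bit=0, field=0):
        self.ref_bit = ref_bit   # field 168: 1 ("set") = replace Bytecode
        self.field = field       # field 166: reference to micro-sequence

vector_table = [VectorEntry() for _ in range(NUM_BYTECODES)]

def decode(bytecode):
    """Return ('execute', bytecode) or ('microsequence', reference)."""
    entry = vector_table[bytecode]
    if entry.ref_bit:                       # bit set: use the reference
        return ('microsequence', entry.field)
    return ('execute', bytecode)            # bit not set: execute directly
```

In this model the Bytecode value itself serves as the table index, matching the indexing described in paragraph [0042].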
[0039] If the bit 168 indicates the associated field 166 includes a
reference to a micro-sequence, the reference may comprise the full
starting address in instruction storage 130 of the micro-sequence
or a part of the starting address that can be concatenated with a
base address that may be programmable in the JSM. In the former
case, field 166 may provide as many address bits as are required to
access the full memory space. In the latter case, a register within
the JSM registers 140 is programmed to hold the base address and
the vector table 162 may supply only the offset to access the start
of the micro-sequence. Most or all JSM internal registers 140 and
any other registers preferably are accessible by the main processor
unit 104 and, therefore, may be modified by the JVM as necessary.
Although not required, this latter addressing technique may be
preferred to reduce the number of bits needed within field 166. At
least a portion 180 of the instruction storage 130 may be allocated
for storage of micro-sequences, and thus the starting address may
point to a location in instruction storage 130 at which a particular
micro-sequence can be found. The portion 180 may be implemented in
the I-RAM 132 shown above in FIG. 2.
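The base-plus-offset addressing described above may be sketched as follows. This is an illustrative Python model; the 16-bit offset width is an assumption, since the disclosure leaves the width of field 166 open.

```python
# Sketch of forming a micro-sequence starting address by concatenating
# a programmable base address (held in a JSM register 140) with the
# offset supplied by vector-table field 166. Width is an assumption.

OFFSET_BITS = 16
OFFSET_MASK = (1 << OFFSET_BITS) - 1

def microsequence_address(base, offset_field):
    # The base register supplies the upper address bits; the vector
    # table supplies only the offset to the start of the micro-sequence.
    return (base & ~OFFSET_MASK) | (offset_field & OFFSET_MASK)
```

This illustrates why the technique reduces the number of bits needed within field 166: only the offset, not the full address, is stored per entry.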
[0040] Although the micro-sequence vector table 162 may be loaded
and modified in accordance with a variety of techniques, the
following discussion includes a preferred technique. The vector
table 162 preferably comprises a JSM resource that is addressable
via a register 140. A single entry 164 or a block of entries within
the vector table 162 may be loaded by information from the data
cache 124 (FIG. 2). When loading multiple entries (e.g., all of the
entries 164) in the table 162, a repeat loop of instructions may be
executed. Prior to executing the repeat loop, a register (e.g., R0)
preferably is loaded with the starting address of the block of
memory containing the data to load into the table. Another register
(e.g., R1) preferably is loaded with the size of the block to load
into the table. Register R14 is loaded with the value that
corresponds to the first entry in the vector table that is to be
updated/loaded.
[0041] The repeated instruction loop preferably comprises two
instructions that are repeated n times. The value n preferably is
the value stored in register R1. The first instruction in the loop
preferably performs a load from the start address of the block (R0)
to the first entry in the vector table 162. The second instruction
in the loop preferably adds an "immediate" value to the block start
address. The immediate value may be "2" if each entry in the vector
table is 16 bits wide. The loop thus loads the desired portion of
the table, beginning at the specified starting address.
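The repeat loop of paragraph [0041] may be modeled as follows. This is an illustrative Python sketch; the memory and table containers are stand-ins for the data block and the vector table 162.

```python
# Sketch of the repeat loop that loads vector-table entries. R0 holds
# the start address of the data block, R1 the number of entries to
# load, and R14 the index of the first table entry to update. Each
# entry is 16 bits, hence the immediate "+2" on the block address.

def load_vector_table(memory, table, r0, r1, r14):
    for _ in range(r1):              # the loop is repeated n times (n = R1)
        table[r14] = memory[r0]      # first instruction: load entry from block
        r0 += 2                      # second instruction: add immediate "2"
        r14 += 1                     # advance to the next table entry
    return r0                        # final block address after the loop
```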
[0042] In operation, the decode logic 152 uses a Bytecode from
instructions 170 as an index into micro-sequence vector table 162.
Once the decode logic 152 locates the indexed entry 164, the decode
logic 152 examines the associated bit 168 to determine whether the
Bytecode is to be replaced by a micro-sequence. If the bit 168
indicates that the Bytecode can be directly processed and executed
by the JSM, then the instruction is so executed. If, however, the
bit 168 indicates that the Bytecode is to be replaced by a
micro-sequence, then the decode logic 152 preferably changes this
instruction into a "no operation" (NOP) and sets the
micro-sequence-active bit (described above) in the status register
R15. In another embodiment, the JSM's pipe may be stalled to fetch
and replace this micro-sequenced instruction by the first
instruction of the micro-sequence. Changing the micro-sequenced
Bytecode into a NOP while fetching the first instruction of the
micro-sequence permits the JSM to process multi-cycle instructions
that are further advanced in the pipe without additional latency.
The micro-sequence-active bit may be set at any suitable time such
as when the micro-sequence enters the JSM execution stage (not
specifically shown).
[0043] As described above, the JSM 102 implements two program
counters: the PC and the .mu.PC. The PC and the .mu.PC are stored in
auxiliary registers 151, which in turn are stored in the decode
logic 152. In accordance with a preferred embodiment, one of these
two program counters is the active program counter used to fetch
and decode instructions. The PC 186 may be the currently active
program counter when the decode logic 152 encounters a Bytecode to
be replaced by a micro-sequence. Setting the status register's
micro-sequence-active bit causes the micro-program counter 188 to
become the active program counter instead of the program counter
186. Also, the contents of the field 166 associated with the
micro-sequenced Bytecode preferably are loaded into the .mu.PC 188.
At this point, the JSM 102 is ready to begin fetching and decoding
the instructions comprising the micro-sequence. At or about the
time the decode logic begins using the .mu.PC 188, the PC 186
preferably is incremented by a suitable value to point the PC 186
to the next instruction following the Bytecode that is replaced by
the micro-sequence. In at least some embodiments, the
micro-sequence-active bit within the status register R15 may only
be changed when the first instruction of the micro-sequence enters
the execute phase of JSM 102 pipe. The switch from PC 186 to the
.mu.PC 188 preferably is effective immediately after the
micro-sequenced instruction is decoded, thereby reducing the
latency.
[0044] The micro-sequence may end with a predetermined value or
Bytecode from the C-ISA called "RtuS" (return from micro-sequence)
that indicates the end of the sequence. This C-ISA instruction
causes a switch from the .mu.PC 188 to the PC 186 upon completion
of the micro-sequence. Preferably, the PC 186 previously was
incremented, as discussed above, so that the value of the PC 186
points to the next instruction to be decoded. The instruction may
have a delayed effect or an immediate effect depending on the
embodiment that is implemented. In embodiments with an immediate
effect, the switch from the .mu.PC 188 to the PC 186 is performed
immediately after the instruction is decoded and the instruction
after the RtuS instruction is the instruction pointed to by the
address present in the PC 186.
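The program-counter switch of paragraphs [0043] and [0044] may be modeled as follows. This is an illustrative Python sketch; JsmState and the attribute names are assumptions, not part of the disclosure.

```python
# Sketch of switching between the PC (186) and the micro-PC (188).
# Setting the micro-sequence-active bit makes the micro-PC the active
# program counter; "RtuS" clears it and returns control to the PC.

class JsmState:
    def __init__(self):
        self.pc = 0                     # PC 186
        self.upc = 0                    # micro-PC 188
        self.useq_active = False        # micro-sequence-active bit (R15)

def enter_microsequence(state, field166, bytecode_len=1):
    state.useq_active = True            # micro-PC becomes the active counter
    state.upc = field166                # load the table reference into micro-PC
    state.pc += bytecode_len            # PC now points past the replaced Bytecode

def rtus(state):
    # "RtuS" (return from micro-sequence): switch back to the PC, which
    # already points to the next instruction to be decoded.
    state.useq_active = False
```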
[0045] As discussed above, one or more Bytecodes may be replaced
with a micro-sequence or a group of other instructions. Such
replacement instructions may comprise any suitable instructions for
the particular application and situation at hand. At least some
such suitable instructions are disclosed in U.S. Ser. No.
10/631,308 (Publication No. 20040024989), filed Jul. 31, 2003 and
incorporated herein by reference.
[0046] Replacement micro-sequence instructions also may be used to
bookmark switch points when switching code execution threads.
Referring to FIG. 5, the line marked "T1" denotes a first thread T1
that is processed by the JSM 102. The thread T1 comprises a
plurality of Bytecode instructions, a plurality of micro-sequence
instructions, or some combination thereof. As previously explained,
the instructions that are executed in thread T1 are retrieved from
the instruction storage 130. More specifically, Bytecodes are
retrieved from the Bytecode storage 170 and micro-sequence
instructions are retrieved from micro-sequence storage 180.
[0047] While processing thread T1, the decode logic 152 may
encounter a sequence of JSM instructions that causes the processing
of thread T1 to be paused and the processing of a separate thread
T2 to be initialized. This sequence is executed in thread T1 at or
immediately prior to switch point 502. Execution of this sequence
causes processing of thread T1 to stop, and processing of a
separate thread T2 (denoted by line "T2") to begin in order to
perform some separate task in thread T2. In some embodiments,
instead of comprising a sequence of instructions (hereinafter
referred to as "switch instructions") that explicitly performs a
thread switch, thread T1 may comprise a sequence of instructions
that makes an operating system (OS) call (e.g., threadyield( )),
which OS call selects one of a plurality of threads to execute
based on thread priorities as dictated by the OS. A thread switch
also may be directly initialized by the OS. Specifically, if the OS
is running on the MPU 104, the OS may use a sequence of MPU
commands to initialize the thread switch.
[0048] Before the JSM 102 switches from processing thread T1 to
processing thread T2, however, information pertaining to the switch
point 502 (i.e., "context" information) is stored by being pushed
onto a T1 stack 123 (e.g., a memory-based stack designated
specifically for thread T1 and stored in storage 122, FIG. 2) of
the JSM 102. In some embodiments, the context information may be
pushed onto the micro-stack 146. The use of the term
"hardware-controlled stack" below and/or in the claims may refer to
the micro-stack 146, the T1 stack 123 or the T2 stack 125, which T1
stack 123 and T2 stack 125 may be used as a micro-stack (e.g., like
micro-stack 146). Although the embodiments below are discussed in
terms of the T1 stack 123 and/or the T2 stack 125, the scope of
disclosure is not limited to the use of these particular stacks and
other stacks (e.g., micro-stack 146) may be substituted for the T1
stack 123 and/or the T2 stack 125. Further, in preferred
embodiments, the context information is a minimal amount of
information, as described below.
[0049] Context information that is collected preferably comprises
the values of the PC 186, .mu.PC 188 and status register (register
R15) as they are at the switch point 502. When the decode logic 152
encounters a sequence of switch instructions while processing
thread T1, the sequence causes the execution of thread T1 to be
halted at switch point 502, the context of switch point 502 to be
saved, and the execution of thread T2 to be initialized. In some
embodiments, commands sent from the MPU 104 may perform a function
similar to that of a sequence of switch instructions.
[0050] Regardless of whether a switch from thread T1 to thread T2
is initialized by code in thread T1 or commands received from the
MPU 104, the switching processes are similar. As described above,
the execution of thread T1 is first halted. Once the JSM 102 has
stopped processing thread T1, the JSM 102 is made to store the
context of the switch point 502. The context of the switch point
502 preferably comprises the minimum amount of information
necessary for the JSM 102 to resume processing thread T1 at switch
point 502 after the JSM 102 has finished processing thread T2. The
JSM 102 stores the context of the switch point 502 by retrieving
the PC 186 and the .mu.PC 188 from the auxiliary registers 151 and
pushing them onto the T1 stack 123. The JSM 102 also retrieves the
value of the status register R15 and pushes that value onto the T1
stack 123 as well. These three values (the PC 186, the .mu.PC 188
and the status register R15) together comprise the minimum amount
of information needed for the JSM 102 to resume processing thread
T1 at switch point 502 after processing thread T2.
[0051] However, in some embodiments, it is preferable to also store
a fourth value for efficiency purposes. Accordingly, the JSM 102
pushes a fourth value onto the T1 stack 123, where the fourth value
is variable. For example, the fourth value may be one of the
registers 140. The scope of disclosure is not limited to pushing
the PC 186, .mu.PC 188, status register and variable register onto
the stack in any particular order, nor is the scope of disclosure
limited to pushing these particular values onto the stack. As
described above, any suitable number of values (e.g., a minimum
amount of information) may be pushed onto the stack to store a
context.
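The minimum-context push of paragraphs [0050] and [0051] may be sketched as follows. This is an illustrative Python model; push_min_context is an assumed name, and the push order follows paragraph [0050].

```python
# Sketch of saving the minimum context at a switch point: the PC (186),
# the micro-PC (188) and the status register R15 are pushed onto the
# thread's stack, optionally followed by a fourth, variable value.

def push_min_context(stack, pc, upc, r15, fourth=None):
    stack.append(pc)          # PC 186
    stack.append(upc)         # micro-PC 188
    stack.append(r15)         # status register R15
    if fourth is not None:    # optional fourth value (e.g., a register 140)
        stack.append(fourth)
```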
[0052] In some embodiments, the switch instructions in the thread
T1 may be 32-bit instructions that, when executed, call a
subroutine or some other portion of code comprising instructions
that store the context of the switch point 502 by pushing context
values (e.g., PC, .mu.PC, status register) onto the T1 stack 123.
FIG. 6 shows an illustrative embodiment of such 32-bit
instructions. Specifically, FIG. 6 shows a 32-bit instruction 599
that comprises information that describes the class of the 32-bit
instruction 599 and further specifies the type of the instruction
599. For example, as shown in the figure, bits 31:28 describe the
class of the instruction and bits 27:24 and bits 3:0 describe the
particular type of instruction being used. Bits 27:24 and bits 3:0
may specify, for example, that the instruction is a minimum-context
push instruction which, when executed, causes various context
values to be pushed onto the stack, as described above. Bits 23:4
are not of significance and preferably do not contain arguments or
other relevant data. Instead, bits 23:4 may contain placeholder
values (e.g., "0" bits). The scope of disclosure is not limited to
the use of instructions as shown in FIG. 6. Context values also may
be pushed onto the T1 stack 123 by commands received from the MPU
104.
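The field layout of the 32-bit instruction 599 may be decoded as follows. This is an illustrative Python sketch; the combined type value is an assumption about how bits 27:24 and 3:0 are read together, which the disclosure does not specify.

```python
# Sketch of extracting the fields of the 32-bit instruction of FIG. 6:
# bits 31:28 describe the instruction class, and bits 27:24 together
# with bits 3:0 describe the particular instruction type.

def decode_switch_instruction(word):
    cls = (word >> 28) & 0xF        # bits 31:28: instruction class
    typ_hi = (word >> 24) & 0xF     # bits 27:24: upper type bits
    typ_lo = word & 0xF             # bits 3:0: lower type bits
    return cls, (typ_hi << 4) | typ_lo
```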
[0053] Each thread has its own RAM base address for storing local
variables used by that thread. The micro-sequence may contain
instructions that, when executed, cause the JSM 102 to clean and
invalidate the D-RAM 126 to save the local variables being used by
thread T1. More specifically, at least some of the contents of the
D-RAM 126 preferably are transferred to other areas of the storage
122, such as another D-RAM (not specifically shown) that may be
located in the storage 122. The D-RAM 126 then is invalidated to
clear space in the D-RAM 126 for local variables that are used by
thread T2. After the D-RAM 126 has been cleaned and invalidated, the
JSM 102 also may push the RAM base address onto the main stack, so
that the local variables used by thread T1 may be retrieved for
later use. Also, because each thread pushes and pops different
values onto the micro-stack 146, the JSM 102 may further clean and
invalidate the micro-stack 146 in order to preserve the entries of
the micro-stack 146 and to clear the micro-stack 146 for use by
thread T2. In at least some embodiments, the entries of the
micro-stack 146 may be copied and/or transferred to the data cache
124. Further, in some embodiments, the JSM 102 may invalidate the
current entries of the micro-stack 146, so that after a thread
switch, the entries loaded into the micro-stack 146 replace the
invalidated entries.
[0054] After the PC 186, the .mu.PC 188, the status register R15
and an optional fourth register have been pushed onto the T1 stack
123, the JSM 102 stores the stack pointer (i.e., register R6). The
stack pointer may be defined as the address of the topmost entry on
T1 stack 123 and may be stored in any suitable memory (e.g.,
storage 122). Once at least the PC 186, .mu.PC 188, and the status
register have been pushed onto the T1 stack 123, and once the stack
pointer for the T1 stack 123 has been stored in memory, the context
of switch point 502 has been stored.
[0055] Because the context has been stored, the JSM 102 is ready to
switch from thread T1 to thread T2. Similar to thread T1, thread T2
comprises a plurality of instructions (e.g., Bytecodes,
micro-sequences or a combination thereof). Like thread T1, thread
T2 may be executed multiple times. However, each time processing
switches from thread T1 to thread T2, just as the context of thread
T1 is stored from the JSM 102 onto a stack, the context of thread
T2 must be loaded from a stack onto the JSM 102. The context of
thread T2 may be found on top of the T2 stack 125. The T2 stack 125
preferably is a memory-based stack, specifically designated for
thread T2 and stored in the storage 122. The thread T2 context may
have been pushed onto the T2 stack 125 at the end of a previous
iteration in a substantially similar fashion to the context-saving
process described above in relation to thread T1, or,
alternatively, the thread T2 context may have been pushed onto the
T2 stack 125 during the creation of the thread T2 or during the
most recent thread switch away from thread T2.
[0056] Thus, to begin processing thread T2, the JSM 102 loads the
stack pointer for T2 stack 125 from the storage 122 to register R6.
The RAM base address is loaded from the T2 stack 125, thus loading
the local variables for thread T2. The JSM 102 also loads the
context of thread T2 from the T2 stack 125 onto the auxiliary
registers 151 and/or the registers 140. In particular, the JSM 102
uses specific instructions to pop context values off of the T2
stack 125, where at least some of the specific instructions are
indivisible. For example, a MCTXPOP instruction may be used to pop
minimum context values off of the T2 stack 125. This MCTXPOP
instruction, in at least some embodiments, is indivisible,
mandatory for performing a context switch, and should not be
preempted. In this way, the JSM 102 is initialized to the context
of the previous iteration of thread T2. Thus, the JSM 102
effectively is able to resume processing where it "left off." The
JSM 102 decodes and executes thread T2 in a similar fashion to
thread T1.
[0057] After thread T2 has been executed, the JSM 102 may resume
processing thread T1 at switch point 502. To resume processing
thread T1 at switch point 502, the JSM 102 loads the context
information of thread T1 from the T1 stack 123. The JSM 102 loads
the stack pointer of thread T1 from the storage 122 and into
register R6. The JSM 102 then pops the RAM base address off of the
T1 stack 123 and uses the RAM base address to load the local
variables for thread T1. The JSM 102 also pops the status value,
.mu.PC 188 and the PC 186 off of the T1 stack 123. The JSM 102
stores the status value to the register R15 and stores the .mu.PC
188 and the PC 186 to the auxiliary registers 151. In this way, the
context information that is stored on top of the T1 stack 123 is
popped off the stack and is used by the JSM 102 to return to the
context of switch point 502. The JSM 102 may now resume processing
thread T1 at switch point 502. The thread switch from thread T2 to
thread T1 may be controlled by a sequence of code being executed in
thread T2 or, alternatively, by commands sent from the MPU 104. The
thread switching technique described above may be applied to any
suitable pair of threads in the system 100.
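The context restore of paragraph [0057] may be sketched as follows. This is an illustrative Python model; the values are popped in the reverse of the push order given in paragraph [0050].

```python
# Sketch of restoring a saved minimum context: the status value, the
# micro-PC and the PC are popped off the thread's stack in the reverse
# of the order in which they were pushed.

def pop_min_context(stack):
    r15 = stack.pop()     # status value -> register R15
    upc = stack.pop()     # micro-PC 188 -> auxiliary registers 151
    pc = stack.pop()      # PC 186 -> auxiliary registers 151
    return pc, upc, r15
```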
[0058] FIG. 7 shows a flowchart summarizing the process used to
switch from one thread to another thread. The process 600 may begin
by processing thread T1 (block 602). The process 600 comprises
monitoring for a sequence of code in thread T1, or commands from
the MPU 104, that initialize a thread switch from thread T1 to
thread T2 (block 604). If no such sequence is encountered or no
such command is received from the MPU 104, the process 600
comprises continuing to process thread T1 (block 602). However, if
such a sequence or MPU 104 command is encountered, then the process
600 comprises halting processing of thread T1 (block 606) and
pushing either the full or minimum context to the T1 stack (block
608), as previously described.
[0059] The process 600 further comprises cleaning and invalidating
the RAM (block 610), pushing the RAM base address onto the T1 stack
(block 612), cleaning and invalidating the micro-stack (block 614),
and storing the T1 stack pointer to any suitable memory (block
616). The context of thread T1 has now been saved. Before beginning
to process thread T2, the context of thread T2 (if any) is to be
loaded from the T2 stack. Specifically, the process 600 comprises
loading the T2 stack pointer from memory (block 618), popping the
RAM base address from the T2 stack (block 620), popping the full or
minimum context from the T2 stack (block 622), and subsequently
beginning processing of the thread T2 (block 624).
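The FIG. 7 flow of blocks 606 through 624 may be condensed into the following sketch. This is an illustrative Python model; the dictionaries are simplified stand-ins for the JSM registers, the T1 and T2 stacks, and the storage 122, and the function name is an assumption.

```python
# Sketch of a complete thread switch: save T1's minimum context and
# RAM base address on its stack and store its stack pointer, then load
# T2's stack pointer and pop its RAM base address and context.

def switch_threads(t1, t2, memory):
    # Save T1 (blocks 608-616): push context, push RAM base, store SP.
    t1['stack'] += [t1['pc'], t1['upc'], t1['r15'], t1['ram_base']]
    memory['t1_sp'] = len(t1['stack'])      # stand-in for register R6
    # Restore T2 (blocks 618-622): pop RAM base, then the saved context.
    t2['ram_base'] = t2['stack'].pop()
    t2['r15'] = t2['stack'].pop()
    t2['upc'] = t2['stack'].pop()
    t2['pc'] = t2['stack'].pop()
```

Note that T2's stack must hold the RAM base address on top, consistent with the save order above (context first, RAM base last).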
[0060] In the embodiments described above, a minimum context (i.e.,
PC, .mu.PC, status register) is pushed onto a stack to bookmark a
switch point. While storing the minimum context is faster than
storing the full context (i.e., all registers in the JSM core), and
storing the minimum context onto the stack is faster than moving
registers from the JSM 102 to the D-RAM 126, in some embodiments,
it may be desirable to perform a full-context store instead of a
minimum-context store, for reasons previously described. Thus, in
such embodiments, full contexts also may be stored and/or loaded,
in which case most or all of the registers 140 as well as most or
all of the auxiliary registers 151 are stored and/or loaded with
each thread switch. For instance, in cases where one or more
register values other than the PC 186, .mu.PC 188, and status
register are affected in a second thread, it may be desirable to
store all register values via a full-context store. In such cases,
the 32-bit instructions described above and shown in FIG. 6 may
comprise data (e.g., in bits 31:24 and 3:0) that causes a
full-context store to be performed. Similarly, a 32-bit instruction
may comprise data that causes a full-context load to be performed.
Further, as also described above, a full-context store and/or load
may be initialized by a command from the MPU 104 instead of by code
being executed in thread T1. A full-context store and/or load is
performed in a similar manner to a minimum-context store and/or
load, with the exception being a difference in the number of
registers stored and/or loaded.
[0061] As explained above, the technique of storing contexts during
thread switches may be used to service commands received by the JSM
102 from the MPU 104. For example, in performing a series of tasks,
the MPU 104 may determine that delegating one or more tasks to the
JSM 102 would be expeditious, so that the MPU 104 may allocate its
resources to performing other tasks. In such a case, the MPU 104
sends a command to the JSM 102, instructing the JSM 102 to perform
a particular task. The command is coupled with a parameter, which
parameter preferably comprises the address of a micro-sequence. The
JSM 102, upon receiving the command and the associated parameter,
stores the parameter in a suitable storage unit, such as a register
140, an auxiliary register 151 or on any one of the stacks in the
JSM 102. The JSM 102 then uses the parameter (i.e., the
micro-sequence address) to locate the micro-sequence in the
micro-sequence storage 180. Upon locating the micro-sequence, the
JSM 102 retrieves the micro-sequence and executes the
micro-sequence, thus obeying the command sent from the MPU 104. The
micro-sequence preferably is pre-programmed into the micro-sequence
storage 180.
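The command-servicing sequence of paragraph [0061] may be sketched as follows. This is an illustrative Python model; service_mpu_command and the container names are assumptions, not part of the disclosure.

```python
# Sketch of servicing an MPU command: the accompanying parameter is
# the address of a pre-programmed micro-sequence, which the JSM stores,
# uses to locate the micro-sequence, and then executes.

def service_mpu_command(jsm, useq_storage, parameter):
    jsm['saved_parameter'] = parameter        # store in a register or stack
    microsequence = useq_storage[parameter]   # locate by address (180)
    for instruction in microsequence:         # retrieve and execute
        jsm['executed'].append(instruction)
```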
[0062] In obeying the command from the MPU 104, the JSM 102 may be
required to pause whatever task it is completing at the moment the
command is received from the MPU 104. More specifically, the JSM
102 may be performing a particular task or executing a sequence of
code in a first thread T1 when it is interrupted with the command
from the MPU 104. In order for the JSM 102 to service the command,
it must first pause the execution of the first thread T1 at a
switch point and bookmark the switch point by storing the context
of the first thread T1. The JSM 102 then may service the command
from the MPU 104 in a second thread T2. Once the command from the
MPU 104 has been serviced, the JSM 102 may resume execution at the
switch point in the first thread T1 by retrieving the stored
context of the first thread T1.
[0063] The JSM 102 stores contexts and retrieves contexts in a
manner similar to that previously described. In particular, when
storing the context, the JSM 102 stores either a full context or,
preferably, a minimum context. When storing a full context, the JSM
102 pushes all available registers 140 (and optionally auxiliary
registers 151) onto the T1 stack 123. When storing a minimum
context, the JSM 102 pushes the PC 186, the .mu.PC 188, the status
register R15 and optionally a fourth register value onto the T1
stack 123. In either case, before shifting to the second thread T2,
the JSM 102 also stores the value of the stack pointer (i.e.,
register R6) in any suitable memory (e.g., D-RAM 126). As previously
described, the JSM 102 stores the value of the stack pointer so
that, when it is ready to resume executing thread T1, the JSM 102
is able to locate the context information that is on the T1 stack
123. Specifically, once the JSM 102 has serviced the command from
the MPU 104 and is ready to resume executing thread T1 at the
switch point, the JSM 102 uses the stack pointer to locate the
context information on the T1 stack 123. Once the context
information is located, the JSM 102 pops the context information
off of the stack and stores the context information to the
appropriate registers (e.g., registers 140 and/or auxiliary
registers 151) in the JSM 102. The JSM 102 then may resume
executing in thread T1.
[0064] This technique is summarized in FIG. 8. The process 800
shown in FIG. 8 begins with the MPU 104 determining to delegate a
task to the JSM 102 (block 802). Accordingly, the MPU 104 delegates
the task by sending to the JSM 102 a command along with a parameter
(block 804). The command instructs the JSM 102 to use the parameter
(i.e., an address of a micro-sequence) to find a corresponding
micro-sequence and to process the micro-sequence. Once the JSM 102
receives the command from the MPU 104, the JSM 102 pauses
processing of a current thread T1 at a switch point (block 806).
The JSM 102 then stores the context of the thread T1 at the switch
point (block 808). The JSM 102 then uses the parameter received
from the MPU 104 to find the micro-sequence (block 810). As
explained above, the parameter contains the address of this
micro-sequence. The JSM 102 subsequently retrieves and executes the
micro-sequence in a thread T2 (block 812). After executing the
micro-sequence, the JSM 102 restores the context of the switch
point in thread T1 (block 814). Finally, the JSM 102 may resume
executing thread T1 at the switch point (block 816). Such a
technique is not limited to commands received from the MPU 104.
Instead, the JSM 102 may apply this technique to any task delegated
to the JSM 102, such as an interrupt, exception routine, etc.
[0065] System 100 may be implemented as a mobile cell phone 415
such as that shown in FIG. 9. As shown, the battery-operated,
mobile communication device includes an integrated keypad 412 and
display 414. The JSM processor 102 and MPU processor 104 and other
components may be included in electronics package 410 connected to
the keypad 412, display 414, and radio frequency ("RF") circuitry
416. The RF circuitry 416 may be connected to an antenna 418.
[0066] Although the above embodiments have been described in the
context of dual processor cores, the techniques described herein
also are applicable to any number of processor cores. For example,
the system 100 may comprise the MPU 104, the JSM 102, as well as at
least one additional processor core. The host processor (i.e., the
MPU 104) may delegate tasks to the JSM 102 as well as any of the
additional processor cores, using the techniques described
above.
[0067] While the preferred embodiments of the present invention
have been shown and described, modifications thereof can be made by
one skilled in the art without departing from the spirit and
teachings of the invention. The embodiments described herein are
exemplary only, and are not intended to be limiting. Many
variations and modifications of the invention disclosed herein are
possible and are within the scope of the invention. Accordingly,
the scope of protection is not limited by the description set out
above. Each and every claim is incorporated into the specification
as an embodiment of the present invention.
* * * * *