U.S. patent application number 13/548,924 was filed with the patent office on 2012-07-13 and published on 2013-07-18 as publication number 2013/0185540 for a processor with multi-level looping vector coprocessor.
This patent application is currently assigned to TEXAS INSTRUMENTS INCORPORATED. The applicants listed for this patent are Peter CHANG, Ching-Yu HUNG, Shinri INAMORI, and Jagadeesh SANKARAN. Invention is credited to Peter CHANG, Ching-Yu HUNG, Shinri INAMORI, and Jagadeesh SANKARAN.
Application Number: 13/548,924
Publication Number: 20130185540
Family ID: 48780833
Publication Date: 2013-07-18
United States Patent Application 20130185540
Kind Code: A1
HUNG; Ching-Yu; et al.
July 18, 2013
PROCESSOR WITH MULTI-LEVEL LOOPING VECTOR COPROCESSOR
Abstract
A processor includes a scalar processor core and a vector
coprocessor core coupled to the scalar processor core. The scalar
processor core includes a program memory interface through which
the scalar processor retrieves instructions from a program memory.
The instructions include scalar instructions executable by the
scalar processor and vector instructions executable by the vector
coprocessor core. The vector coprocessor core includes a plurality
of execution units and a vector command buffer. The vector command
buffer is configured to decode vector instructions passed by the
scalar processor core, to determine whether vector instructions
defining an instruction loop have been decoded, and to initiate
execution of the instruction loop by one or more of the execution
units based on a determination that all of the vector instructions
of the instruction loop have been decoded.
Inventors: HUNG; Ching-Yu (Pleasanton, CA); INAMORI; Shinri (Kanagawa, JP); SANKARAN; Jagadeesh (Allen, TX); CHANG; Peter (Colleyville, TX)

Applicant:
HUNG; Ching-Yu, Pleasanton, CA, US
INAMORI; Shinri, Kanagawa, JP
SANKARAN; Jagadeesh, Allen, TX, US
CHANG; Peter, Colleyville, TX, US

Assignee: TEXAS INSTRUMENTS INCORPORATED, Dallas, TX
Family ID: 48780833
Appl. No.: 13/548,924
Filed: July 13, 2012
Related U.S. Patent Documents

Application Number: 61/507,652
Filing Date: Jul 14, 2011
Current U.S. Class: 712/7
Current CPC Class: G06F 9/3001 (20130101); G06F 9/30098 (20130101); G06F 9/30021 (20130101); G06F 15/8007 (20130101); G06F 9/3012 (20130101); G06F 9/3013 (20130101); G06F 15/76 (20130101); G06F 9/30036 (20130101); G06F 9/30043 (20130101); G06F 9/345 (20130101); G06F 9/30032 (20130101); G06F 9/3887 (20130101); G06F 15/8053 (20130101); G06F 9/30065 (20130101); G06F 9/30087 (20130101)
Class at Publication: 712/7
International Class: G06F 15/80 (20060101) G06F015/80
Claims
1. A processor, comprising: a scalar processor core; and a vector
coprocessor core coupled to the scalar processor core; the scalar
processor core comprising: a program memory interface through which
the scalar processor retrieves instructions from a program memory,
the instructions comprising scalar instructions executable by the
scalar processor and vector instructions executable by the vector
coprocessor core; a coprocessor interface through which the scalar
processor passes the vector instructions to the vector coprocessor;
the vector coprocessor core, comprising: a plurality of execution
units; and a vector command buffer configured to: decode the vector
instructions passed by the scalar processor core; determine whether
vector instructions defining an instruction loop have been decoded;
and initiate execution of the instruction loop by one or more of
the execution units based on a determination that all of the vector
instructions of the instruction loop have been decoded.
2. The processor of claim 1, wherein the instruction loop comprises
a plurality of nested loops and the vector coprocessor core is
configured to execute the plurality of nested loops without looping
overhead.
3. The processor of claim 1, wherein the vector coprocessor core is
configured to execute vector instructions only by execution of the
vector instructions within an instruction loop comprising a
predetermined plurality of nested loops.
4. The processor of claim 1, wherein the scalar processor core is
configured to execute the scalar instructions while the vector
coprocessor core executes the vector instructions.
5. The processor of claim 1, wherein the vector command buffer
comprises storage for a plurality of vector commands, each of the
vector commands comprising nested loops of vector instructions; and
the vector command buffer is configured to decode the vector
instructions of a second vector command passed by the scalar
processor core while the execution units execute a first vector
command.
6. The processor of claim 1, wherein the scalar processor core is
configured to service interrupts while stalled awaiting completion
of execution of vector instructions by the vector coprocessor.
7. The processor of claim 1, wherein the vector processor core
further comprises an operand memory interface configured to
simultaneously access a plurality of banks of memory, each of the
banks organized as a plurality of sub-banks of memory.
8. The processor of claim 7, wherein the operand memory interface
is configured to simultaneously access the plurality of sub-banks
of memory.
9. The processor of claim 7, wherein the vector processor core further
comprises a plurality of address generators, each of the address
generators configured to provide an address for accessing one of
the sub-banks of memory.
10. The processor of claim 1, wherein the instruction loop
comprises an outermost loop about a plurality of nested loops,
wherein the vector coprocessor core is configured to change, with
no overhead, at least one of a location in memory of data to be processed by the nested loops and a dimension of an array of data
to be processed by the nested loops.
11. The processor of claim 1, wherein the vector coprocessor core
is configured to exit the instruction loop prior to execution of a
predetermined number of iterations of the instruction loop; wherein
the vector coprocessor core is configured to schedule the exit from
a predetermined number of nested loops of the instruction loop to
occur at completion of a current iteration of an innermost loop of
the nested loops based on a value stored in a register of the
vector coprocessor core.
12. The processor of claim 1, wherein the vector coprocessor core
is configured to exit an outermost loop of the instruction loop
prior to execution of a predetermined number of iterations of the
outermost loop; wherein the vector coprocessor core is configured
to schedule the exit from the outermost loop to occur at completion
of all iterations of loops within the outermost loop based on a
value stored in a register of the vector coprocessor core.
13. A vector coprocessor, comprising: a plurality of execution
units configured to simultaneously apply an instruction specified
operation to different data; and a vector command buffer configured
to: decode instructions to be executed by the execution units;
identify an instruction loop in the instructions; and provide the
instructions to the execution units for execution based on a
determination that all of the instructions of an identified
instruction loop have been decoded.
14. The vector coprocessor of claim 13, further comprising loop
control logic configured to control execution of a plurality of
nested loops of the instruction loop without looping overhead.
15. The vector coprocessor of claim 14, wherein the loop control
logic is configured to exit the instruction loop prior to execution
of a predetermined number of iterations of the instruction loop;
wherein the loop control logic is configured to schedule the exit
from the predetermined number of nested loops of the instruction
loop to occur at completion of a current iteration of an innermost
loop of the nested loops based on a value read from a register of
the vector coprocessor core during the current iteration of the
innermost loop.
16. The vector coprocessor of claim 14, wherein the loop control
logic is configured to exit an outermost loop of the instruction
loop prior to execution of a predetermined number of iterations of
the outermost loop; wherein the loop control logic is configured to
schedule the exit from the outermost loop to occur at completion of
all iterations of nested loops within the outermost loop based on a
value read from a register of the vector coprocessor core during a
current iteration of an innermost loop of the nested loops.
17. The vector coprocessor of claim 14, wherein the instruction
loop comprises an outermost loop about a plurality of nested loops,
wherein the loop control logic is configured to change, with no
overhead, at least one of a distance to a next object in memory to
be processed by the nested loops and a dimension of the next object
to be processed by the nested loops.
18. The vector coprocessor of claim 13, wherein the vector command
buffer comprises storage for a plurality of vector commands, each
of the vector commands comprising nested loops of instructions; and
the vector command buffer is configured to decode the instructions
of a second vector command while the execution units execute a
first vector command.
19. The vector coprocessor of claim 13, further comprising an
operand memory interface configured to simultaneously access a
plurality of banks of memory, each of the banks organized as a
plurality of sub-banks of memory.
20. The vector coprocessor of claim 16, wherein the operand memory
interface is configured to simultaneously access the plurality of
sub-banks of memory.
21. The vector coprocessor of claim 16, further comprising a
plurality of address generators, each of the address generators
configured to provide an address for accessing one or more of the
sub-banks of memory.
22. The vector coprocessor of claim 13, wherein the vector command buffer is configured to identify the instruction loop based on detection of a
looping instruction in the instructions, wherein the looping
instruction is indicative of a start of the instruction loop and
specifies a length of the instruction loop.
23. The vector coprocessor of claim 13, further comprising: a
processor interface through which the vector processor receives
from a different processor: instructions to be executed by the
vector processor; memory storage addresses of the instructions to be executed by the vector processor, wherein each address is provided
concurrently with one of the instructions; and data values to be
written to a register of the vector processor, wherein each data
value is provided concurrently with one of the instructions.
24. A processor, comprising: a control processor core; and a
single-instruction multiple data (SIMD) coprocessor core coupled to
the control processor core; the control processor core comprising:
a program memory interface through which the control processor
retrieves instructions from a program memory, the instructions
comprising instructions executable by the control processor core and SIMD
instructions executable by the SIMD coprocessor core; the SIMD
coprocessor core, comprising: a plurality of execution units; and a
vector command buffer configured to: group SIMD instructions
received from the control processor core into an instruction loop;
and initiate execution of the instruction loop based on a complete
set of SIMD instructions of an instruction loop being received from
the control processor core; and loop control logic coupled to the
vector command buffer, the loop control logic configured to manage
execution of the instruction loop as a plurality of nested loops
without loop overhead.
25. The processor of claim 24, wherein the control processor core
is configured to: execute instructions while the SIMD coprocessor
core executes the instruction loop; and service interrupts while
stalled awaiting completion of execution of the instruction
loop.
26. The processor of claim 24, wherein the vector command buffer
comprises: an SIMD instruction decoder; and storage for a plurality
of vector commands, each vector command comprising an instruction
loop; wherein the SIMD instruction decoder is configured to decode
a vector command while a previously decoded vector command is being
executed.
27. The processor of claim 24 further comprising: a plurality of
banks of memory coupled to the control processor core and to the
SIMD coprocessor core, wherein each bank of memory comprises a
number of sub-banks equal to a number of SIMD processing lanes of
the SIMD coprocessor core; wherein the SIMD coprocessor core
comprises a memory interface configured to simultaneously access
all the sub-banks of a given one of the banks of memory.
28. The processor of claim 24, wherein the loop control logic is
configured to: schedule exit from a predetermined number of nested
loops of the instruction loop to occur at completion of a current
iteration of an innermost loop of the nested loops based on a value
read from a register of the vector coprocessor core during the
current iteration of the innermost loop; and schedule exit from an
outermost loop of the instruction loop to occur at completion of
all iterations of the nested loops within the outermost loop based
on the value read from the register of the vector coprocessor core
during the current iteration of the innermost loop; wherein the
current iteration is not the last iteration of the instruction set
at initiation of instruction loop execution.
29. The processor of claim 24, wherein the loop control logic is
configured to change, with no overhead, on execution of an
outermost loop of the instruction loop, at least one of a distance
to a next object in memory to be processed by the instruction loop
and a dimension of the next object to be processed by the
instruction loop.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] The present application claims priority to U.S. Provisional
Patent Application No. 61/507,652, filed on Jul. 14, 2011 (Attorney
Docket No. TI-70051 PS), which is hereby incorporated herein by
reference in its entirety.
BACKGROUND
[0002] Various processor designs include coprocessors that are
intended to accelerate execution of a given set of processing
tasks. Some such coprocessors achieve good performance/area in
typical processing tasks, such as scaling, filtering,
transformation, sum of absolute differences, etc., executed by a
digital signal processor (DSP). However, as the complexity of
digital signal processing algorithms increases, processing tasks
often require numerous passes of processing through a coprocessor,
compromising power efficiency. Furthermore, access patterns
required by DSP algorithms are becoming less regular, thereby
negatively impacting the overall processing efficiency of
coprocessors designed to accommodate more regular access patterns.
Consequently, processor and coprocessor architectures that provide
improved processing, power, and/or area efficiency are
desirable.
SUMMARY
[0003] A processor that includes a control processor core and a
vector processor core is disclosed herein. In one embodiment, a
processor includes a scalar processor core and a vector coprocessor
core coupled to the scalar processor core. The scalar processor
core includes a program memory interface through which the scalar
processor retrieves instructions from a program memory. The
instructions include scalar instructions executable by the scalar
processor and vector instructions executable by the vector
coprocessor core. The vector coprocessor core includes a plurality
of execution units and a vector command buffer. The vector command
buffer is configured to decode vector instructions passed by the
scalar processor core, to determine whether vector instructions
defining an instruction loop have been decoded, and to initiate
execution of the instruction loop by one or more of the execution
units based on a determination that all of the vector instructions
of the instruction loop have been decoded.
[0004] In another embodiment, a vector coprocessor includes a
plurality of execution units and a vector command buffer. The
execution units are configured to simultaneously apply an
instruction specified operation to different data values. The
vector command buffer is configured to decode instructions to be
executed by the execution units, to identify an instruction loop in
the instructions, and to provide the instructions to the execution
units for execution based on a determination that all of the
instructions of an identified instruction loop have been
decoded.
[0005] In a further embodiment, a processor includes a control
processor core and a single-instruction multiple data (SIMD)
coprocessor core coupled to the control processor core. The control
processor core includes a program memory interface through which
the control processor core retrieves instructions from a program
memory. The instructions comprise instructions executable by the
control processor core and SIMD instructions executable by the SIMD
coprocessor core. The SIMD coprocessor core includes a plurality of
execution units, a vector command buffer, and loop control logic
coupled to the vector command buffer. The vector command buffer is
configured to group SIMD instructions received from the control
processor core into an instruction loop, and to initiate execution
of the instruction loop based on a complete set of SIMD
instructions of an instruction loop being received from the control
processor core. The loop control logic is configured to manage
execution of the instruction loop as a plurality of nested loops
without loop overhead.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] For a detailed description of exemplary embodiments of the
invention, reference will now be made to the accompanying drawings
in which:
[0007] FIG. 1 shows a block diagram of a processor in accordance
with various embodiments;
[0008] FIG. 2 shows a block diagram of a processor in accordance
with various embodiments;
[0009] FIG. 3 shows a block diagram of a vector coprocessor core in
accordance with various embodiments;
[0010] FIG. 4 shows a block diagram of a vector command buffer of
the vector coprocessor core in accordance with various
embodiments;
[0011] FIG. 5 shows a diagram of scalar processor core and vector
coprocessor core execution interaction in accordance with various
embodiments;
[0012] FIGS. 6A-6F show load data distributions provided by a load
unit of a vector coprocessor core in accordance with various
embodiments;
[0013] FIG. 7 shows a table of load unit data distributions in
accordance with various embodiments; and
[0014] FIG. 8 shows a table of store unit data distributions in
accordance with various embodiments.
NOTATION AND NOMENCLATURE
[0015] Certain terms are used throughout the following description
and claims to refer to particular system components. As one skilled
in the art will appreciate, companies may refer to a component by
different names. This document does not intend to distinguish
between components that differ in name but not function. In the
following discussion and in the claims, the terms "including" and
"comprising" are used in an open-ended fashion, and thus should be
interpreted to mean "including, but not limited to . . . " Also,
the term "couple" or "couples" is intended to mean either an
indirect or direct electrical connection. Thus, if a first device
couples to a second device, that connection may be through a direct
electrical connection, or through an indirect electrical connection
via other devices and connections. Further, the term "software"
includes any executable code capable of running on a processor,
regardless of the media used to store the software. Thus, code
stored in memory (e.g., non-volatile memory), and sometimes
referred to as "embedded firmware," is included within the
definition of software. The recitation "based on" is intended to
mean "based at least in part on." Therefore, if X is based on Y, X
may be based on Y and any number of other factors. The terms
"alternate," "alternating" and the like are used to designate every
other one of a series.
DETAILED DESCRIPTION
[0016] The following discussion is directed to various embodiments
of the invention. Although one or more of these embodiments may be
preferred, the embodiments disclosed should not be interpreted, or
otherwise used, as limiting the scope of the disclosure, including
the claims. In addition, one skilled in the art will understand
that the following description has broad application, and the
discussion of any embodiment is meant only to be exemplary of that
embodiment, and not intended to intimate that the scope of the
disclosure, including the claims, is limited to that
embodiment.
[0017] Embodiments of the processor disclosed herein provide
improved performance without sacrificing area or power efficiency.
FIG. 1 shows a block diagram of a processor 100 in accordance with
various embodiments. The processor 100 includes a scalar processor
core 102, a vector coprocessor core 104, a program memory 106, a
data memory 108, a working buffer memory 110, an A buffer memory
112, and a B buffer memory 114. The A and B buffer memories 112,
114 are partitioned into a low and high A buffer memory (112A,
112B) and a low and high B buffer memory (114A, 114B) to allow
simultaneous direct memory access (DMA) and access by the cores
102, 104. To support N-way processing by the vector coprocessor core 104, each of the working buffer memory 110, A buffer memory
112, and B buffer memory 114 may comprise N simultaneously
accessible banks. For example, if the vector coprocessor core 104
is an 8-way single-instruction multiple-data (SIMD) core, then each
of the working, A, and B buffers 110, 112, 114 may comprise 8 banks
each of suitable word width (e.g., 32 bits or more wide) that are
simultaneously accessible by the vector coprocessor core 104.
A switching network 118 provides signal routing between the memories
108, 110, 112, 114 and the various systems that share access to
memory (e.g., DMA and the processor cores 102, 104).
[0018] FIG. 2 shows a block diagram of the processor 100 including
various peripherals, including DMA controller 202, memory
management units 204, clock generator 206, interrupt controller
208, counter/timer module 210, trace port 214, memory mapped
registers 212 and various interconnect structures that link the
components of the processor 100.
[0019] The scalar processor core 102 may be a reduced instruction
set processor core, and include various components, such as
execution units, registers, instruction decoders, peripherals,
input/output systems and various other components and sub-systems.
Embodiments of the scalar processor core 102 may include a
plurality of execution units that perform data manipulation
operations. For example, an embodiment of the scalar processor core
102 may include five execution units: a first execution unit that performs logical, shift, rotation, extraction, reverse, clear, set, and equal operations; a second that performs data movement operations; a third that performs arithmetic operations; a fourth that performs multiplication; and a fifth that performs division. In some embodiments, the
scalar processor core 102 serves as a control processor for the
processor 100, and executes control operations, services
interrupts, etc., while the vector coprocessor core 104 serves as a
signal processor for processing signal data (e.g., image signals)
provided to the vector coprocessor core 104 via the memories 110,
112, 114.
[0020] The program memory 106 stores instructions to be executed by
the scalar core 102 interspersed with instructions to be executed
by the vector coprocessor core 104. The scalar processor core 102
accesses the program memory 106 and retrieves therefrom an
instruction stream comprising instructions to be executed by the
scalar processor core 102 and instructions to be executed by the
vector coprocessor core 104. The scalar processor core 102
identifies instructions to be executed by the vector coprocessor
core 104 and provides the instructions to the vector coprocessor
core 104 via a coprocessor interface 116. In some embodiments, the
scalar processor 102 provides vector instructions, control data,
and/or loop instruction program memory addresses to the vector
coprocessor core 104 via the coprocessor interface 116. The loop
instruction program memory addresses may be provided concurrently
with a loop instruction, and the control data may be provided
concurrently with a control register load instruction. In some
embodiments, the program memory 106 may be a cache memory that
fetches instructions from a memory external to the processor 100
and provides the instructions to the scalar processor core 102.
[0021] FIG. 3 shows a block diagram of the vector coprocessor core
104 in accordance with various embodiments. The vector coprocessor
core 104 may be an SIMD processor that executes instructions
arranged as a loop. More specifically, the vector coprocessor core
104 executes vector instructions within a plurality of nested
loops. In some embodiments, the vector coprocessor core 104
includes built-in looping control that executes instructions in
four or more nested loops with zero looping overhead. The vector
coprocessor core 104 includes a command decoder/buffer 302, loop
control logic 304, a vector register file 306, processing elements
308, a table look-up unit 310, a histogram unit 312, load units
314, store units 316, and address generators 318. The load units
314 and store units 316 access the working buffer memory 110, the A buffer memory 112, and the B buffer memory 114 through a memory
interface 320. The address generators 318 compute the addresses
applied by the load and store units 314, 316 for accessing memory.
Each address generator 318 is capable of multi-dimensional
addressing that computes an address based on the indices of the
nested loops and corresponding constants (e.g., address = base + i1*const1 + i2*const2 + i3*const3 + i4*const4 for 4-dimensional addressing, where i_n is the loop index for one of the four nested loops).
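As a hedged illustration only, the C sketch below models a four-dimensional address generator of this kind; the structure and field names are assumptions made for illustration and are not taken from the application itself.

    /* Illustrative sketch of 4-D address generation; names are assumptions. */
    typedef struct {
        unsigned int base;              /* base address of the array    */
        unsigned int c1, c2, c3, c4;    /* per-loop constants (strides) */
    } agen_t;

    /* address = base + i1*c1 + i2*c2 + i3*c3 + i4*c4 */
    static unsigned int agen_address(const agen_t *a, unsigned int i1,
                                     unsigned int i2, unsigned int i3,
                                     unsigned int i4)
    {
        return a->base + i1*a->c1 + i2*a->c2 + i3*a->c3 + i4*a->c4;
    }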
[0022] The memory interface 320 connects the vector coprocessor core 104 to the memories 110, 112, 114 via a lane of interconnect corresponding to each bank of each memory. Thus, a memory 110, 112, 114 having
eight parallel banks (e.g., 32-bit banks) connects to the vector
coprocessor core 104 via eight parallel memory lanes, where each
memory lane connects to a port of the memory interface 320. Memory
lanes that connect to adjacent ports of the memory interface 320
are termed adjacent memory lanes.
[0023] The vector coprocessor core 104 is N-way SIMD, where in the
embodiment of FIG. 3, N=8. N may be different in other embodiments.
Thus, the coprocessor core 104 includes N processing lanes, where
each lane includes a processing element 308 and a set of registers
of the vector register file 306 that provide operands to and store
results generated by the processing element 308. Each processing
element 308 may include a plurality of function units that operate
on (e.g., multiply, add, compare, etc.) the operands provided by
the register file 306. Accordingly, the register file 306 is N-way
and includes storage of a plurality of entries. For example, the
register file 306 may be N×16, where the register file includes sixteen registers for each of the N ways of the vector coprocessor core 104. Corresponding registers of adjacent ways are termed adjacent registers. Thus, register R0 of SIMD way 0 is adjacent to register R0 of SIMD way 1. Similarly, register R0 of SIMD way 0 and register R0 of SIMD way 2 are alternate registers.
The processing elements 308 and the registers of the register file
306 are sized to process data values of various sizes. In some
embodiments, the processing elements 308 and the registers of the
register file 306 are sized to process 40-bit and smaller data values (e.g., 32-bit, 16-bit, 8-bit). Other embodiments may be
sized to process different data value sizes.
[0024] As noted above, the vector coprocessor core 104 repeatedly
executes a vector instruction sequence (referred to as a vector
command) within a nested loop. The nested looping is controlled by
the loop control logic 304. While the vector coprocessor core 104
is executing vector commands, the scalar core 102 continues to
decode and execute the instruction stream retrieved from program
memory 106, until execution of a coprocessor synchronization
instruction (by the scalar core 102) forces the scalar core 102 to
stall for vector coprocessor core 104 vector command completion.
While the scalar core 102 is stalled, the scalar core 102 may
service interrupts unless interrupt processing is disabled. Thus,
the scalar core 102 executes instructions and services interrupts
in parallel with vector coprocessor core 104 instruction execution.
Instruction execution by the scalar core 102 may be synchronized
with instruction execution by the vector coprocessor core 104 based
on the scalar core 102 executing a synchronization instruction that
causes the scalar core 102 to stall until the vector coprocessor
core 104 asserts a synchronization signal indicating that vector
processing is complete. Assertion of the synchronization signal may be
triggered by execution of a synchronization instruction by the
vector coprocessor core 104.
[0025] The command decode/buffer 302 of the vector coprocessor core
104 includes an instruction buffer that provides temporary storage
for vector instructions. FIG. 4 shows a block diagram of the
command decode/buffer 302 of the vector coprocessor core 104 in
accordance with various embodiments. The command decode/buffer 302
includes a pre-decode first-in first-out (FIFO) buffer 402, a
vector instruction decoder 404, and vector command storage buffers
406. Each vector command storage buffer 406 includes capacity to
store a complete vector command of maximum size. Vector
instructions flow from the scalar processor core 102 through the
pre-decode FIFO 402 and are decoded by the vector instruction
decoder 404. The decoded vector instructions corresponding to a
given vector command are stored in one of the vector command
storage buffers 406, and each stored vector command is provided for
execution in sequence. Execution of a decoded vector command is
initiated (e.g., the vector command is read out of the vector
command storage buffer 406) only after the complete vector command
is decoded and stored in a vector command storage buffer 406. Thus,
the command decode/buffer 302 loads a vector command into each of
the vector command storage buffers 406, and when the vector command
storage buffers 406 are occupied additional vector instructions
received by the command decode/buffer 302 are stored in the
pre-decode buffer 402 until execution of a vector command is
complete, at which time the FIFO buffered vector command may be
decoded and loaded into the emptied vector command storage buffer
406 previously occupied by the executed vector command.
[0026] FIG. 5 shows a diagram of scalar processor core 102 and
vector coprocessor core 104 interaction in accordance with various
embodiments. In FIG. 5, vector instructions i0-i3 form a first
exemplary vector command, vector instructions i4-i7 form a second
exemplary vector command, and vector instructions i8-i11 form a
third exemplary vector command. At time T1, the scalar processor
core 102 recognizes vector instructions in the instruction stream
fetched from program memory 106. In response, the scalar processor
core 102 asserts the vector valid signal (vec_valid) and passes the
identified vector instructions to the vector coprocessor core 104.
At time T2, the first vector command has been transferred to the
vector coprocessor core 104, and the vector coprocessor core 104
initiates execution of the first vector command while the scalar
processor core 102 continues to transfer the vector instructions of
the second vector command to the vector coprocessor core 104. At
time T3, transfer of the second vector command to the vector
coprocessor core 104 is complete, and the execution of the first
vector command is ongoing. Consequently, the vector coprocessor
core 104 negates the ready signal (vec_rdy) which causes the scalar
processor core 102 to discontinue vector instruction transfer. At
time T4, execution of the first vector command is complete, and
execution of the second vector command begins. With completion of
the first vector command, vector coprocessor core 104 asserts the
ready signal, and the command decode/buffer 302 receives the vector
instructions of the third vector command. At time T5, the vector
coprocessor core 104 completes execution of the second vector
command. At time T6, transfer of the third vector command is
complete, and the vector coprocessor core 104 initiates execution
of the third vector command. A VWDONE instruction follows the last
instruction of the third vector command. The VWDONE instruction
causes the scalar processor core 102 to stall pending completion of
the third vector command by the vector coprocessor core 104. When
the vector coprocessor core 104 completes execution of the third
vector command, the vector coprocessor core 104 executes the VWDONE
command which causes the vector coprocessor core 104 to assert the
vector done signal (vec_done). Assertion of the vector done signal
allows the scalar processor core 102 to resume execution, thus
providing core synchronization.
[0027] In order to highlight certain aspects of scalar processor
core 102 and vector coprocessor core 104 interaction, the example
of FIG. 5 excludes the pre-decode FIFO 402. With inclusion of the
pre-decode FIFO 402, the vector coprocessor core 104 delays
negation of vec_rdy until the pre-decode FIFO 402 is full.
[0028] Within the multi-level nested loop executed by the vector
coprocessor core 104, operations of vector command execution can be
represented as sequential load, arithmetic operation, store, and
pointer update stages, where a number of operations may be executed
in each stage. The following listing shows a skeleton of the nested
loop model for a four-loop embodiment of the vector coprocessor core 104. There are four loop variables, i1, i2, i3, and i4. Each loop variable is incremented from 0 to its respective loop end count, lpend1 . . . lpend4.
TABLE-US-00001
EVE_compute(...) {
  for (i1=0; i1<=lpend1; i1++) {
    for (i2=0; i2<=lpend2; i2++) {
      for (i3=0; i3<=lpend3; i3++) {
        for (i4=0; i4<=lpend4; i4++) {
          for (k=0; k<num_inits; k++)
            initialize_vreg_from_parameters(...);
          for (k=0; k<num_loads; k++)
            load_vreg_from_local_memory(...);
          for (k=0; k<num_ops; k++)
            op(...);  // 2 functional units, executing 2 ops per cycle
          for (k=0; k<num_stores; k++)
            store_vreg_to_local_memory(...);
          for (k=0; k<num_agens; k++)
            update_agen(...);
        }
      }
    }
  }
}
[0029] Each iteration of the innermost loop (i4) executes in a
number of cycles equal to the maximal number of cycles spent in
execution of loads, arithmetic operations, and stores within the
loop. Cycle count for the arithmetic operations is constant for each iteration, but cycle count for load and store operations can
change depending on pointer update, loop level, and read/write
memory contention.
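As a small illustrative sketch (not part of the application), the per-iteration cycle count described above can be modeled as the maximum of the three stage costs:

    /* Illustrative only: one innermost (i4) iteration takes as many cycles
     * as the slowest of its load, arithmetic, and store stages. */
    static unsigned int i4_iteration_cycles(unsigned int load_cycles,
                                            unsigned int op_cycles,
                                            unsigned int store_cycles)
    {
        unsigned int c = load_cycles;
        if (op_cycles > c) c = op_cycles;
        if (store_cycles > c) c = store_cycles;
        return c;
    }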
[0030] Embodiments define a vector command with a loop initiation
instruction, VLOOP. [0031] VLOOP cmd_type, CL#: cmd_len, PL#:
param_len where: [0032] cmd_type specifies the loop type: compute
(executed by the processing elements), table lookup (executed by
the table lookup unit), or histogram (executed by the histogram
unit); [0033] cmd_len specifies the length of the vector command;
and [0034] param_len specifies the length of the memory stored
parameter file associated with the vector command.
[0035] The vector instructions following VLOOP initialize the
registers and address generators of the vector coprocessor core
104, and specify the load operations, arithmetic and data
manipulation operations, and store operations to be performed with
the nested loops. The parameters applicable to execution of a
vector command (e.g., loop counts, address pointers to arrays,
constants used in the computation, round/truncate shift count,
saturation bounds, etc.) may be stored in memory (e.g., 110, 112, 114)
by the scalar processor core 102 as a parameter file and retrieved
by the vector coprocessor core 104 as part of loop
initialization.
[0036] While embodiments of the vector coprocessor core 104 may
always execute a fixed number of nested loops (e.g., 4 as shown in
the model above), with loop terminal counts of zero or greater,
some embodiments include an optional outermost loop (e.g., an
optional fifth loop). The optional outermost loop encompasses the
fixed number of nested loops associated with the VLOOP instruction,
and may be instantiated separately from the fixed number of nested
loops. As with the nested loops associated with the VLOOP
instruction, execution of the optional outermost loop requires no
looping overhead. Each iteration of the optional outermost loop may
advance a parameter pointer associated with the nested loops. For
example, the parameter pointer may be advanced by param_len
provided in the VLOOP instruction. The parameter pointer references
the parameter file that contains the parameters applicable to
execution of the vector command as explained above (loop counts,
etc.). By changing the parameters of the vector command with each
iteration of the outermost loop, embodiments of the vector
coprocessor core 104 can apply the vector command to
objects/structures/arrays of varying dimension or having varying
inter-object spacing. For example, changing loop counts for the
nested loops allows the vector coprocessor core 104 to process
objects of varying dimensions with a single vector command, and
without the overhead of a software loop.
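Extending the nested-loop skeleton shown earlier, the optional outermost loop might be modeled as follows; this is a sketch under assumptions (the i0/lpend0 names and helper calls are illustrative only) showing the parameter pointer advancing by param_len on each outer iteration.

    EVE_command_with_outer_loop(...) {
      for (i0=0; i0<=lpend0; i0++) {      // optional outermost loop
        read_parameter_file(param_ptr);   // loop counts, pointers, constants
        EVE_compute(...);                 // the four nested loops shown above
        param_ptr += param_len;           // advance to the next parameter file
      }                                   // with no looping overhead
    }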
[0037] The loop count of the optional outer loop and the parameter
pointer may be set by execution of an instruction by the vector
coprocessor core 104. The instruction may load a parameter into a
control register of the core 104 as: [0038] VCTRL
<scalar_register>, <control_register> where: [0039]
scalar_register specifies a register containing a value to be loaded as
an outermost loop count or parameter pointer; and [0040]
control_register specifies a destination register, where the
destination register may be the outermost loop end count register
or the vector command parameter pointer register.
[0041] Execution of a vector command may be complete when a total
number of iterations specified in the parameter file for each loop
of the vector command are complete. Because it is advantageous in
some situations to terminate the vector command prior to execution
of all specified loop iterations, the vector coprocessor core 104
provides early termination of a vector command. Early termination
is useful when, for example, the vector command has identified a
condition in the data being processed that makes additional
processing of the data superfluous. Early termination of a vector
command is provided for by execution, in the vector command, of a
loop early exit instruction defined as: [0042] VEXITNZ level, src1
where: [0043] level specifies whether a vector command (i.e., loops
associated with a VLOOP instruction) or an optional outermost loop
is to be exited; and [0044] src1 specifies a register containing a
value that determines whether to perform the early exit.
[0045] Execution of the VEXITNZ instruction causes the vector
coprocessor core 104 to examine the value contained in the register
src1 (e.g., associated with a given SIMD lane), and to schedule
loop termination if the value is non-zero. Other embodiments may
schedule loop termination based on other conditions of the value
(e.g., zero, particular bit set, etc.). If the level parameter
indicates that the vector command is to be exited, then the vector
coprocessor core 104 schedules the nested loops associated with the
vector command to terminate after completion of the current
iteration of the innermost of the nested loops. Thus, if the level parameter indicates that the vector command is to be exited, any optional outermost loop encompassing the vector command is not
exited, and a next iteration of the vector command may be
executed.
[0046] If the level parameter indicates that the optional outermost
loop is to be exited, then, on identification of the terminal state
of src1, the vector coprocessor core 104 schedules the optional
outermost loop to terminate after completion of all remaining
iterations of the nested loops associated with the vector command
encompassed by the optional outermost loop.
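A minimal sketch of this scheduling decision is shown below; the names and flags are assumptions used only to illustrate the two cases, not a description of the actual hardware.

    /* Illustrative sketch of VEXITNZ scheduling; names are assumptions. */
    enum exit_level { EXIT_VECTOR_COMMAND, EXIT_OUTERMOST_LOOP };

    static void vexitnz(enum exit_level level, int src1_value,
                        int *exit_nested_after_current_i4,
                        int *exit_outer_after_nested)
    {
        if (src1_value == 0)
            return;                             /* no early exit scheduled       */
        if (level == EXIT_VECTOR_COMMAND)
            *exit_nested_after_current_i4 = 1;  /* stop the nested loops at the  */
                                                /* end of the current i4 pass    */
        else
            *exit_outer_after_nested = 1;       /* stop the outermost loop once  */
                                                /* the nested loops run to       */
                                                /* completion                    */
    }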
[0047] The load units 314 move data from the memories 110, 112, 114
to the registers of the vector register file 306, and include
routing circuitry that distributes data values retrieved from the
memories 110, 112, 114 to the registers in various patterns that
facilitate efficient processing. Load instructions executed by the
vector coprocessor core 104 specify how the data is to be
distributed to the registers. FIGS. 6A-6F show load data
distributions provided by the load unit 314 of the vector
coprocessor core 104 in accordance with various embodiments. While
the illustrative distributions of FIGS. 6A-6F are directed to loading
data values of a given size (e.g., 16 bits), embodiments of the
load units 314 may apply similar distributions to data values of
other sizes (e.g., 8 bits, 32 bits, etc.). The load units 314 may
move data from memory 110, 112, 114 to the vector registers 306
with instruction specified distribution in a single instruction
cycle.
[0048] FIG. 6A shows a load unit 314 retrieving a data value from
each of eight locations of a memory 110, 112, 114, (e.g., a value
from each of eight banks) via eight adjacent lanes and distributing
the retrieved data values to eight adjacent registers of the vector
register file 306 (e.g., a register corresponding to each SIMD
lane). More generally, the load unit 314 moves a value from memory
via each of a plurality of adjacent lanes, and distributes the data
values to a plurality of adjacent registers of the vector register
file 306 in a single instruction cycle.
[0049] FIG. 6B shows a load unit 314 retrieving a data value from a
single location of a memory 110, 112, 114, and distributing the
retrieved data value to each of eight adjacent registers of the
vector register file 306. More generally, the load unit 314 moves a
value from a single location of a memory 110, 112, 114, and
distributes the data value to a plurality of adjacent registers of
the vector register file 306 in a single instruction cycle. Thus,
the load unit 314 may distribute a single value from memory 110,
112, 114 to each of N ways of the vector coprocessor core 104.
[0050] FIG. 6C shows a load unit 314 retrieving a data value from
each of two locations of a memory 110, 112, 114 via adjacent lanes,
and distributing the retrieved data values to each of four adjacent
pairs of registers of the vector register file 306. More generally,
the load unit 314 moves a value from each of two locations of a
memory 110, 112, 114 via adjacent lanes, and distributes the data
value to a plurality of adjacent pairs of registers of the vector
register file 306 in a single instruction cycle. That is, each
value of the pair of values is written to alternate registers of
the register file 306 (e.g., one value to odd indexed registers and
the other value to even indexed registers). Thus, the load unit 314
may distribute a pair of values from memory 110, 112, 114 to each
of N/2 way pairs of the vector coprocessor core 104.
[0051] FIG. 6D shows a load unit 314 retrieving a data value from
each of eight locations of a memory 110, 112, 114 via alternate
lanes (e.g., from odd indexed locations or even indexed locations),
and distributing the retrieved data values to eight adjacent
registers of the vector register file 306. More generally, the load
unit 314 moves a value from each of a plurality of locations of a
memory 110, 112, 114 via alternate lanes, and distributes the data
values to a plurality of adjacent registers of the vector register
file 306 in a single instruction cycle. Thus, the load unit 314
provides down-sampling of the data stored in memory by a factor of
two.
[0052] FIG. 6E shows a load unit 314 retrieving a data value from
each of four locations of a memory 110, 112, 114 via adjacent
lanes, and distributing each of the retrieved data values to two
adjacent registers of the vector register file 306. More generally,
the load unit 314 moves a value from each of a plurality of locations
of a memory 110, 112, 114 via adjacent lanes, and distributes each
of the data values to two adjacent registers of the vector register
file 306 in a single instruction cycle. Thus, the load unit 314
provides up-sampling of the data stored in memory by a factor of
two.
[0053] FIG. 6F shows a load unit 314 retrieving a data value from
each of sixteen locations of a memory 110, 112, 114 via adjacent
lanes, and distributing each of the retrieved data values to
registers of the vector register file 306 such that data values
retrieved via even numbered lanes are distributed to adjacent
registers and data values retrieved via odd numbered lanes are
distributed to adjacent registers. More generally, the load unit
314 moves a value from each of a plurality of locations of a memory
110, 112, 114 via adjacent lanes, and distributes the data values
in deinterleaved fashion to two sets of adjacent registers of the
vector register file 306. Thus, the load unit 314 provides
deinterleaving of data values across registers M and M+1 where
register M encompasses a given register of each way of the N-way
vector coprocessor core 104 in a single instruction cycle.
[0054] Some embodiments of the load unit 314 also provide custom
distribution. With custom distribution, the load unit 314
distributes one or more data values retrieved from a memory 110,
112, 114 to registers of the vector register file 306 in accordance
with a distribution pattern specified by an instruction loaded
distribution control register or a distribution control structure
retrieved from memory. Load with custom distribution can move data
from memory to the vector register file 306 in a single instruction
cycle. The custom distribution may be arbitrary. Custom
distribution allows the number of values read from memory, the
number of registers of the register file 306 loaded, and the
distribution of data to the registers to be specified. In some
embodiments of the load unit 314, custom distribution allows
loading of data across multiple rows of the vector register file
306 with instruction defined distribution. For example, execution
of a single custom load instruction may cause a load unit 314 to
move values from memory locations 0-7 to registers V[0][0-7] and
move values from memory locations 3-10 to registers V[1][0-7]. Such
data loading may be applied to facilitate motion estimation
searching in a video system.
[0055] Some embodiments of the load unit 314 further provide for
loading with expansion. In loading with expansion, the load unit
314 retrieves a compacted (collated) array from a memory 110, 112,
114 and expands the array such that the elements of the array are
repositioned (e.g., to precompacted locations) in registers of the
vector register file 306. The positioning of each element of the
array is determined by expansion information loaded into an
expansion control register via instruction. For example, given
array {A,B,C} retrieved from memory and expansion control
information {0,0,1,0,1,1,0,0}, the retrieved array may be expanded
to {0,0,A,0,B,C,0,0} and written to registers of the register file
306. Load with expansion moves data from memory to the vector
register file 306 with expansion in a single instruction cycle.
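A behavioral sketch of this expansion, with assumed array types and names, is given below; it only illustrates how the control flags reposition the compacted elements.

    /* Illustrative sketch of load-with-expansion: compacted source values are
     * placed in the lanes whose expansion-control flag is 1; other lanes get 0.
     * E.g., {A,B,C} with flags {0,0,1,0,1,1,0,0} expands to {0,0,A,0,B,C,0,0}. */
    static void load_with_expansion(const int *compacted, const int *flags,
                                    int *dest, int lanes)
    {
        int src = 0;
        for (int lane = 0; lane < lanes; lane++)
            dest[lane] = flags[lane] ? compacted[src++] : 0;
    }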
[0056] FIG. 7 shows a table of data distributions that may be
implemented by the load unit 314 in accordance with various
embodiments. Operation of the load units 314 may be invoked by
execution of a vector load instruction by the vector coprocessor
core 104. The vector load instruction may take the form of: [0057]
VLD<type>_<distribution> base[agen], vreg where: [0058]
type specifies the data size (e.g., byte, half-word, word, etc.);
[0059] distribution specifies the data distribution option
(described above) to be applied; [0060] base specifies a register
containing an address; [0061] agen specifies an address generator
for indexing; and [0062] vreg specifies a vector register to be
loaded.
[0063] The timing of vector load instruction execution may be
determined by the load units 314 (i.e., by hardware) based, for
example, on when the data retrieved by the load is needed by the
processing elements 308, and memory interface availability. In
contrast, the timing of the computations performed by the
processing elements 308 may be determined by the sequence of vector
instructions provided by the scalar processor core 102.
[0064] The store units 316 include routing circuitry that
distributes data values retrieved from the registers of the vector
register file 306 to locations in the memories 110, 112, 114 in
various patterns that facilitate efficient processing. Store
instructions executed by the vector coprocessor core 104 specify
how the data is to be distributed to memory. At least some of the
data distributions provided by the store unit 316 reverse the data
distributions provided by the load units 314. The store units 316
may provide the data distributions described herein for data values
of various lengths (e.g., 32, 16, 8 bit values). The store units
316 move data from the vector registers 306 to memory 110, 112, 114
with instruction specified distribution in a single instruction
cycle.
[0065] A store unit 316 may move data from a plurality of adjacent
registers of the register file 306 to locations in memory 110, 112,
114 via adjacent memory lanes in a single instruction cycle. For
example, data values corresponding to a given register of each of
N-ways of the vector coprocessor core 104 may be moved to memory
via adjacent memory lanes in a single instruction cycle. The store
unit 316 may also move a value from a single given register of the
register file 306 to a given location in memory 110, 112, 114 in a
single instruction cycle.
[0066] The store unit 316 may provide downsampling by a factor of
two by storing data retrieved from alternate registers of the
vector register file 306 (i.e., data from each of alternate ways of
the vector coprocessor core 104) to locations of memory 110, 112,
114 via adjacent memory lanes. Thus, the store unit 316 may provide
an operation that reverses the upsampling by two shown in FIG. 6E.
The store unit 316 provides the movement of data from registers to
memory with downsampling in a single instruction cycle.
[0067] Embodiments of the store unit 316 may provide interleaving
of data values retrieved from registers of the vector register file
306 while moving the data values to memory. The interleaving
reverses the distribution shown in FIG. 6F such that data values
retrieved from a first set of adjacent registers are written to
memory locations via even indexed memory lanes and data values
retrieved from a second set of adjacent registers are interleaved
therewith by writing the data values to memory locations via odd
indexed memory lanes. The store unit 316 provides the movement of
data from registers to memory with interleaving in a single
instruction cycle.
[0068] Embodiments of the store unit 316 may provide for
transposition of data values retrieved from registers of the vector
register file 306 while moving the data values to memory, where,
for example, the data values form a row or column of an array. Data
values corresponding to each way of the vector coprocessor core 104
may be written to memory at an index corresponding to the index of
the register providing the data value times one more than the number of ways. Thus, for 8-way SIMD, reg[0] is written to mem[0], reg[1] is
written to mem[9], reg[2] is written to mem[18], etc. Where the
transposed register values are written to different banks of
memory, the store unit 316 provides movement of N data values from
registers to memory with transposition in a single instruction
cycle.
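A sketch of the transposing store, with assumed names, is shown below; it reflects the mem[k*(N+1)] index pattern of the example above.

    /* Illustrative sketch: lane k of an N-way register is written to
     * mem[k * (N + 1)], e.g. mem[0], mem[9], mem[18], ... for N = 8. */
    static void store_transposed(const int *lane_reg, int *mem, int n_ways)
    {
        for (int k = 0; k < n_ways; k++)
            mem[k * (n_ways + 1)] = lane_reg[k];
    }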
[0069] Embodiments of the store unit 316 may provide collation of
data values retrieved from registers of the vector register file
306 while moving the data values to memory. The collating reverses
the expansion distribution provided by the load units 314. The
collation compacts the data retrieved from adjacent registers of
the vector register file 306, by writing to locations of memory via
adjacent memory lanes those data values identified in collation
control information stored in a register. For example, given
registers containing an array {0,0,A,0,B,C,0,0} and collation
control information {0,0,1,0,1,1,0,0}, the store unit 316 stores
{A,B,C} in memory. The store unit 316 provides the movement of data
from registers to memory with collation in a single instruction
cycle.
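The collating store can be sketched as the inverse of the expansion model given earlier; again, the names are assumptions for illustration.

    /* Illustrative sketch of store-with-collation: only lanes whose collation
     * flag is 1 are written, packed into adjacent memory locations.
     * E.g., {0,0,A,0,B,C,0,0} with flags {0,0,1,0,1,1,0,0} stores {A,B,C}. */
    static void store_with_collation(const int *lane_reg, const int *flags,
                                     int *mem, int lanes)
    {
        int out = 0;
        for (int lane = 0; lane < lanes; lane++)
            if (flags[lane])
                mem[out++] = lane_reg[lane];
    }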
[0070] Embodiments of the store unit 316 may provide data-driven
addressing (DDA) of data values retrieved from registers of the
vector register file 306 while moving the data values to memory.
The data-driven addressing generates a memory address for each of a
plurality of adjacent registers of the vector register file 306
using offset values provided from a DDA control register. The DDA
control register may be a register of the vector register file
corresponding to the way of the register containing the value to be written to memory. Register data values corresponding to each of
the N ways of the vector coprocessor core may be stored to memory
in a single instruction cycle if the DDA control register specified
offsets provide for the data values to be written to different
memory banks. If the DDA control register specified offsets provide
for the data values to be written to memory banks that preclude
simultaneously writing all data values, then the store unit 316 may
write the data values in a plurality of cycles selected to minimize
the number of memory cycles used to write the register values to
memory.
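The addressing side of the data-driven store might be sketched as below; the base, offset, and width choices are assumptions, and the bank-conflict serialization described above is not modeled.

    /* Illustrative sketch of data-driven addressing for a store: each lane's
     * value is written at a per-lane offset taken from the DDA control
     * register. Bank-conflict handling (multi-cycle writes) is omitted. */
    static void store_data_driven(const int *lane_reg, const int *dda_offsets,
                                  int *mem_base, int n_ways)
    {
        for (int k = 0; k < n_ways; k++)
            mem_base[dda_offsets[k]] = lane_reg[k];
    }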
[0071] Embodiments of the store unit 316 may provide for moving
data values retrieved from a plurality of adjacent registers of the
vector register file 306 to locations of the memory via alternate
memory lanes, thus skipping every other memory location. The store
units 316 may write the plurality of data values to alternate
locations in memory 110, 112, 114 in a single instruction
cycle.
[0072] FIG. 8 shows a table of data distributions that may be
implemented by the store unit 316 in accordance with various
embodiments. Operation of the store units 316 may be invoked by
execution of a vector store instruction by the vector coprocessor
core 104. The vector store instruction may take the form of: [0073]
[pred] VST<type>_<distribution>_<wr_loop> vreg,
base[agen], RND_SAT: rnd_sat_param where: [0074] pred specifies a
register containing a condition value that determines whether the
store is performed; [0075] type specifies the data size (e.g.,
byte, half-word, word, etc.); [0076] distribution specifies the
data distribution option to be applied; [0077] wr_loop specifies
the nested loop level where the store is to be performed; [0078]
vreg specifies a vector register to be stored; [0079] base
specifies a register containing an address; [0080] agen specifies
an address generator for indexing; and [0081] RND_SAT:
rnd_sat_param specifies the rounding/saturation to be applied to
the stored data.
[0082] The store units 316 provide selectable rounding and/or
saturation of data values as the values are moved from the vector
registers 306 to memory 110, 112, 114. Application of
rounding/saturation adds no additional cycles to the store
operation. Embodiments may selectably enable or disable rounding.
With regard to saturation, embodiments may selectably perform
saturation according to following options: [0083] NO_SAT: no
saturation performed; [0084] SYMM: signed symmetrical saturation
[-bound, bound] (for unsigned store, [0, bound]); [0085] ASYMM:
signed asymmetrical saturation [-bound-1, bound] (for unsigned
store, [0, bound]), useful for fixed bit width. For example, when
bound=1023, saturate to [-1024, 1023]; [0086] 4PARAM: use 4
parameter registers to specify sat_high_cmp, sat_high_set,
sat_low_cmp, sat_low_set; [0087] SYMM32:
use 2 parameter registers to specify a 32-bit bound, then follow
SYMM above; and [0088] ASYMM32: use 2 parameter registers to
specify a 32-bit bound, then follow ASYMM above.
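As an illustrative sketch only (parameter handling is assumed), the SYMM and ASYMM options for a signed store behave like the following clamp:

    /* Illustrative sketch of SYMM and ASYMM saturation for signed stores:
     * SYMM clamps to [-bound, bound]; ASYMM clamps to [-bound-1, bound],
     * e.g. bound = 1023 gives the fixed-width range [-1024, 1023]. */
    static int saturate_signed(int value, int bound, int asymmetric)
    {
        int low = asymmetric ? -bound - 1 : -bound;
        if (value > bound) return bound;
        if (value < low) return low;
        return value;
    }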
[0089] The timing of vector store instruction execution is
determined by the store units 316 (i.e., by hardware) based, for
example, on availability of the memories 110, 112, 114. In
contrast, the timing of the computations performed by the
processing elements 308 may be determined by the sequence of vector
instructions provided by the scalar processor core 102.
[0090] The processing elements 308 of the vector coprocessor core
104 include logic that accelerates SIMD processing of signal data.
In SIMD processing, each of the N processing lanes (e.g., the
processing element of the lane) is generally isolated from each of
the other processing lanes. Embodiments of the vector coprocessor
core 104 improve SIMD processing efficiency by providing
communication between the processing elements 308 of the SIMD
lanes.
[0091] Some embodiments of the vector coprocessor core 104 include
logic that compares values stored in two registers of the vector
register file 306 associated with each SIMD processing lane. That
is, values of two registers associated with a first lane are
compared, values of two registers associated with a second lane are
compared, etc. The vector coprocessor core 104 packs the result of
the comparison in each lane into a data value, and broadcasts
(i.e., writes) the data value to a destination register associated
with each SIMD lane. Thus, the processing element 308 of each SIMD
lane is provided access to the results of the comparison for all
SIMD lanes. The vector coprocessor core 104 performs the
comparison, packing, and broadcasting as execution of a vector bit
packing instruction, which may be defined as: [0092] VBITPK src1,
src2, dst where: [0093] src1 and src2 specify the registers to be
compared; and [0094] dst specifies the register to which the packed
comparison results are to be written.
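As an illustration only, the packing and broadcasting may be modeled in
C as follows; the lane count, the operand width, and the comparison
shown (src1 greater than or equal to src2) are assumptions, since the
comparison type is not fixed here.

#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of the vector bit packing operation: compare src1 and src2 in
 * each lane, pack one result bit per lane (bit i holds the result of
 * lane i), and broadcast the packed word to every lane of dst. */
static void vbitpk(const int32_t src1[N_LANES], const int32_t src2[N_LANES],
                   int32_t dst[N_LANES])
{
    uint32_t packed = 0;
    for (int lane = 0; lane < N_LANES; lane++)
        if (src1[lane] >= src2[lane])
            packed |= 1u << lane;
    for (int lane = 0; lane < N_LANES; lane++)
        dst[lane] = (int32_t)packed;   /* broadcast to all lanes */
}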
[0095] Some embodiments of the vector coprocessor core 104 include
logic that copies a value of one register to another within each
SIMD lane based on a packed array of flags, where each flag
corresponds to a SIMD lane. Thus, given the packed flag value in a
register, each SIMD lane identifies the flag value corresponding to
the lane (e.g., bit 0 of the register for lane 0, bit 1 of the
register for lane 1, etc.). If the flag value is "1" then a
specified source register of the lane is copied to a specified
destination register of the lane. If the flag value is "0" then
zero is written to the specified destination register of the lane.
The vector coprocessor core 104 performs the unpacking of the flag
value and the register copying as execution of a vector bit
unpacking instruction, which may be defined as: [0096] VBITUNPK
src1, src2, dst where: [0097] src1 specifies the register
containing the packed per lane flag values; [0098] src2 specifies
the register to be copied based on the flag value for the lane; and
[0099] dst specifies the destination register to be written.
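As an illustration only, the unpacking and conditional copy may be
modeled in C as follows; the lane count and operand width are
assumptions.

#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of the vector bit unpacking operation: lane i examines bit i
 * of its packed-flag register; if the flag is 1 the lane's src2 value
 * is copied to dst, otherwise zero is written. */
static void vbitunpk(const uint32_t src1[N_LANES], const int32_t src2[N_LANES],
                     int32_t dst[N_LANES])
{
    for (int lane = 0; lane < N_LANES; lane++) {
        int flag = (int)((src1[lane] >> lane) & 1u);
        dst[lane] = flag ? src2[lane] : 0;
    }
}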
[0100] Some embodiments of the vector coprocessor core 104 include
logic that transposes values of a given register across SIMD lanes.
For example, as shown below, a given register in each lane of a 4-way
vector coprocessor core 104 contains the values 1, 2, 3,
and 4. The vector coprocessor core 104 transposes the bit values
such that bit 0 values of each lane are written to the specified
destination register of lane 0, bit 1 values of each lane are
written to the specified destination register of lane 1, etc.
Source:
  lane   value   bit 0   bit 1   bit 2   bit 3
   0       1       1       0       0       0
   1       2       0       1       0       0
   2       3       1       1       0       0
   3       4       0       0       1       0
Destination:
  lane   value   bit 0   bit 1   bit 2   bit 3
   0       5       1       0       1       0
   1       6       0       1       1       0
   2       8       0       0       0       1
   3       0       0       0       0       0
Thus, the vector coprocessor core 104 transposes the bits of the
source register across SIMD lanes. The vector coprocessor core 104
performs the transposition as execution of a vector bit transpose
instruction, which may be defined as: [0101] VBITTR src1, dst
where: [0102] src1 specifies the register containing the bits to be
transposed; and [0103] dst specifies the register to which the
transposed bits are written.
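As an illustration only, the bit transpose may be modeled in C as
follows; applied to the 4-way source values tabulated above, this
sketch reproduces the destination values shown.

#include <stdint.h>

#define N_LANES 4   /* 4-way example, matching the tables above */

/* Model of the vector bit transpose operation: bit j of destination
 * lane i is bit i of source lane j, so the low N_LANES bits are
 * transposed across SIMD lanes. */
static void vbittr(const uint32_t src[N_LANES], uint32_t dst[N_LANES])
{
    for (int i = 0; i < N_LANES; i++) {
        uint32_t out = 0;
        for (int j = 0; j < N_LANES; j++)
            out |= ((src[j] >> i) & 1u) << j;
        dst[i] = out;
    }
}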
[0104] Some embodiments of the processing element 308 include logic
that provides bit level interleaving and deinterleaving of values
stored in registers of the vector register file 306 corresponding
to the processing element 308. For example, the processing element
308 may provide bit interleaving as shown below. In bit
interleaving the bit values of two specified source registers are
interleaved in a destination register, such that successive bits of
each source register are written to alternate bit locations of the
destination register.
src1=0x25 (0000_0000_0010_0101),
src2=0x11 (0000_0000_0001_0001),
dst=0x923 (0000_0000_0000_0000_0000_1001_0010_0011)
[0105] The processing element 308 performs the interleaving as
execution of a vector bit interleave instruction, which may be
defined as: [0106] VBITI src1, src2, dst where: [0107] src1 and
src2 specify the registers containing the bits to be interleaved;
and [0108] dst specifies the register to which the interleaved bits
are written.
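As an illustration only, the bit interleave may be modeled in C for
16-bit operands as follows; the placement of src1 bits at even
destination positions is an assumption, since the convention is not
fixed unambiguously above.

#include <stdint.h>

/* Model of the vector bit interleave operation: bit i of src1 is
 * placed at destination bit 2i and bit i of src2 at bit 2i+1. */
static uint32_t vbiti(uint16_t src1, uint16_t src2)
{
    uint32_t dst = 0;
    for (int i = 0; i < 16; i++) {
        dst |= (uint32_t)((src1 >> i) & 1u) << (2 * i);
        dst |= (uint32_t)((src2 >> i) & 1u) << (2 * i + 1);
    }
    return dst;
}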
[0109] The processing element 308 executes deinterleaving to
reverse the interleaving operation described above. In
deinterleaving, the processing element 308 writes even indexed bits
of a specified source register to a first destination register and
writes odd indexed bits to a second destination register. For
example:
src=0x923 (0000_0000_0000_0000_0000_1001_0010_0011)
dst1=0x25 (0000_0000_0010_0101),
dst2=0x11 (0000_0000_0001_0001)
[0110] The processing element 308 performs the deinterleaving as
execution of a vector bit deinterleave instruction, which may be
defined as: [0111] VBITDI src, dst1, dst2 where: [0112] src
specifies the register containing the bits to be deinterleaved; and
[0113] dst1 and dst2 specify the registers to which the
deinterleaved bits are written.
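As an illustration only, the bit deinterleave may be modeled in C as
the inverse of the interleave sketch above, with even-indexed bits
routed to the first destination and odd-indexed bits to the second.

#include <stdint.h>

/* Model of the vector bit deinterleave operation for a 32-bit source:
 * even-indexed bits of src go to *dst1, odd-indexed bits to *dst2. */
static void vbitdi(uint32_t src, uint16_t *dst1, uint16_t *dst2)
{
    uint16_t d1 = 0, d2 = 0;
    for (int i = 0; i < 16; i++) {
        d1 |= (uint16_t)(((src >> (2 * i))     & 1u) << i);
        d2 |= (uint16_t)(((src >> (2 * i + 1)) & 1u) << i);
    }
    *dst1 = d1;
    *dst2 = d2;
}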
[0114] Embodiments of the vector coprocessor core 104 may also
interleave register values across SIMD lanes. For example, for
8-way SIMD, the vector coprocessor core 104 may provide single
element interleaving of two specified source registers as:
dst1[0]=src1[0];
dst1[1]=src2[0];
dst1[2]=src1[1];
dst1[3]=src2[1];
dst1[4]=src1[2];
dst1[5]=src2[2];
dst1[6]=src1[3];
dst1[7]=src2[3];
dst2[0]=src1[4];
dst2[1]=src2[4];
dst2[2]=src1[5];
dst2[3]=src2[5];
dst2[4]=src1[6];
dst2[5]=src2[6];
dst2[6]=src1[7];
dst2[7]=src2[7];
where the bracketed index value refers to the SIMD lane. The vector
coprocessor core 104 performs the interleaving as execution of a
vector interleave instruction, which may be defined as: [0115]
VINTRLV src1/dst1, src2/dst2, where src1/dst1 and src2/dst2 specify
source registers to be interleaved and the registers to be
written.
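As an illustration only, this single-element pattern and the 2-element
and 4-element variants described below may all be modeled by one C
sketch parameterized by block size; the 8-lane width and the
scratch-buffer handling of the in-place operands are assumptions.

#include <string.h>
#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of the element interleave across SIMD lanes: src1 and src2
 * are interleaved in blocks of `block` elements (1, 2, or 4); the low
 * half of the interleaved sequence overwrites the first operand and
 * the high half overwrites the second. */
static void vintrlv(int32_t src1_dst1[N_LANES], int32_t src2_dst2[N_LANES],
                    int block)
{
    int32_t out[2 * N_LANES];
    for (int j = 0; j < 2 * N_LANES; j++) {
        int q = j / block;            /* output block index  */
        int r = j % block;            /* offset within block */
        const int32_t *src = (q % 2 == 0) ? src1_dst1 : src2_dst2;
        out[j] = src[(q / 2) * block + r];
    }
    memcpy(src1_dst1, out, sizeof(int32_t) * N_LANES);
    memcpy(src2_dst2, out + N_LANES, sizeof(int32_t) * N_LANES);
}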
[0116] The vector coprocessor core 104 may also interleave register
values across SIMD lanes with 2-element frequency. For example, for
8-way SIMD, the vector coprocessor core 104 may provide 2-element
interleaving of two specified source registers as:
dst1[0]=src1[0];
dst1[1]=src1[1];
dst1[2]=src2[0];
dst1[3]=src2[1];
dst1[4]=src1[2];
dst1[5]=src1[3];
dst1[6]=src2[2];
dst1[7]=src2[3];
dst2[0]=src1[4];
dst2[1]=src1[5];
dst2[2]=src2[4];
dst2[3]=src2[5];
dst2[4]=src1[6];
dst2[5]=src1[7];
dst2[6]=src2[6];
dst2[7]=src2[7];
where the bracketed index value refers to the SIMD lane. The vector
coprocessor core 104 performs the 2-element interleaving as
execution of a vector interleave instruction, which may be defined
as: [0117] VINTRLV2 src1/dst1, src2/dst2, where src1/dst1 and
src2/dst2 specify source registers to be interleaved and the
registers to be written.
[0118] The vector coprocessor core 104 may also interleave register
values across SIMD lanes with 4-element frequency. For example, for
8-way SIMD, the vector coprocessor core 104 may provide 4-element
interleaving of two specified source registers as:
dst1[0]=src1[0];
dst1[1]=src1[1];
dst1[2]=src1[2];
dst1[3]=src1[3];
dst1[4]=src2[0];
dst1[5]=src2[1];
dst1[6]=src2[2];
dst1[7]=src2[3];
dst2[0]=src1[4];
dst2[1]=src1[5];
dst2[2]=src1[6];
dst2[3]=src1[7];
dst2[4]=src2[4];
dst2[5]=src2[5];
dst2[6]=src2[6];
dst2[7]=src2[7];
where the bracketed index value refers to the SIMD lane. The vector
coprocessor core 104 performs the 4-element interleaving as
execution of a vector interleave instruction, which may be defined
as: [0119] VINTRLV4 src1/dst1, src2/dst2, where src1/dst1 and
src2/dst2 specify source registers to be interleaved and the
registers to be written.
[0120] Embodiments of the vector coprocessor core 104 provide
deinterleaving of register values across SIMD lanes. Corresponding
to the single element interleaving described above, the vector
coprocessor core 104 provides single element deinterleaving. For
example, for 8-way SIMD, the vector coprocessor core 104 may
provide single element deinterleaving of two specified source
registers as:
dst1[0]=src1[0];
dst2[0]=src1[1];
dst1[1]=src1[2];
dst2[1]=src1[3];
dst1[2]=src1[4];
dst2[2]=src1[5];
dst1[3]=src1[6];
dst2[3]=src1[7];
dst1[4]=src2[0];
dst2[4]=src2[1];
dst1[5]=src2[2];
dst2[5]=src2[3];
dst1[6]=src2[4];
dst2[6]=src2[5];
dst1[7]=src2[6];
dst2[7]=src2[7];
The vector coprocessor core 104 performs the deinterleaving as
execution of a vector deinterleave instruction, which may be defined
as: [0121] VDINTRLV src1/dst1, src2/dst2, where src1/dst1 and
src2/dst2 specify source registers to be deinterleaved and the
registers to be written.
[0122] Corresponding to the 2-element interleaving described above,
the vector coprocessor core 104 provides 2-element deinterleaving.
For example, for 8-way SIMD, the vector coprocessor core 104 may
provide 2-element deinterleaving of two specified source registers
as:
dst1[0]=src1[0];
dst1[1]=src1[1];
dst2[0]=src1[2];
dst2[1]=src1[3];
dst1[2]=src1[4];
dst1[3]=src1[5];
dst2[2]=src1[6];
dst2[3]=src1[7];
dst1[4]=src2[0];
dst1[5]=src2[1];
dst2[4]=src2[2];
dst2[5]=src2[3];
dst1[6]=src2[4];
dst1[7]=src2[5];
dst2[6]=src2[6];
dst2[7]=src2[7];
The vector coprocessor core 104 performs the 2-element
deinterleaving as execution of a vector deinterleave instruction,
which may be defined as: [0123] VDINTRLV2 src1/dst1, src2/dst2,
where src1/dst1 and src2/dst2 specify source registers to be
deinterleaved and the registers to be written.
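As an illustration only, the single-element and 2-element deinterleave
patterns may be modeled in C as the inverse of the interleave sketch
above, again parameterized by block size.

#include <string.h>
#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of the element deinterleave across SIMD lanes: blocks of
 * `block` elements taken from the concatenation of the two operands
 * are distributed alternately to the two destination registers. */
static void vdintrlv(int32_t src1_dst1[N_LANES], int32_t src2_dst2[N_LANES],
                     int block)
{
    int32_t in[2 * N_LANES], d1[N_LANES], d2[N_LANES];
    memcpy(in, src1_dst1, sizeof(int32_t) * N_LANES);
    memcpy(in + N_LANES, src2_dst2, sizeof(int32_t) * N_LANES);
    for (int j = 0; j < 2 * N_LANES; j++) {
        int q = j / block;
        int r = j % block;
        int32_t *dst = (q % 2 == 0) ? d1 : d2;
        dst[(q / 2) * block + r] = in[j];
    }
    memcpy(src1_dst1, d1, sizeof(int32_t) * N_LANES);
    memcpy(src2_dst2, d2, sizeof(int32_t) * N_LANES);
}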
[0124] The processing elements 308 are configured to conditionally
move data from a first register to a second register based on an
iteration condition of the nested loops being true. The conditional
move is performed in a single instruction cycle. The processing
elements 308 perform the conditional move as execution of a
conditional move instruction, which may be defined as: [0125] VCMOV
cond, src, dst where: [0126] src and dst specify the registers from
which and to which data is to be moved; and [0127] cond specifies
the iteration condition of the nested loops under which the move is
to be performed.
[0128] The loop iteration condition (cond) may specify performing
the move: [0129] on every iteration of the inner-most loop (loop
M); [0130] on the final iteration of the inner-most loop; [0131] in
loop M-1, prior to entering loop M; [0132] in loop M-2, prior to
entering loop M-1; [0133] in loop M-3, prior to entering loop M-2;
[0134] on the final iteration of loops M and M-1; or [0135] on the
final iteration of loops M, M-1, and M-2.
[0136] The processing elements 308 are configured to conditionally
swap data values between two registers in a single instruction
cycle based on a value contained in a specified condition register.
Each processing element 308 executes the swap based on the
condition register associated with the SIMD lane corresponding to
the processing element 308. The processing elements 308 perform the
value swap as execution of a conditional swap instruction, which
may be defined as: [0137] VSWAP cond, src1/dst1, src2/dst2 where:
[0138] src1/dst1 and src2/dst2 specify the registers having values
to be swapped; and [0139] cond specifies the condition register
that controls whether the swap is to be performed. In some
embodiments, the swap is performed if the least significant bit of
the condition register is set.
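As an illustration only, the conditional swap may be modeled in C as
follows; the lane count and operand width are assumptions.

#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of the conditional swap: in each lane the two register values
 * are exchanged when the least significant bit of that lane's
 * condition register is set. */
static void vswap(const uint32_t cond[N_LANES],
                  int32_t a[N_LANES], int32_t b[N_LANES])
{
    for (int lane = 0; lane < N_LANES; lane++) {
        if (cond[lane] & 1u) {
            int32_t t = a[lane];
            a[lane] = b[lane];
            b[lane] = t;
        }
    }
}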
[0140] The processing elements 308 are configured to sort two
values contained in specified registers in a single instruction
cycle. The processing element 308 compares the two values. The
smaller of the values is written to a first register, and the
larger of the two values is written to a second register. The
processing elements 308 perform the value sort as execution of a
sort instruction, which may be defined as: [0141] VSORT2 src1/dst1,
src2/dst2 where src1/dst1 and src2/dst2 specify the registers
having values to be sorted. The smaller of the two values is
written to dst1, and the larger of the two values is written to
dst2.
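As an illustration only, the two-value sort may be modeled per lane in
C as follows.

#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of the two-value sort: per lane, the smaller value ends up in
 * the first register and the larger value in the second. */
static void vsort2(int32_t a[N_LANES], int32_t b[N_LANES])
{
    for (int lane = 0; lane < N_LANES; lane++) {
        if (a[lane] > b[lane]) {
            int32_t t = a[lane];
            a[lane] = b[lane];
            b[lane] = t;
        }
    }
}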
[0142] The processing elements 308 include logic that generates a
result value from values contained in three specified registers. A
processing element 308 may, in a single instruction cycle, add
three register values, logically "and" three register values,
logically "or" three register values, or add two register values
and subtract a third register value. The processing elements 308
perform these operations as execution of instructions, which may
be defined as: [0143] VADD3 src1, src2, src3, dst where: [0144] src1,
src2, and src3 specify the registers containing values to be
summed; and [0145] dst specifies the register to which the
summation result is to be written. [0146] VAND3 src1, src2, src3,
dst where: [0147] src1, src2, and src3 specify the registers
containing values to be logically "and'd"; and [0148] dst specifies
the register to which the "and" result is to be written. [0149]
VOR3 src1, src2, src3, dst where: [0150] src1, src2, and src3
specify the registers containing values to be logically "or'd"; and
[0151] dst specifies the register to which the "or" result is to be
written. [0152] VADIF3 src1, src2, src3, dst where: [0153] src1 and
src3 specify the registers containing values to be summed; [0154]
src2 specifies the register containing a value to be subtracted from
the sum of src1 and src3; and [0155] dst specifies the register to
which the final result is to be written.
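As an illustration only, the per-lane arithmetic of these four
instructions may be modeled in C as follows; the 32-bit operand width
is an assumption.

#include <stdint.h>

/* Per-lane models of the three-operand instructions. */
static int32_t vadd3(int32_t a, int32_t b, int32_t c)  { return a + b + c; }
static int32_t vand3(int32_t a, int32_t b, int32_t c)  { return a & b & c; }
static int32_t vor3(int32_t a, int32_t b, int32_t c)   { return a | b | c; }
/* VADIF3: sum src1 and src3, then subtract src2. */
static int32_t vadif3(int32_t a, int32_t b, int32_t c) { return a + c - b; }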
[0156] The table lookup unit 310 is a processing unit separate from
the processing elements 308 and the histogram unit 312. The table
lookup unit 310 accelerates lookup of data values stored in tables
in the memories 110, 112, 114. The table lookup unit 310 can
perform N lookups (where N is the number of SIMD lanes of the
vector coprocessor core 104) per cycle. The table lookup unit 310
executes the table lookups in a nested loop. The table lookup loop
is defined by a VLOOP instruction that specifies a table lookup
operation. The vector command specified by VLOOP and the associated
vector instructions cause the table lookup unit 310 to retrieve a
specified set of values from one or more tables stored in the
memories 110, 112, 114, and store the retrieved values in the
memories 110, 112, 114 at a different specified location.
[0157] A table lookup vector command initializes address generators
used to access information defining which values are to be
retrieved from a lookup table, used to locate the lookup table in
memory 110, 112, 114, and used to define where the retrieved lookup
table values are to be stored. In each iteration of the table
lookup vector command, the table lookup unit 310 retrieves
information identifying the data to be fetched from the lookup
table, applies the information in conjunction with the lookup table
location to fetch the data, and stores the fetched data to memory
110, 112, 114 for subsequent access by a compute loop executing on
the vector coprocessor core 104. The table lookup unit 310 may
fetch table data from memories 110, 112, 114 based on a vector load
instruction as disclosed herein, and store the fetched data to
memories 110, 112, 114 using a vector store instruction as
disclosed herein. Embodiments of the table lookup unit 310 may also
fetch data from memories 110, 112, 114 using a vector table load
instruction, which may be defined as: [0158]
VTLD<type>_<m>TBL_<n>PT tbl_base[tbl_agen] [V2],
V0, RND_SAT: rnd_sat where: [0159] type specifies the data size
(e.g., byte, half-word, word, etc.); [0160] _<m>TBL specifies
the number of lookup tables to be accessed in parallel; [0161]
_<n>PT specifies the number of data items per lookup table to
be loaded; [0162] tbl_base specifies a lookup table base address;
[0163] tbl_agen specifies an address generator containing offset to
a given table; [0164] V2 specifies a vector register containing a
data item specific offset into the given table; [0165] V0 specifies
a vector register to which the retrieved table data is to be
written; and [0166] RND_SAT: rnd_sat specifies a
rounding/saturation mode to be applied to the table lookup
indices.
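As an illustration only, a parallel table lookup of this kind may be
modeled in C as follows; the element type, the per-table index, the
fetching of consecutive items, and the packing of results across the
vector register are assumptions, and the number of fetched items is
assumed to fit within the SIMD width.

#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of a parallel table lookup: num_par_tbl tables reside at
 * tbl_base plus a per-table offset; num_data_per_lu consecutive
 * entries are fetched from each table starting at that table's index,
 * and the results are packed into the output vector register.
 * num_par_tbl * num_data_per_lu is assumed not to exceed N_LANES. */
static void table_lookup(const int16_t *tbl_base,
                         const uint32_t tbl_offset[],   /* one per table */
                         int num_par_tbl,
                         const uint32_t index[],        /* one per table */
                         int num_data_per_lu,
                         int16_t out[N_LANES])
{
    for (int t = 0; t < num_par_tbl; t++) {
        const int16_t *tbl = tbl_base + tbl_offset[t];
        for (int d = 0; d < num_data_per_lu; d++)
            out[t * num_data_per_lu + d] = tbl[index[t] + d];
    }
}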
[0167] As shown by the vector table load instruction, the table
lookup unit 310 may fetch one or more data values from one or more
tables simultaneously, where each of the multiple tables is located
in a different bank of memories 110, 112, 114. Fetching multiple
values from a table for a given index is advantageous when
interpolation is to be applied to the values (e.g., bilinear or
bicubic interpolation). Some embodiments of the table lookup unit
310 constrain the number of tables accessed and/or data values
accessed in parallel. For example, the product of the number of
tables accessed and the number of data values retrieved per table
may be restricted to be less than the number of SIMD lanes of the
vector coprocessor core 104. In some embodiments, the number of
data values retrieved per table access may be restricted to be 1,
2, or 4. Table 1 below shows allowable table and value number
combinations for some embodiments of an 8-way SIMD vector
coprocessor core 104.
TABLE 1
Table Lookup Constraints
                Number of parallel tables, num_par_tbl
Table type           1       2       4       8
                Num items per lookup, num_data_per_lu
Byte                 1       2       4       8
Half word            1       2       4       8
Word                 1       2       4       8
[0168] The histogram unit 312 is a processing unit separate from
the processing elements 308 and the table lookup unit 310. The
histogram unit 312 accelerates construction of histograms in the
memories 110, 112, 114. The histogram unit 312 provides
construction of normal histograms, in which an addressed histogram
bin entry is incremented by 1, and weighted histograms, in which an
addressed histogram bin entry is incremented by a value provided as
an element in a weight array input. The histogram unit 312 can
perform N histogram bin updates (where N is the number of SIMD
lanes of the vector coprocessor core 104) simultaneously. The
histogram unit 312 executes the histogram bin updates in a nested
loop. The histogram loop is defined by a VLOOP instruction that
specifies a histogram operation. The vector command specified by
VLOOP and the associated vector instructions cause the histogram
unit 312 to retrieve histogram bin values from one or more
histograms stored in the memories 110, 112, 114, increment the
retrieved values in accordance with a predetermined weight, and
store the updated values in the memories 110, 112, 114 at the
locations from which the values were retrieved.
[0169] A histogram vector command initializes the increment value
by which the retrieved histogram bin values are to be increased,
loads an index to a histogram bin, fetches the value from the
histogram bin from memory 110, 112, 114, adds the increment value
to the histogram bin, and stores the updated histogram bin value to
memory 110, 112, 114. Bin values and weights may be signed or
unsigned. Saturation may be applied to the updated histogram bin
value in accordance with the type (e.g., signed/unsigned, data
size, etc.) in conjunction with the store operation. Vector load
instructions, as disclosed herein, may be used to initialize the
increment value and load the bin index. Embodiments of the
histogram unit 312 may fetch histogram bin values from memories
110, 112, 114 in accordance with a histogram load instruction,
which may be defined as: [0170] VHLD<type>_<m>HIST
hist_base[hist_agen] [V2], V0, RND_SAT: rnd_sat where: [0171] type
specifies the data size (e.g., byte, half-word, word, etc.); [0172]
_<m>HIST specifies the number of histograms to be accessed
in parallel; [0173] hist_base specifies a histogram base address;
[0174] hist_agen specifies an address generator containing offset
to a given histogram; [0175] V2 specifies a vector register
containing a histogram bin specific offset into the given
histogram; [0176] V0 specifies a vector register to which the
histogram bin value is to be written; and [0177] RND_SAT: rnd_sat
specifies a rounding/saturation mode to be applied to the histogram
indices.
[0178] Embodiments of the histogram unit 312 may store updated
histogram bin values to memories 110, 112, 114 in accordance with a
histogram store instruction, which may be defined as: [0179]
VHST<type>_<m>HIST V0, hist_base[hist_agen][V2] where:
[0180] type specifies the data size (e.g., byte, half-word, word,
etc.); [0181] _<m>HIST specifies the number of histograms to
be accessed in parallel; [0182] V0 specifies a vector register
containing the histogram bin value to be written to memory; [0183]
hist_base specifies a histogram base address; [0184] hist_agen
specifies an address generator containing offset to a given
histogram; and [0185] V2 specifies a vector register containing a
histogram bin specific offset into the given histogram.
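As an illustration only, one iteration of a weighted histogram update
(load bin, add weight, store bin) may be modeled in C as follows; the
lane count, the bin data type, and the sequential handling of lanes
that address the same bin are assumptions.

#include <stdint.h>

#define N_LANES 8   /* assumed SIMD width */

/* Model of one histogram update iteration: each lane loads the bin
 * addressed by its index, adds its weight (1 for a normal histogram),
 * and stores the result back. */
static void histogram_update(uint16_t *hist_base,
                             const uint32_t bin_index[N_LANES],
                             const int16_t weight[N_LANES])
{
    for (int lane = 0; lane < N_LANES; lane++)
        hist_base[bin_index[lane]] =
            (uint16_t)(hist_base[bin_index[lane]] + weight[lane]);
}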
[0186] Embodiments of the processor 100 may be applied to advantage
in any number of devices and/or systems that employ real-time data
processing. Embodiments may be particularly well suited for use in
devices that employ image and/or vision processing, such as
consumer devices that include imaging systems. Such devices
may include an image sensor for acquiring image data and/or a
display device for displaying acquired and/or processed image data.
For example, embodiments of the processor 100 may be included in
mobile telephones, tablet computers, and other mobile devices to
provide image processing while reducing overall power
consumption.
[0187] The above discussion is meant to be illustrative of the
principles and various embodiments of the present invention.
Numerous variations and modifications will become apparent to those
skilled in the art once the above disclosure is fully appreciated.
It is intended that the following claims be interpreted to embrace
all such variations and modifications.
* * * * *