U.S. patent application number 13/763,974 was filed with the patent office on 2013-02-11 and published on 2013-08-15 as publication number 20130207977, for a method and apparatus for rendering translucent and opaque objects.
This patent application is currently assigned to Imagination Technologies Limited. The applicant listed for this patent is Imagination Technologies Limited. The invention is credited to John W. Howson.
United States Patent Application 20130207977
Kind Code: A1
Inventor: Howson; John W.
Publication Date: August 15, 2013
Family ID: 27772710
Application Number: 13/763,974

Method and Apparatus for Rendering Translucent and Opaque Objects
Abstract
A method and an apparatus are provided for rendering
three-dimensional computer graphic images which include both
translucent and opaque objects. A list of objects which may be
visible in the image is determined, and for each pixel a
determination is made as to whether or not an object in the list
may be visible at that pixel. A data tag is stored for a
translucent object determined to be visible at the pixel, and the
data tag and object data are passed to a texturing and shading unit
when the translucent object is determined to be overwriting a
location in the tag buffer already occupied by another data
tag.
Inventors: Howson; John W. (St. Albans, GB)
Applicant: Imagination Technologies Limited (Country: US)
Assignee: Imagination Technologies Limited, Kings Langley, GB
Family ID: 27772710
Appl. No.: 13/763,974
Filed: February 11, 2013
Related U.S. Patent Documents

    Application Number    Filing Date     Patent Number
    11/787,893            Apr 18, 2007    8,446,409      (parent of 13/763,974)
    10/795,561            Mar 5, 2004                    (parent of 11/787,893)
Current U.S. Class: 345/422
Current CPC Class: G06T 2210/62 (20130101); G06T 15/405 (20130101); G06T 15/40 (20130101)
Class at Publication: 345/422
International Class: G06T 15/40 (20060101) G06T015/40
Foreign Application Data

    Date            Code    Application Number
    Jul 25, 2003    GB      0317479.4
Claims
1. A method for rendering an image from 3-dimensional object
descriptions, comprising: determining, for a set of pixels in the
image, a list of objects that may be visible in a pixel of the set
of pixels; scan converting an object from the list of objects to
produce depth and position information for the object at the pixel;
using the depth and position information to determine whether or
not the object is visible at the pixel; storing a tag indicating
the identity of the object, if the object is determined to be
visible at the pixel, in a tag buffer of a plurality of tag
buffers; and if the tag is overwriting a location in the tag buffer
already occupied by another tag, then passing a tag already
occupying a location in the plurality of tag buffers to a texturing
and shading unit, and passing data for the object indicated by the
tag, that was passed to the texturing and shading unit, to the
texturing and shading unit.
2. The method for rendering an image from 3-dimensional object
descriptions of claim 1, further comprising determining whether the
object is translucent and before storing the tag, determining
whether a candidate tag buffer location already includes a valid
tag, and if so, then cycling to a subsequent tag buffer and
repeating the determining until finding a candidate tag buffer
location that has no valid tag for the pixel and then storing the
tag in that tag buffer, and otherwise performing the overwriting of
the location with the valid tag.
3. The method for rendering an image from 3-dimensional object
descriptions of claim 1, further comprising determining whether the
object is opaque and responsively invalidating tags stored in other
tag buffers of the plurality of tag buffers pertaining to that
pixel so that the tag is not overwriting a location in the tag
buffer that stores a valid tag.
4. The method for rendering an image from 3-dimensional object
descriptions of claim 1, further comprising cycling the plurality
of tag buffers when storing the tag.
5. The method for rendering an image from 3-dimensional object
descriptions of claim 4, wherein the tag that is passed to the
texturing and shading unit is from a first tag buffer in the
cycle.
6. A method of image rasterization from a 3-D scene description,
comprising: writing a tag to a location in a current tag buffer of
a plurality of tag buffers, responsive to determining that an
opaque primitive is visible at a pixel, the tag indicating which
primitive is visible at that pixel; processing a translucent
primitive by determining whether the translucent primitive is
visible, pixel by pixel, for a set of pixels, and if the
translucent primitive is visible, then determining if the location
in the current tag buffer is occupied, and if the location in the
current tag buffer is occupied then iteratively moving to a next
tag buffer, until identifying a tag buffer in which a tag
indicating the translucent primitive can be written or determining
that no tag buffer of the plurality of tag buffers can receive the
tag indicating the translucent primitive, and then responsively
spawning a pass to process a primitive identified by a tag in the
plurality of tag buffers.
7. A system for rendering an image from a 3-D scene description,
comprising: a source of a list of primitives, that are within one
or more pixels of the image; a parameter fetch unit operable to
fetch parameters for a primitive in the list of primitives; a scan
converter coupled to receive parameters for the primitive from the
parameter fetch unit and to produce depth and position of each
primitive for each pixel of the one or more pixels; a hidden
surface removal unit coupled with the scan converter and operable
to remove the primitive from further processing with respect to
each of the pixels, if the primitive is not visible at that pixel,
and otherwise to retain the primitive, based on the depth and the
position of the primitive at each of the pixels; a Texture and
Shading Unit (TSU); a plurality of tag buffers, wherein the tag
buffers store tags that each are a piece of data that indicates a
particular primitive, and the plurality of tag buffers are operable
to be cycled to provide a current tag buffer; a pass spawn
controller (PSC) coupled to receive rasterization state from the
parameter fetch unit and coupled with the plurality of tag buffers,
the PSC being operable to determine whether the primitive is
translucent or opaque, if the primitive is opaque then to store a
tag indicating the primitive in the current tag buffer of the
plurality of tag buffers, and if the primitive is translucent then
to cycle the plurality of tag buffers and check for presence of a
valid tag, until reaching a tag buffer without a valid tag, and
then storing a tag identifying the primitive in that tag buffer, or
after checking each tag buffer and finding that all tag buffers
include a valid tag, then flushing at least one tag from the
plurality of tag buffers to the TSU and passing data for the object
indicated by the tag to the TSU.
8. The system for rendering an image from a 3-D scene description
of claim 7, wherein the tag that is flushed is from a first tag
buffer in the cycle.
9. The system for rendering an image from a 3-D scene description
of claim 7, wherein each of the TSU and the PSC operate on multiple
pixels in parallel.
10. The system for rendering an image from a 3-D scene description
of claim 7, wherein the one or more pixels are from a screen-space
tile, and the hidden surface removal unit operates on all of the
pixels of the screen-space tile before moving to a subsequent
screen-space tile.
11. The system for rendering an image from a 3-D scene description
of claim 7, wherein the hidden surface removal unit is operable to
determine whether a punch-through object is to be treated as being
opaque, and responsively to pass data for any pixel where the
punch-through object is visible to the TSU, and wherein the TSU is
operable to feedback data to the hidden surface removal unit.
12. The system for rendering an image from a 3-D scene description
of claim 11, wherein the hidden surface removal unit is operable to
update depth buffer locations for pixels identified in the feedback
data from the TSU and invalidate locations in the plurality of tag
buffers relating to those pixels.
13. The system for rendering an image from a 3-D scene description
of claim 7, wherein the hidden surface removal unit is operable to
determine whether a punch-through object is to be treated as being
at least partially translucent and to flush all tag buffer
locations relating to pixels where the punch-through object is
visible to the TSU.
14. The system for rendering an image from a 3-D scene description
of claim 12, wherein the TSU is operable to split state for the
punch-through object between state required to determine
punch-through state and that required to render the final
image.
15. The method of image rasterization from a 3-D scene description
of claim 6, wherein the spawning of the pass to process the
primitive identified by the tag in the plurality of tag buffers
comprises flushing a section of the current tag buffer in which the
tag is to be written.
16. The method of image rasterization from a 3-D scene description
of claim 6, wherein the spawning of the pass to process the
primitive identified by the tag in the plurality of tag buffers
comprises flushing only the tag in a location of the current tag
buffer in which the tag is to be written, and writing the tag to
the flushed location in the current tag buffer.
17. The method of image rasterization from a 3-D scene description
of claim 6, wherein the set of pixels are the pixels of a
screen-space tile, and further comprising flushing remaining valid
tags in the plurality of tag buffers in response to determining
that there are no more triangles that may be visible in the
screen-space tile.
18. The method of image rasterization from a 3-D scene description
of claim 6, further comprising determining whether a punch-through
object is to be treated as being opaque, and responsively to pass
data for any pixel where the punch-through object is visible to a
texturing and shading unit.
19. The method of image rasterization from a 3-D scene description
of claim 18, further comprising updating depth buffer locations for
pixels identified in feedback data from the texturing and shading
unit, and invalidating locations in the plurality of tag buffers
relating to those pixels.
20. The method of image rasterization from a 3-D scene description
of claim 6, further comprising determining whether a punch-through
object is to be treated as being at least partially translucent and
responsively flushing all tag buffer locations relating to pixels
where the punch-through object is visible to the TSU.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of 11/787,893, filed on
Apr. 18, 2007, which is a continuation of 10/795,561, filed on Mar.
5, 2004, now abandoned, which claims priority from GB 0317479.4,
filed on Jul. 25, 2003. All of these applications are incorporated
herein by reference in their entireties for all purposes.
[0002] This invention relates to a 3-dimensional computer graphics
system and in particular to methods and apparatus which reduce the
number of times the data assigned to each pixel have to be modified
when rendering an image in such a system.
[0003] Tile based rendering systems are known. These break down an
image to be rendered into a plurality of rectangular blocks or
tiles. The way in which this is done and the subsequent texturing
and shading performed is shown schematically in FIG. 1. This shows
a geometry processing unit 2 which receives the image data from an
application and transforms it into screen space using a well-known
method. The data is then supplied to a tiling unit 4, which inserts
the screen space geometry into object lists for a set of defined
rectangular regions, or tiles, 6. Each list contains primitives
(surfaces) that exist wholly or partially in a sub-region of a
screen (i.e. a tile). A list exists for every tile on the screen,
although it should be borne in mind that some lists may have no
data in them.
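As an illustration of this binning step, the sketch below inserts each screen-space triangle into the object list of every tile its bounding box overlaps. The Triangle type, the tile size and the conservative bounding-box test are assumptions made for the example, not details taken from the patent.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical type for this sketch; the patent does not define it.
    struct Triangle { float x[3]; float y[3]; };   // 2D screen-space vertices

    // Insert each triangle into the object list of every tile its bounding
    // box touches. A real tiler may test overlap exactly; a bounding-box
    // test is a simple conservative approximation.
    std::vector<std::vector<uint32_t>> buildTileLists(
        const std::vector<Triangle>& tris,
        int screenW, int screenH, int tileSize)
    {
        const int tilesX = (screenW + tileSize - 1) / tileSize;
        const int tilesY = (screenH + tileSize - 1) / tileSize;
        std::vector<std::vector<uint32_t>> lists(tilesX * tilesY);

        for (uint32_t i = 0; i < tris.size(); ++i) {
            const Triangle& t = tris[i];
            int x0 = (int)std::min({t.x[0], t.x[1], t.x[2]}) / tileSize;
            int x1 = (int)std::max({t.x[0], t.x[1], t.x[2]}) / tileSize;
            int y0 = (int)std::min({t.y[0], t.y[1], t.y[2]}) / tileSize;
            int y1 = (int)std::max({t.y[0], t.y[1], t.y[2]}) / tileSize;
            x0 = std::max(x0, 0); x1 = std::min(x1, tilesX - 1);
            y0 = std::max(y0, 0); y1 = std::min(y1, tilesY - 1);
            for (int ty = y0; ty <= y1; ++ty)
                for (int tx = x0; tx <= x1; ++tx)
                    lists[ty * tilesX + tx].push_back(i);  // per-tile object list
        }
        return lists;
    }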
[0004] Data then passes tile by tile to a hidden surface removal
unit 8 (HSR) and from there to a texturing and shading unit 10
(TSU). The HSR unit processes each primitive in the tile and passes
to the TSU only data about visible pixels.
[0005] Many images comprise both opaque and translucent objects. In
order to correctly render such an image, the HSR unit must pass
"layers" of pixels which need to be shaded to the TSU. This is
because more than one object will contribute to the image data to
be applied to a particular pixel. For example the view from the
inside of a building looking through a pane of dirty glass requires
both the geometry visible through the glass, and then the pane of
glass itself to be passed to the TSU. This process is referred to
as "pass spawning".
[0006] Typically, a tile based rendering device of the type shown
in FIG. 1 will use a buffer to hold a tag for the front most object
for each pixel in the tile currently being processed. A pass is
typically spawned whenever the HSR unit 8 processes a translucent
object, before the visibility test is performed. This results in
all currently visible tags stored in the buffer, followed by the
visible pixels of the translucent object, being sent to the TSU,
i.e. more than one set of pixel data being passed for each
pixel.
[0007] The flow diagram of FIG. 2 illustrates this approach. In
this, a determination is made at 12 as to whether or not a
primitive being processed is opaque. If it is not, then the buffer
of tags is sent at 14 to the TSU 10. All visible tags for the
non-opaque primitives are then also passed to the TSU at 15, and
the HSR unit 8 then moves on to the next primitive at 16. If the
primitive is determined to be opaque at step 12, then its tags are
written into the buffer at 16 before moving on to the next primitive
at 18. The tag is a piece of data indicating which object is
visible at a pixel. More than one tag per pixel is required when
translucent objects cover opaque objects.
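A minimal sketch of this known rule follows, assuming a one-tag-per-pixel buffer, a sentinel value for empty locations and a stub standing in for the TSU interface; none of these names come from the patent itself.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    constexpr uint32_t INVALID_TAG = 0xFFFFFFFFu;   // assumed sentinel value

    // Stub standing in for the texturing and shading unit interface.
    void sendToTSU(uint32_t tag, size_t pixel) {
        std::printf("TSU: tag %u at pixel %zu\n", (unsigned)tag, pixel);
    }

    // FIG. 2 rule: a non-opaque primitive first flushes every tag held in
    // the buffer (step 14), then its own visible pixels are sent (step 15);
    // an opaque primitive simply writes its tags into the buffer (step 16).
    void processPrimitive(bool opaque, uint32_t tag,
                          const std::vector<bool>& visible,  // per-pixel HSR result
                          std::vector<uint32_t>& tagBuffer)
    {
        if (!opaque) {
            for (size_t p = 0; p < tagBuffer.size(); ++p)
                if (tagBuffer[p] != INVALID_TAG) {
                    sendToTSU(tagBuffer[p], p);
                    tagBuffer[p] = INVALID_TAG;
                }
            for (size_t p = 0; p < visible.size(); ++p)
                if (visible[p]) sendToTSU(tag, p);
        } else {
            for (size_t p = 0; p < visible.size(); ++p)
                if (visible[p]) tagBuffer[p] = tag;
        }
    }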
[0008] Use of the approach above means that opaque pixels that are
not covered by translucent pixels, and are potentially obscured by
further opaque objects, may be passed to the TSU unnecessarily. In
addition to this, a translucent object is passed to the TSU even if
an opaque object subsequently obscures it.
SUMMARY OF THE INVENTION
[0009] The presence of the tag buffer in the above description
enables modifications to be made to the pass spawning rules (also
described above) that allow the removal of some unnecessary
passes.
[0010] In an embodiment of the present invention, rather than
spawning a pass at the point a translucent object is seen, the
translucent tags are rendered into the tag buffer in the same
manner as opaque objects, and a pass is only spawned at the point a
visible translucent pixel is required to be written to a location
that is already occupied. Further, as the translucent object tags
are now being rendered into the tag buffer, there is no need to pass
them immediately to the TSU. Therefore, in the event of them being
subsequently obscured, they may be discarded.
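The modified rule can be sketched as follows for a single tag buffer, under the same assumed sentinel-value representation; the flush helper here simply stands in for sending the valid tags to the TSU.

    #include <cstdint>
    #include <vector>

    constexpr uint32_t INVALID_TAG = 0xFFFFFFFFu;   // assumed sentinel value

    // Stand-in for spawning a pass: sends every valid tag to the TSU and
    // marks the locations empty again.
    void flushTagBuffer(std::vector<uint32_t>& tagBuffer) {
        for (uint32_t& t : tagBuffer)
            if (t != INVALID_TAG) { /* send tag to TSU here */ t = INVALID_TAG; }
    }

    // Paragraph [0010]: write translucent tags into the tag buffer as if
    // they were opaque; spawn a pass only on a write to an occupied slot.
    void writeTranslucentTag(uint32_t tag, size_t pixel,
                             std::vector<uint32_t>& tagBuffer)
    {
        if (tagBuffer[pixel] != INVALID_TAG)
            flushTagBuffer(tagBuffer);   // pass spawned only on conflict
        tagBuffer[pixel] = tag;          // if later obscured, simply overwritten
    }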
[0011] The invention is defined with more precision in the appended
claims to which reference should now be made.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Preferred embodiments of the invention will now be described
in detail by way of example with reference to the accompanying
drawings in which:
[0013] FIG. 1 shows a block diagram of a tile based rendering
system discussed above;
[0014] FIG. 2 shows a flow chart of a known pass spawning
system;
[0015] FIG. 3 (a)-(c) shows a sequence of three triangles being
rendered using modified pass spawning rules embodying the
invention;
[0016] FIG. 4 is a flow diagram of the embodiment of the
invention;
[0017] FIG. 5 is an enhancement to FIG. 4; and
[0018] FIG. 6 is a block diagram of an embodiment of the
invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
[0019] In FIG. 3 a sequence of three triangles is shown being
rendered using a set of modified pass spawning rules. In FIG. 3a,
an opaque triangle T1 is rendered into a tag buffer. If a
translucent triangle T2 is rendered on top of the visible opaque
pixels, then the HSR unit must pass those pixels to the TSU before
it can continue to rasterise the translucent tags. The opaque
triangle is encountered in scan line order as T2 is rasterised;
thus all the previously rasterised pixels of T1 and the pixels of
T2 are passed to the TSU. This leaves the remainder of the
translucent object as shown in FIG. 3b. An opaque triangle T3 is
then rendered into the tag buffer as shown in FIG. 3c. This
triangle T3 obscures all of T1 and T2.
[0020] It will be seen that tags from T1 and T2 have been passed
unnecessarily to the TSU in spite of the improved rules. The
triangles passed unnecessarily are shown by dotted lines in FIG.
3c. In addition to this, if T3 had been translucent and was
subsequently obscured by another object, all tags from T2 would be
passed to the TSU.
[0021] If more than one tag buffer is provided then this can
further reduce the number of unnecessary passes. In a system with N
tag buffers a minimum of N obscured passes can be removed. This is
achieved by switching to a new tag buffer each time an attempt is
made to write a translucent pixel to an occupied location, until
all tag buffers have been written to. At this point, the first tag
buffer is passed to the TSU.
[0022] If an opaque pixel is written to the current tag buffer this
will result in the same pixel being invalidated in all other
buffers, thus removing any obscured pixels. This can be done as any
pixel that is written to by an opaque primitive will only be
composed of the data generated by that primitive. The flow chart of
FIG. 4 illustrates how the pass spawning rules behave in this
case.
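A sketch of the opaque part of these rules with a plurality of tag buffers follows; the buffer count, the sentinel value and the structure are implementation assumptions for the example.

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr uint32_t INVALID_TAG = 0xFFFFFFFFu;   // assumed sentinel value
    constexpr int NUM_BUFFERS = 4;                  // N is an implementation choice

    struct TagBuffers {
        std::array<std::vector<uint32_t>, NUM_BUFFERS> buf;  // one tag per pixel
        int current = 0;                                     // cycled circularly
        explicit TagBuffers(size_t pixels) {
            for (auto& b : buf) b.assign(pixels, INVALID_TAG);
        }
    };

    // Steps 104/106 of FIG. 4: an opaque pixel's tag goes into the current
    // buffer, and the same location is invalidated in every other buffer,
    // discarding any obscured translucent tags for that pixel.
    void writeOpaque(TagBuffers& tb, size_t pixel, uint32_t tag) {
        tb.buf[tb.current][pixel] = tag;
        for (int b = 0; b < NUM_BUFFERS; ++b)
            if (b != tb.current) tb.buf[b][pixel] = INVALID_TAG;
    }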
[0023] After the start at 20 of processing an object, as in FIG. 2,
a determination is made as to the type of the object at 22 and 24.
If an object is determined to be opaque at 22, then the system
executes the opaque processing path at 100. If not, and the object
is determined to be punch through at 24, then the punch through
processing path is executed at 300; otherwise translucent processing
is executed at 200.
[0024] If the object is opaque it is processed from 100 on a pixel
by pixel basis. For each pixel within the object the system first
determines its visibility at 102. If a pixel is not visible then
the system skips to 108 to determine if there are any more pixels
left in the object. If a pixel is visible then its tag is written
into the current tag buffer at 104, and the tags in all other
buffers are cleared at 106. The system then determines at 108 if any
more pixels are left to process from the current object; if there
are, it moves to the next pixel at 110 and continues processing from
102. If there are no more pixels to process in the object then the
system moves to the next object at 112 and returns to 20.
[0025] 3D computer graphics often use what are termed "punch
through" objects. These objects use a back end test to determine if
a pixel should be drawn. For example, before a pixel is written to
the frame buffer its alpha value can be compared against a
reference using one of several compare modes. If the result of this
comparison is true then the pixel is determined to be visible; if
false then it is not. Pixels that are determined not to be visible
do not update the depth buffer. It should be noted that this test
can be applied to both opaque and partially translucent objects.
This technique is common in 3D game applications because it allows
complex scenes such as forests to be modelled using relatively few
polygons, and because a traditional Z buffer can correctly render
punch through translucency irrespective of the order in which
polygons are presented to the system.
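The back end test can be illustrated as a simple alpha comparison; the compare modes below are the usual alpha-test set found in classic fixed-function pipelines and are offered as an illustration rather than as the patent's own definition.

    // One common set of alpha-test compare modes, shown for illustration.
    enum class AlphaCompare {
        Never, Less, Equal, LessEqual, Greater, NotEqual, GreaterEqual, Always
    };

    // Returns true if the pixel passes the punch through (back end) test
    // and should be drawn; failing pixels must not update the depth buffer.
    bool punchThroughTest(float alpha, float reference, AlphaCompare mode) {
        switch (mode) {
            case AlphaCompare::Never:        return false;
            case AlphaCompare::Less:         return alpha <  reference;
            case AlphaCompare::Equal:        return alpha == reference;
            case AlphaCompare::LessEqual:    return alpha <= reference;
            case AlphaCompare::Greater:      return alpha >  reference;
            case AlphaCompare::NotEqual:     return alpha != reference;
            case AlphaCompare::GreaterEqual: return alpha >= reference;
            case AlphaCompare::Always:       return true;
        }
        return false;
    }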
[0026] As the update of the depth buffer is dependent on the
results of the back end test, tile based rendering (TBR) systems
must handle these objects in a special manner. Existing TBR systems
will first spawn a pass as if the punch through objects were
transparent; the punch through object is then passed to the TSU,
which feeds back visibility information to the HSR unit, which
updates the depth buffer as necessary.
[0027] The handling of punch through objects can be optimised. If
punch through is applied to an opaque object it will either be
fully visible or not visible at all; this allows such objects to be
treated as opaque with respect to the flushing of the tag buffers.
Specifically, at the point an opaque punch through object is
received, any pixels that are determined to be visible by the hidden
surface removal (HSR) unit are passed directly to the TSU. The TSU
then feeds back pixels that are valid to the HSR unit which will,
for the fed back pixels, update the depth buffer as appropriate and
invalidate the same pixels in the tag buffer. The latter is possible
as the act of passing the punch through pixels to the TSU means
that they have already been drawn, so any valid tags at the same
locations in the tag buffer are no longer needed. If multiple tag
buffers are present then the tags are invalidated across all
buffers.
[0028] Partially transparent punch through data requires all tag
buffers up to and including the current tag buffer to be flushed to
the TSU. This is because the transparency may need to be blended
with any overlapped tags currently contained in the tag buffer.
Alternatively, the punch through tags may be passed to the TSU with
their states modified such that they will not update the frame
buffer image and their tags are written to the next buffer as
dictated by the punch through test. This allows objects that lie
under the punch through object that are subsequently obscured not
to be rendered. However, this is at the cost of potentially
rendering the punch through object twice, once to determine pixel
visibility and once to render the final image if it is not
obscured. The impact of rendering the punch through object twice
could be reduced by splitting the TSU pixel's shading state into
that required to determine punch through state and that required to
render the final image.
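One way to picture the suggested split is as two separate per-pixel state records, so that the visibility-determining pass fetches only the first; the fields below are purely illustrative assumptions, not a definition from the patent.

    // Hypothetical split of the per-pixel shading state suggested in
    // paragraph [0028]; the exact contents are an implementation choice.
    struct PunchThroughState {      // fetched for the visibility pass
        int   alphaTexture;         // texture supplying the alpha value
        float alphaReference;       // reference for the back end test
        int   compareMode;          // one of the alpha compare modes
    };

    struct FinalShadeState {        // fetched only if the pixel survives
        int   colourTextures[4];    // full texture bindings
        int   blendMode;            // frame buffer blend configuration
        float shadingParams[8];     // lighting/shader constants, etc.
    };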
[0029] As far as the flow diagram of FIG. 4 is concerned, where the
object is determined to be punch through at 24, the system jumps
to Process Punch Through 300. The object is then processed pixel by
pixel, first determining visibility at 302. If a pixel is not
visible the system skips to 324. If a pixel is visible, a further
test is made at 304 to determine if the object pixel is also
transparent, and if this is determined to be the first visible
pixel within the object at 306 then all tag buffers up to and
including the current buffer are flushed at 308, i.e. sent to the
TSU. The test for translucency is performed per pixel so that the
tag buffers do not get flushed in the event of the object not
being visible. The pixels for the object itself are then sent to
the TSU at 310, where texturing and shading are applied at 312 using
well-known methods. A punch through test is then applied at 314 and
the validity of the pixel determined at 316. If the pixel is found
to be invalid at 316, e.g. it fails the alpha test, the system
skips to 324. If the pixel is valid, its coordinates are passed back
to the HSR unit at 318, which will then store the pixel's depth
value to the depth buffer at 320 and invalidate the corresponding
tag in all tag buffers at 322. The system then determines if there
are any more pixels to be processed in the object at 324; if there
are, it moves to the next pixel at 326 and jumps back to 302 to
continue processing. If no more pixels are present in the object,
the system moves to the next object and returns to 20.
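The opaque punch-through path just described can be sketched as follows; the per-pixel inputs, the shading stub and the buffer layout are assumptions for the example.

    #include <cstdint>
    #include <vector>

    constexpr uint32_t INVALID_TAG = 0xFFFFFFFFu;   // assumed sentinel value

    // Stand-in for the TSU: shades the pixel and returns the result of the
    // back end (punch through) test.
    bool shadeAndTest(size_t /*pixel*/) { return true; }

    // Steps 302-322 of FIG. 4, sketched for an opaque punch-through object:
    // visible pixels go straight to the TSU; pixels that pass the back end
    // test are fed back so the depth buffer is updated and the matching
    // tag is invalidated in every tag buffer.
    void processOpaquePunchThrough(
        const std::vector<bool>&  visible,     // per-pixel HSR result
        const std::vector<float>& pixelDepth,  // object's depth per pixel
        std::vector<float>&       depthBuffer,
        std::vector<std::vector<uint32_t>>& tagBuffers)
    {
        for (size_t p = 0; p < visible.size(); ++p) {
            if (!visible[p]) continue;          // step 302
            if (!shadeAndTest(p)) continue;     // steps 310-316
            depthBuffer[p] = pixelDepth[p];     // step 320: feedback to HSR
            for (auto& tb : tagBuffers)         // step 322: invalidate tags
                tb[p] = INVALID_TAG;
        }
    }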
[0030] When the punch through determination at 24 is negative, the
object must be translucent and the system jumps to Process
Translucent 200. The object is then processed pixel by pixel, first
determining visibility at 202. If a pixel is not visible the system
skips to 222. If the pixel is visible, the system determines whether
the location in the current tag buffer is occupied at 204. If the
current tag buffer location is determined to be occupied, the system
will move to the next tag buffer at 206. A determination is then
made as to whether or not there are any valid tags in that buffer at
208 and, if there are, they are sent to the TSU at 210 and the tag
buffer is reset at 212. If there are no valid tags then a tag is
written at 220 and the system goes on to the next pixel or object
as described for opaque objects.
[0031] As opaque objects invalidate tags across all buffers, the
pass spawning rules can be further extended such that passes only
spawn when no tag buffer can be found into which a translucent
pixel can be written. FIG. 5 illustrates these updated rules which
can be used to replace the portion of the flow diagram of FIG. 4
surrounded by a dotted line. Instead of the determination at 208 as
to whether there are any valid tags in the buffer, a determination
is made as to whether or not this buffer has been looked at before.
If it has, then the flow moves onto 210 and 212. If it has not,
flow passes back to 204 where a determination is made as to whether
or not the tag location is occupied. If it is, then the diagram
moves to the next tag buffer at 240 before again determining
whether or not that buffer has been looked at, at 242.
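These extended rules amount to a cyclic search over the tag buffers, with a pass spawned only after every buffer has been examined; a sketch under assumed types follows, flushing the first tag buffer in the cycle as in claims 5 and 8.

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr uint32_t INVALID_TAG = 0xFFFFFFFFu;   // assumed sentinel value
    constexpr int NUM_BUFFERS = 4;                  // N is an implementation choice

    struct TagBuffers {
        std::array<std::vector<uint32_t>, NUM_BUFFERS> buf;
        int current = 0;
    };

    // Stand-in for steps 210/212: send valid tags to the TSU, then reset.
    void flushToTSU(std::vector<uint32_t>& b) {
        for (uint32_t& t : b)
            if (t != INVALID_TAG) { /* send to TSU here */ t = INVALID_TAG; }
    }

    // FIG. 5: try each buffer in cyclic order (204/240); give up only after
    // every buffer has been looked at (242), then flush one buffer and
    // write the translucent tag into the freed location (220).
    void writeTranslucent(TagBuffers& tb, size_t pixel, uint32_t tag) {
        const int start = tb.current;
        do {
            if (tb.buf[tb.current][pixel] == INVALID_TAG) {
                tb.buf[tb.current][pixel] = tag;
                return;
            }
            tb.current = (tb.current + 1) % NUM_BUFFERS;
        } while (tb.current != start);
        flushToTSU(tb.buf[tb.current]);   // first tag buffer in the cycle
        tb.buf[tb.current][pixel] = tag;
    }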
[0032] A further enhancement can be made to single and multiple
buffer implementations. Rather than flushing the whole tag buffer
at the point that no unoccupied pixel can be found for a
translucent object, only those tags that would be overwritten by
the translucent pixel are flushed to the TSU. The main disadvantage
of this approach is that it can result in the partial submission of
an object to the TSU, which can result in it being submitted many
times. This leads to additional state fetch and set-up costs in the
TSU. This could be alleviated by submitting all pixels with the
same tag value to the TSU rather than only those that are
overlapped by the translucent object. Alternatively, the tag buffer
could be subdivided into square/rectangular sections such that when
the above condition occurs only the section of the tag buffer
containing the conflict would be flushed. This approach also
potentially results in multiple submissions of tags to the TSU but
to a lesser extent.
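The sectioned variant can be sketched as flushing only the rectangular block of the tag buffer that contains the conflict; the section dimensions and row-major layout are assumptions for the example.

    #include <cstdint>
    #include <vector>

    constexpr uint32_t INVALID_TAG = 0xFFFFFFFFu;   // assumed sentinel value

    // Flush only the sectionW x sectionH block of the tag buffer that
    // contains the conflicting pixel, rather than the whole buffer
    // (the sectioned alternative of paragraph [0032]).
    void flushSection(std::vector<uint32_t>& tagBuffer,
                      int tileW, int tileH,
                      int px, int py,               // conflicting pixel
                      int sectionW, int sectionH)
    {
        const int x0 = (px / sectionW) * sectionW;
        const int y0 = (py / sectionH) * sectionH;
        for (int y = y0; y < y0 + sectionH && y < tileH; ++y)
            for (int x = x0; x < x0 + sectionW && x < tileW; ++x) {
                uint32_t& t = tagBuffer[y * tileW + x];
                if (t != INVALID_TAG) { /* send to TSU here */ t = INVALID_TAG; }
            }
    }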
[0033] A block diagram of a preferred embodiment of the invention
is shown in FIG. 6. This comprises a parameter fetch unit 50 which
reads in per tile lists of triangles and rasterisation state from a
per triangle list 52. These are then passed to a scan converter 54
and to a pass spawn control unit 56 respectively. The scan
converter 54 generates position, depth and stencil values for each
pixel within each triangle and passes them to an HSR unit 58. The
HSR unit determines the visibility of each pixel and passes this
information onto the pass spawning control unit (PSC) 56. This unit
has access to two or more tag buffers 60 which are cycled through
in a circular manner. For opaque pixels the PSC unit writes the tag
of the triangle to the current tag buffer and invalidates the
corresponding location in the other buffers. For a translucent pixel
the PSC unit checks to see if the location in the current tag
buffer is valid. If it is, then it switches to the next tag
buffer. If a tag location is not valid it writes the translucent
tag and moves on to the next pixel. If the tag location is valid
then the tag buffer is flushed to the texturing and shading unit
62. At this point all locations in the tag buffer are marked as
invalid and the translucent tag is written to the buffer. For an
opaque punch through pixel the PSC passes all visible pixels
directly to the TSU 62. The TSU determines pixel validity as
appropriate and returns the status of those pixels to the HSR and
PSC units 58, 56. The HSR and PSC units then update the depth
buffer and invalidate locations in the tag buffers respectively for
valid pixels. Translucent punch through pixels behave the same as opaque
punch through except that the PSC unit will flush all currently
valid tags in the tag buffers to the TSU before proceeding. This
process is repeated for all pixels in all triangles within the
tile.
[0034] When the parameter fetch unit 50 determines that there are
no more triangles in the current tile it signals to the pass
spawning unit 56 to flush any remaining valid tags from the tag
buffers to the TSU 62. The parameter fetch unit then proceeds to
read the parameter list for the next tile and repeats the process
until all tiles that make up the final image have been rendered. It
should be noted that all of the units, with the exception of the
parameter fetch, can be modified to operate on multiple pixels in
parallel, thereby speeding up the process.
[0035] The HSR unit 58 has access to a depth and stencil buffer 64
in which the depth and stencil values for each pixel within each
triangle are stored.
* * * * *