U.S. patent application number 11/326839 was filed with the patent office on 2006-01-06 for a method of processing a three-dimensional image in a mobile device. The application is currently assigned to LG Electronics Inc. Invention is credited to Hang Shin Cho, Tae Seong Kim, and Min Jeong Lee.
United States Patent Application 20060153425
Kind Code: A1
Kim; Tae Seong; et al.
Publication Date: July 13, 2006
Method of processing three-dimensional image in mobile device
Abstract
A processing method of interfacing a 3D image and a camera image
is provided. In the processing method, a specific image pattern
defined by a user is recognized, the recognized pattern is traced
within an image, and a camera image and a 3D image are interfaced
based on the tracing result. A 3D object is animated and rendered
using a 3D graphic engine. The rendered image of the 3D object and
the camera image are integrated and displayed.
Inventors: Kim; Tae Seong (Seoul, KR); Lee; Min Jeong (Seoul, KR); Cho; Hang Shin (Seoul, KR)
Correspondence Address: LEE, HONG, DEGERMAN, KANG & SCHMADEKA, 14th Floor, 801 S. Figueroa Street, Los Angeles, CA 90017, US
Assignee: LG Electronics Inc.
Family ID: 36647747
Appl. No.: 11/326839
Filed: January 6, 2006
Current U.S. Class: 382/103; 348/E13.02; 348/E13.022; 348/E13.064; 382/154
Current CPC Class: A63F 2300/6045 20130101; A63F 13/10 20130101; H04N 13/10 20180501; A63F 13/213 20140902; G06F 3/04883 20130101; H04N 13/275 20180501; H04N 13/261 20180501; A63F 13/65 20140902; A63F 2300/1087 20130101; A63F 2300/406 20130101
Class at Publication: 382/103; 382/154
International Class: G06K 9/00 20060101 G06K009/00
Foreign Application Data

Date          Code   Application Number
Jan 7, 2005   KR     10-2005-0001843
Claims
1. A method of processing a 3D image in a mobile device, the mobile
device having a camera and a 3D graphic engine, the method
comprising: recognizing a specific pattern designated by a user
based on images inputted by the camera, and setting the recognized
pattern as a defined pattern; tracing the defined pattern among the
images inputted by the camera, and interfacing the pattern and a 3D
image; and performing an animation process followed by a response
of a corresponding 3D image in accordance with the tracing result
of the pattern interfaced with the 3D image, integrating the
processed image and the camera image, and displaying the integrated
image.
2. The method according to claim 1, wherein the mobile device includes a mobile phone, a PDA, a PDA phone, a smart phone, a notebook computer, or a PMP, each of which has a camera mounted thereon.
3. The method according to claim 1, wherein the tracing result of
the defined pattern within the camera image is used as a user
interface.
4. The method according to claim 1, wherein the specific pattern is
manually designated by the user or designated by a previously
stored image file.
5. The method according to claim 1, wherein the specific pattern is
selected in accordance with a color, a region, or an inputted
image.
6. The method according to claim 1, wherein the specific pattern is designated from a previously stored image file; regions similar to the image file are detected within the camera image and are set as candidate patterns; and the user approves a desired pattern among the detected candidate patterns.
7. The method according to claim 1, wherein the response of an animation image following the tracing of the specific pattern is based on a collision process between the specific pattern within the camera image and a 3D object.
8. The method according to claim 1, wherein the interface between
the camera image and the 3D image includes: extracting a region
where a 3D object is rendered; extracting a region where the
pattern of the camera image exists; and checking whether or not the
two regions are intersected with each other.
9. The method according to claim 1, wherein the recognition and setting of the specific pattern are executed when a system starts, are called and executed to re-designate the pattern when the pattern tracing fails, and are executed when there is a forced call at an application program level, including a designation of a new pattern by the user.
10. The method according to claim 1, wherein the pattern is traced in real time by referring to position information of the pattern in a previous frame of the camera image and estimating a motion of the pattern in a next frame.
11. A method of processing an interface between a camera image and
a graphic image, comprising: setting a specific pattern of an image
as an image-based interface tool; recognizing the set pattern among
images inputted by a camera, and tracing the recognized pattern;
generating a response of a graphic image in accordance with the
pattern tracing; and combining the traced pattern and the
responding graphic image, and displaying the combined image.
12. The method according to claim 11, wherein the specific pattern is set among images acquired by the camera.
13. The method according to claim 11, wherein a characteristic of the specific pattern is manually designated by a user.
14. The method according to claim 11, wherein a characteristic of the specific pattern is designated by setting a color of the pattern, setting a region of the pattern, or selecting a previously stored image.
15. The method according to claim 11, wherein the setting of the
specific pattern includes: designating an image for the setting of
the pattern; detecting a pattern similar to the designated image
within images acquired by the camera; and setting the pattern
selected among the detected candidate patterns by the user.
16. The method according to claim 11, wherein a specific operation of the device is carried out based on the response of the graphic image according to the pattern tracing.
17. The method according to claim 11, wherein the response of the
graphic image in accordance with the pattern tracing is an
animation effect.
18. The method according to claim 11, wherein the set pattern is
mapped into a specific object within the graphic image, the mapped
image is combined with the graphic image in real time, and the
combined image is displayed.
19. The method according to claim 11, wherein the generation of the
response of the graphic image in accordance with the pattern
tracing includes: extracting a region where a specific object is
rendered within the graphic image; extracting a region where the
set pattern of the camera image exists; and generating the response
of the graphic image according to whether the two regions are
intersected with each other.
20. The method according to claim 11, wherein the interfaced images are a 2D camera image and a 3D graphic image.
Description
[0001] Pursuant to 35 U.S.C. § 119(a), this application claims the benefit of the earlier filing date and right of priority to Korean Patent Application No. 10-2005-0001843, filed on Jan. 7, 2005, which is hereby incorporated by reference herein in its entirety.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] The present invention relates to a processing method of
interfacing a three-dimensional (3D) image and a camera image in a
mobile device.
[0004] 2. Description of the Related Art
[0005] Cameras with resolutions of several hundred thousand to several million pixels are increasingly built into mobile devices, typified by the mobile phone. Various kinds of mobile devices with various functions are commercially available; examples include the mobile phone, the personal digital assistant (PDA), the smart phone, the portable multimedia player (PMP), and the MP3 player. Using the cameras built into these devices, an image can be acquired directly, stored on the device, and transmitted or edited.
[0006] In mobile device markets, mobile devices with built-in cameras are becoming widespread. At the same time, various applications using these devices have been developed and released. The mobile devices and their applications support functions for acquiring a photograph or moving picture, transmitting it to a PC or another mobile device and storing it, editing and searching it, and uploading it to a personal homepage or blog on an Internet site for display.
[0007] In the mobile device, image processing makes it possible to acquire an image by means of the camera and to display a 2D or 3D graphic image. A representative application using the 3D display function is the 3D game. The 3D game has been introduced as a new trend in mobile-device game markets, which had previously been limited to two dimensions.
[0008] Interfacing a camera with a 3D image system has previously been done in PC or arcade game systems. For example, a motion of a gamer (user) is traced from the inter-frame image flow in a user image inputted by the camera, and a suitable response is applied to the 3D graphic image based on the estimate of the user's motion. In this manner, the user can enjoy the 3D game.
[0009] In the existing PC or arcade game system, the interface between the camera image and the 3D graphic engine requires complicated operations, including an analysis of the image in each frame, calculation and matching of the motion, and so on. For this reason, it is very inefficient to apply such an interface system directly to a mobile device, whose processor performance is low compared with a PC or arcade game system. In addition, if a 3D display, which requires far more computation than a 2D image, is used concurrently, real-time display and interaction cannot be guaranteed, and the application may be greatly limited.
[0010] In order to use the interface system in the mobile device, a
solution having a simpler and faster response has to be developed.
Therefore, there is a demand for a processing system that can
interface the camera image and the 3D graphic engine in the mobile
device having the camera and the 3D graphic engine mounted
thereon.
SUMMARY OF THE INVENTION
[0011] Accordingly, the present invention is directed to a method of processing a 3D image in a mobile device that substantially obviates one or more problems due to limitations and disadvantages of the related art.
[0012] An object of the present invention is to provide a method of
processing a 3D image in a mobile device, capable of interfacing a
camera image and a 3D graphic engine and securing a real-time
operation.
[0013] Another object of the present invention is to provide a
processing method of interfacing a 3D image and a camera image. In
the processing method, a specific image pattern defined by a user
is recognized, the recognized pattern is traced within an image,
and a camera image and a 3D image are interfaced based on the
tracing result. A 3D object is animated and rendered using a 3D
graphic engine. The rendered image of the 3D object and the camera
image are integrated and displayed.
[0014] A further object of the present invention is to provide a method of processing a 3D image in a mobile device, in which an image acquired by a camera can be used as a kind of user interface, and an interaction can be executed through various motions in front of the camera by using the camera as the user interface. Also, an interaction with the contents within the mobile device can be executed by the motion of the mobile device itself.
[0015] Additional advantages, objects, and features of the
invention will be set forth in part in the description which
follows and in part will become apparent to those having ordinary
skill in the art upon examination of the following or may be
learned from practice of the invention. The objectives and other
advantages of the invention may be realized and attained by the
structure particularly pointed out in the written description and
claims hereof as well as the appended drawings.
[0016] To achieve these objects and other advantages and in
accordance with the purpose of the invention, as embodied and
broadly described herein, there is provided a method of processing
a 3D image in a mobile device, the mobile device having a camera
and a 3D graphic engine, the method including: recognizing a
specific pattern designated by a user based on images inputted by
the camera, and setting the recognized pattern as a defined
pattern; tracing the defined pattern among the images inputted by
the camera, and interfacing the pattern and a 3D image; and
performing an animation process followed by a response of a
corresponding 3D image in accordance with the tracing result of the
pattern interfaced with the 3D image, integrating the processed
image and the camera image, and displaying the integrated
image.
[0017] In another aspect of the present invention, there is
provided a method of processing an interface between a camera image
and a graphic image, including: setting a specific pattern of an
image as an image-based interface tool; recognizing the set pattern
among images inputted by a camera, and tracing the recognized
pattern; generating a response of a graphic image in accordance
with the pattern tracing; and combining the traced pattern and the
responding graphic image, and displaying the combined image.
[0018] A system is implemented that allows interaction through the interface of the camera and the 3D graphic engine built into the mobile device. The present invention can be applied to 3D graphic based games on the mobile device. By using the camera as a new interface, in addition to a 2D or 3D game based on existing key manipulation, a user's motion is inputted through the camera. Therefore, an interaction with a 3D object on the screen can be experienced in an interface environment that bridges a real space and a virtual space.
[0019] Also, in an application such as a pet avatar, an operation of directly caressing or punishing a 3D object becomes possible, as does an operation of taking a picture together with the pet object.
[0020] Further, since the camera image can be used as a kind of user interface, an interaction can be executed through various motions in front of the camera. Also, an interaction with the contents within the mobile device can be executed by the motion of the mobile device itself. The camera image can serve as various user interfaces in combination with an existing key, a fingerprint sensor, a gravity sensor, or a joystick.
[0021] It is to be understood that both the foregoing general
description and the following detailed description of the present
invention are exemplary and explanatory and are intended to provide
further explanation of the invention as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are included to provide a
further understanding of the invention and are incorporated in and
constitute a part of this application, illustrate embodiment(s) of
the invention and together with the description serve to explain
the principle of the invention. In the drawings:
[0023] FIG. 1 is a block diagram of a mobile device according to
the present invention;
[0024] FIG. 2 is a flowchart illustrating a method of interfacing a
camera image and a 3D graphic engine according to the present
invention;
[0025] FIG. 3 is a view of exemplary images given by a method of
defining a pattern according to the present invention;
[0026] FIG. 4 is a view of exemplary images for explaining a method
of tracing a motion of a pattern defined according to the present
invention;
[0027] FIG. 5 is a view of exemplary images for explaining a method
of testing an intersection between a rendered region of a 3D object
and a pattern region; and
[0028] FIG. 6 is a view of exemplary images of each operation when
the camera image and the 3D graphic engine are interfaced according
to the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0029] Reference will now be made in detail to the preferred
embodiments of the present invention, examples of which are
illustrated in the accompanying drawings. Wherever possible, the
same reference numbers will be used throughout the drawings to
refer to the same or like parts.
[0030] Hereinafter, a method of processing a 3D image in a mobile
device according to the present invention will be described in
detail with reference to the accompanying drawings.
[0031] FIG. 1 is a block diagram of a mobile device according to the present invention. Examples of the mobile device are a PDA, a PDA phone, a smart phone, a notebook computer, and a PMP, on each of which a camera is mounted. In FIG. 1, a mobile phone with a built-in camera is illustrated as an example.
[0032] Referring to FIG. 1, the mobile device according to the
present invention includes a camera module 10 for acquiring an
image, a display unit 20 for displaying an image, a processor 30
for managing a pattern recognition, a pattern tracing, and a
control of an interface display with a 3D image, a 3D graphic
engine 40 for processing the 3D graphic image, a memory 50 for
storing data, a communication module 60 for performing a
wired/wireless communication, and a user interface 70 for allowing
a user to manipulate the device.
[0033] The camera module 10 takes an image of an object (including the user himself/herself) designated by the user, processes the taken image, and transmits the processed image to the display unit 20. Under control of the processor 30, the camera image may be stored in the memory 50 or transmitted through the communication module 60. The processor 30 displays the image taken by the camera module 10, stores it in the memory 50, or performs transmission-related control; it also manages the signal processing and control needed to interface with the 3D graphic engine, such as pattern recognition, pattern setting, pattern tracing, and the interface display of the pattern. The latter function may instead be contained in the 3D graphic system. The 3D graphic engine 40 constructs and outputs a 3D animation image and performs an animation process that applies a predetermined response corresponding to the result of tracing a specific pattern within the camera image. The user interface 70 is generally a key input unit; in some cases, it may be a fingerprint sensor, a gravity sensor, a joystick, or a touch screen.
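To make the division of labor concrete, the block diagram of FIG. 1 can be read as the following composition. This is a minimal Python sketch; every class and field name is a hypothetical label for a numbered component, not code from the patent.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class MobileDevice:
        # Hypothetical wiring of the FIG. 1 components.
        camera: Any     # 10: camera module, acquires images
        display: Any    # 20: display unit (LCD), shows the integrated image
        processor: Any  # 30: pattern recognition, tracing, interface control
        engine: Any     # 40: 3D graphic engine, animates and renders 3D objects
        memory: Any     # 50: stores images and data
        comm: Any       # 60: wired/wireless communication module
        ui: Any         # 70: keys, fingerprint/gravity sensor, joystick, or touch screen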
[0034] FIG. 2 is a flowchart illustrating a method of interfacing a
camera image and a 3D image in the mobile device with the camera
module and the 3D graphic engine.
[0035] In operation S10, a specific pattern is defined in a 2D
image inputted from the camera. This operation is performed for
defining the specific pattern and tracing the movement of the
specific pattern, instead of tracing all moving portions of the
camera image.
[0036] In the entire system, the pattern recognition corresponds to an initialization operation. It is executed when the system starts, or it is called and executed to re-designate the pattern when the pattern tracing fails. The pattern recognition is also executed when a forced call is made at the application program level, for example, when the user designates a new pattern.
[0037] In this embodiment, the pattern is defined using a user-oriented passive method. Defining the pattern this way allows the user to define the desired pattern accurately. In addition, it provides the flexibility for the user to designate any desired feature as a pattern.
[0038] In this embodiment, the specific pattern can be defined by one of an input image S11, a color S12 designated by the user, and a region. First, an image intended to contain the pattern is taken by the camera. Then, if a color is designated, pixels within a specific threshold range of the designated color are detected within the corresponding image, and the detected color region is set as the pattern. If a region is inputted, the user can directly designate the region by moving a pointer within the image. This can be implemented simply by moving a specific pointer with a direction key of the user interface to outline a region. Meanwhile, in a case where a touch screen is provided in the user interface, a region can be designated more easily by directly touching the corresponding region. This method captures the user-defined pattern most accurately.
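As a rough illustration of the color-based designation, the following sketch detects the pixels near a user-designated color and takes their bounding box as the defined pattern region. The function name, the RGB distance metric, and the threshold semantics are assumptions for illustration, not the patent's implementation.

    import numpy as np

    def define_color_pattern(frame, target_rgb, threshold=30):
        # frame: H x W x 3 RGB camera image; target_rgb: user-designated color.
        frame = frame.astype(np.int32)
        dist = np.abs(frame - np.array(target_rgb)).sum(axis=2)
        mask = dist < threshold                 # pixels close to the designated color
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None                         # color not found; ask the user to re-designate
        return (xs.min(), ys.min(), xs.max(), ys.max())  # pattern bounding box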
[0039] In the case where an image is inputted, if the mobile device already contains an image corresponding to the pattern in the form of an image file, a method of recognizing a pattern similar to the designated image within the camera image is used. The pattern is defined using these methods, and the user is asked to approve the defined pattern, thereby setting the specific pattern. The set pattern will later be traced in the integrated display state of the camera image and the 3D image, and processing for inducing a response of the 3D image according to the tracing result is performed.
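A brute-force sketch of the image-file case is given below: the stored image is slid over the camera frame, windows are scored by sum of squared differences, and the best windows become the candidate patterns offered to the user for approval. This is an assumption-laden illustration; a real mobile implementation would use a far cheaper matcher.

    import numpy as np

    def find_candidate_patterns(frame_gray, template_gray, max_candidates=3):
        # Grayscale frame and stored template; returns candidate boxes, best first.
        fh, fw = frame_gray.shape
        th, tw = template_gray.shape
        scores = []
        for y in range(0, fh - th + 1, 4):      # coarse 4-pixel stride to save work
            for x in range(0, fw - tw + 1, 4):
                window = frame_gray[y:y+th, x:x+tw].astype(np.float64)
                ssd = np.sum((window - template_gray) ** 2)
                scores.append((ssd, (x, y, x + tw, y + th)))
        scores.sort(key=lambda s: s[0])
        return [box for _, box in scores[:max_candidates]]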
[0040] FIG. 3 illustrates an example of the pattern recognition.
Two specific patterns 121 and 122 are recognized from the images
110 and 120 taken by the camera. The patterns 121 and 122 show the
result recognized based on the respective colors. For example, the
first pattern 121 is a blue color point and the second pattern 122
is a red color point.
[0041] Referring again to FIG. 2, in operations S21 and S22, the pattern is traced after the pattern recognition. The pattern tracing follows the recognized and set patterns 121 and 122 within the camera image S100 in real time. The patterns are traced by estimating the motion of each pattern in the next frame with reference to the position information of the patterns 121 and 122 in the previous frame, which makes real-time tracing possible.
[0042] FIG. 4 illustrates the motion tracing of the defined patterns. The two defined patterns 121 and 122 within the image 120 taken by the camera are illustrated. When the patterns 121 and 122 move to a predetermined position, the first pattern 121 is represented as a first pattern 121a at the new position to which the first pattern 121 moves, and the second pattern 122 is represented as a second pattern 122a at the new position to which the second pattern 122 moves.
[0043] In operation S31, if the pattern tracing succeeds, an applied operation process (for example, a collision test) is performed on the 2D image (the camera image containing the pattern) and the 3D image. That is, while performing the interface processing of the camera image and the 3D image by using the pattern obtained from the camera image and the display result of the 3D graphic engine, the region where the 3D object is rendered on the screen is extracted, and the region where the pattern of the camera image exists is extracted. Through these processes, it is checked whether or not the two regions intersect each other.
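Under the assumption that both regions are reduced to axis-aligned bounding boxes in screen coordinates, the intersection check described in this paragraph becomes a simple rectangle-overlap test:

    def regions_intersect(render_box, pattern_box):
        # Boxes are (x_min, y_min, x_max, y_max) in screen coordinates.
        ax0, ay0, ax1, ay1 = render_box
        bx0, by0, bx1, by1 = pattern_box
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1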
[0044] FIG. 5 illustrates an example of the collision test. Images
130 and 131 obtained before and after the collision of the traced
patterns 121 and 122 and the 3D object 141 will be described with
reference to FIG. 5.
[0045] In both of the images 130 and 131 before and after the collision, user characters 140, which contain the specific patterns 121 and 122 through the interface processing of the camera image and the 3D image, appear in animation form, and the corresponding 3D objects 141 also appear. As illustrated in FIG. 5, in the image 131 after the collision, the first pattern 121 of the two patterns 121 and 122 intersects the region where the 3D object 141 is rendered.
[0046] In operation S32, the operation result of the previous stage is reflected. For example, an animation that responds after the collision is set. That is, when the specific patterns 121 and 122 contained in the user characters 140 collide with the 3D object 141, an animation (a variation of the object) exhibiting a specific response in the 3D object 141 is set. In this case, an operation suitable for the application, beyond the simple animation, can be performed in the upper application. For example, in the case of a fighting game between the user character 140 and the 3D object 141, the patterns 121 and 122 are mapped onto both fists of the user character 140, and the patterns 121 and 122 are regarded as striking the 3D object 141. The 3D object 141 may then fall down or frown. In addition, an operation such as increasing the user's score may be performed.
[0047] In operation S34, the application logic and a 3D animation are implemented based on the 3D data S200. In operation S35, the 3D animation S300 is displayed. That is, using the 3D graphic engine mounted on the mobile device, the 3D object is animated and rendered, and the resulting 3D image S300 is constructed.
[0048] In operation S40, the camera image S100 and the 3D image
S300 are integrated. That is, the integrated image of the rendered
image of the 3D object and the camera image is generated.
[0049] In operation S50, the integrated image of the camera image
and the 3D image is displayed on the display unit (LCD). In
operation S60, the entire process for one frame is terminated.
Then, the procedures from the pattern tracing to the displaying of
the integrated image with respect to a next frame are
performed.
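Putting the stages together, one pass through the S10-S60 loop of FIG. 2 might look like the sketch below. The camera, engine, and display interfaces (capture, rendered_region, update_animation, render, show) are all hypothetical names, and the sketch reuses trace_pattern and regions_intersect from the earlier sketches; the patent does not prescribe this API.

    import numpy as np

    def process_frame(camera, engine, display, state):
        frame = camera.capture()                                 # S100: camera image
        state.box, score = trace_pattern(frame, state.box, state.match_fn)  # S21/S22
        hit = regions_intersect(engine.rendered_region(), state.box)        # S31: collision test
        engine.update_animation(collided=hit)                    # S32: set the response
        rendered, mask = engine.render()                         # S34/S35: 3D image + coverage mask
        integrated = np.where(mask[..., None], rendered, frame)  # S40: integrate the two images
        display.show(integrated)                                 # S50: display on the LCD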
[0050] FIG. 6 illustrates an example of an image processing
according to the method of processing the 3D image in the mobile
device.
[0051] As described above, the image 210 for the pattern recognition is inputted using the camera. Two regions with specific colors (regions where the specific colors are densely expressed) are selected from the image 210 taken by the camera and are recognized as the specific patterns defined by the user. In the next image 220, the recognized patterns are illustrated. The recognized patterns are then combined with the 3D image. That is, as illustrated in the next image 230, in the 3D animation game, the scene, including an opponent object (the game object) and the background, is constructed by combining the patterns with the user character and is then displayed.
[0052] Next, the motion of the specific pattern defined by the user is traced, and it is determined whether or not the specific pattern collides (intersects) with the rendered region of the 3D object. The image 250 shows the moment when the pattern region collides with the rendered region of the 3D object. When the pattern region collides with the rendered region of the 3D object in this manner, the 3D object responds (varies), as illustrated in the processed image 260 given after the collision. This response can be varied with respect to the motion of the specific pattern within the camera image according to the intention of the application program.
[0053] In the method of processing the 3D image in the mobile
device according to the present invention, the process of
recognizing the specific pattern, tracing the recognized pattern,
and assigning the specific response according to the tracing result
can be applied to the user interface as well as the 3D graphic
engine interface.
[0054] That is, the camera image is used as a kind of user interface, like a key, a fingerprint sensor, a gravity sensor, or a joystick. The present invention can thus extend the user interface by combining the camera-based interface with the existing interfaces.
[0055] For example, it is assumed that one specific pattern has been recognized and set, and several image content lists are displayed. In this case, a method of selecting one content item from the image content lists is as follows. With the existing key input interface, the specific content is selected using the direction key (or a numerical key) and an enter key. According to the present invention, however, the existing interface can be replaced by selecting the content item corresponding to the position to which the specific pattern moves within the camera image.
[0056] For this purpose, the displaying of the integrated image of the content list image and the pattern image can be implemented simply by applying the method of displaying the integrated image of the camera image and the 3D image. That is, the mouse-and-cursor concept is replaced with a camera-image pattern recognition and tracing concept.
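A sketch of that replacement, assuming each content item occupies a known on-screen box: the traced pattern's center plays the role of the mouse cursor, and the item whose box contains it is selected. The helper and its box convention are hypothetical, not the patent's implementation.

    def select_content(pattern_box, item_boxes):
        # pattern_box and each item box are (x_min, y_min, x_max, y_max).
        cx = (pattern_box[0] + pattern_box[2]) // 2   # pattern center = cursor position
        cy = (pattern_box[1] + pattern_box[3]) // 2
        for index, (x0, y0, x1, y1) in enumerate(item_boxes):
            if x0 <= cx <= x1 and y0 <= cy <= y1:
                return index                          # selected content index
        return None                                   # pattern is not over any item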
[0057] The above description of the interface is merely exemplary,
and various interface methods based on the camera image can be
implemented based on the method of processing the 3D image in the
mobile device.
[0058] It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
* * * * *