
United States Patent Application 20130314558
Kind Code A1
Ju; Chi-Cheng ;   et al. November 28, 2013

IMAGE CAPTURE DEVICE FOR STARTING SPECIFIC ACTION IN ADVANCE WHEN DETERMINING THAT SPECIFIC ACTION IS ABOUT TO BE TRIGGERED AND RELATED IMAGE CAPTURE METHOD THEREOF

Abstract

An image capture device has an image capture module, a sensor and a controller. The sensor senses an object to generate a sensing result. The controller checks the sensing result to determine if a specific action associated with the image capture module is about to be triggered, and controls the image capture module to start the specific action in advance when determining that the specific action is about to be triggered.


Inventors: Ju; Chi-Cheng; (Hsinchu City, TW) ; Chen; Ding-Yun; (Taipei City, TW) ; Ho; Cheng-Tsai; (Taichung City, TW)
Applicant: MEDIATEK INC. (Hsin-Chu, TW)
Assignee: MEDIATEK INC. (Hsin-Chu, TW)

Family ID: 49621289
Appl. No.: 13/868092
Filed: April 22, 2013

Related U.S. Patent Documents

Application Number: 61/651,499; Filing Date: May 24, 2012

Current U.S. Class: 348/208.99
Current CPC Class: G06T 2207/20192 20130101; H04N 5/23264 20130101; G06T 5/003 20130101; H04N 5/23222 20130101; H04N 5/23248 20130101; G06T 3/4007 20130101; G06T 2207/30201 20130101; G06T 2207/30168 20130101; G06T 3/4053 20130101; H04N 5/772 20130101; G06T 2207/10016 20130101; G06T 7/0002 20130101; H04N 5/23254 20130101; H04N 5/23293 20130101; G06T 3/4023 20130101; H04N 9/79 20130101; G06T 3/40 20130101
Class at Publication: 348/208.99
International Class: H04N 5/232 20060101 H04N005/232

Claims



1. An image capture device comprising: an image capture module; a sensor, arranged for sensing an object to generate a sensing result; and a controller, arranged for checking the sensing result to determine if a specific action associated with the image capture module is about to be triggered and controlling the image capture module to start the specific action in advance when determining that the specific action is about to be triggered.

2. The image capture device of claim 1, wherein the specific action is an image capture action, an action of starting video recording or an action of ending video recording.

3. The image capture device of claim 1, wherein the controller refers to the sensing result to determine a distance between the object and the image capture device, and refers to the distance to determine if the specific action is about to be triggered.

4. The image capture device of claim 3, wherein the controller determines that the specific action is about to be triggered when the distance is continuously found shorter than a predetermined threshold over a predetermined time duration.

5. The image capture device of claim 3, wherein the controller determines that the specific action is about to be triggered when the distance is shorter than a predetermined threshold and a next distance between the object and the image capture device is shorter than the distance.

6. The image capture device of claim 3, wherein the controller determines the distance by using skin color information of the object that is derived from the sensing result.

7. The image capture device of claim 6, wherein the controller determines that the distance is shorter when an area of skin color is found larger.

8. The image capture device of claim 3, wherein the controller determines the distance by using light information that is derived from the sensing result.

9. The image capture device of claim 3, wherein the controller determines the distance by using proximity information of the object that is derived from the sensing result.

10. The image capture device of claim 3, wherein the controller determines the distance by using range information of the object that is derived from the sensing result or depth information of the object that is derived from the sensing result.

11. The image capture device of claim 10, wherein when the controller determines the distance by using the depth information of the object, the sensor is a depth sensing liquid crystal display (LCD) panel.

12. The image capture device of claim 1, wherein the controller refers to one of an electrical property and a magnetic property of the sensing result to determine if the specific action is about to be triggered.

13. The image capture device of claim 12, wherein the sensor is a floating touch panel.

14. The image capture device of claim 12, wherein the object sensed by the sensor is a pen with magnetism.

15. An image capture method comprising: sensing an object to generate a sensing result; checking the sensing result to determine if a specific action associated with an image capture module is about to be triggered; and when determining that the specific action is about to be triggered, controlling the image capture module to start the specific action in advance.

16. The image capture method of claim 15, wherein the specific action is an image capture action, an action of starting video recording or an action of ending video recording.

17. The image capture method of claim 15, wherein the step of checking the sensing result to determine if the specific action associated with the image capture module is about to be triggered comprises: referring to the sensing result to determine a distance between the object and the image capture device; and referring to the distance to determine if the specific action is about to be triggered.

18. The image capture method of claim 17, wherein it is determined that the specific action is about to be triggered when the distance is continuously found shorter than a predetermined threshold over a predetermined time duration.

19. The image capture method of claim 17, wherein it is determined that the specific action is about to be triggered when the distance is shorter than a predetermined threshold and a next distance between the object and the image capture device is shorter than the distance.

20. The image capture method of claim 17, wherein the step of referring to the sensing result to determine the distance between the object and the image capture device comprises: determining the distance by using skin color information of the object that is derived from the sensing result.

21. The image capture method of claim 20, wherein it is determined that the distance is shorter when an area of skin color is found larger.

22. The image capture method of claim 17, wherein the step of referring to the sensing result to determine the distance between the object and the image capture device comprises: determining the distance by using light information that is derived from the sensing result.

23. The image capture method of claim 17, wherein the step of referring to the sensing result to determine the distance between the object and the image capture device comprises: determining the distance by using proximity information of the object that is derived from the sensing result.

24. The image capture method of claim 17, wherein the step of referring to the sensing result to determine the distance between the object and the image capture device comprises: determining the distance by using range information of the object that is derived from the sensing result or depth information of the object that is derived from the sensing result.

25. The image capture method of claim 15, wherein the step of checking the sensing result to determine if the specific action associated with the image capture module is about to be triggered comprises: referring to one of an electrical property and a magnetic property of the sensing result to determine if the specific action is about to be triggered.
Description



CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. provisional application No. 61/651,499, filed on May 24, 2012 and incorporated herein by reference.

BACKGROUND

[0002] The disclosed embodiments of the present invention relate to controlling an image capture module, and more particularly, to an image capture device for starting a specific action in advance when determining that the specific action associated with an image capture module is about to be triggered and related image capture method thereof.

[0003] Camera modules have become popular elements used in a variety of applications. For example, a smartphone is typically equipped with a camera module, thus allowing a user to easily and conveniently take pictures by using the smartphone. However, due to inherent characteristics of the smartphone, the smartphone is prone to generating blurred images. For example, the camera aperture and/or sensor size of the smartphone is typically small, which leads to a small amount of light arriving at each pixel of the camera sensor. As a result, the image quality may suffer from the small camera aperture and/or sensor size.

[0004] Besides, due to the light weight and portability of the smartphone, the smartphone tends to be affected by hand shake. Specifically, when the user's finger touches a physical shutter/capture button or a virtual shutter/capture button on the smartphone, the shake of the smartphone will last for a period of time. Hence, any picture taken during this period of time would be affected by the hand shake. An image deblurring algorithm may be performed upon the blurred images. However, the computational complexity of the image deblurring algorithm is very high, resulting in considerable power consumption. Besides, artifacts will be introduced if the image deblurring algorithm is not perfect.

[0005] Moreover, a camera module with an optical image stabilizer (OIS) is expensive. Hence, the conventional smartphone is generally equipped with a digital image stabilizer (i.e., an electronic image stabilizer (EIS)). The digital image stabilizer can counteract the motion of images, but fails to prevent image blurring.

[0006] Thus, there is a need for an innovative image capture device which is capable of generating non-blurred pictures.

SUMMARY

[0007] In accordance with exemplary embodiments of the present invention, an image capture device for starting a specific action in advance when determining that the specific action associated with an image capture module is about to be triggered and related image capture method thereof are proposed to solve the above-mentioned problem.

[0008] According to a first aspect of the present invention, an exemplary image capture device is disclosed. The exemplary image capture device includes an image capture module, a sensor arranged for sensing an object to generate a sensing result, and a controller arranged for checking the sensing result to determine if a specific action associated with the image capture module is about to be triggered and controlling the image capture module to start the specific action in advance when determining that the specific action is about to be triggered.

[0009] According to a second aspect of the present invention, an exemplary image capture method is disclosed. The exemplary image capture method includes: sensing an object to generate a sensing result; checking the sensing result to determine if a specific action associated with an image capture module is about to be triggered; and when determining that the specific action is about to be triggered, controlling the image capture module to start the specific action in advance.

[0010] These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a block diagram illustrating an image capture device according to an embodiment of the present invention.

[0012] FIG. 2 is a flowchart illustrating an image capture method according to an embodiment of the present invention.

[0013] FIG. 3 is a diagram illustrating a first embodiment of step 206 shown in FIG. 2.

[0014] FIG. 4 is a diagram illustrating a second embodiment of step 206 shown in FIG. 2.

[0015] FIG. 5 is a diagram illustrating a third embodiment of step 206 shown in FIG. 2.

[0016] FIG. 6 is a diagram illustrating a fourth embodiment of step 206 shown in FIG. 2.

DETAILED DESCRIPTION

[0017] Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to . . . ". Also, the term "couple" is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.

[0018] The main concept of the present invention is to capture one or more still image(s) or start/end a video recording operation before an object (e.g., a finger of a user or a pen with magnetism that is used by the user) actually touches an image capture device. In this way, the image blurring caused by unwanted hand shake applied to the image capture device is avoided. Further details are described below.

[0019] Please refer to FIG. 1, which is a block diagram illustrating an image capture device according to an embodiment of the present invention. The image capture device 100 may be at least a portion (i.e., part or all) of an electronic device. For example, the image capture device 100 may be implemented in a portable device such as a smartphone or a digital camera. In this embodiment, the image capture device 100 includes, but is not limited to, an image capture module 102, a sensor 104 and a controller 106. The image capture module 102 has the image capture capability, and may be used to generate still image(s) under an image capture mode (i.e., a photo mode) and generate a video sequence under a video recording mode. As the present invention focuses on the control scheme applied to the image capture module 102 rather than an internal structure of the image capture module 102, further description of the internal structure of the image capture module 102 is omitted here for brevity.

[0020] The sensor 104 is coupled to the controller 106, and arranged for sensing an object OBJ to generate a sensing result SR. The object OBJ may trigger a specific action to be performed by the image capture module 102. Thus, the sensing result SR carries information indicative of the triggering status of the specific action. By way of example, but not limitation, the specific action may be an image capture action or an action of starting/ending video recording; and the object OBJ may be a finger of a user or a pen with magnetism that is used by the user.

[0021] The controller 106 is coupled to the sensor 104 and the image capture module 102, and arranged for receiving the sensing result SR and controlling the image capture module 102 based on the received sensing result SR. Specifically, the controller 106 checks the sensing result SR to determine if the specific action associated with the image capture module 102 is about to be triggered, and controls the image capture module 102 to start the specific action in advance when determining that the specific action is about to be triggered (i.e., the object OBJ is close to the image capture device 100 but does not touch the image capture device 100 yet). In a case where the specific action is an image capture action, the image capture module 102 is controlled by the controller 106 to start the image capture action (i.e., enter an image capture mode) before the image capture device 100 is actually touched by the object OBJ, thus making captured still images free from image blurring caused by unwanted hand shake. In another case where the specific action is an action of starting video recording, the image capture module 102 is controlled by the controller 106 to start the action of starting video recording (i.e., enter a video recording mode) before the image capture device 100 is actually touched by the object OBJ, thus making captured video frames at the beginning of the video recording free from image blurring caused by hand shake. In yet another case where the specific action is an action of ending video recording, the image capture module 102 is controlled by the controller 106 to start the action of ending video recording (i.e., leave the video recording mode) before the image capture device 100 is actually touched by the object OBJ, thus making captured video frames at the end of the video recording free from image blurring caused by hand shake.
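
To make this division of labor concrete, the following Python sketch models the three components as minimal interfaces. It is an illustrative reading of the embodiment, not code from the application; all names (Sensor, CaptureModule, Controller, start_action, etc.) are hypothetical, and the decision rule is deliberately left abstract because FIGS. 3-6 give several alternative criteria.

```python
from abc import ABC, abstractmethod


class Sensor(ABC):
    """Models the sensor 104: senses the object OBJ, yields a result SR."""

    @abstractmethod
    def sense(self):
        """Return one sensing result (image, distance, current, ...)."""


class CaptureModule(ABC):
    """Models the image capture module 102."""

    @abstractmethod
    def start_action(self, action: str):
        """Start 'capture', 'start_recording' or 'stop_recording'."""


class Controller:
    """Models the controller 106: checks SR, pre-starts the action."""

    def __init__(self, sensor: Sensor, capture: CaptureModule, action: str):
        self.sensor = sensor
        self.capture = capture
        self.action = action

    def about_to_trigger(self, sr) -> bool:
        """Decision rule; concrete criteria are sketched after FIGS. 3-6."""
        raise NotImplementedError

    def step(self):
        sr = self.sensor.sense()
        if self.about_to_trigger(sr):
            # Start the action before the object actually touches the device.
            self.capture.start_action(self.action)
```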

[0022] Please refer to FIG. 1 in conjunction with FIG. 2. FIG. 2 is a flowchart illustrating an image capture method according to an embodiment of the present invention. Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 2. The exemplary image capture method may include the following steps.

[0023] Step 200: Start.

[0024] Step 202: The image capture module 102 enters a camera preview mode.

[0025] Step 204: Utilize the sensor 104 to sense the object OBJ, and accordingly generate the sensing result SR.

[0026] Step 206: Utilize the controller 106 to check the sensing result SR to determine if the specific action associated with the image capture module 102 is about to be triggered. If yes, go to step 208; otherwise, go to step 202.

[0027] Step 208: Utilize the controller 106 to control the image capture module 102 to leave the camera preview mode and enter a different camera mode (e.g., an image capture mode or a video recording mode) to start the specific action.

[0028] Step 210: The specific action is actually triggered by the object OBJ touching the image capture device 100.

[0029] Step 212: End.

[0030] Before the user actually triggers the specific action (e.g., an image capture action, an action of starting video recording, or an action of ending video recording), the image capture module 102 may enter a camera preview mode to generate a preview image or a preview video sequence on a display screen (not shown) of the image capture device 100 (step 202). Thus, the image capture module 102 stays in the camera preview mode until it is determined that the specific action associated with the image capture module 102 is about to be triggered (step 206). As can be seen from the flowchart in FIG. 2, the specific action is started in advance at the time the controller 106 judges that the specific action is about to be triggered (steps 206 and 208). That is, when a predetermined criterion is met, the controller 106 would activate the specific action of the image capture module 102 even though the object OBJ does not actually trigger the specific action (steps 208 and 210).
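
Rendered as control flow, steps 200-212 amount to a simple preview loop. Below is a minimal Python sketch under the same hypothetical interfaces as above; enter_preview and leave_preview are likewise assumed names, not from the application.

```python
def run_capture_flow(sensor, controller, capture_module):
    """Illustrative rendering of the FIG. 2 flow (steps 200-212)."""
    capture_module.enter_preview()                   # step 202
    while True:
        sr = sensor.sense()                          # step 204
        if controller.about_to_trigger(sr):          # step 206: criterion met?
            break                                    # yes: proceed to step 208
        # no: stay in the camera preview mode (loop back to step 202)
    capture_module.leave_preview()                   # step 208
    capture_module.start_action(controller.action)   # step 208: start in advance
    # Step 210 happens afterwards: the object OBJ actually touches the
    # device, by which time the specific action is already under way.
```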

[0031] As mentioned above, step 206 is performed to determine whether the specific action should be activated in advance. In one exemplary design, the controller 106 may refer to the sensing result SR to determine a distance D between the object OBJ and the image capture device 100 (e.g., a distance between the object OBJ and the sensor 104), and refer to the distance D to determine if the specific action is about to be triggered. Please refer to FIG. 3, which is a diagram illustrating a first embodiment of step 206 shown in FIG. 2. In this embodiment, step 206 may be realized using the following steps.

[0032] Step 302: Estimate the distance D between the object OBJ and the image capture device 100 according to information given by the sensing result SR.

[0033] Step 304: Compare the distance D with a predetermined threshold TH.sub.D.

[0034] Step 306: Check if the distance D is shorter than the predetermined threshold TH.sub.D. If yes, go to step 308; otherwise, go to step 316.

[0035] Step 308: Count a time period T in which the distance D is continuously found shorter than the predetermined threshold TH.sub.D.

[0036] Step 310: Compare the time period T with a predetermined time duration TH.sub.T.

[0037] Step 312: Check if the time period T reaches the predetermined time duration TH.sub.T. If yes, go to step 314; otherwise, go to step 302.

[0038] Step 314: Determine that the specific action is about to be triggered.

[0039] Step 316: Determine that the specific action is not about to be triggered.

[0040] In this embodiment, the controller 106 determines that the specific action is about to be triggered when the distance D is continuously found shorter than the predetermined threshold TH.sub.D over the predetermined time duration TH.sub.T. Specifically, when the distance D becomes shorter than the predetermined threshold TH.sub.D, this means that the object OBJ is close to the image capture device 100 (steps 302-306). It is possible that the user is going to trigger the specific action associated with the image capture module 102. To avoid misjudgment, the predetermined time duration TH.sub.T is employed in this embodiment. Therefore, if the time period in which the distance D remains shorter than the predetermined threshold TH.sub.D does not last for the predetermined time duration TH.sub.T, the controller 106 would not decide that the specific action is about to be triggered (steps 308-312). That is, when there is one determination result showing that the distance D is not shorter than the predetermined threshold TH.sub.D before the predetermined time duration TH.sub.T expires, the controller 106 skips the current counting operation of the time period T in which the distance D remains shorter than the predetermined threshold TH.sub.D, and decides that the specific action is not about to be triggered.
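
As a sketch of this first criterion, the following hypothetical helper keeps re-estimating D and only reports a positive decision once D has stayed below TH.sub.D for TH.sub.T seconds; one reading at or above the threshold aborts the current counting of T, exactly as paragraph [0040] describes. The callable estimate_distance stands in for step 302 and is an assumption, not part of the application.

```python
import time

def about_to_trigger_fig3(estimate_distance, th_d, th_t):
    """FIG. 3 criterion: D continuously below TH_D for at least TH_T seconds."""
    count_start = None                         # start time of the period T
    while True:
        d = estimate_distance()                # step 302: estimate distance D
        if d >= th_d:                          # steps 304-306: D not short enough
            return False                       # step 316: not about to trigger
        if count_start is None:
            count_start = time.monotonic()     # step 308: start counting T
        t = time.monotonic() - count_start     # steps 308-310: compare T, TH_T
        if t >= th_t:                          # step 312: T reached TH_T?
            return True                        # step 314: about to be triggered
```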

[0041] The flow shown in FIG. 3 is merely one feasible implementation of the step 206 shown in FIG. 2. In an alternative design, the steps 308-312 may be omitted. Hence, the controller 106 may determine that the specific action is about to be triggered each time the distance D is found shorter than the predetermined threshold TH.sub.D. This also falls within the scope of the present invention.

[0042] In the exemplary design shown in FIG. 3, steps 308-312 are used to avoid misjudgment by checking if the distance D is continuously found shorter than the predetermined threshold TH.sub.D over the predetermined time duration TH.sub.T. Alternatively, a different misjudgment prevention scheme may be employed. Please refer to FIG. 4, which is a diagram illustrating a second embodiment of step 206 shown in FIG. 2. In this embodiment, step 206 may be realized using the following steps.

[0043] Step 502: Estimate the distance (e.g., a first distance D.sub.1) between the object OBJ and the image capture device 100 according to information given by the sensing result SR.

[0044] Step 504: Compare the first distance D.sub.1 with a predetermined threshold TH.sub.D.

[0045] Step 506: Check if the first distance D.sub.1 is shorter than the predetermined threshold TH.sub.D. If yes, go to step 508; otherwise, go to step 516.

[0046] Step 508: Estimate the distance (e.g., a second distance D.sub.2) between the object OBJ and the image capture device 100 according to information given by the sensing result SR.

[0047] Step 510: Compare the second distance D.sub.2 with the first distance D.sub.1.

[0048] Step 512: Check if the second distance D.sub.2 is shorter than the first distance D.sub.1. If yes, go to step 514; otherwise, go to step 516.

[0049] Step 514: Determine that the specific action is about to be triggered.

[0050] Step 516: Determine that the specific action is not about to be triggered.

[0051] In this embodiment, the controller 106 determines that the specific action is about to be triggered when the estimated distance (i.e., first distance D.sub.1) is shorter than the predetermined threshold TH.sub.D at one time point and then the estimated distance (i.e., second distance D.sub.2) becomes shorter at the next time point. Specifically, when the first distance D.sub.1 becomes shorter than the predetermined threshold TH.sub.D, this means that the object OBJ is close to the image capture device 100 (steps 502-506). It is possible that the user is going to trigger the specific action associated with the image capture module 102. To avoid misjudgment, the distance between the object OBJ and the image capture device 100 is estimated again. Therefore, if the second distance D.sub.2 is not shorter than the first distance D.sub.1, the controller 106 would not decide that the specific action is about to be triggered (steps 508-512 and 516). That is, the controller 106 does not decide that the specific action is about to be triggered unless the sequentially estimated distances D.sub.1 and D.sub.2 are both shorter than the predetermined threshold TH.sub.D and the latter is shorter than the former (steps 508-514).
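
The second criterion needs only two consecutive estimates. A minimal sketch, again with estimate_distance as an assumed stand-in for steps 502 and 508:

```python
def about_to_trigger_fig4(estimate_distance, th_d):
    """FIG. 4 criterion: D1 < TH_D and the next estimate D2 < D1."""
    d1 = estimate_distance()       # step 502: first distance D1
    if d1 >= th_d:                 # steps 504-506: object not close enough
        return False               # step 516: not about to be triggered
    d2 = estimate_distance()       # step 508: second distance D2
    return d2 < d1                 # steps 510-514: object keeps approaching
```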

[0052] In step 302/502/508, the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100 is estimated by the controller 106 based on information given by the sensing result SR generated from the sensor 104. Several examples for achieving estimation of the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100 are given below.

[0053] In a first exemplary implementation, the sensor 104 acts as a shutter/capture button, and the controller 106 is configured to determine the distance D/D.sub.1/D.sub.2 by using skin color information of the object OBJ that is derived from the sensing result SR. For example, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a back camera of the smartphone. Thus, the sensor 104 generates captured images of the object OBJ to serve as the sensing result SR. After receiving the sensing result SR (i.e., captured images of the object OBJ), the controller 106 analyzes each captured image of the object OBJ to obtain the skin color information of the object OBJ. As the sensor 104 is a back camera which acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action associated with the front camera (i.e., the image capture module 102). The skin color information of the object OBJ would indicate a finger area within each captured image of the object OBJ. The size of the finger area is inversely proportional to the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. That is, if the finger area is larger, the object OBJ is closer to the image capture device 100. Hence, the size of the finger area can be used to estimate the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. In this embodiment, the controller 106 determines that the distance D/D.sub.1/D.sub.2 is shorter when an area of skin color (i.e., the size of the finger area) is found larger. When the finger area increases to occupy most of the captured image of the object OBJ (i.e., the non-zero distance D/D.sub.1/D.sub.2 is shorter than the predetermined threshold TH.sub.D), it is possible that user's finger is going to touch the shutter/capture button.
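
One way to picture this implementation: the fraction of skin-colored pixels in each back-camera frame serves as an inverse proxy for the distance. The sketch below assumes an RGB frame as a NumPy array and uses a deliberately crude skin-color rule; a real device would use a calibrated skin model, and the 0.6 area threshold is purely illustrative.

```python
import numpy as np

def skin_area_fraction(rgb):
    """Fraction of skin-colored pixels in an HxWx3 uint8 RGB frame."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    # Crude illustrative rule: bright, reddish pixels with R > G > B.
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b)
    return float(skin.mean())

def finger_is_near(rgb, area_threshold=0.6):
    """Larger skin area -> shorter distance D: treat the frame as 'near'
    once the finger area occupies most of the captured image."""
    return skin_area_fraction(rgb) > area_threshold
```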

[0054] Alternatively, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a color sensor implemented in the smartphone. Thus, the sensor 104 detects the skin color of the object OBJ, and accordingly generates the sensing result SR. In other words, the skin color information of the object OBJ is directly provided by the sensor 104. As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. After receiving the sensing result SR (i.e., skin color detection result), the controller 106 is capable of determining if user's finger is approaching the shutter/capture button by monitoring the size variation of the finger area.

[0055] In a second exemplary implementation, the sensor 104 acts as a shutter/capture button, and the controller 106 is configured to determine the distance D/D.sub.1/D.sub.2 by using light information of the object OBJ that is derived from the sensing result SR. For example, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a back camera of the smartphone. Thus, the sensor 104 generates captured images of the object OBJ to serve as the sensing result SR. After receiving the sensing result SR (i.e., captured images of the object OBJ), the controller 106 analyzes each captured image of the object OBJ to obtain the light information (i.e., brightness information). As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. The light information would indicate whether user's finger is close to the image capture device 100 due to the fact that the intensity of the brightness is inversely proportional to the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. That is, if the captured image generated from the sensor 104 becomes darker, the object OBJ is closer to the image capture device 100. Hence, the intensity of brightness can be used to estimate the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. In this embodiment, the controller 106 determines that the distance D/D.sub.1/D.sub.2 is shorter when the intensity of brightness is found lower. When the intensity of brightness decreases to be close to a dark level (i.e., the non-zero distance D/D.sub.1/D.sub.2 is shorter than the predetermined threshold TH.sub.D), it is possible that user's finger is going to touch the shutter/capture button.
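
A corresponding sketch for the brightness cue, assuming the input is a grayscale frame as a NumPy array; the dark level here is an arbitrary illustrative value, not one given by the application.

```python
import numpy as np

def finger_is_near_by_brightness(gray, dark_level=30.0):
    """A finger covering the back camera darkens the frame, so a mean
    brightness near the dark level suggests a short distance D."""
    return float(np.mean(gray)) < dark_level
```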

[0056] Alternatively, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a light sensor implemented in the smartphone. Thus, the sensor 104 detects the ambient light, and accordingly generates the sensing result SR. In other words, the light information is directly provided by the sensor 104. As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. After receiving the sensing result SR (i.e., ambient light detection result), the controller 106 is capable of determining if user's finger is approaching the shutter/capture button by monitoring the brightness variation of the ambient light detection result.

[0057] In a third exemplary implementation, the sensor 104 acts as a shutter/capture button, and the controller 106 is configured to determine the distance D/D.sub.1/D.sub.2 by using proximity information of the object OBJ that is derived from the sensing result SR. For example, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a back camera of the smartphone. Thus, the sensor 104 generates captured images of the object OBJ to serve as the sensing result SR. After receiving the sensing result SR (i.e., captured images of the object OBJ), the controller 106 analyzes each captured image of the object OBJ to obtain the proximity information of the object OBJ. As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. The proximity information of the object OBJ would indicate whether the object OBJ is in the proximity of the image capture device 100. Hence, the proximity information can be used to estimate the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. In this embodiment, the controller 106 determines that the distance D/D.sub.1/D.sub.2 is shorter when the proximity information of the object OBJ indicates that the object OBJ is closer to the image capture device 100. When the proximity information of the object OBJ indicates that the object OBJ is close to the image capture device 100, it is possible that user's finger is going to touch the shutter/capture button.

[0058] Alternatively, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a proximity sensor implemented in the smartphone. Thus, the sensor 104 detects if the object OBJ is in the proximity of the image capture device 100, and accordingly generates the sensing result SR. In other words, the proximity information of the object OBJ is directly provided by the sensor 104. As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. After receiving the sensing result SR (i.e., proximity detection result), the controller 106 is capable of determining if user's finger is approaching the shutter/capture button by monitoring the variation of the proximity detection result.

[0059] In a fourth exemplary implementation, the sensor 104 acts as a shutter/capture button, and the controller 106 is configured to determine the distance D/D.sub.1/D.sub.2 by using range information of the object OBJ that is derived from the sensing result SR. For example, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a back camera of the smartphone. Thus, the sensor 104 generates captured images of the object OBJ to serve as the sensing result SR. After receiving the sensing result SR (i.e., captured images of the object OBJ), the controller 106 analyzes each captured image of the object OBJ to obtain the range information of the object OBJ. As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. The range information of the object OBJ directly gives an estimated value of the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. Hence, the controller 106 determines that the distance D/D.sub.1/D.sub.2 is shorter when the range information of the object OBJ indicates that the object OBJ is closer to the image capture device 100. When the range information of the object OBJ indicates that the object OBJ is close to the image capture device 100, it is possible that user's finger is going to touch the shutter/capture button.

[0060] Alternatively, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a range sensor implemented in the smartphone. Thus, the sensor 104 measures the distance between the object OBJ and the image capture device 100, and accordingly generates the sensing result SR. In other words, the range information of the object OBJ is directly provided by the sensor 104. As the sensor 104 acts as a shutter/capture button, the user may use his/her finger to touch the sensor 104 to trigger the aforementioned specific action. After receiving the sensing result SR (i.e., range detection result), the controller 106 is capable of determining if user's finger is approaching the shutter/capture button by monitoring the variation of the range detection result.

[0061] In a fifth exemplary implementation, the controller 106 is configured to determine the distance D/D.sub.1/D.sub.2 by using depth information of the object OBJ that is derived from the sensing result SR. For example, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a dual-lens camera of the smartphone. Thus, the sensor 104 is capable of generating a plurality of image pairs, each including a left-view captured image and a right-view captured image of the object OBJ, to serve as the sensing result SR. After receiving the sensing result SR (i.e., image pairs), the controller 106 may perform disparity analysis based on the left-view captured image and the right-view captured image of each image pair, and then refer to the disparity analysis result to obtain the depth information of the object OBJ. The estimated depth of the object OBJ is proportional to the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100. Hence, the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100 can be estimated based on the depth information of the object OBJ. In this embodiment, the controller 106 determines that the distance D/D.sub.1/D.sub.2 is shorter when the depth information of the object OBJ indicates that the object OBJ is closer to the image capture device 100. Therefore, before the object OBJ, such as user's finger, actually touches a shutter/capture button to trigger the aforementioned specific action, the depth information of the object OBJ would indicate that the object OBJ is approaching the image capture device 100. When the depth information of the object OBJ indicates that the object OBJ is close to the image capture device 100, it is possible that user's finger is going to touch the shutter/capture button.
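
The application does not spell out the disparity-to-depth conversion, but for a calibrated, rectified stereo pair it is the textbook pinhole relation Z = f.B/d (focal length f in pixels, baseline B, disparity d in pixels). A hypothetical sketch with assumed parameter names:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation Z = f * B / d for a rectified pair."""
    if disparity_px <= 0:
        raise ValueError("object not matched between the two views")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, B = 0.02 m, d = 160 px -> Z = 0.1 m, i.e. the
# object OBJ is roughly 10 cm from the dual-lens camera.
print(depth_from_disparity(160, 800, 0.02))  # 0.1
```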

[0062] Alternatively, the image capture module 102 is a front camera of a smartphone, and the sensor 104 is a depth sensor implemented in the smartphone. Thus, the sensor 104 measures the depth of the object OBJ, and accordingly generates the sensing result SR. In other words, the depth information of the object OBJ is directly provided by the sensor 104. As mentioned above, the user may use his/her finger to touch a shutter/capture button to trigger the aforementioned specific action. After receiving the sensing result SR (i.e., depth detection result), the controller 106 is capable of determining if user's finger is approaching the shutter/capture button by monitoring the variation of the depth detection result.

[0063] In a sixth exemplary implementation, the sensor 104 is implemented using a depth sensing liquid crystal display (LCD) panel. More specifically, the sensor 104 is an LCD panel with depth sensing elements integrated therein. Hence, the sensor 104 may be used to display a virtual shutter/capture button. The controller 106 is configured to determine the distance D/D.sub.1/D.sub.2 by using depth information of the object OBJ that is derived from the sensing result SR, where the depth information of the object OBJ is directly provided by the sensor 104. As the user may use his/her finger to touch the virtual shutter/capture button displayed on the depth sensing LCD panel to trigger the aforementioned specific action, the controller 106 is capable of determining if user's finger is approaching the virtual shutter/capture button by monitoring the variation of the depth detection result. When the object OBJ is close to the virtual shutter/capture button on the screen, it is possible that user's finger is going to touch the virtual shutter/capture button on the screen.

[0064] Regarding the exemplary flows shown in FIG. 3 and FIG. 4, the distance D/D.sub.1/D.sub.2 between the object OBJ and the image capture device 100 needs to be estimated/calculated based on information given by the sensing result SR. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. Please refer to FIG. 5, which is a diagram illustrating a third embodiment of step 206 shown in FIG. 2. In this embodiment, step 206 may be realized using the following steps.

[0065] Step 402: Compare one of an electrical property (e.g., current magnitude) and a magnetic property (e.g., magnetism magnitude) of the sensing result SR with a predetermined threshold TH.sub.P.

[0066] Step 404: Check if the checked property is greater than the predetermined threshold TH.sub.P. If yes, go to step 406; otherwise, go to step 414.

[0067] Step 406: Count a time period T in which the checked property is continuously found greater than the predetermined threshold TH.sub.P.

[0068] Step 408: Compare the time period T with a predetermined time duration TH.sub.T.

[0069] Step 410: Check if the time period T reaches the predetermined time duration TH.sub.T. If yes, go to step 412; otherwise, go to step 402.

[0070] Step 412: Determine that the specific action is about to be triggered.

[0071] Step 414: Determine that the specific action is not about to be triggered.

[0072] In this embodiment, the controller 106 determines that the specific action is about to be triggered when the checked property (e.g., one of the electrical property (e.g., current magnitude) and the magnetic property (e.g., magnetism magnitude) of the sensing result SR) is continuously found greater than the predetermined threshold TH.sub.P over the predetermined time duration TH.sub.T. Specifically, when the checked property becomes greater than the predetermined threshold TH.sub.P, this means that the object OBJ is close to, but does not have contact with, the image capture device 100 (steps 402 and 404). It is possible that the user is going to trigger the specific action associated with the image capture module 102. To avoid misjudgment, the predetermined time duration TH.sub.T is employed in this embodiment. Therefore, if the time period in which the checked property is greater than the predetermined threshold TH.sub.P does not last for the predetermined time duration TH.sub.T, the controller 106 would not decide that the specific action is about to be triggered (steps 406-410). That is, when there is one determination result showing that the checked property is not greater than the predetermined threshold TH.sub.P before the predetermined time duration TH.sub.T expires, the controller 106 skips the current counting operation of the time period T in which the checked property is greater than the predetermined threshold TH.sub.P, and decides that the specific action is not about to be triggered.
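
Structurally this is the FIG. 3 flow with the comparison inverted: the checked property must stay above TH.sub.P, rather than the distance staying below TH.sub.D. A sketch under the same assumptions, with the hypothetical callable read_property standing in for sampling the electrical or magnetic magnitude of SR:

```python
import time

def about_to_trigger_fig5(read_property, th_p, th_t):
    """FIG. 5 criterion: property continuously above TH_P for TH_T seconds."""
    count_start = None                         # start time of the period T
    while True:
        p = read_property()                    # sample current/magnetism magnitude
        if p <= th_p:                          # steps 402-404: not strong enough
            return False                       # step 414: not about to trigger
        if count_start is None:
            count_start = time.monotonic()     # step 406: start counting T
        if time.monotonic() - count_start >= th_t:   # steps 408-410
            return True                        # step 412: about to be triggered
```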

[0073] The flow shown in FIG. 5 is merely one feasible implementation of the step 206 shown in FIG. 2. In an alternative design, the steps 406-410 may be omitted. Hence, the controller 106 may determine that the specific action is about to be triggered each time the checked property is found greater than the predetermined threshold TH.sub.P. This also falls within the scope of the present invention.

[0074] In the exemplary design shown in FIG. 5, steps 406-410 are used to avoid misjudgment by checking if the checked property (e.g., the electrical/magnetic property of the sensing result SR) is continuously found greater than the predetermined threshold TH.sub.P over the predetermined time duration TH.sub.T. Alternatively, a different misjudgment prevention scheme may be employed. Please refer to FIG. 6, which is a diagram illustrating a fourth embodiment of step 206 shown in FIG. 2. In this embodiment, step 206 may be realized using the following steps.

[0075] Step 602: Compare a first checked property P.sub.1 with a predetermined threshold TH.sub.P, where the first checked property P.sub.1 is one of an electrical property (e.g., current magnitude) and a magnetic property (e.g., magnetism magnitude) of the sensing result SR.

[0076] Step 604: Check if the first checked property P.sub.1 is greater than the predetermined threshold TH.sub.P. If yes, go to step 606; otherwise, go to step 612.

[0077] Step 606: Compare a second checked property P.sub.2 with the first checked property P.sub.1, where the second checked property P.sub.2 is also one of the electrical property (e.g., current magnitude) and the magnetic property (e.g., magnetism magnitude) of the sensing result SR. Specifically, both of the first checked property P.sub.1 and the second checked property P.sub.2 may be electrical properties or magnetic properties.

[0078] Step 608: Check if the second checked property P.sub.2 is greater than the first checked property P.sub.1. If yes, go to step 610; otherwise, go to step 612.

[0079] Step 610: Determine that the specific action is about to be triggered.

[0080] Step 612: Determine that the specific action is not about to be triggered.

[0081] In this embodiment, the controller 106 determines that the specific action is about to be triggered when the checked property (i.e., first checked property P.sub.1) is greater than the predetermined threshold TH.sub.P at one time point and then the checked property (i.e., second checked property P.sub.2) becomes greater at the next time point. Specifically, when the first checked property P.sub.1 becomes greater than the predetermined threshold TH.sub.P, this means that the object OBJ is close to, but does not have contact with, the image capture device 100 (steps 602 and 604). It is possible that the user is going to trigger the specific action associated with the image capture module 102. To avoid misjudgment, the electrical/magnetic property of the sensing result SR is checked again. Therefore, if the second checked property P.sub.2 is not greater than the first checked property P.sub.1, the controller 106 would not decide that the specific action is about to be triggered (steps 606, 608, and 612). That is, the controller 106 does not decide that the specific action is about to be triggered unless the sequentially checked properties P.sub.1 and P.sub.2 are both greater than the predetermined threshold TH.sub.P and the latter is greater than the former (steps 608 and 610).

[0082] As mentioned above, the controller 106 refers to one of the electrical property and the magnetic property of the sensing result SR to determine if the specific action is about to be triggered. In a case where the electrical property (e.g., current magnitude) of the sensing result SR is checked in step 402/602/606, the sensor 104 may be implemented using a floating touch panel composed of self-capacitive sensors. Hence, the sensing result SR of the sensor 104 would have its current magnitude inversely proportional to the distance between the object OBJ and the image capture device 100. Due to the use of self-capacitive sensors, the sensor 104 is able to detect the object OBJ before the object OBJ has a physical contact with the sensor 104. In addition, a virtual shutter/capture button may be displayed on a screen beneath the floating touch panel. As the user may trigger the aforementioned specific action by using his/her finger to touch the virtual shutter/capture button (i.e., by making physical contact with the sensor 104 disposed on the screen), the controller 106 is capable of determining if user's finger is approaching the virtual shutter/capture button by monitoring the variation of the current magnitude of the sensing result SR. When the object OBJ is found close to the virtual shutter/capture button on the screen, it is possible that user's finger is going to touch the virtual shutter/capture button.

[0083] In another case where the magnetic property (e.g., magnetism magnitude) of the sensing result SR is checked in step 402/602/606, the object OBJ may be a pen with magnetism, and the sensor 104 may be implemented using a sensor board installed on the image capture device 100. Specifically, based on the magnetic coupling between the object OBJ and the sensor 104, the sensor 104 generates the sensing result SR with a corresponding magnetism magnitude. Hence, the sensing result SR of the sensor 104 would have its magnetism magnitude inversely proportional to the distance between the object OBJ and the image capture device 100. Due to the use of the pen with magnetism, the sensor 104 is able to detect the object OBJ before the object OBJ has a physical contact with a virtual shutter/capture button on a screen to trigger the aforementioned specific action. The controller 106 is capable of determining if the pen with magnetism is approaching the virtual shutter/capture button by monitoring the variation of the magnetism magnitude of the sensing result SR. When the object OBJ is found close to the virtual shutter/capture button, it is possible that the pen with magnetism is going to touch the virtual shutter/capture button.

[0084] Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

* * * * *

