US20140363097A1 - Image capture system and operation method thereof - Google Patents
Image capture system and operation method thereof
- Publication number
- US20140363097A1 (application US 14/222,679)
- Authority
- US
- United States
- Prior art keywords
- feature
- depth map
- unit
- information
- capture system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06T7/0051—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/003—Aspects relating to the "2D+depth" image format
Abstract
An image capture system includes a depth information generation unit, a feature extraction unit, and a merging unit. The depth information generation unit generates depth information corresponding to at least one object of an original image. The feature extraction unit generates feature information corresponding to the at least one object of the original image. The merging unit is coupled to the depth information generation unit and the feature extraction unit, merges the depth information and the feature information into a feature depth map, and outputs the feature depth map to an application unit.
Description
- This application claims the benefit of U.S. Provisional Application No. 61/831,620, filed on Jun. 6, 2013 and entitled “Depth Map Post Process System,” the contents of which are incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to an image capture system and an operation method thereof, and more particularly to an image capture system and an operation method thereof that generate and output a feature depth map simultaneously including depth information and feature information corresponding to at least one object of an original image, thereby decreasing the amount of transmission data and the bandwidth required for the feature depth map.
- 2. Description of the Prior Art
- If a gesture application provided by the prior art needs to determine whether an operator is a valid operator, the simplest method the gesture application can execute is face detection or face determination. Generally speaking, a prior-art gesture application utilizes gray level information or color information of an original image to execute face detection, text recognition, or pattern recognition (e.g. of a quick response (QR) code), and utilizes depth information generated from the original image to execute gesture detection. However, such a gesture application simultaneously needs the information of the original image and the depth information, so its disadvantage is that it requires a larger amount of transmission data and more bandwidth. Therefore, the gesture application provided by the prior art is not a good choice for a user.
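The prior art passage above assumes depth information is generated from an original image, but neither it nor the embodiments below fix the generation method. As a hedged illustration only (the CPC classification G06T7/593 points at depth recovery from stereo images), a minimal sum-of-absolute-differences block-matching disparity sketch could look like the following; `disparity_map`, `max_disp`, and `block` are names invented here, not patent terms:

```python
import numpy as np

def disparity_map(left, right, max_disp=4, block=3):
    """Sum-of-absolute-differences block matching over a rectified stereo
    pair (hypothetical sketch). Returns an integer disparity per pixel;
    a larger disparity means a closer object."""
    h, w = left.shape
    pad = block // 2
    # Edge-replicate padding so windows exist at the image borders.
    lp = np.pad(left.astype(np.int32), pad, mode="edge")
    rp = np.pad(right.astype(np.int32), pad, mode="edge")
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            lwin = lp[y:y + block, x:x + block]
            best_cost, best_d = None, 0
            # A point at x in the left image appears at x - d in the right.
            for d in range(min(max_disp, x) + 1):
                rwin = rp[y:y + block, x - d:x - d + block]
                cost = int(np.abs(lwin - rwin).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Real systems would use a hardware stereo matcher or an optimized library routine; the nested loops here only make the cost function explicit.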
- An embodiment provides an image capture system. The image capture system includes a depth information generation unit, a feature extraction unit, and a merging unit. The depth information generation unit generates depth information corresponding to at least one object of an original image. The feature extraction unit generates feature information corresponding to the at least one object of the original image. The merging unit is coupled to the depth information generation unit and the feature extraction unit, merges the depth information and the feature information into a feature depth map, and outputs the feature depth map to an application unit.
- Another embodiment provides an operation method of an image capture system, wherein the image capture system comprises a depth information generation unit, a feature extraction unit, and a merging unit. The operation method includes the depth information generation unit generating depth information corresponding to at least one object of an original image; the feature extraction unit generating feature information corresponding to the at least one object of the original image; and the merging unit merging the depth information and the feature information into a feature depth map and outputting the feature depth map to an application unit.
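The operation method summarized above (generate depth information, generate feature information, merge both into one feature depth map) can be sketched end-to-end in Python. Everything concrete in this sketch is an assumption for illustration: a gradient magnitude stands in for the unspecified feature extraction unit, and the 0.7/0.3 split stands in for the first and second weights, which the patent does not fix:

```python
import numpy as np

def extract_features(image):
    # Stand-in for the feature extraction unit: gradient magnitude
    # responds to edges (eye edges, a face edge, or a lip).
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    return gx + gy

def operation_method(original_image, depth_information, w1=0.7, w2=0.3):
    # Merge the depth information (first weight w1) and the feature
    # information (second weight w2) into a single feature depth map.
    feature_information = extract_features(original_image)
    return w1 * depth_information + w2 * feature_information
```

A single merged map is then transmitted to the application unit instead of the original image plus a separate depth map, which is where the claimed saving in transmission data and bandwidth comes from.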
- The present invention provides an image capture system and an operation method thereof. The image capture system and the operation method utilize a depth information generation unit of the image capture system to generate depth information corresponding to at least one object of an original image, a feature extraction unit of the image capture system to generate feature information corresponding to the at least one object of the original image, and a merging unit of the image capture system to generate a feature depth map by merging the depth information and the feature information. Compared to the prior art, because the feature depth map simultaneously includes the depth information and the feature information, the amount of transmission data and the bandwidth required for the feature depth map can be decreased.
- These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
- FIG. 1 is a diagram illustrating an image capture system for outputting depth and feature information of at least one object in an original image according to an embodiment.
- FIG. 2 is a diagram illustrating the original image.
- FIG. 3 is a diagram illustrating depth information corresponding to an object of the original image.
- FIG. 4 is a diagram illustrating a feature depth map.
- FIG. 5 is a diagram illustrating a recognition result generated by the application unit after the application unit executes face recognition and gesture recognition on the feature depth map.
- FIG. 6 is a flowchart illustrating an operation method of the image capture system 100 according to another embodiment.
- Please refer to FIG. 1, FIG. 2, FIG. 3, and FIG. 4. FIG. 1 is a diagram illustrating an image capture system 100 according to an embodiment, FIG. 2 is a diagram illustrating an original image OIM, FIG. 3 is a diagram illustrating depth information corresponding to an object of the original image OIM, and FIG. 4 is a diagram illustrating a feature depth map. As shown in FIG. 1, the image capture system 100 includes a depth information generation unit 102, a feature extraction unit 104, and a merging unit 106. The depth information generation unit 102 is used for generating depth information 108 (as shown in FIG. 3) corresponding to an object 110 of the original image OIM (as shown in FIG. 2). However, the present invention is not limited to the original image OIM including only the object 110; that is to say, the original image OIM can include at least one object. The feature extraction unit 104 is used for generating feature information corresponding to the object 110 according to the original image OIM. For example, the feature extraction unit 104 can generate the feature information corresponding to the object 110 according to eye edges, a face edge, or a lip of the object 110 in the original image OIM, but the present invention is not limited to these features. As shown in FIG. 1, the merging unit 106 is coupled to the depth information generation unit 102 and the feature extraction unit 104. After the merging unit 106 receives the depth information 108 from the depth information generation unit 102 and the feature information from the feature extraction unit 104, the merging unit 106 gives a first weight to the depth information 108 and a second weight to the feature information corresponding to the object 110.
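The weighting just described, together with the high-pass/low-pass separation the description associates with FIG. 1, can be sketched as follows. The box-blur low-pass, its complement as the high-pass, and the 0.7/0.3 example weights are all assumptions made for illustration, not details fixed by the patent:

```python
import numpy as np

def merge_feature_depth(depth, feature, w_depth=0.7, w_feature=0.3):
    """Weighted merge of depth information (low-frequency content) and
    feature information (high-frequency content) into one feature depth
    map. w_depth / w_feature stand in for the first and second weights."""
    return w_depth * depth + w_feature * feature

def split_frequencies(fdm, kernel=5):
    """Recover the two parts: a box-blur low-pass approximates the
    low-frequency (depth) part, and its complement serves as a simple
    high-pass for the high-frequency (feature) part."""
    pad = kernel // 2
    padded = np.pad(fdm, pad, mode="edge")
    low = np.zeros_like(fdm, dtype=float)
    h, w = fdm.shape
    for y in range(h):
        for x in range(w):
            low[y, x] = padded[y:y + kernel, x:x + kernel].mean()
    high = fdm - low  # complement of the low-pass output
    return low, high
```

Because the high-pass is defined as the complement of the low-pass, the two parts sum exactly back to the transmitted feature depth map, so no information is lost in the split.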
Then, the merging unit 106 can merge the depth information 108 and the feature information corresponding to the object 110 into a feature depth map 112 (as shown in FIG. 4) and output the feature depth map 112 to an application unit 114 according to the first weight and the second weight, wherein the feature information corresponds to high frequency parts of the feature depth map 112 and the depth information 108 corresponds to low frequency parts of the feature depth map 112. In addition, the image capture system 100 can utilize a high-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the high frequency parts of the feature depth map 112, and a low-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the low frequency parts of the feature depth map 112. - As shown in
FIG. 1, after the application unit 114 receives the feature depth map 112, the application unit 114 utilizes the high frequency parts of the feature depth map 112 to execute face recognition corresponding to the object 110 and the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110. In addition, the application unit 114 can also utilize the high frequency parts of the feature depth map 112 to recognize patterns corresponding to the object 110, or characters shown in the original image OIM. In addition, in another embodiment of the present invention, after the application unit 114 receives the feature depth map 112, the application unit 114 can utilize the high frequency parts of the feature depth map 112 to execute face recognition, text recognition, QR code recognition, pattern recognition, or profile recognition corresponding to the object 110. - Please refer to
FIG. 5. FIG. 5 is a diagram illustrating a recognition result 116 generated by the application unit 114 after the application unit 114 executes face recognition and gesture recognition on the feature depth map 112. As shown in FIG. 5, the recognition result 116 includes a face profile 1162 and a body profile 1164 corresponding to the object 110. Then, the application unit 114 can utilize the recognition result 116 to execute a corresponding operation. Further, in another embodiment of the present invention, the application unit 114 can utilize the low frequency parts of the feature depth map 112 to determine a distance between the object 110 and the image capture system 100. Further, in another embodiment of the present invention, the application unit 114 can simultaneously utilize the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110 and determine a distance between the object 110 and the image capture system 100. - Please refer to
FIGS. 1 to 6. FIG. 6 is a flowchart illustrating an operation method of the image capture system 100 according to another embodiment. The operation method in FIG. 6 is illustrated using the image capture system 100 in FIG. 1. Detailed steps are as follows:
- Step 600: Start.
- Step 602: The depth information generation unit 102 generates depth information 108 corresponding to the object 110 according to the original image OIM.
- Step 604: The feature extraction unit 104 generates feature information corresponding to the object 110 according to the original image OIM.
- Step 606: The merging unit 106 gives a first weight to the depth information 108 and a second weight to the feature information corresponding to the object 110.
- Step 608: The merging unit 106 merges the depth information 108 and the feature information corresponding to the object 110 to generate a feature depth map 112 according to the first weight and the second weight.
- Step 610: End.
- In Step 604, the feature extraction unit 104 can generate the feature information corresponding to the object 110 according to eye edges, a face edge, or a lip of the object 110 in the original image OIM, but the present invention is not limited to these features. In Step 606, after the merging unit 106 receives the depth information 108 from the depth information generation unit 102 and the feature information from the feature extraction unit 104, the merging unit 106 gives the first weight to the depth information 108 and the second weight to the feature information corresponding to the object 110. Then, in Step 608, the merging unit 106 can merge the depth information 108 and the feature information corresponding to the object 110 to generate and output the feature depth map 112 (as shown in FIG. 4) to the application unit 114 according to the first weight and the second weight, wherein the feature information corresponds to high frequency parts of the feature depth map 112 and the depth information 108 corresponds to low frequency parts of the feature depth map 112. In addition, the image capture system 100 can utilize a high-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the high frequency parts of the feature depth map 112, and a low-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the low frequency parts of the feature depth map 112.
- As shown in FIG. 1, after the application unit 114 receives the feature depth map 112, the application unit 114 can utilize the high frequency parts of the feature depth map 112 to execute face recognition corresponding to the object 110 and the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110.
- In addition, in another embodiment of the present invention, after the application unit 114 receives the feature depth map 112, the application unit 114 can utilize the high frequency parts of the feature depth map 112 to execute face recognition, text recognition, QR code recognition, pattern recognition, or profile recognition corresponding to the object 110.
- In addition, the application unit 114 can also utilize the high frequency parts of the feature depth map 112 to recognize veins corresponding to the object 110 or characters shown in the original image OIM.
- In addition, after the application unit 114 executes face recognition and gesture recognition on the feature depth map 112, the application unit 114 can generate a recognition result 116. As shown in FIG. 5, the recognition result 116 includes a face profile 1162 and a body profile 1164 corresponding to the object 110. Then, the application unit 114 can utilize the recognition result 116 to execute a corresponding operation. Further, in another embodiment of the present invention, the application unit 114 can utilize the low frequency parts of the feature depth map 112 to determine a distance between the object 110 and the image capture system 100, or simultaneously execute gesture recognition corresponding to the object 110 and determine the distance between the object 110 and the image capture system 100.
- To sum up, the image capture system and the operation method thereof utilize the depth information generation unit to generate depth information corresponding to at least one object of an original image, the feature extraction unit to generate feature information corresponding to the at least one object of the original image, and the merging unit to generate a feature depth map by merging the depth information and the feature information. Compared to the prior art, because the feature depth map simultaneously includes the depth information and the feature information, the amount of transmission data and the bandwidth required for the feature depth map can be decreased.
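The description states that the application unit can determine a distance from the low-frequency (depth) parts but does not define the depth encoding. Assuming those values are stereo disparities in pixels, the standard pinhole relation distance = focal length × baseline / disparity would apply; the 700 px focal length and 6 cm baseline below are invented calibration values, not patent details:

```python
def distance_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Pinhole-stereo distance estimate in meters from a disparity value
    (hypothetical calibration: 700 px focal length, 6 cm baseline)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With these assumed values, a 42 px disparity corresponds to an object one meter away, and halving the disparity doubles the estimated distance.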
- Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Claims (14)
1. An image capture system comprising:
a depth information generation unit generating a depth information corresponding to at least one object of an original image;
a feature extraction unit generating a feature information corresponding to the at least one object of the original image; and
a merging unit coupled to the depth information generation unit and the feature extraction unit, the merging unit merging the depth information and the feature information into a feature depth map and outputting the feature depth map to an application unit.
2. The image capture system of claim 1, wherein the merging unit gives a first weight to the depth information and a second weight to the feature information, and the merging unit merges the depth information and the feature information into the feature depth map according to the first weight and the second weight.
3. The image capture system of claim 1, wherein the feature information corresponds to high frequency parts of the feature depth map and the depth information corresponds to low frequency parts of the feature depth map.
4. The image capture system of claim 3, wherein the application unit utilizes the high frequency parts of the feature depth map to execute face recognition corresponding to the at least one object.
5. The image capture system of claim 3, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object.
6. The image capture system of claim 3, wherein the application unit utilizes the low frequency parts of the feature depth map to determine a distance between the at least one object and the image capture system.
7. The image capture system of claim 3, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object and determine a distance between the at least one object and the image capture system.
8. An operation method of an image capture system, wherein the image capture system comprises a depth information generation unit, a feature extraction unit, and a merging unit, the operation method comprising:
the depth information generation unit generating a depth information corresponding to at least one object of an original image;
the feature extraction unit generating a feature information corresponding to the at least one object of the original image; and
the merging unit merging the depth information and the feature information into a feature depth map and outputting the feature depth map to an application unit.
9. The operation method of claim 8, wherein the merging unit merging the depth information and the feature information into the feature depth map comprises:
the merging unit giving a first weight to the depth information and a second weight to the feature information; and
the merging unit merging the depth information and the feature information into the feature depth map according to the first weight and the second weight.
10. The operation method of claim 8, wherein the feature information corresponds to high frequency parts of the feature depth map and the depth information corresponds to low frequency parts of the feature depth map.
11. The operation method of claim 10, wherein the application unit utilizes the high frequency parts of the feature depth map to execute face recognition corresponding to the at least one object.
12. The operation method of claim 10, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object.
13. The operation method of claim 10, wherein the application unit utilizes the low frequency parts of the feature depth map to determine a distance between the at least one object and the image capture system.
14. The operation method of claim 10, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object and determine a distance between the at least one object and the image capture system.
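The high/low-frequency separation that claims 3 and 10 rely on can be illustrated with a short sketch. The box-blur low-pass filter, the inverse-depth distance relation, and the names `split_frequencies`, `estimate_distance`, and `scale` are all illustrative assumptions standing in for whatever filtering and calibration a real implementation would use; the patent only states that the depth information occupies the low-frequency parts and the feature information the high-frequency parts.

```python
import numpy as np

def split_frequencies(fdm, k=3):
    """Split a feature depth map into a low-frequency part (a k-by-k box
    blur, standing in for any low-pass filter) and the high-frequency
    residual left over after subtracting it."""
    h, w = fdm.shape
    pad = k // 2
    padded = np.pad(fdm.astype(np.float32), pad, mode='edge')
    low = np.zeros((h, w), dtype=np.float32)
    for dy in range(k):          # accumulate the k*k shifted windows
        for dx in range(k):
            low += padded[dy:dy + h, dx:dx + w]
    low /= k * k
    high = fdm.astype(np.float32) - low
    return low, high

def estimate_distance(low, scale=1000.0):
    """Map the mean low-frequency depth value to a distance estimate;
    the inverse relation and `scale` are assumed calibration choices."""
    return scale / max(float(low.mean()), 1e-6)

flat = np.full((5, 5), 100, dtype=np.uint8)  # a uniform depth region
low, high = split_frequencies(flat)
```

Under this split, an application unit would feed `high` to edge-sensitive tasks such as face recognition and `low` to gesture recognition and distance estimation, as the dependent claims describe.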
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/222,679 US20140363097A1 (en) | 2013-06-06 | 2014-03-23 | Image capture system and operation method thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361831620P | 2013-06-06 | 2013-06-06 | |
US14/222,679 US20140363097A1 (en) | 2013-06-06 | 2014-03-23 | Image capture system and operation method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140363097A1 true US20140363097A1 (en) | 2014-12-11 |
Family
ID=52005140
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/222,679 Abandoned US20140363097A1 (en) | 2013-06-06 | 2014-03-23 | Image capture system and operation method thereof |
US14/297,592 Active 2034-12-10 US10096170B2 (en) | 2013-06-06 | 2014-06-05 | Image device for determining an invalid depth information of a depth image and operation method thereof |
Country Status (2)
Country | Link |
---|---|
US (2) | US20140363097A1 (en) |
CN (1) | CN205123933U (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9066075B2 (en) * | 2009-02-13 | 2015-06-23 | Thomson Licensing | Depth map coding to reduce rendered distortion |
US20140363097A1 (en) * | 2013-06-06 | 2014-12-11 | Etron Technology, Inc. | Image capture system and operation method thereof |
EP3132598A1 (en) * | 2014-04-17 | 2017-02-22 | Sony Corporation | Depth assisted scene recognition for a camera |
US10425630B2 (en) * | 2014-12-31 | 2019-09-24 | Nokia Technologies Oy | Stereo imaging |
US10404969B2 (en) * | 2015-01-20 | 2019-09-03 | Qualcomm Incorporated | Method and apparatus for multiple technology depth map acquisition and fusion |
US9846919B2 (en) | 2015-02-16 | 2017-12-19 | Samsung Electronics Co., Ltd. | Data processing device for processing multiple sensor data and system including the same |
FR3063374B1 (en) * | 2017-02-27 | 2019-06-07 | Stmicroelectronics Sa | METHOD AND DEVICE FOR DETERMINING A DEPTH MAP OF A SCENE |
TWI672677B (en) * | 2017-03-31 | 2019-09-21 | 鈺立微電子股份有限公司 | Depth map generation device for merging multiple depth maps |
CN108932464A (en) * | 2017-06-09 | 2018-12-04 | 北京猎户星空科技有限公司 | Passenger flow volume statistical method and device |
CN109785225B (en) * | 2017-11-13 | 2023-06-16 | 虹软科技股份有限公司 | Method and device for correcting image |
CN109785390B (en) * | 2017-11-13 | 2022-04-01 | 虹软科技股份有限公司 | Method and device for image correction |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030235341A1 (en) * | 2002-04-11 | 2003-12-25 | Gokturk Salih Burak | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
US20050180627A1 (en) * | 2004-02-13 | 2005-08-18 | Ming-Hsuan Yang | Face recognition system |
US20060056679A1 (en) * | 2003-01-17 | 2006-03-16 | Koninklijke Philips Electronics, N.V. | Full depth map acquisition |
US20070024614A1 (en) * | 2005-07-26 | 2007-02-01 | Tam Wa J | Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging |
US20090324059A1 (en) * | 2006-09-04 | 2009-12-31 | Koninklijke Philips Electronics N.V. | Method for determining a depth map from images, device for determining a depth map |
US20090324062A1 (en) * | 2008-06-25 | 2009-12-31 | Samsung Electronics Co., Ltd. | Image processing method |
US20100046837A1 (en) * | 2006-11-21 | 2010-02-25 | Koninklijke Philips Electronics N.V. | Generation of depth map for an image |
US20100066811A1 (en) * | 2008-08-11 | 2010-03-18 | Electronics And Telecommunications Research Institute | Stereo vision system and control method thereof |
US20100183236A1 (en) * | 2009-01-21 | 2010-07-22 | Samsung Electronics Co., Ltd. | Method, medium, and apparatus of filtering depth noise using depth information |
US20120293488A1 (en) * | 2011-05-18 | 2012-11-22 | Himax Media Solutions, Inc. | Stereo image correction system and method |
US20130141433A1 (en) * | 2011-12-02 | 2013-06-06 | Per Astrand | Methods, Systems and Computer Program Products for Creating Three Dimensional Meshes from Two Dimensional Images |
US20140146139A1 (en) * | 2011-07-06 | 2014-05-29 | Telefonaktiebolaget L M Ericsson (Publ) | Depth or disparity map upscaling |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1805981A4 (en) * | 2004-10-05 | 2008-06-11 | Threeflow Inc | Method of producing improved lenticular images |
US20140363097A1 (en) * | 2013-06-06 | 2014-12-11 | Etron Technology, Inc. | Image capture system and operation method thereof |
- 2014-03-23 US US14/222,679 patent/US20140363097A1/en not_active Abandoned
- 2014-06-05 US US14/297,592 patent/US10096170B2/en active Active
- 2014-06-06 CN CN201420301752.7U patent/CN205123933U/en not_active Expired - Lifetime
Non-Patent Citations (3)
Title |
---|
Elmezain et al., "Hand Gesture Recognition Based on Combined Features Extraction," World Academy of Science, Engineering and Technology, Vol. 3, 2009. *
Elmezain et al., "Improving Hand Gesture Recognition Using 3D Combined Features," 2009 Second International Conference on Machine Vision. *
M. Van den Bergh and L. Van Gool, "Combining RGB and ToF cameras for real-time 3D hand gesture interaction," in Workshop on Applications of Computer Vision (WACV), pp. 66-72, 2011. * |
Also Published As
Publication number | Publication date |
---|---|
US10096170B2 (en) | 2018-10-09 |
US20140362179A1 (en) | 2014-12-11 |
CN205123933U (en) | 2016-03-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ETRON TECHNOLOGY, INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEE, CHI-FENG;REEL/FRAME:032503/0475 Effective date: 20140225 |
AS | Assignment |
Owner name: EYS3D MICROELECTRONICS, CO., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ETRON TECHNOLOGY, INC.;REEL/FRAME:037746/0589 Effective date: 20160111 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |