CN103376950A - Image locating method and interactive image system using same - Google Patents

Image locating method and interactive image system using same

Info

Publication number
CN103376950A
CN103376950A (application CN2012101104530A / CN201210110453A)
Authority
CN
China
Prior art keywords
image
reference point
subject image
processing unit
picture frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101104530A
Other languages
Chinese (zh)
Other versions
CN103376950B (en)
Inventor
高铭璨
杨恕先
程瀚平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixart Imaging Inc
Original Assignee
Pixart Imaging Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixart Imaging Inc filed Critical Pixart Imaging Inc
Priority to CN201210110453.0A priority Critical patent/CN103376950B/en
Publication of CN103376950A publication Critical patent/CN103376950A/en
Application granted granted Critical
Publication of CN103376950B publication Critical patent/CN103376950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

An image locating method includes the steps of: capturing an image frame with an image sensor; recognizing at least one object image in the image frame; comparing the size of the object image with a size threshold and recognizing the object image whose size is larger than the size threshold as a reference point image; and locating the reference point image. The invention further provides an interactive image system.

Description

Image locating method and interactive image system using the same
Technical field
The present invention relates to a pointing system, and more particularly to an image locating method and an interactive image system using the same.
Background
An interactive image system generally uses an image sensor to continuously capture a plurality of image frames each containing at least one reference point image, and controls an electronic device according to the position change of the reference point image between the image frames, for example controlling the motion of a cursor shown on a display. In order to control the cursor correctly, the reference point image must first be located correctly in the image frames.
For example, U.S. Patent No. 7,796,116, entitled "Electronic equipment for handheld vision based absolute pointing system," discloses an image locating method. Referring to FIG. 1A, the method includes the steps of: determining an intensity value of every pixel in an image frame 9; determining a rectangular range 92 containing a reference point image 91; identifying a plurality of valid pixels inside the rectangular range 92 whose intensity values are higher than a predetermined threshold; and determining the coordinate of the reference point image 91 according to the intensity values and the pixel positions of the valid pixels.
However, the conventional image locating method locates the reference point image 91 on the premise that no interference in the image frame 9 is brighter than the predetermined threshold. In practice, when interference exists inside the rectangular range 92, the intensity values of some pixels may exceed the predetermined threshold, for example the pixel 921 in FIG. 1B. In that case the coordinate of the reference point image 91 calculated by the conventional method may be shifted.
In addition, interference inside the range of the reference point image 91 may cause the intensity values of some pixels to fall below the predetermined threshold, for example the pixel 911 in FIG. 1B. The coordinate calculated by the conventional method may likewise be shifted, and an incorrect coordinate of the reference point image 91 leads to erroneous control.
Accordingly, the present invention provides an image locating method and an interactive image system using the same, which effectively eliminate interference so as to improve locating accuracy and stability.
Summary of the invention
It is an object of the present invention to provide an image locating method and an interactive image system using the same, which compare an object image with a size threshold and identify whether a reference point image is a hollow image, thereby eliminating interference and the influence of ambient light.
The present invention provides an image locating method including the steps of: capturing an image frame with an image sensor; identifying at least one object image in the image frame with a processing unit; comparing an object image size of the object image with a size threshold with the processing unit, and identifying the object image whose object image size is larger than the size threshold as a reference point image; and locating the reference point image with the processing unit.
The present invention further provides an interactive image system including an electronic device and a remote controller. The electronic device includes at least one reference point and a receiving unit, wherein the receiving unit is configured to receive a control signal. The remote controller includes an image sensor, a processing unit and a transmission unit. The image sensor is configured to continuously capture a plurality of image frames containing at least one object image. The processing unit is configured to identify the object image in the image frames, to identify, according to an object image size of the object image, a reference point image corresponding to the reference point, and to locate the reference point image. The transmission unit transmits the control signal according to the located reference point image.
The present invention further provides an interactive image system including a display device and a remote controller. The display device includes at least one reference point emitting light of a predetermined spectrum. The remote controller continuously captures a plurality of image frames containing at least one object image, identifies, according to an object image size of the object image, a reference point image corresponding to the reference point, locates the reference point image and accordingly controls the display device according to the position change of the reference point image.
In the image locating method and the interactive image system of the embodiments of the present invention, the brightness threshold may be a fixed value or a variable value; the fixed value may be set in advance, and the variable value may be an average brightness of an image frame multiplied by a ratio, wherein the ratio is determined by the variance of the brightness values of the pixels in the image frame.
In the image locating method and the interactive image system of the embodiments of the present invention, the size threshold may be a fixed value or a variable value; the fixed value may be set in advance, and the variable value may be an average size of the reference point images in an image frame multiplied by a ratio, wherein the ratio is determined by the variance of the sizes of the reference point images in the image frame.
In the image locating method and the interactive image system of the embodiments of the present invention, a pixel or a plurality of adjacent pixels whose brightness values are larger than a brightness threshold in an image frame are identified as an object image; an object image that satisfies the object image size requirement is identified as a reference point image; and the center of gravity or the center of the reference point image is calculated according to the brightness values and the pixel positions of the pixels of the reference point image so as to locate the reference point image.
Description of the drawings
FIGS. 1A and 1B are schematic diagrams of a conventional image locating method.
FIG. 2 is a schematic diagram of an interactive image system according to an embodiment of the present invention.
FIG. 3 is a flow chart of an image locating method according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of an image frame and object images in the image locating method according to the embodiment of the present invention.
FIG. 5 is another schematic diagram of an image frame and object images in the image locating method according to the embodiment of the present invention.
FIG. 6 is another schematic diagram of an image frame and object images in the image locating method according to the embodiment of the present invention.
Description of reference numerals
10 electronic device; 11 reference point
12 receiving unit; 13 display screen
131 cursor; 20 remote controller
21 image sensor; 22 processing unit
23 transmission unit; 4 image frame
41 reference point image; 410 pixel; 42 interference
411-414 image sections; 411a, 412a image section start points
411b, 412b image section end points; S31-S34 steps
9 image frame; 91 reference point image
92 rectangular range; 911, 921 pixels
S control signal
Detailed description of the embodiments
In order to make the above and other objects, features and advantages of the present invention more apparent, embodiments are described in detail below with reference to the accompanying drawings. In the description of the present invention, identical components are denoted by the same reference numerals.
Referring to FIG. 2, it shows a schematic diagram of an interactive image system according to an embodiment of the present invention. The interactive image system includes an electronic device 10 and a remote controller 20. The remote controller 20 continuously captures a plurality of image frames containing at least one object image, identifies, according to an object image size of the object image, a reference point image corresponding to at least one reference point, locates the reference point image and accordingly controls the electronic device 10 according to the position and/or the position change of the reference point image, for example controlling a cursor or software executed on the electronic device 10, but not limited thereto. In the description of the present invention, an object image refers to an image that has not yet passed the size identification and thus may be a reference point image, an ambient light image or interference; a reference point image refers to an object image that satisfies a predetermined size range.
The electronic device 10 includes at least one reference point 11 (two reference points are shown herein) and a receiving unit 12. The reference point 11 may be, for example, a light emitting diode or a laser diode configured to emit light of a predetermined spectrum, preferably red light, infrared light or other invisible light. The receiving unit 12 is configured to couple to the remote controller 20 in a wired or wireless manner and to receive a control signal S sent by the remote controller 20. For example, when the electronic device 10 is a display device having a display screen 13, a cursor 131 may be shown on the display screen 13 to be controlled by the remote controller 20; the manner in which a remote controller controls an electronic device is well known, and the spirit of the present invention lies in correctly locating the coordinate of the reference point image. Wired and wireless transmission techniques are likewise well known and thus are not described herein. In other embodiments, the reference point 11 may be disposed separately from the electronic device 10.
The remote controller 20 includes an image sensor 21, a processing unit 22 and a transmission unit 23. The image sensor 21 may be, for example, a CMOS image sensor, a CCD image sensor or another sensor capable of sensing light energy, and is configured to continuously capture and output a plurality of image frames containing at least one object image, wherein the image frames may be analog images or digital images. For example, when the image sensor 21 outputs digital images, the image sensor 21 may include an analog-to-digital converter (ADC) configured to convert analog signals to digital signals. The processing unit 22 may be a digital signal processor (DSP) configured to receive the image frames output by the image sensor 21 and to perform post-processing, including identifying the object images in the image frames, identifying, according to the object image size and/or the object image shape of the object images, a reference point image corresponding to at least one reference point, and locating the reference point image. Finally, the processing unit 22 sends, in a wired or wireless manner through the transmission unit 23, the control signal S to the electronic device 10 according to information (for example the position change) of the reference point image between successive image frames so as to control the electronic device 10 accordingly. When the image sensor 21 outputs analog images, the processing unit 22 may include an analog-to-digital converter configured to convert analog signals to digital signals. In other words, the processing unit 22 locates the reference point image according to a digital image.
More specifically, in the embodiments of the present invention the remote controller 20 distinguishes reference point images from interference according to the size and the shape of the object images before performing the locating, so as to improve locating accuracy and operational stability.
It can be understood that the remote controller 20 usually includes a working memory (not shown) for temporarily storing data and parameters generated during computation, wherein the working memory may be included in or separate from the processing unit 22.
Referring to FIG. 3, it shows a flow chart of an image locating method according to an embodiment of the present invention, which includes the steps of: capturing an image frame with an image sensor (step S31); identifying at least one object image in the image frame with a processing unit (step S32); comparing an object image size of the object image with a size threshold with the processing unit, and identifying the object image whose object image size is larger than the size threshold as a reference point image (step S33); and locating the reference point image with the processing unit (step S34). After the reference point image has been located in a plurality of image frames, the control signal S may be output to the electronic device 10 through the transmission unit 23 according to the position and/or the position change of the reference point image. An overview sketch of the four steps is given below, followed by a detailed description of each step.
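For illustration only, the four steps may be summarized by the following minimal Python sketch, which assumes an 8-bit grayscale frame stored as a NumPy array; the function name, the default threshold values and the use of scipy.ndimage for grouping adjacent pixels are assumptions of this sketch, not elements of the patent.

import numpy as np
from scipy import ndimage

def locate_reference_points(frame, brightness_threshold=128, size_threshold=3):
    """Steps S32-S34 applied to one image frame captured in step S31 (illustrative)."""
    # Step S32: pixels brighter than the brightness threshold form object images;
    # adjacent qualifying pixels belong to the same object image.
    mask = frame > brightness_threshold
    labels, num_objects = ndimage.label(mask)

    coordinates = []
    for obj_id in range(1, num_objects + 1):
        ys, xs = np.nonzero(labels == obj_id)
        # Step S33: object images smaller than the size threshold are treated
        # as interference and excluded (threshold value is a placeholder).
        if xs.size < size_threshold:
            continue
        # Step S34: locate the reference point image at its brightness-weighted
        # center of gravity.
        weights = frame[ys, xs].astype(float)
        cx = float((xs * weights).sum() / weights.sum())
        cy = float((ys * weights).sum() / weights.sum())
        coordinates.append((cx, cy))
    return coordinates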
Step S31: the image sensor 21 of the remote controller 20 continuously captures image frames at a sampling frequency and sends them to the processing unit 22.
Step S32: the processing unit 22 of the remote controller 20 identifies, in the image frame, a pixel or a plurality of adjacent pixels (i.e. pixels connected to one another) whose brightness values are larger than a brightness threshold as an object image. In this embodiment, the processing unit 22 may identify the object image according to, for example, either of the following two embodiments.
In one embodiment, the processing unit 22 first stores a whole image frame in a buffer. The processing unit 22 then compares the brightness value of every pixel of this image frame with at least one brightness threshold; when every pixel of a pixel region (which may include one or more pixels) has a brightness value larger than the brightness threshold, the pixel region is identified as an object image, and adjacent pixels whose brightness values are larger than the brightness threshold are identified as belonging to the same object image. It can be understood that some pixel regions in the image frame may be identified as object images because of interference. The brightness threshold may be, for example, a predetermined ratio of the representable gray-level range; for example, when the brightness is represented by 256 gray levels, the brightness threshold may be 0.5 x 256, wherein the predetermined ratio may be determined according to the identification requirement.
In another embodiment, the processing unit 22 receives the pixel information of the pixels one by one and compares the pixel information with at least one brightness threshold as soon as the pixel information is received. When the brightness value in the pixel information is larger than the brightness threshold, the pixel is identified as a valid pixel and stored in the working memory, and the pixel information of the next pixel is then identified in sequence; this embodiment can reduce the required space of the working memory, as illustrated by the sketch below.
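A minimal sketch of this second, low-memory embodiment follows; it assumes the pixel information arrives as (x, y, brightness) tuples in raster order, and the generator-style interface is an assumption of this sketch rather than the actual sensor interface.

def collect_valid_pixels(pixel_stream, brightness_threshold):
    """Keep only pixels brighter than the threshold, one pixel at a time,
    so the working memory never has to hold a complete image frame."""
    valid_pixels = []
    for x, y, brightness in pixel_stream:
        if brightness > brightness_threshold:
            valid_pixels.append((x, y, brightness))
    return valid_pixels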
For example, referring to FIG. 4, assume that the processing unit 22 reads the pixels of an image frame 4 sequentially row by row, starting from the first pixel of the first row. When, for example, the brightness value of the pixel at coordinate (3, 1) is larger than the brightness threshold, the processing unit 22 records a start coordinate 411a of an image section 411 in the working memory; the information (for example the pixel coordinate and the gray level) of every following pixel of the image section 411, starting from the start coordinate 411a, is then recorded in the working memory; when, for example, the brightness value of the pixel at coordinate (7, 1) is smaller than the brightness threshold, the processing unit 22 records an end coordinate 411b of the image section 411 in the working memory and finishes the identification procedure of the first row. It can be understood that, if another image section exists in the first row, the information of every pixel of that image section is identified and recorded in the same manner.
Then, a start coordinate 412a, an image section 412 and an end coordinate 412b of the second row are identified and recorded with the same procedure. The image sections 411 and 412 are determined to belong to the same object image if the following conditions are satisfied:

Seg_L <= Preline_Obj_i_R; and
Seg_R >= Preline_Obj_i_L;

wherein, taking the reading of row Y of the image frame 4 as an example, Seg_L denotes the left start coordinate of an image section (for example 412) of an unknown object image in row Y; Seg_R denotes the right end coordinate of the image section of the unknown object image in row Y; Preline_Obj_i_L denotes the left start coordinate of the image section (for example 411) of the object image i in row Y-1; and Preline_Obj_i_R denotes the right end coordinate of the image section of the object image i in row Y-1. In other words, adjacent pixels whose brightness values are larger than the brightness threshold are identified as belonging to the same object image. The object image information of the other rows is then identified and recorded with the same procedure; a sketch of the complete row-by-row grouping is given after this paragraph. The detailed implementation of this embodiment may be referred to, for example, U.S. Patent Publication Nos. US2006/0245649 and US2006/0245652, filed by the same assignee as the present application. Similarly, some pixel regions in the image frame 4 may be identified as object images because of interference.
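The following Python sketch illustrates this row-by-row grouping under the stated conditions; the data layout (a 2-D array of brightness values), the dictionary used to represent an object image and the function name are assumptions of this sketch. A section overlapping more than one object image of the previous row is simply merged into the first one found, which is a simplification of a full connected-component merge.

def group_object_images(frame, brightness_threshold):
    """Group above-threshold pixels into object images by scanning row by row."""
    objects = []          # each object image: {'pixels': [(x, y, brightness), ...]}
    prev_row_objs = []    # (left, right, object) tuples found in the previous row

    for y, row in enumerate(frame):
        # Find the image sections of this row: runs of pixels whose brightness
        # values are larger than the brightness threshold.
        sections, start = [], None
        for x, brightness in enumerate(row):
            if brightness > brightness_threshold and start is None:
                start = x                        # image section start point
            elif brightness <= brightness_threshold and start is not None:
                sections.append((start, x - 1))  # image section end point
                start = None
        if start is not None:
            sections.append((start, len(row) - 1))

        cur_row_objs = []
        for seg_l, seg_r in sections:
            target = None
            # Merge with an object image of the previous row when
            # Seg_L <= Preline_Obj_i_R and Seg_R >= Preline_Obj_i_L.
            for prev_l, prev_r, obj in prev_row_objs:
                if seg_l <= prev_r and seg_r >= prev_l:
                    target = obj
                    break
            if target is None:                   # otherwise start a new object image
                target = {'pixels': []}
                objects.append(target)
            target['pixels'].extend((x, y, row[x]) for x in range(seg_l, seg_r + 1))
            cur_row_objs.append((seg_l, seg_r, target))
        prev_row_objs = cur_row_objs
    return objects

The object images produced by this sketch are the input assumed by the size screening sketched under step S33 below.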
After this step is completed, the processing unit 22 of the remote controller 20 has identified at least one object image having a specific shape, wherein the specific shape depends on the identification method used and may be, for example, a rectangle or an irregular shape.
Step S33: the processing unit 22 of the remote controller 20 compares the object images identified in step S32 with a size threshold (or a size range) so as to eliminate interference and ambient light. For example, an object image whose object image size is larger than the size threshold may be identified as a reference point image, and an object image whose object image size is smaller than the size threshold may be identified as interference. For example, referring to FIG. 5, the image frame 4 contains an identified object image 41 and an identified object image 42, wherein the brightness values of the pixels of the object image 42 are larger than the brightness threshold because of interference. In this embodiment, the size threshold may be set, for example but not limited to, to 3 pixels, so that an object image smaller than 3 pixels is judged as interference and excluded; the object image 42 in FIG. 5 is therefore excluded from the locating. In addition, an object image larger than a preset area may be judged as ambient light and likewise excluded, wherein the preset area may be determined, for example, according to the size of the reference point 11 and the operating distance between the remote controller 20 and the electronic device 10, and the reference point sizes corresponding to different operating distances may be stored in advance. Furthermore, after the reference point image is identified, the processing unit 22 of the remote controller 20 may further identify whether the reference point image matches a preset shape (for example a circle herein); an image that does not match the preset shape may be regarded as another object in the environment, whereby interference from ambient light is eliminated.
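A minimal sketch of this size and area screening is given below, operating on the object images produced by group_object_images() above; the default threshold value and the max_area parameter are placeholders chosen for illustration, not values from the patent.

def filter_reference_images(objects, size_threshold=3, max_area=400):
    """Keep object images whose pixel count falls inside the allowed range."""
    reference_images = []
    for obj in objects:
        area = len(obj['pixels'])
        if area < size_threshold:
            continue                 # too small: judged as interference
        if area > max_area:
            continue                 # too large: judged as ambient light
        reference_images.append(obj)
    return reference_images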
Step S34: the processing unit 22 of the remote controller 20 calculates the center of gravity or the center of the reference point image as the coordinate of the reference point image so as to perform the locating. For example, the center of gravity or the center of the reference point image is calculated according to the brightness values and the pixel positions of at least a portion of the pixels of the reference point image.
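A minimal sketch of the brightness-weighted center of gravity follows, assuming the reference point image is a list of (x, y, brightness) pixels as in the earlier sketches.

def center_of_gravity(pixels):
    """Brightness-weighted center of gravity of a reference point image."""
    total = sum(b for _, _, b in pixels)
    cx = sum(x * b for x, _, b in pixels) / total
    cy = sum(y * b for _, y, b in pixels) / total
    return cx, cy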
Finally, the processing unit 22 of the remote controller 20 sends the calculation result to the electronic device 10 through the transmission unit 23.
In another embodiment, the processing unit 22 of the remote controller 20 further performs a step of identifying whether the reference point image is a hollow image, so as to avoid excluding, in the locating process, pixels inside the reference point image whose brightness values are lower than the brightness threshold because of interference; that is, the processing unit 22 of the remote controller 20 identifies whether any pixel inside the range of the reference point image has a brightness value smaller than the brightness threshold, and judges, according to the pixel area inside the reference point image whose brightness values are smaller than the brightness threshold, whether the reference point image is a hollow image. For example, referring to FIG. 6, assume that the processing unit 22 has identified a reference point image 41 according to step S33, and the reference point image 41 contains pixels 410 whose brightness values are lower than the brightness threshold (referred to as a hollow area, for example); the processing unit 22 then calculates the area ratio of the pixel area of the hollow area to the entire area of the reference point image 41. When the area ratio is larger than an area threshold, the hollow area is not caused by interference (i.e. the reference point image is a hollow image), and the brightness values and the pixel positions of the pixels of the hollow area are excluded from the locating process; when the area ratio is smaller than the area threshold, the hollow area is caused by interference (i.e. the reference point image is a solid image), and the brightness values and the pixel positions of the pixels of the hollow area are included in the locating process. The detailed implementation of this embodiment may likewise be referred to U.S. Patent Publication No. US2006/0245649, filed by the same assignee as the present application. The processing unit 22 then calculates the center of gravity or the center of the reference point image using the brightness values and the pixel positions of at least a portion of the pixels of the reference point image (which may include or exclude the hollow area).
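The hollow-image check may be sketched as follows; the bounding box is used here as a stand-in for the "entire area" of the reference point image, and both that choice and the default area threshold are assumptions of this sketch.

def is_hollow_image(obj, area_threshold=0.2):
    """Return True when the below-threshold area inside the reference point
    image is too large to be explained by interference."""
    xs = [x for x, _, _ in obj['pixels']]
    ys = [y for _, y, _ in obj['pixels']]
    bbox_area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    hollow_area = bbox_area - len(obj['pixels'])   # pixels below the threshold
    return hollow_area / bbox_area > area_threshold

When this check returns True, the below-threshold pixels would simply stay out of the center_of_gravity() computation; otherwise their brightness values would also be read back from the frame and included.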
In addition, in the present invention, both the brightness threshold and the size threshold may be fixed values or variable values that are dynamically adjusted.
For example, after the identification of one image frame is finished, the processing unit 22 calculates the average brightness of all pixels of the current image frame, and multiplies the average brightness by a ratio (for example X% x average brightness) to serve as the variable brightness threshold of the next image frame, wherein X is a positive number. In addition, the value of X may be fine-tuned according to the variance of the brightness values of the pixels in the whole image frame; for example, when the variance is larger, the brightness values of the pixels in the image frame differ more from one another and the value of X is preferably increased; conversely, when the variance is smaller, the brightness values of the pixels in the image frame differ less from one another and the value of X is preferably decreased, wherein the fine-tuned value may, for example, be positively correlated with the variance so as to improve the correctness of identifying the object images. In other words, the brightness threshold may be determined according to the average brightness, the maximum brightness or the minimum brightness of the previous image or the current image, so that the brightness threshold is determined according to the brightness of the actually captured image.
Similarly, after the identification of one image frame is finished, the processing unit 22 calculates the average size of all reference point images in the current image frame, and multiplies the average size by a ratio (for example Y% x average size) to serve as the variable size threshold of the next image frame, wherein Y is a positive number. In addition, the value of Y may be fine-tuned according to the variance of the sizes of the reference point images in the whole image frame; for example, when the variance is larger, the sizes of the reference point images in the image frame differ more from one another and the value of Y is preferably increased; conversely, when the variance is smaller, the sizes of the reference point images in the image frame differ less from one another and the value of Y is preferably decreased, wherein the fine-tuned value may, for example, be positively correlated with the variance so as to improve the correctness of identifying the reference point images. In other words, the size threshold may be determined according to the average object image size, the maximum object image size or the minimum object image size of the previous image or the current image, so that the size threshold is determined according to the object image sizes of the actually captured image.
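The two variable thresholds may be sketched as follows, assuming the previous frame is available as a NumPy array and the previously identified reference point images as pixel lists; the base ratios and the variance-driven tuning factors stand in for X and Y and are placeholders that would be tuned for a real sensor.

import numpy as np

def next_brightness_threshold(prev_frame, base_ratio=0.5, tune=1e-4):
    """Variable brightness threshold: the ratio X grows with the brightness variance."""
    ratio = base_ratio + tune * prev_frame.var()
    return ratio * prev_frame.mean()

def next_size_threshold(prev_reference_images, base_ratio=0.5, tune=1e-2):
    """Variable size threshold: the ratio Y grows with the variance of the sizes."""
    sizes = np.array([len(obj['pixels']) for obj in prev_reference_images], float)
    ratio = base_ratio + tune * sizes.var()
    return ratio * sizes.mean()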
In summary, the conventional image locating method is easily affected by interference, which may shift the image coordinate. The present invention therefore provides an image locating method (FIG. 3) and an interactive image system using the same (FIG. 2), which eliminate interference and improve stability by comparing the object images with a size threshold and by identifying whether the reference point image is a hollow image.
Although the present invention has been disclosed by way of the foregoing embodiments, they are not intended to limit the present invention. Any person skilled in the art to which the present invention pertains may make various changes and modifications without departing from the spirit and scope of the present invention. The scope of protection of the present invention is therefore defined by the appended claims.

Claims (20)

1. An image locating method, comprising the steps of:
capturing an image frame with an image sensor;
identifying at least one object image in the image frame with a processing unit;
comparing an object image size of the object image with a size threshold with the processing unit, and identifying the object image whose object image size is larger than the size threshold as a reference point image; and
locating the reference point image with the processing unit.
2. The image locating method according to claim 1, wherein in the step of identifying at least one object image in the image frame, the processing unit identifies a pixel or a plurality of adjacent pixels in the image frame whose brightness values are larger than a brightness threshold as the object image.
3. The image locating method according to claim 2, wherein the brightness threshold is a fixed value or a variable value, and the variable value is an average brightness of the image frame multiplied by a ratio.
4. The image locating method according to claim 1, wherein the size threshold is a fixed value or a variable value, and the variable value is an average size of the reference point images in the image frame multiplied by a ratio.
5. The image locating method according to claim 1, further comprising:
identifying, with the processing unit, whether the reference point image is a hollow image.
6. The image locating method according to claim 1, wherein in the step of locating the reference point image, the processing unit performs the locating according to brightness values and pixel positions of at least a portion of the pixels of the reference point image.
7. The image locating method according to claim 1, further comprising:
identifying, with the processing unit, whether the reference point image matches a preset shape.
8. An interactive image system, comprising:
an electronic device comprising:
at least one reference point; and
a receiving unit configured to receive a control signal; and
a remote controller comprising:
an image sensor configured to continuously capture a plurality of image frames containing at least one object image;
a processing unit configured to identify the object image in the image frames, to identify, according to an object image size of the object image, a reference point image corresponding to the reference point, and to locate the reference point image; and
a transmission unit configured to transmit the control signal according to the located reference point image.
9. The interactive image system according to claim 8, wherein the processing unit identifies a pixel or a plurality of adjacent pixels in the image frames whose brightness values are larger than a brightness threshold as the object image.
10. The interactive image system according to claim 9, wherein the brightness threshold is a fixed value or a variable value, and the variable value is an average brightness of the image frame multiplied by a ratio.
11. The interactive image system according to claim 8, wherein the processing unit identifies the object image whose object image size is larger than a size threshold as the reference point image and identifies the object image whose object image size is smaller than the size threshold as interference.
12. The interactive image system according to claim 11, wherein the size threshold is a fixed value or a variable value, and the variable value is an average size of the reference point images in the image frame multiplied by a ratio.
13. The interactive image system according to claim 8, wherein the processing unit further identifies whether the reference point image is a hollow image.
14. The interactive image system according to claim 8, wherein the processing unit performs the locating according to brightness values and pixel positions of at least a portion of the pixels of the reference point image.
15. The interactive image system according to claim 8, wherein the processing unit further identifies whether the reference point image matches a preset shape.
16. An interactive image system, comprising:
a display device comprising at least one reference point configured to emit light of a predetermined spectrum; and
a remote controller configured to continuously capture a plurality of image frames containing at least one object image, to identify, according to an object image size of the object image, a reference point image corresponding to the reference point, to locate the reference point image, and to accordingly control the display device according to a position change of the reference point image.
17. The interactive image system according to claim 16, wherein the remote controller identifies a pixel or a plurality of adjacent pixels in the image frames whose brightness values are larger than a brightness threshold as the object image.
18. The interactive image system according to claim 17, wherein the remote controller identifies the object image whose object image size is larger than a size threshold as the reference point image and identifies the object image whose object image size is smaller than the size threshold as interference.
19. The interactive image system according to claim 18, wherein the remote controller further identifies whether any pixel in the reference point image has a brightness value smaller than the brightness threshold.
20. The interactive image system according to claim 19, wherein the remote controller judges, according to the pixel area in the reference point image whose brightness values are smaller than the brightness threshold, whether the reference point image is a hollow image.
CN201210110453.0A 2012-04-13 2012-04-13 Image locating method and interactive image system using the same Active CN103376950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210110453.0A CN103376950B (en) 2012-04-13 2012-04-13 Image locating method and interactive image system using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210110453.0A CN103376950B (en) 2012-04-13 2012-04-13 Image locating method and interactive image system using the same

Publications (2)

Publication Number Publication Date
CN103376950A true CN103376950A (en) 2013-10-30
CN103376950B CN103376950B (en) 2016-06-01

Family

ID=49462150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210110453.0A Active CN103376950B (en) 2012-04-13 2012-04-13 Image locating method and interactive image system using the same

Country Status (1)

Country Link
CN (1) CN103376950B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787605A (en) * 2004-12-10 2006-06-14 精工爱普生株式会社 Control system, apparatus compatible with the system, and remote controller
CN101877056A (en) * 2009-12-21 2010-11-03 北京中星微电子有限公司 Facial expression recognition method and system, and training method and system of expression classifier
CN101776952A (en) * 2010-01-29 2010-07-14 联动天下科技(大连)有限公司 Novel interactive projection system
US20110310302A1 (en) * 2010-06-17 2011-12-22 Satoru Takeuchi Image processing apparatus, image processing method, and program
CN102136058A (en) * 2011-04-26 2011-07-27 中国农业大学 Bar code image identification method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108445873A (en) * 2017-02-16 2018-08-24 深圳市昊宇世纪科技有限公司 A kind of intelligent mobile device and its movement technique
CN108229318A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 The training method and device of gesture identification and gesture identification network, equipment, medium

Also Published As

Publication number Publication date
CN103376950B (en) 2016-06-01


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant