CN103034373A - Automatic selection method and system for effective region of area array camera positioning image - Google Patents
- Publication number
- CN103034373A CN103034373A CN2012104814160A CN201210481416A CN103034373A CN 103034373 A CN103034373 A CN 103034373A CN 2012104814160 A CN2012104814160 A CN 2012104814160A CN 201210481416 A CN201210481416 A CN 201210481416A CN 103034373 A CN103034373 A CN 103034373A
- Authority
- CN
- China
- Prior art keywords
- image
- rectangle
- correction point
- touch
- screen correction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an automatic selection method and system for the effective region of an area array camera positioning image. The method comprises the following steps: photographing an original area array image and binarizing it; setting a screen correction point on the display screen and obtaining the lateral position of the correction point in the original image; setting a small image region centered on that lateral position, with a preset value as its width in the left-right direction; extracting the noise portions in the small image region and dividing them into an upper part and a lower part; extracting the rectangular region between the two noise parts as the target rectangle; and acquiring, in real time, images of a touch object clicking the screen correction point, obtaining the upper and lower light spots entering the target rectangle, and determining the acquisition baseline value of the background frame image from the positional relationship of the two spots, so as to determine the effective acquisition region of the image. With this method and system, the black image region corresponding to the background frame can be re-acquired, and the position of the target background frame image obtained, without a professional adjusting the physical position of the camera.
Description
Technical field
The present invention relates to touch-screen camera positioning technology, and in particular to an automatic selection method and system for the effective region of an area array camera positioning image.
Background art
Camera-based positioning, with its high frame rate and simple structure, currently occupies an important place in the touch-technology field. Area array camera positioning uses an ordinary area array CCD/CMOS sensor as the image acquisition element: through parameter settings, a small narrow-band image containing the touch light spot is extracted from the full high-resolution CCD image and output as the original image for subsequent touch-spot identification. The selected narrow-band image generally corresponds to the touch frame of the positioning device. In the dark-field bright-spot configuration, when a touch object enters the positioning frame the camera captures the light reflected by the object, while the touch frame itself is a black background region. For example, Chinese patent 200910039966.5 discloses a touch-screen positioning device and method in which a wide-angle camera is placed at one corner of the display screen; two dark light-absorbing frames are mounted on the two screen edges adjacent to the camera, perpendicular to the screen plane, and two plane-mirror frames are mounted on the two edges opposite the camera, also perpendicular to the screen plane. Area array camera positioning is, however, an optical technique: if the camera's position changes, the background frame image output under the previous parameters may no longer face the background frame, or may face it only partly, so the captured background frame image may contain environmental noise. Figure 2 shows an original area array image captured by the camera: the whitened regions are environmental noise light, and the darker band in the middle is the captured background frame. Figure 3 shows the normal touch situation, where the parameter settings make the camera collect the background frame image enclosed by the four white lines in Fig. 3 as the original background frame image for touch positioning. In practice, however, if the camera is displaced, the image position changes while the previous acquisition parameters do not, so the collected original background frame image no longer falls on the region corresponding to the positioning frame. As shown in Fig. 4, the whole image has shifted upward, so the part enclosed by the four white lines has left the region corresponding to the background frame. When a real touch object then enters the touch area, the camera cannot identify and output the real touch spot, and touch positioning through the background frame fails. One way to solve this is to adjust the camera's hardware position so that the image it captures again faces the narrow edge of the background frame, but this operation is complicated and technically demanding: it is generally suitable only for professional technicians and cannot be performed by ordinary users, which increases the difficulty of later maintenance.
Summary of the invention
The primary purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art and to provide an automatic selection method for the effective region of an area array camera positioning image that can automatically select the effective touch region.
Another object of the present invention is to provide an automatic selection system for the effective region of an area array camera positioning image.
The purpose of the present invention is achieved through the following technical solutions:
An automatic selection method for the effective region of an area array camera positioning image comprises the following steps:
Photographing an original area array image and binarizing it;
Setting a screen correction point on the display screen and obtaining the lateral position of the screen correction point in the original area array image;
Setting a small image region centered on the lateral position of the screen correction point, taking a preset width value as its width in the left-right direction;
Extracting the noise portions in the small image region and dividing them into an upper part and a lower part;
Extracting the rectangular region between the upper and lower noise parts as the target rectangle;
Acquiring in real time images of a touch object clicking the screen correction point, obtaining the upper and lower light spots entering the target rectangle, determining the acquisition baseline value of the background frame image from the positional relationship of the two spots, and determining the effective acquisition region of the touch positioning image from the acquisition baseline value.
An automatic selection system for the effective region of an area array camera positioning image, characterized in that it comprises:
An image acquisition unit, for photographing and binarizing the original area array image and real-time touch images, transferring the original area array image to the screen correction point setting unit and the real-time touch images to the effective acquisition region obtaining unit;
A screen correction point setting unit, for setting a screen correction point on the display screen, obtaining the lateral position of this correction point in the original area array image, and transferring it to the small image region setting unit;
A small image region setting unit, for setting a small image region centered on the lateral position of the screen correction point, taking a preset width value as its width in the left-right direction, and transferring it to the noise extraction unit;
A noise extraction unit, for extracting the noise portions in the small image region, dividing them into an upper part and a lower part, and transferring them to the target rectangle obtaining unit;
A target rectangle obtaining unit, for extracting the rectangular region between the upper and lower noise parts as the target rectangle;
An effective acquisition region obtaining unit, for receiving the real-time touch images, obtaining from the images of a touch object clicking the screen correction point the upper and lower light spots entering the target rectangle, determining the acquisition baseline value of the background frame image from the positional relationship of the two spots, and determining the effective acquisition region of the touch positioning image from the acquisition baseline value.
Compared with the prior art, the technical solution of the present invention has the following beneficial effects:
The present invention first acquires an original image to locate the preset screen correction point; a small image region centered on the correction point is cut out of the original area array image; the noise in the small region is extracted and divided into an upper part and a lower part; the image position corresponding to the positioning frame (the target rectangle) is obtained from the left and right extents of the small region and the positions of the two noise parts. Images are then acquired continuously while a touch object clicks the screen correction point, and the upper and lower light spots inside the target rectangle are tracked; when the two spots draw close together, the top positions of the spots are calculated, which determines the acquisition baseline value of the background frame image for this correction point, and this baseline value establishes the effective region of the touch positioning image. The effective region of the area array camera image is thus selected automatically, on the basis of image processing aided only by the operation of clicking the screen: no professional needs to adjust the camera's hardware position; it suffices to re-extract the black image region corresponding to the background frame (the positioning frame) and obtain the position of the target background frame image. The approach is flexible, does not rely on manual adjustment, and avoids human error.
Description of drawings
Fig. 1 is a schematic diagram of a prior-art touch-screen positioning device;
Fig. 2 is a schematic diagram of an original area array image captured in the present invention;
Fig. 3 is a schematic diagram of the background frame image capture region for light-spot output in a normal original area array image;
Fig. 4 is a schematic diagram of the background frame image capture region for light-spot output in an offset original area array image;
Fig. 5 is the flow chart of an embodiment of the automatic selection method for the effective region of an area array camera positioning image of the present invention;
Fig. 6 is a schematic diagram of the screen correction point;
Fig. 7 is a schematic diagram of the noise regions and background frame image region in the original area array image;
Fig. 8 is the flow chart of a preferred embodiment of the automatic selection method for the effective region of an area array camera positioning image of the present invention;
Fig. 9 is the structural diagram of an embodiment of the automatic selection system for the effective region of an area array camera positioning image of the present invention.
Embodiment
The technical solution of the present invention is further described below with reference to the drawings and embodiments, but the embodiments of the present invention are not limited thereto.
Figure 1 is a structural diagram of a prior-art touch-screen positioning device. Positioning frames 301, 302 and 303 of black light-absorbing material are attached to the left, right and lower edges of the touch screen 101; cameras 401 and 402 are installed at the left and right corners of the upper edge, and their shooting angles cover the whole touch screen. In the dark-field bright-spot configuration, because the positioning frame uses a light-absorbing material, the image corresponding to the narrow edge of the positioning frame is a black region in the image captured by the camera, while on both sides outside this region the image may contain noise regions of irregular shape and varying number and pixel size. These noises are all distributed on the two sides of the black image region corresponding to the positioning frame. On this hardware platform, this embodiment uses the following method to obtain the black image region corresponding to the positioning frame and the position of the target background frame image. Since the technical solution of the present invention is the same for every camera, the upper-left camera 401 in Fig. 1 is taken as the example here.
Figure 5 is the flow chart of an embodiment of the automatic selection method for the effective region of an area array camera positioning image of the present invention. As shown in Fig. 5, the specific steps of the method in this embodiment are:
Step S201: photograph an original area array image and binarize it. As shown in Fig. 3, in the captured original area array image the whitened regions are environmental noise light, and the darker band in the middle is the captured positioning frame.
Step S202: set a screen correction point on the display screen and obtain the lateral position of the correction point in the original area array image.
Figure 6 is a schematic diagram of the screen correction point, where P is the correction point, C is the camera, and aix is the position of the camera's optical axis. The screen correction point P is set empirically, generally at a position the camera can conveniently capture. The position of camera C can be read from the positional parameters, so the deflection angle of line CP can be determined; the deflection angle of the optical axis aix of camera C can likewise be read from the positional parameters. From these, through geometric transformation combined with the parameters under which camera C acquires the original area array image and the screen coordinates of the correction point, the lateral position of the light spot in the original area array image when point P is touched can be calculated.
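The geometric relation just described (camera C, correction point P, and the two deflection angles) can be sketched as a small pinhole-model calculation. This is an illustrative sketch under assumptions only: the function name, the focal length in pixels, and the principal-point column are not given in the text.

```python
import math

def lateral_position(cam_xy, point_xy, axis_angle_deg, focal_px, center_col):
    """Estimate the image column of screen correction point P.

    cam_xy, point_xy : camera C and correction point P in screen coordinates
    axis_angle_deg   : deflection angle of the optical axis (from parameters)
    focal_px         : focal length in pixels (assumed calibration value)
    center_col       : principal-point column of the sensor (assumed)
    """
    dx = point_xy[0] - cam_xy[0]
    dy = point_xy[1] - cam_xy[1]
    cp_angle = math.degrees(math.atan2(dy, dx))    # deflection angle of line CP
    off = math.radians(cp_angle - axis_angle_deg)  # angle between CP and the axis
    return center_col + focal_px * math.tan(off)   # pinhole projection

# When CP lies exactly along the optical axis, the spot falls on the
# principal-point column:
col = lateral_position((0.0, 0.0), (500.0, 500.0), 45.0, 800.0, 640.0)
```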
Step S203: centered on the lateral position of the screen correction point, set a small image region A0 taking a preset width value as its width in the left-right direction; the width value is set empirically. Because the small image region A0 is centered on the screen correction point, when a touch object touches the correction point on the display screen, the light spot of the stylus appears within A0.
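The cropping of the small image region A0 around the correction point's column can be illustrated as follows. The function and the list-of-rows image representation are hypothetical, and `half_width` stands in for the empirically set width value.

```python
def small_region(img, center_col, half_width):
    """Crop a vertical strip of the binarized image around center_col.

    img is a row-major list of pixel rows (binarized: 0/1).  Returns the
    strip and its left column, clamped to the image bounds.
    """
    w = len(img[0])
    left = max(0, center_col - half_width)
    right = min(w, center_col + half_width)
    return [row[left:right] for row in img], left

# A 100-column image cropped to a 20-column strip around column 50:
img = [[0] * 100 for _ in range(8)]
strip, left = small_region(img, 50, 10)  # strip spans columns 40..59
```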
Step S204: extract the noise portions in the small image region and divide them into an upper part and a lower part.
Step S205: extract the rectangular region between the upper and lower noise parts as the target rectangle.
Step S206: acquire in real time images of a touch object clicking the screen correction point, obtain the upper and lower light spots entering the target rectangle, determine the acquisition baseline value of the background frame image from the positional relationship of the two spots, and determine the effective acquisition region of the touch positioning image from the baseline value.
Accordingly, under the technical scheme of this specific embodiment, an original area array image is first acquired to locate the preset screen correction point; a small image region centered on the correction point is cut out of the original image; the noise in it is extracted and divided into an upper part and a lower part; the image position corresponding to the positioning frame (the target rectangle) is obtained from the left and right extents of the small region and the positions of the two noise parts. Images are then acquired continuously while a touch object clicks the correction point, the upper and lower light spots within the target rectangle are tracked, and when the spots draw close together the pixel positions of their tops are calculated, determining the acquisition baseline value of the background frame image for this correction point; this baseline value establishes the effective acquisition region of the touch positioning image. The effective region of the area array camera image is thus selected automatically, on the basis of image processing aided by a screen click, without a professional adjusting the camera hardware: it suffices to re-extract the black image region where the background frame lies and obtain the position of the target background frame image. The approach is flexible, relies on no manual adjustment, and avoids human error.
In a specific implementation, extracting the noise portions in the small image region and dividing them into two parts in step S204 can be realized as follows: perform image recognition on the small region to extract the noise portions, represent each noise portion by a rectangle, and classify the rectangles by position into an upper group and a lower group. In this implementation, the position and size of each noise region are located through the screen coordinates of its rectangle. The classification is carried out according to the rectangles' positions: the position coordinates of each rectangle are obtained in turn and the rectangles are divided into the upper and lower groups, i.e. the two groups on either side of the image band where the background frame lies. Because noise may exist on both sides outside the effective region corresponding to the narrow edge of the positioning frame, locating the noise portions locates the effective region of the positioning image. As shown in Fig. 7, the large rectangle A0 is the small image region, and the two small rectangles z1 and z2 are the noise regions.
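The classification of noise rectangles into an upper and a lower group might look like the following sketch. Splitting at the strip's vertical midpoint is an assumed criterion, since the text only says the rectangles are classified by position.

```python
def split_noise_rects(rects, region_height):
    """Group noise bounding boxes into an upper and a lower set.

    rects: (x, y, w, h) bounding boxes of noise blobs, with y increasing
    downward.  Each rectangle is assigned by comparing its vertical centre
    to the strip's midpoint (assumed criterion).
    """
    mid = region_height / 2.0
    upper = [r for r in rects if r[1] + r[3] / 2.0 < mid]
    lower = [r for r in rects if r[1] + r[3] / 2.0 >= mid]
    return upper, lower

# Two blobs in a 48-row strip: one near the top, one near the bottom.
upper, lower = split_noise_rects([(2, 1, 5, 3), (3, 40, 6, 4)], 48)
```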
In a specific implementation, extracting the rectangular region between the upper and lower noise parts as the target rectangle in step S205 is carried out as follows: obtain the coordinate position of the rectangle with the minimum vertical coordinate in the upper group and that of the rectangle with the maximum vertical coordinate in the lower group, and, combined with the left and right extents of the small image region, extract a rectangular region as the target rectangle and record its coordinate position. The zone between the noise regions is the zone of the background frame image; the region between these two coordinate positions, bounded laterally by the left and right sides of the small region, therefore yields the target rectangle. As shown in Fig. 7, the small rectangle B is the target rectangle, lying between the noise regions z1 and z2.
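Under the common image convention that y grows downward, the gap between the two noise bands can be computed as below; the function and the (x, y, w, h) rectangle representation are assumptions for illustration.

```python
def target_rectangle(upper, lower, strip_left, strip_width):
    """Rectangle between the two noise bands (the background frame band).

    With y increasing downward, the gap's top is the lowest bottom edge of
    the upper noise group and its bottom is the highest top edge of the
    lower group; laterally it spans the small image region's full width.
    """
    top = max(y + h for (_, y, _, h) in upper)   # bottom edge of upper band
    bottom = min(y for (_, y, _, _) in lower)    # top edge of lower band
    return (strip_left, top, strip_width, bottom - top)

# With the noise boxes from the previous sketch and a 20-column strip
# starting at column 40:
rect = target_rectangle([(2, 1, 5, 3)], [(3, 40, 6, 4)], 40, 20)
```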
In a specific implementation, because the touch screen is generally covered with glass and is therefore reflective, two symmetrical light spots appear when a touch object touches the screen, and these two spots move toward each other; that is, within the target rectangle there are two light spots, distributed one above the other and approaching one another. Therefore, in step S205, the target rectangle in the real-time images of the touch object clicking the screen correction point contains an upper and a lower light spot.
In step S205, obtaining the upper and lower light spots entering the target rectangle, determining the acquisition baseline value of the background frame image from their positional relationship, and determining the effective acquisition region of the image from the baseline value is carried out as follows: set a distance threshold; obtain the pixel positions of the tops of the two light spots; when the distance between the two top pixel positions is less than the distance threshold, take the symmetric midpoint of the two top positions as the acquisition baseline value of the background frame image, and determine the effective region of the touch positioning image with the baseline value as the reference. If the pixel positions of the two spot tops are Y0 and Y1, their symmetric midpoint is Y = (Y0 + Y1)/2. The extent of the effective region is determined relative to the acquisition baseline value and its size is set empirically: for example, taking the baseline value as the reference, a band 8 pixels high and 1280 pixels long can be taken downward; the image cut from the original area array image in this way is the background frame image, which is used for subsequent touch positioning.
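The baseline computation Y = (Y0 + Y1)/2 and the empirical 8 x 1280 band described above can be sketched as follows; the function names are hypothetical, while the midpoint formula, threshold test, and band dimensions follow the text.

```python
def acquisition_baseline(y0, y1, dist_threshold):
    """Midpoint of the two spot-top rows, taken only once the approaching
    spots are closer than the distance threshold."""
    if abs(y0 - y1) >= dist_threshold:
        return None                    # spots not yet close enough
    return (y0 + y1) / 2.0

def effective_region(baseline, band_height=8, width=1280):
    """(x, y, w, h) band below the baseline; the 8-pixel height and
    1280-pixel length are the empirical values given in the text."""
    top = int(round(baseline))
    return (0, top, width, band_height)

base = acquisition_baseline(100, 104, 10)   # spot tops at rows 100 and 104
region = effective_region(base)             # band used as background frame image
```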
Fig. 8 gives the flow chart of a preferred embodiment. In step S501, an original area array image is photographed and binarized, and the flow proceeds to step S502. In step S502, several screen correction points are set on the display screen, the lateral position of each in the original area array image is obtained, and the flow proceeds to step S503. In step S503, a small image region corresponding to each correction point is set, centered on that point's lateral position and taking a preset width value as its width, and the flow proceeds to step S504. In step S504, the noise portions in each small region are extracted and divided into an upper part and a lower part, and the flow proceeds to step S505. In step S505, the rectangular region between the upper and lower noise parts of each region is extracted as that correction point's target rectangle, and the flow proceeds to step S506. In step S506, images of a touch object clicking each correction point are acquired in real time, the upper and lower light spots entering each target rectangle are obtained, the acquisition baseline value of the background frame image for each correction point is determined from the positional relationship of the two spots, and the flow proceeds to step S507. In step S507, the deviations between the several acquisition baseline values are compared: if no deviation between baseline values exceeds the deviation threshold, the mean of the baseline values, or the minimum baseline value, is taken as the final acquisition baseline value; otherwise the user is prompted that automatic calibration of the effective acquisition region of the touch positioning image cannot be performed, since the camera is considered to have undergone a displacement or deformation that must be corrected by adjusting its physical position or similar means.
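The multi-point consistency check of step S507 can be sketched as follows. Treating the range (maximum minus minimum) as the largest pairwise deviation, and choosing between the mean and the minimum, follow the text; the function itself is hypothetical.

```python
def combine_baselines(baselines, deviation_threshold, use_mean=True):
    """Combine per-correction-point baselines into a final baseline.

    All pairwise deviations must stay within the threshold; otherwise
    automatic calibration is refused (return None) and the camera position
    itself must be adjusted.
    """
    spread = max(baselines) - min(baselines)   # largest pairwise deviation
    if spread > deviation_threshold:
        return None                            # prompt user: cannot auto-calibrate
    return sum(baselines) / len(baselines) if use_mean else min(baselines)

final = combine_baselines([100.0, 101.0, 99.5], 3.0)  # within tolerance
```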
In accordance with the above automatic selection method, the present invention also provides an automatic selection system for the effective region of an area array camera positioning image. A specific embodiment of this system is described in detail below; Figure 9 is its structural diagram.
The automatic selection system of this specific embodiment comprises: an image acquisition unit 601, a screen correction point setting unit 602, a small image region setting unit 603, a noise extraction unit 604, a target rectangle obtaining unit 605, and an effective acquisition region obtaining unit 606.
The image acquisition unit 601 photographs and binarizes the original area array image and the real-time touch images, transfers the original area array image to the screen correction point setting unit, and transfers the real-time touch images to the effective acquisition region obtaining unit. The image acquisition unit is generally realized with a camera.
The screen correction point setting unit 602 sets a screen correction point on the display screen, obtains the lateral position of this correction point in the original area array image, and transfers it to the small image region setting unit.
The small image region setting unit 603 sets a small image region centered on the lateral position of the screen correction point, with a preset width value as its width in the left-right direction, and transfers it to the noise extraction unit.
The noise extraction unit 604 extracts the noise portions in the small image region, divides them into an upper part and a lower part, and transfers them to the target rectangle obtaining unit. In a preferred mode, the noise extraction unit performs image recognition on the small region to extract the noise portions, represents each noise portion by a rectangle, and classifies the rectangles by position into an upper group and a lower group.
The target rectangle obtaining unit 605 extracts the rectangular region between the upper and lower noise parts as the target rectangle. In a preferred mode, it obtains the coordinate position of the rectangle with the minimum vertical coordinate in the upper group and that of the rectangle with the maximum vertical coordinate in the lower group, extracts a rectangular region bounded laterally by the left and right sides of the small image region as the target rectangle, and records its coordinate position. In a preferred mode, the effective acquisition region obtaining unit sets a distance threshold, obtains the pixel positions of the tops of the two light spots, and, when the distance between the two top positions is less than the threshold, takes the symmetric midpoint of the two top positions as the acquisition baseline value of the background frame image; the effective region of the touch positioning image is then determined with the baseline value as the reference.
The effective acquisition region obtaining unit 606 receives the real-time touch images, obtains from the images of a touch object clicking the screen correction point the upper and lower light spots entering the target rectangle, determines the acquisition baseline value of the background frame image from the positional relationship of the two spots, and determines the effective acquisition region of the image from the baseline value.
Claims (9)
1. An automatic selection method for the effective region of an area array camera positioning image, characterized in that it comprises the steps of:
photographing an original area array image and binarizing it;
setting a screen correction point on the display screen and obtaining the lateral position of the screen correction point in the original area array image;
setting a small image region centered on the lateral position of the screen correction point, taking a preset width value as its width in the left-right direction;
extracting the noise portions in the small image region and dividing them into an upper part and a lower part;
extracting the rectangular region between the upper and lower noise parts as the target rectangle;
acquiring in real time images of a touch object clicking the screen correction point, obtaining the upper and lower light spots entering the target rectangle, determining the acquisition baseline value of the background frame image from the positional relationship of the two spots, and determining the effective acquisition region of the touch positioning image from the acquisition baseline value.
2. The automatic selection method for the effective region of an area array camera positioning image according to claim 1, characterized in that extracting the noise portions in the small image region and dividing them into two parts specifically comprises:
performing image recognition on the small image region to extract the noise portions;
representing each noise portion by a rectangle;
classifying the rectangles by position into an upper group and a lower group.
3. The automatic selection method for the effective region of an area array camera positioning image according to claim 2, characterized in that extracting the rectangular region between the upper and lower noise parts as the target rectangle specifically comprises:
obtaining the coordinate position of the rectangle with the minimum vertical coordinate in the upper group and that of the rectangle with the maximum vertical coordinate in the lower group, and, combined with the left and right extents of the small image region, extracting a rectangular region as the target rectangle and recording its coordinate position.
4. The automatic selection method for the effective region of an area array camera positioning image according to claim 1, characterized in that determining the acquisition baseline value of the background frame image from the positional relationship of the two light spots and determining the effective acquisition region of the image from the baseline value specifically comprises:
setting a distance threshold;
obtaining the pixel positions of the tops of the two light spots;
when the distance between the two top pixel positions is less than the distance threshold, taking the symmetric midpoint of the two top positions as the acquisition baseline value of the background frame image;
determining the effective acquisition region of the touch positioning image with the acquisition baseline value as the reference.
5. according to claim 1 to the automatic selecting method of 4 each described battle array camera positioning image effective coverage, it is characterized in that described method also comprises:
At screen a plurality of screen correction points are set, obtain respectively target rectangle corresponding to each screen correction point, and obtain the collection baseline value of background frame image separately according to each target rectangle, deviation between more a plurality of collection baseline values, when the deviate that gathers between the baseline value all is not more than deviation threshold, then get the mean value of a plurality of collection baseline values as final collection baseline value, otherwise prompting user can't realize touching the automatic straightening of positioning image effective coverage.
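The multi-point consistency check of claim 5 can be sketched as below. This is an illustrative reading in which "deviation between acquisition baseline values" is taken as the pairwise absolute difference (the patent does not spell out the comparison):

```python
def final_baseline(baselines, deviation_threshold):
    """Average the per-correction-point baseline values when every
    pairwise deviation is within the threshold; return None to signal
    that automatic correction cannot be performed."""
    for i, a in enumerate(baselines):
        for b in baselines[i + 1:]:
            if abs(a - b) > deviation_threshold:
                return None  # prompt the user: automatic correction failed
    return sum(baselines) / len(baselines)
```

With three correction points yielding baselines 100, 102, and 101 and a threshold of 5, the final baseline is their mean, 101.0; an outlier beyond the threshold aborts the calibration instead of silently averaging it in.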
6. An automatic selection system for the effective region of an area array camera positioning image, comprising:
an image acquisition unit, configured to capture and binarize an original area array image and a real-time touch image, transfer the original area array image to a screen correction point setting unit, and transfer the real-time touch image to an effective acquisition region obtaining unit;
a screen correction point setting unit, configured to set a screen correction point on the display screen, obtain the lateral position of the screen correction point in the original area array image, and transfer it to a small image region setting unit;
a small image region setting unit, configured to set a small image region centered on the lateral position of the screen correction point, with a set width value as its width in the left-right direction, and transfer it to a noise part extraction unit;
a noise part extraction unit, configured to extract the noise parts in the small image region, divide them into an upper part and a lower part, and transfer them to a target rectangle obtaining unit;
a target rectangle obtaining unit, configured to extract the rectangular region between the upper and lower noise parts as the target rectangle;
an effective acquisition region obtaining unit, configured to receive the real-time touch image, obtain the upper and lower light spots entering the target rectangle from the image of the touch object clicking the screen correction point, determine the acquisition baseline value of the background frame image according to the positional relationship of the upper and lower light spots, and determine the effective acquisition region of the touch positioning image according to the acquisition baseline value.
7. The automatic selection system for the effective region of an area array camera positioning image according to claim 6, wherein the noise part extraction unit is specifically configured to:
perform image recognition on the small image region to extract the noise parts, represent each noise part with a rectangle, and classify the rectangles by position into an upper group and a lower group.
8. The automatic selection system for the effective region of an area array camera positioning image according to claim 7, wherein the target rectangle obtaining unit is specifically configured to:
obtain the coordinate position of the rectangle with the smallest vertical coordinate in the upper group and the coordinate position of the rectangle with the largest vertical coordinate in the lower group, extract a rectangular region as the target rectangle in combination with the left and right boundaries of the small image region, and record the coordinate position of this target rectangle.
9. The automatic selection system for the effective region of an area array camera positioning image according to claim 6, wherein the effective acquisition region obtaining unit is specifically configured to:
set a distance threshold;
obtain the pixel positions of the tops of the upper and lower light spots;
when the distance between the pixel positions of the tops of the two light spots is less than the distance threshold, take the symmetric midpoint of these two pixel positions as the acquisition baseline value of the background frame image;
determine the effective acquisition region of the touch positioning image with the acquisition baseline value as the reference.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210481416.0A CN103034373B (en) | 2012-11-23 | 2012-11-23 | Automatic selection method and system for effective region of area array camera positioning image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103034373A true CN103034373A (en) | 2013-04-10 |
CN103034373B CN103034373B (en) | 2015-09-09 |
Family
ID=48021323
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210481416.0A Expired - Fee Related CN103034373B (en) | Automatic selection method and system for effective region of area array camera positioning image | 2012-11-23 | 2012-11-23 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103034373B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1584917A (en) * | 2004-06-11 | 2005-02-23 | 清华大学 | Living body iris patterns collecting method and collector |
CN1847782A (en) * | 2006-03-29 | 2006-10-18 | 东南大学 | Two-dimensional image area positioning method based on grating projection |
CN201403144Y (en) * | 2009-04-16 | 2010-02-10 | 杭州晨安机电技术有限公司 | Locked tracking camera system |
CN101840055A (en) * | 2010-05-28 | 2010-09-22 | 浙江工业大学 | Video auto-focusing system based on embedded media processor |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107979771A (en) * | 2016-10-25 | 2018-05-01 | 三星电子株式会社 | The method of electronic equipment and control electronics |
CN107979771B (en) * | 2016-10-25 | 2021-06-11 | 三星电子株式会社 | Electronic apparatus and method of controlling the same |
US11128908B2 (en) | 2016-10-25 | 2021-09-21 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling the same |
CN107633493A (en) * | 2017-09-28 | 2018-01-26 | 珠海博明视觉科技有限公司 | A kind of method that adaptive background suitable for industrial detection deducts |
CN112506361A (en) * | 2020-11-23 | 2021-03-16 | 北京建筑大学 | Man-machine interaction method and system based on light-emitting pen and double cameras |
CN112506361B (en) * | 2020-11-23 | 2023-02-28 | 北京建筑大学 | Man-machine interaction method and system based on light-emitting pen and double cameras |
Also Published As
Publication number | Publication date |
---|---|
CN103034373B (en) | 2015-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104038740B (en) | Method and device for shielding privacy region of PTZ (Pan/Tilt/Zoom) surveillance camera | |
CN103716594B (en) | Panorama splicing linkage method and device based on moving target detecting | |
CN103558910B (en) | A kind of intelligent display system of automatic tracking head pose | |
US11682231B2 (en) | Living body detection method and device | |
CN103716595A (en) | Linkage control method and device for panoramic mosaic camera and dome camera | |
WO2018076392A1 (en) | Pedestrian statistical method and apparatus based on recognition of parietal region of human body | |
CN103218605A (en) | Quick eye locating method based on integral projection and edge detection | |
CN107527353B (en) | Projection picture outer frame detection method based on visual processing | |
CN110662014B (en) | Light field camera four-dimensional data large depth-of-field three-dimensional display method | |
CN103913149B (en) | A kind of binocular range-measurement system and distance-finding method thereof based on STM32 single-chip microcomputer | |
CN102905136B (en) | A kind of video coding-decoding method, system | |
CN103455792A (en) | Guest flow statistics method and system | |
CN105701809A (en) | Flat-field correction method based on line-scan digital camera scanning | |
CN103034373B (en) | The automatic selecting method of battle array camera positioning image effective coverage, face and system | |
CN107480678A (en) | A kind of chessboard recognition methods and identifying system | |
CN103176668B (en) | A kind of shooting method for correcting image for camera location touch system | |
CN104537627B (en) | A kind of post-processing approach of depth image | |
CN108446032A (en) | A kind of mouse gestures implementation method in projection interactive system | |
CN102012770A (en) | Image correction-based camera positioning method | |
CN104537663A (en) | Method for rapid correction of image dithering | |
CN104065949B (en) | A kind of Television Virtual touch control method and system | |
CN103949054A (en) | Infrared light gun positioning method and system | |
CN102023759A (en) | Writing and locating method of active pen | |
CN102023763A (en) | Positioning method of touch system camera | |
CN108647697B (en) | Target boundary detection method and device based on improved Hough transformation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20150909; Termination date: 20211123