CN103324361B - Method and system for positioning touch point - Google Patents

Method and system for positioning touch point

Info

Publication number
CN103324361B
CN103324361B CN201310270984.0A
Authority
CN
China
Prior art keywords
image
touch point
effective area
value
image block
Prior art date
Legal status
Active
Application number
CN201310270984.0A
Other languages
Chinese (zh)
Other versions
CN103324361A (en)
Inventor
黄斐铨
黄安麒
刘龙玮
刘伟高
何学志
徐翱
Current Assignee
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shirui Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shirui Electronics Co Ltd
Priority to CN201310270984.0A
Publication of CN103324361A
Application granted
Publication of CN103324361B

Landscapes

  • Image Analysis (AREA)

Abstract

A method of touch point location comprising the steps of: cropping the captured image according to a prestored touch point effective area to obtain a touch point effective area image, wherein the touch point effective area is the region of the image in which a touch point appears; storing the touch point effective area image; and detecting the touch point position from the stored touch point effective area image. The invention also provides a corresponding system. Because only the part of the captured image inside the touch point effective area is stored and the image outside that area is discarded, the amount of image data is greatly reduced, which lowers memory consumption and improves storage resource utilization. Moreover, because the touch point position is detected from the touch point effective area image alone, the hardware resources occupied during detection are reduced and detection efficiency is improved.

Description

Method and system for touch point location
Technical field
The present invention relates to the technical field of optical imaging touch screens, and in particular to a method and system for touch point location.
Background art
In an optical imaging touch screen system, cameras are arranged in the upper left and upper right corners of the touch screen, and a light source device may be placed above or below each camera. Retroreflectors, which may be devices made of retroreflective material, are arranged along the left, right and bottom edges of the touch screen; alternatively, light-emitting devices may be arranged along the left, right and bottom edges. When light is emitted near a camera, the retroreflector returns it along its original path; the light emitted by the left and right light-emitting devices covers the touch screen, and all of it returns to the cameras.
The operating principle is as follows: each camera captures an image of the light returned by the retroreflectors. When a touching object enters the touch area, it blocks part of the light entering the camera, so a dark region appears within the bright white strip in the captured image. The image captured by the camera is first stored in memory; the stored image is then read and a detection algorithm is applied to find the touch point in the image; finally, the touch point coordinates are obtained by calculation.
In the above technique, memory must store the entire image captured by each camera, and the amount of image data is large. For example, with a camera resolution of 480*640 and one byte per pixel, an image buffer of about 300K (480 x 640 = 307,200 bytes) must be created in memory. This is a large drain on memory resources and leads to low storage resource utilization, and transmitting the entire image is time-consuming. Furthermore, subsequent detection searches for the touch point position in the entire image, which occupies more hardware resources and results in low detection efficiency.
Summary of the invention
Accordingly, to address the low storage resource utilization and low detection efficiency of existing touch point location techniques, it is necessary to provide a method and system for touch point location.
A method for touch point location, comprising the steps of:
cropping the captured image according to a prestored touch point effective area to obtain a touch point effective area image; wherein the touch point effective area is the region of the image in which a touch point appears;
storing the touch point effective area image;
detecting the touch point position from the stored touch point effective area image;
wherein, before the step of cropping the captured image according to the prestored touch point effective area to obtain the touch point effective area image, the method further comprises the steps of:
dividing an initial image into image blocks according to gray value differences, and collecting a feature value for each image block;
traversing the feature values of the image blocks, selecting the feature value of one image block, and determining the probability that the image block containing that feature value belongs to the effective area class;
when the feature values of all image blocks have been traversed, obtaining the touch point effective area in the initial image from the probabilities of the image blocks.
A touch point location system, comprising:
a segmentation and collection module for dividing an initial image into image blocks according to gray value differences and collecting a feature value for each image block;
a class confirmation module for traversing the feature values of the image blocks and selecting the feature value of one image block; determining the probability that the image block containing that feature value belongs to the effective area class; and, when the feature values of all image blocks have been traversed, obtaining the touch point effective area in the initial image from the probabilities of the image blocks;
a cropping module for cropping the captured image according to the prestored touch point effective area to obtain a touch point effective area image; wherein the touch point effective area is the region of the image in which a touch point appears;
a storage module for storing the touch point effective area image;
a detection module for detecting the touch point position from the stored touch point effective area image.
With the above method and system for touch point location, the touch point effective area, that is, the region of the image in which a touch point appears, is prestored; the captured image is then cropped to this area and the resulting touch point effective area image is stored, after which the touch point position is detected from that image. Because only the part of the captured image inside the touch point effective area is stored and the image outside it is discarded, the amount of image data is greatly reduced, which lowers memory consumption and improves storage resource utilization. Moreover, because the touch point position is detected from the touch point effective area image, the hardware resources occupied by the detection process are reduced and detection efficiency is improved.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the touch point location method of the present invention;
Fig. 2 is a flow diagram of determining the touch point effective area in the initial image according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a binary image according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the 3x3 grid before pixel S5 is numbered according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the 3x3 grid after pixel S5 is numbered according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the binary image divided into image blocks according to an embodiment of the present invention;
Fig. 7 is a structural diagram of an embodiment of the touch point location system of the present invention.
Detailed description of the invention
Embodiments of the method and system for touch point location of the present invention are described in detail below.
Referring to Fig. 1, which is a flow diagram of an embodiment of the touch point location method of the present invention, the method comprises the steps of:
Step S101: cropping the captured image according to the prestored touch point effective area to obtain a touch point effective area image; wherein the touch point effective area is the region of the image in which a touch point appears;
Step S102: storing the touch point effective area image;
Step S103: detecting the touch point position from the stored touch point effective area image.
The essence of the present solution is to use the prestored touch point effective area to crop the captured image before it is stored. Since the touch point effective area is the region of the image in which a touch point appears, only the part of the image useful for touch point detection is stored, the useless part is discarded, and the touch point is then detected within the useful part. This reduces memory consumption and improves storage resource utilization; and because the touch point position is detected from the touch point effective area image, the hardware resources occupied by detection are reduced and detection efficiency is improved. The bright white strip region corresponds to the retroreflective material or the light-emitting source, and the present invention refers to it as the touch point effective area.
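The store-only-the-effective-area step above can be illustrated with a short sketch in Python. The list-of-lists image and the (top, bottom, left, right) bounding-box form of the effective area are assumptions for illustration, not the patent's actual data layout.

```python
def crop_to_effective_area(image, area):
    """Keep only the part of the captured frame inside the prestored
    touch point effective area; everything outside it is discarded.

    `image` is a 2D list of gray values; `area` is a hypothetical
    (top, bottom, left, right) bounding box, top/left inclusive and
    bottom/right exclusive.
    """
    top, bottom, left, right = area
    return [row[left:right] for row in image[top:bottom]]

# A 4x6 frame reduced to its 2x3 effective area before storage.
frame = [[r * 10 + c for c in range(6)] for r in range(4)]
roi = crop_to_effective_area(frame, (1, 3, 2, 5))
```

Storing `roi` instead of `frame` is what reduces the memory footprint: for a 480x640 frame whose bright strip spans only a few dozen rows, the saving is substantial.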
Since the cropping of the captured image is based on the touch point effective area prestored from the original image, the method further comprises, before the cropping step, determining the touch point effective area in the initial image.
It should be noted that the accuracy and size of the prestored touch point effective area have a significant impact on the use of memory resources and on the subsequent touch point detection process. The touch point effective area can be determined using techniques such as experimental statistics or image search, or as follows:
dividing the initial image into image blocks according to gray value differences, and collecting a feature value for each image block;
substituting the feature value of each image block into a prestored scoring formula, and extending the image block corresponding to the maximum score value along its direction of extension to obtain the touch point effective area in the initial image, wherein the scoring formula is V = R*P(w1|t), V represents the score value, R represents the prestored weight, and P(w1|t) represents the probability that the image block containing the prestored feature value t is the touch point effective area.
To further clarify the technique of the present invention, a preferred embodiment of obtaining the touch point effective area is given below, although the invention is not limited to it. As shown in Fig. 2, which is a flow diagram of determining the touch point effective area in the initial image according to an embodiment of the present invention, the process comprises the steps of:
Step S201: binarizing the gray values of the initial image to obtain a binary image;
Step S202: dividing the binary image into image blocks, the image blocks being divided into touch point effective area images and non-touch-point effective area images;
Step S203: collecting a feature value for each image block of the initial image based on the corresponding image block of the binary image;
Step S204: traversing the feature values of the image blocks and selecting the feature value of one image block;
Step S205: determining the probability that the image block containing the feature value belongs to the effective area class using the following posterior probability formula:
P(w1|t) = p(t|w1)*P(w1) / ( sum_{j=1}^{2} p(t|wj)*P(wj) ),
wherein P(w1|t) represents the probability that the image block containing feature value t belongs to the effective area class w1; the touch point effective area images form the effective area class, and w1 represents the effective area class; the non-touch-point effective area images form the ineffective area class, and w2 represents the ineffective area class; p(t|wj) represents the prestored probability that an image block containing feature value t occurs in wj; P(w1) represents the prestored probability of a touch point effective area image among the image blocks; P(w2) represents the prestored probability of a non-touch-point effective area image among the image blocks. The score value of the image block is obtained as the product of the probability that the feature value belongs to the effective area class and the prestored weight;
Step S206: judging whether the feature values of all image blocks have been traversed; if not, returning to step S204; if so, proceeding to step S207;
Step S207: extending the image block corresponding to the maximum score value among the score values along its direction of extension to obtain the touch point effective area in the initial image.
In a specific embodiment, the binary image is obtained by the following steps:
traversing each pixel of the initial image; when the gray value of a pixel is less than a predetermined threshold, modifying the gray value of that pixel to a first preset value; when the gray value of a pixel is greater than or equal to the predetermined threshold, modifying the gray value of that pixel to a second preset value.
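A minimal sketch of the binarization just described, assuming 0 and 255 as the first and second preset values and 150 as the threshold, following the embodiment described later in the text:

```python
def binarize(image, threshold=150, low=0, high=255):
    """Binarize a gray image: values below the threshold become the
    first preset value (dark), the rest become the second preset
    value (bright). The default threshold 150 follows the embodiment
    and is product-specific."""
    return [[low if v < threshold else high for v in row] for row in image]
```

After this step every pixel has only two states, bright or dark, which simplifies the subsequent block segmentation.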
In a specific embodiment, there are many methods of dividing the binary image into image blocks; the watershed method, the mean shift method and others may be used. The following method may also be used:
traversing each pixel of the binary image in turn;
when the gray value of a pixel is the first preset value, the pixel is not numbered;
when the gray value of a pixel is the second preset value: if no numbered neighbouring pixel exists, a new number is created for the pixel; if the numbered neighbouring pixels share the same number, the pixel is given that same number; and if the neighbouring pixels have different numbers, the pixel and the neighbouring pixels are all unified under the number of one of the neighbouring pixels.
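The numbering scheme above is essentially 8-connected component labeling. A hedged sketch follows, using a union-find table to unify numbers when neighbours disagree; the patent only says the numbers are unified to one neighbour's number, so merging to the smallest equivalent label is an implementation choice here:

```python
def label_components(binary, fg=255):
    """Two-pass 8-connected labeling with label merging, a sketch of
    the numbering scheme described above. Dark pixels stay 0."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}  # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    for y in range(h):
        for x in range(w):
            if binary[y][x] != fg:
                continue  # first-preset-value pixels are not numbered
            # already-scanned neighbours: left, and the three above
            neigh = []
            if x > 0 and labels[y][x - 1]:
                neigh.append(labels[y][x - 1])
            if y > 0:
                for dx in (-1, 0, 1):
                    nx = x + dx
                    if 0 <= nx < w and labels[y - 1][nx]:
                        neigh.append(labels[y - 1][nx])
            if not neigh:
                labels[y][x] = next_label  # new number
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[y][x] = min(neigh)
                for n in neigh[1:]:
                    union(neigh[0], n)  # unify differing numbers
    # second pass: resolve merged numbers to one label per block
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

On the 3x3-grid example of Figs. 4 and 5, the two provisional numbers 1 and 2 would be merged into one block exactly as described.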
In a specific embodiment, P(wj), p(t|wj), P(w1|t) and the weights need to be trained before the step of determining the touch point effective area in the initial image. The training comprises the steps of:
collecting a preset number of original images, binarizing the gray values of each original image, and obtaining the corresponding binary images;
dividing each binary image into image blocks, the image blocks comprising touch point effective area images and non-touch-point effective area images;
receiving each touch point effective area image with its block number and each non-touch-point effective area image with its block number, classifying the touch point effective area images into the effective area class and the non-touch-point effective area images into the ineffective area class, and determining P(wj) according to the formula P(w1) = M1/(M1+M2), P(w2) = M2/(M1+M2), wherein M1 represents the number of touch point effective area image blocks and M2 represents the number of non-touch-point effective area image blocks;
collecting feature values for each touch point effective area image and each non-touch-point effective area image, and determining, from the relationship between the feature values and the effective and ineffective area classes, the piecewise function p(t|wj) = a0 (0 <= t < 128); a1 (128 <= t < 256); a2 (256 <= t < 384); a3 (384 <= t < 512); a4 (512 <= t < 640), wherein t represents the feature value;
determining the weights from P(wj) and p(t|wj) by taking partial derivatives to find the extremum.
In a specific embodiment, the feature value t may be the abscissa x or the ordinate y of the centre position coordinates of the image block; it may also be the length W or the width H of the image block, or any two or three of these; other representative feature values may also be used, set as required. The abscissa of the centre position is the mean of the maximum and minimum abscissas of the block's pixel coordinates, the ordinate of the centre position is the mean of the maximum and minimum ordinates, the length is the difference between the maximum and minimum abscissas, and the width is the difference between the maximum and minimum ordinates.
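A sketch of the feature computation as defined above, taking a block as a list of (x, y) pixel coordinates:

```python
def block_features(pixels):
    """Centre coordinates, length and width of an image block,
    computed from the extreme pixel coordinates as defined above.
    `pixels` is a list of (x, y) tuples belonging to one block."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    x = (max(xs) + min(xs)) / 2  # centre abscissa
    y = (max(ys) + min(ys)) / 2  # centre ordinate
    w = max(xs) - min(xs)        # length W
    h = max(ys) - min(ys)        # width H
    return x, y, w, h
```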
A concrete application is described below.
Training is performed first. A product series may comprise many units, and the whole series only needs to be trained once, using the data of N product units as the training set rather than multiple historical images of a single unit. After this single training, the first prior probability, the second prior probability, the first probability density function, the second probability density function, the posterior probability and the weights are obtained, and the model is formed. Then, each time a unit is produced, the touch point effective area is selected once for that unit; after selection, the original-image touch point effective area of that unit is determined, and the images captured thereafter for that unit are processed directly on the basis of this original-image touch point effective area. Of course, for greater accuracy, training may also be performed for a single product, collecting multiple historical images of that product. The following takes the collection of original images of N products as an example; the detailed training process is as follows:
First, the original images of N products are collected, where N may be set as required. The N original images serve as samples; the sample space size is N.
Each original image is binarized: the pixels of the original image are traversed, gray values greater than or equal to 150 are modified to 255, and gray values below 150 are modified to 0. Fig. 3 shows the binary image obtained from one of the original images after binarization. The purpose of binarization is to simplify the image information so that each pixel has only two states, bright or dark. For distinction, in Fig. 3 hatched lines represent regions with gray value 0 (dark regions) and white represents regions with gray value 255 (bright regions). The threshold is determined from statistics for the specific product; this embodiment uses the value 150.
After all N original images have been binarized, image segmentation is performed. The purpose of segmentation is to find all the bright blocks in the binarized images and to label each of them. Scanning proceeds line by line (column-by-column scanning works the same way), and the block to which the currently scanned pixel belongs is judged from the assignment of its already-scanned neighbouring pixels. When the gray value of a pixel is 0, the pixel is either not numbered or given a special number; this embodiment takes no numbering as the example. When the gray value of a pixel is 255: if no numbered neighbouring pixel exists, a new number is created for the pixel; if the numbered neighbouring pixels share the same number, the pixel is given that number; and if the neighbouring pixels have different numbers, the pixel and the neighbouring pixels are unified under the number of one of the neighbouring pixels. This is illustrated with the 3x3 grid of Fig. 4, composed of S1, S2, S3, S4, S5, S6, S7, S8 and S9. In Fig. 4, S1, S2, S3 and S4 are already-scanned pixels and S5 is the pixel about to be scanned. S1 has gray value 255 and is the first pixel scanned, so it is numbered 1; the gray value of pixel S2 is 0, so it is not numbered. Pixel S3 has gray value 255 and no numbered neighbour exists, so it is given the new number 2; pixel S4 has gray value 255 and a neighbour numbered 1, so it is numbered 1. The gray value of pixel S5 is 255, and its neighbours are S1, S2, S3, S4, S6, S7, S8 and S9. Since there exist neighbours S1 and S4 numbered 1 and a neighbour S3 numbered 2, the pixels S1, S3, S4 and S5 are all unified under the number 1 and merged into the same image block, as shown in Fig. 5.
After the original image of Fig. 3 has been scanned and numbered, the result of Fig. 6 is obtained. Fig. 6 is a schematic diagram of the binary image divided into image blocks; it consists of 7 image blocks: 1, 2, 3, 4, 5, 6, 7. The image blocks comprise a touch point effective area image and non-touch-point effective area images; in Fig. 6, the touch point effective area image of this original image is block 4, and the non-touch-point effective area images are blocks 1, 2, 3, 5, 6, 7.
After all N binary images have been segmented, the touch point effective area image and the non-touch-point effective area images of each original image can be marked manually. The touch point effective area images of all the original images form the effective area class; the other image blocks of the original images, apart from the touch point effective area image, are called non-touch-point effective area images, and they form the ineffective area class. The method receives each touch point effective area image with its block number and each non-touch-point effective area image with its block number, classifies the touch point effective area image of each original image into the effective area class w1, and classifies the non-touch-point effective area images of each original image into the ineffective area class w2. In Fig. 6, image block 4 is classified into the w1 class and image blocks 1, 2, 3, 5, 6, 7 into the w2 class. Since each original image has exactly one block belonging to the target class, the number of w1-class blocks is N; suppose the number of w2-class blocks is M. The first prior probability is P(w1) = N/(N+M) and the second prior probability is P(w2) = M/(N+M).
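The two priors follow directly from the class sizes; a one-line sketch (the function name is illustrative):

```python
def priors(n_effective, n_ineffective):
    """First and second prior probabilities from the class sizes:
    P(w1) = N / (N + M), P(w2) = M / (N + M)."""
    total = n_effective + n_ineffective
    return n_effective / total, n_ineffective / total
```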
Feature values are collected for each image block of each original image based on the corresponding image block of its binary image. Taking one of the binary images as an example, feature values are collected for each image block of the original image corresponding to each image block of this binary image. The feature values comprise the centre position coordinates, the length and the width of the image block. With maxX the maximum abscissa, minX the minimum abscissa, maxY the maximum ordinate and minY the minimum ordinate among the block's pixel coordinates, the centre position coordinates are x = (maxX + minX)/2, y = (maxY + minY)/2, the length is W = maxX - minX, and the width is H = maxY - minY.
The likelihood function p(t|wj) is expressed as a piecewise function. The probability density function of the centre abscissa is as follows:
p(x|w1) = a0 (0 <= x < 128); a1 (128 <= x < 256); a2 (256 <= x < 384); a3 (384 <= x < 512); a4 (512 <= x < 640)
p(x|w1) represents the probability that the centre abscissa x of a block in class w1 takes a given value. If the number of blocks in w1 with 0 <= x < 128 is A0, then a0 = A0/N; the other values are calculated likewise. Neither the segmentation of the function nor the division of the regions is limited; they may be designed according to the actual conditions. The other likelihood functions p(y|wj), p(W|wj) and p(H|wj) are obtained by the same method. This embodiment writes the probability density function in piecewise form; other likelihood functions are also possible, for example a continuous function obtained from the statistics.
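The piecewise likelihood can be estimated as a normalized histogram over the feature-value bins. A sketch assuming the embodiment's five 128-wide bins (values at or beyond 640 are clamped into the last bin here, an implementation choice the text does not specify):

```python
def piecewise_likelihood(values, bin_width=128, n_bins=5):
    """Estimate p(t|w) for one class as a piecewise-constant function:
    the fraction of that class's training blocks whose feature value
    falls in each bin, mirroring the segmented form of p(x|w1)."""
    counts = [0] * n_bins
    for v in values:
        counts[min(int(v // bin_width), n_bins - 1)] += 1
    total = len(values)
    probs = [c / total for c in counts]  # a0 .. a4

    def p(t):
        return probs[min(int(t // bin_width), n_bins - 1)]
    return p

# Four training blocks of one class; p(t) returns the bin fraction.
p = piecewise_likelihood([10, 20, 130, 600])
```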
The posterior probability formula is determined from P(wj) and p(t|wj) using the Bayesian decision formula:
P(wj|x) = p(x|wj)*P(wj) / p(x). The posterior probability formula P(wj|x) gives the probability that an image block belongs to class wj when the input feature value is x. For example, P(w1|x) gives, when the component x of the input feature vector takes a certain value, the probability that the image block belongs to the target class w1. p(x|wj) is the probability that feature value x occurs in class wj; P(w1) = N/(N+M) represents the first prior probability, and P(w2) = M/(N+M) represents the second prior probability.
Correspondingly, P(wj|y) = p(y|wj)*P(wj)/p(y), P(wj|W) = p(W|wj)*P(wj)/p(W), and P(wj|H) = p(H|wj)*P(wj)/p(H).
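The two-class Bayes posterior used above can be sketched as follows, with the class likelihoods passed in as functions of the feature value:

```python
def posterior(t, likelihoods, class_priors):
    """P(w1|t) = p(t|w1)P(w1) / sum_j p(t|wj)P(wj) for the two-class
    case; `likelihoods` is (p(.|w1), p(.|w2)) as callables and
    `class_priors` is (P(w1), P(w2))."""
    num = likelihoods[0](t) * class_priors[0]
    den = sum(l(t) * p for l, p in zip(likelihoods, class_priors))
    return num / den if den else 0.0

# With p(t|w1) = 0.8, p(t|w2) = 0.2 and equal priors, P(w1|t) = 0.8.
p1 = posterior(0, (lambda t: 0.8, lambda t: 0.2), (0.5, 0.5))
```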
The weights R1, R2, R3, R4 are determined from the preset scoring formula V = R1*P(w1|x) + R2*P(w1|y) + R3*P(w1|W) + R4*P(w1|H) and the posterior probability formulas, by taking partial derivatives to find the extremum. The scoring formula may be set as required and need not follow this embodiment. The weight-finding process is as follows:
1) Initialize R1, R2, R3, R4 to 1.
2) Choose a weight, for example R1 first.
3) Add 1 to the chosen weight, and score all samples with this score function. Since the correct block of every sample (the image block of the effective area class) is known, the accuracy of the score function can be calculated.
4) Repeat step 3) until the accuracy of the score function starts to decline. The weight takes the value corresponding to the highest accuracy reached during this repetition.
5) Choose the next weight, and repeat steps 2), 3) and 4) until all four weights have been selected once.
6) Repeat steps 2), 3), 4) and 5) K times. After several repetitions the accuracy of the score function gradually converges to a certain level, so the number of repetitions K is determined according to the actual conditions. When the repetition finally stops, the values of the four weights R1, R2, R3, R4 are determined.
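The six-step procedure above is a greedy coordinate search. A sketch under stated assumptions: the inner loop stops as soon as accuracy no longer improves (the patent says "starts to decline"; treating a plateau as a stop is an implementation choice), and `score_accuracy` is a caller-supplied function returning the fraction of labelled samples whose true effective block scores highest under the given weights:

```python
def tune_weights(score_accuracy, n_weights=4, rounds=3):
    """Greedy coordinate search for the weights R1..R4: start at 1,
    repeatedly add 1 to one weight while the labelled-sample accuracy
    keeps improving, and cycle through all weights `rounds` (K) times."""
    weights = [1.0] * n_weights
    for _ in range(rounds):
        for i in range(n_weights):
            best = score_accuracy(weights)
            while True:
                weights[i] += 1.0
                acc = score_accuracy(weights)
                if acc <= best:
                    weights[i] -= 1.0  # keep the best value found
                    break
                best = acc
    return weights

# Toy accuracy surface peaking at R1 = 3 (flat in the other weight).
acc = lambda w: 1.0 - abs(w[0] - 3) / 10
found = tune_weights(acc, n_weights=2, rounds=1)
```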
Training finishes.
After training has finished, the touch point effective area in the initial image can be determined; the process is as follows:
First, the gray values of the initial image are binarized to obtain a binary image, using the method described above, which is not repeated here;
then, the binary image is divided into image blocks, using the method described above, which is not repeated here;
feature values are collected for each image block of the initial image based on the corresponding image block of the binary image: centre position coordinates x = (maxX + minX)/2, y = (maxY + minY)/2, length W = maxX - minX, width H = maxY - minY.
The feature values of the image blocks are traversed and one feature value of one image block is selected. Taking feature value x as an example, the probability that feature value x takes a given value in class w1 is obtained from the formula p(x|w1) = a0 (0 <= x < 128); a1 (128 <= x < 256); a2 (256 <= x < 384); a3 (384 <= x < 512); a4 (512 <= x < 640). For example, if x is 200, the probability is a1. The probabilities of the other feature values of this image block occurring in class w1 are computed in the same way, and likewise the probabilities of the block's feature values occurring in class w2. The computed values are substituted into the posterior probability formula P(w1|x) = p(x|w1)*P(w1)/p(x), yielding the probability that the image block belongs to the target class w1 for the given input feature value. The resulting posterior probabilities are substituted into the scoring formula
V = R1*P(w1|x) + R2*P(w1|y) + R3*P(w1|W) + R4*P(w1|H)
to calculate the score value of this image block. The score values of the other image blocks are obtained by the same method. The highest-scoring block is the effective image block; the parts to the left and right of the effective image block that fall short of the image edges are stretched to fill the missing regions, finally obtaining the touch point effective area in the initial image. The touch point effective area in the initial image is prestored; for images captured afterwards, the captured image is cropped directly according to the prestored touch point effective area to obtain the touch point effective area image, the touch point effective area image is stored, and the touch point position is detected from the stored touch point effective area image.
This embodiment gives only one method of calculating the touch point effective area in the initial image; other methods may also be used. For example: performing edge detection on the gray values of the initial image to determine a first edge; when the gray value of an edge point is less than a third preset value, monitoring the gray values of the pixels neighbouring the edge point, and if the gray value of a neighbouring pixel is greater than a fourth preset value, deleting that edge point and taking the neighbouring pixel as the edge point; judging whether the gray value of each edge point lies within a first preset range, and if not, deleting the edge points outside the first preset range; deleting edge points whose abscissas are identical where the number of such edge points exceeds two; determining a second edge from the edges remaining after deletion; compensating the breaks in the second edge from the first edge; and stretching the compensated second edge along its direction of extension to determine the prestored effective touch area. Other methods are not repeated here.
In accordance with the above method for touch point location, the present invention provides a touch point location system.
Referring to Fig. 7, which is a structural diagram of an embodiment of the touch point location system of the present invention, the system comprises:
a cropping module 701 for cropping the captured image according to the prestored touch point effective area to obtain a touch point effective area image; wherein the touch point effective area is the region of the image in which a touch point appears;
a storage module 702 for storing the touch point effective area image;
a detection module 703 for detecting the touch point position from the stored touch point effective area image.
In a specific embodiment, the system further comprises:
a touch point effective area determination module for dividing the initial image into image blocks according to gray value differences and collecting a feature value for each image block; substituting the feature value of each image block into the prestored scoring formula; and extending the image block corresponding to the maximum score value among the score values along its direction of extension to obtain the touch point effective area in the initial image, wherein the scoring formula is V = R*P(w1|t), V represents the score value, R represents the prestored weight, and P(w1|t) represents the probability that the image block containing the prestored feature value t is the touch point effective area.
In a specific embodiment, the system further comprises:
A segmentation and acquisition module, configured to divide the initial image into image blocks according to gray value differences and collect a characteristic value for each image block;
A class validation module, configured to traverse the characteristic values of the image blocks, selecting one characteristic value at a time, and to determine the probability that the image block containing that characteristic value belongs to the effective area class using the following posterior probability formula:

P(w1|t) = p(t|w1)·P(w1) / Σ_{j=1,2} p(t|wj)·P(wj),

wherein P(w1|t) represents the probability that the image block containing characteristic value t belongs to the effective area class w1; the touch point effective area images form the effective area class, with w1 representing the effective area class; the non-touch-point effective area images form the ineffective area class, with w2 representing the ineffective area class; p(t|wj) represents the prestored probability that an image block with characteristic value t occurs in class wj; P(w1) represents the prestored probability of a touch point effective area image among the image blocks; and P(w2) represents the prestored probability of a non-touch-point effective area image among the image blocks. The score value of the image block is obtained as the product of the probability that its characteristic value belongs to the effective area class and the prestored weight;
When the characteristic values of all image blocks have been traversed, the image block corresponding to the maximum score value among the score values is extended along its direction of extension to obtain the touch point effective area in the initial image.
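As an illustrative, non-authoritative sketch of the scoring described above, the Python below computes the posterior P(w1|t) by Bayes' rule and selects the block with the maximum score V = R·P(w1|t). The function names, the `p_t_given` callables, `priors`, and `weight` are placeholders for the prestored quantities p(t|wj), P(wj), and R; they are not taken from the patent.

```python
def posterior_effective(t, p_t_given, priors):
    """Posterior P(w1|t) over the two classes: effective area (index 0)
    and ineffective area (index 1), via Bayes' rule."""
    numerator = p_t_given[0](t) * priors[0]
    denominator = sum(p_t_given[j](t) * priors[j] for j in range(2))
    return numerator / denominator if denominator else 0.0

def best_block_index(features, p_t_given, priors, weight):
    """Score each block's characteristic value as V = R * P(w1|t)
    and return the index of the maximum-scoring block."""
    scores = [weight * posterior_effective(t, p_t_given, priors) for t in features]
    return max(range(len(scores)), key=lambda i: scores[i])
```

The winning block would then be extended along its direction of extension, as the text describes.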
In a specific embodiment, the above segmentation and acquisition module is also configured to: binarize the grayscale of the initial image to obtain a binary image; divide the binary image into image blocks, wherein the image blocks comprise touch point effective area images and non-touch-point effective area images; and collect a characteristic value for each corresponding image block of the initial image based on the image blocks of the binary image.
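A minimal sketch of the binarization step, assuming a fixed global threshold; the patent does not specify the thresholding rule, so the threshold value here is an assumption.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Map a grayscale image to a binary image: 1 where the gray value
    reaches the (assumed) threshold, 0 elsewhere."""
    return (np.asarray(gray) >= threshold).astype(np.uint8)
```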
In a specific embodiment, the above segmentation and acquisition module comprises:
A traversal module, configured to traverse each pixel of the binary image in turn;
A numbering module, configured to: when the gray value of a pixel is a first preset value, leave the pixel unnumbered; when the gray value of a pixel is a second preset value, if no numbered neighboring pixel exists, assign the pixel a new number; if the numbers of the neighboring pixels are identical, assign the pixel the same number as the neighboring pixels; and if the numbers of the neighboring pixels differ, unify the number of the pixel and its neighboring pixels to the number of one of the neighboring pixels.
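The numbering rules above amount to connected-component labelling with merging of conflicting numbers. The sketch below assumes 4-connectivity (left and upper neighbours) and a union-find structure for the merge step; both choices are illustrative, since the patent fixes neither the neighbourhood nor the merge bookkeeping.

```python
import numpy as np

def label_blocks(binary):
    """One-pass connected-component labelling with number merging,
    mirroring the numbering rules above (4-connectivity assumed)."""
    binary = np.asarray(binary)
    labels = np.zeros(binary.shape, dtype=int)
    parent = {}  # union-find over label numbers

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    next_label = 1
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0:          # first preset value: leave unnumbered
                continue
            neighbours = [labels[y, x - 1] if x else 0,
                          labels[y - 1, x] if y else 0]
            neighbours = [n for n in neighbours if n]
            if not neighbours:             # no numbered neighbour: new number
                parent[next_label] = next_label
                labels[y, x] = next_label
                next_label += 1
            else:                          # take one neighbour's number, merge the rest
                keep = find(neighbours[0])
                labels[y, x] = keep
                for n in neighbours[1:]:
                    parent[find(n)] = keep
    # second sweep: unify all merged numbers to their representative
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```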
In a specific embodiment, the system further comprises a training module, configured to:
Acquire a preset number of original images and binarize the gray values of each original image to obtain respective binary images;
Divide each binary image into image blocks, wherein the image blocks comprise touch point effective area images and non-touch-point effective area images;
Receive each touch point effective area image with its block number and each non-touch-point effective area image with its block number; classify the touch point effective area images into the effective area class and the non-touch-point effective area images into the ineffective area class; and determine P(wj) according to the formula P(wj) = Mj / (M1 + M2), wherein M1 represents the number of touch point effective area image blocks and M2 represents the number of non-touch-point effective area image blocks;
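Assuming P(wj) is the standard relative-frequency estimate Mj/(M1+M2) over the training block counts (an assumption on our part), the class priors reduce to:

```python
def class_priors(m1, m2):
    """Priors of the effective and ineffective area classes from the
    counts of training image blocks in each class."""
    total = m1 + m2
    return m1 / total, m2 / total
```

For example, with 30 effective and 70 ineffective training blocks, `class_priors(30, 70)` yields `(0.3, 0.7)`.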
Collect a characteristic value from each touch point effective area image and each non-touch-point effective area image, and determine, according to the relation of each characteristic value to the effective area class and the ineffective area class, the formula:

p(t|wj) = a0 for 0 ≤ t < 128; a1 for 128 ≤ t < 256; a2 for 256 ≤ t < 384; a3 for 384 ≤ t < 512; a4 for 512 ≤ t < 640;
Determine the weight according to P(wj) and p(t|wj) by finding the extremum through partial differentiation.
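The per-interval probabilities a0…a4 can plausibly be estimated as relative frequencies of a class's training characteristic values within each 128-wide interval; this relative-frequency estimation is an assumption, since the patent only states the piecewise form of p(t|wj).

```python
def bin_likelihoods(feature_values, bin_width=128, n_bins=5):
    """Estimate the piecewise likelihood p(t|wj) for one class as the
    fraction of its training characteristic values falling in each bin."""
    counts = [0] * n_bins
    for t in feature_values:
        k = int(t // bin_width)
        if 0 <= k < n_bins:
            counts[k] += 1
    total = sum(counts)
    return [c / total for c in counts] if total else [0.0] * n_bins
```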
The characteristic value t can be one or more of the abscissa and ordinate of the center coordinates of the image block, its length, and its width, wherein the abscissa of the center is the mean of the maximum and minimum abscissas of the image block's pixels, the ordinate of the center is the mean of the maximum and minimum ordinates of the image block's pixels, the length is the difference between the maximum and minimum abscissas of the image block's pixels, and the width is the difference between the maximum and minimum ordinates of the image block's pixels.
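These characteristic-value definitions translate directly to code; here an image block is represented as a list of (x, y) pixel coordinates, a representation of our choosing rather than the patent's.

```python
def block_features(pixels):
    """Center abscissa/ordinate, length, and width of an image block,
    per the definitions above (min/max over the block's pixel coordinates)."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    center_x = (max(xs) + min(xs)) / 2
    center_y = (max(ys) + min(ys)) / 2
    return center_x, center_y, max(xs) - min(xs), max(ys) - min(ys)
```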
The detailed process has been described in the method portion of this scheme and is not repeated here.
The above embodiments express only several implementations of the present invention, and their descriptions are comparatively specific and detailed, but they cannot therefore be interpreted as limiting the scope of the claims of the present invention. It should be pointed out that a person of ordinary skill in the art can, without departing from the inventive concept, make various variations and improvements, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the present patent shall be determined by the appended claims.

Claims (10)

1. A touch point positioning method, characterized in that it comprises the steps of:
cropping the acquired image according to a prestored touch point effective area to obtain a touch point effective area image; wherein said touch point effective area is the region in said image where a touch point appears;
storing said touch point effective area image;
detecting the position of the touch point according to said stored touch point effective area image;
wherein, before the step of cropping the acquired image according to the prestored touch point effective area to obtain the touch point effective area image, the method further comprises the steps of:
dividing an initial image into image blocks according to gray value differences, and collecting a characteristic value for each of said image blocks;
traversing the characteristic values of said image blocks, selecting the characteristic value of one image block at a time, and determining the probability that the image block containing said characteristic value belongs to an effective area class;
when the characteristic values of all image blocks have been traversed, obtaining the touch point effective area in said initial image according to said probabilities of said image blocks.
2. The touch point positioning method according to claim 1, characterized in that determining the probability that the image block containing said characteristic value belongs to the effective area class comprises the step of:
determining said probability using the following posterior probability formula:

P(w1|t) = p(t|w1)·P(w1) / Σ_{j=1,2} p(t|wj)·P(wj),

wherein P(w1|t) represents the probability that the image block containing characteristic value t belongs to the effective area class w1; the touch point effective area images form the effective area class, with w1 representing the effective area class; the non-touch-point effective area images form the ineffective area class, with w2 representing the ineffective area class; p(t|wj) represents the prestored probability that an image block with characteristic value t occurs in class wj; P(w1) represents the prestored probability of a touch point effective area image among the image blocks; and P(w2) represents the prestored probability of a non-touch-point effective area image among the image blocks;
and obtaining the touch point effective area in said initial image according to said probabilities when the characteristic values of all image blocks have been traversed comprises:
obtaining the score value of each image block as the product of the probability that its characteristic value belongs to the effective area class and a prestored weight;
when the characteristic values of all image blocks have been traversed, extending the image block corresponding to the maximum score value among said score values along its direction of extension to obtain the touch point effective area in said initial image.
3. The touch point positioning method according to claim 1, characterized in that the step of dividing the initial image into image blocks according to gray value differences and collecting a characteristic value for each of said image blocks comprises the steps of:
binarizing the grayscale of the initial image to obtain a binary image;
dividing said binary image into image blocks, wherein said image blocks comprise touch point effective area images and non-touch-point effective area images;
collecting a characteristic value for each corresponding image block of said initial image based on the image blocks of said binary image.
4. The touch point positioning method according to claim 3, characterized in that the step of dividing said binary image into image blocks comprises the steps of:
traversing each pixel of said binary image in turn;
when the gray value of a pixel is a first preset value, leaving the pixel unnumbered;
when the gray value of a pixel is a second preset value: if no numbered neighboring pixel exists, assigning the pixel a new number; if the numbers of the neighboring pixels are identical, assigning the pixel the same number as the neighboring pixels; and if the numbers of the neighboring pixels differ, unifying the number of the pixel and its neighboring pixels to the number of one of the neighboring pixels.
5. The touch point positioning method according to any one of claims 2 to 4, characterized in that, before the step of dividing the initial image into image blocks according to gray value differences and collecting a characteristic value for each of said image blocks, the method further comprises the steps of:
acquiring a preset number of original images and binarizing the gray values of each said original image to obtain respective binary images;
dividing each said binary image into image blocks, wherein said image blocks comprise touch point effective area images and non-touch-point effective area images;
receiving each touch point effective area image with its block number and each non-touch-point effective area image with its block number; classifying the touch point effective area images into the effective area class and the non-touch-point effective area images into the ineffective area class; and determining said P(wj) according to the formula P(wj) = Mj / (M1 + M2), wherein M1 represents the number of touch point effective area image blocks and M2 represents the number of non-touch-point effective area image blocks;
collecting a characteristic value from each said touch point effective area image and each said non-touch-point effective area image, and determining, according to the relation of each said characteristic value to said effective area class and ineffective area class, the formula p(t|wj) = a0 for 0 ≤ t < 128; a1 for 128 ≤ t < 256; a2 for 256 ≤ t < 384; a3 for 384 ≤ t < 512; a4 for 512 ≤ t < 640;
wherein said a0 is the probability of characteristic value t in the value range 0 ≤ t < 128, a1 is the probability of characteristic value t in the value range 128 ≤ t < 256, a2 is the probability of characteristic value t in the value range 256 ≤ t < 384, a3 is the probability of characteristic value t in the value range 384 ≤ t < 512, and a4 is the probability of characteristic value t in the value range 512 ≤ t < 640;
determining said weight according to said P(wj) and p(t|wj) by finding the extremum through partial differentiation.
6. A touch point positioning system, characterized in that it comprises:
a segmentation and acquisition module, configured to divide an initial image into image blocks according to gray value differences and collect a characteristic value for each of said image blocks;
a class validation module, configured to traverse the characteristic values of said image blocks, selecting the characteristic value of one image block at a time, and to determine the probability that the image block containing said characteristic value belongs to an effective area class; and, when the characteristic values of all image blocks have been traversed, to obtain the touch point effective area in said initial image according to said probabilities of said image blocks;
an interception module, configured to crop the acquired image according to the prestored touch point effective area to obtain a touch point effective area image, wherein said touch point effective area is the region in said image where a touch point appears;
a memory module, configured to store said touch point effective area image;
a detection module, configured to detect the position of the touch point according to said stored touch point effective area image.
7. The touch point positioning system according to claim 6, characterized in that:
said class validation module determines the probability that the image block containing said characteristic value belongs to the effective area class using the following posterior probability formula:

P(w1|t) = p(t|w1)·P(w1) / Σ_{j=1,2} p(t|wj)·P(wj),

wherein P(w1|t) represents the probability that the image block containing characteristic value t belongs to the effective area class w1; the touch point effective area images form the effective area class, with w1 representing the effective area class; the non-touch-point effective area images form the ineffective area class, with w2 representing the ineffective area class; p(t|wj) represents the prestored probability that an image block with characteristic value t occurs in class wj; P(w1) represents the prestored probability of a touch point effective area image among the image blocks; and P(w2) represents the prestored probability of a non-touch-point effective area image among the image blocks;
and obtaining the touch point effective area in said initial image according to said probabilities when the characteristic values of all image blocks have been traversed comprises:
obtaining the score value of each image block as the product of the probability that its characteristic value belongs to the effective area class and a prestored weight;
when the characteristic values of all image blocks have been traversed, extending the image block corresponding to the maximum score value among said score values along its direction of extension to obtain the touch point effective area in said initial image.
8. The touch point positioning system according to claim 6, characterized in that said segmentation and acquisition module is further configured to: binarize the grayscale of the initial image to obtain a binary image; divide said binary image into image blocks, wherein said image blocks comprise touch point effective area images and non-touch-point effective area images; and collect a characteristic value for each corresponding image block of said initial image based on the image blocks of said binary image.
9. The touch point positioning system according to claim 8, characterized in that said segmentation and acquisition module comprises:
a traversal module, configured to traverse each pixel of said binary image in turn;
a numbering module, configured to: when the gray value of a pixel is a first preset value, leave the pixel unnumbered; when the gray value of a pixel is a second preset value, if no numbered neighboring pixel exists, assign the pixel a new number; if the numbers of the neighboring pixels are identical, assign the pixel the same number as the neighboring pixels; and if the numbers of the neighboring pixels differ, unify the number of the pixel and its neighboring pixels to the number of one of the neighboring pixels.
10. The touch point positioning system according to any one of claims 7 to 9, characterized in that it further comprises a training module, configured to:
acquire a preset number of original images and binarize the gray values of each said original image to obtain respective binary images;
divide each said binary image into image blocks, wherein said image blocks comprise touch point effective area images and non-touch-point effective area images;
receive each touch point effective area image with its block number and each non-touch-point effective area image with its block number; classify the touch point effective area images into the effective area class and the non-touch-point effective area images into the ineffective area class; and determine said P(wj) according to the formula P(wj) = Mj / (M1 + M2), wherein M1 represents the number of touch point effective area image blocks and M2 represents the number of non-touch-point effective area image blocks;
collect a characteristic value from each said touch point effective area image and each said non-touch-point effective area image, and determine, according to the relation of each said characteristic value to said effective area class and ineffective area class, the formula p(t|wj) = a0 for 0 ≤ t < 128; a1 for 128 ≤ t < 256; a2 for 256 ≤ t < 384; a3 for 384 ≤ t < 512; a4 for 512 ≤ t < 640;
wherein said a0 is the probability of characteristic value t in the value range 0 ≤ t < 128, a1 is the probability of characteristic value t in the value range 128 ≤ t < 256, a2 is the probability of characteristic value t in the value range 256 ≤ t < 384, a3 is the probability of characteristic value t in the value range 384 ≤ t < 512, and a4 is the probability of characteristic value t in the value range 512 ≤ t < 640;
determine said weight according to said P(wj) and p(t|wj) by finding the extremum through partial differentiation.
CN201310270984.0A 2013-06-28 2013-06-28 Method and system for positioning touch point Active CN103324361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310270984.0A CN103324361B (en) 2013-06-28 2013-06-28 Method and system for positioning touch point


Publications (2)

Publication Number Publication Date
CN103324361A CN103324361A (en) 2013-09-25
CN103324361B true CN103324361B (en) 2016-05-25

Family

ID=49193152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310270984.0A Active CN103324361B (en) 2013-06-28 2013-06-28 Method and system for positioning touch point

Country Status (1)

Country Link
CN (1) CN103324361B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107077253A (en) * 2015-04-15 2017-08-18 奥林巴斯株式会社 Touch-panel device
CN106502476B (en) * 2016-11-04 2019-10-01 青岛海信电器股份有限公司 Multi-touch of infrared touch screen recognition methods and device
CN109637153A (en) * 2019-01-25 2019-04-16 合肥市智信汽车科技有限公司 A kind of vehicle-mounted mobile violation snap-shooting system based on machine vision
CN111081340B (en) * 2019-12-04 2023-11-03 四川骏逸富顿科技有限公司 Method for remotely detecting whether electronic prescription information is complete or not
CN113934089A (en) * 2020-06-29 2022-01-14 中强光电股份有限公司 Projection positioning system and projection positioning method thereof
CN114103845B (en) * 2022-01-25 2022-04-15 星河智联汽车科技有限公司 Vehicle central control screen operator identity recognition method and device and vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393498B (en) * 2008-10-31 2010-09-15 广东威创视讯科技股份有限公司 Image processing process for touch screen positioning
CN102323866B (en) * 2011-09-01 2013-08-21 广东威创视讯科技股份有限公司 Camera shooting type touch control method and device
KR20130038081A (en) * 2011-10-07 2013-04-17 삼성전자주식회사 Apparatus and method for detecting input position by deter using displaying pattern determination



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant