CN104202547B - Method for extracting a target object from a projected picture, projection interaction method, and systems therefor - Google Patents
- Publication number
- CN104202547B (application CN201410429157.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- pixel value
- target object
- projection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The present invention provides a method and system for extracting a target object from a projected picture. The method includes: obtaining the projected image of a projector, and generating a projection prediction image of the projected image for the projection surface according to a preset position conversion relation and a preset pixel value conversion relation; capturing, by a camera device, the display image currently shown on the projection surface; and comparing the pixel values of pixels at the same positions in the display image and the projection prediction image, and extracting the pixels in the display image whose difference exceeds a preset threshold to obtain the target object. The invention can accurately extract the target object from the projected picture. The invention further provides a projection interaction method and system, which can effectively improve the efficiency of interactive processing during projection.
Description
Technical field
The present invention relates to the field of projection, and more particularly to a method for extracting a target object from a projected picture, a system for extracting a target object from a projected picture, a projection interaction method, and a projection interaction system.
Background art
Although multimodal human-computer interaction technology that integrates vision, hearing, touch, smell, taste and so on is increasingly applied, the hands, as an important element of action and perception models in virtual reality systems, still play an irreplaceable role. At present the touch screen, as one of the newest computer input devices, is the simplest, most convenient and most natural mode of human-computer interaction; it gives multimedia a brand-new appearance and is an extremely attractive interactive device. With the development of science and technology, projectors are used ever more widely, in training conferences, classroom teaching, cinemas and so on. A projector is easy to use and can turn any plane into a display screen.
Cameras and projectors have gradually entered everyday life, and projection is now applied in many areas, such as teaching and meetings of all kinds. Automatic gesture recognition by means of a projector and a camera has become a current research hotspot; through automatic gesture recognition, better human-computer interaction is achieved, making projection more convenient to use.
In an interactive projection system, in order to recognize gestures, the target area, namely the hand region, must first be detected in the projected image. The prior art offers a variety of computer-vision methods for detecting the target area, the most common being detection based on the color and shape of the hand. Detecting by skin color in a projection interaction system, however, has two major drawbacks: first, the light emitted by the projector changes the color of the arm it falls on, which makes detection difficult; second, when the projected picture itself contains a human hand, false detections result.
In summary, in the current field of projection interaction, methods for detecting the target area in the projected picture yield results of low precision, so the ability to recognize the motion behavior of a target object in the projected picture is poor and the efficiency of projection interactive processing is low.
Content of the invention
On this basis, the present invention provides a method for extracting a target object from a projected picture that can accurately extract the target object from the projected picture.
The present invention also provides a projection interaction method that can effectively improve the efficiency of interactive processing during projection.
A method for extracting a target object from a projected picture comprises the following steps:
obtaining the projected image of a projector, and generating a projection prediction image of the projected image for the projection surface according to a preset position conversion relation and a preset pixel value conversion relation;
capturing, by a camera device, the display image currently shown on the projection surface;
comparing the pixel values of pixels at the same positions in the display image and the projection prediction image, and extracting the pixels in the display image whose difference exceeds a preset threshold to obtain the target object.
A projection interaction method includes the above method for extracting a target object from a projected picture, and further comprises the following steps:
detecting the spatio-temporal features of the target object from each frame of the display image captured by the camera device;
inputting the spatio-temporal features into a preset motion behavior classifier to identify the motion behavior of the target object, and executing a preset control instruction corresponding to that motion behavior.
A system for extracting a target object from a projected picture includes:
a generation module for obtaining the projected image of a projector and generating a projection prediction image of the projected image for the projection surface according to a preset position conversion relation and a preset pixel value conversion relation;
an acquisition module for capturing, by the camera device, the display image currently shown on the projection surface;
a target object extraction module for comparing the pixel values of pixels at the same positions in the display image and the projection prediction image, and extracting the pixels in the display image whose difference exceeds a preset threshold to obtain the target object.
A projection interaction system includes the above system for extracting a target object from a projected picture, and further includes:
a spatio-temporal feature detection module for detecting the spatio-temporal features of the target object from each frame of the display image captured by the camera device;
an identification module for inputting the spatio-temporal features into a preset motion behavior classifier, identifying the motion behavior of the target object, and executing a preset control instruction corresponding to that motion behavior.
In the above method and system for extracting a target object from a projected picture, a projection prediction image of the projected image relative to the projection surface is generated according to the preset position conversion relation and pixel value conversion relation; the display image currently shown on the projection surface is captured by the camera device; and, by comparing the projection prediction image with the display image, the pixels whose pixel value difference at the same position exceeds a preset threshold are obtained and extracted as the target object. Because the invention predicts the projected image and compares the resulting projection prediction image with the actual display image, a difference in pixel values indicates that a target object has appeared on the projection surface, so the target object region can be extracted accurately from the display image with high detection precision.
The above projection interaction method and system can accurately extract the target object region, so the motion behavior of a target object in the projected picture is recognized well, significantly improving the efficiency of projection interactive processing.
Brief description of the drawings
Fig. 1 is a diagram of an application scenario of the method for extracting a target object from a projected picture according to embodiment one of the present invention.
Fig. 2 is a flow diagram of the method for extracting a target object from a projected picture according to embodiment two of the present invention.
Fig. 3 is a flow diagram of obtaining the pixel position conversion relation in Fig. 2.
Fig. 4 is a schematic diagram of the position detection image.
Fig. 5 is a flow diagram of obtaining the pixel value conversion relation in Fig. 2.
Fig. 6 is a flow diagram of generating the projection prediction image of the projected image in Fig. 2.
Fig. 7 is a flow diagram of the projection interaction method according to embodiment three of the present invention.
Fig. 8 is a flow diagram of detecting the motion features of the target object in Fig. 7.
Fig. 9 is a structural diagram of the system for extracting a target object from a projected picture according to embodiment four of the present invention.
Fig. 10 is a structural diagram of the projection interaction system according to embodiment five of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment one
As shown in Fig. 1, the application scenario of the method for extracting a target object from a projected picture in one embodiment includes a computing device 12, a projection device 13 and a camera device 11. The projection device 13 and the camera device 11 are each connected to the computing device 12. The computing device 12 stores the projection content, controls the projection device 13, and receives and analyses the data input by the camera device 11; the projection device 13 projects the content onto the projection surface 14; and the camera device 11 shoots the projection surface 14 to acquire video and image data.
Embodiment two
As shown in Fig. 2, the flow of the method for extracting a target object from a projected picture in one embodiment, described here as applied to a computing device, may include the following steps:
S21, obtaining the projected image of the projector, and generating a projection prediction image of the projected image for the projection surface according to a preset position conversion relation and a preset pixel value conversion relation;
S22, capturing, by a camera device, the display image currently shown on the projection surface;
S23, comparing the pixel values of pixels at the same positions in the display image and the projection prediction image, and extracting the pixels in the display image whose difference exceeds a preset threshold to obtain the target object.
In this embodiment, in a system consisting of a projection device and a camera device, the projection content is stored in the computing device, so the computing device knows the image content being projected and can therefore predict, by estimation, the image the camera device will read. If the projected picture is not occluded by a target object, the image read by the camera device should be very close to the image predicted by the computing device; if some position of the projected picture is occluded by a target object, the image actually read by the camera device differs considerably from the predicted image at the occluded location. Using this information, the system can, without interference from the projected picture itself, accurately judge whether the projected picture contains a target object to be detected and accurately locate the target object on the projected picture.
For step S21, the projected image of the projector is obtained, and the projection prediction image of the projected image for the projection surface is generated according to the preset position conversion relation and pixel value conversion relation.
Here the projected image is the projection content stored in the computing device, and the projection prediction image is the image the computing device obtains by predictive estimation from the projected image.
To predict the image on the projection surface that the camera device will capture, the conversion correspondence between the projected image and the projection prediction image must first be obtained. An image involves two kinds of information, pixel position and pixel value, so two conversion relations must be determined accordingly: the pixel position conversion relation and the pixel value conversion relation. From the position conversion relation and the pixel value conversion relation, the projection prediction image of the projected image can then be generated.
The position conversion relation is obtained by projecting a preset position detection image onto the projection surface, capturing with the camera device the first image it forms there, and comparing each pixel of the position detection image with the first image. This embodiment illustrates the process of establishing the pixel position relation between the two. As shown in Fig. 3, the method may further include the following steps:
S31, controlling the projection device to project a preset position detection image onto the projection surface, wherein the position detection image is a preset grid image divided into multiple squares of alternating black and white;
S32, capturing, by the camera device, the first image formed by the position detection image on the projection surface;
S33, calculating the position conversion relation from the ratio between the coordinates of the corner points of the position detection image and the coordinates of the corner points of the first image.
In this embodiment, when the projected image is input to the projection device, projected onto the projection surface, and the display image on the current projection surface is then captured by shooting the projection surface with the camera device, causes such as an uneven projection surface and the systematic errors of the projector and camera device may make the projected image and the display image not entirely consistent; for example, if the projection surface is uneven, the display image may be stretched, compressed or distorted. The pixel position conversion relation between the two must therefore be determined.
First, the projection device is controlled to project the preset position detection image onto the projection surface, the position detection image being a preset grid image divided into multiple squares of alternating black and white. A grid image is used because it contains many squares, and each square has four vertices, which facilitates calibration of pixel geometric positions. Fig. 4 shows a schematic diagram of such a position detection image. The number of squares in the grid image of this embodiment can be determined according to actual needs and is not limited here; the more squares, the higher the precision of the result. In the grid image, adjacent squares differ in color, being black and white respectively; black and white in this embodiment means the colors with gray values 0 and 255. Using black and white maximizes the difference between the two gray values, which helps improve processing accuracy.
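As a minimal sketch of the grid image described above (the function name and square size are illustrative, not from the patent), the checkerboard can be generated directly with integer division:

```python
import numpy as np

def make_position_detection_image(height, width, cell):
    """Checkerboard of alternating black (0) and white (255) squares,
    as in the position detection image; `cell` is the square side length."""
    yy, xx = np.mgrid[0:height, 0:width]
    return ((((yy // cell) + (xx // cell)) % 2) * 255).astype(np.uint8)
```

The corner points of the squares in this image are what the calibration step later matches against the captured first image.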
After the position detection image is projected onto the projection surface, it is captured by the camera device to obtain the first image formed on that surface.
A corner point in this embodiment is a pixel where the gradient of the gray value changes rapidly. In the position detection image and the first image, since the alternating squares are black and white, the gradient changes fastest at the vertices of each square, so the corner points are the vertices of the squares. The corner points of the position detection image and of the first image are obtained, and the coordinates of each corner point in both images are recorded.
The corner points of the position detection image correspond to the corner points of the first image. If corresponding corner points differ in coordinates between the two images, the position conversion relation can be obtained from the ratio of their coordinates.
Specifically, the position conversion relation is in fact the mapping of each pixel of the position detection image into the first image, and it can be represented by a matrix. Suppose P is any point in the position detection image; then P' = K*P gives the point P' on the first image corresponding to P, where K is the position conversion matrix. If K is a 3 × 3 matrix it contains 9 unknown elements; substituting the coordinates of the corner points of the position detection image and of the first image obtained above into P' = K*P yields multiple equations, and solving them gives the matrix, that is, the position conversion relation.
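Solving P' = K*P from corner correspondences is, in effect, estimating a planar homography. A minimal sketch using the direct linear transform follows; the function name is illustrative and the patent does not prescribe this particular solver:

```python
import numpy as np

def estimate_position_matrix(src, dst):
    """Least-squares estimate of the 3x3 matrix K with P' = K*P in
    homogeneous coordinates, from matched corner coordinates.
    src, dst: arrays of shape (N, 2), N >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on K.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    K = vt[-1].reshape(3, 3)
    return K / K[2, 2]  # normalise so that K[2, 2] == 1
```

With exact corner coordinates four correspondences suffice; using all detected grid vertices makes the least-squares estimate robust to localisation noise.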
The projected image and the projection prediction image also have a pixel value conversion relation, which is obtained by projecting a preset pixel value detection image onto the projection surface, capturing with the camera device the second image it forms there, and comparing the pixel values of each pixel of the pixel value detection image with the second image. This embodiment illustrates the process of establishing the pixel value conversion relation between the two. As shown in Fig. 5, the method may further include the following steps:
S51, controlling the projector to project a preset pixel value detection image onto the projection surface, wherein the pixel value detection image includes three primary color images, a black image and a white image;
S52, capturing, by the camera device, the second image formed by the pixel value detection image on the projection surface;
S53, calculating the pixel value conversion relation from the ratio between the pixel value of each pixel of the pixel value detection image and the pixel value of each pixel of the second image.
The pixel value conversion relation may be:
C = A (VP + F);
where C is the pixel value of pixel M in the second image, A is the reflectivity of the projection surface, V is the color mixing matrix, P is the pixel value of pixel M' of the pixel value detection image, F is the contribution of ambient light, and the position of pixel M is the same as that of pixel M'.
In this embodiment, when the projected image is input to the projection device, projected onto the projection surface, and the display image on the current projection surface is then captured by shooting the projection surface with the camera device, causes such as uneven sensitivity of the camera sensor, lens distortion and the influence of ambient light mean that even an identical color shows different pixel values at the edge and at the centre of the camera image, producing color distortion between the projected image and the display image. The pixel value conversion relation between the two must therefore be determined.
First, the projection device is controlled to project the preset color detection images onto the projection surface. The pixel value detection images include the three primary color images, a black image (RGB value R:0, G:0, B:0) and a white image (RGB value R:255, G:255, B:255); the three primary color images are a red image (R:255, G:0, B:0), a green image (R:0, G:255, B:0) and a blue image (R:0, G:0, B:255).
After each pixel value detection image is projected onto the projection surface, it is captured by the camera device to obtain the second image formed on that surface, and the pixel value conversion relation is calculated from the ratio between the pixel value of each pixel of the pixel value detection image and the pixel value of each pixel of the second image.
In this embodiment the pixel value conversion relation may be the mathematical model shown below:
C = A (VP + F)
where the vector C represents the pixel value of a pixel in the second image; the vector P represents the pixel value of the corresponding pixel in the pixel value detection image; the matrix A represents the reflectivity of the projection surface; the vector F represents the contribution of ambient light; and the matrix V, called the color mixing matrix, describes the interaction between the color channels of the system. From the pixel value detection images, the matrix A, the matrix V and the vector F can be calculated, giving the pixel value conversion relation.
Having determined the position conversion relation and the pixel value conversion relation above, the corresponding projection prediction image can be generated from the projected image. In a preferred embodiment, as shown in Fig. 6, the step of generating the projection prediction image of the projected image according to the preset position conversion relation and pixel value conversion relation may include:
S61, converting the position of each pixel of the projected image according to the position conversion relation to obtain the converted position of the pixel;
S62, converting the pixel value of each pixel of the projected image according to the preset pixel value conversion relation to obtain the converted pixel value of each pixel;
S63, setting the corresponding pixels according to the converted positions and converted pixel values to obtain the projection prediction image.
In this embodiment, with the position conversion relation and the pixel value conversion relation, the projection prediction image is obtained quickly by applying the corresponding conversions.
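Steps S61 to S63 can be sketched as a forward warp that moves each pixel through the position matrix K and pushes its color through the fitted model. This is a simplified nearest-neighbour sketch (a real implementation would interpolate and handle holes; M and f stand for the fitted A*V and A*F of the color model):

```python
import numpy as np

def generate_prediction_image(projected, K, M, f):
    """Sketch of S61-S63: position conversion by K, pixel value
    conversion by C = M*P + f, nearest-neighbour placement."""
    h, w, _ = projected.shape
    predicted = np.zeros_like(projected, dtype=float)
    for y in range(h):
        for x in range(w):
            px, py, pw = K @ np.array([x, y, 1.0])      # S61: position
            u, v = int(round(px / pw)), int(round(py / pw))
            if 0 <= u < w and 0 <= v < h:               # S63: place pixel
                predicted[v, u] = M @ projected[y, x] + f   # S62: value
    return predicted
```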
The current display image shown on the perspective plane is gathered for step S22, by camera device;
Projected image is projected on perspective plane by projection arrangement, and perspective plane is shot by camera device, is shot
Video data by multiple image Sequence composition, obtain the display image on perspective plane.
For step S23, the contrast display image and the pixel of same position pixel in the projection prognostic chart picture
The difference of value, extracts the pixel that difference described in the display image is more than predetermined threshold value, obtains target object;
In the present embodiment, each pixel in prognostic chart picture can will be projected, same position corresponding with display image
Each pixel is contrasted, both pixel values of contrast;If perspective plane does not occur detection target, projection prognostic chart picture is with showing
The similarity of diagram picture is higher;If detection target occurs in perspective plane, detection target is also projected onto on projected picture, then collected
Projected picture on display image detection target area can with projection prognostic chart picture can be variant;Therefore, contrast two is passed through
The pixel value of person's same position pixel, if pixel value is larger, can regard the pixel as target pixel points;Specifically
, pixel value can be compared with predetermined threshold value, predetermined threshold value can be set according to actual needs, and the present embodiment is to this
Without limiting;All target pixel points in display image are obtained, then can obtain the target object to be detected.
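Step S23 then reduces to a per-pixel thresholded comparison. A minimal sketch (the distance metric and the threshold value are illustrative choices, not prescribed by the patent):

```python
import numpy as np

def extract_target_pixels(display, predicted, threshold=30.0):
    """Mark pixels whose color in the display image differs from the
    projection prediction image by more than the preset threshold
    (Euclidean distance over the color channels)."""
    diff = np.linalg.norm(
        np.asarray(display, float) - np.asarray(predicted, float), axis=-1)
    return diff > threshold
```

The resulting boolean mask is the set of target pixels; its connected region is the extracted target object.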
In the method for extracting a target object from a projected picture of this embodiment, the projection prediction image of the projected image relative to the projection surface is generated according to the preset position conversion relation and pixel value conversion relation; the display image currently shown on the projection surface is captured by the camera device; and, by comparing the projection prediction image with the display image, the pixels whose pixel value difference at the same position exceeds a preset threshold are obtained and extracted as the target object. Because this embodiment predicts the projected image and compares the resulting projection prediction image with the actual display image, a difference in pixel values indicates that a target object has appeared on the projection surface, so the target object region can be extracted accurately from the display image with high detection precision.
Embodiment three
As shown in Fig. 7, the present invention also provides a projection interaction method, described here as applied to a computing device. The method includes the method for extracting a target object from a projected picture of embodiment two, and comprises the following steps:
S21, obtaining the projected image of the projector, and generating a projection prediction image of the projected image for the projection surface according to a preset position conversion relation and a preset pixel value conversion relation;
S22, capturing, by the camera device, the display image currently shown on the projection surface;
S23, comparing the pixel values of pixels at the same positions in the display image and the projection prediction image, and extracting the pixels in the display image whose difference exceeds a preset threshold to obtain the target object;
S74, detecting the spatio-temporal features of the target object from each frame of the display image captured by the camera device;
S75, inputting the spatio-temporal features into a preset motion behavior classifier, identifying the motion behavior of the target object, and executing a preset control instruction corresponding to that motion behavior.
Steps S21 to S23 of this embodiment can be implemented as described in embodiment two and are not repeated here.
For step S74, the spatio-temporal features of the target object are detected from each frame of the display image captured by the camera device.
If a target object appears on the projected picture and exhibits motion behavior, then since the target object can be extracted from the display image, the motion features of the target object can be identified from consecutive frames of the display image in the video data.
In a preferred embodiment, as shown in Fig. 8, the step of detecting the motion features of the target object from each frame of the display image in the video data includes:
S81, extracting the SURF feature point set and the optical flow feature point set from the target object of N consecutive frames of the display image to obtain the interest points of the target object, wherein the interest points are the intersection of the SURF feature point set and the optical flow feature point set, and N is a preset frame number detection unit;
S82, constructing multiple Delaunay triangles from the interest points using the Delaunay triangulation rule;
S83, weighting the SURF feature vectors and optical flow features of each Delaunay triangle according to preset weight coefficients to obtain the spatio-temporal features.
The target object consists of the target pixels in the display image. The principle of extracting SURF (speeded up robust features) features from the image is as follows:
1) Extraction of feature points:
Given a function f(x, y), the Hessian matrix H consists of the partial derivatives of the function. The Hessian matrix of a pixel I(x, y) in the image is first defined as:
H(I(x, y)) = [ Lxx(x, y)  Lxy(x, y) ; Lxy(x, y)  Lyy(x, y) ]
so that each pixel yields a Hessian matrix, whose discriminant is:
det(H) = Lxx * Lyy - Lxy^2
The value of the discriminant is the determinant of the matrix H; its sign can be used to classify all the pixels, judging from whether the discriminant is positive or negative whether the pixel is an extreme point.
A filter is then selected; this embodiment uses the second-order derivatives of the standard Gaussian function. The second-order partial derivatives are computed by convolution with the corresponding kernels, which yields the three matrix elements Lxx, Lxy, Lyy of the matrix H, and hence the value of its determinant as above.
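As a rough numerical illustration of the determinant-of-Hessian response above, the second derivatives can be approximated with finite differences (a simplification: SURF proper uses box-filter approximations of Gaussian second derivatives at multiple scales):

```python
import numpy as np

def hessian_determinant(image):
    """Per-pixel det(H) = Lxx*Lyy - Lxy**2, with the second derivatives
    approximated by central finite differences instead of Gaussian
    derivative convolutions."""
    img = np.asarray(image, float)
    ly, lx = np.gradient(img)          # first derivatives along rows, cols
    lyy = np.gradient(ly, axis=0)
    lxy = np.gradient(ly, axis=1)
    lxx = np.gradient(lx, axis=1)
    return lxx * lyy - lxy ** 2
```

On a quadratic surface such as I(x, y) = x^2 + y^2 the interior response is exactly Lxx*Lyy = 4, since the mixed derivative vanishes.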
Detecting the target pixels then involves mainly the following two steps:
a. Gaussian filtering: second-order Gaussian derivative templates (for Lxx, Lyy and Lxy) are generated with different values of σ and convolved with the target object region of the image;
b. the corresponding peaks are searched for in the position space and the scale space of the target object region in the image.
This embodiment introduces the concept of an image stack: a group of images of identical size, each filtered with a second-order Gaussian derivative template of a different size, arranged along the Z axis from small to large according to template size. The neighbourhood of each pixel of an intermediate layer is then 3 × 3 × 3 (including the layers above and below). If the response value of a pixel is the maximum among these 27 points, the point is considered a SURF feature point.
2) Matching of feature points:
a. Finding the feature vector: to match feature points, the feature vector of each feature point must be extracted; the similarity of two vectors is then used to judge whether two points correspond between the two images.
Computing the principal direction of a feature point: this embodiment can use a template with side length 4; for any target pixel the template is applied to the point, and the difference between the pixel values of the black and white parts of the Haar feature is computed and used as the value of the point, giving the horizontal direction feature value harrx and the vertical direction feature value harry.
To guarantee rotational invariance, SURF does not compute a gradient histogram but instead computes Haar features in the neighbourhood of the feature point: centred on the feature point, within a radius of 6 scale values, the horizontal and vertical responses of all points inside a 60-degree sector are summed, each assigned a different Gaussian weight. A circular neighbourhood of radius 6 pixels around the feature point contains 109 pixels; for each of these the direction angle = arctan(harry / harrx) is obtained, and by the nearest-neighbour rule the angles are assigned to the six values 60, 120, ..., 300, 360 degrees; the harrx and harry values of the pixels falling into the same range are summed separately. To give nearby pixels a larger influence, the Gaussian weight coefficients must also be taken into account. The maximal summed harrx and harry so obtained form the principal direction vector.
B. Constructing the SURF feature point descriptor: in SURF a square frame is likewise taken around the feature point, with a side length of 8 scale units. The gradient magnitude and direction of each 4 × 4 pixel block are computed (the harrx and harry obtained in step A can be used). The region of side length 8 is divided into the regions T1, T2, T3, T4 of side length 2, so that each region contains 4 smaller sub-regions of 4 pixels each. harrx and harry are obtained by subtracting the pixel grey values of the black part from those of the white part, which yields the direction vectors. There are 16 such vectors in total; the direction angles of these vectors are merged into 8 directions (up, down, left, right and the four diagonals), and the values of T1, T2, T3, T4 in these 8 directions are computed. In this way, since there are 8 directions per region pair, the feature descriptor is composed of 32 feature components in total.
C. Matching of feature points: the simplest criterion is used, taking the pair with the maximum inner product of the two vectors as the best match. A threshold is set, and two feature points are considered matched only when this maximum exceeds the threshold.
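Step C above (maximum inner product plus a threshold) can be sketched as follows; the 0.8 threshold is an example value we assume, not one taken from the patent, and descriptors are assumed L2-normalised so the inner product is a similarity in [−1, 1]:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, threshold=0.8):
    """For each descriptor in desc_a take the desc_b entry with the
    largest inner product; keep the pair only if that maximum exceeds
    the threshold, as described in step C."""
    matches = []
    sims = np.asarray(desc_a, float) @ np.asarray(desc_b, float).T
    for i, row in enumerate(sims):
        j = int(np.argmax(row))
        if row[j] > threshold:
            matches.append((i, j))
    return matches
```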
The principle of extracting optical-flow feature points in the image is as follows:
Optical flow refers to the instantaneous velocity of the pixel motion of a spatially moving object on the observation imaging plane. It is a method that uses the changes of the pixels of an image sequence in the time domain and the correlation between consecutive frames to find the correspondence between the previous frame and the current frame, and thus to compute the motion information of the object between consecutive frames.
The optical flow method in fact infers the moving speed and direction of an object by detecting how the intensity of image pixels changes over time. At each moment there is a two- or multi-dimensional vector set, such as (x, y, t), representing the instantaneous velocity of the specified coordinate point at time t. Let I(x, y, t) be the intensity of point (x, y) at time t; within a very short time Δt, x and y increase by Δx and Δy respectively, which gives:
I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + higher-order terms
Assuming the object located at point (x, y) at time t is located at point (x + Δx, y + Δy) at time t + Δt, the following holds:
I(x + Δx, y + Δy, t + Δt) = I(x, y, t)
Therefore:
(∂I/∂x)(Δx/Δt) + (∂I/∂y)(Δy/Δt) + ∂I/∂t = 0
Writing u = Δx/Δt, v = Δy/Δt, Ix = ∂I/∂x, Iy = ∂I/∂y and It = ∂I/∂t, this becomes:
Ix·u + Iy·v = −It
Assuming that (u, v) is constant within a small local neighbourhood, every pixel in the neighbourhood contributes one such equation, i.e. an over-determined linear system in (u, v).
The purpose of the optical-flow computation is precisely to make Σ(Ix·u + Iy·v + It)² minimal; the minimising (u, v) is exactly the direction and magnitude of the optical flow.
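Under the constant-flow assumption, minimising Σ(Ix·u + Iy·v + It)² over a local window is an ordinary least-squares problem. A minimal numpy sketch (this is the Lucas-Kanade formulation, used here as one concrete way to realise the minimisation; the patent itself does not name a specific solver):

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Least-squares solution of min over (u, v) of
    sum (Ix*u + Iy*v + It)^2 within a local window:
    stack one equation Ix*u + Iy*v = -It per pixel and solve."""
    A = np.stack([np.ravel(Ix), np.ravel(Iy)], axis=1).astype(float)
    b = -np.ravel(It).astype(float)
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv   # (u, v): direction and magnitude of the optical flow
```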
In this embodiment, the SURF method is used to detect SURF points in the target object region of every frame of the display image; SURF feature description is performed on the extracted SURF feature points to obtain their SURF feature vectors. This embodiment takes N consecutive frames of the display image in the video data as one detection unit; N can be, for example, 4, 5 or 6 frames, and may be set according to actual needs.
At the same time, optical-flow feature points are obtained by the optical flow method: the optical-flow features are computed from the 1st frame to the Nth frame of the N frames, giving the optical-flow feature points and their optical-flow feature vectors.
The points of interest are then obtained by screening the obtained SURF points and optical-flow feature points: a point of interest is a feature point that possesses both kinds of features, that is, the intersection of the SURF point set and the optical-flow point set.
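The screening step, keeping only points present in both the SURF set and the optical-flow set, can be sketched as below. The pixel tolerance `tol` is an assumption we add for illustration, since coordinates from two different detectors rarely coincide exactly:

```python
def interest_points(surf_pts, flow_pts, tol=1.0):
    """Intersection screening: keep a SURF point only if an
    optical-flow point lies within tol pixels of it (tol is an
    added assumption; exact coordinate equality is too strict)."""
    kept = []
    for (x, y) in surf_pts:
        if any(abs(x - fx) <= tol and abs(y - fy) <= tol
               for (fx, fy) in flow_pts):
            kept.append((x, y))
    return kept
```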
Multiple Delaunay triangles of the points of interest are built using the Delaunay triangulation rule; that is, the obtained points of interest are constrained by the Delaunay triangulation rule. In this way, spatio-temporal features can be extracted from a group of feature points rather than from isolated individual points. In each Delaunay triangulation, every three points form one triangular region, and the extraction of spatio-temporal features treats that region as one unit.
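A sketch of the triangulation step using `scipy.spatial.Delaunay` (the patent does not name a library; scipy is our choice for illustration):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_interest_points(points):
    """Group the points of interest into Delaunay triangles; each row
    of the result indexes the three points of one triangular unit."""
    tri = Delaunay(np.asarray(points, dtype=float))
    return tri.simplices            # shape: (n_triangles, 3)
```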
From each of the N−1 frame intervals of the N frames of the display image, the optical-flow feature of each point of interest is extracted, and each point of interest is then tracked according to its optical-flow feature.
As the local motion feature, the motion of the feature points is first estimated from the matrix obtained by the optical flow method; each moving point of interest in a video segment can then be represented by a 5-dimensional feature vector comprising x+, x−, y+, y− and the no-flow component x0, where x+ denotes the positive x-axis measurement and x− the negative x-axis measurement. The motion feature of each video segment is normalised so that all components sum to approximately 1. The 5-dimensional feature vectors obtained from all N−1 frame intervals are combined into one motion vector; the dimension of the motion feature is therefore (N−1) × 5.
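One possible reading of the 5-dimensional motion feature (x+, x−, y+, y−, x0) is a normalised histogram accumulated over one frame interval; the exact accumulation rule is not spelled out in the text, so the sketch below is an assumption:

```python
import numpy as np

def motion_histogram(flow_vectors, eps=1e-3):
    """Accumulate flow vectors (dx, dy) into the 5 components
    x+, x-, y+, y- and the no-flow count x0, then normalise so the
    components sum to 1 (the accumulation rule is our assumption)."""
    f = np.asarray(flow_vectors, dtype=float)
    h = np.array([
        f[:, 0].clip(min=0).sum(),       # x+ : positive x measurements
        (-f[:, 0]).clip(min=0).sum(),    # x- : negative x measurements
        f[:, 1].clip(min=0).sum(),       # y+
        (-f[:, 1]).clip(min=0).sum(),    # y-
        float((np.hypot(f[:, 0], f[:, 1]) < eps).sum()),  # x0: no flow
    ])
    s = h.sum()
    return h / s if s > 0 else h
```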
In the final step, a spatio-temporal feature vector can be obtained for each Delaunay triangle by combining its SURF features with its normalised motion feature.
The SURF descriptors of the three feature points of a Delaunay triangle are used as the local texture feature; the three descriptors are combined in decreasing order of their absolute values. Since a SURF descriptor is 64-dimensional, the dimension of the texture feature is 64 × 3 = 192.
Then the (N−1) × 5-dimensional motion vectors are combined, in decreasing order of their absolute values, into one local motion feature.
Since all points have undergone the Delaunay triangulation process, the three points of each triangle are treated as one whole, so the texture feature vector is 3 × 64 = 192-dimensional.
The SURF feature and the motion feature vector are combined with a weighting to obtain the spatio-temporal feature vector; the weighting coefficient w can be set according to actual needs, for example determined by experiment. The dimension of the final spatio-temporal feature vector is 192 + (N−1) × 3 × 5, where "× 3" indicates that the motion vectors of the 3 points of a Delaunay triangle are merged.
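Assembling the spatio-temporal vector from the three SURF descriptors and the three weighted motion features of one triangle might look like this (the ordering-by-absolute-value step is omitted for brevity, and the function names are ours):

```python
import numpy as np

def spatio_temporal_feature(surf_desc3, motion_feats3, w=0.5):
    """Concatenate the three 64-d SURF descriptors of a Delaunay
    triangle (192 texture dims) with its three (N-1)x5 motion
    features, weighted by the experimentally chosen coefficient w.
    Final dimension: 192 + (N-1) * 3 * 5."""
    texture = np.concatenate(surf_desc3)        # 3 * 64 = 192 dims
    motion = w * np.concatenate([np.ravel(m) for m in motion_feats3])
    return np.concatenate([texture, motion])
```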
This embodiment subdivides the extracted points of interest again with Delaunay triangulation. The subsequent spatio-temporal features are thus extracted from each triangular region rather than from each individual point, which makes the spatio-temporal features more robust and richer and improves the precision of motor behaviour recognition.
In step S75, the spatio-temporal feature is input into a preset motor behaviour classifier, the motor behaviour of the target object is identified, and the preset control instruction corresponding to that motor behaviour is executed according to it.
The motor behaviour classifier stores a variety of preset spatio-temporal features and their corresponding motor behaviours. After the spatio-temporal feature of the target object has been extracted, it is input into the classifier and matched, so that the motor behaviour is identified and the corresponding control instruction is then executed according to the motor behaviour.
This embodiment uses the support vector machine (SVM) method for the classification and identification of the spatio-temporal features.
This embodiment uses the radial basis function (RBF), also known as the Gaussian kernel function:
K(x, y) = exp(−‖x − y‖² / (2σ²))
Its principle is as follows:
Assume a set of positive and negative training samples, labelled {xi, yi}, i = 1, ..., l, yi ∈ {−1, 1}, xi ∈ R^d, and assume there is a hyperplane H: w·x + b = 0 that can separate these samples exactly, together with two hyperplanes H1 and H2 parallel to H:
w·x + b = 1
w·x + b = −1
Let the positive and negative samples nearest to H fall exactly on H1 and H2; such samples are the support vectors. All other training samples then lie outside H1 and H2, that is, they satisfy the constraints:
w·xi + b ≥ 1, yi = 1
w·xi + b ≤ −1, yi = −1
Written as one unified formula, this is:
yi(w·xi + b) − 1 ≥ 0
The key steps of the SVM algorithm are as follows:
A. Given the training set:
T = {(x1, y1), (x2, y2), ..., (xn, yn)}
B. Solve the quadratic programming problem
min (1/2) Σi Σj yi yj αi αj (xi·xj) − Σj αj, subject to Σi yi αi = 0 and αi ≥ 0,
obtaining the solution α* = (α1*, ..., αn*).
C. Calculate the parameter w* = Σi αi* yi xi, choose a positive component αj* > 0 and calculate b* = yj − Σi yi αi* (xi·xj).
D. Construct the decision boundary g(x) = (w*·x) + b* = 0, and thereby obtain the decision function:
f(x) = sgn(g(x))
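The kernel form of the decision function can be sketched as follows; the support vectors, multipliers α and offset b are assumed to come from solving the dual problem in step B, and the toy values in the usage are hypothetical:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def svm_decide(x, support_vecs, alphas, labels, b=0.0, sigma=1.0):
    """Kernel form of f(x) = sgn(g(x)) with
    g(x) = sum_i alpha_i * y_i * K(x_i, x) + b."""
    g = sum(a * y * rbf_kernel(sv, x, sigma)
            for sv, a, y in zip(support_vecs, alphas, labels)) + b
    return 1 if g >= 0 else -1
```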
In this embodiment, the data of multiple scenes are first chosen according to the motor behaviours to be identified, together with some scenes that do not contain these behaviours. After the sample videos have undergone the basic motion feature and SURF feature extraction described above, clustering is performed and video vectors are generated; these operations are applied to all video samples in turn, each yielding one group of video vectors, and the Gaussian kernel SVM is trained on all the samples.
Feature extraction is then performed on the input video data and the spatio-temporal features are generated; through the classifier it is judged which motor behaviour they belong to, and the corresponding control instruction is executed according to the identified motor behaviour, so that projection interaction is realised.
The projection interaction method of this embodiment can accurately extract the target object region and perform feature extraction on it. Using SURF feature points together with the optical flow method, the motor behaviour of the target object can be identified more simply, quickly and efficiently; the recognition capability for the motor behaviour of the target object in the projected picture is good, which significantly improves the processing efficiency of projection interaction.
Embodiment four,
As shown in Fig. 9, which is a schematic structural diagram of an embodiment of the system for extracting a target object in a projected picture of the present invention, the system comprises:
a generation module 91, configured to obtain the projected image of the projection device and generate, according to a preset position transformation relation and pixel value transformation relation, the projection prediction image of the projected image on the projection surface;
an extraction module 92, configured to collect, through the camera device, the display image currently shown on the projection surface;
a target object extraction module 93, configured to compare the differences between the pixel values of same-position pixels in the display image and in the projection prediction image, and to extract the pixels whose difference in the display image exceeds a preset threshold, obtaining the target object.
In a preferred embodiment, the system may further comprise:
a first control module, configured to control the projection device to project a preset position detection image onto the projection surface, wherein the position detection image is a preset grid image divided into multiple alternating black and white cells;
a first acquisition module, configured to collect, through the camera device, the first image formed by the position detection image on the projection surface;
a position transformation relation calculation module, configured to calculate the position transformation relation according to the ratio of the coordinates of the corner points of the position detection image to the coordinates of the corner points of the first image.
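A deliberately simplified sketch of deriving the position transformation from one matched corner pair as per-axis coordinate ratios; a real calibration would use many corners and fit a homography, so treat this only as an illustration of the ratio idea:

```python
def position_transform(proj_corner, cam_corner):
    """Build a mapping from projected-image coordinates to captured
    (first-image) coordinates from the ratio of one corner pair.
    Hypothetical simplification: pure per-axis scaling."""
    sx = cam_corner[0] / proj_corner[0]
    sy = cam_corner[1] / proj_corner[1]
    def to_captured(x, y):
        return (x * sx, y * sy)
    return to_captured
```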
In a preferred embodiment, the system may further comprise:
a second control module, configured to control the projection device to project a preset pixel value detection image onto the projection surface, wherein the pixel value detection image comprises a three-primary-colour image, a black image and a white image;
a second acquisition module, configured to collect, through the camera device, the second image formed by the pixel value detection image on the projection surface;
a pixel value transformation relation calculation module, configured to calculate the pixel value transformation relation according to the pixel value of each pixel of the pixel value detection image and the pixel value of each pixel of the second image;
wherein the pixel value transformation relation can be:
C = A(VP + F);
where C is the pixel value of pixel M in the second image, A is the reflectance of the projection surface, V is the colour mixing matrix, P is the pixel value of pixel M′ of the pixel value detection image, F is the contribution of the ambient light, and the position of pixel M is identical to the position of pixel M′.
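The model C = A(VP + F) can be evaluated directly. In the sketch below, A is treated as a per-channel reflectance vector and V as a 3 × 3 colour-mixing matrix; the representation of A is our assumption, since the patent only calls it the reflectance of the projection surface:

```python
import numpy as np

def predict_pixel(P, V, A, F):
    """C = A(VP + F): predict the camera-observed value C of pixel M
    from the projector value P of the same-position pixel M'.
    A: per-channel surface reflectance (our representation choice),
    V: 3x3 colour-mixing matrix, F: ambient-light contribution."""
    P, F, A = (np.asarray(v, dtype=float) for v in (P, F, A))
    return A * (np.asarray(V, dtype=float) @ P + F)
```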
In a preferred embodiment, the generation module 91 can be further configured to: transform the position of each pixel in the projected image according to the position transformation relation to obtain the transformed position of each pixel; transform the pixel value of each pixel of the projected image according to the preset pixel value transformation relation to obtain the transformed pixel value of each pixel; and set the corresponding pixels according to the transformed positions and transformed pixel values to obtain the projection prediction image.
In this embodiment, with the position transformation relation and the pixel value transformation relation, the projection prediction image can be obtained quickly by the corresponding transformations.
The system for extracting a target object in a projected picture of this embodiment generates the projection prediction image of the projected image relative to the projection surface according to the preset position transformation relation and pixel value transformation relation; collects, through the camera device, the display image currently shown on the projection surface; and, by comparing the projection prediction image with the display image, obtains the pixels whose same-position pixel value difference exceeds the preset threshold and extracts those pixels as the target object. The present invention predicts the projected image and compares the resulting projection prediction image with the actual display image; if pixels with different pixel values exist, it can be judged that a target object has appeared on the projection surface, so that the target object region in the display image can be extracted accurately, and the precision of the detection result is high.
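The comparison-and-threshold step described above reduces to a per-pixel absolute difference mask, e.g.:

```python
import numpy as np

def extract_target_mask(display_img, predicted_img, threshold=10):
    """Mark the pixels whose value differs from the projection
    prediction by more than the preset threshold; these pixels are
    taken to belong to the target object."""
    diff = np.abs(display_img.astype(float) - predicted_img.astype(float))
    return diff > threshold
```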
Embodiment five,
As shown in Fig. 10, which is a schematic structural diagram of the projection interaction system of the present invention in embodiment five, the system comprises the system for extracting a target object in a projected picture described in embodiment four, i.e. the following modules:
a generation module 91, configured to obtain the projected image of the projection device and generate, according to a preset position transformation relation and pixel value transformation relation, the projection prediction image of the projected image on the projection surface;
an extraction module 92, configured to collect, through the camera device, the display image currently shown on the projection surface;
a target object extraction module 93, configured to compare the differences between the pixel values of same-position pixels in the display image and in the projection prediction image, and to extract the pixels whose difference in the display image exceeds a preset threshold, obtaining the target object;
a spatio-temporal feature detection module 101, configured to detect the spatio-temporal feature of the target object from each frame of the display image collected by the camera device.
In a preferred embodiment, the spatio-temporal feature detection module 101 can be further configured to: extract the SURF feature point set and the optical-flow feature point set from the target object of N consecutive frames of the display image to obtain the points of interest of the target object, wherein a point of interest is the intersection of the SURF feature point set and the optical-flow feature point set, and N is the preset frame-number detection unit; build multiple Delaunay triangles of the points of interest using the Delaunay triangulation rule; and weight the SURF feature vectors and optical-flow features of each Delaunay triangle according to a preset weighting coefficient to obtain the spatio-temporal feature.
an identification module 102, configured to input the spatio-temporal feature into a preset motor behaviour classifier, identify the motor behaviour of the target object, and execute the preset control instruction corresponding to that motor behaviour.
The projection interaction system of this embodiment can accurately extract the target object region and perform feature extraction on it. Using SURF feature points together with the optical flow method, the motor behaviour of the target object can be identified more simply, quickly and efficiently; the recognition capability for the motor behaviour of the target object in the projected picture is good, which significantly improves the processing efficiency of projection interaction.
The method and system for extracting a target object in a projected picture of the present invention generate the projection prediction image of the projected image relative to the projection surface according to the preset position transformation relation and pixel value transformation relation; collect, through the camera device, the display image currently shown on the projection surface; and, by comparing the projection prediction image with the display image, obtain the pixels whose same-position pixel value difference exceeds the preset threshold and extract those pixels as the target object. The present invention predicts the projected image and compares the resulting projection prediction image with the actual display image; if pixels with different pixel values exist, it can be judged that a target object has appeared on the projection surface, so that the target object region in the display image can be extracted accurately, and the precision of the detection result is high.
The projection interaction method and system of the present invention can accurately extract the target object region, so that the recognition capability for the motor behaviour of the target object in the projected picture is good, which significantly improves the processing efficiency of projection interaction.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the present invention. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these all belong to the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (8)
1. A method for extracting a target object in a projected picture, characterised by comprising the following steps:
obtaining the projected image of the projection device, and generating, according to a preset position transformation relation and pixel value transformation relation, the projection prediction image of the projected image on the projection surface;
collecting, through a camera device, the display image currently shown on the projection surface;
comparing the differences between the pixel values of same-position pixels in the display image and in the projection prediction image, and extracting the pixels whose difference in the display image exceeds a preset threshold, obtaining the target object;
and further comprising the following steps:
controlling the projection device to project a preset pixel value detection image onto the projection surface, wherein the pixel value detection image comprises a three-primary-colour image, a black image and a white image;
collecting, through the camera device, the second image formed by the pixel value detection image on the projection surface;
calculating the pixel value transformation relation according to the ratio of the pixel value of each pixel of the pixel value detection image to the pixel value of each pixel of the second image;
the mathematical model of the pixel value transformation relation being:
C = A(VP + F);
where C is the pixel value of pixel M in the second image, A is the reflectance of the projection surface, V is the colour mixing matrix, P is the pixel value of pixel M′ of the pixel value detection image, F is the contribution of the ambient light, and the position of pixel M is identical to the position of pixel M′.
2. The method for extracting a target object in a projected picture according to claim 1, characterised by further comprising the following steps:
controlling the projection device to project a preset position detection image onto the projection surface, wherein the position detection image is a preset grid image divided into multiple alternating black and white cells;
collecting, through the camera device, the first image formed by the position detection image on the projection surface;
calculating the position transformation relation according to the ratio of the coordinates of the corner points of the position detection image to the coordinates of the corner points of the first image.
3. The method for extracting a target object in a projected picture according to claim 2, characterised in that the step of generating the projection prediction image of the projected image according to the preset position transformation relation and pixel value transformation relation comprises:
transforming the position of each pixel in the projected image according to the position transformation relation to obtain the transformed position of each pixel;
transforming the pixel value of each pixel of the projected image according to the preset pixel value transformation relation to obtain the transformed pixel value of each pixel;
setting the corresponding pixels according to the transformed positions and transformed pixel values to obtain the projection prediction image.
4. A projection interaction method, characterised by comprising the method for extracting a target object in a projected picture according to any one of claims 1 to 3, and further comprising the following steps:
detecting the spatio-temporal feature of the target object from each frame of the display image collected by the camera device;
inputting the spatio-temporal feature into a preset motor behaviour classifier, identifying the motor behaviour of the target object, and executing the preset control instruction corresponding to the motor behaviour.
5. The projection interaction method according to claim 4, characterised in that the step of detecting the spatio-temporal feature of the target object from each frame of the display image collected by the camera device comprises:
extracting the SURF feature point set and the optical-flow feature point set from the target object of N consecutive frames of the display image to obtain the points of interest of the target object, wherein a point of interest is the intersection of the SURF feature point set and the optical-flow feature point set, and N is the preset frame-number detection unit;
building multiple Delaunay triangles of the points of interest using the Delaunay triangulation rule;
weighting the SURF feature vectors and optical-flow features of each Delaunay triangle according to a preset weighting coefficient to obtain the spatio-temporal feature.
6. A system for extracting a target object in a projected picture, characterised by comprising:
a generation module, configured to obtain the projected image of the projection device and generate, according to a preset position transformation relation and pixel value transformation relation, the projection prediction image of the projected image on the projection surface;
an extraction module, configured to collect, through a camera device, the display image currently shown on the projection surface;
a target object extraction module, configured to compare the differences between the pixel values of same-position pixels in the display image and in the projection prediction image, and to extract the pixels whose difference in the display image exceeds a preset threshold, obtaining the target object;
and further comprising:
a second control module, configured to control the projection device to project a preset pixel value detection image onto the projection surface, wherein the pixel value detection image comprises a three-primary-colour image, a black image and a white image;
a second acquisition module, configured to collect, through the camera device, the second image formed by the pixel value detection image on the projection surface;
a pixel value transformation relation calculation module, configured to calculate the pixel value transformation relation according to the ratio of the pixel value of each pixel of the pixel value detection image to the pixel value of each pixel of the second image;
the mathematical model of the pixel value transformation relation being:
C = A(VP + F);
where C is the pixel value of pixel M in the second image, A is the reflectance of the projection surface, V is the colour mixing matrix, P is the pixel value of pixel M′ of the pixel value detection image, F is the contribution of the ambient light, and the position of pixel M is identical to the position of pixel M′.
7. The system for extracting a target object in a projected picture according to claim 6, characterised by further comprising:
a first control module, configured to control the projection device to project a preset position detection image onto the projection surface, wherein the position detection image is a preset grid image divided into multiple alternating black and white cells;
a first acquisition module, configured to collect, through the camera device, the first image formed by the position detection image on the projection surface;
a position transformation relation calculation module, configured to calculate the position transformation relation according to the ratio of the coordinates of the corner points of the position detection image to the coordinates of the corner points of the first image.
8. A projection interaction system, characterised by comprising the system for extracting a target object in a projected picture according to any one of claims 6 to 7, and further comprising:
a spatio-temporal feature detection module, configured to detect the spatio-temporal feature of the target object from each frame of the display image collected by the camera device;
an identification module, configured to input the spatio-temporal feature into a preset motor behaviour classifier, identify the motor behaviour of the target object, and execute the preset control instruction corresponding to the motor behaviour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410429157.6A CN104202547B (en) | 2014-08-27 | 2014-08-27 | Method, projection interactive approach and its system of target object are extracted in projected picture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104202547A CN104202547A (en) | 2014-12-10 |
CN104202547B true CN104202547B (en) | 2017-10-10 |
Family
ID=52087768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410429157.6A Expired - Fee Related CN104202547B (en) | 2014-08-27 | 2014-08-27 | Method, projection interactive approach and its system of target object are extracted in projected picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104202547B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106559627B (en) * | 2015-09-25 | 2021-05-11 | 中兴通讯股份有限公司 | Projection method, device and equipment |
CN106559629B (en) * | 2015-09-29 | 2020-08-04 | 中兴通讯股份有限公司 | Projection method, device and equipment |
CN106611004B (en) * | 2015-10-26 | 2019-04-12 | 北京捷泰天域信息技术有限公司 | Points of interest attribute display methods based on vector regular quadrangle grid |
CN106998463A (en) * | 2016-01-26 | 2017-08-01 | 宁波舜宇光电信息有限公司 | The method of testing of camera module based on latticed mark version |
CN107133911B (en) * | 2016-02-26 | 2020-04-24 | 比亚迪股份有限公司 | Method and device for displaying reversing image |
CN106774827B (en) * | 2016-11-21 | 2019-12-27 | 歌尔科技有限公司 | Projection interaction method, projection interaction device and intelligent terminal |
CN106713884A (en) * | 2017-02-10 | 2017-05-24 | 南昌前哨科技有限公司 | Immersive interactive projection system |
CN110244840A (en) * | 2019-05-24 | 2019-09-17 | 华为技术有限公司 | Image processing method, relevant device and computer storage medium |
CN110827289B (en) * | 2019-10-08 | 2022-06-14 | 歌尔光学科技有限公司 | Method and device for extracting target image in projector definition test |
CN111447171B (en) * | 2019-10-26 | 2021-09-03 | 四川蜀天信息技术有限公司 | Automated content data analysis platform and method |
CN112040208B (en) * | 2020-09-09 | 2022-04-26 | 南昌虚拟现实研究院股份有限公司 | Image processing method, image processing device, readable storage medium and computer equipment |
CN111932686B (en) * | 2020-09-09 | 2021-01-01 | 南昌虚拟现实研究院股份有限公司 | Mapping relation determining method and device, readable storage medium and computer equipment |
CN114520894B (en) * | 2020-11-18 | 2022-11-15 | 成都极米科技股份有限公司 | Projection area determining method and device, projection equipment and readable storage medium |
CN113421313B (en) * | 2021-05-14 | 2023-07-25 | 北京达佳互联信息技术有限公司 | Image construction method and device, electronic equipment and storage medium |
CN113949853B (en) * | 2021-10-13 | 2022-10-25 | 济南景雄影音科技有限公司 | Projection system with environment adaptive adjustment capability |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1666248A (en) * | 2002-06-26 | 2005-09-07 | VKB Inc. | Multifunctional integrated image sensor and application to virtual interface technology |
CN101140661A (en) * | 2007-09-04 | 2008-03-12 | Hangzhou Leixing Technology Co., Ltd. | Real-time object identification method with dynamic projection as background |
CN102063618A (en) * | 2011-01-13 | 2011-05-18 | Zhongkexin Integrated Circuit Co., Ltd. | Dynamic gesture identification method in interactive system |
CN102184008A (en) * | 2011-05-03 | 2011-09-14 | Beijing Tiansheng Century Technology Development Co., Ltd. | Interactive projection system and method |
CN103677274A (en) * | 2013-12-24 | 2014-03-26 | Guangdong Vtron Technology Co., Ltd. | Interactive projection method and system based on active vision |
CN103988150A (en) * | 2011-03-25 | 2014-08-13 | Oblong Industries, Inc. | Fast fingertip detection for initializing vision-based hand tracker |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9128366B2 (en) * | 2012-05-22 | 2015-09-08 | Ricoh Company, Ltd. | Image processing system, image processing method, and computer program product |
2014
- 2014-08-27 CN CN201410429157.6A patent/CN104202547B/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
"Image Matching Based on SURF Features and Delaunay Triangulation"; Yan Zigeng et al.; Acta Automatica Sinica; 2014-06-30; Sections 1-2 * |
Also Published As
Publication number | Publication date |
---|---|
CN104202547A (en) | 2014-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104202547B (en) | Method for extracting a target object from a projected picture, projection interaction method, and corresponding systems | |
CN110543837B (en) | Visible light airport airplane detection method based on potential target points | |
Dudhane et al. | C^2MSNet: A novel approach for single image haze removal | |
CN106951870B (en) | Intelligent detection and early warning method for active visual attention of significant events of surveillance video | |
CN109255324A (en) | Gesture processing method, interaction control method and equipment | |
CN110929593B (en) | Real-time saliency pedestrian detection method based on detail discrimination | |
CN108171196A (en) | Face detection method and device | |
CN107273832B (en) | License plate recognition method and system based on integral channel features and convolutional neural network | |
CN108549891A (en) | Multi-scale diffusion salient target detection method based on background and target priors | |
CN108334847 (en) | Face recognition method based on deep learning in real scenes | |
CN103020992B (en) | Video image saliency detection method based on motion-color association | |
CN107833221 (en) | Water leakage monitoring method based on multi-channel feature fusion and machine learning | |
CN106874826 (en) | Facial key point tracking method and device | |
CN108647625 (en) | Facial expression recognition method and device | |
US8515127B2 (en) | Multispectral detection of personal attributes for video surveillance | |
CN112949572 (en) | Slim-YOLOv3-based mask wearing condition detection method | |
CN109685045 (en) | Moving target tracking method and system based on video streams | |
CN109902576B (en) | Training method and application of head-and-shoulder image classifier | |
CN104598889B (en) | Human behavior recognition method and apparatus | |
CN103530638 (en) | Method for matching pedestrians across multiple cameras | |
CN114926747 (en) | Remote sensing image oriented target detection method based on multi-feature aggregation and interaction | |
CN107590427 (en) | Surveillance video anomaly detection method based on spatio-temporal interest point noise reduction | |
CN106971158 (en) | Pedestrian detection method based on CoLBP co-occurrence features and GSS features | |
CN110263868 (en) | Image classification network based on SuperPoint features | |
CN108734200 (en) | Human target visual detection method and device based on BING features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: No. 233 Kezhu Road, High-tech Industrial Development Zone, Guangzhou, Guangdong Province, 510670. Patentee after: VTRON GROUP Co.,Ltd. Address before: No. 233 Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangdong Province, 510670. Patentee before: VTRON TECHNOLOGIES Ltd. |
|
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 2017-10-10. Termination date: 2021-08-27 |