CN105243395B - Human body image comparison method and device - Google Patents
Human body image comparison method and device
- Publication number: CN105243395B
- Application number: CN201510742746.4A
- Authority: CN (China)
- Prior art keywords: human body image; image region; feature difference
- Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The present invention provides a human body image comparison method and device. The method includes: obtaining a first human body image and a second human body image; dividing the first human body image into multiple image regions; selecting different numbers of image regions from the multiple image regions and splicing them to obtain multiple human body subimages; performing deep learning on the obtained multiple human body subimages and the second human body image to obtain feature difference maps of the first human body image and the second human body image; and performing deep learning on the obtained feature difference maps to obtain a comparison result of the first human body image and the second human body image, the comparison result being either similar or dissimilar. The method and device provided by the present application can improve the accuracy of human body image comparison.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a human body image comparison method and device.
Background technology
Currently, in order to find the image of a particular person (such as a suspect) in surveillance video and determine that person's whereabouts, security personnel must rely on the person's physical features to locate surveillance images containing the person among the massive volume of surveillance video shot by a video surveillance system, and then determine the person's whereabouts from the installation location of the camera that captured the matching image.
In the related art, in order to find surveillance images containing a particular person among massive surveillance video, the human body image of the particular person and the human body images in the surveillance video are each divided into upper, middle and lower parts; deep learning is then performed on the divided images, and surveillance images containing the particular person are identified from the massive surveillance video according to the deep learning results.
However, in some surveillance images the visible human body consists only of the upper half or only the head, with the remaining body parts occluded. After such an occluded human body image is divided into upper, middle and lower parts, the middle and/or lower part images may contain no body parts of the particular person at all. During deep learning, these images without body parts strongly affect the comparison result and reduce the accuracy of human body image comparison.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a human body image comparison method and device to improve the accuracy of human body image comparison.
In a first aspect, an embodiment of the present invention provides a human body image comparison method, including:
obtaining a first human body image and a second human body image;
dividing the first human body image into multiple image regions;
selecting different numbers of image regions from the multiple image regions and splicing them, to obtain multiple human body subimages;
performing deep learning on the obtained multiple human body subimages and the second human body image, to obtain feature difference maps of the first human body image and the second human body image;
performing deep learning on the obtained feature difference maps, to obtain a comparison result of the first human body image and the second human body image, the comparison result being either similar or dissimilar.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, wherein selecting different numbers of image regions from the multiple image regions and splicing them to obtain multiple human body subimages includes:
selecting k image regions from the multiple image regions and splicing them, for each applicable k, to obtain multiple human body subimages, the multiple image regions being obtained by dividing the first human body image evenly from top to bottom;
wherein k ∈ {⌈m/2⌉, ⌈m/2⌉+1, …, m}, and m indicates the number of image regions into which the first human body image is divided.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, wherein performing deep learning on the obtained multiple human body subimages and the second human body image to obtain the feature difference maps of the first human body image and the second human body image includes:
performing deep learning on the multiple human body subimages and the second human body image, to obtain multiple first human body image feature maps and a second human body image feature map;
obtaining, according to the obtained multiple first human body image feature maps and the second human body image feature map, the feature difference map of each of the multiple first human body image feature maps with respect to the second human body image feature map.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, wherein processing the obtained first human body image feature maps and the second human body image feature map to obtain the feature difference map of each of the multiple first human body image feature maps with respect to the second human body image feature map includes:
taking each pixel coordinate stored in a preset pixel coordinate set as a feature value region center and, according to a preset feature value region size, dividing the current first human body image feature map and the second human body image feature map into multiple first feature value regions and multiple second feature value regions, respectively;
obtaining the maximum feature value from each of the multiple first feature value regions and the multiple second feature value regions;
calculating the difference between the maximum feature values obtained from the first feature value region and the second feature value region sharing the same feature value region center, obtaining multiple feature differences;
taking the multiple feature differences as pixel values and, according to a preset feature difference map size, generating the feature difference map of the current first human body image feature map and the second human body image feature map.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, wherein performing deep learning on the obtained feature difference maps to obtain the comparison result of the first human body image and the second human body image includes:
performing deep learning on each obtained feature difference map, to obtain a similarity parameter between the human body subimage corresponding to that feature difference map and the second human body image;
determining the largest similarity parameter as the similarity of the first human body image and the second human body image;
when the similarity is greater than or equal to a set similarity threshold, obtaining a comparison result that the first human body image and the second human body image are similar;
when the similarity is less than the set similarity threshold, obtaining a comparison result that the first human body image and the second human body image are dissimilar.
In a second aspect, an embodiment of the present invention provides a human body image comparison device, including:
an acquisition module, configured to obtain a first human body image and a second human body image;
an image division module, configured to divide the first human body image into multiple image regions;
an image splicing module, configured to select different numbers of image regions from the multiple image regions and splice them, to obtain multiple human body subimages;
a feature difference map acquisition module, configured to perform deep learning on the multiple human body subimages and the second human body image, to obtain feature difference maps of the first human body image and the second human body image;
a comparison module, configured to perform deep learning on the obtained feature difference maps, to obtain a comparison result of the first human body image and the second human body image, the comparison result being either similar or dissimilar.
In conjunction with the second aspect, an embodiment of the present invention provides a first possible implementation of the second aspect, wherein the image splicing module is specifically configured to select k image regions from the multiple image regions and splice them, for each applicable k, to obtain multiple human body subimages, the multiple image regions being obtained by dividing the first human body image evenly from top to bottom;
wherein k ∈ {⌈m/2⌉, ⌈m/2⌉+1, …, m}, and m indicates the number of image regions into which the first human body image is divided.
In conjunction with the second aspect, an embodiment of the present invention provides a second possible implementation of the second aspect, wherein the feature difference map acquisition module includes:
a deep learning unit, configured to perform deep learning on the multiple human body subimages and the second human body image, to obtain multiple first human body image feature maps and a second human body image feature map;
a feature difference map acquisition unit, configured to obtain, according to the obtained multiple first human body image feature maps and the second human body image feature map, the feature difference map of each of the multiple first human body image feature maps with respect to the second human body image feature map.
In conjunction with the second aspect, an embodiment of the present invention provides a third possible implementation of the second aspect, wherein the feature difference map acquisition unit includes:
a region division subunit, configured to take each pixel coordinate stored in a preset pixel coordinate set as a feature value region center and, according to a preset feature value region size, divide the current first human body image feature map and the second human body image feature map into multiple first feature value regions and multiple second feature value regions, respectively;
a maximum feature value acquisition subunit, configured to obtain the maximum feature value from each of the multiple first feature value regions and the multiple second feature value regions;
a feature difference calculation subunit, configured to calculate the difference between the maximum feature values obtained from the first feature value region and the second feature value region sharing the same feature value region center, obtaining multiple feature differences;
a feature difference map generation subunit, configured to take the multiple feature differences as pixel values and, according to a preset feature difference map size, generate the feature difference map of the current first human body image feature map and the second human body image feature map.
In conjunction with the second aspect, an embodiment of the present invention provides a fourth possible implementation of the second aspect, wherein the comparison module includes:
a similarity parameter calculation unit, configured to perform deep learning on each obtained feature difference map, to obtain a similarity parameter between the human body subimage corresponding to that feature difference map and the second human body image;
a similarity determination unit, configured to determine the largest similarity parameter as the similarity of the first human body image and the second human body image;
a first comparison result determination unit, configured to obtain, when the similarity is greater than or equal to a set similarity threshold, a comparison result that the first human body image and the second human body image are similar;
a second comparison result determination unit, configured to obtain, when the similarity is less than the set similarity threshold, a comparison result that the first human body image and the second human body image are dissimilar.
With the human body image comparison method and device provided by embodiments of the present invention, the first human body image is divided into multiple image regions; different numbers of image regions are selected from these regions and spliced to obtain multiple human body subimages; and the multiple human body subimages are then each compared with the second human body image. Compared with the prior art, in which the human body image is divided into upper, middle and lower parts and the occluded parts can reduce the accuracy of the comparison result, this reduces the influence of the occluded parts of the human body image on the comparison result and improves the accuracy of human body image comparison.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below in conjunction with the appended drawings.
Description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope. Those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 shows a structural schematic diagram of an implementation system involved in a human body image comparison method provided by an embodiment of the present invention;
Fig. 2 shows a flow chart of a human body image comparison method provided by Embodiment 1 of the present invention;
Fig. 3 shows a schematic diagram of an image feature map in a human body image comparison method provided by Embodiment 1 of the present invention;
Fig. 4 shows a schematic diagram of another human body image comparison method provided by Embodiment 2 of the present invention;
Fig. 5 shows a structural schematic diagram of a human body image comparison device provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings here, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In related human body image comparison technology, when searching massive surveillance video for images containing a particular person, the visible human body in some surveillance images consists only of the upper half or only the head, with the remaining body parts occluded. After such an occluded human body image is divided into upper, middle and lower parts, the middle and/or lower part images may contain no body parts of the particular person at all; during deep learning, these images without body parts strongly affect the comparison result and reduce the accuracy of human body image comparison. In view of this, embodiments of the present invention provide a human body image comparison method and device.
Referring to Fig. 1, which shows a structural schematic diagram of an implementation system involved in a human body image comparison method provided by an embodiment of the present invention, the system includes a human body image comparison apparatus 10, and the apparatus 10 includes a human body image comparison device 100 and an image library 101 that exchanges data with the comparison device 100.
The human body image comparison device 100 is configured to: obtain a first human body image and a second human body image; divide the first human body image into multiple image regions; select different numbers of image regions from the multiple image regions and splice them to obtain multiple human body subimages; perform deep learning on the obtained multiple human body subimages and the second human body image to obtain feature difference maps of the first human body image and the second human body image; perform deep learning on the obtained feature difference maps to obtain the comparison result of the first human body image and the second human body image; and, when the first human body image cannot be identified by comparison against the human body images in the image library 101, send the first human body image to the image library 101. The image library 101 is configured to receive and store the first human body image sent by the human body image comparison device 100.
The human body image comparison apparatus 10 can use any existing server or computing device to perform human body image comparison; details are not repeated here.
The human body image comparison device 100 can use any existing central processing unit, microprocessor or programmable device to perform human body image comparison; details are not repeated here.
The image library 101 can use any existing large-capacity storage medium to store human body images; details are not repeated here.
Embodiment 1
Referring to Fig. 2, this embodiment provides a human body image comparison method, which includes the following steps:
Step 200: obtain a first human body image and a second human body image.
The first human body image is an image containing a person, selected by security personnel, through the input equipment of the human body image comparison apparatus, from a video frame of the surveillance video. It can be the image of a particular person whose identity needs to be determined, such as a suspect or a missing person.
The input equipment of the human body image comparison apparatus can be a mouse, a keyboard, a trackpad or any other computer peripheral that allows security personnel to select a person's image from a video frame; details are not repeated here.
The second human body image is any person's image stored in the image library, such as a full-body photograph, an upper-body photograph or any other image that can characterize a person's physical features.
The first human body image and the second human body image each contain exactly one person's image, so the comparison process described in this embodiment compares one single human body image against another single human body image.
Step 202: divide the first human body image into multiple image regions.
The first human body image is divided into a preset number of image regions by any existing image division method; details are not repeated here. The preset number can be any natural number greater than 3, so in this embodiment the first human body image is generally divided into 4, 5, 6, 7 or more image regions.
Step 204: select different numbers of image regions from the multiple image regions and splice them to obtain multiple human body subimages.
Each human body subimage contains a different portion of the human body in the first human body image. For example, after image region splicing, one human body subimage may contain the head of the human body in the first human body image, another may contain the upper body, and another may contain all parts of the human body.
When part of the human body in the first human body image is occluded, for example when the lower body is occluded, the obtained human body subimage containing the upper body is the subimage with the most body parts and the least non-body content. By comparing this subimage with the second human body image, the influence of the occluded part of the first human body image on the human body image comparison can be reduced.
Step 206: perform deep learning on the obtained multiple human body subimages and the second human body image to obtain feature difference maps of the first human body image and the second human body image.
Deep learning is performed on the multiple human body subimages and the second human body image by two sub-convolutional neural networks with identical structures; the two sub-networks are composed of identical basic image processing operation units, convolution operation units and down-sampling operation units.
A feature difference map is an image that indicates the degree of similarity between the first human body image and the second human body image: the closer each pixel value in the feature difference map is to 0, the higher the similarity between the first human body image and the second human body image.
Step 208: perform deep learning on the obtained feature difference maps to obtain the comparison result of the first human body image and the second human body image, the comparison result being either similar or dissimilar.
Deep learning is performed on the feature difference maps by a sub-convolutional neural network whose structure differs from that of the sub-convolutional neural networks used for the human body subimages and the second human body image.
The sub-convolutional neural network that performs deep learning on the feature difference maps is composed of a basic image processing operation unit, convolution operation units, down-sampling operation units and a softmax classifier.
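The decision stage that follows the network (described in the fourth possible implementation in the summary above) takes the largest per-subimage similarity parameter as the overall similarity and thresholds it. A minimal sketch of that rule, in which the threshold value 0.5 and the 0-to-1 parameter scale are illustrative assumptions:

```python
def compare_result(similarity_params, threshold=0.5):
    # Each entry is the similarity parameter produced for one feature
    # difference map (one human body subimage vs. the second image).
    similarity = max(similarity_params)  # the largest parameter wins
    return "similar" if similarity >= threshold else "dissimilar"

print(compare_result([0.12, 0.83, 0.40]))  # similar (max 0.83 >= 0.5)
print(compare_result([0.10, 0.20, 0.30]))  # dissimilar (max 0.30 < 0.5)
```

Taking the maximum means one well-matched subimage, for example the unoccluded upper body, is enough to declare the images similar, which is the point of generating several subimages.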
In conclusion human body image comparison method provided in this embodiment, multiple by the way that the first human body image to be divided into
Image-region;And according to the multiple images region after division, selected from multiple images region the image-region of different number into
Row splicing, obtains multiple human body subgraphs, then by obtained multiple human body subgraphs, is carried out respectively with the second human body image
It compares, it is being compared as a result, with human body image to be divided into each section human body after upper, middle, and lower part in the prior art
The part that is blocked in image can reduce the human body image comparison process of human body image comparison result accuracy rate and compare, and reduce human body
Be blocked influence of the part to comparison result in image, improves the accuracy rate that human body image compares.
To obtain multiple human body subimages from the image regions into which the first human body image is divided, specifically, selecting different numbers of image regions from the multiple image regions and splicing them to obtain multiple human body subimages includes the following steps:
selecting k image regions from the multiple image regions and splicing them, for each applicable k, to obtain multiple human body subimages, the multiple image regions being obtained by dividing the first human body image evenly from top to bottom;
wherein k ∈ {⌈m/2⌉, ⌈m/2⌉+1, …, m}, and m indicates the number of image regions into which the first human body image is divided.
The process of splicing image regions to obtain multiple human body subimages is further described by the following example:
Let m = 6. The input first human body image is evenly divided from top to bottom into 6 image regions, denoted R1, R2, R3, R4, R5 and R6. According to k ∈ {⌈m/2⌉, ⌈m/2⌉+1, …, m}, 3 to 6 image regions are selected from the 6 image regions to constitute different human body subimages. Image regions R1, R2 and R3 constitute one human body subimage (the upper 1/2 of the first human body image); image regions R1, R2, R3 and R4 constitute another (the upper 2/3 of the first human body image); image regions R1, R2, R3, R4 and R5 constitute another (the upper 5/6 of the first human body image); and image regions R1, R2, R3, R4, R5 and R6 constitute the last one (the whole first human body image). Processing the first human body image in this way yields 4 human body subimages of different sizes. Together these subimages cover both the local information and the global information of the first human body image, and each subimage contains the upper half of the human body (the upper half of the human body has the greatest influence on pedestrian retrieval results), while the number of subimages is not so large that it slows down obtaining the comparison result.
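The division and splicing procedure in this example can be sketched as follows. This is an illustrative NumPy implementation under the stated rule k ∈ {⌈m/2⌉, …, m}; the function name and the toy integer "image" are assumptions, not from the patent.

```python
import numpy as np

def make_subimages(image, m=6):
    """Split a body image into m equal horizontal regions (top to bottom),
    then splice the top k regions for every k from ceil(m/2) to m."""
    h = image.shape[0] - image.shape[0] % m   # crop so the height divides evenly
    regions = np.split(image[:h], m, axis=0)  # R1..Rm, top to bottom
    k_min = -(-m // 2)                        # ceil(m/2)
    return [np.concatenate(regions[:k], axis=0) for k in range(k_min, m + 1)]

img = np.arange(6 * 4).reshape(6, 4)          # toy 6x4 "image"
subs = make_subimages(img, m=6)
print(len(subs))                              # 4 subimages for m = 6
print([s.shape[0] for s in subs])             # heights 3, 4, 5, 6
```

With m = 6 this reproduces the example above: four subimages covering the upper 1/2, 2/3, 5/6 and the whole of the first human body image, each anchored at the top so the upper body is always included.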
In conclusion by from multiple images region, different number is taken respectively Image-region spliced, to obtain multiple human bodies
Image, can in the first human body image human body be blocked in image only have a part of human body image in the case of, can pass through
Obtained multiple human body subgraphs are compared, remove what the part of unrelated human body in image compared human body image as far as possible
It influences, improves the success rate compared to human body image.
In the related art, during human body image comparison, the human body image obtained from the image library must also be divided before deep learning is performed, which increases the processing time of the comparison. In order to shorten this processing time, performing deep learning on the obtained multiple human body subimages and the second human body image to obtain the feature difference maps of the first human body image and the second human body image includes the following steps 1 and 2:
(1) performing deep learning on the multiple human body subimages and the second human body image, to obtain multiple first human body image feature maps and a second human body image feature map;
(2) obtaining, according to the obtained multiple first human body image feature maps and the second human body image feature map, the feature difference map of each of the multiple first human body image feature maps with respect to the second human body image feature map.
Specifically, step 1 includes the following steps 11 and 12:
(11) performing deep learning on the obtained human body subimages through a first sub-convolutional neural network, to obtain multiple first human body image feature maps;
(12) performing deep learning on any second human body image selected from the image library through a second sub-convolutional neural network, to obtain the second human body image feature map, the second sub-convolutional neural network having the same structure as the first.
The first and second sub-convolutional neural networks include identical basic image processing operation units, convolution operation units and down-sampling operation units.
Of course, the first and second sub-convolutional neural networks can also use any other operation unit capable of image deep learning in place of at least one of the basic image processing operation unit, the convolution operation unit and the down-sampling operation unit; details are not repeated here.
In the related art, differences in shooting angle and in the person's posture at the time of shooting can make the same person look very different in different images; in such cases, the human bodies in the images cannot be effectively recognized and compared. In order to effectively recognize the same human body with different postures in different images, step 2 includes the following steps 21 to 24:
(21) taking each pixel coordinate stored in a preset pixel coordinate set as a feature value region center and, according to a preset feature value region size, dividing the current first human body image feature map and the second human body image feature map into multiple first feature value regions and multiple second feature value regions, respectively;
(22) obtaining the maximum feature value from each of the multiple first feature value regions and the multiple second feature value regions;
(23) calculating the difference between the maximum feature values obtained from the first feature value region and the second feature value region sharing the same feature value region center, obtaining multiple feature differences;
(24) taking the multiple feature differences as pixel values and, according to a preset feature difference map size, generating the feature difference map of the current first human body image feature map and the second human body image feature map.
The content of steps 21 and 22 is further illustrated by the following example:
Referring to Fig. 3, one representation of a human body image feature map is shown. The pixel coordinates shown in the figure are the centres of multiple feature value regions, and each feature value region is set to a size of 3 × 3. Accordingly, for the image shown in Fig. 3, the maximum feature value in the human body image feature map is found within each 3 × 3 feature region centred at the coordinates (2,2), (4,2), (6,2), (8,2), (2,4), (4,4), (6,4), (8,4), (2,6), (4,6), (6,6), (8,6), (2,8), (4,8), (6,8) and (8,8).
The preset feature value region size of the first feature value regions and the second feature value regions is n × n pixels, with n ∈ {3, 5, 7, 9, 11}.
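As an illustration of steps 21 and 22, the following sketch (plain Python; the function name and the toy feature map are hypothetical, while the 3 × 3 region size and the centre coordinates follow the Fig. 3 example) extracts the regional maximum from each feature value region:

```python
# Illustrative sketch only, not the patent's implementation: for each
# preset centre coordinate, take the maximum value inside the 3x3
# feature value region of a feature map, as in steps (21)-(22).

def regional_maxima(feature_map, centers, size=3):
    """For each (x, y) centre, return the maximum value inside the
    size x size region of feature_map centred there (clipped at edges)."""
    half = size // 2
    h, w = len(feature_map), len(feature_map[0])
    maxima = []
    for (cx, cy) in centers:
        vals = [feature_map[y][x]
                for y in range(max(0, cy - half), min(h, cy + half + 1))
                for x in range(max(0, cx - half), min(w, cx + half + 1))]
        maxima.append(max(vals))
    return maxima

# Toy 9x9 feature map whose value at (x, y) is x + y (hypothetical data).
fmap = [[x + y for x in range(9)] for y in range(9)]
# The 16 centre coordinates from the Fig. 3 example.
centers = [(x, y) for y in (2, 4, 6, 8) for x in (2, 4, 6, 8)]
print(regional_maxima(fmap, centers)[:4])  # -> [6, 8, 10, 11]
```

The same routine is applied to both the first and the second human body image feature map; step 23 then differences the two resulting lists of maxima element by element.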
As can be seen from steps 21 to 24, the feature difference map of the first human body image feature map and the second human body image feature map can be obtained. When the pixel values of the pixels in the feature difference map tend to 0, the first human body image and the second human body image are highly similar; in this way the human body of the same person can be effectively recognized even when differences in shooting angle and in posture at the time of shooting make that person look very different in different photos.
In summary, during human body image comparison, deep learning is performed directly on the second human body image without first dividing it, which reduces the amount of computation during image comparison and increases the speed of human body image comparison.
In the related art, after the deep learning results for each part of the human body image to be detected and of the reference image have been obtained, a rather complicated feature fusion operation is required before the comparison result can be obtained. To obtain the comparison result of the human body images faster, performing deep learning on the obtained feature difference maps to obtain the comparison result of the first human body image and the second human body image includes the following steps 1 to 4:
(1) performing deep learning on each obtained feature difference map to obtain a similarity parameter between the human body sub-image corresponding to that feature difference map and the second human body image;
(2) taking the maximum similarity parameter as the similarity of the first human body image and the second human body image;
(3) when the similarity is greater than or equal to a set similarity threshold, obtaining a comparison result that the first human body image and the second human body image are similar, indicated by the number 1;
(4) when the similarity is less than the set similarity threshold, obtaining a comparison result that the first human body image and the second human body image are dissimilar, indicated by the number 0.
Specifically, step 1 includes: obtaining, by a third sub convolutional neural network, the similarity parameter between the human body sub-image corresponding to each feature difference map and the second human body image.
The third sub convolutional neural network consists of a basic image processing operation unit, a convolution operation unit, a down-sampling operation unit and a softmax classifier.
Of course, the third sub convolutional neural network may also use any other operation unit capable of implementing image deep learning in place of at least one of the above basic image processing operation unit, convolution operation unit, down-sampling operation unit and softmax classifier; these alternatives are not enumerated one by one here.
In summary, through deep learning and a simple numeric comparison operation, it can be determined whether the first human body image and the second human body image are similar, which increases the speed of human body image comparison.
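The decision logic of steps 2 to 4 above reduces to a maximum followed by a threshold comparison. A minimal Python sketch (the threshold value 0.5 and the example similarity parameters are illustrative assumptions; the patent only specifies "a set similarity threshold"):

```python
# Illustrative sketch of steps (2)-(4): the maximum of the per-sub-image
# similarity parameters is taken as the overall similarity, then compared
# against a threshold. Threshold and data are hypothetical.

def compare_result(similar_params, threshold=0.5):
    """Return 1 (similar) or 0 (dissimilar) from the similarity
    parameters obtained for all human body sub-images."""
    similarity = max(similar_params)            # step (2): take the maximum
    return 1 if similarity >= threshold else 0  # steps (3) and (4)

print(compare_result([0.12, 0.48, 0.91, 0.30]))  # -> 1 (similar)
print(compare_result([0.12, 0.08, 0.21, 0.30]))  # -> 0 (dissimilar)
```

Taking the maximum over the sub-images means a single well-matched sub-image (for example, the unoccluded upper half of the body) is enough to declare the two images similar, which is the mechanism behind the robustness to occlusion claimed above.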
Embodiment 2
Referring to Fig. 3, this embodiment provides another human body image comparison method, including the following steps:
(1) inputting human body image A and human body image B, where image A is the current human body image to be retrieved and image B is an arbitrary human body image in the image library;
(2) region division: input image A is divided equally from top to bottom into m regions, denoted Rj, j ∈ {1, 2, 3, …, m-1, m}. The top k regions Rj, j ∈ {1, 2, 3, …, k-1, k}, are spliced to form a sub-image Ak, with k = ⌈m/2⌉, …, m (⌈·⌉ denotes rounding up), so that altogether m - ⌈m/2⌉ + 1 sub-images Ak can be constructed.
When m = 6, the following region division example is obtained:
Input human body image A is divided equally from top to bottom into 6 regions, denoted R1, R2, R3, R4, R5, R6. Regions R1, R2, R3 form sub-image A1 (the upper 1/2 of input human body image A); regions R1, R2, R3, R4 form sub-image A2 (the upper 2/3); regions R1, R2, R3, R4, R5 form sub-image A3 (the upper 5/6); and regions R1, R2, R3, R4, R5, R6 form sub-image A4 (the whole of input human body image A). One region division of input human body image A thus yields 4 human body sub-images of different sizes.
Dividing the image equally into 6 regions in this way covers both the local information and the global information of the input image, and each sub-image Ak contains the main content of the input image (in pedestrian retrieval the upper half of the human body has a relatively large influence on the result), while the number of sub-images is not so large as to slow processing down.
(3) the human body sub-images A1 to A4 and human body image B are input into sub convolutional neural networks C1 and C2 for feature map extraction (C1 and C2 have the same structure), outputting feature map f and feature map g; C1 and C2 consist of a basic image processing operation, a convolution operation and a down-sampling operation;
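The region division of step (2) above can be sketched as follows (illustrative Python operating on a toy rows-as-lists "image"; the function name is hypothetical). With m = 6 it yields the 4 sub-images A1 to A4 just described:

```python
# Illustrative sketch of step (2), not the patent's implementation:
# split an image into m equal horizontal regions (top to bottom) and
# splice the top k regions into a sub-image for k = ceil(m/2) .. m.
import math

def build_subimages(image_rows, m):
    strip = len(image_rows) // m                           # rows per region
    regions = [image_rows[i * strip:(i + 1) * strip] for i in range(m)]
    k0 = math.ceil(m / 2)                                  # smallest k used
    # Concatenate the top k regions for each k, giving m - ceil(m/2) + 1
    # sub-images of increasing height.
    return [sum(regions[:k], []) for k in range(k0, m + 1)]

rows = [[i] * 4 for i in range(12)]    # toy 12-row "image" (hypothetical)
subs = build_subimages(rows, 6)        # corresponds to A1..A4 above
print(len(subs), [len(s) for s in subs])  # -> 4 [6, 8, 10, 12]
```

With m = 6 and a 12-row image, the 4 sub-images keep the top 6, 8, 10 and 12 rows, i.e. the upper 1/2, 2/3, 5/6 and the whole image, matching the example above.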
(4) computing the feature difference map of feature map f and feature map g: a neighbourhood window of size n × n is slid from top to bottom and from left to right over feature map f and feature map g with a step of 2 pixels, and steps (a), (b) and (c) below are executed at each position until all feature value positions on feature map f and feature map g have been processed, as shown in formula (1):
(a) the maximum values Vf and Vg of feature map f and of feature map g in the n × n neighbourhood centred at coordinate (x, y) are found, respectively;
(b) the difference of Vf and Vg is taken, i.e. V = Vf - Vg;
(c) the absolute value of the difference is taken, i.e. Vabs = abs(V) = abs(Vf - Vg).
K(x, y) = abs(max(N(f(x, y))) - max(N(g(x, y))))    (1)
where K(x, y) denotes the feature difference map at coordinate position (x, y); N(f(x, y)) ∈ Rn×n is the n × n neighbourhood of feature value f(x, y) centred at coordinate (x, y); N(g(x, y)) ∈ Rn×n is the n × n neighbourhood of feature value g(x, y) centred at coordinate (x, y); max(N(f(x, y))) denotes the maximum of the n × n neighbourhood N(f(x, y)); max(N(g(x, y))) denotes the maximum of the n × n neighbourhood N(g(x, y)); and abs(z) denotes the absolute value of z. In this embodiment the value range of n is n ∈ {3, 5, 7, 9, 11}, with a representative value of n = 5;
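Formula (1) can be sketched in pure Python as follows (the toy feature maps are hypothetical; n = 5 and the 2-pixel step follow this embodiment):

```python
# Illustrative sketch of formula (1), not the patent's implementation:
# K(x, y) = |max(N(f(x, y))) - max(N(g(x, y)))|, scanning both feature
# maps with an n x n neighbourhood window at a step of 2 pixels.

def neighborhood_max(fmap, cx, cy, n):
    """Maximum value of fmap in the n x n neighbourhood centred at
    (cx, cy), clipped at the map borders."""
    half = n // 2
    h, w = len(fmap), len(fmap[0])
    return max(fmap[y][x]
               for y in range(max(0, cy - half), min(h, cy + half + 1))
               for x in range(max(0, cx - half), min(w, cx + half + 1)))

def feature_difference_map(f, g, n=5, stride=2):
    h, w = len(f), len(f[0])
    return [[abs(neighborhood_max(f, x, y, n) - neighborhood_max(g, x, y, n))
             for x in range(0, w, stride)]
            for y in range(0, h, stride)]

# Toy 10x10 feature maps f and g (hypothetical data).
f = [[x * y % 7 for x in range(10)] for y in range(10)]
g = [[(x + y) % 7 for x in range(10)] for y in range(10)]
K = feature_difference_map(f, g, n=5)
print(len(K), len(K[0]))  # -> 5 5 (the stride-2 scan halves each dimension)
```

Identical feature maps would produce an all-zero K, consistent with the statement above that pixel values tending to 0 indicate high similarity.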
(5) the feature difference map K is input into sub convolutional neural network C3 to extract features and classify (a deep learning network is a kind of self-learning network: the features are learned by the network itself rather than selected by hand); C3 consists of a basic image processing operation, a convolution operation, a down-sampling operation and a softmax classifier;
(6) outputting the result: the output is a binary variable, where 0 indicates that input human body image A and human body image B are dissimilar and 1 indicates that input human body image A and human body image B are similar.
In summary, the human body image comparison method provided in this embodiment divides the first human body image into multiple image regions, selects image regions of different numbers from those regions and splices them to obtain multiple human body sub-images, and then compares each of the obtained sub-images with the second human body image. Compared with the prior art, in which the human body image is divided into upper, middle and lower parts that are compared separately, so that an occluded part of the human body image lowers the accuracy of the comparison result, this reduces the influence of the occluded part on the comparison result and improves the accuracy of human body image comparison.
Embodiment 3
Referring to Fig. 5, this embodiment provides a human body image comparison device for executing the above human body image comparison method, including: an acquisition module 500, an image division module 502, an image splicing module 504, a feature difference map acquisition module 506 and a comparison module 508.
The acquisition module 500 is used for obtaining the first human body image and the second human body image;
the image division module 502, connected to the acquisition module 500, is used for dividing the first human body image into multiple image regions;
the image splicing module 504, connected to the image division module 502, is used for selecting image regions of different numbers from the multiple image regions and splicing them to obtain multiple human body sub-images;
the feature difference map acquisition module 506, connected to the image splicing module 504, is used for performing deep learning on the obtained multiple human body sub-images and the second human body image to obtain the feature difference maps of the first human body image and the second human body image;
the comparison module 508, connected to the feature difference map acquisition module 506, is used for performing deep learning on the obtained feature difference maps to obtain the comparison result of the first human body image and the second human body image, the comparison result being similar or dissimilar.
So that multiple human body sub-images can be obtained from the image regions into which the first human body image is divided, the image splicing module 504 is specifically used for selecting k image regions at a time from the multiple image regions and splicing them to obtain multiple human body sub-images, the multiple image regions being obtained by dividing the first human body image equally from top to bottom;
wherein ⌈m/2⌉ ≤ k ≤ m, and m denotes the number of image regions into which the first human body image is divided.
In summary, by selecting image regions of different numbers from the multiple image regions and splicing them to obtain multiple human body sub-images, even when the human body in the first human body image is occluded and only part of it appears in the image, the comparison can be carried out through the obtained human body sub-images; the influence on the comparison of the parts of the image unrelated to the human body is removed as far as possible, which improves the success rate of human body image comparison.
In the related art, during human body image comparison the human body image obtained from the image library must also be divided before deep learning is performed on it, which increases the processing time of the comparison. To reduce this processing time, the feature difference map acquisition module 506 includes:
a deep learning unit, used for performing deep learning on the multiple human body sub-images and the second human body image to obtain multiple first human body image feature maps and a second human body image feature map;
a feature difference map acquisition unit, used for obtaining, according to the obtained multiple first human body image feature maps and the second human body image feature map, the feature difference map between each first human body image feature map in the multiple first human body image feature maps and the second human body image feature map.
In summary, during human body image comparison, deep learning is performed directly on the second human body image without first dividing it, which reduces the amount of computation during image comparison and increases the speed of human body image comparison.
In the related art, differences in shooting angle and in the posture of the human body at the time of shooting can make the same person look very different in different images, so that in these cases the human body in the images cannot be effectively recognized and compared. To effectively recognize the same human body with different postures in different images, the feature difference map acquisition unit includes:
a region division subunit, used for taking each pixel coordinate stored in a preset pixel coordinate set as a feature value region centre and, according to a preset feature value region size, dividing the current first human body image feature map and the second human body image feature map into multiple first feature value regions and multiple second feature value regions, respectively;
a regional maximum acquisition subunit, used for obtaining a regional maximum value from each of the multiple first feature value regions and the multiple second feature value regions;
a feature difference computation subunit, used for computing the difference between the regional maximum values obtained from the first feature value region and the second feature value region sharing the same region centre, obtaining multiple feature differences;
a feature difference map generation subunit, used for taking the multiple feature differences as pixel values and, according to a preset feature difference map size, generating the feature difference map of the current first human body image feature map and the second human body image feature map.
In summary, the feature difference map of the first human body image feature map and the second human body image feature map can be obtained. When the pixel values of the pixels in the feature difference map tend to 0, the first human body image and the second human body image are highly similar; in this way the human body of the same person can be effectively recognized even when differences in shooting angle and in posture at the time of shooting make that person look very different in different photos.
In the related art, after the deep learning results for each part of the human body image to be detected and of the reference image have been obtained, a rather complicated feature fusion operation is required before the comparison result can be obtained. To obtain the comparison result of the human body images faster, the comparison module 508 includes:
a similarity parameter computing unit, used for performing deep learning on each obtained feature difference map to obtain a similarity parameter between the human body sub-image corresponding to that feature difference map and the second human body image;
a similarity determination unit, used for taking the maximum similarity parameter as the similarity of the first human body image and the second human body image;
a first comparison result determination unit, used for obtaining, when the similarity is greater than or equal to a set similarity threshold, a comparison result that the first human body image and the second human body image are similar;
a second comparison result determination unit, used for obtaining, when the similarity is less than the set similarity threshold, a comparison result that the first human body image and the second human body image are dissimilar.
In summary, through deep learning and a simple numeric comparison operation, it can be determined whether the first human body image and the second human body image are similar, which increases the speed of human body image comparison.
In summary, the human body image comparison device provided in this embodiment divides the first human body image into multiple image regions, selects image regions of different numbers from those regions and splices them to obtain multiple human body sub-images, and then compares each of the obtained sub-images with the second human body image. Compared with the prior art, in which the human body image is divided into upper, middle and lower parts that are compared separately, so that an occluded part of the human body image lowers the accuracy of the comparison result, this reduces the influence of the occluded part on the comparison result and improves the accuracy of human body image comparison.
The computer program product for the human body image comparison method provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiments. For specific implementation, refer to the method embodiments; details are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. The device embodiments described above are merely exemplary; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purposes of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily be thought of by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A human body image comparison method, characterized by comprising:
obtaining a first human body image and a second human body image, the first human body image being an image of a particular person whose identity needs to be determined, and the second human body image being an image capable of characterizing a person's appearance;
dividing the first human body image into multiple image regions;
selecting image regions of different numbers from the multiple image regions and splicing them to obtain multiple human body sub-images;
performing deep learning on the obtained multiple human body sub-images and the second human body image to obtain feature difference maps of the first human body image and the second human body image;
performing deep learning on the obtained feature difference maps to obtain a comparison result of the first human body image and the second human body image, the comparison result being similar or dissimilar.
2. The method according to claim 1, characterized in that selecting image regions of different numbers from the multiple image regions and splicing them to obtain multiple human body sub-images comprises:
selecting k image regions at a time from the multiple image regions and splicing them to obtain multiple human body sub-images, the multiple image regions being obtained by dividing the first human body image equally from top to bottom;
wherein ⌈m/2⌉ ≤ k ≤ m, and m denotes the number of image regions into which the first human body image is divided.
3. The method according to claim 1, characterized in that performing deep learning on the obtained multiple human body sub-images and the second human body image to obtain the feature difference maps of the first human body image and the second human body image comprises:
performing deep learning on the multiple human body sub-images and the second human body image to obtain multiple first human body image feature maps and a second human body image feature map;
obtaining, according to the obtained multiple first human body image feature maps and the second human body image feature map, the feature difference map between each first human body image feature map in the multiple first human body image feature maps and the second human body image feature map.
4. The method according to claim 3, characterized in that processing the obtained first human body image feature maps and the second human body image feature map to obtain the feature difference map between each first human body image feature map in the multiple first human body image feature maps and the second human body image feature map comprises:
taking each pixel coordinate stored in a preset pixel coordinate set as a feature value region centre and, according to a preset feature value region size, dividing the current first human body image feature map and the second human body image feature map into multiple first feature value regions and multiple second feature value regions, respectively;
obtaining a regional maximum value from each of the multiple first feature value regions and the multiple second feature value regions;
computing the difference between the regional maximum values obtained from the first feature value region and the second feature value region sharing the same region centre, obtaining multiple feature differences;
taking the multiple feature differences as pixel values and, according to a preset feature difference map size, generating the feature difference map of the current first human body image feature map and the second human body image feature map.
5. The method according to claim 1, characterized in that performing deep learning on the obtained feature difference maps to obtain the comparison result of the first human body image and the second human body image comprises:
performing deep learning on each obtained feature difference map to obtain a similarity parameter between the human body sub-image corresponding to that feature difference map and the second human body image;
taking the maximum similarity parameter as the similarity of the first human body image and the second human body image;
when the similarity is greater than or equal to a set similarity threshold, obtaining a comparison result that the first human body image and the second human body image are similar;
when the similarity is less than the set similarity threshold, obtaining a comparison result that the first human body image and the second human body image are dissimilar.
6. A human body image comparison device, characterized by comprising:
an acquisition module, used for obtaining a first human body image and a second human body image, the first human body image being an image of a particular person whose identity needs to be determined, and the second human body image being an image capable of characterizing a person's appearance;
an image division module, used for dividing the first human body image into multiple image regions;
an image splicing module, used for selecting image regions of different numbers from the multiple image regions and splicing them to obtain multiple human body sub-images;
a feature difference map acquisition module, used for performing deep learning on the obtained multiple human body sub-images and the second human body image to obtain feature difference maps of the first human body image and the second human body image;
a comparison module, used for performing deep learning on the obtained feature difference maps to obtain a comparison result of the first human body image and the second human body image, the comparison result being similar or dissimilar.
7. The device according to claim 6, characterized in that the image splicing module is specifically used for selecting k image regions at a time from the multiple image regions and splicing them to obtain multiple human body sub-images, the multiple image regions being obtained by dividing the first human body image equally from top to bottom;
wherein ⌈m/2⌉ ≤ k ≤ m, and m denotes the number of image regions into which the first human body image is divided.
8. The device according to claim 6, characterized in that the feature difference map acquisition module comprises:
a deep learning unit, used for performing deep learning on the multiple human body sub-images and the second human body image to obtain multiple first human body image feature maps and a second human body image feature map;
a feature difference map acquisition unit, used for obtaining, according to the obtained multiple first human body image feature maps and the second human body image feature map, the feature difference map between each first human body image feature map in the multiple first human body image feature maps and the second human body image feature map.
9. The device according to claim 8, characterized in that the feature difference map acquisition unit comprises:
a region division subunit, used for taking each pixel coordinate stored in a preset pixel coordinate set as a feature value region centre and, according to a preset feature value region size, dividing the current first human body image feature map and the second human body image feature map into multiple first feature value regions and multiple second feature value regions, respectively;
a regional maximum acquisition subunit, used for obtaining a regional maximum value from each of the multiple first feature value regions and the multiple second feature value regions;
a feature difference computation subunit, used for computing the difference between the regional maximum values obtained from the first feature value region and the second feature value region sharing the same region centre, obtaining multiple feature differences;
a feature difference map generation subunit, used for taking the multiple feature differences as pixel values and, according to a preset feature difference map size, generating the feature difference map of the current first human body image feature map and the second human body image feature map.
10. The device according to claim 6, wherein the comparing module comprises:
a similarity parameter calculating unit, configured to perform deep learning on each obtained feature difference map to obtain a similarity parameter between the human body sub-image corresponding to each feature difference map and the second human body image;
a similarity determining unit, configured to determine the maximum similarity parameter as the similarity between the first human body image and the second human body image;
a first comparison result determining unit, configured to obtain, when the similarity is greater than or equal to a set similarity threshold, a comparison result that the first human body image and the second human body image are similar;
a second comparison result determining unit, configured to obtain, when the similarity is less than the set similarity threshold, a comparison result that the first human body image and the second human body image are dissimilar.
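The decision logic of claim 10 reduces to a max-then-threshold rule. A short sketch, with hypothetical names of my own choosing (the patent does not prescribe an API):

```python
def compare_images(similarity_params, threshold):
    """Sketch of claim 10: the largest per-sub-image similarity
    parameter becomes the overall similarity of the two human body
    images, which is then tested against a preset similarity threshold.
    Names are illustrative assumptions."""
    # Similarity determining unit: keep the maximum similarity parameter.
    similarity = max(similarity_params)
    # First / second comparison result determining units.
    result = "similar" if similarity >= threshold else "dissimilar"
    return similarity, result
```

For example, with per-sub-image parameters `[0.2, 0.9, 0.5]` and a threshold of `0.8`, the overall similarity is `0.9` and the result is "similar".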
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510742746.4A CN105243395B (en) | 2015-11-04 | 2015-11-04 | A kind of human body image comparison method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105243395A CN105243395A (en) | 2016-01-13 |
CN105243395B true CN105243395B (en) | 2018-10-19 |
Family
ID=55041036
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510742746.4A Active CN105243395B (en) | 2015-11-04 | 2015-11-04 | A kind of human body image comparison method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105243395B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107545221A (en) * | 2016-06-28 | 2018-01-05 | 北京京东尚科信息技术有限公司 | Baby kicks quilt recognition methods, system and device |
JP2018036870A (en) * | 2016-08-31 | 2018-03-08 | 富士ゼロックス株式会社 | Image processing device, and program |
CN106599860B (en) * | 2016-12-20 | 2019-11-26 | 北京小米移动软件有限公司 | A kind of method and apparatus of Face datection |
CN106875385A (en) * | 2017-02-09 | 2017-06-20 | 广州中国科学院软件应用技术研究所 | A kind of high robust region intrusion detection algorithm |
CN106711865A (en) * | 2017-02-21 | 2017-05-24 | 国网山东省电力公司邹城市供电公司 | Cable digging tool |
CN106707109A (en) * | 2017-02-21 | 2017-05-24 | 国网山东省电力公司邹城市供电公司 | Underground cable detection system |
CN106815590A (en) * | 2017-02-21 | 2017-06-09 | 国网山东省电力公司邹城市供电公司 | A kind of cable trough and cable management system |
CN108509961A (en) * | 2017-02-27 | 2018-09-07 | 北京旷视科技有限公司 | Image processing method and device |
CN107402947B (en) * | 2017-03-29 | 2020-12-08 | 北京猿力教育科技有限公司 | Picture retrieval model establishing method and device and picture retrieval method and device |
CN106982741A (en) * | 2017-04-06 | 2017-07-28 | 南京三宝弘正视觉科技有限公司 | A kind of pet supervisory-controlled robot and system |
CN107009373A (en) * | 2017-04-06 | 2017-08-04 | 南京三宝弘正视觉科技有限公司 | A kind of solitary's supervisory-controlled robot and system |
CN107014420A (en) * | 2017-04-14 | 2017-08-04 | 南京三宝弘正视觉科技有限公司 | A kind of sensor detection electronic pen and system |
CN112308102B (en) * | 2019-08-01 | 2022-05-17 | 北京易真学思教育科技有限公司 | Image similarity calculation method, calculation device, and storage medium |
CN112712489A (en) * | 2020-12-31 | 2021-04-27 | 北京澎思科技有限公司 | Method, system and computer readable storage medium for image processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105243395B (en) | A kind of human body image comparison method and device | |
US11093789B2 (en) | Method and apparatus for object re-identification | |
CN109325954B (en) | Image segmentation method and device and electronic equipment | |
CN109376681B (en) | Multi-person posture estimation method and system | |
CN107169463B (en) | Method for detecting human face, device, computer equipment and storage medium | |
US10824916B2 (en) | Weakly supervised learning for classifying images | |
CN109145766B (en) | Model training method and device, recognition method, electronic device and storage medium | |
US20180114071A1 (en) | Method for analysing media content | |
CN110427932B (en) | Method and device for identifying multiple bill areas in image | |
CN109643448A (en) | Fine granularity object identification in robot system | |
CN108229280A (en) | Time domain motion detection method and system, electronic equipment, computer storage media | |
US10289884B2 (en) | Image analyzer, image analysis method, computer program product, and image analysis system | |
JP2007128195A (en) | Image processing system | |
US20140270479A1 (en) | Systems and methods for parameter estimation of images | |
Yigitbasi et al. | Edge detection using artificial bee colony algorithm (ABC) | |
Kumar et al. | Adaptive cluster tendency visualization and anomaly detection for streaming data | |
KR20160033800A (en) | Method for counting person and counting apparatus | |
JP6948851B2 (en) | Information processing device, information processing method | |
CN110084175A (en) | A kind of object detection method, object detecting device and electronic equipment | |
CN107918767A (en) | Object detection method, device, electronic equipment and computer-readable medium | |
Maddalena et al. | Exploiting color and depth for background subtraction | |
Sreela et al. | Action recognition in still images using residual neural network features | |
JP2021051589A5 (en) | ||
CN104615613B (en) | The polymerization of global characteristics description | |
CN107025433B (en) | Video event human concept learning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PP01 | Preservation of patent right | | Effective date of registration: 20220726; Granted publication date: 20181019 |