CN107944403A - Method and device for detecting pedestrian attributes in an image - Google Patents
Method and device for detecting pedestrian attributes in an image
- Publication number
- CN107944403A (application CN201711230016.1A / CN201711230016A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- subregion
- image
- area
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a method and device for detecting pedestrian attributes in an image. The method includes: detecting a pedestrian in an image to be detected to obtain a first pedestrian region containing the pedestrian; extracting the edge of the pedestrian from the first pedestrian region to obtain the pedestrian's edge; determining a region that contains the pedestrian's edge and is smaller than the first pedestrian region as a second pedestrian region; dividing the second pedestrian region into N sub-regions, each sub-region containing a body sub-region of the pedestrian or an attachment of the pedestrian; and inputting the image to be detected corresponding to the N sub-regions into a convolutional neural network, which outputs the attribute features of the body sub-region or attachment contained in each sub-region.
Description
Technical field
This application relates to the field of machine learning, and in particular to a method and device for detecting pedestrian attributes in an image.
Background technology
With the development of video surveillance technology, intelligent video surveillance is applied in more and more scenes, such as traffic, shopping malls, hospitals, residential communities and parks. Applying intelligent video surveillance in these scenes lays the foundation for detecting pedestrian attributes from images.
Pedestrian attribute recognition is a technology that takes a video containing a pedestrian to be detected as input and then recognises the pedestrian's attributes from the video. The technology involves many disciplines, such as computer vision, image processing, pattern recognition and machine learning.
A traditional pedestrian attribute recognition system can only recognise pedestrian attributes from the whole pedestrian image of a certain video frame, using conventional machine learning methods. When the pedestrian is located inaccurately, the recognition result is strongly affected, and the effect that conventional machine learning methods can achieve is limited. Moreover, city-level surveillance video volumes are huge, and the number of structured pedestrian attribute recognition models grows with the number of attributes, leading to an enormous amount of computation.

Therefore, how to recognise pedestrian attributes accurately while improving recognition efficiency has become an urgent problem.
The content of the invention
The embodiments of the present application provide a method and device for detecting pedestrian attributes in an image, so as to improve the efficiency of pedestrian attribute detection.
The embodiments of the present application provide a method for detecting pedestrian attributes in an image, wherein the method includes:

detecting a pedestrian in an image to be detected to obtain a first pedestrian region containing the pedestrian;

extracting the edge of the pedestrian from the first pedestrian region to obtain the pedestrian's edge;

determining a region that contains the pedestrian's edge and is smaller than the first pedestrian region as a second pedestrian region;

dividing the second pedestrian region into N sub-regions, each sub-region containing a body sub-region of the pedestrian or an attachment of the pedestrian;

inputting the N sub-regions into a convolutional neural network, which outputs the attribute features of the body sub-region or attachment sub-region contained in each sub-region.
In one possible implementation, obtaining the first pedestrian region containing the pedestrian includes:

obtaining the gradient orientation histogram of the image to be detected;

determining the feature vector of the image to be detected according to the gradient density distribution in its gradient orientation histogram;

comparing the feature vector of the image to be detected with the feature vector of a preset sample image containing a pedestrian; and if the difference between the two feature vectors is within a preset range, determining that a pedestrian is present in the image to be detected and marking the first pedestrian region where the pedestrian is located.
In one possible implementation, determining the edge of the pedestrian includes: determining the edge of the pedestrian in the first pedestrian region according to a colour connected-region algorithm.
In one possible implementation, dividing the second pedestrian region into N sub-regions includes:

obtaining the gradient orientation histogram of the second pedestrian region;

determining the feature vector of the second pedestrian region according to the gradient density distribution in its gradient orientation histogram;

for any one of the N sub-regions, if the difference between the feature vector of a sub-region in the second pedestrian region and the feature vector of the corresponding sub-region marked in a preset sample image is within a preset range, determining that the image features of that sub-region in the second pedestrian region are the same as those of the sub-region marked in the sample image, and marking off the position of the sub-region from the second pedestrian region.
In one possible implementation, the attribute features of the head include at least one or more of: age, gender, head accessories and face accessories; the attribute features of the upper body include at least features of the clothing; the attribute features of the lower body include at least clothing and shoes; and the attribute features of the hand-held object include at least whether a bag, cart, suitcase or pet is carried, and its colour and type.
The embodiments of the present application provide a device for detecting pedestrian attributes in an image, the device including:

an acquiring unit, configured to detect a pedestrian in an image to be detected;

a processing unit, configured to obtain a first pedestrian region containing the pedestrian; extract the edge of the pedestrian from the first pedestrian region to obtain the pedestrian's edge; determine a region that contains the pedestrian's edge and is smaller than the first pedestrian region as a second pedestrian region; divide the second pedestrian region into N sub-regions, each sub-region containing a body sub-region of the pedestrian or an attachment of the pedestrian; and input the N sub-regions into a convolutional neural network, which outputs the attribute features of the body sub-region or attachment sub-region contained in each sub-region.
In one possible implementation, the processing unit is specifically configured to: obtain the gradient orientation histogram of the image to be detected; determine the feature vector of the image to be detected according to the gradient density distribution in the histogram; compare that feature vector with the feature vector of a preset sample image containing a pedestrian; and if the difference between the two feature vectors is within a preset range, determine that a pedestrian is present in the image to be detected and mark the first pedestrian region where the pedestrian is located.
In one possible implementation, the processing unit is specifically configured to determine the edge of the pedestrian in the first pedestrian region according to a colour connected-region algorithm.
In one possible implementation, the processing unit is specifically configured to: obtain the gradient orientation histogram of the second pedestrian region; determine the feature vector of the second pedestrian region according to the gradient density distribution in the histogram; and, for any one of the N sub-regions, if the difference between the feature vector of a sub-region in the second pedestrian region and the feature vector of the corresponding sub-region marked in a preset sample image is within a preset range, determine that the image features of that sub-region are the same as those of the marked sub-region in the sample image, and mark off the position of the sub-region from the second pedestrian region.
In one possible implementation, the attribute features of the head include at least one or more of: age, gender, head accessories and face accessories; the attribute features of the upper body include at least features of the clothing; the attribute features of the lower body include at least clothing and shoes; and the attribute features of the hand-held object include at least whether a bag, cart, suitcase or pet is carried, and its colour and type.
The embodiments of the present application provide a computer program product including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method described in any one of the above.
The embodiments of the present application provide a chip connected to a memory, the chip being configured to read and execute a software program stored in the memory so as to implement the method in any one of the possible designs above.
The embodiments of the present application provide a method and device for detecting pedestrian attributes in an image. In the method, a pedestrian is detected in an image to be detected to obtain a first pedestrian region containing the pedestrian; the edge of the pedestrian is extracted from the first pedestrian region to obtain the pedestrian's edge; a region that contains the pedestrian's edge and is smaller than the first pedestrian region is determined as a second pedestrian region; the second pedestrian region is divided into N sub-regions, each containing a body sub-region of the pedestrian or an attachment of the pedestrian; and the image to be detected corresponding to the N sub-regions is input into a convolutional neural network, which outputs the attribute features of the body sub-region or attachment contained in each sub-region. Since the second pedestrian region determined in the embodiments of the present application is an accurately located pedestrian region, the N sub-regions divided from it are more accurate; recognising the pedestrian attribute features of the N sub-regions in a single pass through the convolutional neural network improves both detection precision and detection efficiency.
Brief description of the drawings
Fig. 1 is a flow diagram of a method for detecting pedestrian attributes in an image provided by an embodiment of the present application;

Fig. 2 is a schematic diagram of pedestrian attribute detection in an image provided by an embodiment of the present application;

Fig. 3 is a schematic diagram of pedestrian attribute detection in an image provided by an embodiment of the present application;

Fig. 4 is a schematic diagram of pedestrian attribute detection in an image provided by an embodiment of the present application;

Fig. 5 is a schematic diagram of pedestrian attribute detection in an image provided by an embodiment of the present application;

Fig. 6 is a structural diagram of a convolutional neural network provided by an embodiment of the present application;

Fig. 7 is a structural diagram of a device for detecting pedestrian attributes in an image provided by an embodiment of the present application.
Embodiment
In the prior art, pedestrian detection methods for images mainly use the Histogram of Oriented Gradient (HOG) algorithm to detect pedestrian targets. The resulting detection accuracy is low and the pedestrian bounding box is too large, so the accuracy of pedestrian attributes determined from the resulting pedestrian region is low. To effectively improve the efficiency of pedestrian attribute detection, improve its real-time performance and facilitate its global optimisation, the embodiments of the present application provide a method and device for detecting pedestrian attributes in an image.
The embodiments of the present application are applied to an electronic device, which may specifically be a desktop computer, a laptop, or another smart device with processing capability. In addition, the pedestrian attribute detection in an image in the embodiments of the present application may detect pedestrian attributes in images of traffic scenes, as well as in other video surveillance scenes such as parks, apartments and supermarkets. It is widely used in services such as video investigation, pedestrian feature lookup and suspect search.
Fig. 1 is a flow diagram of a method for detecting pedestrian attributes in an image provided by an embodiment of the present application, including the following steps:

Step 101: detect a pedestrian in an image to be detected to obtain a first pedestrian region containing the pedestrian;

Step 102: extract the edge of the pedestrian from the first pedestrian region to obtain the pedestrian's edge;

Step 103: determine a region that contains the pedestrian's edge and is smaller than the first pedestrian region as a second pedestrian region;

Step 104: divide the second pedestrian region into N sub-regions, each containing one body sub-region of the pedestrian or an attachment of the pedestrian;

Step 105: input the image to be detected corresponding to the N sub-regions into a convolutional neural network, which outputs the attribute features of the body sub-region or attachment contained in each sub-region.
In step 101, the first pedestrian region containing the pedestrian can be determined by the Histogram of Oriented Gradient (HOG) feature extraction algorithm. The detailed process is as follows:

Step 1: obtain the gradient orientation histogram of the image to be detected.

In the image to be detected, the shape feature of a pedestrian can be determined from the density distribution of gradient or edge orientations. A concrete implementation: divide the image using filters suited to pedestrian detection, and collect a histogram of the gradient or edge orientations of every pixel within each filter. The gradient orientations of each filter are divided into different directions, and each direction is given a projection weighted by the gradient orientations and magnitudes within the filter, thereby determining the feature vector that the filter produces on the image to be detected.

Step 2: compare the feature vector of the image to be detected with the feature vector of a preset sample image containing a pedestrian; if the difference between the two feature vectors is within a preset range, determine that a pedestrian is present in the image to be detected, and mark the first pedestrian region where the pedestrian is located.
In one possible implementation, the sample images can include positive samples, i.e. sample images in which a pedestrian region has been marked off (possibly by manual annotation), and negative samples, i.e. sample images that contain no pedestrian. The feature vector of the pedestrian region is obtained by training on the pedestrian-region features of the positive sample images together with the negative sample images. During detection, the feature vector of the image to be detected is compared with the trained feature vector of the pedestrian region; if the difference between them is within a preset range, a pedestrian is determined to be present in the image to be detected, and the first pedestrian region where the pedestrian is located is marked.

The specific comparison can be performed by a support vector machine (SVM), in the same way as in the prior art, and is not described in detail here.
Fig. 2 shows the first pedestrian region 201 determined in the embodiment of the present application. The first pedestrian region 201 determined from HOG features covers a rather large area, and recognising pedestrian attributes using only this region gives relatively low recognition precision. Therefore, to improve the precision of pedestrian recognition, the method for detecting pedestrian attributes in an image in the embodiment of the present application further determines the edge of the pedestrian.
In step 102, the pedestrian's edge can be extracted using a colour connected-region algorithm, which may specifically include the following steps:

Step 1: convert the sample format of the image to be detected into the YCbCr format.

YCbCr is a colour space commonly used for continuous image processing in film, or in digital photography systems. Y is the luminance value, and Cb and Cr are the blue and red chroma components. Converting the sample format compresses the size of the image to be detected, which reduces the size of the image processing model and increases processing speed.
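The patent does not give conversion coefficients; as a sketch, the standard ITU-R BT.601 full-range formulas convert one RGB pixel to YCbCr:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (ITU-R BT.601, full range).

    Y carries luminance; Cb and Cr carry the blue and red chroma
    differences, centred at 128.
    """
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

A neutral grey pixel maps to Cb = Cr = 128, i.e. zero chroma, which is one quick sanity check on the coefficients.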
Step 2: obtain the binary image of the image to be detected, and apply dilation and erosion to the binary image.

A binary image is one in which every pixel has only two possible values or grey levels; it can be represented as a black-and-white, B&W, or monochrome image. Every pixel of a binary image takes one of the two values 0 and 1, corresponding to "off" and "on": "off" marks a background pixel and "on" a foreground pixel. A binary image keeps only the edge features of the image and ignores its details, so the image occupies little space and its structural features are easier to recognise, for example whether the image shows a landscape or a pedestrian.

In a specific implementation, the edges of the image can be smoothed by eroding and dilating the binary image. Eroding first and then dilating is called an opening operation; it eliminates small objects, separates objects at thin connections, and smooths the boundaries of larger objects. Dilating first and then eroding is called a closing operation; it fills tiny holes inside objects, connects adjacent objects, and smooths boundaries. Concretely, a small binary map (the structuring element) is moved point by point over the binary image and compared with it, and dilation and erosion are applied according to the comparison result. Removing holes in foreground noise and regions by mathematical morphology filtering combined with connectivity detection can effectively remove noise interference in the image to be detected and raise the pedestrian detection accuracy.
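The opening and closing operations just described can be sketched directly on binary numpy arrays. This is a minimal illustration with an assumed k x k square structuring element, not the patent's implementation:

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion: a pixel survives only if its whole neighbourhood is 1."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def opening(img, k=3):
    """Erode then dilate: removes small foreground noise."""
    return dilate(erode(img, k), k)

def closing(img, k=3):
    """Dilate then erode: fills small holes inside objects."""
    return erode(dilate(img, k), k)
```

Opening deletes an isolated noise pixel while leaving a solid object intact, and closing fills a one-pixel hole, which is exactly the smoothing behaviour the description relies on.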
Step 3: determine the salient boundary between the pedestrian and the surrounding environment, and take that boundary as the pedestrian's edge.

In a specific implementation, a connected-region labelling method for binary images can be used: in a binary image consisting only of "1" pixels (foreground points) and "0" pixels (background points), adjacent "1" pixels are combined into regions, and each connected region is described by its boundary information.
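The connected-region labelling step can be sketched with a simple flood fill. A minimal 4-connected version, assuming the binary image is given as a list of lists of 0/1:

```python
def label_connected_regions(img):
    """4-connected component labelling of a binary image.

    Returns a label map of the same shape plus the region count;
    background stays 0 and each foreground region gets a distinct
    positive label, so its boundary pixels can then be read off as
    the region's edge.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 1 and labels[sy][sx] == 0:
                current += 1
                stack = [(sy, sx)]
                labels[sy][sx] = current
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return labels, current
```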
Fig. 3 is a schematic diagram of the pedestrian edge 301 determined in the embodiment of the present application. It should be noted that steps 101 and 102 can be executed in sequence: step 101 is performed first, and after the first pedestrian region 201 is determined, step 102 is applied to the image to be detected within the first pedestrian region 201 to obtain the pedestrian edge 301. Alternatively, steps 101 and 102 can be carried out simultaneously: after the first pedestrian region 201 is determined by step 101 and all edges in the image to be detected are determined by step 102, the pedestrian edge 301 is selected from all the edges of the image to be detected using the determined first pedestrian region 201.
The edge 301 determined by colour connected regions is the minimal edge of the pedestrian; it may even happen that a body part of the pedestrian or an attachment of the pedestrian falls outside the edge 301, so a certain error remains. To improve recognition precision, the first pedestrian region 201 determined from HOG features is combined with the pedestrian edge 301 to determine the second pedestrian region 401 shown in Fig. 4.
In one possible implementation, in step 103 the second pedestrian region 401 can be determined by the following method:

Step 1: determine the extreme points a, b, c and d in the four directions up, down, left and right according to the pedestrian edge 301.

In one possible implementation, the four directions up, down, left and right are the directions perpendicular to the four borders of the first pedestrian region 201 and pointing outwards; in each direction, the point of the pedestrian edge 301 that extends furthest in that direction is determined as the extreme point in that direction.

Step 2: determine, from the four extreme points, a third pedestrian region 302 similar to the first pedestrian region 201.

Step 3: determine the second pedestrian region 401 from the first pedestrian region 201 and the third pedestrian region 302.

In one possible implementation, as shown in Fig. 4, the second pedestrian region 401 is a region smaller than the first pedestrian region 201 and larger than the third pedestrian region 302.

In one possible implementation, the second pedestrian region 401 is located midway between the first pedestrian region 201 and the third pedestrian region 302, and its size is the average of the sizes of the first pedestrian region 201 and the third pedestrian region 302.
The second pedestrian region 401 determined in step 103 narrows the pedestrian detection area to a suitable extent, improving the precision and efficiency of the subsequent pedestrian attribute detection.
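The geometry of steps 1-3 above can be sketched with simple box arithmetic. Function names and the (x1, y1, x2, y2) box convention are assumptions for illustration, not the patent's implementation:

```python
def extreme_point_box(edge_points):
    """Tight bounding box (the third pedestrian region) from the
    pedestrian edge points: the extreme points in the up/down/left/right
    directions. Points are (x, y) tuples."""
    xs = [p[0] for p in edge_points]
    ys = [p[1] for p in edge_points]
    return (min(xs), min(ys), max(xs), max(ys))

def average_box(box_a, box_b):
    """Average two axis-aligned boxes (x1, y1, x2, y2) coordinate-wise.

    Used here to place the second pedestrian region midway between the
    coarse HOG box (first region) and the tight edge box (third region),
    with its size the mean of the two.
    """
    return tuple((a + b) / 2.0 for a, b in zip(box_a, box_b))
```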
Detecting the pedestrian's sub-regions on the image of the second pedestrian region 401 reduces the number of pixels per sub-region and the size of the filters used to divide the sub-regions, which greatly increases recognition speed and thus the efficiency of image recognition.
In a specific implementation, the body sub-regions or attachments contained in the N sub-regions can include at least one or more of: the head, the upper body, the lower body, a hand-held object, etc. The choice of sub-regions can be determined according to the practical application scene and is not limited here.
In step 104, dividing the second pedestrian region 401 into N sub-regions may include the following steps:

Step 1: obtain the gradient orientation histogram of the second pedestrian region 401.

To improve the precision of identifying the N sub-regions, in one possible implementation the gradient orientation histogram of the second pedestrian region 401 can be determined using two classes of filters, as follows:

Step 1: convolve the second pedestrian region 401 with a root filter to obtain the response map of the root filter.

The root filter is a global filter. Each root filter weights and superimposes the gradient orientations according to a support vector machine classification model; the brighter a gradient orientation, the more likely the pedestrian is to have a gradient in that direction. The response map obtained from the root filter roughly presents the overall features of a pedestrian.
Step 2: magnify the image of the second pedestrian region 401 by a factor of 2 through Gaussian pyramid upsampling; convolve the image to be detected corresponding to the 2x-magnified second pedestrian region with each of the N sub-region filters (part filters) to obtain the response maps of the N sub-region filters.

Detecting the sub-regions on the 2x-magnified image improves the precision of detecting the N sub-regions. The N sub-region filters are the filters of the N sub-regions determined from trained sample images. Because the second pedestrian region 401 is used as the sample image chosen when training the N sub-region filters, the determined sub-region filters contain fewer pixels, which effectively improves the recognition precision and efficiency for the N sub-regions.

Step 3: apply Gaussian pyramid downsampling to the response maps of the N sub-region filters, and take a weighted average of the downsampled response maps and the response map of the root filter to determine the orientations and magnitudes of the gradients of the second pedestrian region 401, i.e. its gradient density distribution.

The fine Gaussian pyramid downsampling of the response maps of the N sub-region filters ensures that they have exactly the same resolution as the response map of the root filter. The weighted average yields the final response map, from which the gradient orientation histogram of the second pedestrian region 401 is determined.
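The root-plus-part response combination above can be sketched as follows. This is a rough numpy illustration under stated assumptions: nearest-neighbour and box resampling stand in for the Gaussian pyramid, equal weights stand in for the trained weighted average, and all function names are hypothetical:

```python
import numpy as np

def upsample2(img):
    """Nearest-neighbour stand-in for the 2x Gaussian-pyramid upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def downsample2(img):
    """2x downsampling, averaging 2x2 blocks (stand-in for pyramid reduce)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def correlate_same(img, kernel):
    """'Same'-size cross-correlation, i.e. a filter response map."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh - kh // 2 - 1), (kw // 2, kw - kw // 2 - 1)))
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (p[y:y + kh, x:x + kw] * kernel).sum()
    return out

def combined_response(region, root_filter, part_filters):
    """Root response at native scale plus part responses computed at 2x
    scale, brought back down and averaged, as in the steps above."""
    root_resp = correlate_same(region, root_filter)
    up = upsample2(region)
    part_resps = [downsample2(correlate_same(up, pf)) for pf in part_filters]
    return (root_resp + sum(part_resps)) / (1 + len(part_filters))
```

The downsampling step is what gives the part responses the same resolution as the root response, which is the precondition for averaging them into one final response map.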
Step 2: determine the feature vector of the second pedestrian region according to the gradient density distribution in its gradient orientation histogram.

Step 3: for any one of the N sub-regions, if the difference between the feature vector of a sub-region in the second pedestrian region and the feature vector of the corresponding sub-region marked in a preset sample image is within a preset range, determine that the image features of that sub-region in the second pedestrian region are the same as those of the marked sub-region in the sample image, and mark off the position of the sub-region from the second pedestrian region.
For example, Fig. 5 shows the four sub-regions of the second pedestrian region 402 determined in step 104 in the embodiment of the present application: the head sub-region 501, the upper-body sub-region 502, the lower-body sub-region 503 and the attachment sub-region 504.

For the head sub-region 501, the attribute features of the head can include at least one or more of: age, gender, head accessories, face accessories, etc.; a head accessory can be a hat, and a face accessory can be glasses, for example.

For the upper-body sub-region 502, the attribute features of the upper body can include at least one or more of: jacket colour, jacket type, etc.

For the lower-body sub-region 503, the attribute features of the lower body can include at least one or more of: clothing and shoes, e.g. clothing colour, shoe style, etc.

For the attachment sub-region 504, the attribute features of the attachment can include at least one or more of: whether an attachment such as a bag, cart, suitcase or pet is carried, and features such as its colour and type.
In a specific implementation, the N pedestrian attribute features to be detected in the N subregions can be determined as needed.
For example, for the 4 subregions 501-504 shown in Fig. 5, the 4 corresponding pedestrian attribute features to be detected may be: gender for subregion 501, jacket color for subregion 502, lower-clothing color for subregion 503, and whether a bag is carried for subregion 504.
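The per-subregion attribute assignment described above can be sketched as a small lookup table; the Python names below are illustrative assumptions, not identifiers from the patent:

```python
# Illustrative mapping of the 4 subregions (cf. regions 501-504) to the single
# pedestrian attribute feature each sub-network is configured to detect.
SUBREGION_ATTRIBUTES = {
    "head_501": "gender",
    "upper_body_502": "jacket_color",
    "lower_body_503": "lower_clothing_color",
    "attachment_504": "carrying_bag",
}

def attribute_for(subregion):
    """Return the pedestrian attribute feature detected for a given subregion."""
    return SUBREGION_ATTRIBUTES[subregion]
```

In practice the table would be whatever set of N attributes the deployment needs, as the passage above notes.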
In step 105, to detect the N determined pedestrian attribute features, the parts of the image to be detected corresponding to the N subregions are input into a convolutional neural network.
In the embodiment of the present application, the convolutional neural network is trained with a large number of sample images, which constitute a sample image set. Rectangular frames can be used to mark the N subregions in each sample image. Each subregion in a sample image corresponds to one sub convolutional neural network, and each sub convolutional neural network model is used to recognize the pedestrian attribute feature associated with that subregion. For example, Fig. 6 shows the structure of a convolutional neural network comprising 4 sub convolutional neural networks 601-604, corresponding respectively to the 4 subregions 501-504 of the second pedestrian area 402, i.e. the head subregion 501, upper-body subregion 502, lower-body subregion 503, and attachment subregion 504.
In the embodiment of the present application, the training process may use all sample images in the sample image set. To improve training efficiency, the sub convolutional neural networks are trained separately, one per pedestrian attribute feature of each subregion in the sample images.
When the weight coefficients of none of the sub convolutional neural network models have yet been determined, one pedestrian attribute feature of one subregion may be selected at random from the N pedestrian attribute features of the N subregions, and the corresponding sub convolutional neural network is trained. The specific training process includes: selecting subsample images from the sample image set, a subsample image being the part of a sample image corresponding to the subregion; training the sub convolutional neural network with the selected subsample images; and continuously updating the weight coefficients of the sub convolutional neural network until the error between the predicted pedestrian attribute feature information and the labelled pedestrian attribute feature information converges.
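A minimal sketch of the convergence-based training loop described above, using a one-layer logistic model as a stand-in for a sub convolutional neural network (the real method trains a CNN; the learning rate, tolerance, and data below are illustrative assumptions):

```python
import numpy as np

def train_sub_network(x, y, lr=0.5, tol=1e-6, max_iter=5000):
    """Train a minimal one-layer 'sub-network' on subsample feature vectors x
    with labelled attributes y, updating the weight coefficients until the
    prediction error stops improving (i.e. converges)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=x.shape[1])
    b = 0.0
    prev_err = np.inf
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted attribute probability
        err = np.mean((p - y) ** 2)             # error vs. labelled attribute
        if abs(prev_err - err) < tol:           # convergence criterion
            break
        prev_err = err
        grad = p - y                            # gradient of the logistic loss
        w -= lr * (x.T @ grad) / len(y)         # update weight coefficients
        b -= lr * grad.mean()
    return w, b
```

On a tiny separable dataset the loop converges to weights that classify the labelled attribute correctly.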
After the pedestrian attribute feature of at least one subregion has been determined, part of the weight coefficients obtained for that pedestrian attribute feature are used as the initial values of the weight coefficients for the next pedestrian attribute feature to be trained, and are input into the sub convolutional neural network corresponding to that next pedestrian attribute feature. For example, 80% of the already determined weight coefficients may be used as the initial values for the next pedestrian attribute feature. The rest of the training process is identical to the training of the first sub convolutional neural network described above and is not repeated here.
With this training method, the weight coefficients of the N sub convolutional neural networks for the N pedestrian attribute features of the N subregions are partly identical, which greatly reduces the computation of the convolutional neural network model and improves the recognition efficiency.
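The weight-reuse step can be sketched as follows, assuming for illustration that the weights form a flat vector and that the first 80% of the entries are the shared portion (the patent does not specify which coefficients are shared):

```python
import numpy as np

def init_from_trained(trained_w, share_ratio=0.8, seed=0):
    """Initialise the weight vector of the next attribute's sub-network by
    reusing a fraction (here 80%) of an already-trained sub-network's weights
    and randomly initialising the remainder."""
    rng = np.random.default_rng(seed)
    n_shared = int(len(trained_w) * share_ratio)
    new_w = rng.normal(scale=0.01, size=len(trained_w))
    new_w[:n_shared] = trained_w[:n_shared]  # copy the shared portion
    return new_w
```

Only the non-shared tail of the vector then needs to be learned from scratch for the next attribute.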
In step 105, when the image to be detected is processed, it is directly input into the pre-trained convolutional neural network. As shown in Fig. 7, the convolutional neural network provided by the embodiment of the present application includes multiple convolutional layers, down-sampling layers, and a final fully connected layer: the convolutional layers produce feature maps from which the pedestrian attribute feature of each subregion is recognized, the down-sampling layers down-sample the features of each subregion, and the fully connected layer determines the feature vector corresponding to each subregion from the down-sampled result.
For example, in Fig. 6, sub convolutional neural network 601 is the model for detecting the gender feature, sub convolutional neural network 602 the model for detecting the jacket-color feature, sub convolutional neural network 603 the model for detecting the lower-clothing-color feature, and sub convolutional neural network 604 the model for detecting whether a bag is carried. The weight coefficients of the 4 sub convolutional neural networks 601-604 are partly identical; the weight coefficients of the 4 models can therefore be stored as the union of all their weight coefficients. In the intermediate convolutional layers, the 4 sub convolutional neural networks are computed in parallel to improve detection efficiency.
Only in the final fully connected layer do the 4 sub convolutional neural network models classify the feature vectors of their 4 respective pedestrian attribute features and output the corresponding attribute values.
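The shared-backbone idea — computing the common intermediate layers once and evaluating the per-attribute heads on the same features — can be sketched with a toy dense layer standing in for the convolutional layers (all names here are illustrative assumptions):

```python
import numpy as np

def forward(image_vec, shared_w, head_ws):
    """One forward pass: a shared 'backbone' computed once, followed by one
    fully connected head per attribute, mimicking the 4 sub-networks whose
    intermediate weight coefficients are shared."""
    features = np.maximum(image_vec @ shared_w, 0.0)  # shared layers (ReLU)
    # each head reuses the same features, so the backbone runs only once
    return {name: float(features @ w) for name, w in head_ws.items()}
```

Because the backbone is evaluated once for all heads, the per-attribute cost is just one small head, which is the efficiency gain the passage describes.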
According to the pedestrian attribute feature map of the fully connected layer of a sub convolutional neural network, the probability that the corresponding pedestrian attribute feature is present in the subregion is predicted. In the embodiment of the present application, when the subregion is predicted to have the attribute, the corresponding probability is 1; otherwise, it is 0. Of course, when the attribute is predicted to be present, the probability may also be some other value greater than 0.
In a specific implementation, the convolution feature map produced by the last convolutional layer is passed through the fully connected layer to judge the pedestrian attribute features. Based on the determined feature map, the fully connected layer gives the probability that each subregion has the corresponding pedestrian attribute feature, for example 0 or 1, where 0 means the attribute is absent from the subregion and 1 means it is present. The probability may of course be recorded in other ways, for example by setting a probability threshold: a value above the threshold indicates the attribute is present, and a value below it indicates the attribute is absent.
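The thresholding rule described above can be sketched as follows (the 0.5 default is an illustrative assumption; the patent leaves the threshold value open):

```python
def attribute_present(prob, threshold=0.5):
    """Binarise a predicted probability: a value above the threshold is taken
    to mean the pedestrian attribute is present (1), otherwise absent (0)."""
    return 1 if prob > threshold else 0
```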
Taking the head subregion 501 as an example: after the part of the image to be detected corresponding to head subregion 501 is input into sub convolutional neural network 601 and passes through the convolutional and down-sampling layers, the feature map of head subregion 501 is obtained at the final fully connected layer. If the fully connected layer determines that the probability of the gender feature being female is 0.8 and male 0.2, then the gender output for the image to be detected is female. The other subregions 502-504 are input into their models at the same time as head subregion 501, so the 4 pedestrian attribute features of the 4 subregions are obtained simultaneously.
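Selecting the attribute value with the highest predicted probability, as in the gender example above, amounts to an argmax over the class probabilities:

```python
def classify(probabilities):
    """Pick the attribute value with the highest predicted probability,
    e.g. {'female': 0.8, 'male': 0.2} -> 'female'."""
    return max(probabilities, key=probabilities.get)
```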
In the embodiment of the present application, the convolutional neural network may be a GoogLeNet model with 22 convolutional layers, 5 down-sampling layers, and a final fully connected layer. Convolutional neural networks have a powerful feature-learning ability and can overcome the inaccurate descriptions caused by hand-crafted features. In addition, because the N sub convolutional neural networks in the embodiment of the present application share part of their weight coefficients, fewer weight coefficients are needed, which greatly reduces computation while maintaining accuracy and yields the N pedestrian attribute features simultaneously.
As shown in Fig. 7, the embodiment of the present application provides a pedestrian attribute detection apparatus for an image, the apparatus including:
an acquiring unit 701, configured to detect a pedestrian in an image to be detected;
a processing unit 702, configured to obtain a first pedestrian area including the pedestrian; extract the edge of the pedestrian in the first pedestrian area to obtain the edge of the pedestrian; determine a region that includes the edge of the pedestrian and is smaller than the first pedestrian area as a second pedestrian area; divide the second pedestrian area into N subregions, each subregion including one body subregion of the pedestrian or an attachment of the pedestrian; and input the N subregions into a convolutional neural network, outputting the attribute features of the body subregion or attachment subregion included in each subregion.
In a possible implementation, the processing unit 702 is specifically configured to: obtain the histogram of oriented gradients of the image to be detected; determine the feature vector of the image to be detected according to the gradient density distribution in the histogram; compare the feature vector of the image to be detected with the feature vector of a preset sample image including a pedestrian; and, if the difference between the two feature vectors is determined to be within a preset range, determine that a pedestrian is present in the image to be detected and calibrate the first pedestrian area where the pedestrian is located.
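The feature-vector comparison described above can be sketched as follows, assuming Euclidean distance as the difference measure and an illustrative preset range (the patent fixes neither choice):

```python
import numpy as np

def pedestrian_present(candidate_vec, sample_vec, preset_range=0.25):
    """Compare a HOG-style feature vector of the image to be detected with a
    calibrated sample feature vector; a difference within the preset range is
    taken to mean a pedestrian is present."""
    diff = np.linalg.norm(np.asarray(candidate_vec, float)
                          - np.asarray(sample_vec, float))
    return bool(diff <= preset_range)
```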
In a possible implementation, the processing unit 702 is specifically configured to determine the edge of the pedestrian in the first pedestrian area according to a color connected-region algorithm.
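The colour connected-region edge extraction can be sketched on a toy colour grid: flood-fill the region of cells sharing the seed's colour, then keep the region cells that border a different colour. This is an illustrative stand-in; the patent does not specify the exact algorithm:

```python
def region_edge(grid, seed):
    """Return the edge cells of the 4-connected region of cells that share
    the seed cell's colour -- a toy colour connected-region edge extractor."""
    h, w = len(grid), len(grid[0])
    colour = grid[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:  # iterative flood fill of the seed's colour region
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if grid[r][c] != colour:
            continue
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    # edge = region cells with at least one neighbour outside the region
    return {(r, c) for r, c in region
            if any((r + dr, c + dc) not in region
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))}
```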
In a possible implementation, the processing unit 702 is specifically configured to: obtain the histogram of oriented gradients of the second pedestrian area; determine the feature vector of the second pedestrian area according to the gradient density distribution in the histogram; and, for any one of the N subregions, if the difference between the feature vector of a subregion in the second pedestrian area and the feature vector of the corresponding subregion calibrated in a preset sample image is within a preset range, determine that the image features of that subregion in the second pedestrian area are identical to those of the calibrated subregion in the sample image, and divide out the position of the subregion from the second pedestrian area.
In a possible implementation, the attribute features of the head include at least one or more of the following: age, gender, head accessories, and face accessories; the attribute features of the upper body include at least clothing features; the attribute features of the lower body include at least clothing and shoes; and the attribute features of the carried object include at least whether an object such as a bag, cart, suitcase, or pet is carried, and its color and type.
The embodiment of the present application provides a pedestrian attribute detection method and apparatus for an image. In this method, a pedestrian in an image to be detected is detected to obtain a first pedestrian area including the pedestrian; the edge of the pedestrian in the first pedestrian area is extracted to obtain the edge of the pedestrian; a region that includes the edge of the pedestrian and is smaller than the first pedestrian area is determined as a second pedestrian area; the second pedestrian area is divided into N subregions, each including one body subregion of the pedestrian or an attachment of the pedestrian; and the parts of the image to be detected corresponding to the N subregions are input into a convolutional neural network, which outputs the attribute features of the body subregion or attachment included in each region. Since the second pedestrian area determined in the embodiment of the present application is a precisely located pedestrian area, the N subregions are divided more accurately, and because the N corresponding pedestrian attribute features are recognized by the convolutional neural network in a single pass, both the precision and the efficiency of detection are improved.
The embodiment of the present application provides a computer program product including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method described in any of the above.
The embodiment of the present application also provides a chip connected to a memory, for reading and executing the software program stored in the memory, so as to implement the method in any of the possible designs described above.
Since the system/apparatus embodiments are substantially similar to the method embodiments, they are described relatively briefly; for relevant details, reference may be made to the description of the method embodiments.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present application.
Obviously, those skilled in the art can make various modifications and variations to the present application without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to encompass them.
Claims (10)
- 1. A pedestrian attribute detection method for an image, characterized in that the method includes: detecting a pedestrian in an image to be detected, and obtaining a first pedestrian area including the pedestrian; extracting the edge of the pedestrian in the first pedestrian area to obtain the edge of the pedestrian; determining a region that includes the edge of the pedestrian and is smaller than the first pedestrian area as a second pedestrian area; dividing the second pedestrian area into N subregions, each subregion including a body subregion of the pedestrian or an attachment of the pedestrian; and inputting the N subregions into a convolutional neural network, and outputting the attribute features of the body subregion or attachment subregion included in each subregion.
- 2. The method according to claim 1, characterized in that obtaining the first pedestrian area including the pedestrian includes: obtaining the histogram of oriented gradients of the image to be detected; determining the feature vector of the image to be detected according to the gradient density distribution in the histogram of oriented gradients of the image to be detected; and comparing the feature vector of the image to be detected with the feature vector of a preset sample image including a pedestrian, and, if it is determined that the difference between the feature vector of the image to be detected and the feature vector of the sample image is within a preset range, determining that a pedestrian is present in the image to be detected and calibrating the first pedestrian area where the pedestrian is located.
- 3. The method according to claim 1, characterized in that determining the edge of the pedestrian includes: determining the edge of the pedestrian in the first pedestrian area according to a color connected-region algorithm.
- 4. The method according to claim 1, characterized in that dividing the second pedestrian area into N subregions includes: obtaining the histogram of oriented gradients of the second pedestrian area; determining the feature vector of the second pedestrian area according to the gradient density distribution in the histogram of oriented gradients of the second pedestrian area; and, for any one of the N subregions, if it is determined that the difference between the feature vector of a subregion in the second pedestrian area and the feature vector of the corresponding subregion calibrated in a preset sample image is within a preset range, determining that the image features of the subregion in the second pedestrian area are identical to the image features of the calibrated subregion in the sample image, and dividing out the position of the subregion from the second pedestrian area.
- 5. The method according to any one of claims 1-4, characterized in that the attribute features of the head include at least one or more of the following: age, gender, head accessories, and face accessories; the attribute features of the upper body include at least clothing features; the attribute features of the lower body include at least clothing and shoes; and the attribute features of the carried object include at least whether an object such as a bag, cart, suitcase, or pet is carried, and its color and type.
- 6. A pedestrian attribute detection apparatus for an image, characterized in that the apparatus includes: an acquiring unit, configured to detect a pedestrian in an image to be detected; and a processing unit, configured to obtain a first pedestrian area including the pedestrian; extract the edge of the pedestrian in the first pedestrian area to obtain the edge of the pedestrian; determine a region that includes the edge of the pedestrian and is smaller than the first pedestrian area as a second pedestrian area; divide the second pedestrian area into N subregions, each subregion including a body subregion of the pedestrian or an attachment of the pedestrian; and input the N subregions into a convolutional neural network and output the attribute features of the body subregion or attachment subregion included in each subregion.
- 7. The apparatus according to claim 6, characterized in that the processing unit is specifically configured to: obtain the histogram of oriented gradients of the image to be detected; determine the feature vector of the image to be detected according to the gradient density distribution in the histogram of oriented gradients of the image to be detected; and compare the feature vector of the image to be detected with the feature vector of a preset sample image including a pedestrian, and, if it is determined that the difference between the feature vector of the image to be detected and the feature vector of the sample image is within a preset range, determine that a pedestrian is present in the image to be detected and calibrate the first pedestrian area where the pedestrian is located.
- 8. The apparatus according to claim 6, characterized in that the processing unit is specifically configured to determine the edge of the pedestrian in the first pedestrian area according to a color connected-region algorithm.
- 9. The apparatus according to claim 6, characterized in that the processing unit is specifically configured to: obtain the histogram of oriented gradients of the second pedestrian area; determine the feature vector of the second pedestrian area according to the gradient density distribution in the histogram of oriented gradients of the second pedestrian area; and, for any one of the N subregions, if it is determined that the difference between the feature vector of a subregion in the second pedestrian area and the feature vector of the corresponding subregion calibrated in a preset sample image is within a preset range, determine that the image features of the subregion in the second pedestrian area are identical to the image features of the calibrated subregion in the sample image, and divide out the position of the subregion from the second pedestrian area.
- 10. The apparatus according to any one of claims 6-9, characterized in that the attribute features of the head include at least one or more of the following: age, gender, head accessories, and face accessories; the attribute features of the upper body include at least clothing features; the attribute features of the lower body include at least clothing and shoes; and the attribute features of the carried object include at least whether an object such as a bag, cart, suitcase, or pet is carried, and its color and type.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711230016.1A CN107944403B (en) | 2017-11-29 | 2017-11-29 | Method and device for detecting pedestrian attribute in image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107944403A true CN107944403A (en) | 2018-04-20 |
CN107944403B CN107944403B (en) | 2021-03-19 |
Family
ID=61946818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711230016.1A Active CN107944403B (en) | 2017-11-29 | 2017-11-29 | Method and device for detecting pedestrian attribute in image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107944403B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106845373A (en) * | 2017-01-04 | 2017-06-13 | 天津大学 | Towards pedestrian's attribute forecast method of monitor video |
CN106951872A (en) * | 2017-03-24 | 2017-07-14 | 江苏大学 | A kind of recognition methods again of the pedestrian based on unsupervised depth model and hierarchy attributes |
CN107346414A (en) * | 2017-05-24 | 2017-11-14 | 北京航空航天大学 | Pedestrian's attribute recognition approach and device |
Non-Patent Citations (2)
Title |
---|
JIANQING ZHU ET AL: "Multi-label convolutional neural network based pedestrian attribute classification", Image and Vision Computing *
CHEN JINHUI: "Research on Pedestrian Detection Algorithms for Static Images", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934081A (en) * | 2018-08-29 | 2019-06-25 | 厦门安胜网络科技有限公司 | A kind of pedestrian's attribute recognition approach, device and storage medium based on deep neural network |
CN109829356A (en) * | 2018-12-05 | 2019-05-31 | 科大讯飞股份有限公司 | The training method of neural network and pedestrian's attribute recognition approach neural network based |
CN109829356B (en) * | 2018-12-05 | 2021-04-06 | 科大讯飞股份有限公司 | Neural network training method and pedestrian attribute identification method based on neural network |
CN109815842A (en) * | 2018-12-29 | 2019-05-28 | 上海依图网络科技有限公司 | A kind of method and device of the attribute information of determining object to be identified |
CN109740537A (en) * | 2019-01-03 | 2019-05-10 | 广州广电银通金融电子科技有限公司 | The accurate mask method and system of pedestrian image attribute in crowd's video image |
CN109740537B (en) * | 2019-01-03 | 2020-09-15 | 广州广电银通金融电子科技有限公司 | Method and system for accurately marking attributes of pedestrian images in crowd video images |
CN109784293A (en) * | 2019-01-24 | 2019-05-21 | 苏州科达科技股份有限公司 | Multi-class targets method for checking object, device, electronic equipment, storage medium |
CN111753579A (en) * | 2019-03-27 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Detection method and device for designated walk-substituting tool |
US11488410B2 (en) | 2019-04-11 | 2022-11-01 | Fujitsu Limited | Pedestrian article detection apparatus and method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN107944403B (en) | 2021-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107944403A (en) | Pedestrian's attribute detection method and device in a kind of image | |
CN106951870B (en) | Intelligent detection and early warning method for active visual attention of significant events of surveillance video | |
CN109902806A (en) | Method is determined based on the noise image object boundary frame of convolutional neural networks | |
CN109154978A (en) | System and method for detecting plant disease | |
CN110929593B (en) | Real-time significance pedestrian detection method based on detail discrimination | |
CN107273832B (en) | License plate recognition method and system based on integral channel characteristics and convolutional neural network | |
EP3819859A1 (en) | Sky filter method for panoramic images and portable terminal | |
Qu et al. | A pedestrian detection method based on yolov3 model and image enhanced by retinex | |
Morris | A pyramid CNN for dense-leaves segmentation | |
CN109583483A (en) | A kind of object detection method and system based on convolutional neural networks | |
CN109840483B (en) | Landslide crack detection and identification method and device | |
CN108446707B (en) | Remote sensing image airplane detection method based on key point screening and DPM confirmation | |
CN113076871A (en) | Fish shoal automatic detection method based on target shielding compensation | |
CN109918971B (en) | Method and device for detecting number of people in monitoring video | |
CN112560675B (en) | Bird visual target detection method combining YOLO and rotation-fusion strategy | |
CN112949572A (en) | Slim-YOLOv 3-based mask wearing condition detection method | |
CN109685045A (en) | A kind of Moving Targets Based on Video Streams tracking and system | |
CN113361495A (en) | Face image similarity calculation method, device, equipment and storage medium | |
CN114758288A (en) | Power distribution network engineering safety control detection method and device | |
CN107622280B (en) | Modularized processing mode image saliency detection method based on scene classification | |
CN107818303A (en) | Unmanned plane oil-gas pipeline image automatic comparative analysis method, system and software memory | |
Zohourian et al. | Superpixel-based Road Segmentation for Real-time Systems using CNN. | |
CN108108669A (en) | A kind of facial characteristics analytic method based on notable subregion | |
CN111091101A (en) | High-precision pedestrian detection method, system and device based on one-step method | |
CN110852327A (en) | Image processing method, image processing device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |