CN108256401A - Method and device for obtaining target attribute feature semantics - Google Patents
Method and device for obtaining target attribute feature semantics
- Publication number
- CN108256401A CN108256401A CN201611244945.3A CN201611244945A CN108256401A CN 108256401 A CN108256401 A CN 108256401A CN 201611244945 A CN201611244945 A CN 201611244945A CN 108256401 A CN108256401 A CN 108256401A
- Authority
- CN
- China
- Prior art keywords
- target
- attribute feature
- attribute
- attributes
- video image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the present invention provides a method and device for obtaining target attribute feature semantics. The method includes: obtaining a video image, the video image containing at least one target; obtaining a detection result of the target from the video image; using the detection result of the target, extracting attribute features of the target from the video image to obtain a comprehensive attribute feature set of the target, where the comprehensive attribute feature set contains data of multiple attribute features; and using relationships between multiple attributes, processing the data of the multiple attribute features in the comprehensive attribute feature set to obtain the semantics of the multiple attribute features of the target. Embodiments of the present invention can obtain the semantics of a wider variety of attribute features, satisfy practical application requirements for multiple attribute features, and obtain attribute feature semantics with higher accuracy.
Description
Technical field
The present invention relates to the field of intelligent video surveillance, and in particular to a method and device for obtaining target attribute feature semantics.
Background
Video surveillance is an important research topic in fields such as computer vision, pattern recognition, and artificial intelligence, and has wide application in security monitoring, intelligent transportation, military navigation, and other fields. Current intelligent video surveillance is no longer limited to providing video images for manual monitoring; instead, targets are detected in the acquired video images and their attribute features are obtained. The acquired attribute features of targets can satisfy more demanding requirements for monitoring, searching for, and locating targets.
In the prior art, there is a method that uses a computer to build a semantic scene model from scene images of a moving target; this method obtains the semantics of trajectory features by extracting the trajectory of a target in a video image. In practical applications, however, various attribute features other than the target trajectory are also needed, such as the color and model of a vehicle or the gender and age of a pedestrian, so that the target can be described from angles other than its trajectory and the attribute features can be converted into semantics, meeting higher requirements for monitoring, searching, and locating. The existing method therefore obtains too few types of target attributes, has a narrow application range in the field of intelligent video surveillance, and cannot satisfy practical application requirements for multiple attribute features.
Summary of the invention
The purpose of embodiments of the present invention is to provide a method and device for obtaining target attribute feature semantics, so as to obtain the semantics of multiple attribute features of a target from a video image and satisfy practical application requirements for multiple attribute features. The specific technical solutions are as follows.
An embodiment of the present invention provides a method for obtaining target attribute feature semantics, including:
obtaining a video image, the video image containing at least one target;
obtaining a detection result of the target from the video image;
using the detection result of the target, extracting attribute features of the target from the video image to obtain a comprehensive attribute feature set of the target, where the comprehensive attribute feature set contains data of multiple attribute features;
using relationships between multiple attributes, processing the data of the multiple attribute features in the comprehensive attribute feature set to obtain the semantics of the multiple attribute features of the target.
Optionally, the step of obtaining a video image includes:
obtaining the video image from a video surveillance device, the video surveillance device including at least a camera;
the target including at least one of a motor vehicle, a non-motor vehicle, and a pedestrian, or any combination thereof.
Optionally, the detection result of the target includes at least: the type of the target, the size of the target, and the position of the target in the video image.
Optionally, the step of using the detection result of the target to extract attribute features of the target from the video image and obtain the comprehensive attribute feature set of the target includes:
using the detection result of the target, extracting an image of the target from the video image;
converting the format of the image of the target to obtain image data in a specified format corresponding to the target;
obtaining, from multiple preset general attribute sets, the general attribute set corresponding to the type of the target, where the general attribute set contains multiple attributes;
using the general attribute set corresponding to the type of the target, performing computation on the image data of the target to obtain a general attribute feature set of the target containing multiple attribute features;
computing each attribute feature in the general attribute feature set to obtain data of the multiple attribute features, and integrating the data of the multiple attribute features into the comprehensive attribute feature set of the target.
Optionally, the step of using relationships between multiple attributes to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target includes:
processing the data of each attribute feature in the comprehensive attribute feature set into a one-dimensional array, and obtaining an attribute relation matrix of the target from the one-dimensional arrays corresponding to the data of the multiple attribute features;
multiplying the attribute relation matrix by a preset coefficient matrix to obtain an attribute relation network of the target, where the preset coefficient matrix contains the relationships between the multiple attributes;
performing classification mapping on the attributes in the attribute relation network to obtain classification probabilities of the attribute features;
obtaining classification results of the attribute features according to the classification probabilities of the attribute features;
converting the classification results of the attribute features into semantics to obtain the semantics of the multiple attribute features of the target.
Optionally, after obtaining the semantics of the multiple attribute features of the target, the method further includes:
sending the semantics of the multiple attribute features of the target to an application device.
An embodiment of the present invention further provides a device for obtaining target attribute feature semantics, including:
a video image acquisition module, configured to obtain a video image, the video image containing at least one target;
a target detection module, configured to obtain a detection result of the target from the video image;
an attribute feature extraction module, configured to use the detection result of the target to extract attribute features of the target from the video image and obtain a comprehensive attribute feature set of the target, where the comprehensive attribute feature set contains data of multiple attribute features;
an attribute collaborative judgment module, configured to use relationships between multiple attributes to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target.
Optionally, the video image acquisition module is specifically configured to:
obtain the video image from a video surveillance device, the video surveillance device including at least a camera;
the target including at least one of a motor vehicle, a non-motor vehicle, and a pedestrian, or any combination thereof.
Optionally, the detection result of the target includes at least: the type of the target, the size of the target, and the position of the target in the video image.
Optionally, the attribute feature extraction module includes:
an image extraction submodule, configured to use the detection result of the target to extract the image of the target from the video image;
a format conversion submodule, configured to convert the format of the image of the target to obtain image data in a specified format corresponding to the target;
a general attribute set acquisition submodule, configured to obtain, from multiple preset general attribute sets, the general attribute set corresponding to the type of the target, where the general attribute set contains multiple attributes;
a general attribute feature set acquisition submodule, configured to use the general attribute set corresponding to the type of the target to perform computation on the image data of the target and obtain a general attribute feature set of the target containing multiple attribute features;
a comprehensive attribute feature set acquisition submodule, configured to compute each attribute feature in the general attribute feature set to obtain data of the multiple attribute features, and to integrate the data of the multiple attribute features into the comprehensive attribute feature set of the target.
Optionally, the attribute collaborative judgment module includes:
an attribute relation matrix acquisition submodule, configured to process the data of each attribute feature in the comprehensive attribute feature set into a one-dimensional array and obtain an attribute relation matrix of the target from the one-dimensional arrays corresponding to the data of the multiple attribute features;
an attribute relation network acquisition submodule, configured to multiply the attribute relation matrix by a preset coefficient matrix to obtain an attribute relation network of the target, where the preset coefficient matrix contains the relationships between the multiple attributes;
a classification mapping submodule, configured to perform classification mapping on the attributes in the attribute relation network to obtain classification probabilities of the attribute features;
a classification result acquisition submodule, configured to obtain classification results of the attribute features according to the classification probabilities of the attribute features;
a semantic conversion submodule, configured to convert the classification results of the attribute features into semantics to obtain the semantics of the multiple attribute features of the target.
Optionally, the device further includes:
a sending module, configured to send the semantics of the multiple attribute features of the target to an application device.
In the method for obtaining target attribute feature semantics provided by embodiments of the present invention, a video image containing a target is first obtained; a detection result of the target is then obtained from the video image; the detection result of the target is then used to extract attribute features of the target from the video image and obtain a comprehensive attribute feature set of the target containing data of multiple attribute features; finally, relationships between multiple attributes are used to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target. Embodiments of the present invention can obtain the semantics of a wider variety of attribute features, such as the color and model of a vehicle or the gender and age of a pedestrian, can satisfy practical application requirements for multiple attribute features, and can obtain attribute feature semantics with higher accuracy. Of course, a product or method implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Description of the drawings
In order to describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a method for obtaining target attribute feature semantics according to an embodiment of the present invention;
Fig. 2 is a flowchart of an example based on the method shown in Fig. 1;
Fig. 3 is a structural diagram of a device for obtaining target attribute feature semantics according to an embodiment of the present invention;
Fig. 4 is a structural diagram of an example based on the device shown in Fig. 3.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention disclose a method and device for obtaining target attribute feature semantics, which can obtain the semantics of multiple attribute features of a target from a video image and satisfy practical application requirements for multiple attribute features. The method for obtaining target attribute feature semantics provided by an embodiment of the present invention is introduced first.
At present, the field of intelligent video surveillance requires target detection to be performed on acquired video images so as to obtain the attributes of targets. The acquired target attributes satisfy more demanding requirements for monitoring, searching for, and locating targets, for example tracking a target using its position attribute in the video image. However, the existing method, which extracts the trajectory features of a target and obtains the semantics of those trajectory features, can only provide attribute features that describe the target from the trajectory angle; it obtains too few types of target attributes, has a narrow application range in the field of intelligent video surveillance, and cannot satisfy practical application requirements for multiple attribute features.
An embodiment of the present invention proposes a method for obtaining the semantics of target attribute features, which mainly consists of: extracting an image of a target from a video image, extracting multiple attribute features of the target from the image of the target, and using the relationships between the attributes to process the obtained attribute features and obtain the semantics of the multiple attribute features of the target.
Referring to Fig. 1, Fig. 1 is a flowchart of a method for obtaining target attribute feature semantics according to an embodiment of the present invention. The method includes the following steps.
Step 101: obtaining a video image.
In this embodiment, the video image is obtained by a video surveillance device, where a video surveillance device is a device that can perform video surveillance in a monitored area and provide video images of that area. The video surveillance device may include a video camera, a webcam, a mobile phone, and the like.
The video image contains at least one target. A target is a monitored object in the surveillance scene and can be any object or living being specified as needed, for example various vehicles, pedestrians, animals, buildings, and the like.
Step 102: obtaining a detection result of the target from the video image.
In this embodiment, a target detection method or a target detection model is used to obtain the detection result of the target from the video image. This application does not limit the target detection method or target detection model used; any method that can detect a target in a video image can be applied here. Target detection methods include, for example, methods based on deep learning, pattern recognition, image processing, and similar techniques, such as Boosting methods for improving the accuracy of weak classifiers, deformable part models (DPM), and fast target detection with convolutional neural networks such as Faster R-CNN.
The detection result of the target may include: the type of the target, the size of the target, the position of the target, the motion state of the target, and so on.
The type of the target includes a vehicle type, a pedestrian type, an animal type, a building type, and so on. The size and position of the target describe the region occupied by the target in the video image. The motion state of the target is either stationary or moving.
It should be noted that embodiments of the present invention can detect all targets in a video image at once and obtain the detection results corresponding to all of the targets.
With the detection result of the target, the target can be accurately located in the video image and the image of the target can be obtained.
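The sketch below is a minimal illustration of step 102, assuming a pedestrian target and using OpenCV's stock HOG pedestrian detector as a stand-in for the unspecified detection model (Boosting, DPM, Faster R-CNN, etc.); the field names in the result records and the input file name are illustrative assumptions, not part of the embodiment.

```python
# Minimal sketch: produce detection-result records (type, size, position) for pedestrians.
import cv2
import numpy as np

def detect_pedestrians(frame):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    detections = []
    for (x, y, w, h), score in zip(rects, np.ravel(weights)):
        detections.append({
            "type": "pedestrian",                   # type of the target
            "size": (int(w), int(h)),               # size of the bounding rectangle
            "position": (int(x + w // 2), int(y + h // 2)),  # center point in the image
            "score": float(score),
        })
    return detections

frame = cv2.imread("surveillance_frame.jpg")        # hypothetical input frame
results = detect_pedestrians(frame)
```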
Step 103: using the detection result of the target, extracting attribute features of the target from the video image to obtain a comprehensive attribute feature set of the target, where the comprehensive attribute feature set contains data of multiple attribute features.
In this embodiment, the type, size, and position of the target in the video image are used to locate the target, and the image of the target is extracted separately from the video image. Attribute features of the target are extracted from the image of the target and expressed as data, giving the comprehensive attribute feature set of the target.
For example, for a target of the pedestrian type, the attributes may include gender, age, hairstyle, build, upper-body clothing style, lower-body clothing style, whether a bag is carried, and so on. For a target of the motor vehicle type, the attributes may include vehicle model, color, brand, sub-brand, and so on.
Taking pedestrian A as an example, according to step 103, for the attributes gender, hairstyle, upper-body clothing style, lower-body clothing style, and whether a bag is carried, the data of each attribute feature of pedestrian A can be obtained, where the data of the attribute features may correspond to: male, short hair, red short-sleeved top, white shorts, carrying a red bag.
Step 104: using the relationships between multiple attributes, processing the data of the multiple attribute features in the comprehensive attribute feature set to obtain the semantics of the multiple attribute features of the target.
The preset relationships between attributes are data describing the degree to which each attribute influences the other attributes, obtained by statistically analyzing the real attributes of a large number of images with big-data analysis. The influence of one attribute on the other attributes is represented as a one-dimensional array, and the influences of multiple attributes on the other attributes form a coefficient matrix composed of multiple such arrays; this coefficient matrix represents the relationships between the attributes.
In this embodiment, the preset relationships between attributes are used to perform collaborative judgment on the multiple attributes in the obtained comprehensive attribute feature set: the comprehensive attribute feature set is multiplied by the coefficient matrix, and the influence of each attribute on the other attributes encoded in the coefficient matrix is used to determine a classification probability for each of the multiple attributes in the comprehensive attribute feature set. For each attribute, the classification probability is the probability that the attribute is determined to be each of its possible feature values. According to the classification probabilities of the attributes, a corrected attribute feature is obtained for each attribute, and finally the corrected attribute features are semantically converted to obtain the semantics of the multiple attribute features of the target. Multiplying the comprehensive attribute feature set by the coefficient matrix is precisely the process of correcting the multiple attribute features.
Taking pedestrian A as an example, the data of the attribute features from step 103 correspond to: male, short hair, red short-sleeved top, white shorts, carrying a red bag. In step 104, the relationships between the attributes, for example the relationships between "gender", "upper-body clothing style", "lower-body clothing style", and "whether a bag is carried", are used. It is determined that the attributes "wearing a red short-sleeved top, white shorts, carrying a red bag" contribute more to the attribute "gender" being "female" than to it being "male". After each attribute is corrected through the relation matrix, the probability that the attribute feature of "gender" is "female" is greater than the probability that it is "male", so the attribute-feature data "male" previously obtained for pedestrian A is corrected to "female". The data of the attribute features are finally converted into textual semantics, i.e. pedestrian A is: female, short hair, wearing a red short-sleeved top and white shorts, carrying a red bag.
Through steps 101 to 104, embodiments of the present invention convert a video image into textually described attribute features of a target; the number of attribute types obtained increases, and the collaborative judgment over multiple attributes improves the accuracy of the attribute-feature judgments.
It can be seen that, in the method for obtaining target attribute feature semantics provided by embodiments of the present invention, a video image containing a target is first obtained; a detection result of the target is then obtained from the video image; the detection result of the target is then used to extract attribute features of the target from the video image and obtain a comprehensive attribute feature set of the target containing data of multiple attribute features; finally, the relationships between multiple attributes are used to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target. Embodiments of the present invention can obtain the semantics of a wider variety of attribute features, such as the color, model, brand, and sub-brand of a vehicle, or the gender, age, hairstyle, build, upper-body clothing style, lower-body clothing style, whether something is carried, whether a backpack is worn, whether glasses are worn, whether a mask is worn, whether a hat is worn, and whether a mobile phone is being used for a pedestrian; they can satisfy practical application requirements for multiple attribute features, and the semantics of the obtained attribute features are more accurate.
As an implementation of the method shown in Fig. 1, refer to Fig. 2, which is a flowchart of an example based on the method shown in Fig. 1. It includes the following steps.
Step 201: obtaining a video image from a video surveillance device.
In this embodiment, the video surveillance device includes at least a camera.
The video image contains at least one target, and the target includes at least one of a motor vehicle, a non-motor vehicle, and a pedestrian, or any combination thereof.
Step 202: obtaining a detection result of the target from the video image.
In this embodiment, the target in the video image is detected using the prior art, which is not repeated here.
The detection result of the target includes at least: the type of the target, the size of the target, and the position of the target in the video image.
The type of the target includes at least a motor vehicle type, a non-motor vehicle type, and a pedestrian type.
The size of the target may be a bounding frame of a preset shape enclosing the target, such as a rectangular frame or a circular frame.
The position of the target in the video image may be the coordinates of the center point of the target in the video image, or the coordinates in the video image of preset edge points of the bounding frame of the preset shape enclosing the target; for a rectangular frame, the preset edge points may for example be its four corner points.
Step 2031: using the detection result of the target, extracting the image of the target from the video image.
In this embodiment, the target is located in the video image according to the type, size, and position of the target in the video image, and the image of the target is extracted.
Extracting the image of the target from the video image uses the prior art, which is not repeated here.
Step 2032: converting the format of the image of the target to obtain image data in a specified format corresponding to the target.
In this embodiment, the format of the image of the target is an RGB (red-green-blue) format or a YUV format, a color-coding method expressed by luminance and chrominance. The image of the target is converted into the data format required for attribute feature extraction, giving the image data of the target. The format conversion further includes various image enhancement operations performed on the image of the target according to the requirements of that data format, such as changing the brightness or adding noise.
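The following is a minimal sketch of step 2032, assuming the downstream extractor wants a fixed-size float RGB array; the target size, scaling, and augmentation parameters are illustrative assumptions, not values from the embodiment.

```python
# Minimal sketch: format conversion plus optional brightness/noise augmentation.
import cv2
import numpy as np

def to_specified_format(target_bgr, size=(224, 224), brightness_shift=0, noise_std=0.0):
    img = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2RGB)        # BGR -> RGB
    img = cv2.resize(img, size)                               # fixed input size
    img = img.astype(np.float32) / 255.0                      # scale to [0, 1]
    img = np.clip(img + brightness_shift / 255.0, 0.0, 1.0)   # optional brightness change
    if noise_std > 0:
        img = np.clip(img + np.random.normal(0, noise_std, img.shape), 0.0, 1.0)
    return img
```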
Step 2033: obtaining, from multiple preset general attribute sets, the general attribute set corresponding to the type of the target, where the general attribute set contains multiple attributes.
In this embodiment, a corresponding general attribute set is preset for each type of target. The general attribute set contains all of the attributes of targets of that type.
For example, the attributes of a motor vehicle contained in the general attribute set of a target of the motor vehicle type may include: vehicle model, color, brand, sub-brand, and so on. The attributes of a pedestrian contained in the general attribute set of a target of the pedestrian type may include: gender, age, hairstyle, build, upper-body clothing style, lower-body clothing style, whether something is carried, whether a backpack is worn, whether glasses are worn, whether a mask is worn, whether a hat is worn, and whether a mobile phone is being used.
In this embodiment, the type of the target is obtained from the image data of the target, and the general attribute set corresponding to that type is obtained from the multiple preset general attribute sets.
The attributes of each type of target are different; embodiments of the present invention can obtain, for many types of target, the general attribute set corresponding to the target's type, and thus obtain the attribute features of the target accurately and in detail.
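A minimal sketch of step 2033, assuming the general attribute sets are kept as a simple lookup table keyed by target type, is given below; the attribute names are taken from the examples above, while the table structure itself is an assumption.

```python
# Minimal sketch: look up the general attribute set by target type.
GENERAL_ATTRIBUTE_SETS = {
    "motor_vehicle": ["model", "color", "brand", "sub_brand"],
    "pedestrian": ["gender", "age", "hairstyle", "build",
                   "upper_style", "lower_style", "carrying", "backpack",
                   "glasses", "mask", "hat", "phoning"],
}

def general_attribute_set_for(target_type):
    return GENERAL_ATTRIBUTE_SETS[target_type]
```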
In the following steps, the pedestrian type is used as an example of the type of the target; the method for obtaining target attribute feature semantics for other types of targets is similar to that for the pedestrian type.
Step 2034: using the general attribute set corresponding to the type of the target, performing computation on the image data of the target to obtain a general attribute feature set of the target containing multiple attribute features.
In this embodiment, for each attribute in the general attribute set corresponding to the type of the target, deep learning techniques are used to compute on the image data of the target and obtain the general attribute feature set of the target. The general attribute feature set contains the attribute features extracted for each attribute in the general attribute set, and the attribute features are expressed as data.
Taking pedestrian B as an example, the general attribute set corresponding to the pedestrian type is: gender, age, hairstyle, upper-body clothing style, lower-body clothing style, whether a backpack is worn, whether glasses are worn, and whether a mask is worn. The image data of pedestrian B is computed and the features of each attribute are extracted; the resulting general attribute feature set of pedestrian B may include: the feature of being male, the feature of being young, the feature of long hair, the feature of a short-sleeved top, the feature of a skirt, the feature of no backpack, the feature of wearing glasses, and the feature of not wearing a mask. Each of these attribute features may itself include multiple more detailed attribute features; for example, the feature of long hair may include long straight hair, long wavy hair, etc., and the feature of wearing glasses may include ordinary glasses, sunglasses, etc.
It should be noted that the general attribute set is applicable to all targets of its type. The obtained general attribute feature set is the basis on which the attributes in the general attribute set are judged; it covers the primary attribute features of each attribute and provides a basic description of the attribute features of the target. The general attribute feature set covers as many attributes as possible and is a set with strong descriptive power and inclusiveness. Therefore, in step 2035 only a small amount of computation is needed when each attribute feature is further extracted. By using a general attribute set, embodiments of the present invention can easily extend the attribute types as the number of attribute types keeps growing, and reduce the growth in the time and space overhead of extracting target attribute features; this improves the scalability of the method, so the method of the embodiments of the present invention is well suited to hardware platforms with limited resources.
Step 2035: computing each attribute feature in the general attribute feature set to obtain data of the multiple attribute features, and integrating the data of the multiple attribute features into the comprehensive attribute feature set of the target.
In this embodiment, each attribute feature in the general attribute feature set is computed separately, where the computation includes the convolution, sampling, cropping, and normalization operations of convolutional neural networks.
Taking pedestrian B as an example, on the basis of the general attribute feature set (the feature of being male, the feature of being young, the feature of long hair, the feature of a short-sleeved top, the feature of a skirt, the feature of no backpack, the feature of wearing glasses, and the feature of not wearing a mask), a series of computations is performed on the video image of pedestrian B to obtain data of multiple attribute features; after the data of the multiple attribute features are further integrated, the comprehensive attribute feature set describing the specific object features of pedestrian B is obtained. This comprehensive feature set can be used to describe: male, young, long hair, black short-sleeved top, white skirt, no backpack, wearing sunglasses, not wearing a mask.
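A minimal sketch of step 2035 follows, assuming each attribute is reduced by its own head, from a shared feature of the target image, to a vector of raw scores that is then collected into the comprehensive attribute feature set; the random weights stand in for the real networks, and the attribute/value names are illustrative.

```python
# Minimal sketch: per-attribute heads over a shared feature, integrated into one set.
import numpy as np

ATTRIBUTE_VALUES = {
    "gender": ["male", "female"],
    "hairstyle": ["short", "long_straight", "long_wavy"],
    "glasses": ["none", "ordinary", "sunglasses"],
}

rng = np.random.default_rng(0)

def attribute_head(shared_feature, n_values):
    # Stand-in for a per-attribute CNN head (convolution, sampling, cropping, normalization).
    weights = rng.normal(size=(shared_feature.size, n_values))
    return shared_feature @ weights

shared_feature = rng.normal(size=256)   # hypothetical backbone feature of the target image
comprehensive_feature_set = {
    name: attribute_head(shared_feature, len(values))
    for name, values in ATTRIBUTE_VALUES.items()
}
```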
Step 2041: processing the data of each attribute feature in the comprehensive attribute feature set into a one-dimensional array, and obtaining the attribute relation matrix of the target from the one-dimensional arrays corresponding to the data of the multiple attribute features.
Taking pedestrian B as an example, the data of each attribute feature in the comprehensive attribute feature set of pedestrian B is processed into a one-dimensional array, giving eight one-dimensional arrays, one per attribute feature. The eight one-dimensional arrays are used, in the order of the attributes in the comprehensive attribute feature set, as the rows of a matrix, giving the two-dimensional attribute relation matrix of pedestrian B.
Embodiments of the present invention convert the comprehensive attribute feature set into the attribute relation matrix for the subsequent matrix computation.
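A minimal sketch of step 2041 is shown below, assuming the per-attribute arrays are padded with zeros to a common length so they can be stacked as rows of one matrix; the zero-padding is an assumption, since the embodiment does not state how arrays of unequal length are handled.

```python
# Minimal sketch: stack per-attribute one-dimensional arrays into the attribute relation matrix.
import numpy as np

def build_relation_matrix(comprehensive_feature_set, attribute_order):
    arrays = [np.ravel(comprehensive_feature_set[name]) for name in attribute_order]
    width = max(a.size for a in arrays)
    rows = [np.pad(a, (0, width - a.size)) for a in arrays]   # zero-pad shorter rows
    return np.vstack(rows)                                    # one row per attribute
```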
Step 2042: multiplying the attribute relation matrix by a preset coefficient matrix to obtain the attribute relation network of the target, where the preset coefficient matrix contains the relationships between multiple attributes.
In this embodiment, the coefficient matrix is a two-dimensional matrix, obtained by big-data analysis, that describes the intrinsic relationships between attributes; the elements of the coefficient matrix are values describing the degree of relationship between attributes. The coefficient matrix therefore contains the relationships between multiple attributes.
The attribute relation matrix is multiplied by the preset coefficient matrix to obtain the attribute relation network of the target, which is also a two-dimensional matrix. Each row in the attribute relation network is an attribute feature.
Step 2043: performing classification mapping on the attributes in the attribute relation network to obtain classification probabilities of the attribute features.
In this embodiment, the classification mapping may use a variety of methods, such as the softmax regression model. The classification mapping of the attributes is performed on the data of the attribute features in the attribute relation network, giving the classification probabilities of the attribute features.
Taking pedestrian B as an example, after steps 2041, 2042, and 2043, the degree to which the attribute features "young, long hair, black short-sleeved top, white skirt, no backpack, wearing sunglasses, not wearing a mask" influence the attribute "gender" results in the attribute feature "gender" of pedestrian B being classified as "female" with probability 0.8 and as "male" with probability 0.2.
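A minimal sketch of step 2043 follows, applying a row-wise softmax to the attribute relation network so that each attribute's row becomes a probability distribution over its possible values; the input scores are chosen only to reproduce the 0.8 / 0.2 "gender" example above.

```python
# Minimal sketch: softmax classification mapping over rows of the attribute relation network.
import numpy as np

def softmax(row):
    e = np.exp(row - row.max())
    return e / e.sum()

relation_net = np.array([[1.6, 0.2]])               # hypothetical corrected "gender" scores
class_probabilities = np.apply_along_axis(softmax, 1, relation_net)
# class_probabilities[0] is roughly [0.80, 0.20] -> ("female", "male")
```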
Step 2044: obtaining classification results of the attribute features according to their classification probabilities.
Taking pedestrian B as an example, in the classification probabilities of the attribute "gender" of pedestrian B, the classification probability of "female" is greater than that of "male", so the classification result of the attribute "gender" of pedestrian B is "female"; the attribute feature "male" obtained in step 2035 is thus corrected to "female".
The process of obtaining the classification results of the other attribute features is similar. For example, the attribute features of the attribute "age" may include child, teenager, young, middle-aged, elderly, etc.; according to the relationships between the multiple attributes, the classification probability of each attribute feature of "age" is obtained, and the attribute feature of the attribute "age" is determined by the maximum classification probability.
Through steps 2041 to 2044, the attribute features in the comprehensive attribute feature set can be collaboratively judged and corrected using the relationships between the attributes, thereby obtaining attribute features with higher accuracy. Embodiments of the present invention use a collaborative judgment method that applies the mutual relationships between attributes to all of the attributes, allowing the attributes to correct one another, improving the accuracy of the attribute judgments, and solving the problem of contradictory results produced when attributes are judged individually.
Step 2045: converting the classification results of the attribute features into semantics to obtain the semantics of the multiple attribute features of the target.
Taking pedestrian B as an example, the attribute features of pedestrian B described with data (female, young, long hair, black short-sleeved top, white skirt, no backpack, wearing sunglasses, not wearing a mask) are converted according to a preset correspondence between attribute features and semantics, giving the semantics of the multiple attribute features of pedestrian B. The finally obtained textual attribute features of pedestrian B are: female, young, long hair, black short-sleeved top, white skirt, no backpack, wearing sunglasses, not wearing a mask.
The preset correspondence between attribute features and semantics may be a correspondence between data and words, where the data include numerical values, characters, and so on. For example, the value "0" may correspond to the attribute feature "female", or the character string "woman" may correspond to the attribute feature "female"; when the value "0" or the character string "woman" appears in the data-described attribute features, semantic conversion turns it into the attribute feature "female".
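A minimal sketch of step 2045 follows, assuming the preset correspondence between attribute features and semantics is a nested lookup table; the codes and words are illustrative, not prescribed by the embodiment.

```python
# Minimal sketch: map classification results (codes or strings) to textual semantics.
SEMANTIC_MAP = {
    "gender": {0: "female", 1: "male", "woman": "female", "man": "male"},
    "glasses": {0: "no glasses", 1: "glasses", 2: "sunglasses"},
}

def to_semantics(classification_results):
    return {name: SEMANTIC_MAP[name][code] for name, code in classification_results.items()}

print(to_semantics({"gender": 0, "glasses": 2}))   # -> {'gender': 'female', 'glasses': 'sunglasses'}
```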
Through steps 201 to 2045, embodiments of the present invention obtain the semantics of multiple attribute features of a target from the video image containing the target.
The method for obtaining target attribute feature semantics of the embodiments of the present invention may further include step 205: sending the semantics of the multiple attribute features of the target to an application device.
In this embodiment, the application device includes a computer or the like; the application device can perform a variety of operations according to the semantics of the multiple attribute features of the obtained target, such as displaying the attribute information of the target or performing a target search with the attribute features as search conditions. The semantics of the attribute features obtained by embodiments of the present invention convert an image into a textual description of attributes, which can provide information resources for video structuring, thereby providing good technical support for searching images by image and satisfying the application demands of more intelligent video surveillance.
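The sketch below is one illustrative way step 205 might be realized, assuming the semantics are packaged as JSON and pushed to an application device over HTTP; the URL, endpoint, and payload layout are hypothetical and not part of the embodiment.

```python
# Minimal sketch: send the semantics of a target's attribute features to an application device.
import json
import urllib.request

def send_to_application_device(target_id, semantics, url="http://app-device.example/targets"):
    payload = json.dumps({"target_id": target_id, "attributes": semantics}).encode("utf-8")
    request = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:   # hypothetical endpoint
        return response.status
```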
It can be seen that, in the method for obtaining target attribute feature semantics provided by embodiments of the present invention, a video image containing a target is first obtained from a video surveillance device; a detection result of the target is then obtained from the video image; the detection result of the target is then used to extract the image of the target from the video image; for the data of the image of the target, the general attribute set corresponding to the type of the target is used to compute on the image data of the target and obtain the general attribute feature set of the target, and the comprehensive attribute feature set of the target is obtained on the basis of the general attribute feature set; finally, the relationships between the attributes are used to perform collaborative attribute judgment on the comprehensive attribute feature set, and the judgment results are processed to obtain the semantics of the multiple attribute features of the target. Embodiments of the present invention can obtain the semantics of a wider variety of attribute features, such as the color and model of a vehicle or the gender and age of a pedestrian, can satisfy practical application requirements for multiple attribute features, and can obtain attribute feature semantics with higher accuracy.
Referring to Fig. 3, Fig. 3 is a structural diagram of a device for obtaining target attribute feature semantics according to an embodiment of the present invention. The device includes:
a video image acquisition module 301, configured to obtain a video image, the video image containing at least one target;
a target detection module 302, configured to obtain a detection result of the target from the video image;
an attribute feature extraction module 303, configured to use the detection result of the target to extract attribute features of the target from the video image and obtain a comprehensive attribute feature set of the target, where the comprehensive attribute feature set contains data of multiple attribute features;
an attribute collaborative judgment module 304, configured to use relationships between multiple attributes to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target.
It can be seen that, in the device for obtaining target attribute feature semantics provided by embodiments of the present invention, a video image containing a target is first obtained; a detection result of the target is then obtained from the video image; the detection result of the target is then used to extract attribute features of the target from the video image and obtain a comprehensive attribute feature set of the target containing data of multiple attribute features; finally, the relationships between multiple attributes are used to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target. Embodiments of the present invention can obtain the semantics of a wider variety of attribute features, such as the color and model of a vehicle or the gender and age of a pedestrian, can satisfy practical application requirements for multiple attribute features, and can obtain attribute feature semantics with higher accuracy.
It should be noted that the device of the embodiments of the present invention is a device that applies the above method for obtaining target attribute feature semantics; all of the embodiments of the above method for obtaining target attribute feature semantics are therefore applicable to the device and achieve the same or similar beneficial effects.
As an implementation of the device shown in Fig. 3, refer to Fig. 4, which is a structural diagram of an example based on the device shown in Fig. 3. It includes:
a video image acquisition module 401, specifically configured to obtain a video image from a video surveillance device, the video surveillance device including at least a camera; the video image contains at least one target, and the target includes at least one of a motor vehicle, a non-motor vehicle, and a pedestrian, or any combination thereof;
a target detection module 402, configured to obtain a detection result of the target from the video image, where the detection result of the target includes at least the type of the target, the size of the target, and the position of the target in the video image;
an attribute feature extraction module 403, including:
an image extraction submodule 4031, configured to use the detection result of the target to extract the image of the target from the video image;
a format conversion submodule 4032, configured to convert the format of the image of the target to obtain image data in a specified format corresponding to the target;
a general attribute set acquisition submodule 4033, configured to obtain, from multiple preset general attribute sets, the general attribute set corresponding to the type of the target, where the general attribute set contains multiple attributes;
a general attribute feature set acquisition submodule 4034, configured to use the general attribute set corresponding to the type of the target to perform computation on the image data of the target and obtain a general attribute feature set of the target containing multiple attribute features;
a comprehensive attribute feature set acquisition submodule 4035, configured to compute each attribute feature in the general attribute feature set to obtain data of the multiple attribute features, and to integrate the data of the multiple attribute features into the comprehensive attribute feature set of the target;
an attribute collaborative judgment module 404, including:
an attribute relation matrix acquisition submodule 4041, configured to process the data of each attribute feature in the comprehensive attribute feature set into a one-dimensional array and obtain an attribute relation matrix of the target from the one-dimensional arrays corresponding to the data of the multiple attribute features;
an attribute relation network acquisition submodule 4042, configured to multiply the attribute relation matrix by a preset coefficient matrix to obtain an attribute relation network of the target, where the preset coefficient matrix contains the relationships between the multiple attributes;
a classification mapping submodule 4043, configured to perform classification mapping on the attributes in the attribute relation network to obtain classification probabilities of the attribute features;
a classification result acquisition submodule 4044, configured to obtain classification results of the attribute features according to the classification probabilities of the attribute features;
a semantic conversion submodule 4045, configured to convert the classification results of the attribute features into semantics to obtain the semantics of the multiple attribute features of the target.
The device of the embodiments of the present invention further includes:
a sending module 405, configured to send the semantics of the multiple attribute features of the target to an application device.
It can be seen that, in the device for obtaining target attribute feature semantics provided by embodiments of the present invention, a video image containing a target is first obtained from a video surveillance device; the target in the video image is then detected to obtain a detection result of the target; the detection result of the target is then used to extract the image of the target from the video image; for the data of the image of the target, the general attribute set corresponding to the type of the target is used to compute on the image data of the target and obtain the general attribute feature set of the target, and the comprehensive attribute feature set of the target is obtained on the basis of the general attribute feature set; finally, the relationships between the attributes are used to perform collaborative judgment between the attributes in the comprehensive attribute feature set, and the judgment results are processed to obtain the semantics of the multiple attribute features of the target. Embodiments of the present invention can obtain the semantics of a wider variety of attribute features, such as the color and model of a vehicle or the gender and age of a pedestrian, can satisfy practical application requirements for multiple attribute features, and can obtain attribute feature semantics with higher accuracy.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively simple; for relevant parts, refer to the description of the method embodiment.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (12)
1. A method for obtaining target attribute feature semantics, applied to the field of intelligent video surveillance, characterized by comprising:
obtaining a video image, the video image containing at least one target;
obtaining a detection result of the target from the video image;
using the detection result of the target, extracting attribute features of the target from the video image to obtain a comprehensive attribute feature set of the target, wherein the comprehensive attribute feature set contains data of multiple attribute features;
using relationships between multiple attributes, processing the data of the multiple attribute features in the comprehensive attribute feature set to obtain the semantics of the multiple attribute features of the target.
2. The method according to claim 1, characterized in that the step of obtaining a video image comprises:
obtaining the video image from a video surveillance device, the video surveillance device including at least a camera;
the target comprising at least one of a motor vehicle, a non-motor vehicle, and a pedestrian, or any combination thereof.
3. The method according to claim 1, characterized in that the detection result of the target includes at least: the type of the target, the size of the target, and the position of the target in the video image.
4. The method according to claim 3, characterized in that the step of using the detection result of the target to extract attribute features of the target from the video image and obtain the comprehensive attribute feature set of the target comprises:
using the detection result of the target, extracting an image of the target from the video image;
converting the format of the image of the target to obtain image data in a specified format corresponding to the target;
obtaining, from multiple preset general attribute sets, the general attribute set corresponding to the type of the target, wherein the general attribute set contains multiple attributes;
using the general attribute set corresponding to the type of the target, performing computation on the image data of the target to obtain a general attribute feature set of the target containing multiple attribute features;
computing each attribute feature in the general attribute feature set to obtain data of the multiple attribute features, and integrating the data of the multiple attribute features into the comprehensive attribute feature set of the target.
5. The method according to claim 1, characterized in that the step of using relationships between multiple attributes to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target comprises:
processing the data of each attribute feature in the comprehensive attribute feature set into a one-dimensional array, and obtaining an attribute relation matrix of the target from the one-dimensional arrays corresponding to the data of the multiple attribute features;
multiplying the attribute relation matrix by a preset coefficient matrix to obtain an attribute relation network of the target, wherein the preset coefficient matrix contains the relationships between the multiple attributes;
performing classification mapping on the attributes in the attribute relation network to obtain classification probabilities of the attribute features;
obtaining classification results of the attribute features according to the classification probabilities of the attribute features;
converting the classification results of the attribute features into semantics to obtain the semantics of the multiple attribute features of the target.
6. The method according to claim 1, characterized in that, after obtaining the semantics of the multiple attribute features of the target, the method further comprises:
sending the semantics of the multiple attribute features of the target to an application device.
7. A device for obtaining target attribute feature semantics, applied to the field of intelligent video surveillance, characterized by comprising:
a video image acquisition module, configured to obtain a video image, the video image containing at least one target;
a target detection module, configured to obtain a detection result of the target from the video image;
an attribute feature extraction module, configured to use the detection result of the target to extract attribute features of the target from the video image and obtain a comprehensive attribute feature set of the target, wherein the comprehensive attribute feature set contains data of multiple attribute features;
an attribute collaborative judgment module, configured to use relationships between multiple attributes to process the data of the multiple attribute features in the comprehensive attribute feature set and obtain the semantics of the multiple attribute features of the target.
8. The device according to claim 7, characterized in that the video image acquisition module is specifically configured to:
obtain the video image from a video surveillance device, the video surveillance device including at least a camera;
the target comprising at least one of a motor vehicle, a non-motor vehicle, and a pedestrian, or any combination thereof.
9. The device according to claim 7, characterized in that the detection result of the target includes at least: the type of the target, the size of the target, and the position of the target in the video image.
10. The device according to claim 9, characterized in that the attribute feature extraction module comprises:
an image extraction submodule, configured to extract an image of the target from the video image by using the detection result of the target;
a format conversion submodule, configured to convert the format of the image of the target to obtain image data in a specified format corresponding to the target;
a general attribute set acquisition submodule, configured to obtain, from preset multiple general attribute sets, the general attribute set corresponding to the type of the target, wherein the general attribute set contains multiple attributes;
a general attribute feature set acquisition submodule, configured to calculate the image data of the target by using the general attribute set corresponding to the type of the target, and to obtain a general attribute feature set containing multiple attribute features of the target;
an attribute comprehensive feature set acquisition submodule, configured to calculate each attribute feature in the general attribute feature set separately, obtain the data of multiple attribute features, and integrate the data of the multiple attribute features into the attribute comprehensive feature set of the target.
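To make the division of labour among the submodules of claim 10 concrete, here is a minimal Python sketch that assumes OpenCV is available for the format conversion step; the attribute sets, the keys of the detection dictionary, and the per-attribute feature models are hypothetical placeholders rather than anything specified by the patent.

```python
import cv2
import numpy as np

# Hypothetical "preset multiple general attribute sets", keyed by target type
GENERAL_ATTRIBUTE_SETS = {
    "pedestrian": ["gender", "upper_body_color", "carrying_bag"],
    "motor_vehicle": ["vehicle_type", "body_color", "plate_region"],
}

def extract_attribute_features(frame, detection, attribute_models, size=(224, 224)):
    """Sketch of the extraction submodules in claim 10 (names and sizes are assumptions)."""
    # Image extraction submodule: crop the target using its detected position and size
    x, y, w, h = detection["box"]
    target_image = frame[y:y + h, x:x + w]
    # Format conversion submodule: resize and normalise into the specified image-data format
    image_data = cv2.resize(target_image, size).astype(np.float32) / 255.0
    # General attribute set acquisition submodule: pick the attribute set for this target type
    attributes = GENERAL_ATTRIBUTE_SETS[detection["type"]]
    # General attribute feature set -> attribute comprehensive feature set:
    # compute one feature entry per attribute and integrate them into a single collection
    return {attribute: attribute_models[attribute](image_data) for attribute in attributes}
```

The returned dictionary plays the role of the attribute comprehensive feature set that the attribute collaborative judgment module of claim 11 then processes.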
11. The device according to claim 7, characterized in that the attribute collaborative judgment module comprises:
an attribute relation matrix acquisition submodule, configured to process the data of each attribute feature in the attribute comprehensive feature set into a one-dimensional array, and to obtain the attribute relation matrix of the target from the one-dimensional arrays corresponding to the data of the multiple attribute features;
an attribute relation network acquisition submodule, configured to multiply the attribute relation matrix by the preset coefficient matrix to obtain the attribute relation network of the target, wherein the preset coefficient matrix contains the relationships between the multiple attributes;
a classification mapping submodule, configured to perform classification mapping on the attributes in the attribute relation network to obtain the classification probabilities of the attribute features;
a classification result acquisition submodule, configured to obtain the classification results of the attribute features according to the classification probabilities of the attribute features;
a semantics conversion submodule, configured to convert the classification results of the attribute features into semantics and to obtain the semantics of the multiple attribute features of the target.
12. The device according to claim 7, characterized in that the device further comprises:
a sending module, configured to send the semantics of the multiple attribute features of the target to an application device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611244945.3A CN108256401B (en) | 2016-12-29 | 2016-12-29 | Method and device for obtaining target attribute feature semantics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108256401A (en) | 2018-07-06 |
CN108256401B (en) | 2021-03-26 |
Family
ID=62719911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611244945.3A Active CN108256401B (en) | 2016-12-29 | 2016-12-29 | Method and device for obtaining target attribute feature semantics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108256401B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1936885A (en) * | 2005-09-21 | 2007-03-28 | 富士通株式会社 | Natural language component identifying correcting apparatus and method based on morpheme marking |
CN103020624A (en) * | 2011-09-23 | 2013-04-03 | 杭州海康威视系统技术有限公司 | Intelligent marking, searching and replaying method and device for surveillance videos of shared lanes |
US20130330008A1 (en) * | 2011-09-24 | 2013-12-12 | Lotfi A. Zadeh | Methods and Systems for Applications for Z-numbers |
CN103593335A (en) * | 2013-09-05 | 2014-02-19 | 姜赢 | Chinese semantic proofreading method based on ontology consistency verification and reasoning |
US20160239711A1 (en) * | 2013-10-18 | 2016-08-18 | Vision Semanatics Limited | Visual Data Mining |
CN103810266A (en) * | 2014-01-27 | 2014-05-21 | 中国电子科技集团公司第十研究所 | Semantic network object identification and judgment method |
CN104378539A (en) * | 2014-11-28 | 2015-02-25 | 华中科技大学 | Scene-adaptive video structuring semantic extraction camera and method thereof |
CN104992142A (en) * | 2015-06-03 | 2015-10-21 | 江苏大学 | Pedestrian recognition method based on combination of depth learning and property learning |
CN105979210A (en) * | 2016-06-06 | 2016-09-28 | 深圳市深网视界科技有限公司 | Pedestrian identification system based on multi-ball multi-gun camera array |
Non-Patent Citations (4)
Title |
---|
CARINA SILBERER et al., "Models of Semantic Representation with Visual Attributes", Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics * |
LIN LIN et al., "Correlation-Based Video Semantic Concept Detection Using Multiple Correspondence Analysis", 2008 Tenth IEEE International Symposium on Multimedia * |
刘明霞, "Research on Several Important Problems in Attribute Learning and Their Applications", China Doctoral Dissertations Full-text Database, Information Science and Technology Series * |
石跃祥, "Research on Description Methods of Semantic Models for Computer Vision Images", China Doctoral Dissertations Full-text Database, Information Science and Technology Series * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111976593A (en) * | 2020-08-21 | 2020-11-24 | 大众问问(北京)信息科技有限公司 | Voice prompt method, device, equipment and storage medium for vehicle external object |
CN115099684A (en) * | 2022-07-18 | 2022-09-23 | 江西中科冠物联网科技有限公司 | Enterprise safety production management system and management method thereof |
CN115099684B (en) * | 2022-07-18 | 2023-04-07 | 江西中科冠物联网科技有限公司 | Enterprise safety production management system and management method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN108256401B (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108520226B | | Pedestrian re-identification method based on body decomposition and saliency detection |
CN108830285B | | Reinforcement learning target detection method based on Fast-RCNN |
CN107016357B | | Video pedestrian detection method based on time-domain convolutional neural network |
CN105938564B | | Rice disease identification method and system based on principal component analysis and neural network |
CN108288033B | | Safety helmet detection method based on random ferns fusing multiple features |
CN105335716B | | Pedestrian detection method based on improved UDN with joint feature extraction |
CN110363134B | | Face occlusion region localization method based on semantic segmentation |
CN111611874B | | Face mask wearing detection method based on ResNet and Canny |
CN108803617A | | Trajectory prediction method and device |
CN108319957A | | Large-scale point cloud semantic segmentation method based on super graphs |
CN104573685B | | Natural scene text detection method based on linear structure extraction |
CN111008583B | | Pedestrian and rider pose estimation method assisted by limb features |
CN109543632A | | Deep network pedestrian detection method guided by shallow-layer feature fusion |
CN103914699A | | Automatic lip gloss image enhancement method based on color space |
CN105894503A | | Method for restoring Kinect plant color and depth detection images |
CN112489143A | | Color identification method, device, equipment and storage medium |
CN110188835A | | Pedestrian re-identification method with data augmentation based on a generative adversarial network model |
CN116051953A | | Small target detection method based on selectable convolution kernel network and weighted bidirectional feature pyramid |
CN109271932A | | Pedestrian re-identification method based on color matching |
CN112069985A | | High-resolution field image rice ear detection and counting method based on deep learning |
CN108256462A | | People counting method in market surveillance video |
Mammeri et al. | | Design of traffic sign detection, recognition, and transmission systems for smart vehicles |
CN113255514B | | Behavior recognition method based on a local scene perception graph convolutional network |
CN109117717A | | Urban pedestrian detection method |
CN110298893A | | Method and device for generating a pedestrian clothing color recognition model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |