CN110502986A - Method, apparatus, computer device and storage medium for identifying person positions in an image - Google Patents

Method, apparatus, computer device and storage medium for identifying person positions in an image

Info

Publication number
CN110502986A
CN110502986A (application CN201910628940.8A)
Authority
CN
China
Prior art keywords
image
identified
human body
video
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910628940.8A
Other languages
Chinese (zh)
Inventor
石磊 (Shi Lei)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910628940.8A
Publication of CN110502986A
Priority to PCT/CN2020/093608 (WO2021008252A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Abstract

The present application relates to a neural-network-based method, apparatus, computer device and storage medium for identifying person positions in an image. The method includes: obtaining a surveillance video file to be recognized, and preprocessing the file to obtain video images to be recognized; determining the image type of each video image to be recognized; when the image type is color image, identifying the human-body key points in the video image through a human pose model obtained by training, and determining the person position information in the video image based on the identified key points; and when the image type is night-vision image, identifying the person position information in the video image through a lightweight object detection model obtained by training. The method improves working efficiency.

Description

Method, apparatus, computer device and storage medium for identifying person positions in an image
Technical field
The present application relates to the field of computer technology, and in particular to a method, apparatus, computer device and storage medium for identifying person positions in an image.
Background art
With the needs of the social economy and of production safety, video surveillance equipment has been deployed ever more widely in fields such as safe cities, smart transportation and security engineering, and in recent years video surveillance has been developing toward high definition, networking and intelligence. However, with the wide application of surveillance video, the volume of video data produced by the huge number of cameras keeps growing. To check on a target, this massive video data must be queried, and existing query methods depend on manual viewing and retrieval, so the degree of automation in video content monitoring is low and retrieval efficiency is poor.
Summary of the invention
On this basis, it is necessary, in view of the above technical problems, to provide a method, apparatus, computer device and storage medium for identifying person positions in an image that can improve efficiency.
A method for identifying person positions in an image, the method comprising:
obtaining a surveillance video file to be recognized, and preprocessing the surveillance video file to obtain a video image to be recognized;
determining the image type of the video image to be recognized;
when the image type is color image, identifying the human-body key points in the video image to be recognized through a human pose model obtained by training, and determining the person position information in the video image to be recognized based on the identified key points;
when the image type is night-vision image, identifying the person position information in the video image to be recognized through a lightweight object detection model obtained by training.
In one of the embodiments, the step of determining the image type of the video image to be recognized comprises:
obtaining the three-channel pixel value of each pixel in the video image to be recognized;
performing difference calculations based on the three-channel pixel value, and selecting the largest difference as the pixel difference value;
determining the image type of the video image to be recognized according to a preset value and the pixel difference value.
In one of the embodiments, determining the image type of the video image to be recognized comprises:
obtaining the acquisition-mode adjustment time of the monitoring device corresponding to the surveillance video file to be recognized, and obtaining the shooting time corresponding to the video image to be recognized;
determining the image type of the video image to be recognized according to the acquisition-mode adjustment time.
In one of the embodiments, when the image type is color image, identifying the human-body key points in the image to be recognized through the human pose model obtained by training, and determining the person position information in the video image to be recognized based on the identified key points, comprises:
performing feature extraction on the video image to be recognized using the front network layer of the human pose model, to obtain the feature map corresponding to the video image to be recognized;
extracting the human-body key points of the persons in the video image to be recognized from the feature map using the confidence network layer of the human pose model, to obtain the key-point confidence map corresponding to the key points;
extracting the association degree of each human-body key point in the video image to be recognized from the feature map using the association-degree vector field network layer of the human pose model;
determining the person position information of the video image to be recognized according to the key-point confidence map and the association degree of the human-body key points.
In one of the embodiments, determining the person position information of the video image to be recognized according to the key-point confidence map and the association degree of the human-body key points comprises:
connecting the human-body key points on the key-point confidence map according to their association degree, and computing a key-point contour;
obtaining the minimum circumscribed rectangle according to the key-point contour, the minimum circumscribed rectangle being the rectangle of smallest area that contains the key-point contour;
determining the person position information in the video image to be recognized according to the minimum circumscribed rectangle.
In one of the embodiments, after the person position information to be recognized is obtained, the method further comprises:
generating video information corresponding to the person position information;
writing the video information into the corresponding log.
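The log-writing step above can be sketched as follows; the application does not specify a record format, so the function name, the (x, y, w, h) box layout and the log-line fields are all hypothetical:

```python
def make_log_entry(video_file, frame_index, person_boxes):
    """Format one log line recording the person positions detected in a frame.
    person_boxes: list of (x, y, w, h) rectangles (hypothetical layout)."""
    boxes = "; ".join(f"({x},{y},{w},{h})" for x, y, w, h in person_boxes)
    return f"{video_file} frame={frame_index} persons={len(person_boxes)} boxes=[{boxes}]"

log = []  # stands in for the corresponding log file
log.append(make_log_entry("cam01.mp4", 120, [(34, 50, 80, 200)]))
print(log[0])  # cam01.mp4 frame=120 persons=1 boxes=[(34,50,80,200)]
```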
In one of the embodiments, before the surveillance video file to be recognized is obtained, the method further comprises the step of training the human pose model and the lightweight object detection model; this training step comprises:
obtaining historical surveillance video of the monitoring device;
extracting color image samples and night-vision image samples from the historical surveillance video, annotating the human-body key points of the persons in the color image samples, and annotating the position coordinates of the persons in the night-vision image samples, to obtain annotated color images and annotated night-vision images;
resizing the annotated color images and the annotated night-vision images respectively, to obtain training color images and training night-vision images;
mapping the human-body key points in the training color images to the key points annotated in the annotated color images, and training the human pose model with the mapped training color images;
mapping the position coordinates in the training night-vision images to the position coordinates annotated in the annotated night-vision images, and training the lightweight object detection model with the mapped training night-vision images.
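The resize-and-map step above amounts to rescaling the annotated coordinates so they stay aligned with the resized training image. A minimal sketch, with the function name and the example sizes as illustrative assumptions rather than the application's actual implementation:

```python
def map_points_to_resized(points, orig_size, target_size):
    """Scale annotated (x, y) coordinates from the original image size to the
    training size, so the annotations stay aligned with the resized image."""
    (ow, oh), (tw, th) = orig_size, target_size
    sx, sy = tw / ow, th / oh
    return [(round(x * sx), round(y * sy)) for x, y in points]

# A keypoint annotated at (320, 240) on a 640x480 frame lands at (184, 138)
# after the frame is resized to 368x276 for training.
print(map_points_to_resized([(320, 240)], (640, 480), (368, 276)))  # [(184, 138)]
```

The same mapping applies to the box corner coordinates annotated on the night-vision samples.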
An apparatus for identifying person positions in an image, the apparatus comprising:
a preprocessing module, configured to obtain a surveillance video file to be recognized and preprocess the surveillance video file to obtain a video image to be recognized;
a determining module, configured to determine the image type of the video image to be recognized;
an identification module, configured to, when the image type is color image, identify the human-body key points in the video image to be recognized through a human pose model obtained by training, and determine the person position information in the video image to be recognized based on the identified key points;
the identification module being further configured to, when the image type is night-vision image, identify the person position information in the video image to be recognized through a lightweight object detection model obtained by training.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the above method for identifying person positions in an image when executing the computer program.
A computer-readable storage medium on which a computer program is stored, wherein the computer program implements the above method for identifying person positions in an image when executed by a processor.
With the above method, apparatus, computer device and storage medium for identifying person positions in an image, after the surveillance video file to be recognized is obtained it is preprocessed into video images to be recognized, which facilitates the subsequent recognition of the video content. After the image type of a video image to be recognized is determined, the matching recognition model is invoked according to that type: when the image type is color image, the human-body key points in the image are identified through a human pose model obtained by training, and the person position information in the video image is determined from the identified key points; when the image type is night-vision image, the person position information in the video image is identified through a lightweight object detection model obtained by training. Each type of video image is thus recognized by the model that matches it best, which improves recognition accuracy. Moreover, by detecting the positions of persons in video images with different recognition models, the old manual inspection method can be abandoned and surveillance video content can be recognized quickly and automatically, improving working efficiency.
Brief description of the drawings
Fig. 1 is an application scenario diagram of the method for identifying person positions in an image in one embodiment;
Fig. 2 is a schematic flowchart of the method for identifying person positions in an image in one embodiment;
Fig. 3 is a schematic flowchart of the step of determining the type of the video image in one embodiment;
Fig. 4 is a schematic flowchart of the method for identifying person positions in an image in another embodiment;
Fig. 5 is a structural block diagram of the apparatus for identifying person positions in an image in one embodiment;
Fig. 6 is an internal structure diagram of a computer device in one embodiment.
Specific embodiments
In order to make the objects, technical solutions and advantages of the present application clearer, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
The method for identifying person positions in an image provided by the present application can be applied in the application environment shown in Fig. 1, where a monitoring device 102 communicates with a server 104 over a network. The server 104 obtains the surveillance video file to be recognized sent by the monitoring device 102 and preprocesses the file to obtain video images to be recognized. The server 104 determines the image type of each video image to be recognized. When the image type is color image, the server 104 identifies the human-body key points in the image through a human pose model obtained by training and determines the person position information in the video image based on the identified key points. When the image type is night-vision image, the server 104 identifies the person position information in the video image through a lightweight object detection model obtained by training. The monitoring device 102 may be, but is not limited to, any of various cameras, or a personal computer, laptop, smartphone, tablet or portable wearable device carrying a camera; the server 104 may be implemented as an independent server or as a cluster of multiple servers.
In one embodiment, as shown in Fig. 2, a method for identifying person positions in an image is provided. Taking its application to the server in Fig. 1 as an example, the method comprises the following steps:
Step S202: obtain the surveillance video file to be recognized, and preprocess the surveillance video file to obtain the video images to be recognized.
Here, the surveillance video file to be recognized is a file containing surveillance video collected by a monitoring device. It may be sent to the server by the monitoring device that collected the video, or by another terminal device that has a forwarding function and communicates with the server; that is, the surveillance video file the server obtains can come from the monitoring device or from a video file forwarded by another terminal. Preprocessing means decoding the surveillance video file to obtain the corresponding surveillance video, splitting that video into the video images to be recognized, and applying treatments such as grayscale adjustment, denoising and sharpening to those images, i.e. improving picture quality and suppressing noise to guarantee the clarity and quality of the images.
Specifically, a user can issue a person-position identification instruction through the monitoring device and select the surveillance video to be recognized. After receiving the instruction, the monitoring device compresses and packages the selected surveillance video into the corresponding surveillance video file, sends the file to the corresponding server, and sends the server a person-position identification request. After receiving the request, the server decodes the surveillance video file referenced by the request to restore the surveillance video, and then preprocesses that video to obtain the video images to be recognized.
Step S204: determine the image type of the video image to be recognized.
Specifically, after the server has preprocessed the surveillance video file into the corresponding video images to be recognized, it determines whether each video image is a night-vision image or a color image from the pixel values in that image.
Step S206: when the image type is color image, identify the human-body key points in the image through a human pose model obtained by training, and determine the person position information in the video image based on the identified key points.
Here, the human pose model is an openpose model. Openpose is a pose-detection framework that detects key points of the human body such as the joints, for example the neck, shoulders and elbows, and connects the key points to obtain the human pose. The openpose model consists of a front network and a two-branch multi-stage CNN (Convolutional Neural Network). The front network is a modified VGG-19 network (VGG: Visual Geometry Group network), comprising ten two-dimensional convolutional layers connected in series with rectified linear unit layers, with three pooling layers inserted between them. That is, the VGG-19 module consists of four blocks: block1, block2 and block4 each contain two convolutional layers and two rectified linear units, block3 contains four convolution kernels and four rectified linear units, and the three pooling layers sit between the blocks. The two-branch multi-stage CNN comprises a confidence network and an association-degree vector field network.
Specifically, after the server determines the type of the video image to be recognized, if the type is color image it invokes the openpose model as the recognition model for that image. The video image is input into the openpose model, which identifies the human-body key points of the persons in the image, so that the person positions can then be obtained from the key points.
Step S208: when the image type is night-vision image, identify the person position information in the video image through a lightweight object detection model obtained by training.
Here, the lightweight object detection model is an ssdlite (Single Shot Detector Lite) model. Ssdlite is an object-detection framework, i.e. a model for identifying whether objects are present. In this embodiment, in order to improve the precision of the model, the original loss function of ssdlite is replaced with focal loss. Moreover, because it is difficult to detect the individual key points of a human pose in a night-vision image, in this embodiment the openpose model is used to detect color images while the ssdlite model is used to detect night-vision images.
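The focal loss that replaces the detector's original classification loss can be sketched for the binary case as follows. The formula FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t) and the alpha and gamma values below are the common defaults, assumed here since the application does not specify them:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    p: predicted probability of the positive class; y: ground-truth label."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# Down-weighting easy examples: a confident correct prediction contributes
# far less loss than a hard misclassified one, which is why focal loss can
# improve a one-stage detector's precision.
print(focal_loss(0.95, 1) < focal_loss(0.30, 1))  # True
```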
Specifically, after the server determines the type of the video image to be recognized, if the type is night-vision image it invokes the ssdlite model as the recognition model for that image, and subsequently uses the ssdlite model to identify the person positions in the image.
With the above method for identifying person positions in an image, after the surveillance video file to be recognized is obtained, it is preprocessed into video images to be recognized, which facilitates the subsequent recognition of the video content. After the image type of a video image is determined, the matching recognition model is invoked according to that type: when the image type is color image, the human-body key points in the image are identified through a human pose model obtained by training and the person position information in the video image is determined from the identified key points; when the image type is night-vision image, the person position information in the video image is identified through a lightweight object detection model obtained by training. Each type of video image is thus recognized by the model that matches it best, improving recognition accuracy. By detecting the positions of persons with different recognition models, the old manual inspection method can be abandoned and surveillance video content recognized quickly and automatically, improving working efficiency.
In one embodiment, as shown in Fig. 3, step S204, determining the image type of the video image to be recognized, comprises the following steps:
Step S302: obtain the three-channel pixel value of each pixel in the video image to be recognized.
Here, a pixel is one of the small squares that make up an image, i.e. the smallest unit of the image. Each square has a definite position and an assigned color value, and together the colors and positions of the squares determine how the image appears. The pixel value is the color value of the pixel, and the type of the image can be determined from the pixel values. Image types include night-vision image and color image. The three-channel pixel value is the RGB pixel value, the color value that determines the displayed color; R, G and B stand for red, green and blue respectively. Specifically, when the server determines from the pixel values whether the video image is a night-vision image or a color image, it first obtains the RGB pixel value of every pixel in the image.
Step S304: perform difference calculations based on the three-channel pixel value, and select the largest difference as the pixel difference value.
Specifically, after the three-channel pixel value of each pixel is obtained, i.e. after the RGB pixel value is obtained, differences are computed over R, G and B: any two of the three channels are subtracted, and the largest of the resulting differences is taken as the pixel difference value of that pixel. For example, take pixel 1: its RGB value is obtained, and each of R, G and B has a corresponding component value, generally between 0 and 255, the exact value depending on the specific image. The component values of R, G and B are obtained separately and then subtracted from one another pairwise. This amounts to computing the absolute values of R-G, R-B and G-B; R-B and B-R have the same magnitude and differ only in sign, and although the sign matters mathematically it is irrelevant for a pixel, so taking absolute values reduces the number of calculation steps and the computation finishes quickly. That is, the largest of the absolute values of R-G, R-B and G-B is chosen as the pixel difference value of pixel 1.
Step S306: determine the image type of the video image to be recognized according to the preset value and the pixel difference value.
Here, the preset value is a reference pixel value, set in advance, for judging whether the video image is a color image or a night-vision image; in this embodiment the preset value is 10. Specifically, after the pixel difference value of a pixel is obtained, it is compared with the preset value 10. If the pixel difference value is greater than 10, the video image to be recognized is determined to be a color image; if it is less than or equal to 10, the video image is determined to be a night-vision image. In this embodiment, determining the image type from the pixel values of the video image guarantees that the recognition model best matched to the image can subsequently be invoked according to its type, improving recognition accuracy.
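A minimal sketch of this channel-difference check, under the assumption (one reading of the embodiment) that the frame counts as a color image as soon as any pixel's largest channel difference exceeds the preset value 10:

```python
def classify_frame(pixels, threshold=10):
    """Classify a frame as 'color' or 'night_vision' from its RGB pixels.
    Night-vision frames are near-grayscale, so R, G and B are almost equal
    and the largest per-pixel channel difference stays small."""
    for r, g, b in pixels:
        pixel_diff = max(abs(r - g), abs(r - b), abs(g - b))
        if pixel_diff > threshold:   # preset value 10 from the embodiment
            return "color"
    return "night_vision"

print(classify_frame([(120, 121, 119), (60, 60, 62)]))   # night_vision
print(classify_frame([(200, 30, 30), (120, 121, 119)]))  # color
```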
In another embodiment, step S204, determining the image type of the video image to be recognized, comprises: obtaining the acquisition-mode adjustment time of the monitoring device corresponding to the surveillance video file to be recognized, and obtaining the shooting time corresponding to the video image to be recognized; and determining the image type of the video image to be recognized according to the acquisition-mode adjustment time.
Specifically, the monitoring device has two modes: a color acquisition mode and a night-vision black-and-white acquisition mode. When the monitoring device captures surveillance video, the quality of color video degrades under low light; to guarantee video quality, the monitoring device therefore automatically switches from the color acquisition mode to the night-vision black-and-white mode when the light is low, and captures night-vision black-and-white video. Accordingly, when the content type of a video image to be recognized is determined, the acquisition-mode adjustment time of the monitoring device corresponding to the surveillance video file is obtained, i.e. the time at which the device switched from the color acquisition mode to the night-vision black-and-white mode, which fixes the moment the device changed modes. The shooting time of the video image is then further obtained; it can be read from the video information. By comparing the shooting time with the acquisition-mode adjustment time, the video image can be determined to be a color image when the shooting time is before the adjustment time, and a night-vision image when the shooting time is after it.
In one embodiment, when the image type is color image, identifying the human-body key points in the image to be recognized through the human pose model obtained by training, and determining the person position information in the video image to be recognized based on the identified key points, specifically comprises: performing feature extraction on the video image to be recognized using the front network layer of the human pose model, to obtain the feature map corresponding to the image; extracting the human-body key points of the persons in the image from the feature map using the confidence network layer of the human pose model, to obtain the key-point confidence map corresponding to the key points; extracting the association degree of each human-body key point in the image from the feature map using the association-degree vector field network layer of the human pose model; and determining the person position information of the video image according to the key-point confidence map and the association degree of the key points.
Specifically, when the video image to be recognized is a color image, the image is first input into the front network of the human pose model, which performs feature-extraction operations such as convolution and pooling on the image and thereby obtains the corresponding feature map. The feature map is then input into the two-branch multi-stage CNN: the confidence network branch produces each human-body key point and the corresponding key-point confidence map, and the association-degree vector field branch produces the association degree of each key point, so the person position information of the video image is determined from the key-point confidence map and the association degrees. The confidence map locates the human-body key points of the persons in the image, and the association degrees give the valid connections between the key points; together they determine the person positions.
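One common way to obtain such an association degree from a vector field, as in OpenPose-style part affinity fields, is to integrate the field along the candidate connection between two keypoints. A simplified sketch under that assumption (the field layout and the sampling scheme below are illustrative, not the application's specification):

```python
import math

def paf_association_score(paf, p1, p2, samples=10):
    """Score a candidate connection between two keypoints by averaging the dot
    product of the association vector field with the unit vector from p1 to
    p2, sampled along the segment (a discrete line integral)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return 0.0
    ux, uy = dx / norm, dy / norm
    total = 0.0
    for i in range(samples):
        t = i / (samples - 1)
        x = int(round(p1[0] + t * dx))
        y = int(round(p1[1] + t * dy))
        vx, vy = paf[y][x]          # field vector at the sampled point
        total += vx * ux + vy * uy  # alignment with the candidate connection
    return total / samples

# A field pointing right everywhere strongly supports a horizontal connection
# and gives no support to a vertical one.
field = [[(1.0, 0.0)] * 5 for _ in range(5)]
print(paf_association_score(field, (0, 2), (4, 2)))  # 1.0
print(paf_association_score(field, (2, 0), (2, 4)))  # 0.0
```

Pairs with high scores are kept as valid connections, which is how the key points are grouped into individual persons before the position rectangle is computed.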
In one embodiment, determining the person position information of the video image to be identified according to the key point confidence maps and the association degrees of the human body key points comprises: connecting the human body key points on the key point confidence maps according to their association degrees, and computing a key point contour; obtaining a minimum circumscribed rectangle according to the key point contour, the minimum circumscribed rectangle being the rectangle of smallest area that contains the key point contour; and determining the person position information in the video image to be identified according to the minimum circumscribed rectangle.
Here, the key point contour refers to the irregular shape that frames the human body key points, and the minimum circumscribed rectangle refers to the smallest rectangle that frames the entire key point contour. Specifically, the OpenCV toolkit is used to process the key point confidence maps and association degrees: the human body key points on the confidence maps are first connected according to the association degrees, yielding the corresponding human pose. The key point contour is then computed with OpenCV, and the minimum circumscribed rectangle is obtained from the contour; the region inside the rectangle is the position of the person, and the coordinates of the rectangle constitute the person position information. If the rectangle obtained is skewed relative to a regular (axis-aligned) rectangle, i.e. an irregular rectangle, it is corrected to a regular one, so that the final minimum circumscribed rectangle is a regular rectangle.
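For an axis-aligned (regular) rectangle, the minimum circumscribed rectangle over the connected key points reduces to the coordinate extremes of the points, which can be sketched without the OpenCV contour step (in the embodiment itself OpenCV's contour and bounding-rectangle utilities perform this computation; the function name below is illustrative):

```python
import numpy as np

def person_bbox(keypoints):
    """keypoints: iterable of (x, y) detected joints (missing joints dropped).
    Returns (xmin, ymin, xmax, ymax) -- the axis-aligned minimum
    circumscribed rectangle, i.e. the person position information."""
    pts = np.asarray(keypoints)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    return int(xmin), int(ymin), int(xmax), int(ymax)

joints = [(120, 40), (110, 80), (150, 200), (100, 210)]
print(person_bbox(joints))  # (100, 40, 150, 210)
```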
In one embodiment, as shown in Fig. 4, another method for identifying person positions in an image is provided, which further comprises the following steps after the person position information is obtained:

Step S210: generating video information corresponding to the person position information.

Step S212: writing the video information into a corresponding log.
Here, the preset target includes but is not limited to a human body; it may also be another object, set in advance as required. The log refers to a document that records the video information. Specifically, taking surveillance video as the example in this embodiment, the purpose of recognizing the surveillance video is to identify the human bodies appearing in it, so the human body is taken as the preset target. When the video content is recognized and detected, the corresponding video information is generated from the detected person position information. The video information includes whether the video image contains the preset target, which surveillance video file the video image comes from, the coordinate position of the preset target in the video image, and so on. That is, after the source of the video image and the coordinate position of the person in it are obtained, they are packaged into one file, forming the generated video information. Once generated, the video information is written into the corresponding log; subsequently, when the content of the surveillance video needs to be known, the log file can be consulted directly, and the video content of all surveillance video files can be learned from the video information recorded therein.
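One plausible shape for such a log record, as a Python sketch (the field names are assumptions; the patent only requires that the source file, the presence of the preset target and its coordinates be recorded):

```python
import json

def log_video_info(log_path, source_file, frame_time, bbox, has_target):
    """Append one JSON record per detection to the log file.
    bbox is (xmin, ymin, xmax, ymax) or None when no target was found."""
    record = {
        "source": source_file,      # which surveillance video file
        "time": frame_time,         # when the frame was shot
        "has_person": has_target,   # whether the preset target appears
        "bbox": bbox,               # person position information
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Each detection appends one line, so the log stays readable record by record when the surveillance content is later reviewed.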
In one embodiment, the recognition models are networks trained in advance, i.e. the human pose model and the lightweight object detection model are pre-trained models used for person position recognition. Training the human pose model and the lightweight object detection model specifically includes: obtaining historical surveillance video of a monitoring device; extracting color image samples and night-vision image samples from the historical surveillance video, annotating the human bodies in the color image samples with human body key points and the human bodies in the night-vision image samples with position coordinates, thereby obtaining annotated color images and annotated night-vision images; resizing the annotated color images and annotated night-vision images to obtain training color images and training night-vision images; mapping the human body key points in the training color images to the key points annotated in the annotated color images, and training the human pose model with the mapped training color images; and mapping the position coordinates in the training night-vision images to those annotated in the annotated night-vision images, and training the lightweight object detection model with the mapped training night-vision images.
Specifically, the recognition models are trained on images obtained from historical surveillance video, so that they learn the surveillance scene sufficiently and subsequent recognition of surveillance video content is more accurate. Since the recognition models comprise two models, OpenPose and SSDLite, which recognize different types of images, the historical surveillance video obtained should include both color video and night-vision video.
After the historical surveillance video is obtained, FFmpeg is used to extract video images that meet the training requirements from it, i.e. color image samples and night-vision image samples containing human bodies. The key points of the persons in the color images are then coordinate-annotated with annotation software to obtain the annotated color images; the annotation software includes but is not limited to labelme. The color images are annotated with human body key points; common key point counts include 9, 14, 16, 17 and 18. To achieve more accurate recognition, this embodiment preferably annotates the coordinates of 18 key points: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, left eye, right eye, left ear and right ear. The night-vision images are annotated directly with person position coordinates to obtain the annotated night-vision images, where a coordinate may be expressed as (minimum x, minimum y, maximum x, maximum y), i.e. (xmin, ymin, xmax, ymax).
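Frame extraction with FFmpeg, as mentioned above, might be driven from Python as follows; the sampling rate and output naming pattern are illustrative choices, since the patent does not specify the exact invocation:

```python
import subprocess

def ffmpeg_sample_cmd(video_path, out_dir, fps=1):
    """Command line that samples `fps` frames per second from a recording.
    The -vf fps=... filter is a standard FFmpeg idiom; the output file
    pattern is illustrative."""
    return ["ffmpeg", "-i", video_path,
            "-vf", f"fps={fps}",
            f"{out_dir}/frame_%05d.png"]

# To actually extract frames (requires FFmpeg on the PATH):
# subprocess.run(ffmpeg_sample_cmd("cam01.mp4", "frames"), check=True)
```

The extracted frames would then be screened for human bodies before annotation.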
Since OpenPose and SSDLite process images differently, the input sizes they accept also differ. After key point annotation, the annotated color images therefore need to be scaled to 432×368, while the annotated night-vision images are scaled to 300×300; the scaled annotated color images and annotated night-vision images serve as the training color images and training night-vision images. Because the annotated coordinate positions change with the scaling, the scaled coordinates are mapped to the pre-scaling annotations; after the mapping between the training images and their corresponding annotated images is established, the training color images and training night-vision images are input into the corresponding models for training, so that the models learn the correct coordinates during training. The color images are input into the OpenPose model for training, and the night-vision images into the SSDLite model.
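The coordinate mapping that accompanies the resizing can be sketched as a simple rescaling of each annotated (xmin, ymin, xmax, ymax) box; key points would be rescaled the same way, point by point. The function name is illustrative:

```python
def scale_image_and_boxes(img_w, img_h, boxes, target_w, target_h):
    """Map (xmin, ymin, xmax, ymax) annotations onto the resized image.
    Mirrors the step above: night-vision samples go to 300x300,
    color samples to 432x368."""
    sx, sy = target_w / img_w, target_h / img_h
    return [(round(x0 * sx), round(y0 * sy), round(x1 * sx), round(y1 * sy))
            for (x0, y0, x1, y1) in boxes]

# A 1920x1080 night-vision frame resized to 300x300 for SSDLite-style training
print(scale_image_and_boxes(1920, 1080, [(192, 108, 960, 540)], 300, 300))
# [(30, 30, 150, 150)]
```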
It should be understood that although the steps in the flowcharts of Figs. 2-3 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-3 may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; nor is their execution order necessarily sequential, as they may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 5, a device for identifying person positions in an image is provided, comprising: a preprocessing module 502, a determining module 504 and an identification module 506, wherein:

the preprocessing module 502 is configured to obtain a surveillance video file to be identified and preprocess it to obtain a video image to be identified;

the determining module 504 is configured to determine the image type of the video image to be identified;

the identification module 506 is configured to, when the image type is a color image, identify the human body key points in the video image to be identified through the trained human pose model, and determine the person position information in the video image to be identified based on the identified human body key points;

the identification module 506 is further configured to, when the image type is a night-vision image, identify the person position information in the video image to be identified through the trained lightweight object detection model.
In one embodiment, the determining module 504 is further configured to obtain the three-channel pixel values of each pixel in the video image to be identified; calculate differences based on the three-channel pixel values and select the maximum difference as the pixel difference; and determine the image type of the video image to be identified according to a preset value and the pixel difference.
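A hedged sketch of that channel-difference check, in Python with NumPy: if the largest spread among the three channels of any pixel stays below the preset value, every pixel is nearly gray, which indicates a night-vision frame. The function name and the threshold of 10 are illustrative:

```python
import numpy as np

def classify_frame(frame, threshold=10):
    """frame: (H, W, 3) uint8 image. Returns 'color' or 'night_vision'.
    The threshold plays the role of the 'preset value' in the text."""
    channels = frame.astype(np.int32)
    diff = channels.max(axis=2) - channels.min(axis=2)  # per-pixel channel spread
    return "color" if diff.max() >= threshold else "night_vision"

gray = np.full((4, 4, 3), 90, dtype=np.uint8)       # R=G=B everywhere
color = gray.copy()
color[0, 0] = (200, 40, 40)                         # one saturated pixel
print(classify_frame(gray), classify_frame(color))  # night_vision color
```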
In one embodiment, the determining module 504 is further configured to obtain the acquisition-mode switching time of the monitoring device corresponding to the surveillance video file to be identified, obtain the shooting time corresponding to the video image to be identified, and determine the image type of the video image to be identified according to the acquisition-mode switching time.

In one embodiment, the identification module 506 is further configured to perform feature extraction on the video image to be identified using the pre-network layer of the human pose model to obtain the feature map corresponding to the video image to be identified; extract the human body key points in the video image to be identified from the feature map using the confidence network layer of the human pose model to obtain the key point confidence map corresponding to each human body key point; extract the association degrees between the human body key points in the video image to be identified from the feature map using the association-degree vector network layer of the human pose model; and determine the person position information of the video image to be identified according to the key point confidence maps and the association degrees of the human body key points.

In one embodiment, the identification module 506 is further configured to connect the human body key points on the key point confidence maps according to their association degrees and compute a key point contour; obtain a minimum circumscribed rectangle according to the key point contour, the minimum circumscribed rectangle being the rectangle of smallest area that contains the key point contour; and determine the person position information in the video image to be identified according to the minimum circumscribed rectangle.

In one embodiment, the device for identifying person positions in an image further includes a generating module configured to generate video information corresponding to the person position information and write the video information into a corresponding log.

In one embodiment, the device for identifying person positions in an image further includes a training module configured to obtain historical surveillance video of a monitoring device; extract color image samples and night-vision image samples from the historical surveillance video, annotate the human bodies in the color image samples with human body key points and the human bodies in the night-vision image samples with position coordinates, obtaining annotated color images and annotated night-vision images; resize the annotated color images and annotated night-vision images to obtain training color images and training night-vision images; map the human body key points in the training color images to the key points annotated in the annotated color images and train the human pose model with the mapped training color images; and map the position coordinates in the training night-vision images to those annotated in the annotated night-vision images and train the lightweight object detection model with the mapped training night-vision images.

For the specific limitations of the device for identifying person positions in an image, reference may be made to the limitations of the method for identifying person positions in an image above, which are not repeated here. Each module in the above device may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in Fig. 6. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data, and the network interface is used for communicating with external terminals over a network connection. The computer program, when executed by the processor, implements a method for identifying person positions in an image.

Those skilled in the art will understand that the structure shown in Fig. 6 is only a block diagram of the parts relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:

obtaining a surveillance video file to be identified, and preprocessing the surveillance video file to be identified to obtain a video image to be identified;

determining the image type of the video image to be identified;

when the image type is a color image, identifying the human body key points in the video image to be identified through a trained human pose model, and determining the person position information in the video image to be identified based on the identified human body key points;

when the image type is a night-vision image, identifying the person position information in the video image to be identified through a trained lightweight object detection model.
In one embodiment, the processor further implements the following steps when executing the computer program:

obtaining the three-channel pixel values of each pixel in the video image to be identified; calculating differences based on the three-channel pixel values, and selecting the maximum difference as the pixel difference; determining the image type of the video image to be identified according to a preset value and the pixel difference.

In one embodiment, the processor further implements the following steps when executing the computer program:

obtaining the acquisition-mode switching time of the monitoring device corresponding to the surveillance video file to be identified, and obtaining the shooting time corresponding to the video image to be identified; determining the image type of the video image to be identified according to the acquisition-mode switching time.

In one embodiment, the processor further implements the following steps when executing the computer program:

performing feature extraction on the video image to be identified using the pre-network layer of the human pose model to obtain the feature map corresponding to the video image to be identified; extracting the human body key points in the video image to be identified from the feature map using the confidence network layer of the human pose model to obtain the key point confidence map corresponding to each human body key point in the video image to be identified; extracting the association degrees between the human body key points in the video image to be identified from the feature map using the association-degree vector network layer of the human pose model; determining the person position information of the video image to be identified according to the key point confidence maps and the association degrees of the human body key points.

In one embodiment, the processor further implements the following steps when executing the computer program:

connecting the human body key points on the key point confidence maps according to their association degrees, and computing a key point contour; obtaining a minimum circumscribed rectangle according to the key point contour, the minimum circumscribed rectangle being the rectangle of smallest area that contains the key point contour; determining the person position information in the video image to be identified according to the minimum circumscribed rectangle.

In one embodiment, the processor further implements the following steps when executing the computer program:

generating video information corresponding to the person position information; writing the video information into a corresponding log.

In one embodiment, the processor further implements the following steps when executing the computer program:

obtaining historical surveillance video of a monitoring device; extracting color image samples and night-vision image samples from the historical surveillance video, annotating the human bodies in the color image samples with human body key points and the human bodies in the night-vision image samples with position coordinates, to obtain annotated color images and annotated night-vision images; resizing the annotated color images and the annotated night-vision images respectively to obtain training color images and training night-vision images; mapping the human body key points in the training color images to the human body key points annotated in the annotated color images, and training the human pose model with the mapped training color images; mapping the position coordinates in the training night-vision images to the position coordinates annotated in the annotated night-vision images, and training the lightweight object detection model with the mapped training night-vision images.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, the computer program implementing the following steps when executed by a processor:

obtaining a surveillance video file to be identified, and preprocessing the surveillance video file to be identified to obtain a video image to be identified;

determining the image type of the video image to be identified;

when the image type is a color image, identifying the human body key points in the video image to be identified through a trained human pose model, and determining the person position information in the video image to be identified based on the identified human body key points;

when the image type is a night-vision image, identifying the person position information in the video image to be identified through a trained lightweight object detection model.

In one embodiment, the computer program further implements the following steps when executed by a processor:

obtaining the three-channel pixel values of each pixel in the video image to be identified; calculating differences based on the three-channel pixel values, and selecting the maximum difference as the pixel difference; determining the image type of the video image to be identified according to a preset value and the pixel difference.

In one embodiment, the computer program further implements the following steps when executed by a processor:

obtaining the acquisition-mode switching time of the monitoring device corresponding to the surveillance video file to be identified, and obtaining the shooting time corresponding to the video image to be identified; determining the image type of the video image to be identified according to the acquisition-mode switching time.

In one embodiment, the computer program further implements the following steps when executed by a processor:

performing feature extraction on the video image to be identified using the pre-network layer of the human pose model to obtain the feature map corresponding to the video image to be identified; extracting the human body key points in the video image to be identified from the feature map using the confidence network layer of the human pose model to obtain the key point confidence map corresponding to each human body key point in the video image to be identified; extracting the association degrees between the human body key points in the video image to be identified from the feature map using the association-degree vector network layer of the human pose model; determining the person position information of the video image to be identified according to the key point confidence maps and the association degrees of the human body key points.

In one embodiment, the computer program further implements the following steps when executed by a processor:

connecting the human body key points on the key point confidence maps according to their association degrees, and computing a key point contour; obtaining a minimum circumscribed rectangle according to the key point contour, the minimum circumscribed rectangle being the rectangle of smallest area that contains the key point contour; determining the person position information in the video image to be identified according to the minimum circumscribed rectangle.

In one embodiment, the computer program further implements the following steps when executed by a processor:

generating video information corresponding to the person position information; writing the video information into a corresponding log.

In one embodiment, the computer program further implements the following steps when executed by a processor:

obtaining historical surveillance video of a monitoring device; extracting color image samples and night-vision image samples from the historical surveillance video, annotating the human bodies in the color image samples with human body key points and the human bodies in the night-vision image samples with position coordinates, to obtain annotated color images and annotated night-vision images; resizing the annotated color images and the annotated night-vision images respectively to obtain training color images and training night-vision images; mapping the human body key points in the training color images to the human body key points annotated in the annotated color images, and training the human pose model with the mapped training color images; mapping the position coordinates in the training night-vision images to the position coordinates annotated in the annotated night-vision images, and training the lightweight object detection model with the mapped training night-vision images.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for identifying person positions in an image, the method comprising:
obtaining a surveillance video file to be identified, and preprocessing the surveillance video file to be identified to obtain a video image to be identified;
determining the image type of the video image to be identified;
when the image type is a color image, identifying the human body key points in the video image to be identified through a trained human pose model, and determining the person position information in the video image to be identified based on the identified human body key points;
when the image type is a night-vision image, identifying the person position information in the video image to be identified through a trained lightweight object detection model.
2. The method according to claim 1, wherein the step of determining the image type of the video image to be identified comprises:
obtaining the three-channel pixel values of each pixel in the video image to be identified;
calculating differences based on the three-channel pixel values, and selecting the maximum difference as the pixel difference;
determining the image type of the video image to be identified according to a preset value and the pixel difference.
3. The method according to claim 1, wherein the determining the image type of the video image to be identified comprises:
obtaining the acquisition-mode switching time of the monitoring device corresponding to the surveillance video file to be identified, and obtaining the shooting time corresponding to the video image to be identified;
determining the image type of the video image to be identified according to the acquisition-mode switching time.
4. The method according to claim 1, wherein, when the image type is a color image, the identifying the human body key points in the video image to be identified through the trained human pose model and the determining the person position information in the video image to be identified based on the identified human body key points comprise:
performing feature extraction on the video image to be identified using the pre-network layer of the human pose model to obtain the feature map corresponding to the video image to be identified;
extracting the human body key points in the video image to be identified from the feature map using the confidence network layer of the human pose model to obtain the key point confidence map corresponding to each human body key point in the video image to be identified;
extracting the association degrees between the human body key points in the video image to be identified from the feature map using the association-degree vector network layer of the human pose model;
determining the person position information of the video image to be identified according to the key point confidence maps and the association degrees of the human body key points.
5. The method according to claim 4, wherein the determining the person position information of the video image to be identified according to the key point confidence maps and the association degrees of the human body key points comprises:
connecting the human body key points on the key point confidence maps according to their association degrees, and computing a key point contour;
obtaining a minimum circumscribed rectangle according to the key point contour, the minimum circumscribed rectangle being the rectangle of smallest area that contains the key point contour;
determining the person position information in the video image to be identified according to the minimum circumscribed rectangle.
6. The method according to claim 1, further comprising, after the person position information is obtained:
generating video information corresponding to the person position information;
writing the video information into a corresponding log.
7. The method according to claim 1, further comprising, before the obtaining the surveillance video file to be identified, the step of training the human pose model and the lightweight object detection model, the step comprising:
obtaining historical surveillance video of a monitoring device;
extracting color image samples and night-vision image samples from the historical surveillance video, annotating the human bodies in the color image samples with human body key points and the human bodies in the night-vision image samples with position coordinates, to obtain annotated color images and annotated night-vision images;
resizing the annotated color images and the annotated night-vision images respectively to obtain training color images and training night-vision images;
mapping the human body key points in the training color images to the human body key points annotated in the annotated color images, and training the human pose model with the mapped training color images;
mapping the position coordinates in the training night-vision images to the position coordinates annotated in the annotated night-vision images, and training the lightweight object detection model with the mapped training night-vision images.
8. An apparatus for recognizing a position of a person in an image, wherein the apparatus comprises:
a preprocessing module, configured to acquire a surveillance video file to be identified and preprocess the surveillance video file to be identified, to obtain a video image to be identified;
a determining module, configured to determine an image type of the video image to be identified;
an identification module, configured to, when the image type is a color image, identify human body key points in the video image to be identified through a human pose model obtained by training, and determine person position information in the video image to be identified based on the identified human body key points;
the identification module being further configured to, when the image type is a night vision image, identify the person position information in the video image to be identified through a lightweight target detection model obtained by training.
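The determining module's image-type check is not spelled out in the claims. One common heuristic, shown here purely as an assumption, is that night-vision (infrared) frames are effectively grayscale, so their color channels are nearly identical:

```python
import numpy as np

def classify_image_type(frame, tol=2.0):
    """Label an (H, W, 3) frame as "color" or "night_vision".

    Uses the mean absolute difference between color channels; the threshold
    and the heuristic itself are illustrative, not taken from the patent.
    """
    f = np.asarray(frame, dtype=float)
    diff = (np.abs(f[..., 0] - f[..., 1]).mean()
            + np.abs(f[..., 1] - f[..., 2]).mean())
    return "night_vision" if diff < tol else "color"
```

The apparatus would then route "color" frames to the human pose model and "night_vision" frames to the lightweight target detection model.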
9. a kind of computer equipment, including memory and processor, the memory are stored with computer program, feature exists In the step of processor realizes any one of claims 1 to 7 the method when executing the computer program.
10. a kind of computer readable storage medium, is stored thereon with computer program, which is characterized in that the computer program The step of method described in any one of claims 1 to 7 is realized when being executed by processor.
CN201910628940.8A 2019-07-12 2019-07-12 Method and apparatus for recognizing position of person in image, computer device and storage medium Pending CN110502986A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910628940.8A CN110502986A (en) 2019-07-12 2019-07-12 Method and apparatus for recognizing position of person in image, computer device and storage medium
PCT/CN2020/093608 WO2021008252A1 (en) 2019-07-12 2020-05-30 Method and apparatus for recognizing position of person in image, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910628940.8A CN110502986A (en) 2019-07-12 2019-07-12 Method and apparatus for recognizing position of person in image, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN110502986A true CN110502986A (en) 2019-11-26

Family

ID=68586137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910628940.8A Pending CN110502986A (en) 2019-07-12 2019-07-12 Identify character positions method, apparatus, computer equipment and storage medium in image

Country Status (2)

Country Link
CN (1) CN110502986A (en)
WO (1) WO2021008252A1 (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861689A (en) * 2021-02-01 2021-05-28 上海依图网络科技有限公司 Searching method and device of coordinate recognition model based on NAS technology
CN112818908A (en) * 2021-02-22 2021-05-18 Oppo广东移动通信有限公司 Key point detection method, device, terminal and storage medium
CN113873196A (en) * 2021-03-08 2021-12-31 南通市第一人民医院 Method and system for improving infection prevention and control management quality
CN112990057A (en) * 2021-03-26 2021-06-18 北京易华录信息技术股份有限公司 Human body posture recognition method and device and electronic equipment
CN113141518B (en) * 2021-04-20 2022-09-06 北京安博盛赢教育科技有限责任公司 Control method and control device for video frame images in live classroom
CN113326773A (en) * 2021-05-28 2021-08-31 北京百度网讯科技有限公司 Recognition model training method, recognition method, device, equipment and storage medium
CN113807342A (en) * 2021-09-17 2021-12-17 广东电网有限责任公司 Method and related device for acquiring equipment information based on image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587622A (en) * 2009-06-18 2009-11-25 任芳 Forest rocket detection and recognition methods and equipment based on video image intelligent analysis
CN105005766A (en) * 2015-07-01 2015-10-28 深圳市迈科龙电子有限公司 Vehicle body color identification method
WO2017177902A1 (en) * 2016-04-14 2017-10-19 平安科技(深圳)有限公司 Video recording method, server, system, and storage medium
CN108829233A (en) * 2018-04-26 2018-11-16 深圳市深晓科技有限公司 A kind of exchange method and device
WO2019071664A1 (en) * 2017-10-09 2019-04-18 平安科技(深圳)有限公司 Human face recognition method and apparatus combined with depth information, and storage medium
CN109740513A (en) * 2018-12-29 2019-05-10 青岛小鸟看看科技有限公司 A kind of analysis of operative action method and apparatus
CN109886139A (en) * 2019-01-28 2019-06-14 平安科技(深圳)有限公司 Human testing model generating method, sewage draining exit method for detecting abnormality and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150194034A1 (en) * 2014-01-03 2015-07-09 Nebulys Technologies, Inc. Systems and methods for detecting and/or responding to incapacitated person using video motion analytics
CN104573111B (en) * 2015-02-03 2016-03-23 中国人民解放军国防科学技术大学 Pedestrian's data structured in a kind of monitor video stores and preindexing method
CN109961014A (en) * 2019-02-25 2019-07-02 中国科学院重庆绿色智能技术研究院 A kind of coal mine conveying belt danger zone monitoring method and system
CN110502986A (en) * 2019-07-12 2019-11-26 平安科技(深圳)有限公司 Method and apparatus for recognizing position of person in image, computer device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨阿丽等 (YANG, Ali et al.): "All-weather video vehicle detection method for traffic security checkpoints", 《合肥工业大学学报》 (Journal of Hefei University of Technology), vol. 35, no. 3, page 358 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021008252A1 (en) * 2019-07-12 2021-01-21 平安科技(深圳)有限公司 Method and apparatus for recognizing position of person in image, computer device and storage medium
CN111178323A (en) * 2020-01-10 2020-05-19 北京百度网讯科技有限公司 Video-based group behavior identification method, device, equipment and storage medium
CN111178323B (en) * 2020-01-10 2023-08-29 北京百度网讯科技有限公司 Group behavior recognition method, device, equipment and storage medium based on video
CN111222486A (en) * 2020-01-15 2020-06-02 腾讯科技(深圳)有限公司 Training method, device and equipment for hand gesture recognition model and storage medium
CN111222486B (en) * 2020-01-15 2022-11-04 腾讯科技(深圳)有限公司 Training method, device and equipment for hand gesture recognition model and storage medium
CN111476729A (en) * 2020-03-31 2020-07-31 北京三快在线科技有限公司 Target identification method and device
CN111753643A (en) * 2020-05-09 2020-10-09 北京迈格威科技有限公司 Character posture recognition method and device, computer equipment and storage medium
CN112418135A (en) * 2020-12-01 2021-02-26 深圳市优必选科技股份有限公司 Human behavior recognition method and device, computer equipment and readable storage medium
CN113221832A (en) * 2021-05-31 2021-08-06 常州纺织服装职业技术学院 Human body identification method and system based on three-dimensional human body data
CN113221832B (en) * 2021-05-31 2023-07-11 常州纺织服装职业技术学院 Human body identification method and system based on three-dimensional human body data
CN117354494A (en) * 2023-12-05 2024-01-05 天津华来科技股份有限公司 Testing method for night vision switching performance of intelligent camera
CN117354494B (en) * 2023-12-05 2024-02-23 天津华来科技股份有限公司 Testing method for night vision switching performance of intelligent camera

Also Published As

Publication number Publication date
WO2021008252A1 (en) 2021-01-21

Similar Documents

Publication Publication Date Title
CN110502986A (en) Method and apparatus for recognizing position of person in image, computer device and storage medium
CN110197229B (en) Training method and device of image processing model and storage medium
CN106204779B (en) Check class attendance method based on plurality of human faces data collection strategy and deep learning
CN110490212A (en) Molybdenum target image processing arrangement, method and apparatus
CN110599395B (en) Target image generation method, device, server and storage medium
CN108900769A (en) Image processing method, device, mobile terminal and computer readable storage medium
CN108764052A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN107862663A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
CN109191403A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN110298862A (en) Method for processing video frequency, device, computer readable storage medium and computer equipment
CN110263768A (en) A kind of face identification method based on depth residual error network
CN107743200A (en) Method, apparatus, computer-readable recording medium and the electronic equipment taken pictures
CN113902657A (en) Image splicing method and device and electronic equipment
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN109712177A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111753782A (en) False face detection method and device based on double-current network and electronic equipment
CN109360254A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN111191521B (en) Face living body detection method and device, computer equipment and storage medium
CN111582155A (en) Living body detection method, living body detection device, computer equipment and storage medium
CN111696090B (en) Method for evaluating quality of face image in unconstrained environment
CN107862654A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN113642639A (en) Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN109360176A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN108629329A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN113920556A (en) Face anti-counterfeiting method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination