CN109583391A - Key point detection method, device, equipment and readable medium - Google Patents

Key point detection method, device, equipment and readable medium

Info

Publication number
CN109583391A
CN109583391A CN201811473824.5A
Authority
CN
China
Prior art keywords
video frame
key point
current video
frame
location information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811473824.5A
Other languages
Chinese (zh)
Other versions
CN109583391B (en)
Inventor
胡耀全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811473824.5A priority Critical patent/CN109583391B/en
Publication of CN109583391A publication Critical patent/CN109583391A/en
Application granted granted Critical
Publication of CN109583391B publication Critical patent/CN109583391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a key point detection method, device, equipment and readable medium. The method includes: selecting, from a video frame sequence showing a user image, a current video frame and a first history video frame preceding the current video frame; detecting initial position information of each key point from the current video frame; mapping, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame; and obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point. The embodiments of the present disclosure can improve the accuracy of key point detection.

Description

Key point detection method, device, equipment and readable medium
Technical field
Embodiments of the present disclosure relate to computer vision technology, and in particular to a key point detection method, device, equipment and readable medium.
Background technique
With the development of computer vision, some electronic devices can detect key points of a user from an image of the user, such as the joints, the limbs and the facial features.
At present, the detected key points usually need to be displayed in the image in real time. For example, while the user makes various postures or movements in front of the camera, the corresponding key points are displayed in real time in the captured image, so that operations such as body-shape correction can be further performed, increasing the sense of fun and interactivity. This places high requirements on the accuracy and efficiency of key point detection. However, existing key point detection methods cannot meet the requirements of high accuracy and high efficiency, and are difficult to apply to real-time application scenarios.
Summary of the invention
Embodiments of the present disclosure provide a key point detection method, device, equipment and readable medium, so as to improve the accuracy and efficiency of key point detection and adapt to real-time application scenarios.
In a first aspect, an embodiment of the present disclosure provides a key point detection method, comprising:
selecting, from a video frame sequence showing a user image, a current video frame and a first history video frame preceding the current video frame;
detecting initial position information of each key point from the current video frame;
mapping, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame;
obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point.
In a second aspect, an embodiment of the present disclosure further provides a key point detection device, comprising:
a choosing module, configured to select, from a video frame sequence showing a user image, a current video frame and a first history video frame preceding the current video frame;
a detection module, configured to detect initial position information of each key point from the current video frame;
a mapping module, configured to map, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame;
an obtaining module, configured to obtain the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, the electronic device comprising:
one or more processing devices;
a storage device, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processing devices, the one or more processing devices implement the key point detection method described in any embodiment.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processing device, implements the key point detection method described in any embodiment.
In the embodiments of the present disclosure, initial position information of each key point is detected from the current video frame; the position information of each key point in the first history video frame is mapped, by optical flow, into the current video frame to obtain reference position information of each key point in the current video frame; and the position information of each key point in the current video frame is obtained according to the initial position information and the reference position information of each key point. In this way, based on the optical flow algorithm and with the historical positions of the key points as a reference, the position information of the key points in the current video frame is obtained, which effectively improves the accuracy of key point detection. Meanwhile, even when a key point in the current video frame is occluded or blurred by motion, its position information can still be detected relatively accurately.
Brief description of the drawings
Fig. 1 is a flowchart of a key point detection method provided in embodiment one of the present disclosure;
Fig. 2 is a flowchart of a key point detection method provided in embodiment two of the present disclosure;
Fig. 3 is a flowchart of a key point detection method provided in embodiment three of the present disclosure;
Fig. 4 is a flowchart of a key point detection method provided in embodiment four of the present disclosure;
Fig. 5 is a structural schematic diagram of a key point detection device provided in embodiment five of the present disclosure;
Fig. 6 is a structural schematic diagram of an electronic device provided in embodiment six of the present disclosure.
Specific embodiment
The present disclosure will be described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described herein are only used to explain the present disclosure, rather than to limit it. It should also be noted that, for ease of description, only the parts relevant to the present disclosure, rather than the entire structure, are shown in the drawings. In the following embodiments, optional features and examples are provided in each embodiment, and the features described in the embodiments can be combined to form multiple optional solutions; each numbered embodiment should not be regarded as only a single technical solution.
Embodiment one
Fig. 1 is a flowchart of a key point detection method provided in embodiment one of the present disclosure. This embodiment is applicable to the case of performing key point detection on a video frame sequence showing a user image. The method can be executed by a key point detection device, which can be composed of hardware and/or software and integrated in an electronic device; the electronic device can be a server or a terminal. With reference to Fig. 1, the method provided in this embodiment of the present disclosure specifically includes the following operations:
S110: from a video frame sequence showing a user image, select a current video frame and a first history video frame preceding the current video frame.
The video frame sequence refers to consecutive video frames within a continuous video frame interval over a period of time in a video stream, and includes multiple video frames, for example 20 video frames. In this embodiment, the duration of the acquired video frame sequence should be relatively short, for example within a preset duration range such as 3 seconds, so that the display position and the posture of the user image change only slightly between video frames, thereby improving the accuracy of key point detection.
Optionally, a user image is shown in each video frame of the video frame sequence, and at least one key point is shown in the user image, such as the top of the user's head, the left shoulder, or the right knee.
In this embodiment, video frames are successively selected from the video frame sequence in chronological order as the current video frame, and S110-S140 are performed for each current video frame until all video frames in the video frame sequence have been processed. The purpose of the method provided in this embodiment is to detect the position information of each key point in the current video frame; in the detection process, the history video frames before the current video frame serve as a reference. Optionally, for ease of description and distinction, a history video frame used as a reference for key point positions is called a first history video frame, and, in subsequent embodiments, a history video frame used as a reference for regression box positions is called a second history video frame. The first history video frame and the second history video frame may be the same video frame or different video frames, and the number of either may be at least one. Preferably, in order to fully refer to the position information of each key point in the history video frames, the first history video frame includes N video frames before the current video frame, where N is a natural number, for example 8, 9 or 10.
S120: detect initial position information of each key point from the current video frame.
A key point detection model is trained in advance. The key point detection model is used to output the position information of each key point according to an input video frame, where the position information of each key point includes an identifier (such as an ID number) of the key point and the coordinates of the key point. In this embodiment, the current video frame is input into the key point detection model to obtain the position information of each key point. For ease of description and distinction, the position information detected directly from the current video frame is called initial position information. A minimal sketch of the assumed detector output follows.
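The patent does not specify the structure of the detection model, so the sketch below only fixes an assumed per-key-point output format (identifier, coordinates, confidence) that the later examples reuse; "detect_keypoints" is a hypothetical wrapper, not part of the patent.

```python
# Sketch only: assumed output format of the key point detection model.
from dataclasses import dataclass
from typing import List

@dataclass
class Keypoint:
    kp_id: int         # identifier of the key point, e.g. an ID number
    x: float           # horizontal coordinate in the video frame
    y: float           # vertical coordinate in the video frame
    confidence: float  # class probability output by the detection model

def detect_keypoints(frame) -> List[Keypoint]:
    """Run the (assumed pretrained) key point detection model on one video frame."""
    raise NotImplementedError("plug in the actual key point detection model here")
```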
S130: map, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame.
In this embodiment, an optical flow field algorithm is used to calculate the motion vectors of multiple pixels between the first history video frame and the current video frame, i.e., to establish an optical flow field. Since the time interval between video frames is short, the motion vectors of the background image should be essentially the same, while the motion vectors of the key points may differ slightly. Based on this property, the position information of each key point in the first history video frame is mapped into the current video frame according to the motion vector of the background image.
Optionally, S130 includes the following two steps:
Step 1: determine the motion vector of the background image according to the position information of background pixels in the first history video frame and their initial position information in the current video frame.
In one example, the display position of a background pixel in the first history video frame is coordinate point C, and its display position in the current video frame is coordinate point D; the motion vector of the background image is then the vector from C to D, i.e. D - C.
Step 2: respectively determine the reference position information of each key point in the current video frame according to the position information of the key point in the first history video frame and the motion vector of the background image.
This embodiment assumes that the position of a key point in the real environment does not change between the two frames. If the position information of a key point in the first history video frame is coordinate point A, then the position information of the key point in the current video frame is A + (D - C), i.e. A shifted by the background motion vector. For ease of description and distinction, the position information of each key point mapped into the current video frame is called reference position information. A sketch of this mapping is given below.
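The following sketch illustrates S130 under the assumption that the background motion vector can be approximated by the median displacement of a dense optical flow field; OpenCV's Farneback flow is used here as one possible optical flow algorithm, not as the specific algorithm of the patent.

```python
# Illustrative sketch of mapping key points from a history frame into the current frame.
import cv2
import numpy as np

def map_keypoints_by_flow(history_gray, current_gray, history_points):
    """history_points: array of shape (K, 2) with (x, y) per key point."""
    # Dense optical flow from the first history video frame to the current video frame.
    flow = cv2.calcOpticalFlowFarneback(history_gray, current_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Most pixels in a short clip belong to the background, so the median
    # displacement is taken as the motion vector of the background image (C -> D).
    background_motion = np.median(flow.reshape(-1, 2), axis=0)
    # Reference position information: historical position A shifted by (D - C).
    return np.asarray(history_points, dtype=float) + background_motion
```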
S140: respectively obtain the position information of each key point in the current video frame according to the initial position information and the reference position information of the key point.
In one example, the number of first history video frames is 10 and the number of key points is 5; then 10 pieces of reference position information and 1 piece of initial position information are obtained for each key point. For each key point, the initial position information and the reference position information are comprehensively analyzed to obtain the optimal position information.
In this embodiment, initial position information of each key point is detected from the current video frame; the position information of each key point in the first history video frame is mapped, by optical flow, into the current video frame to obtain reference position information of each key point in the current video frame; and the position information of each key point in the current video frame is obtained according to the initial position information and the reference position information of each key point. In this way, based on the optical flow algorithm and with the historical positions of the key points as a reference, the position information of the key points in the current video frame is obtained, which effectively improves the accuracy of key point detection. Meanwhile, even when a key point in the current video frame is occluded or blurred by motion, its position information can still be detected relatively accurately.
Embodiment two
Fig. 2 is a flowchart of a key point detection method provided in embodiment two of the present disclosure. This embodiment further optimizes the optional implementations of the above embodiment: the regression box in the second history video frame is used as a reference to obtain the regression box in the current video frame, thereby improving the accuracy of key point detection. With reference to Fig. 2, the method provided in this embodiment specifically includes the following operations:
S210: from a video frame sequence showing a user image, select a current video frame and a first history video frame preceding the current video frame.
S220: select a second history video frame preceding the current video frame.
For ease of description and distinction, a history video frame used as a reference for the regression box is called a second history video frame.
It is worth noting that S220 only needs to be executed before S240; this embodiment does not limit the execution order of S220. Preferably, S220 is executed synchronously with S210.
S230: in the current video frame, determine candidate boxes including the key points.
In general, multiple candidate boxes can be determined in the current video frame, and these candidate boxes are usually overlapping or redundant. For ease of description and distinction, a box determined directly from the current video frame is called a candidate box.
S240: map, by optical flow, the regression box including the key points in the second history video frame into the current video frame to obtain a reference regression box of the current video frame.
Similar to the mapping of key points, in this embodiment an optical flow field algorithm is used to calculate the motion vectors of multiple pixels between the second history video frame and the current video frame, i.e., to establish an optical flow field. According to the motion vector of the background image, the position information of the regression box in the second history video frame is mapped into the current video frame, where the position information of the regression box includes the length, the width and the center coordinates of the box.
Optionally, S240 includes the following two steps:
Step 1: determine the motion vector of the background image according to the position information of background pixels in the second history video frame and their initial position information in the current video frame.
Step 2: determine the reference regression box including the key points in the current video frame according to the position information of the regression box including the key points in the second history video frame and the motion vector of the background image. A sketch of this box mapping follows.
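A minimal sketch of S240, under the same background-motion assumption as the key point mapping above: the regression box from the second history video frame is simply translated by the background motion vector, with its length and width kept unchanged. The box representation here is an assumption.

```python
# Sketch only: translate a regression box by the background motion vector.
def map_box_by_background_motion(history_box, background_motion):
    """history_box: (cx, cy, w, h) center coordinates plus width and height."""
    cx, cy, w, h = history_box
    dx, dy = background_motion
    return (cx + dx, cy + dy, w, h)  # reference regression box in the current frame
```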
S250: obtain the regression box in the current video frame according to the candidate boxes and the reference regression box.
Optionally, non-maximum suppression is performed on the candidate boxes and the reference regression box to obtain the regression box in the current video frame. For example, the current video frame, the candidate boxes and the reference regression box are input into a non-maximum suppression (NMS) network, redundant regression boxes are removed, and the optimal regression box in the current video frame is obtained. The NMS network itself belongs to the prior art and is not described in detail here; a generic NMS sketch is given below.
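The following is a generic IoU-based non-maximum suppression sketch standing in for the NMS network mentioned above; the patent does not detail that network, so the scores and the threshold here are placeholders.

```python
# Generic non-maximum suppression over candidate boxes plus the reference box.
import numpy as np

def iou(a, b):
    """Boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep high-scoring boxes and drop boxes overlapping an already kept one."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept  # indices of the retained boxes, best first

# Usage: kept = nms(candidate_boxes + [reference_box], candidate_scores + [ref_score])
```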
It is worth noting that, since multiple candidate regression boxes can be obtained in the current video frame and this operation finally only needs to retain a limited number of optimal regression boxes, the number of reference regression boxes should not be too large. Optionally, the number of reference regression boxes is 1, and correspondingly, the second history video frame is the video frame immediately preceding the current video frame.
S260: detect initial position information of each key point from the regression box in the current video frame.
Optionally, according to the position information of the regression box, the image of the regression box is cropped from the current video frame, and the cropped image is then input into the key point detection model to obtain the position information of each key point, as sketched below.
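A sketch of S260, reusing the hypothetical "detect_keypoints" wrapper introduced in embodiment one: crop the regression box out of the current video frame, run the detector on the crop, then shift the detected coordinates back to full-frame coordinates.

```python
# Sketch only: detect key points inside the regression box.
def detect_in_regression_box(frame, box):
    """box: (x1, y1, x2, y2) pixel coordinates in the current video frame."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    crop = frame[y1:y2, x1:x2]                 # image of the regression box
    keypoints = detect_keypoints(crop)         # initial positions in crop coordinates
    for kp in keypoints:                       # map back into frame coordinates
        kp.x += x1
        kp.y += y1
    return keypoints
```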
In this embodiment, the regression box including the key points in the second history video frame is mapped, by optical flow, into the current video frame to obtain the reference regression box of the current video frame, and the regression box in the current video frame is obtained according to the candidate boxes and the reference regression box. In this way, based on the optical flow algorithm and with the historical position of the regression box as a reference, the regression box in the current video frame is obtained, which effectively improves the accuracy of key point detection. Meanwhile, even when a key point in the current video frame is occluded or blurred by motion, its position information can still be detected relatively accurately. Moreover, by detecting the initial position information of each key point from the regression box in the current video frame, interference factors outside the regression box can be removed, further improving the accuracy of key point detection.
Embodiment three
Fig. 3 is a flowchart of a key point detection method provided in embodiment three of the present disclosure. This embodiment further optimizes the optional implementations of the above embodiments. Optionally, "respectively obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point" is refined into "respectively obtaining, according to the confidence of the initial position information and the confidence of the reference position information of each key point, the position information of each key point in the current video frame whose confidence meets a first preset requirement", so that the position information of each key point in the current video frame is selected according to confidence. With reference to Fig. 3, the method provided in this embodiment specifically includes the following operations:
S310: from a video frame sequence showing a user image, select a current video frame and a first history video frame preceding the current video frame.
S320: detect initial position information of each key point from the current video frame.
S330: map, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame.
S340: according to the confidence of the initial position information and the confidence of the reference position information of each key point, respectively obtain the position information of each key point in the current video frame whose confidence meets a first preset requirement.
In this embodiment, in addition to outputting the position information of each key point, the key point detection model also outputs the confidence of each key point, i.e., the class probability of the key point. While mapping each key point in the first history video frame into the current video frame, this embodiment also obtains the confidence of each key point in the first history video frame. Since each key point in the first history video frame is detected by the key point detection model, it carries a corresponding confidence, for example 0.9 or 0.8. Likewise, the initial position information of each key point in the current video frame is detected by the key point detection model and therefore also carries a corresponding confidence.
The key point detection model obtains the confidence of each key point by selecting the maximum value on the feature map corresponding to the key point, or by integrating the feature values, as sketched below.
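A rough sketch of the two confidence options above, assuming the model outputs one feature map (heatmap) per key point; the exact integration scheme is not specified in the patent, so the normalized weighted sum below is only one possible interpretation.

```python
# Sketch only: confidence from a per-key-point feature map.
import numpy as np

def keypoint_confidence(heatmap, use_integral=False):
    if use_integral:
        weights = np.exp(heatmap - heatmap.max())
        weights /= weights.sum()                   # normalized feature values
        return float((weights * heatmap).sum())    # integrated confidence (assumed form)
    return float(heatmap.max())                    # peak value on the feature map
```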
For one key point, assuming there is 1 piece of initial position information and 10 pieces of reference position information, the position information of the key point whose confidence meets the first preset requirement is selected from these 11 pieces of position information.
Optionally, from the confidence of the initial position information and the confidences of the reference position information of each key point, the position information corresponding to the maximum confidence is respectively selected as the position information of the key point in the current video frame. Alternatively, from the confidence of the initial position information and the confidences of the reference position information of each key point, position information corresponding to a confidence greater than or equal to a confidence threshold is respectively selected as the position information of the key point in the current video frame. For one key point, if more than two confidences are greater than or equal to the confidence threshold, the position information corresponding to any one of them, or the position information corresponding to the maximum confidence, can be selected as the position information of the key point. A sketch of this selection is given below.
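The sketch below applies the selection rules above to a single key point: "candidates" is the 1 piece of initial position information plus the N pieces of reference position information, each as (x, y, confidence); the data layout is an assumption, not part of the patent.

```python
# Sketch only: pick one location per key point by confidence.
def select_location(candidates, conf_thresh=None):
    """candidates: list of (x, y, confidence) for one key point."""
    if conf_thresh is not None:
        eligible = [c for c in candidates if c[2] >= conf_thresh]
        if eligible:                                      # any of these may be chosen;
            return max(eligible, key=lambda c: c[2])[:2]  # here: the largest one
    return max(candidates, key=lambda c: c[2])[:2]        # fall back to the maximum
```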
In the foregoing embodiments, the current video frame refers to each first history video frame to the same degree, i.e., the different reference position information and the initial position information of the same key point are combined with equal weight. To effectively distinguish the first history video frames from the current video frame, differentiated referencing of the video frames can be applied, which further improves the accuracy of key point detection.
Optionally, first, a first weight of the current video frame and a second weight of the first history video frame are determined. If the number of first history video frames is one, the second weight of that first history video frame is determined; if the number of first history video frames is two or more, the second weight of each first history video frame is determined respectively. Then, the confidence of the initial position information of each key point is weighted by the first weight to obtain a first weighted confidence of the key point, and the confidence of the reference position information of each key point is weighted by the second weight to obtain a second weighted confidence of the key point. If the number of first history video frames is two or more, the confidences of the reference position information in each first history video frame are weighted by the respective second weights. Then, from the first weighted confidence and the second weighted confidences of each key point, the position information corresponding to the weighted confidence that meets a second preset requirement is respectively selected as the position information of the key point in the current video frame. The second preset requirement includes that the weighted confidence is the maximum, or is greater than or equal to a weighted confidence threshold. If more than two weighted confidences are greater than or equal to the weighted confidence threshold, any one of them, or the maximum weighted confidence, can be selected. A sketch of this weighted selection follows.
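A sketch of the weighted selection, using the example weights given in the following paragraph (first weight 0.8, second weights 0.5 and 0.4); real weights would come from the machine learning procedure described next, and the function and parameter names here are assumptions.

```python
# Sketch only: select the location with the largest weighted confidence.
def select_weighted_location(initial, references, first_weight=0.8,
                             second_weights=(0.5, 0.4)):
    """initial: (x, y, conf); references: one (x, y, conf) per first history frame."""
    scored = [(initial[0], initial[1], initial[2] * first_weight)]   # first weighted confidence
    for (x, y, conf), w in zip(references, second_weights):
        scored.append((x, y, conf * w))                              # second weighted confidences
    best = max(scored, key=lambda c: c[2])                           # maximum weighted confidence
    return best[0], best[1]
```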
Optionally, the first weight and the second weight can be obtained by a machine learning algorithm. First, a training set and a validation set are constructed, and the weights of the first history video frames and the current video frame are trained on the training set, so that the position information corresponding to the maximum weighted confidence approaches the true position information of the key point in the current video frame. Then, the trained weights are cross-validated on the validation set, finally obtaining the weights of the first history video frames and the current video frame. Assuming that the first history video frames include video frame A and video frame B, with second weights of 0.5 and 0.4 respectively, and that the first weight is 0.8, the confidence of the reference position information of each key point mapped from video frame A is multiplied by 0.5, the confidence of the reference position information of each key point mapped from video frame B is multiplied by 0.4, and the confidence of the initial position information of each key point in the current video frame is multiplied by 0.8. Then, for each key point, the position information corresponding to the maximum weighted confidence is selected.
In this embodiment, the position information of each key point in the current video frame whose confidence meets the first preset requirement is obtained according to the confidence of the initial position information and the confidence of the reference position information of the key point. Since the confidence indirectly reflects the accuracy of a key point, obtaining the position information of the key point according to the confidence can improve the accuracy of key point detection.
Embodiment four
Fig. 4 is a flowchart of a key point detection method provided in embodiment four of the present disclosure. This embodiment further optimizes the optional implementations of the above embodiments. Optionally, "respectively obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point" is refined into "respectively determining a third weight of the reference position information of each key point according to the reciprocal of the distance between the initial position information and the reference position information of the key point; and weighted-averaging the reference position information and the initial position information using the third weight and a default weight to obtain the position information of each key point in the current video frame", so that the distances between the positions of a key point are taken into account, improving the accuracy of key point detection. With reference to Fig. 4, the method provided in this embodiment specifically includes the following operations:
S410: from a video frame sequence showing a user image, select a current video frame and a first history video frame preceding the current video frame.
S420: detect initial position information of each key point from the current video frame.
S430: map, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame.
S440: respectively determine a third weight of the reference position information of each key point according to the reciprocal of the distance between the initial position information and the reference position information of the key point.
If the number of pieces of reference position information of a key point is 1, the reciprocal of the distance between the reference position information and the initial position information is determined as the third weight of the reference position information. If the number of pieces of reference position information of a key point is two or more, the reciprocal of the distance between each piece of reference position information and the initial position information is determined as the third weight of that piece of reference position information.
Assume that the initial position information of a key point is A (X1, Y1), the reference position information is B (X2, Y2) and C (X3, Y3), the distance between A and B is L1, and the distance between A and C is L2; then the third weight of reference position information B is determined as 1/L1, and the third weight of reference position information C is determined as 1/L2.
S450: weighted-average the reference position information and the initial position information using the third weights and a default weight to obtain the position information of each key point in the current video frame.
In this embodiment, the weight of the initial position information is set to a default weight, for example 1. Continuing the above example, the weight of initial position information A is set to 1, and the position information (XC, YC) of the key point in the current video frame is obtained according to the formula (XC, YC) = (1·(X1, Y1) + (1/L1)·(X2, Y2) + (1/L2)·(X3, Y3)) / (1 + 1/L1 + 1/L2). A sketch of this fusion is given below.
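A minimal sketch of S440-S450: the initial position gets the default weight (1 here), each reference position gets a third weight equal to the reciprocal of its distance to the initial position, and the result is the weighted average of all positions. The function name and the zero-distance guard are assumptions.

```python
# Sketch only: inverse-distance weighted average of one key point's positions.
import numpy as np

def fuse_by_inverse_distance(initial_xy, reference_xys, default_weight=1.0):
    initial = np.asarray(initial_xy, dtype=float)
    points, weights = [initial], [default_weight]
    for ref in reference_xys:
        ref = np.asarray(ref, dtype=float)
        dist = np.linalg.norm(ref - initial)
        points.append(ref)
        weights.append(1.0 / max(dist, 1e-6))   # third weight; guard against zero distance
    points, weights = np.stack(points), np.asarray(weights)
    return tuple((weights[:, None] * points).sum(axis=0) / weights.sum())

# Usage: fuse_by_inverse_distance((X1, Y1), [(X2, Y2), (X3, Y3)])  # -> (XC, YC)
```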
In some embodiments, the position information of each key point in the current video frame can be obtained by combining distance and confidence, so as to further improve the accuracy of key point detection. Optionally, the third weight of the reference position information is determined according to the reciprocal of the distance between the initial position information and the reference position information whose confidence meets the first preset requirement, and the reference position information and the initial position information are weighted-averaged using the third weight and the default weight to obtain the position information of each key point in the current video frame. Optionally, the third weight of the reference position information is determined according to the reciprocal of the distance between the initial position information and the reference position information whose weighted confidence meets the second preset requirement, and the reference position information and the initial position information are weighted-averaged using the third weight and the default weight to obtain the position information of each key point in the current video frame.
Embodiment five
Fig. 5 is a structural schematic diagram of a key point detection device provided in embodiment five of the present disclosure, comprising: a choosing module 51, a detection module 52, a mapping module 53 and an obtaining module 54.
The choosing module 51 is configured to select, from a video frame sequence showing a user image, a current video frame and a first history video frame preceding the current video frame.
The detection module 52 is configured to detect initial position information of each key point from the current video frame.
The mapping module 53 is configured to map, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame.
The obtaining module 54 is configured to obtain the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point.
In this embodiment, initial position information of each key point is detected from the current video frame; the position information of each key point in the first history video frame is mapped, by optical flow, into the current video frame to obtain reference position information of each key point in the current video frame; and the position information of each key point in the current video frame is obtained according to the initial position information and the reference position information of each key point. In this way, based on the optical flow algorithm and with the historical positions of the key points as a reference, the position information of the key points in the current video frame is obtained, which effectively improves the accuracy of key point detection. Meanwhile, even when a key point in the current video frame is occluded or blurred by motion, its position information can still be detected relatively accurately.
Optionally, when mapping, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain the reference position information of each key point in the current video frame, the mapping module 53 is specifically configured to: determine the motion vector of the background image according to the position information of background pixels in the first history video frame and their initial position information in the current video frame; and respectively determine the reference position information of each key point in the current video frame according to the position information of the key point in the first history video frame and the motion vector of the background image.
Optionally, the choosing module 51 is further configured to select a second history video frame preceding the current video frame. When detecting the initial position information of each key point from the current video frame, the detection module 52 is specifically configured to: determine, in the current video frame, candidate boxes including the key points; map, by optical flow, the regression box including the key points in the second history video frame into the current video frame to obtain a reference regression box of the current video frame; obtain the regression box in the current video frame according to the candidate boxes and the reference regression box; and detect the initial position information of each key point from the regression box in the current video frame.
Optionally, when obtaining the regression box in the current video frame according to the candidate boxes and the reference regression box, the detection module 52 is specifically configured to: perform non-maximum suppression on the candidate boxes and the reference regression box to obtain the regression box in the current video frame.
Optionally, when mapping, by optical flow, the regression box including the key points in the second history video frame into the current video frame to obtain the reference regression box of the current video frame, the detection module 52 is specifically configured to: determine the motion vector of the background image according to the position information of background pixels in the second history video frame and their initial position information in the current video frame; and determine the reference regression box including the key points in the current video frame according to the position information of the regression box including the key points in the second history video frame and the motion vector of the background image.
Optionally, when obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point, the obtaining module 54 is specifically configured to: respectively obtain, according to the confidence of the initial position information and the confidence of the reference position information of each key point, the position information of each key point in the current video frame whose confidence meets a first preset requirement. Optionally, the first preset requirement includes that the confidence is the maximum, or that the confidence is greater than or equal to a confidence threshold.
Optionally, when respectively obtaining, according to the confidence of the initial position information and the confidence of the reference position information of each key point, the position information of each key point in the current video frame whose confidence meets the first preset requirement, the obtaining module 54 is specifically configured to: determine a first weight of the current video frame and a second weight of the first history video frame; weight the confidence of the initial position information of each key point by the first weight to obtain a first weighted confidence of the key point; weight the confidence of the reference position information of each key point by the second weight to obtain a second weighted confidence of the key point; and, from the first weighted confidence and the second weighted confidence of each key point, respectively select the position information corresponding to the weighted confidence that meets a second preset requirement as the position information of the key point in the current video frame.
Optionally, when obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point, the obtaining module 54 is specifically configured to: respectively determine a third weight of the reference position information of each key point according to the reciprocal of the distance between the initial position information and the reference position information of the key point; and weighted-average the reference position information and the initial position information using the third weight and a default weight to obtain the position information of each key point in the current video frame.
The key point detection device provided in this embodiment of the present disclosure can execute the key point detection method provided in any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the method.
Embodiment six
Referring now to Fig. 6, it shows a structural schematic diagram of an electronic device 600 suitable for implementing the embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), fixed terminals such as digital TVs and desktop computers, and servers of various forms, such as standalone servers or server clusters. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing device 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; an output device 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a storage device 608 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 can allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. Although Fig. 6 shows the electronic device 600 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method illustrated in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: select, from a video frame sequence showing a user image, a current video frame and a first history video frame preceding the current video frame; detect initial position information of each key point from the current video frame; map, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame; and obtain the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point.
The computer program code for executing the operations of the present disclosure can be written in one or more programming languages or a combination thereof. The above programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on a user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment or part of code, which contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments of the present disclosure can be implemented in software or in hardware. The name of a module does not constitute a limitation on the module itself under certain circumstances; for example, the choosing module can also be described as "a module for choosing the current video frame and the history video frame".
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.

Claims (12)

1. A key point detection method, characterized by comprising:
selecting, from a video frame sequence showing a user image, a current video frame and a first history video frame preceding the current video frame;
detecting initial position information of each key point from the current video frame;
mapping, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain reference position information of each key point in the current video frame;
obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point.
2. The method according to claim 1, characterized in that mapping, by optical flow, the position information of each key point in the first history video frame into the current video frame to obtain the reference position information of each key point in the current video frame comprises:
determining the motion vector of a background image according to position information of background pixels in the first history video frame and initial position information of the background pixels in the current video frame;
respectively determining the reference position information of each key point in the current video frame according to the position information of the key point in the first history video frame and the motion vector of the background image.
3. The method according to claim 1, characterized in that, before detecting the initial position information of each key point from the current video frame, the method further comprises:
selecting a second history video frame preceding the current video frame;
wherein detecting the initial position information of each key point from the current video frame comprises:
determining, in the current video frame, candidate boxes including the key points;
mapping, by optical flow, a regression box including the key points in the second history video frame into the current video frame to obtain a reference regression box of the current video frame;
obtaining a regression box in the current video frame according to the candidate boxes and the reference regression box;
detecting the initial position information of each key point from the regression box in the current video frame.
4. The method according to claim 3, characterized in that obtaining the regression box in the current video frame according to the candidate boxes and the reference regression box comprises:
performing non-maximum suppression on the candidate boxes and the reference regression box to obtain the regression box in the current video frame.
5. The method according to claim 3, characterized in that mapping, by optical flow, the regression box including the key points in the second history video frame into the current video frame to obtain the reference regression box of the current video frame comprises:
determining the motion vector of a background image according to position information of background pixels in the second history video frame and initial position information of the background pixels in the current video frame;
determining the reference regression box including the key points in the current video frame according to the position information of the regression box including the key points in the second history video frame and the motion vector of the background image.
6. The method according to claim 1, characterized in that respectively obtaining the position information of each key point in the current video frame according to the initial position information and the reference position information of each key point comprises:
respectively obtaining, according to the confidence of the initial position information and the confidence of the reference position information of each key point, the position information of each key point in the current video frame whose confidence meets a first preset requirement.
7. The method according to claim 6, characterized in that the first preset requirement comprises that the confidence is the maximum, or that the confidence is greater than or equal to a confidence threshold.
8. The method according to claim 6, wherein the obtaining, according to the confidence of the initial position information and the confidence of the reference position information of each key point, the position information of each key point whose confidence meets the first preset requirement in the current video frame comprises:
determining a first weight of the current video frame and a second weight of the first history video frame;
weighting the confidence of the initial position information of each key point with the first weight, to obtain a first weighted confidence of each key point;
weighting the confidence of the reference position information of each key point with the second weight, to obtain a second weighted confidence of each key point;
selecting, from the first weighted confidence and the second weighted confidence of each key point respectively, the position information corresponding to the weighted confidence that meets a second preset requirement, as the position information of each key point in the current video frame.
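Illustration (not part of the claims): a sketch of the weighting scheme of claim 8. The numeric weights are illustrative, and the second preset requirement is taken here to be "largest weighted confidence", which is only one possible instantiation.

```python
def select_by_weighted_confidence(initial, reference, w_current, w_history):
    """Weight the initial confidences with the current frame's weight and the
    reference confidences with the first history frame's weight, then keep the
    position with the larger weighted confidence (assumed second requirement)."""
    result = []
    for (xi, yi, ci), (xr, yr, cr) in zip(initial, reference):
        weighted_initial = w_current * ci    # first weighted confidence
        weighted_reference = w_history * cr  # second weighted confidence
        result.append((xi, yi) if weighted_initial >= weighted_reference else (xr, yr))
    return result

initial   = [(100.0, 50.0, 0.60)]
reference = [(102.0, 48.0, 0.80)]
# Trust the current frame's detector a little more than the propagated estimate.
print(select_by_weighted_confidence(initial, reference, w_current=0.7, w_history=0.4))
# -> [(100.0, 50.0)] because 0.7 * 0.60 = 0.42 > 0.4 * 0.80 = 0.32
```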
9. The method according to claim 1, wherein the obtaining, according to the initial position information and the reference position information of each key point, the position information of each key point in the current video frame comprises:
determining, respectively according to the reciprocal of the distance between the initial position information and the reference position information of each key point, a third weight of the reference position information of each key point;
performing a weighted average of the reference position information and the initial position information with the third weight and a default weight, respectively, to obtain the position information of each key point in the current video frame.
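Illustration (not part of the claims): claim 9 fuses the two estimates with a distance-based weight. The sketch below uses Euclidean distance and adds a small epsilon to avoid division by zero; both choices are assumptions not stated in the claim.

```python
import math

def fuse_positions(initial, reference, default_weight=1.0, eps=1e-6):
    """Weighted average of the optical-flow (reference) and detected (initial)
    positions: the reference position's weight is the reciprocal of its
    distance to the initial position, so the farther the two estimates drift
    apart, the less the propagated estimate counts."""
    fused = []
    for (xi, yi), (xr, yr) in zip(initial, reference):
        distance = math.hypot(xr - xi, yr - yi)
        w_ref = 1.0 / (distance + eps)           # third weight (eps is an added safeguard)
        total = w_ref + default_weight
        fused.append((
            (w_ref * xr + default_weight * xi) / total,
            (w_ref * yr + default_weight * yi) / total,
        ))
    return fused

print(fuse_positions(initial=[(100.0, 50.0)], reference=[(104.0, 53.0)]))
# distance = 5, so the reference weight is 0.2 and the result leans towards (100, 50)
```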
10. A key point detection apparatus, comprising:
a selection module, configured to select, from a video frame sequence showing user images, a current video frame and a first history video frame before the current video frame;
a detection module, configured to detect the initial position information of each key point from the current video frame;
a mapping module, configured to map, by optical flow, the position information of each key point in the first history video frame into the current video frame, to obtain the reference position information of each key point in the current video frame;
an obtaining module, configured to obtain, respectively according to the initial position information and the reference position information of each key point, the position information of each key point in the current video frame.
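Illustration (not part of the claims): the module split of claim 10 maps naturally onto a small class whose members mirror the selection, detection, mapping and obtaining modules. The class below wires them together with injected callables and is purely a sketch.

```python
class KeypointDetector:
    """Illustrative counterpart of claim 10's four modules."""

    def __init__(self, detect_fn, flow_map_fn, fuse_fn):
        self.detect_fn = detect_fn        # detection module
        self.flow_map_fn = flow_map_fn    # mapping module (optical flow)
        self.fuse_fn = fuse_fn            # obtaining module

    def choose_frames(self, frames, index):
        # Selection module: the current frame plus a first history frame
        # before it (falls back to the frame itself at index 0).
        return frames[index], frames[max(index - 1, 0)]

    def run(self, frames, index):
        current, history = self.choose_frames(frames, index)
        initial = self.detect_fn(current)
        reference = self.flow_map_fn(history, current)
        return self.fuse_fn(initial, reference)
```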
11. An electronic device, comprising:
one or more processing units; and
a storage device, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processing units, the one or more processing units implement the key point detection method according to any one of claims 1 to 9.
12. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processing unit, implements the key point detection method according to any one of claims 1 to 9.
CN201811473824.5A 2018-12-04 2018-12-04 Key point detection method, device, equipment and readable medium Active CN109583391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811473824.5A CN109583391B (en) 2018-12-04 2018-12-04 Key point detection method, device, equipment and readable medium

Publications (2)

Publication Number Publication Date
CN109583391A true CN109583391A (en) 2019-04-05
CN109583391B CN109583391B (en) 2021-07-16

Family

ID=65926914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811473824.5A Active CN109583391B (en) 2018-12-04 2018-12-04 Key point detection method, device, equipment and readable medium

Country Status (1)

Country Link
CN (1) CN109583391B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060269155A1 (en) * 2005-05-09 2006-11-30 Lockheed Martin Corporation Continuous extended range image processing
WO2015146813A1 (en) * 2014-03-28 2015-10-01 株式会社ソニー・コンピュータエンタテインメント Object manipulation method, object manipulation program, and information processing device
CN105447432A (en) * 2014-08-27 2016-03-30 北京千搜科技有限公司 Face anti-fake method based on local motion pattern
CN104376576A (en) * 2014-09-04 2015-02-25 华为技术有限公司 Target tracking method and device
CN104408743A (en) * 2014-11-05 2015-03-11 百度在线网络技术(北京)有限公司 Image segmentation method and device
US20160171656A1 (en) * 2014-12-11 2016-06-16 Sharp Laboratories Of America, Inc. System for video super resolution using semantic components
US20160342837A1 (en) * 2015-05-19 2016-11-24 Toyota Motor Engineering & Manufacturing North America, Inc. Apparatus and method for object tracking
CN104933735A (en) * 2015-06-30 2015-09-23 中国电子科技集团公司第二十九研究所 A real time human face tracking method and a system based on spatio-temporal context learning
US20170094310A1 (en) * 2015-09-30 2017-03-30 Sony Corporation Image processing system with optical flow recovery mechanism and method of operation thereof
CN106780557A (en) * 2016-12-23 2017-05-31 南京邮电大学 A kind of motion target tracking method based on optical flow method and crucial point feature
US20180225517A1 (en) * 2017-02-07 2018-08-09 Fyusion, Inc. Skeleton detection and tracking via client-server communication
CN108229282A (en) * 2017-05-05 2018-06-29 商汤集团有限公司 Critical point detection method, apparatus, storage medium and electronic equipment
CN108205655A (en) * 2017-11-07 2018-06-26 北京市商汤科技开发有限公司 A kind of key point Forecasting Methodology, device, electronic equipment and storage medium
CN108280444A (en) * 2018-02-26 2018-07-13 江苏裕兰信息科技有限公司 A kind of fast motion object detection method based on vehicle panoramic view
CN108898118A (en) * 2018-07-04 2018-11-27 腾讯科技(深圳)有限公司 A kind of video data handling procedure, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KARL PAUWELS et al.: "Real-Time Model-Based Rigid Object Pose Estimation and Tracking Combining Dense and Sparse Visual Cues", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
WU JIN et al.: "Target Tracking Based on Region Convolutional Neural Network and Optical Flow", Telecommunication Engineering (《电讯技术》) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027412A (en) * 2019-11-20 2020-04-17 北京奇艺世纪科技有限公司 Human body key point identification method and device and electronic equipment
CN111027412B (en) * 2019-11-20 2024-03-08 北京奇艺世纪科技有限公司 Human body key point identification method and device and electronic equipment
EP3866065B1 (en) * 2020-02-13 2023-07-12 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Target detection method, device and storage medium
CN113255411A (en) * 2020-02-13 2021-08-13 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and storage medium
CN111401228B (en) * 2020-03-13 2023-12-19 中科创达软件股份有限公司 Video target labeling method and device and electronic equipment
CN111401228A (en) * 2020-03-13 2020-07-10 中科创达软件股份有限公司 Video target labeling method and device and electronic equipment
CN113436226A (en) * 2020-03-23 2021-09-24 北京沃东天骏信息技术有限公司 Method and device for detecting key points
CN112066988A (en) * 2020-08-17 2020-12-11 联想(北京)有限公司 Positioning method and positioning equipment
WO2023098617A1 (en) * 2021-12-03 2023-06-08 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113887547A (en) * 2021-12-08 2022-01-04 北京世纪好未来教育科技有限公司 Key point detection method and device and electronic equipment
CN113887547B (en) * 2021-12-08 2022-03-08 北京世纪好未来教育科技有限公司 Key point detection method and device and electronic equipment
WO2023151348A1 (en) * 2022-02-10 2023-08-17 腾讯科技(深圳)有限公司 Method for processing key points in image, and related apparatus
CN115511818A (en) * 2022-09-21 2022-12-23 北京医准智能科技有限公司 Optimization method, device, equipment and storage medium of pulmonary nodule detection model

Also Published As

Publication number Publication date
CN109583391B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN109583391A (en) Critical point detection method, apparatus, equipment and readable medium
JP7265003B2 (en) Target detection method, model training method, device, apparatus and computer program
CN109508681A (en) The method and apparatus for generating human body critical point detection model
CN109584276A (en) Critical point detection method, apparatus, equipment and readable medium
US20210343041A1 (en) Method and apparatus for obtaining position of target, computer device, and storage medium
CN107077738B (en) System and method for tracking object
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN109191514A (en) Method and apparatus for generating depth detection model
CN108198044A (en) Methods of exhibiting, device, medium and the electronic equipment of merchandise news
CN109495695A (en) Moving object special video effect adding method, device, terminal device and storage medium
CN111552888A (en) Content recommendation method, device, equipment and storage medium
CN114339409B (en) Video processing method, device, computer equipment and storage medium
CN109525891A (en) Multi-user's special video effect adding method, device, terminal device and storage medium
CN110059623B (en) Method and apparatus for generating information
EP4390728A1 (en) Model training method and apparatus, device, medium and program product
CN115471662B (en) Training method, recognition method, device and storage medium for semantic segmentation model
CN109348277A (en) Move pixel special video effect adding method, device, terminal device and storage medium
CN112115900B (en) Image processing method, device, equipment and storage medium
CN111589138B (en) Action prediction method, device, equipment and storage medium
CN116188684A (en) Three-dimensional human body reconstruction method based on video sequence and related equipment
CN110225400A (en) A kind of motion capture method, device, mobile terminal and storage medium
CN114416260A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110334650A (en) Object detecting method, device, electronic equipment and storage medium
CN109816791B (en) Method and apparatus for generating information
CN110060477A (en) Method and apparatus for pushed information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant