WO2021008158A1 - Human body key point detection method and apparatus, electronic device, and storage medium - Google Patents
Human body key point detection method and apparatus, electronic device, and storage medium
- Publication number
- WO2021008158A1 (PCT/CN2020/080231, priority CN2020080231W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- human body
- data
- image
- pose data
- key points
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Definitions
- The present disclosure relates to the technical field of human body detection, and in particular to a human body key point detection method and apparatus, an electronic device, and a storage medium.
- Human body key point detection is an application developed on the basis of deep learning algorithms.
- Deep learning, as an important branch of machine learning, has been applied across many industries.
- In application scenarios such as somatosensory games and human motion monitoring, there is currently no effective solution for accurately detecting human body key points while the body is in motion.
- The present disclosure proposes a technical solution for detecting human body key points.
- According to an aspect of the present disclosure, a human body key point detection method is provided, including:
- performing human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point positions, to obtain 3D pose data for identifying the human body key point positions.
- In the present disclosure, the two-dimensional coordinate data identifying the positions of human body key points in the image is extracted, so that 2D pose data can be obtained.
- The 2D pose data and the depth data corresponding to the human body key point positions are fused at the human body key point level; the resulting 3D pose data is three-dimensional coordinate data identifying the human body key point positions.
- With the three-dimensional coordinate data, accurate human body key point detection can be achieved while the body is in motion.
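As an illustration of this fusion step (a sketch of our own, not taken from the disclosure), a 2D key point plus its sampled depth value can be lifted to 3D camera coordinates with a pinhole camera model; the intrinsics `fx, fy, cx, cy` and the helper names are assumptions for the example:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2D key point (u, v) with its depth value (in metres)
    into 3D camera coordinates using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def fuse_pose(pose_2d, depth_at, fx, fy, cx, cy):
    """Turn a list of 2D key points into 3D pose data by sampling
    the aligned depth map at each key point position."""
    return [backproject(u, v, depth_at(u, v), fx, fy, cx, cy)
            for (u, v) in pose_2d]

# Example: one key point at the image centre, 2 m from the camera.
pose_3d = fuse_pose([(320, 240)], lambda u, v: 2.0,
                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# pose_3d == [(0.0, 0.0, 2.0)]
```

Any real implementation would obtain the intrinsics from camera calibration; the constant-depth lambda here merely stands in for a lookup into the aligned depth map.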
- In a possible implementation, before performing human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point positions, the method further includes:
- aligning the RGB data and the depth data to obtain RGBD data, which realizes data preprocessing so that corresponding image processing can then be performed on the RGB data and the RGBD data respectively.
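A minimal sketch of the alignment result (illustrative only; the disclosure does not specify a data layout): an RGB frame and a depth map already registered to it are combined pixel by pixel into four-channel RGBD data.

```python
def to_rgbd(rgb, depth):
    """Concatenate an RGB frame and its aligned depth map into RGBD.
    rgb:   H x W list of (r, g, b) tuples
    depth: H x W list of depth values, registered to the RGB frame
    Returns an H x W list of (r, g, b, d) tuples."""
    if len(rgb) != len(depth) or any(len(r) != len(d) for r, d in zip(rgb, depth)):
        raise ValueError("RGB and depth frames must have the same resolution")
    return [[(*px, d) for px, d in zip(rgb_row, d_row)]
            for rgb_row, d_row in zip(rgb, depth)]

rgb = [[(255, 0, 0), (0, 255, 0)]]   # a tiny 1x2 frame
depth = [[1.5, 2.0]]                 # depth in metres, same resolution
rgbd = to_rgbd(rgb, depth)
# rgbd == [[(255, 0, 0, 1.5), (0, 255, 0, 2.0)]]
```

Registering the raw depth sensor output to the RGB viewpoint (the harder half of alignment) is hardware-specific and omitted here.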
- In a possible implementation, detecting that the image contains a human body includes:
- determining, according to the human body recognition network, that the multiple image features are human body key point features.
- In a possible implementation, before performing human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point positions, the method further includes:
- obtaining multiple depth data after the second image processing, until image processing is completed for at least one frame of image, and then combining the multiple depth data with the 2D pose data to realize human body key point feature fusion.
- the method further includes:
- In the present disclosure, position changes of the human body key points corresponding to the first human body motion state are described by the first 3D pose data, and by sending the first control instruction to the receiving-side device, a motion simulation operation corresponding to the first human body motion state is displayed on the screen of the receiving-side device.
- the method further includes:
- prompt information is issued according to the second control instruction, so as to adjust the second human body motion state to a target state according to the prompt information.
- In the present disclosure, position changes of the human body key points corresponding to the second human body motion state are described by the second 3D pose data, and prompt information is issued through the second control instruction, so that the second human body motion state is adjusted to meet the target state.
- the method further includes:
- the third 3D pose data is sent to the receiving-side device, so that an operation performed by an avatar sampling the third 3D pose data is displayed on the screen of the receiving-side device.
- In the present disclosure, position changes of the human body key points corresponding to the third human body motion state are described by the third 3D pose data, and sending the third 3D pose data to the receiving-side device enables the screen of that device to display the operation performed by the avatar sampling the third 3D pose data.
- the training process of the human body recognition network includes:
- pre-labeled human body key point features are used as training sample data, and the training sample data is input into the human body recognition network to be trained, until the output result meets the network training condition; the trained human body recognition network is thereby obtained.
- In the present disclosure, pre-labeled human body key point features are used as training sample data and input into the network to be trained; the trained human body recognition network can then be used for human body key point detection while ensuring detection efficiency and accuracy.
- a human body key point detection device comprising:
- a detection module, configured to, in response to detecting that an image contains a human body, extract two-dimensional coordinate data identifying the positions of human body key points in the image to obtain 2D pose data;
- a fusion module, configured to perform human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point positions, to obtain 3D pose data identifying the human body key point positions.
- the device further includes: a preprocessing module for:
- the detection module is further used for:
- determining, according to the human body recognition network, that the multiple image features are human body key point features.
- the device further includes: an image processing module for:
- multiple depth data are obtained after the second image processing, until the image processing is completed for at least one frame of image.
- the device further includes:
- the first posture acquisition module is used to acquire the first human motion state
- the first data description module is configured to describe the position changes of the key points of the human body corresponding to the first human motion state through the first 3D pose data;
- the first instruction sending module is configured to generate a first control instruction according to the first 3D pose data, and send the first control instruction to the receiving-side device, so as to display a motion simulation operation corresponding to the first human body motion state on the screen of the receiving-side device.
- the device further includes:
- the second posture acquisition module is used to acquire the second human motion state
- the second data description module is used to describe the position changes of the key points of the human body corresponding to the second human motion state through the second 3D pose data;
- a data comparison module configured to compare the second 3D pose data with pre-configured pose data, and generate a second control instruction if the comparison results are inconsistent;
- the prompt information sending module is configured to send prompt information according to the second control instruction, so as to adjust the second human body motion state to a target state according to the prompt information.
- the device further includes:
- the third posture acquisition module is used to acquire the third human motion state
- the third data description module is used to describe the position changes of the key points of the human body corresponding to the third human motion state through the third 3D pose data;
- the second instruction sending module is configured to send the third 3D pose data to the receiving-side device, to display on the screen of the receiving-side device the operation performed by the avatar sampling the third 3D pose data.
- the device further includes: a network training module for:
- pre-labeled human body key point features are used as training sample data, and the training sample data is input into the human body recognition network to be trained, until the output result meets the network training condition; the trained human body recognition network is thereby obtained.
- an electronic device including:
- a memory for storing processor executable instructions
- the processor is configured to execute the above-mentioned human body key point detection method.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the above-mentioned human body key point detection method is realized.
- a computer program including computer-readable code which, when run in an electronic device, causes a processor in the electronic device to execute the above human body key point detection method.
- In response to detecting that the image contains a human body, the two-dimensional coordinate data identifying the positions of human body key points in the image is extracted to obtain 2D pose data.
- Human body key point feature fusion is performed on the 2D pose data and the depth data corresponding to the human body key point positions, to obtain 3D pose data identifying the human body key point positions.
- In the present disclosure, the two-dimensional coordinate data identifying the positions of human body key points in the image is extracted, so that 2D pose data can be obtained.
- The 2D pose data and the depth data corresponding to the human body key point positions are fused at the human body key point level; the resulting 3D pose data is three-dimensional coordinate data identifying the human body key point positions.
- With the three-dimensional coordinate data, accurate human body key point detection can be achieved while the body is in motion.
- Fig. 1 shows a flowchart of a method for detecting key points of a human body according to an embodiment of the present disclosure.
- Fig. 2 shows a flowchart of a method for detecting key points of a human body according to an embodiment of the present disclosure.
- Fig. 3 shows a schematic diagram of key points of a human skeleton according to an embodiment of the present disclosure.
- FIG. 4 shows a scene diagram of a user holding a mobile phone terminal interacting with a large-screen device such as a TV according to an embodiment of the present disclosure.
- Fig. 5 shows a scene diagram for generating an avatar according to an embodiment of the present disclosure.
- Fig. 6 shows a schematic diagram of a human body detection scheme according to an embodiment of the present disclosure.
- Fig. 7 shows a block diagram of a human body key point detection device according to an embodiment of the present disclosure.
- FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- FIG. 9 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Deep learning algorithms have developed rapidly and have received widespread attention.
- Deep learning, as an important branch of machine learning, has been applied across many industries.
- Deep learning has become a key technology in industry by virtue of its excellent computational results and strong robustness.
- Traditional fully connected neural networks suffer from problems such as a large number of parameters, no use of positional information between pixels, and limited network depth (the deeper the network, the stronger its expressive power, but the number of parameters to train also grows).
- The Convolutional Neural Network (CNN) solves these problems well.
- The connections in a CNN are local: each neuron is no longer connected to every neuron in the previous layer, but only to a small subset of them. At the same time, a group of connections can share the same weight parameters, and the down-sampling strategy greatly reduces the number of parameters. Unlike the one-dimensional arrangement of a fully connected network, the neurons of a CNN are arranged in three dimensions. By removing a large number of unimportant parameters and retaining the important weights, a deep neural network can be realized that handles ever larger and more complex information.
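The savings from local connectivity and weight sharing can be made concrete with a back-of-the-envelope comparison (the layer sizes below are illustrative, not taken from the disclosure):

```python
def fc_params(in_features, out_features, bias=True):
    """Parameter count of one fully connected layer:
    every output unit connects to every input unit."""
    return in_features * out_features + (out_features if bias else 0)

def conv_params(in_ch, out_ch, k, bias=True):
    """Parameter count of one k x k convolution: the same k*k*in_ch
    weights per output channel are shared across every spatial position."""
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

# Mapping a 224x224 RGB image to 64 same-size feature maps:
dense = fc_params(224 * 224 * 3, 224 * 224 * 64)  # hundreds of billions
conv = conv_params(3, 64, 3)                      # 1792 parameters
```

The convolution achieves this reduction precisely because the weights do not depend on pixel position, which is also what lets it exploit the positional structure a fully connected layer ignores.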
- In the present disclosure, the 3D coordinates predicted from the RGB data are integrated with the depth data, which effectively reduces the dependence on the accuracy of the depth data collected by the 3D hardware module, thereby achieving better detection accuracy and robustness.
- Fig. 1 shows a flowchart of a method for detecting human key points according to an embodiment of the present disclosure.
- the method for detecting human key points is applied to a human body key point detection device.
- the human body key point detection device can be implemented by a terminal device or a server or other processing equipment.
- the terminal equipment can be user equipment (UE, User Equipment), mobile devices, cellular phones, cordless phones, personal digital assistants (PDAs, Personal Digital Assistant), handheld devices, computing devices, vehicle-mounted devices, wearable devices, etc.
- the method for detecting key points of the human body may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in Figure 1, the process includes:
- Step S101 In response to detecting that the image contains a human body, extract the two-dimensional coordinate data used to identify the position of the key point of the human body in the image to obtain 2D pose data.
- Step S102 Perform human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point position to obtain 3D pose data for identifying the human body key point position.
- 3D pose data can be obtained through 2D pose data+depth data.
- the 2D pose data is the two-dimensional coordinates of the human body key points in the RGB image;
- the 3D pose data is the three-dimensional coordinates of the human body key points.
- In the present disclosure, the human body can be accurately detected while in motion. For example, a given motion state can be decomposed into node poses such as raising a hand, kicking a leg, swinging the head, or bending over, so that the human body key points corresponding to these node poses are tracked in real time.
- FIG. 2 shows a flowchart of a method for detecting human body key points according to an embodiment of the present disclosure.
- the method for detecting human body key points is applied to a human body key point detection device.
- the human body key point detection device may be implemented by a terminal device or a server or other processing equipment.
- the terminal equipment can be user equipment (UE, User Equipment), mobile devices, cellular phones, cordless phones, personal digital assistants (PDAs, Personal Digital Assistant), handheld devices, computing devices, vehicle-mounted devices, wearable devices, etc.
- the method for detecting key points of the human body may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in Figure 2, the process includes:
- Step S201 Perform data alignment preprocessing on each frame of image in the RGB image data stream and the depth data corresponding to the same image to obtain the RGBD image data stream.
- RGB data and depth data need to be aligned to obtain RGBD data, and then the RGB data and RGBD data can be processed separately in the process of this method.
- Step S202 It is detected from the RGB image data stream that the image contains a human body, and the two-dimensional coordinate data used to identify the position of the key point of the human body in the image is extracted to obtain 2D pose data.
- Step S203 Obtain depth data from the RGBD image data stream, and perform human body key point feature fusion on the 2D pose data and the depth data (the depth data corresponding to the human body key point positions) to obtain 3D pose data identifying the human body key point positions.
- Each data pair composed of RGB and RGBD corresponds to an image frame from the same viewing angle. The human body key points in each frame of the RGB image data stream are aligned with the depth data corresponding to the same key points in the same image, so that every human body key point in the image has depth data corresponding to its position.
- the depth data is obtained from a depth map (DepthMap).
- A depth map can be considered an image (or image channel) whose values represent the distance from the collection position to the surface of the target object in the scene.
- the detecting that the image contains a human body includes: acquiring the RGB image data stream, and performing first image processing on each frame of the image in the RGB image data stream. For the current frame of image, multiple image features are obtained after the first image processing. In a case where it is determined that the multiple image features are key point features of the human body according to the human body recognition network, it is detected that the current frame of image contains a human body until the detection of at least one frame of image is completed.
- the method further includes: acquiring the RGBD image data stream before performing human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point position, Perform second image processing on each frame of image in the RGBD image data stream. For the current frame of image, multiple depth data are obtained after the second image processing, until the image processing is completed for at least one frame of image.
- data alignment preprocessing obtains multiple RGBD data streams based on multiple RGB data streams.
- The human body key points of each frame of image in the RGB image data stream can be aligned with the depth data corresponding to the human body key points in the same image. If RGB and RGBD are regarded as data pairs, then each RGB-RGBD data pair consists of image frames corresponding to the same viewing angle.
- multiple RGB and RGBD data pairs can be input.
- The logical model of the human body key point detection process of the present disclosure takes two kinds of input. For the first kind of data (RGB data), after the first image processing, the trained human body tracking network is used to determine whether a human body is detected in the current image frame.
- the target RGB data corresponding to the current image frame is handed over to the subsequent steps for processing.
- The RGBD data and the target RGB data are combined to obtain 3D pose data (the 3D coordinates of the human skeleton key points, derived from the RGBD data and the target RGB data).
- Dynamic tracking: the 3D pose data of the human skeleton key points, represented in 3D coordinates, is used to track the human body in motion, such as tracking changes of node poses, supporting at least one human movement among raising hands, kicking, head swinging, bending over, and the like.
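One way such a node pose could be derived from key point coordinates — a simplified rule of our own, not the disclosure's actual tracking logic — is to compare wrist and shoulder positions (image coordinates, with y growing downward):

```python
def is_hand_raised(keypoints):
    """keypoints: dict mapping names to (x, y, z) positions in image
    coordinates (y grows downward). A hand counts as raised when the
    wrist sits above the shoulder on the same side."""
    return any(
        keypoints[f"{side}_wrist"][1] < keypoints[f"{side}_shoulder"][1]
        for side in ("left", "right")
    )

pose = {
    "left_shoulder": (100, 200, 1.8), "left_wrist": (100, 120, 1.7),
    "right_shoulder": (180, 200, 1.8), "right_wrist": (180, 260, 1.9),
}
# is_hand_raised(pose) -> True: the left wrist is above the left shoulder
```

Kicks, head swings, and bends could be detected with analogous per-joint rules, or by feeding the 3D pose sequence to a learned classifier.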
- the processing logic for running the human body key point detection process of the present disclosure can be integrated into the mobile phone in the form of an offline software development kit (SDK, Software Development Kit).
- Algorithm optimization for the mobile phone as the mobile terminal speeds up the above processing logic. This differs from the prior-art C/S online mode, which places the processing logic on a server: when the terminal sends a request to the server, transmission delays or network failures between the two may prevent the terminal from obtaining the processing result in time.
- the processing logic is directly placed on the terminal in the offline mode of the SDK, which greatly accelerates the processing efficiency of the detection method.
- Figure 3 shows a schematic diagram of the key points of the human skeleton according to an embodiment of the present disclosure, including 17 key points in the human skeleton.
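The disclosure does not enumerate the 17 skeleton points; a widely used 17-point skeleton convention (the COCO keypoint order) is shown below for reference:

```python
# The 17 human skeleton key points in the common COCO ordering
# (an assumption for illustration; the patent's point set may differ).
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]
assert len(COCO_KEYPOINTS) == 17
```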
- The user's dynamic posture changes can be tracked in real time, covering at least one human movement such as raising hands, kicking, head swinging, or bending over.
- The first human body motion state is acquired, such as the swing motion when playing tennis.
- the change is described by the first 3D pose data.
- A first control instruction is generated according to the first 3D pose data and sent to the receiving-side device, so that a motion simulation operation corresponding to the first human body motion state is displayed on the screen of the receiving-side device.
- A ToF (Time of Flight) mobile phone can be equipped with a ToF module. Its 3D imaging solution continuously sends light pulses to the target object, receives the light returned from the object with a sensor, and detects the flight (round-trip) time of the light pulse to obtain the distance of the target object from the collection position.
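The distance computation behind a ToF module follows directly from the round-trip time of light; a minimal sketch (the function name is ours):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the target from the round-trip time of a light pulse:
    the pulse covers the distance twice, hence the division by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of ~13.34 ns corresponds to roughly 2 m.
d = tof_distance(13.34e-9)
```

Real ToF modules typically infer the time from the phase shift of a modulated signal rather than timing a single pulse, but the distance relation is the same.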
- Fig. 4 shows a scene diagram of a user holding a mobile phone terminal interacting with a large-screen device such as a TV according to an embodiment of the present disclosure. It is an interactive scene of playing badminton.
- the user’s current posture changes can be tracked by detecting the key points of the user’s human skeleton.
- The obtained posture change is transmitted back to an electronic device such as a TV, where the corresponding posture change is presented.
- The second human body motion state is acquired, for example, an attempt to raise both hands to 90 degrees relative to the horizontal plane.
- the change is described by the second 3D pose data.
- The second 3D pose data is compared with pre-configured pose data. If the comparison results are inconsistent (for example, the user raises his hands only to 85 degrees, which does not match the pre-configured pose data of "90 degrees"), a second control instruction is generated, and prompt information is issued according to the second control instruction to adjust the second human body motion state to a target state.
- The prompt information includes voice, text, sound and light, etc., prompting the user that the current motion posture is incorrect or not fully in place.
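As an illustrative sketch of the comparison step (the angle rule, tolerance, and function names are our assumptions, not the disclosure's logic), the arm elevation can be computed from key points and checked against the pre-configured 90-degree target:

```python
import math

def arm_elevation_deg(shoulder, wrist):
    """Elevation of the shoulder-to-wrist segment above the horizontal
    plane, in degrees (image coordinates, y grows downward)."""
    dx = wrist[0] - shoulder[0]
    dy = shoulder[1] - wrist[1]  # flip sign so that 'up' is positive
    return math.degrees(math.atan2(dy, abs(dx)))

def check_pose(angle_deg, target_deg=90.0, tolerance_deg=3.0):
    """Return None if the pose matches the pre-configured target,
    or a prompt message (standing in for the second control instruction)."""
    if abs(angle_deg - target_deg) <= tolerance_deg:
        return None
    return f"raise detected at {angle_deg:.0f} degrees, target is {target_deg:.0f}"

# A user reaching only 85 degrees triggers a prompt.
msg = check_pose(85.0)
```

In practice the message would be rendered as voice, text, or sound-and-light output on the user's device.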
- virtual coach software for the fitness industry can be developed based on the present disclosure, and the user's fitness actions can be detected through a mobile phone or similar 3D module, and guidance can be given.
- When the user's human body data is applied to an avatar scene, the third human body motion state (such as the user's running posture) is obtained, and the position changes of the human body key points corresponding to the third human body motion state are described by the third 3D pose data.
- The third 3D pose data is sent to the receiving-side device, so that the operation performed by the avatar sampling the third 3D pose data is displayed on the screen of the receiving-side device (the avatar can be a small animal, a boy, or a girl running in the game scene). This is just an example; the present disclosure also applies to other avatar scenes.
- A virtual game can be developed based on the present disclosure, in which an avatar driven by real-time user motion capture replaces a real person in the game scene, an interaction method that goes beyond the touch screen.
- Figure 5 shows a scene diagram of an avatar generated according to an embodiment of the present disclosure. It is a parkour scene.
- The posture change data corresponding to the avatar in an electronic device such as a TV can be generated by detecting the key points of the user's human skeleton, and the corresponding posture changes are presented in the electronic device.
- In a possible implementation, the training process of the human body recognition network includes: taking pre-annotated human body key point features as training sample data, and inputting the training sample data into the human body recognition network to be trained (such as a CNN) for training, until the output result meets the network training condition; the trained human body recognition network is thereby obtained.
- CNN can extract the features of the key points of the human body in the image, and the algorithm model trained on the data set based on the skeleton key points of the human body can be used to identify whether the human body is included in the image.
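A heavily simplified stand-in for such a training loop — a one-dimensional logistic unit on toy data rather than the CNN described, purely to illustrate the "train until the output meets the training condition" pattern:

```python
import math

def train(samples, lr=0.5, target_loss=0.1, max_epochs=5000):
    """Minimal stand-in for the described training loop: labelled
    samples are fed to a model (here a 1-D logistic unit, not the
    patent's CNN) until the average loss meets the training condition."""
    w, b = 0.0, 0.0
    for _ in range(max_epochs):
        loss = 0.0
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))          # prediction
            loss += -(y * math.log(p) + (1 - y) * math.log(1 - p))
            grad = p - y                                       # dLoss/dz
            w -= lr * grad * x
            b -= lr * grad
        if loss / len(samples) < target_loss:  # training condition met
            break
    return w, b

# Toy data: feature value above 0.5 means "key point present".
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
w, b = train(data)
```

A real pipeline would replace the scalar feature with image tensors, the logistic unit with the CNN, and the fixed loss threshold with whatever validation criterion defines the network training condition.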
- Accurate node poses can be obtained, and changes in node poses can be tracked in real time, supporting at least one human movement such as raising hands, kicking, head swinging, or bending over.
- Fig. 6 shows a schematic diagram of a human body detection scheme according to an embodiment of the present disclosure.
- image processing is performed on two image data streams, such as RGB image data stream and RGBD image data stream.
- After image processing of the RGB image data stream, it is determined whether a human body is detected in the current RGB image frame. If a human body is detected, the target RGB data corresponding to the current RGB image frame is handed over to the subsequent RGBD processing path.
- The target RGBD data (depth data) obtained after image processing is combined with the target RGB data (2D pose data) to obtain 3D pose data from the 2D pose data and the depth data, that is, the 3D coordinates of the human skeleton key points. The 3D pose data then undergoes data conversion, and the conversion result is used for detection processing in at least one scene.
- The writing order of the steps does not imply a strict execution order, nor does it constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- The present disclosure also provides human body key point detection devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any human body key point detection method provided in the present disclosure. For the corresponding technical solutions and descriptions, refer to the corresponding records in the method section; they will not be repeated here.
- Fig. 7 shows a block diagram of a human body key point detection device according to an embodiment of the present disclosure.
- The human body key point detection device includes: a detection module 31, configured to, in response to detecting that an image contains a human body, extract two-dimensional coordinate data identifying the positions of human body key points in the image to obtain 2D pose data; and a fusion module 32, configured to perform human body key point feature fusion on the 2D pose data and the depth data corresponding to the human body key point positions, to obtain 3D pose data identifying the human body key point positions.
- In a possible implementation, the device further includes: a preprocessing module, configured to perform data alignment preprocessing on each frame of image in the RGB image data stream and the depth data corresponding to the same image, to obtain the RGBD image data stream.
- In a possible implementation, the detection module is further configured to: for the current frame of image, obtain multiple image features after the first image processing; and in the case where it is determined according to the human body recognition network that the multiple image features are human body key point features, detect that the current frame of image contains a human body, until the detection of at least one frame of image is completed.
- the device further includes: an image processing module, configured to: for the current frame of image, obtain multiple depth data after the second image processing, until the image processing is completed for at least one frame of image.
- In a possible implementation, the device further includes: a first posture acquisition module, configured to acquire a first human body motion state; a first data description module, configured to describe the position changes of the human body key points corresponding to the first human body motion state through the first 3D pose data; and a first instruction sending module, configured to generate a first control instruction according to the first 3D pose data and send it to the receiving-side device, so as to display a motion simulation operation corresponding to the first human body motion state on the screen of the receiving-side device.
- the device further includes: a second posture acquisition module, configured to acquire a second human body motion state; a second data description module, configured to describe the changes in human body key point positions corresponding to the second human body motion state by second 3D pose data; and a data comparison module, configured to compare the second 3D pose data with pre-configured pose data and generate a second control instruction when the comparison results are inconsistent;
- the prompt information sending module is configured to send prompt information according to the second control instruction, so that the second human body motion state is adjusted to a target state according to the prompt information.
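The comparison of captured 3D pose data against pre-configured pose data, with a prompt emitted on mismatch, might look like this sketch; the tolerance threshold is an assumed value:

```python
def check_pose(pose_3d, reference_pose, tolerance=0.05):
    """Compare 3D pose data against pre-configured pose data keypoint by
    keypoint; when any coordinate deviates beyond the tolerance, emit a
    prompt so the user can adjust toward the target state."""
    prompts = []
    for i, (p, r) in enumerate(zip(pose_3d, reference_pose)):
        deviation = max(abs(a - b) for a, b in zip(p, r))
        if deviation > tolerance:
            prompts.append(f"adjust keypoint {i} (off by {deviation:.2f})")
    return prompts

# Keypoint 0 matches the reference; keypoint 1 is 0.2 off in x.
prompts = check_pose(
    [(0.0, 0.0, 2.0), (0.3, 0.1, 2.0)],
    [(0.0, 0.0, 2.0), (0.1, 0.1, 2.0)],
)
```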
- the device further includes: a third posture acquisition module, configured to acquire a third human body motion state; a third data description module, configured to describe the changes in human body key point positions corresponding to the third human body motion state by third 3D pose data; and a second instruction sending module, configured to send the third 3D pose data to the receiving-side device, so that an operation performed by an avatar sampling the third 3D pose data is displayed on the display screen of the receiving-side device.
- the device further includes: a network training module, configured to, during the training of the human body recognition network, use pre-labelled human body key point features as training sample data, and input the training sample data into the human body recognition network to be trained for training, until the output result meets the network training condition, obtaining the trained human body recognition network.
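The training procedure above, iterating until the output meets the training condition, can be sketched as follows; `predict` and `update` are hypothetical hooks standing in for the real network's forward pass and optimization step, and the accuracy target is an assumed stopping condition:

```python
def train_recognition_network(samples, labels, predict, update,
                              max_epochs=100, target_acc=0.95):
    """Train on pre-labelled keypoint-feature samples until the output
    meets the training condition (here: an accuracy target)."""
    acc = 0.0
    for epoch in range(max_epochs):
        preds = [predict(s) for s in samples]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc >= target_acc:
            return epoch, acc        # training condition met
        update(samples, labels)      # one optimization step
    return max_epochs, acc

# Toy stand-in network: a threshold classifier that lowers its
# threshold on each update step.
state = {"threshold": 0.9}
predict = lambda s: s > state["threshold"]
update = lambda samples, labels: state.__setitem__(
    "threshold", state["threshold"] - 0.1)

epochs, acc = train_recognition_network([0.8, 0.2], [True, False],
                                        predict, update)
```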
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- the embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above-mentioned human body key point detection method is realized.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the above-mentioned human body key point detection method.
- the electronic device can be provided as a terminal, server or other form of device.
- An embodiment of the present disclosure further provides a computer program, wherein the computer program includes computer-readable code, and when the computer-readable code runs in an electronic device, the processor in the electronic device executes the above-mentioned human body key point detection method.
- Fig. 8 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800.
- the sensor component 814 can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above methods.
- In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- Fig. 9 is a block diagram showing an electronic device 900 according to an exemplary embodiment.
- the electronic device 900 may be provided as a server.
- the electronic device 900 includes a processing component 922, which further includes one or more processors, and a memory resource represented by a memory 932, for storing instructions that can be executed by the processing component 922, such as application programs.
- the application program stored in the memory 932 may include one or more modules each corresponding to a set of instructions.
- the processing component 922 is configured to execute instructions to perform the aforementioned methods.
- the electronic device 900 may also include a power supply component 926 configured to perform power management of the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958.
- the electronic device 900 can operate based on an operating system stored in the memory 932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium is also provided, such as a memory 932 including computer program instructions, which can be executed by the processing component 922 of the electronic device 900 to complete the foregoing method.
- the present disclosure may be a system, method, and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, a mechanical encoding device such as a punch card or raised structure in a groove on which instructions are stored, and any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
- These computer-readable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that when the instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for implementing the specified logical function. The functions marked in the blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and any combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Claims (19)
- A human body key point detection method, characterized in that the method comprises: in response to detecting that an image contains a human body, extracting the two-dimensional coordinate data identifying the positions of human body key points in the image to obtain 2D pose data; and performing human body key point feature fusion on the 2D pose data and the depth data corresponding to the positions of the human body key points, to obtain 3D pose data identifying the positions of the human body key points.
- The method according to claim 1, characterized in that the method further comprises: before the human body key point feature fusion of the 2D pose data and the depth data corresponding to the positions of the human body key points, performing data alignment preprocessing on each frame image in the RGB image data stream and the depth data corresponding to the same image, to obtain an RGBD image data stream.
- The method according to claim 1 or 2, characterized in that detecting that the image contains a human body comprises: for the current frame image, obtaining multiple image features after the first image processing; and when the human body recognition network determines that the multiple image features are human body key point features, detecting that the current frame image contains a human body, until detection is completed for at least one frame of image.
- The method according to claim 2, characterized in that the method further comprises: before the human body key point feature fusion of the 2D pose data and the depth data corresponding to the positions of the human body key points, for the current frame image, obtaining multiple depth data items after the second image processing, until image processing is completed for at least one frame of image.
- The method according to any one of claims 1 to 4, characterized in that the method further comprises: acquiring a first human body motion state; describing the changes in human body key point positions corresponding to the first human body motion state by first 3D pose data; and generating a first control instruction according to the first 3D pose data and sending the first control instruction to a receiving-side device, so that an action simulation operation corresponding to the first human body motion state is displayed on the display screen of the receiving-side device.
- The method according to any one of claims 1 to 4, characterized in that the method further comprises: acquiring a second human body motion state; describing the changes in human body key point positions corresponding to the second human body motion state by second 3D pose data; comparing the second 3D pose data with pre-configured pose data, and generating a second control instruction when the comparison results are inconsistent; and sending prompt information according to the second control instruction, so that the second human body motion state is adjusted to a target state according to the prompt information.
- The method according to any one of claims 1 to 4, characterized in that the method further comprises: acquiring a third human body motion state; describing the changes in human body key point positions corresponding to the third human body motion state by third 3D pose data; and sending the third 3D pose data to a receiving-side device, so that an operation performed by an avatar sampling the third 3D pose data is displayed on the display screen of the receiving-side device.
- The method according to claim 3, characterized in that the training process of the human body recognition network comprises: using pre-labelled human body key point features as training sample data, and inputting the training sample data into the human body recognition network to be trained for training, until the output result meets the network training condition, the human body recognition network being obtained after training.
- A human body key point detection apparatus, characterized in that the apparatus comprises: a detection module, configured to, in response to detecting that an image contains a human body, extract the two-dimensional coordinate data identifying the positions of human body key points in the image to obtain 2D pose data; and a fusion module, configured to perform human body key point feature fusion on the 2D pose data and the depth data corresponding to the positions of the human body key points, to obtain 3D pose data identifying the positions of the human body key points.
- The apparatus according to claim 9, characterized in that the apparatus further comprises: a preprocessing module, configured to perform data alignment preprocessing on each frame image in the RGB image data stream and the depth data corresponding to the same image, to obtain an RGBD image data stream.
- The apparatus according to claim 10, characterized in that the detection module is further configured to: for the current frame image, obtain multiple image features after the first image processing; and when the human body recognition network determines that the multiple image features are human body key point features, detect that the current frame image contains a human body, until detection is completed for at least one frame of image.
- The apparatus according to claim 10, characterized in that the apparatus further comprises: an image processing module, configured to, for the current frame image, obtain multiple depth data items after the second image processing, until image processing is completed for at least one frame of image.
- The apparatus according to any one of claims 9 to 12, characterized in that the apparatus further comprises: a first posture acquisition module, configured to acquire a first human body motion state; a first data description module, configured to describe the changes in human body key point positions corresponding to the first human body motion state by first 3D pose data; and a first instruction sending module, configured to generate a first control instruction according to the first 3D pose data and send the first control instruction to a receiving-side device, so that an action simulation operation corresponding to the first human body motion state is displayed on the display screen of the receiving-side device.
- The apparatus according to any one of claims 9 to 12, characterized in that the apparatus further comprises: a second posture acquisition module, configured to acquire a second human body motion state; a second data description module, configured to describe the changes in human body key point positions corresponding to the second human body motion state by second 3D pose data; a data comparison module, configured to compare the second 3D pose data with pre-configured pose data and generate a second control instruction when the comparison results are inconsistent; and a prompt information sending module, configured to send prompt information according to the second control instruction, so that the second human body motion state is adjusted to a target state according to the prompt information.
- The apparatus according to any one of claims 9 to 12, characterized in that the apparatus further comprises: a third posture acquisition module, configured to acquire a third human body motion state; a third data description module, configured to describe the changes in human body key point positions corresponding to the third human body motion state by third 3D pose data; and a second instruction sending module, configured to send the third 3D pose data to a receiving-side device, so that an operation performed by an avatar sampling the third 3D pose data is displayed on the display screen of the receiving-side device.
- The apparatus according to claim 11, characterized in that the apparatus further comprises: a network training module, configured to, during the training of the human body recognition network, use pre-labelled human body key point features as training sample data, and input the training sample data into the human body recognition network to be trained for training, until the output result meets the network training condition, the human body recognition network being obtained after training.
- An electronic device, characterized by comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method according to any one of claims 1 to 8.
- A computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 8.
- A computer program, wherein the computer program comprises computer-readable code, and when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1 to 8.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021564295A JP2022531188A (ja) | 2019-07-15 | 2020-03-19 | Human body key point detection method and apparatus, electronic device and storage medium |
SG11202111880SA SG11202111880SA (en) | 2019-07-15 | 2020-03-19 | Method and apparatus for detecting key points of human body, electronic device and storage medium |
US17/507,850 US20220044056A1 (en) | 2019-07-15 | 2021-10-22 | Method and apparatus for detecting keypoints of human body, electronic device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910635763.6A CN110348524B (zh) | 2019-07-15 | 2019-07-15 | Human body key point detection method and apparatus, electronic device and storage medium |
CN201910635763.6 | 2019-07-15 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/507,850 Continuation US20220044056A1 (en) | 2019-07-15 | 2021-10-22 | Method and apparatus for detecting keypoints of human body, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021008158A1 true WO2021008158A1 (zh) | 2021-01-21 |
Family
ID=68175308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/080231 WO2021008158A1 (zh) | 2019-07-15 | 2020-03-19 | Human body key point detection method and apparatus, electronic device and storage medium |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220044056A1 (zh) |
JP (1) | JP2022531188A (zh) |
CN (1) | CN110348524B (zh) |
SG (1) | SG11202111880SA (zh) |
TW (1) | TW202105331A (zh) |
WO (1) | WO2021008158A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113377478A (zh) * | 2021-06-24 | 2021-09-10 | 上海商汤科技开发有限公司 | Data annotation method and apparatus for the entertainment industry, storage medium and device |
CN113961746A (zh) * | 2021-09-29 | 2022-01-21 | 北京百度网讯科技有限公司 | Video generation method and apparatus, electronic device and readable storage medium |
WO2023025791A1 (en) * | 2021-08-27 | 2023-03-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Object tracking for lower latency and less bandwidth |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN110348524B (zh) * | 2019-07-15 | 2022-03-04 | 深圳市商汤科技有限公司 | Human body key point detection method and apparatus, electronic device and storage medium |
- WO2021098543A1 (zh) * | 2019-11-20 | 2021-05-27 | Oppo广东移动通信有限公司 | Gesture recognition method and apparatus, and storage medium |
- CN111028283B (zh) * | 2019-12-11 | 2024-01-12 | 北京迈格威科技有限公司 | Image detection method, apparatus, device and readable storage medium |
- CN111208783B (zh) * | 2019-12-30 | 2021-09-17 | 深圳市优必选科技股份有限公司 | Action imitation method, apparatus, terminal and computer storage medium |
- CN111160375B (zh) * | 2019-12-31 | 2024-01-23 | 北京奇艺世纪科技有限公司 | Three-dimensional key point prediction and deep learning model training method, apparatus and device |
- CN111723688B (zh) * | 2020-06-02 | 2024-03-12 | 合肥的卢深视科技有限公司 | Evaluation method and apparatus for human action recognition results, and electronic device |
- CN111914756A (zh) * | 2020-08-03 | 2020-11-10 | 北京环境特性研究所 | Video data processing method and apparatus |
- CN112465890A (zh) * | 2020-11-24 | 2021-03-09 | 深圳市商汤科技有限公司 | Depth detection method and apparatus, electronic device and computer-readable storage medium |
- CN112364807B (zh) * | 2020-11-24 | 2023-12-15 | 深圳市优必选科技股份有限公司 | Image recognition method, apparatus, terminal device and computer-readable storage medium |
- CN112949633B (zh) * | 2021-03-05 | 2022-10-21 | 中国科学院光电技术研究所 | Infrared target detection method based on improved YOLOv3 |
- CN115082302B (zh) * | 2021-03-15 | 2024-05-03 | 芯视界(北京)科技有限公司 | Spectral image processing apparatus and method |
- CN113095248B (zh) * | 2021-04-19 | 2022-10-25 | 中国石油大学(华东) | Technical action correction method for badminton |
- CN113627326B (zh) * | 2021-08-10 | 2024-04-12 | 国网福建省电力有限公司营销服务中心 | Behavior recognition method based on wearable devices and human skeletons |
- CN114038009A (zh) * | 2021-10-26 | 2022-02-11 | 深圳市华安泰智能科技有限公司 | Image data collection and analysis system based on human skeleton key points |
- CN114120448B (zh) * | 2021-11-29 | 2023-04-07 | 北京百度网讯科技有限公司 | Image processing method and apparatus |
- CN114419526B (zh) * | 2022-03-31 | 2022-09-09 | 合肥的卢深视科技有限公司 | Foul behavior detection method and apparatus, electronic device and storage medium |
- CN115409638B (zh) * | 2022-11-02 | 2023-03-24 | 中国平安财产保险股份有限公司 | Artificial intelligence-based livestock insurance underwriting and claims settlement method and related device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460338A (zh) * | 2018-02-02 | 2018-08-28 | 北京市商汤科技开发有限公司 | Human body pose estimation method and apparatus, electronic device, storage medium and program |
CN108960036A (zh) * | 2018-04-27 | 2018-12-07 | 北京市商汤科技开发有限公司 | Three-dimensional human body pose prediction method, apparatus, medium and device |
CN109176512A (zh) * | 2018-08-31 | 2019-01-11 | 南昌与德通讯技术有限公司 | Method for controlling a robot by motion sensing, robot and control device |
US20190130602A1 (en) * | 2018-12-26 | 2019-05-02 | Intel Corporation | Three dimensional position estimation mechanism |
CN110348524A (zh) * | 2019-07-15 | 2019-10-18 | 深圳市商汤科技有限公司 | Human body key point detection method and apparatus, electronic device and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8787663B2 (en) * | 2010-03-01 | 2014-07-22 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
- KR101392357B1 (ko) * | 2012-12-18 | 2014-05-12 | 조선대학교산학협력단 | Sign detection system using two-dimensional and three-dimensional information |
US9460513B1 (en) * | 2015-06-17 | 2016-10-04 | Mitsubishi Electric Research Laboratories, Inc. | Method for reconstructing a 3D scene as a 3D model using images acquired by 3D sensors and omnidirectional cameras |
JP7126812B2 (ja) * | 2017-07-25 | 2022-08-29 | 株式会社クオンタム | 検出装置、検出システム、画像処理装置、検出方法、画像処理プログラム、画像表示方法、及び画像表示システム |
- CN108564041B (zh) * | 2018-04-17 | 2020-07-24 | 云从科技集团股份有限公司 | Face detection and restoration method based on an RGBD camera |
- CN108830150B (zh) * | 2018-05-07 | 2019-05-28 | 山东师范大学 | Three-dimensional human body pose estimation method and apparatus |
US11074711B1 (en) * | 2018-06-15 | 2021-07-27 | Bertec Corporation | System for estimating a pose of one or more persons in a scene |
- CN109583370A (zh) * | 2018-11-29 | 2019-04-05 | 北京达佳互联信息技术有限公司 | Face structure mesh model building method and apparatus, electronic device and storage medium |
- CN109584362B (zh) * | 2018-12-14 | 2023-03-21 | 北京市商汤科技开发有限公司 | Three-dimensional model construction method and apparatus, electronic device and storage medium |
-
2019
- 2019-07-15 CN CN201910635763.6A patent/CN110348524B/zh active Active
-
2020
- 2020-03-19 SG SG11202111880SA patent/SG11202111880SA/en unknown
- 2020-03-19 JP JP2021564295A patent/JP2022531188A/ja active Pending
- 2020-03-19 WO PCT/CN2020/080231 patent/WO2021008158A1/zh active Application Filing
- 2020-05-08 TW TW109115341A patent/TW202105331A/zh unknown
-
2021
- 2021-10-22 US US17/507,850 patent/US20220044056A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460338A (zh) * | 2018-02-02 | 2018-08-28 | 北京市商汤科技开发有限公司 | Human body pose estimation method and apparatus, electronic device, storage medium and program |
CN108960036A (zh) * | 2018-04-27 | 2018-12-07 | 北京市商汤科技开发有限公司 | Three-dimensional human body pose prediction method, apparatus, medium and device |
CN109176512A (zh) * | 2018-08-31 | 2019-01-11 | 南昌与德通讯技术有限公司 | Method for controlling a robot by motion sensing, robot and control device |
US20190130602A1 (en) * | 2018-12-26 | 2019-05-02 | Intel Corporation | Three dimensional position estimation mechanism |
CN110348524A (zh) * | 2019-07-15 | 2019-10-18 | 深圳市商汤科技有限公司 | Human body key point detection method and apparatus, electronic device and storage medium |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113377478A (zh) * | 2021-06-24 | 2021-09-10 | 上海商汤科技开发有限公司 | Data annotation method and apparatus for the entertainment industry, storage medium and device |
CN113377478B (zh) * | 2021-06-24 | 2024-04-02 | 上海商汤科技开发有限公司 | Data annotation method and apparatus for the entertainment industry, storage medium and device |
WO2023025791A1 (en) * | 2021-08-27 | 2023-03-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Object tracking for lower latency and less bandwidth |
CN113961746A (zh) * | 2021-09-29 | 2022-01-21 | 北京百度网讯科技有限公司 | Video generation method and apparatus, electronic device and readable storage medium |
CN113961746B (zh) * | 2021-09-29 | 2023-11-21 | 北京百度网讯科技有限公司 | Video generation method and apparatus, electronic device and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
TW202105331A (zh) | 2021-02-01 |
CN110348524A (zh) | 2019-10-18 |
US20220044056A1 (en) | 2022-02-10 |
CN110348524B (zh) | 2022-03-04 |
SG11202111880SA (en) | 2021-11-29 |
JP2022531188A (ja) | 2022-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021008158A1 (zh) | Human body key point detection method and apparatus, electronic device and storage medium | |
WO2020140798A1 (zh) | Gesture recognition method and apparatus, electronic device and storage medium | |
US20210097715A1 (en) | Image generation method and device, electronic device and storage medium | |
US11455788B2 (en) | Method and apparatus for positioning description statement in image, electronic device, and storage medium | |
WO2017166622A1 (zh) | Video playing method, playing terminal and media server | |
KR102410879B1 (ko) | Method, apparatus and medium for acquiring positioning information | |
WO2021000708A1 (zh) | Fitness teaching method and apparatus, electronic device and storage medium | |
WO2021253777A1 (zh) | Pose detection and video processing method and apparatus, electronic device and storage medium | |
TW202113757A (zh) | Target object matching method and apparatus, electronic device and computer-readable storage medium | |
WO2022043741A1 (zh) | Network training and pedestrian re-identification method and apparatus, storage medium and computer program | |
KR20210111833A (ko) | Method and apparatus for acquiring positions of a target, computer device and storage medium | |
WO2022068479A1 (zh) | Image processing method and apparatus, electronic device and computer-readable storage medium | |
US20210224607A1 (en) | Method and apparatus for neutral network training, method and apparatus for image generation, and storage medium | |
JP2016531362A (ja) | Skin color adjustment method, skin color adjustment apparatus, program and recording medium | |
WO2020155713A1 (zh) | Image processing method and apparatus, and network training method and apparatus | |
WO2019153925A1 (zh) | Search method and related apparatus | |
TWI718631B (zh) | Face image processing method and apparatus, electronic device and storage medium | |
WO2022188305A1 (zh) | Information display method and apparatus, electronic device, storage medium and computer program | |
CN109410276B (zh) | Key point position determination method, apparatus and electronic device | |
CN111985268A (zh) | Face-driven animation method and apparatus | |
WO2022151686A1 (zh) | Scene image display method, apparatus, device, storage medium, program and product | |
CN111045511A (zh) | Gesture-based control method and terminal device | |
WO2022193456A1 (zh) | Target tracking method and apparatus, electronic device and storage medium | |
CN114581525A (zh) | Pose determination method and apparatus, electronic device and storage medium | |
CN109740557B (zh) | Object detection method and apparatus, electronic device and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20840098 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021564295 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20840098 Country of ref document: EP Kind code of ref document: A1 |