WO2020088092A1 - Key point position determination method, apparatus and electronic device - Google Patents

Key point position determination method, apparatus and electronic device

Info

Publication number
WO2020088092A1
WO2020088092A1 (PCT/CN2019/104231)
Authority
WO
WIPO (PCT)
Prior art keywords
human hand
shape
image
key point
posture
Prior art date
Application number
PCT/CN2019/104231
Other languages
English (en)
French (fr)
Inventor
张�雄
李强
郑文
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2020088092A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Definitions

  • The present disclosure relates to the field of image processing technology, and in particular to a key point position determination method, device, and electronic device.
  • In the related art, the pose of a human hand can be estimated from the three-dimensional positions of multiple key points of the hand. For each key point, a color image of the hand, such as an RGB (Red Green Blue) image, is used to compute the probability that each pixel is that key point, yielding the key point's probability distribution over the two-dimensional image; a pre-trained neural network then computes the key point's three-dimensional position from that distribution.
  • However, a key point may be occluded in the hand color image. In that case, the key point's location in the image may show the occluding object instead of the hand, so the computed probability that this location is the key point is low and deviates substantially from the actual situation. As a result, the three-dimensional position of the key point cannot be computed at all, or is computed with low accuracy, which degrades the estimation of the hand pose.
  • To overcome the problems in the related art, the present disclosure provides a key point position determination method, device, and electronic device.
  • According to a first aspect of the embodiments of the present disclosure, a key point position determination method includes: determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
  • Determining the position of the key point based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton, may include: constructing a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints; and reading the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
  • Constructing the three-dimensional skeleton model may include inputting the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, where the skeletal animation framework is a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
  • Determining the shape of the human hand and the poses of the multiple skeletal nodes based on the image to be analyzed may include: inputting the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, where the parameter extraction network has been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
  • Optionally, the parameter extraction network is MobileNet, a neural network for mobile terminals.
  • According to a second aspect of the embodiments of the present disclosure, a key point position determination device includes:
  • a human hand analysis unit configured to determine, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and
  • a position determination unit configured to determine the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
  • The position determination unit may include: a construction module configured to construct a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and a reading module configured to read the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
  • The construction module may be specifically configured to input the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, where the skeletal animation framework is a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
  • The human hand analysis unit may be specifically configured to input the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, where the parameter extraction network has been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
  • Optionally, the parameter extraction network is MobileNet, a neural network for mobile terminals.
  • According to a third aspect of the embodiments of the present disclosure, an electronic device includes: a processor; and a memory for storing instructions executable by the processor;
  • where the processor is configured to: determine, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and determine the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
  • The processor may be specifically configured to: construct a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and read the three-dimensional coordinates of the key point from the three-dimensional skeleton model as the position of the key point.
  • The processor may further be specifically configured to input the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, where the skeletal animation framework is a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
  • The processor may further be specifically configured to input the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, where the parameter extraction network has been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
  • Optionally, the parameter extraction network is MobileNet, a neural network for mobile terminals.
  • According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a key point position determination method including: determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
  • Determining the position of the key point may include: constructing a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and reading the three-dimensional coordinates of the key point from the three-dimensional skeleton model as the position of the key point. Constructing the three-dimensional skeleton model may include inputting the shape and the poses as model parameters into a preset skeletal animation framework, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
  • Determining the shape of the human hand and the poses of the multiple skeletal nodes may include inputting the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network, where the parameter extraction network has been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
  • Optionally, the parameter extraction network is MobileNet, a neural network for mobile terminals.
  • According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided; when the computer program product is executed by a processor of a user terminal, the user terminal is enabled to perform a key point position determination method including: determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
  • Determining the position of the key point may include: constructing a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and reading the three-dimensional coordinates of the key point from the three-dimensional skeleton model as the position of the key point. Constructing the three-dimensional skeleton model may include inputting the shape and the poses as model parameters into a preset skeletal animation framework, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
  • Determining the shape of the human hand and the poses of the multiple skeletal nodes may include inputting the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network, where the parameter extraction network has been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
  • Optionally, the parameter extraction network is MobileNet, a neural network for mobile terminals.
  • The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effect: by using the geometric constraints inherent in the bones of the human hand, combined with the hand parameters extracted from the image to be analyzed, the positions of occluded key points can be calculated relatively accurately.
  • Fig. 1 is a flowchart of a method for determining a position of a key point according to an exemplary embodiment
  • Fig. 2a is a structural diagram of the bones of a human hand according to an exemplary embodiment;
  • Fig. 2b is a distribution diagram of key points according to an exemplary embodiment
  • Fig. 3 is a flowchart of another method for determining a position of a key point according to an exemplary embodiment
  • Fig. 4 is a structural diagram of a device for determining a position of a key point according to an exemplary embodiment
  • Fig. 5 is a block diagram of a device for determining a position of a key point according to an exemplary embodiment
  • Fig. 6 is a block diagram of another device for determining a position of a key point according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a key point position determination method according to an exemplary embodiment. As shown in Fig. 1, the method may be applied to a terminal and includes the following steps.
  • In step S11, the shape of the human hand and the poses of multiple skeletal nodes of the hand are determined based on an image to be analyzed that contains an image of the human hand.
  • The image to be analyzed may be a color image obtained by photographing a human hand (such as an RGB image). The hand in the image to be analyzed may be partially occluded or not occluded at all; the unoccluded case does not present the technical problem the disclosed embodiments aim to solve, so it is not discussed here.
  • Each skeletal node is a preset point of the human hand; for example, the metacarpophalangeal joint of the index finger may be set as a skeletal node in advance.
  • The number of skeletal nodes can be set according to actual needs or user experience. For example, to determine key point positions more accurately, a relatively large number of skeletal nodes can be preset; conversely, to reduce the computing resources consumed, a relatively small number can be preset.
  • In this step, the pose of a skeletal node may include the position and the angle of the node in a three-dimensional coordinate system. It can be understood that, although the hands of different people are broadly similar in shape, there are variables that differ from person to person, such as the length, width, and thickness of the hand; these person-specific variables are captured by the determined hand shape.
  • Specifically, the image to be analyzed can be processed with a preset image recognition algorithm to obtain the hand shape and the poses of the multiple skeletal nodes, or a pre-trained neural network can realize an end-to-end mapping from the image to be analyzed to the hand shape and the skeletal node poses.
  • In one embodiment, the image to be analyzed, which contains the image of the human hand, may also be input into a preset parameter extraction network to obtain the hand shape and the poses of the multiple skeletal nodes.
  • The parameter extraction network is trained in advance on multiple sample images annotated with ground truth; each sample image contains an image of a human hand, and the annotated ground truth may be that hand's shape and the poses of its skeletal nodes.
  • The parameter extraction network may be trained by inputting a sample image into the network, computing a loss function between the network output and the annotated ground truth, and adjusting the network parameters based on the loss function using stochastic gradient descent.
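  • As an illustration only (not part of the patent), the training step just described can be sketched in a few lines of PyTorch-style code; the loss function, its equal weighting, the tensor shapes, and the name train_step are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

# Minimal sketch of the described training step: the parameter extraction
# network maps an RGB image batch to a hand-shape vector and per-node poses,
# and its parameters are adjusted by stochastic gradient descent on the loss
# between the network output and the annotated ground truth.
def train_step(net: nn.Module,
               optimizer: torch.optim.SGD,
               images: torch.Tensor,            # (B, 3, H, W) sample images
               shape_gt: torch.Tensor,          # (B, S) annotated hand shapes
               pose_gt: torch.Tensor) -> float: # (B, N, 6) annotated node poses
    optimizer.zero_grad()
    shape_pred, pose_pred = net(images)
    # Loss between network output and annotated truth (MSE with equal
    # weighting is an assumption; the patent does not specify the loss).
    loss = nn.functional.mse_loss(shape_pred, shape_gt) \
         + nn.functional.mse_loss(pose_pred, pose_gt)
    loss.backward()
    optimizer.step()  # stochastic gradient descent update
    return loss.item()
```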
  • The parameter extraction network can be a large convolutional neural network, such as U-net or HourGlass, or a lightweight convolutional neural network such as MobileNet. Compared with U-net or HourGlass, MobileNet has lower structural complexity and occupies fewer computing resources, making it easier to run on mobile terminals; U-net and HourGlass, with their relatively complex network structures, occupy more computing resources and are difficult or impossible to run on mobile terminals.
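  • A minimal sketch of how such a parameter extraction network could be assembled around an off-the-shelf MobileNet backbone is shown below; the class name, the head dimensions (a 10-D shape vector, a 6-D pose per node), and the node count are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class ParamExtractionNet(nn.Module):
    """Illustrative MobileNet-backed parameter extraction network: maps an
    RGB image to a hand-shape vector and the poses of N skeletal nodes."""
    def __init__(self, num_nodes: int = 21, shape_dim: int = 10):
        super().__init__()
        self.features = mobilenet_v2(weights=None).features  # lightweight backbone
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 1280  # MobileNetV2 final feature channels
        self.shape_head = nn.Linear(feat_dim, shape_dim)
        self.pose_head = nn.Linear(feat_dim, num_nodes * 6)  # position + angle per node
        self.num_nodes = num_nodes

    def forward(self, x: torch.Tensor):
        f = self.pool(self.features(x)).flatten(1)
        return self.shape_head(f), self.pose_head(f).view(-1, self.num_nodes, 6)
```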
  • In step S12, the position of a key point of the human hand is determined based on the shape of the hand and the poses of the multiple skeletal nodes, according to preset geometric constraints of the human hand skeleton.
  • The human hand contains hand bones, which comprise multiple individual bones, as shown in Fig. 2a. These bones are moved by the muscles of the hand, but because the bones are connected to one another, their motion is constrained. For example, the middle phalanx of the middle finger can bend toward the palm and/or the back of the hand relative to the first joint, but a normal person cannot, or can hardly, bend it toward the index finger or the ring finger relative to the first joint (these two directions are hereinafter called lateral). A geometric constraint can therefore be set in advance: the lateral angles of the proximal phalanx and the middle phalanx of the middle finger are the same.
  • As another example, suppose the proximal phalanx of the middle finger (201 in the figure) is 3 cm long. It articulates with the middle phalanx of the middle finger (203 in the figure) at what is hereinafter called the first joint, and with the metacarpal head (202 in the figure) at what is hereinafter called the second joint. If the proximal phalanx is treated as a rigid body, a geometric constraint can be set in advance: the distance between the first joint and the second joint is 3 cm.
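  • To make the two example constraints concrete, the checks below encode them directly; the function name, tolerance, and argument layout are assumptions made for this sketch only.

```python
import numpy as np

# Illustrative encoding of the two preset constraints from the text:
# (1) the proximal and middle phalanges of the middle finger share the same
#     lateral angle, and (2) treating the proximal phalanx as a rigid body
#     of length 3 cm fixes the first-joint-to-second-joint distance.
def satisfies_constraints(second_joint_xyz, first_joint_xyz,
                          proximal_lateral_deg, middle_lateral_deg,
                          bone_length_cm=3.0, tol=1e-3) -> bool:
    same_lateral = abs(proximal_lateral_deg - middle_lateral_deg) <= tol
    joint_dist = np.linalg.norm(np.asarray(first_joint_xyz, dtype=float)
                                - np.asarray(second_joint_xyz, dtype=float))
    rigid_length = abs(joint_dist - bone_length_cm) <= tol
    return same_lateral and rigid_length
```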
  • The key points used in hand pose estimation can be the joints of the hand bones; for example, commonly used hand pose estimation algorithms use 21 joints of the hand bones as 21 key points. Their distribution is shown in Fig. 2b, where 1-21 mark the positions of these 21 key points. The positions of the key points are therefore not arbitrary but are governed by the geometric constraints of the hand skeleton.
  • As an example, suppose step S11 determines that the second joint of the middle finger is at coordinates (0, 0, 0) in a three-dimensional coordinate system whose unit is cm and whose positive x-axis points toward the palm, that the proximal phalanx of the middle finger is 3 cm long and is bent 90° toward the palm relative to the second joint, and that the first joint is the key point. Then the key point's coordinates can be determined to be (3, 0, 0). Because this position is derived from the hand shape (a proximal phalanx 3 cm long) and from the poses of the second joint and the proximal phalanx, the position of the first joint can be determined even if the first joint is occluded in the image to be analyzed.
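  • The worked example can be reproduced with one step of forward kinematics; the rotation convention below (the unbent bone pointing along +z, bending toward the palm along +x) is an assumed convention chosen to match the example's axes.

```python
import numpy as np

# Reproduce the example: second joint at the origin, proximal phalanx of
# length 3 cm bent 90 degrees toward the palm (+x), so the first joint
# (the key point) lands at (3, 0, 0) even if it is occluded in the image.
def keypoint_from_bone(origin_xyz, length_cm, bend_deg):
    theta = np.radians(bend_deg)
    direction = np.array([np.sin(theta), 0.0, np.cos(theta)])  # bend in x-z plane
    return np.asarray(origin_xyz, dtype=float) + length_cm * direction

print(keypoint_from_bone((0, 0, 0), 3.0, 90.0))  # -> [3. 0. 0.] (up to rounding)
```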
  • With this embodiment, the geometric constraints of the hand skeleton are used, together with the hand shape and the skeletal node poses extracted from the image to be analyzed, to infer the positions of key points rather than detecting and localizing the key points directly in the image; consequently, the position of a key point can still be determined even when the key point is occluded.
  • Fig. 3 is a schematic flowchart of another key point position determination method according to an embodiment of the present disclosure, which includes the following steps.
  • In step S31, the shape of the human hand and the poses of multiple skeletal nodes of the hand are determined based on an image to be analyzed that contains an image of the human hand.
  • This step is the same as S11; refer to the foregoing description of S11, which is not repeated here.
  • In step S32, the obtained shape and poses are input as model parameters into a preset skeletal animation framework to obtain a three-dimensional skeleton model of the human hand.
  • The skeletal animation framework may be a parameterized framework obtained by extracting geometric constraints from the three-dimensional skeleton models of multiple sample human hands. There may be some differences between the three-dimensional hand skeleton models of different people, but since all belong to humans there are also commonalities, such as how the bones of the hand connect to one another and the kinds of motion they can perform; different people's three-dimensional hand skeleton models may therefore share some identical geometric constraints. In this embodiment, those geometric constraints may be extracted from multiple sample three-dimensional hand skeleton models.
  • Further, the sample three-dimensional hand skeleton models can be selected according to actual needs. For example, to broaden the population the embodiment applies to, three-dimensional hand skeleton models of people of different ethnicities, age groups, and genders can be selected as samples. As another example, if the embodiment is known to target Asian young and middle-aged groups, three-dimensional hand skeleton models of Asian men and women aged 12-40 may be selected as samples.
  • Although the three-dimensional hand skeleton models of different people share commonalities, there are also person-specific differences and time-varying variables. For example, some people's middle-finger proximal phalanx may be 3 cm long while others' may be 3.5 cm long; at one moment the hand may be clenched, with the proximal phalanx of the middle finger bent toward the palm relative to the second joint, while at another moment the hand may be laid flat, with no such bend. Therefore, when constructing the three-dimensional skeleton model of a specific hand, the hand's shape and the poses of its multiple skeletal nodes must be input as model parameters.
  • In step S33, the three-dimensional coordinates of the key points of the human hand are read from the three-dimensional skeleton model as the positions of the key points.
  • Once the three-dimensional skeleton model of the hand has been constructed, the position of any point on the hand skeleton can be regarded as known, so the three-dimensional coordinates of the key points can simply be read from the model.
  • With this embodiment, because the key points are read from the three-dimensional skeleton model of the hand, they satisfy the geometric constraints extracted from the multiple sample three-dimensional hand skeleton models. The distribution of the key points can therefore be considered consistent with the actual distribution of the joints of a human hand, and the key point positions obtained in this way can be considered more accurate.
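  • As a rough sketch of what such a parameterized framework does (akin in spirit to published parametric hand models, which the patent does not name), the toy kinematic chain below maps shape parameters (bone lengths) and pose parameters (joint bend angles) to key point coordinates; because the chain has fixed connectivity and rigid bone lengths, every key point it emits satisfies those constraints by construction. A two-bone single-finger chain stands in for the full 21-joint hand, and everything here, including the planar rotation convention, is an illustrative assumption rather than the patent's actual framework.

```python
import numpy as np

# Toy parameterized "skeletal animation framework": shape = per-bone lengths,
# pose = per-joint bend angles. Key points are read out as the 3-D joint
# positions of the kinematic chain.
def finger_keypoints(base_xyz, bone_lengths_cm, bend_angles_deg):
    """Return the joint positions (key points) along one finger chain."""
    points = [np.asarray(base_xyz, dtype=float)]
    heading = 0.0  # accumulated bend angle in the x-z plane
    for length, bend in zip(bone_lengths_cm, bend_angles_deg):
        heading += np.radians(bend)
        step = length * np.array([np.sin(heading), 0.0, np.cos(heading)])
        points.append(points[-1] + step)
    return np.stack(points)

# Shape (3 cm and 2 cm bones) and pose (90 degree then 45 degree bends) in,
# 3-D key point coordinates out.
print(finger_keypoints((0, 0, 0), [3.0, 2.0], [90.0, 45.0]))
```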
  • Fig. 4 is a block diagram of a key point position determination device according to an exemplary embodiment. Referring to Fig. 4, the device includes a human hand analysis unit 401 and a position determination unit 402.
  • The human hand analysis unit 401 is configured to determine, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand.
  • The position determination unit 402 is configured to determine the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
  • In an exemplary embodiment, the position determination unit 402 includes: a construction module configured to construct a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and a reading module configured to read the three-dimensional coordinates of the key points of the human hand from the three-dimensional skeleton model as the positions of the key points.
  • In an exemplary embodiment, the construction module may be specifically configured to input the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
  • In an exemplary embodiment, the human hand analysis unit 401 may be specifically configured to input the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
  • In an exemplary embodiment, the parameter extraction network is MobileNet, a neural network for mobile terminals.
  • Fig. 5 is a block diagram of a device 500 for determining a position of a key point according to an exemplary embodiment.
  • the apparatus 500 may be a mobile phone, a computer, a digital broadcasting terminal, a message receiving and sending device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • the device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input / output (I / O) interface 512, a sensor component 514, and Communication component 516.
  • the processing component 502 generally controls the overall operations of the device 500, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 502 may include one or more processors 520 to execute instructions to complete all or part of the steps of the above method.
  • the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components.
  • the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
  • the memory 504 is configured to store various types of data to support operation at the device 500. Examples of these data include instructions for any application or method operating on the device 500, contact data, phone book data, messages, pictures, videos, and so on.
  • the memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 506 provides power to various components of the device 500.
  • the power supply component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 500.
  • the multimedia component 508 includes a screen that provides an output interface between the device 500 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 508 includes a front camera and/or a rear camera. When the device 500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal-length and optical-zoom capability.
  • the audio component 510 is configured to output and / or input audio signals.
  • the audio component 510 includes a microphone (MIC). When the device 500 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 504 or transmitted via the communication component 516.
  • the audio component 510 further includes a speaker for outputting audio signals.
  • the I / O interface 512 provides an interface between the processing component 502 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor assembly 514 includes one or more sensors for providing the device 500 with status assessments in various aspects.
  • the sensor component 514 can detect the on/off state of the device 500 and the relative positioning of components, for example of the display and keypad of the device 500; the sensor component 514 can also detect a change in position of the device 500 or of one of its components, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and changes in its temperature.
  • the sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 514 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices.
  • the device 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • RFID radio frequency identification
  • IrDA infrared data association
  • UWB ultra-wideband
  • Bluetooth Bluetooth
  • the apparatus 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions, is also provided; the instructions can be executed by the processor 520 of the device 500 to complete the above method.
  • For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • In an exemplary embodiment, a computer program product is also provided. The computer program product may be stored in a computer-readable storage medium, such as the memory 504, and may be executed by the processor 520 of the device 500 to complete the above method.
  • Fig. 6 is a block diagram of another apparatus 600 for key point position determination according to an exemplary embodiment. For example, the device 600 may be provided as a server.
  • Referring to Fig. 6, the device 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by a memory 632 for storing instructions executable by the processing component 622, such as application programs.
  • the application programs stored in the memory 632 may include one or more modules each corresponding to a set of instructions.
  • the processing component 622 is configured to execute instructions to perform the above method.
  • the device 600 may also include a power component 626 configured to perform power management of the device 600, a wired or wireless network interface 650 configured to connect the device 600 to a network, and an input/output (I/O) interface 658.
  • the device 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a key point position determination method, apparatus, and electronic device. The method includes: determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton. The present disclosure can use the geometric constraints inherent in the bones of the human hand, combined with the hand parameters extracted from the image to be analyzed, to calculate the positions of occluded key points relatively accurately.

Description

Key point position determination method, apparatus and electronic device
This application claims priority to Chinese Patent Application No. 201811295915.4, filed with the Chinese Patent Office on November 1, 2018 and entitled "关键点位置确定方法、装置及电子设备" (Key point position determination method, apparatus and electronic device), the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to a key point position determination method, apparatus, and electronic device.
Background
In the related art, the pose of a human hand can be estimated using the three-dimensional positions of multiple key points of the hand. For each key point, a color image of the hand, such as an RGB (Red Green Blue) image, can be used to compute the probability that each pixel of the image is that key point, yielding the key point's probability distribution over the two-dimensional image; based on that distribution, a pre-trained neural network computes the key point's three-dimensional position.
However, a key point may be occluded in the hand color image. In that case, the location of the key point in the image may show an obstacle instead, so that when the probability distribution is computed, the probability obtained for that location being the key point is low and deviates substantially from the actual situation. As a result, the three-dimensional position of the key point cannot be computed, or is computed with low accuracy, which affects the estimation of the hand pose.
Summary
To overcome the problems in the related art, the present disclosure provides a key point position determination method, apparatus, and electronic device.
According to a first aspect of the embodiments of the present disclosure, a key point position determination method is provided, including:
determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and
determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
With reference to the first aspect, in a first possible implementation, determining the position of the key point of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton, includes:
constructing a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and
reading the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
With reference to the first possible implementation of the first aspect, in a second possible implementation, constructing the three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton, includes:
inputting the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
With reference to the first aspect, in a third possible implementation, determining, based on the image to be analyzed that contains the image of the human hand, the shape of the human hand and the poses of the multiple skeletal nodes includes:
inputting the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the parameter extraction network is MobileNet, a neural network for mobile terminals.
According to a second aspect of the embodiments of the present disclosure, a key point position determination apparatus is provided, including:
a human hand analysis unit configured to determine, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and
a position determination unit configured to determine the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
With reference to the second aspect, in a first possible implementation, the position determination unit includes:
a construction module specifically configured to construct a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and
a reading module configured to read the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
With reference to the first possible implementation of the second aspect, in a second possible implementation, the construction module is specifically configured to input the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
With reference to the second aspect, in a third possible implementation, the human hand analysis unit is specifically configured to input the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the parameter extraction network is MobileNet, a neural network for mobile terminals.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
determine, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and
determine the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
With reference to the third aspect, in a first possible implementation, the processor is specifically configured to:
construct a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and
read the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
With reference to the first possible implementation of the third aspect, in a second possible implementation, the processor is specifically configured to:
input the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
With reference to the third aspect, in a third possible implementation, the processor is specifically configured to:
input the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the parameter extraction network is MobileNet, a neural network for mobile terminals.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a key point position determination method including:
determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and
determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
With reference to the fourth aspect, in a first possible implementation, determining the position of the key point of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton, includes:
constructing a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and
reading the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
With reference to the first possible implementation of the fourth aspect, in a second possible implementation, constructing the three-dimensional skeleton model of the human hand includes:
inputting the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
With reference to the fourth aspect, in a third possible implementation, determining, based on the image to be analyzed that contains the image of the human hand, the shape of the human hand and the poses of the multiple skeletal nodes includes:
inputting the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
With reference to the third possible implementation of the fourth aspect, in a fourth possible implementation, the parameter extraction network is MobileNet, a neural network for mobile terminals.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided; when the computer program product is executed by a processor of a user terminal, the user terminal is enabled to perform a key point position determination method including:
determining, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand; and
determining the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
With reference to the fifth aspect, in a first possible implementation, determining the position of the key point of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton, includes:
constructing a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and
reading the three-dimensional coordinates of the key point of the human hand from the three-dimensional skeleton model as the position of the key point.
With reference to the first possible implementation of the fifth aspect, in a second possible implementation, constructing the three-dimensional skeleton model of the human hand includes:
inputting the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
With reference to the fifth aspect, in a third possible implementation, determining, based on the image to be analyzed that contains the image of the human hand, the shape of the human hand and the poses of the multiple skeletal nodes includes:
inputting the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the human hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
With reference to the third possible implementation of the fifth aspect, in a fourth possible implementation, the parameter extraction network is MobileNet, a neural network for mobile terminals.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effect: by using the geometric constraints inherent in the bones of the human hand, combined with the hand parameters extracted from the image to be analyzed, the positions of occluded key points can be calculated relatively accurately. It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention and of the prior art more clearly, the drawings needed for the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below show merely some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a key point position determination method according to an exemplary embodiment;
Fig. 2a is a structural diagram of the bones of a human hand according to an exemplary embodiment;
Fig. 2b is a distribution diagram of key points according to an exemplary embodiment;
Fig. 3 is a flowchart of another key point position determination method according to an exemplary embodiment;
Fig. 4 is a structural diagram of a key point position determination apparatus according to an exemplary embodiment;
Fig. 5 is a block diagram of an apparatus for key point position determination according to an exemplary embodiment;
Fig. 6 is a block diagram of another apparatus for key point position determination according to an exemplary embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a flowchart of a key point position determination method according to an exemplary embodiment. As shown in Fig. 1, the method may be applied to a terminal and includes the following steps.
In step S11, the shape of the human hand and the poses of multiple skeletal nodes of the hand are determined based on an image to be analyzed that contains an image of the human hand.
The image to be analyzed may be a color image obtained by photographing a human hand (such as an RGB image). The hand in the image to be analyzed may be partially occluded or not occluded; the unoccluded case does not present the technical problem the embodiments of the present disclosure aim to solve and is not discussed here. Each skeletal node is a preset point of the human hand; for example, the metacarpophalangeal joint of the index finger may be set as a skeletal node in advance. The number of skeletal nodes can be set according to actual needs or user experience: for example, to determine key point positions more accurately, a relatively large number of skeletal nodes can be preset; or, to reduce the computing resources consumed in determining key point positions, a relatively small number can be preset. In this step, the pose of a skeletal node may include the position and angle of the node in a three-dimensional coordinate system. It can be understood that, although different people's hands are broadly similar in shape, there are person-specific variables, such as the length, width, and thickness of the hand, which are captured by the determined hand shape.
Specifically, a preset image recognition algorithm may be used to process the image to be analyzed to obtain the hand shape and the poses of the multiple skeletal nodes, or a pre-trained neural network may realize an end-to-end mapping from the image to be analyzed to the hand shape and the skeletal node poses.
In one embodiment, the image to be analyzed, which contains the image of the human hand, may also be input into a preset parameter extraction network to obtain the hand shape and the poses of the multiple skeletal nodes. The parameter extraction network is trained in advance on multiple sample images annotated with ground truth; each sample image contains an image of a human hand, and the annotated ground truth may be the hand's shape and the poses of its skeletal nodes. The network may be trained by inputting sample images into the parameter extraction network, computing a loss function between the network output and the annotated ground truth, and adjusting the network parameters based on the loss function using stochastic gradient descent.
The parameter extraction network may be a large convolutional neural network such as U-net or HourGlass, or a lightweight convolutional neural network such as MobileNet. Compared with U-net or HourGlass, MobileNet has lower structural complexity and occupies fewer computing resources, making it easier to run on mobile terminals; U-net and HourGlass, with their relatively complex network structures, occupy more computing resources and are difficult or impossible to run on mobile terminals.
In step S12, the position of a key point of the human hand is determined based on the hand shape and the poses of the multiple skeletal nodes, according to preset geometric constraints of the human hand skeleton.
The human hand contains hand bones comprising multiple individual bones, as shown in Fig. 2a. These bones are moved by the hand muscles, but because they are connected to one another, their motion is constrained. For example, the middle phalanx of the middle finger can bend toward the palm and/or the back of the hand relative to the first joint, but a normal person cannot, or can hardly, bend it toward the index finger or the ring finger relative to the first joint (these two directions are hereinafter called lateral); a geometric constraint can therefore be preset: the lateral angles of the proximal and middle phalanges of the middle finger are the same. Suppose the proximal phalanx of the middle finger (201 in the figure) is 3 cm long; it articulates with the middle phalanx of the middle finger (203 in the figure) at what is hereinafter called the first joint, and with the metacarpal head (202 in the figure) at what is hereinafter called the second joint. If the proximal phalanx is treated as a rigid body, a geometric constraint can be preset: the distance between the first joint and the second joint is 3 cm.
The key points used in hand pose estimation can be the joints of the hand bones; for example, commonly used hand pose estimation algorithms use 21 joints of the hand bones as 21 key points, distributed as shown in Fig. 2b, where 1-21 mark their positions. Key point positions are therefore not arbitrary but are governed by the geometric constraints of the hand skeleton. As an example, suppose step S11 determines that the second joint of the middle finger is at (0, 0, 0) in a three-dimensional coordinate system (with cm as its unit and the palm direction as the positive x-axis), that the proximal phalanx of the middle finger is 3 cm long and bent 90° toward the palm relative to the second joint, and that the first joint is the key point; then the key point's coordinates can be determined to be (3, 0, 0). Because this position is derived from the hand shape (a 3 cm proximal phalanx) and the poses of the second joint and the proximal phalanx, the position of the first joint can be determined even if it is occluded in the image to be analyzed.
With this embodiment, the geometric constraints inherent in the hand skeleton are used, together with the hand shape and skeletal node poses extracted from the image to be analyzed, to infer key point positions rather than detecting and localizing key points directly in the image; thus, even if a key point is occluded, its position can still be determined.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of another key point position determination method according to an embodiment of the present disclosure, including the following steps.
In step S31, the shape of the human hand and the poses of multiple skeletal nodes of the hand are determined based on an image to be analyzed that contains an image of the human hand.
This step is the same as S11; refer to the foregoing description of S11, which is not repeated here.
In step S32, the obtained shape and poses are input as model parameters into a preset skeletal animation framework to obtain a three-dimensional skeleton model of the human hand.
The skeletal animation framework may be a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models. There may be certain differences between the three-dimensional hand skeleton models of different people, but since all belong to humans there are also certain commonalities, such as the connections between the bones of the hand and the motions they can perform; different people's three-dimensional hand skeleton models may therefore share some identical geometric constraints, and in this embodiment these constraints may be extracted from multiple sample three-dimensional hand skeleton models.
Further, the sample three-dimensional hand skeleton models can be selected according to actual needs. For example, to broaden the population to which this embodiment applies, three-dimensional hand skeleton models of people of different ethnicities, age groups, and genders can be selected as samples; or, if the embodiment is known to target Asian young and middle-aged groups, three-dimensional hand skeleton models of Asian men and women aged 12-40 may be selected as samples.
Although the three-dimensional hand skeleton models of different people share certain commonalities, there are also person-specific differences and time-varying variables. For example, some people's middle-finger proximal phalanx may be 3 cm long while others' may be 3.5 cm long; at one moment the hand may be clenched, with the middle finger's proximal phalanx bent toward the palm relative to the second joint, while at another moment the hand may be laid flat, with no such bend. Therefore, when constructing the three-dimensional skeleton model of a hand, the hand's shape and the poses of multiple skeletal nodes must be input as model parameters.
In step S33, the three-dimensional coordinates of the key points of the human hand are read from the three-dimensional skeleton model as the positions of the key points.
Once the three-dimensional skeleton model of the hand has been constructed, the position of any point on the hand skeleton can be regarded as known, so the three-dimensional coordinates of the key points can be read from the model. With this embodiment, the key points are read from the three-dimensional skeleton model of the hand and therefore satisfy the geometric constraints extracted from the multiple sample three-dimensional hand skeleton models; their distribution can thus be considered consistent with the actual distribution of the joints of a human hand, and the key point positions obtained in this way can be considered more accurate.
Fig. 4 is a block diagram of a key point position determination apparatus according to an exemplary embodiment. Referring to Fig. 4, the apparatus includes a human hand analysis unit 401 and a position determination unit 402.
The human hand analysis unit 401 is configured to determine, based on an image to be analyzed that contains an image of a human hand, the shape of the human hand and the poses of multiple skeletal nodes of the hand.
The position determination unit 402 is configured to determine the position of a key point of the human hand based on the shape and the poses, according to preset geometric constraints of the human hand skeleton.
In an exemplary embodiment, the position determination unit 402 includes:
a construction module configured to construct a three-dimensional skeleton model of the human hand based on the shape and the poses, according to the preset geometric constraints of the human hand skeleton; and
a reading module configured to read the three-dimensional coordinates of the key points of the human hand from the three-dimensional skeleton model as the positions of the key points.
In an exemplary embodiment, the construction module may be specifically configured to input the shape and the poses as model parameters into a preset skeletal animation framework to obtain the three-dimensional skeleton model of the human hand, the skeletal animation framework being a parameterized framework obtained by extracting geometric constraints from multiple sample three-dimensional hand skeleton models.
In an exemplary embodiment, the human hand analysis unit 401 may be specifically configured to input the image to be analyzed, which contains the image of the human hand, into a preset parameter extraction network to obtain the shape of the hand and the poses of the multiple skeletal nodes, the parameter extraction network having been trained in advance on sample images annotated with the hand shape and the pose of each skeletal node.
In an exemplary embodiment, the parameter extraction network is MobileNet, a neural network for mobile terminals.
Regarding the apparatuses of the above embodiments, the specific manner in which each module performs its operations has been described in detail in the method embodiments and will not be elaborated here.
Fig. 5 is a block diagram of an apparatus 500 for key point position determination according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 5, the apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls the overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 502 may include one or more processors 520 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation of the apparatus 500. Examples of such data include instructions of any application or method operated on the apparatus 500, contact data, phone book data, messages, pictures, videos, and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 506 provides power to the various components of the apparatus 500 and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the apparatus 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel; the touch sensors may sense not only the boundary of a touch or swipe but also its duration and pressure. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera, which may receive external multimedia data when the apparatus 500 is in an operation mode such as a shooting mode or a video mode. Each front or rear camera may be a fixed optical lens system or have focal-length and optical-zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC) configured to receive external audio signals when the apparatus 500 is in an operation mode such as a call mode, a recording mode, or a voice recognition mode; the received audio signals may be further stored in the memory 504 or sent via the communication component 516. In some embodiments, the audio component 510 also includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, such as a keyboard, a click wheel, or buttons, which may include but are not limited to a home button, volume buttons, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing status assessments of various aspects of the apparatus 500. For example, the sensor component 514 can detect the open/closed state of the apparatus 500 and the relative positioning of components, such as the display and keypad of the apparatus 500; it can also detect a change in position of the apparatus 500 or of one of its components, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and changes in its temperature. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for imaging applications. In some embodiments, the sensor component 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the apparatus 500 and other devices. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 also includes a near-field communication (NFC) module to facilitate short-range communication; the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions, is also provided; the instructions can be executed by the processor 520 of the apparatus 500 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided; it may be stored in a computer-readable storage medium, such as the memory 504, and may be executed by the processor 520 of the apparatus 500 to complete the above method.
Fig. 6 is a block diagram of another apparatus 600 for key point position determination according to an exemplary embodiment. For example, the apparatus 600 may be provided as a server. Referring to Fig. 6, the apparatus 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by a memory 632 for storing instructions executable by the processing component 622, such as application programs. The application programs stored in the memory 632 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 622 is configured to execute instructions to perform the above method.
The apparatus 600 may also include a power component 626 configured to perform power management of the apparatus 600, a wired or wireless network interface 650 configured to connect the apparatus 600 to a network, and an input/output (I/O) interface 658. The apparatus 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprise", "include", or any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes it.
The embodiments in this specification are described in a related manner; for identical or similar parts, reference may be made between the embodiments, and each embodiment focuses on its differences from the others. In particular, the apparatus, electronic device, and storage medium embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (20)

  1. 一种关键点位置确定方法,其特征在于,包括:
    基于包含人手的影像的待分析图像,确定所述人手的形状和所述人手中多个骨骼节点的位姿;
    基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,确定所述人手的关键点的位置。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,确定所述人手的关键点的位置,包括:
    基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,构建所述人手的三维骨骼模型;
    从所述三维骨骼模型中读取所述人手的关键点的三维空间坐标,作为所述关键点的位置。
  3. 根据权利要求2所述的方法,其特征在于,所述基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,构建所述人手的三维骨骼模型,包括:
    将所述形状和所述位姿作为模型参数输入至预设的骨骼动画框架中,得到所述人手的三维骨骼模型,所述骨骼动画框架为从多个样本人手三维骨骼模型中提取几何约束条件得到的参数化的框架。
  4. 根据权利要求1所述的方法,其特征在于,所述基于包含人手的影像的待分析图像,确定所述人手的形状和所述人手中多个骨骼节点的位姿,包括:
    将包含人手的影像的待分析图像,输入预设的参数提取网络,得到所述人手的形状和所述人手中多个骨骼节点的位姿,所述参数提取网络预先经过标注有人手的形状和各个骨骼节点的位姿的样本图像的训练。
  5. 根据权利要求4所述的方法,其特征在于,所述参数提取网络为移动端神经网络MobileNet。
  6. 一种关键点位置确定装置,其特征在于,包括:
    人手分析单元,被配置为执行基于包含人手的影像的待分析图像,确定 所述人手的形状和所述人手中多个骨骼节点的位姿;
    位置确定单元,被配置为执行基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,确定所述人手的关键点的位置。
  7. 根据权利要求6所述的装置,其特征在于,所述位置确定单元,包括:
    构建模块,被配置为执行基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,构建所述人手的三维骨骼模型;
    读取模块,被配置为执行从所述三维骨骼模型中读取所述人手的关键点的三维空间坐标,作为所述关键点的位置。
  8. 根据权利要求7所述的装置,其特征在于,所述构建模块,具体被配置为执行将所述形状和所述位姿作为模型参数输入至预设的骨骼动画框架中,得到所述人手的三维骨骼模型,所述骨骼动画框架为从多个样本人手三维骨骼模型中提取几何约束条件得到的参数化的框架。
  9. 根据权利要求6所述的装置,其特征在于,所述人手分析单元,具体被配置为执行将包含人手的影像的待分析图像,输入预设的参数提取网络,得到所述人手的形状和所述人手中多个骨骼节点的位姿,所述参数提取网络预先经过标注有人手的形状和各个骨骼节点的位姿的样本图像的训练。
  10. 根据权利要求9所述的装置,其特征在于,所述参数提取网络为移动端神经网络MobileNet。
  11. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为执行:
    基于包含人手的影像的待分析图像,确定所述人手的形状和所述人手中多个骨骼节点的位姿;
    基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,确定所述人手的关键点的位置。
  12. 根据权利要求11所述的电子设备,其特征在于,所述处理器,具体被配置为执行:
    基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,构建 所述人手的三维骨骼模型;
    从所述三维骨骼模型中读取所述人手的关键点的三维空间坐标,作为所述关键点的位置。
  13. 根据权利要求12所述的电子设备,其特征在于,所述处理器具体被配置为执行:
    将所述形状和所述位姿作为模型参数输入至预设的骨骼动画框架中,得到所述人手的三维骨骼模型,所述骨骼动画框架为从多个样本人手三维骨骼模型中提取几何约束条件得到的参数化的框架。
  14. 根据权利要求11所述的电子设备,其特征在于,所述处理具体被配置为执行:
    将包含人手的影像的待分析图像,输入预设的参数提取网络,得到所述人手的形状和所述人手中多个骨骼节点的位姿,所述参数提取网络预先经过标注有人手的形状和各个骨骼节点的位姿的样本图像的训练。
  15. 根据权利要求14所述的电子设备,其特征在于,所述参数提取网络为移动端神经网络MobileNet。
  16. 一种非临时性计算机可读存储介质,当所述存储介质中的指令由移动终端的处理器执行时,使得移动终端能够执行一种关键点位置确定方法,所述方法包括:
    基于包含人手的影像的待分析图像,确定所述人手的形状和所述人手中多个骨骼节点的位姿;
    基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,确定所述人手的关键点的位置。
  17. 根据权利要求16所述的非临时性计算机可读存储介质,其特征在于,所述基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,确定所述人手的关键点的位置,包括:
    基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,构建所述人手的三维骨骼模型;
    从所述三维骨骼模型中读取所述人手的关键点的三维空间坐标,作为所述关键点的位置。
  18. 根据权利要求17所述的非临时性计算机可读存储介质,其特征在于,所述基于所述形状和所述位姿,按照预设的人手骨骼的几何约束条件,构建所述人手的三维骨骼模型,包括:
    将所述形状和所述位姿作为模型参数输入至预设的骨骼动画框架中,得到所述人手的三维骨骼模型,所述骨骼动画框架为从多个样本人手三维骨骼模型中提取几何约束条件得到的参数化的框架。
  19. 根据权利要求16所述的非临时性计算机可读存储介质,其特征在于,所述基于包含人手的影像的待分析图像,确定所述人手的形状和所述人手中多个骨骼节点的位姿,包括:
    将包含人手的影像的待分析图像,输入预设的参数提取网络,得到所述人手的形状和所述人手中多个骨骼节点的位姿,所述参数提取网络预先经过标注有人手的形状和各个骨骼节点的位姿的样本图像的训练。
  20. 根据权利要求19所述的非临时性计算机可读存储介质,其特征在于,所述参数提取网络为移动端神经网络MobileNet。
PCT/CN2019/104231 2018-11-01 2019-09-03 Key point position determination method, apparatus and electronic device WO2020088092A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811295915.4 2018-11-01
CN201811295915.4A CN109410276B (zh) 2018-11-01 2018-11-01 Key point position determination method, apparatus and electronic device

Publications (1)

Publication Number Publication Date
WO2020088092A1 (zh) 2020-05-07

Family

ID=65471142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104231 WO2020088092A1 (zh) 2018-11-01 2019-09-03 关键点位置确定方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN109410276B (zh)
WO (1) WO2020088092A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2598452A (en) * 2020-06-22 2022-03-02 Ariel Ai Ltd 3D object model reconstruction from 2D images
US11688136B2 (en) 2020-06-22 2023-06-27 Snap Inc. 3D object model reconstruction from 2D images

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410276B (zh) * 2018-11-01 2020-10-27 北京达佳互联信息技术有限公司 关键点位置确定方法、装置及电子设备
CN112257582A (zh) * 2020-10-21 2021-01-22 北京字跳网络技术有限公司 脚部姿态确定方法、装置、设备和计算机可读介质
CN113052189B (zh) * 2021-03-30 2022-04-29 电子科技大学 一种基于改进的MobileNetV3特征提取网络
CN114332939B (zh) * 2021-12-30 2024-02-06 浙江核新同花顺网络信息股份有限公司 一种位姿序列生成方法和系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012095756A2 (en) * 2011-01-03 2012-07-19 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
CN106886741A (zh) * 2015-12-16 2017-06-23 芋头科技(杭州)有限公司 A gesture recognition method based on finger recognition
CN107767419A (zh) * 2017-11-07 2018-03-06 广州深域信息科技有限公司 A human skeleton key point detection method and device
CN108399367A (zh) * 2018-01-31 2018-08-14 深圳市阿西莫夫科技有限公司 Hand motion recognition method and device, computer equipment, and readable storage medium
CN109410276A (zh) * 2018-11-01 2019-03-01 北京达佳互联信息技术有限公司 Key point position determination method, apparatus and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7961910B2 (en) * 2009-10-07 2011-06-14 Microsoft Corporation Systems and methods for tracking a model
AU2011203028B1 (en) * 2011-06-22 2012-03-08 Microsoft Technology Licensing, Llc Fully automatic dynamic articulated model calibration
CN104376309B (zh) * 2014-11-27 2018-12-25 韩慧健 一种基于手势识别的手势运动基元模型结构化方法
CN104680582B (zh) * 2015-03-24 2016-02-24 中国人民解放军国防科学技术大学 一种面向对象定制的三维人体模型创建方法
US10318008B2 (en) * 2015-12-15 2019-06-11 Purdue Research Foundation Method and system for hand pose detection

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2598452A (en) * 2020-06-22 2022-03-02 Ariel Ai Ltd 3D object model reconstruction from 2D images
US11688136B2 (en) 2020-06-22 2023-06-27 Snap Inc. 3D object model reconstruction from 2D images
GB2598452B (en) * 2020-06-22 2024-01-10 Snap Inc 3D object model reconstruction from 2D images

Also Published As

Publication number Publication date
CN109410276B (zh) 2020-10-27
CN109410276A (zh) 2019-03-01

Similar Documents

Publication Publication Date Title
WO2020088092A1 (zh) Key point position determination method, apparatus and electronic device
US11163373B2 (en) Method and electronic device of gesture recognition
US10191564B2 (en) Screen control method and device
US11176687B2 (en) Method and apparatus for detecting moving target, and electronic equipment
WO2021135601A1 (zh) Auxiliary photographing method and apparatus, terminal device, and storage medium
CN110674719B (zh) Target object matching method and apparatus, electronic device, and storage medium
CN108712603B (zh) An image processing method and mobile terminal
CN105205479A (zh) Face attractiveness evaluation method and apparatus, and terminal device
CN110705365A (zh) Human body key point detection method and apparatus, electronic device, and storage medium
US20210065342A1 (en) Method, electronic device and storage medium for processing image
JP2016531362A (ja) Skin color adjustment method, skin color adjustment device, program, and recording medium
JP2016531361A (ja) Image segmentation method, image segmentation device, image segmentation equipment, program, and recording medium
CN107133354B (zh) Method and apparatus for acquiring image description information
CN109005336B (zh) An image shooting method and terminal device
TWI718631B (zh) Face image processing method and apparatus, electronic device, and storage medium
CN112115894B (zh) Training method and apparatus for a hand key point detection model, and electronic device
CN111047526A (zh) An image processing method and apparatus, electronic device, and storage medium
WO2021047069A1 (zh) Face recognition method and electronic terminal device
CN111666917A (zh) Pose detection and video processing method and apparatus, electronic device, and storage medium
CN111241887A (zh) Target object key point recognition method and apparatus, electronic device, and storage medium
CN113194254A (zh) Image shooting method and apparatus, electronic device, and storage medium
WO2023005403A1 (zh) Respiratory rate detection method and apparatus, storage medium, and electronic device
CN111724361B (zh) Method and apparatus for displaying lesions in real time, electronic device, and storage medium
CN112614214A (zh) Motion capture method and apparatus, electronic device, and storage medium
CN114581525A (zh) Pose determination method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19878459

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19878459

Country of ref document: EP

Kind code of ref document: A1