CN110532984A - Critical point detection method, gesture identification method, apparatus and system - Google Patents

Info

Publication number
CN110532984A
CN110532984A (application CN201910830741.5A)
Authority
CN
China
Prior art keywords
frame
target
target object
image
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910830741.5A
Other languages
Chinese (zh)
Other versions
CN110532984B (en)
Inventor
孙晨
陈文科
姚聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuanli Jinzhi (Chongqing) Technology Co.,Ltd.
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910830741.5A priority Critical patent/CN110532984B/en
Publication of CN110532984A publication Critical patent/CN110532984A/en
Application granted granted Critical
Publication of CN110532984B publication Critical patent/CN110532984B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

本发明提供了一种关键点检测方法、手势识别方法、装置及系统,涉及人工智能技术领域,包括:获取待检测图像,待检测图像中包含有目标对象;对待检测图像进行目标检测,得到每个目标对象的位置框;基于每个目标对象的位置框进行位置框合并,得到目标合并框;该目标合并框对应的图像中包括相接触的至少两个目标对象;基于目标合并框对待检测图像进行关键点检测,得到每个目标对象的多个热力图;其中,不同的热力图用于表征位于目标对象上不同位置处的关键点;基于热力图确定相接触的各目标对象的关键点。本发明能够有效提升关键点的检测准确性。

The invention provides a key point detection method, a gesture recognition method, a device and a system, relating to the technical field of artificial intelligence. The method includes: acquiring an image to be detected, the image containing target objects; performing target detection on the image to obtain a position frame for each target object; merging the position frames of the target objects to obtain a target merged frame, where the image region corresponding to the target merged frame contains at least two target objects in contact; performing key point detection on the image based on the target merged frame to obtain multiple heat maps for each target object, where different heat maps represent key points at different positions on the target object; and determining the key points of the contacting target objects based on the heat maps. The invention can effectively improve the detection accuracy of key points.

Description

关键点检测方法、手势识别方法、装置及系统Key point detection method, gesture recognition method, device and system

技术领域technical field

本发明涉及人工智能技术领域,尤其是涉及一种关键点检测方法、手势识别方法、装置及系统。The invention relates to the technical field of artificial intelligence, and in particular, to a key point detection method, a gesture recognition method, a device and a system.

背景技术Background technique

对图像中手势、人脸等目标对象的检测技术是人工智能的重要应用。目标对象的关键点检测是目标检测中关键的一环，目标对象的关键点信息有助于更加准确的确定目标对象的姿态。近年来，随着深度学习方法在目标对象姿态估计问题上的发展，目标对象检测结合目标对象关键点预估的方法是目前最常见且计算高效的关键点检测方法，但是该方法却只能对单独的目标对象个体或者完全分开的多个目标对象进行处理，一旦目标对象之间进行接触互动，比如：双手交叉、一只手在另一只手的掌心写字等，就很难准确地检测出目标对象的关键点。The detection of target objects such as gestures and faces in images is an important application of artificial intelligence. Key point detection is a crucial part of target detection, and key point information helps to determine the pose of a target object more accurately. In recent years, with the development of deep learning methods for target object pose estimation, combining target object detection with target object key point estimation has become the most common and computationally efficient approach to key point detection. However, this approach can only handle individual target objects, or multiple target objects that are completely separate. Once target objects come into contact with each other, for example when two hands are crossed or one hand writes on the palm of the other, it becomes difficult to detect the key points of the target objects accurately.

发明内容SUMMARY OF THE INVENTION

有鉴于此,本发明的目的在于提供一种关键点检测方法、手势识别方法、装置及系统,能够有效提升关键点的检测准确性。In view of this, the purpose of the present invention is to provide a key point detection method, gesture recognition method, device and system, which can effectively improve the detection accuracy of key points.

为了实现上述目的,本发明实施例采用的技术方案如下:In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present invention are as follows:

第一方面，本发明实施例提供了一种关键点检测方法，所述方法包括：获取待检测图像，所述待检测图像中包含有目标对象；对所述待检测图像进行目标检测，得到每个所述目标对象的位置框；基于每个所述目标对象的位置框进行位置框合并，得到目标合并框；其中，所述目标合并框对应的图像中包括相接触的至少两个目标对象；基于所述目标合并框对所述待检测图像进行关键点检测，得到每个所述目标对象的多个热力图；其中，不同的所述热力图用于表征位于所述目标对象上不同位置处的关键点；基于所述热力图确定相接触的各所述目标对象的关键点。In a first aspect, an embodiment of the present invention provides a key point detection method, including: acquiring an image to be detected, the image containing target objects; performing target detection on the image to obtain a position frame for each target object; merging the position frames of the target objects to obtain a target merged frame, where the image region corresponding to the target merged frame contains at least two target objects in contact; performing key point detection on the image based on the target merged frame to obtain multiple heat maps for each target object, where different heat maps represent key points at different positions on the target object; and determining the key points of the contacting target objects based on the heat maps.

进一步，所述基于每个所述目标对象的位置框进行位置框合并，得到目标合并框的步骤，包括：基于每个所述目标对象的位置框，对位置框重复执行预设的合并操作，直至任意两个位置框之间的重叠度均不大于预设重叠度阈值，得到目标合并框。Further, the step of merging the position frames of the target objects to obtain the target merged frame includes: repeatedly performing a preset merge operation on the position frames until the degree of overlap between any two position frames is no greater than a preset overlap threshold, thereby obtaining the target merged frame.

进一步，所述基于每个所述目标对象的位置框进行位置框合并，得到目标合并框的步骤，包括：基于每个所述目标对象的位置框，对位置框重复执行预设的合并操作，直至任意两个位置框之间的重叠度均不大于预设重叠度阈值，得到候选合并框；根据所述候选合并框中所包含的目标对象的数量，从所述候选合并框中筛选得到目标合并框。Further, the step of merging the position frames of the target objects to obtain the target merged frame includes: repeatedly performing a preset merge operation on the position frames until the degree of overlap between any two position frames is no greater than a preset overlap threshold, thereby obtaining candidate merged frames; and selecting the target merged frame from the candidate merged frames according to the number of target objects each candidate merged frame contains.
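The claim above leaves open how the number of target objects contained in a candidate merged frame is counted. A minimal Python sketch, assuming (purely for illustration, not mandated by the patent) that a detected position frame counts as contained when its centre falls inside the candidate frame:

```python
def box_center(box):
    # box = (x1, y1, x2, y2)
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def contains(outer, point):
    x, y = point
    return outer[0] <= x <= outer[2] and outer[1] <= y <= outer[3]

def filter_candidates(candidates, object_boxes, min_objects=2):
    """Keep only candidate merged frames covering at least
    `min_objects` detected per-object position frames (counted by
    whether each object frame's centre lies inside the candidate)."""
    kept = []
    for cand in candidates:
        n = sum(1 for b in object_boxes if contains(cand, box_center(b)))
        if n >= min_objects:
            kept.append(cand)
    return kept
```

With this rule, a candidate frame spanning two touching hands survives, while a merged frame that ends up covering only one isolated object is discarded.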

进一步，所述合并操作包括：从多个位置框中确定待合并的位置框对；其中，所述位置框对包括两个位置框；计算所述位置框对中位置框之间的重叠度；如果计算的重叠度大于预设重叠度阈值，将所述位置框对中的位置框合并为新的位置框；其中，所述新的位置框的边界为根据所述位置框对中的两个位置框的边界确定的。Further, the merge operation includes: determining, from the plurality of position frames, a pair of position frames to be merged, the pair consisting of two position frames; computing the degree of overlap between the two position frames of the pair; and, if the computed overlap is greater than the preset overlap threshold, merging the two position frames into a new position frame, whose boundary is determined from the boundaries of the two position frames in the pair.
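The merge operation and the repeat-until-no-overlap loop of the preceding claims can be sketched as follows. This is an illustrative Python sketch: intersection-over-union is assumed as the overlap measure, and the bounding union of the two frames as the new frame's boundary, both plausible readings of the claim wording rather than formulas prescribed by the patent:

```python
def overlap(a, b):
    # intersection-over-union between two frames (x1, y1, x2, y2)
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def merge_pair(a, b):
    # new frame boundary derived from the two frames: their bounding union
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def merge_boxes(boxes, iou_thresh=0.1):
    """Repeat the merge operation until no two frames overlap
    by more than iou_thresh."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlap(boxes[i], boxes[j]) > iou_thresh:
                    new = merge_pair(boxes[i], boxes[j])
                    boxes = [b for k, b in enumerate(boxes)
                             if k not in (i, j)]
                    boxes.append(new)
                    merged = True
                    break
            if merged:
                break
    return boxes
```

For two overlapping hand frames and one distant frame, the two overlapping frames collapse into a single merged frame while the distant frame is left unchanged.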

进一步，所述从多个位置框中确定待合并的位置框对的步骤，包括：获取位置框的置信度；按照各位置框的置信度对所述位置框进行排序，得到位置框排序结果；根据所述位置框排序结果从多个位置框中确定待合并的位置框对。Further, the step of determining the pair of position frames to be merged from the plurality of position frames includes: obtaining the confidence of each position frame; sorting the position frames by confidence to obtain a sorted result; and determining the pair of position frames to be merged from the plurality of position frames according to the sorted result.

进一步，所述获取位置框的置信度的步骤，包括：获取所述位置框对中两个位置框的置信度，根据所述两个位置框的置信度获得所述新的位置框的置信度。Further, the step of obtaining the confidence of a position frame includes: obtaining the confidences of the two position frames of the pair, and deriving the confidence of the new position frame from those two confidences.

进一步，所述基于所述目标合并框对所述待检测图像进行关键点检测，得到每个所述目标对象的多个热力图的步骤，包括：基于所述目标合并框从所述待检测图像中抠取局部图像；所述局部图像中包含有相接触的目标对象；对所述局部图像进行尺寸调整，并对尺寸调整之后的局部图像进行关键点检测，得到每个所述目标对象的多个热力图。Further, the step of performing key point detection on the image to be detected based on the target merged frame to obtain multiple heat maps for each target object includes: cropping a partial image from the image to be detected based on the target merged frame, the partial image containing the contacting target objects; resizing the partial image; and performing key point detection on the resized partial image to obtain multiple heat maps for each target object.
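The crop-and-resize step can be illustrated as follows. This is a minimal sketch on plain nested lists; a real implementation would typically use a library resize (e.g. bilinear interpolation), and nearest-neighbour is used here only to keep the example self-contained:

```python
def crop(image, box):
    """Cut the region covered by `box` (x1, y1, x2, y2) out of
    `image`, a row-major list of pixel rows."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize to the fixed input size a keypoint
    network expects (a stand-in for e.g. bilinear interpolation)."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```

The cropped region corresponding to the target merged frame is resized and then passed to the key point detector.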

进一步，所述对尺寸调整之后的局部图像进行关键点检测，得到每个所述目标对象的多个热力图的步骤，包括：通过训练后的检测模型对尺寸调整之后的局部图像进行关键点检测，得到每个所述目标对象的多个热力图。Further, the step of performing key point detection on the resized partial image to obtain multiple heat maps for each target object includes: performing key point detection on the resized partial image through a trained detection model to obtain multiple heat maps for each target object.

进一步，所述方法还包括：向待训练的检测模型输入多张标注有目标对象的关键点位置的训练图像，其中，所述训练图像中包括相接触的至少两个目标对象，且任意两个所述目标对象之间的重叠度均达到预设重叠度阈值；通过所述待训练的检测模型对所述训练图像进行检测，输出所述训练图像中各所述目标对象的热力图；基于所述训练图像中各所述目标对象的热力图得到所述训练图像中各目标对象中的关键点位置；基于所述待训练的检测模型得到的所述关键点位置和已标注的关键点位置对所述待训练的检测模型进行参数优化，直至所述待训练的检测模型得到的关键点位置和已标注的关键点位置之间的匹配度达到预设匹配度时，确定训练结束，得到训练后的检测模型。Further, the method also includes: inputting, to a detection model to be trained, multiple training images annotated with the key point positions of target objects, where each training image contains at least two target objects in contact and the degree of overlap between any two of these target objects reaches a preset overlap threshold; detecting the training images with the detection model to be trained and outputting a heat map for each target object in each training image; obtaining the key point positions of each target object in a training image from these heat maps; and optimizing the parameters of the detection model based on the key point positions it produces and the annotated key point positions, until the matching degree between the two reaches a preset matching degree, at which point training is determined to be complete and the trained detection model is obtained.
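The patent does not define how the "matching degree" between predicted and annotated key point positions is computed. One plausible, PCK-style reading, shown here purely as a hypothetical sketch, counts the fraction of predicted key points that fall within a pixel tolerance of their annotations and stops training once that fraction reaches the preset level:

```python
def matching_degree(pred, gt, tol=5.0):
    """Fraction of predicted keypoints within `tol` pixels of the
    annotated keypoint (a PCK-style proxy for the patent's
    unspecified 'matching degree'); pred and gt are (x, y) lists."""
    hits = 0
    for (px, py), (gx, gy) in zip(pred, gt):
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= tol:
            hits += 1
    return hits / len(gt)

def should_stop(pred, gt, target=0.9, tol=5.0):
    """Stopping criterion: matching degree reaches the preset level."""
    return matching_degree(pred, gt, tol) >= target
```

Both the tolerance and the target level are illustrative parameters, not values taken from the patent.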

进一步，所述基于所述热力图确定相接触的各所述目标对象的关键点的步骤，包括：获得所述热力图中各个像素点的亮度值；其中，所述亮度值用于表征所述热力图中对应关键点的置信度；根据预设的关键点亮度阈值和获取到的最大亮度值对所述热力图进行过滤；根据过滤后的热力图确定相接触的各所述目标对象的关键点。Further, the step of determining the key points of the contacting target objects based on the heat maps includes: obtaining the brightness value of each pixel in a heat map, where the brightness value represents the confidence of the corresponding key point in the heat map; filtering the heat map according to a preset key point brightness threshold and the maximum brightness value obtained; and determining the key points of the contacting target objects from the filtered heat maps.
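The brightness-based filtering can be sketched as follows. How the preset brightness threshold is combined with the maximum brightness is not specified in the claim; this sketch assumes (one illustrative reading) that the cut-off is a preset ratio of the maximum brightness, and returns the brightest surviving pixel as the key point position:

```python
def keypoint_from_heatmap(heatmap, ratio=0.5):
    """Treat each pixel's brightness as the confidence that the
    keypoint lies there; discard pixels below `ratio` times the
    maximum brightness, then return the (x, y) of the brightest
    surviving pixel."""
    peak = max(max(row) for row in heatmap)
    best, best_val = None, 0.0
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v >= ratio * peak and v > best_val:
                best, best_val = (x, y), v
    return best
```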

第二方面，本发明实施例还提供一种手势识别方法，所述方法包括：采用如上述第一方面任一项所述的关键点检测方法对待检测的手部图像进行关键点检测，得到各手部的关键点；根据各所述手部的关键点识别手势类别。In a second aspect, an embodiment of the present invention further provides a gesture recognition method, including: performing key point detection on a hand image to be detected using the key point detection method of any one of the above first aspects, obtaining the key points of each hand; and recognizing the gesture category according to the key points of each hand.

第三方面，本发明实施例还提供一种关键点检测装置，所述装置包括：图像获取模块，用于获取待检测图像，所述待检测图像中包含有目标对象；目标检测模块，用于对所述待检测图像进行目标检测，得到每个所述目标对象的位置框；位置框合并模块，用于基于每个所述目标对象的位置框进行位置框合并，得到目标合并框；其中，所述目标合并框对应的图像中包括相接触的至少两个目标对象；关键点检测模块，用于基于所述目标合并框对所述待检测图像进行关键点检测，得到每个所述目标对象的多个热力图；其中，不同的所述热力图用于表征位于所述目标对象上不同位置处的关键点；关键点确定模块，用于基于所述热力图确定相接触的各所述目标对象的关键点。In a third aspect, an embodiment of the present invention further provides a key point detection device, including: an image acquisition module for acquiring an image to be detected, the image containing target objects; a target detection module for performing target detection on the image to obtain a position frame for each target object; a position frame merging module for merging the position frames of the target objects to obtain a target merged frame, where the image region corresponding to the target merged frame contains at least two target objects in contact; a key point detection module for performing key point detection on the image based on the target merged frame to obtain multiple heat maps for each target object, where different heat maps represent key points at different positions on the target object; and a key point determination module for determining the key points of the contacting target objects based on the heat maps.

第四方面，本发明实施例还提供一种手势识别装置，所述装置包括：手部关键点检测模块，用于采用如第一方面任一项所述的关键点检测方法对待检测的手部图像进行关键点检测，得到各手部的关键点；手势识别模块，用于根据各所述手部的关键点识别手势类别。In a fourth aspect, an embodiment of the present invention further provides a gesture recognition device, including: a hand key point detection module for performing key point detection on a hand image to be detected using the key point detection method of any one of the first aspect, obtaining the key points of each hand; and a gesture recognition module for recognizing the gesture category according to the key points of each hand.

第五方面，本发明实施例提供了一种关键点检测系统，所述系统包括：图像采集装置、处理器和存储装置；所述图像采集装置，用于采集待检测图像；所述存储装置上存储有计算机程序，所述计算机程序在被所述处理器运行时执行如第一方面任一项所述的关键点检测方法和如第二方面所述的手势识别方法。In a fifth aspect, an embodiment of the present invention provides a key point detection system, including an image acquisition device, a processor, and a storage device; the image acquisition device is configured to acquire an image to be detected, and the storage device stores a computer program that, when executed by the processor, performs the key point detection method of any one of the first aspect and the gesture recognition method of the second aspect.

第六方面，本发明实施例提供了一种电子设备，包括存储器、处理器，所述存储器中存储有可在所述处理器上运行的计算机程序，处理器执行计算机程序时实现上述第一方面任一项所述的关键点检测方法的步骤和如第二方面所述的手势识别方法的步骤。In a sixth aspect, an embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program executable on the processor; when executing the computer program, the processor implements the steps of the key point detection method of any one of the above first aspects and the steps of the gesture recognition method of the second aspect.

第七方面，本发明实施例提供了一种计算机可读存储介质，所述计算机可读存储介质上存储有计算机程序，所述计算机程序被处理器运行时执行上述第一方面任一项所述的关键点检测方法的步骤和如第二方面所述的手势识别方法的步骤。In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the key point detection method of any one of the above first aspects and the steps of the gesture recognition method of the second aspect.

本发明实施例提供了一种关键点检测方法、手势识别方法、装置及系统，能够通过首先对待检测图像进行目标检测，得到每个目标对象的位置框，然后基于每个目标对象的位置框进行位置框合并，得到目标合并框，且目标合并框对应的图像中包括相接触的至少两个目标对象；再基于目标合并框对待检测图像进行关键点的检测，得到每个目标对象的多个热力图；其中，不同的热力图用于表征位于目标对象上不同位置处的关键点；最后基于热力图确定相接触的各目标对象的关键点。本实施例能够基于相接触的至少两个目标对象对应的目标合并框进行关键点的检测，得到可表征相接触的目标对象上不同位置处的关键点的多个热力图，从而基于热力图确定该目标对象的关键点，这种方式可以有效改善因目标对象之间接触互动而导致的关键点检测不准确的问题，提升了关键点的检测准确性。The embodiments of the present invention provide a key point detection method, a gesture recognition method, a device and a system. Target detection is first performed on the image to be detected to obtain a position frame for each target object; the position frames are then merged to obtain a target merged frame whose corresponding image region contains at least two target objects in contact; key point detection is then performed on the image based on the target merged frame, yielding multiple heat maps for each target object, where different heat maps represent key points at different positions on the target object; finally, the key points of the contacting target objects are determined from the heat maps. By detecting key points within the merged frame of at least two contacting target objects, the embodiment obtains multiple heat maps characterizing key points at different positions on the contacting objects and determines the key points from them. This effectively mitigates the inaccurate key point detection caused by contact interaction between target objects and improves detection accuracy.

本发明的其他特征和优点将在随后的说明书中阐述,或者,部分特征和优点可以从说明书推知或毫无疑义地确定,或者通过实施本公开的上述技术即可得知。Additional features and advantages of the present invention will be set forth in the description which follows, or some may be inferred or unambiguously determined from the description, or may be learned by practice of the above-described techniques of the present disclosure.

为使本发明的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。In order to make the above-mentioned objects, features and advantages of the present invention more obvious and easy to understand, preferred embodiments are given below, and are described in detail as follows in conjunction with the accompanying drawings.

附图说明Description of drawings

为了更清楚地说明本发明具体实施方式或现有技术中的技术方案，下面将对具体实施方式或现有技术描述中所需要使用的附图作简单地介绍，显而易见地，下面描述中的附图是本发明的一些实施方式，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他的附图。In order to illustrate the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings used in describing the specific embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings described below show some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

图1示出了本发明实施例所提供的一种电子设备的结构示意图;FIG. 1 shows a schematic structural diagram of an electronic device provided by an embodiment of the present invention;

图2示出了本发明实施例所提供的一种关键点检测方法流程图;FIG. 2 shows a flowchart of a key point detection method provided by an embodiment of the present invention;

图3示出了本发明实施例所提供的三种接触程度不同的两只手的示意图;FIG. 3 shows a schematic diagram of two hands with different degrees of contact provided by an embodiment of the present invention;

图4示出了本发明实施例所提供的一种多次执行合并操作的方法流程图;FIG. 4 shows a flowchart of a method for performing a merge operation multiple times according to an embodiment of the present invention;

图5示出了本发明实施例所提供的一种位置框的合并示意图;FIG. 5 shows a schematic diagram of the combination of a location frame provided by an embodiment of the present invention;

图6示出了本发明实施例所提供的另一种多次执行合并操作的方法流程图;6 shows a flowchart of another method for performing a merge operation multiple times provided by an embodiment of the present invention;

图7示出了本发明实施例所提供的一种手部热力图的示意图;FIG. 7 shows a schematic diagram of a heat map of a hand provided by an embodiment of the present invention;

图8示出了本发明实施例所提供的一种关键点检测装置的结构框图。FIG. 8 shows a structural block diagram of a key point detection apparatus provided by an embodiment of the present invention.

具体实施方式Detailed ways

为使本发明实施例的目的、技术方案和优点更加清楚，下面将结合附图对本发明的技术方案进行清楚、完整地描述，显然，所描述的实施例是本发明一部分实施例，而不是全部的实施例。基于本发明中的实施例，本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例，都属于本发明保护的范围。In order to make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

考虑到现有对象关键点的检测方法，很难准确地检测出接触互动的对象上的关键点。基于此，为改善以上问题，本发明实施例提供的一种关键点检测方法、手势识别方法、装置及系统，该技术可以应用于人机交互、姿态识别等各种需要用到关键点检测的领域，为便于理解，以下对本发明实施例进行详细介绍。Existing key point detection methods can hardly detect key points accurately on objects that are in contact and interacting with each other. To improve on this, the embodiments of the present invention provide a key point detection method, a gesture recognition method, a device and a system; the technique can be applied in various fields that require key point detection, such as human-computer interaction and pose recognition. For ease of understanding, the embodiments of the present invention are described in detail below.

实施例一:Example 1:

首先,参照图1来描述用于实现本发明实施例的关键点检测方法、手势识别方法、装置及系统的示例电子设备100。First, an example electronic device 100 for implementing the keypoint detection method, gesture recognition method, apparatus, and system according to an embodiment of the present invention is described with reference to FIG. 1 .

如图1所示的一种电子设备的结构示意图，电子设备100包括一个或多个处理器102、一个或多个存储装置104、输入装置106、输出装置108以及图像采集装置110，这些组件通过总线系统112和/或其它形式的连接机构（未示出）互连。应当注意，图1所示的电子设备100的组件和结构只是示例性的，而非限制性的，根据需要，所述电子设备也可以具有其他组件和结构。As shown in the structural schematic diagram of FIG. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image acquisition device 110, interconnected by a bus system 112 and/or other forms of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in FIG. 1 are only exemplary, not restrictive; the electronic device may have other components and structures as required.

所述处理器102可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其它形式的处理单元,并且可以控制所述电子设备100中的其它组件以执行期望的功能。The processor 102 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.

所述存储装置104可以包括一个或多个计算机程序产品,所述计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序指令,处理器102可以运行所述程序指令,以实现下文所述的本发明实施例中(由处理器实现)的客户端功能以及/或者其它期望的功能。在所述计算机可读存储介质中还可以存储各种应用程序和各种数据,例如所述应用程序使用和/或产生的各种数据等。The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory, or the like. The non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may execute the program instructions to implement the client functions (implemented by the processor) in the embodiments of the present invention described below. and/or other desired functionality. Various application programs and various data, such as various data used and/or generated by the application program, etc. may also be stored in the computer-readable storage medium.

所述输入装置106可以是用户用来输入指令的装置,并且可以包括键盘、鼠标、麦克风和触摸屏等中的一个或多个。The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.

所述输出装置108可以向外部(例如,用户)输出各种信息(例如,图像或声音),并且可以包括显示器、扬声器等中的一个或多个。The output device 108 may output various information (eg, images or sounds) to the outside (eg, a user), and may include one or more of a display, a speaker, and the like.

所述图像采集装置110可以拍摄用户期望的图像(例如照片、视频等),并且将所拍摄的图像存储在所述存储装置104中以供其它组件使用。The image capture device 110 may capture images (eg, photos, videos, etc.) desired by the user, and store the captured images in the storage device 104 for use by other components.

示例性地，用于实现根据本发明实施例的一种关键点检测方法、手势识别方法、装置及系统的示例电子设备可以被实现为诸如智能手机、平板电脑、计算机、VR设备、摄像头等智能终端上。Exemplarily, the example electronic device for implementing a key point detection method, gesture recognition method, device and system according to embodiments of the present invention may be implemented on a smart terminal such as a smartphone, tablet computer, computer, VR device, or camera.

实施例二:Embodiment 2:

参照图2所示的一种关键点检测方法的流程图,该方法具体包括如下步骤:Referring to the flowchart of a key point detection method shown in FIG. 2, the method specifically includes the following steps:

步骤S202，获取待检测图像，待检测图像中包含有目标对象。待检测图像可以是图像采集装置拍摄的原始图像，也可以是由网络下载、本地存储或人工上传的图像。该待检测图像中可以包括相接触的至少两个目标对象，该目标对象诸如人、人脸、手部、车辆等，相接触的至少两个目标对象可参照图3示出的三种接触程度不同的两只手：图3中的左侧图为相挨上的两只手，中间图为部分交叉在一起的两只手，右侧图为几乎完全交叠的两只手。当然在相接触的目标对象之外还可以包括其它分开的目标对象，在此不作限制。In step S202, an image to be detected is acquired; the image contains target objects. The image to be detected may be an original image captured by an image acquisition device, or an image downloaded from the network, stored locally, or uploaded manually. It may contain at least two target objects in contact, the target objects being, for example, persons, faces, hands, or vehicles. For at least two target objects in contact, reference may be made to the two hands with three different degrees of contact shown in FIG. 3: in the left picture the two hands rest against each other, in the middle picture they are partially crossed, and in the right picture they overlap almost completely. Of course, other separate target objects may also be present besides the contacting ones, which is not limited here.

步骤S204,对待检测图像进行目标检测,得到每个目标对象的位置框。In step S204, target detection is performed on the image to be detected, and a position frame of each target object is obtained.

在一些可选实施方式中，可以基于CNN网络模型(Convolutional Neural Networks，卷积神经网络)、R-CNN(Region-CNN)网络模型或Segnet网络模型等神经网络模型对待检测图像进行目标检测，以得到待检测图像中每个目标对象的位置框。In some optional embodiments, target detection may be performed on the image to be detected based on a neural network model such as a CNN (Convolutional Neural Network), R-CNN (Region-CNN), or Segnet model, to obtain the position frame of each target object in the image to be detected.

步骤S206,基于每个目标对象的位置框进行位置框合并,得到目标合并框;其中,目标合并框对应的图像中包括相接触的至少两个目标对象。Step S206 , merging the position frames based on the position frames of each target object to obtain a target merging frame; wherein, the image corresponding to the target merging frame includes at least two contacting target objects.

互相接触的至少两个目标对象会有部分重叠，各个目标对象的位置框之间也将互相重叠，对于互相重叠的至少两个目标对象的位置框进行合并，得到目标合并框。可以理解，对于基于互相重叠的位置框所得到的目标合并框，其对应的图像中所包括的目标对象的数量不会多于待检测图像中相接触的目标对象的数量，对于一些接触面积较小的目标对象可能并不在目标合并框对应图像的范围内。At least two target objects in contact will partially overlap, and their position frames will also overlap one another; the position frames of the overlapping target objects are merged to obtain the target merged frame. It can be understood that, for a target merged frame obtained from mutually overlapping position frames, the number of target objects in its corresponding image region will not exceed the number of contacting target objects in the image to be detected, and some target objects with a small contact area may fall outside the image region corresponding to the target merged frame.

步骤S208,基于目标合并框对待检测图像进行关键点检测,得到每个目标对象的多个热力图;其中,不同的热力图用于表征位于目标对象上不同位置处的关键点。Step S208 , perform key point detection on the image to be detected based on the target merging frame, and obtain multiple heat maps for each target object; wherein, different heat maps are used to represent key points located at different positions on the target object.

在本实施例中，基于目标合并框对待检测图像进行关键点检测可理解为：对待检测图像中目标合并框处的图像采用自下而上的方式检测每个目标对象的关键点，得到目标合并框中每个目标对象的热力图。热力图为以高亮或者彩色等特殊形式来显示关键点位置的图示。其中，在基于目标合并框得到每个目标对象的热力图的过程中，可以为当前目标对象上每个关键点都生成一张对应的热力图，也即，每张热力图中仅体现一个关键点。例如，目标对象为手，常规用于定位手部的关键点有21个，每只手对应的得到的热力图即为21张，不同热力图对应手部不同位置处的关键点。In this embodiment, performing key point detection on the image to be detected based on the target merged frame can be understood as follows: within the region of the target merged frame, the key points of each target object are detected in a bottom-up manner, producing a heat map for each target object in the merged frame. A heat map is an illustration that displays key point positions in a special form such as highlighting or colouring. In the process of obtaining heat maps for each target object based on the target merged frame, one heat map can be generated for each key point on the current target object; that is, each heat map reflects only one key point. For example, if the target object is a hand, 21 key points are conventionally used to locate the hand, so 21 heat maps are obtained for each hand, with different heat maps corresponding to key points at different positions on the hand.
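The one-heat-map-per-key-point representation described above is commonly realized as a 2-D Gaussian bump centred on the key point, so that pixel brightness encodes confidence. A minimal sketch under that common, but here assumed, convention (the keypoint coordinates are hypothetical values):

```python
import math

def gaussian_heatmap(h, w, cx, cy, sigma=1.5):
    """One heat map per keypoint: a 2-D Gaussian bump centred on
    the keypoint's (cx, cy), so brightness encodes confidence."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                      / (2 * sigma ** 2))
             for x in range(w)]
            for y in range(h)]

# e.g. for a hand, one heat map per keypoint (21 for a full hand model)
keypoints = [(3, 4), (10, 12)]  # hypothetical keypoint positions
heatmaps = [gaussian_heatmap(16, 16, x, y) for x, y in keypoints]
```

Each generated map is brightest exactly at its keypoint and decays with distance from it.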

Step S210: determine the key points of the contacting target objects based on the heat maps. The position coordinates of the key point represented by each heat map can be obtained first; then, according to a preset mapping relationship between the heat map and the target object, the original position coordinates in the target object corresponding to the key point coordinates in the heat map are determined; finally, the key points of the contacting target objects are determined from the original position coordinates of each key point.

The key point detection method provided by this embodiment of the present invention first performs target detection on the image to be detected to obtain a position frame for each target object; then merges position frames based on each target object's position frame to obtain a target merged frame whose corresponding image includes at least two contacting target objects; then performs key point detection on the image to be detected based on the target merged frame to obtain multiple heat maps for each target object, where different heat maps represent key points located at different positions on the target object; and finally determines the key points of the contacting target objects based on the heat maps. Because key point detection is performed on the target merged frame corresponding to at least two contacting target objects, multiple heat maps representing key points at different positions on the contacting objects are obtained, and the key points of those objects are then determined from the heat maps. This approach effectively alleviates the inaccurate key point detection caused by contact and interaction between target objects and improves detection accuracy.

When performing the above step S206, either of the following two position frame merging methods can be used to obtain the target merged frame, depending on the actual detection scene:

Merging method 1: based on the position frame of each target object, a preset merge operation is performed on the position frames repeatedly until the overlap between any two position frames is not greater than a preset overlap threshold, yielding the target merged frame. Merging method 1 suits relatively simple detection scenes, such as when only one target merged frame is obtained, or when two or more target merged frames are obtained and each contains the same number of target objects.

For a more complex detection scene, an image to be detected may contain multiple groups of contacting target objects, for example both two contacting target objects A and B and three contacting target objects C, D, and E, where the group C, D, E is separated from the group A, B. In this scene, the following merging method 2 can be used to obtain the target merged frame.

Merging method 2: first, based on the position frame of each target object, the preset merge operation is performed on the position frames repeatedly until the overlap between any two position frames is not greater than the preset overlap threshold, yielding candidate merged frames; there are at least two candidate merged frames, and they contain different numbers of target objects. Then, according to the number of target objects contained in each candidate merged frame, the target merged frame is selected from the candidate merged frames. In this embodiment, the candidate merged frames can be classified according to the number of target objects they contain, with candidate merged frames of the same class containing the same number of target objects, and each class determined as one kind of target merged frame; alternatively, according to the actually required number of target objects (for example, two specified target objects), the candidate merged frame containing the required number of target objects is determined as the target merged frame.

Adopting merging method 2 makes it convenient, when performing step S208, to select a key point detection method that matches the target merged frame. For example, when key point detection is performed by a convolutional-neural-network-based key point detection model, different key point detection models can be selected according to the different numbers of target objects contained in the target merged frame, so that the selected model matches the key point detection task to be performed well, thereby improving the accuracy of the detection results.

To facilitate understanding of the two merging methods above, this embodiment describes a possible implementation of the merge operation used in those methods. Referring to the flowchart in FIG. 4 of a method that performs the merge operation multiple times, the merge operation includes the following steps S402 to S406:

Step S402: determine a position frame pair to be merged from the multiple position frames, where a position frame pair includes two position frames, and in the first round of the merge operation the pair consists of the position frames of two target objects. Every two of the multiple position frames are determined as one position frame pair to be merged; for example, given position frames a, b, c, and d, the pairwise combinations yield the pairs {ab}, {ac}, {ad}, {bc}, {bd}, and {cd}.

Step S404: calculate the degree of overlap between the two position frames in the position frame pair. For each position frame pair, the ratio of the area of the overlapping portion of the two position frames to the total area of the two position frames is calculated, giving the overlap of the two position frames.

Step S406: if the calculated overlap is greater than the preset overlap threshold, merge the position frames in the pair into a new position frame, where the boundary of the new position frame is determined from the boundaries of the two position frames in the pair. For example, if the overlap of position frames a and b is greater than the preset overlap threshold (e.g., 70%), position frames a and b are merged into a new position frame. Referring to the position frame merging schematic shown in FIG. 5, the boundary of the new position frame is obtained by extending and intersecting the boundaries of the non-overlapping portions of position frames a and b.

It can be understood that, after obtaining the new position frame by performing step S406, the process returns to step S402, and position frame pairs to be merged continue to be determined from the new position frame and the other position frames; that is, steps S402 to S406 are performed on the position frames again until the overlap between any two position frames is not greater than the preset overlap threshold, yielding the candidate merged frames or directly yielding the target merged frame.
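The loop of steps S402 to S406 can be sketched as follows. This is a minimal illustration, not the patented implementation: boxes are assumed axis-aligned tuples (x1, y1, x2, y2), overlap is intersection area over total covered area as in step S404, and the merged boundary is the enclosing rectangle of the pair as in step S406:

```python
def overlap(a, b):
    """Intersection area over total covered area of two boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_boxes(boxes, thresh=0.7):
    """Repeat steps S402-S406: merge any pair whose overlap exceeds thresh."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                if overlap(a, b) > thresh:
                    # Boundary of the new frame: the rectangle enclosing a and b.
                    new = (min(a[0], b[0]), min(a[1], b[1]),
                           max(a[2], b[2]), max(a[3], b[3]))
                    boxes = [r for k, r in enumerate(boxes) if k not in (i, j)]
                    boxes.append(new)
                    merged = True
                    break
            if merged:
                break
    return boxes
```

The loop terminates exactly when no remaining pair exceeds the threshold, matching the stopping condition described above.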

In practical applications, some position frames may be inaccurately located and thus deviate from the target object; merging such frames affects not only merging efficiency but also the accuracy of the merging result. Therefore, when performing step S402 to determine the position frame pairs to be merged from the multiple position frames, the confidence of the position frames can additionally be considered to alleviate this problem, as follows:

First, obtain the confidence of each position frame. When target detection is performed on the image to be detected by a neural network model, a confidence can be generated for the position frame of each target object.

Then, sort the position frames according to their confidences to obtain a position frame ranking. When the position frames are sorted by confidence from high to low, frames near the end of the ranking have lower confidence, indicating that they may deviate from the target and are less reliable; on this basis, some position frames can be filtered out. In a specific implementation, position frames with lower confidence can be filtered out based on a preset position frame confidence threshold; alternatively, position frames beyond a specified rank in the original ranking can be filtered out; the position frame ranking is then determined from the remaining frames. Suppose the ranking result is position frames a, b, c, and d, arranged from high to low confidence. Sorting the position frames by their confidences helps reduce the merging of low-confidence position frames in subsequent steps, effectively improving merging efficiency and accuracy.
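The sort-then-filter step can be sketched as follows (illustrative only; the threshold value and the (box, confidence) data layout are assumptions, not specified by the embodiment):

```python
def rank_and_filter(detections, conf_thresh=0.5):
    """Sort (box, confidence) pairs by confidence and drop unreliable boxes."""
    ranked = sorted(detections, key=lambda bc: bc[1], reverse=True)
    return [bc for bc in ranked if bc[1] >= conf_thresh]

# Hypothetical boxes a-d with detector confidences; d falls below the threshold.
detections = [("a", 0.9), ("b", 0.8), ("c", 0.6), ("d", 0.3)]
kept = rank_and_filter(detections)
```

The alternative filtering rule mentioned above (keep only the top-k ranks) would simply slice `ranked[:k]` instead of thresholding.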

Finally, determine the position frame pairs to be merged from the multiple position frames according to the ranking, where each pair includes two position frames. The ranking reflects the accuracy and reliability of each position frame; accordingly, it can be used both to determine which pairs are suitable for priority merging and to enumerate all pairs to be merged in order, pair by pair, so that no position frame is missed. With the ranking a, b, c, d above, the position frame pairs to be merged in this embodiment may include {ab}, {ac}, {ad}, {bc}, {bd}, and {cd}.

On the basis of determining position frame pairs from confidence as described above, this embodiment can provide another possible implementation of the merge operation. Referring to the flowchart in FIG. 6 of another method that performs the merge operation multiple times, the merge operation includes the following steps S602 to S610:

Step S602: obtain the confidence of each position frame.

Step S604: sort the position frames according to their confidences to obtain a position frame ranking.

Step S606: determine the position frame pairs to be merged from the multiple position frames according to the ranking.

Step S608: calculate the degree of overlap between the position frames in each pair.

Step S610: if the calculated overlap is greater than the preset overlap threshold, merge the position frames in the pair into a new position frame.

It can be understood that, after obtaining a new position frame by performing step S610, the process returns to step S602; that is, steps S602 to S610 are performed on the position frames again until the overlap between any two position frames is not greater than the preset overlap threshold, yielding the candidate merged frames or directly yielding the target merged frame.

When a new round of step S602 is performed, the confidence of a newly merged position frame can be obtained as follows: obtain the confidences of the two position frames in the pair, and derive the confidence of the new position frame from those two confidences.

Considering that the confidence-based ranking is used to determine which position frame pairs are suitable for priority merging, this embodiment can take the higher of the two confidences as the confidence of the new position frame. Of course, the confidence of the new position frame can be determined in various other ways: taking the lower of the two confidences, randomly selecting one of the two confidences, or taking the average of the two confidences.

To reduce the key point detection load on the image to be detected, in step S208 this embodiment may first process the image to be detected before performing key point detection based on the target merged frame, as follows: a local image is cropped from the image to be detected based on the target merged frame, and the local image contains the contacting target objects. Specifically, the position parameters of the target merged frame in the image to be detected are obtained; the position parameters may include the top-left vertex coordinates (x, y) of the target merged frame and its height and width. A function such as CvRect in OpenCV (Open Source Computer Vision Library) can then be applied to the position parameters of the target merged frame, outputting the image determined by those parameters, i.e., the local image cropped from the image to be detected.
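A minimal sketch of this cropping step, assuming the image is an OpenCV-style NumPy array (H x W x C) and the merged frame is given as top-left corner plus width and height; all names and sizes here are illustrative:

```python
import numpy as np

def crop_local_image(image, x, y, w, h):
    """Crop the region of the target merged frame, clamped to the image bounds."""
    H, W = image.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return image[y0:y1, x0:x1]

img = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for the image to be detected
local = crop_local_image(img, 100, 50, 200, 120)
```

Clamping to the image bounds mirrors the practical concern that a merged frame may extend past the image edge.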

The cropped local images usually vary in size. To adapt to various key point detection scenes, the local image can be resized, and key point detection is then performed on the resized local image to obtain the multiple heat maps for each target object. In practical applications, the size of the local image (height and width) can be reset directly to the target size (target height and target width); this resizing method is simple and efficient. Alternatively, to preserve the original aspect ratio of the local image, its height can be scaled to the target height h, and when the resulting width falls short, the width is padded with zeros up to the target width, so that the scaled content still satisfies the original aspect ratio.
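The aspect-ratio-preserving variant can be sketched as follows. This is an assumption-laden illustration: nearest-neighbor index-map scaling is used only to keep the example self-contained, where a real pipeline would typically call a library resize such as cv2.resize:

```python
import numpy as np

def resize_keep_ratio(image, target_h, target_w):
    """Scale height to target_h preserving aspect ratio, then zero-pad the width."""
    h, w = image.shape[:2]
    new_w = int(round(w * target_h / h))
    # Nearest-neighbor scaling via index maps (a stand-in for cv2.resize).
    rows = (np.arange(target_h) * h / target_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    scaled = image[rows][:, cols]
    out = np.zeros((target_h, target_w) + image.shape[2:], dtype=image.dtype)
    out[:, :min(new_w, target_w)] = scaled[:, :target_w]
    return out
```

Zero-padding on the right keeps the object's proportions intact, at the cost of some blank model input.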

Through the above steps of cropping the local image and adjusting its height and width by resetting or padding, the key point detection model only needs to apply its learning capacity to fixed-size local images containing the target object (such as a hand) when learning key point detection, which effectively reduces the model's learning burden. On this basis, this embodiment provides the following example of performing key point detection on the resized local image to obtain the multiple heat maps for each target object: key point detection is performed on the resized local image by a trained detection model, yielding the multiple heat maps for each target object.

The detection model can be a convolutional-neural-network-based key point detection model. Its input is a local image of size (h, w); assume the number of contacting target objects contained in the local image is two. The output of the detection model is 2*n heat maps of size (h/s, w/s); each heat map represents the confidence of one key point, and the position with the highest confidence in the heat map is the position of the corresponding key point. Here, s is the downsampling rate of the heat map relative to the local image, which is coupled to the network structure of the detection model; n is the number of key points of a single target object, and each target object has the same number of key points. The 2*n heat maps represent all the key point positions of the two target objects (such as the left and right hands): the first n heat maps represent the key points of the first target object (e.g., the left hand), the last n heat maps represent the key points of the second target object (e.g., the right hand), and the key point represented by each of a target object's n heat maps is fixed. Since the number of heat maps output by the detection model provided in this embodiment is twice the number of key points of a single target object, the model can be called a dual-channel detection model. It can be understood that when the number of contacting target objects contained in the local image is M (M = 3, 4, 5, ...), the number of heat maps output by the correspondingly trained detection model is M times the number of key points of a single target object, and the detection model in that case can be called an M-channel detection model.

For ease of understanding, the heat maps and the key points they represent can be described with the hand as the target object. Referring to the schematic hand heat maps shown in FIG. 7, if the 1st heat map of one hand (the right hand) represents the position of the thumb-tip key point, then the (n+1)th heat map identifies the position of the thumb-tip key point of the other hand (the left hand). Correspondingly, if the 2nd heat map represents the position of the right hand's index-fingertip key point, the (n+2)th heat map represents the position of the left hand's index-fingertip key point.
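Under the layout just described, the channel index of a given key point in the 2*n heat map stack is a simple offset. A sketch, where the hand index and 0-based key point numbering are illustrative assumptions:

```python
N_KEYPOINTS = 21  # n: conventional number of hand key points

def channel_index(hand, keypoint):
    """Heat map channel of `keypoint` (0-based) for `hand` 0 or 1."""
    return hand * N_KEYPOINTS + keypoint

# Thumb tip (key point 0): channel 0 for the first hand, channel n for the second.
```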

The step of determining the key points of the contacting target objects based on the heat maps may include the following steps 1) to 3):

Step 1): calculate the brightness value of each pixel in the heat map, where the brightness value represents the confidence of the corresponding key point in the heat map. A heat map is essentially an h*w two-dimensional matrix; the larger the value at a position in the matrix, the higher the brightness of the corresponding pixel in the visualized heat map. The position coordinates of the pixel with the maximum brightness value in the two-dimensional matrix are found using the following formula (1):

(x, y) = argmax(hx, hy)    (1)

where argmax(hx, hy) denotes traversing the heat map starting from its reference coordinates (hx, hy) to determine the position coordinates (x, y) of the pixel with the maximum brightness value.

Step 2): filter the heat maps according to a preset key point brightness threshold and the obtained maximum brightness value, with reference to the following formula (2); that is, if the position coordinates (x, y) of the pixel with the maximum brightness value do not satisfy formula (2), the heat map to which that pixel belongs is filtered out.

heatmap[x, y] > conf_thresh    (2)

where heatmap[x, y] is the obtained maximum brightness value, i.e., the brightness value at position (x, y), and conf_thresh is the preset key point brightness threshold. If the brightness value at position (x, y) is below the preset brightness threshold, the brighter pixels in that heat map were caused by interference such as noise and do not correspond to a key point of the target object.
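Formulas (1) and (2) can be sketched together as follows (a minimal illustration; the heat map is assumed to be a NumPy array and the threshold value is hypothetical):

```python
import numpy as np

def peak_if_confident(heatmap, conf_thresh=0.3):
    """Formula (1): locate the brightest pixel; formula (2): drop the map if too dim."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    if heatmap[y, x] > conf_thresh:
        return (x, y)
    return None  # heat map filtered out: its peak is likely noise, not a key point
```

Returning None for a dim map mirrors the filtering in step 2): that heat map contributes no key point.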

Step 3): determine the key points of the contacting target objects according to the filtered heat maps.

For each filtered heat map, the coordinates of its key point can first be computed using an algorithm such as the soft-argmax function; then, according to the preset mapping relationship between the heat map and the target object, the computed heat map coordinates are converted into the original position coordinates in the target object; finally, the key points of each target object are determined from the original position coordinates of all the converted key points.
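The conversion back to original coordinates can be sketched as undoing the downsampling and then the crop, under the assumption that the heat map was downsampled by rate s from a local image cropped at offset (crop_x, crop_y); the numbers below are illustrative:

```python
def heatmap_to_original(hm_x, hm_y, s, crop_x, crop_y):
    """Map heat map coordinates back to coordinates in the image to be detected."""
    # Undo the heat map's downsampling rate s, then undo the crop offset.
    return (hm_x * s + crop_x, hm_y * s + crop_y)

# A peak at (12, 7) in a stride-4 heat map of a crop taken at (100, 50):
orig = heatmap_to_original(12, 7, 4, 100, 50)
```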

After the key points of each target object are determined, pose restoration is performed on all the key points of that target object, so that content such as the target object's pose and expression can be recognized from the restoration result.

In addition, the training process of the above detection model can follow these four steps:

First, input into the detection model to be trained multiple training images annotated with the key point positions of target objects, where each training image includes at least two contacting target objects and the overlap between any two target objects reaches the preset overlap threshold.

Second, detect the training images with the detection model to be trained, and output the heat maps of each target object in the training images.

Third, obtain the key point positions of each target object in the training images based on those heat maps.

Fourth, optimize the parameters of the detection model to be trained based on the key point positions it produces and the annotated key point positions, until the degree of matching between the produced key point positions and the annotated key point positions reaches a preset matching degree; training is then determined to be finished, yielding the trained detection model.

In summary, the above embodiments can detect key points based on the target merged frame corresponding to at least two contacting target objects, obtaining multiple heat maps that represent key points at different positions on the contacting objects, and then determine the key points of those target objects from the heat maps. This approach effectively alleviates the inaccurate key point detection caused by contact and interaction between target objects and improves detection accuracy.

Embodiment 3:

Based on the key point detection method provided in Embodiment 2, an embodiment of the present invention provides a gesture recognition method, including the following steps 1 and 2:

Step 1: perform key point detection on the hand image to be detected using the key point detection method provided in Embodiment 2 to obtain the key points of each hand. For brevity, refer to the corresponding content of method Embodiment 2 for this step.

Step 2: recognize the gesture category according to the key points of each hand. In one specific implementation, for each hand, pose restoration can be performed on all of the hand's key points, so that gesture categories of that hand such as handshake, clenched fist, and grasp can be recognized from the restoration result.

The gesture recognition method provided by this embodiment first obtains the key points of each hand using the key point detection method provided in Embodiment 2, and then recognizes the gesture category from the key points of each hand. Because the key point detection method improves key point detection accuracy, this embodiment effectively improves the accuracy of gesture category recognition.

Embodiment 4:

Based on the key point detection method provided in Embodiment 2, an embodiment of the present invention provides a key point detection apparatus. Referring to the structural block diagram of a key point detection apparatus shown in FIG. 8, the apparatus includes the following modules:

An image acquisition module 802, configured to acquire an image to be detected, the image to be detected containing target objects.

A target detection module 804, configured to perform target detection on the image to be detected to obtain the position frame of each target object.

A position frame merging module 806, configured to merge position frames based on the position frame of each target object to obtain a target merged frame, where the image corresponding to the target merged frame includes at least two contacting target objects.

A key point detection module 808, configured to perform key point detection on the image to be detected based on the target merged frame to obtain multiple heat maps for each target object, where different heat maps represent key points located at different positions on the target object.

A key point determination module 810, configured to determine the key points of the contacting target objects based on the heat maps.

The key point detection apparatus provided by this embodiment of the present invention first performs target detection on the image to be detected to obtain the position frame of each target object; then merges position frames based on each target object's position frame to obtain a target merged frame whose corresponding image includes at least two contacting target objects; then performs key point detection on the image to be detected based on the target merged frame to obtain multiple heat maps for each target object, where different heat maps represent key points located at different positions on the target object; and finally determines the key points of the contacting target objects based on the heat maps. Because key point detection is performed on the target merged frame corresponding to at least two contacting target objects, multiple heat maps representing key points at different positions on the contacting objects are obtained, and the key points are then determined from the heat maps. This approach effectively alleviates the inaccurate key point detection caused by contact and interaction between target objects and improves detection accuracy.

In some implementations, the position box merging module 806 is further configured to: based on the position box of each target object, repeatedly perform a preset merging operation on the position boxes until the overlap between any two position boxes is no greater than a preset overlap threshold, obtaining the target merged box.

In some implementations, the position box merging module 806 is further configured to: based on the position box of each target object, repeatedly perform a preset merging operation on the position boxes until the overlap between any two position boxes is no greater than a preset overlap threshold, obtaining candidate merged boxes; and screen the candidate merged boxes according to the number of target objects each contains, obtaining the target merged box.

In some implementations, the merging operation includes a pair-determination sub-operation and a box-merging sub-operation. The pair-determination sub-operation includes: determining a pair of position boxes to be merged from the multiple position boxes, the pair consisting of two position boxes. The box-merging sub-operation includes: computing the overlap between the two position boxes of the pair; and, if the computed overlap is greater than the preset overlap threshold, merging the two position boxes into a new position box, whose boundary is determined from the boundaries of the two position boxes of the pair.
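As a loose sketch of the merging operation described above, the code below repeats pairwise merges until no two boxes overlap by more than the threshold, taking the union of a pair's boundaries as the new box. The overlap measure (IoU here) and the threshold value 0.1 are illustrative assumptions; the text only calls them a preset overlap measure and threshold.

```python
def iou(a, b):
    """Overlap (intersection over union) of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_boxes(boxes, threshold=0.1):
    """Repeatedly merge any pair of boxes whose overlap exceeds the
    threshold; the merged box's boundary is the union of the pair's
    boundaries. Stops once no pair overlaps more than the threshold."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > threshold:
                    a, b = boxes[i], boxes[j]
                    new = (min(a[0], b[0]), min(a[1], b[1]),
                           max(a[2], b[2]), max(a[3], b[3]))
                    boxes = [boxes[k] for k in range(len(boxes))
                             if k not in (i, j)]
                    boxes.append(new)
                    merged = True
                    break
            if merged:
                break
    return boxes
```

Under this reading, two hands whose boxes overlap collapse into one merged box covering both, while a hand far from the others keeps its own box; the merged box's confidence (claim 6) could then be derived from the pair's confidences, e.g. their maximum.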

In some implementations, the pair-determination sub-operation further includes: obtaining the confidence of each position box; sorting the position boxes by confidence to obtain a ranking of the position boxes; and determining the pair of position boxes to be merged from the multiple position boxes according to the ranking.

In some implementations, the pair-determination sub-operation further includes: obtaining the confidences of the two position boxes of the pair, and deriving the confidence of the new position box from the confidences of the two position boxes.

In some implementations, the key point detection module 808 is further configured to: crop a local image from the image to be detected based on the target merged box, the local image containing the contacting target objects; resize the local image; and perform key point detection on the resized local image to obtain multiple heatmaps for each target object.
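The crop-and-resize step can be pictured with plain Python lists standing in for image arrays. Nearest-neighbour interpolation and the output size are illustrative assumptions, since the text only states that the local image is cut out by the merged box and resized before detection:

```python
def crop(image, box):
    """Cut the local image inside box = (x1, y1, x2, y2) out of a
    row-major image given as a list of rows."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize to the fixed input size a key point
    network would expect."""
    in_h, in_w = len(image), len(image[0])
    return [[image[min(in_h - 1, i * in_h // out_h)]
                  [min(in_w - 1, j * in_w // out_w)]
             for j in range(out_w)]
            for i in range(out_h)]
```

In practice the same two steps would be done on real image arrays (e.g. with an image-processing library), but the logic is the same: slice out the merged-box region, then map each output pixel back to its nearest source pixel.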

In some implementations, the key point detection module 808 is further configured to: perform key point detection on the resized local image with a trained detection model to obtain multiple heatmaps for each target object.

In some implementations, the key point detection apparatus further includes a model training module (not shown), configured to: feed the detection model to be trained multiple training images annotated with the key point positions of target objects, where each training image contains at least two contacting target objects and the overlap between any two target objects reaches a preset overlap threshold; detect the training images with the detection model to be trained, outputting a heatmap for each target object in the training images; derive the key point positions of each target object in the training images from those heatmaps; and optimize the model's parameters based on the key point positions obtained by the model and the annotated key point positions, until the matching degree between the predicted and annotated key point positions reaches a preset matching degree, at which point training ends and the trained detection model is obtained.

In some implementations, the key point determination module 810 is further configured to: obtain the brightness value of each pixel in a heatmap, where the brightness value characterizes the confidence of the corresponding key point; filter the heatmap according to a preset key point brightness threshold and the maximum brightness value obtained; and determine the key points of the contacting target objects from the filtered heatmap.
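One plausible reading of this filtering step (the text does not spell out how the preset brightness threshold and the maximum brightness value combine) is a relative cut: discard pixels dimmer than a fixed fraction of the map's peak, then take the brightest surviving pixel as the key point. The `ratio` value is an assumption for illustration:

```python
def filter_heatmap(heatmap, ratio=0.5):
    """Zero out every pixel dimmer than `ratio` times the map's maximum
    brightness; brightness is read as the key point confidence."""
    peak = max(max(row) for row in heatmap)
    cut = ratio * peak
    return [[v if v >= cut else 0.0 for v in row] for row in heatmap]

def keypoint_from_heatmap(heatmap, ratio=0.5):
    """Return the (x, y) position of the brightest pixel of the
    filtered heatmap as the detected key point, or None if the map
    is empty of confident pixels."""
    filtered = filter_heatmap(heatmap, ratio)
    best, pos = 0.0, None
    for y, row in enumerate(filtered):
        for x, v in enumerate(row):
            if v > best:
                best, pos = v, (x, y)
    return pos
```

Since each heatmap encodes one key point location per target object, running this over all of a hand's heatmaps would yield its full key point set.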

The implementation principle and technical effects of the apparatus provided in this embodiment are the same as those of the foregoing second embodiment; for brevity, for anything not mentioned here, refer to the corresponding content of the second method embodiment.

Embodiment 5:

For the gesture recognition method provided in the third embodiment, an embodiment of the present invention provides a gesture recognition apparatus, which includes:

A hand key point detection module, configured to perform key point detection on the hand image to be detected using the key point detection method provided in the second embodiment, obtaining the key points of each hand.

A gesture recognition module, configured to recognize the gesture category according to the key points of each hand.

Embodiment 6:

Based on the foregoing embodiments, this embodiment provides a key point detection system comprising an image acquisition device, a processor, and a storage device. The image acquisition device acquires the image to be detected; the storage device stores a computer program that, when run by the processor, executes any of the key point detection methods provided in the second embodiment.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiments and is not repeated here.

Further, this embodiment also provides an electronic device including a memory and a processor, the memory storing a computer program runnable on the processor; when the processor executes the computer program, it implements the steps of any key point detection method provided in the second embodiment or of the gesture recognition method provided in the third embodiment.

Further, this embodiment also provides a computer-readable storage medium on which a computer program is stored; when run by a processing device, the computer program executes the steps of any key point detection method provided in the second embodiment or of the gesture recognition method provided in the third embodiment.

The computer program product of the key point detection method, gesture recognition method, apparatus, and system provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions in the program code can be used to execute the methods described in the foregoing method embodiments. For specific implementations, refer to the method embodiments, which are not repeated here.

If the functions are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied as a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Finally, it should be noted that the embodiments above are merely specific implementations of the present invention, intended to illustrate rather than limit its technical solution, and the scope of protection of the present invention is not limited to them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the technical field may still, within the technical scope disclosed by the present invention, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (16)

1. A key point detection method, characterized in that the method comprises:
obtaining an image to be detected, the image to be detected containing target objects;
performing target detection on the image to be detected to obtain a position box of each target object;
merging position boxes based on the position box of each target object to obtain a target merged box, wherein the image corresponding to the target merged box contains at least two target objects that are in contact;
performing key point detection on the image to be detected based on the target merged box to obtain multiple heatmaps of each target object, wherein different heatmaps characterize key points located at different positions on the target object;
determining, based on the heatmaps, the key points of each target object that is in contact.
2. The method according to claim 1, characterized in that the step of merging position boxes based on the position box of each target object to obtain a target merged box comprises:
based on the position box of each target object, repeatedly performing a preset merging operation on the position boxes until the overlap between any two position boxes is no greater than a preset overlap threshold, obtaining the target merged box.
3. The method according to claim 1, characterized in that the step of merging position boxes based on the position box of each target object to obtain a target merged box comprises:
based on the position box of each target object, repeatedly performing a preset merging operation on the position boxes until the overlap between any two position boxes is no greater than a preset overlap threshold, obtaining candidate merged boxes;
screening the candidate merged boxes according to the number of target objects contained in each candidate merged box, obtaining the target merged box.
4. The method according to claim 2 or 3, characterized in that the merging operation comprises:
determining a pair of position boxes to be merged from the multiple position boxes, wherein the pair consists of two position boxes;
calculating the overlap between the position boxes of the pair;
if the calculated overlap is greater than the preset overlap threshold, merging the position boxes of the pair into a new position box, wherein the boundary of the new position box is determined according to the boundaries of the two position boxes of the pair.
5. The method according to claim 4, characterized in that the step of determining a pair of position boxes to be merged from the multiple position boxes comprises:
obtaining the confidence of each position box;
sorting the position boxes according to their confidences, obtaining a ranking of the position boxes;
determining the pair of position boxes to be merged from the multiple position boxes according to the ranking.
6. The method according to claim 5, characterized in that the step of obtaining the confidence of each position box comprises:
obtaining the confidences of the two position boxes of the pair, and obtaining the confidence of the new position box according to the confidences of the two position boxes.
7. The method according to claim 1, characterized in that the step of performing key point detection on the image to be detected based on the target merged box to obtain multiple heatmaps of each target object comprises:
cropping a local image from the image to be detected based on the target merged box, the local image containing the target objects that are in contact;
resizing the local image, and performing key point detection on the resized local image to obtain multiple heatmaps of each target object.
8. The method according to claim 7, characterized in that the step of performing key point detection on the resized local image to obtain multiple heatmaps of each target object comprises:
performing key point detection on the resized local image with a trained detection model to obtain multiple heatmaps of each target object.
9. The method according to claim 8, characterized in that the method further comprises:
inputting, to the detection model to be trained, multiple training images annotated with key point positions of target objects, wherein each training image contains at least two target objects that are in contact, and the overlap between any two of the target objects reaches a preset overlap threshold;
detecting the training images with the detection model to be trained, and outputting a heatmap of each target object in the training images;
obtaining the key point positions of each target object in the training images based on the heatmap of each target object in the training images;
optimizing the parameters of the detection model to be trained based on the key point positions obtained by the model and the annotated key point positions, until the matching degree between the key point positions obtained by the model and the annotated key point positions reaches a preset matching degree, at which point training is determined to be finished and the trained detection model is obtained.
10. The method according to claim 1, characterized in that the step of determining, based on the heatmaps, the key points of each target object that is in contact comprises:
obtaining the brightness value of each pixel in a heatmap, wherein the brightness value characterizes the confidence of the corresponding key point in the heatmap;
filtering the heatmap according to a preset key point brightness threshold and the maximum brightness value obtained;
determining the key points of each target object that is in contact according to the filtered heatmap.
11. A gesture recognition method, characterized in that the method comprises:
performing key point detection on a hand image to be detected using the key point detection method according to any one of claims 1 to 10, obtaining the key points of each hand;
recognizing the gesture category according to the key points of each hand.
12. A key point detection apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to obtain an image to be detected, the image to be detected containing target objects;
a target detection module, configured to perform target detection on the image to be detected to obtain a position box of each target object;
a position box merging module, configured to merge position boxes based on the position box of each target object to obtain a target merged box, wherein the image corresponding to the target merged box contains at least two target objects that are in contact;
a key point detection module, configured to perform key point detection on the image to be detected based on the target merged box to obtain multiple heatmaps of each target object, wherein different heatmaps characterize key points located at different positions on the target object;
a key point determination module, configured to determine, based on the heatmaps, the key points of each target object that is in contact.
13. A gesture recognition apparatus, characterized in that the apparatus comprises:
a hand key point detection module, configured to perform key point detection on a hand image to be detected using the key point detection method according to any one of claims 1 to 10, obtaining the key points of each hand;
a gesture recognition module, configured to recognize the gesture category according to the key points of each hand.
14. A key point detection system, characterized in that the system comprises: an image acquisition device, a processor, and a storage device;
the image acquisition device is configured to acquire an image to be detected;
a computer program is stored on the storage device, and when run by the processor the computer program executes the key point detection method according to any one of claims 1 to 10 or the gesture recognition method according to claim 11.
15. An electronic device, comprising a memory and a processor, the memory storing a computer program runnable on the processor, characterized in that when the processor executes the computer program it implements the steps of the key point detection method according to any one of claims 1 to 10 or of the gesture recognition method according to claim 11.
16. A computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is run by a processor it executes the steps of the key point detection method according to any one of claims 1 to 10 or of the gesture recognition method according to claim 11.
CN201910830741.5A 2019-09-02 2019-09-02 Key point detection method, gesture recognition method, device and system Active CN110532984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910830741.5A CN110532984B (en) 2019-09-02 2019-09-02 Key point detection method, gesture recognition method, device and system

Publications (2)

Publication Number Publication Date
CN110532984A true CN110532984A (en) 2019-12-03
CN110532984B CN110532984B (en) 2022-10-11

Family

ID=68666665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910830741.5A Active CN110532984B (en) 2019-09-02 2019-09-02 Key point detection method, gesture recognition method, device and system

Country Status (1)

Country Link
CN (1) CN110532984B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015219892A (en) * 2014-05-21 2015-12-07 大日本印刷株式会社 Visual line analysis system and visual line analysis device
CN106778585A (en) * 2016-12-08 2017-05-31 腾讯科技(上海)有限公司 A kind of face key point-tracking method and device
CN108268869A (en) * 2018-02-13 2018-07-10 北京旷视科技有限公司 Object detection method, apparatus and system
CN108875482A (en) * 2017-09-14 2018-11-23 北京旷视科技有限公司 Object detecting method and device, neural network training method and device
CN108985259A (en) * 2018-08-03 2018-12-11 百度在线网络技术(北京)有限公司 Human motion recognition method and device
CN109509222A (en) * 2018-10-26 2019-03-22 北京陌上花科技有限公司 The detection method and device of straight line type objects
CN109801335A (en) * 2019-01-08 2019-05-24 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer storage medium
CN110047095A (en) * 2019-03-06 2019-07-23 平安科技(深圳)有限公司 Tracking, device and terminal device based on target detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MANDAR HALDEKAR et al.: "Identifying Spatial Relations in Images using Convolutional Neural Networks", 2017 International Joint Conference on Neural Networks (IJCNN) *
XIA Hansheng et al.: "Distracted driving behavior recognition based on human body key points", Computer Technology and Development *

Cited By (30)

Publication number Priority date Publication date Assignee Title
CN111178192B (en) * 2019-12-18 2023-08-22 北京达佳互联信息技术有限公司 Method and device for identifying position of target object in image
CN111178192A (en) * 2019-12-18 2020-05-19 北京达佳互联信息技术有限公司 Position identification method and device for target object in image
CN113012089A (en) * 2019-12-19 2021-06-22 北京金山云网络技术有限公司 Image quality evaluation method and device
CN113012089B (en) * 2019-12-19 2024-07-09 北京金山云网络技术有限公司 Image quality evaluation method and device
CN111208509A (en) * 2020-01-15 2020-05-29 中国人民解放军国防科技大学 Ultra-wideband radar human body target posture visualization enhancing method
CN111208509B (en) * 2020-01-15 2020-12-29 中国人民解放军国防科技大学 An Ultra-Wideband Radar Human Target Attitude Visualization Enhancement Method
CN111325171A (en) * 2020-02-28 2020-06-23 深圳市商汤科技有限公司 Abnormal parking monitoring method and related product
CN111524188A (en) * 2020-04-24 2020-08-11 杭州健培科技有限公司 Lumbar positioning point acquisition method, equipment and medium
CN111767792A (en) * 2020-05-22 2020-10-13 上海大学 A multi-person keypoint detection network and method based on classroom scene
CN111783882A (en) * 2020-06-30 2020-10-16 北京市商汤科技开发有限公司 Key point detection method and device, electronic equipment and storage medium
WO2022001106A1 (en) * 2020-06-30 2022-01-06 北京市商汤科技开发有限公司 Key point detection method and apparatus, and electronic device, and storage medium
CN111948609A (en) * 2020-08-26 2020-11-17 东南大学 Binaural sound source positioning method based on Soft-argmax regression device
CN111948609B (en) * 2020-08-26 2022-02-18 东南大学 Binaural sound source localization method based on Soft-argmax regressor
CN112464753A (en) * 2020-11-13 2021-03-09 深圳市优必选科技股份有限公司 Method and device for detecting key points in image and terminal equipment
CN112464753B (en) * 2020-11-13 2024-05-24 深圳市优必选科技股份有限公司 Method and device for detecting key points in image and terminal equipment
CN112714253A (en) * 2020-12-28 2021-04-27 维沃移动通信有限公司 Video recording method and device, electronic equipment and readable storage medium
CN112784765A (en) * 2021-01-27 2021-05-11 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for recognizing motion
CN112861678A (en) * 2021-01-29 2021-05-28 上海依图网络科技有限公司 Image identification method and device
CN112861678B (en) * 2021-01-29 2024-04-19 上海依图网络科技有限公司 Image recognition method and device
CN112836745A (en) * 2021-02-02 2021-05-25 歌尔股份有限公司 Target detection method and device
CN112836745B (en) * 2021-02-02 2022-12-09 歌尔股份有限公司 Target detection method and device
CN113128383A (en) * 2021-04-07 2021-07-16 杭州海宴科技有限公司 Recognition method for campus student cheating behavior
CN113128436A (en) * 2021-04-27 2021-07-16 北京百度网讯科技有限公司 Method and device for detecting key points
CN113972006A (en) * 2021-10-22 2022-01-25 中冶赛迪重庆信息技术有限公司 Live animal health detection method and system based on infrared temperature measurement and image recognition
CN115166790A (en) * 2022-05-23 2022-10-11 集度科技有限公司 Road data processing method, device, equipment and storage medium
CN114998424A (en) * 2022-08-04 2022-09-02 中国第一汽车股份有限公司 Vehicle window position determining method and device and vehicle
CN115100691B (en) * 2022-08-24 2023-08-08 腾讯科技(深圳)有限公司 Method, device and equipment for acquiring key point detection model and detecting key point
CN115100691A (en) * 2022-08-24 2022-09-23 腾讯科技(深圳)有限公司 Method, device and equipment for acquiring key point detection model and detecting key points
CN117079242A (en) * 2023-09-28 2023-11-17 比亚迪股份有限公司 Deceleration strip determining method and device, storage medium, electronic equipment and vehicle
CN117079242B (en) * 2023-09-28 2024-01-26 比亚迪股份有限公司 Deceleration strip determining method and device, storage medium, electronic equipment and vehicle

Also Published As

Publication number Publication date
CN110532984B (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN110532984B (en) Key point detection method, gesture recognition method, device and system
CN107808143B (en) Computer Vision-Based Dynamic Gesture Recognition Method
CN104350509B (en) Quick attitude detector
CN110959160B (en) A gesture recognition method, device and equipment
RU2711029C2 (en) Touch classification
Nair et al. Hand gesture recognition system for physically challenged people using IoT
JP5887775B2 (en) Human-computer interaction system, hand and finger pointing-point positioning method, and finger gesture determination method
US8768006B2 (en) Hand gesture recognition
CN108960163B (en) Gesture recognition method, device, equipment and storage medium
CN102831404B (en) Gesture detecting method and system
CN103164022B (en) Multi-finger touch method and device, and portable terminal
CN111989689A (en) Method for recognizing objects in images and mobile device for performing the method
CN108229268A (en) Facial expression recognition and convolutional neural network model training method, device and electronic equipment
US10311295B2 (en) Heuristic finger detection method based on depth image
CN109003224B (en) Face-based deformation image generation method and device
KR20150108888A (en) Part and state detection for gesture recognition
WO2019174398A1 (en) Method, apparatus, and terminal for simulating mouse operation by using gesture
CN111950514A (en) System and method for in-air handwriting recognition based on a depth camera
EP3518522B1 (en) Image capturing method and device
CN109451634B (en) Gesture-based electric light control method and intelligent electric light system therefor
CN105912126B (en) Adaptive gain adjustment method for mapping gesture motion to an interface
CN108198130A (en) Image processing method, device, storage medium and electronic equipment
CN118038303A (en) Identification image processing method and device, computer equipment and storage medium
JP2016099643A (en) Image processing device, image processing method, and image processing program
US20200160090A1 (en) Method and system for determining physical characteristics of objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20241119
Address after: No. 257, 2nd Floor, Building 9, No. 2 Huizhu Road, Liangjiang New District, Yubei District, Chongqing 401100
Patentee after: Yuanli Jinzhi (Chongqing) Technology Co.,Ltd.
Country or region after: China
Address before: Room 313, Block A, No. 2 South Academy of Sciences Road, Haidian District, Beijing
Patentee before: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd.
Country or region before: China