CN111079597A - Three-dimensional information detection method and electronic equipment - Google Patents

Three-dimensional information detection method and electronic equipment

Info

Publication number
CN111079597A
CN111079597A (application number CN201911235316.8A)
Authority
CN
China
Prior art keywords
person
point data
target person
dimensional
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911235316.8A
Other languages
Chinese (zh)
Inventor
杨戬
闫文林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911235316.8A
Publication of CN111079597A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects
    • G06V 20/653 - Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30008 - Bone

Abstract

The application discloses a three-dimensional information detection method and an electronic device. The method includes: acquiring a depth image of a detection area, obtaining three-dimensional point cloud data of the detection area based on the depth image, and obtaining first skeleton point data of at least one person to be detected, where a person to be detected is a person contained in the depth image; determining first skeleton point data of a target person from the first skeleton point data of the at least one person to be detected, where the target person is selected from the at least one person to be detected; and acquiring three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person, so as to obtain three-dimensional human-body information of the target person from that point cloud data. Because the target person's point cloud is determined from the target person's first skeleton point data, interference from other objects or people in the detection area is removed and the detection result is highly accurate.

Description

Three-dimensional information detection method and electronic equipment
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a three-dimensional information detection method and an electronic device.
Background
Non-contact three-dimensional detection is typically performed with a depth camera. Taking a structured light depth camera as an example: a target person stands in a detection area, the camera projects structured light such as infrared light onto the area, a structured light depth sensor receives the reflected light to produce a depth image, and three-dimensional human-body information is then computed from the depth image. However, if other people enter the detection area while a user is being measured, the depth camera also captures their depth data, which interferes with the detection of the target person. In public places such as gymnasiums, non-contact three-dimensional detection devices are often installed near passageways to make full use of the space and to be convenient for users, so passing pedestrians can disturb the detection of the target person; the detection may then fail or yield inaccurate results, and the user experience is poor.
Summary of the Application
In view of the foregoing problems in the prior art, the present application provides a three-dimensional information detection method and an electronic device.
To solve the above technical problem, embodiments of the present application adopt the following technical solutions:
a three-dimensional information detection method comprises the following steps:
acquiring a depth image of a detection area, acquiring three-dimensional point cloud data of the detection area based on the depth image, and acquiring first skeleton point data of at least one person to be detected, wherein the person to be detected is a person contained in the depth image;
determining first skeleton point data of a target person from the first skeleton point data of at least one person to be detected, wherein the target person is selected from at least one person to be detected;
and acquiring the three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person, so as to acquire the human body three-dimensional information of the target person according to the three-dimensional point cloud data of the target person.
In some embodiments, the determining the first skeletal point data of the target person from the first skeletal point data of at least one of the persons to be tested includes:
acquiring a first image of a detection area, wherein the first image comprises a person image of the person to be detected;
and acquiring identification information of the target person according to a selection instruction for selecting the person image, and determining first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the identification information.
In some embodiments, the determining the first skeletal point data of the target person from the first skeletal point data of at least one of the persons to be tested based on the identification information includes:
acquiring second skeleton point data of the target person from the first image based on the identification information;
and determining first skeletal point data of the target person from the first skeletal point data of at least one person to be detected based on the second skeletal point data of the target person.
In some embodiments, said obtaining second skeletal point data of said target person from said first image based on said identification information comprises:
obtaining second skeleton point data of at least one person to be detected based on the first image;
and determining second skeletal point data of the target person from the second skeletal point data of at least one person to be detected based on the identification information.
In some embodiments, the determining the first skeletal point data of the target person from the first skeletal point data of at least one of the persons to be tested based on the second skeletal point data of the target person comprises:
respectively calculating the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected;
and determining the first skeleton point data of the person to be detected whose matching degree meets a preset condition as the first skeleton point data of the target person.
In some embodiments, the obtaining human three-dimensional information of the target person based on the three-dimensional point cloud data of the target person includes:
constructing a three-dimensional model of the target person based on the three-dimensional point cloud data of the target person;
and acquiring human body three-dimensional information of the target person based on the three-dimensional model.
An electronic device, comprising:
a first acquisition module, configured to acquire a depth image including depth information of a detection area, and to obtain, based on the depth image, three-dimensional point cloud data of the detection area and first skeleton point data of at least one person to be detected, wherein the person to be detected is a person contained in the depth image;
the determining module is used for determining first skeleton point data of a target person from the first skeleton point data of at least one person to be detected, wherein the target person is selected from at least one person to be detected;
and the second acquisition module is used for acquiring the three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person so as to acquire the human body three-dimensional information of the target person according to the three-dimensional point cloud data of the target person.
In some embodiments, the determining module comprises:
a first acquisition unit, configured to acquire a first image of the detection area, wherein the first image comprises a person image of the person to be detected;
and the determining unit is used for acquiring the identification information of the target person according to a selection instruction for selecting the person image, and determining the first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the identification information.
In some embodiments, the determining unit is specifically configured to:
acquiring second skeleton point data of the target person from the first image based on the identification information;
and determining first skeletal point data of the target person from the first skeletal point data of at least one person to be detected based on the second skeletal point data of the target person.
In some embodiments, the determining unit is further configured to:
respectively calculating the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected;
and determining the first skeleton point data of the person to be detected whose matching degree meets a preset condition as the first skeleton point data of the target person.
The beneficial effects of the embodiments of the application are as follows:
According to the three-dimensional information detection method, the first skeleton point data of the target person is determined from the first skeleton point data of the at least one person to be detected, and the three-dimensional point cloud data of the target person is then matched from the three-dimensional point cloud data of the detection area based on that skeleton point data. This removes the three-dimensional point cloud data of any interfering object, so the resulting point cloud of the target person contains no interference. The three-dimensional human-body information of the target person is then obtained from the target person's point cloud, and the detection result is highly accurate.
Drawings
Fig. 1 is a flowchart of a three-dimensional information detection method according to an embodiment of the present application;
fig. 2 is a flowchart of step S200 in the three-dimensional information detection method according to the embodiment of the present application;
fig. 3 is a flowchart of step S220 in the three-dimensional information detection method according to the embodiment of the present application;
fig. 4 is a flowchart of step S300 in the three-dimensional information detection method according to the embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Description of reference numerals:
10-a first acquisition module; 20-a determination module; 30-a second acquisition module.
Detailed Description
Various aspects and features of the present application are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person skilled in the art will be able to realize many other equivalent embodiments having the characteristics set forth in the claims, all of which fall within the scope of protection defined thereby.
The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail, to avoid obscuring the application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.
The embodiment of the application provides a three-dimensional information detection method, which is mainly used for detecting three-dimensional human-body information of a target person, such as neck circumference, chest circumference, waist circumference, hip circumference, upper-arm circumference, forearm circumference, thigh circumference, and calf circumference.
Fig. 1 is a flowchart of a three-dimensional information detection method according to an embodiment of the present application, and referring to fig. 1, the three-dimensional information detection method according to the embodiment of the present application specifically includes the following steps:
s100, obtaining a depth image of a detection area, obtaining three-dimensional point cloud data of the detection area based on the depth image, and obtaining first skeleton point data of at least one person to be detected, wherein the person to be detected is a person contained in the depth image.
The detection area is the area where a person to be detected stands for detection of three-dimensional human-body information. A depth image is an image in which the distance from the image acquisition device to each point in the detection area is taken as the pixel value; it directly reflects the geometry of the visible surfaces in the detection area.
In a specific implementation, the three-dimensional information detection method of the embodiment of the application can be applied to a three-dimensional information detection device, a server, a mobile terminal, or the like. When applied to a three-dimensional information detection device, the device may include at least one depth image acquisition apparatus for acquiring a depth image and a processing apparatus for processing the acquired depth image. The depth image acquisition apparatus can acquire the depth image of the detection area in various ways. For example, a structured light depth camera may be used: a near-infrared laser projects light with certain structural features onto the detection area, and an infrared camera collects the reflected light. Because the surfaces of the person to be detected lie at different depths, the structured light yields different image phase information, and the depth image is computed from the collected phase information. The depth image may also be acquired by a time-of-flight (TOF) method, in which a light pulse is emitted toward the detection area, the light returned from the object is received by a sensor, and the distance to the body to be detected is obtained from the measured flight time of the pulse, thereby producing the depth image of the detection area. Alternatively, the depth image may be acquired by a binocular stereo vision method, in which imaging devices capture multiple images of the detection area from different positions and the depth image is obtained by computing the positional disparity between corresponding points in those images. When applied to a server or a mobile terminal, acquiring the depth image of the detection area may mean receiving the depth image from the depth image acquisition apparatus over a communication link.
When the person to be detected stands in the detection area, the depth image of the detection area can be obtained through the depth image acquisition apparatus, and the depth image data can then be converted into three-dimensional point cloud data of the detection area based on the intrinsic calibration parameters of the depth image acquisition apparatus and other related parameters, an algorithm, or a self-learning model. The three-dimensional point cloud data of the detection area includes the three-dimensional point cloud data of the person to be detected standing in the detection area. The persons to be detected include the target person whose three-dimensional information is to be detected and may also include interfering persons who have entered the detection area. Likewise, when other kinds of interfering objects enter the detection area, the three-dimensional point cloud data of the detection area also includes the point cloud data of those objects.
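The following is a minimal sketch of such a depth-to-point-cloud conversion using the standard pinhole camera model; the intrinsic parameters fx, fy, cx, cy and the depth scale are assumed example values, not parameters from the application, and in practice would come from the camera's calibration.

```python
# Minimal sketch: back-projecting a depth image to a 3D point cloud with the
# pinhole model. fx, fy, cx, cy and depth_scale are assumed example values.
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5, depth_scale=0.001):
    """depth: (H, W) raw depth image; returns an (N, 3) point cloud in metres."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    z = depth.astype(np.float32) * depth_scale       # depth in metres
    x = (u - cx) * z / fx                            # left-right (X axis)
    y = (v - cy) * z / fy                            # up-down (Y axis)
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # discard pixels with no depth reading
```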
Skeleton point data describes the human skeleton through a set of key points of the human body, and the human posture can be represented by such data. The key points may include the head, neck, shoulder joints, elbow joints, wrist joints, hip joints, knee joints, and ankle joints, and skeleton point data may use layouts of, for example, 9, 14, 15, 16, 17, or 22 key points. After the depth image data is obtained, a fitting calculation can be performed with an algorithm or a self-learning model to obtain the first skeleton point data of the persons to be detected in the detection area; the first skeleton point data may also be fitted from the three-dimensional point cloud data of the detection area obtained above. The manner of obtaining the first skeleton point data of the persons to be detected is not particularly limited here.
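As an illustration only (the application does not fix a particular data layout), first skeleton point data for a 14-key-point skeleton might be represented as a mapping from joint names to three-dimensional coordinates; the joint names and coordinate values below are assumed for the example.

```python
# Illustrative (assumed) layout of skeleton point data: joint name -> (x, y, z) in metres.
from typing import Dict, Tuple

SkeletonPoints = Dict[str, Tuple[float, float, float]]

first_skeleton_point_data: SkeletonPoints = {
    "head": (0.02, 1.68, 2.10), "neck": (0.02, 1.52, 2.11),
    "left_shoulder": (-0.18, 1.45, 2.12), "right_shoulder": (0.22, 1.45, 2.12),
    "left_elbow": (-0.28, 1.18, 2.10), "right_elbow": (0.32, 1.18, 2.10),
    "left_wrist": (-0.30, 0.92, 2.08), "right_wrist": (0.34, 0.92, 2.08),
    "left_hip": (-0.12, 0.98, 2.13), "right_hip": (0.16, 0.98, 2.13),
    "left_knee": (-0.13, 0.55, 2.14), "right_knee": (0.17, 0.55, 2.14),
    "left_ankle": (-0.13, 0.10, 2.15), "right_ankle": (0.17, 0.10, 2.15),
}
```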
S200, determining first skeleton point data of a target person from the first skeleton point data of at least one person to be detected, wherein the target person is selected from at least one person to be detected.
As described above, the target person is the person who entered the detection area to have three-dimensional information detected. If an interfering person other than the target person also enters the detection area when the depth image is acquired, the persons to be detected include both the target person and the interfering person, and the first skeleton point data of the persons to be detected therefore includes the first skeleton point data of both. The first skeleton point data of the target person consequently needs to be determined from the first skeleton point data of the persons to be detected.
In a specific implementation, there are various ways to determine the first skeleton point data of the target person from the first skeleton point data of the at least one person to be detected. For example, at least one first skeleton point image may be generated from the first skeleton point data of the persons to be detected and displayed on a display device, and the first skeleton point data of the target person is determined from the user's selection of one of those images. As another example, while the depth image is acquired, a visible-light image (for example a color or black-and-white image) of the detection area may also be acquired and displayed on the display device; the target person is determined from the user's selection operation on the at least one person to be detected, and the first skeleton point data of the target person is determined accordingly. Alternatively, the first skeleton point data of the target person may be determined automatically from the first skeleton point data of the at least one person to be detected; for example, when there is a designated detection spot in the detection area, the first skeleton point data of the target person may be determined based on the position information of that spot.
S300, acquiring three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person, and acquiring human body three-dimensional information of the target person according to the three-dimensional point cloud data of the target person.
The first skeleton point data includes three-dimensional information of the target person's body key points, that is, position information of each key point in the left-right direction (X axis), in the up-down direction (Y axis), and in the front-back direction (Z axis, i.e., depth). After the first skeleton point data of the target person is acquired, the three-dimensional point cloud data of the target person, namely the points that match the position information of the target person's first skeleton point data, can be extracted from the three-dimensional point cloud data of the detection area. Once the three-dimensional point cloud data of the target person is obtained, the three-dimensional human-body information of the target person can be obtained based on an algorithm or a self-learning model.
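A minimal sketch of one plausible extraction strategy is shown below; it simply keeps the scene points lying close to any of the target's skeleton key points. The radius is an assumed value, and this is an illustration under those assumptions rather than the extraction claimed in the application.

```python
# Assumed sketch: select the target person's points from the scene point cloud by
# keeping every point within a fixed radius of at least one skeleton key point.
import numpy as np
from typing import Dict, Tuple

def extract_target_points(scene_points: np.ndarray,
                          skeleton: Dict[str, Tuple[float, float, float]],
                          radius: float = 0.35) -> np.ndarray:
    """scene_points: (N, 3) detection-area cloud; skeleton: joint -> (x, y, z); returns (M, 3)."""
    joints = np.array(list(skeleton.values()))                                       # (K, 3)
    dists = np.linalg.norm(scene_points[:, None, :] - joints[None, :, :], axis=-1)   # (N, K)
    return scene_points[dists.min(axis=1) < radius]
```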
According to the three-dimensional information detection method of the embodiment of the application, the first skeleton point data of the target person is determined from the first skeleton point data of the at least one person to be detected, and the three-dimensional point cloud data of the target person is then matched from the three-dimensional point cloud data of the detection area based on that skeleton point data. This removes the three-dimensional point cloud data of any interfering object, so the resulting point cloud of the target person contains no interference; the three-dimensional human-body information of the target person obtained from it is therefore free of that interference, and the detection result is highly accurate.
In some embodiments, as shown in fig. 2, in step S200, determining first skeleton point data of the target person from the first skeleton point data of at least one of the persons to be tested includes:
s210, acquiring a first image of a detection area, wherein the first image comprises a person image of the person to be detected. The first image is a visual image, such as a color image or a black-and-white image. In a specific implementation process, the depth image of the detection area is collected by the depth image collecting device, and meanwhile, a first image of the detection area can be collected by another image collecting device. Alternatively, when the depth image of the detection region is acquired by the binocular stereo vision method, one of a plurality of images of the detection region acquired from different positions may be used as the first image, or the first image may be generated based on the acquisition of the images.
S220, obtaining identification information of the target person according to a selection instruction for selecting the person image, and determining first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the identification information.
The identification information is information that identifies the person image of the target person, such as two-dimensional or three-dimensional point cloud data derived from the person image of the target person, two-dimensional or three-dimensional data of key points of that person image, or the selected pixel positions in the first image carried by the selection instruction, for example the positions of the pixels the user selected by touch on a touch display device.
In a specific implementation, when the method of the embodiment of the application is applied to a three-dimensional information detection device, the device may include a touch display device. The first image is displayed on the touch display device; from the user's touch selection of a person image in the displayed first image, the positions of the selected pixels are determined, the person image of the target person is thereby identified, and two-dimensional or three-dimensional point cloud data of that person image can be obtained from it.
When the method is applied to a server, the other image acquisition apparatus can send the first image to the server after capturing it. The server can send the first image to the target person's terminal, based on the target person's user information, and the image is displayed on the terminal. The terminal generates a selection instruction from the user's selection of a person image in the first image and sends it to the server, and the server then obtains the identification information of the target person from the selection instruction.
When the method of the embodiment of the application is applied to the mobile terminal, the other image acquisition device can directly send the first image to the mobile terminal after acquiring the first image, the mobile terminal generates a selection instruction based on the selection operation of the user, and the identification information of the target person is acquired based on the selection instruction.
After the identification information of the target person is determined, the first skeleton point data of the target person can be matched from the first skeleton point data of the persons to be detected based on that identification information. For example, when the identification information includes two-dimensional point cloud data of the target person's person image, the first skeleton point data whose vertical and horizontal position information in the world coordinate system matches the position information of that person image is selected as the first skeleton point data of the target person. In this way the target person can be accurately determined from the user's selection of a person image in the first image, and the three-dimensional point cloud data of the target person can be accurately obtained.
As shown in fig. 3, in some embodiments, in step S220, determining the first skeletal point data of the target person from the first skeletal point data of at least one of the persons to be tested based on the identification information may include:
s221, second skeleton point data of the target person is obtained from the first image based on the identification information.
The second skeleton point data of the target person may be two-dimensional skeleton point data, that is, only including position information of the key point in the up-down direction and position information of the key point in the left-right direction, or may be three-dimensional skeleton point data, that is, including position information of the key point in the up-down direction, position information of the key point in the left-right direction, and position information of the key point in the front-back direction. After the identification information is obtained, the second skeleton point data of the target person may be fitted based on the identification information, or the second skeleton point data of the target person may be selected based on the identification information. For example, when the identification information is two-dimensional point cloud data or three-dimensional point cloud data of a person image of the target person, second skeleton point data of the target person may be fitted based on the two-dimensional point cloud data or the three-dimensional point cloud data.
In a preferred embodiment, the step S221 may include:
and acquiring second skeleton point data of at least one person to be detected based on the first image.
That is, after the first image is obtained, the second skeleton point data of the persons to be detected contained in the first image may be obtained by fitting with an algorithm or a self-learning model; as described above, the second skeleton point data may be two-dimensional or three-dimensional skeleton point data. When the persons to be detected in the first image include an interfering person, the second skeleton point data of the at least one person to be detected includes the second skeleton point data of the interfering person as well as that of the target person.
And determining second skeletal point data of the target person from the second skeletal point data of at least one person to be detected based on the identification information.
Here, the identification information may be two-dimensional or three-dimensional point cloud data of the target person's person image, or the pixel positions of the person image selected in the first image and carried by the selection instruction. For example, the second skeleton point data of the target person may be determined from the second skeleton point data of the at least one person to be detected based on those extracted positions.
S222, determining first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the second skeleton point data of the target person.
In a specific implementation process, the step S222 may include:
and respectively calculating the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected.
The first skeleton point data and the second skeleton point data of the target person both describe the target person's body posture at the moment of three-dimensional information detection, i.e., they are skeleton point data of the same posture. Therefore, even though the first skeleton point data is obtained from the depth image and the second skeleton point data from the first image, the two still match to a high degree. Taking two-dimensional second skeleton point data as an example, at least the vertical and horizontal position information of the key points in the second skeleton point data matches, to a high degree, the vertical and horizontal position information of the corresponding key points in the first skeleton point data.
In the specific implementation process, the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected can be respectively calculated. For example, only the first matching degree between each key point in the second skeleton point data of the target person and each key point in the first skeleton point data of the person to be detected may be calculated, or after the first matching degree between each key point is obtained, normalization processing may be further performed to obtain the second matching degree between the two sets of data, that is, the first skeleton point data and the second skeleton point data.
And determining the first skeleton point data of the person to be detected whose matching degree meets a preset condition as the first skeleton point data of the target person.
The preset condition may be that the first matching degree between corresponding key points reaches a first matching-degree threshold and that the number of key points reaching that threshold reaches a first number threshold. In that case only the first matching degree between each key point of the target person's second skeleton point data and the corresponding key point of a person's first skeleton point data needs to be calculated; it is then checked whether each first matching degree reaches the first matching-degree threshold and whether the number of key points reaching the threshold reaches the first number threshold, and if it does, that person's first skeleton point data is determined to be the first skeleton point data of the target person.
The preset condition may also be that the second matching degree between the two data sets, i.e. between a person's first skeleton point data and the target person's second skeleton point data, reaches a second matching-degree threshold. In practice the second matching degree between the two sets is computed and compared with the threshold, and if it is reached, that person's first skeleton point data is determined to be the first skeleton point data of the target person. A minimal sketch of both matching degrees is given below.
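The sketch below illustrates, under assumed definitions, a per-key-point first matching degree that decays with the 2D distance between corresponding key points, the count-based preset condition, and a normalized second matching degree over the two data sets. The thresholds and the decay scale are placeholder values, not values specified by the application.

```python
# Assumed sketch of the two matching degrees described above.
import numpy as np
from typing import Dict, Tuple

Skeleton2D = Dict[str, Tuple[float, float]]   # joint name -> (x, y)

def match_skeletons(second: Skeleton2D, first: Skeleton2D,
                    first_degree_thresh: float = 0.8,
                    count_thresh: int = 10,
                    scale: float = 0.1) -> Tuple[bool, float]:
    """Return (preset condition met, second matching degree in [0, 1])."""
    common = [name for name in second if name in first]
    degrees = []
    for name in common:
        dx = second[name][0] - first[name][0]
        dy = second[name][1] - first[name][1]
        degrees.append(np.exp(-np.hypot(dx, dy) / scale))  # 1.0 when coincident, decays with distance
    degrees = np.asarray(degrees)
    meets_condition = int((degrees >= first_degree_thresh).sum()) >= count_thresh
    second_matching_degree = float(degrees.mean()) if degrees.size else 0.0
    return meets_condition, second_matching_degree
```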
In some embodiments, as shown in fig. 4, the step 300 of obtaining human three-dimensional information of the target person based on the three-dimensional point cloud data of the target person may include:
s310, constructing a three-dimensional model of the target person based on the three-dimensional point cloud data of the target person.
In a specific implementation, after the three-dimensional point cloud data of the target person is determined, it can be preprocessed, for example by filtering and denoising, data simplification, and data interpolation, and a three-dimensional model of the target person is then constructed from the preprocessed point cloud using an existing three-dimensional modeling method.
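As one possible realization of the preprocessing step (an assumption, since the application does not name a library), the Open3D library can perform statistical outlier removal for denoising and voxel-grid down-sampling for data simplification; the parameter values below are examples.

```python
# Assumed preprocessing sketch using Open3D: statistical outlier removal (denoising)
# followed by voxel down-sampling (data simplification). Parameter values are examples.
import numpy as np
import open3d as o3d

def preprocess_point_cloud(points: np.ndarray, voxel_size: float = 0.005) -> o3d.geometry.PointCloud:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # drop noisy points
    return pcd.voxel_down_sample(voxel_size=voxel_size)                      # simplify the cloud
```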
S320, acquiring human body three-dimensional information of the target person based on the three-dimensional model.
After the three-dimensional model is built, the three-dimensional human-body information of the target person can be obtained from the longest closed curve on the cross-section corresponding to each measured body part of the model. Taking chest circumference as an example, the longest curve on the X-Z cross-sections within the chest region of the three-dimensional model is computed, and its length is the chest circumference. Waist, hip, upper-arm, and forearm circumferences are measured in the same way; the main difference is the region of the model from which the cross-section is taken.
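For illustration, one simple approximation (not the measurement procedure claimed in the application) slices the target's point cloud at the height of the body part and takes the perimeter of the convex hull of the slice in the X-Z plane; for roughly convex sections such as the chest this gives a usable estimate, though it overestimates concave contours. The slice thickness is an assumed value.

```python
# Assumed sketch: approximate a circumference as the convex-hull perimeter of a thin
# horizontal slice of the target's point cloud taken at the given height.
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at_height(points: np.ndarray, y: float, band: float = 0.01) -> float:
    """points: (N, 3) target point cloud; y: slice height in metres; returns circumference in metres."""
    slab = points[np.abs(points[:, 1] - y) < band]    # thin slice around the measurement height
    xz = slab[:, [0, 2]]                              # project onto the X-Z plane
    hull = ConvexHull(xz)
    ring = xz[hull.vertices]                          # hull vertices ordered around the contour
    ring = np.vstack([ring, ring[:1]])                # close the loop
    return float(np.linalg.norm(np.diff(ring, axis=0), axis=1).sum())
```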
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application, and referring to fig. 5, the electronic device according to the embodiment of the present application includes:
the first obtaining module 10 is configured to obtain a depth image including depth information of a detection area, and obtain three-dimensional point cloud data of the detection area and first skeleton point data of at least one person to be detected based on the depth image, where the person to be detected is a person included in the depth image;
a determining module 20, configured to determine first skeleton point data of a target person from first skeleton point data of at least one person to be tested, where the target person is selected from the at least one person to be tested;
the second obtaining module 30 is configured to obtain three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person, so as to obtain human three-dimensional information of the target person according to the three-dimensional point cloud data of the target person.
In some embodiments, the determining module 20 comprises:
a first acquisition unit, configured to acquire a first image of the detection area, wherein the first image comprises a person image of the person to be detected;
and the determining unit is used for acquiring the identification information of the target person according to a selection instruction for selecting the person image, and determining the first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the identification information.
In some embodiments, the determining unit is specifically configured to:
acquiring second skeleton point data of the target person from the first image based on the identification information;
and determining first skeletal point data of the target person from the first skeletal point data of at least one person to be detected based on the second skeletal point data of the target person.
In some embodiments, the determining unit is further configured to:
obtaining second skeleton point data of at least one person to be detected based on the first image;
and determining second skeletal point data of the target person from the second skeletal point data of at least one person to be detected based on the identification information.
In some embodiments, the determining unit is further configured to:
respectively calculating the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected;
and determining the first skeleton point data of the person to be detected whose matching degree meets a preset condition as the first skeleton point data of the target person.
In some embodiments, the second obtaining module 30 includes:
a construction unit for constructing a three-dimensional model of the target person based on the three-dimensional point cloud data of the target person;
and the second acquisition unit is used for acquiring the human body three-dimensional information of the target person based on the three-dimensional model.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (10)

1. A three-dimensional information detection method comprises the following steps:
acquiring a depth image of a detection area, acquiring three-dimensional point cloud data of the detection area based on the depth image, and acquiring first skeleton point data of at least one person to be detected, wherein the person to be detected is a person contained in the depth image;
determining first skeleton point data of a target person from the first skeleton point data of at least one person to be detected, wherein the target person is selected from at least one person to be detected;
and acquiring the three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person, so as to acquire the human body three-dimensional information of the target person according to the three-dimensional point cloud data of the target person.
2. The three-dimensional information detection method according to claim 1, wherein the determining of the first skeleton point data of the target person from the first skeleton point data of the at least one person to be detected includes:
acquiring a first image of a detection area, wherein the first image comprises a person image of the person to be detected;
and acquiring identification information of the target person according to a selection instruction for selecting the person image, and determining first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the identification information.
3. The three-dimensional information detection method according to claim 2, wherein said determining first skeletal point data of the target person from first skeletal point data of at least one of the persons to be detected based on the identification information comprises:
acquiring second skeleton point data of the target person from the first image based on the identification information;
and determining first skeletal point data of the target person from the first skeletal point data of at least one person to be detected based on the second skeletal point data of the target person.
4. The three-dimensional information detection method according to claim 3, wherein said obtaining second skeletal point data of the target person from the first image based on the identification information includes:
obtaining second skeleton point data of at least one person to be detected based on the first image;
and determining second skeletal point data of the target person from the second skeletal point data of at least one person to be detected based on the identification information.
5. The three-dimensional information detection method according to claim 3, wherein the determining of the first skeletal point data of the target person from the first skeletal point data of at least one of the persons to be detected based on the second skeletal point data of the target person includes:
respectively calculating the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected;
and determining the first skeleton point data of the person to be detected whose matching degree meets a preset condition as the first skeleton point data of the target person.
6. The three-dimensional information detection method according to claim 1, wherein the acquiring human three-dimensional information of the target person based on the three-dimensional point cloud data of the target person includes:
constructing a three-dimensional model of the target person based on the three-dimensional point cloud data of the target person;
and acquiring human body three-dimensional information of the target person based on the three-dimensional model.
7. An electronic device, comprising:
a first acquisition module, configured to acquire a depth image including depth information of a detection area, and to obtain, based on the depth image, three-dimensional point cloud data of the detection area and first skeleton point data of at least one person to be detected, wherein the person to be detected is a person contained in the depth image;
the determining module is used for determining first skeleton point data of a target person from the first skeleton point data of at least one person to be detected, wherein the target person is selected from at least one person to be detected;
and the second acquisition module is used for acquiring the three-dimensional point cloud data of the target person from the three-dimensional point cloud data of the detection area based on the first skeleton point data of the target person so as to acquire the human body three-dimensional information of the target person according to the three-dimensional point cloud data of the target person.
8. The electronic device of claim 7, wherein the determining module comprises:
a first acquisition unit, configured to acquire a first image of the detection area, wherein the first image comprises a person image of the person to be detected;
and the determining unit is used for acquiring the identification information of the target person according to a selection instruction for selecting the person image, and determining the first skeleton point data of the target person from the first skeleton point data of at least one person to be detected based on the identification information.
9. The electronic device of claim 8, wherein the determining unit is specifically configured to:
acquiring second skeleton point data of the target person from the first image based on the identification information;
and determining first skeletal point data of the target person from the first skeletal point data of at least one person to be detected based on the second skeletal point data of the target person.
10. The electronic device of claim 9, wherein the determination unit is further to:
respectively calculating the matching degree of the second skeleton point data of the target person and the first skeleton point data of the person to be detected;
and determining the first skeleton point data of the person to be detected whose matching degree meets a preset condition as the first skeleton point data of the target person.
CN201911235316.8A 2019-12-05 2019-12-05 Three-dimensional information detection method and electronic equipment Pending CN111079597A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911235316.8A CN111079597A (en) 2019-12-05 2019-12-05 Three-dimensional information detection method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911235316.8A CN111079597A (en) 2019-12-05 2019-12-05 Three-dimensional information detection method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111079597A 2020-04-28

Family

ID=70313140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911235316.8A Pending CN111079597A (en) 2019-12-05 2019-12-05 Three-dimensional information detection method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111079597A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344917A (en) * 2021-07-28 2021-09-03 浙江华睿科技股份有限公司 Detection method, detection device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
CN106971130A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method using face as reference
CN106971131A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method based on center
CN110427917A (en) * 2019-08-14 2019-11-08 北京百度网讯科技有限公司 Method and apparatus for detecting key point
WO2019230205A1 (en) * 2018-05-31 2019-12-05 株式会社日立製作所 Skeleton detection device and skeleton detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971130A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method using face as reference
CN106971131A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method based on center
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
WO2019230205A1 (en) * 2018-05-31 2019-12-05 株式会社日立製作所 Skeleton detection device and skeleton detection method
CN110427917A (en) * 2019-08-14 2019-11-08 北京百度网讯科技有限公司 Method and apparatus for detecting key point

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344917A (en) * 2021-07-28 2021-09-03 浙江华睿科技股份有限公司 Detection method, detection device, electronic equipment and storage medium
CN113344917B (en) * 2021-07-28 2021-11-23 浙江华睿科技股份有限公司 Detection method, detection device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107273846B (en) Human body shape parameter determination method and device
CN102657532B (en) Height measuring method and device based on body posture identification
US9047507B2 (en) Upper-body skeleton extraction from depth maps
JP6231302B2 (en) Inspection assistance device
EP3144900B1 (en) Method and terminal for acquiring sign data of target object
CN106625673A (en) Narrow space assembly system and assembly method
CN112185514A (en) Rehabilitation training effect evaluation system based on action recognition
JP2000251078A (en) Method and device for estimating three-dimensional posture of person, and method and device for estimating position of elbow of person
WO2019230205A1 (en) Skeleton detection device and skeleton detection method
Itami et al. A simple calibration procedure for a 2D LiDAR with respect to a camera
CN115035546B (en) Three-dimensional human body posture detection method and device and electronic equipment
CN114894337B (en) Temperature measurement method and device for outdoor face recognition
JP2021067469A (en) Distance estimation device and method
Bragança et al. An overview of the current three-dimensional body scanners for anthropometric data collection
CN113544738A (en) Portable acquisition equipment for human body measurement data and method for collecting human body measurement data
CN111079597A (en) Three-dimensional information detection method and electronic equipment
CN113749646A (en) Monocular vision-based human body height measuring method and device and electronic equipment
CN109740458B (en) Method and system for measuring physical characteristics based on video processing
CN115568823B (en) Human body balance capability assessment method, system and device
CN115937969A (en) Method, device, equipment and medium for determining target person in sit-up examination
CN115841497A (en) Boundary detection method and escalator area intrusion detection method and system
CN113836991B (en) Action recognition system, action recognition method, and storage medium
CN112075926B (en) Human body movement system and viscera system measurement method and device based on infrared image
JP2004254960A (en) Device and method for detecting direction of visual line
JP2006331009A (en) Image processor and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination