CN115590475A - Double-camera human body scanning system - Google Patents


Info

Publication number
CN115590475A
CN115590475A (Application CN202211278162.2A)
Authority
CN
China
Prior art keywords
human body
point cloud
depth
camera
depth camera
Prior art date
Legal status
Pending
Application number
CN202211278162.2A
Other languages
Chinese (zh)
Inventor
陈树青
卓敏达
林逸
袁壮
江淼
Current Assignee
Shenzhen Xianku Intelligent Co ltd
Original Assignee
Shenzhen Xianku Intelligent Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xianku Intelligent Co., Ltd.
Priority to CN202211278162.2A
Publication of CN115590475A
Legal status: Pending


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0062 Arrangements for scanning
    • A61B 5/0073 Measuring using light, by tomography, i.e. reconstruction of 3D images from 2D projections
    • A61B 5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/1071 Measuring physical dimensions: measuring angles, e.g. using goniometers
    • A61B 5/1072 Measuring physical dimensions: measuring distances on the body, e.g. length, height or thickness
    • A61B 5/1077 Measuring of profiles
    • A61B 5/1079 Measuring physical dimensions using optical or photographic means
    • A61B 5/26 Bioelectric electrodes maintaining contact between the body and the electrodes by the action of the subjects
    • A61B 5/4872 Determining body composition: body fat
    • A61B 5/6887 Sensors mounted on external non-worn devices, e.g. non-medical devices
    • A61B 5/7271 Specific aspects of physiological measurement analysis
    • A61B 5/7405 Notification to user using sound
    • A61B 5/742 Notification to user using visual displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • G06T 2207/30196 Human being; person


Abstract

The invention provides a dual-camera human body scanning system comprising an interactive panel and a turntable. The interactive panel comprises a fixed frame, a first depth camera, a second depth camera and a host; the turntable comprises a measuring platform carrying electrode plates and a body-composition measuring assembly. The host performs the following steps: obtaining depth images of a user standing on the turntable through the first and second depth cameras, and realizing contactless interaction with the user from those images; reconstructing a true three-dimensional model of the user's body from the depth images, and deriving the dimensions of the user's key body parts from the model; acquiring the measured person's body-composition data through the electrode plates and the body-composition measuring assembly; and generating an analysis report from the key-part dimensions and the body-composition data.

Description

Double-camera human body scanning system
Technical Field
The invention relates to the field of human body detection, in particular to a double-camera human body scanning system.
Background
With the rapid development of the economy and of science and technology, people pay increasing attention to their appearance and dress. Everyone has habits and preferences regarding these extrinsic factors, but such preferences are subjective, and users do not necessarily know what truly suits them. This motivates data services tailored to the user, such as body-shape and posture-correction plans, fitness plans, outfit recommendations, garment customization and size recommendation.
These services require accurate user parameters, including the girth of each body part, weight, body fat and so on. Commonly used human-body scanning and body-composition measurement systems are contactless automatic systems that combine bioelectrical impedance analysis (BIA), optical measurement and point-cloud processing to reconstruct the three-dimensional body surface and compute body composition. For body scanning, the prior art provides a scheme in which two depth cameras alternately acquire body data: the first and second depth cameras are fixed vertically on the device, a turntable is placed directly in front of it, and during acquisition the measured person rotates 360° with the turntable.
However, current dual-camera body scanning systems suffer from cumbersome interaction, a limited amount of acquired information and demanding site requirements, which restrict their range of application.
Disclosure of Invention
In view of the above, the present invention discloses a dual-camera human body scanning system that aims to alleviate these problems.
The invention adopts the following scheme:
a dual-camera body scanning system, comprising:
an interactive panel comprising a fixed frame and, mounted on it, a first depth camera, a second depth camera, a display screen and a host, the two depth cameras being mounted at different heights of the fixed frame;
a turntable comprising a measuring platform on which the human body stands, the platform carrying electrode plates on its surface and a body-composition measuring assembly inside; the electrode plates are electrically connected to the measuring assembly, which is in turn electrically connected to the host; wherein:
the host comprises a memory and a processor configured to carry out the following steps by executing a computer program stored in the memory:
obtaining depth images of a user standing on the turntable through the first and second depth cameras, and realizing contactless interaction with the user from the depth images;
reconstructing a true three-dimensional model of the user's body from the depth images, and deriving the dimensions of the user's key body parts from the model;
acquiring the measured person's body-composition data through the electrode plates and the body-composition measuring assembly;
and generating an analysis report from the key-part dimensions and the body-composition data.
Preferably, the first depth camera is 1700 mm above the horizontal ground with its lens tilted downwards, and the second depth camera is 500 mm above the horizontal ground with its lens tilted upwards; the tilt angles θ1 and θ2 of the first and second depth cameras (rendered only as equation images in the publication; the variable definitions below imply) are:

θ1 = arctan((h1 - H1) / d1)

θ2 = arctan((H2 - h2) / d1)

wherein d1 is the distance between the measured person and the interactive panel;
h1 is the height of the first depth camera above the horizontal ground;
h2 is the height of the second depth camera above the horizontal ground;
H1 is the height above the ground of the lowest point at which the first depth camera's view frustum intersects the measured person;
H2 is the height above the ground of the lowest point at which the second depth camera's view frustum intersects the measured person.
Preferably, obtaining depth images of a user standing on the turntable through the first and second depth cameras, and realizing contactless interaction with the user from the depth images, specifically comprises:
acquiring depth images of the user standing on the turntable from the first and second depth cameras, and reconstructing the user's body point cloud using a preset extrinsic-parameter matrix;
projecting the body point cloud onto the plane of the interactive panel, and obtaining the armless body-contour height from the projected point cloud;
forming a torso contour map from the projected point cloud and the body-contour height;
subtracting the torso contour map from the body point cloud to obtain a two-arm image, and performing connected-component segmentation and morphological analysis on that image to obtain the opening angles of the left and right arms;
and forming preset interactive instructions from the torso contour map and the arm opening angles.
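The projection step above can be sketched as follows. This is a minimal numpy illustration, not the patent's implementation: it assumes the panel plane is the X-Z plane (depth along Y, height along Z, units in mm), and the 10 mm pixel pitch and all names are assumptions.

```python
import numpy as np

def project_to_panel(points, pixel_mm=10.0):
    """Project a body point cloud (N x 3, mm; X right, Y depth, Z up)
    onto the interactive-panel plane and rasterise a binary silhouette.

    Returns the silhouette image (row 0 = highest point) and the
    body-contour height in mm (top of head to lowest point).
    """
    xs = np.round(points[:, 0] / pixel_mm).astype(int)
    zs = np.round(points[:, 2] / pixel_mm).astype(int)
    xs -= xs.min()
    zs -= zs.min()
    img = np.zeros((zs.max() + 1, xs.max() + 1), dtype=bool)
    img[zs.max() - zs, xs] = True  # flip so row index grows downwards
    height_mm = points[:, 2].max() - points[:, 2].min()
    return img, height_mm
```

The silhouette feeds the torso-contour extraction described next; the height is used to split the image into an upper and a lower half.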
Preferably, forming the torso contour map from the projected body point cloud and the body-contour height specifically comprises:
starting from the person's lower legs, setting all local pixels as seeds to be examined, with half of the body-contour height as the boundary between the lower and upper halves:
in the lower half, searching pixels in 8 regions around each seed taken as the centre point: left, right, up, first up-jump, second up-jump, down, first down-jump and second down-jump. The first up-jump lies directly above the centre point, two pixels away; the second up-jump lies directly above it, four pixels away; the first down-jump lies directly below the centre point, two pixels away; the second down-jump lies directly below it, four pixels away;
in the upper half, searching pixels in 5 regions around each seed taken as the centre point: left, right, up, first up-jump and second up-jump;
after all seeds have been traversed, the image formed by all seeds gives the torso contour map.
Preferably, if a searched pixel is not yet a seed and the difference between its grey value and that of the seed is below a set threshold, the pixel is added as a seed to be examined.
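The seed-growing traversal above can be sketched like this. It is a compact interpretation under stated assumptions: the grey-value threshold, the queue-based traversal order, and the convention that larger row indices are lower in the image are assumptions, not taken from the patent text.

```python
from collections import deque
import numpy as np

def grow_torso(gray, start, half_height, thresh=10):
    """Region-grow a torso mask on a projected silhouette image.

    Below `half_height` (row index) 8 offsets are searched, above it
    only 5 (no downward moves), matching the patent's lower/upper split.
    A pixel joins the region when its grey value differs from the
    current seed's by less than `thresh`.
    """
    h, w = gray.shape
    lower_offsets = [(0, -1), (0, 1),            # left, right
                     (-1, 0), (-2, 0), (-4, 0),  # up, up-jumps
                     (1, 0), (2, 0), (4, 0)]     # down, down-jumps
    upper_offsets = [(0, -1), (0, 1), (-1, 0), (-2, 0), (-4, 0)]
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([start])
    mask[start] = True
    while queue:
        r, c = queue.popleft()
        offsets = lower_offsets if r >= half_height else upper_offsets
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(int(gray[nr, nc]) - int(gray[r, c])) < thresh:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```

The asymmetric neighbourhoods let the region climb the torso while the missing downward moves in the upper half keep it from leaking back out along the arms.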
Preferably, forming preset interactive instructions from the torso contour map and the arm opening angles specifically comprises:
calculating the widest two-arm span from the torso contour map and the arm opening angles;
creating a cylinder whose diameter is that widest span and whose axis is the axis of the torso contour map;
and masking out depth information outside the cylinder, generating an interactive instruction when a connected component of the depth data inside the cylinder exceeds a certain area.
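A sketch of that trigger follows. It is an illustration only: a simple point-count threshold stands in for the connected-component area test, and the function and parameter names are assumptions.

```python
import numpy as np

def gesture_trigger(points, axis_xy, diameter_mm, min_points=200):
    """Mask depth points to a vertical cylinder around the body axis
    and fire an interaction event when enough points appear inside it.

    points: N x 3 array (x, y, z) in mm; axis_xy: (x, y) of the
    vertical cylinder axis. Returns (fired, points_inside).
    """
    # Radial distance of each point from the vertical cylinder axis.
    d = np.linalg.norm(points[:, :2] - np.asarray(axis_xy), axis=1)
    inside = points[d <= diameter_mm / 2.0]
    return len(inside) >= min_points, inside
```

In the patent's scheme the cylinder is sized from the measured arm span, so a raised hand entering the cylinder volume is what produces the depth-data component that fires the instruction.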
Preferably, obtaining the dimensions of the user's key body parts from the depth images specifically comprises:
controlling the turntable to rotate while the first and second depth cameras alternately acquire depth images of all parts of the measured person on the turntable;
processing the acquired depth images into a three-dimensional model of the user's body;
and extracting the point-cloud data of that model and calculating the dimensions of the user's key body parts.
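One common way to realise the "horizontal circumference measurement" named later in this document is to slice the reconstructed cloud at a given height and measure the slice's perimeter. The sketch below is such an assumption-laden illustration (band width, angular-sort perimeter, and names are all choices made here, not the patent's):

```python
import numpy as np

def girth_at_height(points, z_mm, band_mm=5.0):
    """Estimate a horizontal body girth (e.g. waist) from a point cloud
    by taking a thin band at height z and summing the edge lengths of
    the slice points ordered by angle around their centroid.

    Works for near-convex slices; concave sections would need a
    proper contour trace instead.
    """
    band = points[np.abs(points[:, 2] - z_mm) <= band_mm][:, :2]
    if len(band) < 3:
        return 0.0
    c = band.mean(axis=0)
    order = np.argsort(np.arctan2(band[:, 1] - c[1], band[:, 0] - c[0]))
    ring = band[order]
    closed = np.vstack([ring, ring[:1]])          # close the loop
    return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())
```

On a dense scan, calling this at the waist, hip and chest heights found by the key-point positioning step yields the basic girth data.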
Preferably, processing the acquired depth images into the three-dimensional model of the user's body specifically comprises:
from each acquired depth image, establishing a cylinder whose axis is the central axis of the torso in the image and whose diameter is the two-hand span, masking out depth information outside the cylinder, and computing the current-frame point cloud, which comprises an upper-body point cloud and a lower-body point cloud;
performing connected-component segmentation on the current-frame point cloud, keeping the body part, and registering it against the previous-frame point cloud to obtain a transformation matrix;
transforming the upper- and lower-body point clouds by their transformation matrices to obtain preliminary upper- and lower-body point-cloud sets;
down-sampling the two sets separately, finding nearest-neighbour point pairs between adjacent frames, and updating all transformation matrices by a global least-squares solve;
extracting, using the updated matrices, point clouds of the body at several camera angles and registering them by global ICP to obtain the final upper- and lower-body point clouds; during matching, the weight of the torso is increased, the weight of the arms is decreased, and points farther from the camera receive lower weight;
computing the rotation-axis direction from the fact that the transformation matrices of the upper- and lower-body point clouds are distributed around the rotation axis, and aligning both clouds' rotation axes to the Z axis;
computing, from the axis-to-camera distance and the camera extrinsics, the offset of the upper- and lower-body point clouds along the rotation axis, then shifting both clouds onto the Z axis;
solving by least squares, from the nearest-neighbour pairs between the upper- and lower-body point clouds, the x offset, y offset and rz offset of the upper body relative to the lower body, then down-sampling the aligned clouds to the same level and fusing the overlapping points into a body point-cloud set;
cutting the feet of the human body away from the turntable using the extracted turntable plane equation, and filling in the foot point cloud;
merging the body point-cloud set and the foot point cloud into the body point cloud, clustering it, and keeping the largest cluster;
and performing Poisson triangulation on that cluster, followed by an optimisation pass, to generate the three-dimensional body model.
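The frame-to-frame registration the pipeline above rests on is ICP with an SVD-based (Kabsch) rigid solve. The following is a minimal numpy sketch under stated assumptions: brute-force nearest neighbours, uniform weights, no outlier rejection. A production system would add the torso/arm/distance weighting described above and a k-d tree for the matching.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the best rigid transform (Kabsch)."""
    # Nearest-neighbour correspondences (brute force; fine for a sketch).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Kabsch: optimal rotation and translation between matched sets.
    cs, cm = src.mean(0), matched.mean(0)
    H = (src - cs).T @ (matched - cm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cm - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Iterate icp_step, repeatedly re-matching and re-aligning src."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

With turntable rotation the inter-frame motion is small, which is exactly the regime where this local method converges reliably.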
Preferably, the turntable plane equation is obtained by:
obtaining a depth map captured by the second depth camera and extracting several planar regions from it by random sample consensus (RANSAC);
and filtering those regions by area, contour and position until a region satisfying the turntable-plane conditions remains, then computing its general plane equation.
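The plane-extraction step can be sketched as a plain RANSAC fit. The inlier tolerance and iteration count below are assumptions, and the patent's subsequent area/contour/position filtering of candidate planes is omitted:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=5.0, rng=None):
    """Fit a dominant plane ax + by + cz + d = 0 to an N x 3 point
    cloud by RANSAC; returns ((a, b, c, d), inlier_mask)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((points - p0) @ n)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            best_plane = (*n, float(-n @ p0))
    return best_plane, best_inliers
```

The returned plane equation is then usable directly for the foot-cutting step above, by dropping points within the tolerance band of the turntable surface.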
Preferably, generating the analysis report from the key-part dimensions and the body-composition data comprises:
generating the report with the user's three-dimensional body model, sex, weight, age and impedance information as inputs; the report covers:
key-point positioning: locating key body landmarks from body features, to assist the subsequent body measurement and posture analysis;
wearing-state analysis: the wearing state is split into an upper-body state (arms, front and rear chest area, front and rear waist area) and a lower-body state (hip, thigh and calf areas), and the wearing state and garment type of each area are confirmed from its surface-curvature characteristics;
body-data measurement: the basic measurements comprise girth, length and vertical-length data, defined per body part as horizontal girth measurement, axial vertical girth measurement, skin-surface length measurement or vertical spatial measurement;
body-shape and posture analysis: targeted analysis of the body's shape and posture based on body-surface curvature;
measurement optimisation: correcting or supplementing individual measurements based on the analysed body shape and posture;
body-composition analysis: computing and refining the body composition of the three-dimensional model based on the posture-data and wearing-state analyses.
In summary, the dual-camera human body scanning system provided by this embodiment has the following advantages:
1. The two depth cameras are mounted at an incline, so the complete posture of the measured person can be captured without an excessive distance between the interactive panel and the turntable, reducing the site requirements.
2. The two depth cameras enable a contactless interaction mode, so the user can interact while standing on the turntable instead of repeatedly returning to the interactive panel, improving the user experience.
3. Multi-dimensional posture data of the user is fully collected, facilitating subsequent analysis and computation.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope; a person skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a dual-camera human body scanning system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating the working principle of a dual-camera human body scanning system according to an embodiment of the present invention.
Fig. 3 is a schematic view of an interactive panel and a base provided in an embodiment of the present invention.
Fig. 4 is an internal structural diagram of an interactive panel provided in the embodiment of the present invention.
Fig. 5 is a schematic sectional view taken along A-A of fig. 4.
Fig. 6 and 7 are graphs showing a portion of the measurement results provided by an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the present invention, and the detailed description is not intended to limit the claimed scope but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of protection of the present invention.
Referring to fig. 1 and 2, an embodiment of the present invention provides a dual-camera human body scanning system, which includes:
the interactive panel 20 comprises a fixed frame 21, a first depth camera 22, a second depth camera 23 and a host 25, wherein the first depth camera 22, the second depth camera 23 and the host 25 are arranged on the fixed frame 21, the first depth camera 22 and the second depth camera 23 are arranged on different heights of the fixed frame 21, and the host 25 is electrically connected with the first depth camera 22 and the second depth camera 23.
Specifically, in this embodiment, as shown in fig. 2, the interactive panel further includes a base 10, and the fixing frame 21 is substantially a thin rectangular parallelepiped. The base 10 includes a bottom plate 11 and two parallel connecting arms 12 on its two sides; across its width, the two sides of the fixing frame 21 fit just inside the two connecting arms 12, so the frame can be partially embedded into the base 10 and then fixed to the connecting arms 12 by screws or other means, completing the connection between the interactive panel 20 and the base 10.
In this embodiment, the system further includes a display screen 24. The fixing frame 21 has a front surface and an accommodating cavity; the first depth camera 22, the second depth camera 23 and the display screen 24 are arranged on the front surface, and the host 25 is housed in the cavity. In addition, to facilitate external electrical connections, the necessary interfaces, such as a USB interface, are formed on the fixing frame 21, which need not be described further here.
In this embodiment, a glass mirror surface is formed on the front surface of the fixing frame 21. A relief groove is cut into the glass mirror surface, the display screen 24 is mounted in that groove, and the glass and the display screen 24 are bonded to the fixing frame 21 by a full-lamination process. The display screen 24 serves for user interaction and data presentation, while the glass provides a mirror effect so that, with the screen off, the panel can be used as an ordinary dressing mirror. In addition, to meet the cameras' infrared-transmittance requirement, a masked coating process is used: the mirror's infrared transmittance at the depth-camera lenses must exceed 92% without affecting the mirror effect of the other areas.
In the present embodiment, preferably, a space 27 for accommodating the turntable 30 is formed at the bottom of the fixing frame 21, as shown in fig. 3.
The turntable 30 can be stowed in the space 27 when not in use and moved out of it when in use, further saving space.
In the present embodiment, as shown in fig. 4 and 5, the first depth camera 22 and the second depth camera 23 are obliquely disposed at different heights of the fixed frame 21. In particular, in one embodiment, the first depth camera 22 is spaced approximately 1700mm from the horizontal ground with its lens tilted downward; the distance between the second depth camera 23 and the horizontal ground is approximately 500mm, and its lens is tilted upward. Of course, it should be noted that in other embodiments of the present invention, the heights and the tilt angles of the two depth cameras may be adjusted according to actual needs, and the present invention is not limited in particular.
In the present embodiment, since the lenses of the two depth cameras are obliquely arranged, even when the turntable 30 is close to the interactive panel 20, the two depth cameras can completely capture the whole view of the measured person standing on the turntable 30, so that the requirement for the field area can be reduced.
In the embodiment, specifically, the tilt angle θ1 of the first depth camera 22 and the tilt angle θ2 of the second depth camera 23 are:

θ1 = arctan( (h1 − H1) / d1 )

θ2 = arctan( (H2 − h2) / d1 )
wherein d1 is the distance between the measured person and the interactive panel; h1 is the distance between the first depth camera and the horizontal ground; h2 is the distance between the second depth camera and the horizontal ground; H1 is the height above the horizontal ground of the lowest point at which the view frustum of the first depth camera intersects the measured person; and H2 is the height above the horizontal ground of the lowest point at which the view frustum of the second depth camera intersects the measured person.
Tilting the cameras in this way ensures that a complete depth image of the user's body can be acquired accurately while minimizing the floor area required for the installation site.
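The relation between mounting height, subject distance and tilt angle described above can be sketched numerically. The fragment below is illustrative only: the 850 mm intersection height and the 750 mm subject distance are assumed values, not figures taken from this embodiment.

```python
import math

def camera_tilt_deg(h_cam_mm: float, h_target_mm: float, d_mm: float) -> float:
    # Angle (degrees) between the horizontal and the ray from a camera at
    # height h_cam_mm to a point at height h_target_mm on a subject d_mm away.
    # Positive result = the ray points downward, negative = upward.
    return math.degrees(math.atan2(h_cam_mm - h_target_mm, d_mm))

# Camera heights from the embodiment (1700 mm and 500 mm); the 850 mm
# intersection height and 750 mm distance are illustrative assumptions.
theta_upper = camera_tilt_deg(1700, 850, 750)  # upper camera: tilted down
theta_lower = camera_tilt_deg(500, 850, 750)   # lower camera: tilted up
```

The signs confirm the mounting described in the embodiment: the upper camera must look down and the lower camera up for both frustums to cover the subject at close range.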
It should be noted that, in order to determine the relative spatial positions of the cameras so that they can work together, the depth cameras need to be calibrated. Camera calibration takes two forms: a standard scheme and a simple scheme.
The standard scheme has the characteristics of portability, high definition, convenient use and the like. Specifically, to enable factory personnel and field maintenance personnel to quickly carry out high-precision calibration, a specially-made infrared light box is used as the calibration board. The light box is light in weight and equipped with rollers, so it is easy to move and the calibration of multiple devices can be completed quickly.
In the design of the calibration pattern, coded ring targets are used as calibration points, so that calibration does not require a common visual area between the cameras. Meanwhile, the calibration patterns face the inner side of the light box: when the infrared light source inside the light box is turned on, the inward-facing patterns block part of the light, so the calibration patterns become clearly visible. When the infrared light source is turned off, the face of the light box appears uniformly white and can be captured stably and accurately by the depth camera. If the calibration pattern were placed on the outside instead, its color would be too dark and would absorb light, causing the depth image to be distorted exactly at the key (target) positions.
In the calibration process, the operation is semi-automatic, which greatly facilitates factory personnel and field maintenance personnel and reduces the training cost of the calibration tool software. The specific process is as follows:
1) After the software calibration process is started, the camera automatically enters an IR image shooting mode, detects whether the light source of the light box is turned on, whether its brightness is proper and whether it is placed at a proper position, and searches for stable and sufficient calibration points. If these conditions are not met, the program displays the image and prompts the operator.
2) Once the brightness meets the requirement and the target board is in a proper position, the camera automatically stores the IR image and prompts the operator to turn off the IR light source. When the light source is detected to be off, the program automatically switches to a depth image acquisition mode and acquires a depth image;
3) The depth image and the IR image acquired in the above process are comprehensively analyzed, the camera intrinsic parameters are compared, and the camera extrinsic parameters are calculated and recorded locally;
4) The pitch angles of the upper and lower cameras are checked against the calculated extrinsic parameters, thereby verifying that the assembly is correct.
The simple scheme is as follows:
in actual use, the relative position between the cameras may drift slightly due to structural factors, vibration, transportation and the like, so a simplified calibration scheme that is more convenient for consumers is needed. The simple calibration can be completed with the human body alone, without any external props, which reduces the calibration cost incurred when the equipment is relocated:
1) The simple calibration flow is started from the calibration tool or from the interactive program.
2) The person stands about 80cm from the equipment with the hands hanging naturally, lightly touching the sides of the legs, and then swings both arms forward by about 20cm, pivoting at the shoulders. The forward-swung arms must appear in both the upper and lower cameras at the same time.
3) When the program detects an obvious forward swing of both arms, it immediately acquires images, performs point cloud analysis and registration, and calculates the extrinsic parameters and stores them locally.
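Step 3) registers the point clouds of the two cameras and computes the extrinsic parameters. The patent does not specify the algorithm; one standard way to recover the rigid transform between corresponding 3-D points (for example, the arm points visible to both cameras) is the SVD-based Kabsch method sketched below.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src_point + t,
    from N >= 3 corresponding 3-D points (Kabsch method, no scaling).
    A sketch of one standard approach, not the patented implementation."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rotate points 30 degrees about Z and shift them.
ang = np.radians(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.default_rng(0).normal(size=(50, 3))
R, t = rigid_transform(pts, pts @ R_true.T + np.array([0.1, -0.2, 0.3]))
```

In practice the correspondences would come from the registered arm point clouds, and the recovered (R, t) is stored as the inter-camera extrinsic parameters.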
The turntable 30 includes a measurement platform 31 on which the human body stands. The measurement platform 31 includes electrode plates 33 located on its surface and a body composition measuring assembly located inside; the electrode plates 33 are electrically connected with the body composition measuring assembly, and the body composition measuring assembly is electrically connected with the host 25.
In this embodiment, the body composition measuring assembly includes a body fat scale. The turntable 30 is used for collecting human body electrical impedance data and for rotating the measured person. To ensure that the turntable 30 has enough power to rotate under its maximum load of 200 kg and to keep the rotation speed stable, the turntable 30 adopts the USB-C 3.1 interface standard with a rated power of 19V/2A.
The human body electrical impedance data can be obtained through the two electrode plates 33 arranged on the measuring platform 31, and the electrode plates 33 can be made of SUS 304 stainless steel.
Considering that the weight and body fat scales on the market generally need to be placed on flat, hard ground when weighing, and in order to increase the weighing stability of the turntable 30, the base 32 is made of bakelite, and the gravity sensor is in direct contact with the bakelite rather than with the ground, which reduces the requirements the device places on the ground.
In addition, the turntable 30 provides the voltage required for the operation of the interactive panel 20. The turntable 30 is approximately 750mm away from the depth cameras and is connected to the host 25 using a dual-head USB-C 3.1 cable that serves as both power supply line and data transmission line.
In this embodiment, the host 25 includes an interactive module, a human three-dimensional model reconstruction module, and a body fat calculation module, and the three modules cooperate with an external sensor and the display screen 24 to detect and display the body parameters of the person to be measured. Specifically, the host 25 can implement the following steps:
s101, obtaining a depth image of a user standing on the rotary table through the first depth camera and the second depth camera, and achieving contactless interaction with the user according to the depth image.
The method comprises the following specific steps:
and acquiring a depth image of a user standing on the turntable according to the first depth camera and the second depth camera, and reconstructing according to a preset external parameter matrix to obtain a human body point cloud of the user.
In this embodiment, the first depth camera 22 and the second depth camera 23 are fixed at different heights, so that a depth image of the upper body and a depth image of the lower body of the user can be obtained respectively; the human body point cloud of the user is then obtained by stitching and reconstructing the upper-body and lower-body depth images. Methods for reconstructing a human body point cloud are prior art and are not described herein again.
Projecting the human body point cloud onto a plane where the interactive panel is located, and acquiring the height of a human body contour without an arm according to the projected human body point cloud;
in this embodiment, since the user stands vertically and substantially parallel to the height direction of the interactive panel 20, the human body point cloud is projected onto the plane where the interactive panel is located; the point cloud of the arm area is then identified, and the human body contour height without the arms is obtained after removing the arm-area point cloud.
And forming a human body trunk contour map according to the projected human body point cloud and the human body contour height.
Specifically, taking the human shank as the starting point, the pixel points at that location are set as seeds to be checked, and half of the human body contour height is used as the boundary:
as shown in fig. 3, in the lower half of the body, with each seed to be checked as a center point, pixel points are searched in 8 directions: left, right, up, first jump-up, second jump-up, down, first jump-down, and second jump-down. The first jump-up is located directly above the center point, two pixels away from it; the second jump-up is directly above the center point, four pixels away; the first jump-down is directly below the center point, two pixels away; and the second jump-down is directly below the center point, four pixels away.
As shown in fig. 4, in the upper half of the body, with each seed to be checked as a center point, pixel points are searched in the 5 directions of left, right, up, first jump-up, and second jump-up. If a searched pixel point is not yet a seed and the difference between its gray value and that of the current seed is smaller than a set threshold, the pixel point is set as a seed to be checked.
In the depth image, local data may be lost where the color is too dark or in blind areas of the viewing angle, for example at the junction between loose clothing and skin, resulting in disconnected regions. If the search proceeded only through adjacent points, it could be interrupted by such disconnections; jump points are therefore added to leap over the breaks when searching for pixel points, ensuring the integrity of the human body contour as far as possible.
In particular, since the lower body may be covered by clothing such as skirts or shorts, a simple upward search might fail to cover protruding clothing, and the uncovered area could then be large enough to be misjudged as a foreign object. The 4 jump search directions of first jump-up, second jump-up, first jump-down and second jump-down are therefore added for the lower body, so as to accommodate scenes with complex clothing as far as possible.
And after traversing all the seeds to be checked, obtaining a human body trunk contour map according to the images formed by all the seeds.
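The seed-growing search described above, including the jump offsets that bridge disconnected regions, can be sketched as follows. The function and array names are hypothetical, and the grid, start point and boundary row are toy values.

```python
import numpy as np
from collections import deque

# Neighbour offsets (row, col) as described: the lower body searches left,
# right, up, jump-up by 2 and 4 rows, down, and jump-down by 2 and 4 rows;
# the upper body omits the three downward directions.
LOWER = [(0, -1), (0, 1), (-1, 0), (-2, 0), (-4, 0), (1, 0), (2, 0), (4, 0)]
UPPER = [(0, -1), (0, 1), (-1, 0), (-2, 0), (-4, 0)]

def grow_torso(mask: np.ndarray, start: tuple, boundary_row: int) -> np.ndarray:
    """Grow the torso region from a start pixel in a binary silhouette mask.
    Rows >= boundary_row (lower half) use the 8 offsets, rows above use 5;
    the jump offsets let the search leap over small gaps in the silhouette."""
    out = np.zeros_like(mask, dtype=bool)
    q = deque([start])
    out[start] = True
    while q:
        r, c = q.popleft()
        for dr, dc in (LOWER if r >= boundary_row else UPPER):
            rr, cc = r + dr, c + dc
            if 0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1] \
                    and mask[rr, cc] and not out[rr, cc]:
                out[rr, cc] = True
                q.append((rr, cc))
    return out

# Silhouette with a 1-row gap at row 3 that plain 4-connectivity cannot cross:
sil = np.ones((8, 3), dtype=bool)
sil[3, :] = False
torso = grow_torso(sil, start=(7, 1), boundary_row=4)
```

Running this, the region grown from the shank row reaches every silhouette pixel above the gap, which plain adjacent-pixel growth would have missed.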
And performing subtraction according to the human body point cloud and the human body trunk contour map to obtain a double-arm image, and performing connected domain segmentation and morphological analysis on the double-arm image to obtain the opening angles of the left arm and the right arm.
The connected component segmentation is to divide the two-arm image into independent connected components, and the morphological analysis includes a series of operations such as erosion and expansion, which are conventional algorithms in the field.
And forming a preset interactive instruction according to the human body trunk contour map and the opening angles of the left arm and the right arm.
In this embodiment, the gesture of the user may be determined according to the opening angles of the left and right arms, and an interaction instruction is generated by comparing the user's gesture with a preset standard gesture. For example, when the user's arm opening amplitude is detected to be too small, the user may be instructed to open the arms wider; when the amplitude is too large, the user may be instructed to lower the arms; and when the arm opening amplitude is within the preset range, an instruction to start scanning and human body reconstruction can be issued.
In addition, the method can be used for prompting foreign object invasion.
Specifically, the method comprises the following steps:
firstly, calculating the widest width of the two arms based on the human body trunk contour map and the opening angles of the left arm and the right arm.
After the opening angles of the left arm and the right arm and the human body trunk contour map are obtained, the width of the human body trunk contour can be obtained, and then the widest width of the two arms can be obtained through calculation according to the angles.
Then, taking the widest width as a diameter and taking the contour map of the human trunk as an axis to create a cylinder;
and finally, shielding the depth information outside the cylinder, and generating an interactive instruction when the connected domain of the depth data in the cylinder is larger than a certain area.
Specifically, in this embodiment, for example, when a person or foreign object intrudes into the measurement cylinder whose diameter is "widest width of both arms + 10cm" and occupies an area larger than 100 square centimeters on the depth map, the intrusion is highlighted with a conspicuous color, and the user is reminded with text that a foreign object has entered the measurement area, so that the user can remove or avoid it.
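A minimal sketch of the cylinder masking and area check follows. It assumes the intruding points have already been separated from the subject, and it compares the total in-cylinder area rather than per-connected-component areas, so it simplifies the described method; the per-pixel footprint value is an assumption.

```python
import numpy as np

def foreign_object_alert(points_mm: np.ndarray, axis_xy: np.ndarray,
                         arm_span_mm: float, margin_mm: float = 100.0,
                         px_area_cm2: float = 0.25,
                         limit_cm2: float = 100.0) -> bool:
    """Flag an intrusion into the measurement cylinder.
    points_mm: (N, 3) depth points already known NOT to belong to the subject;
    axis_xy: cylinder axis position in the ground plane.
    Cylinder radius = (arm span + margin) / 2, matching the 'widest width of
    both arms + 10 cm' example; area threshold 100 cm^2 as in the text.
    px_area_cm2 is the assumed footprint of one depth pixel (hypothetical)."""
    radius = (arm_span_mm + margin_mm) / 2.0
    d = np.linalg.norm(points_mm[:, :2] - axis_xy, axis=1)
    inside = int((d <= radius).sum())            # points inside the cylinder
    return inside * px_area_cm2 > limit_cm2      # simplified: total area, not
                                                 # per-connected-component area

rng = np.random.default_rng(1)
# 500 stray points well outside a 1 m-span cylinder -> no alert.
far = np.column_stack([rng.uniform(900, 1200, 500),
                       rng.uniform(900, 1200, 500),
                       rng.uniform(0, 1800, 500)])
alert_far = foreign_object_alert(far, np.zeros(2), arm_span_mm=1000.0)
# 600 points near the axis (about 150 cm^2) -> alert.
near = np.column_stack([rng.uniform(-50, 50, 600),
                        rng.uniform(-50, 50, 600),
                        rng.uniform(0, 1800, 600)])
alert_near = foreign_object_alert(near, np.zeros(2), arm_span_mm=1000.0)
```

The full method would additionally label connected components on the depth map and test each component's area, as the text describes.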
S102, reconstructing a real human body three-dimensional model of the user through the depth image, and obtaining the size of a key part of the user according to the real human body three-dimensional model;
s103, acquiring body composition data of the measured person through the electrode slice and the body composition measuring component;
Specifically:
controlling the rotary table to rotate, and simultaneously controlling the first depth camera and the second depth camera to alternately acquire depth images corresponding to all parts of a person to be measured on the rotary table;
processing the collected depth image to form a three-dimensional human body model corresponding to the user;
and extracting point cloud data in the three-dimensional human body model, and calculating the size of the key part of the user.
Preferably, the processing the acquired depth image to form the three-dimensional human body model corresponding to the user specifically includes:
according to the collected depth image, taking a central axis of a human trunk in the image as an axis, taking the width opened by two hands as a diameter, establishing a cylinder, shielding depth information outside the cylinder, and calculating a current frame point cloud; the point clouds comprise an upper body point cloud and a lower body point cloud;
carrying out connected domain segmentation on the current frame point cloud, reserving a human body part and carrying out point cloud registration on the human body part and the previous frame point cloud to obtain a transformation matrix;
transforming according to the upper half body point cloud, the lower half body point cloud and the transformation matrix thereof to obtain a preliminary upper half body point cloud set and lower half body point cloud set;
separately down-sampling the upper body point cloud set and the lower body point cloud set, finding out nearest neighbor point pairs between adjacent frames, and updating all transformation matrixes through global least square;
extracting, according to all the updated transformation matrixes, point clouds of the human body from a plurality of camera angles; carrying out global ICP registration on these point clouds to obtain the final upper-body point cloud and the final lower-body point cloud; during matching, the weight of the torso part is increased and the weight of the arms is reduced, and the farther a point is from the camera, the lower its weight;
calculating the direction of the rotation axis from the distribution of the transformation matrixes of the upper-body and lower-body point clouds around it, and affine-transforming the rotation axes of the upper-body and lower-body point clouds onto the Z axis;
calculating, from the distance between the rotation axis and the camera together with the camera extrinsic parameters, the offsets of the upper-body and lower-body point clouds along the rotation axis, and shifting both point clouds onto the Z axis accordingly;
calculating, by least squares from the nearest-neighbor point pairs between the upper-body and lower-body point clouds, the x offset, y offset and rz offset of the upper body relative to the lower body; finally performing a same-level down-sampling operation on the transformed point clouds and fusing the points of the overlapping part to generate a body point cloud set;
separating the feet of the human body from the turntable through the extracted turntable plane equation, and supplementing the foot point cloud;
obtaining the human body point cloud from the body point cloud set and the foot point cloud, performing cluster analysis on the point cloud, and extracting the largest point cloud group;
and performing Poisson triangulation on the point cloud group, and performing optimization operation to generate a three-dimensional human body model.
Preferably, the turntable plane equation is obtained by:
obtaining a depth map shot by the second depth camera, and extracting a plurality of plane areas in the depth map by the random sample consensus (RANSAC) method;
and screening the plurality of plane areas according to the areas, the outlines and the positions to finally obtain the plane areas meeting the plane conditions of the turntable, and calculating a general plane equation of the plane areas.
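The random-sample-consensus extraction of candidate planes can be sketched as follows; the iteration count and inlier tolerance are illustrative values.

```python
import numpy as np

def ransac_plane(pts: np.ndarray, iters: int = 200, tol: float = 2.0,
                 seed: int = 0):
    """Fit a plane n . p + d = 0 (unit normal n) to 3-D points (mm) by
    random sample consensus: repeatedly pick 3 points, form their plane,
    and keep the plane with the most inliers within `tol` of it."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, 0.0, -1
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                        # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        count = int((np.abs(pts @ n + d) < tol).sum())
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d, best_count

# Synthetic scene: a z = 0 'turntable' plane plus scattered outliers above it.
rng = np.random.default_rng(42)
plane = np.column_stack([rng.uniform(-300, 300, 400),
                         rng.uniform(-300, 300, 400),
                         rng.normal(0, 0.5, 400)])
noise = rng.uniform(-300, 300, (100, 3)) + np.array([0, 0, 500])
n, d, count = ransac_plane(np.vstack([plane, noise]))
```

The screening step described above would then reject candidate planes whose area, contour or position cannot correspond to the turntable, keeping the one that does.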
And S104, generating an analysis report according to the critical part size and the body composition data.
Specifically, generating an analysis report according to the critical site size and the body composition data includes:
generating an analysis report by taking the three-dimensional human body model, the sex, the weight, the age and the impedance information of the user as input; wherein the analysis report comprises:
key point positioning: positioning key point positions of the human body based on the human body characteristics for assisting subsequent human body measurement and body state analysis;
analyzing the wearing state: the wearing state is divided into an upper-body wearing state and a lower-body wearing state; the upper-body wearing state distinguishes the arms, the front and rear chest area, and the front and rear waist area, while the lower-body wearing state distinguishes the hip circumference area, the thigh area and the shank area; the wearing state and the clothing type are confirmed according to the surface curvature characteristics of each area;
measuring human body data; the basic human body measurement data comprises circumference data, length data and vertical length data; the method is defined differently according to the requirements of the part and comprises the steps of horizontal circumference measurement, axial vertical circumference measurement, skin surface length measurement and space vertical measurement;
analyzing body type and body state; based on the curvature of the surface of the human body, carrying out targeted analysis on the body shape and the body state of the human body;
human body data measurement optimization: based on the body type and the body form of the human body, correcting or supplementing the measurement of the individual part;
analyzing human body components: and calculating and optimizing the body composition of the three-dimensional human body model based on the analysis of the body state data and the wearing state.
In order to facilitate understanding of the present invention, the working process of the embodiment of the present invention will be fully described as follows:
the overall workflow of the present embodiment may include five steps of calibration, interaction, scanning, human body reconstruction, and data calculation.
Firstly, the depth camera is calibrated, and the specific calibration includes standard calibration and simple calibration, which is not described herein again.
Then, before the measurement, the person to be measured may enter and confirm user information through the interaction module. For example, the measured person may perform operations such as registration and login through the interaction module, and may bind an account by scanning a two-dimensional code on the display screen.
Specifically, the interface of the interaction module includes a device self-checking interface, a login interface, a scanning interface, a result display interface, and the like.
In the device self-checking interface, the interaction module can be used for performing a series of initial checks such as network detection, software version detection, device registration detection, system time detection, turntable connection state detection, turntable position detection and the like.
In the login interface, the interaction module can provide various modes for the user to register and login, such as two-dimensional code scanning login, login through face recognition, login through inputting an account number and a password, and the like.
And in the scanning interface, the interaction module is used for scanning the body of the measured person to obtain a depth image.
The subject stands on the turntable 30, and both feet thereof contact two electrode pads 33 provided on the surface of the turntable 30, respectively.
The host computer 25 controls the first depth camera 22 and the second depth camera 23 to detect the standing posture of the measured person, and guides the measured person to reach a preset measurement specification.
The host 25 can make a rough judgment on the standing posture of the measured person through the first depth camera 22 and the second depth camera 23, and then judge whether the standing posture meets the preset measurement specification by comparison with a standard standing posture. For example, it is judged whether the measured person stands straight, whether the arms are open, etc.; according to the detection result, the host computer 25 can remind the measured person by voice to adjust the standing posture until the measurement specification is reached.
Then, the host computer 25 controls the rotation of the turntable 30 while controlling the first depth camera 22 and the second depth camera 23 to alternately acquire depth images of each part of the measured person on the turntable 30; the acquired depth images are processed by the three-dimensional human model reconstruction module to form a real three-dimensional human model of the measured person, from which point cloud data are extracted and the sizes of the key parts of the measured person are calculated.
Wherein, the key parts may include a chest circumference, a waist circumference, a height, etc., and the invention is not particularly limited. After the three-dimensional human body model is obtained through reconstruction, the size of the key part of the measured person can be obtained through identification.
Then, the host 25 receives, through the data line, the measurement data of the body composition measuring assembly and the electrode plates 33 from the turntable 30, and calculates the body composition data of the measured person through the body fat calculation module.
For example, data such as the body weight and body fat of the subject can be acquired.
Finally, as shown in fig. 6 and 7, the host computer 25 displays the data of the size, body fat, body weight, and the like of the critical part of the subject on the result display interface screen.
Wherein, only partial key data are displayed on the display screen 24, and the detailed scanning result can be acquired and displayed at the mobile terminal (mobile phone public number, applet, APP, etc.).
In this embodiment, after the measurement is completed, the measured person may choose to perform operations such as re-measurement, saving the measurement data, and exporting the measurement data. In addition, the host 25 may also bind the detection data with the user ID and store the bound detection data locally, so that the measured person may view the detection data subsequently.
In summary, the dual-camera human body scanning system provided by the embodiment has the following advantages:
1. By arranging the two depth cameras obliquely, the complete posture of the measured person can be captured without an excessive distance between the interactive panel 20 and the turntable 30, reducing the requirement on the site.
2. The two depth cameras realize a contactless interaction mode, so that the user can interact while standing on the turntable 30 without repeatedly operating the interactive panel 20 directly, giving a good user experience.
3. The multi-dimensional posture data of the user can be fully collected, and subsequent analysis and calculation are facilitated.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention.

Claims (10)

1. A dual camera body scanning system, comprising:
the interactive panel comprises a fixed frame, a first depth camera, a second depth camera and a host, wherein the first depth camera, the second depth camera and the host are arranged on the fixed frame, the first depth camera and the second depth camera are arranged on different heights of the fixed frame, and the host is electrically connected with the first depth camera and the second depth camera;
the rotary table comprises a measuring platform for a human body to stand on, the measuring platform comprises electrode plates positioned on the surface and a human body component measuring assembly positioned inside, the electrode plates are electrically connected with the human body component measuring assembly, and the human body component measuring assembly is electrically connected with the host; wherein:
the host includes a memory and a processor configured to implement the following steps by executing a computer program stored in the memory:
obtaining a depth image of a user standing on the turntable through the first depth camera and the second depth camera, and realizing contactless interaction with the user according to the depth image;
reconstructing a real human body three-dimensional model of the user through the depth image, and obtaining the size of a key part of the user according to the real human body three-dimensional model;
acquiring body composition data of the measured person through the electrode plates and the body composition measuring assembly;
and generating an analysis report according to the critical part size and the body composition data.
2. The dual camera body scanning system of claim 1, further comprising a display screen disposed on said interactive panel, said display screen being electrically connected to said host computer and configured to provide interactive guidance, login scanning, and presentation of said critical site dimensions and said body composition data.
3. The dual camera body scanning system of claim 1, wherein the first depth camera is 1700mm from the horizontal ground with its lens tilted downward; the second depth camera is 500mm from the horizontal ground with its lens tilted upward; and the tilt angle θ1 of the first depth camera and the tilt angle θ2 of the second depth camera are:

θ1 = arctan( (h1 − H1) / d1 )

θ2 = arctan( (H2 − h2) / d1 )
wherein d1 is the distance between the measured person and the interactive panel; h1 is the distance between the first depth camera and the horizontal ground; h2 is the distance between the second depth camera and the horizontal ground; H1 is the height above the horizontal ground of the lowest point at which the view cone of the first depth camera intersects the measured person; and H2 is the height above the horizontal ground of the lowest point at which the view cone of the second depth camera intersects the measured person.
4. The dual-camera body scanning system of claim 1, wherein obtaining a depth image of a user standing on the turntable through the first depth camera and the second depth camera, and realizing contactless interaction with the user according to the depth image, specifically comprises:
acquiring a depth image of a user standing on the turntable according to the first depth camera and the second depth camera, and reconstructing according to a preset external parameter matrix to obtain a human body point cloud of the user;
projecting the human body point cloud onto a plane where the interactive panel is located, and acquiring a human body contour height without arms according to the projected human body point cloud;
forming a human body trunk contour map according to the projected human body point cloud and the human body contour height;
performing subtraction according to the human body point cloud and the human body trunk outline image to obtain a double-arm image, and performing connected domain segmentation and morphological analysis on the double-arm image to obtain the opening angles of the left arm and the right arm;
and forming a preset interactive instruction according to the human body trunk contour map and the opening angles of the left arm and the right arm.
5. The dual-camera human body scanning system of claim 4, wherein a human body trunk contour map is formed according to the projected human body point cloud and the human body contour height, and specifically comprises:
taking the shank of the human body as a starting point, setting all the local pixel points as seeds to be checked, and taking half of the height of the contour of the human body as a boundary:
searching pixel points in 8 regions of the lower half body, namely, the left region, the right region, the upper region, the first jump-up region, the second jump-up region, the lower region, the first jump-down region and the second jump-down region by taking each seed to be checked as a central point; the first jump-up is positioned right above the central point and is separated from the central point by two pixel points; the second jump is positioned right above the central point and is separated from the central point by four pixel points; the first down jump is positioned under the central point and is separated from the central point by two pixel points; the second down jump is positioned right below the central point and is separated from the central point by four pixel points;
in the upper half of the body, taking each seed to be checked as a central point, searching pixel points in 5 directions: left, right, up, first jump-up and second jump-up; if a searched pixel point is not a seed and the difference between its gray value and the seed's gray value is smaller than a set threshold, setting the pixel point as a seed to be checked;
and after all the seeds to be checked have been traversed, obtaining the human body trunk contour map from the image formed by all the seeds.
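The growth procedure of claim 5 can be sketched as follows, assuming a top-origin grayscale image (row 0 is the top, so the lower body occupies rows at or below the half-height boundary); function and parameter names are illustrative, and the default threshold is an assumption:

```python
from collections import deque
import numpy as np

def grow_torso(gray, start, half_height, threshold=10):
    """Seeded region growing with the two- and four-pixel vertical "jumps".

    gray: 2-D grayscale image; start: (row, col) seed on the shank;
    half_height: row index separating upper body (above) from lower body.
    Lower-body seeds search 8 directions (left, right, up, jump-up x2/x4,
    down, jump-down x2/x4); upper-body seeds search only the 5
    lateral/upward directions. Returns a boolean torso mask.
    """
    lower_offsets = [(0, -1), (0, 1), (-1, 0), (-2, 0), (-4, 0),
                     (1, 0), (2, 0), (4, 0)]
    upper_offsets = [(0, -1), (0, 1), (-1, 0), (-2, 0), (-4, 0)]
    h, w = gray.shape
    seeds = np.zeros((h, w), dtype=bool)
    queue = deque([start])
    seeds[start] = True
    while queue:
        r, c = queue.popleft()
        offsets = lower_offsets if r >= half_height else upper_offsets
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not seeds[nr, nc]:
                # int() avoids uint8 wrap-around in the gray-value difference.
                if abs(int(gray[nr, nc]) - int(gray[r, c])) < threshold:
                    seeds[nr, nc] = True
                    queue.append((nr, nc))
    return seeds
```

The jumps let the growth step over thin occlusions (e.g. a belt or waistband) that would otherwise break four-connectivity.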
6. The dual-camera human body scanning system of claim 5, wherein forming a preset interactive instruction according to the human body trunk contour map and the opening angles of the left and right arms specifically comprises:
calculating the widest width of the two arms based on the human body trunk contour map and the opening angles of the left arm and the right arm;
taking the widest width as a diameter and taking the human body trunk contour map as an axis to create a cylinder;
and masking the depth information outside the cylinder, and generating an interactive instruction when a connected domain of the depth data inside the cylinder is larger than a set area.
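A minimal sketch of the cylinder masking and trigger, with two stated simplifications: the connected-domain area test is reduced to a point-count threshold, and the point cloud and cylinder axis are assumed to share one coordinate frame (z vertical). All names are illustrative.

```python
import numpy as np

def gesture_trigger(points, axis_xy, widest_width, min_points=500):
    """Keep only depth points inside a vertical cylinder of diameter
    `widest_width` centered on `axis_xy`, and fire the interactive
    instruction when enough points remain inside.

    points: (N, 3) array of (x, y, z); axis_xy: (x, y) of the torso axis.
    Returns (fired, inside_mask).
    """
    pts = np.asarray(points, dtype=float)
    radius = widest_width / 2.0
    # Horizontal distance of every point from the cylinder axis.
    dist = np.hypot(pts[:, 0] - axis_xy[0], pts[:, 1] - axis_xy[1])
    inside = dist <= radius
    return bool(inside.sum() > min_points), inside
```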
7. The dual-camera human body scanning system of claim 1, wherein obtaining the size of the key part of the user according to the depth image specifically comprises:
controlling the turntable to rotate while controlling the first depth camera and the second depth camera to alternately acquire depth images covering all parts of the person being measured on the turntable;
processing the collected depth images to form a three-dimensional human body model corresponding to the user;
and extracting point cloud data from the three-dimensional human body model, and calculating the size of the key part of the user.
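The alternating acquisition can be sketched with hypothetical zero-argument capture callables. Alternation is commonly used so that two active depth cameras do not project their IR patterns simultaneously; the claim does not state the reason, so that rationale is an assumption here.

```python
from itertools import cycle

def alternating_capture(cameras, n_frames):
    """Trigger the given capture callables in round-robin order for
    n_frames frames; returns (camera_index, frame) pairs in order."""
    order = cycle(range(len(cameras)))
    frames = []
    for _ in range(n_frames):
        idx = next(order)           # 0, 1, 0, 1, ... for two cameras
        frames.append((idx, cameras[idx]()))
    return frames
```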
8. The dual-camera human body scanning system of claim 6, wherein processing the acquired depth images to form a three-dimensional human body model corresponding to the user specifically comprises:
according to the acquired depth images, establishing a cylinder taking the central axis of the human trunk in the image as its axis and the width spanned by the two opened hands as its diameter, masking the depth information outside the cylinder, and calculating the current frame point cloud; the point cloud comprises an upper-body point cloud and a lower-body point cloud;
performing connected-domain segmentation on the current frame point cloud, retaining the human body part, and registering it against the previous frame's point cloud to obtain a transformation matrix;
transforming the upper-body point cloud and the lower-body point cloud with their transformation matrices to obtain preliminary upper-body and lower-body point cloud sets;
down-sampling the upper-body point cloud set and the lower-body point cloud set separately, finding nearest-neighbor point pairs between adjacent frames, and updating all transformation matrices through a global least-squares optimization;
extracting, according to all the updated transformation matrices, the point clouds captured by the cameras at a plurality of angles around the human body, and performing global ICP registration on them to obtain the final upper-body point cloud and the final lower-body point cloud; during matching, the weight of the trunk is increased and the weight of the arms is reduced, and the farther a point is from the camera, the lower its weight;
calculating the direction of the rotation axis from the fact that the transformation matrices of the upper-body and lower-body point clouds are distributed around the rotation axis, and aligning the rotation axis of the upper-body and lower-body point clouds to the Z axis;
calculating the distance of the upper-body and lower-body point clouds along the rotation axis from the distance between the rotation axis and the camera and the camera's extrinsic parameters, and then offsetting the upper-body and lower-body point clouds toward the Z axis;
calculating, by least squares over the nearest-neighbor point pairs between the upper-body and lower-body point clouds, three parameters of the upper body relative to the lower body: the x offset, the y offset and the rz (rotation about Z) offset; finally down-sampling the aligned point clouds at the same resolution and fusing the points of the overlapping part to generate a body point cloud set;
cutting away the human body's feet and the turntable using the extracted turntable plane equation, and supplementing the foot point cloud;
obtaining the human body point cloud from the body point cloud set and the foot point cloud, performing cluster analysis on the point cloud, and extracting the largest point cloud group;
and performing Poisson triangulation on the point cloud group, followed by an optimization operation, to generate the three-dimensional human body model.
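Two of the geometric steps above — recovering the rotation-axis direction from a transformation matrix and aligning that axis to the Z axis — can be sketched with plain rotation-matrix algebra. This is a generic construction (axis from the skew-symmetric part, alignment via the Rodrigues formula), not necessarily the patented one.

```python
import numpy as np

def rotation_axis(R):
    """Rotation axis of a 3x3 rotation matrix (unit eigenvector for
    eigenvalue 1), recovered from the skew-symmetric part R - R^T."""
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]])
    return axis / np.linalg.norm(axis)

def align_axis_to_z(axis):
    """Rotation matrix taking the unit vector `axis` to +Z
    (Rodrigues formula with the unnormalized cross product)."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(axis, z)
    s, c = np.linalg.norm(v), float(np.dot(axis, z))
    if s < 1e-12:                       # already (anti)parallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)
```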
9. The dual-camera human body scanning system of claim 8, wherein the turntable plane equation is obtained by:
obtaining a depth map captured by the second depth camera, and extracting a plurality of plane areas from the depth map by a random sample consensus (RANSAC) method;
and screening the plurality of plane areas by area, contour and position to finally obtain the plane area satisfying the turntable plane conditions, and calculating its general plane equation.
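A minimal random-sample-consensus plane fit, as one way to realize the extraction step; the iteration count, inlier tolerance, and random seed are illustrative parameters, not values from the patent.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.01, seed=0):
    """Fit the plane a*x + b*y + c*z + d = 0 with the most inliers.
    Returns ((a, b, c, d), inlier_count)."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_plane, best_count = None, -1
    for _ in range(n_iters):
        # Hypothesize a plane from 3 random points.
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                # degenerate (collinear) sample
            continue
        normal /= norm
        d = -float(normal @ p0)
        # Count points within the tolerance band of the plane.
        count = int((np.abs(pts @ normal + d) < inlier_tol).sum())
        if count > best_count:
            best_plane, best_count = (*normal, d), count
    return best_plane, best_count
```

A real pipeline would then refit the plane to all inliers by least squares and, per claim 9, screen candidate planes by area, contour and position before accepting one as the turntable.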
10. The dual-camera human body scanning system of claim 1, wherein generating an analysis report from the key part sizes and the body composition data specifically comprises:
generating an analysis report by taking the three-dimensional human body model, the sex, the weight, the age and the impedance information of the user as input; wherein the analysis report comprises:
key point positioning: locating the key point positions of the human body based on human body characteristics, to assist the subsequent human body measurement and posture analysis;
wearing state analysis: the wearing state is divided into an upper-body wearing state and a lower-body wearing state; the upper-body wearing state distinguishes the arms, the front and rear chest area and the front and rear waist area, and the lower-body wearing state distinguishes the hip area, the thigh area and the shank area; the wearing state and the clothing type are confirmed from the surface curvature characteristics of each area;
human body data measurement: the basic human body measurement data comprise circumference data, length data and vertical length data; the measurements are defined differently according to the requirements of each part, and include horizontal circumference measurement, axial vertical circumference measurement, skin-surface length measurement and spatial vertical measurement;
body shape and posture analysis: the body shape and posture of the human body are analyzed in a targeted manner based on the curvature of the human body surface;
human body data measurement optimization: correcting or supplementing the measurement of individual parts based on the body shape and posture of the human body;
and human body composition analysis: calculating and optimizing the body composition of the three-dimensional human body model based on the posture data analysis and the wearing state.
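As an illustration of the horizontal circumference measurement mentioned above, a thin slice of the body point cloud can be projected to the XY plane and measured by its convex-hull perimeter. The hull approximation of a tape-measure girth is an assumption for this sketch, not the patent's definition, and the slice band width is illustrative.

```python
import numpy as np

def convex_hull(xy):
    """Andrew's monotone-chain convex hull (counter-clockwise);
    assumes at least 3 non-collinear input points."""
    pts = sorted(map(tuple, np.asarray(xy, dtype=float)))
    def _cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(points):
        chain = []
        for p in points:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def horizontal_circumference(points, height, band=0.005):
    """Perimeter of the convex hull of the XY projection of the points
    lying within `band` of the given z height."""
    pts = np.asarray(points, dtype=float)
    slab = pts[np.abs(pts[:, 2] - height) < band][:, :2]
    hull = convex_hull(slab)
    # Sum of edge lengths around the closed hull polygon.
    return float(np.sum(np.linalg.norm(np.roll(hull, -1, axis=0) - hull,
                                       axis=1)))
```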
CN202211278162.2A 2022-10-19 2022-10-19 Double-camera human body scanning system Pending CN115590475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211278162.2A CN115590475A (en) 2022-10-19 2022-10-19 Double-camera human body scanning system


Publications (1)

Publication Number Publication Date
CN115590475A true CN115590475A (en) 2023-01-13

Family

ID=84847946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211278162.2A Pending CN115590475A (en) 2022-10-19 2022-10-19 Double-camera human body scanning system

Country Status (1)

Country Link
CN (1) CN115590475A (en)

Similar Documents

Publication Publication Date Title
US10013803B2 (en) System and method of 3D modeling and virtual fitting of 3D objects
CN103106401B (en) Mobile terminal iris recognition device with human-computer interaction mechanism
CN105556508B (en) The devices, systems, and methods of virtual mirror
CN104898832B (en) Intelligent terminal-based 3D real-time glasses try-on method
CN110168562B (en) Depth-based control method, depth-based control device and electronic device
CN107609516B (en) Adaptive eye movement method for tracing
US20160225164A1 (en) Automatic generation of virtual materials from real-world materials
CN111028341B (en) Three-dimensional model generation method
EP2894851B1 (en) Image processing device, image processing method, program, and computer-readable storage medium
CN108885487B (en) Gesture control method of wearable system and wearable system
CN107230224B (en) Three-dimensional virtual garment model production method and device
CN106355479A (en) Virtual fitting method, virtual fitting glasses and virtual fitting system
GB2504711A (en) Pose-dependent generation of 3d subject models
US10909275B2 (en) Breast shape and upper torso enhancement tool
US20220398781A1 (en) System and method for digital measurements of subjects
KR20210027028A (en) Body measuring device and controlling method for the same
CN107997276A (en) Three-dimensional human body measurement unit
US20180096490A1 (en) Method for determining anthropometric measurements of person
CN104487886A (en) Method for measuring the geometric morphometric parameters of a person wearing glasses
CN111340959A (en) Three-dimensional model seamless texture mapping method based on histogram matching
CN114494468A (en) Three-dimensional color point cloud construction method, device and system and storage medium
CN115590475A (en) Double-camera human body scanning system
CN115414031A (en) Human body parameter measuring device
CN206711151U (en) A kind of virtual fitting glasses and virtual fitting system
EP4185184B1 (en) Method for determining a coronal position of an eye relative to the head

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518100 Building C, Minzhi Stock Commercial Center, North Station Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province 2401

Applicant after: Shenzhen xianku intelligent Co.,Ltd.

Address before: 518063 3407, Block A, Building 9, Zone 2, Shenzhen Bay Science and Technology Ecological Park, No. 3609, Baishi Road, High tech District Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong

Applicant before: Shenzhen xianku intelligent Co.,Ltd.
