WO2023124878A1 - Virtual mask wearing method and apparatus, terminal device and readable storage medium - Google Patents

Virtual mask wearing method and apparatus, terminal device and readable storage medium

Info

Publication number
WO2023124878A1
WO2023124878A1 (PCT/CN2022/137573, CN2022137573W)
Authority
WO
WIPO (PCT)
Prior art keywords
mask
information
user
feature data
nose
Prior art date
Application number
PCT/CN2022/137573
Other languages
English (en)
French (fr)
Inventor
何垄
周明钊
庄志
Original Assignee
北京怡和嘉业医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京怡和嘉业医疗科技股份有限公司
Publication of WO2023124878A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • The present application relates to the technical field of medical equipment, and in particular to a virtual mask wearing method and apparatus, a terminal device, and a readable storage medium.
  • In recent years, non-invasive positive pressure ventilation has been widely used to treat obstructive sleep apnea (OSA), chronic obstructive pulmonary disease (COPD), and other conditions.
  • As non-invasive ventilation therapy, a mask comprises an interface device on the patient's face that surrounds the patient's nose and/or mouth and forms a sealed breathing space.
  • By contact mode, masks are usually divided into four types: nasal masks, oral masks, full face masks, and nasal pad masks. The nasal mask covers only the nose, the oral mask covers only the mouth, the oronasal mask covers both the mouth and the nose, and the nasal pad mask is inserted into the nostrils. To fit different face sizes, masks are also offered in different sizes such as large, medium, small, and small-wide.
  • In view of the above problems, the embodiments of the present application are proposed to provide a virtual mask wearing method and apparatus, a terminal device, and a readable storage medium that overcome, or at least partially solve, the above problems.
  • According to a first aspect, a virtual mask wearing method is provided, comprising: acquiring a face image of a user; determining actual facial feature data according to the face image; determining and displaying one or more matching first masks of various models according to the actual facial feature data; determining a target mask from the first masks according to a received first input from the user; and generating wearing picture information according to the face image and the target mask, and displaying the wearing picture information.
  • According to a second aspect, a virtual mask wearing apparatus is provided, comprising:
  • an acquisition module configured to acquire the user's face image;
  • an image processing module configured to determine actual facial feature data according to the face image;
  • a display module configured to display one or more matching first masks of various models according to the actual facial feature data;
  • an operation module configured to determine a target mask from the first masks according to a received first input from the user;
  • a determination module configured to generate wearing picture information according to the face image and the target mask;
  • the display module being further configured to display the wearing picture information.
  • According to a third aspect, a terminal device is provided, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor; when the program or instructions are executed by the processor, the steps of the method of the first aspect are implemented.
  • According to a fourth aspect, a readable storage medium is provided, on which a program or instructions are stored; when the program or instructions are executed by a processor, the steps of the method of the first aspect are implemented.
  • Because the actual facial feature data is determined from the user's real-time face image, it reflects the user's actual facial features, so multiple first masks of different types and sizes suited to those features can be matched accurately. After the user determines the target mask through the first input, the wearing picture information can be generated from the target mask and the user's face image using technologies such as virtual reality (VR) and displayed, simulating the effect of the user wearing the target mask.
  • Fig. 1 is a flow chart of a virtual wearing method of a mask provided by an embodiment of the present application
  • Fig. 2 is a schematic diagram of reference classification of masks provided by the embodiment of the present application.
  • Fig. 3 shows part of the applicable size standard by which the "F mask-A type" is classified according to the nose-tip-to-jaw distance in the embodiment of the present application;
  • Fig. 4 is a schematic diagram of the geometric relationship of key feature positions in the embodiment of the present application.
  • Fig. 5 is a schematic flow diagram of recommending a mask by identifying the beard features of the user's face in the embodiment of the present application;
  • Fig. 6 is a block diagram of a virtual wearing device for a mask provided by an embodiment of the present application.
  • Fig. 7 is a block diagram of another device for virtual wearing of a mask provided by an embodiment of the present application.
  • Referring to FIG. 1, a flow chart of a virtual mask wearing method is shown; the method may specifically include the following steps 101 to 105.
  • The virtual mask wearing method provided by some embodiments of the present application is applied to a terminal device that has a display device such as a screen and that allows the user to select a specific mask model, for example a mask sales terminal.
  • Step 101: acquire a user's face image.
  • In this step, the user is a patient who needs to select a mask; with the user's permission, the user's face image is collected in real time through an image acquisition module.
  • The face image can be a two-dimensional image or a three-dimensional image.
  • The face image can be a still photo or a dynamic video.
  • In practical applications, the image acquisition module automatically collects the user's face image in real time, thereby obtaining it.
  • The image acquisition module may include any known device capable of collecting two-dimensional image information, such as but not limited to cameras, video cameras, infrared imagers, and monitors; it may also include any known device capable of collecting three-dimensional image information, such as but not limited to 3D scanners and stereoscopic imagers. The three-dimensional images may be dynamic or static 3D images, point clouds, or 3D images obtained by parsing 2D data.
  • Step 102: determine actual facial feature data according to the face image.
  • In this step, the user's facial features are identified based on the collected face image, so as to obtain the user's actual facial feature data.
  • In practical applications, the actual facial feature data includes but is not limited to at least one of facial beard feature information, facial contour feature information, and jaw type feature information, and also includes position information and size information of key features.
  • Key features include but are not limited to the eyes, brow center, nose, philtrum, and jaw.
  • Eye size information includes interocular distance information.
  • Nose size information includes nose width information, nose bridge height information, and nose-tip-to-jaw distance information. One possible in-code representation of this feature data is sketched below.
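The following is a minimal, illustrative sketch of how the actual facial feature data described above might be organized in code. All names and units are assumptions for illustration; the patent does not prescribe any particular data layout.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class FacialFeatureData:
    """Hypothetical container for the 'actual facial feature data' of step 102."""
    beard: Optional[str] = None           # e.g. "none", "full", "goatee"
    face_contour: Optional[str] = None    # e.g. "round", "oval"
    jaw_type: Optional[str] = None
    # Key-feature positions as (x, y) image coordinates:
    # "left_eye", "right_eye", "brow_center", "nose_tip", "philtrum", "jaw".
    landmarks: Dict[str, Tuple[float, float]] = field(default_factory=dict)
    # Size information derived from the landmarks (assumed in millimetres).
    interocular_distance: Optional[float] = None  # dimension a
    nose_width: Optional[float] = None
    nose_bridge_height: Optional[float] = None
    nose_tip_to_jaw: Optional[float] = None       # dimension b
```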
  • Step 103: determine and display one or more matching first masks of various models according to the actual facial feature data.
  • In this step, because the facial feature data reflects the user's actual facial situation, and the facial features and feature size ranges applicable to each mask model can be preset or determined through big-data analysis, the facial feature data determined in step 102 makes it possible to identify first masks of various models, of different types and sizes, adapted to the user's face, and to display these first masks through the display module so that the wearing effect can be shown after the user makes a selection.
  • The mask model is defined jointly by mask type, applicable facial features, and applicable size; that is, each mask model includes information such as mask type, applicable facial features, and applicable size.
  • Mask types include the nasal mask, oral mask, full face mask, and nasal pad mask; applicable facial features include facial beard feature information, facial contour feature information, jaw type feature information, and the like.
  • Applicable sizes can be divided into large, medium, small, small-wide, and so on, with different sizes corresponding to different ranges of facial feature data.
  • In some embodiments, FIG. 2 shows a reference classification diagram of masks.
  • As shown in Fig. 2, the "F mask-A type-L size" designation indicates a full-face, no-beard, large-size mask model.
  • The applicable size is the fitting size range for a key feature. For example, for the nose-tip-to-jaw distance b, the range of dimension b on the patient's face suitable for each mask model is pre-recorded in the database; see Fig. 3, which shows part of the applicable size standard by which the "F mask-A type" is classified according to dimension b in the embodiment of the present application. When a mask matching the user's actual facial feature data is being determined and a retrieval command is sent to the database, the applicable range of dimension b for the specific mask is output to the terminal device.
  • It can be understood that the patient interface matching-size database may record, for each mask model under the above classification marks, the ranges of multiple face dimensions at the same time, such as the applicable nose width, interocular distance, and nose bridge height. Referencing multiple size ranges together makes the final mask selection more accurate, as in the sketch below.
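As one hedged illustration of the database lookup just described, the sketch below matches a user's dimension b against pre-recorded applicable ranges. The model names and ranges are placeholders, not values from the patent.

```python
from typing import Dict, List, Tuple

# Hypothetical matching-size records: mask model -> applicable (min, max)
# range of dimension b (nose tip to jaw), assumed in millimetres.
SIZE_DB: Dict[str, Tuple[float, float]] = {
    "F mask-A type-S size": (55.0, 65.0),  # illustrative ranges only
    "F mask-A type-M size": (65.0, 75.0),
    "F mask-A type-L size": (75.0, 85.0),
}

def models_matching_b(b: float) -> List[str]:
    """Return every mask model whose applicable b range covers the user's b."""
    return [model for model, (lo, hi) in SIZE_DB.items() if lo <= b <= hi]

# Example: models_matching_b(70.0) -> ["F mask-A type-M size"]
```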
  • Step 104: determine a target mask from the first masks according to the received first input from the user.
  • In this step, the first input is the user's input for selecting the target mask from the first masks, and may specifically be an operation such as touching, clicking, or cursor-locking the target mask.
  • Step 105: generate wearing picture information according to the face image and the target mask, and display the wearing picture information.
  • In this step, image synthesis technology is used to add the target mask to the face image, generating picture information of the user wearing the target mask; the wearing picture information is then displayed, simulating the user wearing the target mask. This shows the wearing effect to the user in advance, at the selection stage, and avoids choosing, through misjudgment, a mask whose type and appearance do not suit the user's actual situation.
  • To summarize: the user's face image is acquired; multiple kinds of facial feature data are determined according to the face image; one or more matching first masks of various models are determined and displayed according to the facial feature data; the target mask is determined from the first masks according to the received first input from the user; and wearing picture information is generated according to the face image and the target mask and displayed.
  • Because the facial feature data is determined from the user's real-time face image, it reflects the user's actual facial features, so multiple first masks of different types and sizes suited to those features can be matched accurately.
  • After the user determines the target mask through the first input, the wearing picture information can be generated from the target mask and the user's face image using technologies such as virtual reality (Virtual Reality, VR) and displayed, simulating the effect of the user wearing the target mask, giving the user an immersive experience and helping the user pick a more satisfactory mask.
  • In some embodiments, the user photographs his or her face through the image acquisition module of the terminal device and sees the face picture in real time on the display module.
  • The system prompts the user to move the face to align with the camera and collects size information; the system identifies the patient's facial features and recommends suitable mask types. The user then selects a mask and wears it virtually (in VR) on the display module.
  • Through the terminal device, users can intuitively experience the appearance, size, performance, wearing, and use of each mask, watch video tutorials, and place purchase orders, which is convenient and fast.
  • It can be understood that the method provided in the embodiments of the present application can be packaged as a mobile phone app, a WeChat plug-in, or the like for customers' convenience, and can also be used as a stand-alone device or in combination.
  • the method provided in the embodiment of the present application further includes steps 106 to 108 after the above step 102 .
  • Step 106: determine the geometric relationship between the positions of the key features according to the actual facial feature data.
  • In this step, the facial feature data includes position information of key features, which include but are not limited to the eyes, brow center, nose, philtrum, upper jaw, and lower jaw, so the geometric relationship between the positions of the key features can be determined.
  • Specifically, when the distances between key features are calculated from their positioning marks, the geometric relationship between the key features can be determined.
  • Step 107: determine whether the user's face is skewed according to the geometric relationship between the positions of the key features.
  • Because, in a natural front-facing state, the positions of the key features of a human face are in a fixed geometric relationship, the actual geometric relationship between the key feature positions can be used to determine whether the user's face is currently skewed.
  • For example, five points are located on the user's face: the left eyeball, right eyeball, brow center, nose tip, and jaw. As shown in Fig. 4, the line connecting the left and right eyeballs is defined as l1, and the line connecting the brow center, nose tip, and jaw is defined as l2. After receiving the position information of these five points, the system automatically calculates the interocular distance dimension a and the nose-tip-to-jaw distance dimension b, and checks the geometric positional relationship among line l1, line l2, dimension a, and dimension b. If and only if l1 and l2 are perpendicular and l2 bisects dimension a is the patient's face judged to be in a front-facing state without skew or rotation, in which case the value of b is output to provide data for the subsequent determination of the second matching score of the adapted mask; otherwise, the user's face is in a skewed state.
  • Step 108: if it is determined that the user's face is skewed, generate prompt information to prompt the user to adjust the facial posture; in this case data collection and mask fitting are unreliable, so the prompt guides the patient to turn the face to the proper position. A sketch of the geometric check follows.
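The following sketch implements the perpendicularity-and-bisection check described for steps 106 to 108, using the five landmark points of Fig. 4. The tolerance values are assumptions; the patent states the geometric conditions but no numeric thresholds.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def face_is_straight(pts: Dict[str, Point],
                     tol_deg: float = 3.0, tol_bisect: float = 0.05) -> bool:
    """Line l1 joins the eyes; line l2 runs through brow center and jaw
    (the nose tip is assumed collinear with them). The face counts as
    un-skewed only if l1 is perpendicular to l2 and l2 bisects dimension a."""
    le, re_ = pts["left_eye"], pts["right_eye"]
    brow, jaw = pts["brow_center"], pts["jaw"]
    v1 = (re_[0] - le[0], re_[1] - le[1])        # direction of l1
    v2 = (jaw[0] - brow[0], jaw[1] - brow[1])    # direction of l2

    # Check that the angle between l1 and l2 is ~90 degrees.
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
    if abs(angle - 90.0) > tol_deg:
        return False  # prompt the user to adjust the facial posture (step 108)

    # Check that l2 bisects dimension a: the midpoint of the eye line
    # should lie on l2 (perpendicular distance small relative to a).
    mid = ((le[0] + re_[0]) / 2.0, (le[1] + re_[1]) / 2.0)
    a = math.hypot(*v1)
    dist = abs(v2[0] * (mid[1] - brow[1]) - v2[1] * (mid[0] - brow[0])) / math.hypot(*v2)
    return dist / a <= tol_bisect
```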
  • the above step 103 includes steps 301 to 304 .
  • Step 301: obtain the preset correspondence between mask models and facial feature data.
  • In this step, the facial features and feature sizes suitable for each mask model are determined in advance through big-data analysis, yielding the range of facial feature data to which each model is adapted and thereby the above correspondence.
  • In some specific implementations, the above facial feature data includes but is not limited to facial beard feature information, facial contour feature information, jaw type feature information, and position information and size information of key features, where the key features include but are not limited to one or more of the eyes, brow center, nose, philtrum, upper jaw, and lower jaw; the eye size information includes interocular distance information.
  • In some specific implementations, the facial feature data includes nose size information, and the nose size information includes nose width information, nose bridge height information, nose-tip-to-upper-jaw distance information, and nose-tip-to-lower-jaw distance information.
  • The nose-tip-to-upper-jaw distance information is used to select a nasal mask or a nasal pad mask, and refers specifically to the distance between the nose tip and the highest point of the upper jaw.
  • The nose-tip-to-lower-jaw distance information is used to select a full face mask, and refers specifically to the distance between the nose tip and the lowest point of the chin.
  • In some specific implementations, the facial feature data includes nose size information comprising nose width information, nose bridge height information, and nose-tip-to-upper-jaw distance information.
  • In other implementations, the nose size information comprises nose width information, nose bridge height information, and nose-tip-to-lower-jaw distance information.
  • The above correspondence can be stored in a data system managed and maintained by the service provider. The data system can be bound to the terminal device and downloaded and accessed together with it, for example but not limited to packaging the data system's information and the terminal device's system files into an APP for the terminal device; it can also be stored in the cloud, at a base station, or on hardware storage devices such as hard disks or USB devices, and retrieved at any time during use.
  • Step 302: determine the first masks of various models corresponding to the actual facial feature data according to the preset correspondence.
  • In this step, because the preset correspondence predefines the matching mask model for every kind of facial feature data, the mask models corresponding to the actual facial feature data, i.e., the above first masks, can be determined from it; in practice there may be one or more such first masks.
  • Step 303: display each first mask.
  • Each of the first masks determined in step 302 is displayed for selection and determination by the user.
  • In some specific implementations, the preset correspondence includes a first sub-correspondence between mask types and facial features and a second sub-correspondence between mask sizes and the size ranges of key features, and the above step 302 includes steps 3021 to 3023.
  • Step 3021 Determine the first mask type according to the first sub-correspondence relationship and the facial features in the actual facial feature data.
  • the actual facial features of the user are extracted from the actual facial feature data, and then based on the actual facial features combined with the above-mentioned first sub-correspondence, an adapted mask type, that is, the above-mentioned first mask type is determined.
  • Step 3022 Determine the size of the first mask according to the second sub-correspondence and the size information of key features in the actual facial feature data.
  • In this step, the size information of the user's key features is extracted from the actual facial feature data, and the adapted mask size, i.e., the above first mask size, is determined based on that size information combined with the second sub-correspondence.
  • Step 3023 Determine the first mask according to the first mask type and the first mask size.
  • In this step, because the first mask type determines the specific type of mask and the first mask size defines its specific size, combining the two determines the specific mask model, i.e., the first mask that matches the user's actual facial feature data. A sketch of this combination follows.
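A minimal sketch of steps 3021 to 3023, combining the first sub-correspondence (facial features to mask type) with the second sub-correspondence (key-feature sizes to mask size). The mappings follow the patent's beard example (no beard leads to type A, and so on); the numeric ranges are placeholders.

```python
from typing import Dict, List, Tuple

# First sub-correspondence (illustrative): beard feature -> mask type.
TYPE_BY_BEARD: Dict[str, str] = {"none": "A", "full": "B", "goatee": "C"}

# Second sub-correspondence (illustrative): size label -> range of b in mm.
SIZE_BY_B: Dict[str, Tuple[float, float]] = {
    "S": (55.0, 65.0), "M": (65.0, 75.0), "L": (75.0, 85.0),
}

def candidate_first_masks(beard: str, b: float, category: str = "F") -> List[str]:
    """Step 3021 picks the type, step 3022 picks the size(s),
    and step 3023 combines them into concrete mask models."""
    mask_type = TYPE_BY_BEARD.get(beard)
    if mask_type is None:
        return []
    sizes = [s for s, (lo, hi) in SIZE_BY_B.items() if lo <= b <= hi]
    return [f"{category} mask-{mask_type} type-{s} size" for s in sizes]

# Example: candidate_first_masks("none", 70.0) -> ["F mask-A type-M size"]
```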
  • the method provided in the embodiment of the present application further includes steps 3024 to 3027 after the above step 3023 .
  • Step 3024 Determine the first matching score of each first mask according to the actual facial feature data and the first sub-correspondence.
  • In this step, because the first sub-correspondence defines the correspondence between mask types and facial features, a matching score, i.e., the above first matching score, can be determined from the user's actual facial features in the actual facial feature data. A higher score indicates that, from the perspective of user features, the mask model suits the user's facial features; a lower score indicates the opposite.
  • For example, if the actual facial feature data indicates that the user's face has no beard and a type-A mask is recommended, and the data system outputs the "F mask-A type-M size" mask, the mask is judged suitable for those facial features and given a high feature score, for example 95 points; this feature score is the above first matching score.
  • Step 3025 Determine the second matching score of each first mask according to the actual facial feature data and the second sub-correspondence.
  • In this step, because the second sub-correspondence defines the correspondence between mask sizes and the size ranges of key features, a matching score, i.e., the above second matching score, can be determined from the size information of the user's key features in the actual facial feature data. A higher score indicates that, from the perspective of the user's face size, the mask model suits the user's face; a lower score indicates the opposite.
  • For example, if the size information of the user's key features falls outside the applicable range of the "F mask-A type-M size" mask, the size score of that mask with respect to the user's face is relatively low, for example 30 points. In this way the size score of a given mask for a given user's face is obtained, and the size score can then be displayed.
  • Step 3026 Determine the comprehensive matching score of each first mask according to the first matching score and the second matching score.
  • In this step, the first matching score and the second matching score are weighted to obtain a comprehensive score of how well each first mask matches the patient's face, i.e., the above comprehensive matching score; one possible weighting is sketched after step 3027 below.
  • Step 3027 display the first matching score, the second matching score and the comprehensive matching score of each first mask.
  • the above-mentioned matching scores are displayed, so that the user can know the specific matching situation between each first mask and his own face, and it is convenient for the user to filter and compare.
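The sketch below shows one way the weighted combination of steps 3024 to 3027 could be computed. Equal weights are an assumption; the patent only says the two scores are weighted, and that the user may opt out of using the size score (or the feature score) as a necessary reference condition.

```python
def comprehensive_score(feature_score: float, size_score: float,
                        w_feature: float = 0.5, w_size: float = 0.5,
                        use_feature: bool = True, use_size: bool = True) -> float:
    """Weight the first (feature) and second (size) matching scores into
    the comprehensive matching score; a higher value means a better match."""
    if use_feature and use_size:
        return w_feature * feature_score + w_size * size_score
    if use_feature:
        return feature_score
    if use_size:
        return size_score
    raise ValueError("at least one score must be used")

# Example: comprehensive_score(95, 30) -> 62.5
```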
  • In some embodiments, the above step 105 includes steps 501 to 502.
  • Step 501: perform location marking on the face image according to the position information of each key feature.
  • In this step, one or more key feature positions, such as the nose tip, brow center, philtrum, and eyes, are marked and located, realizing real-time positioning marks on the face image.
  • As the user's head moves, the marks are updated and positioned in real time. One possible realization of this marking step is sketched below.
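As one possible realization of this location-marking step, the sketch below uses MediaPipe FaceMesh to extract a few key feature positions per frame. The patent names no specific detector, and the landmark indices chosen here are assumptions tied to MediaPipe's mesh.

```python
import cv2               # assumption: OpenCV available for color conversion
import mediapipe as mp   # assumption: MediaPipe used as the landmark detector

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)

def mark_key_features(frame_bgr):
    """Return pixel coordinates of a few key features for the current frame,
    or None if no face is detected. Indices are MediaPipe FaceMesh indices."""
    h, w = frame_bgr.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    lm = result.multi_face_landmarks[0].landmark
    idx = {"nose_tip": 1, "left_eye": 33, "right_eye": 263, "jaw": 152}
    return {name: (lm[i].x * w, lm[i].y * h) for name, i in idx.items()}
```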
  • Step 502: position the VR graphic information of the target mask on the face image according to the positioning marks, and generate the wearing picture information.
  • In this step, because the positioning marks locate the key feature positions of the patient's face in real time, the VR graphic information of the target mask can be retrieved from the database, matched and positioned against the positioning marks to generate the wearing picture information, and the wearing picture displayed; that is, the VR graphic of the mask is positioned on the patient's face in real time. It can be understood that the VR graphic of the mask moves with the patient's face, achieving a "real-time wearing" VR effect.
  • When matching and positioning, each feature can be matched separately, or several features can be matched comprehensively, for example the distance between the nose tip and the outermost side of the upper alveolus together with the inclination angle. One simple realization is sketched below.
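A hedged sketch of the matching-and-positioning step: each frame, a 2D similarity transform is estimated from the tracked landmarks and used to warp the stored mask graphic onto the face. The patent does not mandate this particular transform, and the anchor coordinates are placeholders.

```python
import numpy as np
import cv2  # assumption: OpenCV used for the transform and warping

# Anchor points (e.g. eyes, brow center, nose tip) in the coordinate frame
# of the stored mask graphic; placeholder values for illustration.
MASK_ANCHORS = np.float32([[80, 40], [160, 40], [120, 20], [120, 80]])

def overlay_mask(frame: np.ndarray, mask_rgba: np.ndarray,
                 landmarks: np.ndarray) -> np.ndarray:
    """Warp the RGBA mask graphic so its anchors track the corresponding
    facial landmarks (same order as MASK_ANCHORS), then alpha-blend it."""
    m, _ = cv2.estimateAffinePartial2D(MASK_ANCHORS, np.float32(landmarks))
    if m is None:
        return frame  # estimation failed; show the raw frame this time
    warped = cv2.warpAffine(mask_rgba, m, (frame.shape[1], frame.shape[0]))
    alpha = warped[..., 3:4].astype(np.float32) / 255.0
    blended = frame.astype(np.float32) * (1 - alpha) + \
              warped[..., :3].astype(np.float32) * alpha
    return blended.astype(np.uint8)
```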
  • In practical applications, the above wearing picture is displayed through the interface interaction window of the terminal device, which is configured to output information, display relevant content, receive operation instructions input by the user, and display prompt information when necessary.
  • The window can display the user's real-time face image, real-time positioning, and other information in real time, so the user can see his or her own facial features in real time.
  • It can also display the VR image of the mask and simulate wearing it on the patient's face according to the real-time positioning information.
  • It can be understood that the VR image of the mask stays still relative to the face as the face moves, which approximates the real wearing effect.
  • The interface interaction window is further configured to display the performance information of the selected target mask, such as but not limited to the mask weight, dead space volume, and applicable pressure; it can also display wearing and usage information, such as but not limited to the wearing procedure and the methods of use and cleaning; it can also display fit information, for example, when the comprehensive face-matching score is low, displaying the reason, such as "The mask is too large, a smaller size is recommended" or "The mask is too small, a larger size is recommended"; it can give user prompts when necessary, such as prompting the user to adjust the face position when the nose-tip-to-jaw distance is being calculated; and it can display dynamic prompt information, for example, if the user chooses a nasal mask or a nasal pad mask and opens the mouth during use, corresponding prompt information is displayed.
  • In some embodiments, the method can also receive a second input from the user to select a different reference dimension, such as but not limited to the nose-tip-to-jaw distance b of the patient's face, and can, when necessary, interrupt the database's transmission of applicable size information (i.e., the above second sub-correspondence) to the terminal device. In this case, the user can choose whether to use the size score as a necessary reference condition for the final comprehensive matching score.
  • In some embodiments, the method can also receive a third input from the user to control the feature score instruction and, when necessary, interrupt the determination of the first matching score.
  • The user can use the second input operation to select whether the second matching score is used as a necessary reference condition for the final comprehensive matching score.
  • Through the third input, the user can also determine which facial feature categories are recognized, such as identifying beard features and facial contour features separately or simultaneously, and which key features are marked and located, such as marking certain of the eyes, brow center, philtrum, upper lip, lower jaw, and nose tip.
  • FIG. 5 shows a schematic flow chart of recommending a mask by identifying the beard features of the user's face in the embodiment of the present application.
  • In step 221, the image information collected by the image acquisition module is received.
  • In step 222, a selection procedure is performed to identify beard features.
  • In step 223, the corresponding mask model is matched and recommended, and the flow then proceeds to step 224. For example, if the user's face is recognized as having no beard, a type-A mask is recommended; if it is recognized as bearded, a type-B mask is recommended; if it is recognized as having a goatee, a type-C mask is recommended.
  • In step 224, the recommendation result is output to the display module, which receives and displays it.
  • Referring to FIG. 6, a block diagram of a virtual mask wearing apparatus is shown; the apparatus may specifically include:
  • an image acquisition module 61 configured to acquire the user's face image;
  • an image processing module 62 configured to determine actual facial feature data according to the face image;
  • a determination module 63 configured to determine one or more matching first masks of various models according to the actual facial feature data;
  • a display module 64 configured to display each first mask;
  • an operation module 65 configured to determine a target mask from the first masks according to a received first input from the user;
  • the determination module 63 being further configured to generate wearing picture information according to the face image and the target mask;
  • and the display module 64 being further configured to display the wearing picture information.
  • Because the actual facial feature data is determined from the user's real-time face image and reflects the user's actual facial features, multiple first masks of different types and sizes suited to those features can be matched accurately; after the user determines the target mask through the first input, the wearing picture can be generated from the target mask and the user's face image using technologies such as virtual reality (VR) and displayed.
  • In some embodiments, the determination module 63 is specifically configured to obtain the preset correspondence between mask models and facial feature data, and to determine, according to the preset correspondence, the first masks of various models corresponding to the actual facial feature data.
  • In some embodiments, the facial feature data includes but is not limited to at least one of facial beard feature information, facial contour feature information, and jaw type feature information, and also includes position information and size information of key features; the key features include but are not limited to one or more of the eyes, brow center, nose, upper jaw, and lower jaw, and the eye size information includes interocular distance information.
  • The nose size information includes nose width information, nose bridge height information, and nose-tip-to-jaw distance information; or nose width information, nose bridge height information, and nose-tip-to-upper-jaw distance information; or nose width information, nose bridge height information, and nose-tip-to-lower-jaw distance information.
  • In some embodiments, the image processing module 62 includes a size data processing unit, and the size data processing unit includes:
  • a first determination subunit configured to determine, after the facial feature data is determined from the face image, the geometric relationship between the positions of the key features according to the actual facial feature data;
  • a second determination subunit configured to determine whether the user's face is skewed according to the geometric relationship between the positions of the key features;
  • and a prompt subunit configured to generate prompt information to prompt the user to adjust the facial posture when the user's face is determined to be skewed.
  • the image processing module 62 also includes:
  • the image positioning unit is used for positioning and marking the face image according to the position information of each key feature
  • Determination module 63 includes:
  • the VR matching determination unit is configured to locate the VR graphic information of the target mask on the face image according to the positioning mark, and generate wearing picture information.
  • the preset correspondence in the device includes a first sub-correspondence between mask types and facial features, and a second sub-correspondence between mask sizes and size ranges of key features;
  • Determination module 63 includes:
  • a type determining unit configured to determine the first mask type according to the first sub-correspondence and the facial features in the actual facial feature data
  • a size determination unit configured to determine the size of the first mask according to the second sub-correspondence and the size information of the key features in the actual facial feature data
  • the mask determining unit is configured to determine the first mask according to the first mask type and the first mask size.
  • the determining module 63 further includes:
  • the feature score determination unit is used to determine the first matching score of each first mask according to the actual facial feature data and the first sub-correspondence after determining the first mask according to the first mask type and the first mask size;
  • a size score determination unit configured to determine the second matching score of each first mask according to the actual facial feature data and the second sub-correspondence
  • a comprehensive score determination unit configured to determine the comprehensive matching score of each first mask according to the first matching score and the second matching score
  • the display module is also used to display the first matching score, the second matching score and the comprehensive matching score of each first mask.
  • FIG. 7 shows a block diagram of another device for virtual wearing of a mask provided by an embodiment of the present application.
  • In some embodiments, the above image processing module 62 functions to receive the image information collected by the image acquisition module 61 and to perform feature recognition, positioning, and processing of specific size data.
  • the image processing module 62 also includes a user feature recognition unit 621 , an image positioning unit 622 , and a size data processing unit 623 .
  • In some embodiments, the user feature recognition unit 621 can recognize features of the patient's face, for example but not limited to beard features (whether there is a beard, the distribution and density of the beard, etc.), facial contour features (round or oval face, whether there are facial defects, etc.), and key feature positions (nose tip, brow center, philtrum, eyes, etc.).
  • the user feature identification unit 621 can identify the features of the patient's face, and output to the display module 64 the type of mask adapted to the patient's face features.
  • In some embodiments, the image positioning unit 622 receives the key feature positions (such as the nose tip, brow center, philtrum, and eyes) recognized by the user feature recognition unit 621 and marks and locates one or more of them. The image positioning unit 622 is configured to mark and locate in real time as the user's face moves: when the patient's face moves, it marks and locates the key feature positions in real time and outputs them to the display module 64, providing a positioning basis for subsequent VR image display. Further, the image positioning unit 622 can output the marked key feature positions to the size data processing unit 623.
  • the size data processing unit 623 is configured to receive one or more key feature position information of the user's face from the image positioning unit 622 , and calculate the distance data between specific marks, and then transmit the distance data to the determination module 63 .
  • In some embodiments, the above database 66 is a data system for mask selection, managed and maintained by the service provider. It can be bound to the system data and downloaded and accessed together with the above virtual mask wearing apparatus; for example but not limited to, the data of the database 66 and the system files of the selection apparatus can together form an APP for a mobile terminal. It can also be stored in the cloud, at a base station, or on hardware storage devices (hard disks, USB devices, etc.) and retrieved at any time during use. Further, the database 66 consists of a patient interface device 3D model library 661, a patient interface matching-size database 662, a patient interface VR graphics database 663, and the like. The patient interface device 3D model library 661 holds the 3D models of all selectable masks the service provider offers, classified and marked according to the contact mode with the human face, applicable facial features, and applicable sizes.
  • The patient interface matching-size database 662 records, for each mask model under the above classification marks, the range of a given face dimension to which the mask is suited, and outputs the size range information to the determination module 63, providing the second data for determining the size score of a suitable mask there.
  • In some embodiments, the database 662 records, for each mask model, the applicable range of the patient's nose-tip-to-jaw distance b; when the operation module issues a retrieval command, the applicable range of dimension b for the specific mask model is output to the determination module 63.
  • the patient interface VR graphics database 663 records the VR graphics data of each type of mask according to the above-mentioned classification marks, and can output the above-mentioned VR image data to the determination module 63, so that the matching determination of the VR data can be performed in the determination module 63.
  • the determination module 63 is a logic execution unit of the selection system for the VR patient interface device. Its function is to receive the data information in the image processing module 62 and the database 66 , determine the size score and VR matching, and further output the results to the display module 64 .
  • the determination module 63 includes a size score determination unit 631 , a VR matching determination unit 632 , and a feature score determination unit 633 .
  • In some embodiments, the size score determination unit 631 simultaneously receives the first data output by the image processing module 62 (in this embodiment, the nose-tip-to-jaw distance b) and the second data output by the database 66 (in this embodiment, the applicable range of dimension b for each mask model), compares the first data with the second data, and determines the size score of that mask model.
  • The size score of a given mask model with respect to a given patient's face is thus obtained in the size score determination unit 631 and output to the display module 64, where it is displayed.
  • In some embodiments, the VR matching determination unit 632 simultaneously receives the VR graphic information output from the database 66 and the image positioning information output from the image processing module 62, and positions the VR graphic on the patient's face in the image in real time. Specifically, it receives the real-time marker positioning of one or more key feature positions of the patient's face output by the image positioning unit 622 (such as but not limited to the eyes, nose tip, and philtrum) and the VR graphic information of the mask output by the patient interface VR graphics database, matches and positions the VR graphic against the marker positioning, and thereby positions the mask's VR graphic on the patient's face in real time.
  • The result is output to the display module 64 in real time, where the wearing effect is displayed in real time. When performing feature matching, each feature can be matched separately, or several features can be matched comprehensively, for example matching the distance and inclination angle between the nose tip and the outermost side of the upper alveolus.
  • In some embodiments, the feature score determination unit 633 is configured to match the user feature information output by the image processing module 62 with the mask model information output by the database 66 and give a corresponding feature score: a higher score indicates that, from the perspective of user features, the mask model suits the user's facial features, and a lower score indicates the opposite.
  • Specifically, the feature score determination unit 633 simultaneously receives the mask type recommendation information output by the user feature recognition unit 621 and the mask model information output by the patient interface matching-size database 662, and matches them.
  • For example, if the user feature recognition unit 621 recognizes that the user's face has no beard and outputs a type-A mask recommendation, and the patient interface matching-size database 662 outputs the "F mask-A type-M size" mask, the feature score determination unit 633 judges the mask suitable for the facial features and gives a high feature score (e.g., 95 points); otherwise a lower feature score is given. Further, the feature score determination unit 633 may output the feature score to the display module 64 for display.
  • the above-mentioned operation module 65 is a link connecting the image processing module 62 , the database 66 and the display module 64 .
  • the user can issue operation instructions to other modules through the operation module 65 .
  • the operating module 65 includes a first operating unit 651 and a second operating unit 652 .
  • the first operation unit 651 can send operation instructions to the image processing module 62 and the display module 64 at the same time, and control the transmission commands sent or received by the image processing module 62 and the display module 64 .
  • In some embodiments, the first operation unit 651 can manage the feature score instruction transmitted from the user feature recognition unit 621 to the feature score determination unit 633 and interrupt the transmission when necessary; the user can thus optionally decide, through the first operation unit 651, whether to use the feature score as a necessary reference condition for the final comprehensive score.
  • The user can also determine, through the first operation unit 651, which facial feature categories the user feature recognition unit 621 recognizes, such as identifying beard features and facial contour features separately or simultaneously, and which features the image positioning unit 622 marks and locates, such as marking certain of the eyes, brow center, philtrum, upper lip, lower jaw, and nose tip.
  • the first operation unit 651 may also include touch screen operation keys presented in the display module 64 and an operation interface with similar functions.
  • the second operating unit 652 can simultaneously send operating instructions to the database 66 and the display module 64, and manage the transmission commands sent or received by the database 66 and the display module 64.
  • In some embodiments, the user can select different models of patient interface devices through the second operation unit 652 and thereby call up the VR graphics of different mask models; the user can also select different matching size references (such as but not limited to the patient's nose-tip-to-jaw distance b) through the second operation unit 652, and can interrupt the transmission from the patient interface matching-size database 662 to the determination module 63 when needed. In this case, the user can optionally decide, through the second operation unit 652, whether to use the size score as a necessary reference condition for the final comprehensive score.
  • the second operation unit 652 may also include touch screen operation keys presented in the display module 64 and an operation interface with similar functions.
  • In some embodiments, the above display module 64 is the interface interaction window of the apparatus, configured to accept the output information of the image processing module 62 and the determination module 63 and the operation instructions of the operation module 65, to display relevant content, and to display prompt information when necessary.
  • Further, the display module 64 includes a real-time image display unit 641, a comprehensive score display unit 642, a VR image display unit 643, and an information display unit 644.
  • the real-time image display unit 641 can receive information such as real-time images and real-time positioning output by the image processing module 62 and display them in real time. That is, the user can see his facial features in real time in the real-time image display unit 641 .
  • the VR image display unit 643 can display the VR image of the mask, and "wear" the VR image of the mask on the patient's face according to the real-time positioning information. It can be understood that the VR image of the mask is still relative to the face when the face moves, which is similar to the real wearing effect.
  • the comprehensive score display unit 642 can receive the size score output by the size score determination unit 631 and/or the feature score output by the feature score determination unit 633, and obtain and display the comprehensive score matching the selected mask with the patient's face through a weighted operation. The higher the composite score, the better the match.
  • In some embodiments, the information display unit 644 is configured to display the performance information of the selected mask (such as but not limited to the mask weight, dead space volume, and applicable pressure), the wearing and usage information of the mask (such as but not limited to the wearing procedure and the methods of use and cleaning), and the fit information of the mask (for example but not limited to, when the comprehensive face-matching score is low, displaying the reason, such as "The mask is too large, a smaller size is recommended" or "The mask is too small, a larger size is recommended"), and can give the user prompt information when necessary (such as but not limited to prompting the user to adjust the face position when the nose-tip-to-jaw distance b is being calculated).
  • As for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant parts, refer to the corresponding description of the method embodiments.
  • In some embodiments, a terminal device is also provided, including a display, a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the above virtual wearing method and achieves the same technical effects, which are not repeated here to avoid repetition.
  • In some embodiments, a readable storage medium is also provided, on which a computer program is stored; when executed by a processor, the computer program implements the processes of the above virtual mask wearing method, such as its steps, and achieves the same technical effects, which are not repeated here to avoid repetition.
  • The embodiments of the present application may be provided as methods, apparatuses, or computer program products. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
  • a computer device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
  • Memory may include non-permanent storage in computer readable media, in the form of random access memory (RAM) and/or nonvolatile memory such as read-only memory (ROM) or flash RAM.
  • Memory is an example of computer readable media.
  • Computer-readable media, including both permanent and non-permanent, removable and non-removable media, can implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape magnetic disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • As defined herein, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each procedure and/or block in the flowcharts and/or block diagrams, and combinations of procedures and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of other programmable data processing terminal equipment to produce a machine, such that the instructions executed by the computer or processor produce means for realizing the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal equipment to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A virtual mask wearing method and apparatus, a terminal device, and a readable storage medium. The method comprises: acquiring a face image of a user (101); determining actual facial feature data according to the face image (102); determining and displaying one or more matching first masks of various models according to the actual facial feature data (103); determining a target mask from the first masks according to a received first input from the user (104); and generating wearing picture information according to the face image and the target mask, and displaying the wearing picture information (105). Not only can multiple first masks of different types and sizes suited to the user's facial features be matched accurately from the user's facial feature data, but the wearing picture information is also generated and displayed using technologies such as virtual reality, simulating the effect of the user wearing the target mask, giving the user an immersive experience and helping the user pick a more satisfactory mask.

Description

Virtual mask wearing method and apparatus, terminal device and readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 31, 2021, with application No. 202111678051.6 and entitled "Virtual mask wearing method and apparatus, terminal device and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of medical equipment, and in particular to a virtual mask wearing method and apparatus, a terminal device, and a readable storage medium.
Background
In recent years, non-invasive positive pressure ventilation has been widely used for obstructive sleep apnea (OSA), chronic obstructive pulmonary disease (COPD), and the like.
As non-invasive ventilation therapy, a mask comprises an interface device on the patient's face that surrounds the patient's nose and mouth and forms a sealed breathing space. By contact mode, masks are usually divided into four types: nasal masks, oral masks, full face masks, and nasal pad masks. The nasal mask covers only the nose, the oral mask covers only the mouth, the oronasal mask covers both the mouth and the nose, and the nasal pad mask is inserted into the nostrils. To fit different face sizes, masks are offered in different sizes such as large, medium, small, and small-wide.
To provide more effective treatment, a mask that fits the patient's face size must be selected. A mask that does not fit the face may leak, reduce wearing comfort, and lower the therapeutic effect.
In the prior art, mask selection is usually done by choosing a type and appearance from photos or videos combined with one's own aesthetic judgment, and then measuring the nose width with a nose-measuring card to pick a fitting model. In practice, this approach deviates considerably and offers no sense of experience; misjudgments frequently occur, along with poor fit at the nose bridge, chin, and other positions, leading to frequent returns and exchanges.
Summary
In view of the above problems, the embodiments of the present application are proposed to provide a virtual mask wearing method and apparatus, a terminal device, and a readable storage medium that overcome, or at least partially solve, the above problems.
According to a first aspect of the present application, a virtual mask wearing method is provided, the method comprising:
acquiring a face image of a user;
determining actual facial feature data according to the face image;
determining and displaying one or more matching first masks of various models according to the actual facial feature data;
determining a target mask from the first masks according to a received first input from the user;
generating wearing picture information according to the face image and the target mask.
According to a second aspect of the present application, a virtual mask wearing apparatus is provided, the apparatus comprising:
an acquisition module configured to acquire the user's face image;
an image processing module configured to determine actual facial feature data according to the face image;
a display module configured to display one or more matching first masks of various models according to the actual facial feature data;
an operation module configured to determine a target mask from the first masks according to a received first input from the user;
a determination module configured to generate wearing picture information according to the face image and the target mask;
the display module being further configured to display the wearing picture information.
According to a third aspect of the present application, a terminal device is provided, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor; when the program or instructions are executed by the processor, the steps of the method of the first aspect are implemented.
According to a fourth aspect of the present application, a readable storage medium is provided, on which a program or instructions are stored; when the program or instructions are executed by a processor, the steps of the method of the first aspect are implemented.
The embodiments of the present application include the following advantages:
A face image of the user is acquired; actual facial feature data is determined according to the face image; one or more matching first masks of various models are determined and displayed according to the actual facial feature data; a target mask is determined from the first masks according to a received first input from the user; and wearing picture information is generated according to the face image and the target mask and displayed. Because the actual facial feature data is determined from the user's real-time face image, it reflects the user's actual facial features, so multiple first masks of different types and sizes suited to those features can be matched accurately. After the user determines the target mask through the first input, the wearing picture information can be generated from the target mask and the user's face image using technologies such as virtual reality (Virtual Reality, VR) and displayed, simulating the effect of the user wearing the target mask, giving the user an immersive experience and helping the user pick a more satisfactory mask.
Brief Description of the Drawings
Fig. 1 is a flow chart of a virtual mask wearing method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the reference classification of masks provided by an embodiment of the present application;
Fig. 3 shows the applicable size standard by which the "F mask-A type" is classified according to the nose-tip-to-jaw distance in an embodiment of the present application;
Fig. 4 is a schematic diagram of the geometric relationship of key feature positions in an embodiment of the present application;
Fig. 5 is a schematic flow diagram of recommending a mask by identifying the beard features of the user's face in an embodiment of the present application;
Fig. 6 is a block diagram of a virtual mask wearing apparatus provided by an embodiment of the present application;
Fig. 7 is a block diagram of another virtual mask wearing apparatus provided by an embodiment of the present application.
Detailed Description
To make the above objectives, features, and advantages of the present application clearer and easier to understand, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
In some embodiments, referring to Fig. 1, a flow chart of a virtual mask wearing method is shown; the method may specifically include the following steps 101 to 105.
The virtual mask wearing method provided by some embodiments of the present application is applied to a terminal device that has a display device such as a screen and that allows the user to select a specific mask model, for example a mask sales terminal.
Step 101: acquire a face image of the user.
In this step, the user is a patient who needs to select a mask. With the user's permission, the user's face image is collected in real time through an image acquisition module; the face image may be a two-dimensional or three-dimensional image, and may be a still photo or a dynamic video. In practical applications, the image acquisition module automatically collects the user's face image in real time, thereby obtaining it.
The image acquisition module may include any known device capable of collecting two-dimensional image information, such as but not limited to cameras, video cameras, infrared imagers, and monitors; it may also include any known device capable of collecting three-dimensional image information, such as but not limited to 3D scanners and stereoscopic imagers. The three-dimensional images may be dynamic or static 3D images collected by such devices, point clouds, or 3D images obtained by parsing 2D data.
Step 102: determine actual facial feature data according to the face image.
In this step, the user's facial features are identified based on the collected face image, so as to obtain the user's actual facial feature data.
In practical applications, the actual facial feature data includes but is not limited to at least one of facial beard feature information, facial contour feature information, and jaw type feature information, and also includes position information and size information of key features. Key features include but are not limited to the eyes, brow center, nose, philtrum, and jaw; the eye size information includes interocular distance information, and the nose size information includes nose width information, nose bridge height information, and nose-tip-to-jaw distance information.
步骤103、根据实际脸部特征数据,确定并显示匹配的各种型号的一个 或多个第一面罩。
该步骤中,因为上述多种脸部特征数据反映了用户的实际脸部情况,而各种型号的面罩适用的脸部特征及脸部特征的尺寸范围可以预先设置,或者通过大数据分析确定,因而根据步骤102所确定的脸部特征数据,可以确定出包括不同类型、不同尺寸且适配用户脸部的各种型号的第一面罩,并将上述各第一面罩通过显示模块进行显示,以供用户选择后展示佩戴效果。
其中,面罩型号由面罩类型、适用面部特征及适用尺寸共同组成。也即每一个面罩型号包括面罩类型、适用面部特征及适用尺寸等信息,该面罩类型包括鼻面罩、口面罩、全脸面罩及鼻垫面罩,适用面部特征包括脸部胡须特征信息、脸部轮廓特征信息、下巴类型特征信息等;适用尺寸可以分为大号、中号、小号、小宽号等不同的尺码,而不同的尺码对应不同的脸部特征数据范围。
在一些实施例中,请参阅图2,示出了面罩的参考分类示意图。如图2所示,F面罩–A类型–L号面罩表示为全脸类型、无胡须且尺寸为大号的面罩型号。
其中,适用尺寸为针对关键特征的适配尺寸范围。例如,若是针对鼻尖到下颚的距离尺寸b的范围,则预先在数据库中收录的为每一款面罩适用患者脸部从鼻尖到下颚的距离尺寸b的范围,具体可参阅图3,示出了本申请实施例中“F面罩-A类型”根据尺寸b进行分类的一部分适用尺寸标准。在确定匹配用户实际脸部特征数据的面罩时,向数据库发出调取指令时,将特定款面罩的尺寸b适用范围信息输出给终端设备。
当然可以理解的,患者接口配定尺寸数据库可以同时收录上述分类标记中每一款面罩适用脸部多个尺寸的范围信息,例如同时收录了适用脸部的鼻宽信息、眼间距信息、鼻梁高度信息等。通过多个尺寸信息范围的共同参考,会使最终面罩设备的选择更准确。
Step 104: determine a target mask from among the first masks according to the received first input of the user.
In this step, the first input is the user's input for selecting a target mask from among the first masks, and may specifically be an operation such as touching, clicking, or cursor-locking the target mask.
Step 105: generate wearing picture information according to the facial image and the target mask, and display the wearing picture information.
In this step, using picture-composition technology, the target mask is added onto the above facial image to generate picture information of the user wearing the target mask; the wearing picture information is then displayed, simulating a picture of the user wearing the target mask and thereby showing the user the wearing effect. This lets the user learn the wearing effect in advance, at the selection stage, and avoids choosing, through misjudgment, a mask whose type and appearance do not suit the user's actual situation.
Some embodiments of the present application include the following advantages:
A facial image of the user is acquired; multiple kinds of facial feature data are determined according to the facial image; one or more matched first masks of various models are determined and displayed according to the facial feature data; a target mask is determined from among the first masks according to the received first input of the user; and wearing picture information is generated according to the facial image and the target mask and displayed. Because the facial feature data is determined from the user's real-time facial image, it reflects the user's actual facial features, so multiple first masks of different types and sizes suited to those features can be matched accurately. Then, after the user determines a target mask through the first input, wearing picture information can be generated and displayed from the target mask and the user's facial image by means of technologies such as virtual reality (VR), thereby simulating the effect of the user wearing the target mask; this gives the user an immersive sense of presence and helps the user pick out a more satisfactory mask.
In some embodiments of the present application, the user photographs his or her face through the terminal device's image acquisition module and sees the facial picture in real time on the display module. The system prompts the user to move the face to squarely face the camera, collects size information, recognizes the patient's facial features, and recommends applicable mask types. The user then selects a mask and experiences VR wearing on the display module. Through the terminal device, the user can intuitively perceive each mask's appearance, size, performance, and manner of wearing and use, and can watch video tutorials and place an order, which is convenient and fast.
It can be understood that the method provided by the embodiments of the present application can take the form of a mobile-phone app, a WeChat plug-in, or the like for customers' convenience, or serve as a standalone device or be used in combination.
In some embodiments, the method provided by the embodiments of the present application further includes steps 106 to 108 after the above step 102.
Step 106: determine the geometric relationships among the positions of the key features according to the actual facial feature data.
In this step, the facial feature data includes position information of key features, and the key features include, but are not limited to, the eyes, the glabella, the nose, the philtrum, the upper jaw, and the chin, so the geometric relationships among the key feature positions can be determined.
Specifically, distances between the key features are calculated from the positioning marks of the key features, and the geometric relationships among the key features are then judged.
Step 107: determine whether the user's face is skewed according to the geometric relationships among the key feature positions.
Because, in a natural frontal state, the positional relationships among the key features of a human face are fixed, that is, the key feature positions form fixed geometric relationships, whether the user's face is currently skewed can be judged from the actual geometric relationships among the key feature positions.
For example, five points are located on the user's face: the left eyeball, right eyeball, glabella, nose tip, and chin, as shown in Fig. 4. The line connecting the left and right eyeballs is defined as l1, and the line connecting the glabella, nose tip, and chin is defined as l2. After the position information of the five points is received, the distance a between the two eyes and the distance b from the nose tip to the chin are automatically calculated, and the geometric relationships of line l1, line l2, size a, and size b are checked. If and only if l1 and l2 are verified to be mutually perpendicular and l2 bisects size a is the patient's face judged to be in a frontal state with no skew or rotation; the value of b is then output, providing data for the subsequent determination of the second match score of fitting masks. When l1 and l2 are not perpendicular, or l2 does not bisect size a, the user is not facing the terminal device's image acquisition module squarely, that is, the user's face is skewed.
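A minimal sketch of this frontal-pose check, assuming pixel-coordinate landmarks and illustrative tolerances (the application specifies no thresholds):

```python
import math

def is_frontal(left_eye, right_eye, glabella, nose_tip, chin,
               angle_tol_deg=3.0, line_tol=0.05):
    """Frontal-pose check: l1 (eye line) perpendicular to l2 (glabella-chin
    line), with l2 passing through the nose tip and bisecting the eye segment."""
    l1 = (right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    l2 = (chin[0] - glabella[0], chin[1] - glabella[1])
    a = math.hypot(*l1)                 # interocular distance a
    n2 = math.hypot(*l2)
    cos_t = (l1[0] * l2[0] + l1[1] * l2[1]) / (a * n2)
    cos_t = max(-1.0, min(1.0, cos_t))  # guard against rounding error
    perpendicular = abs(90.0 - math.degrees(math.acos(cos_t))) <= angle_tol_deg

    # Distance from a point to the infinite line l2 through the glabella.
    nx, ny = -l2[1] / n2, l2[0] / n2    # unit normal of l2
    def dist_to_l2(p):
        return abs((p[0] - glabella[0]) * nx + (p[1] - glabella[1]) * ny)

    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    bisects = dist_to_l2(mid) / a <= line_tol         # l2 bisects size a
    collinear = dist_to_l2(nose_tip) / a <= line_tol  # nose tip lies on l2
    return perpendicular and bisects and collinear
```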
Step 108: when it is determined that the user's face is skewed, generate prompt information to prompt the user to adjust the facial posture.
In this step, when the user's face is skewed, data collection and mask-fitting recommendation are hindered, so prompt information needs to be issued to guide the patient to turn the face to a specific position.
In some embodiments, the above step 103 includes steps 301 to 303.
Step 301: acquire preset correspondences between masks of different models and facial feature data.
In this step, the facial features and facial feature sizes applicable to each mask model are determined in advance through big-data analysis, the facial feature data ranges fitted by each mask model are determined, and the above correspondences are thereby determined.
In some specific implementations, the above facial feature data includes, but is not limited to, facial beard feature information, facial contour feature information, chin type feature information, and position information and size information of key features; the key features include, but are not limited to, one or more of the eyes, the glabella, the nose, the philtrum, the upper jaw, and the chin, and the size information of the eyes includes interocular distance information.
In some specific implementations, the above facial feature data includes size information of the nose, the size information of the nose including nose width information, nose bridge height information, and information on the nose-tip-to-upper-jaw distance and the nose-tip-to-chin distance. The nose-tip-to-upper-jaw distance information is used for selecting a nasal mask or a nasal-pillow mask, this distance specifically referring to the distance between the nose tip and the highest point of the upper jaw; the nose-tip-to-chin distance information is used for selecting a full-face mask, this distance specifically referring to the distance between the nose tip and the lowest point of the chin.
In some specific implementations, the above facial feature data includes size information of the nose, the size information of the nose including nose width information, nose bridge height information, and nose-tip-to-upper-jaw distance information.
In some specific implementations, the above facial feature data includes size information of the nose, the size information of the nose including nose width information, nose bridge height information, and nose-tip-to-chin distance information.
The above correspondences may be stored in a data system managed and maintained by the service provider. The data system may be bound to the above terminal device and downloaded and accessed along with it, for example but not limited to forming the data system's data information and the terminal device's system files together into an APP applied to the terminal device; it may also be stored in the cloud, at base stations, or on hardware storage devices such as hard disks and USB devices, to be retrieved at any time in application.
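One possible in-memory representation of such a correspondence (field names and values are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class MaskModel:
    """One entry of the preset correspondence; all values below are invented."""
    name: str         # e.g. "F mask - type A - size M"
    mask_type: str    # "nasal" / "oral" / "full-face" / "nasal-pillow"
    beard: str        # applicable beard feature, e.g. "none"
    b_range_mm: tuple # applicable nose-tip-to-chin range (lo, hi)

CATALOG = [
    MaskModel("F mask - type A - size M", "full-face", "none", (65.0, 75.0)),
    MaskModel("F mask - type A - size L", "full-face", "none", (75.0, 85.0)),
    MaskModel("N mask - type B - size M", "nasal", "full beard", (60.0, 80.0)),
]
```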
Step 302: determine, according to the preset correspondences, the first masks of various models corresponding to the actual facial feature data.
In this step, because the above preset correspondences determine in advance the matching mask model for each kind of facial feature data, the mask model corresponding to the actual facial feature data, that is, the above first mask, can be determined according to the preset correspondences.
In practical applications, there may be one or more first masks corresponding to the above actual facial feature data.
Step 303: display the first masks.
The first masks determined in step 302 are displayed for the user to select and confirm.
In some specific implementations, the above preset correspondences include a first sub-correspondence between mask types and facial features and a second sub-correspondence between mask sizes and size ranges of key features, and the above step 302 includes steps 3021 to 3023.
Step 3021: determine a first mask type according to the first sub-correspondence and the facial features in the actual facial feature data.
In this step, the user's actual facial features are extracted from the actual facial feature data, and the fitting mask type, that is, the above first mask type, is determined based on those features in combination with the first sub-correspondence.
Step 3022: determine a first mask size according to the second sub-correspondence and the size information of the key features in the actual facial feature data.
In this step, the size information of the user's key features is extracted from the actual facial feature data, and the fitting mask size, that is, the above first mask size, is determined based on that size information in combination with the second sub-correspondence.
Step 3023: determine the first mask according to the first mask type and the first mask size.
In this step, because the first mask type determines the specific type of the mask and the first mask size defines its specific dimensions, combining the two determines a mask of a specific model, that is, the above first mask matching the user's actual facial feature data.
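Continuing the hypothetical CATALOG sketch above, steps 3021 to 3023 could reduce to a type-and-size filter:

```python
def first_masks(catalog, recommended_type, b_mm):
    """Keep models whose type matches and whose b range contains the measurement."""
    return [
        m for m in catalog
        if m.mask_type == recommended_type
        and m.b_range_mm[0] <= b_mm <= m.b_range_mm[1]
    ]

# Example: full-face candidates for a 70 mm nose-tip-to-chin distance.
candidates = first_masks(CATALOG, "full-face", 70.0)
```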
In some specific implementations, the method provided by the embodiments of the present application further includes steps 3024 to 3027 after the above step 3023.
Step 3024: determine a first match score of each first mask according to the actual facial feature data and the first sub-correspondence.
In this step, because the first sub-correspondence determines the correspondence between mask types and facial features, the corresponding match score, that is, the above first match score, can be determined from the user's actual facial features in the actual facial feature data. A higher score indicates that, from the perspective of user features, the mask model is judged suitable for the user's facial features; a lower score indicates that it is judged not very suitable for them.
For example, if the actual facial feature data identifies the user's face as beard-free and a type-A mask is recommended, and the data system, such as a database, outputs the "F mask - type A - size M" model, the mask is judged suitable for these facial features and given a high feature score, for example 95 points; this feature score is the above first match score.
Step 3025: determine a second match score of each first mask according to the actual facial feature data and the second sub-correspondence.
In this step, because the second sub-correspondence determines the correspondence between mask sizes and size ranges of key features, the corresponding match score, that is, the above second match score, can be determined from the size information of the user's key features in the actual facial feature data. A higher score indicates that, from the perspective of the user's facial dimensions, the mask model is judged suitable for the user's face; a lower score indicates that it is judged not very suitable for the user's facial dimensions.
For example, when image analysis outputs b = 70 mm, that is, the user's nose-tip-to-chin distance is determined to be 70 mm, and the database outputs "F mask - type A - size M", it can readily be seen from Fig. 3 that b = 70 mm lies within the applicable b range of "F mask - type A - size M", so the size score of this mask for the user's face, that is, the second match score, is judged high, for example 95 points.
Similarly, if image analysis outputs b = 88 mm and the database outputs "F mask - type A - size M", then since b = 88 mm is not within the applicable b range of "F mask - type A - size M", the size score of this mask for the user's face is judged low, for example 30 points. In this way, the size score of a given mask for a given user's face is obtained and can then be displayed.
Step 3026: determine a composite match score of each first mask according to the first match score and the second match score.
In this step, the first match score and the second match score are combined by weighted calculation to obtain a composite score of how well each first mask matches the patient's face, that is, the above composite match score; the higher the composite score, the better the match.
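A toy rendering of steps 3024 to 3026; the in-range and out-of-range score values and the equal weights are assumptions, since the application fixes neither:

```python
def feature_score(recommended_type, model_type):
    """First match score: high when the model's type matches the recommendation."""
    return 95.0 if model_type == recommended_type else 30.0

def size_score(b_mm, b_range):
    """Second match score: high when b falls inside the model's applicable range."""
    lo, hi = b_range
    return 95.0 if lo <= b_mm <= hi else 30.0

def composite_score(first, second, w_feature=0.5, w_size=0.5):
    """Weighted combination of the two scores; equal weights are an assumption."""
    return w_feature * first + w_size * second

# e.g. an out-of-range b pulls the composite score down:
print(composite_score(feature_score("full-face", "full-face"),
                      size_score(88.0, (65.0, 75.0))))  # -> 62.5
```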
Step 3027: display the first match score, the second match score, and the composite match score of each first mask.
In this step, the above match scores are displayed so that the user knows how well each first mask matches his or her face, which facilitates screening and comparison.
In some embodiments, the above step 105 includes steps 501 to 502.
Step 501: apply positioning marks to the facial image according to the position information of the key features.
In this step, one or more items of the position information of key features such as the nose tip, glabella, philtrum, and eyes are marked and located, thereby achieving real-time positioning marks on the facial image; the positioning marks can track and locate in real time as the patient's face moves.
Step 502: position the VR graphic information of the target mask on the facial image according to the positioning marks, and generate the wearing picture information.
In this step, because the positioning marks can mark and locate the key feature positions of the patient's face in real time as the face moves, the VR graphic information of the target mask can be retrieved from the database and then matched and aligned with the positioning marks to generate the wearing picture information; displaying this wearing picture positions the mask VR graphic information on the patient's face in real time. It can be understood that the mask VR graphic information can move along with the patient's face, achieving a "real-time wearing" VR effect.
In practical applications, one or more key feature positions of the user's face are marked and located in real time, the VR graphic information of the mask output by the database is received, and the mask VR graphic information is then matched and aligned with the positioning marks so that it is positioned on the patient's face in real time. The matching may use individual features separately or several features jointly, for example matching the spacing and inclination angle between the nose tip and the outermost point of the upper alveolar ridge.
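One plausible way to realize this alignment (an assumption, not the disclosed method) is to estimate a similarity transform from the mask asset's anchor points to the detected landmarks and warp the mask image onto the frame:

```python
import cv2
import numpy as np

def overlay_mask(frame, mask_rgba, anchors_mask, anchors_face):
    """Warp a transparent mask image onto the frame.

    anchors_mask / anchors_face: matching 2D points (at least 2 pairs), e.g.
    the nose-tip and chin positions in the mask asset and in the face image.
    """
    src = np.float32(anchors_mask)
    dst = np.float32(anchors_face)
    # Rotation + uniform scale + translation estimated from the point pairs.
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    h, w = frame.shape[:2]
    warped = cv2.warpAffine(mask_rgba, M, (w, h))
    alpha = warped[:, :, 3:4] / 255.0  # alpha channel drives the blending
    out = frame.astype(np.float32)
    out = out * (1 - alpha) + warped[:, :, :3].astype(np.float32) * alpha
    return out.astype(np.uint8)
```

Re-estimating the transform every frame from the live positioning marks would keep the mask image stationary relative to the moving face, matching the "real-time wearing" effect described above.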
In practical applications, the above wearing picture is displayed through the terminal device's interface interaction window, which is configured to output information, display related content, and receive operation instructions input by the user, and to display prompt information when necessary. For example, the acquired real-time facial image of the user, the real-time positioning, and other information can be displayed in real time so that the user can see his or her facial features in real time; the mask VR image can be displayed and, based on the real-time positioning information, virtually worn on the patient's face. It can be understood that as the face moves, the mask VR image remains stationary relative to the face, similar to a real wearing effect.
In some embodiments, the above interface interaction window is further configured to display performance information of the selected target mask, such as but not limited to the mask's weight, dead-space volume, and applicable pressure; it may also display wearing and usage information of the mask, such as but not limited to the wearing procedure and the methods of use and cleaning; it may also display fit information of the mask, such as but not limited to showing, when the composite facial match score is low, the reason for the low score, e.g. "the mask is too large, a smaller model is recommended" or "the mask is too small, a larger model is recommended"; it may also give the user prompt information when necessary, such as but not limited to a prompt to adjust the facial position when calculating the nose-tip-to-chin distance; and it may also display dynamic prompt information, for example but not limited to the following: when the mask selected by the user is a nasal mask or nasal-pillow mask and the user opens the mouth during use, dynamic lines of air venting from the user's mouth are displayed to remind the user that the mouth must not be opened when using a nasal mask or nasal-pillow mask.
In some embodiments, the method provided by the embodiments of the present application may also receive a second input of the user to select different fitting-size references, such as but not limited to the patient's nose-tip-to-chin distance b, and may, when needed, interrupt the database's transmission of the applicable size information, that is, the above second sub-correspondence, to the terminal device; in this case the user can choose whether to use the size score as a necessary reference condition for the final composite match score.
In some embodiments, the method provided by the embodiments of the present application may also receive a third input of the user to manage the feature-score instruction, and may, when needed, interrupt the determination of the above first match score; in this case the user can choose, through the third input operation, whether to use the first match score as a necessary reference condition for the final composite match score. Meanwhile, the user can also determine, through the third input, the specific categories of facial features to recognize, for example recognizing beard features and face-shape features separately or simultaneously; the user can also determine, through the third input operation, the marking and positioning of key features, for example marking and locating certain features among the eyes, glabella, philtrum, upper lip, chin, and nose tip.
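By way of illustration, the effect of these toggles on the composite score could look as follows (function and flag names are invented):

```python
def composite_with_toggles(first_score, second_score,
                           use_feature=True, use_size=True):
    """Average only the scores the user has kept enabled via the inputs;
    the equal-weight average is an assumption, not a disclosed weighting."""
    parts = []
    if use_feature:
        parts.append(first_score)
    if use_size:
        parts.append(second_score)
    return sum(parts) / len(parts) if parts else 0.0
```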
Referring to Fig. 5, a schematic flowchart of mask recommendation by recognizing the user's facial beard features in an embodiment of the present application is shown.
As shown in Fig. 5, in step 221, the image information captured by the image acquisition module is received;
in step 222, a selection procedure is executed to recognize beard features;
in step 223, a mask of the corresponding model is matched and recommended according to the specific beard features, and step 224 is then entered; for example, if the user's face is identified as beard-free, a type-A mask is recommended; if it is identified as having a full beard, a type-B mask is recommended; if it is identified as having a goatee, a type-C mask is recommended;
in step 224, the recommendation result is output to the display module, which receives and displays it.
Of course, when the patient's facial contour features, chin type features, and the like are recognized, there are discrimination and recommendation processes similar to the above, which are not repeated here.
It can be understood that multiple features of the patient's face can be recognized simultaneously, such as but not limited to beard features, facial contour features, and chin type features, with the recommended mask type given after synthesis.
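A toy illustration of steps 221 to 224, extended to combine several recognizers; the beard-to-type mapping follows the examples above, while the majority-vote rule and the default value are assumptions:

```python
from collections import Counter

# From the examples above: no beard -> type A, full beard -> type B, goatee -> type C.
BEARD_TO_TYPE = {"none": "A", "full beard": "B", "goatee": "C"}

def recommend_type(beard, other_votes=()):
    """Combine the beard-based vote with votes from other recognizers
    (contour, chin type) by simple majority; defaulting to "A" for an
    unrecognized beard label is an assumption."""
    votes = [BEARD_TO_TYPE.get(beard, "A"), *other_votes]
    return Counter(votes).most_common(1)[0][0]

print(recommend_type("goatee", other_votes=("C", "A")))  # -> "C"
```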
In some embodiments, referring to Fig. 6, a block diagram of a virtual mask wearing apparatus is shown; the apparatus may specifically include:
an image acquisition module 61, configured to acquire a facial image of a user;
an image processing module 62, configured to determine actual facial feature data according to the facial image;
a determination module 63, configured to determine one or more matched first masks of various models according to the actual facial feature data;
a display module 64, configured to display the first masks;
an operation module 65, configured to determine a target mask from among the first masks according to a received first input of the user;
the determination module 63 being further configured to generate wearing picture information according to the facial image and the target mask;
the display module 64 being further configured to display the wearing picture information.
Some embodiments of the present application include the following advantages:
A facial image of the user is acquired; actual facial feature data is determined according to the facial image; one or more matched first masks of various models are determined and displayed according to the actual facial feature data; a target mask is determined from among the first masks according to a received first input of the user; and wearing picture information is generated according to the facial image and the target mask and displayed. Because the actual facial feature data is determined from the user's real-time facial image, it reflects the user's actual facial features, so multiple first masks of different types and sizes suited to those features can be matched accurately. Then, after the user determines a target mask through the first input, wearing picture information can be generated and displayed from the target mask and the user's facial image by means of technologies such as virtual reality (VR), thereby simulating the effect of the user wearing the target mask; this gives the user an immersive sense of presence and helps the user pick out a more satisfactory mask.
In some embodiments, the determination module 63 is specifically configured to acquire preset correspondences between masks of different models and facial feature data, and to determine, according to the preset correspondences, the first masks of various models corresponding to the actual facial feature data.
In some embodiments, the facial feature data includes, but is not limited to, at least one of facial beard feature information, facial contour feature information, and chin type feature information, and further includes position information and size information of key features; the key features include, but are not limited to, one or more of the eyes, the glabella, the nose, the philtrum, the upper jaw, and the chin, and the size information of the eyes includes interocular distance information;
the size information of the nose includes nose width information, nose bridge height information, and information on the nose-tip-to-upper-jaw distance and the nose-tip-to-chin distance; or the size information of the nose includes nose width information, nose bridge height information, and nose-tip-to-upper-jaw distance information; or the size information of the nose includes nose width information, nose bridge height information, and nose-tip-to-chin distance information.
In some embodiments, the image processing module 62 includes a size-data processing unit, the size-data processing unit including:
a first determination sub-unit, configured to determine, after the facial feature data is determined according to the facial image, the geometric relationships among the positions of the key features according to the actual facial feature data;
a second determination sub-unit, configured to determine whether the user's face is skewed according to the geometric relationships among the positions of the key features;
a prompt sub-unit, configured to generate, when it is determined that the user's face is skewed, prompt information to prompt the user to adjust the facial posture.
In some embodiments, the image processing module 62 further includes:
an image positioning unit, configured to apply positioning marks to the facial image according to the position information of the key features;
the determination module 63 includes:
a VR matching determination unit, configured to position the VR graphic information of the target mask on the facial image according to the positioning marks, and generate the wearing picture information.
In some embodiments, the preset correspondences in the apparatus include a first sub-correspondence between mask types and facial features, and a second sub-correspondence between mask sizes and size ranges of key features;
the determination module 63 includes:
a type determination unit, configured to determine a first mask type according to the first sub-correspondence and the facial features in the actual facial feature data;
a size determination unit, configured to determine a first mask size according to the second sub-correspondence and the size information of the key features in the actual facial feature data;
a mask determination unit, configured to determine the first mask according to the first mask type and the first mask size.
In some embodiments, the determination module 63 further includes:
a feature-score determination unit, configured to determine, after the first mask is determined according to the first mask type and the first mask size, a first match score of each first mask according to the actual facial feature data and the first sub-correspondence;
a size-score determination unit, configured to determine a second match score of each first mask according to the actual facial feature data and the second sub-correspondence;
a composite-score determination unit, configured to determine a composite match score of each first mask according to the first match score and the second match score;
the display module being further configured to display the first match score, the second match score, and the composite match score of each first mask.
Referring to Fig. 7, a block diagram of another virtual mask wearing apparatus provided by an embodiment of the present application is shown.
As shown in Fig. 7, the above image processing module 62 serves to receive the image information captured by the image acquisition module 61 and to recognize, locate, and process specific size data. The image processing module 62 further includes a user feature recognition unit 621, an image positioning unit 622, and a size-data processing unit 623.
Specifically, the user feature recognition unit 621 can recognize features of the patient's face, such as but not limited to facial beard features (whether there is a beard, beard distribution and density, etc.), facial contour features (round face or oval face, facial defects, etc.), and key feature positions (nose tip, glabella, philtrum, eyes, etc.). The user feature recognition unit 621 can recognize the patient's facial features and output to the display module 64 the mask types fitting those features.
The above image positioning unit 622 can receive the key feature positions recognized by the user feature recognition unit 621 (such as the nose tip, glabella, philtrum, and eyes) and mark and locate one or more of them. Moreover, the image positioning unit 622 is configured to mark and locate in real time as the user's face moves. It can be understood that when the patient's face moves, the image positioning unit 622 marks and locates the key feature positions of the patient's face in real time and outputs them to the display module 64, providing a positioning basis for the subsequent VR image display. Further, the image positioning unit 622 can output the marked key feature positions to the size-data processing unit 623.
The above size-data processing unit 623 is configured to receive one or more items of key feature position information of the user's face from the image positioning unit 622, calculate the distance data between specific marks, and then pass the distance data to the determination module 63.
The above database 66 is the data system for mask selection, managed and maintained by the service provider. It may be bound to the system data and downloaded and accessed along with the above virtual mask wearing apparatus, for example but not limited to forming the data information of the database 66 and the system files of the above selection apparatus together into an APP applied to mobile phones; it may also be stored in the cloud, at base stations, or on hardware storage devices (hard disks, USB devices), etc., to be retrieved at any time in application. Further, the above database 66 consists of patient-interface device 3D models 661, a patient-interface fitting-size database 662, a patient-interface VR graphics database 663, and the like. The patient-interface device 3D models 661 are all the 3D models of selectable masks the service provider can offer, already classified and labeled by the manner of contact with the face, applicable facial features, applicable size, etc.
The patient-interface fitting-size database 662 records, for each mask under the above classification labels, range information of a certain applicable facial dimension, and outputs that range information to the determination module 63, providing the second data for the determination of the size score of fitting masks in the determination module 63. For example, in this embodiment, the patient-interface fitting-size database 662 records, for each mask, the applicable range of the patient's nose-tip-to-chin distance b, and when the operation module issues a retrieval instruction, outputs the applicable range information of size b for a specific mask to the determination module 63.
The patient-interface VR graphics database 663 records the VR graphic data of each mask under the above classification labels, and can output the VR graphic data to the determination module 63 for VR data matching determination in the determination module 63.
The above determination module 63 is the logic execution unit of the VR patient-interface device selection system. Its role is to receive the data information from the image processing module 62 and the database 66, perform size-score determination and VR matching determination, and further output the results to the display module 64. Specifically, the determination module 63 includes a size-score determination unit 631, a VR matching determination unit 632, and a feature-score determination unit 633. In this embodiment, the size-score determination unit 631 simultaneously receives the first data output by the image processing module 62 (for example, in this embodiment, the nose-tip-to-chin distance b) and the second data output by the database 66 (for example, in this embodiment, the range of size b applicable to each mask), compares the first data with the second data, and thereby determines the size score of the mask. In this way, the size score of a given mask for a given patient's face is obtained in the size-score determination unit 631 and output to the display module 64, where it is displayed.
The VR matching determination unit 632 serves to simultaneously receive the VR graphic information output by the database 66 and the image positioning information output by the image processing module 62, and to position the VR graphic information on the patient's face in the image in real time. Specifically, the VR matching determination unit 632 receives the real-time positioning marks of one or more key feature positions of the patient's face output by the image positioning unit 622 (such as but not limited to the eyes, nose tip, and philtrum) and the VR graphic information of the mask output by the patient-interface VR graphics database, then matches and aligns the mask VR graphic information with the positioning marks, thereby positioning the mask VR graphic information on the patient's face in real time. The result is output to the display module 64 in real time, where the above wearing effect is displayed in real time. In the feature matching, individual features may be matched separately or several features may be matched jointly, for example matching the spacing and inclination angle between the nose tip and the outermost point of the upper alveolar ridge.
The feature-score determination unit 633 serves to match the user feature information output by the image processing module 62 against the mask model information output by the database 66 and give a corresponding feature score; a higher score indicates that, from the perspective of user features, the mask model is judged suitable for the user's facial features, and a lower score indicates that it is judged not very suitable. Specifically, the feature-score determination unit 633 simultaneously receives the mask type recommendation information output by the user feature recognition unit 621 and the mask model information output by the patient-interface fitting-size database 662 and matches them. For example, if the user feature recognition unit 621 identifies the user's face as beard-free and outputs a recommendation for a type-A mask, and the patient-interface fitting-size database 662 outputs the "F mask - type A - size M" model, the feature-score determination unit 633 judges the mask suitable for those facial features and gives a high feature score (e.g. 95 points); otherwise, it gives a low feature score. Further, the feature-score determination unit 633 can output the feature score to the display module 64, where it is displayed.
The above operation module 65 is the link connecting the image processing module 62, the database 66, and the display module 64. The user can issue operation instructions to the other modules through the operation module 65. Further, the operation module 65 includes a first operation unit 651 and a second operation unit 652. The first operation unit 651 can send operation instructions to both the image processing module 62 and the display module 64, and manage the transmission commands they send or receive. Further, for example, the first operation unit 651 can manage the feature-score instruction conveyed by the user feature recognition unit 621 to the feature-score determination unit 633, and can interrupt that transmission when needed; in this case the user can optionally decide, through the first operation unit 651, whether to use the feature score as a necessary reference condition for the final composite score. Meanwhile, the user can also decide, through the first operation unit 651, the categories of facial features the user feature recognition unit 621 recognizes, for example recognizing beard features and face-shape features separately or simultaneously; the user can also decide, through the first operation unit 651, the marking and positioning performed by the image positioning unit 622, for example marking and locating certain features among the eyes, glabella, philtrum, upper lip, chin, and nose tip. Further, the first operation unit 651 may also include touch-screen operation keys presented in the display module 64 and operation interfaces with similar functions.
Likewise, the second operation unit 652 can send operation instructions to both the database 66 and the display module 64, and manage the transmission commands they send or receive. Further, the user can select patient-interface devices of different models through the second operation unit 652, thereby retrieving the VR graphics of masks of different models; the user can also select different fitting-size references through the second operation unit 652 (such as but not limited to the patient's nose-tip-to-chin distance b), and can interrupt, when needed, the transmission from the patient-interface fitting-size database 662 to the determination module 63; in this case the user can optionally decide, through the second operation unit 652, whether to use the size score as a necessary reference condition for the final composite score. Further, the second operation unit 652 may also include touch-screen operation keys presented in the display module 64 and operation interfaces with similar functions.
The above display module 64 is the apparatus's interface interaction window, configured to receive the output information of the image processing module 62 and the determination module 63 and the operation instructions of the operation module 65, to display related content, and to display prompt information when necessary. Further, the display module 64 includes a real-time image display unit 641, a composite-score display unit 642, a VR image display unit 643, and an information display unit 644. The real-time image display unit 641 can receive the real-time image, real-time positioning, and other information output by the image processing module 62 and display them in real time; that is, the user can see his or her facial features in real time in the real-time image display unit 641. The VR image display unit 643 can display the mask VR image and, based on the real-time positioning information, "wear" the mask VR image on the patient's face. It can be understood that as the face moves, the mask VR image remains stationary relative to the face, similar to a real wearing effect. The composite-score display unit 642 can receive the size score output by the size-score determination unit 631 and/or the feature score output by the feature-score determination unit 633, and obtain and display, through weighted calculation, a composite score of how well the selected mask matches the patient's face; the higher the composite score, the better the match.
The information display unit 644 is configured to display performance information of the selected mask (such as but not limited to the mask's weight, dead-space volume, and applicable pressure), wearing and usage information of the mask (such as but not limited to the wearing procedure and the methods of use and cleaning), and fit information of the mask (such as but not limited to showing, when the composite facial match score is low, the reason for the low score: "the mask is too large, a smaller model is recommended" or "the mask is too small, a larger model is recommended"), and can give the user prompt information when necessary (such as but not limited to a prompt to adjust the facial position when calculating the nose-tip-to-chin distance b).
As the apparatus embodiments are substantially similar to the method embodiments, they are described relatively simply; for relevant points, refer to the description of the method embodiments.
In some embodiments of the present application, a terminal device is further provided, including a display, a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the virtual mask wearing method and can achieve the same technical effects, which are not repeated here to avoid repetition.
In some embodiments of the present application, a readable storage medium is further provided, on which a computer program is stored; when executed by a processor, the computer program implements the processes of the steps of the virtual mask wearing method and can achieve the same technical effects, which are not repeated here to avoid repetition.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts among the embodiments, reference may be made to one another.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include non-permanent storage in computer-readable media, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The virtual mask wearing method and virtual mask wearing apparatus provided by the present application have been introduced in detail above. Specific examples are used herein to elaborate the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and application scope based on the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (12)

  1. A virtual mask wearing method, characterized in that the method comprises:
    acquiring a facial image of a user;
    determining actual facial feature data according to the facial image;
    determining and displaying one or more matched first masks of various models according to the actual facial feature data;
    determining a target mask from among the first masks according to a received first input of the user;
    generating wearing picture information according to the facial image and the target mask, and displaying the wearing picture information.
  2. The method according to claim 1, characterized in that, after the actual facial feature data is determined according to the facial image, the method further comprises:
    determining geometric relationships among positions of key features according to the actual facial feature data;
    determining whether the user's face is skewed according to the geometric relationships among the positions of the key features;
    when it is determined that the user's face is skewed, generating prompt information to prompt the user to adjust the facial posture.
  3. The method according to claim 2, characterized in that determining and displaying the matched first masks of various models according to the actual facial feature data comprises:
    acquiring preset correspondences between masks of different models and facial feature data;
    determining, according to the preset correspondences, the first masks of various models corresponding to the actual facial feature data;
    displaying the first masks.
  4. The method according to claim 3, characterized in that the preset correspondences comprise a first sub-correspondence between mask types and facial features, and a second sub-correspondence between mask sizes and size ranges of key features;
    the determining, according to the preset correspondences, the first masks of various models corresponding to the actual facial feature data comprises:
    determining a first mask type according to the first sub-correspondence and the facial features in the actual facial feature data;
    determining a first mask size according to the second sub-correspondence and the size information of the key features in the actual facial feature data;
    determining the first mask according to the first mask type and the first mask size.
  5. The method according to claim 4, characterized in that, after the first mask is determined according to the first mask type and the first mask size, the method further comprises:
    determining a first match score of each first mask according to the actual facial feature data and the first sub-correspondence;
    determining a second match score of each first mask according to the actual facial feature data and the second sub-correspondence;
    determining a composite match score of each first mask according to the first match score and the second match score;
    displaying the first match score, the second match score, and the composite match score of each first mask.
  6. The method according to claim 3, characterized in that the facial feature data comprises size information of a nose, the size information of the nose comprising nose width information, nose bridge height information, and information on a distance from a nose tip to an upper jaw and on a distance from the nose tip to a chin;
    or the facial feature data comprises size information of a nose, the size information of the nose comprising nose width information, nose bridge height information, and information on a distance from a nose tip to an upper jaw;
    or the facial feature data comprises size information of a nose, the size information of the nose comprising nose width information, nose bridge height information, and information on a distance from a nose tip to a chin.
  7. The method according to claim 6, characterized in that the facial feature data comprises at least one of facial beard feature information, facial contour feature information, and chin type feature information, and further comprises position information and size information of the key features; the key features comprise one or more of eyes, a glabella, the nose, a philtrum, the upper jaw, and the chin, and the size information of the eyes comprises interocular distance information.
  8. The method according to claim 6, characterized in that generating the wearing picture information according to the facial image and the target mask comprises:
    applying positioning marks to the facial image according to the position information of each of the key features;
    positioning VR graphic information of the target mask on the facial image according to the positioning marks, and generating the wearing picture information.
  9. The method according to claim 6, characterized in that acquiring the facial image of the user comprises:
    automatically acquiring the facial image of the user.
  10. A virtual mask wearing apparatus, characterized in that the apparatus comprises:
    an image acquisition module, configured to acquire a facial image of a user;
    an image processing module, configured to determine actual facial feature data according to the facial image;
    a determination module, configured to determine one or more matched first masks of various models according to the actual facial feature data;
    a display module, configured to display the first masks;
    an operation module, configured to determine a target mask from among the first masks according to a received first input of the user;
    the determination module being further configured to generate wearing picture information according to the facial image and the target mask;
    the display module being further configured to display the wearing picture information.
  11. A terminal device, comprising a display, a processor, a memory, and a program or instructions stored on the memory and executable on the processor, characterized in that the program or instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 9.
  12. A readable storage medium storing a program or instructions, characterized in that the program or instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 9.
PCT/CN2022/137573 2021-12-31 2022-12-08 Virtual mask wearing method and apparatus, terminal device and readable storage medium WO2023124878A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111678051.6A CN114419703A (zh) 2021-12-31 2021-12-31 Virtual mask wearing method and apparatus, terminal device and readable storage medium
CN202111678051.6 2021-12-31

Publications (1)

Publication Number Publication Date
WO2023124878A1 true WO2023124878A1 (zh) 2023-07-06

Family

ID=81271577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137573 WO2023124878A1 (zh) Virtual mask wearing method and apparatus, terminal device and readable storage medium

Country Status (2)

Country Link
CN (1) CN114419703A (zh)
WO (1) WO2023124878A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419703A (zh) * 2021-12-31 2022-04-29 Beijing Yihe Jiaye Medical Technology Co., Ltd. Virtual mask wearing method and apparatus, terminal device and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150293382A1 (en) * 2014-04-09 2015-10-15 Pro Fit Optix, Inc. Method and System for Virtual Try-On and Measurement
CN106845379A * 2017-01-12 2017-06-13 Duan Yuanwen Image display method and device
CN106999742A * 2014-06-20 2017-08-01 Honeywell International Inc. Kiosk for customizing facial breathing masks
CN109978655A * 2019-01-14 2019-07-05 Minghao Technology (Beijing) Co., Ltd. Virtual spectacle frame selection and fitting method and system
CN112084398A * 2020-07-28 2020-12-15 Beijing Kuangshi Technology Co., Ltd. Accessory recommendation method, virtual accessory try-on method and apparatus, and electronic device
CN114419703A (zh) 2021-12-31 2022-04-29 Beijing Yihe Jiaye Medical Technology Co., Ltd. Virtual mask wearing method and apparatus, terminal device and readable storage medium

Also Published As

Publication number Publication date
CN114419703A (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
JP6363608B2 (ja) System for accessing a patient's facial data
US11103664B2 (en) Methods and systems for providing interface components for respiratory therapy
AU2015348151B2 (en) Real-time visual feedback for user positioning with respect to a camera and a display
EP3513761B1 (en) 3d platform for aesthetic simulation
RU2636682C2 (ru) Patient interface identification system
CN104346834B (zh) Information processing device and position designation method
US20180092595A1 (en) System and method for training and monitoring administration of inhaler medication
Baysal et al. Reproducibility and reliability of three-dimensional soft tissue landmark identification using three-dimensional stereophotogrammetry
JP2008009769A (ja) Face authentication system and face authentication method
WO2017177259A1 (en) System and method for processing photographic images
WO2023124878A1 (zh) Virtual mask wearing method and apparatus, terminal device and readable storage medium
CN217718730U (zh) Device for determining the mask model worn by a patient
JP3590321B2 (ja) Person verification system
EP3687614A1 (en) Providing a mask for a patient based on a temporal model generated from a plurality of facial scans
WO2019044123A1 (ja) Information processing device, information processing method, and recording medium
US20230072470A1 (en) Systems and methods for self-administered sample collection
JP2017016418A (ja) Hairstyle suggestion system
US20240170139A1 (en) Systems and methods of adaptively generating facial device selections based on visually determined anatomical dimension data
CN112752537A (zh) Method for determining at least one geometric-morphological parameter of a subject in a natural posture in order to determine vision correction equipment
TW201905836A (zh) Traditional Chinese medicine system realizing acupoint visualization with AR technology, and method thereof
TW202321986A (zh) Processing system and method for online eyeglasses try-on
Busser 3D SCAN (A) HEAD: Design of an instantaneous 3D Head Scanner for ultra-personalized headwear
AU2022294064A1 (en) Method for fitting virtual glasses
NZ762180A (en) Methods and systems for providing interface components for respiratory therapy
NZ762180B2 (en) Methods and systems for providing interface components for respiratory therapy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22914133

Country of ref document: EP

Kind code of ref document: A1