WO2022171020A1 - Image display method, apparatus, device and medium - Google Patents

Image display method, apparatus, device and medium

Info

Publication number
WO2022171020A1
Authority
WO
WIPO (PCT)
Prior art keywords
real-time
image
target
body part
Prior art date
Application number
PCT/CN2022/074871
Other languages
English (en)
French (fr)
Inventor
林高杰
罗宇轩
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Priority to US18/264,886 priority Critical patent/US20240054719A1/en
Publication of WO2022171020A1 publication Critical patent/WO2022171020A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/50 Depth or shape recovery
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/40 Hidden part removal
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm

Definitions

  • the present disclosure relates to the field of multimedia technologies, and in particular, to an image display method, apparatus, device, and medium.
  • the present disclosure provides an image display method, apparatus, device and medium.
  • the present disclosure provides an image display method, including:
  • the composite image is displayed in real time.
  • the composite image is an image obtained by superimposing the target 3D image on the target body part in the real-time image.
  • the target 3D image is obtained by rendering the 3D model of the wearable component according to the real-time posture of the target body part and the real-time unoccluded area.
  • the real-time pose and real-time unoccluded area are determined from the real-time image.
  • an image display device comprising:
  • an acquisition unit configured to acquire a real-time image of the target body part
  • the display unit is configured to display a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, wherein the target three-dimensional image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture of the target body part and the real-time unoccluded area, and the real-time posture and real-time unoccluded area are determined according to the real-time image.
  • an image display device, comprising a processor and a memory, wherein the processor is configured to read executable instructions from the memory and execute the executable instructions to implement the image display method described in the first aspect.
  • the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and when the computer program is executed by a processor, enables the processor to implement the image display method described in the first aspect.
  • the image display method, apparatus, device, and medium of the embodiments of the present disclosure can, after acquiring the real-time image of the target body part, display in real time a composite image obtained by superimposing the target three-dimensional image on the target body part in the real-time image, wherein the target three-dimensional image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture and real-time unoccluded area of the target body part.
  • In this way, the purpose of automatically adding the three-dimensional decoration effect of the wearable component to the real-time image is achieved, and because the posture and occlusion of the body part wearing the decoration are considered in the process of adding the three-dimensional decoration effect, the integration of the added three-dimensional decoration effect with the original image is improved and glitch frames that break the illusion are avoided, thereby improving the user's experience.
  • FIG. 1 is a schematic flowchart of an image display method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a composite image provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of another composite image provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a renderable image area provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of another renderable image area provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of still another renderable image area provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of a blocking area provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of still another renderable image area provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a preset occlusion model provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of yet another composite image provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of still another composite image provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure.
  • the term “including” and variations thereof are open-ended inclusions, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • Augmented Reality (AR) technology is a technology that can calculate the position and angle of the camera in real time and add corresponding images, videos, or three-dimensional models; it can combine the virtual world with the real world on the screen and allow them to interact.
  • AR technology can be used to add decorative effects to some body parts of the user in real time on each frame of images. For example, adding a helmet effect to the user's head, adding a watch effect to the user's wrist, etc.
  • the decoration effect covers the entire body part, and if there is another obstruction over the body part, the decoration effect also covers the obstruction. For example, when adding a helmet effect to the user's head, if the position of the head where the helmet effect is added is blocked by the user's hand, the helmet effect will also cover the user's hand. This problem makes the decoration effect blend poorly with the original image and often produces glitch frames that break the illusion, which makes it difficult for the user to stay immersed while using the decoration effect and reduces the user's experience.
  • Compared with the head, wrist, and the like, the occlusion relationships of the fingers are more complex and difficult to simulate, which makes the decorative effect blend even less well with the original image.
  • For example, while the user is trying on an AR ring, it is more likely that the ring fits poorly or appears embedded in an adjacent finger.
  • the embodiments of the present disclosure provide an image display method, apparatus, device and medium that can take into account the posture and occlusion of the body part wearing the decoration when adding the decoration effect.
  • the image display method provided by the embodiment of the present disclosure is first described below with reference to FIG. 1 .
  • the image display method may be performed by an electronic device.
  • electronic devices may include mobile phones, tablet computers, desktop computers, notebook computers, vehicle-mounted terminals, wearable electronic devices, all-in-one computers, smart home devices, and other devices with communication functions, and may also be virtual machines or devices simulated by simulators.
  • FIG. 1 shows a schematic flowchart of an image display method provided by an embodiment of the present disclosure.
  • the image display method includes the following steps.
  • When the user wants to add a decoration effect to the target body part in the real-time image, the electronic device can be made to acquire a real-time image of the target body part.
  • the electronic device may capture images through a camera to obtain a real-time image of the target body part.
  • the electronic device may receive images sent by other devices to obtain real-time images of the target body part.
  • the electronic device may locally read an image selected by the user among the local images to obtain a real-time image of the target body part.
  • a decoration effect can be added to the target body part in the real-time image.
  • the target body part may be any body part in the human body preset according to actual application requirements, which is not limited here.
  • the target body part may include a first granularity of body parts, such as any of the head, torso, upper extremities, hands, lower extremities, feet, and the like.
  • the target body part may also include a body part with a second granularity that is finer than the first granularity, such as any one of ear, neck, wrist, finger, ankle and the like.
  • the number of target body parts may also be any number preset according to actual application requirements, which is not limited here.
  • the number of target body parts may be 1, 2, 3, etc.
  • the electronic device after acquiring the real-time image of the target body part, can add decoration effects to the target body part in the real-time image in real time, and display the effect of the decoration on the target body part in the real-time image in real time.
  • the electronic device may determine the real-time posture of the target body part according to the real-time image.
  • the real-time pose may be a real-time three-dimensional pose of the target body part.
  • the real-time pose may include the real-time rotational pose of the target body part.
  • the real-time rotational posture of the target body part may include the real-time three-dimensional rotational posture of each joint in the target body part.
  • the electronic device may determine the real-time unoccluded area of the target body part according to the real-time image.
  • the real-time unoccluded area may include an area of the target body part that is not occluded by an occluder.
  • the occluder may include at least one of a non-body part object, a non-target body structure other than the target body structure to which the target body part belongs, and a non-target body part of the same type as the target body part, and may be preset according to actual application requirements.
  • the non-body part object may include at least one of an image background in the real-time image and an object other than a human body that occludes any part region within the target body part.
  • the target body structure may be the body structure to which the target body part belongs, which is preset according to the actual application requirements, which is not limited here.
  • the target body structure may be the body part of the first granularity, such as any one of the head, torso, upper limbs, hands, lower limbs and feet.
  • target body part may be the target body structure itself or a part of the target body structure, which is not limited herein.
  • the non-target body structure can be any body structure other than the target body structure divided according to the division granularity of the target body structure, which is not limited here.
  • the non-target body structure may be the head, torso, upper limbs, lower limbs, and feet, etc.
  • the non-target body part may be any other body part other than the target body part divided according to the division granularity of the target body part, which is not limited here.
  • the non-target body part may be other fingers.
  • the electronic device can obtain the target 3D image by rendering the 3D model of the wearable component according to the real-time posture of the target body part and the real-time unoccluded area, and then superimpose the target 3D image on the target body part in the real-time image to obtain a composite image.
  • the electronic device can render the three-dimensional model of the wearable component based on the real-time unoccluded area of the target body part according to the real-time posture of the target body part, and obtain the target three-dimensional image.
  • the electronic device may superimpose the target three-dimensional image on the wearing position of the wearable component of the target body part in the real-time image to obtain a composite image.
  • the wearing position of the wearing component can be any position of the target body part preset according to the actual application requirements, which is not limited here.
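  • As a minimal illustration of this superimposition step (a sketch under assumptions, not the patent's implementation), the snippet below alpha-composites a rendered RGBA image of the wearable component over the real-time frame; the function name and the RGBA convention are illustrative.

```python
import numpy as np

def composite_frame(frame_bgr, rendered_rgba):
    """Alpha-over compositing: superimpose the rendered target 3D image
    (same height/width as the frame; alpha is 0 outside the rendered
    component, so unrendered pixels keep the original frame) onto the
    real-time image."""
    alpha = rendered_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = rendered_rgba[..., :3].astype(np.float32)
    bg = frame_bgr.astype(np.float32)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)
```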
  • the target body part may include the target finger.
  • the target finger may be a finger preset according to actual application requirements, which is not limited here.
  • the target finger may be at least one of the thumb, index finger, middle finger, ring finger, and little finger.
  • the three-dimensional model of the wearable component may be a three-dimensional model of the component to be worn on the target finger, that is, the three-dimensional model of the wearable component is a three-dimensional model corresponding to the wearable component to be worn on the target finger.
  • the real-time unoccluded area may include an area of the target finger in the real-time image that is not occluded by an occluder.
  • the occluder may be configured to include non-body part objects, body structures other than the hand to which the target finger belongs, and fingers other than the target finger.
  • the non-body part object may include at least one of an image background and an object other than a human body that occludes any part region within the target finger.
  • the target finger is the body part of the second granularity, and the corresponding target body structure is the body part of the first granularity. Therefore, the target body structure to which the target finger belongs is the hand to which the target finger belongs.
  • the non-target body structure may be any body structure other than the hand to which the target finger belongs, based on the first-granularity division.
  • the non-target body part of the same type as the body part of the target finger can be any finger other than the target finger.
  • the wearing component may be a component for snugly wearing on the target finger, such as a ring or the like.
  • the electronic device can render the part of the three-dimensional model of the wearable component located in the real-time unoccluded area of the target finger according to the real-time posture of the target finger, such as the real-time rotation posture, to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target finger in the real-time image to obtain a composite image.
  • FIG. 2 shows a schematic diagram of a composite image provided by an embodiment of the present disclosure.
  • the composite image may be an image including a ring finger 201
  • the wearable component may be a ring 205 .
  • the entire border of the hand to which the ring finger 201 belongs is connected to the image background 202, and a part of the area within the ring finger 201 is blocked by the little finger 203 and the middle finger 204. Therefore, the image background 202, the little finger 203, and the middle finger 204 can all serve as occluders for the ring finger 201.
  • the part of the three-dimensional model of the ring located in the unoccluded area of the ring finger 201 that is not blocked by the above-mentioned occluders can be rendered to obtain the three-dimensional ring 205, and the ring 205 is superimposed and displayed at the ring wearing position of the ring finger 201.
  • the wearing component may also be a component for at least partially non-fittingly worn on the target finger, such as a manicure sheet or the like.
  • the three-dimensional model of the wearable component includes a first model portion corresponding to a fitting portion that fits the target finger and a second model portion corresponding to a non-fitting portion that does not fit the target finger.
  • the electronic device can render the part of the first model portion located in the real-time unoccluded area of the target finger and the part of the second model portion located in the real-time unoccluded background area of the real-time image to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target finger in the real-time image to obtain a composite image.
  • the real-time unobstructed background area may include the image background of the real-time image and the image area in the real-time image corresponding to the body structure not connected to the non-target body structure in the occluder.
  • In this way, the posture and the occlusion situation of the finger can be considered when adding the finger decoration effect, and the finger decoration effect can then be added to the unoccluded area of the finger.
  • the target body part may include the target head.
  • the three-dimensional model of the wearable component may be a three-dimensional model of the component to be worn on the target head, that is, the three-dimensional model of the wearable component is a three-dimensional model corresponding to the wearable component to be worn on the target head.
  • the real-time unoccluded area may include an area of the target's head in the real-time image that is not occluded by an occluder.
  • the occluder may be set to include non-body part objects and non-target head body structures other than the target head.
  • the non-body part object may include at least one of an image background in the real-time image and an object other than a human body that occludes any part area within the target head.
  • the non-target head body structure may be any body structure other than the target head divided based on the first granularity.
  • the donning component may be a component for fit snugly on the target's head, such as a headband or the like.
  • the electronic device may render the part of the three-dimensional model of the wearable component in the real-time unoccluded area of the target head according to the real-time posture of the target head, such as the real-time rotation posture, to obtain a three-dimensional image of the target , and then superimpose the target three-dimensional image on the wearing position of the wearing component of the target head in the real-time image to obtain a composite image.
  • the wearing component can also be a component for wearing on the target head in a non-fitting manner, such as a helmet or the like.
  • the electronic device may, according to the real-time posture of the target head, such as the real-time rotation posture, render the part of the three-dimensional model of the wearable component located in the real-time unoccluded area of the target head and the part located in the real-time unoccluded background area of the real-time image to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target head in the real-time image to obtain a composite image.
  • the real-time unobstructed background area may include the image background of the real-time image and the image area in the real-time image corresponding to the body structure not connected to the non-target body structure in the occluder.
  • FIG. 3 shows a schematic diagram of another composite image provided by an embodiment of the present disclosure.
  • the composite image may be an image including the target head 301
  • the wearing component may be a helmet 306.
  • Part of the area within the target head 301 is blocked by the hand 302, and the hand 302 is connected to the upper limb 303. Therefore, when rendering the three-dimensional model of the helmet, according to the real-time posture of the target head 301, the part of the helmet model located in the unoccluded area of the target head 301 not blocked by the hand 302, the part located in the image background 304, and the part located in the body 305 can be rendered to obtain the three-dimensional helmet 306, and the helmet 306 is displayed superimposed at the helmet wearing position of the target head 301.
  • the image area corresponding to the image background 304 and the body 305 forms an unobstructed background area.
  • In this way, the posture and the occlusion situation of the head can be considered when adding the head decoration effect, and the head decoration effect can then be added to the unoccluded area of the head.
  • a composite image obtained by superimposing the target three-dimensional image on the target body part in the real-time image can be displayed in real time, wherein the target three-dimensional image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture and real-time unoccluded area of the target body part.
  • the real-time posture and real-time unoccluded area are directly determined according to the real-time image, so as to achieve the purpose of automatically adding a three-dimensional decoration effect with the wearable component to the real-time image.
  • In this way, the integration of the added three-dimensional decoration effect with the original image can be improved, avoiding glitch frames that break the illusion, thereby improving the user's experience.
  • In order to enable the electronic device to reliably display the composite image, the target 3D image can also be obtained by rendering the part of the 3D model of the wearable component that is not occluded by a preset body part model, according to the real-time posture and the real-time unoccluded area, where the preset body part model is used to simulate the target body part.
  • the image display method may further include:
  • determining, according to the real-time posture, first depth information of the three-dimensional model of the wearable component and second depth information of the preset body part model;
  • determining, according to the real-time unoccluded area, the to-be-rendered portion of the three-dimensional model of the wearable component;
  • rendering, according to the first depth information and the second depth information, the part of the to-be-rendered portion whose depth is smaller than that of the preset body part model, to obtain the target three-dimensional image.
  • the electronic device can acquire the real-time posture of the target body part and determine, according to the real-time posture, the first depth information of the three-dimensional model of the wearable component and the second depth information of the preset body part model; at the same time, it determines the to-be-rendered portion of the three-dimensional model of the wearable component according to the real-time unoccluded area, and then, according to the first depth information and the second depth information, renders the part of the to-be-rendered portion whose depth is smaller than that of the preset body part model to obtain the target three-dimensional image.
  • the real-time posture may include the real-time rotational posture of the target body part, and the real-time rotational posture of the target body part may include the real-time three-dimensional rotational posture of each joint in the target body part.
  • the real-time posture can be represented by the real-time three-dimensional rotational posture information of each joint in the target body part.
  • the real-time three-dimensional rotation attitude information may include Euler angles or rotation matrices, etc., which are not limited herein.
  • For example, the three-dimensional posture representation of a human hand represents the three-dimensional rotation information of each joint of a human finger, which is represented by Euler angles (i.e., the rotation angles of a finger joint around three axes in three-dimensional space) or a rotation matrix, as illustrated in the sketch below.
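  • A minimal sketch of this representation, assuming an X-Y-Z rotation order and radian angles (both are illustrative choices; the patent does not fix a convention):

```python
import numpy as np

def joint_rotation_matrix(rx, ry, rz):
    """Build a 3x3 rotation matrix for one joint from its Euler angles
    (rotation angles around the three axes in 3D space)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # one possible composition order

# A hand pose can then be kept as one matrix per joint, e.g.
# pose = {"ring_finger_base": joint_rotation_matrix(0.1, 0.0, 0.3), ...}
```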
  • the image display method may further include using a pre-trained three-dimensional posture detection model to perform posture detection on the target body part in the real-time image to obtain the real-time posture of the target body part, which may then be used to determine the depth information of the three-dimensional model of the wearable component.
  • the electronic device can first perform posture detection on the target body part in the real-time image to obtain the real-time posture of the target body part, and then determine the first position of the three-dimensional model of the wearable component according to the real-time posture of the target body part. a depth information and a second depth information of the preset body part model.
  • the electronic device may rotate the three-dimensional model of the wearable component and the preset body part model synchronously according to the real-time posture of the target body part, so that the model postures of the three-dimensional model of the wearable component and the preset body part model are consistent with the real-time posture of the target body part, and then extract the first depth information of the three-dimensional model of the wearable component and the second depth information of the preset body part model in that model posture.
  • the electronic device may also first use a pre-trained feature point detection model to perform feature point detection on the target body structure to which the target body part in the real-time image belongs, to obtain each feature point of the target body structure. Then, according to the real-time posture of the target body part and each feature point, the three-dimensional model of the wearable component and the preset body part model are synchronously scaled, rotated, and translated, so that their model postures are consistent with the real-time posture, real-time size, and real-time position of the target body part, after which the first depth information of the three-dimensional model of the wearable component and the second depth information of the preset body part model are extracted.
  • the specific methods for scaling, rotating and translating the 3D model of the wearable component may be:
  • Taking a ring as an example, the three-dimensional rotation matrix M_ring ∈ R^(3×3) can be concatenated with the translation vector V_ring ∈ R^(3×1) to obtain the three-dimensional rotation-translation matrix [M_ring | V_ring] ∈ R^(3×4) of the three-dimensional ring model.
  • This three-dimensional rotation-translation matrix can then be used to rotate and translate each point in the three-dimensional model of the ring to obtain a three-dimensional model of the ring that is consistent with the real-time posture, real-time size, and real-time position of the target body part, as sketched below.
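  • A sketch of applying [M_ring | V_ring] to the model; the function name and the vertex-array layout are assumptions for illustration:

```python
import numpy as np

def apply_rototranslation(M_ring, V_ring, points):
    """Apply the 3x4 rotation-translation matrix [M_ring | V_ring] to an
    (N, 3) array of model points: p' = M_ring @ p + V_ring."""
    T = np.hstack([M_ring, V_ring.reshape(3, 1)])               # 3 x 4
    homog = np.hstack([points, np.ones((points.shape[0], 1))])  # N x 4
    return (T @ homog.T).T                                      # N x 3

# The real-time size can be folded into the rotation part, e.g.
# M_ring = scale * R, with R taken from the detected real-time posture.
```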
  • the real-time unoccluded area may include an area of the target body part that is not occluded by an occluder, and the occluder may include a non-target body structure other than the target body structure to which the target body part belongs.
  • the specific method by which the electronic device determines the to-be-rendered portion of the three-dimensional model of the wearable component according to the real-time unoccluded area may include: first, performing image segmentation for body structures on the real-time image to obtain the target body structure image, the non-target body structure image, and the background image, wherein the real-time unoccluded area of the target body part is determined in the target body structure image. The electronic device can therefore determine the to-be-rendered portion of the three-dimensional model of the wearable component according to the target body structure image.
  • the electronic device can take the part of the three-dimensional model of the wearable component located in the real-time unoccluded area corresponding to the target body structure image as the part to be rendered.
  • the wearing component is a component for at least partially non-fitted wearing on the target body part
  • the three-dimensional model of the wearable component may include a first model portion corresponding to the fitting portion that fits the target body part and a second model portion corresponding to the non-fitting portion that does not fit the target body part.
  • the electronic device can take the part of the first model portion located in the real-time unoccluded area of the target body part and the part of the second model portion located in the real-time unoccluded background area of the real-time image as the part to be rendered, as in the mask sketch below.
  • the real-time unoccluded background area may include an area corresponding to the background image and an area corresponding to a non-target body structure image corresponding to a body structure not connected to the non-target body structure in the occluder.
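  • The mask sketch referenced above, treating all inputs as boolean H×W masks (the function names and the mask representation are assumptions for illustration):

```python
import numpy as np

def part_to_render(model_mask, unoccluded_mask):
    """Snugly worn component: keep only the model pixels that fall
    inside the real-time unoccluded area of the target body part."""
    return np.logical_and(model_mask, unoccluded_mask)

def part_to_render_nonfitting(first_part_mask, second_part_mask,
                              unoccluded_mask, background_mask):
    """Partly non-fitted component (e.g. a helmet): the fitting portion
    is clipped to the real-time unoccluded area, the non-fitting portion
    to the real-time unoccluded background area."""
    return np.logical_or(first_part_mask & unoccluded_mask,
                         second_part_mask & background_mask)
```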
  • the electronic device can take the part of the three-dimensional model of the wearable component located in the real-time unoccluded area of the target finger and the part located in the real-time unoccluded background area of the real-time image as the part to be rendered.
  • the real-time unoccluded background area may include the area corresponding to the background image and the area corresponding to the non-target body structure image corresponding to the body structure not connected to the non-target body structure in the occluder.
  • FIG. 4 shows a schematic diagram of a renderable image area provided by an embodiment of the present disclosure.
  • the target head 401 is blocked by the hand 402. Therefore, the area of the target head 401 that is not blocked by the hand 402 is the real-time unoccluded area 403 (the area of the target head 401 excluding the shaded portion).
  • the hand 402 is connected to the upper limb 404 , so the real-time unoccluded background area may include the area corresponding to the image background 405 and the body 406 .
  • the wearing component is a helmet
  • the real-time unoccluded area 403 and the real-time unoccluded background area can form a renderable image area
  • the portion of the three-dimensional model to be rendered may include the portion of the three-dimensional model of the helmet that is within the renderable image area.
  • In this way, when the occluder includes a non-target body structure other than the target body structure to which the target body part belongs, the electronic device can simulate the occlusion of the three-dimensional decoration effect by the non-target body structure, so as to improve the fusion of the added three-dimensional decoration effect with the original image.
  • the real-time unoccluded area may include an area of the target body part that is not occluded by an occluder, and the occluder may include at least one of a non-body part object and a non-target body structure other than the target body structure to which the target body part belongs.
  • determining the to-be-rendered part of the three-dimensional model of the wearable component may specifically include:
  • the electronic device may perform image segmentation on the real-time image for the target body structure to which the target body part belongs to obtain the target body structure image, determine the area where the target body part is located in the target body structure image as the real-time unoccluded area, and then determine the to-be-rendered portion of the three-dimensional model of the wearable component according to the real-time unoccluded area.
  • the electronic device can take the part of the three-dimensional model of the wearable component located in the real-time unoccluded area corresponding to the target body structure image as the part to be rendered.
  • FIG. 5 shows a schematic diagram of another renderable image area provided by an embodiment of the present disclosure.
  • the entire area of the ring finger 501 on the hand 502 is an unblocked area in real time.
  • the wearing component is a ring
  • the real-time unoccluded area may form a renderable image area
  • the to-be-rendered portion of the three-dimensional model of the ring may include the part of the model located within the renderable image area, while the part of the wearable component located in the background area outside the hand 502 is not rendered; the background area can be obtained as the non-hand area by segmenting the hand image.
  • In the case that the wearing component is not a component worn snugly on the target body part, the method for determining the part to be rendered is similar to that in the embodiment in which the occluder includes a non-target body structure, and will not be repeated here.
  • In this way, when the occluder includes at least one of a non-body part object and a non-target body structure, the electronic device can simulate the occlusion of the three-dimensional decoration effect by the non-body part object and the non-target body structure, so as to improve the fusion of the added three-dimensional decoration effect with the original image.
  • In addition to the non-body part object and the non-target body structure other than the target body structure to which the target body part belongs, the occluder may also include a non-target body part of the same type as the target body part, as shown in FIG. 2 .
  • determining the real-time unoccluded area may specifically include:
  • performing feature point detection on the target body structure image to obtain the feature points of the target body structure;
  • determining, according to the feature points, the occlusion area of the target body part by the non-target body part;
  • determining the real-time unoccluded area according to the occlusion area.
  • the electronic device may first use a pre-trained feature point detection model to perform feature point detection on the target body structure image in the real-time image to obtain each feature point of the target body structure; then, according to the feature points of the non-target body part and the target body part, determine the occlusion area of the target body part by the non-target body part; and then, according to the occlusion area, determine the real-time unoccluded area in the target body structure image so as to determine the part to be rendered.
  • the electronic device may first determine, among the feature points corresponding to the target body part, the first feature point and the second feature point that are closest to the wearing position of the wearable component; then, among all the feature points corresponding to the non-target body parts, determine the third and fourth feature points that are closest to the first feature point and the fifth and sixth feature points that are closest to the second feature point. Next, it calculates the first middle point between the first feature point and the third feature point, the second middle point between the first feature point and the fourth feature point, the third middle point between the second feature point and the fifth feature point, and the fourth middle point between the second feature point and the sixth feature point. Further, the first, second, third, and fourth middle points are divided into two groups, each group including the two middle points corresponding to the same non-target body part.
  • the electronic device can generate a parallelogram-shaped occlusion area corresponding to the non-target body part to which each group of intermediate points belongs, according to the line segments formed by connecting each group of intermediate points.
  • the electronic device may use a line segment formed by connecting each set of intermediate points as a hypotenuse, and generate a parallelogram-shaped occlusion area according to a long side of a preset length.
  • the electronic device may use the occlusion areas to cover the target body structure image, and then take the image area of the target body structure image that is not covered by the occlusion areas as the real-time unoccluded area; a geometric sketch of this construction follows.
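  • The midpoint construction below follows the description above; the perpendicular extrusion direction and the preset side length are illustrative assumptions, not values from the patent:

```python
import numpy as np
import cv2  # OpenCV, used here only to rasterize the parallelogram

def occlusion_quad(p1_wear, p2_wear, p1_other, p2_other,
                   image_shape, preset_len=40.0):
    """Build one parallelogram-shaped occlusion mask for one adjacent
    (non-target) finger. p1_wear/p2_wear are the target-finger feature
    points nearest the wearing position; p1_other/p2_other are the
    matching nearest feature points of the adjacent finger. The two
    pairwise midpoints form one side of the parallelogram."""
    m1 = (np.asarray(p1_wear, float) + np.asarray(p1_other, float)) / 2.0
    m2 = (np.asarray(p2_wear, float) + np.asarray(p2_other, float)) / 2.0
    side = m2 - m1
    normal = np.array([-side[1], side[0]])                 # perpendicular
    normal = preset_len * normal / (np.linalg.norm(normal) + 1e-6)
    quad = np.array([m1, m2, m2 + normal, m1 + normal], dtype=np.int32)
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, quad, 1)
    return mask.astype(bool)

# real_time_unoccluded = finger_mask & ~(quad_middle | quad_little)
```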
  • In this way, when the occluder includes at least one of a non-body part object, a non-target body structure, and a non-target body part, the electronic device can simulate the occlusion of the three-dimensional decoration effect by each of them, improving the fusion of the added three-dimensional decoration effect with the original image.
  • FIG. 6 shows a schematic diagram of still another renderable image area provided by an embodiment of the present disclosure.
  • FIG. 7 shows a schematic diagram of a blocking area provided by an embodiment of the present disclosure.
  • FIG. 8 shows a schematic diagram of still another renderable image area provided by an embodiment of the present disclosure.
  • the hand 602 to which the ring finger 601 belongs is not blocked by other body structures. Therefore, the real-time unblocked area of the ring finger 601 can be determined in the hand 602 .
  • Since the ring finger 601 overlaps with the middle finger 603 and the little finger 604, it is necessary to further determine the areas of the ring finger 601 that are occluded by the little finger 604 and the middle finger 603 in order to finally determine the real-time unoccluded area of the ring finger 601.
  • the first feature point 605 and the second feature point 606 of the ring finger 601 are the feature points closest to the wearing position of the wearable component; the two feature points closest to the first feature point 605 include the third feature point 607 of the middle finger 603 and the fourth feature point 608 of the little finger 604, and the two feature points closest to the second feature point 606 include the fifth feature point 609 of the middle finger 603 and the sixth feature point 610 of the little finger 604.
  • the middle point between the first feature point 605 and the third feature point 607 is the first middle point 611; the middle point between the first feature point 605 and the fourth feature point 608 is the second middle point 612; the middle point between the second feature point 606 and the fifth feature point 609 is the third middle point 613; and the middle point between the second feature point 606 and the sixth feature point 610 is the fourth middle point 614.
  • According to the line segment connecting the first middle point 611 and the third middle point 613, a first parallelogram-shaped occlusion area 615 corresponding to the middle finger 603 is generated; according to the line segment connecting the second middle point 612 and the fourth middle point 614, a second parallelogram-shaped occlusion area 616 corresponding to the little finger 604 is generated.
  • the first occlusion area 615 and the second occlusion area 616 are superimposed on the hand 602, and the area of the ring finger 601 not covered by the first occlusion area 615 and the second occlusion area 616 is taken as the real-time unoccluded area.
  • the wearing component is a ring
  • the real-time unoccluded area may form a renderable image area
  • the part to be rendered of the three-dimensional model of the ring may include the three-dimensional model of the ring located in the Renderable portion of the image area.
  • In this way, the occlusion of the three-dimensional decoration worn on the ring finger by other fingers can be simulated, so as to improve the integration of the added three-dimensional decoration effect with the original image and avoid the three-dimensional decoration effect appearing embedded in the fingers adjacent to the ring finger.
  • the three-dimensional model of the ornament may include a three-dimensional model of a wearable component and a preset body part model.
  • the electronic device can render the part of the 3D model of the wearable component that is not occluded by the preset body part model according to the real-time posture and the real-time unoccluded area, and obtain the target 3D image.
  • the preset body part model may be a model preset according to actual application requirements for simulating the target body part, which is not limited herein.
  • the target body part is the target head
  • the preset body part model may be a preset standard head model.
  • the target body part is a finger
  • the preset body part model may be a cylinder or a rectangular parallelepiped.
  • the three-dimensional model of the wearing component may be worn on the preset body part model according to the wearing manner of being worn on the target body part.
  • the preset body part model can be scaled, rotated, and translated synchronously with the three-dimensional model of the wearable component according to the real-time posture and feature points of the target body part, so as to determine the first depth information of the three-dimensional model of the wearable component and the second depth information of the preset body part model.
  • the first depth information may include the first depth on each pixel of the three-dimensional model of the wearable component, and the second depth information may include the second depth on each pixel of the preset body part model.
  • the electronic device can compare the first depth at each pixel of the three-dimensional model of the wearable component with the second depth at that pixel and determine whether the pixel is located in the to-be-rendered portion; if the first depth is less than the second depth and the pixel is located in the to-be-rendered portion, that pixel of the three-dimensional model of the wearable component is rendered, as in the depth-test sketch below.
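  • A minimal per-pixel depth-test sketch; the depth maps are assumed to store smaller values for points closer to the image plane, with +inf where a model does not cover the pixel:

```python
import numpy as np

def visible_component_pixels(first_depth, second_depth, to_render_mask):
    """A wearable-model pixel is drawn only if it lies in the
    to-be-rendered portion AND its depth is smaller than that of the
    preset body part model at the same pixel."""
    return to_render_mask & (first_depth < second_depth)
```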
  • FIG. 9 shows a schematic diagram of a preset occlusion model provided by an embodiment of the present disclosure.
  • the three-dimensional model of the wearable component can be a three-dimensional model of a ring 901, and the preset body part model can be a cylinder 902.
  • the cylinder 902 is used to simulate the ring finger, and the three-dimensional model of the ring 901 can be set on the cylinder 902.
  • the device can first synchronously scale, rotate, and translate the three-dimensional model of the ring 901 and the cylinder 902 according to the real-time posture and feature points of the finger used to wear the ring, and then obtain the first depth of each pixel of the three-dimensional model of the ring 901 and the second depth of each pixel of the cylinder 902. If the first depth at a given pixel is smaller than the second depth, the three-dimensional model of the ring 901 is closer to the image plane at that pixel than the cylinder 902, so the three-dimensional ring model 901 can be rendered at that pixel; otherwise, if the first depth at the same pixel is greater than the second depth, the three-dimensional ring model 901 is not rendered at that pixel.
  • the electronic device can use the preset body part model to render the 3D model of the wearable component, simulate the occlusion of the 3D decoration effect by the target body part, and improve the integration of the added 3D decoration effect with the original image.
  • the real-time image may be an image including the target body structure to which the target body part belongs.
  • the electronic device may also identify the target body part in real time on the real-time image, and display the composite image only when the target body part is identified.
  • the electronic device can first identify whether the target body part is displayed in the target body structure in the real-time image, that is, whether the target body part is displayed in the real-time image; if the real-time image shows the target body part, the composite image can be displayed; otherwise, the real-time image can be displayed.
  • Since the real-time images obtained by the electronic device at different times may change, the electronic device needs to identify in real time whether the target body part is displayed in the obtained real-time image and then determine the image to be displayed according to the identification result, to further avoid glitch frames that break the illusion; a minimal gating sketch follows.
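  • The gating sketch referenced above; both callables are placeholders, not names from the patent:

```python
def frame_to_display(frame, detect_target_part, render_composite):
    """Show the composite image only when the target body part is
    recognized in the current frame; otherwise show the raw frame."""
    if detect_target_part(frame):
        return render_composite(frame)
    return frame
```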
  • the 3D image of the target can be obtained by rendering the 3D model of the functional component according to the relative position of the target and the real-time posture, and rendering the 3D model of the wearable component according to the relative position of the target, the real-time posture and the real-time unoccluded area.
  • the relative position of the target can be the relative position of the three-dimensional model of the functional component and the three-dimensional model of the wearable component under the real-time attitude.
  • the three-dimensional model of the functional component may be a three-dimensional model corresponding to the functional component.
  • the functional components may be components with decorative functions, such as diamonds, bow ties, and the like.
  • the functional component may also be a component with a use function, such as a searchlight, an antenna, and the like.
  • the functional component and the wearing component can form a complete three-dimensional decoration effect.
  • the electronic device can render the three-dimensional model of the functional component according to the relative position of the target and the real-time attitude, and render the three-dimensional model of the wearable component according to the relative position of the target, the real-time attitude and the real-time unoccluded area, and obtain the three-dimensional image of the target , and then superimpose the target three-dimensional image on the target body part in the real-time image to obtain a composite image.
  • the three-dimensional model of the decoration may further include a three-dimensional model of the wearable component and a three-dimensional model of the functional component, and the three-dimensional model of the wearable component and the three-dimensional model of the functional component may be arranged according to preset relative positions.
  • the electronic device can rotate the 3D model of the wearable component and the 3D model of the functional component synchronously according to the real-time posture of the target body part, so that the model postures of the 3D model of the wearable component and the 3D model of the functional component are consistent with the real-time posture of the target body part, and obtain The relative position of the target in the 3D model of the functional component and the 3D model of the wearable component in a posture that is consistent with the real-time posture of the target body part.
  • the image display method may further include:
  • if the yaw angle of the upper surface of the three-dimensional model of the functional component belongs to the first preset angle range, rendering the three-dimensional model of the wearable component and the three-dimensional model of the functional component to obtain the target three-dimensional image;
  • if the yaw angle of the upper surface belongs to the second preset angle range, rendering only the three-dimensional model of the wearable component to obtain the target three-dimensional image.
  • the electronic device can determine the model posture of the three-dimensional model of the functional component according to the real-time posture, then determine the yaw angle of the upper surface of the three-dimensional model of the functional component according to that model posture, and judge which preset angle range the yaw angle of the upper surface belongs to. If the yaw angle of the upper surface belongs to the first preset angle range, the three-dimensional model of the wearable component and the three-dimensional model of the functional component are both rendered to obtain a target three-dimensional image including the wearable component and the functional component; otherwise, only the three-dimensional model of the wearable component is rendered, and the target three-dimensional image includes only the wearable component.
  • the electronic device can scale, rotate, and translate the three-dimensional model of the functional component and the three-dimensional model of the wearable component synchronously according to the real-time posture, and after this scaling, rotation, and translation, determine the yaw angle of the upper surface of the three-dimensional model of the functional component.
  • the upper surface may be the surface of the three-dimensional model of the functional component preset according to actual application requirements, which is not limited herein.
  • the first preset angular range may be an angular range that is preset according to actual application requirements and can make the upper surface face the direction that is visible to the user, which is not limited herein.
  • the second preset angle range may be an angle range that is preset according to actual application requirements and can make the upper surface face away from the user, which is not limited herein.
  • the first preset angle range may be an angle range of [0°, 100°] in a clockwise direction and a counterclockwise direction
  • the second preset angle range may be an angle range other than the first preset angle range; a sketch of this visibility check follows
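  • A sketch of the yaw-based visibility check under these example ranges; measuring yaw as the angle between the upper-surface normal (in camera coordinates) and the viewing axis is an illustrative assumption:

```python
import numpy as np

def functional_component_visible(up_normal_cam, first_range_deg=100.0):
    """Return True if the upper-surface yaw falls in the first preset
    range of [0, 100] degrees (either direction), i.e. the functional
    component (e.g. a diamond) faces the user and should be rendered."""
    view_axis = np.array([0.0, 0.0, -1.0])  # camera assumed to look along -Z
    n = np.asarray(up_normal_cam, dtype=float)
    cos_yaw = n @ view_axis / (np.linalg.norm(n) + 1e-9)
    yaw_deg = np.degrees(np.arccos(np.clip(cos_yaw, -1.0, 1.0)))
    return yaw_deg <= first_range_deg

# render ring + diamond if functional_component_visible(n), else ring only
```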
  • FIG. 10 shows a schematic diagram of yet another composite image provided by an embodiment of the present disclosure.
  • the composite image may be an image including a ring finger 1001
  • the wearable component may be a ring 1002
  • the functional component may be a diamond 1003 .
  • Because the posture of the ring finger 1001 makes the yaw angle of the upper surface of the diamond belong to the first preset angle range, the user can see the ring effect and the diamond effect at the same time.
  • FIG. 11 shows a schematic diagram of still another composite image provided by an embodiment of the present disclosure.
  • the composite image may be an image including a ring finger 1101
  • the wearable component may be a ring 1102
  • the functional component may be a diamond.
  • Because the posture of the ring finger 1101 makes the yaw angle of the upper surface of the diamond belong to the second preset angle range, the diamond is not rendered, and only the ring 1102 is displayed.
  • the electronic device can also render the three-dimensional model of the functional component according to the real-time posture of the target body part and the real-time unobstructed background area to obtain the functional component in the target three-dimensional image.
  • the electronic device can render the part of the three-dimensional model of the functional component located in the real-time unoccluded background area to obtain the functional component in the target three-dimensional image; the details are similar to those described above and will not be repeated here.
  • the electronic device can further simulate the blocking of the functional component effect of the three-dimensional decoration effect by the blocking object, and improve the integration of the added three-dimensional decoration effect and the original image.
  • the image display method can simulate the occlusion of the decoration effect at the pixel level in various ways, so as to simulate a more precise occlusion relationship. It greatly improves the realism with which any object in the image occludes the decoration effect, improves the integration of the added three-dimensional decoration effect with the original image, avoids glitch frames that break the illusion, and enhances the user's sense of immersion, thereby improving the user's experience.
  • the embodiment of the present disclosure further provides an image display device capable of implementing the above-mentioned image display method.
  • the following describes the image display device provided by the embodiment of the present disclosure with reference to FIG. 12 .
  • the image display apparatus may be an electronic device.
  • electronic devices may include mobile phones, tablet computers, desktop computers, notebook computers, vehicle-mounted terminals, wearable electronic devices, all-in-one computers, smart home devices, and other devices with communication functions, and may also be virtual machines or devices simulated by simulators.
  • FIG. 12 shows a schematic structural diagram of an image display device provided by an embodiment of the present disclosure.
  • the image display apparatus 1200 may include an acquisition unit 1210 and a display unit 1220 .
  • the acquisition unit 1210 may be configured to acquire real-time images of the target body part.
  • the display unit 1220 can be configured to display a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering the 3D model of the wearable component according to the real-time pose and real-time unoccluded area of the target body part, and the real-time pose and real-time unoccluded area are determined according to the real-time image.
  • after the real-time image of the target body part is acquired, a composite image obtained by superimposing the target three-dimensional image on the target body part in the real-time image can be displayed in real time, where the target three-dimensional image is obtained by rendering the 3D model of the wearable component according to the real-time posture and real-time unoccluded area of the target body part, both determined directly from the real-time image, so as to achieve the purpose of automatically adding a three-dimensional decoration effect with the wearable component to the real-time image.
  • since the posture and occlusion of the body part wearing the decoration are taken into account, the integration of the added three-dimensional decoration effect with the original image can be improved and glitch frames avoided, thereby improving the user experience.
  • the real-time pose may include a real-time rotational pose of the target body part.
  • the real-time unoccluded area may include an area of the target body part that is not occluded by an occluder, and the occluder may include at least one of a non-body-part object, a non-target body structure other than the target body structure to which the target body part belongs, and a non-target body part of the same body part type as the target body part.
  • the target body part may include the target finger
  • the three-dimensional model of the wearable component may be a three-dimensional model of a component worn on the target finger
  • the real-time unoccluded area may include the area of the target finger in the real-time image that is not occluded by an occluder.
  • the occluder may include non-body part objects, body structures other than the hand to which the target finger belongs, and fingers other than the target finger.
  • the target 3D image can be obtained by rendering, according to the real-time posture and the real-time unoccluded area, the part of the 3D model of the wearable component that is not occluded by a preset body part model, and the preset body part model can be used to simulate the target body part.
  • the image display apparatus 1200 may further include a first processing unit, a second processing unit, and a first rendering unit.
  • the first processing unit may be configured to determine the first depth information of the three-dimensional model of the wearable component and the second depth information of the preset body part model according to the real-time posture.
  • the second processing unit may be configured to determine the to-be-rendered portion of the three-dimensional model of the wearable component according to the real-time unoccluded area.
  • the first rendering unit may be configured to render, according to the first depth information and the second depth information, the part of the to-be-rendered portion whose depth is smaller than that of the preset body part model, to obtain the target three-dimensional image.
  • the real-time unoccluded area may include an area of the target body part that is not occluded by an occluder, and the occluder may include at least one of a non-body-part object and a non-target body structure other than the target body structure to which the target body part belongs.
  • the second processing unit may include a first processing subunit, a second processing subunit and a third processing subunit.
  • the first processing subunit may be configured to perform image segmentation on the real-time image for the target body structure to which the target body part belongs, to obtain a target body structure image.
  • the second processing subunit may be configured to determine a real-time unoccluded area in the target body structure image.
  • the third processing subunit may be configured to determine the portion to be rendered according to the real-time unoccluded area.
  • the occluder may also include a non-target body part of the same type of body part as the target body part.
  • the second processing subunit can be further configured to: perform feature point detection for the target body structure on the target body structure image, to obtain feature points of the target body structure; determine, according to the feature points, the occlusion region of the target body part occluded by the non-target body part; and determine, according to the occlusion region, the real-time unoccluded area in the target body structure image.
  • the target 3D image can be obtained by rendering the 3D model of the functional component according to a target relative position and the real-time posture, and rendering the 3D model of the wearable component according to the target relative position, the real-time posture, and the real-time unoccluded area.
  • the target relative position may be the relative position between the three-dimensional model of the functional component and the three-dimensional model of the wearable component in the real-time posture.
  • the image display apparatus 1200 may further include a third processing unit, a second rendering unit, and a third rendering unit.
  • the third processing unit may be configured to determine the yaw angle of the upper surface of the three-dimensional model of the functional component according to the real-time attitude.
  • the second rendering unit may be configured to render the three-dimensional model of the wearable component and the three-dimensional model of the functional component to obtain the target three-dimensional image when the yaw angle of the upper surface falls within the first preset angle range.
  • the third rendering unit may be configured to render the three-dimensional model of the wearable component to obtain the target three-dimensional image when the yaw angle of the upper surface falls within the second preset angle range.
  • the image display apparatus 1200 shown in FIG. 12 can execute the steps in the method embodiments shown in FIG. 1 to FIG. 11 and implement the processes and effects of those method embodiments, which are not repeated here.
  • Embodiments of the present disclosure also provide an image display device, which may include a processor and a memory, where the memory may be used to store executable instructions.
  • the processor can be used to read executable instructions from the memory, and execute the executable instructions to implement the image display method in the above embodiment.
  • FIG. 13 shows a schematic structural diagram of an image display device provided by an embodiment of the present disclosure, illustrating a structure suitable for implementing the image display device 1300 in the embodiments of the present disclosure.
  • the image display device 1300 in the embodiment of the present disclosure may be an electronic device.
  • the electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as in-vehicle navigation terminals), and wearable devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
  • image display device 1300 shown in FIG. 13 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the image display apparatus 1300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1301, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage device 1308 into a random access memory (RAM) 1303. The RAM 1303 also stores various programs and data necessary for the operation of the image display device 1300.
  • the processing device 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304.
  • An input/output (I/O) interface 1305 is also connected to the bus 1304.
  • the following devices can be connected to the I/O interface 1305: input devices 1306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 1307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 1308 including, for example, a magnetic tape and a hard disk; and a communication device 1309.
  • the communication means 1309 may allow the image display device 1300 to communicate wirelessly or wiredly with other devices to exchange data.
  • although FIG. 13 shows the image display apparatus 1300 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored in the storage medium, and the computer program, when executed by a processor, enables the processor to implement the image display method in the foregoing embodiments.
  • Embodiments of the present disclosure also provide a computer program product, the computer program product may include a computer program, and when the computer program is executed by a processor, enables the processor to implement the image display method in the above embodiments.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 1309, or from the storage device 1308, or from the ROM 1302.
  • when the computer program is executed by the processing device 1301, the above-mentioned functions defined in the image display method of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP, and can be interconnected with any form or medium of digital data communication (e.g., a communication network).
  • examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned image display apparatus; or may exist alone without being incorporated into the image display apparatus.
  • the above-mentioned computer-readable medium carries one or more programs which, when executed by the image display device, cause the image display device to: acquire a real-time image of a target body part; and display a composite image in real time, where the composite image is an image obtained by superimposing the target 3D image on the target body part in the real-time image, the target 3D image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture and the real-time unoccluded area of the target body part, and the real-time posture and real-time unoccluded area are determined according to the real-time image.
  • computer program code for performing operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages, such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • more specific examples of machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to an image display method, apparatus, device, and medium. The image display method includes: acquiring a real-time image of a target body part; and displaying a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image. According to the embodiments of the present disclosure, the integration of the added three-dimensional decoration effect with the original image can be improved, improving the user experience.

Description

Image display method, apparatus, device, and medium
This application claims priority to Chinese Patent Application No. 202110185451.7, entitled "Image display method, apparatus, device, and medium" and filed with the China National Intellectual Property Administration on February 10, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular to an image display method, apparatus, device, and medium.
Background
With the rapid development of computer technology and mobile communication technology, various image capture platforms based on electronic devices have come into widespread use, greatly enriching people's daily lives. More and more users enjoy shooting images on such platforms to obtain photos or videos with satisfying effects.
To make image capture more entertaining, decoration effects can generally be added for the user on each image frame in real time while the user is shooting. However, although existing decoration effects are entertaining to a degree, they blend poorly with the original image and frequently produce glitch frames that break the illusion, degrading the user experience.
Summary
To solve the above technical problem, or at least partially solve it, the present disclosure provides an image display method, apparatus, device, and medium.
In a first aspect, the present disclosure provides an image display method, including:
acquiring a real-time image of a target body part; and
displaying a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
In a second aspect, the present disclosure provides an image display apparatus, including:
an acquisition unit configured to acquire a real-time image of a target body part; and
a display unit configured to display a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
In a third aspect, the present disclosure provides an image display device, including:
a processor; and
a memory for storing executable instructions,
where the processor is configured to read the executable instructions from the memory and execute them to implement the image display method described in the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the image display method described in the first aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
With the image display method, apparatus, device, and medium of the embodiments of the present disclosure, after the real-time image of the target body part is acquired, a composite image obtained by superimposing the target three-dimensional image on the target body part in the real-time image can be displayed in real time, where the target three-dimensional image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture and real-time unoccluded area of the target body part, and the real-time posture and real-time unoccluded area are determined directly from the real-time image. The purpose of automatically adding a three-dimensional decoration effect with a wearable component to the real-time image can thus be achieved. Since the posture and occlusion of the body part wearing the decoration are taken into account while the three-dimensional decoration effect is added, the integration of the added effect with the original image can be improved and glitch frames avoided, thereby improving the user experience.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a composite image provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another composite image provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a renderable image region provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another renderable image region provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of yet another renderable image region provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an occlusion region provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of still another renderable image region provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a preset occlusion model provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of yet another composite image provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of still another composite image provided by an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present disclosure;
FIG. 13 is a schematic structural diagram of an image display device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and variations thereof are open-ended, that is, "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between a plurality of apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Augmented reality (AR) is a technology that computes the position and angle of the camera image in real time and adds a corresponding image, video, or three-dimensional model; it can combine the virtual world with the real world on a screen and enable interaction between them.
With the development of computer technology, AR technology is being applied ever more widely and has gradually been applied to the shooting functions of image capture platforms. To make image capture more entertaining, AR technology can generally be used to add decoration effects to some of the user's body parts in real time on each image frame while the user is shooting, for example, adding a helmet effect to the user's head or a watch effect to the user's wrist.
However, the applicant has found that although existing decoration effects are entertaining to a degree, they still suffer from the following problems:
A decoration effect covers the entire body part; if there is another occluder on that body part, the decoration effect also covers the occluder. For example, when a helmet effect is added to the user's head, if the head position where the effect is added is blocked by the user's hand, the helmet effect also covers the hand. This makes the decoration effect blend poorly with the original image, frequently producing glitch frames, so that users find it hard to stay immersed while using the effect, which degrades the user experience.
In addition, when a decoration effect is added to a user's finger, the occlusion relationships of fingers are more complex and harder to simulate than those of the head or wrist, which makes the decoration effect blend even more poorly with the original image. For example, when a user tries on an AR ring, glitch frames such as the ring fitting poorly or the ring cutting into an adjacent finger are even more likely to appear.
To solve the above problems, embodiments of the present disclosure provide an image display method, apparatus, device, and medium that take into account the posture and occlusion of the body part wearing the decoration when adding the decoration effect.
The image display method provided by the embodiments of the present disclosure is first described below with reference to FIG. 1.
In the embodiments of the present disclosure, the image display method may be executed by an electronic device. The electronic device may include devices with communication functions such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle-mounted terminals, wearable electronic devices, all-in-one machines, and smart home devices, and may also be a virtual machine or a device simulated by a simulator.
FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure.
As shown in FIG. 1, the image display method includes the following steps.
S110: acquire a real-time image of a target body part.
In the embodiments of the present disclosure, when a user wants to add a decoration effect to a target body part in a real-time image, the electronic device can be made to acquire a real-time image of the target body part.
In some embodiments, the electronic device may capture images with a camera to acquire the real-time image of the target body part.
In other embodiments, the electronic device may receive images sent by another device to acquire the real-time image of the target body part.
In still other embodiments, the electronic device may locally read an image selected by the user from local images to acquire the real-time image of the target body part.
After the electronic device acquires the real-time image of the target body part, a decoration effect can be added to the target body part in the real-time image.
The target body part may be any body part of the human body preset according to actual application requirements, which is not limited here. For example, the target body part may include a body part of a first granularity, such as any one of the head, torso, upper limbs, hands, lower limbs, and feet. As another example, the target body part may also include a body part of a second granularity finer than the first granularity, such as any one of the ears, neck, wrist, fingers, and ankles.
Furthermore, the number of target body parts may also be any number preset according to actual application requirements, which is not limited here. For example, the number of target body parts may be one, two, three, and so on.
S120: display a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
In the embodiments of the present disclosure, after acquiring the real-time image of the target body part, the electronic device can add a decoration effect to the target body part in the real-time image in real time, and display in real time the composite image obtained by superimposing, on the target body part in the real-time image, the target three-dimensional image corresponding to the three-dimensional decoration effect.
In the embodiments of the present disclosure, the electronic device can determine the real-time posture of the target body part according to the real-time image.
Optionally, the real-time posture may be a real-time three-dimensional posture of the target body part. The real-time posture may include a real-time rotational posture of the target body part.
Furthermore, the real-time rotational posture of the target body part may include the real-time three-dimensional rotational posture of each joint in the target body part.
In the embodiments of the present disclosure, the electronic device can determine the real-time unoccluded area of the target body part according to the real-time image.
Optionally, the real-time unoccluded area may include the area of the target body part that is not occluded by an occluder. The occluder may include at least one of a non-body-part object, a non-target body structure other than the target body structure to which the target body part belongs, and a non-target body part of the same body part type as the target body part, and may be preset according to actual application requirements.
The non-body-part object may include at least one of the image background in the real-time image and an object other than the human body that occludes any region within the target body part.
The target body structure may be the body structure to which the target body part belongs, preset according to actual application requirements, which is not limited here. For example, when the target body part is a body part of the second granularity, the target body structure may be a body part of the first granularity, such as any one of the head, torso, upper limbs, hands, lower limbs, and feet.
Furthermore, the target body part may be the target body structure itself or a part of the target body structure, which is not limited here.
The non-target body structure may be any body structure other than the target body structure, divided at the same granularity as the target body structure, which is not limited here.
For example, if the target body structure is a hand, the non-target body structures may be the head, torso, upper limbs, lower limbs, feet, and so on.
The non-target body part may be any body part other than the target body part, divided at the same granularity as the target body part, which is not limited here.
For example, if the target body part is a finger, the non-target body parts may be the other fingers.
Thus, in the embodiments of the present disclosure, the electronic device can render the three-dimensional model of the wearable component according to the real-time posture and the real-time unoccluded area of the target body part to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the target body part in the real-time image to obtain the composite image.
Furthermore, the electronic device can render the three-dimensional model of the wearable component in accordance with the real-time posture of the target body part and based on the real-time unoccluded area of the target body part, to obtain the target three-dimensional image.
Furthermore, the electronic device can superimpose the target three-dimensional image on the wearing position of the wearable component on the target body part in the real-time image, to obtain the composite image.
The wearing position of the wearable component may be any position on the target body part preset according to actual application requirements, which is not limited here.
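As an illustrative sketch only (the following Python code, and every name in it, is an assumption for explanation and not part of the original disclosure), the superimposition step can be pictured as alpha-blending a rendered effect layer, aligned with the camera frame, over the real-time image:

```python
import numpy as np

def composite_frame(frame: np.ndarray, effect_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend the rendered target 3D image over the real-time frame.

    frame:       H x W x 3 uint8 real-time image.
    effect_rgba: H x W x 4 uint8 render of the target 3D image, aligned
                 with the frame; pixels that were not rendered carry
                 alpha 0, so only the wearing position is affected.
    """
    rgb = effect_rgba[..., :3].astype(np.float32)
    alpha = effect_rgba[..., 3:4].astype(np.float32) / 255.0
    out = alpha * rgb + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(np.uint8)
```

Because unrendered pixels carry zero alpha, the same blend leaves every occluded region of the decoration untouched, which is what keeps the effect behind real occluders.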
In some embodiments of the present disclosure, the target body part may include a target finger.
The target finger may be a finger preset according to actual application requirements, which is not limited here. For example, the target finger may be at least one of the thumb, index finger, middle finger, ring finger, and little finger.
Correspondingly, the three-dimensional model of the wearable component may be a three-dimensional model of a component to be worn on the target finger, that is, the three-dimensional model corresponds to a wearable component worn on the target finger.
In these embodiments, the real-time unoccluded area may include the area of the target finger in the real-time image that is not occluded by an occluder.
Optionally, in order to account for everything that could possibly occlude the target finger, the occluder may be configured to include non-body-part objects, body structures other than the hand to which the target finger belongs, and fingers other than the target finger.
The non-body-part object may include at least one of the image background and an object other than the human body that occludes any region within the target finger.
The target finger is a body part of the second granularity, while the corresponding target body structure is a body part of the first granularity; therefore, the target body structure to which the target finger belongs is the hand to which it belongs. In this case, the non-target body structure may be any body structure, divided at the first granularity, other than the hand to which the target finger belongs.
A non-target body part of the same body part type as the target finger may be any finger other than the target finger.
In some embodiments, the wearable component may be a component to be worn fitted against the target finger, such as a ring.
In these embodiments, optionally, the electronic device can render, in accordance with the real-time posture of the target finger such as its real-time rotational posture, the portion of the wearable component's three-dimensional model that falls within the target finger's real-time unoccluded area, to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target finger in the real-time image, to obtain the composite image.
FIG. 2 is a schematic diagram of a composite image provided by an embodiment of the present disclosure.
As shown in FIG. 2, the composite image may be an image including a ring finger 201, and the wearable component may be a ring 205. The entire boundary of the hand to which the ring finger 201 belongs adjoins the image background 202, and part of the ring finger 201 is occluded by the little finger 203 and the middle finger 204; therefore, the image background 202, the little finger 203, and the middle finger 204 can all be regarded as occluders of the ring finger 201. When the ring's three-dimensional model is rendered, the portion of the model that falls within the unoccluded area of the ring finger 201, that is, the area not occluded by the above occluders, can be rendered according to the real-time posture of the ring finger 201 to obtain a three-dimensional ring 205, and the ring 205 is superimposed and displayed at the ring wearing position on the ring finger 201.
In other embodiments, the wearable component may also be a component to be worn at least partially not fitted against the target finger, such as a nail tip.
In these embodiments, optionally, the three-dimensional model of the wearable component includes a first model part corresponding to the fitted portion that fits against the target finger and a second model part corresponding to the non-fitted portion that does not. The electronic device can render, in accordance with the real-time posture of the target finger such as its real-time rotational posture, the portion of the first model part that falls within the target finger's real-time unoccluded area and the portion of the second model part that falls within the real-time image's real-time unoccluded background area, to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target finger in the real-time image, to obtain the composite image.
The real-time unoccluded background area may include the image background of the real-time image and the image regions corresponding to body structures in the real-time image that are not connected to any non-target body structure among the occluders.
Thus, in these embodiments, the posture and occlusion of the finger can be taken into account when a finger decoration effect is added, so that the effect is added only to the unoccluded area of the finger.
In other embodiments of the present disclosure, the target body part may include a target head.
Correspondingly, the three-dimensional model of the wearable component may be a three-dimensional model of a component to be worn on the target head, that is, the three-dimensional model corresponds to a wearable component worn on the target head.
In these embodiments, the real-time unoccluded area may include the area of the target head in the real-time image that is not occluded by an occluder.
Optionally, in order to account for everything that could possibly occlude the target head, the occluder may be configured to include non-body-part objects and non-target body structures other than the target head.
The non-body-part object may include at least one of the image background in the real-time image and an object other than the human body that occludes any region within the target head.
The non-target body structure may be any body structure, divided at the first granularity, other than the target head.
In some embodiments, the wearable component may be a component to be worn fitted against the target head, such as a headband.
In these embodiments, optionally, the electronic device can render, in accordance with the real-time posture of the target head such as its real-time rotational posture, the portion of the wearable component's three-dimensional model that falls within the target head's real-time unoccluded area, to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target head in the real-time image, to obtain the composite image.
In other embodiments, the wearable component may also be a component to be worn entirely not fitted against the target head, such as a helmet.
In these embodiments, optionally, the electronic device can render, in accordance with the real-time posture of the target head such as its real-time rotational posture, the portion of the wearable component's three-dimensional model that falls within the target head's real-time unoccluded area and the portion that falls within the real-time image's real-time unoccluded background area, to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the wearing position of the wearable component on the target head in the real-time image, to obtain the composite image.
The real-time unoccluded background area may include the image background of the real-time image and the image regions corresponding to body structures in the real-time image that are not connected to any non-target body structure among the occluders.
FIG. 3 is a schematic diagram of another composite image provided by an embodiment of the present disclosure.
As shown in FIG. 3, the composite image may be an image including a target head 301, and the wearable component may be a helmet 306. Part of the target head 301 is occluded by a hand 302, and the hand 302 is connected to an upper limb 303. Therefore, when the helmet's three-dimensional model is rendered, the portion of the helmet model that falls within the target head 301's area not occluded by the hand 302, the portion within the image background 304, and the portion within the body 305 can be rendered according to the real-time posture of the target head 301, to obtain a three-dimensional helmet 306, and the helmet 306 is superimposed and displayed at the helmet wearing position on the target head 301. The image regions corresponding to the image background 304 and the body 305 form the unoccluded background area.
Thus, in these embodiments, the posture and occlusion of the head can be taken into account when a head decoration effect is added, so that the effect is added to the unoccluded area of the head.
In the embodiments of the present disclosure, after the real-time image of the target body part is acquired, the composite image obtained by superimposing the target three-dimensional image on the target body part in the real-time image can be displayed in real time, where the target three-dimensional image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture and real-time unoccluded area of the target body part, and the real-time posture and real-time unoccluded area are determined directly from the real-time image. The purpose of automatically adding a three-dimensional decoration effect with a wearable component to the real-time image can thus be achieved. Since the posture and occlusion of the body part wearing the decoration are taken into account while the effect is added, the integration of the added effect with the original image can be improved and glitch frames avoided, thereby improving the user experience.
In another implementation of the present disclosure, so that the electronic device can display the composite image reliably, the target three-dimensional image may also be obtained by rendering, according to the real-time posture and the real-time unoccluded area, the portion of the wearable component's three-dimensional model that is not occluded by a preset body part model, where the preset body part model is used to simulate the target body part.
Optionally, before S120 shown in FIG. 1, the image display method may further include:
determining, according to the real-time posture, first depth information of the wearable component's three-dimensional model and second depth information of the preset body part model;
determining, according to the real-time unoccluded area, the to-be-rendered portion of the wearable component's three-dimensional model; and
rendering, according to the first depth information and the second depth information, the part of the to-be-rendered portion whose depth is smaller than that of the preset body part model, to obtain the target three-dimensional image.
In the embodiments of the present disclosure, the electronic device can obtain the real-time posture of the target body part and determine, according to the real-time posture, the first depth information of the wearable component's three-dimensional model and the second depth information of the preset body part model, while determining the to-be-rendered portion of the wearable component's model according to the real-time unoccluded area, and then render, according to the first and second depth information, the part of the to-be-rendered portion whose depth is smaller than that of the preset body part model, to obtain the target three-dimensional image.
The real-time posture may include the real-time rotational posture of the target body part, and the real-time rotational posture of the target body part may include the real-time three-dimensional rotational posture of each joint in the target body part.
Furthermore, the real-time posture may be characterized by real-time three-dimensional rotational posture information of each joint in the target body part. The real-time three-dimensional rotational posture information may include Euler angles, a rotation matrix, or the like, which is not limited here. For example, the three-dimensional posture of a human hand represents the three-dimensional rotation of each finger joint and can be expressed as Euler angles (that is, the rotation angles of a finger joint about the three axes of three-dimensional space) or as a rotation matrix.
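As an illustrative sketch only (the code and names below are assumptions, not part of the original disclosure), converting a joint's Euler-angle representation into the equivalent rotation matrix can be done as follows; the 'xyz' axis order is itself an assumption, since the disclosure only states that Euler angles or rotation matrices may be used:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def joint_rotation_matrix(rx: float, ry: float, rz: float) -> np.ndarray:
    """Per-joint Euler angles (radians, rotations about the three
    spatial axes) -> 3x3 rotation matrix for that joint."""
    return Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()

# Example: 30 degrees of flexion about the x-axis for one finger joint.
M_joint = joint_rotation_matrix(np.deg2rad(30.0), 0.0, 0.0)
```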
Optionally, before determining the depth information of the wearable component's three-dimensional model according to the real-time posture, the image display method may further include performing posture detection on the target body part in the real-time image using a pre-trained three-dimensional posture detection model, to obtain the real-time posture of the target body part.
Thus, in the embodiments of the present disclosure, the electronic device can first perform posture detection on the target body part in the real-time image to obtain its real-time posture, and then determine the first depth information of the wearable component's three-dimensional model and the second depth information of the preset body part model according to that posture.
In some embodiments, the electronic device can synchronously rotate the wearable component's three-dimensional model and the preset body part model according to the real-time posture of the target body part, so that the model postures of the two are consistent with the real-time posture of the target body part, and then extract the first depth information of the wearable component's model and the second depth information of the preset body part model in that model posture.
In other embodiments, the electronic device can first perform feature point detection on the target body structure to which the target body part belongs in the real-time image, using a pre-trained feature point detection model, to obtain the feature points of the target body structure; then synchronously scale, rotate, and translate the wearable component's three-dimensional model and the preset body part model according to the real-time posture and feature points of the target body part, so that the model postures of the two are consistent with the real-time posture, real-time size, and real-time position of the target body part; and then extract the first depth information of the wearable component's model and the second depth information of the preset body part model.
In one example, taking the wearable component's three-dimensional model being a ring model and the target body part being the ring finger, a concrete way to scale, rotate, and translate the wearable component's model may be as follows:
The electronic device can generate the ring model's three-dimensional rotation matrix M_ring ∈ R^(3×3) from the real-time posture of the hand to which the ring finger belongs. The rotation matrix can be obtained by left-multiplying the ring-finger joint's three-dimensional rotation matrix M_finger by the wrist joint's three-dimensional rotation matrix M_wrist, that is, M_ring = M_wrist · M_finger.
With camera intrinsics of the electronic device such as the field of view (FOV) configured, the hand's scale in the real-time image can be estimated by computing the length between a pair of feature points, and the depth of the ring model's three-dimensional position can be inferred from that scale. The ring model's approximate three-dimensional position in the camera coordinate system, that is, the ring model's translation vector V_ring ∈ R^(3×1), can then be computed from the ring model's depth and the pixel coordinates of the ring finger's key points.
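As an illustrative sketch only (a pinhole-camera assumption; the exact estimation used in the disclosure is not specified beyond FOV-based scale reasoning, and all names below are hypothetical), the depth inference and back-projection can be pictured as:

```python
import numpy as np

def ring_translation(p1_px, p2_px, known_len_m, anchor_px, fx, fy, cx, cy):
    """Estimate V_ring, the ring model's translation in camera coordinates.

    p1_px, p2_px : pixel coordinates of a hand keypoint pair whose real
                   length known_len_m is assumed known in advance.
    anchor_px    : pixel position of the ring wearing point.
    fx, fy, cx, cy: intrinsics derived from the configured FOV.
    """
    pixel_len = np.linalg.norm(np.asarray(p1_px, float) - np.asarray(p2_px, float))
    z = fx * known_len_m / pixel_len   # similar triangles give the depth
    x = (anchor_px[0] - cx) * z / fx   # back-project the anchor pixel
    y = (anchor_px[1] - cy) * z / fy
    return np.array([x, y, z])         # V_ring
```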
Then, the three-dimensional rotation matrix M_ring ∈ R^(3×3) can be concatenated with the translation vector V_ring ∈ R^(3×1) to obtain the ring model's three-dimensional rotation-translation matrix T_ring = [M_ring | V_ring] ∈ R^(3×4).
Thus, each pixel of the ring's three-dimensional model can be rotated and translated using the rotation-translation matrix T_ring, yielding a ring model whose state is consistent with the real-time posture, real-time size, and real-time position of the target body part.
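As an illustrative sketch only (function and variable names are assumptions, not part of the original disclosure), composing T_ring and applying it to the model follows directly from the two formulas above:

```python
import numpy as np

def pose_ring_vertices(vertices, M_wrist, M_finger, V_ring):
    """Apply T_ring = [M_ring | V_ring] to the ring model.

    vertices : N x 3 ring-model vertices in model space.
    M_wrist, M_finger : 3 x 3 joint rotation matrices.
    V_ring   : length-3 translation vector in camera coordinates.
    """
    M_ring = M_wrist @ M_finger                                 # M_ring = M_wrist * M_finger
    T_ring = np.hstack([M_ring, np.reshape(V_ring, (3, 1))])    # 3 x 4 [M | V]
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # N x 4 homogeneous
    return homo @ T_ring.T                                      # N x 3 posed vertices
```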
In some embodiments of the present disclosure, the real-time unoccluded area may include the area of the target body part not occluded by an occluder, and the occluder may include non-target body structures other than the target body structure to which the target body part belongs.
In these embodiments, a concrete way for the electronic device to determine the to-be-rendered portion of the wearable component's three-dimensional model according to the real-time unoccluded area may include: first performing body-structure image segmentation on the real-time image to obtain a target body structure image, non-target body structure images, and a background image, with the target body part's real-time unoccluded area determined within the target body structure image; the electronic device can therefore determine the to-be-rendered portion of the wearable component's model from the target body structure image.
In the case where the wearable component is a component to be worn fitted against the target body part, after obtaining the target body structure image, the electronic device can take, as the to-be-rendered portion, the part of the wearable component's three-dimensional model located within the real-time unoccluded area corresponding to the target body structure image.
In the case where the wearable component is a component to be worn at least partially not fitted against the target body part, the wearable component's three-dimensional model may include a first model part corresponding to the fitted portion and a second model part corresponding to the non-fitted portion. After obtaining the target body structure image, the non-target body structure images, and the background image, the electronic device can take, as the to-be-rendered portion, the part of the first model part that falls within the target body part's real-time unoccluded area and the part of the second model part that falls within the real-time image's real-time unoccluded background area.
The real-time unoccluded background area may include the region corresponding to the background image and the regions of non-target body structure images corresponding to body structures that are not connected to any non-target body structure among the occluders.
In the case where the wearable component is a component to be worn entirely not fitted against the target body part, after obtaining the target body structure image, the non-target body structure images, and the background image, the electronic device can take, as the to-be-rendered portion, the part of the wearable component's three-dimensional model that falls within the target body part's real-time unoccluded area and the part that falls within the real-time image's real-time unoccluded background area.
Here too, the real-time unoccluded background area may include the region corresponding to the background image and the regions of non-target body structure images corresponding to body structures that are not connected to any non-target body structure among the occluders.
FIG. 4 is a schematic diagram of a renderable image region provided by an embodiment of the present disclosure.
As shown in FIG. 4, the target head 401 is occluded by a hand 402; the area of the target head 401 not occluded by the hand 402 is therefore the real-time unoccluded area 403 (the area of the target head 401 other than the shaded part). The hand 402 is connected to an upper limb 404, so the real-time unoccluded background area may include the regions corresponding to the image background 405 and the body 406. Thus, in the case where the wearable component is a helmet, since the helmet is worn without fully fitting against the target head 401, the real-time unoccluded area 403 and the real-time unoccluded background area can form the renderable image region, and the to-be-rendered portion of the helmet's three-dimensional model may include the part of the model located within that renderable image region.
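As an illustrative sketch only (mask names are assumptions mapped onto the FIG. 4 example, not part of the original disclosure), the renderable region for a non-fitted component can be pictured as a union of segmentation masks:

```python
import numpy as np

def renderable_region(head_mask, hand_mask, background_mask, body_mask):
    """Renderable image region for a non-fitted component such as the
    helmet of FIG. 4, given H x W boolean masks from image segmentation.

    The hand is the occluder connected to the upper limb; the background
    and body masks together stand in for the unoccluded background area.
    """
    unoccluded_head = head_mask & ~hand_mask             # area 403
    unoccluded_background = background_mask | body_mask  # areas 405 and 406
    return unoccluded_head | unoccluded_background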
Thus, in the case where the occluder includes non-target body structures other than the target body structure to which the target body part belongs, the electronic device can simulate the occlusion of the three-dimensional decoration effect by the non-target body structures, improving the integration of the added effect with the original image.
In other embodiments of the present disclosure, the real-time unoccluded area may include the area of the target body part not occluded by an occluder, and the occluder may include at least one of a non-body-part object and a non-target body structure other than the target body structure to which the target body part belongs.
Correspondingly, determining the to-be-rendered portion of the wearable component's three-dimensional model according to the real-time unoccluded area may specifically include:
performing, on the real-time image, image segmentation for the target body structure to which the target body part belongs, to obtain a target body structure image;
determining the real-time unoccluded area within the target body structure image; and
determining the to-be-rendered portion according to the real-time unoccluded area.
In these embodiments, the electronic device can perform image segmentation for the target body structure on the real-time image to obtain the target body structure image, take the region of the target body part located within that image as the real-time unoccluded area, and then determine the to-be-rendered portion of the wearable component's model according to the real-time unoccluded area.
In the case where the wearable component is a component to be worn fitted against the target body part, after obtaining the target body structure image, the electronic device can take, as the to-be-rendered portion, the part of the wearable component's three-dimensional model located within the real-time unoccluded area corresponding to the target body structure image.
FIG. 5 is a schematic diagram of another renderable image region provided by an embodiment of the present disclosure.
As shown in FIG. 5, neither the ring finger 501 nor the hand 502 is occluded by other body structures or fingers; therefore, the entire area of the ring finger 501 on the hand 502 is the real-time unoccluded area. In the case where the wearable component is a ring, since the ring is worn fully fitted against the ring finger 501, the real-time unoccluded area can form the renderable image region, and the to-be-rendered portion of the ring's three-dimensional model may include the part of the model located within that region, while the part of the wearable component located in the background region outside the hand 502 is not rendered; the background region, as the non-hand region, can be obtained through hand image segmentation.
In the embodiments of the present disclosure, in the case where the wearable component is other than a component worn fitted against the target body part, the method of determining the to-be-rendered portion is similar to that in the embodiments where the occluder includes non-target body structures, and is not repeated here.
Thus, in the case where the occluder includes at least one of a non-body-part object and a non-target body structure, the electronic device can simulate the occlusion of the three-dimensional decoration effect by the non-body-part object and the non-target body structure, improving the integration of the added effect with the original image.
In still other embodiments of the present disclosure, in addition to non-body-part objects and non-target body structures other than the target body structure to which the target body part belongs, the occluder may also include non-target body parts of the same body part type as the target body part, as shown in FIG. 2.
Correspondingly, determining the real-time unoccluded area within the target body structure image may specifically include:
performing, on the target body structure image, feature point detection for the target body structure, to obtain the feature points of the target body structure;
determining, according to the feature points, the occlusion region of the target body part occluded by the non-target body parts; and
determining, according to the occlusion region, the real-time unoccluded area within the target body structure image.
In these embodiments, the electronic device can first perform feature point detection on the target body structure image in the real-time image using a pre-trained feature point detection model, to obtain the feature points of the target body structure; then determine, from the feature points of the non-target body parts and the target body part, the occlusion region of the target body part occluded by the non-target body parts; and then determine, according to the occlusion region, the real-time unoccluded area within the target body structure image, so as to determine the to-be-rendered portion.
Specifically, the electronic device can first determine, among the feature points corresponding to the target body part, the first feature point and the second feature point closest to the wearing position of the wearable component; then determine, among all the feature points corresponding to the non-target body parts, the third and fourth feature points closest to the first feature point and the fifth and sixth feature points closest to the second feature point; and next compute the first midpoint between the first and third feature points, the second midpoint between the first and fourth feature points, the third midpoint between the second and fifth feature points, and the fourth midpoint between the second and sixth feature points. The first, second, third, and fourth midpoints are then divided into two groups, each group including the two midpoints corresponding to the same non-target body part.
Thus, the electronic device can generate, from the line segment connecting each group of midpoints, a parallelogram-shaped occlusion region corresponding to the non-target body part to which that group of midpoints belongs.
For example, the electronic device can take the line segment connecting each group of midpoints as the oblique side and generate a parallelogram-shaped occlusion region from a long side of preset length.
After generating the occlusion regions, the electronic device can overlay them on the target body structure image and take, as the real-time unoccluded area, the image region of the target body structure image not covered by the occlusion regions.
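As an illustrative sketch only (the code and names are assumptions, and the choice of a perpendicular direction for the long side is a simplification; the disclosure specifies only the oblique side and a preset long-side length), one group's parallelogram occlusion region can be rasterized like this:

```python
import numpy as np
import cv2

def occlusion_region(mid_a, mid_b, long_side_px, mask_shape):
    """Rasterize the parallelogram occlusion region of one adjacent finger.

    mid_a, mid_b : the two midpoints (pixels) of one group; the segment
                   connecting them is taken as the oblique side.
    long_side_px : preset long-side length in pixels.
    mask_shape   : (H, W) of the target body structure image.
    """
    a, b = np.float32(mid_a), np.float32(mid_b)
    d = b - a
    n = np.float32([-d[1], d[0]])                     # direction of the long side
    n = n / (np.linalg.norm(n) + 1e-6) * long_side_px
    corners = np.int32([a, b, b + n, a + n])          # parallelogram corners
    mask = np.zeros(mask_shape, np.uint8)
    cv2.fillConvexPoly(mask, corners, 1)
    return mask.astype(bool)
```

Overlaying the per-finger masks and negating their union over the target finger's segmentation then yields the real-time unoccluded area described above.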
Thus, in the case where the occluder includes at least one of a non-body-part object, a non-target body structure, and a non-target body part, the electronic device can simulate the occlusion of the three-dimensional decoration effect by all of them, improving the integration of the added effect with the original image.
FIG. 6 is a schematic diagram of yet another renderable image region provided by an embodiment of the present disclosure. FIG. 7 is a schematic diagram of an occlusion region provided by an embodiment of the present disclosure. FIG. 8 is a schematic diagram of still another renderable image region provided by an embodiment of the present disclosure.
As shown in FIG. 6, the hand 602 to which the ring finger 601 belongs is not occluded by other body structures; therefore, the real-time unoccluded area of the ring finger 601 can be determined within the hand 602.
Since the ring finger 601 overlaps the middle finger 603 and the little finger 604, the occlusion regions where the ring finger 601 is occluded by the little finger 604 and the middle finger 603 must be determined before the real-time unoccluded area of the ring finger 601 can finally be determined.
As shown in FIG. 7, the first feature point 605 and the second feature point 606 of the ring finger 601 are the feature points closest to the wearing position of the wearable component. The two feature points closest to the first feature point 605 are the third feature point 607 on the middle finger 603 and the fourth feature point 608 on the little finger 604; the two feature points closest to the second feature point 606 are the fifth feature point 609 on the middle finger 603 and the sixth feature point 610 on the little finger 604. The midpoint between the first feature point 605 and the third feature point 607 is the first midpoint 611; the midpoint between the first feature point 605 and the fourth feature point 608 is the second midpoint 612; the midpoint between the second feature point 606 and the fifth feature point 609 is the third midpoint 613; and the midpoint between the second feature point 606 and the sixth feature point 610 is the fourth midpoint 614. Taking the line segment connecting the first midpoint 611 and the third midpoint 613 as the oblique side, a parallelogram-shaped first occlusion region 615 corresponding to the middle finger 603 is generated from a long side of preset length. Taking the line segment connecting the second midpoint 612 and the fourth midpoint 614 as the oblique side, a parallelogram-shaped second occlusion region 616 corresponding to the little finger 604 is generated from a long side of preset length.
As shown in FIG. 8, the first occlusion region 615 and the second occlusion region 616 are overlaid on the hand 602, and the area of the ring finger 601 not covered by the first occlusion region 615 and the second occlusion region 616 is taken as the real-time unoccluded area.
In the case where the wearable component is a ring, since the ring is worn fully fitted against the ring finger 601, the real-time unoccluded area can form the renderable image region, and the to-be-rendered portion of the ring's three-dimensional model may include the part of the model located within that region.
Thus, in the embodiments of the present disclosure, when the fingers are held together, the occlusion of the three-dimensional decoration effect worn on the ring finger by the other fingers can be simulated, improving the integration of the added effect with the original image and preventing the effect from cutting into the fingers adjacent to the ring finger.
In the embodiments of the present disclosure, the decoration's three-dimensional model may include the wearable component's three-dimensional model and the preset body part model. The electronic device can render, according to the real-time posture and the real-time unoccluded area, the portion of the wearable component's model not occluded by the preset body part model, to obtain the target three-dimensional image.
The preset body part model may be a model preset according to actual application requirements for simulating the target body part, which is not limited here. For example, if the target body part is the target head, the preset body part model may be a preset standard head model; if the target body part is a finger, the preset body part model may be a cylinder, a cuboid, or the like.
Specifically, the wearable component's three-dimensional model can be worn on the preset body part model in the same manner in which it is worn on the target body part. The preset body part model can be scaled, rotated, and translated synchronously with the wearable component's model according to the real-time posture and feature points of the target body part, so as to determine the first depth information of the wearable component's model and the second depth information of the preset body part model.
The first depth information may include the first depth at each pixel of the wearable component's three-dimensional model, and the second depth information may include the second depth at each pixel of the preset body part model.
The electronic device can compare the first depth at each pixel of the wearable component's model with the second depth at that pixel, and determine whether the pixel lies within the to-be-rendered portion; if the first depth is smaller than the second depth and the pixel lies within the to-be-rendered portion, that pixel of the wearable component's model is rendered.
FIG. 9 is a schematic diagram of a preset occlusion model provided by an embodiment of the present disclosure.
As shown in FIG. 9, the wearable component's three-dimensional model may be a ring model 901, and the preset body part model may be a cylinder 902 used to simulate the ring finger, with the ring model 901 fitted around the cylinder 902. The electronic device can first scale, rotate, and translate the ring model 901 and the cylinder 902 synchronously according to the real-time posture and feature points of the finger wearing the ring, and then obtain the first depth at each pixel of the ring model 901 and the second depth at each pixel of the cylinder 902. If the first depth at a pixel is smaller than the second depth, the ring model 901 is closer to the image surface at that pixel than the cylinder 902, so the ring model 901 can be rendered at that pixel; otherwise, if the first depth at the pixel is larger than the second depth, the ring model 901 is not rendered at that pixel.
Thus, the electronic device can use the preset body part model to carry out the rendering of the wearable component's three-dimensional model, simulating the occlusion of the three-dimensional decoration effect by the target body part itself and improving the integration of the added effect with the original image.
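As an illustrative sketch only (the depth maps and mask names below are assumptions for explanation, not part of the original disclosure), the per-pixel test described above reduces to a depth comparison intersected with the to-be-rendered mask:

```python
import numpy as np

def ring_pixels_to_render(ring_depth, proxy_depth, to_render_mask):
    """Per-pixel visibility of the ring model against the finger proxy.

    ring_depth    : H x W first depth of the ring model 901
                    (np.inf where the model covers no pixel).
    proxy_depth   : H x W second depth of the cylinder 902.
    to_render_mask: H x W boolean to-be-rendered portion derived from
                    the real-time unoccluded area.
    """
    in_front = ring_depth < proxy_depth   # nearer to the image surface
    return in_front & to_render_mask
```

This is essentially a depth test against a proxy geometry: the cylinder never appears in the output, but its depths cull the back half of the ring so the finger appears to pass through it.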
In yet another implementation of the present disclosure, the real-time image may be an image including the target body structure to which the target body part belongs. To further avoid glitch frames, the electronic device can also perform recognition of the target body part on the real-time image in real time, and display the composite image only when the target body part is recognized.
Specifically, after acquiring the real-time image, the electronic device can first determine whether the target body part is shown in the target body structure in the real-time image, that is, whether the target body part appears in the real-time image; if the target body part is recognized in the real-time image, the composite image can be displayed; otherwise, the real-time image is displayed.
Since the real-time image acquired by the electronic device may change from moment to moment, the electronic device needs to recognize in real time whether the acquired real-time image shows the target body part, and determine the image to be displayed according to the recognition result, further avoiding glitch frames.
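As an illustrative sketch only (detect_target_part and render_composite are placeholder interfaces assumed for explanation, not part of the original disclosure), the per-frame gating can be pictured as:

```python
def frame_to_display(frame, detect_target_part, render_composite):
    """Gate the composite on recognition of the target body part."""
    detection = detect_target_part(frame)
    if detection is None:                  # target body part not found
        return frame                       # fall back to the raw image
    return render_composite(frame, detection)
```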
In still another implementation of the present disclosure, the target three-dimensional image may be obtained by rendering a functional component's three-dimensional model according to a target relative position and the real-time posture, and rendering the wearable component's three-dimensional model according to the target relative position, the real-time posture, and the real-time unoccluded area, where the target relative position may be the relative position between the functional component's model and the wearable component's model in the real-time posture.
The functional component's three-dimensional model may be the three-dimensional model corresponding to a functional component.
In some embodiments, the functional component may be a component with a decorative function, such as a diamond or a bow.
In other embodiments, the functional component may also be a component with a practical function, such as a searchlight or an antenna.
Specifically, the functional component and the wearable component together can form the complete three-dimensional decoration effect.
In the embodiments of the present disclosure, the electronic device can render the functional component's three-dimensional model according to the target relative position and the real-time posture, and render the wearable component's three-dimensional model according to the target relative position, the real-time posture, and the real-time unoccluded area, to obtain the target three-dimensional image, and then superimpose the target three-dimensional image on the target body part in the real-time image to obtain the composite image.
Furthermore, the decoration's three-dimensional model may also include the wearable component's model and the functional component's model, arranged according to a preset relative position. The electronic device can synchronously rotate the two models according to the real-time posture of the target body part so that their model postures are consistent with it, and obtain the target relative position of the functional component's model and the wearable component's model in the posture consistent with the real-time posture of the target body part; then render, according to the target relative position and the real-time posture, the portion of the functional component's model not occluded by the wearable component's model, and render, according to the target relative position, the real-time posture, and the real-time unoccluded area, the portion of the wearable component's model not occluded by the functional component's model, to obtain the target three-dimensional image; and finally superimpose the target three-dimensional image on the target body part in the real-time image to obtain the composite image.
In some embodiments of the present disclosure, before S120 in FIG. 1, the image display method may further include:
determining, according to the real-time posture, the upper-surface yaw angle of the functional component's three-dimensional model;
in the case where the upper-surface yaw angle falls within a first preset angle range, rendering the wearable component's three-dimensional model and the functional component's three-dimensional model, to obtain the target three-dimensional image; and
in the case where the upper-surface yaw angle falls within a second preset angle range, rendering the wearable component's three-dimensional model, to obtain the target three-dimensional image.
In the embodiments of the present disclosure, the electronic device can determine the model posture of the functional component's three-dimensional model according to the real-time posture, determine the upper-surface yaw angle of the model from that model posture, and decide which preset angle range the yaw angle falls in: if it falls within the first preset angle range, both the wearable component's and the functional component's models are rendered, yielding a target three-dimensional image that includes both components; otherwise, only the wearable component's model is rendered, yielding a target three-dimensional image that includes only the wearable component.
Specifically, the electronic device can scale, rotate, and translate the functional component's model and the wearable component's model synchronously according to the real-time posture, and after doing so, determine the yaw angle of the upper surface of the functional component's model.
The upper surface may be a surface of the functional component's three-dimensional model preset according to actual application requirements, which is not limited here. The first preset angle range may be an angle range, preset according to actual application requirements, within which the upper surface faces a direction visible to the user, which is not limited here. The second preset angle range may be an angle range, preset according to actual application requirements, within which the upper surface faces away from the user, which is not limited here.
For example, the first preset angle range may be the range [0°, 100°] in both the clockwise and counterclockwise directions, and the second preset angle range may be the angle range outside the first preset angle range.
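As an illustrative sketch only (function and component names are assumptions, not part of the original disclosure), gating on the example ranges above reduces to a single threshold on the absolute yaw angle:

```python
def models_to_render(upper_surface_yaw_deg):
    """Select the component models to render from the upper-surface yaw
    angle; within [0°, 100°] in either direction the upper surface is
    considered visible to the user."""
    if abs(upper_surface_yaw_deg) <= 100.0:           # first preset range
        return ("wearable_component", "functional_component")
    return ("wearable_component",)                    # second preset range
```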
FIG. 10 is a schematic diagram of yet another composite image provided by an embodiment of the present disclosure.
As shown in FIG. 10, the composite image may be an image including a ring finger 1001, the wearable component may be a ring 1002, and the functional component may be a diamond 1003. From the posture of the ring finger 1001, the upper-surface yaw angle of the diamond is determined to fall within the first preset range, that is, the upper surface of the diamond's three-dimensional model faces the user. Therefore, the diamond's model can be rendered according to the real-time posture of the finger, and the ring's model can be rendered according to the finger's real-time posture combined with the real-time unoccluded area, to obtain the ring 1002 and the diamond 1003 in the target three-dimensional image; the target three-dimensional image is then superimposed and displayed at the ring wearing position on the ring finger 1001, so the user can see the ring effect and the diamond effect at the same time.
FIG. 11 is a schematic diagram of still another composite image provided by an embodiment of the present disclosure.
As shown in FIG. 11, the composite image may be an image including a ring finger 1101, the wearable component may be a ring 1102, and the functional component may be a diamond. From the posture of the ring finger 1101, the upper-surface yaw angle of the diamond is determined to fall within the second preset range, that is, the upper surface of the diamond's three-dimensional model faces away from the user. Therefore, only the wearable component's model can be rendered to obtain the three-dimensional ring 1102, which is superimposed and displayed at the ring wearing position on the ring finger 1101, so the user sees only the ring effect.
Optionally, the electronic device can also render the functional component's three-dimensional model according to the real-time posture of the target body part and the real-time unoccluded background area, to obtain the functional component in the target three-dimensional image.
Specifically, in the case where the upper-surface yaw angle falls within the preset angle range, the electronic device can render the portion of the functional component's three-dimensional model located within the real-time unoccluded background area, to obtain the functional component in the target three-dimensional image, which is not repeated here.
Thus, the electronic device can further simulate the occlusion of the functional component of the three-dimensional decoration effect by occluders, improving the integration of the added effect with the original image.
In summary, the image display method provided by the embodiments of the present disclosure can simulate the occlusion of the decoration effect at the pixel level in a variety of ways, so as to model a more precise occlusion relationship. When the decoration effect is displayed, this greatly improves the authenticity with which any object in the image occludes the effect, improves the integration of the added three-dimensional decoration effect with the original image, avoids glitch frames, and strengthens the user's sense of immersion, thereby improving the user experience.
Embodiments of the present disclosure further provide an image display apparatus capable of implementing the above image display method. The image display apparatus provided by the embodiments of the present disclosure is described below with reference to FIG. 12.
In the embodiments of the present disclosure, the image display apparatus may be an electronic device. The electronic device may include devices with communication functions such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle-mounted terminals, wearable electronic devices, all-in-one machines, and smart home devices, and may also be a virtual machine or a device simulated by a simulator.
FIG. 12 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present disclosure.
As shown in FIG. 12, the image display apparatus 1200 may include an acquisition unit 1210 and a display unit 1220.
The acquisition unit 1210 may be configured to acquire a real-time image of a target body part.
The display unit 1220 may be configured to display a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
In the embodiments of the present disclosure, after the real-time image of the target body part is acquired, the composite image obtained by superimposing the target three-dimensional image on the target body part in the real-time image can be displayed in real time, where the target three-dimensional image is obtained by rendering the three-dimensional model of the wearable component according to the real-time posture and real-time unoccluded area of the target body part, and the real-time posture and real-time unoccluded area are determined directly from the real-time image. The purpose of automatically adding a three-dimensional decoration effect with a wearable component to the real-time image can thus be achieved. Since the posture and occlusion of the body part wearing the decoration are taken into account while the effect is added, the integration of the added effect with the original image can be improved and glitch frames avoided, thereby improving the user experience.
In some embodiments of the present disclosure, the real-time posture may include a real-time rotational posture of the target body part.
In some embodiments of the present disclosure, the real-time unoccluded area may include the area of the target body part not occluded by an occluder, and the occluder may include at least one of a non-body-part object, a non-target body structure other than the target body structure to which the target body part belongs, and a non-target body part of the same body part type as the target body part.
In some embodiments of the present disclosure, the target body part may include a target finger, the wearable component's three-dimensional model may be a three-dimensional model of a component worn on the target finger, the real-time unoccluded area may include the area of the target finger in the real-time image not occluded by an occluder, and the occluder may include non-body-part objects, body structures other than the hand to which the target finger belongs, and fingers other than the target finger.
In some embodiments of the present disclosure, the target three-dimensional image may be obtained by rendering, according to the real-time posture and the real-time unoccluded area, the portion of the wearable component's three-dimensional model not occluded by a preset body part model, where the preset body part model may be used to simulate the target body part.
In some embodiments of the present disclosure, the image display apparatus 1200 may further include a first processing unit, a second processing unit, and a first rendering unit.
The first processing unit may be configured to determine, according to the real-time posture, first depth information of the wearable component's three-dimensional model and second depth information of the preset body part model.
The second processing unit may be configured to determine, according to the real-time unoccluded area, the to-be-rendered portion of the wearable component's three-dimensional model.
The first rendering unit may be configured to render, according to the first depth information and the second depth information, the part of the to-be-rendered portion whose depth is smaller than that of the preset body part model, to obtain the target three-dimensional image.
In some embodiments of the present disclosure, the real-time unoccluded area may include the area of the target body part not occluded by an occluder, and the occluder may include at least one of a non-body-part object and a non-target body structure other than the target body structure to which the target body part belongs.
Correspondingly, the second processing unit may include a first processing subunit, a second processing subunit, and a third processing subunit.
The first processing subunit may be configured to perform, on the real-time image, image segmentation for the target body structure to which the target body part belongs, to obtain a target body structure image.
The second processing subunit may be configured to determine the real-time unoccluded area within the target body structure image.
The third processing subunit may be configured to determine the to-be-rendered portion according to the real-time unoccluded area.
In some embodiments of the present disclosure, the occluder may also include a non-target body part of the same body part type as the target body part.
Correspondingly, the second processing subunit may be further configured to:
perform, on the target body structure image, feature point detection for the target body structure, to obtain the feature points of the target body structure; determine, according to the feature points, the occlusion region of the target body part occluded by the non-target body part; and determine, according to the occlusion region, the real-time unoccluded area within the target body structure image.
In some embodiments of the present disclosure, the target three-dimensional image may be obtained by rendering the functional component's three-dimensional model according to a target relative position and the real-time posture, and rendering the wearable component's three-dimensional model according to the target relative position, the real-time posture, and the real-time unoccluded area, where the target relative position may be the relative position between the functional component's model and the wearable component's model in the real-time posture.
In some embodiments of the present disclosure, the image display apparatus 1200 may further include a third processing unit, a second rendering unit, and a third rendering unit.
The third processing unit may be configured to determine, according to the real-time posture, the upper-surface yaw angle of the functional component's three-dimensional model.
The second rendering unit may be configured to render the wearable component's three-dimensional model and the functional component's three-dimensional model to obtain the target three-dimensional image in the case where the upper-surface yaw angle falls within the first preset angle range.
The third rendering unit may be configured to render the wearable component's three-dimensional model to obtain the target three-dimensional image in the case where the upper-surface yaw angle falls within the second preset angle range.
It should be noted that the image display apparatus 1200 shown in FIG. 12 can perform the steps of the method embodiments shown in FIG. 1 to FIG. 11 and realize the processes and effects of those method embodiments, which are not repeated here.
Embodiments of the present disclosure further provide an image display device, which may include a processor and a memory, the memory being used to store executable instructions. The processor may be used to read the executable instructions from the memory and execute them to implement the image display method in the above embodiments.
FIG. 13 is a schematic structural diagram of an image display device provided by an embodiment of the present disclosure, showing a structure suitable for implementing the image display device 1300 in the embodiments of the present disclosure.
The image display device 1300 in the embodiments of the present disclosure may be an electronic device. The electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as in-vehicle navigation terminals), and wearable devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
It should be noted that the image display device 1300 shown in FIG. 13 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 13, the image display device 1300 may include a processing apparatus (such as a central processing unit or a graphics processor) 1301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage apparatus 1308 into a random access memory (RAM) 1303. The RAM 1303 also stores various programs and data required for the operation of the image display device 1300. The processing apparatus 1301, the ROM 1302, and the RAM 1303 are connected to one another through a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
Generally, the following apparatuses may be connected to the I/O interface 1305: input apparatuses 1306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 1307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 1308 including, for example, a magnetic tape and a hard disk; and a communication apparatus 1309. The communication apparatus 1309 may allow the image display device 1300 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 13 shows the image display device 1300 with various apparatuses, it should be understood that implementing or providing all of the illustrated apparatuses is not required; more or fewer apparatuses may alternatively be implemented or provided.
Embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the image display method in the above embodiments.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs.
Embodiments of the present disclosure further provide a computer program product, which may include a computer program that, when executed by a processor, causes the processor to implement the image display method in the above embodiments.
For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus 1309, installed from the storage apparatus 1308, or installed from the ROM 1302. When the computer program is executed by the processing apparatus 1301, the above functions defined in the image display method of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP, and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), a peer-to-peer network (for example, an ad hoc peer-to-peer network), and any currently known or future-developed network.
The computer-readable medium may be included in the above image display device, or may exist alone without being assembled into the image display device.
The computer-readable medium carries one or more programs which, when executed by the image display device, cause the image display device to:
acquire a real-time image of a target body part; and display a composite image in real time, the composite image being an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, where the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
In the embodiments of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages, such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, connected through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or part of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so forth.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The above description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a specific order, this should not be understood as requiring that they be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or logical actions of methods, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (14)

  1. An image display method, characterized by comprising:
    acquiring a real-time image of a target body part;
    displaying a composite image in real time, wherein the composite image is an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
  2. The method according to claim 1, characterized in that the real-time unoccluded area comprises an area of the target body part not occluded by an occluder, and the occluder comprises at least one of a non-body-part object, a non-target body structure other than a target body structure to which the target body part belongs, and a non-target body part of the same body part type as the target body part.
  3. The method according to claim 2, characterized in that the target body part comprises a target finger, the three-dimensional model of the wearable component is a three-dimensional model of a component to be worn on the target finger, the real-time unoccluded area comprises an area of the target finger in the real-time image not occluded by an occluder, and the occluder comprises a non-body-part object, a body structure other than a hand to which the target finger belongs, and a finger other than the target finger.
  4. The method according to claim 1, characterized in that the target three-dimensional image is obtained by rendering, according to the real-time posture and the real-time unoccluded area, a portion of the three-dimensional model of the wearable component not occluded by a preset body part model, and the preset body part model is used to simulate the target body part.
  5. The method according to claim 4, characterized in that, before the displaying a composite image in real time, the method further comprises:
    determining, according to the real-time posture, first depth information of the three-dimensional model of the wearable component and second depth information of the preset body part model;
    determining, according to the real-time unoccluded area, a to-be-rendered portion of the three-dimensional model of the wearable component; and
    rendering, according to the first depth information and the second depth information, a part of the to-be-rendered portion whose depth is smaller than that of the preset body part model, to obtain the target three-dimensional image.
  6. The method according to claim 5, characterized in that the real-time unoccluded area comprises an area of the target body part not occluded by an occluder, and the occluder comprises at least one of a non-body-part object and a non-target body structure other than a target body structure to which the target body part belongs;
    wherein the determining, according to the real-time unoccluded area, a to-be-rendered portion of the three-dimensional model of the wearable component comprises:
    performing, on the real-time image, image segmentation for the target body structure to which the target body part belongs, to obtain a target body structure image;
    determining the real-time unoccluded area in the target body structure image; and
    determining the to-be-rendered portion according to the real-time unoccluded area.
  7. The method according to claim 6, characterized in that the occluder further comprises a non-target body part of the same body part type as the target body part;
    wherein the determining the real-time unoccluded area in the target body structure image comprises:
    performing, on the target body structure image, feature point detection for the target body structure, to obtain feature points of the target body structure;
    determining, according to the feature points, an occlusion region of the target body part occluded by the non-target body part; and
    determining, according to the occlusion region, the real-time unoccluded area in the target body structure image.
  8. The method according to claim 1, characterized in that the target three-dimensional image is obtained by rendering a three-dimensional model of a functional component according to a target relative position and the real-time posture and rendering the three-dimensional model of the wearable component according to the target relative position, the real-time posture, and the real-time unoccluded area, and the target relative position is a relative position between the three-dimensional model of the functional component and the three-dimensional model of the wearable component in the real-time posture.
  9. The method according to claim 8, characterized in that, before the displaying a composite image in real time, the method further comprises:
    determining, according to the real-time posture, an upper-surface yaw angle of the three-dimensional model of the functional component;
    in the case where the upper-surface yaw angle falls within a first preset angle range, rendering the three-dimensional model of the wearable component and the three-dimensional model of the functional component, to obtain the target three-dimensional image; and
    in the case where the upper-surface yaw angle falls within a second preset angle range, rendering the three-dimensional model of the wearable component, to obtain the target three-dimensional image.
  10. The method according to claim 1, characterized in that the real-time posture comprises a real-time rotational posture of the target body part.
  11. An image display apparatus, characterized by comprising:
    an acquisition unit configured to acquire a real-time image of a target body part; and
    a display unit configured to display a composite image in real time, wherein the composite image is an image obtained by superimposing a target three-dimensional image on the target body part in the real-time image, the target three-dimensional image is obtained by rendering a three-dimensional model of a wearable component according to a real-time posture and a real-time unoccluded area of the target body part, and the real-time posture and the real-time unoccluded area are determined according to the real-time image.
  12. An image display device, characterized by comprising:
    a processor; and
    a memory for storing executable instructions,
    wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the image display method according to any one of claims 1 to 10.
  13. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the image display method according to any one of claims 1 to 10.
  14. A computer program product, characterized in that the computer program product, when run on a terminal device, causes the terminal device to perform the method according to any one of claims 1 to 10.
PCT/CN2022/074871 2021-02-10 2022-01-29 Image display method, apparatus, device, and medium WO2022171020A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/264,886 US20240054719A1 (en) 2021-02-10 2022-01-29 Image display method and apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110185451.7A 2021-02-10 2021-02-10 Image display method, apparatus, device, and medium
CN202110185451.7 2021-02-10

Publications (1)

Publication Number Publication Date
WO2022171020A1 true WO2022171020A1 (zh) 2022-08-18

Family

ID=82838262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074871 WO2022171020A1 (zh) 2021-02-10 2022-01-29 图像显示方法、装置、设备及介质

Country Status (3)

Country Link
US (1) US20240054719A1 (zh)
CN (1) CN114943816A (zh)
WO (1) WO2022171020A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110221690A (zh) * 2019-05-13 2019-09-10 Oppo广东移动通信有限公司 AR-scene-based gesture interaction method and apparatus, storage medium, and communication terminal
US20200013212A1 (en) * 2017-04-04 2020-01-09 Intel Corporation Facial image replacement using 3-dimensional modelling techniques
CN111369686A (zh) * 2020-03-03 2020-07-03 足购科技(杭州)有限公司 AR imaging virtual shoe try-on method and apparatus capable of handling partial occluders
CN111754303A (zh) * 2020-06-24 2020-10-09 北京字节跳动网络技术有限公司 Virtual clothing changing method and apparatus, device, and medium

Also Published As

Publication number Publication date
CN114943816A (zh) 2022-08-26
US20240054719A1 (en) 2024-02-15

Similar Documents

Publication Publication Date Title
US11838518B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
WO2019223468A1 (zh) 相机姿态追踪方法、装置、设备及系统
CN110954083B (zh) 移动设备的定位
WO2019205842A1 (zh) 相机姿态追踪过程的重定位方法、装置及存储介质
CN111738220A (zh) 三维人体姿态估计方法、装置、设备及介质
WO2018214697A1 (zh) 图形处理方法、处理器和虚拟现实系统
JP6177872B2 (ja) 入出力装置、入出力プログラム、および入出力方法
US20220100265A1 (en) Dynamic configuration of user interface layouts and inputs for extended reality systems
JP6250024B2 (ja) キャリブレーション装置、キャリブレーションプログラム、およびキャリブレーション方法
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
WO2022007627A1 (zh) 一种图像特效的实现方法、装置、电子设备及存储介质
CN107274491A (zh) 一种三维场景的空间操控虚拟实现方法
WO2023179346A1 (zh) 特效图像处理方法、装置、电子设备及存储介质
WO2014128751A1 (ja) ヘッドマウントディスプレイ装置、ヘッドマウントディスプレイ用プログラム、およびヘッドマウントディスプレイ方法
WO2022188708A1 (zh) 基于增强现实的试鞋方法、装置和电子设备
JP7469510B2 (ja) 画像処理方法、装置、電子機器及びコンピュータ可読記憶媒体
KR20200111119A (ko) 가상 종이
JP6250025B2 (ja) 入出力装置、入出力プログラム、および入出力方法
US20200005507A1 (en) Display method and apparatus and electronic device thereof
US10296098B2 (en) Input/output device, input/output program, and input/output method
WO2022171020A1 (zh) 图像显示方法、装置、设备及介质
WO2022083213A1 (zh) 图像生成方法、装置、设备和计算机可读介质
CN109685881B (zh) 一种体绘制方法、装置及智能设备
KR102534449B1 (ko) 이미지 처리 방법, 장치, 전자 장치 및 컴퓨터 판독 가능 저장 매체
CN109062413A (zh) 一种ar交互系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22752181

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18264886

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22752181

Country of ref document: EP

Kind code of ref document: A1