CN111462337A - An image processing method, device and computer-readable storage medium - Google Patents


Publication number: CN111462337A
Authority: CN (China)
Prior art keywords: image, human body, virtual, key points, distance
Legal status: Granted
Application number: CN202010231304.4A
Original language: Chinese (zh)
Other versions: CN111462337B
Inventors: 赵琦, 颜忠伟, 毕铎, 王科
Current Assignee: MIGU Culture Technology Co Ltd
Original Assignee: MIGU Culture Technology Co Ltd
Application filed by MIGU Culture Technology Co Ltd
Priority: CN202010231304.4A
Legal status: Active (published as CN111462337A; application granted and published as CN111462337B)

Classifications

    • G06T19/006 Mixed reality (manipulating 3D models or images for computer graphics)
    • G06F18/22 Matching criteria, e.g. proximity measures (pattern recognition)
    • G06T7/50 Depth or shape recovery (image analysis)
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V40/10 Human or animal bodies; body parts, e.g. hands


Abstract

The invention discloses an image processing method, a device, and a computer-readable storage medium, relating to the field of communication technology and aiming to improve the display effect of an AR group-photo image of a real user and a virtual object. The method includes: acquiring a projected image of a user on a display screen; extracting a human body contour from the projected image; acquiring a virtual image of a virtual object according to the human body contour, where the matching degree between the contour of the virtual object and the human body contour satisfies a first preset requirement; determining a relative positional relationship between the virtual image and the projected image; and obtaining an AR image according to the projected image, the virtual image, and the relative positional relationship. Embodiments of the present invention can improve the display effect of an AR group-photo image of a real user and a virtual object.

Description

An image processing method, device and computer-readable storage medium

Technical Field

The present invention relates to the technical field of image processing, and in particular to an image processing method, a device, and a computer-readable storage medium.

Background Art

In AR (Augmented Reality) photography, the height and pose of a virtual object (such as a virtual character) are fixed. However, when users take group photos with a virtual character, different users have different heights and photographing poses. In this situation, how to improve the realism of photographing a real user together with a virtual character, and thereby improve the display effect of the AR image, is a technical problem that needs to be solved.

Summary of the Invention

Embodiments of the present invention provide an image processing method, a device, and a computer-readable storage medium, so as to improve the display effect of an AR group-photo image of a real user and a virtual object.

In a first aspect, an embodiment of the present invention provides an image processing method, including:

acquiring a projected image of a user on a display screen;

extracting a human body contour from the projected image;

acquiring a virtual image of a virtual object according to the human body contour, where the matching degree between the contour of the virtual object and the human body contour satisfies a first preset requirement;

determining a relative positional relationship between the virtual image and the projected image; and

obtaining an AR image according to the projected image, the virtual image, and the relative positional relationship.

The extracting of the human body contour from the projected image includes:

performing image conversion on each of the projected images to obtain at least one grayscale image;

calculating the average of the grayscale images to obtain a background grayscale image; and

calculating the difference between each grayscale image and the background grayscale image to obtain the human body contour of the user.

The virtual object includes a virtual character, and the acquiring of the virtual image of the virtual object according to the human body contour includes:

determining first key points on the human body contour, where the first key points include at least head key points and hand key points;

determining a target contour of the virtual object in a candidate image of the virtual object;

determining, corresponding to the first key points, second key points on the target contour, where the second key points include at least head key points and hand key points;

calculating a similarity between the human body contour and the target contour based on the first key points and the second key points; and

if the similarity satisfies a second preset requirement, using the candidate image as the virtual image.

The calculating of the similarity between the human body contour and the target contour based on the first key points and the second key points includes:

for each first target key point among the first key points, calculating the Euclidean distance between the first target key point and a second target key point, where the second target key point is the key point among the second key points that corresponds to the first target key point; and

calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distances.

The calculating of the similarity between the human body contour and the target contour based on the obtained Euclidean distances includes:

multiplying each Euclidean distance by its corresponding weight to obtain a first value for each Euclidean distance; and

adding the first values together to obtain the similarity between the human body contour and the target contour.

The method further includes:

presetting the weights, where the weight of a Euclidean distance obtained from head key points and/or hand key points is greater than the weight of a Euclidean distance obtained from other key points.

The determining of the relative positional relationship between the virtual image and the projected image includes:

determining the distance between the actual photographing position of the user and the projected image; and

determining a depth distance between the virtual image and the projected image according to the distance.

The determining of the depth distance between the virtual image and the projected image according to the distance includes:

determining the depth distance between the virtual image and the projected image from the distance using the following formula:

Δd = (Δθ · D²) / P

where Δd denotes the depth distance, Δθ denotes the binocular parallax of the user, D denotes the distance between the actual photographing position of the user and the projected image of the user on the display screen, P denotes the distance between the user's eyes, and Δθ and P are constants.

In a second aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a program stored on the memory and executable on the processor; the processor is configured to read the program in the memory to implement the steps of the image processing method described in the first aspect.

In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image processing method described in the first aspect.

In embodiments of the present invention, the human body contour is extracted from the projected image of the user on the display screen, and the virtual image of the virtual object is obtained according to the human body contour. Then, an AR group photo is composed according to the relative positional relationship between the virtual object and the projected image, the projected image, and the virtual image. Since the matching degree between the contour of the virtual object and the human body contour satisfies the first preset requirement, and the relative positional relationship between the virtual object and the projected image is taken into account when composing the photo, the AR group photo obtained by the embodiments of the present invention closely matches the shape and pose of the virtual object to those of the user, which enhances the realism of the image and improves the display effect of the AR image.

Brief Description of the Drawings

To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.

Figure 1 is a flowchart of an image processing method provided by an embodiment of the present invention;

Figure 2 is a schematic diagram of human binocular imaging;

Figure 3 is a mathematical schematic diagram of the binocular imaging principle;

Figure 4 is the first schematic diagram of photographing provided by an embodiment of the present invention;

Figure 5 is the second schematic diagram of photographing provided by an embodiment of the present invention;

Figure 6 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention;

Figure 7 is a structural diagram of an electronic device provided by an embodiment of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Figure 1 is a flowchart of the image processing method provided by an embodiment of the present invention. As shown in Figure 1, the method includes the following steps.

Step 101: Acquire a projected image of the user on the display screen.

When a user wants to take a photo, a camera is usually used, so the user's image is shown on the display screen. In the embodiments of the present invention, this displayed image of the user is called the projected image. The projected image needs to include the human body contour of the user; preferably, since an AR group photo is taken with a virtual object (such as a virtual character or a virtual item), the projected image should include the user's complete body contour. The user's body contour reflects information such as the user's height and standing posture.

Step 102: Extract the human body contour from the projected image.

In the embodiment of the present invention, multiple projected images of the user are acquired consecutively, and the human body contour is then determined based on these images. Specifically, in this step, each of the acquired projected images is converted to obtain at least one grayscale image, where each projected image has a corresponding grayscale image. Then, the average of the grayscale images is calculated to obtain a background grayscale image. Finally, the difference between each grayscale image and the background grayscale image is calculated to obtain the human body contour of the user. In this way, the obtained body contour information is more accurate.

Taking five consecutively captured images of the user as an example, the five images are converted into grayscale images, denoted f_gi(x, y), i = 1, 2, 3, 4, 5. The grayscale images of the five images are summed according to the following formula (1) and averaged; the result is the grayscale image of the background, denoted f_b(x, y):

f_b(x, y) = (1/5) · Σ_{i=1}^{5} f_gi(x, y)    (1)

Then, taking the difference between each grayscale image and the background grayscale image yields the human body contour information, which can be expressed as formula (2):

f_d(x, y) = |f_gi(x, y) − f_b(x, y)|    (2)

where f_d(x, y) denotes the human body contour information.
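The averaging and differencing steps of formulas (1) and (2) can be sketched in a few lines. This is a minimal NumPy sketch rather than the patent's implementation; the foreground threshold and the rule for combining the per-frame differences into one mask are assumptions added for illustration.

```python
import numpy as np

def extract_body_contour(gray_frames, threshold=30):
    """Background subtraction in the spirit of formulas (1) and (2):
    average the grayscale frames to estimate the background f_b,
    then mark pixels whose difference |f_gi - f_b| is large."""
    frames = np.stack([f.astype(np.float32) for f in gray_frames])
    background = frames.mean(axis=0)        # formula (1): f_b = average of f_gi
    diffs = np.abs(frames - background)     # formula (2): f_d = |f_gi - f_b|
    # Assumption: a pixel belongs to the person if its deviation from the
    # background exceeds the threshold in any of the frames.
    return (diffs > threshold).any(axis=0)
```

A geometric contour could then be traced along the boundary of the returned mask; the patent does not specify that step.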

Step 103: Acquire a virtual image of a virtual object according to the human body contour, where the matching degree between the contour of the virtual object and the human body contour satisfies a first preset requirement.

The first preset requirement may be that the matching degree is greater than a certain preset value, which can be set according to actual needs.

Taking a virtual character as the virtual object as an example, all images of the virtual character in the virtual character library can be searched and compared against the current human body contour information to find the pose of the virtual character most similar to the human body contour. The matching process focuses on the user's height and hand pose. When using the Euclidean distance to compute the similarity of human body contours, the weights of the head pose and hand pose are appropriately increased, so that the similarity of the head and hands is emphasized. By matching against all images, the image of the virtual character most similar to the user's height and pose is found.

Specifically, in this step, the virtual image of the virtual object can be acquired as follows.

Step 1031: Determine first key points on the human body contour, where the first key points include at least head key points and hand key points.

Here, the first key points can be determined on the human body contour by marking. Of course, the first key points may also include other key points on the human body contour.

Step 1032: Determine a target contour of the virtual object in a candidate image of the virtual object.

In practical applications, multiple images of multiple virtual objects may be pre-stored, with different virtual objects having different heights, poses, and so on. Here, these images are called candidate images of the virtual objects. If the user has selected a virtual object to be photographed with, the candidate images of the selected virtual object can be obtained directly from the pre-stored images according to the user's selection.

Taking a virtual character as the virtual object as an example, the target contour determined here is the body contour of the virtual character. The manner of determining the body contour of the virtual object is not limited in the embodiments of the present invention.

Step 1033: Corresponding to the first key points, determine second key points on the target contour, where the second key points include at least head key points and hand key points.

"Corresponding to the first key points" means that, according to the positions of the first key points on the human body contour, key points, namely the second key points, are determined at the corresponding positions on the body contour of the virtual character. In this way, the height, pose, and so on of the obtained virtual object can be made closer to those of the user. Optionally, the second key points may also include key points of other body parts.

Step 1034: Calculate the similarity between the human body contour and the target contour based on the first key points and the second key points.

This step mainly calculates the Euclidean distance between corresponding key points and then calculates the similarity between the human body contour and the target contour according to those distances.

Each Euclidean distance is calculated from a pair of corresponding key points on the human body contour and the target contour. Specifically, for each first target key point among the first key points, the Euclidean distance between the first target key point and a second target key point is calculated, where the second target key point is the key point among the second key points that corresponds to the first target key point, and the first target key point is any one of the first key points. Then, based on the obtained Euclidean distances, the similarity between the human body contour and the target contour is calculated.

For the obtained Euclidean distances, each distance is multiplied by its corresponding weight to obtain a first value for each distance, and the first values are then added together to obtain the similarity between the human body contour and the target contour.

In the embodiments of the present invention, the weights may also be preset, where the weight of a Euclidean distance obtained from head key points and/or hand key points is greater than the weight of a Euclidean distance obtained from other key points.

By increasing the weights corresponding to the head key points or hand key points, the height, pose, and so on of the obtained virtual object can be made closer to those of the user, which further improves the display effect of the image.

Step 1035: If the similarity satisfies the second preset requirement, use the candidate image as the virtual image.

The similarity satisfying the second preset requirement may mean that the similarity is greater than a certain preset value, which can be set according to actual needs.
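Steps 1031 to 1035 can be sketched as follows. The key-point names, the dictionary-based candidate store, and the function names are illustrative assumptions. Note also that because the score of step 1034 is a weighted sum of distances, this sketch treats a smaller value as a better match and checks it against a threshold, whereas the patent phrases the second preset requirement as the similarity exceeding a preset value.

```python
import math

def contour_distance(first_keypoints, second_keypoints, weights):
    """Step 1034: weighted sum of Euclidean distances between
    corresponding key points; head/hand weights are set larger."""
    total = 0.0
    for name, (x1, y1) in first_keypoints.items():
        x2, y2 = second_keypoints[name]  # corresponding second key point
        total += weights.get(name, 1.0) * math.hypot(x1 - x2, y1 - y2)
    return total

def pick_virtual_image(first_keypoints, candidates, weights, max_distance):
    """Step 1035 (assumed form): return the first candidate image whose
    weighted distance to the user's contour is small enough."""
    for image_id, second_keypoints in candidates.items():
        if contour_distance(first_keypoints, second_keypoints, weights) <= max_distance:
            return image_id
    return None
```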

Step 104: Determine the relative positional relationship between the virtual image and the projected image.

In the embodiments of the present invention, the relative positional relationship can be expressed by the depth distance between the virtual image and the projected image.

Specifically, in this step, the distance between the actual photographing position of the user and the projected image is determined, and the depth distance between the virtual image and the projected image is then determined according to that distance.

Figure 2 is a schematic diagram of human binocular imaging. Since there is a distance of about 60 mm between a person's eyes, the images formed on the retinas differ when the left and right eyes view an object from their respective angles. The brain judges the spatial position of the object from this difference, which gives people stereoscopic vision.

Figure 3 is a mathematical schematic diagram of the binocular imaging principle. Referring to Figure 3, the geometric relationship between binocular parallax and the spatial position of the object is shown in formula (3):

Δθ = (P · Δd) / D²    (3)

where P is the distance between a person's eyes, D is the viewing distance, and Δd is the relative depth of the object. From the above formula, the functional relationship between the binocular parallax and the relative depth (depth distance) of the object can be obtained.

As shown in Figure 4, when the user takes a photo, once the standing position, i.e., the actual photographing position, is determined, the distance D between the actual photographing position of the user and the projected image can be determined. Then, the position of the virtual image is dynamically adjusted according to the projected image of the user on the display screen to better present a stereoscopic perspective.

Specifically, the depth distance between the virtual image and the projected image is determined according to the following formula (4):

Δd = (Δθ · D²) / P    (4)

where Δd denotes the depth distance, Δθ denotes the binocular parallax of the user, D denotes the distance between the actual photographing position of the user and the projected image of the user on the display screen, P denotes the distance between the user's eyes, and Δθ and P are constants.

When determining the value of D, the distance between a certain point on the human body and the corresponding point in the projected image can be used as D. Δd can be the distance between a certain point in the projected image and a certain point in the virtual image, for example, a point on the user's toes in the projected image and a point on the virtual character's toes.

In the final composite photo of the AR group shot, what most visibly affects the compositing quality is the relative depth distance between the user's projection and the virtual star; the depth distance is dynamically adjusted based on formula (4) to achieve the optimal photographing effect.

For example, when the user takes an AR group photo with a virtual character, to guarantee the best visual effect, the appearance position of the virtual character needs to be adjusted in real time according to the actual position where the user stands, that is, Δd is determined according to the value of D. As shown in Figure 5, in different photographing sessions, based on the distance D obtained from the user's real scene, the appearance position of the virtual character is adjusted in real time according to formula (4), that is, Δd is determined, so as to keep the relative positions of the virtual character and the user's projected image as indicated by line 51, achieving the best viewing-distance effect.
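Formula (4) is straightforward to evaluate. In this sketch the units (radians and meters) and the default interocular distance of 0.06 m, taken from the "about 60 mm" mentioned in the description, are assumptions; the patent treats Δθ and P simply as constants.

```python
def depth_distance(delta_theta, D, P=0.06):
    """Formula (4): depth distance between the virtual image and the
    projected image. delta_theta is the binocular parallax (constant),
    D the user-to-projection distance, P the interocular distance."""
    return delta_theta * D * D / P

# Because the offset grows with the square of D, the virtual character's
# position has to be re-computed whenever the user moves.
```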

Step 105: Obtain an AR image according to the projected image, the virtual image, and the relative positional relationship.

After the position of the virtual image is determined, the projected image and the virtual image can be synthesized to obtain an AR image. The specific synthesis method is not limited in the embodiments of the present invention.
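Since the patent explicitly leaves the synthesis method open, the following is only one possible illustration: a minimal mask-based paste that places the virtual image into the projected frame at an offset derived from the relative positional relationship. The function and parameter names are assumptions.

```python
import numpy as np

def compose_ar_image(projection, virtual, mask, offset):
    """Paste the virtual image over the projected image (step 105).
    projection: HxWx3 frame; virtual: hxwx3 image of the virtual object;
    mask: hxw boolean silhouette of the virtual object;
    offset: (row, col) placement chosen from the relative position."""
    out = projection.copy()
    r, c = offset
    h, w = mask.shape
    region = out[r:r + h, c:c + w]
    region[mask] = virtual[mask]  # writes through the view into `out`
    return out
```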

In embodiments of the present invention, the human body contour is extracted from the projected image of the user on the display screen, and the virtual image of the virtual object is obtained according to the human body contour. Then, an AR group photo is composed according to the relative positional relationship between the virtual object and the projected image, the projected image, and the virtual image. Since the matching degree between the contour of the virtual object and the human body contour satisfies the first preset requirement, and the relative positional relationship between the virtual object and the projected image is taken into account when composing the photo, the AR group photo obtained by the embodiments of the present invention closely matches the shape and pose of the virtual object to those of the user, which enhances the realism of the image and improves the display effect of the AR image.

An embodiment of the present invention further provides an image processing apparatus. Referring to FIG. 6, FIG. 6 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention. Since the principle by which the image processing apparatus solves the problem is similar to that of the image processing method in the embodiment of the present invention, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.

As shown in FIG. 6, the image processing apparatus 600 includes:

a first acquisition module 601, configured to acquire a projected image of a user on a display screen; a first extraction module 602, configured to extract a human body contour in the projected image; a second acquisition module 603, configured to acquire a virtual image of a virtual object according to the human body contour, wherein the matching degree between the contour of the virtual object and the human body contour meets a first preset requirement; a first determination module 604, configured to determine a relative positional relationship between the virtual image and the projected image; and a fourth acquisition module 605, configured to obtain an AR image according to the projected image, the virtual image, and the relative positional relationship.

Optionally, the first extraction module 602 may include:

a conversion submodule, configured to perform image conversion on each projected image to obtain at least one grayscale image; a first calculation submodule, configured to calculate the average of the grayscale images to obtain a background grayscale image; and a second calculation submodule, configured to calculate the difference between each grayscale image and the background grayscale image to obtain the user's human body contour.
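These three submodules amount to classic background subtraction and can be sketched as follows, assuming the projected images arrive as RGB arrays; the binary threshold is an added assumption, since the text only specifies taking the difference:

```python
import numpy as np

def extract_body_mask(frames_rgb, thresh=30):
    """Sketch of the extraction submodules: convert each projected image
    to grayscale, average the grayscale images into a background image,
    then diff each frame against the background to isolate the user's
    silhouette.

    frames_rgb: list of HxWx3 uint8 arrays (the projected images).
    Returns the background grayscale image and one boolean mask per frame.
    """
    # Image conversion: RGB -> grayscale using standard luminance weights.
    luma = np.array([0.299, 0.587, 0.114], np.float32)
    grays = [f.astype(np.float32) @ luma for f in frames_rgb]
    # Average of the grayscale images -> background grayscale image.
    background = np.mean(grays, axis=0)
    # Difference of each grayscale image against the background; the
    # threshold (an assumption) turns the difference into a binary mask.
    masks = [np.abs(g - background) > thresh for g in grays]
    return background, masks
```

Pixels where a frame differs strongly from the averaged background belong to the moving foreground, i.e., the user's body contour region.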

Optionally, the virtual object includes a virtual character; the second acquisition module 603 includes:

a first determination submodule, configured to determine first key points on the human body contour, wherein the first key points include at least head key points and hand key points; a second determination submodule, configured to determine a target contour of the virtual object in a candidate image of the virtual object; a third determination submodule, configured to determine, corresponding to the first key points, second key points on the target contour, wherein the second key points include at least head key points and hand key points; a first calculation submodule, configured to calculate the similarity between the human body contour and the target contour based on the first key points and the second key points; and a fourth determination submodule, configured to use the candidate image as the virtual image if the similarity meets a second preset requirement.

Optionally, the first calculation submodule includes:

a first calculation unit, configured to calculate, for each first target key point among the first key points, the Euclidean distance between the first target key point and a second target key point, the second target key point being the key point among the second key points that corresponds to the first target key point; and a second calculation unit, configured to calculate the similarity between the human body contour and the target contour based on the obtained Euclidean distances.

Optionally, the second calculation unit includes:

a first calculation subunit, configured to multiply each Euclidean distance by its corresponding weight to obtain a first value corresponding to each Euclidean distance; and a second calculation subunit, configured to add the first values to obtain the similarity between the human body contour and the target contour.

Optionally, the second calculation unit may further include: a setting submodule, configured to preset the weights, wherein the weights of the Euclidean distances obtained based on the head key points and/or the hand key points are greater than the weights of the Euclidean distances obtained based on other key points.
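The weighted similarity computation of these subunits can be sketched as follows. The key-point names and the specific weight values are hypothetical (the text only requires head and hand points and that their weights be the largest), and since the score is a weighted sum of distances, a smaller score means a closer match; the "second preset requirement" would then be a score below some threshold, which is an assumption the patent leaves unspecified:

```python
import math

# Hypothetical key-point names; head/hand are weighted highest per the text.
WEIGHTS = {"head": 3.0, "left_hand": 2.0, "right_hand": 2.0,
           "left_foot": 1.0, "right_foot": 1.0}

def contour_similarity(first_pts, second_pts, weights=WEIGHTS):
    """Weighted sum of Euclidean distances between corresponding key points.

    first_pts / second_pts: dicts mapping key-point name -> (x, y),
    taken from the human body contour and the candidate target contour.
    """
    score = 0.0
    for name, (x1, y1) in first_pts.items():
        x2, y2 = second_pts[name]            # corresponding second key point
        dist = math.hypot(x1 - x2, y1 - y2)  # Euclidean distance
        score += weights[name] * dist        # first value = weight * distance
    return score
```

A candidate virtual-character image whose score falls under the threshold would be selected as the virtual image.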

Optionally, the first determination module 604 includes:

a first determination submodule, configured to determine the distance between the user's actual photographing position and the projected image; and a second determination submodule, configured to determine the depth distance between the virtual image and the projected image according to the distance.

Optionally, the second determination submodule is configured to determine the depth distance between the virtual image and the projected image according to the distance using the following formula:

Δd = Δθ·D²/P

where Δd represents the depth distance, Δθ represents the user's binocular parallax, D represents the distance between the user's actual photographing position and the user's projected image on the display screen, P represents the distance between the user's eyes, and Δθ and P are constants.

The apparatus provided in the embodiment of the present invention can execute the foregoing method embodiments; its implementation principle and technical effects are similar and are not described again here.

As shown in FIG. 7, an electronic device according to an embodiment of the present invention includes: a processor 700, configured to read a program in a memory 710 and execute the following processes:

acquiring a projected image of a user on a display screen;

extracting a human body contour in the projected image;

acquiring a virtual image of a virtual object according to the human body contour, wherein the matching degree between the contour of the virtual object and the human body contour meets a first preset requirement;

determining a relative positional relationship between the virtual image and the projected image;

obtaining an augmented reality (AR) image according to the projected image, the virtual image, and the relative positional relationship.

In FIG. 7, the bus architecture may include any number of interconnected buses and bridges, linking together various circuits, specifically one or more processors represented by the processor 700 and a memory represented by the memory 710. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits; these are well known in the art and therefore are not further described herein. The bus interface provides an interface. The processor 700 is responsible for managing the bus architecture and general processing, and the memory 710 may store data used by the processor 700 when performing operations.


The processor 700 is further configured to read the program and perform the following steps:

performing image conversion on each projected image to obtain at least one grayscale image;

calculating the average of the grayscale images to obtain a background grayscale image;

calculating the difference between each grayscale image and the background grayscale image to obtain the user's human body contour.

The virtual object includes a virtual character; the processor 700 is further configured to read the program and perform the following steps:

determining first key points on the human body contour, wherein the first key points include at least head key points and hand key points;

determining a target contour of the virtual object in a candidate image of the virtual object;

determining, corresponding to the first key points, second key points on the target contour, wherein the second key points include at least head key points and hand key points;

calculating the similarity between the human body contour and the target contour based on the first key points and the second key points;

if the similarity meets a second preset requirement, using the candidate image as the virtual image.

The processor 700 is further configured to read the program and perform the following steps:

for each first target key point among the first key points, calculating the Euclidean distance between the first target key point and a second target key point, the second target key point being the key point among the second key points that corresponds to the first target key point;

calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distances.

The processor 700 is further configured to read the program and perform the following steps:

multiplying each Euclidean distance by its corresponding weight to obtain a first value corresponding to each Euclidean distance;

adding the first values to obtain the similarity between the human body contour and the target contour.

The processor 700 is further configured to read the program and perform the following steps:

presetting the weights, wherein the weights of the Euclidean distances obtained based on the head key points and/or the hand key points are greater than the weights of the Euclidean distances obtained based on other key points.

The processor 700 is further configured to read the program and perform the following steps:

determining the distance between the user's actual photographing position and the projected image;

determining the depth distance between the virtual image and the projected image according to the distance.

The processor 700 is further configured to read the program and perform the following steps:

determining the depth distance between the virtual image and the projected image according to the distance using the following formula:

Δd = Δθ·D²/P

where Δd represents the depth distance, Δθ represents the user's binocular parallax, D represents the distance between the user's actual photographing position and the user's projected image on the display screen, P represents the distance between the user's eyes, and Δθ and P are constants.

An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above image processing method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

It should be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.

From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present invention.

The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present invention, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

1. An image processing method, comprising:
acquiring a projection image of a user on a display screen;
extracting a human body contour in the projection image;
acquiring a virtual image of a virtual object according to the human body contour, wherein the matching degree of the contour of the virtual object and the human body contour meets a first preset requirement;
determining a relative positional relationship between the virtual image and the projection image;
and obtaining an augmented reality AR image according to the projection image, the virtual image and the relative position relation.
2. The method of claim 1, wherein the projection image is at least one; the extracting the human body contour in the projection image comprises:
respectively carrying out image conversion on the projected images to obtain at least one gray image;
calculating the average value of the gray level images to obtain a background gray level image;
and calculating the difference between each gray level image and the background gray level image to obtain the human body contour of the user.
3. The method of claim 1, wherein the virtual object comprises a virtual character; the acquiring of the virtual image of the virtual object according to the human body contour includes:
determining first key points on the human body contour, wherein the first key points at least comprise head key points and hand key points;
determining a target contour of the virtual object in a candidate image of the virtual object;
determining second key points on the target contour corresponding to the first key points, wherein the second key points at least comprise head key points and hand key points;
calculating the similarity between the human body contour and the target contour based on the first key point and the second key point;
and if the similarity meets a second preset requirement, taking the candidate image as the virtual image.
4. The method of claim 3, wherein said calculating a similarity between the human body contour and the target contour based on the first keypoint and the second keypoint comprises:
calculating Euclidean distances between first target key points and second target key points for each first target key point in the first key points, wherein the second target key points are key points corresponding to the first target key points in the second key points;
and calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance.
5. The method according to claim 4, wherein the calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance comprises:
multiplying each Euclidean distance by the corresponding weight value respectively to obtain a first numerical value corresponding to each Euclidean distance;
and adding the first numerical values to obtain the similarity between the human body contour and the target contour.
6. The method of claim 5, further comprising:
and presetting the weight, wherein the weight of the Euclidean distance obtained based on the head key point and/or the hand key point is larger than the weight of the Euclidean distance obtained based on other key points.
7. The method of claim 1, wherein said determining a relative positional relationship between said virtual image and said projected image comprises:
determining a distance between an actual photographing position of the user and the projected image;
determining a depth distance between the virtual image and the projected image according to the distance.
8. The method of claim 7, wherein said determining a depth distance between said virtual image and said projected image based on said distance comprises:
determining a depth distance between the virtual image and the projected image from the distance using the following formula:
Δd = Δθ·D²/P
wherein Δd represents the depth distance, Δθ represents the binocular parallax of the user, D represents the distance between the actual photographing position of the user and the projection image of the user on the display screen, P represents the distance between the eyes of the user, and Δθ and P are constants.
9. An electronic device, comprising: a memory, a processor, and a program stored on the memory and executable on the processor; characterized in that the processor, for reading the program in the memory, implements the steps in the image processing method according to any one of claims 1 to 8.
10. A computer-readable storage medium for storing a computer program, characterized in that the computer program, when being executed by a processor, is adapted to carry out the steps of the image processing method according to any one of claims 1 to 8.
CN202010231304.4A 2020-03-27 2020-03-27 Image processing method, device and computer-readable storage medium Active CN111462337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010231304.4A CN111462337B (en) 2020-03-27 2020-03-27 Image processing method, device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010231304.4A CN111462337B (en) 2020-03-27 2020-03-27 Image processing method, device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN111462337A true CN111462337A (en) 2020-07-28
CN111462337B CN111462337B (en) 2023-08-18

Family

ID=71685711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231304.4A Active CN111462337B (en) 2020-03-27 2020-03-27 Image processing method, device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111462337B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113179376A (en) * 2021-04-29 2021-07-27 山东数字人科技股份有限公司 Video comparison method, device and equipment based on three-dimensional animation and storage medium
CN116233395A (en) * 2023-03-07 2023-06-06 珠海普罗米修斯视觉技术有限公司 Video synchronization method, device and computer readable storage medium for volume video
JP7696742B2 (en) 2021-03-30 2025-06-23 キヤノン株式会社 Image processing device and method for controlling image processing device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005266223A (en) * 2004-03-18 2005-09-29 Casio Comput Co Ltd Camera apparatus and program
JP2007025861A (en) * 2005-07-13 2007-02-01 Toppan Printing Co Ltd Virtual reality system and method, and interpolated image generation apparatus and method
CN103617639A (en) * 2013-06-27 2014-03-05 苏州金螳螂展览设计工程有限公司 Mirror surface induction interactive group photo system and method
CN106097435A (en) * 2016-06-07 2016-11-09 北京圣威特科技有限公司 A kind of augmented reality camera system and method
WO2016207628A1 (en) * 2015-06-22 2016-12-29 Ec Medica Ltd Augmented reality imaging system, apparatus and method
CN108227931A (en) * 2018-01-23 2018-06-29 北京市商汤科技开发有限公司 For controlling the method for virtual portrait, equipment, system, program and storage medium
CN108398787A (en) * 2018-03-20 2018-08-14 京东方科技集团股份有限公司 Augmented reality shows equipment, method and augmented reality glasses
CN110910512A (en) * 2019-11-29 2020-03-24 北京达佳互联信息技术有限公司 Virtual object self-adaptive adjusting method and device, computer equipment and storage medium
CN110909680A (en) * 2019-11-22 2020-03-24 咪咕动漫有限公司 Facial expression recognition method and device, electronic equipment and storage medium



Also Published As

Publication number Publication date
CN111462337B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN112767538B (en) Three-dimensional reconstruction and related interaction and measurement methods, related devices and equipment
US12198374B2 (en) Method for training SMPL parameter prediction model, computer device, and storage medium
CN103140879B (en) Information presentation device, digital camera, head mounted display, projecting apparatus, information demonstrating method and information are presented program
JP4950787B2 (en) Image processing apparatus and method
CN110675487A (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN109711472B (en) Training data generation method and device
JPWO2005038716A1 (en) Image collation system and image collation method
CN111462337B (en) Image processing method, device and computer-readable storage medium
CN111815768B (en) Three-dimensional face reconstruction method and device
CN111862299A (en) Human body three-dimensional model construction method, device, robot and storage medium
CN111460937B (en) Facial feature point positioning method and device, terminal equipment and storage medium
CN111754622B (en) Face three-dimensional image generation method and related equipment
CN110660076A (en) Face exchange method
CN113902781B (en) Three-dimensional face reconstruction method, device, equipment and medium
CN113706373A (en) Model reconstruction method and related device, electronic equipment and storage medium
JPWO2006049147A1 (en) Three-dimensional shape estimation system and image generation system
WO2022237026A1 (en) Plane information detection method and system
Anbarjafari et al. 3D face reconstruction with region based best fit blending using mobile phone for virtual reality based social media
Michael et al. Model-based generation of personalized full-body 3D avatars from uncalibrated multi-view photographs
CN118691744A (en) Three-dimensional Gaussian radiation field training method, device, equipment, storage medium and program product
CN114926324B (en) Virtual fitting model training method based on real person images, virtual fitting method, device and equipment
CN113240811B (en) Three-dimensional face model creating method, system, equipment and storage medium
CN112396117B (en) Image detection method, device and electronic equipment
CN113610969A (en) Three-dimensional human body model generation method and device, electronic equipment and storage medium
CN113902855B (en) Three-dimensional face reconstruction method based on camera equipment and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant