CN111462337A - Image processing method, device and computer readable storage medium


Info

Publication number
CN111462337A
CN111462337A
Authority
CN
China
Prior art keywords
image
human body
key points
virtual
contour
Prior art date
Legal status
Granted
Application number
CN202010231304.4A
Other languages
Chinese (zh)
Other versions
CN111462337B (en)
Inventor
赵琦
颜忠伟
毕铎
王科
Current Assignee
Migu Cultural Technology Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd filed Critical Migu Cultural Technology Co Ltd
Priority to CN202010231304.4A
Publication of CN111462337A
Application granted
Publication of CN111462337B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method, an image processing device, and a computer-readable storage medium, relating to the technical field of communication, with the aim of improving the display effect of augmented reality (AR) co-photographs of a real user and a virtual object. The method comprises the following steps: acquiring a projection image of a user on a display screen; extracting a human body contour from the projection image; acquiring a virtual image of a virtual object according to the human body contour, wherein the degree of matching between the contour of the virtual object and the human body contour meets a first preset requirement; determining a relative positional relationship between the virtual image and the projection image; and obtaining an AR image according to the projection image, the virtual image, and the relative positional relationship. The embodiment of the invention can improve the display effect of AR co-photographs of a real user and a virtual object.

Description

Image processing method, device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a computer-readable storage medium.
Background
In augmented reality (AR) photographing, the height and pose of the virtual object (such as a virtual character) are fixed. However, when users are co-photographed with a virtual character, different users have different heights and photographing postures. How to improve the realism of a co-photograph of a real user and a virtual character, and thereby the display effect of the AR image, is therefore a technical problem to be solved.
Disclosure of Invention
The embodiments of the present invention provide an image processing method, an image processing device, and a computer-readable storage medium, which are used to improve the display effect of augmented reality (AR) co-photographs of a real user and a virtual object.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a projection image of a user on a display screen;
extracting a human body contour in the projection image;
acquiring a virtual image of a virtual object according to the human body contour, wherein the matching degree of the contour of the virtual object and the human body contour meets a first preset requirement;
determining a relative positional relationship between the virtual image and the projection image;
and obtaining an AR image according to the projection image, the virtual image and the relative position relation.
Wherein the extracting the human body contour in the projection image comprises:
respectively carrying out image conversion on the projected images to obtain at least one gray image;
calculating the average value of the gray level images to obtain a background gray level image;
and calculating the difference between each gray level image and the background gray level image to obtain the human body contour of the user.
Wherein the virtual object comprises a virtual character; the acquiring of the virtual image of the virtual object according to the human body contour includes:
determining first key points on the human body contour, wherein the first key points at least comprise head key points and hand key points;
determining a target contour of the virtual object in a candidate image of the virtual object;
determining second key points on the target contour corresponding to the first key points, wherein the second key points at least comprise head key points and hand key points;
calculating the similarity between the human body contour and the target contour based on the first key point and the second key point;
and if the similarity meets a second preset requirement, taking the candidate image as the virtual image.
Wherein the calculating the similarity between the human body contour and the target contour based on the first key point and the second key point comprises:
calculating Euclidean distances between first target key points and second target key points for each first target key point in the first key points, wherein the second target key points are key points corresponding to the first target key points in the second key points;
and calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance.
Wherein the calculating of the similarity between the human body contour and the target contour based on the obtained euclidean distance comprises:
multiplying each Euclidean distance by the corresponding weight value respectively to obtain a first numerical value corresponding to each Euclidean distance;
adding the first numerical values to obtain the similarity between the human body contour and the target contour;
wherein the method further comprises:
and presetting the weight, wherein the weight of the Euclidean distance obtained based on the head key point and/or the hand key point is larger than the weight of the Euclidean distance obtained based on other key points.
Wherein the determining a relative positional relationship between the virtual image and the projection image comprises:
determining a distance between an actual photographing position of the user and the projected image;
determining a depth distance between the virtual image and the projected image according to the distance.
Wherein said determining a depth distance between said virtual image and said projected image from said distance comprises:
determining the depth distance between the virtual image and the projected image from the distance using the following formula:
Δd = Δθ · D² / P
wherein Δd represents the depth distance, Δθ represents the binocular parallax of the user, D represents the distance between the actual photographing position of the user and the projection image of the user on the display screen, P represents the distance between the eyes of the user, and Δθ and P are constants.
In a second aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a program stored on the memory and executable on the processor; the processor is configured to read a program in the memory to implement the steps in the image processing method according to the first aspect.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiment of the invention, the human body contour is extracted from the projection image of the user on the display screen, and a virtual image of the virtual object is obtained according to the human body contour. An AR co-photograph is then produced from the projection image, the virtual image, and the relative positional relationship between the virtual object and the projection image. Because the degree of matching between the contour of the virtual object and the human body contour meets the first preset requirement, and the relative positional relationship is taken into account during co-photographing, the shape and posture of the virtual object in the AR co-photograph obtained by the embodiment of the invention closely match those of the user, which enhances the realism of the image and improves the display effect of the AR image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of binocular imaging of a person;
FIG. 3 is a mathematical schematic for the principles of binocular imaging;
FIG. 4 is a first schematic diagram of photographing provided by an embodiment of the present invention;
FIG. 5 is a second schematic diagram of photographing provided by an embodiment of the present invention;
fig. 6 is a structural diagram of an image processing apparatus provided in an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
step 101, acquiring a projection image of a user on a display screen.
When a user wants to take a picture, a camera is usually used to capture the user, so that an image of the user is displayed on a display screen. In the embodiment of the present invention, this displayed image of the user is referred to as the projection image. Because the projection image is to be co-photographed with a virtual object (such as a virtual character or a virtual article), the projection image needs to contain the user's human body contour, and preferably the complete human body contour. The human body contour reflects information such as the user's height and standing posture.
And 102, extracting the human body contour in the projection image.
In the embodiment of the present invention, a plurality of projection images of the user are acquired successively, and the human body contour is then determined based on these projection images. Specifically, in this step, image conversion is performed on each of the acquired projection images to obtain grayscale images, one per projection image. The grayscale images are then averaged to obtain a background grayscale image. Finally, the difference between each grayscale image and the background grayscale image is calculated to obtain the human body contour of the user. In this way, the obtained human body contour information is more accurate.
Taking the continuous shooting of five images of the user as an example, the five images are converted into grayscale images, denoted f_gi(x, y), i = 1, 2, 3, 4, 5. The five grayscale images are summed and averaged according to the following formula (1) to obtain the grayscale image of the background, denoted f_b(x, y):
f_b(x, y) = (1/5) · Σ_{i=1..5} f_gi(x, y)  (1)
Then, the background grayscale image is subtracted from each grayscale image to obtain the human body contour information, as expressed in formula (2):
f_d(x, y) = |f_gi(x, y) - f_b(x, y)|  (2)
wherein f_d(x, y) represents the human body contour information.
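For illustration only, the background-difference extraction of formulas (1) and (2) can be sketched in Python as follows. This is a minimal sketch rather than the claimed implementation: OpenCV and NumPy are assumed, and the binarization threshold (thresh) and the choice of the largest external contour are assumptions that the embodiment leaves open.

    import cv2
    import numpy as np

    def extract_body_contour(frames, thresh=30):
        """Sketch of formulas (1) and (2): average grayscale frames to
        estimate the background, then take per-frame absolute differences.
        frames: list of equally sized BGR images of the user.
        thresh: assumed binarization threshold (not specified in the patent)."""
        grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
                 for f in frames]                        # f_gi(x, y)
        background = sum(grays) / len(grays)             # f_b(x, y), formula (1)
        diffs = [np.abs(g - background) for g in grays]  # f_d(x, y), formula (2)
        # One way (of many) to turn a difference image into a contour:
        # binarize the last frame's difference and keep the largest
        # external contour.
        mask = (diffs[-1] > thresh).astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None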
Step 103, acquiring a virtual image of a virtual object according to the human body contour, wherein the matching degree of the contour of the virtual object and the human body contour meets a first preset requirement.
The first preset requirement may be that the matching degree is greater than a certain preset value, and the preset value may be set according to actual needs.
Taking a virtual character as the virtual object as an example, all images of the virtual character in a virtual character library can be searched and compared against the current human body contour information to find the virtual character pose most similar to the human body contour. The user's height and hand pose are the main considerations during matching. When computing the contour similarity from Euclidean distances between keypoints, the weights of the head pose and the hand pose are appropriately increased, so that the similarity of the head and hands carries more weight. All the images are matched to find the virtual character image most similar to the user's height and posture.
Specifically, in this step, a virtual image of the virtual object may be acquired as follows:
step 1031, determining first key points on the human body contour, wherein the first key points at least comprise head key points and hand key points.
Here, the first keypoints can be determined on the human body contour by marking. Of course, the first keypoints may also include other keypoints on the human body contour.
Step 1032, determining a target contour of the virtual object in the candidate image of the virtual object.
In practical applications, multiple images of multiple virtual objects may be pre-stored, the different virtual objects having different heights, poses, and so on. These images are referred to herein as the candidate images of the virtual objects. Once the user selects the virtual object to be co-photographed, the candidate images of that virtual object can be retrieved directly from the pre-stored images.
Taking a virtual object as an example of a virtual character, the target contour determined here is a human body contour of the virtual character. The determination method of the human body contour of the virtual object is not limited in the embodiment of the present invention.
Step 1033, determining second keypoints on the target contour corresponding to the first keypoints, wherein the second keypoints comprise at least head keypoints and hand keypoints.
"corresponding to the first keypoint" means that the keypoint, i.e. the second keypoint, is determined at the corresponding position of the human body outline of the virtual character according to the position of the first keypoint in the human body outline. In this way, the height, posture, etc. of the obtained virtual object can be made closer to the height, posture, etc. of the user. Optionally, the second keypoints may also include keypoints at other positions.
Step 1034, calculating the similarity between the human body contour and the target contour based on the first key point and the second key point.
In the step, the Euclidean distance between the key points is mainly calculated, and then the similarity between the human body contour and the target contour is calculated according to the Euclidean distance.
When the Euclidean distance is calculated, the calculation is carried out on the basis of two corresponding key points on the human body contour and the target contour. Specifically, for each first target keypoint in the first keypoints, a euclidean distance between the first target keypoint and a second target keypoint is calculated, where the second target keypoint is a keypoint, corresponding to the first target keypoint, in the second keypoint. Wherein the first target keypoint is any one of the first keypoints. Then, based on the obtained Euclidean distance, the similarity between the human body contour and the target contour is calculated.
And for the obtained plurality of Euclidean distances, multiplying each Euclidean distance by the corresponding weight value respectively to obtain a first numerical value corresponding to each Euclidean distance, and then adding the first numerical values to obtain the similarity between the human body contour and the target contour.
In the embodiment of the present invention, the weights may also be preset, with the weight of a Euclidean distance obtained from a head keypoint and/or a hand keypoint set larger than the weight of a Euclidean distance obtained from the other keypoints.
In the embodiment of the invention, the height, the posture and the like of the obtained virtual object are closer to the height and the posture of the user by increasing the weight corresponding to the head key point or the hand key point, so that the display effect of the image is further improved.
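As a sketch of steps 1033 and 1034, the weighted sum of keypoint Euclidean distances can be written as below. The keypoint names and weight values are illustrative assumptions; the embodiment only requires that distances from head and hand keypoints receive larger weights than the others.

    import math

    # Assumed keypoint names and weights; the embodiment requires only that
    # head and hand keypoints weigh more than the other keypoints.
    WEIGHTS = {"head": 0.3, "left_hand": 0.2, "right_hand": 0.2,
               "left_foot": 0.15, "right_foot": 0.15}

    def contour_similarity(first_kps, second_kps, weights=WEIGHTS):
        """first_kps / second_kps map keypoint names to (x, y) positions on
        the user's human body contour and on the candidate's target contour.
        Returns the weighted sum of Euclidean distances, which the embodiment
        compares against a second preset requirement to select the candidate."""
        score = 0.0
        for name, (x1, y1) in first_kps.items():
            x2, y2 = second_kps[name]            # corresponding second keypoint
            dist = math.hypot(x1 - x2, y1 - y2)  # Euclidean distance
            score += weights[name] * dist        # first value for this distance
        return score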
And 1035, if the similarity meets a second preset requirement, taking the candidate image as the virtual image.
The similarity meeting the second preset requirement may be that the similarity is greater than a certain preset value, and the preset value may be set according to actual needs.
And 104, determining the relative position relationship between the virtual image and the projection image.
In an embodiment of the present invention, the relative positional relationship may be embodied by a depth distance between the virtual image and the projection image.
Specifically, in this step, the distance between the actual photographing position of the user and the projection image is determined, and then the depth distance between the virtual image and the projection image is determined according to the distance.
Fig. 2 is a schematic diagram of human binocular imaging. Since the distance between a person's eyes is about 60 mm, the left and right eyes view an object from slightly different angles, so the images formed on the two retinas differ. The brain judges the spatial position of the object from this difference, which is how people perceive objects stereoscopically.
Fig. 3 shows a mathematical diagram of the principle of binocular imaging. Referring to fig. 3, the geometric relationship between binocular parallax and the spatial position of the object is shown in formula (3):
Δθ = P · Δd / D²  (3)
wherein P is the person's interocular distance, D is the viewing distance, and Δd is the relative depth of the object. From the above formula, the functional relationship between binocular parallax and the relative depth (depth distance) of an object is obtained.
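The patent states formula (3) without derivation. For completeness, the following is a sketch of a standard small-angle derivation consistent with the geometry of Fig. 3; the approximation Δd ≪ D is an added assumption, not something the patent states.

    % Vergence angle on a fixation point at viewing distance D, and on a
    % point displaced from it by the relative depth \Delta d:
    %   \theta_F \approx P / D,  \theta_N \approx P / (D - \Delta d)
    \Delta\theta = \theta_N - \theta_F
                 \approx \frac{P}{D - \Delta d} - \frac{P}{D}
                 = \frac{P\,\Delta d}{D(D - \Delta d)}
                 \approx \frac{P\,\Delta d}{D^{2}} \qquad (\Delta d \ll D)

Rearranging the final expression for Δd gives formula (4) used below.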
As shown in fig. 4, when the user takes a picture, after the standing position, i.e., the actual picture-taking position, is determined, the distance D between the actual picture-taking position of the user and the projection image can be determined. Then, the position of the virtual image is dynamically adjusted according to the projection image of the user on the display screen, so that the stereoscopic visual angle can be better presented.
Specifically, solving formula (3) for the relative depth gives formula (4), by which the depth distance between the virtual image and the projected image is determined:
Δd = Δθ · D² / P  (4)
wherein Δd represents the depth distance, Δθ represents the binocular parallax of the user, D represents the distance between the actual photographing position of the user and the projection image of the user on the display screen, P represents the distance between the eyes of the user, and Δθ and P are constants.
When determining the value of D, the distance between a point on the human body and the corresponding point in the projection image may be used. Δd may be taken as the distance between a point in the projected image and the corresponding point in the virtual image, for example between a point on the toe of the user's projection and the corresponding point on the virtual character's toe.
In the final composite photograph of the AR co-photograph, what most strongly affects the compositing effect is the relative depth distance between the user's projection and the virtual character; this depth distance is dynamically adjusted based on formula (4) to achieve the best photographing effect.
For example, when the user is co-photographed with a virtual character, to ensure the best visual effect the position at which the virtual character appears needs to be adjusted in real time according to where the user actually stands, that is, Δd is determined from the value of D. As shown in fig. 5, across different shots the distance D obtained in the user's real scene is used to adjust the virtual character's position in real time according to formula (4), i.e. Δd is determined, ensuring that the relative positions of the virtual character and the user's projection image stay as indicated by line 51, i.e. the best viewing-distance effect is achieved.
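A minimal sketch of how formula (4) could drive this real-time adjustment is given below. The constant values for DELTA_THETA and P are illustrative assumptions; the patent describes Δθ and P only as constants, with the interocular distance around 60 mm.

    # Assumed constants for illustration: binocular parallax in radians and
    # interocular distance in metres (about 60 mm per the description above).
    DELTA_THETA = 0.003
    P = 0.06

    def depth_distance(d_view):
        """Formula (4): Δd = Δθ · D² / P, where d_view is the measured
        distance D between the user's actual photographing position and
        the projection image on the display screen."""
        return DELTA_THETA * d_view ** 2 / P

    # Re-evaluate whenever the user moves, so the virtual character's
    # position can be adjusted in real time as described for Fig. 5.
    for d in (1.0, 1.5, 2.0):  # example standing distances in metres
        print(f"D = {d} m -> depth distance = {depth_distance(d):.3f} m")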
And 105, obtaining an AR image according to the projection image, the virtual image and the relative position relation.
After the position of the virtual image is determined, the projected image and the virtual image can be composited to obtain the AR image. The specific compositing method is not limited in the embodiments of the present invention.
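As one possible compositing approach, which the embodiment deliberately leaves open, a simple alpha blend of the virtual image onto the projection image could look as follows. It assumes the virtual image carries an alpha channel and that the paste position derived from the relative positional relationship lies fully within the frame.

    import numpy as np

    def compose_ar_image(projection, virtual_rgba, top_left):
        """Alpha-blend the virtual image onto the projection image.
        projection: HxWx3 uint8 image of the user's projection.
        virtual_rgba: hxwx4 uint8 virtual image with an alpha channel.
        top_left: (y, x) paste position derived from the relative
        positional relationship (assumed to fit inside the frame)."""
        out = projection.copy()
        y, x = top_left
        h, w = virtual_rgba.shape[:2]
        alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
        region = out[y:y + h, x:x + w].astype(np.float32)
        blended = alpha * virtual_rgba[..., :3] + (1.0 - alpha) * region
        out[y:y + h, x:x + w] = blended.astype(np.uint8)
        return out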
In the embodiment of the invention, the human body contour is extracted from the projection image of the user on the display screen, and a virtual image of the virtual object is obtained according to the human body contour. An AR co-photograph is then produced from the projection image, the virtual image, and the relative positional relationship between the virtual object and the projection image. Because the degree of matching between the contour of the virtual object and the human body contour meets the first preset requirement, and the relative positional relationship is taken into account during co-photographing, the shape and posture of the virtual object in the AR co-photograph obtained by the embodiment of the invention closely match those of the user, which enhances the realism of the image and improves the display effect of the AR image.
The embodiment of the invention also provides an image processing device. Referring to fig. 6, fig. 6 is a structural diagram of an image processing apparatus according to an embodiment of the present invention. Because the principle of the image processing apparatus for solving the problem is similar to the image processing method in the embodiment of the present invention, the implementation of the image processing apparatus can refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 6, the image processing apparatus 600 includes:
a first obtaining module 601, configured to obtain a projection image of a user on a display screen; a first extraction module 602, configured to extract a human body contour in the projection image; a second obtaining module 603, configured to obtain a virtual image of a virtual object according to the human body contour, where a matching degree between the contour of the virtual object and the human body contour meets a first preset requirement; a first determining module 604 for determining a relative positional relationship between the virtual image and the projection image; a fourth obtaining module 605, configured to obtain an AR image according to the projection image, the virtual image, and the relative position relationship.
Optionally, the first extraction module 602 may include:
the conversion submodule is used for respectively carrying out image conversion on the projection images to obtain at least one gray image; the first calculation submodule is used for calculating the average value of the gray level image to obtain a background gray level image; and the second calculation submodule is used for calculating the difference between each gray level image and the background gray level image to obtain the human body contour of the user.
Optionally, the virtual object includes a virtual character; the second obtaining module 603 includes:
the first determining submodule is used for determining first key points on the human body contour, wherein the first key points at least comprise head key points and hand key points; a second determining sub-module for determining a target contour of the virtual object in a candidate image of the virtual object; a third determining submodule, configured to determine second keypoints on the target contour corresponding to the first keypoints, where the second keypoints include at least head keypoints and hand keypoints; the first calculating submodule is used for calculating the similarity between the human body contour and the target contour on the basis of the first key point and the second key point; and the fourth determining submodule is used for taking the candidate image as the virtual image if the similarity meets a second preset requirement.
Optionally, the first computing submodule includes:
a first calculating unit, configured to calculate, for each first target keypoint of the first keypoints, a euclidean distance between the first target keypoint and a second target keypoint, where the second target keypoint is a keypoint of the second keypoint corresponding to the first target keypoint; and the second calculation unit is used for calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance.
Optionally, the second calculating unit includes:
the first calculating subunit is used for multiplying each Euclidean distance by the corresponding weight value respectively to obtain a first numerical value corresponding to each Euclidean distance; and the second calculating subunit is used for adding the first numerical values to obtain the similarity between the human body contour and the target contour.
Optionally, the second computing unit may further include: and the setting submodule is used for presetting the weight, wherein the weight of the Euclidean distance obtained based on the head key point and/or the hand key point is larger than the weight of the Euclidean distance obtained based on other key points.
Optionally, the first determining module 604 includes:
a first determination submodule for determining a distance between an actual photographing position of the user and the projection image; a second determining submodule for determining a depth distance between the virtual image and the projection image according to the distance.
Optionally, the second determining submodule is configured to determine the depth distance between the virtual image and the projection image according to the distance by using the following formula:
Δd = Δθ · D² / P
wherein Δd represents the depth distance, Δθ represents the binocular parallax of the user, D represents the distance between the actual photographing position of the user and the projection image of the user on the display screen, P represents the distance between the eyes of the user, and Δθ and P are constants.
The apparatus provided in the embodiment of the present invention may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
As shown in fig. 7, the electronic device according to the embodiment of the present invention includes a processor 700, configured to read a program in a memory 710 and execute the following processes:
acquiring a projection image of a user on a display screen;
extracting a human body contour in the projection image;
acquiring a virtual image of a virtual object according to the human body contour, wherein the matching degree of the contour of the virtual object and the human body contour meets a first preset requirement;
determining a relative positional relationship between the virtual image and the projection image;
and obtaining an augmented reality AR image according to the projection image, the virtual image and the relative position relation.
In fig. 7, the bus architecture may include any number of interconnected buses and bridges linking together various circuits, in particular one or more processors represented by processor 700 and memory represented by memory 710. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not described further herein. The bus interface provides an interface. The processor 700 is responsible for managing the bus architecture and general processing, and the memory 710 may store data used by the processor 700 in performing operations.
The processor 700 is further configured to read the program and execute the following steps:
respectively carrying out image conversion on the projected images to obtain at least one gray image;
calculating the average value of the gray level images to obtain a background gray level image;
and calculating the difference between each gray level image and the background gray level image to obtain the human body contour of the user.
The virtual object comprises a virtual character; the processor 700 is further configured to read the program and execute the following steps:
determining first key points on the human body contour, wherein the first key points at least comprise head key points and hand key points;
determining a target contour of the virtual object in a candidate image of the virtual object;
determining second key points on the target contour corresponding to the first key points, wherein the second key points at least comprise head key points and hand key points;
calculating the similarity between the human body contour and the target contour based on the first key point and the second key point;
and if the similarity meets a second preset requirement, taking the candidate image as the virtual image.
The processor 700 is further configured to read the program and execute the following steps:
calculating Euclidean distances between first target key points and second target key points for each first target key point in the first key points, wherein the second target key points are key points corresponding to the first target key points in the second key points;
and calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance.
The processor 700 is further configured to read the program and execute the following steps:
multiplying each Euclidean distance by the corresponding weight value respectively to obtain a first numerical value corresponding to each Euclidean distance;
and adding the first numerical values to obtain the similarity between the human body contour and the target contour.
The processor 700 is further configured to read the program and execute the following steps:
and presetting the weight, wherein the weight of the Euclidean distance obtained based on the head key point and/or the hand key point is larger than the weight of the Euclidean distance obtained based on other key points.
The processor 700 is further configured to read the program and execute the following steps:
determining a distance between an actual photographing position of the user and the projected image;
determining a depth distance between the virtual image and the projected image according to the distance.
The processor 700 is further configured to read the program and execute the following steps:
determining a depth distance between the virtual image and the projected image from the distance using the following formula:
Δd = Δθ · D² / P
wherein Δd represents the depth distance, Δθ represents the binocular parallax of the user, D represents the distance between the actual photographing position of the user and the projection image of the user on the display screen, P represents the distance between the eyes of the user, and Δθ and P are constants.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. With such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a projection image of a user on a display screen;
extracting a human body contour in the projection image;
acquiring a virtual image of a virtual object according to the human body contour, wherein the matching degree of the contour of the virtual object and the human body contour meets a first preset requirement;
determining a relative positional relationship between the virtual image and the projection image;
and obtaining an augmented reality AR image according to the projection image, the virtual image and the relative position relation.
2. The method of claim 1, wherein the projection image is at least one; the extracting the human body contour in the projection image comprises:
respectively carrying out image conversion on the projected images to obtain at least one gray image;
calculating the average value of the gray level images to obtain a background gray level image;
and calculating the difference between each gray level image and the background gray level image to obtain the human body contour of the user.
3. The method of claim 1, wherein the virtual object comprises a virtual character; the acquiring of the virtual image of the virtual object according to the human body contour includes:
determining first key points on the human body contour, wherein the first key points at least comprise head key points and hand key points;
determining a target contour of the virtual object in a candidate image of the virtual object;
determining second key points on the target contour corresponding to the first key points, wherein the second key points at least comprise head key points and hand key points;
calculating the similarity between the human body contour and the target contour based on the first key point and the second key point;
and if the similarity meets a second preset requirement, taking the candidate image as the virtual image.
4. The method of claim 3, wherein said calculating a similarity between the human body contour and the target contour based on the first keypoint and the second keypoint comprises:
calculating Euclidean distances between first target key points and second target key points for each first target key point in the first key points, wherein the second target key points are key points corresponding to the first target key points in the second key points;
and calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance.
5. The method according to claim 4, wherein the calculating the similarity between the human body contour and the target contour based on the obtained Euclidean distance comprises:
multiplying each Euclidean distance by the corresponding weight value respectively to obtain a first numerical value corresponding to each Euclidean distance;
and adding the first numerical values to obtain the similarity between the human body contour and the target contour.
6. The method of claim 5, further comprising:
and presetting the weight, wherein the weight of the Euclidean distance obtained based on the head key point and/or the hand key point is larger than the weight of the Euclidean distance obtained based on other key points.
7. The method of claim 1, wherein said determining a relative positional relationship between said virtual image and said projected image comprises:
determining a distance between an actual photographing position of the user and the projected image;
determining a depth distance between the virtual image and the projected image according to the distance.
8. The method of claim 7, wherein said determining a depth distance between said virtual image and said projected image based on said distance comprises:
determining a depth distance between the virtual image and the projected image from the distance using the following formula:
Δd = Δθ · D² / P
wherein Δd represents the depth distance, Δθ represents the binocular parallax of the user, D represents the distance between the actual photographing position of the user and the projection image of the user on the display screen, P represents the distance between the eyes of the user, and Δθ and P are constants.
9. An electronic device, comprising: a memory, a processor, and a program stored on the memory and executable on the processor; wherein the processor is configured to read the program in the memory to implement the steps in the image processing method according to any one of claims 1 to 8.
10. A computer-readable storage medium for storing a computer program, characterized in that the computer program, when being executed by a processor, is adapted to carry out the steps of the image processing method according to any one of claims 1 to 8.
CN202010231304.4A 2020-03-27 2020-03-27 Image processing method, device and computer readable storage medium Active CN111462337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010231304.4A CN111462337B (en) 2020-03-27 2020-03-27 Image processing method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010231304.4A CN111462337B (en) 2020-03-27 2020-03-27 Image processing method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111462337A (en) 2020-07-28
CN111462337B (en) 2023-08-18

Family

ID=71685711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231304.4A Active CN111462337B (en) 2020-03-27 2020-03-27 Image processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111462337B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113179376A (en) * 2021-04-29 2021-07-27 山东数字人科技股份有限公司 Video comparison method, device and equipment based on three-dimensional animation and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005266223A (en) * 2004-03-18 2005-09-29 Casio Comput Co Ltd Camera system and program
JP2007025861A (en) * 2005-07-13 2007-02-01 Toppan Printing Co Ltd Virtual reality system and method, and interpolation image generation device and method
CN103617639A (en) * 2013-06-27 2014-03-05 苏州金螳螂展览设计工程有限公司 Mirror surface induction interactive group photo system and method
CN106097435A (en) * 2016-06-07 2016-11-09 北京圣威特科技有限公司 A kind of augmented reality camera system and method
WO2016207628A1 (en) * 2015-06-22 2016-12-29 Ec Medica Ltd Augmented reality imaging system, apparatus and method
CN108227931A (en) * 2018-01-23 2018-06-29 北京市商汤科技开发有限公司 For controlling the method for virtual portrait, equipment, system, program and storage medium
CN108398787A (en) * 2018-03-20 2018-08-14 京东方科技集团股份有限公司 Augmented reality shows equipment, method and augmented reality glasses
CN110910512A (en) * 2019-11-29 2020-03-24 北京达佳互联信息技术有限公司 Virtual object self-adaptive adjusting method and device, computer equipment and storage medium
CN110909680A (en) * 2019-11-22 2020-03-24 咪咕动漫有限公司 Facial expression recognition method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN111462337B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN112767538B (en) Three-dimensional reconstruction and related interaction and measurement methods, related devices and equipment
JP4556873B2 (en) Image collation system and image collation method
CN103140879B (en) Information presentation device, digital camera, head mounted display, projecting apparatus, information demonstrating method and information are presented program
KR101791590B1 (en) Object pose recognition apparatus and method using the same
US8086027B2 (en) Image processing apparatus and method
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
WO2013175228A1 (en) Body measurement
WO2019128676A1 (en) Light spot filtering method and apparatus
JPWO2006049147A1 (en) Three-dimensional shape estimation system and image generation system
CN109711472B (en) Training data generation method and device
CN110544302A (en) Human body action reconstruction system and method based on multi-view vision and action training system
JP4631973B2 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
JP2020119127A (en) Learning data generation method, program, learning data generation device, and inference processing method
CN111815768B (en) Three-dimensional face reconstruction method and device
CN114333046A (en) Dance action scoring method, device, equipment and storage medium
CN111435550A (en) Image processing method and apparatus, image device, and storage medium
CN113902781A (en) Three-dimensional face reconstruction method, device, equipment and medium
CN111462337B (en) Image processing method, device and computer readable storage medium
CN111416938B (en) Augmented reality close-shooting method and device and computer readable storage medium
CN113610969A (en) Three-dimensional human body model generation method and device, electronic equipment and storage medium
JP2000331019A (en) Method and device for indexing aspect image and recording medium with aspect image indexing program recorded
CN113240811B (en) Three-dimensional face model creating method, system, equipment and storage medium
JP7326965B2 (en) Image processing device, image processing program, and image processing method
CN114821791A (en) Method and system for capturing three-dimensional motion information of image
CN111222448A (en) Image conversion method and related product

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant