CN112631487A - Image processing method, electronic device, and readable storage medium - Google Patents

Image processing method, electronic device, and readable storage medium

Info

Publication number
CN112631487A
Authority
CN
China
Prior art keywords
coordinate value
polar coordinate
polar
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011509145.6A
Other languages
Chinese (zh)
Other versions
CN112631487B (en)
Inventor
李琳 (Li Lin)
林蔚澜 (Lin Weilan)
张聪 (Zhang Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011509145.6A priority Critical patent/CN112631487B/en
Publication of CN112631487A publication Critical patent/CN112631487A/en
Application granted granted Critical
Publication of CN112631487B publication Critical patent/CN112631487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Abstract

The application discloses an image processing method, an electronic device, and a readable storage medium, belonging to the field of image processing. The image processing method includes the following steps: acquiring a first image captured by a rear camera of an electronic device and the position, in the imaging space of the rear camera, of the physical scene corresponding to the first image; acquiring a first polar coordinate value corresponding to each position point in the physical scene and a second polar coordinate value corresponding to each position point in the first image, where the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eyes of the user of the electronic device as the coordinate origin; acquiring a target polar coordinate value corresponding to each second polar coordinate value, and determining the target position corresponding to the target polar coordinate value in the scene; and acquiring a second image according to the target position, where the second image is to be displayed on the display screen. The scheme provided by the application solves the problem of poor image display effect in the related art.

Description

Image processing method, electronic device, and readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an electronic device, and a readable storage medium.
Background
With the development of communication technology, electronic devices such as mobile phones have become indispensable in daily life, and people's requirements on the appearance, performance, and other aspects of electronic devices keep rising. At present, an electronic device can capture an image with its rear camera and present it on the display screen, so that the device appears to have a transparent display screen. However, the image captured from the rear camera's viewing angle differs from the image seen from the viewing angle of the human eye, so the image displayed on the display screen cannot be seamlessly joined with the real scene around the electronic device, and the display effect is poor.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an electronic device, and a readable storage medium, which can solve the problem of poor image display effect in the related art.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, where the electronic device includes a rear camera; the method comprises the following steps:
acquiring a first image captured by the rear camera and the position, in the imaging space of the rear camera, of the physical scene corresponding to the first image, where the first image is displayed on a display screen of the electronic device;
acquiring a first polar coordinate value corresponding to each position point in the physical scene and a second polar coordinate value corresponding to each position point in the first image, where the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eyes of the user of the electronic device as the coordinate origin;
acquiring a target polar coordinate value corresponding to each second polar coordinate value, and determining the target position corresponding to the target polar coordinate value in the physical scene, where the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the first and second polar angles of the target polar coordinate value are the same as those of the second polar coordinate value;
and acquiring a second image according to the target position, where the second image is to be displayed on the display screen.
In a second aspect, an embodiment of the present application provides an image processing apparatus, where the apparatus includes a rear camera, and the apparatus further includes:
the first acquisition module is configured to acquire a first image captured by the rear camera and the position, in the imaging space of the rear camera, of the physical scene corresponding to the first image, where the first image is displayed on a display screen of the apparatus;
the second acquisition module is configured to acquire a first polar coordinate value corresponding to each position point in the physical scene and a second polar coordinate value corresponding to each position point in the first image, where the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eyes of the user of the apparatus as the coordinate origin;
the determining module is configured to acquire a target polar coordinate value corresponding to each second polar coordinate value and determine the target position corresponding to the target polar coordinate value in the physical scene, where the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the first and second polar angles of the target polar coordinate value are the same as those of the second polar coordinate value;
and the third acquisition module is configured to acquire a second image according to the target position, where the second image is to be displayed on the display screen.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the image processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method according to the first aspect.
In the embodiment of the application, the first polar coordinate value corresponding to each position point in the physical scene is obtained, and based on the second polar coordinate value corresponding to each position point in the first image formed from the physical scene, the target polar coordinate value corresponding to each second polar coordinate value is obtained; the target position corresponding to the target polar coordinate value in the physical scene is determined, and the second image is determined based on the target position. In this way, the image corresponding to the part of the physical scene that is blocked by the electronic device within the field of view of the user's eyes can be determined, so that the electronic device can display the second image. The second image shown on the display screen can then be visually combined with the physical scene outside the display screen, improving the image display effect of the electronic device and bringing the user a better visual experience.
Drawings
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application;
fig. 1a is a scene schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 1b is a schematic view of a display interface of an electronic device to which an image processing method provided by an embodiment of the present application is applied;
fig. 2 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequence or chronological order. It should be appreciated that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like are used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The following describes in detail an image processing method, an image processing apparatus, and an electronic device provided in the embodiments of the present application with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application. The image processing method is applied to an electronic device, and the electronic device includes a rear camera. Optionally, the electronic device may be a mobile phone, a tablet, a notebook computer, a wearable device, or the like.
As shown in fig. 1, the image processing method includes the steps of:
step 101, acquiring a first image acquired by the rear camera and a position of a physical scene corresponding to the first image in an imaging space of the rear camera, wherein the first image is displayed on a display screen of the electronic equipment.
The electronic device can capture a first image through the rear camera and display it on the display screen, and can determine, through the rear camera, the position of the physical scene corresponding to the first image in the imaging space of the rear camera, so as to determine the position of the physical scene relative to the rear camera. As shown in fig. 1a, if the viewing range of the rear camera F is the physical scene AD, the first image displayed on the display screen of the electronic device is the image corresponding to AD.
Step 102, acquiring a first polar coordinate value corresponding to each position point in the physical scene and a second polar coordinate value corresponding to each position point in the first image, where the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eyes of the user of the electronic device as the coordinate origin.
It can be understood that, after the electronic device determines the position of the physical scene relative to the rear camera, the electronic device can further acquire a first polar coordinate value corresponding to each position point in the physical scene with the position of the eye of the user of the electronic device as the origin of coordinates. That is to say, a first polar coordinate system is established with the position of the eye of the user of the electronic device as the origin of coordinates, and a first polar coordinate value corresponding to each position point in the physical scene in the first polar coordinate system is acquired. The position of the eyes of the user of the electronic equipment can be determined by a front camera, a ranging sensor, an infrared sensor and the like of the electronic equipment.
Optionally, before the step 102, the method may further include:
and determining the position of the eyes of the user of the electronic equipment in the imaging space of the front camera based on the front camera of the electronic equipment.
It can be understood that the eyes of the user of the electronic device need to be located within the viewing range of the front camera, for example, the eyes of the user of the electronic device directly view the display screen of the electronic device, and then the electronic device can determine the position of the eyes of the user in the imaging space of the front camera through the front camera, and then can also determine the position of the eyes of the user relative to the front camera. For example, the electronic device determines the position of the eyes of the user according to the monocular image depth estimation method FastDepth based on the front camera, and a specific algorithm implementation process may refer to related technologies, which are not described in detail in the embodiments of the present application.
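The patent only names FastDepth for the monocular depth estimate; it does not give the back-projection step. As an illustrative complement, the sketch below converts a detected eye pixel and its estimated depth into a 3D position in the front camera's coordinate frame using a standard pinhole model. The function and parameter names are hypothetical and not taken from the patent.

```python
def backproject_eye_position(u, v, depth, fx, fy, cx, cy):
    # Pinhole back-projection: pixel (u, v) observed at the estimated
    # depth maps to a 3D point in the front camera's coordinate frame.
    # fx, fy are focal lengths in pixels, (cx, cy) the principal point.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

For example, an eye detected exactly at the principal point at 0.5 m depth lies on the camera's optical axis at (0, 0, 0.5); a pixel 100 columns to the right with fx = 500 shifts the point by depth × 100/500 along the x-axis.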
In the embodiment of the present application, the first polar coordinate value and the second polar coordinate value are both located in the first polar coordinate system, which is established with the position of the eyes of the user of the electronic device as the coordinate origin. The position of the user's eyes can be determined relative to the front camera, and the position of the first position relative to the rear camera is known, so the position or distance of the first position relative to the user's eyes can be calculated.
Optionally, the obtaining first polar coordinate values corresponding to respective position points in the physical scene includes:
obtaining third polar coordinate values corresponding to each position point in the entity scene, wherein the third polar coordinate values are located in a second polar coordinate system, and the second polar coordinate system takes the position of the rear camera as a coordinate origin;
and converting the third polar coordinate value into a first polar coordinate value.
The electronic device can establish a second polar coordinate system with the position of the rear camera as the coordinate origin, and acquire a third polar coordinate value corresponding to each position point in the physical scene in the second polar coordinate system. Further, based on the first polar coordinate system and the second polar coordinate system, the third polar coordinate value in the second polar coordinate system is converted to correspond to the first polar coordinate value in the first polar coordinate system.
Optionally, the converting the third polar value into the first polar value comprises:
acquiring a fourth polar coordinate value, wherein the fourth polar coordinate value is a polar coordinate value corresponding to the eye position of the electronic equipment user in the second polar coordinate system;
calculating the first polar coordinate value based on the third polar coordinate value and the fourth polar coordinate value.
The coordinate origin of the second polar coordinate system corresponds to the position of the rear camera of the electronic device, and the electronic device can determine the position of the eyes of the user relative to the rear camera based on devices such as the front camera or the ranging sensor, so that the fourth polar coordinate value corresponding to the eyes of the user in the second polar coordinate system established by taking the rear camera as the coordinate origin can be determined.
It can be understood that the third polar coordinate value and the fourth polar coordinate value are both located in the second polar coordinate system: the third polar coordinate value corresponds to each position point in the physical scene, and the fourth polar coordinate value corresponds to the position of the eyes of the user of the electronic device. From the third and fourth polar coordinate values, the specific position of each position point in the physical scene relative to the user's eyes can be determined, so that, in the first polar coordinate system established with the position of the user's eyes as the coordinate origin, the polar coordinate value corresponding to each position point in the physical scene, i.e. the first polar coordinate value, can be determined.
In an embodiment of the present application, the obtaining a first polar coordinate value corresponding to the first position based on the third polar coordinate value and the fourth polar coordinate value includes:
converting the third polar coordinate value into a first coordinate value corresponding to a first Euclidean coordinate system, and converting the fourth polar coordinate value into a second coordinate value corresponding to the first Euclidean coordinate system, where the first Euclidean coordinate system takes the position of the rear camera as the coordinate origin;
calculating a third coordinate value based on the first coordinate value and the second coordinate value, wherein the third coordinate value is located in a second Euclidean coordinate system, and the second Euclidean coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
converting the third coordinate value into a first polar coordinate value corresponding to the first polar coordinate system.
It will be appreciated that the calculation is simpler if the two polar coordinate values are each first converted into coordinate values in a Euclidean coordinate system.
The third polar coordinate value and the fourth polar coordinate value are both located in the second polar coordinate system, which takes the position of the rear camera as the coordinate origin. Correspondingly, the first Euclidean coordinate system is established with the position of the rear camera as the coordinate origin; the third polar coordinate value is converted into the first coordinate value in the first Euclidean coordinate system, and the fourth polar coordinate value into the second coordinate value.
It can be understood that the third polar coordinate value corresponds to each position point in the physical scene, and the fourth polar coordinate value corresponds to the position of the eyes of the user of the electronic device; likewise, the first coordinate value corresponds to each position point in the physical scene, and the second coordinate value to the position of the user's eyes. The specific position of each position point in the physical scene relative to the user's eyes can be calculated from the first and second coordinate values. Then, in the second Euclidean coordinate system established with the position of the user's eyes as the coordinate origin, the coordinate value corresponding to each position point in the physical scene, namely the third coordinate value, can be determined based on that position. For example, the third coordinate value may be the difference between the first coordinate value and the second coordinate value.
Optionally, the x-axis coordinate value in the third coordinate value is a difference between the x-axis coordinate value in the first coordinate value and the x-axis coordinate value in the second coordinate value; the y-axis coordinate value in the third coordinate value is the difference between the y-axis coordinate value in the first coordinate value and the y-axis coordinate value in the second coordinate value; the z-axis coordinate value in the third coordinate value is the difference between the z-axis coordinate value in the first coordinate value and the z-axis coordinate value in the second coordinate value.
In this way, the third coordinate value is calculated from the first and second coordinate values; it is located in the second Euclidean coordinate system established with the position of the user's eyes as the coordinate origin. The third coordinate value is then converted into the first polar coordinate value in the first polar coordinate system, which is established with the same coordinate origin, thereby obtaining the first polar coordinate value corresponding to each position point in the physical scene.
To better understand the above calculation, a specific embodiment will be described below.
Assume that the polar coordinate value corresponding to an arbitrary position (e.g. the first position) of the physical scene in the imaging space of the rear camera, in the polar coordinate system with the rear camera as the coordinate origin, is (r_CAM, φ_CAM, θ_CAM). According to the conversion relation between the polar coordinate system and the Euclidean coordinate system, the corresponding first coordinate value is:

x_CAM = r_CAM · sin φ_CAM · cos θ_CAM
y_CAM = r_CAM · sin φ_CAM · sin θ_CAM
z_CAM = r_CAM · cos φ_CAM

where x_CAM, y_CAM, and z_CAM are the x-axis, y-axis, and z-axis coordinate values of the first coordinate value, r_CAM is the polar radial coordinate value (namely the distance from the position of the rear camera to the first position), φ_CAM is the first polar angle coordinate value, and θ_CAM is the second polar angle coordinate value of the second polar coordinate system; for the specific definitions of the above parameters, reference may be made to the related art.
Similarly, assume that the polar coordinate value corresponding to the position of the eyes of the user of the electronic device, in the polar coordinate system with the rear camera as the coordinate origin, is (r'_CAM, φ'_CAM, θ'_CAM). According to the conversion relation between the polar coordinate system and the Euclidean coordinate system, the corresponding second coordinate value is:

x'_CAM = r'_CAM · sin φ'_CAM · cos θ'_CAM
y'_CAM = r'_CAM · sin φ'_CAM · sin θ'_CAM
z'_CAM = r'_CAM · cos φ'_CAM

where x'_CAM, y'_CAM, and z'_CAM are the x-axis, y-axis, and z-axis coordinate values of the second coordinate value, r'_CAM is the polar radial coordinate value (namely the distance from the position of the rear camera to the user's eyes), φ'_CAM is the first polar angle coordinate value, and θ'_CAM is the second polar angle coordinate value.
It is understood that the first coordinate value and the second coordinate value are both located in the first Euclidean coordinate system established with the rear camera as the coordinate origin. In the second Euclidean coordinate system established with the position of the user's eyes as the coordinate origin, the coordinate value (i.e. the third coordinate value) of an arbitrary position point (e.g. the first position) of the physical scene in the imaging space of the rear camera can be obtained from the first coordinate value and the second coordinate value as:

x_E = x_CAM − x'_CAM
y_E = y_CAM − y'_CAM
z_E = z_CAM − z'_CAM

The definitions of the parameters in the above formulas are as given for the first two formulas and are not repeated here.
Further, according to the conversion relation between the polar coordinate system and the Euclidean coordinate system, the polar coordinate value (r_E, φ_E, θ_E) corresponding to the third coordinate value, in the polar coordinate system established with the position of the user's eyes as the coordinate origin, is:

r_E = √(x_E² + y_E² + z_E²)
φ_E = arccos(z_E / r_E)
θ_E = arctan(y_E / x_E)

In this way, the polar coordinate value corresponding to an arbitrary position in the physical scene, in the polar coordinate system established with the position of the user's eyes as the coordinate origin, can be obtained based on the conversion relation between the polar coordinate system and the Euclidean coordinate system.
Step 103, obtaining a target polar coordinate value corresponding to each second polar coordinate value, and determining the target position corresponding to the target polar coordinate value in the physical scene, where the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the first and second polar angles of the target polar coordinate value are the same as those of the second polar coordinate value.
It can be understood that the first polar coordinate values and the second polar coordinate values are all located in the first polar coordinate system: the second polar coordinate values correspond to the position points in the first image captured by the rear camera, and the first polar coordinate values correspond to the position points in the physical scene. The first image corresponds to the physical scene and is the image displayed on the display screen of the electronic device, i.e. the image the device presents to the user. After the first polar coordinate value of each position point in the physical scene and the second polar coordinate value of each position point in the displayed first image are obtained, the target position in the physical scene that matches the second polar coordinate value is determined; the target position is the position in the physical scene that would be seen by the eyes of the user of the electronic device.
Optionally, the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the first and second polar angles of the target polar coordinate value are the same as those of the second polar coordinate value.
It can be understood that the first image is an image captured by the rear camera and displayed on the display screen of the electronic device. As shown in fig. 1a, the viewing range of the rear camera F is the physical scene AD, while the range blocked by the electronic device within the field of view of the user's eyes E is the physical scene BC. For the display screen to present the effect of a transparent display screen, it needs to display the image formed corresponding to the physical scene BC; the displayed image can then blend naturally with the actual scene outside the display screen, avoiding breaks or repetition between the image on the screen and the scene around it, so that the display screen presents a quasi-transparent effect.
In this embodiment of the application, the electronic device may obtain the first polar coordinate values corresponding to all position points in the physical scene, and the second polar coordinate values corresponding to the position points in the first image. If the polar coordinate value of a first position point in the physical scene matches the polar coordinate value of a position point on the display screen, i.e. their first and second polar angle coordinate values are the same, then the two position points lie on the same straight line, and that line also passes through the position of the user's eyes; in other words, the first position point in the physical scene lies within the range blocked by the display screen. The matched first position point is determined as a target position, so the target positions in the physical scene are exactly the area blocked by the electronic device, while position points in the physical scene whose polar coordinate values match no position point on the display screen lie in the range not blocked by the electronic device. In the embodiment of the application, it is the target positions in the physical scene, i.e. the area blocked by the electronic device, that are to be displayed, so that the display screen can present a quasi-transparent effect.
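The matching step above can be sketched as a brute-force search over eye-origin polar values. This is a minimal illustration, not the patent's implementation: the tolerance used to compare discretized angles and the function names are assumptions.

```python
def find_target_positions(scene_points_eye, screen_points_eye, tol=1e-3):
    # scene_points_eye: first polar values (r, phi, theta) of scene points,
    # screen_points_eye: second polar values of display-screen points,
    # both in the eye-origin polar system. A scene point whose two polar
    # angles match those of some screen point lies behind the display,
    # i.e. it is a target position to be shown on the screen.
    targets = []
    for (r_s, phi_s, theta_s) in scene_points_eye:
        for (_, phi_d, theta_d) in screen_points_eye:
            if abs(phi_s - phi_d) < tol and abs(theta_s - theta_d) < tol:
                targets.append((r_s, phi_s, theta_s))
                break
    return targets
```

A scene point whose angles match no screen point is outside the blocked range and is simply not returned.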
Step 104: acquiring a second image according to the target position, wherein the second image is used for being displayed on the display screen.
In the embodiment of the application, after the target position to be displayed in the physical scene is determined, the second image can be determined based on the target position. It is understood that the second image is a part of the first image captured by the rear camera.
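One plausible way to form the second image from the first — assuming the target positions have already been mapped back to pixel indices of the first image — is a bounding-box crop; this realization and all names here are illustrative, not specified by the patent.

```python
def crop_second_image(first_image, target_pixels):
    """first_image: 2-D list of pixel values (rows of columns).
    target_pixels: set of (row, col) indices of the first image whose scene
    points were determined to be target positions.
    Returns the bounding-box crop containing all target pixels, i.e. the
    part of the first image that becomes the second image."""
    rows = [r for r, _ in target_pixels]
    cols = [c for _, c in target_pixels]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    return [row[c0:c1 + 1] for row in first_image[r0:r1 + 1]]
```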
Further, the step 104 is followed by:
updating the first image to the second image.
It is understood that after the second image is determined, the first image displayed by the electronic device may be updated to the second image, and the second image is displayed on the display screen. For example, based on the method described in step 102, the polar coordinate value of a position point X in the first image is converted into a polar coordinate value (r_X, θ_X, φ_X) in the first polar coordinate system, which takes the position of the user's eye as the coordinate origin, and a position point Y in the physical scene is likewise converted into a polar coordinate value (r_Y, θ_Y, φ_Y) in the first polar coordinate system. If θ_X = θ_Y and φ_X = φ_Y, that is, the two polar angles are the same, the point Y in the physical scene is displayed on the display screen of the electronic device. As shown in fig. 1b, the electronic device S2 displays a portion S11 of the physical scene S1; S11 is the portion of the user's eye view that is blocked by the electronic device S2, while the portion S12 that is not blocked by the electronic device S2 is not displayed on the display screen. The portion S11 displayed on the display screen and the unblocked portion S12 visually merge into the whole physical scene S1.
In this way, what is shown on the display screen of the electronic device is exactly the image corresponding to the physical scene that the electronic device blocks within the field of view of the user's eyes, and the physical scene that is not blocked is not shown on the screen. The second image displayed on the screen therefore merges, in visual effect, with the physical scene outside the screen (as shown in fig. 1b), so that the display screen visually presents an effect similar to a transparent display screen and brings the user a better visual experience.
The scheme provided by the embodiment of the application can be applied when the display screen of the electronic device is in a bright-screen state. For example, when the electronic device is in a screen-locked state, the second image can be presented on the display screen by the above method once the screen lights up; the method may also be applied to a scenario in which the electronic device receives an incoming call, among others, and is not specifically limited.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described by taking as an example an image processing apparatus that executes the image processing method.
Referring to fig. 2, fig. 2 is a diagram of an image processing apparatus according to an embodiment of the present disclosure, where the image processing apparatus includes a rear camera. As shown in fig. 2, the image processing apparatus 200 includes:
a first obtaining module 201, configured to obtain a first image collected by the rear camera and a position of a physical scene corresponding to the first image in an imaging space of the rear camera, where the first image is displayed on a display screen of the apparatus;
a second obtaining module 202, configured to obtain a first polar coordinate value corresponding to each position point in the physical scene and a second polar coordinate value corresponding to each position point in the first image, where the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system uses a position of an eye of the device user as a coordinate origin;
a first determining module 203, configured to obtain a target polar coordinate value corresponding to each second polar coordinate value, and determine a target position corresponding to the target polar coordinate value in the physical scene, where the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the target polar coordinate value has a first polar angle and a second polar angle that are the same as those of the second polar coordinate value;
a third obtaining module 204, configured to obtain a second image according to the target position, where the second image is used to be displayed in the display screen.
Optionally, the second obtaining module 202 includes:
the acquisition submodule is used for acquiring a third polar coordinate value corresponding to each position point in the entity scene, wherein the third polar coordinate value is positioned in a second polar coordinate system, and the second polar coordinate system takes the position of the rear camera as a coordinate origin;
and the conversion submodule is used for converting the third polar coordinate value into a first polar coordinate value.
Optionally, the conversion sub-module is further configured to:
acquiring a fourth polar coordinate value, wherein the fourth polar coordinate value is a polar coordinate value corresponding to the eye position of the electronic equipment user in the second polar coordinate system;
and acquiring a first polar coordinate value corresponding to the first position based on the third polar coordinate value and the fourth polar coordinate value.
Optionally, the conversion sub-module is further configured to:
converting the third polar coordinate value into a first coordinate value corresponding to a first Euclidean coordinate system, and converting the fourth polar coordinate value into a second coordinate value corresponding to the first Euclidean coordinate system, wherein the first Euclidean coordinate system takes the position of the rear camera as a coordinate origin;
calculating a third coordinate value based on the first coordinate value and the second coordinate value, wherein the third coordinate value is located in a second Euclidean coordinate system, and the second Euclidean coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
converting the third coordinate value into a first polar coordinate value corresponding to the first polar coordinate system.
Optionally, the x-axis coordinate value in the third coordinate value is a difference between the x-axis coordinate value in the first coordinate value and the x-axis coordinate value in the second coordinate value; the y-axis coordinate value in the third coordinate value is the difference between the y-axis coordinate value in the first coordinate value and the y-axis coordinate value in the second coordinate value; the z-axis coordinate value in the third coordinate value is the difference between the z-axis coordinate value in the first coordinate value and the z-axis coordinate value in the second coordinate value.
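The conversion chain described above — third polar coordinate value to the first Euclidean coordinate system, origin translation by component-wise difference, then back to the first polar coordinate system — can be sketched as follows. A spherical convention with polar angle θ measured from the z-axis and azimuth φ in the x-y plane is assumed; the patent does not fix a convention, and the function names are illustrative.

```python
import math

def polar_to_cartesian(r, theta, phi):
    # theta: polar angle from the z-axis; phi: azimuth in the x-y plane.
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_polar(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r else 0.0
    phi = math.atan2(y, x)
    return (r, theta, phi)

def camera_polar_to_eye_polar(third_polar, fourth_polar):
    """third_polar: a scene point in the camera-origin (second) polar system.
    fourth_polar: the user's eye position in the same system.
    Returns the scene point in the eye-origin (first) polar system."""
    px, py, pz = polar_to_cartesian(*third_polar)   # first coordinate value
    ex, ey, ez = polar_to_cartesian(*fourth_polar)  # second coordinate value
    # Third coordinate value: the component-wise difference, i.e. a
    # translation of the origin from the rear camera to the user's eye.
    return cartesian_to_polar(px - ex, py - ey, pz - ez)
```

The component-wise subtraction is valid because both Euclidean frames are assumed to share the same axis orientations; if the eye-centred frame were also rotated, a rotation matrix would be needed in addition to the translation.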
Optionally, the apparatus further comprises:
the second determining module is used for determining the position of the eyes of the user of the electronic equipment in the imaging space of the front camera based on the front camera of the electronic equipment.
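A minimal sketch of how the second determining module might locate the user's eye in the front camera's imaging space, assuming a pinhole camera model with known intrinsics and an externally supplied depth estimate — none of which the patent specifies:

```python
def eye_position_in_camera_space(u, v, depth, fx, fy, cx, cy):
    """Back-project the eye's pixel (u, v), detected in the front-camera
    image, to a 3-D point in the front camera's imaging space, given a
    depth estimate and the pinhole intrinsics (focal lengths fx, fy and
    principal point cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```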
Optionally, the apparatus further comprises:
and the updating module is used for updating the first image into the second image.
The image processing apparatus 200 according to the embodiment of the present application obtains a target polar coordinate value corresponding to each second polar coordinate value by obtaining first polar coordinate values corresponding to the position points in a physical scene and second polar coordinate values corresponding to the position points in a first image formed from the physical scene, determines a target position corresponding to the target polar coordinate value in the physical scene, and determines a second image based on the target position. In this way, the image corresponding to the physical scene blocked by the electronic device within the field of view of the user's eyes can be determined, so that the image processing apparatus 200 can display the second image; the second image displayed on the screen then merges, in visual effect, with the physical scene outside the screen, improving the image display effect of the image processing apparatus 200 and bringing the user a better visual experience.
The image processing apparatus 200 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, and the like; the embodiments of the present application are not particularly limited.
The image processing apparatus 200 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The image processing apparatus 200 provided in this embodiment of the application can implement each process implemented by the above-mentioned image processing method embodiment, and is not described here again to avoid repetition.
Referring to fig. 3, fig. 3 is a structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 3, the electronic device includes: a processor 300, a memory 320, and a program or instructions stored on the memory 320 and executable on the processor 300; the processor 300 is configured to read the program or instructions in the memory 320. The electronic device also includes a bus interface and a transceiver 310.
A transceiver 310 for receiving and transmitting data under the control of the processor 300.
In fig. 3, the bus architecture may include any number of interconnected buses and bridges linking together various circuits, in particular one or more processors represented by the processor 300 and memory represented by the memory 320. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. The bus interface provides an interface. The transceiver 310 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 300 is responsible for managing the bus architecture and general processing, and the memory 320 may store data used by the processor 300 in performing operations.
It should be noted that the electronic device includes a rear camera, a front camera, and a display screen.
The processor 300 is configured to read a program or an instruction in the memory 320, and execute the following steps:
acquiring a first image acquired by the rear camera and the position of the entity scene corresponding to the first image in the imaging space of the rear camera, wherein the first image is displayed on a display screen of the electronic equipment;
acquiring a first polar coordinate value corresponding to each position point in the entity scene and a second polar coordinate value corresponding to each position point in the first image, wherein the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
acquiring a target polar coordinate value corresponding to each second polar coordinate value, and determining a target position corresponding to the target polar coordinate value in the physical scene, wherein the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the target polar coordinate value has a first polar angle and a second polar angle which are the same as those of the second polar coordinate value;
and acquiring a second image according to the target position, wherein the second image is used for being displayed in the display screen.
Optionally, the processor 300 is configured to read the program or the instructions in the memory 320, and perform the following steps:
obtaining third polar coordinate values corresponding to each position point in the entity scene, wherein the third polar coordinate values are located in a second polar coordinate system, and the second polar coordinate system takes the position of the rear camera as a coordinate origin;
and converting the third polar coordinate value into a first polar coordinate value.
Optionally, the processor 300 is configured to read the program or the instructions in the memory 320, and perform the following steps:
acquiring a fourth polar coordinate value, wherein the fourth polar coordinate value is a polar coordinate value corresponding to the eye position of the electronic equipment user in the second polar coordinate system;
and acquiring a first polar coordinate value corresponding to the first position based on the third polar coordinate value and the fourth polar coordinate value.
Optionally, the processor 300 is configured to read the program or the instructions in the memory 320, and perform the following steps:
converting the third polar coordinate value into a first coordinate value corresponding to a first Euclidean coordinate system, and converting the fourth polar coordinate value into a second coordinate value corresponding to the first Euclidean coordinate system, wherein the first Euclidean coordinate system takes the position of the rear camera as a coordinate origin;
calculating a third coordinate value based on the first coordinate value and the second coordinate value, wherein the third coordinate value is located in a second Euclidean coordinate system, and the second Euclidean coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
converting the third coordinate value into a first polar coordinate value corresponding to the first polar coordinate system.
Optionally, the x-axis coordinate value in the third coordinate value is a difference between the x-axis coordinate value in the first coordinate value and the x-axis coordinate value in the second coordinate value; the y-axis coordinate value in the third coordinate value is the difference between the y-axis coordinate value in the first coordinate value and the y-axis coordinate value in the second coordinate value; the z-axis coordinate value in the third coordinate value is the difference between the z-axis coordinate value in the first coordinate value and the z-axis coordinate value in the second coordinate value.
Optionally, the processor 300 is configured to read the program or the instructions in the memory 320, and perform the following steps:
and determining the position of the eyes of the user of the electronic equipment in the imaging space of the front camera based on the front camera of the electronic equipment.
Optionally, the processor 300 is configured to read the program or the instructions in the memory 320, and perform the following steps:
updating the first image to the second image.
The electronic device provided by the embodiment of the application obtains a target polar coordinate value corresponding to each second polar coordinate value by obtaining first polar coordinate values corresponding to the position points in a physical scene and second polar coordinate values corresponding to the position points in a first image formed from the physical scene, determines a target position corresponding to the target polar coordinate value in the physical scene, and determines a second image based on the target position. In this way, the image corresponding to the physical scene blocked by the electronic device within the field of view of the user's eyes can be determined, so that the electronic device can display the second image; the second image displayed on the screen then merges, in visual effect, with the physical scene outside the screen, improving the image display effect of the electronic device and bringing the user a better visual experience.
The embodiment of the invention further provides a readable storage medium on which a computer program is stored.
Wherein the computer program when executed by a processor implements the steps of:
acquiring a first image acquired by the rear camera and the position of the entity scene corresponding to the first image in the imaging space of the rear camera, wherein the first image is displayed on a display screen of the electronic equipment;
acquiring a first polar coordinate value corresponding to each position point in the entity scene and a second polar coordinate value corresponding to each position point in the first image, wherein the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
acquiring a target polar coordinate value corresponding to each second polar coordinate value, and determining a target position corresponding to the target polar coordinate value in the physical scene, wherein the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the target polar coordinate value has a first polar angle and a second polar angle which are the same as those of the second polar coordinate value;
and acquiring a second image according to the target position, wherein the second image is used for being displayed in the display screen.
Optionally, the computer program when executed by a processor implements the steps of:
obtaining third polar coordinate values corresponding to each position point in the entity scene, wherein the third polar coordinate values are located in a second polar coordinate system, and the second polar coordinate system takes the position of the rear camera as a coordinate origin;
and converting the third polar coordinate value into a first polar coordinate value.
Optionally, the computer program when executed by a processor implements the steps of:
acquiring a fourth polar coordinate value, wherein the fourth polar coordinate value is a polar coordinate value corresponding to the eye position of the electronic equipment user in the second polar coordinate system;
and acquiring a first polar coordinate value corresponding to the first position based on the third polar coordinate value and the fourth polar coordinate value.
Optionally, the computer program when executed by a processor implements the steps of:
converting the third polar coordinate value into a first coordinate value corresponding to a first Euclidean coordinate system, and converting the fourth polar coordinate value into a second coordinate value corresponding to the first Euclidean coordinate system, wherein the first Euclidean coordinate system takes the position of the rear camera as a coordinate origin;
calculating a third coordinate value based on the first coordinate value and the second coordinate value, wherein the third coordinate value is located in a second Euclidean coordinate system, and the second Euclidean coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
converting the third coordinate value into a first polar coordinate value corresponding to the first polar coordinate system.
Optionally, the x-axis coordinate value in the third coordinate value is a difference between the x-axis coordinate value in the first coordinate value and the x-axis coordinate value in the second coordinate value; the y-axis coordinate value in the third coordinate value is the difference between the y-axis coordinate value in the first coordinate value and the y-axis coordinate value in the second coordinate value; the z-axis coordinate value in the third coordinate value is the difference between the z-axis coordinate value in the first coordinate value and the z-axis coordinate value in the second coordinate value.
Optionally, the computer program when executed by a processor implements the steps of:
and determining the position of the eyes of the user of the electronic equipment in the imaging space of the front camera based on the front camera of the electronic equipment.
Optionally, the computer program when executed by a processor implements the steps of:
updating the first image to the second image.
In this embodiment, the readable storage medium can implement all the processes of the above embodiment of the image processing method; the implementation principle and the technical effect are similar and are not described herein again.
The readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method is applied to electronic equipment, and the electronic equipment comprises a rear camera; characterized in that the method comprises:
acquiring a first image acquired by the rear camera and the position of the entity scene corresponding to the first image in the imaging space of the rear camera, wherein the first image is displayed on a display screen of the electronic equipment;
acquiring a first polar coordinate value corresponding to each position point in the entity scene and a second polar coordinate value corresponding to each position point in the first image, wherein the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
acquiring a target polar coordinate value corresponding to each second polar coordinate value, and determining a target position corresponding to the target polar coordinate value in the physical scene, wherein the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the physical scene, and the target polar coordinate value has a first polar angle and a second polar angle which are the same as those of the second polar coordinate value;
and acquiring a second image according to the target position, wherein the second image is used for being displayed in the display screen.
2. The method according to claim 1, wherein the obtaining first polar coordinate values corresponding to each position point in the physical scene comprises:
obtaining third polar coordinate values corresponding to each position point in the entity scene, wherein the third polar coordinate values are located in a second polar coordinate system, and the second polar coordinate system takes the position of the rear camera as a coordinate origin;
and converting the third polar coordinate value into a first polar coordinate value.
3. The method of claim 2, wherein the converting the third polar value to the first polar value comprises:
acquiring a fourth polar coordinate value, wherein the fourth polar coordinate value is a polar coordinate value corresponding to the eye position of the electronic equipment user in the second polar coordinate system;
calculating a first polar coordinate value based on the third polar coordinate value and the fourth polar coordinate value.
4. The method of claim 3, wherein the calculating a first polar coordinate value based on the third and fourth polar coordinate values comprises:
converting the third polar coordinate value into a first coordinate value corresponding to a first Euclidean coordinate system, and converting the fourth polar coordinate value into a second coordinate value corresponding to the first Euclidean coordinate system, wherein the first Euclidean coordinate system takes the position of the rear camera as a coordinate origin;
calculating a third coordinate value based on the first coordinate value and the second coordinate value, wherein the third coordinate value is located in a second Euclidean coordinate system, and the second Euclidean coordinate system takes the position of the eye of the user of the electronic equipment as a coordinate origin;
converting the third coordinate value into a first polar coordinate value corresponding to the first polar coordinate system.
5. The method of claim 4 wherein the x-axis coordinate value in the third coordinate value is the difference between the x-axis coordinate value in the first coordinate value and the x-axis coordinate value in the second coordinate value; the y-axis coordinate value in the third coordinate value is the difference between the y-axis coordinate value in the first coordinate value and the y-axis coordinate value in the second coordinate value; the z-axis coordinate value in the third coordinate value is the difference between the z-axis coordinate value in the first coordinate value and the z-axis coordinate value in the second coordinate value.
6. The method according to claim 1, wherein before the obtaining the first polar coordinate value corresponding to each position point in the physical scene, the method further comprises:
and determining the position of the eyes of the user of the electronic equipment in the imaging space of the front camera based on the front camera of the electronic equipment.
7. The method of claim 1, wherein after the acquiring the second image according to the target position, the method further comprises:
updating the first image to the second image.
8. An image processing apparatus including a rear camera, characterized by comprising:
the first acquisition module is used for acquiring a first image acquired by the rear camera and the position of the entity scene corresponding to the first image in the imaging space of the rear camera, and the first image is displayed in a display screen of the device;
a second obtaining module, configured to obtain a first polar coordinate value corresponding to each position point in the physical scene and a second polar coordinate value corresponding to each position point in the first image, where the first polar coordinate value and the second polar coordinate value are both located in a first polar coordinate system, and the first polar coordinate system uses a position of an eye of a user of the electronic device as a coordinate origin;
a determining module, configured to obtain a target polar coordinate value corresponding to each second polar coordinate value, and determine a target position corresponding to the target polar coordinate value in the entity scene, where the target polar coordinate value is any one of the first polar coordinate values corresponding to the position points in the entity scene, and the target polar coordinate value has a first polar angle and a second polar angle that are the same as those of the second polar coordinate value;
and the third acquisition module is used for acquiring a second image according to the target position, wherein the second image is used for being displayed in the display screen.
9. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 7.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 7.
CN202011509145.6A 2020-12-18 2020-12-18 Image processing method, electronic device, and readable storage medium Active CN112631487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011509145.6A CN112631487B (en) 2020-12-18 2020-12-18 Image processing method, electronic device, and readable storage medium


Publications (2)

Publication Number Publication Date
CN112631487A (en) 2021-04-09
CN112631487B (en) 2022-06-28

Family

ID=75317411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011509145.6A Active CN112631487B (en) 2020-12-18 2020-12-18 Image processing method, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN112631487B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108153502A (en) * 2017-12-22 2018-06-12 长江勘测规划设计研究有限责任公司 Hand-held augmented reality display methods and device based on transparent screen
CN108491069A (en) * 2018-03-01 2018-09-04 湖南西冲智能家居有限公司 A kind of augmented reality AR transparence display interaction systems
CN111541888A (en) * 2020-05-07 2020-08-14 青岛跃迁科技有限公司 AR implementation method based on display surface
US20200368616A1 (en) * 2017-06-09 2020-11-26 Dean Lindsay DELAMONT Mixed reality gaming system


Similar Documents

Publication Publication Date Title
CN111046744B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
CN107223269B (en) Three-dimensional scene positioning method and device
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
CN109064390B (en) Image processing method, image processing device and mobile terminal
JP7339386B2 (en) Eye-tracking method, eye-tracking device, terminal device, computer-readable storage medium and computer program
US20220092803A1 (en) Picture rendering method and apparatus, terminal and corresponding storage medium
CN109089015B (en) Video anti-shake display method and device
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN111290580A (en) Calibration method based on sight tracking and related device
CN110706283A (en) Calibration method and device for sight tracking, mobile terminal and storage medium
CN107592520B (en) Imaging device and imaging method of AR equipment
CN112631487B (en) Image processing method, electronic device, and readable storage medium
CN112365530A (en) Augmented reality processing method and device, storage medium and electronic equipment
CN115797579A (en) Image superposition method and device for three-dimensional map, electronic equipment and storage medium
CN112767248B (en) Method, device and equipment for splicing infrared camera pictures and readable storage medium
CN112711984B (en) Fixation point positioning method and device and electronic equipment
CN112184920A (en) AR-based skiing blind area display method and device and storage medium
CN112200842A (en) Image registration method and device, terminal equipment and storage medium
CN109949212B (en) Image mapping method, device, electronic equipment and storage medium
CN113485660B (en) Folding screen picture display method and device
KR102534449B1 (en) Image processing method, device, electronic device and computer readable storage medium
CN111353929A (en) Image processing method and device and electronic equipment
US20220375098A1 (en) Image matting method and apparatus
CN114332416B (en) Image processing method, device, equipment and storage medium
CN117437258A (en) Image processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant