CN110855972B - Image processing method, electronic device, and storage medium - Google Patents


Info

Publication number
CN110855972B
CN110855972B
Authority
CN
China
Prior art keywords
area
image
region
image quality
processing method
Prior art date
Legal status
Active
Application number
CN201911151310.2A
Other languages
Chinese (zh)
Other versions
CN110855972A (en)
Inventor
张海平
樊晓港
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911151310.2A priority Critical patent/CN110855972B/en
Publication of CN110855972A publication Critical patent/CN110855972A/en
Application granted granted Critical
Publication of CN110855972B publication Critical patent/CN110855972B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking

Abstract

The application discloses an image processing method, an electronic device, and a storage medium, belonging to the technical field of image processing. The image processing method detects the gaze position of the eyeball on a display interface; determines, on a target image to be displayed on the display interface, a first region corresponding to the gaze position and a second region outside the first region; reduces the image quality of the second region; and performs image rendering on the first region and on the quality-reduced second region separately. By dividing the target image into regions based on the gaze position of the eyeball and processing each region separately, the processing difficulty of part of the image, such as the second region, is reduced, which shortens the overall processing time of the image and thus reduces latency.

Description

Image processing method, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an electronic device, and a storage medium.
Background
Existing Augmented Reality (AR) systems suffer from latency. For example, when a user wearing AR glasses puts a hat on their head, the hat may appear to drift back and forth in the AR glasses before the display catches up with the motion.
Disclosure of Invention
The application provides an image processing method, an electronic device, and a storage medium, intended to reduce image processing time and thereby reduce latency.
To solve the above technical problem, one technical solution adopted is an image processing method for an electronic device, comprising:
detecting the gaze position of an eyeball on a display interface;
determining, on a target image to be displayed on the display interface, a first region corresponding to the gaze position and a second region outside the first region;
reducing the image quality of the second region;
performing image rendering on the first region and on the quality-reduced second region separately; and
displaying the processed target image on the display interface.
In a further aspect, detecting the gaze position of the eyeball on the display interface comprises:
tracking and locking the movement angle of the eyeball;
determining the current spatial coordinates of the pupil from the movement angle of the eyeball; and
mapping the current spatial coordinates of the pupil onto a two-dimensional coordinate system of the display interface, and marking the pupil's drop point on the two-dimensional coordinate system as the gaze position of the eyeball.
In a further aspect, reducing the image quality of the second region comprises:
reducing the image resolution of the second region, so that the image quality of the second region is reduced.
In a further aspect, performing image rendering on the first region and the quality-reduced second region comprises:
rendering the first region and the quality-reduced second region separately, where the rendering quality of the first region is higher than that of the quality-reduced second region.
In a further aspect, the second region comprises a plurality of annular regions arranged in sequence around the first region; every two adjacent annular regions are seamlessly joined, and the annular region adjacent to the first region is seamlessly joined to the first region.
Reducing the image quality of the second region then comprises:
reducing the image quality of each of the plurality of annular regions, where, of two adjacent annular regions, the image quality of the annular region closer to the first region is greater than or equal to that of the annular region farther from the first region.
In a further aspect, the second region surrounds the first region, or is located on one side or on two opposite sides of the first region; the second region is seamlessly joined to the first region; the second region comprises a plurality of third regions seamlessly joined together, each third region located on one side of the first region.
Reducing the image quality of the second region then comprises:
reducing the image quality of each of the plurality of third regions, where, of two adjacent third regions, the image quality of the third region closer to the first region is greater than or equal to that of the third region farther from the first region.
In a further aspect, the first region is provided with a reference point.
Determining the first region corresponding to the gaze position and the second region outside the first region then comprises:
acquiring a first coordinate of the drop point of the gaze position on the two-dimensional coordinate system of the display interface;
acquiring a second coordinate of the reference point on the two-dimensional coordinate system;
detecting whether the distance between the first coordinate and the second coordinate is greater than a set distance; and
if so, correcting the second coordinate of the reference point to the first coordinate of the gaze position.
In a further aspect, performing image rendering on the first region and the quality-reduced second region comprises:
applying special-effect distortion processing, rather than image rendering, to the quality-reduced second region, and performing image rendering on the first region.
In a further aspect, performing image rendering on the first region and the quality-reduced second region comprises:
if the moving speed of the electronic device is detected to be greater than a set speed, skipping the image rendering of the first region and the quality-reduced second region;
or, if the angular velocity of the electronic device is detected to be greater than a set angular velocity, skipping the image rendering of the first region and the quality-reduced second region.
To solve the above technical problem, another technical solution adopted is an electronic device comprising a processor, and a display screen, a camera, and a memory connected to the processor;
the camera is used to capture images, and the display screen is used to display the target image;
the memory is used to store program data, and the processor is used to execute the program data to implement the image processing method described above.
To solve the above technical problem, yet another technical solution adopted is a storage medium storing a computer program which, when executed by a processor, implements the image processing method described above.
The technical solution of the present application has the following beneficial effects: the target image is divided into regions based on the gaze position of the eyeball, and each region is processed separately, so that the processing difficulty of part of the image, such as the second region, is reduced, which shortens the overall processing time of the image and thus reduces latency.
Drawings
FIG. 1 is a block diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating an image processing method of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of detecting the gaze position of an eyeball on a display interface according to an embodiment of the present application;
fig. 4 is a flowchart illustrating determining a first region corresponding to a gaze location and a second region other than the first region on a target image to be displayed on a display interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating different divisions of a first region and a second region according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating different divisions of a first region and a second region according to an embodiment of the present application;
FIG. 7 is a schematic diagram of different divisions of a first region and a second region according to an embodiment of the present application;
FIG. 8 is a schematic diagram of different divisions of a first region and a second region according to an embodiment of the present application;
FIG. 9 is a schematic diagram of different divisions of a first region and a second region according to an embodiment of the present application;
FIG. 10 is a schematic diagram of different divisions of a first region and a second region according to an embodiment of the present application;
FIG. 11 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a block diagram of a storage medium according to an embodiment of the present application.
Detailed Description
An electronic device for image processing is described. It may be a hardware device such as a mobile phone, computer, toy, digital broadcast terminal, messaging device, game console, tablet, medical device, fitness device, personal digital assistant, or wearable device; a wearable device may in turn be a virtual reality or augmented reality device, such as virtual reality or augmented reality glasses.
Referring to FIG. 1, a block diagram of an electronic device 10 according to an embodiment is disclosed. The electronic device 10 may include a processor 13, and a camera 11, a memory 12, and a display 14 coupled to the processor 13. The processor 13 runs the program data stored in the memory 12 to perform image processing. The image may come from the camera 11, may be artificially synthesized, or may be stored in the memory 12 in advance; the source of the image is not specifically limited here. The image processed by the processor 13 may be stored in the memory 12 or displayed on the display 14, and other operations may also be performed on it, which is likewise not limited.
Referring to fig. 1, the camera 11 is used to capture images, such as an image of the eyeball or an image of the real world. The captured image may be stored in the memory 12 directly, stored in the memory 12 after processing by the processor 13, or displayed on the display 14 after processing by the processor 13. The camera 11 may be a general-purpose camera.
When the electronic device 10 is an augmented reality device, one camera 11 may be disposed inside the augmented reality device to capture an image of the user's eyeball, and another camera 11 may be disposed at the front end of the device to capture images of the real world. The front camera 11 may include a time-of-flight (TOF) camera, an RGB camera, and two fisheye cameras. The TOF camera may comprise a light-emitting module, a photosensitive receiving module, and an FPC, with both modules connected to the FPC. In operation, the light-emitting module emits a modulated light beam; the beam is reflected by a target object and then received by the photosensitive receiving module, which obtains the flight time of the beam through demodulation and thereby computes the distance to the target object. Thus, when a user wearing the augmented reality device walks around an environment such as a room, the TOF camera can model the room's shape: the distance from each point to the device worn by the user is measured, and a scene is constructed from those distances. The RGB camera can be used to capture two-dimensional color images and the color information of the scene, and is disposed adjacent to the TOF camera. The two fisheye cameras are located on either side of the TOF and RGB cameras and are symmetrically arranged; they are mainly used for cooperative imaging, such as capturing left and right images. The TOF camera, the RGB camera, and the two fisheye cameras complement each other: the fisheye cameras have a large shooting angle (they may be wide-angle cameras) but may have a lower resolution, while the RGB camera has a higher resolution but a smaller shooting angle. Combining the RGB camera and the fisheye cameras yields an image that is both wider and clearer.
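The time-of-flight principle described above reduces to a simple relation: the distance is half the round-trip flight time multiplied by the speed of light. A minimal sketch (illustrative only, not code from the patent):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s):
    """Distance to the target from the measured round-trip flight time of the
    modulated beam: the light travels to the object and back, so halve it."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```

A round trip of 20 nanoseconds, for example, corresponds to a target roughly three metres away.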
Referring to fig. 1, the memory 12 may be used to store the program data executed by the processor 13 and to store data produced by the processor 13 during processing. Specifically, the memory 12 may store the images processed by the processor 13 and the images captured by the camera 11, such as eyeball images and real-world images.
The memory 12 includes a nonvolatile storage portion for storing the above program data. In another embodiment, the memory 12 may serve only as a cache for the processor 13, buffering the program data it executes; the program data is actually stored in a device outside the electronic device 10, and the processor 13 connects to that external device to call the externally stored program data and perform the corresponding processing.
Referring to fig. 1, the processor 13 is used to execute the program data stored in the memory 12. In particular, the processor 13 controls the operation of the electronic device 10: it may control the display 14 to display an image, and it may process the images captured by the camera 11, for example detecting the gaze position of the eyeball from an eyeball image, or performing image processing such as picture-quality adjustment, rendering, and special-effect processing to form the image required by the user.
The processor 13 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit). It may be an integrated circuit chip with signal and graphics processing capabilities. The processor 13 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
An image processing method of the electronic device 10 is described below, which can be applied to the electronic device 10 to perform image processing. Referring to fig. 2, a flowchart of an image processing method of the electronic device 10 according to an embodiment is disclosed, where the image processing method includes:
in step S21, the gaze position of the eyeball is detected on the display interface.
In this embodiment, the target image is displayed on the display interface for the user to watch. The gaze range of the user's eyeball is limited: the part of the target image within that range presents the dominant visual effect to the user, while the part outside the range remains visible but its image quality has little influence on the user's perception. Therefore, a distribution map of influence on the user's visual effect can be determined from the gaze position of the eyeball on the target image.
In step S22, a first region corresponding to the gaze position and a second region other than the first region are determined on the target image to be displayed on the display interface.
In this embodiment, in the distribution map determined on the target image, the farther a position is from the gaze position of the eyeball, the less it influences the user's visual effect. The target image can therefore be divided into a plurality of regions according to the distribution map, such as a first region corresponding to the gaze position and a second region, so that the first region influences the user's visual effect more than the second region does. The range of the first region can of course be enlarged or shrunk relative to the eyeball's gaze range according to the user's needs, and the first and second regions can be further subdivided according to the distribution map, for example splitting the second region into a third region, a fourth region, a fifth region, and so on.
In step S23, the image quality of the second region is reduced.
Because the first and second regions influence the user's visual effect to different degrees, reducing the image quality, such as the resolution, of the less influential second region before subsequent processing does not harm the overall visual effect when the user watches the target image. Processing in this way lowers the difficulty of processing part of the image, such as the second region, and shortens the time spent processing the target image, avoiding the cost of applying the same processing requirements to the entire image. Hardware performance is thereby saved, and the latency of the electronic device is reduced.
In step S24, the image rendering process is performed on each of the first region and the second region whose image quality has been reduced.
In this step, image rendering is performed so that the target image has a pleasing visual effect. Since the image quality of the second region was reduced in step S23, the rendering time for the second region in step S24 is shortened, further reducing the latency of the electronic device.
In step S25, the processed target image is displayed on the display screen.
In this embodiment, the processed target image may be displayed on the display interface, and may also be stored as a backup.
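The flow of steps S22 through S24 can be sketched as a single pass over the frame. The following is a simplified illustration; the function name, the circular first region, and the 2x2-pixel blocking used as the quality reduction are all assumptions for demonstration, not the patent's implementation:

```python
def foveated_pipeline(image, gaze, fovea_radius):
    """Sketch of steps S22-S24: keep full quality in the first region around
    the gaze point and reduce quality in the second region. The 2x2 pixel
    blocking here is a stand-in for any quality-reduction scheme."""
    h, w = len(image), len(image[0])
    gx, gy = gaze
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            in_first = (x - gx) ** 2 + (y - gy) ** 2 <= fovea_radius ** 2
            if not in_first:
                # S23: outside the first region, sample at half resolution
                out[y][x] = image[y - y % 2][x - x % 2]
    return out  # S24/S25 (render and display) would consume this frame
```

Pixels inside the fovea keep their original values; pixels outside repeat the top-left value of their 2x2 block.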
In an embodiment, please refer to fig. 3, which discloses a flowchart of detecting a gaze location of an eyeball in a display interface in step S21 of an embodiment of the present application, wherein step S21 may include:
in step S31, the movement angle of the eyeball is tracked and locked.
In this embodiment, an eyeball image of the user wearing the electronic device 10 can be captured by the camera 11, for example a camera disposed inside an augmented reality device. The processor 13 can then process the eyeball image to obtain the movement angle of the user's eyeball relative to a reference, where the reference may be the state in which the user's eyeball looks straight ahead at the display screen of the electronic device 10.
In step S32, the current spatial coordinates of the pupil are determined from the movement angle of the eyeball.
In this embodiment, the rotation angle and direction of the pupil can be determined from the movement angle of the user's eyeball relative to the reference. Since the length and direction of the line connecting the pupil and the eyeball's centre are known, the spatial coordinates of the user's pupil can be obtained in this space.
In step S33, the current spatial coordinates of the pupil are mapped onto a two-dimensional coordinate system of the display interface, and the pupil's drop point on that coordinate system is marked as the gaze position of the eyeball.
In this embodiment, the line connecting the pupil and the centre of the eyeball is extended to the capture interface to complete the determination of the gaze position of the eyeball.
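Under a simple pinhole-style model of this geometry (an assumed model, not necessarily the patent's exact mapping), the drop point on a flat display plane follows directly from the tracked yaw and pitch angles:

```python
import math

def gaze_point_on_display(yaw_deg, pitch_deg, screen_distance):
    """Intersect the gaze ray, rotated by the tracked yaw/pitch angles, with
    a display plane screen_distance away from the eyeball centre. Returns
    the 2D drop point in the display's coordinate system."""
    x = screen_distance * math.tan(math.radians(yaw_deg))
    y = screen_distance * math.tan(math.radians(pitch_deg))
    return (x, y)
```

Looking straight ahead lands the drop point at the origin; a 45-degree yaw displaces it horizontally by exactly the screen distance.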
In one embodiment, the first region is provided with a reference point. Please refer to fig. 4, which discloses a flowchart of step S22, determining, on the target image to be displayed on the display interface, the first region corresponding to the gaze position and the second region outside the first region. Step S22 may include:
step S41, acquiring a first coordinate of a drop point of the gaze position on a two-dimensional coordinate system of the display interface.
In step S42, a second coordinate of the reference point on the two-dimensional coordinate system is acquired.
In step S43, it is detected whether the distance between the first coordinate and the second coordinate is greater than a set distance.
In step S44, if yes, the second coordinate of the reference point is corrected to the first coordinate of the gaze position.
In this embodiment, by checking the distance between the first and second coordinates, it can be determined whether the gaze position has left the first region; once it has, the first region is corrected immediately, so that the first region moves along with the gaze position. Steps S41 and S42 are not limited to a particular order and may be performed simultaneously.
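Steps S41 through S44 amount to a distance test followed by an optional snap of the reference point. A minimal sketch, with function and parameter names that are illustrative rather than from the patent:

```python
import math

def update_reference_point(gaze_xy, ref_xy, set_distance):
    """Steps S41-S44: if the gaze drop point (first coordinate) drifts
    farther than set_distance from the first region's reference point
    (second coordinate), snap the reference point to the gaze position;
    the first region then follows the gaze."""
    dist = math.hypot(gaze_xy[0] - ref_xy[0], gaze_xy[1] - ref_xy[1])
    return gaze_xy if dist > set_distance else ref_xy
```

Small eye movements inside the set distance leave the first region where it is, avoiding jitter; larger movements re-centre it.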
In one embodiment, step S23 may include: reducing the image resolution of the second region so that its image quality is reduced. In this embodiment, the difficulty and duration of image rendering depend on the quality of the target image: the higher the quality, the more time rendering takes. Reducing the image resolution of the second region before rendering therefore shortens the rendering time and reduces latency.
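One crude way to reduce the second region's resolution before rendering is nearest-neighbour subsampling. The sketch below is illustrative, not the patent's specific method:

```python
def downscale_region(pixels, factor):
    """Keep every `factor`-th pixel in each dimension, reducing the number
    of samples the renderer must touch by roughly factor**2."""
    return [row[::factor] for row in pixels[::factor]]
```

A factor of 2 leaves the renderer roughly a quarter of the original samples to process in the second region.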
In one embodiment, step S24 may include: rendering the first region and the quality-reduced second region separately, where the rendering quality of the first region is higher than that of the quality-reduced second region. Rendering time varies with the rendering requirements, so applying low-quality rendering to the quality-reduced second region further shortens the rendering time and further reduces latency.
In an embodiment, the image processing method may be applied specifically to a virtual reality or augmented reality device. In both virtual reality and augmented reality, operations such as image preprocessing and rendering are required before an image is displayed on the device, after which the processed image is displayed. When the user wears the device, a camera inside it detects the gaze position of the eyeball, and the image is divided into regions according to that gaze position, for example into a first region and a second region, and optionally further into a third region and a fourth region; the number and extent of the regions can be adjusted according to the user's needs. The image processing of each region may follow the method in the above embodiments.
In an embodiment, please refer to fig. 5, fig. 6 and fig. 7, which disclose different divisions of the first region 51 and the second region 52 in different embodiments. A first region 51 and a second region 52 are divided on the target image 53, and the second region 52 comprises a plurality of annular regions 521 arranged in sequence around the first region 51. Adjacent annular regions 521 are seamlessly joined, and the annular region 521 adjacent to the first region 51 is seamlessly joined to the first region 51. The extents of the first region 51, the second region 52, and the annular regions 521, as well as their number, can of course be adjusted as needed and are not limited here.
In this embodiment, step S23 may include: reducing the image quality of each of the plurality of annular regions 521, where, of two adjacent annular regions 521, the image quality of the region closer to the first region 51 is greater than or equal to that of the region farther away; step S24 then performs image rendering on the second region 52. The quality reduction may operate on image resolution, for example so that, of two adjacent annular regions 521, the resolution of the region closer to the first region 51 is greater than or equal to that of the region farther away; image elements such as color, hue, and contrast may also be adjusted to tune the image quality. As for the rendering in step S24, different annular regions 521 may be rendered at different qualities, for example rendering the annular region 521 near the first region 51 and the one far from it separately, with the near region's rendering quality higher than the far region's.
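The monotonic constraint just described, each ring's quality no higher than the ring inside it, can be expressed as a simple falloff schedule. A hypothetical sketch, with the geometric falloff being an assumption rather than anything specified in the patent:

```python
def ring_quality_levels(num_rings, fovea_quality=1.0, falloff=0.5):
    """Assign each annular region a quality factor that never increases
    with distance from the first region, matching the constraint that an
    inner ring's quality is greater than or equal to its outer neighbour's."""
    levels = [fovea_quality * falloff ** (i + 1) for i in range(num_rings)]
    assert all(a >= b for a, b in zip(levels, levels[1:]))
    return levels
```

Any non-increasing sequence satisfies the constraint; the geometric schedule simply makes the quality fade smoothly outward.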
In this embodiment, enough regions may also be divided that the target image 53 forms a gradual transition from the first region 51 to the second region 52, so that the picture quality fades smoothly.
In the present embodiment, referring to fig. 5, the first region 51 may be circular and the annular regions 521 ring-shaped. Referring to fig. 6, the first region 51 may be elliptical and the annular regions 521 elliptical rings. Referring to fig. 7, the first region 51 may be rectangular. The first region 51 may of course take other shapes as well, such as a triangle, hexagon, or heptagon, adjusted as needed, and the corresponding annular regions 521 may be adapted accordingly; no specific limitation is made here.
In an embodiment, please refer to fig. 8, fig. 9 and fig. 10, which disclose different divisions of the first region 51 and the second region 52 in different embodiments. Referring to figs. 8 and 9, the second region 52 surrounds the first region 51; referring to fig. 10, the second regions 52 are located on two opposite sides of the first region 51, and the second region 52 may of course also be located on only one side of the first region 51 as circumstances require. In figs. 8, 9 and 10, a first region 51 and a second region 52 are divided on the target image 53, and the second region 52 is seamlessly joined to the first region 51; the second region 52 comprises a plurality of third regions 521 seamlessly joined together, each third region 521 located on one side of the first region 51. The extents of the first region 51, the second region 52, and the third regions 521, as well as their number, can be adjusted as needed and are not limited here.
In this embodiment, step S23 may include: reducing the image quality of each of the plurality of third regions 521, where, of two adjacent third regions 521, the image quality of the region closer to the first region 51 is greater than or equal to that of the region farther away; step S24 then performs image rendering on the second region 52. The quality reduction may operate on image resolution, for example so that, of two adjacent third regions 521, the resolution of the region closer to the first region 51 is greater than or equal to that of the region farther away; image elements such as color, hue, and contrast may also be adjusted to tune the image quality. As for the rendering in step S24, different third regions 521 may be rendered at different qualities, for example rendering the third region 521 near the first region 51 and the one far from it separately, with the near region's rendering quality higher than the far region's.
In this embodiment, a sufficient number of regions may also be divided so that the target image 53 transitions gradually from the first region 51 to the second region 52, making the change in picture quality fade smoothly.
Referring to fig. 8, 9 and 10, the first area 51 and the third areas 521 may both be rectangular, or may be a mixed arrangement of other polygons; the shapes of the first area 51 and the third areas 521 are not limited here.
Referring to fig. 11, a flowchart of an image processing method of the electronic device 10 according to an embodiment is disclosed; in the image processing method, step S24 may include:
step S111, performing special effect distortion processing, instead of image rendering processing, on the second region with reduced image quality, and performing image rendering processing on the first area.
Referring to fig. 5, 6 and 7, in step S23 the image quality of each of the plurality of annular regions 521 may be reduced, after which the distortion special effect processing of step S111 is performed instead of rendering processing. Of two adjacent annular regions 521, the image quality of the annular region 521 close to the first region 51 is greater than or equal to that of the annular region 521 far from the first region 51; the image quality adjustment has been described in detail above and is not repeated here. Similarly, referring to fig. 8, 9 and 10, in step S23 the image quality of each of the plurality of third areas 521 may be reduced, after which the distortion special effect processing of step S111 is performed instead of rendering processing. Of two adjacent third regions 521, the image quality of the third region 521 close to the first region 51 is greater than or equal to that of the third region 521 far from the first region 51; again, this is not repeated here. In the distortion special effect processing, the image may be stretched, warped, squeezed and so on without further degrading the image quality, thereby simulating a 3D space effect and giving the picture a stereoscopic appearance. Of course, the distortion special effect may be replaced by other special effect processing capable of simulating a 3D space effect, and such special effect processing takes less time on the same image than image rendering processing does.
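To illustrate why such a special effect is cheap compared with rendering, a distortion can be realized as a single column-remapping lookup over a peripheral strip. This is a hedged sketch: the quadratic remapping curve and the function name are invented for the example and are not taken from the patent.

```python
import numpy as np

def squeeze_strip(strip):
    """Hypothetical distortion special effect in the spirit of step
    S111: remap the columns of a peripheral strip with a quadratic
    curve so content is squeezed toward the inner edge, hinting at
    perspective depth. A single index lookup like this touches each
    pixel once, which is why it is faster than a rendering pass."""
    h, w = strip.shape[:2]
    xs = np.arange(w) / max(w - 1, 1)              # 0..1 across the strip
    src = np.round(xs ** 2 * (w - 1)).astype(int)  # nonlinear sampling
    return strip[:, src]                           # same shape, warped content
```

The strip's two edge columns map to themselves, so the warped strip still splices seamlessly with its neighbours.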
In an embodiment, referring to fig. 11, in the image processing method, step S24 may include:
in step S112, if it is detected that the moving speed of the electronic device is greater than a set speed, neither the first area nor the second area with reduced image quality is subjected to image rendering processing.
Because the user cannot view the image on the electronic device 10 with full attention while the electronic device is moving, the image displayed on its display screen does not need to spend much time on image processing such as resolution adjustment or image rendering. The set speed can be determined according to the user's needs; for example, it can be the maximum speed at which the image can still be seen clearly during movement.
In an embodiment, referring to fig. 11, in the image processing method, step S24 may include:
in step S113, if it is detected that the angular velocity of the electronic device is greater than a set angular velocity, neither the first area nor the second area with reduced image quality is subjected to image rendering processing.
Since the user cannot view the image on the electronic device 10 with full attention while the electronic device is rotating or flipping, the image displayed on its display screen does not need to spend much time on image processing such as resolution adjustment or image rendering. The set angular velocity can be determined according to the user's needs; for example, it can be the maximum angular velocity at which the image can still be seen clearly during rotation or flipping.
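The gating in steps S112 and S113 amounts to a threshold check before any rendering work is scheduled. The following is an illustrative sketch only; the threshold values and names are made up for the example.

```python
def choose_processing(speed, angular_speed,
                      set_speed=0.5, set_angular_speed=1.0):
    """Decide the processing path per steps S112/S113: if the device
    is moving or rotating faster than the set limits, the user cannot
    focus on the screen, so rendering of both the first region and the
    quality-reduced second region is skipped; otherwise fall through
    to the foveated rendering of step S24."""
    if speed > set_speed or angular_speed > set_angular_speed:
        return "skip_rendering"        # steps S112 / S113
    return "foveated_rendering"        # step S24

print(choose_processing(0.1, 0.2))     # device nearly still
print(choose_processing(2.0, 0.0))     # fast translation
```

In practice the thresholds would be tuned so that the cutoff matches the point at which the user can no longer read the screen clearly.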
In one embodiment, referring to fig. 11, step S112 and step S113 may be executed before step S24 and step S111; alternatively, they may be performed within step S24 or step S111 as needed.
Next, a storage medium is described. Referring to fig. 12, a block diagram of a storage medium 121 according to an embodiment of the present application is disclosed. The storage medium 121 stores a computer program 122, and the computer program 122, when executed by a processor, implements the image processing method described above.
The storage medium 121 may be any medium that can store program instructions, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or it may be a server that stores program instructions; the server may send the stored program instructions to other devices for execution, or execute them itself.
In one embodiment, the storage medium 121 may also be the memory 12 as shown in FIG. 1.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical division, and an actual implementation may divide them differently; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

Claims (8)

1. An image processing method for an electronic device, comprising:
detecting the fixation position of an eyeball in a display interface;
determining a first area corresponding to the fixation position and a second area except the first area on a target image to be displayed on the display interface;
reducing the image quality of the second area;
if the movement speed of the electronic equipment is detected to be smaller than a set speed, or the angular speed of the electronic equipment is detected to be smaller than the set angular speed, respectively performing image rendering processing on the first area and the second area with the lowered image quality, wherein the image rendering quality of the first area is greater than the image rendering quality of the second area with the lowered image quality; or if the movement speed of the electronic equipment is detected to be smaller than a set speed, or the angular speed of the electronic equipment is detected to be smaller than the set angular speed, performing special effect distortion processing on the second area without performing image rendering processing after the image quality is reduced, and performing image rendering processing on the first area; or if the movement speed of the electronic equipment is detected to be greater than a set speed, or the angular speed of the electronic equipment is detected to be greater than the set angular speed, the first area and the second area with reduced image quality are not subjected to image rendering processing respectively;
and displaying the processed target image on the display interface.
2. The image processing method according to claim 1, wherein the detecting a gaze location of an eyeball in a display interface comprises:
tracking and locking the movement angle of the eyeball;
determining a current spatial coordinate of an exit pupil according to the movement angle of the eyeball;
and mapping the current spatial coordinate of the exit pupil onto a two-dimensional coordinate system of the display interface, and marking the falling point of the exit pupil on the two-dimensional coordinate system as the gaze position of the eyeball.
3. The image processing method according to claim 1, wherein the reducing the image quality of the second region comprises:
reducing the image resolution of the second region, so that the image quality of the second region is reduced.
4. The image processing method according to claim 1, wherein the second region includes a plurality of ring-shaped regions that sequentially surround the first region; the two adjacent annular areas are seamlessly spliced, and the annular area adjacent to the first area is seamlessly spliced with the first area;
the reducing the image quality of the second area comprises:
and respectively reducing the image quality of the plurality of annular areas, wherein the image quality of the annular area close to the first area in two adjacent annular areas is greater than or equal to the image quality of the annular area far away from the first area.
5. The image processing method according to claim 1, wherein the second region surrounds the first region; or the second area is positioned on one side or two opposite sides of the first area; the second area is seamlessly spliced with the first area; the second region comprises a plurality of third regions seamlessly spliced together; the third region is positioned on one side of the first region;
the reducing the image quality of the second area comprises:
and respectively reducing the image quality of the plurality of third areas, wherein the image quality of the third area close to the first area in two adjacent third areas is greater than or equal to the image quality of the third area far away from the first area.
6. The image processing method according to claim 1, wherein the first region sets a reference point;
the determining a first region corresponding to the gaze location and a second region other than the first region includes:
acquiring a first coordinate of a drop point of the gaze position on a two-dimensional coordinate system of the display interface; and
acquiring a second coordinate of the reference point on the two-dimensional coordinate system;
detecting whether the distance between the first coordinate and the second coordinate is greater than a set distance;
and if so, correcting the second coordinate of the reference point to be the first coordinate of the fixation position.
7. An electronic device, characterized in that the electronic device comprises a processor, and a display screen, a camera and a memory which are connected with the processor;
the camera is used for collecting images, and the display screen is used for displaying target images;
wherein the memory is configured to store program data and the processor is configured to execute the program data to implement the image processing method of any one of claims 1 to 6.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, is adapted to carry out the image processing method of any one of claims 1 to 6.
CN201911151310.2A 2019-11-21 2019-11-21 Image processing method, electronic device, and storage medium Active CN110855972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911151310.2A CN110855972B (en) 2019-11-21 2019-11-21 Image processing method, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN110855972A CN110855972A (en) 2020-02-28
CN110855972B true CN110855972B (en) 2021-07-27

Family

ID=69603672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911151310.2A Active CN110855972B (en) 2019-11-21 2019-11-21 Image processing method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN110855972B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111885310A (en) * 2020-08-31 2020-11-03 深圳市圆周率软件科技有限责任公司 Panoramic data processing method, processing equipment and playing equipment
CN113177434A (en) 2021-03-30 2021-07-27 青岛小鸟看看科技有限公司 Virtual reality system fixation rendering method and system based on monocular tracking
CN113114985B (en) * 2021-03-31 2022-07-26 联想(北京)有限公司 Information processing method and information processing device
CN113256661A (en) * 2021-06-23 2021-08-13 北京蜂巢世纪科技有限公司 Image processing method, apparatus, device, medium, and program product
CN113485546A (en) * 2021-06-29 2021-10-08 歌尔股份有限公司 Control method of wearable device, wearable device and readable storage medium
CN116847106A (en) * 2022-03-25 2023-10-03 北京字跳网络技术有限公司 Image compression transmission method, device, electronic equipment and storage medium
CN114581583A (en) * 2022-04-19 2022-06-03 京东方科技集团股份有限公司 Image processing method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658170A (en) * 2016-12-20 2017-05-10 福州瑞芯微电子股份有限公司 Method and device for reducing virtual reality latency
CN107203270A (en) * 2017-06-06 2017-09-26 歌尔科技有限公司 VR image processing methods and device
CN108696732A (en) * 2017-02-17 2018-10-23 北京三星通信技术研究有限公司 Wear the method for adjusting resolution and equipment of display equipment
CN110121885A (en) * 2016-12-29 2019-08-13 索尼互动娱乐股份有限公司 For having recessed video link using the wireless HMD video flowing transmission of VR, the low latency of watching tracking attentively
CN110140353A (en) * 2016-04-01 2019-08-16 线性代数技术有限公司 System and method for being suitable for the head-mounted display of visual perception



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant