CN117635460A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN117635460A
Authority
CN
China
Prior art keywords
image
initial
processing method
tracking area
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311697069.XA
Other languages
Chinese (zh)
Inventor
陶成功
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202311697069.XA priority Critical patent/CN117635460A/en
Publication of CN117635460A publication Critical patent/CN117635460A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an image processing method and apparatus. The image processing method includes: acquiring an initial image; determining a first image and a second image in the initial image, where the first image is the area image corresponding to a tracking area of the user's eye in the initial image, and the second image includes the areas outside the first image in the initial image; downsampling the second image to obtain a third image, where the size of the third image is the same as that of the second image and the pixel density of the third image is lower than that of the second image; and rendering and synthesizing the first image and the third image to obtain a target image to be displayed. Full pixel density is preserved only in the tracking area while pixels outside the tracking area are reduced, so the smoothness of the image displayed by the VR device can be improved.

Description

Image processing method and device
Technical Field
Embodiments of the disclosure relate to the technical field of virtual reality, and in particular to an image processing method and apparatus.
Background
A VR (Virtual Reality) device is a computer simulation system that uses a computer to create a simulated environment, immersing the user in a virtual world that can be created and experienced.
At present, when a VR device displays an image, a high pixel count in the displayed image can cause display stuttering, which affects the user experience.
Disclosure of Invention
Embodiments of the disclosure provide an image processing method and apparatus for improving the smoothness of image display on a VR device.
In a first aspect, an embodiment of the present disclosure provides an image processing method, applied to a virtual reality device, including: acquiring an initial image; determining a first image and a second image in the initial image, where the first image is the area image corresponding to a tracking area of the user's eye in the initial image, and the second image includes the areas outside the first image in the initial image; downsampling the second image to obtain a third image, where the size of the third image is the same as that of the second image and the pixel density of the third image is lower than that of the second image; and rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
In a second aspect, embodiments of the present disclosure provide a virtual reality device, comprising:
an acquisition unit configured to acquire an initial image;
a determining unit, configured to determine a first image and a second image in an initial image, where the first image is an area image corresponding to a tracking area of a user's eye in the initial image, and the second image includes: areas outside the first image in the initial image;
the sampling unit is used for downsampling the second image to obtain a third image, the size of the third image is the same as that of the second image, and the pixel density of the third image is lower than that of the second image;
and the rendering unit is used for rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method as provided above in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the image processing method provided in the first aspect above.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising computer-executable instructions which, when executed by a processor, implement the image processing method as provided above in the first aspect.
The image processing method and apparatus provided in this embodiment include: acquiring an initial image; determining a first image and a second image in the initial image, where the first image is the area image corresponding to a tracking area of the user's eye in the initial image, and the second image includes the areas outside the first image in the initial image; downsampling the second image to obtain a third image, where the size of the third image is the same as that of the second image and the pixel density of the third image is lower than that of the second image; and rendering and synthesizing the first image and the third image to obtain a target image to be displayed. Full pixel density is preserved only in the tracking area while pixels outside the tracking area are reduced, so the smoothness of the image displayed by the VR device can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present disclosure, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of displaying an initial image in the related art;
FIG. 2 is a schematic diagram of displaying a target image according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating steps of an image processing method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of determining a first image and a third image provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating steps of another image processing method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of a virtual reality device according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
In the related art, an image is displayed in the manner of fig. 1. Specifically, an image sensor of the virtual reality device acquires an initial image, as shown in fig. 1, and the initial image is then displayed directly on the display screen of the virtual reality device. If the initial image has a high pixel count (e.g., 8 megapixels), the bandwidth and power consumption limits of the virtual reality device reduce the smoothness of the image displayed on the display screen, thereby affecting the user experience.
To address this problem, the image processing method provided by the present disclosure first processes the initial image in the manner of fig. 2 to obtain a target image, in which the pixel density of the tracking area is the same as that of the initial image and the pixel density outside the tracking area is lower than that of the tracking area, and then displays the target image on the display screen. In this method, high pixel density is maintained in the area the user's eyes can see clearly, low pixel density is used in the area the user's eyes cannot, and the smoothness of the image displayed on the display screen can be improved.
Fig. 2 is only one example of a scene; embodiments of the present disclosure are applicable to image processing in any scene.
Referring to fig. 3, a flowchart of an image processing method according to an embodiment of the disclosure is shown. As shown in fig. 3, the image processing method specifically includes the following steps:
s301, acquiring an initial image.
In the embodiment of the disclosure, the initial image is acquired by an image sensor of the virtual reality device, has, for example, 8 million pixels, and is the same size as the display screen of the virtual reality device.
S302, determining a first image and a second image in the initial image.
The first image is an area image corresponding to a tracking area of a user eye in the initial image, and the second image comprises: areas outside the first image in the initial image.
Referring to fig. 4, the first image is extracted directly from the initial image, so the pixel density of the first image is the same as that of the initial image.
In an alternative embodiment, the second image is the initial image (e.g., second image b in fig. 4) or an area outside the first image in the initial image (e.g., second image a in fig. 4).
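For illustration only, and not as part of the disclosure, the split can be sketched in NumPy. The (x, y, w, h) rectangle convention, the function name, and the zero-filling of second image a are assumptions of this sketch:

```python
import numpy as np

def split_images(initial: np.ndarray, track_rect: tuple):
    """Split the initial image into the first image (the tracking-area crop)
    and the two second-image variants of fig. 4."""
    x, y, w, h = track_rect
    # First image: a direct crop, so its pixel density equals the initial image's.
    first = initial[y:y + h, x:x + w].copy()
    # Second image b: simply the whole initial image.
    second_b = initial
    # Second image a: the initial image with the tracking area blanked out,
    # leaving only the region outside the first image.
    second_a = initial.copy()
    second_a[y:y + h, x:x + w] = 0
    return first, second_a, second_b
```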
S303, downsampling the second image to obtain a third image.
The size of the third image is the same as that of the second image, and the pixel density of the third image is lower than that of the second image.
Referring to fig. 4, the second image a is downsampled to obtain a third image a, and the second image b is downsampled to obtain a third image b.
In an embodiment of the disclosure, the pixel density of the third image may be preset. For example, if the second image has 8 million pixels and a size of 30 cm × 20 cm, its pixel density is 8 million / (30 cm × 20 cm); if the preset pixel density of the third image is 2 million / (30 cm × 20 cm), the second image is downsampled to a third image having 2 million pixels and the same 30 cm × 20 cm size.
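One plausible realization of this downsampling, sketched with OpenCV under the assumption that "same size, lower pixel density" is achieved by resampling down and scaling back up; the function name and interpolation modes are illustrative, since the disclosure does not fix the resampling method:

```python
import cv2
import numpy as np

def downsample_keep_size(second: np.ndarray, density_ratio: float) -> np.ndarray:
    """Resample to a lower pixel density, then scale back up so the third
    image keeps the second image's size. density_ratio is the target/original
    pixel-density ratio, e.g. 2/8 = 0.25 for the example above."""
    h, w = second.shape[:2]
    scale = density_ratio ** 0.5  # density is per area, so take the square root per axis
    low = cv2.resize(second, (int(w * scale), int(h * scale)),
                     interpolation=cv2.INTER_AREA)
    # Upscale back to the original size; detail remains at the lower density.
    return cv2.resize(low, (w, h), interpolation=cv2.INTER_LINEAR)
```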
And S304, rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
Rendering and synthesizing the first image and the third image to obtain the target image to be displayed includes: performing noise reduction on the first image and the third image respectively; and rendering and synthesizing the noise-reduced first image and the noise-reduced third image to obtain the target image.
In the embodiment of the disclosure, noise reduction is performed before the first image and the third image are rendered and synthesized. The present disclosure does not limit the specific noise reduction method.
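A minimal sketch of this step, assuming a simple Gaussian filter as a stand-in denoiser (the disclosure explicitly leaves the noise reduction method open) and the (x, y, w, h) tracking rectangle from the earlier sketch:

```python
import cv2
import numpy as np

def render_composite(first: np.ndarray, third: np.ndarray,
                     track_rect: tuple) -> np.ndarray:
    """Denoise both images, then paste the full-density first image over
    the third image at the tracking-area position."""
    first_dn = cv2.GaussianBlur(first, (3, 3), 0)   # stand-in noise reduction
    third_dn = cv2.GaussianBlur(third, (3, 3), 0)
    x, y, w, h = track_rect
    target = third_dn.copy()
    target[y:y + h, x:x + w] = first_dn  # tracking area keeps full pixel density
    return target
```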
In this method, the pixel density of the tracking area is kept consistent with that of the initial image, which ensures the clarity of the target image for the user; the pixel density outside the tracking area is reduced, which lowers the memory bandwidth and power consumption of image display and shortens display latency, further improving the user's image-browsing experience.
Referring to fig. 5, a flowchart of another image processing method according to an embodiment of the disclosure is shown.
As shown in fig. 5, the image processing method specifically includes the steps of:
s501, acquiring an initial image.
The implementation process of this step refers to S301, and will not be described here again.
S502, determining a tracking area of the eyes of the user on the display screen.
Wherein the virtual reality device includes a plurality of tracking cameras, and determining the tracking area of the user's eyes on the display screen includes: acquiring eye images collected by the plurality of tracking cameras; determining the tracking position of the eyes on the display screen according to the eye images; and determining the tracking area centered on the tracking position, where the area ratio of the tracking area to the display screen is a preset ratio.
In this embodiment of the disclosure, the tracking cameras can be set at different positions on the virtual reality device. The plurality of tracking cameras collect eye images of the user, from which the coordinate positions of the eyeballs in the different eye images can be determined. Since the coordinates of the tracking cameras in the coordinate system of the virtual reality device are known, the tracking position of the eyes on the display screen can then be determined by coordinate conversion. In the present disclosure, the size of the tracking area is preset, such as a quarter or an eighth of the display screen.
In the present disclosure, the size of the initial image after being rendered is the same as the size of the display area of the display screen. For example, the size of the initial image after being rendered is 30 cm×20 cm, and the size of the display area is also 30 cm×20 cm.
Further, the coordinates of the tracking area on the display screen can be derived from the tracking position, which serves as the center of the area, together with the preset tracking-area size.
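For illustration, such a tracking rectangle might be computed as below; the (x, y, w, h) convention, the clamping to screen bounds, and the parameter names are assumptions of this sketch rather than part of the disclosure:

```python
def tracking_area(gaze_x: float, gaze_y: float,
                  screen_w: int, screen_h: int,
                  area_ratio: float = 0.25) -> tuple:
    """Return an (x, y, w, h) tracking area centered on the gaze position,
    covering a preset fraction of the display (e.g. a quarter), clamped to
    the screen bounds."""
    scale = area_ratio ** 0.5                      # linear scale per axis
    w, h = int(screen_w * scale), int(screen_h * scale)
    x = min(max(int(gaze_x - w / 2), 0), screen_w - w)
    y = min(max(int(gaze_y - h / 2), 0), screen_h - h)
    return x, y, w, h
```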
S503, extracting a first image in a tracking area of the initial image.
In the present disclosure, the first image may be extracted at the corresponding coordinate position in the initial image according to the coordinates of the tracking area on the display screen.
Wherein extracting the first image in the tracking area of the initial image includes: extracting, in the kernel, the first image in the tracking area of the initial image.
s504, determining a second image.
In an alternative embodiment, the initial image is determined to be the second image. In another alternative embodiment, the portion of the tracking area is removed from the initial image to obtain the second image.
S505, rendering and synthesizing the first image and the third image to obtain an intermediate image.
Wherein rendering the composite first image and the third image comprises: in the hardware abstraction layer, the first image and the third image are rendered and synthesized.
The specific implementation process of this step refers to S304, and will not be described here again.
S506, in the intermediate image, performing Gaussian blur processing on pixels at the joint of the first image and the third image to obtain a target image.
Specifically, applying Gaussian blur to the pixels at the junction of the first image and the third image lowers the pixel density on the first-image side of the junction and effectively raises it on the third-image side, which reduces the abrupt transition at the junction in the target image and improves the user's experience when browsing the target image.
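One way to realize this seam smoothing, sketched under the assumption that the seam is a band of fixed half-width around the tracking rectangle; the band width, masking scheme, and function name are illustrative, not specified by the disclosure:

```python
import cv2
import numpy as np

def blur_seam(intermediate: np.ndarray, track_rect: tuple,
              band: int = 8) -> np.ndarray:
    """Gaussian-blur a narrow band of pixels straddling the border between
    the first and third images, softening the pixel-density jump."""
    x, y, w, h = track_rect
    blurred = cv2.GaussianBlur(intermediate, (2 * band + 1, 2 * band + 1), 0)
    out = intermediate.copy()
    # Build a mask that is True only on the band around the rectangle's border.
    mask = np.zeros(intermediate.shape[:2], dtype=bool)
    mask[max(y - band, 0):y + h + band, max(x - band, 0):x + w + band] = True
    mask[y + band:y + h - band, x + band:x + w - band] = False
    out[mask] = blurred[mask]
    return out
```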
Further, in the embodiment of the present disclosure, referring to fig. 6, the image sensor transmits the acquired initial image to an IFE (image front end); the image front end processes the initial image according to the acquired tracking area to obtain the first image and the third image, and transmits them over a bus to an IPE (image processing engine) for noise reduction, yielding the noise-reduced first image and the noise-reduced third image. The noise-reduced first and third images are then transmitted over a bus to the hardware abstraction layer, where they are rendered and synthesized into the target image to be displayed.
Further, a CAMX architecture may be deployed in the hardware abstraction layer to enable rendering composition of the target image.
In this method, the pixel density of the tracking area of the target image is consistent with that of the initial image, which ensures the clarity of the target image for the user; the pixel density outside the tracking area is reduced, which lowers the memory bandwidth and power consumption of image display and shortens display latency, further improving the user's image-browsing experience.
Corresponding to the image processing method of the above embodiment, fig. 7 is a block diagram of a virtual reality device 70 provided by an embodiment of the present disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. As shown in fig. 7, the virtual reality device 70 specifically includes: an acquisition unit 701, a determination unit 702, a sampling unit 703, and a rendering unit 704, wherein:
an acquisition unit 701 for acquiring an initial image;
a determining unit 702, configured to determine a first image and a second image in the initial image, where the first image is an area image corresponding to a tracking area of a user's eye in the initial image, and the second image includes: areas outside the first image in the initial image;
a sampling unit 703, configured to downsample the second image to obtain a third image, where the size of the third image is the same as the size of the second image, and the pixel density of the third image is lower than the pixel density of the second image;
and a rendering unit 704, configured to render and synthesize the first image and the third image, so as to obtain a target image to be displayed.
In some embodiments, when determining the first image in the initial image, the determining unit 702 is specifically configured to: determine a tracking area of the user's eyes on the display screen; and extract the first image at the tracking area of the initial image.
In some embodiments, the determining unit 702 is specifically configured to, in determining a tracking area of the user's eyes on the display screen: acquiring eye images acquired by a plurality of tracking cameras; determining a tracking position of eyes on a display screen according to the eye image; and determining a tracking area by taking the tracking position as a center, wherein the area ratio of the tracking area to the display screen is a preset ratio.
In some embodiments, the rendering unit 704 is specifically configured to synthesize the first image and the third image to obtain an intermediate image; and in the intermediate image, performing Gaussian blur processing on pixels at the joint of the first image and the third image to obtain a target image.
In some embodiments, the second image is the initial image or an area of the initial image other than the first image.
In some embodiments, the rendering unit 704 is specifically configured to: perform noise reduction on the first image and the third image respectively; and render and synthesize the noise-reduced first image and the noise-reduced third image to obtain the target image.
In some embodiments, the determining unit 702 is specifically configured to, when extracting the first image from the tracking area of the initial image: extracting, in the kernel, a first image in a tracking area of the initial image;
the rendering unit 704 is specifically configured to: in the hardware abstraction layer, the first image and the third image are rendered and synthesized.
The virtual reality device provided in this embodiment may be used to execute the technical scheme of the embodiment of the image processing method, and its implementation principle and technical effect are similar, and this embodiment is not repeated here.
Referring to fig. 8, a schematic diagram of a configuration of an electronic device 80 suitable for use in implementing embodiments of the present disclosure is shown, the electronic device 80 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 80 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 81 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 82 or a program loaded from a storage means 88 into a random access Memory (Random Access Memory, RAM) 83. In the RAM 83, various programs and data required for the operation of the electronic device 80 are also stored. The processing device 81, the ROM 82, and the RAM 83 are connected to each other via a bus 84. An input/output (I/O) interface 85 is also connected to bus 84.
In general, the following devices may be connected to the I/O interface 85: input devices 86 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 87 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 88 including, for example, magnetic tape, hard disk, etc.; and communication means 89. The communication means 89 may allow the electronic device 80 to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 shows an electronic device 80 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 89, or from the storage means 88, or from the ROM 82. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 81.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided an image processing method applied to a virtual reality device, the image processing method including: acquiring an initial image; determining a first image and a second image in the initial image, wherein the first image is an area image corresponding to a tracking area of a user eye in the initial image, and the second image comprises: areas outside the first image in the initial image; downsampling the second image to obtain a third image, wherein the size of the third image is the same as that of the second image, and the pixel density of the third image is lower than that of the second image; and rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
In accordance with one or more embodiments of the present disclosure, determining a first image in an initial image includes: determining a tracking area of the user's eyes on the display screen; and extracting the first image at the tracking area of the initial image.
According to one or more embodiments of the present disclosure, a virtual reality device includes a plurality of tracking cameras, determining a tracking area of a user's eyes on a display screen, comprising:
acquiring eye images acquired by a plurality of tracking cameras;
determining a tracking position of eyes on a display screen according to the eye image;
and determining a tracking area by taking the tracking position as a center, wherein the area ratio of the tracking area to the display screen is a preset ratio.
According to one or more embodiments of the present disclosure, rendering and synthesizing the first image and the third image to obtain a target image to be displayed includes:
synthesizing the first image and the third image to obtain an intermediate image;
and in the intermediate image, performing Gaussian blur processing on pixels at the joint of the first image and the third image to obtain a target image.
According to one or more embodiments of the present disclosure, the second image is the initial image or an area outside the first image in the initial image.
According to one or more embodiments of the present disclosure, rendering and synthesizing the first image and the third image to obtain a target image to be displayed includes:
performing noise reduction on the first image and the third image respectively;
and rendering and synthesizing the noise-reduced first image and the noise-reduced third image to obtain the target image.
According to one or more embodiments of the present disclosure, extracting a first image at a tracking area of an initial image includes:
extracting, in the kernel, a first image in a tracking area of the initial image;
rendering and synthesizing the first image and the third image includes:
in the hardware abstraction layer, the first image and the third image are rendered and synthesized.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a virtual reality device comprising:
an acquisition unit configured to acquire an initial image;
a determining unit, configured to determine a first image and a second image in an initial image, where the first image is an area image corresponding to a tracking area of a user's eye in the initial image, and the second image includes: areas outside the first image in the initial image;
the sampling unit is used for downsampling the second image to obtain a third image, the size of the third image is the same as that of the second image, and the pixel density of the third image is lower than that of the second image;
and the rendering unit is used for rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method as provided above in the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the image processing method as provided above in the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising computer-executable instructions which, when executed by a processor, implement the image processing method as provided above in the first aspect.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, and also covers other solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. An image processing method applied to a virtual reality device, the image processing method comprising:
acquiring an initial image;
determining a first image and a second image in the initial image, wherein the first image is an area image corresponding to a tracking area of a user's eye in the initial image, and the second image comprises: areas outside the first image in the initial image;
downsampling the second image to obtain a third image, wherein the size of the third image is the same as that of the second image, and the pixel density of the third image is lower than that of the second image;
and rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
2. The image processing method according to claim 1, wherein the determining a first image in the initial image comprises:
determining a tracking area of the user's eyes on the display screen;
the first image is extracted at the tracking area of the initial image.
3. The image processing method according to claim 2, wherein the virtual reality device includes a plurality of tracking cameras, and the determining a tracking area of the user's eyes on the display screen comprises:
acquiring eye images acquired by the plurality of tracking cameras;
determining a tracking position of the eye on the display screen according to the eye image;
and determining the tracking area by taking the tracking position as the center, wherein the area ratio of the tracking area to the display screen is a preset ratio.
4. The image processing method according to any one of claims 1 to 3, wherein the rendering and synthesizing the first image and the third image to obtain a target image to be displayed comprises:
synthesizing the first image and the third image to obtain an intermediate image;
and in the intermediate image, performing Gaussian blur processing on pixels at the joint of the first image and the third image to obtain a target image.
5. The image processing method according to any one of claims 1 to 3, wherein the second image is the initial image or an area of the initial image other than the first image.
6. The image processing method according to any one of claims 1 to 3, wherein the rendering and synthesizing the first image and the third image to obtain a target image to be displayed comprises:
performing noise reduction on the first image and the third image respectively;
and rendering and synthesizing the noise-reduced first image and the noise-reduced third image to obtain the target image.
7. The image processing method according to claim 2 or 3, wherein the extracting the first image at the tracking area of the initial image comprises:
extracting, in a kernel, the first image at the tracking area of the initial image;
and the rendering and synthesizing the first image and the third image comprises:
in a hardware abstraction layer, rendering and synthesizing the first image and the third image.
8. A virtual reality device, comprising:
an acquisition unit configured to acquire an initial image;
a determining unit, configured to determine a first image and a second image in the initial image, where the first image is an area image corresponding to a tracking area of a user's eye in the initial image, and the second image includes: areas outside the first image in the initial image;
the sampling unit is used for downsampling the second image to obtain a third image, the size of the third image is the same as that of the second image, and the pixel density of the third image is lower than that of the second image;
and the rendering unit is used for rendering and synthesizing the first image and the third image to obtain a target image to be displayed.
9. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the image processing method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the image processing method of any one of claims 1 to 7.
CN202311697069.XA 2023-12-11 2023-12-11 Image processing method and device Pending CN117635460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311697069.XA CN117635460A (en) 2023-12-11 2023-12-11 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311697069.XA CN117635460A (en) 2023-12-11 2023-12-11 Image processing method and device

Publications (1)

Publication Number Publication Date
CN117635460A true CN117635460A (en) 2024-03-01

Family

ID=90019857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311697069.XA Pending CN117635460A (en) 2023-12-11 2023-12-11 Image processing method and device

Country Status (1)

Country Link
CN (1) CN117635460A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination