CN114549718A - Rendering method and device of virtual information, augmented reality device and storage medium


Info

Publication number: CN114549718A
Authority: CN (China)
Prior art keywords: display, parameter, real scene, augmented reality, rendered
Legal status: Pending
Application number: CN202210089796.7A
Other languages: Chinese (zh)
Inventors: 胡永涛 (Hu Yongtao), 戴景文 (Dai Jingwen), 贺杰 (He Jie)
Current assignee: Guangdong Virtual Reality Technology Co Ltd
Original assignee: Guangdong Virtual Reality Technology Co Ltd
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN202210089796.7A
Publication of CN114549718A

Classifications

    • G06T 15/00 3D [Three Dimensional] image rendering
        • G06T 15/005 General purpose rendering architectures
        • G06T 15/50 Lighting effects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
        • G06T 19/006 Mixed reality
        • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application discloses a rendering method and device of virtual information, an augmented reality device and a storage medium. The method is applied to an augmented reality device and comprises the following steps: determining a region to be rendered in a scene image, where the scene image is an image of a real scene captured by the augmented reality device and the area of the region to be rendered is smaller than that of the scene image; acquiring environmental parameters of the real scene; and adjusting display parameters of the virtual information in the region to be rendered based on the environmental parameters of the real scene. Because the method renders only a local region of the scene image rather than the whole scene image, it saves processor computing resources, shortens rendering time, ensures that the virtual information is displayed on the augmented reality device in time, and improves display efficiency.

Description

Rendering method and device of virtual information, augmented reality device and storage medium
Technical Field
The present application relates to the field of augmented reality technologies, and in particular, to a method and an apparatus for rendering virtual information, an augmented reality apparatus, and a storage medium.
Background
In recent years, with the progress of science and technology, Augmented Reality (AR) technology has gradually become a research hotspot in China and abroad. When a user views a real scene of the real world through an augmented reality device (for example, augmented reality glasses), virtual information generated by the augmented reality device is superimposed on the real scene, so that the user can obtain richer information through the augmented reality device.
In the prior art, when the augmented reality glasses acquire a scene image in a real scene through the image acquisition device, the whole scene image is often rendered, so that virtual information is displayed in a superposition manner in the whole scene image. However, the above-mentioned method of rendering the entire scene image consumes a lot of computing resources.
Disclosure of Invention
The embodiment of the application provides a rendering method and device of virtual information, an augmented reality device and a storage medium.
In a first aspect, some embodiments of the present application provide a method for rendering virtual information, where the method is applied to an augmented reality device. The method comprises the following steps: determining a region to be rendered in a scene image, where the scene image is an image of a real scene captured by the augmented reality device, and the area of the region to be rendered is smaller than that of the scene image; acquiring an environmental parameter of the real scene, where the environmental parameter of the real scene comprises at least one of the following: a brightness parameter, a color parameter, and a texture parameter of the real scene; and adjusting display parameters of the virtual information in the region to be rendered based on the environmental parameter of the real scene.
In a second aspect, some embodiments of the present application further provide an apparatus for rendering virtual information, where the apparatus is applied to an augmented reality apparatus. The apparatus includes: a to-be-rendered area determining module, an environment parameter acquiring module, and a display parameter adjusting module. The to-be-rendered area determining module is used for determining an area to be rendered in a scene image, where the scene image is an image of a real scene captured by the augmented reality apparatus, and the area of the area to be rendered is smaller than that of the scene image. The environment parameter acquiring module is used for acquiring an environmental parameter of the real scene, where the environmental parameter comprises at least one of the following: a brightness parameter, a color parameter, and a texture parameter of the real scene. The display parameter adjusting module is used for adjusting display parameters of the virtual information in the area to be rendered based on the environmental parameter of the real scene.
In a third aspect, some embodiments of the present application further provide an augmented reality apparatus. This augmented reality device includes: one or more processors, memory, and one or more applications. Wherein one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the above-described method of rendering the virtual information.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where computer program instructions are stored in the computer-readable storage medium. The computer program instructions may be called by the processor to execute the above-mentioned rendering method of the virtual information.
In a fifth aspect, an embodiment of the present application further provides a computer program product, where the computer program product, when executed, implements the above-mentioned method for rendering virtual information.
The application provides a rendering method and device of virtual information, an augmented reality device and a storage medium. In the method, a region to be rendered whose area is smaller than that of the scene image is determined in the scene image; an environmental parameter of the real scene is then obtained, and the display parameters of the virtual information in the region to be rendered are adjusted accordingly. On one hand, the method does not render the whole scene image but only a local region of it, which saves processor computing resources, reduces rendering time, ensures that the virtual information is displayed on the augmented reality device in time, and improves display efficiency. On the other hand, the display parameters of the virtual information are adjusted based on the environmental parameters of the real scene, so that when the virtual information is displayed on the augmented reality device, its display effect is closer to the real scene, improving the viewing experience of the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 shows an application environment diagram of a rendering method of virtual information according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a rendering method of virtual information according to a first embodiment of the present application.
Fig. 3 is a flowchart illustrating a rendering method of virtual information according to a second embodiment of the present application.
Fig. 4 shows an application environment diagram of a rendering method of virtual information according to an embodiment of the present application.
Fig. 5 is a flowchart illustrating a rendering method of virtual information according to a third embodiment of the present application.
Fig. 6 is a flowchart illustrating a rendering method of virtual information according to a fourth embodiment of the present application.
Fig. 7 shows a block diagram of modules of a rendering apparatus for virtual information according to an embodiment of the present application.
Fig. 8 shows a block diagram of an augmented reality device provided in an embodiment of the present application.
Fig. 9 illustrates a block diagram of modules of a computer-readable storage medium provided by an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a rendering method and device of virtual information, an augmented reality device and a storage medium. In the method, a region to be rendered with an area smaller than that of a scene image is determined in the scene image, and then, an environment parameter of a real scene is obtained to adjust a display parameter of virtual information in the region to be rendered. On one hand, the method does not need to render the whole scene image, but renders the local area in the scene image, so that the computing resource of the processor can be saved, the rendering time can be reduced, the virtual information can be ensured to be displayed on the augmented reality device in time, and the display efficiency is improved. On the other hand, the display parameters of the virtual information are adjusted based on the environmental parameters of the real scene, so that when the virtual information is displayed on the augmented reality device, the display effect of the virtual information can be closer to the real scene, and the watching experience of a user is improved.
For convenience of describing the scheme of the present application in detail, an application environment of the rendering method of virtual information provided by the example of the present application is described below with reference to the accompanying drawings. Referring to fig. 1, fig. 1 is a schematic application environment diagram of a rendering method of virtual information according to an example of the present application. In fig. 1, the user wears an augmented reality device 100 on his head. In some embodiments, the augmented reality device 100 may be a head-mounted display device, such as augmented reality glasses, an augmented reality helmet, or the like. In other embodiments, the augmented reality device may be a handheld display device, such as a smartphone, tablet, or the like.
The augmented reality device 100 provided in the present application has an image capture function and a rendering function. Optionally, an image acquisition device 110 is disposed on the augmented reality device 100, and the image acquisition device 110 is configured to acquire a real scene and acquire a scene image corresponding to the real scene. Illustratively, the image capture device 110 may be a camera. Optionally, the augmented reality device 100 further obtains virtual information required to be displayed in the scene image by processing the scene image. The rendering function may be implemented by the GPU in the augmented reality device. If the scene image has a vehicle, the virtual information may be model information, price information, and the like of the vehicle; if a person exists in the scene image, the corresponding virtual information may be introduction information of the person, and the like.
Referring to fig. 1 again, the vehicle 200 is an object in a real scene, the image 300 is an image of the vehicle 200 in the real scene seen by the user through the augmented reality device 100, and the image 300 further includes virtual information 400 generated by the augmented reality device 100 for the vehicle 200, where the virtual information 400 includes model information and price information of the vehicle 200.
Referring to fig. 2, fig. 2 schematically illustrates a method for rendering virtual information according to a first embodiment of the present application. The method is applied to the augmented reality device in fig. 1. Specifically, the method may include the following steps S210 to S230.
Step S210: Determining a region to be rendered in the scene image.
The scene image represents an image acquired by the augmented reality device by acquiring a real scene. As an embodiment, an image capturing device is disposed on the augmented reality device, and the image capturing device captures an image of a real scene to obtain a scene image when receiving an image capturing instruction sent by a processor in the augmented reality device. In some embodiments, a wearing sensor is disposed on the augmented reality device, and the processor sends an image acquisition instruction to the image acquisition device when the electrical signal received by the processor and sent by the wearing sensor indicates that the augmented reality device is in a wearing state. The wearing sensor may be a capacitive sensor or an optical sensor, and the embodiment does not specifically limit the specific type of the wearing sensor. In other embodiments, the processor sends the image acquisition instruction to the image acquisition device upon receiving a power-on signal of the augmented reality device.
The region to be rendered is the region that the augmented reality device is to render. The area of the region to be rendered is smaller than the area of the scene image, that is, the region to be rendered is a sub-image of the scene image. Because the area that the augmented reality device actually needs to render is reduced, the computing resources of the processor can be saved and the time required for rendering reduced, so that the virtual information can be displayed on the augmented reality device in time and the display efficiency is improved.
Step S220: Acquiring the environmental parameters of the real scene.
The environmental parameters of the real scene include at least one of: brightness parameters, color parameters, texture parameters of the real scene.
The luminance parameter of the real scene characterizes a luminance value of ambient light in the real scene. As an embodiment, the augmented reality device collects a brightness value of the ambient light through a brightness sensor. Specifically, a brightness sensor is arranged on the outer surface of a shell of the augmented reality device, and the brightness sensor can acquire the brightness value of the ambient light of the target environment where the augmented reality device is located in real time in the working state of the augmented reality device.
As another embodiment, the augmented reality device determines the brightness value of the ambient light in the real scene from the brightness value of the scene image. Optionally, the brightness value of the scene image may be determined by gray-scale values or RGB values of a plurality of pixel points in the scene image. Taking a black-and-white scene image as an example, the augmented reality device may determine the brightness value of the scene image according to the average gray-scale value of a plurality of pixel points in the image: the larger the average gray-scale value, the larger the brightness value of the scene image. Taking a color scene image as an example, the augmented reality device determines the brightness value of the scene image according to the brightness values of a plurality of pixel points in the image. For an image in the RGB color space, the brightness value of each pixel point may be determined as a weighted sum of the values of the R, G, and B color channels with preset scaling factors, where the first scaling factor corresponding to the R channel is 0.299, the second scaling factor corresponding to the G channel is 0.587, and the third scaling factor corresponding to the B channel is 0.114. Taking the values of the R, G, and B color channels of a certain pixel point as 50, 100, and 200 respectively, the brightness value of the pixel point is: 50 × 0.299 + 100 × 0.587 + 200 × 0.114 = 96.45. Having determined the brightness values of the plurality of pixel points, the augmented reality device takes their average as the brightness value of the scene image. It should be noted that the plurality of pixel points may be pixel points sampled randomly in the scene image, or pixel points in the projection image of the region to be rendered in the scene image.
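As a concrete illustration of the weighted-sum computation above, the following Python sketch reproduces the per-pixel brightness formula and the averaging step. It is a minimal sketch, not the patent's implementation; the function names and the sampling interface are assumptions.

```python
def pixel_brightness(r, g, b):
    # Weighted sum of the R, G, B channels with the scaling factors
    # named in the text (0.299, 0.587, 0.114).
    return 0.299 * r + 0.587 * g + 0.114 * b

def scene_brightness(pixels):
    # 'pixels' is an iterable of (R, G, B) tuples, e.g. randomly sampled
    # from the scene image or taken from the projection of the region
    # to be rendered.
    values = [pixel_brightness(r, g, b) for r, g, b in pixels]
    return sum(values) / len(values)

# The worked example from the text: R=50, G=100, B=200 gives 96.45.
assert abs(pixel_brightness(50, 100, 200) - 96.45) < 1e-6
```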
The color parameters of the real scene represent the color distribution of the part of the real scene corresponding to the area to be rendered. As an embodiment, the color parameter may be determined from the color values of a plurality of pixel points in the projection image of the region to be rendered in the scene image, where the color value depends on the type of color space. In this embodiment, the scene image is a color scene image. Having determined the color space of the color scene image, the augmented reality device determines the color distribution of the plurality of pixel points in that color space. The color space may be an RGB color space, a CMY/CMYK color space, an HSV/HSB color space, or the like, which is not specifically limited in the embodiments of the present application. Taking the RGB color space as an example, the augmented reality device determines, based on the image parameters of the color scene image, that the corresponding color space is RGB; it then reads the values of a plurality of pixel points of the projection image of the region to be rendered in the R, G, and B color channels, calculates the mean value of each channel, and determines from these channel means the color distribution of the region to be rendered, that is, the color distribution of the corresponding part of the real scene. Illustratively, if the mean value of the G channel is much larger than the mean values of the R and B channels, the overall color of the region to be rendered is judged to be greenish.
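A minimal sketch of the channel-mean comparison described above, assuming an RGB color scene image; the "much larger" margin is an assumed constant, not a value from the patent.

```python
def dominant_cast(pixels, margin=30):
    # 'pixels' is a list of (R, G, B) tuples from the projection image of
    # the region to be rendered; 'margin' decides when one channel mean
    # counts as "much larger" than the others (an assumption).
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    if mean_g > mean_r + margin and mean_g > mean_b + margin:
        return "greenish"
    if mean_r > mean_g + margin and mean_r > mean_b + margin:
        return "reddish"
    if mean_b > mean_r + margin and mean_b > mean_g + margin:
        return "bluish"
    return "neutral"
```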
The texture parameters of the real scene represent the texture complexity of the part of the real scene corresponding to the region to be rendered. As an embodiment, having determined the projection image of the region to be rendered in the scene image, the augmented reality device determines the texture complexity of the real scene through the gray-level histogram of the projection image. In some embodiments, once the gray-level histogram is determined, its second moment, that is, the variance of the gray-level histogram, is further calculated: the smaller the second moment, the lower the texture complexity; the larger the second moment, the higher the texture complexity.
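The second-moment computation can be sketched as follows (a hedged illustration using NumPy; the raw variance score and the 8-bit gray range are assumptions).

```python
import numpy as np

def texture_complexity(gray_region):
    # 'gray_region' is a 2-D array of 8-bit gray values from the
    # projection image of the region to be rendered.
    hist, _ = np.histogram(gray_region, bins=256, range=(0, 256))
    p = hist / hist.sum()                 # normalized gray-level histogram
    levels = np.arange(256)
    mean = (levels * p).sum()
    # Second moment of the histogram, i.e. its variance: a smaller value
    # means a flatter region and lower texture complexity.
    return (((levels - mean) ** 2) * p).sum()
```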
Step S230: Adjusting the display parameters of the virtual information in the region to be rendered based on the environment parameters of the real scene.
In the embodiment of the present application, the display parameter of the virtual information represents a parameter when the virtual information is displayed on the augmented reality device. The display parameter of the virtual information includes at least one of display brightness, display color, and display definition of the virtual information. The augmented reality device adjusts the display brightness of the virtual information based on the brightness parameter of the real scene; adjusting the display color of the virtual information based on the color parameters of the real scene; and adjusting the display definition of the virtual information based on the texture parameters of the real scene.
The embodiment of the application provides a rendering method of virtual information. In the method, the augmented reality device determines the region to be rendered with the area smaller than that of the scene image in the scene image, and then obtains the environmental parameters of the real scene to adjust the display parameters of the virtual information in the region to be rendered. On one hand, the method does not need to render the whole scene image, but renders the local area in the scene image, so that the computing resource of the processor can be saved, the rendering time can be reduced, the virtual information can be ensured to be displayed on the augmented reality device in time, and the display efficiency is improved. On the other hand, the augmented reality device adjusts the display parameters of the virtual information based on the environmental parameters of the real scene, so that when the virtual information is displayed on the augmented reality device, the display effect of the virtual information can be closer to the real scene, and the viewing experience of the user is improved.
Referring to fig. 3, fig. 3 schematically illustrates a method for rendering virtual information according to a second embodiment of the present application. The method is applied to the augmented reality device in fig. 1. In the embodiment of the present application, the region of the real scene captured in the scene image is often larger than the region of the real scene visible in the field of view seen by the user through the augmented reality device. Referring to fig. 4, a user views a vehicle 200 and a character 210 in a real scene through the augmented reality device 100. The first image 300 is the field of view seen by the user through the augmented reality device 100 and includes the vehicle 200. The second image 310 is the scene image captured by the image capturing device 110 and includes both the vehicle 200 and the character 210. As can be seen from fig. 4, the scene image contains more information than the field of view. For example, the character 210 is not included in the field of view, so the augmented reality device 100 does not need to display virtual information for the character 210. Such virtual information is unnecessary, and the augmented reality device 100 inevitably wastes processor computing resources when acquiring and rendering it.
In the technical scheme provided by the embodiment of the application, the augmented reality device determines the field of view region as the region to be rendered, which avoids having the processor process and render the whole scene image, thereby saving the computing resources of the processor and improving the display efficiency. Specifically, the method may include the following steps S310 to S350.
Step S310: Acquiring field angle information of the augmented reality device.
In an embodiment of the application, the field angle information includes a field angle of the augmented reality device. The size of the field angle determines the field range of the augmented reality device, that is, the larger the field angle, the larger the field range of the augmented reality device, and the larger the range of the field area seen by the user wearing the augmented reality device; conversely, the smaller the field angle, the smaller the field of view of the augmented reality device. Specifically, the field angle size may be any value greater than or equal to 10 degrees and less than 180 degrees. As an embodiment, the augmented reality device acquires the field angle information by reading a hardware parameter in the memory.
Step S320: Determining, based on the field angle information, a field of view region of the user wearing the augmented reality device.
The field of view region represents the region covered by the field angle information in the scene image. In the embodiment of the present application, the field of view region is the image of the real scene seen by the user wearing the augmented reality device through the device. Since the image acquisition device is mounted at a fixed position on the augmented reality device, both the field of view region and the scene image change as the position and orientation of the user change, but the relative position of the field of view region within the scene image remains fixed throughout these changes. This relative position may be determined from the position at which the image acquisition device is mounted on the augmented reality device.
The augmented reality device may determine an image size of the field of view region in the scene image based on the field angle information, and may determine the field of view region of the user in the scene image if the image size of the field of view region and the relative position of the field of view region in the scene image are both known.
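By way of illustration only, the following sketch crops the scene image down to the field of view region; the linear tangent-based mapping from field angles to pixels and the fixed center offset are simplifying assumptions, not the patent's method.

```python
import math

def field_of_view_region(scene_w, scene_h, camera_fov_deg, device_fov_deg,
                         center_offset=(0, 0)):
    # Assume pixel extent scales with the tangent of the half field angle.
    scale = (math.tan(math.radians(device_fov_deg / 2))
             / math.tan(math.radians(camera_fov_deg / 2)))
    w, h = int(scene_w * scale), int(scene_h * scale)
    # The relative position is fixed by the camera's mounting position.
    cx = scene_w // 2 + center_offset[0]
    cy = scene_h // 2 + center_offset[1]
    return (cx - w // 2, cy - h // 2, w, h)   # (x, y, width, height)
```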
Step S330: Determining the visual field area as the area to be rendered in the scene image.
In the embodiment of the application, the augmented reality device determines the field of view area as the area to be rendered in the scene image.
Step S340: Acquiring the environmental parameters of the real scene.
Step S350: Adjusting the display parameters of the virtual information in the region to be rendered based on the environment parameters of the real scene.
The detailed description of steps S340 to S350 may refer to the detailed description of steps S220 to S230, and is not repeated herein.
To sum up, according to the technical scheme provided by the embodiment of the application, the field area is determined based on the field angle information of the augmented reality device, only the virtual information in the field area is rendered in the subsequent rendering process, and the virtual information outside the field area is not rendered, so that the computing resources of the processor can be saved, the virtual information in the field area can be displayed in time, and the display efficiency is improved.
Referring to fig. 5, fig. 5 schematically illustrates a method for rendering virtual information according to a third embodiment of the present application. The method is applied to the augmented reality device in fig. 1. In the embodiment of the present application, specifically, the method may include the following steps S510 to S550.
Step S510: Acquiring a human eye image of the user wearing the augmented reality device.
As an embodiment, an image capturing device (e.g., a camera module) is disposed on a side of the augmented reality device facing a face of the user, and the image capturing device captures a plurality of eye images of the user in real time when receiving a face image capturing signal sent by the augmented reality device.
Step S520: Determining the visual fixation point area of the user in the real scene based on the human eye image.
The visual fixation point region represents the region in which the eyes of the user are focused in the real scene. As an implementation manner, the augmented reality device acquires motion conditions of left and right eyes of a user based on a plurality of human eye images, and then determines a visual fixation point area of the user in a real scene based on the motion conditions of the left and right eyes.
In some embodiments, an eyeball positioning algorithm is provided in the augmented reality device. The positions of the eyeballs in the eye images are located based on the eyeball positioning algorithm, the feature points corresponding to the left and right eyeballs are then determined respectively, and finally the motion of the left and right eyes is determined from the changes of the left and right eyeball feature points. In particular, the eye localization algorithm may be an algorithm based on MEMS eye tracking technology.
After acquiring the motion conditions of the left eye and the right eye, the augmented reality device determines the visual fixation point area of the user in the real scene based on a preset eyeball motion posture matching algorithm. The input of the eyeball movement posture matching algorithm is the movement conditions of the left eye and the right eye, and the output of the algorithm is a target area determined in a scene image, namely a visual fixation point area. In some embodiments, the motion conditions of the left eye and the right eye may be characterized based on motion vectors, respectively, and the motion vectors are used for storing coordinate information corresponding to the left eye feature point and the right eye feature point at different times. For example, if both the left eye and the right eye of the user move upward, which indicates that the user is looking upward at this time, the augmented reality device determines the upper region in the scene image as the target region through the eye movement posture matching algorithm.
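The eyeball movement posture matching algorithm itself is not spelled out in the text, so the following toy sketch only illustrates the idea of mapping a shared motion direction of both eyes to a region of the scene image; the 3x3 grid and the vector averaging are assumptions.

```python
def gaze_region(left_motion, right_motion, img_w, img_h):
    # left_motion / right_motion: (dx, dy) displacement of the left and
    # right eyeball feature points between frames (image coordinates,
    # with y growing downward, so "looking up" gives a negative dy).
    dx = (left_motion[0] + right_motion[0]) / 2
    dy = (left_motion[1] + right_motion[1]) / 2
    col = 1 + (dx > 0) - (dx < 0)        # 0 = left, 1 = center, 2 = right
    row = 1 + (dy > 0) - (dy < 0)        # 0 = upper, 1 = middle, 2 = lower
    w, h = img_w // 3, img_h // 3
    return (col * w, row * h, w, h)      # one cell of a 3x3 grid

# Both eyes moving upward selects the upper-center cell, matching the
# example in the text.
```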
Step S530: Determining the visual fixation point area as the area to be rendered in the scene image.
In the embodiment of the application, the augmented reality device determines the visual point-of-regard region as a region to be rendered in a scene image.
Step S540: Acquiring the environmental parameters of the real scene.
Step S550: Adjusting the display parameters of the virtual information in the region to be rendered based on the environment parameters of the real scene.
The detailed description of steps S540 to S550 may refer to the detailed description of steps S220 to S230, and is not repeated herein.
To sum up, in the technical scheme provided by the embodiment of the application, the human eye image is acquired and the visual fixation point area is determined from it; in the subsequent rendering process, only the virtual information in the visual fixation point area is rendered and the virtual information outside it is not, so that the computing resources of the processor can be saved, the virtual information in the visual fixation point area can be displayed in time, and the display efficiency is improved.
Referring to fig. 6, fig. 6 schematically illustrates a method for rendering virtual information according to a fourth embodiment of the present application. The method is applied to the augmented reality device in fig. 1. In the embodiment of the application, schemes for adjusting the display parameters of the virtual information based on the brightness parameters, the color parameters and the texture parameters of the real scene are respectively provided, so that the display effect of the virtual information can be closer to the real scene. Specifically, the method may include the following steps S610 to S640.
Step S610: Determining a region to be rendered in the scene image.
In the embodiment of the present application, the region to be rendered is a region of interest (ROI) in the scene image. And under the condition that the augmented reality device determines the region of interest in the scene image, determining the region of interest as a region to be rendered.
As an embodiment, the region of interest may be determined based on a region of interest identification algorithm preset in the processor. Under the condition that the scene image is acquired, the augmented reality device determines an interested area in the scene image based on an interested area identification algorithm, and determines the interested area as an area to be rendered. Specifically, the region-of-interest identification algorithm may be a Maximally Stable Extremal Regions (MSERs) detection algorithm, an R-CNN network-based region identification algorithm, or the like. For example, if the scene image acquired by the augmented reality device is a road condition image, the region of interest determined based on the region of interest recognition algorithm may be a region where a traffic sign in the road condition image is located; if the scene image acquired by the augmented reality device is a classroom image, the region of interest determined based on the region of interest identification algorithm may be a teaching screen in the classroom image.
As another embodiment, the region of interest may be determined based on a voice instruction of the user. Illustratively, an audio capture device (e.g., a microphone) is provided on the augmented reality device for capturing voice instructions of the user. And under the condition that the voice instruction is collected, the augmented reality device extracts the keyword information in the voice instruction, and further determines the region corresponding to the keyword information in the scene image as an interested region, namely a region to be rendered. Illustratively, the user's voice instruction may be "please find a western restaurant with a high surrounding rating". And under the condition that the voice instruction is collected, the augmented reality device extracts the keyword information in the voice instruction as 'western-style restaurant', and further determines the region corresponding to the western-style restaurant in the scene image as the region to be rendered.
The keyword information in the voice command can be extracted by an audio recognition algorithm and a keyword extraction algorithm inside the processor. Specifically, the audio recognition algorithm may be an algorithm based on a parametric Hidden Markov Model (HMM), an algorithm based on Dynamic Time Warping (DTW), or the like. The keyword extraction algorithm may be a keyword extraction algorithm based on statistical features, a keyword extraction algorithm based on deep learning, and the like; the specific implementations of the audio recognition algorithm and the keyword extraction algorithm are not particularly limited in the embodiments of the present application.
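Assuming speech recognition and object detection are handled elsewhere, the remaining keyword-to-region matching could look like the following sketch (all names are illustrative, not from the patent).

```python
def region_from_keyword(keyword, detections):
    # 'detections' is a list of (label, (x, y, w, h)) pairs produced by
    # any object detector run over the scene image.
    for label, box in detections:
        if keyword.lower() in label.lower():
            return box                   # this becomes the region to render
    return None                          # no match: fall back to other cues

# Example: the keyword "western restaurant" selects the matching storefront.
roi = region_from_keyword("western restaurant",
                          [("person", (0, 0, 40, 80)),
                           ("western restaurant", (120, 30, 200, 90))])
```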
The embodiment of the application thus provides an implementation in which a region of interest in the scene image is determined as the region to be rendered. Having determined the region to be rendered, the augmented reality device adjusts the display parameters of the virtual information in it based on the environmental parameters of the real scene, which avoids having the processor process and render the whole scene image and thereby saves the computing resources of the processor.
Step S620: Acquiring the environmental parameters of the real scene.
The detailed description of steps S610 to S620 may refer to the detailed description of steps S210 to S220, and is not repeated herein.
Step S630: Determining a correction value of a display parameter of the virtual information in the area to be rendered based on the environmental parameters of the real scene.
In one embodiment, the corrected value of the display parameter is the final value of the display parameter, that is, the virtual information is displayed directly according to the corrected value of the display parameter when being displayed. As another embodiment, the correction value of the display parameter is a relative value of the display parameter, and the augmented reality device determines the relative value of the display parameter when determining the environmental parameter of the real scene and the initial value of the display parameter, and corrects the initial value according to the relative value to obtain the final value of the display parameter. As an embodiment, the augmented reality device takes an absolute value of a difference between an environmental parameter of the real scene and an initial value of the display parameter as a relative value of the display parameter, and takes a result of adding the relative value of the display parameter and the initial value of the display parameter as a final value of the display parameter when the environmental parameter of the real scene is greater than the initial value of the display parameter; and in the case that the environmental parameter of the real scene is smaller than the initial value of the display parameter, taking the result of subtracting the relative value of the display parameter from the initial value of the display parameter as the final value of the display parameter.
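Written as code, the relative-value rule of this paragraph looks as follows (a direct, hedged transcription; names are illustrative). Note that adding or subtracting the absolute difference simply drives the final value onto the environmental parameter; the relative-value form becomes meaningful once a correction threshold gates the update, as in step S6310 below.

```python
def final_display_value(env_param, initial_value):
    # Relative value: absolute difference between the environmental
    # parameter and the initial display parameter value.
    relative = abs(env_param - initial_value)
    if env_param > initial_value:
        return initial_value + relative   # raise toward the scene value
    return initial_value - relative       # lower toward the scene value
```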
In some embodiments, the environment parameter of the real scene comprises a brightness parameter of the real scene, the display parameter comprises a display brightness, and step S630 comprises step S6310.
Step S6310: Determining a correction value of the display brightness based on the brightness parameter of the real scene.
The display brightness is the brightness of the virtual information when displayed on the augmented reality device.
In one embodiment, when determining the brightness parameter of the real scene, the augmented reality device directly takes that parameter as the final value of the display brightness. Illustratively, if the brightness parameter of the real scene is 100, the augmented reality device directly sets the final value of the display brightness to 100.
As another embodiment, when determining the brightness parameter of the real scene, the augmented reality device further determines an initial value of the display brightness, determines a relative value of the display brightness based on the brightness parameter of the real scene and the initial value, and corrects the initial value based on the relative value. In some embodiments, the relative value of the display brightness is determined as the absolute value of the difference between the initial value of the display brightness and the brightness parameter of the real scene. For example, if the initial value of the display brightness is 20 and the brightness parameter of the real scene is 100, the relative value of the display brightness is 80.
Optionally, a brightness correction threshold is further set in the augmented reality device, and the augmented reality device corrects the display brightness when determining that the relative value of the display brightness is greater than or equal to the brightness correction threshold; in the case where it is determined that the relative value of the display luminance is smaller than the luminance correction threshold value, the display luminance is not corrected. For example, the luminance correction threshold may be 10, and in the case where the relative value of the display luminance is 80, the display luminance is corrected, and in the case where the relative value of the display luminance is 5, the display luminance is not corrected. In this embodiment, the brightness correction threshold is set in the augmented reality device, so that the display brightness correction step is not performed under the condition that the absolute value of the difference between the initial value of the display brightness and the brightness parameter of the real scene is small, and further, the calculation resource of the processor can be saved.
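Combining the relative value with the brightness correction threshold gives a sketch like the following; the threshold value of 10 comes from the example above, everything else is illustrative.

```python
BRIGHTNESS_CORRECTION_THRESHOLD = 10

def corrected_display_brightness(scene_brightness, initial_brightness,
                                 threshold=BRIGHTNESS_CORRECTION_THRESHOLD):
    relative = abs(scene_brightness - initial_brightness)
    if relative < threshold:
        return initial_brightness        # gap too small: skip correction
    return scene_brightness              # align the display with the scene

# From the examples above: initial 20 vs scene 100 corrects to 100,
# while a relative value of 5 leaves the display brightness unchanged.
assert corrected_display_brightness(100, 20) == 100
assert corrected_display_brightness(25, 20) == 20
```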
In the embodiment of the application, the augmented reality device corrects the display brightness through the brightness parameter of the real scene, and can timely ensure that the final value of the display brightness is consistent with the brightness parameter of the real scene under the condition that the difference between the brightness parameter of the real scene and the initial value of the display brightness is large, so that the display effect of the virtual information can be closer to the real scene when the virtual information is displayed on the augmented reality device, and the watching experience of a user is improved. For example, when a user uses the augmented reality device in an environment with sufficient light in the daytime, if the display brightness of the virtual information is too low, the situation that the brightness of the virtual information is too dark occurs, and in this situation, the augmented reality device adjusts the display brightness of the virtual information by acquiring the brightness parameter of the real scene, so that the display brightness of the virtual information is consistent with the brightness parameter of the real scene, and the normal viewing experience of the user is ensured.
In some embodiments, the environment parameter of the real scene includes a color parameter of the real scene, the display parameter includes a display color, and step S630 includes steps S6320 to S6330.
Step S6320: Acquiring the initial value of the display color.
The display color is a color of the virtual information when displayed on the augmented reality device. In some embodiments, the virtual information may be a virtual image, and the display color is the color of each part in the virtual image; in other embodiments, the virtual information may be virtual text in the form of an information bar, and the display color is the font color of the virtual text or the background color of the information bar. The initial value of the display color may be stored in a memory of the augmented reality device, and the augmented reality device reads the initial value to obtain the initial value of the display color.
Step S6330: Determining a correction value of the display color based on the color parameter, the initial value of the display color, and a preset color correction mapping table.
The color correction mapping table represents the correspondence between the color parameter, the initial value of the display color, and the correction value of the display color. The table is determined based on the principle of the three primary colors of light; for example, red light and green light mix into yellow light. Illustratively, when the color parameter of the real scene indicates that the overall color of the area to be rendered is greenish and the initial value of the display color is yellow, the correction value of the display color is determined to be red based on the color correction mapping table. Once the augmented reality device determines the correction value of the display color, the virtual information is displayed in that color; in the above example, the virtual information is therefore displayed in red. When the user views red virtual information against a greenish region to be rendered, the color of the virtual information finally presented is yellow.
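A hedged sketch of the lookup: the table entries below are constructed from the additive-mixing example in the text (red light plus green light gives yellow light) and are not the patent's actual table.

```python
# (scene color cast, intended display color) -> corrected display color
COLOR_CORRECTION_TABLE = {
    ("greenish", "yellow"): "red",      # red + green ambient = yellow
    ("reddish", "yellow"): "green",     # green + red ambient = yellow
    ("bluish", "magenta"): "red",       # red + blue ambient = magenta
}

def corrected_display_color(scene_cast, intended_color):
    # Fall back to the intended color when the table has no entry.
    return COLOR_CORRECTION_TABLE.get((scene_cast, intended_color),
                                      intended_color)
```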
In the embodiment of the application, the augmented reality device corrects the display color through the color parameter of the real scene, so that when the corrected virtual information is displayed on the augmented reality device, the color of the virtual information seen by the user through the augmented reality device under the regions to be rendered in different colors is consistent with the default color of the virtual information, and the normal watching experience of the user is ensured.
In some embodiments, the environment parameter of the real scene comprises a texture parameter of the real scene, the display parameter comprises a display definition, and step S630 comprises step S6340.
Step S6340: Determining a correction value of the display definition based on the texture parameter of the real scene.
The display clarity is the clarity of the virtual information when displayed on the augmented reality device.
In some embodiments, the definition value may be characterized by a resolution: the larger the resolution, the larger the definition value. As an embodiment, when determining the texture parameters of the real scene, the augmented reality device determines the resolutions corresponding to different texture parameters based on a preset resolution mapping table. The resolution mapping table represents the mapping relationship between different texture parameters and different resolutions: the greater the texture complexity of the region to be rendered indicated by the texture parameter, the larger the corresponding resolution; conversely, the smaller the resolution. Having determined the corresponding resolution based on the texture parameters of the real scene, the augmented reality device takes that resolution as the final value of the display definition.
In other embodiments, the definition value may be characterized by the number of rendering patches corresponding to the virtual information: the greater the number of rendering patches, the larger the definition value. As an embodiment, when determining the texture parameter of the real scene, the augmented reality device determines the numbers of rendering patches corresponding to different texture parameters based on a preset rendering patch mapping table. The rendering patch mapping table represents the mapping relationship between different texture parameters and different numbers of rendering patches: the greater the texture complexity of the region to be rendered indicated by the texture parameter, the larger the number of corresponding rendering patches; conversely, the smaller the number. The augmented reality device determines the corresponding number of rendering patches based on the texture parameters of the real scene, and takes that number as the final value of the display definition.
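The two mapping tables can be sketched together as follows; the bucket boundaries, resolutions, and patch counts are made-up placeholders, and only the monotonic relationship comes from the text.

```python
# Each entry: (upper bound on texture complexity, resolution, patch count).
DEFINITION_TABLE = [
    (50.0, (640, 360), 500),        # low complexity, e.g. a plain wall
    (200.0, (1280, 720), 2000),
    (float("inf"), (1920, 1080), 8000),
]

def display_definition(texture_complexity):
    for upper_bound, resolution, patches in DEFINITION_TABLE:
        if texture_complexity < upper_bound:
            return resolution, patches
    return DEFINITION_TABLE[-1][1:]     # unreachable with the inf bound
```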
In the embodiment of the application, the augmented reality device corrects the display definition through the texture parameters of the real scene, so that when the corrected virtual information is displayed on the augmented reality device, the definition of the virtual information, which is seen by a user through the augmented reality device under the regions to be rendered of different texture parameters, is consistent, and the normal viewing experience of the user is ensured. In addition, when the user watches the virtual information in the region to be rendered (for example, a solid wall) with low texture complexity, the virtual information is not required to have higher definition, and the definition of the virtual information can be reduced at the moment, so that the rendering calculation resources of the augmented reality device are reduced, and the virtual information can be displayed on the augmented reality device more quickly.
Step S640: Adjusting the display parameters of the virtual information in the region to be rendered according to the correction value of the display parameters of the virtual information in the region to be rendered.
In some embodiments, the augmented reality device directly takes the correction value of the display parameter as the final value of the display parameter. In other embodiments, the augmented reality device, having determined the relative value of the display parameter, adjusts the initial value of the display parameter with it to determine the final value of the display parameter.
When adjusting the display parameters of the virtual information, the superposition ratio between the virtual information and the real scene also needs to be considered. The superposition ratio may be any ratio, for example 1:1, 7:3, and so on. The superposition ratio may be stored in a memory of the augmented reality device, and the device adjusts the final value of the display parameter by reading the ratio from the memory. In some embodiments, having determined the superposition ratio, the augmented reality device multiplies the final value of the display parameter before adjustment by the superposition ratio to obtain the adjusted final value, and displays the virtual information based on the adjusted final value of the display parameter. In this embodiment, adjusting the final value of the display parameter by the superposition ratio brings the display effect of the virtual information closer to the real scene.
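A final sketch of the superposition-ratio adjustment; treating the ratio as a single scalar weight on the display parameter is an assumption made for illustration.

```python
def apply_superposition_ratio(final_value, virtual_part=7, real_part=3):
    # A 7:3 virtual-to-real ratio weights the virtual information at 0.7.
    ratio = virtual_part / (virtual_part + real_part)
    return final_value * ratio

# Example: a display brightness of 100 under a 7:3 ratio becomes 70.
assert apply_superposition_ratio(100) == 70
```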
In the embodiment of the application, methods for adjusting display parameters of virtual information by an augmented reality device based on brightness parameters, color parameters and texture parameters of a real scene are respectively provided. The display parameters are adjusted through the environmental parameters of the real scene, so that the display effect of the virtual information can be closer to the real scene, and the watching experience of a user is improved.
Referring to fig. 7, a block diagram of a rendering apparatus 700 for virtual information according to an embodiment of the present disclosure is shown. The device 700 is applied to an augmented reality device. The apparatus 700 comprises: a to-be-rendered area determining module 710, an environment parameter obtaining module 720 and a display parameter adjusting module 730. The area to be rendered determining module 710 is configured to determine an area to be rendered in a scene image, where the scene image represents an image acquired by the augmented reality device from a real scene, and an area of the area to be rendered is smaller than an area of the scene image. The environment parameter obtaining module 720 is configured to obtain an environment parameter of a real scene, where the environment parameter of the real scene includes at least one of: brightness parameters, color parameters, texture parameters of the real scene. The display parameter adjusting module 730 is configured to adjust a display parameter of the virtual information in the region to be rendered based on an environment parameter of the real scene.
In some embodiments, the to-be-rendered region determining module 710 is further configured to obtain field angle information of the augmented reality device. Based on the field angle information, a field of view region of the user wearing the augmented reality device is determined, the field of view region representing a region covered by the field angle information in the scene image. And determining the visual field area as an area to be rendered in the scene image.
In some embodiments, the to-be-rendered area determination module 710 is also used to acquire images of human eyes of a user wearing an augmented reality device. Based on the human eye image, a visual fixation point area of the user in the real scene is determined, and the visual fixation point area represents an area focused by human eyes of the user in the real scene. And determining the visual fixation point area as an area to be rendered in the scene image.
In some embodiments, the display parameter adjustment module 730 is further configured to determine a correction value for a display parameter of the virtual information in the area to be rendered based on an environmental parameter of the real scene. And adjusting the display parameters of the virtual information in the region to be rendered according to the correction value of the display parameters of the virtual information in the region to be rendered.
In some embodiments, the environmental parameter of the real scene comprises a brightness parameter of the real scene and the display parameter comprises a display brightness. The display parameter adjustment module 730 is further configured to determine a correction value of the display brightness based on the brightness parameter of the real scene.
In some embodiments, the environmental parameter of the real scene comprises a color parameter of the real scene and the display parameter comprises a display color. The display parameter adjustment module 730 is further configured to obtain an initial value of the display color. And determining a correction value of the display color based on the color parameter, the initial value of the display color and a preset color correction mapping table, wherein the color correction mapping table represents the corresponding relation among the color parameter, the initial value of the display color and the correction value of the display color.
In some embodiments, the environmental parameter of the real scene comprises a texture parameter of the real scene and the display parameter comprises display sharpness. The display parameter adjustment module 730 is further configured to determine a correction value of the display definition based on the texture parameter of the real scene.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
The embodiment of the application provides a rendering apparatus for virtual information. In the apparatus, the augmented reality device determines, within the scene image, a region to be rendered whose area is smaller than that of the scene image, and then obtains the environment parameters of the real scene to adjust the display parameters of the virtual information in that region. On the one hand, the apparatus does not need to render the whole scene image but only a local region of it, which saves processor computing resources, shortens rendering time, allows the virtual information to be displayed on the augmented reality device in a timely manner, and thus improves display efficiency. On the other hand, the augmented reality device adjusts the display parameters of the virtual information based on the environment parameters of the real scene, so that when the virtual information is displayed, its display effect is closer to the real scene, improving the user's viewing experience.
Referring to fig. 8, an embodiment of the present application further provides an augmented reality device 800. The augmented reality device 800 includes: one or more processors 810, a memory 820, and one or more applications, where the one or more applications are stored in the memory 820 and configured to be executed by the one or more processors 810, the one or more applications being configured to perform the methods described in the above embodiments.
The processor 810 may include one or more processing cores. The processor 810 connects various parts of the augmented reality device 800 using various interfaces and lines, and performs various functions of the augmented reality device 800 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 820 and by invoking the data stored in the memory 820. Alternatively, the processor 810 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 810 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 810 but implemented by a separate communication chip.
The memory 820 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 820 may be used to store instructions, programs, code sets, or instruction sets. The memory 820 may include a stored program area and a stored data area, where the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like. The stored data area may store data created by the augmented reality device 800 in use (such as a phone book, audio and video data, and chat record data).
Referring to fig. 9, a computer-readable storage medium 900 is further provided according to an embodiment of the present application, in which computer program instructions 910 are stored in the computer-readable storage medium 900, and the computer program instructions 910 can be called by a processor to execute the method described in the above embodiment.
The computer-readable storage medium 900 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 900 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 900 has storage space for the computer program instructions 910 that perform any of the method steps described above. The computer program instructions 910 may be read from or written to one or more computer program products.
Although the present application has been described with reference to the preferred embodiments, it is to be understood that the present application is not limited to the disclosed embodiments, but rather, the present application is intended to cover various modifications, equivalents and alternatives falling within the spirit and scope of the present application.

Claims (10)

1. A rendering method of virtual information is applied to an augmented reality device, and the method comprises the following steps:
determining a region to be rendered in a scene image, wherein the scene image represents an image acquired by the augmented reality device through collecting a real scene, and the area of the region to be rendered is smaller than that of the scene image;
acquiring environmental parameters of the real scene, wherein the environmental parameters of the real scene comprise at least one of the following: the brightness parameter, the color parameter and the texture parameter of the real scene;
and adjusting the display parameters of the virtual information in the area to be rendered based on the environment parameters of the real scene.
2. The method of claim 1, wherein determining the region to be rendered in the scene image comprises:
acquiring field angle information of the augmented reality device;
determining, based on the field angle information, a field of view region of a user wearing the augmented reality device, the field of view region characterizing a region in the scene image covered by the field angle information;
and determining the visual field area as an area to be rendered in the scene image.
3. The method of claim 1, wherein determining the region to be rendered in the scene image comprises:
acquiring a human eye image of a user wearing the augmented reality device;
determining a visual fixation point region of the user in a real scene based on the human eye image, wherein the visual fixation point region represents a region in which human eyes of the user are focused in the real scene;
and determining the visual fixation point area as an area to be rendered in the scene image.
4. The method according to any one of claims 1 to 3, wherein the adjusting the display parameters of the virtual information in the area to be rendered based on the environment parameters of the real scene comprises:
determining a correction value of a display parameter of the virtual information in the area to be rendered based on the environmental parameter of the real scene;
and adjusting the display parameters of the virtual information in the region to be rendered according to the correction value of the display parameters of the virtual information in the region to be rendered.
5. The method according to claim 4, wherein the environment parameter of the real scene comprises a brightness parameter of the real scene, the display parameter comprises a display brightness, and the determining the correction value of the display parameter of the virtual information in the area to be rendered based on the environment parameter of the real scene comprises:
and determining a correction value of the display brightness based on the brightness parameter of the real scene.
6. The method according to claim 4, wherein the environment parameter of the real scene comprises a color parameter of the real scene, the display parameter comprises a display color, and the determining a correction value of the display parameter of the virtual information in the area to be rendered based on the environment parameter of the real scene comprises:
acquiring an initial value of the display color;
and determining a correction value of the display color based on the color parameter, the initial value of the display color and a preset color correction mapping table, wherein the color correction mapping table represents the corresponding relation among the color parameter, the initial value of the display color and the correction value of the display color.
7. The method according to claim 4, wherein the environment parameter of the real scene comprises a texture parameter of the real scene, the display parameter comprises display sharpness, and the determining a correction value of the display parameter of the virtual information in the area to be rendered based on the environment parameter of the real scene comprises:
and determining a correction value of the display sharpness based on the texture parameter of the real scene.
8. An apparatus for rendering virtual information, the apparatus being applied to an augmented reality apparatus, the apparatus comprising:
a to-be-rendered area determining module, configured to determine a to-be-rendered area in a scene image, where the scene image represents an image acquired by the augmented reality device by collecting a real scene, and an area of the to-be-rendered area is smaller than an area of the scene image;
an environment parameter obtaining module, configured to obtain an environment parameter of the real scene, where the environment parameter of the real scene includes at least one of: the brightness parameter, the color parameter and the texture parameter of the real scene;
and the display parameter adjusting module is used for adjusting the display parameters of the virtual information in the area to be rendered based on the environment parameters of the real scene.
9. An augmented reality apparatus, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium having computer program instructions stored therein, the computer program instructions being invokable by a processor to perform the method of any of claims 1-7.
CN202210089796.7A 2022-01-25 2022-01-25 Rendering method and device of virtual information, augmented reality device and storage medium Pending CN114549718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210089796.7A CN114549718A (en) 2022-01-25 2022-01-25 Rendering method and device of virtual information, augmented reality device and storage medium

Publications (1)

Publication Number Publication Date
CN114549718A (en) 2022-05-27

Family

ID=81674590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210089796.7A Pending CN114549718A (en) 2022-01-25 2022-01-25 Rendering method and device of virtual information, augmented reality device and storage medium

Country Status (1)

Country Link
CN (1) CN114549718A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294488A (en) * 2022-10-10 2022-11-04 江西财经大学 AR rapid object matching display method
CN117041511A (en) * 2023-09-28 2023-11-10 青岛欧亚丰科技发展有限公司 Video image processing method for visual interaction enhancement of exhibition hall
CN117041511B (en) * 2023-09-28 2024-01-02 青岛欧亚丰科技发展有限公司 Video image processing method for visual interaction enhancement of exhibition hall
CN117078868A (en) * 2023-10-17 2023-11-17 北京太极信息系统技术有限公司 Virtual reality engine based on information creation software and hardware and modeling and rendering method thereof
CN117078868B (en) * 2023-10-17 2023-12-15 北京太极信息系统技术有限公司 Virtual reality engine based on information creation software and hardware and modeling and rendering method thereof

Similar Documents

Publication Publication Date Title
EP3454250B1 (en) Facial image processing method and apparatus and storage medium
CN114549718A (en) Rendering method and device of virtual information, augmented reality device and storage medium
KR100556856B1 (en) Screen control method and apparatus in mobile telecommunication terminal equipment
US9635311B2 (en) Image display apparatus and image processing device
JP5450739B2 (en) Image processing apparatus and image display apparatus
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
CN110300264B (en) Image processing method, image processing device, mobile terminal and storage medium
CN111210510B (en) Three-dimensional face model generation method and device, computer equipment and storage medium
CN109937434B (en) Image processing method, device, terminal and storage medium
KR102383129B1 (en) Method for correcting image based on category and recognition rate of objects included image and electronic device for the same
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
WO2018233217A1 (en) Image processing method, device and augmented reality apparatus
CN111880711B (en) Display control method, display control device, electronic equipment and storage medium
CN109313797B (en) Image display method and terminal
CN109104578B (en) Image processing method and mobile terminal
CN112351325B (en) Gesture-based display terminal control method, terminal and readable storage medium
WO2021077863A1 (en) Terminal message processing method, image recognition method, and apparatuses, medium and system
CN111627076B (en) Face changing method and device and electronic equipment
CN111836058B (en) Method, device and equipment for playing real-time video and storage medium
US9323981B2 (en) Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored
CN111432122A (en) Image processing method and electronic equipment
US20230186425A1 (en) Face image processing method and apparatus, device, and computer readable storage medium
CN113706430A (en) Image processing method and device for image processing
CN112258435A (en) Image processing method and related product
KR20190006329A (en) Display apparatus and the control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination