CN117294826A - Image display method, device, electronic equipment and readable storage medium - Google Patents

Image display method, device, electronic equipment and readable storage medium Download PDF

Info

Publication number
CN117294826A
Authority
CN
China
Prior art keywords
color information
rendered
vision
color
image
Prior art date
Legal status
Pending
Application number
CN202311385818.5A
Other languages
Chinese (zh)
Inventor
刘川
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311385818.5A priority Critical patent/CN117294826A/en
Publication of CN117294826A publication Critical patent/CN117294826A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/15Processing image signals for colour aspects of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]

Abstract

The application discloses an image display method, an image display device, an electronic device and a readable storage medium, and belongs to the technical field of image display. The method includes: acquiring first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are the visual display types of different viewing subjects; and rendering and displaying the first object to be rendered according to the first color information.

Description

Image display method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image display, and particularly relates to an image display method, an image display device, electronic equipment and a readable storage medium.
Background
The colors seen by the human eye in daily life are generally taken to represent the visual appearance of the world. However, the color vision of animals, and hence their view of the visual world, often differs from that of humans, so colors under animal vision differ from colors under human vision. As human society progresses, the relationship between humans and animals has become more harmonious and friendly, and people are willing to spend more time and effort learning about animals' habits and the world as seen through their eyes.
Currently, when an object to be rendered is displayed on a display screen, it is usually rendered in its color under human vision. For example, an apple seen by a user is generally red, so the apple is rendered and displayed in red. However, a user may want to know the colors seen by animals, for example the color of an apple as seen by a dog, and the related art provides no scheme for letting a user view an object's colors as seen by other animals. How to display an object so that a user can view its colors under different vision therefore becomes a problem to be solved.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image display method, an image display device, an electronic device and a readable storage medium, which can render and display color information of an object to be rendered under different vision, so that a user can see the colors of the object under a vision other than the user's own.
In a first aspect, an embodiment of the present application provides an image display method, including: acquiring first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are the visual display types of different viewing subjects; and rendering and displaying the first object to be rendered according to the first color information.
In a second aspect, an embodiment of the present application provides an image display apparatus including an acquisition module and a display module. The acquisition module is configured to obtain first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are the visual display types of different viewing subjects. The display module is configured to render and display the first object to be rendered according to the first color information obtained by the acquisition module.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the present application, an image display device obtains first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision and is different from the color information of the first object to be rendered under a second vision, the first vision and the second vision being the visual display types of different viewing subjects; the first object to be rendered is then rendered and displayed according to the first color information. In this way, the electronic device can acquire the color information of the object to be rendered under the first vision and render and display the object according to it, so that the object's color information under different visions can be rendered and displayed, the user can see the object's colors under a vision other than his or her own, and the interest of object rendering and display is improved.
Drawings
Fig. 1 is a first schematic flowchart of an image display method according to an embodiment of the present application;
Fig. 2 (A) is a schematic structural diagram of a head-mounted AR device according to an embodiment of the present application;
Fig. 2 (B) is another schematic structural diagram of a head-mounted AR device according to an embodiment of the present application;
Fig. 3 (A) is a first schematic diagram of an interface to which the image display method according to an embodiment of the present application is applied;
Fig. 3 (B) is a second schematic diagram of an interface to which the image display method according to an embodiment of the present application is applied;
Fig. 4 (A) is a third schematic diagram of an interface to which the image display method according to an embodiment of the present application is applied;
Fig. 4 (B) is a fourth schematic diagram of an interface to which the image display method according to an embodiment of the present application is applied;
Fig. 5 is a schematic structural diagram of a stylus according to an embodiment of the present application;
Fig. 6 is a fifth schematic diagram of an interface to which the image display method according to an embodiment of the present application is applied;
Fig. 7 is a schematic diagram of capturing object color information with a stylus according to an embodiment of the present application;
Fig. 8 is a second schematic flowchart of an image display method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an image display device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one object or multiple objects. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The terms "at least one", "at least one of" and the like in the description and claims of the present application cover any one, any two, or any combination of two or more of the listed objects. For example, "at least one of a, b, c" may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b and c", where a, b and c may each be singular or plural. Similarly, "at least two" means two or more, with a meaning similar to that of "at least one".
The image display method provided by the embodiments of the present application can be applied to scenes in which an image captured by a camera is rendered and displayed.
When a camera captures an image of a scene and obtains its image data, the electronic device renders and displays the image according to the color information (for example, grayscale information) of each pixel in the captured image data, so that the color of the displayed image is close to the true color of the scene as observed by the human eye. However, an image rendered in this way shows its colors under human vision, and the user cannot intuitively learn the scene's colors under animal vision. In the embodiments of the present application, when image data of a scene is captured by the camera, the electronic device can acquire the color information of the corresponding image under a first vision and render and display the object to be rendered according to that color information, instead of rendering with the original color information in the image data. The object's color information under different visions can thus be rendered, the user can see the object's colors under a vision other than his or her own, and the interest of object rendering and display is improved.
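The substitution of first-vision color information for the captured color information, as described above, can be sketched as a per-pixel transform. This is a minimal illustration, not the application's implementation; the nested-list frame layout and the `transform` callable are assumptions made for demonstration:

```python
def render_under_first_vision(image, transform):
    """Replace every pixel's human-vision color with its first-vision color.

    `image` is a nested list of (r, g, b) tuples; `transform` is a
    hypothetical mapping from a human-vision color to the corresponding
    color under the first vision.
    """
    return [[transform(pixel) for pixel in row] for row in image]

# Example: a toy transform that zeroes the red channel.
frame = [[(255, 0, 0), (0, 128, 0)]]
print(render_under_first_vision(frame, lambda p: (0, p[1], p[2])))
```

A real implementation would operate on the camera's frame buffer, but the shape of the computation is the same: every pixel is mapped through a vision-specific color function before display.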
The execution subject of the image display method provided in the embodiments of the present application may be an electronic device, or at least one of a functional module and an entity module in the electronic device capable of implementing the image display method, which may be determined according to actual use requirements and is not limited by the embodiments of the present application. The image display method is described below by taking an image display device executing the method as an example.
The image display method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an image display method according to an embodiment of the present application, as shown in fig. 1, the image display method may include the following steps S201 and S202:
step S201: the image display device obtains first color information corresponding to a first object to be rendered.
The first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are the visual display types of different viewing subjects.
Optionally, in an embodiment of the present application, the first object to be rendered may be at least part of an object in an image acquired by a camera.
Alternatively, in an embodiment of the present application, the first vision may be the vision of an animal. Illustratively, the first vision may be the vision of cats, dogs, fish, cattle, horses, snakes, birds, insects, chameleons, and the like.
Alternatively, in the embodiment of the present application, the first vision may also be the vision of a color-blind (achromatopsia) person.
Illustratively, different viewing subjects have different visual display types. For example, the same "apple" (i.e., object to be rendered) has different visual effects in the human eye and in the eye of a bull (i.e., different viewing subjects), so the bull and the human have different visual display types. Some animals (e.g., snakes) can also perceive infrared, and their visual effect resembles the view through an infrared night-vision device. Some animals (e.g., horses) cannot see red but can see blue and green, so to a horse a red apple or a bright orange carrot may look tan or green. In addition, a horse has a blind spot directly in front of it: because of the position of its eyes, the picture it sees is split into left and right halves. As another example, the same scene has a different visual effect for a color-blind person than for a person with normal vision, since color-blind people cannot distinguish certain colors.
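As a toy illustration of these differing visual display types, the red-green indistinguishability described for horses can be mimicked by mixing a pixel's red and green channels. The equal 0.5 weights below are an arbitrary assumption for demonstration, not a color model from this application:

```python
def simulate_dichromat(rgb):
    """Map an (r, g, b) tuple in 0..255 to an approximate dichromatic view.

    A viewer lacking separate red and green photoreceptors cannot tell the
    two channels apart, so both are replaced by their (assumed 50/50) mix;
    the blue channel is left untouched.
    """
    r, g, b = rgb
    mixed = int(0.5 * r + 0.5 * g)
    return (mixed, mixed, b)

# Pure red and pure green collapse to the same color, but blue survives.
print(simulate_dichromat((255, 0, 0)))   # (127, 127, 0)
print(simulate_dichromat((0, 255, 0)))   # (127, 127, 0)
print(simulate_dichromat((0, 0, 255)))   # (0, 0, 255)
```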
Alternatively, in an embodiment of the present application, the second vision may be a vision of a human eye.
Optionally, in an embodiment of the present application, the first color information includes, but is not limited to, at least one of: chrominance information, brightness information, and hue information.
Step S202: the image display device renders and displays the first object to be rendered according to the first color information.
Alternatively, in the embodiment of the present application, the image display apparatus may render and display the first object to be rendered on the display interface of the electronic device.
Optionally, in an embodiment of the present application, the electronic device may be a VR intelligent terminal, for example, a VR headset; alternatively, the electronic device may be a general electronic device, such as a mobile terminal.
Fig. 2 (A) is a schematic structural diagram of a VR intelligent terminal provided in an embodiment of the present application. As shown in fig. 2 (A), the hardware of the VR intelligent terminal may include at least a main body frame 21, a camera module 22 located at the front side of the main body frame 21, a display screen 23 located at the inner side of the main body frame 21, and a head fixing structure 24 connected to the main body frame 21. Optical devices and acoustic devices are mounted on the main body frame 21, and a processor and a memory are disposed inside it. The camera module 22 is located at the front cover plate of the main body frame 21 and sends captured image data to the processor for processing. The display screen 23 is disposed at the near-eye end of the main body frame 21 and is used for displaying the image data captured by the camera module 22. Further, as shown in fig. 2 (B), the camera module 22 is connected to a processor 25, and the processor 25 is connected to a display screen 26.
Taking the first vision as the vision of an animal as an example, the VR intelligent terminal photographs the scenery in the shooting scene through the camera module to obtain scenery information, then obtains the color information of that scenery under the animal's vision, and renders and displays the scenery on the display screen according to this color information. The display screen thus presents the scenery's colors under the animal's vision, so that a user wearing the VR intelligent terminal can intuitively see the scenery's colors as seen by the animal.
In the embodiment of the present application, the rendering device can convert the colors seen by the user's eyes into the colors seen by an animal's eyes for rendering and display, so that the user can immersively experience the colors of the world as seen by a certain animal, which improves the interest of the display. Alternatively, the rendering device can convert the colors seen by the user's eyes into the colors seen by color-blind people for rendering and display, establishing an exchange between color-blind people and people with normal vision about the colors of the scenes before their eyes, so that the user can experience the world as a color-blind person sees it.
According to the image display method provided by the embodiment of the present application, the rendering device acquires first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision and is different from the color information of the first object to be rendered under a second vision, and then renders and displays the first object to be rendered according to the first color information. In this way, the electronic device can acquire the color information of the object to be rendered under the first vision and render and display the object according to it, so that the object's color information under different visions can be rendered and displayed, the user can see the object's colors under a vision other than his or her own, and the interest of object rendering and display is improved.
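The two steps S201 and S202 can be sketched as follows, assuming hypothetical `get_first_color` and `render` callables; neither name is an interface defined by this application:

```python
def display_image(obj, get_first_color, render):
    """S201: acquire first color information; S202: render and display with it."""
    color = get_first_color(obj)   # step S201
    render(obj, color)             # step S202
    return color

# Minimal usage: record what would be rendered instead of drawing to a screen.
shown = []
result = display_image(
    "apple",
    get_first_color=lambda o: "yellow-green",     # assumed feline-vision color
    render=lambda o, c: shown.append((o, c)),
)
print(shown)   # [('apple', 'yellow-green')]
```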
Optionally, in an embodiment of the present application, the first vision is a vision of a first viewing object. Illustratively, before the step S201, the image display method provided in the embodiment of the present application may further include the following step S203:
step S203: the image display device receives a first input of a first object identifier of the at least one object identifier by a user under the condition that the at least one object identifier is displayed.
Wherein the at least one object identifier corresponds to at least one viewing object, and the first object identifier corresponds to a first viewing object.
In connection with the above step S203, the process of the above step S201 may be implemented by the following steps S201a and S201b.
Step S201a: the image display device responds to the first input and acquires at least one color information corresponding to the first viewing object from the first color database.
Step S201b: the image display device determines first color information corresponding to a first object to be rendered from the at least one color information.
The first color database includes color information of at least one object to be rendered under at least one vision, where the at least one vision corresponds to at least one viewing subject, each vision corresponds to one or more objects to be rendered, and each object to be rendered corresponds to one piece of color information.
Optionally, in an embodiment of the present application, the first color database may include color information of at least one object to be rendered under canine vision, color information of at least one object to be rendered under feline vision, color information of at least one object to be rendered under avian vision, and the like. Illustratively, the at least one object to be rendered may include a person, a building, a landscape, a food, and the like.
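One possible shape for such a first color database is a nested mapping from vision type to object class to color information. All values below are illustrative placeholders rather than measured animal-vision data:

```python
# Hypothetical first color database: vision type -> object class -> RGB color.
FIRST_COLOR_DB = {
    "canine": {"apple": (180, 180, 60), "grass": (210, 210, 90)},
    "feline": {"apple": (154, 205, 50)},   # e.g. a yellow-green
}

def get_first_color(vision, obj, fallback=None):
    """Steps S201a/S201b: fetch the colors for a vision, then pick the object's."""
    colors_for_vision = FIRST_COLOR_DB.get(vision, {})   # S201a: per-vision set
    return colors_for_vision.get(obj, fallback)          # S201b: per-object color

print(get_first_color("feline", "apple"))   # (154, 205, 50)
```

As the description notes, the entries in a real database could be obtained through big data or generated by an artificial-intelligence algorithm; the dictionary here only illustrates the lookup structure.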
Alternatively, in the embodiment of the present application, the color information in the first color database may be obtained through big data or generated through an artificial intelligence algorithm.
It should be noted that a living being's perception of the colors of external objects depends on how its eyes process light. The human eye has trichromatic vision, meaning it contains three types of cone photoreceptors, sensitive to red, green and blue light respectively. The eyes of animals, however, differ from human eyes: some animals have only two types of photoreceptors, while others have four or more and can perceive light invisible to humans, such as ultraviolet or infrared. Thus the colors seen by the human eye differ from those seen by animal eyes, and the eyes of different classes of animals also differ from one another.
Alternatively, in the embodiment of the present application, the object identifier may be an image, text, or symbol. The object identification may be an image of an animal, for example.
Alternatively, in the embodiment of the present application, the first object identifier may be any one of the object identifiers.
Optionally, in an embodiment of the present application, the image display device may display at least one object identifier on a second interface, where the second interface may be a display interface of the VR device.
Optionally, in an embodiment of the present application, the first input is used to select to render the vision when the first object to be rendered is displayed.
Alternatively, in the embodiment of the present application, the first input may be a touch input, a voice input, or a gesture input of a user, which is not limited in the embodiment of the present application.
Illustratively, the first input may be a click input, a slide input, a press input, or the like of the user. Further, the clicking operation may be any number of clicking operations. The above-described sliding operation may be a sliding operation in any direction, for example, an upward sliding, a downward sliding, a leftward sliding, a rightward sliding, or the like, which is not limited in the embodiment of the present application.
For example, suppose the user wears a VR headset and triggers it to enter an "animal vision" mode. As shown in fig. 3 (A), the display interface 30 of the VR headset displays an identifier 31 for felines and an identifier 32 for canines. When the user wants to view the color of a lion in the eyes of a feline (e.g., green) and clicks the feline identifier 31 through a gesture input, the camera of the VR headset photographs the lion in front of it, obtains the color information of the lion in feline eyes, and then displays the image of the lion on the display screen according to that color information, as shown in fig. 3 (B).
In this way, when a scene is photographed through the camera and the user wants to know its colors in the eyes of an animal, the user can click that animal's identifier. The image display device then obtains the color information of the photographed scene under the animal's vision and renders and displays the scene with that color information, so that the scene's colors in the animal's eyes are presented on the display screen. The user can thus intuitively view the scene's colors as the animal sees them, which improves the interest of image display.
Optionally, in an embodiment of the present application, before step S202 described above, the image display method provided in an embodiment of the present application may further include the following steps S204 to S206:
step S204: the image display device acquires a first image through the camera and determines one or more objects in the first image as a first object to be rendered.
Step S205: the image display device acquires second color information corresponding to a second object to be rendered in the first VR image.
The second color information is color information of the second object to be rendered under the first vision.
Step S206: and the image display device performs color correction processing on a second object to be rendered in the first VR image according to the second color information to obtain a second VR image.
In combination with the above steps S204 to S206, the above step S202 may be replaced with the following step S202a:
step S202a: and the image display device fuses the first object to be rendered into the second VR image for rendering and displaying.
Alternatively, in the embodiment of the present application, the first image may be an image acquired in real time by a camera.
Alternatively, in the embodiment of the present application, one or more objects in the first image may be a person, a scene, a building, or the like in the image.
Optionally, in an embodiment of the present application, image recognition is performed on the first image, so as to obtain one or more objects in the first image.
Alternatively, in the embodiment of the present application, the first VR image may be a three-dimensional virtual image.
Alternatively, in the embodiment of the present application, the second object to be rendered may be a person image, a scenic image, a building image, or the like in the second VR image.
Alternatively, in the embodiment of the application, the image display device may acquire color information of the second object to be rendered under the first vision through the first color database.
It should be noted that, the explanation of the first color database may be referred to above, and will not be repeated here.
For example, assuming that the first VR image includes an image of a red apple, the image display device obtains that the red apple is yellowish green under the vision of the feline, and corrects the color of the area where the apple image is located in the first VR image to be yellowish green, so as to obtain a second VR image after color correction.
It should be noted that, the process of performing color correction processing on the image may refer to related art, and this embodiment of the present application will not be described in detail.
Optionally, in the embodiment of the present application, after obtaining the second VR image, the image display apparatus may use an image fusion technique to fuse the first object to be rendered into the second VR image for rendering and display, so as to present a picture combining the virtual and the real.
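Steps S205, S206 and S202a can be sketched as follows, with plain nested lists standing in for image frames; the region and pixel encodings are assumptions made for illustration, not the application's data formats:

```python
def color_correct(frame, region, new_color):
    """S206: overwrite every (x, y) pixel inside `region` with `new_color`."""
    for (x, y) in region:
        frame[y][x] = new_color
    return frame

def fuse(frame, obj_pixels):
    """S202a: paste the real object's pixels over the corrected virtual frame."""
    for (x, y), color in obj_pixels.items():
        frame[y][x] = color
    return frame

# Toy 2x2 virtual frame: correct the apple region to its assumed feline-vision
# color, then fuse a captured real-world object into the result.
frame = [["v", "v"], ["v", "v"]]
color_correct(frame, [(0, 0)], "yellow-green")   # S205/S206
fuse(frame, {(1, 1): "real-object"})             # S202a
print(frame)   # [['yellow-green', 'v'], ['v', 'real-object']]
```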
It should be noted that VR device manufacturers on the market currently synthesize the real world and the virtual world mainly through video see-through technology and then display a richly colored picture, so that the user can view real objects from the real world together with virtual objects from the virtual world. However, the colors of VR frames synthesized in this conventional way are the colors observed by the user's own eyes; VR images with the colors of other viewing angles, for example the colors of the world as seen by animal eyes, cannot be synthesized, so the generated VR images are monotonous.
According to the image display method described above, the colors of the object to be rendered in the camera-captured image and of the objects in the VR virtual image are converted into their colors under an animal's visual angle for rendering and display. A user wearing the terminal device can therefore experience, or role-play, the colors of the world in a certain animal's eyes, which improves the interest of image display.
Optionally, in an embodiment of the present application, before step S202 described above, the image display method provided in an embodiment of the present application may further include the following step S207 and step S208:
step S207: the image display device displays a first interface.
The first interface includes an object contour corresponding to the first object to be rendered.
Step S208: the image display device receives a second input from a user to the first interface.
Wherein the second input is an input of a color filling the outline of the object.
In combination with the above step S207 and step S208, the process of the above step S202 may be replaced with the following step S202b:
step S202b: the image display device responds to the second input and adopts the first color information to carry out color filling processing on the outline of the object so as to render and display the first object to be rendered.
Alternatively, in the embodiment of the present application, the first interface may be a drawing interface of a drawing application, or the first interface may be an image editing interface of an image editing application.
Optionally, in an embodiment of the present application, the first interface may be a display interface of the VR device.
Alternatively, in the embodiment of the present application, the object outline may be an edge of the object.
Illustratively, taking the first object to be rendered as an apple image as an example, the object outline may be an outline of the apple image.
Alternatively, in the embodiment of the present application, the second input may be a touch input, a voice input, or a gesture input of the user, which is not limited in the embodiment of the present application.
Illustratively, the second input may be a click input, a slide input, a press input, or the like of the user. Further, the click input may include any number of clicks, and the slide input may be a slide in any direction, for example, upward, downward, leftward, or rightward, which is not limited in the embodiments of the present application.
For example, as shown in (A) of fig. 4, the display screen 40 of the electronic device displays an outline 41 of an "apple". When the user wants to fill the outline 41 with color, the user clicks any area inside the outline with the stylus. As shown in (B) of fig. 4, the image display device then obtains the color information of the "apple" as seen in a feline's eyes, fills the outline with that color information, and displays the filled "apple" 42, presenting the apple's color in feline eyes (such as yellow-green) and thereby improving the interest of the display.
In an embodiment of the present application, when the user draws with the drawing application, the image display device can acquire the color information, under an animal's vision, of the object to be rendered that is to be color-filled, and fill the object with that color. The user can thus draw objects as they appear in an animal's eyes, which improves the interest of drawing.
Further optionally, in an embodiment of the present application, before step S201, the image display method provided in the embodiment of the present application may further include the following step S209:
step S209: the image display device receives third color information corresponding to a first object to be rendered, which is acquired by the touch pen.
The third color information is color information of the first object to be rendered under the second vision.
In combination with the above step S209, the process of the above step S201 may be replaced with the following step S201c:
step S201c: the image display device determines target color information corresponding to the third color information from a second color database based on the third color information, and determines the target color information as the first color information.
The second color database includes at least one color information under the first vision and color information under the second vision corresponding to each color information.
Optionally, in an embodiment of the present application, the second color database includes color information under the second vision corresponding to each of the at least one color information under the first vision.
Optionally, in an embodiment of the present application, the second color database may include at least one color information under at least one vision. Illustratively, the at least one vision may include feline vision, canine vision, avian vision, and the like.
Alternatively, in an embodiment of the present application, the first vision may be a vision of a feline, and the second vision may be a vision of a human eye.
Illustratively, the color database may include colors of red under human eye vision and corresponding colors of red under feline eye vision, such as yellow-green, and colors of yellow under human eye vision and corresponding colors of yellow under feline eye vision, such as green, and the like.
It should be noted that, as research on animal vision deepens, it is now possible to predict, for a given color seen by human eyes, the color it appears to be under an animal's vision.
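The lookup in step S201c can be sketched as a simple mapping table. Only the red → yellow-green and yellow → green pairs come from the text; the remaining entries, the table name, and the fallback behavior are illustrative assumptions:

```python
# Hypothetical second color database: human-eye (second vision) colors
# mapped to the corresponding colors under a given first vision.
SECOND_COLOR_DB = {
    "feline": {"red": "yellow-green", "yellow": "green"},
}

def to_first_vision(third_color, vision="feline"):
    """Return the target color for `third_color` under `vision`.

    Falls back to the input color when no mapping is known.
    """
    return SECOND_COLOR_DB.get(vision, {}).get(third_color, third_color)
```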
Alternatively, in the embodiment of the present application, the stylus may be a capacitive pen.
Optionally, in the embodiment of the present application, color information of a real scene in an environment may be acquired by a stylus and sent to an electronic device, where the electronic device may acquire target color information corresponding to the color information under different views, that is, convert the color information, and color fill an image contour according to the acquired target color information.
Fig. 5 is a schematic structural diagram of a stylus according to an embodiment of the present application. As shown in fig. 5, the stylus may include a pen tip 51 and a pen body 52 connected to the pen tip. The pen tip 51 is internally provided with a light receiving unit 53 and a light emitting unit 54, which are configured to emit light toward a scene and receive the light reflected by the scene, so that the color information of the scene can be extracted. The pen body 52 is internally provided with an RGB color extraction sensor 55 and a processor 56. The RGB color extraction sensor 55 is configured to acquire and process the color information of the scene from the light received by the light receiving unit, and the processor 56 is configured to further process that color information and send it to the electronic device when a picture is drawn.
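The processing inside the stylus is not specified beyond the units shown in fig. 5. As one plausible sketch, the RGB color extraction sensor might average several raw light readings into a single 8-bit color triple before the processor sends it on; the function and its averaging scheme are assumptions:

```python
def extract_rgb(samples):
    """Average raw (r, g, b) readings from the color extraction sensor
    into one color triple -- a simplified stand-in for the processing
    done by units 53-56 of the stylus in fig. 5."""
    n = len(samples)
    # zip(*samples) groups the readings per channel: all r's, g's, b's
    return tuple(round(sum(channel) / n) for channel in zip(*samples))
```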
Optionally, in the embodiment of the present application, after acquiring the real color information of the object to be rendered acquired by the stylus, the color information corresponding to the real color information under the first vision may be acquired, and color filling is performed on the object to be rendered according to the color information corresponding to the first vision.
Illustratively, as shown in fig. 6, the display screen of the electronic device displays a drawing interface 61, which includes an outline 60 of an apple to be filled with color; the lower right corner of the drawing interface 61 contains an identifier 62 and an identifier 63. After the user clicks the identifier 62, the electronic device enters the feline vision mode, invokes the preset color library, and waits for the scene color information extracted by the stylus. Then, as shown in fig. 7, the user can point the stylus at a real apple corresponding to the apple outline to be filled; the stylus extracts the apple's color (for example, red as seen by human eyes) through its light emitting unit, light receiving unit, and color extraction sensor component, processes the extracted color information, and sends it to the electronic device. Finally, as shown in (B) of fig. 4, when the user clicks the apple outline area in the drawing interface with the stylus, the electronic device fills the apple outline according to the obtained cat-eye color information.
Further, for the outline areas B1, B2, and B3 of the apple outline, the user can point the stylus at the different areas of the real apple corresponding to the different outline areas, so as to obtain the color of each outline area separately.
In an embodiment of the present application, scene colors are extracted by the stylus and converted through the preset color library of the electronic device, so that an image with the world colors seen in the eyes of the animal the user is interested in can be drawn on the electronic device in real time, constructing a real-world picture as seen in the animal's eyes and bringing a different kind of fun to the user.
Alternatively, in the embodiment of the present application, the step S202 may include the following steps S202c and S202d:
step S202c: the image display device generates a first visual image corresponding to a first object to be rendered according to the first color information.
Step S202d: the image display device performs a first process on the first visual image according to the visual characteristic type of the first viewing object corresponding to the first visual.
Wherein the first process includes at least one of an image correction process and an image cutting process.
Optionally, in the embodiment of the present application, the image display device may render the object to be rendered according to the first color information, so as to obtain the first visual image.
Alternatively, in the embodiment of the present application, the above visual feature types may include a visual field range, a visual angle, and the like.
Optionally, in an embodiment of the present application, different viewing objects correspond to different visual feature types. For example, because of the position of a horse's eyes, there is a blind spot directly in front of it, so the picture it sees is split from the middle into a left half and a right half. For another example, the field of view of a cat is wider than that of the human eye, so a cat sees a wider picture.
Alternatively, in the embodiment of the present application, the image display device may perform the image correction process or the image cutting process on the first visual image according to the visual feature type of the first viewing object.
It should be noted that, the processes of the image correction process and the image cutting process may refer to related technologies, and this will not be described in detail in the embodiments of the present application.
For example, when a picture under horse vision needs to be displayed, the first visual image may be cut into left and right partial images, and the partial images obtained by the cutting may be rendered on the display screen.
For example, when a picture under cat vision needs to be displayed, the first visual image may be subjected to image correction processing, and the corrected image may be rendered and displayed, thereby obtaining a visual picture with a wider viewing angle or a larger field of view.
Thus, the image display device can correct or cut the image according to the visual characteristics of the viewing object, so that the picture in an animal's eyes can be presented more truly.
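A rough illustration of the first process for horse vision — blanking a central strip so the frame splits into left and right halves — might look as follows. The blind-strip width and the `None` placeholder for invisible pixels are assumptions, not the patent's method:

```python
def apply_vision_features(image, viewer):
    """Apply a per-viewer first process to a row-major image.

    For a horse, blank out a central vertical strip so the frame splits
    into left and right halves; other viewers get the image unchanged.
    """
    if viewer != "horse":
        return image
    width = len(image[0])
    blind = {width // 2 - 1, width // 2}  # hypothetical central blind spot
    return [[None if c in blind else px for c, px in enumerate(row)]
            for row in image]
```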
Fig. 8 is a flowchart of an image display method applied to a virtual reality VR device, where the image display method may include the following steps S301 and S302:
Step S301: and the VR equipment acquires first color information corresponding to the first object to be rendered through the camera module.
The first color information is color information of a first object to be rendered under a first vision, the first color information is different from color information of the object to be rendered under a second vision, and the first vision and the second vision are different vision display types of the watched object.
Step S302: and the VR equipment renders and displays the first object to be rendered on a display screen of the VR equipment according to the first color information.
Optionally, in an embodiment of the present application, the first vision is a vision of a first viewing object; before the step S301, the image display method provided in the embodiment of the present application further includes the following step S303:
step S303: and the VR equipment receives a first input of a first object identifier in the at least one object identifier by a user under the condition that the display screen displays the at least one object identifier.
Wherein the at least one object identifier corresponds to at least one viewing object, and the first object identifier corresponds to a first viewing object.
In combination with the step S303, the step S301 may include the following steps S301a and S301b:
Step S301a: the VR device is responsive to the first input to obtain at least one color information corresponding to the first viewing object from a first color database.
Step S301b: and the VR equipment determines first color information corresponding to the first object to be rendered from at least one color information.
The first color database includes color information of at least one object to be rendered under at least one vision, the at least one vision corresponds to at least one viewing object, each vision corresponds to one or more objects to be rendered, and each object to be rendered corresponds to one color information.
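The two-stage lookup of steps S301a and S301b — first fetch the color information corresponding to the viewing object, then pick the entry for the object to be rendered — can be sketched with nested dictionaries. The viewers, objects, and colors below are hypothetical sample data, not contents of an actual database:

```python
# Hypothetical first color database keyed by viewing object (vision),
# then by object to be rendered.
FIRST_COLOR_DB = {
    "feline": {"apple": "yellow-green", "banana": "green"},
    "canine": {"apple": "dark yellow"},
}

def first_color_info(viewer, obj):
    """Steps S301a/S301b: fetch the viewer's color table, then pick the
    entry for the object to be rendered (None when absent)."""
    return FIRST_COLOR_DB.get(viewer, {}).get(obj)
```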
Optionally, in an embodiment of the present application, before step S302, the image display method provided in the embodiment of the present application further includes the following steps S304 and S305:
step S304: the VR device displays the first interface on the display screen.
The first interface includes an object contour corresponding to the first object to be rendered.
Step S305: the VR device receives a second input from the user to the first interface.
Wherein the second input is an input of a color filling the outline of the object.
The step S302 may include the following step S302a in combination with the step S304 and the step S305:
step S302a: and the VR equipment responds to the second input, adopts the first color information to carry out color filling processing on the outline of the object so as to render and display the first object to be rendered.
Alternatively, in the embodiment of the present application, the step S302 may include the following steps S302b1 and S302b2:
step S302b1: and rendering and displaying the first object to be rendered on the display screen by the VR equipment to obtain a first visual image.
Step S302b2: and the VR equipment performs first processing on the first visual image according to the visual characteristics of the first viewing object corresponding to the first vision.
Wherein the first process includes at least one of a viewing angle correction process and a cutting process.
Optionally, in an embodiment of the present application, before step S301, the image display method provided in the embodiment of the present application further includes the following step S306:
step S306: the VR equipment receives third color information corresponding to the first object to be rendered, which is acquired by the stylus.
The third color information is color information of the first object to be rendered under the second vision.
In combination with the step S306, the step S301 may include the following step S301c:
step S301c: the VR equipment determines target color information corresponding to the third color information from the second color database based on the third color information, and determines the target color information as the first color information.
The second color database includes at least one color information under the first vision and color information under the second vision corresponding to each color information.
Alternatively, in the embodiment of the present application, the step S306 may include the following step S306a:
step S306a: the VR equipment receives third color information corresponding to the first object to be rendered, which is sent by the sending module of the stylus.
The third color information is obtained by photoelectric conversion, by the color extraction sensor of the stylus, of the light information collected by the light receiving module of the stylus.
It should be noted that, the explanation of this embodiment may be specifically referred to the above embodiment, and will not be repeated here.
The foregoing method embodiments, or various possible implementation manners in the method embodiments, may be executed separately, or may be executed in combination with each other on the premise that no contradiction exists, and may be specifically determined according to actual use requirements, which is not limited by the embodiments of the present application.
According to the image display method provided by the embodiment of the application, the execution subject can be an image display device. In the embodiment of the present application, an image display device is described by taking an example in which the image display device performs an image display method.
Fig. 9 is a schematic diagram of an image display apparatus provided in an embodiment of the present application. As shown in fig. 9, the image display apparatus 800 may include an obtaining module 801 and a display module 802, where: the obtaining module 801 is configured to obtain first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision, and the first color information is different from color information of the first object to be rendered under a second vision; the display module 802 is configured to render and display the first object to be rendered according to the first color information acquired by the obtaining module 801.
Optionally, in an embodiment of the present application, the first vision is a vision of a first viewing object; the device further comprises: a receiving module; the receiving module is configured to receive, before obtaining first color information corresponding to a first object to be rendered, a first input of a first object identifier of the at least one object identifier by a user while displaying the at least one object identifier, where the at least one object identifier corresponds to at least one viewing object, and the first object identifier corresponds to the first viewing object; the acquiring module is specifically configured to acquire at least one color information corresponding to the first viewing object from a first color database in response to the first input received by the receiving module; the acquiring module is specifically configured to determine, from the at least one color information, first color information corresponding to the first object to be rendered; the first color database includes color information of at least one object to be rendered under at least one vision, the at least one vision corresponds to the at least one viewing object, each vision corresponds to one or more objects to be rendered, and each object to be rendered corresponds to one color information.
Optionally, in this embodiment of the present application, the obtaining module is further configured to collect, by a camera, a first image before rendering and displaying the first object to be rendered according to the first color information, and determine one or more objects in the first image as the first object to be rendered; the obtaining module is further configured to obtain second color information corresponding to a second object to be rendered in the first VR image, where the second color information is color information of the second object to be rendered under the first vision;
the device further comprises: a processing module;
the processing module is configured to perform color correction processing on a second object to be rendered in the first VR image according to the second color information acquired by the acquiring module, so as to obtain a second VR image; the display module is specifically configured to fuse the first object to be rendered into the second VR image for rendering and displaying.
Optionally, in this embodiment of the present application, the display module is further configured to display a first interface before rendering and displaying the first object to be rendered according to the first color information, where the first interface includes an object contour corresponding to the first object to be rendered;
The device further comprises: a receiving module;
the receiving module is used for receiving a second input of the user to the first interface, wherein the second input is an input of colors for filling the outline of the object; the display module is specifically configured to perform color filling processing on the object outline by using the first color information in response to the second input received by the receiving module, so as to render and display the first object to be rendered.
Optionally, in this embodiment of the present application, the obtaining module is further configured to receive, before obtaining the first color information corresponding to the first object to be rendered, third color information corresponding to the first object to be rendered collected by a stylus, where the third color information is color information of the first object to be rendered under the second vision; the obtaining module is specifically configured to determine, according to the third color information, target color information corresponding to the third color information from a second color database, and determine the target color information as the first color information; wherein the second color database includes at least one color information under the first vision and color information under the second vision corresponding to each color information.
Optionally, in an embodiment of the present application, the display module is specifically configured to generate, according to the first color information, a first visual image corresponding to a first object to be rendered; and the processing module is used for carrying out image cutting processing on the first visual image according to the visual characteristic type of the first viewing object corresponding to the first visual.
According to the image display device provided by the embodiment of the application, the image display device acquires the first color information corresponding to the first object to be rendered, the first color information is the color information of the first object to be rendered under the first vision, the first color information is different from the color information of the first object to be rendered under the second vision, the first vision and the second vision are different vision display types of watching objects, and the first object to be rendered is rendered and displayed according to the first color information. According to the method, the electronic equipment can acquire the color information of the object to be rendered under the first vision, and render and display the object to be rendered according to the color information under the first vision, so that the color information of the object to be rendered under different vision can be obtained through rendering, a user can see the colors of the object under other vision different from the user's own vision, and the interestingness of object rendering and display is improved.
The image display device in the embodiment of the application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA), or may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image display device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 7, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 10, the embodiment of the present application further provides an electronic device 900, including a processor 901 and a memory 902, where a program or an instruction capable of being executed on the processor 901 is stored in the memory 902, and the program or the instruction when executed by the processor 901 implements each step of the embodiment of the image display method, and the steps can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 110 via a power management system, so that functions such as charging management, discharging management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than illustrated, combine some components, or arrange components differently, which is not described in detail herein.
The processor 110 is configured to obtain first color information corresponding to a first object to be rendered, where the first color information is color information of the first object to be rendered under a first vision, and the first color information is different from color information of the first object to be rendered under a second vision; the display unit 106 is configured to render and display the first object to be rendered according to the first color information acquired by the processor 110.
Optionally, in an embodiment of the present application, the first vision is a vision of a first viewing object; the device further comprises: a user input unit 107; the user input unit 107 is configured to receive, before obtaining first color information corresponding to a first object to be rendered, a first input of a first object identifier of the at least one object identifier, where the at least one object identifier corresponds to at least one viewing object, and where the first object identifier corresponds to the first viewing object, when at least one object identifier is displayed; the processor 110 is specifically configured to obtain, in response to the first input received by the user input unit 107, at least one color information corresponding to the first viewing object from a first color database; the processor 110 is specifically configured to determine, from the at least one color information, first color information corresponding to the first object to be rendered; the first color database includes color information of at least one object to be rendered under at least one vision, the at least one vision corresponds to the at least one viewing object, each vision corresponds to one or more objects to be rendered, and each object to be rendered corresponds to one color information.
Optionally, in this embodiment of the present application, the input unit 104 is configured to collect a first image and determine one or more objects in the first image as a first object to be rendered, before rendering and displaying the first object to be rendered according to the first color information; the processor 110 is further configured to obtain second color information corresponding to a second object to be rendered in the first VR image, where the second color information is color information of the second object to be rendered under the first vision; the processor 110 is configured to perform color correction processing on the second object to be rendered in the first VR image according to the acquired second color information, so as to obtain a second VR image; the display unit 106 is specifically configured to fuse the first object to be rendered into the second VR image for rendering and displaying.
Optionally, in this embodiment of the present application, the display unit 106 is further configured to display a first interface before rendering and displaying the first object to be rendered according to the first color information, where the first interface includes an object contour corresponding to the first object to be rendered; the user input unit 107 is further configured to receive a second input from a user to the first interface, where the second input is an input of a color filling the outline of the object; the display unit 106 is specifically configured to perform color filling processing on the object outline by using the first color information in response to the second input received by the user input unit 107, so as to render and display the first object to be rendered.
Optionally, in this embodiment of the present application, the processor 110 is further configured to receive, before obtaining the first color information corresponding to the first object to be rendered, third color information corresponding to the first object to be rendered collected by a stylus, where the third color information is color information of the first object to be rendered under the second vision; the processor 110 is specifically configured to determine, from a second color database according to the third color information, target color information corresponding to the third color information, and determine the target color information as the first color information; wherein the second color database includes at least one color information under the first vision and color information under the second vision corresponding to each color information.
Optionally, in the embodiment of the present application, the display unit 106 is specifically configured to generate, according to the first color information, a first visual image corresponding to the first object to be rendered; the processor 110 is configured to perform image cutting processing on the first visual image according to the visual feature type of the first viewing object corresponding to the first visual.
According to the electronic device provided by the embodiment of the application, the electronic device acquires the first color information corresponding to the first object to be rendered, the first color information is the color information of the first object to be rendered under the first vision, the first color information is different from the color information of the first object to be rendered under the second vision, the first vision and the second vision are different vision display types of the watching object, and the first object to be rendered is displayed according to the first color information. According to the method, the electronic equipment can acquire the color information of the object to be rendered under the first vision, and render and display the object to be rendered according to the color information under the first vision, so that the color information of the object to be rendered under different vision can be obtained through rendering, a user can see the colors of the object under other vision different from the user's own vision, and the interestingness of object rendering and display is improved.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium on which a program or an instruction is stored. When the program or the instruction is executed by a processor, the processes of the foregoing image display method embodiments are implemented and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the processes of the foregoing image display method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
The embodiments of the present application provide a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the processes of the foregoing image display method embodiments and achieve the same technical effects; details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed substantially simultaneously or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. In light of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (14)

1. An image display method, the method comprising:
acquiring first color information corresponding to a first object to be rendered, wherein the first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are different vision display types of viewing objects;
and rendering and displaying the first object to be rendered according to the first color information.
2. The method of claim 1, wherein the first vision is a vision of a first viewing object; before the first color information corresponding to the first object to be rendered is obtained, the method further includes:
receiving a first input of a user on a first object identifier in at least one object identifier under the condition that the at least one object identifier is displayed, wherein the at least one object identifier corresponds to at least one viewing object, and the first object identifier corresponds to the first viewing object;
the obtaining the first color information corresponding to the first object to be rendered includes:
in response to the first input, acquiring at least one color information corresponding to the first viewing object from a first color database;
determining first color information corresponding to the first object to be rendered from the at least one color information;
the first color database comprises color information of at least one object to be rendered under at least one vision, the at least one vision corresponds to the at least one viewing object, each vision corresponds to one or more objects to be rendered, and each object to be rendered corresponds to one color information.
3. The method of claim 1, wherein prior to rendering the first object to be rendered based on the first color information, the method further comprises:
acquiring a first image through a camera, and determining one or more objects in the first image as a first object to be rendered;
acquiring second color information corresponding to a second object to be rendered in a first VR image, wherein the second color information is the color information of the second object to be rendered under the first vision;
performing color correction processing on a second object to be rendered in the first VR image according to the second color information to obtain a second VR image;
the rendering and displaying the first object to be rendered includes:
and fusing the first object to be rendered into the second VR image for rendering and displaying.
4. The method of claim 1, wherein prior to rendering the first object to be rendered based on the first color information, the method further comprises:
displaying a first interface, wherein the first interface comprises an object outline corresponding to the first object to be rendered;
receiving a second input of a user to the first interface, wherein the second input is an input of a color for filling the outline of the object;
and rendering and displaying the first object to be rendered according to the first color information, including:
and responding to the second input, adopting the first color information to perform color filling processing on the object outline so as to render and display the first object to be rendered.
5. The method of claim 1, wherein rendering the first object to be rendered according to the first color information comprises:
generating a first visual image corresponding to the first object to be rendered according to the first color information;
and performing image cutting processing on the first visual image according to the visual characteristic type of the first viewing object corresponding to the first vision.
6. An image display method applied to a virtual reality VR device, the method comprising:
the VR device acquires, through a camera module, first color information corresponding to a first object to be rendered, wherein the first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are different vision display types of viewing objects;
and the VR device renders and displays the first object to be rendered on a display screen of the VR device according to the first color information.
7. The method of claim 6, wherein the first vision is a vision of a first viewing object; before the VR device obtains the first color information corresponding to the first object to be rendered through the camera module, the method further includes:
the VR device receives a first input of a first object identifier in at least one object identifier when the display screen displays the at least one object identifier, wherein the at least one object identifier corresponds to at least one viewing object, and the first object identifier corresponds to the first viewing object;
the VR device obtaining, through the camera module, first color information corresponding to a first object to be rendered includes:
the VR device responds to the first input and acquires at least one piece of color information corresponding to the first viewing object from a first color database;
the VR device determines first color information corresponding to the first object to be rendered from the at least one piece of color information;
the first color database comprises color information of at least one object to be rendered under at least one vision, the at least one vision corresponds to the at least one viewing object, each vision corresponds to one or more objects to be rendered, and each object to be rendered corresponds to one color information.
8. The method of claim 6, wherein prior to rendering the first object to be rendered for display by the VR device based on the first color information, the method further comprises:
the VR device displays a first interface on the display screen, wherein the first interface includes an object outline corresponding to the first object to be rendered;
the VR device receives a second input from a user to the first interface, the second input being an input to fill a color of the object outline;
the VR device rendering and displaying the first object to be rendered according to the first color information includes:
the VR device responds to the second input and adopts the first color information to perform color filling processing on the object outline, so as to render and display the first object to be rendered.
9. The method of claim 6, wherein the VR device rendering the first object to be rendered according to the first color information comprises:
the VR device renders and displays the first object to be rendered on the display screen to obtain a first visual image;
the VR device performs a first process on the first visual image according to a visual characteristic of the first viewing object corresponding to the first vision, the first process including at least one of a viewing angle correction process and a cutting process.
10. The method of claim 8, wherein before the VR device obtains the first color information corresponding to the first object to be rendered, the method further comprises:
the VR device receives third color information corresponding to the first object to be rendered collected by a stylus, wherein the third color information is color information of the first object to be rendered under the second vision;
the VR device obtaining first color information corresponding to the first object to be rendered includes:
the VR device determines target color information corresponding to the third color information from a second color database according to the third color information, and determines the target color information as the first color information;
wherein the second color database comprises at least one piece of color information under the first vision and color information under the second vision corresponding to each piece of color information.
11. The method of claim 10, wherein the VR device receiving third color information corresponding to the first object to be rendered collected by a stylus comprises:
the VR device receives the third color information corresponding to the first object to be rendered sent by a sending module of the stylus, wherein the third color information is obtained by a color extraction sensor of the stylus performing photoelectric conversion on light information collected by a light receiving module of the stylus.
12. An image display device, the device comprising: the device comprises an acquisition module and a display module, wherein:
the acquisition module is used for acquiring first color information corresponding to a first object to be rendered, wherein the first color information is color information of the first object to be rendered under a first vision, the first color information is different from color information of the first object to be rendered under a second vision, and the first vision and the second vision are vision display types corresponding to different viewing objects;
The display module is used for rendering and displaying the first object to be rendered according to the first color information acquired by the acquisition module.
13. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image display method of any one of claims 1-5.
14. A readable storage medium, wherein a program or instructions is stored on the readable storage medium, which when executed by a processor, implements the steps of the image display method according to any one of claims 1-5.
CN202311385818.5A 2023-10-24 2023-10-24 Image display method, device, electronic equipment and readable storage medium Pending CN117294826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311385818.5A CN117294826A (en) 2023-10-24 2023-10-24 Image display method, device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN117294826A true CN117294826A (en) 2023-12-26

Family

ID=89248057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311385818.5A Pending CN117294826A (en) 2023-10-24 2023-10-24 Image display method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117294826A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination