CN117931034A - Display device and image generation method - Google Patents

Display device and image generation method

Info

Publication number
CN117931034A
Authority
CN
China
Prior art keywords
image
area
pixel
display device
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311694254.3A
Other languages
Chinese (zh)
Inventor
孟昊
刘健
吴汉勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202311694254.3A
Publication of CN117931034A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/12 Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application provides a display device and an image generation method, and relates to the field of display technologies. The display device includes: a user interface configured to acquire a first image, a second image, and an image drawing mode; a controller configured to divide the first image into a first area and a second area according to the color values of the pixel points in the first image, reset the color values of the pixel points in the first area according to the image drawing mode and the second image, and set the transparency of the pixel points in the second area to a preset transparency to obtain a third image, and then generate an image to be displayed according to the third image, where the first area is the area composed of the pixel points of the first image whose color values satisfy a preset condition and the second area is the area composed of the pixel points whose color values do not satisfy the preset condition; and a display configured to display the image to be displayed. The application improves the generation efficiency of artistic images.

Description

Display device and image generation method
Technical Field
The present application relates to the field of display technologies, and in particular to a display device and an image generation method.
Background
With advances in technology and rising living standards, people increasingly pursue spiritual and cultural enrichment, so the visual arts have become very popular in daily life, and displaying artistic images on a display device is one of their important manifestations.
At present, the artistic images displayed by display devices are mainly provided by designers. If their diversity is not enriched by accumulating a large amount of material, users easily suffer visual fatigue; yet generating material manually not only consumes a great deal of time and labor, but also struggles to satisfy users' demand for artistic diversity, which degrades the artistic display effect of the display device and worsens the user experience.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present application provides a display device and an image generation method for improving the generation efficiency of artistic images.
In order to achieve the above object, the technical solution provided by the embodiments of the present application is as follows:
In a first aspect, an embodiment of the present application provides a display device, including:
A user interface configured to acquire a first image, a second image, and an image drawing mode;
A controller configured to divide the first image into a first area and a second area according to color values of each pixel point in the first image, reset the color values of each pixel point in the first area according to the image drawing mode and the second image, set transparency of each pixel point in the second area to be preset transparency, obtain a third image, and generate an image to be displayed according to the third image; the first area is an area formed by pixel points, the color values of which meet preset conditions, in the first image, and the second area is an area formed by pixel points, the color values of which do not meet preset conditions, in the first image;
and a display configured to display the image to be displayed.
In a second aspect, an embodiment of the present application provides an image generation method, including:
Acquiring a first image, a second image and an image drawing mode;
dividing the first image into a first area and a second area according to the color value of each pixel point in the first image; the first area is an area formed by pixel points, the color values of which meet preset conditions, in the first image, and the second area is an area formed by pixel points, the color values of which do not meet preset conditions, in the first image;
Resetting color values of all pixel points in the first area according to the image drawing mode and the second image, and setting the transparency of all pixel points in the second area to be preset transparency to obtain a third image;
Generating an image to be displayed according to the third image;
And displaying the image to be displayed.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image generation method described in the second aspect.
In a fourth aspect, the present application provides a computer program product comprising a computer program which, when run on a computer, causes the computer to implement the image generation method described in the second aspect.
The display device provided by the embodiments of the present application includes a user interface, a controller, and a display. The user interface may acquire a first image, a second image, and an image drawing mode. The controller may, according to the color values of the pixel points in the first image, divide the area composed of pixel points whose color values satisfy a preset condition into a first area and the area composed of pixel points whose color values do not satisfy the preset condition into a second area, reset the color values of the pixel points in the first area according to the image drawing mode and the second image, set the transparency of the pixel points in the second area to a preset transparency to obtain a third image, and generate an image to be displayed according to the third image. The display may then display the image to be displayed. That is, after dividing the first image into the first area and the second area, the display device resets the color values of the pixel points in the first area and sets the transparency of the pixel points in the second area to the preset transparency to obtain the third image, and finally generates and displays the image to be displayed according to the third image. Compared with displaying only artistic images provided by designers, the display device can automatically generate and display images according to the first image, the second image, and the image drawing mode, thereby improving the generation efficiency of artistic images.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic view of an operation scenario of a display device according to some embodiments of the present application;
fig. 2 is a schematic structural diagram of a display device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a control device according to some embodiments of the present application;
fig. 4 is a schematic structural diagram of an operating system of a display device according to some embodiments of the present application;
FIG. 5 is a flowchart illustrating steps of an image generation method according to some embodiments of the present application;
FIG. 6 is a schematic view of a first region and a second region of a first image provided by some embodiments of the present application;
FIG. 7 is a second flowchart illustrating steps of an image generating method according to some embodiments of the present application;
FIG. 8 is a schematic illustration of a third image provided in some embodiments of the application;
FIG. 9 is a schematic diagram of an image to be displayed according to some embodiments of the present application;
FIG. 10 is a second schematic view of a third image according to some embodiments of the present application;
FIG. 11 is a second schematic diagram of an image to be displayed according to some embodiments of the present application;
fig. 12 is a third schematic diagram of an image to be displayed according to some embodiments of the present application.
Detailed Description
In order that the above objects, features, and advantages of the application may be more clearly understood, the application is further described below. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the application.
The display device provided by the embodiments of the present application can take many forms, for example: a liquid crystal television, a laser television, a mobile phone, a personal computer (PC), a projector, a monitor, an electronic whiteboard, an electronic table, a speaker with a display function, a refrigerator, a washing machine, an air conditioner, a smart curtain, a router, a set-top box, and the like.
Fig. 1 is a schematic diagram of an operation scenario of a display device according to an embodiment of the present application. As shown in fig. 1, the operation scene of the display device includes: control device 100, display device 200, smart device 300, and server 400. The user may operate the display device 200 through the control apparatus 100 or the smart device 300. The display device 200 may acquire media assets from the server 400 via the internet.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, or other short-range communication modes, and the remote controller controls the display device 200 wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote controller, voice input, control panel input, etc.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device 200 may also be controlled in ways other than the control apparatus 100 and the smart device 300. For example, a control operation of the user is received by touch, gesture, voice instruction, or the like.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may communicate via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the display device 200 in the embodiment shown in fig. 1. As shown in fig. 2, the display device 200 includes: at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
The controller 250 may include: a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), at least one of a first to nth interface for input/output, a communication bus (Communication Bus), and the like. The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
The display 260 may include a display screen component for presenting pictures and a driving component for driving image display; it receives image signals output by the controller and displays video content, image content, menu control interfaces, and user-operable UI interfaces. The display 260 may be a liquid crystal display, an OLED display, or a projection display.
The communicator 220 includes components for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of: a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip, a near field communication protocol chip, and an infrared receiver. The display device 200 may establish a data transmission link for transmitting and receiving control signals and data signals with the control apparatus 100, the smart device 300, or the server 400 through the communicator 220.
The user interface is used for receiving control signals input by a user through the control apparatus 100 (e.g., an infrared remote controller) or through touch, gestures, voice commands, etc.
The detector 230 is used to collect signals from the external environment or signals of interaction with the outside. For example, the detector 230 may include a light receiver, i.e., a sensor for acquiring ambient light intensity; or an image collector, such as a camera, for collecting external environmental scenes, user attributes, or user interaction gestures; or a sound collector, such as a microphone, for receiving external sounds.
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals by wired or wireless reception and demodulates audio/video signals and EPG data signals from among a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The user may input a user command through a graphical user interface (GUI) displayed on the display 260, and the user interface receives the user input command through the GUI. Alternatively, the user may input a user command through a specific sound or gesture, which the user interface recognizes through the sensors to receive the command. Here, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user; it converts between an internal form of information and a form acceptable to the user. A commonly used form of user interface is the GUI, a graphically displayed interface related to computer operations. It may consist of interface elements such as icons, windows, and controls displayed on the screen of the electronic device, where controls may include visual elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets, etc.
In some embodiments, the user interface may receive a user operation and obtain a first image, a second image, and an image drawing mode according to the user operation; the controller 250 may divide the area composed of pixel points whose color values satisfy a preset condition in the first image into a first area and the area composed of pixel points whose color values do not satisfy the preset condition into a second area, reset the color values of the pixel points in the first area according to the image drawing mode and the second image, set the transparency of the pixel points in the second area to a preset transparency to obtain a third image, and generate an image to be displayed according to the third image; the display 260 may display the image to be displayed.
In some embodiments, the controller 250 may implement resetting the color values of the respective pixels within the first region according to the image rendering mode and the second image by:
Determining pixel points in the second image corresponding to each pixel point in the first region under the condition that the image drawing mode is a first image drawing mode; resetting the pixel value of each pixel point in the first area to the pixel value of the pixel point in the second image corresponding to each pixel point in the first area.
In some embodiments, the user interface may further obtain the first location information according to a control operation input by the user; the controller 250 may determine the pixel points in the second image corresponding to the respective pixel points in the first area by:
Normalizing the pixel coordinates of the first image to obtain normalized coordinates of each pixel point in the second area; normalizing the pixel coordinates of the second image to obtain normalized coordinates of each pixel point in the second image; and acquiring, for each pixel point in the second area, the corresponding pixel point in the second image according to the first position information, the size ratio of the first image to the second image, the normalized coordinates of each pixel point in the second area, and the normalized coordinates of each pixel point in the second image.
In some embodiments, the controller 250 may acquire, for each pixel point in the second area, the corresponding pixel point in the second image according to the first position information, the size ratio of the first image to the second image, the normalized coordinates of each pixel point in the second area, and the normalized coordinates of each pixel point in the second image, as follows: acquiring a first coordinate value corresponding to each pixel point in the second area according to the normalized coordinates of each pixel point in the second area and the size ratio of the first image to the second image; acquiring a second coordinate value corresponding to each pixel point in the second area according to the first coordinate value corresponding to each pixel point in the second area and the first position information; and acquiring, for each pixel point in the second area, the corresponding pixel point in the second image according to the second coordinate values corresponding to the pixel points in the second area and the normalized coordinates of the pixel points in the second image.
In some embodiments, in the case that the image drawing mode is the first image drawing mode, the user interface may further acquire a canvas ground color and second position information according to a control operation input by the user; the controller 250 may then generate the image to be displayed according to the third image as follows: generating a target canvas according to the canvas ground color, and superimposing and drawing the third image on the target canvas according to the second position information to acquire the image to be displayed.
In some embodiments, the controller 250 may implement resetting the color values of the respective pixels within the first region according to the image rendering mode and the second image by:
And under the condition that the image drawing mode is a second image drawing mode, acquiring a target color value according to the color value of each pixel point in the second image, and resetting the pixel value of each pixel point in the first area to the target color value.
In some embodiments, the user interface may further obtain third position information according to a control operation input by the user, and the controller 250 may generate the image to be displayed according to the third image by superimposing and drawing the third image on the second image according to the third position information to acquire the image to be displayed.
In some embodiments, the user interface may further obtain a scaling ratio according to a control operation input by the user, and the controller 250 may scale the third image according to the scaling ratio before generating the image to be displayed according to the third image.
In some embodiments, the controller 250 may divide the first image into a first region and a second region according to color values of respective pixels in the first image by: acquiring color values of all pixel points in the first image; traversing the color values of all pixel points in the first image, dividing the region corresponding to the pixel points with the color values smaller than the threshold color value in the first image into the first region, and dividing the region corresponding to the pixel points with the color values larger than or equal to the threshold color value in the first image into the second region.
Fig. 3 is a block diagram schematically showing the configuration of the control apparatus 100 in the embodiment shown in fig. 1. As shown in fig. 3, the control device 100 includes a controller 110, a memory 120, a communication interface 130, a user input/output interface 140, and a power supply. The control apparatus 100 may receive an input operation instruction of a user, and convert the operation instruction into an instruction recognizable and responsive to the display device 200, and may perform an interaction between the user and the display device 200.
Referring to FIG. 4, in some embodiments, the operating system of the display device may be divided into four layers, from top to bottom, an application layer (application layer) 41, an application framework layer (Application Framework layer) 42, an android runtime (Android runtime) and a system library layer (system runtime layer) 43, and a kernel layer 44, respectively.
At least one application program runs in the application layer 41. These applications may be a Window program of the operating system, a system setting program, a clock program, or the like, or may be applications developed by third-party developers. In some embodiments, the application layer may create a bitmap (Bitmap) corresponding to the canvas, which occupies system memory, and obtain image generation information such as the first image, the second image, the image drawing mode, the scaling ratio, and the position information. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer 42 provides an application programming interface (API) and a programming framework for applications. The application framework layer includes a number of predefined functions and acts as a processing center that decides how the applications in the application layer act. Through the API, an application can access the resources in the system and obtain system services during execution. The framework layer 42 includes managers (Managers), a content provider (Content Provider), a view system (View System), etc., where the managers include at least one of the following modules: an activity manager (Activity Manager) for interacting with all activities running in the system; a location manager (Location Manager) for providing system services or applications with access to the system location services; a package manager (Package Manager) for retrieving various information about the application packages currently installed on the device; a notification manager (Notification Manager) for controlling the display and clearing of notification messages; and a window manager (Window Manager) for managing icons, windows, toolbars, wallpaper, and desktop components on the user interface. In some embodiments, the activity manager manages the lifecycle of the individual applications as well as the usual navigation and back functions, such as controlling the exit, opening, and fallback of applications. The window manager manages all window programs, such as obtaining the size of the display screen, judging whether there is a status bar, locking the screen, capturing the screen, and controlling changes of the display window (for example, shrinking the display window, dithering display, distorting display, etc.).
The system runtime layer 43 provides support for the upper framework layer: when the framework layer is used, the Android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions that the framework layer needs to implement.
The kernel layer 44 is a layer between hardware and software. The kernel layer contains at least one of the following drivers: GPU shader, GPU rendering driver, audio driver, display driver, bluetooth driver, camera driver, WIFI driver, USB driver, HDMI driver, sensor driver (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), power driver, etc. In some embodiments, the GPU shader may perform color calculation on each pixel point in the first area based on the image drawing mode and the second image to obtain a color value of each pixel point in the first area, and the GPU drawing driver may obtain the color value of each pixel point in the first area calculated by the GPU shader, and draw the color value into a memory occupied by a bitmap corresponding to the canvas to generate an image to be displayed; the display driver may drive the display to display an image to be displayed.
An embodiment of the present application further provides an image generation method which, referring to fig. 5, comprises the following steps S51 to S55:
S51, acquiring a first image, a second image and an image drawing mode.
In some embodiments, acquiring the first image and the second image comprises: reading the first image and the second image from a server or a designated storage space according to a user operation.
S52, dividing the first image into a first area and a second area according to the color value of each pixel point in the first image.
The first area is an area formed by pixel points, the color values of which meet preset conditions, in the first image, and the second area is an area formed by pixel points, the color values of which do not meet preset conditions, in the first image.
For example, referring to fig. 6, the first image 600 is a brush-stroke image, and the preset condition is that the gray value is greater than a preset gray value. According to the color values of the pixel points in the first image 600, the image can be divided into a first region 601, whose color values are greater than the preset gray value, and a second region 602, whose color values are less than or equal to the preset gray value.
S53, resetting color values of all pixel points in the first area according to the image drawing mode and the second image, and setting the transparency of all pixel points in the second area to be preset transparency to obtain a third image.
In some embodiments, the preset transparency may be 100%. That is, the second area is set to be completely transparent.
S54, generating an image to be displayed according to the third image.
S55, displaying the image to be displayed.
In the image generation method provided by the embodiments of the present application, a first image, a second image, and an image drawing mode are first acquired; the first image is then divided into a first area and a second area according to the color values of its pixel points; the color values of the pixel points in the first area are reset according to the image drawing mode and the second image, and the transparency of the pixel points in the second area is set to a preset transparency to obtain a third image; finally, an image to be displayed is generated according to the third image and displayed. Compared with displaying only artistic images provided by designers, the method automatically generates and displays the image to be displayed according to the first image, the second image, and the image drawing mode, thereby improving the generation efficiency of artistic images.
As an extension and refinement of the above embodiment, an embodiment of the present application provides another image generation method, referring to fig. 7, including the steps of:
s701, acquiring a first image, a second image, and an image drawing mode.
S702, obtaining color values of all pixel points in the first image.
S703, traversing color values of all pixel points in the first image, dividing a region corresponding to the pixel points with the color value smaller than a threshold color value in the first image into the first region, and dividing a region corresponding to the pixel points with the color value larger than or equal to the threshold color value in the first image into the second region.
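For illustration only, steps S702 and S703 might look like the following Python sketch, which divides a grayscale first image into the two areas by comparing each pixel against a threshold color value. The function name, the default threshold of 128, and the use of the numpy library are assumptions made for this example, not part of the patent.

    import numpy as np

    def divide_regions(first_image: np.ndarray, threshold: int = 128):
        """first_image: H x W array of gray values in [0, 255]."""
        first_area_mask = first_image < threshold    # color value below the threshold
        second_area_mask = ~first_area_mask          # color value >= the threshold
        return first_area_mask, second_area_mask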
S704, determining that the image drawing mode is the first image drawing mode or the second image drawing mode.
In some embodiments, the first image drawing mode may be represented by a mode identification of 0 and the second image drawing mode by a mode identification of 1. Determining whether the image drawing mode is the first or the second image drawing mode then includes: determining whether the mode identification is less than 0.5; if it is less than 0.5, the image drawing mode is the first image drawing mode, and if it is greater than 0.5, the image drawing mode is the second image drawing mode.
In step S704, if the image drawing mode is the first image drawing mode, the following steps S705 to S709 are executed:
s705, determining pixel points in the second image corresponding to the pixel points in the first area.
In some embodiments, determining the pixel point in the second image corresponding to each pixel point in the first region includes the following steps a to d:
And a step a of carrying out normalization processing on pixel coordinates of the first image so as to obtain normalized coordinates of each pixel point in the second area.
When the size (resolution) of the first image is expressed as M×N, the normalized coordinates of a pixel point in the second area may be calculated as follows:
P(x, y) = P(u/M, v/N)
where P(x, y) denotes the normalized coordinates of the pixel point P in the second area, and P(u, v) denotes the pixel coordinates of the pixel point P in the second area.
And b, carrying out normalization processing on the pixel coordinates of the second image to obtain the normalized coordinates of each pixel point in the second image.
The normalized coordinates of each pixel point in the second image are obtained in the same way as the normalized coordinates of each pixel point in the second area; to avoid redundancy, the details are not repeated here.
And c, acquiring the first position information.
And d, acquiring, for each pixel point in the second area, the corresponding pixel point in the second image according to the first position information, the size ratio of the first image to the second image, the normalized coordinates of each pixel point in the second area, and the normalized coordinates of each pixel point in the second image.
In some embodiments, acquiring, for each pixel point in the second area, the corresponding pixel point in the second image according to the first position information, the size ratio of the first image to the second image, the normalized coordinates of each pixel point in the second area, and the normalized coordinates of each pixel point in the second image includes the following steps d1 to d3:
And d1, acquiring a first coordinate value corresponding to each pixel point in the second area according to the normalized coordinates of each pixel point in the second area and the size ratio of the first image to the second image.
For example, if the size of the first image is 540×960 and the size of the second image is 1080×1920, the size ratio of the first image to the second image is 1:4 (540×960 is one quarter of 1080×1920 by area). If the normalized coordinates of a certain pixel point in the second area are (0.5, 0.5), the first coordinate value corresponding to that pixel point is (0.125, 0.125).
And d2, acquiring second coordinate values corresponding to all the pixel points in the second area according to the first coordinate values corresponding to all the pixel points in the second area and the first position information.
In the embodiment of the present application, the first position information represents the translation applied when the texture information in the second image is mapped to the second area. For example, if the first coordinate value corresponding to a pixel point is (0.125, 0.125) and the first position information is (0.2, 0.3), the second coordinate value corresponding to that pixel point is (0.325, 0.425).
Step d3, acquiring, for each pixel point in the second area, the corresponding pixel point in the second image according to the second coordinate values corresponding to the pixel points in the second area and the normalized coordinates of the pixel points in the second image.
Continuing the example, the pixel point in the second image corresponding to the pixel point with normalized coordinates (0.5, 0.5) in the second area is the pixel point with normalized coordinates (0.325, 0.425).
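A minimal sketch of steps a through d3, consistent with the worked example above; the function name, the argument layout, and the treatment of the size ratio as a single scalar (0.25 for 1:4) are illustrative assumptions.

    def map_to_second_image(u, v, first_size, second_size, size_ratio, offset):
        """Map pixel (u, v) of the first image to a pixel of the second image.
        first_size/second_size: (width, height); size_ratio: e.g. 0.25 for 1:4;
        offset: the first position information, in normalized units."""
        M, N = first_size
        x, y = u / M, v / N                         # step a: normalized coordinates
        x1, y1 = x * size_ratio, y * size_ratio     # step d1: first coordinate value
        x2, y2 = x1 + offset[0], y1 + offset[1]     # step d2: second coordinate value
        W, H = second_size
        return int(x2 * W), int(y2 * H)             # step d3: pixel to sample

With first_size=(540, 960), second_size=(1080, 1920), size_ratio=0.25, and offset=(0.2, 0.3), the pixel at normalized coordinates (0.5, 0.5) is assigned second-image normalized coordinates (0.325, 0.425), i.e. pixel (351, 816), matching the example.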
S706, resetting the pixel value of each pixel point in the first area to the pixel value of the pixel point in the second image corresponding to each pixel point in the first area, and setting the transparency of each pixel point in the second area to be a preset transparency to obtain a third image.
Illustratively, referring to fig. 8, the pixel values of the pixel points in the first area 61 of the first image 600 are reset to the pixel values of their corresponding pixel points in the second image 700, and the transparency of the pixel points in the second area 62 of the first image 600 is set to 100%, to obtain the third image 800. The texture content corresponding to the first area 61 of the third image 800 is thus the texture content of the second image 700, and the transparency of the second area 62 of the third image 800 is 100% (no content).
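Steps S705 and S706 together could be sketched as follows; the precomputed per-pixel source arrays and the numpy usage are assumptions made to keep the example short.

    import numpy as np

    def build_third_image(first_area_mask, src_rows, src_cols, second_image_rgba):
        """first_area_mask: H x W bool array marking the first area.
        src_rows/src_cols: H x W arrays giving, for each pixel of the first
        image, the row/column of its corresponding second-image pixel."""
        h, w = first_area_mask.shape
        third_image = np.zeros((h, w, 4), dtype=np.uint8)   # alpha 0: second area transparent
        sampled = second_image_rgba[src_rows, src_cols]     # sample the second image
        third_image[first_area_mask] = sampled[first_area_mask]
        return third_image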
S707, obtaining the canvas ground color and the second position information.
And S708, generating a target canvas according to the canvas ground color.
And S709, superimposing and drawing the third image on the target canvas according to the second position information to acquire the image to be displayed.
For example, referring to fig. 9, taking a black canvas ground color as an example, after the canvas ground color and the second position information are obtained, a target canvas 91 is first generated according to the canvas ground color, and the third image is then superimposed and drawn on the target canvas 91 according to the second position information to obtain the image 92 to be displayed.
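A possible sketch of steps S708 and S709 using the Pillow library (an assumption for illustration; in the patent the drawing is performed through the GPU pipeline described earlier):

    from PIL import Image

    def compose_on_canvas(third_image, canvas_color, canvas_size, position):
        target_canvas = Image.new("RGBA", canvas_size, canvas_color)  # S708
        # S709: the alpha channel of the third image serves as the paste mask,
        # so the fully transparent second area leaves the ground color visible
        target_canvas.paste(third_image, position, mask=third_image)
        return target_canvas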
In step S704, if the image drawing mode is the second image drawing mode, the following steps S710 to S713 are executed:
s710, obtaining a target color value according to the color value of each pixel point in the second image.
In some embodiments, obtaining the target color value according to the color values of the pixel points in the second image includes: calculating the average of the color values of all pixel points in the second image to obtain the target color value.
In some embodiments, obtaining the target color value according to the color values of the pixel points in the second image includes: randomly selecting a pixel point in the second image and determining its color value as the target color value.
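Both options for step S710 might look like this in numpy (function names assumed):

    import numpy as np

    def target_color_mean(second_image):
        """second_image: H x W x 3 array; per-channel average color."""
        return second_image.reshape(-1, 3).mean(axis=0)

    def target_color_random(second_image, rng=np.random.default_rng()):
        """Color of one randomly selected pixel."""
        h, w = second_image.shape[:2]
        return second_image[rng.integers(h), rng.integers(w)]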
S711, resetting pixel values of all pixel points in the first area to the target color value, and setting transparency of all pixel points in the second area to be preset transparency to obtain a third image.
Illustratively, referring to fig. 10, the pixel values of the pixel points in the first area 61 of the first image 600 are reset to the target color value, and the transparency of the pixel points in the second area 62 of the first image 600 is set to 100%, to obtain the third image 1000. The color value of each pixel point in the first area 61 of the third image 1000 is the target color, and the transparency of the second area 62 of the third image 1000 is 100% (no content).
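Step S711 then reduces to a masked fill; a sketch under the same assumptions as the earlier examples:

    import numpy as np

    def build_flat_third_image(first_area_mask, target_color):
        h, w = first_area_mask.shape
        third_image = np.zeros((h, w, 4), dtype=np.uint8)   # alpha 0 everywhere
        r, g, b = target_color
        third_image[first_area_mask] = (r, g, b, 255)       # opaque target color
        return third_image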
S712, acquiring third position information.
S713, superimposing and drawing the third image on the second image according to the third position information to acquire the image to be displayed.
For example, referring to fig. 11, the image 1100 to be displayed may be obtained by overlaying and drawing the third image 1000 on the second image 700 according to the third position information.
After the image to be displayed is acquired through the above steps S705 to S709 or the image to be displayed is acquired through the above steps S710 to S713, the following step S714 is performed:
S714, displaying the image to be displayed.
In some embodiments, on the basis of the embodiment shown in fig. 7, the image generating method provided by the embodiment of the present application further includes: obtaining a scaling ratio; and scaling the third image according to the scaling ratio before generating the image to be displayed according to the third image.
For example, referring to fig. 12, if a scaling ratio of 1:4 is obtained on the basis of the example shown in fig. 11, the third image is first enlarged by a factor of 2, and the enlarged third image 121 is then superimposed and drawn on the second image 700 according to the third position information to obtain the image 122 to be displayed.
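The scaling step could be sketched with Pillow as follows; the factor of 2 corresponds to the 1:4 scaling ratio of the example above, and all names are illustrative.

    from PIL import Image

    def scale_and_compose(third_image, second_image, factor, position):
        w, h = third_image.size
        enlarged = third_image.resize((int(w * factor), int(h * factor)))
        result = second_image.convert("RGBA")
        result.paste(enlarged, position, mask=enlarged)   # enlarged must be RGBA
        return result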
The embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process performed by the above image generation method and achieves the same technical effects. To avoid repetition, the details are not repeated here.
The computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
The present application provides a computer program product including a computer program which, when run on a computer, causes the computer to implement the above image generation method and achieve the same technical effects. To avoid repetition, the details are not repeated here.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. The above discussion in some examples is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
A user interface configured to acquire a first image, a second image, and an image drawing mode;
A controller configured to divide the first image into a first area and a second area according to color values of each pixel point in the first image, reset the color values of each pixel point in the first area according to the image drawing mode and the second image, set transparency of each pixel point in the second area to be preset transparency, obtain a third image, and generate an image to be displayed according to the third image; the first area is an area formed by pixel points, the color values of which meet preset conditions, in the first image, and the second area is an area formed by pixel points, the color values of which do not meet preset conditions, in the first image;
and a display configured to display the image to be displayed.
2. The display device of claim 1, wherein the controller is specifically configured to:
Determining pixel points in the second image corresponding to each pixel point in the first region under the condition that the image drawing mode is a first image drawing mode;
resetting the pixel value of each pixel point in the first area to the pixel value of the pixel point in the second image corresponding to each pixel point in the first area.
3. The display device of claim 2, wherein:
The user interface is further configured to obtain first location information;
The controller is specifically configured to: normalize the pixel coordinates of the first image to obtain normalized coordinates of each pixel point in the second area; normalize the pixel coordinates of the second image to obtain normalized coordinates of each pixel point in the second image; and acquire, for each pixel point in the second area, the corresponding pixel point in the second image according to the first position information, the size ratio of the first image to the second image, the normalized coordinates of each pixel point in the second area, and the normalized coordinates of each pixel point in the second image.
4. The display device of claim 3, wherein:
The controller is specifically configured to: acquire a first coordinate value corresponding to each pixel point in the second area according to the normalized coordinates of each pixel point in the second area and the size ratio of the first image to the second image; acquire a second coordinate value corresponding to each pixel point in the second area according to the first coordinate value corresponding to each pixel point in the second area and the first position information; and acquire, for each pixel point in the second area, the corresponding pixel point in the second image according to the second coordinate values corresponding to the pixel points in the second area and the normalized coordinates of the pixel points in the second image.
5. The display device of claim 4, wherein:
The user interface is further configured to acquire canvas ground color and second position information;
the controller is specifically configured to generate a target canvas according to the canvas ground color, and superimpose and draw the third image on the target canvas according to the second position information so as to acquire the image to be displayed.
6. The display device of claim 1, wherein the controller is specifically configured to:
And under the condition that the image drawing mode is a second image drawing mode, acquiring a target color value according to the color value of each pixel point in the second image, and resetting the pixel value of each pixel point in the first area to the target color value.
7. The display device of claim 6, wherein:
The user interface is further configured to obtain third location information;
the controller is specifically configured to superimpose and draw the third image on the second image according to the third position information so as to acquire the image to be displayed.
8. The display device of claim 1, wherein:
The user interface is further configured to obtain a scaling ratio;
The controller is further configured to scale the third image according to the scaling ratio before generating the image to be displayed according to the third image.
9. The display device of claim 1, wherein the controller is specifically configured to:
Acquiring color values of all pixel points in the first image;
Traversing the color values of all pixel points in the first image, dividing the region corresponding to the pixel points with the color values smaller than the threshold color value in the first image into the first region, and dividing the region corresponding to the pixel points with the color values larger than or equal to the threshold color value in the first image into the second region.
10. An image generation method, comprising:
Acquiring a first image, a second image and an image drawing mode;
dividing the first image into a first area and a second area according to the color value of each pixel point in the first image; the first area is an area formed by pixel points, the color values of which meet preset conditions, in the first image, and the second area is an area formed by pixel points, the color values of which do not meet preset conditions, in the first image;
Resetting color values of all pixel points in the first area according to the image drawing mode and the second image, and setting the transparency of all pixel points in the second area to be preset transparency to obtain a third image;
Generating an image to be displayed according to the third image;
And displaying the image to be displayed.
CN202311694254.3A 2023-12-11 2023-12-11 Display device and image generation method Pending CN117931034A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311694254.3A CN117931034A (en) 2023-12-11 2023-12-11 Display device and image generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311694254.3A CN117931034A (en) 2023-12-11 2023-12-11 Display device and image generation method

Publications (1)

Publication Number Publication Date
CN117931034A 2024-04-26

Family

ID=90758319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311694254.3A Pending CN117931034A (en) 2023-12-11 2023-12-11 Display device and image generation method

Country Status (1)

Country Link
CN (1) CN117931034A (en)

Similar Documents

Publication Publication Date Title
US9489165B2 (en) System and method for virtual displays
CN113810746B (en) Display equipment and picture sharing method
CN112584211B (en) Display equipment
CN115129214A (en) Display device and color filling method
CN113825002A (en) Display device and focus control method
CN112580625A (en) Display device and image content identification method
CN112419988B (en) Backlight adjusting method and display device
CN112926420B (en) Display device and menu character recognition method
CN117931034A (en) Display device and image generation method
CN116801027A (en) Display device and screen projection method
CN112235621B (en) Display method and display equipment for visual area
CN114760513A (en) Display device and cursor positioning method
CN115083343A (en) Display apparatus and resolution adjusting method
CN114793298A (en) Display device and menu display method
CN114296841A (en) Display device and AI enhanced display method
CN115396717B (en) Display device and display image quality adjusting method
CN112416214B (en) Display equipment
CN117807337A (en) Display equipment and browser page resource rendering method
CN117765847A (en) Display equipment and closed graph generation method
CN117812378A (en) Display device and interface display method
CN116934770A (en) Display method and display device for hand image
CN118175367A (en) Display equipment and content display method
WO2022120079A1 (en) Display apparatus
CN116935432A (en) Gesture recognition method and display device
CN116347166A (en) Display device and window display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination