CN114090141A - Interface display method and device, electronic equipment and readable storage medium - Google Patents

Interface display method and device, electronic equipment and readable storage medium Download PDF

Info

Publication number
CN114090141A
CN114090141A
Authority
CN
China
Prior art keywords
view
superposed
views
image
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010866786.0A
Other languages
Chinese (zh)
Inventor
许启恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd filed Critical Oneplus Technology Shenzhen Co Ltd
Priority to CN202010866786.0A priority Critical patent/CN114090141A/en
Publication of CN114090141A publication Critical patent/CN114090141A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units, using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application provides an interface display method and apparatus, an electronic device, and a readable storage medium, relating to the field of computer technology. The method includes the following steps: determining a target interface background image according to a received selection instruction; determining views to be superimposed from among a plurality of views used to generate an interface to be displayed; obtaining, from the target interface background image, a first partial image corresponding to each view to be superimposed according to the display position information of that view; superimposing each view to be superimposed with its corresponding first partial image at a first preset transparency to obtain a superimposed view; and generating and displaying the interface to be displayed according to the superimposed views. The interface can thus present the content of the target interface background image while avoiding the poor display effect caused by setting the views other than the bottommost view to no background or a transparent background, so the display effect of the interface is ensured.

Description

Interface display method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an interface display method and apparatus, an electronic device, and a readable storage medium.
Background
At present, after an interface background image is determined, it is generally set directly in the bottommost view of the interface, and the bottommost view is then composited with the other views into the interface to be displayed. To improve the consistency of the background presented by the interface, the views other than the bottommost view are generally set to no background or a transparent background, so that most of the interface background image in the bottommost view shows through them. However, this approach can degrade the display effect of at least some of the views and, in turn, of the whole interface. For example, a view set to no background or a transparent background may itself display poorly; or, where two views both contain text and the display positions of the text overlap, the text of the two views overlaps and becomes unreadable.
Disclosure of Invention
The application provides an interface display method and apparatus, an electronic device, and a readable storage medium, which avoid the poor interface display effect caused by setting the views other than the bottommost view to no background or a transparent background, thereby ensuring the display effect of each view and, in turn, of the interface.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides an interface display method applied to an electronic device, where the method includes:
determining a target interface background image according to the received selection instruction;
determining views to be superposed from a plurality of views for generating an interface to be displayed;
according to the display position information of each view to be superposed, obtaining a first partial image corresponding to each view to be superposed from the target interface background image;
for each view to be superposed, superposing the view to be superposed and the corresponding first local image under the first preset transparency to obtain a superposed view;
and generating and displaying the interface to be displayed according to the view after the superposition processing.
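The five steps above can be sketched in Python. Everything here (the `View` record, flat RGB pixel lists, the fixed `alpha`) is an assumed illustration of the claimed flow, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class View:
    x: int; y: int; w: int; h: int    # display position information
    pixels: list                       # flat row-major list of (r, g, b) tuples
    to_superimpose: bool = False

def crop(background, bg_w, x, y, w, h):
    """Step S130: first partial image at the view's display position."""
    return [background[(y + r) * bg_w + (x + c)] for r in range(h) for c in range(w)]

def blend(view_px, partial, alpha):
    """Step S140: superimpose the partial image at preset transparency alpha
    (alpha is taken here as the opacity of the partial image)."""
    return [tuple(int(p * alpha + v * (1 - alpha)) for p, v in zip(pp, vp))
            for pp, vp in zip(partial, view_px)]

def render(background, bg_w, views, alpha=0.5):
    """Steps S120 to S150: process the marked views, then composite them
    over the background, bottom to top."""
    out = list(background)
    for view in views:
        px = view.pixels
        if view.to_superimpose:
            px = blend(px, crop(background, bg_w, view.x, view.y, view.w, view.h), alpha)
        for r in range(view.h):
            for c in range(view.w):
                out[(view.y + r) * bg_w + (view.x + c)] = px[r * view.w + c]
    return out
```

With a grey background and a black view at half transparency, the composited pixel lands midway between the two, while untouched background pixels pass through unchanged.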
In an optional embodiment, the superimposing the view to be superimposed and the corresponding first local image under the first preset transparency to obtain a view after the superimposing process includes:
if the view to be superposed comprises a view background, superposing a first local image under a first preset transparency on the view background to obtain a superposed view;
and if the view to be superposed does not comprise a view background, superposing the first local image under the first preset transparency on the view to be superposed to obtain a superposed view.
In an optional embodiment, before performing, for each view to be superimposed, superimposition processing on the view to be superimposed and a first local image under a corresponding first preset transparency to obtain a view after the superimposition processing, the method further includes:
obtaining the hierarchical information of each superposed view according to the parent-child relationship among the views in the plurality of views;
and setting a first preset transparency of the first local image corresponding to each view to be superposed according to the level information of each view to be superposed, wherein different levels correspond to different first preset transparencies.
In an alternative embodiment, the first preset transparency of the first partial image corresponding to each view to be superimposed is inversely proportional to the level value of that view, where the lowest-level view has the smallest level value.
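One way to realize this inverse-proportional mapping. Reading "transparency" as the opacity of the first partial image (1 = fully opaque) and starting level values at 1 for the bottommost view are both interpretations; the constant `k` is an assumed design parameter:

```python
# Assumed reading: deeper (smaller level value) views show the background
# image more strongly, so their partial image gets a larger opacity.
def preset_transparency(level_value, k=0.9):
    """Inversely proportional to the level value; k is an assumed constant."""
    return min(1.0, k / level_value)

def transparencies_by_level(level_values):
    """Different levels correspond to different first preset transparencies."""
    return {lv: preset_transparency(lv) for lv in set(level_values)}
```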
In an optional embodiment, the determining a view to be superimposed from among a plurality of views used for generating an interface to be displayed includes:
determining views of display areas including other views in the display areas from the plurality of views as views to be superposed according to the level information and the display position information of each view in the plurality of views, wherein the level information of each view is determined by parent-child relationship among the views in the plurality of views;
and determining views to be superposed from the determined views suspected to be superposed.
In an optional embodiment, before the determining of views to be superimposed from the determined views suspected to be superimposed, the determining of views to be superimposed from the plurality of views for generating an interface to be displayed further includes:
and taking the view comprising the view background as the suspected view to be superposed.
In an optional embodiment, the determining a view to be superimposed from the determined views suspected to be superimposed includes:
judging whether each suspected view to be superposed is an image with rich colors or not; if not, the suspected view to be superposed is taken as the view to be superposed.
In an optional embodiment, the determining whether the suspected view to be superimposed is a colorful image includes:
determining a target image from the suspected view to be superposed, and calculating a hue distribution interval and a saturation distribution interval of the target image, wherein if the suspected view to be superposed comprises a view background, the target image is the view background; if the suspected view to be superposed does not comprise a view background, the target image is the suspected view to be superposed;
judging whether the hue distribution interval is larger than a preset hue distribution interval or not and whether the saturation distribution interval is larger than a preset saturation distribution interval or not;
if the hue distribution interval is larger than the preset hue distribution interval or the saturation distribution interval is larger than the preset saturation distribution interval, determining that the suspected view to be superposed is an image with rich colors;
and if the hue distribution interval is smaller than or equal to the preset hue distribution interval and the saturation distribution interval is smaller than or equal to the preset saturation distribution interval, judging that the suspected view to be superposed is not an image with rich colors.
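A sketch of this either/or decision. The preset hue and saturation distribution intervals below are assumed values, since the patent does not specify them:

```python
PRESET_HUE_INTERVAL = 90.0   # degrees, assumed threshold
PRESET_SAT_INTERVAL = 0.5    # saturation in [0, 1], assumed threshold

def is_colorful(hue_interval, sat_interval,
                preset_hue=PRESET_HUE_INTERVAL, preset_sat=PRESET_SAT_INTERVAL):
    """Colorful if EITHER distribution interval exceeds its preset value;
    not colorful only when BOTH are less than or equal to their presets."""
    return hue_interval > preset_hue or sat_interval > preset_sat
```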
In an optional embodiment, the calculating the hue distribution interval and the saturation distribution interval of the target image includes:
obtaining color values of all effective sampling points in the target image, wherein the transparency of the effective sampling points is greater than a second preset transparency;
and calculating to obtain the hue distribution interval and the saturation distribution interval according to the hue and the saturation corresponding to the color value of each effective sampling point.
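Assuming RGBA pixels with an alpha channel in [0, 1], the valid-sampling-point filtering and interval computation might look like this, using the standard-library `colorsys` conversion; the second preset transparency (`alpha_threshold`) is an assumed value:

```python
import colorsys

def distribution_intervals(pixels, alpha_threshold=0.1):
    """Keep only valid sampling points (alpha above the second preset
    transparency), then return (hue range in degrees, saturation range)."""
    hues, sats = [], []
    for r, g, b, a in pixels:
        if a > alpha_threshold:  # valid sampling point
            h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            hues.append(h * 360)
            sats.append(s)
    if not hues:
        return 0.0, 0.0
    return max(hues) - min(hues), max(sats) - min(sats)
```

An opaque red and an opaque green point span 120 degrees of hue; a nearly transparent point is excluded from the sample.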
In an optional embodiment, before the calculating the hue distribution interval and the saturation distribution interval of the target image, the determining whether the view suspected to be superimposed is a colorful image further includes:
judging whether the target image is a pure color image or not;
if yes, executing the step of judging that the suspected view to be superposed is not an image with rich colors;
if not, executing the step of calculating the hue distribution interval and the saturation distribution interval of the target image.
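A minimal sketch of the pure-color pre-check; exact pixel equality is an assumed criterion (a real implementation might tolerate small variations):

```python
def is_solid_color(pixels):
    """A pure color image contains at most one distinct pixel value."""
    return len(set(pixels)) <= 1

def needs_interval_check(pixels):
    """Pure color images are judged not colorful immediately; only the
    rest proceed to the hue/saturation interval computation."""
    return not is_solid_color(pixels)
```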
In an optional embodiment, before generating and displaying the interface to be displayed according to the view after the overlay processing, the method further includes:
obtaining a second partial image corresponding to each view comprising characters from the target interface background image according to the display position information of each view comprising characters in the plurality of views;
aiming at each view comprising characters, adjusting the brightness of the characters in the view according to the brightness of a second local image corresponding to the view to obtain a view with adjusted contrast, wherein the adjustment is used for increasing the contrast between the characters and the second local image;
the generating and displaying the interface to be displayed according to the view after the superposition processing comprises the following steps:
and synthesizing the view subjected to superposition processing, the view subjected to contrast adjustment and the unprocessed view into the interface to be displayed according to the display position information and the hierarchy information of each view in the plurality of views, and displaying, wherein the hierarchy information of each view is determined by parent-child relationship among the views in the plurality of views.
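The patent leaves the exact brightness adjustment open. This hedged sketch uses ITU-R BT.709 luminance weights and an assumed 0.5 threshold, pushing text toward black over bright regions of the second partial image and toward white over dark ones:

```python
def relative_luminance(rgb):
    """Approximate perceived brightness of an (r, g, b) pixel in [0, 1]."""
    r, g, b = (c / 255 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 weights

def adjust_text_color(text_rgb, second_partial_image):
    """Increase the contrast between the characters and the second partial
    image; the black/white choice and 0.5 threshold are assumptions."""
    avg = sum(relative_luminance(p) for p in second_partial_image) / len(second_partial_image)
    return (0, 0, 0) if avg > 0.5 else (255, 255, 255)
```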
In a second aspect, an embodiment of the present application provides an interface display apparatus, which is applied to an electronic device, and the apparatus includes:
the first determining module is used for determining a background image of the target interface according to the received selection instruction;
the second determining module is used for determining the view to be superposed from the multiple views for generating the interface to be displayed;
the local image acquisition module is used for acquiring a first local image corresponding to each view to be superposed from the target interface background image according to the display position information of each view to be superposed;
the processing module is used for carrying out superposition processing on the views to be superposed and the corresponding first local images under the first preset transparency aiming at the views to be superposed to obtain superposed views;
the processing module is further configured to generate and display the interface to be displayed according to the view subjected to the superposition processing.
In an optional embodiment, the processing module is specifically configured to:
when the view to be superposed comprises a view background, superposing a first local image under a first preset transparency on the view background to obtain a superposed view;
when the view to be superimposed does not include a view background, superimposing the first partial image under the first preset transparency on the view to be superimposed to obtain a view after superimposition processing.
In an alternative embodiment, the processing module is further configured to:
obtaining the hierarchical information of each superposed view according to the parent-child relationship among the views in the plurality of views;
and setting a first preset transparency of the first local image corresponding to each view to be superposed according to the level information of each view to be superposed, wherein different levels correspond to different first preset transparencies.
In an alternative embodiment, the first preset transparency of the first partial image corresponding to each view to be superimposed is inversely proportional to the level value of that view, where the lowest-level view has the smallest level value.
In an optional embodiment, the second determining module is specifically configured to:
determining views of display areas including other views in the display areas from the plurality of views as views to be superposed according to the level information and the display position information of each view in the plurality of views, wherein the level information of each view is determined by parent-child relationship among the views in the plurality of views;
and determining views to be superposed from the determined views suspected to be superposed.
In an optional implementation manner, the second determining module is further specifically configured to:
and taking the view comprising the view background as the suspected view to be superposed.
In an optional embodiment, the second determining module is specifically configured to:
judging whether each suspected view to be superposed is an image with rich colors or not; if not, the suspected view to be superposed is taken as the view to be superposed.
In an optional embodiment, the second determining module is specifically configured to:
determining a target image from the suspected view to be superposed, and calculating a hue distribution interval and a saturation distribution interval of the target image, wherein if the suspected view to be superposed comprises a view background, the target image is the view background; if the suspected view to be superposed does not comprise a view background, the target image is the suspected view to be superposed;
judging whether the hue distribution interval is larger than a preset hue distribution interval or not and whether the saturation distribution interval is larger than a preset saturation distribution interval or not;
if the hue distribution interval is larger than the preset hue distribution interval or the saturation distribution interval is larger than the preset saturation distribution interval, determining that the suspected view to be superposed is an image with rich colors;
and if the hue distribution interval is smaller than or equal to the preset hue distribution interval and the saturation distribution interval is smaller than or equal to the preset saturation distribution interval, judging that the suspected view to be superposed is not an image with rich colors.
In an optional embodiment, the second determining module is specifically configured to:
obtaining color values of all effective sampling points in the target image, wherein the transparency of the effective sampling points is greater than a second preset transparency;
and calculating to obtain the hue distribution interval and the saturation distribution interval according to the hue and the saturation corresponding to the color value of each effective sampling point.
In an optional embodiment, the second determining module is specifically configured to:
judging whether the target image is a pure color image or not;
if yes, executing the step of judging that the suspected view to be superposed is not an image with rich colors;
if not, executing the step of calculating the hue distribution interval and the saturation distribution interval of the target image.
In an optional implementation manner, the partial image obtaining module is further configured to obtain, from the target interface background image, a second partial image corresponding to each view including a character according to display position information of each view including the character in the plurality of views;
the processing module is further configured to, for each view including a character, adjust the brightness of the character in the view according to the brightness of the second partial image corresponding to the view to obtain a view with an adjusted contrast, where the adjustment is used to increase the contrast between the character and the second partial image;
the processing module is further configured to synthesize the superimposed view, the view with the adjusted contrast, and the unprocessed view into the interface to be displayed according to the display position information and the hierarchy information of each view in the multiple views, and display the interface, where the hierarchy information of each view is determined by a parent-child relationship between the views in the multiple views.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor can execute the machine executable instructions to implement the interface display method described in any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the interface display method according to any one of the foregoing embodiments.
According to the interface display method and apparatus, the electronic device, and the readable storage medium provided by the application, after the views to be superimposed are determined from the views included in the interface to be displayed, the first partial image corresponding to each view to be superimposed is obtained from the determined target interface background image according to that view's display position information; each view to be superimposed is then superimposed with its corresponding first partial image at the first preset transparency to obtain a superimposed view; and finally the interface to be displayed is generated and displayed based on the superimposed views. The interface to be displayed can thus present the content of the target interface background image without degrading the display effect of any view, so the display effect of the interface to be displayed is ensured, and the poor interface display effect caused by setting the views other than the bottommost view to no background or a transparent background is avoided.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered limiting of its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an interface display method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating the sub-steps included in step S120 of FIG. 2;
fig. 4 is a schematic diagram of a view tree and a node tree corresponding to an interface provided in the embodiment of the present application;
FIG. 5 is a schematic flow chart of sub-steps included in sub-step S122 of FIG. 3;
FIG. 6 is one of the flow diagrams of sub-steps included in sub-step S1221 of FIG. 5;
FIG. 7 is a second schematic flowchart of the sub-steps included in sub-step S1221 in FIG. 5;
fig. 8 is a second schematic flowchart of an interface display method according to an embodiment of the present application;
fig. 9 is a third schematic flowchart of an interface display method according to an embodiment of the present application;
fig. 10 is a schematic block diagram of an interface display device according to an embodiment of the present application.
Reference numerals: 100: electronic device; 110: memory; 120: processor; 130: communication unit; 200: interface display apparatus; 210: first determining module; 220: second determining module; 230: local image acquisition module; 240: processing module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may be, but is not limited to, a smart phone, a tablet computer, etc. As shown in fig. 1, the electronic device 100 may include a memory 110, a processor 120, and a communication unit 130. The elements of the memory 110, the processor 120 and the communication unit 130 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used for storing programs or data. The memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the memory 110 stores therein the interface display apparatus 200, and the interface display apparatus 200 includes at least one software functional module which can be stored in the memory 110 in the form of software or firmware (firmware). The processor 120 executes various functional applications and data processing by running software programs and modules stored in the memory 110, such as the interface display apparatus 200 in the embodiment of the present application, so as to implement the interface display method in the embodiment of the present application.
The communication unit 130 is configured to establish a communication connection between the electronic device 100 and another communication terminal through a network, and to transmit and receive data through the network.
It should be understood that the structure shown in fig. 1 is only a schematic structural diagram of the electronic device 100, and the electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2, fig. 2 is a flowchart illustrating an interface display method according to an embodiment of the present disclosure. The method is applied to the electronic device 100. The specific flow of the interface display method is described in detail below.
Step S110, determining a target interface background image according to the received selection instruction.
Step S120, determining a view to be superposed from a plurality of views for generating the interface to be displayed.
Step S130, obtaining a first partial image corresponding to each view to be superimposed from the target interface background image according to the display position information of each view to be superimposed.
Step S140, for each view to be superimposed, performing superimposition processing on the view to be superimposed and the corresponding first local image under the first preset transparency to obtain a view after the superimposition processing.
Step S150, generating and displaying the interface to be displayed according to the views subjected to the superimposition processing.
In this embodiment, the selection instruction may be input by a user, or may be automatically generated by the electronic device 100, where a generation manner of the selection instruction is not specifically limited. And after receiving the selection instruction, taking the image corresponding to the selection instruction as a target interface background image.
After the views to be superposed are determined, for each view to be superposed, according to the display position information of the view to be superposed, a first partial image corresponding to the view to be superposed is obtained from the determined target interface background image. Thereby, a respective first partial image for each view to be superimposed can be obtained. The display position information of the view represents the position of the view when the view is displayed in an interface. The first partial image may be a part of the target interface background image or may be a complete target interface background image, and is determined by the display position information of the corresponding view to be superimposed.
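A sketch of obtaining a first partial image by display position, assuming the background is a flat row-major pixel list. Clamping the view's region to the image bounds shows how a full-screen view yields the complete target interface background image:

```python
def first_partial_image(background, bg_w, bg_h, x, y, w, h):
    """Intersect the view's display region (x, y, w, h) with the
    background image and return the covered pixels as rows."""
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(bg_w, x + w), min(bg_h, y + h)
    return [[background[r * bg_w + c] for c in range(x0, x1)]
            for r in range(y0, y1)]
```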
In this embodiment, the order of step S110 and step S120 is not limited: step S110 may be performed before or after step S120, or the two steps may be performed simultaneously.
Then, for each view to be superimposed and its corresponding first partial image, the view and the first partial image at the first preset transparency are superimposed to obtain the superimposed view. The transparency value ranges from 0 to 1, where 0 represents complete transparency and 1 represents complete opacity. This processing is repeated for each view to be superimposed, yielding superimposed views for all of the determined views. Finally, the interface to be displayed is generated based on the superimposed views and displayed. The interface to be displayed may also be generated based on both the superimposed views and those of the plurality of views that were not superimposed. The interface to be displayed may be, but is not limited to, a desktop or an application interface.
Optionally, each time a view to be superimposed is determined, a first local image corresponding to the view to be superimposed is obtained, and then a view after the superimposition processing corresponding to the view to be superimposed is obtained through the superimposition processing. Or after all the views to be superimposed are determined, the first local images corresponding to all the views to be superimposed are obtained, and then the views after the superimposition processing corresponding to all the views to be superimposed are obtained through the superimposition processing. It is understood that the above description is only exemplary, and the specific implementation manner may be selected according to actual requirements to obtain the view after the overlay processing.
Therefore, the interface to be displayed can present the content of the target interface background image without degrading the display effect of any view, so the display effect of the interface to be displayed is ensured, and the poor interface display effect caused by setting the views other than the bottommost view to no background or a transparent background is avoided.
In this embodiment, as an optional implementation manner, a view to be superimposed may be determined from a plurality of views used for generating an interface to be displayed according to a received user setting command. For example, the multiple views used for generating the interface to be displayed include views 1, 2, and 3, and the received user setting command is used to designate the views 1 and 2 as views to be superimposed; upon receiving the user setup command, views 1, 2 are determined as views to be superimposed. Therefore, the user can determine the content of the background image of the target interface presented on the interface to be displayed by specifying the view to be superimposed according to the preference of the user, and the personalized requirement is met.
In this embodiment, as another optional implementation manner, a view to be superimposed may be determined from a plurality of views used for generating an interface to be displayed according to a preset rule. For example, a view relatively close to the bottommost view is taken as a view to be superimposed. Therefore, the view to be superposed can be automatically determined without excessive operation of a user.
Optionally, referring to fig. 3, fig. 3 is a flowchart illustrating the sub-steps included in step S120 in fig. 2. Step S120 may include substeps S121 and substep S122. The view to be superimposed among the plurality of views may be determined by substep S121 and substep S122.
And a substep S121, determining, from the plurality of views, a view of which the display area includes the display areas of other views as a view suspected to be superimposed according to the level information and the display position information of each view in the plurality of views.
In this embodiment, the hierarchical information of each view may be obtained according to the parent-child relationships among the multiple views used for generating the interface to be displayed. The hierarchical information may include a level value, which indicates the specific level of the view in the interface, that is, its level in the view tree corresponding to the interface in which the view is located.
Referring to fig. 4, fig. 4 is a schematic view of a view tree and a node tree corresponding to an interface according to an embodiment of the present disclosure. A Window internally holds a plurality of views, which are managed in a tree structure. As shown in fig. 4, the view at the top of the tree is the root view RootView; that is, the bottommost view of an interface (in display order) is the root view RootView. A view may be the parent of zero, one, or more views. According to the parent-child relationships among the views, the view tree shown in the upper half of fig. 4 can be generated.
Taking View-2-1 in fig. 4 as an example, the parent view of View-2-1 is View-2, and the parent view of View-2 is RootView; thus, the level value in the hierarchical information of View-2-1 may be 3, indicating that View-2-1 is at level 3 in the view tree.
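A level value of this kind can be obtained by walking the parent chain up to the root, as in this illustrative Python sketch (names and data structure are hypothetical):

```python
def level_value(view, parent_of):
    """Level of a view in the view tree; the root view is level 1.

    parent_of maps each view name to its parent view, or None for the root.
    """
    level = 1
    while parent_of[view] is not None:
        view = parent_of[view]
        level += 1
    return level

# The View-2-1 example from fig. 4:
parents = {"RootView": None, "View-2": "RootView", "View-2-1": "View-2"}
```

Here `level_value("View-2-1", parents)` yields 3, matching the example above.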
Optionally, the hierarchical information may also include an identifier of the view's parent view, an identifier of that parent view's parent view, and so on up to an identifier of the root view. Thus, the ancestor views of a view can be determined directly from the view's hierarchical information.
The display position information of each of the plurality of views is preset information indicating the position at which the view is displayed, that is, in which area the view is displayed. After the hierarchical information and display position information of each of the multiple views are obtained, the views whose display areas include the display areas of other views can be found and taken as suspected views to be superimposed. A view whose display area includes the display areas of other views is visually regarded as background in the interface, so it can be treated as a suspected view to be superimposed; the first partial image can subsequently be superimposed on such views, enabling the interface to be displayed to present more coherent target interface background image information. Meanwhile, the information main body of the original interface to be displayed is not affected. For example, if the information main body is the content in view 1 but the display area of view 1 does not include the display areas of other views, view 1 is not superimposed with the first partial image, and its display effect is not degraded by such superimposition.
If the display area of one view includes the display areas of other views, the view may carry the other views, i.e., a parent-child relationship exists between the view and the other views; for example, since the root view RootView is a parent view of View-2, the display area of RootView includes the display area of View-2. Alternatively, the display areas of other views at the same level as the view may be located within the display area of the view; for example, in fig. 4 the display area of View-3-2 may include the display area of View-3-3, which is at the same level. It will of course be appreciated that other reasons may also result in the display area of one view including the display area of another view.
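Sub-step S121 thus reduces to a rectangle-containment test over display areas. A minimal Python sketch (the rectangle format and function names are assumptions, not from the patent):

```python
def contains(outer, inner):
    """True if display area `outer` fully contains display area `inner`.

    Rectangles are (left, top, right, bottom) in screen coordinates.
    """
    ol, ot, orr, ob = outer
    il, it, ir, ib = inner
    return ol <= il and ot <= it and ir <= orr and ib <= ob

def suspected_views(areas):
    """Views whose display area contains at least one other view's area."""
    return [
        name for name, rect in areas.items()
        if any(n != name and contains(rect, r) for n, r in areas.items())
    ]

# Illustrative display areas: the root covers the screen, two child views inside it.
areas = {
    "RootView": (0, 0, 100, 200),
    "View-2": (10, 10, 90, 100),
    "View-1": (10, 110, 90, 190),
}
```

With these example areas, only RootView contains other views, so only it becomes a suspected view to be superimposed.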
Optionally, when determining a view suspected to be superimposed, the view including the view background may also be used as the view suspected to be superimposed; and/or, not treating the view that does not include the background of the view but includes the character as the suspected view to be overlaid.
For a view in which a view background is set, the view background of the view is also generally considered as the background of the interface. When one view includes a view background, the view also includes non-background content, the non-background content is generally content that needs to be focused by a user, and the view background is used as a background. For example, if a chat bubble image includes text and a solid background image, the solid background image is a view background of the chat bubble image, and the text is non-background content.
In order to enable the interface to be displayed to present more contents of the target interface background image, it may be sequentially determined whether each view in the multiple views for generating the interface to be displayed includes a view background, and a view including the view background is taken as a suspected view to be superimposed.
When a view does not include a view background but includes a character, if the view is taken as a view to be superimposed, and the view is subsequently superimposed with a first partial image, the visibility of the character may be affected. Therefore, when determining the view suspected to be superimposed, the view that does not include the background of the view but includes the character may not be taken as the view suspected to be superimposed to avoid that the visibility of the character in the view is affected by the view being superimposed with the first partial image.
As an implementation manner, of course, a view that does not include a view background but includes characters may also be taken as a view to be superimposed, so that the view to be superimposed and the first partial image may be subsequently superimposed, so that the interface to be displayed can display more contents of the target interface background image.
And a substep S122, determining views to be superimposed from the determined views suspected to be superimposed.
Optionally, after the suspected view to be superimposed is determined, the determined suspected view to be superimposed may be directly used as the view to be superimposed, or the view to be superimposed may be selected from the determined suspected view to be superimposed according to a certain selection rule. If a view to be superimposed is selected from the determined suspected view to be superimposed according to a selection rule, the selection rule can be set according to actual requirements.
Referring to fig. 5, fig. 5 is a flowchart illustrating sub-steps included in sub-step S122 in fig. 3. Step S122 may include sub-step S1221 to sub-step S1223.
And a substep S1221, determining whether each suspected view to be superimposed is an image with rich colors.
Alternatively, in this embodiment, it may be determined whether one view suspected to be superimposed is a colorful image in the following manner. Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a sub-step included in sub-step S1221 in fig. 5. The sub-step S1221 may include a sub-step S12211, a sub-step S12213 to a sub-step S12216.
And a substep S12211 of determining a target image from the suspected view to be overlaid.
Optionally, the suspected view to be superimposed may be directly used as the target image, or the target image may be determined according to whether the suspected view to be superimposed includes a view background. For example, if the suspected view to be superimposed includes a view background, the view background is used as the target image; if it does not include a view background, the suspected view to be superimposed itself is used as the target image.
In sub-step S12213, the hue distribution section and the saturation distribution section of the target image are calculated.
Optionally, after the target image is determined, the color value of each pixel point of the target image in the RGB color space may be converted from the RGB color space to the HSV color space. Wherein, the color parameters of the RGB color space are respectively: red (Red), Green (Green), Blue (Blue), the color parameters of the HSV color space are: hue (Hue), Saturation (Saturation), lightness (Value). Therefore, the hue and the saturation of each pixel point of the target image can be obtained, then the difference between the maximum value and the minimum value in the hue of each pixel point can be used as the hue distribution interval of the target image, and the difference between the maximum value and the minimum value in the saturation of each pixel point can be used as the saturation distribution interval of the target image.
Optionally, after the target image is determined, sampling points may also be determined from the pixel points included in the target image according to a certain order, for example, taking the first pixel point of every 4 pixel points as a sampling point. Then, the color value of each sampling point in the RGB color space is converted into the HSV color space, thereby obtaining the hue and saturation of each sampling point. The difference between the maximum value and the minimum value of the hues of the sampling points is then used as the hue distribution interval of the target image, and the difference between the maximum value and the minimum value of the saturations of the sampling points is used as the saturation distribution interval of the target image. In this way, the speed of obtaining the hue distribution interval and the saturation distribution interval of the target image can be increased.
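The conversion and interval computation can be sketched with Python's standard colorsys module (the sampling step follows the "first pixel point of every 4 pixel points" example, and hue is scaled to the 0-100 interval used in this embodiment; function names are illustrative):

```python
import colorsys

def hs_intervals(pixels, step=4):
    """Hue and saturation distribution intervals of a target image.

    pixels: flat list of (R, G, B) tuples with channels in 0-255.
    step: take the first of every `step` pixel points as a sampling point.
    Returns (deltaH, deltaS), with hue scaled to 0-100 and saturation in 0-1.
    """
    hues, sats = [], []
    for r, g, b in pixels[::step]:
        h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hues.append(h * 100)  # hue normalised to the 0-100 interval
        sats.append(s)
    return max(hues) - min(hues), max(sats) - min(sats)
```

For a solid gray image both intervals are 0; for an image mixing red and green pixels, the hue distribution interval is about 33 on the 0-100 scale.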
Optionally, after the target image is determined, sampling points may also be determined from the pixel points included in the target image according to a certain order. The transparency of each sampling point is then compared with a second preset transparency, and the sampling points whose transparency is higher than the second preset transparency are taken as effective sampling points. In this way, inaccurate calculation results caused by the influence of the color values of low-transparency pixel points on the hue distribution interval and the saturation distribution interval can be avoided. The second preset transparency may be set according to actual requirements, for example, to 30%.
After the effective sampling points are determined, the color value of each effective sampling point in the RGB color space can be obtained and converted from the RGB color space into the HSV color space. The hue and saturation of each effective sampling point of the target image are thereby obtained; the difference between the maximum value and the minimum value of the hues of the effective sampling points is then used as the hue distribution interval of the target image, and the difference between the maximum value and the minimum value of the saturations of the effective sampling points is used as the saturation distribution interval of the target image.
And a substep S12214 of determining whether the tone distribution interval is greater than a preset tone distribution interval.
And a substep S12215 of determining whether the saturation distribution interval is greater than a preset saturation distribution interval.
If the hue distribution interval is less than or equal to the preset hue distribution interval and the saturation distribution interval is less than or equal to the preset saturation distribution interval, the substep S12216 is performed.
Substep S12216: and judging that the suspected view to be superposed is not a colorful image.
If the hue distribution interval of the target image is less than or equal to the preset hue distribution interval and the saturation distribution interval is less than or equal to the preset saturation distribution interval, that is, deltaH ≤ T1 and deltaS ≤ T2, the color distribution of the image information of the target image is monotonous and concentrated, and the target image is not a color-rich image. Thus, it can be determined that the suspected view to be superimposed is not a color-rich image. Here, deltaH denotes the hue distribution interval of the target image, T1 denotes the preset hue distribution interval, deltaS denotes the saturation distribution interval of the target image, and T2 denotes the preset saturation distribution interval.
The hue of an image takes values in the interval 0-100, and the preset hue distribution interval can be set according to actual requirements, for example, to 20. The saturation of an image takes values in the interval 0-1, and the preset saturation distribution interval can be set according to actual requirements, for example, to 0.2.
If the hue distribution interval is greater than the preset hue distribution interval, or the saturation distribution interval is greater than the preset saturation distribution interval, performing substep S12217.
Substep S12217: and judging that the suspected view to be superposed is a colorful image.
If the hue distribution interval of the target image is larger than the preset hue distribution interval, or the saturation distribution interval of the target image is larger than the preset saturation distribution interval, that is, deltaH > T1 or deltaS > T2, the image information of the target image has a rich color distribution, and the target image is a color-rich image. Thus, it can be determined that the suspected view to be superimposed is a color-rich image.
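The decision rule of sub-steps S12214 to S12217 can be stated compactly (Python sketch; the default thresholds are the example presets T1 = 20 and T2 = 0.2 given above):

```python
def is_color_rich(delta_h, delta_s, t1=20, t2=0.2):
    """A target image is color-rich when either distribution interval
    exceeds its preset threshold: deltaH > T1 or deltaS > T2.
    Defaults use the example presets from this embodiment
    (hue on a 0-100 scale, saturation on a 0-1 scale).
    """
    return delta_h > t1 or delta_s > t2
```

A wide hue spread alone, or a wide saturation spread alone, is enough to classify the image as color-rich.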
Optionally, referring to fig. 7, fig. 7 is a second flowchart illustrating a sub-step included in the sub-step S1221 in fig. 5. Before sub-step S12213, sub-step S1221 may further include sub-step S12212.
Substep S12212: and judging whether the target image is a pure color image.
Alternatively, whether the target image is a pure color image may be determined according to the drawing content corresponding to the target image. If the drawing content corresponding to the target image is a pure Color, the target image is a pure color image. A pure color image is by definition not a color-rich image, so in this case sub-step S12216 may be performed directly: judging that the suspected view to be superimposed is not a color-rich image.
If the drawing content corresponding to the target image is Bitmap, it indicates that the target image is not a pure color image. In this case, then sub-step S12213 may be performed: and calculating a hue distribution interval and a saturation distribution interval of the target image so as to judge whether the suspected view to be superposed is an image with rich colors according to the hue distribution interval and the saturation distribution interval of the target image.
Therefore, when the target image corresponding to a suspected view to be superimposed is a pure color image, first judging whether the target image is a pure color image speeds up the determination of whether the suspected view to be superimposed is a color-rich image.
If the view suspected to be superimposed is not a color-rich image, then sub-step S1222 is performed: and taking the suspected view to be superposed as the view to be superposed.
If a suspected view to be superimposed is not a color-rich image, modifying its image information will not obviously hinder the user from obtaining key visual information, so the suspected view to be superimposed with monotonous colors can be used as a view to be superimposed.
If the view suspected to be superimposed is a colorful image, the sub-step S1223 is executed: and not taking the suspected view to be superposed as the view to be superposed.
If a suspected view to be superimposed is an image rich in color, the suspected view to be superimposed contains information (e.g., a color photograph) required by the user, and is not suitable for superimposing the first partial image, and the original image information needs to be maintained, so that the suspected view to be superimposed rich in color is not used as the view to be superimposed.
After the views to be superimposed are determined, the views to be superimposed and the first local images under the first preset transparency corresponding to the views to be superimposed can be superimposed to obtain the views after the superimposition processing. The first preset transparency of the first partial image corresponding to one view to be superimposed may be determined by whether the view to be superimposed is the bottom view. Optionally, when the view to be superimposed is the bottom view, the first preset transparency of the first partial image corresponding to the view to be superimposed may be set to be larger, so that after the superimposition processing, the obtained view after the superimposition processing can present the content of the superimposed first partial image as clearly as possible. For example, when the view to be superimposed is the bottom view, the first preset transparency of the first partial image corresponding to the view to be superimposed may be set to 1, that is, the first partial image superimposed by the bottom view is completely opaque.
As an optional implementation manner, the first preset transparencies of the first partial images corresponding to the views to be superimposed, which are not the bottom view, may be the same or different, and may be set according to actual requirements. The first preset transparency of the first partial image corresponding to the view to be superimposed, which is not the bottom view, may be located between 0 and 1, so that the obtained view subjected to the superimposition processing may present the content of the view to be superimposed, or may present the content of the superimposed first partial image.
Optionally, referring to fig. 8, fig. 8 is a second flowchart of the interface display method according to the embodiment of the present application. In this embodiment, before step S140, the method may further include step S171 and step S172.
Step S171, obtaining the hierarchical information of each view to be superimposed according to the parent-child relationship between the views in the multiple views.
Step S172, setting a first preset transparency of the first partial image corresponding to each view to be superimposed according to the hierarchical information of each view to be superimposed.
In this embodiment, the hierarchical information of each view may be obtained according to the parent-child relationship between the views in the multiple views. Wherein, the hierarchy information may include a hierarchy value. The lowest view has the lowest level value. After the views to be superimposed are determined, the hierarchy information of each view to be superimposed can be obtained from the hierarchy information of each view. Then, for each view to be superimposed, setting a first preset transparency of a first partial image corresponding to the view to be superimposed according to the hierarchy information of the view to be superimposed. The processing is repeated, and the first preset transparency of each first partial image can be obtained.
The different levels correspond to different first preset transparencies, that is, the first preset transparencies of the first partial images superposed by the views to be superposed at the different levels are different. Therefore, the finally obtained interface to be displayed is distinct in hierarchy, and convenience is brought to reading of a user.
Optionally, for each view to be superimposed, the first preset transparency of the first partial image corresponding to the view to be superimposed may be proportional to the level value of the view to be superimposed.
Optionally, for each view to be superimposed, the first preset transparency of the first partial image corresponding to the view to be superimposed may be inversely proportional to the level value of the view to be superimposed. That is, the higher the hierarchy of the views to be superimposed, the smaller the first preset transparency of the superimposed first partial image. The smaller the first preset transparency, the more transparent the first partial image at the first preset transparency. Therefore, the display effect of the interface to be displayed can be further ensured.
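One possible mapping from level value to first preset transparency, in which the bottom view is fully opaque and higher levels become progressively more transparent (an illustrative Python sketch with a simple linear falloff; the patent does not prescribe a particular formula):

```python
def preset_transparency(level, max_level, bottom_alpha=1.0):
    """First preset transparency of a first partial image by view level.

    The bottom view (level 1) gets bottom_alpha (fully opaque by default);
    higher levels get smaller, more transparent values, so that the
    superimposed partial image fades out toward the top of the hierarchy.
    """
    if level <= 1:
        return bottom_alpha
    return bottom_alpha * (max_level - level + 1) / max_level
```

For a four-level view tree this yields 1.0, 0.75, 0.5, 0.25 from bottom to top, one concrete realization of "the higher the level, the smaller the first preset transparency".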
In the process of the overlay, in order to avoid that the display of the important information in the view to be overlaid is affected due to the overlay of the first partial image, different overlay modes can be adopted according to whether the view to be overlaid includes the view background or not.
Optionally, if one view to be superimposed includes a view background, the first partial image at the first preset transparency may be superimposed on the view background, so as to obtain a view after the superimposition processing. If one view to be superimposed does not include a view background, the first partial image under the first preset transparency may be superimposed on the view to be superimposed to obtain a view after the superimposition processing.
Optionally, referring to fig. 9, fig. 9 is a third schematic flow chart of the interface display method according to the embodiment of the present application. In this embodiment, before step S160, the method may further include step S151 and step S152.
Step S151, obtaining a second partial image corresponding to each view including the character from the target interface background image according to the display position information of each view including the character in the interface to be displayed.
In this embodiment, the view including the character may also be determined from the views included in the interface to be displayed. Wherein the characters may include words, numbers, letters, punctuation marks, and the like. Then, according to the display position information of each view including the characters, a second partial image corresponding to each view including the characters is obtained from the target interface background image. The second partial image may be a part of the target interface background image or a complete target interface background image, and is determined by the display position information of the corresponding view including the character.
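Obtaining a second partial image from the target interface background image amounts to cropping it at the view's display area. A minimal Python sketch (the pixel layout and rectangle format are assumptions):

```python
def crop_partial(background, rect):
    """Extract the partial image covered by a view's display area.

    background: 2D list of pixel rows (each pixel an RGB tuple).
    rect: (left, top, right, bottom) display position, right/bottom exclusive.
    Returns the sub-image; if rect covers the whole background, the result
    is the complete background image, as noted above.
    """
    l, t, r, b = rect
    return [row[l:r] for row in background[t:b]]
```

The same slicing applies to the first partial images extracted earlier in the method.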
Step S152, for each view including the character, adjusting the brightness of the character in the view according to the brightness of the second partial image corresponding to the view, so as to obtain a view with an adjusted contrast.
For each view including a character, the brightness of the second partial image corresponding to the view is calculated, and then the brightness of the character in the view is adjusted according to the brightness of the second partial image, so as to obtain a contrast-adjusted view. The adjustment increases the contrast between the character in the view and the second partial image, dynamically ensuring the contrast between the character and the background of the area in which it currently sits, so that key information is clearly visible at all times; that is, the visibility of the character is ensured. The above processing is performed on each view including a character, obtaining a contrast-adjusted view corresponding to each such view.
Optionally, after the second partial image is obtained, it may be converted into the LAB color space, and its brightness may then be calculated directly from the brightness of each of its pixel points, or of each sampling point, or of each effective sampling point. The LAB color space comprises three components: luminance (L) and two color channels (A and B). For the description of sampling points and effective sampling points, refer to the description above of the sampling points and effective sampling points used in calculating the hue distribution interval.
Optionally, the average of the brightness values of the pixel points, the sampling points, or the effective sampling points of the second partial image may be used as the brightness of the second partial image. It is to be understood that the above manner is only an example, and the brightness of the second partial image may be obtained in other ways.
After obtaining the brightness of the second partial image corresponding to a view including a character, the brightness may be compared with a preset brightness. The preset brightness can be set according to actual requirements. If the brightness is less than the preset brightness, the second local image is determined to be in a dark color, and the character in the view including the character needs to be highlighted. If the brightness is not less than the preset brightness, it can be determined that the second local image is in bright tone, and the characters in the view including the characters need to be dimmed.
Optionally, as an alternative implementation, if the character needs to be brightened, the color of the character may be converted from the RGB color space to the LAB color space to obtain the current L component of the character, then the brightness of the character is adjusted to max { L,100-L }, and then max { L,100-L } is converted to the RGB color space, thereby completing the process of brightening the character. Correspondingly, if the character needs to be dimmed, the color of the character can be converted from the RGB color space to the LAB color space to obtain the current L component of the character, then the brightness of the character is adjusted to min { L,100-L }, and then min { L,100-L } is converted to the RGB color space, thereby completing the dimming process for the character.
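The L-component adjustment described above, sketched in Python (the dark/bright decision against the preset brightness is passed in as a flag; the RGB↔LAB conversions surrounding it are omitted, and names are illustrative):

```python
def adjust_lightness(l, background_is_dark):
    """Adjust a character's L component (0-100 in the LAB color space).

    If the second partial image is dark, brighten: L' = max(L, 100 - L);
    if it is bright, dim: L' = min(L, 100 - L). Either way the character's
    lightness is pushed away from that of its background.
    """
    return max(l, 100 - l) if background_is_dark else min(l, 100 - l)
```

For example, a dim character (L = 30) on a dark background is brightened to L = 70, while a bright character (L = 80) on a bright background is dimmed to L = 20.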
Optionally, as another optional implementation, the process of brightening or dimming the character may also be implemented by subtracting a preset value from or adding a preset value to the current L component of the character.
After the brightness of the second partial image corresponding to the view including the character is obtained, the brightness of the character in the view including the character can be adjusted according to the brightness of the second partial image and by combining the user preferred color and/or the related color theory. For example, if the user prefers color a and the contrast between color a and the brightness of the second partial image satisfies the contrast requirement, the color of the character may be directly adjusted to color a. Therefore, the readability of the character can be ensured, the adjusted character color can be in accordance with the preference of the user, and/or the adjusted character color can meet the relevant color theory.
The view including the characters may be a view to be superimposed or may not be a view to be superimposed.
It is of course understood that the brightness of the character may also be adjusted in such a way as to ensure the readability of the character.
After the view after the superimposition processing and the contrast-adjusted view are obtained, step S160 can be realized by sub-step S161 shown in fig. 9.
And a substep S161, synthesizing the view after the superimposition processing, the view after the contrast adjustment, and the unprocessed view into the interface to be displayed for displaying according to the display position information and the hierarchy information of each view in the interface to be displayed.
After the superimposition processing and the contrast adjustment, the display position information and hierarchical information of each view after the superimposition processing, each contrast-adjusted view, and each unprocessed view can be obtained from the display position information and hierarchical information of each view in the interface to be displayed. Subsequently, the views after the superimposition processing, the contrast-adjusted views, and the unprocessed views are synthesized into the interface to be displayed based on the display position information and the hierarchical information, and the interface to be displayed is displayed.
Therefore, the self-adaptive switching of the Interface style can be realized only by designating one image as a target Interface background image, a User Interface (UI) designer is not required to perform a complicated adaptation process, a disordered resource arrangement process can be omitted, the real-time visual effect is achieved, and the maintenance is simple. Meanwhile, the colorful image information is not affected; the contrast between the character and the background of the current area can be dynamically ensured, and the character is clearly visible at any moment; and the view is superposed with the first partial images with different transparencies, so that the hierarchy of the interface to be displayed is clear, and the user can read the interface conveniently.
Through the mode provided by the application, the UI designer can rapidly design the interfaces with different interface styles, so that the development period of the interface theme pack can be greatly shortened.
Generally, when drawing a view, the view is drawn according to a node corresponding to the view. The following describes an interface display method provided by the embodiment of the present application from a node perspective with reference to fig. 4.
Each view corresponds to a content drawing node (denoted C-Node). If a view background is set for a view, the view also corresponds to a background drawing node (denoted Bg-Node), which is a child node of the content drawing node.
Each node includes a drawing command buffer, DisplayList. In each refresh-and-draw cycle, each view of the interface draws its graphics through a system interface, and the drawing commands and drawing parameters are stored in the DisplayList of the corresponding node, awaiting rendering. The drawing parameters include parameters such as shape, coordinates, and brush color.
After the target interface background image ThemeBG is determined, the ThemeBG is saved to a shared rendering cache. Partial images can then be quickly cropped from it at the rendering node.
A node tree corresponding to the views used to generate the interface is generated; Bg (Background) nodes and Fg (Foreground) nodes are then determined according to the drawing command parameters of each node and the node tree; finally, the drawing command parameters of the Bg nodes are modified so that the first partial image corresponding to each Bg node is superimposed on the image drawn by that node, and the drawing command parameters of the Fg nodes are modified to ensure that the characters in the images drawn by the Fg nodes remain visible.
Optionally, the node tree corresponding to the views used to generate the interface may be generated according to the parent-child relationships among the views used to generate the interface to be displayed and according to whether each view corresponds to a background drawing node. The node tree may take the form shown in fig. 4.
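The node-tree construction just described can be sketched as follows. This is an illustrative Python model under stated assumptions (class and naming conventions are hypothetical): each view yields a content node, and a view that has a view background additionally yields a background node attached as a child of its content node:

```python
# Hypothetical sketch of building the node tree from view parent-child
# relationships and each view's background setting.

class View:
    def __init__(self, name, has_background=False, children=()):
        self.name = name
        self.has_background = has_background
        self.children = list(children)

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

def build_node_tree(view):
    c_node = Node("C-" + view.name)           # every view gets a content node
    if view.has_background:
        # the background drawing node is a child of the content node
        c_node.children.append(Node("Bg-" + view.name))
    for child in view.children:
        c_node.children.append(build_node_tree(child))
    return c_node

root = build_node_tree(
    View("root", children=[
        View("panel", has_background=True, children=[View("label")])]))
```

The resulting tree mirrors the view hierarchy, with Bg-Nodes interleaved where view backgrounds are set, matching the shape shown in fig. 4.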
If the node tree includes a background drawing node corresponding to a view background, that node can first be marked as a Bg node, and all other nodes default to Unknown nodes.
Next, each node in the node tree is traversed starting from the root node RootRenderNode in the node tree.
When any node is traversed, whether characters exist in the current node is judged according to the drawing content information in the drawing command parameters of the current node, that is, whether characters exist in the image drawn by the current node. If so, the current node is marked as an Fg node, because characters must be displayed with high contrast to ensure their readability.
When the current node is marked as an Fg node, whether the current node has child nodes is judged according to the node tree. If a node has a child node, the two have a parent-child relationship. Whether a node has child nodes can be judged by whether other nodes are directly connected to it in the node tree. For example, since node C-Node-2 in fig. 4 is directly connected to node C-Node-2-1, it can be determined that node C-Node-2 has a child node.
If the current node marked as an Fg node has child nodes, whether each child node should be marked as a Bg node may be determined by analyzing how the display positions overlap, according to the display position information in the drawing command parameters of the child nodes of the current node.
When judging, a traversed region corresponding to the current node marked as an Fg node is first initialized, with a size of 0. Each child node is then traversed in reverse of the preset drawing order of the child nodes of the current node, and it is judged whether the display area corresponding to the display position information of the current child node encloses the traversed region. If the display area of the current child node encloses the traversed region, the current child node is marked as a Bg node; if it does not, the current child node is not marked as a Bg node. After a child node has been judged, its display area is added to the traversed region. The next child node then becomes the new current child node, and the judgment is repeated until all child nodes of the current node have been judged.
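The reverse-order scan above can be sketched in Python as follows. This is an illustrative model under stated assumptions: rectangles are (left, top, right, bottom), the traversed region is maintained as a bounding box, and an initially empty traversed region is treated as not enclosed (so the top-most child is never marked) — one possible reading of the zero-size initialization that the source leaves open:

```python
# Hypothetical sketch: mark as Bg those children of an Fg node whose display
# rectangle encloses everything visited so far, scanning from the child drawn
# last (closest to the eye) to the child drawn first.

def encloses(outer, inner):
    # Interpretation choice: an empty traversed region is NOT enclosed.
    if inner is None:
        return False
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def union(a, b):
    # Bounding box of the regions visited so far.
    if a is None:
        return b
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def mark_bg_children(children_in_draw_order):
    """children: list of (name, rect) in preset drawing order; returns Bg names."""
    traversed = None          # initialized traversed region, size 0
    marked = []
    for name, rect in reversed(children_in_draw_order):
        if encloses(rect, traversed):
            marked.append(name)
        traversed = union(traversed, rect)  # fold this child's area in
    return marked

# A full-size card drawn behind a small icon: the card encloses the icon's
# area when scanned, so the card is marked as a Bg node and the icon is not.
children = [("card", (0, 0, 100, 100)), ("icon", (40, 40, 60, 60))]
```

Under this reading, a child is only treated as background when it visually sits behind everything scanned before it, which matches the intent of preserving foreground content.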
Nodes not marked as Bg or Fg nodes remain Unknown nodes. The preset drawing order of the child nodes of the current node can be determined by the distance between the image drawn by each child node and the human eye: the closer the drawn image appears to the viewer, the later that child node is drawn; that is, the reverse-order traversal starts from the child node closest to the human eye. Whether an image is near the human eye is determined by the display effect it exhibits. For example, if the display area of one image A includes the display area of another image B, image B visually appears closer to the human eye than image A.
If no characters exist in the current node and the current node is not marked as a Bg or Fg node, whether the current node should be marked as a Bg node is determined according to how its child nodes are marked.
Optionally, when there is no character in the current node and the current node is not marked as a Bg node or an Fg node, if the number of child nodes of the current node is 1 and the child node is not marked as a Bg node, the current node is marked as a Bg node. For example, if the Node C-Node-2 in the Node tree of fig. 4 is the current Node, the Node C-Node-2 only has the child Node C-Node-2-1, and the child Node C-Node-2-1 is not marked as the Bg Node, the Node C-Node-2 is marked as the Bg Node.
Optionally, when there is no character in the current node and the current node is not marked as a Bg node or an Fg node, if there are a plurality of child nodes of the current node, the current node is marked as a Bg node. For example, if the Node C-Node-3 in the Node tree of FIG. 4 is the current Node and the child nodes of the Node C-Node-3 are Bg-Node-3, Bg-Node-3-1, Bg-Node-3-2, and Bg-Node-3-3, the Node C-Node-3 is labeled as Bg Node.
After each node in the node tree has been judged in the above manner, the marking of Bg and Fg nodes is complete. Nodes not marked as Bg or Fg nodes remain Unknown nodes.
Then, for each Bg node, whether its drawing content is a pure color (Color) is judged according to its drawing content information. If so, the image drawn by the Bg node is not a colorful image, and the command Apply-Cover is set directly for the Bg node for further processing. If the drawing content information of the Bg node is a bitmap (Bitmap), that is, a color image, whether the color image is colorful is determined according to the hue distribution interval and the saturation distribution interval of the color image. If it is colorful, the Bg node is not further processed, and the command Apply-None is set. If it is not colorful, the command Apply-Cover is set for the Bg node for further processing.
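The "colorful image" test above can be sketched as follows. This is a hedged illustration: pixels are sampled, only sufficiently opaque ones are kept (the "second preset transparency" of the claims), hue and saturation spans are computed, and each span is compared with a preset interval. The threshold values and the alpha cutoff below are assumed, not given by the source:

```python
# Hypothetical sketch of the colorful-image decision based on hue and
# saturation distribution intervals.
import colorsys

HUE_SPAN_THRESHOLD = 0.3   # preset hue distribution interval (assumed value)
SAT_SPAN_THRESHOLD = 0.3   # preset saturation distribution interval (assumed)
ALPHA_CUTOFF = 0.5         # "second preset transparency" (assumed value)

def is_colorful(pixels_rgba):
    """pixels_rgba: iterable of (r, g, b, a) with channels in [0, 1]."""
    hues, sats = [], []
    for r, g, b, a in pixels_rgba:
        if a <= ALPHA_CUTOFF:          # skip non-effective sampling points
            continue
        h, s, _v = colorsys.rgb_to_hsv(r, g, b)
        hues.append(h)
        sats.append(s)
    if not hues:
        return False
    hue_span = max(hues) - min(hues)
    sat_span = max(sats) - min(sats)
    # Colorful if EITHER span exceeds its preset interval.
    return hue_span > HUE_SPAN_THRESHOLD or sat_span > SAT_SPAN_THRESHOLD

red = (1.0, 0.0, 0.0, 1.0)
blue = (0.0, 0.0, 1.0, 1.0)
grey = (0.5, 0.5, 0.5, 1.0)
```

A red-and-blue image spans a wide hue interval and would keep its Apply-None command; a uniformly grey one spans neither interval and would receive Apply-Cover.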
For each Fg node, the corresponding first partial image is cropped from the ThemeBG according to the display position information of the Fg node, the brightness of the first partial image is obtained, and a command is set for the Fg node according to that brightness. If the first partial image has a dark tone, the information of the Fg node needs to be brightened, and the command Apply-Light is set for the Fg node. If the first partial image has a bright tone, the information of the Fg node needs to be darkened, and the command Apply-Dark is set for the Fg node.
By this point, the processing of the drawing command before rendering is completed, and rendering is performed next.
When actual rendering is performed:
no processing is performed for each node having an Apply-None command.
For each node with an Apply-Cover command, the first partial image corresponding to the node is quickly cropped from the shared rendering cache according to the node's display position information and size information, and the first partial image at the first preset transparency is then superimposed on the image originally drawn by the node; that is, the information of the first partial image at the first preset transparency is overlaid with the information drawn by the node. The higher the level value of the view corresponding to the node, the smaller the first preset transparency of the superimposed first partial image, which keeps the interface hierarchy clear.
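The Apply-Cover step can be sketched as a per-pixel blend whose alpha shrinks with the view's level value, so deeper-nested views receive a fainter overlay. The alpha formula below is an illustrative assumption; the source only requires that the transparency be inversely related to the level value:

```python
# Hypothetical sketch: overlay alpha decreasing with view level value, plus a
# simple "source over" blend of the first partial image onto the node's image.

def overlay_alpha(level_value, base_alpha=0.6):
    # Higher level value -> smaller first preset transparency (assumed formula).
    return base_alpha / (1 + level_value)

def blend(partial_px, node_px, alpha):
    """Per-channel blend of a partial-image pixel over the node's own pixel."""
    return tuple(alpha * p + (1 - alpha) * n for p, n in zip(partial_px, node_px))
```

For example, a bottom-level view (level value 0) gets a stronger overlay than a view two levels deeper, which is what makes the layering of the interface visually legible.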
For each Apply-Light node, the text brush color in the node's drawing command parameters is converted to the LAB color space to obtain the L component of the text brush color. The L component is then adjusted to max{L, 100-L}, and the adjusted L component is written back into the text brush color.
For each Apply-Dark node, the text brush color in the node's drawing command parameters is likewise converted to the LAB color space to obtain the L component of the text brush color. The L component is then adjusted to min{L, 100-L}, and the adjusted L component is written back into the text brush color.
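The two lightness rules above reduce to a one-line adjustment of the L component (which ranges over 0..100 in LAB). The following sketch operates on an already-extracted L value; the RGB-to-LAB conversion itself is outside its scope:

```python
# Sketch of the Apply-Light / Apply-Dark contrast rules on the LAB L component.

def adjust_lightness(l_component, command):
    if command == "Apply-Light":    # dark background -> push text lightness up
        return max(l_component, 100 - l_component)
    if command == "Apply-Dark":     # bright background -> push text lightness down
        return min(l_component, 100 - l_component)
    return l_component              # Apply-None: leave the brush color unchanged
```

Note that a value already on the correct side of 50 is left as-is (e.g. dark text over a bright background stays dark), so the rule only flips the lightness when flipping increases contrast.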
Finally, each view is drawn based on the drawing command parameters of the processed nodes and those of the unprocessed nodes. The drawn views may include superimposition-processed views and contrast-adjusted views. The drawn images are then composited according to the display position information of each view to obtain the interface to be displayed, which is displayed.
In order to perform the corresponding steps in the above embodiments and the various possible manners, an implementation of the interface display apparatus 200 is given below. Optionally, the interface display apparatus 200 may adopt the device structure of the electronic device 100 shown in fig. 1. Further, referring to fig. 10, fig. 10 is a block diagram of an interface display apparatus according to an embodiment of the present disclosure. It should be noted that the basic principle and technical effect of the interface display apparatus 200 provided in the present embodiment are the same as those of the above embodiments; for brevity, where a point is not mentioned in this embodiment, reference may be made to the corresponding content in the above embodiments. The interface display apparatus 200 may be applied to the electronic device 100, and may include: a first determination module 210, a second determination module 220, a local image acquisition module 230, and a processing module 240.
The first determining module 210 is configured to determine a background image of the target interface according to the received selection instruction. The second determining module 220 is configured to determine a view to be superimposed from the multiple views used for generating the interface to be displayed. The local image obtaining module 230 is configured to obtain, according to the display position information of each view to be superimposed, a first local image corresponding to each view to be superimposed from the target interface background image. The processing module 240 is configured to, for each view to be superimposed, perform superimposition processing on the view to be superimposed and the corresponding first local image under the first preset transparency, to obtain a view after the superimposition processing. The processing module 240 is further configured to generate and display the interface to be displayed according to the view after the superimposition processing.
In an alternative embodiment, the processing module 240 is specifically configured to: when the view to be superposed comprises a view background, superposing a first local image under a first preset transparency on the view background to obtain a superposed view; when the view to be superimposed does not include a view background, superimposing the first partial image under the first preset transparency on the view to be superimposed to obtain a view after superimposition processing.
In an alternative embodiment, the processing module 240 is further configured to: obtaining the hierarchical information of each superposed view according to the parent-child relationship among the views in the plurality of views; and setting a first preset transparency of the first local image corresponding to each view to be superposed according to the level information of each view to be superposed, wherein different levels correspond to different first preset transparencies.
In an alternative embodiment, the first preset transparency of the first partial image corresponding to each view to be superimposed is inversely proportional to a level value of a level of the view to be superimposed, where the level value of the lowest level view is the smallest.
In an optional embodiment, the second determining module 220 is specifically configured to: determining views of display areas including other views in the display areas from the plurality of views as views to be superposed according to the level information and the display position information of each view in the plurality of views, wherein the level information of each view is determined by parent-child relationship among the views in the plurality of views; and determining views to be superposed from the determined views suspected to be superposed.
In an optional implementation manner, the second determining module 220 is further specifically configured to: take a view that includes a view background as a suspected view to be superposed; and/or not take a view that does not include a view background but includes characters as a suspected view to be superposed.
In an optional embodiment, the second determining module 220 is specifically configured to: judging whether each suspected view to be superposed is an image with rich colors or not; if not, the suspected view to be superposed is taken as the view to be superposed.
In an optional embodiment, the second determining module 220 is specifically configured to: determining a target image from the suspected view to be superposed, and calculating a hue distribution interval and a saturation distribution interval of the target image, wherein if the suspected view to be superposed comprises a view background, the target image is the view background; if the suspected view to be superposed does not comprise a view background, the target image is the suspected view to be superposed; judging whether the hue distribution interval is larger than a preset hue distribution interval or not and whether the saturation distribution interval is larger than a preset saturation distribution interval or not; if the hue distribution interval is larger than the preset hue distribution interval or the saturation distribution interval is larger than the preset saturation distribution interval, determining that the suspected view to be superposed is an image with rich colors; and if the hue distribution interval is smaller than or equal to the preset hue distribution interval and the saturation distribution interval is smaller than or equal to the preset saturation distribution interval, judging that the suspected view to be superposed is not an image with rich colors.
In an optional embodiment, the second determining module 220 is specifically configured to: obtaining color values of all effective sampling points in the target image, wherein the transparency of the effective sampling points is greater than a second preset transparency; and calculating to obtain the hue distribution interval and the saturation distribution interval according to the hue and the saturation corresponding to the color value of each effective sampling point.
In an optional embodiment, the second determining module 220 is specifically configured to: judging whether the target image is a pure color image or not; if yes, executing the step of judging that the suspected view to be superposed is not an image with rich colors; if not, executing the step of calculating the hue distribution interval and the saturation distribution interval of the target image.
In an optional embodiment, the partial image obtaining module 230 is further configured to obtain, from the target interface background image, a second partial image corresponding to each view including a character according to display position information of each view including the character in the multiple views; the processing module 240 is further configured to, for each view including a character, adjust the brightness of the character in the view according to the brightness of the second partial image corresponding to the view to obtain a view with an adjusted contrast, where the adjustment is used to increase the contrast between the character and the second partial image; the processing module 240 is further configured to synthesize the superimposed view, the view with the adjusted contrast, and the unprocessed view into the interface to be displayed according to the display position information and the hierarchy information of each view in the multiple views, and display the interface to be displayed, where the hierarchy information of each view is determined by a parent-child relationship between the views in the multiple views.
Alternatively, the modules may be stored in the memory 110 shown in fig. 1 in the form of software or Firmware (Firmware) or be fixed in an Operating System (OS) of the electronic device 100, and may be executed by the processor 120 in fig. 1. Meanwhile, data, codes of programs, and the like required to execute the above-described modules may be stored in the memory 110.
The present embodiment also provides a readable storage medium on which a computer program is stored, the computer program implementing the interface display method when executed by a processor.
To sum up, the embodiments of the present application provide an interface display method and apparatus, an electronic device, and a readable storage medium. After views to be superimposed are determined from the views used to generate an interface to be displayed, a first partial image corresponding to each view to be superimposed is obtained from the determined target interface background image according to the display position information of each view to be superimposed; then each view to be superimposed is superimposed with its corresponding first partial image at the first preset transparency to obtain a superimposition-processed view; finally, the interface to be displayed is generated and displayed based on the superimposition-processed views. In this way, the interface to be displayed can present the content of the target interface background image without impairing the display effect of each view, which ensures the display effect of the interface to be displayed and avoids the poor display that results when all views other than the bottom-level view are set to have no background or a transparent background.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. An interface display method is applied to electronic equipment, and the method comprises the following steps:
determining a target interface background image according to the received selection instruction;
determining views to be superposed from a plurality of views for generating an interface to be displayed;
according to the display position information of each view to be superposed, obtaining a first partial image corresponding to each view to be superposed from the target interface background image;
for each view to be superposed, superposing the view to be superposed and the corresponding first local image under the first preset transparency to obtain a superposed view;
and generating and displaying the interface to be displayed according to the view after the superposition processing.
2. The method according to claim 1, wherein the superimposing the view to be superimposed and the corresponding first local image under the first preset transparency to obtain a view after the superimposing process, includes:
if the view to be superposed comprises a view background, superposing a first local image under a first preset transparency on the view background to obtain a superposed view;
and if the view to be superposed does not comprise a view background, superposing the first local image under the first preset transparency on the view to be superposed to obtain a superposed view.
3. The method according to claim 1, before performing, for each view to be superimposed, a superimposition process on the view to be superimposed and the corresponding first partial image at the first preset transparency to obtain a view after the superimposition process, the method further includes:
obtaining the hierarchical information of each superposed view according to the parent-child relationship among the views in the plurality of views;
and setting a first preset transparency of the first local image corresponding to each view to be superposed according to the level information of each view to be superposed, wherein different levels correspond to different first preset transparencies.
4. The method according to claim 3, wherein the first preset transparency of the first partial image corresponding to each view to be superimposed is inversely proportional to a level value of a level of the view to be superimposed, wherein the level value of a lowest level view is smallest.
5. The method according to claim 1, wherein the determining the view to be superimposed from the plurality of views for generating the interface to be displayed comprises:
determining views of display areas including other views in the display areas from the plurality of views as views to be superposed according to the level information and the display position information of each view in the plurality of views, wherein the level information of each view is determined by parent-child relationship among the views in the plurality of views;
and determining views to be superposed from the determined views suspected to be superposed.
6. The method according to claim 5, wherein before the determining a view to be superimposed from the determined views suspected to be superimposed, the determining a plurality of views to be superimposed from the plurality of views for generating an interface to be displayed further comprises:
and taking the view comprising the view background as the suspected view to be superposed.
7. The method according to claim 5 or 6, wherein the determining a view to be superimposed from the determined views suspected to be superimposed comprises:
judging whether each suspected view to be superposed is an image with rich colors or not; if not, the suspected view to be superposed is taken as the view to be superposed.
8. The method of claim 7, wherein said determining whether the suspected view to be overlaid is a colorful image comprises:
determining a target image from the suspected view to be superposed, and calculating a hue distribution interval and a saturation distribution interval of the target image, wherein if the suspected view to be superposed comprises a view background, the target image is the view background; if the suspected view to be superposed does not comprise a view background, the target image is the suspected view to be superposed;
judging whether the hue distribution interval is larger than a preset hue distribution interval or not and whether the saturation distribution interval is larger than a preset saturation distribution interval or not;
if the hue distribution interval is larger than the preset hue distribution interval or the saturation distribution interval is larger than the preset saturation distribution interval, determining that the suspected view to be superposed is an image with rich colors;
and if the hue distribution interval is smaller than or equal to the preset hue distribution interval and the saturation distribution interval is smaller than or equal to the preset saturation distribution interval, judging that the suspected view to be superposed is not an image with rich colors.
9. The method according to claim 8, wherein the calculating of the hue distribution section and the saturation distribution section of the target image includes:
obtaining color values of all effective sampling points in the target image, wherein the transparency of the effective sampling points is greater than a second preset transparency;
and calculating to obtain the hue distribution interval and the saturation distribution interval according to the hue and the saturation corresponding to the color value of each effective sampling point.
10. The method according to claim 8, wherein before the calculating of the hue distribution interval and the saturation distribution interval of the target image, the determining whether the view suspected to be overlaid is a colorful image further comprises:
judging whether the target image is a pure color image or not;
if yes, executing the step of judging that the suspected view to be superposed is not an image with rich colors;
if not, executing the step of calculating the hue distribution interval and the saturation distribution interval of the target image.
11. The method of claim 1,
before generating and displaying the interface to be displayed according to the view after the superposition processing, the method further includes:
obtaining a second partial image corresponding to each view comprising characters from the target interface background image according to the display position information of each view comprising characters in the plurality of views;
aiming at each view comprising characters, adjusting the brightness of the characters in the view according to the brightness of a second local image corresponding to the view to obtain a view with adjusted contrast, wherein the adjustment is used for increasing the contrast between the characters and the second local image;
the generating and displaying the interface to be displayed according to the view after the superposition processing comprises the following steps:
and synthesizing the view subjected to superposition processing, the view subjected to contrast adjustment and the unprocessed view into the interface to be displayed according to the display position information and the hierarchy information of each view in the plurality of views, and displaying, wherein the hierarchy information of each view is determined by parent-child relationship among the views in the plurality of views.
12. An interface display device, applied to an electronic device, the device comprising:
the first determining module is used for determining a background image of the target interface according to the received selection instruction;
the second determining module is used for determining the view to be superposed from the multiple views for generating the interface to be displayed;
the local image acquisition module is used for acquiring a first local image corresponding to each view to be superposed from the target interface background image according to the display position information of each view to be superposed;
the processing module is used for carrying out superposition processing on the views to be superposed and the corresponding first local images under the first preset transparency aiming at the views to be superposed to obtain superposed views;
the processing module is further configured to generate and display the interface to be displayed according to the view subjected to the superposition processing.
13. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the interface display method of any one of claims 1-11.
14. A readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the interface display method according to any one of claims 1 to 11.
CN202010866786.0A 2020-08-25 2020-08-25 Interface display method and device, electronic equipment and readable storage medium Pending CN114090141A (en)
Publications (1)

Publication Number Publication Date
CN114090141A true CN114090141A (en) 2022-02-25

US11928757B2 (en) Partially texturizing color images for color accessibility

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination