CN112148409A - Window image effect realization method and device and storage medium - Google Patents


Info

Publication number
CN112148409A
CN112148409A
Authority
CN
China
Prior art keywords
window
layer window
sub
content
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011040043.4A
Other languages
Chinese (zh)
Inventor
李杨
黎俊聪
周丹薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011040043.4A priority Critical patent/CN112148409A/en
Publication of CN112148409A publication Critical patent/CN112148409A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • G06T3/04

Abstract

The embodiments of the application disclose a method, a device, and a storage medium for realizing an image effect of a window. The method comprises: when a sub-window is detected in a user operation window of a terminal operating system, determining a content layer window and a background layer window of the sub-window; hiding a target layer window of the sub-window, and capturing a background image corresponding to the background layer window from the user operation window; blurring the background image to obtain a blurred background image; generating a target sub-window with a preset image effect according to the content layer window and the blurred background image; and displaying the target sub-window in the user operation window. This realizes the preset image effect of the sub-window and improves the compatibility of the sub-window image effect across different operating systems.

Description

Window image effect realization method and device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for implementing an image effect of a window, and a storage medium.
Background
With the rapid development of image processing technology, a terminal running the Windows operating system (Windows) can render a window with various image effects, such as a frosted glass effect, a fade effect, and/or a shadow effect. Among these, the frosted glass effect is commonly used: it is an effect in which contours, colors, and the like appear blurred.
In the course of research and practice on the related art, the inventors of the present application found that, because the image effect of a window is closely tied to the graphics card driver, and the display drivers of different operating systems differ, not all operating systems can support these window image effects.
Disclosure of Invention
The embodiments of the application provide a method and a device for realizing the image effect of a window, and a storage medium, which can improve the compatibility of the image effect of a sub-window displayed on different operating systems.
The embodiments of the application provide a method for realizing an image effect of a window, comprising:
when a sub-window is detected in a user operation window of a terminal operating system, determining a content layer window and a background layer window of the sub-window;
hiding a target layer window of the sub-window, and capturing a background image corresponding to the background layer window from the user operation window, wherein the target layer window comprises the background layer window, or comprises both the background layer window and the content layer window;
blurring the background image to obtain a blurred background image;
generating a target sub-window with a preset image effect according to the content layer window and the blurred background image;
and displaying the target sub-window in the user operation window.
Correspondingly, the embodiments of the application further provide a device for realizing an image effect of a window, comprising:
a determining unit, configured to determine a content layer window and a background layer window of a sub-window when the sub-window is detected in a user operation window of a terminal operating system;
a capturing unit, configured to hide a target layer window of the sub-window and capture a background image corresponding to the background layer window from the user operation window, wherein the target layer window comprises the background layer window, or comprises both the background layer window and the content layer window;
a processing unit, configured to blur the background image to obtain a blurred background image;
a generating unit, configured to generate a target sub-window with a preset image effect according to the content layer window and the blurred background image;
and a display unit, configured to display the target sub-window in the user operation window.
In some embodiments, the determining unit comprises:
a first determining subunit, configured to determine a content layer window of the sub-window;
a detection subunit, configured to detect a transparent region in the content layer window;
and the construction subunit is used for constructing the background layer window corresponding to the content layer window based on the transparent area.
In some embodiments, the detection subunit is specifically configured to:
acquiring the transparency of each pixel point in a content layer window;
extracting pixel points with the transparency smaller than the preset transparency from all the pixel points of the content layer window to obtain target pixel points;
and determining the transparent area of the content layer window based on the target pixel point.
In some embodiments, the determining unit comprises:
a second determining subunit, configured to determine a content layer window of the sub-window;
an identification subunit, configured to perform content identification on the content layer window to obtain a content identification result comprising at least one content type;
and a first acquiring subunit, configured to acquire, from the content layer window, the region corresponding to the content type that matches a preset content type in the content identification result, to obtain the background layer window.
In some embodiments, the display unit includes:
a drawing subunit, configured to draw the blurred background image onto the background layer window to obtain an effect layer corresponding to the content layer window;
and a superposition subunit, configured to superpose the content layer window and the effect layer to obtain the target sub-window with the preset image effect.
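The superposition of the content layer window and the effect layer is, in essence, a standard per-pixel alpha-over composite. A minimal sketch under that assumption (the function name and pixel representation are illustrative, not the patent's implementation):

```python
def composite_over(content_rgba, effect_rgb):
    """Blend a content-layer pixel (r, g, b, a; alpha in 0-255) over
    the blurred effect-layer pixel (r, g, b) beneath it."""
    r, g, b, a = content_rgba
    alpha = a / 255.0  # 0 = fully transparent, 255 = fully opaque
    return tuple(
        round(fg * alpha + bg * (1.0 - alpha))
        for fg, bg in zip((r, g, b), effect_rgb)
    )

# A fully opaque content pixel hides the effect layer entirely;
# a fully transparent one lets the blurred background show through.
opaque = composite_over((255, 0, 0, 255), (0, 0, 255))   # (255, 0, 0)
see_through = composite_over((255, 0, 0, 0), (0, 0, 255))  # (0, 0, 255)
```

Applied per pixel, this yields a sub-window in which opaque content sits crisply on top of the blurred, frosted-glass background.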
In some embodiments, the processing unit includes:
a third determining subunit, configured to determine the adjacent pixel points of a pixel point to be processed in the background image, the pixel point to be processed being any pixel point in the background image;
a second acquiring subunit, configured to acquire the color parameter value of each adjacent pixel point;
a fourth determining subunit, configured to determine a target color parameter value of the pixel point to be processed based on the color parameter values of all the adjacent pixel points;
and an updating subunit, configured to update the current color parameter value of the pixel point to be processed according to the target color parameter value, so as to blur the background image.
In some embodiments, the fourth determining subunit is specifically configured to:
acquiring the number of adjacent pixel points of the pixel point to be processed to obtain a first value;
summing the color parameter values of all the adjacent pixel points to obtain a second value;
and calculating the ratio of the second value to the first value to obtain the target color parameter value of the pixel point to be processed.
Correspondingly, the embodiment of the application also provides a storage medium, wherein the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the method for realizing the image effect of the window.
According to the scheme of the embodiments, the content layer window and the background layer window corresponding to a sub-window of the user operation window are obtained, the background image corresponding to the background layer window is captured from the user operation window and blurred to obtain the image display effect of the sub-window, so that the preset image effect of the sub-window is realized and the efficiency of displaying the sub-window image can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a scene schematic diagram of a system for implementing an image effect of a window according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a method for implementing an image effect of a window according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a sub-window frame according to an embodiment of the present application.
Fig. 4 is a schematic diagram of pixel arrangement of a content layer window according to an embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating another method for implementing an image effect of a window according to an embodiment of the present application.
Fig. 6 is a schematic view of a user operation window interface provided in an embodiment of the present application.
Fig. 7 is a schematic diagram of a sub-window of a user operation window according to an embodiment of the present application.
Fig. 8 is a schematic diagram of pixel arrangement of a content layer according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a transparent area of a content layer according to an embodiment of the present application.
Fig. 10 is a block diagram of an apparatus for implementing an image effect of a window according to an embodiment of the present application.
Fig. 11 is a block diagram of a device for implementing an image effect of another window according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide a method and a device for realizing an image effect of a window, and a storage medium. Specifically, the embodiments provide an apparatus for implementing the image effect of a window, suitable for a computer device. The computer device may be a terminal or a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big-data and artificial-intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be connected directly or indirectly through wired or wireless communication; the application is not limited in this respect.
Referring to fig. 1, fig. 1 is a schematic view of a scene of a system for implementing an image effect of a window according to an embodiment of the present disclosure, which includes a terminal, where the terminal may be connected to a communication network, and the communication network may include a wireless network and a wired network, where the wireless network includes one or a combination of a wireless wide area network, a wireless local area network, a wireless metropolitan area network, and a wireless personal area network. The network includes network entities such as routers, gateways, etc.
The system for implementing the image effect of the window may include a terminal or a server. As shown in fig. 1, when a sub-window is detected in a user operation window of the terminal operating system, the content layer window of the sub-window is determined and the background layer window is obtained from it. The background image corresponding to the background layer window is captured from the user operation window and blurred to obtain a blurred background image. Finally, the sub-window with the preset image effect is displayed according to the content layer window and the blurred background image, which improves the compatibility of the sub-window image effect across different operating systems.
It should be noted that the scene schematic diagram of the image effect implementation system of the window shown in fig. 1 is only an example, and the image effect implementation system of the window and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
Based on the above problem, the embodiments of the present application provide a method and an apparatus for implementing an image effect of a window, and a storage medium, which can improve the display efficiency of the image effect of a sub-window. These are detailed below. It should be noted that the order in which the embodiments are described is not intended as a preferred order.
The embodiments of the present application provide a method for implementing an image effect of a window. The method may be executed by a terminal or a server; below it is described taking execution by a terminal as an example.
As shown in fig. 2, fig. 2 is a schematic flowchart of a method for implementing an image effect of a window according to an embodiment of the present application. The specific flow of the method for realizing the image effect of the window can be as follows:
101. When a sub-window is detected in a user operation window of the terminal operating system, determine a content layer window and a background layer window of the sub-window.
In the embodiment of the present application, the terminal may include various types, for example, the terminal may be a computer.
An operating system is system software that manages computer hardware resources, controls the running of other programs, and provides an interactive interface for the user. It is a key component of a computer system, responsible for basic tasks such as managing and configuring memory, prioritizing the supply and demand of system resources, controlling input and output devices, operating the network, and managing the file system. Operating systems range from simple to complex, from the embedded operating systems of mobile phones to the large operating systems of supercomputers.
For example, the operating system may be a Windows operating system. The Microsoft Windows operating system (Microsoft Windows) is a family of operating systems that are introduced by Microsoft corporation.
The user operation window may be a User Interface (UI), the medium for interaction and information exchange between a system and a user, converting between the internal form of information and a form acceptable to humans. A user interface sits between the user and the hardware so that the two can interact; its aim is to let the user operate the hardware conveniently and efficiently and complete the intended work through it. The term is broadly defined and includes human-computer interaction interfaces and graphical user interfaces; user interfaces exist wherever humans and machines exchange information.
For example, when a user uses the terminal, the interface displayed on the display screen of the terminal may be a user operation window, through which the user may perform various interactions with the terminal.
The sub-window may be another interface window appearing on the user operation window, and the area of the sub-window may be smaller than or equal to the area of the user operation window. The child window may be a window having an association relationship with the user operation window.
For example, the user operation window may be an operation interface of an application program of the terminal, and a user performs an operation on the operation interface of the application program to trigger a relevant window of the application program to be displayed, where the relevant window may be a sub-window of the operation interface of the application program.
In some embodiments, the sub-window may also be a window that has no association with the user operation window, for example, the user operation window may be a main display interface of the terminal, where the main display interface may include an application icon, a background picture, and the like, and the user may open an application interface corresponding to the application icon by clicking the application icon, and then the application interface may be a sub-window of the main display interface of the terminal.
Referring to fig. 3, fig. 3 is a schematic view of a sub-window frame structure according to an embodiment of the present disclosure. In fig. 3, a sub-window is displayed on a user operation window; structurally, the sub-window may be divided into a content layer and a background layer, implemented as two windows: a content layer window and a background layer window.
Among them, the content layer window (layered window) is a window mode supported since the Windows XP operating system. The content layer window displays the content of the sub-window and supports a semi-transparent window effect. The content layer window may contain semi-transparent pixels; in fig. 3, if the background layer window of the sub-window is hidden, the content layer window shows a semi-transparent effect, and the user operation window below the sub-window shows through as its background.
The background layer window realizes the different image display effects of the sub-window, such as a frosted glass image effect, which is the effect an image exhibits after Gaussian blurring. The background layer window can provide different frosted glass implementations depending on the terminal operating system.
For example, the Windows 7 operating system can support Aero (Aero is a user interface style used by the Windows operating system; it has a transparent-glass feel and, in addition to transparent interfaces, includes window effects such as real-time thumbnails and real-time animations), so the Aero frosted glass image effect may be used; the Windows 10 operating system can support frosted glass directly: the frosted glass image effect can be realized by setting the window background, and so on.
In some embodiments, the step of determining the content layer window and the background layer window of the sub-window may include:
determining a content layer window of the child window;
detecting a transparent region in a content layer window;
and constructing a background layer window corresponding to the content layer window based on the transparent area.
For example, the content layer window may include a content region a, a content region B, and a content region C, and when it is detected that the content region a has transparency, it may be determined that the content region a is a transparent region.
In some embodiments, the step of detecting a transparent region in the content layer window may comprise:
acquiring the transparency of each pixel point in a content layer window;
extracting pixel points with the transparency smaller than the preset transparency from all the pixel points of the content layer window to obtain target pixel points;
and determining the transparent area of the content layer window based on the target pixel point.
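The threshold test in the steps above can be sketched as follows; the function name and the flat list-of-alphas representation are illustrative, not part of the patent:

```python
def find_transparent_pixels(alphas, threshold):
    """Return the indices of pixels whose alpha value is below the
    preset transparency threshold (0 = fully transparent, 255 = fully
    opaque, as described in the text)."""
    return [i for i, a in enumerate(alphas) if a < threshold]

# Ten pixels: six at alpha 100, four at alpha 255, threshold 125,
# mirroring the worked example later in the text.
alphas = [100] * 6 + [255] * 4
targets = find_transparent_pixels(alphas, 125)  # indices 0..5
```

The region formed by the returned target pixel points is then taken as the transparent region of the content layer window.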
The content layer window may be a layer. (A layer is like a film containing elements such as text or graphics; layers are stacked in order and combined to form the final effect of a page. A layer can position its elements precisely, can hold text, pictures, tables, and plug-ins, and layers can be nested inside layers.) A layer is composed of many pixel points; for example, a content layer window layer may include 1000 pixel points.
A layer may include several channels: a red channel (Red), a green channel (Green), a blue channel (Blue), and a transparency channel (Alpha). Each color channel stores the information of one color component of the image; the colors of all the color channels are superimposed and mixed to produce the color of each pixel, and the transparency channel sets the display transparency of each pixel.
Thus the transparency of a pixel point is the value of the transparency channel of the layer at that pixel. It may take a value from 0 to 255, where 0 indicates that the pixel is fully transparent and 255 that it is fully opaque (completely visible).
For example, the content layer window may include 10 pixel points, whose transparency values in the transparency channel might be: 100, 100, 100, 100, 100, 100, 255, 255, 255, 255.
In the embodiment of the present application, the transparency of each pixel point in the content layer window is compared with a preset transparency, and the pixel points whose transparency is smaller than the preset transparency are determined to be the target pixel points of the content layer window; because the transparency values of the target pixel points then differ little from one another, a transparent region can be accurately selected from the content layer window.
Further, after the target pixel points of the content layer window are determined, the region they form within the content layer window can be taken as the transparent region of the content layer window.
For example, the transparency of pixel points 1 to 10 of the content layer window may be: 100, 100, 100, 100, 100, 100, 255, 255, 255, 255. If the preset transparency is 125, the pixel points below the preset transparency are pixel points 1 to 6, so the target pixel points of the content layer window are pixel points 1 to 6, and the region formed by pixel points 1 to 6 is the transparent region.
In some embodiments, when the content layer window contains many pixel points, reading the transparency of every pixel individually takes a long time and hurts efficiency. When the number of pixel points is large, for example 1000 or 2000, the pixel points of the content layer window may instead be sampled: a sample is read at fixed horizontal and vertical intervals, with the interval determined by the total number of pixels (the larger the total, the larger the interval), and the transparency of the skipped pixels adjacent to each sample point is taken to be the transparency of that sample point, improving efficiency.
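A one-dimensional sketch of this sampling idea, under the assumption that each skipped pixel simply inherits the alpha value of the nearest preceding sample (the function name and stride parameter are illustrative):

```python
def sampled_alphas(alphas, stride):
    """Approximate per-pixel transparency by reading only every
    `stride`-th pixel and propagating that sample to the skipped
    neighbouring pixels."""
    out = []
    for i in range(0, len(alphas), stride):
        sample = alphas[i]  # the only alpha actually read
        out.extend([sample] * min(stride, len(alphas) - i))
    return out

# Reads 3 of 5 pixels instead of all 5.
approx = sampled_alphas([10, 20, 30, 40, 50], 2)  # [10, 10, 30, 30, 50]
```

In two dimensions the same idea applies along both rows and columns, trading a little accuracy at region boundaries for far fewer channel reads.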
At this point, the transparent region of the content layer window has been determined, and the background layer window corresponding to the content layer window can then be obtained from the display position and display area of the transparent region within the user operation window.
In some embodiments, the step of determining the content layer window and the background layer window of the sub-window may further comprise:
determining a content layer window of the child window;
performing content identification on the content layer window to obtain a content identification result;
and acquiring an area corresponding to the content type which accords with the preset content type in the content identification result from the content layer window to obtain a background layer window.
The content layer window may include display contents of the sub-windows, and the display contents may include multiple types, for example, the display contents may include pictures, texts, and/or blanks. By performing content recognition on the content layer window, different types of display content areas included in the content layer window can be obtained.
For example, content identification is performed on a content layer window, and the content layer window including a text area and a blank area can be obtained, so as to obtain a content identification result of the content layer window.
Next, according to the content identification result of the content layer window, the region whose display content type matches the preset content type is determined from the display content of the content layer window, yielding the background layer window. The preset content type may be a content type whose importance is lower than that of text, such as picture or blank. Because the background layer window carries the image effect, keeping it apart from the text region of the content layer window ensures that, when the sub-window displays the preset image effect, the information the user is browsing is not obscured.
For example, the content layer window includes a text area and a blank area, and the preset type may be a blank type, and then the background layer window corresponding to the content layer window may be obtained according to a display position and a display area of the blank area in the content layer window in the user operation window.
102. Hide the target layer window of the sub-window, and capture the background image corresponding to the background layer window from the user operation window.
In some embodiments, the content layer window and the background layer window are in a stacked relationship, with the content layer window displayed above the background layer window in the user operation window. After the content layer window and the background layer window are determined, the background layer window can be hidden, that is, its display is stopped, so that the region of the user operation window corresponding to the background layer window can be conveniently captured.
The target layer window may be a background layer window of the sub-window, or may include a background layer window and a content layer window.
In an embodiment, hiding only the background layer window while keeping the content layer window displayed ensures the stability of the user operation window while the blurred background image is generated; no window flicker or similar artifacts occur.
In an embodiment, hiding both the background layer window and the content layer window while the blurred background image is generated ensures that the resulting effect is consistent with the operating system's native image effect, improving the user experience.
Specifically, the background image may be a screenshot, that is, an image obtained by performing a screen-capture operation on the user operation window. First, the display position and display area of the background layer window within the user operation window are determined to obtain the capture region; then, an image is captured from the user operation window according to the capture region, yielding the background image corresponding to the background layer window.
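The capture-region computation described above reduces to simple rectangle arithmetic. A hedged sketch (all names are illustrative); the resulting rectangle could, for example, be passed to a screen-capture API such as Pillow's `ImageGrab.grab(bbox=...)` on Windows:

```python
def capture_rect(window_origin, region_offset, region_size):
    """Compute the screen-space rectangle (left, top, right, bottom)
    to capture for the background-layer region.

    window_origin: (x, y) of the user operation window on screen.
    region_offset: (dx, dy) of the background layer window inside it.
    region_size:   (width, height) of the background layer window.
    """
    wx, wy = window_origin
    dx, dy = region_offset
    w, h = region_size
    left, top = wx + dx, wy + dy
    return (left, top, left + w, top + h)

# A 200x100 background-layer region offset (10, 20) inside a window
# whose top-left corner sits at (100, 50) on screen.
bbox = capture_rect((100, 50), (10, 20), (200, 100))  # (110, 70, 310, 170)
```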
103. Blur the background image to obtain a blurred background image.
The blurring may be Gaussian blurring, which adjusts pixel color values according to a Gaussian curve and selectively blurs the image. In other words, Gaussian blur collects the color values of the pixels around a given point according to a Gaussian curve and computes a mathematically weighted average of them, so that only the broad outline of the figure remains. The principle of Gaussian blur: the color value of each pixel in the image is replaced by an average of the color values of the surrounding pixels.
In some embodiments, the step of blurring the background image may include:
determining adjacent pixel points adjacent to the pixel points to be processed in the background image;
acquiring a color parameter value of each adjacent pixel point;
determining target color parameter values of the pixel points to be processed based on the color parameter values of all adjacent pixel points;
and updating the current color parameter value of the pixel point to be processed according to the target color parameter value so as to realize the fuzzy processing of the background image.
The pixel to be processed is any pixel in the background image, for example, the background image may include a first pixel, a second pixel, a third pixel and a fourth pixel, and the pixel to be processed may be the first pixel, the second pixel, the third pixel or the fourth pixel.
The adjacent pixel points may be the pixel points arranged next to the pixel point to be processed in the background image. Referring to fig. 4, fig. 4 is a schematic diagram illustrating a pixel arrangement of a content layer window according to an embodiment of the present disclosure, showing a partial pixel arrangement region of a background picture. In fig. 4, the pixels adjacent to pixel 1 are pixels 2 and 5; adjacent to pixel 2: pixels 1, 3, and 6; adjacent to pixel 3: pixels 2, 4, and 7; adjacent to pixel 4: pixels 3 and 8; adjacent to pixel 5: pixels 1 and 6; adjacent to pixel 6: pixels 2, 5, and 7; adjacent to pixel 7: pixels 3, 6, and 8; and adjacent to pixel 8: pixels 4 and 7.
For example, in fig. 4, if the pixel point 1 is a pixel point to be processed, it can be determined that the pixel point 2 and the pixel point 5 adjacent to the pixel point 1 are adjacent to the pixel point to be processed.
After the adjacent pixel point of the pixel point to be processed is determined, the color parameter value of the adjacent pixel point can be obtained, and the target color parameter value of the pixel point to be processed can be obtained by calculating the color parameter value of the adjacent pixel point.
In some embodiments, the step of determining the target color parameter value of the pixel point to be processed based on the color parameter values of all the adjacent pixel points may include:
acquiring the number of adjacent pixel points of the pixel points to be processed to obtain a first numerical value;
performing sum operation on the color parameter values of all adjacent pixel points to obtain a second numerical value;
and calculating the ratio of the second numerical value to the first numerical value to obtain the target color parameter value of the pixel point to be processed.
The number of the adjacent pixel points can be determined according to the pixel points adjacent to the pixel point to be processed.
For example, the pixels adjacent to the pixel to be processed may be pixel 2 and pixel 5, and then the number of the adjacent pixels is 2, that is, the first value is 2.
The color parameter values may include R (red), G (green), and B (blue) values.
For example, the color parameter values obtained for pixel point 2 may be R = 240, G = 156, B = 33, and those obtained for pixel point 5 may be R = 50, G = 180, B = 255. Summing the color parameter values of the adjacent pixel points gives R = 290, G = 336, B = 288, so the second values are 290, 336, and 288, respectively.
Further, the ratio of the second value to the first value is calculated, that is, the average value of the color parameter values of a plurality of adjacent pixels is calculated.
For example, if the number of adjacent pixel points is 2 and the summed color parameter values are R = 290, G = 336, B = 288, then dividing each sum by the count gives averages of R = 145, G = 168, B = 144. Based on this calculation, the target color parameter values of the pixel point to be processed are R = 145, G = 168, B = 144.
After the target color parameter value of the pixel point to be processed is determined, the current color parameter value of the pixel point to be processed can be adjusted, and specifically, the color parameter value of the pixel point to be processed can be updated to be the target color parameter value.
For example, if the current color parameter values of the pixel point to be processed are R = 100, G = 165, B = 100 and the target values are R = 145, G = 168, B = 144, the current values are updated to R = 145, G = 168, B = 144, and the blurred background image is thereby obtained.
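The neighbor-averaging steps above can be sketched as follows, using the neighbor lists of fig. 4 and the numeric example just given (a simplified mean blur; the representation of the image as a dict from pixel index to an (R, G, B) tuple is an assumption for illustration):

```python
# Adjacency lists for the pixel arrangement of fig. 4 (pixels 1-8).
NEIGHBORS = {
    1: [2, 5], 2: [1, 3, 6], 3: [2, 4, 7], 4: [3, 8],
    5: [1, 6], 6: [2, 5, 7], 7: [3, 6, 8], 8: [4, 7],
}

def blur_pixel(image, index):
    """Return the target (R, G, B) of one pixel: the per-channel integer
    mean of the color values of its adjacent pixels."""
    neighbors = NEIGHBORS[index]            # adjacent pixel indices
    count = len(neighbors)                  # first value: neighbor count
    sums = [0, 0, 0]
    for n in neighbors:                     # second value: per-channel sums
        for c in range(3):
            sums[c] += image[n][c]
    return tuple(s // count for s in sums)  # ratio: per-channel average

# Pixel 1 is the pixel to be processed; its neighbors are pixels 2 and 5,
# with the color values from the example above.
image = {1: (100, 165, 100), 2: (240, 156, 33), 5: (50, 180, 255)}
target = blur_pixel(image, 1)   # (145, 168, 144), as in the example
```

Updating pixel 1's current value (100, 165, 100) to this target value completes the blurring step for that pixel.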
104. And generating a target sub-window with a preset image effect according to the content layer window and the background image after the blurring processing.
The predetermined image effect may be any of a variety of image effects; for example, it may be a frosted glass effect, which can be obtained by blurring the image.
After the blurred background image is obtained, the frosted glass effect of the sub-window can be displayed based on the content layer window and the blurred background image.
In some embodiments, the step of displaying the sub-window having the predetermined image effect according to the content layer window and the background image after the blurring process may include:
drawing the background image after the fuzzy processing to a background layer window to obtain an effect layer corresponding to the content layer window;
and overlapping the content layer window and the effect layer to obtain a target sub-window with a preset image effect.
Drawing the blurred background image to the background layer window may mean copying the blurred background image into the corresponding area of the background layer window, thereby obtaining an effect layer corresponding to part or all of the content area of the content layer window. The effect layer can be used to present the specified image effect.
Superimposing the content layer window and the effect layer fuses the two: the display positions and display order (Z-order) of the content layer window and the effect layer (the background layer window) are synchronized through a window synchronization technique, and the target sub-window can then be determined, the target sub-window being the displayed sub-window with the predetermined image effect.
The window synchronization technique keeps the positions and the Z-order of the background layer window and the content layer window of the sub-window synchronized. Z-order is a computing term describing window ordering: the order of the windows in a sub-window chain is the front-to-back order in which they are displayed on the screen, with windows nearer the front of the chain drawn on top. Synchronizing the positions of the two windows keeps one window directly above the other.
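A minimal sketch of the superposition step (assuming each layer is a flat list of RGBA pixels and that the content layer's alpha channel decides where the blurred effect layer shows through — a simplification of the actual window compositing, not the patent's exact mechanism):

```python
def overlay(content, effect):
    """Compose the content layer over the effect layer.

    Both layers are equal-length lists of (R, G, B, A) tuples; where a
    content pixel is fully transparent (alpha == 0) the blurred effect
    pixel shows through, otherwise the content pixel is kept on top.
    """
    out = []
    for c, e in zip(content, effect):
        out.append(e if c[3] == 0 else c)
    return out

# One opaque content pixel and one transparent one over a blurred backdrop.
content = [(255, 0, 0, 255), (0, 0, 0, 0)]
effect = [(145, 168, 144, 255), (145, 168, 144, 255)]
composed = overlay(content, effect)
```

The transparent regions of the content layer are exactly where the frosted glass effect of the effect layer becomes visible in the target sub-window.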
105. And displaying the target sub-window in the user operation window.
After the target sub-window corresponding to the sub-window is determined, the target sub-window may be displayed in the user operation window, and then the sub-window with the predetermined image effect may be displayed.
The embodiment of the application discloses a method for realizing the image effect of a window: when a sub-window is detected in a user operation window of the terminal operating system, the content layer window and the background layer window of the sub-window are determined; the target layer window of the sub-window is hidden, and the background image corresponding to the background layer window is captured from the user operation window; the background image is blurred to obtain a blurred background image; a target sub-window with a predetermined image effect is generated from the content layer window and the blurred background image; and the target sub-window is displayed in the user operation window. In this scheme, the content layer and the background layer corresponding to the sub-window of the user operation window are obtained, the image corresponding to the content layer is captured from the user operation window, and the image is blurred to produce the display effect of the sub-window, realizing the predetermined image effect of the sub-window and improving the user experience.
Based on the above description, the method for implementing the image effect of a window of the present application is further described below by way of example. In this embodiment, the image effect realization device of the window is described as being integrated in a terminal.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating another method for implementing an image effect of a window according to an embodiment of the present disclosure. The specific process can be as follows:
201. and the terminal receives a sub-window creating instruction triggered by a user in the user operation window.
In the embodiment of the present application, the sub-window creating instruction may be used to generate a sub-window in the current user operation window. Specifically, the terminal may detect an operation of a user in the user operation window, acquire user operation information, and trigger the sub-window creation instruction according to the user operation information.
For example, the current user operation window may be a terminal operating system interface displaying a number of application icons, a background picture, and a taskbar. Referring to fig. 6, fig. 6 is a schematic view of a user operation window interface provided in the embodiment of the present application. In fig. 6, the user operation window may include an upper background picture area and a lower taskbar area. The background picture area may contain the application icons: application A, application B, application C, and application D; the taskbar area may include a setup control, a search control, the network status, the volume, the time, and the like.
When the terminal detects that the user operates the application icon in the user operation window shown in fig. 6, a sub-window creating instruction may be received, where the sub-window creating instruction may be used to open an application corresponding to the application icon operated by the user.
202. And the terminal determines the content layer of the sub-window based on the sub-window creation instruction.
After the terminal receives the sub-window creation instruction, the sub-window to be created can be determined according to user operation.
For example, in the current user operation window, when the terminal detects that the user operates the application a icon, it may be determined that the user wants to open the application a, and it may be determined that the application program interface of the application a is a child window that needs to be created. Referring to fig. 7, fig. 7 is a schematic view of a sub-window of a user operation window according to an embodiment of the present application. Fig. 7 shows that an application a window corresponding to the application a is displayed as a child window on the current user operation window.
Further, after the terminal determines the sub-window, a content layer of the sub-window may be obtained, where the content layer may belong to one layer in a frame of the sub-window. The area of the content layer may be equal to the area of the sub-window, and the content layer may include all contents of the sub-window, such as text, pictures, and the like.
203. And the terminal detects the transparent area of the content layer and intercepts a background image corresponding to the transparent area from the current user operation window.
In this embodiment of the application, the terminal may detect the transparent area of the content layer by pixel sampling, that is, by traversing the layer's pixel points in row-and-column order and taking a sample every preset number of pixel points.
For example, please refer to fig. 8; fig. 8 is a schematic diagram illustrating a pixel arrangement of a content layer according to an embodiment of the present disclosure. In the content layer, the pixel points are arranged 9x7, 63 pixel points in total. Sampling the pixel points in the content layer every other pixel point in row-and-column order then yields the sample pixel points: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59, 61, and 63.
After the sample pixel points collected in the content layer are determined, their transparency can be obtained. Specifically, the transparency of a sample pixel point can be determined by reading its transparency parameter value in the transparency channel of the content layer. For example, for the sample pixel points 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59, 61, and 63, the corresponding transparency parameter values may be 0 for sample pixel points 1 through 17 and 255 for the remaining samples, where a smaller transparency parameter value indicates a more transparent pixel point. The transparent pixel points are then determined to be 1, 3, 5, 7, 9, 11, 13, 15, and 17, and from these transparent sample pixel points the transparent area can be determined to be pixel points 1 to 18, that is, the positions of the first and second rows of pixel points in the content layer.
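The sampling-based detection can be sketched as follows (assuming the content layer's transparency channel is a flat list of values indexed from 1, a sampling interval of 2 as in the example, and that any value below 255 counts as transparent — the function name and threshold are illustrative):

```python
def sample_transparent_pixels(alphas, interval=2, threshold=255):
    """Sample every `interval`-th pixel point (1-indexed) and return the
    indices whose transparency parameter value is below the threshold
    (a smaller value means a more transparent pixel)."""
    return [i for i in range(1, len(alphas) + 1, interval)
            if alphas[i - 1] < threshold]

# 9x7 content layer (63 pixels): the first two rows (pixels 1-18) are
# fully transparent (value 0), the remaining 45 pixels are opaque (255).
alphas = [0] * 18 + [255] * 45
transparent = sample_transparent_pixels(alphas)   # the odd indices 1..17
```

From these transparent samples, the contiguous transparent area (pixels 1 to 18, i.e. the first two rows) can then be reconstructed as described above.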
After the terminal determines the transparent area of the content layer by detecting the content layer, an image corresponding to the transparent area may be intercepted from the user operation window to obtain a background image.
For example, please refer to fig. 9, and fig. 9 is a schematic diagram of a transparent area of a content layer according to an embodiment of the present application. In fig. 9, the content layer includes a transparent area and a non-transparent area, and an image corresponding to the transparent area may be captured from the current user operation window according to the display position and the display area of the transparent area in the user operation window, so as to obtain a background image corresponding to the content layer.
204. And the terminal carries out fuzzy processing on the background image to obtain a background image after the fuzzy processing.
For the fuzzy processing, reference may be made to the detailed description of the above embodiment, which is not repeated in this step.
Specifically, the terminal may obtain all the pixel points of the background image; for example, the background image may be a 9x2 arrangement (2 rows of 9 pixel points, 18 pixel points in total). After all the pixel points of the background image are determined, the terminal can select a pixel point to be processed every other pixel point, in row-and-column order, and the pixel points to be processed are then: 1, 3, 5, 7, 9, 11, 13, 15, and 17.
Furthermore, the terminal can acquire the color value of each adjacent pixel point adjacent to the pixel point to be processed, and the target color value of the pixel point to be processed is determined according to the color value of the adjacent pixel point of the pixel point to be processed, so that the background image can be subjected to fuzzy processing, and the background image subjected to fuzzy processing is obtained.
For example, the adjacent pixel points of pixel point 1 to be processed are 2 and 10; of pixel point 3: 2, 4, and 12; of pixel point 5: 4, 6, and 14; of pixel point 7: 6, 8, and 16; of pixel point 9: 8 and 18; of pixel point 11: 10, 2, and 12; of pixel point 13: 12, 4, and 14; of pixel point 15: 6, 14, and 16; of pixel point 17: 16, 8, and 18. The average of the color values of the adjacent pixel points of each pixel point to be processed can then be calculated and used as that pixel point's target color value; after the adjustment, the blurred background image is obtained.
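For a grid of known width, the neighbor lists above can be generated rather than enumerated by hand. A sketch under the 1-indexed, row-major indexing assumption used in this example:

```python
def grid_neighbors(index, width, height):
    """4-connected neighbors of a 1-indexed pixel in a width x height grid."""
    row, col = divmod(index - 1, width)
    out = []
    if col > 0:
        out.append(index - 1)           # left neighbor
    if col < width - 1:
        out.append(index + 1)           # right neighbor
    if row > 0:
        out.append(index - width)       # neighbor above
    if row < height - 1:
        out.append(index + width)       # neighbor below
    return out

# 9x2 background image from the example: pixel 11 sits in row 2, column 2,
# so its neighbors are 10 (left), 12 (right), and 2 (above).
neighbors = grid_neighbors(11, width=9, height=2)
```

This reproduces the hand-listed adjacency, e.g. pixel 1 (a corner) has only the neighbors 2 and 10.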
205. And the terminal performs fusion processing on the content layer and the background image after the fuzzy processing to obtain a target sub-window with a preset image effect.
In this embodiment of the application, after the terminal determines the blurred background image corresponding to the content layer, the blurred background image may be drawn into a background layer of the sub-window. The background layer may be one layer in the frame of the sub-window, used to display the image effect of the sub-window; its area may be smaller than or equal to the area of the sub-window. The background layer and the content layer may then be displayed in an overlapping manner, using a window synchronization technique, to form the sub-window.
The content layer can be positioned above the background layer so that the user can conveniently browse the content of the sub-window. In this way, the frosted glass image effect of the sub-window is realized and the target sub-window is obtained.
206. And displaying the target sub-window in the user operation window.
After the terminal generates the target sub-window with the predetermined image effect, the target sub-window can be displayed in the current user operation window. Because the target sub-window has a frosted glass image effect, the user can see, through the frosted glass area of the target sub-window, the content of the region of the user operation window that the target sub-window covers. The user thus experiences the image effect of the window while browsing, and the completeness of the window information the user browses is preserved.
The embodiment of the application discloses a method for realizing the image effect of a window. A terminal receives a sub-window creation instruction triggered by a user in a user operation window, determines the content layer of the sub-window based on the instruction, detects the transparent area of the content layer, and captures the background image corresponding to the transparent area from the current user operation window. The background image is blurred to obtain a blurred background image, the content layer window and the blurred background image are fused into a target sub-window with a predetermined image effect, and the target sub-window is displayed in the user operation window. This realizes the frosted glass image effect in the user operation windows of different terminal operating systems and improves the display effect of the user interface.
In order to better implement the method for implementing the image effect of the window provided by the embodiment of the present application, the embodiment of the present application further provides a device for implementing the image effect of the window based on the method for implementing the image effect of the window. The meaning of the noun is the same as that in the image effect implementation method of the window, and specific implementation details can refer to the description in the method embodiment.
Referring to fig. 10, fig. 10 is a block diagram of an image effect implementation apparatus for a window according to an embodiment of the present disclosure. The image effect implementation apparatus for a window may be applied to a terminal such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, or a smart watch, or to a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms. The apparatus may include:
a determining unit 301, configured to determine a content layer window and a background layer window of a child window when it is detected that the child window appears in a user operation window of the terminal operating system;
an intercepting unit 302, configured to hide a target layer window of a child window, and intercept a background image corresponding to a background layer window from a user operation window, where the target layer window includes the background layer window, or the target layer window includes the background layer window and the content layer window;
the processing unit 303 is configured to perform a blurring process on the background image to obtain a background image after the blurring process;
a generating unit 304, configured to generate a target sub-window with a predetermined image effect according to the content layer window and the background image after the blurring processing;
and the display unit 305 is configured to display the target sub-window in the user operation window.
In some embodiments, please refer to fig. 11, and fig. 11 is a block diagram illustrating a structure of an apparatus for implementing an image effect of a window according to an embodiment of the present disclosure. The determining unit 301 may include:
a first determining subunit 3011, configured to determine a content layer window of the sub-window;
a detection subunit 3012, configured to detect a transparent region in the content layer window;
and the constructing subunit 3013 is configured to construct, based on the transparent region, the background layer window corresponding to the content layer window.
In some embodiments, the detection subunit 3012 may be specifically configured to: acquiring the transparency of each pixel point in the content layer window; extracting, from all the pixel points of the content layer window, the pixel points whose transparency is smaller than a preset transparency to obtain target pixel points; and determining the transparent area of the content layer window based on the target pixel points.
In some embodiments, the determining unit 301 may further include:
a second determining subunit, configured to determine a content layer window of the sub-window;
the identification subunit is used for carrying out content identification on the content layer window to obtain a content identification result, and the content identification result comprises at least one content type;
and the first obtaining subunit is used for obtaining an area corresponding to the content type which accords with the preset content type in the content identification result from the content layer window to obtain a background layer window.
In some embodiments, the generating unit 304 may include:
the drawing subunit is used for drawing the background image after the fuzzy processing to the background layer window to obtain an effect layer corresponding to the content layer window;
and the superposition sub-unit is used for carrying out superposition processing on the content layer window and the effect layer to obtain a target sub-window with a preset image effect.
In some embodiments, the processing unit 303 may include:
the third determining subunit is used for determining adjacent pixel points adjacent to the pixel points to be processed in the background image, wherein the pixel points to be processed are any pixel points in the background image;
the second acquiring subunit is used for acquiring the color parameter value of each adjacent pixel point;
the fourth determining subunit is used for determining the target color parameter value of the pixel point to be processed based on the color parameter values of all the adjacent pixel points;
and the updating subunit is used for updating the current color parameter value of the pixel point to be processed according to the target color parameter value so as to realize the fuzzy processing of the background image.
In some embodiments, the fourth determining subunit may be specifically configured to: acquiring the number of adjacent pixel points of the pixel point to be processed to obtain a first numerical value; performing a sum operation on the color parameter values of all the adjacent pixel points to obtain a second numerical value; and calculating the ratio of the second numerical value to the first numerical value to obtain the target color parameter value of the pixel point to be processed.
The embodiment of the application discloses a device for realizing the image effect of a window. When a sub-window is detected in a user operation window of the terminal operating system, the determining unit 301 determines the content layer window and the background layer window of the sub-window; the intercepting unit 302 hides the target layer window of the sub-window and captures the background image corresponding to the background layer window from the user operation window, where the target layer window includes the background layer window, or includes both the background layer window and the content layer window; the processing unit 303 blurs the background image to obtain a blurred background image; the generating unit 304 generates a target sub-window with a predetermined image effect from the content layer window and the blurred background image; and finally the display unit 305 displays the target sub-window in the user operation window. The image display effect of each sub-window of the user operation window is thereby realized, and the compatibility of the sub-window image effect displayed on different operating systems can be effectively improved.
An embodiment of the present application also provides a computer device, which may be a terminal, and as shown in fig. 12, the terminal may include a Radio Frequency (RF) circuit 801, a memory 802 including one or more computer-readable storage media, an input unit 803, a display unit 804, a sensor 805, an audio circuit 806, a Wireless Fidelity (WiFi) module 807, a processor 808 including one or more processing cores, and a power supply 809. Those skilled in the art will appreciate that the terminal structure shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 801 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receive downlink information from a base station and then send the received downlink information to one or more processors 808 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 801 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 801 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 802 may be used to store software programs and modules, and the processor 808 may execute various functional applications and data processing by operating the software programs and modules stored in the memory 802. The memory 802 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 802 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 802 may also include a memory controller to provide the processor 808 and the input unit 803 access to the memory 802.
The input unit 803 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in a particular embodiment, the input unit 803 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 808, and can receive and execute commands sent by the processor 808. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 803 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 804 may be used to display information input by or provided to a user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 804 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 808 to determine the type of the touch event, and then the processor 808 provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 12 the touch sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement input and output functions.
The terminal may also include at least one sensor 805, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display panel according to the brightness of the ambient light, and a proximity sensor, which can turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally along three axes) and, when the terminal is stationary, the magnitude and direction of gravity; it can be used in applications that recognize the terminal's attitude (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer and tapping). Other sensors that can be configured in the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here.
Audio circuitry 806, a speaker, and a microphone may provide an audio interface between the user and the terminal. On one hand, the audio circuit 806 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which the audio circuit 806 receives and converts into audio data. The audio data is then output to the processor 808 for processing and sent, for example, to another terminal via the RF circuit 801, or output to the memory 802 for further processing. The audio circuit 806 may also include an earphone jack so that a peripheral headset can communicate with the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 807, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although fig. 12 shows the WiFi module 807, it is understood that the module is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 808 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 802 and calling the data stored in the memory 802, thereby monitoring the terminal as a whole. Optionally, the processor 808 may include one or more processing cores; preferably, the processor 808 may integrate an application processor, which mainly handles the operating system, user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 808.
The terminal also includes a power supply 809 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 808 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 809 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described here. Specifically, in this embodiment, the processor 808 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 802 according to the following instructions, and the processor 808 runs the application programs stored in the memory 802, thereby implementing various functions:
when it is detected that a sub-window appears in a user operation window of the terminal operating system, determining a content layer window and a background layer window of the sub-window;
hiding a target layer window of the sub-window, and capturing a background image corresponding to the background layer window from the user operation window;
blurring the background image to obtain a blurred background image;
and displaying the sub-window with a preset image effect according to the content layer window and the blurred background image.
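For illustration only, the four steps above can be sketched in Python on a toy raster (nested lists of grayscale values). The function names (`crop`, `blur_pixel`, `frosted_subwindow`) and the use of `None` to mark transparent content pixels are assumptions of this sketch, not part of the disclosed implementation:

```python
def crop(img, left, top, right, bottom):
    """Capture the background region behind the sub-window from a
    screenshot of the user operation window (the capturing step)."""
    return [row[left:right] for row in img[top:bottom]]

def blur_pixel(img, x, y):
    """Average of the 3x3 neighborhood around (x, y): a minimal box blur
    standing in for the blurring step."""
    h, w = len(img), len(img[0])
    vals = [img[ny][nx]
            for ny in range(max(0, y - 1), min(h, y + 2))
            for nx in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def frosted_subwindow(screenshot, content, box):
    """End-to-end sketch: capture the background image behind the
    sub-window, blur it, then show the content layer on top
    (None marks a transparent content pixel)."""
    bg = crop(screenshot, *box)
    blurred = [[blur_pixel(bg, x, y) for x in range(len(bg[0]))]
               for y in range(len(bg))]
    return [[c if c is not None else blurred[y][x]
             for x, c in enumerate(crow)]
            for y, crow in enumerate(content)]

# Toy example: a uniform 4x4 "screenshot" and a 2x2 content layer whose
# off-diagonal pixels are transparent.
screenshot = [[100] * 4 for _ in range(4)]
content = [[255, None], [None, 255]]
result = frosted_subwindow(screenshot, content, (1, 1, 3, 3))
```

In the result, opaque content pixels are kept and transparent ones show the blurred background captured behind the sub-window.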
The embodiment of the application discloses a method and a device for implementing an image effect of a window, and a storage medium. The method includes: when a sub-window is detected in a user operation window of a terminal operating system, determining a content layer window and a background layer window of the sub-window; hiding a target layer window of the sub-window, and capturing a background image corresponding to the background layer window from the user operation window; blurring the background image to obtain a blurred background image; generating a target sub-window with a preset image effect according to the content layer window and the blurred background image; and displaying the target sub-window in the user operation window. In this scheme, the content layer and the background layer corresponding to the sub-window of the user operation window are obtained, the image corresponding to the background layer is captured from the user operation window, and the captured image is blurred to produce the image display effect of the sub-window. This achieves the preset image effect of the sub-window and improves the compatibility of the sub-window image effect across different operating systems.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be completed by instructions, or by instructions controlling the associated hardware; the instructions may be stored in a storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium in which a plurality of instructions are stored; the instructions can be loaded by a processor to execute the steps in any of the window image effect implementation methods provided in the present application. For example, the instructions may perform the following steps:
when it is detected that a sub-window appears in a user operation window of the terminal operating system, determining a content layer window and a background layer window of the sub-window; hiding a target layer window of the sub-window, and capturing a background image corresponding to the background layer window from the user operation window; blurring the background image to obtain a blurred background image; generating a target sub-window with a preset image effect according to the content layer window and the blurred background image; and displaying the target sub-window in the user operation window.
The above operations are implemented as described in the foregoing embodiments and are not detailed here.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any window image effect implementation method provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any such method; for details, see the foregoing embodiments, which are not repeated here.
According to an aspect of the application, a computer program product or computer program is provided that comprises computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method provided in the above window image effect implementation aspect or in its various alternative implementations.
The method, apparatus, and storage medium for implementing the image effect of a window provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principle and implementation of the present application; the description of the above embodiments is only intended to help in understanding the method and core idea of the present application. Meanwhile, those skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for implementing an image effect of a window, characterized by comprising:
when it is detected that a sub-window appears in a user operation window of a terminal operating system, determining a content layer window and a background layer window of the sub-window;
hiding a target layer window of the sub-window, and capturing a background image corresponding to the background layer window from the user operation window, wherein the target layer window comprises the background layer window, or the target layer window comprises the background layer window and the content layer window;
blurring the background image to obtain a blurred background image;
generating a target sub-window with a preset image effect according to the content layer window and the blurred background image;
and displaying the target sub-window in the user operation window.
2. The method of claim 1, wherein determining the content layer window and the background layer window of the sub-window comprises:
determining a content layer window of the sub-window;
detecting a transparent region in the content layer window;
and constructing the background layer window corresponding to the content layer window based on the transparent region.
3. The method of claim 2, wherein the detecting a transparent region in the content layer window comprises:
acquiring the transparency of each pixel point in the content layer window;
extracting, from all the pixel points of the content layer window, pixel points whose transparency is less than a preset transparency, to obtain target pixel points;
and determining the transparent region of the content layer window based on the target pixel points.
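Claim 3 can be illustrated with a small sketch. The function name is hypothetical, and the per-pixel "transparency" is assumed to be a value in 0-255 where a smaller value means a more transparent pixel, consistent with extracting pixels below the preset threshold:

```python
def transparent_region(alpha, preset):
    """Collect the coordinates of target pixel points whose transparency
    value is below the preset threshold; together they make up the
    transparent region of the content layer window."""
    return [(x, y)
            for y, row in enumerate(alpha)
            for x, a in enumerate(row)
            if a < preset]

# Per-pixel transparency values for a 3x2 content layer window
# (smaller value = more transparent, by the assumption above).
alpha = [[255, 255, 0],
         [255,  10, 0]]
pts = transparent_region(alpha, preset=32)
```

The returned coordinates mark the region over which the background layer window is constructed in claim 2.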
4. The method of claim 1, wherein determining the content layer window and the background layer window of the sub-window comprises:
determining a content layer window of the sub-window;
performing content recognition on the content layer window to obtain a content recognition result, wherein the content recognition result comprises at least one content type;
and acquiring, from the content layer window, a region corresponding to a content type in the content recognition result that matches a preset content type, to obtain the background layer window.
5. The method of claim 1, wherein generating the target sub-window with the preset image effect according to the content layer window and the blurred background image comprises:
drawing the blurred background image onto the background layer window to obtain an effect layer corresponding to the content layer window;
and overlapping the content layer window and the effect layer to obtain the target sub-window with the preset image effect.
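A minimal sketch of the overlapping step in claim 5, assuming standard source-over alpha compositing of an RGBA content pixel on top of an opaque effect-layer pixel (the helper names are hypothetical; the claim does not fix a particular compositing rule):

```python
def over(content_px, effect_px):
    """Source-over compositing of one RGBA content pixel (alpha 0..255)
    on top of an opaque RGB effect-layer pixel."""
    r, g, b, a = content_px
    er, eg, eb = effect_px
    t = a / 255.0
    return (round(r * t + er * (1 - t)),
            round(g * t + eg * (1 - t)),
            round(b * t + eb * (1 - t)))

def overlay(content, effect):
    """Overlap the content layer window on the effect layer, pixel by
    pixel, to obtain the target sub-window."""
    return [[over(c, e) for c, e in zip(crow, erow)]
            for crow, erow in zip(content, effect)]

# A half-transparent white content pixel over a bluish effect pixel.
out = overlay([[(255, 255, 255, 128)]], [[(0, 0, 100)]])
```

Where the content layer is fully transparent (alpha 0), the blurred effect layer shows through unchanged, producing the frosted-glass look.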
6. The method of claim 1, wherein blurring the background image comprises:
determining neighboring pixel points adjacent to a to-be-processed pixel point in the background image, wherein the to-be-processed pixel point is any pixel point in the background image;
acquiring a color parameter value of each neighboring pixel point;
determining a target color parameter value of the to-be-processed pixel point based on the color parameter values of all the neighboring pixel points;
and updating the current color parameter value of the to-be-processed pixel point according to the target color parameter value, so as to blur the background image.
7. The method according to claim 6, wherein determining the target color parameter value of the to-be-processed pixel point based on the color parameter values of all the neighboring pixel points comprises:
acquiring the number of neighboring pixel points of the to-be-processed pixel point to obtain a first value;
summing the color parameter values of all the neighboring pixel points to obtain a second value;
and calculating the ratio of the second value to the first value to obtain the target color parameter value of the to-be-processed pixel point.
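The ratio in claims 6 and 7 is simply the arithmetic mean of the neighbors' values, i.e. a box blur. A minimal single-channel sketch follows (hypothetical names; the claims leave the neighborhood shape open, so a 3x3 neighborhood excluding the center pixel is assumed here):

```python
def blur_value(img, x, y):
    """Target color parameter value for the pixel at (x, y): the sum of
    the adjacent pixels' values (second value) divided by their count
    (first value)."""
    h, w = len(img), len(img[0])
    neighbors = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue  # the to-be-processed pixel itself is not a neighbor
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                neighbors.append(img[ny][nx])
    first = len(neighbors)   # number of adjacent pixel points
    second = sum(neighbors)  # sum of their color parameter values
    return second / first    # ratio = target color parameter value

def box_blur(img):
    """Blur a single-channel image by updating every pixel to its
    neighbors' average (updates are written to a copy, so each pixel is
    computed from the original values)."""
    return [[blur_value(img, x, y) for x in range(len(img[0]))]
            for y in range(len(img))]

# A 3x3 channel with one bright pixel in the center.
img = [[0, 0, 0],
       [0, 80, 0],
       [0, 0, 0]]
out = box_blur(img)
```

For a full-color image the same averaging would be applied per color parameter (e.g. separately to R, G, and B).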
8. An apparatus for implementing an image effect of a window, characterized by comprising:
a determining unit, configured to determine a content layer window and a background layer window of a sub-window when it is detected that the sub-window appears in a user operation window of a terminal operating system;
a capturing unit, configured to hide a target layer window of the sub-window and capture a background image corresponding to the background layer window from the user operation window, wherein the target layer window comprises the background layer window, or the target layer window comprises the background layer window and the content layer window;
a processing unit, configured to blur the background image to obtain a blurred background image;
a generating unit, configured to generate a target sub-window with a preset image effect according to the content layer window and the blurred background image;
and a display unit, configured to display the target sub-window in the user operation window.
9. The apparatus of claim 8, wherein the determining unit comprises:
a first determining subunit, configured to determine a content layer window of the sub-window;
a detection subunit, configured to detect a transparent region in the content layer window;
and a construction subunit, configured to construct the background layer window corresponding to the content layer window based on the transparent region.
10. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the method for implementing an image effect of a window according to any one of claims 1 to 7.
CN202011040043.4A 2020-09-28 2020-09-28 Window image effect realization method and device and storage medium Pending CN112148409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011040043.4A CN112148409A (en) 2020-09-28 2020-09-28 Window image effect realization method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011040043.4A CN112148409A (en) 2020-09-28 2020-09-28 Window image effect realization method and device and storage medium

Publications (1)

Publication Number Publication Date
CN112148409A true CN112148409A (en) 2020-12-29

Family

ID=73895667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011040043.4A Pending CN112148409A (en) 2020-09-28 2020-09-28 Window image effect realization method and device and storage medium

Country Status (1)

Country Link
CN (1) CN112148409A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327725A (en) * 2021-12-28 2022-04-12 珠海豹趣科技有限公司 Personalized taskbar display method and device applied to Windows10 system
CN114327725B (en) * 2021-12-28 2024-03-22 珠海豹趣科技有限公司 Personalized taskbar display method and device applied to Windows10 system

Similar Documents

Publication Publication Date Title
CN107369197B (en) Picture processing method, device and equipment
CN109002243B (en) Image parameter adjusting method and terminal equipment
US10269160B2 (en) Method and apparatus for processing image
RU2632153C2 (en) Method, device and terminal for displaying virtual keyboard
CN110795666B (en) Webpage generation method, device, terminal and storage medium
US10775979B2 (en) Buddy list presentation control method and system, and computer storage medium
CN112000269B (en) Screen opening method and device and electronic equipment
CN108549519B (en) Split screen processing method and device, storage medium and electronic equipment
CN109491738B (en) Terminal device control method and terminal device
US20150089431A1 (en) Method and terminal for displaying virtual keyboard and storage medium
CN110458921B (en) Image processing method, device, terminal and storage medium
JP2023526618A (en) Unread message display method, device and electronic device
CN108595089A (en) A kind of virtual key control method and mobile terminal
CN110045890B (en) Application identifier display method and terminal equipment
CN110908554B (en) Long screenshot method and terminal device
CN109284041A (en) A kind of application interface switching method and mobile terminal
CN111127595A (en) Image processing method and electronic device
CN108920069A (en) A kind of touch operation method, device, mobile terminal and storage medium
CN107066268A (en) The display location switching method and device of widget application
WO2023025121A1 (en) Display method and apparatus, electronic device, and readable storage medium
CN109992337B (en) Webpage display method and device and storage medium
WO2021128929A1 (en) Image rendering method for panorama application, and terminal device
CN109614173B (en) Skin changing method and device
CN113313804A (en) Image rendering method and device, electronic equipment and storage medium
CN110908757B (en) Method and related device for displaying media content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination