CN113254141B - Image generation method, image generation device, electronic equipment and storage medium - Google Patents

Image generation method, image generation device, electronic equipment and storage medium

Info

Publication number
CN113254141B
CN113254141B (application CN202110706913.5A)
Authority
CN
China
Prior art keywords
image
area
target
screen capture
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110706913.5A
Other languages
Chinese (zh)
Other versions
CN113254141A (en)
Inventor
陈睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiaomi Communication Technology Co ltd
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Shenzhen Xiaomi Communication Technology Co ltd
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xiaomi Communication Technology Co Ltd and Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110706913.5A
Publication of CN113254141A
Application granted
Publication of CN113254141B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image generation method, an image generation device, electronic equipment and a storage medium. The method includes: performing a screen capture processing operation on a page to be captured to obtain a screen capture interface area, the screen capture interface area corresponding to a screen capture interface image; identifying an area to be processed from the screen capture interface area; blurring the area image corresponding to the area to be processed to obtain a target area image; and synthesizing a target image according to the target area image and the screen capture interface image. The present disclosure can effectively improve the generation efficiency of screenshot images, the efficiency and effect of screen capture, and the overall screen capture experience.

Description

Image generation method, image generation device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to an image generation method and apparatus, an electronic device, and a storage medium.
Background
A typical method for capturing the screen of an electronic device is to read the screen capture interface image directly from a frame buffer after a screen capture operation and then display it. The frame buffer is an interface provided by the operating system of the electronic device; it shields differences in the underlying graphics hardware and allows upper-layer applications to read and write the display buffer area directly in graphics mode. In some application scenarios, the screenshot interface image may contain private content.
In the related art, the user usually has to intervene manually to blur the private content in the screenshot interface image, so as to obtain the blurred image as the result of the screen capture operation.
In this way, blurring the image after the screen capture operation is inefficient, which degrades both the blurring efficiency and the quality of the generated image.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present disclosure is to provide an image generation method, an image generation apparatus, an electronic device, and a storage medium, which can efficiently perform adaptive blurring processing on a screen capture interface image, thereby effectively improving the blurring processing efficiency of the image and effectively improving the image generation effect.
In order to achieve the above object, an embodiment of the first aspect of the present disclosure provides an image generation method, including: executing screen capture processing operation on a to-be-captured screen page to obtain a screen capture interface area, wherein the screen capture interface area corresponds to a screen capture interface image; identifying a region to be processed from the screen capture interface region; blurring the area image corresponding to the area to be processed to obtain a target area image; and synthesizing a target image according to the target area image and the screen capture interface image.
In the image generation method provided by the embodiment of the first aspect of the disclosure, a screen capture processing operation is performed on a page to be captured to obtain a screen capture interface area corresponding to a screen capture interface image; an area to be processed is identified from the screen capture interface area; the area image corresponding to the area to be processed is blurred to obtain a target area image; and the target image is synthesized according to the target area image and the screen capture interface image. Because the area to be processed, which may contain private content, is identified automatically, its area image is blurred to obtain the target area image, and the post-capture target image is synthesized from the target area image and the screen capture interface image, adaptive blurring of the screenshot interface image can be performed efficiently, which effectively improves both the blurring efficiency and the image generation effect.
In order to achieve the above object, an embodiment of a second aspect of the present disclosure provides an image generating apparatus, including: a screen capture module, configured to perform a screen capture processing operation on a page to be captured to obtain a screen capture interface area, the screen capture interface area corresponding to a screen capture interface image; an identification module, configured to identify an area to be processed from the screen capture interface area; a processing module, configured to blur the area image corresponding to the area to be processed to obtain a target area image; and a synthesis module, configured to synthesize a target image according to the target area image and the screen capture interface image.
The image generating apparatus provided in the embodiment of the second aspect of the disclosure performs a screen capture processing operation on a page to be captured to obtain a screen capture interface area corresponding to a screen capture interface image, identifies an area to be processed from the screen capture interface area, blurs the area image corresponding to the area to be processed to obtain a target area image, and synthesizes the target image according to the target area image and the screen capture interface image. Because the area to be processed, which may contain private content, is identified automatically and its area image is blurred, and the post-capture target image is synthesized from the target area image and the screen capture interface image, adaptive blurring of the screenshot interface image can be performed efficiently, which effectively improves both the blurring efficiency and the image generation effect.
An embodiment of a third aspect of the present disclosure provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the image generation method as set forth in the embodiment of the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image generation method as set forth in the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides a computer program product, wherein when the instructions of the computer program product are executed by a processor, the image generation method as set forth in the first aspect of the present disclosure is performed.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image generation method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of an image generation method according to another embodiment of the present disclosure;
fig. 3 is a schematic flow chart of an image generation method according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image generating apparatus according to another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of illustrating the present disclosure and should not be construed as limiting the same. On the contrary, the embodiments of the disclosure include all changes, modifications and equivalents coming within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of an image generation method according to an embodiment of the present disclosure.
It should be noted that an execution subject of the image generation method of this embodiment is an image generation apparatus, the apparatus may be implemented by software and/or hardware, the apparatus may be configured in an electronic device, and the electronic device may include, but is not limited to, a terminal, a server, and the like.
As shown in fig. 1, the image generation method includes:
s101: and executing screen capture processing operation on the to-be-captured page to obtain a screen capture interface area, wherein the screen capture interface area corresponds to the screen capture interface image.
The page to be screenshot may be a page currently displayed on a display screen of the electronic device, and the page may include: an operating system page, a function page of an application program, or a screen capture function page supporting a screen capture processing operation (the screen capture function page may be one page in the application program providing the screen capture service), and the like, that is, the page to be screen captured generally includes an overlay of one or more function pages, which is not limited in this respect.
The display screen of the electronic device may monitor user operations. If a user operation performs a screen capture processing operation on the page to be captured, the screen capture interface area may be captured. The screen capture interface area may be the interface area enclosed by the trajectory of the screen capture processing operation, and it may be the entire interface area of the page to be captured or only a partial interface area of that page, which is not limited here.
The interface image correspondingly included in the screen capture interface area can be referred to as a screen capture interface image, and the screen capture interface image can be used for assisting the subsequent synthesis to obtain a screen capture result image, and the screen capture result image can be referred to as a target image.
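The relationship between the screen capture interface area and the screen capture interface image can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `Region` type, the pixel representation, and all names are hypothetical.

```python
# Hypothetical sketch: the screen capture interface image is the sub-image of
# the page to be captured that falls inside the screen capture interface area.
from dataclasses import dataclass

@dataclass
class Region:
    left: int
    top: int
    right: int   # exclusive
    bottom: int  # exclusive

def crop(page, region):
    """page is a list of pixel rows; return the sub-image inside region."""
    return [row[region.left:region.right] for row in page[region.top:region.bottom]]

# 4x4 page where each pixel records its (row, col) coordinate
page = [[(r, c) for c in range(4)] for r in range(4)]
screenshot_interface_image = crop(page, Region(left=1, top=1, right=3, bottom=3))
```

When the gesture covers the whole display, the region spans the full page and the screenshot interface image equals the page image.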
In the related art, after capturing a page to be screenshot and performing a screenshot processing operation to obtain a screenshot interface region, the screenshot interface image is directly provided to a frame buffer (frame buffer) for a user to read a corresponding screenshot interface image from the frame buffer (frame buffer) as a screenshot result image.
S102: and identifying a region to be processed from the screen capture interface region.
After the screen capture processing operation is executed on the to-be-captured page to obtain the screen capture interface area, the to-be-processed area can be identified from the screen capture interface area, where the to-be-processed area may be a local area to be processed in the screen capture interface area, and the to-be-processed area may include some private content.
Optionally, in some embodiments, when the step of identifying the region to be processed from the screen capture interface region is performed, as shown in fig. 2, fig. 2 is a schematic flowchart of an image generation method according to another embodiment of the present disclosure, and may include:
s201: and determining a reference view layer corresponding to the screen capture interface area.
The screenshot interface region may be a superposition of one or more functional pages, so that each page may correspond to one view layer, and a view layer related to the screenshot interface region may be referred to as a reference view layer, where the reference view layer has a corresponding reference view attribute, and a manner of obtaining the reference view attribute may specifically refer to the following steps.
For example, the screen capture interface area may include: the browser page corresponds to a View layer View1, the operating system main interface corresponds to a View layer View2, the application function page corresponds to a View layer View3, and the View layer View1, the View layer View2 and the View layer View3 can be called reference View layers.
The view attribute may be configured in advance by an interface developer, and the view attribute corresponding to the reference view layer may be referred to as a reference view attribute.
For example, a view attribute configuration page may be provided for the interface developer. An attribute, isHired (whether hidden), may be added to the view attributes in the operating system, with the value options false (no) and true (yes) and a default value of false (no). When configuring an interface function, the interface developer may set isHired to true (yes) or leave it unconfigured (adopting the default) according to the requirements of the actual usage scenario. The preconfigured view attributes then help determine, during a subsequent actual screen capture, the reference view attributes corresponding to the respective reference view layers, which is not limited here.
In the embodiment of the present disclosure, when the isHired attribute is configured as false, the content in the control areas involved in the view layer need not be blurred; when the isHired attribute is configured as true, the content in the control areas involved in the view layer may be blurred.
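The described view-attribute extension can be sketched as follows. The dictionary representation is hypothetical; the patent only specifies an isHired flag with a default of false.

```python
# Sketch of the isHired view attribute: every view layer carries the flag,
# defaulting to False; the interface developer sets it to True for layers
# whose control content may be blurred.
DEFAULT_VIEW_ATTRS = {"isHired": False}

def make_view_attrs(**overrides):
    """Return the view attributes for a layer, applying the default first."""
    attrs = dict(DEFAULT_VIEW_ATTRS)  # copy so the default is never mutated
    attrs.update(overrides)
    return attrs

browser_attrs = make_view_attrs(isHired=True)  # developer opted in: may blur
home_attrs = make_view_attrs()                 # unconfigured: default false
```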
S202: acquiring an interface configuration file, wherein the interface configuration file comprises: the view layer identification and the view attribute corresponding to the view layer identification.
That is, the interface developer may generate an interface configuration file from the actual configuration status of each view attribute. The interface configuration file includes a plurality of view layer identifiers and the view attribute corresponding to each view layer identifier, so that every view layer can be identified through its view layer identifier and looked up to obtain its view attribute.
S203: and determining the view layer identifier matched with the reference identifier of the reference view layer.
S204: and determining the view attribute corresponding to the matched view layer identifier from the interface configuration file, and taking the corresponding view attribute as a reference view attribute.
In the embodiment of the disclosure, the reference view attribute is obtained directly with the help of the configured interface configuration file, which contains a plurality of view layer identifiers and their corresponding view attributes. The reference identifier of each reference view layer is matched against the interface configuration file to find the matching view layer identifier, and the view attribute corresponding to that identifier is used directly as the reference view attribute. There is no need to determine the reference view attribute from the page type or page content actually corresponding to the reference view layer after the screen capture operation. Because the view attributes of the view layers are obtained by pre-configuration, both the efficiency and the accuracy of determining the reference view attributes are guaranteed, which improves the reference value of the reference view attributes during the screen capture operation.
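Steps S202 to S204 can be sketched as a lookup, assuming a hypothetical file format in which each view layer identifier maps to its preconfigured attributes:

```python
# Sketch of S202-S204: the interface configuration file maps view layer
# identifiers to preconfigured view attributes; the reference identifier of
# each reference view layer is matched against it to obtain the reference
# view attribute directly, without inspecting page content after capture.
INTERFACE_CONFIG = {
    "View1": {"isHired": True},   # browser page
    "View2": {"isHired": False},  # operating system main interface
    "View3": {"isHired": False},  # application function page
}

def reference_view_attrs(reference_ids, config):
    # layers the developer never configured fall back to the default
    return {vid: config.get(vid, {"isHired": False}) for vid in reference_ids}

attrs = reference_view_attrs(["View1", "View2", "View9"], INTERFACE_CONFIG)
```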
S205: and if the reference view attribute is the target view attribute, determining the target control from the reference view layer.
In the embodiment of the present disclosure, after the reference view attribute is determined, it may be compared with the target view attribute (i.e., the value true). If the reference view attribute is the target view attribute, a target control is determined from the reference view layer, and the control area corresponding to the target control is used as the area to be processed. The control area is the position area of the target control within the screen capture interface area; that is, the position of the target control in the screen capture interface area is the control area.
For example, assume the page type or page content corresponding to the reference view layer is a browser page. Based on the functions provided by the browser, it may be determined that the browser page includes a text display control, an operation button control, a search box control, a picture display control, and the like. The control attributes corresponding to these controls are then determined (for example, whether text or pictures can be set in the control, or whether the control can adjust the display effect), and the controls in which text or pictures can be set, or whose display effect can be adjusted, are taken as target controls, which is not limited here.
Optionally, in some embodiments, the determining the target control from the reference view layer may be determining a plurality of candidate controls from the reference view layer; determining a plurality of control attributes respectively corresponding to a plurality of candidate controls, wherein one candidate control corresponds to one control attribute; and if the control attribute is the target control attribute, taking the candidate control corresponding to the target control attribute as the target control.
For example, the text display control, operation button control, search box control, and picture display control in the above example may be referred to as candidate controls. They may be determined by first identifying the page type or page content corresponding to the reference view layer (for example, a browser page), analyzing the business functions that page provides, and taking the controls that may be involved as candidate controls, which is not limited here.
The target control attribute indicates that target information can be configured in the corresponding control and that the target information may be blurred. The blurring may be mosaic processing or blur processing, and the target information may be, for example, privacy information (such as identity card information or face photo information), which is not limited here.
Of course, the target control attribute may also be configured for any other possible combination of information. For example, the target control attribute may indicate that user physical examination information can be configured in the corresponding target control and that this information may be blurred, which is not limited here.
For example, a plurality of interface controls may be determined as candidate controls from the reference view layer in combination with the development configuration file, and the control attribute corresponding to each candidate control may then be analyzed. If the control attribute indicates that target information can be configured in the candidate control and that the target information may be blurred, the control attribute is determined to be the target control attribute and the corresponding candidate control is taken as the target control.
For example, assume the page type or page content corresponding to the reference view layer is a browser page. A development configuration file may be obtained (for example, a development configuration file related to the browser, which may include the style, type, and configuration information of each interface control in the browser); each interface control identifier is then parsed from the development configuration file, and the interface control to which the identifier belongs is taken as a candidate control.
For example, the control attributes may specifically be: whether text or pictures can be set in the control, whether the control can adjust the display effect, or whether identity card information or face photo information can be adaptively configured in the control. If the control attribute indicates that identity card information (which the user can adaptively input) or face photo information (which the user can adaptively upload) can be configured in the control, and that the display effect of the control can be adjusted, the control attribute is determined to be the target control attribute. If the control only contains text information that cannot be adaptively modified (for example, function-indicative text), or no text or photo information can be configured in the control, the control attribute is determined not to be the target control attribute, which is not limited here.
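The candidate-control filtering above can be sketched as follows. The attribute names are hypothetical stand-ins for the control attributes described (user-configurable content, adjustable display effect):

```python
# Sketch of S205's filtering: a candidate control becomes a target control
# when its control attributes indicate that user-configurable target
# information (e.g. identity card text, a face photo) may appear in it.
def is_target_control(ctrl):
    return ctrl.get("user_configurable", False) and ctrl.get("adjustable_display", False)

def select_target_controls(candidates):
    return [c["id"] for c in candidates if is_target_control(c)]

candidates = [
    {"id": "search_box", "user_configurable": True, "adjustable_display": True},
    {"id": "ok_button", "user_configurable": False, "adjustable_display": False},
    {"id": "static_label", "user_configurable": False, "adjustable_display": True},
]
targets = select_target_controls(candidates)
```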
S206: and taking a control area corresponding to the target control as a to-be-processed area, wherein the control area is a position area of the target control corresponding to the screen capturing interface area.
For example, if the interface includes private content (e.g., text or digital content such as identification card information and passwords), a control is usually configured in the interface, and the private content is displayed in the control, so that in the embodiment of the present disclosure, a target control can be directly determined from the reference view layer, and a control area corresponding to the target control is used as a to-be-processed area, so that the to-be-processed area can be quickly identified from the screenshot interface area.
And the control area can be, for example, the position area occupied by the target control in the screen capture interface area.
Of course, the identification of the region to be processed from the screen capture interface region may be implemented in any other possible manner, such as, but not limited to, a model matching manner, an image processing manner, and the like.
As described above, a view attribute configuration page may be provided for the interface developer, and the isHired (whether hidden) attribute is added to the view attributes in the operating system; after the configuration instruction of the interface developer is received, an interface configuration file can be generated according to the developer's actual configuration. The number of reference view layers may be multiple, and each of them is processed as in step S201 above. Returning to fig. 1, after the area to be processed is identified from the screen capture interface area, this embodiment may further include:
S103: and blurring the area image corresponding to the area to be processed to obtain a target area image.
After the screen capture processing operation is performed on the page to be captured to obtain the screen capture interface area, and the area to be processed is identified from the screen capture interface area, the area image corresponding to the area to be processed can be blurred to obtain the target area image.
The image included in the area to be processed may be referred to as the area image; the area image may be a local image within the screen capture interface image, which is not limited here.
There may be one or more areas to be processed. Blurring an area image may specifically be mosaic processing, blur processing, and the like; the blurred area image is then used as the target area image, which is not limited here.
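The mosaic processing mentioned above can be sketched as block averaging. This is one common way to pixelate an image, offered only as an illustration of the idea; the patent does not prescribe a specific algorithm.

```python
# Sketch of mosaic-style blurring: the region image (here a 2D list of
# grayscale values) is divided into blocks, and every pixel in a block is
# replaced by the block average, making the content unreadable while
# preserving the image size.
def mosaic(img, block=2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            avg = sum(img[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = avg
    return out

target_area_image = mosaic([[0, 2], [4, 6]], block=2)
```

A larger block size blurs more aggressively; a Gaussian blur could be substituted without changing the surrounding flow.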
S104: and synthesizing the target image according to the target area image and the screen capture interface image.
After the screen capture processing operation is performed on the page to be captured to obtain the screen capture interface area, the area to be processed is identified from the screen capture interface area, and the area image corresponding to the area to be processed is blurred to obtain the target area image, the target area image and the screen capture interface image can be directly composited; the composited image is used as the target image, i.e., the result image of the screen capture.
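The compositing in S104 can be sketched as pasting the blurred area back at its original position. The coordinate convention and pixel representation are hypothetical:

```python
# Sketch of S104: the target area image is pasted back over the screen
# capture interface image at the position of the area to be processed,
# producing the target image.
def compose(screenshot, target_area, top, left):
    out = [row[:] for row in screenshot]  # copy so the input is untouched
    for dy, row in enumerate(target_area):
        for dx, pixel in enumerate(row):
            out[top + dy][left + dx] = pixel
    return out

screenshot = [[0] * 4 for _ in range(3)]
target_image = compose(screenshot, [[9, 9], [9, 9]], top=1, left=1)
```

With several areas to be processed, the same paste is simply repeated once per target area image.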
In this embodiment, a screen capture processing operation is performed on the page to be captured to obtain a screen capture interface area corresponding to a screen capture interface image, an area to be processed is identified from the screen capture interface area, the area image corresponding to the area to be processed is blurred to obtain a target area image, and the target image is synthesized according to the target area image and the screen capture interface image. Because the areas to be processed that may contain private content are identified automatically, the corresponding area images are blurred to obtain the target area images, and the post-capture target image is synthesized from the target area images and the screen capture interface image, adaptive blurring of the screenshot interface image can be performed efficiently, which effectively improves the blurring efficiency and the image generation effect.
Fig. 3 is a schematic flowchart of an image generation method according to another embodiment of the present disclosure.
As shown in fig. 3, the image generation method includes:
s301: and executing screen capture processing operation on the to-be-captured page to obtain a screen capture interface area, wherein the screen capture interface area corresponds to the screen capture interface image.
S302: and determining a reference view layer corresponding to the screen capture interface area, wherein the reference view layer has a corresponding reference view attribute.
S303: and if the reference view attribute is the target view attribute, determining the target control from the reference view layer.
S304: and taking a control area corresponding to the target control as a to-be-processed area, wherein the control area is a position area of the target control corresponding to the screen capturing interface area.
S305: and carrying out fuzzification processing on the area image corresponding to the area to be processed to obtain a target area image.
For the description of S301 to S305, reference may be made to the above embodiments, which are not described herein again.
For example, a user triggers a screen capture operation on the to-be-captured page through a hidden sensitive operation mode. The operating system recognizes the screen capture operation; the information corresponding to the operation is passed from the Touch Panel (TP) to the framework of the operating system, which dispatches it to the policy class, namely the window manager, and the operating system determines, based on the window manager, that a blurring operation needs to be performed on the screen capture interface area.
For example, the operating system may take all view layers corresponding to the screen capture interface area as reference view layers, traverse the reference view attribute of each reference view layer, and screen out the target view layers (there may be one or more target view layers, and the view attribute of a target view layer is the above-mentioned target view attribute). The target view layers are added to a view layer list; information related to the target controls is then determined from each target view layer and saved. The information related to a target control is, for example, the position area information of the target control within the screen capture interface area, the identification information of the target control, control attribute information, and the like.
The saved information related to the target controls can then be stored together with the view attribute of the target view layer to which each belongs, and a hash table h1 relating list&lt;point&gt; and isHired can be generated, where list&lt;point&gt; stores the position area information of the target controls within the screen capture interface area, and the isHired entries of hash table h1 store the view attribute of the target view layer to which each target control belongs. The operating system can then pass the hash table h1 of list&lt;point&gt; and isHired to the display composition system's SurfaceFlinger service, where the SurfaceFlinger service is started in the operating system process and is responsible for uniformly managing the frame buffers of the device. When the target image is synthesized by the SurfaceFlinger service, a reference image carrying the target area image may be generated by consulting the hash table h1 of list&lt;point&gt; and isHired, as described in the following embodiments.
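The traversal and bookkeeping described above can be sketched as follows. This is a hedged illustration only: the class names, attribute values, and the shapes of the list&lt;point&gt; and h1 structures are assumptions for demonstration, not the actual operating-system implementation.

```python
# Sketch: filter reference view layers by the target view attribute, collect each
# target control's position area into a list (playing the role of list<point>),
# and record the owning layer's view attribute in a hash table (playing h1).
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    attribute: str
    region: tuple  # (x, y, width, height) within the screen capture interface area

@dataclass
class ViewLayer:
    layer_id: str
    view_attribute: str
    controls: list = field(default_factory=list)

def collect_target_regions(layers, target_view_attr, target_control_attr):
    """Return (point_list, h1): target-control regions and the attribute table."""
    point_list = []   # position areas of the target controls ("list<point>")
    h1 = {}           # target control id -> view attribute of its target view layer
    for layer in layers:
        if layer.view_attribute != target_view_attr:
            continue  # only target view layers are considered further
        for control in layer.controls:
            if control.attribute == target_control_attr:
                point_list.append(control.region)
                h1[control.control_id] = layer.view_attribute
    return point_list, h1

layers = [
    ViewLayer("chat", "sensitive", [Control("balance", "private", (10, 20, 100, 30))]),
    ViewLayer("toolbar", "plain", [Control("title", "public", (0, 0, 200, 40))]),
]
points, h1 = collect_target_regions(layers, "sensitive", "private")
```

In this sketch only the control in the "sensitive" layer with the "private" attribute is collected; everything else is left untouched, mirroring how only target controls are scheduled for blurring.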
S306: an initial image is created in a transparent state.
For example, an initial image P1 in a transparent state may be created via the SurfaceFlinger service, and the subsequent steps may then be triggered.
S307: and determining the relative position information of the to-be-processed area in the screen capture interface area.
Alternatively, in some embodiments, determining the relative position information of the to-be-processed region in the screen capture interface region may be determining that the target control corresponds to the position region information in the screen capture interface region, and regarding the position region information as the relative position information, that is, the relative position information is information for determining the position of the target control in the screen capture interface region.
That is to say, when determining the relative position information of the to-be-processed area in the screenshot interface area, the embodiment of the present disclosure may read the above list < point >, because the list < point > may store the position area information of the target control corresponding to the screenshot interface area, and the target control may configure the target information, and the target information may be fuzzified, thereby directly using the position area information of the target control corresponding to the screenshot interface area as the relative position information, and implementing to quickly and accurately determine the relative position information of the to-be-processed area in the screenshot interface area.
For example, an initial image P1 in a transparent state is created, the list < point > is read, position area information of the target control corresponding to the screen capture interface area is determined from the list, the position area information is used as relative position information of the area to be processed in the screen capture interface area, and then a corresponding area image F is interpolated according to the relative position information point, that is, the target area image is synthesized into the initial image.
S308: and determining reference position information corresponding to the initial image according to the relative position information, wherein the reference position information and the relative position information are kept coincident after the initial image and the screen capture interface image are aligned.
After determining the relative position information of the region to be processed in the screen capture interface region, the reference position information corresponding to the initial image may be determined, wherein the reference position information and the relative position information are kept coincident after the initial image and the screen capture interface image are aligned.
For example, the initial image and the screen capture interface image may be aligned first, and then the position information in the initial image, which is coincident with the relative position information, may be used as the reference position information.
S309: and synthesizing the target area image to the position indicated by the reference position information in the initial image to obtain a reference image.
For example, the initial image and the screen capture interface image may be aligned first, position information that is in the initial image and is overlapped with the relative position information is used as reference position information, and then the target area image is synthesized to a position indicated by the reference position information in the initial image to obtain a reference image.
It will be appreciated that, via the above-described synthesis process, the reference image will include: the target area image is obtained by performing blurring processing on the area image corresponding to the area to be processed in advance, so that the reference image can be understood as an image in which other areas except the target area image are in a transparent state, which is not limited herein.
S310: and synthesizing the reference image and the screen capture interface image to obtain a target image.
After the traversal of each target area image is completed, each target area image is synthesized to the position indicated by the reference position information in the initial image to obtain the reference image, the reference image and the screen capture interface image can be synthesized (for example, the reference image can be directly covered on the screen capture interface image to perform synthesis processing without limitation), so as to obtain the target image, thereby achieving the purpose of improving the blurring processing effect to a large extent, and in the process of generating the target image with the blurring effect, the image quality of the screen capture interface image is not affected, thereby effectively improving the image quality of the generated target image, and improving the screen capture effect.
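Steps S306–S310 can be sketched with plain nested lists standing in for images (each pixel an RGBA tuple). This is an illustrative model under stated assumptions, not the actual SurfaceFlinger composition code.

```python
# Sketch of S306-S310: create a transparent initial image, paste the blurred
# target area image at the reference position to form the reference image,
# then overlay the reference image on the screenshot interface image.
TRANSPARENT = (0, 0, 0, 0)

def create_initial_image(width, height):
    """S306: an initial image in a fully transparent state."""
    return [[TRANSPARENT for _ in range(width)] for _ in range(height)]

def paste(canvas, region_image, x, y):
    """S309: synthesize the target area image at the reference position."""
    for dy, row in enumerate(region_image):
        for dx, pixel in enumerate(row):
            canvas[y + dy][x + dx] = pixel
    return canvas

def overlay(reference, screenshot):
    """S310: overlay the reference image on the screen capture interface image.

    Opaque reference pixels (the blurred target areas) win; transparent pixels
    let the original screenshot show through, so the screenshot's quality
    outside the blurred areas is untouched.
    """
    return [
        [ref if ref[3] > 0 else base for ref, base in zip(ref_row, base_row)]
        for ref_row, base_row in zip(reference, screenshot)
    ]

screenshot = [[(255, 255, 255, 255)] * 4 for _ in range(3)]   # 4x3 white screenshot
blurred = [[(128, 128, 128, 255)] * 2]                        # 2x1 blurred target area image
reference = paste(create_initial_image(4, 3), blurred, 1, 1)  # S307/S308: position (1, 1)
target = overlay(reference, screenshot)
```

The sketch shows why the transparent intermediate matters: only the pixels covered by the blurred area change, while every other pixel of the screenshot interface image passes through untouched.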
Of course, the target image may be synthesized from the target area image and the screen capture interface image in any other possible manner. For example, the local interface image corresponding to the target position information in the screen capture interface image may be determined, where the target position information is the relative position information of the to-be-processed area in the screen capture interface area, and the target area image may then replace that local interface image in the screen capture interface image to obtain the target image; alternatively, a model-based synthesis manner may be adopted, and the like, without limitation.
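The direct-replacement alternative just mentioned can be sketched as follows; unlike the transparent-intermediate route, the blurred target area image overwrites the local interface image in place. Function names and the nested-list image model are assumptions for illustration.

```python
# Sketch of the alternative: replace the local interface image at the target
# position with the blurred target area image, without a transparent intermediate.
def replace_local_region(screenshot, target_area_image, x, y):
    """Overwrite the local interface image at (x, y) with the blurred region."""
    result = [row[:] for row in screenshot]  # copy so the original screenshot is kept intact
    for dy, row in enumerate(target_area_image):
        for dx, pixel in enumerate(row):
            result[y + dy][x + dx] = pixel
    return result

shot = [[(255, 255, 255, 255)] * 3 for _ in range(2)]  # 3x2 white screenshot
blurred = [[(0, 0, 0, 255)]]                           # 1x1 blurred target area image
out = replace_local_region(shot, blurred, 2, 0)
```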
In this embodiment, an initial image in a transparent state is created; the relative position information of the to-be-processed area in the screen capture interface area is determined; the reference position information corresponding to the initial image is determined, where the reference position information coincides with the relative position information after the initial image and the screen capture interface image are aligned; the target area image is synthesized to the position indicated by the reference position information in the initial image to obtain a reference image; and the reference image and the screen capture interface image are synthesized to obtain the target image. The blurring effect is thereby substantially improved, and because the image quality of the screen capture interface image is not affected while the blurred target image is generated, the image quality of the generated target image is effectively improved and the screen capture effect is improved.
Fig. 4 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the image generating apparatus 40 includes:
the screen capture module 401 is configured to perform a screen capture processing operation on a to-be-captured page to obtain a screen capture interface area, where the screen capture interface area corresponds to a screen capture interface image;
an identifying module 402, configured to identify a region to be processed from a screen capture interface region;
the processing module 403 is configured to perform blurring processing on the area image corresponding to the area to be processed to obtain a target area image; and
and a synthesizing module 404, configured to synthesize the target image according to the target area image and the screen capture interface image.
In some embodiments of the present disclosure, as shown in fig. 5, fig. 5 is a schematic structural diagram of an image generating apparatus according to another embodiment of the present disclosure, and the synthesizing module 404 includes:
a creating sub-module 4041 for creating an initial image in a transparent state;
the first determining submodule 4042 is configured to determine relative position information of the to-be-processed area in the screen capture interface area;
a second determining sub-module 4043, configured to determine, according to the relative position information, reference position information corresponding to the initial image, where the reference position information and the relative position information are kept coincident after the initial image and the screenshot interface image are aligned;
a first synthesizing submodule 4044, configured to synthesize the target area image to a position indicated by the reference position information in the initial image, so as to obtain a reference image;
and the second synthesizing submodule 4045 is configured to synthesize the reference image and the screenshot interface image to obtain a target image.
In some embodiments of the present disclosure, the identifying module 402 is configured to:
determining a reference view layer corresponding to the screen capture interface area, wherein the reference view layer has a corresponding reference view attribute;
if the reference view attribute is the target view attribute, determining a target control from the reference view layer;
and taking a control area corresponding to the target control as a to-be-processed area, wherein the control area is a position area of the target control corresponding to the screen capturing interface area.
In some embodiments of the disclosure, there are a plurality of reference view layers, and the identifying module 402 is further configured to:
acquiring an interface configuration file, wherein the interface configuration file comprises: a plurality of view layer identifiers and the view attribute corresponding to each view layer identifier;
determining a view layer identifier matched with the reference identifier of the reference view layer;
and determining the view attribute corresponding to the matched view layer identifier from the interface configuration file, and taking the corresponding view attribute as a reference view attribute.
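The configuration-file lookup described above can be sketched as below. The JSON shape and field names are assumptions for illustration only; the patent does not specify the configuration format.

```python
# Sketch: match a reference layer's identifier against the view layer identifiers
# in the interface configuration file and return the corresponding view attribute.
import json

CONFIG_JSON = """
{
  "view_layers": [
    {"id": "layer_password", "view_attribute": "sensitive"},
    {"id": "layer_banner",   "view_attribute": "plain"}
  ]
}
"""

def reference_view_attribute(config_text, reference_id):
    """Return the view attribute whose view layer identifier matches reference_id."""
    config = json.loads(config_text)
    for entry in config["view_layers"]:
        if entry["id"] == reference_id:
            return entry["view_attribute"]
    return None  # no matching view layer identifier in the configuration file

attr = reference_view_attribute(CONFIG_JSON, "layer_password")
```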
In some embodiments of the present disclosure, the identifying module 402 is configured to:
determining a plurality of candidate controls from the reference view layer;
determining control attributes corresponding to the candidate controls;
and if the control attribute is the target control attribute, taking the candidate control corresponding to the target control attribute as the target control.
In some embodiments of the present disclosure, the first determination sub-module 4042 is configured to:
determining the position area information of the target control within the screen capture interface area, and taking the position area information as the relative position information.
In some embodiments of the present disclosure, the target control attribute indicates that: target information can be configured in the corresponding target control, and the target information can be blurred.
It should be noted that the foregoing explanation of the embodiment of the image generation method is also applicable to the image generation apparatus of this embodiment, and is not repeated here.
In this embodiment, a screen capture operation is performed on the to-be-captured page to obtain a screen capture interface area, where the screen capture interface area corresponds to a screen capture interface image; a to-be-processed area is identified from the screen capture interface area; the area image corresponding to the to-be-processed area is blurred to obtain a target area image; and the target image is synthesized from the target area image and the screen capture interface image. Because the to-be-processed areas that may contain private content are identified automatically, the corresponding area images are blurred to obtain the target area images, and the post-screen-capture target image is synthesized from the target area images and the screen capture interface image, adaptive blurring can be applied to the screen capture interface image efficiently, which effectively improves both the blurring efficiency and the image generation effect.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device includes: a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602.
The processor 602, when executing the program, implements the image generation method provided in the above-described embodiments.
In one possible implementation, an electronic device includes:
a communication interface 603 for communication between the memory 601 and the processor 602.
The memory 601 is used for storing computer programs that can be run on the processor 602.
The memory 601 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk storage.
A processor 602, configured to implement the image generation method of the above-described embodiment when executing a program.
If the memory 601, the processor 602 and the communication interface 603 are implemented independently, the communication interface 603, the memory 601 and the processor 602 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 601, the processor 602, and the communication interface 603 are integrated on a chip, the memory 601, the processor 602, and the communication interface 603 may complete mutual communication through an internal interface.
The processor 602 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present disclosure.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the image generation method as described above.
In order to implement the above embodiments, the present disclosure also proposes a computer program product, which when instructions in the computer program product are executed by a processor, executes the image generation method shown in the above embodiments.
It should be noted that, in the description of the present disclosure, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (14)

1. An image generation method, characterized in that the method comprises:
executing screen capture processing operation on a to-be-captured screen page to obtain a screen capture interface area, wherein the screen capture interface area corresponds to a screen capture interface image;
identifying a region to be processed from the screen capture interface region;
blurring the area image corresponding to the area to be processed to obtain a target area image; and
synthesizing a target image according to the target area image and the screen capture interface image;
the synthesizing of the target image according to the target area image and the screen capture interface image comprises:
creating an initial image in a transparent state;
determining relative position information of the area to be processed in the screen capture interface area;
determining reference position information corresponding to the initial image according to the relative position information, wherein after the initial image and the screen capture interface image are aligned, the reference position information and the relative position information are kept coincident;
synthesizing the target area image to a position indicated by the reference position information in the initial image to obtain a reference image;
and synthesizing the reference image and the screen capture interface image to obtain the target image.
2. The method of claim 1, wherein the identifying a region to be processed from the screenshot interface region comprises:
determining a reference view layer corresponding to the screen capture interface area, wherein the reference view layer has a corresponding reference view attribute;
if the reference view attribute is the target view attribute, determining a target control from the reference view layer;
and taking a control area corresponding to the target control as the area to be processed, wherein the control area is a position area of the target control corresponding to the screenshot interface area.
3. The method of claim 2, wherein the number of reference view layers is a plurality of layers, and after the determining the reference view layer corresponding to the screenshot interface region, comprising:
obtaining an interface configuration file, wherein the interface configuration file comprises: a plurality of view layer identifiers and a view attribute corresponding to each view layer identifier;
determining a view layer identifier matched with the reference identifier of the reference view layer;
and determining a view attribute corresponding to the matched view layer identifier from the interface configuration file, and taking the corresponding view attribute as the reference view attribute.
4. The method of claim 2, wherein said determining a target control from said reference viewing layer comprises:
determining a plurality of candidate controls from the reference view layer;
determining control attributes corresponding to the candidate controls;
and if the control attribute is the target control attribute, taking the candidate control corresponding to the target control attribute as the target control.
5. The method of claim 2, wherein the determining the relative position information of the to-be-processed area in the screen capture interface area comprises:
and determining that the target control corresponds to position area information in the screen capturing interface area, and taking the position area information as the relative position information.
6. The method of claim 4, wherein the target control property indicates that: target information can be configured in the corresponding target control, and the target information can be blurred.
7. An image generation apparatus, characterized in that the apparatus comprises:
the screen capture module is used for executing screen capture processing operation on a to-be-captured page to obtain a screen capture interface area, and the screen capture interface area corresponds to a screen capture interface image;
the identification module is used for identifying a region to be processed from the screen capture interface region;
the processing module is used for blurring the area image corresponding to the area to be processed to obtain a target area image; and
the synthesis module is used for synthesizing a target image according to the target area image and the screen capture interface image;
the synthesis module comprises:
the creating sub-module is used for creating an initial image in a transparent state;
the first determining submodule is used for determining the relative position information of the area to be processed in the screen capture interface area;
the second determining submodule is used for determining reference position information corresponding to the initial image according to the relative position information, wherein after the initial image and the screen capture interface image are aligned, the reference position information and the relative position information are overlapped;
a first synthesis submodule, configured to synthesize the target area image to a position indicated by the reference position information in the initial image, so as to obtain a reference image;
and the second synthesis sub-module is used for carrying out synthesis processing on the reference image and the screen capture interface image to obtain the target image.
8. The apparatus of claim 7, wherein the identification module is to:
determining a reference view layer corresponding to the screen capture interface area, wherein the reference view layer has a corresponding reference view attribute;
if the reference view attribute is the target view attribute, determining a target control from the reference view layer;
and taking a control area corresponding to the target control as the area to be processed, wherein the control area is a position area of the target control corresponding to the screenshot interface area.
9. The apparatus of claim 8, wherein the number of reference view layers is a number of layers, the identification module further to:
obtaining an interface configuration file, wherein the interface configuration file comprises: a plurality of view layer identifiers and a view attribute corresponding to each view layer identifier;
determining a view layer identifier matched with the reference identifier of the reference view layer;
and determining a view attribute corresponding to the matched view layer identifier from the interface configuration file, and taking the corresponding view attribute as the reference view attribute.
10. The apparatus of claim 8, wherein the identification module is to:
determining a plurality of candidate controls from the reference view layer;
determining control attributes corresponding to the candidate controls;
and if the control attribute is the target control attribute, taking the candidate control corresponding to the target control attribute as the target control.
11. The apparatus of claim 8, wherein the first determination submodule is to:
and determining that the target control corresponds to position area information in the screen capturing interface area, and taking the position area information as the relative position information.
12. The apparatus of claim 10, wherein the target control property indicates: target information can be configured in the corresponding target control, and the target information can be blurred.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any of claims 1-6 when executing the program.
14. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-6.
CN202110706913.5A 2021-06-25 2021-06-25 Image generation method, image generation device, electronic equipment and storage medium Active CN113254141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110706913.5A CN113254141B (en) 2021-06-25 2021-06-25 Image generation method, image generation device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113254141A CN113254141A (en) 2021-08-13
CN113254141B true CN113254141B (en) 2021-11-02

Family

ID=77189601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110706913.5A Active CN113254141B (en) 2021-06-25 2021-06-25 Image generation method, image generation device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113254141B (en)



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant