WO2019183984A1 - Image display method and terminal - Google Patents

Image display method and terminal

Info

Publication number
WO2019183984A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
noise
terminal
frame
sensitive area
Prior art date
Application number
PCT/CN2018/081491
Other languages
English (en)
French (fr)
Inventor
Wang Qi (王琪)
Huang Wei (黄伟)
Fang Ping (方平)
Wu Huangwei (吴黄伟)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201880045049.5A (CN110892405A)
Priority to US17/041,196 (US11615215B2)
Priority to PCT/CN2018/081491 (WO2019183984A1)
Priority to EP18912684.0A (EP3764267A4)
Publication of WO2019183984A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/604 Tools and structures for managing or administering access control systems
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06F 21/6254 Protecting personal data by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/82 Protecting input, output or interconnection devices
    • G06F 21/84 Protecting input, output or interconnection devices: output devices, e.g. displays or monitors
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image display method and a terminal.
  • the embodiment of the present invention provides an image display method and a terminal, which can effectively protect the display content of the terminal and reduce the possibility of the display content of the terminal being leaked.
  • an embodiment of the present application provides an image display method, which is applied to a terminal having a display screen.
  • the method includes: the terminal displays a first image on the display screen at a first screen refresh rate, where the output frame rate of the first image is a first frame rate; after detecting that a preset condition is met, the terminal displays a second image on the display screen.
  • at least a portion of the second image is superimposed with a noise parameter; the at least a portion is displayed at a second screen refresh rate, and the output frame rate of the at least a portion is a second frame rate.
  • the second image includes a multi-frame plus noise sub-image.
  • the second frame rate is greater than the first frame rate, and the second screen refresh rate is greater than the first screen refresh rate.
  • the terminal may display the at least a part of the second image, which includes the multi-frame noise-added sub-images (the at least a part being superimposed with noise parameters), at the second screen refresh rate, where the output frame rate of the at least a part is the second frame rate.
  • the second screen refresh rate is greater than the first screen refresh rate, and the second frame rate is greater than the first frame rate.
  • the foregoing detecting that the preset condition is met may be that the terminal detects that the user turns on the noise-adding option. Specifically, after detecting that the preset condition is met, the terminal displays the second image on the display screen as follows: the terminal enters the noise-adding mode in response to the operation of turning on the noise-adding option, and displays the second image.
  • the above noise adding option may be displayed in a setting interface or a notification column of the terminal.
  • the detecting that the preset condition is met may also be detecting that the second image includes a sensitive feature.
  • the terminal displays the second image on the display screen, including: when the second image includes the sensitive feature, the terminal automatically enters the noise-adding mode, and displays the second image on the display screen.
  • the sensitive feature may include at least one of a preset control, a currency symbol, and preset text. The preset control includes at least one of a password input box, a user name input box, and an ID number input box; the preset text includes at least one of "balance", "password", "salary", and "account".
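As a non-normative illustration of this kind of preset-feature matching, a short Python sketch (the word and symbol lists are hypothetical stand-ins for the examples enumerated above, not values from the disclosure):

```python
# Hypothetical stand-ins for the patent's "preset text" and
# "currency symbol" sensitive features.
SENSITIVE_WORDS = {"balance", "password", "salary", "account"}
CURRENCY_SYMBOLS = {"$", "¥", "€", "£"}

def contains_sensitive_feature(text: str) -> bool:
    """Return True if the text carries any preset sensitive feature."""
    lowered = text.lower()
    if any(word in lowered for word in SENSITIVE_WORDS):
        return True
    return any(sym in text for sym in CURRENCY_SYMBOLS)

print(contains_sensitive_feature("Your balance is ¥1,024"))  # True
print(contains_sensitive_feature("Weather today: sunny"))    # False
```

In practice the terminal would run such checks on text extracted from the interface to be displayed, alongside control-type checks (password boxes, etc.).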
  • the detecting that the preset condition is met may also be that the second image is an interface of a preset type of application. Specifically, when displaying an interface of a preset type of application, the terminal automatically enters the noise-adding mode and displays the second image on the display screen.
  • the preset type of application includes at least one of a banking application, a payment application, and a communication application.
  • the foregoing detecting that the preset condition is met may also be that the current scene information satisfies a preset condition. Specifically, when the current scene information satisfies the preset condition, the terminal automatically enters the noise-adding mode and displays the second image on the display screen.
  • the current scene information includes at least one of time information, address information, and environment information. The time information is used to indicate the current time, and the address information is used to indicate the current location of the terminal, such as a home, a company, a shopping mall, and the like.
  • the above environment information can be used to indicate the number of people around the terminal and whether strangers are present around the terminal. The terminal can use voice recognition or images collected by the camera to determine the number of people around the terminal and whether strangers are present.
  • the at least a portion of the second image that is superimposed with noise parameters may be at least one sensitive area in the second image, that is, a region that contains a sensitive feature.
  • the method of the embodiment of the present application further includes: the terminal generates N frames of first noise-added sub-images according to the image of the sensitive area. The N frames of first noise-added sub-images are displayed in the sensitive area at the second screen refresh rate, and their output frame rate is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.
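The timing relationship above can be sketched as follows (an illustrative helper, not from the disclosure); with N = 4, a 60 Hz / 60 fps panel would switch to 240 Hz / 240 fps so the N sub-frames occupy the same wall-clock time as the original frame:

```python
def noise_mode_timing(refresh_hz: float, frame_rate: float, n: int):
    """In noise-adding mode both rates scale by N, so the N noise
    sub-frames of one original frame replace it in the same time slot."""
    if n < 2:
        raise ValueError("N must be an integer greater than or equal to 2")
    return refresh_hz * n, frame_rate * n

# e.g. a 60 Hz / 60 fps panel with N = 4 sub-frames:
print(noise_mode_timing(60.0, 60.0, 4))  # (240.0, 240.0)
```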
  • N may be a pre-configured fixed value. However, with a fixed value, a sneak-shot device could learn the rule by which the terminal processes the image, determine the fixed value, and use it to restore a captured image after the sneak shot.
  • the N can vary randomly within a certain range.
  • N may be determined according to the remaining power of the terminal.
  • the terminal generates the N frames of first noise-added sub-images according to the image of the sensitive area as follows: when the remaining power of the terminal is greater than or equal to a first threshold, the terminal generates N1 frames of first noise-added sub-images; when the remaining power of the terminal is less than the first threshold, the terminal generates N2 frames, where N1 > N2.
  • N may be determined according to the sensitivity of the sensitive area, that is, according to the sensitivity of the sensitive features it contains. The sensitivity of each sensitive feature can be stored in the terminal, and different sensitive features have different sensitivities. Specifically, the terminal generates the N frames of first noise-added sub-images according to the sensitivity of the sensitive area; sensitive areas containing different sensitive features have different sensitivities, and the higher the sensitivity of a sensitive area, the larger the value of N.
  • N may be determined according to the remaining power of the terminal and the sensitivity of the sensitive area.
  • when the remaining power of the terminal is constant, the higher the sensitivity of the sensitive area, the larger the number N of frames of noise-added sub-images generated for it; when the sensitivity of the sensitive area is constant, the more remaining power the terminal has, the larger N is.
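One possible way to combine remaining power and sensitivity when choosing N (the threshold value and the linear rule below are assumptions for illustration only; the disclosure fixes neither):

```python
def choose_frame_count(battery_pct: float, sensitivity: int,
                       low_battery_threshold: float = 20.0) -> int:
    """Pick the sub-frame count N: it grows with sensitivity, shrinks
    on low battery, and never falls below the method's minimum of 2."""
    base = 2 + sensitivity           # higher sensitivity -> larger N
    if battery_pct < low_battery_threshold:
        return max(2, base - 2)      # low battery -> fewer sub-frames
    return base

print(choose_frame_count(80.0, 3))   # 5
print(choose_frame_count(10.0, 3))   # 3
```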
  • A_i is the pixel value of pixel i in the image of the sensitive area, i ∈ {1, 2, …, Q}, where Q is the total number of pixels in the image of the sensitive area; W_{n,i} is the noise parameter of the i-th pixel of the n-th frame of first noise-added sub-image, n ∈ {1, 2, …, N}; and a_{n,i} = A_i + W_{n,i} is the pixel value of pixel i in the n-th frame of first noise-added sub-image.
  • the noise parameters of all pixels within one frame of first noise-added sub-image may be the same, i.e. the noise parameter W_{n,i} of the i-th pixel equals the noise parameter W_{n,i+k} of the (i+k)-th pixel; alternatively, the noise parameters of different pixels within one frame may differ, i.e. W_{n,i} ≠ W_{n,i+k}.
  • the sum of each set of noise parameters {W_{1,i}, W_{2,i}, …, W_{N,i}} among the at least one set used is zero, or lies within a preset parameter interval. Therefore the average of the pixel values of pixel i over the N frames of first noise-added sub-images, (1/N) · Σ_{n=1}^{N} a_{n,i}, is A_i, the pixel value of pixel i in the sensitive area before noise processing.
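The zero-sum property can be sketched numerically: drawing N noise parameters and recentring them guarantees Σ W_{n,i} = 0, so the time-average of the noisy sub-frames recovers A_i (illustrative Python, not from the disclosure):

```python
import random

random.seed(0)

def zero_sum_noise(n, amplitude):
    """Draw n noise parameters and recentre them so they sum to zero."""
    w = [random.uniform(-amplitude, amplitude) for _ in range(n)]
    mean = sum(w) / n
    return [x - mean for x in w]

A_i = 128.0                    # pixel value before noise processing
w = zero_sum_noise(4, 20.0)    # one noise parameter per sub-frame
a = [A_i + x for x in w]       # pixel values a_{n,i} of the sub-frames
print(round(sum(a) / len(a), 6))  # averages back to 128.0
```

A camera capturing any single sub-frame sees A_i + W_{n,i}, a scrambled value; only averaging over all N sub-frames (as the eye does) recovers the original pixel.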
  • based on the low-pass effect of human vision, the human eye cannot perceive the difference between the noise-processed image and the image before noise processing; the images before and after noise processing therefore look the same to the user, preserving the visual experience.
  • the pixel value of a pixel includes the color values of its color components, and the color components include the three primary colors Red, Green, Blue (RGB). R_i, G_i, and B_i are the color values of pixel i before noise processing; R_{n,i}, G_{n,i}, and B_{n,i} are the corresponding color values after noise processing, so the noise parameter may be superimposed on each color component.
  • the hardware capability of the terminal's display is limited: the pixel value A_i of each displayed pixel lies in the range [0, P]. Therefore each pixel value of every first noise-added sub-image after noise processing must also satisfy 0 ≤ a_{n,i} ≤ P. From 0 ≤ a_{n,i} ≤ P and a_{n,i} = A_i + W_{n,i}, it can be determined that the n-th noise parameter W_{n,i} of a sensitive area satisfies −A_i ≤ W_{n,i} ≤ P − A_i; when the bound is applied across color components, the lower bound is the maximum of the per-component bounds and the upper bound is the minimum, where max(x, y) represents the maximum of x and y, and min(x, y) represents the minimum of x and y.
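The clamping constraint can be expressed directly (a sketch; P = 255 is assumed here for an 8-bit display):

```python
def noise_bounds(pixel_value, p_max=255.0):
    """Range of W_{n,i} keeping a_{n,i} = A_i + W_{n,i} inside [0, P]."""
    return -pixel_value, p_max - pixel_value

def clamp_noise(w, pixel_value, p_max=255.0):
    """Clip a candidate noise parameter into the valid range."""
    lo, hi = noise_bounds(pixel_value, p_max)
    return max(lo, min(hi, w))

print(noise_bounds(200.0))        # (-200.0, 55.0)
print(clamp_noise(80.0, 200.0))   # 55.0 (clipped so 0 <= a <= 255)
```

Note that clipping distorts the zero-sum property, so in practice the noise parameters would be drawn inside these bounds from the start.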
  • the N noise parameters may be randomly selected; alternatively, the N noise parameters may follow a uniform distribution or a Gaussian distribution.
  • the fluctuation magnitude of the N noise parameters is proportional to the sensitivity of the sensitive area. The fluctuation magnitude of the N noise parameters {W_{1,i}, W_{2,i}, …, W_{N,i}} of the i-th pixel is characterized by the variance s² of the pixel values of pixel i across the N frames of first noise-added sub-images: the higher the sensitivity of the sensitive area, the greater the fluctuation of the set of noise parameters used by the terminal to add noise to its i-th pixel, that is, the larger the variance s².
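A sketch of sensitivity-proportional fluctuation: scaling the standard deviation of zero-sum Gaussian noise by the region's sensitivity increases the variance s² accordingly (the linear scaling rule is an assumed choice for illustration):

```python
import random
from statistics import pvariance

random.seed(1)

def scaled_noise(n, sensitivity, base_sigma=4.0):
    """Zero-sum Gaussian noise whose spread grows with the region's
    sensitivity; more sensitive regions get a larger variance s^2."""
    w = [random.gauss(0.0, base_sigma * sensitivity) for _ in range(n)]
    mean = sum(w) / n
    return [x - mean for x in w]

low = scaled_noise(1000, 1.0)   # low-sensitivity region
high = scaled_noise(1000, 3.0)  # high-sensitivity region
print(pvariance(low) < pvariance(high))  # True
```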
  • the non-sensitive area of the second image is displayed at a first screen refresh rate, and the output frame rate of the non-sensitive area is the first frame rate. Therefore, only the sensitive area needs to be processed and the screen refresh rate is adjusted, and the anti-sneak shot effect can be achieved with lower complexity and lower power consumption.
  • the non-sensitive area of the second image is displayed at a second screen refresh rate, and the output frame rate of the non-sensitive area is the second frame rate.
  • the non-sensitive area is an area other than the sensitive area in the second image.
  • the terminal can output N frames of noise-added sub-images at the same screen refresh rate and frame rate in both the sensitive area and the non-sensitive area. That is, the terminal can display the entire content of the second image at one screen refresh rate (the second screen refresh rate) and one frame rate (the second frame rate), which greatly reduces the requirements on the screen: the screen need not support different refresh rates in different display areas. Moreover, the terminal can scramble the sensitive area and the non-sensitive area to different degrees.
  • the method of the embodiment of the present application further includes: the terminal generates N frames of second noise-added sub-images according to the image of the non-sensitive area.
  • the N frames of second noise-added sub-images are displayed in the non-sensitive area at the second screen refresh rate, and their output frame rate is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.
  • the noise parameter used by the terminal to generate the N-frame second noise-added sub-image is different from the noise parameter used by the terminal to generate the N-frame first noise-added sub-image.
  • the method for the terminal to determine the at least one sensitive area of the second image may include: determining, by the terminal, that the second image includes a sensitive feature; and determining, by the terminal, the location of the sensitive feature in the second image, Identify at least one sensitive area.
  • the terminal determining that the second image includes a sensitive feature includes: when the second image is an interface of a preset type of application in the terminal, an image of an encrypted document, an encrypted image, or an image of a private video, the terminal may determine that the second image includes a sensitive feature; or, the terminal identifies the second image to be displayed, obtains one or more image features included in the second image, and compares them with pre-stored sensitive features; when the obtained image features include a feature matching a sensitive feature, the terminal may determine that the second image includes a sensitive feature.
  • the terminal may divide the second image into M sub-areas and identify the image of each sub-area to determine whether the corresponding sub-area is a sensitive area.
  • the method for the terminal to determine the at least one sensitive area of the second image may include: the terminal divides the second image into M sub-areas, M ≥ 2; identifies the images of the M sub-areas to extract the image features of each sub-area; and, for each sub-area, when its image features include a sensitive feature, determines that that sub-area is a sensitive area. M may be a pre-configured fixed value, or M may be determined according to the processing capability of the terminal and the remaining power of the terminal.
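One simple partition scheme for the M sub-areas (the disclosure does not fix how the sub-areas are cut; horizontal bands of equal height are an assumption for illustration):

```python
def split_into_subareas(width, height, m):
    """Split a width x height frame into m horizontal bands, returned
    as (left, top, right, bottom) rectangles; the last band absorbs
    any remainder so the bands exactly cover the frame."""
    band = height // m
    areas = []
    for k in range(m):
        top = k * band
        bottom = height if k == m - 1 else top + band
        areas.append((0, top, width, bottom))
    return areas

areas = split_into_subareas(1080, 1920, 4)
print(areas)
```

Each rectangle would then be classified independently (sensitive / non-sensitive) before noise generation.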
  • the processing capability of the terminal may specifically be the processing capability of the processor of the terminal, and the processor of the terminal may include a CPU and a graphics processing unit (GPU).
  • the processing capability of the processor may include parameters such as the processor's clock frequency, number of cores (as in a multi-core processor), bit width, and cache size.
  • an embodiment of the present application provides a terminal, where the terminal includes a display unit and a control unit. The display unit is configured to display a first image at a first screen refresh rate, where the output frame rate of the first image is a first frame rate.
  • the control unit is configured to detect that the terminal meets the preset condition.
  • the display unit is configured to display a second image after the control unit detects that the preset condition is met, where at least a portion of the second image displayed by the display unit is superimposed with noise parameters, the at least a portion is displayed at a second screen refresh rate, the output frame rate of the at least a portion is a second frame rate, and the second image displayed by the display unit includes multi-frame noise-added sub-images.
  • the second frame rate is greater than the first frame rate, and the second screen refresh rate is greater than the first screen refresh rate.
  • the control unit is specifically configured to control the terminal to enter the noise-adding mode in response to the operation of turning on the noise-adding option.
  • the control unit is specifically configured to control the terminal to automatically enter the noise-adding mode when the second image includes a sensitive feature.
  • the control unit is specifically configured to control the terminal to automatically enter the noise-adding mode when the display unit displays an interface of a preset type of application.
  • the preset type of application includes at least one of a banking application, a payment application, and a communication application.
  • the at least a portion of the second image displayed by the display unit that is superimposed with noise parameters includes at least one sensitive area of the second image, and the sensitive area includes a sensitive feature.
  • the terminal further includes a generating unit, configured to generate N frames of first noise-added sub-images according to the image of the sensitive area.
  • the N frames of first noise-added sub-images are displayed in the sensitive area at the second screen refresh rate, and their output frame rate is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.
  • the generating unit is configured to: when the remaining power of the terminal is greater than or equal to the first threshold, generate N1 frames of first noise-added sub-images according to the image of the sensitive area; when the remaining power of the terminal is less than the first threshold, generate N2 frames, where N1 > N2.
  • the generating unit is configured to generate the N frames of first noise-added sub-images according to the sensitivity of the sensitive area, where the sensitivity is determined according to the sensitive features of the sensitive area; sensitive areas containing different sensitive features have different sensitivities.
  • the display unit displays an image of the non-sensitive area of the second image at the first screen refresh rate, and the output frame rate of the non-sensitive area is the first frame rate.
  • the display unit displays an image of the non-sensitive area of the second image at the second screen refresh rate, and the output frame rate of the non-sensitive area is the second frame rate.
  • the generating unit is further configured to generate an N-frame second noise-added sub-image according to the image of the non-sensitive area.
  • the N frames of second noise-added sub-images are displayed in the non-sensitive area at the second screen refresh rate, and their output frame rate is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.
  • the noise parameter used by the generating unit to generate the N-frame second noise-added sub-image is different from the noise parameter used by the terminal to generate the N-frame first noise-added sub-image.
  • an embodiment of the present application provides a terminal, where the terminal includes a processor, a memory, and a display. The memory and the display are coupled to the processor; the display is configured to display an image; the memory includes a non-volatile storage medium and is used to store computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the processor is configured to display a first image on the display at a first screen refresh rate, the output frame rate of the first image being a first frame rate, and to display a second image on the display after detecting that a preset condition is met, where at least a portion of the second image displayed by the display is superimposed with noise parameters, the at least a portion is displayed at a second screen refresh rate, the output frame rate of the at least a portion is a second frame rate, and the second image includes multi-frame noise-added sub-images; the second frame rate is greater than the first frame rate, and the second screen refresh rate is greater than the first screen refresh rate.
  • the processor being configured to display the second image on the display after detecting that the preset condition is met includes: the processor is specifically configured to enter the noise-adding mode in response to the operation of turning on the noise-adding option, and display the second image on the display.
  • the processor being configured to display the second image on the display after detecting that the preset condition is met includes: the processor is specifically configured to automatically enter the noise-adding mode when the second image includes a sensitive feature, and display the second image on the display.
  • the processor being configured to display the second image on the display after detecting that the preset condition is met includes: the processor is specifically configured to automatically enter the noise-adding mode when an interface of a preset type of application is displayed on the display, and display the second image on the display.
  • the at least a portion of the second image displayed by the display that is superimposed with noise parameters includes at least one sensitive area of the second image, and the sensitive area includes a sensitive feature.
  • the processor is further configured to generate an N-frame first noise-added sub-image according to the image of the sensitive area before displaying the second image on the display.
  • the N frames of first noise-added sub-images displayed by the display are displayed in the sensitive area at the second screen refresh rate, and their output frame rate is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.
  • the processor being configured to generate the N frames of first noise-added sub-images according to the image of the sensitive area includes: the processor is specifically configured to generate N1 frames of first noise-added sub-images when the remaining power of the terminal is greater than or equal to the first threshold, and N2 frames when the remaining power of the terminal is less than the first threshold, where N1 > N2.
  • the processor being configured to generate the N frames of first noise-added sub-images according to the image of the sensitive area includes: the processor is specifically configured to generate them according to the sensitivity of the sensitive area, where the sensitivity is determined based on the sensitive features of the sensitive area.
  • the processor displays an image of the non-sensitive area of the second image at the first screen refresh rate, and the output frame rate of the non-sensitive area is the first frame rate.
  • the processor displays an image of the non-sensitive area of the second image on the display at the second screen refresh rate, and the output frame rate of the non-sensitive area is the second frame rate.
  • the processor is further configured to generate N frames of second noise-added sub-images according to the image of the non-sensitive area before displaying the second image on the display; the N frames of second noise-added sub-images are displayed in the non-sensitive area at the second screen refresh rate, and their output frame rate is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.
  • the noise parameter used by the processor to generate the N-frame second noise-added sub-image is different from the noise parameter used by the terminal to generate the N-frame first noise-added sub-image.
  • an embodiment of the present application provides a control device, where the control device includes a processor and a memory. The memory is used to store computer program code, the computer program code comprising computer instructions; when the processor executes the computer instructions, the control device performs the method described in the first aspect of the embodiments of the present application and any of its possible design manners.
  • an embodiment of the present application provides a computer storage medium, where the computer storage medium includes computer instructions that, when run on the terminal, cause the terminal to perform the method described in the first aspect of the embodiments of the present application and any of its possible design manners.
  • an embodiment of the present application provides a computer program product that, when run on a computer, causes the computer to perform the method described in the first aspect of the embodiments of the present application and any of its possible design manners.
  • for the technical effects brought by the second through sixth aspects and any of their design manners, reference may be made to the technical effects brought by the different design manners of the first aspect, and details are not described herein again.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile phone according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram 1 of an example of a display interface according to an embodiment of the present application.
  • FIG. 3 is a second schematic diagram of a display interface according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram 3 of an example of a display interface according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram 4 of an example of a display interface according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram 5 of an example of a display interface according to an embodiment of the present application.
  • FIG. 7 is a flowchart 1 of an image display method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram 6 of an example of a display interface according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram 7 of an example of a display interface according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram 8 of an example of a display interface according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of an example of a sensitive area in a second image according to an embodiment of the present disclosure.
  • FIG. 12 is a second flowchart of an image display method according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an example of a divided sub-area according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram 1 of an example of a sensitive area and its N noise-added sub-images according to an embodiment of the present application;
  • FIG. 15A is a schematic diagram 1 of a principle for generating N noise-added sub-images according to an embodiment of the present application;
  • FIG. 15B is a schematic diagram 2 of a principle for generating N noise-added sub-images according to an embodiment of the present application;
  • FIG. 15C is a third flowchart of an image display method according to an embodiment of the present application;
  • FIG. 15D is a fourth flowchart of an image display method according to an embodiment of the present application;
  • FIG. 16 is a schematic diagram 1 of an image display method according to an embodiment of the present application;
  • FIG. 17 is a schematic diagram 2 of an example of a sensitive area and its N noise-added sub-images according to an embodiment of the present disclosure
  • FIG. 18 is a schematic diagram 2 of an image display method according to an embodiment of the present disclosure.
  • FIG. 19A is a schematic diagram 9 of an example of a display interface according to an embodiment of the present application.
  • FIG. 19B is a schematic diagram of a principle of adding noise of a second interface according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram 1 of a principle for taking an image according to an embodiment of the present application.
  • FIG. 21 is a schematic diagram 2 of a principle for taking an image according to an embodiment of the present application.
  • FIG. 22 is a first schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 23 is a second schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • The terms "first" and "second" are used for descriptive purposes only, and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the present application, "a plurality" means two or more unless otherwise stated.
  • An image display method provided by an embodiment of the present application can be applied to a process in which an image is displayed by a terminal.
  • the image in the embodiment of the present application may include an image that can be displayed by the terminal, such as a picture, an image in the video, and an application interface of the terminal.
  • The terminal may adjust the output frame rate and the screen refresh rate used by the terminal to display an image, and use the adjusted output frame rate and screen refresh rate to output, frame by frame, multiple noise-added sub-images of the image. Even if a camera is used to shoot the display content of the terminal, only individual frames of noise-added sub-images are captured, and a complete frame of the original image cannot be obtained; therefore, the display content of the terminal can be effectively protected from being sneak-shot, reducing the possibility that the display content of the terminal is leaked.
  • the noise mode in the embodiment of the present application refers to the working mode of the terminal when the terminal performs the method in the embodiment of the present application.
  • the method in the embodiment of the present application may be performed to perform noise-adding processing on the image displayed by the terminal.
  • the noise-adding mode may also be referred to as a noise-added display mode or an image-protected mode, and the like.
  • The terminal in the embodiment of the present application may be a portable terminal (such as the mobile phone 100 shown in FIG. 1), a notebook computer, a personal computer (PC), a wearable electronic device (such as a smart watch), a tablet computer, an automated teller machine (ATM), an augmented reality (AR) device, a virtual reality (VR) device, an on-board computer, or another device having a display function (including a display screen).
  • the mobile phone 100 is used as an example of the terminal.
  • the mobile phone 100 may specifically include: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, and one or more sensors 106.
  • These components can communicate over one or more communication buses or signal lines (not shown in FIG. 1). It will be understood by those skilled in the art that the hardware structure shown in FIG. 1 does not constitute a limitation on the mobile phone; the mobile phone 100 may include more or fewer components than those illustrated, or combine some components, or use different component arrangements.
  • The processor 101 is the control center of the mobile phone 100, connects various parts of the mobile phone 100 through various interfaces and lines, and performs various functions of the mobile phone 100 and processes data by running or executing applications stored in the memory 103 and calling data stored in the memory 103.
  • processor 101 can include one or more processing units.
  • the processor 101 in the embodiment of the present application may include a central processing unit (CPU) and a graphics processing unit (GPU).
  • the radio frequency circuit 102 can be used for receiving and transmitting wireless signals.
  • The radio frequency circuit 102 can receive downlink data from a base station and deliver it to the processor 101 for processing, and can also transmit uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • The wireless communication can use any communication standard or protocol, including but not limited to the Global System for Mobile Communications, General Packet Radio Service, Code Division Multiple Access, Wideband Code Division Multiple Access, Long Term Evolution, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • The memory 103 mainly includes a program storage area and a data storage area, where the program storage area can store an operating system and an application required for at least one function (such as a sound playing function or an image playing function), and the data storage area can store data created according to the use of the mobile phone 100 (such as audio data and a phone book).
  • The memory 103 may include a high-speed random access memory (RAM), and may also include a non-volatile memory such as a magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 103 can store various operating systems.
  • the above memory 103 may be independent and connected to the processor 101 via the above communication bus; the memory 103 may also be integrated with the processor 101.
  • the touch screen 104 may specifically include a touch panel 104-1 and a display 104-2.
  • The touch panel 104-1 can collect touch events performed on or near it by the user of the mobile phone 100 (for example, an operation performed by the user on or near the touch panel 104-1 using a finger, a stylus, or any other suitable object), and send the collected touch information to another device (for example, the processor 101).
  • A touch event performed by the user near the touch panel 104-1 may be referred to as a floating touch; the floating touch means that the user does not need to directly touch the touch panel to select, move, or drag a target (for example, an icon), and only needs to be located near the device to perform the desired function.
  • the touch panel 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • a display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event.
  • Although in FIG. 1 the touch panel 104-1 and the display screen 104-2 are shown as two independent components implementing the input and output functions of the mobile phone 100, in some embodiments the touch panel 104-1 may be integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It should be understood that the touch screen 104 is formed by stacking multiple layers of materials; in the embodiments of the present application, only the touch panel (layer) and the display screen (layer) are shown, and the other layers are not described.
  • The touch panel 104-1 can be disposed on the front side of the mobile phone 100 in the form of a full panel, and the display screen 104-2 can also be disposed on the front side of the mobile phone 100 in the form of a full panel, so that a frameless structure can be implemented on the front side of the mobile phone.
  • the mobile phone 100 can also have a fingerprint recognition function.
  • a fingerprint capture device (ie, fingerprint reader) 112 can be configured on the back of the handset 100 (eg, below the rear camera), or the fingerprint capture device 112 can be configured on the front of the handset 100 (eg, below the touch screen 104).
  • the fingerprint collection device 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collection device 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint collection device 112 is disposed in the touch screen 104, may be part of the touch screen 104, or may be disposed in the touch screen 104 in other manners.
  • the main component of the fingerprint collection device 112 in the embodiment of the present application is a fingerprint sensor, which can employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
  • the mobile phone 100 can also include a Bluetooth device 105 for enabling short-distance data exchange between the handset 100 and other devices (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the one or more sensors 106 described above include sensors for detecting a user's pressing operation on the side and a sliding operation of the user on the side.
  • the one or more sensors 106 described above include, but are not limited to, the above-described sensors, for example, the one or more sensors 106 may also include light sensors, motion sensors, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity.
  • The accelerometer data of the mobile phone 100 can be used for applications that identify the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer and tapping).
  • The mobile phone 100 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein.
  • The WiFi device 107 is configured to provide the mobile phone 100 with network access complying with WiFi-related standard protocols. The mobile phone 100 can access a WiFi hotspot through the WiFi device 107, thereby helping the user to send and receive e-mails, browse web pages, access streaming media, and the like, and providing the user with wireless broadband Internet access.
  • the WiFi device 107 can also function as a WiFi wireless access point, and can provide WiFi network access for other devices.
  • The positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 may specifically be a receiver of a positioning system such as the Global Positioning System (GPS), the Beidou satellite navigation system, or the Russian GLONASS system.
  • After receiving the geographic location sent by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends the information to the memory 103 for storage.
  • the positioning device 108 can also be a receiver of an Assisted Global Positioning System (AGPS), which assists the positioning device 108 in performing ranging and positioning services by acting as an auxiliary server.
  • the secondary location server provides location assistance over a wireless communication network in communication with a location device 108 (i.e., a GPS receiver) of the device, such as handset 100.
  • The positioning device 108 may also use WiFi hotspot based positioning technology. Because each WiFi hotspot has a globally unique Media Access Control (MAC) address, the device can scan and collect broadcast signals of surrounding WiFi hotspots when WiFi is turned on, and thereby obtain the MAC addresses broadcast by the WiFi hotspots. The device sends, through the wireless communication network, data capable of indicating the WiFi hotspots (such as the MAC addresses) to a location server; the location server retrieves the geographic location of each WiFi hotspot, calculates the geographic location of the device with reference to the strength of the WiFi broadcast signals, and sends the calculated geographic location to the positioning device 108 of the device.
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • On one hand, the audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, and the speaker 113 converts the electrical signal into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, and the audio circuit 109 converts the electrical signal into audio data after receiving it, and then outputs the audio data to the RF circuit 102 to be sent to, for example, another mobile phone, or outputs the audio data to the memory 103 for further processing.
  • The peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, and a subscriber identity module card). For example, it is connected to a mouse through a Universal Serial Bus (USB) interface, and is connected, through metal contacts on the card slot of the subscriber identity module, to a Subscriber Identification Module (SIM) card provided by a service provider. The peripheral interface 110 can be used to couple the external input/output devices described above to the processor 101 and the memory 103.
  • the mobile phone 100 can communicate with other devices in the device group through the peripheral interface 110.
  • the peripheral interface 110 can receive display data sent by other devices for display, etc. No restrictions are imposed.
  • The mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components.
  • The battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power management are implemented through the power supply device 111.
  • the mobile phone 100 may further include a camera (front camera and/or rear camera), a flash, a micro projection device, a near field communication (NFC) device, and the like, and details are not described herein.
  • An output frame rate, such as the first frame rate and the second frame rate, refers to the number of frames per second (FPS) output by the terminal's display.
  • Screen refresh rate such as the first screen refresh rate and the second screen refresh rate, refers to the number of times the terminal's display refreshes the screen every second.
  • the output frame rate and the screen refresh rate in the embodiment of the present application may be the same.
  • In this case, the picture output by the GPU of the terminal to the display is different at each refresh; that is, the frame of image displayed by the display is different each time.
  • The output frame rate and the screen refresh rate can also be different. For example, if the screen refresh rate B is twice the output frame rate A, the image that the GPU outputs to the display in every two consecutive refreshes is the same frame of image.
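  • The relation between the two rates described above can be sketched as follows; the function and the example rates are assumptions of this illustration, not part of the embodiment, and the sketch assumes the refresh rate is an integer multiple of the frame rate:

```python
# Illustrative sketch of output frame rate vs. screen refresh rate.

def frame_shown_at_refresh(tick: int, output_fps: int, refresh_hz: int) -> int:
    """Return the index of the frame displayed at screen refresh `tick`,
    assuming the refresh rate is an integer multiple of the frame rate."""
    repeats = refresh_hz // output_fps  # refreshes spent on each frame
    return tick // repeats

# Equal rates (e.g. A = B = 60): every refresh shows a new frame.
print([frame_shown_at_refresh(t, 60, 60) for t in range(4)])  # [0, 1, 2, 3]

# Refresh rate B twice the frame rate A: each frame is shown on two
# consecutive refreshes, matching the example above.
print([frame_shown_at_refresh(t, 30, 60) for t in range(4)])  # [0, 0, 1, 1]
```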
  • Scenario 1: When the interface displayed by the terminal includes a password input box, if other users use other terminals to sneak-shoot the screen of the terminal, the password in the password input box displayed by the terminal may be leaked.
  • The interface including the password input box in the embodiment of the present application may include: an account login interface of an application (such as WeChat, Alipay, QQ, and e-mail) in a portable terminal such as a mobile phone or a tablet computer (for example, the WeChat account login interface shown in FIG. 2), an electronic payment interface of a portable terminal such as a mobile phone or a tablet (such as the payment interface of Alipay, WeChat, or a banking application), and the password input interface of an ATM.
  • Scenario 2: When the terminal displays a private document (for example, the private document 1 shown in (a) of FIG. 3) or a private picture (for example, the picture 1 shown in (b) of FIG. 3), or the terminal plays a private video (for example, the video 1 shown in FIG. 4), if other users use other terminals to sneak-shoot the screen of the terminal, the private document, private picture, or private video displayed by the terminal may be leaked.
  • Scenario 3: When the terminal displays an account amount (for example, the account management interface shown in FIG. 5 includes the account balance), if other users use other terminals to sneak-shoot the screen of the terminal, the account amount displayed by the terminal may be leaked.
  • Scenario 4: When the terminal displays a chat interface (for example, the WeChat chat interface shown in (a) of FIG. 6) or a mail interface (for example, the mail interface shown in (b) of FIG. 6), if other users use other terminals to sneak-shoot the screen of the terminal, the communication content displayed by the terminal may be leaked.
  • For example, when the terminal in the embodiment of the present application is a projection device of a movie theater and a movie is being played, if a viewer uses another terminal to sneak-shoot the screen, the movie played on the screen may be recorded.
  • The execution body of the image display method provided by the embodiments of the present application may be an image display device, and the image display device may be any one of the above terminals (for example, the image display device may be the mobile phone 100 shown in FIG. 1); alternatively, the image display device may be the central processing unit (CPU) of the terminal, or a control module in the terminal for executing the image display method.
  • The image display method provided by the embodiments of the present application is described below.
  • An embodiment of the present application provides an image display method. As shown in FIG. 7, the image display method includes S701-S702:
  • the terminal displays the first image on the display screen at a first screen refresh rate, and the output frame rate of the first image is the first frame rate.
  • After detecting that a preset condition is met, the terminal displays a second image on the display screen, where a noise parameter is superimposed on at least a part of the second image, the at least one part is displayed at a second screen refresh rate, the output frame rate of the at least one part is the second frame rate, and the second image includes multiple frames of noise-added sub-images.
  • the second frame rate is greater than the first frame rate
  • the second screen refresh rate is greater than the first screen refresh rate
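  • The two steps above can be sketched as follows; the condition check, the class, and the concrete rates are illustrative assumptions for this sketch, not the claimed implementation:

```python
# Minimal sketch of S701-S702: display normally until a preset condition
# is met, then raise the screen refresh rate and output frame rate and
# output noise-added sub-images. All names and rates are assumptions.

class Terminal:
    def __init__(self):
        self.refresh_hz = 60   # first screen refresh rate
        self.output_fps = 60   # first frame rate

    def preset_condition_met(self, image: dict) -> bool:
        # e.g. a noise-adding option is turned on, or the image
        # is determined to contain a sensitive feature
        return image.get("sensitive", False)

    def display(self, image: dict) -> list:
        if not self.preset_condition_met(image):
            return [image["name"]]                    # S701: plain display
        self.refresh_hz, self.output_fps = 120, 120   # S702: raised rates
        n = 4                                         # frames of sub-images
        return [f'{image["name"]}_noise_sub_{i}' for i in range(n)]

t = Terminal()
print(t.display({"name": "first"}))                      # ['first']
print(t.display({"name": "second", "sensitive": True}))  # 4 noise sub-images
```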
  • the terminal may enter the noise-adding mode after detecting that the preset condition is met; after entering the noise-adding mode, the terminal may display the second image on the display screen.
  • the first frame rate is an output frame rate used when the terminal displays the image before entering the noise-adding mode;
  • the first screen refresh rate is a screen refresh rate used when the terminal displays the image before entering the noise-adding mode.
  • the noise parameter in the embodiment of the present application is used to perform noise processing on the image to obtain a noise-added image.
  • the noise-adding parameter may be superimposed on the pixel value of the pixel of the image to change the pixel value of the pixel to obtain a noise-added sub-image, so as to achieve the purpose of adding noise to the image.
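  • A minimal sketch of superimposing a noise parameter on pixel values follows; the 8-bit clamping range and the example values are assumptions of this sketch:

```python
def superimpose_noise(pixels, noise):
    """Add a per-pixel noise parameter to 8-bit pixel values, clamping
    the result so it stays a valid pixel value in 0-255."""
    return [max(0, min(255, p + n)) for p, n in zip(pixels, noise)]

row = [100, 180, 250]                         # original pixel values
noisy = superimpose_noise(row, [30, -40, 30])
print(noisy)  # [130, 140, 255] -- the pixel values are changed,
              # yielding one noise-added sub-image
```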
  • S702 can include S702a:
  • the terminal enters a noise-adding mode in response to the turning-on operation of the noise-adding option, and displays a second image on the display screen.
  • the noise-adding option in the embodiment of the present application may be: a user interface provided by the terminal to facilitate the user to operate the terminal to enter the noise-adding mode.
  • For example, the noise-adding option may be an option in a setting interface; alternatively, the noise-adding option may be a switch button in a notification bar displayed by the terminal.
  • the noise-adding option may also be referred to as a noise-adding button or a noise-adding display option, and the like.
  • the terminal is the mobile phone 100 as an example.
  • The setting interface of the mobile phone 100 may include a noise-adding option "noise-adding display" 801.
  • The mobile phone 100 can enter the noise-adding mode in response to the user turning on the noise-adding option "noise-adding display" 801.
  • For example, in response to the user's click operation on the noise-adding option "noise-adding display" 801, the mobile phone 100 can display the noise control interface 802 shown in (b) of FIG. 8, where the noise control interface 802 includes options of multiple types of applications, such as an option for banking applications, an option for payment applications, and an option for communication applications; alternatively, in response to the user's click operation on the noise-adding option "noise-adding display" 801, the mobile phone 100 can display the noise control interface 803 shown in (c) of FIG. 8, which includes options of multiple applications, such as an option for Alipay, an option for a bicycle-sharing application, an option for a merchant bank, and an option for Taobao.
  • After the mobile phone 100 enters the noise-adding mode in response to the user's opening operation on the options in the noise control interface 802 or the noise control interface 803, the mobile phone 100 may perform the image display method provided by the embodiments of the present application on the image of the application corresponding to the opening operation.
  • The terminal may enter or exit the above-mentioned noise-adding mode in response to the user's click operation on the noise-adding option "noise-adding display" in a pull-down menu.
  • For example, the pull-down menu 901 shown in FIG. 9 includes the noise-adding option "noise-adding display", and the mobile phone 100 enters the noise-adding mode in response to the user's turning-on operation on the noise-adding option "noise-adding display".
  • S702 can include S702b:
  • S702b. When determining that the second image to be displayed includes a sensitive feature, the terminal automatically enters the noise-adding mode, and displays the second image on the display screen.
  • the sensitive feature in the embodiment of the present application may include at least one of a preset control, a currency symbol, and a preset text.
  • the preset control includes at least one of a password input box, a user name input box, and an ID number input box
  • the preset text includes at least one of a balance, a password, a salary, and an account.
  • The above currency symbol may be the currency symbol of any country, for example, the renminbi symbol ¥, the dollar sign $, and the euro sign €.
  • The preset text includes but is not limited to a balance, a password, a salary, an account, and the like; for example, the preset text may further include "private document" as shown in (a) of FIG. 3.
  • the sensitive features in the embodiments of the present application include, but are not limited to, the features listed above.
  • the sensitive feature may also include information in a preset format, such as a bank card number, an ID card number, a bank card password, and an email address.
  • The method for the terminal to determine that the second image includes a sensitive feature may further include: when the second image is an image of an application of a preset type in the terminal, an image of an encrypted document, an encrypted image, or an image of a private video, the terminal may determine that the second image includes a sensitive feature.
  • The method for the terminal to determine that the second image includes a sensitive feature may further include: the terminal performs image recognition on the second image to be displayed to obtain one or more image features included in the second image, and compares the obtained one or more image features with pre-stored sensitive features; when the obtained one or more image features include an image feature matching a sensitive feature, the terminal may determine that the second image includes the sensitive feature.
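  • One plausible way to compare extracted text features against pre-stored sensitive features is sketched below; the keyword list, currency symbols, and the regular expressions for preset formats are illustrative assumptions, not the embodiment's actual feature set:

```python
import re

# Hypothetical matcher for the sensitive features listed above.
SENSITIVE_WORDS = {"balance", "password", "salary", "account"}
CURRENCY_SYMBOLS = {"¥", "$", "€"}
PRESET_FORMATS = [
    re.compile(r"\b\d{16,19}\b"),     # bank-card-number-like digit runs
    re.compile(r"\b\d{17}[\dXx]\b"),  # ID-card-number-like digit runs
]

def contains_sensitive_feature(text: str) -> bool:
    """Return True if the extracted text matches any pre-stored
    sensitive feature (preset text, currency symbol, preset format)."""
    lowered = text.lower()
    if any(word in lowered for word in SENSITIVE_WORDS):
        return True
    if any(symbol in text for symbol in CURRENCY_SYMBOLS):
        return True
    return any(pattern.search(text) for pattern in PRESET_FORMATS)

print(contains_sensitive_feature("Account balance: $120.50"))  # True
print(contains_sensitive_feature("Weather today is sunny"))    # False
```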
  • S702 can include S702c:
  • S702c When displaying an interface of the preset type of application, the terminal automatically enters the noise-adding mode, and displays the second image on the display screen.
  • The preset types of applications may include at least one of banking applications (such as the China Merchants Bank APP and the Bank of China APP), payment applications (such as Alipay and WeChat), and communication applications (such as e-mail, and instant messaging applications such as WeChat and QQ).
  • the above preset type of application can be set by the user in the terminal.
  • The user can set the above-described preset type of application in the noise control interface 802 shown in (b) of FIG. 8.
  • Different from implementation manner (1), in response to the user turning on the options in the noise control interface 802, the mobile phone 100 may enter the noise-adding mode when the mobile phone 100 displays an image of the application corresponding to the opening operation. For example, if the user performs a noise-adding display opening operation on the payment-type applications, the mobile phone 100 enters the noise-adding mode when the mobile phone 100 displays the interface of the Alipay application.
  • Optionally, the foregoing detecting that the preset condition is met may be: detecting that the current scene information meets a preset condition.
  • the above S702 may include S702d:
  • S702d The terminal automatically enters the noise adding mode when the current scene information meets the preset condition.
  • the current scene information includes at least one of time information, address information, and environment information.
  • the time information is used to indicate the current time
  • The address information is used to indicate the current location of the terminal, such as a home, a company, or a shopping mall; the terminal can determine its current location by using an existing positioning method, including but not limited to GPS positioning and WiFi positioning.
  • the above environmental information can be used to indicate the number of people around the terminal, and whether strangers or the like are included around the terminal.
  • the terminal can determine the number of people around the terminal by voice recognition or by capturing images through the camera, and whether strangers are included around the terminal.
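  • A check of the scene-information condition could look like the sketch below; the time thresholds, place names, and the stranger flag are assumptions of this sketch rather than conditions defined by the embodiment:

```python
from datetime import time

def scene_meets_preset(current_time, location, stranger_nearby):
    """Illustrative preset condition over time, address, and environment
    information; real conditions would be user- or vendor-defined."""
    in_busy_hours = time(8, 0) <= current_time <= time(22, 0)
    in_public_place = location in {"shopping mall", "subway", "company"}
    return stranger_nearby or (in_busy_hours and in_public_place)

print(scene_meets_preset(time(12, 30), "shopping mall", False))  # True
print(scene_meets_preset(time(23, 0), "home", False))            # False
```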
  • the manner in which the terminal enters the noise-adding mode includes, but is not limited to, the manners listed above.
  • The terminal may turn on the above-described noise-adding mode in response to a preset gesture input by the user. That is, when the user wants the terminal to display a private image and wants to avoid leakage caused by the private image being sneak-shot by another device, the user can control the terminal to turn on the noise-adding mode through the preset gesture regardless of the interface currently displayed by the terminal. In other words, the terminal can receive and respond to the preset gesture input by the user at any time to enter the noise-adding mode.
  • the mobile phone 100 can receive the "S-type gesture” input by the user on the desktop 1001 of the mobile phone 100, and enter the noise-adding mode.
  • the mobile phone 100 can display the mode reminder window 1002 shown in (b) of FIG. 10 in response to the "S-type gesture" input by the user on the desktop 1001 of the mobile phone 100, and the mode reminder window 1002 is used to remind the user that the mobile phone has entered the noise-adding mode.
  • the display screen of the terminal outputs images frame by frame according to the output frame rate and the screen refresh rate. That is, both the first image and the second image in the embodiment of the present application may be one-frame images.
  • the terminal may display, at the second screen refresh rate, at least a part of the second image that includes the multi-frame noise-added sub-image (the at least part is superimposed with the noise parameter), and the output frame rate of the at least part is the second frame rate.
  • the second screen refresh rate is greater than the first screen refresh rate
  • the second frame rate is greater than the first frame rate.
  • At least a part of the second image may be at least one sensitive area (a region including sensitive features) in the second image.
  • at least a portion of the second image 1101 shown in (a) of FIG. 11 is the sensitive region S of the second image 1101; or, at least a portion of the second image 1102 shown in (b) of FIG. 11 is the sensitive area S1 and the sensitive area S2 of the second image 1102.
  • the second image includes a multi-frame noise-added sub-image.
  • the image of the at least one sensitive area in the second image includes a multi-frame noise-added sub-image (such as N frames of noise-added sub-images, where N is an integer greater than or equal to 2).
  • the multi-frame noise-added sub-image of a sensitive area is obtained by superimposing noise parameters on the image of the sensitive area.
  • the other areas of the second image except the sensitive area are referred to as non-sensitive areas. The non-sensitive area may be displayed at the second screen refresh rate with an output frame rate of the second frame rate; or, the non-sensitive area may be displayed at the first screen refresh rate with an output frame rate of the first frame rate.
  • the terminal may display the entire content of the second image at the same screen refresh rate (ie, the second screen refresh rate) and the same frame rate (the second frame rate). This greatly reduces the performance requirements of the display screen, since the display screen is not required to support different refresh rates in different display areas.
  • when the non-sensitive area is displayed at the first screen refresh rate and the output frame rate of the non-sensitive area is the first frame rate, only the sensitive area needs to be noise-processed and have its screen refresh rate adjusted, so the anti-sneak-shot effect can be achieved with low power consumption.
  • S702 in FIG. 7 may include S1201-S1203:
  • the terminal may determine a sensitive area S from the second image 1101; as shown in (b) of FIG. 11, the terminal may determine two sensitive areas, the sensitive area S1 and the sensitive area S2, from the second image 1102.
  • the terminal may identify the second image; when a sensitive feature is identified in the second image, the terminal determines one or more sensitive regions according to the location of the sensitive feature in the second image.
  • the above S1201 may include S1201a-S1201b:
  • S1201a: The terminal determines that the second image includes a sensitive feature.
  • for example, the terminal may determine that the second image includes a sensitive feature in the following manner.
  • the terminal may identify the second image to be displayed to obtain one or more image features included in the second image, and then compare the obtained one or more image features with the pre-saved sensitive features. When the one or more image features include an image feature matching a sensitive feature, the terminal may determine that the second image includes the sensitive feature.
  • the terminal can save a plurality of sensitive features in advance.
  • S1201b The terminal determines at least one sensitive area of the second image according to the location of the sensitive feature in the second image.
  • after determining that the second image includes a sensitive feature, the terminal can determine the location of the sensitive feature in the second image. Then, the terminal may determine, according to the determined location, the area of the second image that includes the sensitive feature as the sensitive area.
  • one or more sensitive features can be included in the second image, so the terminal can determine at least one sensitive area according to the one or more sensitive features.
  • because the display interface includes a password input box, the mobile phone 100 can determine that the display interface includes a sensitive feature and determine the sensitive area 201 according to the location of the password input box in the image. Since the display interface shown in FIG. 5 includes the RMB symbol ¥, the mobile phone 100 can determine that the display interface includes a sensitive feature, and determine the sensitive area 501 according to the position of the RMB symbol ¥ in the image. Since the WeChat chat interface shown in (a) of FIG. 6 includes the preset text "password", the mobile phone 100 can determine that the display interface includes a sensitive feature, and determine the sensitive area 601 according to the position of the preset text "password" in the image. Since email is a preset type of application, the mobile phone 100 can determine that the sensitive area 602 shown in (b) of FIG. 6 is the mail body of the mail.
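The matching described above (compare extracted image features against pre-saved sensitive features, then take the matching regions as sensitive areas) can be sketched as follows; the feature list, text strings, and bounding boxes are invented for illustration and are not from the patent.

```python
# Hypothetical sketch of S1201a/S1201b: flag regions whose extracted text
# matches a pre-saved sensitive feature.
SENSITIVE_FEATURES = {"password", "¥"}  # examples of pre-saved sensitive features

def find_sensitive_regions(features):
    """features: list of (text, (x, y, w, h)) pairs extracted from the image.
    Returns the bounding boxes of regions that contain a sensitive feature."""
    return [box for text, box in features
            if any(s in text for s in SENSITIVE_FEATURES)]

regions = find_sensitive_regions([
    ("enter password", (10, 200, 300, 40)),
    ("balance: ¥88",   (10, 260, 300, 40)),
    ("weather today",  (10, 320, 300, 40)),
])
```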
  • when the second image is an image of an encrypted document, an image of an encrypted picture, or an image of a private video, the sensitive features are distributed over the entire area of the frame image, so the entire area of the frame image needs to be displayed with added noise. Therefore, in this case, the at least one sensitive area determined by the terminal is the entire area of the second image.
  • the document 1 displayed by the mobile phone 100 is a private document, and the entire area of the image of the document 1 displayed by the mobile phone 100 is a sensitive area.
  • the picture 1 displayed by the mobile phone 100 is a private picture, and then the entire area of the picture of the picture 1 displayed by the mobile phone 100 is a sensitive area.
  • the video 1 played by the mobile phone 100 is a private video, and then the entire area of the image of the video 1 displayed by the mobile phone 100 is a sensitive area.
  • the terminal may divide the second image into M sub-areas, and identify the image of each sub-area to determine whether the corresponding sub-area is a sensitive area.
  • S1201 may include S1201c-S1201e:
  • S1201c: The terminal divides the second image into M sub-regions, where M ≥ 2.
  • M in the embodiment of the present application may be a pre-configured fixed value.
  • M may be determined according to a first parameter of the terminal, where the first parameter includes a processing capability of the terminal and a remaining power of the terminal.
  • the processing capability of the terminal may specifically be the processing capability of the processor of the terminal, and the processor of the terminal may include a CPU and a GPU.
  • the processing capability of the processor may include parameters such as the processor's clock frequency, number of cores (for a multi-core processor), bit width, and cache size.
  • the terminal may evenly divide the second image into M sub-regions, that is, the M sub-regions have the same size; for example, the four sub-regions shown in (b) of FIG. 13 have the same size. Alternatively, the M sub-regions may have different sizes; for example, the sizes of the six sub-regions shown in (a) of FIG. 13 are different.
  • the image display function provided by the method of the embodiment of the present application can be implemented in one application, and the terminal can install the application to perform the method of the embodiment of the present application.
  • Different terminals have different processors, and their processing capabilities are different. Therefore, in the embodiment of the present application, the value of M is different for different processors.
  • the value of M also depends on the remaining power of the terminal. Specifically, the higher the processing capability of the terminal, the larger M is; and when the processing capability of the terminal is constant, the more remaining power there is, the larger M is. For example, Table 1 provides an example of the relationship between M, the processing capability of the terminal, and the remaining power in the embodiment of the present application.
  • in Table 1, the processing capability increases progressively from processor 1 to processor n.
  • for example, at the same remaining power, the terminal including processor n can divide one frame image into 6 sub-regions, while the terminal including processor 2 can divide one frame image into 4 sub-regions.
  • when the processing capability of the terminal is certain (for example, the processor of the terminal is processor 1), if the remaining power of the terminal is in the (11%, 30%) interval, the terminal can divide one frame image into 3 sub-regions; if the remaining power of the terminal is in the (70%, 100%) interval, the terminal can divide one frame image into 6 sub-regions. That is, when the processing capability of the terminal is constant, the more remaining power there is, the larger M is.
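The Table 1 lookup described above can be sketched as a small function; the tier boundaries and returned values here are illustrative assumptions, not the patent's actual table.

```python
# Illustrative sketch of the Table 1 idea: M grows with both processor
# capability and remaining battery power.
def choose_m(processor_tier, remaining_power_pct):
    """processor_tier: 1 (weakest) and up; returns the number of sub-regions M."""
    if remaining_power_pct <= 10:
        base = 2
    elif remaining_power_pct <= 30:
        base = 3
    elif remaining_power_pct <= 70:
        base = 4
    else:
        base = 6
    # a stronger processor can afford more sub-regions at the same power level
    return base + (processor_tier - 1)
```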
  • S1201d: The terminal identifies the image content of the M sub-regions to extract image features of each sub-region.
  • the method for the terminal to identify the image content of the M sub-regions to extract the image features of each of the sub-regions may refer to the method for the terminal to identify the image in the conventional technology to extract the image features, which is not described herein.
  • after the terminal performs S1201d to extract the image features of each of the M sub-regions, the terminal may perform S1201e for each sub-region:
  • S1201e When an image feature of a sub-region includes a sensitive feature, the terminal determines that the sub-region is a sensitive region.
  • the terminal may determine that the plurality of sub-regions are sensitive regions.
  • the terminal performing S1201 can determine at least one sensitive area of the second image, and then perform S1202-S1203 for each of the at least one sensitive area:
  • S1202 The terminal generates an N-frame first noise-added sub-image according to the image of the sensitive area.
  • N in the embodiment of the present application may be a pre-configured fixed value.
  • in order to prevent a sneak-shot device from learning the rule by which the terminal adds noise to the image (i.e., the fixed value of N) and performing restoration processing on the captured image, N in the embodiment of the present application can be randomly changed within a certain range. For example, when the terminal displays the second image for the first time, the terminal generates 3 frames of the first noise-added sub-image for the image of the sensitive area a in the second image; when the second image is displayed for the second time, the terminal generates 4 frames of the first noise-added sub-image for the image of the sensitive area a.
  • for another example, when the terminal displays the second image in a first preset time period (such as 8:00-9:00 in the morning), it generates 4 frames of the first noise-added sub-image for the image of the sensitive area b in the second image; when the second image is displayed in a second preset time period (such as 10:00-12:00 in the morning), 2 frames of the first noise-added sub-image are generated for the image of the sensitive area b.
  • N may be determined based on the remaining power of the terminal. For example, when the remaining power is greater than or equal to a first threshold, the terminal generates N1 frames of the first noise-added sub-image; when the remaining power is less than the first threshold, the terminal generates N2 frames of the first noise-added sub-image, where N1 > N2.
  • the terminal may determine the number of frames of the generated first noise-added sub-image not only for the case where the remaining power is greater than or equal to the first threshold and the case where it is less than the first threshold; the terminal may also divide the value range of the remaining power more finely and save the correspondence between N and the remaining power. For example, Table 2 shows an example of the relationship between N and the remaining power of the terminal provided by the embodiment of the present application.
  • the value of N in the embodiment of the present application includes, but is not limited to, the values shown in the example of Table 2.
  • N may be determined based on the image type of the second image.
  • the image type may indicate that the second image is a dynamic image or a still image.
  • the dynamic image in the embodiment of the present application may be a frame image in the video, and the display time of the dynamic image is short, so the value of N may be small.
  • the static image may include a desktop image of the terminal, an interface image of the application, and a picture displayed by the terminal. The display time of the static image is long, so the value of N may be large.
  • for example, when the second image is a static image, the terminal generates N1 frames of the noise-added sub-image for the second image; when the second image is a dynamic image, the terminal generates N2 frames of the noise-added sub-image for the second image.
  • the number of frames of the first noise-added sub-image generated for the multiple sensitive areas may be the same or different.
  • two sensitive areas (sensitive area S1 and sensitive area S2) are included in the second image 1102 shown in (b) of FIG. 11 as an example.
  • the number of frames of the first noise-added sub-image generated by the terminal for the sensitive area S1 and the sensitive area S2 may both be N; or, as shown in FIG. 17, the terminal may generate N1 frames of the first noise-added sub-image for the sensitive area S1 and N2 frames of the first noise-added sub-image for the sensitive area S2, where N1 is not equal to N2.
  • N may be determined based on the sensitivity of the sensitive area. The sensitivity of each sensitive feature can be saved in the terminal, and different sensitive features have different sensitivities.
  • the terminal can determine the sensitivity of a sensitive area according to the sensitivity of the sensitive features in that area. Specifically, when a sensitive area includes one sensitive feature, the sensitivity of the sensitive area is the sensitivity of that sensitive feature; when a sensitive area includes multiple sensitive features, the sensitivity of the sensitive area is the sum of the sensitivities of those features. Different types of sensitive features have different sensitivities; for example, the preset text "password" has a higher sensitivity than a currency symbol (such as ¥).
  • the correspondence between the sensitivity level and N can be saved in the terminal.
  • for example, Table 3 shows an example of the relationship between N and the sensitivity provided in the embodiment of the present application.
  • the sensitivities a through g shown in Table 3 increase gradually. For example, suppose the sensitive area a includes only the sensitive feature "¥"; then the sensitivity of the sensitive area a is in the [0, a] interval. Suppose the sensitive area b includes the sensitive features "¥" and "password"; then the sensitivity of the sensitive area b is in the (a, b) interval.
  • N may be determined according to the remaining power of the terminal and the sensitivity of the sensitive area, that is, N is determined according to the remaining power of the terminal and the sensitivity of the sensitive area.
  • when the remaining power of the terminal is constant, the higher the sensitivity of the sensitive area, the larger the number N of frames of the first noise-added sub-image generated for the sensitive area.
  • in the fourth implementation manner and the fifth implementation manner, it is assumed that a plurality of sensitive regions are included in one frame image. If the sensitivities of two of the sensitive areas are in the same interval, the number of frames N of the first noise-added sub-image generated by the terminal for the two sensitive areas is the same; if the sensitivities of the two sensitive areas are in different intervals, the number of frames N generated by the terminal for the two sensitive areas is also different.
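The selection of N from remaining power and sensitivity (the Table 2 / Table 3 ideas above) can be sketched as follows; the thresholds and the integer sensitivity scale are illustrative assumptions, not the patent's actual tables.

```python
# Illustrative sketch: N grows with the sensitivity of a sensitive area and
# shrinks when battery power is low.
def choose_n(remaining_power_pct, sensitivity):
    """sensitivity: small integer, higher = more sensitive; returns N >= 2."""
    n = 2 + sensitivity           # more sensitive area -> more noise frames
    if remaining_power_pct < 20:  # low battery -> generate fewer frames
        n = max(2, n - 1)
    return n
```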
  • the terminal may perform noise addition by using a set of noise parameters {W_1, W_2, ..., W_N}, and generate N frames of the first noise-added sub-image according to an image of a sensitive area.
  • W n is the nth noise parameter of a sensitive area.
  • for example, the terminal can adopt a set of noise parameters {W_1, W_2, ..., W_N} and generate N frames of the first noise-added sub-image according to the image of the sensitive area S.
  • the noise parameter corresponding to the first frame of the first noise-added sub-image is W_1; the noise parameter corresponding to the second frame is W_2; ...; the noise parameter corresponding to the nth frame is W_n; ...; the noise parameter corresponding to the Nth frame is W_N.
  • the method for the terminal to generate the N frames of the first noise added sub-image according to the image of the sensitive area may include S1202a-S1202c, that is, the foregoing S1202 may include S1202a-S1202c:
  • S1202a: The terminal determines the pixel value of each pixel in the image of a sensitive area.
  • for example, the pixel value of the first pixel of the first row of the sensitive area S (abbreviated as the a1 pixel) is A_{a1}; the pixel value of the fourth pixel of the first row (abbreviated as the a4 pixel) is A_{a4}; ...; the pixel value of the first pixel of the sixth row (abbreviated as the f1 pixel) is A_{f1}.
  • S1202b The terminal determines N noise parameters of a sensitive area, and the sum of the N noise parameters is zero.
  • in one case, the N noise parameters {W_1, W_2, ..., W_N} may be randomly selected, as long as the sum of the N noise parameters is zero.
  • in another case, the N noise parameters {W_1, W_2, ..., W_N} may conform to a uniform distribution or a Gaussian distribution, again as long as the sum of the N noise parameters is zero.
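The zero-sum constraint of S1202b can be satisfied by sampling N−1 parameters randomly and letting the last one cancel the running sum. The sketch below assumes a uniform distribution and an arbitrary amplitude; both are illustrative choices, not the patent's.

```python
import random

# Minimal sketch of S1202b: draw N noise parameters whose sum is exactly zero.
def zero_sum_noise(n, amplitude=4.0, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-amplitude, amplitude) for _ in range(n - 1)]
    w.append(-sum(w))  # the last parameter cancels the others
    return w

params = zero_sum_noise(5)
```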
  • the terminal may perform S1202c for each of the N noise parameters to calculate a pixel value of each pixel in the first noise-added sub-image of the frame to obtain a frame of the first noise-added sub-image:
  • S1202c: The terminal calculates the pixel value of each pixel in one frame of the first noise-added sub-image by using formula (1), and obtains one frame of the first noise-added sub-image:

a_{n,i} = A_i + W_n    (1)

  • where A_i is the pixel value of pixel i in the image of a sensitive area, i ∈ {1, 2, ..., Q}, and Q is the total number of pixels in the image of the sensitive area; W_n is the nth noise parameter of the sensitive area, n ∈ {1, 2, ..., N}; and a_{n,i} is the pixel value of pixel i in the nth frame of the first noise-added sub-image.
  • for example, the terminal can calculate, by using the above formula (1), the pixel values of the second frame of the first noise-added sub-image: the pixel value of the a1 pixel is a_{2,a1} = A_{a1} + W_2; the pixel value of the a4 pixel is a_{2,a4} = A_{a4} + W_2; ...; the pixel value of the f1 pixel is a_{2,f1} = A_{f1} + W_2.
  • similarly, for the Nth frame of the first noise-added sub-image: the pixel value of the a1 pixel is a_{N,a1} = A_{a1} + W_N; the pixel value of the a4 pixel is a_{N,a4} = A_{a4} + W_N; ...; the pixel value of the f1 pixel is a_{N,f1} = A_{f1} + W_N.
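Formula (1) above, combined with noise parameters that sum to zero, means the per-pixel average over the N frames equals the original pixel value. A minimal sketch (the pixel values and noise parameters are invented for illustration):

```python
# Sketch of formula (1): a[n][i] = A[i] + W[n].
def noise_added_frames(pixels, noise_params):
    """pixels: flat list of original pixel values of one sensitive area;
    noise_params: one noise parameter per output frame."""
    return [[a + w for a in pixels] for w in noise_params]

frames = noise_added_frames([100, 150, 200], [5, -3, -2])  # noise sums to 0
```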
  • the method for calculating the pixel values of the other pixels in the first frame, the second frame, and the Nth frame of the first noise-added sub-image, and the method for calculating the pixel value of each pixel in the other frames of the N frames of the first noise-added sub-image, are not described herein.
  • the terminal may sequentially perform noise processing on the image of a sensitive area in one frame by using each noise parameter (such as W_n) in the above-mentioned set of noise parameters {W_1, W_2, ..., W_N} (ie, superimpose noise on the image of the sensitive area) to obtain a second image including the N frames of the first noise-added sub-image.
  • the noise parameters used in the noise processing of each pixel in the first noise-added sub-image of each frame are the same.
  • for example, in the first frame of the first noise-added sub-image shown in FIG. 14, the noise parameter used for the noise processing of each pixel is W_1.
  • the noise parameters used in the first noise-added sub-images of different frames are different.
  • for example, as shown in FIG. 14, the noise parameter used for the noise processing of the first frame of the first noise-added sub-image is W_1, and the noise parameter used for the noise processing of the second frame is W_2, where W_1 is different from W_2.
  • the noise parameter of each pixel of the first noise-added sub-image of one frame may be the same.
  • the noise parameters of different pixels in the first noise-added sub-image of one frame may also be different.
  • in this case, the terminal can add noise to the image of a sensitive area by using Q groups of noise parameters, one group per pixel; the noise parameters of the respective pixels in the nth frame of the first noise-added sub-image are {W_{n,1}, W_{n,2}, ..., W_{n,i}, ..., W_{n,Q}}.
  • similarly, suppose the pixel value of the a1 pixel of the sensitive area S is A_{a1}, the pixel value of the a4 pixel is A_{a4}, ..., and the pixel value of the f1 pixel is A_{f1}. The terminal can calculate, by using the above formula (1) with per-pixel noise parameters, the pixel values of the nth frame of the first noise-added sub-image: a_{n,a1} = A_{a1} + W_{n,a1}; a_{n,a4} = A_{a4} + W_{n,a4}; ...; a_{n,f1} = A_{f1} + W_{n,f1}.
  • based on the low-pass effect of human vision, the human eye cannot perceive the difference between the image after noise processing and the image before noise processing. This ensures that the images before and after the noise processing look the same to the human eye, and the user's visual experience can be guaranteed.
  • the N noise parameters of the ith pixel are {W_{1,i}, W_{2,i}, ..., W_{n,i}, ..., W_{N,i}}, and the magnitude of their fluctuation is proportional to the sensitivity of the sensitive area.
  • the fluctuation magnitude of the N noise parameters can be represented by the variance of the pixel values of pixel i over the N frames of the noise-added sub-image.
  • the variance of the pixel values of pixel i over the N frames of the first noise-added sub-image is:

s^2 = (1/N) · Σ_{n=1}^{N} (a_{n,i} − ā_i)^2

where ā_i is the average of the pixel values of pixel i over the N frames.
  • the higher the sensitivity of a sensitive area, the greater the fluctuation of the set of noise parameters {W_{1,i}, W_{2,i}, ..., W_{n,i}, ..., W_{N,i}} used by the terminal to add noise to the ith pixel of the sensitive area, that is, the larger the variance s^2 of the pixel values of pixel i over the N frames of the first noise-added sub-image.
  • the hardware of the display screen of the terminal is limited, and the pixel value A_i of a pixel (such as pixel i) of an image displayed by the terminal is within [0, P]; therefore, to ensure that each frame of the noise-added image can be displayed, the pixel value of each pixel in each frame of the first noise-added sub-image must also be within [0, P], i.e., 0 ≤ a_{n,i} ≤ P, that is, 0 ≤ A_i + W_{n,i} ≤ P, and hence −A_i ≤ W_{n,i} ≤ P − A_i.
  • moreover, in order to ensure that the images before and after the noise processing look the same to the human eye, the average of the pixel values of pixel i over the N frames of the first noise-added sub-image should equal A_i.
  • to this end, the terminal needs to compensate, in the nth frame, for the noise added in the previous n−1 frames; the value range of W_{n,i} can be derived from the display constraint together with the accumulated sum S_{n−1}, where S_{n−1} is the sum of the noise parameters used for pixel i in the noise processing of the first n−1 frames of the first noise-added sub-image.
  • max(x, y) represents the maximum value in x and y
  • min(x, y) represents the minimum value in x and y
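The two per-pixel constraints above (every noise-added value stays inside the displayable range [0, P], and the last frame compensates the accumulated noise so the N-frame average returns to A_i) can be sketched as follows; the amplitude bound and the clamping strategy are illustrative assumptions, not the patent's exact derivation.

```python
import random

# Sketch: generate a per-pixel noise sequence that respects the display
# range and compensates accumulated noise in the final frame.
def clipped_noise_for_pixel(a_i, n_frames, p=255, amplitude=8.0, seed=0):
    rng = random.Random(seed)
    w = []
    for _ in range(n_frames - 1):
        lo, hi = -a_i, p - a_i          # keeps a_i + W inside [0, p]
        w.append(rng.uniform(max(lo, -amplitude), min(hi, amplitude)))
    comp = -sum(w)                      # compensate the previous n-1 frames
    w.append(max(-a_i, min(p - a_i, comp)))
    return w

w = clipped_noise_for_pixel(100, 4)
```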
  • the pixel value in the embodiment of the present application may be a color value of a color component of a pixel point.
  • the terminal may perform S1202a-S1202c for the color value of each color component of each pixel to obtain a frame of the first noise-added sub-image.
  • the color component of the pixel may include three primary colors of Red Green Blue (RGB).
  • suppose R_i, G_i, and B_i are the color values of the three color components of pixel i before the noise processing; then R_{n,i} is the color value of R_i after noise is added, G_{n,i} is the color value of G_i after noise is added, and B_{n,i} is the color value of B_i after noise is added. The terminal determines, according to R_{n,i}, G_{n,i}, and B_{n,i}, the pixel value a_{n,i} of pixel i in the nth frame of the first noise-added sub-image.
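The per-component processing above can be sketched as follows; applying the same noise parameter to all three components and the 8-bit range [0, 255] are illustrative assumptions.

```python
# Sketch of per-component noise: add a noise parameter to each RGB color
# component of one pixel, clamped to the displayable range [0, 255].
def add_noise_rgb(rgb, w):
    return tuple(max(0, min(255, c + w)) for c in rgb)

noisy = add_noise_rgb((120, 64, 200), 10)
```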
  • the sum of the N noise parameters may be zero.
  • the sum of a set of noise parameters (ie, N noise parameters) in the embodiment of the present application may also be within a preset parameter interval.
  • the differences between zero and each of the upper limit value and the lower limit value of the preset parameter interval are less than a preset parameter threshold.
  • the preset parameter threshold may be 0.3 or 0.05.
  • the preset parameter threshold may be 0.3, and the preset parameter interval may be [-0.3, 0.2].
  • S1203: The terminal displays the N frames of the first noise-added sub-image in the sensitive area at the second screen refresh rate, and the output frame rate of the N frames of the first noise-added sub-image is the second frame rate.
  • the second frame rate is N times the first frame rate
  • the second screen refresh rate is N times the first screen refresh rate.
  • the first frame rate is an output frame rate used when the terminal displays an image before entering the noise-adding mode
  • the first screen refresh rate is a screen refresh rate before the terminal enters the noise-adding mode.
  • the non-sensitive area is displayed at a first screen refresh rate, and the output frame rate of the non-sensitive area is a first frame rate.
  • the method in the embodiment of the present application may further include S1204:
  • the terminal displays an image of the non-sensitive area at a first screen refresh rate, and the output frame rate of the non-sensitive area is a first frame rate.
  • the second frame rate is N times the first frame rate
  • the second screen refresh rate is N times the first screen refresh rate. That is, the terminal displays the N frames of the first noise-added sub-image in the sensitive area at the second screen refresh rate and the second frame rate, and displays the image of the non-sensitive area (one frame of image) at the first screen refresh rate and the first frame rate. In this implementation, no noise is superimposed on the image of the non-sensitive area.
  • the non-sensitive area is displayed at a second screen refresh rate, and the output frame rate of the non-sensitive area is a second frame rate.
  • the method in the embodiment of the present application may further include S1501-S1502:
  • S1501 The terminal generates an N-frame second noise-added sub-image according to the image of the non-sensitive area.
  • S1502: The terminal displays the N frames of the second noise-added sub-image in the non-sensitive area at the second screen refresh rate, and the output frame rate of the N frames of the second noise-added sub-image is the second frame rate.
  • the method for the terminal to generate the N frames of the second noise-added sub-image according to the image of the non-sensitive area may refer to the method in S1202 for the terminal to generate the N frames of the first noise-added sub-image according to the image of the sensitive area, and is not described herein.
  • the difference is that the at least one set of noise parameters used by the terminal to generate the N frames of the second noise-added sub-image is different from the at least one set of noise parameters used by the terminal to generate the N frames of the first noise-added sub-image.
  • specifically, the fluctuation of the noise parameters used to generate the N frames of the second noise-added sub-image is smaller than that of the noise parameters used to generate the N frames of the first noise-added sub-image.
  • the greater the fluctuation of the noise parameters, the higher the degree of scrambling of the image superimposed with them. That is, although the terminal outputs N frames of the noise-added sub-image at the same screen refresh rate and frame rate in both the sensitive area and the non-sensitive area, the image of the sensitive area is scrambled to a higher degree than the image of the non-sensitive area.
  • the embodiment of the present application provides an image display method in which the terminal can output N frames of the noise-added sub-image at the same screen refresh rate and frame rate in both the sensitive area and the non-sensitive area. That is, the terminal can display the entire content of the second image at the same screen refresh rate (ie, the second screen refresh rate) and frame rate (the second frame rate), without requiring the screen to support different refresh rates in different display regions, so the requirements on the screen are greatly reduced. Moreover, the terminal can scramble the sensitive area and the non-sensitive area to different degrees.
  • the method for displaying the second image in the embodiment of the present application is illustrated by using the sensitive area S in the second image 1101 shown in (a) of FIG. 11 as an example.
  • the second image 1101 includes a sensitive area S, and other areas of the second image 1101 except the sensitive area S (areas filled with black dots) are non-sensitive areas.
  • the time T between t1 and t2 is the time when the terminal displays the second image 1101 by using a conventional scheme.
  • the terminal displays the second image 1101 during the time T period t1-t2.
  • the terminal may divide the time T of the period t1-t2 into N segments, each of duration T/N, and in each segment the terminal displays one frame of the first noise-added sub-image in the sensitive area S.
  • for example, as shown in FIG. 16, during the period t1-t3 of duration T/N, the terminal displays the first frame of the first noise-added sub-image in the sensitive area S; during the period t3-t4, the terminal displays the second frame of the first noise-added sub-image in the sensitive area S; ...; during the period t5-t2 of duration T/N, the terminal displays the Nth frame of the first noise-added sub-image in the sensitive area S.
  • similarly, during the period t1-t3, the terminal displays the first frame of the second noise-added sub-image in the non-sensitive area; during the period t3-t4, the terminal displays the second frame of the second noise-added sub-image in the non-sensitive area; and so on.
  • at different times, the second noise-added sub-image displayed by the terminal in the non-sensitive area is different.
  • taking the second image 1102 shown in (b) of FIG. 11, which includes two sensitive areas (the sensitive area S1 and the sensitive area S2), as an example, the manner in which the terminal displays the second image in the embodiment of the present application is illustrated as follows:
  • the terminal may generate a first noise-added sub-image of the N1 frame for the sensitive area S1, and generate a first noise-added sub-image of the N2 frame for the sensitive area S2.
  • N1 is the same as N2; or N1 is different from N2.
  • The set of noise parameters adopted by the terminal for the noise processing of the sensitive area S1 is {W a1 , W a2 , ..., W aN1 }: the noise processing of the first frame (also referred to as the a1-th frame) of the first noise-added sub-image uses the noise parameter W a1 ; the noise processing of the second frame (also referred to as the a2-th frame) uses the noise parameter W a2 ; ...; the noise processing of the N1-th frame (also referred to as the aN1-th frame) of the first noise-added sub-image uses the noise parameter W aN1 .
  • The set of noise parameters adopted by the terminal for the noise processing of the sensitive area S2 is {W b1 , W b2 , ..., W bN2 }, where the noise parameters {W a1 , W a2 , ..., W aN1 } and the noise parameters {W b1 , W b2 , ..., W bN2 } may be the same or different.
  • The noise processing of the first frame (also referred to as the b1-th frame) of the first noise-added sub-image uses the noise parameter W b1 ; the noise processing of the second frame (also referred to as the b2-th frame) uses the noise parameter W b2 ; ...; the noise processing of the N2-th frame (also referred to as the bN2-th frame) of the first noise-added sub-image uses the noise parameter W bN2 .
  • The noise parameters {W a1 , W a2 , ..., W aN1 } and the noise parameters {W b1 , W b2 , ..., W bN2 } can also satisfy the condition corresponding to the above formula (3).
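To make the zero-sum property of such a set of noise parameters concrete, here is a minimal sketch. The function name, the amplitude, and the seeds are illustrative assumptions, not part of the patent:

```python
import random

def make_zero_sum_noise(n, amplitude=8, seed=None):
    """Generate n noise parameters whose sum is exactly zero.

    The first n-1 values are drawn uniformly from [-amplitude, amplitude];
    the last value cancels their sum, so the average noise added to any
    pixel over the n frames is zero.
    """
    rng = random.Random(seed)
    params = [rng.uniform(-amplitude, amplitude) for _ in range(n - 1)]
    params.append(-sum(params))
    return params

noise_s1 = make_zero_sum_noise(4, seed=1)   # e.g. for sensitive area S1 (N1 = 4)
noise_s2 = make_zero_sum_noise(6, seed=2)   # e.g. for sensitive area S2 (N2 = 6)
```

The two areas may use sets of different lengths (N1 differing from N2), as described above.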
  • the second image 1102 includes a sensitive area S1 and a sensitive area S2.
  • Other areas of the second image 1102 other than the sensitive area S1 and the sensitive area S2 are non-sensitive areas.
  • the time T between t1 and t2 is the time at which the terminal displays the second image 1102 by using a conventional scheme.
  • the terminal displays the second image 1102 during the time T period t1-t2.
  • The terminal may divide the time T of the period t1-t2 into N1 segments of T/N1 each, and display one frame of the first noise-added sub-image of the sensitive region S1 in the sensitive region S1 during each segment.
  • Similarly, the terminal can divide the time T of t1-t2 into N2 segments of T/N2 each, and display one frame of the first noise-added sub-image of the sensitive region S2 in the sensitive region S2 during each segment.
  • For example, during the first segment, the terminal displays the a1-th frame of the first noise-added sub-image in the sensitive region S1 and displays the b1-th frame of the first noise-added sub-image in the sensitive area S2; that is, the terminal displays the image a shown in FIG.
  • Next, the terminal displays the a2-th frame of the first noise-added sub-image in the sensitive region S1 and displays the b1-th frame of the first noise-added sub-image in the sensitive area S2; that is, the terminal displays the image b shown in FIG.
  • Then, the terminal displays the a2-th frame of the first noise-added sub-image in the sensitive region S1 and displays the b2-th frame of the first noise-added sub-image in the sensitive area S2; that is, the terminal displays the image c shown in FIG.
  • Then, the terminal displays the a3-th frame of the first noise-added sub-image in the sensitive region S1 and displays the b2-th frame of the first noise-added sub-image in the sensitive area S2; that is, the terminal displays the image d shown in FIG.
  • ...; the terminal displays the (aN1-1)-th frame of the first noise-added sub-image in the sensitive region S1 and displays the bN2-th frame of the first noise-added sub-image in the sensitive area S2; that is, the terminal displays the image e shown in FIG.
  • Finally, the terminal displays the aN1-th frame of the first noise-added sub-image in the sensitive region S1 and displays the bN2-th frame of the first noise-added sub-image in the sensitive area S2; that is, the terminal displays the image f shown in FIG.
  • The display interface including the password input box shown in FIG. 2, displayed by the mobile phone 100, is taken as an example: after the mobile phone 100 executes the image display method provided by the embodiment of the present application, if the mobile phone 200 captures the image displayed by the mobile phone 100, the displayed interface shown in FIG. 19A can be obtained.
  • In summary, the terminal may determine at least one sensitive area of the second image, and then generate, for each sensitive area, N (N is an integer greater than or equal to 2) frames of the first noise-added sub-image according to the image of the sensitive area; finally, the terminal outputs the N frames of the first noise-added sub-image frame by frame in the sensitive area at the second frame rate (N times the original output frame rate) and the second screen refresh rate (N times the original screen refresh rate).
  • In this way, the image of the sensitive area is divided into N frames of the first noise-added sub-image output frame by frame, and a sneak-shot device photographing the terminal screen captures only a noise-added sub-image, which can reduce the possibility of leakage of the display content of the terminal and effectively protect the display content of the terminal.
  • Moreover, the sum of the noise parameters used by the terminal for the noise processing of the sensitive area is zero; therefore, the average value of the pixel values of the pixel points in the N frames of noise-added sub-images is the same as the pixel value of the corresponding pixel before the noise processing.
  • Thus, the human eye cannot perceive the difference between the image after the noise processing and the image before the noise processing, which ensures that the images before and after the noise processing look the same to the human eye and guarantees the user's visual experience. That is, the method provided by the embodiment of the present application can reduce the possibility of leakage of the display content of the terminal while ensuring the visual experience of the user, and effectively protect the display content of the terminal.
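The claim that the temporal average equals the original pixel value can be checked with a small sketch; the zero-sum noise parameters below are illustrative assumptions:

```python
def noised_frames(pixel_value, noise_params):
    """Pixel values of the N noise-added sub-frames for one pixel."""
    return [pixel_value + w for w in noise_params]

def perceived_value(frames):
    """What the low-pass human visual system effectively sees: the temporal mean."""
    return sum(frames) / len(frames)

noise = [6, -2, -9, 5]              # example parameters summing to zero (assumption)
frames = noised_frames(128, noise)  # the four displayed sub-frame values
assert perceived_value(frames) == 128   # identical to the original pixel value
```

Each individual sub-frame differs from the original (134, 126, 119, 133 here), yet their average is exactly the original value, which is why the eye perceives no change while a single captured frame is scrambled.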
  • In addition, the terminal may perform different noise processing on different sensitive areas (for example, the number of frames N of noise-added sub-images obtained by the noise processing of different sensitive areas may differ, and the noise parameters used in the noise processing of different sensitive areas may differ); that is, the terminal can perform different degrees of noise processing on different sensitive areas.
  • At least a part of the second image may be the entire area of the second image. For example, at least a portion of the second image 1103 shown in (c) of FIG. 11 is the entire region S3 of the second image 1103.
  • In this case, the second image includes multiple frames of noise-added sub-images; specifically, the image of the entire area of the second image includes multiple frames of noise-added sub-images (such as N frames of noise-added sub-images).
  • The multiple frames of noise-added sub-images are obtained by superimposing noise parameters on the image of the entire area of the second image.
  • Although the image of the entire area of the second image is superimposed with noise parameters, this does not mean that the noise parameters superimposed on the image of the entire area are the same, nor does it indicate that the entire area of the second image includes sensitive features.
  • a partial area in the second image may include a sensitive feature, and other areas except the partial area may not include sensitive features.
  • In this case, the image of the entire region of the second image (both the region including the sensitive feature and the region not including the sensitive feature) is superimposed with noise parameters; however, the noise parameters superimposed on the image of the region including the sensitive feature are different from the noise parameters superimposed on the region not including the sensitive feature. Specifically, the fluctuation of the noise parameters superimposed on the region not including the sensitive feature is smaller than that of the noise parameters superimposed on the region including the sensitive feature.
  • The greater the fluctuation of the noise parameters, the higher the degree of scrambling of the image superimposed with those noise parameters; therefore, the degree of noise of the image of the region including the sensitive feature is higher than that of the image of the region not including the sensitive feature.
  • The fact that the image of the entire region S4 of the second image 1901 is superimposed with noise parameters does not mean that the entire region S4 includes sensitive features.
  • only part of the area a in the entire area S4 includes sensitive features, and other areas b other than the partial area a do not include sensitive features.
  • The terminal can superimpose noise parameters on the image of the entire region S4; however, the noise parameters superimposed on the image of the partial region a including the sensitive feature are different from those superimposed on the image of the other region b not including the sensitive feature.
  • the terminal can perform different degrees of noise addition processing on the image of the partial area a including the sensitive feature and the image of the other area b not including the sensitive feature.
  • the degree of noise of the image of the partial region a including the sensitive feature is higher than the degree of noise of the image of the other region b not including the sensitive feature.
  • In the figure, denser black dots are used to indicate the degree of noise of the image of the partial region a including the sensitive feature, and sparser black dots are used to indicate the degree of noise of the image of the other region b not including the sensitive feature.
  • The screen refresh rate and the frame rate of the partial area a including the sensitive feature and of the other area b not including the sensitive feature are the same.
  • That is, the N frames of noise-added sub-images are displayed in both the partial area a including the sensitive feature and the other area b not including the sensitive feature.
  • the entire area of the second image may include sensitive features, such as when the second image is an image of a private document, the entire area of the second image includes sensitive features. In this case, the image of the entire area of the second image is superimposed with the same noise parameter.
  • In this case, the entire region of the second image is displayed at the second screen refresh rate, and the output frame rate of the entire area of the second image is the second frame rate.
  • However, the image of the region including the sensitive feature and the image of the region not including the sensitive feature have different degrees of noise; that is, the image of the region including the sensitive feature and the image of the region not including the sensitive feature are superimposed with different noise parameters.
  • For the method of displaying the second image by the terminal at the second screen refresh rate, reference may be made to the description of the related method steps in FIG. 15D of the embodiment of the present application.
  • Alternatively, the terminal may set the same N for all the sub-regions; that is, for each of the M sub-regions, the terminal may generate N frames of noise-added sub-images of the sub-region from the image of that sub-region (i.e., the sensitive region).
  • The sensitivity of different sub-regions of the M sub-regions may be different; therefore, different noise parameters may be used when the terminal generates the N frames of noise-added sub-images for different sub-regions. That is, the fluctuation magnitude of the N noise parameters adopted for each sub-region is proportional to the sensitivity of that sub-region.
  • Specifically, for each sub-region, the terminal may determine the sensitivity of the sub-region according to the image features of the sub-region, the pre-stored sensitive features, and the sensitivity of those sensitive features; then, a set of noise parameters is selected for the corresponding sub-region according to the sensitivity of the sub-region.
  • In this case, since the terminal generates N frames of noise-added sub-images for all sub-regions of the second image, the terminal can output the N frames of noise-added sub-images of each sub-region frame by frame at the second frame rate and the second screen refresh rate. That is to say, for the entire area of the second image, the output frame rate when the terminal displays the second image is the same, and the screen refresh rate when the terminal displays the second image is the same.
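One way to make "fluctuation proportional to sensitivity" concrete is to scale the amplitude of a zero-sum noise set by a sensitivity factor, so that more sensitive sub-regions get noise with larger variance. The function names, the base amplitude, and the scaling rule are illustrative assumptions:

```python
import random

def noise_set_for_sensitivity(n, sensitivity, base_amplitude=2.0, seed=None):
    """Zero-sum noise set whose fluctuation grows with the region's sensitivity.

    `sensitivity` scales the draw amplitude; more sensitive sub-regions get
    noise parameters with larger variance, i.e. stronger scrambling.
    """
    rng = random.Random(seed)
    amp = base_amplitude * sensitivity
    params = [rng.uniform(-amp, amp) for _ in range(n - 1)]
    params.append(-sum(params))  # enforce the zero-sum condition
    return params

def variance(params):
    """Fluctuation measure of a noise set (mean of squared parameters)."""
    return sum(w * w for w in params) / len(params)
```

For example, a low-sensitivity region and a high-sensitivity region using the same seed yield noise sets whose variances differ by the square of the sensitivity ratio.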
  • In the embodiment of the present application, even if a sneak-shot device continuously captures multiple frames of the image displayed on the terminal screen (for example, records a video of the terminal screen), the sneak-shot device cannot restore the image before the noise processing from the captured frames.
  • The reason is that the scanning modes of a device when capturing an image may include interlaced scanning and progressive scanning.
  • With either scanning mode, capturing an image displayed by the display method provided by the embodiment of the present application yields a garbled, noise-processed image.
  • Interlaced scanning refers to scanning an image in two fields when the image is acquired: the first field scans the odd lines, the second field scans the even lines, and the two fields together form one complete image (i.e., one frame of image).
  • Take as an example the sensitive area including the plurality of pixel points shown in (a) of FIG. 20:
  • the pixel value of the pixel 1 is A 1
  • the pixel value of the pixel 2 is A 2
  • the pixel value of the pixel 3 is A 3
  • the pixel value of the pixel 4 is A 4 .
  • At a certain moment, the terminal may display the n-th frame of the noise-added sub-image in the sensitive area; in the n-th frame of the noise-added sub-image, the pixel value of the pixel 1 is A 1 +W n , the pixel value of the pixel 2 is A 2 +W n , the pixel value of the pixel 3 is A 3 +W n , and the pixel value of the pixel 4 is A 4 +W n .
  • Suppose the capturing device cannot scan the even-numbered lines of the n-th frame of the noise-added sub-image; then the capturing device can only scan the pixel value A 1 +W n of the pixel 1 and the pixel value A 2 +W n of the pixel 2, and cannot scan the pixel value A 3 +W n of the pixel 3 or the pixel value A 4 +W n of the pixel 4.
  • At a later moment, the terminal may display the (n+k)-th frame of the noise-added sub-image in the sensitive area; in the (n+k)-th frame of the noise-added sub-image, the pixel value of the pixel 1 is A 1 +W n+k , the pixel value of the pixel 2 is A 2 +W n+k , the pixel value of the pixel 3 is A 3 +W n+k , and the pixel value of the pixel 4 is A 4 +W n+k .
  • Suppose the capturing device cannot scan the odd-numbered lines of the (n+k)-th frame of the noise-added sub-image; then the capturing device can only scan the pixel value A 3 +W n+k of the pixel 3 and the pixel value A 4 +W n+k of the pixel 4, and cannot scan the pixel value A 1 +W n+k of the pixel 1 or the pixel value A 2 +W n+k of the pixel 2.
  • For example, the sneak-shot device may perform one or more even-line scans and one or more odd-line scans for one frame of image of the N frames of noise-added sub-images.
  • In this way, the odd-line pixel information of the sensitive area scanned by the capturing device for this frame of image includes the pixel value A 1 +W n of the pixel 1 and the pixel value A 2 +W n of the pixel 2, and the scanned even-line pixel information of the sensitive area includes the pixel value A 3 +W n+k of the pixel 3 and the pixel value A 4 +W n+k of the pixel 4; the scanned information is then combined to obtain the image shown in FIG. 21.
  • As shown in FIG. 21, since the noise parameters of the pixels of the odd rows (W n ) and of the even rows (W n+k ) are different, the image of the sensitive region obtained after the combination is garbled compared with the image of the sensitive region before the noise processing.
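A minimal simulation of the interlaced case, with assumed 2x2 pixel values and noise parameters, shows why the combined capture is garbled:

```python
def noise_frame(image, w):
    """One noise-added sub-frame: every pixel shifted by the same parameter w."""
    return [[p + w for p in row] for row in image]

def interlaced_capture(frame_a, frame_b):
    """Simulate an interlaced camera: the first, third, ... lines (row index
    0, 2, ...) come from one sub-frame, the remaining lines from a later one."""
    return [row if r % 2 == 0 else frame_b[r]
            for r, row in enumerate(frame_a)]

original = [[10, 20], [30, 40]]          # assumed pixel values A1, A2 / A3, A4
captured = interlaced_capture(noise_frame(original, 5),    # frame n,   W_n   = 5
                              noise_frame(original, -7))   # frame n+k, W_n+k = -7
# captured == [[15, 25], [23, 33]]: rows carry different offsets, so the
# combined image no longer matches the original up to a constant shift.
```

Because the two fields carry different noise offsets, no single subtraction can undo the scrambling, which is the effect the patent relies on.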
  • progressive scanning refers to the way of scanning one line after another in order.
  • If the sneak-shot device captures the terminal screen in the progressive scan mode in the embodiment of the present application, the following occurs: when the device scans the m-th line, the terminal displays the n-th frame of the noise-added sub-image in the sensitive area; when the device scans the (m+1)-th line, the terminal displays the (n+1)-th frame of the noise-added sub-image in the sensitive area. Thus, for different rows of pixels, the noise parameters of the pixel values scanned by the camera are different; therefore, the image of the sensitive region obtained by combining the multi-line scans is garbled compared with the image of the sensitive region before the noise processing.
  • In summary, after the terminal displays images by the method provided in the embodiment of the present application, even if a camera device records a video of the terminal screen, it captures only garbled images, which can reduce the possibility of leakage of the display content of the terminal and effectively protect the display content of the terminal.
  • the above terminal and the like include hardware structures and/or software modules corresponding to each function.
  • The embodiments of the present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the units and algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or by computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the embodiments of the present application.
  • The embodiment of the present application may divide the foregoing terminal into function modules according to the foregoing method examples; for example, each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • the embodiment of the present application provides a terminal 2200, which includes: a display unit 2201 and a control unit 2202.
  • the display unit 2201 is configured to support the terminal 2200 to perform the display actions in S701, S702 in the foregoing method embodiments, S1203, S1204, S1502, and/or other processes for the techniques described herein.
  • The control unit 2202 is configured to support the terminal 2200 in controlling the display unit 2201 to display an image, to support the terminal 2200 in performing the detection action in S702 in the foregoing method embodiment, the actions of entering the noise-adding mode in S702a-S702d and S1201, and the action of determining the sensitive area in S1201, and/or other processes for the techniques described herein.
  • the terminal 2200 may further include: a generating unit.
  • the generating unit is configured to support the terminal 2200 to perform S1202, S1501 in the foregoing method embodiments, and/or other processes for the techniques described herein.
  • the above terminal 2200 may further include other unit modules.
  • the foregoing terminal 2200 may further include: a storage unit and a transceiver unit.
  • the terminal 2200 can interact with other devices through the transceiver unit.
  • the terminal 2200 can transmit an image file to other devices through the transceiver unit or receive an image file sent by another device.
  • Storage units are used to store data, such as sensitive features.
  • The control unit 2202, the generating unit, and the like may be integrated into one processing module.
  • The transceiver unit may be an RF circuit, a WiFi module, or a Bluetooth module of the terminal 2200, and the storage unit may be the memory of the terminal 2200.
  • the display unit 2201 may be a display module such as a display (touch screen).
  • FIG. 23 is a schematic diagram showing a possible structure of a terminal involved in the above embodiment.
  • the terminal 2300 includes a processing module 2301, a storage module 2302, and a display module 2303.
  • the processing module 2301 is configured to perform control management on the terminal 2300.
  • the display module 2303 is for displaying an image.
  • the storage module 2302 is configured to save the program code and data of the terminal 2300, and a plurality of sensitive features and their sensitivity.
  • The terminal 2300 described above may further include a communication module for communicating with other devices, for example, for receiving messages or image files from other devices or sending them to other devices.
  • The processing module 2301 may be a processor or a controller, for example, a CPU and a GPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or carry out the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 2304 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 2302 can be a memory.
  • For example, when the processing module 2301 is a processor (such as the processor 101 shown in FIG. 1), the communication module is a radio frequency circuit (such as the radio frequency circuit 102 shown in FIG. 1), the storage module 2302 is a memory (such as the memory 103 shown in FIG. 1), and the display module 2303 is a touch screen (including the touch panel 104-1 and the display panel 104-2 shown in FIG. 1), the device provided by the present application may be the mobile phone 100 shown in FIG. 1.
  • The communication module may include not only the radio frequency circuit but also a WiFi module and a Bluetooth module; communication modules such as the radio frequency circuit, the WiFi module, and the Bluetooth module may be collectively referred to as a communication interface. The processor, the communication interface, the touch screen, and the memory may be coupled together by a bus.
  • The embodiment of the present application further provides a control device, including a processor and a memory, where the memory is used to store computer program code including computer instructions; when the processor executes the computer instructions, the control device performs the related method steps to implement the method in the foregoing embodiments. The control device may be a control chip.
  • The embodiment of the present application further provides a computer storage medium storing computer program code; when a processor executes the computer program code, the device performs the related method steps in any one of FIG. 7, FIG. 9, and FIG. 12 to implement the method in the foregoing embodiments.
  • The embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to perform the related method steps in any one of FIG. 7, FIG. 9, and FIG. 12 to implement the method in the foregoing embodiments.
  • The terminal 2200, the terminal 2300, the control device, the computer storage medium, and the computer program product provided by the present application are all used to perform the corresponding methods provided above; therefore, for the beneficial effects that they can achieve, reference may be made to the beneficial effects of the corresponding methods above, and details are not described here again.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation, there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.


Abstract

The embodiment of the present application provides an image display method and a terminal, relating to the field of image processing technologies, which can effectively protect the display content of a terminal and reduce the possibility of leakage of the display content of the terminal. A specific solution includes: the terminal displays a first image on a display screen at a first screen refresh rate, the output frame rate of the first image being a first frame rate; after detecting that a preset condition is met, the terminal displays a second image on the display screen. At least a part of the second image is superimposed with noise parameters, the at least a part superimposed with noise parameters is displayed at a second screen refresh rate, and the output frame rate of the at least a part is a second frame rate. The second image includes multiple frames of noise-added sub-images. The second frame rate is greater than the first frame rate, and the second screen refresh rate is greater than the first screen refresh rate.

Description

An image display method and terminal

Technical Field

The present application relates to the field of image processing technologies, and in particular, to an image display method and a terminal.
Background

As the camera functions on various mobile smart devices become more and more powerful, candid photography of screens has become increasingly covert and difficult to detect. Therefore, on occasions where photography must be prohibited while documents are displayed and discussed on a display screen, protecting the documents is very difficult. In particular, for important scientific and technological materials, confidential documents, internal information, or copyrighted works, the inability to effectively prevent candid photography poses a great threat to their protection.

Most existing anti-candid-photography measures monitor the scene and issue an alarm as a reminder. However, these are passive measures: they can only give a reminder and cannot fundamentally prevent the candid photography, so the screen content is still not effectively protected.
Summary

The embodiment of the present application provides an image display method and a terminal, which can effectively protect the display content of a terminal and reduce the possibility of leakage of the display content of the terminal.

In a first aspect, an embodiment of the present application provides an image display method applied to a terminal having a display screen. The method includes: the terminal displays a first image on the display screen at a first screen refresh rate, the output frame rate of the first image being a first frame rate; after detecting that a preset condition is met, the terminal displays a second image on the display screen. At least a part of the second image is superimposed with noise parameters, the at least a part superimposed with noise parameters is displayed at a second screen refresh rate, and the output frame rate of the at least a part is a second frame rate. The second image includes multiple frames of noise-added sub-images. The second frame rate is greater than the first frame rate, and the second screen refresh rate is greater than the first screen refresh rate.
In the embodiment of the present application, after detecting that the preset condition is met, the terminal may display at least a part of the second image, which includes multiple frames of noise-added sub-images (the at least a part being superimposed with noise parameters), at the second screen refresh rate, with the output frame rate of the at least a part being the second frame rate. Moreover, the second screen refresh rate is greater than the first screen refresh rate, and the second frame rate is greater than the first frame rate. In this way, at least a part of the second image can be divided into multiple frames of noise-added sub-images output frame by frame; a candid-photography device photographing the terminal screen captures only noise-added sub-images, which can reduce the possibility of leakage of the display content of the terminal and effectively protect the display content of the terminal.
In a possible design of the first aspect, the detected preset condition may be that the terminal detects a user operation of enabling a noise-adding option. Specifically, displaying the second image on the display screen after detecting that the preset condition is met includes: in response to the operation of enabling the noise-adding option, the terminal enters a noise-adding mode and displays the second image on the display screen. The noise-adding option may be displayed in a settings interface or a notification bar of the terminal.

In another possible design of the first aspect, the preset condition may be that the second image includes a sensitive feature. Specifically, displaying the second image after detecting that the preset condition is met includes: when the second image includes a sensitive feature, the terminal automatically enters the noise-adding mode and displays the second image on the display screen. The sensitive feature may include at least one of a preset control, a currency symbol, and preset text; the preset control includes at least one of a password input box, a user name input box, and an ID card number input box; the preset text includes at least one of balance, password, salary, and account.

In another possible design of the first aspect, the preset condition may be that the second image is an interface of a preset type of application. Specifically, displaying the second image after detecting that the preset condition is met includes: when displaying the interface of a preset type of application, the terminal automatically enters the noise-adding mode and displays the second image on the display screen. The preset type of application includes at least one of a banking application, a payment application, and a communication application.

In another possible design of the first aspect, the preset condition may be that current scene information meets a preset condition. Specifically, the terminal automatically enters the noise-adding mode when the current scene information meets the preset condition. The current scene information includes at least one of time information, address information, and environment information. The time information indicates the current time; the address information indicates the current location of the terminal, such as home, office, or shopping mall. The environment information may indicate the number of people around the terminal and whether strangers are present around the terminal. The terminal may determine the number of people around it, and whether strangers are present, through voice recognition or images collected by a camera.
In another possible design of the first aspect, the sensitive feature of the second image is superimposed with noise parameters, and the at least a part of the second image may be at least one sensitive area (an area including a sensitive feature) in the second image. That the at least a part of the second image is superimposed with noise parameters may specifically be: the at least one sensitive area in the second image is superimposed with noise parameters.

In another possible design of the first aspect, before the terminal displays the second image on the display screen, the method of the embodiment of the present application further includes: the terminal generates N frames of a first noise-added sub-image according to the image of the sensitive area. The N frames of the first noise-added sub-image are displayed in the sensitive area at the second screen refresh rate, and the output frame rate of the N frames of the first noise-added sub-image is the second frame rate; the second frame rate is N times the first frame rate, the second screen refresh rate is N times the first screen refresh rate, and N is an integer greater than or equal to 2.

Optionally, N is a preconfigured fixed value, where N may be any natural number greater than 2, for example N=4.

Optionally, to prevent a candid-photography device from tracking the regularity of the terminal's image operations, determining the fixed value, and restoring the captured noise-added image when N is a preconfigured fixed value, N in the embodiment of the present application may vary randomly within a certain range.

Optionally, N may be determined according to the remaining battery power of the terminal. The larger N is, the more noise-added sub-images the terminal displays and the more power the terminal consumes to display the image; therefore, the terminal may determine the value of N according to the remaining battery power. Specifically, generating the N frames of the first noise-added sub-image according to the image of the sensitive area includes: when the remaining battery power of the terminal is greater than or equal to a first threshold, the terminal generates N1 frames of the first noise-added sub-image according to the image of the sensitive area; when the remaining battery power of the terminal is less than the first threshold, the terminal generates N2 frames of the first noise-added sub-image according to the image of the sensitive area, where N1>N2.
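The battery-dependent choice of N can be sketched as a simple threshold rule; the threshold and the values of N1 and N2 below are assumptions for illustration, not values from the patent:

```python
def frames_for_battery(battery_pct, first_threshold=30, n1=8, n2=4):
    """Choose the number N of noise-added sub-frames from remaining battery:
    more frames (stronger protection, higher power draw) at or above the
    threshold, fewer frames below it, with N1 > N2 as required."""
    return n1 if battery_pct >= first_threshold else n2
```

For example, a terminal at 80% battery would use N1 = 8 sub-frames, while a terminal at 10% would fall back to N2 = 4 to save power.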
Optionally, N may be determined according to the sensitivity of the sensitive area, that is, according to the sensitivity of the sensitive features in the sensitive area. The terminal may also store the sensitivity of each sensitive feature, and different sensitive features have different sensitivities. Specifically, generating the N frames of the first noise-added sub-image according to the image of the sensitive area includes: the terminal generates the N frames of the first noise-added sub-image according to the sensitivity of the sensitive area, where multiple sensitive areas including different sensitive features have different sensitivities. The higher the sensitivity of a sensitive area, the larger the value of N.

Optionally, N may be determined according to both the remaining battery power of the terminal and the sensitivity of the sensitive area. In this case, for a given remaining battery power, the higher the sensitivity of a sensitive area, the larger the number N of frames of noise-added sub-images generated for that sensitive area; for a given sensitivity of the sensitive area, the more battery power remains, the larger N is.
In another possible design of the first aspect, generating the N frames of the first noise-added sub-image according to the image of the sensitive area includes: the terminal determines the pixel value of each pixel in the image of the sensitive area; the terminal determines at least one set of noise parameters for the sensitive area, each set including N noise parameters, where the sum of the N noise parameters is zero or lies within a preset parameter interval; the terminal computes the pixel value of each pixel of one frame of the noise-added sub-image as $a_{n,i} = A_i + W_{n,i}$, thereby obtaining that frame of the noise-added sub-image.

Here $A_i$ is the pixel value of pixel i in the image of the sensitive area, $i \in \{1, 2, \ldots, Q\}$, where Q is the total number of pixels in the image of the sensitive area; $W_{n,i}$ is the noise parameter of the i-th pixel of the n-th frame of the first noise-added sub-image, $n \in \{1, 2, \ldots, N\}$, with

$$\sum_{n=1}^{N} W_{n,i} = 0,$$

and $a_{n,i}$ is the pixel value of pixel i of the n-th frame of the first noise-added sub-image.

It should be noted that, in the embodiment of the present application, the noise parameters of all pixels of one frame of the first noise-added sub-image may be the same. For example, in the n-th frame of the first noise-added sub-image, the noise parameter $W_{n,i}$ of the i-th pixel and the noise parameter $W_{n,i+k}$ of the (i+k)-th pixel are the same. Alternatively, the noise parameters of different pixels in one frame of the first noise-added sub-image may be different; for example, $W_{n,i}$ and $W_{n,i+k}$ differ.

It can be understood that, for the noise processing of one sensitive area of the second image, the sum of each set of the at least one set of noise parameters is zero, or lies within the preset parameter interval. For example, $\{W_{1,i}, W_{2,i}, \ldots, W_{n,i}, \ldots, W_{N,i}\}$ satisfies

$$\sum_{n=1}^{N} W_{n,i} = 0.$$

Therefore, the average value of the pixel values of pixel i over the N frames of the first noise-added sub-image,

$$\frac{1}{N}\sum_{n=1}^{N} a_{n,i} = A_i + \frac{1}{N}\sum_{n=1}^{N} W_{n,i} = A_i,$$

equals $A_i$, the pixel value of pixel i of the sensitive area before the noise processing. In this way, owing to the low-pass effect of human vision, the human eye cannot perceive the difference between the image after the noise processing and the image before it, which ensures that the images before and after the noise processing look the same to the human eye and guarantees the user's visual experience.
In another possible design of the first aspect, the pixel value of a pixel includes the color values of the color components of the pixel, the color components including the red, green, and blue (RGB) primary colors. The method by which the terminal computes the pixel value $a_{n,i}$ of pixel i of the n-th frame of the first noise-added sub-image includes: the terminal computes the color components of the i-th pixel of the n-th frame of the first noise-added sub-image as $R_{n,i} = R_i + W_{n,i}$, $G_{n,i} = G_i + W_{n,i}$, and $B_{n,i} = B_i + W_{n,i}$, where $R_i$, $G_i$, and $B_i$ are the color values of the color components of pixel i before the noise processing, and $R_{n,i}$, $G_{n,i}$, and $B_{n,i}$ are the corresponding color values after the noise processing; the terminal then determines the pixel value $a_{n,i}$ of pixel i of the n-th frame of the first noise-added sub-image from $R_{n,i}$, $G_{n,i}$, and $B_{n,i}$.
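The per-channel computation (the same noise parameter added to R, G, and B) can be sketched in a few lines; the function name and tuple representation are assumptions for illustration:

```python
def add_noise_rgb(pixel, w):
    """Apply one noise parameter w to each RGB component of a pixel.

    `pixel` is an (R, G, B) tuple; the same w is added to every component,
    mirroring R_n = R + W_n, G_n = G + W_n, B_n = B + W_n.
    """
    return tuple(c + w for c in pixel)
```

For example, `add_noise_rgb((100, 150, 200), 10)` yields `(110, 160, 210)`; over a zero-sum set of parameters, each channel averages back to its original value.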
In another possible design of the first aspect, limited by the hardware of the terminal's display, the pixel value $A_i$ of a pixel (such as pixel i) of an image displayed by the terminal lies in the range $[0, P]$. Therefore, the pixel value of every pixel in each frame of the first noise-added sub-image after noise processing must also lie in $[0, P]$, i.e. $0 \le a_{n,i} \le P$. From $0 \le a_{n,i} \le P$ and

$$\sum_{n=1}^{N} W_{n,i} = 0,$$

it can be determined that the n-th noise parameter $W_{n,i}$ of a sensitive area satisfies the following condition:

$$\max\left(-A_i,\ -A_i-\sum_{j=1}^{n-1}W_{j,i}\right) \le W_{n,i} \le \min\left(P-A_i,\ P-A_i-\sum_{j=1}^{n-1}W_{j,i}\right),$$

where $\max(x, y)$ denotes the maximum of x and y, $\min(x, y)$ denotes the minimum of x and y, and $\sum_{j=1}^{n-1}W_{j,i}$ denotes the sum of the noise parameters of the i-th pixel of the first n-1 frames of the N frames of noise-added sub-images.
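A small helper illustrating the hardware range constraint; P = 255 (an 8-bit display) and the function names are assumptions for illustration:

```python
def valid_noise_range(A_i, P=255):
    """Range of a single noise parameter W such that the displayed value
    A_i + W stays within the hardware range [0, P]."""
    return -A_i, P - A_i

def clamp_noise(A_i, w, P=255):
    """Clamp a candidate noise parameter into the valid range for pixel value A_i."""
    lo, hi = valid_noise_range(A_i, P)
    return max(lo, min(hi, w))
```

For a bright pixel such as A_i = 250, only noise in [-250, 5] keeps the sub-frame displayable, so a candidate w = 20 would be clamped down to 5; the zero-sum bookkeeping across frames must then absorb the difference.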
In another possible design of the first aspect, the N noise parameters take random values; or, the N noise parameters follow a uniform distribution or a Gaussian distribution.

In another possible design of the first aspect, the fluctuation magnitude of the N noise parameters is proportional to the sensitivity of a sensitive area, and is characterized by the variance of the pixel values of pixel i over the N frames of the first noise-added sub-image. For example, the fluctuation magnitude of the N noise parameters $\{W_{1,i}, W_{2,i}, \ldots, W_{n,i}, \ldots, W_{N,i}\}$ of the i-th pixel is proportional to the sensitivity of the sensitive area. The variance of the pixel values of pixel i over the N frames of noise-added sub-images is

$$s^2 = \frac{1}{N}\sum_{n=1}^{N}\left(a_{n,i} - A_i\right)^2 = \frac{1}{N}\sum_{n=1}^{N} W_{n,i}^2.$$

The higher the sensitivity of a sensitive area, the larger the fluctuation of the set of noise parameters $\{W_{1,i}, W_{2,i}, \ldots, W_{N,i}\}$ used by the terminal for the noise processing of the i-th pixel of that sensitive area, i.e., the larger the variance $s^2$ of the pixel values of pixel i over the N frames of the first noise-added sub-image.
在第一方面的另一种可能的设计方式中,第二图像的非敏感区域以第一屏幕刷新率显示,非敏感区域的输出帧率为所述第一帧率。由此,只需要对敏感区域进行处理和调整屏幕刷新率,可以较低复杂度和较低功耗实现防偷拍的效果。
在第一方面的另一种可能的设计方式中,第二图像的非敏感区域以第二屏幕刷新率显示,非敏感区域的输出帧率为所述第二帧率。其中,非敏感区域是第二图像中除所述敏感区域之外的其他区域。
其中,终端可以在敏感区域和非敏感区域以相同的屏幕刷新率和帧率输出N帧加噪子图像。即终端可以同一屏幕刷新率(即第二屏幕刷新率)和帧率(第二帧率)显示第二图像的全部内容,对屏幕的要求大大降低,不需要屏幕在不同显示区域支持不同的刷新率。并且,终端可以对敏感区域和非敏感区域进行不同程度的加扰。
在第一方面的另一种可能的设计方式中,在终端进入加噪模式后,终端在显示屏上显示第二图像之前,本申请实施例的方法还包括:终端根据非敏感区域的图像生成N帧第二加噪子图像。所述N帧第二加噪子图像以所述第二屏幕刷新率在所述非敏感区域显示,所述N帧第二加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或者等于2的整数。其中,终端生成N帧第二加噪子图像所使用的噪声参数,与终端生成N帧第一加噪子图像所使用的噪声参数不同。
在第一方面的一种可能的设计方式中,终端确定第二图像的至少一个敏感区域的方法可以包括:终端确定第二图像中包括敏感特征;终端根据敏感特征在第二图像中的位置,确定至少一个敏感区域。
示例性的,终端确定第二图像中包括敏感特征包括:当第二图像是终端中预设类型的应用的图像、加密文档的图像、加密图片的图像、私密视频的图像时,终端可以确定第二图像中包括敏感特征;或者,终端识别待显示的第二图像,获得第二图像中包括的一个或多个图像特征,将获得的一个或多个图像特征与预先保存的敏感特征进行对比,当获得的一个或多个图像特征中包括与敏感特征匹配的图像特征时,终端则可以确定第二图像中包括敏感特征。
在第一方面的另一种可能的设计方式中,为了更加清楚的识别出第二图像中的敏感区域,终端可以将第二图像分割成M个子区域,识别每个子区域的图像,以判断对应子区域是否为敏感区域。具体的,终端确定第二图像的至少一个敏感区域的方法可以包括:终端将第二图像分割成M个子区域,M≥2;识别M个区域的图像以提取每个子区域的图像特征;针对每个子区域,当一个子区域的图像特征包括敏感特征时,确定该一个子区域为敏感区域;其中,M是预配置的固定值;或者,M是根据终端的处理能力和终端的剩余电量确定的。
其中,终端的处理能力具体可以为该终端的处理器的处理能力,终端的处理器可以包括CPU和图形处理器(Graphics Processing Unit,GPU)。其中,处理器的处理能力可以包括处理器的主频、核数(如多核处理器)、位数和缓存等参数。
第二方面,本申请实施例提供一种终端,该终端包括:显示单元和控制单元。显示单元,用于以第一屏幕刷新率显示第一图像,第一图像的输出帧率为第一帧率。控制单元,用于检测终端满足预设条件。显示单元,还用于在控制单元检测到满足预设条件后,显示第二图像,其中,显示单元显示的第二图像的至少一部分被叠加噪声参数,至少一部分以第二屏幕刷新率显示,至少一部分的输出帧率为第二帧率,显示单元显示的第二图像包括多帧加噪子图像。其中,第二帧率大于第一帧率,第二屏幕刷新率大于第一屏幕刷新率。
在第二方面的一种可能的设计中,上述控制单元,具体用于响应于对加噪选项的开启操作,控制终端进入加噪模式。
在第二方面的另一种可能的设计中,上述控制单元,具体用于在第二图像中包括敏感特征时,控制终端自动进入加噪模式。
在第二方面的另一种可能的设计中,上述控制单元,具体用于在显示单元显示预设类型的应用的界面时,控制终端自动进入加噪模式。其中,所述预设类型的应用包括:银行类应用、支付类应用和通讯类应用中的至少一项。
在第二方面的另一种可能的设计中,上述显示单元显示的第二图像的敏感特征被叠加噪声参数,至少一部分包括第二图像的至少一个敏感区域,敏感区域包括敏感特征。
在第二方面的另一种可能的设计中,上述终端还包括:生成单元。生成单元,用于根据敏感区域的图像生成N帧第一加噪子图像。其中,N帧第一加噪子图像以第二屏幕刷新率在敏感区域显示,N帧第一加噪子图像的输出帧率为第二帧率;第二帧率是第一帧率的N倍,第二屏幕刷新率是第一屏幕刷新率的N倍,N为大于或等于2的整数。
在第二方面的另一种可能的设计中,上述生成单元,具体用于:终端的剩余电量大于等于第一阈值时,根据敏感区域的图像生成N1帧第一加噪子图像;终端的剩余电量小于第一阈值时,根据敏感区域的图像生成N2帧第一加噪子图像,N1>N2。
在第二方面的另一种可能的设计中,上述生成单元,具体用于根据敏感区域的敏感程度生成N帧第一加噪子图像,敏感程度是根据敏感区域的敏感特征确定的;其中,包括不同敏感特征的多个敏感区域的敏感程度不同。
在第二方面的另一种可能的设计中,上述显示单元以第一屏幕刷新率显示第二图像的非敏感区域的图像,非敏感区域的输出帧率为第一帧率。
在第二方面的另一种可能的设计中,上述显示单元以第二屏幕刷新率显示第二图像的非敏感区域的图像,非敏感区域的输出帧率为第二帧率。
在第二方面的另一种可能的设计中,上述生成单元,还用于根据非敏感区域的图像生成N帧第二加噪子图像。N帧第二加噪子图像以第二屏幕刷新率在非敏感区域显示,N帧第二加噪子图像的输出帧率为第二帧率;第二帧率是第一帧率的N倍,第二屏幕刷新率是第一屏幕刷新率的N倍,N为大于或者等于2的整数。其中,生成单元生成N帧第二加噪子图像所使用的噪声参数,与终端生成N帧第一加噪子图像所使用的噪声参数不同。
第三方面,本申请实施例提供一种终端,该终端包括:处理器、存储器和显示器;存储器和显示器与处理器耦合,显示器用于显示图像,存储器包括非易失性存储介质,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当处理器执行计算机指令时,处理器,用于在显示器上以第一屏幕刷新率显示第一图像,第一图像的输出帧率为第一帧率;处理器,还用于在检测到满足预设条件后,在显示器上显示第二图像,其中,显示器显示的第二图像的至少一部分被叠加噪声参数,至少一部分以第二屏幕刷新率显示,至少一部分的输出帧率为第二帧率,第二图像包括多帧加噪子图像;其中,第二帧率大于第一帧率,第二屏幕刷新率大于第一屏幕刷新率。
在第三方面的一种可能的设计中,上述处理器,用于在检测到满足预设条件后,在显示器上显示第二图像,包括:处理器,具体用于响应于对加噪选项的开启操作,进入加噪模式,在显示器上显示第二图像。
在第三方面的另一种可能的设计中,上述处理器,用于在检测到满足预设条件后,在显示器上显示第二图像,包括:处理器,具体用于在第二图像中包括敏感特征时,自动进入加噪模式,在显示器上显示第二图像。
在第三方面的一种可能的设计中,上述处理器,用于在检测到满足预设条件后,在显示器上显示第二图像,包括:处理器,具体用于在显示器显示预设类型的应用的界面时,自动进入加噪模式,在显示器上显示第二图像。
在第三方面的一种可能的设计中,上述显示器显示的第二图像的敏感特征被叠加噪声参数,至少一部分包括第二图像的至少一个敏感区域,敏感区域包括敏感特征。
在第三方面的一种可能的设计中,上述处理器,还用于在显示器上显示第二图像之前,根据敏感区域的图像生成N帧第一加噪子图像。其中,显示器显示的N帧第一加噪子图像以第二屏幕刷新率在敏感区域显示,N帧第一加噪子图像的输出帧率为第二帧率;第二帧率是第一帧率的N倍,第二屏幕刷新率是第一屏幕刷新率的N倍,N为大于或等于2的整数。
在第三方面的一种可能的设计中,上述处理器,用于敏感区域的图像生成N帧第一加噪子图像,包括:处理器,具体用于:终端的剩余电量大于等于第一阈值时,根据敏感区域的图像生成N1帧第一加噪子图像;终端的剩余电量小于第一阈值时,根据敏感区域的图像生成N2帧第一加噪子图像,N1>N2。
在第三方面的一种可能的设计中,上述处理器,用于根据敏感区域的图像生成N帧第一加噪子图像,包括:处理器,具体用于根据敏感区域的敏感程度生成N帧第一加噪子图像,敏感程度是根据敏感区域的敏感特征确定的。
在第三方面的一种可能的设计中,上述处理器,用于根据敏感区域的图像生成N帧第一加噪子图像,包括:处理器,具体用于:确定敏感区域的图像中的每个像素点的像素值;确定敏感区域的至少一组噪声参数,每组噪声参数中包括N个噪声参数;N个噪声参数之和为零,或者N个噪声参数之和在预设参数区间内;采用a n,i=A i+W n,i,计算一帧加噪子图像中每个像素点的像素值,得到一帧加噪子图像。
在第三方面的一种可能的设计中,上述处理器在显示器以第一屏幕刷新率显示第二图像的非敏感区域的图像,非敏感区域的输出帧率为第一帧率。
在第三方面的一种可能的设计中,上述处理器在显示器以第二屏幕刷新率显示第二图像的非敏感区域的图像,非敏感区域的输出帧率为第二帧率。
在第三方面的一种可能的设计中,上述处理器,还用于在显示器显示第二图像之前,根据非敏感区域的图像生成N帧第二加噪子图像;显示器显示的N帧第二加噪子图像以第二屏幕刷新率在非敏感区域显示,N帧第二加噪子图像的输出帧率为第二帧率;第二帧率是第一帧率的N倍,第二屏幕刷新率是第一屏幕刷新率的N倍,N为大于或者等于2的整数。其中,处理器生成N帧第二加噪子图像所使用的噪声参数,与终端生成N帧第一加噪子图像所使用的噪声参数不同。
需要说明的是,第二方面、第三方面的可能的设计方式中所述的敏感特征、预设类型的应用、A_i、Q、W_{n,i}、∑_{n=1}^{N} W_{n,i} = 0、a_{n,i} 和非敏感区域的具体内容可以参考第一方面的可能的设计方式中的描述,本申请实施例这里不再赘述。
第四方面,本申请实施例提供一种控制设备,该控制设备包括处理器和存储器,该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,当处理器执行该计算机指令时,控制设备执行如本申请实施例第一方面及其任一种可能的设计方式所述的方法。
第五方面,本申请实施例提供一种计算机存储介质,所述计算机存储介质包括计算机指令,当所述计算机指令在终端上运行时,使得所述终端执行如本申请实施例第一方面及其任一种可能的设计方式所述的方法。
第六方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如本申请实施例第一方面及其任一种可能的设计方式所述的方法。
另外,第二方面、第三方面及其任一种设计方式,以及第四方面、第五方面和第六方面所带来的技术效果可参见上述第一方面中不同设计方式所带来的技术效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种手机的硬件结构示意图;
图2为本申请实施例提供的一种显示界面实例示意图一;
图3为本申请实施例提供的一种显示界面实例示意图二;
图4为本申请实施例提供的一种显示界面实例示意图三;
图5为本申请实施例提供的一种显示界面实例示意图四;
图6为本申请实施例提供的一种显示界面实例示意图五;
图7为本申请实施例提供的一种图像显示方法流程图一;
图8为本申请实施例提供的一种显示界面实例示意图六;
图9为本申请实施例提供的一种显示界面实例示意图七;
图10为本申请实施例提供的一种显示界面实例示意图八;
图11为本申请实施例提供的第二图像中的敏感区域的实例示意图;
图12为本申请实施例提供的一种图像显示方法流程图二;
图13为本申请实施例提供的分割子区域的实例示意图;
图14为本申请实施例提供的一种敏感区域及其N个加噪子图像的实例示意图一;
图15A为本申请实施例提供的一种生成N个加噪子图像的原理示意图一;
图15B为本申请实施例提供的一种生成N个加噪子图像的原理示意图二;
图15C为本申请实施例提供的一种图像显示方法流程图三;
图15D为本申请实施例提供的一种图像显示方法流程图四;
图16为本申请实施例提供的一种图像显示方法的原理示意图一;
图17为本申请实施例提供的一种敏感区域及其N个加噪子图像的实例示意图二;
图18为本申请实施例提供的一种图像显示方法的原理示意图二;
图19A为本申请实施例提供的一种显示界面实例示意图九;
图19B为本申请实施例提供的一种第二界面的加噪原理示意图;
图20为本申请实施例提供的一种拍摄图像的原理示意图一;
图21为本申请实施例提供的一种拍摄图像的原理示意图二;
图22为本申请实施例提供的一种终端的结构组成示意图一;
图23为本申请实施例提供的一种终端的结构组成示意图二。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供的一种图像显示方法可以应用于终端显示图像的过程中。本申请实施例中的图像可以包括图片、视频中的图像和终端的应用界面等终端可以显示的图像。
本申请实施例中,终端在进入加噪模式后,可以调整终端显示图像所采用的输出帧率和屏幕刷新率,采用调整后的输出帧率和屏幕刷新率逐帧输出图像的多帧加噪子图像。即使偷拍设备拍摄终端的显示内容,其拍摄到的也是上述加噪后的一帧子图像,而不能获得完整的一帧图像;因此,可以有效保护终端的显示内容不被偷拍,减少终端的显示内容泄露的可能性。本申请实施例中的加噪模式是指:终端执行本申请实施例的方法时该终端的工作模式。终端工作在上述加噪模式时,可以执行本申请实施例的方法,对终端显示的图像进行加噪处理。该加噪模式也可以称为加噪显示模式或者 图像保护模式等,本申请实施例对此不作限制。
举例来说,本申请实施例中的终端可以为便携式终端(如图1所示的手机100)、笔记本电脑、个人计算机(Personal Computer,PC)、可穿戴电子设备(如智能手表)、平板电脑、自动柜员机(Automated Teller Machine,ATM)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、车载电脑等具备显示功能(包括显示屏)的设备,以下实施例对该终端的具体形式不做特殊限制。
如图1所示,以手机100作为上述终端举例,手机100具体可以包括:处理器101、射频(Radio Frequency,RF)电路102、存储器103、触摸屏104、蓝牙装置105、一个或多个传感器106、WiFi装置107、定位装置108、音频电路109、外设接口110以及电源装置111等部件。这些部件可通过一根或多根通信总线或信号线(图1中未示出)进行通信。本领域技术人员可以理解,图1中示出的硬件结构并不构成对手机的限定,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图1对手机100的各个部件进行具体的介绍:
处理器101是手机100的控制中心,利用各种接口和线路连接手机100的各个部分,通过运行或执行存储在存储器103内的应用程序,以及调用存储在存储器103内的数据,执行手机100的各种功能和处理数据。在一些实施例中,处理器101可包括一个或多个处理单元。本申请实施例中的处理器101可以包括中央处理器(Central Processing Unit,CPU)和图形处理器(Graphics Processing Unit,GPU)。
射频电路102可用于无线信号的接收和发送。特别地,射频电路102可以将基站的下行数据接收后,给处理器101处理;另外,将涉及上行的数据发送给基站。通常,射频电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频电路102还可以通过无线通信和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统、通用分组无线服务、码分多址、宽带码分多址、长期演进等。
存储器103用于存储应用程序以及数据,处理器101通过运行存储在存储器103的应用程序以及数据,执行手机100的各种功能以及数据处理。存储器103主要包括存储程序区以及存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等);存储数据区可以存储根据使用手机100时所创建的数据(比如音频数据、电话本等)。此外,存储器103可以包括高速随机存取存储器(Random Access Memory,RAM),还可以包括非易失存储器,例如磁盘存储器件、闪存器件或其他易失性固态存储器件等。存储器103可以存储各种操作系统。上述存储器103可以是独立的,通过上述通信总线与处理器101相连接;存储器103也可以和处理器101集成在一起。
触摸屏104具体可以包括触控板104-1和显示器104-2。
其中,触控板104-1可采集手机100的用户在其上或附近的触摸事件(比如用户使用手指、触控笔等任何适合的物体在触控板104-1上或在触控板104-1附近的操作),并将采集到的触摸信息发送给其他器件(例如处理器101)。其中,用户在触控板104-1附近的触摸事件可以称之为悬浮触控;悬浮触控可以是指,用户无需为了选择、移动 或拖动目标(例如图标等)而直接接触触控板,而只需用户位于设备附近以便执行所想要的功能。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型来实现触控板104-1。
显示器(也称为显示屏)104-2可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单。可以采用液晶显示器、有机发光二极管等形式来配置显示器104-2。触控板104-1可以覆盖在显示器104-2之上,当触控板104-1检测到在其上或附近的触摸事件后,传送给处理器101以确定触摸事件的类型,随后处理器101可以根据触摸事件的类型在显示器104-2上提供相应的视觉输出。
需要说明的是,虽然在图1中,触控板104-1与显示屏104-2是作为两个独立的部件来实现手机100的输入和输出功能,但是在某些实施例中,可以将触控板104-1与显示屏104-2集成而实现手机100的输入和输出功能。可以理解的是,触摸屏104是由多层的材料堆叠而成,本申请实施例中只展示出了触控板(层)和显示屏(层),其他层在本申请实施例中不予记载。另外,触控板104-1可以全面板的形式配置在手机100的正面,显示屏104-2也可以全面板的形式配置在手机100的正面,这样在手机的正面就能够实现无边框的结构。
另外,手机100还可以具有指纹识别功能。例如,可以在手机100的背面(例如后置摄像头的下方)配置指纹采集器件(即指纹识别器)112,或者在手机100的正面(例如触摸屏104的下方)配置指纹采集器件112。又例如,可以在触摸屏104中配置指纹采集器件112来实现指纹识别功能,即指纹采集器件112可以与触摸屏104集成在一起来实现手机100的指纹识别功能。在这种情况下,该指纹采集器件112配置在触摸屏104中,可以是触摸屏104的一部分,也可以其他方式配置在触摸屏104中。本申请实施例中的指纹采集器件112的主要部件是指纹传感器,该指纹传感器可以采用任何类型的感测技术,包括但不限于光学式、电容式、压电式或超声波传感技术等。
手机100还可以包括蓝牙装置105,用于实现手机100与其他设备(例如手机、智能手表等)之间的短距离的数据交换。本申请实施例中的蓝牙装置可以是集成电路或者蓝牙芯片等。
上述一个或多个传感器106包括:用于检测用户对侧边的按压操作和用户在侧边的滑动操作的传感器。
当然,上述一个或多个传感器106包括但不限于上述传感器,例如,该一个或多个传感器106还可以包括光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触摸屏104的显示器的亮度,接近传感器可在手机100移动到耳边时,关闭显示器的电源。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机100还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
WiFi装置107,用于为手机100提供遵循WiFi相关标准协议的网络接入,手机100可以通过WiFi装置107接入到WiFi热点,进而帮助用户收发电子邮件、浏览网 页和访问流媒体等,它为用户提供了无线的宽带互联网访问。在其他一些实施例中,该WiFi装置107也可以作为WiFi无线接入点,可以为其他设备提供WiFi网络接入。
定位装置108,用于为手机100提供地理位置。可以理解的是,该定位装置108具体可以是全球定位系统(Global Positioning System,GPS)或北斗卫星导航系统、俄罗斯GLONASS等定位系统的接收器。
其中,定位装置108在接收到上述定位系统发送的地理位置后,将该信息发送给处理器101进行处理,或者发送给存储器103进行保存。在另外的一些实施例中,该定位装置108还可以是辅助全球卫星定位系统(Assisted Global Positioning System,AGPS)的接收器,AGPS系统通过作为辅助服务器来协助定位装置108完成测距和定位服务,在这种情况下,辅助定位服务器通过无线通信网络与设备例如手机100的定位装置108(即GPS接收器)通信而提供定位协助。
在另外的一些实施例中,该定位装置108也可以是基于WiFi热点的定位技术。由于每一个WiFi热点都有一个全球唯一的媒体访问控制(Media Access Control,MAC)地址,设备在开启WiFi的情况下即可扫描并收集周围的WiFi热点的广播信号,因此可以获取到WiFi热点广播出来的MAC地址;设备将这些能够标示WiFi热点的数据(例如MAC地址)通过无线通信网络发送给位置服务器,由位置服务器检索出每一个WiFi热点的地理位置,并结合WiFi广播信号的强弱程度,计算出该设备的地理位置并发送到该设备的定位装置108中。
音频电路109、扬声器113、麦克风114可提供用户与手机100之间的音频接口。音频电路109可将接收到的音频数据转换后的电信号,传输到扬声器113,由扬声器113转换为声音信号输出;另一方面,麦克风114将收集的声音信号转换为电信号,由音频电路109接收后转换为音频数据,再将音频数据输出至RF电路102以发送给另一手机,或者将音频数据输出至存储器103以便进一步处理。
外设接口110,用于为外部的输入/输出设备(例如键盘、鼠标、外接显示器、外部存储器、用户识别模块卡等)提供各种接口。例如通过通用串行总线(Universal Serial Bus,USB)接口与鼠标连接,通过用户识别模块卡卡槽上的金属触点与电信运营商提供的用户识别模块卡(Subscriber Identification Module,SIM)卡进行连接。外设接口110可以被用来将上述外部的输入/输出设备耦接到处理器101和存储器103。
在本发明实施例中,手机100可通过外设接口110与设备组内的其他设备进行通信,例如,通过外设接口110可接收其他设备发送的显示数据进行显示等,本发明实施例对此不作任何限制。
手机100还可以包括给各个部件供电的电源装置111(比如电池和电源管理芯片),电池可以通过电源管理芯片与处理器101逻辑相连,从而通过电源装置111实现管理充电、放电、以及功耗管理等功能。
尽管图1未示出,手机100还可以包括摄像头(前置摄像头和/或后置摄像头)、闪光灯、微型投影装置、近场通信(Near Field Communication,NFC)装置等,在此不再赘述。
以下实施例中的方法均可以在具有上述硬件结构的手机100中实现。
以下对本申请实施例中涉及的术语进行介绍:
输出帧率:如第一帧率和第二帧率,是指终端的显示器每秒显示帧数(Frames per Second,FPS)。
屏幕刷新率:如第一屏幕刷新率和第二屏幕刷新率,是指终端的显示器每秒钟刷新屏幕的次数。
本申请实施例中的输出帧率与屏幕刷新率可以相同。此时,以终端播放视频的场景为例,该终端的GPU每次向显示器输出的画面均不同,即显示器每次所显示的一帧图像均不同。
当然,输出帧率与屏幕刷新率也可以不同。例如,当屏幕刷新率B是输出帧率A的2倍时,GPU每两次向显示器输出的图像是同一帧图像。
本申请实施例这里,分情况对本申请实施例提供的图像显示方法所应用的具体场景进行介绍:
一般而言,当用户操作终端(如图1所示的手机100)显示私密信息时,如果其他用户使用其他终端偷拍手机100的屏幕,则可能会造成该用户的隐私泄露,给该用户带来财产或者其他损失。以下对本申请实施例所涉及的终端显示私密信息的场景进行介绍:
场景一:当终端显示的界面包括密码输入框时,如果其他用户使用其他终端偷拍终端的屏幕,则会导致终端所显示的密码输入框中的密码泄露。
示例性的,本申请实施例中包括密码输入框的界面可以包括:手机或者平板电脑等便携式终端中的应用(如微信、支付宝、QQ和电子邮箱等应用)的账号登录界面,例如,如图2所示的微信的账号登录界面,手机或者平板电脑等便携式终端的电子支付界面(如支付宝、微信或者银行类应用的支付界面),以及ATM的密码输入界面等。
场景二:当终端显示私密文档(例如,图3中的(a)所示的私密文档1)或者私密图片(例如,图3中的(b)所示的图片1),或者终端播放私密视频(例如图4所示的视频1)时,如果其他用户使用其他终端偷拍终端的屏幕,则会导致终端所显示的私密文档、私密图片或者私密视频泄露。
场景三:当终端显示账户金额(例如,图5所示的账户管理界面包括账户余额)时,如果其他用户使用其他终端偷拍终端的屏幕,则会导致终端所显示的账户金额泄露。
场景四:当终端显示聊天界面(例如,图6中的(a)所示的微信聊天界面)或者邮件界面(例如,图6中的(b)所示的邮件界面)时,如果其他用户使用其他终端偷拍终端的屏幕,则会导致终端所显示的通讯内容泄露。
场景五:本申请实施例中的终端是影院的放映设备,影院的放映设备在投放电影时,如果观影人员使用其他终端偷拍荧幕,则会导致荧幕中所播放的电影被录制。
本申请实施例提供的图像显示方法的执行主体可以为图像显示装置,该图像显示装置可以为上述终端中的任一种(例如,图像显示装置可以为图1所示的手机100);或者,该图像显示装置还可以为该终端的中央处理器(英文:Central Processing Unit,简称:CPU),或者该终端中的用于执行图像显示方法的控制模块。本申请实施例中以终端执行图像显示方法为例,说明本申请实施例提供的图像显示方法。
本申请实施例提供一种图像显示方法,如图7所示,该图像显示方法包括S701-S702:
S701、终端在显示屏上以第一屏幕刷新率显示第一图像,第一图像的输出帧率为第一帧率。
S702、在检测到满足预设条件后,终端在显示屏上显示第二图像,其中,第二图像的至少一部分被叠加噪声参数,该至少一部分以第二屏幕刷新率显示,至少一部分的输出帧率为第二帧率,该第二图像包括多帧加噪子图像。
其中,第二帧率大于第一帧率,第二屏幕刷新率大于第一屏幕刷新率。
终端可以在检测到满足预设条件后,进入加噪模式;在进入加噪模式后终端可以在显示屏显示第二图像。上述第一帧率是终端进入加噪模式之前,显示图像时所采用的输出帧率;上述第一屏幕刷新率是终端进入加噪模式之前,显示图像时所采用的屏幕刷新率。
本申请实施例中的噪声参数用于对图像进行加噪处理,以得到加噪后的图像。具体的,该加噪参数可以叠加在图像的像素点的像素值中,以改变像素点的像素值得到加噪子图像,以达到对图像加噪的目的。
在本申请实施例的实现方式(1)中,上述检测到满足预设条件可以为终端检测到用户对加噪选项的开启操作。具体的,S702可以包括S702a:
S702a、终端响应于对加噪选项的开启操作,进入加噪模式,在显示屏上显示第二图像。
其中,本申请实施例中的加噪选项可以为:终端提供的方便用户操作终端以进入加噪模式的一个用户接口。例如,该加噪选项可以是设置界面中的一个选项;或者,该加噪选项可以是终端显示的通知栏中的一个开关按钮。其中,该加噪选项还可以称为加噪按钮或者加噪显示选项等,本申请实施例对此不作限制。
示例性的,以终端是手机100为例。如图8中的(a)所示,手机100的设置界面中可以包括加噪选项“加噪显示”801。手机100可以响应于用户对该加噪选项“加噪显示”801的开启操作,进入加噪模式。并且,手机100响应于用户对加噪选项“加噪显示”801的点击操作,可以显示图8中的(b)所示的加噪控制界面802,该加噪控制界面802中包括多个类型的应用的选项,如银行类应用的选项、支付类应用的选项和通讯类应用的选项等;或者,手机100响应于用户对加噪选项“加噪显示”801的点击操作,可以显示图8中的(c)所示的加噪控制界面803,该加噪控制界面803中包括多个应用的选项,如支付宝的选项、共享单车的选项、招商银行的选项和淘宝的选项等。
其中,手机100可以响应于用户对加噪控制界面802和加噪控制界面803中各个选项的开启操作,在手机100进入加噪模式后,执行本申请实施例提供的图像显示方法显示该开启操作对应的应用的图像。
或者,当终端显示任一界面(如桌面或者任一应用的界面)时,该终端可以响应于用户对下拉菜单中加噪选项“加噪显示”的点击操作,进入或者退出上述加噪模式。例如,如图9所示的下拉菜单901中包括加噪选项“加噪显示”,手机100响应于用户对该加噪选项“加噪显示”的开启操作,进入加噪模式。
在本申请实施例的实现方式(2)中,上述检测到满足预设条件可以为第二图像包括敏感特征。具体的,S702可以包括S702b:
S702b、在第二图像中包括敏感特征时,终端自动进入加噪模式,在显示屏上显示所述第二图像。
其中,本申请实施例中的敏感特征可以包括:预设控件、货币符号和预设文字中的至少一项。其中,预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
例如,上述货币符号可以是各个国家的货币符号,例如,人民币符号¥、美元符号$和欧元符号€等。上述预设文字包括但不限于余额、密码、工资和账户等,例如,预设文字还可以包括图3中的(a)所示的"私密文档"。
需要说明的是,本申请实施例中的敏感特征包括但不限于上述所列举的特征。例如,敏感特征还可以包括预设格式的信息,如银行卡号、身份证号、银行卡密码和邮箱地址等。
可选的,在上述实现方式(2)中,终端确定第二图像中包括敏感特征的方法还可以包括:当第二图像是终端中预设类型的应用的图像、加密文档的图像、加密图片的图像、私密视频的图像时终端可以确定该第二图像中包括敏感特征。
可选的,在上述实现方式(2)中,终端确定第二图像中包括敏感特征的方法还可以包括:终端识别待显示的第二图像,获得第二图像中包括的一个或多个图像特征,将获得的一个或多个图像特征与预先保存的敏感特征进行对比,当获得的一个或多个图像特征中包括与敏感特征匹配的图像特征时,终端则可以确定第二图像中包括敏感特征。
在本申请实施例的实现方式(3)中,上述检测到满足预设条件可以为第二图像是预设类型的应用的界面。具体的,S702可以包括S702c:
S702c、在显示预设类型的应用的界面时,终端自动进入加噪模式,在显示屏上显示第二图像。
在本申请实施例中,预设类型的应用可以包括银行类应用(如招商银行APP和中国银行APP)、支付类应用(如支付宝和微信等)和通讯类应用(如电子邮件,以及微信和QQ等即时通讯应用)中的至少一项。
上述预设类型的应用可以由用户在终端中设置。例如,用户可以在图8中的(b)所示的加噪控制界面802设置上述预设类型的应用。与实现方式(1)中“手机100在进入加噪模式后,执行本申请实施例提供的图像显示方法显示加噪控制界面802中被开启的应用的界面”不同的是:在实现方式(2)中,手机100可以响应于用户对加噪控制界面802中各个选项的开启操作,在手机100显示该开启操作对应的应用的图像时,进入加噪模式。例如,当用户对支付类应用执行加噪显示开启操作后,在手机100显示支付宝应用的界面时,手机100进入加噪模式。
在本申请实施例的实现方式(4)中,上述检测到满足预设条件可以为当前场景信息满足预设条件。具体的,上述S702可以包括S702d:
S702d、终端在当前场景信息满足预设条件时,自动进入加噪模式。
当前场景信息包括时间信息、地址信息和环境信息中的至少一项。其中,时间信息用于指示当前时间,地址信息用于指示终端的当前所在的位置,如家、公司和商场等,终端可以采用现有定位方法确定终端当前所在的位置,现有定位方法包括但不限于GPS定位和WiFi定位。上述环境信息可以用于指示终端周围的人数,以及终端周围是否包括陌生人等。终端可以通过声音识别或通过摄像头采集图像来确定终端周围的人数,以及终端周围是否包括陌生人。
需要说明的是,本申请实施例中,终端进入加噪模式的方式包括但不限于上述所列举的方式。例如,终端可以响应于用户输入的预设手势,开启上述加噪模式。也就是说,当用户想要控制终端显示私密图像,并且想要避免由于该私密图像被其他设备偷拍而泄露时,无论终端当前显示何种界面,用户都可以通过预设手势控制终端开启加噪模式。即终端可以随时接收并响应于用户输入的预设手势,进入加噪模式。
例如,如图10中的(a)所示,手机100可以接收用户在手机100的桌面1001输入的“S型手势”,进入加噪模式。可选的,手机100响应于用户在手机100的桌面1001输入的“S型手势”可以显示图10中的(b)所示的模式提醒窗1002,该模式提醒窗1002用于提醒用户:手机已进入加噪模式。
可以理解,终端显示图片、视频或者终端的应用界面时,都是由终端的显示屏根据其输出帧率和屏幕刷新率一帧一帧的输出图像。即本申请实施例中的第一图像和第二图像都可以是一帧图像。
本申请实施例提供的图像显示方法,终端可以在检测到满足预设条件后,以第二屏幕刷新率显示包括多帧加噪子图像的第二图像的至少一部分(该至少一部分被叠加噪声参数),该至少一部分的输出帧率为第二帧率。并且,第二屏幕刷新率大于第一屏幕刷新率,第二帧率大于第一帧率。如此,第二图像的至少一部分的图像便可以分为多帧的加噪子图像逐帧输出,偷拍设备偷拍终端的屏幕拍到的是加噪子图像,可以减少终端的显示内容泄露的可能性,有效保护终端的显示内容。
在本申请实施例的第一种应用场景中,第二图像的至少一部分可以为第二图像中至少一个敏感区域(包括敏感特征的区域)。例如,图11中的(a)所示的第二图像1101的至少一部分为第二图像1101的一个敏感区域S;或者,图11中的(b)所示的第二图像1102的至少一部分为第二图像1102的敏感区域S1和敏感区域S2。
在第一种应用场景中,第二图像的至少一部分被叠加噪声参数,具体可以为:第二图像中的至少一个敏感区域被叠加噪声参数。例如,图11中的(a)所示的敏感区域S被叠加噪声参数。上述第二图像包括多帧加噪子图像,具体为:第二图像中的至少一个敏感区域的图像包括多帧加噪子图像(如N帧加噪子图像,N为大于或者等于2的整数)。其中,一个敏感区域的多帧加噪子图像是对该敏感区域的图像叠加噪声参数得到的。
其中,第二图像中除敏感区域之外的其他区域称为非敏感区域,非敏感区域以第二屏幕刷新率显示,非敏感区域的输出帧率为第二帧率;或者,非敏感区域以第一屏幕刷新率显示,非敏感区域的输出帧率为第一帧率。
当非敏感区域以第二屏幕刷新率显示,非敏感区域的输出帧率为第二帧率时,终端可以同一屏幕刷新率(即第二屏幕刷新率)和帧率(第二帧率)显示第二图像的全部内容,对显示屏的性能要求大大降低,不需要显示屏在不同显示区域支持不同的刷新率。当非敏感区域以第一屏幕刷新率显示,非敏感区域的输出帧率为第一帧率时,只需要对敏感区域进行处理和调整屏幕刷新率,可以低功耗地实现防偷拍的效果。
本申请实施例这里,对上述第一种应用场景中,终端进入加噪模式后,在敏感区域输出N帧加噪子图像的方法进行详细说明:
其中,终端在进入加噪模式后,在显示屏显示第二图像之前,可以先确定出第二图像的敏感区域。具体的,如图12所示,图7中的S702可以包括S1201-S1203:
S1201、在检测到满足预设条件后,进入加噪模式,确定第二图像的至少一个敏感区域。
在第一种应用场景中,如图11中的(a)所示,终端可以从第二图像1101中确定出一个敏感区域S;如图11中的(b)所示,终端可以从第二图像1102中确定出敏感区域S1和敏感区域S2两个敏感区域。
可选的,在一种实现方式中,终端可以识别第二图像;当识别到该第二图像中包括敏感特征时,则根据敏感特征在第二图像中的位置,确定出一个或多个敏感区域。具体的,上述S1201可以包括S1201a-S1201b:
S1201a、终端确定第二图像中包括敏感特征。
其中,当第二图像是终端中预设类型的应用的图像、加密文档的图像、加密图片的图像、私密视频的图像时终端可以确定该第二图像中包括敏感特征。
其中,终端可以识别待显示的第二图像,获得第二图像中包括的一个或多个图像特征;然后,将获得的一个或多个图像特征与预先保存的敏感特征进行对比;当获得的一个或多个图像特征中包括与敏感特征匹配的图像特征时,终端则可以确定第二图像中包括敏感特征。其中,终端可以预先保存多个敏感特征。
S1201b、终端根据敏感特征在第二图像中的位置,确定第二图像的至少一个敏感区域。
当终端确定第二图像中包括敏感特征时,则可以确定出该敏感特征在第二图像中的位置。然后,终端可以根据确定出的位置将第二图像中包括该敏感特征的区域确定为敏感区域。
可以理解,第二图像中可以包括一个或多个敏感特征,因此终端根据这一个或多个敏感特征可以确定出至少一个敏感区域。
举例来说,由于图2所示的显示界面包括密码输入框,手机100可以确定该显示界面包括敏感特征,并根据密码输入框在图像中的位置确定敏感区域201。由于图5所示的显示界面包括人民币符号¥,手机100可以确定该显示界面包括敏感特征,并根据人民币符号¥在图像中的位置确定敏感区域501。由于图6中的(a)所示的微信聊天界面包括预设文字“密码”,手机100可以确定该显示界面包括敏感特征,并根据预设文字“密码”在图像中的位置确定敏感区域601。由于电子邮件是预设类型的应用,因此手机100可以确定图6中的(b)所示的敏感区域602是邮件的邮件正文。
需要说明的是,当上述第二图像是加密文档的图像、加密图片的图像或者私密视频的图像时,由于敏感特征分布在这一帧图像的整个区域,因此这一帧图像的整个区域都需要进行加噪显示。因此,在这种情况下,终端确定出的至少一个敏感区域为该第二图像的整个区域。例如,如图3中的(a)所示,手机100显示的文档1是私密文档,那么手机100所显示的文档1的图像的整个区域均为敏感区域。如图3中的(b)所示,手机100显示的图片1是私密图片,那么手机100所显示的图片1的图像的整个区域均为敏感区域。如图4所示,手机100播放的视频1是私密视频,那么手机100所显示的视频1的图像的整个区域均为敏感区域。
可选的,在另一种实现方式中,为了更加清楚的识别出第二图像中的敏感区域,终端可以将第二图像分割成M个子区域,识别每个子区域的图像,以判断对应子区域是否为敏感区域。具体的,上述S1201可以包括S1201c-S1201e:
S1201c、终端将第二图像分割成M个子区域,M≥2。
示例性的,本申请实施例中的M可以是预配置的固定值。例如,如图13中的(a)所示,M=6,终端可以将第二图像1301分割成6个子区域;如图13中的(b)所示,M=4,终端可以将第二图像1302分割成4个子区域;如图13中的(c)所示,M=3,终端可以将第二图像1303分割成3个子区域。
或者,M可以是根据终端的第一参数确定的,该第一参数包括终端的处理能力和终端的剩余电量。终端的处理能力具体可以为该终端的处理器的处理能力,终端的处理器可以包括CPU和GPU。其中,处理器的处理能力可以包括处理器的主频、核数(如多核处理器)、位数和缓存等参数。
需要说明的是,终端可以将第二图像平均分为M个子区域,即M个子区域的大小相同,例如,图13中的(b)所示的4个子区域的大小相同;或者,M个子区域的大小不同,例如,图13中的(a)所示的6个子区域的大小不同。
可以理解,本申请实施例的方法所提供的图像显示功能可以在一个应用中实现,终端可以安装该应用以执行本申请实施例的方法。不同的终端的处理器不同,其处理能力也不同;因此,本申请实施例中针对不同的处理器,M的取值不同。并且,由于一个终端的处理器的处理能力是一定的,而终端的剩余电量会发生变化;因此,针对处理能力相同的终端,M的取值取决于该终端的剩余电量。具体的,终端的处理能力越高M越大;终端的处理能力一定的情况下,剩余电量越多M越大。例如,如表1所示,为本申请实施例提供的一种M与终端的处理能力和剩余电量的关系表实例。
表1:M与终端的处理能力和剩余电量的对应关系(原表格为图像,具体取值见下文说明)
其中,表1中的处理器1-处理器n的处理能力越来越高。如表1所示,当终端的剩余电量处于[0,10%]区间内时,由于处理器n的处理能力大于处理器2的处理能力,因此包括处理器n的终端可以将一帧图像分割为6个子区域,而包括处理器2的终端可以将一帧图像分割为4个子区域。如表1所示,当终端的处理能力一定(如终端的处理器为处理器1)时,如果终端的剩余电量处于(11%,30%]区间,那么该终端可以将一帧图像分割为3个子区域,如果终端的剩余电量处于(70%,100%]区间,那么该终端可以将一帧图像分割为6个子区域。即终端的处理能力一定的情况下,剩余电量越多M越大。
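"处理能力越高、剩余电量越多,M越大"的选择逻辑可示意如下;其中电量区间与返回值仅是与表1风格一致的假设数值,并非表1的原始数据:

```python
def choose_m(capability_rank, battery_ratio):
    """根据处理器能力等级(数值越大能力越强)和剩余电量比例选择子区域数 M。
    区间划分与数值均为示意性假设。"""
    if battery_ratio <= 0.10:
        base = 2
    elif battery_ratio <= 0.30:
        base = 3
    elif battery_ratio <= 0.70:
        base = 4
    else:
        base = 6
    # 处理能力越强,同等电量下可分割更多子区域
    return base + max(0, capability_rank - 1)

m_low = choose_m(capability_rank=1, battery_ratio=0.2)   # 3
m_high = choose_m(capability_rank=3, battery_ratio=0.9)  # 8
```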
S1201d、终端识别M个子区域的图像内容以提取每个子区域的图像特征。
其中,终端识别M个子区域的图像内容以提取每个子区域的图像特征的方法,可以参考常规技术中终端识别图像以提取图像特征的方法,本申请实施例这里不予赘述。
可以理解,终端执行S1201d可以提取出M个子区域中每个子区域的图像特征,然后,终端可以针对每个子区域执行S1201e:
S1201e、当一个子区域的图像特征包括敏感特征时,终端确定该子区域为敏感区域。
其中,当上述M个子区域中多个子区域的图像特征均包括敏感特征时,终端便可以确定这多个子区域均为敏感区域。
可以理解,终端执行S1201可以确定出第二图像的至少一个敏感区域,然后针对至少一个敏感区域中的每个敏感区域执行S1202-S1203:
S1202、终端根据敏感区域的图像生成N帧第一加噪子图像。
在第一种实现方式中,本申请实施例中的N可以是预配置的固定值。N可以是大于2的任一自然数。例如,N=4。
在第二种实现方式中,为了避免当N为预配置的固定值时,偷拍设备追踪到终端对图像进行操作的规律确定出该固定值,对偷拍的加噪后的图像进行还原处理;本申请实施例中的N可以在一定范围内随机变化。例如,终端在第一次显示第二图像时,为第二图像中的敏感区域a的图像生成3帧第一加噪子图像,在第二次显示第二图像时,为第二图像中的敏感区域a的图像生成4帧第一加噪子图像。或者,终端在第一预设时间(如早上午8:00-9:00)显示第二图像时,为第二图像中的敏感区域b的图像生成4帧第一加噪子图像;在第二预设时间(如上午10:00-12:00)显示第二图像时,为第二图像中的敏感区域b的图像生成2帧第一加噪子图像。
在第三种实现方式中,N可以是根据终端的剩余电量确定的。由于N越大终端显示的第一加噪子图像越多,终端显示图像的耗电量越大;因此,终端可以根据剩余电量确定N的取值。例如,当终端的剩余电量大于等于第一阈值时,终端可以根据敏感区域的图像生成N1帧第一加噪子图像;当终端的剩余电量小于第一阈值时,终端可以根据敏感区域的图像生成N2帧第一加噪子图像,N1>N2。
可选的,终端不仅可以根据上述剩余电量大于等于第一阈值的取值范围,以及剩余电量小于第一阈值的取值范围,确定生成的第一加噪子图像的帧数;终端可以对剩余电量的取值范围进行更加细致的划分,并保存N与剩余电量的对应关系。例如,如表2所示,为本申请实施例提供的一种N与终端的剩余电量的关系表实例。
表2:N与终端的剩余电量的对应关系(原表格为图像)
其中,如表2所示,终端的剩余电量越多,N的取值越大。需要说明的是,表2仅以示例方式给出N与终端的剩余电量的对应关系,本申请实施例中N的取值包括但不限于表2所示的值。
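上文"剩余电量大于等于第一阈值时生成N1帧、小于第一阈值时生成N2帧(N1>N2)"的逻辑可示意为如下函数(阈值0.5与帧数6、3均为假设值):

```python
def frames_for_battery(battery_ratio, threshold=0.5, n1=6, n2=3):
    """剩余电量不低于阈值时采用较大的帧数 N1,否则采用较省电的 N2(N1 > N2)。"""
    assert n1 > n2 >= 2
    return n1 if battery_ratio >= threshold else n2

high_battery_n = frames_for_battery(0.8)   # 6
low_battery_n = frames_for_battery(0.2)    # 3
```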
在第四种实现方式中,N可以是根据第二图像的图像类型确定的。其中,图像类型可以指示第二图像是动态图像或者静态图像。本申请实施例中的动态图像可以为视频中的一帧图像,该动态图像的显示时间较短,因此N的取值可以较小。静态图像可以包括终端的桌面图像、应用的界面图像和终端显示的图片等,该静态图像的显示时间较长,因此N的取值可以较大。例如,当第二图像为静态图像时,终端为第二图像生成N1帧加噪子图像;当第二图像为动态图像时,终端为第二图像生成N2帧加噪子图像。
需要说明的是,在上述四种实现方式中,当第二图像中包括多个敏感区域时,为多个敏感区域生成的第一加噪子图像的帧数可以相同,也可以不同。例如,以图11中的(b)所示的第二图像1102中包括两个敏感区域(敏感区域S1和敏感区域S2)为例。终端为敏感区域S1和敏感区域S2生成的第一加噪子图像的帧数可以均为N;或者,如图17所示,终端可以为敏感区域S1生成N1帧第一加噪子图像,为敏感区域S2生成N2帧第一加噪子图像,N1不等于N2。
在第五种实现方式中,N可以是根据敏感区域的敏感程度确定的。即N可以是根据敏感区域的敏感程度确定的。其中,该终端中还可以保存每个敏感特征的敏感程度,不同敏感特征的敏感程度不同。
其中,终端可以根据一个敏感区域中的敏感特征的敏感程度,确定该敏感区域的敏感程度。具体的,当一个敏感区域中包括一个敏感特征时,该敏感区域的敏感程度是该敏感特征的敏感程度;当一个敏感区域包括多个敏感特征时,该敏感区域的敏感程度是这多个敏感特征的敏感程度之和。其中,不同类型的敏感特征的敏感程度不同。例如,预设文字"密码"的敏感程度高于货币符号(如¥)的敏感程度。
终端中可以保存敏感程度与N的对应关系。例如,如表3所示,为本申请实施例提供的一种N与敏感程度的关系表实例。
表3(依据下文描述重建,原表格为图像)

敏感程度区间:[0,a]  (a,b]  ……  (e,f]  (f,g]
对应的N取值: 2     3     ……    6     8
其中,表3所示的敏感程度a-敏感程度g逐渐增大。举例来说,假设敏感区域a仅包括敏感特征"¥",敏感区域a的敏感程度在[0,a]区间;敏感区域b包括敏感特征"¥"和"密码",敏感区域b的敏感程度在(a,b]区间。
如表3所示,当一个敏感区域的敏感程度在[0,a]区间时,N=2;当一个敏感区域的敏感程度在(a,b]区间时,N=3;当一个敏感区域的敏感程度在(e,f]区间时,N=6;当一个敏感区域的敏感程度在(f,g]区间时,N=8。由表3可知,一个敏感区域的敏感程度越高,为该敏感区域生成的第一加噪子图像的帧数N越大。
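按表3"敏感程度越高,N越大"的查表逻辑可示意如下;区间端点a-g取为假设数值,N序列中的2、3、6、8来自表3,中间的取值为假设:

```python
def frames_for_sensitivity(sensitivity, bounds, frame_counts):
    """bounds 为各敏感程度区间的右端点(递增),frame_counts 为对应的帧数 N(非递减)。"""
    for bound, n in zip(bounds, frame_counts):
        if sensitivity <= bound:
            return n
    return frame_counts[-1]

bounds = [10, 20, 30, 40, 50, 60, 70]   # 对应 a、b、c、d、e、f、g(假设值)
frame_counts = [2, 3, 4, 5, 5, 6, 8]    # 2、3、6、8 来自表3,中间取值为假设

n_low = frames_for_sensitivity(5, bounds, frame_counts)    # [0,a] 区间 -> 2
n_high = frames_for_sensitivity(65, bounds, frame_counts)  # (f,g] 区间 -> 8
```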
在第六种实现方式中,N可以是根据终端的剩余电量和敏感区域的敏感程度确定的。在这种情况下,终端的剩余电量一定时,敏感区域的敏感程度越高,为该敏感区域生成的第一加噪子图像的帧数N越大;敏感区域的敏感程度一定时,终端的剩余电量越多,为该敏感区域生成的第一加噪子图像的帧数N越大。
需要说明的是,在第五种实现方式和第六种实现方式中,假设一帧图像中包括多个敏感区域:如果多个敏感区域中的两个敏感区域的敏感程度在同一区间,则终端为这两个敏感区域生成的第一加噪子图像的帧数N相同;如果这两个敏感区域的敏感程度在不同区间,则终端为这两个敏感区域生成的第一加噪子图像的帧数N也不同。
本申请实施例中,终端可以采用一组噪声参数 {W_1,W_2,……,W_N} 对子图像进行加噪,根据一个敏感区域的图像生成N帧第一加噪子图像。其中,∑_{n=1}^{N} W_n = 0,W_n 是一个敏感区域的第n个噪声参数。
以图11中的(a)所示的一帧图像1101中包括一个敏感区域S为例,如图14所示,终端可以采用一组噪声参数 {W_1,W_2,……,W_N},根据敏感区域S的图像生成N帧第一加噪子图像。其中,这N帧第一加噪子图像中,第1帧第一加噪子图像对应的噪声参数为 W_1,第2帧第一加噪子图像对应的噪声参数为 W_2,……,第n帧第一加噪子图像对应的噪声参数为 W_n,……,第N帧第一加噪子图像对应的噪声参数为 W_N。
具体的,终端根据一个敏感区域的图像生成N帧第一加噪子图像的方法可以包括S1202a-S1202c,即上述S1202可以包括S1202a-S1202c:
S1202a、终端确定一个敏感区域的图像中的每个像素点的像素值。
示例性的,以图14所示的敏感区域S为例,如图15A所示,敏感区域S的第一行第一个像素点(简称第a1个像素点)的像素值为 A_{a1},第一行第四个像素点(简称第a4个像素点)的像素值为 A_{a4},第六行第一个像素点(简称第f1个像素点)的像素值为 A_{f1}。
S1202b、终端确定一个敏感区域的N个噪声参数,这N个噪声参数之和为零。
在一种实现方式中,上述N个噪声参数 {W_1,W_2,……,W_N} 可以随机取值,只要这N个噪声参数满足 ∑_{n=1}^{N} W_n = 0 即可。
在另一种实现方式中,上述N个噪声参数 {W_1,W_2,……,W_N} 可以符合均匀分布或高斯分布,只要这N个噪声参数满足 ∑_{n=1}^{N} W_n = 0 即可。
其中,终端可以针对N个噪声参数中的每个噪声参数,执行S1202c以计算一帧第一加噪子图像中每个像素点的像素值,得到一帧第一加噪子图像:
S1202c、终端采用公式(1)计算一帧第一加噪子图像中每个像素点的像素值,得到一帧第一加噪子图像。
a_{n,i} = A_i + W_n   公式(1)
其中,A_i 是一个敏感区域的图像中的像素点i的像素值,i∈{1,2,……,Q},Q为一个敏感区域的图像中的像素点的总数;W_n 是一个敏感区域的第n个噪声参数,n∈{1,2,……,N},且 ∑_{n=1}^{N} W_n = 0;a_{n,i} 是像素点i在第n帧第一加噪子图像的像素值。
示例性的,以图14所示的敏感区域S为例,如图15A所示,终端采用上述公式(1)可以计算得到:
第1帧第一加噪子图像中,即n=1时,第一行第一个像素点(即第a1个像素点)的像素值为 a_{1,a1}=A_{a1}+W_1,第一行第四个像素点(简称第a4个像素点)的像素值为 a_{1,a4}=A_{a4}+W_1,第六行第一个像素点(简称第f1个像素点)的像素值为 a_{1,f1}=A_{f1}+W_1。
第2帧第一加噪子图像中,即n=2时,第一行第一个像素点(即第a1个像素点)的像素值为 a_{2,a1}=A_{a1}+W_2,第一行第四个像素点(简称第a4个像素点)的像素值为 a_{2,a4}=A_{a4}+W_2,第六行第一个像素点(简称第f1个像素点)的像素值为 a_{2,f1}=A_{f1}+W_2。
第N帧第一加噪子图像中,即n=N时,第一行第一个像素点(即第a1个像素点)的像素值为 a_{N,a1}=A_{a1}+W_N,第一行第四个像素点(简称第a4个像素点)的像素值为 a_{N,a4}=A_{a4}+W_N,第六行第一个像素点(简称第f1个像素点)的像素值为 a_{N,f1}=A_{f1}+W_N。
需要说明的是,第1帧第一加噪子图像、第2帧第一加噪子图像和第N帧第一加噪子图像中,其他像素点的像素值的计算方法与上述示例相同,本申请实施例这里不予赘述。并且,N帧第一加噪子图像中除第1帧、第2帧和第N帧之外的其他各帧第一加噪子图像的各个像素点的像素值的计算方法,本申请实施例这里也不予赘述。
终端可以依次采用上述一组噪声参数 {W_1,W_2,……,W_N} 中的每个噪声参数(如 W_n)对一帧图像中的一个敏感区域的图像进行加噪处理(即为敏感区域的图像叠加噪声参数),得到包括N帧第一加噪子图像的第二图像。其中,每一帧第一加噪子图像中各个像素点加噪处理所采用的噪声参数相同。例如,如图14所示,第1帧第一加噪子图像中,每个像素点加噪处理所采用的噪声参数均为 W_1。不同帧第一加噪子图像加噪处理所采用的噪声参数不同。例如,如图14所示,第1帧第一加噪子图像加噪处理所采用的噪声参数为 W_1,而第2帧第一加噪子图像加噪处理所采用的噪声参数为 W_2,W_1 与 W_2 不同。
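采用一组噪声参数 {W_1,……,W_N} 按公式(1)为敏感区域逐帧生成第一加噪子图像的过程,可示意如下(区域大小与噪声取值均为假设):

```python
def add_noise_frames(region, noise):
    """region 为敏感区域像素值的二维列表,noise 为和为零的一组噪声参数;
    返回 N 帧第一加噪子图像,第 n 帧所有像素点叠加同一噪声参数 W_n(公式(1))。"""
    return [[[pixel + w for pixel in row] for row in region] for w in noise]

region = [[100, 120], [140, 160]]   # 敏感区域的像素值(假设)
noise = [5, -3, 4, -6]              # 一组和为零的噪声参数(假设)
frames = add_noise_frames(region, noise)
# 第1帧第一个像素点:100 + 5 = 105
```

由于噪声之和为零,任一像素点在N帧上的平均值仍等于其原始像素值。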
需要说明的是,如图15A所示,在本申请实施例中,一帧第一加噪子图像的每个像素点的噪声参数可以相同。
可选的,本申请实施例中,一帧第一加噪子图像中不同的像素点的噪声参数也可以不同。例如,一个敏感区域包括Q个像素点,那么终端可以采用Q组噪声参数对敏感区域的图像进行加噪处理,每组噪声参数满足 ∑_{n=1}^{N} W_{n,i} = 0(i∈{1,2,……,Q});第n帧第一加噪子图像中各个像素点的噪声参数为 {W_{n,1},W_{n,2},……,W_{n,i},……,W_{n,Q}}。
即上述公式(1)可以替换为公式(2):
a_{n,i} = A_i + W_{n,i}      公式(2)
示例性的,以图14所示的敏感区域S为例,如图15B所示,敏感区域S的第一行第一个像素点(简称第a1个像素点)的像素值为 A_{a1},第一行第四个像素点(简称第a4个像素点)的像素值为 A_{a4},第六行第一个像素点(简称第f1个像素点)的像素值为 A_{f1}。如图15B所示,终端采用上述公式(2)可以计算得到:
第1帧第一加噪子图像中,即n=1时,第一行第一个像素点(即第a1个像素点)的像素值为 a_{1,a1}=A_{a1}+W_{1,a1},第一行第四个像素点(简称第a4个像素点)的像素值为 a_{1,a4}=A_{a4}+W_{1,a4},第六行第一个像素点(简称第f1个像素点)的像素值为 a_{1,f1}=A_{f1}+W_{1,f1}。
第2帧第一加噪子图像中,即n=2时,第一行第一个像素点(即第a1个像素点)的像素值为 a_{2,a1}=A_{a1}+W_{2,a1},第一行第四个像素点(简称第a4个像素点)的像素值为 a_{2,a4}=A_{a4}+W_{2,a4},第六行第一个像素点(简称第f1个像素点)的像素值为 a_{2,f1}=A_{f1}+W_{2,f1}。
第N帧第一加噪子图像中,即n=N时,第一行第一个像素点(即第a1个像素点)的像素值为 a_{N,a1}=A_{a1}+W_{N,a1},第一行第四个像素点(简称第a4个像素点)的像素值为 a_{N,a4}=A_{a4}+W_{N,a4},第六行第一个像素点(简称第f1个像素点)的像素值为 a_{N,f1}=A_{f1}+W_{N,f1}。
其中,对每个像素点i,∑_{n=1}^{N} W_{n,i} = 0,i∈{a1,a2,……,f1}。
可以理解,由于对一帧图像中的一个敏感区域的图像进行加噪处理,所采用的至少一组噪声参数中每一组噪声参数之和为零,例如 {W_{1,i},W_{2,i},……,W_{n,i},……,W_{N,i}} 满足 ∑_{n=1}^{N} W_{n,i} = 0;因此,上述N帧第一加噪子图像中的像素点i的像素值的平均值 ā_i 为 A_i。A_i 为敏感区域的像素点i未加噪处理前的像素值。这样,基于人眼视觉的低通效应,人眼无法察觉加噪处理后的图像与未加噪处理前的图像的区别,可以保证在人眼看来加噪处理前后的图像相同,可以保证用户的视觉体验。
具体的,在第1帧第一加噪子图像中,像素点i的像素值为 a_{1,i}=A_i+W_{1,i};在第2帧第一加噪子图像中,像素点i的像素值为 a_{2,i}=A_i+W_{2,i};……;在第n帧第一加噪子图像中,像素点i的像素值为 a_{n,i}=A_i+W_{n,i};……;在第N帧第一加噪子图像中,像素点i的像素值为 a_{N,i}=A_i+W_{N,i}。
那么,N帧第一加噪子图像中像素点i的像素值的平均值为

ā_i = (1/N) ∑_{n=1}^{N} a_{n,i} = (1/N) ∑_{n=1}^{N} (A_i + W_{n,i}) = A_i + (1/N) ∑_{n=1}^{N} W_{n,i} = A_i。
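公式(2)的逐像素噪声及"N帧平均值仍为 A_i"的性质,可以用如下示意代码数值验证(帧数N、像素个数Q与噪声幅度均为假设值):

```python
import random

random.seed(42)

N, Q = 4, 6                                       # 帧数与像素点个数(假设)
A = [100.0, 110.0, 120.0, 130.0, 140.0, 150.0]    # 各像素点加噪前的像素值 A_i

# 为每个像素点 i 生成一组和为零的噪声 {W_{1,i}, ..., W_{N,i}}:
# 前 N-1 帧随机取值,第 N 帧取负的部分和进行补偿
W = [[random.uniform(-5, 5) for _ in range(Q)] for _ in range(N - 1)]
W.append([-sum(W[n][i] for n in range(N - 1)) for i in range(Q)])

# 公式(2):a_{n,i} = A_i + W_{n,i}
frames = [[A[i] + W[n][i] for i in range(Q)] for n in range(N)]

# 每个像素点在 N 帧中的像素值平均值,应等于 A_i
means = [sum(frames[n][i] for n in range(N)) / N for i in range(Q)]
```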
可选的,在另一种实现方式中,上述第i个像素点的N个噪声参数 {W_{1,i},W_{2,i},……,W_{n,i},……,W_{N,i}} 的波动大小与敏感区域的敏感程度成正比。其中,N个噪声参数的波动大小由像素点i在N帧加噪子图像的像素值的方差来表征。
基于上述实例,像素点i在N帧第一加噪子图像的像素值的方差为:

s² = (1/N) ∑_{n=1}^{N} (a_{n,i} - ā_i)² = (1/N) ∑_{n=1}^{N} W_{n,i}²

其中,一个敏感区域的敏感程度越高,终端对该敏感区域的第i个像素点加噪处理所采用的一组噪声参数 {W_{1,i},W_{2,i},……,W_{n,i},……,W_{N,i}} 的波动越大,即像素点i在N帧第一加噪子图像的像素值的方差 s² 越大。
进一步的,受到终端的显示器的硬件限制,终端所显示的图像的像素点(如像素点i)的像素值 A_i 的范围为[0,P];因此,要保证加噪处理后的每一帧第一加噪子图像中每个像素点的像素值的范围均为[0,P],如 0≤a_{n,i}≤P,即 0≤A_i+W_{n,i}≤P,-A_i≤W_{n,i}≤P-A_i。
并且,基于人眼视觉的低通效应,为了保证在人眼看来加噪处理前后的图像相同,加噪后N帧第一加噪子图像中像素点i的像素值的平均值 ā_i 与未加噪处理前的像素点i的像素值 A_i 相同,终端需要在第n帧补偿前面n-1帧的加噪子图像中的噪声。由 ∑_{n=1}^{N} W_{n,i} = 0 可以得出

W_{n,i} = -∑_{k=1}^{n-1} W_{k,i} - ∑_{k=n+1}^{N} W_{k,i}

其中,∑_{k=1}^{n-1} W_{k,i} 是N帧第一加噪子图像中前n-1帧第一加噪子图像加噪处理所采用的噪声参数之和,∑_{k=n+1}^{N} W_{k,i} 是N帧第一加噪子图像中,第n+1帧第一加噪子图像至第N帧第一加噪子图像(共N-n帧)加噪处理所采用的噪声参数之和,N≥n。其中,N=n时,∑_{k=n+1}^{N} W_{k,i} = 0。
由于每一帧第一加噪子图像加噪处理所采用的噪声参数均满足 -A_i≤W_{k,i}≤P-A_i;因此,上述N-n帧第一加噪子图像加噪处理所采用的噪声参数之和 ∑_{k=n+1}^{N} W_{k,i} 应满足以下公式(3):

-(N-n)A_i ≤ ∑_{k=n+1}^{N} W_{k,i} ≤ (N-n)(P-A_i)   公式(3)

由 W_{n,i} = -∑_{k=1}^{n-1} W_{k,i} - ∑_{k=n+1}^{N} W_{k,i} 可以得出 ∑_{k=n+1}^{N} W_{k,i} = -∑_{k=1}^{n-1} W_{k,i} - W_{n,i},代入公式(3)可以得出:

-(N-n)A_i ≤ -∑_{k=1}^{n-1} W_{k,i} - W_{n,i} ≤ (N-n)(P-A_i)

-(N-n)(P-A_i) - ∑_{k=1}^{n-1} W_{k,i} ≤ W_{n,i} ≤ (N-n)A_i - ∑_{k=1}^{n-1} W_{k,i}

也就是说,W_{n,i} 同时满足以下条件(1)和条件(2):
条件(1):-A_i ≤ W_{n,i} ≤ P-A_i;
条件(2):-(N-n)(P-A_i) - ∑_{k=1}^{n-1} W_{k,i} ≤ W_{n,i} ≤ (N-n)A_i - ∑_{k=1}^{n-1} W_{k,i}。
由上述条件(1)和条件(2)可知,一个敏感区域的第n个噪声参数 W_{n,i} 满足公式(4):

max(-A_i, -(N-n)(P-A_i) - ∑_{k=1}^{n-1} W_{k,i}) ≤ W_{n,i} ≤ min(P-A_i, (N-n)A_i - ∑_{k=1}^{n-1} W_{k,i})   公式(4)

其中,max(x,y)表示取x和y中的最大值,min(x,y)表示取x和y中的最小值。
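公式(4)给出的第n帧噪声参数取值范围可以直接实现为如下函数(P=255、A_i=200 等取值仅为示例):

```python
def noise_bounds(A_i, P, N, n, prev_noise):
    """返回第 n 帧(1 <= n <= N)噪声参数 W_{n,i} 的允许区间 [low, high],即公式(4);
    prev_noise 为前 n-1 帧已采用的噪声参数列表。"""
    s = sum(prev_noise)                          # 前 n-1 帧噪声参数之和
    low = max(-A_i, -(N - n) * (P - A_i) - s)
    high = min(P - A_i, (N - n) * A_i - s)
    return low, high

# 8 位显示(P=255)、像素值 A_i=200、共 4 帧:第 1 帧噪声参数的取值范围
low, high = noise_bounds(A_i=200, P=255, N=4, n=1, prev_noise=[])
```

当 n=N 时,区间收缩为单点 -∑_{k=1}^{N-1} W_{k,i},即最后一帧的噪声参数必须完全补偿前面各帧的噪声。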
需要说明的是,本申请实施例中的像素值可以是像素点的颜色分量的颜色值。终端可以针对每个像素点的每个颜色分量的颜色值执行S1202a-S1202c,以得到一帧第一加噪子图像。其中,像素点的颜色分量可以包括红绿蓝(Red Green Blue,RGB)三基色。例如,终端计算第n帧加噪子图像中第i个像素点的像素值的方法可以包括:终端采用 R_{n,i}=R_i+W_{n,i}、G_{n,i}=G_i+W_{n,i} 和 B_{n,i}=B_i+W_{n,i},计算第n帧第一加噪子图像的第i个像素点的颜色分量,R_i、G_i 和 B_i 是所述像素点i加噪处理前的颜色分量的颜色值,R_{n,i} 是 R_i 加噪处理后的颜色值,G_{n,i} 是 G_i 加噪处理后的颜色值,B_{n,i} 是 B_i 加噪处理后的颜色值;终端根据 R_{n,i}、G_{n,i} 和 B_{n,i},确定第n帧第一加噪子图像的像素点i的像素值 a_{n,i}。
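对RGB三个颜色分量分别叠加同一噪声参数的计算可示意为(颜色取值为假设):

```python
def add_noise_rgb(rgb, w):
    """按 R_{n,i}=R_i+W_{n,i}、G_{n,i}=G_i+W_{n,i}、B_{n,i}=B_i+W_{n,i},
    对像素点的三个颜色分量叠加同一噪声参数 W_{n,i}。"""
    r, g, b = rgb
    return (r + w, g + w, b + w)

noisy = add_noise_rgb((200, 120, 64), 10)   # (210, 130, 74)
```

实际实现中,噪声参数需满足公式(4)的取值范围,以保证各颜色值不越界。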
如上述实施例所述,N个噪声参数之和可以为零。可选的,本申请实施例中的一组噪声参数(即N个噪声参数)之和也可以在预设参数区间内。其中,该预设参数区间的上限值和下限值与零的差值小于预设参数阈值。例如,该预设参数阈值可以为0.3或者0.05。以预设参数阈值为0.3为例,上述预设参数区间可以为[-0.3,0.2]。
S1203、终端以第二屏幕刷新率在敏感区域显示N帧第一加噪子图像,N帧第一加噪子图像的输出帧率为第二帧率。
其中,第二帧率是第一帧率的N倍,第二屏幕刷新率是第一屏幕刷新率的N倍。第一帧率是终端进入加噪模式前显示图像时所采用的输出帧率,第一屏幕刷新率是终端进入加噪模式前的屏幕刷新率。
在第一种应用场景的一种实现方式中,非敏感区域以第一屏幕刷新率显示,非敏感区域的输出帧率为第一帧率。如图15C所示,在图12中的S1201之后,本申请实施例的方法还可以包括S1204:
S1204、终端以第一屏幕刷新率显示非敏感区域的图像,非敏感区域的输出帧率为第一帧率。
其中,第二帧率是第一帧率的N倍,第二屏幕刷新率是第一屏幕刷新率的N倍。即终端以第二屏幕刷新率和第二帧率在敏感区域显示N帧第一加噪子图像的同时,以第一屏幕刷新率和第一帧率显示非敏感区域的图像(一帧图像)。在这种实现方式中,非敏感区域的图像没有被叠加噪声。
在第一种应用场景的另一种实现方式中,非敏感区域以第二屏幕刷新率显示,非敏感区域的输出帧率为第二帧率。如图15D所示,在图12中的S1201之后,本申请实施例的方法还可以包括S1501-S1502:
S1501、终端根据非敏感区域的图像生成N帧第二加噪子图像。
S1502、终端以第二屏幕刷新率在非敏感区域显示N帧第二加噪子图像,N帧第二加噪子图像的输出帧率为第二帧率。
其中,终端根据非敏感区域的图像生成N帧第二加噪子图像的方法,可以参考S1202中终端根据敏感区域的图像生成N帧第一加噪子图像的方法,本申请实施例这里不予赘述。
不同的是,终端生成N帧第二加噪子图像所使用的至少一组噪声参数,与终端生成N帧第一加噪子图像所使用的至少一组噪声参数不同。具体的,与生成N帧第一加噪子图像所使用的噪声参数相比,生成N帧第二加噪子图像所使用的噪声参数的波动较小。其中,噪声参数的波动越大,叠加该噪声参数的图像的加扰程度越高。也就是说,虽然终端在敏感区域和非敏感区域以相同的屏幕刷新率和帧率输出N帧加噪子图像;但是,敏感区域的图像的加噪程度高于非敏感区域的图像的加噪程度。
本申请实施例提供一种图像显示方法,终端可以在敏感区域和非敏感区域以相同的屏幕刷新率和帧率输出N帧加噪子图像。即终端可以同一屏幕刷新率(即第二屏幕刷新率)和帧率(第二帧率)显示第二图像的全部内容,不需要屏幕在不同显示区域支持不同的刷新率,对屏幕的要求大大降低。并且,终端可以对敏感区域和非敏感区域进行不同程度的加扰。
示例性的,本申请实施例这里以图11中的(a)所示的第二图像1101中包括敏感区域S为例,对本申请实施例显示该第二图像的方法进行举例说明:
如图16所示,第二图像1101中包括敏感区域S,第二图像1101中除敏感区域S之外的其他区域(填充黑点的区域)为非敏感区域。其中,t1-t2这段时间T是终端采用常规方案显示第二图像1101的时间。
如图16所示,在常规方案中,终端在t1-t2这段时间T内显示第二图像1101。而在本申请实施例提供的图像显示方法中,终端可以将t1-t2这段时间T平均划分为N段,每一段的时间为T/N,终端可以在敏感区域S显示一帧第一加噪子图像。例如,如图16所示,在t1-t3这段时间T/N,终端在敏感区域S显示第1帧第一加噪子图像;在t3-t4这段时间T/N,终端在敏感区域S显示第2帧第一加噪子图像;……;在t5-t2这段时间T/N,终端在敏感区域S显示第N帧第一加噪子图像。
其中,在t1-t2这段时间T内,即使终端在敏感区域S显示的第一加噪子图像不同,其非敏感区域所显示的图像的帧率和刷新率保持不变。
或者,在t1-t3这段时间T/N,终端在非敏感区域显示第1帧第二加噪子图像;在t3-t4这段时间T/N,终端在非敏感区域显示第2帧第二加噪子图像;……;在t5-t2这段时间T/N,终端在非敏感区域显示第N帧第二加噪子图像。在t1-t2这段时间T内,终端在非敏感区域显示的第二加噪子图像不同。
示例性的,本申请实施例这里以图11中的(b)所示的第二图像1102中包括两个敏感区域(敏感区域S1和敏感区域S2)为例,对本申请实施例中第二图像中包括多个敏感区域时,终端显示该第二图像的方法进行举例说明:
如图17所示,终端可以为敏感区域S1生成N1帧第一加噪子图像,为敏感区域S2生成N2帧第一加噪子图像。其中,N1与N2相同;或者,N1与N2不同。
并且,终端对敏感区域S1的加噪处理所采用的一组噪声参数为 {W_{a1},W_{a2},……,W_{aN1}},且 W_{a1}+W_{a2}+……+W_{aN1} = 0。其中,第1帧(也称为第a1帧)第一加噪子图像加噪处理所采用的噪声参数为 W_{a1};第2帧(也称为第a2帧)第一加噪子图像加噪处理所采用的噪声参数为 W_{a2};……;第N1帧(也称为第aN1帧)第一加噪子图像加噪处理所采用的噪声参数为 W_{aN1}。
终端对敏感区域S2的加噪处理所采用的一组噪声参数为 {W_{b1},W_{b2},……,W_{bN2}},且 W_{b1}+W_{b2}+……+W_{bN2} = 0。
其中,噪声参数 {W_{a1},W_{a2},……,W_{aN1}} 与噪声参数 {W_{b1},W_{b2},……,W_{bN2}} 可以相同,也可以不同。其中,第1帧(也称为第b1帧)第一加噪子图像加噪处理所采用的噪声参数为 W_{b1};第2帧(也称为第b2帧)第一加噪子图像加噪处理所采用的噪声参数为 W_{b2};……;第N2帧(也称为第bN2帧)第一加噪子图像加噪处理所采用的噪声参数为 W_{bN2}。
进一步的,噪声参数{W a1,W a2,……,W aN1}与噪声参数{W b1,W b2,……,W bN2}还可以满足上述公式(3)对应的条件。
如图18所示,第二图像1102中包括敏感区域S1和敏感区域S2,第二图像1102中除敏感区域S1和敏感区域S2之外的其他区域(填充黑点的区域)为非敏感区域。其中,t1-t2这段时间T是终端采用常规方案显示第二图像1102的时间。
如图18所示,在常规方案中,终端在t1-t2这段时间T内显示第二图像1102。而在本申请实施例提供的图像显示方法中,终端可以将t1-t2这段时间T平均划分为N1段,每一段T/N1,终端可以在敏感区域S1显示敏感区域S1的一帧第一加噪子图像;终端可以将t1-t2这段时间T平均划分为N2段,每一段T/N2,终端可以在敏感区域S2显示敏感区域S2的一帧第一加噪子图像。
例如,如图18所示,在t1-t3这段时间(t1-t3这段时间属于第1段T/N1,且属于第1段T/N2),终端在敏感区域S1显示第a1帧第一加噪子图像,在敏感区域S2显示第b1帧第一加噪子图像,即终端显示图18所示的图像a。
在t3-t4这段时间(t3-t4这段时间属于第2段T/N1,且属于第1段T/N2),终端在敏感区域S1显示第a2帧第一加噪子图像,在敏感区域S2显示第b1帧第一加噪子图像,即终端显示图18所示的图像b。
在t4-t5这段时间(t4-t5这段时间属于第2段T/N1,且属于第2段T/N2),终端在敏感区域S1显示第a2帧第一加噪子图像,在敏感区域S2显示第b2帧第一加噪子图像,即终端显示图18所示的图像c。
在t5-t6这段时间(t5-t6这段时间属于第3段T/N1,且属于第2段T/N2),终端在敏感区域S1显示第a3帧第一加噪子图像,在敏感区域S2显示第b2帧第一加噪子图像,即终端显示图18所示的图像d。
在t7-t8这段时间(t7-t8这段时间属于第N1-1段T/N1,且属于第N2段T/N2),终端在敏感区域S1显示第aN1-1帧第一加噪子图像,在敏感区域S2显示第bN2帧第一加噪子图像,即终端显示图18所示的图像e。
在t8-t9这段时间(t8-t9这段时间属于第N1段T/N1,且属于第N2段T/N2),终端在敏感区域S1显示第aN1帧第一加噪子图像,在敏感区域S2显示第bN2帧第一加噪子图像,即终端显示图18所示的图像f。
举例来说,以手机100显示的图2所示的包括密码输入框的显示界面为例,在手机100执行本申请实施例提供的图像显示方法后,如果手机200拍摄手机100所显示的图2所示的显示界面,可以得到图19A所示的拍摄图片。
本申请实施例提供的图像显示方法,终端可以确定出第二图像的至少一个敏感区域,然后针对每个敏感区域,根据该敏感区域的图像生成N(N为大于或者等于2的整数)帧加噪子图像,最后在该敏感区域采用第二帧率(第二帧率是原输出帧率的N倍)和第二屏幕刷新率(第二屏幕刷新率是原屏幕刷新率的N倍)逐帧输出N帧第一加噪子图像。如此,敏感区域的图像便可以分为N帧的第一加噪子图像逐帧输出,偷拍设备偷拍终端的屏幕拍到的是加噪子图像,可以减少终端的显示内容泄露的可能性,有效保护终端的显示内容。
并且,终端对敏感区域加噪处理所采用的噪声参数之和为零。如此,可以保证加噪后N帧加噪子图像中像素点的像素值的平均值与未加噪处理前该像素点的像素值相同。这样,基于人眼视觉的低通效应,人眼无法察觉加噪处理后的图像与未加噪处理 前的图像的区别,可以保证在人眼看来加噪处理前后的图像相同,可以保证用户的视觉体验。即本申请实施例提供的方法,可以在保证用户视觉体验的前提下,减少终端的显示内容泄露的可能性,有效保护终端的显示内容。
并且,当第二图像中包括多个敏感区域时,终端可以对不同的敏感区域做不同的加噪处理(例如,不同敏感区域加噪处理得到的加噪子图像的帧数N不同,并且不同敏感区域加噪处理所采用的噪声参数不同),即终端可以对不同的敏感区域做不同程度的加噪处理。
在本申请实施例的第二种应用场景中,第二图像的至少一部分可以为第二图像的整个区域。例如,图11中的(c)所示的第二图像1103的至少一部分为第二图像1103的整个区域S3。
在第二种应用场景中,第二图像的至少一部分被叠加噪声参数,具体可以为:第二图像的整个区域被叠加噪声参数。例如,图11中的(c)所示的整个区域S3被叠加噪声参数。上述第二图像包括多帧加噪子图像,具体为:第二图像中的整个区域的图像包括多帧加噪子图像(如N帧加噪子图像)。其中,该多帧加噪子图像是对第二图像的整个区域的图像叠加噪声参数得到的。
需要说明的是,在第二种应用场景中,虽然第二图像的整个区域的图像都叠加噪声参数,但是并不表示第二图像的整个区域的图像所叠加的噪声参数均相同,并不表示第二图像的整个区域都包括敏感特征。
其中,第二图像中的部分区域可能包括敏感特征,除该部分区域外的其他区域可能并不包括敏感特征。在这种情况下,虽然第二图像的整个区域(包括敏感特征的区域和不包括敏感特征的区域)的图像都叠加噪声参数;但是,包括敏感特征的区域的图像叠加的噪声参数与不包括敏感特征的区域叠加的噪声参数不同。具体的,与包括敏感特征的区域叠加的噪声参数相比,不包括敏感特征的区域叠加的噪声参数的波动较小。其中,由于噪声参数的波动越大,叠加该噪声参数的图像的加扰程度越高;因此,包括敏感特征的区域的图像的加噪程度高于不包括敏感特征的区域的图像的加噪程度。
例如,如图19B中的(a)所示,假设第二图像1901的至少一部分是第二图像1901的整个区域S4。其中,第二图像1901的整个区域S4的图像叠加噪声参数并不表示整个区域S4都包括敏感特征。例如,如图19B中的(b)所示,整个区域S4中只有部分区域a包括敏感特征,而该部分区域a之外的其他区域b并不包括敏感特征。在这种情况下,终端虽然可以为整个区域S4的图像均叠加噪声参数;但是,包括敏感特征的部分区域a的图像与不包括敏感特征的其他区域b的图像所叠加的噪声参数不同。即终端可以对包括敏感特征的部分区域a的图像与不包括敏感特征的其他区域b的图像进行不同程度的加噪处理。其中,包括敏感特征的部分区域a的图像的加噪程度高于不包括敏感特征的其他区域b的图像的加噪程度。例如,如图19B中的(c)所示,采用较密集的黑点表示包括敏感特征的部分区域a的图像的加噪程度,采用较稀疏的黑点表示不包括敏感特征的其他区域b的图像的加噪程度。
其中,包括敏感特征的部分区域a和不包括敏感特征的其他区域b的屏幕刷新率和帧率均相同。如图19B中的(c)所示,包括敏感特征的部分区域a和不包括敏感特征的其他区域b上显示的都是N帧加噪子图像。
当然,第二图像的整个区域可能都包括敏感特征,如当第二图像是私密文档的图像时,第二图像的整个区域都包括敏感特征。在这种情况下,第二图像的整个区域的图像叠加相同的噪声参数。
在第二种应用场景中,无论第二图像中部分区域包括敏感特征,还是第二图像的整个区域都包括敏感特征,该第二图像的整个区域均以第二屏幕刷新率显示,第二图像的整个区域的输出帧率均为第二帧率。但是,当第二图像中部分区域包括敏感特征时,包括敏感特征的区域的图像和不包括敏感特征的区域的图像的加噪程度不同,即包括敏感特征的区域的图像和不包括敏感特征的区域的图像被叠加的噪声参数不同。
需要说明的是,第二种应用场景中,终端以第二屏幕刷新率在显示第二图像的方法,可以参考本申请实施例图15D中相关方法步骤的描述,本申请实施例这里不再赘述。
可选的,本申请实施例的一种实现方式中,终端在将第二图像分割成M个子区域之后,可以为所有的子区域设置相同的N,即终端可以针对M个子区域中每个子区域,根据一个子区域的图像(即敏感区域)生成该子区域的N帧加噪子图像。
不同的是,M个子区域中不同子区域的敏感程度可能不同;因此,终端为不同子区域生成N帧加噪子图像时可以采用不同的噪声参数。即每个子区域所采用的N个噪声参数的波动大小与该子区域的敏感程度成正比。其中,终端可以在识别M个子区域的图像以提取每个子区域的图像特征之后,可以针对每个子区域,根据该子区域的图像特征和预先保存的敏感特征,以及敏感特征的敏感程度,确定出每个子区域的敏感程度;然后,根据子区域的敏感程度为对应的子区域选择一组噪声参数。
在这种实现方式中,由于终端为第二图像的所有子区域均生成了N帧加噪子图像,因此,终端便可以采用第二帧率和第二屏幕刷新率,逐帧输出每个子区域的N帧加噪子图像。也就是说,对于第二图像的整个区域而言,终端显示该第二图像时的输出帧率相同,且终端显示这第二图像时的屏幕刷新率相同。
可以理解,即使偷拍设备连续偷拍终端屏幕显示的多帧图像,如偷拍设备对终端屏幕拍摄视频,偷拍设备也不能根据偷拍到的多帧图像还原出未加噪处理之前的图像。原因在于,设备拍摄图像时的扫描方式可以包括隔行扫描和逐行扫描两种方式,采用这两种扫描方式拍摄本申请实施例提供的显示方法所显示的图像,得到的图像是加噪处理后的乱码图像。
具体的,隔行扫描是指在采集图像时,分成两场进行扫描图像,第一场扫描奇数行,第二场扫描偶数行,两场合起来构成一幅完整的图像(即一帧图像)。
示例性的,以敏感区域包括如图20中的(a)所示的多个像素点为例。其中,如图20中的(a)所示,像素点1的像素值为A 1,像素点2的像素值为A 2,像素点3的像素值为A 3,像素点4的像素值为A 4
其中,当偷拍设备扫描如图20中的(b)中填充斜线的奇数行时,终端在敏感区域显示的可能是第n帧的加噪子图像,第n帧的加噪子图像中像素点1的像素值为A 1+W n,像素点2的像素值为A 2+W n,像素点3的像素值为A 3+W n,像素点4的像素值为A 4+W n。此时,偷拍设备不能扫描得到第n帧的加噪子图像的偶数行。也就是说, 在像素点1-像素点4中,偷拍设备只能扫描得到像素点1的像素值A 1+W n和像素点2的像素值A 2+W n,而不能扫描得到像素点3的像素值A 3+W n和像素点4的像素值A 4+W n
当偷拍设备扫描如图20中的(c)中填充黑点的偶数行时,终端在敏感区域显示的可能是第n+k帧的加噪子图像,第n+k帧的加噪子图像中像素点1的像素值为A 1+W n+k,像素点2的像素值为A 2+W n+k,像素点3的像素值为A 3+W n+k,像素点4的像素值为A 4+W n+k。此时,偷拍设备不能扫描得到第n+k帧的加噪子图像的奇数行。也就是说,在像素点1-像素点4中,偷拍设备只能扫描得到像素点3的像素值A 3+W n+k和像素点4的像素值A 4+W n+k,而不能扫描得到像素点1的像素值A 1+W n+k和像素点2的像素值A 2+W n+k
其中,偷拍设备可能会对一帧图像的N帧加噪子图像进行一次或多次奇数行扫描和一次或多次偶数行扫描。以偷拍设备对一帧图像的N帧加噪子图像进行一次奇数行扫描和一次偶数行扫描为例。该偷拍设备针对这一帧图像,扫描得到的敏感区域的奇数行像素点信息包括像素点1的像素值A₁+Wₙ和像素点2的像素值A₂+Wₙ,扫描得到的敏感区域的偶数行像素点信息包括像素点3的像素值A₃+Wₙ₊ₖ和像素点4的像素值A₄+Wₙ₊ₖ;然后,将扫描得到的信息进行合并得到图21所示的图像。如图21所示,由于奇数行的像素点和偶数行的像素点所采用的加噪参数不同,合并后得到的敏感区域的图像相比加噪处理前敏感区域的图像,会出现乱码。
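隔行扫描偷拍得到乱码图像的原因,可以用如下示意性代码说明(像素值与噪声参数均为示例性假设:奇数行取自叠加噪声Wₙ的第n帧加噪子图像,偶数行取自叠加噪声Wₙ₊ₖ的第n+k帧加噪子图像):

```python
def interlaced_capture(frame_odd, frame_even):
    """模拟隔行扫描:奇数行(从第 1 行起计,即下标 0、2……)
    取自一帧加噪子图像,偶数行取自另一帧加噪子图像。"""
    return [frame_odd[r] if r % 2 == 0 else frame_even[r]
            for r in range(len(frame_odd))]

original = [[100, 110],   # 像素点1、像素点2(奇数行)
            [120, 130]]   # 像素点3、像素点4(偶数行)
wn, wnk = 6, -6           # 两帧各自叠加的噪声参数(示例数值)
frame_n  = [[p + wn  for p in row] for row in original]   # 第 n 帧加噪子图像
frame_nk = [[p + wnk for p in row] for row in original]   # 第 n+k 帧加噪子图像

captured = interlaced_capture(frame_n, frame_nk)
# 合并结果为 [[106, 116], [114, 124]],与原图不同:
# 奇数行与偶数行所叠加的噪声参数不一致,因而出现"乱码"
```

逐行扫描的情形与此类似:不同行可能落在不同的加噪子图像上,合并后同样无法还原加噪前的图像。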
其中,逐行扫描是指按次序一行接一行扫描的方式。偷拍设备采用逐行扫描的方式拍摄本申请实施例中终端屏幕,可能会存在以下问题:偷拍设备扫描第m行时,终端在敏感区域显示的是第n帧的加噪子图像;偷拍设备扫描第m+1行时,终端在敏感区域显示的是第n+1帧的加噪子图像。由此可见,针对不同行的像素点,偷拍设备扫描到的像素点的像素值所采用的加噪参数不同;因此,合并多行扫描结果得到的敏感区域的图像相比加噪处理前敏感区域的图像,会出现乱码。
由此可见,终端执行本申请实施例提供的方法显示图像时,即使偷拍设备对该终端屏幕拍摄视频,偷拍设备偷拍终端的屏幕拍到的也是乱码的图像,可以减少终端的显示内容泄露的可能性,有效保护终端的显示内容。
可以理解的是,上述终端等为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
本申请实施例可以根据上述方法示例对上述终端进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,如图22所示,本申请实施例提供一种终端2200,该终端2200包括:显示单元2201和控制单元2202。
其中,显示单元2201用于支持终端2200执行上述方法实施例中的S701,S702中的显示动作,S1203,S1204,S1502,和/或用于本文所描述的技术的其它过程。其中,控制单元2202用于支持终端2200控制显示单元2201显示图像,支持终端2200执行上述方法实施例中的S702中的检测动作,S702a-S702d和S1201中进入加噪模式的动作,S1201中确定敏感区域的动作,S1201a-S1201b,S1201c-S1201e,和/或用于本文所描述的技术的其它过程。
进一步的,上述终端2200还可以包括:生成单元。其中,生成单元用于支持终端2200执行上述方法实施例中的S1202,S1501,和/或用于本文所描述的技术的其它过程。
当然,上述终端2200还可以包括其他的单元模块。例如,上述终端2200还可以包括:存储单元和收发单元。该终端2200可以通过收发单元与其他设备交互。例如,终端2200可以通过收发单元向其他设备发送图像文件,或者接收其他设备发送的图像文件。存储单元用于存储数据,如敏感特征。
在采用集成单元的情况下,上述控制单元2202和生成单元等可以集成在一个处理模块中实现,上述收发单元可以是终端2200的RF电路、WiFi模块或者蓝牙模块,上述存储单元可以是终端2200的存储模块,上述显示单元2201可以是显示模块,如显示器(触摸屏)。
图23示出了上述实施例中所涉及的终端的一种可能的结构示意图。该终端2300包括:处理模块2301、存储模块2302和显示模块2303。
处理模块2301用于对终端2300进行控制管理。显示模块2303用于显示图像。存储模块2302用于保存终端2300的程序代码和数据,以及多个敏感特征及其敏感程度。上述终端2300还可以包括通信模块,该通信模块用于与其他设备通信,如接收其他设备发送的消息或图像文件,或者向其他设备发送消息或图像文件。
其中,处理模块2301可以是处理器或控制器,例如可以包括CPU和GPU,通用处理器,数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块2304可以是收发器、收发电路或通信接口等。存储模块2302可以是存储器。
当处理模块2301为处理器(如图1所示的处理器101),通信模块为射频电路(如图1所示的射频电路102),存储模块2302为存储器(如图1所示的存储器103),显示模块2303为触摸屏(包括图1所示的触控板104-1和显示板104-2)时,本申请所提供的设备可以为图1所示的手机100。其中,上述通信模块2304不仅可以包括射频电路,还可以包括WiFi模块和蓝牙模块。射频电路、WiFi模块和蓝牙模块等通信模块可以统称为通信接口。其中,上述处理器、通信接口、触摸屏和存储器可以通过总线耦合在一起。
本申请实施例还提供一种控制设备,包括处理器和存储器,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述处理器执行所述计算机指令时,执行如上述方法实施例所述的图像显示方法。该控制设备可以为控制芯片。
本申请实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机程序代码,当上述处理器执行该计算机程序代码时,设备执行图7、图9和图12中任一附图中的相关方法步骤实现上述实施例中的方法。
本申请实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行图7、图9和图12中任一附图中的相关方法步骤实现上述实施例中的方法。
其中,本申请提供的终端2200和终端2300、控制设备、计算机存储介质或者计算机程序产品均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (37)

  1. 一种图像显示方法,其特征在于,应用于具有显示屏的终端,所述方法包括:
    所述终端在所述显示屏上以第一屏幕刷新率显示第一图像,所述第一图像的输出帧率为第一帧率;
    在检测到满足预设条件后,所述终端在所述显示屏上显示第二图像,其中,所述第二图像的至少一部分被叠加噪声参数,所述至少一部分以第二屏幕刷新率显示,所述至少一部分的输出帧率为第二帧率,所述第二图像包括多帧加噪子图像;
    其中,所述第二帧率大于所述第一帧率,所述第二屏幕刷新率大于所述第一屏幕刷新率。
  2. 根据权利要求1所述的方法,其特征在于,所述在检测到满足预设条件后,所述终端在所述显示屏上显示第二图像,包括:
    所述终端响应于对加噪选项的开启操作,进入加噪模式,所述终端在所述显示屏上显示所述第二图像。
  3. 根据权利要求1所述的方法,其特征在于,所述在检测到满足预设条件后,所述终端在所述显示屏上显示第二图像,包括:
    在所述第二图像中包括敏感特征时,所述终端自动进入加噪模式,在所述显示屏上显示所述第二图像;
    其中,所述敏感特征包括预设控件、货币符号和预设文字中的至少一项,所述预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
  4. 根据权利要求1所述的方法,其特征在于,所述在检测到满足预设条件后,所述终端在所述显示屏上显示第二图像,包括:
    在显示预设类型的应用的界面时,所述终端自动进入加噪模式,在所述显示屏上显示所述第二图像;
    其中,所述预设类型的应用包括:银行类应用、支付类应用和通讯类应用中的至少一项。
  5. 根据权利要求1-4中任意一项所述的方法,其特征在于,所述第二图像的敏感特征被叠加噪声参数,所述至少一部分包括所述第二图像的至少一个敏感区域,所述敏感区域包括所述敏感特征;
    其中,所述敏感特征包括预设控件、货币符号和预设文字中的至少一项,所述预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
  6. 根据权利要求5所述的方法,其特征在于,在所述终端在所述显示屏上显示所述第二图像之前,所述方法还包括:
    所述终端根据所述敏感区域的图像生成N帧第一加噪子图像;
    其中,所述N帧第一加噪子图像以所述第二屏幕刷新率在所述敏感区域显示,所述N帧第一加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或等于2的整数。
  7. 根据权利要求6所述的方法,其特征在于,所述终端根据所述敏感区域的图像生成N帧第一加噪子图像,包括:
    所述终端的剩余电量大于等于第一阈值时,所述终端根据所述敏感区域的图像生成N1帧第一加噪子图像;
    所述终端的剩余电量小于所述第一阈值时,所述终端根据所述敏感区域的图像生成N2帧第一加噪子图像,N1>N2。
  8. 根据权利要求6或7所述的方法,其特征在于,所述终端根据所述敏感区域的图像生成N帧第一加噪子图像,包括:
    所述终端根据所述敏感区域的敏感程度生成N帧第一加噪子图像,所述敏感程度是根据所述敏感区域的敏感特征确定的;
    其中,包括不同敏感特征的多个敏感区域的敏感程度不同。
  9. 根据权利要求6-8中任意一项所述的方法,其特征在于,所述终端根据所述敏感区域的图像生成N帧第一加噪子图像,包括:
    所述终端确定所述敏感区域的图像中的每个像素点的像素值;
    所述终端确定所述敏感区域的至少一组噪声参数,每组噪声参数中包括N个噪声参数;所述N个噪声参数之和为零,或者所述N个噪声参数之和在预设参数区间内;
    所述终端采用aₙ,ᵢ=Aᵢ+Wₙ,ᵢ,计算一帧加噪子图像中每个像素点的像素值,得到所述一帧加噪子图像;
    其中,所述Aᵢ是所述敏感区域的图像中的像素点i的像素值,i∈{1,2,……,Q},Q为所述敏感区域的图像中的像素点的总数;所述Wₙ,ᵢ是第n帧第一加噪子图像的第i个像素点的噪声参数,n∈{1,2,……,N},
    ∑ₙ₌₁ᴺWₙ,ᵢ=0,或者∑ₙ₌₁ᴺWₙ,ᵢ在所述预设参数区间内;
    所述aₙ,ᵢ是第n帧第一加噪子图像的像素点i的像素值。
  10. 根据权利要求5-9中任意一项所述的方法,其特征在于,所述第二图像的非敏感区域以所述第一屏幕刷新率显示,所述非敏感区域的输出帧率为所述第一帧率;
    其中,所述非敏感区域是所述第二图像中除所述敏感区域之外的其他区域。
  11. 根据权利要求5-9中任意一项所述的方法,其特征在于,所述第二图像的非敏感区域以所述第二屏幕刷新率显示,所述非敏感区域的输出帧率为所述第二帧率;
    其中,所述非敏感区域是所述第二图像中除所述敏感区域之外的其他区域。
  12. 根据权利要求11所述的方法,其特征在于,在所述终端进入所述加噪模式后,所述终端在所述显示屏上显示所述第二图像之前,所述方法还包括:
    所述终端根据所述非敏感区域的图像生成N帧第二加噪子图像;
    所述N帧第二加噪子图像以所述第二屏幕刷新率在所述非敏感区域显示,所述N帧第二加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或者等于2的整数;
    其中,所述终端生成所述N帧第二加噪子图像所使用的噪声参数,与所述终端生成所述N帧第一加噪子图像所使用的噪声参数不同。
  13. 一种终端,其特征在于,所述终端包括:
    显示单元,用于以第一屏幕刷新率显示第一图像,所述第一图像的输出帧率为第一帧率;
    控制单元,用于检测所述终端满足预设条件;
    所述显示单元,还用于在控制单元检测到满足所述预设条件后,显示第二图像,其中,所述显示单元显示的所述第二图像的至少一部分被叠加噪声参数,所述至少一部分以第二屏幕刷新率显示,所述至少一部分的输出帧率为第二帧率,所述显示单元显示的所述第二图像包括多帧加噪子图像;
    其中,所述第二帧率大于所述第一帧率,所述第二屏幕刷新率大于所述第一屏幕刷新率。
  14. 根据权利要求13所述的终端,其特征在于,所述控制单元,具体用于响应于对加噪选项的开启操作,控制所述终端进入加噪模式。
  15. 根据权利要求13所述的终端,其特征在于,所述控制单元,具体用于在所述第二图像中包括敏感特征时,控制所述终端自动进入加噪模式;
    其中,所述敏感特征包括预设控件、货币符号和预设文字中的至少一项,所述预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
  16. 根据权利要求13所述的终端,其特征在于,所述控制单元,具体用于在所述显示单元显示预设类型的应用的界面时,控制所述终端自动进入加噪模式;
    其中,所述预设类型的应用包括:银行类应用、支付类应用和通讯类应用中的至少一项。
  17. 根据权利要求13-16中任意一项所述的终端,其特征在于,所述显示单元显示的所述第二图像的敏感特征被叠加噪声参数,所述至少一部分包括所述第二图像的至少一个敏感区域,所述敏感区域包括所述敏感特征;
    其中,所述敏感特征包括预设控件、货币符号和预设文字中的至少一项,所述预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
  18. 根据权利要求17所述的终端,其特征在于,所述终端还包括:
    生成单元,用于根据所述敏感区域的图像生成N帧第一加噪子图像;
    其中,所述N帧第一加噪子图像以所述第二屏幕刷新率在所述敏感区域显示,所述N帧第一加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或等于2的整数。
  19. 根据权利要求18所述的终端,其特征在于,所述生成单元,具体用于:
    所述终端的剩余电量大于等于第一阈值时,根据所述敏感区域的图像生成N1帧第一加噪子图像;
    所述终端的剩余电量小于所述第一阈值时,根据所述敏感区域的图像生成N2帧第一加噪子图像,N1>N2。
  20. 根据权利要求18或19所述的终端,其特征在于,所述生成单元,具体用于根据所述敏感区域的敏感程度生成N帧第一加噪子图像,所述敏感程度是根据所述敏感区域的敏感特征确定的;
    其中,包括不同敏感特征的多个敏感区域的敏感程度不同。
  21. 根据权利要求18-20中任意一项所述的终端,其特征在于,所述显示单元以所述第一屏幕刷新率显示所述第二图像的非敏感区域的图像,所述非敏感区域的输出帧率为所述第一帧率;
    其中,所述非敏感区域是所述第二图像中除所述敏感区域之外的其他区域。
  22. 根据权利要求18-20中任意一项所述的终端,其特征在于,所述显示单元以所述第二屏幕刷新率显示所述第二图像的非敏感区域的图像,所述非敏感区域的输出帧率为所述第二帧率;
    其中,所述非敏感区域是所述第二图像中除所述敏感区域之外的其他区域。
  23. 根据权利要求22所述的终端,其特征在于,所述生成单元,还用于根据所述非敏感区域的图像生成N帧第二加噪子图像;
    所述N帧第二加噪子图像以所述第二屏幕刷新率在所述非敏感区域显示,所述N帧第二加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或者等于2的整数;
    其中,所述生成单元生成所述N帧第二加噪子图像所使用的噪声参数,与所述终端生成所述N帧第一加噪子图像所使用的噪声参数不同。
  24. 一种终端,其特征在于,所述终端包括:处理器、存储器和显示器;所述存储器和所述显示器与所述处理器耦合,所述显示器用于显示图像,所述存储器包括非易失性存储介质,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述处理器执行所述计算机指令时,
    所述处理器,用于在所述显示器上以第一屏幕刷新率显示第一图像,所述第一图像的输出帧率为第一帧率;
    所述处理器,还用于在检测到满足预设条件后,在所述显示器上显示第二图像,其中,所述显示器显示的所述第二图像的至少一部分被叠加噪声参数,所述至少一部分以第二屏幕刷新率显示,所述至少一部分的输出帧率为第二帧率,所述第二图像包括多帧加噪子图像;
    其中,所述第二帧率大于所述第一帧率,所述第二屏幕刷新率大于所述第一屏幕刷新率。
  25. 根据权利要求24所述的终端,其特征在于,所述处理器,用于在检测到满足预设条件后,在所述显示器上显示第二图像,包括:
    所述处理器,具体用于响应于对加噪选项的开启操作,进入加噪模式,在所述显示器上显示所述第二图像。
  26. 根据权利要求24所述的终端,其特征在于,所述处理器,用于在检测到满足预设条件后,在所述显示器上显示第二图像,包括:
    所述处理器,具体用于在所述第二图像中包括敏感特征时,自动进入加噪模式,在所述显示器上显示所述第二图像;
    其中,所述敏感特征包括预设控件、货币符号和预设文字中的至少一项,所述预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
  27. 根据权利要求24所述的终端,其特征在于,所述处理器,用于在检测到满足预设条件后,在所述显示器上显示第二图像,包括:
    所述处理器,具体用于在所述显示器显示预设类型的应用的界面时,自动进入加噪模式,在所述显示器上显示所述第二图像;
    其中,所述预设类型的应用包括:银行类应用、支付类应用和通讯类应用中的至少一项。
  28. 根据权利要求24-27中任意一项所述的终端,其特征在于,所述显示器显示的所述第二图像的敏感特征被叠加噪声参数,所述至少一部分包括所述第二图像的至少一个敏感区域,所述敏感区域包括所述敏感特征;
    其中,所述敏感特征包括预设控件、货币符号和预设文字中的至少一项,所述预设控件包括密码输入框、用户名输入框和身份证号输入框中的至少一项,所述预设文字包括余额、密码、工资和账户中的至少一项。
  29. 根据权利要求28所述的终端,其特征在于,所述处理器,还用于在显示器上显示所述第二图像之前,根据所述敏感区域的图像生成N帧第一加噪子图像;
    其中,所述显示器显示的所述N帧第一加噪子图像以所述第二屏幕刷新率在所述敏感区域显示,所述N帧第一加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或等于2的整数。
  30. 根据权利要求29所述的终端,其特征在于,所述处理器,用于根据所述敏感区域的图像生成N帧第一加噪子图像,包括:
    所述处理器,具体用于:
    所述终端的剩余电量大于等于第一阈值时,根据所述敏感区域的图像生成N1帧第一加噪子图像;
    所述终端的剩余电量小于所述第一阈值时,根据所述敏感区域的图像生成N2帧第一加噪子图像,N1>N2。
  31. 根据权利要求29或30所述的终端,其特征在于,所述处理器,用于根据所述敏感区域的图像生成N帧第一加噪子图像,包括:
    所述处理器,具体用于根据所述敏感区域的敏感程度生成N帧第一加噪子图像,所述敏感程度是根据所述敏感区域的敏感特征确定的;
    其中,包括不同敏感特征的多个敏感区域的敏感程度不同。
  32. 根据权利要求29-31中任意一项所述的终端,其特征在于,所述处理器,用于根据所述敏感区域的图像生成N帧第一加噪子图像,包括:
    所述处理器,具体用于:
    确定所述敏感区域的图像中的每个像素点的像素值;
    确定所述敏感区域的至少一组噪声参数,每组噪声参数中包括N个噪声参数;所述N个噪声参数之和为零,或者所述N个噪声参数之和在预设参数区间内;
    采用aₙ,ᵢ=Aᵢ+Wₙ,ᵢ,
    计算一帧加噪子图像中每个像素点的像素值,得到所述一帧加噪子图像;
    其中,所述Aᵢ是所述敏感区域的图像中的像素点i的像素值,i∈{1,2,……,Q},Q为所述敏感区域的图像中的像素点的总数;所述Wₙ,ᵢ是第n帧第一加噪子图像的第i个像素点的噪声参数,n∈{1,2,……,N},
    ∑ₙ₌₁ᴺWₙ,ᵢ=0,或者∑ₙ₌₁ᴺWₙ,ᵢ在所述预设参数区间内;
    所述aₙ,ᵢ是第n帧第一加噪子图像的像素点i的像素值。
  33. 根据权利要求28-32中任意一项所述的终端,其特征在于,所述处理器在所述显示器上以所述第一屏幕刷新率显示所述第二图像的非敏感区域的图像,所述非敏感区域的输出帧率为所述第一帧率;
    其中,所述非敏感区域是所述第二图像中除所述敏感区域之外的其他区域。
  34. 根据权利要求28-32中任意一项所述的终端,其特征在于,所述处理器在所述显示器上以所述第二屏幕刷新率显示所述第二图像的非敏感区域的图像,所述非敏感区域的输出帧率为所述第二帧率;
    其中,所述非敏感区域是所述第二图像中除所述敏感区域之外的其他区域。
  35. 根据权利要求34所述的终端,其特征在于,所述处理器,还用于在所述显示器显示所述第二图像之前,根据所述非敏感区域的图像生成N帧第二加噪子图像;
    所述显示器显示的所述N帧第二加噪子图像以所述第二屏幕刷新率在所述非敏感区域显示,所述N帧第二加噪子图像的输出帧率为所述第二帧率;所述第二帧率是所述第一帧率的N倍,所述第二屏幕刷新率是所述第一屏幕刷新率的N倍,N为大于或者等于2的整数;
    其中,所述处理器生成所述N帧第二加噪子图像所使用的噪声参数,与所述终端生成所述N帧第一加噪子图像所使用的噪声参数不同。
  36. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括计算机指令,当所述计算机指令在终端上运行时,使得所述终端执行如权利要求1-12中任一项所述的方法。
  37. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-12中任一项所述的方法。
PCT/CN2018/081491 2018-03-31 2018-03-31 一种图像显示方法及终端 WO2019183984A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201880045049.5A CN110892405A (zh) 2018-03-31 2018-03-31 一种图像显示方法及终端
US17/041,196 US11615215B2 (en) 2018-03-31 2018-03-31 Image display method and terminal
PCT/CN2018/081491 WO2019183984A1 (zh) 2018-03-31 2018-03-31 一种图像显示方法及终端
EP18912684.0A EP3764267A4 (en) 2018-03-31 2018-03-31 IMAGE DISPLAY PROCESS AND TERMINAL

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/081491 WO2019183984A1 (zh) 2018-03-31 2018-03-31 一种图像显示方法及终端

Publications (1)

Publication Number Publication Date
WO2019183984A1 true WO2019183984A1 (zh) 2019-10-03

Family

ID=68062053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/081491 WO2019183984A1 (zh) 2018-03-31 2018-03-31 一种图像显示方法及终端

Country Status (4)

Country Link
US (1) US11615215B2 (zh)
EP (1) EP3764267A4 (zh)
CN (1) CN110892405A (zh)
WO (1) WO2019183984A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111711748A (zh) * 2020-05-27 2020-09-25 维沃移动通信(杭州)有限公司 屏幕刷新率的控制方法、装置、电子设备及可读存储介质
US11500605B2 (en) * 2019-09-17 2022-11-15 Aver Information Inc. Image transmission device, image display system capable of remote screenshot, and remote screenshot method

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN114071047B (zh) * 2021-10-30 2023-08-29 深圳曦华科技有限公司 帧率控制方法及相关装置
CN116257235A (zh) * 2021-12-10 2023-06-13 华为技术有限公司 绘制方法及电子设备
CN114339411B (zh) * 2021-12-30 2023-12-26 西安紫光展锐科技有限公司 视频处理方法、装置及设备
CN114518859A (zh) * 2022-02-23 2022-05-20 维沃移动通信有限公司 显示控制方法、装置、电子设备及存储介质
CN115065848B (zh) * 2022-06-10 2023-11-17 展讯半导体(成都)有限公司 一种显示数据的传输方法、电子设备及模组设备

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103077361A (zh) * 2012-12-28 2013-05-01 东莞宇龙通信科技有限公司 移动终端及其防窥视方法
CN104504350A (zh) * 2015-01-04 2015-04-08 京东方科技集团股份有限公司 显示装置、观看装置、显示方法及观看方法
US9342621B1 (en) * 2008-08-04 2016-05-17 Zscaler, Inc. Phrase matching
US20160371498A1 (en) * 2014-10-29 2016-12-22 Square, Inc. Secure Display Element

Family Cites Families (17)

Publication number Priority date Publication date Assignee Title
JP3560441B2 (ja) 1997-04-07 2004-09-02 日本アイ・ビー・エム株式会社 複数フレーム・データ・ハイディング方法及び検出方法
US7302162B2 (en) 2002-08-14 2007-11-27 Qdesign Corporation Modulation of a video signal with an impairment signal to increase the video signal masked threshold
JP2006074339A (ja) * 2004-09-01 2006-03-16 Fuji Xerox Co Ltd 符号化装置、復号化装置、符号化方法、復号化方法、及びこれらのプログラム
US20060129948A1 (en) 2004-12-14 2006-06-15 Hamzy Mark J Method, system and program product for a window level security screen-saver
CN101561852B (zh) 2008-04-16 2012-10-10 联想(北京)有限公司 一种显示方法和装置
US20110199624A1 (en) * 2010-02-12 2011-08-18 Kabushiki Kaisha Toshiba Method and apparatus for processing image
US9292959B2 (en) * 2012-05-16 2016-03-22 Digizig Media Inc. Multi-dimensional stacking with self-correction
CN103237113A (zh) 2013-03-04 2013-08-07 东莞宇龙通信科技有限公司 信息显示的方法及电子设备
US20140283100A1 (en) 2013-03-15 2014-09-18 Edward R. Harrison Display privacy with dynamic configuration
JP6273566B2 (ja) * 2013-04-12 2018-02-07 パナソニックIpマネジメント株式会社 通信システム、画像生成方法、及び通信装置
US9251760B2 (en) 2013-07-02 2016-02-02 Cisco Technology, Inc. Copy protection from capture devices for photos and videos
CN104469127B (zh) 2013-09-22 2019-10-18 南京中兴软件有限责任公司 拍摄方法和装置
WO2015196122A1 (en) * 2014-06-19 2015-12-23 Contentguard Holdings, Inc. Rendering content using obscuration techniques
CN105827820B (zh) 2015-12-25 2019-06-07 维沃移动通信有限公司 一种移动终端的防偷窥方法及移动终端
CN107516486A (zh) 2016-06-16 2017-12-26 中兴通讯股份有限公司 图像的显示方法及装置
CN106295425A (zh) 2016-08-05 2017-01-04 深圳市金立通信设备有限公司 终端及其屏幕内容显示方法
CN106407827B (zh) 2016-11-24 2021-07-27 合肥工业大学 一种基于频率差的屏幕防偷拍设备

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US9342621B1 (en) * 2008-08-04 2016-05-17 Zscaler, Inc. Phrase matching
CN103077361A (zh) * 2012-12-28 2013-05-01 东莞宇龙通信科技有限公司 移动终端及其防窥视方法
US20160371498A1 (en) * 2014-10-29 2016-12-22 Square, Inc. Secure Display Element
CN104504350A (zh) * 2015-01-04 2015-04-08 京东方科技集团股份有限公司 显示装置、观看装置、显示方法及观看方法

Cited By (3)

Publication number Priority date Publication date Assignee Title
US11500605B2 (en) * 2019-09-17 2022-11-15 Aver Information Inc. Image transmission device, image display system capable of remote screenshot, and remote screenshot method
CN111711748A (zh) * 2020-05-27 2020-09-25 维沃移动通信(杭州)有限公司 屏幕刷新率的控制方法、装置、电子设备及可读存储介质
CN111711748B (zh) * 2020-05-27 2023-01-24 维沃移动通信(杭州)有限公司 屏幕刷新率的控制方法、装置、电子设备及可读存储介质

Also Published As

Publication number Publication date
US11615215B2 (en) 2023-03-28
CN110892405A (zh) 2020-03-17
EP3764267A1 (en) 2021-01-13
US20210034793A1 (en) 2021-02-04
EP3764267A4 (en) 2021-03-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18912684

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018912684

Country of ref document: EP

Effective date: 20201006