CN111866393B - Display control method, device and storage medium - Google Patents


Info

Publication number
CN111866393B
CN111866393B (application CN202010765828.1A)
Authority
CN
China
Prior art keywords
camera
target
image
preset
parameter
Prior art date
Legal status
Active
Application number
CN202010765828.1A
Other languages
Chinese (zh)
Other versions
CN111866393A (en)
Inventor
刘雪飞 (Liu Xuefei)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010765828.1A
Publication of CN111866393A
Application granted
Publication of CN111866393B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Abstract

The embodiments of the application disclose a display control method, device, and storage medium applied to an electronic device. The electronic device includes two cameras, a first camera and a second camera, where the field angle of the first camera is larger than that of the second camera. The method includes: shooting with the first camera to obtain a first image; performing face detection on the first image to obtain the number of faces; and hiding preset display content when the number of faces is not 1. The embodiments of the application can thereby prevent peeping and improve the security of the electronic device.

Description

Display control method, device and storage medium
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a display control method, device, and storage medium.
Background
With the widespread use of electronic devices (such as mobile phones and tablet computers), these devices support ever more applications and ever more powerful functions. They are developing in diverse and personalized directions and have become indispensable electronic products in users' lives.
At present, large screens and full-view screens are increasingly favored by users. However, the larger the screen, the more easily the displayed information can be peeped at by others, which creates a potential security risk for the electronic device. How to prevent peeping is therefore an urgent problem to solve.
Disclosure of Invention
The embodiments of the application provide a display control method, device, and storage medium that can prevent peeping and improve the security of electronic devices.
In a first aspect, an embodiment of the present application provides a display control method applied to an electronic device, where the electronic device includes two cameras, the two cameras include a first camera and a second camera, and the field angle of the first camera is larger than the field angle of the second camera. The method includes:
shooting with the first camera to obtain a first image;
performing face detection on the first image to obtain the number of faces; and
hiding preset display content when the number of faces is not 1.
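As a minimal sketch of the three steps above (shoot, count faces, hide when the count is not 1), the control logic can be written as follows. The detector is left pluggable because the application does not prescribe a particular face-detection algorithm; `detect_faces` and the frame format are illustrative assumptions.

```python
def count_faces(image, detect_faces):
    """Perform face detection on a captured first image.

    `detect_faces` is any callable that returns a list of face bounding
    boxes for the image (e.g. an OpenCV Haar-cascade or DNN detector);
    it is a stand-in, since the method does not fix an algorithm.
    """
    return len(detect_faces(image))


def should_hide_content(face_count):
    """Hide the preset display content when the number of faces is not 1:
    either nobody is looking, or more than one person is (a possible peeper).
    """
    return face_count != 1
```

With a real detector plugged in, the display layer would evaluate `should_hide_content(count_faces(frame, detector))` for each frame captured by the first (larger field angle) camera.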
In a second aspect, an embodiment of the present application provides a display control apparatus applied to an electronic device, where the electronic device includes two cameras, the two cameras include a first camera and a second camera, and the field angle of the first camera is greater than the field angle of the second camera. The apparatus includes a shooting unit, an image recognition unit, and a display control unit, wherein:
the shooting unit is configured to shoot with the first camera to obtain a first image;
the image recognition unit is configured to perform face detection on the first image to obtain the number of faces; and
the display control unit is configured to hide preset display content when the number of faces is not 1.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, a dual camera, and one or more programs, wherein the dual camera includes a first camera and a second camera, a field angle of the first camera is larger than a field angle of the second camera, the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in any of the methods of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a chip including a processor configured to call and run a computer program from a memory, so that a device equipped with the chip performs some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In a sixth aspect, the present application provides a computer program, wherein the computer program is operable to cause a computer to perform some or all of the steps as described in any of the methods of the first aspect of the embodiments of the present application. The computer program may be a software installation package.
It can be seen that the display control method, device, and storage medium described in the embodiments of the present application are applied to an electronic device with two cameras, a first camera and a second camera, where the field angle of the first camera is larger than that of the second camera. The first camera shoots to obtain a first image, face detection is performed on the first image to obtain the number of faces, and when the number of faces is not 1, preset display content is hidden. Peeping is thereby prevented and the security of the electronic device is improved.
Drawings
The drawings needed to describe the embodiments or the prior art are briefly introduced below.
Fig. 1A is a block diagram of an electronic device 100 according to an embodiment of the present disclosure;
fig. 1B is a schematic architecture diagram of a software and hardware system provided with an Android system according to an embodiment of the present application;
fig. 1C is an architecture diagram of an electronic device according to an embodiment of the present application;
fig. 2A is a schematic flowchart of a display control method according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of an application scenario provided in the embodiment of the present application;
fig. 2C is a schematic flowchart of another display control method provided in the embodiment of the present application;
fig. 3 is a schematic flowchart of another display control method provided in an embodiment of the present application;
fig. 4 is a block diagram illustrating functional units of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of functional units of a display control apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the scheme of the embodiments of the present application, the following first introduces the related terms and concepts that may be involved in the embodiments of the present application.
In this embodiment, the electronic device may be any device with a display function, for example, a handheld device (smartphone, tablet computer, etc.), an in-vehicle device (navigator, reversing-assist system, driving recorder, in-vehicle refrigerator, etc.), a wearable device (smart band, wireless headset, smart watch, smart glasses, etc.), a computing device or other processing device connected to a wireless modem, or any of various forms of User Equipment (UE), Mobile Station (MS), virtual reality/augmented reality device, or terminal device; the electronic device may also be a base station or a server.
The electronic device may further include a smart home device, which may be at least one of: a smart speaker, smart camera, smart rice cooker, smart wheelchair, smart massage chair, smart furniture, smart dishwasher, smart television, smart refrigerator, smart electric fan, smart heater, smart clothes-drying rack, smart lamp, smart router, smart switch, smart wall socket panel, smart humidifier, smart air conditioner, smart door (for example, with a fingerprint lock or combination lock), smart window, smart cooktop, smart sterilizer, smart toilet, sweeping robot, etc., without limitation here.
Referring to fig. 1A, a block diagram of an electronic device 100 according to an exemplary embodiment of the present disclosure is shown. The electronic device 100 may include one or more of the following components: a processor 110, a memory 120, and an input-output device 130.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the electronic device 100 using various interfaces and lines, and performs the functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. The processor 110 may include one or more processing units; for example, it may include a Central Processing Unit (CPU), an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-network Processing Unit (NPU), etc. The controller may be the neural center and command center of the electronic device 100; it can generate operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. The CPU mainly handles the operating system, user interface, applications, and the like; the GPU renders and draws display content; the modem handles wireless communications; the DSP processes digital signals, including digital image signals and other digital signals. The NPU is a neural-network (NN) computing processor: by drawing on the structure of biological neural networks, for example the transfer mode between neurons of a human brain, it processes input information quickly and can also learn continuously by itself. Intelligent applications of the electronic device 100, such as image recognition, face recognition, speech recognition, and text understanding, can be realized through the NPU. A memory may be provided in the processor 110 for storing instructions and data.
In some embodiments, the memory in the processor 110 is a cache. It may hold instructions or data that the processor 110 has just used or reused; if the processor 110 needs the instruction or data again, it can be fetched directly from this memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
It is understood that the processor 110 may correspond to a System on a Chip (SoC) in an actual product, that the processing units and/or interfaces above may not be integrated into the processor 110, and that the corresponding functions may be implemented by a communication chip or a separate electronic component. The interface connection relationships between the modules described above are merely illustrative and do not uniquely limit the structure of the electronic device 100.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets, and may include a program storage area and a data storage area. The program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described below, and the like; the operating system may be the Android system (including systems developed in depth on top of Android), the iOS system developed by Apple Inc. (including systems developed in depth on top of iOS), or another system. The data storage area may also store data created by the electronic device 100 during use (e.g., phone book, audio and video data, chat log data), and the like.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes the Android system and the iOS system with a layered architecture as examples to illustrate the software architecture of the electronic device 100.
As shown in fig. 1B, the memory 120 may store a Linux kernel layer 220, a system runtime library layer 240, an application framework layer 260, and an application layer 280, wherein the layers communicate with each other through a software interface, and the Linux kernel layer 220, the system runtime library layer 240, and the application framework layer 260 belong to an operating system space.
The application layer 280 belongs to a user space, and at least one application program runs in the application layer 280, and the application programs may be native application programs carried by an operating system, or third-party application programs developed by third-party developers, and specifically may include application programs such as passwords, eye tracking, cameras, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short messages, and the like.
The application framework layer 260 provides the various APIs used to build applications in the application layer; developers can also build their own applications using these APIs, which include, for example, a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a message manager, an activity manager, a package manager, and a location manager. The window manager is used to manage window programs; it can obtain the size of the display screen, determine whether a status bar exists, lock the screen, capture the screen, and the like. The content provider is used to store and retrieve data and make it accessible to applications; the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and may be used to build applications. A display interface may be composed of one or more views; for example, a display interface including a short-message notification icon may include a view for displaying text and a view for displaying pictures. The phone manager provides the communication functions of the electronic device 100, such as management of call status (including connected, disconnected, etc.). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables an application to display notification information in the status bar; it can convey notification-type messages that disappear automatically after a short stay without user interaction, for example notifications of download completion or message alerts.
The notification manager may also present notifications in the form of a chart or scroll-bar text in the system's top status bar, such as notifications of applications running in the background, or in the form of a dialog window on the screen; for example, it may show text information in the status bar, sound a prompt tone, vibrate the electronic device, or flash an indicator light. The message manager can store the data of messages reported by APPs and process that reported data.
The system runtime library layer 240 provides the main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, and the Webkit library provides browser-kernel support. The system runtime library layer 240 also provides the Android Runtime, which mainly supplies core libraries that allow developers to write Android applications in the Java language.
The Linux kernel layer 220 provides underlying drivers for various hardware of the electronic device 100, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like.
It should be understood that the interface display method described in the embodiments of the present application can be applied to the Android system as well as to other operating systems, such as iOS; it is described here using the Android system only as an example, without limitation.
In the following, a conventional electronic device will be described in detail with reference to fig. 1C, and it should be understood that the configuration illustrated in the embodiment of the present application is not intended to specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Further, as shown in fig. 1C, the electronic device 100 may include a normally-open sensor 101, a camera serial interface decoder 200, an image signal processor 300, and a digital signal processor 400, wherein the image signal processor 300 includes a lightweight image front end 310 and an image front end 320, the normally-open sensor 101 is connected to the camera serial interface decoder 200, the camera serial interface decoder 200 is connected to the lightweight image front end 310 of the image signal processor 300, and the lightweight image front end 310 is connected to the digital signal processor 400;
the digital signal processor 400 is configured to receive first raw image data acquired by the normally-open sensor 101 through the camera serial interface decoder 200 and the lightweight image front end 310, and call a first image processing algorithm to perform a first preset process on the first raw image data to obtain first reference image data, and the image front end 320 is configured to transmit second raw image data acquired by the camera sensor 500 of the electronic device 100.
Wherein the first original image data and the second original image data may be MIPI RAW image data, and the first reference image data may be YUV image data.
The first image processing algorithm is used for realizing a data processing effect equivalent to that of the image signal processor in a software algorithm mode, namely, an operation corresponding to first preset processing, and the first preset processing comprises at least one of the following steps: automatic exposure control, lens attenuation compensation, brightness improvement, black level correction, lens shading correction, dead pixel correction, color interpolation, automatic white balance and color correction. It should be noted that although the normally-open sensor 101 transmits the first raw image data through the lightweight image front end 310 of the image signal processor 300, the image signal processor 300 does not further process the first raw image data, and the image signal processor 300 only performs the same or different processing as the first preset processing on the second raw image data transmitted through the image front end 320. Also, since the lightweight image front end 310 is only responsible for interfacing inputs and does not do anything else, its power consumption is relatively low relative to prior solutions that enable the image front end 320 to transfer image data (which would require enabling other modules of the image signal processor 300 for processing of the image data).
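Two of the named first-preset-processing steps can be sketched in pure Python to show what a software re-implementation of ISP functionality looks like. The 10-bit black/white levels and the gray-world rule are illustrative assumptions; the application only names "black level correction" and "automatic white balance" as steps.

```python
def black_level_correct(raw, black_level=64, white_level=1023):
    """Black level correction: subtract the sensor's black level from each
    RAW sample and rescale to the full range. `raw` is a flat list of
    10-bit RAW sample values (an assumed format for the sketch)."""
    span = white_level - black_level
    out = []
    for v in raw:
        v = max(v - black_level, 0)           # clamp below black level
        out.append(min(v * white_level // span, white_level))
    return out


def gray_world_white_balance(rgb_pixels):
    """Toy automatic white balance using the gray-world assumption:
    scale each channel so its mean matches the overall mean."""
    n = len(rgb_pixels)
    means = [sum(p[c] for p in rgb_pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(ch * g for ch, g in zip(p, gains)) for p in rgb_pixels]
```

A production software ISP would of course operate on full MIPI RAW frames and chain all the listed corrections; this sketch only illustrates the per-sample arithmetic of two of them.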
The normally-open sensor 101 may be the image sensor in a normally-open (always on, AON) camera, or it may be an ultrasonic sensor. In the embodiments of the present application, a normally-open camera can be understood as a camera that remains on as long as the battery level of the electronic device is above a preset level, where the preset level may be set by the user or defaulted by the system. For example, if the preset level is 0, the camera stays on whenever the electronic device is powered on; if the preset level is 10%, the camera stays on whenever the battery level is above 10%. Alternatively, the normally-open camera may be understood as a camera that starts quickly when a message-reminder event or a screen-lighting event is detected and remains on until the screen is turned off. In the embodiments of the present application, the electronic device may also provide a switch: when the switch is turned on, the normally-open camera is enabled and stays on while the battery level is above the preset level; when the battery level falls below the preset level, the camera is closed automatically and can later be reopened manually. The switch can, of course, also be turned off to close the camera, and the camera can further be controlled by input open or close instructions. When the normally-open sensor 101 is an ultrasonic sensor, it can likewise remain on while the electronic device is powered on, and image acquisition can also be realized through the ultrasonic sensor.
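The battery-threshold policy described above reduces to a small predicate; the parameter names and the default 10% threshold are illustrative values taken from the examples in the text, not normative.

```python
def camera_should_stay_on(switch_on, battery_pct, preset_pct=10):
    """Sketch of the always-on (AON) camera policy: the camera remains
    open only while the user-facing switch is on and the battery level
    is above the preset level (user-set or a system default). With
    preset_pct=0, the camera stays on whenever the device is powered."""
    return switch_on and battery_pct > preset_pct
```

A real implementation would re-evaluate this predicate on battery-level and switch-state change events and open or close the camera accordingly.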
In the embodiments of the present application, the normally-open camera may be the first camera, and the application functions that the electronic device can realize through the normally-open camera include at least one of the following:
1. Privacy protection. For example, a social APP receives a new message from a girlfriend, or a bank sends a salary-credit text message, and the user does not want the private information in the message to be seen by others. Through the normally-open sensor 101, the electronic device can detect that a stranger's eyes are watching the owner's phone screen and darken the screen.
2. Touch-free operation. For example, a user who is cooking places the phone nearby to check a recipe; an important call comes in, but the user's hands are covered in grease and it is inconvenient to operate the phone directly. Through the normally-open sensor 101, the electronic device can detect the user's air gesture and perform the operation corresponding to it.
3. Through the normally-open sensor 101, the electronic device can detect that the user is still watching the screen, and in that case the automatic screen-off function is not triggered.
At present, in display control technologies supported by mobile phones and similar devices, meaningless return motions often occur during hand movement, such as the hand falling back after sliding upward, or the palm turning back after flipping over to the back of the hand. To avoid misidentifying such return motions as valid control operations, existing schemes generally pause recognition for a period of time after executing one motion, giving the user time to restore the initial hand gesture. This pause, however, degrades the user experience: display control cannot respond rapidly and continuously.
In view of the above problem, an embodiment of the present application provides a display control method, which is described in detail below with reference to the accompanying drawings.
Referring to fig. 2A, fig. 2A is a schematic flowchart of a display control method provided in an embodiment of the present application, applied to an electronic device that includes two cameras, a first camera and a second camera, where the field angle of the first camera is larger than that of the second camera. As shown in the figure, the method includes:
201. shooting through the first camera to obtain a first image.
In the embodiments of the present application, the electronic device may include two cameras, a first camera and a second camera. For example, the first camera may be a wide-angle camera and the second camera a conventional field-of-view camera; or the first camera may be a rotatable camera, whose field angle can be understood as the maximum shooting field of view achievable during rotation, with the second camera a conventional field-of-view camera. The first camera may be a normally-open camera. Of course, in the embodiments of the present application the first camera may also be replaced by an ultrasonic sensor, whose principle is similar and is not repeated here. In addition, the first camera may be a front camera and the second camera a rotatable camera; the rotatable camera may initially be in a side, rear, or front camera state, and when the second camera is called for shooting, it may be rotated to the front camera state so that its field angle lies completely within the field angle range of the first camera.
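The "field angle completely within the field angle range" condition can be checked with a simple comparison, under the simplifying assumption that the two cameras share an optical axis (real modules also have a baseline offset between lenses, which this sketch ignores); the angle pairs are illustrative.

```python
def fov_contained(first_fov_deg, second_fov_deg):
    """Check that the second camera's field of view lies entirely within
    the first camera's. Each argument is a (horizontal_deg, vertical_deg)
    pair, assuming both cameras look along the same optical axis."""
    h1, v1 = first_fov_deg
    h2, v2 = second_fov_deg
    return h2 <= h1 and v2 <= v1
```

For example, a 120°x90° wide-angle first camera contains a 78°x60° conventional second camera, but not the other way around.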
In a possible example, step 201 of shooting with the first camera to obtain the first image may include the following steps:
11. Acquiring a target environment parameter;
12. Determining a target first shooting parameter corresponding to the target environment parameter according to a mapping relation between preset environment parameters and first shooting parameters;
13. Controlling the first camera to shoot according to the target first shooting parameter to obtain the first image.
In this embodiment, the environmental parameter may be at least one of the following: ambient brightness, ambient temperature, ambient humidity, geographical position, magnetic field interference intensity, and the like, which are not limited herein. The electronic device may integrate various sensors for detecting these environmental parameters, and the sensors may be at least one of the following: an ambient light sensor, a temperature sensor, a humidity sensor, a positioning sensor, a magnetic field detection sensor, and the like. For example, the ambient light sensor detects the ambient brightness, the temperature sensor detects the ambient temperature, the humidity sensor detects the ambient humidity, the positioning sensor detects the geographical position, and the magnetic field detection sensor detects the magnetic field interference intensity. The first shooting parameter may be at least one of the following: sensitivity (ISO), white balance parameter, focal length, exposure duration, working parameter of the fill light, and the like, which are not limited herein; the working parameter of the fill light may be at least one of the following: working current, working voltage, working power, fill brightness, fill area, fill angle, fill duration, and the like, which are not limited herein. When the first camera is a wide-angle camera, the first shooting parameters may further include distortion correction parameters, and the distortion correction parameters may include at least one of the following: distortion correction algorithm type, distortion correction degree control parameter, distortion region, and the like, which are not limited herein.
In a specific implementation, a mapping relation between preset environment parameters and first shooting parameters may be prestored in the electronic device. The electronic device may acquire the target environment parameter through the above sensors, determine the target first shooting parameter corresponding to the target environment parameter according to the mapping relation, and control the first camera to shoot according to the target first shooting parameter to obtain the first image. In this way, a captured image suited to the environment can be obtained, which improves subsequent face recognition efficiency.
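The lookup in steps 11 to 13 can be sketched as a simple preset table. This is an illustrative sketch only: the brightness buckets, ISO values, exposure durations, and fill-light flags below are assumptions for demonstration, not values specified in this application.

```python
# Preset mapping: (lower, upper) ambient brightness bounds in lux
# -> first shooting parameters. All concrete values are illustrative.
PRESET_FIRST_SHOOTING_PARAMS = [
    ((0, 50),      {"iso": 800, "exposure_ms": 66, "fill_light": True}),
    ((50, 500),    {"iso": 200, "exposure_ms": 33, "fill_light": False}),
    ((500, 10000), {"iso": 100, "exposure_ms": 16, "fill_light": False}),
]

def target_first_shooting_params(ambient_lux):
    """Return the target first shooting parameter for the measured brightness."""
    for (lo, hi), params in PRESET_FIRST_SHOOTING_PARAMS:
        if lo <= ambient_lux < hi:
            return params
    # Outside every bucket: fall back to the last preset entry.
    return PRESET_FIRST_SHOOTING_PARAMS[-1][1]
```

In practice the key could equally be a tuple of several environment parameters (temperature, humidity, location), with the same lookup structure.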
202. Performing face detection on the first image to obtain the number of faces.
In the embodiment of the application, the electronic device may perform face detection on the first image and count the detected faces to obtain the number of faces. In a specific implementation, the electronic device may input the first image into a first preset neural network model to obtain the number of faces, where the first preset neural network model may be at least one of the following: a convolutional neural network model, a recurrent neural network model, a fully-connected neural network model, a spiking neural network model, and the like, which are not limited herein. Taking a CNN as an example, the first preset neural network model may be obtained by modifying a MobileNet + SSD network model, with a feature pyramid added to improve face detection precision, so that the algorithm balances both accuracy and speed.
203. Hiding preset display content when the number of faces is not 1.
The preset display content may be set by the user or defaulted by the system, and may be designated content whose specific type is at least one of the following: pictures, video, audio, text, icons, links, and the like, which are not limited herein. Specifically, the designated content may be private content, important content, or other content specified by the user or the system; it may contain a predefined keyword, or may be content in a preset display area, such as a push message triggered by an application program. Alternatively, the preset display content may correspond to an importance level identifier indicating an importance level higher than a preset level, where the preset level may be preset by the user or defaulted by the system.
In a specific implementation, when the number of faces is not 1, the electronic device may hide part or all of the preset display content, for example: making the preset display content transparent; blurring it, such as applying a mosaic or scrambled characters; shrinking it, italicizing it, or converting it into another font; deepening the background color of the area where it is located, or adjusting that background color to be similar to the content itself (the color difference before and after adjustment being smaller than a preset color resolution); or keeping the area where it is located in a screen-off state, and the like, which are not limited herein.
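The core decision of step 203 can be sketched in a few lines. This is a minimal illustration, assuming the hiding "modes" are simply named after the processing options listed above; which mode is applied in a real device is not specified here.

```python
# Illustrative sketch of step 203: hide whenever the detected face count
# is not exactly 1 (0 faces or 2+ faces both count as a privacy risk).
def hide_decision(num_faces, modes=("blur",)):
    """Return the hiding actions to apply, or an empty tuple to keep showing."""
    if num_faces != 1:          # 0 viewers, or more than one viewer
        return modes            # apply the configured hiding processing
    return ()                   # exactly one viewer: keep content visible
```

A caller would apply each returned mode (transparency, mosaic, screen-off, etc.) to the preset display content.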
In one possible example, the step 203 of hiding the preset display content may include the following steps:
31. Determining, among the faces corresponding to the number of faces, a second target face closest to the electronic device;
32. Determining a target relative parameter between the second target face and the preset display content, wherein the relative parameter includes a second relative angle between the eyes in the second target face and the preset display content, or a target distance between the eyes in the second target face and the preset display content;
33. Determining a target hiding processing parameter corresponding to the target relative parameter according to a mapping relation between preset relative parameters and hiding processing parameters;
34. Hiding the preset display content according to the target hiding processing parameter.
Wherein the relative parameter may comprise at least one of: relative angle, relative distance. The concealment process parameters may include at least one of: a hiding location, a hiding duration, a display processing parameter, etc., without limitation. The hidden position can be any area of the display screen or any memory area, and the hidden time length can be set by a user or default by a system. The display processing parameter may be at least one of: a font processing parameter, a picture processing parameter, a background processing parameter, etc., which are not limited herein, the font processing parameter may be at least one of: shrinking, replacing, deleting, etc., without limitation. The picture processing parameter may be at least one of: scale-down, blur, transparency, etc., the background processing parameter may be at least one of: color processing parameters, brightness processing parameters, etc., and are not limited herein.
In a specific implementation, the electronic device may further include a distance sensor, and may perform ranging through image recognition, face positioning, and the distance sensor to determine, among the faces corresponding to the number of faces, the second target face closest to the electronic device. It may then determine the target relative parameter between the second target face and the preset display content, where the target relative parameter may include a second relative angle between the eyes in the second target face and the preset display content, and/or a target distance between the eyes in the second target face and the preset display content. Furthermore, the electronic device may prestore a mapping relation between preset relative parameters and hiding processing parameters, determine the target hiding processing parameter corresponding to the target relative parameter according to this mapping relation, and hide the preset display content according to the target hiding processing parameter. In this way, others can be prevented from peeping at important information, and the security of the electronic device is improved.
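Steps 31 to 34 again reduce to a preset-table lookup, this time keyed by the relative parameter. The sketch below uses the target distance as the relative parameter; the distance thresholds and the choice of hiding action per bucket are illustrative assumptions, not values from this application.

```python
# Preset mapping: maximum distance (metres, illustrative) -> hiding parameters.
# A closer unauthorized viewer gets a stronger hiding action.
PRESET_HIDING_PARAMS = [
    (0.5,  {"action": "screen_off"}),   # very close viewer: strongest hiding
    (1.5,  {"action": "mosaic"}),       # medium distance: blur/mosaic
    (10.0, {"action": "shrink_font"}),  # far away: mild hiding
]

def target_hiding_params(target_distance_m):
    """Map the target relative parameter (a distance) to hiding parameters."""
    for max_dist, params in PRESET_HIDING_PARAMS:
        if target_distance_m <= max_dist:
            return params
    return PRESET_HIDING_PARAMS[-1][1]  # beyond all buckets: mildest hiding
```

The same structure works when the key is the second relative angle, or a (angle, distance) pair.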
In one possible example, after the step 202, the following steps may be further included:
A1. When the number of faces is 1, detecting whether the number of faces changes within a preset time period;
A2. When the number of faces does not change, calling the second camera to shoot to obtain a second image, and performing face recognition on the second image to obtain a face recognition result;
A3. When the face recognition result meets a preset requirement, displaying the preset display content.
The preset time period may be set by the user or defaulted by the system, and may be, for example, 5 s or 10 s, which is not limited herein. The preset requirement may likewise be set by the user or defaulted by the system, for example that the face recognition result is that the owner's face, or another face specified by the owner, is detected. The second camera may or may not be a normally-open camera.
In a specific implementation, when the number of faces is 1, the electronic device may detect whether the number of faces changes within the preset time period. When the number of faces does not change, it is likely that the owner wants to view the electronic device, so the second camera may be activated and called to take a picture, obtaining a second image on which face recognition is performed. Specifically, a preset face template may be prestored in the electronic device; the preset face template may be the owner's face or the face of another person designated by the owner. The face in the second image may then be matched against the preset face template to obtain a target matching value, and the face recognition result may be determined according to the target matching value: when the target matching value is greater than a preset matching value, the face recognition result may be determined to be successful; when the target matching value is less than or equal to the preset matching value, the face recognition result may be determined to be failed. For example, when the face recognition result indicates the owner's face, the preset display content may be displayed, so that when the owner views information with no other people around, the preset display content is shown to the owner; conversely, when the face recognition result indicates that the owner's face is not recognized, the preset display content may be hidden. In this way, the information on the electronic device can be protected.
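The decision logic of steps A1 to A3 can be sketched as follows, assuming the face count is sampled several times across the preset time period and that the matching score from recognition is a value in [0, 1]. The preset matching value 0.8 is an assumed threshold, not one given in this application.

```python
PRESET_MATCHING_VALUE = 0.8  # assumed threshold for a successful match

def display_decision(face_counts, target_matching_value):
    """face_counts: face numbers sampled over the preset time period.
    target_matching_value: similarity of the second image's face to the
    preset face template (owner or owner-designated face)."""
    if any(n != 1 for n in face_counts):
        return "hide"                   # count changed, or was never exactly 1
    if target_matching_value > PRESET_MATCHING_VALUE:
        return "show"                   # recognition succeeded: display content
    return "hide"                       # recognition failed: keep content hidden
```

The "hide" branch corresponds to step 203; the "show" branch corresponds to step A3.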
In a specific implementation, the electronic device may perform face extraction on the second image to obtain a third target face, and may then match the third target face with the preset face template. The electronic device may input the second image into a second preset neural network model to obtain the third target face, where the second preset neural network model may be at least one of the following: a convolutional neural network model, a recurrent neural network model, a fully-connected neural network model, a spiking neural network model, and the like, which are not limited herein. For example, when the second preset neural network model is a convolutional neural network model, a miniaturized network design may be adopted in the embodiment of the present application (the backbone of the network structure being modified based on ShuffleNetV2), and the network may further be accelerated by removing some convolutional layers and replacing others with depthwise + pointwise convolutions, so that both accuracy and speed are noticeably improved.
For example, as shown in fig. 2B, taking a mobile phone as an example, in the embodiment of the present application, the first camera may be a wide-angle camera that is also a normally-open (always-on, AON) camera. Its larger field angle increases the range of face detection, ensuring that anyone attempting to peep is captured by the camera so that the number of people can be recognized; if multiple people are watching the screen, they are regarded as peepers and the privacy information is shielded. If only one person is watching the screen, the second camera is called to perform owner recognition. The field angle of the second camera is smaller than that of the AON camera, and its output can be processed by an ISP, so the captured picture is clear and face recognition on it is highly accurate. It can thus be further judged whether the user watching the phone is the owner; if not the owner, the privacy information is shielded, otherwise the information can be displayed.
For another example, as shown in fig. 2C, the electronic device may call the first camera to perform face detection to obtain the number of faces, and hide the preset display content when the number of faces is 0 or greater than or equal to 2. When the number of faces is 1 and has not changed within a period of time, the second camera is called to perform face recognition: the preset display content is displayed if the owner is recognized, and hidden otherwise. If the number of faces is not 1, the process ends.
Further, in a possible example, before the step a1, the following steps may be further included:
A4. Performing face extraction on the first image to obtain a first target face;
A5. Determining a first relative angle between the first target face and the preset display content;
A6. When the first relative angle is within a preset angle range, executing step A1.
The preset angle range may be set by the user or defaulted by the system, and may be part or all of the field angle range of the second camera.
In a specific implementation, the electronic device may perform face extraction on the first image to obtain the first target face, and may further determine the first relative angle between the first target face and the preset display content. When the first relative angle is within the preset angle range, step A1 is executed; otherwise, the subsequent steps need not be executed. In this way, it can be ensured that the second camera can clearly capture the owner's face, which helps to recognize the owner accurately.
In a possible example, the step a2, invoking the second camera to capture a second image, may include the following steps:
A21. Performing image quality evaluation on the first image to obtain a target image quality evaluation value;
A22. Determining a target adjustment coefficient corresponding to the target image quality evaluation value according to a preset mapping relation between image quality evaluation values and adjustment coefficients;
A23. Determining a target second shooting parameter corresponding to the target environment parameter according to a mapping relation between preset environment parameters and second shooting parameters;
A24. Adjusting the target second shooting parameter according to the target adjustment coefficient to obtain an adjusted target second shooting parameter;
A25. Calling the second camera to shoot according to the adjusted target second shooting parameter to obtain the second image.
The second shooting parameter may be at least one of the following: sensitivity (ISO), white balance parameter, focal length, exposure duration, working parameter of the fill light, and the like, which are not limited herein; the working parameter of the fill light may be at least one of the following: working current, working voltage, working power, fill brightness, fill area, fill angle, fill duration, and the like, which are not limited herein.
In a specific implementation, the electronic device may perform image quality evaluation on the first image by using at least one image quality evaluation value index to obtain a target image quality evaluation value, where the image quality evaluation value index may be at least one of: information entropy, mean square error, mean gradient, edge preservation, sharpness, etc., and is not limited herein.
Further, the electronic device may prestore a mapping relation between preset image quality evaluation values and adjustment coefficients, and may then determine the target adjustment coefficient corresponding to the target image quality evaluation value according to this mapping relation. In the embodiment of the present application, the value range of the adjustment coefficient may be between -1 and 1, for example between -0.15 and 0.15. The electronic device may also prestore a mapping relation between preset environment parameters and second shooting parameters (for the description of the environment parameters, refer to the above steps); it may then determine the target second shooting parameter corresponding to the target environment parameter according to this mapping relation, and adjust the target second shooting parameter according to the target adjustment coefficient to obtain the adjusted target second shooting parameter, the specific calculation being as follows:
the adjusted second target shooting parameter is (1+ target adjustment coefficient) second target shooting parameter
Further, the electronic device may call the second camera to shoot according to the adjusted target second shooting parameter, thereby obtaining the second image. On one hand, shooting parameters of the second camera suited to the environment are obtained; on the other hand, the shooting parameters of the second camera can be adjusted according to the image quality of the first camera, so that an image suited to the environment and of better quality can be captured, which is beneficial to accurate face recognition.
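The adjustment formula above can be worked through in a short sketch. Assumptions: the shooting parameters are held in a dictionary, every numeric parameter is scaled by the same coefficient (the application does not say which parameters are scaled), and the coefficient is constrained to [-1, 1] as stated in the text.

```python
def adjust_second_shooting_params(params, adjustment_coefficient):
    """Apply: adjusted parameter = (1 + target adjustment coefficient) * parameter.
    Scaling every numeric parameter uniformly is an illustrative assumption."""
    if not -1.0 <= adjustment_coefficient <= 1.0:
        raise ValueError("adjustment coefficient must lie in [-1, 1]")
    return {k: v * (1 + adjustment_coefficient) for k, v in params.items()}
```

For example, with a coefficient of 0.1 an ISO of 200 becomes 220 and an exposure of 20 ms becomes 22 ms.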
Further, the step a21 of evaluating the image quality of the first image to obtain the target image quality evaluation value may include the following steps:
A211. Performing multi-scale feature decomposition on the first image to obtain a low-frequency feature component image and a high-frequency feature component image;
A212. Dividing the low-frequency feature component image into a plurality of regions;
A213. Determining an information entropy corresponding to each of the plurality of regions to obtain a plurality of information entropies;
A214. Determining an average information entropy and a target mean square error according to the plurality of information entropies;
A215. Determining a target fine-tuning adjustment coefficient corresponding to the target mean square error;
A216. Adjusting the average information entropy according to the target fine-tuning adjustment coefficient to obtain a target information entropy;
A217. Determining a first evaluation value corresponding to the target information entropy according to a preset mapping relation between information entropies and evaluation values;
A218. Acquiring the target first shooting parameter corresponding to the first image;
A219. Determining a target low-frequency weight corresponding to the target first shooting parameter according to a mapping relation between preset shooting parameters and low-frequency weights, and determining a target high-frequency weight according to the target low-frequency weight;
A220. Determining a target feature point distribution density according to the high-frequency feature component image;
A221. Determining a second evaluation value corresponding to the target feature point distribution density according to a preset mapping relation between feature point distribution densities and evaluation values;
A222. Performing a weighting operation according to the first evaluation value, the second evaluation value, the target low-frequency weight, and the target high-frequency weight to obtain the target image quality evaluation value of the first image.
In a specific implementation, the electronic device may perform multi-scale feature decomposition on the first image by using a multi-scale decomposition algorithm to obtain the low-frequency feature component image and the high-frequency feature component image, where the multi-scale decomposition algorithm may be at least one of the following: pyramid transform, wavelet transform, contourlet transform, shearlet transform, and the like, which are not limited herein. In a specific implementation, the number of low-frequency feature component images may be 1, and the number of high-frequency feature component images may be 1 or more. Further, the low-frequency feature component image may be divided into a plurality of regions of the same or different sizes. The low-frequency feature component image reflects the main features of the image and occupies most of its energy, while the high-frequency feature component image reflects the detail information of the image.
Further, the electronic device may determine the information entropy corresponding to each of the plurality of regions to obtain a plurality of information entropies, and determine the average information entropy and the target mean square error according to the plurality of information entropies. The information entropy can reflect the amount of image information to a certain extent, and the mean square error can reflect the stability of the image information. A mapping relation between preset mean square errors and fine-tuning adjustment coefficients may be prestored in the electronic device, and the target fine-tuning adjustment coefficient corresponding to the target mean square error may then be determined according to this mapping relation.
Further, the electronic device may adjust the average information entropy according to the target fine-tuning adjustment coefficient to obtain the target information entropy, where target information entropy = (1 + target fine-tuning adjustment coefficient) × average information entropy. The electronic device may prestore a mapping relation between preset information entropies and evaluation values, and may then determine the first evaluation value corresponding to the target information entropy according to this mapping relation.
In addition, the electronic device may acquire the target first shooting parameter corresponding to the first image (described above and not repeated here). The electronic device may also prestore a mapping relation between preset shooting parameters and low-frequency weights, determine the target low-frequency weight corresponding to the target first shooting parameter according to this mapping relation, and determine the target high-frequency weight according to the target low-frequency weight, where target low-frequency weight + target high-frequency weight = 1.
Further, the electronic device may determine the target feature point distribution density from the high-frequency feature component image, where target feature point distribution density = total number of feature points / area of the high-frequency feature component image. The electronic device may also prestore a mapping relation between preset feature point distribution densities and evaluation values, and determine the second evaluation value corresponding to the target feature point distribution density according to this mapping relation. Finally, a weighting operation is performed according to the first evaluation value, the second evaluation value, the target low-frequency weight, and the target high-frequency weight to obtain the target image quality evaluation value of the first image, specifically as follows:
target image quality evaluation value = first evaluation value × target low-frequency weight + second evaluation value × target high-frequency weight
In this way, the image quality evaluation is performed based on two dimensions, the low-frequency component and the high-frequency component of the first image, so that an evaluation parameter suited to the shooting environment, namely the target image quality evaluation value, can be obtained accurately.
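A simplified sketch of the entropy-and-weighting idea in steps A211 to A222 follows. It is not the full pipeline: it skips the multi-scale decomposition and the preset mapping tables, assumes regions are given directly as lists of 8-bit pixel values, uses entropy normalized by the 8-bit maximum as the first evaluation value, and takes the detail (high-frequency) score and the low-frequency weight as inputs.

```python
import math

def region_entropy(pixels):
    """Shannon information entropy (bits) of a list of 8-bit pixel values."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in hist.values())

def quality_score(regions, detail_score, low_weight=0.7):
    """Weighted quality value from per-region entropies and a detail score.
    The normalization and the default weight 0.7 are illustrative assumptions."""
    entropies = [region_entropy(r) for r in regions]
    avg_entropy = sum(entropies) / len(entropies)
    entropy_score = min(avg_entropy / 8.0, 1.0)  # 8 bits = max entropy for uint8
    high_weight = 1.0 - low_weight               # weights sum to 1, as in the text
    return entropy_score * low_weight + detail_score * high_weight
```

A flat region (all pixels equal) has entropy 0, while a region of 16 distinct values has entropy 4 bits, so richer low-frequency content raises the score as intended.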
Further, in a possible example, when the number of the human faces is greater than or equal to 2, before hiding the preset display content in step 203, the method may further include the following steps:
B1. Determining, by using an eyeball tracking technology, the gaze point of each eye on the display screen of the electronic device among the faces corresponding to the number of faces, to obtain N gaze points, where N is a natural number;
B2. When at least one of the N gaze points is within a preset range of the preset display content, executing the step of hiding the preset display content.
The preset range may be set by the user or defaulted by the system. In a specific implementation, the electronic device may determine, by using an eyeball tracking technology, the gaze point of each eye on the display screen among the faces corresponding to the number of faces, obtaining N gaze points, where N is a natural number less than or equal to the number of faces; since the eyes do not necessarily gaze at the display screen, N may be as small as 0 and at most equal to the number of faces. When at least one of the N gaze points is within the preset range of the preset display content, it is very likely that someone is peeping at the preset display content, and the content may be hidden at this time; otherwise, no one is peeping at the preset display content, and it may continue to be displayed. This is beneficial to protecting the preset display content and improves the security of the electronic device.
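The gaze-point test of steps B1 and B2 can be sketched as a point-in-rectangle check. Representing the preset display content's area as an axis-aligned rectangle with a margin for the "preset range" is an illustrative assumption.

```python
def gaze_triggers_hide(gaze_points, content_rect, margin=0):
    """content_rect: (x, y, w, h) of the preset display content in screen pixels;
    gaze_points: iterable of (x, y) gaze coordinates from eyeball tracking;
    margin: the preset range extending the content area."""
    x, y, w, h = content_rect
    for gx, gy in gaze_points:
        if x - margin <= gx <= x + w + margin and y - margin <= gy <= y + h + margin:
            return True   # at least one viewer is gazing at the content: hide it
    return False          # no gaze point in range: keep displaying
```

With N = 0 (no one looking at the screen at all), the function returns False and the content stays visible, matching the text's "otherwise" branch.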
It can be seen that the display control method described in the embodiment of the present application is applied to an electronic device including two cameras, a first camera and a second camera, the field angle of the first camera being larger than that of the second camera. Shooting is performed through the first camera to obtain a first image, face detection is performed on the first image to obtain the number of faces, and preset display content is hidden when the number of faces is not 1. In this way, when multiple faces or 0 faces are detected, information hiding is achieved, an anti-peeping function is realized, and the security of the electronic device is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of a display control method provided in an embodiment of the present application, and is applied to an electronic device, where the electronic device includes two cameras, the two cameras include a first camera and a second camera, and a field angle of the first camera is greater than a field angle of the second camera, as shown in the figure, the method includes:
301. Shooting through the first camera to obtain a first image.
302. Performing face detection on the first image to obtain the number of faces.
303. Hiding preset display content when the number of faces is not 1.
304. When the number of faces is 1, detecting whether the number of faces changes within a preset time period.
305. When the number of faces does not change, calling the second camera to shoot to obtain a second image, and performing face recognition on the second image to obtain a face recognition result.
306. When the face recognition result meets a preset requirement, displaying the preset display content.
For the detailed description of steps 301 to 306, reference may be made to the corresponding steps of the display control method described in fig. 2A, which are not described herein again.
It can be seen that the display control method described in the embodiment of the present application can not only hide information when multiple faces or 0 faces are detected, but can also display information when 1 face is detected and recognized as the owner's, thereby implementing an anti-peeping function and improving the security of the electronic device.
Referring to fig. 4 in keeping with the embodiments shown in fig. 2A and fig. 3, as shown in the figure, fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the electronic device includes a dual camera including a first camera and a second camera, a field angle of the first camera is larger than a field angle of the second camera, and the program includes instructions for:
shooting through the first camera to obtain a first image;
carrying out face detection on the first image to obtain the number of faces;
and hiding preset display content when the number of the human faces is not 1.
It can be seen that the electronic device described in the embodiment of the present application includes two cameras, a first camera and a second camera, the field angle of the first camera being larger than that of the second camera. Shooting is performed through the first camera to obtain a first image, face detection is performed on the first image to obtain the number of faces, and preset display content is hidden when the number of faces is not 1. In this way, when multiple faces or 0 faces are detected, information hiding is achieved, an anti-peeping function is realized, and the security of the electronic device is improved.
In one possible example, the program further includes instructions for performing the steps of:
when the number of the human faces is 1, detecting whether the number of the human faces changes within a preset time period;
when the number of the human faces is not changed, calling the second camera to shoot to obtain a second image, and carrying out human face recognition on the second image to obtain a human face recognition result;
and when the face recognition result meets a preset requirement, displaying the preset display content.
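A sketch of this stability-then-recognition gate follows. It is illustrative only: the sampling of face counts over the preset time period is represented as a list, and the camera/recognizer objects, the `owner` identifier, and their method names are invented assumptions.

```python
# Hypothetical sketch: when exactly one face is seen, confirm the count
# stays 1 over the preset period, then capture with the narrow-field
# second camera and show the content only if the face matches the owner.

def maybe_show_content(count_samples, second_camera, recognizer, screen,
                       owner_id="owner"):
    # count_samples: face counts observed during the preset time period
    if any(c != 1 for c in count_samples):      # the number of faces changed
        return False
    second_image = second_camera.capture()      # call the second camera
    result = recognizer.recognize(second_image) # face recognition result
    if result == owner_id:                      # meets the preset requirement
        screen.show_preset_content()
        return True
    return False
```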
In one possible example, the program further includes instructions for performing the steps of:
carrying out face extraction on the first image to obtain a first target face;
determining a first relative angle between the first target face and the preset display content;
and when the first relative angle is within a preset angle range, executing the step of detecting whether the number of the human faces is changed within a preset time period.
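One way this angle gate could look is sketched below. The geometry is entirely an assumption (the patent does not specify how the relative angle is computed): 2-D positions and `atan2` stand in for whatever face-pose estimate a real implementation would use, and the preset angle range is an invented default.

```python
# Hypothetical sketch of the relative-angle gate: estimate the angle
# between the extracted first target face and the preset display content,
# and proceed to the stability check only if it is in a preset range.

import math

def first_relative_angle(face_xy, content_xy):
    # angle (degrees) of the line from the face position to the content
    dx = content_xy[0] - face_xy[0]
    dy = content_xy[1] - face_xy[1]
    return math.degrees(math.atan2(dy, dx))

def within_preset_range(angle, lo=-30.0, hi=30.0):
    # lo/hi delimit the preset angle range (illustrative values)
    return lo <= angle <= hi
```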
In one possible example, the program includes instructions for performing, in the capturing by the first camera, the first image:
acquiring target environment parameters;
determining a target first shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the first shooting parameter;
and controlling the first camera to shoot according to the target first shooting parameter to obtain a first image.
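The "mapping relation between a preset environment parameter and the first shooting parameter" can be pictured as a lookup table. The bucket names and parameter values below are invented for illustration; a real device would ship a calibrated table.

```python
# Hypothetical preset mapping from an environment parameter (e.g. ambient
# brightness bucket) to the first camera's target shooting parameter.

PRESET_FIRST_PARAMS = {
    "bright": {"iso": 100,  "exposure_ms": 8},
    "indoor": {"iso": 400,  "exposure_ms": 16},
    "dark":   {"iso": 1600, "exposure_ms": 33},
}

def target_first_shooting_parameter(environment):
    # look up the target first shooting parameter for the target environment
    return PRESET_FIRST_PARAMS[environment]
```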
In one possible example, in the aspect of invoking the second camera to capture the second image, the program includes instructions for performing the following steps:
performing image quality evaluation on the first image to obtain a target image quality evaluation value;
determining a target adjustment coefficient corresponding to the target image quality evaluation value according to a preset mapping relation between the image quality evaluation value and the adjustment coefficient;
determining a target second shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the second shooting parameter;
adjusting the target second shooting parameter according to the target adjustment coefficient to obtain an adjusted target second shooting parameter;
and calling the second camera to shoot according to the adjusted target second shooting parameter to obtain the second image.
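The adjustment chain above can be sketched as follows. The quality thresholds, the coefficients, and the idea of scaling ISO and exposure are all illustrative assumptions; the point is only the shape of the logic (quality score → adjustment coefficient → adjusted parameter).

```python
# Hypothetical sketch: score the first image's quality, map the score to a
# target adjustment coefficient, and scale the second camera's preset
# shooting parameter before capturing the second image.

def adjustment_coefficient(quality_score):
    # preset mapping: worse first-image quality -> stronger boost
    if quality_score >= 0.8:
        return 1.0
    if quality_score >= 0.5:
        return 1.2
    return 1.5

def adjusted_second_parameter(preset_param, quality_score):
    k = adjustment_coefficient(quality_score)
    return {"iso": round(preset_param["iso"] * k),
            "exposure_ms": round(preset_param["exposure_ms"] * k)}
```

The rationale sketched here is that if the wide-field first image was poor (dim scene, motion blur), the second camera's recognition shot is given more exposure headroom up front instead of wasting a capture.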
In one possible example, in the hiding preset display content, the program includes instructions for performing the steps of:
determining a second target face closest to the electronic equipment in the faces corresponding to the number of the faces;
determining a target relative parameter between the second target face and the preset display content, wherein the target relative parameter comprises a second relative angle between the human eye in the second target face and the preset display content, or a target distance between the human eye in the second target face and the preset display content;
determining a target hiding processing parameter corresponding to the target relative parameter according to a mapping relation between a preset relative parameter and the hiding processing parameter;
and hiding the preset display content according to the target hiding processing parameter.
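A sketch of the hiding step is given below. The distance values, the blur-radius mapping, and the face record layout are illustrative assumptions; the patent only requires that some relative parameter (angle or distance) maps, via a preset relation, to a hiding processing parameter.

```python
# Hypothetical sketch: pick the second target face nearest the device,
# take its relative distance to the preset display content, and map it to
# a hiding processing parameter such as a blur radius.

def nearest_face(faces):
    # each face record: {"distance": metres from the device, ...}
    return min(faces, key=lambda f: f["distance"])

def hiding_parameter(relative_distance):
    # preset mapping: a closer onlooker warrants stronger hiding
    if relative_distance < 0.5:
        return {"blur_radius": 24}
    if relative_distance < 1.5:
        return {"blur_radius": 12}
    return {"blur_radius": 6}
```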
In one possible example, when the number of faces is greater than or equal to 2, the program further includes instructions for performing the following steps:
determining the fixation point of each human eye on the display screen of the electronic equipment in the human faces corresponding to the number of the human faces by utilizing an eyeball tracking technology to obtain N fixation points, wherein N is a natural number;
and when at least one of the N fixation points is in the preset range of the preset display content, the step of hiding the preset display content is executed.
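The gaze-point check can be sketched as a simple hit test. The screen-rectangle layout and the preset margin are illustrative assumptions; a real implementation would get the N gaze points from an eye-tracking pipeline.

```python
# Hypothetical sketch: hide the content if any of the N gaze points falls
# within a preset range (margin) of the preset display content's rectangle.

def gaze_hits_content(gaze_points, content_rect, margin=20):
    # content_rect: (x, y, width, height) in screen pixels
    x, y, w, h = content_rect
    for gx, gy in gaze_points:
        if (x - margin) <= gx <= (x + w + margin) and \
           (y - margin) <= gy <= (y + h + margin):
            return True     # at least one gaze point is near the content
    return False
```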
The embodiment of the present application provides a display control apparatus, which may be an electronic device. Specifically, the display control apparatus is configured to perform the steps performed by the electronic device in the above display control method. The display control apparatus provided in this embodiment of the present application may include modules corresponding to the respective steps.
In the embodiment of the present application, the display control apparatus may be divided into functional modules according to the foregoing method example. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The division of the modules in this embodiment of the present application is illustrative and is merely a division of logical functions; there may be other division manners in actual implementation.
In a case where each functional module is divided in correspondence with each function, fig. 5 shows a schematic diagram of a possible structure of the display control apparatus 500 according to the above embodiments. As shown in fig. 5, the apparatus 500 is applied to an electronic device including a dual camera, the dual camera includes a first camera and a second camera, and a field angle of the first camera is larger than a field angle of the second camera. The apparatus 500 includes: a photographing unit 501, an image recognition unit 502, and a display control unit 503, wherein,
the shooting unit 501 is configured to shoot through the first camera to obtain a first image;
the image recognition unit 502 is configured to perform face detection on the first image to obtain the number of faces;
the display control unit 503 is configured to hide preset display content when the number of the human faces is not 1.
It can be seen that the display control apparatus described in this embodiment of the present application is applied to an electronic device, where the electronic device includes a dual camera including a first camera and a second camera, and a field angle of the first camera is larger than a field angle of the second camera. Shooting is performed through the first camera to obtain a first image, face detection is performed on the first image to obtain the number of faces, and when the number of faces is not 1, preset display content is hidden. In this way, information is hidden when multiple faces or no face is detected, an anti-peeping function is implemented, and the security of the electronic device is improved.
In one possible example, the apparatus is further configured to perform the following functions:
the image recognition unit 502 is further configured to detect whether the number of faces changes within a preset time period when the number of faces is 1;
the shooting unit 501 is further configured to call the second camera to shoot when the number of the faces is not changed, so as to obtain a second image, and perform face recognition on the second image, so as to obtain a face recognition result;
the display control unit 503 is further configured to display the preset display content when the face recognition result meets a preset requirement.
In one possible example, the apparatus is further configured to perform the following functions:
the image recognition unit 502 is further configured to perform face extraction on the first image to obtain a first target face; determining a first relative angle between the first target face and the preset display content; and when the first relative angle is in a preset angle range, executing the step of detecting whether the number of the human faces changes within a preset time period.
In a possible example, in terms of obtaining the first image by the capturing with the first camera, the capturing unit 501 is specifically configured to:
acquiring target environment parameters;
determining a target first shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the first shooting parameter;
and controlling the first camera to shoot according to the target first shooting parameter to obtain a first image.
In a possible example, in terms of invoking the second camera to perform shooting to obtain a second image, the shooting unit 501 is specifically configured to:
performing image quality evaluation on the first image to obtain a target image quality evaluation value;
determining a target adjustment coefficient corresponding to the target image quality evaluation value according to a preset mapping relation between the image quality evaluation value and the adjustment coefficient;
determining a target second shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the second shooting parameter;
adjusting the target second shooting parameter according to the target adjustment coefficient to obtain an adjusted target second shooting parameter;
and calling the second camera to shoot according to the adjusted target second shooting parameter to obtain the second image.
In one possible example, in terms of hiding the preset display content, the display control unit 503 is specifically configured to:
determining a second target face closest to the electronic equipment in the faces corresponding to the number of the faces;
determining a target relative parameter between the second target face and the preset display content, wherein the target relative parameter comprises a second relative angle between the human eye in the second target face and the preset display content, or a target distance between the human eye in the second target face and the preset display content;
determining a target hiding processing parameter corresponding to the target relative parameter according to a mapping relation between a preset relative parameter and the hiding processing parameter;
and hiding the preset display content according to the target hiding processing parameter.
In one possible example, when the number of faces is greater than or equal to 2, the apparatus is further configured to perform the following functions:
the image recognition unit 502 is further configured to determine, by using an eye tracking technology, gaze points of each human eye on a display screen of the electronic device in the human faces corresponding to the number of the human faces, to obtain N gaze points, where N is a natural number;
the display control unit 503 is further configured to perform the step of hiding the preset display content when at least one of the N gaze points is within a preset range of the preset display content.
It can be understood that the functions of each program module of the display control apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application further provide a chip, where the chip includes a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs some or all of the steps described in the electronic device in the above method embodiments.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application further provide a computer program product, where the computer program product includes a computer program operable to cause a computer to perform some or all of the steps described in the electronic device in the above method embodiments. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may be composed of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc Read Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, the functions may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)).
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (9)

1. A display control method, applied to an electronic device, wherein the electronic device comprises a dual camera, the dual camera comprises a first camera and a second camera, both cameras are front cameras with consistent shooting directions, and a field angle of the first camera is larger than a field angle of the second camera, the method comprising:
shooting through the first camera to obtain a first image;
carrying out face detection on the first image to obtain the number of faces;
hiding preset display content when the number of the human faces is not 1;
wherein the method further comprises:
when the number of the human faces is 1, detecting whether the number of the human faces changes within a preset time period;
when the number of the human faces is not changed, calling the second camera to shoot to obtain a second image, and carrying out human face recognition on the second image to obtain a human face recognition result;
and when the face recognition result meets a preset requirement, displaying the preset display content.
2. The method of claim 1, further comprising:
carrying out face extraction on the first image to obtain a first target face;
determining a first relative angle between the first target face and the preset display content;
and when the first relative angle is within a preset angle range, executing the step of detecting whether the number of the human faces is changed within a preset time period.
3. The method of claim 1 or 2, wherein said capturing through said first camera to obtain a first image comprises:
acquiring target environment parameters;
determining a target first shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the first shooting parameter;
and controlling the first camera to shoot according to the target first shooting parameter to obtain a first image.
4. The method of claim 3, wherein said invoking the second camera to capture a second image comprises:
performing image quality evaluation on the first image to obtain a target image quality evaluation value;
determining a target adjustment coefficient corresponding to the target image quality evaluation value according to a preset mapping relation between the image quality evaluation value and the adjustment coefficient;
determining a target second shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the second shooting parameter;
adjusting the target second shooting parameter according to the target adjustment coefficient to obtain an adjusted target second shooting parameter;
and calling the second camera to shoot according to the adjusted target second shooting parameter to obtain the second image.
5. The method according to claim 1 or 2, wherein hiding the preset display content comprises:
determining a second target face closest to the electronic equipment in the faces corresponding to the number of the faces;
determining a target relative parameter between the second target face and the preset display content, wherein the target relative parameter comprises a second relative angle between the human eye in the second target face and the preset display content, or a target distance between the human eye in the second target face and the preset display content;
determining a target hiding processing parameter corresponding to the target relative parameter according to a mapping relation between a preset relative parameter and the hiding processing parameter;
and hiding the preset display content according to the target hiding processing parameter.
6. The method according to claim 1 or 2, wherein when the number of faces is greater than or equal to 2, the method further comprises:
determining the fixation point of each human eye on the display screen of the electronic equipment in the human faces corresponding to the number of the human faces by utilizing an eyeball tracking technology to obtain N fixation points, wherein N is a natural number;
and when at least one of the N fixation points is in the preset range of the preset display content, the step of hiding the preset display content is executed.
7. A display control apparatus, applied to an electronic device, wherein the electronic device comprises a dual camera, the dual camera comprises a first camera and a second camera, both cameras are front cameras with consistent shooting directions, and a field angle of the first camera is larger than a field angle of the second camera, the apparatus comprising: a photographing unit, an image recognition unit, and a display control unit, wherein,
the shooting unit is used for shooting through the first camera to obtain a first image;
the image identification unit is used for carrying out face detection on the first image to obtain the number of faces;
the display control unit is used for hiding preset display contents when the number of the human faces is not 1;
wherein the apparatus is further specifically configured to:
when the number of the human faces is 1, detecting whether the number of the human faces changes within a preset time period;
when the number of the human faces is not changed, calling the second camera to shoot to obtain a second image, and carrying out human face recognition on the second image to obtain a human face recognition result;
and when the face recognition result meets a preset requirement, displaying the preset display content.
8. An electronic device comprising a processor, a memory, a communication interface, a dual camera comprising a first camera and a second camera, a field of view of the first camera being greater than a field of view of the second camera, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
9. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
CN202010765828.1A 2020-07-31 2020-07-31 Display control method, device and storage medium Active CN111866393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010765828.1A CN111866393B (en) 2020-07-31 2020-07-31 Display control method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010765828.1A CN111866393B (en) 2020-07-31 2020-07-31 Display control method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111866393A CN111866393A (en) 2020-10-30
CN111866393B true CN111866393B (en) 2022-01-14

Family

ID=72952745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010765828.1A Active CN111866393B (en) 2020-07-31 2020-07-31 Display control method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111866393B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464679B (en) * 2020-04-09 2021-05-25 Oppo广东移动通信有限公司 Electronic equipment and monitoring method thereof
CN112541450A (en) * 2020-12-18 2021-03-23 Oppo广东移动通信有限公司 Context awareness function control method and related device
CN112702530B (en) * 2020-12-29 2023-04-25 维沃移动通信(杭州)有限公司 Algorithm control method and electronic equipment
CN116382896B (en) * 2023-02-27 2023-12-19 荣耀终端有限公司 Calling method of image processing algorithm, terminal equipment, medium and product

Citations (5)

Publication number Priority date Publication date Assignee Title
CN104992096A (en) * 2015-06-30 2015-10-21 广东欧珀移动通信有限公司 Data protection method and mobile terminal
CN106156663A (en) * 2015-04-14 2016-11-23 小米科技有限责任公司 A kind of terminal environments detection method and device
CN106682540A (en) * 2016-12-06 2017-05-17 上海斐讯数据通信技术有限公司 Intelligent peep-proof method and device
CN107862265A (en) * 2017-10-30 2018-03-30 广东欧珀移动通信有限公司 Image processing method and related product
CN110866236A (en) * 2019-11-20 2020-03-06 Oppo广东移动通信有限公司 Private picture display method, device, terminal and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN106446634A (en) * 2016-09-26 2017-02-22 维沃移动通信有限公司 Method for privacy protection of mobile terminal and mobile terminal
CN106778381B (en) * 2016-11-30 2020-06-05 宇龙计算机通信科技(深圳)有限公司 Important information processing method and terminal
WO2018121750A1 (en) * 2016-12-29 2018-07-05 Kwan Kin Keung Kevin Monitoring and tracking system, method, article and device
CN108108604A (en) * 2017-12-14 2018-06-01 珠海格力电器股份有限公司 A kind of method and apparatus for controlling screen display
CN110119684A (en) * 2019-04-11 2019-08-13 华为技术有限公司 Image-recognizing method and electronic equipment

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN106156663A (en) * 2015-04-14 2016-11-23 小米科技有限责任公司 A kind of terminal environments detection method and device
CN104992096A (en) * 2015-06-30 2015-10-21 广东欧珀移动通信有限公司 Data protection method and mobile terminal
CN106682540A (en) * 2016-12-06 2017-05-17 上海斐讯数据通信技术有限公司 Intelligent peep-proof method and device
CN107862265A (en) * 2017-10-30 2018-03-30 广东欧珀移动通信有限公司 Image processing method and related product
CN110866236A (en) * 2019-11-20 2020-03-06 Oppo广东移动通信有限公司 Private picture display method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN111866393A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111866393B (en) Display control method, device and storage medium
CN109635542B (en) Biological identification interaction method, graphical interaction interface and related device
CN107409180B (en) Electronic device with camera module and image processing method for electronic device
CN108776568B (en) Webpage display method, device, terminal and storage medium
CN109753159B (en) Method and apparatus for controlling electronic device
KR102360453B1 (en) Apparatus And Method For Setting A Camera
KR102620138B1 (en) Method for Outputting Screen and the Electronic Device supporting the same
EP3009816B1 (en) Method and apparatus for adjusting color
KR102649197B1 (en) Electronic apparatus for displaying graphic object and computer readable recording medium
EP3264744A2 (en) Electronic device and image capturing method thereof
KR102246762B1 (en) Method for content adaptation based on ambient environment in electronic device and the electronic device thereof
US20160349936A1 (en) Method for outputting screen and electronic device supporting the same
US10078441B2 (en) Electronic apparatus and method for controlling display displaying content to which effects is applied
KR102317820B1 (en) Method for processing image and electronic device supporting the same
CN111885265A (en) Screen interface adjusting method and related device
CN107925738B (en) Method and electronic device for providing image
KR102636243B1 (en) Method for processing image and electronic device thereof
KR20170019823A (en) Method for processing image and electronic device supporting the same
CN111767554B (en) Screen sharing method and device, storage medium and electronic equipment
CN108965981B (en) Video playing method and device, storage medium and electronic equipment
KR102359276B1 (en) Method and apparatus for controlling white balance function of electronic device
WO2021018169A1 (en) Privacy protection method for electronic device, and electronic device
KR102588524B1 (en) Electronic apparatus and operating method thereof
CN113888159B (en) Opening method of function page of application and electronic equipment
CN112541450A (en) Context awareness function control method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant