CN115883959A - Picture content control method for privacy protection and related product - Google Patents
- Publication number
- CN115883959A CN115883959A CN202310107036.9A CN202310107036A CN115883959A CN 115883959 A CN115883959 A CN 115883959A CN 202310107036 A CN202310107036 A CN 202310107036A CN 115883959 A CN115883959 A CN 115883959A
- Authority
- CN
- China
- Prior art keywords
- picture
- camera
- acquisition area
- target user
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Studio Devices (AREA)
Abstract
The application provides a picture content control method for privacy protection and a related product, applied to an intelligent camera device. The method includes: determining a first acquisition area according to the current shooting scene; detecting whether a target user is in the first acquisition area; if yes, detecting whether a non-target user exists in a third acquisition area, and if no non-target user exists, outputting the acquired original picture, otherwise executing a corresponding privacy protection operation; if the target user is not in the first acquisition area, judging whether a privacy leakage risk exists in the current shooting scene, and if no risk exists, outputting the acquired original picture, otherwise executing a corresponding privacy protection operation. This improves the accuracy of subsequent privacy-leak scene identification, protects the privacy of the user to the maximum extent, improves the intelligence of the intelligent camera device, and optimizes its internal performance.
Description
Technical Field
The application belongs to the technical field of image communication in the Internet industry, and particularly relates to a picture content control method for privacy protection and a related product.
Background
At present, camera devices are widely used in shooting scenarios such as remote classes, video conferences, and live broadcasts. However, existing camera devices cannot divide out a reasonable main activity area for capturing a target user according to the specific shooting scenario, so they cannot identify, or cannot accurately identify, privacy-leak scenarios. As a result, the device cannot perform privacy protection processing on the collected original picture, the device lacks intelligence, and the privacy and safety of users are affected.
Disclosure of Invention
The application provides a picture content control method for privacy protection and a related product, so as to improve the accuracy of identifying privacy disclosure scenes and the intelligence of camera equipment, optimize the internal performance of the camera equipment and protect the privacy safety of users.
In a first aspect, an embodiment of the present application provides a picture content control method for privacy protection, which is applied to an intelligent camera device, where the intelligent camera device includes a camera for collecting a picture and a privacy protection cover disposed at a front end of the camera, the privacy protection cover is in an open state, the intelligent camera device is connected to a terminal device, and the method includes:
determining a first acquisition area according to a current shooting scene, where the first acquisition area indicates a main activity area of a target user in the current shooting scene and is a partial area of a second acquisition area, and the second acquisition area indicates the area corresponding to the overall picture acquired by the camera, that is, the overall image picture that the camera can acquire under the constraint of a preset focal length;
detecting whether a target user is in a first acquisition area, wherein the target user refers to a user who inputs face information into the intelligent camera equipment;
if yes, detecting whether a non-target user exists in a third acquisition area, wherein the non-target user refers to a user who does not enter face information into the intelligent camera equipment, and the third acquisition area refers to other areas except the first acquisition area in the second acquisition area;
if not, sending the original picture acquired by the camera to the terminal equipment so that the terminal equipment displays the original picture;
if yes, the following operations are executed: blurring first picture content corresponding to an area where the non-target user is located in an original picture acquired by the camera to obtain a first image picture, and sending the first image picture to the terminal equipment so that the terminal equipment can display the first image picture;
outputting a first reminding message, wherein the first reminding message is used for indicating that the non-target user is in the third acquisition area;
and if a blurring removal instruction input by the target user is detected, canceling blurring processing on the first picture content;
if not, judging whether privacy disclosure risks exist in the current shooting scene;
if so, performing the following operations: blurring second picture content in an original picture acquired by the camera to obtain a second image picture, and sending the second image picture to the terminal equipment so that the terminal equipment displays the second image picture;
and outputting a second reminding message, wherein the second reminding message is used for indicating the privacy disclosure risk;
and if the blurring removal instruction is detected, canceling the blurring processing on the second picture content; if the blurring removal instruction is not detected within a first preset time, outputting the second reminding message again; and if the blurring removal instruction is not detected within a second preset time, switching the privacy protection cover to a closed state, where the second preset time is longer than the first preset time;
and if no privacy disclosure risk exists, sending the original picture acquired by the camera to the terminal device so that the terminal device displays the original picture.
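The branching logic of the first aspect can be condensed into a short control-flow sketch. This is an illustrative simplification, not the disclosed implementation; the callable parameters (`detect_target_user`, `detect_non_target`, `has_privacy_risk`) are hypothetical stand-ins for the device's detection modules:

```python
def control_picture(scene, detect_target_user, detect_non_target, has_privacy_risk):
    """Return which picture to output: 'original', 'blur_first', or 'blur_second'.

    Sketch of the first-aspect flow; the callables stand in for the
    device's detection modules.
    """
    # The first acquisition area is derived from the current shooting scene.
    first_area = scene["first_area"]
    if detect_target_user(first_area):
        # Target user present: check the third area (second area minus first).
        if detect_non_target():
            return "blur_first"   # blur the non-target user's region and remind
        return "original"
    # No target user in the first area: fall back to a scene-level risk judgment.
    if has_privacy_risk(scene):
        return "blur_second"      # blur the second picture content and remind
    return "original"
```

The three return values correspond to the three terminal outcomes of the claim: output the original picture, blur the first picture content, or blur the second picture content.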
In a second aspect, an embodiment of the present application provides a picture content control apparatus for privacy protection, which is applied to an intelligent camera device, where the intelligent camera device includes a camera for acquiring a picture and a privacy protection cover disposed at a front end of the camera, the privacy protection cover is in an open state, the intelligent camera device is connected to a terminal device, and the apparatus includes: a determining unit, a first detecting unit, a second detecting unit, a first sending unit, a first executing unit, a judging unit, a second executing unit and a second sending unit, wherein,
the determining unit is configured to determine a first acquisition area according to the current shooting scene, where the first acquisition area indicates a main activity area of the target user in the current shooting scene and is a partial area of a second acquisition area, and the second acquisition area indicates the area corresponding to the overall picture acquired by the camera, that is, the overall image picture that the camera can acquire under the constraint of a preset focal length;
the first detection unit is used for detecting whether a target user is in a first acquisition area, wherein the target user is a user who inputs face information in the intelligent camera equipment;
if yes, detecting whether a non-target user exists in a third acquisition area through the second detection unit, wherein the non-target user refers to a user who does not enter face information into the intelligent camera equipment, and the third acquisition area refers to other areas except the first acquisition area in the second acquisition area;
if not, sending the original picture acquired by the camera to the terminal equipment through the first sending unit so that the terminal equipment can display the original picture;
if yes, the following operations are executed through the first execution unit: blurring first picture content corresponding to an area where the non-target user is located in an original picture acquired by the camera to obtain a first image picture, and sending the first image picture to the terminal equipment so that the terminal equipment can display the first image picture; outputting a first reminding message, wherein the first reminding message is used for indicating that the non-target user is in the third acquisition area; and if a blurring removal instruction input by the target user is detected, canceling blurring processing on the first picture content;
if not, judging whether the privacy leakage risk exists in the current shooting scene through the judging unit;
if yes, the following operations are executed by the second execution unit: blurring second picture content in the original picture acquired by the camera to obtain a second image picture, and sending the second image picture to the terminal device so that the terminal device displays the second image picture; outputting a second reminding message, where the second reminding message is used to indicate the privacy disclosure risk; and if the blurring removal instruction is detected, canceling the blurring processing of the second picture content; if the blurring removal instruction is not detected within a first preset time, outputting the second reminding message again; and if the blurring removal instruction is not detected within a second preset time, switching the privacy protection cover to a closed state, where the second preset time is longer than the first preset time;
and if no privacy disclosure risk exists, sending the original picture acquired by the camera to the terminal device through the second sending unit so that the terminal device displays the original picture.
In a third aspect, an embodiment of the present application provides an intelligent camera device, including a camera, a privacy protection cover disposed at the front end of the camera, a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, where the programs include instructions for performing the steps in the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program/instruction is stored, where the computer program/instruction, when executed by a processor, implements the steps in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application.
In the embodiment of the application, the intelligent camera device firstly determines a first acquisition area for indicating a main activity area of a target user in a current camera scene according to the current camera scene, and then detects whether the target user is in the first acquisition area; if yes, detecting whether a non-target user exists in the third acquisition area, if yes, performing blurring processing and subsequent reminding operation on the first picture content in the acquired original picture by the intelligent camera equipment, and if not, directly outputting the acquired original picture; if not, judging whether privacy disclosure risks exist in the current shooting scene, if so, performing blurring processing and subsequent reminding operation on second picture content in the collected original picture, and if not, directly outputting the collected original picture. Therefore, the intelligent camera device can determine the first acquisition area indicating the main activity range of the user according to the current camera scene, the accuracy of follow-up privacy disclosure scene identification is improved, privacy protection processing is carried out on the image content in the acquired original image based on the first acquisition area, the privacy safety of the user is protected to the maximum extent, the intelligence of the intelligent camera device is improved, and the internal performance of the intelligent camera device is optimized.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a block diagram of a privacy protecting system according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of an intelligent image capturing apparatus according to an embodiment of the present application;
fig. 1c is a block diagram of another intelligent image capturing apparatus provided in the embodiment of the present application;
fig. 2 is a schematic flowchart of a picture content control method for privacy protection according to an embodiment of the present application;
fig. 3a is an exemplary schematic diagram of a collection area of an intelligent camera device according to an embodiment of the present disclosure;
fig. 3b is a schematic diagram of an example of a collection area of another intelligent camera device according to the embodiment of the present application;
fig. 4a is a block diagram of functional units of a picture content control apparatus for privacy protection according to an embodiment of the present application;
fig. 4b is a block diagram illustrating functional units of another picture content control apparatus for privacy protection according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1a, fig. 1a is a block diagram of a privacy protection system according to an embodiment of the present disclosure. As shown in fig. 1a, the privacy protection system 10 includes an intelligent camera device 11 and a terminal device 12 connected by wire or wirelessly. The intelligent camera device 11 is configured to collect an original picture, decide whether to perform blurring processing on it based on the related processing results, and send the unprocessed or processed picture to the terminal device 12, and the terminal device 12 is configured to display that picture. The terminal device 12 may be an intelligent device such as a notebook computer or a personal computer. It should be noted that in an actual application scene, after the intelligent camera device outputs an unprocessed or processed picture, the picture is sent to a server corresponding to the specific shooting scene, and the server then processes and forwards the picture to all terminal devices in that scene. For example, in a live-broadcast scene, the intelligent camera device sends the unprocessed or processed picture to the live server corresponding to the live platform; the live server further processes the transmitted picture (for example, beautifying and special effects) and then distributes the further processed picture to the terminal devices for display. However, the embodiments of the present application focus on the technical improvements on the intelligent camera device side, so to simplify the interaction flow and save description space, the participation of the scene-specific server is not considered here.
Referring to fig. 1b, fig. 1b is a schematic structural diagram of an intelligent camera device according to an embodiment of the present disclosure. As shown in fig. 1b, the intelligent camera device includes: (1) a camera main body, (2) a privacy protection cover, (3) a camera, (4) a right cover plate, (5) a left cover plate, (6) a base, (7) a base fixing clamp, (8) a universal rotating shaft, and (9) an annular indicator light. The camera is used for collecting pictures. The privacy protection cover is disposed at the front end of the camera and, when in the closed state, completely shields the camera's field of view. The privacy protection cover, the camera, the right cover plate, and the left cover plate together form the camera main body. The camera main body, the universal rotating shaft, and the base are fixedly connected, and the camera main body can rotate in all directions through the universal rotating shaft so as to follow-shoot the target user and ensure that the target user remains within the first acquisition area during use. The base fixing clamp is used to fix the intelligent camera device, for example on the display screen of the terminal device. The annular indicator light indicates whether the intelligent camera device is in a usable state: after the intelligent camera device is connected to the terminal device, the annular indicator light turns on and the device is usable.
Referring to fig. 1c, fig. 1c is a block diagram of another intelligent image capturing apparatus according to an embodiment of the present disclosure. As shown in fig. 1c, the smart camera device 11 may include one or more of the following components: a processor 111, a memory 112 coupled to the processor 111, wherein the memory 112 may store one or more computer programs that may be configured to implement the methods described in the embodiments above when executed by the one or more processors 111.
The memory 112 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM) and may be used to store instructions, programs, code sets, or instruction sets. The memory 112 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the smart camera device 11 in use. It is understood that the smart camera device 11 may include more or fewer structural elements than shown in the structural block diagram, which is not limited herein.
A picture content control method for privacy protection provided by an embodiment of the present application is described below.
Referring to fig. 2, fig. 2 is a schematic flowchart of a picture content control method for privacy protection according to an embodiment of the present application, where the method is applied to the intelligent image capturing apparatus 11, and as shown in fig. 2, the method includes:
Step 201, determining a first acquisition area according to the current shooting scene.

The first acquisition area indicates a main activity area of the target user in the current shooting scene and is a partial area of a second acquisition area; the second acquisition area indicates the area corresponding to the overall picture acquired by the camera, that is, the overall image picture that the camera can acquire under the constraint of a preset focal length. For example, please refer to fig. 3a, which is an exemplary schematic diagram of an acquisition area of an intelligent camera device according to an embodiment of the present application. As shown in fig. 3a, the first acquisition area 301 is a partial area of the second acquisition area 30 determined by the intelligent camera device according to the current shooting scene, and the second acquisition area 30 is the area corresponding to the maximum picture acquired by the camera when the shooting angle is fixed; in other words, the second acquisition area 30 indicates the shooting range of the camera. The specific shape of the second acquisition area 30 is determined by the acquisition field of view of the camera, and the rectangle in fig. 3a is only an example; similarly, the specific shape of the first acquisition area 301 may be determined by the intelligent camera device according to the current shooting scene, and the rectangle in fig. 3a is likewise only an example.
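As a rough model of the areas just described, one can treat them as axis-aligned rectangles (an assumption: fig. 3a notes the rectangular shapes are only examples) and classify where a detected point falls. All names here are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle: (x, y) top-left corner, width, height."""
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def classify_point(second: Rect, first: Rect, px: int, py: int) -> str:
    """Place a detected point into the first, third, or outside region.

    Per the disclosure, the third acquisition area is everything in the
    second area that is not in the first area.
    """
    if not second.contains(px, py):
        return "outside"
    if first.contains(px, py):
        return "first"
    return "third"
```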
Step 202, detecting whether a target user is in the first acquisition area.

If yes, go to step 203; if not, go to step 204.
The target user is a user who inputs face information into the intelligent camera equipment, and the detection mode can be that the face information of the user in the first acquisition area is captured through infrared scanning and image acquisition and is compared with information in a face information base which is input in advance, so that whether the user is the target user or not is determined.
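The comparison against the pre-entered face information base could be sketched as a nearest-embedding check. The embedding representation and the 0.6 distance threshold are hypothetical; the disclosure does not specify the matching algorithm:

```python
import math

def is_target_user(captured, enrolled, threshold=0.6):
    """Decide target vs non-target by distance to enrolled face embeddings.

    `captured` is the embedding of the face seen in the first acquisition
    area; `enrolled` maps user names to pre-entered embeddings. Both the
    vector form and the threshold are illustrative stand-ins for whatever
    face-matching pipeline the device actually uses.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return any(dist(captured, e) < threshold for e in enrolled.values())
```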
Step 203, detecting whether a non-target user exists in a third acquisition area.

If yes, go to step 205; if not, go to step 207.
The non-target user refers to a user who has not entered face information into the intelligent camera device, and the third acquisition area refers to the area of the second acquisition area other than the first acquisition area. For example, please refer to fig. 3b, which is a schematic diagram of an example of an acquisition area of another intelligent camera device according to an embodiment of the present application. As shown in fig. 3b, the third acquisition area 302 is the area of the second acquisition area 30 other than the first acquisition area 301. When the target user is in the first acquisition area 301, the intelligent camera device is in a working state. At this time, if a non-target user without pre-entered face information appears in the third acquisition area, there may be a risk of privacy disclosure and privacy protection is required; if no non-target user appears in the third acquisition area, the device is in a normal use state and privacy protection is not needed.
Step 204, judging whether a privacy leakage risk exists in the current shooting scene.
If yes, go to step 206; if not, go to step 207.
Step 205, blurring the first picture content corresponding to the area where the non-target user is located in the original picture acquired by the camera to obtain a first image picture, sending the first image picture to the terminal device, and outputting a first reminding message.

Outputting the first reminding message may mean that the processor sends the preset reminding text corresponding to the first reminding message to a voice module, and the voice module outputs the sound and amplifies it through a loudspeaker so that the target user can hear it; or a communication connection between the intelligent camera device and the mobile terminal of the target user is established in advance, and the first reminding message is then sent to the mobile terminal of the target user. For example, the first reminding message may be: "An unidentified person has been detected in the area and the picture has been blurred; please confirm whether to cancel the blurring effect." The blurring removal instruction may be voice data spoken directly by the target user to cancel the blurring effect, such as "cancel" or "switch to open mode"; in this case the voice data acquisition module of the intelligent camera device collects the voice data and, after responding, cancels the blurring processing on the specific picture content. Alternatively, the terminal device responds to a click operation of the target user, generates a blurring-cancellation request, and sends it to the intelligent camera device, which responds by canceling the blurring processing on the specific picture content. Once the intelligent camera device cancels the blurring processing, the picture content transmitted to the terminal device is the original picture without any blurred part, and the picture displayed by the terminal device is also the original picture.
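The region-restricted blurring applied to the first (or second) picture content can be sketched as a box blur over a rectangle of the frame. This pure-Python stand-in operating on a 2-D list of grayscale values is illustrative only; the disclosure does not fix a particular blurring algorithm:

```python
def blur_region(img, rect, k=1):
    """Box-blur the pixels of `img` (2-D list of ints) inside `rect`.

    rect = (x, y, w, h); k is the blur radius. A stand-in for whatever
    blurring the device applies to the first or second picture content;
    pixels outside the rectangle are left untouched.
    """
    x, y, w, h = rect
    out = [row[:] for row in img]          # copy so the original frame survives
    for r in range(y, y + h):
        for c in range(x, x + w):
            vals = [img[rr][cc]
                    for rr in range(max(0, r - k), min(len(img), r + k + 1))
                    for cc in range(max(0, c - k), min(len(img[0]), c + k + 1))]
            out[r][c] = sum(vals) // len(vals)
    return out
```

Canceling the blurring then simply means transmitting the unmodified original frame instead of the blurred copy.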
It can be understood that when the target user is in the first acquisition area, the normal progress of the target user's work in the shooting scene takes priority. Therefore, even if a non-target user appears in the first acquisition area, no privacy protection processing is performed on the first acquisition area; otherwise the normal progress of the target user's work would be affected.
Step 206, blurring the second picture content in the original picture acquired by the camera to obtain a second image picture, sending the second image picture to the terminal device, and outputting a second reminding message.

For the specific manner of outputting the second reminding message, refer to the implementation of outputting the first reminding message described above, which is not repeated here. For example, the second reminding message may be: "A privacy disclosure risk has been detected and the picture has been blurred; please confirm whether to cancel the blurring effect." The first preset time and the second preset time may be obtained from statistical analysis of historical empirical data or may be user-defined; for example, the first preset time may be 120 seconds and the second preset time 180 seconds. That is, after the second reminding message is output, if no blurring removal instruction from the target user is detected within 120 seconds, the second reminding message is output again as a reminder; if no blurring removal instruction is detected after 180 seconds, the privacy protection cover is closed so that the camera cannot acquire any picture content, protecting the privacy of the user to the maximum extent.
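The reminder/timeout escalation described here can be sketched as a small policy function. The 120 s and 180 s defaults mirror the example values in the text; the disclosure allows them to be user-configured instead:

```python
def escalate(elapsed_s, deblur_received, t1=120, t2=180):
    """Escalation policy after the second reminding message is issued.

    elapsed_s: seconds since the reminder; deblur_received: whether a
    blurring removal instruction has arrived. t1/t2 are the first and
    second preset times (t2 > t1).
    """
    if deblur_received:
        return "cancel_blur"
    if elapsed_s >= t2:
        return "close_privacy_cover"   # camera can no longer capture anything
    if elapsed_s >= t1:
        return "repeat_reminder"
    return "wait"
```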
Step 207, sending the original picture acquired by the camera to the terminal device so that the terminal device displays the original picture.
Therefore, in the embodiment of the application, the intelligent camera shooting device firstly determines a first acquisition area used for indicating a main activity area of a target user in a current camera shooting scene according to the current camera shooting scene, and then detects whether the target user is in the first acquisition area; if yes, detecting whether a non-target user exists in the third acquisition area, if yes, performing blurring processing and subsequent reminding operation on the first picture content in the acquired original picture by the intelligent camera equipment, and if not, directly outputting the acquired original picture; if not, judging whether privacy leakage risks exist in the current shooting scene, if so, performing blurring processing and subsequent reminding operation on second picture content in the collected original picture, and if not, directly outputting the collected original picture. Therefore, the intelligent camera device can determine the first acquisition area indicating the main activity range of the user according to the current camera scene, the accuracy of follow-up privacy disclosure scene identification is improved, privacy protection processing is carried out on the image content in the acquired original image based on the first acquisition area, the privacy safety of the user is protected to the maximum extent, the intelligence of the intelligent camera device is improved, and the internal performance of the intelligent camera device is optimized.
In one possible example, the determining a first acquisition region according to the current shooting scene includes: acquiring video playing software information corresponding to the current shooting scene from the terminal equipment; determining a scene type corresponding to the current shooting scene according to the video playing software information, wherein the scene type corresponding to the current shooting scene is any one of the following scene types: remote classroom scenes, video conference scenes, live scenes; and determining the first acquisition area according to the scene type corresponding to the current shooting scene.
Acquiring the video playing software information corresponding to the current shooting scene from the terminal device may mean sending a request message to the terminal device, which responds by sending the related information of the video playing software running on it to the intelligent camera device; the video playing software may be live-broadcast software or conference software. For example, the video playing software may be the Tencent Meeting application. In that case, the information of the target meeting, for example a meeting name of "XX class on line", may be acquired through the terminal device, and the intelligent camera device may determine, by combining the "Tencent Meeting" video playing software with the specific meeting name "XX class on line", that the scene type corresponding to the current shooting scene is a remote classroom scene, and then determine the first acquisition area according to the remote classroom scene.
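A minimal sketch of deriving the scene type from the software information might use keyword matching on the application name and session title. The keyword lists are invented for illustration; the disclosure only states that the scene type is determined from the video playing software information:

```python
def classify_scene(app_name: str, session_title: str) -> str:
    """Map video-playing-software info to one of the three scene types.

    Returns 'remote_classroom', 'live_broadcast', or 'video_conference'.
    The keywords are illustrative guesses, not part of the disclosure.
    """
    title = session_title.lower()
    if any(k in title for k in ("class", "lecture", "lesson")):
        return "remote_classroom"
    if "live" in app_name.lower():
        return "live_broadcast"
    return "video_conference"
```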
Therefore, in the example, the intelligent camera shooting device can determine the scene type corresponding to the camera shooting scene through the video playing software information corresponding to the current camera shooting scene, so that the corresponding first acquisition area can be further determined, the intelligence of the device is improved, the influence of privacy protection operation on a work task is reduced to the maximum extent, and the use experience of a user is optimized.
In one possible example, the scene type corresponding to the current shooting scene is a remote classroom scene or a video conference scene, and the determining the first acquisition area according to the scene type corresponding to the current shooting scene includes: detecting whether a target item exists within the second acquisition area, the target item comprising at least: a tablet, a projection screen; if not, determining a preset picture acquisition area as the first acquisition area; and if so, adjusting the preset picture acquisition area according to the target item to obtain the first acquisition area.
When the shooting scene is a remote classroom scene or a video conference scene, under general conditions the face or upper body of the speaker, i.e. the target user, is captured. In this case, the first acquisition area is a preset picture acquisition area, which may be the main activity area of a speaker in general remote classroom and video conference scenes obtained through big data statistics. However, in the above scenes a speaker may need to rely on an item to assist teaching or conference instruction. For example, a teacher may need to write on a tablet and play slides on a projection screen to assist teaching, and a conference speaker may need to write meeting matters on a tablet and play slides on a projection screen for the conference to proceed normally. In these cases, the main activity area of the target user cannot be limited to the user's own body; the target item, e.g. the tablet and the projection screen, needs to be included so that participants can view the information on it. Therefore, when a target item appears in the picture acquired by the camera, i.e. in the second acquisition area, the intelligent image capturing device adjusts the preset picture acquisition area according to the target item to obtain the first acquisition area, so that the target item is also within the first acquisition area. This prevents subsequent privacy protection operations from blocking the information on the target item and affecting the normal performance of the work task corresponding to the current shooting scene.
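The adjustment described above can be sketched as a bounding-box union; representing areas as rectangles and using a simple union rule are assumptions made for illustration only:

```python
def adjust_first_area(preset, items):
    """Expand the preset picture acquisition area so that every detected target
    item (e.g. a tablet or projection screen) also falls inside the first
    acquisition area. Rectangles are (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = preset
    for ix1, iy1, ix2, iy2 in items:
        # Grow the rectangle just enough to contain each item's bounding box.
        x1, y1 = min(x1, ix1), min(y1, iy1)
        x2, y2 = max(x2, ix2), max(y2, iy2)
    return (x1, y1, x2, y2)
```

With no items detected, the preset area is returned unchanged, matching the "if not" branch of the example.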
Therefore, in this example, when the scene type corresponding to the current shooting scene is a remote classroom scene or a video conference scene, the intelligent image capturing device can, by detecting whether a target item exists in the second acquisition area, determine a first acquisition area better adapted to the user's requirements, thereby improving the intelligence of the device, minimizing the influence of the privacy protection operation on the work task, and optimizing the user experience.
In one possible example, the scene type corresponding to the current shooting scene is a live broadcast scene, and the determining the first acquisition area according to the scene type corresponding to the current shooting scene includes: acquiring a live broadcast type corresponding to the live broadcast scene from the terminal device, wherein the live broadcast type comprises a moving live broadcast and a non-moving live broadcast; if the live broadcast type is the moving live broadcast, determining the first acquisition area according to the height of the target user; and if the live broadcast type is the non-moving live broadcast, determining a preset picture acquisition area as the first acquisition area.
The acquiring of the live broadcast type corresponding to the live broadcast scene from the terminal device may be implemented through the live broadcast software running on the terminal device, with the live broadcast type determined according to the category, content, and the like of the live broadcast room in which the target user is located in the live broadcast software. The intelligent image capturing device may then determine the first acquisition area specifically according to the different live broadcast types. In this example, the live broadcast types may be divided into a moving live broadcast and a non-moving live broadcast. When the live broadcast type of the target user is a non-moving live broadcast, for example a video chat live broadcast, what is shown to the audience is mostly the face or upper body of the target user. In this case, the first acquisition area is a preset picture acquisition area, which may be the main activity area of an anchor in general non-moving live broadcast scenes obtained through big data statistics. When the live broadcast type of the target user is a moving live broadcast, whether a fitness live broadcast or a dance live broadcast, most of the displayed content is the full-body lines or postures of the target user. Therefore, the picture content corresponding to the first acquisition area should include the whole body of the target user, and the intelligent image capturing device may determine the first acquisition area according to the height of the target user.
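The two branches above can be sketched as follows; the rectangle representation, the head/foot pixel coordinates, and the 10% margin are illustrative assumptions, not values given in the text:

```python
def first_acquisition_area(live_type, preset_area, head_y=None, foot_y=None):
    """For a non-moving live broadcast, use the preset picture acquisition area
    directly. For a moving (e.g. fitness or dance) live broadcast, size the area
    from the target user's height in the frame so the whole body stays inside.
    Areas are (x1, y1, x2, y2) rectangles in pixel coordinates."""
    if live_type == "non_moving":
        return preset_area
    # Moving live broadcast: derive vertical extent from the user's height,
    # with an assumed 10% margin above the head and below the feet.
    margin = int(0.1 * (foot_y - head_y))
    x1, _, x2, _ = preset_area
    return (x1, max(0, head_y - margin), x2, foot_y + margin)
```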
It should be noted that the above classification of live broadcast types and the corresponding processing scheme are only one scheme provided in the embodiments of the present application. Those skilled in the art, in combination with the embodiments of the present application, may easily conceive that the intelligent image capturing device may also classify live broadcasts in other ways, such as product-selling live broadcasts and non-selling live broadcasts, together with corresponding schemes for determining the first acquisition area.
Therefore, in this example, when the scene type corresponding to the current shooting scene is a live broadcast scene, the intelligent image capturing device can further determine, according to the different live broadcast types, a first acquisition area better adapted to the user's requirements, thereby improving the intelligence of the device, minimizing the influence of the privacy protection operation on the work task, and optimizing the user experience.
In one possible example, the determining whether there is a risk of privacy disclosure in the current imaging scene includes: if the target user is detected to be in the first acquisition area before the third preset time, detecting whether the target user returns to the first acquisition area within the fourth preset time; if so, determining that no privacy disclosure risk exists in the current shooting scene; and if not, determining that the privacy leakage risk exists in the current shooting scene.
The third preset time and the fourth preset time may be obtained through statistical analysis of historical empirical data, or may be set by the user. For example, the third preset time may be 30 seconds and the fourth preset time may be 60 seconds. That is, if the target user was detected in the first acquisition area within the last 30 seconds, the target user has only just left the field of view that the camera can acquire. The device then detects whether the user returns to the first acquisition area within 60 seconds to continue the work task corresponding to the current shooting scene. If so, there is no privacy disclosure risk; if not, there is a privacy disclosure risk and the picture needs to be blurred.
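A minimal sketch of this timer-based check, using the 30-second and 60-second example values from the text; the function signature and the treatment of the "not recently seen" case are assumptions for illustration:

```python
THIRD_PRESET_S = 30   # example value from the text
FOURTH_PRESET_S = 60  # example value from the text

def has_privacy_risk(seconds_since_user_left, user_returned_within_fourth):
    """If the target user was still in the first acquisition area less than
    THIRD_PRESET_S seconds ago, the device waits up to FOURTH_PRESET_S seconds
    for a return; only a failure to return is treated as a risk. Longer
    absences are handled by the other examples in the text and are treated
    here as a risk for simplicity."""
    if seconds_since_user_left <= THIRD_PRESET_S:
        return not user_returned_within_fourth
    return True
```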
As can be seen, in this example, for the case that the target user is not in the first acquisition area, the intelligent camera device may detect whether there is a possibility of privacy disclosure risk by presetting time, so that the intelligence of the device and the accuracy of privacy disclosure scene recognition are improved, and the privacy security of the user is protected to the greatest extent.
In one possible example, the second picture content refers to the entire picture content of the original picture collected by the camera.
When the target user does not return to the first acquisition area in time, various situations may occur in the original picture acquired by the camera. Since the target user cannot perceive the original picture in time, privacy disclosure is easily caused, and the work corresponding to the current shooting scene cannot proceed smoothly. The intelligent image capturing device may therefore execute the privacy protection operation to blur the second picture content. In this scene, the second picture content refers to the entire picture content of the original picture acquired by the camera, so that the privacy and security of the user can be protected to the greatest extent when the target user cannot perceive the specific situation of the original picture in time.
As can be seen, in this example, for the case where the target user leaves the first acquisition area briefly but does not return for a long time, the intelligent image capturing device blurs the entire picture content, so that the privacy and security of the user are protected to the greatest extent when the target user cannot perceive the specific situation of the original picture in time, and the intelligence of the device is improved.
In other possible examples, the determining whether there is a privacy disclosure risk in the current shooting scene includes: determining whether the target user is within the third acquisition area; if so, determining that the privacy disclosure risk exists in the current shooting scene; and if not, determining that the privacy disclosure risk does not exist in the current shooting scene. In this scene, the corresponding second picture content is the third acquisition area, or the entire picture content of the original picture acquired by the camera.
In this case, the target user does not exist in the first acquisition area but exists in the third acquisition area. The target user has not formally started the work task corresponding to the current shooting scene, and the activities performed by the target user in the third acquisition area are private and may carry a privacy disclosure risk, so the privacy protection operation needs to be performed. Specifically, the blurred second picture content may be the third acquisition area where the target user is located, or may be the entire picture content of the original picture acquired by the camera, thereby protecting the privacy and security of the target user to the greatest extent.
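The blurring of the second picture content can be sketched with a simple region-restricted mean (box) blur; a grayscale frame held as a list of lists, the kernel radius, and integer averaging are all illustrative assumptions (a real device would use an optimized image library):

```python
def blur_region(frame, region, k=1):
    """Apply a mean blur only inside `region` (x1, y1, x2, y2) of a grayscale
    frame. Passing the full frame extent blurs the entire original picture,
    as in the case where the second picture content is the whole picture."""
    x1, y1, x2, y2 = region
    out = [row[:] for row in frame]          # leave the original frame intact
    h, w = len(frame), len(frame[0])
    for y in range(y1, y2):
        for x in range(x1, x2):
            # Average the (2k+1) x (2k+1) neighborhood, clipped at the borders.
            vals = [frame[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out
```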
In one possible example, the method further comprises: when the ending voice or the ending action input by the target user is detected, judging whether the target user switches the program state of the video playing software corresponding to the current shooting scene into a stop state within a fifth preset time, wherein the program state of the video playing software comprises the stop state and an operating state; if so, switching the privacy protection cover to a closed state; if not, the following operations are executed: blurring all picture contents of the original picture acquired by the camera to obtain a third image picture; outputting a third reminding message, wherein the third reminding message is used for indicating the video playing software in the running state; and if the blurring removal instruction and the program state switching instruction are not detected within a sixth preset time, switching the privacy protection cover to a closed state, wherein the program state switching instruction is an instruction which is input by the target user and is used for switching the program state of the video playing software from an operating state to a stopped state.
The detected ending voice or ending action input by the target user may be an ending voice, such as "bye-bye" or "goodbye", received by the voice acquisition module of the intelligent image capturing device, or an ending action, such as a hand-waving action, captured by the camera. The determining whether the program state of the video playing software corresponding to the current shooting scene has been switched to the stopped state may be implemented by the intelligent image capturing device sending a detection request to the terminal device, and the terminal device detecting whether the video playing software in the background, such as the "Tencent Meeting" application, has been actively closed by the target user. If it has been closed, the program state of the video playing software has been switched to the stopped state, and the privacy protection cover is closed, thereby avoiding privacy disclosure caused by a hacker attack. If it has not been closed, then since the target user has input the ending voice or ending action, the intelligent image capturing device is probably temporarily no longer needed. In this case, all picture content of the original picture acquired by the camera is first blurred, preventing a picture leak caused by a hacker attack from endangering the user's privacy and security, and the user is reminded by the third reminding message. The third reminding message may specifically be "Video playing software that has not been closed was detected and the picture has been blurred; please confirm whether to cancel the blurring effect or close the video playing software". If neither a blurring removal instruction nor a program state switching instruction from the target user is detected within the sixth preset time, such as 60 seconds, the privacy protection cover is automatically switched to the closed state, so that the privacy and security of the user are protected to the maximum extent.
It can be seen that, in this example, after the target user finishes using the device, if the target user directly closes the video playing software, the intelligent image capturing device automatically closes the privacy protection cover. If the target user forgets to close the video playing software, privacy disclosure risks such as a hacker attack may remain, and the intelligent image capturing device then executes a final privacy protection operation, thereby protecting the privacy and security of the user to the greatest extent and improving the intelligence of the device.
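The end-of-session flow above can be sketched as a small decision function; the action labels and the boolean inputs (whether the software was closed within the fifth preset time, and whether a de-blurring or state-switch instruction arrived within the sixth preset time) are illustrative assumptions:

```python
def end_of_session_actions(software_stopped_in_time, instruction_within_sixth):
    """After an ending voice or gesture: if the video playing software was
    switched to the stopped state within the fifth preset time, close the
    privacy protection cover immediately. Otherwise blur the whole picture,
    output the third reminding message, and close the cover only if neither a
    blurring removal nor a program state switching instruction arrives within
    the sixth preset time."""
    if software_stopped_in_time:
        return ["close_cover"]
    actions = ["blur_all", "third_reminder"]
    if not instruction_within_sixth:
        actions.append("close_cover")  # last-resort privacy protection
    return actions
```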
In accordance with the foregoing embodiments, please refer to fig. 4a. Fig. 4a is a block diagram of functional units of an apparatus for controlling picture content for privacy protection provided in an embodiment of the present application, applied to an intelligent image capturing device 11. The apparatus 40 for controlling picture content for privacy protection includes: a determining unit 401, a first detection unit 402, a second detection unit 403, a first sending unit 404, a first execution unit 405, a judgment unit 406, a second execution unit 407, and a second sending unit 408. The determining unit 401 is configured to determine a first acquisition area according to a current shooting scene, where the first acquisition area is used to indicate a main activity area of a target user in the current shooting scene, the first acquisition area is a partial area of a second acquisition area, and the second acquisition area is used to indicate the area corresponding to the entire image picture that the camera can acquire under the constraint of a preset focal length. The first detection unit 402 is configured to detect whether the target user is in the first acquisition area, where the target user is a user who has entered face information in the intelligent image capturing device. If so, the second detection unit 403 detects whether a non-target user exists in a third acquisition area, where a non-target user is a user who has not entered face information in the intelligent image capturing device, and the third acquisition area is the area of the second acquisition area other than the first acquisition area. If no non-target user exists, the original picture acquired by the camera is sent to the terminal device through the first sending unit 404, so that the terminal device displays the original picture. If a non-target user exists, the following operations are executed by the first execution unit 405: blurring the first picture content corresponding to the area where the non-target user is located in the original picture acquired by the camera to obtain a first image picture, and sending the first image picture to the terminal device so that the terminal device displays the first image picture; outputting a first reminding message, where the first reminding message is used to indicate that the non-target user is in the third acquisition area; and, if a blurring removal instruction input by the target user is detected, canceling the blurring processing on the first picture content. If the target user is not in the first acquisition area, the judgment unit 406 judges whether a privacy disclosure risk exists in the current shooting scene. If a risk exists, the following operations are performed by the second execution unit 407: blurring the second picture content in the original picture acquired by the camera to obtain a second image picture, and sending the second image picture to the terminal device so that the terminal device displays the second image picture; outputting a second reminding message, where the second reminding message is used to indicate the privacy disclosure risk; if the blurring removal instruction is detected, canceling the blurring processing on the second picture content; if the blurring removal instruction is not detected within a first preset time, outputting the second reminding message again; and if the blurring removal instruction is not detected within a second preset time, switching the privacy protection cover to the closed state, where the second preset time is longer than the first preset time. If no privacy disclosure risk exists, the original picture acquired by the camera is sent to the terminal device through the second sending unit 408, so that the terminal device displays the original picture.
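The two-stage escalation performed by the second execution unit (repeat the reminder after the first preset time, close the cover after the longer second preset time) can be sketched as follows; the 20-second and 60-second presets and the action labels are assumptions for illustration:

```python
FIRST_PRESET_S = 20   # assumed value for illustration
SECOND_PRESET_S = 60  # assumed value; must exceed FIRST_PRESET_S

def blur_escalation(deblur_at=None):
    """Return the ordered action log for the second-picture-content flow:
    blur and remind; if no blurring removal instruction arrives within the
    first preset time, re-output the second reminder; if none arrives within
    the second preset time, switch the privacy protection cover to closed.
    `deblur_at` is the elapsed second at which a removal instruction arrived,
    or None if it never did."""
    log = ["blur_second_content", "second_reminder"]
    if deblur_at is not None and deblur_at <= FIRST_PRESET_S:
        log.append("cancel_blur")
        return log
    log.append("second_reminder")  # re-output after the first preset time
    if deblur_at is not None and deblur_at <= SECOND_PRESET_S:
        log.append("cancel_blur")
    else:
        log.append("close_cover")
    return log
```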
In one possible example, in the aspect of determining the first capturing region according to the current shooting scene, the determining unit 401 is specifically configured to: acquiring video playing software information corresponding to the current shooting scene from the terminal equipment; determining a scene type corresponding to the current shooting scene according to the video playing software information, wherein the scene type corresponding to the current shooting scene is any one of the following scene types: a remote classroom scene, a video conference scene, a live scene; and determining the first acquisition area according to the scene type corresponding to the current shooting scene.
In a possible example, the scene type corresponding to the current image capturing scene is a remote classroom scene or a video conference scene, and in the aspect of determining the first acquisition area according to the scene type corresponding to the current image capturing scene, the determining unit 401 is specifically configured to: detecting the presence of a target item within the second collection area, the target item comprising at least: a tablet, a projection screen; if not, determining a preset picture acquisition area as the first acquisition area; and if so, adjusting the preset picture acquisition area according to the target object to obtain the first acquisition area.
In a possible example, the scene type corresponding to the current shooting scene is a live broadcast scene, and in terms of determining the first acquisition area according to the scene type corresponding to the current shooting scene, the determining unit 401 is specifically configured to: acquiring a live broadcast type corresponding to the live broadcast scene from the terminal equipment, wherein the live broadcast type comprises a moving live broadcast and a non-moving live broadcast; if the live broadcast type is the sports type live broadcast, determining the first acquisition area according to the height of a target user; and if the live broadcast type is the non-moving live broadcast, determining a preset picture acquisition area as the first acquisition area.
In a possible example, in the aspect of determining whether there is a risk of privacy disclosure in the current imaging scene, the determining unit 406 is specifically configured to: if the target user is detected to be in the first acquisition area before the third preset time, detecting whether the target user returns to the first acquisition area within the fourth preset time; if so, determining that no privacy disclosure risk exists in the current shooting scene; and if not, determining that the privacy leakage risk exists in the current shooting scene.
In one possible example, the second picture content refers to the entire picture content of the original picture collected by the camera.
In one possible example, the screen content control apparatus for privacy protection 40 is further configured to: when the ending voice or the ending action input by the target user is detected, judging whether the target user switches the program state of the video playing software corresponding to the current shooting scene into a stop state within a fifth preset time, wherein the program state of the video playing software comprises the stop state and an operation state; if so, switching the privacy protection cover to a closed state; if not, the following operations are executed: blurring all picture contents of the original picture acquired by the camera to obtain a third image picture; outputting a third reminding message, wherein the third reminding message is used for indicating the video playing software in the running state; and if the blurring removal instruction and the program state switching instruction are not detected within a sixth preset time, switching the privacy protection cover to a closed state, wherein the program state switching instruction is an instruction which is input by the target user and is used for switching the program state of the video playing software from an operating state to a stopped state.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the method embodiment portion in the present application should be synchronously adapted to the apparatus embodiment portion, and is not described herein again.
In the case of using an integrated unit, as shown in fig. 4b, fig. 4b is a block diagram of functional units of another screen content control apparatus for privacy protection provided in an embodiment of the present application. In fig. 4b, the screen content control apparatus 41 for privacy protection includes: a processing module 412 and a communication module 411. The processing module 412 is used to control and manage the actions of the screen content control apparatus for privacy protection, for example, to perform the steps of the determining unit 401, the first detection unit 402, the second detection unit 403, the first sending unit 404, the first execution unit 405, the judgment unit 406, the second execution unit 407, and the second sending unit 408, and/or other processes for performing the techniques described herein. The communication module 411 is used to support interaction between the screen content control apparatus for privacy protection and other devices. As shown in fig. 4b, the screen content control apparatus for privacy protection may further include a storage module 413, which is used to store the program code and data of the screen content control apparatus for privacy protection.
The processing module 412 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 411 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 413 may be a memory.
All relevant contents of each scenario involved in the above method embodiments may be referred to the functional descriptions of the corresponding functional modules, and are not described here again. Each of the above screen content control apparatuses for privacy protection may execute the picture content control method for privacy protection described above with reference to fig. 2.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more collections of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
Embodiments of the present application also provide a computer storage medium, in which a computer program/instructions are stored, and when executed by a processor, implement part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods as set out in the above method embodiments.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; for example, the division of the cell is only a logic function division, and there may be another division manner in actual implementation; for example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a volatile memory, or a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications can be easily made by those skilled in the art without departing from the spirit and scope of the present invention, and it is within the scope of the present invention to include different functions, combination of implementation steps, software and hardware implementations.
Claims (10)
1. The picture content control method for privacy protection is applied to intelligent camera equipment, the intelligent camera equipment comprises a camera for collecting pictures and a privacy protection cover arranged at the front end of the camera, the privacy protection cover is in an open state, the intelligent camera equipment is connected with terminal equipment, and the method comprises the following steps:
determining a first acquisition area according to a current shooting scene, wherein the first acquisition area is used for indicating a main activity area of a target user in the current shooting scene, the first acquisition area is a partial area in a second acquisition area, the second acquisition area is used for indicating an area corresponding to an overall picture acquired by a camera, and the second acquisition area is used for indicating an area corresponding to an overall image picture which can be acquired by the camera under the constraint of a preset focal length;
detecting whether the target user is in the first acquisition area, wherein the target user refers to a user whose face information has been entered into the smart camera device;
if yes, detecting whether a non-target user exists in a third acquisition area, wherein a non-target user refers to a user whose face information has not been entered into the smart camera device, and the third acquisition area refers to the part of the second acquisition area other than the first acquisition area;
if not, sending the original picture acquired by the camera to the terminal device so that the terminal device displays the original picture;
if yes, performing the following operations: blurring first picture content, corresponding to the area where the non-target user is located, in the original picture acquired by the camera to obtain a first image picture, and sending the first image picture to the terminal device so that the terminal device displays the first image picture;
outputting a first reminder message, wherein the first reminder message indicates that the non-target user is in the third acquisition area;
and if a blurring removal instruction input by the target user is detected, canceling the blurring processing of the first picture content;
if not, judging whether a privacy leakage risk exists in the current shooting scene;
if so, performing the following operations: blurring second picture content in the original picture acquired by the camera to obtain a second image picture, and sending the second image picture to the terminal device so that the terminal device displays the second image picture;
outputting a second reminder message, wherein the second reminder message indicates the privacy leakage risk;
and if the blurring removal instruction is detected, canceling the blurring processing of the second picture content; if the blurring removal instruction is not detected within a first preset time, outputting the second reminder message again; if the blurring removal instruction is not detected within a second preset time, switching the privacy protection cover to a closed state, wherein the second preset time is longer than the first preset time;
and if no privacy leakage risk exists, sending the original picture acquired by the camera to the terminal device so that the terminal device displays the original picture.
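The branch structure recited in claim 1 can be summarized as a small decision function. This is an illustrative sketch only, assuming boolean detection results are already available; the function name and the action strings are invented here and are not part of the patent:

```python
def picture_control(target_in_first_area: bool,
                    non_target_in_third_area: bool,
                    privacy_risk: bool) -> str:
    """Return the action the smart camera device takes for one frame."""
    if target_in_first_area:
        if non_target_in_third_area:
            # Blur only the region occupied by the non-target user and
            # output the first reminder message.
            return "blur_first_content_and_remind"
        # No non-target user present: forward the original picture.
        return "send_original_picture"
    if privacy_risk:
        # Blur the second picture content and output the second reminder.
        return "blur_second_content_and_remind"
    return "send_original_picture"
```

A frame where the target user is present alone thus passes through unblurred, while either of the two risk branches triggers blurring plus a reminder.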
2. The method of claim 1, wherein determining the first acquisition area according to the current shooting scene comprises:
acquiring, from the terminal device, video playing software information corresponding to the current shooting scene;
determining a scene type corresponding to the current shooting scene according to the video playing software information, wherein the scene type is any one of the following: a remote classroom scene, a video conference scene, or a live broadcast scene;
and determining the first acquisition area according to the scene type corresponding to the current shooting scene.
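The software-to-scene mapping of claim 2 can be sketched as a simple lookup table. The software names below are invented examples, not identifiers from the patent:

```python
# Hypothetical mapping from video-playing-software information to a
# scene type; unknown software yields None rather than a guessed type.
SCENE_BY_SOFTWARE = {
    "classroom_app": "remote_classroom",
    "meeting_app": "video_conference",
    "live_app": "live_broadcast",
}

def scene_type(software_info: str):
    """Resolve the scene type for the current shooting scene."""
    return SCENE_BY_SOFTWARE.get(software_info)
```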
3. The method of claim 2, wherein the scene type corresponding to the current shooting scene is a remote classroom scene or a video conference scene, and determining the first acquisition area according to the scene type comprises:
detecting whether a target item is present within the second acquisition area, the target item comprising at least: a tablet, a projection screen;
if not, determining a preset picture acquisition area as the first acquisition area;
and if so, adjusting the preset picture acquisition area according to the target item to obtain the first acquisition area.
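One way to picture the area adjustment of claim 3 is a bounding-box union: grow the preset area just enough to cover each detected target item. The `(x1, y1, x2, y2)` rectangle representation and the union rule are assumptions for illustration; the patent does not specify the adjustment formula:

```python
def adjust_acquisition_area(preset, items):
    """Grow the preset rectangle to the bounding box of itself plus all
    detected target items (e.g. a tablet or a projection screen)."""
    if not items:
        # No target item detected: keep the preset area (claim 3, "if not").
        return preset
    x1, y1, x2, y2 = preset
    for ix1, iy1, ix2, iy2 in items:
        x1, y1 = min(x1, ix1), min(y1, iy1)
        x2, y2 = max(x2, ix2), max(y2, iy2)
    return (x1, y1, x2, y2)
```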
4. The method of claim 2, wherein the scene type corresponding to the current shooting scene is a live broadcast scene, and determining the first acquisition area according to the scene type comprises:
acquiring, from the terminal device, a live broadcast type corresponding to the live broadcast scene, wherein the live broadcast type comprises a motion-type live broadcast and a non-motion-type live broadcast;
if the live broadcast type is the motion-type live broadcast, determining the first acquisition area according to the height of the target user;
and if the live broadcast type is the non-motion-type live broadcast, determining a preset picture acquisition area as the first acquisition area.
5. The method of claim 1, wherein judging whether a privacy leakage risk exists in the current shooting scene comprises:
if the target user was detected in the first acquisition area before a third preset time, detecting whether the target user returns to the first acquisition area within a fourth preset time;
if so, determining that no privacy leakage risk exists in the current shooting scene;
and if not, determining that a privacy leakage risk exists in the current shooting scene.
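The timing test of claim 5 can be sketched as follows. The preset durations, the parameter names, and the handling of a user absent longer than the third preset time are all invented for this sketch:

```python
def has_privacy_risk(seconds_since_seen, seconds_until_return,
                     third_preset=30.0, fourth_preset=60.0):
    """Claim-5 style judgment. `seconds_since_seen` is how long ago the
    target user was last detected in the first acquisition area;
    `seconds_until_return` is how long they took to come back, or None
    if they have not returned."""
    if seconds_since_seen <= third_preset:
        returned = (seconds_until_return is not None
                    and seconds_until_return <= fourth_preset)
        return not returned  # no timely return -> privacy leakage risk
    # Sketch assumption: a prolonged absence is also treated as risky.
    return True
```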
6. The method of claim 5, wherein the second picture content is the entire picture content of the original picture acquired by the camera.
7. The method of claim 1, further comprising:
when an ending voice or an ending action input by the target user is detected, judging whether the target user switches the program state of the video playing software corresponding to the current shooting scene to a stopped state within a fifth preset time, wherein the program state of the video playing software comprises a stopped state and a running state;
if so, switching the privacy protection cover to a closed state;
if not, performing the following operations:
blurring all picture content of the original picture acquired by the camera to obtain a third image picture;
outputting a third reminder message, wherein the third reminder message indicates that the video playing software is in the running state;
and if neither the blurring removal instruction nor a program state switching instruction is detected within a sixth preset time, switching the privacy protection cover to a closed state, wherein the program state switching instruction is an instruction, input by the target user, for switching the program state of the video playing software from the running state to the stopped state.
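The shutdown sequence of claim 7 can be sketched as an ordered-action function. The action names and the two boolean inputs (whether the software was stopped within the fifth preset time, and whether any qualifying instruction arrived within the sixth) are illustrative assumptions:

```python
def on_end_signal(stopped_within_fifth: bool,
                  instruction_within_sixth: bool) -> list:
    """Return the ordered actions after an ending voice/action is detected."""
    if stopped_within_fifth:
        # Software already stopped: just close the privacy cover.
        return ["close_privacy_cover"]
    actions = ["blur_all_content", "output_third_reminder"]
    if not instruction_within_sixth:
        # Neither a blurring-removal nor a program-state-switch
        # instruction arrived within the sixth preset time.
        actions.append("close_privacy_cover")
    return actions
```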
8. A picture content control apparatus for privacy protection, characterized in that it is applied to a smart camera device, the smart camera device comprises a camera for capturing pictures and a privacy protection cover arranged at the front end of the camera, the privacy protection cover is in an open state, and the smart camera device is connected to a terminal device; the apparatus comprises: a determining unit, a first detecting unit, a second detecting unit, a first sending unit, a first executing unit, a judging unit, a second executing unit, and a second sending unit, wherein,
the determining unit is configured to determine a first acquisition area according to a current shooting scene, wherein the first acquisition area indicates a main activity area of a target user in the current shooting scene, the first acquisition area is a partial area within a second acquisition area, and the second acquisition area indicates the area corresponding to the overall picture that the camera can acquire under the constraint of a preset focal length;
the first detecting unit is configured to detect whether the target user is in the first acquisition area, wherein the target user refers to a user whose face information has been entered into the smart camera device;
if yes, the second detecting unit detects whether a non-target user exists in a third acquisition area, wherein a non-target user refers to a user whose face information has not been entered into the smart camera device, and the third acquisition area refers to the part of the second acquisition area other than the first acquisition area;
if not, the first sending unit sends the original picture acquired by the camera to the terminal device so that the terminal device displays the original picture;
if yes, the first executing unit performs the following operations: blurring first picture content, corresponding to the area where the non-target user is located, in the original picture acquired by the camera to obtain a first image picture, and sending the first image picture to the terminal device so that the terminal device displays the first image picture; outputting a first reminder message, wherein the first reminder message indicates that the non-target user is in the third acquisition area; and if a blurring removal instruction input by the target user is detected, canceling the blurring processing of the first picture content;
if not, the judging unit judges whether a privacy leakage risk exists in the current shooting scene;
if so, the second executing unit performs the following operations: blurring second picture content in the original picture acquired by the camera to obtain a second image picture, and sending the second image picture to the terminal device so that the terminal device displays the second image picture; outputting a second reminder message, wherein the second reminder message indicates the privacy leakage risk; and if the blurring removal instruction is detected, canceling the blurring processing of the second picture content; if the blurring removal instruction is not detected within a first preset time, outputting the second reminder message again; if the blurring removal instruction is not detected within a second preset time, switching the privacy protection cover to a closed state, wherein the second preset time is longer than the first preset time;
and if no privacy leakage risk exists, the second sending unit sends the original picture acquired by the camera to the terminal device so that the terminal device displays the original picture.
9. A smart camera device, comprising a camera, a privacy protection cover arranged at the front end of the camera, a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310107036.9A CN115883959B (en) | 2023-02-14 | 2023-02-14 | Picture content control method for privacy protection and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115883959A true CN115883959A (en) | 2023-03-31 |
CN115883959B CN115883959B (en) | 2023-06-06 |
Family
ID=85761077
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310107036.9A Active CN115883959B (en) | 2023-02-14 | 2023-02-14 | Picture content control method for privacy protection and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115883959B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006304250A (en) * | 2005-03-23 | 2006-11-02 | Victor Co Of Japan Ltd | Image processor |
CN106878588A (en) * | 2017-02-27 | 2017-06-20 | 努比亚技术有限公司 | A kind of video background blurs terminal and method |
CN107197138A (en) * | 2017-03-31 | 2017-09-22 | 努比亚技术有限公司 | A kind of filming apparatus, method and mobile terminal |
CN107948517A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Preview screen virtualization processing method, device and equipment |
CN112601054A (en) * | 2020-12-14 | 2021-04-02 | 珠海格力电器股份有限公司 | Pickup picture acquisition method and device, storage medium and electronic equipment |
CN112672102A (en) * | 2019-10-15 | 2021-04-16 | 杭州海康威视数字技术股份有限公司 | Video generation method and device |
CN112689093A (en) * | 2020-12-24 | 2021-04-20 | 深圳创维-Rgb电子有限公司 | Intelligent device privacy protection method, intelligent device and storage medium |
CN113014830A (en) * | 2021-03-01 | 2021-06-22 | 鹏城实验室 | Video blurring method, device, equipment and storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116847159A (en) * | 2023-08-29 | 2023-10-03 | 中亿(深圳)信息科技有限公司 | Monitoring video management system based on video cloud storage |
CN116847159B (en) * | 2023-08-29 | 2023-11-07 | 中亿(深圳)信息科技有限公司 | Monitoring video management system based on video cloud storage |
Also Published As
Publication number | Publication date |
---|---|
CN115883959B (en) | 2023-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11343446B2 (en) | Systems and methods for implementing personal camera that adapts to its surroundings, both co-located and remote | |
EP3163498B1 (en) | Alarming method and device | |
CN108255304B (en) | Video data processing method and device based on augmented reality and storage medium | |
WO2018058899A1 (en) | Sound volume adjusting method and apparatus of intelligent terminal | |
WO2016197765A1 (en) | Human face recognition method and recognition system | |
US10255690B2 (en) | System and method to modify display of augmented reality content | |
CN110619350B (en) | Image detection method, device and storage medium | |
EP2658242A2 (en) | Apparatus and method for recognizing image | |
KR20210042952A (en) | Image processing method and device, electronic device and storage medium | |
US11113998B2 (en) | Generating three-dimensional user experience based on two-dimensional media content | |
WO2019161729A1 (en) | Data processing method, terminal device and data processing system | |
CN109784327B (en) | Boundary box determining method and device, electronic equipment and storage medium | |
CN115883959B (en) | Picture content control method for privacy protection and related product | |
WO2021179856A1 (en) | Content recognition method and apparatus, electronic device, and storage medium | |
WO2021169616A1 (en) | Method and apparatus for detecting face of non-living body, and computer device and storage medium | |
CN113676592A (en) | Recording method, recording device, electronic equipment and computer readable medium | |
CN110705356A (en) | Function control method and related equipment | |
WO2017152592A1 (en) | Mobile terminal application operation method and mobile terminal | |
CN113923461A (en) | Screen recording method and screen recording system | |
US9148537B1 (en) | Facial cues as commands | |
CN113901871A (en) | Driver dangerous action recognition method, device and equipment | |
CN110933314B (en) | Focus-following shooting method and related product | |
CN113379999A (en) | Fire detection method and device, electronic equipment and storage medium | |
WO2024001617A1 (en) | Method and apparatus for identifying behavior of playing with mobile phone | |
CN112507798B (en) | Living body detection method, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||