CN110609921B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN110609921B
CN110609921B (Application CN201910813348.5A)
Authority
CN
China
Prior art keywords
information
action
video
video information
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910813348.5A
Other languages
Chinese (zh)
Other versions
CN110609921A (en)
Inventor
于晨晨
符博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910813348.5A priority Critical patent/CN110609921B/en
Publication of CN110609921A publication Critical patent/CN110609921A/en
Application granted granted Critical
Publication of CN110609921B publication Critical patent/CN110609921B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an information processing method, which comprises the following steps: acquiring first video information, and acquiring first action information of a first object in the first video information, the first action information being action information directed at a second object in the first video information; if the first action information matches first preset action information, hiding the second object to obtain second video information; and displaying the second video information. The embodiment of the invention also discloses an electronic device.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of electronics and information technologies, and in particular, to an information processing method and an electronic device.
Background
When a user shoots with an electronic device, a complete video often needs to be blurred. In the related art, the user obtains each frame of image in a complete video shot by the electronic device and blurs each frame of image with the electronic device to obtain a blurred video. That is to say, in the related art, the electronic device can only blur a complete video after that video has been fully obtained, so the efficiency with which the electronic device blurs video is low.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide an information processing method, an electronic device, and a computer storage medium, which address the problem in the related art that an electronic device can only blur a complete video after the video has been fully obtained, which makes the electronic device's video-blurring efficiency low, thereby improving the efficiency of blurring video.
The technical scheme of the invention is realized as follows:
an information processing method, the method comprising:
acquiring first video information, and acquiring first action information of a first object in the first video information; the first action information is action information directed at a second object in the first video information;
if the first action information is matched with first preset action information, hiding the second object to obtain second video information;
and displaying the second video information.
Optionally, if the first action information matches first preset action information, hiding the second object to obtain second video information includes:
if the first action information is matched with the first preset action information, determining at least two first sub-objects in the first video information based on the first action information; wherein the at least two first sub-objects comprise the second object;
and receiving an operation of the first object on the second object in the at least two first sub-objects, and hiding the second object based on the operation to obtain the second video information.
Optionally, if the first action information matches with first preset action information, hiding the second object to obtain second video information, including:
if the first action information is matched with the first preset action information, scene information of the second object in the first video information is obtained;
determining a first target object matched with the scene information;
and replacing or shielding the second object by the first target object to obtain the second video information.
Optionally, the determining a first target object matching the scene information includes:
acquiring first characteristic information of the first object;
determining the first target object matching the first feature information and the scene information.
Optionally, if the first action information matches with first preset action information, hiding the second object to obtain second video information, including:
if the first action information is matched with the first preset action information, outputting first indication information for indicating the first object to send out a first target action;
and acquiring second action information of the first object, and if the second action information is matched with the first target action, hiding the second object to obtain second video information.
Optionally, after the displaying the second video information, the method further includes:
acquiring third action information of the first object in the first video information; wherein the third action information is action information directed at the second object in the first video information;
and if the third action information is matched with second preset action information, canceling to hide the second object to display the second object, obtaining the first video information, and displaying the first video information.
Optionally, the acquiring the first video information and obtaining the first action information of the first object in the first video information includes:
acquiring the first video information, and acquiring second characteristic information of a third object in the first video information; wherein the second feature information comprises head feature information;
if the second feature information does not comprise facial feature information, acquiring first action information of the first object; wherein the first action information is action information directed at the head of the third object;
correspondingly, if the first action information is matched with first preset action information, hiding the second object to obtain second video information, including:
acquiring third characteristic information of the third object in the first video information; wherein the third feature information comprises head feature information;
and if the first action information is matched with the first preset action information and the third feature information comprises face feature information, hiding the face of the third object to obtain the second video information.
Optionally, the acquiring the first video information and obtaining the first action information of the first object in the first video information includes:
acquiring the first video information, analyzing the first video information, and determining a plurality of second sub-objects included in the first video information; wherein the plurality of second sub-objects includes the first object and the second object;
if it is determined from the plurality of second sub-objects that the fourth feature information of at least one sub-object matches the target feature information, outputting second indication information for indicating the first object to issue a second target action; wherein the at least one sub-object comprises the second object;
and acquiring the first action information of the first object matched with the second target action.
Optionally, the acquiring the first video information and obtaining the first action information of the first object in the first video information includes:
acquiring the first video information, and acquiring identity characteristic information of a first object in the first video information;
and if the identity characteristic information is matched with preset identity characteristic information, acquiring the first action information.
An electronic device, the electronic device comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is used for executing the program of the information processing method in the memory to realize the following steps:
acquiring first video information, and acquiring first action information of a first object in the first video information; the first action information is action information directed at a second object in the first video information;
if the first action information is matched with first preset action information, hiding the second object to obtain second video information;
and displaying the second video information.
The information processing method and the electronic device provided by the embodiments of the present invention collect first video information and acquire first action information of a first object in the first video information, the first action information being action information directed at a second object in the first video information; if the first action information matches first preset action information, the second object is hidden to obtain second video information; and the second video information is displayed. Therefore, after the electronic device collects the first video information, the second object is automatically hidden based on the first action information in the first video information, which solves the problem in the related art of low blurring efficiency caused by blurring each frame of image one by one, and improves the efficiency of video blurring.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another information processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another information processing method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of an information processing method according to another embodiment of the present invention;
fig. 5 is a flowchart illustrating an information processing method according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
It should be appreciated that reference throughout this specification to "an embodiment of the present invention" or "an embodiment described previously" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in an embodiment of the present invention" or "in the foregoing embodiments" in various places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Unless otherwise specified, any step in the embodiments of the present invention may be executed by the electronic device, and specifically by the processor of the electronic device. It should also be noted that the embodiments of the present invention do not limit the order of the steps executed by the electronic device. In addition, the data may be processed in the same way or in different ways in different embodiments. It should be further noted that any step in the embodiments of the present invention may be executed by the electronic device independently, that is, when the electronic device executes any step in the following embodiments, it need not depend on the execution of other steps.
An embodiment of the present invention provides an information processing method applied to an electronic device, and as shown in fig. 1, the method includes the following steps:
step 101: the method comprises the steps of collecting first video information and obtaining first action information of a first object in the first video information.
The first action information is action information directed at a second object in the first video information.
Alternatively, the electronic device may be any device with video acquisition capability, data processing capability, and data playing capability, such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a personal digital assistant, a portable media player, a smart speaker, a navigation device, a wearable device, a smart band, a pedometer, a digital TV, a camera, or a desktop computer. In the embodiment of the present invention, the electronic device may be a mobile phone or a computer including a camera module. In other embodiments, the electronic device may include a computer and a camera coupled to the computer. That is, the electronic device may be an integral structure, or it may be formed by combining a plurality of devices.
The electronic device can acquire first video information by using the camera module. Optionally, the first video information is real-time video information, that is, the electronic device may collect the first video information in real time. In one embodiment, a camera module of the electronic device collects first video information in real time, and sends the collected first video information to a processor of the electronic device at intervals of a predetermined time, so that the processor acquires first action information of a first object. The predetermined time may be a time less than or equal to 1 minute, i.e. the predetermined time is short to ensure a low time delay between the capturing and displaying of the video, for example, the predetermined time may be 0.2 second, 1 minute, etc., and is not limited herein.
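As an illustration of the interval-based forwarding just described, the following minimal sketch buffers captured frames and hands them to a processor callback once the predetermined time has elapsed (pure Python; `FrameBatcher`, the callback shape, and the 0.2-second interval are illustrative assumptions, not part of the patent):

```python
class FrameBatcher:
    """Collects captured frames and forwards them to a processor
    every `interval` seconds, as in the capture scheme above."""

    def __init__(self, processor, interval=0.2):
        self.processor = processor   # callable taking a list of frames
        self.interval = interval     # the predetermined time, in seconds
        self._buffer = []
        self._last_flush = None

    def on_frame(self, frame, timestamp):
        if self._last_flush is None:
            self._last_flush = timestamp
        self._buffer.append(frame)
        # Forward the buffered frames once the predetermined time elapses.
        if timestamp - self._last_flush >= self.interval:
            self.processor(self._buffer)
            self._buffer = []
            self._last_flush = timestamp
```

In a real device the timestamps would come from the camera module's clock; here they can be fed in directly for testing.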
The information processing method in the embodiment of the present invention can be applied in various scenes. For example, it may be applied in a live-streaming scene, where the electronic device may be an anchor-side terminal device; or in video recording, where the electronic device may be a recording device; or in a video call, where the electronic device may be the call initiator's or the call receiver's device; or in a security monitoring system, where the electronic device may comprise a processor, a camera, and a display screen.
Wherein the first object may be a first user, or the first object may be a hand of the first user.
In an embodiment of the present invention, the first motion information of the first object may be gesture information of the first user. In other embodiments, the first motion information of the first object may be head motion information or behavior gesture information of the first user, or the like. It should be understood that, in the embodiment of the present invention, the first action information is taken as the gesture information for example, however, when the first action information is other information, the implementation manner of the information processing method may refer to the relevant description that the first action information is the gesture information, and details thereof are not repeated.
The second object may be an object different from the first user; for example, the second object may be a second user, a cup, a mobile phone, a photo, or the like in the first video information. Alternatively, the second object may be identical to the first object; or the second object may be a part of the first object, for example, the head, face, or another part of the first user. The embodiment of the present invention does not limit what the first object and the second object specifically are, as long as the first object can produce the action information and both the first object and the second object are in the first video information.
The electronic equipment can acquire the characteristic information of the first object, acquire the characteristic information of the second object, and determine that the first action information is action information for the second object in the first video information based on the characteristic information of the first object and the characteristic information of the second object; wherein the first object and the second object may or may not be in contact.
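The patent does not specify how the device determines that an action is directed at the second object; one conceivable geometric test, sketched below purely as a hypothetical illustration, checks whether the ray extending from the finger base through the finger tip passes through the second object's bounding box:

```python
def points_at(finger_base, finger_tip, bbox, steps=100):
    """Hypothetical test of whether the first object's pointing action is
    directed at the second object: sample points along the ray from the
    finger base through the finger tip and see if any falls inside the
    second object's bounding box (x, y, width, height)."""
    (fx, fy), (tx, ty) = finger_base, finger_tip
    dx, dy = tx - fx, ty - fy
    x0, y0, w, h = bbox
    for i in range(1, steps + 1):
        px, py = tx + dx * i, ty + dy * i   # extend the ray beyond the tip
        if x0 <= px <= x0 + w and y0 <= py <= y0 + h:
            return True
    return False
```

The finger landmarks would come from whatever feature extraction the device performs; only the directedness test is shown here.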
Step 102: and if the first action information is matched with the first preset action information, hiding the second object to obtain second video information.
The matching of the first action information with the first preset action information may include: the similarity between the first action information and the first preset action information is greater than a threshold.
The first preset action information may be preset action information; for example, it may be preset gesture information. In one embodiment, the preset gesture information may be the pointing of a finger. In another embodiment, the preset gesture information may be the palm moving from open to closed. It should be noted that the gestures listed above do not limit the first preset action information: it can be selected in various ways, and may be a static action, such as a finger pointing, or a dynamic action, such as the palm moving from open to closed. In one embodiment, when the first preset action information is a static action, the matching of the first action information with the first preset action information may include: the similarity between the first action information and the first preset action information is greater than a threshold, and the action corresponding to the matched first action information is determined to last for a preset duration. It should be understood that the preset gesture information may also include other gesture information, which is not exhaustively listed in the embodiments of the present invention.
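The static-gesture matching rule described above — per-frame similarity above a threshold, sustained for a preset duration — can be sketched as follows. The cosine-similarity feature representation, the 0.9 threshold, and the three-frame duration are illustrative assumptions; the patent does not prescribe a similarity measure:

```python
import math

def similarity(a, b):
    """Cosine similarity between two gesture feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class StaticGestureMatcher:
    """Matches a static preset gesture: the per-frame similarity must
    exceed `threshold`, and the match must persist for `min_frames`
    consecutive frames (the preset duration described above)."""

    def __init__(self, preset, threshold=0.9, min_frames=3):
        self.preset = preset
        self.threshold = threshold
        self.min_frames = min_frames
        self._run = 0   # consecutive matching frames seen so far

    def update(self, features):
        if similarity(features, self.preset) > self.threshold:
            self._run += 1
        else:
            self._run = 0   # the gesture was interrupted; start over
        return self._run >= self.min_frames
```

A dynamic action (palm open to closed) would instead compare a sequence of such vectors against a preset sequence.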
Optionally, the hiding processing of the second object includes, but is not limited to, mosaic processing, occlusion, blurring, or removal of the second object. It should be understood that the hiding processing in the embodiment of the present invention is not hiding in the literal sense; rather, the second object is processed so that it is unrecognizable to the user, that is, the user cannot identify the real second object from the processed result. In other words, the hiding processing of the second object may be privacy-protection processing of the second object.
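Of the hiding options listed above, mosaic processing is the simplest to illustrate: each tile of the target region is flattened to its mean color so the original content cannot be recognized. A minimal NumPy sketch (the 8-pixel block size is an arbitrary choice):

```python
import numpy as np

def mosaic_region(frame, x, y, w, h, block=8):
    """Hide a region of `frame` by pixelating it: average each
    `block`x`block` tile so the original content is unrecognizable."""
    region = frame[y:y + h, x:x + w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))   # one flat color per tile
    out = frame.copy()
    out[y:y + h, x:x + w] = region.astype(frame.dtype)
    return out
```

Blurring or removal would replace the tile-averaging step with a smoothing filter or with background fill, respectively.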
In a feasible embodiment, when the electronic device hides the second object to obtain the second video information, special-effect video information with a magic-trick effect may be superimposed on the second video information, so that the user sees the effect at the moment the second object is hidden, or throughout the time the second object remains hidden; that is, the special-effect video information may have a fixed duration or the same duration as that for which the second object is hidden.
Step 103: and displaying the second video information.
In the embodiment of the present invention, the electronic device collects the first video information and displays the second video information synchronously, that is, the displayed second video information is a real-time scene. The second video information may be displayed simultaneously on a plurality of devices including the electronic device. For example, in a live-streaming scene, the anchor-side device and the audience-side devices watching the anchor may display the second video information simultaneously. In other embodiments, the second video information is displayed on a device other than the electronic device. For example, the anchor side still displays the first video information, while the viewer side displays the second video information.
The information processing method provided by the embodiment of the invention comprises the steps of collecting first video information, and acquiring first action information of a first object in the first video information; the first action information is action information aiming at a second object in the first video information; if the first action information is matched with the first preset action information, hiding the second object to obtain second video information; and displaying the second video information. Therefore, after the electronic equipment acquires the first video information, the second object is automatically hidden based on the first action information in the first video information, the problem of low fuzzy processing efficiency caused by fuzzy processing on each frame of image in the related art is solved, and the video fuzzy processing efficiency is improved.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 2, the method including the following steps:
step 201: the electronic equipment collects the first video information and obtains first action information of a first object in the first video information.
The first action information is action information directed at a second object in the first video information.
Step 202: if the first action information is matched with the first preset action information, the electronic equipment determines at least two first sub-objects in the first video information based on the first action information.
Wherein the at least two first sub-objects comprise a second object.
For example, when the user points a finger at a cup bearing a pattern, the electronic device identifies two candidate sub-objects based on the first action information: the cup and the pattern on the cup.
After step 202, the electronic device may display at least two first sub-objects. The electronic equipment displays the at least two first sub-objects without influencing the playing of the video, namely, the at least two first sub-objects can be displayed on the upper layer of the played second video information.
Step 203: the electronic equipment receives the operation of the first object on a second object in the at least two first sub-objects, and carries out hiding processing on the second object based on the operation to obtain second video information.
In one embodiment, the electronic device receiving an operation of the first object with respect to the second object among the at least two first sub-objects may include: the electronic device receiving a click operation or a voice operation of the first object on the second object. In another embodiment, the electronic device may obtain target action information of the first object and determine, based on the target action information, that the operation of the first object with respect to the second object has been received. The target action information may be a finger pointing at the second object, or the eyes gazing at the second object for longer than a preset duration; the target action information is not limited in the embodiment of the present invention.
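The gaze-dwell variant just mentioned — treating a sub-object as selected once it has been pointed at or gazed at for longer than a preset duration — might look like the following hypothetical sketch (the one-second dwell value is an assumption):

```python
class DwellSelector:
    """Selects one of the candidate sub-objects once the first object's
    pointing (or gaze) stays on it for at least `dwell` seconds."""

    def __init__(self, dwell=1.0):
        self.dwell = dwell
        self._target = None   # candidate currently being dwelt on
        self._since = None    # timestamp when dwelling started

    def update(self, candidate, timestamp):
        """`candidate` is the sub-object currently pointed/gazed at
        (or None). Returns the selected sub-object, or None."""
        if candidate != self._target:
            self._target, self._since = candidate, timestamp
            return None
        if candidate is not None and timestamp - self._since >= self.dwell:
            return candidate   # operation on this sub-object received
        return None
```

Which candidate is currently gazed at would come from the device's gaze- or pointing-estimation step; only the dwell timing is shown.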
For example, the electronic device may receive an operation of the first object with respect to the pattern on the cup, and determine that the pattern on the cup is the second object.
Step 204: the electronic device displays the second video information.
It should be noted that, for the description of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the description in the other embodiments, which is not repeated herein.
According to the information processing method provided by the embodiment of the present invention, when the electronic device determines the at least two first sub-objects based on the first action information, it can receive the operation of the first object on the second object among the at least two first sub-objects and thus hide the second object; in this way, the object processed by the electronic device is prevented from differing from the object the first object intends to process.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 3, including the following steps:
step 301: the electronic equipment collects first video information and obtains first action information of a first object in the first video information.
The first action information is action information directed at a second object in the first video information.
Step 302: and if the first action information is matched with the first preset action information, the electronic equipment acquires scene information of the second object in the first video information.
The scene information may include environment information, and the electronic device may analyze the first video information to obtain the scene information. In one embodiment, the electronic device may analyze each frame of image in the first video information to obtain the scene information in which the second object is located in each frame. The scene information may include at least one of geographical location information, weather information, and spatial environment information.
Step 303: the electronic device determines a first target object that matches the scene information.
The electronic device may determine a first target object that matches the scene information, and the size of the first target object may match the size of the second object so that the first target object can completely cover the second object. For example, in a live-streaming scene, when the scene information is determined to indicate a cute-style environment, the target object may be determined to be a cartoon image; in a video-recording or video-call scene, when the scene information is determined to be a room, the first target object may be determined to be an object that blends naturally with the image surrounding the second object. In one embodiment, the electronic device may determine the scene information of each frame of image and determine, for each frame, a sub-target object matching that frame's scene information.
In one embodiment, step 303 may be implemented by the following steps a1 to a 2:
step A1: the electronic device acquires first characteristic information of a first object.
The first characteristic information of the first object may include at least one of the age, gender, occupation, and appearance characteristics of the first object.
Step A2: the electronic device determines a first target object that matches the first feature information and the scene information.
Step 304: the electronic device replaces or occludes the second object with the first target object to obtain second video information.
In one embodiment, the electronic device may obtain the second video information by replacing or occluding the second object in each frame with the corresponding sub-target object.
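The per-frame occlusion can be sketched as pasting the target object, scaled to the second object's bounding box, over each frame. In this NumPy illustration the nearest-neighbour resize merely stands in for whatever scaling the device actually uses:

```python
import numpy as np

def nn_resize(img, h, w):
    """Nearest-neighbour resize (avoids an external image library)."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def occlude(frame, bbox, target):
    """Cover the second object's bounding box (x, y, w, h) with the
    first target object, scaled to exactly match the region's size."""
    x, y, w, h = bbox
    out = frame.copy()
    out[y:y + h, x:x + w] = nn_resize(target, h, w)
    return out
```

Applying `occlude` to every frame, with the sub-target object chosen per frame as described above, yields the second video information.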
Step 305: the electronic device displays the second video information.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the information processing method provided by the embodiment of the present invention, the electronic device can replace or occlude the second object with a first target object matched with the scene information of the second object in the first video information, so that the occluding object does not look out of place when covering the second object.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 4, the method including the following steps:
step 401: the electronic equipment collects the first video information and obtains first action information of a first object in the first video information.
The first action information is action information directed at the second object in the first video information.
Step 402: if the first action information matches the first preset action information, the electronic device outputs first indication information instructing the first object to perform the first target action.
The first indication information is used to indicate whether the second object needs to be hidden. For example, the first indication information may be "Hide the second object? If so, please confirm with a nod."
The first indication information can be output in a text mode or a voice playing mode.
Step 403: the electronic device acquires second action information of the first object, and if the second action information matches the first target action, hides the second object to obtain the second video information.
For example, the second action information of the first object may be a nod, an OK gesture, or another specific gesture.
Step 404: the electronic device displays the second video information.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the information processing method provided by the embodiment of the invention, if the first action information matches the first preset action information, the electronic device outputs first indication information instructing the first object to perform the first target action; it then acquires second action information of the first object and hides the second object to obtain the second video information only if the second action information matches the first target action. This prevents the electronic device from hiding the second object as the result of a misoperation by the first object.
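The two-step confirmation of steps 401 to 403 can be sketched as a small state machine. The action names here are placeholders for whatever output a real gesture recognizer would produce:

```python
# Hedged sketch of steps 401-403: a hide-request gesture must be followed by
# a confirming target action (e.g. a nod) before the object is hidden, which
# guards against accidental triggering.

FIRST_PRESET_ACTION = "point_at_object"  # assumed trigger gesture
FIRST_TARGET_ACTION = "nod"              # assumed confirmation gesture

class HideConfirmer:
    def __init__(self):
        self.awaiting_confirm = False
        self.hidden = False

    def on_action(self, action):
        """Feed detected actions in order; returns an indication string
        (the first indication information) or None."""
        if not self.awaiting_confirm:
            if action == FIRST_PRESET_ACTION:
                self.awaiting_confirm = True
                return "Hide the second object? Nod to confirm."
            return None
        # Second action observed: only the target action confirms hiding.
        self.awaiting_confirm = False
        if action == FIRST_TARGET_ACTION:
            self.hidden = True
        return None

c = HideConfirmer()
```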
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 5, the method including the following steps:
step 501: the electronic equipment collects the first video information and obtains second characteristic information of a third object in the first video information.
Wherein the second feature information includes head feature information. The third object may be the same object as the first object or the second object, or may be an object different from both the first object and the second object; that is, the third object may be a second user different from the first user.
Alternatively, the second feature information may be feature information of all parts of the body of the third object, or feature information of only the head of the third object.
The second characteristic information may be skin information or color information, etc.
Step 502: if the second feature information is determined not to include the facial feature information, the electronic equipment acquires first action information of the first object.
Wherein the first action information is action information for the head of the third object.
In a real scenario, when the second feature information does not include facial feature information, the third object may be facing away from the camera module, or no facial features can be determined based on the second feature information.
Step 503: the electronic equipment acquires third characteristic information of a third object in the first video information.
Wherein the third feature information includes head feature information.
Step 504: and if the first action information is matched with the first preset action information and the third characteristic information comprises facial characteristic information, the electronic equipment hides the face of the third object to obtain second video information.
The third feature information may be feature information of all parts of the body of the third object, or feature information of only the head of the third object.
In a specific embodiment, the electronic device acquires second feature information of the third object that does not include facial feature information and therefore determines that the face of the third object is turned away from the camera module. The electronic device may then track the third object until third feature information that does include facial feature information is acquired, determine from the third feature information that the third object now faces the camera, and treat the face of the third object as the second object.
Step 505: the electronic device displays the second video information.
It should be noted that, for the description of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the description in the other embodiments, which is not repeated herein.
In the embodiment of the invention, when the electronic device determines that the third object faces away from the camera module and detects that the first action is performed on the head of the third object, the electronic device tracks the third object until the face of the third object is detected facing the camera module, and then hides the face of the third object. This prevents the privacy of the third object from being leaked by displaying its face.
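The tracking behaviour of steps 501 to 504 can be sketched as follows. The per-frame dictionaries and the `has_face` flag stand in for a real face detector and are assumptions:

```python
# Illustrative sketch of steps 501-504: a third object facing away from the
# camera is tracked frame by frame; only once facial features appear (and
# hiding was armed by the first action) is the face marked hidden.

def track_and_hide(frames, hide_armed):
    """Scan per-frame detections of the tracked third object; when a frame
    reports facial features and hiding is armed, mark the face hidden.
    Returns new frame records; the input frames are not mutated."""
    out = []
    for frame in frames:
        frame = dict(frame)  # copy so the original detection is preserved
        if hide_armed and frame.get("has_face"):
            frame["face_hidden"] = True
        out.append(frame)
    return out

# The object faces away for two frames, then turns toward the camera.
frames = [{"has_face": False}, {"has_face": False}, {"has_face": True}]
result = track_and_hide(frames, hide_armed=True)
```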
Based on the foregoing embodiments, the embodiment of the present invention provides an information processing method, after steps 103, 204, 305, 404, or 505, the electronic device may further perform the following steps B1 to B2.
Step B1: the electronic equipment acquires third action information of the first object in the first video information.
The third action information is action information directed at the second object in the first video information. The third action information of the first object may be gesture information of the first user.
Step B2: if the third action information matches the second preset action information, the electronic device cancels the hiding of the second object so that the second object is displayed, thereby obtaining the first video information and displaying it.
The second preset action information is action information set in advance; for example, it may be a pointing gesture of a finger, or the opening of a closed palm.
In this way, when the first object no longer needs the second object to be hidden, the hiding can be cancelled through a gesture.
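Steps B1 to B2 amount to a simple toggle, which can be sketched as follows; the action name and state keys are illustrative assumptions:

```python
# Minimal sketch of steps B1-B2: a second preset action (e.g. opening a
# closed palm) cancels the hiding and restores the first video information.

SECOND_PRESET_ACTION = "palm_open"  # assumed cancel gesture

def apply_unhide(state, action):
    """Return the new display state after observing `action`; any other
    action leaves the state unchanged."""
    if state["hidden"] and action == SECOND_PRESET_ACTION:
        return {"hidden": False, "showing": "first_video"}
    return state

state = {"hidden": True, "showing": "second_video"}
state = apply_unhide(state, "palm_open")
```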
In one implementation, based on the foregoing example, the steps 101, 201, 301 or 401 may be implemented by the following steps C1 to C3:
step C1: the electronic equipment collects the first video information, analyzes the first video information and determines a plurality of second sub-objects included in the first video information.
Wherein the plurality of second sub-objects includes the first object and the second object.
The plurality of second sub-objects may be all objects that the electronic device can recognize based on the first video information.
Step C2: if it is determined that fourth feature information of at least one of the plurality of second sub-objects matches the target feature information, the electronic device outputs second indication information instructing the first object to perform a second target action.
Wherein the at least one sub-object comprises a second object.
The fourth feature information may include at least one of text information, color information, and texture information. The target feature information may be preset feature information. The electronic device may determine, from the plurality of second sub-objects, at least one sub-object that may involve privacy, for example the brand logo of a product, a personal item, or any other object that may reveal privacy.
Step C3: the electronic device acquires first action information of the first object, which is matched with the second target action.
In this way, the electronic device can automatically identify a second object that may involve privacy and remind the first object to perform the second target action so as to hide it, thereby protecting the privacy of the first object.
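Steps C1 to C3 can be sketched as a feature-matching scan over the recognised sub-objects; the feature keys and the catalogue of target features below are assumptions:

```python
# Hypothetical sketch of steps C1-C3: every recognised sub-object's fourth
# feature information (text / colour / texture) is compared against preset
# target features; a match triggers the second indication to the user.

TARGET_FEATURES = {
    "text": {"BrandX"},          # e.g. brand names to hide
    "texture": {"credit_card"},  # e.g. textures of sensitive items
}

def privacy_related(sub_object):
    """True if any feature of the sub-object matches a target feature."""
    return any(
        sub_object.get(key) in values for key, values in TARGET_FEATURES.items()
    )

def scan_frame(sub_objects):
    """Return the privacy-related sub-objects and, if any were found, the
    second indication information to output."""
    flagged = [o for o in sub_objects if privacy_related(o)]
    indication = "Perform the second target action to hide." if flagged else None
    return flagged, indication

objs = [{"name": "user"}, {"name": "card", "texture": "credit_card"}]
flagged, indication = scan_frame(objs)
```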
In another embodiment, based on the foregoing example, the steps 101, 201, 301 or 401 may be implemented by the following steps D1 to D2:
step D1: the electronic equipment collects the first video information and acquires the identity characteristic information of the first object in the first video information.
The identity characteristic information may be at least one of face characteristic information, fingerprint information, account information for login, and iris characteristic information.
Step D2: if the identity characteristic information is matched with the preset identity characteristic information, the electronic equipment acquires first action information.
The matching of the identity characteristic information with the preset identity characteristic information may include: the similarity between the identity characteristic information and the preset identity characteristic information is larger than a threshold value.
In this way, the electronic device acquires the first action information only when the identity feature information of the first object matches the preset identity feature information, so that only a user with operation authority can execute the information processing method of the embodiment of the invention, which improves information security.
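The similarity test of steps D1 to D2 can be sketched with cosine similarity over toy feature vectors; a real system would compare embeddings produced by a face, fingerprint, or iris model, and the threshold value here is an assumption:

```python
# Minimal sketch of steps D1-D2: the identity feature information matches
# the preset identity when their similarity exceeds a threshold.
import math

THRESHOLD = 0.9  # assumed similarity threshold

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identity_matches(features, preset_features, threshold=THRESHOLD):
    """Only a user whose similarity exceeds the threshold may trigger
    acquisition of the first action information."""
    return cosine_similarity(features, preset_features) > threshold

enrolled = [0.2, 0.8, 0.5]     # preset identity feature vector (toy data)
same_user = [0.21, 0.79, 0.52]
stranger = [0.9, 0.1, 0.1]
```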
Based on the foregoing embodiment, an embodiment of the present invention provides an electronic device 6, where the electronic device 6 may be applied to an information processing method provided in the embodiments corresponding to fig. 1 to 5, and as shown in fig. 6, the electronic device 6 may include: a processor 61, a memory 62, and a communication bus 63, wherein:
the communication bus 63 is used to implement a communication connection between the processor 61 and the memory 62.
The processor 61 is configured to execute a program of an information processing method stored in the memory 62 to realize the steps of:
acquiring first video information, and acquiring first action information of a first object in the first video information; the first action information is action information aiming at a second object in the first video information;
if the first action information is matched with the first preset action information, hiding the second object to obtain second video information;
and displaying the second video information.
In other embodiments of the present invention, to hide the second object and obtain the second video information when the first action information matches the first preset action information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
if the first action information is matched with the first preset action information, determining at least two first sub-objects in the first video information based on the first action information; wherein the at least two first sub-objects comprise a second object;
and receiving the operation of the first object on a second object in the at least two first sub-objects, and hiding the second object based on the operation to obtain second video information.
In other embodiments of the present invention, to hide the second object and obtain the second video information when the first action information matches the first preset action information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
if the first action information is matched with the first preset action information, scene information of a second object in the first video information is obtained;
determining a first target object matched with the scene information;
and replacing or shielding the second object by the first target object to obtain second video information.
In other embodiments of the present invention, to determine the first target object matching the scene information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
acquiring first characteristic information of a first object;
and determining a first target object matched with the first characteristic information and the scene information.
In other embodiments of the present invention, to hide the second object and obtain the second video information when the first action information matches the first preset action information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
if the first action information is matched with the first preset action information, outputting first indication information for indicating the first object to send out a first target action;
and acquiring second action information of the first object, and if the second action information is matched with the first target action, hiding the second object to obtain second video information.
In other embodiments of the present invention, the processor 61 is configured to execute the information processing method stored in the memory 62 to implement the following steps:
acquiring third action information of a first object in the first video information; the third action information is action information aiming at a second object in the first video information;
and if the third action information is matched with the second preset action information, the second object is cancelled to be hidden so as to display the second object, the first video information is obtained, and the first video information is displayed.
In other embodiments of the present invention, to acquire the first video information and obtain the first action information of the first object in the first video information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
acquiring first video information, and acquiring second feature information of a third object in the first video information; wherein the second feature information comprises head feature information;
if it is determined that the second feature information does not include facial feature information, acquiring first action information of the first object; wherein the first action information is action information for the head of the third object;
Correspondingly, to hide the second object and obtain the second video information when the first action information matches the first preset action information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
acquiring third feature information of the third object in the first video information; wherein the third feature information includes head feature information;
and if the first action information is matched with the first preset action information and the third characteristic information comprises facial characteristic information, hiding the face of the third object to obtain second video information.
In other embodiments of the present invention, to acquire the first video information and obtain the first action information of the first object in the first video information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
acquiring first video information, analyzing the first video information, and determining a plurality of second sub-objects included in the first video information; wherein the plurality of second sub-objects includes a first object and a second object;
if the fourth characteristic information of at least one sub-object is determined to be matched with the target characteristic information from the plurality of second sub-objects, outputting second indication information for indicating the first object to send out a second target action; wherein the at least one sub-object comprises a second object;
and acquiring first action information of the first object, which is matched with the second target action.
In other embodiments of the present invention, to acquire the first video information and obtain the first action information of the first object in the first video information, the processor 61 is configured to execute the program stored in the memory 62 to implement the following steps:
acquiring first video information, and acquiring identity characteristic information of a first object in the first video information;
and if the identity characteristic information is matched with the preset identity characteristic information, acquiring first action information.
It should be noted that, a specific implementation process of the steps executed by the processor in this embodiment may refer to an implementation process in the information processing method provided in the embodiments corresponding to fig. 1 to 5, and is not described here again.
The electronic device provided by the embodiment of the invention collects the first video information and acquires the first action information of the first object in the first video information, the first action information being action information directed at the second object in the first video information; if the first action information matches the first preset action information, the second object is hidden to obtain the second video information; and the second video information is displayed. Therefore, after the electronic device collects the first video information, the second object is automatically hidden based on the first action information in the first video information, which solves the problem of low blurring efficiency caused by blurring each frame of image in the related art and improves the efficiency of blurring the video.
Based on the foregoing embodiments, an embodiment of the present invention provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the information processing method as any one of the above.
The Processor or the CPU may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above-described processor function may be other electronic devices, and the embodiments of the present application are not limited in particular.
The computer storage medium/Memory may be a Memory such as a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); and may be various terminals such as mobile phones, computers, tablet devices, and personal digital assistants, including one or any combination of the above-mentioned memories.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a removable Memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. An information processing method, the method comprising:
acquiring first video information, and acquiring first action information of a first object in the first video information; the first action information is action information aiming at a second object in the first video information;
if the first action information is matched with first preset action information, hiding the second object to obtain second video information, wherein the hiding process comprises the following steps:
if the first action information is matched with the first preset action information, determining at least two first sub-objects in the first video information based on the first action information; wherein the at least two first sub-objects comprise the second object; receiving an operation of the first object on the second object in the at least two first sub-objects, and performing hiding processing on the second object based on the operation to obtain second video information;
or, if the first action information is matched with the first preset action information, acquiring scene information of the second object in the first video information; determining a first target object matched with the scene information; replacing or shielding the second object by the first target object to obtain the second video information;
or if the first action information is matched with the first preset action information, outputting first indication information for indicating the first object to send out a first target action; acquiring second action information of the first object, and if the second action information is matched with the first target action, hiding the second object to obtain second video information;
and displaying the second video information.
2. The method of claim 1, wherein the determining a first target object matching the scene information comprises:
acquiring first characteristic information of the first object;
determining the first target object matching the first feature information and the scene information.
3. The method of claim 1, wherein after displaying the second video information, the method further comprises:
acquiring third action information of the first object in the first video information; wherein the third motion information is motion information for a second object in the first video information;
and if the third action information is matched with second preset action information, canceling to hide the second object to display the second object, obtaining the first video information, and displaying the first video information.
4. The method according to claim 1, wherein the acquiring the first video information and obtaining the first motion information of the first object in the first video information comprises:
acquiring the first video information, and acquiring second feature information of a third object in the first video information; wherein the second feature information comprises head feature information;
if the second feature information is determined not to include the facial feature information, acquiring first action information of the first object; wherein the first action information is action information for a head of the third object;
correspondingly, if the first action information is matched with first preset action information, hiding the second object to obtain second video information, including:
acquiring third feature information of the third object in the first video information; wherein the third feature information comprises head feature information;
and if the first action information is matched with the first preset action information and the third feature information comprises face feature information, hiding the face of the third object to obtain the second video information.
5. The method according to any one of claims 1 to 3, wherein the acquiring the first video information and obtaining the first motion information of the first object in the first video information comprises:
acquiring the first video information, analyzing the first video information, and determining a plurality of second sub-objects included in the first video information; wherein the plurality of second sub-objects includes the first object and the second object;
if it is determined from the plurality of second sub-objects that the fourth feature information of at least one sub-object matches the target feature information, outputting second indication information for indicating the first object to issue a second target action; wherein the at least one sub-object comprises the second object;
and acquiring the first action information of the first object matched with the second target action.
6. The method according to any one of claims 1 to 3, wherein the acquiring the first video information and obtaining the first motion information of the first object in the first video information comprises:
acquiring the first video information, and acquiring identity characteristic information of a first object in the first video information;
and if the identity characteristic information is matched with preset identity characteristic information, acquiring the first action information.
7. An electronic device, characterized in that the electronic device comprises: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is used for executing the program of the information processing method in the memory to realize the following steps:
acquiring first video information, and acquiring first action information of a first object in the first video information; the first action information is action information aiming at a second object in the first video information;
if the first action information is matched with first preset action information, hiding the second object to obtain second video information, wherein the hiding process comprises the following steps:
if the first action information is matched with the first preset action information, determining at least two first sub-objects in the first video information based on the first action information; wherein the at least two first sub-objects comprise the second object; receiving an operation of the first object on the second object in the at least two first sub-objects, and hiding the second object based on the operation to obtain second video information;
or, if the first action information is matched with the first preset action information, acquiring scene information of the second object in the first video information; determining a first target object matched with the scene information; replacing or shielding the second object by the first target object to obtain the second video information;
or if the first action information is matched with the first preset action information, outputting first indication information for indicating the first object to send out a first target action; acquiring second action information of the first object, and if the second action information is matched with the first target action, hiding the second object to obtain second video information;
and displaying the second video information.
CN201910813348.5A 2019-08-30 2019-08-30 Information processing method and electronic equipment Active CN110609921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910813348.5A CN110609921B (en) 2019-08-30 2019-08-30 Information processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN110609921A CN110609921A (en) 2019-12-24
CN110609921B true CN110609921B (en) 2022-08-19

Family

ID=68890738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910813348.5A Active CN110609921B (en) 2019-08-30 2019-08-30 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110609921B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104333730A (en) * 2014-11-26 2015-02-04 Beijing QIYI Century Science and Technology Co., Ltd. Video communication method and video communication device
CN104349099A (en) * 2013-07-25 2015-02-11 Lenovo (Beijing) Co., Ltd. Image storage method and device
CN105388779A (en) * 2015-12-25 2016-03-09 Xiaomi Inc. Smart device control method and device
CN105472303A (en) * 2015-11-20 2016-04-06 Xiaomi Inc. Privacy protection method and apparatus for video chatting
CN105592331A (en) * 2015-12-16 2016-05-18 Guangzhou Huaduo Network Technology Co., Ltd. Bullet-screen comment processing method, related device, and system
CN106454195A (en) * 2016-09-14 2017-02-22 Huizhou TCL Mobile Communication Co., Ltd. Anti-peeping method and system for video chats based on VR
CN108763514A (en) * 2018-05-30 2018-11-06 Vivo Mobile Communication Co., Ltd. Information display method and mobile terminal
CN108848334A (en) * 2018-07-11 2018-11-20 Guangdong Genius Technology Co., Ltd. Video processing method, device, terminal and storage medium
CN108965982A (en) * 2018-08-28 2018-12-07 Baidu Online Network Technology (Beijing) Co., Ltd. Video recording method, device, electronic equipment and readable storage medium
CN109151338A (en) * 2018-07-10 2019-01-04 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and related product
CN109254650A (en) * 2018-08-02 2019-01-22 Alibaba Group Holding Ltd. Human-computer interaction method and device
CN109871834A (en) * 2019-03-20 2019-06-11 Beijing ByteDance Network Technology Co., Ltd. Information processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9407864B2 (en) * 2013-07-25 2016-08-02 Beijing Lenovo Software Ltd. Data processing method and electronic device


Also Published As

Publication number Publication date
CN110609921A (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN108197586B (en) Face recognition method and device
CN110321790B (en) Method for detecting countermeasure sample and electronic equipment
CN107886032B (en) Terminal device, smart phone, authentication method and system based on face recognition
CN108566516B (en) Image processing method, device, storage medium and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN110956061A (en) Action recognition method and device, and driver state analysis method and device
CN109151338B (en) Image processing method and related product
CN107124548A Photographing method and terminal
JP2022118201A (en) Image processing system, image processing method, and program
CN108848334A Video processing method, device, terminal and storage medium
CN112257552B (en) Image processing method, device, equipment and storage medium
CN113422977A (en) Live broadcast method and device, computer equipment and storage medium
CN110532957B (en) Face recognition method and device, electronic equipment and storage medium
CN110827195B (en) Virtual article adding method and device, electronic equipment and storage medium
EP3617851B1 (en) Information processing device, information processing method, and recording medium
US20190130193A1 (en) Virtual Reality Causal Summary Content
CN112351327A (en) Face image processing method and device, terminal and storage medium
CN107977636B (en) Face detection method and device, terminal and storage medium
CN110597426A (en) Bright screen processing method and device, storage medium and terminal
US20200005507A1 (en) Display method and apparatus and electronic device thereof
CN112511743B (en) Video shooting method and device
CN107357424B (en) Gesture operation recognition method and device and computer readable storage medium
CN111416936B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110609921B (en) Information processing method and electronic equipment
CN105450973A (en) Method and device of video image acquisition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant