CN111726559A - Image blurring processing method and device used in multimedia video conference - Google Patents
- Publication number: CN111726559A
- Application number: CN202010474769.2A
- Authority
- CN
- China
- Prior art keywords
- background image
- image
- participant
- dressing
- dynamic
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- H04N7/15 — Conference systems (H04N: pictorial communication, e.g. television; H04N7/14: systems for two-way working)
- G06V20/40 — Scenes; scene-specific elements in video content
- G06V40/20 — Recognition of human movements or behaviour, e.g. gesture recognition
- H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
Abstract
An image blurring method and device for use in a multimedia video conference are provided. The method comprises: when a participant terminal is detected to be in a video conference, acquiring the background image of the participant's video picture and receiving a real-time video stream fed back by at least one camera terminal on the same wireless local area network as the participant terminal; performing moving-person detection on the video stream and, when a moving person is detected, judging whether that person is the participant; if not, judging whether the person's position is near the background image; if so, acquiring the person's current walking track and predicting the target walking track the person will follow on reaching the area covered by the background image; and blurring the image block region of the background image corresponding to the target walking track. When a non-participant then enters the background image, he or she is covered by the blurred region, so none of the other participants can see the moving person in the participant's video picture. This improves privacy protection and effectively improves the efficiency of the video conference.
Description
Technical Field
The invention relates to the technical field of multimedia conferences, in particular to an image blurring processing method and device used in a multimedia video conference.
Background
With the development of society, video conference software has become increasingly popular. Video conferencing is a multimedia communication mode in which a meeting is held over a communication network using electronic equipment; it lets geographically dispersed participants exchange and share real-time information through video and audio streams and thereby work cooperatively.
During a conference, the following scenario is often encountered: while a participant is in a video conference with other participants, a non-participant who shares the same space (for example, a family member) can easily walk into the video picture, affecting both the quality of the conference and the user's privacy.
How to recognize the positions of non-participants in advance during a video conference, and blur them once they enter the picture, has therefore become a technical problem to be solved.
Disclosure of Invention
The purpose of the invention is as follows:
in order to overcome the disadvantages in the background art, embodiments of the present invention provide an image blurring processing method and apparatus for use in a multimedia video conference, which can effectively solve the problems related to the background art.
The technical scheme is as follows:
a method of image blur processing for use in multimedia video conferencing, the method comprising:
when a participant terminal is detected to carry out a video conference, acquiring a background image in a video picture corresponding to a participant, and receiving a real-time video stream fed back by at least one camera terminal which is in the same wireless local area network environment with the participant terminal, wherein the shooting picture of the camera terminal covers the shooting picture of the participant terminal;
performing dynamic personnel detection on the real-time video stream, and judging whether the dynamic personnel is the participant when the dynamic personnel is detected;
if not, judging whether the position of the dynamic personnel is close to the background image or not;
if so, acquiring the current walking track of the dynamic personnel and predicting the target walking track of the dynamic personnel when the dynamic personnel reach the area corresponding to the background image according to the walking track;
and carrying out fuzzy processing on the image block region corresponding to the background image of the target walking track.
As a preferred mode of the present invention, after blurring the image block region of the background image corresponding to the target walking track, the method further includes:
detecting the real-time position of the moving person within the image block region and, according to that position, removing the blur from the redundant area not occupied by the person.
As a preferred mode of the present invention, before judging whether the moving person is near the background image, the method further includes:
acquiring attire images of the participant and of the participants in the video conference with him or her, and judging from these images whether uniform, standardized attire is worn;
if so, acquiring an attire image of the moving person and judging whether it matches the attire of the participant and of the other participants;
if not, executing the step of judging whether the position of the moving person is near the background image.
As a preferred mode of the present invention, the method further includes:
acquiring positioning data of at least one mobile terminal on the same wireless local area network as the participant terminal;
judging from the positioning data whether the mobile terminal is in a moving state;
if so, judging whether the position given by the positioning data is near the background image;
if so, predicting, from the movement track of the positioning data, the target movement track it will follow on reaching the area corresponding to the background image;
and blurring the image block region of the background image corresponding to the target movement track.
As a preferred mode of the present invention, the method further includes:
detecting a specific gesture of the participant and acquiring the object indicated by the gesture;
and blurring the image block region corresponding to that object.
An image blurring device for use in a multimedia video conference, comprising:
a background image acquisition module, configured to acquire the background image of the video picture corresponding to the participant when the participant terminal is detected to be in a video conference;
a real-time video stream receiving module, configured to receive a real-time video stream fed back by at least one camera terminal on the same wireless local area network as the participant terminal, wherein the shooting picture of the camera terminal covers the shooting picture of the participant terminal;
a moving-person detection module, configured to perform moving-person detection on the real-time video stream;
a participant judging module, configured to judge, when a moving person is detected, whether the moving person is the participant;
a first position judging module, configured to judge, when the moving person is judged not to be the participant, whether the person's position is near the background image;
a current walking track acquisition module, configured to acquire the current walking track of the moving person when the person's position is judged to be near the background image;
a target walking track prediction module, configured to predict, from that track, the target walking track the person will follow on reaching the area corresponding to the background image;
and a first blurring module, configured to blur the image block region of the background image corresponding to the target walking track.
As a preferred embodiment of the present invention, the device further comprises:
a real-time position detection module, configured to detect the real-time position of the moving person within the image block region;
and a blur elimination module, configured to remove, according to that position, the blur from the redundant area not occupied by the person.
As a preferred embodiment of the present invention, the device further comprises:
a first attire image acquisition module, configured to acquire attire images of the participant and of the participants in the video conference with him or her;
a first attire image judging module, configured to judge from these images whether uniform, standardized attire is worn;
a second attire image acquisition module, configured to acquire an attire image of the moving person when uniform, standardized attire is judged to be worn;
and a second attire image judging module, configured to judge whether the moving person's attire matches that of the participant and of the other participants.
As a preferred embodiment of the present invention, the device further comprises:
a positioning data acquisition module, configured to acquire positioning data of at least one mobile terminal on the same wireless local area network as the participant terminal;
a moving state judging module, configured to judge from the positioning data whether the mobile terminal is in a moving state;
a second position judging module, configured to judge, when the mobile terminal is judged to be in a moving state, whether the position given by the positioning data is near the background image;
a target movement track prediction module, configured to predict, when that position is judged to be near the background image, the target movement track the positioning data will follow on reaching the area corresponding to the background image, from its movement track;
and a second blurring module, configured to blur the image block region of the background image corresponding to the target movement track.
As a preferred embodiment of the present invention, the device further comprises:
a specific gesture detection module, configured to detect a specific gesture of the participant and acquire the object indicated by the gesture;
and a third blurring module, configured to blur the image block region corresponding to that object.
The invention achieves the following beneficial effects:
1. By setting a proximity judgment, i.e. defining a proximity area around the background image and judging whether a person is in it, the blurring of the area that needs it can be triggered in advance, improving blurring efficiency.
2. By predicting the target walking track and blurring the corresponding image block region of the background image, a moving person (non-participant) who enters the background image is covered by the blurred region; the other participants in the video conference cannot see the moving person in the participant's video picture, which gives good privacy protection and effectively improves the efficiency of the video conference.
3. The blur is removed from the redundant area not occupied by the moving person: only the area the person actually occupies in the background image remains blurred, while the blur on the other areas is eliminated, so the picture returns to normal there while the moving person stays hidden.
4. By recognizing and comparing the attire of each participant and of the moving person, a person whose attire matches is treated as a member entitled to join the video conference; otherwise the person is considered unauthorized and is blurred, improving blurring precision and user experience.
5. By detecting a mobile terminal and acquiring its positioning data, and, when that position is judged to be near the background image, predicting from its movement track the target movement track it will follow on reaching the area corresponding to the background image and blurring the corresponding image block region, the carrier of the mobile terminal (a non-participant) who enters the background image is covered by the blurred region, so the other participants cannot see him or her in the participant's video picture, again giving good privacy protection and effectively improving conference efficiency.
6. By detecting a specific gesture of the participant, acquiring the object it indicates and blurring the corresponding image block region, an object the participant wants hidden can be blurred on demand during the video conference, improving user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flowchart of an image blurring processing method for use in a multimedia video conference according to an embodiment of the present invention;
fig. 2 is a schematic view of application of a shot picture of a camera terminal and a shot picture of a participant terminal according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of an image blurring processing method for use in a multimedia video conference according to a second embodiment of the present invention;
fig. 4 is a schematic flowchart of an image blurring processing method for use in a multimedia video conference according to a third embodiment of the present invention;
fig. 5 is a schematic flowchart of an image blurring processing method for use in a multimedia video conference according to a fourth embodiment of the present invention;
fig. 6 is a first schematic structural diagram of an image blurring device for use in a multimedia video conference according to a fifth embodiment of the present invention;
fig. 7 is a second schematic structural diagram of an image blurring device for use in a multimedia video conference according to the fifth embodiment of the present invention;
fig. 8 is a third schematic structural diagram of an image blurring device for use in a multimedia video conference according to the fifth embodiment of the present invention;
fig. 9 is a fourth schematic structural diagram of an image blurring device for use in a multimedia video conference according to the fifth embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, or steps. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, so their repeated description is omitted. The structures shown in the drawings are illustrative only and need not include all of the elements; for example, some components may be split and others combined to show a single device.
The method provided by the present invention may be implemented by software installed or provided in a device, typically an application program (APP), or by the operating system running on the device. The participant terminal mentioned in the embodiments of the present invention is a terminal on which multimedia conference software is installed and currently running for a conference; a mobile terminal is any other terminal in the same space as the participant terminal. A mobile terminal may itself be capable of acting as a participant terminal, and is treated as one once it runs the multimedia conference software to join a conference.
Example one
Referring to figs. 1 and 2, this embodiment provides an image blurring method for use in a multimedia video conference, comprising the following steps:
S10, when the participant terminal is detected to be in a video conference, acquiring the background image of the video picture corresponding to the participant, and receiving a real-time video stream fed back by at least one camera terminal on the same wireless local area network as the participant terminal, wherein the shooting picture of the camera terminal covers the shooting picture of the participant terminal.
The participant in this embodiment is a person currently in a multimedia video conference, i.e. conferencing with other participants through a participant terminal. The participant terminal (specifically the APP or operating system) monitors the running state of the video conference software; when it detects that a conference is in progress, it acquires the background image of the video picture corresponding to the participant, the picture comprising the participant and the background. The participant terminal then sends a camera terminal detection instruction to the router it is connected to. On receiving the instruction, the router checks whether any terminal connected to it is a camera terminal, such as a webcam; if so, that camera terminal is considered to be on the same wireless local area network as the participant terminal, and the router sends it a real-time video stream feedback instruction. The camera terminal then transmits its captured real-time video stream to the participant terminal.
In this embodiment, the shooting angle of the camera terminal is wider than that of the participant terminal and at least covers the participant terminal's picture; that is, the shooting picture of the participant terminal is a part of the camera terminal's picture.
S11, performing moving-person detection on the real-time video stream; when a moving person is detected, judging whether the person is the participant, and if not, executing S12.
Specifically, after receiving the real-time video stream from the camera terminal, the participant terminal detects moving persons in the stream, i.e. uses portrait recognition to identify whether a moving person is present. If one is present, a face image of that person is extracted and compared with the participant's face image to decide whether the person is the participant. If so, the process simply ends (for example, the participant left briefly and is returning to the conference); if not, S12 is executed.
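As an illustrative sketch of the moving-person detection step (the face comparison against the participant is omitted, and the names `detect_motion`, `diff_thresh` and `area_thresh` are hypothetical, not from the patent), frame differencing between consecutive grayscale frames might look like:

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, cur_frame: np.ndarray,
                  diff_thresh: int = 25, area_thresh: int = 500) -> bool:
    """Report motion when enough pixels changed between two frames.

    A real system would follow this with portrait recognition and a
    face comparison against the participant's enrolled face image.
    """
    # Widen to int16 so uint8 subtraction cannot wrap around.
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_pixels = int((diff > diff_thresh).sum())
    return changed_pixels >= area_thresh
```

A production implementation would typically use a learned background-subtraction model rather than raw differencing.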
S12, judging whether the position of the moving person is near the background image; if so, executing S13.
Specifically, a proximity area may be defined around the background image; if the moving person's position falls within this area, the person is considered to be near the background image.
The reason for this proximity judgment is that in the prior art a person is only recognized after walking into the shot picture (for example, the picture of the participant terminal), which delays the subsequent blurring. By defining a proximity area around the background image and judging whether a person is in it, the blurring of the area that needs it can be triggered in advance, improving blurring efficiency.
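A minimal version of the proximity judgment, assuming the background image occupies an axis-aligned box inside the camera terminal's wider picture (the helper name `is_near_background` and the `margin` parameter are illustrative), could be:

```python
def is_near_background(person_xy, bg_box, margin):
    """True when (x, y) lies in the margin band around bg_box
    (x0, y0, x1, y1) but not yet inside the box itself."""
    x, y = person_xy
    x0, y0, x1, y1 = bg_box
    in_expanded = (x0 - margin <= x <= x1 + margin and
                   y0 - margin <= y <= y1 + margin)
    in_box = x0 <= x <= x1 and y0 <= y <= y1
    return in_expanded and not in_box
```

A person already inside the box is handled by the later real-time tracking step, not by this pre-trigger.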
S13, acquiring the current walking track of the moving person and, from that track, predicting the target walking track the person will follow on reaching the area corresponding to the background image.
The current walking track is the track of the moving person before entering the background image. After it is acquired, the track the person will follow on reaching the area corresponding to the background image is predicted from it and taken as the target walking track. One possible prediction method is to train in advance on many walking tracks (each being the track before entering the background image together with the track after entering it) to learn, for a given pre-entry track, the most probable post-entry track, and to use that as the predicted target walking track. Other approaches are also possible: for example, the pre-entry track may simply be extended linearly into the background image, or the whole background image apart from the area occupied by the participant may be taken as the target walking track.
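The linear-extension alternative mentioned above can be sketched as follows (a toy stand-in for the trained predictor; `extrapolate_track` is a hypothetical name, and tracks are plain lists of (x, y) points):

```python
def extrapolate_track(track, n_steps):
    """Linearly extend a walking track [(x, y), ...] by n_steps,
    using the average step of the observed points."""
    (x0, y0), (xn, yn) = track[0], track[-1]
    k = len(track) - 1                      # number of observed steps
    dx, dy = (xn - x0) / k, (yn - y0) / k   # mean step vector
    return [(xn + dx * i, yn + dy * i) for i in range(1, n_steps + 1)]
```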
S14, blurring the image block region of the background image corresponding to the target walking track.
After the target walking track is predicted, the corresponding image block region in the background image is obtained, specifically the image blocks of the background image corresponding to the spatial area the track passes through; this region is then blurred, for example mosaicked, or hidden with any other blurring method. After the blurring, a moving person (non-participant) who enters the background image is covered by the blurred region, and the other participants in the video conference cannot see him or her in the participant's video picture, which gives good privacy protection and effectively improves the efficiency of the video conference.
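The mosaic variant of the blurring could look like this minimal sketch, operating on a grayscale frame stored as a NumPy array (the name `mosaic_region` and the `block` size are illustrative, not from the patent):

```python
import numpy as np

def mosaic_region(img: np.ndarray, box, block: int = 8) -> np.ndarray:
    """Pixelate box=(x0, y0, x1, y1) in-place: each block x block tile
    is replaced by its mean value, hiding whatever it contains."""
    x0, y0, x1, y1 = box
    for ty in range(y0, y1, block):
        for tx in range(x0, x1, block):
            tile = img[ty:min(ty + block, y1), tx:min(tx + block, x1)]
            tile[...] = int(tile.mean())    # flatten the tile to one value
    return img
```

For colour frames the same loop would average per channel; any blur strong enough to hide a person works equally well here.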
As a preferred mode of this embodiment, after S14 the method further includes: detecting the real-time position of the moving person within the image block region and, according to that position, removing the blur from the redundant area not occupied by the person.
Specifically, in S14 the target walking track the moving person is expected to follow in the background image is blurred in advance in the corresponding image block region; but once the person actually enters the background image the track may change, so the person's position must be tracked in real time.
The blur is adjusted in this step because the previous step could not know exactly which area of the background image the person would occupy: a prediction was used to blur the target walking track in advance and so avoid any delay in blurring. Once the real position is known, the blur outside the person can be removed.
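A sketch of the blur-elimination step, assuming the moving person's real-time position is available as an axis-aligned box (all names are illustrative; a real system would likely use a segmentation mask rather than a box):

```python
import numpy as np

def restore_outside_person(blurred: np.ndarray, original: np.ndarray,
                           person_box, blur_box) -> np.ndarray:
    """Within blur_box, restore the original pixels everywhere except
    the sub-region person_box currently occupied by the moving person."""
    bx0, by0, bx1, by1 = blur_box
    px0, py0, px1, py1 = person_box
    keep = blurred[py0:py1, px0:px1].copy()          # keep person blurred
    blurred[by0:by1, bx0:bx1] = original[by0:by1, bx0:bx1]
    blurred[py0:py1, px0:px1] = keep                 # re-apply person blur
    return blurred
```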
Example two
Referring to fig. 3, this embodiment adds the following steps before S12 of the first embodiment:
S20, acquiring attire images of the participant and of the participants in the video conference with him or her.
S21, judging from the attire images whether uniform, standardized attire is worn; if so, executing S22.
S22, acquiring an attire image of the moving person.
S23, judging whether the moving person's attire matches that of the participant and of the other participants; if not, executing S12, and if so, ending the flow.
In this embodiment, attire images of all the participants — the participant and everyone in the video conference with him or her — are acquired, specifically images of the upper body, and it is judged whether the attire is uniform and standardized, for example whether the type, colour and style of the clothing are the same; if so, the attire is treated as a uniform standard. An attire image of the moving person, again of the upper body, is then acquired and compared with the participants' attire, i.e. whether the type, colour and style of the clothing match. If they do, the moving person is treated as a member entitled to join the video conference; if not, the person is considered unauthorized and must be blurred.
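As a crude, hypothetical stand-in for the attire comparison (the patent compares type, colour and style; here only a grayscale histogram intersection of upper-body patches is used, and `attire_matches` with its `bins`/`thresh` parameters is illustrative):

```python
import numpy as np

def attire_matches(uniform_patch: np.ndarray, person_patch: np.ndarray,
                   bins: int = 8, thresh: float = 0.9) -> bool:
    """Compare clothing patches by normalized histogram intersection:
    1.0 means identical gray-level distributions, 0.0 means disjoint."""
    h1, _ = np.histogram(uniform_patch, bins=bins, range=(0, 256))
    h2, _ = np.histogram(person_patch, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum()) >= thresh
```

A real deployment would compare colour channels and likely a learned clothing-style embedding instead of raw histograms.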
Example three
Referring to fig. 4, this embodiment adds the following steps to the first embodiment:
S30, acquiring positioning data of at least one mobile terminal that is in the same wireless local area network environment as the participant terminal.
The participant terminal sends a mobile-terminal detection instruction to the router it is connected to. On receiving the instruction, the router checks whether any of its connected terminals is a mobile terminal, such as a mobile phone. If one is detected, that mobile terminal is considered to be in the same wireless local area network environment as the participant terminal, and the router sends a positioning-data acquisition instruction to it. The mobile terminal then acquires its current positioning data through its GPS module and transmits the data to the participant terminal in real time.
S31, judging, from the positioning data, whether the mobile terminal is in a moving state; if so, executing S32.
That is, whether the positioning data changes continuously is judged; if it does, the mobile terminal, and hence the person carrying it, is considered to be in a moving state.
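As a minimal sketch of this judgment, one can treat "continuously changed" as every consecutive positioning fix moving by more than a small jitter threshold. The planar-metre coordinates and the threshold value are illustrative assumptions; the embodiment does not fix either.

```python
import math

# Hypothetical sketch of S31: the mobile terminal is considered to be
# in a moving state when successive GPS fixes keep changing by more
# than a GPS-jitter threshold.

def is_moving(fixes, min_step=2.0):
    """fixes: list of (x, y) positions in metres, oldest first.
    Returns True when every consecutive pair of fixes is farther
    apart than `min_step`."""
    steps = [math.dist(a, b) for a, b in zip(fixes, fixes[1:])]
    return bool(steps) and all(s > min_step for s in steps)
```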
S32, judging whether the position given by the positioning data is close to the background image; if so, executing S33.
The judgment here is similar to S12. Specifically, a near area is set around the background image; if the position given by the positioning data falls within that area, the positioning data is considered close to the background image.
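The near-area test can be sketched as a padded bounding-box check. Representing the background-image area as a rectangle and the size of the margin are both assumptions made for illustration:

```python
# Hypothetical sketch of the "near area" test used in S12 and S32.

def near_background(pos, region, margin=50.0):
    """region: (x0, y0, x1, y1) bounding box of the area covered by
    the background image; pos: (x, y). True when pos falls inside the
    region grown by `margin` on every side, i.e. inside the near area."""
    x0, y0, x1, y1 = region
    x, y = pos
    return (x0 - margin) <= x <= (x1 + margin) and \
           (y0 - margin) <= y <= (y1 + margin)
```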
S33, predicting, from the moving track of the positioning data, the target moving track to be followed on reaching the area corresponding to the background image.
The prediction here is similar to S13. Specifically, a number of moving tracks of positioning data (each the concatenation of the track before entering the background image and the track after entering it) are trained in advance, yielding, for a given pre-entry track, the most probable post-entry track, which is taken as the predicted target moving track. Other manners may also be adopted: for example, the pre-entry moving track may simply be extended linearly into the background image, or the whole of the background image except the area where the participant is located may be taken as the target moving track. The embodiment of the present invention is not limited to a specific prediction manner.
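The linear-extension fallback mentioned above can be sketched in a few lines. The trained most-probable-track predictor would need historical data; this shows only the simple alternative, with all names illustrative:

```python
import numpy as np

# Hypothetical sketch of the linear-extension predictor: repeat the
# last observed displacement to extend the pre-entry track in a
# straight line into the background-image area.

def extrapolate_linear(track, steps):
    """track: (N, 2) array-like of observed points, oldest first,
    N >= 2. Returns a (steps, 2) array of predicted future points."""
    track = np.asarray(track, dtype=float)
    step = track[-1] - track[-2]                       # last displacement
    return track[-1] + step * np.arange(1, steps + 1)[:, None]
```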
S34, blurring the image block region of the background image corresponding to the target moving track.
The processing here is similar to S14. After the target moving track is predicted, the corresponding image block region in the background image is obtained, namely the image block whose spatial position covers the area where the target moving track lies, and that region is blurred, for example with mosaic processing or any other blurring method that hides the image content in the region. After the blurring, when the carrier of the mobile terminal (a non-participant) enters the background image, the carrier is covered by the blurred area, and the other parties to the video conference cannot see the carrier in the participant's video picture. This provides good privacy protection and effectively improves the efficiency of the video conference.
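The mosaic variant of this blurring can be sketched by replacing each cell of the target block with its mean colour, which hides whoever walks through the region. The cell size and float-RGB representation are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of the mosaic blur in S14/S34: pixelate the
# image block region into coarse squares of constant colour.

def mosaic(img, box, cell=8):
    """Return a copy of img ((H, W, 3) array) with the block
    box = (x0, y0, x1, y1) pixelated into `cell`-sized squares,
    each filled with its own mean colour."""
    x0, y0, x1, y1 = box
    out = img.astype(float).copy()
    for y in range(y0, y1, cell):
        for x in range(x0, x1, cell):
            block = out[y:min(y + cell, y1), x:min(x + cell, x1)]
            block[...] = block.mean(axis=(0, 1), keepdims=True)
    return out
```

Any other region-hiding operation (a strong Gaussian blur, a solid fill) would serve the same purpose in this step.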
Example four
Referring to fig. 4: in this embodiment, on the basis of the first embodiment, the method further includes the following steps:
S40, detecting a specific gesture of the participant and acquiring the object indicated by the specific gesture.
S41, blurring the image block region corresponding to the object.
In the embodiment of the present invention, the specific gesture is a preset gesture, for example the index finger extended with the other fingers closed. After the participant is detected making the specific gesture, the object it indicates is acquired, namely the closest object along the direction in which the extended index finger points. The image block region corresponding to that object in the background image is then obtained and blurred. In this way, objects that need to be hidden can be blurred on demand during the video conference, improving the user's experience.
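One way to formalize "the closest object along the pointing direction" is to project each candidate object's centroid onto the pointing ray and pick the candidate in front of the fingertip with the smallest perpendicular offset. Fingertip position and pointing direction would come from a hand-pose model; everything below, including the selection rule itself, is an illustrative assumption:

```python
import numpy as np

# Hypothetical sketch of the object-selection step of S40.

def pointed_object(tip, direction, centroids):
    """tip, direction: 2-D fingertip position and pointing direction;
    centroids: dict mapping object name to its 2-D centroid. Returns
    the name of the candidate in front of the fingertip with the
    smallest perpendicular distance to the pointing ray, or None."""
    tip = np.asarray(tip, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    best, best_off = None, np.inf
    for name, c in centroids.items():
        v = np.asarray(c, float) - tip
        t = v @ d                        # distance along the ray
        if t <= 0:
            continue                     # behind the fingertip
        off = np.linalg.norm(v - t * d)  # perpendicular offset from ray
        if off < best_off:
            best, best_off = name, off
    return best
```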
Example five
Referring to fig. 5: this embodiment provides a dynamic image blurring device for use in a multimedia video conference, comprising:
the background image obtaining module 501 is configured to obtain a background image in a video picture corresponding to a participant when it is detected that the participant terminal performs a video conference.
A real-time video stream receiving module 502, configured to receive a real-time video stream fed back by at least one camera terminal located in the same wireless local area network environment as the participant terminal, where the shooting picture of the camera terminal covers the shooting picture of the participant terminal.
And a dynamic person detection module 503, configured to perform dynamic person detection on the real-time video stream.
And the participant judging module 504 is configured to judge whether the dynamic person is the participant when the dynamic person is detected.
And a first position determining module 505, configured to determine whether the position of the dynamic person is close to the background image when it is determined that the dynamic person is not the participant.
A current walking track obtaining module 506, configured to obtain a current walking track of the dynamic person when it is determined that the position of the dynamic person is close to the background image.
And the target walking track prediction module 507, configured to predict, from the current walking track, the target walking track to be followed on reaching the area corresponding to the background image.
A first blurring module 508, configured to blur the image block region of the background image corresponding to the target walking track.
As a preferred mode of the embodiment of the present invention, the apparatus further includes:
a real-time location detection module 509, configured to detect a real-time location of the dynamic person in the tile region.
And the blur elimination module 510, configured to remove, according to the real-time position, the blur from the redundant area beyond the dynamic person.
Referring to fig. 6: as a preferred mode of the embodiment of the present invention, the apparatus further includes:
a first dressing image obtaining module 511, configured to obtain dressing images of the participant and the conference participants who have performed a video conference with the participant.
A first dressing image determining module 512, configured to determine whether the uniform standardized dressing is present according to the dressing image.
And a second dressing image obtaining module 513, configured to obtain the dressing image of the dynamic person when it is determined that the dressing image is a uniform standardized dressing.
A second dressing image determination module 514, configured to determine whether the dressing image of the dynamic person matches the dress of the participant and of the conference participants in the video conference with the participant.
Referring to fig. 7: as a preferred mode of the embodiment of the present invention, the apparatus further includes:
the positioning data obtaining module 515 is configured to obtain positioning data of at least one mobile terminal that is located in the same wlan environment with the participant terminal.
The moving state determining module 516 is configured to determine whether the mobile terminal is in a moving state according to the positioning data.
The second position determining module 517, configured to determine, when the mobile terminal is judged to be in a moving state, whether the position given by the positioning data is close to the background image.
A target moving track predicting module 518, configured to predict, from the moving track of the positioning data and when the position given by the positioning data is judged close to the background image, the target moving track to be followed on reaching the area corresponding to the background image.
And the second blurring module 519, configured to blur the image block region of the background image corresponding to the target moving track.
Referring to fig. 8: as a preferred mode of the embodiment of the present invention, the apparatus further includes:
and the specific gesture detection module 520 is used for detecting the specific gesture of the participant and acquiring the object indicated by the specific gesture.
And the third blurring module 521 is configured to perform blurring processing on the tile region corresponding to the object.
The implementation process of this embodiment is the same as that of the first through fourth embodiments; refer to the descriptions above for details.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (10)
1. An image blurring processing method used in a multimedia video conference, the method comprising:
when it is detected that a participant terminal is conducting a video conference, acquiring a background image in the video picture corresponding to the participant, and receiving a real-time video stream fed back by at least one camera terminal located in the same wireless local area network environment as the participant terminal, wherein the shooting picture of the camera terminal covers the shooting picture of the participant terminal;
performing dynamic person detection on the real-time video stream, and, when a dynamic person is detected, judging whether the dynamic person is the participant;
if not, judging whether the position of the dynamic person is close to the background image;
if so, acquiring the current walking track of the dynamic person and predicting, from that walking track, the target walking track to be followed on reaching the area corresponding to the background image;
and blurring the image block region of the background image corresponding to the target walking track.
2. The method of claim 1, wherein after blurring the image block region of the background image corresponding to the target walking track, the method further comprises:
detecting the real-time position of the dynamic person within the image block region, and removing, according to the real-time position, the blur from the redundant area beyond the dynamic person.
3. The method of claim 1, wherein before judging whether the position of the dynamic person is close to the background image, the method further comprises:
acquiring dressing images of the participant and of the conference participants in the video conference with the participant, and judging from the dressing images whether uniform standardized dress is worn;
if so, acquiring a dressing image of the dynamic person and judging whether it matches the dress of the participant and of the conference participants;
if not, executing the step of judging whether the position of the dynamic person is close to the background image.
4. The method of claim 1, wherein the method further comprises:
acquiring positioning data of at least one mobile terminal located in the same wireless local area network environment as the participant terminal;
judging, from the positioning data, whether the mobile terminal is in a moving state;
if so, judging whether the position given by the positioning data is close to the background image;
if so, predicting, from the moving track of the positioning data, the target moving track to be followed on reaching the area corresponding to the background image;
and blurring the image block region of the background image corresponding to the target moving track.
5. The method of claim 1, wherein the method further comprises:
detecting a specific gesture of the participant and acquiring the object indicated by the specific gesture;
and blurring the image block region corresponding to the object.
6. A dynamic image blurring apparatus for use in a multimedia video conference, comprising:
the background image acquisition module is used for acquiring background images in video pictures corresponding to participants when the participant terminal is detected to carry out a video conference;
the real-time video stream receiving module is used for receiving a real-time video stream fed back by at least one camera terminal which is located in the same wireless local area network environment with the participant terminal, wherein the shooting picture of the camera terminal covers the shooting picture of the participant terminal;
the dynamic personnel detection module is used for carrying out dynamic personnel detection on the real-time video stream;
the participant judging module is used for judging whether the dynamic personnel is the participant when the dynamic personnel is detected;
the first position judgment module is used for judging whether the position of the dynamic personnel is close to the background image or not when the dynamic personnel is judged not to be the participant;
the current walking track obtaining module is used for obtaining the current walking track of the dynamic personnel when the position of the dynamic personnel is judged to be close to the background image;
the target walking track prediction module is used for predicting, according to the walking track, the target walking track to be followed on reaching the area corresponding to the background image;
and the first blurring processing module is used for blurring the image block region of the background image corresponding to the target walking track.
7. The apparatus of claim 6, further comprising:
the real-time position detection module is used for detecting the real-time position of the dynamic personnel in the image block area;
and the blur elimination module is used for removing, according to the real-time position, the blur from the redundant area beyond the dynamic person.
8. The apparatus of claim 6, further comprising:
the first dressing image acquisition module is used for acquiring dressing images of the participants and participants who have a video conference with the participants;
the first dressing image judging module is used for judging whether the uniform standardized dressing is carried out according to the dressing image;
the second dressing image acquisition module is used for acquiring the dressing image of the dynamic person when the dressing image is judged to be the uniform standard dressing;
and the second dressing image judging module is used for judging whether the dressing image of the dynamic person matches the dress of the participant and of the conference participants in the video conference with the participant.
9. The apparatus of claim 6, further comprising:
the positioning data acquisition module is used for acquiring positioning data of at least one mobile terminal which is in the same wireless local area network environment with the participant terminal;
the mobile state judging module is used for judging whether the mobile state is the mobile state according to the positioning data;
the second position judging module is used for judging whether the position of the positioning data is close to the background image or not when the positioning data is judged to be in the moving state;
the target moving track prediction module is used for predicting, according to the moving track of the positioning data and when the position given by the positioning data is judged to be close to the background image, the target moving track to be followed on reaching the area corresponding to the background image;
and the second blurring processing module is used for blurring the image block region of the background image corresponding to the target moving track.
10. The apparatus of claim 6, further comprising:
the specific gesture detection module is used for detecting specific gestures of the participants and acquiring objects indicated by the specific gestures;
and the third fuzzy processing module is used for carrying out fuzzy processing on the image block region corresponding to the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010474769.2A CN111726559A (en) | 2020-05-29 | 2020-05-29 | Image blurring processing method and device used in multimedia video conference |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111726559A true CN111726559A (en) | 2020-09-29 |
Family
ID=72565434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010474769.2A Withdrawn CN111726559A (en) | 2020-05-29 | 2020-05-29 | Image blurring processing method and device used in multimedia video conference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111726559A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112261347A (en) * | 2020-10-14 | 2021-01-22 | 浙江大华技术股份有限公司 | Method and device for adjusting participation right, storage medium and electronic device |
CN113727055A (en) * | 2021-08-23 | 2021-11-30 | 董玉兰 | Video conference system and terminal equipment |
CN113727055B (en) * | 2021-08-23 | 2024-03-01 | 董玉兰 | Video conference system and terminal equipment |
CN114390241A (en) * | 2022-01-14 | 2022-04-22 | 西安万像电子科技有限公司 | VR (virtual reality) teleconference method and device |
CN115333879A (en) * | 2022-08-09 | 2022-11-11 | 深圳市研为科技有限公司 | Teleconference method and system |
CN115333879B (en) * | 2022-08-09 | 2023-11-07 | 深圳市研为科技有限公司 | Remote conference method and system |
CN116708709A (en) * | 2023-08-01 | 2023-09-05 | 深圳市海域达赫科技有限公司 | Communication system and method based on cloud service |
CN116708709B (en) * | 2023-08-01 | 2024-03-08 | 深圳市海域达赫科技有限公司 | Communication system and method based on cloud service |
CN117201724A (en) * | 2023-10-07 | 2023-12-08 | 东莞市智安家科技有限公司 | Video conference processing method and system |
CN117240990A (en) * | 2023-11-15 | 2023-12-15 | 辽宁牧龙科技有限公司 | Communication method and terminal equipment for video conference |
CN117240990B (en) * | 2023-11-15 | 2024-02-20 | 辽宁牧龙科技有限公司 | Communication method and terminal equipment for video conference |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111726559A (en) | Image blurring processing method and device used in multimedia video conference | |
US9948893B2 (en) | Background replacement based on attribute of remote user or endpoint | |
EP2172016B1 (en) | Techniques for detecting a display device | |
CN108833785A (en) | Fusion method, device, computer equipment and the storage medium of multi-view image | |
CN112347941B (en) | Motion video collection intelligent generation and distribution method based on 5G MEC | |
CN106775258A (en) | The method and apparatus that virtual reality is interacted are realized using gesture control | |
CN105528786A (en) | Image processing method and device | |
KR101768532B1 (en) | System and method for video call using augmented reality | |
EP2838257B1 (en) | A method for generating an immersive video of a plurality of persons | |
CN102710549A (en) | Method, terminal and system for establishing communication connection relationship through photographing | |
US10721277B2 (en) | Device pairing techniques using digital watermarking | |
EP2953351B1 (en) | Method and apparatus for eye-line augmentation during a video conference | |
CN110163055A (en) | Gesture identification method, device and computer equipment | |
CN103873759B (en) | A kind of image pickup method and electronic equipment | |
CN111524086B (en) | Moving object detection device, moving object detection method, and storage medium | |
CN113194253A (en) | Shooting method and device for removing image reflection and electronic equipment | |
CN103475850A (en) | Window shield identification method for sharing application program | |
CN109145878B (en) | Image extraction method and device | |
CN114390206A (en) | Shooting method and device and electronic equipment | |
CN112330717B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN114125226A (en) | Image shooting method and device, electronic equipment and readable storage medium | |
CN110248144A (en) | Control method, device, equipment and the computer readable storage medium of video conference | |
CN113268211A (en) | Image acquisition method and device, electronic equipment and storage medium | |
KR20090014465A (en) | Method and system for providing service to hide object during video call | |
CN101753854A (en) | Image communication method and electronic device using same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20200929 |