CN114332972A - Monitoring image processing method and device, electronic equipment and readable storage medium - Google Patents

Monitoring image processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN114332972A
Authority
CN
China
Prior art keywords
image
simulation model
identity
personnel
monitoring image
Legal status: Pending
Application number
CN202011078856.2A
Other languages
Chinese (zh)
Inventor
廖永汉
吴潼
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN202011078856.2A
Publication of CN114332972A


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Alarm Systems (AREA)

Abstract

The application discloses a monitoring image processing method, a monitoring image processing device, an electronic device and a computer-readable storage medium. The method includes: acquiring a monitoring image and performing identity recognition processing on a personnel image in the monitoring image to obtain an identity recognition result; determining a head simulation model according to the identity recognition result, generating a corresponding human body behavior simulation model from the personnel image, and constructing a personnel simulation model from the head simulation model and the human body behavior simulation model; and replacing the personnel image with the personnel simulation model to obtain a privacy-removed monitoring image. The personnel simulation model exhibits the same action behavior as the personnel image and also indicates the specific identity of the person in the image. Because the private information in a monitoring picture is essentially related to a person's body, replacing the personnel image protects privacy; at the same time, the identity and specific actions of the person remain clearly recognizable, so the information in the monitoring image is retained and the authenticity of the image is improved.

Description

Monitoring image processing method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a monitoring image processing method, a monitoring image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
As the monitoring market grows, monitoring devices are used in an increasing variety of scenarios, and many families choose to install monitoring equipment in areas such as living rooms and kitchens. Because the monitoring device is directly connected to the network, users worry that the privacy of the monitoring picture is not secure and that their behavior in non-public places may be uploaded to the network. To address this, the related art processes the monitoring picture with technologies such as OSD (On Screen Display) masking and intelligent coding. However, these technologies all encrypt, blur or overlay part of the content of the monitoring picture. As a result, when the monitoring picture is viewed and a human body appears in a processed area, the viewer cannot tell whether that person is the owner, an acquaintance or an illegal intruder, nor judge the person's specific actions, so accurate and effective information cannot be obtained from the monitoring picture. The related art therefore suffers from poor picture authenticity and considerable information loss.
Disclosure of Invention
In view of the above, an object of the present application is to provide a monitoring image processing method, a monitoring image processing apparatus, an electronic device and a computer-readable storage medium, which can avoid information loss in a monitoring image and improve image authenticity.
In order to solve the above technical problem, the present application provides a monitoring image processing method, including:
acquiring a monitoring image, and performing identity recognition processing on a personnel image in the monitoring image to obtain an identity recognition result;
determining a corresponding head simulation model according to the identity recognition result, generating a corresponding human behavior simulation model by using the personnel image, and constructing a human simulation model by using the head simulation model and the human behavior simulation model;
and replacing the personnel image by using the personnel simulation model to obtain a privacy-removed monitoring image.
Optionally, the generating a corresponding human behavior simulation model by using the person image includes:
acquiring human body structure data corresponding to the identity recognition result;
and performing body posture simulation and/or behavior binding processing on a standard human body model based on the human body structure data and the personnel image to obtain the human body behavior simulation model.
Optionally, the determining a head simulation model according to the identification result includes:
determining head model information corresponding to the identity recognition result by utilizing a pre-stored identity-head model corresponding relation;
and determining the head simulation model corresponding to the human simulation model according to the head model information.
Optionally, the performing identity recognition processing on the person image in the monitoring image to obtain an identity recognition result includes:
carrying out facial feature extraction processing on the personnel image to obtain facial features;
matching the facial features with standard features corresponding to each identity information in a configuration file;
if the matched target standard features exist, determining target identity information corresponding to the target standard features as the identity recognition result;
and if the target standard characteristic does not exist, determining the identity recognition result as a no-result, and executing a preset operation.
Optionally, the replacing the person image with the person simulation model to obtain the privacy-removed monitoring image includes:
carrying out interference data coverage processing on the personnel images in the monitoring images to obtain interference images;
and replacing the interference area corresponding to the personnel image by using the personnel simulation model to obtain the privacy-removing monitoring image.
Optionally, after obtaining the identification result, the method further includes:
performing identity marking processing on the personnel image based on the identity recognition result;
correspondingly, after the monitoring image is obtained, before the personal image in the monitoring image is subjected to identity recognition processing to obtain an identity recognition result, the method further includes:
performing identity marking processing on the monitoring image based on a historical identity mark, and judging whether the personnel image has the identity mark;
if the identity mark exists, acquiring the identity recognition result according to the identity mark;
if the person image does not have the identity mark, judging whether the person image comprises a person head image or not;
and if the head image of the person is included, executing the step of carrying out identity recognition processing on the image of the person in the monitoring image to obtain an identity recognition result.
The present application also provides another monitoring image processing method, including:
acquiring a privacy-removed monitoring image;
determining identity information corresponding to the person simulation model in the privacy-removed monitoring image;
acquiring a real person head model corresponding to the identity information;
and replacing the head simulation model of the personnel simulation model by using the real head model to obtain a privacy recovery monitoring image.
The present application also provides a monitoring image processing apparatus, including:
the identification module is used for acquiring a monitoring image and carrying out identity identification processing on a personnel image in the monitoring image to obtain an identity identification result;
the simulation model obtaining module is used for determining a head simulation model according to the identity recognition result, generating a corresponding human behavior simulation model by using the personnel image, and constructing a personnel simulation model by using the head simulation model and the human behavior simulation model;
and the replacing module is used for replacing the personnel image by utilizing the personnel simulation model to obtain the privacy-removing monitoring image.
The present application also provides a monitoring image processing apparatus, including:
the image acquisition module is used for acquiring privacy-removed monitoring images;
the identity information determining module is used for determining identity information corresponding to the person simulation model in the privacy-removed monitoring image;
the model generating module is used for acquiring a real person head model corresponding to the identity information;
and the recovery module is used for replacing the head simulation model of the person simulation model by using the real person head model to obtain a privacy recovery monitoring image.
The present application further provides an electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the monitoring image processing method.
The present application also provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the above-described monitoring image processing method.
The monitoring image processing method provided by the application obtains a monitoring image, and carries out identity recognition processing on the personnel image in the monitoring image to obtain an identity recognition result; determining a head simulation model according to the identity recognition result, generating a corresponding human behavior simulation model by utilizing the personnel image, and constructing a personnel simulation model by utilizing the head simulation model and the human behavior simulation model; and replacing the personnel image by utilizing the personnel simulation model to obtain the privacy-removed monitoring image.
Therefore, after the monitoring image is obtained, the method identifies the personnel image corresponding to the personnel in the monitoring image, and the identification result can represent the specific identity of the personnel. And after the identity recognition result is obtained, generating a corresponding personnel simulation model according to the identity recognition result and the corresponding personnel image. The human body behavior simulation model can represent the same action behavior as the human image and is generated based on the identity recognition result, namely the head simulation model is obtained based on the identity recognition result, and therefore the specific identity of the human image can be represented. And replacing the personnel image by utilizing the personnel simulation model to obtain the privacy-removed monitoring image. Because the privacy information in the monitoring picture is basically related to the limbs of the person, the replacement of the person image can achieve the effect of protecting privacy. Meanwhile, the personnel images are replaced by utilizing the personnel simulation model, so that the identity and the specific action of the agent can be clearly known, the information in the monitored images is kept, the information loss is avoided, and the image authenticity is improved. The problems of poor picture authenticity and more information loss in the related technology are solved.
Correspondingly, after the privacy-removed monitoring image is obtained, part of the image in the privacy-removed monitoring image can be restored, so that the person corresponding to each person simulation model in the privacy-removed monitoring image can be visually identified. After the identity information of the person simulation model is determined, a real person head model corresponding to the identity information is generated, and the real person head model is used to replace the head simulation model of the person simulation model to obtain a privacy recovery monitoring image. The method can recover the head images of a privacy-removed monitoring image transmitted over the public network, so that the behavior of each person is expressed visually and the information in the monitoring image can be acquired accurately.
In addition, the application also provides a monitoring image processing device, electronic equipment and a computer readable storage medium, and the monitoring image processing device, the electronic equipment and the computer readable storage medium also have the beneficial effects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or related technologies of the present application, the drawings needed to be used in the description of the embodiments or related technologies are briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a monitoring image processing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of another monitoring image processing method provided in the embodiment of the present application;
FIG. 3 is a schematic diagram of a specific data flow provided by an embodiment of the present application;
FIG. 4 is a specific monitoring image provided by an embodiment of the present application;
FIG. 5 is a specific privacy-removed monitoring image provided by an embodiment of the present application;
fig. 6 is a specific privacy restoration monitoring image provided by an embodiment of the present application;
fig. 7 is a schematic structural diagram of a monitoring image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another monitoring image processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a monitoring image processing method according to an embodiment of the present disclosure. The method comprises the following steps:
s101: and acquiring a monitoring image, and performing identity recognition processing on the personnel image in the monitoring image to obtain an identity recognition result.
The monitoring image is an image obtained directly from the monitoring equipment, and its specific content is not limited. There may be one or more monitoring images; when there are multiple, they may be adjacent in time and form a monitoring video, or they may be discrete in time. In one embodiment, the monitoring image may be a real-time monitoring image, that is, a monitoring image obtained in real time at the current moment; in another embodiment, it may be a playback monitoring image, that is, a monitoring image acquired and stored before the current moment. It can be understood that the manner of acquiring the monitoring image differs according to when it was acquired. When the monitoring image is a real-time monitoring image, it can be obtained from the monitoring equipment that captures it, such as a camera. When the monitoring image is a playback monitoring image, it can be acquired from its storage path. Since data transmitted through an external network may cause privacy disclosure, the storage path should correspond to a storage medium in the local area network where the monitoring device is located, or to a local storage medium of the monitoring device.
It should be noted that the manner of acquiring the monitoring image may be related to the electronic device executing part or all of the steps of the monitoring image processing method provided in this embodiment. The specific form of the electronic device is not limited, and may be a computer, a server, or the monitoring device itself, for example. If the electronic device is a non-monitoring device, an acquisition request may be sent to the monitoring device when acquiring the monitoring image, and the monitoring image fed back by the monitoring device may be acquired. If the electronic device is a monitoring device, the monitoring image can be directly obtained or obtained from a storage path.
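As an illustration of the two acquisition paths just described, the following Python sketch dispatches between a real-time pull from the device and a read from a local storage path. The OpenCV calls, the RTSP URL and the file path are assumptions for illustration and are not part of the disclosure.

```python
# A minimal sketch of the two acquisition paths, assuming OpenCV is available.
from pathlib import Path
import cv2


def acquire_monitoring_image(mode: str,
                             device_url: str = "rtsp://192.168.1.10/stream",
                             storage_path: str = "/opt/nvr/playback/frame_0001.jpg"):
    """Return one monitoring image, either live from the device or from local storage."""
    if mode == "realtime":
        cap = cv2.VideoCapture(device_url)   # pull the current frame from the camera
        ok, frame = cap.read()
        cap.release()
        return frame if ok else None
    elif mode == "playback":
        # the storage path should sit inside the LAN or on the device itself,
        # so raw footage is not sent over an external network
        path = Path(storage_path)
        return cv2.imread(str(path)) if path.exists() else None
    raise ValueError(f"unknown mode: {mode}")
```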
After the monitoring image is obtained, identity recognition processing can be performed on the personnel image to obtain an identity recognition result. There may be one or more personnel images, that is, one or more personnel images can exist in one monitoring image. In order to avoid information loss in the monitoring image, identity recognition processing is needed when the monitoring image is processed, so that different personnel simulation models can subsequently be generated for the identities of different persons, and the identity corresponding to each personnel simulation model can be distinguished directly in the privacy-removed monitoring image obtained after replacement. For example, in one possible implementation, facial feature extraction may be adopted: the extracted facial features are compared with pre-stored features or standard features in a preset configuration file, and it is judged whether a pre-stored or standard feature that passes the comparison exists; if so, the identity corresponding to the facial features can be determined, and the identity recognition result corresponding to the personnel image can then be determined. In another feasible implementation, posture feature extraction may be adopted, that is, the posture features of the personnel image are extracted and classified by a trained network model to obtain the identity recognition result corresponding to the personnel image. In one embodiment, the identity recognition result may include identity information, which can take various forms, such as letters or numbers (e.g. A, B, 001), or a person's name or appellation (e.g. Zhang San, Li Ye, Dad, Mom).
Further, in a possible scenario, the step of performing identification processing on the person image in the monitoring image to obtain an identification result may include:
step 11: and carrying out facial feature extraction processing on the personnel image to obtain facial features.
Since the facial features have higher accuracy and are more favorable for distinguishing, in this embodiment, facial feature extraction processing is performed on the person image to obtain the facial features. The specific extraction process of the facial feature extraction process may refer to the related art, and is not limited herein.
Step 12: and matching the facial features with the standard features corresponding to the identity information in the configuration file.
The configuration file is used for recording data necessary for processing the monitoring image, and may include a plurality of configuration items, each corresponding to a user, for example. The data content and number recorded by each configuration item are not limited, and may include, for example, identity information, standard features, standard images, head images, partial human body structure data, version numbers, and the like, and the specific content may be set according to actual needs. The standard features are pre-entered features, and different identity information has corresponding standard features, namely the standard features extracted from the standard face image corresponding to each user. The embodiment does not limit the specific acquisition mode of the standard features, and for example, the standard features may be directly acquired, or a standard face image may be acquired and the standard features extracted from the standard face image may be acquired. It should be noted that the type of the standard feature may be one or more, and each standard feature corresponds to a different standard face image.
Specifically, in an embodiment, two groups of standard face images may be provided. The first group consists of standard certificate photos, which may include images of three angles of the face: the front, the left front and the upper front. The second group consists of face images acquired at the angle and direction in which the monitoring device is actually deployed, and may likewise include images of the front, the left front and the upper front of the face acquired from that angle and direction. In practical application, the monitoring equipment acquires images at the same angle and direction as the second group, so the standard features corresponding to the second group of standard face images can be matched more accurately, and their weight can be increased. When matching is performed, the second standard features corresponding to the second group of standard face images of each identity are used first; if that matching is not successful, matching continues with the first group of standard face images; if that also fails, it is determined that there is no target standard feature that passes the matching. If matching with a second standard feature or a first standard feature succeeds, the successfully matched feature is determined to be the target standard feature. It should be noted that, for the matching to work, the facial features must be extracted and processed in the same manner as the standard features.
Step 13: and if the matched target standard features exist, determining the target identity information corresponding to the target standard features as an identity recognition result.
The present embodiment does not limit the criterion for determining whether the matching passes. For example, in one implementation, it may be determined whether the similarity between the facial feature and a standard feature is greater than a preset threshold, such as 60%; if so, the matching can be considered passed. Corresponding to the above description, if there are multiple categories of standard features, such as first standard features and second standard features, it may first be determined whether the similarity between the facial feature and a second standard feature is greater than 60%, and if so, the matching passes; if not, it may further be determined whether the similarity between the facial feature and a first standard feature is greater than 60%, and if so, the matching passes, otherwise it fails. When the standard features are divided into different categories, the preset thresholds corresponding to them may be the same or different; for example, both may be 60%, or the second preset threshold corresponding to the second standard features may be 60% while the first preset threshold corresponding to the first standard features is 70%.
After the matching is determined to pass, the target standard feature is determined, and the identity information corresponding to the target standard feature, namely the target identity information, is the identity recognition result corresponding to the facial feature.
Step 14: and if the target standard characteristics do not exist, determining the identity recognition result as a no-result, and executing a preset operation.
In another case, if there is no target standard feature, it indicates that the person image does not correspond to the identity information in any of the profiles. In practical applications, the person corresponding to the person image is not a person with a known identity, and may be an unknown intruder. Therefore, the identity recognition result is determined to be a no-result, and the execution is not continued according to the normal flow, but the preset operation is executed. The preset operation may specifically be an alarm operation, or may be an operation for recording the current time, and the specific content of the preset operation is not limited in this embodiment. It should be noted that the preset operation does not include privacy-removing processing on the person image, so as to retain the most original monitoring image and facilitate evidence collection. The identity recognition processing mode provided by the embodiment can be utilized to accurately recognize the identity of the person with the known identity, and the person image corresponding to the person with the unknown identity is reserved, so that privacy disclosure is prevented, and the information of the monitoring image is reserved.
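The matching flow of steps 11 to 14 can be sketched as follows. The configuration-item structure, the cosine similarity measure and the concrete thresholds (0.60 and 0.70) are illustrative assumptions; the disclosure only fixes the order (second-group features first, then first-group features, otherwise no result).

```python
# A minimal sketch of steps 11-14, assuming one feature vector per group per identity.
from dataclasses import dataclass
from typing import Optional, Sequence
import numpy as np


@dataclass
class ConfigItem:
    identity: str               # e.g. "dad", "001"
    first_features: np.ndarray  # features from standard certificate photos
    second_features: np.ndarray # features from photos taken at the camera's real angle


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(face_feature: np.ndarray,
             config: Sequence[ConfigItem],
             second_threshold: float = 0.60,
             first_threshold: float = 0.70) -> Optional[str]:
    """Return the matched identity, or None (the 'no result' case) if nothing passes."""
    # the second group is captured at the deployment angle, so it is tried first
    for item in config:
        if cosine_similarity(face_feature, item.second_features) > second_threshold:
            return item.identity
    for item in config:
        if cosine_similarity(face_feature, item.first_features) > first_threshold:
            return item.identity
    return None  # the caller triggers the preset operation (e.g. an alarm) on None
```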
In addition, in a possible implementation, since identity recognition processing takes a certain amount of time, an identity marking mechanism can be used to increase the speed of monitoring image processing and reduce the consumption of computing resources. After obtaining the identity recognition result, the method may further include:
step 21: and carrying out identity marking processing on the personnel image based on the identity recognition result.
Because the monitoring images are in most cases continuous in time, the identity recognition result corresponding to a personnel image in the monitoring image does not change abruptly. Therefore, after the identity recognition result is obtained, identity marking processing can be performed on the personnel image, so that when a new monitoring image is obtained subsequently, the identity recognition result of the personnel image in the new monitoring image can be determined based on the identity mark without extracting facial features again. Accordingly, after the monitoring image is obtained and before the personnel image in the monitoring image is subjected to identity recognition processing to obtain an identity recognition result, the method may further include:
step 22: and carrying out identity marking processing on the monitored image based on the historical identity mark, and judging whether the personnel image has the identity mark.
After the monitoring image is obtained, if a historical identity mark exists, the historical identity mark is used to perform identity marking processing on the monitoring image, and after the marking it is judged whether each personnel image in the monitoring image has an identity mark. The historical identity mark is the result of the identity marking processing performed on the previously obtained monitoring image. Because the corresponding personnel images in two temporally adjacent monitoring images do not undergo a large jump in position, the currently obtained monitoring image can inherit the historical identity mark, that is, identity marking processing can be performed on it based on the historical identity mark. For example, each personnel image in the previous monitoring image may be used as a reference, a preset length may be used as a search radius in the current monitoring image, and the identity mark of the reference personnel image may be assigned to the personnel image found within that range, thereby completing the identity marking processing. After the identity marking processing is completed, it can be judged whether each personnel image in the monitoring image has an identity mark, and step 23 is performed when an identity mark exists.
Step 23: and if the identity mark exists, acquiring an identity recognition result according to the identity mark.
If the personnel image has an identity mark, it has already been identified, and the subsequent steps can be executed based on the identity mark. This embodiment does not require the personnel image to be complete in this case: it may completely include both the person head image and the person body image, or it may be incomplete and contain only the person head image or only the person body image.
Step 24: and if the person image does not have the identity mark, judging whether the person image comprises a person head image.
If the personnel image does not have an identity mark, the identity corresponding to the personnel image cannot be determined, and facial features need to be extracted. Because facial features must be extracted from the person head image, it is first judged whether the personnel image includes a person head image; if not, the subsequent identity recognition step is unnecessary, which reduces the consumption of computing resources. This embodiment does not limit what is done when the personnel image does not include a person head image. For example, no operation may be performed; or it may be judged whether a human body simulation model or human body behavior simulation model highly matching the human body in the personnel image, for example a human body behavior simulation model with a matching degree of 95%, is stored in the memory. If such a model exists, it can be determined as the person simulation model corresponding to the personnel image, step S102 can be skipped, and the flow can proceed directly to step S103.
Step 25: and if the head image of the person is included, performing identity recognition processing on the image of the person in the monitoring image to obtain an identity recognition result.
With the identification process provided in steps 21 to 25, the use of identity marks reduces the number of times full identity recognition processing must be run, increases the recognition speed and reduces the consumption of computing resources.
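A minimal sketch of the identity-mark reuse in steps 21 to 25 is given below. The detection structure, the pixel search radius and the position representation are assumptions for illustration.

```python
# A sketch of propagating identity marks between adjacent frames.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple
import math


@dataclass
class PersonDetection:
    center: Tuple[float, float]   # pixel center of the person image
    has_head: bool                # whether a head region was detected


def inherit_identity(det: PersonDetection,
                     history: Dict[str, Tuple[float, float]],
                     radius: float = 80.0) -> Optional[str]:
    """Give the detection the identity mark of the nearest previous person within `radius`."""
    best_id, best_dist = None, radius
    for identity, (hx, hy) in history.items():
        dist = math.hypot(det.center[0] - hx, det.center[1] - hy)
        if dist <= best_dist:
            best_id, best_dist = identity, dist
    return best_id


def resolve_identity(det, history, run_face_recognition):
    """Steps 22-25: reuse a mark if possible, otherwise fall back to face recognition."""
    mark = inherit_identity(det, history)
    if mark is not None:
        return mark                    # step 23: reuse the existing identity mark
    if not det.has_head:
        return None                    # step 24: no head image, skip recognition
    return run_face_recognition(det)   # step 25: full identity recognition processing
```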
S102: and determining a corresponding head simulation model according to the identity recognition result, generating a corresponding human behavior simulation model by utilizing the personnel image, and constructing the human simulation model by utilizing the head simulation model and the human behavior simulation model.
After the identity recognition result is obtained, the identity recognition result and the corresponding personnel image can be used together to generate the personnel simulation model. In this embodiment, the personnel simulation model includes a head simulation model and a human body behavior simulation model: the head simulation model, determined from the identity recognition result, can represent the identity corresponding to the personnel image, and the human body behavior simulation model, generated from the personnel image, can represent the actions of the personnel image. It can be understood that, since the personnel simulation model is generated based on the identity recognition result, it can carry an identity feature by which the identity of the corresponding person can be distinguished. In a feasible implementation, because the face has a strong identity identification function, a fixed head simulation model may be set for each identity recognition result, that is, the head simulation model of the personnel simulation model is the head model corresponding to the identity recognition result and thereby represents the identity corresponding to the personnel simulation model. The head simulation model corresponding to each identity recognition result can be recorded in the corresponding configuration item of the configuration file and called according to that record when the personnel simulation model is generated.
Further, the human body behavior simulation model is generated based on the personnel image, that is, the action behavior of the personnel image is simulated, so the human body behavior simulation model in the generated personnel simulation model has the same or similar action behavior as the personnel image. By simulating the action behavior of the personnel image, the specific behavior of the corresponding person can be learned from the processed monitoring image (i.e. the privacy-removed monitoring image), and information loss is avoided.
In one possible embodiment, the step of determining the head simulation model according to the identification result may include:
step 31: and determining the head model information corresponding to the identity recognition result by utilizing the pre-stored identity-head model corresponding relation.
Step 32: and determining a head simulation model corresponding to the human simulation model according to the head model information.
Specifically, the head simulation model may be generated in advance and bound to each identification result, and a one-to-one correspondence between the head simulation model and the person, that is, an identity-head model correspondence is established. The specific form of the correspondence is not limited, and for example, the correspondence may be recorded in a configuration file, or may be separately stored. After the identity recognition result is obtained, corresponding head model information is obtained according to the identity recognition result, and the head model information may be a storage path of the head simulation model or a serial number of the head simulation model. And obtaining the head simulation model according to the head model information. The head simulation model has a good identity identification effect, and can be used as the embodiment of an identity identification result, so that the identity of the personnel image replaced by the personnel simulation model can be identified in the privacy-removed monitoring image according to the head simulation model, and the information loss in the monitoring image is avoided.
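As a sketch of steps 31 and 32, the identity-head model correspondence can be held as a simple mapping from an identity to head model information such as a storage path. The paths and the loader below are assumptions.

```python
# A minimal sketch of the identity -> head-model correspondence lookup.
from pathlib import Path

IDENTITY_HEAD_MODEL = {
    "dad": "/opt/camera/head_models/dad_head.obj",
    "mom": "/opt/camera/head_models/mom_head.obj",
    "001": "/opt/camera/head_models/member_001.obj",
}


def get_head_simulation_model(identity: str) -> bytes:
    """Resolve head model info for an identity and load the corresponding 3D model."""
    model_info = IDENTITY_HEAD_MODEL.get(identity)   # step 31: look up head model info
    if model_info is None:
        raise KeyError(f"no head model bound to identity {identity!r}")
    model_path = Path(model_info)                    # step 32: the info is a storage path here
    return model_path.read_bytes()                   # stand-in for a real 3D-model loader
```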
Accordingly, in one possible embodiment, the step of generating the corresponding human behavior simulation model by using the human image may include:
step 33: and acquiring human body structure data corresponding to the identity recognition result.
The human body structure data is used to refine the simulation of a standard human body so that the personnel simulation model is closer to the real situation. The human body structure data may specifically include body type, gender, the color of the upper and lower garments, the type of the upper and lower garments, and so on; it may also include other data, selected and set according to actual needs. The human body structure data can be acquired in various ways, and different data items may be acquired differently. For example, part of the human body structure data, such as gender and body type, may be stored as configuration items in the configuration file, while other human body structure data, such as garment color and garment type, may be obtained from the personnel image.
Step 34: and performing body attitude simulation and/or behavior binding treatment on the standard human body model based on the human body structure data and the personnel image to obtain a human body behavior simulation model.
After the human body structure data is obtained, it is used together with the personnel image to perform posture simulation and/or behavior binding processing on the standard human body model, obtaining the human body behavior simulation model. Specifically, the standard human body model is a preset human body model, and there may be one or more of them. The storage location of the standard human body models is not limited; it may be, for example, the memory, a designated folder on a disk, or a dedicated simulated-human library. When there are multiple standard human body models, the one closest to the action behavior of the personnel image may be selected as the object to be processed. In reality human body behavior is highly variable and the standard human body models cannot cover all body actions, so when the behavior cannot be matched completely, behavior binding processing can be performed on the standard human body model according to the personnel image so that it exhibits the same behavior. Similarly, because the posture of the person in the personnel image may differ greatly from that of the standard human body model, the human body structure data can be used for posture simulation so that the standard human body model takes the same posture as the person in the personnel image. In a preferred embodiment, after the human body behavior simulation model is obtained it may be stored in the memory, so that when a new monitoring image is subsequently obtained, the model generated this time can be modified directly to obtain a new human body behavior simulation model. Meanwhile, because a person remains in the monitoring image only for a limited time and the human body structure data or body behavior may change, an aging duration can be set for the human body behavior simulation model stored in the memory, and the model is deleted once its storage duration reaches the aging duration.
It should be noted that this embodiment does not limit the execution order of the body posture simulation processing and the behavior binding processing. They may be performed in series, for example posture simulation first and behavior binding second, or behavior binding first and posture simulation second; or they may be performed in parallel, i.e. simultaneously. Likewise, this embodiment does not limit the order in which the head simulation model and the human body behavior simulation model are obtained, i.e. the execution order of steps 31 to 32 and steps 33 to 34: they may be executed in parallel, with steps 33 to 34 running while steps 31 to 32 run, or in series.
After the head simulation model and the human behavior simulation model are obtained, the human simulation model is constructed by utilizing the head simulation model and the human behavior simulation model together. In this embodiment, in order to improve the matching degree between the human simulation model and the human image and improve the authenticity of the privacy-removed monitoring image, the head simulation model and the human behavior simulation model may both be 3D models, and thus the obtained human simulation model is also a 3D model. The 3D model can be used for selecting a proper angle to replace the personnel image in the subsequent replacement process, so that the privacy-removed monitoring image obtained after replacement can have better authenticity.
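A possible sketch of S102 as a whole is shown below, assuming simplified placeholder structures for the standard human body models and the bound pose, and a memory cache with an assumed aging duration as described above.

```python
# A minimal sketch of building the person simulation model with a memory cache.
import time
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class BodyModel:
    gender: str
    body_type: str
    pose: List[float] = field(default_factory=list)  # joint angles after behavior binding


@dataclass
class PersonSimulationModel:
    head_model: bytes       # 3D head model bound to the recognized identity
    body_model: BodyModel   # posture-simulated / behavior-bound body


_CACHE: Dict[str, tuple] = {}   # identity -> (body model, expiry timestamp)
AGING_SECONDS = 300             # assumed aging duration


def build_person_model(identity: str, head_model: bytes,
                       structure: dict, observed_pose: List[float],
                       standard_models: List[BodyModel]) -> PersonSimulationModel:
    cached = _CACHE.get(identity)
    if cached and cached[1] > time.time():
        body = cached[0]   # reuse a recently generated behavior model
    else:
        # pick the standard model closest to the observed structure data
        body = min(standard_models,
                   key=lambda m: (m.gender != structure["gender"],
                                  m.body_type != structure["body_type"]))
        body = BodyModel(body.gender, body.body_type, pose=list(observed_pose))  # behavior binding
    _CACHE[identity] = (body, time.time() + AGING_SECONDS)   # refresh the aging duration
    return PersonSimulationModel(head_model=head_model, body_model=body)
```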
S103: and replacing the personnel image by utilizing the personnel simulation model to obtain the privacy-removed monitoring image.
After the personnel simulation model is obtained, it is used to replace the corresponding personnel image to obtain the privacy-removed monitoring image. The specific replacement manner is not limited in this embodiment; for example, the personnel image may be covered with the personnel simulation model, the covered image re-encoded, and the edges of the personnel simulation model smoothed to obtain the privacy-removed monitoring image. After the privacy-removed monitoring image is obtained, other operations can be executed, for example sending it through the public network to a target terminal or device such as a mobile phone, tablet computer or notebook computer.
Further, in a possible implementation, the step S103 may include:
step 41: and carrying out interference data coverage processing on the personnel images in the monitoring images to obtain interference images.
Step 42: and replacing the interference area corresponding to the personnel image by using the personnel simulation model to obtain the privacy-removing monitoring image.
In order to prevent the privacy-removed monitoring image leaked on the public network from being restored by image recovery processing, which would recover the original monitoring image and leak privacy, in this embodiment interference data coverage can be applied to the personnel image before replacement: the personnel image in the original monitoring image is first destroyed to obtain an interference image. The interference area corresponding to the personnel image is then replaced on the basis of the interference image, that is, the personnel simulation model replaces the interference area, yielding the privacy-removed monitoring image. Even if a privacy-removed monitoring image obtained in this way is intercepted and restored, the restored result is the interference image rather than an image containing private data, which further protects privacy.
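A minimal sketch of steps 41 and 42, assuming the interference data is random noise and the person simulation model has already been rendered to an image the size of the person region:

```python
# A sketch of interference coverage followed by model replacement.
import numpy as np


def deprivatize(frame: np.ndarray, person_box, rendered_model: np.ndarray) -> np.ndarray:
    """person_box = (x, y, w, h); rendered_model is an image the same size as the box."""
    x, y, w, h = person_box
    out = frame.copy()
    # step 41: overwrite the original person pixels with interference data so the
    # raw appearance cannot be recovered from the transmitted picture
    out[y:y + h, x:x + w] = np.random.randint(0, 256, size=(h, w, frame.shape[2]),
                                              dtype=np.uint8)
    # step 42: replace the interference area with the rendered person simulation model
    out[y:y + h, x:x + w] = rendered_model
    return out
```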
By applying the monitoring image processing method provided by the embodiment of the application, after the monitoring image is obtained, the identity of the personnel image corresponding to the personnel in the monitoring image is identified, and the identity identification result can represent the specific identity of the personnel. And after the identity recognition result is obtained, generating a corresponding personnel simulation model according to the identity recognition result and the corresponding personnel image. The human simulation model can represent the same action behavior as the human image, and can represent the specific identity of the human image based on the identity recognition result. And replacing the personnel image by utilizing the personnel simulation model to obtain the privacy-removed monitoring image. Because the privacy information in the monitoring picture is basically related to the limbs of the person, the replacement of the person image can achieve the effect of protecting privacy. Meanwhile, the personnel images are replaced by utilizing the personnel simulation model, so that the identity and the specific action of the agent can be clearly known, the information in the monitored images is kept, the information loss is avoided, and the image authenticity is improved. The problems of poor picture authenticity and more information loss in the related technology are solved.
Referring to fig. 2, fig. 2 is a flowchart of another monitoring image processing method according to an embodiment of the present disclosure. The method comprises the following steps:
s201: and acquiring a privacy-removed monitoring image.
It should be noted that, in this embodiment, the privacy-removed monitoring image may be acquired, after transmission through the public network, by the electronic device that executes all or part of the steps of this embodiment. The electronic device can be a terminal such as a smart phone, a tablet computer or a notebook computer, or a device such as a computer or a server.
S202: and determining identity information corresponding to the person simulation model in the privacy-removed monitoring image.
After the privacy-removed monitoring image is obtained, because the personnel image in the original monitoring image has been replaced by the personnel simulation model and the model only carries an identity feature, the user viewing the privacy-removed monitoring image still has to mentally associate each personnel simulation model with a person. If the identity features are similar, or the user's memory is fuzzy or wrong, the privacy-removed monitoring image cannot be interpreted accurately and the information in the monitoring image cannot be acquired accurately. To solve these problems, after the privacy-removed monitoring image is obtained, the identity information corresponding to the personnel simulation model can be determined. Because the personnel simulation model carries an identity feature, the corresponding identity information can be determined accurately from that feature. For example, when the identity feature is embodied by the head simulation model, the configuration item in which the head simulation model is located can be found in the configuration file, and the identity information can be read from that configuration item.
S203: and acquiring a real head model corresponding to the identity information.
After the identity information is determined, a real head model corresponding to the identity information can be obtained. The real head model models the heads of the personnel corresponding to the personnel simulation model, and the identity of the personnel can be accurately represented by utilizing the real head model. The embodiment does not limit the specific way of obtaining the head model of the real person, and for example, the head model of the real person may be pre-stored locally, or the head model of the real person may be generated.
Further, in a possible implementation manner, the head images can be used to generate the real person head model, which avoids the excessive storage space that would be occupied by storing real person head models locally. Step S203 may include:
step 51: and acquiring a head image corresponding to the identity information.
Step 52: and carrying out three-dimensional modeling processing on the head image to obtain a real head model.
In this embodiment, the head images may include at least one of a preset video, standard face images and historical face images. The preset video is a video of the real person's head. The standard face images may be standard certificate photos, which may include images of the front, the left front and the upper front of the face, or face images acquired at the angle and direction in which the monitoring device is actually deployed, likewise covering the front, the left front and the upper front of the face. The historical face images are face images of the person taken from monitoring images acquired before the current moment, that is, face images collected from the monitoring images in actual use. After the head images are obtained, three-dimensional modeling processing is performed to obtain the real person head model; the specific manner of three-dimensional modeling is not limited, and reference may be made to the related art. In an embodiment, different head images may be obtained according to the category of the privacy-removed monitoring image: if the privacy-removed monitoring image is a playback image, that is, not an image of the current moment, the head images corresponding to the generation time of the privacy-removed monitoring image may be obtained; if it is a real-time image, the head images corresponding to the current moment may be obtained.
S204: and replacing the head simulation model of the personnel simulation model by using the real head model to obtain the privacy recovery monitoring image.
After the real person head model is obtained, it is used to replace the head simulation model of the personnel simulation model, yielding the privacy recovery monitoring image. Because the original head simulation model of each personnel simulation model has been replaced by a real person head model, the user can intuitively and accurately determine the identity of each person when viewing the privacy recovery monitoring image and accurately acquire the information in the monitoring image.
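The recovery flow S201 to S204 on the receiving side can be sketched as follows. The identity tag carried with each person simulation model, the head-image loader and the three-dimensional reconstruction call are placeholders to be supplied by the reader's own implementation.

```python
# A minimal sketch of privacy recovery on the receiving client.
from typing import Callable, List


def recover_privacy(deprivatized_frame,
                    person_models: List[dict],
                    load_head_images: Callable[[str], list],
                    reconstruct_head_3d: Callable[[list], object],
                    replace_head: Callable[[object, dict, object], object]):
    frame = deprivatized_frame
    for model in person_models:                       # every person simulation model in the frame
        identity = model["identity"]                  # S202: identity carried by the head model
        head_images = load_head_images(identity)      # S203: preset video / standard / historical images
        real_head = reconstruct_head_3d(head_images)  # S203: 3D modeling of the real head
        frame = replace_head(frame, model, real_head) # S204: swap simulated head for real head
    return frame
```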
By applying the monitoring image processing method provided by the embodiment of the application, after the privacy-removed monitoring image is obtained, partial images in the privacy-removed monitoring image can be restored, so that people corresponding to the human simulation model in the privacy-removed monitoring image can be visually identified. After the identity information of the personnel simulation model is determined, a real head model corresponding to the identity information is generated, and the real head model is used for replacing the head simulation model of the personnel simulation model to obtain a privacy recovery monitoring image. The method can recover the head image of the privacy-removed monitoring image transmitted by the public network so as to visually express the behavior of personnel and accurately acquire the information in the monitoring image.
Based on the above embodiments, please refer to fig. 3, which is a specific data flow diagram provided in an embodiment of the present application. Before monitoring images are acquired, the camera can be configured, and the face images of specific persons (such as family members) and their correspondence with simulated-person head model files (i.e. head simulation models) are entered. When the camera acquires monitoring images with the image acquisition module, the acquired data stream (i.e. the stream formed by the monitoring images) is sent to the local storage module for storage and to the local area network live module, which provides picture data within the local area network. The local storage module stores the monitoring pictures in real time, and the local area network live module can send the unprocessed pictures to devices in the local area network.
The local area network live module can send the monitoring image to the intelligent recognition module, then the intelligent recognition module performs face detection, and when a face is recognized, whether a face with similarity reaching a threshold exists is detected from a locally configured face corresponding relation (namely a configuration file). Meanwhile, the face of a person in the monitored image can be captured, and the corresponding relation among the pre-stored face image, the face captured image and the simulated person is stored in a configuration file.
Specifically, through the local area network or the local background configuration of the camera, photos of three views (front, left front and upper front) of a family member's head, namely the pre-stored face images, are imported, together with a simulated-person head model file, which can be selected from the camera's own library or imported as a third-party model file. The model file contains information that may include: (1) a three-dimensional model of the simulated human head; (2) a replacement algorithm version number. When the version number of the replacement algorithm in the simulated head model file is not within the range accepted by the camera, the import fails.
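A minimal sketch of the import check described above, assuming the model file is parsed into a dictionary and the camera accepts an example version range:

```python
# A sketch of validating the replacement algorithm version on import.
ACCEPTED_ALGORITHM_VERSIONS = range(3, 6)   # e.g. the camera accepts versions 3-5


def import_head_model(model_file: dict) -> bool:
    """model_file is expected to contain a 3D head model and an algorithm version number."""
    version = model_file.get("replace_algorithm_version")
    if version not in ACCEPTED_ALGORITHM_VERSIONS:
        return False   # import fails: version outside the acceptance range
    register_model(model_file["head_model_3d"])
    return True


def register_model(head_model_3d) -> None:
    # stand-in for adding the model to the camera's own simulated-head library
    pass
```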
After the camera acquires the face image of a family member, the feature value of the face image is computed, and all of the member's configuration is stored in a configuration file, which can comprise two parts. The first part contains: (1) the photos of the three head views (i.e. the face images); (2) the feature value feature of the face images; (3) the simulated-person head model file. The second part contains: (1) screenshot files of three head views taken from the currently applicable monitoring picture, which can be named currentFaceSnap.jpg and distinguished by number, such as currentFaceSnap_1.jpg, currentFaceSnap_2.jpg and currentFaceSnap_3.jpg; (2) a feature value currentFeature extracted from the currently applicable monitoring picture face screenshots; (3) snapshot files faceSnap_$time.jpg of all heads captured from the monitoring picture within a certain time. It should be noted that the second part of the data is only acquired after monitoring images start to be collected: currentFaceSnap.jpg is a face screenshot captured from the monitoring image, and currentFeature is the feature value of currentFaceSnap.jpg.
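One member's configuration item, as described above, might be laid out as follows; the keys are illustrative, while the file names follow the naming given in the text:

```python
# A sketch of one member's configuration item expressed as a Python dictionary.
member_config = {
    "part_1": {
        "head_views": ["front.jpg", "left.jpg", "upper_front.jpg"],  # pre-stored face images
        "feature": "<face feature value of the pre-stored images>",
        "head_model_file": "member_head_model.obj",                  # simulated-person head model
    },
    "part_2": {
        "current_face_snaps": ["currentFaceSnap_1.jpg",
                               "currentFaceSnap_2.jpg",
                               "currentFaceSnap_3.jpg"],             # three views from the live picture
        "currentFeature": "<feature value extracted from the current snapshots>",
        "head_snaps": ["faceSnap_20201010T120000.jpg"],              # faceSnap_$time.jpg captures
    },
}
```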
After the camera collects monitoring images, the original stream formed by them is copied to the storage module for local storage and sent to the local area network live module, which distributes it to the stream receiving devices in the local area network and to the intelligent identification module. Devices in the local area network, or designated devices in the local area network configuration, can obtain real-time pictures by accessing the URL generated by the camera.
After a monitoring image enters the intelligent recognition module, the module recognizes faces and human bodies in it. If a face exists, the corresponding face feature value is obtained and compared with each locally stored currentFeature. If no currentFeature reaches a similarity above a certain threshold, such as 60%, the face feature value is compared with each feature; if the similarity exceeds the threshold, the face and the corresponding human body are marked, and the corresponding simulated head model file is used to cover the face at a similar angle and size. It is then checked whether the three current screenshot views (i.e. currentFaceSnap.jpg) are complete, that is, whether screenshot files of all three views exist. If not, it is judged whether the current face corresponds to the missing currentFaceSnap.jpg, for example whether it is a front view; if so, the current face is captured and saved, for example as currentFaceSnap_1.jpg.
Then the human body in the monitoring image is identified, and the most suitable simulated body (namely a standard human body model) is matched from the camera's simulated-body library according to structured information such as gender, body type and clothing color. The limbs and head of the simulated body are fitted to the real body in real time, so that the basic behavior and the structured data of the body remain complete, can be further processed by subsequent intelligent analysis, and can be output to pictures sent outside the local area network. Within the local area network the original picture can be played back directly, while outside the local area network the image of the designated person is hidden, and the client can restore the picture as far as possible through re-synthesis.
Specifically, when a marked human body exists in the monitoring image, the intelligent identification module identifies and acquires the body's current structured data, such as body type, gender and the colors of the upper and lower garments. Using the current body type and gender, it matches the closest simulated body among those held in the memory, in the designated folder on disk, and in the simulated-body library, and then adjusts the simulated body according to the other structured data so that it fits the body in the monitoring image as closely as possible. After generation, the simulation information may be stored in the memory in the format of simulated-body information plus an aging time. If the same body appears in a subsequently acquired monitoring image, the matched simulated body is preferentially looked up in the memory. If a body is not marked and no highly matching simulated body is found in the memory, it is not replaced, that is, it is left unprocessed. If the body is marked, or a simulated body highly matching the body in the monitoring image exists in the memory, the body in the picture is replaced with the simulated body and the corresponding aging time is refreshed, thereby solving the problem of replacing the person in the monitoring image.
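The in-memory reuse of matched simulated bodies with an aging time could look roughly like the following sketch; the cache key (for example a tuple of body type and gender), the default aging value and the class interface are all assumptions rather than the camera's actual implementation.

```python
import time

class SimulatedBodyCache:
    """In-memory store of matched simulated bodies with an aging time."""

    def __init__(self, ttl_seconds=300.0):            # the aging-time value is an assumption
        self._entries = {}                            # key -> (simulated_body, expire_at)
        self._ttl = ttl_seconds

    def lookup(self, key):
        entry = self._entries.get(key)
        if entry is None or entry[1] <= time.time():
            return None
        # refresh the aging time whenever the cached simulated body is reused
        self._entries[key] = (entry[0], time.time() + self._ttl)
        return entry[0]

    def store(self, key, simulated_body):
        self._entries[key] = (simulated_body, time.time() + self._ttl)
```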
Because the actor in the monitoring picture may take various body postures, in some situations the skeletal structure of the body cannot be bound directly, that is, the simulated body cannot be fitted perfectly to the body in the monitoring picture. Therefore, posture simulation and joint binding may be performed on the simulated body based on the actual body in the monitoring image. During processing, the simulated body may be adjusted synchronously with the actor's body shape, scaled proportionally according to the pixel distance between binding points, and then placed over the corresponding binding points of the body in the current monitoring image, thereby achieving binding coverage. After the binding coverage is completed, any remaining areas in the monitoring image that the simulated body does not cover may be covered as well.
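For the proportional scaling step, a minimal sketch, assuming two corresponding binding points (for example the shoulder joints) are available both on the simulated body and on the person in the monitoring image:

```python
import math

def binding_scale(model_a, model_b, person_a, person_b):
    """Uniform scale factor for the simulated body: ratio of the pixel distance
    between two binding points measured on the person in the monitoring image
    to the same distance measured on the simulated body."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(person_a, person_b) / dist(model_a, model_b)

# e.g. shoulder binding points: 40 px apart on the model, 60 px apart on the person -> 1.5
# binding_scale((0, 0), (40, 0), (120, 310), (180, 310))
```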
Before the simulation-replacement processing of the monitoring image is completed, interference data may be used to cover the part of the monitoring image judged to need replacement, namely the person image. The face and body are then covered, the covered regions are re-encoded, and the privacy-removed monitoring image is obtained after encoding. The intelligent identification module can form a code stream and send it outside the local area network, where stream-receiving devices can acquire it. Referring to fig. 4 and fig. 5, fig. 4 is a specific monitoring image provided in an embodiment of the present application, and fig. 5 is a specific privacy-removed monitoring image provided in an embodiment of the present application. It can be clearly seen that the person image corresponding to the child in fig. 4 is covered by the person simulation model in fig. 5.
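As a rough illustration of the coverage-then-replacement step for a single frame (the re-encoding is omitted), the numpy sketch below fills the person region with interference data and then pastes a rendered person simulation model of the same size over it; the region format and the shape of the rendered patch are assumptions.

```python
import numpy as np

def de_privacy_frame(frame: np.ndarray, region, rendered_model: np.ndarray) -> np.ndarray:
    """region = (y0, y1, x0, x1) bounding the person image in pixel coordinates;
    rendered_model is assumed to be an image patch of the same size as the region."""
    y0, y1, x0, x1 = region
    out = frame.copy()
    # interference data coverage: overwrite the region judged to need replacement
    out[y0:y1, x0:x1] = np.random.randint(
        0, 256, size=out[y0:y1, x0:x1].shape, dtype=np.uint8)
    # cover the interference area with the rendered person simulation model
    out[y0:y1, x0:x1] = rendered_model
    return out
```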
Outside the local area network, after a user acquires the simulation-replaced stream (that is, a data stream composed of privacy-removed monitoring images) with a terminal such as a mobile phone, the stream can be restored by the client; after restoration, the user at the client obtains a more realistic monitoring image and can accurately acquire the information in it.
The required data also needs to be acquired before privacy restoration is performed. The client user selects family members (all of them or designated ones) from the camera configuration and downloads the corresponding configuration files. The client user may also import head video files of some members (namely, preset videos); the client stores these video files, identifies the corresponding face features, and stores the matched face screenshots (namely, historical face images) or face images (namely, standard face images) in several data tables, with the following storage formats:
The tbl_model_info table is:

model_id (integer) | Feature1 (string) | face_id (integer) | version (char)
1 | …… | 1 | ……
…… | …… | …… | ……
Here, model_id is the identification serial number of the simulated head model file; Feature1 is the face feature value obtained by extracting features from the simulated head model file; face_id is the face identification serial number corresponding to the simulated head model file and corresponds to face_id in tbl_face_info; version is the replacement-algorithm version number of the model file.
The tbl_face_info table is:

face_id (integer) | Feature (string) | front_view (blob) | upper_view (blob) | left_view (blob) | video (blob)
1 | …… | …… | …… | …… | ……
2 | …… | …… | …… | …… | ……
…… | …… | …… | …… | …… | ……
Here, face_id is the face identification serial number corresponding to the simulated head model file; Feature is the feature value of the face image; front_view is the front-view picture file of the face image; upper_view is the front-upper-view picture file of the face image; left_view is the left-view picture file of the face image; video is the imported video file (namely, the preset video).
The tbl_snap_face_info table is:

(This table appears as an image in the original publication; as described below, its columns follow those of tbl_face_info, differing in its snap_face_id (integer) and face_id (integer) columns.)
tbl_snap_face_info differs from tbl_face_info in its snap_face_id (integer) and face_id (integer) columns. The head image in this embodiment includes several types of data, all of which participate in generating or optimizing the real-person head model. tbl_snap_face_info serves the playback service: the corresponding monitoring image was obtained at a past moment, and the face screenshot in the head image was captured within a certain period and changes over time, so to keep the generated real-person head model faithful to the moment of the monitoring image, the head image matching the acquisition moment can be selected for image restoration. tbl_face_info serves the live service: the obtained monitoring image is the image at the current moment, and the current head picture is used for image restoration.
The tbl_snap_match_info table is:

snap_face_id (integer) | face_id (integer) | start_time_in_use | end_time_in_use
…… | …… | …… | ……
Here, snap_face_id is the id of the face screenshot and is consistent with tbl_snap_face_info; face_id is the face identification serial number corresponding to the simulated head model file and is consistent with tbl_face_info; start_time_in_use is the time at which simulation replacement using this face screenshot started; end_time_in_use is the time at which it ended.
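A hedged reconstruction of the four tables as SQLite DDL inside Python follows; the column set of tbl_snap_face_info is inferred from the note above that it differs from tbl_face_info in its snap_face_id and face_id columns, and all SQL types beyond those stated in the text are assumptions.

```python
import sqlite3

DDL = """
CREATE TABLE tbl_model_info (
    model_id  INTEGER PRIMARY KEY,   -- identification serial number of the simulated head model file
    feature1  TEXT,                  -- feature value extracted from the simulated head model file
    face_id   INTEGER,               -- corresponds to tbl_face_info.face_id
    version   TEXT                   -- replacement-algorithm version number
);

CREATE TABLE tbl_face_info (
    face_id    INTEGER PRIMARY KEY,  -- face identification serial number
    feature    TEXT,                 -- feature value of the face image
    front_view BLOB,                 -- front-view picture file
    upper_view BLOB,                 -- front-upper-view picture file
    left_view  BLOB,                 -- left-view picture file
    video      BLOB                  -- imported head video file (preset video)
);

CREATE TABLE tbl_snap_face_info (    -- inferred layout: keyed by snap_face_id
    snap_face_id INTEGER PRIMARY KEY,
    face_id      INTEGER,
    feature      TEXT,
    front_view   BLOB,
    upper_view   BLOB,
    left_view    BLOB
);

CREATE TABLE tbl_snap_match_info (
    snap_face_id      INTEGER,       -- consistent with tbl_snap_face_info
    face_id           INTEGER,       -- consistent with tbl_face_info
    start_time_in_use TEXT,          -- start of the period the screenshot was used for replacement
    end_time_in_use   TEXT           -- end of that period
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
```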
When privacy is restored, after the client acquires the privacy-removed monitoring image, it obtains the feature value of the head simulation model in the privacy-removed monitoring image and compares it with the Feature1 data in the tbl_model_info table. If Feature1 data exists whose similarity reaches the threshold set in the client, the client judges the service type of the current session (a lookup sketch follows the two cases below):
A) if the current client is performing the live service, the table data is acquired from the tbl_face_info table according to face_id;
B) if the current client is performing the playback service, snap_face_id is obtained according to the face_id in tbl_face_info and the current playback time, and the table data is then acquired from tbl_snap_face_info; if no matching snap_face_id is found, the steps are the same as for the live service.
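A sketch of this lookup, assuming the tables are held in SQLite as above and that the mapping from face_id and playback time to snap_face_id goes through tbl_snap_match_info, which is an assumption consistent with the time-range columns described earlier; compare() again stands in for the client's feature-comparison routine.

```python
import sqlite3

def lookup_face_record(conn: sqlite3.Connection, model_feature, compare,
                       playback_time=None, threshold=0.6):
    """Return the tbl_face_info or tbl_snap_face_info row used to restore the head,
    following cases A) and B) above; None if no Feature1 entry reaches the threshold."""
    for feature1, face_id in conn.execute("SELECT feature1, face_id FROM tbl_model_info"):
        if compare(model_feature, feature1) < threshold:
            continue
        if playback_time is not None:          # B) playback service
            row = conn.execute(
                "SELECT s.* FROM tbl_snap_match_info m "
                "JOIN tbl_snap_face_info s ON s.snap_face_id = m.snap_face_id "
                "WHERE m.face_id = ? AND m.start_time_in_use <= ? AND m.end_time_in_use >= ?",
                (face_id, playback_time, playback_time)).fetchone()
            if row is not None:
                return row
        # A) live service, or playback with no matching snap_face_id
        return conn.execute("SELECT * FROM tbl_face_info WHERE face_id = ?",
                            (face_id,)).fetchone()
    return None
```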
If the acquired data contains a video file, three-dimensional modeling is performed on the video file using the modeling-algorithm version corresponding to the client, and the generated three-dimensional model is optimized with the face screenshots and/or the face images. If there is no video file, a default modeling file may be used according to the algorithm version, and the modeling is adjusted with the face screenshots and/or the face images so that the model matches the person's real head as closely as possible. Specifically, when the current service is the playback service, the three view files of the face screenshot can be used preferentially to optimize the three-dimensional modeling; any missing view file can be replaced by the corresponding view file from the face images.
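The source-selection logic alone might be sketched as follows; it only decides what to model from and which files refine the result, and the dictionary field names (such as "video" and "snap_front_view") are illustrative assumptions.

```python
def choose_modeling_sources(record: dict, is_playback: bool) -> dict:
    """Return which file the three-dimensional head model is built from and which
    view files refine it, following the priority order described above."""
    plan = {"base": "video" if record.get("video") else "default_model_file"}
    views = ("front_view", "upper_view", "left_view")
    if is_playback:
        # prefer the face-screenshot views; fall back per view to the pre-stored face images
        plan["refine_with"] = {v: ("face_screenshot" if record.get("snap_" + v) else "face_image")
                               for v in views}
    else:
        plan["refine_with"] = {v: "face_image" for v in views}
    return plan
```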
After the three-dimensional model (namely, the real-person head model) is obtained, its details are adjusted according to the head posture of the person simulation model in the picture, and it is overlaid on the head simulation model, thereby achieving privacy restoration. The restored privacy-recovery monitoring image is re-encoded, the boundaries of the replaced regions are smoothed, and the image is finally output through the display part of the client. Referring to fig. 6, fig. 6 is a specific privacy-restored monitoring image according to an embodiment of the present application. It can be seen that the head of the person simulation model in fig. 5 has been restored.
In the following, the monitoring image processing apparatus provided by the embodiment of the present application is introduced, and the monitoring image processing apparatus described below and the monitoring image processing method described above may be referred to correspondingly.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a monitoring image processing apparatus according to an embodiment of the present application, including:
the identification module 110 is configured to obtain a monitoring image, and perform identity identification processing on a person image in the monitoring image to obtain an identity identification result;
the simulation model obtaining module 120 is configured to determine a head simulation model according to the identity recognition result, generate a corresponding human behavior simulation model using the person image, and construct a human simulation model using the head simulation model and the human behavior simulation model;
and the replacing module 130 is configured to replace the person image with the person simulation model to obtain the privacy-removed monitoring image.
Optionally, the simulation model obtaining module 120 includes:
the human body structure data acquisition unit is used for acquiring human body structure data corresponding to the identity recognition result;
and the processing unit is used for performing body attitude simulation and/or behavior binding processing on the standard human body model based on the human body structure data and the personnel image to obtain a human body behavior simulation model.
Optionally, the simulation model obtaining module 120 includes:
a head model information obtaining unit, configured to determine head model information corresponding to the identity recognition result by using an identity-head model correspondence;
and the head simulation model determining unit is used for determining the head simulation model according to the head model information.
Optionally, the identification module 110 includes:
the facial feature extraction unit is used for extracting facial features of the personnel image to obtain facial features;
the matching unit is used for matching the facial features with the standard features corresponding to the identity information in the configuration file;
the first determining unit is used for determining target identity information corresponding to the target standard features as an identity recognition result if the matched target standard features exist;
and the second determining unit is used for determining the identity recognition result as a no result and executing preset operation if the target standard feature does not exist.
Optionally, the replacement module 130 includes:
the interference unit is used for covering interference data of the personnel image in the monitoring image to obtain an interference image;
and the replacing unit is used for replacing the interference area corresponding to the personnel image by utilizing the personnel simulation model to obtain the privacy-removed monitoring image.
Optionally, the method further comprises:
the marking module is used for carrying out identity marking processing on the personnel image based on the identity recognition result;
correspondingly, the method also comprises the following steps:
the identity mark judging module is used for carrying out identity mark processing on the monitored image based on the historical identity mark and judging whether the personnel image has the identity mark;
the identity recognition result acquisition module is used for acquiring an identity recognition result according to the identity mark if the identity mark exists;
the head judgment module is used for judging whether the personnel image comprises a personnel head image or not if the personnel image does not comprise the identity mark;
Correspondingly, the identification module 110 is specifically a module that, after it is determined that the person image includes a person head image, performs identity recognition processing on the person image in the monitoring image to obtain the identity recognition result.
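For illustration only, the module composition of the apparatus of fig. 7 could be organized as follows; the class name, constructor and method signatures are assumptions and not the claimed apparatus.

```python
class MonitoringImageProcessingApparatus:
    """Structural sketch of the apparatus of fig. 7: the three modules are
    injected as callables rather than implemented here."""

    def __init__(self, identification_module, simulation_model_module, replacement_module):
        self.identification_module = identification_module        # module 110
        self.simulation_model_module = simulation_model_module    # module 120
        self.replacement_module = replacement_module              # module 130

    def process(self, monitoring_image):
        person_image, identity_result = self.identification_module(monitoring_image)
        person_model = self.simulation_model_module(identity_result, person_image)
        # replace the person image to obtain the privacy-removed monitoring image
        return self.replacement_module(monitoring_image, person_image, person_model)
```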
In the following, another monitoring image processing apparatus provided in the embodiments of the present application is introduced, and the monitoring image processing apparatus described below and the monitoring image processing method described above may be referred to correspondingly.
Referring to fig. 8, fig. 8 is a schematic structural diagram of another monitoring image processing apparatus according to an embodiment of the present application, including:
an image obtaining module 210, configured to obtain a privacy-removed monitoring image;
the identity information determining module 220 is used for determining identity information corresponding to the person simulation model in the privacy-removed monitoring image;
the model generating module 230 is configured to obtain a head model of a real person corresponding to the identity information;
and the recovery module 240 is configured to replace the head simulation model of the human simulation model with the human head model to obtain the privacy recovery monitoring image.
In the following, the electronic device provided by the embodiment of the present application is introduced, and the electronic device described below and the monitoring image processing method described above may be referred to correspondingly.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Wherein the electronic device 100 may include a processor 101 and a memory 102, and may further include one or more of a multimedia component 103, an information input/information output (I/O) interface 104, and a communication component 105.
The processor 101 is configured to control the overall operation of the electronic device 100 to complete all or part of the steps in the monitoring image processing method; the memory 102 is used to store various types of data to support operation at the electronic device 100, such data may include, for example, instructions for any application or method operating on the electronic device 100, as well as application-related data. The Memory 102 may be implemented by any type or combination of volatile and non-volatile Memory devices, such as one or more of Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic or optical disk.
The multimedia component 103 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 102 or transmitted through the communication component 105. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 104 provides an interface between the processor 101 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 105 is used for wired or wireless communication between the electronic device 100 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so the corresponding communication component 105 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
The electronic Device 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, and is configured to perform the monitoring image Processing method according to the above embodiments.
The following describes a computer-readable storage medium provided in an embodiment of the present application, and the computer-readable storage medium described below and the monitoring image processing method described above may be referred to correspondingly.
The present application further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the monitoring image processing method described above.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms comprise, include, or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus.
The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. A monitoring image processing method, characterized by comprising:
acquiring a monitoring image, and performing identity recognition processing on a personnel image in the monitoring image to obtain an identity recognition result;
determining a corresponding head simulation model according to the identity recognition result, generating a corresponding human behavior simulation model by using the personnel image, and constructing a human simulation model by using the head simulation model and the human behavior simulation model;
and replacing the personnel image by using the personnel simulation model to obtain a privacy-removed monitoring image.
2. The method for processing the monitoring image according to claim 1, wherein the generating of the corresponding human behavior simulation model by using the human image comprises:
acquiring human body structure data corresponding to the identity recognition result;
and performing body attitude simulation and/or behavior binding treatment on the standard human body model based on the human body structure data and the personnel image to obtain a human body behavior simulation model.
3. The method for processing the monitoring image according to claim 1, wherein the determining the head simulation model according to the identification result comprises:
determining head model information corresponding to the identity recognition result by utilizing a pre-stored identity-head model corresponding relation;
and determining the head simulation model corresponding to the human simulation model according to the head model information.
4. The method for processing the monitoring image according to claim 1, wherein the step of performing the identification processing on the personnel image in the monitoring image to obtain the identification result comprises:
carrying out facial feature extraction processing on the personnel image to obtain facial features;
matching the facial features with standard features corresponding to each identity information in a configuration file;
if the matched target standard features exist, determining target identity information corresponding to the target standard features as the identity recognition result;
and if the target standard characteristic does not exist, determining the identity recognition result as a no-result, and executing a preset operation.
5. The monitoring image processing method according to claim 1, wherein the replacing the human image with the human simulation model to obtain a privacy-removed monitoring image comprises:
carrying out interference data coverage processing on the personnel images in the monitoring images to obtain interference images;
and replacing the interference area corresponding to the personnel image by using the personnel simulation model to obtain the privacy-removing monitoring image.
6. The monitor image processing method according to any one of claims 1 to 5, further comprising, after obtaining the identification result:
performing identity marking processing on the personnel image based on the identity recognition result;
correspondingly, after the monitoring image is obtained, before the personal image in the monitoring image is subjected to identity recognition processing to obtain an identity recognition result, the method further includes:
performing identity marking processing on the monitoring image based on a historical identity mark, and judging whether the personnel image has the identity mark;
if the identity mark exists, acquiring the identity recognition result according to the identity mark;
if the person image does not have the identity mark, judging whether the person image comprises a person head image or not;
and if the head image of the person is included, executing the step of carrying out identity recognition processing on the image of the person in the monitoring image to obtain an identity recognition result.
7. A method of processing a monitor image, comprising:
acquiring a privacy-removed monitoring image;
determining identity information corresponding to the person simulation model in the privacy-removed monitoring image;
acquiring a real person head model corresponding to the identity information;
and replacing the head simulation model of the personnel simulation model by using the real head model to obtain a privacy recovery monitoring image.
8. A monitor image processing apparatus characterized by comprising:
the identification module is used for acquiring a monitoring image and carrying out identity identification processing on a personnel image in the monitoring image to obtain an identity identification result;
the simulation model obtaining module is used for determining a head simulation model according to the identity recognition result, generating a corresponding human behavior simulation model by using the personnel image, and constructing a personnel simulation model by using the head simulation model and the human behavior simulation model;
and the replacing module is used for replacing the personnel image by utilizing the personnel simulation model to obtain the privacy-removing monitoring image.
9. A monitor image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring privacy-removed monitoring images;
the identity information determining module is used for determining identity information corresponding to the person simulation model in the privacy-removed monitoring image;
the model generating module is used for acquiring a real person head model corresponding to the identity information;
and the recovery module is used for replacing the head simulation model of the person simulation model by using the real person head model to obtain a privacy recovery monitoring image.
10. An electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor for executing the computer program to implement the monitor image processing method according to any one of claims 1 to 7.
11. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the monitor image processing method according to any one of claims 1 to 7.
CN202011078856.2A 2020-10-10 2020-10-10 Monitoring image processing method and device, electronic equipment and readable storage medium Pending CN114332972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011078856.2A CN114332972A (en) 2020-10-10 2020-10-10 Monitoring image processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114332972A true CN114332972A (en) 2022-04-12

Family

ID=81033102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011078856.2A Pending CN114332972A (en) 2020-10-10 2020-10-10 Monitoring image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114332972A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116486981A (en) * 2023-06-15 2023-07-25 北京中科江南信息技术股份有限公司 Method for storing health data and method and device for reading health data
CN116486981B (en) * 2023-06-15 2023-10-03 北京中科江南信息技术股份有限公司 Method for storing health data and method and device for reading health data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination