CN115565647A - Terminal device, image processing method and storage medium - Google Patents

Terminal device, image processing method and storage medium

Info

Publication number
CN115565647A
CN115565647A (application CN202211241040.6A)
Authority
CN
China
Prior art keywords
image
detection model
processed
target
privacy protection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211241040.6A
Other languages
Chinese (zh)
Inventor
于春晓
李平
陈哲
李艳林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority to CN202211241040.6A
Publication of CN115565647A
Legal status: Pending

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H 40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Processing (AREA)

Abstract

The application provides a terminal device, an image processing method, and a storage medium, and belongs to the field of computer technology. Multiple privacy protection modes, each protecting different parts of the human body, are configured, and corresponding detection models are set for each mode. A user selects the privacy protection mode required for a given use scene; the image to be processed is detected by the detection models for the body parts corresponding to the selected mode and is then blurred based on the detection result. Different privacy-sensitive parts of the human body can thus be protected according to different use scenes.

Description

Terminal device, image processing method and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a terminal device, an image processing method, and a storage medium.
Background
With the rapid development of telemedicine, remote consultation has brought great convenience and help to large numbers of primary-care doctors and patients.
However, the remote consultation process often needs to be screen-captured, recorded, and so on, and the resulting video data carries a risk of flowing out of the institution, which may disclose private information such as the patient's identity and physical signs.
At present, there is no privacy protection method that protects different human body parts for different use scenes.
Disclosure of Invention
In order to solve the above problems in the prior art, embodiments of the present application provide a terminal device, an image processing method, and a storage medium that can flexibly protect different parts of the human body in different use scenes.
In a first aspect, an embodiment of the present application provides a terminal device, including: a display, a memory, and a processor;
the display is configured to: display an interface while the terminal device runs;
the memory is configured to: store programs or data used when the terminal device runs;
the processor is configured to:
in response to a user's operation of selecting a privacy protection mode in a mode selection interface, acquire at least one target detection model corresponding to the target privacy protection mode selected by the user, wherein a plurality of privacy protection modes for privacy protection of different parts of the human body are displayed in the mode selection interface;
perform target detection, using the at least one target detection model, on each frame of image to be processed obtained from captured video data, to obtain a target area in the image to be processed; and
blur each frame of image to be processed in the video data based on the target area in that frame to obtain processed video data, and output the processed video data.
In one possible embodiment, the privacy protection modes include at least one of a face privacy protection mode, an upper limb privacy protection mode, and a trunk privacy protection mode; the detection models corresponding to the face privacy protection mode comprise an eye detection model, a nose detection model, and a mouth detection model; the detection models corresponding to the upper limb privacy protection mode comprise an eye detection model, a nose detection model, a mouth detection model, and a chest detection model; and the detection models corresponding to the trunk privacy protection mode comprise a chest detection model and a human body median detection model.
In one possible embodiment, the processor is configured to:
if the target privacy protection mode selected by the user includes both the face privacy protection mode and the trunk privacy protection mode, acquire the detection models corresponding to the face privacy protection mode and the detection models corresponding to the trunk privacy protection mode as the target detection models.
In one possible embodiment, the processor is configured to:
determine the human body region image contained in the image to be processed;
perform target detection on the human body region image with each target detection model synchronously, to obtain the target frame position information output by each target detection model; and
obtain the target area in each frame of image to be processed based on the target frame position information output by each target detection model.
In one possible embodiment, the processor is configured to:
convert the image to be processed into HSV color space to obtain a color space image; and
binarize the image to be processed based on the hue value of each pixel in the color space image, to obtain the background region image and the human body region image contained in the image to be processed.
In one possible embodiment, the processor is configured to:
determine first-type pixels in the image to be processed that correspond to pixels of the color space image whose hue values lie within a set hue interval, and set the pixel values of the first-type pixels to a first set value, the set hue interval covering hue values greater than or equal to a minimum hue threshold and less than or equal to a maximum hue threshold;
determine second-type pixels in the image to be processed that correspond to pixels of the color space image whose hue values lie outside the set hue interval, and set the pixel values of the second-type pixels to a second set value; and
take the image region formed by the pixels with the first set value as the human body region image, and the image region formed by the pixels with the second set value as the background region image.
In one possible embodiment, the processor is configured to:
downsample the human body region image by different factors to obtain a plurality of sampled images;
for each target detection model, synchronously perform the following operations:
input each of the plurality of sampled images into the target detection model to obtain the target detection result, comprising a target frame position and a confidence, that the model outputs for each sampled image; and
fit the target frame positions of the plurality of sampled images according to the confidences in the target detection results, to obtain the target frame position information output by the target detection model.
In a possible implementation, the terminal device further includes a communication module, and the processor is configured to:
transmit the processed video data to other terminal devices through the communication module.
In a second aspect, an embodiment of the present application provides an image processing method applied to a terminal device, the method comprising:
in response to a user's operation of selecting a privacy protection mode in a mode selection interface, acquiring at least one target detection model corresponding to the target privacy protection mode selected by the user, wherein a plurality of privacy protection modes for privacy protection of different parts of the human body are displayed in the mode selection interface;
performing target detection, using the at least one target detection model, on each frame of image to be processed obtained from captured video data, to obtain a target area in the image to be processed; and
blurring each frame of image to be processed in the video data based on the target area in that frame, to obtain processed video data, and outputting the processed video data.
In a third aspect, an embodiment of the present application provides an image processing apparatus, comprising:
a detection unit, configured to, in response to a user's operation of selecting a privacy protection mode in a mode selection interface, acquire at least one target detection model corresponding to the target privacy protection mode selected by the user, wherein a plurality of privacy protection modes for privacy protection of different parts of the human body are displayed in the mode selection interface;
and to perform target detection, using the at least one target detection model, on each frame of image to be processed obtained from captured video data, to obtain a target area in the image to be processed; and
a privacy unit, configured to blur each frame of image to be processed in the video data based on the target area in that frame, to obtain processed video data, and to output the processed video data.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method of the second aspect is implemented.
With the terminal device, the image processing method, and the storage medium provided above, multiple privacy protection modes, each protecting different parts of the human body, are configured, and corresponding detection models are set for each mode. A user selects the privacy protection mode required for a given use scene; the image to be processed is detected by the detection models for the body parts corresponding to the selected mode and is then blurred based on the detection result, so that different privacy-sensitive parts of the human body can be protected according to different use scenes.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is an application scene diagram of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 3 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of an ultrasonic thyroid examination provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a privacy preserving mode interface provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of another privacy preserving mode interface provided by embodiments of the present application;
FIG. 7 is a schematic diagram of another privacy preserving mode interface provided by embodiments of the present application;
FIG. 8 is a schematic diagram of another privacy preserving mode interface provided by embodiments of the present application;
FIG. 9 is a schematic diagram of another privacy preserving mode interface provided by embodiments of the present application;
fig. 10 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 11 is a schematic diagram illustrating an eye detection result according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a nose detection result provided by an embodiment of the present application;
fig. 13 is a schematic diagram of a mouth detection result provided in an embodiment of the present application;
fig. 14 is a schematic diagram of a target area of an image to be processed according to an embodiment of the present application;
fig. 15 is a schematic diagram illustrating a blurring process performed on a target area of an image to be processed according to an embodiment of the present application;
fig. 16 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
fig. 17 is a schematic diagram of another image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present application, not all of them; all other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
It should be noted that the application scenarios described below are intended to illustrate the technical solutions of the embodiments more clearly and do not limit them; as new application scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
To meet the need of different use scenes to protect different parts of the human body, the embodiments of the present application provide a terminal device, an image processing method, and a storage medium. In response to a user's operation of selecting a privacy protection mode in a mode selection interface, the terminal device acquires at least one target detection model corresponding to the target privacy protection mode selected by the user, performs target detection on the image to be processed with the at least one target detection model to obtain a target area in the image, blurs each frame of image to be processed in the video data based on the target area in that frame to obtain processed video data, and outputs the processed video data. Multiple privacy protection modes, each protecting different parts of the human body, are configured, and corresponding detection models are set for each mode; the user selects the mode required for the use scene at hand, the image to be processed is detected by the detection models for the corresponding body parts, and the image is blurred based on the detection results, so that different privacy-sensitive parts of the human body are protected according to different use scenes.
Fig. 1 schematically shows an application scenario of the image processing method in an embodiment of the present application. In this scenario, a plurality of terminal devices implement instant communication over a network: as shown in fig. 1, the terminal device 100 communicates with the terminal device 200 and the terminal device 300 through the network. Fig. 1 shows three terminal devices as an example; in actual use there may be more or fewer than three.
In a possible application scenario, the image processing method of the embodiments may be applied to remote consultation, and the terminal device may be, but is not limited to, a medical device or a computer in a hospital. Suppose that during a remote consultation a patient is in an office of hospital A, where a doctor is operating a medical instrument to diagnose the patient. The terminal device 100 captures video data of the doctor's diagnosis and treatment of the patient in real time and sends it over the network to the terminal device 200 in hospital B and the terminal device 300 in hospital C, so that the doctors of hospitals B and C can follow the patient's condition in real time and the doctors of all three hospitals can diagnose the condition together by remote consultation.
In the above scenario, the video data transmitted between the terminal devices inevitably includes body images of the patient. To prevent the video data from leaking the patient's personal privacy, the doctors of hospital A can select different privacy protection modes as actually needed. The terminal device 100 detects the video data of the patient's diagnosis according to the target detection models corresponding to the target privacy protection mode selected by the doctor of hospital A, performs privacy processing on the video data according to the detected privacy regions, renders the privacy-processed video data on its own display, and sends it over the network to the terminal device 200 and the terminal device 300, which receive the privacy-processed video data and render it on their displays.
Fig. 2 exemplarily shows a hardware configuration block diagram of the terminal device 100 in the embodiment of the present application. The hardware configuration block diagram is also applicable to the terminal device 200 and the terminal device 300 in fig. 1. As shown in fig. 2, the terminal device 100 may include: processor 110, memory 120, display 130, camera 140, communication module 150, and bus 160; the processor 110, memory 120, display 130, camera 140 and communication module 150 are connected by a bus 160. The communication module 150 is used for transmitting or receiving data through a network.
The camera 140 is used to capture still images or video; there may be one or more cameras. The camera 140 may include a lens and a photosensitive element. An object forms an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor; it converts the optical image into an electrical signal proportional to the image and transmits the electrical signal to the processor 110, where it is converted into a digital image signal. In some embodiments the terminal device has no camera of its own and is instead connected to an external image acquisition device via a data line or a wireless network, from which it obtains the captured video data.
The display 130 is used to display information input by or provided to the user and the graphical user interface (GUI) of the various menus of the terminal device 100. It may be implemented as a liquid crystal display, a light-emitting-diode display, or the like. The display 130 can show the interface of the running terminal device and can also show images such as the privacy-processed video data of a remote consultation according to the embodiments of the present application.
The memory 120 may be used to store data or program code used by the terminal device when operating; the processor 110 performs the various functions of the terminal device 100 and processes data by executing the data or program code stored in the memory 120. The memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 120 stores the operating system that keeps the terminal device 100 running.
The processor 110 is the control center of the terminal device 100: it connects the various parts of the terminal device through various interfaces and lines, and performs the functions of the terminal device 100 and processes its data by running or executing the software programs stored in the memory 120 and calling the data stored there. In some embodiments, the processor 110 may include one or more processing units. In the present application, the processor 110 may run the operating system, application programs, user interface display, touch response, and the image processing method of the embodiments. The specific procedure by which the processor 110 performs the image processing method is described in detail below.
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present application. The method may be applied to a terminal device shown in fig. 1, for example the terminal device 100, and can protect privacy information of different parts of the human body. The following description takes as its example protecting a patient's privacy information during an ultrasonic thyroid examination in a remote consultation. As shown in fig. 3, the method may include the following steps:
Step S301: in response to the user's operation of selecting a privacy protection mode in the mode selection interface, acquire at least one target detection model corresponding to the target privacy protection mode selected by the user.
For example, as shown in fig. 4, when a doctor performs an ultrasonic thyroid examination on a patient during a remote consultation, the camera of the terminal device 100 captures the doctor's operation, and privacy information of the patient's face may be exposed. Before the consultation, a mode selection interface may be displayed on the display of the terminal device 100. As shown in fig. 5, multiple privacy protection modes for privacy protection of different parts of the human body are displayed in the interface, and each mode may be provided with one or more corresponding detection models. For example, the privacy protection modes may include at least one of a face privacy protection mode, an upper limb privacy protection mode, and a trunk privacy protection mode. The detection models corresponding to the face privacy protection mode may include an eye detection model, a nose detection model, and a mouth detection model; those corresponding to the upper limb privacy protection mode may include an eye detection model, a nose detection model, a mouth detection model, and a chest detection model; and those corresponding to the trunk privacy protection mode may include a chest detection model and a human body median detection model.
To protect the patient's facial privacy information, the doctor may select the face privacy protection mode in the mode selection interface and click the confirmation button, as shown in fig. 6. In response, the terminal device 100 may acquire the eye detection model, the nose detection model, and the mouth detection model corresponding to the face privacy protection mode and use them as the target detection models.
In other embodiments, if it is desired to protect the patient's face and chest privacy information, the physician may select the upper limb privacy protection mode in the mode selection interface and click the confirmation button, as shown in fig. 7. The terminal device 100 may acquire the eye detection model, the nose detection model, the mouth detection model, and the chest detection model and set the eye detection model, the nose detection model, the mouth detection model, and the chest detection model as target detection models in response to an operation of a doctor selecting an upper limb privacy protection mode in a mode selection interface.
In other embodiments, if it is desired to protect patient chest and median privacy information, the physician may select the torso privacy protection mode in the mode selection interface and click the confirmation button, as shown in fig. 8. The terminal device 100 may acquire the chest detection model and the human body median detection model in response to an operation of the doctor selecting the trunk privacy protection mode in the mode selection interface, and may take the chest detection model and the human body median detection model as the target detection model.
In other embodiments, if protection of the patient's face privacy information and median privacy information is desired, the doctor may select a face privacy protection mode and a torso privacy protection mode in the mode selection interface and click a confirmation button, as shown in fig. 9. The terminal device 100 may acquire the eye detection model, the nose detection model, the mouth detection model, the chest detection model, and the human body median detection model in response to an operation of a doctor selecting the face privacy protection mode and the trunk privacy protection mode in the mode selection interface, and may use the eye detection model, the nose detection model, the mouth detection model, the chest detection model, and the human body median detection model as the target detection model.
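As an illustration of step S301, the correspondence between privacy protection modes and detection models can be held in a lookup table, and the model sets of all selected modes merged. The Python sketch below uses the mode and model names from the description above; the load_model helper and its return value are hypothetical placeholders, not part of this embodiment.

    # Minimal sketch of step S301: map the user-selected privacy
    # protection modes to the union of their detection models.
    # load_model() and its return value are hypothetical placeholders.

    MODE_TO_MODELS = {
        "face": ["eye", "nose", "mouth"],
        "upper_limb": ["eye", "nose", "mouth", "chest"],
        "torso": ["chest", "body_median"],
    }

    def load_model(name: str):
        # Placeholder for loading a trained detector from disk
        # (e.g. an ONNX or TorchScript file); here it just returns
        # a tag standing in for the model object.
        return f"<{name}-detector>"

    def target_detection_models(selected_modes):
        """Return one instance of each detection model required by
        the union of the selected privacy protection modes."""
        names = set()
        for mode in selected_modes:
            names.update(MODE_TO_MODELS[mode])
        return {name: load_model(name) for name in sorted(names)}

    # Selecting the face and torso modes together yields the eye,
    # nose, mouth, chest, and body-median detectors, matching the
    # example above.
    models = target_detection_models(["face", "torso"])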
Step S302: for each frame of image to be processed obtained from the captured video data, perform target detection on the image using the at least one target detection model, to obtain the target area in the image to be processed.
In some embodiments, the method shown in fig. 10 may be used to process each frame of image to be processed obtained from the captured video data. The processing procedure may include the following steps:
Step S3021: convert the image to be processed into HSV color space to obtain a color space image.
Step S3022: binarize the image to be processed based on the hue value of each pixel in the color space image, obtaining the background region image and the human body region image contained in the image to be processed.
To determine the human body region image contained in the image to be processed, the image may be converted into HSV color space to obtain a color space image, where H denotes hue, S denotes saturation, and V denotes value (lightness).
First-type pixels in the image to be processed, corresponding to pixels of the color space image whose hue values lie within a set hue interval, are determined, and their pixel values are set to a first set value; the set hue interval covers hue values greater than or equal to a minimum hue threshold and less than or equal to a maximum hue threshold. Second-type pixels, corresponding to pixels whose hue values lie outside the set hue interval, are determined, and their pixel values are set to a second set value. The image region formed by the pixels with the first set value is taken as the human body region image, and the image region formed by the pixels with the second set value as the background region image.
For example, in a remote consultation, when the human body region on a hospital bed needs to be extracted, the pure-colored background can be removed based on the prior knowledge that the bed is of a pure color. In that case the minimum hue threshold of the hue interval may be 70, the maximum hue threshold 248, the first set value 255, and the second set value 0: pixels of the color space image whose hue value lies in [70, 248] are set to 255, pixels whose hue value lies in [0, 70) or (248, 359] are set to 0, the region formed by the pixels set to 255 is taken as the human body region image, and the region formed by the pixels set to 0 as the background region image.
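A minimal Python sketch of steps S3021 and S3022 with OpenCV, under the thresholds of the hospital-bed example (70 and 248 on a 0-359 hue scale), might look as follows. Note that OpenCV stores 8-bit hue halved into 0-179, so the values are rescaled before thresholding; that rescaling, the function name, and reading the frame from a file are illustrative assumptions, not details fixed by this embodiment.

    import cv2
    import numpy as np

    def split_body_and_background(frame_bgr, min_hue=70, max_hue=248):
        """Steps S3021-S3022: binarize a frame by hue so that pixels
        whose hue (0-359 scale) lies in [min_hue, max_hue] become 255
        (human body region) and all other pixels become 0 (background
        region)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)   # step S3021
        hue = hsv[:, :, 0].astype(np.uint16) * 2           # OpenCV hue is 0-179
        mask = np.where((hue >= min_hue) & (hue <= max_hue),
                        255, 0).astype(np.uint8)           # step S3022
        body_region = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
        background = cv2.bitwise_and(frame_bgr, frame_bgr,
                                     mask=cv2.bitwise_not(mask))
        return body_region, background, mask

    frame = cv2.imread("frame.png")  # assumed input frame
    body, background, mask = split_body_and_background(frame)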
Step S3023: perform target detection on the human body region image contained in the image to be processed with the at least one target detection model, to obtain the target area in the image to be processed.
In some embodiments, after the human body region image contained in the image to be processed has been determined, it can be downsampled by different factors to obtain a plurality of sampled images, and each target detection model can then perform target detection on the downsampled human body region images synchronously, yielding the target frame position information output by each model. Specifically, for each target detection model the following operations are performed synchronously: each of the plurality of sampled images is input into the target detection model to obtain the target detection result, comprising the target frame position and a confidence, that the model outputs for that sampled image; the target frame positions of the plurality of sampled images are fitted according to the confidences in those results to obtain the target frame position information output by the model; and the target area in each frame of image to be processed is obtained from the target frame position information.
For example, after the human body region image has been determined, it may be downsampled by factors of 2, 4, and 8, and the three sampled images may be input synchronously into the eye detection model, the nose detection model, and the mouth detection model, so that each target detection model detects the human body region image at the same time. The eye detection model outputs an eye target frame position and a confidence for each of the 2x, 4x, and 8x sampled images, and these positions are fitted according to their confidences to give the target frame position information output by the eye detection model, as shown in fig. 11. The nose detection model is handled in the same way, yielding the target frame position information shown in fig. 12, as is the mouth detection model, yielding the information shown in fig. 13. The target area in the image to be processed is then obtained from the target frame position information output by the eye, nose, and mouth detection models, as shown in fig. 14. Inputting several sampled images into each target detection model, obtaining target frame positions with confidences, and fitting the positions according to those confidences effectively improves the accuracy of image detection.
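The multi-scale detection and fitting just described can be sketched as follows, assuming each detection model is a callable that returns an (x, y, w, h) target frame and a confidence for an input image. The embodiment does not fix the detector architecture or the exact fitting formula, so the confidence-weighted average below is one plausible reading of fitting the target frame positions according to their confidences.

    import cv2
    import numpy as np

    def detect_multiscale(body_image, detector, factors=(2, 4, 8)):
        """Run one detection model on several downsampled copies of
        the human body region image and fit the resulting target
        frames by confidence-weighted averaging."""
        boxes, confs = [], []
        for f in factors:
            small = cv2.resize(body_image, None, fx=1.0 / f, fy=1.0 / f,
                               interpolation=cv2.INTER_AREA)
            (x, y, w, h), conf = detector(small)  # assumed detector interface
            # Scale the box back to the full-resolution image.
            boxes.append(np.array([x, y, w, h], dtype=float) * f)
            confs.append(conf)
        confs = np.asarray(confs, dtype=float)
        weights = confs / confs.sum()             # confidence-weighted fit
        fitted = (np.stack(boxes) * weights[:, None]).sum(axis=0)
        return fitted.round().astype(int), float(confs.max())

    # Calling detect_multiscale once per model (eye, nose, mouth, ...)
    # from parallel threads or processes gives the synchronous
    # detection of the embodiment; the returned boxes together form
    # the target area.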
Moreover, because every target detection model performs target detection on the human body region image synchronously, with all models working at the same time, the efficiency of image detection is effectively improved.
In other embodiments, to further improve detection efficiency, after the human body region image contained in the image to be processed has been determined, it may be downsampled by a single set factor, and the one downsampled image may be input synchronously into each target detection model to obtain the target frame position information output by each model. For example, the human body region image may be downsampled by a factor of 8, and the 8x downsampled image input synchronously into every target detection model.
Downsampling the human body image contained in the image to be processed by a set factor effectively reduces the amount of image data and so improves detection efficiency.
Step S303: blur each frame of image to be processed in the video data based on the target area in that frame, to obtain processed video data, and output the processed video data.
Based on the target area of each frame of image to be processed shown in fig. 14, each frame in the video data may be blurred by Gaussian blurring, pixelation, or the like; the result after blurring, shown in fig. 15, is the processed video data, which is then output. In a remote consultation, the terminal device can display the processed video data on its own display and can also send it to other terminal devices over the network.
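A minimal sketch of the blurring in step S303 is given below: each detected target frame of a frame image is blurred in place with a Gaussian kernel (pixelation would work analogously, by downscaling and re-upscaling the region). The 51x51 kernel size is an illustrative choice, not a value from this embodiment.

    import cv2

    def blur_targets(frame_bgr, target_boxes, ksize=(51, 51)):
        """Step S303: Gaussian-blur every detected target region of
        one frame and return the privacy-protected frame."""
        out = frame_bgr.copy()
        for (x, y, w, h) in target_boxes:
            roi = out[y:y + h, x:x + w]
            # Kernel dimensions must be odd; 51x51 is illustrative only.
            out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, ksize, 0)
        return out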
It should be noted that the application scenarios include, but are not limited to, protecting a patient's privacy regions during a remote consultation; the method can also be applied to any other use scene that requires protection of privacy regions of the human body, and can be adjusted dynamically according to those protection requirements.
In the present application, on the basis of privacy protection modes flexibly configured per body part, the image to be processed undergoes background removal and downsampling and is detected synchronously by a plurality of detection models, which effectively improves the efficiency of target detection and, in turn, of privacy protection.
Based on the same inventive concept, an embodiment of the present application further provides an image processing apparatus. As shown in fig. 16, the image processing apparatus includes:
a detection unit 1601, configured to, in response to a user's operation of selecting a privacy protection mode in a mode selection interface, acquire at least one target detection model corresponding to the target privacy protection mode selected by the user, wherein a plurality of privacy protection modes for privacy protection of different parts of the human body are displayed in the mode selection interface, and to perform target detection, using the at least one target detection model, on each frame of image to be processed obtained from captured video data, to obtain a target area in the image to be processed; and
a privacy protecting unit 1602, configured to blur each frame of image to be processed in the video data based on the target area in that frame, to obtain processed video data, and to output the processed video data.
In one possible embodiment, the privacy protection modes comprise at least one of a face privacy protection mode, an upper limb privacy protection mode, and a trunk privacy protection mode; the detection models corresponding to the face privacy protection mode comprise an eye detection model, a nose detection model, and a mouth detection model; the detection models corresponding to the upper limb privacy protection mode comprise an eye detection model, a nose detection model, a mouth detection model, and a chest detection model; and the detection models corresponding to the trunk privacy protection mode comprise a chest detection model and a human body median detection model.
In a possible implementation, the detection unit 1601 is specifically configured to:
if the target privacy protection mode selected by the user includes both the face privacy protection mode and the trunk privacy protection mode, acquire the detection models corresponding to the face privacy protection mode and the detection models corresponding to the trunk privacy protection mode as the target detection models.
In a possible implementation, the detection unit 1601 is specifically configured to:
determine the human body region image contained in the image to be processed;
perform target detection on the human body region image with each target detection model synchronously, to obtain the target frame position information output by each target detection model; and
obtain the target area in each frame of image to be processed based on the target frame position information output by each target detection model.
In a possible implementation, the detection unit 1601 is specifically configured to:
convert the image to be processed into HSV color space to obtain a color space image; and
binarize the image to be processed based on the hue value of each pixel in the color space image, to obtain the background region image and the human body region image contained in the image to be processed.
In a possible implementation, the detection unit 1601 is specifically configured to:
determine first-type pixels in the image to be processed that correspond to pixels of the color space image whose hue values lie within a set hue interval, and set the pixel values of the first-type pixels to a first set value, the set hue interval covering hue values greater than or equal to a minimum hue threshold and less than or equal to a maximum hue threshold;
determine second-type pixels in the image to be processed that correspond to pixels of the color space image whose hue values lie outside the set hue interval, and set the pixel values of the second-type pixels to a second set value; and
take the image region formed by the pixels with the first set value as the human body region image, and the image region formed by the pixels with the second set value as the background region image.
In a possible implementation, the detection unit 1601 is specifically configured to:
downsample the human body region image by different factors to obtain a plurality of sampled images;
for each target detection model, synchronously perform the following operations:
input each of the plurality of sampled images into the target detection model to obtain the target detection result, comprising a target frame position and a confidence, that the model outputs for each sampled image; and
fit the target frame positions of the plurality of sampled images according to the confidences in the target detection results, to obtain the target frame position information output by the target detection model.
In a possible implementation, as shown in fig. 17, the image processing apparatus may further include an output unit 1701, specifically configured to transmit the processed video data to other terminal devices.
An embodiment of the present application further provides a computer storage medium storing computer-executable instructions that, when executed, implement the image processing method of any embodiment of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A terminal device, comprising: a display, a memory, and a processor;
the display is configured to: display an interface while the terminal device runs;
the memory is configured to: store programs or data used when the terminal device runs;
the processor is configured to:
in response to a user's operation of selecting a privacy protection mode in a mode selection interface, acquire at least one target detection model corresponding to the target privacy protection mode selected by the user, wherein a plurality of privacy protection modes for privacy protection of different parts of the human body are displayed in the mode selection interface;
perform target detection, using the at least one target detection model, on each frame of image to be processed obtained from captured video data, to obtain a target area in the image to be processed; and
blur each frame of image to be processed in the video data based on the target area in that frame to obtain processed video data, and output the processed video data.
2. The terminal device of claim 1, wherein the privacy protection modes comprise at least one of a face privacy protection mode, an upper limb privacy protection mode, and a trunk privacy protection mode;
the detection models corresponding to the face privacy protection mode comprise an eye detection model, a nose detection model, and a mouth detection model;
the detection models corresponding to the upper limb privacy protection mode comprise an eye detection model, a nose detection model, a mouth detection model, and a chest detection model; and
the detection models corresponding to the trunk privacy protection mode comprise a chest detection model and a human body median detection model.
3. The terminal device of claim 2, wherein the processor is configured to:
if the target privacy protection mode selected by the user includes both the face privacy protection mode and the trunk privacy protection mode, acquire the detection models corresponding to the face privacy protection mode and the detection models corresponding to the trunk privacy protection mode as the target detection models.
4. The terminal device of claim 1, wherein the processor is configured to:
determine the human body region image contained in the image to be processed;
perform target detection on the human body region image with each target detection model synchronously, to obtain the target frame position information output by each target detection model; and
obtain the target area in each frame of image to be processed based on the target frame position information output by each target detection model.
5. The terminal device of claim 4, wherein the processor is configured to:
convert the image to be processed into HSV color space to obtain a color space image; and
binarize the image to be processed based on the hue value of each pixel in the color space image, to obtain the background region image and the human body region image contained in the image to be processed.
6. The terminal device of claim 5, wherein the processor is configured to:
determine first-type pixels in the image to be processed that correspond to pixels of the color space image whose hue values lie within a set hue interval, and set the pixel values of the first-type pixels to a first set value, wherein the set hue interval covers hue values greater than or equal to a minimum hue threshold and less than or equal to a maximum hue threshold;
determine second-type pixels in the image to be processed that correspond to pixels of the color space image whose hue values lie outside the set hue interval, and set the pixel values of the second-type pixels to a second set value; and
take the image region formed by the pixels with the first set value as the human body region image, and the image region formed by the pixels with the second set value as the background region image.
7. The terminal device according to any one of claims 4 to 6, wherein the processor is configured to:
downsample the human body region image by different factors to obtain a plurality of sampled images;
for each target detection model, synchronously perform the following operations:
input each of the plurality of sampled images into the target detection model to obtain the target detection result, comprising a target frame position and a confidence, that the model outputs for each sampled image; and
fit the target frame positions of the plurality of sampled images according to the confidences in the target detection results, to obtain the target frame position information output by the target detection model.
8. The terminal device of claim 1, further comprising a communication module, wherein the processor is configured to:
transmit the processed video data to other terminal devices through the communication module.
9. An image processing method, applied to a terminal device, the method comprising:
in response to a user's operation of selecting a privacy protection mode in a mode selection interface, acquiring at least one target detection model corresponding to the target privacy protection mode selected by the user, wherein a plurality of privacy protection modes for privacy protection of different parts of the human body are displayed in the mode selection interface;
performing target detection, using the at least one target detection model, on each frame of image to be processed obtained from captured video data, to obtain a target area in the image to be processed; and
blurring each frame of image to be processed in the video data based on the target area in that frame, to obtain processed video data, and outputting the processed video data.
10. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the method of claim 9.
CN202211241040.6A (priority date 2022-10-11, filing date 2022-10-11) | Terminal device, image processing method and storage medium | Status: Pending | Publication: CN115565647A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211241040.6A | 2022-10-11 | 2022-10-11 | Terminal device, image processing method and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211241040.6A | 2022-10-11 | 2022-10-11 | Terminal device, image processing method and storage medium

Publications (1)

Publication Number | Publication Date
CN115565647A | 2023-01-03

Family

ID=84745906

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211241040.6A | Pending | 2022-10-11 | 2022-10-11

Country Status (1)

Country Link
CN (1) CN115565647A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN117278692A (en) * | 2023-11-16 | 2023-12-22 | 邦盛医疗装备(天津)股份有限公司 | Desensitization protection method for monitoring data of medical detection vehicle patients
CN117278692B (en) * | 2023-11-16 | 2024-02-13 | 邦盛医疗装备(天津)股份有限公司 | Desensitization protection method for monitoring data of medical detection vehicle patients


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination