CN113762184A - Image processing method, image processing device, electronic equipment and computer storage medium - Google Patents


Info

Publication number
CN113762184A
Authority
CN
China
Prior art keywords
person
class
image
personnel
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111070564.9A
Other languages
Chinese (zh)
Inventor
孙贺然
路露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111070564.9A priority Critical patent/CN113762184A/en
Publication of CN113762184A publication Critical patent/CN113762184A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The embodiments disclose an image processing method, an image processing device, an electronic device, and a computer storage medium, wherein the method comprises the following steps: acquiring at least one frame of image to be processed collected by an image acquisition device; performing target detection on each frame of the at least one frame of image to be processed to obtain a person image in the at least one frame of image to be processed; performing identity recognition and behavior recognition on the object in the person image to respectively obtain a first behavior recognition result of a first class of person and a second behavior recognition result of a second class of person, wherein the first class of person is a person to be cared for, and the second class of person is a caregiver of the first class of person; and generating, based on the first behavior recognition result and the second behavior recognition result, a data report reflecting how the second class of person cares for the first class of person.

Description

Image processing method, image processing device, electronic equipment and computer storage medium
Technical Field
The present disclosure relates to computer vision processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer storage medium.
Background
In the related art, for people who need care, such as the elderly, a camera may be used to collect corresponding images so as to determine the behavior state of the person needing care and to evaluate the care relationship between the caregiver and the elderly person. However, this approach evaluates the care relationship subjectively, is not objective or accurate enough, and consumes a lot of time because a user is required to watch the video in real time or review recorded video.
Disclosure of Invention
The embodiment of the disclosure provides a technical scheme for image processing.
The embodiment of the present disclosure provides an image processing method, including:
acquiring at least one frame of image to be processed acquired by image acquisition equipment;
performing target detection on each frame of image in the at least one frame of image to be processed to obtain a personnel image in the at least one frame of image to be processed;
performing identity recognition and behavior recognition on the object in the person image to respectively obtain a first behavior recognition result of a first class of person and a second behavior recognition result of a second class of person, wherein the first class of person is the person being cared for, and the second class of person is the caregiver of the first class of person;
and generating a data report for reflecting that the second type of person takes care of the first type of person based on the first behavior recognition result and the second behavior recognition result.
In some embodiments, the method further comprises:
respectively performing attribute identification on the first class of people and the second class of people in the personnel image to obtain a first attribute identification result of the first class of people and a second attribute identification result of the second class of people, wherein the attribute identification is at least used for identifying the expression of each person;
the generating a data report reflecting that the second class of person cares for the first class of person based on the first behavior recognition result and the second behavior recognition result comprises:
generating the data report based on the first behavior recognition result, the second behavior recognition result, the first attribute recognition result, and the second attribute recognition result.
In some embodiments, after obtaining the first behavior recognition result of the first class of person, the method further comprises:
in response to the first behavior recognition result indicating that the first class of person has fallen or stopped moving, generating first alarm information;
and sending the first alarm information to the first target equipment.
In some embodiments, before the sending the first warning information to the first target device, the method further comprises:
determining an alarm level based on the first alarm information;
determining a care state based on the first behavior recognition result and the second behavior recognition result;
and determining the first target equipment matched with the alarm level and/or the nursing state from a plurality of preset associated equipment, and sending the first alarm information to the first target equipment.
In some embodiments, after the obtaining of the second behavior recognition result of the second type of person, the method further includes:
in response to the second behavior recognition result indicating that the behavior of the second class of person is abnormal, generating second alarm information;
and sending the second warning information to second target equipment, wherein the second target equipment comprises equipment used by relatives corresponding to the first class of people and/or equipment used by management personnel corresponding to the second class of people.
In some embodiments, the person image comprises an image of a visiting person other than the first type of person and the second type of person;
the method further comprises the following steps:
in response to the fact that the image of the visiting person is not matched with a pre-stored person image, determining the visiting person as a stranger, and performing behavior analysis on the stranger to obtain an analysis result of the stranger;
generating third alarm information under the condition that the analysis result of the stranger meets a preset condition;
sending the third warning information to a third target device;
and/or,
in response to the image of the visiting person being matched with a pre-stored person image, determining the visiting person as a familiar person and performing behavior analysis on the familiar person to obtain an analysis result of the familiar person;
and sending the analysis result of the familiar person to a fourth target device.
In some embodiments, after generating the data report, the method further comprises at least one of:
generating a psychological coaching scheme for the first class of people based on the data report;
generating a care instruction plan for the second class of people based on the data report;
wherein the psychological coaching scheme is obtained based on a first attribute recognition result of the first class of people or based on the first attribute recognition result and the first behavior recognition result; the nursing guidance plan is obtained based on at least one of a second attribute recognition result and the second behavior recognition result of the second type of person.
An embodiment of the present disclosure further provides an image processing apparatus, including:
the acquisition module is used for acquiring at least one frame of image to be processed acquired by the image acquisition equipment;
the detection module is used for carrying out target detection on each frame of image in the at least one frame of image to be processed to obtain a personnel image in the at least one frame of image to be processed;
the identification module is used for performing identity recognition and behavior recognition on the object in the person image to respectively obtain a first behavior recognition result of a first class of person and a second behavior recognition result of a second class of person, wherein the first class of person is the person being cared for, and the second class of person is the caregiver of the first class of person;
and the processing module is used for generating a data report for reflecting that the second type of personnel takes care of the first type of personnel based on the first behavior recognition result and the second behavior recognition result.
The disclosed embodiments also provide an electronic device, including a processor and a memory for storing a computer program capable of running on the processor; wherein,
the processor is configured to run the computer program to perform any one of the image processing methods described above.
The disclosed embodiments also provide a computer storage medium having a computer program stored thereon, which when executed by a processor implements any of the image processing methods described above.
It can be seen that, in the embodiments of the present disclosure, a data report reflecting how the second class of person cares for the first class of person can be generated from the first behavior recognition result of the first class of person and the second behavior recognition result of the second class of person. Because these behavior recognition results are objective and accurate information, the embodiments of the present disclosure can evaluate the care relationship between the second class of person and the first class of person objectively and accurately. In addition, evaluating the care relationship does not require a user to watch video in real time or review recorded video, which saves time and improves the efficiency of the evaluation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
fig. 2 is an interface schematic diagram of a terminal device according to an embodiment of the disclosure;
FIG. 3 is a flow chart of an image processing method of an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a product architecture involved in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of functions that can be implemented in embodiments of the present disclosure;
FIG. 6 is another schematic diagram of functionality that can be implemented in embodiments of the present disclosure;
FIG. 7 is a schematic diagram of a component structure of an image processing apparatus according to an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The present disclosure will be described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the examples provided herein are merely illustrative of the present disclosure and are not intended to limit the present disclosure. In addition, the embodiments provided below are some embodiments for implementing the disclosure, not all embodiments for implementing the disclosure, and the technical solutions described in the embodiments of the disclosure may be implemented in any combination without conflict.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other related elements (e.g., steps in a method, or units in a device, such as circuits, processors, programs, or software) in the method or apparatus that includes the element.
For example, the image processing method provided by the embodiment of the present disclosure includes a series of steps, but the image processing method provided by the embodiment of the present disclosure is not limited to the described steps, and similarly, the image processing apparatus provided by the embodiment of the present disclosure includes a series of modules, but the apparatus provided by the embodiment of the present disclosure is not limited to include the explicitly described modules, and may also include modules that are required to be configured for acquiring relevant information or performing processing based on the information.
The term "and/or" herein merely describes an association between associated objects, and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
In the embodiment of the disclosure, image acquisition equipment such as a camera can be installed in a personnel residence, and the personnel residence can be a family residence or a residence such as a social welfare institution; one or more image acquisition devices can be installed in the personnel residence; the image acquisition equipment installed in the personnel dwelling comprises image acquisition equipment used for acquiring an image of the interior of the personnel dwelling, and the image acquisition equipment installed in the personnel dwelling can also comprise image acquisition equipment used for acquiring an image of a doorway of a family dwelling.
Referring to fig. 1, the image capturing device installed at the person's residence may include any one of the following: an intelligent network camera (IP Camera, IPC) with image processing capability, an intelligent module with image acquisition and image processing capabilities, or an ordinary camera without image processing capability, where the ordinary camera may be an IPC without embedded algorithms or another device with image acquisition capability.
Any image acquisition equipment installed in the personnel residence can be the original image acquisition equipment of the personnel residence, so that the original image acquisition equipment is conveniently and fully utilized; any image acquisition device installed at the residence of people can also be a newly added image acquisition device. That is, the embodiment of the present disclosure can flexibly deploy corresponding image acquisition devices according to actual application scenarios, thereby implementing image acquisition and processing; in addition, the original image acquisition equipment is utilized, so that the hardware deployment cost is favorably reduced.
The intelligent IPC or intelligent module can perform at least one of the following image processing operations on the acquired image: face detection, face tracking, face attribute identification and face feature extraction. In practical application, a Software Development Kit (SDK) with an embedded algorithm may be embedded in the intelligent IPC or the intelligent module, so that the intelligent IPC or the intelligent module may perform at least one of the above-mentioned image processing operations on the acquired image to obtain a preliminary result of image processing.
In the embodiments of the present disclosure, the image acquisition device installed in the person's residence may interact with the cloud device through a local area network. Illustratively, the intelligent IPC or intelligent module may report the device state, the preliminary result of image processing, and the like to the cloud device, and may also synchronize a face image database with the cloud device; the images collected by the ordinary camera are reported to the cloud device.
The cloud equipment can further perform image processing on the primary result of the image processing reported by the intelligent IPC or the intelligent module, and can also perform image processing on the image acquired by the common camera. Exemplarily, referring to fig. 1, an edge device may be further added between the ordinary camera and the local area network, the ordinary camera may perform data interaction with the edge device, and the ordinary camera may report the acquired image to the edge device; the edge device processes the received image to obtain a primary result of image processing, then the primary result of image processing can be reported to the cloud device through the local area network, and the cloud device can perform further image processing on the primary result of image processing.
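As an illustration only, the preliminary result that an intelligent IPC, intelligent module, or edge device reports to the cloud device might be serialized as a small JSON payload; the field names and schema below are assumptions for the sketch, not specified by the disclosure:

```python
import json


def build_preliminary_result(device_id, timestamp, detections):
    """Assemble a hypothetical preliminary image-processing result
    (device state plus face detections) for reporting to the cloud device.
    All field names here are illustrative assumptions."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": timestamp,
        "device_state": "online",
        "face_detections": [
            {"box": box, "track_id": tid} for box, tid in detections
        ],
    })


# One detected face with bounding box (x1, y1, x2, y2) and a track id.
payload = build_preliminary_result("ipc-01", 1694563200, [([10, 20, 110, 140], 7)])
```

The cloud device would then deserialize such a payload and run the further image processing described below.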
Referring to fig. 1, the cloud device may send the image processing result to the terminal device of the relevant person, so as to display the image processing result on the interface of the terminal device. Here, the terminal device may be a fixed terminal such as a personal computer or a handheld mobile terminal, and the operating system of the terminal device may be an operating system such as Windows, Android, iOS, or the like; the interface of the terminal device may be a Web browser page or an Application (APP) interface.
Referring to fig. 2, the interface of the terminal device presents the person identity information obtained by face recognition and the behavior recognition result corresponding to the person identity information; exemplarily, an attribute identification result corresponding to the person identity information is also presented on the interface of the terminal device; here, the attribute recognition result may represent a result of recognizing an expression of a person; also shown in fig. 2 are neighbor identity information and event alert information associated with the personnel identity information; it should be noted that fig. 2 is only an exemplary illustration of the content presented on the interface of the terminal, and the embodiment of the present disclosure is not limited thereto.
Based on the above described application scenarios, the embodiment of the present disclosure provides an image processing method. Fig. 3 is a flowchart of an image processing method according to an embodiment of the disclosure, and as shown in fig. 3, the flowchart may include:
step 301: acquiring at least one frame of image to be processed acquired by image acquisition equipment.
In the embodiment of the present disclosure, the image capturing device may be the above-mentioned intelligent IPC, an intelligent module with image capturing and image processing capabilities, a general camera without image processing capability, or the like. At least one frame of image to be processed can be a continuous frame image or a discontinuous frame image; for example, in the case that a plurality of image capturing devices are used to capture images, images captured by the plurality of image capturing devices at the same time may be fused to obtain a frame of image to be processed after the fusion process.
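A minimal sketch of fusing images captured at the same time by multiple devices, assuming the frames are already aligned to the same size; pixel-wise averaging is used here purely for illustration, since the disclosure does not specify a fusion method:

```python
import numpy as np


def fuse_frames(frames):
    """Fuse same-timestamp frames from multiple cameras into a single
    to-be-processed frame by pixel-wise averaging (illustrative only)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)


a = np.zeros((4, 4, 3), dtype=np.uint8)       # frame from camera 1
b = np.full((4, 4, 3), 200, dtype=np.uint8)   # frame from camera 2
fused = fuse_frames([a, b])                   # each pixel averages to 100
```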
The format of the image to be processed may be Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphics (PNG), or another format; it should be noted that these formats are merely illustrative, and the embodiments of the present disclosure do not limit the format of the image to be processed.
In the embodiment of the present disclosure, one or more image capturing devices may be provided; multiple image capture devices may be deployed indoors and outdoors.
Step 302: and carrying out target detection on each frame of image in at least one frame of image to be processed to obtain a personnel image in the at least one frame of image to be processed.
In practical application, a human body detection model can be trained in advance, and the human body detection model is used for detecting a person image from an image; after the trained human body detection model is obtained, each frame of image to be processed can be input into the human body detection model, each frame of image to be processed is processed by using the human body detection model, a human body detection frame corresponding to the image is obtained, and the image in the human body detection frame is the personnel image.
The embodiments of the present disclosure do not limit the network structure of the human body detection model. The network structure may be a two-stage detection network; for example, the human body detection model may be a Faster Region-based Convolutional Neural Network (Faster R-CNN) or the like. The network structure of the human body detection model may also be a single-stage detection network, for example, RetinaNet.
It can be understood that each frame of the image to be processed may include one human body, or may include a plurality of human bodies; therefore, the human body detection is carried out on each frame of image to be processed, and one human body image or a plurality of human body images can be obtained.
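To illustrate step 302, the sketch below assumes a trained human body detection model has already produced bounding boxes for one frame, and simply crops the corresponding person images; the `(x1, y1, x2, y2)` box format is an assumption:

```python
import numpy as np


def crop_person_images(frame, boxes):
    """Given one frame and human detection boxes (x1, y1, x2, y2),
    return one cropped person image per detected human body."""
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append(frame[y1:y2, x1:x2].copy())
    return crops


# A frame containing two detected human bodies yields two person images.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
persons = crop_person_images(frame, [(10, 20, 60, 120), (100, 30, 180, 200)])
```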
Step 303: and performing identity recognition and behavior recognition on the object in the image of the person to respectively obtain a first behavior recognition result of a first type of person and a second behavior recognition result of a second type of person, wherein the first type of person is a person to be cared, and the second type of person is a caretaker of the first type of person.
In the embodiment of the disclosure, the first-class person may be an old person, a minor person or other person who loses mobility; the second type of person may be a caregiver or a social welfare agency worker; in practical application, the images of the first class of people may be pre-stored, and after the image of the person in the at least one frame of image to be processed is obtained, under the condition that the image of the person in the image to be processed is successfully matched with the pre-stored image of the first class of people, it may be determined that the image of the person in the image to be processed includes the image of the first class of people, that is, the identification of the first class of people in the image of the person may be realized.
Similarly, the images of the second class of people may be pre-stored, and after the image of the person in the at least one frame of image to be processed is obtained, under the condition that the image of the person in the image to be processed is successfully matched with the pre-stored image of the second class of people, it may be determined that the image of the person in the image to be processed includes the image of the second class of people, that is, the identification of the second class of people in the image of the person may be realized.
For example, whether the image of the person in the image to be processed matches with the image of the first class of person stored in advance may be determined through human body feature comparison, and whether the image of the person in the image to be processed matches with the image of the second class of person stored in advance may also be determined through human body feature comparison. Here, the human body feature may be a feature used for identifying the identity of a person, such as a human face feature or a human hand feature.
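The feature comparison described above can be sketched as a cosine-similarity match between a detected person's feature vector and pre-stored feature vectors of the first and second classes of person; the threshold value and the toy 3-dimensional features are assumptions for illustration:

```python
import numpy as np


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(query_feature, gallery, threshold=0.8):
    """Return the label of the best-matching pre-stored feature,
    or 'stranger' if no similarity exceeds the threshold."""
    best_label, best_score = "stranger", threshold
    for label, feature in gallery.items():
        score = cosine_similarity(query_feature, feature)
        if score > best_score:
            best_label, best_score = label, score
    return best_label


gallery = {
    "first_class_person": np.array([1.0, 0.0, 0.0]),
    "second_class_person": np.array([0.0, 1.0, 0.0]),
}
label = identify(np.array([0.98, 0.05, 0.0]), gallery)
```

A visiting person whose feature matches no pre-stored image would fall through to "stranger", consistent with the stranger-handling claim above.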
In practical application, after the first class of people is identified in the personnel image, the first class of people can be subjected to behavior identification according to the image of the first class of people in at least one frame of image to be processed, so that a first behavior identification result of the first class of people is obtained; for example, the human body key points in the image of the first class of people may be identified, and the first behavior identification result of the first class of people may be determined according to the position distribution information of the human body key points in the image of the first class of people. In the disclosed embodiment, the human body key points may include at least one of a head key point, an upper body key point and a lower body key point, the upper body key point may include a chest key point, an abdomen key point, a back key point, etc., and the lower body key point may include a knee key point, a calf key point, a thigh key point, a foot key point, etc.
Similarly, after the second type of person is identified in the person image, the second type of person may be identified according to the image of the second type of person in the at least one frame of image to be processed, so as to obtain a second behavior identification result of the second type of person.
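As a hedged illustration of behavior recognition from the position distribution of human body key points, the heuristic below flags a possible fall when the head key point is not clearly above the knee key points; the rule, threshold, and key-point format are assumptions, not the disclosure's method:

```python
def recognize_fall(keypoints):
    """keypoints: dict mapping key-point names to (x, y) image coordinates,
    where y grows downward. Flags a possible fall when the head is at
    roughly the same height as, or below, the knees."""
    head_y = keypoints["head"][1]
    knee_y = min(keypoints["left_knee"][1], keypoints["right_knee"][1])
    return "fall" if head_y >= knee_y - 10 else "normal"


standing = {"head": (50, 20), "left_knee": (45, 150), "right_knee": (55, 150)}
fallen = {"head": (120, 160), "left_knee": (60, 155), "right_knee": (70, 158)}
```

A "fall" result here corresponds to the first behavior recognition result that triggers the first alarm information described above.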
Step 304: and generating a data report for reflecting that the second type of person takes care of the first type of person based on the first behavior recognition result and the second behavior recognition result.
Here, the data report may reflect the behavior of the first type of person and the second type of person, and may also reflect the situation of the first type of person and the second type of person, for example, for the situation that the first type of person and the second type of person are in the same image to be processed, the data report may reflect the situation that the second type of person attends to the first type of person; for the case where the first and second types of people are in different images, the data report may reflect the working status of the second type of people and the behavior of the first type of people.
In the case where the first class of person is an elderly person and the second class of person is a caregiver: in a first example, the data report may reflect that the caregiver is nursing the elderly person; in a second example, the data report may reflect the working state of the caregiver cleaning a room or cooking a meal; in a third example, the data report may reflect the state of the caregiver resting in an area such as a bedroom while the elderly person is left alone. It can be understood that the first and second examples describe a caregiver in a good working state, while the third example describes a caregiver in an unqualified working state.
In practical application, based on the first behavior recognition result and the second behavior recognition result, the respective behaviors of the first class of people and the second class of people can be determined, the situation of the first class of people and the second class of people can also be reflected, and a data report for reflecting that the second class of people attends to the first class of people can be generated.
In practical applications, the steps 301 to 304 may be implemented by a Processor in the electronic Device, where the Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
Here, the electronic device may include the cloud device, and may further include at least one of an intelligent IPC, an intelligent module, and an edge device. In practical applications, the electronic device may use an Artificial Intelligence (AI) technique to perform image processing, where the AI technique includes at least one of the following: face detection, face tracking, face attribute identification and face feature extraction.
It can be seen that, in the embodiments of the present disclosure, a data report reflecting how the second class of person cares for the first class of person can be generated from the first behavior recognition result of the first class of person and the second behavior recognition result of the second class of person. Because these behavior recognition results are objective and accurate information, the embodiments of the present disclosure can evaluate the care relationship between the second class of person and the first class of person objectively and accurately. In addition, evaluating the care relationship does not require a user to watch video in real time or review recorded video, which saves time and improves the efficiency of the evaluation.
In consideration of the correlation between the expressions of the first class of people and the expressions of the second class of people and the nursing relationship of the second class of people for nursing the first class of people, in the embodiment of the disclosure, the first class of people and the second class of people in the image of the people can be subjected to attribute recognition respectively to obtain a first attribute recognition result of the first class of people and a second attribute recognition result of the second class of people, wherein the attribute recognition is at least used for recognizing the expressions of the people.
Accordingly, the data report may be generated based on the first behavior recognition result, the second behavior recognition result, the first attribute recognition result, and the second attribute recognition result.
In order to realize attribute identification of the first class of people or the second class of people, in the embodiment of the disclosure, in the case that the at least one frame of image to be processed comprises consecutive frame images, face key point detection may be performed on the person image in the consecutive frame images to obtain the face key points of the first class of people or the second class of people in each frame of the consecutive frame images; the change information of these face key points between adjacent frame images is then determined; and the expression information of the first class of people or the second class of people, that is, the first attribute recognition result or the second attribute recognition result, is determined according to that change information.
In practical application, a face key point model can be trained in advance, the face key point model being used for determining the coordinates of face key points in a person image (for example, a coordinate such as (150, 220)), where the face key points may comprise pixel points at the facial features of the face. After determining the coordinates of the face key points of the first class of people or the second class of people in each frame of the consecutive frame images, the change information of those face key points between adjacent frame images can be determined; here, this change information includes coordinate change information. The adjacent frame images may comprise N mutually adjacent images, N being an integer greater than or equal to 2.
For example, the attribute recognition result of the first class of people or the second class of people may include expressions such as happiness, calm, sadness, pain, depression, anger, tension and anxiety.
It can be understood that comparing the change information of the face key points of the first class of people or the second class of people in adjacent frame images is beneficial to determining their expression information; for example, in the case that detecting the change of the mouth-corner key points of a first-class person shows that the mouth corners are raised, the expression of that person can be determined to be a happy expression.
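As a minimal sketch of this mouth-corner heuristic (the key-point names, the threshold value, and the happy/sad/calm label set are illustrative assumptions, not the disclosure's actual model), the vertical displacement of the mouth-corner key points between adjacent frames can be compared against a threshold:

```python
def classify_mouth_expression(prev_kpts, curr_kpts, rise_thresh=2.0):
    """Classify an expression from mouth-corner movement between two frames.

    prev_kpts / curr_kpts map key-point names to (x, y) pixel coordinates.
    Image y grows downward, so a raised mouth corner means y decreases.
    """
    rise = sum(prev_kpts[n][1] - curr_kpts[n][1]
               for n in ("mouth_corner_left", "mouth_corner_right")) / 2.0
    if rise > rise_thresh:
        return "happy"
    if rise < -rise_thresh:
        return "sad"
    return "calm"
```

In a real pipeline the coordinates would come from the pre-trained face key point model mentioned above; here they are plain tuples so the logic can be inspected in isolation.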
It can be understood that the embodiment of the disclosure can combine the expression information and the behavior recognition results of the various people to more accurately generate a data report reflecting how the second class of people cares for the first class of people, which in turn is favorable for accurately performing subsequent processing based on the data report.
In one implementation, first alarm information is generated in response to the first behavior recognition result representing that the first class of people has fallen or stopped moving, and the first alarm information is sent to a first target device.
In practical applications, it can be determined that a first-class person is in a falling state in the case that at least part of the human body key points of that person lie roughly in a horizontal plane parallel to the ground.
In practical application, in the case that the posture of a first-class person is the same abnormal posture across multiple consecutive frames of images to be processed, and the difference between that abnormal posture and a reference posture is larger than a set difference amplitude, the first-class person is determined to be in a stopped-motion state. In the embodiment of the disclosure, at least one normal posture can be set for the first class of people; the reference posture belongs to the normal postures of the first class of people, and an abnormal posture does not. Taking the sitting posture as an example, the reference posture can be a normal sitting posture, such as a posture with the lower limbs bent, while the abnormal posture may be a state in which the first-class person is seated but the head droops. In some practical scenarios, if a first-class person has stopped moving, the body may slide down along the sofa or chair to a certain extent, the bending of the lower limbs may disappear, and the pelvis may move relatively far from the seat back; therefore, the abnormal posture may also be a posture in which the lower limbs are not bent, or a posture in which the pelvis is far from the seat back.
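The two checks above can be sketched over body key points as follows (the key-point format, the flatness ratio, and both thresholds are illustrative assumptions; a real system would tune them against its pose estimator):

```python
import math

def is_fallen(body_kpts, flatness_ratio=0.5):
    """Treat a person whose key points span far more width than height as lying down."""
    xs = [x for x, _ in body_kpts]
    ys = [y for _, y in body_kpts]
    height = max(ys) - min(ys)
    width = max(xs) - min(xs)
    return height < width * flatness_ratio

def pose_distance(a, b):
    """Mean per-key-point distance between two poses (same-length point lists)."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def is_motion_stopped(poses, reference, still_eps=3.0, abnormal_thresh=40.0):
    """Stopped motion: the same posture is held across consecutive frames
    (no inter-frame movement), and that held posture differs from the
    reference (normal) posture by more than the set difference amplitude."""
    still = all(pose_distance(poses[i], poses[i + 1]) < still_eps
                for i in range(len(poses) - 1))
    abnormal = pose_distance(poses[-1], reference) > abnormal_thresh
    return still and abnormal
```

The flatness test is a crude stand-in for "key points lie in a horizontal plane"; a production system would also account for camera angle.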
In the embodiment of the present disclosure, the first target device may be a terminal device used by a related person, and the related person may be a family member of the first type of person, for example, the related person may be a guardian of the first type of person; the person of interest may also be a worker of a caregiver or social welfare agency.
In the case that a first-class person is in a falling state or a stopped-motion state, the person can be considered to be in a dangerous situation; at this time, the first alarm information is sent to the first target device, so that the relevant person can learn of the dangerous situation through the first target device and perform subsequent handling of it.
In practical application, after receiving the first warning information, the terminal device used by the relevant person may display the first warning information on an interface; for example, referring to fig. 2, the following first warning information is displayed on the interface of the terminal device: "The elderly person has fallen to the floor".
The manner of determining the first target device is exemplarily explained below.
Before the first alarm information is sent to the first target device, an alarm level may be determined based on the first alarm information, and a nursing state may be determined based on the first behavior recognition result and the second behavior recognition result; a first target device matching the alarm level and/or the nursing state is then determined from a plurality of preset associated devices, and the first alarm information is sent to that first target device.
The preset associated devices can comprise the terminal device used by a caregiver, the terminal device used by a guardian of the first class of people, the terminal device used by an emergency system and the terminal device used by dangerous event handling personnel; for each associated device, the alarm level and/or the nursing state matching that device can be predetermined, so that after the alarm level and/or the nursing state corresponding to the first class of people are determined, the first target device can be determined. Here, a dangerous event handling person refers to a person who performs special handling for various types of events that endanger the first class of people.
In the embodiment of the disclosure, the alarm level may reflect the risk degree corresponding to the first alarm information, and the higher the alarm level is, the higher the risk degree corresponding to the first alarm information is; for example, in a case where the first warning information indicates that the elderly person is in a falling state, the risk level corresponding to the first warning information may be considered to be low, that is, the warning level is low, and in this case, the first warning information may be sent to a terminal device of a guardian of a caregiver or a first-class person. Under the condition that the first alarm information indicates that the old person is in a stop state and the duration of the stop state exceeds the set duration, the danger degree corresponding to the first alarm information can be considered to be higher, namely, the alarm level is higher, and under the condition, the first alarm information can be sent to the terminal equipment used by the guardian, the emergency system and the dangerous event handling personnel of the first class of personnel.
In the embodiment of the disclosure, the nursing state may reflect the nursing relationship of the second class of people to the first class of people, and the nursing state may indicate that the second class of people is nursing the first class of people, the second class of people does not nurse the first class of people but is in a normal working state, the second class of people does not nurse the first class of people and is not in a normal working state, and the like; exemplarily, in the case that the nursing state indicates that the second type of person is nursing the first type of person or the second type of person is not nursing the first type of person but is in a normal working state, the first warning information may be sent to the terminal device of the caregiver; when the nursing state indicates that the second type of person does not nurse the first type of person and is not in a normal working state, the first warning information can be sent to the terminal device of the caregiver, and the first warning information can also be sent to the terminal device used by the guardian, the emergency system or the dangerous event handling person of the first type of person.
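The routing described in the two paragraphs above can be sketched as a small rule function (the device names, level labels, and state labels are placeholders invented for illustration; the disclosure does not fix these values):

```python
def select_alert_targets(alarm_level, nursing_state):
    """Pick the preset associated devices that should receive the first alarm information.

    alarm_level: "low" or "high".
    nursing_state: "attending", "not_attending_on_duty", or "not_attending_off_duty".
    """
    if alarm_level == "high":
        # high danger: escalate past the caregiver
        targets = {"guardian_phone", "emergency_system", "responder_phone"}
    else:
        targets = {"caregiver_phone", "guardian_phone"}
    if nursing_state == "not_attending_off_duty":
        # caregiver absent and not working normally: widen the recipients
        targets |= {"guardian_phone", "emergency_system"}
    return targets
```

A table-driven mapping from (level, state) pairs to device sets would work equally well once the preset associated devices are enumerated.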
Therefore, the first class of people can be considered to be in a dangerous situation when in a falling or stopped-motion state; at this time, determining the alarm level and/or the nursing state is favorable for accurately determining the first target device that receives the first alarm information, so that the relevant person can learn of the dangerous situation from the first alarm information, improving the feasibility of subsequent handling of the dangerous situation.
In the embodiment of the disclosure, after a second behavior recognition result of a second class of people is obtained, second warning information is generated in response to the fact that the behavior of the second class of people is represented to be abnormal by the second behavior recognition result; and sending second alarm information to second target equipment, wherein the second target equipment comprises equipment used by relatives corresponding to the first class of people and/or equipment used by management personnel corresponding to the second class of people.
Here, the manager corresponding to the second type of person may represent a manager of a company to which the second type of person belongs.
For example, after the second behavior recognition result is obtained, it may be determined whether the second behavior recognition result meets a preset criterion, where the preset criterion may represent a behavior criterion of the second class of people. Under the condition that the second behavior recognition result does not accord with the preset criterion, the behavior of the second class of personnel is abnormal; and under the condition that the second behavior recognition result meets the preset criterion, the behavior of the second class of people is not abnormal.
After generating the second warning information, the second warning information may be sent to the second target device; the second target device may display the second warning information on the interface after receiving the second warning information.
It can be understood that, in the case that the second behavior recognition result represents that the behavior of the second class of people is abnormal, that behavior can be considered an abnormal behavior; thus, by sending the second alarm information to the second target device, the relevant person can learn of the abnormal behavior of the second class of people in time and perform targeted processing.
Regarding the implementation of judging whether the second behavior recognition result meets the preset criterion, in a first example, the duration of each single departure of the second class of people from the residence of the first class of people may be determined according to the second behavior recognition result; the second behavior recognition result is determined to meet the preset criterion in the case that the duration of every single departure is smaller than a first duration threshold, and not to meet the preset criterion in the case that the duration of any single departure is greater than or equal to the first duration threshold.
In the embodiment of the disclosure, the positions at which the second class of people appear in the multiple frames of images to be processed can be analyzed to determine the time point at which the second class of people leave the residence of the first class of people each time, and timing can start from that time point; in the case that the timing value reaches or exceeds the first duration threshold, the duration of that single departure can be considered greater than or equal to the first duration threshold; in the case that the second class of people return to the doorway of the residence of the first class of people, the timing stops, yielding the duration of this departure.
Exemplarily, in the case that the first time threshold is 2 hours, and the time for the second type of person to leave the residence of the first type of person for a single time is 1 hour, the second behavior recognition result may be considered to meet the preset criterion; in the case that the first time threshold is 2 hours and the second type of person leaves the residence of the first type of person for a single time is 3 hours, the second behavior recognition result may be considered to be not in accordance with the preset criterion.
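The single-departure timing described above can be sketched over a presence timeline (the event format is an assumption; in practice the presence flags would be derived from whether the caregiver appears in each processed frame):

```python
def single_leave_violations(presence_events, threshold_s=2 * 3600):
    """presence_events: time-sorted (timestamp_s, present) samples for the caregiver.

    Returns the (leave_time, return_time) pairs whose duration reached the
    first duration threshold (2 hours by default, matching the example above).
    """
    violations = []
    left_at = None
    for t, present in presence_events:
        if not present and left_at is None:
            left_at = t          # start timing when the caregiver leaves
        elif present and left_at is not None:
            if t - left_at >= threshold_s:
                violations.append((left_at, t))
            left_at = None       # stop timing on return
    return violations
```

A 3-hour absence is flagged while a 30-minute one is not, mirroring the 2-hour example in the text.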
It can be understood that when the time length that the second type of person leaves the residence of the first type of person for a single time is greater than or equal to the first time length threshold, it can be considered that the behavior of the second type of person is abnormal, so that by generating the second alarm information, the related person can know the abnormal behavior of the second type of person in time, and thus the targeted processing is performed; for example, referring to fig. 2, the following second warning information is presented on the interface of the terminal device: the time for the nanny to leave exceeds 2 hours, so that related personnel can know the abnormal behaviors of the second class of personnel in time.
For the implementation manner of determining whether the second behavior recognition result meets the preset criterion, in the second example, the total duration of the second class of people leaving the residence of the first class of people in each set period may be determined according to the second behavior recognition result, and the second behavior recognition result may be determined to meet the preset criterion when the total duration of the second class of people leaving the residence of the first class of people in any one set period is less than the second duration threshold; and determining that the second behavior recognition result does not meet the preset criterion under the condition that the total time length for the second type of people to leave the residence of the first type of people in any set period is greater than or equal to a second time length threshold value.
In the embodiment of the present disclosure, the setting period may be set according to the working rule of the second class of people, for example, the setting period may be one day, one week, and the like. According to the above description, the time length of the second type of person leaving the residence of the first type of person each time can be determined, so that the total time length of the second type of person leaving the residence of the first type of person in each set period can be obtained by counting the time length of the second type of person leaving the residence of the first type of person each time in each set period.
Illustratively, the set period is one day and the second duration threshold is 3.5 hours. In the case that the total duration for which the second class of people leave the residence of the first class of people within one day is 2.5 hours, the second behavior recognition result can be considered to meet the preset criterion; in the case that that total duration is 5 hours, the second behavior recognition result can be considered not to meet the preset criterion.
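Aggregating the per-departure durations into per-period totals can be sketched as follows (for simplicity this sketch assumes no single departure spans midnight, and uses the one-day period and 3.5-hour threshold from the example above):

```python
DAY_S = 24 * 3600

def daily_leave_totals(leaves):
    """leaves: (leave_time_s, return_time_s) pairs; returns {day_index: total_away_s}."""
    totals = {}
    for start, end in leaves:
        day = start // DAY_S
        totals[day] = totals.get(day, 0) + (end - start)
    return totals

def period_violations(leaves, threshold_s=int(3.5 * 3600)):
    """Day indices whose accumulated away-time reaches the second duration threshold."""
    return sorted(day for day, total in daily_leave_totals(leaves).items()
                  if total >= threshold_s)
```

For other set periods (e.g. one week), only the bucketing divisor changes.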
It can be understood that, when the total time length that the second type of person leaves the residence of the first type of person in any one set period is greater than or equal to the second time length threshold, the behavior of the second type of person can be considered as abnormal behavior, so that by generating the second alarm information, the related person can know the abnormal behavior of the second type of person in time, and thus the targeted processing is performed.
For the implementation manner of determining whether the second behavior recognition result meets the preset criterion, in the third example, the information of the interaction behavior between the first class of people and the second class of people may be determined according to the first behavior recognition result and the second behavior recognition result, and the second behavior recognition result is determined to meet the preset criterion under the condition that the information of the interaction behavior meets the preset interaction behavior criterion; and under the condition that the information of the interactive behavior does not accord with the preset interactive behavior criterion, determining that the second behavior identification result does not accord with the preset criterion.
In the embodiment of the present disclosure, the preset interactive behavior criterion may be a behavior criterion formulated by a family member of the first class of people. Understandably, in the case that the information of the interactive behavior does not conform to the preset interactive behavior criterion, the behavior of the second class of people can be considered abnormal; thus, by generating the second alarm information, the relevant person can learn of the abnormal behavior of the second class of people in time and perform targeted processing.
For the implementation manner of determining whether the information of the interactive behavior conforms to the preset interactive behavior criterion, for example, it may be determined that the information of the interactive behavior does not conform to the preset interactive behavior criterion under the condition that the type of the interactive behavior between the first class of people and the second class of people does not belong to the preset compliant behavior type, or under the condition that the duration of the interactive behavior between the first class of people and the second class of people is less than the third duration threshold.
Illustratively, the preset compliance behavior type may be talking, massaging, assisting, or the like. The third duration threshold may be preset according to the physiological needs and psychological needs of the second class of people.
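The interaction check above reduces to two conditions, type and duration (the compliant-type set comes from the example in the text; the 10-minute third duration threshold is an assumed placeholder):

```python
COMPLIANT_TYPES = {"talking", "massaging", "assisting"}

def interaction_meets_criterion(kind, duration_s, min_duration_s=600):
    """The interactive behavior conforms to the preset interactive behavior
    criterion only if it is of a compliant type AND lasts at least the
    third duration threshold."""
    return kind in COMPLIANT_TYPES and duration_s >= min_duration_s
```

Either a non-compliant behavior type or a too-short interaction makes the second behavior recognition result fail the preset criterion.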
In the embodiment of the disclosure, when the second behavior recognition result meets the preset criterion, the behavior of the second class of people can be considered as a normal behavior; in the case that the second behavior recognition result does not meet the preset criterion, it may be considered that the behavior of the second type of person is abnormal.
Considering that both the normal behavior and the abnormal behavior of the second class of people can be used for guiding the second class of people to implement the behavior subsequently according to the preset criterion, in the embodiment of the disclosure, the behavior information of the second class of people can be classified and stored according to the judgment result of whether the second behavior recognition result meets the preset criterion. By storing the behavior information of the second class of people in a classified manner, the behavior information of the normal behavior and the behavior information of the abnormal behavior of the second class of people can be conveniently checked by the user.
In an actual scene, the person image may further include an image of a visiting person other than the first type person and the second type person;
correspondingly, in response to the fact that the image of the visiting person is not matched with the pre-stored person image, the visiting person is determined as an unfamiliar person, behavior analysis is conducted on the unfamiliar person, and an analysis result of the unfamiliar person is obtained; generating third alarm information under the condition that the analysis result of the stranger meets the preset condition; sending third alarm information to third target equipment;
in the embodiment of the disclosure, the personnel images may be pre-stored in the intelligent IPC, the intelligent module, the edge device or the cloud device, the pre-stored personnel images may include images of personnel known to the first type of personnel, and the personnel known to the first type of personnel may be second type of personnel, family members of the first type of personnel, friends of the first type of personnel, neighbors of the first type of personnel, and the like; illustratively, images and identity information of persons known to the first class of persons may be presented on an interface of a terminal device used by the relevant person, e.g., with reference to fig. 2, images and identity information of neighbors of the first class of persons may be presented on an interface of the terminal device.
In practical application, images of people known by the first class of people can be collected in advance, and corresponding identity information is labeled for the images of the people known by the first class of people.
After the person image in the at least one frame of image to be processed is obtained, whether the image of a visiting person other than the first class of people and the second class of people in the at least one frame of image to be processed matches a pre-stored person image is judged; the visiting person is determined to be a stranger in the case that the image of the visiting person does not match any pre-stored person image, and to be a familiar person in the case that it does.
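One common way to implement this matching (an assumption here; the disclosure only mentions face feature extraction, not a specific metric) is to compare extracted face feature vectors by cosine similarity against the pre-stored gallery:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_stranger(visitor_feature, known_features, sim_thresh=0.6):
    """The visitor matches a pre-stored person if any stored face feature is
    similar enough; otherwise the visitor is treated as a stranger.
    The 0.6 threshold is illustrative and model-dependent."""
    return all(cosine_sim(visitor_feature, f) < sim_thresh for f in known_features)
```

In practice the feature vectors would be produced by the face feature extraction model running on the intelligent IPC, module, edge, or cloud device.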
Here, the analysis result of the stranger may include at least one of: behavior information of strangers, and positions of strangers; the preset condition may represent a condition that poses a danger to the first type of person; the third target device may comprise at least one of: the first-class personnel management system comprises terminal equipment used by family members of the first-class personnel, terminal equipment used by the second-class personnel, equipment used by management personnel corresponding to the second-class personnel, terminal equipment used by an emergency system and terminal equipment used by dangerous event handling personnel.
After the third warning information is sent to the third target device, the third warning information may be displayed on an interface of the third target device; for example, referring to fig. 2, the following third warning information is displayed on the interface of the terminal device used by the family members of the first class of people: "A stranger is visiting, and the stranger's behavior is dangerous".
It can be seen that, in the case that the analysis result of the stranger meets the preset condition, the first class of people may be harmed; at this time, generating the third alarm information notifies the relevant person or the dangerous event handling person in time, so that the stranger's visit is handled promptly and the harm it may cause is reduced.
For the implementation manner of judging whether the analysis result of the stranger meets the preset condition, exemplarily, the analysis result of the stranger can be determined to meet the preset condition under the condition that the behavior information of the stranger includes preset dangerous behavior information or the position of the stranger is not in a preset area; and under the condition that the behavior information of the stranger does not include preset dangerous behavior information and the position of the stranger is in a preset area, determining that the analysis result of the stranger does not meet the preset condition.
Here, the preset dangerous behavior information may be used to represent behaviors of theft, damage, injury, and the like, and it is understood that the first type of person may be considered to be harmed in the case where the behavior information of the stranger includes the preset dangerous behavior information.
The preset area may represent an area predetermined to allow strangers to be present, for example, the preset area may represent a living room area of the first type of person and an out-door area of a residence of the first type of person; in the case where the location of the stranger is not in the preset area, it is considered that the first kind of person and the property of the first kind of person may be damaged.
In the embodiment of the disclosure, when the behavior information of the stranger does not include the preset dangerous behavior information and the position of the stranger is in the preset area, the stranger can be considered as a normally visited person; for example, the stranger is a normally visited property worker.
Therefore, by determining the behavior information and the position of the stranger, whether the analysis result of the stranger meets the preset condition can be judged accurately.
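The preset condition described in the paragraphs above is a disjunction of two tests (the behavior labels and area names are placeholders; the preset area and dangerous-behavior list would be configured per deployment):

```python
DANGEROUS_BEHAVIORS = {"theft", "damage", "injury"}
ALLOWED_AREAS = {"living_room", "outside_door"}  # the preset area

def stranger_meets_alert_condition(behaviors, location):
    """Third alarm information is warranted when the stranger shows a preset
    dangerous behavior OR appears outside the preset area; otherwise only a
    non-alarm prompt about the visit is needed."""
    return bool(DANGEROUS_BEHAVIORS & set(behaviors)) or location not in ALLOWED_AREAS
```

A property worker knocking at the door (no dangerous behavior, allowed area) falls through to the prompt-information path rather than the alarm path.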
In the embodiment of the disclosure, when the analysis result of the stranger does not meet the preset condition, in order to allow the related person to know the visiting situation of the stranger, prompt information representing the visiting situation of the stranger may be generated. Then, prompt information representing the visiting situation of the stranger can be sent to the terminal equipment of the related person; after receiving the prompt information, the terminal device may display the prompt information on an interface, for example, referring to fig. 2, the following prompt information is displayed on the interface of the terminal device: "strangers visit, strangers act normally".
Exemplarily, if the stranger is in an out-door area of a residence of the first type of person and the behavior information of the stranger does not include preset dangerous behavior information, prompt information representing the visiting situation of the stranger can be sent to the terminal device used by the family members of the first type of person or the terminal device used by the nanny, and third warning information does not need to be generated; if the stranger is in the preset area and the behavior of the stranger is stealing, third warning information can be generated and sent to the terminal equipment used by family members of the first class of people, the terminal equipment used by the nurse and the terminal equipment used by the dangerous event handling personnel.
In the embodiment of the disclosure, in response to the matching of the image of the visitor and the pre-stored personnel image, the visitor is determined as a familiar person and behavior analysis is performed on the familiar person to obtain an analysis result of the familiar person; and sending the analysis result of the familiar person to the fourth target device.
Here, the analysis result of the familiar person may include at least one of: the behavior information of the familiar person and the position of the familiar person. Illustratively, if the familiar person is in the area outside the door of the residence of the first class of people and does not enter the interior of the residence, the analysis result of the familiar person may include: the fact that the familiar person visited and the identity information of the familiar person. If the familiar person is in the interior area of the residence and the behavior information of the familiar person does not include preset dangerous behavior information, the analysis result may include a visit record of the familiar person. The visit record may include information such as the identity of the visiting person and the time of the visit.
In the embodiment of the present disclosure, the pre-stored person image, the preset condition, and the like may be updated according to the actual requirement of the user, for example, the pre-stored person image may be expanded based on the image of the person who is in the same line as the familiar person, or the image of the existing familiar person may be updated in the pre-stored person image.
In the embodiment of the present disclosure, after the data report is generated, a psychological coaching scheme for the first class of people may be generated based on the data report; and/or generating a care instruction plan for the second class of people based on the data report.
The psychological counseling scheme is obtained based on a first attribute recognition result of the first class of people or based on the first attribute recognition result and a first behavior recognition result; the nursing guidance plan is obtained based on at least one of the second attribute recognition result and the second behavior recognition result of the second type person.
Illustratively, in a case where the first attribute recognition result indicates that the expression of the first type person is a tense expression, a psychological coaching scheme corresponding to the tense emotion may be generated; when the first attribute identification result indicates that the expression of the first class of people is a depressed expression, and the first behavior identification result indicates that the first class of people is in a static state for a long time, a psychological tutoring scheme corresponding to the depressed emotion can be generated. And under the condition that the second behavior recognition result of the second type of personnel does not accord with the preset criterion, generating a corresponding nursing guidance scheme according to the preset criterion.
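The mapping just illustrated can be sketched directly (the expression labels, behavior label, and scheme names are placeholders standing in for whatever taxonomy the deployed system uses):

```python
def psychological_coaching_scheme(expression, behavior=None):
    """Map the first attribute recognition result (and optionally the first
    behavior recognition result) to a coaching scheme identifier."""
    if expression == "tense":
        return "tension_relief_scheme"
    if expression == "depressed" and behavior == "long_static":
        return "depression_support_scheme"
    return None  # no coaching scheme indicated
```

A parallel lookup keyed on the violated preset criterion would produce the care guidance scheme for the second class of people.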
It can be appreciated that, since a psychological coaching scheme for the first class of people and/or a care instruction plan for the second class of people can be generated from the data report, targeted psychological coaching can be provided to the first class of people based on the psychological coaching scheme, and targeted care guidance can be provided to the second class of people based on the care instruction plan.
In one implementation, object images in the at least one frame of image to be processed can also be obtained by performing target detection on each frame of image in the at least one frame of image to be processed; correspondingly, after the first class of people in the person image is identified, behavior analysis can be performed on the first class of people based on the images of the first class of people and the object images in the at least one frame of image to be processed, so as to obtain the first behavior recognition result of the first class of people.
In practical applications, a detection model may be trained in advance for each class of object, each model being used to detect images of its object class. After the trained detection models are obtained, each frame of image to be processed may be input into them; each model outputs object detection boxes for the frame, and the image within each detection box is an object image.
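The per-class detection step above can be sketched as a dispatcher that applies every class-specific model to every frame. `DetectionModel` here is a toy stand-in for a trained detector (a real system would wrap, e.g., a Faster R-CNN or RetinaNet network); the frame representation and all names are assumptions for illustration.

```python
# Minimal sketch of applying per-class detection models to each frame, as
# described above. DetectionModel is a toy stand-in: a real implementation
# would run a neural detector and return predicted boxes.

class DetectionModel:
    """Toy detector that 'detects' objects whose label it was trained for."""
    def __init__(self, label):
        self.label = label

    def detect(self, frame):
        # frame: list of annotated objects; return one box per matching object.
        return [obj["box"] for obj in frame if obj["label"] == self.label]

def detect_all(frames, models):
    """Apply every class-specific model to every frame; collect object images
    as (class label, bounding box) pairs."""
    results = []
    for frame in frames:
        for model in models:
            for box in model.detect(frame):
                results.append((model.label, box))
    return results
```

As the following paragraph notes, a frame may yield zero, one, or many object images, which this dispatcher naturally handles.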
The embodiment of the present disclosure does not limit the network structure of the detection model of each class of object. The network structure may be a two-stage detection network structure, for example, Faster R-CNN; it may also be a single-stage detection network structure, for example, RetinaNet.
It can be understood that each frame of image to be processed may include one detectable object or a plurality of detectable objects, where the plurality of detectable objects may be of the same class or of different classes; thus, after each frame of image to be processed is processed by the detection models of the various classes of objects, one object image or a plurality of object images may be obtained.
Illustratively, in a case where the first behavior recognition result indicates that the first class of people lies on a first class of object, corresponding warning information is generated and sent to the first target device. Here, a first class of object means any object other than the objects on which the first class of people rests.
In the embodiment of the present disclosure, the objects for the first class of people to rest on may be predetermined, for example, a bed, a sofa, or a deck chair; once the objects for resting are determined, the first class of objects can be determined, for example, a floor, a carpet, or a table. It should be noted that the above is only an exemplary description of the objects for resting and of the first class of objects, and the embodiment of the present disclosure is not limited thereto.
It can be seen that, in the embodiment of the present disclosure, corresponding warning information may be generated when the first behavior recognition result indicates that the first class of people lies on a first class of object, which helps relevant persons learn the behavior state of the first class of people in time and carry out subsequent processing. Furthermore, since the basis for behavior analysis of the first class of people includes not only the images of the first class of people but also the object images, the behavior information of the first class of people can be obtained more accurately, and thus accurate warning information can be generated.
When the first class of people are elderly persons, the image processing method of the embodiment of the present disclosure can assist the children of the elderly or social welfare agencies in taking care of the elderly, which helps to reduce social costs effectively and to alleviate, to a certain extent, the social problem of elderly care.
Considering that the first class of people may recover from a lying state to a standing state on their own, in the embodiment of the present disclosure, the duration for which the first class of people lies on the first class of object may be determined according to the first behavior recognition result, and corresponding alarm information is generated in a case where this duration is greater than or equal to a fourth time threshold.
Here, the fourth time threshold may be set in advance according to actual needs; for example, a longer fourth time threshold may be set for a first class of people with strong mobility, and a shorter one for a first class of people with weak mobility. In practice, the fourth time threshold may be updated as the mobility of the first class of people changes; for example, the fourth time threshold may be raised once it is determined that the first class of people has recovered from an injury.
It can be understood that, when the duration for which the first class of people lies on the first class of object is greater than or equal to the fourth time threshold, it may be considered that the first class of people is in a dangerous situation such as a fall, an injury, or even death; at this time, generating corresponding alarm information helps relevant persons learn of the dangerous situation and carry out subsequent processing for it.
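The duration check above can be sketched as follows. The event representation, sampling scheme, and threshold values are illustrative assumptions; the key behavior is that the timer resets when the person recovers on their own, so only sustained lying reaches the fourth time threshold.

```python
# Sketch of the lying-duration alarm described above: alarm only when a
# first-class person has lain on a first-class object (floor, carpet, ...)
# continuously for at least the fourth time threshold. Representation of
# events and the threshold value are assumptions for illustration.

def lying_alarm(events, threshold_s):
    """events: time-ordered (timestamp_s, is_lying_on_first_class_object)
    samples. Returns True once continuous lying reaches threshold_s."""
    start = None
    for t, lying in events:
        if lying:
            if start is None:
                start = t  # lying episode begins
            if t - start >= threshold_s:
                return True  # sustained lying: generate alarm information
        else:
            start = None  # person recovered on their own; reset the timer
    return False
```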
In an actual scene, when the behavior information of the first class of people indicates that the first class of people lies on a second class of object, prompt information can be generated, where a second class of object represents an object on which the first class of people rests.
It can be appreciated that, in a case where the first class of people lies on a second class of object, the behavior of the first class of people may generally be regarded as normal resting behavior; therefore, instead of alarm information, prompt information may be generated.
After the prompt message is generated, the prompt message can be sent to the first target device; after receiving the prompt message, the first target device may display the prompt message on the interface.
It can be seen that the prompt information is generated, so that the related personnel can know the state of the first class of personnel conveniently.
In the embodiment of the disclosure, under the condition that the expression information of the first class of people includes the preset negative expression, corresponding expression prompt information can be generated; in the case that the expression information of the first-class person does not include the preset negative expression, the expression of the first-class person may be recorded, or the expression information of the first-class person may be ignored.
Here, a negative expression may be an expression such as: sadness, pain, depression, anger, tension, or anxiety.
After the expression prompt information is obtained, it can be sent to the terminal devices of relevant persons; after receiving the expression prompt information, a terminal device may display it on its interface. For example, referring to fig. 2, the following expression prompt information is displayed on the interface of the terminal device: "Emotion of the elderly person: sadness".
For example, in the case that the expression prompt information and the alarm information are generated at the same time, the alarm information and the expression prompt information may be displayed at the same time, for example, referring to fig. 2, the following expression prompt information may be displayed at the same time when the alarm information is displayed: "emotional distress".
Exemplarily, in a case where the expression information of the first class of people includes a preset negative expression, the duration for which the expression information includes the preset negative expression is determined, and in a case where the duration is greater than or equal to a fifth duration threshold, third prompt information is generated; in a case where the duration is less than the fifth duration threshold, the expression information of the first class of people may be recorded, or it may be ignored.
It can be understood that, in a case where the expression information of the first class of people includes a preset negative expression, it may be considered that the first class of people needs timely care or comfort; at this time, generating the expression prompt information helps relevant persons learn of the negative emotion of the first class of people and carry out subsequent processing for it.
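The negative-expression check above can be sketched as a filter that produces prompt information only for a sustained preset negative expression. The message format, set of negative expressions, and threshold are illustrative assumptions.

```python
# Sketch of the expression-prompt rule described above: prompt only when a
# preset negative expression has persisted for at least the fifth duration
# threshold; otherwise the expression is recorded or ignored. All names and
# the message format are assumptions for illustration.

def expression_prompt(expression, duration_s, negative_set, fifth_threshold_s):
    """Return expression prompt information, or None to record/ignore."""
    if expression in negative_set and duration_s >= fifth_threshold_s:
        return f"Elderly person's emotion: {expression}"
    return None
```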
An image processing method according to an embodiment of the present disclosure is exemplarily described below with reference to the drawings.
Referring to fig. 4, the architecture of the electronic device may include an algorithm layer, a service layer, and a business layer. The algorithm layer is used to implement functions such as face detection and tracking, portrait attribute identification, face identification, pedestrian re-identification (ReID), and motion gesture detection; the service layer is used to implement functions such as expression analysis, track behavior analysis, visitor identification, and customer attribute judgment; and the business layer is used to implement functions such as abnormal behavior management, abnormal personnel identification, expression analysis, and video preview.
Illustratively, the behavior detection of the first class of people and the second class of people can be realized through the functions implemented by the algorithm layer, the track behavior analysis function implemented by the service layer, and the abnormal behavior management function implemented by the business layer. The expression detection of the first class of people can be realized through the face detection and tracking, portrait attribute recognition, and face recognition functions implemented by the algorithm layer, together with the expression analysis functions implemented by the service layer and the business layer. Whether the analysis result of a stranger meets the preset condition can be judged through the functions implemented by the algorithm layer; the track behavior analysis, visitor identification, and customer attribute judgment functions implemented by the service layer; and the abnormal behavior management and abnormal personnel identification functions implemented by the business layer.
Referring to fig. 5, the embodiment of the present disclosure can implement functions such as facial expression detection of the first class of people, visitor identification, and person behavior detection. For example, the facial expression information of the first class of people may be happy or pained; through visitor identification, whether a person in an image is a stranger or an acquaintance can be identified, where an acquaintance is a familiar person of the first class of people as described above; and the current behavior of the first class of people can be determined through behavior detection, for example, a fall or a stoppage of motion.
Referring to fig. 6, in a case where the first class of people includes the elderly and the second class of people includes caregivers, the embodiment of the present disclosure may implement behavior and expression detection of both the elderly and the caregivers based on a cloud device. Through the behavior and expression detection of the caregivers, whether the behavior information of the second class of people conforms to the preset criterion can be judged; through the behavior and expression detection of the elderly, the living state of the elderly can be judged, for example, whether the elderly person is suffering a sudden illness or long-term depression.
Referring to fig. 6, the embodiment of the present disclosure may implement visitor identification based on the cloud device, that is, may identify whether a person in an image to be processed is a stranger or an acquaintance.
As can be seen from the above, when the first class of people are elderly persons, operations such as expression detection, face recognition, and behavior recognition can be performed on the images through AI technology, and corresponding warning information and prompt information can be generated, which helps relevant persons perform timely processing and assists in addressing the serious problem of an aging society. In addition, the embodiment of the present disclosure can realize multiple functions such as elderly behavior detection, elderly expression detection, and behavior detection of visitors, so the application scenarios are wide.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
On the basis of the image processing method proposed by the foregoing embodiment, an embodiment of the present disclosure proposes an image processing apparatus.
Fig. 7 is a schematic diagram illustrating a composition structure of an image processing apparatus according to an embodiment of the disclosure, and as shown in fig. 7, the apparatus may include:
an obtaining module 701, configured to obtain at least one frame of to-be-processed image acquired by an image acquisition device;
a detection module 702, configured to perform target detection on each frame of image in the at least one frame of image to be processed to obtain a person image in the at least one frame of image to be processed;
an identification module 703, configured to perform identity identification and behavior identification on the objects in the person image to obtain a first behavior identification result of a first class of people and a second behavior identification result of a second class of people, respectively, where the first class of people are persons being cared for, and the second class of people are caregivers of the first class of people;
a processing module 704, configured to generate a data report reflecting that the second type of person takes care of the first type of person based on the first behavior recognition result and the second behavior recognition result.
In some embodiments, the identifying module 703 is further configured to perform attribute identification on the first class of people and the second class of people in the person image respectively to obtain a first attribute identification result of the first class of people and a second attribute identification result of the second class of people, where the attribute identification is at least used to identify expressions of the people;
accordingly, the processing module 704 is specifically configured to generate the data report based on the first behavior recognition result, the second behavior recognition result, the first attribute recognition result, and the second attribute recognition result.
In some embodiments, the processing module 704 is further configured to generate a first warning message in response to the first behavior recognition result indicating that the first class of people falls or stops moving; and sending the first alarm information to the first target equipment.
In some embodiments, the processing module 704 is further configured to, before sending the first warning information to the first target device, perform at least one of:
determining an alarm level based on the first alarm information;
determining a care state based on the first behavior recognition result and the second behavior recognition result;
the processing module 704 is further configured to determine the first target device matching the alarm level and/or the care status from a plurality of preset associated devices, and send the first alarm information to the first target device.
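The device-matching step above can be sketched as a lookup over preset associated devices. The device-table fields, alarm levels, and care states below are hypothetical; the disclosure does not specify their representation.

```python
# Hedged sketch of choosing the first target device by alarm level and/or
# care state from preset associated devices, per the processing-module
# description above. The device-table structure is an assumption.

def pick_target_device(devices, alarm_level, care_state):
    """devices: list of dicts, each with an 'id' and the 'levels' and 'states'
    it handles. Return the id of the first matching device, or None."""
    for device in devices:
        if alarm_level in device["levels"] and care_state in device["states"]:
            return device["id"]
    return None
```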
In some embodiments, the processing module 704 is further configured to generate second warning information in response to the second behavior recognition result representing that the behavior of the second class of people is abnormal; and sending the second warning information to second target equipment, wherein the second target equipment comprises equipment used by relatives corresponding to the first class of people and/or equipment used by management personnel corresponding to the second class of people.
In some embodiments, the person image comprises an image of a visiting person other than the first type of person and the second type of person;
the processing module 704 is further configured to:
in response to the fact that the image of the visiting person is not matched with a pre-stored person image, determining the visiting person as a stranger, and performing behavior analysis on the stranger to obtain an analysis result of the stranger;
generating third alarm information under the condition that the analysis result of the stranger meets a preset condition;
sending the third warning information to a third target device;
and/or,
in response to the image of the visiting person being matched with a pre-stored person image, determining the visiting person as a familiar person and performing behavior analysis on the familiar person to obtain an analysis result of the familiar person;
and sending the analysis result of the familiar person to a fourth target device.
In some embodiments, the processing module is further configured to, after generating the data report, perform at least one of:
generating a psychological coaching scheme for the first class of people based on the data report;
generating a care instruction plan for the second class of people based on the data report;
wherein the psychological coaching scheme is obtained based on a first attribute recognition result of the first class of people, or based on the first attribute recognition result and the first behavior recognition result; the care instruction plan is obtained based on at least one of a second attribute recognition result and the second behavior recognition result of the second class of people.
In practical applications, the obtaining module 701, the detecting module 702, the identifying module 703 and the processing module 704 may all be implemented by a processor in a computer device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller and a microprocessor.
In addition, each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Specifically, the computer program instructions corresponding to an image processing method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disk, a usb disk, or the like, and when the computer program instructions corresponding to an image processing method in the storage medium are read or executed by an electronic device, any one of the image processing methods of the foregoing embodiments is implemented.
Based on the same technical concept as the foregoing embodiments, referring to fig. 8, an electronic device 80 provided by an embodiment of the present disclosure is illustrated, which may include: a memory 801 and a processor 802; wherein,
the memory 801 is used for storing computer programs and data;
the processor 802 is configured to execute the computer program stored in the memory to implement any one of the image processing methods of the foregoing embodiments.
In practical applications, the memory 801 may be a volatile memory such as a RAM; or a non-volatile memory such as a ROM, a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of such memories, and provides instructions and data to the processor 802.
The processor 802 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor. It is understood that the electronic devices for implementing the above-described processor functions may be other devices, and the embodiments of the present disclosure are not particularly limited.
The embodiment of the present disclosure further provides a computer program, which includes computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes steps for implementing any one of the image processing methods of the foregoing embodiments.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight the differences between the embodiments; the same or similar parts may be referred to each other and, for brevity, are not repeated herein.
The methods disclosed in the method embodiments provided by the present disclosure may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in the various product embodiments provided by the disclosure may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided by the present disclosure may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present disclosure.
While the embodiments of the present disclosure have been described with reference to the drawings, the present disclosure is not limited to the specific embodiments described above, which are intended to be illustrative rather than limiting; those of ordinary skill in the art, in light of the present disclosure, may make many modifications without departing from the spirit of the disclosure and the scope of the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring at least one frame of image to be processed acquired by image acquisition equipment;
performing target detection on each frame of image in the at least one frame of image to be processed to obtain a personnel image in the at least one frame of image to be processed;
performing identity recognition and behavior recognition on the objects in the person image to obtain a first behavior recognition result of a first class of people and a second behavior recognition result of a second class of people, respectively, wherein the first class of people are persons being cared for, and the second class of people are caregivers of the first class of people;
and generating a data report for reflecting that the second type of person takes care of the first type of person based on the first behavior recognition result and the second behavior recognition result.
2. The method of claim 1, further comprising:
respectively performing attribute identification on the first class of people and the second class of people in the personnel image to obtain a first attribute identification result of the first class of people and a second attribute identification result of the second class of people, wherein the attribute identification is at least used for identifying the expression of each person;
the generating a data report reflecting that the second class of person cares for the first class of person based on the first behavior recognition result and the second behavior recognition result comprises:
generating the data report based on the first behavior recognition result, the second behavior recognition result, the first attribute recognition result, and the second attribute recognition result.
3. The method according to claim 1 or 2, wherein after said obtaining a first behavioral recognition result of a first person of a first type, the method further comprises:
in response to the first behavior recognition result representing that the first class of people falls or stops moving, generating first alarm information;
and sending the first alarm information to the first target equipment.
4. The method of claim 3, wherein prior to the sending the first alert information to the first target device, the method further comprises:
determining an alarm level based on the first alarm information;
determining a care state based on the first behavior recognition result and the second behavior recognition result;
and determining the first target equipment matched with the alarm level and/or the nursing state from a plurality of preset associated equipment, and sending the first alarm information to the first target equipment.
5. The method according to any one of claims 1 to 4, wherein after the obtaining of the second behavior recognition result of the second class of people, the method further comprises:
in response to the second behavior recognition result representing that the behavior of the second class of people is abnormal, generating second alarm information;
and sending the second warning information to second target equipment, wherein the second target equipment comprises equipment used by relatives corresponding to the first class of people and/or equipment used by management personnel corresponding to the second class of people.
6. The method according to any one of claims 1 to 5, wherein the person image comprises an image of a visiting person other than the first type of person and the second type of person;
the method further comprises the following steps:
in response to the fact that the image of the visiting person is not matched with a pre-stored person image, determining the visiting person as a stranger, and performing behavior analysis on the stranger to obtain an analysis result of the stranger;
generating third alarm information under the condition that the analysis result of the stranger meets a preset condition;
sending the third warning information to a third target device;
and/or,
in response to the image of the visiting person being matched with a pre-stored person image, determining the visiting person as a familiar person and performing behavior analysis on the familiar person to obtain an analysis result of the familiar person;
and sending the analysis result of the familiar person to a fourth target device.
7. The method of any one of claims 2 to 6, wherein after generating the data report, the method further comprises at least one of:
generating a psychological coaching scheme for the first class of people based on the data report;
generating a care instruction plan for the second class of people based on the data report;
wherein the psychological coaching scheme is obtained based on a first attribute recognition result of the first class of people, or based on the first attribute recognition result and the first behavior recognition result; the care instruction plan is obtained based on at least one of a second attribute recognition result and the second behavior recognition result of the second class of people.
8. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring at least one frame of image to be processed acquired by the image acquisition equipment;
the detection module is used for carrying out target detection on each frame of image in the at least one frame of image to be processed to obtain a personnel image in the at least one frame of image to be processed;
the identification module is used for performing identity identification and behavior identification on the objects in the person image to obtain a first behavior identification result of a first class of people and a second behavior identification result of a second class of people, respectively, wherein the first class of people are persons being cared for, and the second class of people are caregivers of the first class of people;
and the processing module is used for generating a data report for reflecting that the second type of personnel takes care of the first type of personnel based on the first behavior recognition result and the second behavior recognition result.
9. An electronic device comprising a processor and a memory for storing a computer program operable on the processor; wherein,
the processor is configured to run the computer program to perform the method of any one of claims 1 to 7.
10. A computer storage medium on which a computer program is stored, characterized in that the computer program, when being executed by a processor, carries out the method of any one of claims 1 to 7.
CN202111070564.9A 2021-09-13 2021-09-13 Image processing method, image processing device, electronic equipment and computer storage medium Withdrawn CN113762184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111070564.9A CN113762184A (en) 2021-09-13 2021-09-13 Image processing method, image processing device, electronic equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN113762184A true CN113762184A (en) 2021-12-07

Family

ID=78795288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111070564.9A Withdrawn CN113762184A (en) 2021-09-13 2021-09-13 Image processing method, image processing device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113762184A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107856039A * 2017-11-16 2018-03-30 University of Science and Technology Beijing An elderly-care companion service robot system and method
CN109544859A * 2018-12-12 2019-03-29 Focus Technology Co., Ltd. An intelligent implementation method for an active-safety home camera
CN110298325A * 2019-07-02 2019-10-01 Sichuan Changhong Electric Co., Ltd. Assisted care system for expression-impaired patients based on video expression recognition
CN110443109A * 2019-06-11 2019-11-12 Wanyi Technology Co., Ltd. Abnormal behavior monitoring and processing method, apparatus, computer device and storage medium
CN110795971A * 2018-08-02 2020-02-14 Shenzhen Intellifusion Technologies Co., Ltd. User behavior recognition method, apparatus, device and computer storage medium
CN111191483A * 2018-11-14 2020-05-22 Baidu Online Network Technology (Beijing) Co., Ltd. Nursing method, nursing apparatus and storage medium
CN111507290A * 2019-05-28 2020-08-07 Xiaoyi Technology (Hong Kong) Co., Ltd. Comforter monitoring and nursing system
CN111882820A * 2020-07-30 2020-11-03 Chongqing College of Electronic Engineering Nursing system for special populations
CN111975772A * 2020-07-31 2020-11-24 Shenzhen Zhuiyi Technology Co., Ltd. Robot control method, apparatus, electronic device and storage medium
CN112613444A * 2020-12-29 2021-04-06 Beijing Sensetime Technology Development Co., Ltd. Behavior detection method and apparatus, electronic device and storage medium
CN112613780A * 2020-12-29 2021-04-06 Beijing Sensetime Technology Development Co., Ltd. Learning report generation method and apparatus, electronic device and storage medium
CN112686156A * 2020-12-30 2021-04-20 Ping An Puhui Enterprise Management Co., Ltd. Emotion monitoring method and apparatus, computer device and readable storage medium
CN113158858A * 2021-04-09 2021-07-23 Suzhou Aikeer Intelligent Technology Co., Ltd. Behavior analysis method and system based on deep learning

Similar Documents

Publication Publication Date Title
US10121070B2 (en) Video monitoring system
US7106885B2 (en) Method and apparatus for subject physical position and security determination
US11688265B1 (en) System and methods for safety, security, and well-being of individuals
WO2019239813A1 (en) Information processing method, information processing program, and information processing system
US20190029569A1 (en) Activity analysis, fall detection and risk assessment systems and methods
CN112784662A (en) Video-based fall risk evaluation system
CN109492595B (en) Behavior prediction method and system suitable for fixed group
EP2877861A1 (en) A system, method, software application and data signal for determining movement
JP7196645B2 (en) Posture Estimation Device, Action Estimation Device, Posture Estimation Program, and Posture Estimation Method
JP6199791B2 (en) Pet health examination apparatus, pet health examination method and program
CN108882853A Timely triggering of physiological parameter measurement using visual context
JPH08257017A (en) Condition monitoring device and its method
CN110910606B (en) Target tracking-based child anti-lost method and system
JP7026105B2 (en) Service provision system
Pinitkan et al. Abnormal activity detection and notification platform for real-time ad hoc network
Chang et al. In-bed patient motion and pose analysis using depth videos for pressure ulcer prevention
CN113762184A (en) Image processing method, image processing device, electronic equipment and computer storage medium
JP6793383B1 (en) Behavior identification system
Soman et al. A Novel Fall Detection System using Mediapipe
US20210142047A1 (en) Salient feature extraction using neural networks with temporal modeling for real time incorporation (sentri) autism aide
CN111126290B (en) Detention discovery and early warning method and system based on face recognition
JP7059663B2 (en) Information processing equipment
Safarzadeh et al. Real-time fall detection and alert system using pose estimation
WO2020003952A1 (en) Computer executable program, information processing device, and computer execution method
JP2021174189A (en) Method of assisting in creating menu of service, method of assisting in evaluating user of service, program causing computer to execute the method, and information providing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211207