CN108701214A - Image processing method, device and equipment - Google Patents

Image processing method, device and equipment Download PDF

Info

Publication number
CN108701214A
CN108701214A (application number CN201780005969.XA; also written CN 108701214 A)
Authority
CN
China
Prior art keywords
image
target object
identification model
training
description information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780005969.XA
Other languages
Chinese (zh)
Inventor
张李亮
李思晋
封旭阳
赵丛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Shenzhen Dajiang Innovations Technology Co Ltd
Original Assignee
Shenzhen Dajiang Innovations Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Dajiang Innovations Technology Co Ltd filed Critical Shenzhen Dajiang Innovations Technology Co Ltd
Publication of CN108701214A publication Critical patent/CN108701214A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method, apparatus and device, wherein the method includes: receiving a first image of a target object captured by a first image sensor and a second image of the target object captured by a second image sensor; inputting the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and determining status information of the target object according to the motion-characteristic description information. The method can improve the accuracy of fatigue detection.

Description

Image processing method, device and equipment
Technical field
The present invention relates to the field of electronic technology, and in particular to an image processing method, apparatus and device.
Background technology
With the development of transportation technology and the improvement of people's living standards, driving has become the preferred mode of travel for most people owing to its distinctive advantages, bringing convenience and comfort to people's journeys. However, traffic accidents caused by fatigued driving pose a serious threat to people's lives and property.
In practical applications, fatigue driving is detected by checking whether the vehicle crosses the lane markings on the road. However, if the driver's driving skill is poor, the driver may cross the lane markings without being fatigued and thus be mistakenly judged as driving while fatigued. It can be seen that the accuracy of this fatigue-driving detection approach is relatively low.
Invention content
Embodiments of the present invention disclose an image processing method, apparatus and device, which can improve the accuracy of fatigue-driving detection by processing image data of a target object such as a driver.
In a first aspect, an embodiment of the present invention provides an image processing method, the method including:
receiving a first image of a target object captured by a first image sensor and a second image of the target object captured by a second image sensor, the first image including at least one of a grayscale image or an RGB image, and the second image including a depth image;
inputting the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and
determining status information of the target object according to the motion-characteristic description information,
wherein the preset identification model is used to identify the specified region in the first image of the target object and the second image of the target object.
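The three steps of the first aspect can be sketched as a small pipeline. This is an illustrative sketch only, not the claimed method: the function and type names (`preset_identification_model`, `MotionDescription`, `determine_status`) are assumptions, and the identification model is stubbed out rather than implemented.

```python
from dataclasses import dataclass

@dataclass
class MotionDescription:
    """Description information for a motion characteristic of a specified region."""
    region: str          # e.g. "mouth" or "eye"
    characteristic: str  # e.g. "opening" or "eye-closing"

def preset_identification_model(first_image, second_image):
    # Stub: a real model would locate the specified region in the
    # grayscale/RGB image and in the depth image, then describe its motion.
    return MotionDescription(region="mouth", characteristic="opening")

def determine_status(description):
    # Map the description to a status; per the detailed description, a real
    # apparatus counts occurrences over a preset time interval rather than
    # deciding from a single description.
    return "fatigued" if description.characteristic in ("opening", "eye-closing") else "normal"

def process(first_image, second_image):
    """The three steps in sequence: receive both images, run the model, decide."""
    return determine_status(preset_identification_model(first_image, second_image))

print(process(first_image=[[0]], second_image=[[0.0]]))  # fatigued
```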
In a second aspect, an embodiment of the present invention provides an image processing apparatus, the apparatus including:
a receiving module configured to receive a first image of a target object captured by a first image sensor and a second image of the target object captured by a second image sensor, the first image including at least one of a grayscale image or an RGB image, and the second image including a depth image;
an identification module configured to input the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and
a determining module configured to determine status information of the target object according to the motion-characteristic description information,
wherein the preset identification model is used to identify the specified region in the first image of the target object and the second image of the target object.
In a third aspect, an embodiment of the present invention provides an image processing device, the device including a processor and a memory connected by a bus, the memory storing executable program code, and the processor being configured to call the executable program code to execute the image processing method described in the first aspect of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by at least one processor, implements the image processing method described in the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to implement the image processing method described in the first aspect.
Through the embodiments of the present invention, the first image captured by the first image sensor (including an RGB image or a grayscale image) and the depth image captured by the second image sensor are used as the input signals of the preset identification model, so that the first image data and the depth image data complement each other. By combining the depth map with the RGB or grayscale image, the identification model is optimized, which in turn improves the accuracy of fatigue detection for a driver or other target in a specified region such as a vehicle cab, thereby improving safety.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method disclosed by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an image data processing system disclosed by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another image processing method disclosed by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an image processing apparatus disclosed by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an image processing device disclosed by an embodiment of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention are applied to an image processing apparatus that includes a first image sensor and a second image sensor. The first image sensor may be a monocular vision sensor, and the second image sensor may be a multi-view vision sensor. The first image sensor and the second image sensor may be provided in cameras of the image processing apparatus; for example, the monocular vision sensor is provided in a monocular camera, and the multi-view vision sensor is provided in a multi-view camera.
The image processing apparatus in the embodiments of the present invention may be connected with a vehicle and arranged inside the vehicle. The first image sensor and the second image sensor of the image processing apparatus can dynamically adjust their capture angles as the posture of the target object in the driver's seat changes, so as to clearly capture images of the target object in the driver's seat.
The embodiments of the present invention can be applied to detecting whether a target object (which may be a user) is in a fatigue state, and more specifically to detecting whether a driver is driving while fatigued.
The first image in the embodiments of the present invention includes at least one of a grayscale image or an RGB image, and the second image includes a depth image.
In view of the relatively low accuracy of current fatigue-driving detection methods, the present invention proposes an image data processing method, apparatus and device. The image processing apparatus can receive a first image of a target object captured by a first image sensor — an RGB (Red Green Blue) image, i.e., a color image with red, green and blue components — and a second image of the target object captured by a second image sensor; input the first image and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and determine status information of the target object according to the motion-characteristic description information, the status information indicating whether the target object is in a fatigue state. Because the present invention uses image data of the target object captured by multiple sensors as the input signals, the multiple signals complement one another, providing sufficient information for the input of the preset identification model and thus improving the accuracy of fatigue detection.
The embodiments of the present invention disclose an image processing method, apparatus and device for detecting, through image data processing, whether a target object is in a fatigue state, so as to improve the accuracy of fatigue detection. Detailed descriptions are given below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present invention. The method can be applied to an image processing apparatus that includes a first image sensor and a second image sensor. The image processing method described in this embodiment includes:
S101. Receive a first image of a target object captured by the first image sensor and a second image of the target object captured by the second image sensor.
The first image includes at least one of a grayscale image or an RGB image, and the second image includes a depth image.
In the embodiments of the present invention, if image data captured by a monocular vision sensor alone is used as the signal input, the quality of the image data degrades substantially under insufficient ambient light, making it difficult for the image processing apparatus to obtain the required information from the image data. If image data captured by an infrared sensor alone is used as the signal input, the infrared sensor has difficulty accurately capturing the face of the target object, again making it difficult for the image processing apparatus to obtain the required information. In other words, if image data captured by a single sensor is used as the signal input, it is difficult to provide sufficient information for the input of the preset identification model. Therefore, the image processing apparatus may use image data captured by multiple image sensors as the signal inputs, so that the multiple signals complement one another and sufficient information is provided for the input of the preset identification model.
Specifically, the image processing apparatus can receive the first image of the target object captured by the first image sensor and the second image of the target object captured by the second image sensor, and use the first image and the second image as the signal inputs.
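One simple way to realize the complementarity described above is to stack the pixel-aligned first image and depth image as channels of a single model input. The patent does not fix a particular fusion scheme, so the sketch below (nested lists standing in for image frames, equal resolutions assumed) is purely illustrative.

```python
def stack_channels(gray, depth):
    """Combine a grayscale frame and a depth frame into one
    two-channel input, pixel by pixel (assumes equal shapes)."""
    assert len(gray) == len(depth) and len(gray[0]) == len(depth[0])
    return [[(gray[r][c], depth[r][c]) for c in range(len(gray[0]))]
            for r in range(len(gray))]

gray  = [[10, 20], [30, 40]]       # intensity values from the first image
depth = [[1.5, 1.4], [0.9, 0.8]]   # distances (metres) from the second image
fused = stack_channels(gray, depth)
print(fused[0][0])  # (10, 1.5)
```

The fused frame carries both appearance and distance at every pixel, which is the "complementation of multiple signals" the text refers to.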
As an optional embodiment, the image processing apparatus can detect the light in the current scene; if the light in the current scene does not satisfy a preset light intensity, the image processing apparatus can turn on a fill light and call the monocular vision sensor (i.e., the first image sensor) to capture image data of the target object, and use the captured image data of the target object as the input of the preset identification model.
In the embodiments of the present invention, to solve the problem of low image quality when the monocular vision sensor operates in a dimly lit scene, the image processing apparatus can improve image quality by turning on a fill light. That is, the image processing apparatus can detect the light in the current scene; if the light in the current scene does not satisfy the preset light intensity, the image processing apparatus can determine that the light in the current scene is weak, turn on the fill light, and call the monocular vision sensor (i.e., the first image sensor) to capture image data of the target object as the input of the preset identification model, thereby improving the quality of the captured images.
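The fill-light decision above can be sketched as a brightness check. The patent does not specify how scene light is measured, so the mean-intensity measure and the threshold value below are assumptions for illustration only.

```python
def mean_brightness(gray):
    """Average intensity of a grayscale frame (nested list of 0-255 values)."""
    pixels = [p for row in gray for p in row]
    return sum(pixels) / len(pixels)

def need_fill_light(gray, preset_intensity=60.0):
    """True when the current scene fails the preset light intensity,
    i.e. when the fill light should be turned on before calling the
    monocular vision sensor."""
    return mean_brightness(gray) < preset_intensity

dim_frame    = [[10, 20], [30, 20]]
bright_frame = [[200, 210], [220, 230]]
print(need_fill_light(dim_frame), need_fill_light(bright_frame))  # True False
```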
S102. Input the first image of the target object and the second image of the target object into the preset identification model to obtain description information describing a motion characteristic of a specified region of the target object.
The preset identification model is used to identify the specified region in the first image of the target object and the second image of the target object, and may be a neural network identification model.
In the embodiments of the present invention, the image processing apparatus can input the first image of the target object and the second image of the target object into the preset identification model. The preset identification model is used to perform initial identification on the first image to identify the target object in the first image, and is further used to perform depth identification on the second image according to the identified target object, i.e., to identify the specified region of the target object in the second image and obtain the description information describing the motion characteristic of the specified region of the target object. Using the first image and the second image as the input signals of the identification model can improve the accuracy of identifying the motion characteristic of the specified region; meanwhile, identifying only the specified region can improve the efficiency of obtaining the description information of the motion characteristic of the specified region of the target object and save the resources of the image processing device.
The specified region of the target object may be the eye region, the mouth region, the nose region, etc. of the target object. The description information of the motion characteristic may include description information of an eye-closing characteristic of the eye region of the target object, description information of an opening characteristic of the mouth region of the target object, or description information of a distance characteristic between the eye region or mouth region and the nose region, etc.
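The mouth-opening and eye-closing characteristics just listed can be derived from simple geometric measurements on the identified regions. The landmark representation (lip coordinates, eyelid opening height) and the numeric thresholds below are assumptions for illustration; the patent leaves the measurement method to the identification model.

```python
def mouth_opening(upper_lip_y, lower_lip_y, distance_threshold=12.0):
    """The mouth region is 'in the opening characteristic' when the
    upper-lip/lower-lip distance exceeds a preset distance threshold
    (which also helps distinguish yawning from ordinary speaking)."""
    return abs(lower_lip_y - upper_lip_y) > distance_threshold

def eye_closing(eye_opening_height, closing_threshold=2.0):
    """The eye region is 'in the eye-closing characteristic' when the
    eyelid opening falls below a small threshold."""
    return eye_opening_height < closing_threshold

print(mouth_opening(100.0, 118.0))  # True (lips 18 units apart)
print(eye_closing(1.2))             # True
```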
S103. Determine status information of the target object according to the motion-characteristic description information.
In the embodiments of the present invention, the image processing apparatus can determine the status information of the target object according to the motion-characteristic description information. The status information can indicate whether the target object is in a fatigue state. Detecting whether the target object is in a fatigue state through image data processing can improve the efficiency of fatigue detection.
As an optional embodiment, if the specified region of the target object includes the mouth region of the target object, and the motion-characteristic description information includes description information that the mouth region of the target object is in an opening characteristic, the specific way of determining the status information of the target object according to the motion-characteristic description information includes: according to the description information, obtained within a preset time interval, that the mouth region of the target object is in the opening characteristic, counting the number of times the mouth region of the target object is in the opening characteristic; and if the number of times the mouth region of the target object is in the opening characteristic exceeds a first preset value, determining status information indicating that the target object is in a designated state.
For example, the preset time interval is 1 minute and the first preset value is 4. According to the multiple frames of the first image and the second image within the preset time interval, the image processing apparatus obtains the description information that the mouth region of the target object is in the opening characteristic and counts the number of times the mouth region of the target object is in the opening characteristic. If that number is 5, the image processing apparatus can determine that the number of times the mouth region of the target object is in the opening characteristic exceeds the first preset value, and determine status information indicating that the target object is in a fatigue state.
In the embodiments of the present invention, when the target object is in a fatigue state, the face of the target object shows different motion characteristics; therefore, the image processing apparatus can judge whether the target object is in a fatigue state according to the facial motion characteristics of the target object. That is, the image processing apparatus can count, according to the description information obtained within the preset time interval that the mouth region of the target object is in the opening characteristic, the number of times the mouth region of the target object is in the opening characteristic; if this number exceeds the first preset value, it determines status information indicating that the target object is in the designated state (the designated state may be a fatigue state). By counting the number of times the mouth region of the target object is in the opening characteristic to judge whether the target object is in a fatigue state, the image processing apparatus can improve the accuracy of detecting the fatigue state.
It should be noted that, to prevent the target object being in a speaking state from being mistaken for the target object being in the designated state (i.e., the fatigue state), the mouth region being in the opening characteristic may mean that the distance between the upper lip and the lower lip of the target object exceeds a preset distance threshold, so as to improve the accuracy with which the image processing apparatus detects the fatigue state.
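The mouth-region counting rule can be sketched as follows. The per-frame boolean flags are assumed to come from the identification model's description information; the function name and input format are illustrative.

```python
def fatigue_from_mouth(frames, first_preset_value=4):
    """Count, over one preset time interval, the frames in which the
    mouth region is in the opening characteristic; the target object is
    in the designated (fatigue) state when the count exceeds the first
    preset value. `frames` is a list of booleans, one per analysed frame."""
    count = sum(1 for is_open in frames if is_open)
    return count > first_preset_value

# 5 yawning frames within the interval, first preset value 4 -> fatigue state
frames = [True, False, True, True, False, True, True]
print(fatigue_from_mouth(frames))  # True
```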
As an optional embodiment, the specified region of the target object includes the eye region of the target object, and the motion-characteristic description information includes description information that the eye region of the target object is in an eye-closing characteristic. The specific way of determining the status information of the target object according to the motion-characteristic description information includes: according to the description information, obtained within a preset time interval, that the eye region of the target object is in the eye-closing characteristic, counting the number of times the eye region of the target object is in the eye-closing characteristic; and if the number of times the eye region of the target object is in the eye-closing characteristic exceeds a second preset value, determining status information indicating that the target object is in the designated state.
For example, the preset time interval is 1 minute and the second preset value is 5. According to the multiple frames of the first image and the second image within the preset time interval, the image processing apparatus obtains the description information that the eye region of the target object is in the eye-closing characteristic and counts the number of times the eye region of the target object is in the eye-closing characteristic. If that number is 6, the image processing apparatus can determine that the number of times the eye region of the target object is in the eye-closing characteristic exceeds the second preset value, and determine status information indicating that the target object is in a fatigue state.
In the embodiments of the present invention, the image processing apparatus can count, according to the description information obtained within the preset time interval that the eye region of the target object is in the eye-closing characteristic, the number of times the eye region of the target object is in the eye-closing characteristic; if this number exceeds the second preset value, it determines status information indicating that the target object is in the designated state (the designated state may be a fatigue state). By counting the number of times the eye region of the target object is in the eye-closing characteristic to judge whether the target object is in a fatigue state, the image processing apparatus can improve the accuracy of detecting the fatigue state.
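Because the eye-region rule operates over a rolling preset time interval, it can be sketched as a sliding-window counter. Framing the interval as a fixed-size window of recent frames is an implementation assumption, not something the patent prescribes.

```python
from collections import deque

class EyeClosureMonitor:
    """Keeps the per-frame eye-closing flags of the most recent preset
    time interval and reports the fatigue state when the number of
    eye-closing frames exceeds the second preset value."""

    def __init__(self, window_frames=60, second_preset_value=5):
        self.window = deque(maxlen=window_frames)  # old frames drop off automatically
        self.threshold = second_preset_value

    def update(self, eyes_closed):
        """Record one frame's flag and return the current fatigue verdict."""
        self.window.append(eyes_closed)
        return sum(self.window) > self.threshold

monitor = EyeClosureMonitor(window_frames=10, second_preset_value=5)
states = [monitor.update(closed) for closed in [True] * 6 + [False] * 4]
print(states[5], states[-1])  # True True
```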
In the embodiments of the present invention, the image processing apparatus can receive the first image of the target object captured by the first image sensor and the second image of the target object captured by the second image sensor, input the first image and the second image of the target object into the preset identification model, obtain the description information describing the motion characteristic of the specified region of the target object, and determine the status information of the target object according to the motion-characteristic description information. By using image data captured by multiple image sensors as the input signals of the identification model, the multiple signals complement one another, providing sufficient information for the input of the preset identification model; and by combining the depth map with the grayscale or RGB image, the identification model is optimized, which in turn improves the accuracy of fatigue detection.
Based on the above description of the image processing method, an embodiment of the present invention provides an image data processing system. As shown in Fig. 2, the image data processing system includes an image processing apparatus 201, a vehicle 202, and a target object 203 (i.e., the driver) in the driver's seat of the vehicle 202. The image processing apparatus 201 may include multiple sensors (the first image sensor 2011 and the second image sensor 2012 are taken as an example in the figure) and is connected with the vehicle 202. The image processing apparatus 201 may be arranged on the roof of the vehicle 202 near the driver's seat, or on the console of the vehicle 202, so as to clearly capture image data of the target object. The image data processing system can be used to implement an image processing method. Specifically, referring to Fig. 3, Fig. 3 shows an image processing method provided by an embodiment of the present invention, the image processing method including:
S301. If a target object is detected, obtain an object identifier of the target object.
In the embodiments of the present invention, the image processing apparatus 201 may use the first image sensor or the second image sensor to capture an image of the driver's seat of the vehicle 202 to judge whether there is a target object in the driver's seat; if there is a target object, the object identifier of the target object is obtained.
The object identifier of the target object may be an identifier of a person, such as a name; it may also be an identifier of the region where the target object is located, such as China; it may also be a gender identifier of the target object, such as male or female.
S302. Search for an identification model associated with the object identifier of the target object, and use the associated identification model as the preset identification model.
In the embodiments of the present invention, the image processing apparatus 201 can search for the identification model associated with the object identifier of the target object and use the associated identification model as the preset identification model, so that the image of the target object is identified using the identification model associated with the object identifier, which can improve the accuracy of identification.
For example, if the object identifier is a person's name, the image processing apparatus 201 calls the identification model associated with that name; if the object identifier is a gender identifier of the target object (such as male), the image processing apparatus 201 can call the identification model associated with the gender of the target object.
It should be noted that the image processing apparatus 201 can store a large number of identification models and call the required identification model from the stored identification models; the image processing apparatus 201 can also call the required identification model from a network server through a network connection, so as to save the memory space of the image processing apparatus 201.
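The lookup in S302 — local store first, network server second — can be sketched as below. The fallback to a generic model when no identifier-specific model exists is an added assumption; the patent only describes the local/remote lookup.

```python
def select_model(object_id, local_models, fetch_remote=None):
    """Return the identification model associated with the object
    identifier: try the locally stored models, then an optional remote
    fetch (caching the result), then fall back to a generic model."""
    if object_id in local_models:
        return local_models[object_id]
    if fetch_remote is not None:
        model = fetch_remote(object_id)
        if model is not None:
            local_models[object_id] = model  # cache to avoid repeat fetches
            return model
    return local_models.get("generic")

models = {"generic": "generic-model", "Ms": "female-model"}
print(select_model("Ms", models))       # female-model
print(select_model("unknown", models))  # generic-model
```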
S303. Receive a first image of the target object captured by the first image sensor and a second image of the target object captured by the second image sensor.
S304. Input the first image of the target object and the second image of the target object into the preset identification model to obtain description information describing a motion characteristic of a specified region of the target object.
As an optional embodiment, a first image and a second image of a training object are captured, and an initial identification model is trained on the specified region of the first image and the second image of the training object to obtain a trained identification model.
In the embodiments of the present invention, the image processing apparatus 201 may use first images and second images to optimize the identification model, which can improve the accuracy of identifying the motion characteristics of the target object. That is, the image processing apparatus 201 can capture the first image and the second image of the training object, train the initial identification model on the specified region of the first image and the second image of the training object to obtain a trained identification model, and, after a large amount of training, obtain the preset identification model, so as to improve the accuracy of identifying image data.
As an optional embodiment, the specific way of training the initial identification model on the specified region of the first image and the second image of the training object to obtain the trained identification model includes: obtaining a current training corpus of the training object; identifying the specified region of the first image and the second image of the training object using the initial identification model to obtain training description information; determining the similarity between the current training corpus of the training object and the training description information; and, if the similarity is less than a preset similarity value, adjusting the identification parameters in the initial identification model to obtain the trained identification model.
In the embodiments of the present invention, the image processing apparatus 201 can receive the input current training corpus of the training object, identify the specified region of the first image and the second image of the training object using the initial identification model to obtain the training description information, and determine the similarity between the current training corpus of the training object and the training description information. If the similarity is less than the preset similarity value, it is determined that the identification accuracy of the initial model is relatively low; the image processing apparatus 201 can adjust the identification parameters in the initial identification model, input the first image and the second image of the next training object into the adjusted identification model, and repeat the above steps. After a large amount of training, if the similarity between the training corpora of the training objects and the training description information exceeds the preset similarity value, the identification model with higher stability and identification accuracy is used as the trained identification model.
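The train/compare/adjust loop just described can be sketched with a toy model. Everything here is a stand-in: the string-similarity measure (`difflib.SequenceMatcher`), the single scalar "identification parameter", and the stubbed `identify` function are illustrative assumptions, not the neural network training the patent envisages.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Similarity between a training corpus and a training description (0..1)."""
    return SequenceMatcher(None, a, b).ratio()

def train(model_params, samples, preset_similarity=0.8, adjust_step=0.1):
    """For each (images, corpus) sample: run the stubbed model, compare its
    training description with the corpus, and adjust the identification
    parameter whenever the similarity falls below the preset value."""
    def identify(images, params):
        # Stub: a real model would describe the specified region of the images.
        return "mouth opening" if params["sensitivity"] > 0.5 else "no action"

    for images, corpus in samples:
        description = identify(images, model_params)
        if similarity(corpus, description) < preset_similarity:
            model_params["sensitivity"] += adjust_step  # adjust, then continue training
    return model_params

params = train({"sensitivity": 0.3},
               samples=[(None, "mouth opening"), (None, "mouth opening")])
print(params["sensitivity"])
```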
The image processing apparatus 201 can collect first images and second images of users across different regions, environments, or scenes as the first images and second images of training objects, which can improve the robustness of the identification model. Alternatively, the image processing apparatus 201 can collect only the first images and second images of users who drive the vehicle 202 frequently as the first images and second images of training objects, which can reduce training complexity and improve the utilization of the identification model.
As an alternative embodiment, if a training instruction for the preset identification model is detected, the first image sensor is called to collect a first training image of the target object, the second image sensor is called to collect a second training image of the target object, and the preset identification model is trained according to the first training image and the second training image.

In this embodiment of the present invention, when the image processing apparatus 201 detects that the recognition accuracy of the preset identification model is low, or receives a training instruction for the preset identification model, the image processing apparatus 201 can call the first image sensor 2011 to collect the first training image of the target object and call the second image sensor 2012 to collect the second training image of the target object, and train the preset identification model according to the first training image and the second training image, so as to improve the recognition accuracy of the preset identification model and, in turn, the accuracy of detecting fatigue driving.
S305: determine the status information of the target object according to the motion characteristic description information.

The status information at least indicates whether the target object is in a fatigue driving state.
As an alternative embodiment, if it is determined according to the status information of the target object that driving of the vehicle needs to be suspended, a prompt message is output, the prompt message being used to prompt the target object to suspend driving the vehicle.

In this embodiment of the present invention, if the image processing apparatus 201 determines according to the status information of the target object that the target object is in a fatigue driving state, it may determine that driving of the vehicle 202 needs to be suspended, and the image processing apparatus 201 can output a prompt message to prompt the target object to suspend driving the vehicle, which can improve driving safety.

The prompt message may be delivered by voice, displayed on a display screen, or presented through a combination of such means.
As an alternative embodiment, if it is determined according to the status information of the target object that an autonomous driving mode of the vehicle needs to be started, the vehicle is controlled to start the autonomous driving mode.

In this embodiment of the present invention, if the image processing apparatus 201 determines according to the status information of the target object that the target object is in a fatigue driving state, it may determine that the autonomous driving mode of the vehicle 202 needs to be started, and control the vehicle to start the autonomous driving mode, which can prevent traffic accidents caused by fatigue driving and improve driving safety.
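The two alternative responses to a fatigue determination (prompting the driver, or starting autonomous driving) can be combined into a simple decision rule. The sketch below is illustrative only; `StubVehicle` and its `start_autopilot`/`prompt` methods are assumed interfaces, not an actual vehicle API.

```python
class StubVehicle:
    """Hypothetical vehicle interface used only for illustration."""
    def __init__(self):
        self.actions = []

    def start_autopilot(self):
        self.actions.append("autopilot")

    def prompt(self, message):
        # A real implementation might use voice, an on-screen display, or both.
        self.actions.append("prompt:" + message)

def on_status(is_fatigued: bool, vehicle, autopilot_available: bool) -> str:
    """Map the target object's status information to a vehicle action."""
    if not is_fatigued:
        return "ok"                       # no action; keep monitoring
    if autopilot_available:
        vehicle.start_autopilot()         # control the vehicle to start autonomous driving
        return "autopilot"
    vehicle.prompt("Please suspend driving and rest")
    return "prompted"

v = StubVehicle()
print(on_status(True, v, autopilot_available=False))   # prompted
```

Preferring autopilot over a prompt when both are available is one possible policy; the embodiments present the two responses as alternatives without fixing a priority.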
In this embodiment of the present invention, the image processing apparatus can establish a connection with a vehicle; when the image processing apparatus detects a target object located at the driving position of the vehicle, it can obtain the object identity of the target object, obtain the identification model associated with that object identity, and use the associated identification model as the preset identification model, which can improve recognition accuracy. In addition, the image processing apparatus uses the image data collected by multiple image sensors as the signal input of the preset identification model, so that multiple signals can complement one another and sufficient information is provided at the input of the preset identification model; furthermore, the depth map is combined with the RGB image or gray-scale image to optimize the identification model, which improves the accuracy of fatigue driving detection and can improve the safety of vehicle driving.
Based on the above description of the image processing method and image data processing system, an embodiment of the present invention provides an image processing apparatus. Referring to Fig. 4, the image processing apparatus shown in Fig. 4 may include:
a first image sensor 401, configured to collect a first image of a target object;

a second image sensor 402, configured to collect a second image of the target object, where the first image includes at least one of a gray-scale image or an RGB image, and the second image includes a depth image;

a receiving module 403, configured to receive the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor;

an identification module 404, configured to input the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and

a determining module 405, configured to determine status information of the target object according to the motion characteristic description information,

where the preset identification model is used to identify the specified region in the first image of the target object and the second image of the target object.
The specified region of the target object includes the mouth region of the target object, and the motion characteristic description information includes description information indicating that the mouth region of the target object exhibits an opening feature.

Optionally, the determining module 405 is specifically configured to: according to the description information, obtained within a preset time interval, indicating that the mouth region of the target object exhibits the opening feature, count the number of times the mouth region of the target object exhibits the opening feature; and, if the number of times the mouth region of the target object exhibits the opening feature exceeds a first preset threshold, determine status information indicating that the target object is in a designated state.
The specified region of the target object includes the eye region of the target object, and the motion characteristic description information includes description information indicating that the eye region of the target object exhibits an eye-closing feature.

Optionally, the determining module 405 is configured to: according to the description information, obtained within a preset time interval, indicating that the eye region of the target object exhibits the eye-closing feature, count the number of times the eye region of the target object exhibits the eye-closing feature; and, if the number of times the eye region of the target object exhibits the eye-closing feature exceeds a second preset threshold, determine status information indicating that the target object is in a designated state.
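The counting logic of the determining module 405 can be sketched as follows: over a preset time interval, tally how many per-frame descriptions report a mouth-opening (yawning) or eye-closing feature and compare each count against its threshold. The threshold values and the per-frame dictionary format are illustrative assumptions, not values from the patent.

```python
def is_fatigued(descriptions, mouth_threshold=3, eye_threshold=5):
    """descriptions: per-frame description information collected within the
    preset time interval, e.g. {"mouth_open": bool, "eyes_closed": bool}."""
    mouth_open_count = sum(1 for d in descriptions if d.get("mouth_open"))
    eyes_closed_count = sum(1 for d in descriptions if d.get("eyes_closed"))
    # Exceeding the first preset threshold (mouth opening) or the second
    # preset threshold (eye closing) indicates the designated (fatigue) state.
    return mouth_open_count > mouth_threshold or eyes_closed_count > eye_threshold

frames = [{"mouth_open": True}] * 4 + [{"eyes_closed": True}] * 2
print(is_fatigued(frames))   # True: 4 mouth openings exceed the threshold of 3
```

Counting discrete events over a window is one simple realization; production fatigue detectors often use duration-weighted measures (e.g. percentage of eye closure over time) instead of raw counts.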
Optionally, the image processing apparatus is connected to a vehicle, and the image processing apparatus is configured to collect image information of an object located at the driver's seat.

Optionally, an output module 406 is configured to output a prompt message if it is determined according to the status information of the target object that driving of the vehicle needs to be suspended, the prompt message being used to prompt the target object to suspend driving the vehicle.

Optionally, a control module 407 is configured to control the vehicle to start an autonomous driving mode if it is determined according to the status information of the target object that the autonomous driving mode of the vehicle needs to be started.

Optionally, a calling module 408 is configured to, if a training instruction for the preset identification model is detected, call the first image sensor to collect a first training image of the target object and call the second image sensor to collect a second training image of the target object.

Optionally, a first training module 409 is configured to train the preset identification model according to the first training image and the second training image.
Optionally, the first image sensor 401 is further configured to collect a first image of a training object.

Optionally, the second image sensor 402 is further configured to collect a second image of the training object.

Optionally, a second training module 410 is configured to train on the specified regions of the first image and the second image of the training object using an initial identification model to obtain a trained identification model.

Optionally, the second training module 410 is specifically configured to: obtain the current training corpus of the training object; identify the specified regions of the first image and the second image of the training object using the initial identification model to obtain training description information; determine the similarity between the current training corpus of the training object and the training description information; and, if the similarity is less than a preset similarity value, adjust the identification parameters in the initial identification model to obtain the trained identification model.
Optionally, an acquisition module 411 is configured to obtain the object identity of the target object if the target object is detected.

Optionally, a searching module 412 is configured to search for the identification model associated with the object identity of the target object.

Optionally, the receiving module 403 is specifically configured to use the associated identification model as the preset identification model, and to perform the step of receiving the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor.
In this embodiment of the present invention, the image processing apparatus can receive the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor, input the first image and the second image of the target object into the preset identification model to obtain description information describing the motion characteristic of the specified region of the target object, and determine the status information of the target object according to the motion characteristic description information. By using the image data collected by multiple image sensors as the signal input of the identification model, multiple signals can complement one another, so that sufficient information is provided at the input of the preset identification model, which improves the accuracy of fatigue detection.
Referring to Fig. 5, Fig. 5 is a schematic block diagram of an image processing device provided by an embodiment of the present invention. As shown, the image processing device in this embodiment may include: at least one processor 501, such as a CPU; at least one memory 502; a communication device 503; a sensor 504; and a controller 505. The processor 501, the memory 502, the communication device 503, the sensor 504, and the controller 505 are connected by a bus 506.

The communication device 503 can be used to output prompt messages, and can also be used to establish a communication connection with a vehicle and send instructions to the vehicle.

The sensor 504 includes a first image sensor and a second image sensor. The first image sensor may be a monocular vision sensor, and the second image sensor may be a multi-view vision sensor; the first image sensor is configured to collect a first image of a target object, and the second image sensor is configured to collect a second image of the target object.

The controller 505 is configured to control the vehicle to start an autonomous mode when the vehicle needs to be controlled to start autonomous driving.

The memory 502 is configured to store instructions, and the processor 501 calls the program code stored in the memory 502.
Specifically, the processor 501 calls the program code stored in the memory 502 to perform the following operations:

receiving the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor;

inputting the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and

determining status information of the target object according to the motion characteristic description information,

where the preset identification model is used to identify the specified region in the first image of the target object and the second image of the target object.
The first image includes at least one of a gray-scale image or an RGB image, and the second image includes a depth image.
Optionally, the specified region of the target object includes the mouth region of the target object, and the motion characteristic description information includes description information indicating that the mouth region of the target object exhibits an opening feature. The processor 501 calls the program code stored in the memory 502 and can further perform the following operations:

according to the description information, obtained within a preset time interval, indicating that the mouth region of the target object exhibits the opening feature, counting the number of times the mouth region of the target object exhibits the opening feature; and

if the number of times the mouth region of the target object exhibits the opening feature exceeds a first preset threshold, determining status information indicating that the target object is in a designated state.
Optionally, the specified region of the target object includes the eye region of the target object, and the motion characteristic description information includes description information indicating that the eye region of the target object exhibits an eye-closing feature. The processor 501 calls the program code stored in the memory 502 and can further perform the following operations:

according to the description information, obtained within a preset time interval, indicating that the eye region of the target object exhibits the eye-closing feature, counting the number of times the eye region of the target object exhibits the eye-closing feature; and

if the number of times the eye region of the target object exhibits the eye-closing feature exceeds a second preset threshold, determining status information indicating that the target object is in a designated state.
Optionally, the image processing apparatus is connected to a vehicle, and the image processing apparatus is configured to collect image information of an object located at the driver's seat.

Optionally, the processor 501 calls the program code stored in the memory 502 and can further perform the following operation:

outputting a prompt message if it is determined according to the status information of the target object that driving of the vehicle needs to be suspended, the prompt message being used to prompt the target object to suspend driving the vehicle.

Optionally, the processor 501 calls the program code stored in the memory 502 and can further perform the following operation:

controlling the vehicle to start an autonomous driving mode if it is determined according to the status information of the target object that the autonomous driving mode of the vehicle needs to be started.
Optionally, the processor 501 calls the program code stored in the memory 502 and can further perform the following operations:

if a training instruction for the preset identification model is detected, calling the first image sensor to collect a first training image of the target object and calling the second image sensor to collect a second training image of the target object; and

training the preset identification model according to the first training image and the second training image.
Optionally, the processor 501 calls the program code stored in the memory 502 and can further perform the following operations:

collecting a first image and a second image of a training object; and

training on the specified regions of the first image and the second image of the training object using an initial identification model to obtain a trained identification model.
Optionally, the processor 501 calls the program code stored in the memory 502 and can further perform the following operations:

obtaining the current training corpus of the training object;

identifying the specified regions of the first image and the second image of the training object using the initial identification model to obtain training description information;

determining the similarity between the current training corpus of the training object and the training description information; and

if the similarity is less than a preset similarity value, adjusting the identification parameters in the initial identification model to obtain the trained identification model.
Optionally, the processor 501 calls the program code stored in the memory 502 and can further perform the following operations:

if a target object is detected, obtaining the object identity of the target object;

searching for the identification model associated with the object identity of the target object; and

using the associated identification model as the preset identification model, and performing the step of receiving the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor.
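The identity-based model lookup can be sketched as a simple registry with a fallback: use the model associated with the detected identity as the preset identification model, or a generic model when no per-driver model exists. All names here (`GENERIC_MODEL`, `MODELS_BY_IDENTITY`, `"driver-42"`) are hypothetical placeholders for illustration.

```python
GENERIC_MODEL = "generic-model"                  # default preset identification model
MODELS_BY_IDENTITY = {"driver-42": "model-42"}   # illustrative per-identity registry

def select_model(object_identity):
    """Search for the identification model associated with the object identity;
    fall back to the generic model when no associated model is found."""
    return MODELS_BY_IDENTITY.get(object_identity, GENERIC_MODEL)

print(select_model("driver-42"))   # model-42
print(select_model("driver-7"))    # generic-model
```

A model trained on a specific driver's images can recognize that driver's mouth and eye regions more accurately than a generic one, which is the motivation for keying the preset model on the object identity.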
In this embodiment of the present invention, the image processing apparatus can receive the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor, input the first image and the second image of the target object into the preset identification model to obtain description information describing the motion characteristic of the specified region of the target object, and determine the status information of the target object according to the motion characteristic description information. By using the image data collected by multiple image sensors as the signal input of the identification model, multiple signals can complement one another, so that sufficient information is provided at the input of the preset identification model; moreover, the depth map is combined with the gray-scale image or RGB image to optimize the identification model, which improves the accuracy of fatigue detection.
The present invention also provides a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the steps of the image data processing methods in the embodiments corresponding to Fig. 1 and Fig. 3 above. For the implementation and beneficial effects of this computer program product, reference may be made to the implementation and beneficial effects of the image data processing methods of Fig. 1 and Fig. 3 above; repeated descriptions are omitted here.
It should be noted that, for brevity, each of the foregoing method embodiments is expressed as a series of combined actions; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention certain steps may be performed in other orders or simultaneously. Those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

One of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. The storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.

What is disclosed above is only a part of the embodiments of the present invention, which certainly cannot be used to limit the scope of rights of the present invention. Those skilled in the art can understand all or part of the processes for realizing the above embodiments, and equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (27)

1. An image processing method, applied to an image processing apparatus, the image processing apparatus comprising a first image sensor and a second image sensor, the method comprising:

receiving a first image of a target object collected by the first image sensor and a second image of the target object collected by the second image sensor, the first image comprising at least one of a gray-scale image or an RGB image, and the second image comprising a depth image;

inputting the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and

determining status information of the target object according to the motion characteristic description information.
2. The method according to claim 1, wherein the specified region of the target object comprises the mouth region of the target object, and the motion characteristic description information comprises description information indicating that the mouth region of the target object exhibits an opening feature.

3. The method according to claim 2, wherein determining the status information of the target object according to the motion characteristic description information comprises:

according to the description information, obtained within a preset time interval, indicating that the mouth region of the target object exhibits the opening feature, counting the number of times the mouth region of the target object exhibits the opening feature; and

if the number of times the mouth region of the target object exhibits the opening feature exceeds a first preset threshold, determining status information indicating that the target object is in a designated state.

4. The method according to claim 1, wherein the specified region of the target object comprises the eye region of the target object, and the motion characteristic description information comprises description information indicating that the eye region of the target object exhibits an eye-closing feature.

5. The method according to claim 4, wherein determining the status information of the target object according to the motion characteristic description information comprises:

according to the description information, obtained within a preset time interval, indicating that the eye region of the target object exhibits the eye-closing feature, counting the number of times the eye region of the target object exhibits the eye-closing feature; and

if the number of times the eye region of the target object exhibits the eye-closing feature exceeds a second preset threshold, determining status information indicating that the target object is in a designated state.
6. The method according to any one of claims 1-5, wherein the image processing apparatus is connected to a vehicle, and the image processing apparatus is configured to collect image information of an object located at the driver's seat.

7. The method according to claim 6, further comprising:

outputting a prompt message if it is determined according to the status information of the target object that driving of the vehicle needs to be suspended, the prompt message being used to prompt the target object to suspend driving the vehicle.

8. The method according to claim 6, further comprising:

controlling the vehicle to start an autonomous driving mode if it is determined according to the status information of the target object that the autonomous driving mode of the vehicle needs to be started.
9. The method according to claim 7 or 8, further comprising:

if a training instruction for the preset identification model is detected, calling the first image sensor to collect a first training image of the target object and calling the second image sensor to collect a second training image of the target object; and

training the preset identification model according to the first training image and the second training image.

10. The method according to claim 7 or 8, further comprising:

collecting a first image and a second image of a training object; and

training on the specified regions of the first image and the second image of the training object using an initial identification model to obtain a trained identification model.
11. The method according to claim 10, wherein training on the specified regions of the first image and the second image of the training object using the initial identification model to obtain the trained identification model comprises:

obtaining the current training corpus of the training object;

identifying the specified regions of the first image and the second image of the training object using the initial identification model to obtain training description information;

determining the similarity between the current training corpus of the training object and the training description information; and

if the similarity is less than a preset similarity value, adjusting the identification parameters in the initial identification model to obtain the trained identification model.
12. The method according to claim 1 or 11, further comprising:

if a target object is detected, obtaining the object identity of the target object;

searching for the identification model associated with the object identity of the target object; and

using the associated identification model as the preset identification model, and performing the step of receiving the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor.
13. An image processing apparatus, the image processing apparatus comprising a first image sensor and a second image sensor, the apparatus comprising:

a receiving module, configured to receive a first image of a target object collected by the first image sensor and a second image of the target object collected by the second image sensor, the first image comprising at least one of a gray-scale image or an RGB image, and the second image comprising a depth image;

an identification module, configured to input the first image of the target object and the second image of the target object into a preset identification model to obtain description information describing a motion characteristic of a specified region of the target object; and

a determining module, configured to determine status information of the target object according to the motion characteristic description information,

wherein the preset identification model is used to identify the specified region in the first image of the target object and the second image of the target object.
14. The apparatus according to claim 13, wherein the specified region of the target object comprises the mouth region of the target object, and the motion characteristic description information comprises description information indicating that the mouth region of the target object exhibits an opening feature.

15. The apparatus according to claim 14, wherein the determining module is specifically configured to: according to the description information, obtained within a preset time interval, indicating that the mouth region of the target object exhibits the opening feature, count the number of times the mouth region of the target object exhibits the opening feature; and, if the number of times the mouth region of the target object exhibits the opening feature exceeds a first preset threshold, determine status information indicating that the target object is in a designated state.

16. The apparatus according to claim 13, wherein the specified region of the target object comprises the eye region of the target object, and the motion characteristic description information comprises description information indicating that the eye region of the target object exhibits an eye-closing feature.

17. The apparatus according to claim 16, wherein the determining module is configured to: according to the description information, obtained within a preset time interval, indicating that the eye region of the target object exhibits the eye-closing feature, count the number of times the eye region of the target object exhibits the eye-closing feature; and, if the number of times the eye region of the target object exhibits the eye-closing feature exceeds a second preset threshold, determine status information indicating that the target object is in a designated state.
18. The apparatus according to any one of claims 13-17, wherein the image processing apparatus is connected to a vehicle, and the image processing apparatus is configured to collect image information of an object located at the driver's seat.

19. The apparatus according to claim 18, further comprising:

an output module, configured to output a prompt message if it is determined according to the status information of the target object that driving of the vehicle needs to be suspended, the prompt message being used to prompt the target object to suspend driving the vehicle.

20. The apparatus according to claim 18, further comprising:

a control module, configured to control the vehicle to start an autonomous driving mode if it is determined according to the status information of the target object that the autonomous driving mode of the vehicle needs to be started.
21. The apparatus according to claim 19 or 20, further comprising:

a calling module, configured to, if a training instruction for the preset identification model is detected, call the first image sensor to collect a first training image of the target object and call the second image sensor to collect a second training image of the target object; and

a first training module, configured to train the preset identification model according to the first training image and the second training image.

22. The apparatus according to claim 19 or 20, wherein:

the first image sensor is further configured to collect a first image of a training object;

the second image sensor is further configured to collect a second image of the training object; and

the image processing apparatus further comprises a second training module, configured to train on the specified regions of the first image and the second image of the training object using an initial identification model to obtain a trained identification model.
23. device according to claim 22, which is characterized in that
Second training module is specifically used for obtaining the current training corpus of the trained object;Using the initial identification model The specified region of the first image and the second image to training object is identified, and obtains training description information;Determine the instruction Practice the similarity of the current training corpus and the trained description information of object;If the similarity is less than default similarity value, The identification parameter in the initial identification model is then adjusted, the identification model after being trained.
24. The device according to claim 13 or 23, further comprising:
an acquisition module, configured to obtain an object identity of the target object if the target object is detected;
a searching module, configured to search for an identification model associated with the object identity of the target object;
wherein the receiving module is specifically configured to take the associated identification model as the preset identification model, and to perform the step of receiving the first image of the target object collected by the first image sensor and the second image of the target object collected by the second image sensor.
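Claim 24 describes selecting a per-identity model before image collection. A minimal sketch, assuming a simple dictionary registry, string identifiers, and a default fallback, none of which are specified by the claims:

```python
# Hypothetical sketch of claim 24: when a target object is detected,
# obtain its object identity, look up the identification model associated
# with that identity, and use it as the preset identification model.
# The registry contents and the fallback model are illustrative.

DEFAULT_MODEL = "generic-identification-model"

MODEL_REGISTRY = {
    "driver-001": "model-for-driver-001",
    "driver-002": "model-for-driver-002",
}


def select_identification_model(object_identity: str) -> str:
    """Return the identification model associated with the detected
    object's identity, falling back to a default when none is found."""
    return MODEL_REGISTRY.get(object_identity, DEFAULT_MODEL)
```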
25. An image processing device, comprising a processor and a memory connected by a bus, wherein the memory stores executable program code, and the processor is configured to call the executable program code to execute the image processing method according to any one of claims 1 to 12.
26. A computer-readable storage medium, wherein the storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to execute the steps of the image processing method according to any one of claims 1 to 12.
27. A computer program product, comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to implement the steps of the image processing method according to any one of claims 1 to 12.
CN201780005969.XA 2017-12-25 2017-12-25 Image processing method, device and equipment Pending CN108701214A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/118174 WO2019126908A1 (en) 2017-12-25 2017-12-25 Image data processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN108701214A true CN108701214A (en) 2018-10-23

Family

ID=63843765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780005969.XA Pending CN108701214A (en) 2017-12-25 2017-12-25 Image processing method, device and equipment

Country Status (2)

Country Link
CN (1) CN108701214A (en)
WO (1) WO2019126908A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770317B (en) * 2020-07-22 2023-02-03 平安国际智慧城市科技股份有限公司 Video monitoring method, device, equipment and medium for intelligent community
CN112068218A (en) * 2020-09-14 2020-12-11 北京数衍科技有限公司 Self-adaptive detection method and device for pedestrian steps
CN112378916B (en) * 2020-11-10 2024-03-29 厦门长江电子科技有限公司 Automatic image grading detection system and method based on machine vision
CN113610004B (en) * 2021-08-09 2024-04-05 上海擎朗智能科技有限公司 Image processing method, robot and medium

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080091628A1 (en) * 2006-08-16 2008-04-17 Narayan Srinivasa Cognitive architecture for learning, action, and perception
US20090086269A1 (en) * 2007-09-28 2009-04-02 Kyocera Mita Corporation Image Forming Apparatus and Image Forming System
CN103473530A (en) * 2013-08-30 2013-12-25 天津理工大学 Adaptive action recognition method based on multi-view and multi-mode characteristics
CN103714660A (en) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN103810491A (en) * 2014-02-19 2014-05-21 北京工业大学 Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
US20140184484A1 (en) * 2012-12-28 2014-07-03 Semiconductor Energy Laboratory Co., Ltd. Display device
CN104268520A (en) * 2014-09-22 2015-01-07 天津理工大学 Human motion recognition method based on depth movement trail
CN104504856A (en) * 2014-12-30 2015-04-08 天津大学 Fatigue driving detection method based on Kinect and face recognition
CN104616437A (en) * 2015-02-27 2015-05-13 浪潮集团有限公司 Vehicle-mounted fatigue identification system and method
CN104809445A (en) * 2015-05-07 2015-07-29 吉林大学 Fatigue driving detection method based on eye and mouth states
US20150379945A1 (en) * 2012-11-15 2015-12-31 Semiconductor Energy Laboratory Co., Ltd. Liquid crystal display device
CN106203394A (en) * 2016-07-26 2016-12-07 浙江捷尚视觉科技股份有限公司 Fatigue driving safety monitoring method based on human eye state detection
CN106446811A (en) * 2016-09-12 2017-02-22 北京智芯原动科技有限公司 Deep-learning-based driver's fatigue detection method and apparatus
CN106485214A * 2016-09-28 2017-03-08 天津工业大学 Eye and mouth state recognition method based on convolutional neural networks
CN106599806A * 2016-12-01 2017-04-26 西安理工大学 Local curved-surface geometric feature-based human body action recognition method
CN106897659A * 2015-12-18 2017-06-27 腾讯科技(深圳)有限公司 Blink motion recognition method and device
CN107126224A * 2017-06-20 2017-09-05 中南大学 Real-time monitoring and early warning method and system for rail train driver status based on Kinect
CN107203753A * 2017-05-25 2017-09-26 西安工业大学 Action recognition method based on fuzzy neural network and graph model reasoning
CN107229922A * 2017-06-12 2017-10-03 西南科技大学 Fatigue driving monitoring method and device
CN107358794A (en) * 2017-06-13 2017-11-17 深圳前海慧泊中安运营管理有限公司 Data processing method and device
CN107492074A (en) * 2017-07-21 2017-12-19 触景无限科技(北京)有限公司 Image acquisition and processing method, device and terminal device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130124763A * 2012-05-07 2013-11-15 현대모비스 주식회사 System for determining tired driving based on the driver's face image and method thereof
CN103714321B * 2013-12-26 2017-09-26 苏州清研微视电子科技有限公司 Driver face detection system based on range image and intensity image
CN105740767A (en) * 2016-01-22 2016-07-06 江苏大学 Driver road rage real-time identification and warning method based on facial features


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAIQI HUANG et al.: "A Discriminative Model of Motion and Cross Ratio for View-Invariant Action Recognition", IEEE Transactions on Image Processing *
WANG ZHIRUI et al.: "Human behavior self-similarity recognition based on the LERBF algorithm", Control Engineering of China *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147713A * 2019-03-28 2019-08-20 石化盈科信息技术有限责任公司 Fatigue driving detection method and system
WO2021026855A1 (en) * 2019-08-15 2021-02-18 深圳市大疆创新科技有限公司 Machine vision-based image processing method and device
CN110570400A (en) * 2019-08-19 2019-12-13 河北极目楚天微电子科技有限公司 Information processing method and device for chip 3D packaging detection
CN110570400B (en) * 2019-08-19 2022-11-11 河北极目楚天微电子科技有限公司 Information processing method and device for chip 3D packaging detection
CN114022871A (en) * 2021-11-10 2022-02-08 中国民用航空飞行学院 Unmanned aerial vehicle driver fatigue detection method and system based on depth perception technology
WO2023108364A1 (en) * 2021-12-13 2023-06-22 华为技术有限公司 Method and apparatus for detecting driver state, and storage medium

Also Published As

Publication number Publication date
WO2019126908A1 (en) 2019-07-04

Similar Documents

Publication Publication Date Title
CN108701214A (en) Image processing method, device and equipment
CN110858295B (en) Traffic police gesture recognition method and device, vehicle control unit and storage medium
CN105354530B Body color recognition method and device
CN108416235B Anti-peeping method and apparatus for display interface, storage medium and terminal device
CN102893307A (en) Object displaying apparatus, object displaying system, and object displaying method
CN101390128B (en) Detecting method and detecting system for positions of face parts
CN110910549A (en) Campus personnel safety management system based on deep learning and face recognition features
CN106774945A Aircraft flight control method, device, aircraft and system
CN110119719A Living body detection method, device, equipment and computer-readable storage medium
CN106815574B Method and device for establishing a detection model and detecting mobile phone answering and calling behavior
WO2018176376A1 (en) Environmental information collection method, ground station and aircraft
CN102752458A (en) Driver fatigue detection mobile phone and unit
CN109684981A Glaucoma image recognition method, equipment and screening system
CN107622246B (en) Face recognition method and related product
CN104143086A (en) Application technology of portrait comparison to mobile terminal operating system
CN110208946A Wearable device and interaction method based on the wearable device
CN107403450A Method and device for precise landing of an unmanned aerial vehicle
CN102088539A (en) Method and system for evaluating pre-shot picture quality
CN107300378A Personal identification method, device and system for a positioned object
CN112241659A (en) Detection device and detection method for illegal building and terminal equipment
KR102021394B1 Method for a user-based elevator apparatus with artificial intelligence
CN112651962A (en) AI intelligent diagnosis system platform
CN112037255A (en) Target tracking method and device
CN207780803U Face recognition device
CN110602384A (en) Exposure control method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20230224