WO2022027895A1 - Abnormal sitting posture recognition method, apparatus, electronic device, storage medium and program - Google Patents
Abnormal sitting posture recognition method, apparatus, electronic device, storage medium and program
- Publication number
- WO2022027895A1 (application PCT/CN2020/136267)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- sitting posture
- feature map
- current
- key point
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/593—Recognising seat occupancy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0476—Cameras to detect unsafe condition, e.g. video cameras
Definitions
- the present disclosure relates to the technical field of deep learning, and in particular, to a method, device, electronic device, storage medium and program for identifying abnormal sitting posture.
- vehicle cabin intelligence includes aspects such as personalized service and safety perception.
- in terms of safety perception, the user's sitting posture while the vehicle is being driven is related to the user's safety: an inappropriate sitting posture increases the probability of injury to the user in the event of a vehicle collision, reducing the safety of the user's ride.
- embodiments of the present disclosure are expected to provide a method, apparatus, electronic device, storage medium, and program for identifying an abnormal sitting posture.
- Embodiments of the present disclosure provide a method for identifying an abnormal sitting posture, including:
- the abnormal sitting posture type includes a sitting posture type that has a safety risk.
- the current sitting posture of at least one user in the cabin is determined by recognizing the acquired current scene image in the vehicle cabin, and further, when the current sitting posture of the user belongs to an abnormal sitting posture type, a warning message is issued, so that a user in an abnormal sitting posture is prompted and the safety of the user's ride is improved.
- the abnormal sitting posture type includes at least one of the following:
- a first abnormal sitting posture in which the user's body is leaning forward, a second abnormal sitting posture in which the user's body is leaning sideways, and a third abnormal sitting posture in which the user's body is lying down.
- the types of abnormal sitting postures are more abundant, so that a variety of abnormal sitting postures can be covered more comprehensively, and the safety of the user's ride can be guaranteed.
- identifying the current sitting posture of at least one user located in the vehicle cabin based on the current scene image includes:
- the current sitting posture of each user located in the vehicle cabin is determined. In this way, through the relative positional relationship between the key point information of at least one user in the current scene image and the set reference object, the current sitting posture of each user in the vehicle cabin can be accurately determined.
- the key point information includes head key point information; based on the relative positional relationship between the key point information of each user and the set reference object, determining the current sitting posture of each user located in the vehicle cabin includes:
- if the head key point information of any user is lower than the set lower line of the steering wheel, it is determined that the current sitting posture of that user is the first abnormal sitting posture in which the user's body leans forward. In this way, by judging that the user's head key point is lower than the set lower line of the steering wheel, it is quickly determined that the user's current sitting posture is the first abnormal sitting posture in which the user's body leans forward.
- the key point information includes left shoulder key point information and right shoulder key point information; based on the relative positional relationship between the key point information of each user and the set reference object, determining the current sitting posture of each user located in the vehicle cabin includes:
- if the angle between the line connecting the left shoulder key point and the right shoulder key point of any user and the set seat reference plane is greater than the set first angle threshold, it is determined that the current sitting posture of that user is the second abnormal sitting posture in which the user's body leans sideways. In this way, when the angle between the user's shoulder line and the seat reference plane is greater than the first angle threshold, it is quickly determined that the user's current sitting posture is the second abnormal sitting posture in which the user's body leans sideways.
- the key point information includes neck key point information and crotch key point information; based on the relative positional relationship between the key point information of each user and the set reference object, determining the current sitting posture of each user located in the vehicle cabin includes:
- if the angle between the line connecting the neck key point and the crotch key point of any user and the set horizontal reference plane is smaller than the set second angle threshold, it is determined that the current sitting posture of that user is the third abnormal sitting posture in which the user's body is lying down. In this way, when the angle between the user's neck-to-crotch line and the horizontal reference plane is smaller than the second angle threshold, it is quickly determined that the user's current sitting posture is the third abnormal sitting posture in which the user's body is lying down.
- identifying the current sitting posture of at least one user located in the vehicle cabin based on the current scene image includes:
- an intermediate feature map corresponding to the current scene image is generated
- the current sitting posture of each user is determined based on the intermediate feature map and detection frame information of each user in the at least one user. In this way, only the intermediate feature map corresponding to the current scene image and the detection frame information of each user are needed to quickly determine the current sitting posture of each user; at the same time, because there are no intermediate parameters, the accuracy of the determined current sitting posture of each user is high.
- generating detection frame information for each user in the at least one user located in the vehicle cabin includes:
- the center point position information of the detection frame of each user located in the vehicle cabin is generated.
- the detection frame information (including center point position information) corresponding to the user is determined by means of feature map processing, and then the detection frame information is combined with the intermediate feature map corresponding to the current scene image to determine the current sitting posture information of the user.
- the center point position information of the detection frame of each user located in the vehicle cabin is generated, including:
- the converted target channel feature map is subjected to maximum pooling processing to obtain multiple pooling values and the position index corresponding to each of the multiple pooling values; the position index is used to identify the position of the pooling value in the converted target channel feature map;
- the center point position information of the detection frame of each user located in the vehicle cabin is generated. In this way, by performing the maximum pooling process on the target channel feature map, the target pooling values belonging to user center points can be determined more accurately from the multiple pooling values, and then the center point position information of each user's detection frame can be determined more accurately.
- determining the current sitting posture of each user based on the intermediate feature map and detection frame information of each user in the at least one user includes:
- for each user, based on the center point position information indicated by the user's detection frame information, N feature values are extracted from the classification feature map at the feature positions matching the center point position information; the maximum feature value is selected from the N feature values, and the sitting posture category of the channel feature map corresponding to the maximum feature value in the classification feature map is determined as the current sitting posture of the user. In this way, by performing at least one second convolution process on the intermediate feature map to generate a classification feature map, and then combining it with the generated center point position information of each user, the current sitting posture of each user can be determined more accurately.
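The classification readout described above can be sketched as follows. This is an illustrative NumPy fragment, not the disclosed neural network; the array layout (N sitting-posture channels, height, width) and the function name are assumptions:

```python
import numpy as np

def classify_sitting_posture(class_feature_map, center_xy):
    """Pick a user's sitting-posture class from the classification
    feature map at the center point of the user's detection frame.

    class_feature_map: array of shape (N, H, W), one channel per
        sitting-posture category (layout assumed for illustration).
    center_xy: (x, y) center-point position of the user's detection
        frame, already in feature-map coordinates.
    """
    x, y = center_xy
    # Extract the N feature values at the position matching the center point.
    scores = class_feature_map[:, y, x]
    # The channel holding the maximum feature value gives the current posture.
    return int(np.argmax(scores))
```

For a map with four posture channels in which channel 2 holds the largest value at the user's center point, the function returns class 2.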
- Embodiments of the present disclosure provide an abnormal sitting posture recognition device, including:
- the acquisition module is used to acquire the current scene image in the cabin
- an identification module configured to identify the current sitting posture of at least one user located in the vehicle cabin based on the current scene image
- a determination module configured to issue a warning message when the current sitting posture of the user belongs to an abnormal sitting posture type; wherein the abnormal sitting posture type includes a sitting posture type with a safety risk.
- An embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions are executed by the processor to perform the steps of the abnormal sitting posture recognition method according to the first aspect or any one of its implementation manners.
- An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the abnormal sitting posture recognition method according to the first aspect or any one of its implementation manners are performed.
- An embodiment of the present disclosure provides a computer program including computer-readable codes which, when run in a computer, cause the computer to execute any one of the above abnormal sitting posture recognition methods.
- In the abnormal sitting posture recognition method, device, electronic device, storage medium, and program proposed by the embodiments of the present disclosure, a current scene image in the vehicle cabin is first acquired; then, based on the current scene image, the current sitting posture of at least one user located in the vehicle cabin is identified; finally, when the current sitting posture of the user belongs to an abnormal sitting posture type, a warning message is issued, where the abnormal sitting posture type includes a sitting posture type that carries a safety risk. By recognizing the current scene image, the current sitting posture of at least one user in the cabin is determined, and a warning message is issued when that sitting posture belongs to an abnormal type, so as to prompt the user in the abnormal sitting posture and improve the safety of the user's ride.
- FIG. 1 shows a schematic flowchart of a method for identifying an abnormal sitting posture provided by an embodiment of the present disclosure
- FIG. 2 shows a schematic diagram of a system architecture to which the method for identifying an abnormal sitting posture according to an embodiment of the present disclosure is applied;
- FIG. 3 shows a schematic diagram of a current scene image in an abnormal sitting posture recognition method provided by an embodiment of the present disclosure
- FIG. 4 shows a schematic structural diagram of an abnormal sitting posture recognition device 400 provided by an embodiment of the present disclosure
- FIG. 5 shows a schematic structural diagram of an electronic device 500 provided by an embodiment of the present disclosure.
- cabin intelligence can include personalized services, safety perception and other aspects.
- in terms of safety perception, the user's sitting posture while the vehicle is being driven is related to the user's safety: an inappropriate sitting posture increases the probability of injury to the user in the event of a vehicle collision, reducing the safety of the user's ride. Therefore, in order to solve the above problem, an embodiment of the present disclosure provides a method for identifying an abnormal sitting posture.
- FIG. 1 is a schematic flowchart of a method for identifying an abnormal sitting posture provided by an embodiment of the present disclosure
- the method includes S101 to S103, wherein:
- S101: acquire a current scene image in the vehicle cabin;
- S102: based on the current scene image, identify the current sitting posture of at least one user located in the vehicle cabin;
- S103: issue a warning message when the current sitting posture of the user belongs to an abnormal sitting posture type.
- the abnormal sitting posture type includes a sitting posture type with a safety risk.
- the current sitting posture of at least one user located in the vehicle cabin is determined by recognizing the current scene image obtained in the vehicle cabin, and further, when the current sitting posture of the user belongs to an abnormal sitting posture type, a warning message is issued, Thereby, the user in the abnormal sitting posture is prompted, and the safety of the user's ride is improved.
- FIG. 2 is a schematic diagram of a system architecture to which the method for identifying an abnormal sitting posture according to an embodiment of the present disclosure can be applied; as shown in FIG. 2:
- the vehicle terminal 201 and the abnormal sitting posture recognition terminal 203 can establish a communication connection through the network 202; the vehicle terminal 201 reports the current scene image in the vehicle cabin to the abnormal sitting posture recognition terminal 203 through the network 202; the abnormal sitting posture recognition terminal 203 recognizes the current sitting posture of at least one user located in the vehicle cabin based on the current scene image, and determines the warning information when the current sitting posture of the user belongs to the abnormal sitting posture type; finally, the abnormal sitting posture recognition terminal 203 uploads the warning information to the network 202, which sends it to the vehicle terminal 201.
- the vehicle terminal 201 may include an in-vehicle image acquisition device
- the abnormal sitting posture recognition terminal 203 may include an in-vehicle visual processing device or a remote server with visual information processing capability.
- the network 202 can be wired or wireless.
- when the abnormal sitting posture recognition terminal is a vehicle-mounted visual processing device, the vehicle terminal can communicate with the vehicle-mounted visual processing device through a wired connection, such as data communication through a bus;
- when the abnormal sitting posture recognition terminal is a remote server, the vehicle terminal can exchange data with the remote server through a wireless network.
- the vehicle terminal 201 may be an in-vehicle visual processing device with an in-vehicle image acquisition module, which is specifically implemented as an in-vehicle host with a camera.
- the abnormal sitting posture recognition method of the embodiment of the present disclosure may be executed by the vehicle terminal 201 , and the above-mentioned system architecture may not include the network 202 and the abnormal sitting posture recognition terminal 203 .
- a camera device may be set on the top of the cabin, and an image of the current scene in the cabin can be acquired in real time through the camera device set in the cabin.
- the installation position of the camera device may be a position where all users in the vehicle cabin can be photographed.
- the current scene image can be identified to determine the current sitting posture corresponding to each user in the cabin.
- the current sitting posture may be a sitting posture category of each user.
- identifying the current sitting posture of at least one user located in the vehicle cabin may include:
- the current sitting posture of each user located in the vehicle cabin is determined.
- the current scene image can be input into the key point detection neural network to determine the key point information of at least one user in the current scene image; and for each user in the vehicle cabin, the user's key point information and the set reference can be The relative positional relationship between objects is used to determine the current sitting posture of the user.
- the key point information includes head key point information; based on the relative positional relationship between the key point information of each user and the set reference object, the current sitting posture of each user located in the vehicle cabin is determined, which can be include:
- if the head key point information of any user is lower than the set lower line of the steering wheel, it is determined that the current sitting posture of that user is the first abnormal sitting posture in which the user's body leans forward.
- as shown in the schematic diagram of the current scene image in FIG. 3, the image includes a steering wheel 31, a lower line 32 of the steering wheel, and a driver 33; the lower line 32 of the steering wheel is a reference line, perpendicular to the direction of travel, drawn from the edge of the steering wheel on the side close to the driver.
- the lower line of the steering wheel divides the current scene image into two regions, namely a first region 34 located above the lower line of the steering wheel and a second region 35 located below it.
- when it is detected that the user's head key point is higher than the set lower line of the steering wheel, that is, when the head key point is located in the first region 34, it is determined that the current sitting posture of the user does not belong to the first abnormal sitting posture in which the user's body leans forward; likewise, if the user's head key point is located exactly on the lower line of the steering wheel, it is determined that the current sitting posture does not belong to the first abnormal sitting posture.
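The head-position rule above reduces to a one-line check. This is an illustrative sketch (the function and variable names are hypothetical), assuming image coordinates in which y grows downward, so "below the lower line" means a larger y value:

```python
def is_leaning_forward(head_y, wheel_lower_line_y):
    """First abnormal posture: the head key point lies below the set
    lower line of the steering wheel (i.e., in the second region 35).
    A head key point exactly on the line, or above it (first region
    34), is not treated as abnormal, matching the text above."""
    return head_y > wheel_lower_line_y
```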
- the key point information includes left shoulder key point information and right shoulder key point information; based on the relative positional relationship between the key point information of each user and the set reference object, determining the current sitting posture of each user located in the cabin includes:
- if the angle between the line connecting the left shoulder key point and the right shoulder key point of any user and the set seat reference plane is greater than the set first angle threshold, it is determined that the current sitting posture of that user is the second abnormal sitting posture in which the user's body leans sideways.
- the first angle threshold may be set according to actual needs; for example, the first angle threshold may be 45 degrees; and the side against which the user's back rests on the seat (that is, the vertical surface of the seat back) may be set as the seat reference plane. Then, the angle between the line connecting the detected left shoulder key point and right shoulder key point and the set seat reference plane can be determined; when the angle is greater than the set first angle threshold, it is determined that the user's current sitting posture is the second abnormal sitting posture in which the user's body leans sideways; when the angle is less than or equal to the set first angle threshold, it is determined that the user's current sitting posture does not belong to the second abnormal sitting posture.
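The shoulder-line test can be sketched with 2D key points as follows. This is a hedged illustration: the seat reference plane is reduced to a reference direction in the image, the default direction and the 45-degree threshold are example values from the text, and all names are hypothetical:

```python
import math

FIRST_ANGLE_THRESHOLD = 45.0  # example value from the text above

def shoulder_angle(left_xy, right_xy, seat_ref_dir=(1.0, 0.0)):
    """Angle in degrees, folded into [0, 90], between the line joining
    the left and right shoulder key points and a reference direction
    taken from the set seat reference plane (default is illustrative)."""
    dx = right_xy[0] - left_xy[0]
    dy = right_xy[1] - left_xy[1]
    ref = math.atan2(seat_ref_dir[1], seat_ref_dir[0])
    ang = abs(math.degrees(math.atan2(dy, dx) - ref)) % 180.0
    return min(ang, 180.0 - ang)

def is_leaning_sideways(left_xy, right_xy):
    # Second abnormal posture: shoulder line tilted past the threshold.
    return shoulder_angle(left_xy, right_xy) > FIRST_ANGLE_THRESHOLD
```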
- the key point information includes neck key point information and crotch key point information; based on the relative positional relationship between the key point information of each user and the set reference object, determining the current sitting posture of each user located in the cabin includes:
- if the angle between the line connecting the neck key point and the crotch key point of any user and the set horizontal reference plane is smaller than the set second angle threshold, it is determined that the current sitting posture of that user is the third abnormal sitting posture in which the user's body is lying down.
- the set horizontal reference plane may be the seat level plane
- the second angle threshold may be set according to actual needs.
- the angle between the line connecting the neck key point and the crotch key point and the set horizontal reference plane can be determined; when the angle is smaller than the set second angle threshold, it is determined that the user's current sitting posture is the third abnormal sitting posture in which the user's body is lying down; when the angle is greater than or equal to the set second angle threshold, it is determined that the current sitting posture does not belong to the third abnormal sitting posture.
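The lying-down test can likewise be sketched from two key points. In this illustrative fragment the horizontal reference plane is taken as the image x axis, and the 30-degree threshold is a hypothetical value, since the text only says the second angle threshold is set according to actual needs:

```python
import math

SECOND_ANGLE_THRESHOLD = 30.0  # hypothetical value; set according to actual needs

def torso_angle(neck_xy, crotch_xy):
    """Angle in degrees, in [0, 90], between the neck-to-crotch line
    and the horizontal reference plane (here the image x axis)."""
    dx = abs(crotch_xy[0] - neck_xy[0])
    dy = abs(crotch_xy[1] - neck_xy[1])
    return math.degrees(math.atan2(dy, dx))

def is_lying_down(neck_xy, crotch_xy):
    # Third abnormal posture: the smaller the angle, the closer the
    # torso is to horizontal; below the threshold counts as lying down.
    return torso_angle(neck_xy, crotch_xy) < SECOND_ANGLE_THRESHOLD
```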
- the current scene image may also be input into the trained neural network to determine the current sitting posture of each user included in the current scene image.
- identifying the current sitting posture of at least one user located in the vehicle cabin may include:
- Step 1 based on the current scene image, generate an intermediate feature map corresponding to the current scene image;
- Step 2 based on the intermediate feature map, generate detection frame information of each user in the at least one user located in the cabin;
- Step 3 Determine the current sitting posture of each user based on the intermediate feature map and the detection frame information of each user in the at least one user.
- the current scene image may be input into the trained neural network, and the backbone network in the neural network performs multiple convolution processing on the current scene image to generate an intermediate feature map corresponding to the current scene image.
- the detection frame information of each user in the at least one user located in the vehicle cabin may be generated by using the intermediate feature map and the detection frame detection branch network included in the neural network.
- generating the detection frame information of each user in the at least one user located in the vehicle cabin may include:
- A1 perform at least one first convolution process on the intermediate feature map to generate a channel feature map corresponding to the intermediate feature map;
- A2 based on the target channel feature map representing the position in the channel feature map, generate the center point position information of the detection frame of each user located in the vehicle cabin.
- At least one first convolution process may be performed on the intermediate feature map to generate a channel feature map corresponding to the intermediate feature map, and the number of channels corresponding to the channel feature map may be three channels.
- the channel feature map includes a first channel feature map representing the position (this first channel feature map is the target channel feature map), a second channel feature map representing the length information of the detection frame, and a third channel feature map representing the width information of the detection frame.
- the center point position information of the detection frame of each user included in the vehicle cabin can be generated based on the target channel feature map representing the position in the channel feature map, and the size information (length and width) of the detection frame can be determined based on the second channel feature map and the third channel feature map in the channel feature map.
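Reading the detection-frame information out of the three-channel map can be sketched as below. This is an illustrative NumPy fragment with an assumed (3, H, W) layout, where channel 0 is the target (position) channel and channels 1 and 2 carry the regressed length and width; the function name is hypothetical:

```python
import numpy as np

def decode_detection_frame(channel_map, center_xy):
    """channel_map: (3, H, W) array produced by the first convolution
    processing; channel 0 is the position (target channel) feature map,
    channels 1 and 2 hold detection-frame length and width values.
    Returns the detection-frame info at one detected center point."""
    x, y = center_xy
    return {
        "center": (x, y),
        "length": float(channel_map[1, y, x]),  # size read at the center
        "width": float(channel_map[2, y, x]),
    }
```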
- the detection frame information (including the center point position information) corresponding to the user is determined by means of feature map processing, and then the detection frame information is combined with the intermediate feature map corresponding to the current scene image to determine the current sitting posture information of the user.
- the center point position information of the detection frame of each user located in the vehicle cabin is generated, which may include:
- B1 use the activation function to perform feature value conversion processing on each feature value in the target channel feature map representing the position, and generate a converted target channel feature map;
- B2 perform maximum pooling processing on the converted target channel feature map to obtain multiple pooling values and the position index of each pooling value; the position index is used to identify the position of the pooling value in the converted target channel feature map;
- an activation function may be used to perform feature value conversion processing on the target channel feature map to generate a converted target channel feature map, where each feature value in the converted target channel feature map is a value between 0 and 1.
- the activation function may be a sigmoid function. For the feature value of any feature point in the converted target channel feature map, if the feature value is closer to 1, the probability that the feature point corresponding to the feature value belongs to the center point of the user's detection frame is greater.
- the maximum pooling process can be performed on the converted target channel feature map to obtain the pooling value corresponding to each feature position in the target channel feature map and the position index corresponding to each pooling value; the position index can be used to identify the position of the pooling value in the converted target channel feature map.
- among the position indexes obtained at the various feature positions, identical position indexes can be merged, yielding multiple pooling values for the target channel feature map and a position index corresponding to each of those pooling values.
- the preset pooling size and pooling step size may be set according to actual needs. For example, the preset pooling size may be 3×3 and the preset pooling step size may be 1.
- a pooling threshold can be set and the obtained pooling values screened against it, retaining at least one target pooling value greater than the pooling threshold; the center point position information of the detection frame of each user in the vehicle cabin is then generated from the position index corresponding to each target pooling value.
- multi-frame sample images collected by a camera device corresponding to the current scene image may be acquired, and an adaptive algorithm is used to generate a pooling threshold according to the collected multi-frame sample images.
- a 3×3 maximum pooling with a step size of 1 can be performed on the target channel feature map; during pooling, for every 3×3 window of feature points in the target channel feature map, the maximum response value (that is, the pooling value) of those feature points is determined, along with the position index of that maximum response value in the target channel feature map.
- the number of maximum response values is related to the size of the target channel feature map; for example, if the size of the channel feature map is 80×60×3, there are 80×60 maximum response values in total after maximum pooling is performed on the target channel feature map, and for any maximum response value there may be at least one other maximum response value with the same position index.
- the maximum response values with the same position index are combined to obtain M maximum response values and a position index corresponding to each of the M maximum response values.
- each of the M maximum response values is compared with the pooling threshold; when a certain maximum response value is greater than the pooling threshold, the maximum response value is determined as the target pooling value.
- the position index corresponding to the target pooling value is the center point position information of the user's detection frame.
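The pooling procedure above (3×3 window, stride 1, per-window maximum and its position index, merging of duplicate indexes, and threshold screening) can be sketched as follows; the function name, map contents, and threshold value are assumptions for illustration:

```python
import numpy as np

def center_points(conv_map: np.ndarray, pool_threshold: float):
    """3x3 max pooling with stride 1 (same padding): for each position, record
    the maximum value in its 3x3 neighbourhood and the index where that maximum
    occurs, merge duplicate indexes, then keep values above the threshold."""
    h, w = conv_map.shape
    padded = np.pad(conv_map, 1, constant_values=-np.inf)
    peaks = {}  # position index -> pooling value
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            di, dj = np.unravel_index(np.argmax(window), (3, 3))
            idx = (int(i + di - 1), int(j + dj - 1))  # index in the original map
            peaks[idx] = conv_map[idx]                # duplicates merge automatically
    return [(idx, v) for idx, v in peaks.items() if v > pool_threshold]
```

On a converted map with a single strong response, only that response survives the threshold, and its index is the detection-frame center.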
- a second feature value at the feature position matching the center point position information can be selected from the second channel feature map, and the selected second feature value is determined as the length of the user's detection frame;
- similarly, the third feature value at the feature position matching the center point position information is selected from the third channel feature map and determined as the width of the user's detection frame, thereby obtaining the size information of the user's detection frame.
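Reading the box size at the predicted center can be sketched as below; the function name and sample values are assumptions:

```python
import numpy as np

def box_size(length_map: np.ndarray, width_map: np.ndarray, center):
    """Read the detection-frame length and width at the predicted center
    position from the second (length) and third (width) channel feature maps."""
    cy, cx = center
    return float(length_map[cy, cx]), float(width_map[cy, cx])
```

Together with the center point, these two values fully describe the user's detection frame.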
- in this way, the target pooling value belonging to a user's center point can be determined more accurately from the multiple pooling values, and thus the center point information of each user's detection frame can be determined more accurately.
- the current sitting posture of each user may be determined based on the intermediate feature map, the detection frame information of each user in the at least one user, and the posture classification branch network in the trained neural network.
- the current sitting posture of each user is determined based on the intermediate feature map and the detection frame information of each user in the at least one user, including:
- C1: perform at least one second convolution process on the intermediate feature map to generate an N-channel classification feature map corresponding to the intermediate feature map, where the number of channels N of the classification feature map equals the number of sitting posture categories, each channel feature map in the N-channel classification feature map corresponds to one sitting posture category, and N is a positive integer greater than 1;
- C2: for each user, based on the center point position information indicated by the user's detection frame information, extract the N feature values at the matching feature position from the classification feature map, select the maximum feature value, and determine the sitting posture category of the corresponding channel feature map as the user's current sitting posture.
- at least one second convolution process can be performed on the intermediate feature map to generate a classification feature map corresponding to it; the number of channels in the classification feature map is N, where N equals the number of sitting posture categories.
- each channel feature map in the classification feature map corresponds to one sitting posture category. For example, if the sitting posture categories are normal sitting posture, body leaning forward, and body leaning backward, then N is 3; if the categories are normal sitting posture, body leaning forward, body leaning backward, and body lying, then N is 4.
- the sitting posture category can be set according to actual needs, and this is only an exemplary description.
- N feature values at the feature position matching the center point position information can be extracted from the classification feature map; the maximum feature value is selected from the N values, and the sitting posture category of the channel feature map corresponding to that maximum value is determined as the user's current sitting posture.
- for example, suppose the classification feature map is a 3-channel feature map in which the first channel corresponds to normal sitting posture, the second channel to leaning forward, and the third channel to body roll. If the three feature values extracted from the classification feature map for user A are 0.8, 0.5, and 0.2, then the sitting posture category of the channel corresponding to 0.8 (normal sitting posture, the first channel) is determined as user A's current sitting posture.
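The per-user classification step can be sketched as follows; the category names and function name are assumptions matching the example above (N = 3):

```python
import numpy as np

SITTING_POSTURES = ["normal", "leaning forward", "body roll"]  # assumed N=3 categories

def classify_posture(class_map: np.ndarray, center) -> str:
    """class_map has shape (N, H, W); take the N feature values at the box
    center and pick the category whose channel has the maximum value."""
    cy, cx = center
    scores = class_map[:, cy, cx]
    return SITTING_POSTURES[int(np.argmax(scores))]
```

With the values 0.8, 0.5, 0.2 at the center position, the first channel wins and "normal" is returned, matching the worked example.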
- the current sitting posture of each user can be more accurately determined.
- each user's current sitting posture is determined based on a trained neural network.
- a neural network can be trained by the following steps:
- each branch network corresponds to one type of prediction data
- the neural network is trained based on a variety of prediction data and labeled data corresponding to the scene image samples.
- multiple branch networks are set to process the sample feature map and generate multiple types of prediction data corresponding to the scene image samples; training the neural network with these multiple types of prediction data improves the accuracy of the trained network.
- the labeled data may include labeled key point position information, labeled detection frame information, and labeled sitting posture categories.
- the scene image samples may be input into the neural network to be trained, and the backbone network in the neural network to be trained performs at least one convolution process on the scene image samples to generate sample feature maps corresponding to the scene image samples.
- the sample feature maps are respectively input into multiple branch networks in the neural network to be trained to generate multiple types of prediction data corresponding to the scene image samples, wherein each branch network corresponds to one type of prediction data.
- the predicted data may include predicted detection frame information, predicted key position point information, and predicted sitting posture category.
- when the branch networks of the neural network include a detection frame detection branch network, the sample feature map is input to that branch network to generate the predicted detection frame information of at least one user included in the scene image sample.
- when the branch networks of the neural network include a key point detection branch network, the sample feature map is input to that branch network to generate multiple pieces of predicted key position point information for each user included in the scene image samples.
- when the branch networks of the neural network include both the detection frame detection branch network and the posture classification branch network, the sample feature map is input into the posture classification branch network to obtain a classification feature map, and the predicted sitting posture category of each user included in the scene image sample is generated based on the predicted detection frame information of at least one user and the classification feature map.
- multiple branch networks are set to process the sample feature map to obtain multiple types of prediction data, and the neural network is trained through multiple types of prediction data, so that the accuracy of the trained neural network is high.
- the first loss value can be generated based on the predicted detection frame information and the labeled detection frame information; the second loss value is generated based on the predicted key position point information and the labeled key position point information; the third loss value is generated based on the predicted sitting posture category and the labeled sitting posture category; and the neural network is trained based on the first, second, and third loss values.
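The combination of the three loss terms can be sketched as below. The patent does not fix the individual loss functions, so mean absolute error for the two regression terms and cross-entropy for the classification term are assumptions, as are all names:

```python
import numpy as np

def total_loss(pred_box, gt_box, pred_kpts, gt_kpts, pred_cls, gt_cls_index):
    """Illustrative combination of the three loss terms used for training."""
    # first loss: predicted vs. labeled detection frame (assumed L1)
    l_box = np.abs(np.asarray(pred_box) - np.asarray(gt_box)).mean()
    # second loss: predicted vs. labeled key position points (assumed L1)
    l_kpt = np.abs(np.asarray(pred_kpts) - np.asarray(gt_kpts)).mean()
    # third loss: predicted vs. labeled sitting posture category (assumed CE)
    probs = np.exp(pred_cls) / np.exp(pred_cls).sum()  # softmax over logits
    l_cls = -np.log(probs[gt_cls_index])
    return l_box + l_kpt + l_cls  # assumed equal weighting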
- the abnormal sitting posture type refers to a sitting posture type that has a safety risk.
- the abnormal sitting posture types may include at least one of the following: a first abnormal sitting posture in which the user leans forward, a second abnormal sitting posture in which the user leans sideways, and a third abnormal sitting posture in which the user lies laterally.
- the abnormal sitting posture type may also include other sitting postures with safety risks, and this is only an exemplary description.
- if the user's current sitting posture is a normal sitting posture, it is determined that the user is not in an abnormal sitting posture; if the user's current sitting posture is leaning forward, it is determined that the user is in an abnormal sitting posture.
- since the abnormal sitting posture types are rich, a variety of abnormal sitting postures can be covered more comprehensively, helping ensure the safety of the user's ride.
- warning information may be generated based on the abnormal sitting posture type to which the user's current sitting posture belongs, wherein the warning information may be played in the form of voice.
- the generated warning message may be "Dangerous, leaning forward, please adjust the sitting posture".
- each position in the cabin can also be given an identifier, for example co-pilot position, left rear position, right rear position, and so on; the position identifier corresponding to each user is determined based on the current scene image. When it is determined that a user's current sitting posture belongs to an abnormal sitting posture type, warning information may be generated based on both the abnormal sitting posture type and the position identifier. For example, if the current sitting posture of user A is leaning forward and the position identifier corresponding to user A is the co-pilot position, the generated warning information may be "The passenger in the co-pilot position is leaning forward, please adjust the sitting posture".
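Building the warning text from the position identifier and abnormal posture type can be sketched as follows; the seat labels and function name are assumptions:

```python
SEAT_NAMES = {0: "driver seat", 1: "co-pilot position",
              2: "left rear position", 3: "right rear position"}  # assumed labels

def warning_message(seat_id: int, abnormal_type: str) -> str:
    """Compose the voice warning from the seat identifier and the abnormal
    sitting posture type, mirroring the example in the text."""
    return (f"The passenger in the {SEAT_NAMES[seat_id]} is {abnormal_type}, "
            f"please adjust the sitting posture")
```

For user A in the example, `warning_message(1, "leaning forward")` reproduces the warning quoted above.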
- warning information may be generated based on the abnormal sitting posture type to which the user's current sitting posture belongs, so as to warn the user and reduce the probability of danger to the user.
- the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- an embodiment of the present disclosure also provides an abnormal sitting posture recognition device 400.
- as shown in FIG. 4, which is a schematic diagram of the architecture of the abnormal sitting posture recognition device provided by an embodiment of the present disclosure, the device includes an acquisition module 401, a recognition module 402, and a determining module 403, specifically:
- an acquisition module 401 configured to acquire the current scene image in the vehicle cabin
- An identification module 402 configured to identify the current sitting posture of at least one user located in the vehicle cabin based on the current scene image
- the determining module 403 is configured to issue a warning message when the current sitting posture of the user belongs to an abnormal sitting posture type, wherein the abnormal sitting posture type refers to a sitting posture type that has a safety risk.
- the abnormal sitting posture types include at least one of the following: a first abnormal sitting posture in which the user leans forward, a second abnormal sitting posture in which the user leans sideways, and a third abnormal sitting posture in which the user lies laterally.
- the identifying module 402 when identifying the current sitting posture of at least one user located in the vehicle cabin based on the current scene image, is configured to:
- determine, based on the current scene image, the key point information of at least one user in the current scene image, and determine the current sitting posture of each user located in the vehicle cabin based on the relative positional relationship between each user's key point information and a set reference object.
- the key point information includes head key point information
- when determining the current sitting posture of each user in the cabin based on the relative positional relationship between each user's key point information and the set reference object, the identification module 402 is configured to:
- if the head key point information of any user is lower than the set lower edge line of the steering wheel, determine that the user's current sitting posture is the first abnormal sitting posture in which the user leans forward.
- the key point information includes left shoulder key point information and right shoulder key point information; when determining the current sitting posture of each user located in the cabin based on the relative positional relationship between each user's key point information and the set reference object, the identification module 402 is configured to:
- if the angle between the line connecting any user's left shoulder key point and right shoulder key point and the set seat reference surface is greater than the set first angle threshold, determine that the user's current sitting posture is the second abnormal sitting posture in which the user's body rolls sideways.
- the key point information includes neck key point information and crotch key point information;
- when determining the current sitting posture of each user located in the cabin based on the relative positional relationship between each user's key point information and the set reference object, the identification module 402 is configured to:
- if the angle between the line connecting any user's neck key point and crotch key point and the set horizontal reference plane is smaller than the set second angle threshold, determine that the user's current sitting posture is the third abnormal sitting posture in which the user lies laterally.
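The three key-point rules above can be sketched as follows. The threshold values, function names, and the simplification of measuring both angles against the horizontal (the patent uses a seat reference surface for the shoulder rule) are assumptions:

```python
import math

FIRST_ANGLE_THRESHOLD = 30.0   # degrees, assumed value
SECOND_ANGLE_THRESHOLD = 45.0  # degrees, assumed value

def line_angle_deg(p1, p2) -> float:
    """Unsigned angle between the line p1-p2 and the horizontal, in degrees,
    folded into [0, 90]."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = abs(math.degrees(math.atan2(dy, dx)))
    return 180.0 - angle if angle > 90.0 else angle

def classify_by_keypoints(head_y, wheel_lower_edge_y,
                          left_shoulder, right_shoulder,
                          neck, crotch) -> str:
    """Rule 1: head below the steering-wheel lower edge -> leaning forward.
    Rule 2: shoulder line tilted past the first threshold -> body roll.
    Rule 3: neck-crotch line flatter than the second threshold -> lying down."""
    if head_y > wheel_lower_edge_y:  # image y grows downward: larger y = lower
        return "first abnormal sitting posture (leaning forward)"
    if line_angle_deg(left_shoulder, right_shoulder) > FIRST_ANGLE_THRESHOLD:
        return "second abnormal sitting posture (body roll)"
    if line_angle_deg(neck, crotch) < SECOND_ANGLE_THRESHOLD:
        return "third abnormal sitting posture (lying down)"
    return "normal sitting posture"
```

An upright occupant has a near-horizontal shoulder line and a near-vertical neck-crotch line, so none of the three rules fires and the posture is classified as normal.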
- the identifying module 402 when identifying the current sitting posture of at least one user located in the vehicle cabin based on the current scene image, is configured to:
- generate, based on the current scene image, an intermediate feature map corresponding to the current scene image; generate, based on the intermediate feature map, the detection frame information of each user in the at least one user located in the vehicle cabin; and determine the current sitting posture of each user based on the intermediate feature map and the detection frame information of each user in the at least one user.
- the identification module 402 when generating the detection frame information of each user in the at least one user located in the vehicle cabin based on the intermediate feature map, is used for:
- perform at least one first convolution process on the intermediate feature map to generate a channel feature map corresponding to the intermediate feature map, and generate the center point position information of the detection frame of each user located in the vehicle cabin based on the target channel feature map representing position in the channel feature map.
- the identification module 402 generates the center point position information of the detection frame of each user located in the vehicle cabin based on the target channel feature map representing the location in the channel feature map. , for:
- use an activation function to perform feature value conversion on each feature value in the target channel feature map representing position, generating a converted target channel feature map; perform maximum pooling on the converted target channel feature map according to a preset pooling size and pooling step to obtain multiple pooling values and a position index corresponding to each pooling value, the position index identifying the position of the pooled value in the converted target channel feature map; determine, based on each pooling value and a pooling threshold, the target pooling values belonging to the center point of at least one user's detection frame; and generate the center point position information of the detection frame of each user located in the vehicle cabin based on the position index corresponding to each target pooling value.
- the identifying module 402 when determining the current sitting posture of each user based on the intermediate feature map and the detection frame information of each user in the at least one user, is used to:
- perform at least one second convolution process on the intermediate feature map to generate an N-channel classification feature map, where N equals the number of sitting posture categories; for each user, based on the center point position information indicated by the user's detection frame information, extract N feature values at the matching feature position from the classification feature map; select the maximum feature value from the N values and determine the sitting posture category of the channel feature map corresponding to that maximum value as the user's current sitting posture.
- the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments.
- an embodiment of the present disclosure also provides an electronic device 500 .
- a schematic structural diagram of an electronic device 500 provided by an embodiment of the present disclosure includes a processor 501 , a memory 502 , and a bus 503 .
- the memory 502 is used to store execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021 temporarily stores operation data in the processor 501 and data exchanged with the external memory 5022, such as a hard disk; the processor 501 exchanges data with the external memory 5022 through the internal memory 5021.
- the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
- acquire the current scene image in the vehicle cabin; identify the current sitting posture of at least one user located in the vehicle cabin based on the current scene image; and issue warning information when the user's current sitting posture belongs to an abnormal sitting posture type, where the abnormal sitting posture type refers to a sitting posture type that poses a safety risk.
- an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the abnormal sitting posture recognition method described in the above method embodiments are performed.
- embodiments of the present disclosure also provide a computer program product for the abnormal sitting posture recognition method, which stores computer-readable code; when the code runs, the processor of the electronic device executes it to implement the abnormal sitting posture recognition method provided by any of the above embodiments.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
- the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage medium includes: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
- the present disclosure provides an abnormal sitting posture recognition method, device, electronic device, storage medium, and program, in which a current scene image in a vehicle cabin is acquired; the current sitting posture of at least one user located in the vehicle cabin is identified based on the current scene image; and if a user's current sitting posture belongs to an abnormal sitting posture type, warning information is issued, where the abnormal sitting posture type includes sitting posture types with a safety risk.
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Psychiatry (AREA)
- Emergency Management (AREA)
- Social Psychology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Emergency Alarm Devices (AREA)
- Alarm Systems (AREA)
Abstract
Description
Claims (14)
- An abnormal sitting posture recognition method, comprising: acquiring a current scene image in a vehicle cabin; identifying, based on the current scene image, the current sitting posture of at least one user located in the vehicle cabin; and issuing warning information when a user's current sitting posture belongs to an abnormal sitting posture type, wherein the abnormal sitting posture type includes sitting posture types that pose a safety risk.
- The method according to claim 1, wherein the abnormal sitting posture type includes at least one of the following: a first abnormal sitting posture in which the user's body leans forward, a second abnormal sitting posture in which the user's body rolls sideways, and a third abnormal sitting posture in which the user's body lies laterally.
- The method according to claim 2, wherein identifying the current sitting posture of at least one user located in the vehicle cabin based on the current scene image comprises: determining key point information of at least one user in the current scene image based on the current scene image; and determining the current sitting posture of each user located in the vehicle cabin based on a relative positional relationship between each user's key point information and a set reference object.
- The method according to claim 3, wherein the key point information includes head key point information, and determining the current sitting posture of each user located in the vehicle cabin based on the relative positional relationship between each user's key point information and the set reference object comprises: if any user's head key point information is lower than a set lower edge line of the steering wheel, determining that the user's current sitting posture is the first abnormal sitting posture in which the user's body leans forward.
- The method according to claim 3, wherein the key point information includes left shoulder key point information and right shoulder key point information, and determining the current sitting posture of each user located in the vehicle cabin based on the relative positional relationship between each user's key point information and the set reference object comprises: if the angle between the line connecting any user's left shoulder key point and right shoulder key point and a set seat reference surface is greater than a set first angle threshold, determining that the user's current sitting posture is the second abnormal sitting posture in which the user's body rolls sideways.
- The method according to claim 3, wherein the key point information includes neck key point information and crotch key point information, and determining the current sitting posture of each user located in the vehicle cabin based on the relative positional relationship between each user's key point information and the set reference object comprises: if the angle between the line connecting any user's neck key point and crotch key point and a set horizontal reference plane is smaller than a set second angle threshold, determining that the user's current sitting posture is the third abnormal sitting posture in which the user's body lies laterally.
- The method according to claim 1, wherein identifying the current sitting posture of at least one user located in the vehicle cabin based on the current scene image comprises: generating, based on the current scene image, an intermediate feature map corresponding to the current scene image; generating, based on the intermediate feature map, detection frame information of each user in the at least one user located in the vehicle cabin; and determining the current sitting posture of each user based on the intermediate feature map and the detection frame information of each user in the at least one user.
- The method according to claim 7, wherein generating the detection frame information of each user in the at least one user located in the vehicle cabin based on the intermediate feature map comprises: performing at least one first convolution process on the intermediate feature map to generate a channel feature map corresponding to the intermediate feature map; and generating center point position information of the detection frame of each user located in the vehicle cabin based on a target channel feature map that represents position in the channel feature map.
- The method according to claim 8, wherein generating the center point position information of the detection frame of each user located in the vehicle cabin based on the target channel feature map representing position in the channel feature map comprises: using an activation function to perform feature value conversion on each feature value in the target channel feature map representing position, generating a converted target channel feature map; performing maximum pooling on the converted target channel feature map according to a preset pooling size and pooling step to obtain multiple pooling values and a position index corresponding to each of the multiple pooling values, the position index being used to identify the position of the pooled value in the converted target channel feature map; determining, based on each pooling value and a pooling threshold, target pooling values belonging to the center point of at least one user's detection frame from the multiple pooling values; and generating the center point position information of the detection frame of each user located in the vehicle cabin based on the position index corresponding to each target pooling value.
- The method according to claim 7, wherein determining the current sitting posture of each user based on the intermediate feature map and the detection frame information of each user in the at least one user comprises: performing at least one second convolution process on the intermediate feature map to generate an N-channel classification feature map corresponding to the intermediate feature map, wherein the number of channels N of the classification feature map equals the number of sitting posture categories, each channel feature map in the N-channel classification feature map corresponds to one sitting posture category, and N is a positive integer greater than 1; and, for each user, extracting, based on the center point position information indicated by the user's detection frame information, N feature values at the feature position matching the center point position information from the classification feature map, selecting the maximum feature value from the N feature values, and determining the sitting posture category of the channel feature map corresponding to the maximum feature value as the user's current sitting posture.
- An abnormal sitting posture recognition device, comprising: an acquisition module configured to acquire a current scene image in a vehicle cabin; an identification module configured to identify, based on the current scene image, the current sitting posture of at least one user located in the vehicle cabin; and a determination module configured to issue warning information when a user's current sitting posture belongs to an abnormal sitting posture type, wherein the abnormal sitting posture type refers to a sitting posture type that poses a safety risk.
- An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the abnormal sitting posture recognition method according to any one of claims 1 to 10 are performed.
- A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is run by a processor, the steps of the abnormal sitting posture recognition method according to any one of claims 1 to 10 are performed.
- A computer program comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor of the electronic device executes the code to implement the abnormal sitting posture recognition method according to any one of claims 1 to 10.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217039206A KR20220019097A (ko) | 2020-08-07 | 2020-12-14 | 이상 착석 자세 식별 방법, 장치, 전자 기기, 저장 매체 및 프로그램 |
JP2021571346A JP2022547246A (ja) | 2020-08-07 | 2020-12-14 | 非正規着座姿勢の認識方法、装置、電子機器、記憶媒体及びプログラム |
US17/536,840 US20220084316A1 (en) | 2020-08-07 | 2021-11-29 | Method and electronic device for recognizing abnormal sitting posture, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010790210.0 | 2020-08-07 | ||
CN202010790210.0A CN111931640B (zh) | 2020-08-07 | 2020-08-07 | 异常坐姿识别方法、装置、电子设备及存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/536,840 Continuation US20220084316A1 (en) | 2020-08-07 | 2021-11-29 | Method and electronic device for recognizing abnormal sitting posture, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022027895A1 true WO2022027895A1 (zh) | 2022-02-10 |
Family
ID=73307054
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/136267 WO2022027895A1 (zh) | 2020-08-07 | 2020-12-14 | 异常坐姿识别方法、装置、电子设备、存储介质及程序 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220084316A1 (zh) |
JP (1) | JP2022547246A (zh) |
KR (1) | KR20220019097A (zh) |
CN (1) | CN111931640B (zh) |
WO (1) | WO2022027895A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114550099A (zh) * | 2022-03-01 | 2022-05-27 | 常莫凡 | 基于数字孪生的综合健康管理系统 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931640B (zh) * | 2020-08-07 | 2022-06-10 | 上海商汤临港智能科技有限公司 | 异常坐姿识别方法、装置、电子设备及存储介质 |
CN112613440A (zh) * | 2020-12-29 | 2021-04-06 | 北京市商汤科技开发有限公司 | 一种姿态检测的方法、装置、电子设备及存储介质 |
CN112712053B (zh) * | 2021-01-14 | 2024-05-28 | 深圳数联天下智能科技有限公司 | 一种坐姿信息的生成方法、装置、终端设备及存储介质 |
CN112733740B (zh) * | 2021-01-14 | 2024-05-28 | 深圳数联天下智能科技有限公司 | 一种注意力信息的生成方法、装置、终端设备及存储介质 |
US11851080B2 (en) * | 2021-02-03 | 2023-12-26 | Magna Mirrors Of America, Inc. | Vehicular driver monitoring system with posture detection and alert |
US20220319045A1 (en) * | 2021-04-01 | 2022-10-06 | MohammadSado Lulu | System For Posture Detection Using The Camera Of A Hand-Held Device |
KR102513042B1 (ko) * | 2022-11-30 | 2023-03-23 | 주식회사 알에스팀 | 버스 안전사고 예방을 위한 이동 감지 시스템 |
CN115877899B (zh) * | 2023-02-08 | 2023-05-09 | 北京康桥诚品科技有限公司 | 一种漂浮舱内的液体控制方法、装置、漂浮舱和介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985259A (zh) * | 2018-08-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | 人体动作识别方法和装置 |
CN109389068A (zh) * | 2018-09-28 | 2019-02-26 | 百度在线网络技术(北京)有限公司 | 用于识别驾驶行为的方法和装置 |
CN110348335A (zh) * | 2019-06-25 | 2019-10-18 | 平安科技(深圳)有限公司 | 行为识别的方法、装置、终端设备及存储介质 |
CN110517261A (zh) * | 2019-08-30 | 2019-11-29 | 上海眼控科技股份有限公司 | 安全带状态检测方法、装置、计算机设备和存储介质 |
WO2020063753A1 (zh) * | 2018-09-27 | 2020-04-02 | 北京市商汤科技开发有限公司 | 动作识别、驾驶动作分析方法和装置、电子设备 |
CN111301280A (zh) * | 2018-12-11 | 2020-06-19 | 北京嘀嘀无限科技发展有限公司 | 一种危险状态识别方法及装置 |
CN111931640A (zh) * | 2020-08-07 | 2020-11-13 | 上海商汤临港智能科技有限公司 | 异常坐姿识别方法、装置、电子设备及存储介质 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567743A (zh) * | 2011-12-20 | 2012-07-11 | 东南大学 | 基于视频图像的驾驶员姿态自动识别方法 |
JP6372388B2 (ja) * | 2014-06-23 | 2018-08-15 | 株式会社デンソー | ドライバの運転不能状態検出装置 |
JP6507015B2 (ja) * | 2015-04-08 | 2019-04-24 | 日野自動車株式会社 | 運転者状態判定装置 |
US10318831B2 (en) * | 2016-07-21 | 2019-06-11 | Gestigon Gmbh | Method and system for monitoring the status of the driver of a vehicle |
JP2019034576A (ja) * | 2017-08-10 | 2019-03-07 | オムロン株式会社 | 運転者状態把握装置、運転者状態把握システム、及び運転者状態把握方法 |
CN107730846A (zh) * | 2017-10-25 | 2018-02-23 | 深圳纳富特科技有限公司 | 坐姿矫正的提醒方法、装置及计算机可读存储介质 |
JP7051526B2 (ja) * | 2018-03-26 | 2022-04-11 | 本田技研工業株式会社 | 車両用制御装置 |
JP7102850B2 (ja) * | 2018-03-28 | 2022-07-20 | マツダ株式会社 | ドライバ状態判定装置 |
CN109409331A (zh) * | 2018-11-27 | 2019-03-01 | 惠州华阳通用电子有限公司 | 一种基于雷达的防疲劳驾驶方法 |
JP7259324B2 (ja) * | 2018-12-27 | 2023-04-18 | 株式会社アイシン | 室内監視装置 |
CN111414780B (zh) * | 2019-01-04 | 2023-08-01 | 卓望数码技术(深圳)有限公司 | 一种坐姿实时智能判别方法、系统、设备及存储介质 |
CN109910904B (zh) * | 2019-03-22 | 2021-03-09 | 深圳市澳颂泰科技有限公司 | 一种驾驶行为与车辆驾驶姿态识别系统 |
CN111439170B (zh) * | 2020-03-30 | 2021-09-17 | 上海商汤临港智能科技有限公司 | 儿童状态检测方法及装置、电子设备、存储介质 |
2020
- 2020-08-07 CN CN202010790210.0A patent/CN111931640B/zh active Active
- 2020-12-14 KR KR1020217039206A patent/KR20220019097A/ko unknown
- 2020-12-14 JP JP2021571346A patent/JP2022547246A/ja active Pending
- 2020-12-14 WO PCT/CN2020/136267 patent/WO2022027895A1/zh active Application Filing
2021
- 2021-11-29 US US17/536,840 patent/US20220084316A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985259A (zh) * | 2018-08-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | 人体动作识别方法和装置 |
WO2020063753A1 (zh) * | 2018-09-27 | 2020-04-02 | 北京市商汤科技开发有限公司 | 动作识别、驾驶动作分析方法和装置、电子设备 |
CN109389068A (zh) * | 2018-09-28 | 2019-02-26 | 百度在线网络技术(北京)有限公司 | 用于识别驾驶行为的方法和装置 |
CN111301280A (zh) * | 2018-12-11 | 2020-06-19 | 北京嘀嘀无限科技发展有限公司 | 一种危险状态识别方法及装置 |
CN110348335A (zh) * | 2019-06-25 | 2019-10-18 | 平安科技(深圳)有限公司 | 行为识别的方法、装置、终端设备及存储介质 |
CN110517261A (zh) * | 2019-08-30 | 2019-11-29 | 上海眼控科技股份有限公司 | 安全带状态检测方法、装置、计算机设备和存储介质 |
CN111931640A (zh) * | 2020-08-07 | 2020-11-13 | 上海商汤临港智能科技有限公司 | 异常坐姿识别方法、装置、电子设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
LIU MIN , PAN, LIAN, ZENG XIN-HUA, ZHU ZE-DE: "Sitting Behavior Recognition Based on MTCNN", COMPUTER ENGINEERING AND DESIGN, vol. 40, no. 11, 30 November 2019 (2019-11-30), XP055894789 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114550099A (zh) * | 2022-03-01 | 2022-05-27 | 常莫凡 | 基于数字孪生的综合健康管理系统 |
Also Published As
Publication number | Publication date |
---|---|
JP2022547246A (ja) | 2022-11-11 |
CN111931640A (zh) | 2020-11-13 |
US20220084316A1 (en) | 2022-03-17 |
CN111931640B (zh) | 2022-06-10 |
KR20220019097A (ko) | 2022-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022027895A1 (zh) | 异常坐姿识别方法、装置、电子设备、存储介质及程序 | |
US11535280B2 (en) | Method and device for determining an estimate of the capability of a vehicle driver to take over control of a vehicle | |
US10679078B2 (en) | Helmet wearing determination method, helmet wearing determination system, helmet wearing determination apparatus, and program | |
US11776083B2 (en) | Passenger-related item loss mitigation | |
WO2020078461A1 (zh) | 车辆座椅智能调节方法和装置、车辆、电子设备、介质 | |
JP2020123352A (ja) | 人の状態認識を基盤として身体部位の長さ及び顔情報を使用して乗客の身長及び体重を予測する方法及び装置 | |
CN110826370B (zh) | 车内人员的身份识别方法、装置、车辆及存储介质 | |
CN111439170B (zh) | 儿童状态检测方法及装置、电子设备、存储介质 | |
WO2022027893A1 (zh) | 安全带佩戴检测方法、装置、电子设备、存储介质及程序 | |
WO2022027894A1 (zh) | 驾驶员行为检测方法、装置、电子设备、存储介质和程序 | |
WO2013179588A1 (ja) | 人検出装置 | |
US20170004354A1 (en) | Determination device, determination method, and non-transitory storage medium | |
KR20130016606A (ko) | 사용자 적응형 특이행동 검출기반의 안전운전보조시스템 | |
US11417108B2 (en) | Two-wheel vehicle riding person number determination method, two-wheel vehicle riding person number determination system, two-wheel vehicle riding person number determination apparatus, and program | |
CN115331205A (zh) | 一种云边协同的驾驶员疲劳检测系统 | |
KR20190134909A (ko) | 주행상황 판단 정보 기반 운전자 상태 인식 장치 및 방법 | |
KR101350882B1 (ko) | 영상 분석 서버 | |
CN109165607B (zh) | 一种基于深度学习的驾驶员手持电话检测方法 | |
KR20150067679A (ko) | 차량용 제스처 인식 시스템 및 그 방법 | |
JP2021034739A (ja) | 事象発生推定のための学習データ生成方法・プログラム、学習モデル及び事象発生推定装置 | |
CN112541425A (zh) | 情绪检测方法、装置、介质及电子设备 | |
US11138755B2 (en) | Analysis apparatus, analysis method, and non transitory storage medium | |
WO2021262166A1 (en) | Operator evaluation and vehicle control based on eyewear data | |
US20230326069A1 (en) | Method and apparatus for determining a gaze direction of a user | |
Imteaj et al. | Enhancing Road Safety Through Cost-Effective, Real-Time Monitoring of Driver Awareness with Resource-Constrained IoT Devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021571346 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20948150 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.07.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20948150 Country of ref document: EP Kind code of ref document: A1 |