CN112954486B - Vehicle-mounted video trace processing method based on sight attention - Google Patents
- Publication number
- CN112954486B (granted publication of application CN202110359280.5A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- display
- attention
- label
- mark point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/70—Multimodal biometrics, e.g. combining information from different biometric modalities
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
Abstract
The invention discloses a vehicle-mounted video trace processing method based on sight-line attention, comprising the following steps: (1) a vehicle-mounted cinema uses the central control screen as a display and plays video for the occupants; (2) the driving state of the autonomous vehicle and the head pose and sight-line direction of the occupant are monitored; (3) the occupant's attention is classified according to the states from step (2) and marked with a primary or secondary label; (4) a secondary label only establishes a mark point, while a primary label also records the duration of the current label; (5) monitoring continues until the current label is eliminated; (6) if the eliminated current label is a primary label, the mark point on the progress bar flickers and the progress missed during the label's duration is displayed on the progress bar; if it is a secondary label, only the mark point is kept on the progress bar. The method prevents occupants from missing parts of a video or live broadcast because of attention transfer or emergency reactions caused by environmental events, improving the occupant experience.
Description
Technical Field
The invention relates to a vehicle-mounted video trace processing method, in particular to a vehicle-mounted video trace processing method based on sight-line attention.
Background
With the development of the internet and automatic driving technology, the scenarios in which passengers use video software have diversified: passengers may use mobile phones and tablet computers on public transport, or use playback software on vehicle-mounted entertainment systems.
An unmanned vehicle is an intelligent vehicle that relies on an in-vehicle computer system to drive itself without a human driver. To improve the functionality and comfort of unmanned driving, Ford filed a patent titled "Autonomous vehicle entertainment system" with the United States Patent and Trademark Office. That patent document shows that an unmanned vehicle can be converted into a mobile cinema: a retractable projector screen covers the windshield and the front seats slide backwards so that passengers can relax and watch movies. In addition, Chinese patent application No. 201810110277.8, titled "A miniature vehicle-mounted cinema taking an automobile windshield as a display", also describes a vehicle-mounted cinema that uses the windshield as the display. Vehicle-mounted video playback during unmanned driving is therefore a development direction for automobiles. The central control display can also serve as a video player.
Vehicle-mounted video playback differs from ordinary playback: the video is watched while the vehicle is moving, road conditions are complex, and emergencies such as braking, jolting, and sudden acceleration or deceleration occur. When a passenger watches video in such a relatively complex mobile scene, attention transfer or emergency reactions caused by environmental events often make the passenger miss part of the video or live broadcast, so important information is lost or the viewing experience is degraded.
For an automatic driving control device, refer to Chinese patent application No. 201680004772.X, an invention patent titled "Automatic driving control device". It describes an apparatus that can switch between a manual driving mode, in which driving operation by a vehicle occupant is required, and an automatic control mode, in which it is not; an automatic driving control unit controls the vehicle in the automatic control mode. In addition, patent No. CN105185112A, "Method and system for analyzing and identifying driving behavior", and patent publication No. CN202661066U, "A method for judging the motion state of an automobile", describe early-warning methods for vehicle attitude based on an accelerometer and a gyroscope.
Head pose estimation belongs to the fields of machine learning and computer vision. First, a number of head images are collected and the head pose corresponding to each image is recorded; then a network model is constructed and trained with a neural-network learning method to obtain a trained model. The head poses in the images can be classified into user-defined categories before training, so that the trained model takes a head image as input and outputs the corresponding head pose class.
Sight-line estimation (gaze estimation), in the broad sense, refers to research related to the eyeball, eye movement, line of sight, and so on; it can be applied in driving assistance to detect whether a driver is fatigued and whether attention is focused. At present, most existing sight-line estimation methods build a gaze estimation model on a deep neural network; see, for example, Chinese patent application Nos. 202010871924.4, 201810417422.7, and 201710612923.6.
Disclosure of Invention
The invention aims to provide a vehicle-mounted video trace processing method based on sight-line attention that solves the above problems and prevents an occupant from missing a video or live broadcast because of attention transfer or an emergency reaction caused by an environmental event.
To achieve the above purpose, the technical scheme adopted by the invention is as follows: a vehicle-mounted video trace processing method based on sight-line attention, comprising the following steps:
(1) the vehicle-mounted cinema takes the central control screen as a display and plays videos for passengers;
(2) monitoring the autonomous vehicle and the occupants;
monitoring an automatic driving vehicle to obtain the driving state of the automatic driving vehicle, wherein the driving state comprises a stable state and a non-stable state during automatic driving;
monitoring the head pose and sight-line direction of the occupant, wherein the head pose is either facing the display or deviating from the display, and the sight-line direction is either watching the display or not watching the display;
(3) classifying the attention of the occupant;
when the driving state is a non-steady state, judging the head posture:
if the head posture is off the display, the attention classification result is S1;
if the head posture is towards the display, judging the sight line direction of the passenger, if the display is not watched, the attention classification result is S2, and if the display is watched, the attention classification result is S3;
when the driving state is a steady state, judging the head posture:
if the head posture is off the display, the attention classification result is S4;
if the head posture is towards the display, judging the sight line direction of the passenger, if the passenger does not watch the display, the attention classification result is S5, if the passenger watches the display, the attention classification result is S6;
(4) marking S1, S2 and S4 as primary labels and S3 and S5 as secondary labels, establishing a mark point on the video playback progress bar at the start time of each label and acquiring the time of the mark point, and, if the label is a primary label, simultaneously recording the duration of the current label (a code sketch of this classification follows step (6));
(5) continuing to monitor the autonomous vehicle and the occupant until the current S1-S5 label is eliminated and the occupant's current attention classification result is S6;
(6) if the eliminated current label is a primary label, the mark point on the progress bar flickers and the progress missed during the duration of that label is displayed on the progress bar; if the eliminated current label is a secondary label, only the mark point is kept on the progress bar.
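The decision logic of steps (2)-(4) reduces to a small decision tree over three binary signals. Below is a minimal Python sketch of that tree; the names `Attention`, `classify_attention`, `PRIMARY`, and `SECONDARY` are illustrative, not taken from the patent:

```python
from enum import Enum

class Attention(Enum):
    S1 = 1  # non-steady driving, head deviating from display
    S2 = 2  # non-steady driving, head toward display, not watching
    S3 = 3  # non-steady driving, watching display
    S4 = 4  # steady driving, head deviating from display
    S5 = 5  # steady driving, head toward display, not watching
    S6 = 6  # steady driving, watching display

def classify_attention(steady: bool, head_toward_display: bool,
                       watching_display: bool) -> Attention:
    """Decision tree of step (3)."""
    if not steady:
        if not head_toward_display:
            return Attention.S1
        return Attention.S3 if watching_display else Attention.S2
    if not head_toward_display:
        return Attention.S4
    return Attention.S6 if watching_display else Attention.S5

# Step (4): primary labels record a mark point plus a duration;
# secondary labels record only a mark point.
PRIMARY = {Attention.S1, Attention.S2, Attention.S4}
SECONDARY = {Attention.S3, Attention.S5}
```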
Preferably, the method comprises the following steps: monitoring an autonomous vehicle using an autonomous driving control device, the autonomous driving control device comprising:
an automatic driving control unit: the automatic driving control system is used for carrying out automatic driving control on the vehicle;
a motion state determination unit: the system comprises a gyroscope sensor, a three-axis acceleration sensor and a processor, wherein the gyroscope sensor is used for acquiring the rotation angular velocity information of a vehicle in real time, and the three-axis acceleration sensor is used for acquiring the acceleration information of the vehicle in real time; the processor is used for calculating the rotation angular velocity variation w and the acceleration variation a in each second according to the rotation angular velocity information and the acceleration information;
presetting a rotation angular velocity variation threshold w1 and an acceleration variation threshold a1: if w < w1 and a < a1, the driving state is steady; all other cases are non-steady (see the sketch below).
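A minimal sketch of this steady-state test, assuming the per-second variations have already been computed by the processor; the concrete threshold values here are illustrative placeholders, not from the patent:

```python
W1 = 0.1  # rotation angular velocity variation threshold (illustrative value)
A1 = 0.5  # acceleration variation threshold (illustrative value)

def driving_state_is_steady(w: float, a: float,
                            w1: float = W1, a1: float = A1) -> bool:
    # Steady only if BOTH per-second variations stay below their thresholds;
    # every other combination counts as non-steady.
    return w < w1 and a < a1
```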
Preferably, the method comprises the following steps: monitoring the head posture and the sight direction of the passenger, specifically comprising the following steps:
(1) acquiring a head image of a passenger in the vehicle through a camera;
(2) identifying a predetermined region of the face from the head image and extracting facial feature points from the predetermined region of the face;
(3) analyzing the facial feature points with a first model to obtain the occupant's head pose, the first model being obtained through machine-learning training on multiple groups of data, each piece of data comprising facial feature points labeled with the head pose;
(4) analyzing the facial feature points with a second model to obtain the occupant's sight-line direction, the second model being obtained through machine-learning training on multiple groups of data, each piece of data comprising facial feature points labeled with the sight-line direction (a pipeline sketch follows).
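A hedged sketch of this per-frame monitoring pipeline. `detect_face_region` and `extract_landmarks` are placeholder stubs, and the two models are assumed to expose a scikit-learn-style `predict`; none of these names come from the patent:

```python
import numpy as np

def detect_face_region(frame: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a face detector here (step (2)).
    return frame

def extract_landmarks(face_region: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would extract facial feature points here,
    # e.g. 68 (x, y) landmarks flattened into one vector.
    return np.zeros(136)

def monitor_occupant(frame: np.ndarray, head_pose_model, gaze_model):
    """Frame -> face region -> feature points -> first and second models."""
    landmarks = extract_landmarks(detect_face_region(frame))
    head_toward = bool(head_pose_model.predict([landmarks])[0])  # first model
    watching = bool(gaze_model.predict([landmarks])[0])          # second model
    return head_toward, watching
```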
Preferably, the method further comprises step (7): if the occupant needs to return to a mark point for replay, the corresponding mark point is selected and the video returns to that mark point.
Preferably, in step (7), the corresponding mark point is selected and the video returned to it by tapping the mark point on the screen, touching the mark point and sliding, or double-clicking the mark point.
Compared with the prior art, the invention has the advantages that:
(1) The method mainly targets a driver watching video during unmanned driving, or passengers watching video during ordinary driving. During driving, occupants often experience attention transfer or emergency reactions, for example with complex road conditions, vehicle braking, jolts, or sudden acceleration/deceleration, and so miss part of the video or live broadcast.
(2) The invention monitors the head pose and sight-line direction of occupants in combination with the driving state of the vehicle, classifies occupant attention during actual driving, and operates on the progress bar according to the classification. Monitoring head pose and sight-line direction meets the practical needs of many scenarios and applies to many conditions, such as the increasing information interference from electronic devices, frequent smartphone social messages, smartwatch notifications, or human distraction from fellow passengers; given these realistic ways a video or live broadcast can be missed, grading attention by monitoring head pose and sight-line direction is well suited.
(3) When the situation encountered during actual driving passes, the occupant can choose to return to the mark point or continue playing according to actual needs. Because the invention classifies attention with visible labels, vehicle occupants can clearly see why a video clip was missed, which helps them judge whether to return to the mark point or continue playing. All marking is completed automatically without affecting viewing, effectively improving the user experience.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of a display interface corresponding to a primary label;
FIG. 3 is an enlarged view of a portion A of the progress bar of FIG. 2;
FIG. 4 is a diagram of a display interface corresponding to a secondary label;
In the figures: 1. progress bar; 2. visual marker.
Detailed Description
The invention will be further explained with reference to the drawings.
Example 1: referring to fig. 1 to 4, a vehicle-mounted video trace processing method based on sight line attention includes the following steps:
(1) the vehicle-mounted cinema takes the central control screen as a display and plays videos for passengers;
(2) monitoring the autonomous vehicle and the occupants;
monitoring an automatic driving vehicle to obtain the driving state of the automatic driving vehicle, wherein the driving state comprises a stable state and a non-stable state during automatic driving;
monitoring the head pose and sight-line direction of the occupant, wherein the head pose is either facing the display or deviating from the display, and the sight-line direction is either watching the display or not watching the display;
(3) classifying the attention of the occupant;
when the driving state is a non-steady state, judging the head posture:
if the head posture is off the display, the attention classification result is S1;
if the head posture is towards the display, judging the sight line direction of the passenger, if the display is not watched, the attention classification result is S2, and if the display is watched, the attention classification result is S3;
when the driving state is a steady state, judging the head posture:
if the head posture is off the display, the attention classification result is S4;
if the head posture is towards the display, judging the sight line direction of the passenger, if the display is not watched, the attention classification result is S5, and if the display is watched, the attention classification result is S6;
(4) marking S1, S2 and S4 as primary labels and S3 and S5 as secondary labels, establishing a mark point on the video playback progress bar at the start time of each label and acquiring the time of the mark point, and, if the label is a primary label, simultaneously recording the duration of the current label;
(5) continuing to monitor the autonomous vehicle and the occupant until the current S1-S5 label is eliminated and the occupant's current attention classification result is S6;
(6) if the eliminated current label is a primary label, the mark point on the progress bar flickers and the progress missed during the duration of that label is displayed on the progress bar; if the eliminated current label is a secondary label, only the mark point is kept on the progress bar.
Regarding step (2): in this embodiment, when the head pose and sight-line direction of the occupant are actually monitored, a camera captures images, the footage is split into individual frames, and each frame is analyzed to obtain its head pose. When an occupant's head pose deviates from the display, several consecutive frames will inevitably show the deviation, so a sensitivity level is preset according to the continuous deviation duration across frames: the strong level is 3 s, the medium level is 5 s, the weak level is more than 10 s, and the default sensitivity level is medium.
Then, regarding the head pose: if every frame within 5 s shows the head pose deviating from the display, it can be determined that the occupant's head pose deviates from the display.
The sight-line direction is determined and set in the same way, with its own sensitivity level as for the head pose: for example, if the sight line in every frame within 5 s is not watching the display, it is determined that the occupant is not watching the display. In actual operation, the duration corresponding to each sensitivity level can be set and changed according to actual conditions.
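The sensitivity levels above amount to debouncing the per-frame classification. A minimal sketch, assuming a fixed camera frame rate (the 30 fps value is an assumption, not from the patent):

```python
from collections import deque

class Debouncer:
    """Confirms a state only after it holds for `seconds` of consecutive frames.
    Embodiment levels: strong = 3 s, medium = 5 s (default), weak = 10 s."""

    def __init__(self, seconds: float = 5.0, fps: int = 30):
        self.window = deque(maxlen=int(seconds * fps))

    def update(self, observed: bool) -> bool:
        self.window.append(observed)
        # True only when the window is full and every frame agrees.
        return len(self.window) == self.window.maxlen and all(self.window)
```

For example, `Debouncer(seconds=5.0).update(head_deviates)` called once per frame returns True only after 5 s of uninterrupted deviation, matching the medium sensitivity level.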
With regard to step (4), we explain by means of fig. 3 and 4.
When the label is a primary label, referring to fig. 3, the progress bar 1 includes a black-and-white bar in the middle with a left end point and a right end point. Assuming the occupant's attention is distracted by a road bump and the attention classification result is S1, the following operations are performed according to steps (a1)-(a3):
(a1) at the start time of the current label, a mark point is established on the video playback progress bar 1, i.e., at the left end point of the black-and-white bar, and the time of the mark point is acquired;
(a2) for convenient viewing by the occupant, a visual marker 2 is also displayed at the mark point;
(a3) while the current label persists, its duration is recorded and shown by the right end point of the black-and-white bar: the longer the label lasts, the longer the bar grows. To make it more striking, a black circle as shown in fig. 3 can be placed at the right end point of the bar.
When the label is a secondary label, referring to fig. 4, only a single mark point appears on the progress bar 1.
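A compact sketch of the mark-point bookkeeping behind figs. 3 and 4; the `MarkPoint` structure and the `ui` interface (`flicker`, `show_missed_range`, `keep_marker`) are hypothetical illustrations, not part of the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkPoint:
    start_time: float                 # position on progress bar 1, in seconds
    primary: bool                     # True for S1/S2/S4, False for S3/S5
    duration: Optional[float] = None  # recorded only while a primary label lasts

def on_label_eliminated(mark: MarkPoint, ui) -> None:
    # Step (6): primary label -> flickering mark point plus missed span;
    # secondary label -> the mark point alone is kept.
    if mark.primary and mark.duration is not None:
        ui.flicker(mark.start_time)
        ui.show_missed_range(mark.start_time, mark.start_time + mark.duration)
    else:
        ui.keep_marker(mark.start_time)
```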
Example 2: referring to figs. 1 to 4, to better illustrate the solution of the invention, step (7) is added on the basis of embodiment 1: if the occupant needs to return to a mark point for replay, the corresponding mark point is selected and the video returns to that mark point. In step (7), the mark point is selected by tapping it on the screen, touching it and sliding, or double-clicking it. The rest is the same as in embodiment 1.
Example 3: with reference to fig. 1 to 4, to better illustrate the solution of the present invention, we further add the following on the basis of example 1:
monitoring an autonomous vehicle with an autonomous driving control device, the autonomous driving control device comprising:
an automatic driving control unit: the automatic driving control system is used for carrying out automatic driving control on the vehicle;
a motion state determination unit: the system comprises a gyroscope sensor, a three-axis acceleration sensor and a processor, wherein the gyroscope sensor is used for acquiring the rotation angular velocity information of a vehicle in real time, and the three-axis acceleration sensor is used for acquiring the acceleration information of the vehicle in real time; the processor is used for calculating the rotation angular velocity variation w and the acceleration variation a in each second according to the rotation angular velocity information and the acceleration information;
presetting a rotation angular velocity variation threshold w1 and an acceleration variation threshold a1: if w < w1 and a < a1, the driving state is steady; all other cases are non-steady. w1 and a1 can be adjusted according to actual conditions.
Monitoring the head posture and the sight direction of the passenger, specifically comprising the following steps:
(1) acquiring a head image of a passenger in the vehicle through a camera;
(2) identifying a predetermined region of the face from the head image and extracting facial feature points from the predetermined region of the face;
(3) analyzing the facial feature points with a first model to obtain the occupant's head pose, the first model being obtained through machine-learning training on multiple groups of data, each piece of data comprising facial feature points labeled with the head pose;
(4) analyzing the facial feature points with a second model to obtain the occupant's sight-line direction, the second model being obtained through machine-learning training on multiple groups of data, each piece of data comprising facial feature points labeled with the sight-line direction.
The first model and the second model are both obtained through machine-learning training. Before training the first model, a large number of occupant head images are collected, a predetermined face region is recognized in each head image, and facial feature points are extracted from that region. The facial feature points in each head image are annotated with the head pose, the annotated images are divided into a training set and a test set, the training-set data are fed to the first model for training, and the test-set data are used for testing. Similarly, the second model requires facial feature points annotated with the sight-line direction. The predetermined face region may be a predetermined rectangular region or a face region of a certain size.
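A hedged training sketch for the first (head pose) model; the patent does not fix a network architecture, so a scikit-learn MLP classifier stands in here as an assumption, and the second (gaze) model would be trained identically with sight-line labels:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_pose_or_gaze_model(feature_points, labels):
    # Split the annotated feature points into a training set and a test set,
    # as described above.
    X_train, X_test, y_train, y_test = train_test_split(
        feature_points, labels, test_size=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    model.fit(X_train, y_train)                            # train on the training set
    print("test accuracy:", model.score(X_test, y_test))   # test with the test set
    return model
```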
Example 4: referring to figs. 1 to 4, we further improve on embodiment 3. Here the automatic driving control device is additionally provided with a GPS positioning device and accesses a network map with real-time traffic information, so that when an emergency occurs ahead, the vehicle occupants can be informed in advance and asked whether to pause and wait. For example, if the road section ahead bumps continuously because of road repair, the occupant can be reminded 5 minutes before reaching it and asked whether to pause, preparing in advance.
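A minimal sketch of this look-ahead reminder, assuming a hypothetical traffic-map client that reports the estimated time (in seconds) until the vehicle reaches the next known rough or congested segment, and a hypothetical `ui.ask` prompt on the central control screen:

```python
REMINDER_LEAD_TIME_S = 5 * 60  # remind 5 minutes ahead, per the example above

def maybe_remind(eta_to_event_s: float, ui) -> None:
    # Ask the occupant whether to pause playback before the rough segment.
    if eta_to_event_s <= REMINDER_LEAD_TIME_S:
        ui.ask("Continuous bumps ahead - pause the video and wait?")
```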
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (5)
1. A vehicle-mounted video trace processing method based on sight-line attention, characterized by comprising the following steps:
(1) the vehicle-mounted cinema takes the central control screen as a display and plays videos for passengers;
(2) monitoring the autonomous vehicle and the occupants;
monitoring an automatic driving vehicle to obtain the driving state of the automatic driving vehicle, wherein the driving state comprises a stable state and a non-stable state during automatic driving;
monitoring the head pose and sight-line direction of the occupant, wherein the head pose is either facing the display or deviating from the display, and the sight-line direction is either watching the display or not watching the display;
(3) classifying the attention of the occupant;
when the driving state is a non-steady state, judging the head posture:
if the head posture is off the display, the attention classification result is S1;
if the head posture is towards the display, judging the sight line direction of the passenger, if the display is not watched, the attention classification result is S2, and if the display is watched, the attention classification result is S3;
when the driving state is a steady state, judging the head posture:
if the head posture is off the display, the attention classification result is S4;
if the head posture is towards the display, judging the sight line direction of the passenger, if the display is not watched, the attention classification result is S5, and if the display is watched, the attention classification result is S6;
(4) marking S1, S2 and S4 as primary labels and S3 and S5 as secondary labels, establishing a mark point on the video playback progress bar at the start time of each label and acquiring the time of the mark point, and, if the label is a primary label, simultaneously recording the duration of the current label;
(5) continuing to monitor the autonomous vehicle and the occupant until the current S1-S5 label is eliminated and the occupant's current attention classification result is S6;
(6) if the eliminated current label is a primary label, the mark point on the progress bar flickers and the progress missed during the duration of that label is displayed on the progress bar; if the eliminated current label is a secondary label, only the mark point is kept on the progress bar.
2. The sight line attention-based on-vehicle video trace processing method according to claim 1, characterized in that: monitoring an autonomous vehicle using an autonomous driving control device, the autonomous driving control device comprising:
an automatic driving control unit: the automatic driving control system is used for carrying out automatic driving control on the vehicle;
a motion state determination unit: comprising a gyroscope sensor, a three-axis acceleration sensor and a processor, wherein the gyroscope sensor is used for acquiring the rotation angular velocity information of the vehicle in real time, and the three-axis acceleration sensor is used for acquiring the acceleration information of the vehicle in real time; the processor is used for calculating the rotation angular velocity variation w and the acceleration variation a in each second according to the rotation angular velocity information and the acceleration information;
presetting a rotation angular velocity variation threshold w1 and an acceleration variation threshold a1: if w < w1 and a < a1, the driving state is steady, and all other cases are non-steady.
3. The sight line attention-based on-vehicle video trace processing method according to claim 1, characterized in that: monitoring the head posture and the sight direction of the passenger, specifically comprising the following steps:
(1) acquiring head images of passengers in the vehicle through a camera;
(2) identifying a predetermined region of the face from the head image and extracting facial feature points from the predetermined region of the face;
(3) analyzing the facial feature points with a first model to obtain the occupant's head pose, the first model being obtained through machine-learning training on multiple groups of data, each piece of data comprising facial feature points labeled with the head pose;
(4) analyzing the facial feature points with a second model to obtain the occupant's sight-line direction, the second model being obtained through machine-learning training on multiple groups of data, each piece of data comprising facial feature points labeled with the sight-line direction.
4. The sight-line attention-based vehicle-mounted video trace processing method according to claim 1, characterized in that the method further comprises step (7): if the occupant needs to return to a mark point for replay, the corresponding mark point is selected and the video returns to that mark point.
5. The sight-line attention-based vehicle-mounted video trace processing method according to claim 4, characterized in that in step (7), the corresponding mark point is selected and the video returned to it by tapping the mark point on the screen, touching the mark point and sliding, or double-clicking the mark point.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110359280.5A CN112954486B (en) | 2021-04-02 | 2021-04-02 | Vehicle-mounted video trace processing method based on sight attention |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112954486A CN112954486A (en) | 2021-06-11 |
CN112954486B true CN112954486B (en) | 2022-07-12 |
Family
ID=76232173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110359280.5A Active CN112954486B (en) | 2021-04-02 | 2021-04-02 | Vehicle-mounted video trace processing method based on sight attention |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112954486B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023272635A1 (en) * | 2021-06-30 | 2023-01-05 | 华为技术有限公司 | Target position determining method, determining apparatus and determining system |
CN113709566B (en) * | 2021-08-11 | 2024-03-22 | 咪咕数字传媒有限公司 | Method, device, equipment and computer storage medium for playing multimedia content |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7920102B2 (en) * | 1999-12-15 | 2011-04-05 | Automotive Technologies International, Inc. | Vehicular heads-up display system |
- 2021-04-02: application CN202110359280.5A filed; patent CN112954486B (active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101324957A (en) * | 2008-07-16 | 2008-12-17 | 上海大学 | Intelligent playing method of football video facing to mobile equipment |
US9665169B1 (en) * | 2015-03-11 | 2017-05-30 | Amazon Technologies, Inc. | Media playback after reduced wakefulness |
EP3323240A1 (en) * | 2015-07-16 | 2018-05-23 | Blast Motion Inc. | Integrated sensor and video motion analysis method |
CN105357585A (en) * | 2015-08-29 | 2016-02-24 | 华为技术有限公司 | Method and device for playing video content at any position and time |
CN105847914A (en) * | 2016-03-29 | 2016-08-10 | 乐视控股(北京)有限公司 | Vehicle-mounted display-based video playing method and device |
CN111145055A (en) * | 2019-03-28 | 2020-05-12 | 黄河科技学院 | Network teaching system |
Non-Patent Citations (1)
Title |
---|
System design and research of a miniaturized outside-broadcast van suitable for rapid response; Zhao Qiang; Master's thesis, Shandong University; 2013-01-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112954486A (en) | 2021-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10882398B2 (en) | System and method for correlating user attention direction and outside view | |
US20210150390A1 (en) | Systems and methods for providing visual allocation management | |
US20210357670A1 (en) | Driver Attention Detection Method | |
EP3067827B1 (en) | Driver distraction detection system | |
US20180229654A1 (en) | Sensing application use while driving | |
US9460601B2 (en) | Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance | |
US20150116493A1 (en) | Method and system for estimating gaze direction of vehicle drivers | |
CN112954486B (en) | Vehicle-mounted video trace processing method based on sight attention | |
CN109074748A (en) | Image processing equipment, image processing method and movable body | |
US20240249520A1 (en) | Integrated internal and external camera system in vehicles | |
EP3956807A1 (en) | A neural network for head pose and gaze estimation using photorealistic synthetic data | |
US20230347903A1 (en) | Sensor-based in-vehicle dynamic driver gaze tracking | |
JP2019032739A (en) | Reproducing apparatus and reproducing method, program thereof, recording apparatus, control method of recording apparatus, and the like | |
CN114340970A (en) | Information processing device, mobile device, information processing system, method, and program | |
Rong et al. | Artificial intelligence methods in in-cabin use cases: A survey | |
WO2020019231A1 (en) | Apparatus and method for use with vehicle | |
JP2006048171A (en) | Status estimation device, status estimation method, information providing device using the same, and information providing method | |
JP2022047580A (en) | Information processing device | |
US20190149777A1 (en) | System for recording a scene based on scene content | |
CN116071949A (en) | Augmented reality method and device for driving assistance | |
JP7521920B2 (en) | Driving assistance device and data collection system | |
US20200265252A1 (en) | Information processing apparatus and information processing method | |
CN110287838B (en) | Method and system for monitoring behaviors of driving and playing mobile phone | |
US20240331409A1 (en) | Generation method, display device, and generation device | |
US11948227B1 (en) | Eliminating the appearance of vehicles and/or other objects when operating an autonomous vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |