CN110901519B - Multi-scene application method and system for vehicle information projection


Info

Publication number: CN110901519B (application CN201911166504.XA)
Authority: CN (China)
Prior art keywords: vehicle, scene, distance, state, module
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110901519A
Inventor: 陈芓良
Current Assignee: Dilu Technology Co Ltd
Original Assignee: Dilu Technology Co Ltd
Application filed by Dilu Technology Co Ltd
Priority to CN201911166504.XA
Publication of CN110901519A
Application granted
Publication of CN110901519B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60Q: ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00: Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/26: Devices primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
    • B60Q2400/00: Special features or arrangements of exterior signal lamps for vehicles
    • B60Q2400/50: Projected symbol or information, e.g. onto the road or car body

Abstract

The invention discloses a multi-scene application method and system for vehicle information projection, comprising the following steps: an acquisition module acquires and transmits vehicle state data in real time; a processing module receives the data and converts the image data captured by the acquisition module into distance parameters; the distance parameters are transmitted to a classification module; the classification module uses the distance parameters to identify and judge the scene state of the current vehicle and determines the color and pattern to be projected; and a projection generation module projects the color and pattern onto the ground. The invention has the following beneficial effects: the patterns and colors projected onto the ground around the vehicle body are combined with the vehicle's state and actions to remind, warn, and inform; this new reminding mode enhances the safety of the vehicle when stationary, reduces the unsafe factors that arise when people outside the vehicle cannot understand its actions, and improves safety while the vehicle is not moving.

Description

Multi-scene application method and system for vehicle information projection
Technical Field
The invention relates to the technical field of automobile external interaction, in particular to a multi-scene application method for vehicle information projection and an application system thereof.
Background
In recent years the autonomous automobile, also called the driverless car, computer-driven car, or wheeled mobile robot, has emerged as an intelligent vehicle that drives itself by means of a computer system. Automotive autopilot technology uses video cameras, radar sensors, and laser range finders to perceive the surrounding traffic, and navigates the road ahead with a detailed map collected beforehand by manned vehicles. In Google's system, all of this runs through Google's data centers, which process the vast amount of information the cars collect about the surrounding terrain; in this respect the autonomous vehicle behaves like a remote-controlled or intelligent vehicle of the Google data center. Autonomous driving is also one application of Internet of Things technology.
As an external interaction function of autonomous automobiles, vehicle information projection is a popular research topic. However, existing vehicle information projection lacks usage scenarios: it serves only as welcome lighting and has little practical significance, amounting to a minor personalization feature of limited value on current vehicles. The information projection function therefore needs to be expanded so that it gains practical significance in the vehicle and can serve as an auxiliary reminder function.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, one technical problem solved by the present invention is: different classification and grading methods are provided for different scenes, different information is displayed in different patterns and colors, and the lack of usage scenarios for vehicle information projection is addressed.
In order to solve the above technical problems, the invention provides the following technical scheme: a multi-scene application method for vehicle information projection, comprising the following steps: an acquisition module acquires and transmits vehicle state data in real time; a processing module receives the data and converts the image data captured by the acquisition module into distance parameters; the distance parameters are transmitted to a classification module; the classification module uses the distance parameters to identify and judge the scene state of the current vehicle and determines the color and pattern to be projected; and a projection generation module projects the color and pattern onto the ground.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: the scheme for judging the scene state of the vehicle comprises the following steps: the vehicle state is defined as either the parking state or the door-opening state; the parking state corresponds to the vehicle not being started; and the door-opening state corresponds to a user inside the vehicle opening a door.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: the classification unit performs the following definition steps: defining a safety distance D; defining three distances D1, D2, D3, where D1 < D2 < D3; and defining the scenes. Scene one: the vehicle, while in a driving state, pulls over and parks, and a door is opened. Scene two: the vehicle is parked, and a threatening person/object approaches it.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: scene one comprises the following steps: the door-opening side detects a pedestrian/non-motor vehicle approaching from the rear; the light-source emitting device, located at the side door skirt, projects red warning information on the nearby ground; and the warning information warns, for safety, that a door on that side of the vehicle is opening.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: identification on the door-opening side comprises acquiring camera images of the vehicle's surroundings and judging whether an object is present; if an object is present, the distance between the object and the vehicle is continuously monitored to determine whether it is approaching.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: scene two comprises the following steps: different threat levels are assigned according to how close the threat is to the vehicle body; and warning information for the different threat levels is projected on the ground around the vehicle body to remind/warn the threatening person/object outside the vehicle.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: the safety distances defined by the classification unit comprise, in scene one, D1 = 5 meters, D2 = 10 meters, and D3 = 15 meters; in scene two, D1' = 0.3 meters, D2' = 0.5 meters, and D3' = 1 meter; and, once D1, D2, and D3 are defined, the prompts for scenes one and two are divided into three warning levels I1, I2, and I3.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention, the warning levels are subdivided as follows. The I1 warning level comprises: scene one, a rear threat approaches the door-opening side, the distance is less than D1, and the collision probability exceeds 70%, so the warning level is I1; scene two, a threat is recognized approaching the vehicle body at a distance less than D1', so the warning level is I1. The I2 warning level comprises: scene one, a rear threat approaches the door-opening side with D1 < distance < D2, so the warning level is I2; scene two, a threat is recognized approaching the vehicle body with D1' < distance < D2', so the warning level is I2. The I3 warning level comprises: scene one, a rear threat approaches the door-opening side with D2 < distance < D3, so the warning level is I3; scene two, a threat is recognized approaching the vehicle body with D2' < distance < D3', so the warning level is I3.
As a preferable scheme of the multi-scenario application method for vehicle information projection according to the present invention: projection display uses different display characteristics for different levels: I1 is displayed with red light and a warning pattern projected on the ground; I2 with yellow light and a warning pattern projected on the ground; and I3 with green light and a warning pattern projected on the ground.
Therefore, one technical problem solved by the present invention is: different classification and grading systems are provided for different scenes, different information is displayed in different patterns and colors, and the lack of usage scenarios for vehicle information projection is addressed.
In order to solve the above technical problems, the invention provides the following technical scheme: a multi-scene application system for vehicle information projection comprises an acquisition module, a processing module, a classification module, and a projection generation module. The acquisition module is arranged on the vehicle body, covers vehicle-mounted camera data, radar data, ultrasonic data, and LIDAR data, and acquires and transmits vehicle state data in real time. The processing module is connected to the acquisition module and converts the image data captured by the acquisition module into distance parameters. The classification unit is connected to the processing module, identifies and judges the scene state of the current vehicle from the distance parameters, and determines the color and pattern to be projected. The projection generation module is arranged in the vehicle's projection device and projects the pattern and color determined by the classification unit.
The invention has the following beneficial effects: projection devices added to the vehicle body provide a projection display area covering nearly 360 degrees of the ground around the body; the projected patterns and colors are combined with the vehicle's state and actions to remind, warn, and inform; and this new reminding mode enhances the safety of the vehicle when stationary, reduces the unsafe factors that arise when people outside the vehicle do not understand its actions, and improves safety while the vehicle is not moving.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor. Wherein:
fig. 1 is a schematic overall flow chart of a multi-scenario application method for vehicle information projection according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of an overall principle of a multi-scenario application system for projecting vehicle information according to a second embodiment of the present invention.
Fig. 3 is a schematic projection view of a rear light according to a second embodiment of the present invention;
fig. 4 is a projection diagram illustrating identification of a threatening approach in a parking state according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected" and "connected" in the present invention are to be construed broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
In this embodiment, projecting onto the ground displays more vehicle information to users outside the car, letting them understand the car's current state and actions more directly and improving the safety of the car. Information projected on the ground around the vehicle uses different colors and graphics to represent the state of the stationary vehicle. This overcomes the single-function, impractical nature of existing ambience lamps: by extending the display area around the whole vehicle body and grading the displayed information, the body-mounted lamps carry more information content and help people outside/inside the car grasp its current state obviously and more intuitively, thereby avoiding unsafe events.
With a camera or radar on the vehicle body as the environmental data acquisition equipment, judgments are made from the vehicle state and the acquired data, and colors and graphics expressing different information are projected on the ground to enhance the safety and interactivity of the vehicle in a static state.
Specifically, the present embodiment provides a multi-scenario application method for vehicle information projection, which includes the following steps,
S1: the acquisition module 100 acquires and transmits state data of the vehicle in real time;
S2: the processing module 200 receives the data from the acquisition module 100 and converts the image data captured by the acquisition module 100 into distance parameters. An actual image is captured by the camera; camera calibration yields the camera parameters (converting pixel distance into actual distance), the pixel coordinates of the image are obtained, a transformation between image pixel distance and spatial distance is computed, and the obtained pixel coordinates are converted into actual spatial coordinates. For example, the intrinsic parameters fc1, fc2, cc1, cc2 can be calibrated with the MATLAB toolkit, where fc1 = f/dx and fc2 = f/dy; dx and dy denote the actual distance represented by each pixel in the x and y directions. After dx and dy are computed from the MATLAB parameter matrix, the actual object distance is measured (from the number of pixels and the actual distance dx, dy corresponding to each pixel).
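The pixel-to-distance conversion described above can be sketched as follows. This is a minimal illustration assuming a pinhole camera with a focal length already expressed in pixel units (fc = f/dx, as a calibration toolbox reports); the function name and numbers are illustrative, not from the patent.

```python
def pixel_span_to_metres(pixel_span: float, depth_m: float, fc: float) -> float:
    """Convert an image-plane span in pixels to metres at a known depth.

    fc is the focal length in pixel units (fc = f / dx).  Under the pinhole
    model, an object of width W at depth Z spans W * fc / Z pixels, so
    W = pixel_span * Z / fc.
    """
    return pixel_span * depth_m / fc

# Example: with fc = 800 px, an object spanning 160 px at 10 m depth
# is 160 * 10 / 800 = 2.0 m wide.
width_m = pixel_span_to_metres(160, 10.0, 800)
```

The same relation, applied along both image axes with fc1 and fc2, converts pixel coordinates into the actual spatial coordinates the embodiment needs.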
S3: the distance parameters are transmitted to the classification module 300;
S4: the classification unit 300 identifies and judges the scene state of the current vehicle using the distance parameters, and determines the color and pattern to be projected. It should be noted that in this step the classification unit 300 is implemented with an image recognition and classification algorithm: it only needs to recognize whether there are objects around the vehicle in order to classify the scene, using a neural network classifier. An artificial neural network (NN) is an intelligent bionic model built on connectionist theory, a nonlinear dynamical system composed of a large number of neurons; it is abstracted and simplified from research into the organizational structure and activity rules of the human brain, producing a model that simulates it. It realizes a nonlinear mapping from an input space to an output space that "learns" or discovers relationships between variables by adjusting weights and thresholds, enabling the recognition and classification of objects. A large amount of data is used for training to improve recognition accuracy; for example, a convolutional neural network can fuse the distance parameter components of this embodiment to realize scene classification and recognition.
Before describing the display levels, the classification unit 300 must complete the following definitions:
defining a safe distance D;
defining three distances D1, D2, D3, wherein D1< D2< D3;
defining the scenes;
scene one: the vehicle, while in a driving state, pulls over and parks, and a door is opened;
scene two: the vehicle is parked, and a threatening person/object approaches it.
Further, the scheme for judging the scene state of the vehicle comprises:
the vehicle state is defined as either the parking state or the door-opening state;
the parking state corresponds to the vehicle not being started;
the door-opening state corresponds to a user inside the vehicle opening a door.
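The state and scene definitions above can be sketched as a small dispatch. This is a minimal illustration; the enum and function names are ours, not terms from the patent.

```python
from enum import Enum

class VehicleState(Enum):
    PARKED = 1        # vehicle not started
    DOOR_OPENING = 2  # an occupant is opening a door

def scene_for(state, threat_nearby):
    """Scene one: a door opens after pulling over; scene two: parked with a threat approaching."""
    if state is VehicleState.DOOR_OPENING:
        return "scene one"
    if state is VehicleState.PARKED and threat_nearby:
        return "scene two"
    return None  # no projection needed
```

The classification unit then only has to decide which of these two scenes applies before choosing distances and colors.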
According to the above definitions and scenario decisions, there are therefore:
the following steps are included in scenario one,
the door-opening side detects a pedestrian/non-motor vehicle approaching from the rear;
the light-source emitting device, located at the side door skirt, projects red warning information on the nearby ground;
the warning information warns, for safety, that a door on that side of the vehicle is opening.
The following steps are included in the second scenario,
different threat levels are assigned according to how close the threat is to the vehicle body;
warning information with different threat levels is projected on the ground around the vehicle body to remind/warn people/objects with threats outside the vehicle.
S5: the projection generation module 400 projects colors and patterns on the ground.
Identification on the door-opening side comprises acquiring camera images of the vehicle's surroundings to judge whether an object is present; if an object is present, the distance between the object and the vehicle is continuously monitored to judge whether it is approaching. The vehicle acquisition module 100 covers vehicle camera data, radar data, ultrasonic data, and LIDAR data: the vehicle camera captures images of the surroundings through its lens to determine whether an object is nearby, and the ultrasonic radar continuously monitors the distance between that object and the vehicle to judge whether it is approaching.
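The continuous monitoring step can be sketched as follows, assuming a stream of range readings (e.g. from the ultrasonic radar); the function name and window size are illustrative.

```python
def is_approaching(distance_readings, window=3):
    """Judge approach from the last `window` range readings (metres):
    the object counts as approaching if they are strictly decreasing."""
    if len(distance_readings) < window:
        return False  # not enough history yet
    tail = distance_readings[-window:]
    return all(a > b for a, b in zip(tail, tail[1:]))

# e.g. readings 5.0 -> 4.2 -> 3.1 m indicate an approaching object
```

A real implementation would also filter sensor noise; this sketch only shows the decision logic.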
In this embodiment, the safety distances defined by the classification unit 300 preferably comprise:
in scene 1, D1 = 5 meters, D2 = 10 meters, D3 = 15 meters;
in scene 2, D1' = 0.3 meters, D2' = 0.5 meters, D3' = 1 meter.
in view of the foregoing scenarios, this embodiment needs to be described further.
Scene 1: testing is based on the fastest common case, an electric bicycle. 1. The test speed does not exceed the design speed specified by regulation, v = 25 km/h. 2. t is the tester's reaction time in a dangerous situation. 3. d is the braking stop distance.
Reaction time test average: t = 0.45 s.
Braking distance test average: d = 1.56 m.
Distance from reaction to stop: D = v × t + d, with a test result of about 4.71 meters.
in the limit, D1=5 m is taken to avoid collision; d2 and D3 are used as warning and reminding functions, and D2= 2X D1 is respectively taken in consideration of the avoidance behavior of the avoided pedestrian/non-motor vehicle; d3=3 × D1.
Scene 2: the distances are set mainly by considering whether a neighboring vehicle's door could touch this vehicle while opening after that vehicle has parked. A common vehicle door needs about 1 meter to open fully; a half-open door needs 0.5 meter; and 30 cm is the minimum that still guarantees an occupant can get out in the limit case.
Therefore, in scenario 2, D1' =0.3 m, D2' =0.5 m, and D3' =1 m are set.
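The stopping-distance arithmetic behind D1 can be checked directly. The patent's 4.71 m figure matches rounding 25 km/h up to 7 m/s; the exact conversion gives about 4.69 m, and either way D1 = 5 m covers it.

```python
def stopping_distance(speed_ms, reaction_s, brake_m):
    """Distance from hazard recognition to standstill: D = v * t + d."""
    return speed_ms * reaction_s + brake_m

rounded = stopping_distance(7.0, 0.45, 1.56)      # 7 m/s (~25 km/h) -> 4.71 m, the patent's figure
exact = stopping_distance(25 / 3.6, 0.45, 1.56)   # ~4.69 m
```

Both values stay below the 5-metre D1 threshold chosen for scene 1.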
Once the definitions of D1, D2, and D3 are complete, the prompts for scene one and scene two are divided into three warning levels, I1, I2, and I3.
The warning levels are subdivided as follows.
The I1 warning level comprises:
scene one: a rear threat approaches the door-opening side, the distance is less than D1, and the collision probability exceeds 70%; the warning level is I1;
scene two: a threat is recognized approaching the vehicle body at a distance less than D1'; the warning level is I1.
The I2 warning level comprises:
scene one: a rear threat approaches the door-opening side with D1 < distance < D2; the warning level is I2;
scene two: a threat is recognized approaching the vehicle body with D1' < distance < D2'; the warning level is I2.
The I3 warning level comprises:
scene one: a rear threat approaches the door-opening side with D2 < distance < D3; the warning level is I3;
scene two: a threat is recognized approaching the vehicle body with D2' < distance < D3'; the warning level is I3.
Projection display uses different display characteristics for different levels:
I1 is displayed with red light and a warning pattern projected on the ground;
I2 is displayed with yellow light and a warning pattern projected on the ground;
I3 is displayed with green light and a warning pattern projected on the ground.
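The level thresholds and colors above can be sketched together. This is a minimal illustration using the embodiment's distances; in scene one the I1 level additionally requires the >70% collision-probability check, which is omitted here, and the names are ours.

```python
# D1..D3 thresholds in metres per scene, from the embodiment's definitions.
THRESHOLDS = {"scene one": (5.0, 10.0, 15.0), "scene two": (0.3, 0.5, 1.0)}
COLOURS = {1: "red", 2: "yellow", 3: "green"}

def warning_level(scene, distance_m):
    """Map a threat distance to warning level 1 (I1) .. 3 (I3), or None beyond D3.
    Scene one's I1 also requires collision probability > 70% (not modelled here)."""
    d1, d2, d3 = THRESHOLDS[scene]
    if distance_m < d1:
        return 1
    if distance_m < d2:
        return 2
    if distance_m < d3:
        return 3
    return None

# A threat 7 m behind the opening door -> level 2 -> yellow projection.
```

The projection generation module then only needs the level-to-color lookup to choose what to draw on the ground.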
The method of this embodiment projects over a larger area on the ground and grades the displayed information into different reminder levels. Unlike the traditional way a car displays information through its lamps, it is more intuitive and clear, and highly extensible. By retrofitting the welcome lamps that many current vehicles use purely for decoration, different warning information can be projected on the ground around the vehicle to inform outside users of the vehicle's state and actions, avoiding accidents such as scraping a person behind the vehicle during door opening because they did not know the vehicle's state, increasing safety and interactivity, and giving the function real practical meaning.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media includes instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
Example 2
Referring to the illustrations of fig. 1 to 3, the present embodiment provides a multi-scene application system for vehicle information projection; the multi-scene application method for vehicle information projection described above can be implemented by this system. The system includes an acquisition module 100, a processing module 200, an item-dividing module 300, and a projection generation module 400.
Specifically, the acquisition module 100 is arranged on the vehicle body and collects vehicle-mounted camera data, radar data, ultrasonic data, and LIDAR data, and is used for acquiring and transmitting state data of the vehicle in real time. The processing module 200 is connected to the acquisition module 100 and is configured to convert the data captured by the acquisition module 100 into a distance parameter. The item-dividing module 300 is connected to the processing module 200 and is configured to identify and judge the scene state to which the current vehicle belongs according to the distance parameter, and to determine the color and pattern to be projected. The projection generation module 400 is disposed in a projection device of the vehicle and is configured to project the pattern and color determined by the item-dividing module 300. As illustrated in fig. 2, depending on the actual position at which the projection generation module 400 is disposed on the vehicle body, its output may include projection of information around the entire vehicle body as well as projection at the tail light.
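The four-module pipeline described above can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function names, the sensor stubs, and the rule of fusing sensor ranges by taking the minimum are assumptions made for this sketch:

```python
def acquisition_module(sensors):
    # Module 100: gather the latest readings from the vehicle-mounted
    # camera, radar, ultrasonic and LIDAR sensors (stubbed as callables).
    return {name: read() for name, read in sensors.items()}

def processing_module(readings):
    # Module 200: convert the captured data into a single distance
    # parameter; here simply the closest range any sensor reports (meters).
    return min(readings.values())

def item_dividing_module(distance):
    # Module 300: judge the scene state from the distance parameter and
    # pick the color/pattern to project (thresholds here are illustrative).
    if distance < 5:
        return ("red", "warning_pattern")
    if distance < 10:
        return ("yellow", "warning_pattern")
    return ("green", "warning_pattern")

def projection_generation_module(color, pattern):
    # Module 400: drive the on-vehicle projector (stubbed as a string).
    return f"projecting {pattern} in {color}"

# End-to-end pass through the pipeline with fixed sensor stubs.
sensors = {"radar": lambda: 7.5, "ultrasonic": lambda: 6.8, "lidar": lambda: 8.1}
color, pattern = item_dividing_module(processing_module(acquisition_module(sensors)))
print(projection_generation_module(color, pattern))
```

The stubs keep each module's responsibility separate, mirroring the modular division of the system: sensing, distance conversion, scene judgment, and projection.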
In this embodiment, the projection generation module 400 is a vehicle projector, specifically a vehicle-mounted HUD (Head Up Display), hereinafter referred to as HUD. The HUD is a flight aid currently in common use on aircraft. "Head-up" means that the pilot can see the important information he needs without lowering his head. Head-up displays first appeared on military aircraft to reduce the frequency with which pilots needed to look down at the instruments, avoiding interruptions of attention and loss of situational awareness. The most practical functions of a HUD can be summarized in three main categories: vehicle information, navigation, and safety.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and such modifications and substitutions should be covered by the claims of the present invention.

Claims (6)

1. A multi-scene application method for vehicle information projection, characterized by comprising the following steps:
the acquisition module (100) acquires and transmits state data of the vehicle in real time;
the processing module (200) receives the data of the acquisition module (100) and converts the image data captured by the acquisition module (100) into distance parameters;
the distance parameter is transmitted to an item-dividing module (300);
the item-dividing module (300) identifies and judges the scene state to which the current vehicle belongs using the distance parameter, and determines the color and pattern to be projected;
the projection generation module (400) projects the color and pattern onto the ground;
the item-dividing module (300) performs the following defining steps,
defining a safe distance D;
defining three distances D1, D2, D3, wherein D1< D2< D3;
defining a scene;
scene one: the vehicle pulls over and stops from a running state, and a door of the vehicle is opened;
scene two: a threatening person/object approaches the vehicle while the vehicle is in a parked state;
scene two includes the following steps,
different threat levels are assigned according to how close the threat is to the vehicle body;
warning information of the corresponding threat level is projected on the ground around the vehicle body to remind/warn threatening people/objects outside the vehicle;
the safe distance D defined by the item-dividing module (300) includes,
in scene one, D1=5 meters, D2=10 meters, and D3=20 meters;
in scene two, D1'=0.3 meter, D2'=0.5 meter, and D3'=1 meter;
on the basis of the definitions of D1, D2, and D3, three warning levels I1, I2, and I3 are divided according to the prompt levels of scenes one and two for the ambient lamps;
the warning levels are itemized as follows,
the I1 warning level includes,
scene one: a threat object behind the vehicle approaches the door opening side, the distance is less than D1, and the collision probability is greater than 70%; the warning level is I1;
scene two: a threat object is recognized approaching the vehicle body, and the distance to the vehicle body is judged to be less than D1'; the warning level is I1;
the I2 warning level includes,
scene one: a threat object behind the vehicle approaches the door opening side, and D1 < distance < D2; the warning level is I2;
scene two: a threat object is recognized approaching the vehicle body, and D1' < distance < D2'; the warning level is I2;
the I3 warning level includes,
scene one: a threat object behind the vehicle approaches the door opening side, and D2 < distance < D3; the warning level is I3;
scene two: a threat object is recognized approaching the vehicle body, and D2' < distance < D3'; the warning level is I3.
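The threshold scheme of claim 1 can be sketched as a single classification function. This is a hypothetical illustration, not the claimed implementation; the scene-one thresholds are the 5/10/20-meter values recited above, and the scene-two thresholds are taken in ascending order (0.3/0.5/1 meter) so that the interval tests D1' < distance < D2' and D2' < distance < D3' are satisfiable:

```python
def warning_level(scene, distance, collision_prob=0.0):
    """Map a measured distance to warning level I1/I2/I3.

    scene is "one" (door opening after pulling over) or "two"
    (threat approaching the parked vehicle). collision_prob is only
    consulted for the scene-one I1 case, per the >70% condition.
    """
    d1, d2, d3 = (5, 10, 20) if scene == "one" else (0.3, 0.5, 1.0)
    if distance < d1:
        # Scene one additionally requires collision probability > 70%.
        if scene == "two" or collision_prob > 0.70:
            return "I1"
    if d1 < distance < d2:
        return "I2"
    if d2 < distance < d3:
        return "I3"
    return None  # outside the monitored range; no warning projected
```

A lower level number corresponds to a closer threat and a more urgent warning, which is why the intervals are checked from the innermost band outward.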
2. The multi-scene application method of vehicle information projection according to claim 1, characterized in that the scheme for judging the scene state of the current vehicle comprises the following:
the state of the vehicle is defined as either a parking state or a driving state;
the parking state corresponds to the vehicle not being started;
the door opening state corresponds to a user inside the vehicle operating a door.
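The vehicle states named in claim 2 can be modeled as a small enumeration plus a derivation helper. The two input signals (engine on, door handle pulled) are assumptions chosen for this sketch; the patent does not specify how the states are sensed:

```python
from enum import Enum

class VehicleState(Enum):
    DRIVING = "driving"            # vehicle started and under way
    PARKED = "parked"              # vehicle not started (claim 2)
    DOOR_OPENING = "door_opening"  # an occupant is operating a door

def judge_state(engine_on, door_handle_pulled):
    # Hypothetical helper: derive the claim-2 state from two signals.
    # Door operation takes priority, since it triggers scene one.
    if door_handle_pulled:
        return VehicleState.DOOR_OPENING
    return VehicleState.DRIVING if engine_on else VehicleState.PARKED
```

The enum keeps downstream scene logic exhaustive: every branch of the item-dividing step can match against a closed set of states.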
3. The multi-scene application method of vehicle information projection according to claim 2, characterized in that scene one comprises the following steps,
a pedestrian/non-motor vehicle approaching from the rear is recognized on the door opening side;
a light source emitting device positioned at the side door skirt projects red warning information on the nearby ground;
the warning information includes a safety warning that a door on that side of the vehicle is opening.
4. The multi-scene application method of vehicle information projection according to claim 3, characterized in that the recognition on the door opening side includes,
acquiring a camera image of the surrounding environment of the vehicle to judge whether an object exists;
if an object exists, continuously monitoring the distance between the object and the vehicle, and judging whether the object is approaching.
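The detect-then-monitor loop of claim 4 can be sketched as a trend test over successive distance measurements. The tolerance `eps` is a hypothetical noise margin, not a value from the patent:

```python
def is_approaching(distance_history, eps=0.05):
    # Claim 4 sketch: after an object is detected in the camera image,
    # its distance to the vehicle is monitored over successive frames;
    # the object is judged to be approaching when the distance shrinks
    # by more than the tolerance eps (meters) across the window.
    if len(distance_history) < 2:
        return False  # need at least two measurements to see a trend
    return distance_history[-1] < distance_history[0] - eps
```

Comparing the newest sample against the oldest one, rather than adjacent frames, makes the judgment robust to frame-to-frame ranging jitter smaller than `eps`.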
5. The multi-scene application method of vehicle information projection according to claim 4, characterized in that projection display adopts different display characteristics according to the different levels, including:
I1 is displayed by red light and a warning pattern projected on the ground;
I2 is displayed by yellow light and a warning pattern projected on the ground;
I3 is displayed by green light and a warning pattern projected on the ground.
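The level-to-display mapping of claim 5 is a direct table lookup. The tuple layout (light color, ground pattern) is an assumption of this sketch:

```python
# Claim 5 mapping from warning level to projected display characteristics.
LEVEL_DISPLAY = {
    "I1": ("red", "warning_pattern"),     # most urgent: nearest threat
    "I2": ("yellow", "warning_pattern"),
    "I3": ("green", "warning_pattern"),   # least urgent: farthest band
}

def display_for(level):
    # Look up the light color and ground pattern that the projection
    # generation module (400) should render for a given warning level.
    return LEVEL_DISPLAY[level]
```

Keeping the mapping in a table rather than branching logic makes it trivial to re-tune colors or patterns per market without touching the level-classification code.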
6. A system for operating the multi-scene application method of vehicle information projection of claim 1, characterized by comprising an acquisition module (100), a processing module (200), an item-dividing module (300), and a projection generation module (400);
the acquisition module (100) is arranged on a vehicle body and collects vehicle-mounted camera data, radar data, ultrasonic data, and LIDAR data, and is used for acquiring and transmitting state data of the vehicle in real time;
the processing module (200) is connected with the acquisition module (100) and is used for converting the image data captured by the acquisition module (100) into a distance parameter;
the item-dividing module (300) is connected with the processing module (200) and is used for identifying and judging the scene state to which the current vehicle belongs according to the distance parameter, and for determining the color and pattern to be projected;
the projection generation module (400) is arranged on a projection device of the vehicle and is used for projecting the pattern and color determined by the item-dividing module (300).
CN201911166504.XA 2019-11-25 2019-11-25 Multi-scene application method and system for vehicle information projection Active CN110901519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911166504.XA CN110901519B (en) 2019-11-25 2019-11-25 Multi-scene application method and system for vehicle information projection


Publications (2)

Publication Number Publication Date
CN110901519A CN110901519A (en) 2020-03-24
CN110901519B true CN110901519B (en) 2022-11-18

Family

ID=69819275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911166504.XA Active CN110901519B (en) 2019-11-25 2019-11-25 Multi-scene application method and system for vehicle information projection

Country Status (1)

Country Link
CN (1) CN110901519B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113401042A (en) * 2021-07-19 2021-09-17 深圳市裕富照明有限公司 Vehicle lamp control system and method and vehicle
CN115410394A (en) * 2022-08-01 2022-11-29 江苏航天大为科技股份有限公司 Internet of vehicles is with dangerous early warning reminding device of big data analysis

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104044502A (en) * 2014-06-17 2014-09-17 无锡市崇安区科技创业服务中心 Car door opening prompt device
CN106394395B (en) * 2015-07-30 2019-06-21 大陆投资(中国)有限公司 Automobile enabling anti-collision early warning method and system
CN206954090U (en) * 2017-05-17 2018-02-02 上海蔚兰动力科技有限公司 Driving intention instruction device
CN110401825A (en) * 2018-04-24 2019-11-01 长城汽车股份有限公司 Vehicle projecting method and device
CN109808589A (en) * 2019-02-25 2019-05-28 浙江众泰汽车制造有限公司 Vehicle blind zone prompt system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant