CN117115874A - AR-HUD-based identification method, device, equipment and medium


Info

Publication number
CN117115874A
Authority
CN
China
Prior art keywords
hud, target person, target, indication information, information corresponding
Prior art date
Legal status
Pending
Application number
CN202210530226.7A
Other languages
Chinese (zh)
Inventor
方建伟
Current Assignee
Pateo Connect and Technology Shanghai Corp
Original Assignee
Pateo Connect and Technology Shanghai Corp
Priority date
Filing date
Publication date
Application filed by Pateo Connect and Technology Shanghai Corp filed Critical Pateo Connect and Technology Shanghai Corp
Priority to CN202210530226.7A priority Critical patent/CN117115874A/en
Publication of CN117115874A publication Critical patent/CN117115874A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an AR-HUD-based identification method, device, equipment and medium, wherein the AR-HUD-based identification method comprises the following steps: acquiring facial image information corresponding to a target person; identifying the target person according to the facial image information, wherein the target person is located outside the vehicle; and displaying, on the AR-HUD, indication information corresponding to the identification result, the indication information being used for indicating the position of the target person. By recognizing the target person through facial recognition and indicating them in the AR-HUD, the method, device, equipment and medium can help in-vehicle users find the target person intuitively, quickly and accurately.

Description

AR-HUD-based identification method, device, equipment and medium
Technical Field
The invention relates to the technical field of augmented reality, and in particular to an AR-HUD-based identification method, device, equipment and medium.
Background
The AR-HUD (augmented reality head-up display) is an emerging head-up display technology that superimposes driving-assistance information onto the driver's field of view so that it aligns with actual traffic conditions. Through the AR-HUD, the driver's perception of the driving environment is extended and enhanced; for example, the AR-HUD can display preset data such as points of interest, turn guidance, lane information and leading-vehicle identification, all fitted to the real scene.
In vehicle scenarios where the driver has arranged to pick up a specific person, the environment is often complex: the target person can be hard to spot even at close range and is easily missed, or the driver blocks traffic by driving slowly while searching for them.
Disclosure of Invention
The invention aims to solve the above technical problem and provides an AR-HUD-based identification method, device, equipment and medium.
The invention solves the technical problems by the following technical scheme:
the invention provides an AR-HUD-based identification method, which comprises the following steps:
acquiring facial image information corresponding to a target person;
identifying the target person according to the facial image information, wherein the position of the target person is positioned outside the vehicle;
and displaying indication information corresponding to the identification result on the AR-HUD, wherein the indication information is used for indicating the position of the target person.
The invention also provides an AR-HUD-based identification device, which comprises a display unit, one or more processing units and a storage unit, wherein the one or more processing units are respectively in communication connection with the display unit and the storage unit;
the storage unit is configured to store instructions that, when executed by the one or more processing units, cause the one or more processing units to perform steps comprising:
acquiring facial image information corresponding to a target person;
identifying the target person according to the facial image information, wherein the position of the target person is positioned outside the vehicle;
and displaying indication information corresponding to the identification result on the display unit, wherein the indication information is used for indicating the position of the target person.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the identification method based on the AR-HUD when executing the computer program.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-mentioned AR-HUD based identification method.
The invention has the positive progress effects that: according to the AR-HUD-based recognition method, device, equipment and medium, the target personnel are recognized through facial recognition and indicated in the AR-HUD, so that in-vehicle users can be helped to intuitively, quickly and accurately find the target personnel.
Drawings
Fig. 1 is a flowchart of an AR-HUD based recognition method according to embodiment 1 of the present invention.
Fig. 2 is a schematic diagram illustrating an application of the AR-HUD based recognition method according to embodiment 1 of the present invention.
Fig. 3 is a schematic block diagram of an AR-HUD based recognition device according to embodiment 2 of the present invention.
Fig. 4 is a block diagram of the electronic device in embodiment 3 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
Referring to fig. 1, the embodiment specifically provides an identification method based on AR-HUD, including:
s1, acquiring facial image information corresponding to a target person;
s2, identifying target personnel according to the facial image information, wherein the position of the target personnel is located outside the vehicle;
s3, displaying indication information corresponding to the identification result on the AR-HUD, wherein the indication information is used for indicating the position of the target person.
The identification method of this embodiment can be applied to a vehicle terminal or other vehicle system so that the vehicle can identify and display the target person based on the AR-HUD. In a preferred embodiment, the facial image information in step S1 is collected in advance through an in-vehicle personnel monitoring system or a data upload channel. Specifically, step S1 may obtain facial images uploaded by the target person or others through a preset data upload channel; for example, the vehicle owner uploads a photo of the target person through an application program. Alternatively, the image may be obtained by scanning the target person in advance with a camera of a DMS (Driver Monitoring System) or OMS (Occupant Monitoring System). When the facial image is acquired by a DMS (hereinafter, "DMS" also covers the OMS case), the method is applied to identify the scanned person after they later leave the vehicle. For example, when a passenger takes a taxi for the first time, their facial image is scanned and recorded by the DMS; afterwards, when they book a taxi again, the cloud can distribute their facial image to the contracted vehicle for recognition. Of course, the manner of acquiring the facial image of the target person is not limited to the above. This embodiment thus provides both in-vehicle video monitoring and data-channel upload as ways of obtaining the facial image, offering data support for subsequent identification and improving its accuracy.
As a preferred embodiment, step S2 is preceded by:
and determining target personnel according to at least one of an external instruction input by a user, a voice recognition result of call information and an incoming call query result.
The target person to be identified in step S2 may be determined directly from an external instruction input by the user; for example, the vehicle owner inputs the information of the person to be picked up, and similarly the cloud may issue designated target-person information. The target person may also be confirmed from the voice recognition result of call information: after a pick-up task is started, calls are monitored and call voice information is obtained; the identity of the other party is determined through voice recognition applied to telephone, network voice and network video calls, optionally by matching against pre-stored voice files of known persons, and a network video call can further be matched against a pre-stored facial image. For an incoming call, the person's information can be confirmed by identifying the caller number. By intelligently determining the target person through external instructions, call voice-recognition results, incoming-call query results and the like, this embodiment offers the vehicle owner a convenient experience.
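A minimal sketch of this resolution order, an external instruction first and an incoming-call number lookup second; the contact-book structure and function name are assumptions for illustration:

```python
def determine_target_person(contacts, incoming_number=None, external_input=None):
    """Resolve the target person from an external instruction or an
    incoming-call query, in that order of precedence (illustrative sketch)."""
    if external_input is not None:
        # Owner- or cloud-designated target person takes priority.
        return external_input
    if incoming_number is not None:
        # Incoming-call query: look the caller up in stored contacts.
        return contacts.get(incoming_number)
    return None
```

A voice-recognition path could feed the same function by first resolving call content to a person's name.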
As a preferred embodiment, step S2 includes:
if the current position matches the preset position, acquiring an environment image;
and comparing the facial image information against the environment image to identify the target person.
After the target person is confirmed, step S2 starts the recognition. Preferably, environment-image acquisition begins when the vehicle reaches a preset position, and the target person's facial image is compared against the environment image for identification. Specifically, after the target person and the agreed place (and optionally a time) are determined, the vehicle travels to the preset position where the target person is expected, i.e. the destination, and matching recognition is performed upon arrival. The preset position can be set according to the expected position of the target person: for example, if the meeting point is a certain intersection, the preset position is set as a preset radius around that intersection, and the condition is met when the vehicle's current position falls within that radius. Optionally, the preset position may be determined according to at least one of an external instruction input by the user, the voice recognition result of call information, and an incoming-call query result; that is, during an incoming call or conversation, the agreed place is extracted from the recognized call content and used as the basis for setting the preset position. The environment image may be captured by a dashcam camera, a vehicle-mounted 360-degree panoramic camera, or the like, and the target person is identified by comparing it against the previously acquired facial image.
By starting identification only when the current position is determined to match the preset position, the recognition operation is initiated in good time while unnecessary environment-image acquisition and matching are avoided, which improves recognition efficiency and saves computing resources.
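The preset-position check, e.g. "within a preset radius of the agreed intersection", reduces to a distance test on coordinates. A sketch under the assumption that positions are (latitude, longitude) pairs and that an equirectangular approximation is acceptable at pick-up scales:

```python
import math

def within_preset_area(vehicle_pos, preset_center, radius_m):
    """Return True once the vehicle enters the preset radius around the
    agreed meeting point (e.g. an intersection). Positions are
    (latitude, longitude) in degrees; radius is in metres."""
    # Equirectangular approximation: adequate for radii of a few hundred metres.
    lat1, lon1 = map(math.radians, vehicle_pos)
    lat2, lon2 = map(math.radians, preset_center)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    dist = math.hypot(x, y) * 6371000  # mean Earth radius in metres
    return dist <= radius_m
```

Environment-image capture and face matching would be gated on this predicate so no work is done far from the destination.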
As a preferred embodiment, step S3 includes:
determining a target display position according to the identification result;
and displaying indication information corresponding to the identification result at the target display position of the virtual image display area of the AR-HUD.
In this embodiment, a target display position is set in the virtual image display area of the AR-HUD based on the recognition result, that is, the target person matched between the facial image and the environment image, and indication information for the recognition result is displayed there. For example, when the real image of the target person lies within the virtual image display area of the AR-HUD, the target display position may be determined from the real image, e.g. so that the indication information moves to follow the display position corresponding to the real image. The effect can be seen in the virtual image display area of the AR-HUD shown in fig. 2, where the indication information "find your target" is displayed at the target display position 100.
As a preferred embodiment, the indication information is used for indicating the position direction of the target person; the step of determining the target display position according to the recognition result comprises the following steps:
if the identified target person is outside the virtual image display area of the AR-HUD, an edge area closest to the target person is determined as a target display position in the virtual image display area of the AR-HUD.
Considering that the coverage of the AR-HUD's virtual image display area is small, the target person may already have been identified while standing outside that area. In this case, to still give an accurate indication of the target person's position, the edge region of the virtual image display area closest to the target person is taken as the target display position. For example, the target person is recognized ahead on the right, but because the vehicle is in the center or left lane, the spot where the person stands deviates too far from the viewing angle of the current lane to appear in the virtual image display area, which can only show as far as the adjacent lane. The right edge region of the virtual image display area, i.e. the adjacent lane, is then taken as the target display position; indication information such as a rightward arrow is displayed there, accompanied by a caption such as "your target is about 5 meters away". This gives a timely prompt when the target person has been identified but temporarily cannot be shown within the AR-HUD's virtual image display range, preventing the user from missing the target person and making unnecessary searches or detours.
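Choosing the edge region closest to an off-screen target amounts to clamping the target's coordinates to the display rectangle. A sketch, with the virtual image area given as (left, top, right, bottom) in display coordinates (an assumed convention):

```python
def target_display_position(target_xy, area):
    """If the target lies outside the AR-HUD virtual image area, clamp the
    indicator to the nearest edge; otherwise display at the target itself.
    Returns the display position and whether the target was inside."""
    x, y = target_xy
    left, top, right, bottom = area
    inside = left <= x <= right and top <= y <= bottom
    # Clamping to the rectangle yields the nearest point on its boundary
    # whenever the target is outside it.
    cx = min(max(x, left), right)
    cy = min(max(y, top), bottom)
    return (cx, cy), inside
```

An on-screen target keeps its own position; an off-screen one is pinned to the closest edge, where the directional arrow is drawn.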
In a preferred embodiment, step S3 further includes:
generating navigation prompt information according to the position of the target person;
and displaying navigation prompt information in a superposition manner in a virtual image display area of the AR-HUD.
When the identified target person stands outside the virtual image display area owing to its small coverage, refined navigation can be re-planned according to the target person's position, and a navigation prompt generated. The navigation prompt may mark the direction of the target person at the edge of the AR-HUD virtual image display area with a highlighted indicator, a flashing arrow or another symbol; as the vehicle continues to track the target and the target person's position enters the AR-HUD's virtual image display range, a prominent label is applied and a voice prompt is issued. By planning refined navigation toward the target person's position, the user can meet the other party as soon as possible once close to them.
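Generating the directional prompt, an arrow toward the target plus a distance callout like "your target is about 5 meters away", might look like the following; the planar coordinate frame and the wording are illustrative assumptions:

```python
import math

def navigation_prompt(vehicle_xy, target_xy):
    """Build the overlay prompt text: a coarse direction arrow plus a
    distance callout, for a target outside the display area (sketch)."""
    dx = target_xy[0] - vehicle_xy[0]
    dy = target_xy[1] - vehicle_xy[1]
    dist = math.hypot(dx, dy)  # straight-line distance in metres (assumed units)
    # Pick the dominant axis for a four-way arrow.
    if abs(dx) >= abs(dy):
        arrow = "→" if dx > 0 else "←"
    else:
        arrow = "↑" if dy > 0 else "↓"
    return f"{arrow} your target is about {dist:.0f} meters away"
```

A real system would refresh this as tracking updates arrive and switch to the prominent in-area label once the target enters the display range.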
In one application scenario of this embodiment, a contact of the vehicle owner calls to say they need to be picked up. Because the contact has previously ridden in the vehicle, their facial image was recognized by the vehicle's DMS and stored in the vehicle system, and their telephone number, call records, voiceprint and other information were associated with it via quick remarks. After the call is answered, the contact's identity is determined through the Bluetooth address book, the pre-stored facial image recorded by the DMS is retrieved, and the agreed pick-up destination and time are extracted through voiceprint recognition and speech recognition. When the call ends, the contact's identity, the time and the destination are displayed on the vehicle's central control screen or in the AR-HUD virtual image display area for the owner to confirm or modify. The owner then drives to the vicinity of the destination, where many people are present; the vehicle performs automatic matching recognition and identifies the contact from the environment image and the facial image, but the contact is not within the AR-HUD's virtual image display range. Indication information such as guide lines is then attached at the corresponding position of the AR-HUD virtual image display area according to the contact's position, helping the owner find the contact efficiently and quickly. Of course, the pick-up scenario above is only one application of this embodiment, and those skilled in the art will appreciate that the embodiment is not limited by these examples.
For example, in another application scenario, the police may patrol, carrying the facial images of wanted persons, around locations where those persons may appear, and quickly locate a wanted target person in a crowd.
By recognizing the target person through facial recognition and indicating them in the AR-HUD, the AR-HUD-based recognition method of this embodiment can help in-vehicle users find the target person intuitively, quickly and accurately.
Example 2
Referring to fig. 3, the present embodiment specifically provides an AR-HUD based recognition device, where the recognition device includes a display unit 1, one or more processing units 2, and a storage unit 3, where the one or more processing units 2 are respectively connected with the display unit 1 and the storage unit 3 in a communication manner;
the storage unit 3 is configured to store instructions that, when executed by the one or more processing units, cause the one or more processing units 2 to perform steps comprising:
s51, acquiring facial image information corresponding to a target person;
s52, identifying target personnel according to the facial image information, wherein the position of the target personnel is located outside the vehicle;
s53, displaying indication information corresponding to the identification result on the display unit 1, wherein the indication information is used for indicating the position of the target person.
The identification device of this embodiment can be applied to a vehicle terminal or other vehicle system so that the vehicle can identify and display the target person based on the AR-HUD. The facial image information in step S51 is collected in advance through an in-vehicle personnel monitoring system or a data upload channel; for example, the vehicle owner uploads a photo of the target person through an application program, or the target person is scanned in advance by a DMS or OMS camera. After the target person is confirmed, step S52 may start environment-image acquisition at a preset position and compare the target person's facial image against the environment image for identification. For example, if the meeting point is a certain intersection, the preset position is set as a preset radius around that intersection, and the condition is met when the vehicle's current position falls within that radius. The preset position can be determined from at least one of an external instruction input by the user, the voice recognition result of call information, and an incoming-call query result; that is, during an incoming call or conversation, the agreed place is extracted from the recognized call content and used as the basis for setting the preset position. The environment image may be captured by a dashcam camera, a vehicle-mounted 360-degree panoramic camera, or the like, and the target person is identified by comparison with the previously acquired facial image.
Based on the recognition result, that is, the target person matched from the face image and the environment image, step S53 sets the target display position in the virtual image display region of the AR-HUD and displays instruction information for the recognition result. For example, when a real image of a target person is located in a virtual image display region of the AR-HUD, a target display position may be determined based on the real image, for example, such that the indication information is moved and displayed following the target display position corresponding to the real image.
The AR-HUD-based recognition device of this embodiment recognizes the target person through facial recognition and indicates them in the AR-HUD, and can help in-vehicle users find the target person intuitively, quickly and accurately.
Example 3
Referring to fig. 4, the present embodiment provides an electronic device 30, which includes a processor 31, a memory 32, and a computer program stored in the memory 32 and executable on the processor 31, wherein the processor 31 implements the AR-HUD based identification method in embodiment 1 when executing the program. The electronic device 30 shown in fig. 4 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
The electronic device 30 may be in the form of a general-purpose computing device, for example a server device. Components of the electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, and a bus 33 connecting the different system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus, and a control bus.
Memory 32 may include volatile memory such as Random Access Memory (RAM) 321 and/or cache memory 322, and may further include Read Only Memory (ROM) 323.
Memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as the AR-HUD based recognition method in embodiment 1 of the present invention, by running a computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., keyboard, pointing device, etc.). Such communication may take place through an input/output (I/O) interface 35. The electronic device 30 may also communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, via a network adapter 36. The network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. Other hardware and/or software modules may be used in connection with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of an electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present invention. Conversely, the features and functions of one unit/module described above may be further divided into ones that are embodied by a plurality of units/modules.
Example 4
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the AR-HUD based identification method in embodiment 1.
More specifically, the readable storage medium may include, but is not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the AR-HUD based identification method of embodiment 1 when the program product is run on the terminal device.
The program code for carrying out the invention may be written in any combination of one or more programming languages. The program code may execute entirely on the user device, partly on the user device as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.

Claims (11)

1. An AR-HUD based identification method, comprising:
acquiring facial image information corresponding to a target person;
identifying the target person according to the facial image information, wherein the position of the target person is positioned outside the vehicle;
and displaying indication information corresponding to the identification result on the AR-HUD, wherein the indication information is used for indicating the position of the target person.
2. The AR-HUD based recognition method of claim 1, wherein the step of recognizing the target person from the facial image information comprises:
if the current position accords with the preset position, acquiring an environment image;
and comparing the face image information against the environment image to identify the target person.
3. The AR-HUD based recognition method according to claim 1 or 2, wherein before the step of acquiring facial image information corresponding to the target person, the method further comprises:
and determining the target personnel according to at least one of an external instruction input by a user, a voice recognition result of call information and an incoming call query result.
4. The AR-HUD based recognition method of claim 2, wherein the preset position is determined according to at least one of an external command input by a user, a voice recognition result of call information, and an incoming call query result.
5. The AR-HUD based recognition method of claim 3, wherein the step of displaying the indication information corresponding to the recognition result at the AR-HUD includes:
determining a target display position according to the identification result;
and displaying indication information corresponding to the identification result at the target display position of the virtual image display area of the AR-HUD.
6. The AR-HUD based recognition method of claim 5, wherein the indication information is used to indicate a location direction in which the target person is located; the step of determining the target display position according to the identification result comprises the following steps:
and if the identified target person is out of the virtual image display area of the AR-HUD, determining an edge area closest to the target person in the virtual image display area of the AR-HUD as the target display position.
7. The AR-HUD based identification method according to claim 6, further comprising, after the step of displaying the indication information corresponding to the identification result on the AR-HUD:
generating navigation prompt information according to the position of the target person;
and displaying the navigation prompt information in a superposition manner in a virtual image display area of the AR-HUD.
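As an illustrative sketch only (not part of the claims), the navigation prompt of claim 7 could be generated from the vehicle's and the target person's positions as a distance plus a coarse bearing. The coordinate frame (y forward, x right, in meters), the 15° "ahead" sector, and the prompt wording are all hypothetical choices.

```python
import math

def navigation_prompt(vehicle_xy, target_xy):
    """Build a simple prompt string from vehicle-relative geometry.
    Assumes a vehicle-centered frame: +y straight ahead, +x to the right."""
    dx = target_xy[0] - vehicle_xy[0]
    dy = target_xy[1] - vehicle_xy[1]
    dist = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dx, dy))  # 0 deg = straight ahead
    if abs(angle) < 15:
        side = "ahead"
    elif angle > 0:
        side = "to the right"
    else:
        side = "to the left"
    return f"Target person {dist:.0f} m {side}"
```

The returned string would then be superimposed in the virtual image display area of the AR-HUD.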
8. The AR-HUD-based recognition method of claim 1, wherein the facial image information is collected in advance through an in-vehicle occupant monitoring system or a data upload channel.
9. An AR-HUD-based recognition device, comprising a display unit, one or more processing units, and a storage unit, wherein the one or more processing units are communicatively connected to the display unit and the storage unit respectively;
the storage unit is configured to store instructions that, when executed by the one or more processing units, cause the one or more processing units to perform steps comprising:
acquiring facial image information corresponding to a target person;
identifying the target person according to the facial image information, wherein the target person is located outside the vehicle; and
displaying, on the display unit, indication information corresponding to the recognition result, wherein the indication information is used to indicate the position of the target person.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the AR-HUD-based recognition method of any one of claims 1-8.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the AR-HUD-based recognition method of any one of claims 1-8.
CN202210530226.7A 2022-05-16 2022-05-16 AR-HUD-based identification method, device, equipment and medium Pending CN117115874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210530226.7A CN117115874A (en) 2022-05-16 2022-05-16 AR-HUD-based identification method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117115874A true CN117115874A (en) 2023-11-24

Family

ID=88811554

Country Status (1)

CN CN117115874A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination