CN111294564A - Information display method and wearable device - Google Patents

Information display method and wearable device

Info

Publication number
CN111294564A
CN111294564A
Authority
CN
China
Prior art keywords
target
blind area
image
wearable device
vehicle
Prior art date
Legal status
Pending
Application number
CN202010139373.2A
Other languages
Chinese (zh)
Inventor
沈健春
皇旭东
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010139373.2A
Publication of CN111294564A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G02B 27/0172 - Head mounted characterised by optical features
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/005 - Traffic control systems for road vehicles including pedestrian guidance indicator

Abstract

An embodiment of the invention discloses an information display method and a wearable device. The method comprises: acquiring a target image that includes a target traveling vehicle; determining blind area information of the target traveling vehicle according to the target image, the blind area information including a blind area position; and displaying the blind area position through a virtual screen of the wearable device. This solves the problem in the related art that pedestrians cannot learn where a vehicle's blind area is and are therefore easily drawn under the vehicle, endangering their lives.

Description

Information display method and wearable device
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an information display method and wearable equipment.
Background
As a widely used means of transportation, the automobile makes road safety an important concern. For large motor vehicles, because the driver sits in a fixed driving position, the line of sight is easily blocked by the long vehicle body and a "visual blind area" arises. For example, other traveling vehicles or walking pedestrians may be present in the blind area; because the driver cannot see them directly, this creates a safety hazard while the vehicle is traveling.
At present, although there are many ways to let the driver accurately know the extent of the vehicle's visual blind area, so that the driver notices potential hazards within it, pedestrians still have no way to learn where the blind area is. As a result, pedestrians are easily drawn under the vehicle and put in danger.
Disclosure of Invention
The embodiment of the invention provides an information display method and a wearable device, aiming to solve the problem in the related art that pedestrians cannot learn where a vehicle's blind area is and are easily drawn under the vehicle and put in danger.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an information display method, which is applied to a wearable device, and the method may include:
acquiring a target image including a target traveling vehicle;
determining blind area information of a target running vehicle according to the target image, wherein the blind area information comprises a blind area position;
and displaying the blind area position through a virtual screen of the wearable device.
In a second aspect, an embodiment of the present invention provides a wearable device, where the wearable device may include:
an acquisition module for acquiring a target image including a target traveling vehicle;
the processing module is used for determining blind area information of the target running vehicle according to the target image, wherein the blind area information comprises a blind area position;
and the display module is used for displaying the blind area position through a virtual screen of the wearable device.
In a third aspect, an embodiment of the present invention provides a wearable device, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the information display method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program that, when executed by a computer, causes the computer to perform the information display method according to the first aspect.
In the embodiment of the invention, when a pedestrian walks on a road, a target image including a target traveling vehicle (such as a large motor vehicle) can be acquired in real time through a wearable device such as augmented reality glasses; the blind area position of the target traveling vehicle is then determined according to the target image and displayed through a virtual screen of the wearable device. In this way, the user can see, in real time, the blind area of a turning vehicle within the field of view. In addition, the augmented reality glasses can detect whether the pedestrian is inside a dangerous blind area and guide the pedestrian to actively stay away from it, avoiding danger. Unlike the traditional approach of displaying the blind area to the driver on an in-vehicle display, which lets the driver take preventive measures from the driver's perspective, this method lets the pedestrian clearly see the vehicle's blind area from the pedestrian's perspective, so that preventive measures are taken actively, traffic accidents are prevented, and the risk of collision between pedestrian and vehicle is greatly reduced.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a schematic view of an application scenario of an information display method according to an embodiment of the present invention;
fig. 2 is a flowchart of an information display method according to an embodiment of the present invention;
fig. 3 is a schematic distribution diagram of cameras on an AR glasses device according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a method for implementing information display according to an embodiment of the present invention;
fig. 5 is a schematic diagram of determining blind area information according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a wearable device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an information display method and an electronic device, aiming to solve the problem in the related art that pedestrians cannot learn where a vehicle's blind area is and are easily drawn under the vehicle, endangering their lives.
The information display method provided by the embodiment of the invention can be applied to the following application scenario, described in detail below:
When a user walks, or rides a personal mobility device such as a bicycle, at a curve (for example, a curve of 30 to 90 degrees), images within a preset shooting range are acquired through an electronic device such as a wearable device; whether the images include a target traveling vehicle meeting preset conditions, such as a large motor vehicle, is identified; and an image including the target traveling vehicle is determined as the target image. The blind area position of the target traveling vehicle is then determined according to the target image and displayed through a virtual screen of the wearable device, so that the user avoids entering it.
It should be noted that the wearable device in the embodiment of the invention may include an Augmented Reality (AR) glasses device, through which the user can see, in real time, the blind area of a turning vehicle within the field of view. The method can also detect whether the pedestrian is inside a dangerous blind area, so as to guide the pedestrian to actively stay away from it and avoid danger.
Besides the scenario above, the method provided by the embodiment of the invention can be applied to any scene in which a user holding a wearable device may encounter a traveling vehicle that has a blind area. With this method, the pedestrian can clearly see the vehicle's blind area, take preventive measures actively, prevent traffic accidents, and greatly reduce the risk of collision between pedestrian and vehicle.
It should be noted that the electronic device in the embodiment of the invention may include at least one of the following: a mobile phone, a tablet computer, a smart wearable device, or any other device capable of receiving information and displaying the blind area position.
Based on the application scenario, the following describes in detail an information display method provided by an embodiment of the present invention.
Fig. 2 is a flowchart of an information display method according to an embodiment of the present invention.
As shown in fig. 2, the information display method may include steps 210 to 230, as follows:
in step 210, a target image including a target traveling vehicle is acquired.
In the embodiment of the present invention, the target image may be acquired in at least one of the following manners, which are described below.
Mode 1: the wearable device captures images within a preset shooting range; identifies whether the images include a target traveling vehicle meeting a preset condition; and determines an image including the target traveling vehicle as the target image.
When the wearable device includes a first camera and a second camera, a first image is captured by the first camera and a second image by the second camera. The images include the first image and the second image, and the shooting ranges of the two cameras are different.
For example, when the wearable device is an AR glasses device, as shown in fig. 3, one camera is disposed on each of the left and right sides of the device, namely the first camera and the second camera, so that live-action images on both sides of the road can be captured; the target image is then obtained by recognizing these live-action images.
Mode 2: an electronic device such as a mobile phone captures images within a preset shooting range; identifies whether the images include a target traveling vehicle meeting a preset condition; and, if so, determines an image including the target traveling vehicle as the target image and sends it to the wearable device. Before the target image is sent, the mobile phone and the wearable device establish a communication connection.
Identifying whether the images include a target traveling vehicle meeting a preset condition may specifically include: recognizing at least one vehicle in the image and determining, among them, a target traveling vehicle meeting the preset condition. There may be one target traveling vehicle or several; when there are several, the blind area information of each can be determined separately.
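As a concrete illustration, the "preset condition" check above could be a simple filter over detector output. The sketch below is hypothetical: the patent does not specify the condition, so the vehicle labels, box format, and area threshold are invented for illustration only.

```python
# Hypothetical sketch of "identify a target traveling vehicle meeting a
# preset condition": keep only detections that look like large motor vehicles.
def select_target_vehicles(detections, min_box_area=50_000):
    """detections: list of dicts like {"label": "truck", "box": (x, y, w, h)}.

    Returns the detections whose label is a large-vehicle class and whose
    bounding box is large enough to matter at the current distance.
    """
    large_classes = {"truck", "bus", "trailer"}
    targets = []
    for d in detections:
        x, y, w, h = d["box"]
        if d["label"] in large_classes and w * h >= min_box_area:
            targets.append(d)
    return targets
```

Each returned detection would then be processed separately in step 220, matching the statement that blind area information can be determined per target traveling vehicle.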
And step 220, determining blind area information of the target running vehicle according to the target image, wherein the blind area information comprises a blind area position.
State parameters of the target traveling vehicle are obtained according to the target image, the state parameters including the turning angle and the wheelbase; the blind area information of the target traveling vehicle is then determined according to the turning angle and the wheelbase.
Further, in the embodiment of the present invention, the step of obtaining the state parameter of the target traveling vehicle according to the target image may specifically include: and carrying out three-dimensional modeling on the target running vehicle in the target image to obtain the state parameters of the target running vehicle.
In the embodiment of the present invention, determining the blind area information of the target traveling vehicle according to the turning angle and the wheelbase includes: when the blind area information further includes a blind area corresponding to the blind area position, obtaining the inner wheel difference area of the target traveling vehicle according to the turning angle and the wheelbase, and determining the inner wheel difference area as the blind area.
And step 230, displaying the blind area position through a virtual screen of the wearable device.
In one possible embodiment, the blind area information is processed in real time according to three-dimensional registration of natural feature points to obtain target information, and the target information is displayed distinctively on the target image through the virtual screen.
In this way, the processed blind area information can be displayed distinctively, with virtual and real content combined, on the virtual screen of the wearable device, so that the user can observe the blind area of the target traveling vehicle more clearly.
In addition, in another possible embodiment, after step 230, the display method may further include:
determining whether the wearable device meets an early warning condition according to the current traveling speed of the target traveling vehicle and the target distance between the wearable device and the blind area position;
and displaying early warning information through the virtual screen of the wearable device when the early warning condition is met.
Based on this, in one possible embodiment, besides displaying the early warning information, a safe travel route for the user can be determined according to the positions of the wearable device and the blind area, and presented to the user through the virtual screen of the wearable device.
It should be noted that the early warning condition mentioned above can be determined according to the moving speed of the target traveling vehicle, the moving speed of the wearable device, the distance between the wearable device and the blind area position, and the blind area information within a preset time period, and can be dynamically adjusted as these quantities are acquired in real time.
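One minimal way to realize such a speed- and distance-dependent condition is a look-ahead check. The sketch below is an assumption for illustration: the patent does not fix the exact formula, and the `horizon_s` and `margin_m` parameters are invented.

```python
def warning_triggered(vehicle_speed_mps, wearer_speed_mps,
                      dist_to_blind_area_m, horizon_s=3.0, margin_m=2.0):
    """Hypothetical early warning condition: return True if, at the current
    closing speed, the wearer could be inside the blind area (plus a safety
    margin) within the look-ahead horizon."""
    closing_speed = max(vehicle_speed_mps + wearer_speed_mps, 0.0)
    return dist_to_blind_area_m - closing_speed * horizon_s <= margin_m
```

Because the inputs are re-sampled every frame, the condition adjusts dynamically as vehicle speed, wearer speed, and distance change, matching the behavior described above.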
Thus, in the embodiment of the invention, when a pedestrian walks on the road, a target image including a target traveling vehicle (such as a large motor vehicle) can be acquired in real time through a wearable device such as augmented reality AR glasses; the blind area position of the target traveling vehicle is then determined according to the target image and displayed through the virtual screen of the wearable device.
The AR glasses can therefore detect whether the pedestrian is inside a dangerous blind area and, if so, warn the pedestrian in time, so that the pedestrian actively stays away from the blind area and danger is avoided. Unlike the traditional approach of displaying the blind area to the driver on an in-vehicle display, which lets the driver take preventive measures from the driver's perspective, this method lets the pedestrian clearly see the vehicle's blind area from the pedestrian's perspective, so that preventive measures are taken actively, traffic accidents are prevented, and the risk of collision between pedestrian and vehicle is greatly reduced.
To facilitate understanding of the method provided by the embodiment of the present invention, the information display method is illustrated below, taking the wearable device as an AR glasses device and the user as a pedestrian walking on the road.
Fig. 4 is a schematic flowchart of a method for implementing information display according to an embodiment of the present invention.
As shown in fig. 4, the method may include steps 401 to 410, which are specifically as follows:
step 401, collecting an image in a preset shooting range through AR glasses equipment.
The first image is collected through a first camera of the AR glasses equipment, and the second image is collected through a second camera of the AR glasses equipment. Here, the shooting ranges of the first camera and the second camera are different.
For example, the following steps are carried out: and acquiring the video image entering the identification range in real time through a binocular camera arranged on the AR glasses.
And 402, identifying whether the target running vehicle meeting the preset condition is included in the image.
The target traveling vehicle may be recognized in either of the following ways:
Mode 1: the image is recognized using a convolutional-neural-network image recognition method to determine whether it includes a target traveling vehicle meeting the preset condition.
Mode 2: the image is recognized using a wavelet-moment image recognition method to determine whether it includes a target traveling vehicle meeting the preset condition.
Following the example in step 401, the first and second images captured by the binocular camera are sent to the image processor, which recognizes them to determine whether a target traveling vehicle exists in the current field of view.
Step 403: among the captured images, an image including the target traveling vehicle is determined as the target image.
If the image is recognized to include a target traveling vehicle meeting the preset condition, i.e., a target traveling vehicle exists in the pedestrian's current field of view, the image including it is determined as the target image.
Otherwise, if the image does not include a target traveling vehicle meeting the preset condition, i.e., no target traveling vehicle exists in the pedestrian's current field of view, step 402 is executed again.
Step 404: three-dimensional modeling is performed on the target traveling vehicle in the target image to obtain its state parameters.
The state parameters of the target traveling vehicle include the turning angle α and the wheelbase L.
Step 405: the blind area information of the target traveling vehicle, including the blind area position and the blind area corresponding to it, is determined according to the turning angle and the wheelbase.
When the turning angle of the target traveling vehicle is not 0 degrees, the blind area information of the turning vehicle, such as the inside blind area information, is obtained from the turning angle α and the wheelbase L, and a three-dimensional image of the inside blind area is created based on it. Note that the blind area information may also include outside blind area information; whether the inside or the outside blind area information applies depends on where the pedestrian stands. As shown in fig. 5, when the pedestrian stands on the inside of the target traveling vehicle's turn, i.e., at point A, the inside blind area information is obtained; conversely, when the pedestrian stands on the outside, i.e., at point B, the outside blind area information is obtained.
As shown in fig. 5, the step of obtaining the vehicle's inside blind area information, i.e., the blind area, from the turning angle α and the wheelbase L in the embodiment of the present invention may include:
obtaining the inner wheel difference area of the target traveling vehicle according to the turning angle and the wheelbase, and determining the inner wheel difference area as the blind area. Specifically, the inner wheel difference area is given by equation (1):
S=L*tan(α/2) (1)
where α is the turning angle and L is the wheelbase, giving the inner wheel difference area S.
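Equation (1) can be evaluated directly. The sketch below simply mirrors the patent's formula, taking the angle in degrees; treating the result as the blind-area measure S is the patent's own convention.

```python
import math

def inner_wheel_difference_area(turn_angle_deg, wheelbase_m):
    """S = L * tan(alpha / 2), per equation (1) above.

    Returns 0 for a non-turning vehicle (alpha == 0), in which case the
    preset front/rear blind areas apply instead.
    """
    alpha = math.radians(turn_angle_deg)
    return wheelbase_m * math.tan(alpha / 2)
```

For example, a vehicle with a 6 m wheelbase turning through 90 degrees gives S = 6 * tan(45°) = 6.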
In addition, three-dimensional images of the blind area positions in front of and behind the target traveling vehicle are created according to the target image, the vehicle type of the target traveling vehicle, and other information. If the turning angle of the target traveling vehicle is 0, i.e., the vehicle is not turning, these three-dimensional images are created based on preset blind area positions in front of and behind the vehicle.
Step 406: the blind area information is processed in real time according to three-dimensional registration of natural feature points to obtain target information.
Here, based on virtual-real combined three-dimensional registration of natural feature points, the target information corresponding to the blind area information obtained in step 405 (such as the blind area position and the blind area) is determined; the target information can be understood as the size, direction, and position of the blind area's projection on the virtual screen.
Further, in the embodiment of the invention, obtaining the target information based on the three-dimensional registration technique of natural feature points proceeds as follows:
First, a first feature point set of the vehicle is extracted from a preset template image; then a second feature point set corresponding to the target traveling vehicle is extracted from each frame captured by the first and/or second camera; and the spatial position of the target traveling vehicle is tracked through the matching relationship between the two feature point sets, completing tracking registration.
Step 407: the target information is displayed distinctively on the target image through the virtual screen.
For example, the target information obtained in step 406 is fused with the target image and then projected for display through the virtual screen of the AR glasses device.
In one possible embodiment, the target traveling vehicle in the target image is specially marked in red on the virtual screen, and the target information, i.e., the dangerous blind area, is additionally highlighted, so that the pedestrian sees a scene in which virtual and real content are fused and can clearly see the blind area of the target traveling vehicle in the current field of view.
Step 408: whether the AR glasses device meets the early warning condition is determined according to the current traveling speed of the target traveling vehicle and the target distance between the AR glasses device and the blind area position.
In the embodiment of the invention, the current traveling speed of the target traveling vehicle and the target distance are processed, and whether the AR glasses device meets the early warning condition may be determined according to the current traveling speed of the target traveling vehicle, the walking speed of the pedestrian (equivalent to the moving speed of the AR glasses device), and the target distance.
In this way, whether the pedestrian is inside a dangerous blind area can be judged from the real-time motion states of pedestrian and vehicle.
Step 409: when the AR glasses device meets the early warning condition, the pedestrian is promptly warned through the AR glasses device.
If the shortest distance between the pedestrian and the blind area is less than the preset safety distance, the pedestrian is warned in at least one of the following ways:
(1) An alarm module built into the AR glasses body promptly warns the pedestrian.
(2) Early warning information is displayed through the virtual screen of the AR glasses device.
(3) The pedestrian is warned through a voice prompt together with the virtual screen of the AR glasses device.
(4) A flashing danger indicator is displayed through a projection display element of the AR glasses. Here, the projection display element may be part of the virtual screen or a separate display element.
Step 410: a safe route is shown to the pedestrian through the virtual screen of the AR glasses device.
A safe travel route for the pedestrian is determined according to the positions of the AR glasses device and the blind area, and is presented through the virtual screen of the AR glasses device.
Here, if the pedestrian is judged to be inside the dangerous blind area, the AR glasses device automatically generates a path to a safe area and displays it through the projection display device, i.e., the virtual screen, guiding the pedestrian to the safe area.
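A trivial stand-in for the route generation above is to head directly away from the blind area. The helper below is hypothetical, invented for illustration: a real planner would also respect obstacles and road geometry, and `clearance_m` is an assumed parameter.

```python
import math

def escape_waypoint(pedestrian_xy, blind_area_centroid_xy, clearance_m=3.0):
    """Return a point clearance_m further away from the blind-area centroid,
    along the line from the centroid through the pedestrian."""
    px, py = pedestrian_xy
    cx, cy = blind_area_centroid_xy
    dx, dy = px - cx, py - cy
    norm = math.hypot(dx, dy) or 1.0  # degenerate case: pedestrian at centroid
    return (px + clearance_m * dx / norm, py + clearance_m * dy / norm)
```

The returned waypoint would then be projected onto the virtual screen as the start of the displayed path to the safe area.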
Thus, in the embodiment of the invention, a pedestrian walking on the road can see in real time, through the worn AR glasses, the highlighted blind area of a turning vehicle within the field of view. The worn AR glasses can also detect in real time whether the pedestrian is inside the danger area; if so, the pedestrian is warned in time and a route to a safe area is provided, so that the pedestrian actively stays away from the danger area and danger is avoided. Unlike the traditional approach of displaying the blind area to the driver on an in-vehicle display so that the driver takes preventive measures, this method lets the pedestrian clearly see the vehicle's blind area from the pedestrian's perspective, so that preventive measures are taken actively and traffic accidents are prevented.
Based on the information display method above, an embodiment of the present invention further provides a wearable device, described in detail with reference to fig. 6.
Fig. 6 is a schematic structural diagram of a wearable device according to an embodiment of the present invention.
As shown in fig. 6, the wearable device 60 may include:
the acquisition module 601 is configured to acquire a target image including a target traveling vehicle.
The processing module 602 is configured to determine blind area information of the target traveling vehicle according to the target image, where the blind area information includes a blind area position.
The display module 603 is configured to display the blind area position through a virtual screen of the wearable device.
The processing module 602 in the embodiment of the present invention may be specifically configured to: obtain state parameters of the target traveling vehicle according to the target image, the state parameters including the turning angle and the wheelbase; and determine the blind area information of the target traveling vehicle according to the turning angle and the wheelbase.
Further, the processing module 602 may be specifically configured to: when there are several target traveling vehicles, three-dimensionally model each target traveling vehicle in the target image to obtain its state parameters; and, when the blind area information further includes a blind area corresponding to the blind area position, obtain the inner wheel difference area of the target traveling vehicle according to the turning angle and the wheelbase and determine it as the blind area.
In addition, the obtaining module 601 in the embodiment of the present invention may be specifically configured to collect, by a wearable device, an image within a preset shooting range; identifying whether a target running vehicle meeting a preset condition is included in the image; and if the image comprises the target running vehicle meeting the preset condition, determining the image comprising the target running vehicle as the target image.
Further, in a case where the wearable device includes a first camera and a second camera, the acquisition module 601 may be specifically configured to acquire a first image through the first camera and a second image through the second camera, where the images include the first image and the second image, and the shooting ranges of the first camera and the second camera are different.
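The patent leaves the "preset condition" open. A minimal sketch of one plausible reading, assuming a hypothetical object detector that returns a class label and a bounding-box size (none of these names or thresholds come from the patent), is:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str           # class name from a hypothetical vehicle detector
    box_height_px: int   # bounding-box height, a rough proxy for proximity

# Hypothetical preset condition: the image only becomes a "target image"
# when it contains a large motor vehicle that appears close enough
# (i.e. large enough in the frame).
LARGE_VEHICLES = {"truck", "bus", "trailer"}

def is_target_image(detections: list, min_height_px: int = 120) -> bool:
    return any(
        d.label in LARGE_VEHICLES and d.box_height_px >= min_height_px
        for d in detections
    )
```

Under this reading, frames containing only passenger cars, or a truck that is still far away, would be discarded before the blind-area computation runs.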
The display module 603 in the embodiment of the present invention may be specifically configured to process the blind area information in real time through three-dimensional registration based on natural feature points to obtain target information, and to display the target information on the target image in a visually distinguished manner through the virtual screen.
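The registration pipeline itself (natural-feature-point tracking) is beyond a short example, but the final step of drawing a registered blind-area point on the virtual screen reduces to a pinhole projection. A minimal sketch, with illustrative camera intrinsics that in practice would come from calibration, and a point assumed to be already transformed into camera coordinates by the registration step:

```python
def project_to_screen(point_cam, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a blind-area boundary point onto the screen.

    point_cam: (x, y, z) in metres, in the camera frame.  The world->
    camera transform would be supplied by the three-dimensional
    registration step; the intrinsics here are assumed values.
    Returns pixel coordinates (u, v), or None if the point is behind
    the viewer and cannot be drawn.
    """
    x, y, z = point_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A blind-area corner 2 m ahead and 1 m to the right of the camera:
uv = project_to_screen((1.0, 0.0, 2.0))   # lands right of screen centre
```

Projecting every boundary point of the inner wheel difference area this way yields a polygon that the virtual screen can overlay, visually distinguished, on the target image.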
In addition, in a possible embodiment, the wearable device 60 may further include a determining module 604, configured to determine whether an early warning condition is met according to the current traveling speed of the target traveling vehicle and the target distance between the wearable device and the blind area position. The display module 603 may be further configured to display early warning information through the virtual screen of the wearable device in a case where the early warning condition is met.
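The patent only names the two inputs of the early warning condition (vehicle speed and the device-to-blind-area distance), not the rule itself. A minimal sketch of one plausible rule, with illustrative thresholds that are assumptions rather than the patent's values:

```python
def should_warn(vehicle_speed_mps: float,
                distance_to_blind_area_m: float,
                warn_horizon_s: float = 5.0,
                min_distance_m: float = 2.0) -> bool:
    """Hypothetical early-warning rule: warn when the user is already
    inside a safety margin of the blind area, or when, at the vehicle's
    current speed, the blind area would reach the user within
    warn_horizon_s seconds."""
    if distance_to_blind_area_m <= min_distance_m:
        return True
    if vehicle_speed_mps <= 0:
        return False
    return distance_to_blind_area_m / vehicle_speed_mps <= warn_horizon_s
```

For example, a truck 30 m away moving at 10 m/s would trigger the warning (3 s to reach the user), while the same truck crawling at 1 m/s would not.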
Based on this, the display module 603 may be further configured to determine a safe route for the user according to the positions of the wearable device and the blind area, and to present the safe route to the user through the virtual screen of the wearable device.
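The patent does not describe the route computation. The simplest possible sketch, using only the two positions it names (the wearable device and the blind area) and ignoring obstacles, is a step directly away from the blind-area centre; everything here is illustrative:

```python
import math

def safe_step(user_xy, blind_centre_xy, step_m: float = 1.0):
    """Return the next waypoint of a trivial 'safe route': a step of
    step_m metres directly away from the blind-area centre.  A real
    implementation would plan around obstacles; this only shows how
    the two positions named in the patent could drive the guidance."""
    dx = user_xy[0] - blind_centre_xy[0]
    dy = user_xy[1] - blind_centre_xy[1]
    norm = math.hypot(dx, dy) or 1.0   # guard against coincident points
    return (user_xy[0] + step_m * dx / norm,
            user_xy[1] + step_m * dy / norm)
```

Repeating this step until `should_warn`-style checks clear would trace the escape path that the virtual screen presents to the user.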
In the embodiment of the invention, when a pedestrian walks on a road, a target image including a target traveling vehicle (for example, a large motor vehicle) can be acquired in real time through a wearable device such as augmented reality (AR) glasses; the blind area position of the target traveling vehicle is then determined according to the target image, and the blind area position is displayed through the virtual screen of the wearable device. In this way, the user can see, in real time, the blind area of a turning vehicle within the field of view. In addition, the AR glasses can detect whether the pedestrian is within a dangerous blind area and guide the pedestrian to actively move away from it, avoiding danger. Unlike the traditional approach of showing the blind area to the driver on an in-vehicle display so that the driver takes preventive measures, this method lets the pedestrian clearly see the vehicle's blind area from the pedestrian's own perspective and take preventive measures actively, which helps prevent traffic accidents and greatly reduces the risk of collision between pedestrian and vehicle.
Embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a computer, the computer is caused to perform the steps of the information display method according to the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An information display method is applied to wearable equipment and is characterized by comprising the following steps:
acquiring a target image including a target traveling vehicle;
determining blind area information of the target running vehicle according to the target image, wherein the blind area information comprises a blind area position;
and displaying the blind area position through a virtual screen of the wearable device.
2. The method according to claim 1, wherein the determining blind spot information of the target traveling vehicle from the target image includes:
obtaining state parameters of the target running vehicle according to the target image, wherein the state parameters comprise a turning angle and a wheelbase;
and determining the blind area information of the target running vehicle according to the turning angle and the wheelbase.
3. The method according to claim 2, wherein the obtaining of the state parameter of the target traveling vehicle from the target image includes:
in a case where there are a plurality of target running vehicles, performing three-dimensional modeling on each of the plurality of target running vehicles in the target image to respectively obtain the state parameters of the target running vehicles.
4. The method according to claim 2 or 3, wherein the blind area information further includes a blind area corresponding to the blind area position;
the determining the blind area information of the target running vehicle according to the turning angle and the wheelbase comprises the following steps:
obtaining the inner wheel difference area of the target running vehicle according to the turning angle and the wheelbase;
and determining the inner wheel difference area as the blind area.
5. The method of claim 1, wherein the acquiring a target image including a target running vehicle comprises:
acquiring an image within a preset shooting range through the wearable device;
identifying whether a target running vehicle meeting a preset condition is included in the image;
and if the image comprises the target running vehicle meeting the preset condition, determining the image comprising the target running vehicle as the target image.
6. The method of claim 5, wherein the acquiring an image within a preset shooting range through the wearable device comprises:
acquiring a first image through the first camera and acquiring a second image through the second camera under the condition that the wearable device comprises the first camera and the second camera;
the images comprise the first image and the second image, and shooting ranges of the first camera and the second camera are different.
7. The method of claim 1, wherein displaying the blind spot location via a virtual screen of the wearable device comprises:
processing the blind area information in real time according to the three-dimensional registration of the natural feature points to obtain target information;
and displaying the target information on the target image in a distinguishing way through the virtual screen.
8. The method of claim 1, wherein after the displaying the blind location via a virtual screen of the wearable device, the method further comprises:
determining whether an early warning condition is met according to the current running speed of the target running vehicle and the target distance between the wearable device and the blind area position;
and displaying early warning information through a virtual screen of the wearable device under the condition that the early warning condition is met.
9. The method of claim 8, wherein after the early warning condition is met, the method further comprises:
determining a safe route for a user to travel according to the wearable device and the blind area position;
presenting the safe route to the user through a virtual screen of the wearable device.
10. A wearable device, comprising:
an acquisition module for acquiring a target image including a target traveling vehicle;
the processing module is used for determining blind area information of the target running vehicle according to the target image, wherein the blind area information comprises a blind area position;
and the display module is used for displaying the blind area position through a virtual screen of the wearable device.
CN202010139373.2A 2020-03-03 2020-03-03 Information display method and wearable device Pending CN111294564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010139373.2A CN111294564A (en) 2020-03-03 2020-03-03 Information display method and wearable device


Publications (1)

Publication Number Publication Date
CN111294564A true CN111294564A (en) 2020-06-16

Family

ID=71022581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010139373.2A Pending CN111294564A (en) 2020-03-03 2020-03-03 Information display method and wearable device

Country Status (1)

Country Link
CN (1) CN111294564A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101816477A (en) * 2010-03-25 2010-09-01 佘龙华 Safety school bag for personnel traffic safety
CN105632245A (en) * 2016-03-14 2016-06-01 桂林航天工业学院 Vehicle approaching reminding device and method
US20160207454A1 (en) * 2015-01-16 2016-07-21 Ford Global Technologies, Llc Haptic vehicle alert based on wearable device
CN107449440A (en) * 2016-06-01 2017-12-08 北京三星通信技术研究有限公司 The display methods and display device for prompt message of driving a vehicle
CN108275145A (en) * 2018-02-12 2018-07-13 北汽福田汽车股份有限公司 Alarm method, system and the vehicle of vehicle
CN108932868A (en) * 2017-05-26 2018-12-04 奥迪股份公司 The danger early warning system and method for vehicle


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184920A (en) * 2020-10-12 2021-01-05 中国联合网络通信集团有限公司 AR-based skiing blind area display method and device and storage medium
CN112184920B (en) * 2020-10-12 2023-06-06 中国联合网络通信集团有限公司 AR-based skiing blind area display method, device and storage medium
CN113358113A (en) * 2021-06-18 2021-09-07 刘治昊 Navigation device based on clothes hanger reflection principle
CN113467502A (en) * 2021-07-24 2021-10-01 深圳市北斗云信息技术有限公司 Unmanned aerial vehicle driving examination system
CN113771869A (en) * 2021-09-22 2021-12-10 上海安亭地平线智能交通技术有限公司 Vehicle control method and method for controlling vehicle based on wearable device
CN113771869B (en) * 2021-09-22 2024-03-15 上海安亭地平线智能交通技术有限公司 Vehicle control method and method for controlling vehicle based on wearable device

Similar Documents

Publication Publication Date Title
CN111294564A (en) Information display method and wearable device
JP7332726B2 (en) Detecting Driver Attention Using Heatmaps
CN108369780B (en) Vision recognition assistance system and detection system for object to be viewed
EP1961622B1 (en) Safety-travel assistance device
EP2544449B1 (en) Vehicle perimeter monitoring device
JP7188394B2 (en) Image processing device and image processing method
JP4707067B2 (en) Obstacle discrimination device
JP4872245B2 (en) Pedestrian recognition device
JP2022520544A (en) Vehicle intelligent driving control methods and devices, electronic devices and storage media
US20130300872A1 (en) Apparatus and method for displaying a blind spot
US20120320212A1 (en) Surrounding area monitoring apparatus for vehicle
US20150015384A1 (en) Object Detection Device
CN111414796A (en) Adaptive transparency of virtual vehicles in analog imaging systems
CN106143309A (en) A kind of vehicle blind zone based reminding method and system
CN107392092B (en) A kind of intelligent vehicle road ahead environment perspective cognitive method based on V2V
JP7163748B2 (en) Vehicle display control device
CN107408338A (en) Driver assistance system
CN112896159A (en) Driving safety early warning method and system
CN111052174A (en) Image processing apparatus, image processing method, and program
CN108482367A (en) A kind of method, apparatus and system driven based on intelligent back vision mirror auxiliary
JP5539250B2 (en) Approaching object detection device and approaching object detection method
CN110544368A (en) fatigue driving augmented reality early warning device and early warning method
EP3456574A1 (en) Method and system for displaying virtual reality information in a vehicle
JP5003473B2 (en) Warning device
JP2020095466A (en) Electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200616