CN116533881A - Vehicle vision expansion method and related equipment - Google Patents
- Publication number
- CN116533881A (application number CN202310471235.8A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- real
- information
- user
- image information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B60R1/00 — Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems
- B60R1/22 — … for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23 — … with a predetermined field of view
- B60R1/27 — … providing all-round vision, e.g. using omnidirectional cameras
- B60R2300/00 — Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/105 — … using multiple cameras
- B60R2300/30 — … characterised by the type of image processing
- B60R2300/60 — … monitoring and displaying vehicle exterior scenes from a transformed perspective
- B60R2300/802 — … monitoring and displaying vehicle exterior blind spot views
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a field-of-view expansion method for a vehicle and related equipment, relates to the field of vehicle control, and mainly aims to solve the problem that existing vehicle perspective designs struggle to provide a panoramic image that truly matches the driver's real field of view. The method comprises the following steps: determining spatial position information and gaze direction information of a user relative to the vehicle based on at least two spatial positioning sensors; acquiring real-time external image information of a target expansion area; and projecting the real-time external image information onto the target expansion area based on the user's spatial position information and gaze direction information relative to the vehicle, thereby expanding the field of view through the target expansion area. The invention is used in the field-of-view expansion process of a vehicle.
Description
Technical Field
The invention relates to the field of vehicle control, and in particular to a vehicle field-of-view expansion method and related equipment.
Background
Vehicle perspective refers to presenting images of a vehicle's blind zones to the user. A transparent-chassis design, for example, lets the driver intuitively see the terrain and obstacles under the chassis and adjust forward, reverse, and turning maneuvers accordingly. This helps the driver avoid driving blind, and the unnecessary damage that driving blind can cause, which is especially valuable on unpaved roads.
However, existing vehicle perspective systems focus on stitching the signals acquired by multiple cameras and displaying them on a central control screen or similar display, which is essentially no different from a 360-degree panoramic view. They ignore the fact that no camera's spatial position can coincide with the position of the human eye, so even when the vehicle body causes no occlusion there is still an offset between what a camera observes and what the eye observes. For example, while the vehicle is moving, the view captured by a front camera corresponds to a position that the driver's eyes will only reach later. Simply stitching the camera views and displaying them to the user therefore cannot provide a panoramic image that fully coincides with the user's real field of view.
Disclosure of Invention
In view of the above, the present invention provides a field-of-view expansion method for a vehicle and related equipment, mainly aimed at solving the problem that conventional vehicle perspective designs struggle to provide a panoramic image that matches the driver's actual field of view.
To solve at least one of the above problems, in a first aspect, the present invention provides a method for expanding a field of view of a vehicle, including:
determining spatial position information and gaze direction information of a user relative to the vehicle based on at least two spatial positioning sensors;
acquiring real-time external image information of a target expansion area;
and projecting the real-time external image information to the target expansion area based on the spatial position information and the sight line direction information of the user relative to the vehicle, so as to realize the visual field expansion of the target expansion area.
Optionally, at least one spatial positioning sensor is arranged on each side of the user's viewing-angle centerline.
Optionally, the determining of the spatial position information and gaze direction information of the user relative to the vehicle based on at least two spatial positioning sensors includes:
respectively acquiring at least two real-time images of the user's head based on the at least two spatial positioning sensors;
acquiring the eyeball position, pupil angle, and face angle in each of the at least two real-time head images based on an OpenCV face recognition algorithm;
and comparing the eyeball positions, pupil angles, and face angles across the at least two real-time head images based on an eyeball focusing algorithm, to determine the spatial position information and gaze direction information of the user relative to the vehicle.
Optionally, an image acquisition device is disposed outside the target expansion area, and acquiring the real-time external image information of the target expansion area includes:
acquiring the real-time external image information from the image acquisition device of the target expansion area.
Optionally, the method further comprises:
acquiring a longitudinal inclination angle of the vehicle based on a body angle sensor of the vehicle;
acquiring gaze direction information of the user when the longitudinal inclination angle of the vehicle is larger than a preset angle;
and correcting the display angle of the real-time external image information based on the longitudinal inclination angle and the gaze direction information.
Optionally, the method further comprises:
acquiring the running speed of the vehicle based on a speed sensor of the vehicle;
determining an image lag time based on the running speed and the distance between the image acquisition device and the user's eyes;
and shifting the time axis of the real-time external image information by the image lag time, so that the real-time external image information is synchronized with the view acquired by the user's own eyes.
Optionally, the method further comprises:
acquiring the definition (clarity) of the real-time external image information;
acquiring current weather information when the definition of the real-time external image information is lower than a preset definition;
sending the user a request to disable the field-of-view expansion when the current weather information indicates low external visibility;
and sending the user an image acquisition device detection request when the current weather information indicates high external visibility.
In a second aspect, an embodiment of the present invention further provides a view expansion device for a vehicle, including:
a determining unit for determining spatial position information and gaze direction information of a user with respect to the vehicle based on at least two spatial positioning sensors;
an acquisition unit for acquiring real-time external image information of the target extension area;
and a projection unit configured to project the real-time external image information to the target expansion area based on spatial position information and line-of-sight direction information of the user with respect to the vehicle, so as to achieve a field of view expansion of the target expansion area.
In order to achieve the above object, according to a third aspect of the present invention, there is provided a computer-readable storage medium including a stored program, wherein the program, when executed by a processor, implements the steps of the above vehicle field-of-view expansion method.
In order to achieve the above object, according to a fourth aspect of the present invention, there is provided an electronic device including at least one processor, and at least one memory connected to the processor; the processor is used for calling the program instructions in the memory and executing the steps of the vehicle vision expansion method.
By means of the above technical solution, the vehicle field-of-view expansion method and related equipment address the problem that conventional vehicle perspective designs struggle to provide a panoramic image that matches the driver's real field of view. The method determines the spatial position information and gaze direction information of a user relative to the vehicle based on at least two spatial positioning sensors; acquires real-time external image information of a target expansion area; and projects the real-time external image information onto the target expansion area based on the user's spatial position and gaze direction relative to the vehicle, thereby expanding the field of view through the target expansion area. In this solution, by determining the spatial position and gaze direction of the user inside the vehicle, the screen imaging can be optimized to better match the direction of the user's gaze when combining the real view of the human eye with the camera image.
Accordingly, the field-of-view expansion device, electronic device, and computer-readable storage medium for a vehicle provided by the embodiments of the invention achieve the same technical effects.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be more clearly understood and implemented according to the contents of the specification, and to make the above and other objects, features, and advantages of the invention more readily apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a schematic flow chart of a method for expanding a field of view of a vehicle according to an embodiment of the present invention;
fig. 2 is a block diagram schematically showing the composition of a view expansion device for a vehicle according to an embodiment of the present invention;
fig. 3 is a schematic block diagram showing the composition of a visual field expansion electronic device for a vehicle according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to solve the problem that conventional vehicle perspective designs struggle to provide a panoramic image that truly matches the driver's real field of view, an embodiment of the present invention provides a vehicle field-of-view expansion method. As shown in fig. 1, the method includes:
s101, determining spatial position information and sight line direction information of a user relative to a vehicle based on at least two spatial positioning sensors;
the two-dimensional face image lacks three-dimensional information, is easy to be influenced by illumination, gesture, expression and the like, and can be subjected to three-dimensional fusion by adopting at least two spatial positioning sensors to recover a three-dimensional face model, wherein the three-dimensional information can be used for gesture, illumination, transformation and generation of the model, and the spatial positioning sensors can be cameras and the like. The embodiment of the invention can determine the spatial position information and the sight direction information of the user relative to the vehicle based on at least two spatial positioning sensors, and the information is used for optimizing the subsequent angle for imaging the view of the screen.
S102, acquiring real-time external image information of a target expansion area;
the image sensor can be an ultra-wide angle camera, such as a fish-eye ultra-visual field camera, for collecting external image information, or a camera with a four-way rotation function, so that a larger and more limited visual field is obtained, and real-time external image information of a target expansion area is obtained.
And S103, projecting the real-time external image information to the target expansion area based on the spatial position information and the sight line direction information of the user relative to the vehicle so as to realize the visual field expansion of the target expansion area.
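Step S103 must decide which portion of the outside scene to show on the panel so that it lines up with the user's eye. A minimal pure-Python sketch of that geometry is given below; the function name, the vehicle coordinate frame, and the planar small-angle approximation are all assumptions for illustration, not details taken from the patent:

```python
import math

def viewport_for_eye(eye_pos, panel_center, panel_width, panel_height):
    """Compute the angular window of the outside scene that the display
    panel occludes, as seen from the user's eye.

    eye_pos and panel_center are (x, y, z) in metres in a vehicle frame
    with x pointing forward.  Returns (h_fov_deg, v_fov_deg, yaw_deg,
    pitch_deg): the field of view the panel must reproduce, and the
    direction of the panel centre relative to the eye (i.e. which part
    of the wide-angle camera frame to crop and display).
    """
    dx = panel_center[0] - eye_pos[0]
    dy = panel_center[1] - eye_pos[1]
    dz = panel_center[2] - eye_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Angular size of the panel from the eye (flat-panel approximation).
    h_fov = 2 * math.degrees(math.atan2(panel_width / 2, dist))
    v_fov = 2 * math.degrees(math.atan2(panel_height / 2, dist))
    # Direction of the panel centre from the eye.
    yaw = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, math.sqrt(dx * dx + dy * dy)))
    return h_fov, v_fov, yaw, pitch
```

For an eye 0.6 m behind a 0.4 m-wide panel, the panel subtends roughly a 37-degree horizontal window, so that slice of the exterior camera image would be warped onto the screen; as the eye moves, the window moves with it, which is what distinguishes this scheme from a static 360-degree stitch.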
By means of the above technical solution, the vehicle field-of-view expansion method addresses the problem that conventional vehicle perspective designs struggle to provide a panoramic image consistent with the driver's real field of view. It determines the spatial position information and gaze direction information of the user relative to the vehicle based on at least two spatial positioning sensors; acquires real-time external image information of the target expansion area; and projects the real-time external image information onto the target expansion area based on the user's spatial position and gaze direction relative to the vehicle, thereby expanding the field of view through the target expansion area. In this solution, by determining the spatial position and gaze direction of the user inside the vehicle, the screen imaging can be optimized to better match the direction of the user's gaze when combining the real view of the human eye with the camera image.
In one embodiment, at least one spatial positioning sensor is disposed on each side of the user's viewing angle centerline.
Taking binocular positioning as an example: with at least two spatial positioning sensors, such as cameras or locators, at least one sensor is arranged on each side of the user's viewing-angle centerline. Because the relative positions of the two cameras are known while they operate, the user's spatial position information and gaze direction information relative to the vehicle can be determined.
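The core of binocular positioning is standard stereo triangulation: once the two cameras are calibrated and their baseline is known, the depth of a facial landmark follows from its horizontal disparity between the two images. A minimal sketch, assuming rectified cameras with equal focal length (the function name and parameters are illustrative):

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth of a facial landmark (e.g. an eyeball centre) from two
    rectified cameras:  Z = f * B / d,  where f is the focal length in
    pixels, B the camera baseline in metres, and d the horizontal
    disparity of the landmark between the left and right images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("landmark must appear further left in the left image")
    return f_px * baseline_m / disparity
```

With an 800 px focal length, a 12 cm baseline, and a 16 px disparity, the landmark lies 6 m away; in-cabin distances produce much larger disparities, and the same relation gives the driver's head position in the vehicle frame.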
In one embodiment, the determining spatial position information and gaze direction information of the user relative to the vehicle based on the at least two spatial positioning sensors includes:
respectively acquiring real-time images of at least two user heads based on the at least two spatial positioning sensors;
acquiring eyeball positions, pupil angles and face angles in the real-time images of the heads of the at least two users respectively based on an opencv face recognition algorithm;
and comparing eyeball positions, pupil angles and face angles in the real-time images of the heads of the at least two users based on an eyeball focusing algorithm to determine the spatial position information and the sight direction information of the users relative to the vehicle.
The embodiment of the invention calibrates the two cameras, determines the coordinate conversion relationship between the imaging plane and the user plane, and determines the ratio between pixel distance on the camera imaging plane and actual physical distance. By collecting the user's eyeball position, pupil angle, and face angle in the real-time images, and applying the coordinate conversion relationship and the pixel-to-distance ratio, the user's image coordinates are converted into actual coordinates, thereby locating the user's spatial position relative to the vehicle. Furthermore, a geometric model can be established under natural illumination, taking the line connecting the eyeball center and the pupil center as the user's gaze direction, so as to determine the gaze direction information.
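The two conversions described above can be sketched directly: a pixel-to-metric mapping from the calibrated scale factor, and a gaze vector taken as the line from eyeball centre through pupil centre. Both helpers below are illustrative (the names, the single uniform scale factor, and the 3-D point format are assumptions; a real system would use full camera intrinsics, e.g. via OpenCV calibration):

```python
import math

def pixel_to_plane(u, v, scale_m_per_px, origin_uv):
    """Convert image coordinates (u, v) of a detected landmark into metres
    on the calibrated user plane, using the pixel-to-distance ratio
    obtained during camera calibration."""
    return ((u - origin_uv[0]) * scale_m_per_px,
            (v - origin_uv[1]) * scale_m_per_px)

def gaze_direction(eyeball_center, pupil_center):
    """Unit vector from the eyeball centre through the pupil centre
    (both (x, y, z) in metres), used as the user's gaze direction."""
    d = [p - e for e, p in zip(eyeball_center, pupil_center)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]
```

For instance, an eyeball centre at the origin and a pupil centre 12 mm straight ahead of it yield the unit gaze vector (0, 0, 1), i.e. the user is looking along the camera axis.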
In one embodiment, an image capturing device is disposed outside the target expansion area, and the acquiring real-time external image information of the target expansion area includes:
and the image acquisition equipment based on the target expansion area acquires the real-time external image information.
The target expansion area may be the vehicle chassis, the A/B pillars, a door, the engine hood, and so on, with an image sensor disposed at each target expansion area. For example, when the user wants the engine hood to appear transparent, the real-time external image information is acquired from the image sensor mounted outside the hood, thereby expanding the field of view through that target expansion area.
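The per-area camera selection amounts to a routing table from expansion area to the exterior camera serving it. A trivial sketch, where every area name and camera identifier is hypothetical:

```python
# Hypothetical routing table: each expandable area is served by the
# camera mounted on its exterior side.
AREA_CAMERAS = {
    "chassis": "cam_under_0",
    "hood": "cam_front_0",
    "a_pillar_left": "cam_front_left",
    "door_left": "cam_side_left",
}

def camera_for_area(area):
    """Return the camera serving a target expansion area."""
    try:
        return AREA_CAMERAS[area]
    except KeyError:
        raise ValueError(f"no exterior camera configured for area {area!r}")
```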
In one embodiment, the method further comprises:
acquiring a longitudinal inclination angle of the vehicle based on a body angle sensor of the vehicle;
acquiring sight direction information of a user under the condition that the longitudinal inclination angle of the vehicle is larger than a preset angle;
and correcting a display angle of the real-time external image information based on the vertical tilt angle and the line-of-sight direction information.
For example, the body angle sensor may consist of a photocoupler element and a slotted disc. The photocoupler comprises a light-emitting diode and a phototransistor, with the slotted disc placed between them. The disc has a number of small holes, and when the vehicle's angle changes, the disc rotates with the vehicle. The phototransistor switches according to the light passing through the holes and outputs a digital pulse signal, from which the vehicle's electronic control unit can determine the rotation angle, direction, and speed.
It can be appreciated that the user's line of sight may be partially obstructed when the vehicle is climbing, and is wider when descending. When the vehicle climbs a steep slope and reaches the crest, the driver's line of sight can leave the road surface, creating blind areas.
Furthermore, when the vehicle travels on a steep slope, the embodiment of the invention can judge in advance whether the driver's eyes would, by habit, look up toward the sky, so that the external image information can be adjusted in advance and does not drift away from the road surface.
Therefore, the embodiment of the invention monitors the vehicle's longitudinal inclination angle. When the longitudinal inclination angle exceeds a preset angle, the area observable to the user changes significantly; at that point, the user's gaze direction information is acquired and the display angle of the real-time external image information is corrected based on it, providing the user with a panoramic image that better matches the real field of view.
In one embodiment, the method further comprises:
acquiring a running speed of a vehicle based on a speed sensor of the vehicle;
determining an image lag time based on the travel speed and the distance between the image acquisition device and the user's eyes;
and adjusting a time axis of the real-time external image information based on the image lag time so as to synchronize the real-time external image information with the image information acquired by the human eyes of the user.
For example, owing to the difference in spatial position, the view obtained by the human eye generally lags behind the image obtained by the camera. The current vehicle speed can be obtained from the speed sensor, from which the lag time of the eye's view relative to the camera image can be calculated and used to process the acquired real-time external image information. In this way, the difference on the time axis between the camera view and the eye's natural view can be eliminated while the vehicle is moving, yielding a combined view with no sense of discontinuity.
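The synchronization step reduces to two small computations: the lag is the time the vehicle takes to cover the camera-to-eye distance, and the frame to display is the buffered frame closest to "now minus lag". A sketch under those assumptions (names and the simple buffer format are illustrative):

```python
def image_lag_seconds(camera_to_eye_m, speed_mps):
    """Time for the vehicle to cover the camera-to-eye distance:
    the amount by which the camera view leads the eye's natural view."""
    if speed_mps <= 0:
        return 0.0  # stationary: no lag correction needed
    return camera_to_eye_m / speed_mps

def frame_for_eye(frames, now_s, lag_s):
    """Pick the buffered frame whose timestamp best matches (now - lag).
    `frames` is a non-empty list of (timestamp_s, frame) tuples."""
    target = now_s - lag_s
    return min(frames, key=lambda tf: abs(tf[0] - target))[1]
```

At 20 m/s (72 km/h) with a front camera 2 m ahead of the driver's eyes, the lag is 100 ms, so the system would display the frame captured 100 ms ago rather than the newest one.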
Furthermore, by cropping the real-time external image information, the embodiment of the invention lets the human eye perceive the same viewing angle and viewing distance through the screen as through the windshield. The screen imaging is synchronized and seamlessly spliced with the actual field of view of the eye through computation, achieving an AR-style visual fusion: the picture shown on the display screen is a computed video stream, giving the visual effect that the part of the vehicle body in front of the display screen disappears entirely from view.
In one embodiment, the method further comprises:
acquiring the definition of the real-time external image information;
acquiring current weather information under the condition that the definition of the real-time external image information is lower than a preset definition;
under the condition that the current weather information reflects that the visibility of the external environment is low, sending a view expansion closing request to a user;
and sending an image acquisition device detection request to a user under the condition that the current weather information reflects that the visibility of the external environment is high.
For example, if the definition of the external image information is lower than the preset definition, the cause is investigated from two directions: the vehicle itself and the external environment. The current weather information is checked first; if external visibility is low, the low definition of the real-time external image information is attributed to an external factor, and the user is asked whether to disable the field-of-view expansion while the external image remains unclear.
Conversely, if the current weather information indicates high external visibility, the low definition of the real-time external image information is attributed to an internal factor, and an image acquisition device detection request is sent to the user so that the user can check which camera or image processing algorithm in the vehicle has a problem.
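This diagnostic branch can be sketched end to end: a blur score for the camera image plus the weather-based decision. A common blur measure is the variance of a Laplacian filter (low variance means few sharp edges, i.e. a blurry image); the pure-Python version below and all thresholds and message names are illustrative assumptions, not values from the patent:

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over a 2-D greyscale image
    (list of rows of ints); low values indicate a blurry image."""
    vals = []
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def handle_low_clarity(clarity, min_clarity, visibility_km,
                       min_visibility_km=1.0):
    """Decide which prompt to send when the external image is unclear."""
    if clarity >= min_clarity:
        return "ok"
    if visibility_km < min_visibility_km:
        return "ask_user_disable_expansion"   # external cause: weather
    return "ask_user_check_cameras"           # internal cause: hardware/algorithm
```

A uniform image scores zero, an image with edges scores higher; whichever threshold the system uses, the branch on visibility then separates the weather case from the hardware case exactly as described above.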
Furthermore, compared with existing 360-degree surround-view or transparent-chassis technology, the embodiment of the invention provides active screen imaging adapted to the direction of the user's gaze, which can be seamlessly spliced through the glass with the naked-eye field of view to form a wider view. It is first-person imaging that better matches human viewing habits, can produce a large-area field-of-view effect, and can be applied to immersive cabin experiences as well as unobstructed field-of-view expansion, such as transparent pillars and a transparent hood. It can also be applied to fully enclosed, windowless cabins, such as those of tanks, submarines, and aircraft, and to fully enclosed vehicles.
Further, as an implementation of the method shown in fig. 1, an embodiment of the present invention further provides a field-of-view expansion device for a vehicle, configured to implement the method shown in fig. 1. The device embodiment corresponds to the foregoing method embodiment; for ease of reading, the details of the method embodiment are not repeated one by one, but it should be clear that the device in this embodiment can correspondingly implement all the content of the method embodiment. As shown in fig. 2, the apparatus includes a determination unit 21, an acquisition unit 22, and a projection unit 23, wherein:
A determining unit 21 for determining spatial position information and line-of-sight direction information of a user with respect to the vehicle based on at least two spatial positioning sensors;
an acquisition unit 22 for acquiring real-time external image information of the target extension area;
and a projection unit 23 configured to project the real-time external image information to the target expansion area based on the spatial position information and the line-of-sight direction information of the user with respect to the vehicle, so as to achieve the field of view expansion of the target expansion area.
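A minimal skeleton of the three units (determination, acquisition, projection) might look as follows. The class and method names are assumptions made for illustration, and the method bodies are stubs rather than the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple        # spatial position of the user relative to the vehicle
    gaze_direction: tuple  # line-of-sight direction vector

class FieldOfViewExpander:
    """Hypothetical skeleton mirroring units 21-23 of fig. 2."""

    def determine_pose(self, sensor_frames) -> Pose:
        # Unit 21: derive spatial position and gaze direction from at
        # least two spatial positioning sensors (stub).
        ...

    def acquire_external_image(self, target_area):
        # Unit 22: fetch real-time external image information of the
        # target expansion area (stub).
        ...

    def project(self, image, pose: Pose, target_area) -> None:
        # Unit 23: project the image onto the target expansion area,
        # transformed to match the user's viewpoint (stub).
        ...
```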
The processor includes a core, and the core fetches the corresponding program unit from the memory. One or more cores may be provided, and the vehicle field-of-view expansion method is implemented by adjusting core parameters, which can solve the problem that existing vehicle perspective designs can hardly provide a panoramic image truly consistent with the driver's real field of view.
An embodiment of the present invention provides a computer-readable storage medium including a stored program that, when executed by a processor, implements the above-described vehicle field of view expansion method.
An embodiment of the present invention provides a processor configured to run a program, wherein the program, when run, performs the above vehicle field-of-view expansion method.
An embodiment of the present invention provides an electronic device, which includes at least one processor and at least one memory connected to the processor, wherein the processor is configured to call program instructions in the memory to perform the vehicle field-of-view expansion method described above.
An embodiment of the present invention provides an electronic device 30. As shown in fig. 3, the electronic device includes at least one processor 301, and at least one memory 302 and a bus 303 connected to the processor. The processor 301 and the memory 302 communicate with each other through the bus 303, and the processor 301 is configured to invoke the program instructions in the memory to perform the above vehicle field-of-view expansion method.
The electronic device herein may be a PC, a tablet, a mobile phone, or the like.
The present application also provides a computer program product adapted to execute, when run on a data processing electronic device, a program initialized with the steps of the above vehicle field-of-view expansion method.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Embodiments of the present application also provide a computer program product comprising computer software instructions which, when run on a processing device, cause the processing device to perform the flow of the embodiment corresponding to fig. 1.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)).
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
Claims (10)
1. A method for expanding a field of view of a vehicle, comprising:
determining spatial position information and gaze direction information of a user relative to the vehicle based on the at least two spatial positioning sensors;
acquiring real-time external image information of a target expansion area;
the real-time external image information is projected to the target expansion area based on the spatial position information and the sight line direction information of the user relative to the vehicle to realize the visual field expansion of the target expansion area.
2. The method of claim 1, wherein:
at least one space positioning sensor is arranged on two sides of the visual angle central line of the user.
3. The method of claim 2, wherein determining spatial location information and gaze direction information of a user relative to a vehicle based on at least two spatial location sensors comprises:
respectively acquiring real-time images of at least two user heads based on the at least two spatial positioning sensors;
acquiring eyeball positions, pupil angles and face angles in the real-time images of the heads of the at least two users respectively based on an opencv face recognition algorithm;
and comparing eyeball positions, pupil angles and face angles in the real-time images of the heads of the at least two users based on an eyeball focusing algorithm to determine the spatial position information and the sight direction information of the users relative to the vehicle.
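Claim 3 names an opencv face recognition step and an "eyeball focusing algorithm" without giving formulas. One classical way to compare the eyeball positions seen by two spaced sensors and recover the user's spatial position is stereo triangulation; the sketch below is that standard textbook method under assumed pinhole-camera parameters, not the patented computation:

```python
def triangulate_eye_position(x_left: float, x_right: float,
                             focal_px: float, baseline_m: float):
    """Recover lateral offset and depth of the eye from its pixel
    x-coordinates in two rectified camera views (classical stereo
    triangulation; parameter names are assumptions)."""
    disparity = x_left - x_right  # pixel offset between the two views
    if disparity <= 0:
        raise ValueError("point must lie in front of both sensors")
    # Similar triangles: depth = focal length * baseline / disparity.
    depth_m = focal_px * baseline_m / disparity
    # Back-project the left-view pixel coordinate to metres.
    lateral_m = x_left * depth_m / focal_px
    return lateral_m, depth_m
```

With both eye positions and the pupil/face angles from the face recognition step, the gaze direction vector can then be estimated relative to the vehicle.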
4. The method according to claim 1, wherein an image acquisition device is disposed outside the target extension area, and the acquiring real-time external image information of the target extension area includes:
and acquiring the real-time external image information of the target expansion area by using the image acquisition device.
5. The method as recited in claim 1, further comprising:
acquiring a longitudinal inclination angle of the vehicle based on a body angle sensor of the vehicle;
acquiring sight direction information of a user under the condition that the longitudinal inclination angle of the vehicle is larger than a preset angle;
and correcting a display angle of the real-time external image information based on the longitudinal inclination angle and the line-of-sight direction information.
6. The method as recited in claim 1, further comprising:
acquiring a running speed of a vehicle based on a speed sensor of the vehicle;
determining an image lag time based on the travel speed and a distance of the vehicle speed sensor from a user's eye;
and adjusting a time axis of the real-time external image information based on the image lag time to synchronize the real-time external image information with the image information acquired by the human eyes of the user.
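Claim 6 determines an image lag time from the travel speed and the sensor-to-eye distance. A natural reading, sketched below under that assumption (the claim does not state the exact formula), is the time the vehicle needs to cover that distance:

```python
def image_lag_seconds(speed_mps: float, sensor_to_eye_m: float) -> float:
    """Assumed lag model: the scene captured by the forward sensor
    reaches the user's eye position after the vehicle travels the
    sensor-to-eye distance, so lag = distance / speed."""
    if speed_mps <= 0:
        return 0.0  # stationary vehicle: no time-axis shift needed
    return sensor_to_eye_m / speed_mps
```

The returned value would then be used to shift the time axis of the real-time external image so that it stays synchronized with what the user's eyes would see.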
7. The method as recited in claim 1, further comprising:
acquiring the definition of the real-time external image information;
acquiring current weather information under the condition that the definition of the real-time external image information is lower than a preset definition;
sending a view expansion closing request to a user under the condition that the current weather information reflects that the visibility of the external environment is low;
and sending an image acquisition device detection request to a user under the condition that the current weather information reflects that the visibility of the external environment is high.
8. A visual field expansion device for a vehicle, comprising:
a determining unit for determining spatial position information and gaze direction information of a user with respect to the vehicle based on at least two spatial positioning sensors;
an acquisition unit for acquiring real-time external image information of the target extension area;
and a projection unit configured to project the real-time external image information to the target expansion area based on spatial position information and line-of-sight direction information of the user with respect to the vehicle, so as to achieve a field of view expansion of the target expansion area.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the method of expanding the field of view of the vehicle according to any one of claims 1 to 7 is implemented when the program is executed by a processor.
10. An electronic device comprising at least one processor and at least one memory coupled to the processor; wherein the processor is configured to invoke program instructions in the memory to perform the vehicle field of view expansion method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310471235.8A CN116533881A (en) | 2023-04-27 | 2023-04-27 | Vehicle vision expansion method and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116533881A true CN116533881A (en) | 2023-08-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||