CN110996051A - Method, device and system for vehicle vision compensation


Info

Publication number
CN110996051A
CN110996051A
Authority
CN
China
Prior art keywords
user
image
vehicle
information
external camera
Prior art date
Legal status
Pending
Application number
CN201911170290.3A
Other languages
Chinese (zh)
Inventor
张�成
刘方
Current Assignee
Zebra Network Technology Co Ltd
Original Assignee
Zebra Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zebra Network Technology Co Ltd filed Critical Zebra Network Technology Co Ltd
Priority to CN201911170290.3A priority Critical patent/CN110996051A/en
Publication of CN110996051A publication Critical patent/CN110996051A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a method, a device, and a system for vehicle vision compensation. The method includes: receiving image information of a vehicle user; acquiring posture information of the user from the image information; and generating, according to the user's posture information, an adjustment instruction for shooting by an external camera, where the adjustment instruction is used to control the shooting parameters of at least one external camera. The method eliminates the A-pillar visual blind area, displays on the A-pillar a blind-area image that blends accurately with the surrounding real scene, improves the vehicle user's experience, and improves driving safety.

Description

Method, device and system for vehicle vision compensation
Technical Field
The invention relates to the technical field of traffic vehicles, in particular to a method, a device and a system for vehicle vision compensation.
Background
Vehicles have become indispensable means of transport, and driving safety and comfort have long been goals pursued by the vehicle industry. The A-pillars of a vehicle are the pillars at the front left and right of the driver; they connect the hood to the body and bear part of the vehicle's load, so that when a vehicle collides, the A-pillar absorbs a considerable part of the impact.
However, because the A-pillar sits between the windshield and the front doors, between the engine compartment and the cab, and above the left and right rear-view mirrors, it can block the user's view; in particular, before the vehicle turns or enters a curve, it may create a blind spot in the driver's field of vision.
In the prior art, an exterior monitoring system with a fixed viewing angle is used; for example, a camera at the base of the exterior rear-view mirror captures images at 25 frames per second of a fan-shaped blind area 3 meters away and 9.5 meters wide. However, the driver's viewing angle widens or narrows as the vehicle's speed changes, so such a monitoring system cannot reliably provide an accurate blind-area image to the vehicle user. Alternatively, the A-pillar is designed to be hollowed out, but the vehicle's rigidity and strength then become insufficient.
Disclosure of Invention
The invention provides a vehicle vision compensation method, device, and system that eliminate the A-pillar visual blind area, display on the A-pillar a blind-area image that blends accurately with the surrounding real scene, improve the vehicle user's experience, and improve driving safety.
In a first aspect, an embodiment of the present invention provides a method for vehicle vision compensation, including:
receiving image information of a vehicle user;
acquiring posture information of the user from the image information;
generating, according to the user's posture information, an adjustment instruction for shooting by an external camera; the adjustment instruction is used to control the shooting parameters of at least one external camera.
In one possible design, the posture information includes the user's face orientation, eye viewing height, eye viewing direction, and pupil position.
In one possible design, generating the adjustment instruction for shooting by the external camera according to the user's posture information includes:
determining the user's current viewing-angle range from the user's face orientation, eye viewing height, eye viewing direction, and pupil position;
generating the adjustment instruction for shooting by the external camera according to the user's current viewing-angle range; the shooting parameters of the external camera include the shooting angle and the shooting range.
In one possible design, the method further includes:
receiving an image collected by the external camera, and cropping and/or stitching the image according to the user's current posture information to obtain a processed blind-area image; the blind-area image is displayed on the A-pillar of the vehicle.
In one possible design, the method further includes:
predicting the user's age from the image information; if the predicted age exceeds a preset threshold, prompting the user by voice whether to switch to a presbyopia mode; if a mode-switching instruction is received, setting the display mode to the presbyopia mode; in the presbyopia mode, a magnified blind-area image is displayed on the A-pillar.
In one possible design, the method further includes: calculating the offset between the user's current posture information and the posture information at the previous moment, and reminding the user to adjust the sitting posture if the offset is detected to be larger than a first preset threshold; or reminding the user to adjust the sitting posture if the shooting angle corresponding to the adjustment instruction is larger than a second preset threshold.
In a second aspect, an embodiment of the present invention provides an apparatus for vehicle vision compensation, comprising:
a receiving module, configured to receive image information of a vehicle user;
an acquisition module, configured to acquire posture information of the user from the image information;
a generating module, configured to generate, according to the user's posture information, an adjustment instruction for shooting by an external camera; the adjustment instruction is used to control the shooting parameters of at least one external camera.
In one possible design, the posture information includes the user's face orientation, eye viewing height, eye viewing direction, and pupil position.
In one possible design, generating the adjustment instruction for shooting by the external camera according to the user's posture information includes:
determining the user's current viewing-angle range from the user's face orientation, eye viewing height, eye viewing direction, and pupil position;
generating the adjustment instruction for shooting by the external camera according to the user's current viewing-angle range; the shooting parameters of the external camera include the shooting angle and the shooting range.
In one possible design, the apparatus is further configured to:
receive an image collected by the external camera, and crop and/or stitch the image according to the user's current posture information to obtain a processed blind-area image;
the blind-area image is displayed on the A-pillar of the vehicle.
In one possible design, the apparatus is further configured to:
predict the user's age from the image information;
if the predicted age exceeds a preset threshold, prompt the user by voice whether to switch to a presbyopia mode;
if a mode-switching instruction is received, set the display mode to the presbyopia mode; in the presbyopia mode, a magnified blind-area image is displayed on the A-pillar.
In one possible design, the apparatus is further configured to:
calculate the offset between the user's current posture information and the posture information at the previous moment, and remind the user to adjust the sitting posture if the offset is detected to be larger than a first preset threshold;
or remind the user to adjust the sitting posture if the shooting angle corresponding to the adjustment instruction is larger than a second preset threshold.
In a third aspect, an embodiment of the present invention provides a system for vehicle vision compensation, including a memory and a processor, where the memory stores instructions executable by the processor, and the processor is configured to perform the method of vehicle vision compensation of any one of the first aspect by executing the executable instructions.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for vehicle vision compensation according to any one of the first aspect.
The invention provides a method, a device, and a system for vehicle vision compensation. The method includes: receiving image information of a vehicle user; acquiring posture information of the user from the image information; and generating, according to the user's posture information, an adjustment instruction for shooting by an external camera, where the adjustment instruction is used to control the shooting parameters of at least one external camera. The method eliminates the A-pillar visual blind area, displays on the A-pillar a blind-area image that blends accurately with the surrounding real scene, improves the vehicle user's experience, and improves driving safety.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural view of a vehicle A-pillar;
FIG. 2 is a flow chart of a method for compensating for vehicle vision according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a blind area image for vehicle vision compensation according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for compensating for vehicle vision according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of a blind area image of vehicle vision compensation according to a second embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a vehicle vision compensation device according to a third embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a system for vehicle vision compensation according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
The A-pillar is the connecting pillar on the front left and right sides that joins the roof to the front cabin; it sits between the engine compartment and the cockpit, above the left and right rear-view mirrors (refer to FIG. 1, a schematic structural diagram of a vehicle A-pillar), and can block part of the driver's view when turning, especially when turning left. For example, when the vehicle is at an intersection and a pedestrian crossing the street is inside the A-pillar blind area, and the driver does not check to the left or right, a collision accident is likely. The overlap angle of the two eyes' lines of sight at the A-pillar is 5 to 6 degrees. For the driver's comfort, the smaller this overlap angle the better, which would call for a thinner A-pillar; but the A-pillar must remain rigid and load-bearing during the vehicle's use, which calls for a wider pillar. This necessary trade-off in A-pillar design is what degrades the driver's experience of the blind area. The invention therefore provides a vehicle vision compensation method that eliminates the A-pillar blind area, displays on the A-pillar a highly accurate blind-area image blended with the surrounding real scene, improves the vehicle user's experience, and improves driving safety.
FIG. 2 is a flowchart of a method for vehicle vision compensation according to the first embodiment of the present invention. As shown in FIG. 2, the method for vehicle vision compensation in this embodiment may include:
s101, receiving image information of a vehicle user.
Specifically, the vehicle vision compensation system may capture image information of the user in real time through a camera inside the vehicle (for example, image information at the current moment and at the previous moment) and upload it to the vehicle-mounted main control processor, so as to monitor the user's posture while driving, especially before the vehicle turns or enters a curve.
The in-vehicle camera in this embodiment may be a monocular camera, a binocular camera, a depth camera, or an infrared camera; this embodiment is not limited in this respect. The in-vehicle camera photographs the user in real time, obtains the user's image information, and uploads it to the vehicle-mounted main control processor; the main control processor receives the image information uploaded by the interior camera so as to monitor the user's posture and perform safety monitoring during driving.
S102, acquiring posture information of the user from the image information.
Specifically, the user's posture information includes the user's face orientation, eye viewing height, eye viewing direction, and pupil position.
In this embodiment, the vehicle-mounted main control processor receives the user's image information and processes it to obtain the user's posture information. For example, it obtains multiple frames of images of the user inside the vehicle and the corresponding depth images, processes the image information frame by frame to extract the person region in each frame, and derives the user's current pose from those regions. In particular, face recognition is performed frame by frame on the person region, and each pixel of the recognized face is labeled to obtain the user's face orientation, eye viewing height, eye viewing direction, and pupil position. The image can be preprocessed, for example by histogram equalization and Gaussian filtering, to reduce noise in the grayscale face image. The horizontal position of the eyes is then determined by horizontal integral projection, and the vertical position by vertical integral projection, thereby locating the eyes in the face image. The eye position is obtained effectively by exploiting, on the one hand, the sharp grayscale contrast between the white sclera and the dark iris and, on the other hand, the pronounced grayscale variation between the eye region and the surrounding skin. The inner and outer eye corners and the pupil positions are then obtained with a corner-detection method based on image edges.
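The integral-projection step above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the patent text: the function names and the toy grayscale grid are assumptions, and a real implementation would work on preprocessed face images.

```python
def integral_projections(gray):
    """Row and column sums of a grayscale image (list of rows, values 0-255).

    Dark eye regions produce dips in the horizontal (per-row) projection,
    which locates the eye row; the vertical (per-column) projection then
    locates the two eye columns.
    """
    h_proj = [sum(row) for row in gray]                        # one value per row
    width = len(gray[0])
    v_proj = [sum(row[c] for row in gray) for c in range(width)]  # one per column
    return h_proj, v_proj

def darkest_row(gray):
    """Index of the row with the lowest total intensity (candidate eye row)."""
    h_proj, _ = integral_projections(gray)
    return min(range(len(h_proj)), key=h_proj.__getitem__)
```

On a toy 4x4 patch whose third row is dark (the "eyes"), `darkest_row` picks out that row; the same idea, applied to real grayscale data, gives the horizontal and vertical eye positions described above.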
The user's gaze direction is obtained by computing a gaze direction vector from geometric relations. For example: a ray is emitted from the camera origin through the pupil center on the image plane, and its intersection with the eyeball surface is computed; that intersection is the pupil's three-dimensional coordinate in the camera coordinate system, and the direction vector from the eyeball center to the intersection is the gaze direction vector. The angles between the gaze direction vector and the horizontal and vertical directions are then computed, and looking down, looking straight ahead, and looking up are distinguished by these angles, yielding the user's eye viewing height and eye viewing direction. An adjustment instruction for shooting by the external camera is generated from the user's posture information to obtain an image of the environment corresponding to the blind area, so that a blind-area image blended with the surrounding real scene can be displayed on the A-pillar, improving the vehicle user's experience and driving safety.
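The geometric gaze computation above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the coordinate convention (y pointing up in the camera frame), the tolerance, and the function names are assumptions.

```python
import math

def gaze_direction(eyeball_center, pupil_point):
    """Gaze direction vector from the eyeball center through the pupil point.

    Both points are (x, y, z) in the in-cabin camera coordinate system.
    Returns the unit direction vector plus its elevation angle in degrees
    (positive = looking up, negative = looking down).
    """
    dx, dy, dz = (p - c for p, c in zip(pupil_point, eyeball_center))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    ux, uy, uz = dx / norm, dy / norm, dz / norm
    # Elevation: angle between the gaze vector and the horizontal (x-z) plane.
    elevation_deg = math.degrees(math.asin(uy))
    return (ux, uy, uz), elevation_deg

def classify_vertical(elevation_deg, tol=1.0):
    """Distinguish look-up / level / look-down, as the method describes."""
    if elevation_deg > tol:
        return "look-up"
    if elevation_deg < -tol:
        return "look-down"
    return "level"
```

A gaze straight ahead along the camera axis classifies as "level", while a vector tilted upward by 45 degrees classifies as "look-up"; the elevation angle stands in for the eye viewing height described above.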
In an optional embodiment, images of the user can be fed into a preset trained model that directly learns the user's gaze direction and head pose data; the user's eye viewing height, eye viewing direction, and so on are then obtained from that gaze direction and head pose data. The preset model is, for example, a convolutional neural network built from convolution, pooling, and nonlinear transformations, which can extract more essential visual features from the video image and thus yield posture information with higher accuracy.
S103, generating, according to the user's posture information, an adjustment instruction for shooting by an external camera; the adjustment instruction is used to control the shooting parameters of at least one external camera.
Specifically, the on-board system may include a plurality of external cameras coupled to the vehicle vision compensation system. The system determines the user's current viewing-angle range from the user's face orientation, eye viewing height, eye viewing direction, and pupil position, and generates the adjustment instruction for shooting by the external cameras according to that range; the shooting parameters of an external camera include the shooting angle and the shooting range.
The field of vision of the human eye extends roughly 95° outward, 60° inward, 60° upward, and 75° downward. The optic-nerve blind spot is located 12-15° toward the temple and about 1.5° below horizontal, and is roughly 7.5° high and 5.5° wide.
For example, if the user's face is oriented toward the A-pillar, the eyes look down by 1 degree, and the pupils of both eyes face the A-pillar, then the blind-area portion of the user's current viewing-angle range spans an intermediate angle range of 2-3 degrees. Shooting adjustment instructions for the external cameras are generated accordingly; for example, the shooting parameters of several external cameras are adjusted so that one camera's shooting angle looks down by 1 degree while another camera's shooting range covers the blind area's 2-3 degree range. The captured images are then acquired and, after processing, the blind-area image is displayed on the A-pillar of the vehicle.
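The generation of adjustment instructions from a blind-zone angle range might be sketched as follows. The `CameraCommand` fields and the even-split policy are assumptions for illustration; the patent only specifies that the shooting angle and shooting range of at least one external camera are controlled.

```python
from dataclasses import dataclass

@dataclass
class CameraCommand:
    camera_id: int
    pan_deg: float      # shooting angle, horizontal component
    tilt_deg: float     # shooting angle, vertical component (negative = look down)
    fov_deg: float      # shooting range, expressed as field of view

def adjustment_instructions(blind_start_deg, blind_end_deg, tilt_deg, camera_ids):
    """Split the blind-zone angle range evenly across the external cameras.

    Hypothetical policy: each camera is pointed at the centre of its slice
    of the blind zone, and its field of view is set to cover that slice.
    """
    span = (blind_end_deg - blind_start_deg) / len(camera_ids)
    commands = []
    for i, cid in enumerate(camera_ids):
        start = blind_start_deg + i * span
        commands.append(CameraCommand(camera_id=cid,
                                      pan_deg=start + span / 2,
                                      tilt_deg=tilt_deg,
                                      fov_deg=span))
    return commands
```

With the 2-3 degree blind zone and 1 degree look-down from the example above, two cameras would each receive a command covering half a degree of the zone.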
In an optional embodiment, an image collected by the external camera is received and cropped and/or stitched according to the user's current posture information to obtain a processed blind-area image, which is displayed on the A-pillar of the vehicle.
In this embodiment, the image collected by the external camera is received to obtain the portion of the scene blocked by the A-pillar; the image is cropped and/or stitched according to the user's current posture information so that it forms a complete visual range together with the surrounding environment, yielding the processed blind-area image, which is displayed on the A-pillar of the vehicle.
In an optional embodiment, with the driver's seat on the left side of the vehicle and an A-pillar roughly 8 cm wide, the blind-area angle on the driver's left is about 6 degrees and on the right about 2 degrees. Moreover, while the vehicle is stationary the blocked line of sight can span up to 210 degrees, and the driver's viewing angle narrows as the vehicle accelerates, so the blind area's influence on the driver's view grows larger and larger. Therefore, according to the user's current posture information, such as the face being oriented toward the A-pillar and the eyes looking down, the image is cropped and/or stitched to obtain the processed blind-area image. Image stitching determines the correct registration between images by image registration and image fusion, for example by correlating pixel coordinates in one image with pixel coordinates in another. A globally consistent set of registrations can be computed using estimated registrations from direct point-to-point (pixel-to-pixel) comparisons combined with gradient descent and similar techniques, and the overlapping parts of the images are found. To present a complete visual range to the user, the image is cropped where necessary: the overlapping part is cut according to the driver's field of view blocked by the A-pillar, and the overlap region can be handled by regular or irregular cropping, yielding the processed blind-area image (refer to FIG. 3, a schematic diagram of a blind-area image for vehicle vision compensation provided by the first embodiment of the present invention), which is displayed on the A-pillar of the vehicle. The fusion of the blind-area image with the surrounding real scene is improved, and the vehicle user's experience is effectively enhanced.
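The cut-the-overlap idea behind stitching can be illustrated on one-dimensional scanlines. This is a toy stand-in for the pixel-to-pixel registration described above; real stitching would operate on 2-D images with sub-pixel registration and fusion, and the function names are assumptions.

```python
def find_overlap(left, right, min_overlap=1):
    """Width of the longest suffix of `left` matching a prefix of `right`.

    Each "image" is a flat list of pixel values from a single scanline;
    the matching region plays the role of the registered overlap.
    """
    best = 0
    for k in range(min_overlap, min(len(left), len(right)) + 1):
        if left[-k:] == right[:k]:
            best = k
    return best

def stitch(left, right):
    """Concatenate two scanlines, cutting the overlapping region once."""
    k = find_overlap(left, right)
    return left + right[k:]
```

Two scanlines sharing a two-pixel overlap stitch into one continuous strip with the duplicate region removed, which is the behaviour the cropping step above needs so that the blind-area image forms a complete visual range with its surroundings.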
FIG. 4 is a flowchart of a method for vehicle vision compensation according to the second embodiment of the present invention. As shown in FIG. 4, the method for vehicle vision compensation in this embodiment may include:
predicting the user's age from the image information; if the predicted age exceeds a preset threshold, prompting the user by voice whether to switch to a presbyopia mode; if a mode-switching instruction is received, setting the display mode to the presbyopia mode; in the presbyopia mode, a magnified blind-area image is displayed on the A-pillar.
Specifically, as a user ages, the number of photoreceptor cells gradually decreases, the pupil diameter gradually shrinks, and the pupil's response gradually slows; vision likewise changes with age.
The age of the vehicle user is predicted from the received image information. If the predicted age exceeds a preset threshold, for example a predicted age of 55 against a preset threshold of 50, the user is prompted by voice whether to switch to the presbyopia mode. If the user switches according to the voice prompt, the vehicle vision compensation system receives the mode-switching instruction and sets the display mode to the presbyopia mode, in which a magnified blind-area image is displayed on the A-pillar (see FIG. 5, a schematic diagram of a blind-area image for vehicle vision compensation according to the second embodiment of the present invention).
By predicting the user's age from the image information, prompting the user by voice whether to switch to the presbyopia mode when the predicted age exceeds the preset threshold, and setting the display mode to the presbyopia mode when a mode-switching instruction is received, the blind-area image blended with the surrounding real scene is displayed on the A-pillar more accurately and clearly, improving the vehicle user's experience.
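The presbyopia-mode decision above reduces to an age threshold plus a user confirmation. A minimal sketch follows; the threshold value of 50 mirrors the example above, while the function name and mode strings are assumptions for illustration.

```python
PRESBYOPIA_AGE_THRESHOLD = 50  # years; threshold value taken from the example above

def display_mode_for(predicted_age, user_confirmed_switch):
    """Decide the A-pillar display mode from the predicted age.

    The voice prompt only *offers* the switch; the mode changes only after
    a mode-switching instruction (user confirmation) is received.
    """
    if predicted_age > PRESBYOPIA_AGE_THRESHOLD and user_confirmed_switch:
        return "presbyopia"   # magnified blind-area image on the A-pillar
    return "normal"
```

A 55-year-old user who confirms the voice prompt gets the presbyopia mode; the same user declining the prompt, or a younger user, keeps the normal display.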
In an alternative embodiment, the external camera and the in-vehicle camera may have separate main control processors, or both may be integrated into a single vehicle main control processor.
In an optional embodiment, the offset between the user's current posture information and the posture information at the previous moment is calculated, and if the detected offset is larger than a first preset threshold, the user is reminded to adjust the sitting posture; or, if the shooting angle corresponding to the adjustment instruction is detected to be larger than a second preset threshold, the user is reminded to adjust the sitting posture.
Specifically, image information of the vehicle user is received and the user's posture information is obtained from it; the offset between the user's current posture information and the posture information at the previous moment can then be calculated, and if the offset is detected to be larger than the first preset threshold, the user is promptly reminded to adjust the sitting posture. Alternatively, an adjustment instruction for shooting by the external camera is generated from the user's posture information, and if the shooting angle corresponding to the adjustment instruction is detected to be larger than the second preset threshold, the user is reminded to adjust the sitting posture.
In this embodiment, the current user's eye viewing height, eye viewing direction, and pupil position can be obtained from the image information and compared with those at the previous moment to obtain the offset. Because the eye's field of view in the vertical plane takes the standard line of sight as horizontal, with the optimal viewing zone lying within 30 degrees below the standard line of sight, and the binocular field in the horizontal plane lies within 60 degrees, the first preset threshold can be set to 30 degrees (this embodiment is not specifically limited). If the offset between the user's current posture information and that at the previous moment is detected to be larger than 30 degrees, the user is reminded to adjust the sitting posture. Alternatively, if the shooting angle corresponding to the adjustment instruction is detected to be larger than 90 degrees, the user is promptly reminded to adjust the sitting posture. The vehicle monitoring system can prompt with an audible-and-visual reminder, flashing colored lights while sounding an alarm voice, so as to remind the user to adjust the sitting posture in time and improve driving safety.
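The sitting-posture reminder logic above can be sketched as follows. The 30 degree and 90 degree thresholds follow the example in this embodiment; the pose dictionary keys and function name are assumptions for illustration.

```python
FIRST_THRESHOLD_DEG = 30   # posture offset limit, per the example above
SECOND_THRESHOLD_DEG = 90  # shooting-angle limit, per the example above

def needs_posture_reminder(current_pose, previous_pose, shot_angle_deg):
    """Return True when the driver should be reminded to adjust posture.

    Poses are dicts with 'view_height_deg' and 'view_dir_deg' entries; the
    offset is taken as the largest per-component change since the previous
    moment. Either trigger (posture offset or camera shooting angle) fires
    the audible-and-visual reminder.
    """
    offset = max(abs(current_pose[k] - previous_pose[k])
                 for k in ("view_height_deg", "view_dir_deg"))
    return offset > FIRST_THRESHOLD_DEG or shot_angle_deg > SECOND_THRESHOLD_DEG
```

A 35 degree jump in eye viewing height triggers the reminder even with a modest shooting angle, and an extreme shooting angle triggers it even when the posture barely changed.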
FIG. 6 is a schematic structural diagram of a vehicle vision compensation device according to the third embodiment of the present invention. As shown in FIG. 6, the vehicle vision compensation device in this embodiment may include:
a receiving module 21, configured to receive image information of a vehicle user;
an obtaining module 22, configured to acquire posture information of the user from the image information;
a generating module 23, configured to generate, according to the user's posture information, an adjustment instruction for shooting by an external camera; the adjustment instruction is used to control the shooting parameters of at least one external camera.
In an alternative embodiment, the pose information includes the user's face orientation, eye view height, eye view direction, and pupil position.
In an optional embodiment, the generating an adjustment instruction for shooting by an external camera according to the posture information of the user includes:
determining the current visual angle range of the user according to the face orientation, the eye visual angle height, the eye visual angle direction and the pupil position of the user;
generating an adjustment instruction for external camera shooting according to the visual angle range of the user; wherein the shooting parameters of the external camera include: a shooting angle and a shooting range.
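A minimal sketch of this step, deriving a camera adjustment instruction from the user's visual angle range, might look as follows. The field names, the simple planar geometry, and the use of the 60-degree binocular field mentioned earlier are illustrative assumptions, not the patented algorithm:

```python
from dataclasses import dataclass

# Hypothetical sketch: turn the user's posture into the two shooting
# parameters named above (shooting angle, shooting range).

@dataclass
class Posture:
    face_yaw_deg: float        # face orientation in the horizontal plane
    eye_height_m: float        # eye visual angle height
    eye_pitch_deg: float       # eye visual angle direction (vertical)
    pupil_offset_deg: float    # gaze offset estimated from pupil position

@dataclass
class AdjustInstruction:
    shooting_angle_deg: float  # pan angle for the external camera
    shooting_range_deg: float  # field-of-view width to capture

def view_angle_range(p: Posture, binocular_fov_deg: float = 60.0):
    """Current horizontal visual angle range: gaze centre +/- half the binocular field."""
    centre = p.face_yaw_deg + p.pupil_offset_deg
    half = binocular_fov_deg / 2.0
    return centre - half, centre + half

def make_instruction(p: Posture) -> AdjustInstruction:
    """Point the camera at the centre of the range and cover its full width."""
    lo, hi = view_angle_range(p)
    return AdjustInstruction(shooting_angle_deg=(lo + hi) / 2.0,
                             shooting_range_deg=hi - lo)
```

A real implementation would also fold in the eye height and pitch to steer the camera vertically; the sketch keeps only the horizontal plane for brevity.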
In an optional embodiment, further comprising:
receiving an image collected by an external camera, and cutting and/or splicing the image according to the current posture information of a user to obtain a processed blind area image;
the blind area image is displayed on the A-pillar of the vehicle.
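The cutting-and-splicing step above can be sketched minimally as follows; frames are represented as plain row-major lists of pixel rows, and the crop rectangle stands in for a calibration derived from the user's current posture (both are illustrative assumptions, not the patent's processing pipeline):

```python
# Hypothetical sketch of cropping frames from external cameras and splicing
# them into one blind-area image for the A-pillar display.

def crop(frame, x0, y0, w, h):
    """Cut out the region of the frame that the A-pillar hides from the driver."""
    return [row[x0:x0 + w] for row in frame[y0:y0 + h]]

def stitch(patches):
    """Splice crops from adjacent cameras side by side, row by row."""
    return [sum((p[i] for p in patches), []) for i in range(len(patches[0]))]

# e.g. two 640x480 frames, each cropped to a 120x100 region behind the
# pillar, become a single 240x100 blind-area image.
```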
In an optional embodiment, further comprising:
predicting the age of the user according to the image information;
if the predicted age exceeds a preset threshold, prompting the user by voice to confirm whether to switch to a presbyopia mode;
if a mode switching instruction is received, setting the display mode to the presbyopia mode; wherein, in the presbyopia mode, the A-pillar displays the enlarged blind area image.
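The presbyopia-mode flow above can be sketched as a small decision function; the age threshold of 50 and the 1.5x zoom factor are illustrative assumptions, as the embodiment does not fix concrete values:

```python
# Hypothetical sketch: predict the user's age from the cabin image (not shown),
# ask by voice when over the threshold, and enlarge the blind-area image on
# the A-pillar once the mode switch is confirmed.

AGE_THRESHOLD = 50       # assumed preset age threshold
PRESBYOPIA_ZOOM = 1.5    # assumed enlargement factor for the displayed image

def choose_display_mode(predicted_age: int, user_confirms: bool) -> dict:
    """Return the A-pillar display configuration for this user."""
    if predicted_age > AGE_THRESHOLD and user_confirms:
        return {"mode": "presbyopia", "zoom": PRESBYOPIA_ZOOM}
    return {"mode": "normal", "zoom": 1.0}
```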
In an optional embodiment, further comprising:
calculating the offset difference between the current posture information and the posture information at the last moment of the user, and if the detected offset difference is larger than a first preset threshold value, reminding the user to adjust the sitting posture;
or, if the shooting angle corresponding to the adjustment instruction is detected to be larger than a second preset threshold, reminding the user to adjust the sitting posture.
The device for vehicle vision compensation of the embodiment may execute the technical solutions in the methods shown in fig. 2 and fig. 4, and the specific implementation process and technical principle of the device refer to the related descriptions in the methods shown in fig. 2 and fig. 4, which are not described herein again.
Fig. 7 is a schematic structural diagram of a system for vehicle vision compensation according to a fourth embodiment of the present invention, and as shown in fig. 7, the vehicle vision compensation system 30 of this embodiment may include: a processor 31 and a memory 32.
A memory 32 for storing computer programs (such as application programs, functional modules, etc. implementing the above-described method of vehicle vision compensation), computer instructions, etc.;
the computer programs, computer instructions, data, etc. described above may be stored in one or more memories 32 in partitions and called by the processor 31.
A processor 31 for executing the computer program stored in the memory 32 to implement the steps of the method according to the above embodiments.
Reference may be made in particular to the description relating to the preceding method embodiment.
The processor 31 and the memory 32 may be separate structures or may be integrated structures integrated together. When the processor 31 and the memory 32 are separate structures, the memory 32 and the processor 31 may be coupled by a bus 33.
The server in this embodiment may execute the technical solutions in the methods shown in fig. 2 and fig. 4, and the specific implementation process and technical principle of the server refer to the relevant descriptions in the methods shown in fig. 2 and fig. 4, which are not described herein again.
In addition, embodiments of the present application further provide a computer-readable storage medium, in which computer-executable instructions are stored, and when at least one processor of the user equipment executes the computer-executable instructions, the user equipment performs the above-mentioned various possible methods.
Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in user equipment. Of course, the processor and the storage medium may also reside as discrete components in a communication device.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A method of vehicle vision compensation, comprising:
receiving image information of a vehicle user;
acquiring posture information of the user according to the image information;
generating an adjustment instruction for external camera shooting according to the posture information of the user; wherein the adjustment instruction is used for controlling shooting parameters of at least one external camera.
2. The method of claim 1, wherein the pose information comprises a face orientation, an eye view height, an eye view direction, and a pupil position of the user.
3. The method of claim 2, wherein generating an adjustment instruction for external camera shooting according to the posture information of the user comprises:
determining the current visual angle range of the user according to the face orientation, the eye visual angle height, the eye visual angle direction and the pupil position of the user;
generating the adjustment instruction for external camera shooting according to the current visual angle range of the user; wherein the shooting parameters of the external camera comprise: a shooting angle and a shooting range.
4. The method of claim 1, further comprising:
receiving an image collected by the external camera, and cutting and/or splicing the image according to the current posture information of a user to obtain a processed blind area image;
the blind area image is displayed on the A-pillar of the vehicle.
5. The method of claim 4, further comprising:
predicting the age of the user according to the image information;
if the predicted age exceeds a preset threshold, prompting the user by voice to confirm whether to switch to a presbyopia mode;
if a mode switching instruction is received, setting the display mode to the presbyopia mode; wherein, in the presbyopia mode, the enlarged blind area image is displayed on the A-pillar.
6. The method of claim 1, further comprising:
calculating the offset difference between the current posture information and the posture information of the user at the last moment, and if the offset difference is detected to be larger than a first preset threshold value, reminding the user to adjust the sitting posture;
or, if the shooting angle corresponding to the adjustment instruction is larger than a second preset threshold value, reminding the user to adjust the sitting posture.
7. An apparatus for vehicle vision compensation, comprising:
the receiving module is used for receiving image information of a vehicle user;
the acquisition module is used for acquiring the posture information of the user according to the image information;
the generating module is used for generating an adjustment instruction for external camera shooting according to the posture information of the user; wherein the adjustment instruction is used for controlling shooting parameters of at least one external camera.
8. The apparatus of claim 7, wherein the pose information comprises a face orientation, an eye view height, an eye view direction, and a pupil position of the user.
9. The apparatus of claim 8, wherein generating an adjustment instruction for external camera shooting according to the posture information of the user comprises:
determining the current visual angle range of the user according to the face orientation, the eye visual angle height, the eye visual angle direction and the pupil position of the user;
generating the adjustment instruction for external camera shooting according to the current visual angle range of the user; wherein the shooting parameters of the external camera comprise: a shooting angle and a shooting range.
10. The apparatus of claim 7, further comprising:
receiving an image collected by the external camera, and cutting and/or splicing the image according to the current posture information of a user to obtain a processed blind area image;
the blind area image is displayed on the A-pillar of the vehicle.
11. The apparatus of claim 10, further comprising:
predicting the age of the user according to the image information;
if the predicted age exceeds a preset threshold, reminding the user whether to switch to the presbyopic mode or not through voice;
if a mode switching instruction is received, setting the display mode to be a presbyopic mode; and under the presbyopia mode, displaying the amplified blind area image on the column A.
12. The apparatus of claim 7, further comprising:
calculating the offset difference between the current posture information and the posture information of the user at the last moment, and if the offset difference is detected to be larger than a first preset threshold value, reminding the user to adjust the sitting posture;
or, if the shooting angle corresponding to the adjustment instruction is larger than a second preset threshold value, reminding the user to adjust the sitting posture.
13. A system for vehicle vision compensation, comprising a memory and a processor, wherein the memory stores instructions executable by the processor; and the processor is configured to perform the method of vehicle vision compensation of any one of claims 1-6 via execution of the executable instructions.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of vehicle vision compensation according to any one of claims 1 to 6.
CN201911170290.3A 2019-11-26 2019-11-26 Method, device and system for vehicle vision compensation Pending CN110996051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911170290.3A CN110996051A (en) 2019-11-26 2019-11-26 Method, device and system for vehicle vision compensation


Publications (1)

Publication Number Publication Date
CN110996051A true CN110996051A (en) 2020-04-10

Family

ID=70086874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911170290.3A Pending CN110996051A (en) 2019-11-26 2019-11-26 Method, device and system for vehicle vision compensation

Country Status (1)

Country Link
CN (1) CN110996051A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438466A (en) * 2021-06-30 2021-09-24 东风汽车集团股份有限公司 Method, system, device and computer readable storage medium for widening external view field
CN113682319A (en) * 2021-08-05 2021-11-23 地平线(上海)人工智能技术有限公司 Camera adjusting method and device, electronic equipment and storage medium
CN114379461A (en) * 2020-10-20 2022-04-22 采埃孚汽车科技(上海)有限公司 Driving assistance apparatus and method for vehicle
CN114779470A (en) * 2022-03-16 2022-07-22 青岛虚拟现实研究院有限公司 Display method of augmented reality head-up display system
CN115379124A (en) * 2022-10-26 2022-11-22 四川中绳矩阵技术发展有限公司 Visual angle system changing along with visual angle, image imaging method, device and medium
CN115933881A (en) * 2022-12-14 2023-04-07 小米汽车科技有限公司 Vehicle control method, device, medium and vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827960A (en) * 2016-03-21 2016-08-03 乐视网信息技术(北京)股份有限公司 Imaging method and device
CN207257516U (en) * 2017-09-15 2018-04-20 爱驰汽车有限公司 Blind area detection system and automobile
CN108995591A (en) * 2018-08-01 2018-12-14 北京海纳川汽车部件股份有限公司 Vehicle panoramic has an X-rayed display methods, system and the vehicle with it
CN109934076A (en) * 2017-12-19 2019-06-25 广州汽车集团股份有限公司 Generation method, device, system and the terminal device of the scene image of vision dead zone
CN109987026A (en) * 2018-01-03 2019-07-09 钟少童 Vehicle blind zone instrument for safety running
CN209191793U (en) * 2018-11-29 2019-08-02 北京车联天下信息技术有限公司 A kind of pillar A blind monitoring device and vehicle


Similar Documents

Publication Publication Date Title
CN110996051A (en) Method, device and system for vehicle vision compensation
US10949690B2 (en) Driving state determination device, determination device, and driving state determination method
CN107444263B (en) Display device for vehicle
US9773179B2 (en) Vehicle operator monitoring system and method
US11025836B2 (en) Driving assistance device, driving assistance method, and driving assistance program
JP4899340B2 (en) Driving sense adjustment device and driving sense adjustment method
JP5092776B2 (en) Gaze direction detection device and gaze direction detection method
JP6512475B2 (en) INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING METHOD, AND INFORMATION PROVIDING CONTROL PROGRAM
CN103770707A (en) Device and method for producing bird eye view having function of automatically correcting image
CN111114434B (en) Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device
CN111277796A (en) Image processing method, vehicle-mounted vision auxiliary system and storage device
CN112849158B (en) Image display method, vehicle-mounted display system and automobile
EP3456574B1 (en) Method and system for displaying virtual reality information in a vehicle
JP2017039373A (en) Vehicle video display system
JP2018022958A (en) Vehicle display controller and vehicle monitor system
JP2009248812A (en) Driving assistance device
CN214775848U (en) A-column display screen-based obstacle detection device and automobile
JP2008037118A (en) Display for vehicle
JP6365600B2 (en) Vehicle display device
JP5223289B2 (en) Visual information presentation device and visual information presentation method
JP2019113720A (en) Vehicle surrounding display control device
JP6649063B2 (en) Vehicle rear display device and display control device
CN113071414A (en) Intelligent vehicle sun shield system and image acquisition method thereof
CN116636808B (en) Intelligent cockpit driver visual health analysis method and device
US20220185182A1 (en) Target identification for vehicle see-through applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410