CN111918035A - Vehicle-mounted surround-view method and apparatus, storage medium and vehicle-mounted terminal - Google Patents

Vehicle-mounted surround-view method and apparatus, storage medium and vehicle-mounted terminal

Info

Publication number
CN111918035A
CN111918035A
Authority
CN
China
Prior art keywords
vehicle
image data
stitching
information
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010758026.8A
Other languages
Chinese (zh)
Other versions
CN111918035B (en)
Inventor
雷超方 (Lei Chaofang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lichi Semiconductor Co., Ltd.
Original Assignee
Shanghai Lichi Semiconductor Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lichi Semiconductor Co., Ltd.
Priority to CN202010758026.8A
Publication of CN111918035A
Application granted
Publication of CN111918035B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing

Abstract

The embodiments of the present application disclose a vehicle-mounted surround-view method and apparatus, a storage medium, and a vehicle-mounted terminal, belonging to the field of automotive-grade chip technology and monitoring systems. The method is used in a vehicle-mounted terminal of a vehicle-mounted surround-view system that further comprises a display screen and n cameras positioned around a vehicle. The method comprises: acquiring head rotation information of a driver, the head rotation information indicating a leftward or rightward rotation angle of the driver's head, the rotation angle being smaller than a maximum rotation angle, where the maximum rotation angle is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead; stitching the images captured by m of the cameras into a monitored-view image according to the head rotation information; and displaying the monitored-view image on the display screen in real time. The embodiments of the present application keep the monitored-view image at a suitable size without blocking the driving field of view, do not distract the driver, and create no driving safety hazard.

Description

Vehicle-mounted surround-view method and apparatus, storage medium and vehicle-mounted terminal
Technical Field
The embodiments of the present application relate to the field of automotive-grade chip technology and monitoring systems, and in particular to a vehicle-mounted surround-view method and apparatus, a storage medium, and a vehicle-mounted terminal.
Background
A vehicle-mounted surround-view system generally comprises a vehicle-mounted terminal, a display screen, and a plurality of cameras mounted around the vehicle. The vehicle-mounted terminal acquires the images captured by the cameras from different viewing angles and displays them on the display screen, so that the user can observe the surroundings of the vehicle.
In the related art, the surround-view system may include multiple display screens, with the images shown on different screens; alternatively, it may include a single display screen, with the images shown in different partitions of that screen.
When multiple images are displayed, there is a trade-off: if each image is kept at a suitable size, the set of images may block the driving field of view; if the driving view is kept unobstructed, the images become too small to observe. In addition, displaying multiple images distracts the driver and pulls the driving gaze away from straight ahead, which creates a driving safety hazard.
Disclosure of Invention
The embodiments of the present application provide a vehicle-mounted surround-view method and apparatus, a storage medium, and a vehicle-mounted terminal, to resolve the driving safety hazard that arises when multiple images are displayed. The technical solution is as follows:
In one aspect, a vehicle-mounted surround-view method is provided for use in a vehicle-mounted terminal of a vehicle-mounted surround-view system, the system further comprising a display screen and n cameras located around a vehicle, n ≥ 4. The method comprises:
acquiring head rotation information of a driver, the head rotation information indicating a leftward or rightward rotation angle of the driver's head, the rotation angle being smaller than a maximum rotation angle, where the maximum rotation angle is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead;
stitching the images captured by m of the cameras into a monitored-view image according to the head rotation information, where m < n; and
displaying the monitored-view image on the display screen in real time.
In a possible implementation, the vehicle-mounted terminal includes a stitching unit and a direct memory access (DMA) controller, and stitching the images captured by the m cameras into the monitored-view image according to the head rotation information includes:
generating, by the stitching unit, stitching information according to the head rotation information, and sending the stitching information to the DMA, the stitching information indicating that the monitored-view image data is to be selected such that the stitched rows of the monitored-view image are aligned, the monitored-view image data being used to generate the monitored-view image;
selecting, by the DMA according to the stitching information, the monitored-view image data from the target image data captured by the n cameras, and sending the monitored-view image data to the stitching unit; and
stitching, by the stitching unit, the monitored-view image data to obtain the monitored-view image.
In a possible implementation, generating, by the stitching unit, the stitching information according to the head rotation information includes:
acquiring, by the stitching unit, row-and-column deviation information of each camera, the row-and-column deviation information indicating the row and column offsets that arise when the images captured by the cameras are aligned;
acquiring, by the stitching unit, a conversion relation indicating the relation between different rotation angles and different stitching information, the stitching information being calculated from the row-and-column deviation information; and
looking up, by the stitching unit, the stitching information matching the head rotation information in the conversion relation.
In a possible implementation, selecting, by the DMA according to the stitching information, the monitored-view image data from the target image data captured by the n cameras includes:
reading, through a first channel of the DMA, the target image data determined according to the stitching information into a video buffer; and
sending, through a second channel of the DMA, the monitored-view image data selected in the video buffer according to the stitching information to the stitching unit.
In a possible implementation, reading, through the first channel of the DMA, the target image data determined according to the stitching information into the video buffer includes:
determining, by the DMA according to the stitching information, the m rows of image data required to stitch the i-th row of the monitored-view image, each of the m rows corresponding to one of the m cameras, the row numbers of the image data for different cameras being the same or different depending on whether the image rows of the cameras are aligned, i being a positive integer; and
sequentially reading, through the first channel of the DMA, one row of image data for each camera into the video buffer to obtain the target image data, where a first address interval separates the addresses of the same row of image data from different cameras, and a second address interval separates the addresses of two adjacent rows of image data from the same camera.
In a possible implementation, sending, through the second channel of the DMA, the monitored-view image data selected in the video buffer according to the stitching information to the stitching unit includes:
determining, by the DMA, a cutting boundary of the monitored-view image according to the stitching information;
selecting, by the DMA, the monitored-view image data from the target image data in the video buffer according to the cutting boundary;
sending the monitored-view image data to the stitching unit through the second channel of the DMA; and
discarding, by the stitching unit, the overlapping data in the monitored-view image data to obtain the final monitored-view image data.
In a possible implementation, displaying the monitored-view image on the display screen in real time includes:
configuring the resolution of the display screen according to the row-and-column deviation information of each camera, the row-and-column deviation information indicating the row and column offsets that arise when the images captured by the cameras are aligned; and
displaying the monitored-view image on the display screen in real time at the configured resolution.
In one aspect, a vehicle-mounted surround-view apparatus is provided for use in a vehicle-mounted terminal of a vehicle-mounted surround-view system, the system further comprising a display screen and n cameras located around a vehicle, n ≥ 4. The apparatus comprises:
an acquisition module, configured to acquire head rotation information of a driver, the head rotation information indicating a leftward or rightward rotation angle of the driver's head, the rotation angle being smaller than a maximum rotation angle, where the maximum rotation angle is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead;
a stitching module, configured to stitch the images captured by the m cameras into a monitored-view image according to the head rotation information, where m < n; and
a display module, configured to display the monitored-view image on the display screen in real time.
In one aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the vehicle-mounted surround-view method described above.
In one aspect, a vehicle-mounted terminal is provided, comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the vehicle-mounted surround-view method described above.
The technical solution provided by the embodiments of the present application offers at least the following beneficial effects:
The vehicle-mounted surround-view system comprises n cameras mounted around the vehicle. While the vehicle is being driven, head rotation information of the driver can be acquired, indicating the leftward or rightward rotation angle of the driver's head; the images captured by m of the cameras are then stitched into a monitored-view image according to the head rotation information, where m < n; finally, the monitored-view image is displayed on the display screen, so that a single monitored-view image stitched from the images of some of the cameras is displayed according to the driver's head rotation. Because only one monitored-view image needs to be displayed rather than several, the image can be kept at a suitable size without blocking the driving field of view, and the driver is not distracted. In addition, the rotation angle is smaller than the maximum rotation angle, the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead; this ensures that straight ahead remains in the driver's field of view when the head is turned, so no driving safety hazard is created.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Evidently, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic view of a camera installation according to some exemplary embodiments;
FIG. 2 is a method flowchart of a vehicle-mounted surround-view method provided by one embodiment of the present application;
FIG. 3 is a schematic view of the field of view, head, and eyeball when looking straight ahead, provided by one embodiment of the present application;
FIG. 4 is a schematic view of the field of view, head, and eyeball when the head is turned to the left, provided by one embodiment of the present application;
FIG. 5 is a schematic view of the field of view, head, and eyeball when the head is turned to the right, provided by one embodiment of the present application;
FIG. 6 is a schematic illustration of the monitored rotation angle provided by an embodiment of the present application;
FIG. 7 is a hardware block diagram for a vehicle-mounted surround-view method according to an embodiment of the present application;
FIG. 8 is a schematic cutting diagram of an image provided by one embodiment of the present application;
FIG. 9 is a schematic illustration of the storage of image data provided by an embodiment of the present application;
FIG. 10 is a method flowchart of a vehicle-mounted surround-view method provided by one embodiment of the present application;
FIG. 11 is a schematic diagram of the video buffer read/write mode in which the upper half space is read and the lower half space is written, according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the video buffer read/write mode in which the upper half space is written and the lower half space is read, according to an embodiment of the present application;
FIG. 13 is a block diagram of a vehicle-mounted surround-view apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in further detail below with reference to the accompanying drawings.
The embodiments of the present application capture slight rotations of the driver's head, acquire images of the corresponding directions around the vehicle, and display them on a display screen the driver can see by merely raising the eyes, so the driver's gaze never has to leave the road ahead. That is, with this solution, a high-definition display screen can be mounted where the driver can see it by raising the head without the forward view being blocked; meanwhile, when observing the image on the display screen, the driver's gaze does not need to leave the road ahead, so driving safety is not affected.
In the embodiments of the present application, n cameras are mounted around the vehicle, where n is a positive integer greater than or equal to 4. In this embodiment the mounting positions are described taking n = 8 as an example; the value of n is not limited.
When n = 8, the 8 cameras are evenly distributed around the vehicle. Numbering the cameras counterclockwise, camera 0 is mounted directly in front of the vehicle, camera 4 directly behind it, cameras 1-3 on the left side between cameras 0 and 4, and cameras 5-7 on the right side, with a 45° interval between adjacent cameras; see FIG. 1. The view-range terms used in FIG. 1 are described below.
Straight-ahead view range: the viewing range directly ahead of the driver. It is not monitored and is only stored as a driving record; that is, the image data of camera 0 is merely read and takes no part in stitching.
Left monitoring view range: the image data captured by cameras 1/2/3 is stitched to output a monitored-view image of the left side of the vehicle.
Right monitoring view range: the image data captured by cameras 5/6/7 is stitched to output a monitored-view image of the right side of the vehicle.
Rear monitoring view range: the image data captured by cameras 3/4/5 is stitched to output a monitored-view image of the area behind the vehicle.
It should be noted that the left monitoring view range equals the right monitoring view range, and both are larger than the rear monitoring view range.
Please refer to FIG. 2, which shows a method flowchart of a vehicle-mounted surround-view method according to an embodiment of the present application. The method can be applied to a vehicle-mounted terminal of a vehicle-mounted surround-view system that further includes a display screen and n cameras located around the vehicle, n ≥ 4. The method can include the following steps:
Step 201: acquire head rotation information of the driver, the head rotation information indicating a leftward or rightward rotation angle of the driver's head, the rotation angle being smaller than a maximum rotation angle, where the maximum rotation angle is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead.
Here, the maximum rotation angle is the largest angle through which the driver's head may turn to the left or right. Normally, when the driver does not turn the head, the eyeball is centered in the eye socket; see the field of view, head orientation, and eyeball position shown in FIG. 3. When the driver turns the head to the left, the eyeball typically moves to the right so that the driver keeps seeing straight ahead, stopping when it reaches the rightmost edge of the eye socket; at that point the driver can still see straight ahead and the head has turned through the maximum rotation angle; see FIG. 4. Likewise, when the driver turns the head to the right, the eyeball moves to the left until it reaches the leftmost edge of the eye socket; at that point the driver can still see straight ahead and the head has turned through the maximum rotation angle; see FIG. 5.
In this embodiment, the vehicle-mounted terminal may capture the driver's head rotation through an angle detection unit, which may be, for example, an angle gyroscope worn on the head or a binocular camera; this embodiment does not limit the choice.
The angle detection unit acquires both the rotation angle and the rotation direction of the head; in this embodiment the two are collectively referred to as head rotation information.
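For concreteness, the head rotation information can be modeled as a small record combining direction and angle. The sketch below is illustrative only; the patent does not prescribe a data layout, and all names are assumptions:

```c
/* Illustrative only: the patent does not define a concrete layout for the
 * head rotation information. One minimal representation: */
typedef struct {
    float angle_deg;  /* magnitude of the head rotation, below the maximum */
    int   turn_left;  /* 1 = head turned left, 0 = head turned right */
} head_rotation_t;
```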
Step 202: stitch the images captured by m cameras into a monitored-view image according to the head rotation information.
The vehicle-mounted terminal can select the corresponding m cameras from the n cameras according to the head rotation information and stitch the images captured by those m cameras into a monitored-view image, where m < n.
Step 203: display the monitored-view image on the display screen in real time.
It should be noted that when the driver rotates the head as a continuous action, the vehicle-mounted terminal acquires the head rotation information in real time and displays the stitched monitored-view image on the display screen in real time. For example, initially the driver's head faces straight ahead and the monitored view is directly behind the vehicle; when the head starts to turn left, the monitored view sweeps from directly behind toward the left, and during the sweep the vehicle-mounted terminal stitches the images of cameras 5/4/3/2/1 in turn to form the monitored-view image of the current monitoring range; see FIG. 6. Here, monitored rotation angle = (left monitoring view range / maximum rotation angle) × head rotation angle.
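The formula above can be made concrete with a short sketch. The 135° left monitoring range (three cameras at 45° spacing) and the 60° maximum head rotation are assumed values for illustration; the patent does not fix either number:

```c
#include <math.h>

#define CAMERA_SPACING_DEG  45.0f   /* angle between adjacent cameras */
#define LEFT_RANGE_DEG      135.0f  /* assumed left monitoring view range */
#define MAX_HEAD_ROT_DEG    60.0f   /* assumed maximum head rotation angle */

/* monitored rotation angle = (left range / maximum rotation) * head rotation */
static float monitored_angle(float head_rot_deg)
{
    return (LEFT_RANGE_DEG / MAX_HEAD_ROT_DEG) * head_rot_deg;
}

/* Center camera of the monitored window while turning left: the view sweeps
 * from camera 4 (directly behind) toward camera 1, as in FIG. 6. */
static int center_camera_left(float head_rot_deg)
{
    int steps = (int)roundf(monitored_angle(head_rot_deg) / CAMERA_SPACING_DEG);
    int cam = 4 - steps;
    return cam < 1 ? 1 : cam;
}
```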
In summary, in the vehicle-mounted surround-view method provided by this embodiment of the application, the surround-view system includes n cameras mounted around the vehicle. While the vehicle is being driven, head rotation information of the driver can be acquired, indicating the leftward or rightward rotation angle of the driver's head; the images captured by m of the cameras are then stitched into a monitored-view image according to the head rotation information, where m < n; finally, the monitored-view image is displayed on the display screen, so that a single monitored-view image stitched from the images of some of the cameras is displayed according to the driver's head rotation. Because only one monitored-view image needs to be displayed rather than several, the image can be kept at a suitable size without blocking the driving field of view, and the driver is not distracted. In addition, the rotation angle is smaller than the maximum rotation angle, the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead; this ensures that straight ahead remains in the driver's field of view when the head is turned, so no driving safety hazard is created.
The hardware structure of the vehicle-mounted surround-view system is explained below.
Referring to FIG. 7, the vehicle-mounted surround-view system includes a vehicle-mounted terminal, a display screen, and n cameras. The vehicle-mounted terminal may include an angle detection unit, a stitching unit, a direct memory access (DMA) controller, a central processing unit (CPU), a video buffer, a system bus arbitration unit, and a memory. The system bus arbitration unit is connected via the bus to the memory, the n cameras, the DMA, the stitching unit, and the CPU; the n cameras are connected to the DMA; the DMA is connected to the video buffer via the bus and to the stitching unit; the stitching unit is connected to the display screen via the bus and to the angle detection unit; and the angle detection unit is connected to the CPU.
The configuration flow of the vehicle-mounted surround-view system is described below.
First, the CPU configures the operating modes of the n cameras:
1) The CPU must enable the n cameras simultaneously to ensure that the images they capture are synchronized.
2) When the cameras are mounted on a vehicle, they deviate somewhat in the horizontal and vertical directions, so the images captured by the cameras exhibit row and column offsets when stitched. The CPU therefore acquires row-and-column deviation information for each camera through a self-check step at system startup; this information indicates the row and column offsets that arise when the images captured by the cameras are aligned. Referring to the images captured by the three cameras shown in FIG. 8, the row data of the three images are offset from one another, and the upper and lower boundaries of the images must be determined from the row offsets.
3) The CPU must determine the storage address in memory of the image data captured by each camera, and the layout of that image data in memory.
In this embodiment, a first address interval separates the storage addresses of the same row of data in images captured by different cameras, and a second address interval separates the storage addresses of different rows of data in the image captured by the same camera. Referring to FIG. 9, let the first address interval be A and the second address interval be B, and assume the storage address of the first row of the first camera's image is C. Then its second row is at C + B, its third row at C + 2B, and so on. The first row of the second camera's image is stored at C + A, its second row at C + A + B, and so on; the first row of the third camera's image is stored at C + 2A, its second row at C + 2A + B, and so on.
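This address arithmetic can be captured in a single helper; a minimal sketch, assuming byte addresses with A and B as strides (the function name is illustrative):

```c
#include <stdint.h>

/* Row r (0-based) of camera k (0-based) lives at C + k*A + r*B, so e.g.
 * the second row of the third camera is at C + 2A + B, as in the text. */
static uint32_t row_address(uint32_t base_c, uint32_t interval_a,
                            uint32_t interval_b, uint32_t camera_k,
                            uint32_t row_r)
{
    return base_c + camera_k * interval_a + row_r * interval_b;
}
```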
Secondly, when the system starts for the first time, the CPU performs angle correction on the angle detection unit through a self-check verification step or a user-initiated verification program, so that the angle detection unit can accurately capture the head rotation angle.
Thirdly, the CPU configures the row-and-column deviation information of the n cameras and a conversion relation into the stitching unit. The conversion relation may be a mapping or a calculation formula between rotation angles and rows/columns, so that after receiving the head rotation information sent by the angle detection unit, the stitching unit can consult the conversion relation using the head rotation information and the row-and-column deviation information to obtain the stitching information.
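The patent leaves the concrete form of the conversion relation open (a mapping or a formula). A sketch of the lookup-table variant follows, with an assumed one-degree quantization and illustrative field names:

```c
typedef struct {
    int first_camera;    /* leftmost camera of the m-camera window */
    int row_offset[3];   /* per-camera start rows, from row deviation info */
    int col_cut_start;   /* left cutting boundary in the first camera */
    int col_cut_end;     /* right cutting boundary in the last camera */
} stitch_info_t;

#define ANGLE_STEPS 61   /* one entry per degree for an assumed 60 degree max */

/* conversion_table[0][a] = stitching info for a left turn of a degrees,
 * conversion_table[1][a] = the same for a right turn; filled by the CPU. */
static stitch_info_t conversion_table[2][ANGLE_STEPS];

static const stitch_info_t *lookup_stitch_info(int turn_left, float angle_deg)
{
    int idx = (int)(angle_deg + 0.5f);              /* quantize to 1 degree */
    if (idx >= ANGLE_STEPS) idx = ANGLE_STEPS - 1;
    return &conversion_table[turn_left ? 0 : 1][idx];
}
```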
Fourthly, the CPU configures the cameras' storage addresses and data layout into the DMA, configures the first channel to cache the target image data in the video buffer, and configures the maximum row deviation of the n cameras, which is used to select the aligned rows of the monitored-view image from the target image data. The CPU also configures the second channel to transmit the monitored-view image data to the stitching unit, together with the left and right cutting boundaries of the image and the column overlap regions between camera images.
For example, if the maximum row deviation is 2, i.e., the first row of one image aligns with the third row of another, the monitored-view image data may be selected from the third row through the last-but-one row of the target image data.
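A sketch of this row-selection rule, assuming a symmetric worst case in which the maximum row deviation d can consume rows at both the top and the bottom of a frame of height H:

```c
/* Keep only output rows that every camera can supply: with maximum row
 * deviation d and frame height H (rows 0..H-1), a conservative choice is
 * rows d .. H-1-d; for d = 2 this starts at the third row, as above. */
static void aligned_row_range(int frame_height_h, int max_row_dev_d,
                              int *first_row, int *last_row)
{
    *first_row = max_row_dev_d;
    *last_row  = frame_height_h - 1 - max_row_dev_d;
}
```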
Fifthly, the CPU configures the resolution and refresh rate of the display screen according to the maximum row deviation, so that the display screen can display the stitched monitored-view image data at that resolution.
After the configuration of the vehicle-mounted surround-view system is completed, the surround-view mode can be executed. Please refer to FIG. 10, which shows a method flowchart of a vehicle-mounted surround-view method according to an embodiment of the present application. The method can be applied to a vehicle-mounted terminal of a vehicle-mounted surround-view system that further includes a display screen and n cameras located around the vehicle, n ≥ 4. The method can include the following steps:
Step 1001: acquire the driver's head rotation information through the angle detection unit, and send the head rotation information to the stitching unit.
The head rotation information indicates a leftward or rightward rotation angle of the driver's head; the rotation angle is smaller than a maximum rotation angle, which is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead, as described in step 201 and not repeated here.
In this embodiment, the vehicle-mounted terminal may capture the driver's head rotation through an angle detection unit, which may be, for example, an angle gyroscope worn on the head or a binocular camera; this embodiment does not limit the choice.
The angle detection unit acquires both the rotation angle and the rotation direction of the head; in this embodiment the two are collectively referred to as head rotation information.
Step 1002: acquire, through the stitching unit, the row-and-column deviation information of each camera, the information indicating the row and column offsets that arise when the images captured by the cameras are aligned.
The row-and-column deviation information is pre-configured into the stitching unit by the CPU, so the stitching unit can read it directly.
Step 1003: acquire, through the stitching unit, a conversion relation indicating the relation between different rotation angles and different stitching information. The stitching information is calculated from the row-and-column deviation information and indicates that the monitored-view image data is to be selected such that the stitched rows of the monitored-view image are aligned, the monitored-view image data being used to generate the monitored-view image.
The conversion relation is configured into the stitching unit in advance by the CPU, so the stitching unit can read it directly.
Step 1004: look up, through the stitching unit, the stitching information matching the head rotation information in the conversion relation, and send the stitching information to the DMA.
Step 1005: read the target image data determined according to the stitching information into the video buffer through the first channel of the DMA.
In this embodiment, the DMA may determine, according to the stitching information, the image data captured by the m cameras that is used to stitch the same row of the monitored-view image, and take it as the target image data. For example, the DMA determines, according to the stitching information, the images captured by cameras 2 to 4 that are used to stitch the same row of the monitored-view image as the target image data.
Reading the target image data determined according to the stitching information into the video buffer through the first channel of the DMA may include: determining, by the DMA according to the stitching information, the m rows of image data required to stitch the i-th row of the monitored-view image, each of the m rows corresponding to one of the m cameras, the row numbers for different cameras being the same or different depending on whether the camera images are row-aligned, i being a positive integer; and sequentially reading one row of image data for each camera into the video buffer through the first channel of the DMA to obtain the target image data, where a first address interval separates the addresses of the same row of image data from different cameras, and a second address interval separates the addresses of two adjacent rows of image data from the same camera.
Referring to FIG. 9, take reading the target image data captured by camera k, camera k+1, and camera k+2 as an example. If, according to the stitching information, the row deviation between camera k and camera k+1 is j and the row deviation between camera k+2 and camera k+1 is p, then row j+1 of camera k, row 1 of camera k+1, and row p+1 of camera k+2 are used to stitch the first row of the monitored-view image; accordingly, row j+1 of camera k is read first, then row 1 of camera k+1, and finally row p+1 of camera k+2, yielding the target image data.
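This read order generalizes from the first output row to any output row i. A minimal sketch, with the row deviations j and p taken relative to camera k+1 as in the example (rows 0-based):

```c
/* For output row i, camera k supplies its row i+j, camera k+1 its row i,
 * and camera k+2 its row i+p; for i = 0 this is the j+1-th, 1st and p+1-th
 * row in the 1-based numbering used above. */
static void source_rows(int i, int row_dev_j, int row_dev_p, int rows_out[3])
{
    rows_out[0] = i + row_dev_j;  /* camera k   */
    rows_out[1] = i;              /* camera k+1 */
    rows_out[2] = i + row_dev_p;  /* camera k+2 */
}
```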
Step 1006: send the monitored-view image data selected in the video buffer according to the stitching information to the stitching unit through the second channel of the DMA.
Sending the monitored-view image data selected in the video buffer according to the stitching information to the stitching unit through the second channel of the DMA may include: determining, by the DMA, the cutting boundary of the monitored-view image according to the stitching information; selecting, by the DMA, the monitored-view image data from the target image data in the video buffer according to the cutting boundary; sending the monitored-view image data to the stitching unit through the second channel of the DMA; and discarding, by the stitching unit, the overlapping data in the monitored-view image data to obtain the final monitored-view image data.
The DMA can discard the image data in the target image data that is not vertically aligned, then determine the left and right boundaries according to the stitching information and the size of the display screen, thereby finally determining the cutting boundary. Referring to FIG. 8, the cutting boundary determined there is the rectangle with A, B, C, and D as its four vertices; the DMA may select the image data within this boundary from the target image data as the monitored-view image data and transmit it to the stitching unit.
Referring to FIG. 11, when the monitored-view image data is selected, the left boundary of the data starts at the column-cut start point of the image captured by camera k and the right boundary ends at the column-cut end point of the image captured by camera k+2, so the DMA must discard the image data outside the left and right boundaries in the target data. The DMA may or may not transmit the column-overlap data between cameras; all selected monitored-view image data is transmitted to the stitching unit through the second channel, and if the overlap data is transmitted, the stitching unit discards it.
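In software terms, the per-row crop-and-concatenate step can be sketched as below; the hardware DMA performs the equivalent with address ranges. The overlap handling and all names are assumptions for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Concatenate one output row from three camera rows, keeping only pixels
 * inside the left/right cutting boundaries and skipping the column overlap
 * between adjacent cameras. Returns the number of bytes written. */
static size_t crop_row(uint8_t *dst, const uint8_t *src[3], const int width[3],
                       int col_cut_start, int col_cut_end, const int overlap[2])
{
    size_t out = 0;
    int n0 = width[0] - col_cut_start - overlap[0];  /* camera k   */
    int n1 = width[1] - overlap[1];                  /* camera k+1 */
    int n2 = col_cut_end;                            /* camera k+2 */

    memcpy(dst + out, src[0] + col_cut_start, (size_t)n0); out += (size_t)n0;
    memcpy(dst + out, src[1],                 (size_t)n1); out += (size_t)n1;
    memcpy(dst + out, src[2],                 (size_t)n2); out += (size_t)n2;
    return out;
}
```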
It should be noted that the video buffer in this embodiment may be divided into an upper half space and a lower half space. While the first channel writes the target image data row by row into the lower half space, the second channel reads the monitored-view image data row by row from the upper half space. As shown in FIG. 11, while the second channel reads row j+1 of camera k, row 1 of camera k+1, and row p+1 of camera k+2 from the upper half space, the first channel writes row j+2 of camera k, row 2 of camera k+1, and row p+2 of camera k+2 row by row into the lower half space. When the second channel reads the monitored-view image data from the lower half space, the first channel writes the target image data into the upper half space; that is, the video buffering uses a ping-pong scheme. As shown in FIG. 12, while the second channel reads row j+2 of camera k, row 2 of camera k+1, and row p+2 of camera k+2 from the lower half space, the first channel writes row j+3 of camera k, row 3 of camera k+1, and row p+3 of camera k+2 row by row into the upper half space.
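The ping-pong discipline itself is simple to state in code. In the sketch below the two callbacks stand in for the two DMA channels, which in hardware run concurrently rather than back to back:

```c
#include <stdint.h>

typedef struct {
    uint8_t *half[2];    /* upper (0) and lower (1) half of the video buffer */
    int      write_half; /* half the first channel writes in this cycle */
} pingpong_t;

/* One row cycle: channel 1 fills one half while channel 2 drains the other,
 * then the halves swap roles, as in FIG. 11 and FIG. 12. */
static void pingpong_cycle(pingpong_t *pp,
                           void (*ch1_write)(uint8_t *dst),
                           void (*ch2_read)(const uint8_t *src))
{
    ch1_write(pp->half[pp->write_half]);
    ch2_read(pp->half[1 - pp->write_half]);
    pp->write_half = 1 - pp->write_half;
}
```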
It should be noted that the buffer sizes of the upper and lower half spaces are determined by the monitoring angle; when the number of cameras n is 8, 3 rows of video images are typically buffered.
Step 1007: stitch the monitored-view image data through the stitching unit to obtain the monitored-view image.
Step 1008: configure the resolution of the display screen according to the row-and-column deviation information of each camera.
Because data is missing from the top and bottom rows of the monitored-view image, the resolution of the display screen must be configured accordingly to ensure the display effect.
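One plausible way to derive the configured resolution is to subtract the rows lost to alignment; the patent states only that the resolution follows from the deviation information, so the 2*d subtraction below is an assumption:

```c
/* With maximum row deviation d, up to d rows are unusable at the top and at
 * the bottom of the stitched image, so the display is configured 2*d rows
 * shorter than the raw camera frames. */
static void display_resolution(int cam_height, int stitched_width,
                               int max_row_dev, int *disp_w, int *disp_h)
{
    *disp_w = stitched_width;
    *disp_h = cam_height - 2 * max_row_dev;
}
```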
Step 1009: display the monitored-view image on the display screen in real time at the configured resolution.
It should be noted that when the driver rotates the head as a continuous action, the vehicle-mounted terminal acquires the head rotation information in real time and displays the stitched monitored-view image on the display screen in real time. For example, initially the driver's head faces straight ahead and the monitored view is directly behind the vehicle; when the head starts to turn left, the monitored view sweeps from directly behind toward the left, and during the sweep the vehicle-mounted terminal stitches the images of cameras 5/4/3/2/1 in turn to form the monitored-view image of the current monitoring range; see FIG. 6. Here, monitored rotation angle = (left monitoring view range / maximum rotation angle) × head rotation angle.
In summary, in the vehicle-mounted surround-view method provided by this embodiment of the application, the surround-view system includes n cameras mounted around the vehicle. While the vehicle is being driven, head rotation information of the driver can be acquired, indicating the leftward or rightward rotation angle of the driver's head; the images captured by m of the cameras are then stitched into a monitored-view image according to the head rotation information, where m < n; finally, the monitored-view image is displayed on the display screen, so that a single monitored-view image stitched from the images of some of the cameras is displayed according to the driver's head rotation. Because only one monitored-view image needs to be displayed rather than several, the image can be kept at a suitable size without blocking the driving field of view, and the driver is not distracted. In addition, the rotation angle is smaller than the maximum rotation angle, the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead; this ensures that straight ahead remains in the driver's field of view when the head is turned, so no driving safety hazard is created.
Please refer to FIG. 13, which shows a block diagram of a vehicle-mounted surround-view apparatus according to an embodiment of the present application. The apparatus can be applied to a vehicle-mounted terminal of a vehicle-mounted surround-view system that further includes a display screen and n cameras located around the vehicle, n ≥ 4. The apparatus can include:
an acquisition module 1310, configured to acquire head rotation information of a driver, the head rotation information indicating a leftward or rightward rotation angle of the driver's head, the rotation angle being smaller than a maximum rotation angle, where the maximum rotation angle is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead;
a stitching module 1320, configured to stitch the images captured by the m cameras into a monitored-view image according to the head rotation information, where m < n; and
a display module 1330, configured to display the monitored-view image on the display screen in real time.
In a possible embodiment, the vehicle-mounted terminal includes a stitching unit and a DMA, and the stitching module 1320 is further configured to:
generate, through the stitching unit, stitching information according to the head rotation information, and send the stitching information to the DMA, the stitching information indicating that the monitored-view image data is to be selected such that the stitched rows of the monitored-view image are aligned, the monitored-view image data being used to generate the monitored-view image;
select, through the DMA according to the stitching information, the monitored-view image data from the target image data captured by the n cameras, and send the monitored-view image data to the stitching unit; and
stitch, through the stitching unit, the monitored-view image data to obtain the monitored-view image.
In a possible embodiment, the stitching module 1320 is further configured to:
acquire, through the stitching unit, row-and-column deviation information of each camera, the information indicating the row and column offsets that arise when the images captured by the cameras are aligned;
acquire, through the stitching unit, a conversion relation indicating the relation between different rotation angles and different stitching information, the stitching information being calculated from the row-and-column deviation information; and
look up, through the stitching unit, the stitching information matching the head rotation information in the conversion relation.
In a possible embodiment, the stitching module 1320 is further configured to:
read, through the first channel of the DMA, the target image data determined according to the stitching information into the video buffer; and
send, through the second channel of the DMA, the monitored-view image data selected in the video buffer according to the stitching information to the stitching unit.
In a possible embodiment, the stitching module 1320 is further configured to:
determine, through the DMA according to the stitching information, the m rows of image data required to stitch the i-th row of the monitored-view image, each of the m rows corresponding to one of the m cameras, the row numbers for different cameras being the same or different depending on whether the image rows of the cameras are aligned, i being a positive integer; and
sequentially read, through the first channel of the DMA, one row of image data for each camera into the video buffer to obtain the target image data, where a first address interval separates the addresses of the same row of image data from different cameras, and a second address interval separates the addresses of two adjacent rows of image data from the same camera.
In a possible embodiment, the stitching module 1320 is further configured to:
determine, through the DMA, the cutting boundary of the monitored-view image according to the stitching information;
select, through the DMA, the monitored-view image data from the target image data in the video buffer according to the cutting boundary;
send the monitored-view image data to the stitching unit through the second channel of the DMA; and
discard, through the stitching unit, the overlapping data in the monitored-view image data to obtain the final monitored-view image data.
In a possible embodiment, the display module 1330 is further configured to:
configure the resolution of the display screen according to the row-and-column deviation information of each camera, the information indicating the row and column offsets that arise when the images captured by the cameras are aligned; and
display the monitored-view image on the display screen in real time at the configured resolution.
In summary, in the vehicle-mounted surround-view apparatus provided by this embodiment of the application, the surround-view system includes n cameras mounted around the vehicle. While the vehicle is being driven, head rotation information of the driver can be acquired, indicating the leftward or rightward rotation angle of the driver's head; the images captured by m of the cameras are then stitched into a monitored-view image according to the head rotation information, where m < n; finally, the monitored-view image is displayed on the display screen, so that a single monitored-view image stitched from the images of some of the cameras is displayed according to the driver's head rotation. Because only one monitored-view image needs to be displayed rather than several, the image can be kept at a suitable size without blocking the driving field of view, and the driver is not distracted. In addition, the rotation angle is smaller than the maximum rotation angle, the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead; this ensures that straight ahead remains in the driver's field of view when the head is turned, so no driving safety hazard is created.
An embodiment of the present application provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the vehicle-mounted surround-view method described above.
An embodiment of the present application provides a vehicle-mounted terminal comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the vehicle-mounted surround-view method described above.
It should be noted that the vehicle-mounted surround-view apparatus provided in the above embodiment is illustrated only by the division of functional modules described above; in practical applications, the functions may be assigned to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided in this embodiment belongs to the same concept as the method embodiments above; its specific implementation process is detailed in the method embodiments and is not repeated here.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description is not intended to limit the embodiments of the present application; any modification, equivalent substitution, or improvement made within the spirit and principles of the embodiments of the present application shall fall within their scope of protection.

Claims (10)

1. A vehicle-mounted surround-view method, characterized in that the method is used in a vehicle-mounted terminal of a vehicle-mounted surround-view system, the system further comprising a display screen and n cameras located around a vehicle, n ≥ 4; the method comprising:
acquiring head rotation information of a driver, the head rotation information indicating a leftward or rightward rotation angle of the driver's head, the rotation angle being smaller than a maximum rotation angle, where the maximum rotation angle is the angle at which, after the head is rotated, the driver's eyeball sits at the outermost edge of the eye socket while the driver can still see straight ahead;
stitching the images captured by m of the cameras into a monitored-view image according to the head rotation information, where m < n; and
displaying the monitored-view image on the display screen in real time.
2. The method according to claim 1, wherein the vehicle-mounted terminal includes a stitching unit and a direct memory access (DMA) controller, and stitching the images captured by the m cameras into the monitored-view image according to the head rotation information comprises:
generating, by the stitching unit, stitching information according to the head rotation information, and sending the stitching information to the DMA, the stitching information indicating that the monitored-view image data is to be selected such that the stitched rows of the monitored-view image are aligned, the monitored-view image data being used to generate the monitored-view image;
selecting, by the DMA according to the stitching information, the monitored-view image data from the target image data captured by the n cameras, and sending the monitored-view image data to the stitching unit; and
stitching, by the stitching unit, the monitored-view image data to obtain the monitored-view image.
3. The method according to claim 2, wherein generating, by the stitching unit, the stitching information according to the head rotation information comprises:
acquiring, by the stitching unit, row-and-column deviation information of each camera, the row-and-column deviation information indicating the row and column offsets that arise when the images captured by the cameras are aligned;
acquiring, by the stitching unit, a conversion relation indicating the relation between different rotation angles and different stitching information, the stitching information being calculated from the row-and-column deviation information; and
looking up, by the stitching unit, the stitching information matching the head rotation information in the conversion relation.
4. The method according to claim 2, wherein selecting, by the DMA according to the stitching information, the monitored-view image data from the target image data captured by the n cameras comprises:
reading, through a first channel of the DMA, the target image data determined according to the stitching information into a video buffer; and
sending, through a second channel of the DMA, the monitored-view image data selected in the video buffer according to the stitching information to the stitching unit.
5. The method according to claim 4, wherein reading, through the first channel of the DMA, the target image data determined according to the stitching information into the video buffer comprises:
determining, by the DMA according to the stitching information, the m rows of image data required to stitch the i-th row of the monitored-view image, each of the m rows corresponding to one of the m cameras, the row numbers of the image data for different cameras being the same or different depending on whether the image rows of the cameras are aligned, i being a positive integer; and
sequentially reading, through the first channel of the DMA, one row of image data for each camera into the video buffer to obtain the target image data, where a first address interval separates the addresses of the same row of image data from different cameras, and a second address interval separates the addresses of two adjacent rows of image data from the same camera.
6. The method according to claim 4, wherein sending, through the second channel of the DMA controller, the monitoring view image data selected from the video buffer according to the splicing information to the splicing unit comprises:
determining, by the DMA controller, a cutting boundary of the monitoring view image according to the splicing information;
selecting, by the DMA controller, the monitoring view image data from the target image data in the video buffer according to the cutting boundary;
sending the monitoring view image data to the splicing unit through the second channel of the DMA controller; and
discarding, by the splicing unit, the overlapping data in the monitoring view image data to obtain final monitoring view image data.
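The cutting boundary and overlap handling of claim 6 might look like the following sketch, where crop_line stands in for the second DMA channel and splice_row for the splicing unit. The fixed overlap byte count in main is an assumption; the patent derives the discarded region from the splicing information.

```c
#include <stdint.h>
#include <string.h>

typedef struct { int left; int right; } cut_t;  /* column cut boundary, [left, right) */

/* Second DMA channel (modeled by memcpy): send only the in-boundary bytes. */
size_t crop_line(const uint8_t *cam_line, cut_t cut, uint8_t *dst)
{
    size_t n = (size_t)(cut.right - cut.left);
    memcpy(dst, cam_line + cut.left, n);
    return n;
}

/* Splicing unit: append the cropped segments, discarding `overlap` leading
 * bytes of every segment after the first (the overlapping data). */
size_t splice_row(const uint8_t *seg[], const size_t seg_len[], int m,
                  size_t overlap, uint8_t *out)
{
    size_t pos = 0;
    for (int c = 0; c < m; c++) {
        size_t skip = (c == 0) ? 0 : overlap;
        memcpy(out + pos, seg[c] + skip, seg_len[c] - skip);
        pos += seg_len[c] - skip;
    }
    return pos;
}

int main(void)
{
    static uint8_t a[1280], b[1280], out[2560];
    uint8_t ca[640], cb[640];
    cut_t cut = { 320, 960 };                 /* assumed cutting boundary */
    size_t la = crop_line(a, cut, ca);
    size_t lb = crop_line(b, cut, cb);
    const uint8_t *segs[2]  = { ca, cb };
    const size_t  lens[2]   = { la, lb };
    splice_row(segs, lens, 2, 16, out);       /* assume 16 overlapping bytes */
    return 0;
}
```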
7. The method according to claim 1, wherein displaying the monitoring view image on the display screen in real time comprises:
configuring a resolution of the display screen according to row-column deviation information of each camera, wherein the row-column deviation information indicates the row and column offsets produced when the images captured by the cameras are aligned; and
displaying the monitoring view image on the display screen in real time at the configured resolution.
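One plausible reading of claim 7, sketched under assumed numbers: take the worst-case row-column deviation across the cameras and shrink the configured panel resolution accordingly, so the aligned output fills the screen without blank margins. config_display is a hypothetical stand-in for a real display-controller call.

```c
#include <stdio.h>

typedef struct { int row_off; int col_off; } deviation_t;

/* Hypothetical stand-in for a real display-controller configuration call. */
static void config_display(int width, int height)
{
    printf("display configured: %dx%d\n", width, height);
}

int main(void)
{
    enum { M = 3, CAM_W = 640, CAM_H = 480 };  /* assumed camera geometry */
    const deviation_t dev[M] = { {2, 4}, {0, 0}, {3, 1} };

    int max_row = 0, max_col = 0;
    for (int c = 0; c < M; c++) {              /* worst-case alignment loss */
        if (dev[c].row_off > max_row) max_row = dev[c].row_off;
        if (dev[c].col_off > max_col) max_col = dev[c].col_off;
    }
    /* Shrink the nominal spliced size by the deviation so no blank edge shows. */
    config_display(M * CAM_W - max_col, CAM_H - max_row);
    return 0;
}
```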
8. A vehicle-mounted looking-around device, characterized in that the device is used in a vehicle-mounted terminal of a vehicle-mounted looking-around system, the vehicle-mounted looking-around system further comprising a display screen and n cameras located around a vehicle, where n ≥ 4; the device comprises:
an acquisition module, configured to acquire head rotation information of a driver, wherein the head rotation information indicates a left or right rotation angle of the driver's head, the rotation angle is smaller than a maximum rotation angle, and the maximum rotation angle is the angle at which the driver's eyeball is located at the outermost edge of the eyebox while the driver can still see straight ahead after turning the head;
a splicing module, configured to splice the images captured by m of the cameras into a monitoring view image according to the head rotation information, where m < n; and
a display module, configured to display the monitoring view image on the display screen in real time.
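For illustration, the three modules of claim 8 could be modeled as a table of function pointers; the struct and member names below are hypothetical and do not come from the patent.

```c
#include <stdint.h>

/* Hypothetical module table mirroring claim 8; not the patented design. */
typedef struct {
    int  (*acquire_head_rotation)(void);         /* acquisition module: signed angle in degrees */
    void (*splice)(int angle_deg, uint8_t *out); /* splicing module: build the monitoring view  */
    void (*display)(const uint8_t *image);       /* display module: push the frame to the screen */
} looking_around_device_t;
```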
9. A computer-readable storage medium, characterized in that at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium and is loaded and executed by a processor to implement the vehicle-mounted looking-around method according to any one of claims 1 to 7.
10. A vehicle-mounted terminal, characterized in that the vehicle-mounted terminal comprises a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the vehicle-mounted looking-around method according to any one of claims 1 to 7.
CN202010758026.8A 2020-07-31 2020-07-31 Vehicle-mounted looking-around method and device, storage medium and vehicle-mounted terminal Active CN111918035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010758026.8A CN111918035B (en) 2020-07-31 2020-07-31 Vehicle-mounted looking-around method and device, storage medium and vehicle-mounted terminal

Publications (2)

Publication Number Publication Date
CN111918035A (en) 2020-11-10
CN111918035B (en) 2022-04-15

Family

ID=73287699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010758026.8A Active CN111918035B (en) 2020-07-31 2020-07-31 Vehicle-mounted looking-around method and device, storage medium and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN111918035B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003196645A (en) * 2001-12-28 2003-07-11 Equos Research Co Ltd Image processing device of vehicle
CN101093644A (en) * 2006-06-19 2007-12-26 深圳安凯微电子技术有限公司 LCD control circuit and method for picture in picture function supported under multiple output formats
CN102448773A (en) * 2009-05-29 2012-05-09 富士通天株式会社 Image generating apparatus and image display system
JP2015015527A (en) * 2013-07-03 2015-01-22 クラリオン株式会社 Video display system, video synthesis device and video synthesis method
US10027888B1 (en) * 2015-09-28 2018-07-17 Amazon Technologies, Inc. Determining area of interest in a panoramic video or photo
CN105323552A (en) * 2015-10-26 2016-02-10 北京时代拓灵科技有限公司 Method and system for playing panoramic video
US10154228B1 (en) * 2015-12-23 2018-12-11 Amazon Technologies, Inc. Smoothing video panning
CN110337807A * 2017-04-07 2019-10-15 英特尔公司 Method and system of camera device for depth channel and convolutional neural network image and format
WO2020122533A1 (en) * 2018-12-11 2020-06-18 (주)미경테크 Vehicular around view monitoring system through adjustment of viewing angle of camera, and method thereof
CN110481432A * 2019-09-22 2019-11-22 贾实 A-pillar dynamic vision system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邹超洋 (ZOU Chaoyang): "Research on Real-Time Video Monitoring Technology Based on Multi-Camera Panoramic Image Stitching", China Masters' Theses Full-text Database (Information Science and Technology Series) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024032698A1 (en) * 2022-08-10 2024-02-15 中车长春轨道客车股份有限公司 Monitoring display system suitable for traveling rail of high-speed magnetic levitation vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant