CN114332789A - Image processing method, apparatus, device, vehicle, and medium - Google Patents


Info

Publication number
CN114332789A
CN114332789A (application CN202011062526.4A)
Authority
CN
China
Prior art keywords
image
view
area
sampling time
visual field
Prior art date
Legal status
Pending
Application number
CN202011062526.4A
Other languages
Chinese (zh)
Inventor
陈泽通
覃进严
Current Assignee
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN202011062526.4A
Publication of CN114332789A
Legal status: Pending

Abstract

The application discloses an image processing method, apparatus, device, vehicle, and medium. The method comprises: obtaining a key frame image sequence, the sequence comprising a plurality of image frames that correspond to sampling times and to a first view direction of the vehicle; and stitching the plurality of image frames in sampling-time order to obtain a first view image. By stitching multiple image frames, the technical scheme enlarges the field of view of the image and improves the utilization of the image information.

Description

Image processing method, apparatus, device, vehicle, and medium
Technical Field
The present invention relates generally to the technical field of vehicle electronics, and in particular to an image processing method, apparatus, device, vehicle, and medium.
Background
At present, during reversing, a driver completes the maneuver with the assistance of a rear-view image of the scene around the tail of the vehicle shown on an in-vehicle display.
The rear-view image is captured by a reversing camera mounted at the tail of the vehicle. While the vehicle reverses it keeps moving, and for driving safety and by driving habit a driver cannot stare at the display continuously but must shift attention back and forth between the rear-view mirror and the display; if the driver's gaze stays on the mirror too long, part of the rear-view image captured by the reversing camera is missed. Moreover, as the vehicle moves, the rear-view image changes and is partially blocked by the rear bumper, so the real situation below the bumper cannot be fully displayed, which creates a potential safety hazard.
Disclosure of Invention
In view of the above-mentioned drawbacks or deficiencies in the prior art, it is desirable to provide an image processing method, apparatus, device, vehicle, and medium to increase the field of view of an image.
In a first aspect, the present invention provides an image processing method, comprising:
acquiring a key frame image sequence, wherein the key frame image sequence comprises a plurality of image frames corresponding to sampling time and corresponding to a first visual field direction of the vehicle;
and splicing the multiple image frames according to the sampling time sequence to obtain a first view image.
Wherein stitching the plurality of image frames in sampling-time order to obtain the first view image comprises:
acquiring the image area shared by the plurality of image frames and, for each image frame, the image area that differs from the other image frames;
extracting the shared image area as a target area;
extending and superimposing the differing image sub-regions according to their sampling times to obtain an extension area;
and stitching the target area and the extension area to obtain the first view image.
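The four stitching steps above can be sketched for the simplified case of grayscale frames stored as NumPy arrays and purely vertical camera motion; `find_row_shift` and `stitch` are illustrative names, not from the patent, and a real implementation would likely use feature matching rather than exhaustive row comparison:

```python
import numpy as np

def find_row_shift(history, current, max_shift=None):
    """Find the vertical offset (in rows) at which `current` best overlaps
    `history`, by minimizing the mean absolute difference over the overlap.
    A simplified stand-in for the patent's frame-comparison step."""
    h = history.shape[0]
    max_shift = max_shift if max_shift is not None else h - 1
    best_shift, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        overlap = h - s
        err = np.mean(np.abs(history[s:, :].astype(float) -
                             current[:overlap, :].astype(float)))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def stitch(history, current):
    """Stitch two frames: the shared area (target region) stays in place,
    and the rows of `current` absent from `history` form the extension."""
    s = find_row_shift(history, current)
    new_rows = current[history.shape[0] - s:, :]  # second image sub-region
    return np.vstack([history, new_rows])         # target area + extension
```

Under these assumptions, stitching a 5-row history frame with a current frame shifted down by 2 rows yields a 7-row long image.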
Acquiring the image area shared by the plurality of image frames and the image area of each frame that differs from the other frames comprises the following steps:
acquiring a historical image frame corresponding to a first sampling time and a current image frame corresponding to a second sampling time, the first sampling time being earlier than the second sampling time;
and comparing the historical image frame with the current image frame to obtain the image area shared by both frames, a first image sub-region contained only in the historical frame, and a second image sub-region contained only in the current frame.
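A minimal sketch of the three-way decomposition this comparison produces, assuming frames are NumPy row arrays with purely vertical motion and that the vertical offset between the two frames is already known from a prior frame-comparison step (here it is simply passed in); all names are illustrative:

```python
import numpy as np

def split_regions(history, current, shift):
    """Decompose a historical/current frame pair into:
    - common:     the image area contained in both frames,
    - first_sub:  the rows present only in the historical frame,
    - second_sub: the rows present only in the current frame.
    `shift` is the vertical offset between the two frames."""
    h = history.shape[0]
    common = history[shift:, :]          # same image area
    first_sub = history[:shift, :]       # first image sub-region
    second_sub = current[h - shift:, :]  # second image sub-region
    return common, first_sub, second_sub
```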
Wherein extending and superimposing the differing image sub-regions according to their sampling times comprises:
filling the first image sub-region into an image area to be superimposed;
determining the edge coordinates of the first image sub-region;
taking the edge coordinates as the superimposition starting point of the second image sub-region;
determining, from the starting point, the fill area corresponding to the second image sub-region within the image area to be superimposed;
and filling the second image sub-region into the fill area.
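Under the same assumptions (frames as NumPy row arrays, vertical motion), the five filling steps can be sketched as follows; the function name and the zero-initialized canvas are illustrative choices, not dictated by the patent:

```python
import numpy as np

def superimpose_subregions(first_sub, second_sub, width):
    """Extend-and-superimpose in the order of the claim: fill the first
    image sub-region into the area to be superimposed, take its edge
    (last-row) coordinate as the starting point, then fill the second
    sub-region into the fill area determined by that starting point."""
    total = first_sub.shape[0] + second_sub.shape[0]
    canvas = np.zeros((total, width), dtype=first_sub.dtype)
    canvas[:first_sub.shape[0]] = first_sub               # fill first sub-region
    edge = first_sub.shape[0]                             # edge coordinate
    canvas[edge:edge + second_sub.shape[0]] = second_sub  # fill area
    return canvas
```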
Wherein the method further comprises:
superimposing a preset image line corresponding to the obstruction and an auxiliary identification line on the first view image to obtain a second view image;
outputting the second view image to an external display device;
and repeating, in a loop, the steps from stitching the plurality of image frames in sampling-time order through superimposing the preset image line and auxiliary identification line on the first view image, until the overlap between the second view image and the first view image no longer changes.
Wherein superimposing the preset image line corresponding to the obstruction and the auxiliary identification line on the first view image to obtain the second view image comprises:
determining the image area of the obstruction portion in the first view image;
processing the image area of the obstruction portion;
invoking the image line corresponding to the obstruction to fill the outline of the processed obstruction area, obtaining a second view image in an initial state, the image line corresponding to the obstruction having been generated in advance from the outline of the obstruction's image area;
and superimposing the auxiliary identification line on the initial second view image to obtain the second view image.
Wherein, when the obstruction is the bumper at the rear of the vehicle, superimposing the preset image line and auxiliary identification line corresponding to the obstruction on the first view image comprises:
determining the image area corresponding to the bumper in the stitched image;
processing the image area corresponding to the bumper;
invoking the image sub-area corresponding to the bumper to fill the outline of that image area, obtaining a second view image in an initial state, the image line corresponding to the bumper having been generated in advance from the outline of the bumper's image area;
and superimposing the auxiliary identification line on the initial second view image to obtain the second view image.
In a second aspect, the present invention provides an image processing apparatus comprising:
an acquisition module configured to acquire a key frame image sequence including a plurality of image frames corresponding to a sampling time and corresponding to a first field of view direction of a vehicle;
the splicing module is configured to splice the image frames according to the sampling time sequence to obtain a first view image.
In a third aspect, the present invention provides an electronic device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
the processor when executing the program performs the method as in the first aspect.
In a fourth aspect, the invention provides a vehicle comprising the electronic device described above.
In a fifth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program for implementing the method described in the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an image processing method, an image processing device, an image processing apparatus, a vehicle and a medium, which can acquire a key frame image sequence, wherein the key frame image sequence comprises a plurality of image frames corresponding to sampling time and corresponding to a first visual field direction of the vehicle; and splicing the multiple image frames according to the sampling time sequence to obtain a first view image. According to the technical scheme provided by the embodiment of the application, the image frames corresponding to the first view directions are overlapped, the view of the image is increased and the display range of the image is enlarged by overlapping different image areas contained in each image frame, so that the utilization value of the image information is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a schematic structural diagram illustrating an application scenario according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating an image processing method according to an embodiment of the present application;
fig. 3 is a flowchart illustrating another image processing method according to an embodiment of the present application;
FIG. 4 is a schematic interface diagram of a display according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating an image processing method according to an embodiment of the present application;
fig. 6 is a block diagram showing a configuration of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram showing a configuration of another image processing apparatus according to an embodiment of the present application;
fig. 8 shows a schematic structural diagram of a central control device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, fig. 1 shows a schematic structural diagram of an application scenario related to the present application. As shown in fig. 1, the scenario includes a reversing camera 1 and a central control device 2. The reversing camera 1 collects environmental information at the tail of the vehicle and generates a rear-view image; it may be mounted, for example, on a trunk rack at the rear of the vehicle. The central control device 2 includes a memory 21, a controller 22, and a display 23.
The memory 21 is used for storing the rearview images collected by the reversing camera 1.
The controller 22 is in signal connection with the reversing camera 1 and the memory 21 respectively, and the controller 22 is used for controlling the reversing camera 1 to acquire a rearview image according to a preset image acquisition frequency when determining that the vehicle is switched from a driving gear to a reverse gear.
The display 23 is in signal connection with the memory 21 and the controller 22. The display 23 is used for displaying the rearview environment information acquired from the reversing camera 1 or the processed rearview image. The central control device 2 and the reversing camera 1 may both include a sensing device (not shown in the figure), the controller 22 may be in signal connection with the sensing device, the sensing device is used for acquiring vehicle operation information, for example, a vehicle speed sensor is used for acquiring the driving speed of the vehicle, and the controller 22 may determine whether the vehicle finishes reversing according to the vehicle operation information.
The controller 22 is configured to, when it is determined that the vehicle is switched from the driving gear to the reverse gear, control the reverse camera 1 to collect a rear view image for several seconds according to a preset image collection frequency, and store a key frame image sequence of the collected rear view image through the memory 21. The controller 22 performs stitching processing on the current image frame and the historical image frame in the key frame image sequence to obtain a first view image.
The controller 22 is further configured to superimpose a preset image line and an auxiliary identification line corresponding to the obstruction on the first view image to obtain a second view image, and to display the second view image on the display 23, so that the driver can read the environment information at the tail of the vehicle from it. The display 23 may also be another display connected to the vehicle's signal network.
The controller 22 may execute an image processing method as shown in fig. 2, which may be applied in the application scenario shown in fig. 1, and the method includes:
step 210, obtaining a key frame image sequence.
And step 220, splicing the multiple image frames according to the sampling time sequence to obtain a first view image.
In step 210, a key frame image is an image corresponding to a key moment in the motion or change of an object, and the key frame image sequence comprises a plurality of image frames corresponding to the first view direction of the vehicle and to the sampling times. For example, during reversing, the sequence may be captured by the reversing camera at a rate of at least 10 key frame images per second, for example 30 per second, and comprises a plurality of rear-view images corresponding to the sampling times.
For example, the key frame image sequence may be acquired by a reverse camera, and the process may be: when the vehicle is determined to be switched from the driving gear to the reverse gear, the reverse camera is controlled to collect a plurality of image frames corresponding to the first visual field direction of the vehicle to obtain a key frame image sequence, and the key frame image sequence is cached to a memory of the central control device. The first view direction may be a direction in which a shooting angle of the reversing camera is located.
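A hedged sketch of this acquisition trigger as an event-driven buffer: shifting from a driving gear to reverse starts the capture, and sampled frames are cached with their timestamps. The gear codes 'D'/'R' and all names here are assumptions for illustration, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class KeyframeBuffer:
    """Caches keyframes while the vehicle is in reverse gear."""
    capturing: bool = False
    frames: list = field(default_factory=list)

    def on_gear_change(self, old, new):
        # Drive -> Reverse starts capture; leaving reverse stops it.
        if old == "D" and new == "R":
            self.capturing = True
        elif new != "R":
            self.capturing = False

    def on_frame(self, timestamp, frame):
        # Called at the preset sampling frequency.
        if self.capturing:
            self.frames.append((timestamp, frame))
```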
In step 220, stitching the image frames in sampling-time order to obtain the first view image may include the following steps:
acquiring the image area shared by the plurality of image frames and, for each image frame, the image area that differs from the other image frames;
extracting the shared image area as a target area; extending and superimposing the differing image sub-regions according to their sampling times to obtain an extension area;
and stitching the target area and the extension area to obtain the first view image.
It should be noted that the rate of acquiring the shared image region and the differing regions matches the capture rate of at least 10 key frames per second; specifically, the first 10 key frames of each second may be compared every 0.5 second. The first view image is a long image whose height equals the sum of the heights of the key frame image at the current time and of all key frame images collected by the reversing camera before the current time.
The process of acquiring the shared image area and the differing image areas may comprise:
acquiring a historical image frame corresponding to a first sampling time and a current image frame corresponding to a second sampling time, the first sampling time being earlier than the second; and comparing the historical image frame with the current image frame to obtain the image area shared by both frames, a first image sub-region contained only in the historical frame, and a second image sub-region contained only in the current frame.
The process of extending and superimposing the differing image sub-regions according to their sampling times may be: filling the first image sub-region into the image area to be superimposed; determining the edge coordinates of the first image sub-region; taking the edge coordinates as the superimposition starting point of the second image sub-region; determining, from the starting point, the fill area corresponding to the second image sub-region; and filling the second image sub-region into the fill area.
For example, when the vehicle shifts from a driving gear to the reverse gear, the reversing camera is triggered to cache a number of key frame images into the memory, and the acquired key frame image sequence is transmitted to the central control device. The first sampling time can be understood as the time at which a key frame image is sampled; the second sampling time is the time at which the superimposed image is processed. For example, the key frame image sequence may include {T1, T2, …, T10}, where key frame image T1, collected at the first sampling time t1, is defined as the historical image, and key frame image T2, collected at the second sampling time t2, is defined as the current image, with t2 > t1.
After the historical image and the current image are stitched into a first view image, the first view image is taken as the updated historical image, and the image at the next time in the key frame image sequence is taken as the updated current image. For example, key frame image T1 and key frame image T2 are stitched into a first view image P1; P1 is defined as the new historical image, replacing T1; key frame image T3, collected at time t3 (t3 > t2), is defined as the new current image. After t3, subsequent historical and current image frames are obtained in the same manner and are not listed here.
In this process the historical and current image frames are continuously updated: the historical frame is initially the key frame captured at the first sampling time, and the current frame is the key frame captured at the next (second) sampling time. The two are stitched into a first view image, which then becomes the updated historical frame, while the frame captured by the reversing camera at the next time becomes the updated current frame.
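The update rule described above is a fold over the keyframe sequence: the stitched result replaces the historical frame, and the next keyframe becomes the current frame. This sketch abstracts the two-frame stitcher into a callback; all names are illustrative:

```python
def stitch_sequence(frames, stitch_pair):
    """Fold the keyframe sequence in sampling order.
    `stitch_pair(history, current)` is any two-frame stitcher; its
    result becomes the history frame for the next iteration."""
    history = frames[0]                 # frame at the initial time
    for current in frames[1:]:          # frames at successive times
        history = stitch_pair(history, current)
    return history                      # final first view image
```

For example, with frames represented as lists of row identifiers and a toy stitcher that appends only unseen rows, four overlapping frames fold into one long image.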
Through this embodiment, the image frames corresponding to the first view direction of the vehicle are stitched to enlarge the user's field of view, further improving the utilization of the image information.
For example, the stitched image may be applied in a reverse image to increase the reverse view field of the driver, thereby increasing the safety of the reverse.
In a reversing scene, when the vehicle is determined to have shifted from a driving gear to the reverse gear, the controller automatically generates a control signal and sends it to the reversing camera; the camera acquires a key frame image sequence in response to the control signal, the memory caches the acquired sequence, and the controller stitches the image frames in sampling-time order to obtain a first view image.
Optionally, as shown in fig. 3, the image processing method further includes:
and step 230, superposing preset image lines and auxiliary identification lines corresponding to the obstruction in the first view image to obtain a second view image.
And 240, outputting the second view field image to an external display device.
And step 250, repeating, in a loop, the steps from stitching the plurality of image frames in sampling-time order through superimposing the preset image line and auxiliary identification line corresponding to the obstruction on the first view image, until the overlap between the second view image and the first view image no longer changes.
In the above steps, superimposing the preset image line and auxiliary identification line corresponding to the obstruction on the first view image comprises: determining the image area of the obstruction portion in the first view image; processing that image area; invoking the image line corresponding to the obstruction to fill the outline of the processed area, obtaining a second view image in an initial state; and superimposing the auxiliary identification line on the initial second view image to obtain the second view image. The image line corresponding to the obstruction is generated in advance from the outer contour of the obstruction's image area as captured by the reversing camera, with the obstruction portion itself removed.
Processing the image area of the obstruction portion means making the obstruction's partial image transparent. This can be done with image-processing software: the obstruction portion may be cut out, or its layer selected and its transparency adjusted to greater than 95%.
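One way the transparentization step could look on an RGBA image, assuming the obstruction pixels are given as a boolean mask (how the mask is derived is outside this sketch); the 96% default mirrors the "greater than 95%" figure, and the function name is illustrative:

```python
import numpy as np

def transparentize_region(rgba, mask, transparency=0.96):
    """Make the obstruction region nearly transparent by lowering its
    alpha channel; pixels outside the mask are left untouched."""
    out = rgba.copy()
    alpha = int(round(255 * (1.0 - transparency)))  # e.g. ~10 of 255
    out[mask, 3] = alpha
    return out
```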
In a reversing scene, the obstruction may be the bumper and the auxiliary identification line may be a parking assist line, the image line corresponding to the obstruction being generated in advance from the outer contour of the bumper portion in the image captured by the reversing camera. Superimposing the preset image line and auxiliary identification line on the first view image then comprises: determining the image area corresponding to the bumper in the stitched image; processing that image area; invoking the image sub-area corresponding to the bumper to fill the outline of the bumper's image area, obtaining a second view image in an initial state, the image line corresponding to the bumper having been generated in advance from the outline of the bumper's image area; and superimposing the auxiliary identification line on the initial second view image to obtain the second view image. The auxiliary identification line is a virtual extension of the two sides of the vehicle shown on the external display, which helps the driver judge the distance between the vehicle and an obstacle: when the vehicle reverses in a straight line, the parking assist line may be two straight lines; when it reverses along a curve, two curves of equal curvature. As shown in fig. 4, the interface of the display 23 of the external display device shows a second view image containing the environment information at the tail of the vehicle, a virtual bumper line L1, and a parking assist line L2.
Optionally, the method further includes: synthesizing a preset number of second view images, in their output order, into at least one video segment; and outputting the video segment(s) to the external display device.
Specifically, a 1-second video is synthesized from at least every 10 second-view images; preferably, every 30 second-view images are combined into a 1-second video, since 30 frames per second guarantees fluency. The second view images are cached while being stitched, and the cached images are combined into the video, which is efficient. A segment may comprise at least 1 second of video, and the synthesized segment is cached in the memory.
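The grouping of cached second-view images into 1-second clips can be sketched as follows; actual encoding (e.g., by a hardware codec) is out of scope, so this illustrative function simply returns the frame groups in output order:

```python
def group_into_clips(view_images, fps=30):
    """Group buffered second-view images into 1-second clips of `fps`
    frames each; a trailing partial group is held back until full."""
    clips = []
    for i in range(0, len(view_images) - fps + 1, fps):
        clips.append(view_images[i:i + fps])
    return clips
```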
Further, when the vehicle speed reaches a preset first threshold and the overlap between the second view image and the first view image shown on the display device no longer changes, reversing is determined to be finished; the reversing camera then captures a live view image, which is output to the external display device so that the actual environment behind the vehicle can be viewed. The live view image is the image of the environment at the tail of the vehicle captured by the reversing camera after reversing ends, and the first threshold may be set to 0 km/h.
In this embodiment, the second view image applies transparency processing to the image area of the obstruction, so that environment information the obstruction might hide can still be displayed, with the preset image line and auxiliary identification line corresponding to the obstruction superimposed on it. Richer environment information can thus be shown to the driver, for example obstacles in the parking environment, so that the driver can judge how to adjust the vehicle's direction; repeating this process continuously improves reversing safety.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application, where the method is applied to a reverse scene.
As shown in fig. 5, when the vehicle shifts from a driving gear to the reverse gear, the reversing camera is triggered to transmit the collected key frame image sequence to the central control device, which obtains and caches it. For example, the sequence may include {T1, T2, …, T10}, where key frame image T1, collected at the first sampling time t1, is defined as the historical image frame, and key frame image T2, collected at the second sampling time t2, is defined as the current image, with t2 > t1. T1 and T2 are stitched into a first view image P1, on which the preset virtual bumper line and parking assist line are superimposed to obtain a second view image.
It is then judged whether the vehicle speed is 0 km/h (i.e., a parked state); when it is, the live view image captured by the camera is output to the external display device.
When the vehicle speed is not 0 km/h, the first view image P1 is defined as the new historical image frame, replacing key frame image T1; key frame image T3, collected at time t3 (t3 > t2), is defined as the new current image. First view image P1 and key frame image T3 are then stitched into a first view image P2, on which the preset image line and auxiliary identification line corresponding to the obstruction are superimposed.
The vehicle speed is judged continuously, and acquisition, stitching, and superimposition are repeated in a loop until the speed is 0 km/h, at which point reversing is determined to be finished and the live view image captured by the reversing camera is output to the external display device.
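The overall loop of fig. 5 — stitch, overlay the guide lines, check the speed, repeat — can be sketched as follows, with the stitcher and overlay abstracted into callbacks; all names are illustrative and `frames`/`speeds` are assumed parallel sequences:

```python
def reversing_pipeline(frames, speeds, stitch_pair, overlay):
    """Stitch each new keyframe onto the history, overlay the virtual
    bumper line and parking assist line, and stop once the sampled
    vehicle speed reaches 0 km/h, switching to the live view."""
    history = frames[0]
    shown = []
    for current, speed in zip(frames[1:], speeds):
        history = stitch_pair(history, current)  # update the history frame
        shown.append(overlay(history))           # second view image
        if speed == 0.0:                         # reversing finished
            shown.append("live_view")
            break
    return shown
```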
The detailed process of performing the stitching processing on the history image frame and the current image frame in the key frame image sequence and overlapping the preset image line and the auxiliary identification line corresponding to the obstruction in the second view image is the same as the foregoing embodiment, and will not be described herein.
Referring to fig. 6, fig. 6 is a schematic structural diagram illustrating an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the apparatus includes:
an acquisition module 410 is configured to acquire a key frame image sequence including a plurality of image frames corresponding to a sampling time and corresponding to a first field of view direction of the vehicle.
The stitching module 420 is configured to stitch the plurality of image frames according to the sampling time sequence to obtain a first view image.
Optionally, the splicing module 420 is configured to:
acquiring the same image area contained in a plurality of image frames and an image area different from other image frames contained in each image frame;
extracting the same image area as a target area;
extending and superposing different image subregions according to sampling time to obtain an extended region;
and splicing the target area and the extension area to obtain a first visual field image.
Optionally, the splicing module 420 is configured to:
acquiring a historical image frame corresponding to first sampling time and a current image frame corresponding to second sampling time, wherein the first sampling time is less than the second sampling time;
and comparing the historical image frame with the current image frame to obtain the same image area contained in the historical image frame and the current image frame, a first image subregion contained in the historical image frame and a second image subregion contained in the current image.
Optionally, the splicing module 420 is configured to:
filling the first image sub-region into the image area to be superimposed;
determining the edge coordinates of the first image sub-region;
taking the edge coordinates as the superposition starting point of the second image sub-region;
determining, from the superposition starting point, the filling area corresponding to the second image sub-region within the image area to be superimposed;
and filling the second image sub-region into that filling area.
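The fill sequence above can be sketched as follows (a hypothetical one-dimensional simplification: the "edge coordinate" collapses to the last row index of the first sub-region):

```python
# Assumed sketch of the fill sequence: write the first sub-region into an
# empty superposition area, use its trailing edge index as the superposition
# start point, then fill the second sub-region's region from that point.

def superimpose_subregions(first_sub, second_sub):
    area = []                        # the image area to be superimposed
    area.extend(first_sub)           # fill in the first image sub-region
    edge = len(area)                 # edge coordinate of the first sub-region
    start = edge                     # superposition start point for the second
    # the filling area for the second sub-region begins at `start`
    area[start:start + len(second_sub)] = second_sub
    return area, start
```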
Through this embodiment, the image frames corresponding to the first visual field direction of the vehicle are spliced to enlarge the visual field presented to the user, further improving the utilization of the image information.
As shown in fig. 7, the apparatus further includes:
the superimposing module 430, configured to superimpose, in the first view image, a preset image line corresponding to the obstruction and an auxiliary identification line, to obtain a second view image;
the output module 440, configured to output the second view image to an external display device;
the loop module 450, configured to repeat the steps from stitching the plurality of image frames in sampling-time order through superimposing the preset image line and the auxiliary identification line corresponding to the obstruction in the first view image, until the superposition range of the second view image relative to the first view image no longer changes.
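The loop module's fixed-point behavior might look like the following sketch (all three hooks are caller-supplied hypothetical stand-ins for the patent's stitch and overlay steps):

```python
# Assumed sketch of the loop module: re-stitch and re-overlay until the
# second-view image's superposition range stops changing.

def render_until_stable(get_frames, stitch, overlay):
    prev_extent = None
    while True:
        first_view = stitch(get_frames())   # splice frames in sampling order
        second_view = overlay(first_view)   # add image line + guide line
        extent = len(second_view)           # proxy for the superposition range
        if extent == prev_extent:           # range unchanged -> stop looping
            return second_view
        prev_extent = extent
```

Each iteration pulls the latest key-frame sequence, so the loop naturally terminates once no new sub-regions arrive and the overlaid view stops growing.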
Optionally, the superimposing module 430 is configured to:
determining the image area of the obstruction portion in the first view image;
transparentizing the image area of the obstruction portion;
calling the image line corresponding to the obstruction to fill in the outline of the transparentized image area of the obstruction portion, to obtain a second view image in an initial state, wherein the image line corresponding to the obstruction is generated in advance from the outline of the image area of the obstruction portion; and superimposing the auxiliary identification line on the second view image in the initial state to obtain the second view image.
Optionally, the obstruction is a bumper at the rear of the vehicle, and the superimposing module 430 is configured to:
determining the image area corresponding to the bumper in the stitched image;
transparentizing the image area corresponding to the bumper;
calling the image line corresponding to the bumper to fill in the outline of the image area corresponding to the bumper, to obtain a second view image in an initial state, wherein the image line corresponding to the bumper is generated in advance from the outline of the bumper's image area;
and superimposing the auxiliary identification line on the second view image in the initial state to obtain the second view image.
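A toy sketch of this overlay sequence under the same hypothetical row-list model (row indices, the outline character, and all names are illustrative assumptions, not the patent's representation):

```python
# Assumed sketch: rows covered by the bumper are replaced with previously
# stitched content behind it ("transparentized"), the pre-generated outline
# is drawn at the top of that area, and the guide line is appended last.

def make_second_view(first_view, occluder_rows, behind_rows, guide_line):
    view = list(first_view)
    for i, behind in zip(occluder_rows, behind_rows):
        view[i] = behind                       # show what the bumper hid
    outline = "-" * len(view[0])               # pre-generated contour line
    view.insert(min(occluder_rows), outline)   # fill in the occluder outline
    view.append(guide_line)                    # superimpose the guide line
    return view
```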
In this embodiment of the application, the image area of the obstruction in the second view image is transparentized, so that environment information that would otherwise be hidden by the obstruction can be displayed, presenting richer environment information to the driver; the preset image line and the auxiliary identification line corresponding to the obstruction are then superimposed in the second view image. Obstacles in the reversing environment can thus be displayed, so that the driver can judge, from this environment information, how to adjust the reversing direction of the vehicle; repeating this process continuously improves reversing safety.
According to this technical scheme, the historical images and the current image in the key frame image sequence are spliced to obtain the first view image, which contains the information of all key frame images before the current moment, so the visual field of the displayed image can be expanded.
Furthermore, the preset image line and the auxiliary identification line corresponding to the obstruction are superimposed in the first view image, so that the driver can see the image beneath the obstruction, improving safety during reversing.
In another aspect, an embodiment of the present application provides an electronic device, which includes:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
the processor, when executing the program, implements the image processing method as provided in the above embodiments.
In another aspect, the present application provides a vehicle comprising an electronic device as described in the above embodiments, the electronic device performing the method steps as shown in fig. 2, fig. 3 or fig. 5.
The electronic device can splice the historical images and the current image in the key frame image sequence to obtain the first view image; that is, a plurality of image frames corresponding to the first visual field direction of the vehicle are spliced to enlarge the user's visual field, further improving the utilization of the image information.
Furthermore, the preset image line and the auxiliary identification line corresponding to the obstruction can be superimposed in the second view image, so that the driver can see the image beneath the obstruction, improving safety during reversing.
In another aspect, the present application also provides a computer-readable storage medium, which may be included in the central control device described in the following embodiments, or may be provided separately without being assembled into the central control device. As shown in fig. 8, the central control device includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The RAM503 also stores various programs and data necessary for system operation. The CPU501, ROM502, and RAM503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read from it can be installed into the storage section 508 as needed.
In particular, according to an embodiment of the invention, the processes described above with reference to the flowcharts of fig. 2 and fig. 3 may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the Central Processing Unit (CPU)501, the above-described functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or by hardware, and the described units or modules may also be provided in a processor or in a control processor — described, for example, as: a control processor including an acquisition module and a stitching module. The name of a unit or module does not in any way constitute a limitation on the unit or module itself; for example, the acquisition module may also be described as "a module configured to acquire a key frame image sequence".
The above-described computer-readable medium carries one or more programs which, when executed by a server, cause the server to implement the image processing method described in the above-described embodiments. The server may be another electronic device.
For example, the server may implement the process as shown in fig. 3: step 210: acquiring a key frame image sequence; step 220: and splicing the multiple image frames according to the sampling time sequence to obtain a first view image.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a key frame image sequence, wherein the key frame image sequence comprises a plurality of image frames corresponding to sampling time and corresponding to a first visual field direction of a vehicle;
and splicing the plurality of image frames according to a sampling time sequence to obtain a first view image.
2. The image processing method according to claim 1, wherein said stitching the plurality of image frames in a sampling time order to obtain a first view image comprises:
acquiring the image area shared by the plurality of image frames and, for each of the image frames, the image sub-region that differs from the other image frames;
extracting the shared image area as a target area;
extending and superimposing the differing image sub-regions in sampling-time order to obtain an extension region;
and splicing the target area and the extension region to obtain the first view image.
3. The method according to claim 2, wherein said acquiring the same image region included in the plurality of image frames and an image region included in each of the image frames different from the other image frames comprises:
acquiring a historical image frame corresponding to a first sampling time and a current image frame corresponding to a second sampling time, wherein the first sampling time is earlier than the second sampling time;
and comparing the historical image frame with the current image frame to obtain the image area shared by the historical image frame and the current image frame, wherein the historical image frame comprises a first image sub-region and the current image frame comprises a second image sub-region.
4. The image processing method of claim 3, wherein said extending and superimposing the differing image sub-regions according to the sampling time comprises:
filling the first image sub-region into an image area to be superimposed;
determining edge coordinates of the first image sub-region;
taking the edge coordinates as a superposition starting point of the second image sub-region;
determining, according to the superposition starting point, a filling area corresponding to the second image sub-region in the image area to be superimposed;
and filling the second image sub-region into the filling area.
5. The image processing method according to claim 1, characterized in that the method further comprises:
superposing a preset image line and an auxiliary identification line corresponding to the shielding object in the first view image to obtain a second view image;
outputting the second view image to an external display device;
and repeating the steps from said stitching the plurality of image frames in sampling-time order through said superimposing the preset image line and the auxiliary identification line corresponding to the obstruction in the first view image, until the superposition range of the second view image relative to the first view image no longer changes.
6. The image processing method according to claim 5, wherein the superimposing a preset image line corresponding to an obstruction and an auxiliary identification line in the first view image to obtain a second view image comprises:
determining an image area of an obstruction portion in the first view image;
transparentizing the image area of the obstruction portion;
calling an image line corresponding to the obstruction to fill in the outline of the transparentized image area of the obstruction portion, to obtain a second view image in an initial state, wherein the image line corresponding to the obstruction is generated in advance from the outline of the image area of the obstruction portion;
and superimposing an auxiliary identification line on the second view image in the initial state to obtain the second view image.
7. The method according to claim 5, wherein the obstruction is a bumper at the rear of the vehicle, and said superimposing the preset image line and the auxiliary identification line corresponding to the obstruction in the first view image comprises:
determining an image area corresponding to the bumper in the stitched image;
transparentizing the image area corresponding to the bumper;
calling an image line corresponding to the bumper to fill in the outline of the image area corresponding to the bumper, to obtain a second view image in an initial state, wherein the image line corresponding to the bumper is generated in advance from the outline of the bumper's image area;
and superimposing an auxiliary identification line on the second view image in the initial state to obtain the second view image.
8. An image processing apparatus characterized by comprising:
an acquisition module configured to acquire a key frame image sequence including a plurality of image frames corresponding to a sampling time and corresponding to a first field of view direction of a vehicle;
and the splicing module is configured to splice the image frames according to a sampling time sequence to obtain a first view image.
9. An electronic device, characterized in that the electronic device comprises:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
the processor, when executing the program, implements the method of any of claims 1-7.
10. A vehicle characterized by comprising the electronic device of claim 9.
11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed, implements the method of any one of claims 1-7.
CN202011062526.4A 2020-09-30 2020-09-30 Image processing method, apparatus, device, vehicle, and medium Pending CN114332789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062526.4A CN114332789A (en) 2020-09-30 2020-09-30 Image processing method, apparatus, device, vehicle, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011062526.4A CN114332789A (en) 2020-09-30 2020-09-30 Image processing method, apparatus, device, vehicle, and medium

Publications (1)

Publication Number Publication Date
CN114332789A true CN114332789A (en) 2022-04-12

Family

ID=81031643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011062526.4A Pending CN114332789A (en) 2020-09-30 2020-09-30 Image processing method, apparatus, device, vehicle, and medium

Country Status (1)

Country Link
CN (1) CN114332789A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115589527A (en) * 2022-11-23 2023-01-10 禾多科技(北京)有限公司 Automatic driving image sending method, automatic driving image sending device, electronic equipment and computer medium
CN115589527B (en) * 2022-11-23 2023-06-27 禾多科技(北京)有限公司 Automatic driving image transmission method, device, electronic equipment and computer medium

Similar Documents

Publication Publication Date Title
EP2660104B1 (en) Apparatus and method for displaying a blind spot
CN110641366B (en) Obstacle tracking method and system during driving, electronic device and storage medium
CN113104029B (en) Interaction method and device based on automatic driving
CN108382305B (en) Image display method and device and vehicle
CN108437896B (en) Vehicle driving assistance method, device, equipment and storage medium
CN113276774B (en) Method, device and equipment for processing video picture in unmanned vehicle remote driving process
DE112016006740B4 (en) Parking assist apparatus
JP7426174B2 (en) Vehicle surrounding image display system and vehicle surrounding image display method
CN111277796A (en) Image processing method, vehicle-mounted vision auxiliary system and storage device
CN110831818A (en) Parking assist method and parking assist device
JP2010028803A (en) Image displaying method for parking aid
CN113147745A (en) Interaction method and device based on automatic driving
WO2019072461A1 (en) Parking assistance method, control device for carrying out the parking assistance method, and vehicle comprising the control device
CN114332789A (en) Image processing method, apparatus, device, vehicle, and medium
CN116674468A (en) Image display method and related device, vehicle, storage medium, and program
JP7075273B2 (en) Parking support device
CN115273525A (en) Parking space mapping display method and system
CN111959417B (en) Automobile panoramic image display control method, device, equipment and storage medium
CN114407928A (en) Vehicle avoidance control method and vehicle avoidance control device
JP2011155651A (en) Apparatus and method for displaying vehicle perimeter image
CN113879214A (en) Display method of electronic rearview mirror, electronic rearview mirror display system and related equipment
CN115489536B (en) Driving assistance method, system, equipment and readable storage medium
JP2014106738A (en) In-vehicle image processing device
KR101861523B1 (en) Apparatus and method for supporting driving of vehicle
JP7384830B2 (en) Method and device for displaying vehicle surroundings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination