CN112698717B - Local image processing method and device, vehicle-mounted system and storage medium - Google Patents



Publication number
CN112698717B
CN112698717B (granted from application CN201911014302.3A)
Authority
CN
China
Prior art keywords
image
virtual camera
displayed
determining
coordinate system
Prior art date
Legal status
Active
Application number
CN201911014302.3A
Other languages
Chinese (zh)
Other versions
CN112698717A (en)
Inventor
李雪 (Li Xue)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201911014302.3A
Publication of CN112698717A
Application granted
Publication of CN112698717B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

An embodiment of the present application provides a partial image processing method and device, a vehicle-mounted system, and a storage medium. The method includes: displaying a plurality of partial image options in a graphical interface; determining the type of image to be displayed in response to a selection operation on a target partial image option; and generating the image to be displayed according to that type and displaying it in the graphical interface. The image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin. The embodiment thus displays a high-quality partial image in the vehicle-mounted system; the virtual camera viewing angle makes the partial image better match the visual expectations of the human eye, and reverse-mapping the partial image to the original image simplifies the mapping process.

Description

Local image processing method and device, vehicle-mounted system and storage medium
Technical Field
The present invention relates to the field of vehicle-mounted driver-assistance technology, and in particular to a partial image processing method and device, a vehicle-mounted system, and a storage medium.
Background
With the development of technology, more and more automobiles are equipped with vehicle-mounted surround-view systems. In addition to displaying the environment around the vehicle body as a panoramic image, such systems can display partial images for observing details, for example whether there are obstacles at the bottom of the wheels or whether the wheels are running over something.
In the prior art, a high-definition camera can be used to capture high-definition partial images, but the cost is high. Alternatively, the required partial image can be extracted from the panoramic image, for example by local magnification, but the image quality of such simple extraction is limited, making it difficult to meet user requirements.
Disclosure of Invention
In view of the foregoing, a partial image processing method and apparatus, a vehicle-mounted system, and a storage medium are proposed to overcome, or at least partially solve, the foregoing problems, including:
A method for processing a partial image, applied to a vehicle-mounted system, comprising the following steps:
displaying a plurality of partial image options in a graphical interface;
determining the type of image to be displayed in response to a selection operation on a target partial image option;
generating an image to be displayed according to the type of image to be displayed, and displaying the image to be displayed in the graphical interface, wherein the image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin;
wherein the step of generating the image to be displayed according to the type of image to be displayed includes:
determining, according to the type of image to be displayed, the pixel positions of the image to be displayed that match the image type and the original image, captured by a vehicle-mounted camera, that matches the image type;
determining pixel information corresponding to the pixel positions from the original image;
and generating the image to be displayed according to the pixel information corresponding to all pixel positions of the image to be displayed.
Optionally, the image type includes an image to be displayed for the front wheels of the vehicle and an image to be displayed for the rear wheels of the vehicle.
Optionally, the method further comprises:
detecting the steering wheel angle of the vehicle, and determining the driving direction of the vehicle according to the steering wheel angle;
when the vehicle is reversing and the driving speed is less than a first preset speed, determining the type of image to be displayed to be the image to be displayed for the rear wheels of the vehicle;
and when the vehicle is driving forward and the driving speed is less than a second preset speed, determining the type of image to be displayed to be the image to be displayed for the front wheels of the vehicle.
Optionally, the step of determining, according to the pixel positions of the image to be displayed, the pixel information corresponding to the pixel positions from the original image includes:
determining, for a first pixel point in the image to be displayed, a first spatial point to which the first pixel point maps in the virtual camera coordinate system, the first pixel point being any pixel point in the image to be displayed;
determining a second spatial point corresponding to the first spatial point in the world coordinate system;
determining a second pixel point to which the second spatial point maps in the original image;
and determining the pixel information of the second pixel point as the pixel information of the first pixel point.
Optionally, the step of determining, for a first pixel point in the image to be displayed, the first spatial point to which the first pixel point maps in the virtual camera coordinate system includes:
obtaining the resolution corresponding to the virtual camera;
determining the virtual camera intrinsic parameters corresponding to the virtual camera in combination with the resolution;
and determining, using the virtual camera intrinsic parameters, the first spatial point to which the first pixel point in the image to be displayed maps in the virtual camera coordinate system.
Optionally, the virtual camera intrinsic parameters include an intrinsic matrix, and the step of determining the virtual camera intrinsic parameters corresponding to the virtual camera in combination with the resolution includes:
determining a virtual camera principal point and a virtual camera focal length in combination with the resolution;
and calculating the intrinsic matrix corresponding to the virtual camera using the virtual camera principal point and the virtual camera focal length.
Optionally, the step of determining the virtual camera focal length in combination with the resolution includes:
determining a first camera position of the virtual camera in the vehicle body coordinate system and a center position of the virtual camera's field of view in the vehicle body coordinate system, the first camera position being the center point of the virtual camera's position in the vehicle body coordinate system;
calculating the distance between the first camera position and the center position;
and calculating the virtual camera focal length corresponding to the virtual camera using the resolution, the distance, and a preset field-of-view length.
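The focal-length step above can be read as a similar-triangles relation: the preset field-of-view length, seen from the camera-to-view-center distance, should span the image resolution. The patent does not give the exact formula, so this is a minimal sketch under that assumption, with a hypothetical function name:

```python
# Hypothetical helper (assumed reading of the text, not the patent's exact
# formula): pinhole similar triangles give
#   f / distance = resolution / field_length  =>  f = resolution * distance / field_length

def virtual_focal_length(resolution_px: int, distance_m: float, field_length_m: float) -> float:
    """Focal length in pixels such that `field_length_m` meters of scene,
    viewed from `distance_m` meters away, span `resolution_px` pixels."""
    return resolution_px * distance_m / field_length_m

# A 640-pixel-wide view covering 2 m of ground seen from 1 m away:
f = virtual_focal_length(640, 1.0, 2.0)  # 320.0 px
```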
Optionally, the step of determining the virtual camera principal point in combination with the resolution includes:
calculating the horizontal and vertical coordinates of the virtual camera principal point corresponding to the virtual camera using the resolution.
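Putting the principal point and focal length together, the intrinsic matrix and its use for back-projecting a pixel can be sketched as follows. The sketch assumes a principal point at the image center, square pixels, and zero skew, which is a common convention rather than something stated explicitly in the text:

```python
import numpy as np

# Sketch (assumed convention): principal point at the image center,
# square pixels, zero skew.
def intrinsic_matrix(width: int, height: int, f: float) -> np.ndarray:
    cx, cy = width / 2.0, height / 2.0   # principal point from the resolution
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])

K = intrinsic_matrix(640, 480, 320.0)

# Back-project pixel (u, v) at depth Z into the virtual camera frame:
u, v, Z = 100.0, 200.0, 1.5
p_cam = Z * np.linalg.inv(K) @ np.array([u, v, 1.0])
```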
Optionally, the step of determining the second spatial point corresponding to the first spatial point in the world coordinate system includes:
determining an affine transformation matrix between the vehicle body coordinate system and the world coordinate system;
determining the virtual camera extrinsic parameters corresponding to the virtual camera in combination with the affine transformation matrix;
and determining, using the virtual camera extrinsic parameters, the second spatial point of the first spatial point in the world coordinate system.
Optionally, the virtual camera extrinsic parameters include a rotation matrix, and the step of determining the virtual camera extrinsic parameters corresponding to the virtual camera in combination with the affine transformation matrix includes:
calculating a transformed direction vector in the world coordinate system using the affine transformation matrix and a first vertical-axis direction vector of the camera coordinate system;
calculating a rotation vector and a rotation angle using the transformed direction vector and a second vertical-axis direction vector in the world coordinate system;
and calculating the rotation matrix corresponding to the virtual camera using the rotation vector and the rotation angle.
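The rotation-vector-and-angle step reads like the classical axis-angle construction: the rotation axis is the cross product of the two vertical-axis vectors, the angle comes from their dot product, and Rodrigues' formula turns (axis, angle) into a 3x3 rotation matrix. A sketch under that assumption, with illustrative names:

```python
import numpy as np

# Sketch (assumed implementation): build the rotation taking `z_from`
# onto `z_to` via axis-angle and Rodrigues' formula.
def rotation_between(z_from: np.ndarray, z_to: np.ndarray) -> np.ndarray:
    a = z_from / np.linalg.norm(z_from)
    b = z_to / np.linalg.norm(z_to)
    axis = np.cross(a, b)               # rotation vector (unnormalized)
    s = np.linalg.norm(axis)            # sin(theta)
    c = float(np.dot(a, b))             # cos(theta)
    if s < 1e-12:
        return np.eye(3)                # already aligned; antiparallel case not handled here
    k = axis / s                        # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)   # Rodrigues' formula

# Rotation that takes the z axis onto the x axis:
R = rotation_between(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
```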
Optionally, the virtual camera extrinsic parameters include a translation matrix, and the step of determining the virtual camera extrinsic parameters corresponding to the virtual camera according to the affine transformation matrix includes:
calculating a second camera position of the virtual camera in the world coordinate system using the affine transformation matrix and the first camera position of the virtual camera in the vehicle body coordinate system;
and determining the translation matrix corresponding to the second camera position.
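Under the usual pinhole convention (assumed here, not spelled out in the text), the translation corresponding to a camera center C expressed in world coordinates is t = -R @ C, so that a world point X maps into the camera frame as R @ X + t:

```python
import numpy as np

# Sketch of assembling extrinsics from a rotation R and the camera
# center C in world coordinates (standard convention, assumed here).
def extrinsics(R: np.ndarray, cam_center_world: np.ndarray):
    t = -R @ cam_center_world   # translation such that X_cam = R @ X_world + t
    return R, t

R0 = np.eye(3)
R0, t0 = extrinsics(R0, np.array([1.0, 2.0, 0.5]))
# The camera center itself maps to the origin of the camera frame:
# R0 @ C + t0 == [0, 0, 0]
```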
Optionally, the step of determining, in the original image, the second pixel point to which the second spatial point maps includes:
determining the point to which the second spatial point maps in the distortion-corrected image of the vehicle-mounted camera;
and determining, using the distortion model of the vehicle-mounted camera, the second pixel point to which that point maps in the original image.
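The distortion model itself is not specified in this passage. Purely as an illustration, a common equidistant fisheye model (often used for surround-view cameras) maps a 3D point in the physical camera's frame to a distorted pixel via r = f * theta:

```python
import numpy as np

# Illustrative equidistant fisheye projection (assumed model, not the
# patent's): the distorted radius grows linearly with the angle theta
# between the ray and the optical axis.
def fisheye_project(p_cam, f, cx, cy):
    x, y, z = p_cam
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    r = f * theta                           # equidistant model: r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# A point on the optical axis lands exactly on the principal point:
u, v = fisheye_project((0.0, 0.0, 1.0), 300.0, 320.0, 240.0)
```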
A device for processing a partial image, applied to a vehicle-mounted system, comprising:
a partial image option display module, configured to display a plurality of partial image options in a graphical interface;
an image type determining module, configured to determine the type of image to be displayed in response to a selection operation on a target partial image option;
an image generation module, configured to generate an image to be displayed according to the type of image to be displayed;
a display module, configured to display the image to be displayed in the graphical interface;
wherein the image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin;
wherein the image generation module comprises:
an image type processing submodule, configured to determine, according to the type of image to be displayed, the pixel positions of the image to be displayed that match the image type and the original image, captured by a vehicle-mounted camera, that matches the image type;
a pixel information determining submodule, configured to determine pixel information corresponding to the pixel positions from the original image;
and an image generating submodule, configured to generate the image to be displayed according to the pixel information corresponding to all pixel positions of the image to be displayed.
Optionally, the image type includes an image to be displayed for the front wheels of the vehicle and an image to be displayed for the rear wheels of the vehicle.
Optionally, the device further comprises:
a driving direction determining module, configured to detect the steering wheel angle of the vehicle and determine the driving direction of the vehicle according to the steering wheel angle;
a reverse driving determining module, configured to determine, when the vehicle is reversing and the driving speed is less than a first preset speed, that the type of image to be displayed is the image to be displayed for the rear wheels of the vehicle;
and a forward driving determining module, configured to determine, when the vehicle is driving forward and the driving speed is less than a second preset speed, that the type of image to be displayed is the image to be displayed for the front wheels of the vehicle.
Optionally, the pixel information determining submodule includes:
a first spatial point mapping unit, configured to determine, for a first pixel point in the image to be displayed, the first spatial point to which the first pixel point maps in the virtual camera coordinate system, the first pixel point being any pixel point in the image to be displayed;
a second spatial point determining unit, configured to determine the second spatial point corresponding to the first spatial point in the world coordinate system;
a second pixel point mapping unit, configured to determine, in the original image, the second pixel point to which the second spatial point maps;
and a pixel information mapping unit, configured to determine the pixel information of the second pixel point as the pixel information of the first pixel point.
Optionally, the first spatial point mapping unit includes:
a resolution obtaining subunit, configured to obtain the resolution corresponding to the virtual camera;
a virtual camera intrinsic parameter determining subunit, configured to determine the virtual camera intrinsic parameters corresponding to the virtual camera in combination with the resolution;
and a first spatial point determining subunit, configured to determine, using the virtual camera intrinsic parameters, the first spatial point to which the first pixel point in the image to be displayed maps in the virtual camera coordinate system.
Optionally, the virtual camera intrinsic parameters include an intrinsic matrix, and the virtual camera intrinsic parameter determining subunit is specifically configured to:
determine the virtual camera principal point and the virtual camera focal length in combination with the resolution;
and calculate the intrinsic matrix corresponding to the virtual camera using the virtual camera principal point and the virtual camera focal length.
Optionally, when determining the virtual camera focal length in combination with the resolution, the virtual camera intrinsic parameter determining subunit is specifically configured to:
determine a first camera position of the virtual camera in the vehicle body coordinate system and a center position of the virtual camera's field of view in the vehicle body coordinate system, the first camera position being the center point of the virtual camera's position in the vehicle body coordinate system;
calculate the distance between the first camera position and the center position;
and calculate the virtual camera focal length corresponding to the virtual camera using the resolution, the distance, and a preset field-of-view length.
Optionally, when determining the virtual camera principal point in combination with the resolution, the virtual camera intrinsic parameter determining subunit is specifically configured to:
calculate the horizontal and vertical coordinates of the virtual camera principal point corresponding to the virtual camera using the resolution.
Optionally, the second spatial point determining unit includes:
an affine transformation matrix determining subunit, configured to determine the affine transformation matrix between the vehicle body coordinate system and the world coordinate system;
a virtual camera extrinsic parameter determining subunit, configured to determine the virtual camera extrinsic parameters corresponding to the virtual camera in combination with the affine transformation matrix;
and a second spatial point determining subunit, configured to determine, using the virtual camera extrinsic parameters, the second spatial point of the first spatial point in the world coordinate system.
Optionally, the virtual camera extrinsic parameters include a rotation matrix, and the virtual camera extrinsic parameter determining subunit is specifically configured to:
calculate a transformed direction vector in the world coordinate system using the affine transformation matrix and a first vertical-axis direction vector of the camera coordinate system;
calculate a rotation vector and a rotation angle using the transformed direction vector and a second vertical-axis direction vector in the world coordinate system;
and calculate the rotation matrix corresponding to the virtual camera using the rotation vector and the rotation angle.
Optionally, the virtual camera extrinsic parameters include a translation matrix, and the virtual camera extrinsic parameter determining subunit is specifically configured to:
calculate a second camera position of the virtual camera in the world coordinate system using the affine transformation matrix and the first camera position of the virtual camera in the vehicle body coordinate system;
and determine the translation matrix corresponding to the second camera position.
Optionally, the second pixel point mapping unit includes:
a distortion-corrected image mapping subunit, configured to determine the point to which the second spatial point maps in the distortion-corrected image of the vehicle-mounted camera;
and a second pixel point determining subunit, configured to determine, using the distortion model of the vehicle-mounted camera, the second pixel point to which that point maps in the original image.
A vehicle-mounted system, comprising a display device, a vehicle-mounted camera, and a processing device;
the display device is configured to display a plurality of partial image options and to display the image to be displayed when it is received;
the vehicle-mounted camera is configured to capture an original image and send the original image to the processing device;
the processing device is configured to determine the type of image to be displayed in response to a selection operation on a target partial image option, generate an image to be displayed according to the type of image to be displayed, and send it to the display device; the image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin;
when generating the image to be displayed according to the type of image to be displayed, the processing device is specifically configured to:
determine, according to the type of image to be displayed, the pixel positions of the image to be displayed that match the image type and the original image, captured by the vehicle-mounted camera, that matches the image type;
determine pixel information corresponding to the pixel positions from the original image;
and generate the image to be displayed according to the pixel information corresponding to all pixel positions of the image to be displayed.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the partial image processing method described above.
Embodiments of the present application have the following advantages:
In an embodiment of the present application, a plurality of partial image options are displayed in a graphical interface, and the type of image to be displayed is determined in response to a selection operation on a target partial image option. According to that type, the pixel positions of the image to be displayed that match the image type, and the original image, captured by a vehicle-mounted camera, that matches the image type, are determined; the pixel information corresponding to each pixel position is determined from the original image; the image to be displayed is then generated from the pixel information corresponding to all of its pixel positions and displayed in the graphical interface. The image to be displayed is rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system is constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin. This displays a high-quality partial image in the vehicle-mounted system, makes the partial image better match the visual expectations of the human eye through the virtual camera viewing angle, and, by reverse-mapping the partial image to the original image, simplifies the mapping process.
Drawings
For a clearer description of the technical solutions of the present application, the drawings needed in the description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of steps of a method for processing a partial image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a graphical interface provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a graphical interface provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a virtual camera according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an in-vehicle camera according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating steps of another method for processing a partial image according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating another method for processing a partial image according to an embodiment of the present application;
FIG. 8 is a schematic illustration of an imaging provided in an embodiment of the present application;
FIG. 9 is a flowchart illustrating another method for processing a partial image according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a coordinate system provided in an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a partial image processing device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an in-vehicle system according to an embodiment of the present application.
Detailed Description
In order that the above-mentioned objects, features, and advantages of the present application may become more readily apparent, the present application is described in further detail below with reference to specific embodiments illustrated in the appended drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
Referring to fig. 1, a flowchart of the steps of a partial image processing method provided in an embodiment of the present application is shown. The method may be applied to a vehicle-mounted system and may specifically include the following steps:
step 101, displaying a plurality of local map options in a graphical interface;
in the novel intelligent vehicle, a vehicle-mounted large screen can be configured, as shown in fig. 2, a manual or automatic option is displayed in a graphical interface of the vehicle-mounted large screen, so that a user can trigger a local image display function, a local image display option can be displayed in the graphical interface, the local image display option is opposite to the display of a spliced large image, a plurality of local images in the spliced image can be used as a local image display option, as shown in fig. 3, and a local image option of a front wheel sub image and a local image option of a rear wheel sub image are displayed.
Step 102, determining the type of image to be displayed in response to a selection operation on a target partial image option;
the image types may include an image to be displayed for a front wheel of the vehicle, an image to be displayed for a rear wheel of the vehicle, which may be a partial image.
In manual mode, the type of image to be displayed can be determined when the user selects a target partial image option. For example, if the front-wheel partial image option is selected, the type of image to be displayed is determined to be the image for the front wheels of the vehicle, which contains partial wheel images of the left and right front wheels.
In an embodiment of the present application, the method may further include the following steps:
detecting the steering wheel angle of the vehicle, and determining the driving direction of the vehicle according to the steering wheel angle; when the vehicle is reversing and the driving speed is less than a first preset speed, determining the type of image to be displayed to be the image to be displayed for the rear wheels of the vehicle; and when the vehicle is driving forward and the driving speed is less than a second preset speed, determining the type of image to be displayed to be the image to be displayed for the front wheels of the vehicle.
In automatic mode, the steering wheel angle of the vehicle can be detected, and the driving direction of the vehicle, such as forward or reverse, can then be determined from that angle.
Since the user needs to observe the environment behind the vehicle while reversing, when reversing is detected and the driving speed is less than the first preset speed, the type of image to be displayed can be determined to be the image for the rear wheels of the vehicle. For example, when the vehicle is reversing at less than 10 km/h, the image to be displayed is the image containing the left and right rear wheels; when the driving speed exceeds the first preset speed, the image need not be displayed.
During forward driving, the situation around the vehicle is related to the driving speed; for example, the vehicle drives slowly when many pedestrians are nearby. The driving speed of the vehicle can be obtained from the vehicle body CAN network, and when it is less than the second preset speed, for example less than 15 km/h, the type of image to be displayed can be determined to be the image for the front wheels of the vehicle, such as an image containing the left and right front wheels; when the driving speed exceeds the second preset speed, the image need not be displayed.
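The automatic-mode selection described above can be sketched as follows. The 10 km/h and 15 km/h thresholds are the examples given in the text, and all names are hypothetical:

```python
# Sketch of the automatic-mode image-type selection. Threshold values are
# the examples from the text; function and constant names are hypothetical.
REVERSE_SPEED_LIMIT_KMH = 10.0   # "first preset speed"
FORWARD_SPEED_LIMIT_KMH = 15.0   # "second preset speed"

def select_image_type(direction: str, speed_kmh: float):
    if direction == "reverse" and speed_kmh < REVERSE_SPEED_LIMIT_KMH:
        return "rear_wheels"     # show the left/right rear-wheel partial image
    if direction == "forward" and speed_kmh < FORWARD_SPEED_LIMIT_KMH:
        return "front_wheels"    # show the left/right front-wheel partial image
    return None                  # above the threshold: no partial image shown

choice = select_image_type("reverse", 5.0)   # "rear_wheels"
```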
Step 103, generating the image to be displayed according to the type of image to be displayed, and displaying it in the graphical interface. The image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin.
In practice, to change the viewing angle relative to the original image so that the partial image better matches the visual expectations of the human eye, a virtual camera can be set up. The virtual cameras can be placed symmetrically about the coordinate axes of the vehicle body coordinate system, such as the virtual cameras cam0-cam3 set for the four wheels in fig. 4.
Specifically, the position and angle of the virtual camera may be set by default or adjusted manually by the user. The virtual camera coordinate system can be constructed by selecting a position in the vehicle body coordinate system; the position is not limited to a single coordinate point and may be a region, and the angle of the virtual camera is adjustable. The selected position serves as the coordinate origin of the virtual camera coordinate system, and the direction perpendicular to the imaging plane of the virtual camera serves as the Z axis.
After the type of image to be displayed is determined, the image to be displayed corresponding to that type, rendered from the virtual camera viewing angle, can be obtained and then displayed in the graphical interface.
In an embodiment of the present application, the step of generating the image to be displayed according to the type of the image to be displayed may include the following sub-steps:
a sub-step 11 of determining, according to the image type to be displayed, the pixel positions of the image to be displayed that match the image type and the original image, acquired by a vehicle-mounted camera, that matches the image type;
in practical application, as shown in fig. 5, vehicle-mounted cameras, such as fisheye cameras, may be respectively disposed around the vehicle, so that images of surrounding environments of the vehicle may be collected by the vehicle-mounted cameras to obtain an original image.
Because the original image acquired by the vehicle-mounted camera is large and need not be displayed in full in the image to be displayed, mapping every pixel point of the original image can be avoided: the pixel positions of the image to be displayed that match the image type are determined first, and the mapping is then performed in reverse, from the image to be displayed back to the original image.
A sub-step 12 of determining pixel information corresponding to the pixel position from the original image;
After the pixel positions of the image to be displayed are determined, the mapping can proceed in reverse from these pixel positions to the original image, and the pixel information, such as RGB information, corresponding to each pixel position can be determined from the original image.
And a sub-step 13 of generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed.
After the pixel information corresponding to all the pixel positions of the image to be displayed is determined, the pixel information can be used as the pixel information of the pixel positions in the image to be displayed, and then the image to be displayed can be generated.
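The per-pixel reverse-mapping loop of sub-steps 11-13 can be sketched as follows; `lookup_pixel`, which encapsulates the mapping from a display pixel position back to pixel information in the original image, is a hypothetical helper standing in for the chain of coordinate transforms described later.

```python
import numpy as np

def render_partial_image(height, width, lookup_pixel):
    """Build the image to be displayed by reverse mapping: for every
    output pixel position, fetch its RGB pixel information from the
    original image via the supplied lookup function."""
    out = np.zeros((height, width, 3), dtype=np.uint8)
    for v in range(height):
        for u in range(width):
            out[v, u] = lookup_pixel(u, v)   # pixel information from the original image
    return out
```

Only pixels of the image to be displayed are visited, so pixels of the (larger) original image that fall outside the partial view are never mapped.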
In the embodiment of the present application, a plurality of local map options are displayed in a graphical interface; the image type to be displayed is determined in response to a selection operation on a target local map option; according to this image type, the pixel positions of the image to be displayed that match the image type and the original image, acquired by a vehicle-mounted camera, that matches the image type are determined; pixel information corresponding to the pixel positions is determined from the original image; and the image to be displayed is then generated from the pixel information corresponding to all of its pixel positions and displayed in the image interface. The image to be displayed is presented from the view angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera in the vehicle body coordinate system as the coordinate origin. This realizes the display of high-quality partial images in the vehicle-mounted system, makes the partial images better meet the visual requirements of human eyes through the virtual camera view angle, and simplifies the mapping process by reverse-mapping the partial images to the original image.
Referring to fig. 6, a step flowchart of another method for processing a partial image according to an embodiment of the present application is shown, where the method may be applied to an in-vehicle system, and may specifically include the following steps:
step 601, displaying a plurality of local map options in a graphical interface;
step 602, determining the type of the image to be displayed in response to the selection operation of the target partial graph option;
step 603, determining a pixel position of the image to be displayed, which is matched with the image type, and an original image acquired by a vehicle-mounted camera, which is matched with the image type, according to the image type to be displayed;
step 604, determining, for a first pixel point in the image to be displayed, a first spatial point mapped by the first pixel point under a virtual camera coordinate system; the first pixel point is any pixel point in the image to be displayed;
after the first pixel point to be displayed is determined, a first spatial point corresponding to the first pixel point can be determined in a virtual camera coordinate system corresponding to the virtual camera.
Step 605, determining a second space point corresponding to the first space point under a world coordinate system;
in practical applications, since the first spatial point in the virtual camera coordinate system cannot be directly mapped to the original image, and the world coordinate system has a mapping relationship with the original image, the first spatial point can be mapped to the second spatial point under the world coordinate system.
Specifically, after the mapped second spatial point is obtained, the line connecting the coordinate point of the virtual camera in the world coordinate system and the second spatial point can be determined, and the intersection point (ground point) between this line and the ground plane (real ground) in the world coordinate system can be determined; this intersection point is then used as the second spatial point instead.
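Under the assumption that the real ground is the plane z = 0 in the world coordinate system (an illustrative convention, not stated in this application), the ground-point computation described above can be sketched as a ray-plane intersection:

```python
import numpy as np

def ground_intersection(cam_pos, point):
    """Intersect the line from the virtual camera position through `point`
    with the ground plane z = 0, returning the ground point (or None if
    the line is parallel to the ground)."""
    cam = np.asarray(cam_pos, dtype=float)
    p = np.asarray(point, dtype=float)
    d = p - cam                    # direction of the connection line
    if abs(d[2]) < 1e-12:
        return None                # no intersection with the ground
    t = -cam[2] / d[2]             # parameter where z becomes 0
    return cam + t * d
```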
Step 606, determining a second pixel point mapped by the second spatial point in the original image;
after the second spatial point is determined, a mapping relationship between the world coordinate system and the original image can be adopted to map the second spatial point under the world coordinate system to a second pixel point in the original image.
In an embodiment of the present application, step 606 may include the following sub-steps:
determining a mapping point of the second space point in the distortion correction map of the vehicle-mounted camera; and determining a second pixel point mapped by the mapping point in the original image by adopting a distortion model of the vehicle-mounted camera.
In a specific implementation, the internal parameters and external parameters of the real vehicle-mounted camera can be obtained, and the mapping relationship between spatial points in the world coordinate system and mapping points in the distortion correction map can be determined; the mapping point of the second spatial point in the distortion correction map of the vehicle-mounted camera is thereby determined, and the distortion model of the vehicle-mounted camera can then be used to determine the second pixel point to which this mapping point maps in the original image.
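This application does not give the concrete distortion model of the vehicle-mounted (fisheye) camera, so the sketch below substitutes a simple polynomial radial model as a stand-in; the coefficients k1 and k2 and the function name are illustrative assumptions only.

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Map an undistorted normalized image point (x, y) to its distorted
    position using a simple polynomial radial model -- a hypothetical
    stand-in for the real fisheye distortion model of the on-board camera."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

In practice the real model (and its calibrated coefficients) would come from the camera's internal parameters mentioned above.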
Step 607, determining the pixel information of the second pixel point as the pixel information of the first pixel point;
after obtaining the pixel information of the second pixel point, the pixel information may be used as the pixel information of the first pixel point in the image to be displayed.
Step 608, generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed, and displaying the image to be displayed in the image interface.
Referring to fig. 7, a step flowchart of another method for processing a partial image according to an embodiment of the present application is shown, where the method may be applied to an in-vehicle system, and may specifically include the following steps:
step 701, displaying a plurality of local map options in a graphical interface;
step 702, determining the type of the image to be displayed in response to the selection operation of the target partial graph option;
step 703, determining the pixel position of the image to be displayed matching the image type and the original image acquired by the vehicle-mounted camera matching the image type according to the image type to be displayed;
step 704, obtaining a resolution corresponding to the virtual camera;
in practical application, the resolution corresponding to the virtual camera may be preset, for example, 800×600, and may be obtained from the configuration parameters when calculating the internal parameters of the virtual camera.
Step 705, determining a virtual camera internal parameter corresponding to the virtual camera in combination with the resolution;
after the resolution is obtained, the virtual camera internal parameters corresponding to the virtual camera can be calculated by combining the resolution, wherein the virtual camera internal parameters can comprise an internal parameter matrix and distortion parameters, and the distortion parameters can be 0 if the virtual camera has no distortion.
In one embodiment of the present application, step 705 comprises the sub-steps of:
a sub-step 21 of determining a virtual camera principal point and a virtual camera focal length in combination with the resolution;
in an embodiment of the present application, for the virtual camera focal length, the sub-step 21 may include the following sub-steps:
a sub-step 211 of determining a first camera position of the virtual camera in a vehicle body coordinate system and a center position of a field of view center of the virtual camera in the vehicle body coordinate system; the first camera position is a center point of the position of the virtual camera under a vehicle body coordinate system;
in a specific implementation, a first camera position of the virtual camera in the vehicle body coordinate system may be determined, where the first camera position may be set by default or may be set according to a user operation, and a center position of a field of view center of the virtual camera in the vehicle body coordinate system may be determined, for example, 2 meters beside a wheel.
A substep 212 of calculating a position distance corresponding to the first camera position and the center position;
after the first camera position and the center position are determined, a position distance between the first camera position and the center position may be calculated from the coordinates.
And step 213, calculating a virtual camera focal length corresponding to the virtual camera by adopting the resolution, the position distance and the preset visual field range length.
Since the camera coordinate system and the vehicle body coordinate system satisfy the pinhole-imaging principle, as shown in fig. 8, with the first camera position c0 = (x_cc, y_cc, z_cc) and the center position c1 = (x_ct, y_ct, 0), the virtual camera focal length corresponding to the virtual camera can be calculated from the resolution, the position distance and the preset field-of-view length, as shown in the following formula:

f = w · d / len

where w is the horizontal resolution, d is the position distance between the first camera position and the center position, len is the field-of-view length, and f is the virtual camera focal length.
In an embodiment of the present application, for the virtual camera principal point, the sub-step 21 may further include the following sub-steps:
and step 214, calculating to obtain the horizontal direction coordinate and the vertical direction coordinate of the virtual camera main point corresponding to the virtual camera by adopting the resolution.
In a specific implementation, the resolution can be used directly to calculate the horizontal direction coordinate and the vertical direction coordinate of the virtual camera principal point (c_x, c_y) corresponding to the virtual camera, as shown in the following formula:

c_x = w / 2, c_y = h / 2

where w is the horizontal resolution and h is the vertical resolution.
And a sub-step 22 of calculating an internal reference matrix corresponding to the virtual camera by adopting the main point of the virtual camera and the focal length of the virtual camera.
After the virtual camera principal point and the virtual camera focal length are determined, the internal reference matrix corresponding to the virtual camera can be calculated, as shown in the following formula:

    | f   0   c_x |
K = | 0   f   c_y |
    | 0   0    1  |
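Putting the formulas of sub-steps 21 and 22 together, a minimal sketch of computing the virtual camera internal reference matrix (assuming the distortion-free pinhole model stated above) might look like:

```python
import numpy as np

def virtual_camera_intrinsics(w, h, d, fov_len):
    """Build the internal reference matrix K of a distortion-free virtual
    camera: focal length f = w * d / len from the pinhole similar-triangles
    relation, principal point at the image center (w/2, h/2)."""
    f = w * d / fov_len
    cx, cy = w / 2.0, h / 2.0
    K = np.array([[f,   0.0, cx],
                  [0.0, f,   cy],
                  [0.0, 0.0, 1.0]])
    return K
```

For example, with the 800x600 resolution mentioned above, a position distance of 2 m and a field-of-view length of 4 m, the focal length comes out to 400 pixels.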
step 706, determining a first spatial point mapped by a first pixel point in the image to be displayed under a virtual camera coordinate system by adopting the virtual camera internal parameters;
after determining the reference matrix, the reference matrix may be used to map the first pixel point in the image to be displayed to the first spatial point under the virtual camera coordinate system, as shown in the following formula:
wherein the first pixel point is (x p ,y p ) The first spatial point is (x 0 ,y 0 ,z 0 ) S is the scaling factor.
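A minimal sketch of this back-projection, using the inverse of the internal reference matrix:

```python
import numpy as np

def pixel_to_camera_point(K, u, v, s=1.0):
    """Back-project pixel (u, v) into the virtual camera coordinate
    system: (x0, y0, z0)^T = s * K^-1 * (u, v, 1)^T."""
    p = np.array([u, v, 1.0])
    return s * np.linalg.inv(K) @ p
```

The principal point back-projects onto the optical axis, as expected for a pinhole model.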
Step 707 of determining a second spatial point corresponding to the first spatial point under a world coordinate system;
step 708, determining a second pixel point mapped by the second spatial point in the original image;
step 709, determining that the pixel information of the second pixel point is the pixel information of the first pixel point;
and step 710, generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed, and displaying the image to be displayed in the image interface.
Referring to fig. 9, a step flowchart of another method for processing a partial image according to an embodiment of the present application is shown, where the method may be applied to an in-vehicle system, and may specifically include the following steps:
step 901, displaying a plurality of local map options in a graphical interface;
step 902, determining the type of the image to be displayed in response to the selection operation of the target partial graph option;
step 903, determining, according to the image type to be displayed, a pixel position of the image to be displayed, which is matched with the image type, and an original image acquired by a vehicle-mounted camera, which is matched with the image type;
step 904, determining, for a first pixel point in the image to be displayed, a first spatial point mapped by the first pixel point under a virtual camera coordinate system; the first pixel point is any pixel point in the image to be displayed;
step 905, determining an affine transformation matrix between a vehicle body coordinate system and a world coordinate system;
In a specific implementation, since the vehicle body coordinate system and the world coordinate system are related by an affine transformation, an affine transformation matrix between the vehicle body coordinate system and the world coordinate system can be obtained.
Step 906, determining virtual camera external parameters corresponding to the virtual camera by combining the affine transformation matrix;
The external parameter may be an external parameter of the virtual camera in a world coordinate system.
After determining the affine transformation matrix, virtual camera external parameters corresponding to the virtual camera may be determined in combination with the affine transformation matrix, and the virtual camera external parameters may include a rotation matrix and a translation matrix between the virtual camera coordinate system and the world coordinate system.
In an embodiment of the present application, for a rotation matrix, step 906 may include the following sub-steps:
a substep 31 of calculating a transformation direction vector in the world coordinate system by using the affine transformation matrix and the first vertical axis direction vector of the camera coordinate system;
In a specific implementation, the first vertical-axis direction vector of the camera coordinate system, namely the Z-axis direction vector, can be determined; then the transformation direction vector of this first vertical-axis direction vector in the world coordinate system can be calculated by combining the affine transformation matrix, the first camera position of the virtual camera in the vehicle body coordinate system, and the center position of the field-of-view center in the vehicle body coordinate system, as shown in the following formula:

Z_t = H · (x_ct, y_ct, z_ct)^T − H · (x_cc, y_cc, z_cc)^T

where Z_t is the transformation direction vector, H is the affine transformation matrix, Z_cam is the first vertical-axis direction vector (pointing from the camera toward the field-of-view center), (x_cc, y_cc, z_cc) is the first camera position, and (x_ct, y_ct, z_ct) is the center position.
A substep 32, calculating a rotation vector and a rotation angle by using the transformation direction vector and a second vertical axis direction vector under the world coordinate system;
In a specific implementation, as shown in fig. 10, O_w is the coordinate origin of the world coordinate system and O_c is the coordinate origin of the camera coordinate system. Given the second vertical-axis direction vector Z_w in the world coordinate system, the rotation vector r can be determined as the vector perpendicular to both the second vertical-axis direction vector Z_w and the transformation direction vector Z_t, i.e. normal to the plane they form, as shown in the following formula:

r = Z_w × Z_t

Further, the rotation angle θ can be obtained as shown in the following formula:

θ = arccos( (Z_w · Z_t) / (|Z_w| · |Z_t|) )
and step 33, calculating to obtain a rotation matrix corresponding to the virtual camera by adopting the rotation vector and the rotation angle.
After the rotation vector and the rotation angle are determined, the rotation matrix corresponding to the virtual camera can be calculated from them (by Rodrigues' formula for an axis-angle rotation), as shown in the following formula:

R = cos θ · I + (1 − cos θ) · n n^T + sin θ · [n]_×

where n = r / |r| is the rotation vector normalized to unit length, [n]_× is its skew-symmetric cross-product matrix, and R is the rotation matrix.
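A sketch of this axis-angle-to-matrix step, using Rodrigues' formula (the standard construction for building a rotation matrix from a rotation vector and angle):

```python
import numpy as np

def rotation_from_axis_angle(axis, theta):
    """Rodrigues' formula: R = cos(t)*I + (1-cos(t))*n*n^T + sin(t)*[n]_x,
    rotating by angle `theta` about the unit vector along `axis`."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    n_cross = np.array([[0.0,  -n[2],  n[1]],      # skew-symmetric [n]_x
                        [n[2],  0.0,  -n[0]],
                        [-n[1], n[0],  0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * n_cross)
```

The result is orthonormal, as any proper rotation matrix must be.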
In an embodiment of the present application, for a translation matrix, step 906 may include the following sub-steps:
a substep 41, calculating a second camera position of the virtual camera under the world coordinate system by adopting the affine transformation matrix and the first camera position of the virtual camera under the vehicle body coordinate system;
In a specific implementation, the first camera position of the virtual camera in the vehicle body coordinate system can be obtained, and then the second camera position of the virtual camera in the world coordinate system can be calculated by combining the affine transformation matrix, as shown in the following formula:

(x_cw, y_cw, z_cw)^T = H · (x_cc, y_cc, z_cc)^T

where the first camera position is (x_cc, y_cc, z_cc), the second camera position is (x_cw, y_cw, z_cw), and H is the affine transformation matrix.
In a sub-step 42, a translation matrix corresponding to the second camera position is determined.
After determining the second camera position, a translation matrix corresponding to the second camera position may be determined.
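Assuming the affine transformation is given as a 4x4 homogeneous matrix H (an assumed representation; the application does not fix the matrix form), the second camera position, from which the translation matrix is taken, can be sketched as:

```python
import numpy as np

def camera_position_world(H, cam_body):
    """Transform the virtual camera position from the vehicle body frame
    to the world frame with a 4x4 affine matrix H, using homogeneous
    coordinates."""
    p = np.append(np.asarray(cam_body, dtype=float), 1.0)   # (x, y, z, 1)
    return (H @ p)[:3]                                      # drop the homogeneous 1
```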
Step 907, determining a second spatial point of the first spatial point in a world coordinate system by adopting the virtual camera external parameters;
After the rotation matrix and the translation matrix are determined, they can be used to determine the second spatial point of the first spatial point in the world coordinate system, as shown in the following formula:

(x_1, y_1, z_1)^T = R · (x_0, y_0, z_0)^T + T

where the first spatial point is (x_0, y_0, z_0), the second spatial point is (x_1, y_1, z_1), R is the rotation matrix, and T is the translation matrix. Combining this with the calculation formula for the first spatial point above, (x_0, y_0, z_0)^T = s · K^-1 · (x_p, y_p, 1)^T, the mapping function from the first pixel point to the second spatial point can be obtained, as shown in the following formula:

(x_1, y_1, z_1)^T = R · s · K^-1 · (x_p, y_p, 1)^T + T
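The combined mapping function can be sketched by composing the two steps, back-projection through the internal reference matrix followed by the rigid transform into the world coordinate system:

```python
import numpy as np

def pixel_to_world(K, R, t, u, v, s=1.0):
    """Map the first pixel point (u, v) to the second spatial point:
    (x1, y1, z1)^T = R @ (s * K^-1 @ (u, v, 1)^T) + t."""
    cam_pt = s * np.linalg.inv(K) @ np.array([u, v, 1.0])   # first spatial point
    return R @ cam_pt + np.asarray(t, dtype=float)          # second spatial point
```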
Specifically, after the mapped second spatial point is obtained, the line connecting the coordinate point of the virtual camera in the world coordinate system and the second spatial point can be determined, and the intersection point (ground point) between this line and the ground plane (real ground) in the world coordinate system can be determined; this intersection point is then used as the second spatial point instead.
Step 908, determining a second pixel point mapped by the second spatial point in the original image;
step 909, determining the pixel information of the second pixel point as the pixel information of the first pixel point;
step 910, generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed, and displaying the image to be displayed in the image interface.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments and that the acts referred to are not necessarily required by the embodiments of the present application.
Referring to fig. 11, a schematic structural diagram of a local image processing apparatus according to an embodiment of the present application is shown, where the apparatus may be applied to a vehicle-mounted system, and may specifically include the following modules:
a local map option display module 1101, configured to display a plurality of local map options in a graphical interface;
an image type determining module 1102, configured to determine an image type to be displayed in response to a selection operation of a target local map option;
an image generating module 1103, configured to generate an image to be displayed according to the type of the image to be displayed;
a display module 1104 for displaying the image to be displayed in the image interface;
It should be noted that the display module 1104 may be disposed inside the device or may be disposed outside the device, which is not limited in the embodiment of the present application.
The image to be displayed is an image displayed in a virtual camera view angle, and a virtual camera coordinate system where the virtual camera is located is: a coordinate system constructed by taking the position of the virtual camera under the vehicle body coordinate system as a coordinate origin;
wherein, the image generating module 1103 includes:
the image type processing sub-module is used for determining the pixel position of the image to be displayed, which is matched with the image type, and the original image acquired by the vehicle-mounted camera, which is matched with the image type, according to the image type to be displayed;
a pixel information determining sub-module, configured to determine pixel information corresponding to the pixel position from the original image;
and the image to be displayed generating sub-module is used for generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed.
In an embodiment of the present application, the image types include an image to be displayed for a front wheel of the vehicle and an image to be displayed for a rear wheel of the vehicle.
In an embodiment of the present application, the apparatus further includes:
The driving direction determining module is used for detecting the steering wheel angle of the vehicle and determining the driving direction of the vehicle according to the steering wheel angle;
the driving reversing determining module is used for determining that the type of the image to be displayed is the image to be displayed for the rear wheels of the vehicle when the vehicle reverses and the driving speed is smaller than a first preset speed;
and the forward running determining module is used for determining that the type of the image to be displayed is the image to be displayed for the front wheels of the vehicle when the vehicle runs forward and the running speed is smaller than the second preset speed.
In an embodiment of the present application, the pixel information determining submodule includes:
a first spatial point mapping unit, configured to determine, for a first pixel point in the image to be displayed, a first spatial point mapped by the first pixel point in a virtual camera coordinate system; the first pixel point is any pixel point in the image to be displayed;
a second space point determining unit configured to determine a second space point corresponding to the first space point in a world coordinate system;
a second pixel mapping unit, configured to determine, in the original image, a second pixel mapped by the second spatial point;
and the pixel information mapping unit is used for determining the pixel information of the second pixel point as the pixel information of the first pixel point.
In an embodiment of the present application, the first spatial point mapping unit includes:
a resolution obtaining subunit, configured to obtain a resolution corresponding to the virtual camera;
a virtual camera internal reference determining subunit, configured to determine a virtual camera internal reference corresponding to the virtual camera in combination with the resolution;
and the internal reference determining first space point subunit is used for determining a first space point mapped by a first pixel point in the image to be displayed under a virtual camera coordinate system by adopting the virtual camera internal reference.
In an embodiment of the present application, the virtual camera internal parameters include an internal parameter matrix, and the virtual camera internal parameter determining subunit is specifically configured to:
determining a virtual camera principal point and a virtual camera focal length by combining the resolution;
and calculating an internal reference matrix corresponding to the virtual camera by adopting the main point of the virtual camera and the focal length of the virtual camera.
In an embodiment of the present application, the virtual camera internal parameter determining subunit is specifically configured to, when determining the virtual camera focal length in combination with the resolution:
determining a first camera position of the virtual camera under a vehicle body coordinate system and a center position of a visual field center of the virtual camera under the vehicle body coordinate system; the first camera position is a center point of the position of the virtual camera under a vehicle body coordinate system;
Calculating a position distance corresponding to the first camera position and the central position;
and calculating to obtain a virtual camera focal length corresponding to the virtual camera by adopting the resolution, the position distance and the preset visual field range length.
In an embodiment of the present application, the virtual camera internal parameter determining subunit is specifically configured to, when determining the virtual camera principal point in combination with the resolution:
and calculating to obtain the horizontal direction coordinate and the vertical direction coordinate of the virtual camera main point corresponding to the virtual camera by adopting the resolution ratio.
In an embodiment of the present application, the second spatial point determining unit includes:
an affine transformation matrix determining subunit for determining an affine transformation matrix between the vehicle body coordinate system and the world coordinate system;
a virtual camera external parameter determining subunit, configured to combine the affine transformation matrix to determine a virtual camera external parameter corresponding to the virtual camera;
and the external parameter determining second space point subunit is used for determining a second space point of the first space point under a world coordinate system by adopting the virtual camera external parameter.
In an embodiment of the present application, the virtual camera external parameter includes a rotation matrix, and the virtual camera external parameter determining subunit is specifically configured to:
Calculating a transformation direction vector under a world coordinate system by adopting the affine transformation matrix and a first vertical axis direction vector of a camera coordinate system;
calculating to obtain a rotation vector and a rotation angle by adopting the transformation direction vector and a second vertical axis direction vector under a world coordinate system;
and calculating to obtain a rotation matrix corresponding to the virtual camera by adopting the rotation vector and the rotation angle.
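The rotation-vector, rotation-angle, and rotation-matrix steps above admit a compact sketch: the rotation vector is the normalized cross product of the two direction vectors, the angle follows from their dot product, and Rodrigues' formula turns the pair into a matrix. This is one standard realization assumed for illustration, not quoted from the patent:

```python
import math

def axis_angle_between(u, v):
    # Rotation vector = normalized cross product of the two unit
    # direction vectors; rotation angle from atan2(|u x v|, u . v).
    cx = u[1]*v[2] - u[2]*v[1]
    cy = u[2]*v[0] - u[0]*v[2]
    cz = u[0]*v[1] - u[1]*v[0]
    n = math.sqrt(cx*cx + cy*cy + cz*cz)
    angle = math.atan2(n, u[0]*v[0] + u[1]*v[1] + u[2]*v[2])
    axis = (cx/n, cy/n, cz/n) if n > 1e-12 else (1.0, 0.0, 0.0)
    return axis, angle

def rodrigues(axis, angle):
    # Rodrigues' formula: rotation matrix from a unit axis and an angle.
    x, y, z = axis
    c, s = math.cos(angle), math.sin(angle)
    t = 1.0 - c
    return [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
            [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
            [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]
```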
In an embodiment of the present application, the virtual camera external parameters include a translation matrix, and the virtual camera external parameter determining subunit is specifically configured to:
calculating a second camera position of the virtual camera under a world coordinate system by adopting the affine transformation matrix and the first camera position of the virtual camera under a vehicle body coordinate system;
and determining a translation matrix corresponding to the second camera position.
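A sketch of this translation step under common pinhole conventions: the body-to-world affine matrix moves the camera position into world coordinates, and the extrinsic translation is then t = −R·C. The t = −R·C convention is an assumption made here for illustration; the patent only states that the translation matrix is determined from the second camera position:

```python
def apply_affine(M, p):
    # Apply a body-to-world affine transform (the top three rows of a
    # 4x4 matrix) to the 3-D point p.
    x, y, z = p
    return tuple(M[i][0]*x + M[i][1]*y + M[i][2]*z + M[i][3] for i in range(3))

def translation_from_position(R, cam_world):
    # Conventional extrinsic translation t = -R * C, where C is the
    # camera centre expressed in world coordinates.
    return tuple(-(R[i][0]*cam_world[0] + R[i][1]*cam_world[1] + R[i][2]*cam_world[2])
                 for i in range(3))
```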
In an embodiment of the present application, the second pixel mapping unit includes:
the distortion correction map mapping subunit is used for determining mapping points of the second space point mapped in the distortion correction map of the vehicle-mounted camera;
the distortion model determining second pixel point subunit is configured to determine, using the distortion model of the vehicle-mounted camera, the second pixel point mapped by the mapping point in the original image.

In this embodiment, a plurality of local map options are displayed in a graphical interface, and the image type to be displayed is determined in response to a selection operation on a target local map option. According to the image type to be displayed, the pixel positions of the image to be displayed that matches the image type are determined, together with the original image, acquired by the vehicle-mounted camera, that matches the image type. Pixel information corresponding to each pixel position is determined from the original image, and the image to be displayed is then generated from the pixel information corresponding to all pixel positions of the image to be displayed and shown in the graphical interface. The image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera under the vehicle body coordinate system as the coordinate origin. This realizes the display of high-quality local images in the vehicle-mounted system, makes the local images better satisfy the visual requirements of human eyes through the virtual camera viewing angle, and, by reversely mapping the local image back to the original image, simplifies the mapping process.
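The distortion-model step can be illustrated with a simple radial model: a normalized, undistorted image point is warped by radial coefficients and then scaled into a pixel of the raw image. The two-coefficient model (k1, k2) below is a common stand-in assumed for illustration and is not the patent's specific on-board model:

```python
def radial_distort_to_pixel(x, y, k1, k2, fx, fy, cx, cy):
    # Radial polynomial distortion applied to a normalized image point
    # (x, y), followed by conversion to pixel coordinates of the raw image.
    r2 = x*x + y*y
    scale = 1.0 + k1*r2 + k2*r2*r2
    return fx * x * scale + cx, fy * y * scale + cy
```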
Referring to fig. 12, a schematic structural diagram of an in-vehicle system provided in an embodiment of the present application is shown, including a display device 1201, an in-vehicle camera 1202, and a processing device 1203.
The display device 1201 is configured to display a plurality of local map options, and when receiving an image to be displayed, display the image to be displayed;
the vehicle-mounted camera 1202 is configured to collect an original image and send the original image to the processing device;
the processing device 1203 is configured to determine a type of an image to be displayed in response to a selection operation of the target local map option; generating an image to be displayed according to the type of the image to be displayed, and sending the image to the display device; the image to be displayed is an image displayed in a virtual camera view angle, and a virtual camera coordinate system where the virtual camera is located is: a coordinate system constructed by taking the position of the virtual camera under the vehicle body coordinate system as a coordinate origin;
the processing device 1203 is specifically configured to, when generating the image to be displayed according to the image type to be displayed:
determining pixel positions of the image to be displayed, which is matched with the image type, and an original image acquired by a vehicle-mounted camera, which is matched with the image type, according to the image type to be displayed;
determining pixel information corresponding to the pixel position from the original image;
and generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed.
In the embodiment of the application, a plurality of local image options are displayed in a graphical interface, and the image type to be displayed is determined in response to a selection operation on a target local image option. According to the image type to be displayed, the pixel positions of the image to be displayed that matches the image type are determined, together with the original image, acquired by the vehicle-mounted camera, that matches the image type. Pixel information corresponding to each pixel position is determined from the original image, the image to be displayed is then generated from the pixel information corresponding to all pixel positions of the image to be displayed, and the image is shown in the graphical interface. The image to be displayed is an image rendered from the viewing angle of a virtual camera, and the virtual camera coordinate system in which the virtual camera is located is a coordinate system constructed with the position of the virtual camera under the vehicle body coordinate system as the coordinate origin. This realizes the display of high-quality local images in the vehicle-mounted system, makes the local images better satisfy the visual requirements of human eyes through the virtual camera viewing angle, and, by reversely mapping the local image back to the original image, simplifies the mapping process.
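Putting the pieces together, the reverse mapping described above amounts to a per-pixel backward loop over the virtual view. In this sketch, `backproject` and `sample` are hypothetical stand-ins for the intrinsic/extrinsic/distortion chain and the raw-image lookup; they are not names used by the patent:

```python
def render_local_view(width, height, backproject, sample):
    # For every pixel of the virtual (to-be-displayed) image, find the
    # corresponding pixel of the original camera image and copy its
    # pixel information -- the backward mapping the embodiment describes.
    out = [[None] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            src_u, src_v = backproject(u, v)  # second pixel point in the raw image
            out[v][u] = sample(src_u, src_v)  # pixel information lookup
    return out
```

With an identity `backproject` and a raw image stored as nested lists, `sample` is simply `lambda u, v: raw[v][u]`.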
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the above local image processing method.
For the device, system, and medium embodiments, the description is relatively simple as it is substantially similar to the method embodiments, and reference should be made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all alterations and modifications that fall within the scope of the embodiments of the present application.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal device that comprises that element.
The foregoing has described in detail a local image processing method and apparatus, a vehicle-mounted system, and a storage medium provided by the present application. Specific examples have been used herein to illustrate the principles and embodiments of the application; the above description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, in light of the ideas of the present application, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (24)

1. A method for processing a local image, applied to a vehicle-mounted system, comprising:
displaying a plurality of local map options in a graphical interface;
determining the type of the image to be displayed in response to a selection operation on the target local map option;
generating an image to be displayed according to the type of the image to be displayed, and displaying the image to be displayed in the graphical interface; the image to be displayed is an image displayed in a virtual camera view angle, and a virtual camera coordinate system where the virtual camera is located is: a coordinate system constructed by taking the position of the virtual camera under the vehicle body coordinate system as a coordinate origin;
Wherein, the step of generating the image to be displayed according to the image type to be displayed includes:
determining pixel positions of the image to be displayed, which is matched with the image type, and an original image acquired by a vehicle-mounted camera, which is matched with the image type, according to the image type to be displayed;
determining pixel information corresponding to the pixel position from the original image, including: for a first pixel point in the image to be displayed, determining a first spatial point mapped by the first pixel point under a virtual camera coordinate system includes: acquiring a preset resolution corresponding to the virtual camera; determining virtual camera internal parameters corresponding to the virtual camera by combining the resolution; determining a first space point mapped by a first pixel point in the image to be displayed under a virtual camera coordinate system by adopting the virtual camera internal reference; the first pixel point is any pixel point in the image to be displayed;
and generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed.
2. The method of claim 1, wherein the image types include an image to be displayed for a front wheel of the vehicle and an image to be displayed for a rear wheel of the vehicle.
3. The method according to claim 1, wherein the method further comprises:
detecting a steering wheel angle of a vehicle, and determining a running direction of the vehicle according to the steering wheel angle;
when the vehicle runs in the reverse direction and the running speed is smaller than the first preset speed, determining the type of the image to be displayed as the image to be displayed for the rear wheels of the vehicle;
and when the vehicle runs in the forward direction and the running speed is smaller than the second preset speed, determining the type of the image to be displayed as the image to be displayed for the front wheels of the vehicle.
4. The method of claim 1, wherein the step of determining pixel information corresponding to the pixel location from the original image further comprises:
determining a second space point corresponding to the first space point under a world coordinate system;
determining a second pixel point mapped by the second space point in the original image;
and determining the pixel information of the second pixel point as the pixel information of the first pixel point.
5. The method of claim 4, wherein the virtual camera references comprise a reference matrix, and wherein the step of determining the virtual camera references corresponding to the virtual camera in combination with the resolution comprises:
determining a virtual camera principal point and a virtual camera focal length by combining the resolution;
and calculating an internal reference matrix corresponding to the virtual camera by adopting the main point of the virtual camera and the focal length of the virtual camera.
6. The method of claim 5, wherein the step of determining a virtual camera focal length in conjunction with the resolution comprises:
determining a first camera position of the virtual camera under a vehicle body coordinate system and a center position of a visual field center of the virtual camera under the vehicle body coordinate system; the first camera position is a center point of the position of the virtual camera under a vehicle body coordinate system;
calculating a position distance corresponding to the first camera position and the central position;
and calculating to obtain a virtual camera focal length corresponding to the virtual camera by adopting the resolution, the position distance and the preset visual field range length.
7. The method of claim 5, wherein the step of determining a virtual camera principal point in combination with the resolution further comprises:
and calculating to obtain the horizontal direction coordinate and the vertical direction coordinate of the virtual camera main point corresponding to the virtual camera by adopting the resolution ratio.
8. The method of claim 4 or 5 or 6 or 7, wherein the step of determining a second spatial point corresponding to the first spatial point in world coordinate system comprises:
determining an affine transformation matrix between a vehicle body coordinate system and a world coordinate system;
combining the affine transformation matrix to determine virtual camera external parameters corresponding to the virtual camera;
and determining a second space point of the first space point under a world coordinate system by adopting the virtual camera external parameters.
9. The method of claim 8, wherein the virtual camera external parameters comprise a rotation matrix, and wherein the step of determining the virtual camera external parameters corresponding to the virtual camera in conjunction with the affine transformation matrix comprises:
calculating a transformation direction vector under a world coordinate system by adopting the affine transformation matrix and a first vertical axis direction vector of a camera coordinate system;
calculating to obtain a rotation vector and a rotation angle by adopting the transformation direction vector and a second vertical axis direction vector under a world coordinate system;
and calculating to obtain a rotation matrix corresponding to the virtual camera by adopting the rotation vector and the rotation angle.
10. The method of claim 9, wherein the virtual camera extrinsic parameters comprise a translation matrix, and wherein the step of determining the virtual camera extrinsic parameters corresponding to the virtual camera in conjunction with the affine transformation matrix comprises:
calculating a second camera position of the virtual camera under a world coordinate system by adopting the affine transformation matrix and the first camera position of the virtual camera under a vehicle body coordinate system;
and determining a translation matrix corresponding to the second camera position.
11. The method of claim 4, wherein the step of determining the second pixel point of the second spatial point map in the original image comprises:
determining a mapping point of the second space point in the distortion correction map of the vehicle-mounted camera;
and determining a second pixel point mapped by the mapping point in the original image by adopting a distortion model of the vehicle-mounted camera.
12. A processing apparatus for a local image, applied to an in-vehicle system, comprising:
the local map option display module is used for displaying a plurality of local map options in the graphical interface;
the image type determining module is used for determining the type of the image to be displayed in response to the selection operation of the target local image option;
the image generation module is used for generating an image to be displayed according to the type of the image to be displayed;
the display module is used for displaying the image to be displayed in the graphical interface;
The image to be displayed is an image displayed in a virtual camera view angle, and a virtual camera coordinate system where the virtual camera is located is: a coordinate system constructed by taking the position of the virtual camera under the vehicle body coordinate system as a coordinate origin;
wherein the image generation module comprises:
the image type processing sub-module is used for determining the pixel position of the image to be displayed, which is matched with the image type, and the original image acquired by the vehicle-mounted camera, which is matched with the image type, according to the image type to be displayed;
a pixel information determining sub-module, configured to determine pixel information corresponding to the pixel position from the original image; the pixel information determination submodule includes: a first spatial point mapping unit, configured to determine, for a first pixel point in the image to be displayed, a first spatial point mapped by the first pixel point in a virtual camera coordinate system; the first spatial point mapping unit includes: a resolution obtaining subunit, configured to obtain a preset resolution corresponding to the virtual camera; a virtual camera internal reference determining subunit, configured to determine a virtual camera internal reference corresponding to the virtual camera in combination with the resolution; an internal reference determining first space point subunit, configured to determine a first space point mapped by a first pixel point in the image to be displayed under a virtual camera coordinate system by using the virtual camera internal reference; the first pixel point is any pixel point in the image to be displayed;
And the image to be displayed generating sub-module is used for generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed.
13. The apparatus of claim 12, wherein the image types include an image to be displayed for a front wheel of the vehicle and an image to be displayed for a rear wheel of the vehicle.
14. The apparatus of claim 12, wherein the apparatus further comprises:
the driving direction determining module is used for detecting the steering wheel angle of the vehicle and determining the driving direction of the vehicle according to the steering wheel angle;
the driving reversing determining module is used for determining that the type of the image to be displayed is the image to be displayed for the rear wheels of the vehicle when the vehicle reverses and the driving speed is smaller than a first preset speed;
and the forward running determining module is used for determining that the type of the image to be displayed is the image to be displayed for the front wheels of the vehicle when the vehicle runs forward and the running speed is smaller than the second preset speed.
15. The apparatus of claim 12, wherein the pixel information determination submodule further comprises:
a second space point determining unit configured to determine a second space point corresponding to the first space point in a world coordinate system;
A second pixel mapping unit, configured to determine, in the original image, a second pixel mapped by the second spatial point;
and the pixel information mapping unit is used for determining the pixel information of the second pixel point as the pixel information of the first pixel point.
16. The apparatus of claim 15, wherein the virtual camera internal parameters comprise an internal parameter matrix, the virtual camera internal parameter determination subunit being specifically configured to:
determining a virtual camera principal point and a virtual camera focal length by combining the resolution;
and calculating an internal reference matrix corresponding to the virtual camera by adopting the main point of the virtual camera and the focal length of the virtual camera.
17. The apparatus of claim 16, wherein the virtual camera internal reference determination subunit, when determining a virtual camera focal length in combination with the resolution, is specifically configured to:
determining a first camera position of the virtual camera under a vehicle body coordinate system and a center position of a visual field center of the virtual camera under the vehicle body coordinate system; the first camera position is a center point of the position of the virtual camera under a vehicle body coordinate system;
calculating a position distance corresponding to the first camera position and the central position;
and calculating to obtain a virtual camera focal length corresponding to the virtual camera by adopting the resolution, the position distance and the preset visual field range length.
18. The apparatus according to claim 16, wherein the virtual camera internal reference determination subunit, when determining a virtual camera principal point in combination with the resolution, is specifically configured to:
and calculating to obtain the horizontal direction coordinate and the vertical direction coordinate of the virtual camera main point corresponding to the virtual camera by adopting the resolution ratio.
19. The apparatus according to claim 15 or 16 or 17 or 18, wherein the second spatial point determination unit comprises:
an affine transformation matrix determining subunit for determining an affine transformation matrix between the vehicle body coordinate system and the world coordinate system;
a virtual camera external parameter determining subunit, configured to combine the affine transformation matrix to determine a virtual camera external parameter corresponding to the virtual camera;
and the external parameter determining second space point subunit is used for determining a second space point of the first space point under a world coordinate system by adopting the virtual camera external parameter.
20. The apparatus of claim 19, wherein the virtual camera external parameters comprise a rotation matrix, the virtual camera external parameter determining subunit being specifically configured to:
calculating a transformation direction vector under a world coordinate system by adopting the affine transformation matrix and a first vertical axis direction vector of a camera coordinate system;
calculating to obtain a rotation vector and a rotation angle by adopting the transformation direction vector and a second vertical axis direction vector under a world coordinate system;
and calculating to obtain a rotation matrix corresponding to the virtual camera by adopting the rotation vector and the rotation angle.
21. The apparatus of claim 20, wherein the virtual camera external parameters comprise a translation matrix, the virtual camera external parameter determining subunit being specifically configured to:
calculating a second camera position of the virtual camera under a world coordinate system by adopting the affine transformation matrix and the first camera position of the virtual camera under a vehicle body coordinate system;
and determining a translation matrix corresponding to the second camera position.
22. The apparatus of claim 15, wherein the second pixel point mapping unit comprises:
the distortion correction map mapping subunit is used for determining mapping points of the second space point mapped in the distortion correction map of the vehicle-mounted camera;
and the distortion model determining second pixel point subunit is configured to determine, using the distortion model of the vehicle-mounted camera, the second pixel point mapped by the mapping point in the original image.
23. A vehicle-mounted system, which is characterized by comprising a display device, a vehicle-mounted camera and a processing device;
the display device is used for displaying a plurality of local image options and displaying the image to be displayed when the image to be displayed is received;
the vehicle-mounted camera is used for acquiring an original image and sending the original image to the processing device;
the processing device is used for responding to the selection operation of the target local image options and determining the type of the image to be displayed; generating an image to be displayed according to the type of the image to be displayed, and sending the image to the display device; the image to be displayed is an image displayed in a virtual camera view angle, and a virtual camera coordinate system where the virtual camera is located is: a coordinate system constructed by taking the position of the virtual camera under the vehicle body coordinate system as a coordinate origin;
the processing device is specifically configured to, when generating the image to be displayed according to the image type to be displayed:
determining pixel positions of the image to be displayed, which is matched with the image type, and an original image acquired by a vehicle-mounted camera, which is matched with the image type, according to the image type to be displayed;
Determining pixel information corresponding to the pixel position from the original image, including: for a first pixel point in the image to be displayed, determining a first spatial point mapped by the first pixel point under a virtual camera coordinate system includes: acquiring a preset resolution corresponding to the virtual camera; determining virtual camera internal parameters corresponding to the virtual camera by combining the resolution; determining a first space point mapped by a first pixel point in the image to be displayed under a virtual camera coordinate system by adopting the virtual camera internal reference; the first pixel point is any pixel point in the image to be displayed;
and generating the image to be displayed according to the pixel information corresponding to all the pixel positions of the image to be displayed.
23. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method of processing a local image according to any one of claims 1 to 11.
CN201911014302.3A 2019-10-23 2019-10-23 Local image processing method and device, vehicle-mounted system and storage medium Active CN112698717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911014302.3A CN112698717B (en) 2019-10-23 2019-10-23 Local image processing method and device, vehicle-mounted system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911014302.3A CN112698717B (en) 2019-10-23 2019-10-23 Local image processing method and device, vehicle-mounted system and storage medium

Publications (2)

Publication Number Publication Date
CN112698717A CN112698717A (en) 2021-04-23
CN112698717B true CN112698717B (en) 2023-07-25

Family

ID=75505398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911014302.3A Active CN112698717B (en) 2019-10-23 2019-10-23 Local image processing method and device, vehicle-mounted system and storage medium

Country Status (1)

Country Link
CN (1) CN112698717B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008013022A (en) * 2006-07-05 2008-01-24 Sanyo Electric Co Ltd Drive assisting device for vehicle
CN108762492A (en) * 2018-05-14 2018-11-06 歌尔科技有限公司 Method, apparatus, equipment and the storage medium of information processing are realized based on virtual scene

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100443552B1 (en) * 2002-11-18 2004-08-09 한국전자통신연구원 System and method for embodying virtual reality
US10203762B2 (en) * 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
CN107577988B (en) * 2017-08-03 2020-05-26 东软集团股份有限公司 Method, device, storage medium and program product for realizing side vehicle positioning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008013022A (en) * 2006-07-05 2008-01-24 Sanyo Electric Co Ltd Drive assisting device for vehicle
CN108762492A (en) * 2018-05-14 2018-11-06 歌尔科技有限公司 Method, apparatus, equipment and the storage medium of information processing are realized based on virtual scene

Also Published As

Publication number Publication date
CN112698717A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN112224132B (en) Vehicle panoramic all-around obstacle early warning method
CN109741455B (en) Vehicle-mounted stereoscopic panoramic display method, computer readable storage medium and system
EP3565739B1 (en) Rear-stitched view panorama for rear-view visualization
CN110341597B (en) Vehicle-mounted panoramic video display system and method and vehicle-mounted controller
CN111223038B (en) Automatic splicing method of vehicle-mounted looking-around images and display device
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
JP5739584B2 (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
JP3286306B2 (en) Image generation device and image generation method
JP4642723B2 (en) Image generating apparatus and image generating method
US20170324943A1 (en) Driver-assistance method and a driver-assistance apparatus
CN105894549A (en) Panorama assisted parking system and device and panorama image display method
JP2008077628A (en) Image processor and vehicle surrounding visual field support device and method
WO2000064175A1 (en) Image processing device and monitoring system
JP6726006B2 (en) Calculation of distance and direction of target point from vehicle using monocular video camera
CN106060427A (en) Panorama imaging method and device based on single camera
CN111768332A (en) Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device
US20170024851A1 (en) Panel transform
CN108174089B (en) Backing image splicing method and device based on binocular camera
TW201605247A (en) Image processing system and method
CN105774657B (en) Single-camera panoramic reverse image system
KR101771657B1 (en) Navigation apparatus for composing camera images of vehicle surroundings and navigation information, method thereof
JP2008034964A (en) Image display apparatus
CN112698717B (en) Local image processing method and device, vehicle-mounted system and storage medium
CN110084851B (en) Binocular point cloud generation method and system
CN110400255B (en) Vehicle panoramic image generation method and system and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant