CN117864023A - Picture display method, device, equipment and storage medium - Google Patents

Picture display method, device, equipment and storage medium

Info

Publication number
CN117864023A
CN117864023A
Authority
CN
China
Prior art keywords
image
scene
user
target
vehicle
Prior art date
Legal status
Pending
Application number
CN202211247797.6A
Other languages
Chinese (zh)
Inventor
Zhang Hailun (张海伦)
Current Assignee
Guangzhou Liuhuan Information Technology Co ltd
Original Assignee
Guangzhou Liuhuan Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Liuhuan Information Technology Co ltd filed Critical Guangzhou Liuhuan Information Technology Co ltd
Priority to CN202211247797.6A
Publication of CN117864023A
Legal status: Pending


Abstract

The application provides a picture display method, device, equipment and storage medium. The method includes: acquiring a first scene image and a target user image, wherein the first scene image is a scene image outside the vehicle equipment acquired through a first image pickup component, the first image pickup component being arranged on the vehicle equipment outside its vehicle space, and the target user image is a user image inside the vehicle equipment acquired through a second image pickup component, the second image pickup component being arranged on the vehicle equipment inside the vehicle space; determining a user sight direction according to the target user image, and generating a target picture corresponding to the user sight direction according to the first scene image, wherein the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user sight direction, and the second scene image is obtained based on the first scene image; and displaying the target picture in the vehicle space. This technical scheme enhances the view outside the vehicle as seen by the user.

Description

Picture display method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, apparatus, device, and storage medium for displaying a picture.
Background
Currently, automobiles are increasingly common in people's daily lives. They make travel more convenient and save time. For the driver, driving safety is of paramount importance. In certain driving scenarios, such as at night or in rain, snow, or heavy fog, visibility from inside the vehicle is poor, making it difficult for the driver to judge road conditions accurately, so safety accidents easily occur.
Disclosure of Invention
The application provides a picture display method, device, equipment and storage medium, so as to solve the technical problem that poor visibility from inside the vehicle impairs the driver's judgment of road conditions.
In a first aspect, a method for displaying a picture is provided, including:
acquiring a first scene image and a target user image, wherein the first scene image is a scene image outside vehicle equipment acquired through a first image pickup component, the first image pickup component is arranged on the vehicle equipment and is positioned outside a vehicle space of the vehicle equipment, the target user image is a user image inside the vehicle equipment acquired through a second image pickup component, and the second image pickup component is arranged on the vehicle equipment and is positioned inside the vehicle space;
determining a user sight direction according to the target user image, and generating a target picture corresponding to the user sight direction according to the first scene image, wherein the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user sight direction, and the second scene image is obtained based on the first scene image;
and displaying the target picture in the vehicle space.
In this technical scheme, a scene image outside the vehicle space is acquired through an exterior camera, and a target user image inside the vehicle space is acquired through an interior camera; the user sight direction is then determined according to the target user image, a target picture corresponding to the user sight direction is generated according to the scene image outside the vehicle space, and finally the target picture is displayed in the vehicle space. Because the target picture corresponds to the in-vehicle user's sight direction and is obtained based on a scene image outside the vehicle space, its content reflects the exterior scene along the user's sight direction. Since the scene image is captured by an exterior camera, the exterior scene reflected by the target picture is more realistic and intuitive than what the user sees through the window; and since the target picture is obtained by performing image optimization processing on the exterior scene image along the user's sight direction, its content is clearer and more reliable. Displaying the target picture inside the vehicle therefore lets the user clearly perceive the exterior scene, enhances the exterior view as seen by the user, helps the driver judge road conditions accurately, and reduces the occurrence of safety accidents.
With reference to the first aspect, in one possible implementation manner, there are a plurality of first image pickup components, and different first image pickup components have different capturing angles; generating the target picture corresponding to the user sight direction according to the first scene image includes: performing image fusion on the first scene images acquired by the respective first image pickup components to obtain a fused scene image outside the vehicle equipment; acquiring, from the fused scene image, a second scene image corresponding to the user sight direction; and performing image optimization processing on the second scene image to obtain the target picture. By fusing the scene images acquired by all the exterior cameras, extracting from the fused image the scene image matching the user sight direction, and performing image optimization processing, the picture displayed in the vehicle is obtained, ensuring that it is both consistent with the exterior scene along the user sight direction and clear.
With reference to the first aspect, in a possible implementation manner, performing image optimization processing on the second scene image to obtain the target picture includes: acquiring driving environment information of the vehicle equipment, the driving environment information including at least the driving time and the driving weather; and performing image optimization processing on the second scene image in an image optimization processing manner corresponding to the driving environment information to obtain the target picture. Optimizing the scene image along the user sight direction in a manner matched to the driving environment of the vehicle equipment allows differentiated processing of the image, so that the resulting target picture is clearer and more reliable.
With reference to the first aspect, in one possible implementation manner, performing image optimization processing on the second scene image in an image optimization processing manner corresponding to the driving environment information to obtain the target picture includes: when it is determined from the driving weather that a sight-blocking target exists in the driving environment of the vehicle equipment, performing image optimization processing on the second scene image in a first image optimization processing manner to obtain the target picture, the first manner being one that removes the sight-blocking target from the image. Removing the sight-blocking target from the image when one is determined to exist makes the target picture obtained by image optimization processing clearer and more reliable.
With reference to the first aspect, in one possible implementation manner, performing image optimization processing on the second scene image in an image optimization processing manner corresponding to the driving environment information includes: when it is determined from the driving time that the vehicle equipment is driving within a preset time period, performing image optimization processing on the second scene image corresponding to the user sight direction in a second image optimization processing manner to obtain the target picture, the second manner being one that adjusts the brightness of the image. Adjusting the image brightness when the vehicle equipment is determined to be driving in a specific time period makes the target picture obtained by image optimization processing clearer and more reliable.
With reference to the first aspect, in a possible implementation manner, performing image optimization processing on the second scene image to obtain the target picture includes: detecting image factors of the second scene image, the image factors including at least the image acquisition time, the image content, and the image quality; and performing image optimization processing on the second scene image in an image optimization processing manner corresponding to the image factors to obtain the target picture. Optimizing the scene image along the user sight direction in a manner matched to its image factors allows differentiated processing of the image, so that the resulting target picture is clearer and more reliable.
With reference to the first aspect, in a possible implementation manner, after displaying the target picture in the vehicle space, the method further includes: when a picture adjustment instruction is acquired, adjusting the displayed target picture in response to the instruction until a picture adjustment end instruction is acquired. Adjusting the target picture displayed in the vehicle according to the picture adjustment instruction keeps the target picture fully consistent with the user sight direction, making it convenient for the user to observe the exterior view through the target picture.
With reference to the first aspect, in a possible implementation manner, the method further includes: acquiring the driving scene of the vehicle equipment; and instructing the car lamps in the vehicle space to display in a display mode corresponding to the driving scene. Having the in-vehicle lamps display according to the driving scene enhances the visual effect and serves as a reminder to the user.
In a second aspect, there is provided a picture display device including:
an image acquisition module, configured to acquire a first scene image and a target user image, wherein the first scene image is a scene image outside the vehicle equipment acquired through a first image pickup component, the first image pickup component being arranged on the vehicle equipment outside its vehicle space, and the target user image is a user image inside the vehicle equipment acquired through a second image pickup component, the second image pickup component being arranged on the vehicle equipment inside the vehicle space;
a picture generation module, configured to determine a user sight direction according to the target user image and generate a target picture corresponding to the user sight direction according to the first scene image, wherein the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user sight direction, and the second scene image is obtained based on the first scene image;
and a display module, configured to display the target picture in the vehicle space.
In a third aspect, there is provided a computer device comprising a memory and one or more processors, the memory being connected to the one or more processors, the one or more processors being configured to execute one or more computer programs stored in the memory, the one or more processors, when executing the one or more computer programs, causing the computer device to implement the picture display method of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the picture display method of the first aspect.
The application can realize the following technical effects: because the target picture corresponds to the in-vehicle user's sight direction and is obtained based on a scene image outside the vehicle space, its content reflects the exterior scene along the user's sight direction. Since the scene image is captured by an exterior camera, the exterior scene reflected by the target picture is more realistic and intuitive than what the user sees through the window; and since the target picture is obtained by performing image optimization processing on the exterior scene image along the user's sight direction, its content is clearer and more reliable. Displaying the target picture inside the vehicle therefore lets the user clearly perceive the exterior scene, enhances the exterior view as seen by the user, helps the driver judge road conditions accurately, and reduces the occurrence of safety accidents.
Drawings
Fig. 1 is a flow chart of a method for displaying a picture according to an embodiment of the present application;
FIGS. 2A-2B are technical schematics of gaze tracking provided by embodiments of the present application;
FIG. 3 is a schematic diagram of image fusion and acquisition of a second scene image provided by an embodiment of the present application;
fig. 4 is a flowchart of another method for displaying a picture according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a picture display device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The technical scheme of the application is applicable to automobile driving scenarios. In particular, it can be applied to acquiring and processing images of the scene outside the vehicle and then displaying them inside the vehicle, so as to replace the exterior scene that the user would otherwise see through the vehicle window.
The scheme of the present application is particularly applicable to vehicle equipment having a vehicle space, including but not limited to passenger cars, off-road vehicles, recreational vehicles, and the like. Optionally, the technical scheme may also be applied to other devices connected to the vehicle equipment, where the connection may be wired or wireless. The other device may be, for example, an Internet-of-Vehicles server, such as a cloud server, that forms a vehicle network together with the vehicle equipment; alternatively, it may be a mobile device connected to the vehicle equipment, such as a mobile phone or an in-vehicle display.
The general technical concept of the application is as follows: while the vehicle is driving, the exterior scenes captured by the vehicle's exterior cameras are processed and integrated into a complete exterior scene image; the sight direction of the in-vehicle user is obtained from the user image captured by the vehicle's interior camera; a target picture along the user sight direction, subjected to image optimization processing, is then formed based on the exterior scene image, so that the target picture clearly reflects the exterior scene along the user sight direction; finally, the target picture is displayed inside the vehicle in place of the exterior scene along the user sight direction that the user would see through the window. Because the target picture is the exterior scene image along the user sight direction after image optimization processing, the exterior scene it reflects is clearer and more reliable than what the user sees through the window, and the user can accurately learn road condition information from it, avoiding safety accidents.
Referring to fig. 1, fig. 1 is a schematic flowchart of a picture display method according to an embodiment of the present application. The method may be applied to the aforementioned vehicle equipment or other devices. As shown in fig. 1, the method includes the following steps:
S101, acquiring a first scene image and a target user image.
The first scene image is a scene image outside the vehicle equipment acquired through a first image pickup component, which is arranged on the vehicle equipment outside its vehicle space. Specifically, the first image pickup component may be a vehicle-mounted camera provided on the vehicle body for capturing the exterior view of the vehicle equipment, and may include a front-view camera, a rear-view camera, a side-view camera, a surround-view camera, and the like; the number of cameras varies with their function, for example, there may be 4 surround-view cameras for capturing the exterior view around the vehicle equipment. It should be understood that any image of the scene outside the vehicle captured by the first image pickup component may be referred to as a first scene image.
The target user image is a user image inside the vehicle equipment acquired through a second image pickup component, which is arranged on the vehicle equipment inside its vehicle space. Specifically, the second image pickup component may be an in-vehicle camera, such as a built-in camera, provided inside the vehicle for capturing scenes in the vehicle space. There may be one or more second image pickup components, so as to cover all scenes in the vehicle space. The target user image may refer to the image, captured by the second image pickup component, of the user in the driver's seat.
Specifically, if the technical scheme of the application is applied to the vehicle equipment itself, the vehicle equipment can acquire the first scene image and the target user image through its vehicle-mounted cameras. If the technical scheme is applied to another device connected to the vehicle equipment, then after the vehicle-mounted cameras on the vehicle equipment acquire the first scene image and the target user image, both images can be sent to the other device in real time, whereby the other device acquires them.
S102, determining a user sight direction according to the target user image, and generating a target picture corresponding to the user sight direction according to the first scene image.
Here, determining the user's gaze direction from the target user image means determining the gaze direction of the user to which the target user image corresponds, i.e. determining the gaze direction of the driver in the vehicle device, based on the target user image.
In some possible implementations, the user gaze direction may be determined directly from the target user image.
In one possible implementation, an appearance-based gaze tracking technique, also called a 2D gaze tracking method, may be used to determine the user sight direction: taking an eye image as input, the pupil position, iris center, and eye-corner positions are located through image processing techniques such as the circular Hough transform, pupil spatial-shape analysis, and gradient descent, and the sight direction is determined from them. The appearance-based technique is illustrated in figs. 2A-2B: fig. 2A shows the front of the eye, where Ec is the center of the eyeball, Oc the center of the iris, P1 and P2 the positions of the inner and outer eye corners, and U1 and U2 the junction points of the upper eyelid and the iris; fig. 2B is a top view of the eye, and d is the vector passing through the eyeball center Ec and the iris center Oc. When the human eye gazes in different directions, the iris center Oc rotates around the eyeball center Ec while Ec remains stationary, so the gaze direction can be simplified as the direction of the line connecting Ec and Oc, i.e., the vector starting at Ec and passing through Oc. The eyeball center Ec can be determined from the eye-corner positions; therefore, the gaze direction of the human eye can be determined by locating the iris center Oc and the two eye corners P1, P2 in the eye image.
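The simplification described above (gaze direction as the vector from the eyeball center Ec to the iris center Oc) can be sketched as follows. This is only an illustrative sketch: the landmark coordinates are hypothetical, and approximating Ec as the midpoint of the two eye corners is an assumption for demonstration, not the patent's exact computation.

```python
import math

def gaze_direction(p1, p2, oc):
    """Approximate the 2D gaze direction from eye landmarks.

    p1, p2: inner/outer eye-corner pixel coordinates (x, y)
    oc:     detected iris-center pixel coordinates (x, y)

    The eyeball center Ec is approximated (for illustration) as the
    midpoint of the two corners; the gaze direction is the unit
    vector from Ec toward Oc.
    """
    ec = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    dx, dy = oc[0] - ec[0], oc[1] - ec[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return (0.0, 0.0)  # iris centered: looking straight at the camera
    return (dx / norm, dy / norm)

# Hypothetical landmark positions: the iris center sits to the right of
# the eye center in image coordinates.
print(gaze_direction((100, 50), (140, 50), (128, 50)))  # -> (1.0, 0.0)
```

In practice the landmarks would come from the detection steps above (Hough transform or a neural network), and the 2D vector would be mapped to a viewing angle via calibration.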
In a specific implementation, eye target detection may be performed on the target user image to locate the eyes, an eye image is then cropped from the target user image according to that location, and the eye image is input into a pre-trained deep convolutional neural network for gaze detection, which performs key-point detection and sight detection on the eye region to obtain the user sight direction corresponding to the target user image. Determining the user sight direction with a deep convolutional neural network improves the accuracy of detection.
Alternatively, the user sight direction may also be determined by detecting facial features other than the eyes, such as head, nose, or ear features, in the target user image. Specifically, the sight direction may be determined from the change in position of such facial features across two or more adjacent frames of target user images, the change in position reflecting the velocity, acceleration, and the like of those features. Determining the sight direction from facial features other than the eyes makes detection possible even when the image quality of the target user image is poor.
The specific manner of determining the sight direction from the target user image is not limited to the examples above, and the present application places no restriction on it.
In other possible implementations, the user sight direction may also be determined by combining the target user image and the first scene image. Specifically, based on the mapping between the pixel coordinate system (the two-dimensional coordinate system in which pixel points lie) and the world coordinate system (the three-dimensional coordinate system in which objects lie), together with the calibration relationship between the first and second image pickup components, the target user image and the first scene image may be converted into the same virtual world coordinate system, yielding a virtual three-dimensional user image corresponding to the target user image and a virtual three-dimensional scene image corresponding to the first scene image. The three-dimensional coordinates of the user's head in the virtual world coordinate system are then determined from the virtual three-dimensional user image, and the user sight direction in the virtual three-dimensional world is finally determined from the relative positional relationship between those head coordinates and the virtual three-dimensional scene image; this direction is taken as the user sight direction.
The mapping between the pixel coordinate system of the target user image and the world coordinate system can be obtained based on the conversion matrix of the second image pickup component, and the mapping between the pixel coordinate system of the first scene image and the world coordinate system based on the conversion matrix of the first image pickup component, where the conversion matrix of an image pickup component comprises its camera intrinsic parameters and camera extrinsic parameters; the calibration relationship between the first and second image pickup components can be obtained by calibration in advance.
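As a rough illustration of how a conversion matrix relates pixel and world coordinates, the sketch below back-projects a pixel to a viewing ray through hypothetical pinhole intrinsics and applies an extrinsic rotation. All calibration values here are made up; a real system would use the calibrated intrinsics and extrinsics of the actual image pickup components.

```python
def pixel_to_camera_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a viewing ray in the camera frame
    using pinhole intrinsics (fx, fy: focal lengths in pixels; cx, cy:
    principal point). Depth is unobservable from a single pixel, so the
    result is a direction, not a 3D point."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

def rotate_to_world(ray, rotation):
    """Express a camera-frame direction in the shared world frame by
    applying the camera's extrinsic rotation (3x3 matrix as row lists).
    Directions are unaffected by the extrinsic translation, so only the
    rotation is applied."""
    return tuple(
        sum(rotation[r][c] * ray[c] for c in range(3)) for r in range(3)
    )

# Hypothetical calibration: 500-pixel focal length, principal point at
# (320, 240), camera axes aligned with the world axes.
ray = pixel_to_camera_ray(320, 240, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(rotate_to_world(ray, identity))  # -> (0.0, 0.0, 1.0)
```

With rays from both cameras expressed in one world frame, the head position and the scene geometry can be related as the paragraph above describes.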
The target picture is obtained by carrying out image optimization processing on a second scene image corresponding to the sight direction of the user, wherein the second scene image can be understood as a scene image formed by an external scene in the sight direction of the user, and the second scene image is obtained based on the first scene image, namely the second scene image is derived from the first scene image.
In some possible implementation scenarios, after the first scene images acquired by the first image pickup components are obtained, the second scene image corresponding to the user sight direction may be determined from the first scene images, and image optimization processing may then be performed on the second scene image to obtain the target picture.
In one possible implementation manner, panoramic fusion may be performed on the first scene images acquired by all the first image pickup components to obtain a panoramic image, and the second scene image is then determined from the panoramic image. The target picture can be obtained through the following steps A1 to A3.
A1, performing image fusion on the first scene images acquired by the respective first image pickup components to obtain a fused scene image outside the vehicle equipment.
Here, the first scene images acquired by the respective first image pickup components may be fused at the two-dimensional image level or at the three-dimensional image level. If the first scene images are fused at the two-dimensional level, the resulting fused scene image is a two-dimensional image; if they are fused at the three-dimensional level, the resulting fused scene image is a three-dimensional image.
Specifically, the first scene images can be fused at the two-dimensional image level through the steps of feature matching, image mapping, image registration, and image stitching; they can be fused at the three-dimensional image level by converting the first scene images into the same virtual world coordinate system and then performing three-dimensional reconstruction and fusion.
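To make the registration-and-stitching idea concrete, here is a deliberately simplified one-dimensional analogue: it finds where two overlapping "strips" of samples agree and merges them without duplicating the overlap. Real image stitching matches 2D feature points and warps images before blending; this toy version only illustrates the overlap-merging step and is not the patent's algorithm.

```python
def stitch_1d(left, right, min_overlap=1):
    """Toy 1-D analogue of registration + stitching: slide `right` over
    the tail of `left`, pick the largest offset whose overlapping samples
    agree exactly, and merge without duplicating the overlap."""
    best = None
    for overlap in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-overlap:] == right[:overlap]:
            best = overlap
            break
    if best is None:
        return left + right  # no overlap found: plain concatenation
    return left + right[best:]

# Two "camera strips" sharing the samples [4, 5] in their overlap region.
print(stitch_1d([1, 2, 3, 4, 5], [4, 5, 6, 7]))  # -> [1, 2, 3, 4, 5, 6, 7]
```

In the two-dimensional case the same principle applies per feature correspondence rather than per sample, followed by blending across the seam.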
A2, acquiring a second scene image corresponding to the sight direction of the user from the fused scene image.
Specifically, if the fused scene image is a two-dimensional image, an image whose angular extent matches the human eye's field of view may be cropped from the fused scene image as the second scene image, centered on the image content along the user sight direction.
For example, referring to fig. 3, assume that a1, a2, a3, and a4 in fig. 3 are the first scene images captured by 4 first image pickup components, whose capture view angles are 0° to 60°, 40° to 100°, 80° to 140°, and 120° to 180°, respectively, and that the fused scene image obtained by fusing these 4 images is the panoramic image with a 180° view-angle range shown as a5 in fig. 3. Assuming the user sight direction is the vector d1 in fig. 3, the angle corresponding to d1 is 60°, and the field of view over which the eyes can see a scene when gazing is 45°, then the content of region a6 in fig. 3 can be cropped as the second scene image, whose view angle spans 37.5° to 82.5°.
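The angular arithmetic of this example (gaze at 60°, a 45° field of view, crop 37.5° to 82.5°) can be written as a small helper. Clamping to the panorama's 0° to 180° range is an assumption about edge handling, not something the example above specifies.

```python
def view_window(gaze_angle_deg, fov_deg, pano_range=(0.0, 180.0)):
    """Compute the angular slice of the panorama to crop, centered on the
    user's gaze angle with the given field of view, clamped to the
    panorama's angular range."""
    half = fov_deg / 2.0
    lo = max(pano_range[0], gaze_angle_deg - half)
    hi = min(pano_range[1], gaze_angle_deg + half)
    return (lo, hi)

# The example above: gaze at 60 degrees with a 45-degree field of view.
print(view_window(60.0, 45.0))  # -> (37.5, 82.5)
```

Converting the angular window to pixel columns is then a matter of scaling by the panorama's pixels-per-degree resolution.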
Specifically, if the fused scene image is a three-dimensional image, a virtual camera may be placed in it, with its view angle centered on the user sight direction and its shooting range set to the field of view over which the eyes can see a scene when gazing; the content of the three-dimensional scene image captured by the virtual camera is then taken as the second scene image.
A3, performing image optimization processing on the second scene image to obtain the target picture.
Here, performing image optimization processing on the second scene image to obtain the target picture means making the content of the target picture clearer than the content of the second scene image through image processing, while keeping the content of the target picture consistent with that of the second scene image.
In some possible embodiments, a uniform image optimization processing method may be applied to the second scene image to obtain the target picture. For example, an image optimization processing model may be used to process the second scene image; alternatively, the second scene image may be processed by means of filtering, Fourier transformation, and the like.
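As one hedged example of the filtering route, the following pure-Python sketch applies an unsharp mask to a 1-D row of pixel intensities; a production system would operate on full 2-D images with a proper blur kernel, and the function name is illustrative.

```python
def unsharp_mask(row, amount=1.0):
    """Sharpen a 1-D row of 0-255 intensities with a 3-tap box blur:
    sharpened = original + amount * (original - blurred)."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(i - 1, 0)]       # clamp at the borders
        right = row[min(i + 1, n - 1)]
        blurred = (left + row[i] + right) / 3.0
        value = row[i] + amount * (row[i] - blurred)
        out.append(min(255.0, max(0.0, value)))  # keep a valid pixel range
    return out
```

Flat regions pass through unchanged, while intensity edges are pushed toward the clipping limits, which is the visual "sharpening" effect.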
In other possible implementation scenarios, the second scene image may also be optimized according to the specific situation of the second scene image. The optimizing process of the second scene image according to the specific situation of the second scene image may be as follows:
(1) Performing image optimization processing on the second scene image through the following steps A31-A32 to obtain the target picture.
A31, acquiring the running environment information of the vehicle equipment.
Here, the running environment information of the vehicle device is used to reflect the current running environment of the vehicle device, and the running environment information of the vehicle device includes at least running time and running weather. The running time refers to the running time of the vehicle equipment and can be obtained based on the system time; the driving weather may be obtained based on sensors on the vehicle equipment or the internet. Optionally, the driving environment of the vehicle device may further include a geographical location (e.g., mountain area, plains, etc.), an ambient light intensity, an ambient humidity, an ambient temperature, etc. where the vehicle device is driving. The geographic position can be obtained based on a satellite map, and the intensity of ambient light, the ambient humidity, the ambient temperature and the like can be obtained by various sensors on the vehicle equipment.
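The driving environment information above can be sketched as a small container plus a collector; the field names and the injected clock/weather providers are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DrivingEnvironment:
    travel_time: datetime        # from the system time
    weather: str                 # from on-board sensors or the Internet
    location_type: str = ""      # e.g. "mountain", "plain" (satellite map)
    ambient_light: float = 0.0   # from a light sensor
    humidity: float = 0.0
    temperature: float = 0.0

def collect_environment(clock, weather_service):
    """clock and weather_service are injected callables, so the information
    sources (system clock, sensor, Internet) can be swapped freely."""
    return DrivingEnvironment(travel_time=clock(), weather=weather_service())
```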
A32, optimizing the second scene image by adopting an image optimizing processing mode corresponding to the running environment information of the vehicle equipment to obtain a target picture.
Specifically, when it is determined according to the driving weather that a sight-blocking target exists in the driving environment of the vehicle device, a first image optimization processing mode is adopted to perform image optimization processing on the second scene image to obtain the target picture, where the first image optimization processing mode refers to removing the sight-blocking target from the image. A sight-blocking target is a target that affects visibility during driving and does not belong to the road content, such as the rain, snow, or fog present in specific weather.
In a specific implementation, when it is determined according to the driving weather that a sight-blocking target exists in the driving environment of the vehicle device, the second scene image can be input into a pre-trained convolutional neural network for removing sight-blocking targets, and the network performs the image optimization processing on the second scene image to obtain the target picture.
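The patent specifies a pre-trained convolutional neural network for this removal; as a lightweight classical stand-in that illustrates the same goal of suppressing isolated rain or snow speckles, the following sketch applies a 3×3 median filter. The function name is an assumption.

```python
def remove_streak_noise(img):
    """3x3 median filter over a 2-D list of intensities: isolated bright
    speckles (e.g. raindrops) are replaced by the local median, while
    large uniform regions such as the road surface are preserved."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(h, y + 2))
                      for xx in range(max(0, x - 1), min(w, x + 2))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```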
In rainy, snowy, or foggy weather, rain, snow, and fog block the driver's view and likewise appear in the captured image as targets obscuring the road conditions. Removing these targets from the image restores the road conditions, making the target picture clearer and more reliable and allowing the driver to understand the road conditions more clearly.
Specifically, when it is determined according to the travel time that the vehicle device is traveling within a preset time period, a second image optimization processing mode is adopted to optimize the second scene image and obtain the target picture, where the second image optimization processing mode refers to adjusting image brightness. The preset time period is a period of low visibility, for example night or early morning.
In a specific implementation, when it is determined according to the travel time that the vehicle device is traveling within the preset time period, the second scene image can be denoised and sharpened and its brightness adjusted, realizing the image optimization processing and yielding the target picture. Adjusting the image brightness when the vehicle device is traveling in such a time period makes the target picture obtained by the image optimization processing clearer and more reliable.
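The brightness adjustment for a preset low-visibility period can be sketched as gamma correction; gamma below 1 lifts dark midtones while preserving black and white, which is one common way (an assumption here, not patent-specified) to brighten night frames.

```python
def brighten(pixels, gamma=0.5):
    """Gamma-correct a list of 0-255 intensities; gamma < 1 brightens
    dark (e.g. night-time) frames without clipping the extremes."""
    return [round(255.0 * (p / 255.0) ** gamma) for p in pixels]
```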
Not limited to the above, in alternative embodiments there may be further image optimization processing modes corresponding to the driving environment information of the vehicle device. For example, when it is determined according to the driving weather that a sight-blocking target exists in the driving environment and it is also determined according to the travel time that the vehicle device is traveling within the preset time period, the second scene image may first be optimized with the second image optimization mode and then with the first image optimization mode. For another example, when it is determined according to the geographical location that the vehicle device is traveling in a mountain environment, the second scene image may be optimized by Gaussian filtering. The present application is not limited in this respect.
Performing the image optimization processing with a mode matched to the driving environment information of the vehicle device realizes differentiated processing of the scene image in the user's line-of-sight direction, so that the target picture obtained by the image optimization processing is clearer and more reliable.
(2) Performing image optimization processing on the second scene image through the following steps A33-A34 to obtain the target picture.
A33, detecting image factors of the second scene image.
Here, the image factors of the second scene image reflect the image characteristics of the second scene image, and include at least the image acquisition time, the image content, and the image quality of the second scene image. The image acquisition time refers to the time at which the first image capturing component captured the first scene image corresponding to the second scene image, and can be determined from the capturing time of that component; the image content refers to the things in the second scene image, and can be determined through target detection; the image quality includes the brightness, sharpness, color saturation, and the like of the second scene image.
And A34, performing image optimization processing on the second scene image in an image optimization mode corresponding to the image factors of the second scene image to obtain the target picture.
Specifically, when it is determined according to the image acquisition time that the second scene image was captured within the preset time period, the second image optimization mode (adjusting brightness) is adopted to obtain the target picture; when it is determined according to the image content that a sight-blocking target exists in the second scene image, the first image optimization mode is adopted. The processing is not limited to these modes.
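The dispatch from detected image factors to optimization modes can be sketched as follows; the night-hour preset and the mode labels are illustrative assumptions.

```python
def choose_optimization(capture_hour, has_occlusion,
                        night_hours=range(19, 24)):
    """Pick optimization modes from the detected image factors.
    'adjust_brightness' is the second image optimization mode (preset
    low-visibility period); 'remove_occlusion' is the first mode
    (a sight-blocking target was detected in the image content)."""
    modes = []
    if capture_hour in night_hours:
        modes.append("adjust_brightness")
    if has_occlusion:
        modes.append("remove_occlusion")
    return modes
```

Both modes may fire for a rainy night frame, matching the combined-mode example given for the driving-environment variant.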
Performing the image optimization processing with a mode matched to the image factors of the scene image in the user's line-of-sight direction realizes differentiated processing of the image, so that the target picture obtained by the image optimization processing is clearer and more reliable.
Without being limited to the two modes above, there may be other ways of optimizing the second scene image according to its specific situation; for example, the image optimization processing may also combine steps A31-A34.
In steps A1-A3, the scene images acquired by all the out-of-vehicle cameras are fused, and the scene image matching the user's line-of-sight direction is extracted from the fused scene image for image optimization processing, yielding the picture displayed in the vehicle. This ensures that the in-vehicle picture is consistent with the out-of-vehicle scene in the user's line-of-sight direction and is sufficiently clear.
In another possible embodiment, after the user's line-of-sight direction is determined, the first image capturing component corresponding to that direction may be determined first. A first image capturing component corresponds to the user's line-of-sight direction if its viewing-angle range overlaps the first viewing-angle range, which is the human-eye viewing-angle range centered on the user's line of sight; the first viewing-angle range may be shown as Q in fig. 3. A fused scene image is then determined from the first scene images acquired by the corresponding components; if there is more than one corresponding component, their first scene images are fused into one fused scene image, the fusion being performed as described in step A1. Finally, the second scene image corresponding to the user's line-of-sight direction is acquired from the fused scene image and subjected to image optimization processing to obtain the target picture.
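Selecting the components whose viewing-angle ranges overlap the first viewing-angle range Q reduces to an interval-overlap test; the camera ranges below follow the fig. 3 example, and the function name is an assumption.

```python
def cameras_for_gaze(cameras, gaze_deg, eye_fov_deg):
    """Return indices of the camera components whose viewing-angle range
    (start, end) overlaps the first viewing-angle range Q, i.e. the
    human-eye range centered on the gaze direction."""
    half = eye_fov_deg / 2.0
    lo, hi = gaze_deg - half, gaze_deg + half
    return [i for i, (start, end) in enumerate(cameras)
            if start < hi and end > lo]
```

For the fig. 3 layout and a 60° gaze, three of the four cameras overlap the 37.5° to 82.5° window, so their images would be fused before cropping.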
In other possible implementation scenarios, after the first scene images acquired by the first image capturing components are obtained, each first scene image may first be enhanced, and the target picture may then be determined from the enhanced first scene images.
Specifically, the target picture can be obtained by the following steps B1 to B3.
And B1, performing image optimization processing on the first scene image acquired by each first image capturing component to obtain the enhanced scene image corresponding to each component.
Here, the manner of performing the image optimization process on the first scene image may refer to the manner of performing the image optimization process on the second scene image, that is, refer to the description of step A3, which is not repeated herein.
B2, performing image fusion on the enhanced scene images corresponding to the first image capturing components to obtain an enhanced fused scene image.
Here, fusing the enhanced scene images corresponding to the first image capturing components into the enhanced fused scene image may follow the manner of fusing the first scene images in step A1, which is not repeated here.
And B3, acquiring from the enhanced fused scene image the portion corresponding to the user's line-of-sight direction, and taking it as the target picture.
Here, acquiring the portion of the enhanced fused scene image corresponding to the user's line-of-sight direction may follow the manner of acquiring the second scene image from the fused scene image in step A2, which is not repeated here.
Optionally, after each first scene image is enhanced to obtain the enhanced scene image corresponding to each first image capturing component, the first image capturing component corresponding to the user's line-of-sight direction may be determined first, and the target picture determined from its enhanced scene image. If only one component corresponds to the user's line-of-sight direction, its enhanced scene image is taken as the target picture; if several components correspond, their enhanced scene images are fused into an enhanced fused scene image, and the target picture corresponding to the user's line-of-sight direction is finally obtained from that fused image.
In still other possible implementation scenarios, after the first scene image acquired by each first image capturing component is obtained, the first image capturing component corresponding to the user's line-of-sight direction may be determined first, and the target picture then determined from the first scene image acquired by that component.
Not limited to the above description, all target pictures corresponding to the user's line-of-sight direction generated from the first scene image fall within the protection scope of the present application.
S103, displaying a target screen in the vehicle space.
Specifically, if the present application is applied to a vehicle device, the vehicle device may display the target picture by head-up display (HUD) projection, or display it on a window glass capable of displaying pictures. If the scheme is applied to another device connected to the vehicle device, that device may send the target picture to the vehicle device, which then displays it by HUD projection or on a picture-capable window glass; if the other device itself has a display function and is located in the vehicle space, it may also display the target picture directly.
In the technical scheme corresponding to fig. 1, the scene image outside the vehicle space is first acquired by the out-of-vehicle camera and the target user image inside the vehicle space by the in-vehicle camera; the user's line-of-sight direction is then determined from the target user image, the in-vehicle picture corresponding to that direction is generated from the scene image outside the vehicle space, and the target picture is finally displayed in the vehicle space. Since the target picture corresponds to the in-vehicle user's line-of-sight direction and is obtained from a scene image outside the vehicle space, its content reflects the out-of-vehicle scene in that direction. Because the scene image is captured by an external camera, the out-of-vehicle scene reflected by the target picture is more real and intuitive than the scene the user sees through the window; and because the target picture results from image optimization processing of the scene image in the user's line-of-sight direction, the out-of-vehicle scene it reflects is clearer and more reliable. Displaying the target picture in the vehicle therefore lets the user clearly perceive the out-of-vehicle scene, strengthens the out-of-vehicle view seen by the user, helps the driver judge road conditions accurately, and reduces the occurrence of safety accidents.
Referring to fig. 4, fig. 4 is a flowchart of another method for displaying a picture according to an embodiment of the present application, where the method may be applied to the aforementioned vehicle device or other devices, as shown in fig. 4, and the method includes the following steps:
S201, acquiring an initial scene image and an initial target user image.
Here, the initial scene image and the initial target user image may be the scene image outside the vehicle device and the user image inside the vehicle device acquired through the first image capturing component and the second image capturing component, respectively, when the vehicle device is started for the first time. They may also be the images acquired when the vehicle device has just been started, or the images acquired while the vehicle device is in a debug mode, the debug mode being a mode in which the user can manually control and adjust the angle of the displayed picture.
For a specific implementation manner of acquiring the initial scene image and the initial target user image, reference may be made to the description of step S101, which is not repeated here.
S202, determining an initial user sight direction according to the initial target user image, and generating an initial target picture corresponding to the initial user sight direction according to the initial scene image.
S203, displaying an initial target screen in the vehicle space.
The specific embodiments of step S202 to step S203 may refer to the descriptions of step S102 to step S103, and are not described herein.
S204, when receiving the picture adjustment instruction, adjusting the initial target picture in response to the picture adjustment instruction.
The picture adjustment instruction is an instruction for adjusting the picture and can be issued by the user. Adjusting the initial target picture means adjusting its picture view angle so that it matches the view angle seen by the user through the car window, making the content and angle of the initial target picture consistent with what the user sees through the window. The picture view angle of the initial target picture refers to its field of view; for example, if the initial target picture is obtained based on the second scene image a6 in fig. 3, its picture view angle is 37.5° to 82.5°.
Specifically, the user may adjust the target screen through a steering wheel or an on-vehicle display screen of the vehicle device. For example, when the screen adjustment instruction is acquired through the vehicle-mounted display screen, the initial target screen may be displayed on the vehicle-mounted display screen, a track of the user dragging the initial target screen on the vehicle-mounted display screen is acquired, and the screen view angle of the initial target screen is adjusted based on the track dragged by the user.
S205, when receiving the screen adjustment end instruction, the screen view angle of the initial target screen and the initial user line-of-sight direction are recorded.
The picture adjustment ending instruction refers to an instruction for ending picture adjustment, and the picture adjustment ending instruction can be sent by a user. For example, when the user stops dragging the initial target screen on the in-vehicle display screen and the stopping time exceeds a preset time threshold, it is determined that a screen adjustment end instruction is received.
S206, acquiring a third scene image and a first target user image.
Here, the third scene image and the first target user image may refer to a scene image outside the vehicle device and a user image inside the vehicle device acquired through the first image capturing section and the second image capturing section, respectively, during traveling of the vehicle device. The specific implementation manner of acquiring the third scene image and the first target user image may refer to the description of step S101, which is not repeated here.
S207, determining a first user sight line direction according to the first target user image, and generating a first target picture corresponding to the first user sight line according to the third scene image.
Here, for the specific embodiment of step S207, reference may be made to the description of step S102, which is not repeated here.
S208, adjusting the first target picture according to the initial user sight direction and the picture view angle of the initial target picture.
Specifically, a direction difference between the first user line-of-sight direction and the initial user line-of-sight direction may be determined, and the picture view angle of the first target picture may be adjusted such that the difference between the picture view angle of the first target picture and the picture view angle of the initial target picture is equal to the direction difference.
For example, if the initial user viewing direction is 60 °, the screen view angle of the initial target screen is 37.5 ° to 82.5 °, and the first user viewing direction is 70 °, the screen view angle of the first target screen may be adjusted to 47.5 ° to 92.5 °.
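Step S208 can be sketched directly from this example: the recorded picture view angle is shifted by the difference between the current and initial gaze directions, so the two differences stay equal.

```python
def adjust_view_angle(init_gaze, init_window, new_gaze):
    """Shift the recorded picture view angle (start, end) by the change in
    the user's line-of-sight direction, per step S208."""
    delta = new_gaze - init_gaze
    return (init_window[0] + delta, init_window[1] + delta)

# initial gaze 60 degrees with window (37.5, 82.5); new gaze 70 degrees
# -> window (47.5, 92.5), as in the example above
```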
S209, displaying the adjusted first target picture in the vehicle space.
Here, for the specific embodiment of step S209, reference may be made to the description of step S103, which is not repeated here.
In the scheme of fig. 4, after the target picture is displayed in the vehicle space, its picture view angle is adjusted according to the picture adjustment instruction; during subsequent driving of the vehicle device, the picture is adjusted according to the user's line-of-sight direction recorded during the adjustment process before being displayed, so that the displayed picture is fully consistent with the picture the user actually sees.
Optionally, in some possible cases, the road condition may also be indicated in combination with a light. The method further comprises the following steps: acquiring a driving scene of vehicle equipment; and indicating the lamplight in the vehicle space to be displayed according to the display mode corresponding to the driving scene where the vehicle equipment is located.
Specifically, different display modes can be set for the lamplight in the vehicle space according to different driving scenes in advance, and then the corresponding display modes are selected according to the driving scene where the vehicle equipment is located. The driving scene where the vehicle device is located may include driving conditions and road condition information of vehicle devices around the vehicle device.
For example, when a vehicle in front of the vehicle device brakes, the lights in the vehicle space may be instructed to flash red.
Instructing the in-vehicle lights to display according to the driving scene of the vehicle enhances the visual effect and serves as a reminder to the user.
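The correspondence between driving scenes and light display modes described above can be sketched as a lookup table; the scene names and the color/pattern pairs beyond the braking example are illustrative assumptions.

```python
# Preset mapping from driving scene to in-cabin light display mode.
LIGHT_MODES = {
    "front_vehicle_braking": ("red", "flash"),     # from the example above
    "vehicle_merging_ahead": ("yellow", "flash"),  # assumed additional scene
    "normal_driving": ("white", "steady"),
}

def light_mode(scene):
    """Return the (color, pattern) display mode for the current driving
    scene, falling back to a neutral steady light for unknown scenes."""
    return LIGHT_MODES.get(scene, ("white", "steady"))
```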
Referring to fig. 5, fig. 5 is a schematic structural diagram of a screen display device according to an embodiment of the present application, where the screen display device may be the aforementioned vehicle apparatus or other apparatuses. As shown in fig. 5, the screen display device 30 includes:
an image acquisition module 301, configured to acquire a first scene image and a target user image, where the first scene image is a scene image outside a vehicle device acquired by a first image capturing component, the first image capturing component is disposed on the vehicle device and located outside a vehicle space of the vehicle device, and the target user image is a user image inside the vehicle device acquired by a second image capturing component, and the second image capturing component is disposed on the vehicle device and located inside the vehicle space;
The picture generation module 302 is configured to determine a user line of sight direction according to the target user image, and generate a target picture corresponding to the user line of sight direction according to the first scene image, where the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user line of sight direction, and the second scene image is obtained based on the first scene image;
and a display module 303, configured to display the target screen in the vehicle space.
In one possible design, the number of the first image capturing components is plural, and different first image capturing components have different capturing angles; the picture generation module 302 is specifically configured to: perform image fusion on the first scene images acquired by each first image capturing component to obtain a fused scene image outside the vehicle device; acquire the second scene image corresponding to the user's line-of-sight direction from the fused scene image; and perform image optimization processing on the second scene image to obtain the target picture.
In one possible design, the screen generating module 302 is specifically configured to: acquiring running environment information of the vehicle equipment, wherein the running environment information at least comprises running time and running weather; and carrying out image optimization processing on the second scene image by adopting an image optimization processing mode corresponding to the driving environment information to obtain the target picture.
In one possible design, the screen generating module 302 is specifically configured to: under the condition that a sight shielding target exists in the running environment of the vehicle equipment according to the running weather, adopting a first image optimization processing mode to perform image optimization processing on the second scene image to obtain the target picture; the first image optimization processing mode refers to a mode of removing a sight-blocking target from an image.
In one possible design, the screen generating module 302 is specifically configured to: under the condition that the vehicle equipment runs in a preset time period according to the running time, adopting a second image optimization processing mode to perform image optimization processing on the second scene image to obtain the target picture; the second image optimization processing mode refers to a mode of adjusting the brightness of an image.
In one possible design, the screen generating module 302 is specifically configured to: detecting image factors of the second scene image, wherein the image factors at least comprise image acquisition time, image content and image quality; and carrying out image optimization processing on the second scene image by adopting an image optimization processing mode corresponding to the image factors to obtain the target picture.
In one possible design, the above-mentioned screen display device 30 further includes an adjustment module 304, configured to, in response to the screen adjustment instruction, perform adjustment display on the target screen until a screen adjustment end instruction is acquired, when the screen adjustment instruction is acquired.
In one possible design, the above-mentioned picture display device 30 further includes a car light control module 305, configured to obtain a driving scene where the vehicle device is located; and indicating the car lamps in the vehicle space to display according to the display modes corresponding to the driving scene.
It should be noted that, in the embodiment corresponding to fig. 5, the details not mentioned in the foregoing description of the method embodiment may be referred to, and will not be repeated here.
The device first acquires the scene image outside the vehicle space captured by the out-of-vehicle camera and the target user image inside the vehicle space captured by the in-vehicle camera, then determines the user's line-of-sight direction from the target user image, generates the in-vehicle picture corresponding to that direction from the scene image outside the vehicle space, and finally displays the target picture in the vehicle space. Since the target picture corresponds to the in-vehicle user's line-of-sight direction and is obtained from a scene image outside the vehicle space, its content reflects the out-of-vehicle scene in that direction; because the scene image is captured by an external camera and the target picture results from image optimization processing of the scene image in the user's line-of-sight direction, the out-of-vehicle scene it reflects is more real, more intuitive, clearer, and more reliable than the scene seen through the window. Displaying the target picture in the vehicle lets the user clearly perceive the out-of-vehicle scene, strengthens the out-of-vehicle view seen by the user, helps the driver judge road conditions accurately, and reduces the occurrence of safety accidents.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application, where the computer device may be the aforementioned vehicle device or other devices. The computer device 40 comprises a processor 401, a memory 402. The memory 402 is connected to the processor 401, for example by a bus, to the processor 401.
The processor 401 is configured to support the computer device 40 to perform the corresponding functions in the method embodiments described above. The processor 401 may be a central processing unit (central processing unit, CPU), a network processor (network processor, NP), a hardware chip or any combination thereof. The hardware chip may be an application specific integrated circuit (application specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
The memory 402 is used for storing program code and the like. The memory 402 may include volatile memory (VM), such as random access memory (RAM); the memory 402 may also include non-volatile memory (NVM), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 402 may also include a combination of the above types of memory.
The processor 401 may call the program code to perform the following operations:
acquiring a first scene image and a target user image, wherein the first scene image is a scene image outside vehicle equipment acquired through a first image pickup component, the first image pickup component is arranged on the vehicle equipment and is positioned outside a vehicle space of the vehicle equipment, the target user image is a user image inside the vehicle equipment acquired through a second image pickup component, and the second image pickup component is arranged on the vehicle equipment and is positioned inside the vehicle space;
determining a user sight direction according to the target user image, and generating a target picture corresponding to the user sight direction according to the first scene image, wherein the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user sight direction, and the second scene image is obtained based on the first scene image;
and displaying the target picture in the vehicle space.
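One common way to estimate a sight line direction from a user image is to relate the pupil's position to the eye corners. The following sketch maps a horizontal pupil offset to a yaw angle; the function, its parameters, and the 45-degree range are illustrative assumptions, not details from the patent, and the eye corner x-coordinates are assumed to be distinct.

```python
def gaze_yaw_degrees(pupil_x, eye_left_x, eye_right_x, max_yaw=45.0):
    """Map the pupil's horizontal pixel position between the eye corners
    to a yaw angle: 0 is straight ahead, negative left, positive right."""
    center = (eye_left_x + eye_right_x) / 2.0
    half_width = (eye_right_x - eye_left_x) / 2.0
    offset = (pupil_x - center) / half_width   # normalized to [-1, 1]
    offset = max(-1.0, min(1.0, offset))
    return offset * max_yaw
```

A pupil centered between the corners yields 0 degrees; a pupil at the right corner yields the maximum rightward yaw.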
The present application also provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of the previous embodiments.
Those skilled in the art will appreciate that all or part of the procedures in the methods of the above embodiments may be implemented by a computer program stored in a computer-readable storage medium; when the program is executed, the procedures of the embodiments of the methods described above may be included. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the claims; equivalent variations made according to the claims of the present application shall still fall within the scope of the present application.

Claims (11)

1. A picture display method, comprising:
acquiring a first scene image and a target user image, wherein the first scene image is a scene image outside vehicle equipment acquired through a first image pickup component, the first image pickup component is arranged on the vehicle equipment and is positioned outside a vehicle space of the vehicle equipment, the target user image is a user image inside the vehicle equipment acquired through a second image pickup component, and the second image pickup component is arranged on the vehicle equipment and is positioned inside the vehicle space;
determining a user sight direction according to the target user image, and generating a target picture corresponding to the user sight direction according to the first scene image, wherein the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user sight direction, and the second scene image is obtained based on the first scene image;
and displaying the target picture in the vehicle space.
2. The method according to claim 1, wherein the number of the first image pickup components is plural, and the photographing angles of different first image pickup components are different;
the generating, according to the first scene image, a target picture corresponding to the user sight line direction includes:
performing image fusion on the first scene images acquired by each first image pickup component to obtain a fused scene image outside the vehicle equipment;
acquiring a second scene image corresponding to the user sight direction from the fused scene image;
and performing image optimization processing on the second scene image to obtain the target picture.
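The fuse-then-crop flow of claim 2 could be sketched as follows. The angle-keyed strip representation, the degrees-per-column parameter, and both function names are illustrative assumptions; real systems would use panoramic stitching with overlap blending rather than plain concatenation.

```python
def fuse_scene_images(images_by_angle):
    """Order per-camera image strips by mounting angle and concatenate
    them (a toy stand-in for panoramic stitching)."""
    fused = []
    for angle in sorted(images_by_angle):
        fused.extend(images_by_angle[angle])
    return fused

def crop_for_direction(fused, direction_deg, fov_deg, deg_per_column=1.0):
    """Return the slice of the fused panorama covering the user's
    sight line direction with the given field of view."""
    start = int((direction_deg - fov_deg / 2.0) / deg_per_column)
    end = int((direction_deg + fov_deg / 2.0) / deg_per_column)
    return fused[max(start, 0):end]
```

For instance, fusing three strips covering 0 to 9 degrees and cropping a 4-degree field of view centered at 5 degrees returns the columns for degrees 3 through 6.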
3. The method according to claim 2, wherein the performing image optimization processing on the second scene image to obtain the target picture includes:
acquiring running environment information of the vehicle equipment, wherein the running environment information at least comprises running time and running weather;
and carrying out image optimization processing on the second scene image by adopting an image optimization processing mode corresponding to the driving environment information to obtain the target picture.
4. The method according to claim 3, wherein the performing image optimization processing on the second scene image by using an image optimization processing manner corresponding to the driving environment information to obtain the target picture includes:
under the condition that a sight shielding target exists in the running environment of the vehicle equipment according to the running weather, adopting a first image optimization processing mode to perform image optimization processing on the second scene image to obtain the target picture; the first image optimization processing mode refers to a mode of removing a sight-blocking target from an image.
5. The method according to claim 3, wherein the performing image optimization processing on the second scene image by using an image optimization processing manner corresponding to the driving environment information to obtain the target picture includes:
under the condition that the vehicle equipment runs in a preset time period according to the running time, adopting a second image optimization processing mode to perform image optimization processing on the second scene image to obtain the target picture; the second image optimization processing mode refers to a mode of adjusting the brightness of an image.
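The dispatch described in claims 3 through 5, selecting an optimization mode from the running environment, can be sketched as below. The weather set, mode names, and night-period boundaries are illustrative assumptions rather than values from the patent.

```python
SIGHT_BLOCKING_WEATHER = {"fog", "haze", "rain", "snow"}  # illustrative set

def choose_optimization_modes(driving_weather, driving_hour,
                              night_start=19, night_end=6):
    """Select modes from the driving environment: weather that produces
    sight-blocking targets triggers occlusion removal (the first mode);
    driving within the preset night period triggers brightness
    adjustment (the second mode)."""
    modes = []
    if driving_weather in SIGHT_BLOCKING_WEATHER:
        modes.append("remove_sight_blocking_target")
    if driving_hour >= night_start or driving_hour < night_end:
        modes.append("adjust_brightness")
    return modes or ["no_op"]
```

Driving in fog at night would select both modes; clear daytime driving would select neither.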
6. The method according to claim 2, wherein the performing image optimization processing on the second scene image to obtain the target picture includes:
detecting image factors of the second scene image, wherein the image factors at least comprise image acquisition time, image content and image quality;
and carrying out image optimization processing on the second scene image by adopting an image optimization processing mode corresponding to the image factors to obtain the target picture.
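One simple way to detect the "image quality" factor mentioned in claim 6 is the variance-of-Laplacian blur measure: low variance suggests a blurry image that may need sharpening. This sketch is one possible illustration, not the patent's method; it assumes a grayscale image of at least 3x3 pixels given as nested lists.

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over the interior pixels;
    low values indicate little edge content, i.e. a likely blurry image."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A perfectly flat image scores zero, while an image containing a sharp edge scores higher, so a threshold on this score can route the second scene image to a sharpening step.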
7. The method of any one of claims 1-6, wherein after displaying the target picture in the vehicle space, the method further comprises:
in the case that a picture adjustment instruction is acquired, responding to the picture adjustment instruction to adjust the displayed target picture until a picture adjustment ending instruction is acquired.
8. The method according to any one of claims 1-6, further comprising:
acquiring a driving scene of the vehicle equipment;
and indicating the car lamps in the vehicle space to display according to the display modes corresponding to the driving scene.
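The scene-to-lamp-mode mapping of claim 8 could be as simple as a lookup table. The scene names and mode values below are hypothetical illustrations, not details from the patent.

```python
# Hypothetical mapping from driving scene to an in-cabin lamp display mode.
LAMP_MODES = {
    "night": {"brightness": 0.3, "color": "warm"},
    "tunnel": {"brightness": 0.6, "color": "neutral"},
    "daytime": {"brightness": 0.0, "color": "off"},
}

def lamp_mode_for_scene(driving_scene):
    """Return the display mode the in-vehicle lamps should use for the
    current driving scene, defaulting to the daytime mode."""
    return LAMP_MODES.get(driving_scene, LAMP_MODES["daytime"])
```

An unknown scene falls back to the daytime (off) mode, which keeps the behavior safe by default.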
9. A picture display device, comprising:
an image acquisition module, configured to acquire a first scene image and a target user image, wherein the first scene image is a scene image outside the vehicle equipment acquired through a first image pickup component, the first image pickup component is arranged on the vehicle equipment and located outside a vehicle space of the vehicle equipment, the target user image is a user image inside the vehicle equipment acquired through a second image pickup component, and the second image pickup component is arranged on the vehicle equipment and located inside the vehicle space;
an image generation module, configured to determine a user sight direction according to the target user image and generate a target picture corresponding to the user sight direction according to the first scene image, wherein the target picture is obtained by performing image optimization processing on a second scene image corresponding to the user sight direction, and the second scene image is obtained based on the first scene image;
and a display module, configured to display the target picture in the vehicle space.
10. A computer device, comprising a memory and a processor connected to the memory, the processor being configured to execute one or more computer programs stored in the memory, wherein the processor, when executing the one or more computer programs, causes the computer device to implement the method of any one of claims 1-8.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-8.
CN202211247797.6A 2022-10-12 2022-10-12 Picture display method, device, equipment and storage medium Pending CN117864023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211247797.6A CN117864023A (en) 2022-10-12 2022-10-12 Picture display method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117864023A 2024-04-12

Family

ID=90583454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211247797.6A Pending CN117864023A (en) 2022-10-12 2022-10-12 Picture display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117864023A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination