CN114040091A - Image processing method, imaging system, and vehicle - Google Patents


Info

Publication number
CN114040091A
CN114040091A (application CN202111139830.9A)
Authority
CN
China
Prior art keywords
image
vehicle
camera
processing method
optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111139830.9A
Other languages
Chinese (zh)
Inventor
陈万慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yankan Intelligent Technology Co ltd
Original Assignee
Beijing Yankan Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yankan Intelligent Technology Co ltd filed Critical Beijing Yankan Intelligent Technology Co ltd
Priority to CN202111139830.9A
Publication of CN114040091A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of imaging methods, and in particular to an image processing method, an imaging system, and a vehicle. The image processing method comprises the following steps: controlling a first camera device to photograph the occupants of a vehicle to obtain an in-vehicle image, where the first camera device is located inside the vehicle and the in-vehicle image includes an in-vehicle person image; and controlling a second camera device to photograph the exterior of the vehicle to obtain a scenery image, where the second camera device is mounted outside the vehicle and is used for shooting scenery. The invention also discloses a camera system and a vehicle. With the image processing method, camera system, and vehicle, two camera devices capture images of the vehicle interior and the exterior scenery, so that a user can obtain a group photo of the vehicle occupants and the scenery without stepping out of the vehicle, making it more convenient to take self-portraits while driving.

Description

Image processing method, imaging system, and vehicle
Technical Field
The present invention relates to the field of imaging methods, and in particular, to an image processing method, an imaging system, and a vehicle.
Background
With rising living standards, people travel more and more often; in particular, as private car ownership grows year by year, cars are used more frequently and self-driving trips make up an increasing share of travel. Travelers often want to take group photos with the scenery along the way, so as to record or share the trip. However, the interior of a car is a closed environment: to take a group photo, the car must be stopped and the occupants must get out, yet some places are unsuitable for parking or alighting, so travelers cannot easily photograph themselves together with the roadside scenery. A technique that directly composites a photo of the people inside the vehicle with a photo of the scenery outside the vehicle at shooting time is therefore an urgent problem to be solved.
Disclosure of Invention
The invention provides an image processing method, a camera system, and a vehicle that can directly composite a photo of the people inside the vehicle with a photo of the scenery outside the vehicle at shooting time.
In a first aspect, an embodiment of the present invention provides an image processing method, where the image processing method includes:
controlling a first camera device to photograph the occupants of the vehicle to obtain an in-vehicle image, wherein the first camera device is located inside the vehicle and the in-vehicle image includes an in-vehicle person image;
controlling a second camera device to photograph the exterior of the vehicle to obtain a scenery image, wherein the second camera device is mounted outside the vehicle and is used for shooting scenery; and
extracting the in-vehicle person image from the in-vehicle image, compositing the in-vehicle person image with the scenery image, and generating and displaying a composite image.
In a second aspect, an embodiment of the present invention provides an image capturing system, where the image capturing system includes a first image capturing device, a second image capturing device, and a main control apparatus electrically connected to the first image capturing device and the second image capturing device, respectively, and the main control apparatus includes:
a memory for storing a computer program; and
a processor for executing the computer program to implement the above-described image processing method.
In a third aspect, an embodiment of the present invention provides a vehicle, which includes a body, and the above-described imaging system provided to the body.
With the above image processing method, camera system, and vehicle, two camera devices capture an in-vehicle image and an exterior scenery image, and the in-vehicle person image from the in-vehicle image is composited with the exterior scenery image to form a composite image. The vehicle occupants and the exterior scenery are thus combined in a single picture, so the user can obtain a group photo of the occupants and the scenery without stepping out of the vehicle. This makes it more convenient for users to take self-portraits while driving, and also helps them record or share the scenery and interesting moments along the way.
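The end-to-end flow described above can be sketched as follows. All names here (`capture`, `extract_person`, `composite`, `Frame`) are hypothetical placeholders, since the patent specifies behavior rather than an API:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    label: str
    pixels: list  # placeholder for pixel data

def capture(camera: str) -> Frame:
    # Stand-in for driving a real camera device and grabbing one frame.
    return Frame(label=camera, pixels=[])

def extract_person(in_vehicle: Frame) -> Frame:
    # Stand-in for segmenting the occupant outline from the in-vehicle frame.
    return Frame(label="person", pixels=in_vehicle.pixels)

def composite(person: Frame, scenery: Frame) -> Frame:
    # Stand-in for overlaying the person outline onto the scenery frame.
    return Frame(label="composite", pixels=scenery.pixels + person.pixels)

def process() -> Frame:
    in_vehicle = capture("first_camera_device")   # step 1: shoot inside the vehicle
    scenery = capture("second_camera_device")     # step 2: shoot the exterior scenery
    person = extract_person(in_vehicle)           # step 3a: extract the occupant image
    return composite(person, scenery)             # step 3b: synthesize for display
```

The real system would replace each placeholder with camera drivers, a segmentation model, and an image blender, but the control flow is the same three-step pipeline.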
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of the invention and that other drawings may be derived from the structure shown in the drawings by those skilled in the art without the exercise of inventive faculty.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
Fig. 2 is a scene diagram of an image processing method according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of a vehicle according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a second camera according to an embodiment of the present invention.
Fig. 5 is a schematic view of a user interface of a touch screen according to an embodiment of the present invention.
Fig. 6 is a schematic view of a first sub-flow of an image processing method according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a second camera according to another embodiment of the invention.
Fig. 8 is a schematic diagram of a second sub-flow of an image processing method according to another embodiment of the present invention.
Fig. 9 is a flowchart illustrating an image processing method according to another embodiment of the present invention.
Fig. 10 is a schematic structural diagram of an image capturing system according to an embodiment of the present invention.
DESCRIPTION OF SYMBOLS IN THE DRAWINGS
100 Vehicle
101 Vehicle A-pillar
102 Vehicle B-pillar
103 Vehicle C-pillar
10 Main control device
11 Touch screen
20 First camera device
30 Second camera device
A Shooting area
99 Camera system
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the descriptions referring to "first", "second", etc. in this application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, but only where such a combination can be realized by a person of ordinary skill in the art; where technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the protection scope of the present application.
Referring to figs. 1-4, a first embodiment of the present application provides an image processing method. The image processing method runs on the camera system 99. The camera system 99 includes a main control device 10 provided in a vehicle 100, first camera devices 20 mounted inside the vehicle 100, and second camera devices 30 mounted outside the vehicle 100. There are a plurality of first camera devices 20 arranged inside the vehicle 100. In a possible embodiment, the viewing angles of the plurality of first camera devices 20 together cover multiple viewing angles inside the vehicle, and the devices may be arranged on the vehicle A-pillar 101, B-pillar 102, and C-pillar 103, respectively. In a possible embodiment, when the vehicle is equipped with a dashboard camera or a headrest display screen, the first camera devices 20 may also be provided on the dashboard camera or the headrest display screen. There are likewise a plurality of second camera devices 30, which can be arranged at different positions on the vehicle body with their viewing angles matched so as to cover a 360-degree view around the vehicle; the shooting area A of each second camera device 30 is shown in fig. 4, and the positions of the second camera devices 30 correspond one-to-one with the positions of the first camera devices 20. In some possible embodiments, there may be only one first camera device 20 and only one second camera device 30. The image processing method specifically includes the following steps.
Step S102: control the first camera device 20 to photograph the occupants of the vehicle to obtain an in-vehicle image, where the in-vehicle image includes an in-vehicle person image. Specifically, the main control device 10 may receive a first shooting instruction input by a user and, in response, control the first camera device 20 to photograph the occupants to obtain the in-vehicle image. In some possible embodiments, the main control device 10 may further receive a first adjustment instruction input by the user and adjust the shooting parameters of the first camera device 20 in response. The first adjustment instruction and the first shooting instruction may be input through a physical or virtual key on the main control device 10, or by voice. In this embodiment, the main control device 10 is provided with a touch screen 11, and the first shooting instruction and the first adjustment instruction are input through the touch screen 11. The first camera device 20 or the main control device 10 stores a face tracking algorithm, giving the first camera device 20 a face tracking function that helps it obtain a relatively clear image of the occupants. The shooting parameters may be extrinsic or intrinsic parameters of the first camera device 20; for example, the first adjustment instruction may adjust the pitch angle and yaw angle of the first camera device 20. Accordingly, the main control device 10 displays a preview area 111 on the touch screen 11, and the preview area 111 displays a plurality of adjustment icons 1111, namely the up, down, left, and right adjustment icons (1112, 1113, 1114, 1115).
The up adjustment icon 1112 is operated by the user to generate a corresponding first angle adjustment instruction, which controls the first camera device 20 to tilt upward. The down, left, and right adjustment icons (1113, 1114, 1115) are operated to generate corresponding second, third, and fourth angle adjustment instructions, which control the first camera device 20 to rotate downward, leftward, and rightward.
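The icon-to-command mapping described above might be modeled as follows; the command names and the 5-degree step size are illustrative assumptions, not taken from the patent:

```python
# Hypothetical mapping from the four on-screen adjustment icons to the
# pan/tilt axis each angle adjustment instruction moves, with an assumed
# fixed step size in degrees.
ANGLE_COMMANDS = {
    "up":    ("pitch", +5.0),   # first angle adjustment: tilt upward
    "down":  ("pitch", -5.0),   # second: rotate downward
    "left":  ("yaw",   -5.0),   # third: rotate leftward
    "right": ("yaw",   +5.0),   # fourth: rotate rightward
}

def apply_icon(pose: dict, icon: str) -> dict:
    """Return the camera pose after the user taps one adjustment icon."""
    axis, step = ANGLE_COMMANDS[icon]
    adjusted = dict(pose)       # leave the original pose untouched
    adjusted[axis] += step
    return adjusted
```

A real controller would clamp the angles to the gimbal's mechanical limits; that detail is omitted here.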
In some possible embodiments, the first adjustment instruction further includes an optimization instruction and is further used to set the shooting parameters of the first camera device 20 and the second camera device 30, so that both devices shoot according to preset shooting parameters. The shooting parameters include the optimization parameters corresponding to the optimization instructions. Before step S102, the image processing method further includes the following steps.
Step S201: receive a first optimization instruction input by a user, and display optimization icons on the user interface for the user to select; receive a second optimization instruction generated when the user selects an optimization icon, and configure the optimization parameters corresponding to the second optimization instruction on the first camera device 20 and/or the second camera device 30. Specifically, the first and second optimization instructions are input through virtual keys on the user interface of the touch screen 11. The main control device 10 receives the first optimization instruction input by the user and displays optimization icons through the user interface of the touch screen 11 for the user to select. The main control device 10 then receives the second optimization instruction generated when the user selects an optimization icon, and configures the corresponding optimization parameters on the first camera device 20 and/or the second camera device 30. In this embodiment, the optimization icons displayed on the user interface of the touch screen 11 include a first optimization icon and a second optimization icon. After the user touches the first optimization icon, the main control device 10 outputs a user interface containing a first optimization list of processes applicable to the first camera device 20, and the user selects first optimization items for the first camera device 20 from this list; the first optimization items include, but are not limited to, brightness, contrast, beautification, filters, white balance, exposure, and focus.
Likewise, after the user touches the second optimization icon, the main control device 10 outputs a user interface containing a second optimization list of processes applicable to the second camera device 30, and the user selects second optimization items for the second camera device from this list; the second optimization items include, but are not limited to, defogging, brightness, contrast, filters, white balance, exposure, and focus.
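A minimal sketch of how the main control device might record the selected optimization items per camera; the item names come from the lists above, while the whitelist mechanism and function name are assumptions:

```python
# Items named in the text, treated as per-camera whitelists (an assumption).
FIRST_CAMERA_ITEMS = {"brightness", "contrast", "beautification", "filter",
                      "white balance", "exposure", "focus"}
SECOND_CAMERA_ITEMS = {"defogging", "brightness", "contrast", "filter",
                       "white balance", "exposure", "focus"}

def configure_optimization(params: dict, selections: dict, allowed: set) -> dict:
    """Copy only the optimization items a camera supports into its parameters."""
    updated = dict(params)
    for item, value in selections.items():
        if item in allowed:        # silently skip items the camera lacks
            updated[item] = value
    return updated
```

Keeping the two lists separate mirrors the two optimization icons: each icon opens the list matching its camera's capabilities (for instance, defogging applies only to the exterior camera).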
In step S104, the second camera device 30 is controlled to photograph the exterior of the vehicle 100 to acquire a scenery image. Specifically, the main control device 10 receives a second shooting instruction and controls the second camera device 30 to photograph the exterior of the vehicle 100 to acquire the scenery image. In some possible embodiments, the main control device 10 further receives a second adjustment instruction input by the user and adjusts the shooting parameters of the second camera device 30 in response. The second adjustment instruction and the second shooting instruction may be input through a physical or virtual key on the main control device 10, or by voice; in the present embodiment, both are input through the touch screen 11. The shooting parameters may be extrinsic or intrinsic parameters of the second camera device 30; for example, the second adjustment instruction may adjust the pitch angle and yaw angle of the second camera device 30. Accordingly, the main control device 10 displays a scenery preview area 112 on the touch screen 11 (see fig. 5), and the scenery preview area 112 displays a plurality of adjustment icons 1121, namely the up, down, left, and right adjustment icons (1122, 1123, 1124, 1125). The up adjustment icon 1122 is operated by the user to generate a corresponding fifth angle adjustment instruction, which controls the second camera device 30 to tilt upward.
The down, left, and right adjustment icons (1123, 1124, 1125) are operated to generate corresponding sixth, seventh, and eighth angle adjustment instructions, which control the second camera device 30 to rotate downward, leftward, and rightward.
Please refer to fig. 6, a first sub-flowchart of the image processing method according to an embodiment of the present invention. Controlling the second camera device 30 to photograph the exterior of the vehicle 100 to acquire a scenery image specifically includes the following steps.
In step S1042, a first shooting parameter of the first camera device 20 at the time of shooting is acquired. Specifically, the main control device 10 acquires the first shooting parameter of the first camera device 20 at the time of shooting. The first shooting parameter includes a first direction parameter, namely the viewing-angle information in the horizontal and vertical directions when the first camera device 20 shoots. The viewing-angle information may be determined by establishing a coordinate system, which may take the center of the vehicle 100 as its origin. Specifically, the main control device 10 may acquire the initial viewing-angle parameter of the first camera device 20, acquire and store adjustment information in real time while the first camera device 20 moves, and, when a shooting action of the first camera device 20 is detected, calculate the first shooting parameter at the time of shooting from the initial viewing-angle parameter and the adjustment information recorded before the shooting action.
In step S1044, a second shooting parameter of the second camera device 30 is determined according to the first shooting parameter. Specifically, the main control device 10 takes the horizontal and vertical viewing-angle information of the first camera device 20 as the first direction parameter, and from it determines the second direction parameter, that is, the horizontal and vertical viewing-angle information, of the second camera device 30, thereby determining the second shooting parameter. For example, if the main control device 10 finds that the viewing angle of the first camera device 20 points directly behind the vehicle 100, it sets "pointing directly behind the vehicle 100" as the second shooting parameter of the second camera device.
In step S1046, the shooting angle of view of the second camera device 30 is adjusted according to the second shooting parameter. Specifically, the main control device 10 adjusts the shooting angle of view of the second camera device 30 according to the second shooting parameter so that the first camera device 20 and the second camera device 30 shoot at the same angle. For example, the main control device 10 acquires the initial viewing-angle information of the second camera device 30, calculates movement information for the second camera device 30 from the initial viewing-angle information and the second shooting parameter, and then either controls the second camera device 30 to move or rotate based on the movement information or switches to another second camera device 30, so that the shooting angle of view points directly behind the vehicle 100. The person is thus shot at the same angle as the scenery, and the composite is closer to a real self-portrait.
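The direction-matching logic of steps S1042 to S1046 can be sketched as follows, assuming angles are tracked as (horizontal, vertical) degrees in a vehicle-centered coordinate system; the function names and the nearest-camera selection rule are illustrative:

```python
def first_camera_direction(initial, adjustments):
    """Accumulate the stored adjustment steps onto the initial viewing angles.

    `initial` is (horizontal, vertical) in degrees; `adjustments` is the list
    of (d_h, d_v) steps recorded while the first camera device moved.
    """
    h, v = initial
    for dh, dv in adjustments:
        h = (h + dh) % 360.0   # horizontal angle wraps around the vehicle
        v += dv                # vertical tilt accumulates directly
    return h, v

def pick_second_camera(target_h, cameras):
    """Select the exterior camera whose mounting azimuth is circularly
    closest to the target horizontal direction (the 'switch' branch)."""
    def circular_gap(cam):
        gap = abs(cam["azimuth"] - target_h) % 360.0
        return min(gap, 360.0 - gap)
    return min(cameras, key=circular_gap)
```

For the single movable-camera variant described later, `pick_second_camera` would be replaced by a rotation command toward `target_h`.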
Step S106: extract the in-vehicle person image from the in-vehicle image, composite it with the scenery image, and generate and display a composite image. In this embodiment, the main control device 10 identifies all occupants in the in-vehicle image, obtains the outline of the in-vehicle person image, copies the outline onto a blank layer to form a layer to be synthesized, composites this layer with the scenery image so that the person outline covers the scenery image, generates the composite image, and displays it through the touch screen 11.
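The layer-compositing step can be sketched with plain pixel grids; representing the extracted outline as a boolean mask is an assumption about the implementation:

```python
def overlay_person(scenery, person, mask):
    """Cover the scenery with the extracted person outline.

    `scenery` and `person` are equal-sized 2D grids of pixel values, and
    `mask` marks the positions inside the person outline (the "layer to be
    synthesized"); masked pixels replace the scenery beneath them.
    """
    out = [row[:] for row in scenery]        # start from the scenery layer
    for y, mask_row in enumerate(mask):
        for x, inside in enumerate(mask_row):
            if inside:                       # pixel lies within the outline
                out[y][x] = person[y][x]
    return out
```

A production system would do this with an image library's masked copy rather than Python loops, and would likely feather the mask edges so the overlay blends naturally.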
In some possible embodiments, after the in-vehicle character image is extracted from the in-vehicle image and the in-vehicle character image is synthesized with the scene image, and a synthesized image is generated and displayed, the image processing method further includes a step of performing optimization processing on the synthesized image. The optimization processing of the composite image specifically includes the following steps.
Step S301: the main control device 10 receives a third optimization instruction input by a user, displays optimization icons on the display interface of the touch screen 11 for the user to select, receives a fourth optimization instruction generated when the user selects an optimization icon, and, in response, applies the optimization parameters corresponding to the fourth optimization instruction to the person image and/or the scenery image in the composite image. Applying these optimization parameters means performing the corresponding optimization processing on the person image and/or the scenery image. The third and fourth optimization instructions may be input through a physical or virtual key on the touch screen 11, or by voice; in the present embodiment, both are input through the touch screen 11. The composite image displayed on the touch screen 11 contains a person image area and a scenery image area. To optimize the person image, the user taps the person image area; the main control device 10 responds by outputting a third optimization list containing items that optimize only the person, the user selects specific items from it, and the main control device 10 optimizes the person image accordingly.
The third optimization items include processing modes such as beautification and filters for the person image. To optimize the scenery image, the user taps the scenery image area of the composite image; the main control device 10 responds by outputting a fourth optimization list containing items that optimize only the scenery, the user selects specific items, and the main control device 10 optimizes the scenery image accordingly. The fourth optimization items include processing modes such as defogging and filters. To adjust the composite image as a whole, the user taps the person image area and the scenery image area at the same time; the main control device 10 responds by outputting a fifth optimization list containing items that optimize the person and the scenery simultaneously, the user selects specific items, and the main control device 10 optimizes both images accordingly. The fifth optimization items include processing such as color balance and filters. Optimizing the overall picture of the composite image satisfies users' demand for shooting in multiple styles and helps achieve the desired effect.
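The dispatch from tapped regions to optimization lists could be modeled as follows; the list contents reflect the examples in the text, and the function name is illustrative:

```python
# Lists shown depending on which region(s) of the composite the user taps.
THIRD_LIST = ["beautification", "filter"]    # person-only items
FOURTH_LIST = ["defogging", "filter"]        # scenery-only items
FIFTH_LIST = ["color balance", "filter"]     # whole-image items

def list_for_selection(person_tapped: bool, scenery_tapped: bool):
    """Return the optimization list matching the tapped region(s)."""
    if person_tapped and scenery_tapped:
        return FIFTH_LIST   # both areas tapped: optimize the whole composite
    if person_tapped:
        return THIRD_LIST
    if scenery_tapped:
        return FOURTH_LIST
    return []               # nothing tapped: no list to show
```

Checking the combined case first matters: if the single-area branches came first, a simultaneous tap on both areas would never reach the fifth list.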
In some possible embodiments, before the in-vehicle person image is extracted from the in-vehicle image and composited with the scenery image to generate and display the composite image, the image processing method further includes an optimization setting step. This step is similar to the optimization step of the previous embodiment and differs only in timing: here the first camera device 20 and the second camera device 30 are configured in advance, before they shoot, so that they shoot according to the preset optimization parameters. The optimization settings may be entered via the touch screen 11.
Please refer to figs. 7 and 8, which show a second sub-flow of the image processing method according to an embodiment of the present invention. In this embodiment, there is a single second camera device 30. It is mounted on the roof of the vehicle, can rotate 360 degrees in the horizontal direction, and can also move in the vertical direction, so that one second camera device 30 can match the shooting angles of the plurality of first camera devices 20 at their different positions. Controlling the second camera device 30 to photograph the exterior of the vehicle 100 to acquire a scenery image specifically includes the following steps.
In step S2042, a third shooting parameter of the first camera device 20 at the time of shooting is acquired. Specifically, the third shooting parameter includes a third direction parameter of the first camera device 20, which indicates the angles of the first camera device 20 relative to the horizontal and vertical directions.
Step S2044: determine a fourth shooting parameter of the second camera device 30 according to the third shooting parameter. Specifically, the main control apparatus 10 takes the view-angle information of the first camera device 20 in the horizontal and vertical directions as the third direction parameter, and from it determines the fourth direction parameter of the second camera device 30, that is, its target view-angle information in the horizontal and vertical directions. For example, when the main control apparatus 10 determines that the first camera device 20 is directed directly behind the vehicle 100, it determines that the fourth direction parameter of the second camera device 30 is also directed directly behind the vehicle 100.
Step S2046: control the second camera device 30 to adjust its view angle based on the fourth shooting parameter. Specifically, the main control apparatus 10 adjusts the shooting view angle of the second camera device 30 according to the fourth shooting parameter so that the shooting angles of the first camera device 20 and the second camera device 30 are the same. For example, the main control apparatus 10 acquires the initial view-angle information of the second camera device 30, calculates movement information for the second camera device 30 from the initial view-angle information and the fourth shooting parameter, and controls the second camera device 30 to move based on that movement information until its shooting view angle is directed directly behind the vehicle 100. In this way, the person and the scenery are shot at the same angle, and the synthesized image more closely resembles a genuine self-portrait.
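Steps S2042–S2046 amount to reading the first camera's direction, adopting it as the roof camera's target, and computing how far the roof camera must rotate. A minimal sketch, assuming pan/tilt angles in degrees; the names (`Direction`, `movement_to`) are hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Direction:
    pan_deg: float   # rotation in the horizontal plane (0 = vehicle front)
    tilt_deg: float  # angle relative to the horizontal

def movement_to(target: Direction, current: Direction):
    # S2046: movement information = difference between the target view
    # angle (fourth direction parameter) and the current view angle.
    pan_delta = (target.pan_deg - current.pan_deg) % 360.0
    if pan_delta > 180.0:
        pan_delta -= 360.0        # rotate the shorter way around
    tilt_delta = target.tilt_deg - current.tilt_deg
    return pan_delta, tilt_delta

# S2042/S2044: the first camera faces directly behind the vehicle
# (pan 180 degrees here), so that becomes the roof camera's target.
first_cam_dir = Direction(pan_deg=180.0, tilt_deg=0.0)
roof_cam_dir = Direction(pan_deg=30.0, tilt_deg=5.0)
pan, tilt = movement_to(first_cam_dir, roof_cam_dir)
```

The wrap-around handling reflects the roof camera's 360-degree horizontal rotation: the controller always turns through the smaller arc toward the target direction.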
In the above embodiment, a plurality of first camera devices 20 and one second camera device 30 are provided. By adjusting the second camera device 30 to match the view angle of the first camera device 20, the image of the person inside the vehicle and the image of the scenery outside the vehicle are captured at consistent angles and then synthesized, which makes it convenient for the user to take a self-portrait while driving or traveling.
Referring to fig. 9, before extracting the in-vehicle person image from the in-vehicle image and combining it with the scene image, the image processing method further includes the following steps.
Step S402: display the in-vehicle images and the scene images for the user to select. Specifically, the touch screen displays a plurality of in-vehicle images and a plurality of scene images for the user to choose from.
Step S404: receive the images to be synthesized selected by the user. Specifically, a first selection request by which the user selects one in-vehicle image and a second selection request by which the user selects one scene image are received, and the selected in-vehicle image and scene image are then marked as a group of images to be combined.
In this embodiment, the user independently selects the in-vehicle image and the scene image that form the images to be synthesized, and these are synthesized to generate the combined image. Offering the user a choice among multiple possible combinations improves user satisfaction.
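Steps S402–S404 and the synthesis that follows can be sketched as below. The function names (`mark_pair`, `composite`) and the toy nested-list images are illustrative assumptions; a real system would obtain the mask from person segmentation of the in-vehicle image.

```python
def mark_pair(in_vehicle_images, scene_images, first_choice, second_choice):
    # S404: record the user's two selection requests as one group of
    # images to be combined.
    return in_vehicle_images[first_choice], scene_images[second_choice]

def composite(person_img, person_mask, scene_img):
    # Where the mask marks a person pixel, take it from the in-vehicle
    # image; elsewhere keep the scene pixel.
    return [
        [p if m else s for p, m, s in zip(p_row, m_row, s_row)]
        for p_row, m_row, s_row in zip(person_img, person_mask, scene_img)
    ]

in_vehicle_images = [[[9, 9], [9, 9]]]   # one tiny 2x2 "in-vehicle image"
scene_images = [[[0, 0], [0, 0]]]        # one tiny 2x2 "scene image"
mask = [[1, 0], [0, 1]]                  # assumed person-segmentation mask

person, scene = mark_pair(in_vehicle_images, scene_images, 0, 0)
combined = composite(person, mask, scene)
```

The per-pixel overlay mirrors the layer-based description in the claims: the person outline forms the layer to be synthesized, which is laid over the selected scene image.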
In some possible embodiments, the first camera device 20 comprises a terminal device having a camera. The terminal device is connected to the main control device 10 in a wired or wireless manner, so the user can hold the terminal device and move around inside the vehicle, acquiring an in-vehicle image containing the in-vehicle person more conveniently and freely.
In some possible embodiments, the first camera device 20 may also comprise only a camera connected to the main control device 10 by wire; the user can move the camera around inside the vehicle 100 by hand to adjust its shooting angle.
Please refer to fig. 10, which is a schematic structural diagram of a camera system 800 according to an embodiment of the present disclosure. The camera system 800 comprises a processor 801, a memory 802, a first camera device 20, and a second camera device 30, wherein the first camera device 20 and the second camera device 30 are used to acquire the in-vehicle image and the scene image, respectively. The memory 802 stores image processing program instructions, and the processor 801 executes those instructions to implement the image processing method.
In some embodiments, the processor 801 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip, and is configured to execute the image processing program instructions stored in the memory 802.
The memory 802 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory 802 may be an internal storage unit of the computer device, such as its hard disk. In other embodiments, the memory 802 may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card. The memory 802 may also include both internal and external storage units of the computer device. The memory 802 can be used not only to store the application software installed on the computer device and various types of data, such as the code implementing the image processing, but also to temporarily store data that has been output or is to be output.
The above embodiments may be implemented wholly or partially in software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized wholly or partially in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only one kind of logical functional division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, in whole or in part, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that the above-mentioned serial numbers of the embodiments of the present application are for description only and do not indicate the relative merits of the embodiments. The terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by "comprising an … …" does not exclude the presence of other like elements in the process, apparatus, article, or method that includes the element.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, to the extent that such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, it is intended that the present application also encompass such modifications and variations.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. An image processing method, characterized in that the image processing method comprises:
controlling a first camera device to shoot the person in the vehicle to obtain an image in the vehicle; the first camera device is positioned in the vehicle, and the in-vehicle image comprises an in-vehicle person image;
controlling a second camera device to shoot the outside of the vehicle to obtain a scene image; the second camera device is arranged outside the vehicle and is used for shooting a scene;
and extracting the image of the person in the vehicle from the image in the vehicle, synthesizing the image of the person in the vehicle and the scenery image, and generating and displaying a synthesized image.
2. The image processing method according to claim 1, further comprising:
acquiring a first shooting parameter of the first camera device during shooting;
determining a second shooting parameter of the second camera device according to the first shooting parameter;
and adjusting the second camera device according to the second shooting parameter.
3. The image processing method according to claim 2, wherein there is at least one first camera device and at least one second camera device, the image processing method comprising:
controlling one of a plurality of first camera devices to shoot the person in the vehicle to acquire a first shooting parameter of the first camera device; the plurality of first camera devices are all installed in the vehicle, and the first shooting parameters comprise first direction parameters;
determining a second direction parameter of the second camera device according to the first direction parameter;
and controlling the second camera device to shoot based on the second direction parameter.
4. The image processing method according to claim 1, wherein the extracting the in-vehicle person image from the in-vehicle image, synthesizing it with the scene image, and generating and displaying a synthesized image specifically comprises:
identifying the images of all persons in the in-vehicle image;
acquiring the outlines of the images of all persons and copying them to a blank layer to form a layer to be synthesized;
and overlaying the outlines of the images of all persons in the layer to be synthesized on the scene image and displaying the result.
5. The image processing method according to claim 1, wherein the first camera includes a terminal device having a camera or a camera fixed in the vehicle.
6. The image processing method according to claim 1, wherein there are a plurality of in-vehicle images and a plurality of scene images, the image processing method further comprising:
displaying the multiple in-vehicle images and the multiple scenery images for a user to select;
receiving an in-vehicle image and a scene image selected by a user;
and extracting an in-vehicle character image from the in-vehicle image selected by the user and combining the in-vehicle character image with a scene image selected by the user.
7. The image processing method according to claim 1, wherein before the controlling the first imaging device to image the in-vehicle person to acquire the in-vehicle image, the image processing method further comprises:
receiving a first optimization instruction, and displaying an optimization icon on a user interface for a user to select;
and receiving a second optimization instruction generated by the optimization icon selected by the user, and configuring optimization parameters corresponding to the second optimization instruction to the first camera device and/or the second camera device.
8. The image processing method according to claim 1, wherein after extracting the in-vehicle person image from the in-vehicle image, synthesizing it with the scene image, and generating and displaying a synthesized image, the image processing method further comprises:
receiving a third optimization instruction, and displaying an optimization icon on a user interface for a user to select;
and receiving a fourth optimization instruction generated by the optimization icon selected by the user, and configuring optimization parameters corresponding to the fourth optimization instruction on the character image and/or the scenery image.
9. An image pickup system, comprising a first image pickup device, a second image pickup device, and a main control apparatus electrically connected to the first image pickup device and the second image pickup device, respectively, the main control apparatus comprising:
a memory for storing a computer program; and
a processor for executing the computer program to implement the image processing method of any one of claims 1 to 8.
10. A vehicle, characterized by comprising a body and the camera system according to claim 9 mounted on the body.
CN202111139830.9A 2021-09-28 2021-09-28 Image processing method, imaging system, and vehicle Pending CN114040091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111139830.9A CN114040091A (en) 2021-09-28 2021-09-28 Image processing method, imaging system, and vehicle

Publications (1)

Publication Number Publication Date
CN114040091A true CN114040091A (en) 2022-02-11

Family

ID=80140281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111139830.9A Pending CN114040091A (en) 2021-09-28 2021-09-28 Image processing method, imaging system, and vehicle

Country Status (1)

Country Link
CN (1) CN114040091A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020124260A1 (en) * 2001-03-02 2002-09-05 Creative Design Group, Inc. Video production system for vehicles
JP2008168714A (en) * 2007-01-10 2008-07-24 Alpine Electronics Inc Vehicle condition monitoring device and image processing device
US20170251163A1 (en) * 2016-02-29 2017-08-31 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for multimedia capture
JP2019186790A (en) * 2018-04-12 2019-10-24 株式会社Jvcケンウッド Video image control device, vehicle photographing device, video image control method, and program
CN111301284A (en) * 2018-12-11 2020-06-19 丰田自动车株式会社 In-vehicle device, program, and vehicle
CN112889271A (en) * 2019-05-29 2021-06-01 华为技术有限公司 Image processing method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination